The battle of three centuries

The history of central banks

Contemporary criticisms of central banks echo debates from times past
 
 
 
TWENTY years ago next month, the British government gave the Bank of England the freedom to set interest rates. That decision was part of a trend that made central bankers the most powerful financial actors on the planet, not only setting rates but also buying trillions of dollars’ worth of assets, targeting exchange rates and managing the economic cycle.

Although central banks have great independence now, the tide could turn again. Central bankers across the world have been criticised for overstepping their brief, having opined about broader issues (the Reserve Bank of India’s Raghuram Rajan on religious tolerance, the Bank of England’s Mark Carney on climate change). In some countries the fundamentals of monetary policy are under attack: Recep Tayyip Erdogan, the president of Turkey, has berated his central bank because of his belief that higher interest rates cause inflation. And central banks have been widely slated for propping up the financial sector, and denting savers’ incomes, in the wake of the financial crisis of 2007-08.

Such debate is almost as old as central banking itself. Over more than 300 years, the power of central banks has ebbed and flowed as governments have by turns enhanced and restricted their responsibilities in response to economic necessity and intellectual fashion. Governments have asked central banks to pursue several goals at once: stabilising currencies; fighting inflation; safeguarding the financial system; co-ordinating policy with other countries; and reviving economies.

These goals are complex and not always complementary; it makes sense to put experts in charge.

That said, the actions needed to attain them have political consequences, dragging central banks into the democratic debate. In the early decades after American independence, two central banks were founded and folded before the Federal Reserve was established in 1913. Central banks came under attack for their part in the Depression of the 1930s, the inflationary era of the 1960s and 1970s and the credit bubble of the early 2000s.

Bankers to the government
 
The first central banks were created to enhance the financial power of governments. The pioneer was the Sveriges Riksbank, set up as a tool of Swedish financial management in 1668 (the celebration of its tercentenary included the creation of the Nobel prize in economics). But the template was set by the Bank of England, established in 1694 by William III, ruler of both Britain and the Netherlands, in the midst of a war against France. In return for a loan to the crown, the bank gained the right to issue banknotes. Monarchs had always been prone to default—and had the power to prevent creditors from enforcing their rights. But William depended on the support of Parliament, which reflected the interests of those who financed the central bank. The creation of the bank reassured creditors and made it easier and cheaper for the government to borrow.

No one at the time expected these central banks to evolve into the all-powerful institutions of today. But a hint of what was to come lay in the infamous schemes of John Law in France from 1716 to 1720. He persuaded the regent (the king, Louis XV, was an infant) to allow him to establish a national bank, and to decree that all taxes and revenues be paid in its notes. The idea was to relieve the pressure on the indebted monarchy. The bank then assumed the national debt; investors were persuaded to swap the bonds for shares in the Mississippi company, which would exploit France’s American possessions.

One of the earliest speculative manias ensued: the word “millionaire” was coined as the Mississippi shares soared in price. But there were no profits to be had from the colonies and when Law’s schemes collapsed, French citizens developed an enduring suspicion of high finance and paper money. Despite this failure, Law was on to something.

Paper money was a more useful medium of exchange than gold or silver, particularly for large amounts. Private banks might issue notes but they were less trustworthy than those printed by a national bank, backed by a government with tax-raising powers. Because paper money was a handier medium of exchange, people had more chance to trade; and as economic activity grew, government finances improved. Governments also noticed that issuing money for more than its intrinsic value was a nice little earner.

Alexander Hamilton, America’s first treasury secretary, admired Britain’s financial system.

Finances were chaotic in the aftermath of independence: America’s first currency, the Continental, was afflicted by hyperinflation. Hamilton believed that a reformed financial structure, including a central bank, would create a stable currency and a lower cost of debt, making it easier for the economy to flourish.

His opponents argued that the bank would be too powerful and would act on behalf of northern creditors. In “Hamilton”, a hit hip-hop musical, the Thomas Jefferson character declares: “But Hamilton forgets/His plan would have the government assume states’ debts/Now, place your bets as to who that benefits/The very seat of government where Hamilton sits.”

Central banking was one of the great controversies of the new republic’s first half-century.

Hamilton’s bank lasted 20 years, until its charter was allowed to lapse in 1811. A second bank was set up in 1816, but it too was resented by many. Andrew Jackson, a populist president, vetoed the renewal of its charter; it expired in 1836.

Good as gold
 
A suspicion that central banks were likely to favour creditors over debtors was not foolish.

Britain had moved onto the gold standard by accident: around the turn of the 18th century the Royal Mint set the value of gold, relative to silver, higher than it was abroad, and silver flowed overseas. Since Bank of England notes could be exchanged on demand for gold, the bank was in effect committed to maintaining the value of its notes relative to the metal.

By extension, this meant the bank was committed to the stability of sterling as a currency. In turn, the real value of creditors’ assets (bonds and loans) was maintained; on the other side, borrowers had no prospect of seeing debts inflated away.

Gold convertibility was suspended during the Napoleonic wars: government debt and inflation soared. Parliament restored it in 1819, although only by forcing a period of deflation and recession. For the rest of the century, the bank maintained the gold standard with the result that prices barely budged over the long term. But the corollary was that the bank had to raise interest rates to attract foreign capital whenever its gold reserves started to fall. In effect, this loaded the burden of economic adjustment onto workers, through lower wages or higher unemployment. The order of priorities was hardly a surprise when voting was limited to men of property. It was a fine time to be a rentier.

The 19th century saw the emergence of another responsibility for central banks: managing crises.

Capitalism has always been plagued by financial panics in which lenders lose confidence in the creditworthiness of private banks. Trade suffered at these moments as merchants lacked the ability to fund their purchases. In the panic of 1825 the British economy was described as being “within twenty-four hours of a state of barter.” After this crisis, the convention was established that the Bank of England act as “lender of last resort”. Walter Bagehot, an editor of The Economist, defined this doctrine in his book “Lombard Street”, published in 1873: the central bank should lend freely to solvent banks, which could provide collateral, at high rates.

The idea was not universally accepted; a former governor of the Bank of England called it “the most mischievous doctrine ever breathed in the monetary or banking world”. It also involved a potential conflict with a central bank’s other roles. Lending in a crisis meant expanding the money supply. But what if that coincided with a need to restrict the money supply in order to safeguard the currency?

As other countries industrialised in the 19th century, they copied aspects of the British model, including a central bank and the gold standard. That was the pattern in Germany after its unification in 1871.

America was eventually tipped into accepting another central bank by the financial panic of 1907, which was resolved only by the financial acumen of John Pierpont Morgan, the country’s leading banker. It seemed rational to create a lender of last resort that did not depend on one man. Getting a central bank through Congress meant assuaging the old fears of the “eastern money power”. Hence the Fed’s unwieldy structure of regional, privately owned banks and a central, politically appointed board.

Ironically, no sooner had the Fed been created than the global financial structure was shattered by the first world war. Before 1914 central banks had co-operated to keep exchange rates stable. But war placed domestic needs well ahead of any international commitments. No central bank was willing to see gold leave the country and end up in enemy vaults. The Bank of England suspended the right of individuals to convert their notes into bullion; it has never been fully reinstated. In most countries, the war was largely financed by borrowing: central banks resumed their original role as financing arms of governments, and drummed up investor demand for war debt. Monetary expansion and rapid inflation followed.

Interwar failure
 
Reconstructing an international financial system after the war was complicated by the reparations imposed on Germany and by the debts owed to America by the allies. It was hard to co-ordinate policy amid squabbling over repayment schedules. When France and Belgium occupied the Ruhr in 1923 after Germany failed to make payments, the German central bank, the Reichsbank, increased its money-printing, unleashing hyperinflation. Germans have been wary of inflation and central-bank activism ever since.

The mark eventually stabilised and central banks tried to put a version of the gold standard back together. But two things hampered them. First, gold reserves were unevenly distributed, with America and France owning the lion’s share. Britain and Germany, which were less well endowed, were very vulnerable.

Second, European countries had become mass democracies, which made the austere policies needed to stabilise a currency in a crisis harder to push through. The political costs were too great. In Britain the Labour government fell in 1931 when it refused to enact benefit cuts demanded by the Bank of England. Its successor left the gold standard. In Germany Heinrich Brüning, chancellor from 1930 to 1932, slashed spending to deal with the country’s foreign debts but the resulting slump only paved the way for Adolf Hitler.

America was by then the most powerful economy, and the Fed the centrepiece of the interwar financial system (see chart 1). The central bank struggled to balance domestic and international duties. A rate cut in 1927 was designed to make life easier for the Bank of England, which was struggling to hold on to the gold peg it had readopted in 1925. But the cut was criticised for fuelling speculation on Wall Street. The Fed started tightening again in 1928 as the stockmarket kept booming. It may have overdone it.



If central banks struggled to cope in the 1920s, they did even worse in the 1930s. Fixated on exchange rates and inflation, they allowed the money supply to contract sharply. Between 1929 and 1933, 11,000 of America’s 25,000 banks disappeared, taking with them customers’ deposits and a source of lending for farms and firms. The Fed also tightened policy prematurely in 1937, creating another recession.

During the second world war central banks resumed their role from the first: keeping interest rates low and ensuring that governments could borrow to finance military spending. After the war, it became clear that politicians had no desire to see monetary policy tighten again. The result in America was a running battle between presidents and Fed chairmen. Harry Truman pressed William McChesney Martin, who ran the Fed from 1951 to 1970, to keep rates low despite the inflationary consequences of the Korean war. Martin refused. After Truman left office in 1953, he passed Martin in the street and uttered just one word: “Traitor.”

Lyndon Johnson was more forceful. He summoned Martin to his Texas ranch and bellowed: “Boys are dying in Vietnam and Bill Martin doesn’t care.” Typically, Richard Nixon took the bullying furthest, leaking a false story that Arthur Burns, Martin’s successor, was demanding a 50% pay rise. Attacked by the press, Burns retreated from his desire to raise interest rates.

In many other countries, finance ministries played the dominant role in deciding on interest rates, leaving central banks responsible for financial stability and maintaining exchange rates, which were fixed under the Bretton Woods regime. But like the gold standard, the system depended on governments’ willingness to subordinate domestic priorities to the exchange rate.

By 1971 Nixon was unwilling to bear this cost and the Bretton Woods system collapsed.

Currencies floated, inflation took off and, worse still, many countries suffered high unemployment at the same time.

This crisis gave central banks the chance to develop the powers they hold today. Politicians had shown they could not be trusted with monetary discipline: they worried that tightening policy to head off inflation would alienate voters. Milton Friedman, a Chicago economist and Nobel laureate, led an intellectual shift in favour of free markets and controlling the growth of the money supply to keep inflation low. This “monetarist” approach was pursued by Paul Volcker, appointed to head the Fed in 1979. He raised interest rates so steeply that he prompted a recession and doomed Jimmy Carter’s presidential re-election bid in 1980. Farmers protested outside the Fed in Washington, DC; car dealers sent coffins containing the keys of unsold cars. But by the mid-1980s the inflationary spiral seemed to have been broken.

The rise to power
 
In the wake of Mr Volcker’s success, other countries moved towards making central banks more independent, starting with New Zealand in 1989. Britain and Japan followed suit. The European Central Bank (ECB) was independent from its birth in the 1990s, following the example of Germany’s Bundesbank. Many central bankers were asked to target inflation, and left to get on with the job. For a long while, this approach seemed to work perfectly. The period of low inflation and stable economies in the 1990s and early 2000s was known as the “Great Moderation”. Alan Greenspan, Mr Volcker’s successor, was dubbed the “maestro”. Rather than bully him, presidents sought his approbation for their policies.

Nevertheless, the seeds were being sown for today’s attacks on central banks. In the early 1980s financial markets began a long bull run as inflation fell. When markets wobbled, as they did on “Black Monday” in October 1987, the Fed was quick to slash rates. It was trying to avoid the mistakes of the 1930s, when it had been too slow to respond to financial distress. But over time the markets seemed to rely on the Fed stepping in to rescue them—a bet nicknamed the “Greenspan put”, after an option strategy that protects investors from losses. Critics said that central bankers were encouraging speculation.

However, there was no sign that the rapid rise in asset prices was having an effect on consumer inflation. Raising interest rates to deter stockmarket speculation might inflict damage on the wider economy. And although central banks were supposed to ensure overall financial stability, supervision of individual banks was not always in their hands: the Fed shared responsibility with an alphabet soup of other agencies, for example.

When the credit bubble finally burst in 2007 and 2008, central banks were forced to take extraordinary measures: pushing rates down to zero (or even below) and creating money to buy bonds and crush long-term yields (quantitative easing, or QE: see chart 2). As governments tightened fiscal policy from 2010 onwards, it sometimes seemed that central banks were left to revive the global economy alone.

Their response to the crisis has called forth old criticisms. In an echo of Jefferson and Jackson, QE has been attacked for bailing out the banks rather than the heartland economy, for favouring Wall Street rather than Main Street. Some Republicans want the Fed to make policy by following set rules: they deem QE a form of printing money. The ECB has been criticised both for favouring northern European creditors over southern European debtors and for cosseting southern spendthrifts.

And central banks are still left struggling to cope with their many responsibilities. As watchdogs of financial stability, they want banks to have more capital. As guardians of the economy, many would like to see more lending. The two roles are not always easily reconciled.

Perhaps the most cutting criticism they face is that, despite their technocratic expertise, central banks have been repeatedly surprised. They failed to anticipate the collapse of 2007-08 or the euro zone’s debt crisis. The Bank of England’s forecasts of the economic impact of Brexit have so far been wrong. It is hard to justify handing power to unelected technocrats if they fall down on the job.

All of which leaves the future of central banks uncertain. The independence granted them by politicians is not guaranteed. Politicians rely on them in a crisis; when economies recover they chafe at the constraints central banks impose. If history teaches anything, it is that central banks cannot take their powers for granted.


Oops: The Buyback Party Is Over

by: The Heisenberg


- Are you aware of just how important buybacks are for US equities?

- I certainly hope so, because if you look at net equity demand, there simply wouldn't be much (demand) if it weren't for repurchases.

- Here's Goldman's warning that details the extent to which this bid is "plunging".
 
 
I have seen (and participated in) all manner of schemes in my time.
 
The thing about schemes is that you don't want them to be too transparent. Because you know, if the scheme you're running is see-through, it's not much of a scheme, now is it?
 
Well, let me tell you something: the buyback scheme is just about as transparent as schemes get, and yet investors, presumably clinging to the same willful ignorance that keeps them from calling central bank market manipulation what it is, don't seem too inclined to recognize it.
 
Note that a scheme doesn't have to be nefarious. That's not (necessarily) what a scheme is. A scheme is just "a large-scale systematic plan or arrangement for attaining some particular object or putting a particular idea into effect," to quote the dictionary definition.
 
The buyback scheme fits that definition to a T. This is a really simple arrangement. Central banks drive down rates on safe haven assets, forcing investors down the quality ladder. The first place investors turn when government bonds are yielding nothing is corporate credit (NYSEARCA:LQD).
 
That demand drives down borrowing costs, which incentivizes corporate debt issuance. The proceeds from that debt issuance are then used for buybacks. The buybacks artificially inflate per-share earnings (the same bottom line spread across fewer shares). That, in turn, helps to buoy stock prices which, at the end of the day, is good for management's equity-linked compensation.
 
It really is just that simple.
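To put rough numbers on it, here's a purely hypothetical back-of-the-envelope sketch (the figures below are invented for illustration; they don't come from Goldman or SocGen). Earnings don't grow at all, yet earnings per share do:

# Hypothetical illustration: a debt-funded buyback lifts EPS even though
# the underlying business hasn't grown (all figures invented).

net_income = 1_000_000_000        # $1bn of earnings, assumed flat
shares_outstanding = 500_000_000  # share count before the buyback

debt_raised = 2_000_000_000       # $2bn of bonds issued at an assumed 3% coupon
interest_cost = debt_raised * 0.03
share_price = 40.0                # assumed average repurchase price
shares_retired = debt_raised / share_price

eps_before = net_income / shares_outstanding
eps_after = (net_income - interest_cost) / (shares_outstanding - shares_retired)

print(f"EPS before buyback: ${eps_before:.2f}")  # $2.00
print(f"EPS after buyback:  ${eps_after:.2f}")   # roughly $2.09

And notice the role of the central bank bid: the cheaper the debt, the smaller the interest drag and the bigger the per-share boost, which is exactly why suppressed rates grease the whole arrangement.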
 
Now a lot of people will say something like this: "don't let Heisenberg fool you, there's nothing wrong with this."
 
Under normal circumstances, that would be correct. But in today's world, the people who say that are wrong. Because the debt that funds these buybacks is being issued at artificially suppressed rates. So this is just balance sheet leveraging or financial engineering assisted by central banks. And it's creating a leverage problem that looks like this:
 
[Chart: SocGen]
 
Or, if you want some international context, like this:
 
[Chart: SocGen]
 
 
But irrespective of whether you think all this leverage is likely to create a problem when the cycle turns (more on that here), equity investors (NYSEARCA:SPY) need to ask themselves what it means for stock prices if the heretofore insatiable corporate bid dries up.
 
And guess what? It just dried up.
 
On Friday evening, Goldman was out with a harshly worded note the gist of which is, to quote the title of the piece, "management and investor obsession with buybacks fades." Here are some key excerpts (full note here):

Following years of prioritizing repurchases as a use of cash, corporations actually cut annual spending on buybacks by 11% in 2016 and executions YTD have plunged by 20% vs. last year. Meanwhile, authorizations YTD for new programs are proceeding at the slowest pace in five years. 
Experience shows that firms repurchasing shares at extremely high valuations regret those actions when the stock price inevitably de-rates. The median S&P 500 constituent currently trades at the 98th percentile of historical valuation across a variety of metrics. 
The GS Securities Division reports that buyback executions YTD have plunged by more than 20% compared with the similar year-ago period. 
Looking forward, repurchase authorizations also suggest that buyback growth will decelerate. S&P 500 firms have authorized $146 billion in share repurchases YTD, a 15% drop from the comparable point last year and the slowest pace since 2012 (see Exhibit 1).

It should be obvious why that matters, but in case it's not, here's one of my favorite visuals:
 
[Chart: Buybacks (Goldman)]
 
 
See where the bid for stocks has come from for the last half decade?
 
Make of the above what you will, but do note that this is apparently one scheme that looks like it's going to come and go without retail investors ever fully appreciating its impact.
 
Of course, you never know what you've got 'til it's gone.


The Gloves Come Off: 'ETFs Are Weapons Of Mass Destruction'

by: The Heisenberg


- Two managers with $789 million in tow are out with what is quite simply the starkest warning on ETFs to date.

- Their full comments (from an April letter to investors) on the industry are highlighted here.

- This is their conclusion: "The WMDs during the Great Financial Crisis were three-letter words: CDS, CDO, etc. The current WMD is also a three-letter word".

 
Early last month, I wrote a little something called "This Just Doesn't Feel Right."
 
It was about ETFs.
 
As it turns out, that wasn't the first time I've expressed my reservations about the rampant proliferation of passive, low-cost investment vehicles. In fact, maligning ETFs has become something of an obsession for me of late.
 
ETFs are a textbook example of "too much of a good thing." Here's what I mean by "too much" (I've shown these charts before):
 
[Charts: Goldman]
 
 
There's little question that it was a good thing to offer investors an alternative to overpriced, actively-managed funds. To be sure, buying and holding a benchmark for almost nothing in terms of management fees is a far better way for most people to invest their money than trying to pick individual winners and/or paying someone else to pick individual winners.
 
Simply put: Beating the broad market is almost impossible over a sufficiently long investment horizon. So why try?
 
That goes double for periods during which central banks are essentially propping up assets by injecting trillions upon trillions in liquidity.
 
That said, if you step back and think about ETFs in a kind of common sense way, they actually seem like a really horrible idea. They are, for all intents and purposes, derivatives, which by extension means hordes of retail investors are now derivatives traders. Throw in the rising popularity of ETPs and you've also managed to turn retail investors into futures traders (for instance, no VIX ETP actually tracks the VIX itself; they all track VIX futures).

That's bad. And it's made immeasurably worse by the fact that most ETF investors don't realize they're trading derivatives and most ETP investors don't realize they're essentially futures traders.
 
And it gets worse. The creation/redemption mechanism that serves as the scaffolding on which these vehicles are built is far from the streamlined miracle of financial engineering that ETF sponsors would have you believe it is.
 
In fact, for some ETFs (think high-yield funds), it's a liquidity-mismatch nightmare. Investors are given the illusion of liquidity. For instance, you can trade the iShares iBoxx $ High Yield Corporate Bond ETF (NYSEARCA:HYG) all day every day. But that liquidity relies on what Barclays has correctly called "diversifiable flows."
 
You're selling and someone else is buying, eliminating the need for trading in the underlying bonds. But the problem comes in when the flows become unidirectional. If everyone is selling, someone, somewhere is going to have to trade the underlying bonds. Well guess what? The market for those bonds is thin. Which means the potential for a fire sale is high.
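To see why those "diversifiable flows" matter, here's a stylized sketch (hypothetical numbers, not Barclays data): as long as ETF buyers roughly offset ETF sellers, nothing needs to trade in the bond market; once the flows turn one-way, the unmatched remainder has to be met by selling the underlying bonds into that thin market.

# Stylized illustration (hypothetical figures): how much underlying-bond
# selling an ETF's redemptions force once secondary-market buyers dry up.

def forced_bond_sales(etf_sells: float, etf_buys: float) -> float:
    """Selling matched by buying nets out on the exchange; only the
    unmatched remainder has to be redeemed against the underlying bonds."""
    return max(0.0, etf_sells - etf_buys)

# Normal day: flows largely offset ("diversifiable").
print(forced_bond_sales(etf_sells=500e6, etf_buys=480e6))  # $20m of bonds to sell

# Stress day: everyone heads for the exit at once.
print(forced_bond_sales(etf_sells=500e6, etf_buys=50e6))   # $450m into a thin market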
 
Similarly, we learned during the flash-crashing madness that unfolded on the morning of August 24, 2015 that equity ETFs are not immune when it comes to potentially wide gaps between price and NAV.
 
Have a look at the following two charts:
 
[Chart: Goldman]
 
[Chart: SocGen]
 
 
Don't worry so much about the specifics. The first visual purports to show how ETF short interest can rise above 100% without causing problems and the second seeks to trace the source of implicit transaction costs. There's a lot to learn from those, but again, just look at them as examples of how the process you've been made to believe is streamlined and simple is anything but.

Up until 2016, a lot of the criticism of ETFs was confined to the conceptual level. Witness Howard Marks' warning in 2015 that no matter what anyone tells you, "an ETF cannot be more liquid than the underlying assets."
 
More recently, however, the criticism of ETFs has become more pointed. I outlined the case for why S&P ETFs (NYSEARCA:SPY) perpetuate a gross misallocation of capital here earlier this month (aside: some people didn't like that post - at all - but to be honest, it's another one of those "who you gonna believe, Heisenberg and Goldman or the critics on this platform?" moments).
 
And then there was the recent study by researchers at Stanford University and Emory University, which showed that ETFs are making it impossible for investors to "seek alpha" (so to speak).

The list just goes on, and on, and on.
 
Well, in the most recent and perhaps the most hyperbolic attack on the industry yet, Arik Ahitov and Dennis Bryan, who run the $789 million FPA Capital Fund, are out with a truly stark warning, detailed in a letter to investors dated April 7.
 
Bloomberg and a few other sites published some excerpts from it on Thursday, but I've got the whole letter and I wanted to bring to your attention the excerpts about ETFs. Here are the important bits (I'm going to eschew my usual highlights here because frankly, all of it should be highlighted):

Notwithstanding any of the concerns mentioned above, investors appear excited about the future as they continue to pour money into the stock market. They express this excitement by allocating a tremendous amount of capital into index funds and Exchange-Traded Funds (ETFs). Last year, passive funds had $563 billion of inflows, while active funds experienced $326 billion of outflows, according to Morningstar. Active U.S. equity funds manage $3.6 trillion and passive instruments are about to catch them at $3.1 trillion. When we add the $124 billion poured into ETFs in the first two months of 2017, active and passive investments are almost at parity. This does not even include the so-called active managers that tend to hug an index. The long-term trend is very pronounced. Since 2007, $1.2 trillion dollars disappeared from actively managed U.S. domestic equity funds and $1.4 trillion dollars were added to passive strategies. As the number of corporate listings continues to dwindle, more and more ETFs are brought to the marketplace.  
This leads to more ETFs (financial vehicles), some of which use leverage, chasing fewer and fewer actual companies. Financial vehicles using leverage to purchase a shrinking pool of real assets: sound familiar?
The consequence of unrelenting inflows into passive funds is that stocks that are included in a major index receive ongoing support by the indiscriminate purchases made by these funds regardless of a company's fundamentals. The benefits are amplified for companies that are owned by dozens of ETFs and index funds. On the flip side, those unfortunate stocks that are not included in a major index receive the reverse treatment, as active managers that tend to be fully invested are forced to sell shares to meet the onslaught of redemptions they are facing. But the worst fate is saved for those orphan securities that are removed from an index. These stocks face both indiscriminate selling from index funds on their removal date and continued redemption-related selling from actively managed funds. Unfortunately, these buy and sell decisions are entirely disconnected from a company's fundamentals. This potentially sets the stage, should the tables turn, for an exceptionally compelling investment environment where companies with strong fundamentals are available for purchase at cheap valuations for those searching outside of the indices (as we often are). Moreover, as more investors move from active to passive investments, the market for many individual stocks becomes less liquid. With reduced liquidity, we expect increasing volatility in the marketplace.  
Last month Kopin Tan wrote in Barron's, "For weeks, the stretch from 3 p.m. to 4 p.m. became known as the market's happiest hour, since a surge in late-day buying often nudged indexes from the red into the green. This happened because ETFs and passive index funds, unlike actively managed ones, must rebalance by the end of the day to match the benchmarks they track. According to JP Morgan, a whopping 37% of daily New York Stock Exchange trading recently took place in the last 30 minutes of each session. But when the indexes turn down, will this be the unhappiest hour?" 
The weapons of mass destruction during the Great Financial Crisis were three-letter words: CDS (credit default swap), CDO (collateralized debt obligation), etc. The current weapon of mass destruction is also a three-letter word: ETF (exchange-traded fund). When the world decides that there is no need for fundamental research and investors can just blindly purchase index funds and ETFs without any regard to valuation, we say the time to be fearful is now.

Needless to say, that summarizes almost everything I've been saying for almost a year both here and elsewhere.
 
Now as always, I encourage you to draw your own conclusions based on the evidence presented, but I would once again beg you to at least consider that we're talking about some really - really - smart people here. And they've all come to essentially the same conclusion about this industry.
 
Some of it is probably sour grapes from people who have lost AUM. But a lot of it isn't. Believe me, Carl Icahn is doing just fine and doesn't need passive management to die in order to make money, and yet he's saying the very same things.
 
So that's something to think really long, and really hard about over the weekend.


The People vs. Donald Trump

Simon Johnson
Trump and Congress



WASHINGTON, DC – In American politics, the next election is all that matters. Despite the Republicans’ big win in November 2016, US President Donald Trump’s ability to pass legislation still depends on what congressional Republicans expect to see happen in the November 2018 midterm election. Owing to a major shift in public sentiment in the past few months, many Democrats are now convinced that they will win seats, and potentially reclaim control of the House of Representatives.
 
One can already see grassroots activism gaining momentum in congressional districts that would not have seemed competitive just five months ago. For example, in California’s 45th district (in the traditionally conservative Orange County), University of California, Irvine, law professor Dave Min is taking on the incumbent Republican, Mimi Walters. This past November, Walters was reelected with 58.6% of the vote, but her district favored Hillary Clinton over Donald Trump by two percentage points.
 
This kind of House seat can easily flip to the Democrats in 2018, if a candidate like Min can persuade voters that Walters is out of touch – and too close to Trump. So Min has highlighted Walters’ support for Trump’s attempt to “repeal and replace” the Affordable Care Act (“Obamacare”), as well as her backing for his broader budget-cutting agenda. Moreover, her positions on many social issues seem quite distant from those of her constituency.
 
Min’s catchphrase has become “Where’s Mimi?”, because Walters has always seemed to avoid town hall meetings with constituents, even before growing anti-Trump anger made such occasions especially awkward for Republicans. And the anti-Trump protests occurring throughout the country have been sustaining their energy.
 
Indeed, recent special elections in Kansas and Georgia showed that no Republican seat is necessarily safe. In the race for Kansas’s 4th district seat, the Republican candidate Ron Estes won by less than ten percentage points in a constituency that Trump carried by 27 – and only after the national party was forced to mobilize massive resources on his behalf. And in the race for Georgia’s 6th district seat, Jon Ossoff, a Democrat, gained more votes than any other candidate, falling just short of the 50% threshold that he needed to win outright.
 
The Georgia special election will now be decided in a runoff. And yet the ultimate result doesn’t really matter, because the general swing in support away from Republicans is already evident. The Democrats need to flip only 24 seats to regain control of the House. Right now, that seems entirely feasible.
 
Trump, meanwhile, is unwittingly energizing the opposition by doubling down on his policies.
 
He may attempt, yet again, to repeal Obamacare. He is proposing tax cuts for the rich that will significantly increase deficits and the national debt. And he is pursuing various forms of financial deregulation – either through legislation, or by appointing regulators who are prone to capture – that will favor big banks.
 
In the early days of Trump’s presidency, it looked as though he could receive some support from congressional Democrats who were worried about 2018. Now, that dynamic has been completely reversed. Any Democrat who is up for reelection in 2018 will be standing firmly against Trump.
 
Without Democratic support, Trump will have a hard time passing the legislation that he has chosen to define his presidency. If parts of his legislative program do pass, they will become a further source of grievance, and candidates like Min will likely receive more donations. At the same time, if parts of his legislative program fail, Republican incumbents like Walters will look weak and ineffective.
 
To be sure, Republican incumbents will be raising a great deal of money, so the outcome of the 2018 midterm election is not a foregone conclusion. But Democratic fundraising will also be strong, and Democratic challengers in places like California’s 45th district will attract money and volunteers from around the country, not least with the help of new political technologies.
 
Groups such as Credo Action – the advocacy arm of a progressive mobile-phone company that uses its revenues to support five million activists – are already showing the way. Credo Action’s website includes an easy-to-use menu to express one’s support on a range of issues.
 
Likewise, Run for Something is working to fill the pipeline of Democratic candidates at all levels. Flippable is focusing specifically on seats that can be reclaimed (although they might want to add California’s 45th district). And Indivisible is distributing an already widely read guide for resisting Trump, with an emphasis on grassroots advocacy and community organizing.
 
These and many other progressive voter-mobilization efforts overlap in certain ways, and they are all competing for attention. The buzz of new organization and strategies recalls nothing so much as a dynamic start-up industry with many new entrants, which, in a sense, is exactly what the anti-Trump resistance has become.
 
The difference, of course, is that, rather than competing to make money, these organizations are encouraging civic engagement, and trying to get more people to vote for Democrats as a rebuke to Trump. This competitive process is already laying the groundwork for more effective political activism not just in 2018, but also in 2020, when the forces emerging today will seek to disrupt Trump himself.