Have we learnt the lessons of the financial crisis?

Gillian Tett

Gillian Tett looks at why the warning signs were missed and where the next crash may strike
© Stephen McLaren

One day in the early summer of 2007, I received an email out of the blue from an erudite Japanese central banker called Hiroshi Nakaso. “I am somewhat concerned,” he began in typically understated manner, before warning that a financial crisis was about to explode because of problems in the American mortgage and credit market.

I was astonished. That was not because I disagreed with Nakaso’s analysis: by June 2007, I had been writing about the credit sector for a couple of years as the FT’s capital markets editor in London, and was uneasy. But I was surprised that it was Nakaso raising the alarm.

Tucked on the other side of the globe in Tokyo, he was working in the grey, fortress-like building that houses the Bank of Japan. His counterparts in the American and European central banks, close to the subprime markets, were not sounding concerned.

On the contrary, Alan Greenspan, the former chairman of the US Federal Reserve, had spent the previous decade celebrating the (seeming) triumph of western capital markets. And Greenspan’s successor, Ben Bernanke, had just declared that problems in the subprime mortgage market were so “limited” they would not create “significant spillovers”.

So why was Nakaso pessimistic? “Déjà vu”, he replied. A decade earlier, back in 1997, Nakaso had been working at the Bank of Japan when Tokyo plunged into its terrible banking crisis, sparked by $1tn of bad loans left by Japan’s 1980s real estate baburu keiki, or bubble.

We had met in that tumultuous period: I was then a Tokyo correspondent for the FT and we sometimes discussed the crisis over onigiri rice parcels and green tea. By the time I left Japan, in 2000, the crisis had mostly passed; westerners considered it a footnote in global financial history that reflected a peculiarly Japanese failure. Nobody at the Fed or on Wall Street dreamt that American finance might ever suffer the same humiliation as Japan. Nor did the luminaries in the City of London.

But Nakaso had learnt, from bitter experience, about the perils of banker hubris. He knew that government officials often downplayed problems — both to themselves and to voters — and noticed that money markets were behaving in ways that suggested that investors and institutions were losing trust in each other. That created “striking similarities . . . with the early stages of our own financial crisis [in Japan],” he told me. “The crisis management skills of central banks and financial authorities will be truly tested.”

He was right. A few weeks later, in August 2007, the American and European financial systems did start to implode, as a result of the mortgage risk. The denouement did not happen quickly. But by the autumn of 2008 a slow-burn crunch had turned into a full-blown global crisis, epitomised by the dramatic collapse of Lehman Brothers and rescue of AIG. I felt deeply grateful for Nakaso’s insight.

But when I look back at that period now, I also feel frustrated. The International Monetary Fund calculates that between 1970 and 2011, the world suffered 147 banking crises. Some were tiny: few today remember the 1994 bank crisis in Bolivia. Others were huge: the US 2007-08 crisis was so big that it raised public debt by 24 per cent of gross domestic product; for the 1997 Japanese crisis, the debt hit was 42 per cent.

But whatever their statistical size, crises share two things. First, the pre-crisis period is marked by hubris, greed, opacity — and a tunnel vision among financiers that makes it impossible for them to assess risks. Second, when the crisis hits, there is a sudden loss of trust, among investors, governments, institutions or all three. If you want to understand financial crises, then, it pays to remember that the root of the word “credit” is the Latin “credere”, meaning “to believe”: finance does not work without faith. The irony, though, is that too much trust creates bubbles that (almost) inevitably burst.

Though it is 10 years since the Lehman collapse, the questions are still pressing: why do we appear destined to suffer crises over and over again? Why can’t we learn from the past? And what does that mean for where the global system is heading today? After Japan and America, which part of the world will produce the next drama?

Photographs in this story are from ‘The Crash’ by Stephen McLaren, which will be published by Hoxton Mini Press on September 27. The images, shot in London, document the aftermath of the global financial crash. hoxtonminipress.com/products/the-crash


When I started writing about western capital markets in early 2005, I was not expecting another Japan-style shock. Far from it: like many others, I initially thought I was witnessing the financial equivalent of the internet revolution, a wave of wild innovation that would improve all our lives.

It seemed a reasonable bet. For centuries, the craft of banking has revolved around the relatively simple business of collecting deposits from companies, governments or consumers, and then lending the money out. Thus, in the 1980s Japanese bubble, banks lent money to real estate developers; so too in the Savings and Loans boom in America during the same decade.

But while previous generations of bankers had hung on to their loans, like farmers tending a crop, in the late 20th century financiers became more like butchers making sausages. They started to buy loans from anywhere they could (including each other), chop these up, and then repackage them into new instruments with fancy names such as “collateralised debt obligations” (CDOs), which could be sold to investors.

Every innovation revolution needs a sales patter, and this was no exception: the bankers told themselves that this slicing and dicing would make the financial system much safer. The idea was a modern twist on the old adage, “a problem shared is a problem halved”. In the past, banks had gone bust when borrowers defaulted because the pain was concentrated in one place; slicing and dicing spread the pain among so many investors that it would be easier to absorb. Or so the theory went.
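The “problem shared” logic can be sketched as a toy loss waterfall, with hypothetical tranche names and sizes (real CDO structures were far more elaborate, which is precisely what made them so opaque):

```python
def tranche_losses(pool_loss, tranches):
    """Allocate a loan pool's loss to tranches from riskiest to safest.

    tranches: list of (name, size) ordered from first-loss ("equity")
    to last-loss ("senior"). Returns {name: loss_absorbed}.
    A toy illustration only -- real CDO waterfalls are far more involved.
    """
    remaining = pool_loss
    losses = {}
    for name, size in tranches:
        hit = min(remaining, size)  # each tranche absorbs losses up to its size
        losses[name] = hit
        remaining -= hit
    return losses

# A $100m pool split into three tranches; a $12m loss wipes out the
# equity tranche and dents the mezzanine, leaving the senior untouched.
result = tranche_losses(12.0, [("equity", 5.0), ("mezzanine", 15.0), ("senior", 80.0)])
assert result == {"equity": 5.0, "mezzanine": 7.0, "senior": 0.0}
```

The theory was that senior tranches were almost unbreakable; the catch, as the article goes on to explain, was that nobody could see how large the pool losses might actually get.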

But there was a catch. Since the techniques that bankers were using to slice and dice the loans were desperately opaque, it was hard for anyone to know who held the risks. Worse still, because bankers were so excited about repackaging debt, they were stimulating a new mania for making loans, seemingly with government blessing. What all this financial innovation concealed was an old-fashioned credit boom, particularly in American subprime mortgages.

Initially, few seemed concerned about these developments. No surprise, perhaps. This corner of finance was so geeky and tribal that most voters and politicians had scant idea that a revolution was under way. In any case, almost every unsustainable boom starts with the idea that innovators have found a new frontier. In the 18th-century South Sea Bubble, this was a mythical new country; in the 1840s railway mania or 1990s internet bubble, it was technology. In 2005 it was finance itself. “There is a dynamic which pushes banking and the penumbra of banking to excess, over and over again,” says Paul Tucker, the former deputy governor of the Bank of England. “People actually have got fairly good short-term specific memories [but] they just haven’t got good long-term memories, particularly when the technology of banking changes.”

Complexity made this worse. At a conference I attended later that year, hundreds of bankers met in a concrete, mural-filled municipal hall in southern France to discuss the securitisation — aka slicing-and-dicing — game. For two days they unveiled PowerPoint presentations drenched in Greek letters, algorithms and jargon, like a cult speaking a secretive holy language. But as the presentations unfolded, it was clear that very few investors or regulators — or even the bankers themselves — truly understood how the products worked. To the outside world the revolution seemed to be driven by computers; but it was also driven by blind trust.

It was easy to see why the bankers accepted this: the bubble was making them rich. “There was huge complacency on all sides,” says Bill Winters, former co-head of JPMorgan’s investment bank and now chief executive of Standard Chartered. “The governments threw a huge pot of honey into the middle of the table and told everyone we don’t have to think about the bees.”

What was more surprising was that regulators also seemed reluctant to rock the boat. For some, the strength of the economy created a sense of complacency; for others, faith in free-market economics — and pride in western finance — made it inconceivable that Japan’s story might offer a warning lesson. “I originally assumed that people would act in a wholly rational way,” Greenspan recently observed. “That turned out to be wrong.”


Fast-forward to 2007, and another financial conference in Barcelona. By then the mania was intense: every room was filled to capacity with chino-clad financiers; the parties were studded with “champagne salutes”, toasting the innovations; the graphs in the triumphant PowerPoint slides only went up. On the sidelines of the conference, some bankers had formed a reggae-style amateur band, called “Da Leverage”. “It’s a joke,” one banker said. It looked uncannily similar to the baburu days in Tokyo, when people sprinkled gold leaf on their sushi and nobody imagined that real estate prices might fall.

When trust in the system finally did start to crack, a few days after the Barcelona event, the first signs came not in America but in Europe: BNP Paribas in France and IKB bank in Germany each announced problems with their holdings of US mortgage bonds. The technical reasons were complex. But essentially the problem was akin to a food-poisoning scare. As 2007 wore on, it became clear that significant numbers of American borrowers were defaulting on their mortgages; but because debt had been sliced and diced into new products, nobody knew where the poisonous risks sat in the financial food chain. So investors simply shunned all sliced-and-diced products. That caused the markets to seize up.

The authorities tried to rebuild confidence. But shattered trust is hard to restore — particularly when governments or bankers try to sweep problems under the carpet, say with creative accounting tricks. “You can put rotten meat in the freezer to stop it smelling — but it’s still rotten,” one Japanese official joked to me as he watched American attempts to reassure the markets, turning to some of the same tricks the Tokyo government had once tried — and failed — to use a decade before.

It was a hopeless task: the “slicing and dicing” process had left American, European and Asian markets so closely entwined that any panic was highly contagious. Month by month trust drained away: investors lost faith in the value of mortgage bonds, the judgment of rating agencies and the balance sheets of banks. When Lehman Brothers collapsed in September 2008, investors stopped believing that any institution was truly safe. For a terrifying couple of weeks credit vanished in America. The crisis only stopped when eventually the government stepped in: it recapitalised the banks, forcing them to recognise their losses, closed weak lenders, stopped many of the crazy acronym-soaked credit practices and flooded the markets with liquidity. In essence, this meant that Uncle Sam itself was now providing new pillars of faith — and trust — for the system.

The measures shocked many voters — and investors. But not my friends in Tokyo: Japan had eventually used similar moves itself to end its own 1990s crisis; so had many of the governments involved in the other crises that the IMF has identified. The only surprise of 2008, perhaps, was that American officials and investors were so deeply shocked to become just another chapter in this history. Or as Ray Dalio, the founder of the Bridgewater hedge fund (and one of the few who predicted the 2008 crisis), observes: “These crises happen again and again. We can understand the mechanics, if we want. But can we learn?”


A couple of weeks ago I caught up with Nakaso again by phone. Since we first met in Tokyo two decades ago, a lot of financial water has flowed under the bridge. I have moved to America with the FT; he became deputy governor of the Bank of Japan, and then recently retired to join a research group in Tokyo.

“So do you think the financial system is healthier now?” I asked. Nakaso gave — as ever — a carefully balanced answer. Yes, some parts of finance were stronger: after the 2008 crisis the American government recapitalised its banks, and stopped the crazier practices of the credit boom. “The US authorities never thought that the Japanese experience could happen in their own country, but they eventually did learn the lessons [from Japan] and they reacted quicker than the Japanese did,” Nakaso observed. They were also more resolute than the European authorities. “The Europeans have been slower to do the things they needed to do,” says Henry Paulson, the former US Treasury secretary.

But finance is still not entirely “fixed”: non-bank investors have been taking dangerous risks, partly because super-loose monetary policy has made borrowing so cheap. And then there is the issue that the bankers joked about in Barcelona with their “Da Leverage” band: debt. One remarkable feature of the past decade is that between 2007 and 2017, the ratio of global debt to GDP jumped from 179 per cent to 217 per cent, according to the Bank for International Settlements.

This borrowing binge has not occurred in the areas of finance that caused the last crisis, such as subprime loans. Instead, the debt boom is among risky companies and governments, ranging from Turkey (which already faces a financial crunch) to America (where borrowing has accelerated under the administration of Donald Trump). Meanwhile in China, gross public and private debt has doubled in the past decade to about 300 per cent of GDP. This surpasses even Japan’s wild 1980s debt binge.

So will China spark the next crisis? After all, I suggested to Nakaso, there are so many echoes of 1980s Japan: hubris; opacity; ambitious elites and breathtakingly rapid economic change.

“Maybe not,” Nakaso observed. He pointed out that while the debt numbers look worrying, China has some powerful strengths: fat currency reserves and a government that can act in a deft, resolute manner to fight a crisis, without being distracted by voters. “Using a central bank’s balance sheet is problematic for democratic countries.”

China has another crucial weapon: government officials in Beijing are obsessively interested in history; they want to understand other people’s disasters, to work out how to avoid them. “One of the great strengths of China today is that they are very thoughtful,” Timothy Geithner, the US Treasury secretary under Barack Obama, recently observed. Or as Dalio echoes: “The Chinese are great at history — the [government] understands the mechanics of crises and it’s so much easier for them to make decisions, politically, than in the United States.”

“The Chinese have come to see us a number of times to talk about the Japanese financial crisis,” Nakaso observed, over the phone. What advice did the Japanese officials give? “The key message was prepare for the worst — [and] be vigilant. If you see the crisis you have to be quite bold.”

Will it be enough? Can China learn enough lessons to avoid the fate of Japan in 1997 — or America a decade ago? That trillion-dollar question will not be answered for several years. But the one thing that is already crystal clear is that if Beijing does ever succumb to its own boom and bust, the implications for the global economy could be devastating. “The world is much more dependent on China today than it was on Japan in the 1990s,” says Winters. Never before have those financial history books mattered quite so much.


Gillian Tett is the FT’s US managing editor

The Fed’s In A Box And People Are Starting To Notice

It’s long been an article of faith in the sound money community that the Fed, by bailing out every dysfunctional financial entity in sight, would eventually be forced to choose between the deflationary collapse of a mountain of bad debt and the inflationary chaos of a plunging currency.

That generation-defining crossroad is finally in sight.

On one hand, a tight labor market is pushing inflation to levels that normally call for higher interest rates:

[Charts omitted; source: tradingeconomics.com]

Today’s Fed-heads are old enough to remember the 1970s, when failure to get inflation under control produced a decade-long monetary crisis that was only resolved with (not exaggerating here) interest rates approaching 20%.

On the other hand, the yield curve – the difference between long-term and short-term interest rates – is trending towards zero and will, if it keeps falling, invert, meaning that short rates will exceed long. When this has happened in the past, a recession has ensued.
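The mechanics are simple enough to check mechanically. A toy sketch, using hypothetical yields (traders usually watch the two-year/ten-year spread in particular):

```python
def is_inverted(curve):
    """curve: list of (maturity_years, yield_pct) sorted by maturity.

    The curve is inverted if any shorter maturity yields more than the
    next longer one -- i.e. the spread between adjacent points is negative.
    """
    return any(y_short > y_long
               for (_, y_short), (_, y_long) in zip(curve, curve[1:]))

normal   = [(2, 2.5), (5, 2.7), (10, 2.9)]   # long rates above short: healthy
inverted = [(2, 2.8), (5, 2.7), (10, 2.6)]   # short rates exceed long: warning
assert not is_inverted(normal)
assert is_inverted(inverted)
```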

With a system this highly leveraged it is entirely possible that the next recession will threaten the whole fiat currency/globalization/fractional reserve banking world. No one at the Fed wants to preside over that, leading some to view rising inflation as the lesser of two evils. See Atlanta Fed Chief Pledges to Oppose Hike Inverting Yield Curve.

[Chart: the flattening yield curve]

A lot of people seem to be aware of the Fed’s dilemma. Here’s an excerpt from a recent Reuters article on the subject:

Fed’s Powell between a rock and hard place: Ignore the yield curve or tight job market?
Unemployment near a 20-year low screams at the U.S. Federal Reserve to raise interest rates or risk a too-hot economy. The bond market, not far from a state that typically precedes a recession, says not so fast. 
The decision of which to heed looms large when the Fed’s interest-rate setters meet next week. Which path they follow will begin to define whether Chairman Jerome Powell engineers a sustained, recession-free era of full employment, or spoils the party with interest rate increases that prove too much for the economy to swallow. 
New Fed staff research and Powell’s own remarks seem to put more weight on the risks of super-tight labor markets, which could mean a shift up in the Fed’s rate outlook and a tougher tone in its rhetoric. 
Goldman Sachs economists, for instance, contend the Fed’s “optimal” rate path is “well above market pricing under a broad range of assumptions.” They see four increases likely next year, while investors expect only one or two, a significant gap.

Sounds like the Fed is choosing door number one, focusing on inflation and tightening policy to counter it. That’s what it has typically done since the 1970s.

But this time, as I said, the risks are a lot higher. First, higher interest rates will, other things being equal, make the dollar stronger. This is a very big deal for the emerging market countries that have borrowed trillions of US dollars and will now have to pay off those loans in ever-more-expensive currency. Since they’ve borrowed most of these dollars from developed-world banks, that means trillions of dollars of potentially non-performing loans, leading to yet another massive bailout of European and American banks and the financial instability that that implies.

Second — and far more systemically dangerous — corporate, government and consumer debt (especially student debt) are all at record levels. Send the economy back into recession with higher interest rates, and government tax revenues, corporate sales and profits, and personal incomes all fall at the same time interest costs are soaring because of those higher rates.

So from emerging markets to US corporations to Washington’s budget to consumers’ balance sheets, the next recession might be death spirals all the way down. Which means rising interest rates will beget much, much lower rates before too long.

The pot of gold at the end of the rainbow

What to make of cryptocurrencies and blockchains

Cryptocurrencies and their underlying technology, blockchains, have been hyped to the skies. Tim Cross offers a realist’s guide

JOSEPH KENNEDY, John F. Kennedy’s father, supposedly said that when he started getting share tips from his shoeshine boy, he knew it was time to sell. That was in the late 1920s. One investor in cryptocurrencies recalled that remark when he saw advertisements on the London Underground that seemed to suggest that pensioners should invest in bitcoin. “Be More Brenda,” said the poster, featuring a white-haired lady claiming to have bought bitcoin in under ten minutes.

Cryptocurrencies are everywhere. According to one survey, 5% of Americans hold some cryptocurrencies—not bad for a financial product that is only a decade old. Bitcoin is the best-known, and in 2017 the dramatic rise in its price—from $3,000 in September to almost $19,000 by December—made headlines.

Bitcoin was invented in 2008 by a reclusive cryptographer going by the name of Satoshi Nakamoto. He was dissatisfied with the conventional financial system, so he wanted to create an electronic version of cash that did not rely on a central operator and was free from direct control by a government or central bank. The idea took off. These days anyone who wants to get into cryptocurrencies can weigh the relative merits of bitcoin, ether, Monero, Dash, Litecoin and thousands of others. Many of those who bought in early have, on paper at least, made astonishing gains. Bitcoin’s price in 2010 was around 6 American cents. Even at its current price of $6,470, it would provide an early investor with a handsome profit—though not nearly as handsome as if he had sold at the peak last December.

In keeping with their do-it-yourself image, cryptocurrencies have given rise to initial coin offerings (ICOs), a way for cryptocurrency companies to crowdfund themselves. Cash is pouring in. According to one estimate, from Coinschedule, a firm that tracks such things, by early August 706 ICOs had raised almost $18bn from a mix of institutional investors and individuals this year. That compares with just 221 ICOs in the whole of 2017, raising $3.7bn.

Chain reaction

Many of those startups hope to capture the benefits of blockchains, the technology that underlies cryptocurrencies. In essence, a blockchain is a database designed to be distributed among many users, to be immutable, to work without oversight from any central authority and to dispense with the need for its users to trust each other. These qualities, it is argued, make it suitable for a huge variety of new and exciting business applications, which many companies are now trying to explore.
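The core idea of an immutable, trust-minimised database can be sketched as a hash chain: each block's hash covers the previous block's hash, so tampering with any entry breaks every later link. This is a toy illustration of the principle, not a model of any production blockchain (it omits distribution and consensus entirely):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def add_block(chain, data):
    """Append a block whose hash covers both its data and the previous
    block's hash, chaining every entry to all of its predecessors."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    chain.append({"data": data, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def is_valid(chain):
    """Recompute every link; any alteration of history is detected."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else GENESIS
        body = json.dumps({"data": block["data"], "prev": prev}, sort_keys=True)
        if block["prev"] != prev or block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain = []
add_block(chain, "widget shipped")
add_block(chain, "widget cleared customs")
assert is_valid(chain)

chain[0]["data"] = "widget lost"   # tamper with history...
assert not is_valid(chain)          # ...and validation fails
```

In a real system the chain is replicated across many parties, which is what removes the need for them to trust each other: no single participant can quietly rewrite the shared record.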

For example, a blockchain’s immutability and distributed nature would seem perfect for streamlining supply chains. A widget manufacturer in one country, its shipping agent, its customer in another country and customs authorities on both the sending and the receiving end could all use the same database to track the widget. Another promising idea might be to provide an incorruptible record of transactions covering anything from property deeds to the provenance of diamonds.

According to Crunchbase, an industry consultancy, in the first five months of this year a total of more than $1.3bn of venture capital was invested in blockchain startups. KPMG, a large consultancy firm, reckons that the amount of money venture capitalists want to invest in such things outstrips the opportunities to do so. Established companies are rushing to catch up. Technology firms such as IBM, Oracle and Amazon are giving their customers the chance to experiment with blockchains. KPMG offers a service to advise clients on blockchains, as do most of its rivals. Diar, a consultancy specialising in cryptocurrencies, lists dozens of blockchain-related patent applications, filed by companies as diverse as Bank of America, Intel, a chipmaker, RWE, an electricity firm, and British Telecom.

A bit of a let-down

This Technology Quarterly will take a more sceptical view. It will point out that, despite a decade of development, bitcoin has failed in its stated objective: to become a usable currency. Security is poor (according to one estimate, around 14% of the supply of big cryptocurrencies has been compromised); its decentralised nature inevitably makes it slow; there is no consumer protection; and the price is so volatile that not many people would want to use it as a means of exchange for goods and services. Other cryptocurrencies suffer from similar problems. Few merchants accept them.

At the same time the technology’s built-in antipathy to regulation has attracted plenty of people who feel the same way for the wrong reasons. Some cryptocurrencies amount to Ponzi schemes, and unscrupulous ICO operators have swindled investors. America’s authorities are investigating allegations of widespread price manipulation. Social-media firms have banned advertisements for ICOs amid concerns about fraud. Anyone thinking of investing in such instruments will need to do a lot of homework first.

Other drawbacks of bitcoin and suchlike are becoming increasingly apparent, too. The “mining” process required to verify all transactions is hugely power-hungry. Data centres have sprung up from Mongolia to Quebec, collectively consuming as much electricity as entire countries to run a system that cannot manage more than a handful of transactions per second.
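The power hunger comes from proof-of-work: miners brute-force a nonce until the block's hash falls below a difficulty target, and the only way to search is to burn electricity on trillions of guesses. A toy version (bitcoin's real target is astronomically harder, and uses double SHA-256 over a binary block header):

```python
import hashlib
import itertools

def mine(block_data, difficulty):
    """Search for a nonce such that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits. Expected work grows 16x per extra digit,
    which is why real mining consumes so much power."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

# Verifying is cheap (one hash); only the search is expensive.
nonce, digest = mine("toy block", 4)
assert digest.startswith("0000")
assert hashlib.sha256(f"toy block{nonce}".encode()).hexdigest() == digest
```

That asymmetry (hard to find, trivial to check) is what lets thousands of nodes verify each block without redoing the miners' work.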

The potential applications for the underlying blockchain technology look rather more attractive, but progress in developing them has been slower than hoped, and some apparent successes turn out to have been exaggerated. Because they are power-hungry and slow, the blockchains that drive cryptocurrencies have to be remodelled for use in business, which can make them less distinctive and more like other databases. Though the excitement surrounding the technology has provided a useful push to get interested parties around the table and start talking, most blockchain projects are still at the exploratory stage.

Putting a business on a blockchain is as complicated as any other big IT project. Those involved in the planning stage still have to ask the usual questions. What exactly is it meant to do? Why would an individual company want to sign up to such a shared venture? Who will design the system? Who will be in charge if things go wrong? And once a decision is made to build such a system, there will still be a lot of grunt work to be done. All this suggests that, whatever the benefits of blockchains, they will not arrive overnight.

One problem, says Gary Barnett, an analyst at GlobalData, a consultancy, is mutual incomprehension between insiders and outsiders. “There’s a ‘two tribes’ vibe about a lot of this,” he says. Because blockchains and cryptocurrencies are notoriously complicated, non-experts from other industries can end up confused by techno-speak, whereas advocates of the technologies are so excited by the potential that they give insufficient attention to important details of the industries they are aiming to revolutionise.

To understand the pros and cons of cryptocurrencies and blockchains, the best way is to start with bitcoin itself.

The toil to discover ‘new physics’ after Higgs boson breakthrough

Six years after Cern scientists found the ‘god particle’, progress is slow

Clive Cookson

Nothing new has emerged, even after the Large Hadron Collider's operating power was almost doubled in 2015 © AFP

The summer of 2012 was the most glorious in the history of high-energy physics. To huge public acclaim, Cern, the European physics centre near Geneva, announced the discovery of the Higgs boson — frequently called the “god particle” in the media — which gives matter its mass.

Higgs particles emerged from the subatomic debris of trillions of collisions between hydrogen nuclei travelling at almost the speed of light around the world’s most powerful atom smasher, Cern’s $8bn Large Hadron Collider. Their detection after the LHC had been running for less than three years gave a finishing touch to the so-called Standard Model of physics, which provides an internally consistent description of some — but not all — of the fundamental forces and particles in our universe.

After the Higgs triumph, everyone associated with Cern hoped the LHC would move quickly on to make discoveries in what scientists often call “new physics”, helping to explain phenomena beyond the scope of the Standard Model.

One priority is to elucidate the dark matter and dark energy that appear to dominate the universe but remain a complete mystery. Another is to explore the supersymmetry theory, which holds that a panoply of “superparticles” overlays the subatomic particles described in the Standard Model. A third is to understand why the universe consists of matter rather than anti-matter.

Sadly, nothing new has emerged, even after the LHC’s operating power was almost doubled in 2015. Physicists have only been able to dot i’s and cross t’s within the Standard Model. Just this week, for instance, Cern announced the first detection of the predicted decay of the short-lived Higgs boson into a pair of “bottom quarks”. That is big news for Higgs researchers, but there have been no more discoveries to excite people beyond the world of particle physics.

Scientists who study the LHC’s vast data sets — accumulating at about a quadrillion bytes per day — have occasionally seen tantalising hints of new physics. But these have evaporated on further analysis, before reaching the level of statistical significance that would allow the researchers to claim a discovery. Although something exciting might leap out of the data, few physicists expect this to happen in the near future.

The biggest physics announcement since Higgs took place in 2016, far from Cern. The $1bn LIGO facility in the US for the first time detected gravitational waves, generated by a cataclysmic collision between black holes in the distant universe. Further discoveries in the past two years, including a neutron star collision, show that gravitational waves could open up a window into the most energetic events in the universe — perhaps including its birth in the Big Bang.

In retrospect, senior Cern scientists do seem to have been over-enthusiastic about the prospects for new physics as they celebrated the Higgs discovery. While some theories suggested that new particles would soon show up at the LHC, other versions would require collisions with more energy than even the upgraded collider can provide.

But it would be wrong to write off the LHC as a failure, even if it does nothing more than complete the Standard Model.

As Cern physicist Tim Gershon of the University of Warwick puts it, the Higgs can act as a “novel microscope” into the universe on the smallest scales through its interactions with other particles and forces. That is analogous to the idea that new gravitational wave telescopes will probe the largest cosmological scales.

With more than 20 years of life still left in the LHC — and several further upgrades planned to increase both the number and energy of collisions in its 27km ring — the prospects of moving firmly into new scientific territory remain excellent.

In addition to these hardware improvements, some physicists are beginning to change the way they handle LHC data. Until now everyone has concentrated on “targeted searches”, scouring the fallout from trillions of collisions for signs of specific particles predicted by theorists.

For instance, the hypothetical “neutralino” is a candidate to be a constituent of dark matter and to play a role in supersymmetry. New “general searches” would look for any anomaly that does not fit the Standard Model, increasing the chance of discovering something unexpected. Although the information processing challenge is immense, advocates of this approach are relying on artificial intelligence and machine learning to spot novel patterns in the data.

Looking further ahead, the world’s high energy physicists are already planning an even more powerful successor to the LHC, to come on stream in the late 2030s. Candidate designs include the Future Circular Collider with a 100km ring, and the straight International Linear Collider, which is about 40km long.

But it is hard to see governments committing billions of dollars to build a machine on that scale until the LHC comes up with some new physics for it to investigate. The field needs another achievement that can be greeted with as much acclaim as the Higgs discovery.

Microwave Weapons Are Prime Suspect in Ills of U.S. Embassy Workers

Doctors and scientists say microwave strikes may have caused sonic delusions and very real brain damage among embassy staff and family members.

By William J. Broad

U.S. Marines outside the embassy in Havana in February. Diplomats working here reported strange noises and mysterious symptoms that doctors and scientists say may have resulted from strikes with microwave weapons. Credit: Adalberto Roque/Agence France-Presse — Getty Images

During the Cold War, Washington feared that Moscow was seeking to turn microwave radiation into covert weapons of mind control.

More recently, the American military itself sought to develop microwave arms that could invisibly beam painfully loud booms and even spoken words into people’s heads. The aims were to disable attackers and wage psychological warfare.

Now, doctors and scientists say such unconventional weapons may have caused the baffling symptoms and ailments that, starting in late 2016, hit more than three dozen American diplomats and family members in Cuba and China. The Cuban incidents resulted in a diplomatic rupture between Havana and Washington.

The medical team that examined 21 affected diplomats from Cuba made no mention of microwaves in its detailed report published in JAMA in March. But Douglas H. Smith, the study’s lead author and director of the Center for Brain Injury and Repair at the University of Pennsylvania, said in a recent interview that microwaves were now considered a main suspect and that the team was increasingly sure the diplomats had suffered brain injury.

“Everybody was relatively skeptical at first,” he said, “and everyone now agrees there’s something there.” Dr. Smith remarked that the diplomats and doctors jokingly refer to the trauma as the immaculate concussion.

Strikes with microwaves, some experts now argue, more plausibly explain reports of painful sounds, ills and traumas than do other possible culprits — sonic attacks, viral infections and contagious anxiety.

In particular, a growing number of analysts cite an eerie phenomenon known as the Frey effect, named after Allan H. Frey, an American scientist. Long ago, he found that microwaves can trick the brain into perceiving what seem to be ordinary sounds.

The false sensations, the experts say, may account for a defining symptom of the diplomatic incidents — the perception of loud noises, including ringing, buzzing and grinding. Initially, experts cited those symptoms as evidence of stealthy attacks with sonic weapons.

Members of Jason, a secretive group of elite scientists that helps the federal government assess new threats to national security, say the group has been scrutinizing the diplomatic mystery this summer and weighing possible explanations, including microwaves.


Asked about the microwave theory of the case, the State Department said the investigation had yet to identify the cause or source of the attacks. And the F.B.I. declined to comment on the status of the investigation or any theories.

The microwave idea teems with unanswered questions. Who fired the beams? The Russian government? The Cuban government? A rogue Cuban faction sympathetic to Moscow? And where did the attackers get the unconventional arms?

At his home outside Washington, Mr. Frey, the scientist who uncovered the neural phenomenon, said federal investigators had questioned him on the diplomatic riddle and that microwave radiation was considered a possible cause.

Mr. Frey, now 83, has traveled widely and long served as a contractor and a consultant to a number of federal agencies. He speculated that Cubans aligned with Russia, the nation’s longtime ally, might have launched microwave strikes in attempts to undermine developing ties between Cuba and the United States.

“It’s a possibility,” he said at his kitchen table. “In dictatorships, you often have factions that think nothing of going against the general policy if it suits their needs. I think that’s a perfectly viable explanation.”

Developing a new class of weapons

Allan H. Frey, at his home outside Washington. In 1960, he stumbled on an acoustic effect of microwaves that was eventually named after him. Credit: Alex Wroblewski for The New York Times

Microwaves are ubiquitous in modern life. The short radio waves power radars, cook foods, relay messages and link cellphones to antenna towers. They’re a form of electromagnetic radiation on the same spectrum as light and X-rays, only at the opposite end.

While radio broadcasting can employ waves a mile or more in length, microwaves range in size from roughly a foot to a tiny fraction of an inch. They’re seen as harmless in such everyday uses as microwaving foods. But their diminutive size also enables tight focusing, as when dish antennas turn disorganized rays into concentrated beams.
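The wavelength figures above follow from the basic relation λ = c / f. Taking the conventional microwave band of roughly 1 GHz to 300 GHz as illustrative bounds (an assumption, not a claim from the article), a quick check:

```python
# Speed of light in vacuum, metres per second.
C = 299_792_458.0

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres for a wave of the given frequency,
    via lambda = c / f."""
    return C / freq_hz

low = wavelength_m(1e9)      # ~0.3 m: roughly a foot
high = wavelength_m(300e9)   # ~1 mm: a tiny fraction of an inch
print(f"{low:.3f} m down to {high * 1000:.2f} mm")
```

The computed range, about 30 cm down to about 1 mm, matches the article's "roughly a foot to a tiny fraction of an inch".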

The dimensions of the human head, scientists say, make it a fairly good antenna for picking up microwave signals.

Mr. Frey, a biologist, said he stumbled on the acoustic effect in 1960 while working for General Electric’s Advanced Electronics Center at Cornell University. A man who measured radar signals at a nearby G.E. facility came up to him at a meeting and confided that he could hear the beam’s pulses — zip, zip, zip.

Intrigued, Mr. Frey traveled to the man’s workplace in Syracuse and positioned himself in a radar beam. “Lo,” he recalled, “I could hear it, too.”

Mr. Frey’s resulting papers — reporting that even deaf people could hear the false sounds — founded a new field of study on radiation’s neural impacts. Mr. Frey’s first paper, in 1961, reported that power densities 160 times lower than “the standard maximum safe level for continuous exposure” could induce the sonic delusions.

His second paper, in 1962, pinpointed the brain’s receptor site as the temporal lobes, which extend beneath the temples. Each lobe bears a small region — the auditory cortex — that processes nerve signals from the outer and inner ears.

Investigators raced to confirm and extend Mr. Frey’s findings. At first they named the phenomenon after him, but eventually called it the microwave auditory effect and, in time, more generally, radio-frequency hearing.

The Soviets took notice. Not long after his initial discoveries, Mr. Frey said, he was invited by the Soviet Academy of Sciences to visit and lecture. Toward the end, in a surprise, he was taken outside Moscow to a military base surrounded by armed guards and barbed-wire fences.

“They had me visiting the various labs and discussing the problems,” including the neural impacts of microwaves, Mr. Frey recalled. “I got an inside look at their classified program.”

Moscow was so intrigued by the prospect of mind control that it adopted a special terminology for the overall class of envisioned arms, calling them psychophysical and psychotronic.

Soviet research on microwaves for “internal sound perception,” the Defense Intelligence Agency warned in 1976, showed great promise for “disrupting the behavior patterns of military or diplomatic personnel.”

Furtively, globally, the threat grew.

The National Security Agency gave Mark S. Zaid, a Washington lawyer who routinely gets security clearances to discuss classified matters, a statement on how a foreign power built a weapon “designed to bathe a target’s living quarters in microwaves, causing numerous physical effects, including a damaged nervous system.”

Mr. Zaid said an N.S.A. client of his who traveled to that country watched in disbelief as his nervous system later unraveled, starting with control of his fingers.

The high-pitched chirping that diplomats heard while working at the Consulate General of the United States in Guangzhou, China, might be explained by a phenomenon known as the Frey effect — radio-frequency hearing. Credit: Lam Yik Fei for The New York Times

Washington, too, foresaw new kinds of arms.

In Albuquerque, New Mexico, Air Force scientists sought to beam comprehensible speech into the heads of adversaries. Their novel approach won a patent in 2002, and an update in 2003. Both were assigned to the Air Force secretary, helping limit the idea’s dissemination.

The lead inventor said the research team had “experimentally demonstrated” that the “signal is intelligible.” As for the invention’s uses, an Air Force disclosure form listed the first application as “Psychological Warfare.”

The Navy sought to paralyze. The Frey effect was to induce sounds powerful enough to cause painful discomfort and, if needed, leave targets unable to move. The weapon, the Navy noted, would have a “low probability of fatalities or permanent injuries.”

In a twist, the 2003 contract was awarded to microwave experts who had emigrated to the United States from Russia and Ukraine.

It is unknown if Washington deploys such arms. But the Pentagon built a related weapon known as the Active Denial System, hailing it in a video. It fires an invisible beam meant to deter mobs and attackers with fiery sensations.

Russia, China and many European states are seen as having the know-how to make basic microwave weapons that can debilitate, sow noise or even kill. Advanced powers, experts say, might accomplish more nuanced aims such as beaming spoken words into people’s heads. Only intelligence agencies know which nations actually possess and use such unfamiliar arms.

The basic weapon might look like a satellite dish. In theory, such a device might be hand-held or mounted in a van, car, boat or helicopter. Microwave arms are seen as typically working over relatively short distances — across the length of a few rooms or blocks. High-powered ones might be able to fire beams across several football fields, or even for several miles.

The episode in Cuba

The Soviet collapse in 1991 cut Russia’s main ties to Cuba, a longtime ally just 90 miles from the United States. The shaky economy forced Moscow to stop providing Havana with large amounts of oil and other aid.

Vladimir Putin, as Russia’s president and prime minister, sought to recover the economic, political and strategic clout that the Soviets had lost. In December 2000, months after the start of his first presidential term, Mr. Putin flew to the island nation. It was the first visit by a Soviet or Russian leader since the Cold War.

He also sought to resurrect Soviet work on psychoactive arms. In 2012, he declared that Russia would pursue “new instruments for achieving political and strategic goals,” including psychophysical weapons.

In July 2014, Mr. Putin again visited Cuba. This time he brought a gift — the cancellation of some $30 billion in Cuban debt. The two nations signed a dozen accords.

A Russian spy ship, the Viktor Leonov, docked in Havana on the eve of reconciliation talks between Cuba and the United States in early 2015, and did so again in subsequent years. Moscow and Havana grew so close that in late 2016, the two nations signed a sweeping pact on defense and technology cooperation.

Raul Castro, president of Cuba, with Vladimir Putin, Russia’s president, at a welcoming ceremony for Mr. Putin in Havana in 2014. Credit: Ismael Francisco/Associated Press

In Havana’s harbor, men fishing near the Russian warship Viktor Leonov in 2015. Credit: Ramon Espinosa/Associated Press

As a candidate, Donald Trump faulted the Obama administration’s normalization policy as “a very weak agreement” and threatened to scrap it on reaching the White House. Weeks after he won the election, in late November 2016, the American embassy in Havana found itself battling a mysterious crisis.

Diplomats and their families recounted high-pitched sounds in homes and hotel rooms at times intense enough to incapacitate. Long-term, the symptoms included nausea, crushing headaches, fatigue, dizziness, sleep problems and hearing loss.

The State Department filed diplomatic protests, and the Cuban government denied involvement. In May 2017, the F.B.I. opened an investigation, its agents beginning to visit Havana half a year after the incidents started. The last major incident hit that August, giving the agents relatively little time to gather clues.

In September 2017, the Trump administration warned travelers away from Cuba and ordered home roughly half the diplomatic personnel.

Rex W. Tillerson, who was then the secretary of state, said the embassy’s staff had been targeted deliberately. But he refrained from blaming Cuba, and federal officials held out the possibility that a third party may have been responsible.

In early October, President Trump expelled 15 Cuban diplomats, producing a chill between the nations. Administration critics said the White House was using the health issue as a pretext to end President Barack Obama’s reconciliation policy.

The day after the expulsions, the Senate Foreign Relations Committee held a closed, top secret hearing on the Cuba situation. Three State Department officials testified, as did an unnamed senior official of the Central Intelligence Agency.

The hypothesis

Beatrice A. Golomb, a medical doctor and professor of medicine at the University of California, San Diego, here in a beachside office, argues that microwave strikes can explain the diplomatic ills. Credit: Tara Pixley for The New York Times

At an open Senate hearing on the Cuba crisis in January, the spooky impact of microwaves on the human brain never came up.

But in a scientific paper that same month, James C. Lin of the University of Illinois, a leading investigator of the Frey effect, described the diplomatic ills as plausibly arising from microwave beams. Dr. Lin is the editor-in-chief of Bioelectromagnetics, a peer-reviewed journal that explores the effects of radio waves and electromagnetic fields on living things.

In his paper, he said high-intensity beams of microwaves could have caused the diplomats to experience not just loud noises but nausea, headaches and vertigo, as well as possible brain-tissue injury. The beams, he added, could be fired covertly, hitting “only the intended target.”

In February, ProPublica in a lengthy investigation mentioned that federal investigators were weighing the microwave theory. Separately, it told of an intriguing find. The wife of a member of the embassy staff, it reported, had looked outside her home after hearing the disturbing sounds and seen a van speeding away.

A dish antenna could fit easily into a small van.

The medical team that studied the Cuba diplomats ascribed the symptoms in the March JAMA study to “an unknown energy source” that was highly directional. Some personnel, it noted, had covered their ears and heads but experienced no sound reduction. The team said the diplomats appeared to have developed signs of concussion without having received any blows to the head.

In May, reports emerged that American diplomats in China had suffered similar traumas. Secretary of State Mike Pompeo called the medical details of the two groups “very similar” and “entirely consistent” with one another. By late June, the State Department had evacuated at least 11 Americans from China.

To date, the most detailed medical case for microwave strikes has been made by Beatrice A. Golomb, a medical doctor and professor of medicine at the University of California, San Diego. In a paper to be published in October in Neural Computation, a peer-reviewed journal of the MIT Press, she lays out potential medical evidence for Cuban microwave strikes.

She compared the symptoms of the diplomats in Cuba to those reported for individuals said to be suffering from radio-frequency sickness. The health responses of the two groups, Dr. Golomb wrote, “conform closely.”

In closing, she argued that “numerous highly specific features” of the diplomatic incidents “fit the hypothesis” of a microwave attack, including the Frey-type production of disturbing sounds.

Scientists still disagree over what hit the diplomats. Last month, JAMA ran four letters critical of the March study, some faulting the report for ruling out mass hysteria.

But Mr. Zaid, the Washington lawyer, who represents eight of the diplomats and family members, said microwave attacks may have injured his clients.

“It’s sort of naïve to think this just started now,” he said. Globally, he added, covert strikes with the potent beams appear to have been going on for decades.

Francisco Palmieri, a State Department official, was asked during the open Senate hearing if “attacks against U.S. personnel in Cuba” had been raised with Moscow.

“That is a very good question,” Mr. Palmieri replied. But addressing it, he added, would require “a classified setting.”

For his part, Mr. Frey says he doubts the case will be solved anytime soon. The novelty of the crisis, its sporadic nature and the foreign setting made it hard for federal investigators to gather clues and draw conclusions, he said, much less file charges.

“Based on what I know,” he remarked, “it will remain a mystery.”

William J. Broad is a science journalist and senior writer. He joined The Times in 1983, and has shared two Pulitzer Prizes with his colleagues, as well as an Emmy Award and a DuPont Award. @WilliamJBroad