Covid transmission

Improving ventilation will help curb SARS-CoV-2

There has been a misunderstanding about how the virus spreads

On January 24th 2020 three families, together numbering 21 people, came independently to eat lunch at a restaurant in Guangzhou. 

It was the eve of the Chinese New Year. 

Extra seating had been squeezed in to accommodate more patrons than usual, and these families were crowded onto neighbouring tables along one wall of the windowless room (see plan). 

The largest of them—a party of ten who had arrived the day before from Wuhan—sat around the middle table. 

Later that day, one of their number developed fever and a cough and, at a hospital, was diagnosed with covid-19. 

Within two weeks, ten of the 21 were confirmed as being infected with sars-cov-2.

The families involved had never met and video footage showed they had no close contact during the lunch. 

An initial analysis by the Guangzhou Centre for Disease Control and Prevention proposed that the infection had spread via respiratory “droplets”. 

But medical lore has it that such droplets—defined as particles expelled while breathing that are more than five microns across—cannot travel more than a couple of metres after they have been exhaled. 

And some of those who became infected during the lunch were farther than that from the “index” patient.

It made no sense. 

How could a single infected person transmit the virus to nine others in just an hour when there had been no direct contact between them?

Current thinking

The outbreak at the Guangzhou restaurant was the first recorded “superspreading” event of the pandemic. 

Superspreading is loosely defined as a single person infecting many others in a short space of time. 

More than 2,000 cases of it have now been recorded—in places as varied as slaughterhouses, megachurches, fitness centres and nightclubs—and many scientists argue that it is the main means by which covid-19 is transmitted.

In cracking the puzzle of superspreading, researchers have had to re-evaluate their understanding of sars-cov-2’s transmission. 

Most documented superspreading events have happened indoors and involved large groups gathered in poorly ventilated spaces. 

That points to sars-cov-2 being a virus which travels easily through the air, in contradistinction to the early belief that short-range encounters and infected surfaces were the main risks. 

This, in turn, suggests that paying attention to the need for good ventilation will be important in managing the next phase of the pandemic, as people return to mixing with each other inside homes, offices, gyms, restaurants and other enclosed spaces.

It has taken a long time for public-health experts to acknowledge that covid-19 routinely spreads through the air in this way. 

Social distancing and mask-wearing were recommended with the intention of cutting direct, close-range transmission by virus-carrying droplets of mucus or saliva breathed out by infected individuals. 

The main risk of spreading the illness indirectly was thought to come not from these droplets being carried long distances by air currents, but from their landing on nearby surfaces, where the viruses they harboured might survive for hours, or even days. 

Anyone who touched such an infected surface could then transfer those viruses, via their fingers, to their mouth, eyes or nose. 

This makes sense if sars-cov-2 spreads in the same way as influenza—which was indeed the hypothesis in March 2020, when the World Health Organisation (who) declared the start of the covid-19 pandemic. 

Hence the advice to disinfect surfaces and wash hands frequently.

Doctors did know at the time that not all respiratory particles fall fast. 

Those smaller than five microns can become aerosols, staying aloft for hours and potentially travelling much farther than droplets, or simply accumulating in the air within a closed room. 

Anyone inhaling these aerosols could then become infected. But this was assumed not to matter, because aerosols were thought to be relevant only in specialist medical settings, such as when patients are attached to a ventilator in an intensive-care unit. Intubation, as this process is known, does indeed create aerosols, as the breathing tube is forced down a patient’s trachea. 

But a wider risk was not perceived. The who therefore played down the risks of aerosols, issuing guidance via its Twitter and Facebook pages at the end of March 2020 that the general public need not worry. 

“FACT: #COVID19 is NOT airborne,” it said, adding that any claims to the contrary were “misinformation”.

Physics envy

Researchers outside the medical world, however—especially those who study the physics of particles in the air—felt the evidence pointed in a different direction. 

The Guangzhou restaurant outbreak was an early warning. 

Around the same time, 1,300km across the country in Ningbo, 23 of 68 passengers on a bus fitted with an air-recycling system had been infected during a one-and-a-half-hour journey. 

But the worst known case of superspreading early in the pandemic was American. This happened at a choir practice in Skagit Valley, Washington State, in March 2020.

Of the 61 people present during a two-and-a-half-hour meeting, 53 became infected. 

In all these cases, investigation showed that those infected were not necessarily the people closest to the index patients, as might be expected if transmission had been by droplet or surface contact.

None of this surprised Lidia Morawska, a physicist at the Queensland University of Technology, in Brisbane, Australia. 

She had spent much of her career studying how pollution caused by so-called particulate matter, such as dust and smog, affects air quality. 

After the original sars outbreak, which happened in 2003, she began experiments to show how respiratory particles are generated in people’s throats and then transported through the air.

She demonstrated that received medical wisdom is wrong. 

Because exhaled breath is a moist, hot, turbulent cloud of air, a five-micron-wide droplet released at a height of one and a half metres (about the distance above ground of the average mouth or nose) can easily be carried dozens of metres before settling. 

Also, the generation of respiratory particles is not restricted to medical settings. 

Liquid drops of all sizes—including those defined as aerosols—are continuously shed while people are breathing, talking, sneezing or singing (see chart).

In July 2020 Dr Morawska wanted to bring this work to the attention of public-health agencies. 

She assembled a group of 36 experts on aerosols and air quality to write an open letter outlining their evidence for infection by smaller liquid drops and calling on the who to change its tune on airborne transmission. 

“We appeal to the medical community and to the relevant national and international bodies to recognise the potential for airborne spread of coronavirus disease 2019 (covid-19),” they wrote in Clinical Infectious Diseases. 

“There is significant potential for inhalation exposure to viruses in microscopic respiratory droplets (microdroplets) at short to medium distances (up to several metres, or room scale), and we are advocating for the use of preventive measures to mitigate this route of airborne transmission.” 

More than 200 other researchers from 32 countries also signed the letter.

One signatory was Jose-Luis Jimenez, an atmospheric chemist at the University of Colorado, Boulder. 

He says that the confusion in health circles over whether or not airborne transmission of sars-cov-2 is important can be traced back to medical textbooks that still contain outdated descriptions of how respiratory particles are produced and move.

But the widespread assertion, still stubbornly promulgated by the who, that droplets above five microns in diameter do not stay airborne, but rather settle close to their source, is a dodgy foundation on which to build public-health advice. 

According to Dr Jimenez, physicists have shown that any particle less than 100 microns across can become airborne in the right circumstances. 

All of this matters because hand-washing and social distancing, though they remain important, are not enough to stop an airborne virus spreading, especially indoors. 

Masks will help, by slowing down and partially filtering an infectious person’s exhalations. 

But to keep offices, schools, hospitals, care homes and so on safe also requires improvements in their ventilation.


Under pressure from physicists, the who recently acknowledged that better ventilation should be used to help prevent covid-19’s spread—and in March it published a “roadmap” to that effect. 

But the document fell far short of properly recognising the hazard of airborne transmission and, therefore, the need to control it. 

Despite overwhelming evidence that it happens, the agency still maintained that sars-cov-2 “mainly spreads between people when an infected person is in close contact with another person”.

Others, though, are acting on the new knowledge. 

Martin Bazant, a chemical engineer, and John Bush, a mathematician, both at the Massachusetts Institute of Technology, have devised a way to calculate how long it would be safe to stay within a room that contains an infected person. 

The pair described their model in a paper in a recent issue of the Proceedings of the National Academy of Sciences.

Applied to a typical American school class of 19 pupils and a teacher, the model gives a safe time of 72 minutes after an infected individual enters a naturally ventilated classroom (that is, 72 minutes before the risk of infection becomes unacceptably high). 

This period can, though, be extended in two ways. 

One is by mechanical ventilation of the room, which increases the safe time to 7.2 hours. 

The other is by everyone wearing masks. 

In the absence of mechanical ventilation, mask-wearing increases the safe time to eight hours. 

But the real benefit comes from combining these approaches. 

That pushes the safe time up to 80 hours—almost 14 days if a school day is six hours long. 

Add in intervening weekends and a class wearing masks in a school room with adequate ventilation would thereby be safe for longer than the time it takes to recover from covid-19, which is typically between one and two weeks. 

School transmissions would thus be rare.
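The logic behind such safe-time estimates can be sketched with a simple well-mixed-room calculation. To be clear, this is not Dr Bazant and Dr Bush's actual model, merely a minimal Wells-Riley-style approximation; the quanta emission rate, breathing rate and mask filtration values below are illustrative assumptions, not figures from their paper.

```python
import math

def safe_time_hours(volume_m3, ach, quanta_per_hour=10.0,
                    breathing_m3_per_hour=0.5, mask_pass=1.0,
                    risk_tolerance=0.1):
    """Hours until infection risk for one susceptible occupant reaches
    `risk_tolerance`, given a single infector in a well-mixed room.

    Steady-state airborne 'quanta' concentration: C = q / (ACH * V),
    where ACH is air changes per hour and V the room volume.
    Wells-Riley risk after t hours: P = 1 - exp(-C * B * t), with B the
    breathing rate; masks scale both emission and inhalation by mask_pass.
    """
    q_eff = quanta_per_hour * mask_pass           # emission after mask filtering
    conc = q_eff / (ach * volume_m3)              # quanta per m^3 at steady state
    dose_rate = conc * breathing_m3_per_hour * mask_pass  # quanta inhaled per hour
    # invert P = 1 - exp(-dose_rate * t) for t
    return -math.log(1.0 - risk_tolerance) / dose_rate

# Illustrative 200 m^3 classroom
poor = safe_time_hours(200, ach=0.3)                    # natural ventilation
good = safe_time_hours(200, ach=3.0)                    # mechanical ventilation
masked = safe_time_hours(200, ach=0.3, mask_pass=0.3)   # masks, no ventilation
```

Because the inhaled dose rate is linear in the ventilation rate and quadratic in mask penetration, improving either extends the safe time multiplicatively, which is why the combined scenario in the paper (ventilation plus masks) yields such a long safe period.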

A caveat is that the modelling assumed a classroom with minimal talking, physical activity or singing by the pupils. 

But games lessons would usually be outdoors and singing lessons could be. 

As to too much talking, teachers might welcome an unimpeachable reason to tell pupils to keep quiet in class.

Infection risk will not always be distributed evenly around a room. 

Jiarong Hong, a mechanical engineer at the University of Minnesota, Minneapolis, therefore used computer models to study how aerosols would spread in a classroom, according to the location of an infected individual and the position of nearby fans or air filters. 

Assuming the teacher was infected, and so was releasing virus-laden aerosols at the front of the class, Dr Hong’s modelling shows that placing an air cleaner or extractor fan at the front of the room sets up an airflow which prevents the movement of such aerosols towards the pupils. 

An even better aerosol-cleansing effect is achieved when the fans and filters are elevated above the people in the room. 

This takes advantage of the rising air plumes created by body heat, which mean that exhaled aerosols tend to float upwards. 

Dr Hong’s modelling shows that even small, cheap box fans mounted in this way would do a good job of keeping classrooms safe and preventing aerosols from building up to dangerous levels.

Dr Hong has also modelled the air flow in the Guangzhou restaurant outbreak of January 2020. As the plan shows, he found that the movements of virus-laden aerosols around the three affected families of diners matched the seating positions of the people who eventually became sick. The outbreak occurred because there was no source of external fresh air and a nearby recirculating air conditioner redistributed aerosols from the infected person to the other tables, creating a contaminated bubble of air that was increasingly burdened with viruses over the course of the lunch.

The risk, then, is real. 

But how can the occupants of a room know whether it is well-ventilated? 

Just because a room feels spacious and an air conditioner is operating does not mean the air inside it is clean.

Here, Dr Morawska has a suggestion. 

In a (non-scientific) experiment last year, she took a carbon-dioxide meter into a large, high-ceilinged, air-conditioned restaurant near her home. 

CO2 concentrations can be a useful proxy for clean air. 

Outdoor air contains around 400 parts per million (ppm) of the gas, and people’s exhaled breath contains around 40,000ppm. 

Exhaling into a room therefore gradually raises its CO2 concentration unless the ventilation is good enough to remove the excess.

According to experts on air quality, anything below 500ppm in a room means the ventilation is good. 

At 800ppm, 1% of the air someone is breathing has already been exhaled recently by someone else. 

At 4,400ppm, this rises to 10%, and would be classed as dangerous. 

These sorts of levels are seen only in crowded spaces with poor airflow. 

To keep the risk of covid-19 low, CO2 levels should be well below 700ppm.
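The arithmetic behind those rebreathed-air percentages is a one-line mass balance. Here is a sketch using the round numbers quoted above (roughly 400ppm of CO2 outdoors, roughly 40,000ppm in exhaled breath):

```python
def rebreathed_fraction(co2_ppm, outdoor_ppm=400, exhaled_ppm=40_000):
    """Estimate the fraction of inhaled air that was recently exhaled by
    someone else, from a room's excess CO2 over the outdoor baseline.

    If a fraction f of the air is rebreathed, the room concentration is
    f * exhaled_ppm + (1 - f) * outdoor_ppm; solving for f gives the
    ratio below.
    """
    return (co2_ppm - outdoor_ppm) / (exhaled_ppm - outdoor_ppm)
```

At 800ppm this gives roughly 1% rebreathed air, and at 4,400ppm roughly 10%, consistent with the figures quoted above.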

When Dr Morawska conducted her experiment, the restaurant had ten people in it—far fewer than would normally be allowed—and the CO2 concentration was already 1,000ppm when she arrived. Within an hour it had jumped to 2,000ppm. 

“We continued sitting during the dinner for another hour or so,” she says. 

“So if there was someone infected there, well this could have been a problem.”

Though anecdotal, that tale indicates a serious risk—and one which resonates beyond covid-19. 

All sorts of symptoms, from headaches, fatigue and shortness of breath to skin-irritation, dizziness and nausea, are linked to poor ventilation. 

It has also been connected with more absences from work and lower productivity.

The ventilation measures needed to deal with all this are not difficult, but existing regulations and design standards often have different objectives—particularly, these days, conserving heat and thus reducing energy consumption. 

That often means recirculating air, rather than exchanging it with fresh air from the outside world. (An exception is passenger aircraft, which refresh cabin air frequently.)

In situations where it is not possible to reduce health risks by ventilation alone—for example, places like nightclubs, where there are lots of people crowded together, or gyms, where they are breathing heavily—air filtration could easily be incorporated into ventilation systems. 

Air could also be disinfected, using germicidal ultraviolet lamps placed within air-conditioning systems or near ceilings in rooms.

All change

And then there is public awareness. 

“Before this pandemic it was completely socially acceptable to come to the office coughing, sneezing, spreading viruses around,” says Dr Morawska. 

“No one would say anything—even people educated to understand how infections are transmitted.”

That insouciance must be corrected, she says. 

The who must acknowledge the need to control airborne pathogens and governments must agree and enforce comprehensive standards for indoor air quality that keep people healthy. 

One way to ensure compliance might be to issue ventilation certificates for buildings, similar to the food-hygiene certificates which already exist for restaurants. 

Occupants should also be given information about air quality routinely, she adds, through the use of monitors and sensors that can display a room’s carbon-dioxide levels or other relevant measures.

For new buildings this should not cost much extra, though replacing existing ventilation systems might be costly. 

But not as costly as covid-19 has been. 

And if improvements in indoor air quality also reduced absenteeism and improved productivity, those gains might cover that cost. 

“Although detailed economic analyses remain to be done,” wrote Dr Morawska in a recent edition of Science, “the existing evidence suggests that controlling airborne infections can cost society less than it would to bear them.”

Some gripes about groupthink

‘While dissent leads to better decisions, it also leads to discomfort. Disagreement is valuable but agreement feels so much more comfortable’

Tim Harford 


In his acid parliamentary testimony last week, Dominic Cummings, the prime minister’s former chief adviser, blamed a lot of different people and things for the UK’s failure to fight Covid-19 — including “groupthink”.

Groupthink is unlikely to fight back. 

It already has a terrible reputation, not helped by its Orwellian ring, and the term is used so often that I begin to fear that we have groupthink about groupthink.

So let’s step back. Groupthink was made famous in a 1972 book by psychologist Irving Janis. 

He was fascinated by the Bay of Pigs fiasco in 1961, in which a group of perfectly intelligent people in John F Kennedy’s administration made a series of perfectly ridiculous decisions to support a botched coup in Cuba. 

How had that happened? 

How can groups of smart people do such stupid things?

An illuminating metaphor from Scott Page, author of The Difference, a book about the power of diversity, is that of the cognitive toolbox. 

A good toolbox is not the same thing as a toolbox full of good tools: two dozen top-quality hammers will not do the job. 

Instead, what’s needed is variety: a hammer, pliers, a saw, a choice of screwdrivers and more.

This is obvious enough and, in principle, it should be obvious for decision-making too: a group needs a range of ideas, skills, experience and perspectives. 

Yet when you put three hammers on a hiring committee, they are likely to hire another hammer. 

This “homophily” — hanging out with people like ourselves — is the original sin of group decision-making, and there is no mystery as to how it happens.

But things get worse. 

One problem, investigated by Cass Sunstein and Reid Hastie in their book Wiser, is that groups intensify existing biases. 

One study looked at group discussions about then-controversial topics (climate change, same-sex marriage, affirmative action) by groups in left-leaning Boulder, Colorado, and in right-leaning Colorado Springs.

Each group contained six individuals with a range of views, but after discussing those views with each other, the Boulder groups bunched sharply to the left and the Colorado Springs groups bunched similarly to the right, becoming both more extreme and more uniform within the group. 

In some cases, the emergent view of the group was more extreme than the prior view of any single member.

One reason for this is that when surrounded with fellow travellers, people became more confident in their own views. 

They felt reassured by the support of others.

Meanwhile, people with contrary views tended to stay silent. 

Few people enjoy being publicly outnumbered. 

As a result, a false consensus emerged, with potential dissenters censoring themselves and the rest of the group gaining a misplaced sense of unanimity.


The Colorado experiments studied polarisation but this is not just a problem of polarisation. 

Groups tend to seek common ground on any subject from politics to the weather, a fact revealed by “hidden profile” psychology experiments. 

In such experiments, groups are given a task (for example, to choose the best candidate for a job) and each member of the group is given different pieces of information.

One might hope that each individual would share everything they knew, but instead what tends to happen is that people focus, redundantly, on what everybody already knows, rather than unearthing facts known to only one individual. 

The result is a decision-making disaster.

These “hidden profile” studies point to the heart of the problem: group discussions aren’t just about sharing information and making wise decisions. 

They are about cohesion — or, at least, finding common ground to chat about.

Reading Charlan Nemeth’s No! The Power of Disagreement In A World That Wants To Get Along, one theme is that while dissent leads to better, more robust decisions, it also leads to discomfort and even distress. 

Disagreement is valuable but agreement feels so much more comfortable.

There is no shortage of solutions to the problem of groupthink, but to list them is to understand why they are often overlooked. 

The first and simplest is to embrace decision-making processes that require disagreement: appoint a “devil’s advocate” whose job is to be a contrarian, or practise “red-teaming”, with an internal group whose task is to play the role of hostile actors (hackers, invaders or simply critics) and to find vulnerabilities. 

The evidence suggests that red-teaming works better than having a devil’s advocate, perhaps because dissent needs strength in numbers.

A more fundamental reform is to ensure that there is a real diversity of skills, experience and perspectives in the room: the screwdrivers and the saws as well as the hammers. 

This seems to be murderously hard.

When it comes to social interaction, the aphorism is wrong: opposites do not attract. 

We unconsciously surround ourselves with like-minded people.

Indeed, the process is not always unconscious. 

Boris Johnson’s cabinet could have contained Greg Clark and Jeremy Hunt, the two senior Conservative backbenchers who chair the committees to which Dominic Cummings gave his evidence about groupthink. 

But it does not. Why? 

Because they disagree with him too often.

The right groups, with the right processes, can make excellent decisions. 

But most of us don’t join groups to make better decisions. 

We join them because we want to belong. 

Groupthink persists because groupthink feels good.

Tim Harford has been named Wincott Foundation journalist of the year 2020. His new book is “How to Make the World Add Up” 

Why China Wants the Spratlys

The islands could help Beijing secure the Western Pacific without actually fighting for it.

By: Phillip Orchard

For several months, China has been relentlessly asserting its control over disputed parts of the South China Sea. 

Hundreds of vessels in the Chinese maritime militia – lightly armed fishing fleets that don't so much fish as serve as foot soldiers for the Chinese navy – have been squatting around various reefs in the disputed Spratly archipelago near Philippine- and Vietnamese-controlled areas. 

The Chinese coast guard, meanwhile, has apparently been blocking and harassing Philippine patrols around Scarborough Shoal, a flashpoint reef farther north. 

The United States reportedly warned China in 2015 that turning Scarborough into another artificial island was a red line.

The show of force illustrates how the seven artificial island bases that China has built in the Spratlys since 2013 can be put to good use in scenarios short of war. 

The surveillance, communications and logistics capabilities they house make it easier than ever for legions of Chinese vessels to occupy disputed areas in perpetuity, swiftly overwhelm interlopers, and assert de facto control over the waters and marine resources claimed by others. 

But in an actual, prolonged conflict with the U.S. and its allies, the tactical value of the Spratlys would rapidly diminish. 

And if China's “salami-slicing” campaign pushes the Philippines, in particular, to throw in its lot fully with the U.S., China's biggest strategic challenge – the threat of a U.S.-led blockade – will become an order of magnitude more difficult to solve.

In other words, China is playing a risky game. But it’s worth it, evidently, given the role Beijing thinks the islands could play in the early days of a major conflict. 

More important is the role they could play in cementing Chinese dominance of the Western Pacific without fighting at all.

Missile Fodder or More?

China has been finding ways to assert its claims over disputed reefs across the South China Sea for decades, particularly around the Paracels in the northwestern part of the sea, where China fought a pair of brief engagements with Vietnam and today has more than 20 military outposts. 

But it took things to another level beginning around 2013 when dredgers started showing up at a handful of reefs in the Spratlys, located in southeastern waters off the shores of the Philippines, Malaysia and the Mekong Delta in Vietnam. 

By 2016, China had transformed seven partially or fully submerged features into artificial islands, creating some 3,200 acres of new land. 

Today, these are full-fledged military bases, featuring lengthy runways, deepwater ports, barracks, underground ammunition stores, and a vast array of radar and communications technologies.

And yet, over the course of a long kinetic conflict between China and the U.S., the value of China's bases in the Spratlys would be negligible. Indeed, there's a good chance the bases wouldn’t still be there at all.

There are two main problems that limit their value in a hot war. 

One, they’re not located particularly close to the chokepoints that the U.S. and its allies would try to control in order to close off vital Chinese sea lanes. 

Those are located in the Bashi Channel and Miyako and Tokara straits in the East China Sea and, to the south, the Malacca, Sunda and Lombok straits. 

Though China could use the Spratlys to threaten naval movements within the South China Sea itself, control of these waters is unlikely to be the main focus of the conflict. 

The ability to move forces from and through the Spratlys to break a blockade would be only marginally more advantageous than moving them from the Chinese mainland.

Second, and more fundamentally, the bases would be sitting ducks for enemy missiles. 

Long before major operations moved to the South China Sea – a stage where, theoretically, the bases could aid Chinese carrier operations, augment Chinese cyber and electronic warfare offensives, enhance China's edge in battlespace awareness, and/or facilitate asymmetric swarm attacks – their runways, communications and surveillance infrastructure would likely be rendered unusable. 

Whatever survived would be extraordinarily difficult to resupply. (There are also major questions about the islands' ability to withstand major environmental degradation, though it seems unwise to bet against China's engineering savvy and willpower.)

Fiery Cross, Mischief and Subi reefs are not, in other words, the Spratly equivalents of Guadalcanal and Iwo Jima. 

There would be no brutal island-hopping campaign where control of the islands was a vital objective. They'd simply need to be neutralized. 

This could be done from a distance, no bloody amphibious operations required.

But focusing on their (lack of) value in the latter stages of a major conflict would be misguided. 

A lot would have to go wrong for Chinese forces for things to even get to that point. Rather, in an all-out war with the U.S. and its allies, the main value of the Spratlys would come into play in the very initial stages.

At the most basic level, they would give the U.S. seven more targets to deal with sooner or later. 

This could matter a lot, depending on how much the U.S. succeeds in its current efforts to find places to deploy land-based missiles across the first island chain. 

The U.S. has few such options right now. 

It would have to rely on longer-range (and thus less precise) missiles fired from Guam and other positions farther afield, on the limited arsenals of allies willing to join the fight (which cannot be guaranteed), and on whatever munitions U.S. warships can carry, which is of course limited. 

So, at worst, the Spratly bases are useful missile sponges.  

At best, they dramatically expand the range of Chinese missiles and air power. 

They give China an ability to try to flood contested spaces with sheer numbers, overwhelming the relatively small number of U.S. or allied assets that would be available in the early days of a conflict, with everything from fishing boats and coast guard vessels to swarms of unmanned aerial and maritime vehicles to, of course, actual warships and fighter jets. 

Most important, they give China tremendous capacity to dominate the information domain, with the potential to effectively blind or even cripple U.S. assets if caught unaware.


 Each of these elements would be invaluable in helping China maintain the element of surprise, gain the initiative from the outset of a conflict, and put the U.S. back on its heels. 

Even if the bases don't last long after that, the tactical advantages of being able to set the terms in a conflict from the first shot can be decisive.

Winning Without Fighting

The main reasons for China's commandeering of the Spratlys have little to do with worst-case scenario war-planning, though. 

Realistically, war with the U.S. is still only a remote possibility, even if the apocalyptic risks of one are such that both sides have to prepare as if it may be inevitable. 

There’s a reason there hasn't been a great power war since the 1940s. 

The damage from one today could be even more catastrophic.

Some of China's objectives in the Spratlys, then, are much more straightforward. Shipping to and from China through the South China Sea is so vital to the Chinese economy that any threat of prolonged disruption could be existential to the Communist Party’s hold on power. 

It makes perfect sense for Beijing to want to have coast guard bases as close as possible to ward off potential threats before they become a reality. 

There are also basic material gains at stake. 

For example, fishing stocks in Chinese waters have been deteriorating at an alarming rate, and China has a lot of mouths to feed. 

China's apparent determination to dictate who can fish and where may be motivated more by distrust of its neighbors' ability to prevent overfishing, or simply by the desire to grab what it can for itself; either way, resource security is a major driver of Chinese policy.

Even so, China can't ignore a potential conflict with the U.S. altogether. 

And it's making a big bet that dominating the Spratlys will ultimately help it prevail not only in the South China Sea but in the broader Western Pacific as well.

Put simply, China needs the Philippines on its side to truly deal with the threat of a U.S. and allied blockade on its access to the Western Pacific. 

So China has been employing a carrot-and-stick strategy to win Manila over, pairing military pressure with hefty amounts of aid and investment. 

In truth, it's always been more of a “bitter pill” strategy, using aid and investment to make it somewhat easier for Manila to swallow the reality that it doesn't exactly have a lot of options in the matter. 

China has made it clear that it's not about to back off, and, in doing so, has deepened suspicions in Manila that the U.S. isn't about to risk war with China in defense of Philippine material or sovereign interests. 

Eventually, China believes, Manila will conclude that its best long-term option is to flip into the Chinese camp. 

And in the meantime, Beijing can use its leverage to at least weaken the U.S.-Philippine alliance and discourage Manila from, for example, implementing a key 2014 agreement with the U.S. giving it rotational access to several Philippine bases. 

On these fronts, the strategy has been pretty successful.

Still, antagonizing the Philippines at a time when the U.S. is shedding distractions elsewhere in the world and getting serious about stitching together a robust alliance structure in East Asia is a risky bet, to say the least. 

The Duterte administration, which has bent over backward to stay on friendly terms with China, appears to be reaching something of a breaking point, with Foreign Minister Teddy Locsin Jr. lashing out at Beijing in a manner typically reserved for those on the president's list of enemies (drug dealers, Barack Obama, the pope, et al.). 

It's not unreasonable to wonder, then, if China is at risk of forcing Manila to go all-in on its alliance with the U.S. and greenlight, say, the reestablishment of U.S. bases throughout the country. 

This would be a monumental strategic setback.

But China pushed its chips in long ago. 

The Spratlys are too valuable in too many ways to back off now. 

Beijing is done biding time and hiding its capabilities. 

Under Xi Jinping, the strategic calculation is starting to look a whole lot like action for action's sake.

Will inflation thwart the US economic recovery?

Market Questions is the FT’s guide to the week ahead

FT reporters

US inflation worries are set to escalate this week with the release of core consumer price data, while GDP figures are expected to send mixed signals about the UK's economic recovery. Inflation is also coming into focus for investors in China.

Will inflation thwart the US economic recovery?

Corporate America is poised to deliver the best earnings season in years, raising hopes of a swift economic recovery. 

But executives are flagging one key pressure point during analyst calls: rising prices.

For months, Wall Street investors have been fretting over inflation as the nation’s economy reopens from the pandemic. 

On Wednesday, the US labour department’s core consumer price index for April may provide the clearest evidence yet that those price pressures are becoming a mounting threat to the recovery.

For March, core CPI, which excludes the more volatile prices of food and energy, remained relatively tame, at a 1.6 per cent year-on-year increase. 

But economists polled by Bloomberg expect that number to jump to 2.3 per cent for April, which would be the highest level since the coronavirus pandemic took hold in the US.

“Near term, it’s clear demand is going to be running ahead for some time, and I think that’s why we see inflation going up,” said Jean Boivin, head of BlackRock’s investment institute.

Bond investors wary of inflation backed out of US government bonds this year, sending the 10-year yield as high as 1.75 per cent in March. 

Since then, the yield has backed down to below 1.57 per cent. 

A drab set of US jobs figures on Friday could tame inflation nerves further. 

But rising consumer prices still rank high on worry lists, particularly if they gather forcefully enough to test the Federal Reserve’s resolve for plentiful stimulus.

“We keep hearing about risks, whether it’s valuations, whether it’s inflation, whether it’s rising [coronavirus] cases again,” said Esty Dwek, head of global market strategy at Natixis. 

“There are enough concerns that are persisting on investors’ minds that we haven’t fallen into complacency.” Aziza Kasumov

How quickly is the UK economy rebounding?

With the Covid-19 vaccination programme well advanced and restrictions easing, the UK economy is rebounding from its latest pandemic-induced decline at the start of the year. 

But for investors, the question is how quickly. 

On Wednesday, economic growth data for March will provide the latest clue. 

Economists surveyed by Bloomberg expect growth to accelerate to 1.3 per cent month on month from February’s marginal 0.4 per cent rise, partly as a result of a boost from the education sector as schools reopened. 

Quarterly data, however, are forecast to show a decline of 1.6 per cent.

Last week, the Bank of England upgraded its growth forecasts for 2021, bringing forward the point at which it expects the economy to recover its pre-pandemic peak to the final quarter of this year. 

Any further pick-up in growth could prompt investors to price in an earlier unwinding of the BoE’s bond-buying programme, or even interest rate rises (markets currently anticipate two by the end of 2024), which would boost sterling and knock gilt prices.

“Data is going to have a pretty direct read-through for Bank of England policy,” said George Buckley, chief UK economist at Nomura. 

“The stronger the recovery, the less of a need there is for as much monetary policy support.”

The BoE has “stepped back from guiding and hand-holding the market” leaving the door open for a shift in the implied path of interest rates, according to Bank of America UK economist Robert Wood.

“As economic data improve in the coming months, we continue to see little to hold back hawkish market moves,” Wood said. Tommy Stubbington

How worried should investors be about China inflation?

Chinese inflation is back in focus for investors as factory gate prices surge on the back of the nation’s rapid recovery from the coronavirus pandemic.

April’s consumer price index, out on Tuesday, is expected to show a rise of 1 per cent year on year, according to economists surveyed by Bloomberg. 

Meanwhile, producer price data out the same day are forecast to show a 6.5 per cent rise over the same period, the fastest pace of growth since 2017.

Producer prices, which measure factory gate rates, have been pushed higher in part because of the increasing cost of oil and an array of other commodities. 

In January, producer prices rose for the first time since the start of the coronavirus crisis, and in March they leapt 4.4 per cent.

Consumer price inflation entered negative territory for the first time in more than a decade in November. 

However, the measure was heavily skewed by pork prices, which make up a large share of the basket of goods and spiked in 2019 because of swine fever.

Expected rises in inflation figures come as China’s economy continues to grow. 

By the end of last year, gross domestic product had already exceeded its pre-pandemic level, prompting a shift in interest rate expectations and credit tightening in the months that followed.

Policymakers and advisers in the country have warned about the risk of asset bubbles, especially in the property sector.

Larry Hu at Macquarie said inflation was “the question on everyone’s mind”, but suggested it should be mild in China this year. 

“Look at the US, not China, for the inflation upside risk,” he noted. Thomas Hale

Global food prices post biggest jump in decade

40% surge in cost of agricultural commodities raises spectre of accelerating inflation

Emiko Terazono and Judith Evans in London 

The rise in world market prices will further increase food price inflation, especially among poorer countries reliant on imports for staples

Global food prices have surged by the biggest margin in a decade, as one closely watched index jumped 40 per cent in May, heightening fears that the inflation initially stoked by pandemic disruption was accelerating.

The year-on-year rise in the UN Food and Agriculture Organization’s monthly index was the largest jump since 2011, as commodity prices surged.

The higher inflation will hit poorer countries reliant on imports for staple goods. 

For richer countries, the cost of raw ingredients accounts for only part of the overall price paid for products at supermarkets and restaurants. 

However, the rise in raw material prices has been so steep that big companies such as Nestlé and Coca-Cola have said they would pass on any increases.

Economists and analysts also warned that the return of eating out as lockdowns lifted around the world would add to price pressures.

“The decline in eating out was not totally compensated with eating at home, but as people start to go to restaurants again, you will see food prices rise,” said Abdolreza Abbassian, senior economist at the FAO.

The cost of labour, transport and shipping is expected to push prices higher in the coming months.

“The rise in the transport cost base with oil price increases and shipping bottlenecks, there is a lot of upward price pressure in the system,” said Caroline Bain at Capital Economics.

In the US, consumer goods companies were likely to face a 6.1 per cent rise in input costs this year compared with 0.7 per cent in 2020, according to analysts at Bernstein. 

Tyson Foods, a leading meat company, said raw material prices were more than 15 per cent higher, and flagged rising costs in logistics, packaging and labour.

Bruno Monteyne, analyst at Bernstein, said the bout of inflation would increase polarisation in consumer markets between premium products aimed at wealthier consumers — many of whom have additional cash to spare after being stuck at home in the pandemic — and cheaper brands catering for more stretched buyers.

“If you are already buying organic, fair trade ready-to-eat mangoes, you’re probably not going to be too worried about it,” he said, adding that higher ingredient costs would hit mass-market brands.

Cash-strapped consumers will trade down to private-label products and, if inflation is prolonged, move to shopping in cheaper stores, he added.

In the UK and Europe, food inflation has been relatively contained, but the British Retail Consortium this week warned that prices could rise in the second half of the year. 

Helen Dickinson, BRC chief executive, said: “While prices are currently falling thanks to fierce competition between retailers, there are many cost pressures around the corner.”

Global consumer price inflation for food reached 6.3 per cent in 2020, up from 4.6 per cent in 2019, according to the FAO, as the pandemic played havoc with global supply chains, affecting the production and distribution of food.

South America, with food price inflation of 21 per cent, Africa and South Asia, with 12 per cent, and Oceania, with 8 per cent, were among the most affected regions.

The latest jump in food commodity prices reflected China’s soaring appetite for grain and soyabeans, a severe drought in Brazil, a big exporter of corn and soyabeans, and growing demand for soyabean oil for biodiesel, analysts said.

“China has continued to buy, but with Brazil’s drought proving to be more severe than expected, everyone has to pray that the weather in the US is going to be good,” said Abbassian.

The increases will severely impact countries that depend on overseas producers for their staples. 

In West Africa, the price of staples is up 40 per cent on the five-year average, with countries such as Nigeria experiencing food inflation of 23 per cent, the highest level in 15 years, according to the UN World Food Programme.

The WFP also warned that vulnerable countries face soaring prices, including Lebanon, where food inflation soared to 400 per cent last year on the back of a currency crisis, the pandemic and the after-effects of the Beirut port explosion.

Food price inflation in Lebanon is still more than 200 per cent. 

Countries such as Syria and Sudan are also struggling with food inflation of more than 200 per cent, the WFP said.

Additional reporting by Chloe Cornish in Beirut

Biden’s Budget Signal to the Fed

His economists assume negative real interest rates for a decade.

By The Editorial Board


A President’s budget is the clearest signal of his priorities, and we keep finding gems buried in the White House document released under holiday weekend cover on Friday. 

Well, maybe gems isn’t the right word, but consider the budget’s not-so-subtle message to the Federal Reserve about interest rates.

The budget includes economic assumptions for growth, prices, interest rates and unemployment. 

In Table S-9 on page 60 we learn that White House economists are assuming negative real interest rates all the way through the end of the 10-year budget window in 2031.

The budget anticipates that inflation will remain contained between 2.1% and 2.3% a year through 2031. 

This may be wishful thinking, but let it go. 

The budget also assumes that the average annual interest rate on three-month Treasury bills will remain below the inflation rate for all 10 years. 

The rate will be 0.2% in 2022 and never go higher than 2.2%, which it reaches in 2031.
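The arithmetic behind a "negative real rate" is simply the nominal rate minus assumed inflation. A minimal sketch using only the endpoint figures quoted above; pairing 2022 with the low end of the budget's 2.1–2.3% inflation band and 2031 with the high end is an illustrative assumption, not a figure from Table S-9:

```python
# Real interest rate ~= nominal rate minus inflation (Fisher approximation).
# The two data points below are the endpoints quoted in the article;
# intermediate years are the White House's and are not reproduced here.
assumptions = {
    # year: (three-month T-bill rate %, assumed CPI inflation %)
    2022: (0.2, 2.1),
    2031: (2.2, 2.3),
}
for year, (nominal, inflation) in sorted(assumptions.items()):
    real = round(nominal - inflation, 1)
    print(f"{year}: real T-bill rate ~ {real}%")  # negative in both years
```

Under these pairings the real T-bill rate is roughly -1.9% in 2022 and still fractionally negative in 2031, which is the pattern the editorial describes.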

The budget also assumes that the 10-year Treasury note will stay below the inflation rate through 2024. 

The White House has the 10-year Treasury never rising above 2.8% during the decade, even as the unemployment rate hits 3.8% in 2023 and stays there through 2031.

In other words, the White House is assuming that the Federal Reserve will maintain negative real interest rates despite eight years of full employment. 

Yet somehow inflation will remain contained.

These estimates aren’t engraved in stone. 

Events happen, circumstances change, and economic assumptions will change with them. 

The real meaning of these White House estimates is the message President Biden is sending to the Fed: His agenda requires that the central bank keep interest rates abnormally low, even below the inflation rate, for the entire Biden Presidency.

Democrats need the Fed to keep interest rates low so interest on the soaring federal debt doesn’t begin to crowd out the rest of the budget. 

They also need the Fed to monetize the new debt via bond purchases, lest market demand for U.S. debt fall and interest rates need to rise to attract enough foreign and domestic buyers. 

Fed Chairman Jerome Powell has saluted over the last year and keeps buying $80 billion in Treasurys each month.

Fed officials insist in speeches that they’re independent of the Treasury and will do what it takes to avoid an inflation breakout. 

But the numbers and message in the Biden budget explain why that won’t be as easy politically as Fed officials aver.

Machiavelli in the Ruins of Greensill Capital

The collapse of Greensill Capital has a strong historical parallel in the decline and fall of the medieval Medici bank after it went too far with the financial innovations of its own day. The lessons from both failures are clear, but most likely will be forgotten until the next financial meltdown.

Harold James

PRINCETON – The collapse of Greensill Capital, a London-based financial services firm, offers a timely but costly warning about a number of contemporary trends. 

Clearly, we should be wary of the hype around financial innovation. 

But we also need to shine a brighter spotlight on the shady world of corporate lobbying, the regulation of risk, and other issues at the intersection of capitalism and government.

Greensill reportedly tried to use former British Prime Minister David Cameron to entice the Saudi government to press investors to contribute more funds to SoftBank so that SoftBank could increase its backing of Greensill. 

Then, following the start of the pandemic, Cameron reportedly lobbied for Greensill to secure access to an emergency loan scheme, and pressed the National Health Service to adopt an app owned by Greensill to pay NHS staff daily instead of monthly.

But the fact that Greensill was peddling an advanced-payment app doesn’t mean that it was a genuine financial innovator. 

In reality, its financing activities were largely limited to a narrowly focused steel business: Indian businessman Sanjeev Gupta’s GFG Alliance.

Why would government officials from Saudi Arabia to the United Kingdom place their trust in such a company?

The superficial answer is that Greensill was selling glitzy new financing models that promised to help the Saudis modernize their handling of the annual Mecca pilgrimage, and the NHS to streamline its payroll.

And yet, these offerings were nothing new. 

The great push of financialization that began in the late twentieth century has long been driven by securitization, which allows firms to devise an endless array of “new” products. 

The process involves bundling together a diverse set of assets to create an apparently safer or more transparent set, which can then be carved up and remarketed according to various criteria. 

Ultimately, types and levels of risk can thus be disaggregated and sold to those willing to hold them.

After the 2008 financial crisis, securitization was blamed for amplifying, rather than reducing, risk, and the euphoria around the process duly evaporated. 

But the practice didn’t end. 

In the case of the financier Lex Greensill’s eponymous firm, it was used to package and sell loans to major financial institutions like Credit Suisse.

Greensill was a major player in the niche market of supply-chain finance, whereby a lender advances payments to a major buyer’s suppliers in exchange for a fee. 
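The mechanics are easy to sketch. In the stylised transaction below (all figures hypothetical), the financier pays the supplier early, less a fee, and later collects the full invoice from the buyer:

```python
# Illustrative supply-chain finance transaction (all figures hypothetical).
invoice_value = 100_000          # supplier's invoice to the large buyer, in dollars
fee = invoice_value // 100       # financier charges a 1% fee for early payment

# The financier pays the supplier immediately, less its fee...
paid_to_supplier = invoice_value - fee
# ...and later collects the full invoice value from the buyer at maturity.
print(f"supplier receives now: ${paid_to_supplier:,}")   # $99,000
print(f"financier earns at maturity: ${fee:,}")          # $1,000
```

The supplier gets cash months early, the buyer keeps its payment terms, and the financier earns the spread; the trouble starts when, as at Greensill, those future receivables are bundled and sold on as if they were low-risk assets.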

Supply-chain finance will be unfamiliar to most readers; but it isn’t new. 

In fact, historians see it as the oldest application of finance, born at a time when merchants generally lacked the money to pay for their shipments until their inventories had been distributed or sold. 

Financing the transaction by giving the merchant a credit on the security of an invoice or a promise to pay thus satisfied an unmet need. 

There is evidence that the process was familiar even to the ancient Mesopotamians.

Above all, though, supply-chain finance was the centerpiece of late medieval and early modern banking and finance. 

Here, the decisive innovation was the bill of exchange, a document requiring payment of a specific amount at some point in the future. 

Merchants would buy a bill of exchange and send it to the country from which they wanted to import, where it could be used to secure the ownership of a product – say, a bale of wool – by another merchant who would then present the bill to the agent of the original issuer.

Crucially, this process eliminated the need for transporting large quantities of cash.

But it also acted as an early credit instrument: since the issuers of bills often worked with large customer deposits, they could engage in other banking activities at the same time.

Greensill – but especially his credulous creditors (above all SoftBank and Credit Suisse) – would have done well to study some of these medieval banks, the best documented being those based in Florence. 

Among these, the most famous to this day is the bank of the House of Medici, whose members were also patrons of the arts, politicians, and even popes.

In The Rise and Decline of the Medici Bank: 1397-1494, the twentieth-century Flemish historian Raymond de Roover explains, among other things, how the bank managed branches not just in Rome, Venice, Naples, and Milan, but also, through partnership agreements, in Avignon, Geneva, Bruges, and London.

The branches in Bruges and London were the most problematic, in part because of the geographical distance, but also because they needed to interact constantly with strong and unpredictable states. 

As a consequence, the Medici bank’s local agents needed to lobby intensively, offering concessions to rulers in exchange for favors, such as permission to export the goods (wool) whose trade they were financing. 

This led them to lend more and more to governments, which used the money for their own purposes.

But the Medici bank’s financing of the English Wars of the Roses introduced a critical financial weakness. 

Because the London branch needed to lend ever-larger amounts to Edward IV for wars and dowries to secure political alliances, the first branch agent gave up in disgust. 

But he was replaced by Gherardo Canigiani, who became a devoted follower of the king – at the expense of the bank’s interests. 

In the end, the partnership had to be wound up in bankruptcy.

The Medici bank – which failed completely a few years later – served as an exemplary lesson for Niccolò Machiavelli, whose History of Florence attributed the bank’s downfall to the fact that its branch managers had started to act as princes themselves. 

This story was then re-purposed by Adam Smith to show how government enterprises (as the Medici bank had become) were corrupt and wasteful, allowing Lorenzo the Magnificent to use “the revenue of the state of which he had the disposal.”

Shakespeare also offered a word of caution. 

The Merchant of Venice begins with the merchant Antonio boasting about the extent of his portfolio diversification. “My ventures are not in one bottom trusted, / Nor to one place; nor is my whole estate / Upon the fortune of this present year: / Therefore my merchandise makes me not sad.” 

But soon enough, his ships and their cargo are lost at sea, leaving him with nothing to repay his debts.

The lessons of the intersection of finance and politics were not lost on Machiavelli, Shakespeare, and Smith. 

Will we continue to ignore them?

Harold James is Professor of History and International Affairs at Princeton University and a senior fellow at the Center for International Governance Innovation. A specialist on German economic history and on globalization, he is a co-author of The Euro and The Battle of Ideas, and the author of The Creation and Destruction of Value: The Globalization Cycle, Krupp: A History of the Legendary German Firm, Making the European Monetary Union, and the forthcoming The War of Words.