Federal Reserve debates tougher regulation to prevent asset bubbles

Officials worry that low interest-rate policies could encourage excessive risk-taking

James Politi in Washington

Eric Rosengren, president of the Boston Fed, says the US central bank lacks the tools to ‘stop firms and households’ from taking on ‘excessive leverage’ © Reuters


Senior Federal Reserve officials are calling for tougher financial regulation to prevent the US central bank’s low interest-rate policies from giving rise to excessive risk-taking and asset bubbles in the markets.

The push reflects concerns that the Fed’s ultra-loose monetary policy for struggling families and businesses risks becoming a double-edged sword, encouraging behaviour detrimental to economic recovery and creating pressure for additional bailouts.

It also highlights fears at the Fed that the financial system remains vulnerable to new shocks, despite massive central bank intervention this year to stabilise markets and the economy during the pandemic.

Eric Rosengren, president of the Federal Reserve Bank of Boston, told the Financial Times that the Fed lacked sufficient tools to “stop firms and households” from taking on “excessive leverage” and called for a “rethink” on “financial stability” issues in the US.

“If you want to follow a monetary policy . . . that applies low interest rates for a long time, you want robust financial supervisory authority in order to be able to restrict the amount of excessive risk-taking occurring at the same time,” he said. “[Otherwise] you’re much more likely to get into a situation where the interest rates can be low for long but be counterproductive.”

Neel Kashkari, the president of the Minneapolis Fed and a US Treasury official during the global financial crisis, told the FT that stricter regulation was needed to stave off repeated market interventions by the central bank — such as the kind made last decade and again this year.

“I don’t know what the best policy solution is, but I know we can’t just keep doing what we’ve been doing,” he said. “As soon as there’s a risk that hits, everybody flees and the Federal Reserve has to step in and bail out that market, and that’s crazy. And we need to take a hard look at that.”

One of the fears among some Fed officials is that the US central bank could be forced to raise interest rates earlier than it would like if financial sector risks are not kept under control and dangerous asset bubbles emerge.

Lael Brainard, a Fed governor, said in a speech last month that expectations of extended low interest rates were “conducive to increasing risk appetite, reach-for-yield behaviour and incentives for leverage”, thereby boosting “imbalances” in the US financial system.

She said it was “vital to use macroprudential” tools — meaning rules designed to curb risks — “as well as standard prudential tools as the first line of defence in order to allow monetary policy to remain focused on achieving maximum employment and 2 per cent average inflation.”

Mr Kashkari, who has called for higher capital requirements at large banks, said the financial system needed to be “fundamentally more resilient, both [in] the banking sector and the non-banking sector” during a period of turbulence like the current one.

“For me, monetary policy is a very poor tool to address financial stability risks,” he said.

Although no big regulatory changes are expected in the near term, the debate over tougher financial regulation could gather pace if Democrat Joe Biden wins the White House in November, making the political environment more favourable towards action.

Michael Barr, the dean of public policy at the University of Michigan business school and a former US Treasury official under Barack Obama, said: “You want to make sure that you’re using all the tools you have on financial stability, so that you don’t put the Fed in the position of cutting off growth.”

So far, the top officials at the Fed, including chairman Jay Powell and Randy Quarles, the vice-chair responsible for financial supervision, have signalled that they were comfortable with the central bank’s regulatory posture leading into the Covid crisis, reckoning that banks were healthy enough to survive the shock of the pandemic and support the US economy.

The Fed has been examining the turbulence that unfolded in the US Treasury and short-term funding markets in March, leading to a hefty rescue from the central bank, to see what fixes might be warranted.

The central bank has also capped dividend payments and banned stock buybacks at the largest banks to the end of the year, though Ms Brainard — a possible Treasury secretary in a Biden administration — argued that this did not go far enough and a full dividend ban was warranted.

Other Fed officials, however, argue that the tougher financial regulation being considered by the central bank could curb the ability of banks to dispense vital credit and reduce market turmoil in times of crisis.

Mary Daly, president of the Federal Reserve Bank of San Francisco, this week told reporters that she did not see much connection between loose monetary policy and financial risks. During the Fed’s rate cuts of 2019, which were triggered by trade tensions, financial stability concerns were raised but never materialised, she said.

“We should always watch for excess risk-taking, we should always watch for excess leverage,” she said. “But we shouldn’t regulate off the fear that could happen, and at the expense of so many millions of Americans who need the employment and the income and the access to the economy.”

Buttonwood

The tale of Iran’s stockmarket bubble is familiar—but also strange

There is more to it than savings having nowhere else to go


Around five years ago emerging-market investors were greatly excited by the prospects for Iran’s stockmarket. The lifting of sanctions in exchange for limits on its nuclear programme was in sight. 

Exports of oil, Iran’s main source of foreign currency, would increase, boosting the economy. And there was the hope that Iran might soon be included in equity indices tracked by global investors. A surge of buying by foreigners would surely follow.

Things turned out rather differently. Sanctions were lifted in 2016, only to be reimposed by America in 2018. Oil exports have dwindled. Iran did have a stockmarket boom, but it came much later than anyone had expected and foreigners played almost no part in it. And it was more bubble than boom. In a few short months this year, share prices rocketed (see chart).

The bubble has since popped. Share prices have fallen by around a quarter since early August and by a lot more in hard-currency terms. One rationale for the frantic stock-buying was Iranians’ desire to hedge against the lost purchasing power of the rial; sanctions make holding dollars offshore rather tricky. It is tempting to conclude that bad things happen when savings have nowhere else to go but the stockmarket. But Iran’s story is more complicated than that.


Start with the economy. The collapse of oil production that followed the reimposition of sanctions caused GDP to shrink by around 6% last year. A lack of oil revenue has hurt government finances and undermined the rial.

But there is more to Iran’s economy than oil, says Maciej Wojtal of Amtelon Capital, a Europe-based fund that invests in Iran. It has a domestic market of 83m mostly young people, roughly the same size as Turkey’s.

A range of industries, from white goods and cars to personal care and processed food, serve that market. Sanctions have not entirely suffocated trade. Iran’s neighbours—notably Iraq and Afghanistan—lack its industrial base, and so import a lot from it. Its neighbourhood includes Pakistan, Turkey and the UAE. 

A steadily weaker rial has over time boosted non-oil export industries, such as petrochemicals, metals, engineering services—and even chocolate and pastries, says Ramin Rabii of Turquoise Partners, a financial-services group.

The stockmarket reflects this industrial diversity. There are hundreds of stocks in a variety of sectors. And because the crude-oil business is state-owned, the market better reflects the non-oil economy.

The market had a good run last year thanks in large part to improving export earnings. People took notice. Here was an asset class that acted as a hedge against the rial’s diminishing value. The scene was set for this year’s melt-up in share prices.

The trigger was the coronavirus pandemic, which hit Iran particularly hard. Industry stopped. Hard currency became even scarcer. 

Since January the rial has lost more than half its value against the dollar, according to Bonbast, which tracks the unofficial currency market. 

Iran’s central bank flooded the banking system with liquidity to try to limit the economic damage. Inflation picked up to almost 35%. Money soon found its way into asset prices, including shares.

Iran’s government even raised a chunk of revenue through a big IPO in April. As in America, retail investors flooded in. The number of people active in the stockmarket went from 700,000 to 5m in a matter of months, says Mr Rabii.

There are shades of China in 2015 here: a fear of devaluation; a weak economy; and trapped capital feeding a stockmarket frenzy, cheered on by the government. That ended badly. But stocks were far from the only hedge in town in Iran. Property prices in Tehran have surged since American sanctions were reimposed, says Mr Rabii. 

Just about any hedge against a weaker rial has multiplied in value, from gold coins to second-hand cars. Others have noted a hint of bubble dynamics in hard currencies in Iran. The more they go up, the greater the temptation to keep hoarding them. Scarcity begets scarcity.

For Iran’s government, a stockmarket boom was the least bad way to absorb excess liquidity, says Esfandyar Batmanghelidj of Bourse & Bazaar, a London-based think-tank that focuses on Iran’s economy. True, people who piled in at its peak are now nursing hefty losses.

But a stockmarket bubble beats one in hard currencies, second-hand cars or property, all of which add to the cost of living. Optimists will point out that China survived its frenzy; the value of its stockmarkets has just hit a new high.

The impact of AI on business and society

New technology can bring huge benefits to society but first we have to take away the fear

Lucy Colback 


Artificial intelligence, or AI, has long been the object of excitement and fear.

In July, the Financial Times Future Forum think-tank convened a panel of experts to discuss the realities of AI — what it can and cannot do, and what it may mean for the future.

Entitled “The Impact of Artificial Intelligence on Business and Society”, the event, hosted by John Thornhill, the innovation editor of the FT, featured Kriti Sharma, founder of AI for Good UK, Michael Wooldridge, professor of computer science at Oxford university, and Vivienne Ming, co-founder of Socos Labs.

For the purposes of the discussion, AI was defined as “any machine that does things a brain can do”. Intelligent machines under that definition still have many limitations: we are a long way from the sophisticated cyborgs depicted in the Terminator films.

Such machines are not yet self-aware and they cannot understand context, especially in language. Operationally, too, they are limited by the historical data from which they learn, and restricted to functioning within set parameters.

Rose Luckin, professor at University College London Knowledge Lab and author of Machine Learning and Human Intelligence, points out that AlphaGo, the computer that beat a professional (human) player of Go, the board game, cannot diagnose cancer or drive a car. A surgeon might be able to do all of those things.

Intelligent machines are, therefore, unlikely to unseat humans in the near future but they will come into their own as a valuable tool. Because of developments in neural technology and data collection, as well as increased computing power, AI will augment and streamline many human activities.

It will take over repetitive manufacturing processes and perform routine tasks involving language and pattern recognition, as well as assist in medical diagnoses and treatment. Used properly, intelligent machines can improve outcomes for products and services.

Dealing with data

Adoption of AI has been particularly widespread in the financial services sector. Forrester, the research group, notes that about two-thirds of finance firms have implemented or are adding AI in areas from customer insights to IT efficiencies. Data analysis is already used to detect fraud.

Jamie Dimon, chief executive of JPMorgan, noted in 2018 that as well as having the potential to provide about $150m of benefits each year, machine-learning systems allowed for the approval of 1m “good” customers who might otherwise have been declined, while an equal number of fraudulent applications were turned down.

AI is also useful in stock market analysis. Schroders, the fund manager, says such systems are basically “sophisticated pattern-recognition methods” yet they can nevertheless add value and improve productivity.

Schroders uses AI in tools that forecast the performance of companies after initial public offerings, monitor directors’ trades and analyse the language in transcripts of meetings.

Like many other businesses, the company also employs AI to automate low-judgment, repetitive back-office processes.

Interestingly, Schroders believes we may already be at “peak AI”, since the technology is “difficult to implement in a meaningful way for many of the high-complexity tasks that a typical knowledge worker does as part of their job”.

Professor Richard Susskind, author of Online Courts and the Future of Justice and technology adviser to the Lord Chief Justice of England and Wales, observes that “professionals invariably see much greater scope for the use of AI in professions other than their own”.

Elsewhere in professional services, law firms have applied language recognition to assess contracts, streamline redaction and sift materials for review in litigation cases, as well as to analyse judgments. The London firm Clifford Chance notes, however, that the facilitation of processes does not yet “transform the legal approach”.

Prof Susskind says: “I am in no doubt that much of the work of today’s lawyers will be taken on by tomorrow’s machines.” This could have major implications for how lawyers are trained and recruited.



Healthcare is another sector to benefit from AI’s rapid development.

Applied to large data sets, AI has identified new drug solutions, enabled the selection of candidates for clinical trials and monitored patients with specific conditions. Roche, for example, uses deep-learning algorithms to gain insights into Parkinson’s disease.

In the consumer sector, data and language analysis has been applied to develop translation apps, online moderation and product and content marketing. It has also identified epidemic outbreaks and verified academic papers.

In energy, Iberdrola, the Spanish multinational, has achieved efficiency gains that benefit both the company and the environment. It uses AI to improve the operation and maintenance of its assets through data analytics. Systems developed with machine learning co-ordinate the planning and delivery of maintenance, monitor electricity usage and optimise distribution.

Set against these advances, it should be acknowledged that AI has also worked in less benign ways: it has given criminals the means to commit sophisticated fraud and assisted in the creation and dissemination of “fake news”.

Sound recognition and analysis

Chatbots — software that can simulate conversation — have become the mainstay of many customer service centres and are used to answer questions on topics ranging from product options for online marketplaces to telephone inquiries at utilities and banks.

These digital assistants vary in sophistication and are limited by their command of what is known as “natural language processing”: the ability to treat words as more than mere inputs and outputs. This makes empathetic responses difficult to simulate, while the inability to comprehend context means that AI cannot distinguish a joke from a slur. Advances in this area could be transformational to the range of possible applications, as well as to acceptance by consumers.

Elsewhere AI developed by Huawei has been deployed by Rainforest Connection to fight illegal logging and poaching.

Dealing with images

Facial recognition is perhaps the best-known use of image analysis. From its application in identity verification to unlock mobile phones to its more sinister deployment by “surveillance states” — in the Xinjiang region of China, for instance — its adoption is increasingly widespread.

There remain significant drawbacks to the technology, not least its unreliability in identifying the faces of people of colour — just one of the many ethical problems connected to the use of AI.



Less controversially, image analysis is being used in the medical industry. It can help in the identification and diagnosis of diseases such as cancer and its performance in eye scans is at least as accurate as that of human specialists.

In 2018 the US Food and Drug Administration approved a retinal scan algorithm designed by IDx, an Iowa start-up, that can diagnose diabetic retinopathy without the need for an eyecare specialist. The implications for healthcare could be far-reaching, both in terms of changes in the skills needed as well as improved access to care.

Image recognition has also been put to use in environmental conservation. A platform called Ewa Guard, jointly developed by Lenovo and Bytelake, remotely counts trees and monitors the health of forests. Lenovo, which is based in Beijing, has joined North Carolina State University in the US to apply deep-learning algorithms to identify farmland and monitor soil and crops to optimise water management.

A further possible application is in waste management, where image identification may assist robots to extract recyclable items based on logo or component recognition.

Personalisation

Personalisation of products and marketing is an area of rapid development which could greatly benefit manufacturers and retailers. A 2018 report from PwC, the Big Four accounting firm, estimated that the value derived from the effect of AI on consumer behaviour, for instance through product personalisation and an increase in free time, could be as much as $9.1tn by 2030.

Among the sophisticated algorithms to personalise internet content is that used by TikTok, the app that allows users to upload short videos. ByteDance, TikTok’s owner, revealed in June that its system is based on user interactions, video information and, to a lesser extent, device and account settings.

Cosmetics, too, can be personalised by data analysis. Companies such as Kao, a beauty group, use genetic data to tackle wrinkles and dermatological conditions.

Meanwhile the redesign of carmaking processes by Mercedes — converting “dumb” robots on its production line into human-operated, AI-assisted “cobots” — has enabled a previously impossible level of customisation, such that “no two cars coming off the production line are the same”, according to a report in Harvard Business Review.


So much for the way AI is being deployed in businesses around the world. What are the implications of its widespread adoption?

Businesses

For a business to adopt AI with any degree of success it must have a coherent and active strategy. Equally critical is that the strategy is controlled centrally rather than executed piecemeal: businesses need to consider the use of AI holistically, so that entire processes are reimagined, along with the redesign of tasks to blend machine and employee skills.

FT panellist Ms Ming cited an example in which her company came up with a tool to eradicate inefficiencies in manufacturing processes. While the technology did what was needed, “the companies were not ready to act” as their entire workflows would have to change.

This perhaps offers an advantage to companies that operate without the burden of legacy processes, but incremental change is still better than none. Research by Automation Anywhere and Goldsmiths, University of London found that “[AI] augmented companies enjoy 28 per cent better performance levels compared with competitors”.


Buy-in from employees is also essential and can be made easier by including the workforce in the process of redesigning their roles. Lenovo suggests that in future “as teams become more experienced, part of their training will be focused . . . in identifying which parts of their work are suitable to deploy AI towards”. Communication and transparency with employees are critical to engendering trust in the adoption of AI.

IT systems, too, are likely to need a radical overhaul to function in an AI world, and those built from scratch will be more effective than bolt-ons to existing software. Although the cost may be daunting, Clifford Chance argues that the marginal cost of AI systems is relatively low once they are built and offset by the fact that AI can help to “significantly reduce the cost of providing legal services”.

As well as establishing ownership of AI strategy at board level, companies will also need to consider how to deal with the ethical challenges the technology brings. Coupled with the focus on environmental, social and governance (ESG) goals encouraged by the Covid-19 crisis, is a need for more formalised ethics oversight on boards to ensure that AI implementation conforms with corporate values. Could chief ethics officer be the next boardroom position?

Businesses will have to consider the risk of deploying AI from multiple perspectives, including the legal, regulatory and ethical.

In a global survey of 200 board members, Clifford Chance found that “88 per cent agreed (somewhat or strongly) that their board fully understands the legal, regulatory and ethical implications of their AI use”, but that “only 36 per cent of the same board members said they had taken preliminary steps to address the risks posed by lack of oversight for AI use”.


Employment

We are all familiar with blood-curdling predictions that AI could “steal our jobs”. The consensus among researchers, however, is that rather than put humans out of work, the adoption of AI is more likely to change both the nature of the jobs we do and how we carry them out.

In its Future of Jobs Report 2018 the World Economic Forum cited one set of estimates indicating that while 75m jobs may be displaced, 133m could be created to adapt to “the new division of labour between humans, machines and algorithms”.


Carl Frey, author of The Technology Trap and director of the Future of Work programme at Oxford Martin School, estimated in 2013 that 47 per cent of US jobs (based on occupation classifications) were at risk of automation, while UK categorisations gave a figure of 35 per cent.

These numbers have been widely debated but Mr Frey observes that they account for those jobs that can be restructured in order to be automated — and individuals can be allocated new tasks as long as they acquire fresh skills.

While occupations involving, say, the ability to navigate social relations are to a large extent secure, Mr Frey points out that this is true mainly for more complex interactions. For example, fast-food outlets, where interaction is not integral to the appeal of a product, use more automation technology than fine-dining restaurants.

As businesses’ reliance on AI increases, it is clear that a redistribution of labour is inevitable. To deal with the shift in skills that this implies, retraining the workforce is critical. The WEF notes that on average about half of the workforce across all sectors will require some retraining to accommodate changes in working patterns brought about by AI.

Prof Luckin points out that businesses have a huge amount of data on their staff that could be invaluable to understanding how to optimise redeployment. “The savvy businesses will be really trying to understand their current workforce and what workforce they need, and looking to see how they can retrain on that basis.”

Much of that education is likely to go to the higher-skilled segment of the workforce and “saving people” if not “saving jobs” will have to be considered. In the first instance, the burden may fall to governments but the threat to low-skilled workers could require businesses to pick up the slack, especially given the additional pressures caused by Covid-19.

So far it appears that the pandemic has accelerated the trend towards automation. The effect is being felt in call centres, part of an outsourcing services industry worth nearly $25bn to the Philippines in 2018. Even before the pandemic, the IT and Business Process Association of the Philippines noted that the increase in headcount in 2017 and 2018 had been just 3.5 per cent, against a forecast of nearly 9 per cent. One of the reasons for this is increased automation.

Call centre operators in countries such as the Philippines and India have suffered further from the requirement to work from home during the pandemic. They have been hampered by poor infrastructure, which ranges from a lack of IT equipment or fast internet to security considerations when dealing with customers’ financial information.

At the end of April, US-based outsourcer [24]7.ai said demand for some automated products had risen by half since the beginning of the year, well ahead of the call for human services.

Food preparation roles may also be at increasing risk of redundancy because of automation spurred by Covid-19, according to the European Centre for the Development of Vocational Training. The advent of robots such as Flippy, which can cook burgers and french fries and knows when to clean its own tools, shows that such a shift is not out of the question.

One domain in which AI has failed to encroach successfully, says Mr Frey, is the arts: creative output that is original and makes sense to people has not yet been successfully replicated, even if an algorithm could be programmed to produce something that sounds similar to Mozart.

“The reason is simply that artists don’t just draw upon pre-existing works, they draw upon experiences from all walks of life — maybe even a dream — and a lot of our experiences are always going to be outside of the training dataset.”

Mr Frey’s point is echoed by Prof Wooldridge, who said people will have to wait a long time for works created by AI that would “deeply engage” them.

Education

AI affects education in many ways. People will need to be taught what AI is and how to use it, as well as the way its inputs and outputs are conceived. Education is also crucial to establishing public trust.

This summer’s school exam-marking controversy in the UK shows what happens when trust in computer-generated results is eroded. An automated system designed to award A-level grades in line with previous years’ results led to a public outcry. A lack of transparency about how the algorithms worked, combined with a lack of confidence in the metrics used, undermined the exercise.

Prof Luckin stresses that if public consent and trust are to be gained, then AI-driven processes should be both transparent and easily explained.

Data literacy will be hugely important, says Prof Luckin, to ensure that people are equipped to assess and refine AI output.

“That’s the real problem. It was an algorithm and they took the human out of the loop. It needed much more human intervention with the data. It is just having someone who is contextually aware going ‘hang on a minute, that’s not going to work’.”

Finally, AI can also be used as a pedagogical tool, complementing the work of human teachers. It can assess our ability to learn and advise us on the best way to retain information. 

For example, Up Learn, a UK company, offers learning “powered by AI and neuroscience” and promises a refund in the event that customers do not achieve a top grade.

Ethics and bias

The widespread adoption of AI obviously raises ethical challenges, but numerous organisations have sprung up to monitor and advise on best practice. These include AI for Good, the AI Now Foundation and Partnership on AI.

Governments are also taking steps, with more than 40 countries adopting the OECD Principles on Artificial Intelligence in May 2019 as a “global reference point for trustworthy AI”. At about the same time, China released its Beijing AI Principles. In July, the European Commission published the results of its white paper consultation canvassing views on regulation and policy.


Despite this there is no globally agreed set of standards: regulation remains piecemeal.

The British A-level controversy drew attention to the problem of historical bias, showing how AI is dependent on data and programming inputs.

Diversity is another problem, both in the poor representation of women among AI professionals and in how AI is developed. Facial recognition, for instance, works best on white male faces, a “technical problem” that, Ms Ming noted, there is limited incentive to fix in the absence of regulatory enforcement.


On the other hand, AI can help to promote diversity through “colour-blind” recruitment processes. Schroders, for example, uses AI tools when it looks for early-career trainees and graduates. “Given that the alternative is people looking at candidates’ CVs (with ample scope to favour candidates like themselves),” the company says, “this can be much more fair.”

Facial recognition technology raises further ethical concerns in relation to surveillance — for instance, of the Uighur population in China.

Abuse of data harvested through facial recognition is not restricted to the state, however. Identity fraud and data privacy are significant problems.

In July, UK and Australian regulators announced a joint investigation of Clearview AI, the facial recognition company whose image-scraping tool has been used by police forces around the world.

Other ethical problems loom. Gartner says that by 2022 one-tenth of personal devices will have “emotion AI” capabilities, allowing them to recognise and respond to human emotions, which will present opportunities for manipulative marketing. Accenture advises that the groundwork for the ethically responsible use of such technology needs to be laid now.

What does the future hold?

Businesses and employees alike need to be prepared for what is likely to be widespread and sometimes bewildering change as a result of AI adoption, and the ethical and regulatory challenges that will come with it.

“Doubters find it hard to grasp that the pace of technological change is accelerating, not slowing down,” says Prof Susskind.

“There is no apparent finishing line. Machines will outperform us not by copying us but by harnessing the combination of colossal quantities of data, massive processing power and remarkable algorithms.”

The Logic of Sino-Western Détente

It might be cathartic to opine noisily about another country’s standards and practices, but there is substantial historical evidence to suggest that a country’s citizens will tend to value economic opportunity over most other issues. That axiom applies as much to the US, the UK, and Europe as it does to China.

Jim O'Neill


LONDON – While much of the hand-wringing over China has abated somewhat during the COVID-19 crisis, the fears animating Western attitudes toward that country have not disappeared, and could resurface at any moment.

These tensions represent a major, vexing dilemma for the world, given China’s massive and growing economic power. And the situation certainly hasn’t been helped by the failure of the other major economic powerhouse, the United States, to manage the current crisis effectively.


Owing to my professional background, I usually approach issues like the Sino-American relationship first as a macroeconomist. But as the chair of Chatham House, I have been developing a more nuanced view of the issue, taking into account not just the economic dimension but also security, diplomacy, culture, and other factors.

To that end, it seems only reasonable that we should adopt a broader “optimization framework” for understanding and managing relations between China and the West. 

Not to oversimplify matters, but if the economic opportunity that China represents can be expressed as X, Western leaders who want to confront China about actual or perceived transgressions need to weigh the potential costs of doing so against that benchmark.

Such thinking is only natural, and I suspect that it is already implicit in British and European governments’ approaches to China in recent years. But in following this framework, policymakers need to ask themselves a subtler question: Is strong economic engagement more effective than unbending confrontation in achieving the desired policy changes in China?

Answering such questions will require an open mind. During China’s semi-annual Golden Week holiday this month, many Chinese people appear to have traveled far and wide within the country without triggering another wave of COVID-19 infections. 

Yet when I point this out to other Westerners, their first instinct is to question the anecdotal evidence and reject the credibility of Chinese data. And even when they stipulate that the evidence may be sound, they say they aren’t surprised, given the degree of control that China’s authorities have over the Chinese people.

I would have more sympathy with this argument if China and other authoritarian countries were indeed the only ones to have prevented a serious second wave of COVID-19 infections this year. But similar stories can be found in places like Japan and South Korea, suggesting that we would do better to look for lessons than simply dismissing the evidence.

Soon, we will have data on China’s third-quarter real (inflation-adjusted) GDP growth, and many analysts expect to see an acceleration to around 5% year on year, coming on top of a second-quarter estimated growth rate of 2.6%. If so, there will be good reason to believe that China is experiencing a classic “V-shaped” recovery, putting it on track to register 8% growth in 2021.

These are just forecasts, of course, and any number of unforeseen developments could radically change the state of play, as 2020 has shown. 

But if the current growth figures are reasonably accurate, the implication is that China’s nominal GDP ($14.1 trillion in 2019) will match that of the US ($21.4 trillion) later this decade, or soon thereafter.
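The crossover arithmetic behind that projection can be sketched with a simple compound-growth comparison. The sustained growth rates below are illustrative assumptions for the sake of the calculation, not figures from the text, which gives only the 2019 GDP levels and China's forecast for 2021:

```python
import math

# Nominal 2019 GDP levels, in $ trillions (from the article)
china_gdp_2019 = 14.1
us_gdp_2019 = 21.4

# Assumed sustained nominal growth rates (illustrative, not from the article)
china_growth = 0.08
us_growth = 0.03

# Solve 14.1 * 1.08^n >= 21.4 * 1.03^n for n, the years until crossover
n = math.log(us_gdp_2019 / china_gdp_2019) / math.log((1 + china_growth) / (1 + us_growth))
crossover_year = 2019 + math.ceil(n)
print(crossover_year)  # 2028 under these assumed rates
```

Under these assumptions the crossover lands in the late 2020s, consistent with "later this decade, or soon thereafter"; a slower Chinese nominal growth rate, or a weaker renminbi, would push the date out accordingly.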

Moreover, at the current growth rate, China is poised to contribute an additional $1.5 trillion to global GDP just next year, and Chinese consumers will drive close to 40% of that. For comparison, $1.5 trillion is greater than the national GDP of all but the top 15 or so economies. 

China will effectively be creating another Australia or Spain in the space of a single year. And given that consumer spending continues to account for a growing share of China’s expansion, the scale of the economic opportunities on offer cannot be overstated.

That takes care of the macroeconomics, but we cannot ignore the other issues. China’s human-rights abuses are legion, particularly in Xinjiang. Its clampdown in Hong Kong and incursions in the South China Sea have heightened tensions across the region, as has the Belt and Road Initiative, through which China is exercising its influence in other countries.

The Chinese government’s insistence that even private companies toe the party line raises serious doubts for Western firms and governments dealing with the country.

These are serious concerns, and they take us back to the question posed by the optimization framework. Those advocating more confrontation with China must weigh the probabilities that their approach will succeed as intended, that it will curtail Chinese growth, and that it may reduce economic opportunities for the West. 

If all of these outcomes are borne out, Western leaders could decide that the strategy was worth it. But if there were a reasonable chance that China’s growth would continue while the opportunities for the West shrank, a policy of confrontation would be utterly self-defeating.

It might be cathartic to opine noisily about another country’s standards and practices, but there is substantial historical evidence to suggest that a country’s citizens will tend to value economic opportunity over most other issues. That axiom applies as much to the US, the United Kingdom, and Europe as it does to China.

Moreover, even if a country’s leadership still prefers a confrontational approach after considering the potential costs, it would have a much better chance of success by cooperating with other governments in a program of positive engagement rather than zero-sum brinkmanship. Surely diplomacy and other subtler forms of engagement would go further toward changing a country’s standards than saber rattling and trade warfare ever could.

If there is a change in US leadership next month, one hopes that it will set the stage for a renewed effort at the G20 to resurrect the post-war international order, and to bring governments back to the same table. Everyone has a role to play in working toward a more prosperous and inclusive future.


Jim O’Neill, a former chairman of Goldman Sachs Asset Management and a former UK Treasury Minister, is Chair of Chatham House.