What’s so great about private equity (part 1)

Robert Armstrong


On Monday I took note of the new head of Yale’s fabulously successful endowment, and his commitment to its huge allocation to private capital — more than a third of the endowment’s assets. 

It has been (from what I can tell) a big source of Yale’s strong returns. 

But I wonder where the extra return comes from. 

Is it leverage? 

Managerial skill? 

Something else?

The literature on this topic is so vast I’m starting to regret having asked the question. 

All I can do at this point is report on the first steps of what promises to be a long journey. 

I’ll start with private equity, the biggest sub-category of private markets. 

Where better to begin than with the man who designed Yale’s strategy?

Here is David Swensen, who ran the endowment until his untimely death earlier this year, in a 2017 interview, talking about PE. 

The interviewer asked Swensen a question about leverage versus managerial skill as a source of PE returns:

I think the private equity . . .  where you buy the company, you make the company better . . . and then you sell the company is a superior form of capitalism. I’m really concerned about what’s going on in our public markets. 

I think short-termism is incredibly damaging. 

There’s this focus on quarter-to-quarter earnings. 

There’s this focus on whether you’re a penny short or a penny above the estimate . . . 

If you compare and contrast that with the — let’s say the buyout world, where you’ve got hands-on operators that are going to improve the quality of the companies, there’s no pressure for quarter-to-quarter performance. 

There’s no pressure to return cash at any cost. 

There’s an opportunity, with a five to seven-year time horizon, to engage in intelligent capital investments that will improve the long-term prospects of the company.

A superior form of capitalism! 

Intelligent capital investments! 

Drinks on me! 

What we have there is a theory of superior PE returns, from one of the smartest portfolio managers ever: less short-termism means better long-term returns. 

Swensen being Swensen, he added a jab at the industry’s fees:

The only problem is that you have to pay 20 per cent of the profits. 

Right? 

And that’s the hurdle, right? 

You have to [give] 20 per cent of the profits to the operator. 

If there were a fair deal structure, you wouldn’t want to put anything into the public securities markets. 

You’d want it all in private equity.

With Swensen’s endorsement in mind, I went to look for what the outperformance actually looks like. 

What I was surprised to discover is that the outperformance of US PE firms, at least for the last decade, is small and getting smaller.

How to measure PE returns is a debate in itself. 

There are lots of numbers out there. 

But they don’t disagree that much. 

Here is a chart from Bain & Company’s 2021 Global Private Equity Report, comparing PE returns to S&P 500 returns, measured by internal rate of return:  



PE had a good year last year. 

But over five years it’s basically tied with the S&P, and at 10, it’s ahead by just a percentage point or two. 

The good old days of the early 2000s are fading fast (relative returns in Asia and Europe are much better). 
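Since everything in these comparisons hangs on the IRR, here is a minimal sketch of the calculation, with invented cash flows (capital calls negative, distributions positive) rather than any real fund’s numbers:

```python
# A minimal IRR sketch: find the discount rate at which the net present
# value of a fund's cash flows is zero. The cash flows here are hypothetical.

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Solve NPV(r) = 0 by bisection. For call-then-distribute flows,
    NPV is decreasing in r, so bisection converges."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A fund that calls $100 upfront, returns $30 in year three and $130 in year five:
print(f"{irr([-100, 0, 0, 30, 0, 130]):.1%}")  # about 10.8% a year
```

IRR is a money-weighted measure, sensitive to the timing of calls and distributions, which is one reason comparing it with a time-weighted index return like the S&P’s is contentious.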

The Bain report from 2020 put the same data (one year less current) into a line graph, so you can really see the collapse of the return spread since the financial crisis:

 

Here is another look at the recent returns data, from Cliffwater Research, which examined the returns of 66 state pension plans with some $3tn in assets. This is how PE did for those plans in the decade ending in 2020:

 

PE’s performance was about the same as that of US stocks over the past decade. All the data sets I have seen confirm this. 

Now, if I thought, along with Swensen, that PE was a superior form of capitalism, I could think of three things to say in response to this lack of long-term outperformance:

1. US stocks have had a great decade. Keeping pace with them is impressive;

2. Of course the average PE fund doesn’t outperform stocks. You need to be with the best PE firms, with the most skill at improving companies’ operational performance, to see superior performance; 

3. Higher returns are not the only virtue of PE. It’s the stability of those returns that sets this asset class apart. 

Response number one fails. PE funds use a huge amount of leverage, and (until March last year) we had low market volatility and falling interest rates. If you can’t use your massive leverage to buy companies and beat the S&P under those conditions, you stink.
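To see how much work cheap debt does in conditions like those, here is a back-of-the-envelope sketch; the asset returns, borrowing cost and debt-to-equity ratios below are my own illustrative assumptions, not anyone’s actual deal economics:

```python
# Toy illustration of leverage amplifying equity returns, up and down.
# Levered return on equity: r_e = r_a + (D/E) * (r_a - r_d)

def levered_return(asset_return, debt_to_equity, cost_of_debt):
    """Return on equity for a deal with debt_to_equity dollars of debt
    per dollar of equity, paying cost_of_debt on the borrowings."""
    return asset_return + debt_to_equity * (asset_return - cost_of_debt)

# Assets up 8% in a year, debt costing 5%:
print(f"{levered_return(0.08, 2.0, 0.05):.1%}")   # 14.0% at 2x debt/equity
print(f"{levered_return(0.08, 4.0, 0.05):.1%}")   # 20.0% at 4x debt/equity
print(f"{levered_return(-0.08, 4.0, 0.05):.1%}")  # -60.0% when assets fall 8%
```

In a decade when asset prices mostly rose and debt was cheap, the first two lines did the work; the third is what the same structure does on the way down.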

Number two does a little better. There may be individual PE funds that have a special sauce, just as there might be public equity funds that do (if anyone can say, in advance, which PE funds those are going to be, email me). But the question we are asking is about what generates returns for the asset class. 

Number three is very interesting, and it connects to Swensen’s point. PE investments don’t get marked to market, so of course they are going to be less volatile. That does not mean that the value of the business is not going up and down all the time. It just means the owners have the privilege of not knowing what the day-to-day value is. Homeowners like me have the same privilege with regard to the value of our houses. If we did not, we would go insane.

Likewise, those pension fund managers who are getting normal equity returns on their PE allocations may be getting paid in relaxation. The inclusion of PE brings the volatility of their reported results down. 
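A toy simulation shows how this works; the return series and the smoothing weight are invented, but the mechanism is generic: smooth the marks and measured volatility falls even though the underlying risk is untouched.

```python
# Toy demonstration that smoothed or infrequent marks mechanically lower
# measured volatility without changing the underlying risk.
import random
import statistics

random.seed(0)
true_monthly = [random.gauss(0.007, 0.045) for _ in range(120)]  # ten years

# "Appraised" marks: each reported return blends the current move with the
# previous report, a common stylisation of appraisal smoothing.
reported, prev = [], 0.0
for r in true_monthly:
    prev = 0.4 * r + 0.6 * prev
    reported.append(prev)

print(f"true volatility:     {statistics.stdev(true_monthly):.3f}")
print(f"reported volatility: {statistics.stdev(reported):.3f}")  # roughly half
```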

Is this a good thing? I’m not sure. But people have strong opinions on the subject. Here is Dan Rasmussen of Verdad Capital, writing in 2018: 

John Burr Williams, who invented modern finance theory, wished for a day when experts would set security prices. He believed that expert valuations would result in “fairer, steadier prices for the investing public”.

The PE industry would seem to have made Williams’s dreams come true. Experts, rather than markets, determine the prices of PE-owned companies . . . 

The hurly-burly of the public markets is replaced by the considered judgment of an accounting firm that just so happens to be employed by the PE fund.

Rasmussen gives a powerful example. During the oil price crash in 2014-15, energy stock indices dropped by half, but energy private equity funds operating during that period were hardly marked down.

His conclusion is that the PE structure hides risk, and hiding risk is bad, because when risk is hidden, investors seeking returns push it higher and higher until something breaks:

To the extent that things do come out right in the end, reducing a few wiggles along the way really is not so problematic. But not seeing the wiggles can also encourage complacency, allowing valuations and leverage levels to climb and climb because the consequences of those decisions have not yet been felt. A lack of short-term accountability just means a delayed reckoning.

Is risk being pushed higher and higher? Well, here is the average leverage ratio on private equity deals, courtesy of PitchBook:

 

The trend in PE deal leverage is clearly up — from median debt of 4.5 times ebitda a decade ago to 7.5 times this year. Whether that leads to a crackup down the road, I have no idea.

Here is a rather speculative response to Rasmussen, though. The biggest PE firms can act to reduce actual volatility, not just perceived volatility. If you leveraged a bunch of your average small-cap public companies to 8 times ebitda, and a recession came along, a good portion of them would probably go bankrupt. But suppose it’s Blackstone or KKR who put all that leverage on. If their small caps are going bust, they can scare up some new money to keep them going; they know that recessions end. So the investors don’t lose all their money, as they would if they owned shares in a super-leveraged public company that went bust.

That response may not hold up to scrutiny, but it’s the best one I can think of right now. 

Debt ceiling fight pushes money market funds to brink

US political stand-off piles pressure on stretched short-term government bond supply

Kate Duguid in New York

Congress has not passed legislation to raise the debt ceiling, so issuance of Treasury bills has fallen


The supply of the safest US government bonds has been cut this month after federal spending limits were reinstated, driving prices higher and reigniting problems for the money market fund industry — which has already been bailed out by the Federal Reserve once this year.

Treasury bills — US bonds which mature in a year or less — were already scant this year after the US lengthened the average duration of its new debt issues. 

Supply then took another hit after Congress failed to pass legislation in July that would have allowed the Treasury department to issue new debt — known as raising the debt ceiling.

Analysts estimate issuance of new Treasury bills has been cut by roughly $900bn so far this year. 

That limited supply has driven prices higher and yields — the premium investors are paid to hold the debt — down to levels just above zero.

When some yields turned negative in May, the Fed intervened to put a floor under those rates. 

But the worsening supply crunch is drastic enough that rates are heading back towards zero despite the Fed’s support.

“The Fed facility in its capacity now is helpful, but we’re starting to see it running out of gas,” said Tom Simons, money market economist at Jefferies.

Rock-bottom yields cause problems for money market funds, a $4.4tn industry that relies heavily on short-dated debt, erasing their profits or forcing them to close their doors to new investors. 

Money market funds are a linchpin in the global financial system because they are used by investors as a safe place to store cash for short periods.

“Money funds are having trouble making ends meet because of these very low rates. 

It’s not exactly a conducive environment to be a money-market fund unfortunately. 

As if zero interest rates weren’t enough, this is just piling on,” said Gennadiy Goldberg, senior US rates strategist at TD Securities.

That dynamic will only get worse in the next month, said Goldberg. 

He does not expect the spending limits to be lifted before the end of October at least. 

The US debt ceiling has been subject to partisan brinkmanship in recent years.

Although Republicans and Democrats in Congress have come to agreements to suspend those limits in the past, investors believe negotiations will go down to the wire. 

The Treasury department is not expected to run out of money until late October or early November.

After the decline in supply earlier this year drove some short-term rates negative, the Fed backstopped the market by paying interest on money placed in its Overnight Reverse Repo Facility. 

The facility provides money market funds with an alternative place to park cash, bolstering those short-term interest rates. 

The problem is that the RRP facility is now consistently being used at record levels, and is approaching the limits put on its usage by the Fed. 

“What that suggests — because the fed funds market is limited in terms of its participants — is that they’re getting up to their counterparty limit with the overnight reverse facility. 

So if they’re getting up to their counterparty limit, it stands to reason others are probably getting up to their counterparty limit as well, since we continue to see record highs on a pretty routine basis,” said Simons.

The Fed could ultimately raise the counterparty limits on the facility, which could relieve some pressure on the market, an option it signalled it was open to in the minutes from its July policymaking meeting. 

But because of pandemic-related monetary and fiscal stimulus, there is still an enormous amount of money in the economy chasing too few investments. 

That is likely to keep yields on bills low and money market funds under pressure.

“That poor market is stuck between a rock and a hard place. 

There is just nowhere to go,” said Goldberg.

The Fed’s State of Exception

Despite the recent surge of inflation in the United States, the Federal Reserve is keeping the federal funds rate in a range far below what its own monetary-policy rules would prescribe. But since history shows that this deviation cannot last indefinitely, it would be better to normalize sooner rather than later.

John B. Taylor



STANFORD – Over the past few months, there has been a growing chorus of economic observers voicing concerns about the increase of inflation in the United States. 

Much of the commentary (including my own) has focused on the US Federal Reserve’s apparent continuation of easy monetary policy in the face of rising prices. 

Despite a sharp increase in the rate of money growth, the central bank is still engaged in a large-scale asset-purchase program (to the tune of $120 billion per month), and it has kept the federal funds rate in the range of 0.05-0.1%.

That rate is exceptionally low compared to similar periods in recent history. 

To understand why it is exceptional, one need look no further than the Fed’s own July 9, 2021, Monetary Policy Report, which includes long-studied policy rules that would prescribe a policy rate higher than the current actual rate. 

One of these is the “Taylor rule,” which holds that the Fed should set its target federal funds rate according to the gap between actual and targeted inflation and the gap between actual and potential output.

The Taylor rule, expressed as a straightforward equation, has worked well when it has been followed over the years. 

If you plug in the current inflation rate over the past four quarters (about 4%), the gap between GDP and its potential for the second quarter of 2021 (about -2%), a target inflation rate of 2%, and a so-called equilibrium interest rate of 1%, you get a desired federal funds rate of 5%.

Moreover, the Taylor rule implies that even if the inflation rate falls to 2% by the end of this year (which would be well below most forecasts), and economic output reaches its potential, the federal funds rate still should be 3%. 

That is a long way from the near-zero level implied by the Fed’s forward guidance.
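To make the arithmetic explicit, here is the rule in its standard form, assuming (as the classic 1993 formulation does) weights of 0.5 on both the inflation and output gaps:

```latex
% Taylor rule: policy rate as a function of inflation and the output gap
i = \pi + r^{*} + 0.5\,(\pi - \pi^{*}) + 0.5\,y

% Current conditions: \pi = 4, r^{*} = 1, \pi^{*} = 2, y = -2 (all in %)
i = 4 + 1 + 0.5\,(4 - 2) + 0.5\,(-2) = 5

% If inflation falls to 2% and output reaches potential (y = 0):
i = 2 + 1 + 0.5\,(2 - 2) + 0.5\,(0) = 3
```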

Since these calculations use the inflation rate averaged over the past four quarters, they are consistent with a form of “average inflation targeting” that the Fed itself endorsed last summer. 

They also follow the Fed’s own recently suggested equilibrium interest rate of 1%, rather than the 2% rate that has traditionally been used. 

If the latter had been used, the discrepancy between the policy rate in the rule and the actual level of the funds rate would be even larger.

These higher possible levels for the federal funds rate are largely being ignored in the Fed’s reported discussions. 

Instead, the Fed insists that today’s higher inflation is a temporary byproduct of the pandemic’s effect on inflation last year. 

Those who defend its current stance point out that market interest rates on longer-term bonds remain very low. 

On safe Treasury assets, the five-year yield is only 0.81%, and the ten-year yield is only 1.35% – well below the rates suggested by the Taylor rule when averaged over these maturities. 

Considering these factors, many commentators are saying not to worry: the markets are probably being rational when they forecast low rates.

The problem with this line of reasoning is that the low longer-term rates are likely being caused by the Fed’s own insistence on keeping low rates as far as the eye can see. 

As Josephine M. Smith and I show in a 2009 study, there is a “Term Structure of Policy Rules” to consider. 

Effectively, the policy rule for longer-maturity bonds depends on the policy rule for the much shorter-term federal funds rate, as perceived by people in the market. 

If the Fed convinces the market that it will stay low, the term structure of interest rates will imply lower longer-term rates.

Today’s situation is similar to that of 2004, when then-Fed Chair Alan Greenspan noticed that ten-year Treasury yields did not seem connected to moves in the federal funds rate. 

He called this a “conundrum,” because the actual short-term interest rate was not generating as large an increase in long-term interest rates as one would expect based on previous experience. 

Monetary-policy tightening was not having as much of an effect on longer-term rates as it had in previous periods of tightening.

During this period, the federal funds rate deviated significantly from what would have been predicted by the Fed’s typical response, much as it is doing today. 

When the actual federal funds rate deviated significantly from the level suggested by policy rules, the short-term interest-rate response to inflation appeared to be much lower, at least from the perspective of market participants trying to assess Fed policy. 

And this perception of a smaller response coefficient in the policy rule may have led market participants to expect smaller longer-term interest-rate responses to inflation, and therefore lower long-term interest rates.

Today, it appears that the Fed is deviating from monetary-policy rules. 

It has beaten its own path for forward guidance, and the market is basing its estimates of future rates on the expectation that this deviation will continue. 

But history tells us that it cannot continue indefinitely. 

Eventually, the Fed will have to return to a policy rule, and when it does, the conundrum will disappear. 

The sooner this occurs, the smoother the recovery will be. 

There is still time to adjust and get back to a policy rule, but time is running out.


John B. Taylor, a former under-secretary of the US Treasury (2001-05), is Professor of Economics at Stanford University and a senior fellow at the Hoover Institution. He is the author of Global Financial Warriors and co-author (with George P. Shultz) of Choose Economic Freedom. 

Free exchange

At the Jackson Hole meeting, the Fed ponders an uneven recovery

Research presented at the monetary policymakers’ virtual pow-wow offers guidance


BUSINESS CYCLES are never perfectly symmetric across time and space. 

Yet they have rarely been as uneven as the rebound from covid-19. 

Some parts of the global economy are straining to meet roaring demand even as others are limping along, battered by the spread of the virus. 

It is enough to take the fun out of monetary policy. 

Indeed, the Delta variant kept attendees of an annual symposium for central bankers from meeting in Jackson Hole, Wyoming, in the shadow of the majestic Teton mountains. 

Instead, they peered at their computer screens as they discussed how to shepherd an unbalanced economy through uncertain times.

A pressing question loomed over the proceedings: just how and when to tighten policy given high inflation and lingering unemployment. 

Tweaks to the Federal Reserve’s framework in recent years are meant to give it room to manage such difficult circumstances. 

It now aims to hit its 2% inflation target on average and will court high inflation to make up for past shortfalls. 

But surging prices are testing this approach. 

Data released as the conference began showed that the Fed’s preferred measure of inflation had risen to 4.2% in July, the highest in 30 years. 

Jerome Powell, the Fed’s chairman, made no suggestion to his fellow participants that he would drastically change course, and confirmed that he might begin to taper asset purchases later in the year. 

But policy, he cautioned, would have to change as new data come in.

Research presented at the symposium offered guidance on how to cope with a lopsided recovery. 

Veronica Guerrieri of the University of Chicago and her co-authors, for instance, considered how policymakers should respond when demand surges in some sectors and lags in others. 

If there is little scope for workers to shift from unfavoured industries to the up-and-comers, they write, then the shift in demand acts like a “cost-push shock” (similar to a spike in oil prices). 

In such cases, central banks typically accept some pain in the form of above-normal inflation and some in above-normal unemployment. 

But if workers can move, then there are benefits to the central bank’s facilitating this shift.

Easy money is not obviously the right answer. 

If loose monetary policy raises demand for both booming and busting sectors, then it might slow reallocation by acting to prop up businesses that ought really to close. 

But the authors argue that, in a world in which it is easy to adjust wages upward but tricky to cut them, inflation may in fact hasten the reallocation of workers. 

Because nominal wages in the lagging industries cannot easily fall, workers face little incentive to move to promising industries. 

Inflation, though, enables the real wage in lagging sectors to fall relative to that in booming ones, encouraging workers to move. 

Thus it might make sense, in the context of an uneven recovery, for monetary policy to have an inflationary bias.
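A stylised numerical example (with invented figures, not the paper’s) illustrates the mechanism:

```python
# Invented figures: with nominal wages sticky downward, inflation lets
# relative real wages adjust, pulling workers towards booming sectors.
inflation = 0.05
lagging_raise = 0.00  # nominal cuts are hard, so lagging-sector wages stall
booming_raise = 0.08  # booming sectors bid nominal wages up freely

def real_change(nominal_change):
    """Real wage change given a nominal change and the inflation rate."""
    return (1 + nominal_change) / (1 + inflation) - 1

print(f"lagging sector real wage: {real_change(lagging_raise):+.1%}")  # -4.8%
print(f"booming sector real wage: {real_change(booming_raise):+.1%}")  # +2.9%
# The widened real-wage gap is the price signal that moves workers across.
```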

Clear advice for Mr Powell, then. 

But if American firms continue to hire at the recent pace, the unemployment rate may fall back to its pre-pandemic level of 3.5% by the end of 2022. 

That presents the Fed with a new dilemma. 

While the unemployment rate has recovered quickly, labour-force participation has not: of the drop experienced in early 2020, just under half has been clawed back; the unemployment rate, by contrast, is more than 80% of the way back. 

Part of Mr Powell’s justification for the change in framework was the beneficial effects of tight labour markets, which he reckoned would eventually draw disadvantaged workers back into the labour force. 

But the patience needed to allow such effects to unfold could vanish amid high inflation and low unemployment.

Work presented by Bart Hobijn of Arizona State University and Aysegul Sahin of the University of Texas at Austin on the “participation cycle” reaffirms the benefits of patience. 

It is not the case that workers from disadvantaged groups are especially likely to drop out of the labour force during downturns and are only enticed back after sufficiently long recoveries. 

Rather, the probability that a worker drops out is much higher for unemployed workers than employed ones, whatever their background. 

It is thus the higher unemployment rates that disadvantaged groups tend to face that are responsible for their leaving the labour force. 

And this effect begins reversing as soon as labour markets begin to recover. 

Greater job stability—that is, a higher probability of finding work and a lower probability of losing a job—reduces the flow of workers into unemployment and out of the labour force, raising the participation rate over time.

The effect is powerful; the authors estimate that a one-percentage-point decline in the unemployment rate tends to raise the participation rate by 0.65 percentage points, other things equal. 

The beneficial effect continues even after unemployment reaches a trough, with the participation rate typically reaching a peak nine months later. 

The upshot for policy is therefore broadly similar to where Mr Powell has ended up: a low unemployment rate need not imply that labour-market slack has run out, or that patience on the part of the central bank will not be rewarded.

When the odds are against you

Other research reinforced the doveish conclusion. 

Emerging markets, participants learned, could be dealt a setback by premature monetary tightening in the rich world, which would act to push up their borrowing costs and tighten financial conditions.

Outside the conference, however, Mr Powell is being bombarded by criticism of loose money. 

Inflation has now more than made up its shortfall since 2015, let alone the start of the pandemic. 

Some heads of regional Fed banks, such as Raphael Bostic of Atlanta, are eager to reverse quantitative easing soon. 

Prominent economists, such as Raghuram Rajan of the University of Chicago and Larry Summers of Harvard, have highlighted the dangers of prolonging asset purchases. 

Discussions at the conference suggest Mr Powell’s policies will defy this growing band of critics. 

They are likely to remain anxious for some time yet.

Climate Change Brings a Flood of Hyperbole

Despite constant warnings of catastrophe, things aren’t anywhere near as dire as the media say.

By Steven E. Koonin



The Intergovernmental Panel on Climate Change has issued its latest report assessing the state of the climate and projecting its future. 

As usual, the media and politicians are exaggerating and distorting the evidence in the report. 

They lament an allegedly broken climate and proclaim, yet again, that we are facing the “last, best chance” to save the planet from a hellish future. 

In fact, things aren’t—and won’t be—anywhere near as dire.

The new report, titled AR6, is almost 4,000 pages, written by several hundred government-nominated scientists over the past four years. 

It should command our attention, especially because this report will be a crucial element of the coming United Nations Climate Change Conference in Glasgow. 

Leaders from 196 countries will come together there in November, likely to adopt more-aggressive nonbinding pledges to reduce greenhouse-gas emissions.

Previous climate-assessment reports have misrepresented scientific research in the “conclusions” presented to policy makers and the media. 

The summary of the most recent U.S. government climate report, for instance, said heat waves across the U.S. have become more frequent since 1960, but neglected to mention that the body of the report shows they are no more common today than they were in 1900. 

Knowledgeable independent scientists need to scrutinize the latest U.N. report because of the major societal and economic disruptions that would take place on the way to a “net zero” world, including the elimination of fossil-fueled electricity, transportation and heat, as well as complete transformation of agricultural methods.

It is already easy to see things in this report that you almost certainly won’t learn from the general media coverage. 

Most important, the model muddle continues. 

We are repeatedly told “the models say.” 

But the complicated computer models used to project future temperature, rainfall and so on remain deficient. 

Some models are far more sensitive to greenhouse gases than others. 

Many also disagree on the baseline temperature for the Earth’s surface.

The latest models also don’t reproduce the global climate of the past. 

The models fail to explain why rapid global warming occurred from 1910 to 1940, when human influences on the climate were less significant. 

The report also presents an extensive “atlas” of future regional climates based on the models. 

Sounds authoritative. 

But two experts, Tim Palmer and Bjorn Stevens, write in the Proceedings of the National Academy of Sciences that the lack of detail in current modeling approaches makes them “not fit” to describe regional climate. 

The atlas is mainly meant to scare people.

As is now customary, the report emphasizes climate change in recent decades but obscures, or fails to mention, historical precedents that weaken the case that humanity’s influence on the climate has been catastrophic. 

The Summary for Policy Makers section says the rate of global sea-level rise has been increasing over the past 50 years. 

It doesn’t mention that it was increasing almost as rapidly 90 years ago before decreasing strongly for 40 years.

Extreme weather events are invoked as proof of impending disaster. 

But the floods in Europe and China and record temperatures across regions of the U.S. are weather, not climate—singular events, not decadeslong trends. 

Both Europe and China have experienced equally devastating floods in past centuries, but these are forgotten or deliberately ignored. 

The drought and wildfires in the Western U.S. are part of a trend going back a few decades, but forest management and expanding human presence in the forests are perhaps more important than climate change in causing these events.

The report expresses low confidence in most reported hurricane trends over the next century, and it remains uncertain whether there’s any trend beyond natural variability in Atlantic hurricanes. 

In other words, we have no scientific proof that humans have made hurricanes worse, despite what many say.

Refreshingly, the report deems its highest-emissions scenarios of the future unlikely, even though those are the ones you’re most likely to hear about in media reports. 

The more plausible scenarios have an average global temperature in 2100 about 2.5 degrees Celsius warmer than the late 1800s. 

The globe has already warmed 1 degree since that time, and the parties to the Paris Accord arbitrarily agreed to limit further warming to another degree. 

But since humanity’s well-being has improved spectacularly, even as the globe warmed during the 20th century, it is absurd to suggest that an additional degree of warming over the next century will be catastrophic. 

In fact, the AR5 report from 2014 says even 1.5 degrees of additional warming by 2100 will have minimal net economic impact.

Good science is characterized by detail, data, proven models and reasoned debate. 

That takes time. 

Meanwhile, we should be wary of the torrent of hyperbole that is sweeping the globe.


Mr. Koonin, a professor at New York University, is author of “Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters.”

Buttonwood

How the delisting of Chinese firms on American exchanges might play out

$1.5trn of market capitalisation is at stake


These days politicians in Beijing and Washington seem to agree on very little. 

Yet on the subject of ending the listing of Chinese firms on American exchanges they are in uncommon harmony. 

The collapse last year of Luckin Coffee, a Chinese beverage-delivery group listed on the Nasdaq that was caught inflating its sales, reignited political grievances in America. 

The result was the Holding Foreign Companies Accountable Act, which requires companies traded on American exchanges to submit to audits or face delisting within three years. 

The precise rules are still being drawn up, but will probably eventually involve a great shedding of shares.

China, for its part, seems happy for its companies to leave American markets. 

Its regulators seemed unbothered when their actions demolished the share price of Didi Global, a Chinese ride-hailing company, just days after it listed in New York. 

New rules from the country’s cyberspace watchdog will make it harder for some firms to list outside of China. 

A sudden rule change in late July made online-tutoring firms serving school-aged children ineligible for overseas listings, wiping billions of dollars from several New York-traded Chinese stocks.

Rare as this moment of Sino-American agreement is, it hardly spells good news for investors. 

The American market has come to host $1.5trn-worth of Chinese companies. 

That sort of market value has not been cast off by exchanges before. 

So what kind of damage might delisting do to shareholders?

That Chinese companies still trade in New York at all is remarkable. 

For a decade now Beijing and Washington have sparred over the fate of China’s American Depositary Receipts (ADRs), as the shares of foreign companies trading in America are called. 

As a wave of accounting scandals at New York-listed Chinese firms began to wash over markets in 2011, American regulators started insisting on gaining access to certain accounting documents. 

Chinese officials have dug their heels in, refusing the requests and even making sharing the materials a crime.

There are some stocks for which delisting need not involve much pain. 

Many adr contracts say that investors can convert those shares into corresponding securities listed on other exchanges, notes Wei Shang-Jin of Columbia Business School. 

Some of the biggest Chinese companies have been prepared, pursuing secondary listings in Hong Kong to which shares can be transferred. 

This started with BeiGene, a biotech group, when it launched a secondary listing in Hong Kong in 2018. 

Alibaba, which raised $25bn in New York in 2014, held a second listing in Hong Kong in 2019 to raise another $11bn. 

Of the 236 Chinese companies listed in New York, 16 have secondary listings in Hong Kong, with a combined market capitalisation of $980bn.

The situation looks bleaker for shareholders in other firms. 

Stock prices will be dragged down by the potential for instability. 

(The Nasdaq Golden Dragon China Index, which tracks Chinese firms listed in New York, is down by 45% since February.) 

This will give managers and other company insiders a chance to buy out American shareholders’ stock on the cheap, says Jesse Fried of Harvard Law School. 

The companies could eventually relist in China or Hong Kong at much higher valuations, but the original investors in the ADRs will not see a cent from the relisting. 

And shareholders are unlikely to have the right to review the valuation at which companies are taken private, notes Shaswat Das of King & Spalding, a law firm.

There is an even worse case. 

Some companies may simply “go dark”, meaning they stop reporting to American regulators and are delisted with no buyout at all. 

This might sound far-fetched—but it has happened before. 

In the aftermath of the accounting scandals of a decade ago, more than 100 Chinese companies vanished from New York’s exchanges, destroying some $40bn in market value. 

Many did not compensate investors. 

And shareholders in general stand little chance of recouping losses: because most Chinese groups have few assets in America, an angry shareholder seeking legal recourse would have to go to a Chinese court, says Joel Greenberg of Arnold & Porter, another law firm.

The smart move, then, is not to be caught holding these shares when delisting draws near. But here’s the catch. 

Ten years ago experts also called time on cross-border listings for Chinese groups. 

The market capitalisation of Chinese firms listed on American exchanges has risen ten-fold since.