The Berlin Wall and the rise of nationalism

The 1989 promise of liberal democracy was a squandered opportunity

Philip Stephens


Two great earthquakes shaped the present global order.

The first, in 1989, seemed to promise an irresistible march towards liberal democracy and open markets. The opportunity was squandered by those intoxicated with their apparent triumph.

A second set of seismic shocks then saw the world turn back towards nationalism and protectionism.

The end-of-history theory that greeted the fall of the Berlin Wall 30 years ago was always, well, ahistorical.

There was nothing ineluctable about the advance of political pluralism and market economics.

The splintering of Yugoslavia into warring nationalisms should have been sufficient warning against hubris.

And yet.

The peaceful dissolution of the Soviet Union, the glad embrace by formerly communist states of parliamentary systems, and rising prosperity in China and other emerging economies gave reasonable cause for optimism that the world was set on a new course.

The UN awoke from cold war paralysis.

The US-led expulsion of Iraqi forces from Kuwait secured the backing of a global coalition.

European integration looked very much like an exportable prototype.

The UN doctrine of “responsibility to protect” underscored collective abhorrence of ethnic cleansing and genocide.

So if the destination was not assured, the direction of travel was indisputably towards democracy.

The Washington-based think-tank Freedom House’s annual survey of rights and liberties pinpoints 2007 as a high-water mark, with a retreat ever since.

China and Russia have grown bolder in their embrace of authoritarianism.

The Arab spring has turned to winter.

Nations such as Turkey, Hungary and Poland have been sliding steadily into illiberalism.

Rising populism and anti-immigrant sentiment in rich western democracies, Freedom House’s 2018 report notes, have offered succour to leaders who “give short shrift to fundamental civil and political liberties”.

Among them, it adds, is US President Donald Trump who voices “feelings of admiration and even personal friendship for some of the world’s most loathsome strongmen and dictators”.

You can find half a dozen plausible explanations for what went wrong. Russia’s descent into economic anarchy threw open the doors to a leader promising to restore order and national pride.

China was never going to forget its “century of humiliation” and sign up for a global order led by the US.

As imagined by its friends, the Pax Americana would be that of a benign hegemon overseeing the international peace.

The events of September 11 2001 saw Washington repudiate rules in favour of unilateral military intervention.

In 1990, US President George HW Bush had laboured to gather broad backing for the war against Iraq’s Saddam Hussein.

His son George W Bush declared simply that others were “with us or against us”.

Democracy becomes tarnished when its promoters deliver it from the bay of a B-52 bomber.

For its part, the EU overlooked the role national identity had played in eastern Europe’s uprisings against rule from Moscow.

Nations that had only recently reclaimed their sovereignty were unlikely to share the postmodern enthusiasm of existing members for the pooling of national decision-making.

In the Middle East, the west’s focus on elections overlooked the need for the institutions and conventions that underpin liberal democracy.

The ballot box alone was never going to transform Libya.

One way or another, all of these things chipped away at the gloss of superiority bestowed on the west by its victory in the cold war.

None were of great consequence when measured against the event that shredded the claims and ambitions of the post-cold-war order.

The 2008 global financial crash, and the subsequent recession, delivered a powerful economic blow to the rich democracies even as it shattered the illusions invested in liberal democracy and globalisation.

There it was for all the world to see — the west had got it wrong, and wrong on a scale not seen since the Depression in the 1930s.

For all the gains that accrued from globalisation — and the rising fortunes of hundreds of millions in China, India and elsewhere attest to them — the devotion to unfettered markets enshrined in the so-called Washington consensus had been a catastrophic error.

Financial capitalism, it turned out, was inherently unstable; and once destabilised it collapsed like a house of cards.

Suddenly, the state-directed capitalism favoured by autocrats no longer seemed anachronistic.

The real damage, though, was done at home.

The rise of populists — Mr Trump in the US, the Brexiters in Britain, myriad nationalist parties elsewhere — exposed a fundamental divide.

The gains of globalisation had been reaped in the west by the few; the social contract that hitherto sustained public faith in politics and the market had been broken.

The elites had grown richer at the expense of the majority.

These populists offer scapegoats in place of remedies.

Nor should anyone harbour illusions about authoritarian alternatives.

The world is not flocking to imitate Moscow, Budapest and Ankara.

Even as they scorn democracy, tyrants and demagogues feel compelled to pay it lip service.

There is, though, one obvious lesson.

If they want to restore authority abroad, western leaders must first rebuild credibility at home.

Santiago Under Siege

How could the most prosperous city of what is, by all accounts, Latin America’s most prosperous and law-abiding country explode in protests marred by riots and looting? And what do recent events teach us about citizen dissatisfaction and the potential for violence in modern societies?

Andrés Velasco


SANTIAGO – At least 19 dead and untold numbers wounded. A half-dozen subway stations attacked with firebombs. Hundreds of supermarkets vandalized and looted. The downtown headquarters of the country’s largest power distributor in flames. A city of nearly seven million people paralyzed. After a state of emergency is declared, army units patrol the streets and enforce a curfew.

How could Santiago, Chile – the most prosperous city in what is, by all accounts, Latin America’s most prosperous and law-abiding country – come to this? And what do recent events teach us about citizen dissatisfaction and the potential for violence in modern societies?

In fact, we cannot be certain. It all happened with dizzying speed. And a few days after the violence came the peaceful protests. Last Friday, 1.2 million people marched in downtown Santiago, in the largest street protest since those that helped remove General Augusto Pinochet from office 30 years ago.

The most common explanation is that a 3% increase in metro fares caused public indignation at rising prices and high inequality to boil over. That must be true: people with sufficient income who feel they are treated fairly do not loot and riot. But as an explanation on which to base policy changes, the standard account risks being simplistic.

Take price increases. Yes, Chile has a history of inflation. And, yes, because it is more prosperous, Santiago is more expensive than most Latin American cities. Yet Chilean inflation in the 12 months to September was barely 2.1%, and the central bank has been cutting interest rates because inflation is below its target.

Or take income inequality. Yes, for an upper-middle-income country, Chile is very unequal, with a Gini coefficient (most economists’ preferred measure of income disparity) of 46.6 in 2017, on a scale where 0 denotes perfect equality and 100 absolute inequality. Yet according to the World Bank, the coefficient has fallen from an eye-popping 57.2 when Chile returned to democracy in 1990. The notion that rising income inequality is behind citizen discontent does not fit reality.
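For readers unfamiliar with the measure, the Gini coefficient can be computed from the mean absolute difference between all pairs of incomes. A minimal sketch follows; the income lists are toy numbers for illustration, not Chilean data:

```python
def gini(incomes):
    """Gini coefficient on a 0-100 scale: 0 is perfect equality."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute income differences over all ordered pairs
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean) * 100

print(gini([10, 10, 10, 10]))  # → 0.0  (everyone earns the same)
print(gini([0, 0, 0, 40]))     # → 75.0 (one person earns everything)
```

Chile’s 46.6 sits between these extremes; the World Bank series the article cites applies broadly this definition to household income data.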

To understand the causes of a social phenomenon, one always must ask: Why here? Why now? Neither inflation nor rising income inequality provides a satisfactory answer.

Others claim that Chileans are simply fed up with the intrusion of markets and profit-seeking into every corner of daily life. Again, this hypothesis has an air of plausibility. Polls show widespread dissatisfaction with private companies that provide public services ranging from water and electricity to health insurance and pension-fund administration.

Yet those same surveys also show anger at the quality of state-provided services, whether in hospitals, clinics, or foster-care facilities. Over half of parents choose to send their children to privately run voucher schools, even when it involves paying a fee, despite the availability of free state schools of comparable quality. And in 2017 a substantial plurality of Chileans voted for President Sebastián Piñera, a billionaire businessman and unabashed apologist for capitalism who ran on a platform of reigniting growth.

So, what is it, then? Why are millions of Chileans still marching in protest, ten days after the violence erupted?

For starters, Chile is not alone. In the last decade, places as diverse as Great Britain, Brazil, France, Hong Kong, and Ecuador have experienced similar episodes. Whatever the immediate local trigger, the scope, intensity, and often the violence of the ensuing protests seemed out of proportion with the initial cause. Rapid social change fuels tensions and contradictions in modern societies – even rich and successful ones – that seem to keep them barely a step or two from mayhem.

In Chile, an obvious suspect is monopoly abuses. While general price inflation in Chile is low, some prices that matter for family budgets are high and rising. Regulatory regimes designed to ensure investment in utilities, for example, have given companies excessive leeway to keep prices high. Likewise, Chile’s pharmacy chains have been found guilty of collusion and price gouging, as have toilet paper producers, chicken farmers, and long-haul bus companies.

Here is the paradox. Collusion and price fixing did not begin yesterday in Chile. But until a decade ago, sanctions were weak and the agency in charge had little authority and few resources to investigate. When the law changed, scandals began erupting every few months, raising public awareness of, and indignation with, monopolistic behavior. Today, price fixing is a criminal offense that carries jail sentences, and it seems plausible that such behavior is receding. But that very progress may have helped plant the seeds of public anger.

Turn next to the labor market. Chile’s unemployment rate hovers around 7% and wages have been rising well ahead of inflation. The bad news comes when you look at the structure of employment. Nearly one-third of the labor force is either self-employed or works in domestic service, in many cases without a formal contract and benefits.

Among those who have a formal job, most work on short-term contracts. Employment rates for women and young people are among the lowest in the OECD. Discrimination is rampant. Hundreds of thousands of women who head households do not have a job, while millions of workers who have a job today cannot be sure they will have any kind of income tomorrow.

The list of reforms that would remedy this situation – such as adaptable work schedules, modernized severance payment schemes, easier part-time work, better job training, and anti-discrimination laws with real teeth – is pretty self-evident. That is what worked in other countries in similar circumstances.

But here is the next paradox: as Chile has become more democratic, the same problems that plague advanced democracies have appeared. Politically influential insiders have blocked reforms, while labor-market outsiders are not represented. Few politicians speak for the unemployed young woman with two kids and no high-school diploma, who seldom votes anyway.

Puny pensions also contribute to people’s sense of fragility. Chile’s individual capitalization system earns kudos abroad, but the reality on the ground is more complex. Precisely because the labor market functions badly, Chileans retire with fewer than 20 years of savings, on average, in their accounts. And due to sharply rising longevity (itself a tremendous developmental success), they can expect to live 20 years or more after retirement.

Pensions could be adequate only if the rates of return on those savings were huge, but they are getting smaller by the day, in line with falling global real interest rates. Government-funded minimum pensions for people with no savings at all, plus a top-up for those with very low pensions, help alleviate the plight of 1.3 million people at the bottom of the income scale. But now the middle class is feeling the pinch – increasingly so as Chile’s baby-boom generation begins to retire under the private system.

And while income inequality has not been getting worse, other kinds of inequality may well have become more evident. Chile has joined the OECD club of rich countries, but in many ways it remains a traditional society riven with class privilege. Business leaders and cabinet members tend to come from a handful of private secondary schools in Santiago, especially when right-wing parties are in power, as they are today. The elite often seems to live in a world of its own. Last week, Cecilia Morel, the president’s wife, described the looting as “an alien invasion.”

None of this is new. But it may have become more painfully evident as the country develops. A generation ago, few working-class children attended university. Today, seven of ten students in higher education are the first in their families to attend college. Once they graduate, the frustration begins: to land the best jobs, academic performance matters less than having the “right” surname or connections.

Anger at elites is rampant in Chile, but scorn for the country’s political class is particularly deep. In 2018, 70% of Chileans believed that the country was governed for the benefit of a handful of powerful groups. Barely 17% and 14% expressed trust in parliament and in political parties, respectively.

This is relatively new. High regard for civilian politicians during the transition to democracy nearly three decades ago gave way to a growing perception of insularity, and then a wave of campaign finance scandals. Today, the absence of term limits and parliamentarians’ outsize compensation (among the highest in Latin America) are huge magnets for public anger.

Lack of trust in politicians weakens people’s hopes for the future. And Chile’s recent economic deceleration, standing as it does in sharp contrast to Piñera’s ringing promises of economic growth, has exacerbated the problem. Perhaps it was these dashed hopes that brought the many tensions and contradictions in Chile to a boil.

There is now a unique opportunity to rewrite the social contract and deal decisively with the sources of citizen anger. But the risks are many. One is that voters will conclude that Chile’s gains were all more illusory than real, and will therefore throw the baby out with the bathwater. Another is that the current climate of fear and division will bring a populist to power, as has happened in Mexico, Brazil, and now Argentina.

In Chile, polls already show gains for populists of the extreme right and left. If that trend continues, the country’s turmoil could be far from over.

Andrés Velasco, a former presidential candidate and finance minister of Chile, is Dean of the School of Public Policy at the London School of Economics and Political Science. He is the author of numerous books and papers on international economics and development, and has served on the faculty at Harvard, Columbia, and New York Universities.

Are Investors Being ‘Aggressively Passive’ in Bond ETFs?

For bond allocations, think twice before reflexively allocating to index ETFs.

By David L. Braun, Avi Sharon

With investors looking to reduce risk in uncertain late-cycle markets, it’s not surprising that flows into bond exchange-traded funds (ETFs) have surged this year, outpacing flows to equity ETFs for the first time in a decade.

The lion’s share of the $97 billion in fixed income ETF flows through September has gone to passive, index-tracking ETFs (according to Bloomberg). But are investors missing out by defaulting to passive approaches?

Many investors may assume ETFs are passive by nature or by definition (they are not). Passive bond ETFs are not the only option – and for investors looking to potentially improve performance, actively managed fixed income ETFs may offer distinct advantages.

Bonds are different when it comes to active management

A key rationale many investors cite for preferring index-tracking equity approaches is that active management hasn’t paid off historically. But this has not held true for fixed income allocations.

Over the past 10 years, the median active equity manager has underperformed passive peers by approximately 86 basis points (bps) and lagged stated benchmarks by another 20 bps (see Figure 1).

The opposite has been true for fixed income: The median active bond manager has beaten its stated benchmark by an average of 81 bps per year and outperformed passive peers by 91 bps over the past decade (as of 30 June 2019).


Bond market inefficiencies provide alpha opportunities

Why is the story so different for bonds than for equities? We believe it boils down to inefficiencies across the fixed income markets, which can provide rich ground for active managers to seek beyond-benchmark returns – often with more attractive risk profiles.

Equities are traded in milliseconds on public exchanges, while bonds are largely still traded over the counter, slowly and in large blocks. And whereas equities are highly standardized, perpetual securities, bonds are far more heterogeneous in their terms and have finite maturities.

New issues of bonds (analogous to initial public offerings for stocks) are frequent, constituting about 20% of the U.S. corporate bond market each year, versus about 1% for U.S. equities.

This can give a potential advantage to asset managers with strong credit teams to analyze these new issues.

Moreover, the most prominent bond and equity indices – the Bloomberg Barclays U.S. Aggregate Bond Index and the S&P 500 – both give greater weight to entities with some of the least attractive characteristics: issuers with the most debt outstanding, and stocks with the highest market caps. That means passive bond investors are essentially lending more to those with more debt.

Reconsider being ‘aggressively passive’ in bond ETFs

The ETF vehicle offers certain advantages, including efficient trading on an open exchange, continuous pricing throughout the day, and a simple way to gain diversified allocations. But passive ETFs generally suffer from below-index performance given the impact of trading costs and fees.

And given that overall return potential for bonds may be modest relative to equities, the potential return differential between a passive and active approach could have a significant impact. Especially in a low yield world, the excess return potential from active management may be particularly meaningful as a means to boost overall returns (and it would represent a larger share of that overall return). Of course, as with all investments, it is possible to lose money.
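As a rough illustration of why such a differential compounds meaningfully, here is a sketch using the 81 bps outperformance figure cited above on top of an assumed 3% index return; both portfolio figures are hypothetical:

```python
def grow(start, annual_return, years):
    """Compound a lump sum at a fixed annual return."""
    return start * (1 + annual_return) ** years

passive = grow(100_000, 0.0300, 10)          # assumed 3% index return
active = grow(100_000, 0.0300 + 0.0081, 10)  # plus 81 bps of active outperformance
print(round(active - passive))               # roughly $11,000 extra over a decade
```

In a low-yield world that gap is a meaningful slice of the total gain, which is the article’s point: the lower the baseline return, the larger the share of it that active management’s excess return represents.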

For investors wanting to make the most of their fixed income allocations as they seek to manage risk, we believe the compelling potential return advantage of actively managed ETFs should not be overlooked.

Obama, Trump and the Wars of Credibility

By: George Friedman

The United States is in the process of shifting a core dimension of its strategic doctrine. In the past, the U.S. resorted to the use of force to address international threats. Barack Obama was the first president to argue that the use of force, particularly in the Middle East, was costly and ineffectual and that other means had to be used to exercise foreign policy.

He ran his first campaign for president on this basis. He was only partially able to shift the direction of U.S. strategy. Donald Trump has extended Obama’s policy and applied it more consistently by refusing to strike at Iran over the Persian Gulf crisis and the Saudi oil facilities attack and, most recently, withdrawing from the Syria-Turkey border.

The shift in strategy was something I predicted in my 2011 book, “The Next Decade.” The basic argument was that the United States is now a global power with no global challenger, only regional ones of various sizes. Having a strategic doctrine of responding to challenges with military force would leave the decision on when to go to war up to the adversary.

John F. Kennedy once said, “Let every nation know, whether it wishes us well or ill, that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, in order to assure the survival and the success of liberty.”

This doctrine made sense in dealing with the Soviet Union, but in a less orderly world, it reads like a blank check on U.S. military power and an invitation to other nations to draw the U.S. into combat at their will. I reasoned that a more nuanced foreign policy would emerge in the 2010s, one that would compel the U.S. to become more disciplined and selective in committing U.S. forces to combat.

In the 74 years since World War II ended, the U.S. has spent about 28 years, roughly 38 percent of the time, engaged in large-scale, division-level combat, leaving over 90,000 U.S. military personnel dead. This includes the Korean War, the Vietnam War, the Afghan War and the War in Iraq, and there have been other deployments in smaller conflicts.

Nearly three decades over a 74-year period is a staggering amount of time for any nation to be at war, particularly the leading global power.

With the exception of Operation Desert Storm, the United States has not won any of these wars. Korea ended in an armistice, with both sides at roughly the same point as when they began. Vietnam ended with the enemy flag flying over Saigon.

Afghanistan, Iraq and related wars did not end in outright defeat, but they have not ended in victory.

Given that the United States crushed both Japan and, with the help of the Allies, Germany in World War II and emerged with overwhelming military power, the increased tempo of U.S. military operations since 1945, combined with consistently unsatisfactory outcomes, must be analyzed to understand the emergence of the Obama-Trump doctrine.

One explanation that must be dispensed with is that the American public does not have the patience to allow a war to be fought to a satisfactory conclusion. There was no anti-war movement of any significance during Korea.

There was an anti-war movement over Vietnam, but the conflict continued for seven years, and the public voted overwhelmingly for pro-war Richard Nixon and against anti-war George McGovern in 1972. There has been opposition to the Iraq War, but it was only a peripheral reason for the U.S. drawdown there, after nine years of war.

World War II was fought on a different scale. It was a total war, one that could not be lost. Defeat would have posed fundamental dangers to the United States, so all necessary resources were devoted to the war effort. It was the central focus of society as a whole. Bringing massive resources to bear, including atomic bombs at its conclusion, the United States emerged from the war victorious.

None of the other conflicts were total wars that involved existential threats to the United States.

During the Cold War, the interventions in Korea and Vietnam were the result of indirect U.S. interests. From the Truman administration’s perspective, Korea was outside core U.S. interests. The U.S. had no treaty with or strategic interest in South Vietnam. In both cases, the benefits of engaging in conflict were indirect.

The U.S. strategy in the Cold War was containment. The U.S. did not intend to invade the Soviet Union, or later China, but it opposed its expansion. The U.S. got involved in both Korea and Vietnam to defend the credibility of the doctrine of containment, fearing that a lack of U.S. engagement in these conflicts would be interpreted by the Soviets and Chinese as a lack of commitment to the doctrine.

Even more important, the U.S. was afraid that staying out of these wars would lead its allies to draw the conclusion that American guarantees were hollow and that the alliance structure needed for the containment strategy would collapse.

The U.S. engaged in the two wars, therefore, not out of strategic necessity but to demonstrate American reliability. They therefore could not be fought as total wars. The amount of effort required to show a willingness to engage was much less than the amount of effort needed to decisively crush enemy forces.

It was necessary to demonstrate U.S. will for global reasons, but imprudent to devote the force needed to win the war. It was also impossible to withdraw from the war, as abandoning a conflict would be the same as refusing to engage. The wars were being fought for the sake of demonstrating that the U.S. was willing to fight wars, and no coherent strategy or even clear definition of what victory meant or how to achieve it emerged. In a strange way, this made sense.

Maintaining the confidence of West Germany, Turkey, Japan and all other U.S. allies was of enormous strategic importance, and Korea and South Vietnam were needed to hold the alliance together. Over 90,000 died in wars that were gestures, yet how many more would have died if the gestures were not made? That was the logic, but the truth is that no one anticipated the length of engagement and amount of bloodshed in either war. Wars fought to reassure allies have no strategic basis on which to calculate such things.

What we will call the anti-jihadist wars were framed differently but had similar results. After 9/11, the U.S. goal was to destroy Islamic jihadists and governments that gave them haven and to impose governments favorably inclined to the United States. The problem was that terrorists are mobile. Al-Qaida was a global, sparse and capable force. It could exist anywhere, including hostile territory, and its members were capable and difficult to locate, making them excellent covert operators, as seen on 9/11.

To dismantle the organization, it was assumed that the U.S. had to deny al-Qaida sanctuary for its operations and have the cooperation of countries in the region, ensuring that they would resist al-Qaida and provide intelligence. The invasion of Afghanistan was designed to displace the Taliban and force al-Qaida to disperse.

The Taliban withdrew, dispersed and reformed. Al-Qaida was built to be mobile. This placed a premium on getting others to support the American effort, a difficult task inasmuch as the U.S. withdrawals from Lebanon and Somalia had made potential partners feel the U.S. wouldn’t back them up. In Iraq, there were many strands behind the U.S. invasion, but credibility was an important one. In the end, the problem was that al-Qaida was not destroyed; it merely had to relocate. In addition, occupying a country that is hostile to foreign interference is impossible. Even the Nazis couldn’t defeat the Russian and Yugoslav partisans, and they were far less gentle than the U.S. was.

Demonstrating credibility was part of what motivated the jihadist wars, just as it motivated U.S. involvement in the wars in Korea and Vietnam. The problem with wars designed to demonstrate U.S. will, however, is that they are almost by definition without end. But if the U.S. is going to lead a coalition, credibility is a critical asset, even if the likelihood of success in the war is uncertain. There is therefore an inherent dilemma.

In World War II, the war was aligned with U.S. strategy. In the wars that have been fought since then, the conflicts have not been aligned with U.S. strategy. As a result, stalemate or defeat did not undermine basic U.S. interests. The conflicts created vacuums in regions where the U.S. had interests, but all forces were committed to what I will christen as wars of credibility. These were wars that didn’t have to be won, but only fought.

Given the sweeping breadth of U.S. power, and the lack of challengers (China and Russia included) that might absorb the U.S. as it was absorbed in World War II, coalition building and management becomes an end in itself. And that leaves the U.S. constantly off balance, as in the long run it undermines the coalitions anyway.

It was inevitable, therefore, that the U.S. would significantly curtail its military involvement and devote resources to upgrading the force, rather than constant deployment.

The next business revolution

American business schools are reinventing the MBA

About time 

ON A VISIT to New York in October Marc Benioff, boss of Salesforce, compared Facebook to cigarettes and backed a corporate tax hike to deal with homelessness in San Francisco. If badmouthing a fellow technology giant and cheering the taxman were not heterodox enough for a billionaire entrepreneur, Mr Benioff laid into American management education. It “programmes” students to favour profit over the public good. This, he noted, is out of step with “the new capitalism”.

Many deans concur. “We need our students to be thoughtful about the role of business in society, particularly at a moment in time when capitalism is coming under attack,” says William Boulding of Duke’s Fuqua School of Business. Nitin Nohria of Harvard Business School (HBS) reports how younger alumni and incoming classes want “the place of work to reflect purpose and values”. Jonathan Levin of Stanford’s Graduate School of Business (GSB) talks of business schools’ responsibility to recognise the societal consequences of corporate actions. “Corporations, their leaders and owners need to act to restore trust,” he intones.

America’s business schools still dominate our annual ranking of the world’s top MBAs (see table). But the industry is being shaken up. According to the Graduate Management Admission Council (GMAC), an industry association, American MBA programmes received 7% fewer applicants this year than last. Nearly three-quarters of full-time, two-year MBA programmes reported declines from coast to coast. Not even the most illustrious ones were spared: HBS (located in Boston) and Stanford’s GSB (in Palo Alto) both saw applications dip by 6% or so.

Schools face growing competition from overseas and online programmes—and, as Mr Benioff’s critique implies, questions over hidebound curriculums. “We’re being disrupted left, right and centre,” confesses Susan Fournier, dean of Boston University’s Questrom School of Business.

When management education boomed in the 1960s, American schools taught mostly American students. As the world economy globalised in the 1980s and 1990s, so too did American curriculums and student bodies. Sangeet Chowfla, who heads GMAC, now discerns a “third wave”: improved schools outside America are letting foreign students study closer to home (and future employers).

Many offer cheaper one-year MBAs, popular in Europe but uncommon across the pond.

Whereas three in four two-year MBA programmes in America saw declines in overseas applicants in the latest application cycle, numbers applying to Asian business schools rose by 9% from 2017 to 2018. A recent uptick in America’s anti-immigrant sentiment is accelerating the trend.

Americans, too, are cooling on MBAs. More than half of American schools report fewer domestic applicants. Soaring tuition costs, which have far outpaced inflation, put them off as much as they do foreigners. A top-notch MBA will set you back more than $200,000 (including living costs). Even with financial aid, many students are saddled with $100,000 debts at graduation.

The opportunity cost of forgoing two years’ worth of paycheques is higher when the economy is booming and labour markets are tight. Weak demand has caused the number of full-time MBA programmes in America to fall by nearly a tenth between 2014 and 2018, according to the Association to Advance Collegiate Schools of Business, another industry body.

Geoffrey Garrett, dean of the Wharton School, at the University of Pennsylvania, believes that a flight to quality is benefiting top institutions like his—and their graduates. Add non-wage compensation and alumni often recoup their investments in a few years. Not counting signing bonuses, the average base salary for graduates of the five American schools with the highest earning potential was $139,000.

Consultancies and investment banks, historically the keenest MBA recruiters, claim their appetite for holders of elite degrees has not diminished. A prestigious MBA “puts a floor on your career”, explains Kostya Simonenko, a 28-year-old consultant on leave from Oliver Wyman (which is paying for his course at Columbia Business School). Silicon Valley, which used to dismiss MBAs as overpaid know-nothings, has become less hostile.

As startups grow into large corporations, they need managers to help run things, not just software engineers to run code. A survey of recruiters by GMAC this year found that 80% of technology companies planned to hire MBAs, on a par with consultancies (82%) and financial firms (77%).

Even the finest schools, though, are not sheltered from the forces buffeting business education. Global competition and new technology platforms enable a lower cost structure for the delivery of high-quality courses. This forces “a reckoning of the MBA value proposition”, says Ms Fournier.

As part of that reckoning, Questrom has teamed up with edX, a big online-education firm, to offer a full MBA degree online for just $24,000, less than a third of the cost of its on-campus equivalent. Better to cannibalise yourself than let others do it, as Ms Fournier puts it. MIT's Sloan School of Management provides similarly affordable bundles of online courses, dubbed MicroMasters, in areas like supply-chain management and finance. These grant certificates, but the credits will be honoured if a student one day decides to pursue a full degree. 2U, an online-education platform, is introducing deferred-tuition schemes for some hybrid MBA degrees. It will share the upfront costs with its business-school partners; students will pay only when they get a job.

It is not just how MBA courses are taught that is changing. So, too, is what they teach. Many budding woke capitalists agree with Mr Benioff—and demand to be taught business beyond the primacy of shareholder value. At Stanford Luisa Gerstner, a millennial MBA student from Germany, notes that sustainable capitalism plays a more central role in European schools. Julia Osterman, her American classmate, laments how, despite some social, environmental and ethical topics in its curriculum, core classes are still “too Finance 101”.

Some of their professors are not so sure. One greybeard at HBS estimates that a third of its faculty (and many older alumni) view the embrace of cuddly “stakeholder capitalism” as an unrigorous sop to political correctness. It certainly introduces lots of grey areas, Mr Boulding concedes. But, he says, schools can at least provide students with “frameworks for making choices”.

A new course at Duke is entitled "Capitalism and Common Purpose in a World of Differences". HBS has made "Leadership and Corporate Accountability" (which delves into "the responsibilities of business to the broader system in which it is embedded") a required first-year course, with case studies weighing up things like the morality of looking beyond financial metrics at Japan's Government Pension Investment Fund.

Recoding academies

Curriculums are being transformed in less lofty ways, too. Employers, who partly or wholly bankroll half of all executive education, which earns elite schools between $100m and $150m a year, want it to impart technical skills. In response, deans such as Costis Maglaras, the newish head of Columbia Business School (and an engineer by training), are bolting courses on data, analytics and programming onto the timetable.

As their popularity rises, they may displace stodgier subjects. Columbia used to offer several courses on debt markets but now offers perhaps one each academic year. Meanwhile, students have flocked to coding classes. The idea is not to turn business types into boffins but to prepare them to work with and manage technical staff, says Mr Maglaras. A recruiter for a big consultancy affirms that tech-savvy MBAs are “very attractive”.

Richard Lyons, former dean of the Haas School of Business at the University of California, Berkeley, sees the future in providing lifelong professional education as a service: "Give alumni know-how on demand, searchable online." Scott DeRue, dean of the University of Michigan's Ross School of Business, is giving alumni tuition-free access to executive education.

“The new stuff will come from insurgents, not the big MBA schools,” thinks John Kao, a management guru who formerly taught at HBS. He wants training benchmarks and standardised transcripts to make skills portable and universally recognised.

At HBS, home to perhaps the most hallowed MBA, Mr Nohria accepts that the market for its traditional offering is shrinking. In a sign of the times, his school has frozen tuition fees. He sees a dramatic expansion for “unbundlers” of online education, who “separate knowing, doing and being”.

In time, he says, they will converge with "bundlers" like HBS. Far from collapsing, he reckons, management education will be the richer for it.