An AI Bubble Won’t Trigger a Financial Crisis
The AI boom may be speculative, excessive, and reminiscent of earlier episodes like the dot-com crash. But given the nature of the financing and the investments being made, the risks to the financial system are minimal, and policymakers should turn their attention to the impact of AI on the real economy.
Lenny Mendonca and Martin Neil Baily
HALF MOON BAY, CALIFORNIA – A familiar anxiety has returned to financial markets.
Amazon is devoting $100 billion to data centers.
Meta has committed more than $600 billion to building them over three years.
Microsoft, Google, and Apple plan to spend hundreds of billions more.
With AI investments running into the trillions, are we witnessing a bubble, and what will happen if it bursts?
We have been here before.
One of us (Martin) served as chairman of the White House Council of Economic Advisers during the late 1990s technology boom, and the other watched California navigate the dot-com collapse and the 2008 financial crisis from various government and consulting roles.
Our experience tells us that even if an AI correction comes, it will not trigger the kind of financial crisis that devastated the economy in 2008-09.
The reason is that the structure of AI investment is fundamentally different from what we saw in these previous episodes.
The dot-com collapse is remembered for dramatic losses – the Nasdaq fell 77% from its March 2000 peak – and spectacular flameouts like Pets.com.
But the economic story was more nuanced.
Yes, there was overinvestment in internet infrastructure, and many big bets proved premature.
But that “overinvestment” left behind extraordinarily valuable infrastructure.
The fiber-optic cables laid during the boom enabled the broadband economy.
Server capacity that seemed excessive in 2001 became essential by 2005.
Productivity growth remained surprisingly strong as the bubble burst, proving that the underlying technology was transformative even if the initial business models were not.
Crucially, the dot-com bust never threatened the broader financial system.
The speculation was funded primarily through equity markets, not debt.
When valuations collapsed, stockholders lost money, but banks remained solvent.
There was no flood of defaults, no credit freeze, and no need for massive government bailouts.
By contrast, the 2008 crisis was about how assets were financed.
Subprime mortgages were packaged into securities, sold to pension funds and international investors, and used as collateral for more borrowing.
The entire global financial system became a house of cards built on rising home prices.
When those prices fell, the complex web of obligations unraveled: banks discovered they held worthless securities, credit markets froze, and what started as falling home prices cascaded into a global financial crisis requiring unprecedented government intervention.
The problem was not just overvaluation.
It was a combination of leverage, complexity, opacity, and the spread of housing risk throughout the entire financial system.
The AI boom is different.
The trillions being spent on AI infrastructure are coming primarily from the balance sheets of the world’s most profitable and cash-rich companies.
Apple, Microsoft, Google, Amazon, and Meta are not borrowing to build their AI capacity.
When Apple commits $500 billion over four years to AI infrastructure, it is not issuing debt that must be serviced regardless of revenue.
These are equity-financed investments by one of the most valuable companies in history.
If the returns are disappointing, Apple’s stock price might suffer, but the company will neither default nor trigger a cascade of failures through the financial system.
Similarly, the specialized nature of AI infrastructure limits contagion risk.
Data centers are not being securitized and sold to pension funds.
AI chips are not being used as collateral for derivatives.
Of course, there could be a sharp correction in tech stock valuations, and companies could scale back AI investments if returns disappoint.
But would any of this trigger a financial crisis comparable to 2008?
The structural differences suggest not.
The point is not that AI poses no economic risks.
It is that the risks are not primarily to financial stability.
AI outcomes that are worth worrying about include: failure to deliver significant productivity gains, leading to slower growth and disappointed expectations; rapid displacement of workers, which would require robust policies to support education and training; concentration of economic power, raising antitrust questions; and strains on electrical grids and construction capacity, which could curtail growth and increase households’ electricity bills.
Even if AI valuations are inflated and current revenue projections prove wildly optimistic, the infrastructure being built has lasting value.
Those data centers aren’t going away.
The fiber connections being laid are permanent additions to our digital infrastructure, and the trained AI engineers represent human capital that will generate value for decades.
If someone told you in 1999 that America would “overspend” hundreds of billions of dollars on internet infrastructure, you might have worried about the bubble.
But that overinvestment enabled the entire digital economy – YouTube, Netflix, cloud computing, and remote work.
The societal return was large, even though many companies failed and investors lost money.
Understanding that AI speculation is unlikely to trigger a financial crisis should help policymakers get their priorities right.
Rather than over-regulating to prevent a financial crisis that is not coming, they should focus on risks to the real economy: worker displacement, algorithmic bias, data privacy, and competitive dynamics.
They should promote investments in complementary infrastructure such as electrical capacity, cooling systems, and trained workers.
And they should prepare for labor-market adjustments by bolstering unemployment insurance, retraining programs, and portable benefits.
Moreover, they should watch for leverage creep, because the one risk that could change our analysis is that AI assets become heavily leveraged or securitized.
Financial regulators should monitor banks and pension funds accordingly.
Transformative innovations typically take decades to diffuse fully through the economy.
It took 40 years for electricity to revolutionize manufacturing; 20 years for the internet to reshape retail.
If AI is similarly transformative, today’s speculative investments could prove far-sighted.
The 2008 financial crisis taught policymakers to be vigilant about financial-stability risks.
But there are important differences between asset classes and financing structures.
The AI boom may be speculative.
It may be excessive.
It may even be a bubble.
But it is not a threat to the financial system.
Lenny Mendonca, Senior Partner Emeritus at McKinsey & Company, is a former chief economic and business adviser to Governor Gavin Newsom of California and chair of the California High-Speed Rail Authority.
Martin Neil Baily is Senior Fellow Emeritus in Economic Studies at the Brookings Institution and was Chairman of President Bill Clinton’s Council of Economic Advisers from 1999 to 2001.