Wednesday, October 25, 2023


Will A.I. Transform the Economy, and If So, How?

By Paul Krugman

Credit: Illustration by Sam Whitney/The New York Times; images by Chris Saulit and niuniu/Getty Images


So, will artificial intelligence transform the economy? 

Today I thought I’d take a break from my usual preoccupation with ongoing crises to engage in a bit of bigthink about how technology may change the economic landscape in the years ahead, including a topic that seems important but hasn’t drawn much attention: how A.I. might change the U.S. budget outlook.

Starting last fall there was a huge surge in buzz, both positive and negative, about A.I. 

That buzz seems to have died down to some extent, with usage of ChatGPT, the most famous implementation of the technology, declining in recent months. 

And many more observers have realized that what we’ve been calling A.I. — or what more careful people call “generative A.I.” — isn’t really intelligence. 

What it is instead is extrapolation from pattern recognition. 

Or as some people I talk to put it, it’s basically souped-up autocorrect.

But that doesn’t mean that it’s not important. 

After all, a lot of what human workers, even workers considered highly skilled, do for a living is also arguably souped-up autocorrect. 

How many workers regularly engage in creative thinking? 

Even among creative workers, how much time is spent being creative as opposed to engaging in pattern recognition?

I don’t say this to disrespect knowledge workers, but rather to suggest that what we’re calling A.I. could be a big deal for the economy even if it doesn’t lead to the creation of HAL 9000 or SkyNet.

But how big? And what kind of a deal?

Obviously, nobody really knows. 

Some people are trying to figure out the impact from the bottom up, looking at various kinds of work and guesstimating how much of that work can be replaced or augmented by A.I. 

The most widely circulated numbers come from Goldman Sachs, whose base case has A.I. increasing the growth rate of productivity — output per person-hour — by almost 1.5 percentage points a year over a decade, for a cumulative gain of about 15 percent:
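That arithmetic is easy to check. The 1.4 percent figure below is my own stand-in for "almost 1.5 percentage points," not a number taken from the Goldman report:

```python
# Back-of-the-envelope: how an extra annual productivity boost compounds over a decade.
# The 1.4-point rate is an illustrative stand-in for "almost 1.5 percentage points."
annual_boost = 0.014  # extra productivity growth per year
years = 10

total_gain = (1 + annual_boost) ** years - 1
print(f"Cumulative productivity gain: {total_gain:.1%}")  # about 15%
```

Compounding at a bit under 1.5 points a year does indeed land close to 15 percent after ten years.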


Is this plausible? 

Actually, yes. 

One parallel, if you’ve studied the historical relationship between technology and productivity, is the productivity boom from 1995 to 2005, which followed decades of weak productivity growth.

As a recent paper from the Brookings Institution points out, this boom was mostly driven by “total factor productivity” — an increase in output per unit of input, including capital:

And economists often identify total factor productivity growth with technological progress. 

That’s sometimes a bit dubious, since T.F.P. is really a “measure of our ignorance,” simply the part of economic growth we can’t explain otherwise. 

But from 1995 to 2005 it seems fairly clear that the boom was driven by information technology.

Here’s another view of that boom, in which I show the natural log of productivity — so that a straight line corresponds to steady growth — and plot a continuation of the growth rate from 1973 to 1995 (the red line), so that you can see how actual growth compared:


By the time the productivity surge tapered off, productivity was about 12 percent higher than the previous trend would have led you to expect it would be. 
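Why plot the natural log? If productivity grows at a steady rate g, its log rises along a straight line with slope log(1 + g), so the gap between actual productivity and an extrapolated trend can be read straight off the chart. A quick sketch of that logic, using hypothetical growth rates of my own choosing rather than the measured ones:

```python
import math

# Under steady growth g, log(productivity) is linear in time, so a constant-growth
# trend appears as a straight line on a log scale. Rates below are hypothetical.
trend_g = 0.015  # assumed pre-1995 trend growth: 1.5% a year
boom_g = 0.027   # assumed 1995-2005 boom growth: 2.7% a year
years = 10

# Gap (in log points) between actual productivity and the extrapolated trend,
# converted back to a percentage difference in levels:
gap_log = years * (math.log1p(boom_g) - math.log1p(trend_g))
print(f"Productivity above trend: {math.expm1(gap_log):.0%}")  # about 12%
```

A roughly 1.2-point-a-year growth edge sustained for a decade is what a gap of about 12 percent over trend amounts to.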

Since A.I. is arguably an even more profound innovation than the technologies that drove the 1995-2005 boom, 15 percent isn’t at all unreasonable.

But will higher productivity make us richer or simply reduce the number of jobs? 

Fears of technological unemployment — a term invented by none other than John Maynard Keynes in 1930 — go back at least to the early 19th century. 

They have even inspired one pretty good novel, Kurt Vonnegut’s “Player Piano.” 

While technology has often eliminated some jobs, however, historically this has always been, as Keynes wrote, “a temporary phase of maladjustment,” with other forms of employment rising to replace the jobs lost. 

For example, the Microsoft Excel shock — the rise of spreadsheet programs — seems to have eliminated many bookkeeping jobs, but these were replaced by increased employment in areas like financial analysis.

By the way, in that same essay, Keynes predicted a future in which people would work much less than they did in his time, and in which finding rewarding ways to fill our leisure hours would become a major social concern. 

The fact that this didn’t happen over the past 90 years is a reason to be skeptical about people making similar predictions now, such as Jamie Dimon, who predicted the other day that A.I. would lead to a three-and-a-half-day workweek.

However, while there’s no reason to believe that what we’re calling A.I. will lead to mass unemployment, it may well hurt the people who are displaced from their jobs and either have trouble finding new employment or are obliged to accept lower wages. 

Who are the potential losers?

The likely answer is that big impacts will fall on relatively high-end administrative jobs, many of them currently highly paid, while blue-collar jobs will be largely unscathed. 

Goldman Sachs again:

Now, while this seems right for generative A.I., there are other applications of big data that may affect blue-collar work. 

For example, with all the buzz around ChatGPT there has been relatively little attention paid to the fact that after years of failed hype, self-driving cars are actually beginning to go into service. 

Still, at this point it seems more likely than not that A.I. will, unlike technological progress over the past 40 years, be a force for lower rather than higher income inequality.

Finally, it seems worth considering how generative A.I. might bear on one issue that has regained prominence: worries about government debt.

Until recently, many economists, myself included, argued that public debt was less of a concern than many people imagine, because the interest rate on debt (r) was below the economy's long-term growth rate (g), a condition often written "r < g."

This meant that the common idea that debt would snowball, with interest payments leading to higher debt and hence to even higher interest payments, was wrong: The ratio of debt to G.D.P., the number that matters, would tend to melt rather than snowball.
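The melt-versus-snowball logic is simple enough to simulate. With a balanced primary budget, each year's debt-to-GDP ratio is the previous year's ratio times (1 + r)/(1 + g); the rates below are illustrative round numbers, not forecasts:

```python
# Debt-to-GDP dynamics with a balanced primary budget: the ratio evolves as
# d_next = d * (1 + r) / (1 + g). Illustrative rates, not forecasts.
def debt_ratio_path(d0: float, r: float, g: float, years: int) -> float:
    """Return the debt-to-GDP ratio after `years`, starting from d0."""
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g)
    return d

start = 1.0  # debt at 100% of GDP
print(f"r < g (r=1%, g=2%), 30 years: {debt_ratio_path(start, 0.01, 0.02, 30):.0%}")
print(f"r > g (r=3%, g=2%), 30 years: {debt_ratio_path(start, 0.03, 0.02, 30):.0%}")
```

With r a point below g, the ratio melts steadily even though no debt is ever repaid; flip the inequality and the ratio snowballs without bound.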

But rapidly rising interest rates have made debt considerably more worrisome. 

Conventional estimates of the economy’s long-run sustainable growth rate, like those of the Federal Reserve, tend to put it around 1.8 percent. 

And real interest rates on federal debt are now above that number:


Discussions about debt sustainability are, however, oddly disconnected from the discourse about generative A.I. 

In fact, I’m pretty sure there are people warning both about a debt crisis and about mass unemployment from A.I., although I haven’t made the effort to track them down. 

But if optimistic estimates of the boost from the technology are at all right, growth will be much higher than 1.8 percent over the next decade, and debt won’t be a big concern after all — especially because faster growth will boost revenue and reduce the budget deficit.

All of this is, of course, highly speculative. Nobody really knows how big an impact A.I. will have. 

But again, it doesn’t have to be “true” artificial intelligence to be a big deal for the economy, and the best guess is that it will probably matter a lot.


Paul Krugman has been an Opinion columnist since 2000 and is also a distinguished professor at the City University of New York Graduate Center. He won the 2008 Nobel Memorial Prize in Economic Sciences for his work on international trade and economic geography.  
