
NEW HAVEN – The US stock market today is characterized by a seemingly unusual combination of very high valuations, following a period of strong earnings growth, and very low volatility. What do these ostensibly conflicting messages imply about the likelihood that the United States is headed toward a bear market?

To answer that question, we must look to past bear markets. And that requires us to define precisely what a bear market entails. The media nowadays delineate a “classic” or “traditional” bear market as a 20% decline in stock prices.
That definition does not appear in any media outlet before the 1990s, and no one seems to know who established it. It may be rooted in the experience of October 19, 1987, when the stock market dropped by just over 20% in a single day. Attempts to tie the term to the “Black Monday” story may have produced the 20% definition, which journalists and editors probably simply copied from one another.
 
In any case, that 20% figure is now widely accepted as an indicator of a bear market. Where there seems to be less overt consensus is on the time period for that decline. Indeed, those past newspaper reports often didn’t mention any time period at all in their definitions of a bear market. Journalists writing on the subject apparently did not think it necessary to be precise.
 
In assessing America’s past experience with bear markets, I used that traditional 20% figure, and added my own timing rubric. The peak before a bear market, per my definition, was the most recent 12-month high, and there should be some month in the subsequent year that is 20% lower. Whenever there was a contiguous sequence of peak months, I took the last one.
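That rubric is mechanical enough to state in code. The following is a minimal sketch in Python (the function and the sample data are illustrative, not the actual S&P Composite series):

```python
def bear_market_peaks(prices):
    """Find peak months under the rubric above.

    `prices` is a list of monthly index levels. A peak is a 12-month
    high that is followed, within the next 12 months, by at least one
    month 20% or more below it. Of any contiguous run of qualifying
    months, only the last is kept.
    """
    peaks = []
    for i, p in enumerate(prices):
        if p < max(prices[max(0, i - 11):i + 1]):
            continue  # not a 12-month high
        if any(q <= 0.8 * p for q in prices[i + 1:i + 13]):
            peaks.append(i)  # at least a 20% drop within the next year
    # keep only the last month of each contiguous run of peak months
    return [i for i in peaks if i + 1 not in peaks]
```

For example, a year of steadily rising prices followed by a fall of more than 20% yields a single peak, dated to the last of the high months.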
 
Referring to my compilation of monthly S&P Composite and related data, I found that there have been just 13 bear markets in the US since 1871. The peak months before the bear markets occurred in 1892, 1895, 1902, 1906, 1916, 1929, 1934, 1937, 1946, 1961, 1987, 2000, and 2007. A couple of notorious stock-market collapses – in 1968-70 and in 1973-74 – are not on the list, because they were more protracted and gradual.
 
 
 
Once the past bear markets were identified, it was time to assess stock valuations prior to them, using an indicator that my Harvard colleague John Y. Campbell and I developed in 1988 to predict long-term stock-market returns. The cyclically adjusted price-to-earnings (CAPE) ratio is found by dividing the real (inflation-adjusted) stock index by the average of ten years of earnings, with higher-than-average ratios implying lower-than-average returns. Our research showed that the CAPE ratio is somewhat effective at predicting real returns over a ten-year period, though we did not report how well that ratio predicts bear markets.
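Computed directly from that definition, the ratio is simple arithmetic. A minimal sketch (illustrative Python with hypothetical inputs, not the actual Shiller data set):

```python
def cape(real_price, annual_real_earnings):
    """Cyclically adjusted price-to-earnings (CAPE) ratio.

    `real_price` is the inflation-adjusted index level;
    `annual_real_earnings` is a list of inflation-adjusted annual
    earnings, most recent last (at least ten years are required).
    """
    last_decade = annual_real_earnings[-10:]
    return real_price / (sum(last_decade) / len(last_decade))
```

With a real index level of 300 and flat real earnings of 10 in each of the past ten years, the function returns 30, roughly the level cited below for this month.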
This month, the CAPE ratio in the US is just above 30. That is a high ratio. Indeed, between 1881 and today, the average CAPE ratio has stood at just 16.8. Moreover, it has exceeded 30 only twice during that period: in 1929 and in 1997-2002.
 
But that does not mean that high CAPE ratios are unrelated to bear markets. On the contrary, in the peak months before past bear markets, the average CAPE ratio was higher than the 16.8 long-run average, at 22.1, suggesting that the CAPE does tend to rise before a bear market.
 
Moreover, the three times when there was a bear market with a below-average CAPE ratio were after 1916 (during World War I), 1934 (during the Great Depression), and 1946 (during the post-World War II recession). A high CAPE ratio thus implies potential vulnerability to a bear market, though it is by no means a perfect predictor.
 
To be sure, there does seem to be some promising news. According to my data, real S&P Composite stock earnings have grown 1.8% per year, on average, since 1881. From the second quarter of 2016 to the second quarter of 2017, by contrast, real earnings growth was 13.2%, well above the historical annual rate.
               
But this high growth does not reduce the likelihood of a bear market. In fact, peak months before past bear markets also tended to show high real earnings growth: 13.3% per year, on average, for all 13 episodes. Moreover, at the market peak just before the biggest ever stock-market drop, in 1929-32, 12-month real earnings growth stood at 18.3%.
 
Another piece of ostensibly good news is that average stock-price volatility – measured by finding the standard deviation of monthly percentage changes in real stock prices for the preceding year – is an extremely low 1.2%. Between 1872 and 2017, volatility was nearly three times as high, at 3.5%.
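The measure just described can be sketched as follows (illustrative Python; the choice of the population rather than the sample standard deviation is my assumption, not stated in the article):

```python
from statistics import pstdev  # population standard deviation

def trailing_volatility(real_prices):
    """Standard deviation of monthly percentage changes over a year.

    `real_prices` holds 13 consecutive monthly real index levels,
    giving twelve month-over-month percentage changes.
    """
    changes = [100.0 * (b / a - 1.0)
               for a, b in zip(real_prices, real_prices[1:])]
    return pstdev(changes)
```

A flat or steadily growing price series produces a volatility of zero; it is the dispersion of the monthly changes, not their level, that the statistic captures.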
 
Yet, again, this does not mean that a bear market isn’t approaching. In fact, stock-price volatility was lower than average in the year leading up to the peak month preceding the 13 previous US bear markets, though today’s level is lower than the 3.1% average for those periods. At the peak month for the stock market before the 1929 crash, volatility was only 2.8%.
 
In short, the US stock market today looks a lot like it did at the peaks before most of the country’s 13 previous bear markets. This is not to say that a bear market is guaranteed: such episodes are difficult to anticipate, and the next one may still be a long way off. And even if a bear market does arrive, for anyone who does not buy at the market’s peak and sell at the trough, losses tend to be less than 20%. 
 
But my analysis should serve as a warning against complacency. Investors who allow faulty impressions of history to lead them to assume too much stock-market risk today may be inviting considerable losses.


Buttonwood

Analysts struggle to make accurate long-term market forecasts

Historically high valuations for equities complicate the task  
 
That is pretty clear with government bonds. Anyone buying a bond with a yield of 2% and holding it until maturity can expect, at best, that level of return (before inflation) and no more. (There is a small chance the government might default.) With equities, the calculations are not quite so hard-and-fast. Nevertheless, it is a good rule-of-thumb that buying shares with a low dividend yield, or on a high multiple of profits, is likely to lead to lower-than-normal returns.
So a sensible approach to long-term investing would assess the potential returns from asset classes, given their valuations and the fundamentals, and allocate assets accordingly. That is what GMO, a fund-management company, has been trying to do for decades. It has made some common-sense assumptions about the fundamental drivers of returns and then assumed that valuations would return to average levels over a seven-year period.
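A forecast of that kind has two building blocks: a fundamental return and an adjustment for valuations drifting back to normal over the horizon. A stylized sketch of the arithmetic (my own simplification, not GMO's actual model):

```python
def expected_annual_return(current_val, fair_val, fundamental_return,
                           horizon_years=7):
    """Stylized valuation-based return forecast.

    `fundamental_return` bundles yield plus growth; the second term is
    the annualized boost or drag from the valuation multiple reverting
    from `current_val` to `fair_val` over `horizon_years`.
    """
    reversion = (fair_val / current_val) ** (1.0 / horizon_years) - 1.0
    return fundamental_return + reversion
```

At a fairly valued market the reversion term vanishes and the forecast is just the fundamental return; at elevated multiples the term is negative, which is why such models look too gloomy whenever valuations fail to revert.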
 
In one sense, this process has been a success. The assets that GMO thought would perform well have offered relatively high returns; the assets it thought would perform badly have offered low ones (see top chart). But if the ranking has been correct, the level of return has not been. Assets that GMO expected to return between -10% and -8%, for instance, have in fact lost an average of only 2.8%.
 
GMO’s forecasts have been pretty accurate for asset classes such as emerging-market bonds and international (non-American) shares; annual returns have been within 1.5 percentage points of its forecasts. But for American equities, GMO was too gloomy, underestimating returns by around four percentage points a year.
 
The reason for this error is pretty clear. Equity valuations have not returned to the mean, as GMO thought they would, but have stayed consistently above their historical levels. GMO was fairly accurate in its forecast for dividend growth, but its erroneous estimation of valuation accounted for all the forecast error.
 
There are two possible conclusions. One is that GMO is simply wrong about mean reversion: equities have moved to a new, higher valuation level. This sounds uncomfortably like the famous quote from Irving Fisher, an economist, before the 1929 crash, that stocks had reached a “permanently high plateau”. But there is some justification for a valuation shift: American profits have been high, relative to GDP, for a long period of time. This may be a result of monopoly power in some industries, or perhaps of the reduced bargaining power of workers in an age of globalisation.

A more obvious argument is that, with yields on cash and government bonds so low, investors are willing to pay a high price for equities because they represent their only hope for decent returns. But given the low level of dividend yields and the sluggish rate of economic growth, profits will have to keep rising as a proportion of GDP to allow high equity returns to continue.
 
That seems unlikely. Either there will be a political reaction—governments will clamp down on firms in response to public unrest—or, more prosaically, tighter labour markets will mean that wage growth starts to erode profits.
 
Either way, it is understandable that GMO does not want to give up on the idea of mean reversion just yet. Its latest forecasts are pretty downbeat (see bottom chart). The real returns from most asset classes are expected to be negative; only emerging-market equities offer a decent return. Investors who disbelieve those forecasts are in essence betting that things will be “different this time”. That is certainly possible but it requires a lot of faith.


Barbarians at the Monetary Gate

Andrew Sheng, Xiao Geng

HONG KONG – Financial markets today are thriving. The Dow Jones industrial average, the S&P 500, and the Nasdaq composite index have all reached record highs lately, with emerging-economy financial markets also performing strongly, as investors search for stability amid widespread uncertainty. But, because this performance is not based on market fundamentals, it is unsustainable – and very risky.
 
According to Mohamed El-Erian, the lost lesson of the 2007 financial crisis is that current economic-growth models are “overly reliant on liquidity and leverage – from private financial institutions, and then from central banks.” And, indeed, a key driver of financial markets’ performance today is the expectation of continued central-bank liquidity.
 
After the US Federal Reserve revealed its decision last month to leave interest rates unchanged, the Dow Jones industrial average set intraday and closing records; the Nasdaq, too, reached all-time highs. Now, financial markets are waiting for signals from this year’s meeting of the world’s major central bankers in Jackson Hole, Wyoming.
 
But there is another factor that could further destabilize an already-tenuous leverage- and liquidity-based system: digital currencies. And, on this front, policymakers and regulators have far less control.
 
The concept of private cryptocurrencies was born of mistrust of official money. In 2008, Satoshi Nakamoto – the mysterious creator of bitcoin, the first decentralized digital currency – described it as a “purely peer-to-peer version of electronic cash,” which “would allow online payments to be sent directly from one party to another without going through a financial institution.”
 
A 2016 working paper by the International Monetary Fund distinguished digital currency (legal tender that could be digitized) from virtual currency (non-legal tender). Bitcoin is a cryptocurrency, or a kind of virtual currency that uses cryptography and distributed ledgers (the blockchain) to keep transactions both public and fully anonymous.
 
However you slice it, the fact is that, nine years after Nakamoto introduced bitcoin, the concept of private electronic money is poised to transform the financial-market landscape. This month, the value of bitcoin reached $4,483, with a market cap of $74.5 billion, more than five times larger than at the beginning of 2017. Whether this is a bubble, destined to collapse, or a sign of a more radical shift in the concept of money, the implications for central banking and financial stability will be profound.
 
At first, central bankers and regulators were rather supportive of the innovation represented by bitcoin and the blockchain that underpins it. It is difficult to argue that people should not be allowed to use a privately created asset to settle transactions without the involvement of the state.
 
But national authorities were wary of potential illegal uses of such assets, as reflected in the bitcoin-enabled, dark-web marketplace called Silk Road, a clearinghouse for, among other things, illicit drugs. Silk Road was shut down in 2013, but more such marketplaces have sprung up. When the bitcoin exchange Mt. Gox failed in 2014, some central banks, such as the People’s Bank of China, started discouraging the use of bitcoin. In November 2015, the Bank for International Settlements’ Committee on Payments and Market Infrastructures, made up of ten major central banks, launched an in-depth examination of digital currencies.
 
But the danger of cryptocurrencies extends beyond facilitation of illegal activities. Like conventional currencies, cryptocurrencies have no intrinsic value. But, unlike official money, they also have no corresponding liability, meaning that there is no institution like a central bank with a vested interest in sustaining their value.
 
Instead, cryptocurrencies function based on the willingness of people engaged in transactions to treat them as valuable. With the value of the proposition depending on attracting more and more users, cryptocurrencies take on the quality of a Ponzi scheme.
 
As the scale of cryptocurrency usage expands, so do the potential consequences of a collapse.
 
Already, the market capitalization of cryptocurrencies amounts to nearly one tenth the value of the physical stock of official gold, with the capability to handle significantly larger payment operations, owing to low transaction costs. That means that cryptocurrencies are already systemic in scale.
 
There is no telling how far this trend will go. Technically, the supply of cryptocurrencies is infinite: bitcoin is capped at 21 million units, but this can be increased if a majority of “miners” (who add transaction records to the public ledger) agree. Demand is related to mistrust of conventional stores of value. If people fear that excessive taxation, regulation, or social or financial instability places their assets at risk, they will increasingly turn to cryptocurrencies.
 
Last year’s IMF report indicated that cryptocurrencies have already been used to circumvent exchange and capital controls in China, Cyprus, Greece, and Venezuela. For countries subject to political uncertainty or social unrest, cryptocurrencies offer an attractive mechanism of capital flight, exacerbating the difficulties of maintaining domestic financial stability.
 
Moreover, while the state has no role in managing cryptocurrencies, it will be responsible for cleaning up any mess left by a burst bubble. And, depending on where and when a bubble bursts, the mess could be substantial. In advanced economies with reserve currencies, central banks may be able to mitigate the damage. The same may not be true for emerging economies.
 
An invasive new species does not pose an immediate threat to the largest trees in the forest. But it doesn’t take long for less-developed systems – the saplings on the forest floor – to feel the effects.
 
Cryptocurrencies are not merely new species to watch with interest; central banks must act now to rein in the very real threats they pose.
 
 


Sell the Swiss Franc if You Think the World Is a Better Place

Swiss franc proved slow to react to the brighter picture for the eurozone economy

By Richard Barley


[Chart: SAFETY TRADE – How many euros one Swiss franc buys]

For all of the controversy around aggressive monetary policy, there is little dispute that central bankers have succeeded in moving the values of stocks, bonds and currencies. The exception that proves the rule is the Swiss franc.

The Swiss National Bank has tried hard to battle the strength of its currency, but where other central banks had big impacts, the Swiss franc has refused to fall. The central bank has gone deep into negative-interest-rate territory, taking its target rate to minus 0.75%. For four years it put an outright cap on appreciation against the euro, printing francs and buying foreign assets in a campaign that has taken its balance sheet to more than 100% of Switzerland’s gross domestic product.

The SNB’s cap did temporarily turn the Swiss franc around, but the chaos sparked by its January 2015 decision to abandon that policy still hasn’t fully reversed. On a trade-weighted basis and adjusted for inflation, the Swiss franc is still some 12% higher than its precrisis average, SNB data show. Against the euro, the franc has fallen 6% this year but is still stronger than the level the SNB targeted before January 2015.

The SNB’s efforts to weaken the franc have been overwhelmed by the actions of its much bigger neighbor, the European Central Bank. But now the ECB is gradually moving away from quantitative easing as the eurozone economy picks up.

On the Swiss side, extremely low inflation means the central bank has absolutely no reason to shift its policy. Annual inflation has been above zero in only 12 of the 67 months since the start of 2012 and has often been below that of Japan. The SNB forecasts inflation at 0.3% in 2018 and 1% in 2019.

The combination of weak inflation and loose monetary policy in Switzerland and stronger growth and less quantitative easing in Europe argues for the franc to weaken versus the euro.

The other factor pushing for a weaker franc is the improving prospects for the eurozone. Fear of the currency bloc breaking up benefits the franc. Swiss outward investment flows, much of which traditionally landed in the eurozone, have collapsed since the crisis. As those fears fade, so should the franc. The first time the Swiss franc showed signs of weakening this year was in the wake of the French elections, won by the staunchly pro-Europe Emmanuel Macron. A resumption of Swiss investment flows—which means investors sell francs and buy euros—could be a key factor in the franc’s path.

[Chart: INTERRUPTED – Swiss residents’ net investments in foreign securities]


[Chart: EVEN LOWER, EVEN LONGER – Change in consumer prices from a year earlier]


The Swiss franc won’t stop being a haven. When risk aversion rises, investors seem hard-wired to embrace it. Italian elections next year could be a flashpoint. But unless there is a specific threat to eurozone cohesion, such reversals shouldn’t prove too disruptive.

Put together, that makes the Swiss franc a sell against the euro. Even if the franc falls from its current €0.875 to €0.83, a big move in currencies, it would only be back at the strongest level the Swiss central bank allowed before it scrapped its cap in January 2015. A further decline would make those bankers happier. Investors should go along for the ride.


Economic Forecast: Theories Behind The Numbers

by: Dr. Bill Conerly

 
 
What's the theory behind my economic forecast? In simple language, it's complicated. My forecast is judgmental rather than coming from an econometric model. I begin with a "bottom-up" prediction of major sectors, followed by a top-down reality check. This article will explain my basic economic thought process.
 
The classical economists are in the back of my mind. They viewed the economy as self-regulating. The Pigou effect is one part of the process, but not the only one. I certainly believe we would have business cycles even in the absence of bad government policy, but I think the classicals were right that if the government took its hands off the levers of policy, the economy would come back to full employment. My judgment is that the speed of adjustment would be fairly rapid, but the slow-adjustment argument is worth considering.
 
 
[Chart: Economic Growth and Contraction. Dr. Bill Conerly, with data from the Bureau of Economic Analysis]
 
 
I read John Maynard Keynes' General Theory as a skeptical undergraduate, but I really took to it after reading Axel Leijonhufvud's splendid On Keynesian Economics and the Economics of Keynes. He said that textbook Keynesian economics had become mechanical, losing some key points. I took away two lessons. The first is that the basis for many spending decisions is soft.
 
We can teach MBA students how to find the optimum capital expenditure, but the calculations are based on a host of unknowns: future prices and future costs and future interest rates. The second lesson is that in an environment of unknowns, decision-makers often take on a herd mentality, leading to swings of optimism and pessimism. When forecasting business decisions, I look at the fundamental factors that should be driving capital spending, but I also recognize the possibility of these mood swings.

The monetarism of Milton Friedman and others heavily influenced me early in my career. I'm less a monetarist now, partly because it's so hard to find a measure of money supply for which demand is fairly stable. Nonetheless, I take monetary policy to be powerful, but not completely determinative for the economy.
 
"Real business cycle" theory shows that swings in new technology development can trigger booms and busts. For their work on this, Finn Kydland and Edward Prescott were awarded the Nobel prize in economics; the committee's prize description gives a layman's-level account of their work. When the tech sector develops new business models, there can certainly be a boost to capital spending - so long as the technology does not reduce the need for capital in total. I keep this in mind, but most of my effort focuses on other areas.
 
The dominant macroeconomic theory today is "new Keynesian," which emphasizes the business cycle consequences of market imperfections, such as wages and prices that adjust to new circumstances only gradually. I take this seriously, though with the proviso that when it is profitable to change wages and prices more quickly, that will happen.
 
That idea brings us to rational expectations, the notion that people make decisions using all of the information available to them. The idea is frequently criticized, even to the point of being laughed at, but let's state it a different way: you can't fool all the people all the time. Robert Lucas argued that any economic policy that depended on tricking people would not work for long.
 
Think of government spending to stimulate the economy. The stimulus works only if people do not offset it by saving more to pay the future tax bill. As more and more people worry about the federal debt burden, I think such policy becomes less effective.
 
I'm often asked about Austrian business cycle theory, which I studied both as an undergraduate and in my Ph.D. program at Duke. The simplistic version of this theory is that easy money policy from the Federal Reserve (or other central bank) causes more capital spending and more roundabout production processes than is warranted by the underlying productivity of capital and consumer time preference. Eventually, the misallocation of resources is corrected, resulting in a recession or depression. The downturn is necessary to correct the errors of the past. Although there are many insights from the Austrian school that I use regularly, I find the specifics of the business cycle theory less useful.

I also find a good bit of overlap between the economics of Keynes, with its emphasis on decision-making under uncertainty, and the Austrian view. This seems odd to those who see Keynesians as proponents of activist government and Austrians as opponents. The two schools are united in their emphasis on decision-making under uncertainty. (Although Hayek and Keynes are sometimes portrayed as antagonists, they were cordial colleagues. During World War II they walked the streets of Cambridge together as air wardens, checking that blackout curtains were in place.)
 
My economic theory is a mongrel, using a variety of insights developed by different theorists.
 
The most important basis is old-fashioned supply and demand, applied to different sectors of the economy. For instance, when I look at consumer spending on durable goods such as cars, I examine incomes and whether there might be pent-up demand. When business spending is surging, I'm concerned about supply-chain limitations.
 
In practice, forecasting is heavily judgmental. Some folks have computer models that they let run on autopilot. But each model is based on judgments about what factors to include and what to ignore.
 
The world is too complicated - and data too limited - to utilize all possible explanatory factors, so economists pick and choose. Whether with a computer model or a purely judgmental process, the economist makes his best guess in an atmosphere of uncertainty.