The Curious Case of the Missing Defaults



CAMBRIDGE – Booms and busts in international capital flows and commodity prices, as well as the vagaries of international interest rates, have long been associated with economic crises, especially – but not exclusively – in emerging markets. The “type” of crisis varies by time and place. Sometimes the “sudden stop” in capital inflows sparks a currency crash, sometimes a banking crisis, and quite often a sovereign default. Twin and triple crises are not uncommon. 

The impact of these global forces on open economies, and how to manage them, has been a recurring topic of discussion among international policymakers for decades. With the prospect of the US Federal Reserve raising interest rates in the near and medium term, it is perhaps not surprising that the International Monetary Fund’s 18th Annual Research Conference, to be held on November 2-3, is devoted to the study and discussion of the global financial cycle and how it affects cross-border capital flows.

Rising international interest rates have usually been bad news for countries where the government and/or the private sector rely on external borrowing. But for many emerging markets, external conditions began to worsen around 2012, when China’s growth slowed, commodity prices plummeted, and capital flows dried up – developments that sparked a spate of currency crashes spanning nearly every region.

In my recent work with Vincent Reinhart and Christoph Trebesch, I show that over the past two centuries, this “double bust” (in commodities and capital flows) has led to a spike in sovereign defaults, usually with a lag of 1-3 years. Yet, since the peak in commodity prices and global capital flows around 2011, the incidence of sovereign defaults worldwide has risen only modestly.

If the model fitted to almost 200 years of data is used to predict the share of countries in default, the predictions are consistently higher than what has materialized to date. This is the case of the missing defaults.

A caveat, as our study highlights, is that there is potential mismeasurement of the “true” incidence of default, which we cannot begin to quantify at this time – namely, defaults or accumulated arrears on Chinese loans. China’s lending to many emerging markets, most notably commodity producers, rose significantly during the last boom. While most of this lending is from official Chinese sources, much of it is not reflected in the World Bank data, and unknown amounts may well be in default or protracted arrears.

This state of affairs describes the situation in a number of African commodity producers and Venezuela. While Venezuela’s government-run oil company continues to service its external bonds (which is why no default appears in the books of the credit rating agencies), debts owed to China are understood to be in arrears.

Measurement issues aside, there are two types of explanation for the missing defaults. The first is that emerging market economies are more resilient this time around. This view, which suggests a structural shift, was emphasized in early October during one of the most upbeat IMF/World Bank annual meetings in recent memory, and the message was echoed in The Economist’s special report “Freedom from financial fear.”

Recent studies suggest that less procyclical fiscal and monetary policies and stronger macroprudential measures during the inflow phase or boom may have left countries on a more solid footing to cope with sudden capital-flow reversals. In the past, it was all too common for policymakers to convince themselves that a boom in commodity prices and associated surge in government revenues was permanent. Government expenditures would then ratchet up during the boom, only to be slashed as revenues sank along with commodity prices. Aside from waning procyclicality, macroprudential policies and capital controls appear to help restrain the intensity of aggregate credit booms and asset bubbles, with policies in place during the boom enhancing economic resilience during the bust.

The second type of explanation focuses on external factors. The largest global surges in sovereign defaults have usually followed a capital-flow reversal that overlaps with a spike in international interest rates. The worst outcomes (Category 5 hurricanes of debt) involved a triple blow to a class of capital importers (the commodity producers).

Today, global liquidity conditions have not tightened as markedly or as rapidly as in the bust phase of previous cycles. Exceptionally low and stable interest rates have acted to dampen debt-servicing difficulties among the debtor countries and may also help explain the missing defaults.

In sum, while there is evidence to suggest that the macroeconomic management of capital inflow surges has been improving over time in emerging markets as a whole, one has to recall that prior to the 2007-2009 global financial crisis, a widely accepted view was that the advanced economies had tamed the business cycle. This was the short-lived era of the so-called Great Moderation.

Perhaps the change is structural. But a more cautious interpretation of the missing defaults is that the protracted nature of the downturn in international conditions has yet to take its cumulative toll, or that lingering weaknesses will only become evident once the major central banks move further along in renormalizing their policy stances.

Carmen Reinhart is Professor of the International Financial System at Harvard University's Kennedy School of Government.

Millennials play in the credit card casino

Amex and JPMorgan are the biggest winners from a closed financial system

by John Gapper

Kenneth Chenault knows how to play the game of life: the chairman and chief executive of American Express has received at least $370m during his 17-year tenure there. When his departure was announced, Warren Buffett, a long-time Amex investor, declared him “the gold standard for corporate leadership”.

Mr Chenault also knows when to take his chips off the table. The US credit card business, especially for high-end cards such as Amex Platinum, is more competitive than it used to be. A points scramble has intensified since JPMorgan Chase launched its Sapphire Reserve card last year, with an annual fee of $450 and an initial sign-up bonus worth $1,500 in points.

His departure is an opportunity to ask questions about how card companies prosper, particularly in the US. The Supreme Court this month agreed to hear a case challenging Amex’s practice of barring merchants from steering customers towards cards that charge lower transaction fees. The ultimate question is who loses and who wins in what is an oddly structured market.

The traditional answer to the question is clear enough: people who pay for goods and services in cash lose, while card holders gain. As one study found, it involves “a regressive transfer from low income to high income consumers . . . amplified substantially by the practice of paying rewards”.

This is how the transfer works: companies such as Visa, MasterCard and Discover levy a charge of about 2 per cent on sellers for each transaction (Amex’s average merchant charge is 2.4 per cent, which is why it tries to stop steering). Stores are barred by contract from charging card users more, so prices rise about 1 per cent for everyone, whether they pay by cash or card.

Some of the fees are channelled back by credit card companies to their customers in rewards and, since Discover launched the first cashback card in 1986, in cash rebates averaging 1 to 2 per cent. Since rewards cards such as Amex tend to be held by people with prime credit scores, those customers gain most.
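
The column’s round numbers make the arithmetic easy to check. The sketch below uses illustrative figures drawn from the paragraphs above (a roughly 2 per cent merchant fee, a roughly 1 per cent price uplift, a 1.5 per cent rewards rate); they are assumptions for illustration, not data from any particular issuer.

```python
# Illustrative sketch of the cross-subsidy described above.
# All figures are assumptions based on the column's round numbers.

purchase = 100.00          # price before the merchant builds in card fees
merchant_fee_rate = 0.02   # ~2% charge levied on the seller per card transaction
price_uplift = 0.01        # stores raise prices ~1% for everyone to cover fees
rewards_rate = 0.015       # 1-2% back to reward-card holders; 1.5% assumed

shelf_price = purchase * (1 + price_uplift)

# A cash buyer pays the uplifted price and gets nothing back.
cash_cost = shelf_price

# A rewards-card holder pays the same shelf price but earns rewards.
card_cost = shelf_price - shelf_price * rewards_rate

print(f"cash buyer pays:   {cash_cost:.2f}")
print(f"card holder pays:  {card_cost:.2f}")
print(f"transfer per $100: {cash_cost - card_cost:.2f}")
```

Per $100 spent, the cash buyer ends up about $1.50 worse off than the rewards-card holder, which is the regressive transfer the study describes.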

Recently, the market has become more like a game. The Sapphire Reserve card shook up Amex by offering top users superior rewards in return for a large annual fee. Those who play their cards right, earning triple points for travel and dining, can gain high returns. A Sapphire Reserve card holder could gain $7,950 net of fees over 10 years, Barclays analysts estimate.

This has sparked a contest among millennials with good credit, an aptitude for games, and time to plan their spending cleverly. Some apply for multiple cards to maximise rewards, unlike traditional users. “The question is not just how much disposable income you have but how much disposable time,” says Robert Harrow, an analyst at the research group ValuePenguin.

The contest has made life less comfortable for Amex, which had to raise its rewards by 21 per cent in the third quarter of this year. The total rewards paid by the top six US card issuers doubled to $23bn between 2010 and 2016, attracting more people to use credit. US credit card debt now exceeds $1tn, and Americans rate credit cards as their favourite payment method.

Like casinos, card issuers do not want their customers to master their games too well. Casinos frown on blackjack players who change the odds by counting cards, although it is legal. Both Amex and JPMorgan strive to prevent people signing up for cards for welcome rewards and later dropping them. Amex changed its terms this year to prohibit “abuse, misuse or gaming”.

The big winner is the house, as in any casino. Amex charged $18.7bn in merchant fees last year and paid out $6.8bn in rewards. Most card users are not single-minded enough to obtain the highest potential returns. Discover card holders can get 5 per cent cash back per quarter on $1,500 of purchases in some categories, but its average rewards rate is only 1.3 per cent.
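
The house’s edge can be read straight off the figures quoted above; the short sketch below simply restates that arithmetic, with Discover’s headline-versus-average rewards gap included for comparison.

```python
# The "house edge" arithmetic from the figures quoted in the column.

merchant_fees = 18.7e9   # merchant fees Amex charged last year
rewards_paid = 6.8e9     # rewards Amex paid out

retained = merchant_fees - rewards_paid       # what the house keeps
payout_ratio = rewards_paid / merchant_fees   # share returned to players

print(f"retained by the house: ${retained / 1e9:.1f}bn")
print(f"share returned as rewards: {payout_ratio:.0%}")

# Discover's gap between headline and realised rewards rates:
headline_rate = 0.05     # 5% cash back on capped quarterly categories
average_rate = 0.013     # average realised rewards rate
print(f"average player captures {average_rate / headline_rate:.0%} of the headline rate")
```

On these figures Amex keeps roughly $11.9bn, returning a bit over a third of merchant fees as rewards, while the typical Discover user captures only about a quarter of the headline rate.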

Amex gains most from what it calls its “closed loop” — it issues cards and handles transactions. Its degree of control has similarities to Nintendo, which sells both consoles and games and this week raised sales projections for its Switch console. Complete control has its privileges: Amex’s US card business made a 35 per cent return on capital last year, far higher than that of investment banks.

The growth of super-prime cards has increased competition but it ultimately reinforces the credit card networks. The largest transfer of wealth is now from poorer cash buyers to millennial gamers but the latter have to work hard for their points and are rewarded in the currency of the card issuer.

While rewards have risen, little has affected where they come from — the US card market’s high transaction fees. Unlike the EU, which capped some transaction fees in 2015, the US relies largely on competition to minimise card purchasing costs and the price distortions that they create.

To judge by the profitability of Amex and others, competition is not working as well as it should. More of the “swipe economics” of transactions are being passed through to the most energetic users but there is room for reform in the market as a whole. The Supreme Court has work to do.


What’s Behind the Hype About Artificial Intelligence?


Apoorv Saxena, lead product manager at Google and co-founder of the AI Frontiers conference that will be held in Santa Clara, Calif., from November 3-5, speaks with Knowledge@Wharton about why interest in artificial intelligence is growing, what is likely to happen in the near future and which challenges will take longer to overcome. [Knowledge@Wharton is a media partner for the conference.]

An edited transcript of the conversation follows.

Knowledge@Wharton: Interest in artificial intelligence has picked up dramatically in recent times. What is driving this hype? What are some of the biggest prevailing misconceptions about AI and how would you separate the hype from reality?

Apoorv Saxena: There are multiple factors driving strong interest in AI recently. First is significant gains in dealing with long-standing problems in AI. These are mostly problems of image and speech understanding. For example, now computers are able to transcribe human speech better than humans. Understanding speech has been worked on for almost 20 to 30 years, and only recently have we seen significant gains in that area. The same thing is true of image understanding, and also of specific parts of human language understanding such as translation.

Such progress has been made possible by applying an old technique called deep learning and running it on highly distributed and scalable computing infrastructure. This, combined with the availability of large amounts of data to train these algorithms and easy-to-use tools to build AI models, is what is driving the current interest in AI.

It is natural for people to project the recent successes in specific domains into the future. Some are even projecting them into domains where deep learning has not been very effective, and that creates a lot of misconceptions and hype. AI is still pretty bad at learning new concepts and extending that learning to new contexts.

For example, AI systems still require a tremendous amount of data to train. Humans do not need to look at 40,000 images of cats to identify a cat. A human child can look at a couple of cats and dogs and figure out what each is — and distinguish between them. So today’s AI systems are nowhere close to replicating how the human mind learns. That will be a challenge for the foreseeable future.

Knowledge@Wharton: How would you separate the hype from the reality?

Saxena: A lot of the hype originates from extrapolating current trends while ignoring the reality of taking something from a research paper to an engineered product. As a product manager responsible for building products using the latest AI technology, I am constantly trying to separate the hype from reality. The best way to do this is to combine the healthy skepticism of an engineer with the optimism of a researcher. You need to understand the underlying technical principles driving the latest cool AI demo and extrapolate only the parts of the technology that have firm technical grounding. For example, if you understand the underlying drivers of improvements in, say, speech recognition, it becomes easy to extrapolate the coming improvements in speech recognition quality. Combine that with a healthy skepticism about where natural language understanding is today, and you will be able to identify the right opportunities, such as which pieces of the call-center workflow will be automated in the near future.

Knowledge@Wharton: What is possible with AI in the near term, and what is more difficult to do?

Saxena: As I mentioned, in narrow domains such as speech recognition, AI is now more sophisticated than the best humans, while in more general domains that require reasoning, context understanding and goal seeking, AI can’t even compete with a five-year-old child. AI systems still have not figured out how to do unsupervised learning well, how to train on a very limited amount of data, or how to train without a lot of human intervention. That is going to remain the main difficulty. None of the recent research has shown much progress here.

There is a very good quote from [Google engineering fellow] Geoff Hinton, who is known as the father of deep learning. I might be misquoting him, but it goes something like: “Deep learning actually spoiled AI, because it made a lot of people think it can do everything when we know that it can only solve very limited kinds of problems.” I think there are still significant challenges in AI, and no recent advance tells us we will solve them anytime soon.

Knowledge@Wharton: AI is a vast field covering many areas, and some of them are quite confusing to non-experts. For example, you and Wharton operations, information and decisions professor Kartik Hosanagar wrote an article for Knowledge@Wharton last April about the democratization of machine learning. What is happening today in machine learning that impresses or surprises you the most?

Saxena: What impresses me is how widely AI is being used to help the world, now that really easy-to-use tools are available. We have heard about farmers in Japan using AI to sort their cucumbers, separating good produce from bad. A logistics company in Africa is using AI to route packages. It always surprises me how hungry, innovative and creative people are in using AI. Even though it is limited in some ways, people are still using it and making it meaningful. I am definitely super impressed [by this phenomenon].

Knowledge@Wharton: In addition to machine learning, you also referred a couple of times to deep learning. For many of our readers who are not experts in AI, could you explain how deep learning differs from machine learning? What are some of the biggest breakthroughs in deep learning?

Saxena: Machine learning is much broader than deep learning. Machine learning is essentially a computer learning patterns from data and using the learned patterns to make predictions on new data. Deep learning is a specific machine learning technique.

Deep learning is modeled on how human brains supposedly learn, using neural networks — layered networks of neurons that learn patterns from data and make predictions. Just as humans use different levels of conceptualization to understand a complex problem, each layer of neurons abstracts out a specific feature or concept in a hierarchical way to capture complex patterns. And the beauty of deep learning is that, unlike other machine learning techniques whose prediction performance plateaus when you feed in more training data, deep learning performance continues to improve with more data. Deep learning has also been applied to very different sets of problems with good performance, which is typically not possible with other techniques. All of this makes deep learning special, especially for problems where you can easily throw in more data and computing power.
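
The layered structure described here can be made concrete with a toy forward pass. The snippet below is purely illustrative: the weights are random placeholders rather than anything learned from data, and real systems use training frameworks rather than hand-rolled matrix products.

```python
import numpy as np

# Toy forward pass through a two-layer network: each layer transforms the
# previous layer's output, so later layers operate on higher-level features.
# Weights here are random; a real network would learn them from data.

rng = np.random.default_rng(0)

x = rng.normal(size=4)        # raw input features (e.g. pixel values)
W1 = rng.normal(size=(8, 4))  # layer 1: low-level feature detectors
W2 = rng.normal(size=(3, 8))  # layer 2: combines them into concepts

h = np.maximum(0, W1 @ x)     # ReLU nonlinearity between the layers
logits = W2 @ h               # class scores, e.g. cat / dog / other

# Softmax turns the scores into a probability over the classes.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print("class probabilities:", probs)
```

Stacking more such layers, each feeding on the previous one’s output, is what puts the “deep” in deep learning.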

Knowledge@Wharton: Can you talk a little bit about some of the biggest breakthroughs in deep learning that you find most impressive?

Saxena: Deep learning is an exciting field with lots of experimentation and new techniques being proposed over the last two to three years. There are two that come to mind. One is reinforcement learning, which I will explain in a minute. And the other big thing that is happening is GANs, or Generative Adversarial Networks.

Both of these are breakthroughs because they address one of the key problems in AI that I highlighted: how to learn without a lot of human supervision. In the most layman terms, reinforcement learning is essentially agent-based learning, in which an agent, a software program, is given an optimization goal and tries to achieve it by exploring multiple paths, choosing the best one by learning from its mistakes. This is the technique behind recent advances in machines learning to play video games, such as Atari games, and even more advanced strategy games like Go.
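
The trial-and-error loop described here can be sketched with tabular Q-learning, the simplest form of reinforcement learning. Everything below (the corridor world, the reward, the hyperparameters) is an invented toy, far removed from the systems used for Atari or Go, but it shows an agent discovering the best path purely from feedback.

```python
import random

# Tabular Q-learning on a 1-D corridor: states 0..4, reward only at state 4.
# The agent explores, gets nothing for wandering, and gradually learns that
# heading right is the best path to the goal.

random.seed(0)
n_states, actions = 5, (-1, +1)        # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Standard Q-learning update toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# Greedy policy at each interior state after training.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)}
print(policy)
```

After enough episodes the greedy policy at every interior state should point right, toward the reward, even though the agent was never told where the goal was.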

The other big area that has generated tremendous interest involves Generative Adversarial Networks, or GANs for short. In layman’s terms, think about someone learning with a buddy: two neural models compete with and teach each other, improving each other to expedite the learning process. GANs work well for a class of problems called unsupervised learning, where you don’t have a lot of labeled data to tell the machine what to learn. GANs have been applied to make significant progress in image generation and video morphing, with many more applications to come.

Knowledge@Wharton: The other area of AI that gets a lot of attention is natural language processing, often involving intelligent assistants, like Siri from Apple, Alexa from Amazon, or Cortana from Microsoft. How are chatbots evolving, and what is the future of the chatbot?

Saxena: This is a huge area of investment for all of the big players, as you mentioned. This is generating a lot of interest, for two reasons. It is the most natural way for people to interact with machines, by just talking to them and the machines understanding. This has led to a fundamental shift in how computers and humans interact. Almost everybody believes this will be the next big thing.

Still, early versions of this technology have been very disappointing, because natural language understanding is extremely tough. You cannot just apply one technique or deep learning model, as you can for image or speech understanding, and solve everything. Natural language is inherently different: understanding a language or a conversation requires huge amounts of human and background knowledge. Because there is so much context associated with language, unless you teach your agent all of that knowledge, it falls short in understanding even basic things.

That’s where the challenge is. All the big companies you mentioned are investing heavily in this area. I see progress being made within narrow domains, such as ordering a pizza or solving problems like, “My bank account is running low, can you allow me to make this transaction?” Such problems will get solved in the near term. But when you come to more open-ended discussions — imagine your AI assistant acting as your psychiatrist — those solutions are much further out, because they require a depth of understanding of human knowledge and emotions that AI will lack for the foreseeable future.

Knowledge@Wharton: What do you think is the future of the chatbot?

Saxena: As I said, chatbots will do well when they operate within specific vertical domains and contexts. When the context is fixed and doesn’t vary — and, more importantly, when the user’s expectations of the chatbot are limited — chatbots will do really well.

Another area where we have seen chatbots used is what we call goal-oriented conversations.

For example, setting up a meeting or an appointment between two people can be completely handed over to a chatbot. Here the context is limited to coordinating two people’s calendars or making a restaurant reservation. Instead of a human calling the restaurant, a chatbot can do it automatically, because the task and context are both well defined. Anything beyond that is still difficult, in my view.

Knowledge@Wharton: What is computer vision? Is it possible to make machines understand video the way that human beings do? What are the most promising business applications here, and the biggest challenges in making them a reality?

Saxena: Computer vision is the science of understanding images and videos. One example of image understanding is identifying what objects appear in an image. The same goes for videos: you consider the different scenes as well as the different people and objects in each scene.

Describing each scene by correlating different images, scenes or frames within the video is also possible — or is increasingly becoming possible — so that AI can watch a video and summarize what it saw. All of this is within the realm of computer vision, or visual understanding.

There are many areas where computer vision can be applied. One promising application is surveillance: we now have the ability to detect anomalies in a surveillance video. Another big application is self-driving vehicles, where AI enables the car to understand what is on the road, detect objects and make decisions based on what it sees. That is the other big area.

On the video front I clearly see huge improvements. Video is called dark data for a reason: our ability to understand it today is pretty limited. But imagine a world where machines can start understanding what is in a video. You will see tremendous advances in the near future in machines helping humans generate videos. It will not be completely automated, and one of the risks is the ability to create fake videos. You may recently have seen — it was pretty popular on social media — a video of Barack Obama delivering messages he never actually spoke. With video morphing and lip-synch technology, it is becoming easy to make anybody appear to say anything. That caused a lot of stir in this space. So the ability to modify a video, make changes and keep it realistic is going to be a huge challenge as well as a huge opportunity. That is coming.

Knowledge@Wharton: That sounds incredible. Now, a number of big companies are active in AI — especially Google, Microsoft, Amazon, Apple in the U.S., or in China you have Baidu, Alibaba and Tencent. What opportunities exist in AI for startups and smaller companies? How can they add value? How do you see them fitting into the broader AI ecosystem?

Saxena: I see value for both big and small companies. A lot of the investment by the big players in this space is going into platforms on which others can build AI applications. Almost every player in the AI space, including Google, has created such platforms, similar to what they did for Android and mobile. Clearly that is where the big players’ focus is, and there is a big opportunity for startups to build applications using the open-source tools these players have created.

The second area where startups will continue to play is in what we call vertical domains. A big part of the advances in AI will come from combining good algorithms with proprietary data. Even though the Googles of the world and other big players have some of the best engineering talent and algorithms, they don’t have the data. So, for example, a company that has proprietary health care data can build a health care AI startup and compete with the big players. The same is true of industries such as finance or retail.

Knowledge@Wharton: Can you give any examples of startups that are doing the most significant work in AI? Why is their work important?

Saxena: There have not been many breakout successes among AI-centric startups yet. By breakout successes, I mean multi-million- or even billion-dollar startups. But there are a lot of promising startups across the board. In customer service, for example, I have seen startups doing well, and in HR automation I have seen some good ones.

Knowledge@Wharton: What are the top three areas in AI that everybody should be paying attention to in the next 12 to 24 months and why?

Saxena: I think the intersection of robotics and AI is going to be interesting. Robotics has long been disappointing in terms of wide-scale adoption, and this is one area where combining AI with robotics will pay off; you will see some noteworthy applications in that space. More human-like robots will be one big area, driven by advances in natural language understanding, visual understanding and, of course, robotics itself. That is one area I would definitely watch.

Self-driving cars are also a critical area. Within the next few years we will see commercial deployment of self-driving cars.

I am bullish on some of the advances we will see in video understanding. Video understanding combined with virtual reality could create some interesting breakthroughs. That is another area we should keep watching. The common theme I see is not AI on its own, but AI combined with some other domain; that can create some compelling use cases in the near future.

Iran Nuclear Deal: The Limits of Diplomatic Niceties

By Jacob L. Shapiro


Though the rhetoric around the Iran nuclear deal has at times grown quite heated, Tehran now appears to be making a subtle but significant shift in its public comments on its ballistic missile program. Beneath the surface, however, this shift is more of a diplomatic gesture than a real change in policy.

At a conference in Tehran on Oct. 31, the commander of Iran’s Islamic Revolutionary Guard Corps reportedly said the country’s supreme leader had limited the range of Iranian-made missiles to 2,000 kilometers, or about 1,200 miles. The commander’s comments were couched in the usual threatening terms; at one point, he suggested that if the U.S. implemented new sanctions, Washington should move its bases out of the 2,000-kilometer range. But the commander’s declaration, at least on the surface, is a signal that Iran is willing to make some concessions to the United States in return for better relations.

Iranian officials had previously said Iran was pursuing a ballistic missile program as a deterrent, warning that no country would be able to limit the range, power or precision of its missiles. The shift marked by the commander’s comments is particularly surprising considering how acrimonious the relationship between Washington and Tehran has been in recent months. Iran has conducted at least two ballistic missile tests since U.S. President Donald Trump was elected last year – one in January and another in March. Those missile tests led, in part, to new U.S. sanctions against Iran and played a role in Trump’s decision to decertify the Iran nuclear deal earlier this month. Over the weekend, Iran’s Fars News Agency claimed that the Iranian president had refused to meet with Trump in New York in September.

Iranian President Hassan Rouhani speaks during the U.N. General Assembly at the United Nations on Sept. 20, 2017, in New York. Kevin Hagen/Getty Images

Shared Interests

To understand why Iran might be willing to limit its ballistic missile program, we need to go back to the reason Iran signed the nuclear deal in the first place. The agreement, reached in July 2015, came down to shared interests between the United States and Iran. By summer 2015, the Islamic State had overrun parts of Shiite-dominated Iraq and Bashar Assad’s Alawite-led regime in Syria – both Iranian allies. Iran viewed the rise of the Islamic State as an existential threat. The United States, which had spent so much blood and treasure to try to bring liberal democracy to Iraq, also viewed the Islamic State as a significant threat, and was eager to recruit new allies in the fight to destroy the self-declared caliphate. That made the U.S. and Iran partners.

At the same time, international sanctions against Iran were beginning to bite, and Iran wanted badly to cast off its global pariah status and rejoin the global economy. Since the deal was signed, Iran’s economy has soared: Its gross domestic product grew by 12.9 percent year-on-year in the first quarter of 2017 and by a staggering 16.8 percent year-on-year in the fourth quarter of 2016, according to Iran’s central bank. Much of this growth can be attributed to Iran’s oil exports, which have reached roughly 4 million barrels per day, the same level Iran exported in 2012 before the sanctions went into effect. But with its economic recovery in full swing, Iran is now nervous that the current U.S. administration could cancel the nuclear deal and impose new sanctions that could jeopardize the economic progress it has already made.

Yet, while Iran was willing to give up certain aspects of its nuclear program to achieve this progress, it didn’t give up all of it. Iran agreed to eliminate almost all of its stockpiles of enriched uranium and to reduce the number of centrifuges by two-thirds, among other things. But, as Tehran has repeatedly pointed out, the nuclear deal did not cover Iran’s ballistic missile program. Weapons-grade fissile material is an important part of building nuclear weapons, but it is useless if it can’t be delivered to its target. And Iran, despite the technical progress it made in enriching uranium, does not possess the kind of advanced ballistic missile technology necessary for it to claim a legitimate nuclear deterrent.

Iran already has missiles, likely acquired in 2005, that can go beyond the supreme leader’s 2,000-kilometer limit. What it hasn’t been able to do is domestically build missiles that can go much beyond that. Its recent tests appeared to have included a new ballistic missile – the Musudan, acquired from North Korea, according to the Center for Strategic and International Studies – with a range greater than 2,000 kilometers; that’s what made these tests so provocative. (The Musudan is an intermediate-range ballistic missile with a range of about 4,000 kilometers.) After a test in July 2016, Iran modified the missile and gave it a new name – the Khorramshahr. It is believed that Iran tested this missile in January and displayed it during a military parade in September. Its experiments with the Musudan suggest that its lack of a domestically made ballistic missile that can go beyond 2,000 kilometers shouldn’t be confused with a lack of desire to build one.

Moot Point

It’s unlikely Iran will simply abandon its nuclear program. Like North Korea, Iran is a country with a deep-seated suspicion of American intentions. In 1953, the same year the armistice ended hostilities on the Korean Peninsula, a faction of the Iranian military supported and funded by the United States and the United Kingdom overthrew Mohammad Mossadegh, Iran’s democratically elected prime minister. Iran has not forgotten this. And so Iran has continued to develop its ballistic missile technology, which may not violate the terms of the deal but could certainly violate the spirit of it.

This leads us back to the IRGC commander’s comments. At this time, Iran can’t manufacture missiles with a range greater than 2,000 kilometers, so whether the supreme leader restricted the production of such weapons is a moot point. Iran is more interested in honing the technology it does have, preferring to test shorter-range, domestically produced missiles to increase their accuracy. Iran is also interested in keeping its economic momentum going, and in containing the inevitable Sunni jihadist insurgency that will accompany the Islamic State’s territorial dissolution. It does not need a global committee debating whether it is pushing for intercontinental ballistic missiles.

And so Iran is trying to signal to the U.S. and to the world that its ballistic missile program is defensive by claiming to have limited the range of its missiles. In the long term, this empty, rhetorical gesture will not carry much weight, especially as Iran’s missile technology improves. But in the short term, while the U.S. deals with an even more recalcitrant regime in pursuit of nuclear weapons in Pyongyang, and while the world seeks to get rich from economic opportunities in Iran, the nuclear deal partners are willing to accept diplomatic niceties if Iran refrains from provocative behavior. In any case, Iran risks little by making its ballistic missile program appear modest.