A MAD problem
Artificial intelligence revives a cold-war-style dilemma
America and China are weighing co-operation and competition
WHEN XI JINPING and Donald Trump meet in Beijing on May 14th-15th, they may discuss such vexing problems as war in the Middle East, trade imbalances and the status of Taiwan.
Add to that cheering list artificial intelligence.
Elites in both Beijing and Washington are unnerved by the technology’s rapid advance.
The smarter AI models become, the more vital they are for prosperity at home and geopolitical heft abroad.
But the risks they pose grow in step.
Not since the creation of the atomic bomb have great powers faced such a dilemma.
The Trump administration increasingly recognises as much.
It has ditched its hands-off approach to tech regulation after recent sparring with Anthropic, an American lab, and is considering ordering new models to be vetted by the government.
Anthropic said in April that it had created Mythos, a model so capable at finding holes in cyber-defences that it could not be publicly released.
America and its rivals took note.
After initial scepticism, China’s state media noted Mythos’s “unprecedented cyber-attack capabilities”, while a Russian broadcaster called it “worse than a nuclear bomb”.
Fears that ever more capable models may launch cyber-attacks, design bio-weapons or slip loose from human control have made AI diplomacy urgent.
Some in America and China are quietly weighing whether they can agree on guardrails for a technology that each regards as essential to beating the other.
Distrust abounds.
Neither side wants to slow its own development and risk handing the other an advantage.
Some American tech types theorise that being the first to build a self-improving—and therefore ever more powerful—AI model could create an enormous strategic advantage.
Chinese experts tend to view AI as key to economic growth: more nuclear power than nuclear weapon.
America and China have a “mutual interest” in AI safety, Xue Lan, an adviser to the Chinese government, told attendees at an event in the US Capitol one evening in late April.
“If one country is not safe, all of us are not safe,” he said. Mr Xue and Yi Zeng, the boss of Beijing’s AI-safety institute, called for global efforts to regulate and even slow AI development.
That chimes with Chinese policy.
Soon after the arrival of ChatGPT in 2022, China began pushing for international co-operation on AI and the creation of a UN body.
Chinese diplomats have floated the idea of “pausing” AI development, and argue that global standards should be adopted to ensure humans remain in control.
Their plan calls on the West to share its AI kit with poor countries so as not to divide the world into tech haves and have-nots.
A bilateral approach is more likely, and favoured by officials in Washington.
Together America and China house 90% of the world’s frontier computing power and so are the only ones with real regulatory clout.
American officials also view China’s AI ecosystem as a particular source of risk.
Chinese models are mostly “open source”, meaning the weights that allow them to function are freely published, giving bad actors easy access to powerful AI tools.
Direct talks would not be entirely new.
Messrs Trump and Xi agreed to “work together” on AI when they met in Busan, South Korea, in October.
Before that, in 2024, Joe Biden secured agreement from Mr Xi that humans, not AI, would control nuclear warheads.
But the pace of emerging risks is forcing a rethink about any piecemeal approach to co-operation.
American and Chinese researchers already work together in less public ways.
For example, some of China’s biggest labs, including Baidu, have adopted open-source code written by developers at Anthropic to govern how AI agents communicate.
Several unofficial dialogues between tech bosses and retired officials from America and China have taken place in recent years, sometimes in secret.
Participants say that the technical experts needed to make real progress on aligning standards are often not in the room, however.
Three types of co-operation now seem possible.
First, dialogue.
America and China could engage in “strategic reassurance”, says one Western official.
In nuclear-arms talks countries routinely discuss their plans to manage risk, build trust and reduce the chances of miscalculation.
America and China may write AI rules in parallel, but not in co-ordination.
“If they read the same technical papers and have a similar ground truth, they may both take reasonable actions in response,” says Karson Elmgren of the Institute for AI Policy and Strategy, an American think-tank.
Second, America and China could agree on how to test the safety of models.
Even without sharing their findings, both sides could adhere to common measures of dangerous behaviour, or common ways of spotting models whose motives do not align with those of their human creators.
It is hard to distinguish data used to monitor AI safety from information important to AI development, notes Jeffrey Ding of George Washington University.
By withholding the results, both sides could allay fears of leaking technical data.
Third, trust but verify.
If Messrs Trump and Xi were feeling ambitious, they could seek a formal agreement to develop common safety tests and share the results of such evaluations.
Doing so, however, would probably require invasive means of checking that the agreement is being respected.
These could include inspections or transmitting information on data-centre activity to an international umpire, like the UN’s International Atomic Energy Agency, which monitors nuclear material.
All of these types of co-operation seem a long way off.
American AI researchers are sceptical that Chinese officials are sincere.
China’s AI labs have been comparatively lax on safety, perhaps because their models are not yet powerful enough to pose existential threats, or because they lack the time and money to test them.
Documents accompanying DeepSeek’s v4 model, released last month, omitted safeguards common in American labs.
Ryan Fedasiuk, a former Biden administration official, suggests that China's stated concerns about AI safety are performative, voiced "to make the Americans look bad".
Despite China’s public support for global governance of AI, it has so far resisted detailed discussions with America on the issue.
In 2024 America sent senior security officials and technical experts to meet their Chinese counterparts in Geneva.
China sent political officials who refused to discuss AI safety until America lifted export controls on advanced computer chips, according to those familiar with the talks.
American diplomats complain that China has a long history of manipulating dialogue for political gain: it ended climate-change discussions in 2022 because Nancy Pelosi visited Taiwan, for example.
China's AI discourse, including recent talk about Mythos, can reflect a suspicion that American labs use safety concerns as a pretext to control the technology's development.
Some fret that sharing safety data is a ruse to snatch China’s tech secrets.
Tong Zhao, an expert on nuclear-arms control negotiations at the Carnegie Endowment, an American think-tank, says that Chinese strategists are generally sceptical of agreements with a technologically superior counterpart.
China’s history of entering into “unequal treaties” in the 19th century has made officials wary of deals that may slow their development or lock them into a second-tier status.
Co-operation is all the harder because of the high stakes involved.
“If we don’t win in AI, then it’s game over,” America’s treasury secretary told the Wall Street Journal in April.
Mr Xi recently hailed AI as “epoch-defining”.
Soon enough, however, American and Chinese leaders may come to see AI as existential for other reasons.
“Sadly, the historical evidence suggests you only see real momentum after a tragic accident,” says Mr Ding, noting the global standards set after the Bhopal chemical disaster in 1984, or the Chernobyl meltdown in 1986.
AI negotiations remain a true test of human intelligence.