Saturday, 1 July 2023

Don’t blame us for AI’s threat to humanity, we’re just the technologists

Leaders in AI have taken time off from inventing something that could wipe out human life to warn us about it

Robert Shrimsley 

Illustration © Lucas Varela


So here’s a thought. Instead of pushing ahead with a technology that its leading inventors say could soon have the power to kill humans, how about not pushing ahead with it?

This radical notion is prompted by a warning from the man setting up the prime minister’s artificial intelligence task force. Matt Clifford observed that “you can have really very dangerous threats to humans that could kill many humans, not all humans, simply from where we’d expect models to be in two years’ time”.

On second thoughts, maybe I’m overreacting. His full remarks were more nuanced and, anyway, it’s not all humans. Just many of them.

But similarly apocalyptic warnings have come from leading figures in AI’s development, writing under the aegis of the Center for AI Safety. In an admirably succinct statement, a who’s who of the AI industry stressed that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. The heads of Google DeepMind, OpenAI and umpteen others have taken time off from inventing the technology that could wipe out all human life to warn the rest of us that, really, something should be done to stop this happening.

And these guys are supposed to be the geniuses? Across potting sheds in England, there are any number of slightly wacky guys who have invented a new machine which might be brilliant but might also burn down their house, and most of them have managed to work out by themselves that maybe the device is not such a great idea after all.

This is where the small-fry inventors have been going wrong. Perhaps instead of figuring out the risks for themselves, what they really needed to do was score several billion pounds’ worth of VC funding and then write a letter to the local council warning that they really ought to be controlled.

I recognise, to be serious, that great things are expected of artificial intelligence, many of which do not involve the obliteration of the human race. Many argue that AI could play a pivotal role in delivering a carbon-free future, though perhaps that’s just a euphemism for wiping out humanity.

As important is that the advances already made cannot be uninvented. But already AI chatbots are falsifying information — or “hallucinating”, as their developers prefer to put it — and their inventors are not quite sure why. So there does seem to be an argument for slowing down and ironing out that teensy wrinkle before moving on to, you know, extinction-level technology.

A generous view of the tech leaders calling for themselves to be leashed is that they are being responsible and that it’s the other irresponsible actors they are worried about. They’d like to do more but, you see, the guys at Google can’t let themselves be beaten by the guys at Microsoft.

So these warnings are an attempt to shake politicians and regulators into action, which is damned sporting of them given that world leaders have such a stellar record of responding cooperatively and intelligently to extinction-level threats. I mean, come on. They’ve mentioned it to the US Congress. I don’t think we could ask much more. And the British government is now on the case, which would be more reassuring if it wasn’t still struggling to process asylum seekers in less than 18 months.

With luck, the warnings will indeed shock governments into useful action. Maybe this leads to global standards, international agreements and a moratorium on killer developments.

Either way, the AI gurus’ consciences are now clear. They’ve done all they can. And if one day, around 2025, the machines do indeed gain the power to obliterate us — sorry, many of us — I like to think that in the final seconds the AI will ping out a last inquiry to the brilliant minds who knowingly blundered ahead with a technology that could destroy us without at that stage figuring out how to, you know, stop it doing so.

“Why did you carry on, knowing the risks?” asks Skynet. And in their final seconds the geniuses reply: “What do you mean? We signed a statement.”
