Thursday, 16 November 2023


We must stop AI replicating the problems of surveillance capitalism

There should be less focus on ‘Terminator’-style scenarios and more on economic data disclosure

Rana Foroohar

Illustration © Matt Kenyon


Artificial intelligence has been at the centre of the global conversation in recent days, with a major summit in the UK and a new executive order coming down from the White House. 

Much of the discussion has centred on how companies and regulators might prevent futuristic disasters triggered by AI, from nuclear war to a pandemic. 

But there’s a real-time problem that’s getting far less attention: how to ensure that AI doesn’t eat everyone’s economic lunch. 

I’m not referring just to the AI-related job disruption that may be coming down the road. 

That, at least, is a known challenge. 

I’m talking instead about the way in which AI will both replicate and increase the problems of surveillance capitalism. 

By this I mean the way in which user data and attention are controlled and monetised by a handful of large technology players, who are able to extract economic rents that are disproportionate to the value they add. 

As any number of antitrust actions in the US and Europe show, we have yet to tackle this problem in areas like internet search, digital advertising and social media, let alone AI. 

A big part of the reason for that is that “you can’t regulate what you don’t understand,” says Tim O’Reilly, the CEO of O’Reilly Media and visiting professor of practice at the UCL Institute for Innovation and Public Purpose. 

In a paper on rents in the “attention economy” released last week with Mariana Mazzucato and Ilan Strauss, O’Reilly argues that “the more fundamental problem that regulators need to address is that mechanisms by which platforms measure and manage user attention are poorly understood.” 

For O’Reilly and his co-authors, “effective regulation depends on enhanced disclosures.”

Set aside AI for a moment and consider the metrics used by giant search engines, ecommerce platforms and social media companies to monetise attention. 

These include the number of users and the time they spend on a site, how much they buy and in response to which ads, the ratio of organic clicks to ad clicks, how much traffic is sent to outside sites, the volume of commerce in a given industry and what percentage of fees go to third-party sellers.

Any surveillance business model will make use of these key metrics. 

And yet, as the authors note, it is only the more traditional financial metrics that are reported regularly and consistently in public documents. 

This results in obfuscation because those financial reports are “almost completely disconnected from the operating metrics that are used to actually manage so much of the business.”

Companies will, of course, argue that such metrics are proprietary and would allow third parties to game their systems if they were known. 

But, as current antitrust cases involving Big Tech companies aim to demonstrate, such parties, along with customers, have themselves been hurt.

The trouble in gauging harm is that so much about digital business models and how they work is opaque. 

And this is even more true when we shift the focus to large language models and generative AI. 

While their operational models are different to those of search engines or ecommerce, they do also depend on user attention and algorithmic authority. 

And, as the regulatory conversations of the last few days have shown, these are very poorly understood, both alone and in relation to each other.

The new White House executive order has provisions that would force AI developers of “dual-use foundation models” — meaning those that could be used for either military or civilian purposes — to provide updates to federal government officials on security testing. 

Such testing would have to be “robust, reliable, repeatable, and standardised.” 

The US Department of Commerce is tasked with developing standards for detecting and labelling AI-generated content.

It’s a good start, but it’s not enough. 

White House deputy chief of staff Bruce Reed, who led efforts on the new executive order, told me last week that “we wanted to do everything we could with the tools that we have,” and that the administration hopes the order will “help build consensus around what we can do.” 

That might include Federal Trade Commission cases on AI monopoly power; the order explicitly calls for a “fair, open and competitive AI ecosystem”.

But 30 years after the advent of the consumer internet, Big Tech platforms themselves are only now facing major monopoly suits. 

There’s an argument to be made that we need a bit less focus on Terminator-style worst-case scenarios for AI, and much more specific economic data disclosure to curb the new technology in the here and now, before it gains too much power. 

For example, White House proposals don’t deal specifically with immediate economic harms such as the use of copyrighted data in training models.

There has been a robust debate about how to balance safety and innovation when it comes to AI. 

If the Commerce Department is smart, it might use the executive order as a lever to force AI developers, which include many large platforms, to open up their black boxes and show us how these businesses really work. 

That would be a step towards identifying key metrics for a public disclosure scheme, which is a must for good regulation. 

We failed to come up with a better accounting system for surveillance capitalism. 

Let’s not make that mistake with AI.
