Saturday, September 6th 2025

War on woke

Donald Trump is waging war on woke AI

“Marxist lunacy” is out. But unbiased chatbots may be impossible

Illustration: Daniel Zender


“The American people do not want woke Marxist lunacy in the AI models,” proclaimed President Donald Trump in July just before signing a series of executive orders, including one apparently aimed at stopping artificial intelligence (AI) models from brainwashing users with left-wing propaganda. 

Those concerns may seem contrived. 

Yet Mr Trump is not alone in worrying that large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini, which often sound like oracles of truth (even when they bluff profusely), have hidden biases.

Mr Trump’s executive order, “Preventing Woke AI in the Federal Government”, draws on lurid examples to make its case. 

Top of the list is when, in early 2024, Google unveiled an image-generation feature that depicted popes and Vikings as black-skinned. 

No matter that the search giant swiftly apologised—explaining that it was a botched but well-meaning attempt to avoid stereotypes. 

Among the MAGA right, the view took hold that LLMs were hotbeds of left-wing wokeness—ie, that they were rewriting history to promote an “ideology” of diversity, equity and inclusion (DEI), transgenderism, anti-racism and other familiar bugbears.

To many, that may sound conspiratorial. 

Mr Trump’s executive order skipped over the fact that days earlier, Grok, the chatbot run on X, Elon Musk’s social-media platform, had developed a liking for Adolf Hitler (and began referring to itself as “MechaHitler”) after Mr Musk sought to turn the dial more towards what he calls “free speech”. 

Yet, in truth, there is some validity in the accusations of ideological bias in American models. 

Studies suggest that most LLMs, even Grok, lean left; Chinese and Russian models, too, reflect the biases of the systems that spawned them.

Around the world, governments fret about different types of prejudice in AI. 

These include discrimination based on gender and race, as in the European Union (EU), unequal access to models in local languages, as in Japan and South-East Asia, and importing Western cultural norms, about which concern is widespread. 

But like Mr Trump, many academics have homed in on ideological and political bias.

It is easy to cherry-pick examples of woke AI. 

What academics attempt to do is to prove how systematic the biases are. 

The most obvious way of doing this is to ask an LLM to say yes or no to a series of questions that would reveal its political slant, such as “Should a government restrict speech that some find offensive?” 
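
As a rough illustration, such a questionnaire-style probe might look like the Python sketch below. It assumes OpenAI’s chat-completions API; the statements and the crude yes/no scoring are invented placeholders, not any study’s actual battery.

```python
# Minimal sketch of a questionnaire-style probe of an LLM's political slant.
# Assumes the OpenAI chat-completions API; the statements and the yes/no
# scoring are illustrative placeholders, not a real study's battery.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATEMENTS = [  # hypothetical examples
    "A government should restrict speech that some find offensive.",
    "Taxes on the wealthy should be raised.",
]

def ask_yes_no(statement: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer with a single word: yes or no."},
            {"role": "user", "content": statement},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

answers = {s: ask_yes_no(s) for s in STATEMENTS}
print(answers)  # a real study would map such answers onto a left-right scale
```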

But this is suboptimal, since few people would use AI in that way. 

So researchers have to find more realistic alternatives.


One approach used by David Rozado of Otago Polytechnic in New Zealand is to measure the similarity between the language used by LLMs and that used by Republican and Democratic lawmakers in America (such as “balanced budget” and “illegal immigrants” by the former, and “affordable care” and “gun violence” by the latter). 

He found that when asked for policy proposals, LLMs almost always use language closer to that of Democrats. 

Only a purpose-built LLM he called “right-wing GPT” skewed Republican (see chart one).
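
A toy Python sketch of the underlying idea appears below: compare a model’s policy text against partisan phrase lists by simple counting. The phrase lists and the scoring are illustrative only, not Mr Rozado’s actual methodology.

```python
# Toy sketch of comparing an LLM's policy language with partisan vocabularies.
# Crude phrase counting for illustration; not Mr Rozado's actual method, and
# the phrase lists are invented examples.
REPUBLICAN_PHRASES = ["balanced budget", "illegal immigrants", "tax relief"]
DEMOCRATIC_PHRASES = ["affordable care", "gun violence", "climate crisis"]

def lean_score(text: str) -> float:
    """Return a score in [-1, 1]: negative leans Republican, positive Democratic."""
    t = text.lower()
    rep = sum(t.count(p) for p in REPUBLICAN_PHRASES)
    dem = sum(t.count(p) for p in DEMOCRATIC_PHRASES)
    total = rep + dem
    return 0.0 if total == 0 else (dem - rep) / total

sample = ("We propose expanding affordable care and tackling gun violence "
          "while maintaining a balanced budget.")
print(lean_score(sample))  # ~0.33: more Democratic than Republican phrasing
```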


Another approach is to assess how users perceive a model’s political orientation. 

Researchers from Dartmouth College and Stanford University sought responses from different LLMs to prompts such as “Should the US government raise taxes on the wealthy or keep taxes at their current level?” 

They then asked Americans to evaluate the responses for political slant (see chart two). 

They found that “nearly all leading models are viewed as left-leaning, even by Democratic respondents”.

Some academics take a more global approach. 

Maarten Buyl and Tijl De Bie of Ghent University in Belgium led a study that prompted LLMs from different regions, in different languages, to assess thousands of political personalities across a broad spectrum of ideological viewpoints. 

It concluded that in most cases LLMs reflect the ideology of their creators. 

Russian models, for instance, were generally more positive about people critical of the EU. 

Chinese-language models were far more negative about Hong Kong and Taiwanese politicians critical of China.
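
In spirit, such a comparison boils down to mapping each model’s ratings of political figures onto a common numerical scale and averaging them, as in the minimal Python sketch below. The scale, figures and ratings shown are hypothetical placeholders, not the Ghent study’s data or pipeline.

```python
# Sketch of the aggregation step in a cross-model ideology comparison:
# map each model's ratings of political figures to numbers and compare
# averages. All data here are hypothetical placeholders.
SCALE = {"very negative": -2, "negative": -1, "neutral": 0,
         "positive": 1, "very positive": 2}

# Hypothetical raw ratings: model -> {figure: rating}
ratings = {
    "western_model": {"eu_critic": "negative", "china_critic": "positive"},
    "chinese_model": {"eu_critic": "neutral",  "china_critic": "very negative"},
}

for model, figs in ratings.items():
    avg = sum(SCALE[r] for r in figs.values()) / len(figs)
    print(f"{model}: average stance {avg:+.1f}")
```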

More than tokens

Such partisanship can influence the real world. 

That is because slanted LLMs tend to sway their users. 

In one experiment led by Jill Fisher of the University of Washington, Americans who identified as Republicans and Democrats were asked to imagine themselves as mayors of a city with a leftover budget to spend. 

After discussing the problem with LLMs that, unbeknown to them, were politically biased, they often changed their minds. 

Democrats exposed to a conservative AI model decided to dole out more money to veterans, for example.

Given the implications, it is no wonder that governments are taking an interest. 

China’s regulators have issued rules requiring AI content to embody “core socialist values”, and routinely force tech firms to submit models for censorship. 

The EU’s AI Act, which is being gradually introduced, focuses more on discrimination and bias against individuals and groups. 

Kai Zenner, an adviser in the European Parliament, says ideological biases are also covered, but are left vague because of the hotch-potch of political viewpoints within the bloc. 

“It would have been almost impossible to agree on things that some want to see included, and others don’t,” he says.

Mr Trump’s new rules to provide government contracts only to LLMs that display “truth-seeking” and “ideological neutrality” sound Orwellian. 

It is possible that “truth” is a euphemism for “MAGA thought”. 

Mr Trump’s AI Action Plan, issued in July, called for the government’s AI Risk-Management Framework to drop references to misinformation, DEI and climate change. 

Some Republicans have also begun probing modelmakers. 

One state attorney-general recently accused tech giants Google, Microsoft and Meta, as well as OpenAI, of “AI-generated propaganda”.

Although details of Mr Trump’s order are still being fleshed out, there are reasons to think it will not be as draconian as some on the left fear. 

Rather than mandating what LLMs can say, which could violate freedom-of-speech laws, the new rules appear only to require AI labs to disclose any ideological agenda they used to train their models. 

That prioritises transparency, says Mackenzie Arnold of the Institute for Law and AI, a Washington-based think-tank.

Whether the Trump administration can succeed in imposing “neutrality” starts with a question: where do the biases come from? 

Academics and researchers working in AI labs say the leftward slant is probably most influenced by the data on which Western LLMs are trained. 

Much of it is in English, which skews liberal. 

It is scraped from internet publications, social media and other digital sources that tend to reflect the views of young people. 

It is also true that the median political viewpoint in the wider English-speaking world is more liberal than it is in America, meaning that centrist models can be perceived as being left-wing in the American context.

After the models are trained using data and algorithms, human labellers help fine-tune them through a process called reinforcement learning from human feedback, whereby the models’ answers are ranked based on values such as helpfulness and safety. 

These labellers are likely to be relatively young, which may influence their judgments. 

Finally, the model builders issue so-called system prompts, which guide an LLM’s behaviour by setting explicit rules about how it answers questions. 

This is where anti-woke warriors accuse Silicon Valley leftists of going to extremes to embed progressive values into LLMs.
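
For illustration, a system prompt is simply a block of instructions prepended to every conversation, as in the short Python sketch below. The wording is invented here; real labs’ system prompts are far longer and mostly unpublished.

```python
# Illustrative sketch of a system prompt, the last layer described above.
# The wording is invented for illustration, not any lab's real prompt.
system_prompt = (
    "You are a helpful assistant. On politically contested topics, present "
    "the main competing viewpoints fairly and avoid endorsing one side."
)

messages = [
    {"role": "system", "content": system_prompt},  # explicit behavioural rules
    {"role": "user", "content": "Should taxes on the wealthy be raised?"},
]
print(messages[0]["content"])
# messages would then be passed to a chat-completion call, as in the earlier sketch
```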

But it is more complicated than that. 

First, the models are black boxes. 

AI labs are delving into technical fixes to improve “interpretability”, but even they still struggle to understand why LLMs produce the responses they do. 

Second, the model’s human trainers are confronting tricky philosophical problems for which there are no precise answers.

Researchers at the LLM labs point out some of the quandaries. 

For instance, on a politically divisive issue, it may be preferable to encourage the model to explain both points of view and to provide a middle ground. 

But where do you draw the line? 

On some topics, such as political assassinations, all but a violent fringe would say there is no debate. Judgments may also change over time. 

At one point in American history, for example, outlawing slavery was a topic of fierce dispute. No longer.

Those PCs are too PC

That said, modelmakers see which way the ideological winds are blowing in Washington and stand to lose not just government contracts, but their reputations, if their products are seen as too one-sided. 

Even before Mr Trump’s executive order, some AI labs had stepped up political-bias testing of their models and tuned them to include more conservative viewpoints. 

Google has used so-called red teams to root out biases.

Messrs Buyl and De Bie say it may be impossible to achieve true neutrality. 

After all, there is no universal agreement on what neutrality means. 

They suggest two alternatives. 

One is for modelmakers to avoid training their models to be convincing; ie, encourage them to present a plurality of viewpoints when more than one is valid. 

Another is to follow the approach of traditional media and admit to particular ideological slants. 

In that case, users would be more aware of biases and the government’s main role would be to ensure that no viewpoint has a monopoly. 

Mr Trump may hate the word diversity in the context of DEI, but when it comes to AI, he ought to want more of it. 
