A World of Unintended Consequences
One of the great, underappreciated facts about our technology-driven age is that unintended consequences tend to outnumber intended ones. As much as we would like to believe that we are in control, scholars who have studied catastrophic failures have shown that humility is ultimately the only justifiable attitude.
Edward Tenner
Nearly 90 years ago, a 26-year-old newly credentialed Harvard sociology PhD and future American Philosophical Society member, Robert K. Merton, published a paper in the American Sociological Review that would become one of the most frequently cited in his discipline: “The Unanticipated Consequences of Purposive Social Action.”
While the language of the paper was modest, it offered an obvious but revolutionary insight: many or most phenomena in the social world are unintended – for better or worse.
Today, even management gurus like Tom Peters acknowledge that "Unintended consequences outnumber intended consequences. … Strategies rarely unfold as we imagined.
Intended consequences are rare.”
Merton had promised a monograph on the history and analysis of the problem, with its “vast scope and manifold implications.”
Somewhere along the way, however, he abandoned the project, perhaps because it risked becoming a book about everything.
Moreover, his apparent retreat may have discouraged other social scientists from attempting it, revealing one of the paradoxes of the subject’s study: because it is so universal and important, it may be best suited for case studies rather than grand theories.
Ironically, while unintentionality-centered analysis might have produced a Copernican revolution in social science, it is more likely that it would have unleashed adverse unintended consequences for any scholar attempting it – just as Thomas Kuhn’s idea of scientific paradigms embroiled him in decades of controversies.
Besides, there are also ideological barriers to the study of unintended consequences.
For every enthusiast there seems to be a hater, and dwelling on the unintended consequences of an opponent’s policies invites retaliation in kind.
This was economist Albert O. Hirschman’s point in his own critique of the theme.
Hirschman himself had formidable credentials as a student of unintended consequences.
One of his most celebrated and controversial ideas, the “hiding hand,” was a spin-off of Adam Smith’s famous metaphor for the market (the invisible hand).
In Development Projects Observed, Hirschman noted that many successful programs might never have been launched had all the difficulties been known; but once a commitment was made, human ingenuity prevailed, and new and unforeseen solutions were found.
The Sydney Opera House, for example, exceeded its budget by 1,300%, but it turned out to be a bargain once it became Australia’s unofficial icon.
Uncovering the Unintended
In the 1990s, there was an energetic debate about unintended consequences that fixated on two themes.
One was the seemingly treacherous behavior of everyday devices, reflected in the famous Twilight Zone episode "A Thing About Machines," in which the snooty, arrogant critic Bartlett Finchley is done in by his fed-up domestic appliances.
The inverse theme concerned frequent, apparently foolish mistakes by users of technology, illustrated by failed experiments in the early Cold War space program.
Three giants dominated the study of unintended consequences in the 1990s: Yale sociologist Charles Perrow, Duke engineering professor Henry Petroski, and cognitive psychologist Donald Norman, originally of the University of California, San Diego.
Perrow’s work focused on technological disasters.
He had been inspired by the partial meltdown at the Three Mile Island nuclear plant in 1979, notable not for its loss of life or property, but for what it revealed about the world’s vulnerability to operator error in critical systems.
The event inspired a radical redesign of nuclear system controls, but also triggered a backlash against the technology that persists to this day.
Only recently have the consequences of fossil fuels and the electricity consumption of AI inspired a nuclear-energy revival – an unintended consequence of decisions aimed at preventing unintended consequences.
In a 1981 paper, Perrow introduced the influential concepts of “loose coupling” and “tight coupling” for analyzing risk in technological systems.
Instead of simply blaming operators, he focused on what he called “error-inducing systems.”
In the case of nuclear plants, he questioned the assumption that safety engineers could prevent all mistakes.
Even with the most careful design, some technical systems remain vulnerable to cascading failures triggered by a single, inevitable deviation – a case of tight coupling.
When two ships, the Stockholm and the SS Andrea Doria, collided in 1956, the loss of the Andrea Doria illustrated what Perrow and other safety experts called a radar-assisted collision.
Petroski’s career as an analyst of unintended consequences was also shaped by the Three Mile Island incident.
He had been doing important work on fracture analysis at Argonne National Laboratory, a center for nuclear research.
Anticipating funding cuts, he returned to academia with a new focus on interpreting technical matters – especially engineering failures – for lay audiences.
For Petroski, failure was both lamentable and inevitable, as designers developed new paradigms in response to the breakdown of previous models.
As they grew more confident, they tended to increase the scale of their work until it reached a tipping point in disasters such as the sinking of the Titanic in 1912 and the 1940 collapse, caused by unanticipated harmonic vibrations under wind load, of the Tacoma Narrows Bridge (“Galloping Gertie”).
Unlike Perrow, Petroski regarded catastrophes as necessary tragedies in the improvement of safety.
But he might have added that new safety measures, if not carefully applied, could themselves result in new tragedies, as when the excursion ship SS Eastland capsized in the Chicago River in 1915; the lifeboats and deck stiffening – measures added after the Titanic tragedy – had made the Eastland fatally unstable, as economist George W. Hilton showed in 1995.
Finally, Norman’s specialty is human-centered design.
He believes that errors and injuries can be prevented through a better understanding of human behavior and technology.
Thus, he and his collaborator David Rumelhart debunked the supposed advantages of the Dvorak “simplified” keyboard layout, demonstrating that it increased typing speed only marginally over the conventional QWERTY format.
Moving Fast and Breaking Things
Notwithstanding all the anomalies and paradoxes discovered by these researchers, the technological world of the 1990s was never going to remain under anyone’s control.
True, that world represented not so much a revolution as a digital extension of existing organizations and authorities.
Newspapers published electronic editions to expand access to their content, often without paywalls, supported by new forms of banner advertising.
Scientific and medical authorities used the internet to distribute the latest research findings to the public.
And libraries and museums made scans of documents and works of art available remotely.
Despite the dot-com bubble’s collapse in 2000, the 15 years from 1990 to 2005 were the golden age of information technology.
In the 1995 bestseller The Road Ahead, Microsoft's young CEO, Bill Gates, and his collaborators promised a consumer paradise of "friction-free capitalism."
The world could rely on Moore's Law, the observation that the number of transistors on a chip – and with it computing power – would double approximately every two years.
Then, following the September 11, 2001, terrorist attacks in the United States and the 2008 global financial crisis, there appeared to be a new dawn, owing to the widespread belief that social media would finally fulfill the countercultural and progressive goals of replacing hierarchies with cooperative structures.
But, as Mike Tyson famously said, “Everyone has a plan until they get punched in the face.”
In hindsight, the world established by journalistic, academic, medical, and political authority has obviously been repeatedly punched in the face by a new generation of entrepreneurs creating unexpected empires.
The unintended consequences of the innovations of the late twentieth and early twenty-first centuries were mostly micro, whereas the new ones – both positive and negative – would be macro.
First came Google, whose founders created an algorithm that seemingly directed information-seekers to the most relevant sources based on the results of previous searches.
Unlike Yahoo's curated model, Google left selection to algorithms, implying that popularity would trump authority.
It was a first step toward realizing the epistemological anarchism that had flourished in some corners of academia in the 1980s and 1990s.
Facebook and Twitter took crowdsourcing even further, promising new freedom to share and spread opinions without gatekeepers and without charge – an extension of Gates’s promised freedom from friction.
These platforms, along with others like YouTube, helped create the kind of virtual communities preached by the web visionaries of the 1980s and 1990s.
Clay Shirky, a professor of media studies at New York University and advocate of the “free culture movement,” exuberantly proclaimed the dawn of the social-media age in his 2008 book Here Comes Everybody: The Power of Organizing Without Organizations.
Web 2.0, as it was called, was an uppercut to traditional authorities’ jaw.
Social-media companies accumulated and deployed data on users’ backgrounds and browsing habits to target advertisements at a small number of likely buyers.
The diversion of advertising budgets toward these online audiences disrupted the business model of traditional media, leading to a string of closures and mergers that accelerated the concentration of US news media.
By the time the COVID-19 pandemic erupted, the staid media outlets that tended to give more weight to public-health messages from medical authorities had become increasingly scarce.
Who’s in Charge Here?
Another punch came with the rise of influencers from pop culture and politics.
During the pandemic, the British pop star Harry Styles reportedly had the greatest impact online, averaging 97,000 retweets per tweet about COVID-19.
Authorities and social-media platforms faced an intractable dilemma: because small numbers of dissenters were so zealous, their impact was disproportionate.
Yet moderating allegedly false information also helped spread it, since charges of censorship called even more attention to the content in question.
Even among credentialed experts, social media have elevated certain groups – namely, what I call the “alt-thorities”: men and women who had been marginalized in their fields but succeeded in appealing directly to the public.
Some of these figures now hold high positions in the Trump administration.
They include the protectionist economist Peter Navarro, the heart surgeon turned TV quack physician Mehmet Oz, the health economist Jay Bhattacharya, and especially the former environmental lawyer and anti-vaccine campaigner Robert F. Kennedy, Jr., who currently serves as US Secretary of Health and Human Services.
All have shattered the norm of politically neutral, consensus-driven science.
Online content creators, who had been promised freedom from arrogant gatekeepers, have also taken it on the chin.
In 2019, even before the pandemic, a study by the US Authors Guild revealed that in the eight years since 2009 (itself a recession year), the income of writers had declined by 42%, owing to the impact of Google, Facebook, and Amazon.
For example, Amazon’s dominance in the print and digital book markets encouraged the purchase of used copies and accelerated the consolidation of publishers, reducing competition for new book contracts.
Now, AI is delivering another blow to creators – writers, visual artists, and composers alike.
The leading large language models (LLMs) were originally trained on countless, mostly unlicensed texts and images created by humans.
Plaintiffs in litigation against these firms now face a lobbying juggernaut, with the firms taking refuge behind the principle of “fair use.”
Worst of all, the newer LLMs are increasingly being trained on their own previous output, threatening a downward spiral in quality known as model collapse or “sloppification.”
Soon after the introduction of LLMs, many elite professionals believed that while AI might replace journeyman work, it could also help people enhance their performance.
But recent research challenges this assumption.
In a recent study assessing generative AI models’ impact on the employment of freelance workers, the authors conclude that “we do not find evidence that high-quality service, measured by [freelancers’] past performance and employment, moderates the adverse effects on employment.
In fact, we find suggestive evidence that top freelancers are disproportionately affected by AI.”
While businesses still seem to prefer elite workers, a few are beginning to shift toward AI-enhanced journeymen, especially because evidence suggests that the elite may not be enjoying the expected multiplier effect that comes with a new technology; on the contrary, they may be less productive because of it.
Is “influencing” a substitute for creators’ declining conventional prospects?
Probably not.
In 2024, the Wall Street Journal reported that nearly half of all influencers made $15,000 or less in the previous year.
One TikToker received just $120 for a video viewed ten million times.
Finally, and tragically, influencing can lead to literal punches in the face, as police officers defending the US Capitol on January 6, 2021, learned.
During the previous two decades, many thinkers had predicted that self-organized protests through mobile networks and social media – dubbed “smart mobs” by the progressive technological visionary Howard Rheingold – would topple one corrupt authoritarian regime after another.
In the event, the pro-Trump insurrectionists imagined they were doing just that.
Shirky proved all too correct when he wrote: “The power to coordinate otherwise dispersed groups will continue to improve; new social tools are still being invented ... the freedom to act as a group is inherently political.”
But he left out the inverse: usability implies abusability – or what I call deviant ingenuity.
Anything Can Happen
I wonder what Merton would have made of the world of 2025, which seems more awash in unintended consequences than he or his generation ever imagined.
He was wise not to become a prophet.
Silicon Valley now seems to be biting the very hands that created it – the Ivy League, UC Berkeley, Stanford, and the National Science Foundation, not to mention the progressive newspapers and magazines that once extolled utopian cyberculture.
Nor does there seem to be any feasible way to counterpunch.
Yet, paradoxically, I find cause for hope in the idea that what left- and right-wing partisans see as powerful trends are, in fact, merely the cumulative results of a chaotic sequence of chance events.
While these results could well lead to catastrophe, they hold just as much potential for unexpected positive outcomes.
Consider the fate of Vienna. Between World Wars I and II, it was the defeated capital of a dissolved empire, torn between the violence of communists and fascists, then annexed by Nazi Germany, and then partly occupied by the Soviets until 1955.
Yet today, Vienna is ranked the world’s most livable city for the third year in a row.
Perhaps we can take comfort in Viennese journalist Alfred Polgar’s famous quip (often misattributed to his contemporary Karl Kraus): “The situation is hopeless but not serious.”
Edward Tenner is a Distinguished Scholar of the Smithsonian’s Lemelson Center and the author, most recently, of Why the Hindenburg Had a Smoking Lounge: Essays in Unintended Consequences (APS Press, 2025).