Why that open letter urging an AI development pause is problematic

Applied ethics isn't a checklist. It's about putting in the time and effort to understand risks to wellbeing with the express intent of avoiding, mitigating and monitoring harm. It makes sense then to assume that the open call to pause AI development is a good thing. Well, yes – but no.

Let's talk about some things that are going on with the touted open letter signed by the likes of Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, Gary Marcus and Tristan Harris. More than an open letter, it is in many ways a letter of misdirection. And who authored it, exactly?

I have myself extensively criticised the current hype, so on the surface it might make sense for me to applaud any call "to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".

So before I begin listing why I don't applaud this letter, I want to bring in some concepts from the elements of digital ethics. While many of the 32 outlined elements apply here, these three cover a lot of bases:

  1. Monoculture. The homogeneity of those who have been provided the capacity to make and create in the digital space means that it is primarily their mirror-images who benefit – with little thought for the wellbeing of those not visible inside the reflection.
  2. Power concentration. When power is with a few, their own needs and concerns will naturally be top of mind and prioritized. The more their needs are prioritized, the more power they gain.
  3. Ethicswashing. Ethical codes, advisory boards, whitepapers, awards and more can be assembled to provide an appearance of careful consideration without any real substance behind the gloss.

The open letter itself could be seen as a potential example of ethicswashing. The idea is that by claiming to show attention to human wellbeing, the job of ethical consideration is done. Enough people around the world can internalise the idea that these powerful individuals are doing what they can to stop the dangers posed by AI. If they fail, the overarching message could be that "they've at least acted with the best of intent". It's a neat parlor trick that could give many powerful actors an opportunity to shed accountability.

"We tried. We have the receipts. Look at this letter".

It also conveniently leaves out of the equation any expressed interest in listening to more voices. So there is no true way of measuring any effort beyond signing the letter.

The most obvious way of measuring the success of this letter would be if a 6-month pause for AI development beyond the capacity of GPT-4 is respected. It's still rather nebulous how this actually helps anyone. It's also unclear who should verify whether or not a system more powerful than GPT-4 is being worked on during this announced timeframe, and how.

Why the letter should be cause for concern

I admit to having started my line of argument already, but here are some more reasons why I believe further skepticism and questioning of the open letter are in order.

The host organisation

It's curious that, in a time when source criticism is discussed on a daily basis, it has been lost on many journalists exactly where this open letter is hosted and what that organisation stands for. The Future of Life Institute is one of several organisations in a network of billionaire-driven technocrat fellowships that safeguard and promote a philosophy known as longtermism, with its roots in the effective altruism movement.

You may have heard of longtermism and the idea of ensuring that what we build today safeguards the interests of future humans who have not yet been born. But you may have missed how representatives of this philosophy have been shown to prioritise lives far in the future over current lives ("they are more in number"), to argue that lives in rich countries are more important to save than lives in poor countries ("they contribute more"), and to suggest that climate change can be toned down as it isn't an existential threat ("at least not to all of humanity").

As Rebecca Ackerman writes:

[Effective Altruism]’s ideas have long faced criticism from within the fields of philosophy and philanthropy that they reflect white Western saviorism and an avoidance of structural problems in favor of abstract math—not coincidentally, many of the same objections lobbed at the tech industry at large.

Understanding this philosophy gives context to some of the points that follow.

Sidenote: It was somewhat bizarre to see many people asking for confirmation that Max Tegmark had in fact signed the letter, when he is actually the president of the institute that is hosting the letter! Basic source checking still has a ways to go.

Essentially no mention of all the current harm

It's not like there isn't already harm happening due to these types of tools. Why would an open letter claiming to care for human wellbeing not acknowledge or outline the harm that is in fact happening today due to AI – and should be stopped now? It's an obvious opportunity for boosting awareness.

I am speaking for example of:

  • Security issues such as data breaches and privacy violations, which have already happened
  • The fact that these tools are trained on vast amounts of biased data and serve to perpetuate that bias
  • The fact that workers in Kenya and elsewhere are being exploited to train these tools, suffering harm themselves in order to remove harmful content – a practice long employed by social media companies.
  • An increase in the capture of biometrically inferred data that will severely impact human free will, as it enables widespread personal manipulation and gives authoritarian regimes more power to suppress dissent – or encourages democracies to move towards authoritarianism, putting the disenfranchised in harm's way.
  • Risks to climate due to significant energy use in large neural network training. It's valid to note that the ones who benefit the most from AI are the rich, and the ones who suffer most from the climate crisis are the poor. The latter group don't appear to get a lot of say in what these suggested 6 months of pause mean for them.
  • Bad actors with nefarious intent becoming hugely empowered to do damage with malware and scams, but also to invent chemical weapons.
  • How the tools are already disrupting art, literature and education (including the ownership of training data) without any opportunity to address these issues in a reasoned manner
  • Exclusion of a large part of the global population simply due to the limited number of languages that these tools are trained on.
  • Unsubstantiated claims of sentience that lead to unfounded fears (a harm that the letter itself contributes to)

The letter explicitly states that training of tools more powerful than GPT-4 should be paused – as if harm is only what happens beyond that point. I would argue that time is better spent addressing the harm that is already happening than any harm that might happen.

The focus of the letter does make more sense when you understand that "a humanity-destroying AI revolt" is what is explicitly of greater concern to the technocrats within the longtermist movement. Not the here and now.

Boosting the idea of sentience

The letter does an impressive job of fearmongering when it comes to convincing the reader of a soon-to-arrive intelligence that will "outnumber, outsmart, obsolete and replace us". Their biggest concern is expressed as a "loss of control of our civilization".

It could very well be the case that the authors of this letter are truly afraid, as all of a sudden they have this sense of being abused by AI in the same way that millions of other people are already being abused by AI.

This would explain why the letter doesn't delve into the current harms of AI. Those harms simply don't apply to the authors of the letter. The implicit fear is that the authors themselves could now be negatively impacted.

From the letter:

This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

It's the next version of AI that worries the authors, not the one that is already causing harm.

Misleading citations

The very first citation in the letter is the infamous Stochastic Parrots paper. But the authors of the letter completely misrepresent the study. In the words of one of the paper's authors, Dr. Timnit Gebru of the DAIR Institute:

The very first citation in this stupid letter is to our #StochasticParrots Paper, "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"

EXCEPT

that one of the main points we make in the paper is that one of the biggest harms of large language models, is caused by CLAIMING that LLMs have "human-competitive intelligence." They basically say the opposite of what we say and cite our paper?

You may want to read that again. This open letter that is selling the idea of sentience uses as its first reference a paper that explains how these types of sentience claims are one of the biggest harms. I mean, we could have stopped there.

As professor Emily M. Bender, also a Stochastic Parrots co-author, points out in her own comments on the letter, there are further issues when it comes to how well the citations support the letter's arguments.

The choice of "6 months"

You do have to wonder about the suggested timeframe in the letter:

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

Anyone who has ever worked in IT knows that 6 months is rarely enough to turn anything around, and certainly not enough to undo the premises of AI harm. Why would this length of time be chosen as the baseline? What could realistically be corrected after only 6 months? A half-year may in some contexts feel like a long time, but in this context it's the blink of an eye. This truly feels like a red herring.

When the letter suggests that "governments should step in and institute a moratorium" if the authors' stated rule of 6 months is not followed, it's not clear which governments are in question or why they should abide by the instruction. But we certainly know that few countries have a say in this development, even though the impact of these tools is already significant for most.

A further concern is that a true interest in helping people would not focus on drafting a predefined premise such as the one outlined in the open letter. The authors have already decided the exact rules for what is needed (a 6-month pause, or else). There is no acknowledgement of the importance of perspectives other than the one presupposed by the authors.

Ethics requires that we involve people who are harmed. Anyone serious about ethics would emphasise the importance of involving people who are at risk and who are already suffering consequences. Anyone serious about ethics would not make prior assumptions about how that inclusion of voices should happen.

But here we are… apparently 6 months of pausing development, of a specific type of AI that hasn't been released yet, is the answer.

We vs. they

There is a lot of We-posturing when talking about AI. But it's clear that the people who are building AI are a small number of They. It's not an Us. The issue at hand is how much power the They should be allowed to wield over the rest of the world.

And from the letter (my emphasis):

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt

I do wonder if all the people already suffering from the bias in these tools will identify themselves as part of this We, enjoy the wonder of this "AI summer", and prepare themselves to adapt. I'd suppose many are already being forced to adapt moment to moment, rather than being given more autonomy – likely while watching others reap the rewards.

The way this letter ignores ongoing harm, and insinuates sentience, speaks volumes about its intent.

The main reason the technocrat billionaires are conjuring AI at the same time as they explain how afraid they are of the thing they are conjuring is this: they believe they have a better chance of not suffering the abuse everyone else is suffering if they can claim to be its master.

This is what they mean with the word “loyal” in that open letter.

References

Ethics advocates expressing concern about the letter

The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess
The letter has been signed by Elon Musk, Steve Wozniak, Andrew Yang, and leading AI researchers, but many experts and even signatories disagreed.
Policy makers: Please don’t fall for the distractions of #AIhype
Below is a lightly edited version of the tweet/toot thread I put together in the evening of Tuesday March 28, in reaction to the open…
Timnit Gebru (she/her) (@timnitGebru@dair-community.social)
The very first citation in this stupid letter, https://futureoflife.org/open-letter/pause-giant-ai-experiments/, is to our #StochasticParrots Paper, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]” EXCEPT that on…
PR as open letter
So the “Future of Life Institute” just published an open letter arguing for the “AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4“. That letter has (at the point of me writing this) 1123 signatures, some of them by very respected and knowledgable…

About longtermism

The Dangerous Ideas of “Longtermism” and “Existential Risk” ❧ Current Affairs
So-called rationalists have created a disturbing secular religion that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites.
Inside effective altruism, where the far future counts a lot more than the present
The giving philosophy, which has adopted a focus on the long term, is a conservative project, consolidating decision-making among a small set of technocrats.
Why longtermism is the world’s most dangerous secular credo | Aeon Essays
It started as a fringe philosophical theory about humanity’s future. It’s now richly funded and increasingly dangerous
Against Jackpot-Longtermism
Examining the stories tech billionaires tell themselves

A curated list of relevant podcast episodes

AI Hype - Episode Playlist
A curated podcast playlist by Per Axbom, with a critical take on the ongoing AI hype. | 10 episodes.

More sources for the article

On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
Carbon Emissions and Large Neural Network Training
The computation demand for machine learning (ML) has grown rapidly recently, which comes with a number of costs. Estimating the energy cost helps measure its environmental impact and finding greener strategies, yet it is challenging without detailed information. We calculate the energy use and carbo…
ChatGPT Data Breach Confirmed as Security Firm Warns of Vulnerable Component Exploitation
OpenAI has confirmed a ChatGPT data breach as a security firm reported seeing the use of a component affected by an exploited vulnerability.
ChatGPT and large language models: what’s the risk?
Do loose prompts* sink ships? Exploring the cyber security issues of ChatGPT and LLMs.
AI and the American Smile
How AI misrepresents culture through a facial expression.
Who’s Training Our AI Apps? - The Riveter
Training AI to be really smart poses risks to climate
As artificial intelligence models grow larger and consume more energy, experts have begun to worry about their impact on Earth’s climate.
AI suggested 40,000 new possible chemical weapons in just six hours
It’s apparently not that hard to design poisons using AI.
Four reasons why hyping AI is an ethical problem
Bias, discrimination, privacy violations, lack of accountability — AI entails a lot of ethical problems. Hyping AI creates additional…
Worldcoin Promised Free Crypto If They Scanned Their Eyeballs With “The Orb.” Now They Feel Robbed.
The Sam Altman–founded company Worldcoin says it aims to alleviate global poverty, but so far it has angered the very people it claims to be helping.

Just another one of Sam Altman's companies.

AI and Human Rights
Content for Raoul Wallenberg Talk on AI and Human Rights. Full transcript, slidedeck and references.
AI responsibility in a hyped-up world
It’s never more easy to get scammed than during an ongoing hype. It’s March 2023 and we’re in the middle of one. Rarely have I seen so many people embrace a brand new experimental solution with so little questioning. Right now, it’s important to shake off any mass hypnosis and
