Tech leaders' Open Letter proposed a pause on ChatGPT. But researchers already know how to make artificial intelligence safer.

Last week, the Future of Life Institute published an open letter proposing a six-month moratorium on the “dangerous” AI race. It has since been signed by over 3,000 people, including some influential members of the AI community. But while it is good that the risks of AI systems are gaining visibility within the community and across society, both the issues described and the actions proposed in the letter are unrealistic and unnecessary.
The call for a pause on AI work is not only vague, but also unfeasible. While the training of large language models by for-profit companies gets most of the attention, it is far from the only type of AI work taking place. In fact, AI research and practice are happening in companies, in academia, and in Kaggle competitions all over the world, on a multitude of topics ranging from efficiency to safety. This means that there is no magic button anyone can press that would halt “dangerous” AI research while allowing only the “safe” kind. And the risks of AI named in the letter are all hypothetical, based on a longtermist mindset that tends to overlook real problems like algorithmic discrimination and predictive policing, which are harming individuals now, in favor of potential existential risks to humanity.

Instead of focusing on ways that AI may fail in the future, we should focus on clearly defining what constitutes an AI success in the present. This path is eminently clear: Instead of halting research, we need to improve transparency and accountability while developing guidelines around the deployment of AI systems. Policy, research, and user-led initiatives along these lines have existed for decades in different sectors, and we already have concrete proposals to work with to address the present risks of AI.

Another crucial step toward safety is collectively rethinking the way we create and use AI. AI developers and researchers can start establishing norms and guidelines for AI practice by listening to the many individuals who have been advocating for more ethical AI for years. This includes researchers like Timnit Gebru, who proposed a “slow AI” movement, and Ruha Benjamin, who stressed the importance of creating guiding principles for ethical AI during her keynote presentation at a recent AI conference. Community-driven initiatives, like the Code of Ethics being implemented by the NeurIPS conference, are also part of this movement, and aim to establish guidelines around what is acceptable in terms of AI research and how to consider its broader impacts on society.


The recent open letter treats superhuman AI as if it were a done deal. But in reality, current AI systems are simply stochastic parrots, built using data from underpaid workers and wrapped in elaborate engineering that provides the semblance of intelligence. It’s not too late to flip the narrative, to start questioning the capabilities and limitations of these systems, and to demand the accountability and transparency that many in and beyond the field have already been calling for.

This must be done not just by policymakers, but also by the users of these technologies, who have the power to help shape both the present and the future of AI. Since AI models are increasingly deployed in all sectors of society, including high-stakes ones like education, medicine, and mental health, we all have a role to play in deciding what is and is not considered acceptable: participating in democratic processes aiming to legislate AI, refusing to use systems that aren’t sufficiently transparent, and demanding oversight and accountability from the creators and deployers of AI technologies.
