OpenAI, DeepMind, Anthropic, and others are spending billions of dollars to build godlike AI. Their executives say they might succeed in the next few years. They don’t know how they will control their creation, and they admit humanity might go extinct. This needs to stop.
“Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.” (Sam Altman, OpenAI CEO, Feb 2015)
Leading AI companies are not just trying to build advanced chatbots and make money. Their aim has always been to build AIs far more powerful than humans, which they call AGI and superintelligence.
By AGI, they don't just mean AI that could perform all human tasks, automate science, and build nuclear weapons. Anthropic CEO Dario Amodei says AGI will give us “complete control over our own biology and neuroscience [and] could make us whoever and whatever we want to be”.
OpenAI CEO Sam Altman says whoever builds AGI could “capture the light cone of all future value in the universe”. This is not science fiction. What they're aiming for is AIs capable of dismantling the entire planet for resources, rebuilding humans into digital minds, and colonizing space.
AI companies admit that “no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless.” There is already evidence that AIs can be deceptive and are incentivized to seek power. If an AI gets much smarter than humans, experts like Turing Award winner Geoffrey Hinton warn there is a significant risk it will escape our control. This is not because the AI will hate us; it is because no one knows how to make it care about us.
As Ilya Sutskever, OpenAI Chief Scientist, puts it: “When the time comes to build a highway between two cities, we are not asking the animals for permission, we just do it, because it's important for us. And I think by default that's the kind of relationship that's going to be between us and AGIs which are truly autonomous … The future is going to be good for the AIs regardless; it would be nice if it would be good for humans as well.”
OpenAI CEO Sam Altman wrote that “superhuman machine intelligence is the greatest threat to humanity’s existence.” DeepMind co-founder Shane Legg, Anthropic CEO Dario Amodei, and many others have expressed similar sentiments. Despite this, they’re forging ahead.
Not only is this extremely dangerous, but the people building godlike AI may not be far from achieving it. Time and again, things people expected to be impossible for AIs have been solved soon after. In January 2023, economist Bryan Caplan predicted it would take six years before an AI could pass his exams; two months later, GPT-4 got a top score. In 2019, AIs could barely read and write. Today, AIs create award-winning photographs and art, code better than most programmers, and impersonate and deceive people.
There is no reason to believe this progress will halt at human level; no law of physics says human intelligence is the limit. Indeed, AIs are already clearly superhuman in many domains. They have absorbed nearly everything humanity has ever written, can recognize images faster and better than any human, can perform millions of complex tasks in parallel, and more.
The more AIs improve, the more they will be used to improve themselves and other machines. This is not a future possibility: AIs are already making better AIs. OpenAI is now seeking to raise $100 billion explicitly to build general AI systems capable of improving themselves. Anthropic plans to build systems ten times larger than GPT-4, expecting that models like this “could begin to automate large portions of the economy” and that “companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.”
We are barreling towards building dangerous, godlike AI that we don’t know how to control. This needs to stop.