AI extinction risk is just a distraction from present-day harms.
You’re asking for a lot.
Regulation hurts innovation.
Regulating AI is infeasible.
Regulation might make things worse, because governments are bureaucratic and don’t understand the state of the art.
If we don’t build AGI, other, less cautious or worse-intentioned actors will.
We need to study state-of-the-art AI to learn how we could eventually align AGI.
Deploying AI now, and potentially experiencing small catastrophes caused by it, will help society prepare and increase the likelihood of a sufficient government response.
AGI development has benefits.
AI companies are building AI because they’re trying to make the future awesome.
Other AI development has benefits. We shouldn’t stop all AI to stop AGI.
Language models don’t really understand things. They’re just doing autocomplete.
Language models aren’t agents; they can’t do complex planning or take actions.
AIs can never be smarter than humans.
AI will never be powerful enough to kill everyone.
It will take a long time for AI to go from smarter-than-human to godlike.
Why would AI hate humans?
Why would AI want to kill everyone?
As AI gets smarter, won’t it realize what’s moral?
Can’t we just train AIs to know what’s good?