Commentary
Wall Street Journal

IAEA for AI? That Model Has Already Failed

Mike Watson
Associate Director, Center for the Future of Liberal Society
Samuel Altman testifies before the Senate on May 16, 2023, in Washington, DC. (Win McNamee via Getty Images)

Having been confined to academic discussions and tech conference gab-fests in years past, the question of artificial intelligence has finally caught the public's attention. ChatGPT, Dall-E 2 and other new programs have demonstrated that computers can generate sentences and images that closely resemble man-made creations.

AI researchers are sounding the alarm. In March, some of the field's leading lights joined with tech pioneers such as Elon Musk and Steve Wozniak to call for a six-month pause on AI experiments because "advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Last month the leadership of OpenAI, which created ChatGPT, called for "something like an IAEA" (the International Atomic Energy Agency) "for superintelligence efforts," which would "inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc." If the alarmists are right about the dangers of AI and try to fix it with an IAEA-style solution, then we're already doomed.