A short statement that reads "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" was published by the Center for AI Safety today. Titled "Statement on AI Risk," it's signed by OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, among many other notable figures and scientists in AI.
The statement comes on the heels of Altman's appearance before Congress, where he suggested an AI licensing regime for models whose capabilities may be risky, but only "as we head toward AGI."
At the end of March, many of the statement's signatories (Altman, among others, excepted) signed an open letter published by the Future of Life Institute called "Pause Giant AI Experiments," which Elon Musk also signed.
The Center for AI Safety is a non-profit whose mission is “to reduce societal-scale risks from artificial intelligence.”
-Brandon Gorrell