Eliza Strickland, writing in IEEE Spectrum »
The just-released AI Safety Index graded six leading AI companies on their risk assessment efforts and safety procedures… and the top of the class was Anthropic, with an overall score of C. The other five companies—Google DeepMind, Meta, OpenAI, xAI, and Zhipu AI—received grades of D+ or lower, with Meta flat-out failing.
While the report does not issue any recommendations for either AI companies or policymakers, Max Tegmark, the MIT professor who leads the Future of Life Institute (which produced the index), feels strongly that its findings show a clear need for regulatory oversight: a government entity equivalent to the U.S. Food and Drug Administration that would approve AI products before they reach the market.
“I feel that the leaders of these companies are trapped in a race to the bottom that none of them can get out of, no matter how kind-hearted they are,” Tegmark says. Today, he says, companies are unwilling to slow down for safety tests because they don’t want competitors to beat them to the market. “Whereas if there are safety standards, then instead there’s commercial pressure to see who can meet the safety standards first, because then they get to sell first and make money first.”