Major AI Firms Fall Short on Global Safety Standards, Study Finds

GeokHub

Contributing Writer


GLOBAL, Dec 3 (GeokHub) — A recent safety-index assessment has revealed that several of the world's leading artificial-intelligence companies — including OpenAI, Anthropic, xAI and Meta — are falling well below emerging global standards for AI safety and risk management.

The independent review found that none of the companies evaluated has a robust, credible framework in place to safely control advanced or potentially superintelligent AI systems. Basic safeguards — including transparent risk-assessment processes, concrete oversight mechanisms and clear mitigation plans — were largely missing or underdeveloped.

Though some firms have publicly committed to safety goals, the report warns that implementation of those commitments remains shallow. As development races ahead, critics argue that without stronger regulation or industry-wide safeguards, the widening gap between AI capabilities and safety practices could pose serious social, ethical, and existential risks.
