
Major AI Firms Fall Short on Global Safety Standards, Study Finds

GeokHub
Contributing Writer
GLOBAL, Dec 3 (GeokHub) - A recent safety-index assessment has revealed that several of the world’s leading artificial-intelligence companies, including OpenAI, Anthropic, xAI and Meta, are falling well below emerging global standards for AI safety and risk management.
The independent review found that none of the companies evaluated has a robust, credible framework in place to safely control advanced or potentially superintelligent AI systems. Basic safeguards, including transparent risk-assessment processes, concrete oversight mechanisms and clear mitigation plans, were largely missing or underdeveloped.
Though some firms have publicly committed to safety goals, the report warns that implementation of those commitments remains shallow. As development races ahead, critics argue that without stronger regulation or industry-wide safeguards, the widening gap between AI capabilities and safety practices could pose serious social, ethical and existential risks.