
Google, Microsoft, xAI Submit AI Tools for Safety Testing
Three tech giants are voluntarily testing their AI models with the US government before public release. The move expands safety partnerships and shows growing commitment to responsible AI development.
Major tech companies are stepping up to make AI safer for everyone.
Google, Microsoft, and Elon Musk's xAI have agreed to submit their artificial intelligence tools to the US Department of Commerce for safety testing before releasing them to the public. The voluntary partnerships expand on agreements made by AI companies like OpenAI and Anthropic during the previous administration.
The testing happens through the Center for AI Standards and Innovation, which has already evaluated 40 AI models. Some cutting-edge models have stayed under wraps after testing, though officials haven't specified which ones.
"These expanded industry collaborations help us scale our work in the public interest at a critical moment," said CAISI director Chris Fall. The evaluations will cover testing, collaborative research, and best practices for commercial AI systems.
The agreements include Google's Gemini chatbot, which is now being used by US defense and military agencies. Microsoft's Copilot and xAI's Grok chatbot will also undergo safety reviews.

The move represents a shift in how the Trump administration approaches AI oversight. While last year's executive orders focused on removing regulatory barriers to "win the AI race," the White House is now embracing more collaborative safety measures.
The Ripple Effect
This partnership shows that innovation and responsibility can go hand in hand. By testing AI tools before release, tech companies are protecting users while still advancing technology.
The timing matters too. As AI becomes more powerful and integrated into defense systems, getting safety right from the start protects everyone. Recent claims about models being "too powerful" for public release highlight why testing matters.
These voluntary agreements prove that tech leaders and government can work together on challenges that affect us all. When companies choose safety over speed, everyone benefits from smarter, more trustworthy AI tools.
The message is clear: the future of AI includes both breakthrough innovation and careful safeguards.

Based on reporting by BBC Technology
This story was written by BrightWire based on verified news reports.


