
Tech Giants Agree to US Safety Tests for New AI Tools
Google, Microsoft, and xAI will now let the US government test their AI models for safety risks before public release. The voluntary program marks a major step toward protecting people from potential AI-related dangers.
Major tech companies are opening their doors to government safety inspectors in a move that could make artificial intelligence safer for everyone.
Google, Microsoft, and xAI have agreed to let the US Department of Commerce test their AI models before releasing them to the public. The Center for AI Standards and Innovation (CAISI) will examine each system for cybersecurity threats, biosecurity risks, and potential chemical weapons dangers.
"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said CAISI Director Chris Fall. The center has already completed 40 evaluations of other AI models, including some cutting-edge systems not yet available to the public.
Microsoft said the safety checks will help its Copilot assistant stay ahead of emerging threats such as AI-powered cyberattacks. The company sees the partnership as a way to build stronger, more secure AI tools for its users.
OpenAI and Anthropic signed similar agreements back in 2024, though those deals have been renegotiated under the current administration. OpenAI recently gave the government early access to GPT-5.5 for national security testing before its public launch this week.

The company is also working on GPT-5.5-Cyber, a specialized model designed to strengthen cyber defense capabilities. For now, only a limited group of early users can access the security-focused tool while experts develop responsible deployment strategies.
The Bright Side
This collaboration shows that innovation and safety can work hand in hand. Rather than slowing down progress, these voluntary partnerships give companies expert guidance on identifying risks before they reach millions of users.
The approach also represents a practical middle ground between over-regulation and the Wild West of unchecked AI development. By relying on existing regulatory bodies and domain experts instead of creating new bureaucratic layers, the program keeps oversight focused and efficient.
For everyday people, this means the AI tools landing on our phones and computers will have passed through an extra layer of scrutiny designed to catch problems early.
The tech industry is choosing transparency and accountability as AI becomes more powerful and widespread.

Based on reporting by Euronews
This story was written by BrightWire based on verified news reports.