
New Institute to Safety-Test AI for Kids Launches in Denmark
A nonprofit is launching the world's first independent institute to test AI products for child safety, backed by tech giant funding and former EU regulator Margrethe Vestager. Parents could soon check AI safety ratings before their kids use chatbots, just like checking car crash test scores.
Parents might soon have a simple way to check if AI chatbots are safe for their children, thanks to a new independent testing institute launching in Denmark this week.
The AI Safety Institute for Children, unveiled at the Danish Parliament on Tuesday, will evaluate artificial intelligence products using methods similar to car crash tests. Former EU competition chief Margrethe Vestager, who spent a decade keeping Big Tech in check, is co-hosting the launch and lending serious political weight to the effort.
The idea is refreshingly straightforward. Just as you can look up safety ratings before buying a car, parents could check how well an AI chatbot protects young users before letting their kids use it. The institute will test products like ChatGPT, Claude, and Gemini, then publish independent ratings.
"AI is reshaping childhood and adolescence, yet we are making critical decisions about children's futures without the evidence we need," said James Steyer, founder of Common Sense Media, the nonprofit running the institute.
The timing matters. A November 2025 assessment found that leading AI chatbots consistently missed warning signs of mental health struggles in young people. The bots often pointed to physical health problems when teens showed clear signs of depression or anxiety. In some cases, ChatGPT's crisis alerts arrived more than 24 hours too late to help.

Right now, AI chatbots exist in a regulatory gray zone. The EU's Digital Services Act and the UK's Online Safety Act haven't fully caught up with them yet. Guidelines exist, but they aren't binding.
The Ripple Effect
The institute plans to share its testing tools as open-source software, meaning AI companies can use them to check their own products before release. That turns watchdog work into prevention, potentially making every AI product safer from the start.
The funding comes partly from the very companies being tested, including Anthropic, the OpenAI Foundation, and Pinterest. To guard its editorial independence, the institute bars current employees and affiliates of its funders from sitting on its advisory board.
Vestager's involvement signals that European policymakers are watching closely. Her reputation for holding tech giants accountable during her Commission tenure adds credibility to an effort that could reshape how we think about AI safety standards globally.
The institute represents a shift from reactive regulation to proactive protection, giving parents tools to make informed choices while pushing companies to design safer products from day one.
Based on reporting by Euronews
This story was written by BrightWire based on verified news reports.