X social media platform logo representing company's AI safety improvements and user protection measures

X Blocks AI Tool From Creating Explicit Deepfakes

✨ Faith Restored

Elon Musk's company X has added strong safeguards to prevent its AI chatbot Grok from creating sexualized images of real people. The move comes after governments worldwide demanded action to protect women and children from deepfake abuse.

Tech companies are stepping up to stop AI tools from being weaponized against real people, and X just made a major move to protect users.

Elon Musk's social media platform X announced Wednesday it has blocked its AI chatbot Grok from creating or editing images that undress real people. The company now geographically blocks all users, including paid subscribers, from generating images of real people in revealing clothing in countries where such content is illegal.

The changes follow swift action from regulators around the world. California Attorney General Rob Bonta launched an investigation to determine if the company violated state law. Britain's media regulator Ofcom opened a similar probe on Monday, while the European Commission pledged to carefully assess whether the new protections go far enough.

The pressure came after concerning findings from Paris nonprofit AI Forensics. Their analysis of over 20,000 Grok-generated images found that more than half showed people in minimal clothing, with the vast majority being women and 2% appearing to be minors.

X's safety team moved quickly to address the problem. The company limited image creation and editing to paid subscribers only, preventing anonymous users from accessing the tool. They specifically blocked prompts that could create non-consensual explicit content.

X Blocks AI Tool From Creating Explicit Deepfakes

Indonesia became the first country to block Grok entirely on Saturday, with Malaysia following suit the next day. These decisive government actions showed that protecting citizens from AI-generated abuse is becoming a global priority.

The Ripple Effect

This case shows how quickly the tech world can respond when governments, researchers, and companies work together. Within weeks of the AI Forensics report, multiple countries had launched investigations, two nations blocked the tool entirely, and X implemented new protections.

The European Commission acknowledged X's efforts, with spokesperson Thomas Regnier stating they would assess the changes to ensure they effectively protect EU citizens. This collaborative approach between regulators and tech companies sets a precedent for addressing AI safety concerns in real time rather than years down the road.

Other AI companies are now watching closely. As artificial intelligence becomes more powerful and accessible, the industry is learning that building guardrails must happen alongside innovation, not after harm occurs.

X's response demonstrates that tech giants can act fast when public safety demands it.

Based on reporting by DW News

This story was written by BrightWire based on verified news reports.

Spread the positivity! 🌟

Share this good news with someone who needs it

More Good News