Minnesota Becomes First State to Ban Deepfake Nude Apps

🦸 Hero Alert

After discovering a family friend created nonconsensual deepfake images of her and 80 other women, Molly Kelley spent two years fighting for a new law. Minnesota just became the first state to ban nudification apps entirely, targeting the tools themselves instead of just the people who misuse them.

When Molly Kelley discovered that someone she trusted had created fake nude images of her using artificial intelligence, she learned something shocking: there was almost nothing she could do about it legally.

The images never left his computer, so distribution laws didn't apply. There was no proof he planned to share them, ruling out revenge porn statutes. None of the victims were minors, so possession wasn't criminal.

Kelley could have accepted that dead end. Instead, she spent two years educating lawmakers, testifying, and advocating for change while raising two kids, working full time, and completing law school.

Last week, her efforts became law. Governor Tim Walz signed House File 1606, making Minnesota the first state in America to ban nudification apps outright.

These apps let anyone upload a photo of a clothed person and automatically generate a fake nude image. No technical skill needed, no Photoshop expertise required. Just point, click, and violate someone's dignity.

The new law doesn't just punish people who create these images. It targets the technology itself, allowing survivors to sue app owners for damages. Minnesota's attorney general can now collect fines up to $500,000 per violation.

The law includes an exemption for general editing tools like Photoshop, where creating fake intimate images requires real technical knowledge. The target is software built specifically to automate abuse.

The Ripple Effect

The timing matters more than ever. In just nine days last December, X's chatbot Grok generated over 1.8 million sexualized images of women after the platform enabled free image generation, according to The New York Times and the Center for Countering Digital Hate.

One in 10 women experienced tech-facilitated sexual abuse in the past year, according to a Centers for Disease Control survey. Over a lifetime, that number rises to one in three.

Federal protections have stalled repeatedly. The DEFIANCE Act passed the Senate twice but never reached a House vote. Other legislation criminalizes sharing these images but doesn't let survivors sue the companies profiting from the tools that create them.

Kelley understood something crucial: the harm begins at creation, not distribution. "These images don't exist without a third-party involvement and some sort of machine learning model," she said.

California and other states have started holding creators accountable, but Minnesota is the first to go after the underlying technology. Other states are watching closely, though potential federal preemption of state AI laws could complicate what comes next.

For now, one woman's refusal to accept "there's nothing we can do" has created a blueprint for protecting people from digital abuse before it happens.

Based on reporting by Optimist Daily

This story was written by BrightWire based on verified news reports.
