
UK Speeds Up Rules to Block Non-Consensual Images Online
Britain's tech regulator just fast-tracked new protections requiring companies to block intimate images shared without consent, moving the deadline up by months. The decision comes as part of a broader push to make the internet safer for women and girls.
Britain is accelerating new rules that will require tech companies to actively block non-consensual intimate images from spreading online, including AI-generated deepfakes.
Ofcom, the UK's communications regulator, moved its final decision from later this year to May after recognizing the urgent need for stronger protections. The new measures will take effect this summer, forcing social media sites and apps to use "hash matching" technology that can detect and remove harmful content before it spreads.
The timing matters because these images can be shared and reshared countless times within hours. Hash matching assigns each known illegal image a digital fingerprint, so copies can be identified instantly across platforms and blocked automatically.
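The fingerprint-and-block idea can be sketched in a few lines. Note this is a simplified illustration, not how platforms actually implement it: real deployments typically use perceptual hashes (such as PhotoDNA or PDQ) that still match after resizing or re-encoding, whereas the plain cryptographic hash below only catches exact copies. The blocklist contents here are hypothetical.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest acting as the image's 'digital fingerprint'.

    Simplified: a real system would use a perceptual hash so that
    near-duplicates (re-encoded or resized copies) also match.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes, blocklist: set[str]) -> bool:
    """Block an upload if its fingerprint matches a known harmful image."""
    return fingerprint(image_bytes) in blocklist

# Hypothetical usage: the blocklist would come from a shared industry
# database of hashes of known non-consensual images.
blocklist = {fingerprint(b"previously reported image bytes")}
print(should_block(b"previously reported image bytes", blocklist))  # True
print(should_block(b"unrelated image bytes", blocklist))            # False
```

The key design point is that platforms exchange only hashes, never the images themselves, which is why the same known image can be blocked across many services at once.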
Elena Michael, a campaigner from the advocacy group #NotYourPorn, welcomed the announcement but stressed the need to see how it works in practice. She pointed out that going after individual perpetrators isn't enough when multiple tech companies play a role in allowing harmful content to spread.
The new rules are part of a larger safety overhaul that includes making livestreaming safer for children and requiring faster responses to spikes in harmful content. Those additional protections will be finalized in autumn.

Meanwhile, the UK government is taking an even tougher stance through an amendment to the Crime and Policing Bill. Tech firms will have just 48 hours to remove non-consensual intimate images after they're reported, or face substantial fines and potential service blocks in the UK.
Prime Minister Keir Starmer called it the latest step in a "21st century battle against violence against women and girls" online. His message to tech companies was clear: they're now on notice.
The Ripple Effect
This accelerated timeline signals a shift in how governments are approaching online safety. Instead of waiting years for voluntary compliance, regulators are setting firm deadlines and backing them with real consequences.
The hash matching technology already exists and works effectively on some platforms. Expanding it across all major sites and apps means harmful images could be stopped from spreading almost immediately instead of staying online for days or weeks while victims struggle to get them removed.
For survivors of image-based abuse, the 48-hour removal requirement represents a meaningful change from current practices where content can remain accessible indefinitely. It acknowledges that every hour an intimate image stays online without consent causes additional harm.
By summer, millions of people using UK-based platforms will have stronger protections in place.

Based on reporting by Independent UK - Good News
This story was written by BrightWire based on verified news reports.

