
Signal Creator Brings Privacy Tech to Meta's AI Chatbots
The privacy advocate who made end-to-end encryption mainstream is now working with Meta to protect your conversations with AI chatbots. Billions of people chat with AI daily, but unlike your private texts, those conversations aren't encrypted.
Moxie Marlinspike, creator of the Signal encryption technology that protects billions of messages every day, announced this week that he's bringing privacy protections to Meta's AI systems.
Right now, when you chat with an AI assistant, your conversation isn't private. The company running the service can see everything you say, and so can its employees, hackers, and potentially governments.
Marlinspike's new platform, Confer, aims to change that. He's integrating his privacy technology directly into Meta AI, bringing the same protection you get in an encrypted text message to your AI conversations.
The timing matters because people are sharing deeply personal information with chatbots. They're asking about health concerns, financial decisions, and confidential work matters. Without encryption, all that data sits on company servers, ready to be used for AI training or accessed by others.
Meta already uses Marlinspike's encryption in WhatsApp, where it protects over a billion accounts. But when WhatsApp introduced its Meta AI chatbot last year, those conversations weren't shielded the same way your personal chats are.

Why This Inspires
This collaboration represents a crucial step toward making privacy the default in AI, not an afterthought. Cryptography researchers are calling Confer "probably the best private AI solution" available today, though they note it's still evolving.
The technology works differently from traditional message encryption because generative AI poses a harder problem: the model has to process your message in order to answer it, so a conversation can't simply be locked away end to end the way a text thread can. Marlinspike has been building Confer on open source models, but working with Meta gives him access to more advanced AI systems.
WhatsApp head Will Cathcart emphasized the importance on social media. "People use AI in ways that are deeply personal and require access to confidential information," he wrote. "It's important that we build that technology in a way that gives people the power to do that privately."
Cryptographer JP Aumasson praised the approach while acknowledging it isn't perfect yet. "Moxie knows what he's doing and has a solid track record," he told reporters.
Other researchers agree the collaboration could set a new standard. "I really hope more AI chatbots adopt this approach," said Mallory Knodel, a cryptography researcher at New York University who studies privacy in AI systems.
Marlinspike built his career on making privacy accessible to everyone, not just tech experts. He's now applying that same philosophy to AI at a moment when billions of people are forming new digital habits around chatbots.
The full details of how the technology will integrate into Meta's systems haven't been released, but the commitment signals a major shift in how tech companies think about AI privacy.
Based on reporting by Wired
This story was written by BrightWire based on verified news reports.