
MIT Student Turns Sound Into Dancing Visual Art With AI
A master's student who grew up without access to music lessons is now creating technology that lets anyone turn their favorite songs into stunning, ever-changing visual performances. His AI system makes music visible in real time.
Mariano Salcedo couldn't afford music lessons growing up in Mexico and Texas, but that didn't stop him from dreaming about creating something beautiful with sound. Now, as one of MIT's first master's students in Music Technology and Computation, he's building an AI that lets anyone turn music into mesmerizing visual art.
Salcedo's system uses neural cellular automata, an AI technique in which an image grows cell by cell and can regenerate and respond to outside stimuli. Paired with music, these images "show" sound in action, creating unique visual performances that dance and shift with every beat.
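For readers curious about the mechanics, here is a minimal sketch of the general idea: a cellular-automaton grid whose learned update step is nudged by the loudness of the music, so louder passages drive bigger visual changes. This is not Salcedo's code; the PyTorch framing, the class and parameter names, and the way audio energy is injected are all illustrative assumptions.

```python
# Illustrative sketch only -- not Salcedo's implementation. Assumes PyTorch.
import torch
import torch.nn as nn

class AudioReactiveNCA(nn.Module):
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        # Perception: a fixed-size 3x3 convolution lets each cell see its neighbors.
        self.perceive = nn.Conv2d(channels, channels * 3, 3, padding=1,
                                  groups=channels, bias=False)
        # Update rule: a small per-cell network maps the perception (plus one
        # extra audio-energy channel, an assumption here) to a state change.
        self.update = nn.Sequential(
            nn.Conv2d(channels * 3 + 1, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, state, energy):
        # state: (batch, channels, height, width) grid of cells
        # energy: scalar in [0, 1], e.g. the loudness of the current audio frame
        b, _, h, w = state.shape
        perception = self.perceive(state)
        energy_map = torch.full((b, 1, h, w), float(energy))
        delta = self.update(torch.cat([perception, energy_map], dim=1))
        # Louder music produces larger updates; quiet passages let the pattern
        # settle. The 0.1/0.9 scaling is purely illustrative.
        return state + (0.1 + 0.9 * energy) * delta

# Usage: step the grid once per audio frame, render the first 3 channels as RGB.
nca = AudioReactiveNCA()
grid = torch.rand(1, 16, 64, 64)
for frame_energy in [0.2, 0.8, 0.5]:   # stand-in for per-frame loudness values
    grid = nca(grid, frame_energy)
rgb = grid[0, :3].clamp(0, 1)          # (3, 64, 64) image ready to display
```

In a real system the update network would be trained so the grid converges to and regenerates a target pattern, and the energy signal would come from live audio analysis rather than a hard-coded list.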
The best part? Salcedo designed a web interface where anyone can plug in their favorite songs and watch them come to life. Users can adjust how the visuals respond to the music's energy, creating one-of-a-kind displays that complement the listening experience.
Salcedo's path to this breakthrough wasn't straightforward. He started MIT as a mechanical engineering student through the QuestBridge program, completing two-thirds of his degree before a life-changing encounter with a chatbot sparked his fascination with AI. He switched majors completely and started over.
Music called him back when he began DJing at MIT. Without traditional instrument training, he discovered he could create engaging soundscapes using technology and a digital audio workstation.

Professor Eran Egozy, who co-founded Harmonix Music Systems (the company behind games like Rock Band), became Salcedo's mentor. "He is a beautiful example of a multidisciplinary researcher who thinks deeply about how to best use technology to enhance and expand human creativity," Egozy says.
The program itself represents a new frontier, bringing together MIT's humanities and engineering schools to explore computational approaches to music. A fellowship funded by Alex Rigopulos, Egozy's Harmonix co-founder, supports Salcedo's work.
Why This Inspires
Salcedo's journey shows how obstacles can redirect us toward unexpected innovations. The kid who couldn't afford music lessons is now democratizing creative expression, making it possible for anyone to visualize sound without expensive equipment or training.
His commitment goes beyond technology. Salcedo hopes more diverse voices in AI development will help address bias and ensure responsible use of these powerful tools.
MIT's humanities school selected Salcedo to deliver the student address at the 2026 Advanced Degree Ceremony. "It's an honor, and it's daunting," he says, though he's eager to embrace the responsibility.
Technology gave Salcedo a voice when traditional pathways were closed, and now he's building tools to amplify creativity for everyone.
Based on reporting by MIT News
This story was written by BrightWire based on verified news reports.

