
AI Learns Obscure Code It Was Never Taught, Hits 96% Success
A USC student helped AI master a programming language 10,000 times rarer than Python by simply letting it learn from its mistakes. The breakthrough could transform how AI helps us solve problems in medicine, engineering, and beyond.
An AI just learned to code in a language so rare that even the researchers guiding it couldn't speak a word of it.
Minda Li, an undergraduate at the USC Viterbi School of Engineering, discovered something that challenges a common assumption about artificial intelligence. She found that AI doesn't just regurgitate what it's seen before. With the right approach, it can actually learn and improve in areas where it had barely any training.
Li tested this with GPT-5 and Idris, a programming language so obscure it has only 2,000 code repositories online. Python, for comparison, has 24 million. That's roughly 10,000 times less data for the AI to learn from.
At first, GPT-5 struggled miserably. It solved only 39% of coding exercises in Idris, compared to 90% in Python. Li tried providing documentation and reference guides, which helped a little, but nothing dramatic happened.
Then she tried something beautifully simple. Every time the AI wrote code that failed, she captured the error message from the compiler and fed it back to the model. She let it read its mistakes, understand what went wrong, and try again, up to 20 times per problem.
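The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the researchers' actual harness: the function names (`generate`, `compile_check`, `toy_model`, `toy_compiler`) and the stub Idris snippet are all hypothetical, standing in for a real language model call and a real Idris compiler invocation.

```python
MAX_ATTEMPTS = 20  # the study allowed up to 20 retries per problem


def feedback_loop(generate, compile_check, prompt, max_attempts=MAX_ATTEMPTS):
    """Ask a model for code; on failure, feed the compiler's error back and retry.

    generate(prompt, feedback) -> code string (hypothetical model wrapper)
    compile_check(code) -> (ok: bool, error_message: str) (hypothetical compiler wrapper)
    """
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        code = generate(prompt, feedback)
        ok, message = compile_check(code)
        if ok:
            return code, attempt
        # The key step: the next attempt sees exactly what the compiler complained about.
        feedback = message
    return None, max_attempts


# --- Toy demonstration with stubs (no real model or compiler involved) ---

def toy_compiler(code):
    # Pretend the compiler requires an explicit type signature.
    if "total : Nat" in code:
        return True, ""
    return False, "error: missing type signature 'total : Nat'"


def toy_model(prompt, feedback):
    # First draft omits the signature; after seeing the error, it adds one.
    if "missing type signature" in feedback:
        return "total : Nat\ntotal = 42"
    return "total = 42"


code, tries = feedback_loop(toy_model, toy_compiler, "define total")
```

In this toy run the stub model fails once, reads the error, and succeeds on the second attempt, which is the whole mechanism in miniature: the compiler's objective feedback, not new training data, is what drives the improvement.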

The results shocked even Li herself. The success rate jumped from 39% to 96%. "I thought we'd probably get a 10% jump," she said. "I was surprised that just that alone, seemingly one simple thing, just keep recompiling, keep trying, was able to get to 96%."
Her advisor, Professor Bhaskar Krishnamachari, sees this as a fundamental shift in how we understand AI. "Used to be, maybe a year or two ago, you would say an AI model is only as good as the data it has seen," he explained. "This paper is saying something different."
The ripple effect extends far beyond programming languages. Krishnamachari envisions using this feedback method to help AI design safer buildings, prove mathematical theorems, or reason through legal arguments. Any field with clear rules that can provide objective feedback could benefit.
Imagine an AI designing a bridge, receiving feedback that the structure is unsafe, and iterating until it's perfect. Or helping doctors develop treatment plans, learning from each patient outcome to improve the next recommendation.
The method works because it unlocks capability that was already hiding inside the AI, waiting for the right key. Li and Krishnamachari found that key: structured feedback and the freedom to learn from failure, just like humans do.
Neither researcher could write a single line of Idris code themselves, yet they guided an AI to near-mastery of it. That's the real breakthrough. We don't need to be experts to help AI become one.
Their findings will be presented at IEEE SoutheastCon 2026 this week, offering a glimpse of a future where AI doesn't just repeat what it knows but actually learns what it doesn't.
Based on reporting by Phys.org - Technology
This story was written by BrightWire based on verified news reports.


