AI Tool Catches Bugs in AI-Written Code for Big Companies

Anthropic just launched a smart reviewer that automatically checks AI-generated code before it goes live, helping companies like Uber and Salesforce ship faster with fewer bugs. The tool costs $15-25 per review but could save enterprises thousands in fixing errors later.

Software companies have a new problem: their AI coding assistants work so fast that human reviewers can't keep up with checking all the code.

Anthropic just solved this bottleneck with Code Review, an AI tool that automatically checks other AI-written code for bugs and security risks. The product launched Monday for enterprise customers already using Claude Code, the company's coding assistant.

"Now that Claude Code is putting up a bunch of pull requests, how do I make sure those get reviewed in an efficient manner?" said Cat Wu, Anthropic's head of product. Pull requests are the standard way developers submit code changes before they become part of the final software.

The timing matters because AI coding tools have exploded in popularity. Claude Code alone has hit $2.5 billion in run-rate revenue since launching, with enterprise subscriptions quadrupling this year.

Here's how it works: once turned on, Code Review integrates with GitHub and automatically analyzes every code submission. It leaves comments directly on the code, explaining what might break and how to fix it. Comments follow a color-coded priority scheme: red flags the most serious problems, yellow highlights potential issues, and purple marks concerns tied to older code.
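To make the color-coded scheme concrete, here is a minimal sketch of how an automated reviewer might tag findings by severity before posting them as pull-request comments. The `Finding` fields, color badges, and function names are invented for illustration and are not Anthropic's actual API; they simply mirror the three priority levels the article describes.

```python
# Hypothetical sketch: severity-tagged review comments.
# Names and color mapping are illustrative, not Anthropic's real interface.
from dataclasses import dataclass

SEVERITY_COLORS = {
    "critical": "🔴",      # most serious problems
    "potential": "🟡",     # potential issues
    "pre-existing": "🟣",  # concerns tied to older code
}

@dataclass
class Finding:
    severity: str
    file: str
    line: int
    message: str

def format_comment(finding: Finding) -> str:
    """Render one finding as a short, actionable review comment."""
    badge = SEVERITY_COLORS[finding.severity]
    return f"{badge} {finding.file}:{finding.line}: {finding.message}"

findings = [
    Finding("critical", "auth.py", 42, "Token is never validated before use."),
    Finding("potential", "cache.py", 10, "Cache key may collide across tenants."),
]
for f in findings:
    print(format_comment(f))
```

In a real integration, each formatted string would be posted as a pull-request review comment through GitHub's API rather than printed.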

The Ripple Effect

Multiple AI agents work together to review code from different angles, then a final agent ranks the findings and removes duplicates. The focus stays on logic errors that could cause real problems, not just style preferences. "A lot of developers have seen AI automated feedback before, and they get annoyed when it's not immediately actionable," Wu explained.
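The multi-agent pipeline described above can be sketched in a few lines: several reviewer "agents" each emit findings, then a final pass removes duplicates and ranks what remains by severity. The agent names, tuple layout, and ranking are assumptions made for illustration, not the product's actual internals.

```python
# Hypothetical sketch of the review pipeline: multiple agents produce
# findings, a final step deduplicates and ranks them. All names invented.
SEVERITY_RANK = {"critical": 0, "potential": 1, "pre-existing": 2}

def security_agent(diff):
    # Pretend output of a security-focused reviewer.
    return [("critical", "auth.py:42", "Token never validated")]

def logic_agent(diff):
    # Pretend output of a logic-focused reviewer; note the overlap.
    return [("critical", "auth.py:42", "Token never validated"),
            ("potential", "cache.py:10", "Possible key collision")]

def merge_and_rank(agent_outputs):
    """Drop duplicate findings, then sort survivors by severity."""
    seen, merged = set(), []
    for findings in agent_outputs:
        for f in findings:
            key = (f[1], f[2])  # same location + message == duplicate
            if key not in seen:
                seen.add(key)
                merged.append(f)
    return sorted(merged, key=lambda f: SEVERITY_RANK[f[0]])

diff = "..."  # placeholder for a pull request's changes
for severity, location, message in merge_and_rank(
        [security_agent(diff), logic_agent(diff)]):
    print(severity, location, message)
```

The dedup-then-rank step is what keeps the feedback "immediately actionable": developers see each issue once, worst first.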

Companies like Uber, Salesforce, and Accenture are already testing the tool. Engineering leaders can customize it to check for internal best practices and security standards on top of the basic bug detection.

The service costs $15 to $25 per review on average, depending on code complexity. While that sounds steep, catching bugs early prevents much costlier fixes after software ships to customers.

The bigger picture shows AI tools creating their own ecosystem. As AI writes more code faster than ever, other AI tools step in to review that work at the same speed. This cycle lets development teams move quickly without sacrificing quality.

Enterprise teams can now generate new features faster while actually reducing bugs compared to traditional coding methods.

Based on reporting by TechCrunch

This story was written by BrightWire based on verified news reports.
