
AI Conference Rejects 497 Papers for Illicit AI Use
A major machine learning conference used hidden watermarks to catch researchers using AI to write peer reviews, leading to nearly 500 paper rejections. The bold move shows how academic communities are fighting to preserve trust in the age of AI.
The International Conference on Machine Learning just sent a clear message: trust matters more than convenience.
Organizers rejected 497 papers submitted to their Seoul conference this July after discovering their authors used AI inappropriately during peer review. The conference requires each paper's authors to review other submissions, but some researchers broke the rules by having AI write those reviews for them.
The detection method was brilliant. Conference organizers hid invisible watermarks in papers sent out for review. When someone fed the paper into an AI tool to generate their review, the watermark triggered the AI to include specific telltale phrases in its response. Those phrases revealed exactly who was taking shortcuts.
The rejection represents about 2% of all submissions. While that might sound small, it's a significant stand in a field where AI use has become nearly universal.
A 2025 survey from publishing company Frontiers found that more than half of researchers now use AI for peer review, often violating journal and conference policies. The practice has sparked fierce debate among scientists about whether AI belongs in the review process at all.
This year, ICML took an innovative approach. They created two separate review streams for the first time. One stream banned AI entirely, while the other allowed limited use. Authors and reviewers could choose their preference.

The 497 rejections came from people who violated the rules in either stream. Conference organizers explained their decision in a March 18 blog post, emphasizing that protecting community trust matters more than anything as the field evolves rapidly.
The Ripple Effect
The academic community's reaction has been mostly positive. Researchers on social media applauded the enforcement, with some suggesting other conferences adopt similar methods or even ban violators from future submissions.
The watermarking technique could spread beyond this single conference. Other academic gatherings now have a proven method to detect AI misuse without relying on honor systems alone.
Not everyone agrees the policy will work long term. Zhengzhong Tu, a computer scientist at Texas A&M University, warned it might backfire by pushing reviewers to find workarounds, potentially leading to even worse quality reviews.
But the conference's message stands: as AI becomes more powerful and pervasive, human judgment and honesty must remain at the center of scientific progress. Marie Soulière, who oversees editorial ethics at Frontiers, noted the case shows research communities desperately need clear guidance on responsible AI use.
The stakes are high because peer review forms the foundation of scientific trust. When researchers skip the hard work of evaluating their colleagues' studies, the entire system weakens.
By taking decisive action, ICML reminded thousands of scientists that some shortcuts aren't worth taking.
More Images




Based on reporting by Nature News
This story was written by BrightWire based on verified news reports.
Spread the positivity!
Share this good news with someone who needs it

