AI research boom draws scrutiny after single author claims 113 papers in one year

By Ryan General
Academics say artificial intelligence research is being diluted by a surge of rapidly produced papers that receive only cursory peer review. The criticism has intensified after Kevin Zhu, a recent Berkeley graduate, claimed authorship of 113 AI papers in a single year. Eighty-nine of those papers were accepted for presentation in December at a leading international conference focused on AI and machine learning, prompting researchers to question how such a volume of work could have been carefully evaluated.
Prolific publishing draws scrutiny
Zhu, who completed his bachelor’s degree in computer science at the University of California, Berkeley, now runs Algoverse, an AI research and mentoring company that works primarily with high school students. Many of those students are listed as co-authors on the papers, according to conference records and public descriptions of the program. Zhu graduated from high school in 2018 and began publishing academic work shortly afterward.
The papers attributed to Zhu cover a wide range of topics, including the use of AI to locate nomadic pastoralist populations in sub-Saharan Africa, to evaluate skin lesions and to translate Indonesian dialects. Researchers familiar with those fields say each area typically requires specialized domain knowledge, curated datasets and repeated experimental validation. They note that producing such work at scale would normally take years of focused research rather than months.
On LinkedIn, Zhu touts having published “100+ top conference papers in the past year” and says the work has been cited by organizations including OpenAI, Microsoft, Google, Stanford, MIT and Oxford. Academics, including University of California, Berkeley computer science professor Hany Farid, caution that early citations can occur before findings are independently tested and do not necessarily reflect methodological rigor. They also note that conference citations often precede journal-level review.
Review capacity strained
In an interview with The Guardian, Farid described Zhu’s papers as “a disaster,” saying the case illustrates how publication incentives and automation have outpaced existing safeguards.
Submission volumes at major AI conferences have grown sharply in recent years. Flagship events such as NeurIPS and the International Conference on Learning Representations now receive tens of thousands of submissions annually, according to conference organizers. Reviewers, who are typically unpaid volunteers, are often assigned multiple papers under tight deadlines.
For Farid, the problem is systemic. “I’m fairly convinced that the whole thing, top to bottom, is just vibe coding,” he said, referring to the reliance on AI tools to rapidly generate software and research outputs.