AI’s unchecked use in academia has exposed cracks in the publishing ecosystem, but this disruption may be the push needed to reform research culture.
In recent years, multiple research papers have mentioned the term ‘vegetative electron microscopy’. What sophisticated scientific theory or instrument does it refer to? None whatsoever. It is a nonsensical term picked up by Artificial Intelligence (AI) algorithms, which subsequently found its way into multiple scientific journals. Given the proliferation of AI into virtually every facet of society, its application to academic research is not particularly surprising. In this case, however, its influence can be particularly damaging. Scientific publications and research are a cornerstone of society, and any tampering or spread of misinformation therein is likely to have severe ramifications for scientific progress and innovation. That being said, the episode may yet turn out to be a blessing in disguise for the academic community.
More than 20 research papers have referred to vegetative electron microscopy, a term that does not exist in the scientific literature and has no basis in reality. Where did it originate, and why does it feature in so many papers? The term was first spotted by a Russian chemist, who commented in 2022 on the online forum PubPeer about a since-retracted paper published in the journal Environmental Science and Pollution Research. A subsequent investigation by a software engineer concluded that it was most likely an artefact of Optical Character Recognition (OCR) software, which conflated the terms ‘vegetative cell production’ and ‘electron microscopy’ printed in adjacent columns of a paper published in the 1950s; the resulting phrase was later picked up by Large Language Models (LLMs).
Figure 1. Source: Retraction Watch
The error may have been compounded by the fact that the Farsi words for ‘scanning’ and ‘vegetative’ are almost identical, which likely resulted in typos or made it harder for LLMs to differentiate between the two terms.
The repeated appearance of the term in multiple scientific papers raises several troubling questions, the most important being: how did these papers bypass the academic vetting process? For a research paper to be published in any scientific journal, it must first be peer-reviewed by multiple referees. The fact that the term found its way into journals from reputable publishers such as Springer and Elsevier is particularly disconcerting. The episode also raises the distressing possibility that there are several more such cases which have not yet been discovered or accounted for. While screening tools such as the ‘Problematic Paper Screener’ now flag these ‘tortured phrases’, they cannot do so for terms that have not yet been identified.
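At its core, this kind of screening amounts to matching manuscripts against a curated dictionary of known ‘tortured phrases’, which is precisely why novel mangled terms slip through. The sketch below illustrates the idea in Python; the phrase list and function name are illustrative assumptions, not the actual implementation of the Problematic Paper Screener.

```python
import re

# Illustrative dictionary of known tortured phrases mapped to the standard
# terms they garble. Real screeners maintain much larger curated lists.
TORTURED_PHRASES = {
    "vegetative electron microscopy": "scanning electron microscopy",
    "counterfeit consciousness": "artificial intelligence",
    "irregular woodland": "random forest",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (found_phrase, likely_intended_term) pairs present in text."""
    hits = []
    lowered = text.lower()
    for phrase, intended in TORTURED_PHRASES.items():
        # Word boundaries avoid matching inside longer unrelated strings.
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, intended))
    return hits

sample = "Samples were imaged using vegetative electron microscopy."
print(flag_tortured_phrases(sample))
# → [('vegetative electron microscopy', 'scanning electron microscopy')]
```

The limitation is plain from the code: a phrase absent from the dictionary produces no hit, so undiscovered fabrications pass silently.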
While this may be a particularly egregious example, other cases of fraudulent AI use have cropped up over the years. There is, for instance, the now-infamous case of Rafael Luque, a Spanish chemist who published a study every two working days in 2022 and had 11 of his studies retracted by publishers. The number of ‘hyperprolific researchers’ publishing over 60 papers annually has risen significantly in recent years, with countries such as Thailand, Saudi Arabia, Spain, India, Italy, Russia, Pakistan, and South Korea witnessing the largest increases. Meanwhile, China and the United States (US) have the largest number of extreme publishing authors in all areas except physics. With cuts to scientific research already underway in the US, including at institutions such as the National Science Foundation and the National Aeronautics and Space Administration (NASA), this erosion of credibility comes at the worst possible moment for the scientific community.
According to the publicly available Scimago Journal and Country Rank portal, over 350,000 academic papers were published in India across various journals in 2024. From the numbers alone, it can be concluded that a sizeable portion of these do not constitute groundbreaking research. This is corroborated by indices such as the commonly used H-Index, on which India receives an overall score of 925, behind global leaders such as the US (3,213), the United Kingdom (2,048), and China (1,455). However, the issue is not exclusive to emerging economies such as India, especially if other metrics are considered. Even developed nations such as the US, Germany, and Japan trail China in citations per paper, while China itself has historically faced several allegations of fraudulent academic research. These instances highlight systemic flaws in the global academic ecosystem that are not confined to specific nations.
Although several factors contribute to the issue, more often than not it is neither malice nor profit that drives disingenuous academic research. Most research funding and scholarships come with stringent publication requirements, often irrespective of whether researchers have genuinely novel findings to share. This pressure compels many early-career academics to publish mundane or even fraudulent work, an outcome of the pervasive ‘publish or perish’ culture. In this context, increasingly capable AI models like ChatGPT offer these researchers a convenient tool for coping with the dilemma.
Although the consequences of unchecked AI use in research papers may seem severe, there is a distinct possibility that it could reinvigorate academic research. The growing use of AI in academia, particularly for nefarious purposes, is gradually leading to enhanced scrutiny within academic circles and the adoption of new research-integrity tools such as Argos. This is likely to usher in more rigorous vetting and peer review, as instances of unchecked AI use in academic writing are becoming a major source of embarrassment within academia, not to mention a serious threat to scientific credibility and integrity. While the unchecked use of AI may presently appear to hinder scientific progress, it holds the potential to eventually refine academic research and scientific writing. By weeding out mundane and generic publications, AI could help elevate the overall credibility of research. In parallel, it may prompt broader reform in academic culture, reducing the pressure on researchers to publish under rigid timelines and funding constraints.
Technological revolutions have a way of disrupting society, instigating chaos before ushering in a new era of stability, innovation, and progress. The AI revolution is poised to follow a similar path; societal instability and disruption are par for the course. While the consequences may appear especially severe in academia, in this case, the storm likely precedes the calm.
Prateek Tripathi is a Junior Fellow with the Centre for Security, Strategy and Technology at the Observer Research Foundation.
The views expressed above belong to the author(s).