The title may sound rhetorical. However, for many reviewers, it increasingly reflects lived reality. In an environment where manuscripts are often AI-assisted, submission volumes are rising, and deadlines remain fixed, expecting reviewers to work entirely without Artificial Intelligence (AI) support is becoming difficult to sustain.
At the same time, reviewers operate within a system defined by contradiction. They are voluntary contributors, yet bound by strict timelines; essential to scholarly publishing, yet often unrecognized; expected to ensure rigor, yet restricted in their use of tools that could support their work. AI has not created these tensions; it has only made them more visible and more urgent.
This article argues that the debate is no longer about whether AI should be used in peer review, but about how to responsibly integrate its use into a system that already places uneven demands on reviewers.
The Phantom Quality
One of the most immediate effects of AI is the transformation of manuscript quality, at least at the surface level. Many submissions today are more fluent, better organized, and rhetorically polished. However, this creates a misleading signal: linguistic clarity is no longer a reliable indicator of scholarly quality.
AI-assisted tools can produce coherent and persuasive texts that mask weaknesses in research design, data analysis, or conceptual contribution. For reviewers, this means that traditional cues such as grammar, flow, and readability are no longer sufficient. The task has shifted toward deeper interrogation: Is the methodology sound? Are the claims supported by evidence? Does the study genuinely contribute to knowledge?
In short, AI has not simplified peer review; it has made peer review more demanding.
The Invisible Reviewer
These increasing demands must be understood in light of the conditions under which reviewers work. Peer review remains largely voluntary, yet it is foundational to scholarly publishing. Reviewers are expected to provide careful, timely, and constructive feedback, often under strict deadlines.
Yet recognition remains minimal. In some cases, reviewers do not even receive a formal acknowledgement email, let alone any form of honorarium. This is particularly striking in an era where many journals charge substantial article processing charges, raising legitimate questions about how value is distributed within the publishing system.
The situation is even more complex in contexts such as Indonesia. Manuscript reviewing is not formally recognized at all within the Beban Kerja Dosen, or Faculty Workload System. Although some academics attempt to include reviewer certificates under Penunjang (supporting academic activities), such efforts are informal and uncertain. Recognition ultimately depends on individual assessors, which means that peer review contributions are not systematically credited.
The result is a structural paradox: reviewers are essential to the global academic system, yet their labor remains largely invisible and inconsistently valued, particularly in Indonesia.
Transformations in Peer Review Practices
It is within this context that the debate on AI use in peer review must be situated. Many journals for which I have served as a reviewer or an editor prohibit the use of AI tools, primarily due to concerns about confidentiality, accountability, and data protection, as emphasized by the Committee on Publication Ethics. These concerns are valid. Uploading unpublished manuscripts into public AI systems risks exposing sensitive intellectual content.
However, a blanket prohibition is increasingly difficult to sustain. Reviewers are already operating under pressure: voluntary yet deadline-bound, essential yet under-recognized. Now they are expected to comply with additional restrictions, including limitations on tools that could support their work.
This creates a tension between expectation and support. A system that demands efficiency and rigor, while limiting access to efficiency-enhancing tools, risks placing reviewers in an untenable position.
At the same time, publishing practices are evolving in a different direction. Some publishers are actively integrating AI into their editorial workflows. A notable example is AIRA, developed by Frontiers Media. AIRA performs multiple quality checks, flags potential ethical concerns, and supports reviewer selection, while leaving final decisions to human editors and reviewers.
Models such as AIRA are significant because they reframe the role of AI. Rather than being an external, unregulated tool, AI becomes part of a controlled and accountable infrastructure. Responsibility for data governance and confidentiality is managed by the publisher, not outsourced to individual reviewers.
From a reviewer’s perspective, this distinction is crucial. While the use of open AI tools raises legitimate ethical concerns, publisher-provided systems offer a more secure and pragmatic alternative. They acknowledge the realities of reviewer workload while maintaining necessary safeguards.
Under Pressure
The coexistence of AI prohibition and AI integration reveals a deeper contradiction in scholarly publishing. On one hand, reviewers are discouraged, or even forbidden, from using AI. On the other hand, publishers are increasingly embedding AI into their own systems.
Taken together, these pressures expose a structural tension within scholarly publishing. On one hand, reviewers are expected to meet high standards of rigor, deliver timely evaluations, and uphold the integrity of academic work. On the other hand, they operate within a system that relies on voluntary labor, offers limited recognition, and imposes increasing procedural constraints.
These contradictions are not merely inconvenient. They raise questions about the sustainability of peer review as a system. The issue is not simply whether AI should be used, but whether the current structure of peer review is aligned with the realities it imposes on reviewers.
Finding the Balance
A more constructive path forward is not a strict prohibition, but responsible integration. Several practical steps can be considered:
Such measures would better align expectations with realities, supporting reviewers without compromising ethical standards.
Return of the Reviewers
AI is not a future disruption; it is a present condition of scholarly publishing. For reviewers, the challenge is not to resist AI, but to navigate its role within an already strained system.
However, the deeper issue is structural. A system that relies on voluntary labor, imposes strict deadlines, offers little recognition, and restricts supportive tools risks undermining its own foundations.
In this context, “Mission Impossible? Rethinking Peer Review Without AI” is not merely a provocative title. It reflects a growing disconnect between what reviewers are expected to do and the conditions under which they are asked to do it.
Addressing this disconnect requires more than regulating AI. It requires rethinking peer review itself so that expectations, recognition, and tools are brought into meaningful alignment.
Abdul Syahid has been teaching English since 1995 and earned his doctorate in English Language Teaching from the State University of Malang in 2015. Since 2020, he has served as a faculty member at Universitas Islam Negeri Palangka Raya, Indonesia. His research focuses on language testing and assessment. He actively collaborates with scholars from Indonesia, Malaysia, Iran, and the United States, including Professor Donald Freeman, on a nationwide teacher training initiative based on a global framework. He has reviewed over 120 manuscripts and serves on editorial boards such as SAGE Open while valuing time with his family.
The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the official policy or position of their affiliated institutions, the Asian Council of Science Editors (ACSE), or the Editor’s Café editorial team.