
AI in Editorial Workflows: Practical Lessons for Editors, Reviewers, and Journals

By Wulfran Fendzi Mbasso | Mar 02, 2026

Artificial intelligence is no longer a future discussion in scholarly publishing; it is already embedded in daily editorial practice.

Whether we acknowledge it formally or not, AI tools are being used by authors, reviewers, editors, and journal staff for language polishing, screening, summarization, formatting support, and workflow acceleration. The real question for journals today is not whether AI should exist in editorial processes, but how it should be used responsibly, transparently, and in ways that strengthen, not weaken, editorial quality.

From a practical editorial standpoint, AI offers clear benefits when applied to repetitive or time-consuming tasks. Many editorial offices, especially those operating with small teams and limited budgets, face increasing submission volumes, delayed reviewer responses, and administrative bottlenecks. In this context, AI can help reduce friction. For example, AI-assisted tools can support initial checks for formatting completeness, reference consistency, plagiarism-risk flagging patterns (as a complement to dedicated systems), language clarity issues, and manuscript structure completeness. This does not replace editorial judgment, but it can save valuable time before a manuscript enters deeper scientific evaluation.

One of the most useful applications is triage support. Editors often spend substantial time identifying whether a submission meets basic journal scope and quality thresholds before peer review. AI can help generate quick summaries of manuscript objectives, methods, and claimed contributions. It can also help identify missing elements, such as ethics statements, data availability declarations, or poor alignment between title, abstract, and conclusions. Used carefully, this can help editors make faster and more consistent screening decisions. However, these outputs should always be treated as advisory, not determinative. AI summaries can sound confident while overlooking critical methodological flaws or domain-specific nuances.
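To make the idea of advisory screening concrete, here is a purely illustrative sketch of a rule-based pre-screen for missing declarations. The rule names, regular-expression patterns, and manuscript format are assumptions for illustration, not any journal's actual system, and the output is advisory only:

```python
import re

# Hypothetical screening rules: each maps a required element to a pattern
# whose presence suggests the element exists. Patterns are illustrative.
REQUIRED_SECTIONS = {
    "ethics statement": re.compile(r"ethic(s|al)\s+(approval|statement)", re.I),
    "data availability": re.compile(r"data\s+availability", re.I),
    "conflict of interest": re.compile(r"conflict(s)?\s+of\s+interest", re.I),
}

def screen_manuscript(text: str) -> list[str]:
    """Return advisory flags for elements that appear to be missing.

    An editor must confirm each flag: wording varies widely, and a
    regex miss does not prove the section is actually absent.
    """
    return [name for name, pattern in REQUIRED_SECTIONS.items()
            if not pattern.search(text)]

manuscript = "Methods... Ethics approval was granted by... Results..."
print(screen_manuscript(manuscript))
# flags "data availability" and "conflict of interest" as missing
```

Even a deliberately simple checklist like this saves screening time, but the design choice matters: it only flags possible gaps for a human to verify, and never makes a pass/fail decision on its own.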

Another practical area is reviewer invitation and communication support. Reviewer fatigue is now a reality across many disciplines. Editorial teams often send many invitations before securing enough reviewers. AI can assist by drafting professional invitation messages, reminders, and decision letters with a clear tone and structure. This is particularly helpful for multilingual editorial offices, where language fluency may affect communication speed. Yet here again, the lesson is clear: AI can support communication, but human oversight is essential to ensure context, professionalism, and fairness. Standardized AI-generated communication can become impersonal if used without editorial refinement.

AI is also increasingly used at the copyediting and production preparation stage, especially for language polishing and consistency checks. For regional journals seeking greater international visibility, this can be especially valuable. Many strong manuscripts are delayed or rejected, not because the science is weak, but because the writing is unclear. AI-based language assistance can improve readability, helping journals present research more professionally. At the same time, journals must avoid creating the impression that polished language automatically signals scientific rigor. Editorial quality must continue to prioritize methodological soundness over surface fluency.

That said, the adoption of AI in editorial workflows also raises serious concerns. The most immediate is integrity and accountability. If a journal relies too heavily on AI-generated assessments, who is responsible when an important issue is missed? If an editor uses AI to summarize reviewer reports or draft decisions, how do we ensure the final decision reflects the real scientific concerns rather than a simplified machine interpretation? In editorial work, accountability must remain human. AI can assist; it cannot bear responsibility.

A second challenge is confidentiality. Manuscripts under review contain unpublished research, and editorial correspondence often includes sensitive evaluations. Journals should be cautious about using public AI tools in ways that expose manuscript content, reviewer comments, or author identities without clear data protection safeguards. This is especially important for journals that do not yet have formal digital policies. Editorial teams need explicit guidance on what may or may not be entered into AI systems, and under what conditions.

A third challenge is bias and inconsistency. AI tools are trained on large datasets that may reflect language, disciplinary, or regional biases. This can affect how manuscripts are summarized or how writing quality is interpreted, particularly for authors writing in English as an additional language. If editorial teams are not careful, AI may unintentionally reinforce inequities that journals are trying to reduce. A practical lesson here is that AI should be used to support editorial inclusion, not to create new barriers.

SO, WHAT SHOULD JOURNALS DO IN PRACTICE?

First, journals should develop a simple internal AI-use policy, even if it is only one page. It should define acceptable uses (e.g., language editing, formatting checks, administrative drafting) and restricted uses (e.g., fully automated editorial decisions, uploading confidential materials to unapproved tools). A short policy is better than no policy.
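As a hedged illustration of how even a one-page policy can be made operational, the sketch below encodes the acceptable and restricted uses named above as data with a deny-by-default check. The structure and function name are hypothetical, not a recommended standard:

```python
# Hypothetical one-page AI-use policy encoded as data.
# The category entries mirror the examples in the text; all are illustrative.
AI_USE_POLICY = {
    "acceptable": {
        "language editing",
        "formatting checks",
        "administrative drafting",
    },
    "restricted": {
        "fully automated editorial decisions",
        "uploading confidential materials to unapproved tools",
    },
}

def is_permitted(use_case: str) -> bool:
    """Return True only for explicitly acceptable uses; deny by default."""
    if use_case in AI_USE_POLICY["restricted"]:
        return False
    return use_case in AI_USE_POLICY["acceptable"]

print(is_permitted("language editing"))                     # True
print(is_permitted("fully automated editorial decisions"))  # False
print(is_permitted("an unlisted use"))                      # False
```

The deny-by-default choice reflects the spirit of the paragraph above: anything not explicitly approved requires a human decision before AI is used.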

Second, journals should maintain a human-in-the-loop model at every critical decision point: scope screening, peer review decisions, ethical concerns, and acceptance/rejection outcomes. AI may help organize information, but editorial judgment must remain central.

Third, journals should invest in capacity-building for editors and staff. The gap today is not only technological; it is procedural. Many teams are experimenting informally with AI but without shared standards. Brief training sessions on prompt design, risk awareness, confidentiality, and output verification can make a major difference in quality and consistency.

Fourth, journals should promote transparency. Where appropriate, author and reviewer guidance can clarify how AI may be used in manuscript preparation and editorial processes. Transparency builds trust, especially at a time when stakeholders are concerned about hidden automation.

Finally, journals should adopt a mindset of measured experimentation. AI should not be introduced as a replacement strategy, but as a workflow improvement tool with defined goals: reducing administrative delays, improving communication quality, and supporting consistency. Journals can start small, monitor outcomes, and adjust based on evidence.

AI can be a valuable partner in editorial workflows, especially for journals facing limited resources and rising expectations. But the strongest lesson is this: the value of AI in publishing depends less on the tool itself and more on the editorial culture governing its use. Journals that combine AI efficiency with human judgment, ethical safeguards, and transparent policies will be better positioned to improve both workflow performance and scholarly trust.

Keywords

AI in editorial workflows; responsible AI use; human-in-the-loop; editorial policy; peer review support; confidentiality; scholarly publishing; technology integration

Wulfran Fendzi Mbasso

I hold a PhD in Electrical Engineering, specializing in the optimization of renewable energy systems, and I am passionate about electrical engineering and industrial informatics. I have received several global certifications. My research focuses on power system control, optimization, automation, and electronics, and explores innovative ways to optimize renewable energy use for a sustainable future. I have extensive experience in electronics, electrical engineering, telecommunications, and automation, and I develop advanced energy-efficiency methods drawing on this expertise. Collaboration with international researchers has given me a broad view of research in several electrical engineering fields, and this partnership has led to articles in Renewable Energy Systems, Energy Control, and Electricity Quality. I act as a reviewer for IJRER, Heliyon, Hindawi, AJEBA, and Sustainable Energy Research, and I am active on ResearchGate, where I support science and engineering with my humble perspective.


Disclaimer

The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the official policy or position of their affiliated institutions, the Asian Council of Science Editors (ACSE), or the Editor’s Café editorial team.
