Navigating Generative AI in Academic Publishing: An Interview With Benjamin Luke Moorhouse

Generative artificial intelligence is everywhere, but few academics openly discuss its use. Benjamin Luke Moorhouse, a researcher on GenAI and language teaching, and his colleagues spoke with journal editors to uncover the key challenges AI tools pose to academic writing and the peer review process. Here’s what they found.

“As a discipline, we are still figuring out what we consider to be responsible use of GenAI in academic publishing”, says Benjamin Luke Moorhouse in conversation with Alexandra Hinz from De Gruyter Brill. The discipline in question is applied linguistics, and Moorhouse is the lead author of a recent qualitative study on the use of generative artificial intelligence (GenAI) in academic publishing. As Associate Professor in the Department of English at the City University of Hong Kong, he is interested in the big questions surrounding the integration of AI in scholarly work.


Facing the harsh reality of “publish or perish,” how can academic authors overcome their fear of rejection and become more transparent about their use of AI tools? How are journal editors and peer reviewers dealing with the explosive increase in submissions caused by AI? What other concerns do they have, and how can these be addressed?

Together with his colleagues Sal Consoli from the University of Edinburgh and Samantha M. Curle from the University of Bath, Benjamin Luke Moorhouse interviewed ten journal editors in applied linguistics to explore these questions and more. We were eager to learn about their key findings, especially as it’s Peer Review Week, celebrated under the theme “Rethinking Peer Review in the AI Era.”

Alexandra Hinz: What are the most pressing challenges that journal editors in applied linguistics are facing regarding generative AI? How do these challenges compare to those in other academic disciplines?

Benjamin Luke Moorhouse: Our study suggests that GenAI is exacerbating existing pressures that journal editors in applied linguistics have been facing for a while. The pressure felt by academics and graduate students to publish in “good journals” has led to a rapid increase in the number of submissions editors need to handle, making it increasingly difficult for them to identify real and tangible contributions to knowledge. It also makes it harder for editors to find competent reviewers, as more manuscripts mean greater demand for reviewers. In our study, all the editors talked about the difficulty of finding qualified reviewers.

“The editors felt a kind of ‘tension’ between supporting innovative practice and safeguarding the quality and rigour of knowledge production.”

This situation seems to be made more challenging by developments in GenAI. The editors talked about AI accelerating the research process and thereby increasing submissions, adding further strain to the system. They were also unsure about what constitutes suitable use of GenAI in applied linguistics research – there is a risk that these tools could lead to questionable research practices that are not easily identified by the reader. The editors felt a kind of “tension” between supporting innovative practice and safeguarding the quality and rigour of knowledge production.

Although it is difficult to compare to other disciplines, the field of applied linguistics focuses on language in context and has a rich history of diverse methodologies and research traditions – this means editors need a broad knowledge base. It also means that GenAI could be used in a huge range of research tasks.

AH: How are journal editors currently detecting or verifying AI-assisted writing, if at all? How effective do you think these methods are?

BLM: At the moment, at least from our study, editors do not seem to know of a good way to detect or verify AI-assisted writing. Some editors noted obvious examples of AI use, but these tended to be in low-quality submissions that would have been desk-rejected anyway – manuscripts with issues around coherence and fabricated references.

In fact, most of the editors did not have an issue with authors using AI to help them improve the language quality of a manuscript – some even found the use of AI to polish the language in a manuscript desirable. However, there were concerns about authors losing their “voice” if they rely too much on AI. Human accountability and transparency were important to the editors.

AH: What support or guidance do authors, editors and reviewers need to use GenAI responsibly in academic publishing?

BLM: Our study suggests that, as a discipline, we are still figuring out what we consider to be responsible use of GenAI in academic publishing. At the moment, it seems that authors are reluctant to declare their use of GenAI due to fear that they may be judged negatively by editors, reviewers and readers. This can lead to deceptive use, where authors use it but do not declare it. At the same time, authors are unsure of what uses are acceptable, and what uses should be declared. I believe we need to work on removing the stigma attached to the use of GenAI in research and instead focus on exploring ways in which GenAI can impact study quality. In essence, we need to normalise the use of GenAI in our research.

“I believe we need to work on removing the stigma attached to the use of GenAI in research and instead focus on exploring ways in which GenAI can impact study quality.”

To help do this, Marie Yeo, Hassan Nejadghanbar and I, in a paper just accepted for publication in TESOL Quarterly, propose a disciplinary framework for using GenAI in applied linguistics research. The framework is built on four elements of quality study proposed by Luke Plonsky in 2024: (1) transparency, (2) methodological rigor, (3) ethics, and (4) societal value, with an additional element proposed by us, (5) human accountability.

I believe frameworks such as these, exemplars of responsible AI use, and methodological explorations of the affordances and limitations of GenAI for research tasks can guide scholars in making more informed decisions about the use of GenAI at different stages of their research process and writing for publication. Frameworks, exemplars, and methodological explorations can also serve as important resources to assist editors and reviewers as they evaluate the quality of a study.

Most importantly, we need to shift our way of thinking about these tools: from seeing their use as a sign of deficit to seeing their deliberate, purposeful, rigorous and ethical use as a sign of enhancement.

AH: Looking ahead five years, how do you see peer review in applied linguistics evolving, especially in relation to the integration of GenAI? Are you generally optimistic or pessimistic about these changes?

BLM: I consider myself to be a cautious optimist. I believe that GenAI does have the potential to assist researchers in their work and, therefore, increase the knowledge base of our discipline – leading to social good. However, I can also see that these tools could contaminate our knowledge base, as well as increase the pressure on scholars, reviewers and journal editors. In the editorial process, I can see AI playing a role in screening papers, suggesting expert reviewers, and analysing reviewer comments. I would still want to see a human editor making the decisions, but provided with more data to help them.

“I would still want to see a human editor making the decisions, but provided with more data to help them.”

I am not sure how peer review in applied linguistics may evolve in the next five years. The current model seems to be at breaking point – I get two to three requests a day to serve as a reviewer. I think journals will need to explore expanding their editorial boards so reviewers can get more recognition. It would also be great if graduate students were given training in peer review as part of their programmes, as this could increase the pool of expert reviewers.

AH: What is your personal relationship with GenAI? Are you using it in your academic work or editorial duties?

BLM: I have a love/hate relationship with GenAI. It is a highly disruptive and transformative technology that has changed many aspects of my work. In my teaching, I have needed to reconceptualise my course aims and assessments to reflect the development of these tools. I have witnessed opportunities for learning lost as students overuse AI tools, as well as trust breaking down between teachers and students when teachers think students have used AI improperly. At the same time, I have found these tools useful in my own work, mainly for drafting documents and finding inspiration.

Most of my relationship with GenAI has been about figuring out the impact it could have on applied linguistics and language teaching and trying to contribute to the discussion and debates about how it can and should be used, as well as the knowledge language teachers, learners, and scholars need to use it critically, effectively and appropriately. This is exciting work, and I am grateful I can contribute to helping my colleagues navigate the uncertainties GenAI has brought to the field.

[Title image by hapabapa/iStock Editorial/Getty Images Plus]

Benjamin Luke Moorhouse

Prof. Benjamin Luke Moorhouse is an Associate Professor in the Department of English, City University of Hong Kong, China. His research focuses on the lived experiences, competencies and professional learning of language teachers and teacher educators. Currently, he is exploring the impact of GenAI on language teaching and learning.

Alexandra Hinz

Alexandra works as Digital Communications Editor in the Web Content team of De Gruyter Brill. You can get in touch with her via alexandra.hinz [at] degruyterbrill.com.
