Research Integrity at De Gruyter Brill: An Interview with Agnieszka Bednarczyk-Drąg and Darren Green
The foundations of our trust in published research are not always visible on the page. Ensuring mutual responsibility for academic and ethical standards depends on editorial processes, clear authorship, and the careful handling of data. Yes, we're talking about research integrity – and this is only the tip of the iceberg.
Research integrity is a concept that feels lofty and elusive until you begin to look at the everyday work on the ground. In practice, it’s about small and stubborn questions: Who wrote this? Where was the data sourced? Has anything been subtly changed along the way? Most of the time, the answer is perfectly straightforward. But not always.
That is where the real work begins. Every journal article and book chapter relies on a chain of checks and balances designed to maintain trust, accountability and ethical standards in the scholarly record. These checks address plagiarism, authorship disputes, ethical approval, and newer risks surrounding paper mills and ‘creative’ uses of artificial intelligence.
To ensure that research integrity at De Gruyter Brill is a lived practice and not just an empty phrase, we have established a dedicated committee, which meets monthly to discuss all aspects of publication ethics. The committee is chaired by Senior Publication Ethics Manager Darren Green and includes Agnieszka Bednarczyk-Drąg, Director of Open Access Journal Development – two ethics experts committed to guiding a variety of stakeholders in the research cycle towards best practices.
What does research integrity mean to them? What kind of challenges have they faced on the frontline? And how do we maintain our standards in a competitive academic environment, where systematic misconduct is far from an isolated incident? Digital Communications Manager Billy Sawyers sat down with Agnieszka and Darren to find out.
Billy Sawyers: Can you please briefly introduce yourselves and outline what your work involves at De Gruyter Brill?
Agnieszka Bednarczyk-Drąg: As Director of Open Access Journal Development, I drive strategic development across our portfolio. I work to create innovative publishing solutions that ensure journals maintain their competitive edge while serving the academic community. As a member of the De Gruyter Brill Publication Ethics Committee, I champion the highest standards of research integrity.
Our team aims to develop solutions that strengthen the portfolio. We work with editorial teams on research integrity practices, training them to use tools that screen manuscripts for signs of manipulation. We also commission articles and identify topics to broaden the scope of the journals.
Darren Green: My role as the Senior Publication Ethics Manager sits within the Product Development function. In that role, I support editors, journal teams, and colleagues across the business on all issues around research integrity and publication ethics.
In practice, this involves overseeing and advising on a wide range of issues, from allegations of plagiarism through to potentially much more complex matters such as data integrity, peer review concerns, and legal and reputational risks. A key part of the role is ensuring cases are handled consistently and in line with established standards, particularly those of the Committee on Publication Ethics (COPE), whose guidelines inform our editorial and ethical policies. Alongside casework, I’m also heavily involved in developing policies, tracking trends, and integrating tools that support day-to-day editorial decision making, such as the STM Integrity Hub checker.
“Research integrity involves building systems that ensure content is accurate, transparent, ethical, and trustworthy.”
BS: Research integrity can sometimes sound a little abstract. When it comes to your work in journal operations, what does it look like in a practical sense?
DG: My work spans both handling individual cases and supporting systems, policies, and workflows. But in practical terms, research integrity issues show up every day in the checks and processes built into publishing. This includes plagiarism screening, authorship and affiliation requirements, policies around ethical approval and consent, as well as guidance for editors on how to handle concerns. It also means having structured processes to respond when issues arise, so that concerns can be assessed, escalated, and resolved in an efficient and, if necessary, transparent way.
AB: It is not an abstract principle; it shows up in everyday decisions and safeguards across journal management. Practically, research integrity involves building systems that ensure content is accurate, transparent, ethical, and trustworthy. That means taking many aspects of equal importance into consideration.
So, starting from clear editorial standards and policies: we follow COPE guidelines and train our editors in those principles. This involves organizing the editorial office so that roles and responsibilities are clearly defined and potential conflicts of interest are eliminated. We often have to start from scratch.
On an individual journal level, we have numerous checks at various stages. This involves plagiarism detection, screening for paper mill signals and image or data manipulation, and checking for responsible use of AI. During peer review, we also look at authorship changes or the accuracy of data interpretation. And we keep training our editors to spot signals of potential peer review manipulation, conflicts of interest, excessive self-citation or ‘citation cartels’.
These are preventive measures, but we also have procedures for investigations after a manuscript is published, handling corrections or retractions when manipulation is discovered. So if you were to describe it in everyday language, you would first need to explain the process.
“Part of the answer is good quality, well-organized peer review. That’s the frontline defense.”
BS: Recently, scandals involving paper mills and systematic misconduct have attracted some bad press for academic publishing – for science in general. But I’m curious about what we don’t see in the headlines: what problems and solutions do you have to work with on the frontlines of research integrity, and are all subject areas equal in this regard?
DG: Part of the answer is good quality, well-organized peer review. That’s the frontline defense. Looking at the 2025 misconduct cases raised, we had a relatively small number: 63 allegations. In close to half of the resolved cases, we found no misconduct at all. This tells me that peer review already does a good job. The quality of our portfolio is reflected in the data – we don’t see many retractions. This is not to say we should rest on our laurels: we need to be vigilant about all of these problems. What’s important is that our publishing culture is robust, and peer review is the absolute backbone of that.
Further reading: What are paper mills, and how big of a problem are they really?
The majority of cases we saw in 2025 involved plagiarism. Misconduct cases were split pretty much 50-50 across the humanities and social sciences (HSS) and science, technology, and medicine (STM). Data fabrication, image fabrication, and ethical consent issues are much more prevalent in the hard sciences. Some hard science journals also face greater risk from paper mills.
AB: Paper mill activity is also more prevalent in STM and the medical life sciences. HSS has a different kind of problem, with copyright and plagiarism issues. Paper mills are not as visible in HSS, and even less so in the humanities.
BS: How has artificial intelligence impacted your work, and what are some of the implications for research integrity?
DG: I get asked about AI all the time, and we know this is an area where editors are seeking more guidance and support from publishers across the board. AI is already having a noticeable impact on how research is produced and assessed. On the author side, we are also seeing (not surprisingly) increasing use of AI tools for drafting, editing, and in some cases, generating reference lists – this is one area where we have seen problematic, undisclosed use. For example, some authors have been generating reference lists that contain links and DOIs that are either incorrect or completely fabricated. They aren’t checking their work carefully, if at all, and in the worst case, these references make it into the published literature and then need to be corrected after the fact.
“I think that the direction of travel is toward a more detailed set of cross-disciplinary standards where authors will be asked to disclose which tools they used for precisely which purposes, and in what context.”
Ideally, I’d like to see every chapter and every article reference list being checked before publication to validate all references (that is, to check that the references are correct, and that the DOIs are functional and link to the correct article). So, checking and detection tools can and will be part of the answer, but detection alone is not the solution. Good policy and guidance are also needed. Last year, we established a high-level AI Policy for Authors, which we encourage colleagues to read, and we’ve also recently conducted an internal survey to assess our editors’ needs around this topic. It will be interesting to see the results of that survey, which will help us shape future strategies.
Overall, I think that the direction of travel is toward a more detailed set of cross-disciplinary standards where authors will be asked to disclose which tools they used for precisely which purposes, and in what context. It’s worth mentioning here in fact that COPE and STM (the International Association of Scientific, Technical & Medical Publishers), along with the International Science Council and the Global Young Academy, are leading a cross-publisher initiative to develop a “Global Reporting Standard for AI Disclosure in Research”. De Gruyter Brill has already had input into the consultation phase of this project, and there will be a specific track dedicated to this at the World Conference on Research Integrity (May 3-6, 2026, in Vancouver, Canada). The goal is to develop more robust and detailed guidance which, we hope, will be available later this year. For more information on this, please see COPE’s announcement from March this year.
AB: I can add that AI is not only used by authors. The other side of the coin is using AI tools in the peer review process by reviewers. That is an issue in itself. We have little control over external reviewers, so we have to warn them that unpublished work is confidential and cannot be fed to machine learning tools – the peer review process requires human activity. We have spotted AI tools used in the preparation of reviewers’ reports, addressing them on a case-by-case basis for now.
“For me, the ‘gotcha’ stories of catching misconduct are always best because they grab your attention and convey important messages that stay with you for longer.”
DG: That’s a really good point. We have to have a clear line there. Philosophically, conceptually, it’s absolutely clear: an AI can’t be an author because it can’t take responsibility for the work. AI is not accountable for its own work product. It’s also clear that AI can’t do peer review because that is about expert knowledge and judgement. Using it that way is fraudulent behavior.
BS: Addressing fraudulent publishing behavior is often framed as catching those who resort to misconduct. Are there more positive, constructive aspects to this work?
AB: The biggest challenge is finding enough evidence. This can be time-consuming and frustrating when you cannot find a way to dig deeper, when all the clues lead to inconclusive answers. This then produces other challenges relating to communication and expectations. Authors, editors, and whistleblowers expect quick results, straightforward processes and clear answers, but often the interpretation of the situation is difficult, and decisions cannot be taken quickly.
So, for me, the ‘gotcha’ stories of catching misconduct, often presented in the media or on Retraction Watch, are always best because they grab your attention and convey important messages that stay with you for longer. By citing real examples when we speak to our editors, we can educate them on the scale and importance of this problem.
Working on cases of scientific misconduct is rewarding because I learn a lot about various mechanisms, and in turn that helps to investigate further cases. Our work has made us better prepared and helped us build better workflows, practices, and policies. We understand that we cannot stand still; we must evaluate our policies and adapt constantly. For me, this is very rewarding and interesting.
DG: That’s a great answer. A lot of the work is described in terms of detection, but in practice, it’s about building better systems and strengthening infrastructure. Whether it’s tools for screening submissions or shared resources and frameworks, there is a strong collaborative element. Many of these challenges are shared across publishers, so progress comes through coordination.
“I tend to think of research integrity like personal health: the goal isn’t to treat problems after they arise, it’s much better to prevent them in the first place.”
I tend to think of research integrity like personal health: the goal isn’t to treat problems after they arise, it’s much better to prevent them in the first place. The fewer issues we’re dealing with at the back end, the stronger the system is overall.
That’s why so much of our focus is on prevention – particularly through the provision of clear, evolving guidance. Our Publishing Ethics policy is a living framework; we’re constantly updating and refining it to reflect new challenges. Recent updates, for example, clarify our stance on geo-territorial neutrality and our expectations regarding standardized institutional affiliations – areas that can become quite hotly contested in some cases (where there are ongoing geo-political conflicts, for example).
Ultimately, it’s both a responsibility and an opportunity. As a major publisher releasing around 110,000 articles and chapters each year, we’re in a position not just to respond to issues, but to help shape better practices across the industry, which is a great place to be in.
[Title image: dddb/DigitalVision Vectors/Getty Images]