Artificial Intelligence in Academia: The Lowdown

Artificial intelligence in academia is evolving rapidly. What was once seen as a curiosity is now becoming part of everyday practice. But what works – and what doesn’t? In our latest AI webinar for academic librarians, two experts share their insights and experiences.

Click here to catch up with De Gruyter and Gold Leaf’s quarterly webinar series.

The eleventh De Gruyter Brill webinar for librarians – and the third about Artificial Intelligence – took place on Thursday 11th September. The speakers were Dr Andrew Cox, Senior Lecturer in the School of Information, Journalism and Communication at the University of Sheffield, and Dr Iman Magdy Khamis, Library Director at Northwestern University in Qatar. Linda Bennett of Gold Leaf was the moderator.

The first DGB webinar on AI, which took place in December 2023 when the use of AI was still relatively new to western universities, was very much a blue-skies exploration of Artificial Intelligence. The second, broadcast in December 2024, took a more experiential approach: the speakers described some of the innovative solutions they had developed by harnessing AI. “Artificial Intelligence: The Lowdown” complemented both by assessing the ways in which AI is being used in universities now, and the nature of its benefits and limitations.

Dr Andrew Cox: “Three Dilemmas Around Generative AI”

Dr Cox identified three dilemmas that AI poses when applied to academic tasks: the sycophancy problem, the replicability problem and the information quality problem. He said that identifying and, insofar as possible, addressing these problems is crucial: surveys show that in 2025 to date, 47% of students have used AI in some way for assessments – a figure that has jumped from 12% in only one year. At present, Google is still the first port of call for students undertaking research, but ChatGPT (together with an institution’s own website) comes second.

“Fundamentally, AI is designed to create a happy customer, not a critical thinker.”

Andrew Cox

The sycophancy issue is interesting because it makes AI seem more human – more capable of thought – than it actually is. For example, if the model is shown an image of a dog and the user insists that the image is in fact of a cat, it will agree.

“Replicability” is the term Dr Cox used to describe AI’s inconsistency: in his experiments, the same sentiment analysis task given to Gemini produced different responses each time it was repeated. But perhaps most serious of all – in the context of academia – is the information quality issue. AI can be guilty of providing plausible but wrong information; missing or invented citations; introducing bias, e.g. by reinforcing stereotypes; failing to reflect cultural diversity; and fostering poor user practice. It often does not know about local information sources and can spread “disinformation”. There are ways of combating these shortcomings, particularly by devising very specific prompts to elicit better answers, but this requires both patience and expertise. More widely, AI can cause a “crisis of information quality”.
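To make the replicability point concrete, a simple way to probe it is to send the identical classification prompt to a model several times and count the labels that come back. The sketch below is illustrative only – it is not Dr Cox’s actual protocol – and assumes the google-generativeai Python client, an API key in the environment, and an arbitrary model name and review text.

```python
# Minimal replicability probe: send the identical sentiment-analysis
# prompt to Gemini several times and tally the labels returned.
# Assumptions: the google-generativeai package is installed, an API key
# is set in GEMINI_API_KEY, and the model name, prompt text and parsing
# are illustrative rather than taken from the webinar.
import os
from collections import Counter

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

PROMPT = (
    "Classify the sentiment of the following review as exactly one word, "
    "POSITIVE, NEGATIVE or NEUTRAL:\n\n"
    '"The library\'s new AI chatbot answered quickly but misquoted two sources."'
)

labels = []
for _ in range(10):
    response = model.generate_content(PROMPT)
    labels.append(response.text.strip().upper())

# Perfect replicability would yield a single label with count 10;
# a spread of labels illustrates the inconsistency Dr Cox described.
print(Counter(labels))
```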

This is not to deny that AI has the potential to improve the learning experience, but there are still potentially damaging aspects of its use that need to be kept continually in mind: it can create dependence; it can cause cognitive offloading (i.e., the erosion of the user’s capacity to think critically as tasks are delegated to the tool); it can adversely affect the social dimension of learning; and access to AI solutions is at present unequal.

Dr Cox’s conclusion is that “AI is great for society, but also terrible.” If managed positively, it can help to solve or ameliorate a wide range of contemporary challenges – climate change, for example; if it is not managed properly, the reverse is true.

Dr Iman Khamis: “The Future of Libraries – Enhancing User-Centered Services in the Digital Age”

Dr Iman Khamis said she is particularly interested in exploring the ways in which Artificial Intelligence can help librarians by taking over many repetitive or routine library tasks.

“The future of libraries with AI is not less human, it’s more powerful, relevant and essential than ever.”

Iman Khamis

She described a specific AI application called Annif, an open-source tool for automated subject indexing developed at the National Library of Finland. She also referred to “smart library” initiatives that have explored the feasibility of automatically cataloguing scholarly articles using Library of Congress Subject Headings.
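For readers curious about what automated subject indexing looks like in practice, the sketch below queries a locally running Annif instance over its REST API. It is a minimal illustration, not a description of the initiatives Dr Khamis cited: it assumes Annif has been installed and started with `annif run`, and the project id “yso-en” is a placeholder that depends on local configuration.

```python
# Minimal sketch of automated subject indexing via Annif's REST API.
# Assumptions: a local Annif instance is running on port 5000 and has a
# trained project; "yso-en" is an illustrative project id, not a given.
import requests

ANNIF_URL = "http://localhost:5000/v1/projects/yso-en/suggest"

abstract = (
    "This article examines how academic libraries are adopting machine "
    "learning tools for metadata creation and subject indexing."
)

# Annif accepts the document text as form data and returns ranked
# subject suggestions drawn from the project's controlled vocabulary.
resp = requests.post(ANNIF_URL, data={"text": abstract, "limit": 5})
resp.raise_for_status()

# Each suggestion carries a subject URI, a label and a confidence score.
for hit in resp.json()["results"]:
    print(f'{hit["score"]:.3f}  {hit["label"]}  ({hit["uri"]})')
```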

Moving on to the use of robotics in libraries, Dr Khamis showed how robots can provide remote reference services, virtual programming, remote tours and collaborative meetings (via telepresence robots), and how they can enhance accessibility for library users with disabilities.

Dr Khamis concluded with the following observations:

  • AI is not replacing librarians; it is empowering them.
  • Librarians are evolving into data stewards, AI ethicists and digital literacy leaders.
  • Libraries are becoming innovative hubs, not just information repositories.
  • Embracing AI means that librarians can have more strategic roles, engage more with the community and deliver smarter services.

Q & A session

The presentations produced a particularly lively and thoughtful Q & A debate. The questions and the speakers’ responses well repay the time spent watching the full video of the webinar. Here are some tasters:

  • The speakers were asked whether AI truly empowers librarians, or whether there is a danger that senior management in institutions across the world might see the advent of AI as a reason for cutting library staff numbers even further. Iman agreed that the danger is there, but said that enlightened managers will see that librarians instead need to develop new skills to help their patrons.
  • They were asked their view on the pace of uptake of AI in academic libraries. Is it more readily embraced by the West than the East? Andrew said that first of all it is necessary to define what “AI” means: it takes many shapes and forms. However, it may be true that countries like China and Singapore are at the forefront of AI use, whereas in the West there is a long tradition of suspicion of new technologies, and considerable concern that AI means a loss of human contact, with consequent effects on learning. Nevertheless, most of the AI tools used in the West were developed in the USA, probably not with “the dominant preoccupation of how best to protect society”.
  • How does AI sit alongside the traditional content that publishers provide? Andrew said that studies undertaken in the UK show that people increasingly seem to trust AI, even for information on health issues. How information is generated and used is therefore changing, and publishers and authors are certainly fearful that AI will take the content they have created over many years and either use it to churn out stock answers or to create new ones. However, in his view there are too many good resources held by libraries that AI “hasn’t got its grip on” for this to be a serious worry yet. Iman said that AI is not reliable enough yet, but because millions of undergraduates and postgraduates are now uploading material, it might become more reliable in the future.
  • Can AI be used to supply a meaningful literature search? Again, Andrew said that the answer is probably “not yet”. ChatGPT and Gemini will conduct literature searches of sorts, but these are not rigorously academic and are very US-biased. The range of tools available probably increases rather than alleviates the confusion. However, AI can be helpful in scoping a topic and suggesting which search terms will yield the best results. “If I was a worker in health or the law, I probably wouldn’t rely on it; but as an undergraduate looking for a ‘good enough’ answer, I’d be quite tempted.” He also pointed out that true scholars actually enjoy compiling literature searches. “We’re being told by tech companies that certain tasks are boring, but actually … are they boring, or are they the whole point?” Iman said there are bespoke tools for literature reviews, but these are distinct from generative AI tools, which are large language models. Even Google Search is not a “good enough” tool; and the purpose-built literature search tools are expensive.
  • Would AI promote a certain school of thought over others? Andrew said that probably all systems are biased … bias exists in library collections, but “we’ve made good strides in addressing it”. Iman said, “Generally, when it comes to bias, we already have bias in AI”, because it picks up what’s on the Internet. Therefore, “we must not put our students into a danger zone and expect them to find their own way out.”

The speakers were enthusiastically applauded by the audience and warmly thanked by Linda Bennett and Andrea Gregor-Adams of De Gruyter Brill.

To catch up with the whole event, please find the video recording here.

[Title image by Michael Dziedzic via Unsplash]

Linda Bennett

Linda Bennett is the founder of Gold Leaf, a consulting firm that provides business development and market research for publishers and the publishing community.
