Taking Libraries into the Future, Part 1: An Interview with Mark Hughes

Artificial intelligence: good, bad or neutral? Whatever one thinks of AI, its impact on research, teaching and learning is becoming near impossible to ignore. In the first part of a new blog series, Mark Hughes, University Librarian at Cardiff Metropolitan University, shares his views on how to tackle the slippery giant that is transforming academia.

Learn more about “Taking Libraries into the Future,” De Gruyter and Gold Leaf’s webinar series for librarians.

Artificial intelligence has been both hyped and blasted. Having first identified it as a threat to the human race, US President Joe Biden now says that its powers should be harnessed through international collaboration: “Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI.” Rishi Sunak has warned that in a worst-case scenario the world could lose all control over AI, with no way to switch it off – a kind of alien takeover.

The opportunities and threats that AI poses in the academic sphere were quickly recognized, partly because students worldwide are internet-savvy, and partly because academics have long carried out their research online; feeding the information they gather into an application that can lend it some semblance of logic was an inevitable next step. Has AI changed academia forever, is it merely a useful tool, or does the answer lie somewhere between the two?

Today we kick off Taking Libraries into the Future, a new interview series in partnership with Gold Leaf. Linda Bennett spoke with Mark Hughes, University Librarian at Cardiff Metropolitan University and a discerning observer of artificial intelligence, for some insights.

Mark Hughes at WHELF
Mark Hughes speaking at the Wales Higher Education Libraries Forum (WHELF), where he is currently chair.

Linda Bennett: Thank you very much for taking part in this interview, Mark. I know you think there are no ‘experts’ on AI, but you are just about as close as anyone.

Mark Hughes: It’s my pleasure. I feel flattered that you have asked!

LB: Quick-fire thoughts on AI. Good or bad? Threat or opportunity? Saving the planet or dooming the human race?

MH: Artificial intelligence isn’t inherently good or bad. It represents a logical progression of technologies that have been developing for years. I am an optimist and a huge science fiction fan. Sci-fi has a deep well of AI depictions and provocations to draw from, and I view an AI future through the lens of Iain M. Banks’ Culture series rather than James Cameron’s Terminator films. It’ll be down to how we use it, and good or bad use of AI is very much in our own hands.

“AI development will move at a pace which will be impossible for universities to match; therefore, if we get into counter-resistance mode, we will almost certainly fail. We must embrace it.”

LB: Tell readers a bit about Cardiff Metropolitan University.

MH: The university is a medium-sized higher education institution with a strong teaching focus. It has developed significant areas of excellence, including teacher training, business management, sport and sports science, and design and technology. We have about 640 FTE (full-time equivalent) academic staff and about 12,000 FTE students. We have developed mature partnerships with institutions in Wales and internationally – in Sri Lanka, Singapore, Lebanon, Greece and Cyprus – and there are many international students on campus, too.

I am Head of Library Services, in charge of all library and information provision, which includes a small team of academic skills specialists. We don’t yet have the size or scope of services of larger institutions, but I believe we punch above our weight, and we are consistently rated very highly in student feedback.

LB: You have obviously thought a great deal about AI and how it can be used in an academic context. What advice would you give to key stakeholders if they asked you?

MH: If a student came to me for advice on AI, my first question would be: How are you already using it? I don’t think there is enough understanding about what students are already doing. I’d then say, “You are going to be exposed to AI for the rest of your life, long after you leave university, and it is in your interests to use it well. Understand what it will give you: it is not human, not infallible and not always a reliable source of the truth. If you use it appropriately, it will save you time and help grow your understanding of certain things … but if you use it inappropriately, it will cause you trouble. Context and critical thinking are everything: What you bring to the tool is what makes it work for you.”

I’d say pretty much the same thing to academics, but I’d add that AI is often equated with ChatGPT, and then the brand becomes the thing. Don’t think of AI as just ChatGPT and Large Language Models (LLMs); there are many more tools available – some automate tasks, others make working with large data sets more effective. But all AI tools should be used with caution.

“Can we ban AI? No. Can we control it? We’d struggle. It is here and moving too quickly for us to control meaningfully as individuals or institutions, so we can neither stop nor ignore it, but we can offer guidance.”

I’d give the same advice to researchers, but caution that AI should not be used to provide an interpretation of the data. It’s not capable of making an intuitive leap: You must bring to it your own knowledge of the subject. Our Vice-Chancellor and senior staff see clearly that AI presents both threats and opportunities for teaching and research. In my own view, AI development will move at a pace which will be impossible for universities to match; therefore, if we get into counter-resistance mode, we will almost certainly fail. We must embrace it. There is a strong case for investing in our staff to help them understand it and pass on what they have learned.

LB: Do you think it is possible for a university to maintain valid guidelines or best practice parameters for AI, or is it too much of a moving target?

MH: There need to be reasonable guidelines. Can we ban AI? No. Can we control it? We’d struggle. It is here and moving too quickly for us to control meaningfully as individuals or institutions, so we can neither stop nor ignore it, but we can offer guidance in the generic sense. The detail of what’s acceptable needs to be driven by the school or course, or even at module level.

We must also accept that AI is altering the way we think and behave. Take student assessment: This whole piece is just starting to be unpicked. If we try to reinvent assessment on the basis of what we already do but somehow ‘shield’ it from AI’s impact, we can never win. We can instead recognize the value of assessment as authentic activity which embeds learning and works well in an AI world.

LB: How far do you think legitimate use of AI is covered by existing mandates and policies about plagiarism, copyright and so on? How do you think it will be deployed within the current publishing landscape? What are the threats for publishers, and do you have any advice or safeguarding measures for them?

MH: Existing policies on plagiarism no longer work – there is a huge gap between current mandates and policies and the issues of copyright and academic integrity that AI raises. There is a lot of catching up to do, not least in the underpinning areas of data and ethics, and not just at universities, but within the wider social context. We are still in the early stages: There will be lots of mistakes until we reach a sensible place.

Don’t miss: Keep scrolling for more insights on AI from our librarian focus group.

As far as publishers are concerned, I usually sit on the consumer side, but – taking a different perspective – the impact on publishers is huge. The key headache for publishers relates to the content they provide. They collect massive amounts of content that may now be partially or wholly AI-generated. That’s at the start of the process. Then, post-publication, how do you share research when AI may consume, change and re-present it, without diluting its value? Publishers are rightly praised for providing high-quality content that is rigorously peer-reviewed – the challenge is how to maintain that unique place in an AI-dominated landscape without devaluing the content. They must also figure out how to deploy AI in their own internal processes, to make them more efficient.

Partly because of the whole open access scenario and partly because of historical tensions, there is a lot of baggage interfering with the relationship between librarians and publishers. Together we have the potential to develop a scholarly communications ecosystem in which we can work successfully as partners, but we need to ditch that baggage first. The trick is to find the opportunities that AI can offer us to make the scholarly communications space more efficient, simpler, cheaper and more transparent.

My advice to publishers would be not to try to work all this out in your own little bubble. Engage with expert advisory groups that have the technical, ethical and user know-how to understand AI in the round and how it entwines, or could entwine, with all your current practices.

“How do you share research when AI may consume, change and re-present it, without diluting its value?”

LB: In what ways is Cardiff Met using AI, or thinking about using it, to carry out tasks?

MH: Within the Library Service we’re experimenting with different tools, including AI meeting summarizers and automated note-taking.

LB: Are you aware of any instances where AI has been a truly positive force for good?

MH: There is lots of activity on the research side: We have a fantastic research librarian who’s making some really good stuff happen and sharing it. There is also a university-wide “Embracing AI” working group, coordinated by the Pro-Vice-Chancellor for Student Engagement, which is collating all we’re doing to share good practice.

LB: Are you aware of any problems or difficulties caused by the abuse of AI?

MH: Early on, there was an uptick in academic misconduct cases, which prompted us to put in additional scaffolding and support for our students. We didn’t take the ‘big stick’ approach, but introduced some safeguards to help patrons use AI appropriately. We’re now trying to build this into how we teach students about information literacy and academic practice.

From the other end of the telescope, some researchers trying to follow up on information they’d discovered found that it was AI-generated – an apparently referenced piece of research was a ‘hallucination’ that didn’t exist. The early versions of ChatGPT were prone to that phenomenon, though it has since significantly improved.

LB: I understand that you are a leading light within a group of like-minded librarians who are discussing the potential of AI for their institutions/libraries. Could you tell us a little more about this?

“The information inequity gap will grow until we get more of a handle on AI, not just in higher education, but in the wider environment.”

MH: SCONUL (Society of College, National and University Libraries) has an active Technical and Markets Strategy Group, and I chair a sub-group that considers the use of AI in libraries. We’ve been engaging major suppliers in conversations – we’ve spoken to six so far – and the report on our findings is imminent. We also run an ongoing program of events, built around 90-minute virtual coffee mornings, to share knowledge about AI. We’re trying to get conversations going across the sector. AI is not just a technical development – at its heart it’s a knowledge issue: It is about how information is created, curated, repurposed and discovered. And if that isn’t at the heart of librarianship, I don’t know what is!

LB: Picking up on the start of this conversation, how do you see the future? What do you think will be the impact of AI on tertiary education in the next 5 years; the next 10 years? Will it be a great equalizer in academia, or a polarizer?

MH: Again, I’m naturally an optimist. I think the next five years will be very challenging but promising overall. Artificial intelligence will affect almost every aspect of tertiary education. Higher education is not used to adapting at this pace, but we’ll get there. I think the information inequity gap will grow until we get more of a handle on AI, not just in HE, but in the wider environment.

There is significant scope for making mistakes in the next five years, but we will learn from them rather than fail. We are in the technology ‘goldrush’ stage, but that will be followed by something more structured and considered, as international legislation and the rest of the world catch up.

After ten years, I think AI will be well embedded, and what academics and bright students are good at – critical thinking – will be even more essential in an AI world. It will mean a better experience for library users, more streamlined delivery models and more space to focus on what tertiary education is all about – giving people the knowledge, skills and cognitive ability to contribute to 21st-century society.

LB: Mark, thank you very much indeed for these magnificent insights.

Focus Group: What Do Librarians Think of AI?

“Libraries are creating tech spaces to support undergraduate students and appointing faculty to explore how AI can be used, including the creation of new curricula.” Beau Case, Dean of Libraries, University of Central Florida, USA. Read our interview with Beau from January 2024.

“We have a large group of AI researchers who run CARE-AI, a center dedicated to the ethical use of AI. There are also people here who are interested in AI for what it might do for crop science and agriculture.” Ian Gibson, Associate University Librarian, University of Guelph, Canada.

“Our catalogues have very low-quality records for material acquired in the ‘50s, ‘60s and ‘70s, which serve our users poorly. We are keen to explore how to apply AI to create enhanced records so these works aren’t overlooked by researchers. We are interested in working with collaborative groups such as HathiTrust and others to enhance old records.” Roxanne Missingham, University Librarian, Australian National University.

“One of my reference librarians told me the other day that she used AI to put together a mini book exhibition. These exhibitions are thematic and change frequently, so using AI is smart.” Valerie Khaskin-Felendler, Head Librarian, Humanities Reading Room, Ben-Gurion University of the Negev, Israel.

[Title image by Google DeepMind/Pexels]

Linda Bennett

Linda Bennett is the founder of Gold Leaf, a consulting firm that provides business development and market research for publishers and the publishing community.

Mark Hughes

Mark Hughes is Head of Library Services at Cardiff Metropolitan University. He currently chairs the Wales Higher Education Libraries Forum (WHELF).
