A few weeks ago, a professor showcased the impressive capabilities of artificial intelligence (AI) in his field, where it had produced results of very good quality. He voiced concern that the rapid advancement of AI might make his work superfluous. I mentioned my paper on the prospective limits of AI. He was very sceptical that such a paper made sense, because in his opinion AI has already surpassed even the most excessive expectations voiced in the past. Here, I aim to explain briefly why a discussion of the limits of AI’s performance is both possible and relevant.
I’ll begin with an example. A dean at our university was given the task of creating a faculty development plan. This is a time-consuming process, because professors are inherently independent individuals with strong opinions about their affairs. Synthesizing these viewpoints into a consensus and presenting it to the faculty in the jargon of faculty management is a challenging task. However, the dean had become acquainted with ChatGPT and delegated this task to AI. He subsequently reported that the outcome was quite satisfactory. He felt that it had made his job much easier and that he couldn’t have done better himself in terms of the plan’s content and language.
I do, however, see challenges in the three categories listed below that will arise when tasks are delegated to AI.
1. Respecting Individuals’ Autonomy and Life Goals
Will AI ever be able to produce a development plan for a faculty, a department or an individual that is of such good quality that we can justifiably rely on it? This prospect seems doubtful, even though AI may generate an excellent text. In the example, the dean thought that AI could make his job easier and produce a result of even better quality. But was AI really doing what the dean should have been doing?
“Creating our own future is a challenging job that we cannot delegate.”
In reality, the task was not simply to write a text — it was to synthesize a plan that the faculty was developing for itself. The AI-generated text imposed something on us that differed from a plan we would have arrived at through debate with our colleagues. If we struggle for a long time to find the right wording and define the right goals, those goals will be carried by the faculty. They will be OUR goals that we WANT to achieve. If faculty members are simply presented with an AI-generated text, they will feel no passion to strive toward a common goal.
AI can give us goals, but if we accept them, we have given up our autonomy. We no longer determine what we want to do and what we want to become in the future, because we have allowed AI to dictate it to us. Ultimately, we are the ones who have to decide which options are worth pursuing. Creating our own future is a challenging job that we cannot delegate.
2. Self-Improvement — Authenticity and Performance
Imagine putting a super-intelligent AI assistant in your child’s room — one that can answer questions instantly and flawlessly. Children ask questions all the time, and the AI assistant never gets tired, unfriendly, or upset. Moreover, it would be available 24/7. If you equipped the assistant with scientifically grounded psycho-social skills, it would be an excellent educator.
“A robot cannot tell us how it deals with gray hair on an emotional level because it has no hair, no ageing body, and no limited lifespan.”
At first glance, this solution sounds ideal. However, the answer given to a child must not only be correct, but also authentic. A device cannot empathize with what it’s like to have a stomach ache. It might give very good advice on how to relieve the stomach ache, but it cannot feel sympathy for the child. Nor can a device tell me about the efforts my friend made to save his marriage before divorcing his wife, and how these efforts changed him. A robot can share information about how other people deal with ageing — for example, when their hair turns gray — but it cannot tell us how it deals with gray hair on an emotional level because it has no hair, no ageing body, and no limited lifespan.
3. Effort and Appreciation
Suppose a very good friend of mine is seriously ill. Visiting him in the hospital may not be an enjoyable experience for me, but I visit him anyway. I sacrifice my time for him because he is important to me. In other words, I sacrifice a piece of my most precious resource — my life. An AI assistant does not have a limited lifetime, and therefore cannot express the feeling that somebody is important to it.
For my friend, it’s not just a matter of receiving positive signals. An app on my phone could do that, speaking encouraging words to my friend every hour on the hour. It could keep saying, “You’re brave!” or “Keep it up!” It could send words of encouragement and praise to make him feel good. Yet no one arranges to send such messages. Why? Because what matters is not that we are praised, but that we are praised for good reasons.
“What matters is not that we are praised, but that we are praised for good reasons.”
If the reasons for the praise are right, we are happy about it. For example, people strive to meet the daily step quota set by a smartwatch because meeting it represents a real achievement. They want the praise they receive to be justified, so they don’t try to fool the watch. But a well-founded positive acknowledgement from a human surpasses praise from a smartwatch, because it conveys genuine appreciation — a sentiment that robots cannot convey.
What Can We Learn from This?
Before AI became as powerful as it is today, humans crafted their own development plans, explained the intricacies of life to their children, visited friends in hospital and engaged in meaningful conversations. Today, AI excels at numerous tasks, sometimes even outperforming humans. However, relinquishing human responsibilities to AI reveals crucial capabilities in which AI cannot measure up to humans. We need to understand what these capabilities are.
AI challenges us to analyze our actions more precisely and to compare our achievements not merely with a machine’s top performance, but with our own desires and ideals. Until now, we have understood our actions as having two dimensions: the dimension of practical execution and the dimension of experiencing and expressing appreciation, authenticity, and autonomy. This multidimensionality is something that AI will never be able to duplicate.