For a Human-Centered AI

What Artificial Intelligence is not and why it must be urgently regulated

November 3, 2023

Two leading experts discuss the new AI-based technologies and how they can be used effectively and responsibly

Artificial Intelligence is now everywhere and has permeated almost every field of knowledge, not least that of communication. For this reason, the Italian Federation of Public Relations (FERPI) organized a conference on this subject on 27 October 2023 at the University of Trento, inviting as speakers Dr. Paolo Traverso, head of Strategic Marketing and Business Development at Fondazione Bruno Kessler, and Carlo Casonato, Professor of Comparative Constitutional Law and holder of the Jean Monnet chair of Artificial Intelligence Law (T4F).

The attending professional communicators admitted that they already use Artificial Intelligence in their work, particularly ChatGPT, to generate text or images and to save time in their daily tasks, especially during particularly busy periods. But how can we make sure we are using these new tools correctly, both professionally and ethically? And how should the copyright of a text or image generated by an algorithm, rather than by a human being, be handled?

This last question was the starting point for Traverso’s talk, which set out to describe what AI is not. The first, seemingly obvious thing that AI is not is human. Of course, using ChatGPT may give rise to the doubt that you are actually dealing with a human being, that behind the lines appearing in the chat there is someone in the flesh typing them; so much so that some users relate to it by offering apologies and thanks. However, this is only appearance: the machine, and the technology behind it, does not reason as we do and does not rely on the same concepts or assumptions, but merely appears to do so because it has been trained to achieve precisely that effect.

In medicine, for example, analytical Artificial Intelligence is used with excellent results: by examining an image of a retina, such a system can determine whether the eye is affected by retinopathy. These systems manage to be more accurate and immensely faster than a person because they have been trained on images supplied by the professionals themselves and can therefore draw on an immense knowledge base. At the same time, however, their task and capabilities end there, with the formulation of a diagnosis, which they cannot discuss, precisely because they are not human beings with logical, dialogical, and professional skills.

Similarly, generative AI, the kind behind tools such as ChatGPT which, as its name implies, generates texts, images, and more, is trained on words, sentences, images, blogs, and e-books so that it can in turn generate new products by similarity and comparison. In the case of text, for example, ChatGPT puts together sentences that (almost always) make sense because it is grounded in a body of texts from which it takes its cues. This is why, if we write “I eat,” ChatGPT is likely to propose “a sandwich/an apple/pasta” as the object rather than “a shoe/a table/a television.” It functions by imitation and collocation, not logically, much less creatively. This is also why, when asked who Bruno Kessler is, the system initially provides accurate information but at some point goes wrong, even claiming that Kessler was a scientist, because research centers are typically named after scientists rather than after particularly enlightened politicians.
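The collocation mechanism described above can be illustrated with a deliberately toy sketch: a bigram model that simply counts which word most often follows another in a small corpus. The corpus and function names here are invented for illustration; real systems like ChatGPT are vastly more sophisticated, but the underlying idea of predicting a likely continuation from seen text is the same.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the huge body of text a real model is trained on.
corpus = [
    "i eat a sandwich",
    "i eat an apple",
    "i eat pasta",
    "i eat a sandwich",
    "we eat an apple",
]

# Count which word follows each word across the corpus (bigram collocations).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("eat"))  # proposes "a" (as in "a sandwich"), never "shoe"
```

The model proposes “a sandwich” after “eat” simply because that pairing is frequent in its training data, not because it knows anything about food. By the same token, it would never propose “shoe”, which it has never seen in that position.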

Even the narrative and poetic texts we can create with AI are not creative; that is, they do not spring from a thinking mind or from an emotion transposed onto paper (or screen) but are mere strings of words, albeit of a certain polish and pleasantness.

We must therefore be aware of the mechanisms behind new technologies in order to use them to our advantage and in a truly effective way.

From these few examples of use, it is already clear how pervasive and transformative Artificial Intelligence is in our lives, which is why it needs to be properly regulated. Professor Casonato spoke on this topic, focusing in particular on what, or who, is behind such powerful technology, which could be exploited to steer political votes, purchasing trends, and much more.

Constitutionalism aims to limit power and direct it toward the welfare of society by guaranteeing its rights. For the same reasons, if AI has transformative power, it should be regulated: behind AI there are private powers, entrepreneurs and businesspeople, who might let their own interests prevail over social welfare.

What can law do? Reclaim the principles and rights we already have in the constitution and adapt them to meet the new challenges posed by this technology.

And since AI may be confused with humans, we need to think about an extension of informed consent that makes it transparent whether an image or a text was created by a human or by ChatGPT. There remains, in this regard, the unresolved problem of copyright for images or texts created with AI, since copyright currently rests on a personal, anthropocentric basis. Usually, responsibility is tied to autonomy, but how is this issue addressed in the case of machines? It cannot be ruled out that insurance will be needed in the future for the use of AI systems.

Another fundamental concept is the so-called right to discontinuity and inconsistency. Because users are profiled, AI proposes things they have already searched for, read, or ordered. This can be dangerous because, in the long run, it could influence political orientation or provide biased (dis)information (echo chambers): since profiling is conservative and polarizing by nature, users are no longer exposed to different opinions but only to those akin to their own, and this undermines the foundations of democracy. This is why it is necessary to claim the right to an instrument that breaks open echo chambers, loosens them, and makes them permeable, so that we can be exposed to something different, discontinuous, and inconsistent with our previous history. In other words, we must claim the right to be exposed to what is different from ourselves, because only this can give rise to a free and conscious society.

One attempt that timidly moves in this direction is the “Surprise me” button recently introduced by the streaming platform Netflix, which lets you receive suggestions for movies and series that do not conform to your tastes. It is a good example of integrative AI: machine and human collaborating to increase people’s intellectual capabilities, not to replace them.

The stakes are high, and meeting them will require close collaboration and synergy among media professionals, jurists, and computer scientists to build AI-based systems that comply with the AI Act.

Stephen Hawking said that Artificial Intelligence could be either the best or the worst thing ever to happen to humanity. It is still very early to make predictions, but we have already begun to identify the possible risks and points of attention of the most powerful technology we possess.

