Law and power in the era of Artificial Intelligence
Risks and benefits of the pervasive use of Artificial Intelligence technologies, especially in the medical field.
We often hear sweeping expressions such as “technological revolution”, “transition to new dimensions” or “developmental revolution” to describe the advent of new Artificial Intelligence technologies (hereinafter AI). It is therefore necessary to shed light on the phenomenon so that citizens feel involved in the process rather than merely subject to it. AI is in fact gradually permeating ever wider spheres of our lives, percolating from the public sphere into private ones in ways that are not always consciously perceptible, which is why an increasingly multifaceted and multidisciplinary approach is needed to examine the benefits, but also the risks, or at least the challenges, inherent in the process.
One of the fields most affected by AI in recent years has been medicine, as Dr. Marta Fasan discussed extensively at the final conference of the TrAIL project (Trento AI Laboratory, scientific head Carlo Casonato), underlining the importance of the legal dimension for the rise of AI in this field (but not only). AI is the main tool through which to achieve a new paradigm shift in medicine (deep medicine), promoting ever more precise and personalized diagnoses and therapies, strengthening prevention, and fostering self-managed health, i.e. health increasingly managed by empowered and aware individuals who actively participate in the choices that affect them.
The advantages of applying these new technologies are many: shorter drug-development times without compromising safety, large-scale benefits for public health (think, for example, of pandemic screening), more accurate diagnostic processes, and the development of smart medical platforms that provide support to doctors, as well as chatbots, apps and so on.
In addition to technical and diagnostic improvements, the application of AI in medicine also brings increased EFFICIENCY, as it helps optimize time management, which is often critical: think of handling emergencies in an emergency room, or of reduced diagnosis times.
There are also advantages from a legal and social point of view, such as the PROMOTION of SUBSTANTIVE EQUALITY and of the RIGHT TO HEALTH.
However, we also need to understand the inevitable legal implications: whether the spread of AI in medicine can affect the rights and principles at stake in medical matters, and what critical issues its use might raise. As with almost everything, advantages come with risks, for example those related to data (unrepresentative data, vitiated by bias and discrimination, with the risk of reproducing errors and prejudices already present in society) and to the technological gap (very complex and currently expensive technologies that could lead to unequal access to the benefits AI promises, with growing risks for more vulnerable groups such as older adults). Moreover, many physicians may not fully trust the new AI-based approaches, and vice versa.
Another concern is the so-called automation bias: physicians and patients might lose their critical approach to technology and fail to spot its errors, relying on it blindly; or physicians’ skills might decline (deskilling), as they become less accustomed to performing certain tasks and leave them entirely to technology (risk of technological paternalism). A first possible solution is to make these systems less purely data-driven and more “humanized”, grounded in a body of prior knowledge, especially in the medical field but also in the legal field, where there is talk of using AI in courts to process judicial decisions, perhaps in minor cases, or to streamline legal proceedings.
It is therefore necessary to ensure full explainability and comprehensibility of the processes that lead an AI system to a given conclusion, and to ensure that algorithms are not discriminatory (principle of algorithmic non-discrimination), for example by encouraging inclusivity and pluralism in the teams that develop these programs.
At the same time, citizens should be able to give or withhold consent to the use of AI technologies (principle of non-exclusivity).
Further insight was offered by FBK researcher Chiara Ghidini, who works on symbolic AI applied to medicine at the Process and Data Intelligence Unit she leads. In her view, precisely because of the pervasiveness and implications AI can have in this area, it would be desirable to stimulate the same kind of debate that took place around biotechnology and nuclear power, both among experts and with the general public.
For example, since each of us now generates data more or less constantly and consciously, we need to understand whether we can donate it for research, what the possible implications are, and whether a political or legal framework exists to protect individuals. On the basis of these reflections, Ghidini outlined what she considers the three most pressing challenges for those working on Artificial Intelligence:
- A growing tendency to do more and more through AI:
While so far AI has been applied to well-defined, circumscribed domains, what we are now trying to do is have it tackle complex problems, for example assisting a patient who needs long-term care, combining disease treatment with mental well-being, or recognizing an emotion or a mental condition.
- The need for AI to be reliable and understandable from a technological point of view, always keeping the human being involved in the process (human in the loop). Researchers must keep in mind, and try to overcome, bias and decision-making opacity (not knowing why a system came to a conclusion is a problem), and must be able to understand the output an AI technology produces. No less important is the problem of access to these technologies, especially for an audience of non-digital natives.
With regard to the dreaded AI “biases”, Ghidini points out that the fact that they exist, and that they emerge in the use of these technologies, means that certain preconceptions do in fact exist in society; it is actually a good thing that they are brought to our attention, as we can then try to address and overcome them.
- A shift from prediction to recommendation: the new algorithms should aim to provide useful recommendations on how to avoid something or to do it appropriately. For example, the goal is an algorithm that won’t just tell you that you are about to hit a wall, but will suggest how to avoid it.
Paolo Traverso, director of FBK’s Marketing Strategy and Business Development Unit, also emphasized regulation as the industry’s next challenge: people should not be governed by technology, but at the same time regulations should not be a brake on AI development; rather, they should empower people to understand why being able to use their data matters, what purpose it serves and how it is used. And the fact that AI technologies raise the issue of transfer of control, i.e. the need for human intervention at some point in the process (think of medicine, but also of automotive or the legal system), shows that machines have limitations that require human intervention to be overcome, and it is right and desirable that this should remain so.