
AI Act: a guide to Europe’s first regulation of artificial intelligence

December 20, 2024

Exploring the balance between technology innovation and the protection of human rights.

The European Union is committed to harnessing the transformative potential of artificial intelligence while addressing its associated risks. In line with its digital strategy, the EU has introduced the AI Act, a piece of legislation that focuses on identifying and mitigating risks by taking a quality and transparency approach.

Prof. Carlo Casonato and Giulia Olivato, Ph.D., from the University of Trento Law School held a webinar on December 17 organized by FBK Academy in collaboration with the Jean Monnet Chair (T4F – Training for the Future). During the event they explained the rationale and structure of the regulation and, in a video interview, answered key questions. Together with Alessandro Sperduti, director of the Center for Augmented Intelligence, and Paolo Traverso, FBK's Director for Strategic Planning, they offered an in-depth analysis of the practical implications of the AI Act.

What are the key points of the AI Act introduced by the EU?

“There are many issues covered by the AI Act, the key point being the risk-based approach. The European Union decided not to divide AI on the basis of the areas of application – for example, medicine, agriculture, justice, and public administration – but decided to make a comprehensive, risk-based regulation. This means that systems are classified according to the level of risk they pose to society, fundamental rights, and democracies. The European Union has identified some systems that pose a risk deemed unacceptable, and these will be banned as of February 2025. Other systems, however, present minor transparency issues. In these cases, it is critical to ensure that people know whether they are interacting with a human being or a machine, or whether an article or video was created by artificial intelligence or reproduces a real image. A significant portion of the systems will fall into the high-risk category, on which the AI Act focuses many of its provisions. These systems offer enormous potential, but they also present risks that are intended to be mitigated through specific requirements. The first requirement concerns the use of datasets that are as error-free and representative of reality as possible. The second is transparency: it is essential that those using artificial intelligence be able to explain the logic that led the AI to generate that particular output. Finally, a crucial requirement for high-risk systems is ‘human oversight’: there must always be a person responsible for the decisions made by these systems,” answered Prof. Carlo Casonato, from the University of Trento Law School.

How to balance rights protection and technology promotion?

“Balancing the promotion of fundamental rights and the maintenance of technology innovation is one of the central aspects of the AI Act, particularly with regard to risk identification and mitigation. One of the main challenges will be to apply horizontally conceived legislation, valid for all sectors, to specific practical applications. In fact, reasoning about fundamental rights in too abstract a way risks compromising the concreteness needed to assess the impact, both positive and negative, that these systems can have on society and the economy. Importantly, the AI Act places great emphasis on the entire value chain. Since risk is the cornerstone concept of the regulation, the AI Act has two specific provisions to address it. On the one hand, there is the risk management system, entrusted to AI system providers. On the other hand, there is the fundamental rights impact assessment, which will be carried out only by a few deployers, i.e., selected professional users. The latter are in fact the best suited to examine the characteristics of the population on which the system will be deployed and to assess the consequences on fundamental rights,” said Giulia Olivato, from the University of Trento Law School.

How to support understanding and application of the AI Act?

Alessandro Sperduti, director of FBK’s Center for Augmented Intelligence, argues that “the role of academia and research is above all to promote culture, which is key to a thorough understanding of the issues and factors that influence the application of complex systems such as artificial intelligence-based ones. This task is crucial and must start as early as the university classrooms and then extend to the research level. In addition, we must absolutely be able to communicate, in a clear and simple manner, concepts that, by their nature, are complex.”

“I believe that research centers must and can play a key role in addressing the challenges posed by the AI Act. This regulation invites us to a crucial transformation: moving from quantity to quality. The United States and China have invested billions to develop extremely powerful systems capable of tackling any problem. We, on the other hand, need to focus on creating systems based on correct, clean and transparent data, where it is clear what information they have been trained with, and that are more reliable. This is an ambitious challenge, but it represents the only way to compete in a race that cannot be won by focusing solely on quantity. More reliable systems are what many institutions, from public administration to businesses, really need. When I talk about research, I don’t just mean the research carried out by artificial intelligence experts, but also the vital contributions of lawyers, regulatory experts, and domain experts. The best artificial intelligence for medicine is built with doctors, for law with jurists, and for Industry 4.0 with engineers,” concluded Paolo Traverso, FBK’s Director for Strategic Planning.
