For a Human-Centered AI

FBK at work for AI-compliant solutions

April 17, 2025

The new "AI Compliance Mission" has been presented: an urgent challenge for our society, essential for accompanying the current acceleration of technological development with confidence and responsibility. Michela Milano, head of the Research Center for Digital Society, inspired and led a participatory session focusing on methodological and application aspects.

Artificial Intelligence (AI) is transforming Europe, as reflected in the AI Act Regulation (EU) 2024/1689, which addresses the responsible use of AI systems in the European market. The regulation establishes varying levels of obligations and compliance requirements depending on the type of AI system and the sectors in which they are used.

In this dynamic scenario, Fondazione Bruno Kessler stands as a hub of excellence in AI, with a strong focus on reliability (Trustworthy AI) and compliance (AI Compliance).

“FBK’s ‘AI Compliance Mission’,” Prof. Milano said, “stems from the need to translate regulatory and ethical obligations into concrete technical requirements for AI systems. The primary goal is to develop innovative digital resources that support designers and developers in creating AI solutions that are compliant by design. This means integrating reliable AI principles and regulatory and ethical requirements from the early stages of design and development.”

FBK addresses the complexity of AI compliance by considering various aspects: the importance of human agency and oversight, technical robustness and security, data privacy and governance, transparency (traceability and explainability), diversity, non-discrimination and fairness, social and environmental welfare, and accountability.

The analysis extends to different elements such as data, algorithms and models, adopting both qualitative and quantitative approaches.

One of FBK’s strengths is advanced research to quantitatively measure key dimensions of reliable AI, such as fairness and robustness, and to define mitigation methodologies for existing systems and “compliant by design” approaches for new ones.
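As an illustration of what "quantitatively measuring" a dimension such as fairness can mean, a common metric is the demographic parity difference: the gap in positive-prediction rates between demographic groups. The sketch below is a minimal, hypothetical example and does not represent FBK's actual tooling; the predictions and group labels are invented for illustration.

```python
# Illustrative sketch (not FBK's tooling): demographic parity difference,
# a standard quantitative fairness metric for classifier outputs.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates across groups."""
    rates = {}
    for pred, group in zip(y_pred, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (pred == 1))
    shares = [pos / tot for tot, pos in rates.values()]
    return max(shares) - min(shares)

# Hypothetical model outputs for two demographic groups A and B
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 (0.75 vs 0.25)
```

A value of 0 would indicate that both groups receive positive predictions at the same rate; larger values flag a potential disparity worth investigating.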

Applications concern bias in generative AI as well as in other models.

Efforts are underway to investigate solutions aimed at identifying and correcting biases in data and models. This is evident in projects across various domains, such as human resources (e.g., job matching), healthcare (e.g., medical image analysis for diagnostics), and public administration—including procurement management and the allocation of public resources—with a focus on areas like sustainability and cybersecurity.
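One well-known family of techniques for correcting bias in data, of the kind such projects investigate, is pre-processing by reweighing (Kamiran and Calders): each training instance is weighted so that group membership and the outcome label become statistically independent. The sketch below is a hypothetical illustration, not a description of any specific FBK system; the labels and groups are invented.

```python
# Illustrative sketch (not a specific FBK tool): reweighing, a simple
# pre-processing bias mitigation. Each instance with group g and label y
# gets weight w(g, y) = P(g) * P(y) / P(g, y), so weighted group
# membership and labels are independent.
from collections import Counter

def reweighing_weights(labels, groups):
    """Per-instance weights that decorrelate group membership and label."""
    n = len(labels)
    p_y = Counter(labels)          # label counts
    p_g = Counter(groups)          # group counts
    p_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical data: group A has mostly positive labels, group B mostly negative
labels = [1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(reweighing_weights(labels, groups))
```

Under-represented group/label combinations receive weights above 1 and over-represented ones below 1, so a model trained on the weighted data is less likely to learn the spurious group-label association.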

Access to education is another crucial area where algorithmic techniques can be applied—specifically, to help break the link between inequality and limited opportunities for advanced education. This has profound implications for the right to study and for promoting conditions that support the growth of talented and deserving individuals.


In cases where a precise measurement is not possible, qualitative metrics can still characterize the effort required to achieve a given property. Explainability, for example, has a cost that can be assessed, making it possible to select algorithms accordingly.

FBK is committed to promoting the adoption of these resources in various sectors, including public administration, high-risk industries, healthcare and energy.

“The goal,” the director of the Center for Digital Society (DIGIS) added, “is to position FBK as a key player in the national and European landscape of trusted AI and AI compliance.”

Through the “AI Compliance Mission”, FBK aims to address the growing demand for innovative methods and tools to harness the transformative potential of AI, while mitigating risks and aligning its use with ethical and societal values. The goal is to seize emerging opportunities and support informed, transparent decision-making processes.

The internal FBK discussion provided an opportunity to update the mapping of AI compliance-related applications within FBK through a questionnaire. Approximately 50 researchers participated in the event, not only from the Center for Digital Society but also from the Centers for Digital Industry, Augmented Intelligence, Digital Health & Wellbeing, and IRVAPP. They engaged in a structured brainstorming session organized around four working tables.

Three focus groups each concentrated on one application domain of AI compliance. FBK's "work in progress" thus emerged, particularly with regard to industrial, public administration, and health-related applications.

A final group focused on a technical discussion of open questions and available tools—independent of specific application domains—that can support key requirements such as transparency, privacy (including GDPR compliance), data governance, security, diversity, and non-discrimination.

Eleonora Mencarini and Alessandro Cimatti (industry domain), Stefano Micocci and Chiara Leonardi (health domain), Maurizio Napolitano and Paolo Massa (public administration domain), and Elisa Ricci (technical table) contributed to the workshop as experts or facilitators.

The interdisciplinary discussion that followed highlighted the depth of expertise at FBK—not only in research excellence, but also in the integration of ethical, regulatory, and technical skills to shape a future where AI truly serves humanity.
