For a Human-Centered AI

A guide for ethical AI adoption in public administrations

April 11, 2025

How can we build an ethical artificial intelligence (AI)? The AI Ethics Canvas is a practical, interdisciplinary instrument based on research conducted by FBK’s Digital Commons Lab within the AIxPA project, led by the Autonomous Province of Trento with FBK’s collaboration. Riccardo Nanni was among the main contributors to the canvas. The tool presents a set of questions that a multidisciplinary team needs to answer in order to design an AI-driven service for public administrations (PAs). The AI Ethics Canvas helps to balance functionaries’ duties and obligations with citizens’ needs by providing a clear method to address these challenges. It is available for everyone to download.

Applying AI to PA work promises to accelerate bureaucratic processes and to provide functionaries with instruments for developing solutions tailored to citizens’ specific needs.

On the one hand, a common expectation is that such instruments will accelerate processes. On the other hand, one needs to slow down to ensure that humans can supervise and take responsibility for machine-aided solutions.

Making an ethically sound decision is a process that requires decision-makers to process large amounts of information and to possess the right competences. In Europe, a strong sense of responsibility constantly drives EU regulation; among the resulting measures, the AI Act seeks to regulate AI development on the basis of value-driven choices.

Such attention to values has always been high among FBK’s AI researchers. This is why meeting these challenges when designing AI-driven services is central to AIxPA, a project led by the Autonomous Province of Trento within the flagship project Progetto Bandiera – Intelligenza artificiale nel sistema della PA (PNC A.1.3 “Digitalizzazione della Pubblica Amministrazione della Provincia Autonoma di Trento” – CUP: C49G22001020001).

The project thus started with a co-creation activity involving numerous FBK researchers and functionaries from Trento’s PA. After several meetings involving FBK units (I3 and DCL in particular), we moved towards drafting guidelines for ethical AI implementation. Based on these guidelines, the Digital Commons Lab developed a graphical instrument that helps PAs define the necessary requirements for AI-based services.

The canvas presents questions that need to be addressed with a variety of competences. Each page focuses on a necessary step along the value chain of AI service creation: from data analysis to communication with the general public and training for functionaries who use AI instruments. These include important steps such as the choice of analytical methods and the development of algorithms.

The AI Ethics Canvas was elaborated by Riccardo Nanni, Pietro Bizzaro, Munazza Usmani and Maurizio Napolitano from DCL, and Albana Celepija from DSLab.

It started with a careful analysis of ethical AI frameworks, conducted mainly by Riccardo (a postdoc in political science). His work was then supported by Pietro Bizzaro (an FBK PhD student in computer science with a law degree), who brought his expertise in law and technology; Albana Celepija (a PhD student in informatics), who contributed her expertise as a developer; Munazza Usmani (a postdoc in GeoAI), who worked on the models; and Maurizio Napolitano (head of the DCL unit), with his expertise in data governance.

The outcome is an instrument that helps PA decision-makers configure ethical AI adoption within new-generation administrative processes and service provision.

An iterative co-creation process

The canvas use scenario starts with a group of people discussing the creation of an AI-based service that embeds ethical values.

The group is composed of all the people who play a role in the creation of the service and in its reuse for citizens at various stages. Each of them contributes their own skills and is expected to answer the questions on each sheet.

The canvas is currently undergoing further development and validation. Although it was designed for use by the Trento PA, it is applicable to all European PAs that wish to adopt AI in their operations.

The canvas is available for download on GitHub and on the related website.

Ethical boundaries: from values to design

When a PA adopts a new service, it needs to address more in-depth questions than those that usually concern private actors.

We refer in particular to privacy, non-discrimination, and equity. For example, a technology that manages information flows to provide economic benefits to families, or other subsidies based on income criteria, must necessarily access personal or company data to validate the request.

This implies the need to strike a balance between key values. 
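As a purely illustrative sketch (not part of the canvas or of the project’s software), the snippet below shows one way such a balance can be reflected in code: the component that validates an income-based benefit request reads the data it needs but exposes only a yes/no eligibility decision to the rest of the service. Every name here (BenefitRequest, isee_income, the threshold) is an assumption made for the example.

```python
# Hypothetical example of data minimisation in an eligibility check.
# Field names and the income threshold are invented for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class BenefitRequest:
    applicant_id: str     # identifier of the applicant (personal data)
    isee_income: float    # income indicator used only for the eligibility check


def is_eligible(request: BenefitRequest, income_threshold: float = 20_000.0) -> bool:
    """Return only the eligibility decision, not the underlying personal data."""
    return request.isee_income <= income_threshold


if __name__ == "__main__":
    request = BenefitRequest(applicant_id="APPLICANT-001", isee_income=18_500.0)
    # Downstream services see only this boolean; the raw income stays with the validator.
    print(is_eligible(request))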

Open questions, shapes and colours

The canvas is a rather widespread format in design thinking: it serves as a cognitive shortcut for obtaining an overall picture of a complex phenomenon and for breaking the definition of a macro objective down into coordinated parts. Each part is accompanied by guiding questions that help decision-makers think about their choices and their implications vis-à-vis the specific objectives of their activities.

Similarly, in this case the general objective is to guide PAs in establishing decision-making processes before adopting a new AI-based solution. The questions throughout the canvas were built in dialogue with PA interlocutors, listening to their needs and informing them about the implications of their planning choices.

The AI Ethics Canvas is accompanied by practical instructions for filling it in and by a glossary defining the key concepts. A practical legend reminds users that each background colour is associated with a professional profile, so that they can distinguish at a glance among the AI scientist (who researches and develops AI systems and may be external to the institution), the AI engineer (who trains, implements and maintains AI systems) and the AI user (who uses AI systems in their job, e.g. councillors or officials).

Five steps

The analysis comprises five elements, each represented on a separate canvas page. For each of them, the AI Ethics Canvas proposes a graphical tool that guides its users along a conceptual path from general to specific ethical questions.

The first step relates to data. Which actors are involved? Which biases exist in the data collection process? How can we mitigate them? How do we monitor them?
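To make the monitoring question concrete, here is a minimal, hypothetical sketch (not taken from the canvas) that compares how groups are represented in a collected dataset against reference population shares. Every column name, group label, reference share and threshold below is an assumption for the sake of the example.

```python
# Hypothetical monitoring step: compare observed group shares in the collected
# data against reference shares (e.g. from official statistics).
import pandas as pd

# Toy sample of collected records, with one sensitive attribute per record.
records = pd.DataFrame({"municipality_size": ["large", "large", "large", "small", "small"]})

# Reference shares the sample is expected to reflect (invented values).
reference_shares = {"large": 0.45, "small": 0.55}

observed_shares = records["municipality_size"].value_counts(normalize=True)

for group, expected in reference_shares.items():
    observed = observed_shares.get(group, 0.0)
    gap = observed - expected
    flag = "check" if abs(gap) > 0.10 else "ok"
    print(f"{group}: observed {observed:.2f}, expected {expected:.2f} ({flag})")
```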

Next come algorithms: what is their objective? Which type of instrument are we building? Is the training dataset biased? Are algorithmic outputs interpretable?
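As an illustration of the interpretability question (again, not part of the canvas itself), the sketch below fits a simple model on invented data and uses permutation importance to see which inputs actually drive its outputs. The feature names and the data are assumptions; a real service would apply the same kind of check to its own model and dataset.

```python
# Hypothetical interpretability check on a toy classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # columns: income, household_size, noise
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome depends only on the first two

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# The "noise" feature should come out with near-zero importance.
for name, importance in zip(["income", "household_size", "noise"], result.importances_mean):
    print(f"{name}: importance {importance:.3f}")
```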

A central step concerns the methods of analysis, that is, one’s capacity to interpret algorithmic outputs. In particular, who takes responsibility for preventing discrimination, ensuring privacy, and protecting copyright? What other aspects should be considered?

Last but not least, we reach two pages addressing matters of context and functional requirements. Among social and cultural elements, the canvas examines key factors for a successful AI implementation, in particular adequate and clear communication to the general public so that citizens perceive its benefits. The last page collects the functional requirements for the practical implementation of all the previous steps.

Available under this license: CC BY-SA 4.0 International

The project results from an exchange between FBK researchers and functionaries from the Province of Trento. It was a capacity-building activity for functionaries, aimed at producing a practical instrument for PAs across Italy and Europe that are trying to build chatbots, predictive systems or other AI-based solutions. The tool can be reused and modified by each individual PA. This contributes to FBK’s everyday commitment “For a Human-Centered AI”.

The canvas also received scientific validation in the article “AI Ethics Canvas: A graphical tool to design and deploy ethical artificial intelligence for public administrations. The experience of the Autonomous Province of Trento, Italy”, co-authored by Riccardo Nanni, Pietro Giovanni Bizzaro, Munazza Usmani, Maurizio Napolitano and Albana Celepija.

In particular, Albana Celepija’s contribution was to link the outcome of the AI Ethics Canvas to software components that guarantee ethical and legal obligations are respected.
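To give an idea of what such a link might look like, here is a purely hypothetical sketch (not the actual components developed in the project): canvas answers are encoded as a simple checklist that a deployment pipeline validates before releasing the service. All field names and checks are invented for illustration.

```python
# Hypothetical deployment gate built from canvas answers.
from dataclasses import dataclass, field


@dataclass
class CanvasOutcome:
    bias_mitigation_documented: bool
    privacy_officer_assigned: bool
    outputs_interpretable: bool
    public_communication_plan: bool
    missing: list[str] = field(default_factory=list)

    def ready_for_deployment(self) -> bool:
        """Collect unmet obligations and allow deployment only if none remain."""
        checks = {
            "bias mitigation documented": self.bias_mitigation_documented,
            "privacy officer assigned": self.privacy_officer_assigned,
            "outputs interpretable": self.outputs_interpretable,
            "public communication plan": self.public_communication_plan,
        }
        self.missing = [name for name, ok in checks.items() if not ok]
        return not self.missing


if __name__ == "__main__":
    outcome = CanvasOutcome(True, True, False, True)
    print(outcome.ready_for_deployment(), outcome.missing)  # False ['outputs interpretable']
```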

DOWNLOAD THE CANVAS

P.S.: we know what you are wondering. No, we used no generative AI to draft this article or the scientific paper. This work is part of the “Progetto Bandiera – PNC (Piano Nazionale per gli investimenti Complementari) – AI data: Intelligenza artificiale nel sistema della PA” (Flagship Project – “AI data: Artificial Intelligence in the Public Administration”), CUP C49G22001020001. Coordinator: PAT, with FBK as “leading implementation partner”.

 

