For a Human-Centered AI

“Can che ha bias non morde?” (a pun on the Italian proverb “can che abbaia non morde”, “a barking dog never bites”)

May 23, 2022

The new episode of Radio FBK features an engaging conversation about the relationship between discrimination and technology. The protagonists of the dialogue are Luisa Bentivogli and Marco Guerini, respectively a researcher in the Machine Translation unit of the Digital Industry Centre and the Head of the Language and Dialog Technologies Research Unit of the Digital Society Centre.

The explosion in the use of advanced information technologies, particularly those based on artificial intelligence techniques, poses major challenges for the inclusion and appreciation of differences in gender, ethnicity, religion, and ability. A first challenge comes from AI products themselves: the feminization of voice assistants, or the biases in recognizing images of Black women’s faces, show that we must avoid the risk of building technology that incorporates, and amplifies, the very stereotypes and prejudices we are working hard to overcome.

The dialogue between the two experts examines the contribution of research in this particularly complex and rapidly evolving field. Starting from the very definition of bias, we discover how rich the references mapped in the literature are. Moreover, everyday experience shows that bias is not necessarily a negative thing; on the contrary, in certain circumstances it can even prove helpful. So how can data and algorithms end up amplifying the inequalities and discrimination that already exist in our society?

The episode also keeps the spotlight on the Bruno Kessler Foundation research community’s commitment to fighting discrimination, an issue that every year in May is the focus of in-depth research and awareness-raising initiatives throughout Europe.

In particular, the audiovisual narrative we present here conveys the liveliness of the debate under way in the scientific community: to properly understand the phenomenon of algorithmic bias it is necessary to examine many interrelated aspects, from the values and stereotypes embedded in language, to language as a form of thought that builds narrative universes (which may include or exclude), to the decisive role played not only by data but also by the models trained on them.
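As a purely illustrative aside, the short Python sketch below shows one way the stereotypes absorbed from text can become measurable in a trained model: it checks whether occupation words sit closer to “he” or to “she” in an embedding space. The vectors are invented toy numbers, not taken from any FBK system or dataset.

```python
# Toy illustration (invented 4-dimensional vectors, NOT real embeddings):
# if training texts pair occupations with one gender more often, the learned
# vectors inherit that association, and simple geometry makes it visible.
import numpy as np

emb = {
    "he":       np.array([ 0.9, 0.1, 0.0, 0.2]),
    "she":      np.array([-0.9, 0.1, 0.0, 0.2]),
    "engineer": np.array([ 0.6, 0.4, 0.1, 0.3]),
    "nurse":    np.array([-0.7, 0.3, 0.2, 0.3]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("engineer", "nurse"):
    # Positive score: the word leans toward "he"; negative: toward "she".
    score = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    print(f"{word}: gender association score = {score:+.2f}")
```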

Algorithmic choices now affect us personally in an increasingly extensive and relevant way, often without most people noticing. As always, technology itself is neutral, but the way it is used can be steered in a socially constructive direction, or not.

The research contributions describe how neural networks can be used to generate counter-narratives that push back against online hate speech, and they bring out all the difficulties of developing inclusive language: language that drops gender distinctions where they are unnecessary, or for people who do not feel represented by a masculine or feminine linguistic form (non-binary people). Many questions remain open about the potential and the limits of the available approaches and tools. Where do we stop? When does language risk shifting from suggestion to imposition?
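To give a concrete, if deliberately simplified, idea of what “generating counter-narratives with neural networks” means in practice, here is a minimal sketch. It assumes the Hugging Face transformers library and a generic pretrained model (gpt2), and merely prompts the model to respond to a hostile message; it shows the overall shape of the task, not the methods discussed in the episode.

```python
# Naive counter-narrative sketch using a generic pretrained language model.
# Real research systems rely on curated data and careful evaluation; this is
# only an illustration of prompting a neural text generator.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

hate_message = "People from group X do not belong in our city."  # example input
prompt = (
    "Hate speech: " + hate_message + "\n"
    "Respectful, fact-based counter-narrative:"
)

result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```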

We are curious to hear reactions and comments from anyone who decides to spend some time listening to or watching our podcast. Knowledge is a challenge that involves collective intelligence, and no one is excluded. Write to us on Facebook, Instagram, LinkedIn or Twitter. We will read and reply to all of you as soon as possible.

This podcast is part of the Science and Society series. Discover all the productions of Radio FBK and don’t miss the next episodes: follow us on Spreaker or Spotify.
