
Gender bias and human language technology

January 7, 2021

Steps toward resolving "gender bias" in machine translation systems

Recent developments in language technology have changed our lives for the better in many ways: assisted writing tools, voice transcription and spam filtering, to name a few. In recent years, however, these systems have also shown discriminatory behaviors: chatbots that spew sexist insults online, for example, or AI recruiting tools that discard women's resumes. In other words, these technologies can reproduce pre-existing social asymmetries, including gender bias.

Machine translation systems, for both written and spoken language, are not exempt from this bias either. Indeed, such systems have been found to entrench stereotyped gender roles, for example by systematically translating the English word doctor as "dottore" [the Italian word for a male doctor, editor's note], and to default to masculine forms (e.g. "sono stato" [Italian for I have been when the speaker is male, editor's note]) even when a woman is speaking. This behavior is largely due to the data used to develop the systems, which contain the very biases (limited or unfavorable representations of women) that machine translation then learns. Gender bias is therefore both a technical and a social problem: it lowers system performance when translating for women and, above all, it amplifies the under-representation of an already disadvantaged demographic group.

This became the research focus of two PhD students, Beatrice Savoldi and Marco Gaido, supervised by researchers from FBK's Machine Translation Unit. A first paper, Gender in Danger? Evaluating Speech Translation Technology on the MuST-SHE Corpus (Luisa Bentivogli et al.), was presented at ACL 2020, the leading conference in computational linguistics. Focusing on how speakers' gender is rendered in translation, the study explored how speech translation systems can exploit audio information (the speaker's voice) to generate the correct linguistic forms. However, this solution alone is not enough.

With the article Breeding Gender-Aware Direct Speech Translation Systems, which received a special mention as an outstanding paper at the COLING 2020 conference, the FBK research group has taken the study of gender bias a step further, moving beyond inferences based on speakers' voices to translate gender. Assumptions based on such biometric characteristics can in fact be more harmful than helpful, for example in the case of children with high-pitched voices or women with deep voices. The problem, in this case, goes beyond the male/female binary and affects a wider spectrum of individuals (transgender people, people with vocal disabilities, children, etc.) who belong to other groups with little or no representation.

For this new study, the group made use of another resource developed at FBK: MuST-C, a corpus of TED talks created for building speech translation systems. In the first phase of the work, MuST-C was enriched with the gender of each talk's speaker, identified from the pronouns used in the biographies published on the TED website. These pronouns indicate the gender with which speakers present themselves and, consequently, how they accept to be referred to in a translation. Specialized translation systems were then created (i.e. trained exclusively on data from male or from female speakers), whose performance in gender translation improves considerably over that of a generic system. Moreover, having learned to produce only one linguistic gender form, these systems are able to produce it independently of the audio information they receive. For example, if a speaker with an apparently deep, masculine voice referred to herself with feminine forms, the system would ignore the recorded fundamental frequencies and still produce the required feminine forms. Conversely, a boy with a still high-pitched voice would receive the masculine forms required in translation.
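To make the idea more concrete, the following is a minimal, purely illustrative Python sketch of the pipeline described above: the corpus is split according to the speaker-gender annotation derived from the TED biographies, one specialized model is trained per split, and at translation time the gender form follows the selected model rather than anything inferred from the audio. The field names, the training placeholder and the translate function are hypothetical assumptions and do not reflect the actual MuST-C schema or the FBK systems.

```python
# Hypothetical sketch: partition a MuST-C-style manifest by the speaker-gender
# annotation and route each utterance to the matching specialized model.
# Field names and functions are illustrative, not the real corpus schema or code.

from collections import defaultdict

# Toy manifest entries; in practice these would come from the enriched corpus.
manifest = [
    {"audio": "talk_0001.wav", "translation": "Sono stata felice.", "speaker_gender": "She"},
    {"audio": "talk_0002.wav", "translation": "Sono stato felice.", "speaker_gender": "He"},
]

# 1) Split the corpus by the gender annotation derived from TED biographies.
splits = defaultdict(list)
for entry in manifest:
    splits[entry["speaker_gender"]].append(entry)

# 2) Train one specialized model per split (placeholder for real training).
def train_specialized_model(data):
    """Stand-in for training a direct speech translation model on one split."""
    return {"num_examples": len(data)}

models = {gender: train_specialized_model(data) for gender, data in splits.items()}

# 3) At inference time, the gender form is determined by which specialized
#    model is selected, not by properties of the audio signal.
def translate(audio_path, requested_gender):
    model = models[requested_gender]
    return f"<translation of {audio_path} with the {requested_gender}-specialized model>"

print(translate("new_talk.wav", "She"))
```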

These systems are therefore designed to mitigate the problem of gender bias and to be usable by a wide variety of individuals, although at the moment they still need gender information to be supplied externally in order to render diversity correctly and respectfully.

