For a Human-Centered AI

Researc-HER-s power: hard sciences told by the women who do them

March 1, 2021

A series of webinars organized by HIT - Trentino Innovation Hub to stimulate discussion on cutting-edge lines of research

The webinar is part of a series offered by HIT – Trentino Innovation Hub and featured three outstanding guests who spoke, from first-hand experience, about artificial intelligence as it relates to research, business and citizens.

Alessandra Sala is Director of AI and Data Science at Shutterstock and an ambassador for Women in AI Ireland, a non-profit association that aims to bring more women into the world of technology and AI. The gap it seeks to close is not only professional but also ethical and social, since it prevents a diversity of actors from bringing their own perspectives to how technology is defined from an ethical and social point of view.

Asked how AI can transform the business and interaction models citizens experience through the products they are offered, Alessandra began by stressing today's need for a citizen-friendly business model: one that rethinks the economic models themselves and fosters closer collaboration between universities and businesses, not only to develop new AI models but also to bring new products into society in the service of citizens. The values of privacy and human rights must therefore be brought back to the center, so that citizens are not merely users of the products on offer but thinking, informed consumers.

Elisa Ricci, another speaker, has a dual affiliation: she is an associate professor of AI and deep learning at the University of Trento and head of FBK’s Deep Visual Learning unit, which studies artificial intelligence applied to video, images and robotics, with European-funded projects and partnerships with Facebook, Snapchat, Huawei and others.

Drawing on her proven experience with very large organizations, Elisa spoke about the added value artificial intelligence brings to companies, in particular those processing large amounts of data (e.g. monitoring and video surveillance) that need practical, fast solutions that preserve privacy. It is nothing new that industrial scenarios in which humans and robots coexist, as in increasingly automated production chains, are becoming ever more plausible. Stretching our imagination a little further, companies that handle personnel selection could, in the future, be assisted by AI-based systems that screen candidates by analyzing emotions, facial movements and the like.

Chiara Ghidini is an FBK researcher who, after her doctorate at La Sapienza University of Rome and postdoctoral positions in Manchester and Liverpool, returned to Italy to work on symbolic artificial intelligence, i.e. reasoning with explicit rules, combined with inductive reasoning (the kind children use, for example).

Chiara heads FBK’s PDI (Process and Data Intelligence) research unit and is scientific co-head of the Digital Health and Wellbeing center, which connects AI with health.

Chiara illustrated how the increasingly pervasive presence of artificial intelligence in society, healthcare and administration brings significant improvements to citizens’ everyday lives, for example by streamlining bureaucratic procedures or assisting doctors in patient care. Predictive models can help identify the ideal care for older patients (when it is better to treat a patient at home or in the hospital, what the patient’s living conditions are, and so on), while chatbots and virtual assistants can provide constant monitoring for chronic diseases such as diabetes, helping to develop solutions that are effective and non-invasive for the patient while providing valuable data to primary care physicians.

The advent of these new technologies has inevitably raised, and continues to raise, ethical and privacy issues. The latter have been partially addressed with the introduction of the GDPR, the European privacy regulation, an important tool that nonetheless has limits, for example when the data provided is stored or processed in a cloud located outside Europe, such as in the U.S. or China, where the regulation does not apply.

The GDPR also imposes limitations of its own, for example when overly stringent privacy constraints make acquired data unusable because they leave no room for the necessary relationship of trust, such as that between doctor and patient, under which the data provided by an individual are used for the collective good. Alongside legal regulation, therefore, there is also a behavioral and pragmatic dimension, which Europe is shaping through dedicated white papers on the development and ethics of AI.

Education and communication about new technologies undoubtedly play a key role in all of this: the IT specialists who develop AI solutions must be supported by sociologists and philosophers who tackle the ethical issues. At present, the university system lacks courses that go beyond purely technical aspects; hopefully these will be included in future curricula, with the aim of identifying solutions that could also be applied to diversity, inclusion and the gender gap.

Moreover, communication about progress in the AI field must be handled appropriately so as to prevent privacy-related fears among citizens, who should receive accurate information on what artificial intelligence can and cannot actually do, for example the fact that it can extract only certain types of data.
