For a Human-Centered AI

Artificial intelligence will help in fighting fake news

November 14, 2023

Interview with Riccardo Gallotti, head of FBK's Complex Human Behaviour LAB and coordinator of the European AI4TRUST project. Goal: to support information professionals in debunking misinformation and disinformation thanks to advanced Artificial Intelligence techniques

I first met Riccardo Gallotti at the San Pio X neighborhood fest in Trento: a mutual friend introduced us and explained the project he is working on. We talk about the neighborhood and its strengths, since he is looking to buy a house in town. I sense that this young researcher from Crema, with a background as a theoretical physicist, hides a leading scientist’s experience behind his somewhat carefree appearance.
It is no coincidence that he directs the CHuB Lab (The Complex Human Behaviour Lab) at Fondazione Bruno Kessler, which deals with statistical modeling of individual and collective human behavior, and coordinates a six-million-euro European project to counter online disinformation with the help of Artificial Intelligence.
It is called “AI4Trust,” and it aims – thanks to the alliance between humans and machines – to combat fake news online. This term, however, does not appeal to Riccardo Gallotti, who prefers misinformation – stemming from an unintentional mistake – or disinformation, when false news is spread with manipulative intent.
We meet a few weeks later to explore his research in more detail.
Disinformation is a sensitive and high-impact issue, including politically: according to a 2019 Ipsos international survey, as many as 86 percent of 25,000 respondents worldwide claimed to have run into fake news on the Internet, particularly on Facebook, and nearly nine in ten admitted to having initially believed it.
In 2016, misinformation likely played a major role in Donald Trump’s election to the White House, and during Covid fake news, often conspiracy-driven, created great confusion at a time of distress and uncertainty.
It was in 2020 that Riccardo Gallotti’s engagement on the misinformation front began, when he became involved in a project launched by Professor Manlio De Domenico, now at the University of Padua. The scientist had the idea of tracking the evolution of misinformation around a virus that no one cared about yet: thus the Infodemic Observatory for Covid-19 was created, which would later be supported by the WHO as a tool to monitor the spread of misinformation around the pandemic.

Mr. Gallotti, how does the project you coordinate, “AI4Trust,” work?

“First of all, we identified three issues to work on: climate change, public health, and migration, all of which have a global impact and around which misinformation spreads. Using data science tools, we probe a set of keywords, but also key users and groups found on social media and on some online news outlets. The result is a large amount of data that is fed to artificial intelligence, which extracts the themes and narratives emerging online and reports them to the international team of information professionals, who check their veracity, select them, and submit them for critical scrutiny. In addition, thanks to AI and mathematical tools peculiar to the science of complex networks, we can tell whether a piece of fake news is the result of a deliberate, concerted campaign or not.”
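The monitoring step Gallotti describes – probing keywords across social media and flagging the topics where chatter is growing – can be illustrated with a minimal sketch. The topic names and keyword watchlist below are hypothetical placeholders, not the project's actual configuration:

```python
from collections import Counter

# Hypothetical watchlist for the three AI4Trust topic areas.
WATCHLIST = {
    "climate": {"climate", "warming", "co2"},
    "health": {"vaccine", "virus", "pandemic"},
    "migration": {"migrants", "border", "asylum"},
}

def flag_posts(posts, watchlist=WATCHLIST):
    """Group posts by topic: a post matches if it contains a watched keyword."""
    hits = {topic: [] for topic in watchlist}
    for post in posts:
        words = set(post.lower().split())
        for topic, keywords in watchlist.items():
            if words & keywords:
                hits[topic].append(post)
    return hits

def trending(posts, watchlist=WATCHLIST):
    """Rank topics by how many posts matched -- a crude 'emerging themes' signal."""
    counts = Counter(
        {topic: len(matched) for topic, matched in flag_posts(posts, watchlist).items()}
    )
    return counts.most_common()
```

In the real pipeline this filtering is done by trained AI models rather than literal keyword matching, and the ranked output is what gets handed to the human reviewers for verification.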

What is the goal of the project?

“The goal is to create a consultation platform where professionals can keep an eye on “trending topics,” i.e., the issues around which misinformation is growing at a given time. In this case, artificial intelligence acts as a filter: on the basis of the patterns on which it has been “trained,” it manages to provide already selected material to human reviewers, who have the final say.”

But how does one go about training artificial intelligence?

“Let’s take the example of “fake news”: a group of experts is asked to compile a table with 1,000 examples of fake news and another 1,000 evaluated as true. Based on this list, the AI learns to differentiate one from the other. This process, however, does not stop with the first 2,000 pieces of data; it is only the beginning of continuous learning based on interaction with other experts, who validate the AI’s suggested responses on new stories. In this way, its expertise is continuously improved.”
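The training procedure Gallotti describes is classic supervised learning: a table of labeled examples, a model that learns to separate the two classes, and a feedback loop where expert-validated stories are fed back in. As a hedged illustration only (the project's actual models are far more sophisticated), here is a toy Naive Bayes text classifier built from scratch:

```python
import math
from collections import Counter

class TinyTextClassifier:
    """Toy Naive Bayes classifier: learns from labeled example texts,
    like the expert-compiled table of fake vs. true stories."""

    def __init__(self):
        self.word_counts = {"fake": Counter(), "real": Counter()}
        self.doc_counts = Counter()

    def train(self, text, label):
        # Called once per labeled example; also used later for the
        # continuous-learning loop when experts validate new stories.
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        vocab = set(self.word_counts["fake"]) | set(self.word_counts["real"])
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in ("fake", "real"):
            # Log prior plus Laplace-smoothed log likelihood of each word.
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

The continuous learning Gallotti mentions corresponds to calling `train()` again whenever a human expert confirms the label of a new story, so the model's word statistics keep improving with use.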

The new front in the battle against online misinformation is the so-called deep fakes…

“Yes, these are photographs, audio or video clips edited so realistically by AIs that they appear to be real: for example, a politician who is made to say something different from what he or she really said, which is very dangerous, for instance, in times of war. Paradoxically, it is easier for an AI to recognize deep fakes in a video or audio clip than fake news in a written text. This is also because some communities in which misinformation spreads have learned to use neologisms on certain issues to stay under the radar.”

The problem is that AI can be used to construct and spread fake news…

“Yes, it is a very powerful disinformation tool: it can be used to create fake news and to phrase it in the right language to penetrate certain social groups. But the opposite is also true: the power of AI can become a tool to combat it. In the end, however, human intervention is always needed, because the real limitation of machines is their inability to understand context and thus interpret some messages correctly.”

What does this mean?

“AIs like ChatGPT have not been trained to say true things; they are only able to string together words that fit together. If you ask them to explain Socrates’ philosophy, they can only do it with a high schooler’s depth of understanding, for the time being.”

AI will become more and more present in our lives. What might the consequences be?

“Trust in these tools is built through use. Many people are already writing texts using AI. A student of mine proposed that I study fanfiction (i.e., works of amateur fiction written by fans taking stories or characters from an original work as their starting point, ed.) and try to grasp its evolution with the arrival of AIs, which are often used to write it. In general, I imagine that we will use tools like ChatGPT more and more, and in this way the language used by AI will become prescriptive. Little by little, our writing will conform to what is considered ‘right’ by its language models.”

Finishing our conversation and our two coffees, Riccardo Gallotti tells me, “I may have finally found a home, right at San Pio X.”

________________________________________

Article by Mattia Pelli published in il T quotidiano on Sunday, November 5, 2023.
