Conversations with Machines
Generative AI can now persuade better than a human being: Riccardo Gallotti at Wired Next Fest 2025 discusses the risks and opportunities of this new form of persuasion
At Wired Next Fest 2025, during the session “Conversations with Machines” held at Palazzo del Bene in Rovereto, Riccardo Gallotti, Science Ambassador and Head of the Computational Human Behaviour (CHuB) laboratory at Fondazione Bruno Kessler, spoke with Alfio Ferrara, Full Professor of Computer Science at the University of Milan and delegate for AI Literacy. The conversation, moderated by Philip Di Salvo, addressed an issue that is no longer just technological in nature but concerns the very quality of public debate: the growing ability of artificial intelligence to persuade humans.
Riccardo Gallotti described the work of his research team, which investigates how people behave and interact in complex contexts, studying phenomena such as social cooperation, trust, and disinformation. “At CHuB Lab, we work in an interdisciplinary way, side by side with psychologists, sociologists, economists, transport engineers, and AI experts,” he explained. “In recent years, our focus has shifted particularly toward disinformation and fake news, in a landscape that has been radically transformed by the emergence of generative AI. Today, anyone, without specific skills, can create realistic texts, images, and videos: a powerful tool that opens up extraordinary creative possibilities but also presents serious risks, from hate campaigns to political manipulation. Added to this are more sophisticated threats, such as microtargeting and online disinformation campaigns run by bots, which can amplify persuasive messages and steer public opinion in targeted and hard-to-control ways.”
To measure AI’s persuasive power concretely, Gallotti and his team, together with researchers from EPFL and Princeton, conducted an experiment published in Nature Human Behaviour. More than 900 participants took part in short academic-style debates in which a human debater and a GPT-4-based model faced off on political and social topics. The result was striking: in 64.4% of cases, the AI was more persuasive than its human opponent. Moreover, participants who knew they were engaging with a machine were more likely to change their opinions than those debating a human.
These findings mark a critical threshold: persuasion, traditionally considered a human social skill, is now within reach of algorithms accessible to anyone. “AI has made persuasion easier, faster, and more accessible,” Gallotti said. This is why the discussion cannot be limited to technical capabilities; it must also reflect on how these technologies will affect democracy, information, and interpersonal trust. A central issue is the quantification of language: machines do nothing but map their training data, what they have “consumed” in recent years and continue to assimilate. The language they produce is statistical and can shift depending on the information provided. “If the machine cannot be critical, we have to be,” Gallotti added.
At this point in the conversation, Prof. Alfio Ferrara emphasized the importance of AI literacy: the widespread ability to critically interpret machine-generated messages. He stressed that it is no longer sufficient to treat these technologies as technical knowledge reserved for experts; it is essential to make them understandable to all, so that citizens can grasp how machine language is generated and develop the critical tools to assess it. If algorithms learn from our behavior and adapt their arguments accordingly, it becomes crucial that everyone possesses the skills to recognize their limits, risks, and potential.
Alongside the risks, there are also tangible opportunities. With projects like AI4Trust, Gallotti and the CHuB Lab contribute to the development of a platform that integrates AI models with data analysis tools and collaborative fact-checking practices. The goal is to identify and track the spread of disinformation online in real time, providing journalists, policymakers, and civil society with reliable tools to counter its impact.
In conclusion, as Gallotti and Ferrara emphasized, the persuasive power of artificial intelligence is no longer a technological curiosity but a phenomenon with direct implications for the quality of information, the functioning of democracy, and the relationships of trust among people. For this reason, beyond risk analysis, research and institutions are working to develop tools and initiatives that use the same technologies to combat disinformation and support a more informed and inclusive public debate.
_________________________________________________________________________________________________
The WIRED Next Fest Trentino is organized by WIRED Italia in partnership with the Autonomous Province of Trento – Department of Economic Development, Work, Family, University and Research – Trentino Marketing, Trentino Sviluppo, Azienda per il Turismo Rovereto, Vallagarina e Monte Baldo, Municipality of Rovereto.
The Scientific Committee chaired by the Head of Content of WIRED Italia works on the construction of the program, with the participation of the University of Trento, Fondazione Bruno Kessler, Fondazione Edmund Mach, Fondazione Hub Innovazione Trentino, the Provincial Institute for Research and Educational Experimentation – IPRASE and MUSE – Museo delle Scienze.