For a Human-Centered AI

The challenge for digital content integrity in the era of deepfakes and GenAI

March 11, 2026

Why digital identity, encryption, and provenance standards are key tools for ensuring authenticity, traceability, and trust

Until a couple of years ago, it was often still possible to sense that an image or audio recording had been manipulated: a visual detail out of place, an imperfection in the voice. Today, with the evolution of Generative AI tools, the perceptual quality is increasingly convincing and detecting fakes is becoming more complex. This is also a key area of work for FBK’s Center for Cybersecurity, which studies how to guarantee the authenticity and traceability of digital content in a rapidly changing landscape.

Alongside positive applications, these tools also make it possible to create fake content, known as deepfakes. The consequences are evident and can affect both individuals and companies: increasingly sophisticated scams, cases of manipulated video calls that have led executives to transfer large sums of money, completely falsified remote business onboarding processes, and the spread of defamatory or pornographic material. As these technologies become more accessible and integrated into different applications, a parallel market of services designed to facilitate cybercrime activities is also emerging.

In this scenario, the security objective is not only to guarantee the confidentiality of data, but also its integrity, understood in a broader sense. When an image is altered from its original state, its technical integrity is violated; but there is also a more subtle level, that of the semantic integrity of the information conveyed by the content. This also depends on the type of transformations an image or video has undergone and the context in which they are used. For example, scaling or light filtering of an image can be considered low-impact modifications compared with the addition or removal of objects or the combination of multiple different visual sources. With these latter transformations, the semantics of the content can be manipulated and radically altered. For this reason, transparently verifying the origin and transformations of visual information—whether an image or a video—in order to assess its reliability is more important than ever. If we cannot establish with reasonable confidence whether and how it has been manipulated, it becomes difficult to maintain trust in digital content.
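
To make the distinction concrete, here is a minimal sketch (the file names are hypothetical) of why a cryptographic hash captures technical integrity but says nothing about semantic integrity: any change at all, benign rescaling included, produces a different digest.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: str) -> str:
    """Return the SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Any modification changes the digest, so a bare hash proves *technical*
# integrity only: it cannot tell a low-impact rescaling apart from the
# semantic manipulation of adding or removing objects.
original = sha256_digest("photo.jpg")            # hypothetical file
received = sha256_digest("photo_rescaled.jpg")   # hypothetical file
print("technically intact:", original == received)
```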

For years, research has been developing techniques to identify manipulated content by analyzing its statistical characteristics. Today, however, work is also progressing along a complementary line: not just recognizing fakes after the fact, but building a system of trust from the moment content is created. “The idea is to associate content with cryptographically verifiable metadata, uniquely linked to the image or video,” says Silvio Ranise, director of the Cybersecurity Center. “This metadata is signed using certified digital identities, making it possible to identify not only who created the image or video but also who modified its content and what operations were performed.”
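
As an illustration of the approach Ranise describes, the sketch below uses the Python cryptography library to sign a provenance record bound to the content through its hash. The key pair, identity label, and record fields are assumptions for the example, not the Center's actual scheme; in practice the public key would be tied to a certified digital identity via an X.509 certificate.

```python
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair standing in for a certified digital identity.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"...raw image data..."  # placeholder content

# Provenance record uniquely linked to the content through its hash.
record = {
    "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    "creator": "camera-01",            # hypothetical identity label
    "operation": "capture",
}
payload = json.dumps(record, sort_keys=True).encode()
signature = private_key.sign(payload)

# Anyone holding the public key (certificate) can verify who signed what;
# verify() raises InvalidSignature if the record or content hash is tampered.
public_key.verify(signature, payload)
print("metadata verified")
```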

The C2PA (Coalition for Content Provenance and Authenticity) specification also moves in this direction, aiming to create ecosystems in which content is ideally tracked and signed from its acquisition to its final distribution. The main challenge is adoption, so that these systems become an integral part of digital platforms.
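A real C2PA manifest is an embedded, certificate-signed structure rather than loose JSON, but its chained logic can be sketched in a few lines: each stage signs what it did and links back to the previous step, so a verifier can replay the history from acquisition to distribution. The step fields and actor names below are illustrative assumptions, not the C2PA wire format.

```python
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_step(key: Ed25519PrivateKey, prev: str, action: str, actor: str) -> dict:
    """Append one provenance step, cryptographically chained to the last."""
    step = {"prev": prev, "action": action, "actor": actor}
    payload = json.dumps(step, sort_keys=True).encode()
    return {"step": step, "sig": key.sign(payload).hex()}

camera = Ed25519PrivateKey.generate()   # stand-ins for certified identities
editor = Ed25519PrivateKey.generate()

content = hashlib.sha256(b"...image bytes...").hexdigest()
chain = [sign_step(camera, content, "capture", "camera-01")]
chain.append(sign_step(editor, chain[-1]["sig"], "resize", "editing-app"))
print(json.dumps(chain, indent=2))
```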

Cecilia Pasquini, a researcher at the FBK Center for Cybersecurity, is involved in two European working groups drafting a code of practice to operationalize the transparency obligations set out in Article 50 of the AI Act. “We are working to define appropriate technological approaches for marking and labeling AI-generated content. The goal is to identify effective and interoperable technical tools—from signed metadata to watermarking—that make real traceability of digital content possible,” explains Pasquini.
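
As a toy illustration of what "marking" can mean at the signal level, the sketch below hides a bit pattern in the least significant bits of an image array. Real watermarking schemes for AI-generated content must survive compression, cropping, and re-encoding, which this fragile example does not; it only shows the idea of an imperceptible, machine-readable mark.

```python
import numpy as np

def embed_mark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write a bit pattern into the least significant bits of the image."""
    out = pixels.copy().reshape(-1)
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits
    return out.reshape(pixels.shape)

def read_mark(pixels: np.ndarray, n: int) -> np.ndarray:
    """Recover the first n embedded bits."""
    return pixels.reshape(-1)[:n] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, 128, dtype=np.uint8)          # "AI-generated" tag

marked = embed_mark(image, mark)
assert np.array_equal(read_mark(marked, mark.size), mark)
print("mark recovered; pixel values changed by at most 1")
```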

The topic is part of an evolving regulatory framework. At the national level, initiatives are also increasing, including the recognition of deepfakes as a crime and a growing connection between AI compliance and cybersecurity. In Italy, responsibility for AI market surveillance and compliance with the AI Act has been entrusted to the National Cybersecurity Agency (ACN). This decision places ACN at the center of the national digital security and innovation strategy, with direct implications for deepfake management. Because the AI Act imposes transparency and labeling obligations for AI-generated content (such as deepfakes that imitate real people), ACN's role is not limited to cyber defense but extends to ensuring that AI tools meet strict ethical and security standards. This means ACN is called upon to promote the use of defensive AI for the detection and attribution of manipulated content, transforming the fight against deepfakes from a retrospective technological response into a systemic, regulatory approach that aims to establish the origin of, and trust in, digital content from the moment it is created. “It is also from this perspective of supporting institutional action that our activities—both in support of the C2PA specification and through participation in European working groups—should be understood,” concludes Center for Cybersecurity Director Silvio Ranise.

