
Slow, Fast, and Lightning-Fast Thinking: Between “Artificial Unconsciousness” and “AI Literacy”
Artificial intelligence (AI) is redefining how we think, make decisions, and interact with the world.
As it permeates our daily activities, filtering information, predicting outcomes, and offering solutions at unprecedented speed, AI becomes an extension of our abilities, potentially reshaping our cognitive processes. Within this context, the impact of AI on human skills emerges as a crucial concern and the concept of “AI literacy” becomes pivotal. More than just technical know-how, AI literacy entails developing the critical awareness, ethical understanding, and interpretative skills needed to ensure that AI remains a resource, rather than a substitute, for authentic human judgment. Knowing how to use and take advantage of the many AI applications at our disposal, while recognising AI’s limits and remaining aware that it cannot grasp cause-and-effect relationships, will be crucial to preserving our distinctively human role in decision-making.
In his book “Incoscienza Artificiale” (Artificial Unconsciousness) and in a recent article published in Nature Human Behaviour, Massimo Chiriatti explores this topic by introducing a new concept, “System 0”, which extends the cognitive model proposed by Daniel Kahneman in “Thinking, Fast and Slow”. Chiriatti suggests that AI can be seen as an additional level of human cognition, one that can influence our thinking skills and decision-making processes.
Daniel Kahneman, winner of the Nobel Prize in Economics, identified two main systems of thought that guide our decision-making processes:
- System 1 is fast, intuitive, and automatic, allowing us to respond rapidly to daily situations without significant mental effort—such as recognising a familiar face or reacting instinctively to danger.
- System 2 is slow, analytical, and deliberate. This is where critical, rational thought comes into play, requiring more energy and time. It is activated when we tackle complex problems or make strategic decisions.
AI introduces a third dimension, which Chiriatti refers to as “System 0”, acting as an intermediary between us and reality. Unlike the human systems, System 0 neither understands nor possesses awareness. Nevertheless, it can process data and generate predictions with a speed and accuracy humans cannot match. In doing so, System 0 becomes a filter we rely on when making decisions, basing them on the forecasts the AI has already produced for us. On the one hand, this represents an enhancement of our abilities: System 0 provides us with tools, computational capacity, and data-processing capabilities otherwise unattainable by human means. On the other, it poses a risk to critical thinking, as we may be tempted to offload our decisions onto the algorithm.
The impact of AI on human skills is therefore a crucial point of discussion. While AI can free up time and energy for higher-value activities and help develop new knowledge through innovative, personalised learning tools, excessive reliance on algorithms risks flattening our skill sets and eroding our habit of independent thinking. The term “Artificial Unconsciousness” stresses that AI is incapable of grasping cause-and-effect relationships or the significance of the data it processes, including the texts it produces. Understanding causal links, rather than mere numerical correlations, remains a distinctly human aptitude. Consequently, even as we integrate machine capabilities, it is vital to preserve human judgment in decision-making (the “human in the loop” principle) to prevent the systemic and unpredictable risks that arise when human responsibility is abdicated and technology is used uncritically.
The book concludes by proposing ways to confront AI-related challenges: training people both in STEM subjects and in disciplines complementary to AI, investing in research and in public data-collection infrastructure, and supporting a regulatory framework that promotes international collaboration. Since AI is a global issue, international cooperation must be strengthened by establishing joint research centres, increasing investment, and implementing policies to attract talent from different countries. In businesses, managers need to be sufficiently competent in technology to understand how to adopt AI in organisational processes, thereby promoting both productivity and new, high-quality employment opportunities. It is also important to prevent AI-based innovations from becoming exclusively proprietary, which would limit competition and the sharing of knowledge. In academia, greater integration of the computational sciences with the social sciences is essential, while in AI development the proliferation of “no-code” tools is promising, as they open these technologies to people without specialised programming skills.
AI is inevitable, but delegating our human responsibility is not. The impacts of AI, whether positive or negative, are tightly linked to political and corporate choices. Hence the governance of AI is essential, and addressing this challenge requires the ability to navigate a variety of disciplines. The AI Act is an important step in this direction, particularly Article 4, which entered into force on 2 February 2025 and introduces an obligation of “AI literacy”, requiring that everyone involved in the implementation or use of AI systems possess adequate skills. AI literacy includes the abilities, knowledge, and technical and ethical awareness needed for the informed and responsible use of AI. The concept extends beyond operating AI systems: it also involves recognising the risks and opportunities associated with these technologies, emphasising the responsibility shared by providers, operators, and users in ensuring a critical approach to AI systems, and reaffirming the central role of human expertise.
Failure to comply with Article 4 may entail legal consequences, especially for organisations deploying high-risk AI systems, although the exact enforcement mechanisms are still under discussion.
Nonetheless, it is important to view this requirement as more than a regulatory obligation. Investing in AI-related skills is a strategic opportunity for businesses: training people within companies and organisations in the use and potential risks of AI not only mitigates the risk of sanctions but also accelerates AI adoption and innovation. Applied with foresight, Article 4 can thus serve both as a tool to regulate AI and as a catalyst for its further development.
The EU AI Office is currently gathering information on the measures taken by organisations and businesses that, by joining the AI Pact, have in part anticipated this legislation, with the aim of creating a “living repository” of good practices in AI literacy. Yet the European Commission has not provided uniform standards, emphasising the need for flexibility and the impossibility of a one-size-fits-all approach.
While this framework allows firms to tailor initiatives to their specific needs, it also leaves many open questions: What exactly should AI training programmes include? Which skills are most needed? Will AI training be funded only privately, or also publicly? And how should its real impact be evaluated?
These issues pose intriguing challenges for businesses, research institutions, and public bodies, and highlight the importance of impact evaluation in confirming whether training initiatives truly empower individuals or merely check a regulatory box.
Moreover, although the AI Act focuses primarily on AI providers and deployers, the broader concept of AI literacy is relevant not just for the workforce but also for younger and older generations, including children, parents, and teachers. By investing in comprehensive, carefully evaluated training, we can ensure that technology remains an instrument of human thinking rather than a substitute for it, fostering awareness of AI’s benefits while avoiding the trap of “Artificial Unconsciousness”.