How ChatGPT visualizes itself. Photo from Wikimedia Commons.
A new study explores how AI is integrated into our thinking, creating a personalised cognitive layer that shapes decision-making and impacts human agency.
By Giuseppe Riva, Catholic University of Milan, and Massimo Chiriatti, Catholic University of the Sacred Heart, Milan
November 13, 2024
It is becoming more common for people to check their phone’s weather app before deciding what to wear, use Google Maps to navigate and ask ChatGPT to draft an email. These everyday AI interactions are becoming so seamless that they now extend our cognitive capabilities beyond natural limits.
This phenomenon, identified by researchers from the Catholic University of the Sacred Heart, Milan, in a new multidisciplinary study published in Nature Human Behaviour, has been termed “System 0.”
Understanding “System 0”
Understanding how these AI systems work is essential: they extend human cognitive capabilities yet lack moral agency. This imbalance becomes problematic with personalization, which can confine users within “filter bubbles” that limit critical thinking and may erode independent judgement. The opacity of AI algorithms further complicates matters.
A team of researchers with expertise in AI, human thinking, neuroscience, human interactions and philosophy describe “System 0” as an autonomous AI layer increasingly embedded in our thinking and decision-making.
To understand System 0, it’s essential to consider how minds work. Psychologists like Daniel Kahneman describe human thought as having two systems: System 1 (fast, intuitive) and System 2 (slow, analytical).
System 1 handles routine tasks, like recognizing faces or driving a familiar route, while System 2 tackles complex problems, like solving a maths equation or planning a trip.
How “System 0” adapts to us
System 0, however, introduces a new layer that feeds data to both systems and adapts to personal habits.
When choosing a restaurant, for instance, System 1 may respond to photos on Yelp while System 2 assesses reviews and prices – but both interact with AI-tailored recommendations based on dining preferences, budget constraints, and past choices.
The AI doesn’t just present generic recommendations; it creates a personalised information landscape based on your history of interactions.
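To make this concrete, here is a minimal sketch of how a “System 0”-style layer might turn a generic list of restaurants into a personalised one. It is purely illustrative, not code from the study; every name, class, and weighting below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Restaurant:
    name: str
    cuisine: str
    price: int  # 1 (cheap) to 4 (expensive)

@dataclass
class UserHistory:
    # Hypothetical record of past interactions: cuisine -> times chosen
    cuisine_counts: dict = field(default_factory=dict)
    max_price: int = 2  # inferred budget constraint

def system0_rank(options: list[Restaurant], history: UserHistory) -> list[Restaurant]:
    """Re-rank generic options into a personalised landscape:
    favour cuisines the user chose before, filter out over-budget places."""
    affordable = [r for r in options if r.price <= history.max_price]
    return sorted(
        affordable,
        key=lambda r: history.cuisine_counts.get(r.cuisine, 0),
        reverse=True,
    )

history = UserHistory(cuisine_counts={"ramen": 7, "pizza": 2}, max_price=2)
options = [
    Restaurant("Sushi Go", "sushi", 3),
    Restaurant("Slice House", "pizza", 2),
    Restaurant("Noodle Bar", "ramen", 1),
]
# Both System 1 (photos) and System 2 (reviews) now see this filtered,
# personalised list rather than the full set of options.
print([r.name for r in system0_rank(options, history)])
# -> ['Noodle Bar', 'Slice House']  (the sushi place never reaches the user)
```

Note what the toy example captures: the user never sees the option that was filtered out, which is exactly how a personalised landscape can quietly narrow the space of choices.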
Similarly, when navigating with Google Maps, users’ instincts and planning rely on AI-processed data that incorporates their travel habits. System 0 not only processes real-time data but also leverages users’ historical preferences to deliver personalised, predictive guidance. Integrating AI into cognitive processes in this way, however, raises well-documented challenges.
System 0 maintains a persistent memory of choices, behaviours, and preferences across multiple domains, creating what researchers call a “cognitive shadow” – an AI-driven layer that not only tracks current actions but also retains past choices.
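A toy data structure can illustrate what such a “cognitive shadow” amounts to: a persistent, cross-domain memory of choices that is consulted before the next decision. The class below is our own hypothetical sketch; real systems use far richer behavioural models.

```python
import json
from pathlib import Path

class CognitiveShadow:
    """Toy illustration of a persistent, cross-domain preference memory.
    Hypothetical; not an implementation from the study."""

    def __init__(self, store: Path = Path("shadow.json")):
        self.store = store
        # The memory survives across sessions: past choices are reloaded
        self.memory = json.loads(store.read_text()) if store.exists() else {}

    def record(self, domain: str, choice: str) -> None:
        # Retain every choice, keyed by domain (dining, travel, media, ...)
        counts = self.memory.setdefault(domain, {})
        counts[choice] = counts.get(choice, 0) + 1
        self.store.write_text(json.dumps(self.memory))

    def predict(self, domain: str) -> str | None:
        # Guide the next decision from accumulated past behaviour
        counts = self.memory.get(domain)
        return max(counts, key=counts.get) if counts else None

shadow = CognitiveShadow()
shadow.record("navigation", "avoid_motorways")
shadow.record("dining", "ramen")
print(shadow.predict("navigation"))  # -> 'avoid_motorways'
```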
The ethics of AI in decision-making
Yet the implications of this memory-enhanced cognitive layer are profound. System 0’s memory-based design means it doesn’t just assist with current data; it is informed by an AI memory of past behaviour. This poses critical questions about autonomy and privacy, as decisions are increasingly influenced by digital insights into personal patterns. Moreover, System 0 functions quite differently from human cognition.
As Pearl and Mackenzie discuss in The Book of Why, AI can recognize patterns in data but struggles with causality – a fundamental part of human reasoning. This limitation means that while AI can identify patterns, it may miss the underlying causal relationships that humans naturally grasp.
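This gap between correlation and causation is easy to demonstrate numerically. In the hypothetical simulation below, a confounder (hot weather) drives both ice-cream sales and sunburn cases: a pattern-learner finds a strong correlation between the two, yet intervening on sales, in the spirit of Pearl’s do-operator, leaves sunburn untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder: hot weather (z) drives both ice-cream sales (x)
# and sunburn cases (y); x does not cause y.
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)   # observed ice-cream sales
y = z + rng.normal(scale=0.5, size=n)   # observed sunburn cases

# A pattern-learner sees a strong association between x and y...
print(f"observed corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")    # ~0.80

# ...but under an intervention do(x) -- setting sales by decree,
# independently of the weather -- the association vanishes:
x_do = rng.normal(scale=0.5, size=n)    # x no longer inherits z
print(f"corr(do(x), y)      = {np.corrcoef(x_do, y)[0, 1]:.2f}")  # ~0.00
```

A system trained only on the observed data would happily predict sunburn from ice-cream sales; a human reasoner, grasping the underlying cause, would not expect banning ice cream to prevent sunburn.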
This difference raises concerns about AI’s role in our decision-making. As the researchers note, AI lacks true semantic understanding, even as it produces responses that resemble human thought.
The “black box” problem
The increasing integration of System 0 into human cognitive processes raises fundamental concerns about human autonomy and trust that researchers have extensively studied. Taddeo and Floridi (2011) introduce the concept of “e-trust” – a unique trust dynamic where AI wields influence without possessing moral agency.
This asymmetry becomes problematic with AI personalization, which narrows our information exposure, potentially confining users within “filter bubbles” that limit critical thinking.
Users often accept AI suggestions even when they conflict with personal preferences, a tendency termed “algorithmic homogenization,” which may erode independent judgement.
The opacity of AI algorithms compounds this issue. With complex AI processes often hidden from users, understanding AI conclusions becomes difficult, affecting our capacity for informed judgement.
This “black box” problem raises serious concerns about accountability and human agency in AI-assisted decision-making: as System 0 integrates more deeply into our decision-making processes, the distribution of responsibility becomes increasingly unclear. When decisions are made through human-AI collaboration, it creates a “responsibility gap” where neither humans nor AI systems can be held fully accountable for outcomes.
Perhaps most importantly, the integration of System 0 touches our fundamental nature as thinking beings. Although System 0 augments cognitive abilities, it risks reducing human agency.
Balancing AI’s potential with human autonomy
As people rely on AI for decision support, they may lose vital opportunities to hone cognitive skills.
This could lead to a problematic form of cognitive offloading, where dependency on external systems undermines intellectual growth. For these reasons, designing AI that supports autonomy without overstepping is crucial for maintaining this powerful human-AI partnership.
As AI evolves, understanding and shaping our relationship with System 0 will become increasingly crucial. The challenge lies in harnessing its potential while preserving human qualities – the ability to create meaning, exercise judgement, and maintain our intellectual independence.
Massimo Chiriatti is Chief Technical & Innovation Officer at Lenovo. His focus is on AI solutions, aiming to contribute to a world where artificial intelligence assists humanity in developing a better version of itself. He is recognized for his expertise in AI strategy and has taught at prestigious universities such as the Catholic University of the Sacred Heart and Luiss University.
Giuseppe Riva PhD is Director of the Humane Technology Lab at the Catholic University of Milan, Italy, where he is Full Professor of General and Cognitive Psychology. The Humane Technology Lab (HTLAB) was set up by the Università Cattolica to investigate the relationship between human experience and technology, considering the psycho-social, pedagogical, economic, legal, and philosophical aspects of the growing spread of digital technologies, especially artificial intelligence and robotics.
Originally published under Creative Commons by 360info™.