Where Has the Child Gone? The Cognitive Risks of Growing up in the LLM Age
- Monica Albini

- Nov 25, 2025
- 6 min read
Updated: Nov 26, 2025

Have you ever asked ChatGPT for guidance, searched for some information, or asked it to rewrite something? If the answer is yes, you’ve probably been analyzed and categorized — without you realizing it.
According to a study by OpenAI, Duke University, and Harvard University, 77% of ChatGPT conversations (quite probably including yours) can be classified into three main topics: practical guidance, seeking information, and writing.
This makes me wonder.
Over the past few years, younger generations, particularly Gen Z, have found themselves navigating an environment where economic instability, rapid technological change, and social pressure are making personal and professional growth increasingly challenging. Many struggle to develop the confidence and clarity needed to articulate their ideas and build a stable future.
And yet, the skills that could help them move forward in their personal or professional lives (seeking guidance, searching for reliable information, and learning to express themselves effectively) are the same three domains that dominate the vast majority of conversations with today’s most widely used LLM. Another study, from Wuhan University, highlights the concept of “cognitive fatigue,” which can arise from cognitive underload, that is, from insufficient mental stimulation, and it points to heavy AI reliance as a key driver.
But rather than being trivial or purely functional, these domains represent the core human abilities that support learning, self-development, and social connection. If approached consciously, they could offer young people a genuine path out of uncertainty.
What we ask AI the most, and why it matters
The domain of practical guidance is the sphere of relationships and learning through others. Today, the ability to ask for help has become a rare skill. Showing vulnerability or openly seeking support may be interpreted as a sign of weakness or as an inability to handle challenges independently. While this is not always the case, the risk of being judged or misunderstood can discourage people from reaching out for guidance when they truly need it.
At the same time, asking for help is a quality that nurtures empathy, trust, and mutual guidance, the very essence of relationships between parents and children, teachers and students, or mentors and mentees. When this educational and emotional dimension is shifted onto ChatGPT (or any other LLM), part of that exchange is delegated to the machine. The risk is that the human experience of dialogue and debate becomes reduced or absent, fostering forms of social withdrawal such as the “hikikomori” phenomenon known in Japan, where young people choose to isolate themselves and avoid social interaction.
The seeking information domain is the sphere of cognitive formation, the act of curiosity, exploration, and dialogue with the world. It can manifest through questioning, studying, and investigating. To better understand this idea, it’s helpful to use the image of the “Fanciullino” (the “little child”) described by the Italian poet Giovanni Pascoli at the end of the 19th century.
Pascoli’s theory of the “Fanciullino” suggests that within every person, adults included, lives a childlike spirit capable of seeing reality with wonder, amazement, and sensitivity, something adults tend to lose amid the daily hustle and bustle and habits of rational thinking. The risk is that now, with the rise of AI, that childlike wonder is lost much earlier. Young people increasingly gravitate toward what is quick, easy, and effortless (exactly what ChatGPT and the like can supply endlessly, or at least until you need to upgrade to Pro). According to the Wuhan study, users who frequently rely on AI to search for and synthesize information may gradually become less engaged in spotting discrepancies and critically evaluating content. They miss out on the educational value of discovery and the time to make mistakes (and learn from them).
Finally, the writing domain, the sphere of expression. Writing teaches us clarity, coherence, argumentation, and self-awareness. Since childhood, we’ve been taught to put our ideas and feelings into words, not only to express ourselves and be understood, but also to understand who we are. From this sprang the once-popular habit of keeping a secret diary. Today, having a diary might be considered old-fashioned and uncool by some (though hopefully not everyone!).
When we let the machine do the writing for us, we risk losing the practice of self-reflection, reasoning, and personal storytelling, the very skills that help us grow as individuals. Instead, the younger generations drift toward a more detached, isolated way of thinking and communicating. This phenomenon has even led to today’s youth (the so-called Generation Z) being described as the “Mowgli Generation,” a reference to the main character in The Jungle Book.
At first glance, the term might sound endearing, but in reality it refers to something problematic: children and teenagers who grow up with little human contact, struggle to integrate, and show significant delays in developing social and cognitive skills. Not because they must fend for themselves in the jungle like Mowgli did, but because technology has taken the place of human contact. Interestingly, in OpenAI’s report, the writing domain appears to be the most diverse in terms of user requests. From a sample of approximately 1.1 million conversations collected between May 15, 2024 and June 26, 2025, typical queries included “write fiction” (1.4%), “argument or summary generation” (3.6%), “translation” (4.5%), “personal writing or communication” (8.0%), and “edit or critique provided text” (10.6%).
In short, this report isn’t just a careful taxonomy of user-LLM interactions; it also identifies three fundamentally human-defining acts.
The educational challenge is not to replace the machine (the tide may be impossible to stem), but to integrate it. Examined closely, the machine reminds us that we humans are socially wired, curiosity-driven, and storytelling-fueled creatures.
Educational solutions for an AI-first world
So, what types of solutions can we envision to address trends no one really asked for? Here is a sketch:
Develop programs in relational AI literacy, teaching people how to seek advice from AI without letting it replace genuine human dialogue. This could include classroom activities designed to translate into human terms what the machine suggests. For instance, it’s easy to fall into the habit of asking ChatGPT anything and everything first. And that’s fine, as long as the goal remains to restore intentionality and sensitivity to the act of giving and receiving advice. One guiding question could be: “What is missing that makes this answer less human?”
It’s also essential to teach our students to become critical and active participants in the technological landscape. The goal is to help them maintain a healthy level of skepticism and awareness when engaging with AI tools (or any tool, for that matter).
To avoid losing cognitive exercise and the ability to compare different sources, one possible solution is to promote hands-on workshops where students compare ChatGPT’s answers with academic sources, expert opinions, and real-world observations. ChatGPT has become a primary source of information for millions of people, and through such shared, collaborative learning experiences students can keep their critical thinking and evaluative skills active, asking questions like “Why does this answer seem plausible or not?” This approach can also serve as a concrete example of using an LLM pedagogically to teach research skills: teachers can demonstrate how a question shapes an answer, helping students learn to formulate better, more meaningful queries.
Also, the methodological shift echoing through the halls of schools and universities around how student learning is assessed is nothing new. If written assignments can no longer be evaluated in the traditional way because they are often influenced by AI, one solution is to make students think critically in class about AI-generated texts. Prompts such as “Make this text more personal” or “What would you change in this paragraph?” can become real exercises in style and personalization. A good example comes from Tilburg University’s course “AI-Assisted Academic Writing,” which teaches students how to use AI constructively in academic writing: identifying reliable sources, revising and structuring their thoughts, and refining style with AI as a help rather than a replacement.
ChatGPT has evolved from a simple technical tool into a mass cognitive infrastructure, perfectly integrated into people’s daily thinking, for both professional and personal purposes.
The proportion of work-related conversations is shifting, also making room for more personal exchanges around three essentially human acts: asking for help, seeking information, and expressing oneself.
If this trend continues, especially among younger generations, it could gradually lead to reduced cognitive effort, weakened critical thinking, and diminished motivation for analytical thought. However, concrete examples of educational institutions tackling these risks already exist, and they show that a change of direction is not only necessary but also possible. With the right educational and ethical approaches, AI can become a true “gym for modern thinking,” helping all of us to train curiosity and awareness in an increasingly digital world.