Introduction: Why is the sky blue?
A 6-year-old boy enters the kitchen where his mother is working. He walks past her to the Google Home device on the kitchen table and asks, ‘Okay Google, why is the sky blue?’ The device provides a detailed and scientific answer. The boy leaves.
Six-year-olds tend to ask many questions, and this one has been told not to disturb his mother while she is working, so he turns to Google Home instead. The mother remains undisturbed, yet her feelings are suddenly ambivalent – she is proud of her son for finding an effective way to get an answer, relieved that her work was not interrupted, but, more than anything, she is worried. Answering a child’s questions is not just about providing information; it is an act of connection, guidance, and shared discovery. Now, one of her parental functions – helping her child make sense of the world – has been delegated to a machine. This mother is the lead author of this paper, in which we examine the possible consequences of the emerging artificial intelligence (AI)-based human–machine interaction, the shifting habits of knowledge acquisition and retention, and what they may lead to in the future.
For most of human history, children grew up surrounded by an ‘attachment village’ (Neufeld and Maté 2013) of relatives and neighbours to talk to and learn from. In modern nuclear families, youngsters are surrounded by screens that provide information and, more recently, by smart devices to interact with. This shift continues into adulthood, which is increasingly characterised by loneliness due to a reliance on digital interactions at the expense of human connections (Turkle 2015). Now, with the rapid advancement of AI-driven technologies, particularly generative AI (genAI), people are likely to engage with devices such as Alexa, Google Home, or voice-enabled ChatGPT on their smartphones, asking questions, seeking advice, and ultimately forming a kind of relationship with a machine, potentially becoming dependent on it. Understanding how this will affect the way they learn, remember, and create is of great importance to this paper.
Problem statement and research questions
These issues are already being explored by scholars across various disciplines, particularly in education. GenAI can enhance learning experiences, help teachers provide individualised feedback, facilitate innovative assessment methods, and offer personalised learning support tailored to individual needs, improving overall learning outcomes (Long et al. 2011; Graesser et al. 2018; Zawacki-Richter et al. 2019; Kasneci et al. 2023; Mollick and Mollick 2023). However, despite strong arguments for accuracy in content creation (Leiker et al. 2023), its reliability as a learning tool is undermined by various instabilities linked to the probabilistic nature of large language models (LLMs), algorithmic bias, and hallucinations (Ji et al. 2023; Tacheva and Ramasubramanian 2023; Yan et al. 2024). Moreover, the immediacy and conversational character of genAI-driven chatbots encourage a sense of trust and dependence (Shanmugasundaram and Tamilarasu 2023), which is particularly concerning in the context of memory processes, as misattribution and exposure to manipulated material can shape recall in unpredictable ways (Clinch et al. 2021).
Research indicates that modern digital tools serve as external memory aids, facilitating cognitive offloading and potentially freeing mental resources for other tasks. However, their impact on human memory is nuanced. These concerns are insightfully discussed in Schacter’s (2022) article, which provides a useful overview of empirical research on the psychological effects of digital media on memory. On the one hand, studies on memory transience – where reliance on technology for specific tasks might impair recall – have found no evidence of broader memory deficits (Marsh and Rajaram 2019; Hesselmann 2020). Certain mnemonic benefits have also been observed: for example, using AI-generated summaries can reinforce memory (Jones et al. 2021). On the other hand, specific impairments have been documented: increased usage of GPS navigation technology has been linked to a decline in hippocampal-dependent spatial memory (Dahmani and Bohbot 2020), and the act of taking digital photographs has been found to decrease recall accuracy for image details (Henkel 2014).
There is a tension, then, between findings suggesting no widespread cognitive decline from general digital tool use and evidence indicating specific memory deficits that often correlate with the nature and intensity of use, particularly overreliance on these technologies. Studies generally align in suggesting that overreliance on various digital tools, even for simple tasks, can result in diminished memory recall performance (Firth et al. 2019). This has led to concepts like digital dementia, which describes the decline in cognitive abilities caused by excessive use of digital technology (Spitzer 2012). Another concept, known as biological pointers, refers to the phenomenon where, due to cognitive offloading, individuals remember where to find information rather than internalising the information itself (Skulmowski 2023). These phenomena are a growing concern, particularly among younger generations, as they can lead to attention deficits and impair the acquisition of memories and learning (Manwell et al. 2022).
Understanding the effects of prior digital technologies on memory is crucial as we consider the potential impacts of genAI. The interactive, generative, and increasingly integrated nature of these tools may amplify the existing concerns about cognitive offloading and recall impairment or introduce novel challenges to human memory processes, setting the stage for the subsequent discussion in this paper. Some concerns have been highlighted in existing research regarding genAI’s influence on cognitive processes, particularly in problem-solving and creative tasks (Dergaa et al. 2024), as well as its potential to diminish the human capacity for critical thinking (Rudin 2019; Lee et al. 2025), leading to intellectual dependence, addiction, overreliance on AI-driven tools, complacency, and metacognitive laziness (Fan et al. 2024; Harbarth et al. 2024; Retkowsky et al. 2024; Zhai et al. 2024; Yankouskaya et al. 2025).
GenAI’s potential to shape human cognition is undeniable; the long-term consequences of these effects are uncertain. Examining how these technologies might affect humans requires either careful observation and analysis of societal shifts – a process that will take time – or, as we opt for in this paper, theoretically charged explorations of how these interactions may evolve in the future. This paper is a joint effort by researchers from the fields of media, education, and cognitive psychology in search of an understanding of how these tools may affect human processes of remembering and forgetting. It also attempts to evaluate the individual, cultural, and social implications of collaborating with genAI for human creativity. To guide this inquiry, we adopt a multi-layered approach to memory: drawing on cognitive psychology to examine individual processes such as offloading and recall, and on media and memory studies to explore new ways to remember and to create. Hence, we raise two research questions: (1) How might genAI tools affect human memory, remembering, and forgetting? And (2) What are the individual, cultural, and social implications of co-creating with genAI for human creativity? To answer these questions, we present two speculative scenarios illustrating the possible influence of genAI on memory and creative skills.
The first scenario, prompting to remember, examines the potential effects of AI-driven tools as a form of extended mind (Clark and Chalmers 2010). With futuristic personal memory assistants integrated into wearable devices, individuals will benefit from cognitively offloading information and delegating their memory work to machines (Kaufman 2011; Storm and Soares 2021). But, while reliable, unlimited, and permanent, this constant outsourcing may foster immediate forgetting and diminish memory retention, cognitive flexibility, and adaptation. Eventually, it could alter the way societies remember and interact, as collective memory relies on the ability for shared recollection.
The second scenario, prompting to create, explores changes in creativity when individuals use prompting to generate content, thus performing together with AI-driven tools as co-creators. This collaboration may transform the human creative process into a human–machine interaction, where humans, or Homo Promptus, guide machine output and simultaneously navigate within its rules and limitations.
This paper proceeds as follows: Section two lays down an interdisciplinary theoretical framework crisscrossing media, memory, and education, and proposes the comparative framework ‘Prompting as play’, based on Huizinga’s Homo Ludens. Section three outlines potential scenarios and discusses them through the lens of media, memory, and education studies. Section four concludes the analysis, situating it within a broader societal, regulatory, and scientific context.
Rethinking human agency for memory and creativity
This section examines AI tools as ‘extensions of [wo]man’, using Cognitive Offloading and Dual Process Theory to explain human information processing. It then explores the potential of genAI for intellectual oppression versus intellectual emancipation. Next, it discusses how an AI memory ecology is shaping a third way of memory, leading to the emergence of artificial collective memory, which influences human engagement with the world. Finally, it introduces the concept of Homo Promptus as a potential successor to Homo Ludens, redefining the role of play in culture and society.
McLuhan’s interpretation of media as ‘extensions of [wo]man’ reads as a direct prediction of what AI is capable of today:
Rapidly, we approach the final phase of the extension of [wo]man – the technological simulation of consciousness, when the creative process of knowing will be collectively and corporately extended to the whole of human society, much as we have already extended our senses and our nerves by the various media. Whether the extension of consciousness, so long sought by advertisers for specific products, will be “a good thing” is a question that admits of a wide solution. (McLuhan 1964, 19).
For decades, people have existed within an ‘agentic hybridity’ (Merrill 2023), delegating some memory-related tasks to ‘media prostheses’ (Landsberg 2004; Reading 2009; Pogačar 2017), various electronic devices which store data: contact details, important dates, and visual life memories (Henriksen 2024). Now, the reliance on comprehensive external machine-based storage, organisation, and synthesis introduces different dynamics: unlike previous tools, AI-driven ones are highly adaptive, evolving their responses based on user input; no earlier technology could dynamically adjust to each media user. Not only do these technologies enable memory offloading and data processing, but they can also facilitate instant information retrieval. As such, they gradually reshape how we remember and interact with memories, replacing traditional recollection with prompting. As memory becomes increasingly externalised, the act of remembering shifts from an organic, personal process to a machine-mediated one. GenAI tools have the potential to function as more complete external memory repositories or even generators of narratives about one’s past experiences, offering information that might not have been personally encoded. This extensive outsourcing of memory keeping, where the machine can hold the primary or even sole record, marks a significant shift towards a more deeply machine-mediated mnemonic process, differing in kind from earlier technological aids or social recall. The difference lies in the complexity of the cognitive processes the user offloads: previous technologies, from textual archives to high-resolution videos, primarily offload storage, precise retrieval, and basic computation, thereby extending working memory.
In contrast, new generative tools offload synthesis, complex reasoning, and problem-solving, which target higher-level cognitive functions and significantly influence learning.
As AI becomes part of what Clark and Chalmers (2010) refer to as ‘the extended mind’, potentially blurring the line between the tool and the user, it may empower humanity with extended abilities of thought and creativity. Alternatively, it could foster dependency on AI-driven systems, turning them into a necessity, prosthetics for the mind and memory. Cognitive Offloading and Dual Process Theory are central to understanding AI’s impact on (meta)cognition.
On human information processing
Cognitive offloading refers to delegating cognitive tasks (memory, calculations, navigation) to external tools or actions (Liu et al. 2018). Offloading information to digital devices can be linked to benefits as well as detriments for cognitive processes. Detriments refer to weakening memory retrieval as a result of overreliance on the tool, reducing neural connectivity and synchronisation of brain regions (Sparrow et al. 2011; Risko and Gilbert 2016; Grinschgl et al. 2021), or fostering metacognitive distortions, such as overestimating one’s knowledge and inflated self-assessment (Hamilton and Yao 2018). Benefits refer to mitigating proactive interference and enhancing focus on other tasks (Sparrow et al. 2011; Storm and Stone 2015). This process can allow the brain to allocate resources toward other high-memory-load tasks, including critical thinking, problem-solving, and creative endeavours (Morrison and Richmond 2020). Overall, the influence of digital tools on intellectual growth versus superficial knowledge acquisition remains an open question (Firth et al. 2019). Notably, cognitive offloading’s role in fostering associative thinking, divergent thinking, and cognitive flexibility is underexplored. This raises questions about whether cognitive offloading truly enhances higher-order thinking or simply reshapes cognitive strategies in a way that prioritises accessibility over deep comprehension.
Dual Process Theory distinguishes between two modes of human information processing: System 1, which is fast, automatic, and intuitive, and System 2, which is slower, more deliberate, and analytical. These processes also relate to how information is processed, encoded, and retained in human memory (Evans 2008; Kahneman 2011). For information to be effectively retained and utilised for problem-solving, it must be deeply processed, actively engaged with, and structured in a way that enhances retrieval and application (System 2). This means that information is better remembered when individuals generate material themselves rather than passively consume it – a phenomenon known as the Generation Effect (Slamecka and Graf 1978; Bertsch et al. 2007), and when individuals make connections with the real-world context – known as Context-Dependent Memory (Smith and Vela 2001; Hupbach et al. 2008). By offloading cognitive tasks and generating ideas, genAI tools can enhance System 1’s intuitive creativity while freeing System 2 for higher-order thinking and problem-solving. However, while genAI-assisted processes may boost efficiency, they do not necessarily encourage deep, reflective thinking. This lack of cognitive engagement can weaken memory consolidation and reduce the ability to form new associations, both of which are crucial for creativity, particularly in generating novel ideas through divergent thinking, brainstorming, and exploring diverse perspectives. Recent research highlights how brain connectivity patterns support real-life creative behaviour through the structure of semantic memory (Ovando-Tellez et al. 2022), suggesting that a well-developed internal memory network fosters cognitive flexibility and innovation.
If AI overuse leads to a reliance on external systems for knowledge retrieval, it may weaken this internal memory structure, ultimately reducing cognitive flexibility – the brain’s ability to switch between different types of thinking and adapt to new or complex situations. In other words, AI tools may disrupt the brain’s organisation of learned knowledge into efficient structures, such as schemata and neural representations, thereby hindering the internalisation of robust knowledge (Oakley et al. 2025).
While offloading factual memory to digital sources may enable cognitive resources to be redirected toward more complex creative activities, the long-term impact of these practices remains uncertain, requiring further research as AI and digital memory tools become integral to daily life (Firth et al. 2019). These cognitive processes, balancing unconscious automation and conscious effort, not only shape memory and problem-solving, but present an added challenge for human intelligence: how ‘not to lose oneself’ when navigating this new hybrid memory environment.
Forsaking one’s intellect
Education and knowledge transfer have long been built around the need for someone to ‘explicate’ to another. Explications assume that a ‘teacher’ possesses knowledge that ‘learners’ lack. GenAI threatens to entrench this asymmetry, positioning itself as an ever-more intelligent teacher facing ‘ignorant’ learners. Rancière (1991) objects to the distinction this presupposes between two types of intelligence:
[…] there is an inferior intelligence and a superior one. The former registers perceptions by chance, retains them, interprets and repeats them empirically, within the closed circle of habit and need. This is the intelligence of the young child and the common [wo]man. The superior intelligence knows things by reason, proceeds by method, from the simple to the complex, from the part to the whole. It is this intelligence that allows the master to transmit his[her] knowledge by adapting it to the intellectual capacities of the student and allows him[her] to verify that the student has satisfactorily understood what [s]he learned. Such is the principle of explication. (text adjusted for inclusion; Rancière 1991, 7).
With Rancière (1991, 2007), we dare assume everyone is equally intelligent and that intelligence is universal. Equality here does not mean that all manifestations of intelligence are equal in value; rather, it means that intelligence is manifest, or present, in all of us (Rancière 2007). It is the same intelligence that children use to learn their mother tongue that they mobilise to learn throughout their life course. They do that ‘by observing and retaining, repeating and verifying, by relating what they were trying to know to what they already knew, by doing and reflecting about what they had done’ (Rancière 1991, 10), processes that are crucial for deep learning (Oakley et al. 2025). Intelligence is the same: it translates signs into other signs and compares what is learned to what has been learned before. Therefore, acknowledging the equality of intelligence between teachers and learners is intellectual emancipation in itself, since ‘what stultifies common people is not the lack of instruction, but their belief in the inferiority of their intelligence’ (Rancière 1991, 39). Since learning may no longer be bound to the presence of teachers as masters of knowledge, genAI and other AI tools have the potential to contribute to learners’ intellectual emancipation, although that is not always the case.
In times when hybrid human–artificial intelligence is theorised as the result of collaborations between humans and machines, several intellectual risks may arise, notably the potential to give up one’s capacity to think and act independently. Hybrid intelligence could further disrupt the ‘master–student’ dialectics, often premised on knowledge deposition by teachers and unnecessary explications, commonly described as ‘banking education’. Freeing learning from teachers’ ‘necessary’ knowledge as a premise for their mastery is one challenge; avoiding a new dependence on AI-generated knowledge is another. While the first has been addressed from the viewpoint of intellectual emancipation (Rancière 1991), there is an excellent opportunity to extend this emancipatory logic to user–genAI interactions.
While intellectual dependence on digital technologies is not new, AI technologies are far more complex and capable of ‘reasoning’ that was previously impossible to achieve. There are elements of ‘authority’, ‘individuality’, and ‘persuasion’ that make interacting with genAI tools risky for the intellect. They can provide answers fast enough for users to develop a dependence on them. What makes matters worse is that genAI tools are simultaneously confident-sounding, enshrined in the discursive authority of ‘intelligence’, and prone to hallucinations delivered via ‘anthropomorphic features masking a lack of embodied coherence’ (Larson et al. 2024, 375).
One way to theorise a possible role for user–genAI interactions is to guide such interactions into ones that foster questioning instead of answers. When Larson and colleagues argue for Socratic questioning delivered via prompting, we are reminded of Rancière’s proviso: ‘There is a Socrates sleeping in every explicator’ (1991, 29). If Socratic questioning rests on a dialogue designed to lead interlocutors to one pre-conceived truth, namely, that they are wrong (see Sokoloff 2020), we instead recommend questioning based on ‘ignorance’. For one, by ignorance we mean that the correctness of the answer genAI produces is a non-issue, since its output is a question rather than an answer. For another, it is an opportunity for genAI to mimic the role of the ‘ignorant teacher’ that Rancière theorises, so long as the human–AI relationship is mediated by an object of study (a text, a song, a body of knowledge) that can be fed to genAI tools.
To guarantee a sufficient level of cognitive and metacognitive effort while engaging in a hybrid intelligence, genAI tools should be prompted for questions rather than for factual knowledge. If AI were to function as an ‘ignorant teacher’, compelling learners to seek answers in the thing being studied, pose questions, and take charge of their intellect rather than passively receive explanations or guided conclusions, it could shift from being a crutch to a catalyst. Memory is an essential component of human intelligence (Rancière 1991), even if, for a long time, memorising has been perceived as a lower cognitive function (Krathwohl 2002; cf. Oakley et al. 2025). Instead of fostering intellectual dependence, an ignorant genAI could activate human intelligence, mobilising the will to think, explore, and learn the unknown in an unknown way.
Collective memory goes artificial
Replacing the transformative ‘memory ecologies’, where memories became highly mediatised (Brown and Hoskins 2010) and where new existential insecurities and vulnerabilities have arisen (Lagerkvist 2013; Lagerkvist 2015), a new AI memory ecology is emerging. In this new landscape, ‘Generative AI, and related technologies and services both enable and endanger human agency in the making and the remixing of individual and collective memory’ (Hoskins 2024, 2). Talking about the effects of genAI, Hoskins introduces the concept of the third way of memory – a hybrid form of memory that never existed, a collaborative output of humans and machines. This AI-mediated memory offers new possibilities for imagination and alternative ways of remembering, yet remains glitchy, uncanny, and potentially disruptive. GenAI fundamentally alters both what memory is and what it does, ‘at the same time offering new modes of expression, conversation, creativity, and ways of overcoming forgetting’ (Hoskins 2024, 1). In this context, it is reasonable to believe that this transformation begins even before memory itself forms – before an individual has the chance to remember or forget. As such, genAI not only provides ways of overcoming forgetting but also of overcoming remembering. If machines, rather than humans, take on the function of remembering and providing memories on demand, what are the implications for collective memory?
Halbwachs (1992) is often cited as the first to claim that memory is collective, rather than individual (see Schwartz 2001; Erll 2008). At the beginning of the twentieth century, Halbwachs argued that memory is always moulded by the frameworks of social groups, nations, or families. According to him, collective information-gathering allows a group to acquire and retain more knowledge than individuals could alone, integrating that knowledge into group identity. Today, genAI technologies mark a transition from collective memory to artificial collective memory, in which a portion of humanity’s collective memory is encoded in the large datasets used to train AI models, a kind of ‘digital twin’ of collective memory (Kollias 2024). Unlike humans, AI structures information as a sequence of data points aligned to ‘the line of best fit’ (Mackenzie 2015), and this alignment is often shaped by commercial priorities rather than ethical concerns or public interest, and oriented towards maximising user engagement with the content that genAI retrieves or generates (Richardson-Walden and Makhortykh 2024). Furthermore, genAI’s focus on probabilistic modelling means that average data points are often prioritised. These transformations raise concerns about the future of collective memory and human engagement with the world, linking to Johan Huizinga’s view of play as a defining element of civilisation.
Prompting as play: a comparison framework
In Homo Ludens, Huizinga explored the fundamental role of play in human culture and society. He argued that play is not merely a form of entertainment but a crucial force shaping language, law, art, and social structures, as civilisation ‘arises in and as play, and never leaves it’ (Huizinga 1955, 173). At the same time, he observed the decline of Homo Ludens, as ‘We moderns have lost the sense for ritual and sacred play’ (Huizinga 1955, 158). This sentiment remains relevant today, in a world where efficiency, productivity, and technological mediation tend to dominate human activity, while play is increasingly reduced to entertainment or gamification. Digital technologies, and particularly genAI, are characterised by multimodality, virtuality, interactivity, and connectivity, and are often designed to mediate game-like activities. At the same time, genAI tools encourage users to generate content quickly and efficiently, often reducing creativity to a series of inputs and outputs. This may democratise creative work by lowering barriers to entry, but it may also lead to standardisation, as individuals rely on pre-trained datasets that reflect existing biases and conventions. Creativity then risks becoming a process of recombination rather than genuine innovation, in which human authors curate and refine machine-generated artefacts rather than generating new ideas from scratch.
Huizinga’s concept of play as an essential cultural driver resonates with genAI prompting, which enables alternative forms of creativity. Like child play, prompting fosters exploration within defined rules, recombining elements in novel ways. However, while ‘poetry was born in play and nourished on play; music and dancing were pure play’ (Huizinga 1955, 173), the act of AI-generated creation is more about strategy and efficiency than spontaneous play.
In Homo Ludens, Huizinga identified several core attributes of play, describing it as:
- Voluntary: Play is freely chosen and not imposed; it is ‘an expression of human freedom’ (Huizinga 1955, 7–8);
- Bound by rules: Play follows a specific set of rules that define its structure and cannot be changed: ‘The rules of a game are absolutely binding and allow no doubt’ (Huizinga 1955, 11);
- Goal-oriented: Play is purposeful, but its goals are intrinsic, focusing on the act itself rather than external rewards; it is ‘outside the sphere of necessity or material utility’ (Huizinga 1955, 132);
- Having a visible order: Play ‘creates order, is order. Into an imperfect world and into the confusion of life it brings a temporary, a limited perfection’ (Huizinga 1955, 10);
- Confined within limits of time and space: ‘A closed space is marked out for it, either materially or ideally, hedged off from the everyday surroundings. Inside this space the play proceeds, inside it the rules obtain’ (Huizinga 1955, 27);
- Imaginative: Play creates a space separate from everyday reality, fostering creativity and experimentation, taking place ‘outside and above the necessities and seriousness of everyday life’ (Huizinga 1955, 26);
- Presenting knowledge as magical power: Play often involves an element of mystery, where ‘The answer to an enigmatic question is not found by reflection or logical reasoning. It comes quite literally as a sudden solution’ (Huizinga 1955, 110);
- Bringing happiness: the ‘play-mood is one of rapture and enthusiasm, and is sacred or festive in accordance with the occasion. A feeling of exaltation and tension accompanies the action, mirth and relaxation follow’ (Huizinga 1955, 132).
GenAI prompting shares several of these attributes. Like play, it is voluntary, rule-bound, goal-oriented, structured, and imaginative. Prompting also appears magical when users receive unexpected results. However, crucial differences remain. Prompting is often extrinsically motivated, aimed at producing functional outcomes rather than engaging in play for its own sake. Unlike the fixed rules of play, genAI interaction evolves with every system update, altering the parameters of engagement. Furthermore, while traditional play is confined within specific temporal and spatial boundaries, genAI prompting, facilitated by wearable devices, is increasingly untethered, making it an ever-present activity.
Most significantly, Huizinga’s concept of play is defined by intrinsic joy, festivity, and spontaneity, whereas prompting prioritises efficiency. Child play thrives on chaos and surprise, but AI-generated outputs are shaped by algorithmic predictability. The imaginative element central to Huizinga’s play risks being overshadowed by machine logic, stifling inspiration and spontaneity.
In the dark-toned final chapter of Homo Ludens, Huizinga argued that ‘civilization today is no longer played’ (Huizinga Reference Huizinga1955, 206), attributing this decline to external factors such as the global commercialisation of culture and ‘worship of technological progress’ (Huizinga Reference Huizinga1955, 192). He observed that 20th-century art, driven by utility rather than aesthetics, had lost its playfulness: ‘The man who is commissioned to make something is faced with a serious and responsible task: any idea of play is out of place’ (Huizinga Reference Huizinga1955, 166). Modern digital technologies reintroduced play culture to some degree, fostering what has been termed Homo Ludens 2.0 in the transition from Web 1.0 to Web 2.0, when gamification became a key driver of digital interaction (Frissen et al. Reference Frissen, Lammes, de Lange, de Mul, Raessens and Frissen2015; Bozkurt and Durak Reference Bozkurt and Durak2018). However, while gamification remains prevalent in digital media, it is less prominent in the AI media ecology. While genAI tools retain inherent game-like qualities such as multimodality and interactivity, they emphasise utility and efficiency over entertainment; the more they are used as mnemonic prosthetics, the more they shift from being an enhancement to a necessity.
The critical shift lies in how play and creativity are engaged. Previously, the playful element of creative activities involved a process of trying, exploring, searching, and eventually finding and creating, where the effort and discovery were part of the rewarding experience. GenAI, by contrast, through its prompting mechanism, allows for the near-instantaneous generation of outcomes: the result simply arrives. If genAI tools serve as cognitive prostheses that shortcut the process of playful struggle and discovery, the opportunities for the kind of play that fosters deep creativity and learning may be reduced. Just as a man with a limp, upon finding a suitable stick, would likely use it as a walking aid rather than a cricket bat, those who rely on genAI may see their play opportunities diminished. This transformation of even the most modern forms of play by genAI’s utilitarian, result-oriented nature leads us to propose a cultural shift: the emergence of Homo Promptus, who constantly interacts with AI-driven tools to solve tasks and whose engagement with the world is defined by prompting for outcomes. It may mark a fundamental change: a world in which creativity is increasingly defined by the ability to construct effective prompts rather than by free, inspirational exploration and effortful play, potentially eroding the original Huizingan element of playfulness in culture and society. For now, we can only imagine potential scenarios for these developments.
Two scenarios: ‘prompting to remember’ and ‘prompting to create’
This section explores two possible future scenarios of human–computer interaction, focusing on learning, remembering, and creativity in the new AI memory ecology, and examines them through the lens of education, memory, and cognitive psychology.
First scenario: How could he forget?
It is the first day back at school after summer, and the children are playing on the playground. A boy approaches them and says, ‘Hi, I am new in your class. We have just moved here! We live right on this street. My name is John.’ The personal memory assistant of Jane, one of the children, embedded in her smart glasses, automatically scans his face and memorises the details: ‘John, same class, lives on Baker Street.’ The next time they meet, Jane will only need to scan John’s face to recall his name. Luckily, it takes just a second, and all the information appears in front of her eyes.
Jane is amused that her father forgot her mum’s birthday, but her mum is not amused; she is angry. Indeed, how could he forget? Jane does not understand. Jane does not know the date of her mother’s birthday either – it has always been outsourced to her smart device. But the system is set up to notify her a week in advance so she can prepare a gift, and again on the day itself so she remembers to congratulate her mother. It’s the same for all her relatives and friends – there’s no way to forget someone’s birthday! Jane wonders why her dad does not have the same reminder system.
At school, her memory assistant can sometimes be annoying. In history class, so many dates and names are mentioned that they almost blur her vision. Instead of trying to memorise, Jane simply ignores them. Her smart device not only holds the answers but provides them automatically when relevant – she does not even need to form a question or identify what she needs to remember. So, why bother paying attention? The whole process of recognising, asking, searching, and retrieving is compressed into the frictionless scan of her smart glasses.
In math class, her memory assistant feels more like a tool than a distraction. Relevant formulas appear when needed; solving equations without them would be impossible. But like the historical dates, the formulas vanish from her mind the moment the task is done – or rather, they were never learned. Occasionally, Jane hesitates, uncertain whether what she has written is correct. Yet she has no way of evaluating the result, because the intuitive grasp that comes from understanding is missing.
As Jane grows, she notices she does not recall many details from her past. What remains are mostly visual impressions, music tunes, tastes, and smells, while dates, years, names, and faces are hard to remember. She needs to have a particular starting point, a trigger, to browse memories from her device, but those memories are mostly factual. When she reviews them, all the dates, places, and key moments are laid out as if written by someone else. It is not that Jane has forgotten; it is that she has never been the one to memorise. If Jane were to reflect on how she recalls things, she might say it feels strange, almost as if the memory does not truly belong to her. But it has always been this way, or at least, for as long as she can remember.
Discussion
Research has already shown that digital prosthetics can cause immediate forgetting and diminish cognitive flexibility. Jane’s experiences demonstrate how the disturbing effects of digital dementia might progress in the future. From the perspective of human intelligence, by offloading memory work, she no longer needs to engage with information on a deeper level, which leads to a lack of deep understanding. With Rancière (Reference Rancière1991), we learn that memory is the most elemental form of intelligence: ‘There is not one faculty that records, another that understands, another that judges’ (1991, 25). It helps us decode and recode what other intelligence has coded before by comparing what is known to what is not yet known. As such, if recording, understanding, and judging are all part of the same intelligence, and one cannot be done without the other, then Jane runs the risk of a weakened ability to recognise her intellectual capacities. What differentiates Jane’s experience from earlier generations who relied on books or even search engines is that she does not need to look for the information. The smart assistant is ever-present, passively capturing and proactively supplying knowledge. The argument here is not that Jane cannot learn with that device, but that such learning may not materialise in devotion to the learned thing or in the subsequent recognition of one’s capacity to think and act independently. At the same time, the scenario can be seen as the promise of the emergence of hybrid human-artificial intelligence, which ‘could’ afford a more democratic pedagogical relationship between ‘teachers’ and ‘learners,’ where genAI is a tool for inquiry, but the prompting and the ensuing answer mark the beginning rather than the end of the learning experience.
The risk remains that education and learning may be reshaped in ways that prioritise practical convenience over intellectual engagement, making it crucial to ensure that AI-driven tools encourage and support, rather than replace, the fundamental human capacity to learn and remember, while fostering learners’ belief in their intellectual freedom. Intellectual emancipation is nothing other than this belief manifested before every human–machine interaction. Our ‘intelligence partly depends on the mnemonic ability to recall, recollect, remember, and recognise past events and prior knowledge and, in turn, to learn from them’ (Merrill Reference Merrill2023, 176); therefore, learning with genAI should not create dependence on the machine for one’s intelligence. AI-driven tools should not serve as explicators but as reminders for learners to devote themselves to what they are learning.
From the cognitive psychology perspective, Jane’s reliance on her smart glasses illustrates how cognitive offloading can alter the way people learn and remember. Instead of actively processing and recalling information, she depends on quick retrieval. Storing factual information externally can benefit short-term working memory and improve cognitive performance by reducing interference from unnecessary details. However, memory retention works best when people actively generate and connect information to real-life contexts. Jane’s case reflects a disruption in explicit memory formation, rather than a failure of implicit memory systems: her assistive glasses supply semantic information (e.g., ‘This is Mark,’ ‘Your brother’s birthday is on Friday’) without Jane needing to encode or retrieve it herself, so she is not required to direct attention to these stimuli at the time they occur. Over time, this reduces the active rehearsal necessary for consolidating information into long-term memory. As a result, episodic memories – personal recollections of what happened, when, and with whom – are poorly stored; her personal recollections are fragmented and lack contextual richness. Her semantic memory is also undermined, as factual knowledge (e.g., names, dates) is rarely stored internally. In contrast, implicit memory (skills, habits, and non-conscious learning) remains unaffected, as it does not rely on deliberate encoding processes. Since Jane does not actively engage with the information, she does not build lasting neural connections, which weakens long-term knowledge retention. Without deliberate memory training, such as repeated retrieval practice and spaced repetition, the brain’s internal representations may decline, impairing neuroplasticity capabilities crucial for effective learning. 
This underuse of the brain’s high-level processing systems can weaken internal knowledge essential for reasoning, intuition, and expertise, thereby limiting the ability to form new associations and engage in critical thinking over time.
As smart technology becomes more embedded in daily life, balancing efficiency with active cognitive engagement will be crucial for maintaining strong memory and critical thinking skills. This shift is not about gaining access to stored facts but about reshaping the very process of how we encode and retrieve them. We may be entering an era where reliance on external digital pointers supplants the deep internalisation of knowledge. While human memories are inherently reconstructive and subjective, often filled in using our internal schemata, genAI tools, in parallel, possess the capacity to ‘creatively’ complete information, thus providing plausible yet potentially inaccurate outputs, whether actively guided or not. A significant concern here relates to the ‘illusion of knowledge’, often observed when individuals rely excessively on external aids. While various tools, like books or digital archives, have always played a role in supporting learning, their use required meta-awareness – knowing what one knows, what information is needed, and how to effectively locate it. GenAI, however, increasingly outsources these essential cognitive efforts, leading to a hollowing out of metacognitive habits, where users bypass the mental work of self-correction, reflection, and prediction-error mechanisms. Conversely, when designed with sound cognitive principles, future technology can enhance learning by complementing, rather than replacing, the brain’s natural mechanisms. For instance, technology can provide scaffolded practice and hints, prompting active engagement with the material. Just as photographs help reconstruct memories, augmented real-time data can reinforce associative learning and strengthen semantic memory. The goal is a balance where external tools support the development of deep, resilient internal knowledge.
From the memory studies perspective, Jane’s experiences with her memory assistant illustrate a third mode of memory, one detached from emotional context and social influence. Her lack of direct involvement in the process of remembering highlights the shift away from individual and socially constructed memory. Even when we remember things personally, human memory has always been shaped within social frameworks, meaning the same event can be understood differently across different cultural, professional, social, and age groups: ‘Memory is a collective function’ (Halbwachs and Coser Reference Halbwachs, Coser and Coser1992, 183). Individuals recall experiences in specific social contexts: childhood memories, for example, often stem from stories told by parents or repeatedly recalled at family gatherings. Such shared memory recollections play a crucial role in shaping personal identity (Van Bergen et al. Reference Van Bergen, Evans, Harris, Branagh, Macabulos and Barnier2024). The emerging AI memory ecology introduces adjustments to these familiar patterns. While collective memory is meant to be dynamic and co-constructed through social interactions, the hybrid memories that Jane gets, facilitated by the AI-driven assistant, are static and devoid of the adaptability and emotional depth that come from human recollection. If information is sent directly to the digital personal archive, opportunities for collective interpretation and shared reconstruction diminish. Digitally stored information remains unchanged over time – stable and less intertwined with emotions. When hybrid memories are recalled with the help of future digital smart tools, their presentation will no longer adapt to personal context but will instead depend on the capabilities of genAI, which, by its non-human nature, does not understand the meaning of data but processes it as a sequence of data points (Moretti Reference Moretti2013).
This contrasts drastically with human memory, which is dynamic, context-sensitive, and enriched by imagination and social interaction. Using hybrid memory may, over time, result in social interactions becoming more transactional and less emotionally resonant, as the co-construction of shared memories – vital to friendships, families, and communities – gives way to individually retrieved, machine-curated accounts. It also underscores the potential risks to inspiration and creativity in the future AI memory ecology.
Second scenario: It’s the end of the Muses as we know them
Jack is 17, and when he meets a particular girl, butterflies flutter in his stomach. He thinks that he wants to impress her. He sees her sitting on a bench, absorbed in a big book of poems. Jack decides to do something outstanding: he will write a poem about her and, through it, tell her how beautiful she is! But Jack has never tried to write poetry. In fact, he does not like reading at all. It does not matter – making a poem will take him a minute. He provides his smart assistant with the main ingredients: a picture of the girl, the name of the book she is reading, and a prompt to write a poem that complements the girl’s looks in the style of the book’s author. The poem is instantly ready. Jack prints it out from his smartphone, sits beside the girl, and hands her the still-warm printout. She takes the paper hesitantly but reads the text and smiles at Jack. ‘So you are a poet,’ she says. ‘Oh yes, I am,’ Jack replies and starts a conversation, clearly pleased with the outcome.
Later, inspired by his success at making poetry, Jack joins the poetry club, where he learns from his fellow poets what the most important skill of a poet is. They do not talk about sources of inspiration, imagination, the Nine Muses, or reading other poets. What truly matters is giving good prompts to their smart assistants and knowing the tools: poetic forms, meter, and rhythm, all essential to building a poem’s structure.
Soon, he impresses his poetry-club peers with his outstanding prompting skills, as he generates poems about different epochs better than others do. His secret is to ask the assistant to include distinctive elements or mentions of historical events of the period in the text. Jack himself does not know history, but he trusts his smart assistant and never cross-checks what is created. The use of unfamiliar words and peculiar details makes his poems stand out, surprising his fellow poets, who have never encountered anything like this before. Jack is acknowledged within the group as a creative innovator.
His skills continue to develop, and one day, he prompts: ‘Write a poem: 15th century, Spain, love and war, iambic pentameter, line length about seven words, add four alliterations and seven metaphors. One page long. Include distinctive elements or mentions of historical events of that time. The poem should be similar to award-winning poems.’ The result exceeds all expectations. Jack is unanimously recognised as the best poet in the poetry club.
Sometimes, club members point out that Jack is unable to recite any of his poems. But he is a poet, not a reader! Jack himself never reads poems. For him, poetry has nothing to do with literature; it is about performance: delivering results that impress others, and the feeling of achievement.
Jack’s poetry continues to earn him accolades, but the joy he once felt in impressing others begins to fade. Sometimes, he wonders whether there was once something else, something more substantial and meaningful, in being a poet.
Discussion
In the evolving AI memory ecology, the challenge is not whether genAI should be used in creative and intellectual work but how. Addressing the emergence of hybrid human-artificial intelligence, genAI should ideally support learning and creativity rather than replace them. Users must resist the temptation to surrender their intellect and fall into what Rancière (Reference Rancière1991) calls ‘the laziness of the mind.’ Jack’s journey as a ‘poet’, driven entirely by prompting, exemplifies the risks inherent in co-creating with genAI: his creative process lacks engagement with literature, personal interpretation, and the metacognitive effort required to maintain intellectual emancipation. Essentially, Jack renounces his intelligence, bypassing the devotion needed for the thing that makes poetry. Using genAI in service of one’s intellectual emancipation should itself be taught and learned. If Jack decides to become a poet in the traditional sense, genAI can serve as a tool to help him reflect on and deepen his creative process. For example, Jack could upload his poems and prompt genAI to generate reflective questions about the meanings he wishes to convey and how accessible they are. These questions could spark his curiosity, inspire further exploration, and prompt the (re)writing of his poetry based on ‘getting lost’ in genAI’s questions. Upon getting lost, one finds one’s intellect most easily.
From the cognitive psychology perspective, Jack’s use of genAI aligns with studies on cognitive offloading that suggest that delegating tasks to external tools can enhance immediate performance. However, while lowering cognitive load can theoretically free up mental resources for deeper, reflective thinking, essential for creativity, this connection is not well-established. Overreliance on external aids may, in fact, reduce engagement in reflective thinking, potentially hindering creative processes. Creative insight relies on spontaneous internal processing and self-generated ideas, which are disrupted when AI-generated content is accepted uncritically (Beaty et al. Reference Beaty, Benedek, Wilkins, Jauk, Fink, Silvia, Hodges, Koschutnig and Neubauer2014). Furthermore, general knowledge suffers because information is not retained long-term and cannot be used effectively. In contrast, memory training that engages the brain in forming new associations fosters the cognitive flexibility necessary for creative thought (Fink et al. Reference Fink, Benedek, Koschutnig, Pirker, Berger, Meister, Neubauer, Papousek and Weiss2015).
GenAI can support creativity by providing structure to thoughts and freeing individuals from cognitive fixations, potentially enhancing multimodal memory encoding and richer associations. Yet, deep, effortful engagement and active memory formation remain crucial: while Jack may produce poetry, he misses the iterative, challenging process that refines artistic expression. Passive offloading leads to overreliance, where generated content is accepted without question. By relying on AI rather than actively constructing themes or verses, Jack predominantly uses System 1 processing (fast and automatic), skipping System 2 thinking necessary for deeper learning. Consequently, he misses key processes like the Generation Effect, which strengthens memory and understanding through active creation. Although his AI-generated poems impress others, his internal knowledge of poetry, history, and creative expression remains limited. This reflects research showing that reliance on external sources can lead individuals to overestimate their knowledge while reducing their ability to apply it in new situations. Therefore, it is essential to use genAI-driven tools in a way that supports rather than impedes the cognitive processes vital for creativity.
From the memory studies perspective, genAI as ‘media prostheses’ proposes a radically new creative dynamic. Traditionally, a writer with a historical story idea would research by reading books and watching films on that period, exploring language, rituals, cuisine, and the political context to ensure that the resulting work accurately represents the realities of that time. Depending on the creative concept, an artist would focus even more on visual details, exploring fashion, interiors, or transport. With genAI, much of this becomes unnecessary: when Jack delegates writing poetry to the smart assistant, it probabilistically selects the relevant historical details. Jack is not a plagiarist who copies lines and ideas from others; his creativity lies far from literature and consists primarily of prompt engineering. Similarly, on a larger scale, the proposed scenario suggests that society may move away from internal, reflective creativity toward an external, performance-oriented one, where only the result matters, and the search for inspiration and meaningful details is diminished, delegated to mnemonic prosthetics. This tendency may continue further, affecting collective memory. Other poets are impressed by Jack’s ability to generate historical context; they accept the generated creations without question. Such uncritical embrace of AI-generated narratives subtly illustrates how the formation of collective memory may shift, becoming increasingly ‘fed’ in its own turn by the artificial collective memory of genAI.
The new norm of AI media ecology offers endless creative possibilities, but the muses now work in a very different manner. Ultimately, the act of creation becomes about optimising prompts rather than engaging with the deeper emotional and intellectual aspects traditionally associated with artistic expression. As individuals work with AI-driven tools as co-creators, this collaboration may transform the creative process into a human-machine exchange, where humans guide machine output while navigating within its rules and limitations. They must continuously adjust prompts based on the output, learning the best ways to align with the machine’s logic, yet never fully knowing the outcome or having complete control over their creation. Huizinga argued that play is not merely recreational but a fundamental component of culture. Creativity flourishes through play, embodying intuition, adaptability, and the ability to think beyond conventional boundaries – qualities that machines fundamentally lack. It is not just the arrangement of words or colours in predefined patterns; it is the pursuit of inspiration and the reimagination of the familiar. Without this playful element, creativity risks becoming a mechanical practice that lacks the excitement of inspiration or the exhilaration of ‘Eureka!’ moments. If genAI continues to shape creative processes, a new paradigm may emerge – one defined not by spontaneous ideation but by strategic prompting. This shift marks the rise of Homo Promptus, an individual who navigates creativity through the skill of articulating queries that yield desired outputs.
Conclusion: ‘Dark They Were, and Golden-Eyed’
The Martians stared back up at them for a long, long silent time from the rippling water… (Ray Bradbury Reference Bradbury1950).
This paper sets out to answer two research questions: (1) How might genAI tools affect human memory, remembering, and forgetting? And (2) What are the individual, cultural, and social implications of co-creating with genAI for human creativity? It does that by demonstrating how McLuhan’s idea of media tools as ‘extensions of [wo]man’ finds renewed relevance in the context of the new AI memory ecology, which does not necessarily restrict human agency in memory and creativity but changes it in ambivalent ways.
First, outsourcing recall to genAI may lead to forming a shared co-remembering dynamic between people and their digital memory assistants. While AI-driven systems ‘enable fundamentally new levels of automation and delegation’ (Heintz Reference Heintz, Larsson, Ingram Bogusz and Andersson Schwarz2020, VII), they at the same time challenge traditional workings of remembering and forgetting, turning into mnemonic prosthetics.
Second, the human creative process, previously rooted in inspiration, may change when individuals have the ability to externalise creative thinking to machines, guiding outputs through prompts rather than independently searching for ideas. This process parallels play in its exploratory and rule-bound nature but diverges in its reliance on an external power, as the role of the creator is shifting: from an originator of ideas to a curator of machine-generated outputs.
Whether this transformation signifies a loss of human ingenuity or an evolution in cognitive adaptability depends on how this process unfolds and how it is framed within broader cultural shifts. A major challenge for culture and society is that we actively ‘want these systems to complement us’ (Heintz Reference Heintz, Larsson, Ingram Bogusz and Andersson Schwarz2020, VIII). Their development and global adoption are encouraged by governments and commercial enterprises in pursuit of increased productivity and scientific progress. One of Europe’s Digital Decade programme targets for 2030 is to have 75% of EU companies using AIFootnote 7. Regulatory bodies acknowledge concerns about AI surpassing human capabilities in various domains, responding with legislative measures (the EU AI Act, GDPR, the Executive Order on AI). The European Ethics Guidelines for Trustworthy Artificial Intelligence stress that ‘AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights’ (AI HLEG 2019). Yet, there remains no clear solution for achieving this in practice: ‘Oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches’ (AI HLEG 2019).
The dynamics of human-machine oversight mechanisms vary across contexts. Essentially, ‘the question is not about humans or AI, but rather how to best structure the relation between humans and AI, [where] the most important skill is computational thinking, which is all about solving problems using methods from computer science’ (Heintz Reference Heintz, Larsson, Ingram Bogusz and Andersson Schwarz2020, IX). In line with this idea of the upcoming need for change in necessary human skills, we foresee the gradual appearance of Homo Promptus, which signals a reconfiguration of human thinking. While some may view this transition as an erosion of creative autonomy, others may see it as a useful adaptation to new epistemic conditions and mastering much-needed computational thinking.
Addressing these concerns demands an interdisciplinary effort – one that brings together AI researchers and developers, as well as social scientists. Our position paper ultimately calls for sustained interdisciplinary inquiry, concrete strategies, and educational initiatives that cultivate AI literacy and help maintain intellectual independence in the future. By outlining speculative yet plausible future scenarios, we do not seek to impose dystopian narratives but to make a necessary step toward critically assessing the long-term implications of the new AI memory ecology. These observations serve as entry points for deeper reflection on the societal transformations that are only beginning to unfold.
Acknowledgements
We would like to thank anonymous reviewers for their valuable comments.
Funding statement
This work received partial funding from Vinnova’s (Sweden’s Innovation Agency) ADAPT-project. This work was partially supported by the Wallenberg AI and Transformative Technologies Education Development Program (WASP-ED) funded by the Marianne and Marcus Wallenberg Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests
The authors declare none.
Katerina Linden (Linköping University) is a postdoctoral researcher at Linköping University, working in the Reasoning and Learning Lab within the Division of Artificial Intelligence and Integrated Systems. Her research focuses on the impact of AI on human intelligence, exploring its implications for memory, society, business, education, and policy.
Hugo-Henrik Hachem (Linköping University) is a postdoctoral researcher at Linköping University’s Reasoning and Learning lab. His research focuses on the philosophy and goals of lifelong learning and (older) adult education.
Vasiliki Kondyli (Lund University) is a postdoctoral fellow at the Memory Lab of Lund University, supported by the Swedish Research Council. Her research develops at the intersection of cognitive psychology, human-centred technologies, and design.