I. Introduction
The rapid integration of artificial intelligence (AI) into education is transforming traditional learning environments, creating an urgent need for effective regulatory frameworks that ensure the responsible use of AI, protect students’ rights, and support the evolving roles of educators and institutions.Footnote 1 Yet research exploring the legal frictions between AI regulation and education regulation – particularly how these regulatory fields intersect and clash as AI becomes more deeply embedded in educational settings – is practically non-existent.Footnote 2 While frictions between AI regulation and education regulation are not inherently problematic, they become critical when they hinder fundamental rights such as educational equality, access to education, and the pursuit of high-quality education, all of which are essential to the EU’s participatory democracy.Footnote 3 This gap highlights the critical need to study these regulatory frameworks together, to ensure that the deployment of AI in education (AIED) is governed in a way that fosters positive and effective regulatory outcomes.
In response to this gap, the main aim of this paper is to critically assess the effectiveness of AIED regulation under the AI Act,Footnote 4 with particular attention to how it interacts with existing education regulation. It focuses on higher education (HE),Footnote 5 a relevant setting for this research given the increasing use of high-risk AI systems – such as those used in admissions, assessment of learning outcomes, academic level determinations, and exam proctoring – in response to the rising demand for scalable, data-driven solutions.Footnote 6 Furthermore, this paper focuses on legal requirements that pertain to human oversight, a central element in both AI regulation and education regulation: AI regulation mandates human oversight to ensure accountability, transparency, and the protection of fundamental rights, while education regulation emphasises the empowerment of teachers in guiding and maintaining the integrity of the educational environment. In many ways, the role of the teacher remains the most central yet unresolved issue in this context, as it is still unclear how, from a regulatory perspective, teachers’ professional responsibilities and authority will be aligned with the growing use of AIED.Footnote 7
The paper proceeds in four parts. It begins by classifying how AIED is deployed in HE from both a technological and legal perspective. Second, it provides a broad overview of the legal standards governing the deployment of AIED in HE, with a focus on how human oversight is framed within both AI regulation and education regulation at the European Union level. Third, it applies the “effectiveness test,” a conceptual model developed by Mousmouti, to evaluate the effectiveness of the AI Act based on four key dimensions: its purpose, the coherence and feasibility of the substantive content it provides, the anticipated and actual results it produces, and its structural integration within the broader legal framework.Footnote 8 The paper concludes with a discussion of how the regulation of AIED in HE could be made more effective.
II. AIED
Artificial Intelligence in Education (AIED) has been a research field for decades, with its roots tracing back nearly a century to Sidney Pressey’s development of the first mechanised multiple-choice machine – an early attempt to automate feedback and ease teachers’ grading burden.Footnote 9 Since then, AIED has evolved in parallel with broader developments in AI, moving through three key phases.Footnote 10 Early systems were rule-based, relying on expert-designed logic to deliver fixed instructional responses – like intelligent tutoring systems that followed pre-set scripts.Footnote 11 The advent of machine learning introduced a new generation of systems capable of learning from educational data to personalise feedback, predict student performance, and dynamically adapt content.Footnote 12 Most recently, the emergence of generative AI has brought powerful foundational models into education, enabling tools that can generate explanations, create learning materials, or simulate human-like dialogue, raising both opportunities and challenges for teaching, learning, and assessment.Footnote 13
Another common way to categorise AIED is by grouping it into three broad areas: learning with AI, using AI to learn about learning, and learning about AI, also known as AI literacy.Footnote 14 Learning with AI involves “the use of AI-driven tools in teaching and learning” and can be further divided into learner-supporting AI, teacher-supporting AI and institution-supporting AI.Footnote 15 Using AI to learn about learning includes collecting and analysing digital traces – data generated through learners’ interactions with educational technologies, such as clicks, responses, and emotional states.Footnote 16 Learning about AI, also known as AI literacy, has two dimensions.Footnote 17 First, it involves teaching learners and educators of all ages about AI technologies, including machine learning, natural language processing, and their foundations in coding and statistics.Footnote 18 Second, it involves helping citizens understand the broader societal impacts of AI, such as ethical concerns, bias, surveillance, and job disruption.Footnote 19
It is important to emphasise that, beyond pedagogical domains, AI systems are increasingly being deployed in the operational aspects of education, even though the boundaries between the two are often blurred.Footnote 20 For example, universities and other educational institutions are turning to AI to streamline administrative functions such as admissions, scheduling, and resource allocation, potentially reshaping how educational opportunities are distributed and managed.Footnote 21 Likewise, dedicated external agencies may use AI-driven tools to assess institutional performance or accredit programs.Footnote 22 These developments show that AI is not confined to the classroom but also shapes the broader institutional environment in which teachers work and students learn.
From a legal perspective, AIED may be classified as prohibited, high-risk, limited-risk or minimal-risk under the AI Act. Prohibited AIED includes AI systems that infer the emotions of a natural person within educational institutions, both public and private.Footnote 23 These are defined as systems that infer emotions or intentions from biometric dataFootnote 24 – such as facial expressions, voice, or body language – and are banned in schools and universities due to serious concerns about their scientific validity, cultural bias, and potential infringement on fundamental rights, including privacy, dignity, and freedom of thought.Footnote 25 Recital 44 of the AI Act explicitly warns of the discriminatory and unreliable nature of such technologies, especially when applied to vulnerable populations like students. Applications using emotion recognition to support language learning,Footnote 26 assess students’ attention or interest,Footnote 27 or evaluate applicants during admissions testsFootnote 28 are likely to be captured by the ban on emotion recognition in educational settings.Footnote 29 Limited exceptions exist for medical and safety applications, such as to support students with autism or to enhance accessibility for blind or deaf learners.Footnote 30
AI systems that are not prohibited may still fall within the high-risk category if they serve critical educational functions. This includes systems used for admissions decisions, determining academic placement, evaluating learning outcomes, or conducting exam proctoring – each of which can significantly affect a student’s educational trajectory.Footnote 31 High-risk classification is based on the potential impact of these systems on individuals’ rights and opportunities, particularly their ability to secure a livelihood.Footnote 32 In contrast, limited-risk AIED refers to systems that interact with learners or educators but do not make or substantially influence consequential decisions. For example, an AI-powered chatbot used to respond to student inquiries will likely be considered limited-risk. These systems are subject to transparency obligations under Article 50 and fall under broader provisions on AI literacy and responsible deployment.Footnote 33
Finally, minimal-risk AIED refers to tools that pose negligible risks to fundamental rights, health, or safety – such as spell-checkers or systems used to assign classrooms. While these tools are not subject to specific obligations under the AI Act, their use – particularly when scaled or embedded within more complex educational platforms – raises broader concerns around ethical deployment, and, as Pagallo notes, even so-called “low-risk” systems may generate significant environmental harms that fall outside the legal risk classification.Footnote 34 This highlights the extent to which current legal categories may fail to capture the wider societal and ecological consequences of AIED deployment.
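To summarise the classification described above, the following sketch expresses the four tiers as a simple decision sequence. It is purely illustrative – a schematic reading aid rather than a legal test – and it deliberately ignores the exceptions and detailed conditions discussed above (for instance, the medical and safety carve-outs for emotion recognition); the type and function names are the author’s own shorthand rather than terms drawn from the AI Act.

```python
from dataclasses import dataclass

# Illustrative only: a highly simplified sketch of the AI Act's four-tier
# logic as applied to AIED, ignoring exceptions and the Regulation's
# detailed conditions. All names are the author's shorthand.

@dataclass
class AIEDSystem:
    infers_emotions_in_education: bool  # emotion inference in educational settings (prohibited practice)
    annex_iii_education_use: bool       # admissions, assessment, placement, proctoring
    interacts_with_persons: bool        # e.g., chatbots answering student inquiries

def risk_tier(system: AIEDSystem) -> str:
    if system.infers_emotions_in_education:
        return "prohibited"
    if system.annex_iii_education_use:
        return "high-risk"      # critical educational functions listed in Annex III
    if system.interacts_with_persons:
        return "limited-risk"   # transparency obligations under Article 50
    return "minimal-risk"       # e.g., spell-checkers, classroom allocation

# Hypothetical example: an automated proctoring tool that does not infer emotions
print(risk_tier(AIEDSystem(False, True, True)))  # -> "high-risk"
```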
III. Regulating AIED
Currently, the legal framework surrounding AIED is still evolving, with many aspects of its implementation and regulation remaining unclear, especially regarding its impact on the relational dynamics between teachers and students and the potential effects on educators’ roles, responsibilities, and autonomy. As noted at the outset, this paper concentrates on the legal requirements that address human oversight in both AI regulation and education regulation at the EU level, rather than attempting to cover all regulatory dimensions of AIED. This methodological focus is grounded in the view that oversight provisions offer a critical entry point into understanding the effectiveness of regulation. As Koh et al. put it succinctly, “the big elephant in the room” in discussions about AIED is the question of what, exactly, the role of the teacher remains.Footnote 35
1. AI regulation
Article 22 of the GDPR lays the foundation for human control of AIED where the technology involves the use of personal data.Footnote 36 Under Article 22 GDPR, data subjects have the right not to be subject to solely automated decisions involving the processing of personal data that result in legal or similarly significant effects. To invoke Article 22(1), three cumulative conditions must be satisfied: (1) there must be an individual decision; (2) it must be based exclusively on automated processing; and (3) it must have legal consequences or similarly significant impacts on the data subject. Article 22(2) allows for fully automated decision-making or profiling in specific situations, such as when it is permitted by Member State law. Where fully automated decision-making is permitted under an exception, certain “suitable measures” are required to safeguard the data subject’s rights.Footnote 37
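For illustration only, the cumulative structure of Article 22(1) can be made explicit in a short schematic check; the predicate names below are the author’s shorthand rather than statutory terms, and the sketch does not capture the exceptions in Article 22(2).

```python
def article_22_1_engaged(individual_decision: bool,
                         solely_automated: bool,
                         legal_or_similarly_significant_effect: bool) -> bool:
    """Illustrative sketch: Article 22(1) GDPR is engaged only where all three
    cumulative conditions are met (subject to the Article 22(2) exceptions)."""
    return (individual_decision
            and solely_automated
            and legal_or_similarly_significant_effect)

# Hypothetical example: a fully automated admissions rejection with no human involvement
assert article_22_1_engaged(True, True, True)
```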
Article 22 has been interpreted by courts and Data Protection Authorities (DPAs) as requiring guidelines and instructions that ensure there are trained personnel who are capable of critically evaluating AI-generated outcomes.Footnote 38 These legal authorities have highlighted the importance of ensuring that human reviewers do not simply rubber-stamp AI-based decisions, but instead, exercise meaningful and independent oversight.Footnote 39 The human review process must be substantial enough to influence or override the AI’s decision if necessary, ensuring that individuals affected by automated decisions have a genuine avenue for redress. Furthermore, organisations are often required to implement training programs for those involved in the oversight process, equipping them with the skills needed to understand and critically assess AI systems. These guidelines and instructions, including trainings, are considered essential to maintaining a high standard of accountability and transparency in the use of AI technologies, particularly in contexts where decisions have significant impacts on individuals’ rights and freedoms such as in HE.Footnote 40
Like the GDPR, the AI Act aims to safeguard fundamental rights and freedoms by ensuring appropriate human oversight in the use of AI systems. Article 14 of the AI Act mandates that high-risk AI systems must be designed and developed to allow effective oversight by natural persons during their operation, including through suitable human-machine interface tools. There are two ways to ensure human oversight. First, it can be built into the high-risk AI system by the provider.Footnote 41 Second, it can be identified by the provider and implemented by the user of the AI system.Footnote 42 Article 14 should further be read in conjunction with Article 13, which mandates that high-risk AI systems must be designed and developed to be sufficiently transparent, enabling users to understand their operation. Furthermore, Article 26 of the AI Act requires the deployer of an AI system – such as a university implementing an AIED system – to adopt appropriate technical and organisational measures to ensure the AI system is used in accordance with its accompanying instructions, including those related to human oversight.
The requirements of the AI Act oblige providers and deployers of AIED to adopt a “human-oversight-by-design” approach when developing AI systems.Footnote 43 Design-based regulation requires that regulatory standards are embedded directly into the design of products and systems at the outset of their development and throughout their lifetime, thereby decentralising the role of standard setting, monitoring and enforcement from state actors to private actors.Footnote 44 This approach is rooted in Lessig’s idea that computer code can be more efficient than traditional law in controlling human behavior, as code can impose architectural restraints that entirely prevent certain actions, unlike laws that allow for disobedience.Footnote 45 It requires the engagement and competency of technical professionals – such as the private EdTech companies that develop many of these technologies – to design systems that influence human behavior in line with regulatory goals.Footnote 46
Article 14 of the AI Act has a broader scope than Article 22 of the GDPR since it requires human oversight not only for the processing of personal data but also for non-personal data, and aims to ensure health, safety, and the protection of fundamental rights beyond just data protection.Footnote 47 It also explicitly includes “the extra-juridical concept” of automation bias as a potential source of distortion for human oversight agents.Footnote 48 Essentially, the AI Act mandates risk-based, flexible, and design-focused requirements for human oversight by providers and users, which are further reinforced by the more stringent human intervention mandates in the GDPR that apply specifically to data controllers.Footnote 49 Here, it becomes evident that a distributed form of responsibility for human oversight is emerging in the law, where different actors across the AI lifecycle share the duty of ensuring that AI systems operate ethically and safely.Footnote 50
Finally, it is important to mention that Recital 56 explicitly recognises the importance of deploying AI systems to promote “high-quality digital education and training”, enabling learners and teachers to develop digital skills, media literacy, and critical thinking necessary for participation in the economy, society, and democratic processes. Additionally, Article 4 mandates that providers and deployers of AI systems ensure sufficient AI literacy among their staff and users, considering their technical knowledge, experience, and the context in which AI systems are used. Furthermore, and as noted at the outset of this paper, the AI Act classifies several categories of AI systems in education as high-risk, including those used to determine access or admission, evaluate learning outcomes, steer learning processes, assess appropriate education levels, and monitor prohibited behavior during tests.Footnote 51
2. Education regulation
Education is recognised as a fundamental human right in international human rights law.Footnote 52 In Europe, the Charter of Fundamental Rights of the European Union declares, “Everyone has the right to education and to have access to vocational and continuing training.”Footnote 53 Education is not only a fundamental human right in itself but also a vital enabler of all other human rights.Footnote 54 Furthermore, it plays a critical role in fostering individual development and empowering citizens to participate meaningfully in democratic life.Footnote 55
Unlike the right to education, academic freedom lacks consistent legal recognition across jurisdictions and is absent from the Universal Declaration of Human Rights.Footnote 56 That said, it is explicitly recognised in the EU Charter of Fundamental Rights which states in Article 13: “The arts and scientific research shall be free of constraint. Academic freedom shall be respected.”Footnote 57 Academic freedom is generally understood to derive from freedom of thought and expression, and at its core, it safeguards teachers’ autonomy in deciding how to teach and what tools to use.Footnote 58
This distinction between the legal recognition of the right to education and the more variable protection for academic freedom provides important context for understanding the professional standards expected of educators in HE. The 1997 UNESCO Recommendation Concerning the Status of Higher-Education Teaching Personnel offers a detailed articulation of these expectations, framing HE teaching as a profession grounded in public service, requiring expert knowledge, lifelong learning, and a commitment to high standards in scholarship and research.Footnote 59 It further calls for adequate working conditions that support teaching, research, and community engagement, and emphasises that HE personnel must enjoy the full range of civil, political, social, and cultural rights, including freedom of expression and protection from arbitrary interference or punishment.Footnote 60 Importantly, it upholds the right of academic staff to teach free from coercion, in accordance with professional standards and human rights norms, and to actively participate in the development of curricula and teaching methods.Footnote 61
Against this backdrop – where education is recognised as a fundamental right, and HE teaching is framed as a public service grounded in academic freedom – concerns about the ethical implications of AIED began to intensify in the mid-2010s, particularly with the rapid expansion of machine learning technologies.Footnote 62 As ethical concerns about AIED began to attract greater attention, early policy efforts sought to develop strategic approaches for its integration that would safeguard fundamental rights and uphold core public values.Footnote 63 One notable example is the framework developed by the Institute for Ethical AI in Education (UK), one of the first to offer practical guidance for the responsible and rights-based use of AI in educational contexts.Footnote 64 The report explicitly states, “Humans are ultimately responsible for educational outcomes and should therefore have an appropriate level of oversight of how AI systems operate.”Footnote 65
The year 2019 marked a global milestone with the release of UNESCO’s Beijing Consensus, which set out a vision for the sustainable development of AI in education in line with the 2030 Agenda and Sustainable Development Goal 4 on quality education.Footnote 66 It emphasises a human-centered approach, stating that “…while AI provides opportunities to support teachers in their educational and pedagogical responsibilities, human interaction and collaboration between teachers and learners must remain at the core of education.”Footnote 67 The Consensus calls for redefining teachers’ roles and competencies, strengthening teacher training institutions, and developing targeted “capacity-building programs” to prepare educators to work effectively in AI-rich environments.Footnote 68 Follow-up UNESCO guidance has reinforced that AIED must be designed to enhance human-centered pedagogy, ensuring that educators retain meaningful oversight and remain central to teaching and learning processes.Footnote 69
The Digital Education Action Plan (DEAP) 2021–2027, published by the European Commission in 2020, further outlines a strategic framework to modernise education and training systems for the digital age.Footnote 70 The main purpose of the DEAP is to provide the EU’s strategy for supporting a long-term, inclusive, and high-quality digital transformation of education and training systems across Member States, especially in light of the challenges exposed and accelerated by the COVID-19 pandemic.Footnote 71 One of its major outputs is the 2022 Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators,Footnote 72 which emphasise the need for educators to remain central in decision-making processes.Footnote 73 The guidelines, which have a strong basis in the requirements set by the European Commission’s High-Level Expert Group on AI,Footnote 74 stress that teachers must be “in the loop” when AI systems are used, with clearly defined roles that safeguard their pedagogical and ethical responsibilities.Footnote 75 They highlight the importance of ensuring that teachers can monitor AI outputs, identify anomalies or potential discrimination, and intervene – especially in situations requiring empathy or nuanced judgment.Footnote 76 The guidelines also call for safeguards against over-reliance on AI, mechanisms for learners to opt out if concerns are not addressed, and adequate training and information for educators and school leaders to ensure safe and rights-respecting implementation.Footnote 77 In doing so, the framework reinforces that teacher agency and responsibility are essential for the trustworthy and effective use of AIED.
IV. Towards more effective regulation
Regulatory effectiveness – widely regarded as a cornerstone of good regulation – concerns how well the law achieves its intended aims, anticipates potential challenges, and adapts to changing circumstances to deliver meaningful societal outcomes.Footnote 78 To put it differently, effectiveness refers to the capacity of regulation to achieve the goals it is designed to accomplish – essentially asking whether it works as intended.Footnote 79 It focuses on the practical functioning of legal rules and the mechanics through which they produce results.Footnote 80
Zamboni argues that effectiveness is not an absolute value in itself, but a relative criterion – a path that legislation must follow to realise the values that a community or its political representatives regard as the foundations of society, such as democracy.Footnote 81 He contends that the concept of effectiveness in legislation rests on three interconnected elements.Footnote 82 First is the original idea, which represents the political or normative goal that the legislation is intended to achieve – what lawmakers set out to accomplish.Footnote 83 The original idea is a political expression that can be uncovered in the preparatory works, parliamentary discussions or preamble of the statute.Footnote 84 Second are the results of legislation, meaning the actual impact or outcomes produced once the law is enacted and applied in practice. The results include both intended and unintended consequences produced by the legislation in a certain community.Footnote 85 Third is the context or conditions in which the legislative process takes place – the real-world environment, including institutional, social, and political factors, that shapes how ideas are translated into outcomes. This actual situation reflects the social, economic, political, legal and cultural reality in which the legislation operates.Footnote 86
Zamboni further argues that effectiveness can be understood in two distinct ways. Internal effectiveness refers to the extent to which new legislation changes the legal system itself, focusing on legal policy outputs.Footnote 87 In contrast, external effectiveness concerns the broader societal, economic, and political impact of those legal changes – what he calls legal policy outcomes.Footnote 88 In short, internal effectiveness concerns the capacity to really change the law, whereas external effectiveness concerns the capacity to really change society.Footnote 89
While Zamboni emphasises the theoretical foundations of effectiveness – highlighting its normative underpinnings, contextual dependencies, and dual internal/external dimensions – Mousmouti offers a more practical perspective by developing a structured framework to assess whether legislation functions effectively in practice. Her model identifies four core dimensions of effective legislation: its purpose, the coherence and feasibility of the substantive content it provides, the anticipated and actual results it produces, and its structural integration within the broader legal framework.Footnote 90 Although the model is primarily designed to support legislative drafters throughout the stages of lawmaking, it nonetheless provides clear and objective benchmarks for scrutinising legislation, grounded in the axiomatic assumption that effectiveness is a core quality of good law.
This section draws on Zamboni’s theoretical conception of legislative effectiveness and applies Mousmouti’s analytical model as a practical framework to evaluate the AI Act – both in terms of its capacity to achieve its stated regulatory objectives and its coherence and interaction with the broader legal system, particularly in relation to education regulation. It begins with an evaluation of the purpose of the AI Act. The content and results are then examined by considering the coherence and feasibility of its provisions, alongside both anticipated and emerging impacts, particularly in the context of high-risk AIED. Finally, the structural integration of the Act is assessed by exploring its interaction with existing education regulation and the broader legal framework.
1. Purpose: assessing legislative intent and clarity
The first step of the effectiveness test focuses on identifying and assessing the purpose of the law in relation to the problem it seeks to address and the policy objectives it supports.Footnote 91 This involves examining how clearly the purpose is articulated, whether it sets a measurable and meaningful benchmark for implementation and interpretation, and how well it aligns with expected outcomes.Footnote 92 A clear and unambiguous purpose should offer direction to both implementers of the law and interpreters of it, enabling an evaluation of whether the law has achieved its intended goals when viewed retrospectively.Footnote 93
The AI Act’s purpose is outlined in Recital 1, which states that the Regulation aims to “improve the functioning of the internal market” by creating a harmonised legal framework for the development and use of AI systems, in line with Union values.Footnote 94 It seeks to promote “human-centric and trustworthy artificial intelligence” while ensuring a high level of protection for health, safety, fundamental rights, democracy, and the environment, and at the same time fostering innovation and supporting the free movement of AI-based goods and services. While this recital provides an explicitly stated purpose, it also sets out an exceptionally broad and ambitious agenda.
A central tension within the AI Act’s legislative purpose lies in its simultaneous promotion of innovation and protection of fundamental rights – an ambition that becomes particularly fraught in the context of the deployment of AIED in HE. As educational institutions increasingly rely on AI systems provided by private EdTech companies, the traditional balance of power is shifting. What was once a public, participatory space – where decisions about pedagogy, assessment, and student advancement were shaped by educators and institutional governance – is now being reshaped by commercial actors whose primary accountability lies not with the public, but with shareholders and market logics.Footnote 95 While outsourcing in HE is not new, the intrusion of EdTech suppliers into core pedagogical functions marks a departure from prior practice.Footnote 96 These are no longer merely ancillary services such as IT infrastructure or course management platforms; rather, AI tools are now implicated in decisions that bear directly on educational content, student evaluation, and the conferral of credentials.Footnote 97 In sum, educational technologies have concentrated power and profit in the hands of dominant private-sector actors, and the growing adoption of AIED risks deepening this privatisation by reinforcing a market-driven model that treats education as a commercial commodity rather than a public good.Footnote 98
The AI Act attempts to safeguard against potential abuses through principle-based legal requirements that are embedded into the design of high-risk systems, such as mandates for human oversight, fairness, and transparency.Footnote 99 However, the practical implementation of these requirements often falls to the technology providers themselves, who are afforded significant discretion in determining what measures are appropriate. It is these providers – more precisely, their software engineers and product teams – who ultimately encode legal principles into system architectures, effectively translating vague regulatory standards into binding technical features. This arrangement creates a de facto delegation of normative authority, whereby private actors not only design AI systems but also shape the meaning and operationalisation of legal safeguards.Footnote 100 In effect, the very entities whose technologies may pose risks to students, teachers and public values are entrusted with embedding the protections meant to guard against those risks, although they will be subject to some oversight by a variety of national and EU-level agencies.Footnote 101
While many EdTech companies may choose to follow the relevant harmonised standards to demonstrate conformity with the AI Act’s requirements – rather than risk misapplying the law’s underlying principles when designing their systems – this may result in a formalistic and technocratic approach that avoids substantive engagement with the normative foundations of the legislation.Footnote 102 Adhering to standards may not necessarily guarantee that broader concerns – such as pedagogical integrity, student autonomy, or non-discrimination – are meaningfully addressed. Here, Havinga et al. explain, “General AI standards rarely address education-specific issues which can increase issues of access and inclusion in educational settings and the devaluation of regional or minority languages and their protection and promotion contributing to the building of a Europe based on democracy and cultural diversity.”Footnote 103 This reality raises important questions about the sufficiency of technical standards as proxies for legal and ethical compliance, particularly in settings where the stakes of algorithmic decision-making are closely tied to individual rights, institutional trust and public values.Footnote 104
Ultimately, while some flexibility may be necessary to allow EdTech providers to operationalise legal requirements in ways that support innovation and avoid overly cumbersome compliance burdens, this discretion must be carefully balanced against the imperative to protect fundamental rights and uphold the public values that underpin education systems. In the context of HE, the AI Act’s broad and sometimes competing objectives – promoting innovation, ensuring legal harmonisation, and safeguarding fundamental rights – can produce uncertainty when applied to complex, value-laden environments such as teaching and learning. This ambiguity leaves broad room for private actors to define what compliance looks like, potentially without sufficient engagement with the normative and pedagogical commitments enshrined in education law. In this regulatory gap, the right to education risks being subordinated to commercial priorities,Footnote 105 weakening the institutional integrity of public universities and their democratic mandate.Footnote 106
The conflict between market-driven AI solutions and the core mission of higher education – grounded in education, research, and public service – highlights the need for regulatory frameworks that prioritise effectiveness not only in outcomes but also in the process of decision-making itself.Footnote 107 As De Benedetto explains, effectiveness should guide both the “whether” and the “how” of regulatory action.Footnote 108 In the context of education, this principle requires that institutional decisions to adopt AIED in the first place go beyond claims of efficiency or innovation and instead assess whether such tools genuinely align with core educational values and goals.
2. Content: evaluating legislative coherence and feasibility
The second step of the effectiveness test focuses on evaluating the “substantive content” of the law to determine whether its legislative choices are responsive to the real-world situation it aims to address.Footnote 109 This step includes assessing whether the rules are realistic, the feasibility of compliance and enforcement, and “the consistency and alignment between the choice of rules, enforcement mechanisms and communication” to target audiences.Footnote 110 The goal is to ensure that the law is practical, enforceable, and aligned with its objectives. Retrospectively, this step examines whether the assumptions made during lawmaking were accurate and whether the law’s implementation produced the intended results or revealed barriers to compliance and effectiveness.Footnote 111
Applying this lens to the AI Act involves examining how its rules function in practice – particularly in contexts like education, where implementation can be complex. One relevant example concerns the role of HE institutions, which are considered “deployers” under the AI Act whenever they use “an AI system under (their) authority.”Footnote 112 As such, they are subject to specific obligations outlined in Article 26, provided the technology in question qualifies as high-risk under Annex III. In practice, this means that individual teachers – who interact directly with students and often manage the use of AIED – act as de facto agents of the institution in fulfilling these legal duties.Footnote 113 Teachers must not only use AIED systems in accordance with the provider’s instructions but must also have the necessary competence, training, and authority to supervise them.Footnote 114 They are further responsible for monitoring the system’s operation, assessing the relevance and representativeness of input data (to the extent they control it), and taking appropriate steps if any risks or serious incidents arise.Footnote 115
In HE, the role of “human oversight” will no doubt be operationalised through teachers or academic administrators who are expected to act as safeguards against system failures. On paper, their involvement is framed as essential to ensuring that AIED functions in a fair, accurate, and accountable manner.Footnote 116 Oversight is presented as a mechanism for aligning AI systems with the core educational values of the HE institution and promoting trust.Footnote 117 However, this conception tends to idealise human oversight without fully addressing how such responsibilities are distributed, enacted, or supported within institutions.Footnote 118
Essentially, the AI Act places formal responsibility on institutions, but it embeds compliance obligations directly into the everyday practices of teaching staff. This shift of operational responsibility to teaching staff raises serious concerns about workload, the need for specialised training, and the compatibility of these duties with professional autonomy. While the measures provided for in Article 26 may be important for ensuring that AI operates within ethical and legal boundaries, the approach demands a level of expertise that many educators, trained primarily in pedagogy and research, may not possess. Although the law requires that teachers be supported in their oversight role, in practice this often translates into demands for additional training in AI literacy and assistance in interpreting instructions for use.Footnote 119 These measures, while valuable, risk creating new layers of bureaucratic responsibility for educators already stretched thin. In other words, the expectation that teachers monitor AI systems closely and intervene when necessary may overburden them and detract from their core educational mission, potentially contributing to burnout. Here, the main point is that the law risks overlooking the actual institutional capacities and resource limitations that shape the deployment of AIED in HE.Footnote 120
Paradoxically, AI may increase the workload for educators rather than alleviate it, as the integration of AI-based educational technologies may require extensive training, time invested in understanding the instructions of use for AIED and “thought and effort” into working with AI outputs.Footnote 121 Instead of simplifying tasks, these technologies may lead to a restructuring of teachers’ roles, where the promise of automation results in more complex responsibilities and potentially diminishes the quality of their engagement with students.Footnote 122 For instance, teachers might need to invest time in learning how to operate automated grading software and then verify the fairness of the results – essentially spending as much time as they would grading manually, while also losing valuable insights into how their students learn. Beneath the surface, these shifts may reflect a broader pattern of deskilling and hidden labor, where educators are required to perform additional repetitive or compensatory work to support automated systems – often for outcomes that are no more effective, and sometimes less reliable, than the practices they replace.Footnote 123
3. Results: examining legal implementation and impact
The third step of the effectiveness test focuses on results – both anticipated and actual.Footnote 124 During the lawmaking phase, it examines whether the law clearly defines expected outcomes, includes mechanisms for monitoring and evaluation, and ensures that the necessary data will be collected to assess implementation.Footnote 125 This involves asking what results are to be achieved, how they will be measured, and whether adequate structures are in place to track progress.Footnote 126 In the evaluation phase, the focus shifts to whether the intended results were actually achieved, what broader impacts emerged, and whether there is a clear link between those outcomes and the law’s original purpose, content, and context.Footnote 127
First, it is unclear whether the AI Act establishes the necessary foundations to evaluate the real-world impact of AIED on teaching and learning. While the Act includes obligations related to record-keepingFootnote 128 and post-market monitoring,Footnote 129 these mechanisms may be insufficiently tailored to capture outcomes relevant to the education sector – particularly those tied to professional judgment and student-teacher relationships. For example, there is limited guidance on how the effectiveness of human oversight will be assessed in practice, or how its role in safeguarding pedagogical integrity and student rights will be monitored over time. Ostensibly, oversight bodies will verify whether the relevant harmonised standards have been followed. However, as discussed above, this approach risks reducing oversight to a formalistic compliance exercise, rather than ensuring it functions as a substantive safeguard aligned with the law’s broader educational and rights-based objectives.
This gap is all the more critical given the risk of what has been termed “artificial stupidity.”Footnote 130 Originally, the term described the deliberate simplification of AI systems by their designers to match user skill levels in the computer-gaming context.Footnote 131 Here, it is used more broadly in line with recent research to highlight how an AI system can also unintentionally produce flawed, irrational, or counterproductive outcomes that eventually undermine its intended purpose.Footnote 132 For example, an AIED system that recommends remedial math lessons to a student after detecting a dip in performance – without recognising that the student was recovering from illness – demonstrates artificial stupidity by ignoring contextual factors essential for sound educational judgment.Footnote 133
One mechanism intended to bridge this gap is the Fundamental Rights Impact Assessment (FRIA), which in principle offers a space for anticipatory reflection on the societal and ethical dimensions of AIED deployment.Footnote 134 According to Article 27, public sector bodies, as well as private entities providing public services – such as education – are required to carry out a FRIA before deploying a high-risk AI system. This assessment must include: a description of the deployer’s processes in which the AI system will be used; the intended duration and frequency of use; the categories of individuals or groups likely to be affected in the specific context; the specific risks of harm to those affected; and an explanation of the human oversight measures in place, including actions to be taken should those risks materialise.Footnote 135 Yet the practical function of the FRIA as a tool for rights protection is heavily dependent on how institutions interpret and implement it. Here, universities may lack both the expertise and resources to assess the full scope of risks, especially when AIED tools serve institutional priorities like efficiency, scalability, or data-driven performance management. Furthermore, there is an absence of concrete guidance on how to evaluate AIED’s impact on academic freedom, pedagogical relationships, or the discretion of educators from a methodological perspective, particularly as institutions must also navigate overlapping obligations under other regulatory frameworks, such as the GDPR’s Data Protection Impact Assessment and the systemic risk assessments required by the Digital Services Act.Footnote 136 Here, there is a risk that the FRIA may devolve into a procedural checkbox rather than serving as a meaningful safeguard.
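Purely as an illustration of the elements that Article 27 requires a deployer to document, the sketch below arranges them as a simple template. The field names and the proctoring example are hypothetical shorthand – Article 27 prescribes the content of the assessment, not any particular format – and a genuine FRIA is a reasoned, context-specific analysis rather than a set of structured fields.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative template for the elements an Article 27 FRIA must cover.
# Field names are the author's shorthand, not prescribed by the Regulation.

@dataclass
class FRIARecord:
    deployer_processes: str                  # how and where the AI system will be used
    duration_and_frequency_of_use: str
    affected_groups: List[str]               # individuals or groups likely to be affected
    specific_risks_of_harm: List[str]        # risks to those groups in this context
    human_oversight_measures: List[str]      # who oversees the system, with what authority
    measures_if_risks_materialise: List[str] = field(default_factory=list)

# Hypothetical example: AI-assisted exam proctoring at a university
fria = FRIARecord(
    deployer_processes="Remote proctoring of written examinations",
    duration_and_frequency_of_use="Each examination period, twice per year",
    affected_groups=["enrolled students", "students with disabilities"],
    specific_risks_of_harm=["false flags for atypical behaviour",
                            "chilling effects on examination performance"],
    human_oversight_measures=["a teacher reviews every flag before any action is taken"],
    measures_if_risks_materialise=["manual re-assessment", "internal appeal procedure"],
)
```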
Second, most high-risk AIED systems will be subject to self-assessment to demonstrate conformity with the law, with third-party assessment required only when biometric data is processed. As a result, technology providers retain considerable discretion in identifying risks and determining what constitutes an acceptable level of harm.Footnote 137 This discretion raises important questions about accountability, transparency, and the consistency of risk management across different educational contexts.Footnote 138 More fundamentally, it casts doubt on whether the results that emerge from the implementation of the AI Act can be meaningfully linked back to its stated purpose – namely, the protection of fundamental rights and the promotion of trustworthy AI. Without robust, independent oversight mechanisms or clear criteria tailored to the complexities of education, there is a real risk that the Act’s purpose will be undermined in practice, even if formal compliance is achieved.
Finally, the effectiveness of the AI Act in addressing the real-world impacts of AIED also depends on whether its classification system for high-risk AI remains responsive to emerging challenges in the education sector. While Annex III sets out an initial list of high-risk use cases, Article 7 empowers the European Commission to amend this list through delegated acts. This mechanism introduces a degree of regulatory flexibility, allowing new educational use cases to be added when they pose equivalent or greater risks to health, safety, or fundamental rights compared to those already covered. However, this adaptability is constrained by the structure and conditions of Article 7. Any amendment must fall within the domains already listed in Annex III, and be justified by demonstrable harm or substantiated risk. In practice, this means that the classification system remains reactive, despite the urgent need for proactive regulation that prevents student harms before they occur: new educational use cases may only be reclassified as high-risk after evidence of harm emerges or significant concerns are raised.Footnote 139
4. Context: understanding the law’s interaction with the broader legal framework
The fourth step of the effectiveness test examines how the law fits within the broader legal framework, assessing its coherence, consistency, and interaction with existing laws.Footnote 140 During the lawmaking phase, it considers whether the proposed legislation integrates smoothly into the legal order, avoids overlaps or contradictions, and offers clear, understandable changes.Footnote 141 Retrospectively, it evaluates whether the law has created gaps, redundancies, or inconsistencies in practice, and whether such issues were foreseeable or can be resolved through legal or policy adjustments.Footnote 142
Article 14 of the AI Act mandates that human oversight be built into system design but, as discussed above, grants significant discretion to technology providers in determining what is appropriate and cost-effective, potentially concentrating power in the hands of technology companies and allowing them to shape educational practices in ways that prioritise their commercial interests over public concerns like educational quality and equity.Footnote 143 Additionally, the core values of EU and national educational regulations, which emphasise active participation from teachers, families, students, and the community in school governance, may be at odds with the technical demands of AI oversight.Footnote 144 This shift towards technocratic compliance could undermine participatory educational structures and diminish inclusive, transparent, community-driven decision-making in favour of more centralised, opaque, commercially influenced approaches.
Moreover, the way human oversight is structured in AI regulation may not always align with the educational principle of professional judgment, potentially undercutting the role of teachers, who, as emphasised by EU policy, are regarded as being “at the heart of education.”Footnote 145 For instance, an AI system used for grading might require human oversight in the form of periodic checks to ensure the algorithm is not biased. However, this form of oversight might be more procedural and less focused on the nuanced, context-sensitive decisions that educators are trained to make. This could lead to situations where the human oversight required by AI regulation is a formality rather than a meaningful exercise of professional judgment. Delegating key aspects of teaching to AI inserts a digital intermediary into the classroom, flattening the contextual and relational dimensions of pedagogy and sidelining the expert judgment that teachers exercise in responding to the diverse needs of learners.Footnote 146 In doing so, such practices risk contravening education regulations and professional standards that recognise the teacher as an autonomous professional, entrusted with making informed, context-sensitive decisions in the best interests of students.Footnote 147
Under the AI Act, teachers are expected to exercise oversight over the functioning and outcomes of AIED systems – such as validating algorithmic decisions, interpreting outputs, or intervening when errors occur. However, this supervisory role operates within a feedback loop in which teachers not only oversee AI systems but are themselves shaped by their outputs and influence. For instance, AIED systems may shape how teachers assess students, structure lessons, or prioritise content, thereby constraining pedagogical discretion. This dynamic reflects a form of human-AI coevolution, where the very technologies meant to assist are also reconfiguring the practices and judgments of those tasked with oversight.Footnote 148 Teachers thus operate simultaneously as agents of legal compliance and as participants in a recursive system that may limit their professional autonomy over time.
Furthermore, when AI is deployed in education, teachers may find themselves in an increasingly precarious position – not as empowered decision-makers, but as scapegoats for liability.Footnote 149 As the so-called “human in the loop,” the teacher is expected to oversee, interpret, or validate AI outputs, often without adequate training, resources, or authority to meaningfully intervene. When things go wrong – such as biased grading, flawed student assessments, or exclusionary outcomes – it is the teacher, rather than the system or institutional decision-makers, who is most likely to be held responsible.Footnote 150 In this way, teachers become the moral and legal “crumple zone”, absorbing the fallout of decisions shaped by opaque algorithms and institutional pressures.Footnote 151 This dynamic undermines professional trust and autonomy, in contradiction with educational regulatory frameworks and professional standards that affirm the teacher’s role as an autonomous professional, entrusted with making context-sensitive, pedagogically sound decisions in the best interest of students.
Another area of tension between AI regulation and education regulation concerns the uneven capacity of schools and educators to meet new technical demands. Requiring teachers to take on responsibilities that presume a level of AI literacy or technical expertise may place an additional burden on already stretched educational staff, particularly in under-resourced schools.Footnote 152 This risks deepening existing disparities between institutions that have access to training, infrastructure, and support, and those that do not.Footnote 153 As a result, the implementation of AI regulation in educational contexts may inadvertently conflict with EU education policy objectives aimed at promoting equity and reducing structural inequalities in education systems across Member States.Footnote 154
It is further necessary to observe how legal obligations for transparency, notification, and explanation may not easily align with pedagogical goals.Footnote 155 These obligations found in both the GDPR and the AI Act risk introducing procedural burdens that can interfere with the educational environment. Teachers, as the primary human overseers of AI systems in the classroom, may be required to issue frequent notifications or explain AI-driven processes, shifting their focus from pedagogy to legal compliance. This can fragment the learning experience, disrupt classroom dynamics, and compromise students’ ability to engage meaningfully with the material.
Finally, conflicts between AI regulation and education regulation also intersect with questions of academic freedom, particularly regarding the extent to which teachers can exercise discretion over the adoption and use of AI tools in their practice.Footnote 156 Tensions arise when educators are expected – or required – to integrate AI systems into their teaching, even in cases where they may have reservations about the pedagogical value, data protection implications, or broader institutional purposes of such tools. In this regard, a distinction is often drawn between AI-supported and AI-driven education: in the former, teachers remain active decision-makers, using AI as a tool to support their chosen methods; in the latter, AI systems play a central role in managing instruction, assessment, and course progression, with teachers positioned primarily as supervisors or troubleshooters.Footnote 157 In this shift from pedagogical autonomy to oversight, the space for academic freedom may be reduced, particularly where the design and deployment of AI systems pre-structure the teaching process in ways that limit meaningful teacher input.Footnote 158
V. Discussion
If the AI Act is assessed functionally, as a tool intended to produce concrete regulatory outcomes, its effectiveness must be judged by its ability to deliver on its stated goals: ensuring safe, lawful, and trustworthy AI systems while safeguarding fundamental rights and promoting innovation.Footnote 159 Applying the effectiveness lens reveals a more complex picture than a simple yes-or-no verdict on whether the AI Act effectively regulates AIED. In some respects, the AI Act can be said to function effectively: it articulates a broad regulatory purpose, embeds obligations for technology providers and deployers, and establishes mechanisms such as FRIAs and post-market monitoring. However, these results do not necessarily speak to the quality or adequacy of the regulation in educational contexts.Footnote 160 In other words, mechanisms such as post-market monitoring and FRIAs provide important procedural safeguards, but their substance and operation may fail to account for the distinct pedagogical, institutional, and relational dimensions of educational environments.
In this sense, there may be an original deficiency in the legal design of the AI Act. The legislative choices embedded in the law – including, for example, its framing of human oversight, reliance on self-assessment for most AIED systems, and minimal tailoring to the specificities of HE – reflect a misalignment between regulatory instruments and the social realities of educational practice. If the law’s anticipated outcomes (e.g., safeguarding fundamental rights, supporting trustworthy AI) are only weakly realised when AIED is deployed in HE, this may point to a mischaracterisation of the risks of AIED, an overestimation of institutional capacities, or a failure to anticipate the frictions between AI regulation and education.Footnote 161
Addressing these issues requires a recalibration of regulatory strategies to better reflect the operational and normative complexities of educational practice. This could involve the introduction of co-regulatory mechanisms that incorporate input from educators, students, and institutional stakeholders; the development of sector-specific codes of conduct; and the refinement of oversight obligations to account for the realities of institutional capacity and pedagogical autonomy.Footnote 162 Importantly, the AI Act must be seen as a dynamic legal instrument – what Xanthaki refers to as “phronetic” legislation – that is open to revision in light of emerging challenges and shifting contexts.Footnote 163 In education, this means moving beyond formalistic compliance toward a framework that genuinely supports the public mission of universities and respects the professional role of educators.
Ultimately, the effectiveness of the AI Act in educational settings depends not only on its internal coherence but also on its ability to integrate meaningfully with existing legal frameworks. When AI regulation fails to accommodate the specific requirements of education law – such as the protection of academic freedom, the recognition of teacher autonomy, and the commitment to inclusive, high-quality education – it risks producing regulatory frictions that undermine both legal regimes. While such frictions are not inherently problematic, they become legally and politically significant when they impair the fulfilment of fundamental rights like the right to education and the EU’s broader commitment to participatory democracy.Footnote 164
VI. Conclusion
While AIED offers the potential for increased efficiency, cost-effectiveness, and innovative solutions to numerous challenges in education, its deployment also carries the risk of unintended negative consequences, such as perpetuating biases, compromising student privacy, diminishing the role of educators, and creating disparities in access to advanced learning technologies.Footnote 165 As commercial, scientific, and political interests converge to promote AI-driven models of schooling, the role of educators is being redefined.Footnote 166 The AI Act, while well-intentioned, tasks teachers with overseeing, interpreting, and validating AIED, effectively positioning them as the primary agents responsible for its implementation and accountability. This shift risks not only displacing pedagogical judgment but also reconfiguring academic roles, as educators are increasingly expected to perform functions more akin to technology managers than autonomous professionals.Footnote 167 At the same time, universities adopting AIED at scale may begin to resemble startup environments, where innovation is prioritised over educational values such as equity, critical inquiry, and professional trust.Footnote 168 These developments highlight a central tension: while the AI Act sets out to ensure trustworthy AI and protect fundamental rights, its limited alignment with education regulation and its underestimation of the social and institutional dynamics of teaching may ultimately undermine those very goals. Ensuring effective regulation of AIED therefore requires sustained attention to the specificities of educational contexts, including the evolving role of teachers, the institutional capacities of HE, and the broader normative commitments of the education system itself.
Financial support
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.