This paper examines the legal implications of the explicit mention of automation bias (AB) in the Artificial Intelligence Act (AIA). The AIA mandates human oversight for high-risk AI systems and requires providers to enable awareness of AB, i.e., the human tendency to over-rely on AI outputs. The paper analyses how this extra-juridical concept is embedded in the AIA, the asymmetric division of responsibility between AI providers and deployers for mitigating AB, and the challenges of legally enforcing this novel awareness requirement. The analysis shows that the AIA’s focus on providers does not adequately address design and context as causes of AB, and it questions whether the AIA should regulate the risk of AB directly rather than merely mandating awareness of it. Since the AIA’s approach requires balancing legal mandates with behavioural science, the paper proposes that harmonised standards reference the state of research on AB and human-AI interaction and hold both providers and deployers accountable. Ultimately, further empirical research on human-AI interaction will be essential for effective safeguards.