Guest-edited by Ljupcho Grozdanovski, Associate Research Professor, FNRS/University of Liège, and Jérôme De Cooman, Research Professor, University of Liège.
This issue brings together 13 rigorous studies that analyse various facets of the interrelationship between fairness and individual safeguards in the field of AI. They do so by examining, on the one hand, the need for specific, tailor-made individual safeguards capable of preventing instances of unfairness (discrimination, harm) from occurring and, on the other, by clarifying the role those safeguards play in rectifying such instances, should they occur. Committed to interdisciplinarity, this issue offers important and original insights from a variety of disciplines and perspectives: that of engineers (regarding the fairness constraints that should frame the training and validation of AI systems); that of regulators (regarding AI-specific regulatory frameworks that integrate fairness into their design); and that of courts (regarding the rights and procedures that allow fair outcomes to be achieved in disputes involving AI systems).