13.1 Introduction: Blurred Boundaries/New Compositions
There are many ways in which the presence and growing prominence of private actors in international public governance can be framed or understood.Footnote 1 While some portray this in the language of public–private partnerships or functional integration,Footnote 2 others, in a Marxist register, have diagnosed the work of international organizations as being guided by and serving a transnational capitalist class of private corporations and financial interests.Footnote 3 For those who approach the place of private actors within global governance with a public law sensibility, this blurring of boundaries poses particular problems of power, influence and the absence of regulatory control.Footnote 4 These problematizations and the nature of our legal responses are connected to how we perceive the public and private to be tied together – contractually, transactionally, economically, culturally, ideologically, etc. In line with the ambition in this volume to theorize the ways in which international organizations engage the world (and the world engages international organizations), this chapter studies these dynamics of influence and interrelation from an infrastructural perspective: by tracing emergent – materially and relationally enacted – compositions of power and authority where the boundaries between public and private cannot easily be drawn (and where the need for doing so might dissolve).Footnote 5 ‘Thinking infrastructurally’, with Kingsbury, can help us observe shifts in the logic and location of global governance, while moving beyond the binary public–private divide (and the legal imagination grafted onto it).Footnote 6 Before exploring the conceptual and normative dimensions of this approach (Section 13.5), I explore different dimensions of infrastructural change in global security and their effects: infrastructures of data collection and analysis (Section 13.2), policy implementation (Section 13.3) and policy reorientation (Section 13.4). 
In tracing the material entanglements between international organizations and private actors in the field of global security governance, a key observation is how pre-emptive security and informational capitalism are co-dependent and co-constitutive at an infrastructural level.
13.2 Infrastructures of Data Collection: Europol and the Role of Private Parties
A first case study relates to the recent amendment and expansion of Europol’s mandate and how it is tied to an ongoing controversy over the institution’s collection and control of vast data troves – a data ‘black hole’, as The Guardian called it.Footnote 7 This controversy was sparked by an order of the European Data Protection Supervisor (EDPS), directed at Europol, to delete data it retained on individuals who have no established links to a criminal activity (described as a ‘data subject categorization’).Footnote 8 This decision concluded an investigation launched by the EDPS in 2019 into Europol’s ‘big data challenge’,Footnote 9 which was based on the observation that the use by the organization of ‘big data analytics’ – the computational processing of large datasets without direct connection to pre-identified criminal activities or suspects – posed ‘high risks for data subjects’ and could have a ‘severe impact on their fundamental rights’.Footnote 10 European Digital Rights (EDRi) – an advocacy group working to defend and advance digital rights – described this use by Europol of data mining techniques to ‘identify’ potential criminals as the EU’s ‘own Snowden scandal’ – the collection and processing of big data as a form of mass surveillance.Footnote 11
Yet, from the perspective of Europol, continued access to this data deluge is essential to develop the analytical capabilities at the heart of its pre-emptive security practices and strategies. These strategic objectives resonate, for example, in the use of new technologies in border security, where, as a study by the EU Parliament noted, ‘AI algorithms … [a]part from verifying and identifying known persons … are also used to identify unknown persons of interest based on specific data-based risk profiles’.Footnote 12 This aligns with a much broader trend in security governance, as Gavin Sullivan summarizes, where ‘[p]atterns or clusters’ distilled from large amounts of data are not exclusively ‘aimed at targeting “known” threats [but] emerge from correlational associations … between heterogeneous data sources algorithmically analyzed to detect previously “unknown” risks’.Footnote 13 This ‘politics of possibility’, to use Louise Amoore’s concept,Footnote 14 is reflected in Europol’s new task to ‘identify[] persons … who constitute a high risk for security’ and hinges on an infrastructure through which ‘large and complex data sets’ can be accessed and analysed.Footnote 15 This is also expressed in one of the institution’s key tasks: to ‘collect, store, process, analyze and exchange information’, which has led some scholars to label Europol an ‘enormous data processing agency rather than a law enforcing police office’Footnote 16 and a report for LIBE (European Parliament’s Committee on Civil Liberties, Justice and Home Affairs) to describe it as a ‘criminal information hub’ enabled by a ‘computerized information system’.Footnote 17 Yet, it is precisely the collection, retention and mining of these ‘large and complex data sets’ – the retention of petabytes of uncategorized data, data held on people without a ‘clearly established link with criminal activity’ – that was the subject of the order by the EDPS.Footnote 18 In ordering Europol to delete data without 
appropriate categorization after a limited retention period, this formal intervention directly targeted the use of big data for purposes of ‘mass surveillance and the use of predictive policing’.Footnote 19
It is in direct response to this challenge that the legal mandate of Europol was amended in the new regulation adopted in June 2022.Footnote 20 Underlining the unique expertise of Europol in ‘processing large and complex data sets’,Footnote 21 the regulation, first of all, retroactively legalizes the continued storage and analysis by the organization of big data troves without appropriate data subject categorization. Article 74b now states that ‘Europol may carry out a pre-analysis of [uncategorized] personal data [received before 28 June 2022] for a period of up to 18 months from the date the data were first received’.Footnote 22 This retroactive legalization renders the order by the EDPS void and thereby, some argued, poses a distinct rule of law challenge.Footnote 23 With the ‘amended regulation’, the EDPS laments, its ‘order to delete these large datasets would become ineffective’.Footnote 24 In addition to this retroactive legalization, the new regulation also provides a more structural resolution to Europol’s ‘big data challenge’, in stipulating that the organization can conduct a ‘pre-analysis’ of uncategorized data – the petabytes of collected raw data targeted by the EDPS – to find patterns, links and relations with existing criminal activity.Footnote 25 The regulation further provides legal ground for the processing of big data in the context of ‘ongoing specific criminal investigation[s] within the scope of Europol’s objectives’.Footnote 26 The new data processing powers reflected in Articles 18(6a) and 18a, as the Meijers Committee also argued, express a form of technological determinism, where the infrastructural affordances and demands of big data sources and processing capacities alter and expand institutional mandates and powers.Footnote 27
Closely aligned with this storage and ‘pre-analysis’ of ‘large and complex data sets’ is the attention in the updated regulation to Europol’s development of new algorithmic tools of risk assessment and analysis. As The Guardian observed, indeed, the new regulation would ‘legalize the data cache and preserve its contents as a testing ground for new AI and machine learning tools’.Footnote 28 In the regulation, this resonates in ‘Europol’s new task to proactively monitor and contribute to research and innovation activities’ which is clarified to include ‘the development, training, testing and validation of algorithms for the development of tools for the use by law enforcement authorities’.Footnote 29 ‘Europol should play a key role’, the regulation additionally sets out, ‘in promoting the development and deployment of ethical, trustworthy and human-centric artificial intelligence’.Footnote 30 This extended, proactive mandate in the field of AI development is underlined by the provisions that the automated processing of personal data – including special categories of data that reveal ‘racial or ethnic origin, political opinions, religious or philosophical beliefs, … trade union membership [or a] persons’ sex life or sexual orientation’ – can, under specific conditions, be justified as a part of research and innovation activities (which could be understood as the development by Europol of computational tools for pre-emptive algorithmic risk assessment).Footnote 31 This extended mandate in relation to technological development also situates Europol as a focal point in relation to its member states (‘Europol should play a key role in assisting Member States in developing new technological solutions based on artificial intelligence that are relevant to the achievement of Europol’s objectives’) and to an expanding range of external partners and private parties (as we notice in the creation of ‘the EU Innovation Hub for Internal Security as a collaborative network of 
innovation labs’).Footnote 32 Intimately intertwined with the objective to mine and analyse big data troves (through the form of ‘pre-analysis’), this mandate to proactively develop new AI analytics is a crucial development in Europol’s relation to its members and the outside world.
In order to expand the data sources that Europol can use for these analytical purposes, one of the key innovations of the new regulation, central to my argument here, is that the organization now has the mandate to ‘receive personal data from private parties’.Footnote 33 ‘Private parties’, a recital to the regulation amending Europol’s mandate notes, ‘hold increasing amounts of personal data, including subscriber, traffic and content data, that is potentially relevant for criminal investigations’.Footnote 34 This can relate to data held by large tech platforms, financial institutions, airline companies or online service providers. Europol, the regulation observes, ‘should have measures in place to facilitate cooperation with private parties, including with respect to the exchange of information’, thereby providing a ‘single point of contact’ for private parties to share data sets that are scattered across multiple jurisdictions and not easily attributable.Footnote 35 In this sense, the provisions of data exchange with private parties have an explicit jurisdictional purpose: Europol’s role is to receive and analyse multi-jurisdictional data with the aim of identifying and informing the relevant national authorities. Interestingly, we observe here that the spatial, material and legal disjunction between the digital infrastructures curated by private companies and the territorial jurisdiction of national law enforcement agencies generates a need for the expansion of Europol’s legal mandate and the construction of a transnational infrastructure for data exchange.
The new regulation now provides Europol with a mandate to receive data directly from private parties (without member state involvement),Footnote 36 to transmit and transfer data to private parties (under specific conditions of necessity and proportionality)Footnote 37 and to request data held by private parties (via member states).Footnote 38 Importantly, Europol will also be providing the ‘infrastructure … for exchanges between the competent authorities of member states and private parties’, even in relation to crimes that do not fall within Europol’s objectives (in which case the institution merely serves as a data processor).Footnote 39 This is a remarkable expression of how the organization’s authority and importance in transnational security governance are tied to the construction and curation of a digital infrastructure through which states, security agencies and private parties are connected. The new regulation underlines and extends these infrastructural interdependencies, which are tied to Europol’s existing SIENA platform (Secure Information Exchange Network Application) and its integrated data management concept.Footnote 40 This integrated approach entails an ‘architectural infrastructure’ of data management where separated silos of data were replaced by ‘an overarching EU database of criminal data and criminal intelligence’ so that ‘linkages between data and behavioural patterns may be discerned’.Footnote 41 Infrastructural design choices do not only determine Europol’s position within the security landscape and the technological possibilities of intervention, in this sense, but also shape the affordance of legal protection – with the prospects of privacy, purpose limitation and other data protection standards being severely eroded.Footnote 42
In a report on the proposed amendment to Europol’s regulation (which has now been adopted) drafted for LIBE, this expanded mandate to directly demand, receive and transfer data from (and to) private parties has been described as ‘a considerable paradigm shift from the existing powers of the agency’.Footnote 43 The new structure places the institution in direct relation with private parties, thereby circumventing member states, and, the report notes, poses ‘significant risks for the protection of fundamental rights, in particular privacy and protection of personal data’.Footnote 44 This positioning of Europol as a focal point in a network between states and corporate actors reflects ‘the emergence of the trend in the past years to establish direct channels of communication between law enforcement and private parties and foster a public–private partnership’.Footnote 45 These material channels, the report notes, ‘fundamentally change the powers of the agency and the relationship it has with Member States’.Footnote 46 This change is not unique to Europol’s engagement with private actors. In its significant Resolution 2396 from 2017 on threats of terrorism and extremism, the UN Security Council underlines the importance of digital infrastructures for pre-emptive practices of ‘evidence-based risk assessments’ and ‘screening procedures’.Footnote 47 To this end, the resolution states, the Security Council ‘[e]ncourages … cooperation with the private sector … especially with information communication technology companies, in gathering digital data’.Footnote 48 It is through the alignment and infrastructural integration of private digital platforms with international organizations that the practices and politics of global security governance are being rewritten today.
The changes – and associated controversies – regarding Europol’s mandate and pre-emptive security practices show a particular pathway by which international organizations ‘engage the world’. The ever-closer cooperation and partnership with private parties, I argued, is not sustained through ‘contracting out’ specific tasks but through the material configuration of Europol as an infrastructural focal point for the collection, analysis and distribution of data. This consolidation of Europol as a ‘criminal information hub’ – a ‘centre of calculation’ where private actors are inserted – significantly alters and extends its mandate. The associated legal problems of privacy, data protection and non-discrimination can be situated in a context of data-driven, pre-emptive security practices, where big data is algorithmically analysed to find patterns, inferences and propensities of ‘high risk’ behaviour and to detect previously unknown threats.Footnote 49 Europol’s updated regulation provides the legal architecture for precisely this mode of governance: a mandate for the storage and retention of ‘large and complex datasets’, a proactive aim of developing algorithms and AI tools to ‘pre-analyse’ these big data sources with the aim of identifying ‘high risk’ individuals, and a formal integration of private parties into this emergent data ecology. This paradigm shift, as I argued, is infrastructurally enacted, and it is by ‘thinking infrastructurally’ that both the authority of Europol (in relation to states as well as affected individuals) and the impact on fundamental rights can be understood.
As private platforms are displacing states and international organizations as privileged sites for the creation and accumulation of data about people and populations,Footnote 50 we can observe the emergence of global security infrastructures marked by what Mark Andrejevic describes as ‘the blurred boundaries between state and consumer surveillance and, thus, between control, convenience, and care’.Footnote 51
13.3 Infrastructures of Implementation: Hash-Sharing and Algorithmic Content Moderation
In Section 13.2, we observed how private actors are tied to contemporary infrastructures of security governance as important sites of data collection and data sharing. In this section, I map out a different mode of public–private co-operation, in which international organizations involve online platforms and corporate actors to directly implement transnational security policies. This is specifically relevant in relation to practices of countering terrorism online, or – to use the language of the UN CTC (Counter-Terrorism Committee) and CTED (Counter-Terrorism Committee Executive Directorate) – to ‘counter terrorist narratives’.Footnote 52 Private actors – and particularly large online platforms – are perceived here as essential in containing the use by terrorist and violent extremist organizations of the internet for the purpose of recruitment, propaganda or the dissemination of terrorist and violent extremist content.Footnote 53 In the wake of the Christchurch mosque shootings in 2019, which had been accompanied by a white supremacist manifesto and a live-streamed video of the shooting, both of which had been widely shared on online platforms, the Christchurch Call to Action was adopted with the stated aim to ‘eliminate terrorist and violent extremist content online’.Footnote 54 Remarkably, this call, initiated by New Zealand and France, not only targeted states and public actors but also involved obligations for ‘online service providers’ to – among other commitments – ‘take transparent … measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal’.Footnote 55 To fulfil these promises, private platforms pledged to make ‘cross-industry efforts’ by ‘investing in and expanding the GIFCT’ – the Global Internet Forum to Counter Terrorism (a coalition of large tech corporations with the mission ‘to prevent 
terrorists and violent extremists from exploiting digital platforms’).Footnote 56
The emergence of this forum, and the enrolment of private actors in the struggle to ‘counter terrorist narratives’ more generally, has a longer lineage. In the Comprehensive International Framework to Counter Terrorist Narratives – developed by the CTC and CTED – the UN Security Council stressed that it ‘has long noted the importance of public–private partnerships in efforts to counter incitement to commit acts of terrorism’,Footnote 57 devoting an entire section of the framework to this theme. Underlining the role of ICT and social media platforms, the framework specifically stresses the need to develop novel technological tools to monitor, study, block, filter and remove terrorist content online.Footnote 58 This resonates in Resolution 2396, where the Security Council stresses the need for ‘cooperation with [the] private sector’ in countering terrorism online, by ‘developing counter-terrorist narratives and through innovative technological solutions’.Footnote 59 The Security Council further calls on ‘the GIFCT to continue to increase engagement with governments’ and points to Tech Against Terrorism, the UN CTED–ICT4 Peace initiative to ‘protect the internet’ from terrorist use.Footnote 60 Working in close collaboration with the GIFCT network, Tech Against Terrorism – a private actor launched through UN CTED in April 2017 and implemented by the UK-based NGO QuantSpark Foundation – works ‘on behalf of the UN CTED to support the global tech industry to tackle terrorist exploitation of their technologies’.
The development of ‘innovative technological solutions’ is central to the work of both TAT and the GIFCT. Within the context of TAT, the Terrorist Content Analytics Platform (TCAP) was developed to conduct a ‘scalable automated data analysis of terrorist content including its symbolism, narratives, and metadata, in order to identify and classify at scale’.Footnote 61 Working with advanced technological tools of content identification (including automatic web scraping), classification (in line with an institution-specific inclusion list of terrorist organizations), automated notification of affected platforms, and the development of a ‘hashing function to create an algebraic record of each content type’, TCAP seeks to ‘automate[] the swift detection and removal of verified terrorist content’.Footnote 62 In this chain of content moderation, TCAP sees a significant role for ‘AI driven processes’.Footnote 63 The ‘hashing function’ to which TCAP refers is central to the GIFCT’s digital infrastructure for removing online terrorist content: the ‘hash-sharing database’ (HSDB). The HSDB, the GIFCT states, is ‘a shared, safe and secure industry database of perceptual hashes of known images and videos produced by terrorist entities on the UN’s designated terrorist groups list, which GIFCT members had chosen to remove’.Footnote 64 These ‘hashes’, the GIFCT’s Interim Executive Director and Director of Technology explain, are ‘digital signatures’ of image or video content that ‘cannot be reverse-engineered’ to the original files.Footnote 65 Access to the HSDB allows all participating platforms to rapidly and proactively review and remove terrorist content from their platforms without having to share the specific content. 
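The hash-matching mechanism described above can be made concrete with a minimal sketch. The snippet below is a deliberately simplified, hypothetical illustration of perceptual hashing (the GIFCT’s production systems use far more robust algorithms, and none of the function names here are drawn from any actual codebase): a coarse ‘average hash’ reduces an image to a bit-string, and platforms compare hashes by Hamming distance against a shared database, flagging near-duplicates without exchanging the underlying content.

```python
# Illustrative sketch only: a toy "average hash" perceptual hash.
# Real hash-sharing systems rely on far more robust algorithms;
# all names and data here are hypothetical.

def average_hash(pixels: list[list[int]]) -> int:
    """Each bit records whether a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_database(pixels, hash_db, threshold=3) -> bool:
    """Flag content whose hash is within `threshold` bits of any
    known hash -- the original file is never shared."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= threshold for known in hash_db)

# A 4x4 grayscale image and a near-identical variant (one pixel changed):
known_image = [[10, 200, 10, 200],
               [200, 10, 200, 10],
               [10, 200, 10, 200],
               [200, 10, 200, 10]]
variant = [row[:] for row in known_image]
variant[0][0] = 30  # slight alteration, e.g. re-encoding noise

hash_db = {average_hash(known_image)}
print(matches_database(variant, hash_db))  # → True
```

Even this toy version exhibits the property the GIFCT relies on: a slightly altered copy of known content still matches, while only the hash, not the content itself, circulates between platforms.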
Considering the intractable and inherently political problem of defining terrorism and violent extremism (in contrast to, for example, opposition or political resistance), the GIFCT’s initial standard for inclusion in the database – which is now being expanded – was the relation between online content and entities listed on the UN Security Council consolidated list of designated terrorist entities.Footnote 66 The HSDB, in other words, entails a direct response to the engagement by international organizations – with the Security Council and the EU Internet Forum leading the way – of private platforms in the international framework to counter terrorist narratives.
Tied to the standard-setting process of the UN’s sanctions list,Footnote 67 the HSDB thereby allows platforms to moderate online content at an enhanced speed and scale – with large platforms reporting millions of images and videos corresponding with included hashes having been taken down automatically and quasi-instantaneously.Footnote 68 The hash-sharing database thus entails an infrastructural implementation of counterterrorism policy enacted through the automated filtering of online content with technological tools of hashing and algorithmic content classification. As Gavin Sullivan has argued, this is a regulatory practice that works at an ‘architectural level’: it does not reactively respond to harmful online behaviour but proactively prevents it from manifesting.Footnote 69 This ‘infrastructural’ or ‘architectural’ mode of regulation through the HSDB entails significant regulatory effects: it allows for automated forms of content moderation; aligns and amplifies the power of large online platforms in what Evelyn Douek has described as ‘content cartels’;Footnote 70 changes the temporal mode of intervention (from reactive to proactive); deflects the legislative demands for more stringent platform regulation; displaces definitional questions and political controversies on the notion of what constitutes terrorism (as elaborated in Section 13.4); and leads to new configurations of global governance – ‘bringing’, as Sullivan demonstrates, ‘states and platforms, and the dynamics of global security and informational capitalism, into novel and productive relation’.Footnote 71 To grasp these changes in global security governance, I believe it is important to study digital infrastructures and technologies such as the HSDB or the TCAP not as passive tools employed for already existing policy agendas but as active agential forces that shape how global security problems are perceived, performed and addressed.Footnote 72 The focus thereby shifts from a study of governance of 
infrastructure to governance by infrastructure.Footnote 73
Significant legal concerns have been raised regarding the workings of the HSDB as a global security infrastructure. Observing that this technology – and its corporate creation and ownership – entails a form of ‘regulatory power over discourse that has never been privately possessed before’, Jennifer Cobbe warns of the advent of ‘algorithmic censorship’ as a mode of governmentality with possibly detrimental effects for ‘open and inclusive spaces for communication and discourse’.Footnote 74 Danielle Keats Citron, in a similar vein, has observed a ‘censorship creep’ in how detection tools for child sexual abuse material have entered the much more politically contested domain of online violent extremism.Footnote 75 Human rights scholars such as Thiago Dias Oliva, in turn, have signalled specific legal concerns related to algorithmic content moderation in relation to freedom of expression and access to information,Footnote 76 while Gavin Sullivan shows that the pre-emptive and proactive logic of the HSDB erodes the rule of law and disables legal accountability and practices of contestation and review.Footnote 77 For the purposes of this chapter, however, the key claim is that tools such as the HSDB and the TCAP display a distinct way in which international organizations engage the world: they exemplify the enrolment of corporate actors into practices of global security governance formally set out by international institutions. This enrolment is only in limited ways expressed and implemented in formal legal or institutional forms. It rather unfolds through the construction of emergent socio-technical infrastructures of implementation. In tracing these new infrastructural relations, divisions between public versus private or global versus local can no longer be taken for granted but dissolve into novel, entangled and material compositions of global regulatory power.
13.4 Infrastructural Logics: Experimentalism and the Scripts of Global Security Governance
Sections 13.2 and 13.3 have pointed to the importance of public–private partnerships in the field of global security governance in relation to practices of data collection and the implementation of an international institutional agenda of countering terrorism online. In this section, I trace how private actors, public–private partnerships and their material entanglements shape the ways in which global security concerns are perceived, known, enacted and acted upon – that is, the changing rationalities and logics of security governance. In line with the preceding sections, I argue that these changes are best understood by situating them at a socio-technical or infrastructural level.
The role of private actors in shaping the orientation and agenda of new security practices is evident in their direct involvement in the drafting of strategies for technology adoption. While the recent strategic report by EU DG Home on Opportunities and Challenges for the Use of Artificial Intelligence in Border Control, Migration and Security was drafted by Deloitte,Footnote 78 the Frontex report on Artificial Intelligence-Based Capabilities for the European Border and Coast Guard was written by the RAND Corporation.Footnote 79 This ‘resonance’ between security practices and corporate risk consultancy, as Louise Amoore has described it,Footnote 80 translates into the experimental, open-ended and ongoing process by which new data-driven decision-making tools are designed as a series of use cases and modules that can be iteratively ‘sequenced’, ‘adapted’ and ‘plugged in’.Footnote 81 This experimental orientation is explicit in the 2025 UK Border Strategy, which states that ‘the private sector must take the lead on border innovation’, with the role of government being limited to ‘creating an environment that encourages experimentation and technology adoption’.Footnote 82 This shift in governance logic, sparked by private actors and sustained by the recourse to new digital technologies, instantiates what Claudia Aradau described as ‘experiments without protocols’ – a distinct mode of governmentality that is not based on rational plans and formal causal criteria but opens up to the space of play of machine learning models.Footnote 83 What we observe, then, is the advent in global governance of what Fleur Johns qualifies as ‘an open-ended, opportunistic, now-oriented disposition [t]hat has been identified with entrepreneurship’.Footnote 84 As private actors become enrolled in (the design of) institutional practices of security governance, we see the emergence of what Johns describes as a ‘lean start-up’ mentality that ‘favours experimentation over elaborate 
planning … and iterative design over traditional “big design up front”’.Footnote 85
The effects of these emergent infrastructural logics, where adaptiveness and operational actionability figure as central concerns,Footnote 86 can be observed in the empirical spheres of Sections 13.2 and 13.3. The ‘digital forensics’ envisaged and employed by Europol in its newly mandated ‘pre-analysis’ has an explicit orientation, in this sense, towards the iterative and open-ended detection of links, patterns and propensities in ‘big data’. The importance of ‘new technologies to process data’ to ‘swiftly detect links [and] trends’ was already recognized in Europol’s prior regulation,Footnote 87 and now resonates in the institutional task to pre-emptively identify ‘who constitute[s] a high risk for security’ and contribute to the ‘development, training, testing and validation of algorithms’ necessary for this task.Footnote 88 Echoing the language of corporate technology development, the design of new ‘solutions based on artificial intelligence’ should be supported by an ‘EU Innovation Hub for Internal Security as a collaborative network of innovation labs’.Footnote 89 The infrastructural and technological developments that are inscribed in this new regulation, in short, express a distinct logic of governing security problems – a logic by which interventions are not (exclusively) guided by pre-determined formal rules and criteria (about who constitutes a risk or what the properties of deviant behaviour precisely entail) but by inferences and ‘behavioural patterns’ that tools of machine learning are able to distil and render actionable.Footnote 90
A similar development is noticeable in the strategic ambition of EU DG Home ‘to harness AI for the benefit of borders, migration and security in Europe’.Footnote 91 At the core of this agenda is a focus on ‘risk assessment’ with the ‘general aim to find patterns and cluster individuals for further investigation’.Footnote 92 The ‘[c]lassification categories’ guiding this assessment, the strategy further notes, could be ‘defined based on a risk threshold or specific indicators’ or could be ‘less pre-defined where applications are grouped based on some “learned” similarity’.Footnote 93 AI would be essential in this process to ‘partition data into clusters’ and to ‘identify patterns which were not observed (as “strange”) before’.Footnote 94 This entails a logic of governance inherently tied to the affordances of new technological tools and infrastructures where the identification of security risks – as well as the interventions these invite – results not from fixed legal classifiers of enmity but, at least in part, from continuous and ‘unsupervised uncovering of correlations’.Footnote 95 This logic resonates in the development of ETIAS (European Travel Information and Authorisation System), which relies on AI to uncover hidden correlations and craft an ‘algorithm enabling profiling’ to counter risks related to security, illegal immigration and epidemics.Footnote 96
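The logic of surfacing what was ‘not observed before’ can be sketched in minimal form. The snippet below is an invented illustration, not a description of any actual ETIAS or DG Home system: records with made-up features are compared by similarity, and a record with no close neighbours is flagged as anomalous.

```python
# Hypothetical sketch of unsupervised anomaly flagging by similarity.
# Features, values and the threshold are invented for illustration.
from math import dist

def flag_outliers(records, threshold=10.0):
    """A record is 'strange' when no other record lies within
    `threshold` of it in feature space."""
    return [r for r in records
            if min(dist(r, o) for o in records if o != r) > threshold]

# Invented feature vectors, e.g. (trips per year, avg. stay in days):
records = [(2, 10), (3, 12), (2, 11),    # one dense group
           (20, 3), (22, 2), (21, 4),    # another dense group
           (60, 40)]                      # a pattern "not observed before"
print(flag_outliers(records))  # → [(60, 40)]
```

The point of the sketch is that no pre-defined rule names the outlier: the flag emerges entirely from the distribution of the data, which is precisely what makes such ‘learned’ similarity both adaptive and difficult to contest through fixed legal criteria.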
The most explicit manifestation of this changing governance logic, perhaps, relates to the GIFCT and its inclusion policy or taxonomy of what constitutes terrorist and violent extremist content.Footnote 97 In the initial stages of the forum and with the aim of finding ‘common ground’ with the approach taken by states and international organizations,Footnote 98 the HSDB would only include hashes associated with terrorist entities on the UN Security Council’s Consolidated Sanctions List. We can observe an infrastructural alignment between the UNSC and GIFCT, in other words, in how the technical content moderation tools of the latter encode the listing practices of the former – a dynamic of automated censorship that renders the controversies over what qualifies as terrorist content (in contrast to political criticism, for example) only more salient. Yet, as Tom Thorley (Director of Technology at GIFCT) observed, the strict reliance on this list imported a set of political and ideological biases embedded in the history of counterterrorism.Footnote 99 To overcome this problem of bias and the inability of static state-centred lists to adapt to a changing security landscape, the GIFCT undertook an initiative to broaden its hash-sharing database taxonomy.Footnote 100 In expanding the taxonomy of terrorist and violent extremist content, the list-based approach is supplemented with a method that would detect and classify content on the basis of ‘behavioural indicators’.Footnote 101 The qualification of what constitutes terrorist or violent extremist content would be guided, in this sense, by the distillation of ‘behavioural patterns’ in real-time data aided by the use of AI.
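The ‘infrastructural alignment’ described above – content moderation tools encoding an external designation list – can be made concrete in a short sketch. This is a hypothetical simplification: entity names and content are invented, and real hash-sharing systems typically use perceptual rather than cryptographic hashes so that near-duplicates also match.

```python
# Toy illustration of a designation-list-keyed hash database,
# NOT the actual GIFCT HSDB implementation.
import hashlib

# Stand-in for the UNSC Consolidated Sanctions List (names invented).
SANCTIONS_LIST = {"Entity A", "Entity B"}

def hash_item(content: bytes) -> str:
    """Exact cryptographic hash; real systems often use perceptual hashes."""
    return hashlib.sha256(content).hexdigest()

# Hash-sharing database: hash -> entity to which the content is attributed.
hsdb = {
    hash_item(b"propaganda video 1"): "Entity A",
    hash_item(b"unrelated clip"): "Entity C",  # attributed, but not listed
}

def should_flag(upload: bytes) -> bool:
    """Flag an upload only if its hash matches AND the entity is designated."""
    entity = hsdb.get(hash_item(upload))
    return entity is not None and entity in SANCTIONS_LIST

print(should_flag(b"propaganda video 1"))  # True: hash match + listed entity
print(should_flag(b"unrelated clip"))      # False: entity not on the list
print(should_flag(b"novel content"))       # False: no hash match at all
```

The sketch shows why the listing practice of the UNSC propagates automatically through the private infrastructure: the flagging decision is conditioned on membership in the external list, so whatever biases or contests attach to that list are reproduced at machine speed.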
The use of machine learning for content classification differs strongly from the process of hash-matching in that it actively extracts and generates features of what qualifies – on a behavioural level – as prohibited content.Footnote 102 In one of the proposed tools – the Dynamic Matrix of Extremism and Terrorism (DMET) – the idea is to ‘develop models that algorithmically classify groups (or content) into categories’ where ‘large baskets of indicators would be associated probabilistically with each level [of extremism]’ on a continuum from ‘partisanship’ to ‘terrorism’.Footnote 103 This classification model is described as ‘dynamic’, ‘iterative’ and based on ‘cognitive and behavioural cues’ that translate into patterns and profiles of extremist content.Footnote 104 Interestingly, the designers of the matrix recognize that this behavioural classification may flag content expressing the ‘collective right to self-determination’ or instantiating the exercise of fundamental rights – in which case platforms should use ‘discretion’.Footnote 105 We observe how these international legal questions are now addressed by different actors and in very different terms, mediated by the global infrastructure of hash-sharing and the algorithmic processes of content identification and classification on which it hinges.Footnote 106 Yet, while recognizing the normative choices that are involved in this process, the recurring promise in the GIFCT’s taxonomy expansion is to classify content in a manner that is ‘agnostic to ideology’ – a ‘value-free modularity’ that works around the intractable political problem of qualifying and addressing violent extremist content.Footnote 107
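To give a minimal sense of how ‘baskets of indicators’ might be ‘associated probabilistically’ with levels on a continuum from partisanship to terrorism, consider the following toy scorer. Every element – the indicator names, the weights, the thresholds – is invented for illustration; DMET itself is a proposal, not a deployed classifier, and nothing here reflects its actual design.

```python
# Toy indicator-based scorer in the spirit of DMET's continuum,
# with entirely hypothetical indicators, weights and thresholds.
LEVELS = ["partisanship", "radicalism", "extremism", "terrorism"]

# Hypothetical behavioural indicators and their weights (0..1).
INDICATOR_WEIGHTS = {
    "in_group_glorification": 0.2,
    "dehumanising_language": 0.5,
    "explicit_violence_endorsement": 0.9,
}

def classify(observed_indicators: set[str]) -> str:
    """Map the strongest observed indicator weight onto an ordered level."""
    score = max((INDICATOR_WEIGHTS.get(i, 0.0) for i in observed_indicators),
                default=0.0)
    # Thresholds carving the 0..1 score into the four ordered levels.
    thresholds = [0.25, 0.5, 0.75]
    level = sum(score >= t for t in thresholds)
    return LEVELS[level]

print(classify({"in_group_glorification"}))          # → partisanship (0.2)
print(classify({"dehumanising_language"}))           # → extremism (0.5)
print(classify({"explicit_violence_endorsement"}))   # → terrorism (0.9)
```

Even this crude sketch makes the normative stakes visible: the choice of indicators, weights and thresholds fully determines where ‘partisanship’ ends and ‘terrorism’ begins, which is precisely why the promise of ideology-agnostic classification discussed in the text is contestable.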
While the current GIFCT taxonomy remains closely centred around the UN’s Consolidated Sanctions List, we have now seen the initial introduction of behavioural indicators.Footnote 108 This is a conscious project of expanding the taxonomy ‘iteratively’, which, as Sullivan notes, entails ‘an approach to regulation driven less by normative considerations than practical and infrastructural conditions’.Footnote 109 In moving beyond the ‘biased’ list, we can see the emergence of ‘[AI] processes’ which rely ‘on a pipeline of real-time, open-source media [to] bas[e] inclusion decisions on behavior rather than ideology or third-party designations’.Footnote 110 This shift from designated, state-based, formal lists to an expanded, adaptive taxonomy of behavioural indicators reconfigures the very notion of what terrorism entails and how it can be acted upon. In its orientation towards emergent patterns, behavioural indicators and cognitive cues, this mode of security governance clearly resonates with the ambition of Europol’s algorithmic ‘pre-analysis’ of large and complex datasets, and the strategic aspirations of DG Home to use AI to ‘find patterns and cluster individuals for further investigation’. This changing governance rationality is mediated by specific technological possibilities and infrastructural conditions where the public and private are entangled and mutually recomposed.
It expresses a distinct logic, which, Fleur Johns has noted, entails an ‘attentiveness to emergent patterns of all kinds’ – a logic of governance that consists of ‘automated dives into vast and shifting oceans of data’ where possibilities are being ‘worked up iteratively and inductively from the inferences that may be drawn from the data available, as limited or partial as those data may be’.Footnote 111 In this distinct mode of governance, Mark Andrejevic argues, the ‘homogeneity of discipline is replaced by continual processes of experimentation and environmental modulation calculated to generate more data and … anticipate and foreclose through intervention’.Footnote 112 The diagnosis of this particular governance logic is not primarily about the rising power of private actors or the tremendous opportunity for private profit in this field (although both phenomena are widely documented), but about the ways in which security problems are perceived or addressed. The relationship between international organizations and private parties, in this context, is not one of ‘contracting out’ public tasks: the practice of data collection and analysis that structures governance routines in the security domain now relies on private infrastructures and ‘derives its animus and logic from the market’.Footnote 113
13.5 Conclusion: Thinking Infrastructurally: The Materiality of Rule and Affordances of Rights
Much can be said about the technologies of security governance and infrastructures of public–private partnership canvassed in Section 13.4. We could focus our critical analyses on the expansion of organizational mandates, the synergies between informational capitalism and security governance, the change in regulatory scripts and rationalities, and the associated erosion of fundamental rights. In relation to these important research strands, the intervention of this chapter is a modest one: to grasp the evolving dynamics through which international organizations ‘engage the world’ – following the invitation in this volume – I want to highlight the importance, as noted by Benedict Kingsbury, of ‘thinking infrastructurally’.Footnote 114 This mode of thinking, I have argued, allows us to grasp salient changes in global security governance, which I described by mapping emergent infrastructures of data collection, policy implementation and standard-setting in this field. It is through infrastructural changes that we see how new entanglements and compositions of public–private ordering emerge – how international organizations ‘engage the world’ and are materially engaged by it. This perspective opens up a distinct mode of studying the law and governance of (and by) international organizations that disrupts the traditional doctrinal dilemmas and categories within the field and holds both methodological and normative potential. In concluding, I touch briefly on both elements.
Thinking infrastructurally about the public–private partnerships that we observed in the extension of Europol’s mandate or in the tasks performed by TAT and the GIFCT implies, first, a methodological reorientation to the agency and vibrancy of material and socio-technical actants. Inspired by different strands of new materialism, actor-network theory and science and technology studies, this implies an attentiveness to how material objects, sites or relations produce patterns of social ordering – how they enable and constrain institutional or political projects.Footnote 115 Rather than considering the human and non-human – the social and material – as distinct ontological zones, this implies an account of international law and global governance where latent and often invisibilized infrastructural conduits (the ‘missing masses’ that Bruno Latour refers to) are allowed to (re)appear on stage as active agential elements.Footnote 116 This entails what Bowker and Star described as an ‘infrastructural inversion’: a change in our objects of analysis from formal legal rules or institutional forms to modes of governance by infrastructure.Footnote 117
In my view, this methodological reorientation, which is making inroads in the field of international law,Footnote 118 revolves around three dimensions. First, it urges us to recognize the agency of things: the ways in which infrastructures are not only (and never fully) defined by legal or political aspirations but also – and this is the central claim here – reshape, disrupt or displace these aspirations through their own distinct and materially mediated forms of normative ordering. This perspective invites us to approach the GIFCT’s hash-sharing database, Europol’s ‘digital forensics’ of ‘pre-analysis’, or infrastructures for public–private data-sharing not as passive tools for already existing governance projects but as agential participants in emergent governance networks. Second, and related, this reorientation adopts a relational ontology where agency – including regulatory agency in global security governance – is not seen as a property of particular (institutional) actors but as ‘a matter of relation, and a process that entails a potentially infinite spectrum of matter around and within us’.Footnote 119 In this sense, the a priori public–private binary cannot be maintained: we are witnessing new relational compositions in which material infrastructures perform a pivotal role.Footnote 120 Finally, thinking infrastructurally, from my point of view, also implies attention to the performative or world-making dimension of material technologies.
This points to how material tools – such as those employed for sensing, ordering and translating data – enact the normative categories and forms of knowledge upon and through which they intervene (as we observed in the category of ‘high risk’ individuals or the very notion of ‘terrorism’ itself).Footnote 121 From this perspective, thinking infrastructurally would allow us to trace the socio-technical cuts between what matters and what is excluded from mattering.Footnote 122 In short, a methodological orientation towards more-than-human agency, relational ontology and performativity allows us to appreciate the active participation of technological tools and infrastructural compositions in practices of global governance – in how problems are defined, rules implemented, power assembled and distributed. I see mapping and designing these emergent compositions as a critical avenue for international (institutional) law.
In addition to a methodological reorientation, this infrastructural perspective, secondly, also entails a normative agenda. Concerns about the effect of new digital technologies of surveillance and content moderation on human rights figure prominently in the different initiatives discussed in this chapter. While the GIFCT commits to a policy of ‘countering terrorism while respecting human rights’,Footnote 123 Europol’s new regulation notes that the institution ‘should ensure that the development, use and deployment of new technologies are guided by the principles of transparency, explainability, fairness and accountability, do not undermine fundamental rights and freedoms and are in compliance with Union law’.Footnote 124 These legal considerations relate specifically to the prevention of bias and the protection of personal data.Footnote 125 There is a risk, however, that these repeated normative commitments underestimate how the existence and exercise of human rights are tied to specific material decision-making practices. As Julie Cohen has convincingly observed: ‘discourses about fundamental rights have relied on a set of unstated and unexamined assumptions about the material environment’s affordance – the conditions of possibility that material environments offer for individual, collective and organizational activity’.Footnote 126 The concept of affordances is key here. Originating in design studies and environmental psychology, it has been developed in social theory to highlight how human thought and action are both enabled and conditioned by the properties of material and socio-technical settings.Footnote 127 This has inspired legal scholars to stress how the exercise of (fundamental) rights, too, hinges on particular material configurations.Footnote 128
Thinking infrastructurally, in this sense, allows us to critically explore how new technological tools, practices and systems impact (or erode) the exercise of human rights. What is the salience of privacy and purpose limitation principles in relation to Europol’s curation and analysis of large and complex datasets or its infrastructures of integrated data management (as signalled in Section 13.2)? How do the proactive algorithmic content moderation practices enabled by the hash-sharing database influence prospects of legal review and redress (as Section 13.3 analysed)?Footnote 129 How can we make anti-discrimination claims against decisions that are not based on fixed normative criteria but on real-time patterns and relational associations detected in data (as I pointed to in Section 13.4)? The infrastructures of governance described in this chapter, in other words, risk undermining the material conditions for human rights protection – disabling the affordances on which the exercise of rights has always implicitly hinged. The exercise of thinking – and acting – infrastructurally, then, aims to direct our normative attention and interventions to practices of material and socio-technical design aimed at (re)opening possibilities for individual or collective legal and political action and imagination.Footnote 130 This practice of critical infrastructural design opens promising pathways for the field of international institutional law beyond its orientation towards institutional design or its preoccupation with public law analogies.