Reimagining Policing Position Paper 2

Working Paper Series: Policing in the Digital Age

Paper 2: Renewing the foundations of consent-based policing



Paper 2 of 4

Peelian Principles Renewed:

Legitimacy in Policing, Technological Transformation, and the Renewal of Public Consent

A Working Group on Policing and Digital Transformation

Tom Kirchmaier and Mick O’Connell

16 March 2026

ABSTRACT

We re-examine consent-based policing principles — ideas prominently articulated in the Anglo policing tradition by Sir Robert Peel: consent, prevention, minimal force, and political independence — in the context of digital transformation and artificial intelligence. Emerging from a workshop convened at the Centre for Economic Performance, London School of Economics, in February 2026, we argue that the normative architecture influenced by Peelian policing principles retains enduring validity but requires active reinterpretation to remain meaningful in the digital age across jurisdictions. As algorithmic systems reshape operational policing at a pace that outstrips public understanding and institutional governance, the conditions for consent-based policing are under structural pressure. Drawing on deliberations among senior police leaders, academics, and technologists, we examine how this pressure manifests in concrete institutional challenges: the governance of AI-enabled decision-making, the management of police data architecture, the alignment of career structures with digital specialism, the relationship between organisational leadership and operational expertise, and the forging of partnerships across organisational boundaries — with other public agencies, international partners, and private sector actors — around a common operational picture. We contend that renewed Peelian commitment demands more than rhetorical reaffirmation: it requires operationalising ethics and human rights by design, embedding accountability, transparency, and restraint into the conception, procurement, deployment, and evaluation of technological capabilities. Technological innovation must serve legitimacy, not displace it.
We establish the normative foundations for a broader initiative examining institutional forms, organisational structures, and technological systems required for legitimate and effective policing in an increasingly digital society.

Keywords: Peelian principles; policing by consent; algorithmic governance; digital transformation; police legitimacy; ethics and human rights by design; artificial intelligence; organisational design; data governance; accountability.

1. Introduction: The enduring challenge of legitimate policing

In 1829, Sir Robert Peel presided over the establishment of the Metropolitan Police Service in London, and articulated a set of principles that have since become one of the most prominent reference points within Anglo-derived policing traditions and have influenced wider discussions on the relationship between policing institutions and the public they serve. These principles rest on a single animating idea: that policing derives its authority not from coercive power alone, but from the active consent of the governed. The police are, in Peel's formulation, citizens in uniform, and their effectiveness is inseparable from their legitimacy.

That framework was forged in a specific historical conjuncture. The early nineteenth century in the United Kingdom was characterised by localised and largely pre-industrial communities, face-to-face interactions between constables and citizens, and a relatively transparent and spatially bounded information environment. The mechanisms by which public trust was built and maintained — visible patrol, community familiarity, personal accountability — were suited to this context.

The contemporary policing environment presents a categorically different set of conditions. Digital transformation has altered not only the landscape of crime — extending offending across jurisdictions, enabling new forms of harm at scale, and creating novel categories of victimisation — but also the operational environment of policing itself. Artificial intelligence and machine learning are starting to be embedded in investigative processes, resource allocation, and risk assessment. Generative AI, and especially Agentic AI, creates new capacities for both harmful activity and law enforcement response. Mass data collection, surveillance infrastructure, and inter-agency data sharing have become central to operational effectiveness. Biometric technologies, predictive analytics, and automated decision-support systems are reshaping the relationship between police institutions and the individuals they monitor, protect, and in some circumstances, suspect.

Against this backdrop, the question animating our inquiry — and the broader research initiative of which it forms a part — is both urgent and foundational: what does it mean to police by consent in the digital age, and what institutional, organisational, and normative conditions are necessary to realise it?

This report is the second in a series of four emerging from a workshop convened at the Centre for Economic Performance (CEP) at the London School of Economics in February 2026. We brought together the perspectives of senior police leaders, academics, and technologists who engaged in sustained deliberation about the pressures facing policing in an era of technological and geopolitical change, including globalisation. Those deliberations were marked by a consistent tension: between the compelling operational case for deploying new technological capabilities and the equally compelling case for ensuring that such deployments remain anchored in public understanding, accountability, ethics, human rights, and consent. We argue that this tension can be resolved — but only through deliberate institutional effort of a kind that goes considerably beyond current practice in most policing jurisdictions.

We offer this paper not as a definitive settlement but as a contribution to an ongoing and necessary dialogue; we are unanimous in our belief that this dialogue must engage the public as genuine partners if it is to achieve its legitimate purpose. The digital transformation of policing is too consequential to be managed as a purely technical matter by institutions acting without sustained public engagement.

2. The Peelian Inheritance: Consent-based policing principles and their historical context

2.1 The principles stated

Peel's consent-based policing principles articulate a coherent philosophy of policing organised around several interlocking commitments that hold value within and beyond Anglo jurisdictions. The police exist to prevent crime and disorder, not merely to respond to it. Their authority depends upon public approval, and their effectiveness is measured by the absence of crime rather than the evidence of police action. Force is a last resort, to be used only to the extent necessary and proportionate to the situation. The police must be politically independent — servants of the law rather than instruments of any particular faction, government, or interest group. And fundamentally: the police are the public, and the public are the police.

These commitments are not merely procedural. They embed a substantive account of the relationship between state authority and individual freedom — one that places the burden of justification on those who exercise coercive power and locates legitimacy in ongoing consent rather than formal grant. In this sense, they constitute a particular model of public order, most clearly articulated in policing traditions originating in the United Kingdom and influential beyond, in which police authority depends on the active, informed acquiescence of the governed.

These consent-based principles were developed not as abstract philosophy but as practical guidance for a newly constituted professional police force confronting deep public scepticism. Before the Metropolitan Police existed in the United Kingdom, wealthy citizens relied on private forces for protection while the poor had none. Peel's principles were partly a reassurance — a promise that this new institution would operate in the public interest, with the public's consent, and under the public's scrutiny. That original purpose remains directly relevant today, when new technological capabilities again raise questions about whose interests are served by policing power and on what terms it is exercised.

While the Peelian model is frequently invoked as the philosophical cornerstone of consent-based policing—anchored in the principles of legitimacy through consent, minimal coercion, and service to the public—we recognise that it represents only one normative framework within a broader constellation of international policing traditions. Comparative analyses and discussions reveal that many jurisdictions have developed alternative foundations for legitimacy, often grounded in state authority, communitarian norms, or postcolonial adaptations of imported institutional forms. Each model, however, now faces convergent pressures arising from the rapid modernisation of policing practice, particularly through the integration of artificial intelligence, data analytics, and surveillance technologies. These innovations challenge the ethical and epistemological basis upon which policing legitimacy has historically rested: the Peelian emphasis on moral consent is no less destabilised than regimes premised on state sovereignty or social cohesion. Thus, the contemporary diffusion of technological capability exposes the shared vulnerability of divergent policing paradigms to questions of accountability, transparency, and the distribution of power within technologically mediated public spaces.

2.2 The question of contemporary relevance

A recurring question in workshop deliberations was whether a consent-based model of policing, as best exemplified by the Peelian approach just outlined, is fit for purpose in a contemporary context or whether digital transformation has rendered it obsolete. The consensus view — arrived at through vigorous debate — was that Peelian principles remain not merely relevant but foundationally necessary. What has changed is not their validity but the conditions under which they must be realised, and those changed conditions require considerably more institutional effort to honour them than was needed in 1829.

We note that Peel's principles were created before the existence of cars, planes, or telephones, and that each successive wave of technological change has required reinterpretation without invalidating the underlying framework. The core principle of treating the public fairly and building confidence has not changed; what has changed is the technical means by which police action affects people's lives, the scale at which it does so, and the opacity of the mechanisms through which consequential decisions are made.

One particularly important observation emerged from comparative perspectives within the workshop: the principle of policing by consent is not universally shared. In some policing traditions, coercive compliance rather than public consent provides the operating model. That contrast illuminates what is distinctive and worth preserving about the Peelian tradition — and why we are clear that the digital transformation of policing, if not actively governed by Peelian values, risks producing outcomes that are technically efficient but lacking in public legitimacy. We also note, however, that whilst principles such as transparency, accountability and public engagement may be considered a hindrance to technical efficiency, this is not necessarily the case; including diverse views from the public, for instance, improves policing.

3. Digital transformation and the sources of legitimacy in policing

3.1 The changing crime landscape

Digital transformation has profoundly reshaped the crime landscape that policing institutions are required to address. Crimes formerly geographically bounded — fraud, harassment, theft of intellectual property, child exploitation, and organised criminality — are now routinely conducted across jurisdictions, exploiting the borderless, frictionless character of digital connectivity. The volume and complexity of digital evidence in both digital-native and digitally facilitated offences has grown exponentially, placing significant pressure on the capacity of existing legal and institutional frameworks.

Generative AI has introduced challenges of a qualitatively novel kind. Synthetic media, automated social engineering, and AI-enabled fraud present new categories of harm and new challenges for detection, attribution, and prosecution. The same technologies that enable beneficial innovation — large language models, image analysis, voice cloning — also lower the barriers to sophisticated criminality in ways that regulatory and investigative frameworks were not designed to address.

Importantly, police forces are already at the limits of what their own human and organisational capacity can process. One force's digital evidence output was described as running to terabytes daily — a volume that makes comprehensive human review structurally impossible. As such, threats of this nature cannot be addressed by any single institution acting alone; they demand a collective response across public agencies, international law enforcement, and private sector actors. The implication is not that technology should be uncritically adopted, but that the choice is no longer whether to use AI-enabled tools; it is how to do so, across these various actors, in ways consistent with the principles of legitimacy, accountability and public trust. In this sense, the threat to the consent-based model of policing is not technological transformation itself but a failure to respond to it — a failure of leadership and of government that would ultimately be a failure to serve the public. Without AI and data science, policing cannot process the vast volume of available data fairly; it will be forced to concentrate resources on the most immediate concerns, leaving large numbers of cases uninvestigated, with consequences for the transparency of its institutional processes.

3.2 The algorithmic turn and its governance challenges

Beyond the crime landscape, digital transformation is reshaping internal policing operations. Algorithmic decision-support tools are increasingly integrated into risk assessment, resource allocation, and investigative prioritisation. Facial recognition and other biometric technologies are deployed in public spaces. Large-scale data analysis enables the identification of individuals as persons of interest on the basis of pattern-matching across aggregated datasets, with significant human rights implications that law and institutional practice have not yet adequately resolved.

The common feature of these developments is the introduction of opacity into processes that consent-based principles require to be justifiable and accountable. Where an officer exercising discretion can be asked to explain the basis for their decision, an algorithmic system's logic may be inaccessible not only to the individual affected but to the officers who rely upon it and the managers who deploy it. The risk is not hypothetical. AI systems can achieve high apparent accuracy for entirely the wrong reasons — pattern-matching on incidental features of the training data rather than on the causal factors the system is supposed to identify. For example, recidivism models are routinely used to assess the risk of a convict reoffending, but they draw on information both relevant — such as the number of prior convictions — and irrelevant — such as whether friends and family have criminal records, information that would not be admissible as the basis for a judgment in court. The error may be undetectable without sustained scrutiny by domain experts who think carefully about what the system is doing, rather than accepting its outputs at face value. The lesson is that AI systems require constant testing and validation, that their outputs can be systematically wrong in ways that are difficult to detect, and that human expertise is indispensable to responsible deployment — not as a token safeguard, but as a continuous discipline of critical engagement.
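The failure mode described above can be made concrete with a small synthetic sketch (illustrative only: the data, the `family_record` feature, and the thresholds are invented for the example). A "model" that has latched onto an incidental feature correlated with the outcome in its training data looks highly accurate there, yet collapses to near chance once that correlation is absent in deployment:

```python
import random

random.seed(0)

def make_data(n, leak_corr):
    """Synthetic records: reoffending truly depends on prior convictions;
    'family_record' is an incidental feature whose correlation with the
    outcome is controlled by leak_corr (a hypothetical artefact of how
    the training data were collected)."""
    data = []
    for _ in range(n):
        priors = random.randint(0, 10)
        reoffend = 1 if priors >= 5 else 0       # the true causal rule
        if random.random() < leak_corr:
            family = reoffend                    # feature mirrors the label
        else:
            family = random.randint(0, 1)        # unrelated noise
        data.append((priors, family, reoffend))
    return data

def accuracy(model, data):
    return sum(model(p, f) == y for p, f, y in data) / len(data)

# A 'model' that latched onto the incidental feature during training.
spurious = lambda priors, family: family

train = make_data(5000, leak_corr=0.9)   # correlation present in training
deploy = make_data(5000, leak_corr=0.0)  # correlation absent in deployment

print(f"apparent accuracy: {accuracy(spurious, train):.2f}")   # looks high
print(f"deployed accuracy: {accuracy(spurious, deploy):.2f}")  # near chance
```

The point of the sketch is that nothing in the apparent accuracy figure reveals the problem; only scrutiny of *which* features drive the predictions — the continuous discipline of critical engagement described above — would expose it.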

This has direct implications for the governance of police use of AI. The workshop identified a particular danger in the 'human in the loop' model when applied to high-volume repetitive tasks. Where officers are asked to review a large number of AI-generated decisions — whether on firearms licensing, intelligence assessments, or forensic outputs — the psychological dynamics of repetitive review mean that genuine scrutiny rapidly gives way to passive endorsement. The implication is not that human oversight should be abandoned, but that training and governance design must be realistic about the conditions under which meaningful oversight is actually achievable.

3.3 The public trust gap

Perhaps the most significant structural challenge identified in workshop deliberations was the gap between the pace of technological change and the development of public understanding, institutional governance, and oversight — a gap that raises questions of trust. Technologies that are operationally deployed — with real consequences for citizens' lives and liberties — are routinely introduced faster than the public deliberation, ethical scrutiny, and regulatory development that legitimacy requires.

We have seen, in more than one jurisdiction, the consequences of deploying technically defensible capabilities without adequate public communication. Where citizens are not told clearly how a system works, what data it uses, or what safeguards govern it, the resulting backlash can damage institutional trust in ways that the operational benefits of the technology do not compensate for. The lesson is pointed: public trust takes generations to build and moments to destroy. The failure in such cases is not technical but communicative and institutional — a failure of the transparency and engagement required to sustain public legitimacy.

This gap operates within a broader context of societal unease about the pace and direction of technological development. This unease is not irrational — it reflects genuine uncertainty about who controls powerful technologies, in whose interests they operate, how to operate them lawfully, and what recourse exists when things go wrong. Policing institutions that treat public concern as a communications problem to be managed rather than a principle to be upheld are likely to find that trust erodes in proportion to their apparent evasiveness. Transparency is not merely good practice — it is a precondition for consent.

4. Renewing consent-based policing principles: a framework for the digital age

4.1 Policing by consent in an algorithmic environment

The principle of policing by consent holds that the authority of police institutions derives from and is sustained by public approval. Workshop deliberations clarified an important distinction: consent is not merely passive acquiescence or the absence of active resistance. It is an active, informed, and ongoing disposition of the public toward the institutions that exercise coercive power in their name. Policing by consent does not mean that everyone approves of every police action; it means that the overall framework of police power is exercised with the general approval of the governed — an approval that must be continuously earned and cannot be assumed.

Renewing the consent principle in the digital age requires policing institutions to commit to a level of transparency about technological capabilities and governance arrangements that exceeds current norms in most jurisdictions. This means proactive public engagement about the types of AI and data-driven tools deployed, the purposes for which they are used, the oversight arrangements governing their use, and the mechanisms through which citizens can seek redress if those tools affect them adversely or breach human rights law. It means accepting that public understanding is a prerequisite for technically effective capabilities — and treating public trust not as an obstacle to be managed but as a requirement to be respected.

However, consent is not monolithic. Different communities experience policing differently, and the legitimacy of police institutions varies across social groups in ways that reflect histories of discriminatory enforcement and unequal treatment. This is why particular regard must be given to vulnerable populations and minorities when new technologies are deployed. Digital technologies do not emerge onto a blank slate: they are deployed into social contexts already shaped by inequalities of power and trust. The workshop documented instructive comparative evidence of this dynamic. In one jurisdiction, the adoption of a computerised domestic violence risk assessment tool was driven precisely by the need to overcome police officers' prejudicial under-enforcement against certain populations — the technology was a mechanism for ensuring consistent application of obligations that human discretion had been systematically failing to honour. In another context, allocation of officers by cultural identity and gender to handle certain cases reflected a recognition that public acceptance of policing depends on matching institutional capacity to community expectations in nuanced ways.

Both examples illustrate that technology can serve rather than undermine policing values — but only if its deployment is guided by a genuine commitment to fairness and is subject to ongoing scrutiny. The same domestic violence example also illustrated the reverse: once the algorithmic tool was in place, some officers found ways to manipulate its inputs to reflect their prejudices, systematically inflating risk scores for particular migrant groups. Technological governance cannot be a one-time design exercise; it must be a continuous discipline of monitoring, audit, and accountability.

4.2 Prevention as the primary objective

Peel's insistence on prevention as the primary goal of policing — that the police's ability to prevent crime is the truest measure of its efficiency — retains its normative priority in the digital age and in some respects acquires new urgency. We argue strongly that the preventive principle has become underweighted in contemporary policing culture and design, absorbed into a reactive and detection-focused operational orientation that does not reflect Peel's original insight.

Digital capabilities offer significant new opportunities for preventive policing, but realising them requires a shift in the orientation of institutional investment. One particularly striking example of genuine prevention emerged from the workshop: a partnership between a UK policing unit and the social media messaging company Snapchat, in which intelligence about criminal exploitation of platform vulnerabilities was shared with the company in time to patch those vulnerabilities before they could be further abused. This model — working with technology platforms to prevent harm at source rather than responding to it after the fact — represents precisely the kind of preventive approach that Peel envisioned, adapted to the conditions of the digital age. The ambition should be to extend such models — in which prevention requires operational collaboration across a network of public and private partners, building shared situational awareness sufficient to act before harm occurs. The common operational picture is as much a prevention tool as a response tool.

However, the pursuit of preventive capability through digital means creates risks that are themselves normatively significant. Predictive tools, if not carefully designed and governed, can embed and amplify existing biases: because they are trained on patterns in past data, they have an inherent weakness in predicting the future, and may direct police attention toward already over-policed communities on the basis of patterns that reflect historical discrimination rather than prospective risk. Prevention pursued through means that erode the legitimacy of police institutions is not, in any meaningful sense, the prevention Peel envisioned.

4.3 Minimal force and technological coercion

Peel's insistence on the minimal use of force must, in the digital age, be extended beyond the physical domain to encompass the exercise of coercive power through technological means. Surveillance, data collection, and algorithmic monitoring do not involve physical force, but they constitute forms of power over individuals and communities that require the same principled restraint. The minimal force principle, reinterpreted for the digital age, implies commitments to data minimisation, necessity and proportionality, and the avoidance of chilling effects — recognising that the mere existence of surveillance infrastructure, for example, can alter individual behaviour in ways that are coercive — affecting the human right to freedom of expression — even in the absence of formal enforcement action.

We are particularly concerned about the cumulative effect of surveillance technology on public cooperation with police. The relationship between police effectiveness and public willingness to report crime and provide information is fundamental to consent-based policing — if victims do not call, most problems go undetected; if communities disengage, the information that enables prevention is lost; if communities resist or reject police tactics, policing by consent breaks down. There is a tipping point beyond which the visible presence of monitoring technology reduces rather than increases public cooperation. The implication is that the deployment of surveillance capability must be governed not only by its direct effects on targeted individuals but by its systemic effects on the police-public relationship across communities.

There are also important concerns about centralised data accumulation — including commercial platforms that aggregate policing data at national or international scale. The risk identified was not merely one of data security, but of creating informational architectures that are inherently difficult to govern responsibly, where the power conferred by data concentration creates structural vulnerabilities that are difficult to reverse once established.

It was also considered that, in alignment with the principle of policing by consent, the contemporary analogue to the “minimal use of force” may be understood as the minimal and proportionate use of technology. Responsible innovation in policing therefore demands both ethical restraint and professional foresight. The experience of one agency in misusing a technology can reverberate far beyond its own jurisdiction, undermining public confidence and impeding the broader trajectory of innovation across the policing community.

A case concerning conducted energy weapons (tasers) illustrated this dynamic vividly: a single tragic incident in the United Kingdom in 2009 delayed national rollout for nearly a decade, reshaping public and institutional attitudes toward the technology. In an increasingly networked world, technological missteps are amplified and globalised, with the potential to erode legitimacy and professional trust across borders.

Consequently, poor adoption practices not only risk the integrity of individual investigations or organisational reputations but may also generate adverse jurisprudence and stifle technological progress for the profession as a whole. The recent withdrawal of generative AI tools, such as CoPilot, by several forces in the United Kingdom following incidents of misuse underscores the delicacy of this balance and the enduring importance of disciplined, ethical implementation.

4.4 Political independence and algorithmic accountability

The principle of political independence of the police — that police institutions are servants of the law rather than instruments of political power (noting that not all territories are afforded this privilege) — acquires new dimensions in the context of AI-enabled operations. Algorithmic systems embed normative choices about which outcomes to optimise, which populations to prioritise, and which values to trade off against others. The design, procurement, and deployment of such systems are exercises of substantive political judgment, even when presented as technical decisions.

Maintaining the political independence of policing in the algorithmic era requires extending accountability to these embedded choices. This means ensuring that the normative assumptions built into AI systems used for policing purposes are subject to public scrutiny and independent oversight — not merely technical audit by the institutions that deploy them. It also requires confronting the significant accountability challenges that arise from the commercial dynamics of police technology procurement. Poor contract negotiations have resulted in police departments effectively ceding ownership of and access to their own data — a situation in which the fundamental informational assets of a legitimate police institution pass into the control of private commercial actors. The legal sophistication required to structure technology partnerships in ways that preserve accountability is a specialised skill that many forces lack and that is rarely prioritised in procurement processes.

The political independence of police institutions faces a structural challenge from the dynamics of politics itself. If governments fail to provide police with the technological capabilities they need under proper oversight and with adequate safeguards, the political space created by that failure will tend to be occupied by actors who will provide those capabilities without comparable oversight and accountability constraints. The argument for engaging seriously with technology governance is therefore not merely normative but strategic: the alternative to consent-based digital policing is not no digital policing, but policing that operates outside the frameworks of accountability and the protection of human rights.

5. Institutional challenges: Where consent-based policing values meet organisational reality

5.1 Organisational structure and leadership

One of the most substantive debates in workshop deliberations concerned the relationship between operational experience and organisational leadership in policing institutions. The question — whether senior police leadership requires frontline policing experience, or whether the demands of managing large complex organisations require a different and complementary set of capabilities — proved both practically and normatively important for restructuring institutions to be more adaptive in the digital age.

The case for requiring leaders to have operational grounding rested on the particular character of high-risk organisations. In institutions where officers operate under conditions of genuine physical and moral risk, the bond created by shared operational experience is constitutive of the authority that leaders need to command the trust and cooperation of those they lead. As one participant observed, drawing on examples from military leadership, a leader who has never shared the risk of those they command faces a fundamental legitimacy deficit that cannot be overcome by administrative competence alone. This is not an abstract point: operational understanding is not merely symbolic — it enables leaders to exercise informed judgment about the operational implications of decisions, to maintain credibility with front-line officers, and to ensure that institutional design reflects operational realities.

At the same time, we are equally clear that large policing institutions — with budgets comparable to significant corporations, complex human resources challenges, and sophisticated technology procurement requirements — demand management capabilities that are distinct from operational leadership, and that the current system of promotion through operational performance does not reliably develop either the necessary talent or the necessary organisational capability. A recurring observation was that officers frequently rise to senior positions with strong operational credentials and then find themselves responsible for functions — estates management, budget oversight, major procurement decisions — for which their operational career has given them little preparation. The result is the systematic misallocation of both operational and managerial talent.

The consensus view that emerged from this debate was not a binary choice between operational and managerial leadership, but a more nuanced model in which the two functions are distinguished and resourced appropriately. The language proposed in deliberations — the separation of operational leadership from organisational management — reflects this nuance: the head of policing services should have operational grounding and the legitimacy that comes with it, while a chief executive function managing the organisational construct brings broader managerial capability from within or beyond the policing sector. Some current models — the arrangement in certain military structures and, more generally, the distinction between commissioned officers and enlisted personnel, or elements of the UK Metropolitan Police's own command architecture — already gesture toward this hybrid. The argument is for making it systematic and deliberate rather than ad hoc.

The implications for consent-based policing principles are significant. An institution that develops operational leaders but not organisational managers will struggle to implement the complex governance arrangements that ethical technology deployment requires. The skills needed to design and oversee responsible AI procurement, to manage data partnerships with appropriate legal sophistication, and to build accountability frameworks that are operationally meaningful rather than merely formal — these are organisational management skills as much as operational leadership skills. Upholding the values that Peel envisaged in the context of policing in a digital age requires both.

5.2 Career pathways, specialisation, and incentive design

A related challenge identified in deliberations concerns the structural misalignment between the specialisations that digital policing requires and the career pathways and incentive structures that current policing systems provide. We see a consistent pattern across jurisdictions: officers who develop deep expertise in technically demanding areas — cybercrime investigation, digital forensics, data analysis, online child exploitation — face a career structure that rewards them for abandoning that specialism in order to advance.

The consequence is severe and cumulative. Private sector organisations — particularly in technology, finance, and professional services — actively recruit specialist officers who have been trained at public expense and whose expertise is directly applicable to commercial needs. The incentive for an officer with deep digital forensics capability to remain in policing when advancement requires moving back into uniformed general duties, at lower effective remuneration than the private sector offers for the same specialism, is limited. The talent loss that results is not merely an efficiency problem; it is a structural barrier to building the institutional capacity that effective digital policing requires.

We propose a fundamental rethinking of the incentive architecture, and a deeper transformation besides: one that extends beyond reimagining incentives to reconceptualising the very structure of policing careers. The traditional model of linear service spanning 25 to 35 years, with progression tied to rank and tenure, no longer reflects the realities of a rapidly evolving, specialist-dependent profession. Future-ready workforce models might instead be designed around differentiated career arcs of five, ten, or twenty years, tailored to particular domains of practice such as digital investigation, intelligence analysis, or community safeguarding.

Such an approach would allow policing institutions to recruit and train individuals for targeted periods of contribution, combining a larger cohort of shorter-term specialists with a smaller cadre of highly skilled, longer-term professionals who provide institutional continuity and leadership. Moreover, reconceptualising professional entry and exit points—enabling technologists, analysts, or domain experts to be deputised or seconded for defined periods—could enhance the permeability of policing as a profession and help address capability gaps. This would require a cultural and structural shift away from equating legitimacy solely with the number of warranted officers employed. In the digital era, expertise rather than warrant may increasingly define the core of policing legitimacy and capacity.

This challenge extends beyond sworn officers to police staff — civilian employees who may hold highly specialised and operationally critical skills, including in technology, data analysis, intelligence, and forensics, but who face extremely limited progression opportunities because the promotion architecture was designed around a uniformed career structure. Only a small fraction of a modern policing organisation actually requires the sworn powers that define the conventional police officer role. The remainder could, in principle, be highly specialised professionals managed under career structures appropriate to their skills. The failure to design institutions in this way represents a significant waste of human capital and a structural impediment to digital transformation.

The incentive design question also extends to the decision-making behaviour of middle managers and institutional leaders. We argue that incentives must be treated as a core element of organisational design — not as an afterthought but as a primary driver of the behaviours that institutions produce. Police leaders make procurement decisions, partnership decisions, and technology governance decisions on the basis of the incentives they face; designing those incentives thoughtfully is as important as designing the formal governance frameworks within which those decisions are taken.

5.3 Training, empowerment, and decision authority

Workshop deliberations identified a further structural dysfunction in the relationship between specialist training and operational decision-making authority. Officers who receive intensive specialist training frequently find themselves unable to act on their expertise without referral upward to hierarchical superiors who may lack the technical knowledge to evaluate the decisions they are asked to approve. The effect is both to waste the specialist capability that has been developed at institutional expense and to locate decision authority in individuals who are institutionally positioned to exercise it but epistemically poorly placed to do so.

This problem is particularly acute in rapidly evolving technical domains where the knowledge required to make good decisions is unevenly distributed across the organisational hierarchy. A senior officer approving a digital forensics decision, or a chief constable authorising an AI deployment, may have extensive operational experience and institutional authority but limited technical understanding of the specific decision they are being asked to make. The result is that formal accountability and genuine expertise are decoupled — a situation that creates both operational inefficiency and governance risk.

The implication drawn in deliberations was for a more distributed model of decision authority that empowers specialists to act within defined parameters while maintaining appropriate oversight and accountability for decisions that exceed those parameters. This is not a proposal for the removal of hierarchical oversight, but for its intelligent design — ensuring that oversight mechanisms are matched to the expertise required for the decisions being overseen, rather than simply allocated by rank.

5.4 Mission clarity and the prevention imperative

Deliberations also surfaced concerns about the clarity of the core policing mission in an era of institutional complexity and digital transformation. Peel’s emphasis on prevention as the primary objective has become diluted in many institutional contexts, absorbed into a predominantly reactive and detection-focused operational culture. The risk is not that detection and investigation are unimportant — they are fundamental — but that prevention has become the implicit casualty of institutional pressures that reward reactive response and measure success primarily through crime clearance rates and response times.

The digital transformation of policing creates both new opportunities and new risks for the prevention mission. AI-enabled analytics can support genuinely preventive intelligence; partnership models with technology platforms can address the conditions that enable harm before harm occurs. But digital tools can also accelerate reactive response in ways that further embed a detection-focused culture, if the institutional incentives and mission framing do not actively prioritise prevention. Meeting this challenge, in the spirit of Peel’s principles, requires making the prevention imperative explicit and operational — embedding it in how institutions measure performance, how they allocate resources, and how they design partnerships.

5.5 Collective model of policing

Effective crime prevention in the twenty-first century requires a collective model of policing grounded in genuine partnership and shared purpose. Policing institutions cannot prevent crime in isolation, for the social conditions that contribute to criminality—poverty, family instability, substance misuse, and mental ill-health—lie far beyond the scope of what any single agency can address.

A credible prevention mission therefore depends on embedded collaboration between policing, social services, healthcare, education, and wider civil society. This collective orientation, we argue, should not be treated as ancillary, but as a design principle at the heart of institutional and strategic reform. Moreover, collective policing extends to the “harder” edges of law enforcement cooperation.

Police forces increasingly operate in complex ecosystems with other public authorities, international partners, and private sector actors, including telecommunications providers, financial institutions, and technology platforms.

Confronting threats that transcend borders and sectors demands shared situational awareness and coordinated operational capacity. The objective must be to move beyond ad hoc information exchanges towards the development of a common operational picture—an integrated understanding of risk and response across institutional boundaries—supported by the governance, organisational, and technical frameworks explored in Section 7.

6. Ethics and Human Rights by design: From assertion to operationalisation

6.1 The limits of ethics as assurance

A consistent finding across workshop deliberations was that ethical commitments expressed at the level of principle fail to translate into practice unless they are embedded in the operational and institutional structures through which technology is conceived, procured, deployed, and evaluated. The assertion of ethical values — in mission statements, codes of conduct, public communications — is, on its own, a necessary but insufficient condition for ethical practice. When ethics sits downstream of operational decision-making — as a form of post-hoc assurance or reputational management — it is unable to shape the choices that determine whether technologies are used in ways genuinely consistent with legitimacy, accountability, and public trust.

This observation is particularly salient for policing institutions, which face acute pressures toward operational effectiveness, and in which the deployment of new technological capabilities can generate significant institutional momentum that is difficult to reverse once initiated. The history of technology adoption in policing suggests that ethical scrutiny applied after the fact — after systems have been procured, integrated into operational workflows, and made essential to institutional functioning — is structurally too late. By that point, the path dependencies created by sunk costs, institutional habit, and technical integration make meaningful reform extremely difficult. The same structural concerns apply to privacy and human rights obligations, which in technology adoption demand consideration equal to that given to ethics.

We also raise a related concern about the dynamics of ethics as post-hoc review. In contexts where every operational failure generates an independent review and a cascade of recommendations, the resulting blizzard of guidance can paradoxically reduce rather than enhance accountability — making it impossible to identify priorities, creating compliance burdens that overwhelm operational capacity, and producing a culture of defensive documentation rather than genuine ethical reflection. The goal should be accountability mechanisms that are proportionate, actionable, and embedded in operational practice — not ones that generate extensive paper trails without changing institutional behaviour.

6.2 Ethics by design: Core requirements

The alternative model — ethics by design — treats ethical principles and human rights requirements not as constraints applied to pre-formed technical decisions but as generative inputs to the processes through which capabilities are conceived, developed, and governed. In this model, the question is not 'how do we ensure this system is ethical?' but 'what would it mean for this capability to be ethical, lawful, and consistent with human rights, and how do we build institutions and processes that realise that?'

At a high level, AI in policing can be seen to affect, in particular, the legal rights to:

  • privacy and data protection (UDHR Art. 12 and ICCPR Art. 17);
  • equality and non-discrimination (UDHR Art. 1, 2 and 7; ICCPR Art. 2, 3 and 26);
  • liberty and freedom from arbitrary arrest “without a clear and reasonable motive, based on the law and established by evidence” (UDHR Art. 3, 9 and 13; ICCPR Art. 9 and 12);
  • fair trial and due process (UDHR Art. 10; ICCPR Art. 14); and
  • the presumption of innocence (UDHR Art. 11; ICCPR Art. 14).

Operationalising this approach requires several interrelated institutional commitments. Ethical analysis must be integrated into the earliest stages of capability development — at the point of need identification, requirements definition, and procurement design, rather than at the point of deployment review. This requires policing institutions to develop internal capacity for ethical analysis that is technically literate, operationally informed, and institutionally empowered to shape decisions, not merely to comment on them.

Governance frameworks must provide for ongoing monitoring and evaluation of deployed technologies against ethical and legal criteria, with clear mechanisms for modification or withdrawal where those criteria are not met. This is not a one-time compliance check but a continuous discipline of critical reflection. It requires that the data necessary for ethical audit — including data on differential impacts across communities, especially vulnerable demographics, error rates and their distribution, and the use of override capacities by human operators — is systematically collected, independently accessible, and subject to public reporting.

Accountability structures must be deliberately designed to ensure that responsibility for ethical outcomes is both clearly allocated and meaningfully exercised. In the context of AI-enabled operations and digitally mediated collaborations, the challenge of diffuse accountability becomes particularly acute: responsibility for harmful outcomes is often distributed across designers, procurers, deployers, and end-users, creating ambiguity that can undermine both legitimacy and redress.

Effective governance therefore depends on tracing these chains of responsibility with precision and ensuring that the institutional actors possessing the greatest capacity to shape ethical outcomes—often those involved in procurement and system design—bear proportionate accountability for them.

Procurement processes thus occupy a pivotal position in the ethical architecture of digital policing. They determine not only which technologies enter operational use, but also the standards of transparency, fairness, and human oversight that accompany them. To operationalise “ethics by design” meaningfully, policing institutions must move beyond treating ethical review as a downstream compliance exercise and instead embed ethical reasoning, stakeholder scrutiny, and societal impact assessment as generative inputs at the earliest stages of capability conception and acquisition.

6.3 Transparency, documentation, and the body camera lesson

Workshop deliberations surfaced a powerful and somewhat counterintuitive insight about the relationship between transparency and accountability: the resistance to making institutional processes fully documentable and visible is often a symptom of the ethical problems that transparency is designed to reveal. The body camera provides an instructive illustration. Initially framed as a tool for protecting officers against false accusations, body camera technology rapidly demonstrated its value as a mechanism for protecting the public from misconduct by officers and protecting good officers from the misconduct of bad colleagues. The sensors and automatic activation mechanisms subsequently developed to prevent cameras being deactivated during misconduct represent precisely the kind of iterative governance design that ethics by design requires — identifying how technical safeguards can be circumvented and designing around those circumventions.

The parallel with AI documentation is instructive. Officers who resist AI-mediated recording of their decisions are, in many cases, reacting against a mechanism that renders the exercise of discretion more legible and thus more accountable. Such resistance is understandable at an individual level—particularly where officers fear that documentation might be used punitively—but it stands in tension with the accountability demands of policing. As one participant observed, AI systems are inherently transparent in their documentation of process and decision, and it is precisely this transparency that provokes discomfort among some practitioners. The broader implication is that technical infrastructure alone cannot secure accountability or transparency. The institutional culture that sustains genuine ethical accountability must evolve in tandem with the technical architecture that enables it. Without this parallel development, technological transparency risks being met not with openness and learning, but with avoidance and resistance—thereby undermining both the promise of innovation and the legitimacy of its application in policing contexts.

The lesson extends to how institutions communicate about failure. One participant described an approach in which an officer who accidentally injured a member of the public during an arrest immediately explained what had happened, documented it transparently, ensured compensation was provided, and consequently never received a complaint. The contrast with institutional cultures in which errors are concealed or minimised — generating the mistrust that eventually produces formal complaints, investigations, and independent reviews — was stark. The same principle applies to AI systems: taking concerns seriously, demonstrating accountability, and communicating openly about failures builds the kind of trust that institutional evasiveness destroys.

6.4 Independent oversight and acceptable failure

No model of ethics by design can be self-governing. The institutional actors who procure and deploy technological capabilities have interests — in operational effectiveness, institutional reputation, and the maintenance of existing practices — that create structural pressures toward optimistic assessment of ethical risks and underinvestment in ethical safeguards. These pressures are not the product of bad faith; they are structural features of organisational decision-making that independent oversight is specifically designed to counteract.

Effective independent oversight of police technology requires genuine independence — not merely the formal independence of bodies that lack the resources, access, or mandate to conduct meaningful scrutiny. It requires technical expertise sufficient to interrogate the design and implementation of complex algorithmic systems, access to the data necessary to assess real-world impacts rather than merely intended designs, and a mandate that encompasses not only the lawfulness of technological deployments but their proportionality, fairness, and implications for legitimacy and public trust.

We must also confront the question of acceptable failure in policing institutions. Society gives officers significant powers and pays them at modest rates for exercising those powers in conditions of genuine difficulty and uncertainty. Getting things wrong — even in ways that cause harm — is an unavoidable consequence of operating in those conditions. The question of how to define acceptable failure rates, communicate about them honestly, and maintain public confidence in a context of inevitable imperfection has not been adequately resolved — and we think the silence on this is itself a governance failure. An environment in which every mistake generates a cascade of review and recommendation — however legitimate the impulse behind that response — creates conditions in which the priority becomes documentation over judgment, and the avoidance of visible failure over the pursuit of genuine improvement. The same challenge will arise with AI systems, where public expectations of error rates may be systematically miscalibrated relative to what any realistic system can deliver.

 

7. Data governance as practice

7.1 Ownership, control, and the architecture of accountability

Workshop deliberations on data governance revealed a complex constellation of tensions between operational effectiveness, accountability, and the management of digital infrastructure in a multi-agency environment. While the principle of data sovereignty—the requirement that police institutions retain ownership and control over sensitive data—emerged as foundational, its practical scope must extend beyond the boundaries of any single organisation. In an era of interconnected policing, the same commitment to sovereignty applies when multiple public authorities, international partners, or regulated private entities collaborate. Each institution must retain full control, custodianship, and accountability for its own data, even when contributing to shared analytic environments or collective intelligence frameworks.

The sandbox model, often used to facilitate controlled experimentation by commercial vendors, provides a useful starting point but remains too narrowly conceived for the complexities of contemporary policing data ecosystems. It should be reconceptualised to include inter-agency and cross-border collaboration, enabling secure data sharing between sovereign institutions while maintaining the integrity of each organisation's legal, ethical, and operational responsibilities. Such architectures of shared governance would allow data to circulate in ways that enhance collective awareness and capability without diluting accountability or compromising legitimate control.

At the same time, the principle of data sovereignty must be coupled with an understanding of data as an institutional resource rather than a private or investigative possession. Although access should continue to be governed on a need-to-know or right-to-know basis, the institutional value of data — in informing trends, developing analytical insight, and improving operational learning — far outweighs its utility to individual investigators or the discrete platforms on which it resides. A mature data governance framework therefore requires not only technical controls but also cultural and organisational mechanisms that reinforce the collective stewardship of data as a public asset held in trust.

Finally, genuine accountability in data governance depends on institutional competence in legal, contractual, and ethical matters. The recurring failures of procurement and partnership agreements, in which ownership or access to critical data has been inadvertently ceded, underscore the need to treat commercial and legal capability as a core policing function. Building these capabilities is integral to maintaining sovereignty, ensuring interoperability across partners, and safeguarding the legitimacy of collective data governance.

7.2 Interoperability, legacy systems, and the cost of inaction

A significant theme in deliberations was the scale of the challenge posed by legacy systems and the lack of common data standards across policing institutions, which forecloses the interoperability gains that integration would otherwise deliver. The interoperability challenge extends equally across organisational boundaries: between police forces, between domestic public agencies, and between international partners. One participant observed that the annual cost of maintaining legacy technology infrastructure in UK policing alone runs to approximately £2 billion — funding that is thereby unavailable for the development of new capabilities. Forces operating on systems that predate modern data management practices face not only operational inefficiencies but structural barriers to the AI-enabled interoperability that effective digital policing requires.

The fragmentation of data across siloed systems — designed for discrete operational purposes without consideration of how they might be combined or queried in an integrated way — creates analytical limitations that go beyond the obvious inconvenience of incompatible formats. As one participant observed, when switching from one tool to another, the failure to specify common data formats in procurement contracts means that valuable historical data becomes effectively inaccessible — locked in formats that no current system can read. This problem is compounded when forces lack internal expertise to identify what data they hold, where it is stored, and how it might be accessed.

The governance implications are substantial. Courts require that the provenance and continuity of evidence be demonstrable — that the chain of custody can be traced from collection to presentation, and that any processing of that evidence can be explained and justified. The introduction of AI into this chain creates new challenges for evidence integrity that have not yet been resolved by legal precedent. Few AI-based evidence cases have been tested in court, leaving policing institutions without the case law needed to understand the admissibility standards they must meet. In this context, caution about the deployment of AI in evidentiary contexts is not conservatism for its own sake but a recognition of the genuine legal uncertainties that remain to be resolved.
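The chain-of-custody requirement described above can be made concrete. The following is a minimal, illustrative sketch of a tamper-evident custody log built as a hash chain: each record binds the hash of its predecessor, so any retrospective edit is detectable on verification. The event fields and the "triage-v2" model name are hypothetical, and real evidence-management systems use audited schemas and signing infrastructure; only the chaining principle is shown.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash preceding the first record

def append_record(chain: list, event: dict) -> None:
    """Append a custody event, binding it to the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; any retrospective edit breaks the chain."""
    prev_hash = GENESIS
    for record in chain:
        payload = json.dumps({"event": record["event"], "prev": prev_hash},
                             sort_keys=True)
        if (record["prev"] != prev_hash or
                record["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = record["hash"]
    return True

# Hypothetical custody events for a seized device, from collection to court.
log: list = []
append_record(log, {"step": "collected", "item": "device-001"})
append_record(log, {"step": "ai_triage", "model": "triage-v2", "item": "device-001"})
append_record(log, {"step": "presented", "item": "device-001"})

print(verify(log))                        # True: continuity is demonstrable
log[1]["event"]["model"] = "triage-v3"    # a retrospective, undocumented edit
print(verify(log))                        # False: the alteration is detectable
```

The design choice matters for the AI point in the text: if an algorithmic processing step is itself logged as a chained record, its presence in the evidential history becomes as demonstrable, and as tamper-evident, as any physical transfer.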

7.3 Integrated data and the frontline

Against the challenges of data fragmentation, we articulate a compelling vision of what integrated data architecture could enable. Officers attending a domestic incident could, in principle, arrive with access to relevant information from social services, health records, previous contacts across agencies, international partners and other sources that would enable them to understand the situation they are entering and to respond appropriately. The contrast with the reality in which officers are dependent on colleagues to make phone calls to other agencies while they manage the immediate situation — with all the delays and gaps that this implies — was stark. Some jurisdictions have implemented integrated public sector data pools that approach this model — demonstrating that the aspiration is institutionally achievable, not merely theoretically attractive.

The vision of what an integrated data architecture could enable is technically achievable while preserving the sovereignty principles established in Section 7.1. For example, privacy-enhancing technologies (PETs) — including secure multiparty computation, homomorphic encryption, and federated learning — now enable multiple organisations to perform joint analytical tasks without transferring or exposing underlying sensitive data. These technologies allow each institution to retain full control over its information assets while contributing securely to collective analytic processes. As such, privacy-enhancing computation constitutes the technical foundation that reconciles data sovereignty with operational interoperability and can serve as the enabling infrastructure for the "common operational picture" described in Section 5.5.
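To make one of these techniques concrete, the following is a minimal sketch of additive secret sharing, a building block of secure multiparty computation. The agency names and incident counts are hypothetical, and a production deployment would use an audited MPC framework rather than this toy protocol; the point is only that an aggregate statistic can be computed while each party's raw figure stays private.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value: int, n_parties: int) -> list:
    """Split a private value into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Hypothetical: each agency holds an incident count it cannot disclose in raw form.
private_counts = {"agency_a": 120, "agency_b": 75, "agency_c": 310}
n = len(private_counts)

# Each agency splits its count into n shares and sends one share to each peer.
# Any single share is a uniformly random number and reveals nothing on its own.
all_shares = [share(v, n) for v in private_counts.values()]

# Each party sums the shares it received; no party ever sees a raw count.
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# Combining the partial sums reveals only the aggregate statistic.
joint_total = sum(partial_sums) % PRIME
print(joint_total)  # 505: the collective figure, with no raw data exchanged
```

Federated learning and homomorphic encryption follow the same governance logic at greater scale: computation travels to the data, or operates on encrypted data, so sovereignty over the underlying records is never ceded.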

We also raise important qualifications to the enthusiasm for data integration. Providing officers with automated background information before they attend a scene can create dangerous overconfidence — a disposition to enter situations with fixed expectations that may not match the reality they encounter. The value of integrated data depends on how it is presented, what weight it is given, and whether officers are trained to treat it as one input among several rather than as a definitive account of what they will find. These are questions of training, governance, and institutional culture as much as of data architecture.

8. Bringing the public with us: Engagement, restraint, and partnership

8.1 The political economy of trust

Public trust in policing institutions is not a fixed endowment that can be drawn upon indefinitely; it is a dynamic resource that must be actively maintained and is vulnerable to erosion through institutional conduct. In the context of digital transformation, the conditions for trust maintenance are particularly demanding. Citizens are asked to consent to the exercise of significant and opaque technological powers on the basis of institutional assurances whose validity they cannot independently assess.

There is a structural asymmetry in the trust dynamic that we must name directly: the public's capacity to assess institutional conduct is limited by the technical complexity of AI systems and the opacity surrounding their mode of deployment, while the potential for visible failures to generate disproportionate trust damage — as experience has repeatedly shown — is significant. This asymmetry creates an institutional incentive toward opacity that is directly at odds with the transparency that policing by consent requires. Overcoming that incentive requires institutional cultures in which transparency is understood as a strategic asset rather than a liability — a source of the legitimacy on which long-run institutional effectiveness depends.

We reflect on the broader context in which trust is being asked for. Society has institutionalised trust in complex technical systems — aviation, medicine, finance — through regulatory frameworks, professional standards, and accountability mechanisms that allow individuals to rely on those systems without needing to understand them in detail. The parallel for policing technology is instructive: citizens do not need to understand how AI systems work to trust them, but they do need to trust the institutions and oversight mechanisms that govern how those systems are used. Building that second-order trust — trust in governance, not just in technology — is the central challenge of legitimacy in digital policing.

8.2 Engagement as public practice

The workshop identified public engagement as a core institutional responsibility rather than a discretionary addition to operational policing. This means moving beyond consultative models in which policing institutions present decisions already taken and invite public comment, toward genuinely participatory approaches in which the public are involved in shaping the normative frameworks within which technological capabilities are developed and deployed.

Such engagement is demanding. It requires the development of new institutional capacities for public communication about technical matters, including the ability to explain the logic and limitations of complex algorithmic systems in terms accessible to non-specialist audiences. It requires sustained commitment over time, rather than episodic consultation at moments of political sensitivity. And it requires honesty about uncertainty — acknowledging when new technologies carry genuine unknowns about their effectiveness, their unintended consequences, and their long-term social impacts, rather than presenting technological adoption as an unambiguously positive development whose implications are already well understood.

The officers who interact daily with the public are as important to this engagement as the institutional communications of senior leadership. The trust that citizens have or lack in policing is built or eroded in individual interactions, and the attitudes of officers toward the technologies they use are communicated in those interactions whether intentionally or not. An officer who does not understand why a particular AI system is being used, or who harbours doubts about its fairness that they have not been able to raise through institutional channels, will communicate that uncertainty to the public they interact with. Making digital policing a living commitment for officers — not just a policy position articulated by senior leadership — requires investment in internal communication, training, and cultural development alongside external engagement.

8.3 Restraint as institutional virtue

Perhaps the most demanding element identified in workshop deliberations is the exercise of voluntary restraint in the face of available technological capability. The argument for restraint is not that technological capabilities are inherently dangerous or that their deployment is never justified, but that the legitimacy of consent-based policing depends on the public's confidence that those capabilities will be used proportionately, accountably, and with appropriate safeguards — and that confidence cannot be sustained if institutions adopt a posture of maximising capability deployment constrained only by formal legal permission.

Restraint, in this sense, is an institutional virtue rather than a technical specification. It requires a culture within policing institutions that treats the exercise of technological power as genuinely requiring justification — not merely legal authorisation — and that regards the development of legitimacy as a central strategic priority. We acknowledge the difficulty of this requirement in an environment where technological change is rapid, operational pressures are intense, and the cost of inaction can be measured in crimes not prevented and victims not protected. The urgency of technological adoption is real. But so is the cost of adoption that outpaces governance — as demonstrated by the recurring examples of technology deployments that generated public backlash, discriminatory outcomes, or evidence integrity failures precisely because the institutional structures necessary to govern them responsibly were not in place at the time of deployment.

9. Towards a renewed consent-based policing consensus: Design principles for the digital age

The analysis developed in preceding sections and the deliberations from which it emerged suggest a set of design principles for policing institutions seeking to realise the values associated with the tradition of Robert Peel in the digital era. These principles are not comprehensive prescriptions but essential elements — the normative foundations upon which more detailed institutional design must be built. They are offered as contributions to ongoing international dialogue rather than settled conclusions.

9.1 Transparency as a default

Policing institutions should adopt transparency as an institutional default, whilst ensuring that disclosure does not compromise operational effectiveness. This means not merely reactive disclosure in response to formal requests, but the proactive provision of meaningful information about the technological capabilities they deploy, the arrangements governing those deployments, and the outcomes they produce. Transparency must be substantive rather than merely formal: information that is provided but not accessible — whether because of technical complexity, volume, or framing — does not satisfy the need for genuine public accountability.

9.2 Consent as process, not event

The consent of the public to police technological capabilities must be understood as an ongoing process rather than a one-time event. This requires policing institutions to invest in continuous public engagement, to revisit and revalidate the basis for consent as technological capabilities evolve and as evidence about their impacts accumulates, and to be genuinely responsive to the findings of that engagement — including when those findings suggest that existing deployments need to be modified, constrained, or discontinued.

9.3 Ethics and Human Rights as architecture

Ethical principles and the requirements of human rights law must be embedded in the design of technological capabilities, governance frameworks, and operational processes — not applied as post-hoc constraints. This requires developing the internal capacity to conduct meaningful ethical and human rights analysis at each stage of the technology lifecycle, building independent oversight arrangements with genuine authority and adequate resources, and creating accountability structures that allocate responsibility for ethical outcomes clearly and meaningfully. Ethics and human rights cannot sit downstream as assurance or remediation; they must shape how capabilities are conceived, procured, deployed, and evaluated.

9.4 Proportionality as discipline

The deployment of technological capabilities must be governed by a genuine commitment to proportionality — using the minimum capability necessary to achieve legitimate objectives, rather than maximising capability deployment within the limits of legal permission. This requires clear criteria for capability deployment, mandatory review processes that take proportionality seriously as a substantive rather than formal requirement, and robust mechanisms for accountability when those criteria are not met.


9.5 Human capital aligned to digital demands

Career structures, training systems, and incentive architectures must be redesigned to develop and retain the specialist human capital that digital policing requires. Qualification- and capability-based progression, allowing specialists to advance in remuneration and professional recognition without being forced into unwanted leadership roles, is a necessary condition for building institutional capacity that matches digital operational demands. This applies equally to sworn officers and to police staff: only a fraction of them require the full powers of a constable or equivalent sworn officer, but all require the capacity to operate effectively in complex digital environments.

9.6 Partnership as purpose

The relationship between policing institutions and the public, other public agencies, international law enforcement partners, and private sector actors — such as telecommunications providers, financial institutions, and technology platforms — must be understood as one of genuine partnership rather than service delivery or information extraction. This means treating public engagement as a core institutional responsibility, investing in partnerships with technology platforms oriented toward prevention, building data governance arrangements with other public sector bodies that serve integrated frontline response, and structuring commercial technology partnerships with the legal sophistication needed to preserve accountability and data ownership.

9.7 Data governance as practice

Police institutions must retain ownership and control of sensitive data as a non-negotiable principle of good governance. This does not preclude hybrid arrangements or commercial partnerships, but it requires that the terms on which such arrangements are made preserve meaningful accountability, and that the legal and commercial competence needed to negotiate them is treated as a core institutional capability. Data governance must also reflect the cross-boundary dimension: the practice includes the conditions under which organisations can collaborate and share intelligence across institutional boundaries while each retains full sovereignty over its own data. Legacy systems, interoperability failures, and the accumulation of data in forms that cannot be used responsibly must be addressed as strategic priorities rather than tolerated as administrative inconveniences.

10. Conclusion: Technological capability in the service of policing legitimacy

The principles of consent-based policing that Robert Peel elaborated are not relics of a pre-digital era whose relevance has been overtaken by technological change, nor are they exclusive to Anglo jurisdictions. They are expressions of a normative commitment — to consent, prevention, proportionality, and accountability — that is likely more important in our global and digital age than it was in 1829. The challenge is not to preserve these principles unchanged but to realise them in conditions that their author could not have anticipated — conditions characterised by exponential data volumes, AI-enabled decision-making, pervasive surveillance infrastructure, and a pace of technological change that consistently outstrips the institutional and governance frameworks through which it should be managed.

Our deliberations suggest that realising consent-based policing values in the digital age requires more than good intentions. It requires deliberate institutional effort across multiple dimensions simultaneously: transparency as a default, ethics and human rights principles embedded in the architecture of technological systems, independent oversight with genuine authority, career structures and incentive systems aligned to digital demands, partnerships designed for prevention, and data governance that treats accountability as a strategic priority rather than a compliance burden.

Above all, it requires the institutional discipline to subordinate technological capability to legitimacy — to resist the structural pressure toward capability maximisation and to accept, as a matter of principled commitment, that the test of effective policing in the digital age is not what technology makes possible, but what consent makes legitimate. That discipline is demanding. The alternative — policing that deploys capability ahead of consent, and that treats the frameworks of accountability as obstacles to operational effectiveness rather than conditions of institutional legitimacy — represents a more fundamental failure than any operational shortcoming.

We are under no illusion about the difficulty of the path ahead. The decisions now being made about the technological capabilities of policing institutions — what to procure, how to deploy, under what governance — will shape the relationship between the public and the institutions that police them for decades to come. Getting those decisions right, in a manner consistent with the values that consent-based policing embodies, is one of the central institutional challenges of this generation. The working group is committed to contributing to meeting it — and to doing so through the kind of sustained, honest, and public engagement that the consent-based principles themselves require.

About this initiative

This is the second in a series of four publications emerging from our working group, convened at the Centre for Economic Performance, London School of Economics and Political Science, in February 2026. We bring together senior police leaders, academics, technologists, and other practitioners to develop a comprehensive framework for understanding and guiding the future of policing's contribution to public safety provision in the digital era.

The initiative is deliberately locationally agnostic, emphasising questions of organisational design, digitalisation, and interoperability that transcend particular jurisdictional or ideological contexts. It is positioned as an ongoing working group rather than a discrete conference, with plans to continuously revisit and refine analyses based on feedback and emerging developments. This approach treats each paper as a contribution to ongoing dialogue rather than a definitive pronouncement, reflecting the genuine uncertainty and rapid evolution that characterise the landscape under analysis.

The central research question animating the initiative asks: “What institutional forms, organisational structures, and technological systems are required for legitimate and effective policing in an increasingly digital society?” The three thematic papers in the series address this question from complementary perspectives: this paper establishes the normative foundations; subsequent papers address organisational design and technological infrastructure respectively. Together, they are intended to inform and shape early discourse between the public and policing institutions toward a renewed consent-based, Peelian-style consensus of public trust in policing approaches for the twenty-first century.

We invite feedback, critique, and engagement with these ideas as part of an ongoing dialogue about the future of democratic policing in an age of digital transformation. Please post your comments on our discussion page, or, if you would like to submit your comments privately or anonymously, email them to T.Kirchmaier@lse.ac.uk.

Participants

Irakli Beridze, UNICRI, United Nations

Chris Church, INTERPOL

Megan Hoey, Capgemini

Tom Kirchmaier, CEP/LSE

Rachel Lewis, City of London Police

William Lyne, Met Police

João Mota, VOID Software

Mick O'Connell, UNICRI & Critical Insights Consultancy Ltd

Emily Owens, CEP/LSE & UCI

Emma Persson, UNICRI, United Nations, Centre for AI and Robotics

Michalis Pittalis, Cyprus Police

Liam Price, Royal Canadian Mounted Police (RCMP)

Dominic Reese, North Rhine-Westphalia Police, Germany

Inger Marie Sunde, Politihøgskolen / Norwegian Police University College

Chris Sykes, Greater Manchester Police (GMP)

Dick van Veldhuizen, Roseman Labs

Ben Waites, Europol

Liz Ward, Met Police

The views expressed in this paper are those of the authors and do not necessarily reflect the official positions or policies of the organisations with which they are affiliated. Special thanks to Catherine Ojo and Janey Tietz for invaluable support with this project. The proceedings were transcribed using Notion, the summaries were produced using Notion and Claude, and the presentation drafted using Gamma. All automated summaries were edited and signed off by us humans. All errors are our own.

13 March 2026