Tim B.

Johnson
JSGS 806
12/06/2020

The Implications of Predictive Policing for Accountability, Transparency, and the Reallocation of Resources
According to a report by the University of Toronto's Citizen Lab, Canadian law enforcement agencies

have, to varying degrees, started acquiring, using, and developing algorithmic policing methods

(Kenyon 2020). The report also acknowledged that the true extent to which predictive policing methods

are used is unknown (Kenyon 2020). Considering the rise in predictive algorithmic policing across

many countries (Kaufmann, Egbert and Leese 2019, 674), it seems reasonable to conclude that Canada is

following suit. Accordingly, this paper accepts that predictive policing is becoming commonplace in

Canada, despite an apparent lag behind some other countries.

I juxtapose the adoption of data-driven predictive methods against calls for police reform in the

name of human rights, such as those voiced in the Black Lives Matter (BLM) movement. The analysis of

this juxtaposition presents urgent considerations for policymakers.

If society sincerely wants to address police reform, an examination of the impacts of predictive

policing on key demands for reform is required. Thus, this paper asks the fundamental question of

whether or not calls for transparency, accountability, and a reallocation of police resources are

compatible with increased use of predictive policing.

Through exploratory research of peer-reviewed and grey literature, I situate this paper within

the context of pre-existing critical and descriptive discourse. In doing so, I echo the need for safeguards

issued by the Citizen Lab, but aim to specify where predictive policing methods conflict

with the aforementioned three demands. A few broad-stroke recommendations for mitigation strategies

are made, though the focus resides predominantly on identifying problems where policy intervention

seems most needed.


The paper begins with an explanation of “predictive policing.” Next, an overview of the reform

demands of accountability, transparency, and reallocating resources is provided, including some

implications of predictive policing upon each. To conclude, I deduce that increased adoption of

predictive policing demands safeguards to ensure not just ‘human-in-the-loop’ decision-making but

rather humans-in-the-loop decision-making.

Predictive policing: What it is, its aims, and what data it uses:

First, predictive policing refers to police use of algorithmic risk assessment to calculate the

likelihood of future criminal behaviour (Peeters and Schuilenburg 2018, 276). It can also be understood

as a form of stochastic governance through control (Sanders and Sheptycki 2017). This section uses

the following two facets to describe predictive policing and how it works:

1. decisions regarding aims or targets; and

2. the use of data and how that data influences decision-making.

Targets of predictive policing:

There are many aims or targets of predictive policing. For clarity, this paper classifies these

aims into four subsets. Algorithms are used to project patterns (Kaufmann, Egbert and Leese 2019), that

is, forecast the probability of:

1. locations where crimes will likely be committed;

2. offenders who are deemed high risk for committing future crimes;

3. identities of likely perpetrators of previously committed crimes; and

4. the likely victims of future crimes.

This is by no means an exhaustive account; for example, the flag hypothesis forecasts crime
risk based on an object's vulnerability to theft (Kaufmann, Egbert and Leese 2019, 682). However, the
where and whom seem to be the most common forms of decision-making for predictive analytics.

Deterrence via targeted deployment to 'hot spot' areas during high risk times seems to be the most

common intervention motivated by predictive algorithms. The higher the projected risk, the more need

for surveillance and patrol (Peeters and Schuilenburg 2018, 276). Such preemptive deterrence is
believed to have the potential to lower crime rates both effectively and efficiently.
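
To make this logic concrete, the sketch below shows one minimal way a location-focused forecast could be computed: historical incidents are tallied per map cell, weighted toward the recent past, and the highest-scoring cells are flagged as 'hot spots' for extra patrol. The function, data, and decay parameter are hypothetical illustrations of the general technique, not a reproduction of any vendor's actual model.

```python
# A minimal, illustrative sketch of location-focused ("hot spot") risk scoring.
# Past incidents per grid cell are weighted so that recent history dominates,
# and the highest-scoring cells would be flagged for extra patrol.
from collections import defaultdict

def hotspot_scores(incidents, decay=0.9):
    """incidents: iterable of (grid_cell, weeks_ago) pairs from a historical data set."""
    scores = defaultdict(float)
    for cell, weeks_ago in incidents:
        # Older incidents contribute less; recent ones dominate the score.
        scores[cell] += decay ** weeks_ago
    return scores

# Hypothetical historical records: (grid cell, how many weeks ago it was logged).
history = [("A1", 0), ("A1", 1), ("A1", 5), ("B2", 2), ("C3", 12)]
ranked = sorted(hotspot_scores(history).items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # cells ranked by projected risk; the top cells would receive extra patrol
```

Even this toy version makes plain that the quality of the forecast is bounded by the quality and coverage of the historical records fed into it, which is the subject of the next section.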

Closed or open data and proprietary software:

The foundational assumption is that the future will resemble the past (Bennett Moses and Chan

2018, 810). So, predictive analytics are based on historical data sets. The data can be drawn from

closed police data sets or open data, i.e. big data. There is also a trend in big data usage toward

“platformization” in police work, which is, “an organizational process in which manifold data sets and

databanks, especially from police-external sources, are cross linked” (Egbert 2019, 84). This trend

correlates with the use of machine learning in algorithms. A necessary final point is that software

packages used by police are often privately owned. So, how an algorithm works is often shielded by
proprietary, commercial-in-confidence protections.

1. Accountability as providing an explanation and facing consequences:

Accountability is concerned with controlling the power of police and requiring that police explain
their conduct (Bennett Moses and Chan 2018, 817). Thus, accountability has two components: that
police explain their conduct, and that they face suitable consequences for their actions (power control).

1.1 Peeters and Schuilenburg describe the use of computer-generated scores to prevent crime as
very similar to how 'consumer scores' are used by credit card companies (Peeters and Schuilenburg
2018, 268). They refer to predictive policing as 'administrative law' (Peeters and Schuilenburg 2018,

270), wherein investigatory decision-making, “shifts from individual judgement to bureaucratic

classification based on the algorithms that are designed into risk assessment tools.” (Peeters and
Schuilenburg 2018, 268) Understanding predictive policing as a logical extension of bureaucratic

classification creates a conflict regarding the need for police to explain their conduct. Peeters and

Schuilenburg argue that bureaucratic practice “shows a tendency toward a self-imposed reduction of

professional discretion.” (Peeters and Schuilenburg 2018, 275) The authors contend that “the outcomes

of an algorithm are more determined by its input and design than by the individual assessment of a

case. What is more, bureaucratic structures are highly disciplining for the behaviour of those subjected

to it.” (Peeters and Schuilenburg 2018, 275)

Thus, bureaucratic structures reduce the extent to which individuals exert discretion over their
own decisions, and algorithms only add new rules that discipline behaviour. Police may claim that
predictive scores are only a tool to assist in their decision-making, but Peeters and Schuilenburg suggest
a more deterministic relationship between algorithmic outcomes and individual decision-making. A

human tendency to have more faith in numerical data over other forms of evidence, regardless of

validity (Israni 2017), adds to accountability issues of bureaucratic decision-making. When asked to

explain decisions, officers can now blame the software. Algorithms offer a means of outsourcing
decision-making to software and, in doing so, of outsourcing accountability.

1.2 Feedback loops refer to self-perpetuating, self-affirming biases through the use of historic

crime data (Bennett Moses and Chan 2018, 810). For example, both whether a crime is officially
reported and how it is classified are subject to bias. That biased data is then used to justify intervening,
thereby creating more crime data to be used for future predictions. Through feedback loops,

discrimination becomes entrenched in police methods. Automation bias refers to the tendency to place
excessive trust in a computer program's suggested answers (Freeman 2016, 99). In machine learning,
automation bias can also manifest as confirmation bias, or as feedback loops within the algorithm itself.
Feedback loops are an accountability issue because algorithms can “mathwash” (Mok 2017)

discrimination by giving historically prejudiced decision-making statistical legitimacy.
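
The self-affirming character of such loops can be illustrated with a toy simulation (all numbers are invented and are not drawn from the cited studies): two areas have identical underlying offence rates, but one starts with more patrol, and each period's deployment is "predicted" from the crime recorded under the previous deployment.

```python
# A toy simulation of the feedback loop described above (illustrative only).
import random

random.seed(1)
true_rate = {"A": 0.3, "B": 0.3}      # identical underlying offence rates
patrol_share = {"A": 0.7, "B": 0.3}   # historically biased deployment

for period in range(5):
    recorded = {}
    for area in true_rate:
        # What gets recorded depends on where police are looking, not only on crime itself.
        recorded[area] = sum(
            random.random() < true_rate[area] * patrol_share[area]
            for _ in range(100)
        )
    total = sum(recorded.values()) or 1
    # Next period's deployment is "predicted" from the biased records.
    patrol_share = {area: recorded[area] / total for area in recorded}
    print(period, recorded, patrol_share)
# Area "A" keeps generating more recorded crime, and therefore more patrol,
# even though its underlying rate never differs from area "B".
```

Because detection depends on where officers are sent, the initial disparity in recorded crime reproduces itself period after period, which is precisely the statistical legitimacy problem described above.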


1.3 As accountability relates to facing consequences, the use of body cameras is informative.

Body cameras are a technological tool claimed to improve police accountability. A 2016 study found
that 92.6% of prosecutions involving the use of body-cam evidence were against private citizens, while just
over 8% were against law enforcement (Merola et al. 2016, 16). So, even when police adopt technologies to

(ostensibly) improve accountability, this example suggests other outcomes.

These three issues imply that increased adoption of predictive policing technologies raises

issues of accountability. This points to a need for safeguards that establish where legal liability resides

with respect to “software assisted” decision-making.

2. Transparency as access to knowledge:

Transparency can be understood as the ability to “dig deeper” (Bennett Moses and Chan 2018,

818). In this paper, it is the ability to access answers with respect to how an algorithm works, i.e. its

variables and/or how they are weighted. Transparency is thus in one sense more rudimentary than
accountability, because in order for police to provide answers they must first have access to those
answers. In conceptualizing transparency as access to knowledge, I highlight two potential problems.

The first is the issue of proprietary black boxes, the second being algorithmic black boxes.

2.1 The issue of proprietary black boxes respecting predictive software ought to instill a sense

of ethical urgency for policy intervention. Proprietary black boxes occur when policing software is

“subject to commercial-in-confidence provisions and licensing contracts” (Bennett Moses and Chan

2018, 818). In the recent Wisconsin Supreme Court case of State v. Loomis, the ethical importance of

algorithmic transparency was centre stage. In Loomis, a sentence was issued based on an algorithm-
generated recidivism risk assessment from a program called COMPAS (Israni 2017). On appeal, the

defendant-appellant claimed that: “[h]e knew what information COMPAS was using, his criminal

history and a pre-sentencing questionnaire he had filled out, but had no idea how it was using said
information to predict his recidivism.” (Israni 2017)

This case creates a precedent for programmers to deny access to algorithmic analysis to outside

parties. This is extremely problematic, even if we think that a computer might be less biased than a

judge or a police officer. In Loomis, the defendant was denied information regarding how variables

were weighted in a calculation that determined his prison sentence. Not requiring the disclosure of how

variables get weighted allows for biases like racial discrimination to remain unchecked, even when

racial indicators are not explicitly included in an algorithm's calculation. This is because proxy

indicators can be used for racial factors, i.e. one's postal code (Turner Lee, Resnick and Barton 2019).

Thus, how a postal code gets weighted could disproportionately influence the risk score that an
algorithm generates. The Saskatoon Police Service's development of its own in-house software to avoid
proprietary opacity (Kenyon 2020) further suggests that some form of policy intervention is warranted
to ensure transparency measures are being met with respect to proprietary algorithms.
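
The mechanics of such a proxy can be sketched in a few lines. The weights, postal codes, and lookup table below are entirely hypothetical; the point is only that a heavily weighted, apparently neutral variable such as postal code can reproduce a neighbourhood-level (and hence potentially racial) disparity without race ever appearing in the formula.

```python
# Hypothetical weights for a simple linear risk score; no racial variable appears explicitly.
WEIGHTS = {"prior_arrests": 0.5, "age_under_25": 0.3, "postal_code_risk": 1.2}

# Hypothetical lookup of historical arrest density by postal code prefix. In a
# residentially segregated city, this table can mirror racial composition.
POSTAL_CODE_RISK = {"S7K": 0.9, "S7H": 0.2}

def risk_score(person):
    features = {
        "prior_arrests": person["prior_arrests"],
        "age_under_25": 1.0 if person["age"] < 25 else 0.0,
        "postal_code_risk": POSTAL_CODE_RISK[person["postal_code"]],
    }
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Two people with identical individual histories who differ only in address.
a = {"prior_arrests": 1, "age": 30, "postal_code": "S7K"}
b = {"prior_arrests": 1, "age": 30, "postal_code": "S7H"}
print(risk_score(a), risk_score(b))  # the postal-code weight alone drives the gap
```

Without disclosure of the weights and of lookup tables like this one, an affected individual has no way to see that the address term, rather than anything about their own conduct, drove the score.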

2.2 The second problem can occur even if proprietary transparency measures seem satisfactory.

Simply put, how an algorithm produces a risk assessment can be too complicated to understand, even

for the programmers themselves. This is especially the case when algorithms incorporate machine
learning, which always carries an element of opacity (Oswald et al. 2018). Take, for example, a specific

risk assessment known as the HART model that “contains over 4.2 million decision points, all of which

are highly interdependent on the ones that precede them” (Oswald et al. 2018, 234). Algorithms can
themselves be black boxes, given the limits of human capacity to understand how they work even with
full access to their variables and programming.

These two problems call for, at the very least, intensive deliberation and discussions involving

lawmakers, policy analysts, policymakers, police representatives and software providers about the need

for transparency within predictive policing. Requiring that police have basic algorithmic literacy in

order to reflect on their decisions is a bare minimum. Whether the public at large should have access to
the 'what' and 'how' of a proprietary algorithm must be considered, perhaps in the form of mandatory
disclosure measures. Lastly, it might be of value to ask whether the benefits of using inherently opaque
algorithms reliant upon machine learning really outweigh the cost of never having total comprehensibility.

3. Reallocation of Police Resources:

I argue that a contradiction exists in how predictive policing defines the problem of crime, one
that carries implications for resource reallocation. Deborah Stone's typology of causal stories can help
to draw out this contradiction. Predictive policing operates on paradoxical reasoning that logically

supports a reallocation of resources.

Deborah Stone argues that how we understand occurrences, that is, the intentions behind actions and
their consequences, is crucial “in the struggle over problem definition” (Stone 1989, 284). Actions can

either be unguided or purposeful, with intended or unintended consequences (Stone 1989, 285). She

clarifies that this taxonomy is not exhaustive, acknowledging the existence of 'complex' historical

or structural causal explanations (Stone 1989, 288). I suggest that predictive policing relies on

structural causal explanations while operating as though crime (its problem definition) were a simple

causal story of purposeful intentional action with intended consequences.

Certain algorithms incorporate census data to identify communities at risk of higher crime rates.

One police officer in a 2019 study stated they had relied on variables like “socio-economic status,
residential stability, linguistic isolation, and race and ethnicity” (Kaufmann, Egbert and Leese 2019,

681). This acknowledges a correlative relationship between socio-economic, structural explanations

and crime. And still, policing 'criminal behaviour' through patrol and deterrence relies on an

oversimplified attribution of intent. As Stone puts it, “predictable stochastic outcomes have been
transformed [...] into conscious intent” (Stone 1989, 291). Thus, the use of software that includes

variables related to socio-economic status and ethnicity is at odds with a problem definition that begets
intervention through deterrence in the immediacy of spatiotemporal predictions.

Calls for reallocation of police resources to other social and health services implicitly reject the
simple definition of 'crime' as purely purposeful action with intended consequences. As Stone would

see it, predictive policing “assumes that the effects of an action are its purposes [which] is to commit

the teleological fallacy.” (Stone 1989, 290) In acknowledging structural causes of crime, predictive

policing contradicts an oversimplified definition of 'criminal behaviour'. And yet, it is this

oversimplified definition that justifies channelling resources toward strategies of intervening via police

presence.

Predictive policing could therefore be the strongest case against itself on the question of
resource reallocation. Given the increasing emphasis placed on preventing crime before it takes place, and
given that police also acknowledge socio-economic indicators in algorithmic predictions, the logic supports
reallocating resources to other public services. That is, intervening earlier to address the socio-economic
causes of crime is the next logical step for predictive policing.

In Sum:

To conclude, this paper has identified problems that predictive policing presents for reform

demands of accountability, transparency, and reallocating resources. I deduce that the increased

adoption of predictive policing urgently demands policy intervention to ensure not just human-in-the-

loop decision-making but rather humans-in-the-loop decision-making. Full disclosure and transparency

of how algorithms make their calculations must not be confined to police or software proprietors, but must
be a concern for the federal government, lawmakers, policymakers, the public, and law enforcement

agencies alike. For now, I will channel the above analysis into a demand for algorithmic transparency

in the form of regulatory safeguards that ensure decision-making incorporates humans-in-the-loop from

diverse perspectives.
Bibliography

Egbert, Simon. 2019. “Predictive Policing and the Platformization of Police Work.” Surveillance &
Society 17(1/2): 83-88. https://doi.org/10.24908/ss.v17i1/2.12920.

Freeman, Katherine. 2016. “Algorithmic Injustice: How the Wisconsin Supreme Court Failed to Protect
Due Process Rights in State v. Loomis.” North Carolina Journal of Law & Technology 18(5): 75-
106. https://scholarship.law.unc.edu/cgi/viewcontent.cgi?article=1332&context=ncjolt.

Israni, Ellora. 2017. “Algorithmic Due Process: Mistaken Accountability and Attribution in State v.
Loomis.” Harvard Journal of Law and Technology: JOLT Digest.
https://jolt.law.harvard.edu/digest/algorithmic-due-process-mistaken-accountability-and-
attribution-in-state-v-loomis-1.

Kaufmann, Mareile, Simon Egbert and Matthias Leese. 2019. “Predictive policing and the politics of
patterns.” British Journal of Criminology 59: 674-692. https://doi.org/10.1093/bjc/azy060.

Kenyon, Miles. 2020. “Algorithmic Policing in Canada Explained.” The Citizen Lab. September 1,
2020. https://citizenlab.ca/2020/09/algorithmic-policing-in-canada-explained/.

Lee, Nicole Turner, Paul Resnick, and Genie Barton. 2019. “Algorithmic bias detection and
mitigation: Best practices and policies to reduce consumer harms.” The Brookings Institution. May
22, 2019. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-
practices-and-policies-to-reduce-consumer-harms/.

Merola, L.M., C. Lum, C.S. Koper, and A. Scherer. 2016. Body Worn Cameras and the Courts: A
National Survey of State Prosecutors: Report for the Laura and John Arnold Foundation.
Fairfax, VA: Center for Evidence-Based Crime Policy, George Mason University.
https://bja.ojp.gov/sites/g/files/xyckuh186/files/media/document/bwcprosecutors.pdf.

Mok, Kimberly. 2017. “Mathwashing: How algorithms can hide gender and racial biases.” The New
Stack. December 8, 2017. https://thenewstack.io/hidden-gender-racial-biases-algorithms-can-big-
deal/.

Moses, Lyria Bennett and Janet Chan. 2018. “Algorithmic prediction in policing: assumptions, evaluation,
and accountability.” Policing and Society 28(7): 806-822.
https://doi.org/10.1080/10439463.2016.1253695.
Oswald, Marion, Jamie Grace, Sheena Urwin and Geoffrey C. Barnes. 2018. “Algorithmic risk
assessment policing models: lessons from the Durham HART model and ‘Experimental’
proportionality.” Information and Communications Technology Law 27(2): 223-250. DOI:
10.1080/13600834.2018.1458455.

Peeters, Rik and Marc Schuilenburg. 2018. “Machine Justice: Governing Security through the
Bureaucracy of Algorithms.” Information Polity 23: 267-280. https://marcschuilenburg.nl/wp-
content/uploads/2018/08/MachineJustice.pdf.

Sanders, Carrie B. and James Sheptycki. 2017. “Policing, crime and ‘big data’; towards a critique of
the moral economy of stochastic governance.” Crime, Law and Social Change 68: 1-15.
DOI: 10.1007/s10611-016-9678-7.

Stone, Deborah A. 1989. “Causal stories and the formation of policy agendas.” Political Science
Quarterly 104 (2): 281-300. https://www.jstor.org/stable/2151585.
