How to Hold Algorithms Accountable
Algorithms are now used throughout the public and private sectors, informing decisions on everything from
education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big
data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an
electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of
the societal risks posed by over-reliance on these systems and work to hold them accountable.
Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the
ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product
managers need new ways to think about, design, and implement algorithmic systems in publicly accountable
ways. Over the past several months, we and some colleagues have been working to address that need by crafting
a set of principles for accountable algorithms.
Let’s consider one case where algorithmic accountability is sorely needed: the risk assessment scores that inform
criminal-justice decisions in the U.S. legal system. These scores are calculated by asking a series of questions
relating to things like the defendant’s age, criminal history, and other characteristics. The data are fed into an
algorithm to calculate a score that can then be used in decisions about pretrial detention, probation, parole, or even
sentencing. And these models are often trained using proprietary machine-learning algorithms and data about
previous defendants.
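To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of how such a score might be produced: a model is trained on data about previous defendants and then applied to a new defendant's answers. The features, the toy data, and the choice of logistic regression are our own illustrative assumptions, not a description of any actual proprietary tool.

```python
# Hypothetical sketch: defendant features are fed into a model trained on
# past defendants, which returns a 1-10 risk score. All names, data, and
# the modeling choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [age, prior_arrests, age_at_first_arrest]
X_train = np.array([
    [22, 4, 17],
    [45, 0, 45],
    [31, 2, 24],
    [19, 6, 15],
    [52, 1, 38],
    [28, 3, 21],
])
# 1 = re-arrested within two years, 0 = not (hypothetical labels)
y_train = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def risk_score(age, prior_arrests, age_at_first_arrest):
    """Convert the model's predicted probability into a 1-10 risk score."""
    p = model.predict_proba([[age, prior_arrests, age_at_first_arrest]])[0, 1]
    return int(np.ceil(p * 10))

print(risk_score(age=23, prior_arrests=5, age_at_first_arrest=16))
```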
Recent investigations show that risk assessment algorithms can be racially biased, generating scores that, when
wrong, more often incorrectly classify black defendants as high risk. These results have generated considerable
controversy. Given the literally life-altering nature of these algorithmic decisions, the systems that produce them
deserve careful attention, and those who deploy them should be held accountable for negative consequences.
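The kind of disparity the investigators reported can be checked with a simple calculation: among defendants who were not re-arrested, how often did each group receive a "high risk" label? The sketch below shows one way to compute that false positive rate; the group labels, cutoff, and records are hypothetical.

```python
# Sketch of the error-rate comparison behind the bias findings: the false
# positive rate is the share of non-reoffenders in a group who were still
# scored "high risk". Groups, cutoff, and data are hypothetical.
def false_positive_rate(records, group, high_risk_cutoff=7):
    """FPR = flagged non-reoffenders in `group` / all non-reoffenders in `group`."""
    non_reoffenders = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return None
    flagged = [r for r in non_reoffenders if r["score"] >= high_risk_cutoff]
    return len(flagged) / len(non_reoffenders)

records = [
    {"group": "A", "score": 8, "reoffended": False},
    {"group": "A", "score": 3, "reoffended": False},
    {"group": "A", "score": 9, "reoffended": True},
    {"group": "B", "score": 2, "reoffended": False},
    {"group": "B", "score": 4, "reoffended": False},
    {"group": "B", "score": 7, "reoffended": True},
]

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))
```

A large gap between the two printed rates is exactly the pattern that would count as the scores being "more often wrong" for one group than another.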
Algorithms and the data that drive them are designed and created by people. Even for techniques such as genetic
algorithms that evolve on their own, or machine-learning algorithms where the resulting model was not hand-
crafted by a person, results are shaped by human-made design decisions, rules about what to optimize, and
choices about what training data to use. “The algorithm did it” is not an acceptable excuse if algorithmic systems
make mistakes or have undesired consequences.
Accountability implies an obligation to report and justify algorithmic decision-making, and to mitigate any
negative social impacts or potential harms. We’ll consider accountability through the lens of five core principles:
responsibility, explainability, accuracy, auditability, and fairness.
Responsibility. For any algorithmic system, there needs to be a person with the authority to deal with its adverse
individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a
focus on avenues for redress, public dialogue, and internal authority for change. This could be as straightforward
as giving someone on your technical team the internal power and resources to change the system, and making sure
that person's contact information is publicly available.
Explainability. Any decisions produced by an algorithmic system should be explainable to the people affected by
those decisions. These explanations must be accessible and understandable to the target audience; purely technical
descriptions are not appropriate for the general public. Explaining risk assessment scores to defendants and their
legal counsel would promote greater understanding and help them challenge apparent mistakes or faulty data.
Some machine-learning models are more explainable than others, but just because there’s a fancy neural net
involved doesn’t mean that a meaningful explanation can’t be produced.
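As a sketch of what such an explanation could look like, the following example assumes a simple linear model and translates each feature's contribution into plain language. The weights, intercept, and feature names are invented for illustration, not taken from any real risk assessment tool.

```python
# Hypothetical plain-language explanation of one defendant's score, built
# from the per-feature contributions of an assumed linear model.
FEATURE_WEIGHTS = {              # illustrative learned weights
    "prior_arrests": 0.45,
    "age": -0.03,
    "age_at_first_arrest": -0.06,
}
INTERCEPT = 0.2

def explain(defendant):
    """List each feature's contribution to the raw score, largest first."""
    contributions = {
        name: weight * defendant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    total = INTERCEPT + sum(contributions.values())
    lines = [f"Raw score: {total:.2f} (higher means higher assessed risk)"]
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "raised" if value > 0 else "lowered"
        lines.append(f"- {name} = {defendant[name]} {direction} the score by {abs(value):.2f}")
    return "\n".join(lines)

print(explain({"prior_arrests": 5, "age": 23, "age_at_first_arrest": 16}))
```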
Accuracy. Algorithms make mistakes, whether because of data errors in their inputs (garbage in, garbage out) or
statistical uncertainty in their outputs. The principle of accuracy suggests that sources of error and uncertainty
throughout an algorithm and its data sources need to be identified, logged, and benchmarked. Understanding the
nature of errors produced by an algorithmic system can inform mitigation procedures.
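One minimal way to act on this principle, under our own assumptions about plausible field ranges and a held-out benchmark set, is to validate and log suspect inputs and to report benchmark accuracy together with a bootstrap estimate of its uncertainty:

```python
# Sketch of "identify, log, and benchmark": reject and log implausible
# inputs (garbage in, garbage out), and report accuracy on a held-out set
# with a simple bootstrap interval for output uncertainty. Field ranges,
# benchmark data, and predictions are hypothetical.
import logging
import random

logging.basicConfig(level=logging.WARNING)

def validate_input(record):
    """Log and reject records with missing or out-of-range fields."""
    if not 12 <= record.get("age", -1) <= 100:
        logging.warning("Rejected record %s: implausible age", record.get("id"))
        return False
    if record.get("prior_arrests", -1) < 0:
        logging.warning("Rejected record %s: negative prior arrests", record.get("id"))
        return False
    return True

def benchmark(predictions, outcomes, n_boot=1000, seed=0):
    """Accuracy on a held-out set, with a bootstrap 95% interval."""
    rng = random.Random(seed)
    correct = [int(p == o) for p, o in zip(predictions, outcomes)]
    point = sum(correct) / len(correct)
    boots = sorted(
        sum(rng.choices(correct, k=len(correct))) / len(correct)
        for _ in range(n_boot)
    )
    return point, boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]

print(benchmark([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 1, 0]))
```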
Auditability. The principle of auditability states that algorithms should be developed in ways that enable third
parties to probe and review their behavior. Enabling algorithms to be monitored, checked, and criticized
would lead to more conscious design and course correction in the event of failure. While there may be technical
challenges in allowing public auditing while protecting proprietary information, private auditing (as in
accounting) could provide some public assurance. Where possible, even limited access (e.g., via an API) would
allow the public a valuable chance to audit these socially significant algorithms.
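Here is a sketch of what such limited API access might look like: auditors submit hypothetical inputs, receive scores, and every query is logged, while the model's internals stay hidden. The endpoint name, fields, and stand-in scoring function are all assumptions for illustration.

```python
# Sketch of a limited audit API: auditors can query scores for hypothetical
# inputs; the model internals are not exposed, and every query is logged.
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
AUDIT_LOG = []  # in a real system this would be durable, append-only storage

def score(features):
    """Stand-in for the proprietary scoring model."""
    return min(10, 1 + 2 * features.get("prior_arrests", 0))

@app.route("/audit/score", methods=["POST"])
def audit_score():
    features = request.get_json(force=True)
    result = score(features)
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "query": features,
        "score": result,
    })
    return jsonify({"score": result})

if __name__ == "__main__":
    app.run(port=5000)
```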
Fairness. As algorithms increasingly make decisions based on historical and societal data, existing biases and
historically discriminatory human decisions risk being “baked in” to automated decisions. All algorithms making
decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the
criteria used should be publicly released and explained.
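One criterion that could be published alongside such an evaluation is the disparate impact ratio: the rate at which each group is flagged "high risk," relative to the group flagged most often. The sketch below computes it over hypothetical records; the 0.8 threshold echoes the "80 percent rule" used in U.S. employment discrimination law.

```python
# Sketch of a disparate impact check: compare each group's "high risk" flag
# rate to the highest group's rate. Groups, scores, and cutoff are hypothetical.
def flag_rates(records, high_risk_cutoff=7):
    """Share of each group scored at or above the cutoff."""
    totals, flagged = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        if r["score"] >= high_risk_cutoff:
            flagged[r["group"]] = flagged.get(r["group"], 0) + 1
    return {g: flagged.get(g, 0) / totals[g] for g in totals}

def disparate_impact(records):
    """Each group's flag rate divided by the highest group's flag rate."""
    rates = flag_rates(records)
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

records = [
    {"group": "A", "score": 8}, {"group": "A", "score": 7},
    {"group": "A", "score": 3}, {"group": "B", "score": 6},
    {"group": "B", "score": 8}, {"group": "B", "score": 2},
]
print(disparate_impact(records))  # ratios below 0.8 would warrant scrutiny
```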
There’s lots of room to adapt and interpret these principles to your own context, and of course political,
proprietary, or business concerns will intervene. But we do think that considering these ideas throughout the
design, implementation, and release cycles of development will lead to more socially responsible deployment of
algorithms in society.
How do you get started? We outline some pragmatic questions that the product and development team can work
through to form a social impact statement that addresses these principles.
Nicholas Diakopoulos is an assistant professor at the University of Maryland, College Park. Sorelle Friedler is an
assistant professor at Haverford College, and an affiliate at the Data & Society Research Institute.