
Introduction

The private and public sectors are increasingly turning to artificial intelligence (AI) systems and machine learning algorithms to automate simple and complex decision-making processes.1 The mass-scale digitization of data and the emerging technologies that use it are disrupting most economic sectors, including transportation, retail, advertising, and energy, among other areas. AI is also having an impact on democracy and governance as computerized systems are deployed to improve accuracy and drive objectivity in government functions.

The availability of massive data sets has made it easy to derive new insights through computers. As a result, algorithms, the sets of step-by-step instructions that computers follow to perform a task, have become more sophisticated and pervasive tools for automated decision-making.2 While algorithms are used in many contexts, we focus on computer models that make inferences from data about people, including their identities, demographic attributes, preferences, and likely future behaviors, as well as the objects related to them.3
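To make the term concrete, the toy sketch below shows an "algorithm" in exactly this sense: a fixed sequence of steps that takes data about a person and returns an automated decision. Every field name and threshold here is invented for illustration and does not reflect any real scoring model.

```python
# A toy illustration of an algorithm as step-by-step instructions that
# make an inference about a person from data. All fields and thresholds
# are hypothetical, chosen only to illustrate the concept.

def toy_credit_decision(applicant: dict) -> str:
    score = 0
    # Step 1: reward a longer repayment history.
    if applicant["years_of_credit_history"] >= 5:
        score += 2
    # Step 2: penalize a high debt-to-income ratio.
    if applicant["debt_to_income"] > 0.4:
        score -= 2
    # Step 3: reward a strong on-time payment rate.
    if applicant["on_time_payment_rate"] >= 0.95:
        score += 1
    # Step 4: map the score to a decision.
    return "approve" if score >= 2 else "refer to manual review"

print(toy_credit_decision({
    "years_of_credit_history": 7,
    "debt_to_income": 0.3,
    "on_time_payment_rate": 0.97,
}))  # -> approve
```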
“Algorithms are harnessing volumes of macro- and micro-data to
influence decisions affecting people in a range of tasks, from making
movie recommendations to helping banks determine the
creditworthiness of individuals.”

In the pre-algorithm world, humans and organizations made decisions in hiring, advertising, criminal sentencing, and lending. These decisions were often governed by federal, state, and local laws that regulated the decision-making processes in terms of fairness, transparency, and equity. Today, some of these decisions are entirely made or influenced by machines whose scale and statistical rigor promise unprecedented efficiencies. Algorithms are harnessing volumes of macro- and micro-data to influence decisions affecting people in a range of tasks, from making movie recommendations to helping banks determine the creditworthiness of individuals.4 In machine learning, algorithms rely on multiple data sets, or training data, that specify what the correct outputs are for some people or objects. From that training data, the algorithm learns a model that can be applied to other people or objects to make predictions about what the correct outputs should be for them.5
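The following minimal sketch illustrates this supervised-learning loop using scikit-learn. The features, labels, and numbers are hypothetical stand-ins, not a real credit model: training data pairs inputs with known-correct outputs, a model is fit to that data, and the model then predicts outputs for people it has never seen.

```python
# A minimal sketch of the supervised-learning loop described above.
# Requires scikit-learn; all data here are synthetic.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [income_in_thousands, years_employed],
# with labels 1 = repaid loan, 0 = defaulted. The labels are the
# "correct outputs" the paper refers to.
X_train = [[35, 1], [80, 10], [50, 4], [22, 0], [95, 12], [40, 2]]
y_train = [0, 1, 1, 0, 1, 0]

# Learn a model from the training data.
model = LogisticRegression().fit(X_train, y_train)

# Apply the learned model to a person not in the training data.
print(model.predict([[60, 5]]))        # predicted label, e.g. [1]
print(model.predict_proba([[60, 5]]))  # predicted class probabilities
```

Note that the model can only be as representative as its training data; this is the mechanism by which the biases discussed below enter automated decisions.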

However, because machines can treat similarly situated people and objects differently, research is starting to reveal some troubling examples in which the reality of algorithmic decision-making falls short of our expectations. Given this, some algorithms run the risk of replicating and even amplifying human biases, particularly those affecting protected groups.6 For example, automated risk assessments used by U.S. judges to determine bail and sentencing limits can generate incorrect conclusions, resulting in large cumulative effects on certain groups, such as longer prison sentences or higher bail imposed on people of color.

In this example, the decision generates “bias,” a term that we define broadly as it relates to outcomes which are systematically less favorable to individuals within a particular group and where there is no relevant difference between groups that justifies such harms.7 Bias in algorithms can emanate from unrepresentative or incomplete training data or from reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions with a collective, disparate impact on certain groups of people, even without the programmer’s intention to discriminate. The exploration of the intended and unintended consequences of algorithms is both necessary and timely, particularly since current public policies may not be sufficient to identify, mitigate, and remedy consumer impacts.
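One common way to make this definition of bias measurable is to compare how often each group receives the favorable outcome. The hypothetical sketch below computes a selection-rate ratio of the kind that underlies the U.S. EEOC's "four-fifths" rule of thumb for disparate impact; the decision data are invented for illustration.

```python
# Compare favorable-outcome rates across two groups, one quantitative
# proxy for the paper's definition of bias. The 0.8 threshold follows
# EEOC guidance on disparate impact; the data below are invented.

def selection_rate(decisions: list) -> float:
    """Fraction of decisions that were favorable (True)."""
    return sum(decisions) / len(decisions)

# Hypothetical favorable (True) / unfavorable (False) decisions.
group_a = [True, True, True, False, True, True, False, True]    # 6/8
group_b = [True, False, False, True, False, False, True, False]  # 3/8

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"selection-rate ratio: {ratio:.2f}")  # 0.50

if ratio < 0.8:
    print("Below the four-fifths threshold: investigate for disparate impact.")
```

A ratio well below 0.8, as here, does not by itself prove discrimination, but it flags exactly the kind of systematically less favorable outcome the definition above describes.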

With algorithms appearing in a variety of applications, we argue that operators and other concerned stakeholders must be diligent in proactively addressing factors which contribute to bias. Surfacing and responding to algorithmic bias upfront can potentially avert harmful impacts to users and heavy liabilities against the operators and creators of algorithms, including computer programmers, government, and industry leaders. These actors comprise the audience for the series of mitigation proposals to be presented in this paper because they either build, license, distribute, or are tasked with regulating or legislating algorithmic decision-making to reduce discriminatory intent or effects.

Our research presents a framework for algorithmic hygiene, which identifies some specific causes of biases and employs best practices to identify and mitigate them. We also present a set of public policy recommendations, which promote the fair and ethical deployment of AI and machine learning technologies.

This paper draws upon the insight of 40 thought leaders from across academic disciplines, industry sectors, and civil society organizations who participated in one of two roundtables.8 Roundtable participants actively debated concepts related to algorithmic design, accountability, and fairness, as well as the technical and social trade-offs associated with various approaches to bias detection and mitigation.

Our goal is to juxtapose the issues that computer programmers and industry leaders face when developing algorithms with the concerns of policymakers and civil society groups who assess their implications. To balance the innovations of AI and machine learning algorithms with the protection of individual rights, we present a set of public policy recommendations, self-regulatory best practices, and consumer-focused strategies, all of which promote the fair and ethical deployment of these technologies.

Our public policy recommendations include the updating of nondiscrimination and civil rights laws to apply to digital practices, the use of regulatory sandboxes to foster anti-bias experimentation, and safe harbors for using sensitive information to detect and mitigate biases. We also outline a set of self-regulatory best practices, such as the development of a bias impact statement, inclusive design principles, and cross-functional work teams. Finally, we propose additional solutions focused on algorithmic literacy among users and formal feedback mechanisms to civil society groups.

The next section provides five examples of algorithms to explain the causes and sources of their biases. Later in the paper, we discuss the trade-offs between fairness and accuracy in the mitigation of algorithmic bias, followed by a robust offering of self-regulatory best practices, public policy recommendations, and consumer-driven strategies for addressing online biases. We conclude by highlighting the importance of proactively tackling the responsible and ethical use of machine learning and other automated decision-making tools.
Examples of algorithmic biases

Algorithmic bias can manifest in several ways, with varying degrees of consequences for the subject group. Consider the following examples, which illustrate both a range of causes and effects that either inadvertently apply different treatment to groups or deliberately generate a disparate impact on them.

Bias in online recruitment tools

Online retailer Amazon, whose global workforce is 60 percent male and where men hold 74 percent of the company’s managerial positions, recently discontinued use of a recruiting algorithm after discovering gender bias.9 The data that engineers used to create the algorithm were derived from the resumes submitted to Amazon over a 10-year period, which came predominantly from white males. The algorithm was taught to recognize word patterns in the resumes, rather than relevant skill sets, and these data were benchmarked against the company’s predominantly male engineering department to determine an applicant’s fit. As a result, the AI software penalized any resume that contained the word “women’s” in the text and downgraded the resumes of women who attended women’s colleges, resulting in gender bias.10
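The sketch below illustrates, in miniature, how a screener trained on word patterns in historically skewed hiring data can absorb this kind of bias. It is not Amazon's system; the resumes and labels are invented, and scikit-learn is assumed.

```python
# A minimal sketch of a word-pattern resume screener learning gender
# bias from skewed historical outcomes. Not Amazon's actual system;
# all resumes and labels are invented. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical outcomes skew male, so tokens correlated with women
# end up correlated with rejection, regardless of skill.
resumes = [
    "captain of chess club, java developer",           # hired
    "java developer, hackathon winner",                 # hired
    "women's chess club captain, java developer",       # rejected
    "graduate of a women's college, java developer",    # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)          # bag-of-words features
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: "women" carries a negative coefficient
# even though it says nothing about ability to do the job.
for token, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{token:12s} {weight:+.2f}")
```

The point of the sketch is that nothing in the code mentions gender; the bias arrives entirely through the correlation between certain words and the historical labels, which is exactly the failure mode the Amazon case exposed.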
