Introduction
The availability of massive data sets has made it easy to derive new
insights through computers. As a result, algorithms, the sets of
step-by-step instructions that computers follow to perform a task,
have become more sophisticated and pervasive tools for automated
decision-making.2 While algorithms are used in many contexts, we
focus on computer models that make inferences from data about
people, including their identities, demographic attributes,
preferences, and likely future behaviors, as well as the objects
related to them.3
“Algorithms are harnessing volumes of macro- and micro-data to
influence decisions affecting people in a range of tasks, from making
movie recommendations to helping banks determine the
creditworthiness of individuals.”
This paper draws upon the insights of 40 thought leaders from across
academic disciplines, industry sectors, and civil society
organizations who participated in one of two
roundtables.8 Roundtable participants actively debated concepts
related to algorithmic design, accountability, and fairness, as well as
the technical and social trade-offs associated with various
approaches to bias detection and mitigation.
Our goal is to juxtapose the issues that computer programmers and
industry leaders face when developing algorithms with the concerns
of policymakers and civil society groups who assess their
implications. To balance the innovations of AI and machine learning
algorithms with the protection of individual rights, we present a set
of public policy recommendations, self-regulatory best practices,
and consumer-focused strategies, all of which promote the fair and
ethical deployment of these technologies.