Algorithmic Bias

Computer algorithms can make decisions faster than humans, but may not be fair if biased. Algorithms are trained on data, and if the data is incomplete, imbalanced or improperly selected, it can introduce bias. For example, an AI résumé screening tool from Amazon was found to be sexist, and predictive policing algorithms have been found to be biased against black people based on past crime data. Transparency into algorithms and their training data is important for accountability, but bias can be difficult to detect and may affect different groups in varying ways.


Algorithmic Bias

A computer can make a decision faster. That doesn’t make it fair.
• Humans are error-prone and biased, but that doesn’t mean that algorithms are necessarily better.
• But algorithmic systems can be biased based on who builds them, how they’re developed, and how they’re ultimately used. This is commonly known as algorithmic bias.
• Typically, you only know the end result: how it has affected you, if you’re even aware that an AI algorithm was used in the first place.
• Did you get the job? Did you see that political candidate’s ad on your Facebook timeline? Did a facial recognition system identify you? Was your job application denied?
Difference between traditional algorithms and AI algorithms
• Algorithms are the traditional way of solving a problem with a computer: a sequence of mathematical steps performed inside a piece of software. In principle, they could be carried out with pencil and paper by you and me, but for big problems it is much easier to let computers do them.
• We design the steps in a traditional solution ourselves. We tell the algorithm what the first step in the chain is; it performs the next operation, and the next, and so on, until the answer is obtained. Such algorithms are found in everything from our mobile phones to the engine management units in our cars to microwave ovens.
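The “explicit sequence of steps” idea can be illustrated with a classic hand-computable algorithm. This sketch is in Python (our choice of language, not the slides’); every step is spelled out in advance by the programmer, which is exactly what distinguishes it from machine learning:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a fixed sequence of steps specified in advance."""
    while b != 0:          # step: repeat until the remainder is zero
        a, b = b, a % b    # step: replace (a, b) with (b, a mod b)
    return a               # the answer falls out at the end

print(gcd(48, 18))  # -> 6
```

Nothing here is learned from data: the same inputs always follow the same steps to the same answer, and you could trace them with pencil and paper.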
• But we would not regard those as AIs in the modern sense. The big difference with AI machine learning is that we don’t tell the algorithm the sequence of steps it should take ahead of time in order to solve the problem. We give it a lot of data to try things out, and we tell the algorithm very clearly what success or failure looks like. It then tries various strategies to succeed.
• Strategies that it finds lead more often than not to success, it reinforces in its memory; strategies that it finds lead to failure, it begins to inhibit. After enough iterations and runs of this procedure, the algorithm, the AI system, has learned the likely ways in which it can construct a solution with a high chance of a successful outcome.
• That is really a learning process. The only thing we need to do is tell it the rules of the game and what success looks like, and provide it with an environment in which it can iterate and work out these strategies, typically by giving it a lot of data.
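The reinforce-on-success, inhibit-on-failure loop described above can be sketched as a toy learner in Python. Everything here is invented for illustration (the two “strategies” and their success probabilities are made up); the point is only that the program is never told which strategy is better, yet discovers it by trial and error:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Two candidate strategies; "B" succeeds more often, but the learner
# is never told this -- it only observes success or failure.
success_prob = {"A": 0.2, "B": 0.8}
score = {"A": 0.0, "B": 0.0}   # the learner's reinforced "memory"

for _ in range(2000):
    # Occasionally explore a random strategy; otherwise exploit the best one.
    if random.random() < 0.1:
        s = random.choice(["A", "B"])
    else:
        s = max(score, key=score.get)
    if random.random() < success_prob[s]:
        score[s] += 1.0   # success: reinforce this strategy
    else:
        score[s] -= 1.0   # failure: inhibit it

best = max(score, key=score.get)
print(best)  # -> B
```

After enough iterations the learner settles on the higher-success strategy, without ever having been given the steps to the answer.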
Machine learning-based systems are trained on data. Lots of it.
• When thinking about “machine learning” tools, it’s better to think about the idea of “training.”
• This involves exposing a computer to a bunch of data, any kind of data, and then that computer learns to make judgments, or predictions, about the information it processes based on the patterns it notices.
• Often, the data on which these decision-making systems are trained or checked is not complete, balanced, or selected appropriately, and that can be a major source, although certainly not the only source, of algorithmic bias.
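How imbalanced training data skews what a system learns can be shown with a deliberately simple Python sketch. The dataset, the group names, and the “learner” are all hypothetical: the learner just memorises the most common historical outcome per group, which is an extreme caricature of pattern-matching on biased data:

```python
from collections import Counter

# Hypothetical, deliberately imbalanced historical data (names illustrative):
# group "X" applicants were almost always hired; group "Y" mostly rejected.
train = [("X", "hire")] * 95 + [("Y", "hire")] * 5 + [("Y", "reject")] * 20

# A naive learner that memorises the most common outcome for each group.
counts = {}
for group, label in train:
    counts.setdefault(group, Counter())[label] += 1
majority = {g: c.most_common(1)[0][0] for g, c in counts.items()}

print(majority)  # -> {'X': 'hire', 'Y': 'reject'}
```

The learner faithfully reproduces the skew in its training data: every future “Y” applicant is rejected, not because of anything about them, but because of the history it was shown. Real systems are far more sophisticated, but the underlying failure mode is the same.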
• An example of how training data can produce sexism in an algorithm occurred a few years ago, when Amazon tried to use AI to build a résumé-screening tool. Amazon later scrapped the “sexist AI” tool.
• An experiment at the Massachusetts Institute of Technology, which trained an AI on images and videos of murder and death, found it interpreted neutral inkblots in a negative way.
• “Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”
• In May last year, a report claimed that an AI computer program used by a US court was biased against black people, flagging them as twice as likely to reoffend as white people.
• Predictive policing algorithms were found to be similarly biased, because the crime data they were trained on showed more arrests or police stops for black people.
• Sometimes the data that AI “learns” from comes from humans intent on mischief-making: when Microsoft’s chatbot Tay was released on Twitter in 2016, the bot quickly proved a hit with racists and trolls, who taught it to defend white supremacists, call for genocide and express a fondness for Hitler.
• One study showed that software trained on Google News became sexist as a result of the data it was learning from. When asked to complete the statement “Man is to computer programmer as woman is to X”, the software replied “homemaker”.
• Dr Joanna Bryson, from the University of Bath’s department of computer science, said that the issue of sexist AI could be down to the fact that a lot of machines are programmed by “white, single guys from California” and can be addressed, at least partially, by diversifying the workforce… It should come as no surprise that machines are picking up the opinions of the people who are training them.
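The “man is to programmer as woman is to X” result comes from word-embedding analogies, where words are vectors and analogies are vector arithmetic. The sketch below uses tiny hand-made vectors chosen purely to reproduce the reported behaviour; real systems learn such vectors from corpora like Google News, which is precisely where the bias enters:

```python
import numpy as np

# Toy, hand-crafted vectors for illustration only. In a real embedding these
# coordinates would be learned from text, and would encode its biases.
vec = {
    "man":        np.array([1.0, 0.0, 0.5]),
    "woman":      np.array([0.0, 1.0, 0.5]),
    "programmer": np.array([1.0, 0.1, 0.9]),
    "homemaker":  np.array([0.1, 1.0, 0.9]),
    "doctor":     np.array([0.5, 0.5, 0.9]),
}

def analogy(a: str, b: str, c: str) -> str:
    """Answer 'a is to b as c is to ?' via vec[b] - vec[a] + vec[c]."""
    target = vec[b] - vec[a] + vec[c]
    candidates = [w for w in vec if w not in (a, b, c)]
    # Return the remaining word whose vector is nearest the target.
    return min(candidates, key=lambda w: np.linalg.norm(vec[w] - target))

print(analogy("man", "programmer", "woman"))  # -> homemaker
```

The arithmetic itself is neutral; the troubling answer comes entirely from where the word vectors sit, i.e. from the data they were trained on.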
• Companies using AI might say they’re taking precautions: using more representative training data and regularly auditing their systems for unintended bias and disparate impact against certain groups.
• But just because a tool is tested for bias against one group, which assumes that the engineers checking for bias actually understand how bias manifests and operates, doesn’t mean it is tested for bias against another group.
• This is also true when an algorithm considers several types of identity factors at the same time: a tool may be deemed fairly accurate on white women, for instance, but that doesn’t necessarily mean it works on black women.
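The kind of audit described above, checking accuracy per subgroup rather than only overall, can be sketched in a few lines of Python. The evaluation records here are entirely made up to show how a healthy-looking overall number can hide a large per-group gap:

```python
# Hypothetical audit data: (group, true_label, predicted_label) triples,
# invented so that overall accuracy masks a per-group disparity.
records = (
    [("white_women", 1, 1)] * 45 + [("white_women", 1, 0)] * 5
    + [("black_women", 1, 1)] * 30 + [("black_women", 1, 0)] * 20
)

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(t == p for _, t, p in rows) / len(rows)

overall = accuracy(records)
by_group = {g: accuracy([r for r in records if r[0] == g])
            for g in sorted({r[0] for r in records})}

print(overall, by_group)  # -> 0.75 {'black_women': 0.6, 'white_women': 0.9}
```

An audit that stopped at the 0.75 overall figure would miss that the tool is substantially less accurate for one group; disaggregating by group (and by intersections of groups) is what surfaces the disparity.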
Transparency is a first step for accountability
• One of the reasons algorithmic bias can seem so opaque is that, on our own, we usually can’t tell when it’s happening.
• How can you critique an algorithm, a sort of black box, if you don’t have true access to its inner workings or the capacity to test a good number of its decisions?
• Sharing demographic information about both the data used to train and the data used to check artificial intelligence should be a baseline definition of transparency.
• We will likely need new laws to regulate artificial intelligence, and some lawmakers in the US are catching up on the issue.
Recommended Readings
To bypass paywalls: https://12ft.io/
Algorithmic bias is a complicated and broad subject. To learn more, check out these sources:
• Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code
• Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism
• Rachel Thomas, Getting Specific About Algorithmic Bias
• “Google Has a History of Bias Against Black Girls”, Time, 2018
• “When An Algorithm Helps Send You to Prison”, The New York Times, 2017
• “What Went So Wrong with Microsoft’s Tay AI?”, readwrite, 2016
• Watch Joy Buolamwini’s TED talk: How I’m fighting bias in algorithms
• Yaël Eisenstat, The Real Reason Tech Struggles With Algorithmic Bias, WIRED, 2019
More on AI:
• Stephen Wolfram, What Is ChatGPT Doing … and Why Does It Work?
• “AI Is Exposing Who Really Has Power in Silicon Valley. Your data helped build ChatGPT. Where’s your payout?”, The Atlantic, 2023
Recommended Videos
• A five-part series exploring the impact of algorithms on our everyday lives: All Hail the Algorithm
