SLIDES PHIL1001 W09b - Algorithms & Decisions, Part 2 (2024)

Uploaded by

Yashwardhan
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
45 views7 pages

SLIDES PHIL1001 W09b - Algorithms & Decisions, Part 2 (2024)

Uploaded by

Yashwardhan
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 7


Week 09, Lecture B:
Algorithmic Decision-Making: Bias, Treating as Individuals, and Agency Laundering

Agenda
1. Recap/revision
1.1. A.I. Algorithms & Human Decisions.
1.2. Three ethical concerns.
2. More ethical concerns.
2.1. Algorithmic Bias
2.2. Treating People as Individuals
2.3. Agency Laundering

Part 1: Recap/Revision

Section 1.1: A.I. Algorithms: What they are; what they are for.

Sample Machine-Learning Algorithm: University Admissions

[Diagram: a neural network with an Input Layer, Hidden Layers, and an Output Layer. Input features: Test Scores, Name of School, Extra-Curricular Activities, After-School Job, Address/Suburb, Parents' Income. Output: Did they graduate?]
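To make the diagram concrete, here is a minimal sketch of the kind of model pictured: a small neural network trained on historical applicant records to predict graduation, then applied to a new applicant. Everything below is an illustrative assumption (synthetic data, invented feature encodings, scikit-learn as the library); the lecture does not specify any implementation.

```python
# A minimal sketch, assuming scikit-learn and synthetic data. Illustrative only,
# not the lecture's (or any real university's) admissions system.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical historical records, one row per past student:
# [test_score, school_code, extracurriculars, after_school_job, suburb_code, parents_income]
# (school and suburb are treated as plain numeric codes here purely for brevity)
X = np.column_stack([
    rng.normal(70, 10, n),   # test scores
    rng.integers(0, 5, n),   # name of school (coded)
    rng.integers(0, 4, n),   # number of extracurricular activities
    rng.integers(0, 2, n),   # after-school job (0/1)
    rng.integers(0, 10, n),  # address/suburb (coded)
    rng.normal(60, 20, n),   # parents' income, in $1000s
])

# Hypothetical label: did the student graduate?
y = (X[:, 0] + 0.1 * X[:, 5] + rng.normal(0, 5, n) > 78).astype(int)

# "Hidden layers" as in the diagram: two small layers between input and output.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

# Applying the trained model to a new applicant gives a "likelihood of graduating".
new_applicant = [[72, 2, 1, 0, 7, 55]]
print("Estimated likelihood of graduating:", model.predict_proba(new_applicant)[0, 1])
```

In a real pipeline the categorical features (school, suburb) would be one-hot encoded and the inputs scaled; the point here is only the shape of the process: historical features in, a learned "likelihood of graduating" out.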


[Diagram: the same network applied to a new applicant, where the unknown outcome ("?") is now estimated as a "Likelihood of Graduating?"]

Some Areas Where Machine Learning Algorithms Are Used for Decision-Making

(a) Online Advertisements.

(b) Social Media; Video & Music Streaming Feeds.

(c) Medical Diagnostics.

(d) University Admissions & Assessments.

(e) Human Resources: e.g. hiring & firing.

(f) Mortgage Lending.

(g) Criminal Sentencing and Parole Decisions.

Section 1.2: Three Ethical Concerns Recapped

Concern 1: Amplifying bigotry and stereotypes.
E.g. Search engines returning bigoted, pornographic, or negatively stereotyped results for search phrases denoting marginalized groups.

Concern 2: Personalization creating "filter bubbles".
• Users cut off from evidence that might challenge their beliefs.
• Increases mistrust and polarization.

Concern 3: Lack of transparency about decisions.
• Hinders people's autonomy.
• (Discussed in workshops)

Part 2: Ethical Misgivings With Algorithmic Decisions Continued


Section 2.1: Algorithmic Bias

Prelude: Old-Fashioned Bias and One Hope For Algorithms

The story of mom's first job interview (1961). Some takeaways:

(a) Mom was not qualified for the job, but was hired on the basis of a characteristic that was arbitrary from the P.O.V. of the job.

(b) This was unfair to more qualified candidates, & thus, wrong.

(c) This phenomenon happens all the time, even if not publicly acknowledged. (Hirers may not even be aware.)

(d) A hope for algorithmically assisted hiring decisions?
Can we use machine-learning algorithms to help us to make unbiased and morally justifiable hiring decisions?

And now for some bad news:

“Algorithmic Bias”


Example 1: Amazon's Recruitment Algorithm

Dastin, Jeffrey (2018). "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women". Reuters.
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G [Accessed 2/5/2022]

"In effect, Amazon's system taught itself that male candidates were preferable. It penalized resumes that included the word 'women's,' as in 'women's chess club captain.' And it downgraded graduates of two all-women's colleges […].

"Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said." (Dastin 2018).

Example 2: Health Care Algorithm Used by US Hospitals

(a) The algorithm is used to help decide which patients to refer to a "care management program" for people predicted to have "complex health care needs".

(b) It predicted that, on average, black and white patients had similar levels of risk, and so they were referred to the program in similar proportions.

(c) But it was later found that, in fact, a much greater proportion of black patients than white patients had serious health problems.

(d) So, where a black patient and a white patient were equally sick, the white patient was generally assigned a higher risk score and was more likely to be referred to the program.

(e) The bias was the result of measuring degree of sickness in terms of how much $ was spent on the patient's health.

Ledford, Heidi (2019). "Millions Affected by Racial Bias in Health Care Algorithm." Nature 574: 608-609. https://www.nature.com/articles/d41586-019-03228-6
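Point (e) is the crux, and a small worked example may help. The numbers below are invented purely for illustration; they are not from the Ledford report. The idea: if risk is scored via predicted spending, a group that generates less spending for the same level of sickness gets systematically lower risk scores.

```python
# Invented numbers, for illustration only (not from the Ledford report).
# Two patients are equally sick, but historical spending per unit of sickness
# differs between their groups (e.g. because of unequal access to care).
SPEND_PER_UNIT = {"group_A": 1200, "group_B": 800}  # dollars per unit of sickness
REFERRAL_THRESHOLD = 8.0

def risk_score(group: str, sickness_level: float) -> float:
    """Proxy-based score: predicted spending stands in for 'degree of sickness'."""
    predicted_cost = SPEND_PER_UNIT[group] * sickness_level
    return predicted_cost / 1000  # rescale dollars to a score

for group in SPEND_PER_UNIT:
    score = risk_score(group, sickness_level=8.0)
    referred = score >= REFERRAL_THRESHOLD
    print(group, round(score, 1), "referred" if referred else "not referred")

# group_A 9.6 referred
# group_B 6.4 not referred
# Equally sick patients, but the lower-spending group misses the referral cutoff.
```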


For other examples see Johnson (2020: 3-5)* (or in Johnson 2021: 9943-9945).

Question: How could these algorithms be biased if data about gender & race was excluded?

Johnson, Gabrielle (2021). "Algorithmic Bias: On the Implicit Biases of Social Technology." Synthese 198: 9941–9961.
* The "Online First" edition of Johnson's paper was in 2020, and had different pagination.

The Proxy Problem

• An algorithm may come to sort subjects based on proxy attributes that correlate with a "socially sensitive" attribute. E.g. post codes / neighborhoods.

• Its predictions may then be similar to what they would be if it had explicitly utilized the socially sensitive attribute (Johnson 2020). (A toy simulation of this is sketched below.)

[Diagram: the admissions network again, with proxy features such as Address/Suburb among the inputs (Test Scores, Name of School, Extra-Curricular Activities, After-School Job, Address/Suburb, Parents' Income) feeding the "Likelihood of Graduating?" output.]
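The following toy simulation shows the mechanism. Everything is synthetic and stipulated for illustration (scikit-learn assumed; nothing here is from the lecture or from Johnson's paper): the model is never given the sensitive attribute, but a postcode feature that correlates with it carries much the same information.

```python
# Synthetic illustration of the proxy problem. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# A "socially sensitive" attribute (never shown to the model).
group = rng.integers(0, 2, n)

# Postcode correlates strongly with group (e.g. residential segregation):
# 85% of the time the postcode simply tracks the group.
postcode = np.where(rng.random(n) < 0.85, group, 1 - group)

# Test scores are group-neutral; the historical outcome is depressed for
# group 1 (stipulated here to stand in for background injustice).
test_score = rng.normal(70, 10, n)
graduated = ((test_score - 8 * group + rng.normal(0, 5, n)) > 66).astype(int)

# Train WITHOUT the sensitive attribute: only test score and postcode.
X = np.column_stack([test_score, postcode])
model = LogisticRegression(max_iter=1000).fit(X, graduated)

# Yet predictions still differ by group, because postcode proxies for it.
pred = model.predict_proba(X)[:, 1]
print("mean predicted likelihood, group 0:", round(pred[group == 0].mean(), 3))
print("mean predicted likelihood, group 1:", round(pred[group == 1].mean(), 3))
```

This is why simply excluding gender and race from the training data does not, by itself, remove the bias: the correlations re-enter through the proxies.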

What to do about algorithmic bias?

(a) "Curate the training data". (Johnson 2020: 15)
Downside: sacrifices the accuracy of the predictions.
"For now…there are no purely algorithmic solutions to the problems that face algorithmic bias." (Johnson 2020: 16-17)

(b) Address the problems and injustices driving the correlations in the social environment. (Johnson 2020: 11)

Section 2.2: Treating People as Individuals


Rubel, Castro, & Pham:
Alan Rubel, Clinton Castro, & Adam Pham (2020). "Algorithms, Agency, and Respect for Persons". Social Theory and Practice 46(3): 547-572.

• Even if a person is not harmed by the use of an algorithm, it may be wrong to use it to distribute benefits and burdens to an individual based on statistical data.

• An attractive moral principle:
It is pro-tanto morally wrong to penalize someone on the basis of what other people have done.

• In sum: each person has a right to be treated as an individual.

• The Case of Wisconsin vs. Loomis

COMPAS:
Correctional Offender Management Profiling for Alternative Sanctions.

• Assesses the risk of recidivism for convicted criminals.

• Used by judges to determine sentences.

• The data comprises information from the individual's past criminal record plus their answers to questions such as:

(1) "How often did you get in fights while at school?"

(2) Yes or no: "A hungry person has a right to steal."

(3) Yes or no: "If people make me angry…I can be dangerous."

But it also includes questions such as those on the next slide…

COMPAS: Excerpts from the questionnaire given to defendant.
Source: cf. Angwin et al. (2016) https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE

Hypothetical case: Aaron & Darren the drug mules.

Both arrested and convicted of transporting a large quantity of a controlled substance. First offense for both.

Aaron:
• Low crime neighborhood.
• No friends have been victims of crime.
• Biological parents.
• No family members ever arrested.
• No family members drink or take drugs.

Darren:
• High crime neighborhood.
• Several friends have been mugged.
• Adopted.
• Father convicted of car theft.
• Mother has alcohol problem.
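To see why this matters for treating people as individuals, here is a deliberately hypothetical scoring sketch. The items and weights are invented; this is not COMPAS's actual (proprietary) model. The point it illustrates: two defendants with identical offenses and identical personal conduct can receive different risk scores because the score is driven by facts about other people.

```python
# Hypothetical illustration only: invented items and weights, NOT the real
# (proprietary) COMPAS model. Aaron and Darren have identical offenses and
# identical personal conduct; the difference in score comes entirely from
# facts about other people (neighbours, friends, family).
WEIGHTS = {
    "prior_offenses": 2.0,            # about the defendant's own record
    "high_crime_neighborhood": 1.5,   # about the neighbours
    "friends_victims_of_crime": 1.0,  # about the friends
    "family_member_arrested": 1.5,    # about the family
    "family_substance_problem": 1.0,  # about the family
}

def risk_score(answers: dict) -> float:
    """Weighted sum of yes/no questionnaire-style answers (1 = yes, 0 = no)."""
    return sum(weight * answers.get(item, 0) for item, weight in WEIGHTS.items())

aaron = {"prior_offenses": 0}
darren = {"prior_offenses": 0, "high_crime_neighborhood": 1,
          "friends_victims_of_crime": 1, "family_member_arrested": 1,
          "family_substance_problem": 1}

print("Aaron:", risk_score(aaron))    # 0.0
print("Darren:", risk_score(darren))  # 5.0 -- same offense, higher "risk"
```

On the principle above, penalizing Darren on this basis is pro-tanto wrong, since every item that raises his score reports what other people have done.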


For Rubel, Castro, & Pham's discussion, see their:
Rubel, Castro, & Pham (2020). "Algorithms, Agency, and Respect for Persons". Social Theory and Practice 46(3): 547-572. (Especially pages 556-558 and 561-564.)

• For a skeptical take on the right to be treated as an individual, see
Lippert-Rasmussen, Kasper (2011). "We Are All Different". Journal of Ethics 15: 47-59.

Section 2.3: Agency Laundering

Rubel, Castro, & Pham:

• "Using an algorithm to make decisions can allow an agent to distance herself from morally suspect actions by attributing morally relevant characteristics to the algorithm" ("e.g. relevance, neutrality, reliability").

• "Agency Laundering".

• A plausible moral principle?
It is pro-tanto morally wrong to engage in "agency laundering": i.e., to obscure your responsibility for a decision that adversely affects other people.

Rubel, Castro, & Pham (2020: 564-567)
