Lecture 13 - Ethics - Requirements
13 - Ethics
Marina Jirotka
Human Centred Computing
Department of Computer
Science
MT 2025
Outline - ethics
• Research ethics
Ethical principles in research (and practice)
involving human participants
Credit: https://www.robotips.co.uk/ethical-black-box
Examples of ethical issues and concerns
how it started how it's going
How can technologies go wrong?
automation bias - the human tendency to trust the output of an algorithm more than their own intuition, deriving from overconfidence in systems and the inscrutability of their assumptions, processes and limitations
exacerbated by: marketing, AI hype, overselling the system; model complexity, lack of understanding

algorithmic bias - algorithms systematically failing to treat individuals equally, causing disproportionate harm or inconvenience to certain groups over others
exacerbated by: marketing, AI hype, overselling the system; model complexity, lack of understanding
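Both definitions become easier to reason about once measured. Below is a minimal, self-contained Python sketch (all data and names are invented for illustration, not from the lecture) of the raw ingredients of an algorithmic-bias audit: per-group selection rates and per-group false positive rates.

# Minimal audit sketch: per-group rates from labels, predictions and group tags.
# All data here is synthetic and illustrative.
from collections import defaultdict

def per_group_rates(y_true, y_pred, groups):
    """Return {group: (selection_rate, false_positive_rate)}."""
    stats = defaultdict(lambda: {"n": 0, "flagged": 0, "neg": 0, "fp": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1
        s["flagged"] += p            # how often the model flags this group
        if t == 0:
            s["neg"] += 1
            s["fp"] += p             # flagged despite a negative true label
    return {g: (s["flagged"] / s["n"],
                s["fp"] / s["neg"] if s["neg"] else float("nan"))
            for g, s in stats.items()}

# Toy example: group B is flagged more often on identical true labels.
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
for g, (sel, fpr) in per_group_rates(y_true, y_pred, groups).items():
    print(f"group {g}: selection rate {sel:.2f}, false positive rate {fpr:.2f}")
# group A: selection rate 0.50, false positive rate 0.00
# group B: selection rate 1.00, false positive rate 1.00

A persistent gap between groups on either rate is the kind of signal the following slides are about.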
Algorithmic bias
• usually unintentional
• allocative harms - e.g. access to credit, university admissions, job appointments, work, access to accommodation, policing, security, pricing…
• representational harms - e.g. representations and classifications relating to gender, ethnicity…
What enables these actors to do this?
Deliberate harms emerge from the use of technology to harm and exploit others: malicious or self-interested actors sacrifice common welfare in the name of personal gain.
Concerns about Machine Learning
• Inscrutability, lack of accountability - lack of transparency in what models are learning, their source data, and how they are applied
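One lightweight response to this inscrutability is to document a model and its data explicitly, in the spirit of "model cards" (Mitchell et al., 2019) and "datasheets for datasets" (Gebru et al., 2021). A minimal Python sketch of such a record follows; every field is an invented illustration, not a real system.

# Minimal transparency record, in the spirit of 'model cards'.
# Every field below is an invented illustration, not a real system.
model_card = {
    "model": "loan-risk-classifier v0.3 (hypothetical)",
    "intended_use": "triage support only; a human makes the final decision",
    "training_data": "2018-2023 loan outcomes, region X; known gaps: thin-file applicants",
    "excluded_uses": ["employment screening", "fully automated refusals"],
    "evaluation": {
        "overall_accuracy": 0.81,
        "per_group_false_positive_rate": {"group A": 0.12, "group B": 0.23},
    },
    "limitations": "performance unmeasured outside region X; drifts after ~12 months",
    "contact": "responsible team, not an individual",
}

# Publishing this alongside the system gives reviewers something
# concrete to hold it accountable to.
for key, value in model_card.items():
    print(f"{key}: {value}")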
‘Vicious Circle’?
[Diagram: a self-reinforcing cycle]
• biased data or features → biased predictions
• automation bias: reluctance to override system ‘decisions’
• reinforcing behaviour → more data → feedback loops
• throughout: systems being unaccountable; reasons for results / ‘decisions’ being uninterpretable
‘Vicious Circle’? - predictive policing
[Diagram: the same cycle, instantiated]
• biased data or features: black neighbourhoods already over-represented in the dataset (algorithmic bias - reliance on datasets)
• system predicts crimes in these areas → policing increases in these areas
• more crimes recorded in these areas → more data reinforcing the pattern
• results being unfair or biased, exaggerated by feedback loops
• throughout: systems being unaccountable; reluctance to override system ‘decisions’; reasons for results / ‘decisions’ being uninterpretable
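Why the loop amplifies, rather than merely preserves, an initial bias can be shown with a minimal simulation sketch, loosely inspired by Ensign et al. (2018) on runaway feedback loops in predictive policing. All numbers are illustrative assumptions, not data from the lecture.

# Runaway-feedback sketch, loosely inspired by Ensign et al. (2018),
# "Runaway Feedback Loops in Predictive Policing".
# All numbers are illustrative assumptions, not data from the lecture.
TRUE_RATE = [10, 10]        # identical underlying crime in areas 0 and 1
recorded = [10, 11]         # area 1 starts slightly over-represented

for day in range(1, 8):
    # Patrol wherever the records say the 'hotspot' is.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Only crime in the patrolled area gets observed and recorded.
    recorded[target] += TRUE_RATE[target]
    share = recorded[1] / sum(recorded)
    print(f"day {day}: patrolled area {target}, area 1 share of records = {share:.2f}")

# Area 1's share of the records climbs from 0.52 towards 1.0 even though
# the true crime rates are identical: the data now reflects patrolling,
# not crime.

The point is structural: once patrols follow recorded rather than true crime, the system generates the evidence that justifies its own predictions.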
UnBias
Risk assessments for offenders: COMPAS
“Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualised and equal justice… they may exacerbate unwarranted and unjust disparities that are already far too common...”
US Attorney General Eric Holder, 2014
• only 20% of those predicted to commit a violent crime had gone on to
do so;
• of those deemed likely to re-offend, 61% went on to be arrested, when
misdemeanours such as driving with an expired license were included;
• black people were almost twice as likely as white people to be falsely labelled as at risk of future offending;
• white people were mislabelled as low risk more often than black
people;
• even when statistical tests were run to isolate the effect of race from
criminal history, recidivism, age and gender, black people were still 77%
more likely to be labelled as at risk of committing a future violent crime
than white people and 45% more likely to be labelled as at risk of
committing any kind of crime.
• Unequally wrong for false positives in different populations = unfair (the ProPublica argument)
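The ProPublica argument is a claim about error rates conditioned on group membership and true outcome. A minimal Python sketch with invented counts (chosen to echo the shape of the widely reported disparity, not ProPublica's published figures):

# 'Unequally wrong' check: false positive rate per group.
# Counts below are invented for illustration, not ProPublica's figures.

def false_positive_rate(fp, tn):
    """Among people who did NOT go on to offend, the share labelled high risk."""
    return fp / (fp + tn)

# per group: (false positives, true negatives) among non-reoffenders
counts = {"group A": (45, 55), "group B": (23, 77)}

for group, (fp, tn) in counts.items():
    print(f"{group}: false positive rate = {false_positive_rate(fp, tn):.2f}")
# group A: 0.45 -- non-reoffenders mislabelled 'high risk' almost twice as often
# group B: 0.23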
• Bias/unfairness in algorithms can produce uneven outcomes – and outcomes that most negatively affect disadvantaged and vulnerable groups
• Pursuing ‘fairness’ in the operation of algorithms is challenging due to competing definitions of fairness. It is difficult/impossible to agree on a standard definition of fairness in the abstract and apply it to a specific scenario (see the numeric sketch after this list)
• Algorithmic transparency is a problematic solution due to reluctance of platforms/developers and complexity of the details to be communicated. Transparency is also meaningless without accountability.
• Algorithmic accountability – what might this involve? Industry standards; independent system evaluation and monitoring; an auditing system; a right to appeal?
• Users often appear apathetic - they are not comfortable with their online experience and the ways in which their data are collected but do not believe that they can change this.
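The competing-definitions point can be made concrete. When groups have different base rates, a risk score that is equally well calibrated in both groups must produce different false positive rates, so two reasonable fairness definitions cannot both hold; this is the trade-off formalised by Chouldechova and by Kleinberg, Mullainathan and Raghavan. A worked numeric sketch, with all counts invented for illustration:

# Worked example: a score calibrated within each group still yields
# unequal false positive rates when base rates differ.
# Each group maps score buckets to (positives, negatives); counts invented.
groups = {
    "A": {0.8: (40, 10), 0.4: (20, 30)},   # base rate 60/100
    "B": {0.8: (16, 4),  0.4: (32, 48)},   # base rate 48/100
}
THRESHOLD = 0.5   # flag anyone whose score exceeds this

for name, buckets in groups.items():
    for score, (pos, neg) in buckets.items():
        # calibration: within each bucket, P(positive | score) == score
        assert abs(pos / (pos + neg) - score) < 1e-9
    fp = sum(neg for s, (_, neg) in buckets.items() if s > THRESHOLD)
    tn = sum(neg for s, (_, neg) in buckets.items() if s <= THRESHOLD)
    print(f"group {name}: calibrated, but FPR = {fp / (fp + tn):.2f}")
# group A: calibrated, but FPR = 0.25
# group B: calibrated, but FPR = 0.08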
Embedding ‘fairness’ into design and development
• Toolkit (one programmatic example is sketched below)
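The toolkit on the slide is not named in this extract. As one illustrative example of the programmatic end of the toolkit spectrum, here is a minimal sketch using the open-source fairlearn package (an assumption: it requires pip install fairlearn, and it is not necessarily the toolkit shown in the lecture):

# Illustrative fairness-toolkit sketch using the open-source fairlearn
# package (assumed installed); data is synthetic and illustrative.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, false_positive_rate

y_true = [0, 0, 1, 1, 0, 0, 1, 1]                   # toy labels
y_pred = [0, 0, 1, 1, 1, 1, 1, 1]                   # toy model outputs
sex = ["f", "f", "f", "f", "m", "m", "m", "m"]      # toy sensitive feature

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "fpr": false_positive_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)        # metrics broken down per group
print(mf.difference())    # largest between-group gap per metric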
We want our algorithms in a sense to follow higher values, moral values that we think are more important than giving an exact reflection of the world. And that I think is a very interesting, but also in a sense very shady area in which, are we going to use the data as it is? Or are we going to change the data, or not change but adapt the way we look at the data to serve our purpose of being non-discriminatory… algorithms are inherently politicised [as they are connected to social policy and political power]
UnBias stakeholder workshop participant
How can we try to avoid harms in development?
How to involve your stakeholders ethically in the process of technology development?
(Research ethics)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4066473/
Ethical issues
• Collection of users’ data on a real-world system without consent
• Involvement of human participants MUST be ethical
When is it mostly okay?
• When your project uses data collected for this project, your data are fully anonymised and not identifiable, and the outcomes of the project have no potential ethical implications (a pseudonymisation sketch follows)
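“Fully anonymised and not identifiable” is a strong condition. Below is a minimal Python sketch of one common first step, pseudonymisation, with all names and fields invented for illustration. Note that pseudonymisation alone is NOT full anonymisation: remaining quasi-identifiers (age, postcode, job title…) can still re-identify participants.

# Pseudonymisation sketch: replace direct identifiers with random codes.
# NOTE: this alone is NOT full anonymisation -- combinations of the
# remaining fields can still re-identify participants.
import secrets

records = [
    {"name": "Alice Example", "email": "[email protected]", "rating": 4},
    {"name": "Bob Example", "email": "[email protected]", "rating": 2},
]

code_for = {}  # keep this mapping separate and access-controlled, then destroy it

def pseudonymise(record):
    key = record["name"]
    if key not in code_for:
        code_for[key] = "P" + secrets.token_hex(4)  # random, unlinkable code
    # Drop direct identifiers; keep only the research-relevant fields.
    return {"participant": code_for[key], "rating": record["rating"]}

safe = [pseudonymise(r) for r in records]
print(safe)  # e.g. [{'participant': 'P3f9a1c2e', 'rating': 4}, ...]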
Your starting point at Oxford CS:
www.cs.ox.ac.uk/research-ethics
How to ensure your requirements work is ethical?
• Informed consent: Participants should know about the
research and what it involves. They should give their explicit
consent to take part and have the right to refuse or withdraw
(some exceptions if observing in public places).
• Participants: Make sure participants are comfortable, be
careful about discussing sensitive topics.
• Study settings: Be very careful about observing and video
recording in settings that are normally private (e.g.
bathrooms, hospitals etc.)
• Data management: Store all data carefully and use
anonymity procedures when writing up and presenting
results.
How to ensure your requirements work is ethical?
Informed consent: Participants should know about the research and what it
involves. They should give their explicit consent to take part and have the right to
refuse or withdraw (some exceptions if observing in public places).
An information sheet
• What are we trying to find out?
  • State the purpose of your study and provide an overall summary of the project
• Why have I been invited to take part?
• Do I have to take part?
  • Give participants the option to refuse or to ask questions
• What will happen if I take part?
  • Details: process, what will be collected, where, how long, who, etc.
• What are the possible disadvantages of taking part?
• What are the advantages of taking part?
• Will the research be published? Could I be identified from any publications or other research outputs?
• Data Protection
• Who is conducting this research?
• What if there is a problem?
• Further information and details
How to ensure your requirements work is ethical?
Participants: Make sure participants are comfortable, be careful
about discussing sensitive topics.
Study settings: Be very careful about observing and video recording
in settings that are normally private (e.g. bathrooms, hospitals etc.)
How to ensure your requirements work is ethical?
1. Go to the departmental intranet: https://intranet.cs.ox.ac.uk/
2. Click on Research Support and scroll down to Research Ethics
3. This page has all the information needed about what courses to take, how to access them, and how to start an application in Worktribe
4. Worktribe link: https://ox.worktribe.com/ - Single Sign-On required
5. Once logged in, you can start a new application by hovering over Ethics_Create application, and the system will take you through it from there.
Further reading
• Hellman, D. (2019). “Measuring algorithmic fairness.” Virginia Public Law and Legal Theory Research Paper 2019-39.
• Hedden, B. (2021). “On statistical criteria of algorithmic fairness.” Philosophy and Public Affairs. 49(2): 209-231.
• Suresh, H. and Guttag, J. (ms). “A framework for understanding unintended consequences of machine learning.” arxiv.org/abs/1901.10002