
Requirements

13 - Ethics

Marina Jirotka
Human Centred Computing
Department of Computer Science
1
MT 2025
Outline - ethics

• How does your technology affect stakeholders?

• Ethical designs and considerations

• How to involve your stakeholders ethically in the process of technology development?

• Research ethics

2
Ethical principles in research (and practice)
involving human participants

• Beneficence – well-being of people, maximisation of benefit, minimisation of harm.

• Respect for persons – humans as autonomous agents, rights to privacy and security, informed consent.

• Justice – avoiding exploitation, discrimination etc.

3
Credit: https://www.robotips.co.uk/ethical-black-box
Examples of ethical issues and concerns

4
how it started how it's going

5
How can technologies go wrong?

6
automation bias -
the human tendency to trust the output of an algorithm more than one's own intuition, deriving from overconfidence in systems and the inscrutability of their assumptions, processes and limitations
exacerbated by: marketing, AI hype, overselling the system; model complexity, lack of understanding

algorithmic bias -
algorithms systematically failing to treat individuals equally, causing disproportionate harm or inconvenience to certain groups over others
exacerbated by: marketing, AI hype, overselling the system; model complexity, lack of understanding
7
Algorithmic bias
• usually unintentional

• different consequences for different people

• when to address this?

• when the algorithm (system) is deployed - too late

• when the algorithm (system) has been designed - too hard to redesign (too late)

• while the algorithm (system) is being designed - who is responsible for taking action?

• before the algorithm (system) is designed - anticipation


8
Algorithmic bias

systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others

allocative harms
An allocative harm is when a system provides different groups unequal opportunities, resources, or capabilities.
"An allocative harm can range from a small but significant and systematic difference in treatment, all the way to complete denial of a particular service."

representational harms
A representational harm is an algorithmically curated or created depiction that is discriminatory or otherwise harmful.

https://machinesgonewrong.com/bias_i/
9
Algorithmic bias:

allocative harms
e.g. access to credit, university admissions, job appointments, work, access to accommodation, policing, security, pricing…

representational harms
e.g. representations and classifications relating to gender, ethnicity…

10
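The slides describe allocative harms conceptually; as a minimal sketch (not part of the lecture), one way such a harm could be surfaced in practice is to audit a deployed decision system by comparing per-group selection rates and false positive rates. The record fields, group labels and toy numbers below are all hypothetical.

```python
# Minimal audit sketch (illustrative assumptions): compare how often a binary
# decision system selects/flags people in each group, and how often it does so
# in error. Records are assumed to carry 'group', 'predicted' and 'actual' (0/1).
from collections import defaultdict

def per_group_rates(records):
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "fp": 0, "negatives": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["selected"] += r["predicted"]
        if r["actual"] == 0:                      # truly negative cases
            s["negatives"] += 1
            s["fp"] += r["predicted"]             # flagged despite being negative
    return {
        g: {
            "selection_rate": s["selected"] / s["n"],
            "false_positive_rate": s["fp"] / s["negatives"] if s["negatives"] else None,
        }
        for g, s in stats.items()
    }

# Hypothetical toy data: two groups with identical actual outcomes,
# but group B is flagged far more often than group A.
toy = (
    [{"group": "A", "predicted": 1, "actual": 0}] * 10 +
    [{"group": "A", "predicted": 0, "actual": 0}] * 40 +
    [{"group": "B", "predicted": 1, "actual": 0}] * 25 +
    [{"group": "B", "predicted": 0, "actual": 0}] * 25
)
print(per_group_rates(toy))
```

A large gap in either rate between groups is the kind of unequal treatment described on the previous slides; it does not by itself prove bias, but it flags where to look.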
What enables these actors to do this?

Deliberate harms emerge from the use of technology to harm and exploit others, by malicious or self-interested actors who exploit common welfare in the name of personal gain.

Accidental harms emerge from "unintentional side effects" - due to complexity or ignorance.

Why does this happen?
These harms disproportionately affect minorities, disadvantaged groups and low-paid workers - those who are already exploited and overlooked.

11
Concerns about Machine Learning

Automation bias - blind faith in opaque algorithms, which are often deeply flawed or limited

Feedback loops - the amplifying effects of applying an algorithm to a system, and the way those effects accelerate

Inscrutability, lack of accountability - lack of transparency in what models are learning, their source data, and how they are applied
12
‘Vicious Circle’?
(diagram: a self-reinforcing cycle)
• Biased data or features - input being unfair or biased; reliance on datasets; features over-represented in the dataset
• Predictions biased - algorithmic bias, exaggerated by feedback loops
• Automation bias - reluctance to override system ‘decisions’; systems being unaccountable; reasons for results / ‘decisions’ being uninterpretable
• More data reinforcing behaviour, reluctance to override predictions
13
‘Vicious Circle’? - predictive policing
(diagram: the same cycle applied to predictive policing)
• Biased data or features - results being unfair or biased; reliance on datasets; algorithmic bias: black neighbourhoods already over-represented in the dataset
• Predict crimes in these areas, increase policing in these areas - exaggerated by feedback loops
• Reluctance to override system ‘decisions’ - systems being unaccountable; reasons for results / ‘decisions’ being uninterpretable
• More data reinforcing - more recorded crimes in these areas

14
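A toy simulation (not from the lecture; every number is invented) makes the predictive-policing loop above concrete: if patrols are allocated in proportion to previously recorded crime, and crime is only recorded where patrols are present, an initial over-representation of one area in the dataset is simply reproduced round after round, even though the underlying crime rates are identical.

```python
# Toy feedback-loop sketch (hypothetical numbers). Two areas with EQUAL true
# crime rates; recording depends on patrol presence, and tomorrow's patrols
# follow today's recorded data.
true_rate = [10.0, 10.0]        # identical underlying daily crime in both areas
recorded = [60.0, 40.0]         # historical dataset: area 0 already over-represented

for day in range(1, 6):
    total = sum(recorded)
    patrol_share = [c / total for c in recorded]              # data-driven allocation
    new_records = [r * s for r, s in zip(true_rate, patrol_share)]
    recorded = [old + new for old, new in zip(recorded, new_records)]
    print(f"day {day}: patrol share = {patrol_share[0]:.2f} vs {patrol_share[1]:.2f}, "
          f"recorded so far = {recorded[0]:.0f} vs {recorded[1]:.0f}")
```

The 60/40 split in the historical data never corrects itself: the ‘decisions’ are self-confirming, which is the "more data reinforcing" arrow in the cycle.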
UnBias

• Are algorithms ever neutral?


• How can we be sure that algorithms operate in our
best interests?
• How can we judge the trustworthiness and
fairness of systems that rely heavily on algorithms?
• How can we develop resources and
recommendations that help prevent, control and
mitigate some of the problems associated with
these systems?

15
Risk assessments for offenders: COMPAS

16
“Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualised and equal justice… they may exacerbate unwarranted and unjust disparities that are already far too common...“
US Attorney General Eric Holder, 2014

17
• only 20% of those predicted to commit a violent crime had gone on to
do so;
• of those deemed likely to re-offend, 61% went on to be arrested, when
misdemeanours such as driving with an expired license were included;
• black people were almost twice as likely as white people to be falsely labelled
as at risk of future offending;
• white people were mislabelled as low risk more often than black
people;
• even when statistical tests were run to isolate the effect of race from
criminal history, recidivism, age and gender, black people were still 77%
more likely to be labelled as at risk of committing a future violent crime
than white people and 45% more likely to be labelled as at risk of
committing any kind of crime.

18
19
• Unequally wrong for false positives in different populations = unfair (ProPublica argument)

• Equally right in predicting recidivism = fair (Northpointe argument)

• Base populations have different levels of recidivism, so the algorithm cannot be equally wrong and equally right for both populations

• Technical measures to ‘correct’ for societal unevenness?


20
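The impossibility on this slide can be shown with simple arithmetic (a sketch with invented numbers, not the actual COMPAS figures): if the base rates of recidivism differ between two groups but the classifier is "equally right" for both, i.e. has the same precision and recall, then its false positive rates are forced apart.

```python
# Numeric sketch (hypothetical figures): equal precision and recall across two
# groups with different base rates necessarily yields unequal false positive rates.
def false_positive_rate(base_rate, precision, recall):
    tp = recall * base_rate                 # true positives per person
    fp = tp * (1 - precision) / precision   # implied false positives
    return fp / (1 - base_rate)             # share of the truly negative who are flagged

precision, recall = 0.6, 0.7                # identical for both groups ("equally right")
for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    print(group, "FPR =", round(false_positive_rate(base_rate, precision, recall), 3))
# group A FPR = 0.467, group B FPR = 0.2: both fairness criteria cannot hold at once
```

This is the arithmetic behind the ProPublica/Northpointe dispute: each side's fairness measure can be satisfied, but not both at the same time when base rates differ.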
• What causes unfairness/bias in algorithms?
• data
• values encoded into the algorithm

“Algorithms are inescapably value-laden. Operational parameters are specified by developers and configured by users with desired outcomes in mind that privilege some values and interests over others…”
Mittelstadt, Allo, Taddeo, Wachter, Floridi, 2016
21
• What do we mean by fairness in algorithms?
• Human vs algorithm bias in decision-making
• What model of fairness?
• Technical measures to ‘correct’ for societal
unevenness?

• (lack of) Transparency

• (lack of) Accountability


22
Wisconsin vs Loomis case, 2016.

• Loomis challenged a long prison sentence on the grounds that he was unable to assess the COMPAS algorithm

• the state supreme court reasoned that knowledge of the algorithm's output was a sufficient level of transparency

23
• Bias/unfairness in algorithms can produce uneven outcomes – and outcomes that most negatively affect disadvantaged and vulnerable groups
• Pursuing ‘fairness’ in the operation of algorithms is challenging due to competing definitions of fairness. It is difficult/impossible to agree on a standard definition of fairness in the abstract and apply it to a specific scenario
• Algorithmic transparency is a problematic solution due to reluctance of platforms/developers and complexity of the details to be communicated. Transparency is also meaningless without accountability.
• Algorithmic accountability – what might this involve? Industry standards; independent system evaluation and monitoring; an auditing system; a right to appeal?
• Users often appear apathetic - they are not comfortable with their online experience and the ways in which their data are collected but do not believe that they can change this.

24
Embedding ‘fairness’ into design and development

• Toolkit

• Raising awareness – developers have a responsibility as


a community

• Technical solutions and fixes

• Issues as a creative resource for design

25
26
We want our algorithms in a sense to follow higher values, moral values that we think are more important than giving an exact reflection of the world. And that I think is a very interesting, but also in a sense very shady area in which, are we going to use the data as it is? Or are we going to change the data, or not change but adapt the way we look at the data to serve our purpose of being non-discriminatory…algorithms are inherently politicised [as they are connected to social policy and political power]
UnBias stakeholder workshop participant

27
How can we try to avoid harms in development?

• Better requirements and design practices

• In PD workshops and elicitation

• Foreseeing possible consequences

• Mapping possible consequences to requirements and design options

• Identifying the ethical dimensions of the system development and deployment

28
How to involve your stakeholders
ethically in the process of technology
development?

(Research ethics)

29
30
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4066473/
Ethical issues
• Collection of users’ data on a real-world system without consent

• Affecting users’ emotions positively as well as negatively

• Involvement of human participants MUST be ethical

• = avoiding causing harm


31
“[The work] was consistent with Facebook’s
Data Use Policy, to which all users agree
prior to creating an account on Facebook,
constituting informed consent for this
research.”
Obtaining informed consent and allowing participants to
opt out are best practices in most instances under the
US Department of Health and Human Services Policy for
the Protection of Human Research Subjects (the
“Common Rule”). Adherence to the Common Rule is
PNAS policy, but as a private company Facebook
was under no obligation to conform to the
provisions of the Common Rule when it collected the
data used by the authors, and the Common Rule does
not preclude their use of the data.
32
When to consider the research ethics of your project?

• When your project involves direct interactions with humans: such as questionnaires, user studies, focus groups, interviews, observations, design workshops etc

• When your project involves the use of any data generated by humans: network traffic, biometrics, social media data

• When your project has the potential to affect humans, directly or indirectly
• Using or creating a technique that enables identification or targeting of humans
• Finding or creating system vulnerabilities, etc

33
When is it mostly okay?

• When your project uses pre-existing datasets
• Data are fully anonymised and not identifiable

• When your project uses data collected for this project, and your data are fully anonymised and not identifiable, and the outcomes of the project have no potential ethical implications

34
Your starting point at Oxford CS:
www.cs.ox.ac.uk/research-ethics

35
How to ensure your requirements work is ethical?
• Informed consent: Participants should know about the
research and what it involves. They should give their explicit
consent to take part and have the right to refuse or withdraw
(some exceptions if observing in public places).
• Participants: Make sure participants are comfortable, be
careful about discussing sensitive topics.
• Study settings: Be very careful about observing and video
recording in settings that are normally private (e.g.
bathrooms, hospitals etc.)
• Data management: Store all data carefully and use
anonymity procedures when writing up and presenting
results.
36
How to ensure your requirements work is
ethical?
Informed consent: Participants should know about the research and what it
involves. They should give their explicit consent to take part and have the right to
refuse or withdraw (some exceptions if observing in public places).

- Collection of any format of data directly from humans: questionnaire responses, audio/video recordings, designs contributed by your participants, photos of them
- Use of any data generated by humans: network traffic, biometrics, social media data
- Creating systems that have the potential to affect humans, directly or indirectly
- An information sheet should be provided before the study to support informed consent
- A consent form should be signed / agreed on with the participants before they participate in your study

37
An information sheet
• What are we trying to find out?
• State the purpose of your study and provide an overall summary of the project
• Why have I been invited to take part?
• Do I have to take part?
• Provide participants an option to refuse or ask questions
• What will happen if I take part?
• Details: process, what will be collected, where, how long, who etc
• What are the possible disadvantages of taking part?
• What are the advantages of taking part?
• Will the research be published? Could I be identified from any publications or other research outputs?
• Data Protection
• Who is conducting this research?
• What if there is a problem?
• Further information and details

38
How to ensure your requirements work is ethical?
Participants: Make sure participants are comfortable, be careful
about discussing sensitive topics.
Study settings: Be very careful about observing and video recording
in settings that are normally private (e.g. bathrooms, hospitals etc.)

- Participants should also be provided with an option to withdraw at any time
- Put additional considerations in place if sensitive topics or deception (or any disruptive) techniques may be used
- Are you being fair when deciding your participant inclusion criteria?
- Additional considerations for specific populations, such as people with special education needs, disabilities, underrepresented communities etc.
39
How to ensure your requirements work is ethical?
Data management: Store all data carefully and use anonymity
procedures when writing up and presenting results.
• Data anonymisation
• Delete any data containing any personally identifiable information immediately after processing/transcription
• Remove all personally identifiable information in the transcription, such as name, location, schools, companies, etc
• Consent forms or contact details should be kept securely
• Data safeguarding
• GDPR
• Linkage file
• Real identity and anonymised identity to support data withdrawal
• Should be kept separately and safely to avoid re-identification

40
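As an illustration of the linkage-file idea above, here is a minimal pseudonymisation sketch; the file names, fields and storage layout are assumptions for illustration, not a prescribed departmental procedure. Working data keep only a random participant code, while the mapping from codes to real identities is written to a separate file to be stored securely, so that later data withdrawal remains possible.

```python
# Pseudonymisation sketch (illustrative assumptions only).
import csv
import secrets

def pseudonymise(participants, linkage_path, anonymised_path):
    """participants: dicts with 'name', 'email' plus study fields."""
    linkage, anonymised = [], []
    for p in participants:
        pid = "P" + secrets.token_hex(4)                     # random participant code
        linkage.append({"participant_id": pid, "name": p["name"], "email": p["email"]})
        record = {k: v for k, v in p.items() if k not in ("name", "email")}
        record["participant_id"] = pid                       # no direct identifiers kept
        anonymised.append(record)

    # Linkage file: keep separately and securely (supports withdrawal, limits re-identification).
    with open(linkage_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["participant_id", "name", "email"])
        writer.writeheader()
        writer.writerows(linkage)

    # Working dataset: pseudonymised records for analysis and write-up.
    with open(anonymised_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(anonymised[0].keys()))
        writer.writeheader()
        writer.writerows(anonymised)

pseudonymise(
    [{"name": "Alice Example", "email": "alice@example.org", "response": "sample answer"}],
    linkage_path="linkage_file.csv",        # store on a separate, restricted drive
    anonymised_path="study_data.csv",
)
```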
How to ensure your requirements work is ethical?
1. Go to departmental intranet: https://intranet.cs.ox.ac.uk/
2. Click on Research Support and scroll down to Research Ethics
3. This page has all the information needed about what courses to take, how to access them, and how to start an application in Worktribe
4. Worktribe link: https://ox.worktribe.com/ - Single Sign-on required
5. Once logged in, you can start a new application by hovering over Ethics_Create application, and the system will take you through it from there.

41
Further reading
• Hellman, D. (2019). “Measuring algorithmic fairness.” Virginia Public Law and Legal Theory Research Paper 2019-39.
• Hedden, B. (2021). “On statistical criteria of algorithmic fairness.” Philosophy
and Public Affairs. 49(2): 209-231
• Suresh, H. and Guttag, J. (ms). “A framework for understanding unintended
consequences of machine learning.” arxiv.org/abs/1901.10002

42
