
Data Science & Ethics

Lecture 10

Prof. David Martens


[email protected]
www.applieddatamining.com
@ApplDataMining
Lecture 10
▪ Ethical Deployment
• Access to system (censorship)
• Governance
• Unintended consequences

▪ Beyond current risks


▪ Exam
▪ Conclusion

Ethical Deployment
▪ Access to system (censorship)
• Limited Access
• Different treatment for different predictions
• Cautionary tale: Censoring search

Access to System
1. Limited Access
• Within the company
➢ Personal and sensitive data
➢ Banks/hospitals/universities/etc.: logging interactions (see the sketch after this list)

• Part of system inaccessible


➢ Google Gorilla problem: the label “gorilla” is no longer predicted
➢ Filter Bubble: which news and updates are you exposed to?
➢ Explanations: which of the (equally “good”) explanations to show?

• System inaccessible for some people


➢ Link with bias: Street Bump and smartphone ownership
➢ Clearview AI: should access be limited to law enforcement agencies? Only to those of human-rights-respecting countries?

▪ A lot of power: hence the importance of transparency
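To make the “logging interactions” point concrete, here is a minimal sketch of role-based access with an audit trail (hypothetical roles and record types; not any specific institution's system):

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: append-only, tamper-evident storage

ROLE_PERMISSIONS = {
    "clinician": {"medical_record"},
    "billing": {"invoice"},
}

def access(user: str, role: str, record_type: str, record_id: int) -> bool:
    """Grant access only if the role permits it, and log every attempt."""
    granted = record_type in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "record": (record_type, record_id),
        "granted": granted,
    })
    return granted

# A billing employee opening a medical record is denied -- and the attempt is logged.
print(access("j.doe", "billing", "medical_record", 421))  # False
```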


Access to System
2. Different versions for different persons
• Uber software based on geo-fencing (a sketch of the idea follows below)

• Post-GDPR, numerous non-European webpages were blocked for European readers
• Google allows removing links in Europe (the “right to be forgotten”)
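A minimal sketch of the geo-fencing idea (hypothetical fence coordinates and radius; not Uber's actual implementation):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geo-fence: a 5 km radius around a city-centre location.
FENCE_CENTER = (51.2213, 4.4051)  # Antwerp, illustrative only
FENCE_RADIUS_KM = 5.0

def app_version(lat, lon):
    """Serve different behaviour inside vs outside the fence."""
    inside = haversine_km(lat, lon, *FENCE_CENTER) <= FENCE_RADIUS_KM
    return "restricted_version" if inside else "standard_version"

print(app_version(51.22, 4.40))  # inside the fence
print(app_version(50.85, 4.35))  # Brussels: outside the fence
```

The ethical point: the same user gets a different system depending only on where they are standing.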

Different Treatments
▪ Data-driven Price Differentiation
• Examples abundant: common in e-commerce (e.g. plane tickets), targeted advertising, risk-based pricing
• Increases overall welfare if it allows serving a market otherwise not served (e.g. student discounts)

▪ Transparency and fairness


• Discrimination based on a sensitive attribute?
• Personal data used?
• Transparent about the differentiation?

▪ Costlier hotel rooms shown to Mac vs PC visitors [291] (see the sketch below)


• Fair?
• How to be transparent?
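A minimal sketch of what device-based price differentiation could look like (the 10% markup and the user-agent rule are assumptions for illustration; the actual mechanism in [291] was not disclosed):

```python
BASE_PRICE = 120.0  # hypothetical nightly hotel rate in EUR

def displayed_price(user_agent: str) -> float:
    """Show a higher price to visitors who appear to use a Mac.

    The sensitive point: the visitor never learns that differentiation
    happened, which is exactly the transparency question above.
    """
    markup = 1.10 if "Macintosh" in user_agent else 1.0  # hypothetical 10% markup
    return round(BASE_PRICE * markup, 2)

print(displayed_price("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)"))  # 132.0
print(displayed_price("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))      # 120.0
```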
Google Search in China
▪ Complex dance between Google and China
▪ Why does Google want to be in China?
1. Market size
2. Learn things

▪ Why does China want to welcome Google?


1. Improved search results
2. Chinese companies having access to expertise and prestige
3. Legitimation of Chinese Communist Party

Google Search in China
▪ 2006: Google launched google.cn (censored version)
• Google would reveal that some results have been removed
• This notification policy was later also implemented by Baidu
▪ 2010: Google removes censorship after hacking attack
• China blocked access to Google (and to Facebook and Twitter) behind the “Great Firewall of China”
• Chinese search engines started to remove notifications that search results
were removed.
▪ 2018: The Intercept reveals that Google is working on a new censored version, Dragonfly
• Internal uproar, Google ended the project, no plans to launch search in
China (reported in 2019)

▪ Lessons learnt
• No easy answer
• Transparency as a force for good
• Impact of employees
Unintended consequences
▪ Will AI bring about human extinction?
Unintended consequences
1. AI acts differently than intended
2. Impact of AI on humans is unintended
Unintended Consequences
AI acts differently

▪ Flash crashes in stock market

Unintended Consequences
AI acts differently

▪ AI acts differently: Tay, the racist chatbot

Unintended consequences
AI acts differently

▪ Wikipedia bots
• These bots add links to other Wikipedia pages, undo
vandalism, flag copyright violations, check spelling, etc.

▪ Study on how Wikipedia bots act (Tsvetkova et al., 2017)


• Over the period 2001–2010
• “Wikipedia bots spent years fighting silent, tiny battles with
each other”
• Undetected!

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0171774
Unintended consequences
Impact of AI unintended
Cashier
▪ Another job at risk
▪ United States: 3.4 million people worked as cashiers (2015)
▪ At the front line in the COVID-19 crisis

https://www.cnsnews.com/news/article/rudy-takala/top-2-us-jobs-number-employed-salespersons-and-cashiers
Amazon Go
▪ Product named “Just Walk Out”
▪ Data stored and analysed on Amazon Web Services
▪ Amazon handles the installation and a 24-hour helpdesk

“New jobs will emerge”
▪ 2020 US presidential candidate Andrew Yang described the daunting task of trying to offset lost jobs by spurring entrepreneurship and job creation: “We were pouring water into a bathtub with a giant hole ripped in the bottom”
Jobs lost, nothing new
▪ 2000-2010: US lost about 5.6 million manufacturing jobs, 88% of which
are attributed to automation and an increase in productivity
▪ John Maynard Keynes (1930):
• we are being inflicted with a new disease, ‘technological unemployment’:
“unemployment due to our discovery of means of economising the use of
labour outrunning the pace at which we can find new uses for labour”
• “We are suffering”
▪ 16th century: William Lee
• Invented the stocking frame knitting machine
• Sought patent protection and met with Queen Elizabeth I: “Consider thou
what the invention could do to my poor subjects. It would assuredly bring to
them ruin by depriving them of employment, thus making them beggars”
Jobs lost, Jobs created
▪ Jobs created (MIT study ‘The Work of the Future’)
• More productive workers in non-automated areas
• Total economic pie increases
• New jobs emerge (60% of 2018 jobs did not exist in 1946)

▪ So we’re fine?
• Net job gains are not evenly distributed
• The process takes time, with short-term hardship and social unrest
Impact of AI
▪ The consensus seems to be that 30–40% of jobs are at risk from automation over the next decade(s) in advanced economies
▪ Which ones? Also non-routine jobs
Solutions
1. Reskilling
• Learn hard-to-automate skills, and learn to work with machines
• “And as technology keeps changing, we need to focus more on
continuous education throughout our lives. And yes, giving everyone
the freedom to pursue purpose isn’t going to be free. People like me
should pay for it, and a lot of you are going to do really well, and you
should, too.” Zuckerberg (2017)
Solutions
2. Universal Basic Income
• Andrew Yang: 1,000 US$ per adult per month
• Advocated by Mark Zuckerberg, Elon Musk, Jack Dorsey, Larry Page
• Paid by: everyone, the wealthy, the innovators, the unemployed? (a back-of-the-envelope cost sketch follows below)
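A back-of-the-envelope sketch of the gross cost of such a proposal (the adult-population figure is a rough assumption, used only for scale):

```python
ADULTS = 250_000_000  # rough US adult population -- an assumption for scale
MONTHLY = 1_000       # US$ per adult per month, per Yang's proposal

annual_cost = ADULTS * MONTHLY * 12
print(f"${annual_cost / 1e12:.1f} trillion per year")  # $3.0 trillion
```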

3. Learn to live with the freedom of not working


• Being a stay-at-home mom or dad, artist, or author is arguably not held in as high esteem as being a successful entrepreneur or CEO
• “reexamine what we value, what we are collectively willing to pay for –
whether it’s teachers, nurses, caregivers, moms or dads who stay at
home, artists, all the things that are incredibly valuable to us right now
but don’t rank high on the pay totem pole – that’s a conversation we
need to begin to have.” Barack Obama (2016)
• “[W]e have been trained too long to strive and not to enjoy”
Keynes (1930)
Solutions
▪ Transparency
▪ Creativity

▪ By the government: regulation (cf. robot tax)


▪ By businesses: transparency, transition plan
▪ By you?

Governance
1. Set up an Ethical Oversight Committee

2. Establish a Policy

3. Implement the Policy

Governance - Committee
▪ What?
• To establish a policy
• To review potential data science uses
• To guide additional tooling and training
▪ Who?
• Representatives from all roles, with diverse background
• Senior management
• Potentially impacted groups
▪ Facebook’s Oversight Board
• Facebook’s “Supreme Court”, started in 2020
• Takes on appeal requests
• Members: a Nobel Peace Prize laureate, journalists, academics from various disciplines and the former prime minister of Denmark
• Independent?
Facebook’s Oversight Board
▪ Example decision

https://en.wikipedia.org/wiki/Oversight_Board_%28Meta%29

Governance - Policy
▪ Key principles that relate to the (relevant) data science practices
▪ Can guide employees, be used in training, and be used to remedy violations
▪ Likely dependent on the size of the company and the sector

▪ Example: AI at Google: our principles


▪ We will assess AI applications in view of the following objectives. We
believe that AI should:
1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.
▪ AI applications that Google will not pursue are listed as well
Pichai, Sundar (2018). AI at Google: our principles. https://www.blog.google/technology/ai/ai-principles/
Governance - Implementation
▪ Make employees aware of the policy
• Training
• Communication is key
▪ Process to get feedback
▪ Accountability: Effective and demonstrable measures, with potentially
negative consequences

Lecture 10: Ethical Deployment
▪ Ethical Deployment
• Access to system (censorship)
➢ Limited Access
➢ Different treatment for different predictions
➢ Cautionary tale: Censoring search

• Governance
• Unintended consequences

▪ Beyond current risks


▪ Exam
▪ Conclusion

Robot Rights and Duties
▪ Robot rights
• Similar to human and animal rights?
• Citizenship? “Saudi Arabia bestows citizenship on a robot
named Sophia” (2017)

[Images: Star Trek: The Next Generation; Star Trek: Voyager; “I, Robot”]

Robot Rights and Duties
▪ Robot duties
• To serve humans?
• Own legal status
➢ Short term: robot tax
➢ Risk of shifting blame

• Can AI be accountable? “A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.”

Weaponisation of AI
▪ Oct 31, 2019: U.S. DoD principles on the ethical use of AI
• “always be able to look into the ‘black box’”
▪ No more ethical thinking by “soldiers”
▪ Robot take-over of mankind?
▪ Efforts by U.S. Navy for autonomous drone weapons
• Similar announcements by Russia and Korea
• Global arms race looming
▪ Ban AI weapons? Open letter by the Future of Life Institute, signed among others by Stephen Hawking

Laws of Robotics
▪ Isaac Asimov (1942)
▪ Laws to guide the behavior of autonomous robots
1. A robot may not injure a human being or, through inaction, allow a human
being to come to harm.
2. A robot must obey the orders given it by human beings except where such
orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not
conflict with the First or Second Laws.

“I, Robot”
Artificial General Intelligence (AGI)
▪ AGI: The hypothetical intelligence of a machine that has the
capacity to understand or learn any intellectual task that a
human being can.
▪ AGI: a median remote co-worker (Sam Altman)
▪ Also called “strong AI”
▪ Many tests
• Turing test: conversation between a human and a machine,
where a third person would not be able to distinguish the
human from the machine
• Coffee test (Wozniak, co-founder of Apple): a machine enters an average home and makes coffee (finds the machine, coffee, water and a mug, and brews the coffee)
• Robot College Student Test (Goertzel): a machine enrolls in a university, takes and passes classes and obtains a degree
Technological Singularity
▪ Metaphor borrowed from physics:
• Center of a black hole is a singularity
• Past that point, the laws of physics as we know them no longer apply

“Interstellar”
Technological Singularity
▪ Singularity is the point at which “technological growth
becomes uncontrollable and irreversible, resulting in
unforeseeable changes to human civilization” (Wikipedia)
▪ “Artificial Super Intelligence”

https://innovationtorevolution.wordpress.com/2014/10/29/technological-singularity-from-fiction-to-reality/
Technological Singularity
▪ Singularity is the point at which “technological growth
becomes uncontrollable and irreversible, resulting in
unforeseeable changes to human civilization” (Wikipedia)
▪ Three important aspects
1. Superhuman: artificial intelligence outperforms human
intelligence
2. Exponential growth of technology: “explosion of intelligence,
resulting in a superintelligence”
3. Large, unforeseeable impact on humans: “all the change in
the last million years will be superseded by the change in the
next five minutes.” (Kevin Kelly, co-founder of Wired
Magazine)

https://www.youtube.com/watch?v=gpKNAHz0zH8
Technological Singularity
▪ Term from the science-fiction novel “Marooned in Realtime” (1986) by Vernor Vinge
▪ Popular interpretation by Ray Kurzweil

Ray Kurzweil
▪ Inventor, entrepreneur, futurist
▪ Good at making predictions about technology
• 1990: predicted a computer would defeat the world chess champion by 1998 (1997: IBM's Deep Blue defeated Kasparov)
• 1990: predicted the explosion of the WWW when there were only 2.6 million Internet users in the world
• Kurzweil's claimed accuracy rate comes to 86%
• “the best person I know at predicting the future of artificial
intelligence“ Bill Gates
• Why is he good at predicting the future?
➢ Predictions based on his belief in the exponential progress of technology, while “our intuition is linear” (see the sketch below)
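A tiny illustration of the gap between linear intuition and exponential progress, over 30 steps:

```python
linear = [i for i in range(1, 31)]            # 1, 2, 3, ..., 30
exponential = [2 ** i for i in range(1, 31)]  # 2, 4, 8, ..., ~1.07 billion

print(linear[-1])       # 30
print(exponential[-1])  # 1073741824: roughly 36 million times further
```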

Ray Kurzweil
▪ Director at Google
• Larry Page (co-founder Google): “Do it here. We'll give you the
independence you've had with your own company, but you'll
have these Google-scale resources.”
▪ Received 21 honorary doctorates, and honors from three
U.S. presidents.
▪ Plenty of critics as well: see for example
https://spectrum.ieee.org/computing/software/ray-kurzweils-slippery-futurism/

▪ Singularity prediction
• Turing test would be passed by 2029
• Book: The Age of Spiritual Machines (1999)

https://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence
Technological Singularity
▪ Pessimistic
• Bill Gates, Elon Musk, Stephen Hawking
• Ex Machina, Terminator, I, Robot, 2001: A Space Odyssey, etc.

▪ Optimistic
• Kevin Kelly, Ray Kurzweil, Yann LeCun
➢ We continue to augment our own thinking by offloading it onto non-biological cognition
➢ “We build the tools then the tools build us.” (Marshall McLuhan)
▪ We continue to become more non-biological
▪ The extended mind
▪ Not us versus them

Types of AI
[Diagram: intellectual power (vertical axis) versus time (horizontal axis)]
▪ Artificial Narrow Intelligence: better than a human on a specific task (e.g. credit scoring)
▪ Artificial General Intelligence: can learn any intellectual task that a human can
▪ Artificial Super Intelligence: better than humans on any intellectual task
Human Extinction?

https://pauseai.info/pdoom

Future of AI hard to foresee

Conclusion
▪ Future of AI hard to foresee, but impact surely large
▪ Important to think about what is right and wrong
• Dealing with AI
• AI itself

Lecture 10
▪ Ethical Deployment
• Access to system (censorship)
➢ Limited Access
➢ Different treatment for different predictions
➢ Cautionary tale: Censoring search

• Governance
• Unintended consequences

▪ Beyond current risks


▪ Exam
▪ Conclusion

Exam

Presentation /4
Question 1 /6
Question 2 /6
Question 3 /4

Total /20

Exam - example
▪ Two large questions
1. Explain and/or illustrate certain techniques or concepts.
For example, explain the methods (or metrics) to include (or measure)
fairness in the preprocessing stage. Illustrate with an example.
2. Answer a discussion case (similar to the ones covered in class), referring back to the techniques, concepts and cautionary tales seen in class.
See next slide
• Maximum two pages for each
▪ Five small questions, to test knowledge (max. 3 lines)
1. What is a Bonferroni correction?
2. What is a zero-knowledge proof?
3. What is demographic parity?
4. What is Homomorphic encryption?
5. What is Technological Singularity?
-1 per wrong answer
Example exam question
▪ The University of Antwerp has a wide variety of data on their students: their home address,
courses enrolled, absence due to covid-19, grades, etc. The rector (head of the university)
receives the following request from the Belgian ministry of education: “We ask that all
universities make the data on their students public, but to “anonymize” the names of the students
by hashing them and not including home address or other personal information in the dataset. For
each student, we want the following fields to be included in the dataset: a hashed version of the
student’s name, the courses he or she enrolled in, his/her grades on these courses, days of absence
in 2019 and 2020 due to covid-19, study program, nationality, date of birth, postal code and
gender. In that way social science research can be moved forward, by finding patterns in this data,
and universities could benefit from the discovered insights.”

▪ Answer the following questions:

(a) Briefly explain and illustrate how the hashing would work. (A sketch follows below.)
(b) What would be potential ethical pitfalls or outcries from students? Think of the different concepts from the
FAT Flow framework.
(c) What would be useful techniques so that this dataset could be leveraged for data science research, while
ensuring ethics?

▪ Suppose the university would want to use this data, to predict who will end up in a “good”
position after graduating.

(a) What are the ethical issues related to this task?
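For part (a), a minimal sketch of how the hashing could work (SHA-256 assumed; the name is illustrative), including why hashing names alone is weak pseudonymisation:

```python
import hashlib

def pseudonymise(name: str) -> str:
    """Replace a student's name by the hex digest of its SHA-256 hash."""
    return hashlib.sha256(name.encode("utf-8")).hexdigest()

print(pseudonymise("Jan Janssens"))  # 64-character hex string

# Pitfall: hashing is deterministic, so anyone holding a list of student
# names can hash them all and re-identify every record (a dictionary
# attack). A keyed hash (e.g. HMAC with a secret key) or per-release
# salting mitigates this.
```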


FAT Flow
▪ A Framework for Data Science Ethics

https://repository.uantwerpen.be/docstore/d:irua:1463
Cautionary tales
▪ Uber's God view
▪ DeepFake
▪ Sabotage
Data Science Ethics
▪ Data Science Ethics…
• Is about right and wrong
• Requires open discussions
• Is not easy
• Can bring value

▪ Warning signs: “No one has to know.” / “Don’t mail about this.”

▪ For data science projects:


• Consider ethics already before starting
• Don’t be afraid to add an ethical section in your reports or emails
Topics presentations
- In-depth discussion of a case of limited access (e.g. the ones on slide 4).
- Other cases on different versions of AI models for different persons, and the
ethical dilemmas and implications.
- Behavior modification by making predictions come true: the paper by Galit
Shmueli on different treatments based on the predictions of AI models (see
also the online video by her).
- Unintended consequences of using AI trading bots in the stock market.
- A post-AGI world: what it might look like.
(Should be grounded in some academic work.)
- What the AI Act says about the governance of AI, and the link with the FAT
Flow framework.
- AGI and ASI: estimated timelines and the opinions of big tech leaders and renowned academics.

