Data Science Ethics - Lecture 10 - Ethical Deployment
Lecture 10
Ethical Deployment
▪ Access to system (censorship)
• Limited Access
• Different treatment for different predictions
• Cautionary tale: Censoring search
Access to System
1. Limited Access
• Within company
➢ Personal and sensitive data
➢ Banks/Hospitals/Universities/etc.: logging interactions
Different Treatments
▪ Data-driven Price Differentiation
• Examples abound: common in e-commerce (e.g. plane
tickets), targeted advertising, risk-based pricing
• Can increase overall welfare if it allows a seller to serve a market
otherwise not served (e.g. student discounts), as the sketch below illustrates
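A toy numeric sketch in Python of this welfare argument; all prices, costs, and segment sizes below are invented for illustration, not from the lecture:

# Hypothetical two-segment market (numbers invented for illustration).
segments = {"regular": {"wtp": 100, "size": 80},  # wtp = willingness to pay
            "student": {"wtp": 40,  "size": 20}}
cost = 30  # marginal cost per unit

def profit_and_buyers(prices):
    """Profit and number of buyers when each segment faces its own price."""
    profit, buyers = 0, 0
    for name, seg in segments.items():
        if seg["wtp"] >= prices[name]:  # a segment buys only if WTP covers its price
            profit += (prices[name] - cost) * seg["size"]
            buyers += seg["size"]
    return profit, buyers

# One uniform price of 100: students are priced out of the market entirely.
print(profit_and_buyers({"regular": 100, "student": 100}))  # (5600, 80)
# Student discount at 40: students are now served, and profit rises too.
print(profit_and_buyers({"regular": 100, "student": 40}))   # (5800, 100)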
Google Search in China
▪ Complex dance between Google and China
▪ Why does Google want to be in China?
1. Market size
2. Learn things
Google Search in China
▪ 2006: Google launched google.cn (censored version)
• Google would reveal that some results had been removed
• This notification policy was later also adopted by Baidu
▪ 2010: Google removes censorship after a hacking attack
• China blocked access to Google (and to Facebook and Twitter) behind the
“Great Firewall of China”
• Chinese search engines started to remove the notifications that search
results had been removed
▪ 2018: The Intercept reveals that Google is working on a new censored
version, Dragonfly
• Internal uproar; Google ended the project, with no plans to launch search in
China (reported in 2019)
▪ Lessons learnt
• No easy answer
• Transparency as a force for good
• Impact of employees
Unintended consequences
▪ Will AI bring about human extinction?
Unintended consequences
1. AI acts differently than intended
2. Impact of AI on humans is unintended
Unintended consequences
AI acts differently
▪ Wikipedia bots
• These bots add links to other Wikipedia pages, undo
vandalism, flag copyright violations, check spelling, etc.
• Yet pairs of these well-intentioned bots ended up undoing each other’s
edits for years (“Even good bots fight”, PLOS ONE 2017)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0171774
Unintended consequences
Impact of AI unintended
Cashier
▪ Another job at risk
▪ United States: 3.4 million people worked as cashiers (2015)
▪ On the front line in the COVID-19 crisis
https://www.cnsnews.com/news/article/rudy-takala/top-2-us-jobs-number-employed-salespersons-and-cashiers
Amazon Go
▪ Product named “Just Walk Out”
▪ Data analysed on Amazon Web Services
▪ Amazon handles the installation and a 24-hour helpdesk
“New jobs will emerge”
▪ 2020 US presidential candidate Andrew Yang, describing the daunting
task of trying to offset the lost jobs by spurring entrepreneurship and
attempting to create new jobs: “We were pouring water into a bathtub with
a giant hole ripped in the bottom”
Jobs lost, nothing new
▪ 2000-2010: US lost about 5.6 million manufacturing jobs, 88% of which
are attributed to automation and an increase in productivity
▪ John Maynard Keynes (1930):
• we are being afflicted with a new disease, ‘technological unemployment’:
“unemployment due to our discovery of means of economising the use of
labour outrunning the pace at which we can find new uses for labour”
• “We are suffering”
▪ 16th century: William Lee
• Invented the stocking frame knitting machine
• Sought patent protection and met with Queen Elizabeth I: “Consider thou
what the invention could do to my poor subjects. It would assuredly bring to
them ruin by depriving them of employment, thus making them beggars”
Jobs lost, Jobs created
▪ Jobs created (MIT study ‘The Work of the Future’)
• More productive workers in non-automated areas
• Total economic pie increases
• New jobs emerge (60% of 2018 jobs did not exist in 1946)
▪ So we’re fine?
• Net job gains are not evenly distributed
• The process takes time, with short-term hardship and social unrest
Impact of AI
▪ Consensus estimate: 30-40% of jobs at risk from automation over the next
decade(s) in advanced economies
▪ Which ones? Also non-routine jobs
Solutions
1. Reskilling
• Focus on hard-to-automate skills, and learn to work with machines
• “And as technology keeps changing, we need to focus more on
continuous education throughout our lives. And yes, giving everyone
the freedom to pursue purpose isn’t going to be free. People like me
should pay for it, and a lot of you are going to do really well, and you
should, too.” Zuckerberg (2017)
Solutions
2. Universal Basic Income
• Andrew Yang: US$1,000 per adult per month
• Advocated by Mark Zuckerberg, Elon Musk, Jack Dorsey, Larry Page
• Paid by: everyone, the wealthy, the innovators, the unemployed?
Governance
1. Set up an Ethical Oversight Committee
2. Establish a Policy
Governance - Committee
▪ What?
• To establish a policy
• To review potential data science uses
• To guide additional tooling and training
▪ Who?
• Representatives from all roles, with diverse backgrounds
• Senior management
• Potentially impacted groups
▪ Facebook’s Oversight Board
• Facebook’s “Supreme Court”, started in 2020
• Takes on appeal requests
• Members: a Nobel Peace Prize laureate, journalists, academics from
various disciplines, and the former Prime Minister of Denmark
• Independent?
Facebook’s Oversight Board
▪ Example decision
https://en.wikipedia.org/wiki/Oversight_Board_%28Meta%29
Governance - Policy
▪ Key principles that relate to the (relevant) data science practices
▪ Can guide employees, be used in training, and be used to remedy violations
▪ Likely depends on the size of the company and the sector
Lecture 10: Ethical Deployment
▪ Ethical Deployment
• Access to system (censorship)
➢ Limited Access
➢ Different treatment for different predictions
➢ Cautionary tale: Censoring search
• Governance
• Unintended consequences
Robot Rights and Duties
▪ Robot rights
• Similar to human and animal rights?
• Citizenship? “Saudi Arabia bestows citizenship on a robot
named Sophia” (2017)
Robot Rights and Duties
▪ Robot duties
• To serve humans?
• Own legal status
➢ Short term: robot tax
➢ Risk of shifting blame
Weaponisation of AI
▪ Oct 31, 2019: U.S. DoD releases recommendations on the ethical use of AI
• “always be able to look into the ‘black box’”
▪ No more ethical thinking by “soldiers”?
▪ Robot take-over of mankind?
▪ Efforts by U.S. Navy for autonomous drone weapons
• Similar announcements by Russia and Korea
• Global arms race looming
▪ Ban AI weapons? Open letter by the Future of Life Institute,
signed by Stephen Hawking and thousands of researchers
Laws of Robotics
▪ Isaac Asimov (1942)
▪ Laws to guide the behavior of autonomous robots
1. A robot may not injure a human being or, through inaction, allow a human
being to come to harm.
2. A robot must obey the orders given it by human beings except where such
orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not
conflict with the First or Second Laws.
“I, Robot”
Artificial General Intelligence (AGI)
▪ AGI: The hypothetical intelligence of a machine that has the
capacity to understand or learn any intellectual task that a
human being can.
▪ AGI: a median remote co-worker (Sam Altman)
▪ Also called “strong AI”
▪ Many tests
• Turing test: a conversation between a human and a machine,
where a third person is not able to distinguish the
human from the machine
• Coffee test (Wozniak, co-founder of Apple): a machine enters an
average home and makes coffee (finds the machine, coffee, water and
mug, and brews the coffee)
• Robot College Student Test (Goertzel): a machine enrolls in a
university, takes and passes classes, and obtains a degree
Technological Singularity
▪ Metaphor borrowed from physics:
• The center of a black hole is a singularity
• Past the singularity, the laws of physics as
we know them no longer apply
“Interstellar”
Technological Singularity
▪ Singularity is the point at which “technological growth
becomes uncontrollable and irreversible, resulting in
unforeseeable changes to human civilization” (Wikipedia)
▪ “Artificial Super Intelligence”
https://innovationtorevolution.wordpress.com/2014/10/29/technological-singularity-from-fiction-to-reality/
Technological Singularity
▪ Singularity is the point at which “technological growth
becomes uncontrollable and irreversible, resulting in
unforeseeable changes to human civilization” (Wikipedia)
▪ Three important aspects
1. Superhuman: artificial intelligence outperforms human
intelligence
2. Exponential growth of technology: “explosion of intelligence,
resulting in a superintelligence”
3. Large, unforeseeable impact on humans: “all the change in
the last million years will be superseded by the change in the
next five minutes.” (Kevin Kelly, co-founder of Wired
Magazine)
https://www.youtube.com/watch?v=gpKNAHz0zH8
Technological Singularity
▪ Term from the science-fiction novel “Marooned in Realtime” (1986)
by Vernor Vinge
▪ Popular interpretation by Ray Kurzweil
43
Ray Kurzweil
▪ Inventor, entrepreneur, futurist
▪ Good at making predictions about technology
• 1990: predicted a computer would defeat the world chess
champion by 1998 (1997: IBM’s Deep Blue defeated Kasparov)
• 1990: predicted the explosive growth of the web when there were only 2.6
million Internet users in the world
• Kurzweil’s claimed accuracy rate comes to 86%
• “the best person I know at predicting the future of artificial
intelligence” (Bill Gates)
• Why is he good at predicting the future?
➢ Predictions based on his belief in the exponential progress of
technology, while “our intuition is linear” (see the sketch below)
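A minimal Python sketch of that point (the growth rates are invented and purely illustrative): projecting the same starting capability 30 years ahead, a linear rule barely moves while a doubling rule explodes.

# Project 30 years ahead from the same starting capability of 1.0.
start, years = 1.0, 30
linear = start + 1.0 * years       # grows by a fixed amount each year
exponential = start * 2 ** years   # doubles each year (an assumed Moore's-law-style rate)
print(f"linear projection:      {linear:.0f}x")        # 31x
print(f"exponential projection: {exponential:,.0f}x")  # 1,073,741,824x
# The two projections differ by more than seven orders of magnitude,
# which is why linear intuition underestimates exponential technologies.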
Ray Kurzweil
▪ Director at Google
• Larry Page (co-founder Google): “Do it here. We'll give you the
independence you've had with your own company, but you'll
have these Google-scale resources.”
▪ Received 21 honorary doctorates, and honors from three
U.S. presidents.
▪ Plenty of critics as well: see for example
https://spectrum.ieee.org/computing/software/ray-kurzweils-slippery-futurism/
▪ Singularity prediction
• Predicts the Turing test will be passed by 2029
• Book: The Age of Spiritual Machines (1999)
https://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence
Technological Singularity
▪ Pessimistic
• Bill Gates, Elon Musk, Stephen Hawking
• Ex Machina, Terminator, I, Robot, 2001: A Space Odyssey, etc.
▪ Optimistic
• Kevin Kelly, Ray Kurzweil, Yann LeCun
➢ We continue to augment our own thinking by offloading
cognition onto our tools
➢ “We build the tools then the tools build us.” (Marshall McLuhan)
▪ We continue to become more non-biological
▪ The extended mind
▪ Not us versus them
Types of AI
[Figure: intellectual power of types of AI over time]
Human Extinction?
https://pauseai.info/pdoom
Future of AI hard to foresee
Conclusion
▪ The future of AI is hard to foresee, but its impact will surely be large
▪ Important to think about what is right and wrong
• In dealing with AI
• In AI itself
Lecture 10
▪ Ethical Deployment
• Access to system (censorship)
➢ Limited Access
➢ Different treatment for different predictions
➢ Cautionary tale: Censoring search
• Governance
• Unintended consequences
Exam
Presentation /4
Question 1 /6
Question 2 /6
Question 3 /4
Total /20
Exam - example
▪ Two large questions
1. Explain and/or illustrate certain techniques or concepts.
For example, explain the methods (or metrics) to include (or measure)
fairness in the preprocessing stage. Illustrate with an example.
2. Answer a discussion case (similar to the ones we’ve covered in class),
referring back to the techniques, concepts and cautionary tales seen in class.
See the next slide.
• Maximum two pages for each
▪ Five small questions, to test knowledge (max. 3 lines)
1. What is a Bonferroni correction?
2. What is a zero-knowledge proof?
3. What is demographic parity?
4. What is Homomorphic encryption?
5. What is Technological Singularity?
-1 per wrong answer
Example exam question
▪ The University of Antwerp has a wide variety of data on their students: their home address,
courses enrolled, absence due to covid-19, grades, etc. The rector (head of the university)
receives the following request from the Belgian ministry of education: “We ask that all
universities make the data on their students public, but to “anonymize” the names of the students
by hashing them and not including home address or other personal information in the dataset. For
each student, we want the following fields to be included in the dataset: a hashed version of the
student’s name, the courses he or she enrolled in, his/her grades on these courses, days of absence
in 2019 and 2020 due to covid-19, study program, nationality, date of birth, postal code and
gender. In that way social science research can be moved forward, by finding patterns in this data,
and universities could benefit from the discovered insights.”
(a) Briefly explain and illustrate how the hashing would work (see the sketch below).
(b) What would be potential ethical pitfalls or outcries from students? Think of the different concepts from the
FAT Flow framework.
(c) What would be useful techniques so that this dataset could be leveraged for data science research, while
ensuring ethics?
▪ Suppose the university wanted to use this data to predict who will end up in a “good”
position after graduating.
https://repository.uantwerpen.be/docstore/d:irua:1463
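A minimal sketch for part (a), using Python's standard hashlib library (the student names are invented): hashing maps each name to a fixed-length digest, but the same name always yields the same digest, so anyone who can guess and hash candidate names can undo this kind of "anonymization".

import hashlib

def hash_name(name: str, salt: str = "") -> str:
    """Map a student's name to a fixed-length SHA-256 digest (hex string)."""
    return hashlib.sha256((salt + name).encode("utf-8")).hexdigest()

# Part (a): the ministry's request replaces each name by its digest.
print(hash_name("Ada Lovelace"))  # same input, same 64-character digest, every time

# Part (b) pitfall: a dictionary attack re-identifies students by hashing a
# list of guessable names and matching digests against the "anonymized" data.
guesses = ["Ada Lovelace", "Alan Turing"]
lookup = {hash_name(n): n for n in guesses}
leaked_digest = hash_name("Ada Lovelace")
print(lookup.get(leaked_digest))  # 'Ada Lovelace' -- the anonymization is broken

# A secret random salt blocks such precomputed lookups, but the remaining
# quasi-identifiers (date of birth, postal code, gender) can still single
# students out, which is one pitfall to discuss for parts (b) and (c).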
Cautionary tales
▪ Warning signs: “No one has to know.” / “Don’t mail about this.”