
UNIT 8: AI ETHICS AND VALUES

ETHICS IN ARTIFICIAL INTELLIGENCE

 In today's rapidly evolving technological landscape, artificial intelligence (AI) has emerged as a powerful force with the potential to transform many aspects of human society.
 As AI grows more capable, however, the ethics of how it is built and used must also be considered.
 Ethics refers to the moral principles that govern human behavior and decision-making. It encompasses concepts
such as right and wrong, fairness, justice, and accountability.
 AI ethics aims to ensure that AI systems are developed and used in ways that are fair, transparent, accountable,
and aligned with human values.
EXAMPLE

 Suppose a CCTV camera were to spot your face in a crowd outside a sports stadium. In a police data center somewhere in the city or country, an artificial neural network analyzes images from the CCTV footage frame by frame. A floating cloud in the sky casts a shadow on your face, and the neural network mistakenly finds your face similar to the face of a wanted criminal.
 If the police were to call you aside for questioning and tell you they had reason to detain you, how would you defend yourself?
 Is it your fault that your shadowed face bears a slight resemblance to a person in the police records?
THE FIVE PILLARS OF AI ETHICS

 Explainability refers to the interpretability of AI systems, allowing users to understand how algorithms make
decisions and predictions.
 Fairness in AI means removing bias and discrimination from algorithms and decision-making models.
 Robustness in AI systems refers to their ability to consistently provide accurate and reliable results regardless of the conditions they encounter, over extended periods.
 Transparency involves openness and disclosure about the design, operation, and implications of AI systems.
 Privacy refers to the right of individuals to control their personal information and to be free from unwanted
intrusion into their lives.
BIAS

 Bias, in simple terms, means having a preference or tendency towards something or someone over others, often
without considering all the relevant information fairly.
 It can lead to unfair treatment or decisions based on factors like personal beliefs, past experiences, or
stereotypes.
 Question 1: Why are most images that show up when you do an image search for “vacation” pictures of beaches?
 Question 2: Why are most images that show up when you do an image search for “nurse” pictures of women?
 In today's interconnected world, artificial intelligence (AI) technologies play an important role in various aspects
of our lives, from healthcare to finance to criminal justice.
 However, as AI systems become more widespread, it is essential to recognize and address the presence of bias in these technologies.
 Bias awareness means understanding that AI systems might have unfair preferences because of different things
like the information they were taught with, the rules they follow, or the ideas they were built upon.
SOURCES OF BIAS

 1. Training Data Bias: AI learns from data, so it is important to check this data for bias.
 Data Sampling: Look at who is included in the data. If some groups are over-represented or under-represented, it can lead to problems (see the sketch after this list).
 Example - Facial Recognition: If a facial recognition program is trained mostly on pictures of white people, it might not work well for people of color.
 Example - Police Tools: If security data comes mainly from areas with many Black residents, it could lead to unfair treatment in policing.
 Labeling Bias: How data is labeled can also cause bias. If labels are inconsistent or leave certain groups out, the model can, for example, unfairly reject qualified candidates in job applications.
 Impact of Bias: Bias in training data can lead to unfair or inaccurate AI decisions in areas like security and hiring.
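
 A sampling gap like the ones above can be caught before a model is ever trained, simply by counting who appears in the data. The Python sketch below is a minimal illustration, not a standard tool: the function check_representation, the "group" field, the 10% threshold, and the toy face dataset are all hypothetical choices made for this example.

    from collections import Counter

    def check_representation(records, group_key="group", min_share=0.10):
        # Count how many samples belong to each demographic group.
        counts = Counter(r[group_key] for r in records)
        total = sum(counts.values())
        # Flag any group whose share of the data falls below the threshold.
        for group, n in sorted(counts.items()):
            share = n / total
            flag = "  <-- under-represented" if share < min_share else ""
            print(f"{group:>6}: {n:4d} samples ({share:.0%}){flag}")

    # Toy face dataset skewed toward one skin-tone group.
    faces = [{"group": "light"}] * 950 + [{"group": "dark"}] * 50
    check_representation(faces)

 A facial recognition model trained on this skewed set would see nineteen times more light-skinned faces than dark-skinned ones, which is precisely the sampling problem described above.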
SOURCES OF BIAS

 2. Algorithmic Bias: Using flawed data can lead algorithms to make repeated mistakes or unfair decisions.
 If the training data has bias, the algorithm can make that bias worse.
 Programming Errors: Bias can also come from mistakes made by developers, like giving too much importance to
certain factors based on their own biases.

 3. Cognitive bias happens when people's experiences and preferences influence how they think and make
decisions.
 Impact on AI: These biases can affect AI systems through the choice of data or how that data is prioritized.
 Example - Data Selection: If someone prefers data from Americans, they might ignore important information from
other populations worldwide.
ROLE PLAY ACTIVITY: UNDERSTANDING BIASED AI (NO NEED TO
REMEMBER)
1. Facial Recognition Technology
 What it is: Machines that recognize faces in pictures or videos.
 Problem: They often don't work well for people with darker skin or for women.
 Example of what can happen: If a machine mistakes someone for a criminal because of this bias, that person could get wrongly arrested. This makes people lose trust in police.
2. Predictive Policing Algorithms
 What it is: Programs that guess where crimes might happen based on past crime data.
 Problem: These programs can unfairly target certain neighborhoods, especially where many people of color live.
 Example of what can happen: If police focus too much on these neighborhoods, it can lead to unfair treatment of residents and increase tensions between the community and the police.
3. Algorithmic Hiring Systems
 What it is: Computers that help companies choose job applicants.
 Problem: Sometimes these systems unfairly favor certain groups, like men over women or people from certain backgrounds.
 Example of what can happen: This means qualified candidates might be ignored just because of their gender or race, making workplaces less diverse.
4. Healthcare Algorithms
 What it is: AI used in hospitals to help diagnose patients and predict their health outcomes.
 Problem: Some systems treat people differently based on race or income level.
 Example of what can happen: If a certain group doesn't get the same quality of care, their health may suffer more, leading to bigger health problems.
5. Credit Scoring Systems
 What it is: Computers that evaluate how likely someone is to pay back loans.
 Problem: These systems can be unfair to low-income people and people of color, often giving them lower scores.
 Example of what can happen: This means they might get denied loans, making it harder for them to improve their financial situation.
MAKING AI FAIR AND UNBIASED
Why It's Important:
 Avoiding Unfairness: When AI systems are biased, they can make unfair situations worse. For example, if a biased AI helps with hiring,
it might unfairly treat certain groups of people, leading to discrimination.
 Building Trust: If people think AI isn't fair, they won't want to use it. This lack of trust can cause problems for everyone who relies on
technology.
 Doing What's Right: Fixing bias in AI is important for ethical reasons. We want to ensure that AI is developed and used responsibly.

How to Reduce Bias in AI:


 Diverse Data: Use a wide variety of information to teach AI. The more different examples it sees, the less biased it can become.
 Detecting Bias: We need methods to find and check for bias in AI before it is used. This could involve looking at how the AI makes decisions for different groups of people (see the sketch after this list).
 Fair Algorithms: Create algorithms that prioritize fairness when making decisions. These special rules help ensure that AI treats
everyone fairly.
 Transparency: AI systems should be clear about how they make decisions. When people understand how AI works, they can identify
and fix any bias.
 Inclusive Teams: Having a team with diverse backgrounds helps catch biases that others might miss, making sure the AI is fair for
everyone.
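
 One way to make the "detecting bias" step concrete is to compare a model's positive-decision rates across groups, a so-called demographic-parity check. The Python sketch below is a hypothetical illustration: the function demographic_parity_gap and the toy hiring decisions are made up for this example and are not from any standard library.

    def demographic_parity_gap(predictions, groups):
        # Tally positive decisions (1s) and totals for each group.
        rates = {}
        for pred, group in zip(predictions, groups):
            hits, total = rates.get(group, (0, 0))
            rates[group] = (hits + pred, total + 1)
        # Share of positive decisions per group; a gap of 0.0 means
        # every group is treated identically.
        shares = {g: hits / total for g, (hits, total) in rates.items()}
        for g, s in sorted(shares.items()):
            print(f"{g}: {s:.0%} shortlisted")
        return max(shares.values()) - min(shares.values())

    # Toy hiring decisions: 1 = shortlisted, 0 = rejected.
    preds = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]
    groups = ["men"] * 5 + ["women"] * 5
    print(f"Parity gap: {demographic_parity_gap(preds, groups):.0%}")

 A large gap like the 60% one here does not prove discrimination on its own, but it flags exactly where the transparency and diverse-team checks above should focus.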
WHAT ARE AI POLICIES?
 AI policies are rules and guidelines that help us use artificial intelligence (AI) in a safe, fair, and responsible way. They are important
because they build trust and encourage innovation.

 Key Points About AI Policies:


 Respect for People:
 The first rule is to treat people well. This means:
 Being fair to everyone.
 Being clear and honest about how AI works.
 Making sure AI is safe to use.
 Taking responsibility if something goes wrong.

 Clear Rules Needed:


 We need specific guidelines for using AI, covering things like:
 Ensuring AI doesn’t make unfair decisions (like showing bias).
 Making sure AI is safe.
EXAMPLES OF AI POLICY GUIDELINES FROM BIG COMPANIES AND
ORGANIZATIONS: (ONLY REMEMBER NAMES)

 IBM AI Ethics Board:


 Focuses on creating ethical rules for using AI in different fields.
 Works on fairness, transparency, and making sure AI doesn’t have biases.

 Microsoft’s Responsible AI Page:


 Offers tools to help companies assess their AI for fairness and bias.

 Google’s AI Principles:
 Sets guidelines for ethical AI development, focusing on fairness, safety, and accountability.

 European Union’s Ethics Guidelines for Trustworthy AI:


 Establishes principles for trustworthy AI, including fairness and respect for people.
A MORAL DILEMMA

 What is a Moral Dilemma?


 A moral dilemma is a tricky situation where you have to make a choice, and there isn’t a clear right or wrong
answer. Each option you choose can lead to good or bad results, and each choice may be based on different
important values.

 Dilemmas in Artificial Intelligence (AI)


 When it comes to artificial intelligence (like self-driving cars), there can be moral dilemmas too. These happen
when the design or use of AI conflicts with our moral values. For example, should a self-driving car protect its
passengers or people walking on the street if an accident is about to happen?
WHAT IS THE MORAL MACHINE?

 What is the Moral Machine?

 The Moral Machine is an online platform that presents emergency scenarios and asks you to decide what a self-driving car should do in each one.
 For example, imagine the car can either:
 Swerve to avoid hitting pedestrians but put its passengers at risk, or
 Stay straight and hit the pedestrians but keep the passengers safe.
 You have to choose what you think the car should do and explain why you made that choice.
