Introducing AI
(Music)
At IBM, we define AI as
anything that makes machines act more intelligently
We like to think of AI as
augmented intelligence.
We believe that AI should not attempt
to replace human experts, but rather extend human capabilities
and accomplish tasks that neither humans
nor machines could do on their own.
The internet has given us access to more information, faster.
Distributed computing and IoT have led to massive amounts of data,
and social networking has encouraged most of that data to be unstructured.
With Augmented Intelligence, we are putting information
that subject matter experts need at their fingertips,
and backing it with evidence so they can make informed decisions.
We want experts to scale their capabilities and let the machines do the time-consuming work.
How do we define intelligence?
Human beings have innate intelligence, defined as the intelligence that governs every activity in our body.
This intelligence is what causes an oak tree to grow out of a little seed, and an elephant to form from a
single-celled organism.
How does AI learn? The only innate intelligence machines have is what we give them.
We provide machines the ability to examine examples and create machine learning models based on the
inputs and desired outputs.
And we do this in different ways such as Supervised Learning, Unsupervised Learning,
and Reinforcement Learning, about which you will learn in more detail in subsequent lessons.
Based on strength, breadth, and application, AI can be described in different ways.
Weak or Narrow AI is AI that is applied to a specific domain. For example, language translators, virtual
assistants,
self-driving cars, AI-powered web searches, recommendation engines, and intelligent spam filters.
Applied AI can perform specific tasks but not learn new ones, making decisions based on programmed
algorithms and training data.
Strong AI, or Generalized AI, is AI that can interact with and operate on a wide variety of independent and unrelated
tasks.
It can learn new tasks to solve new problems, and it does this by teaching itself new strategies.
Generalized Intelligence is the combination of many AI strategies that learn from experience and can
perform at a human level of intelligence.
Super AI or Conscious AI is AI with human-level consciousness, which would require it to be self-aware.
Because we are not yet able to adequately define what consciousness is, it is unlikely that we will be able
to create a conscious AI in the near future.
AI is the fusion of many fields of study. Computer science and electrical engineering determine how AI is
implemented in software and hardware.
Mathematics and statistics determine viable models and measure performance.
Because AI is modeled on how we believe the brain works, psychology and linguistics play an essential role
in understanding how AI might work.
And philosophy provides guidance on intelligence and ethical considerations.
While the science fiction version of AI may be a distant possibility,
we already see more and more AI involved in the decisions we make every day.
Over the years, AI has proven to be useful in different domains, impacting the lives of people and our
society in meaningful ways.
I remember that morning going to the lab and I was thinking this is it,
this is the last Jeopardy game. It became real to me when
the music played and Johnny Gilbert said from IBM Research in Yorktown Heights, New York,
this is Jeopardy, and I just went: here it is.
This is the culmination of all this work. To be honest with you, I was emotional.
Watson. What is Shoe?
You are right. We actually took the lead. We were ahead of
them, but then we started getting some questions wrong.
Watson? What is Leg?
No, I'm sorry. I can't accept that. What is 1920's?
No. What is Chic?
No, sorry, Brad. What is Class?
Class. You got it. Watson.
What is Sauron? Sauron is right and that puts
you into a tie for the lead with Brad. The double Jeopardy round of
the first game I thought was phenomenal. Watson went on a tear.
Watson, who is Franz Liszt? You are right.
What is Violin? Who is the Church Lady?
Yes. Watson. What is Narcolepsy?
You are right, and with that you move to $36,681. Now we come to Watson. Who is Bram Stoker?
And the wager: hello! $17,973, and a two-day total of $77,147.
We won Jeopardy. They are very justifiably proud of what they've
done. I would've thought that technology
like this was years away but it's here now. I have the bruised ego
to prove it. I think we saw something important today.
Wow, wait a second. This is history. The 60th Annual Grammy Awards,
powered by IBM Watson. There's a tremendous amount of unstructured
data that we process on Grammy Sunday.
Our partnership with the Recording Academy really is focused on
helping them with some of their workflows for their digital production.
My content team is responsible not only for taking all this raw stuff that's
coming in, but curating it and publishing it.
You're talking about five hours of red carpet coverage, with 5,000 artists
making that trip down the carpet and 100,000 photos being shot.
For the last five hours, Watson has been using AI to analyze the colors,
patterns, and silhouettes of every single outfit that has passed through.
So we've been able to see all the dominant styles and
compare them to Grammy shows in the past. Watson's also analyzing the emotions
of Grammy nominated song lyrics over the last 60 years.
Get this, it can actually identify the emotional themes in
music and categorize them as joy, sadness, and everything else in between. It's
very cool. Fantasy sports are an incredibly
important and fun way that we serve sports fans.
Our fantasy games drive tremendous consumption across ESPN digital
properties, and they drive tune-in to our events and studio
shows. But our users have a lot of
different ways they can spend their time. So we have to continuously improve our game
so they choose to spend that time with us. This year, ESPN teamed up with
IBM to add a powerful new feature to their fantasy football platform.
Fantasy football generates a huge volume of content -
articles, blogs, videos, podcasts. We call it unstructured data -
data that doesn't fit neatly into spreadsheets or databases.
Watson was built to analyze that kind of information and turn it into usable insights.
We trained Watson on millions of fantasy football stories,
blog posts, and videos. We taught it to develop a scoring
range for thousands of players, their upsides and their downsides,
and we taught it to estimate the chances a player will
exceed their upside or fall below their downside. Watson even assesses a player's media buzz
and their likelihood to play. This is a big win for our fantasy football
players. It's one more tool to help them decide which
running back or QB to start each week. It's a great complement to
the award-winning analysts our fans rely on. As with any Machine Learning,
the system gets smarter all the time. That means the insights are better,
which means our users can make better decisions and have a better chance to
win their matchup every week. The more successful our fantasy players are,
the more time they'll spend with us. The ESPN and IBM partnership
is a great vehicle to demonstrate the power of enterprise-grade AI to millions
of people, and it's not hard to see how
the same technology applies to real life. There are thousands of business scenarios
where you're assessing value and making trade-offs.
This is what the future of decision-making is going to look like.
Man and machine working together, assessing risk and reward,
working through difficult decisions. This is the same technology
IBM uses to help doctors mine millions of pages of medical research and
investment banks find market-moving insights.
(Music)
Lesson Summary
In this lesson, you have learned:
AI-powered applications are creating an impact in diverse areas such as Healthcare, Education,
Transcription, Law Enforcement, Customer Service, Mobile and Social Media Apps, Financial Fraud
Prevention, Patient Diagnoses, Clinical Trials, and more.
Robotics and Automation, where AI is making it possible for robots to perceive unpredictable
environments around them in order to decide on the next steps.
Airport Security, where AI is making it possible for X-ray scanners to flag images that may look
suspicious.
Oil and Gas, where AI is helping companies analyze and classify thousands of rock samples to help
identify the best locations to drill for oil.
Some famous applications of AI from IBM include:
Watson playing Jeopardy to win against two of its greatest champions, Ken Jennings and Brad
Rutter.
Watson teaming up with the Academy to deliver an amplified Grammy experience for millions of
fans.
Watson collaborating with ESPN to serve 10 million users of the ESPN Fantasy App sharing insights
that help them make better decisions to win their weekly matchups.
Week 2
1. Cognitive Computing (Perception, Learning, Reasoning)
AI is at the forefront of a new era of computing, Cognitive Computing.
It's a radically new kind of computing, very different from the programmable systems
that preceded it, as different as those systems
were from the tabulating machines of a century ago.
Conventional computing solutions, based on the mathematical principles that emanate from
the 1940s, are programmed based on rules and logic
intended to derive mathematically precise answers,
often following a rigid decision tree approach. But with today's wealth of big data and
the need for more complex evidence-based decisions, such a rigid approach often breaks or
fails to keep up with available information. Cognitive Computing enables people to
create a profoundly new kind of value, finding answers and insights
locked away in volumes of data. Whether we consider a doctor diagnosing a
patient, a wealth manager advising
a client on their retirement portfolio, or even a chef creating a new recipe,
they need new approaches to put into context the volume of
information they deal with on a daily basis in order to derive value from
it. These processes serve to enhance human expertise.
Cognitive Computing mirrors some of the key cognitive elements of human expertise,
systems that reason about problems like a human does.
When we as humans seek to understand something and to make a decision,
we go through four key steps. First, we observe visible phenomena
and bodies of evidence. Second, we draw on what we know to interpret
what we are seeing to generate hypotheses about what
it means. Third, we evaluate which hypotheses are right
or wrong. Finally, we decide, choosing
the option that seems best and acting accordingly. Just as humans become experts by going through
the process of observation, evaluation, and decision-making,
cognitive systems use similar processes to reason about the information they read,
and they can do this at massive speed and scale.
Unlike conventional computing solutions, which can only handle
neatly organized structured data such as what is stored in a database,
cognitive computing solutions can understand unstructured data,
which is 80 percent of data today. All of the information that is produced primarily
by humans for other humans to consume. This includes everything from literature,
articles, research reports to blogs, posts, and tweets.
While structured data is governed by well-defined fields that
contain well-specified information, cognitive systems rely on natural language,
which is governed by rules of grammar, context, and culture.
It is implicit, ambiguous, complex, and a challenge to process.
While all human language is difficult to parse, certain idioms can be particularly challenging.
In English, for instance, we can feel blue because it's raining cats
and dogs while we fill in a form
someone asked us to fill out. Cognitive systems read and interpret text
like a person. They do this by breaking down
a sentence grammatically, relationally, and structurally, discerning meaning
from the semantics of the written material. Cognitive systems understand context.
This is very different from simple speech recognition,
which is how a computer translates human speech into a set of words.
Cognitive systems try to understand the real intent of the user's language,
and use that understanding to draw inferences through
a broad array of linguistic models and algorithms. Cognitive systems learn, adapt,
and keep getting smarter. They do this by learning from their interactions
with us, and from their own successes and failures,
just like humans do.
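To make the idea of breaking a sentence down grammatically and relationally more concrete, here is a minimal sketch using the open-source spaCy library and its small English model. This is an illustration chosen for these notes, not the tooling behind IBM's cognitive systems.

```python
# A minimal sketch of grammatical and relational sentence breakdown, assuming
# spaCy and its en_core_web_sm model are installed. Illustrative only.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("We can feel blue because it's raining cats and dogs.")

for token in doc:
    # word, part of speech, dependency relation, and the word it attaches to
    print(f"{token.text:10} {token.pos_:6} {token.dep_:10} -> {token.head.text}")
```

The parse by itself does not resolve the idioms; discerning that "feel blue" expresses sadness is where the broader array of linguistic models and training described above comes in.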
Lesson Summary
In this lesson, you have learned:
Cognitive computing systems differ from conventional computing systems in that they can:
Read and interpret unstructured data, understanding not just the meaning of words
but also the intent and context in which they are used.
Reason about problems in a way that humans reason and make decisions.
Learn over time from their interactions with humans and keep getting smarter.
Machine Learning
Machine Learning, a subset of AI,
uses computer algorithms to analyze data and
make intelligent decisions based on what it has learned.
Instead of following rules-based algorithms,
machine learning builds models to
classify and make predictions from data.
Let's understand this by exploring a problem
we may be able to tackle with Machine Learning.
What if we want to determine whether a heart can fail?
Is this something we can solve with Machine Learning?
The answer is, Yes.
Let's say we are given data such as
beats per minute, body mass index,
age, sex, and the result
whether the heart has failed or not.
With Machine Learning given this dataset,
we are able to learn and create a model
that given inputs, will predict results.
So what is the difference between this and using
statistical analysis to create an algorithm?
An algorithm is a mathematical technique.
With traditional programming, we take data and rules,
and use these to develop
an algorithm that will give us an answer.
In the previous example,
if we were using a traditional algorithm,
we would take the data such as beats per minute and BMI,
and use this data to create an algorithm that
will determine whether the heart will fail or not.
Essentially, it would be an if-then-else statement.
When we submit inputs,
we get answers based on
the algorithm we determined,
and this algorithm will not change.
Machine Learning, on the other hand,
takes data and answers and creates the algorithm.
Instead of getting answers in the end,
we already have the answers.
What we get is a set of rules that
determine what the machine learning model will be.
The model determines the rules, and
the if-then-else statement to apply, when it gets the inputs.
Essentially, what the model does is determine
what the parameters are in a traditional algorithm,
and instead of deciding arbitrarily that
beats per minute plus BMI equals a certain result,
we use the model to determine what the logic will be.
This model, unlike a traditional algorithm,
can be continuously trained and
be used in the future to predict values.
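To make the contrast concrete, here is a minimal sketch in Python. The feature names, thresholds, and tiny dataset are made up for illustration, and scikit-learn's LogisticRegression simply stands in for "a machine learning model".

```python
# Illustrative only: the features, thresholds, and tiny dataset are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Traditional programming: we write the rule ourselves (data + rules -> answers).
def heart_failure_rule(bpm, bmi):
    return bpm > 100 and bmi > 30   # a fixed if-then-else that never changes

# Machine learning: we supply data + answers, and the algorithm learns the rule.
X = np.array([[72, 22], [95, 31], [110, 35], [65, 24], [120, 38], [80, 27]])
y = np.array([0, 0, 1, 0, 1, 0])    # 1 = heart failure observed

model = LogisticRegression().fit(X, y)   # parameters are learned, not hand-coded
print(model.predict([[105, 33]]))        # predict for a new patient
print(model.predict_proba([[105, 33]]))  # with a level of confidence
```

The hand-written rule stays fixed, while the learned model can be retrained as new patient data arrives and then used to predict future values.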
Machine Learning relies on defining behavioral rules by
examining and comparing large datasets
to find common patterns.
For instance, we can provide
a machine learning program with
a large volume of pictures of
birds and train the model to return
the label "bird" whenever
it is provided a picture of a bird.
We can also create a label for "cat"
and provide pictures of cats to train on.
When the machine learning model is shown
a picture of a cat or a bird,
it will label the picture with some level of confidence.
This type of Machine Learning
is called Supervised Learning,
where an algorithm is trained on human-labeled data.
The more samples you provide
a supervised learning algorithm,
the more precise it becomes in classifying new data.
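As a small illustration of supervised learning on labeled examples, the sketch below replaces real bird and cat pictures with made-up two-number feature vectors; any scikit-learn classifier would do, and a k-nearest-neighbors model is used here only because it is compact.

```python
# A minimal supervised-learning sketch. Real image pixels are replaced by
# invented two-number feature vectors, purely to illustrate labeled training.
from sklearn.neighbors import KNeighborsClassifier

features = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]   # stand-ins for pictures
labels   = ["bird", "bird", "cat", "cat"]                     # human-provided labels

clf = KNeighborsClassifier(n_neighbors=3).fit(features, labels)

print(clf.predict([[0.85, 0.15]]))        # -> ['bird']
print(clf.predict_proba([[0.85, 0.15]]))  # a level of confidence for each label
```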
Unsupervised Learning, another type of machine learning,
relies on giving the algorithm
unlabeled data and letting it find patterns by itself.
You provide the input but not the labels,
and let the machine infer qualities on its own:
the algorithm ingests unlabeled data,
draws inferences, and finds patterns.
This type of learning can be useful for clustering data,
where data is grouped according to how similar it is
to its neighbors and dissimilar to everything else.
Once the data is clustered,
different techniques can be used to
explore that data and look for patterns.
For instance, you provide
a machine learning algorithm with
a constant stream of network traffic and
let it independently learn the baseline,
normal network activity, as well as the outlier and
possibly malicious behavior happening on the network.
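A minimal sketch of this idea, assuming scikit-learn is available; the "network traffic" is a handful of made-up (packets per second, bytes per packet) measurements, and k-means stands in for whichever clustering technique is actually used.

```python
# A minimal unsupervised-learning sketch: no labels are given, the algorithm
# groups similar observations on its own. Numbers are invented.
import numpy as np
from sklearn.cluster import KMeans

traffic = np.array([
    [10, 500], [12, 480], [11, 510],   # looks like normal baseline activity
    [300, 40], [320, 35],              # looks like an outlier pattern
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(traffic)
print(clusters)   # e.g. [0 0 0 1 1]: the cluster each observation fell into
```

Once the observations are grouped, other techniques can be used to inspect each cluster and decide whether the outlier group is possibly malicious.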
The third type of machine learning algorithm,
Reinforcement Learning,
relies on providing a machine learning algorithm
with a set of rules and constraints,
and letting it learn how to achieve its goals.
You define the state,
the desired goal, allowed actions, and constraints.
The algorithm figures out how to achieve
the goal by trying different combinations
of allowed actions,
and is rewarded or punished
depending on whether the decision was a good one.
The algorithm tries its best to maximize
its rewards within the constraints provided.
You could use Reinforcement Learning to teach
a machine to play chess or navigate an obstacle course.
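The sketch below is a toy illustration of reinforcement learning using tabular Q-learning, one classic algorithm rather than necessarily the one a production system would use. The environment is a made-up five-cell corridor: the state is the current cell, the allowed actions are stepping left or right, the constraint is staying on the corridor, and the reward favors reaching the goal cell.

```python
# A minimal Q-learning sketch on an invented 5-cell corridor: start at cell 0,
# goal at cell 4. Rewards and constraints are defined; the policy is learned.
import random

n_states, actions = 5, [-1, +1]          # allowed actions: step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != 4:                                    # until the goal is reached
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda x: Q[(state, x)]))
        nxt = min(max(state + a, 0), n_states - 1)       # constraint: stay on the corridor
        reward = 1.0 if nxt == 4 else -0.1               # reward the goal, punish dithering
        Q[(state, a)] += alpha * (reward + gamma * max(Q[(nxt, b)] for b in actions)
                                  - Q[(state, a)])
        state = nxt

print([max(actions, key=lambda a: Q[(s, a)]) for s in range(4)])  # learned policy: step right
```

After enough episodes, the learned policy for every non-goal cell is to step right, that is, the combination of allowed actions that maximizes reward within the constraints.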
A machine learning model is an algorithm used
to find patterns in the data without
the programmer having to
explicitly program these patterns.
Deep Learning
While Machine Learning is a subset of Artificial Intelligence,
Deep Learning is a specialized subset of Machine Learning.
Deep Learning layers algorithms to create a Neural Network,
an artificial replication of the structure and functionality of the brain,
enabling AI systems to continuously learn on the job and improve the quality and
accuracy of results.
This is what enables these systems to learn from unstructured data such as
photos, videos, and audio files.
Deep Learning, for example,
enables natural language understanding capabilities of AI systems,
and allows them to work out the context and intent of what is being conveyed.
Deep learning algorithms do not directly map input to output.
Instead, they rely on several layers of processing units.
Each layer passes its output to the next layer, which processes it and
passes it to the next.
The use of many layers is why it's called deep learning.
When creating deep learning algorithms, developers and
engineers configure the number of layers and the type of functions that connect
the outputs of each layer to the inputs of the next.
Then they train the model by providing it with lots of annotated examples.
For instance, you give a deep learning algorithm thousands of images and
labels that correspond to the content of each image.
The algorithm will run those examples through its layered neural network,
and adjust the weights of the variables in each layer of the neural network to be
able to detect the common patterns that define the images with similar labels.
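A minimal sketch of such a layered network, assuming TensorFlow/Keras is installed; the layer counts and sizes are illustrative choices, not a recommendation from the course.

```python
# A minimal sketch of a layered (deep) network in Keras. Layer sizes are
# arbitrary and chosen only for illustration, for 28x28 grayscale images.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # input layer
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 2, fed by layer 1
    tf.keras.layers.Dense(10, activation="softmax"),  # output: one score per label
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training adjusts the weights in every layer to fit the annotated examples,
# e.g. model.fit(images, labels, epochs=5)  (images/labels are placeholders here)
```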
Deep Learning fixes one of the major problems present in
older generations of learning algorithms.
While the efficiency and
performance of machine learning algorithms plateau as the datasets grow,
deep learning algorithms continue to improve as they are fed more data.
Deep Learning has proven to be very efficient at various tasks,
including image captioning, voice recognition and transcription,
facial recognition, medical imaging, and language translation.
Deep Learning is also one of the main components of driverless cars.
[MUSIC]
Neural Networks
An artificial neural network is
a collection of smaller units called neurons,
which are computing units modeled on the way
the human brain processes information.
Artificial neural networks borrow
some ideas from the biological
neural network of the brain,
in order to approximate some of its processing results.
These units or neurons take incoming data like
the biological neural networks
and learn to make decisions over time.
Neural networks learn through
a process called backpropagation.
Backpropagation uses a set of training data that
match known inputs to desired outputs.
First, the inputs are plugged
into the network and outputs are determined.
Then, an error function determines how far
the given output is from the desired output.
Finally, adjustments are made in order to reduce errors.
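For a network with only one layer, backpropagation reduces to the classic delta-rule update; the bare-bones numpy sketch below shows that minimal case with a made-up input/output table. Real backpropagation repeats the same forward-error-adjust cycle backwards through every layer.

```python
# A minimal sketch of the training cycle for a single sigmoid neuron:
# forward pass, error measurement, then weight and bias adjustments.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])   # known inputs (invented)
y = np.array([[1.0], [1.0], [0.0]])                  # desired outputs
w = np.zeros((2, 1))                                 # weights to be learned
b, lr = 0.0, 0.5                                     # bias and learning rate

for _ in range(2000):
    out = sigmoid(X @ w + b)        # 1) plug the inputs in, get outputs
    error = out - y                 # 2) how far from the desired outputs?
    w -= lr * (X.T @ error)         # 3) adjust weights to reduce the error
    b -= lr * float(np.sum(error))  #    adjust the bias too

print(np.round(sigmoid(X @ w + b), 2))  # outputs move toward the desired [1, 1, 0]
```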
A collection of neurons is called a layer,
and a layer takes in an input and provides an output.
Any neural network will have
one input layer and one output layer.
It will also have one or more hidden layers which
simulate the types of activity
that goes on in the human brain.
Hidden layers take in a set of weighted inputs
and produce an output through an activation function.
A neural network having
more than one hidden layer is
referred to as a deep neural network.
Perceptrons are the simplest
and oldest types of neural networks.
They are single-layered neural networks consisting
of input nodes connected directly to an output node.
Input layers forward the input values to the next layer,
by means of multiplying by
a weight and summing the results.
Hidden layers receive input from other nodes
and forward their output to other nodes.
Hidden and output nodes have a property called bias,
which is a special type of weight that applies to
a node after the other inputs are considered.
Finally, an activation function
determines how a node responds to its inputs.
The function is run against
the sum of the inputs and bias,
and then the result is forwarded as an output.
Activation functions can take different forms,
and choosing them is a critical component
to the success of a neural network.
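A minimal sketch of a single node's forward pass, with made-up numbers, showing how the weighted sum plus bias flows through a few different activation functions.

```python
# Illustrative only: inputs, weights, and bias are invented numbers.
import numpy as np

inputs  = np.array([0.5, -1.2, 3.0])
weights = np.array([0.8,  0.1, -0.4])
bias    = 0.2

z = inputs @ weights + bias        # weighted sum of the inputs plus the node's bias

activations = {
    "step":    float(z > 0),              # classic perceptron output
    "sigmoid": 1 / (1 + np.exp(-z)),      # squashes the result into (0, 1)
    "relu":    max(0.0, z),               # passes positives, zeroes out negatives
    "tanh":    float(np.tanh(z)),         # squashes the result into (-1, 1)
}
print(z, activations)   # the choice of activation changes how the node responds
```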
Convolutional neural networks or CNNs are
multilayer neural networks that take
inspiration from the animal visual cortex.
CNNs are useful in applications such as image processing,
video recognition, and natural language processing.
A convolution is a mathematical operation,
where a function is applied to another function
and the result is a mixture of the two functions.
Convolutions are good at detecting
simple structures in an image,
and putting those simple features
together to construct more complex features.
In a convolutional network,
this process occurs over a series of layers,
each of which conducts
a convolution on the output of the previous layer.
CNNs are adept at building
complex features from less complex ones.
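As an illustration, here is a small convolutional network defined with Keras (assumed installed); the layer sizes are arbitrary and sized for 28x28 grayscale images.

```python
# A minimal CNN sketch: each Conv2D layer convolves the output of the previous
# layer, building more complex features from simpler ones. Sizes are illustrative.
import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu",
                           input_shape=(28, 28, 1)),      # detect simple structures
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),  # combine into complex features
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
cnn.summary()
```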
Recurrent neural networks or RNNs,
are recurrent because they perform
the same task for every element of a sequence,
with prior outputs feeding subsequent stage inputs.
In a general neural network,
an input is processed through
a number of layers and an output is
produced, with an assumption that
any two successive inputs are independent of each other,
but that may not hold true in certain scenarios.
For example, when we need to consider
the context in which a word has been spoken,
in such scenarios, dependence on
previous observations has to
be considered to produce the output.
RNNs can make use of information in long sequences,
each layer of the network representing
the observation at a certain time.
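A minimal Keras sketch of a recurrent network for a text sequence task; this is illustrative only, and the vocabulary size, layer sizes, and sentiment-style output are assumptions rather than details from the video.

```python
# A minimal RNN sketch: each element of the sequence is processed in turn,
# with prior outputs feeding the next step. Sizes are invented for illustration.
import tensorflow as tf

rnn = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=32),  # word IDs -> vectors
    tf.keras.layers.SimpleRNN(64),             # prior outputs feed each subsequent step
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. sentiment of the whole sentence
])
rnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```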
(Music)
Lesson Summary
In this lesson, you have learned:
Machine Learning, a subset of AI, uses computer algorithms to analyze data and make intelligent
decisions based on what it has learned. The three main categories of machine learning algorithms
include Supervised Learning, Unsupervised Learning, and Reinforcement learning.
Deep Learning, a specialized subset of Machine Learning, layers algorithms to create a neural
network enabling AI systems to learn from unstructured data and continue learning on the job.
Neural Networks, a collection of computing units modeled on biological neurons, take incoming
data and learn to make decisions over time. The different types of neural networks include
Perceptrons, Convolutional Neural Networks or CNNs, and Recurrent Neural Networks or RNNs.
In the Machine Learning Techniques and Training topic, you have learned:
Supervised Learning is when we have class labels in the data set and use these to build the
classification model.
Supervised Learning is split into three categories – Regression, Classification, and Neural
Networks.
Machine learning algorithms are trained using data sets split into training data, validation data,
and test data (a minimal sketch of such a split follows the reading list below).
[Optional] To learn more about Machine Learning, Deep Learning, and Neural Networks, read
these articles:
Models for Machine Learning
Applications of Deep Learning
A Neural Networks deep dive
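Here is that minimal sketch of a training/validation/test split, assuming scikit-learn; the 60/20/20 proportions and the made-up arrays are illustrative only.

```python
# Illustrative only: invented data, split into train / validation / test sets.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)   # made-up features
y = np.arange(50) % 2               # made-up labels

# First carve off the test set, then split the rest into training and validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25,
                                                  random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 30 10 10
```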
AI Application Areas
Lesson Summary
In this lesson, you have learned:
Natural Language Processing (NLP) is a subset of artificial intelligence that enables computers to
understand the meaning of human language, including the intent and context of use.
Speech-to-text enables machines to convert speech to text by identifying common patterns in the
different pronunciations of a word, mapping new voice samples to corresponding words.
Speech Synthesis enables machines to create natural sounding voice models, including the voice
of particular individuals.
Computer Vision enables machines to identify and differentiate objects in images the same way
humans do.
Self-driving cars are an application of AI that can utilize NLP, speech, and most importantly,
computer vision.
Week 3
Exploring Today's AI Concerns
Welcome to exploring today's AI concerns.
In this video, you will learn about some of today's hot topics in AI.
First, you will hear about why trustworthy AI is the hot topic in AI.
Then, you will hear about how AI is used in facial recognition technologies,
in hiring, in marketing on social media, and in healthcare.
People frequently ask me what the current hot topics in AI are, and
I will tell you that whatever I answer today is likely to be different
next week or even by tomorrow.
The world of AI is extremely dynamic, which is a good thing.
It's an emerging technology with an amazing amount of possibilities and
the potential to solve so many problems, so
much faster than we previously thought possible.
Now, as we've seen, in some cases it can have harmful consequences.
And so, I would say that the hot topic in AI is how do we do this responsibly?
And IBM has come up with five pillars to address this issue,
kind of summarizing the idea of responsible AI.
That is explainability, transparency, robustness, privacy, and fairness.
And we can go into those topics in more depth, but
I want to emphasize two things here, and
one is that this is not a one-and-done sport.
If you're going to use AI, if we're going to put it into use in society,
this is not something you just address at the beginning or at the end.
This is something you have to address throughout the entire lifecycle of AI.
These are questions you have to ask whether you're at the drawing board,
whether you're designing the AI, you're training the AI or
you have put it into use or you are the end user who's interacting with the AI.
And so, those five pillars are
things you want to constantly think about throughout the entire lifecycle of AI.
And then second, and I think even more importantly, this is a team sport:
we all need to be aware of both the potential good and
the potential harm that comes from AI.
And encourage everybody to ask questions.
Make room for people to be curious about how AI works and what it's doing.
And with that I think we can really use it to address good problems and
have some great results and mitigate any of the potential harm.
So stay curious.
In designing solutions around Artificial Intelligence, or AI,
facial recognition has become a prominent use case.
There are really three typical categories of models and
algorithms being designed. Facial detection is simply
detecting whether something is a human face versus, say, a dog or a cat.
This type of facial recognition happens without uniquely identifying who
that face might belong to.
In facial authentication, you might use this type of facial
recognition to open up your iPhone or your Android device.
In this case, we provide one-to-one authentication by
comparing the features of a face image with a previously stored single image,
meaning that you are really only comparing the image with the distinct image
of the owner of the iPhone or Android device.
In facial matching, we compare the image with a database of other images or photos.
This is different from the previous cases in that the model is trying to
determine a facial match of an individual against a database of
images or photos belonging to other humans.
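One common way such comparisons are implemented, and this is an assumption for illustration rather than a detail from the video, is to reduce each face image to a numeric feature vector and then compare vectors. A minimal sketch with made-up numbers:

```python
# Illustrative sketch only: face images are assumed to have already been reduced
# to numeric feature vectors ("embeddings") by some model; the vectors are invented.
import numpy as np

def similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

probe = np.array([0.9, 0.1, 0.3])   # features of the face at the camera

# One-to-one authentication: compare only against the device owner's stored image.
owner = np.array([0.88, 0.12, 0.28])
print("unlock" if similarity(probe, owner) > 0.95 else "reject")

# One-to-many matching: compare against a whole database of other people's photos.
database = {"person_a": np.array([0.1, 0.9, 0.4]),
            "person_b": np.array([0.91, 0.09, 0.31])}
print(max(database, key=lambda name: similarity(probe, database[name])))
```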
There are many different examples of facial recognition,
many of which you have no doubt experienced in your day-to-day activity.
Some have proven to be helpful, while others have shown to be not so helpful, and
then there are others that have proven to be downright criminal in nature,
where certain demographics of people have been harmed because of the use of these
facial recognition systems.
We've seen facial recognition solutions in AI systems
provide significant value in scenarios like navigating through an airport or
going through a security line.
Or even, as in the example we talked
about earlier, using facial recognition to unlock your iPhone, or
possibly to unlock your home or the door of your automobile.
These are all helpful uses of facial recognition technologies, but
there are also some clear examples and use cases that must be off-limits.
These might include identifying a person in a crowd without the explicit permission
of that person, or doing mass surveillance on a single person or a group of people.
These types of uses of the technology raise important privacy,
civil rights, and human rights concerns.
When used the wrong way by the wrong people, facial recognition
technologies can no doubt be used to suppress dissent,
infringe upon the rights of minorities,
or simply erase basic expectations of privacy.
AI is being increasingly introduced into each stage of workforce
progression: hiring, onboarding, career progression
including promotions and rewards, handling attrition, and so on.
Let's start with hiring.
Consider an organization that receives thousands of job applications.
People applying for all kinds of jobs, front office,
back office, seasonal, permanent.
Instead of having large teams of people sit and
sift through all these applications, AI helps you rank and
prioritize applicants against targeted job openings,
presenting a list of the top candidates to the hiring managers.
AI solutions can process text in resumes and
combine that with other structured data to help in decision making.
Now, we need to be careful to have guardrails in place.
We need to ensure the use of AI in hiring is not biased across
sensitive attributes like age, gender, ethnicity, and the like,
even when those attributes are not directly used by the AI but
may be creeping in from proxy attributes like zip code or
the type of job previously held.
One of the hot topics in AI today is its application in
marketing on social media.
It has completely transformed how brands interact
with their audiences on social media platforms like
TikTok, LinkedIn, Twitter, Instagram, Facebook.
AI today can create ads for you, it can create social media posts for you.
It can help you target those ads appropriately.
It can use sentiment analysis to identify new audiences for you.
All of this drives incredible results for a marketeer.
It improves the effectiveness of the marketing campaigns while dramatically
reducing the cost of running those campaigns.
Now, the same techniques and capabilities that AI produces for
doing marketing on social media platforms also raise some ethical questions.
The marketing is successful because of all of the data that
social media platforms collect from their users.
Ostensibly, this data is collected to deliver more
personalized experiences for end users.
It's not always explicit what data is being collected and
whether you are providing your consent for them to use that data.
Now, the same techniques that are so effective for
marketing campaigns for brands can also be applied to
generating misinformation and conspiracy theories,
whether political or scientific,
and this has horrendous implications for our communities at large.
This is why it is absolutely critical that all enterprises
adhere to some clear principles around transparency,
explainability, trust, and privacy in terms of how they use AI or
build AI into their solutions and platforms.
The use of AI is increasing across all healthcare segments:
healthcare providers, payers, life sciences, and so on.
Payer organizations are using AI and
machine learning solutions that tap into claims data,
often combining it with other data sets like social determinants of health.
A top use case is disease prediction for coordinating care.
For example, predicting who in the member population is likely to have
an adverse condition, such as an ER visit, in the next three months, and
then providing the right forms of intervention and prevention.
Equitable care becomes very important in this context.
We need to make sure the AI is not biased across sensitive attributes like age,
gender, ethnicity, etc.
Across all of these, of course, is conversational AI: virtual agents,
as well as systems that help humans better serve the member population.
That has become table stakes.
Across all of these use cases of AI in health care, we see a few common things:
being able to unlock insights from the rich sets of data the organization owns,
improving the member or patient experience, and
having guardrails in place to ensure AI is trustworthy.
[MUSIC]
Exploring AI and Ethics
Welcome to exploring AI and ethics.
In this video, you will learn about what AI ethics is, and why it matters.
You will find out, what makes AI ethics a socio-technical challenge,
what it means to build and use AI ethically,
and how organizations can put AI ethics into action.
AI, Artificial Intelligence, is very pervasive in everybody's life.
Even if we often don't realize it, we use it when we use a credit
card to buy something online, when we search for something on the web,
and when we post, like, or follow somebody on a social platform.
We even use it when we drive, with the navigation support and
driver-assistance capabilities of cars based on AI.
This pervasiveness very quickly generates significant transformations in our lives,
and also in the structure and equilibrium of our society.
This is why AI, besides being a technical and scientific discipline,
also has a very significant social impact.
This raises a lot of ethical questions about how AI should be designed,
developed, deployed, used, and also regulated.
The socio-technical dimension of AI requires efforts to identify
all stakeholders, who go well beyond technical experts and
also include sociologists, philosophers, economists, policymakers,
and all the communities that are impacted by the deployment of this technology.
Inclusiveness is necessary in defining the ecosystem,
in all the phases of AI development and deployment, and
also in assessing the impact of AI in the deployment scenario.
Without it, we risk creating AI only for some,
and leaving many others behind in a disadvantaged position.
Everybody needs to be involved in defining the vision of the future that we
want to build using AI and other technology as a means and not as an end.
To achieve this, appropriate guidelines are necessary to drive the creation and
use of AI in the right direction.
Technical tools are necessary and useful, but they should be complemented by
principles, guardrails, well-defined processes, and effective governance.
We should not think that all this slows down innovation.
Think about traffic rules: it may seem that traffic lights,
right-of-way rules, stop signs, and speed limits are slowing us down.
However, without them we would not drive faster;
we would actually drive much slower, because we would always be in
a complete state of uncertainty about other vehicles and pedestrians.
AI ethics identifies and
addresses the socio-technical issues raised by this technology,
and makes sure that the right kind of innovation is supported
and facilitated, so that the path to the future we want is faster.
As our CEO at IBM states, "Trust is our license to operate."
We've earned this trust through our policies, programs,
partnerships and advocacy of the responsible use of technology.
For over 100 years, IBM has been at the forefront of
innovation that brings benefits to our clients and society.
This approach most definitely applies to the development, use, and
deployment of AI.
Therefore, ethics should be embedded into the lifecycle of the design and
development process.
Ethical decision-making is not just a technical problem-solving approach.
Rather, an ethical, sociological, technical, and
human-centered approach should be embarked upon,
based on principles, value standards, laws, and benefits to society.
So having this foundation is important, even necessary, but where to start?
A good place to start is with a set of guiding principles.
At IBM, we call our principles the Principles of Trust and Transparency,
of which there are three.
The purpose of AI is to augment, not replace, human intelligence.
Data and insights belong to their creator.
And new technology, including AI systems, must be transparent and explainable.
This last principle is built upon our pillars, of which there are five.
We just mentioned transparency, which reinforces trust
by sharing what the AI is being used for and how.
It must be explainable and also fair,
so that when it's properly calibrated, it can assist in making better choices.
It should be robust, which means it should be secure, as
well as privacy preserving, safeguarding privacy and rights.
We know having principles and pillars is not enough.
We have an extensive set of tools and talented practitioners that can help
diagnose, monitor, and promote all of our pillars, with continuous monitoring
to mitigate against drift and unintended consequences.
The first step to putting AI ethics into action, just like with anything else,
is about building understanding and awareness.
This is about equipping your teams to think about AI ethics and what it means
to put it into action, whatever solution you're building and deploying.
Let's take an example: if you're building a learning solution and
deploying it within a company,
the HR team leader who is doing that should be asking:
Is this solution designed with users in mind?
Have you co-created the solution with users?
How does it enable equal access to opportunity for
all employees across diverse groups?
A keen understanding of AI ethics, and
reflecting on these issues continuously,
is critical as a foundation to putting AI ethics into action.
The second step in putting AI ethics into action,
once you build that understanding and awareness and
everybody is reflecting on this topic, is to put in place a governance structure.
And the critical point here is,
it's a governance structure to scale AI ethics in action.
It's not about doing it in one isolated instance in a market,
or in a business unit, it's about a governance structure that works at scale.
We talked about understanding and awareness as a foundation; second is
governance, which is the responsibility of leaders to put structures in place.
Once you've got these two elements, the third step is operationalizing.
How do you make sure a developer, or a data scientist, or a vendor,
who's in Malaysia or Poland knows how to put AI ethics into action?
What does it mean for them?
Right?
It's one thing to put structures in place at the global level, but
how do you make sure it's operationalized at scale in the markets and
every user, every data scientist, every developer knows what they need to do?
This is all about having clarity on the pillars of trustworthy AI.
For IBM, the first is transparency.
Let's go back to our learning example: are you designing it with users?
Think about what we think of as best-in-class, transparent recommendation systems,
like your favorite movie streaming service or your cab-hailing service.
It's transparent, but is it explainable?
Is it telling you what the recommendations are and why they're being made, but
also telling you as a user that it's your choice to make the final decision?
Fairness: is it giving equal access to opportunity to everyone by ensuring
adoption, not just of the process but also of the outcome, across different groups?
Robustness, privacy: every data scientist, developer, and
vendor needs to know what we mean by these in a very operational manner.
[MUSIC]
Defining AI Ethics
Welcome to Defining AI Ethics.
Humans rely on culturally agreed-upon morals and standards of action — or ethics — to
guide their decision-making, especially for decisions that impact others.
As AI is increasingly used to automate and augment decision-making, it is critical that
AI is built with ethics at the core so its outcomes align with human ethics and expectations.
AI ethics is a multidisciplinary field that investigates how to maximize AI's beneficial
impacts while reducing risks and adverse impacts.
It explores issues like data responsibility and privacy, inclusion, moral agency, value
alignment, accountability, and technology misuse
…to understand how to build and use AI in ways that
align with human ethics and expectations.
There are five pillars for AI ethics:
explainability,
fairness,
robustness,
transparency, and privacy.
These pillars are focus areas that help us take action to build and use AI ethically.
Explainability
AI is explainable when it can show how and why it arrived at a particular outcome or
recommendation.
You can think of explainability as an AI system showing its work.
Fairness
AI is fair when it treats individuals or groups equitably.
AI can help humans make fairer choices by counterbalancing human biases, but beware
— bias can be present in AI too, so steps must be taken to mitigate it.
Robustness
AI is robust when it can effectively handle exceptional conditions, like abnormal input
or adversarial attacks.
Robust AI is built to withstand intentional and unintentional interference.
Transparency
AI is transparent when appropriate information is shared with humans about how the AI system
was designed and developed.
Transparency means that humans have access to information like what data was used to
train the AI system, how the system collects and stores data, and who has access to the
data the system collects.
Privacy
Because AI ingests so much data, it must be designed to prioritize and safeguard humans'
privacy and data rights.
AI that is built to respect privacy collects and stores only the minimum amount of data
it needs to function,
and collected data should never be repurposed without users' consent, among other considerations.
In summary, together, these five pillars —
explainability,
fairness,
robustness,
transparency, and
privacy — help us to design, develop, deploy, and use AI more ethically
and to understand how to build and use AI in ways that
align with human ethics and expectations.
Lesson Summary
In this lesson, you have learned:
Today's AI issues and concerns include how to build and use AI responsibly and how AI is used in
healthcare, facial recognition, social media and marketing, and hiring
AI ethics is a multidisciplinary field that investigates how to optimize AI's beneficial impact while
reducing risks and adverse impacts
AI ethics is a "socio-technical" challenge, meaning that it cannot be solved by tools alone
AI ethics principles are:
o The purpose of AI is to augment — not replace — human intelligence
o Data and insights belong to their creator
o New technology must be transparent and explainable
AI ethics pillars are:
o Explainability: The ability of an AI system to provide insights that humans can use to understand
the causes of the system's predictions.
o Fairness: The equitable treatment of individuals or groups of individuals.
o Robustness: The ability of an AI system to effectively handle exceptional conditions, such
as abnormal input or adversarial attacks.
o Transparency: Sharing appropriate information with stakeholders on how an AI system
has been designed and developed.
o Privacy: AI systems must prioritize and safeguard consumers’ privacy and data rights.
In AI, bias gives systematic disadvantages to certain groups or individuals
One potential cause of bias in AI is the implicit or explicit biases of the people who design and
develop AI
One way to mitigate bias is to assemble diverse teams
Regulations are government rules enforceable by law.
AI governance is an organization's act of governing, through its corporate instructions, staff,
processes and systems
To learn more about AI ethics, visit:
IBM AI Ethics homepage
IBM Trustworthy AI homepage
IBM Policy Lab
Good Tech IBM