Unit 1 - FAIML
FUNDAMENTALS IN AI and ML
Prepared by
Dr. M. Manimaran
Associate Professor - II,
School of Computing Science and Engineering,
VIT Bhopal University
Unit-1 Contents
Introduction – Definition – Future of Artificial Intelligence – Intelligent Agents – Typical Intelligent Agents – Problem-Solving Approach to Typical AI Problems
AI Definitions
• The study of how to make programs/computers do things that people do better
• The study of how to make computers solve problems which require knowledge and intelligence
• "The exciting new effort to make computers think … machines with minds" [thinking machines / machine intelligence]
• The automation of activities that we associate with human thinking (e.g., decision-making, learning, …) [machine intelligence]
• The art of creating machines that perform functions that require intelligence when performed by people
• The study of mental faculties through the use of computational models [studying cognitive faculties]
• A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes [cognitive faculties]
• The branch of computer science that is concerned with the automation of intelligent behavior [problem solving and CS]
What Is AI?
• AI as a field of study draws on:
  – Computer Science
  – Cognitive Science
  – Psychology
  – Philosophy
  – Linguistics
  – Neuroscience
• AI rests on the belief that the brain is a form of biological computer and that the mind is computational.
• AI has had a concrete impact on society, but unlike other areas of CS the impact is often
  – felt only tangentially (that is, people are not aware that system X has AI)
  – felt years after the initial investment in the technology
What is Intelligence?
• Here are some definitions:
  – the ability to understand and profit from experience
  – a general mental capability that involves the ability to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn
  – effectively perceiving, interpreting and responding to the environment
FUTURE OF ARTIFICIAL INTELLIGENCE
Future Scope of Artificial Intelligence
• Cyber Security – helping to curb hackers
• Face Recognition – e.g., Face ID since the launch of the iPhone X
• Data Analysis – SAS, Tableau
• Transport – Tesla's self-driving efforts
• Various Jobs – Robotic Process Automation
• Emotion Bots – assistants such as Cortana & Alexa
• Marketing & Advertising – Flipkart, Adohm
Agents
• An intelligent agent (IA) is an entity that makes decisions, enabling artificial intelligence to be put into action.
• It can also be described as a software entity that conducts operations in the place of users or programs after sensing the environment.
• It uses actuators to initiate action in that environment.

Note: An actuator is a device that uses a form of power to convert a control signal into mechanical motion. Industrial plants use actuators to operate valves, dampers, fluid couplings, and other devices used in industrial process control. The industrial actuator can use air, hydraulic fluid, or electricity for motive power.
State of the art
• Deep Blue defeated the reigning world chess champion, Garry Kasparov, in 1997.
• An automated theorem prover settled the Robbins conjecture, which had been open for decades.
• "No Hands Across America": a vehicle drove autonomously 98% of the time from Pittsburgh to San Diego.
• During the 1991 Gulf War, US forces deployed an AI logistics planning and scheduling program that handled up to 50,000 vehicles, cargo items, and people.
• NASA's on-board autonomous planning program controlled the scheduling of operations for a spacecraft.
• Proverb solves crossword puzzles better than most humans.
Intelligent Agents
Outline
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment, Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
• Intelligent agents have a learning ability that enables them to learn even as tasks are carried out.
• They can interact with other entities such as agents, humans, and systems.
Typical Intelligent Agents
• Agents can be grouped into five classes based on their degree of perceived intelligence and capability.
• All these agents can improve their performance and generate better actions over time.
These are given below:
• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent
Simple Reflex Agent
• Simple reflex agents are the simplest agents. They take decisions on the basis of the current percept and ignore the rest of the percept history.
• These agents succeed only in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history during its decision and action process.
• The simple reflex agent works on condition–action rules, which map the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room (a minimal code sketch follows below).
• Problems with the simple reflex agent design approach:
  – Very limited intelligence.
  – No knowledge of non-perceptual parts of the current state.
  – The rule set is usually too big to generate and to store.
  – Not adaptive to changes in the environment.
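To make the condition–action idea concrete, here is a minimal Python sketch of a simple reflex vacuum agent. It is not from the original slides: the percept format (location, status) and the particular rules are illustrative choices.

# Minimal sketch of a simple reflex agent for the two-square vacuum world.
# The percept format and rule table are illustrative assumptions.

def simple_reflex_vacuum_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, status = percept          # e.g., ("A", "Dirty")
    if status == "Dirty":               # condition-action rule: dirt -> Suck
        return "Suck"
    elif location == "A":               # otherwise move to the other square
        return "Right"
    else:
        return "Left"

# The agent keeps no history: only the current percept matters.
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(percept, "->", simple_reflex_vacuum_agent(percept))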
Simple Reflex Agent [architecture diagram]
Model-based Reflex Agent
• The model-based agent can work in a partially observable environment and track the situation.
  – Model: knowledge about "how things happen in the world"; hence the name model-based agent.
• These agents have the model, "which is knowledge of the world," and perform actions based on that model.
• Updating the agent state requires information about:
  – how the world evolves independently of the agent, and
  – how the agent's own actions affect the world.
Model-based Reflex Agent (pseudocode)

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition–action rules
              action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

• A model-based reflex agent keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the reflex agent.
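The following is a hedged Python rendering of the pseudocode above. The dictionary-based state, the lambda rule format, and the trivial model are illustrative stand-ins, not a prescribed API.

# Sketch of a model-based reflex agent mirroring the pseudocode above.

class ModelBasedReflexAgent:
    def __init__(self, model, rules):
        self.state = {}          # agent's current conception of the world state
        self.model = model       # how the next state depends on state and action
        self.rules = rules       # condition-action rules: (predicate, action)
        self.action = None       # most recent action, initially none

    def __call__(self, percept):
        # state <- UPDATE-STATE(state, action, percept, model):
        # fold the predicted effect of the last action and the new percept in.
        self.state.update(self.model(self.state, self.action))
        self.state.update(percept)
        for condition, action in self.rules:      # rule <- RULE-MATCH(state, rules)
            if condition(self.state):
                self.action = action              # action <- rule.ACTION
                return self.action
        self.action = None
        return self.action

# Illustrative use: a vacuum agent with a (here trivial) world model.
agent = ModelBasedReflexAgent(
    model=lambda state, action: {},               # world dynamics omitted in this toy
    rules=[(lambda s: s.get("status") == "Dirty", "Suck"),
           (lambda s: s.get("location") == "A", "Right"),
           (lambda s: True, "Left")],
)
print(agent({"location": "A", "status": "Dirty"}))  # -> Suck
print(agent({"location": "A", "status": "Clean"}))  # -> Right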
Model-based Reflex Agent [architecture diagram]
Goal-based Agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, and it makes an agent proactive.
Goal-based Agents [architecture diagram]
Utility-based Agents
• These agents are similar to goal-based agents but add an extra component, a utility measure, which provides a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
• The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.
• The utility function maps each state to a real number, indicating how efficiently each action achieves the goals.
Utility-based Agents
• A model-based, utility-based agent uses a model of the world, along with a utility function that measures its preferences among states of the world.
• It then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome (see the sketch below).
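A minimal sketch of that expected-utility choice, assuming a toy outcome model: the probabilities, states, and utility numbers below are invented for illustration.

# Utility-based action selection: pick the action with the highest expected
# utility, averaging over outcome states weighted by their probabilities.

def expected_utility(action, outcomes, utility):
    """outcomes[action] is a list of (probability, state) pairs."""
    return sum(p * utility(s) for p, s in outcomes[action])

def best_action(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Illustrative numbers: 'Suck' usually cleans; 'Right' merely moves.
utility = {"clean": 10.0, "dirty": 0.0, "moved": 1.0}.get
outcomes = {
    "Suck":  [(0.9, "clean"), (0.1, "dirty")],    # may fail 10% of the time
    "Right": [(1.0, "moved")],
}
print(best_action(["Suck", "Right"], outcomes, utility))  # -> Suck (EU 9.0 vs 1.0)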
Utility-based Agents [architecture diagram]
Learning Agents
• A learning agent in AI is an agent that can learn from its past experiences: it has learning capabilities.
• It starts to act with basic knowledge and then adapts automatically through learning. Its main components are:
  – Learning element: responsible for making improvements based on experience.
  – Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
  – Performance element: responsible for selecting external actions.
  – Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it (a rough skeleton follows below).
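A rough, purely illustrative Python skeleton of those four components. The class layout, the exploration rate, and the toy critic are our assumptions, not a standard implementation.

# Skeleton of a learning agent: performance element, critic, learning
# element, and problem generator (all toy versions).

import random

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.action_values = {}            # knowledge the learning element updates

    def performance_element(self, state):
        # Mostly exploit the best-known action; sometimes explore.
        known = self.action_values.get(state)
        if known and random.random() > 0.3:
            return max(known, key=known.get)
        return self.problem_generator(state)

    def problem_generator(self, state):
        # Suggest exploratory actions that yield new, informative experiences.
        return random.choice(self.actions)

    def learning_element(self, state, action, feedback):
        # Adjust stored knowledge using the critic's feedback.
        values = self.action_values.setdefault(state, {})
        values[action] = values.get(action, 0.0) + feedback

def critic(state, action):
    # Fixed performance standard (toy): sucking up dirt is good.
    return 1.0 if (state == "Dirty" and action == "Suck") else 0.0

agent = LearningAgent(["Left", "Right", "Suck"])
for _ in range(50):                        # learn from repeated experience
    state = random.choice(["Dirty", "Clean"])
    action = agent.performance_element(state)
    agent.learning_element(state, action, critic(state, action))
print(agent.action_values)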
Learning Agents [architecture diagram]
Problem Solving Approach to Typical AI Problems
Solving Problems by Searching
Outline
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms
Problem-solving agents
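A problem-solving agent formulates a goal and a problem, searches offline for a solution sequence, then executes it. Below is a minimal Python sketch of that loop; the Romania road fragment, the breadth-first helper, and the function names are illustrative assumptions.

# Sketch of the formulate-search-execute loop of a problem-solving agent.

from collections import deque

ROADS = {  # fragment of the Romania map (adjacency only)
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"], "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def search(start, goal):
    # Breadth-first search over paths (strategies are covered later).
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in ROADS[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

def problem_solving_agent(state, goal):
    plan = search(state, goal)        # formulate problem, then search offline
    for action in plan[1:]:           # execute the solution sequence
        print("drive to", action)

problem_solving_agent("Arad", "Bucharest")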
Example: Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest.
• Formulate goal:
  – be in Bucharest
• Formulate problem:
  – states: various cities
  – actions: drive between cities
• Find solution:
  – sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania [road map with distances]
Problem types
• Deterministic, fully observable → single-state problem
  – The agent knows exactly which state it will be in; the solution is a sequence.
• Non-observable → sensorless problem (conformant problem)
  – The agent may have no idea where it is; the solution is a sequence.
• Nondeterministic and/or partially observable → contingency problem
  – Percepts provide new information about the current state.
  – Often interleaves search and execution.
• Unknown state space → exploration problem
Example: vacuum world
• Single-state, start in #5.
  Solution? [Right, Suck]
• Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}.
  Solution? [Right, Suck, Left, Suck]
• Contingency
  – Nondeterministic: Suck may dirty a clean carpet.
  – Partially observable: location, dirt at current location.
  – Percept: [L, Clean], i.e., start in #5 or #7.
  Solution? [Right, if dirt then Suck]
Single-state problem formulation
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
   – e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. goal test, which can be
   – explicit, e.g., x = "at Bucharest"
   – implicit, e.g., Checkmate(x)
4. path cost (additive)
   – e.g., sum of distances, number of actions executed, etc.
   – c(x,a,y) is the step cost, assumed to be ≥ 0
• A solution is a sequence of actions leading from the initial state to a goal state (a code sketch follows below).
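To tie the four items together, here is a minimal Python sketch using a fragment of the Romania road map. The distances follow the standard textbook example; the class layout and method names (successors, goal_test, step_cost) are our illustrative choices.

# The four-item problem definition as a small class over a road-map graph.

ROMANIA = {  # successor structure: city -> {neighbor: step cost}
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75}, "Timisoara": {"Arad": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

class Problem:
    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def successors(self, state):           # S(x): set of <action, state> pairs
        return [(f"go {city}", city) for city in self.graph[state]]

    def goal_test(self, state):            # explicit goal test
        return state == self.goal

    def step_cost(self, x, action, y):     # c(x, a, y) >= 0: road distance
        return self.graph[x][y]

problem = Problem("Arad", "Bucharest", ROMANIA)
print(problem.successors("Arad"))          # action-state pairs out of Arad
print(problem.goal_test("Bucharest"))      # True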
Selecting a state space
• The real world is absurdly complex → the state space must be abstracted for problem solving.
• (Abstract) state = set of real states.
• (Abstract) action = complex combination of real actions
  – e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
• For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
• (Abstract) solution = set of real paths that are solutions in the real world.
• Each abstract action should be "easier" than the original problem.
Vacuum world state space graph
• states? dirt configuration and robot location
• actions? Left, Right, Suck
• goal test? no dirt at any location
• path cost? 1 per action
Example: The 8-puzzle
• states? locations of the tiles
• actions? move the blank Left, Right, Up, Down
• goal test? state matches the goal configuration
• path cost? 1 per move
Example: robotic assembly [figure]
Tree search algorithms
• Basic idea:
  – offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states); see the schematic sketch below.
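A schematic Python sketch of that idea: keep a fringe of unexpanded nodes, repeatedly pick one, test it, and otherwise expand it. The toy graph and the pop parameter (which fixes the search strategy) are illustrative assumptions.

# Generic tree search over a toy graph. The choice of which fringe node to
# pop determines the strategy.

GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}

def tree_search(start, goal, pop):
    fringe = [(start, [start])]                 # node = (state, path to it)
    while fringe:
        state, path = pop(fringe)               # strategy decides which node
        if state == goal:
            return path
        for succ in GRAPH[state]:               # expand the chosen node
            fringe.append((succ, path + [succ]))
    return None

print(tree_search("A", "F", pop=lambda f: f.pop(0)))  # FIFO -> breadth-first
print(tree_search("A", "F", pop=lambda f: f.pop()))   # LIFO -> depth-first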
Tree search example [figures: successive expansions of the search tree]
Implementation: general tree search [pseudocode figure]
Implementation: states vs. nodes
• A state is a (representation of) a physical configuration.
• A node is a data structure constituting part of a search tree; it includes the state plus parent node, action, path cost g(x), and depth.
Breadth-first search
• Expand the shallowest unexpanded node.
• Implementation:
  – the fringe is a FIFO queue, i.e., new successors go at the end (see the sketch below).
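A minimal sketch of breadth-first search using a FIFO queue; the toy graph is the same illustrative one used above.

# Breadth-first search: new successors go at the end of a FIFO queue, so the
# shallowest unexpanded node is always dequeued first.

from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}

def breadth_first_search(start, goal):
    fringe = deque([(start, [start])])          # FIFO queue of (state, path)
    while fringe:
        state, path = fringe.popleft()          # shallowest node first
        if state == goal:
            return path
        fringe.extend((s, path + [s]) for s in GRAPH[state])
    return None

print(breadth_first_search("A", "F"))           # ['A', 'C', 'F']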
Properties of breadth-first search
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than time).
Uniform-cost search
• Expand the least-cost unexpanded node.
• Implementation:
  – fringe = queue ordered by path cost (see the sketch below)
• Equivalent to breadth-first search if step costs are all equal.
• Complete? Yes, if step cost ≥ ε
• Time? # of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? # of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉)
• Optimal? Yes – nodes are expanded in increasing order of g(n)
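A minimal sketch using a priority queue ordered by path cost g(n), over the Romania fragment with the standard textbook distances; the tuple-based node layout is our illustrative choice.

# Uniform-cost search: a heap keyed on g(n) pops the least-cost node first.

import heapq

ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {}, "Timisoara": {},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}

def uniform_cost_search(start, goal):
    fringe = [(0, start, [start])]              # (g, state, path)
    while fringe:
        g, state, path = heapq.heappop(fringe)  # least-cost node first
        if state == goal:
            return g, path
        for succ, step in ROADS[state].items():
            heapq.heappush(fringe, (g + step, succ, path + [succ]))
    return None

print(uniform_cost_search("Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])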
Depth-first search
• Expand the deepest unexpanded node.
• Implementation:
  – fringe = LIFO queue (a stack), i.e., put successors at the front (see the sketch below)
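A minimal sketch with a LIFO stack as the fringe, over the same illustrative toy graph.

# Depth-first search: successors go on top of a stack, so the deepest
# unexpanded node is always expanded next.

GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}

def depth_first_search(start, goal):
    fringe = [(start, [start])]                 # LIFO stack of (state, path)
    while fringe:
        state, path = fringe.pop()              # deepest node first
        if state == goal:
            return path
        fringe.extend((s, path + [s]) for s in GRAPH[state])
    return None

print(depth_first_search("A", "F"))             # ['A', 'C', 'F']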
Properties of depth-first search
• Complete? No: fails in infinite-depth spaces and in spaces with loops.
  – Modify to avoid repeated states along the path → complete in finite spaces.
• Time? O(b^m): terrible if m is much larger than d
  – but if solutions are dense, may be much faster than breadth-first.
• Space? O(bm), i.e., linear space!
• Optimal? No
Depth-limited search
• = depth-first search with depth limit l, i.e., nodes at depth l have no successors.
• Recursive implementation: see the sketch below.
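A minimal recursive sketch over the same illustrative toy graph; the "cutoff" sentinel (signalling that the limit was hit and the goal might lie deeper) follows the usual convention.

# Recursive depth-limited search: depth-first, but nodes at depth l are
# treated as if they had no successors.

GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}

def depth_limited_search(state, goal, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"                          # depth limit reached
    cutoff = False
    for succ in GRAPH[state]:
        result = depth_limited_search(succ, goal, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff else None          # None = no solution anywhere

print(depth_limited_search("A", "F", limit=1))   # 'cutoff'
print(depth_limited_search("A", "F", limit=2))   # ['A', 'C', 'F']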
Iterative deepening search [figures: depth limits l = 0, 1, 2, 3]
Iterative deepening search
• Number of nodes generated in a depth-limited search to depth d with branching factor b:
  N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
• Number of nodes generated in an iterative deepening search to depth d with branching factor b:
  N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 3·b^(d−2) + 2·b^(d−1) + 1·b^d
• For b = 10, d = 5:
  – N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  – N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
• Overhead = (123,456 − 111,111)/111,111 ≈ 11%
Properties of iterative deepening search
• Complete? Yes
• Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1
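A minimal sketch: run depth-limited search with limits l = 0, 1, 2, … until a solution is found. The depth-limited helper is repeated here (in compact form) so the sketch is self-contained; the toy graph is illustrative.

# Iterative deepening search built on a compact depth-limited helper.

GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}

def dls(state, goal, limit):
    # depth-limited DFS; returns a path, "cutoff", or None
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for succ in GRAPH[state]:
        result = dls(succ, goal, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff else None

def iterative_deepening_search(start, goal):
    limit = 0
    while True:
        result = dls(start, goal, limit)
        if result != "cutoff":       # found a solution, or proved none exists
            return result
        limit += 1                   # deepen and re-search from scratch

print(iterative_deepening_search("A", "F"))      # ['A', 'C', 'F']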
Summary of algorithms

Criterion   Breadth-First     Uniform-Cost          Depth-First   Depth-Limited   Iterative Deepening
Complete?   Yes (b finite)    Yes (step cost ≥ ε)   No            No              Yes
Time        O(b^(d+1))        O(b^⌈C*/ε⌉)           O(b^m)        O(b^l)          O(b^d)
Space       O(b^(d+1))        O(b^⌈C*/ε⌉)           O(bm)         O(bl)           O(bd)
Optimal?    Yes (cost = 1)    Yes                   No            No              Yes (cost = 1)
Repeated states
• Failure to detect repeated states can turn a linear problem into an exponential one!
Graph search
• Tree search augmented with a closed set of already-expanded states, so each repeated state is expanded at most once (see the sketch below).
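A minimal sketch of that idea: identical to tree search except for the closed set. The cyclic toy graph is illustrative, and the FIFO fringe is one possible strategy.

# Graph search: tree search plus a closed set of already-expanded states.

from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "G"], "G": []}

def graph_search(start, goal):
    fringe = deque([(start, [start])])           # FIFO here; any strategy works
    closed = set()                               # states already expanded
    while fringe:
        state, path = fringe.popleft()
        if state == goal:
            return path
        if state in closed:                      # skip repeated states
            continue
        closed.add(state)
        fringe.extend((s, path + [s]) for s in GRAPH[state] if s not in closed)
    return None

print(graph_search("A", "G"))                    # ['A', 'B', 'D', 'G']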
Summary
• Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.