
CSA2001

FUNDAMENTALS IN AI and ML

Prepared by
Dr. M. Manimaran
Associate Professor - II,
School of Computing Science and Engineering,
VIT Bhopal University
Unit-1 Contents
Introduction - Definition - Future of Artificial Intelligence - Characteristics of Intelligent Agents - Typical Intelligent Agents - Problem Solving Approach to Typical AI Problems

Unit-1 Introduction to AI ML
AI Definitions
• The study of how to make programs/computers do things that people do better
• The study of how to make computers solve problems which require knowledge and intelligence
• "The exciting new effort to make computers think … machines with minds" (thinking machines or machine intelligence)
• The automation of activities that we associate with human thinking, e.g., decision-making, learning (thinking machines or machine intelligence)
• The art of creating machines that perform functions that require intelligence when performed by people
• The study of mental faculties through the use of computational models (studying cognitive faculties)
• A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes (studying cognitive faculties)
• The branch of computer science that is concerned with the automation of intelligent behavior (problem solving and CS)

What Is AI?
• AI as a field of study draws on:
– Computer Science
– Cognitive Science
– Psychology
– Philosophy
– Linguistics
– Neuroscience

• AI is part science, part engineering.

• AI often must study other domains in order to implement systems
– e.g., medicine and medical practices for a medical diagnostic system, engineering and chemistry to monitor a chemical processing plant

• AI rests on the belief that the brain is a form of biological computer and that the mind is computational.

• AI has had a concrete impact on society, but unlike other areas of CS, the impact is often
– felt only tangentially (that is, people are not aware that system X has AI)
– felt years after the initial investment in the technology
What is Intelligence?
• There is no single agreed definition. Here are some:
– the ability to understand and profit from experience
– a general mental capability that involves the ability to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn
– effectively perceiving, interpreting and responding to the environment
FUTURE OF ARTIFICIAL INTELLIGENCE

Future Scope of Artificial Intelligence
• Cyber Security - helping to curb hackers
• Face Recognition - e.g., the launch of the iPhone X
• Data Analysis - SAS, Tableau
• Transport - Tesla
• Various Jobs - Robotic Process Automation
• Emotion Bots - Cortana and Alexa
• Marketing & Advertising - Flipkart, Adohm

Agents
• An intelligent agent (IA) is an entity that makes decisions, enabling artificial intelligence to be put into action.
• It can also be described as a software entity that conducts operations in the place of users or programs after sensing the environment.
• It uses actuators to initiate action in that environment.

Note: An actuator is a device that uses a form of power to convert a control signal into mechanical motion. Industrial plants use actuators to operate valves, dampers, fluid couplings, and other devices used in industrial process control. The industrial actuator can use air, hydraulic fluid, or electricity for motive power.
State of the art
• Deep Blue defeated the reigning world chess champion, Garry Kasparov, in 1997.
• An automated theorem prover settled the Robbins conjecture, a mathematical conjecture that had been unsolved for decades.
• "No Hands Across America": a vehicle drove autonomously 98% of the way from Pittsburgh to San Diego.
• During the 1991 Gulf War, US forces deployed an AI logistics planning and scheduling program that involved up to 50,000 vehicles, cargo items, and people.
• NASA's on-board autonomous planning program controlled the scheduling of operations for a spacecraft.
• Proverb solves crossword puzzles better than most humans.
Intelligent Agents
Outline
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment,
Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.

• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
Agents and environments
• The agent function maps from percept histories to actions:
f : P* → A
• The agent program runs on the physical architecture to produce f:
agent = architecture + program
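
A minimal sketch of this separation in Python; the class and method names are illustrative, not from any standard library:

class Agent:
    """Agent program: maps the percept history P* to an action."""
    def __init__(self):
        self.percepts = []            # percept history seen so far

    def program(self, percept):
        """Override in subclasses: choose an action from the history."""
        raise NotImplementedError

    def step(self, percept):
        """One cycle of the architecture: sense, then act."""
        self.percepts.append(percept)
        return self.program(percept)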
A vacuum-cleaner agent
Partial tabulation of the agent function for the two-location vacuum world (locations A and B):

Percept sequence              Action
[A, Clean]                    Right
[A, Dirty]                    Suck
[B, Clean]                    Left
[B, Dirty]                    Suck
[A, Clean], [A, Clean]        Right
[A, Clean], [A, Dirty]        Suck
…                             …
Rational agents
• An agent should strive to "do the right thing", based on what it
can perceive and the actions it can perform. The right action is
the one that will cause the agent to be most successful
• Performance measure: An objective criterion for success of an
agent's behavior
• E.g., performance measure of a vacuum-cleaner agent could
be amount of dirt cleaned up, amount of time taken, amount of
electricity consumed, amount of noise generated, etc.
Rational agents
• Rational Agent: For each possible percept sequence, a rational
agent should select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent
has.
Rational agents
• Rationality is distinct from omniscience (all-knowing with
infinite knowledge)
• Agents can perform actions in order to modify future percepts
so as to obtain useful information (information gathering,
exploration)
• An agent is autonomous if its behavior is determined by its
own experience (with ability to learn and adapt)
PEAS
• PEAS: Performance measure, Environment, Actuators,
Sensors
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
– Performance measure
– Environment
– Actuators
– Sensors
PEAS
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
– Performance measure: Safe, fast, legal, comfortable trip,
maximize profits
– Environment: Roads, other traffic, pedestrians, customers
– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, keyboard
PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient, minimize costs,
lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions, tests, diagnoses,
treatments, referrals)
• Sensors: Keyboard (entry of symptoms, findings, patient's
answers)
PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of parts in correct
bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's score on
test
• Environment: Set of students
• Actuators: Screen display (exercises, suggestions,
corrections)
• Sensors: Keyboard
Environment types/Properties
• Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
• Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent.
• Episodic (vs. sequential): the agent's experience is divided into atomic episodes, and the choice of action in each episode depends only on the episode itself.
Environment types
(table of example task environments and their properties, omitted)
Agent functions and programs
• An agent is completely specified by the agent
function mapping percept sequences to actions
• One agent function (or a small equivalence
class) is rational
• Aim: find a way to implement the rational
agent function concisely
Table-lookup agent
function TABLE-DRIVEN-AGENT(percept) returns an action
persistent: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences, initially fully specified
append percept to the end of percepts
action ← LOOKUP(percepts, table)
return action
• Drawbacks:
– Huge table
– Takes a long time to build the table
– No autonomy
– Even with learning, it needs a long time to learn the table entries
Agent program for a vacuum-cleaner agent

function REFLEX-VACUUM-AGENT([location, status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
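
The same agent program as a small Python sketch (the function name and percept format are illustrative):

def reflex_vacuum_agent(percept):
    location, status = percept        # e.g., ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:                             # location == 'B'
        return 'Left'

# Example: reflex_vacuum_agent(('A', 'Dirty')) returns 'Suck'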
Characteristics of Intelligent Agents
Intelligent agents have the following distinguishing characteristics:
• They have some level of autonomy that allows them to perform certain tasks on their own.
• They have a learning ability that enables them to learn even as tasks are carried out.
• They can interact with other entities such as agents, humans, and systems.
• New rules can be accommodated by intelligent agents incrementally.
• They exhibit goal-oriented behavior.
• They are knowledge-based: they use knowledge regarding communications, processes, and entities.

Typical Intelligent Agents
• Agents can be grouped into five classes based on their degree of perceived intelligence and capability.
• All these agents can improve their performance and generate better actions over time.
These are given below:
• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent
Simple Reflex Agent
• Simple reflex agents are the simplest agents. They take decisions on the basis of the current percepts and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history during its decision and action process.
• The simple reflex agent works on the condition-action rule, which means it maps the current state to an action. For example, a room-cleaner agent works only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
– They have very limited intelligence.
– They do not have knowledge of non-perceptual parts of the current state.
– The rule tables are mostly too big to generate and to store.
– They are not adaptive to changes in the environment.
Model-based reflex agent
• The model-based agent can work in a partially observable environment and track the situation.
• A model-based agent has two important factors:
– Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
– Internal state: a representation of the current state based on the percept history.
• These agents have the model, "which is knowledge of the world", and based on the model they perform actions.
• Updating the agent state requires information about:
– how the world evolves
– how the agent's actions affect the world.
Model-based reflex agent

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
persistent: state, the agent's current conception of the world state
            model, a description of how the next state depends on the current state and action
            rules, a set of condition–action rules
            action, the most recent action, initially none

state ← UPDATE-STATE(state, action, percept, model)
rule ← RULE-MATCH(state, rules)
action ← rule.ACTION
return action

• A model-based reflex agent keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the reflex agent.
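
A minimal Python sketch of the same loop, assuming a dict of condition–action rules keyed by (hashable) states and a caller-supplied update_state function; all names are illustrative:

class ModelBasedReflexAgent:
    def __init__(self, model, rules, update_state):
        self.state = None                 # current conception of the world
        self.model = model                # how the world evolves under actions
        self.rules = rules                # state -> action (RULE-MATCH as a lookup)
        self.update_state = update_state  # folds the new percept into the state
        self.action = None                # most recent action

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.action,
                                       percept, self.model)
        self.action = self.rules[self.state]
        return self.action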
Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
Utility-based agents
• These agents are similar to goal-based agents but add an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
• The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.
• The utility function maps each state to a real number to check how efficiently each action achieves the goals.

Utility-based agents
• A model-based, utility-based agent uses a model of the world, along with a utility function that measures its preferences among states of the world.
• It then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome.
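
A small sketch of that choice rule, assuming a transition model results(state, action) that returns (probability, outcome_state) pairs and a utility(state) function; both names are illustrative:

def best_action(state, actions, results, utility):
    # Expected utility of an action: probability-weighted average utility
    # over its possible outcome states.
    def expected_utility(action):
        return sum(p * utility(s2) for p, s2 in results(state, action))
    return max(actions, key=expected_utility)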

Learning Agents
• A learning agent in AI is the type of agent that can learn from its past experiences; it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has four main conceptual components (see the sketch below):
– Learning element: responsible for making improvements by learning from the environment.
– Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
– Performance element: responsible for selecting external actions.
– Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze performance, and look for new ways to improve that performance.
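
A skeleton showing how the four components might fit together; the wiring is one plausible reading of the description above, not a fixed design:

class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # feedback vs. a performance standard
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)             # how well are we doing?
        self.learning_element(feedback)             # learn from the feedback
        action = self.performance_element(percept)  # normal action selection
        return self.problem_generator(action)       # possibly swap in an experiment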

Problem Solving Approach to Typical AI Problems
Solving problems by searching
Outline
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms

Problem-solving agents

Example: Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras,
Bucharest


Example: Romania
(map of Romania with cities and road distances, omitted)
Problem types
• Deterministic, fully observable ⇒ single-state problem
– Agent knows exactly which state it will be in; solution is a sequence
• Non-observable ⇒ sensorless problem (conformant problem)
– Agent may have no idea where it is; solution is a sequence
• Nondeterministic and/or partially observable ⇒ contingency problem
– percepts provide new information about the current state
– often interleave search and execution
• Unknown state space ⇒ exploration problem
Example: vacuum world
• Single-state, start in #5.
Solution? [Right, Suck]

• Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}.
Solution? [Right, Suck, Left, Suck]

• Contingency
– Nondeterministic: Suck may dirty a clean carpet
– Partially observable: location, dirt at current location
– Percept: [L, Clean], i.e., start in #5 or #7
Solution? [Right, if dirt then Suck]
Single-state problem formulation
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
– e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. goal test, which can be
– explicit, e.g., x = "at Bucharest"
– implicit, e.g., Checkmate(x)
4. path cost (additive)
– e.g., sum of distances, number of actions executed, etc.
– c(x, a, y) is the step cost, assumed to be ≥ 0

• A solution is a sequence of actions leading from the initial state to a goal state.
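
These four items map directly onto a small Python base class, used by the search sketches later in this unit; the class and method names are illustrative:

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def successors(self, state):
        """Return a list of (action, next_state) pairs: S(x)."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal_state      # explicit goal test

    def step_cost(self, state, action, next_state):
        return 1                             # c(x, a, y), assumed >= 0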
Selecting a state space
• The real world is absurdly complex
⇒ the state space must be abstracted for problem solving
• (Abstract) state = set of real states
• (Abstract) action = complex combination of real actions
– e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
• For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
• (Abstract) solution = set of real paths that are solutions in the real world
• Each abstract action should be "easier" than the original problem
Vacuum world state space graph
• states? integer dirt and robot locations
• actions? Left, Right, Suck
• goal test? no dirt at all locations
• path cost? 1 per action
Example: The 8-puzzle
• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move

[Note: the optimal solution of the n-puzzle family is NP-hard]

Example: robotic assembly
• states? real-valued coordinates of robot joint angles; parts of the object to be assembled
• actions? continuous motions of robot joints
• goal test? complete assembly
• path cost? time to execute
Tree search algorithms
• Basic idea:
– offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states); a generic skeleton follows

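A generic tree-search skeleton over the Problem class above; the fringe is a list of (state, path) pairs, and the pop function that picks the next node to expand is what defines the strategy (illustrative sketch):

def tree_search(problem, pop):
    fringe = [(problem.initial_state, [])]
    while fringe:
        state, path = pop(fringe)          # strategy = order of node expansion
        if problem.goal_test(state):
            return path                    # sequence of actions to the goal
        for action, child in problem.successors(state):
            fringe.append((child, path + [action]))
    return None                            # no solution found

For example, pop = lambda f: f.pop(0) gives breadth-first behavior and pop = lambda f: f.pop() gives depth-first behavior.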
Tree search example
(figures: successive expansions of the Romania search tree from Arad, omitted)

Implementation: general tree search
(pseudocode figure omitted)
Implementation: states vs. nodes
• A state is a (representation of) a physical configuration.
• A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth.
• The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
Search strategies
• A search strategy is defined by picking the order of node
expansion
• Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)

Uninformed search strategies
• Uninformed search strategies use only the
information available in the problem
definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search

Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
– fringe is a FIFO queue, i.e., new successors
go at end
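
With the Problem class above, breadth-first search is the tree-search skeleton with a FIFO fringe (sketch; no repeated-state check):

from collections import deque

def breadth_first_search(problem):
    fringe = deque([(problem.initial_state, [])])   # FIFO queue
    while fringe:
        state, path = fringe.popleft()              # shallowest node first
        if problem.goal_test(state):
            return path
        for action, child in problem.successors(state):
            fringe.append((child, path + [action])) # successors go at the end
    return None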

Properties of breadth-first search
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than time)
Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
• Complete? Yes, if step cost ≥ ε
• Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
• Optimal? Yes – nodes expanded in increasing order of g(n)
– Equivalent to breadth-first if step costs are all equal
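
A priority-queue sketch using heapq, ordered by path cost g (the counter breaks ties so states never need to be comparable):

import heapq
import itertools

def uniform_cost_search(problem):
    counter = itertools.count()                     # tie-breaker for equal costs
    fringe = [(0, next(counter), problem.initial_state, [])]
    while fringe:
        g, _, state, path = heapq.heappop(fringe)   # least-cost node first
        if problem.goal_test(state):
            return path
        for action, child in problem.successors(state):
            cost = g + problem.step_cost(state, action, child)
            heapq.heappush(fringe, (cost, next(counter), child, path + [action]))
    return None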
Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front
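
Depth-first search differs from breadth-first only in the fringe discipline: pop from the end of the list instead of the front (sketch):

def depth_first_search(problem):
    fringe = [(problem.initial_state, [])]   # LIFO stack
    while fringe:
        state, path = fringe.pop()           # deepest node first
        if problem.goal_test(state):
            return path
        for action, child in problem.successors(state):
            fringe.append((child, path + [action]))
    return None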

Properties of depth-first search
• Complete? No: fails in infinite-depth spaces and in spaces with loops
– Modify to avoid repeated states along the path ⇒ complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
– but if solutions are dense, may be much faster than breadth-first
• Space? O(bm), i.e., linear space!
• Optimal? No
Depth-limited search
• Depth-first search with depth limit l, i.e., nodes at depth l have no successors.
• Recursive implementation (a sketch follows):
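A recursive sketch: it returns the action path on success, the string 'cutoff' when the depth limit was reached somewhere (so a deeper search might still succeed), or None on outright failure:

def depth_limited_search(problem, limit):
    def recurse(state, path, depth):
        if problem.goal_test(state):
            return path
        if depth == limit:
            return 'cutoff'                    # limit hit: inconclusive
        cutoff_occurred = False
        for action, child in problem.successors(state):
            result = recurse(child, path + [action], depth + 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return result
        return 'cutoff' if cutoff_occurred else None

    return recurse(problem.initial_state, [], 0)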
Iterative deepening search
• Repeats depth-limited search with increasing limits l = 0, 1, 2, 3, …
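
As a sketch, iterative deepening just wraps the depth-limited search above (max_limit is an illustrative safeguard):

def iterative_deepening_search(problem, max_limit=100):
    for limit in range(max_limit):              # l = 0, 1, 2, ...
        result = depth_limited_search(problem, limit)
        if result != 'cutoff':
            return result                       # solution found, or definite failure
    return None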
Iterative deepening search
• Number of nodes generated in a depth-limited search to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
• Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 3·b^(d−2) + 2·b^(d−1) + 1·b^d
• For b = 10, d = 5:
– N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
– N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
• Overhead = (123,456 − 111,111) / 111,111 ≈ 11%
Properties of iterative deepening search
• Complete? Yes
• Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1
Summary of algorithms
(table comparing the uninformed strategies on completeness, time, space, and optimality, omitted)
Repeated states
• Failure to detect repeated states can turn a linear problem into an exponential one!

Graph search
• Tree search augmented with an explored set of already-visited states, so no state is expanded twice (see the sketch below).
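
A sketch of graph search built on the breadth-first skeleton, assuming states are hashable:

def graph_search(problem):
    fringe = [(problem.initial_state, [])]
    explored = set()                            # states already expanded
    while fringe:
        state, path = fringe.pop(0)
        if problem.goal_test(state):
            return path
        if state in explored:
            continue                            # repeated state: skip it
        explored.add(state)
        for action, child in problem.successors(state):
            if child not in explored:
                fringe.append((child, path + [action]))
    return None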
Summary
• Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
• There is a variety of uninformed search strategies.
• Iterative deepening search uses only linear space and not much more time than other uninformed algorithms.
