
Artificial Intelligence

Mr. Mumtaz Zahoor


Lecturer (Computer Sciences)
[email protected]
Faculty of Computer Sciences
Ibadat International University Islamabad
Course Code: CS-13110
Semester: Fall 2022
Credit Hours: 4 (3+1)
Prerequisite Codes: None
Instructor: Mr. Mumtaz Zahoor
Class: BSCS
Office: B-01 SD15
Telephone: +923455043534
Lecture Days: Wednesday, Thursday
E-mail: [email protected]
Class Room: F-1/2, F-1/3
Consulting Hours:
Lab Engineer: Ms. Maida Khalid
Lab Engineer Email: [email protected]
Updates on LMS: After every lecture
Grading distribution and policy
• Assignment(s) / semester project: 10%
• Quizzes: 10%
• Mid Term Examination: 20%
• Lab: 20%
• Final Term Examination: 40%
Week Topics / Sub-Topics

Week 1 - Introduction: Introduction, basic components of AI
Week 2 - Introduction: Identifying AI systems, branches of AI
Week 3 - Reasoning and Knowledge Representation: Introduction to Reasoning and Knowledge Representation
Week 4 - Reasoning and Knowledge Representation: Propositional Logic, First-Order Logic
Week 5 - Problem Solving by Searching: Introduction and informed searching
Week 6 - Problem Solving by Searching: Uninformed searching
Week 7 - Problem Solving by Searching: Local searching
Week 8 - Constraint Satisfaction Problems: Adversarial Search introduction
Week 9 - Constraint Satisfaction Problems: Min-max algorithm, Alpha-beta pruning, Game playing
Week 10 - Learning: Introduction to learning, Unsupervised learning
Week 11 - Learning: Supervised learning (Decision Tree)
Week 12 - Recent trends in AI and applications of AI algorithms: Latest trends
Week 13 - Recent trends in AI and applications of AI algorithms: Case study of AI systems (CNN)
Week 14 - Recent trends in AI and applications of AI algorithms: Analysis of AI systems
Week 15 - Uncertainty handling: Uncertainty in AI, Fuzzy logic
Week 16 - Revision
Reference Books:
1. Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd edition, Prentice Hall, 2010.
2. Duda, R.O., Hart, P.E. and Stork, D.G., 2001. Pattern Classification. John Wiley & Sons.
3. Luger, G.F. and Stubblefield, W.A., 2009. AI Algorithms, Data Structures, and Idioms in Prolog, Lisp, and Java. Pearson Addison-Wesley.
Artificial Intelligence
Contents
• Introduction
• Components of AI
• Course overview
• What is AI?
• A brief history
• The state of the art
• Identifying AI systems
• Branches of AI
• Subsets of AI
• Difference between Machine Learning and Deep Learning
Course Outcomes
• CO 1: Investigate the concepts and terminologies of intelligent agents
• CO 2: Create problem-solving algorithms using uninformed searches
• CO 3: Design problem-solving algorithms using heuristic searches
• CO 4: Gauge the performance of local and optimized search methods
• CO 5: Apply evolutionary strategies for solving a given problem
• CO 6: Implement constraint satisfaction for a given set of problems
• CO 7: Compare the performance of adversarial search algorithms
• CO 8: Design an intelligent agent to solve the problem at hand

What is Intelligence?

• The dream of AI has been to build "...machines that can think, that learn and that create."
• "The question of whether Machines Can Think... is about as relevant as the question whether Submarines Can Swim." (Dijkstra, 1984)

What is Intelligence?
The capability of a machine to imitate intelligent human behavior.

Coining the term
In 1950, the English mathematician Alan Turing wrote a landmark paper titled "Computing Machinery and Intelligence".

Coining the term
• AI is one of the newest fields in science and engineering. Work started soon after World War II, and the name itself was coined in 1956.
• Further work came out of the 1956 Dartmouth workshop. In the proposal for that workshop, John McCarthy coined the phrase a "study of Artificial Intelligence".
Strong and Weak AI

• Strong AI is the dream that computers can be made to think on a level at least equal to humans, that they can be conscious and experience emotions.
• Weak AI, the subject of this course, is about adding "thinking-like" features to computers to make them more useful tools; that is, "not obviously machine-like".
What is AI?
Systems that think or act, humanly or rationally:

• Thinking humanly: cognitive modeling. Systems should solve problems the same way humans do.
• Thinking rationally: the use of logic. Need to worry about modeling uncertainty and dealing with complexity.
• Acting humanly: the Turing Test approach.
• Acting rationally: the study of rational agents: agents that maximize the expected value of their performance measure given what they currently know.
Acting humanly: The Turing test

• Turing (1950), "Computing machinery and intelligence":
• "Can machines think?" → "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game (imitation: the action of using someone or something as a model)
• If the interrogator cannot reliably distinguish the human from the computer, then the computer does possess (artificial) intelligence
• Predicted that by the year 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes
• Anticipated (expected, predicted) all major arguments against AI in the following 50 years
• Suggested major components of AI: knowledge, reasoning, language understanding, learning, computer vision, robotics

Problem: the Turing test is not reproducible, constructive, or amenable (controllable) to mathematical analysis
Thinking humanly: Cognitive Science
• If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds.
• There are three ways to do this:
  – through introspection: trying to catch our own thoughts as they go by;
  – through psychological experiments: observing a person in action;
  – through brain imaging: observing the brain in action.

Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input-output behavior matches corresponding human behavior, that is evidence that some of the program's mechanisms could also be operating in humans.
Thinking humanly: Cognitive Science
1960s: the "cognitive revolution"
• Requires scientific theories of internal activities of the brain
  – What level of abstraction? "Knowledge" or "circuits"?
  – How to validate? Requires
    • predicting and testing behavior of human subjects (top-down), or
    • direct identification from neurological data (bottom-up)
• Both approaches (roughly, Cognitive Science and Cognitive Neuroscience) are now distinct from AI
• Both share with AI the following characteristic: the available theories do not explain (or engender, i.e. give rise to) anything resembling human-level general intelligence
• Hence, all three fields share one principal direction!
Thinking rationally: Laws of Thought
Normative or prescriptive (relating to some standard) rather than descriptive

• Aristotle: what are correct arguments/thought processes?
• Several Greek schools developed various forms of logic: notation and rules of derivation for thoughts; they may or may not have proceeded to the idea of mechanization
• Direct line through mathematics and philosophy to modern AI
• Problems:
  – Not all intelligent behavior is mediated by logical deliberation (careful discussion or consideration).
  – What is the purpose of thinking? What thoughts should I have out of all the thoughts (logical or otherwise) that I could have?
Thinking rationally: Laws of Thought
• There are two main obstacles to this approach.
  – First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain.
  – Second, there is a big difference between solving a problem "in principle" and solving it in practice. Even problems with just a few hundred facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first.
Acting rationally: Rational agents
• Rational behavior: "doing the right thing", i.e., that which is expected to maximize goal achievement, given the available information
  – doesn't necessarily involve thinking (e.g., blinking reflex), but thinking should be in the service of rational action
• An agent is an entity that perceives and acts.
  – This course (and the course book) is about designing rational agents
• Abstractly, an agent is a function from percept histories to actions:
  f : P* → A
  For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance
• Caveat (warning): computational limitations make perfect rationality unachievable
  → design best program for given machine resources
Acting rationally: Rational agents
• Knowledge representation and reasoning enable agents to reach good decisions.
• The rational-agent approach has two advantages over the other approaches.
  – First, it is more general than the "laws of thought" approach, because correct inference is just one of several possible mechanisms for achieving rationality.
  – Second, it is more amenable to scientific development than are approaches based on human behavior or human thought.
Agents
An agent is anything that perceives its environment through sensors and can act on its environment through actuators.

A percept is the agent's perceptual inputs at any given instant.

What about your robot? What actuators does it have? What sensors does it have?
Agents and environments

An agent is specified by an agent function f:P a that


maps a sequence of percept vectors P to an action a
from a set A:
P=[p0, p1, …, pt]
abstract
A={a0, a1, …, ak}
mathematical
description
3
0
Agent function & program
The agent program runs on the physical architecture to produce f
• agent = architecture + program

"Easy" solution: a giant table that maps every possible sequence P to an action a
• One small problem: the table is exponential in the length of P (see the sketch below)
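
As a rough illustration, here is a minimal Python sketch of the table-driven idea (not from the slides; the percepts, actions, and table entries are invented). Because the table is keyed on the entire percept history, it grows exponentially with the length of P:

# Toy table-driven agent: the whole agent function f is one lookup table
# keyed by the percept sequence seen so far.
table = {
    ("clean",): "move-right",
    ("dirty",): "suck",
    ("clean", "dirty"): "suck",
}

percepts = []  # the percept history P

def table_driven_agent(percept):
    """Record the new percept, then look up the action for the full history."""
    percepts.append(percept)
    return table.get(tuple(percepts), "no-op")

print(table_driven_agent("clean"))  # move-right
print(table_driven_agent("dirty"))  # suck (the history is now ("clean", "dirty"))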

Agents

An agent is anything that can be viewed as


• perceiving its environment through sensors and
• acting upon that environment through actuators

Human agent:
• Sensors: eyes, ears, ...
• Actuators: hands, legs, mouth, …

Robotic agent:
• Sensors: cameras and infrared rangefinders
• Actuators: various motors

Agents include humans, robots, softbots, thermostats, …


Rational Agent

Let's try to define "rational agent".

A rational agent is an agent that perceives its environment and behaves rationally.

Rational behavior: doing the right thing.

Obviously doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?
In Philosophy

Moral philosophy has developed different notions of "the right thing".

AI is usually concerned with Consequentialism: we evaluate an agent's behavior by its consequences.
(Illustration: "The Good Place")
Performance measure

How do we know if an agent is acting rationally?
• Informally, we expect that it will do the right thing in all circumstances.

How do we know if it's doing the right thing?
We define a performance measure:
• an objective criterion for success of an agent's behavior,
• given the evidence provided by the percept sequence.
Performance measure: example

A performance measure for a vacuum-cleaner agent might include some subset of:
• +1 point for each clean square in time T
• +1 point for each clean square, -1 for each move
• -1000 for more than k dirty squares
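
As a minimal sketch of the first measure (the representation is invented, not from the slides): the environment awards one point per clean square at each time step up to T.

def performance(history, T):
    """history[t] is the set of squares that are clean at time t.
    Award +1 for each clean square at each of the first T steps."""
    return sum(len(clean) for clean in history[:T])

# Two squares; one clean at t=0, both clean afterwards:
print(performance([{"A"}, {"A", "B"}, {"A", "B"}], T=3))  # 1 + 2 + 2 = 5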
Performance measure: rule of thumb
It is better to design performance measures according to what you want to be achieved in the environment, rather than how you think the agent should behave.

For example, what might happen if we said
• +1 point for each time the robot cleans a square
instead of
• +1 point for each clean square in time T?

(A rational agent could maximize the first measure by repeatedly dirtying and re-cleaning the same square, without ever making the environment any cleaner.)
Rational agents

Rational Agent:
• For each possible percept sequence P,
• a rational agent selects an action a
• to maximize its performance measure.
Expected value
Rational Agent (initial definition):
• For each possible percept sequence P,
• a rational agent selects an action a
• to maximize its performance measure.

Rational Agent (revised definition):
• For each possible percept sequence P,
• a rational agent selects an action a
• that maximizes the expected value of its performance measure.

It doesn't have to know what the actual outcome will be.
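
A minimal sketch of the revised definition (the outcome probabilities and scores are invented for the example): the agent ranks actions by the average of their possible scores, without knowing which outcome will actually occur.

# Hypothetical outcome model: action -> list of (probability, score) pairs.
outcomes = {
    "move-left": [(1.0, 0.0)],
    "suck": [(0.9, 1.0), (0.1, -1.0)],  # usually cleans; rarely scatters dirt
}

def expected_value(action):
    return sum(p * score for p, score in outcomes[action])

best = max(outcomes, key=expected_value)
print(best, expected_value(best))  # suck 0.8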
Task environments

To design a rational agent we need to specify a task environment:
• a problem specification for which the agent is the solution

PEAS: to specify a task environment
• Performance measure
• Environment
• Actuators
• Sensors
PEAS: Specifying an automated taxi driver

Performance measure:
•?

Environment:
•?

Actuators:
•?

Sensors:
•?
PEAS: Specifying an automated taxi driver

Performance measure:
• safe, fast, legal, comfortable, maximize
profits
Environment:
• roads, other traffic, pedestrians, customers

Actuators:
• steering, accelerator, brake, signal, horn

Sensors:
• cameras, LiDAR, speedometer, GPS
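
One way to make a PEAS specification concrete in code (a sketch only; the class and field names are invented, not a standard API):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # what counts as success
    environment: list  # what the agent operates in
    actuators: list    # how it acts
    sensors: list      # how it perceives

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "LiDAR", "speedometer", "GPS"],
)
print(taxi.sensors)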
PEAS: Amazon Prime Air

Performance measure:
•?

Environment:
•?

Actuators:
•?

Sensors:
•?
https://www.today.com/video/amazon-adebuts-new-package-delivery-drone-61414981780
PEAS: Specifying an Amazon delivery drone

Performance measure:
• maximize profits
• minimize time
• obey laws governing airspace restrictions
• deliver package to right location
• keep package in good condition
• avoid accidents
• reduce noise
• preserve battery life

Environment:
• airspace
• obstacles when airborne (other drones, birds, buildings, trees, utility poles)
• obstacles when landing (pets, patio furniture, lawnmowers, people, cars)
• weather
• distances/route information between warehouse and destinations
• position of houses, and spaces that are safe for drop-off
• package weight

Actuators:
• propellers and flight control system
• payload actuators, e.g. arm/basket/claw for picking up and dropping off packages
• lights or signals
• mechanism to announce/verify delivery
• device for delivering packages to customers

Sensors:
• GPS
• radar/LiDAR
• altitude sensor
• weather sensors (barometer, etc.)
• gyroscope
• accelerometer
• camera
• rotor sensors
• weight sensor to recognize package
The rational agent designer's goal
Goal of the AI practitioner who designs rational agents: given a PEAS task environment,
1. Construct an agent function f that maximizes the expected value of the performance measure (an abstract mathematical description), and
2. Design an agent program that implements f on a particular architecture (a concrete implementation).
Agent Types and their PEAS
Why AI?
• Artificial intelligence is more accurate than doctors in diagnosing breast cancer from mammograms, a study in the journal Nature suggests.
• An international team, including researchers from Google Health and Imperial College London, designed and trained a computer model on X-ray images from nearly 29,000 women.
• The algorithm outperformed six radiologists in reading mammograms.
• AI was still as good as two doctors working together.
• Unlike humans, AI is tireless. Experts say it could improve detection.

"AI 'outperforms' doctors diagnosing breast cancer" (2 January 2020)
https://www.bbc.com/news/health-50857759
Components of AI
• Perception: sense audio-visual and other inputs (in perception, the environment is scanned by means of various sense organs, real or artificial)
• Knowledge: capture concepts and the relationships between these input data
• Learning: improving with experience (the simplest is learning by trial and error)
Components of AI
• Solving: completion. A special-purpose method is tailor-made for a particular problem; a general-purpose method is applicable to a wide range of different problems. One general-purpose technique used in AI is means-end analysis, which involves the step-by-step reduction of the difference between the current state and the goal state; some of the methods are move forward or move back (a toy sketch follows this list).
• Planning: best use of options to achieve goals (methods: supervised, unsupervised)
• Action: move or manipulate objects or programs (to start up)
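
As a toy illustration of means-end analysis (not from the slides): states here are plain integers, the goal is another integer, and the only operators are "move forward" and "move back". The agent repeatedly applies whichever operator most reduces the difference to the goal.

def difference(state, goal):
    return abs(goal - state)

def forward(state):  # "move forward"
    return state + 1

def back(state):  # "move back"
    return state - 1

def means_end(state, goal, operators=(forward, back)):
    """Greedily apply the operator that most reduces the current-to-goal difference."""
    plan = []
    while state != goal:
        op = min(operators, key=lambda f: difference(f(state), goal))
        state = op(state)
        plan.append(op.__name__)
    return plan

print(means_end(2, 5))  # ['forward', 'forward', 'forward']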


A potted history of AI
• 1943: McCulloch & Pitts: Boolean circuit model of the brain
• 1950: Turing's "Computing Machinery and Intelligence"
• 1950s: Early AI programs: e.g., Samuel's checkers program, Gelernter's Geometry Engine, Newell & Simon's Logic Theorist and General Problem Solver
• 1952–69: "Look, Ma, no hands!"
• 1956: Dartmouth meeting: "Artificial Intelligence" adopted
• 1965: Robinson's complete algorithm for logical reasoning
• 1966–74: AI discovers computational complexity; neural network research almost disappears
• 1969–79: Early development of knowledge-based systems
• 1971: Terry Winograd's SHRDLU dialogue system
• 1980–88: Expert systems industry booms
• 1988–93: Expert systems industry busts: "AI Winter"
• 1985–95: Neural networks return to popularity
• 1988–: Resurgence of probability; general increase in technical depth; "Nouvelle AI": ALife, GAs, soft computing
• 1995–: Agents, agents, everywhere...
• 1997: IBM Deep Blue beats the World Chess Champion
• 2001–: Very large datasets: Google gigaword corpus, Wikipedia
• 2003–: Human-level AI back on the agenda
• 2011: IBM Watson wins Jeopardy!
• 2012: US state of Nevada permits driverless cars
State of the art

• What can AI do today? A concise answer is difficult because there are so many activities in so many subfields. Here we sample a few applications:
• Understanding and processing natural languages
• Driverless vehicles
• Speech recognition
• Autonomous planning and scheduling
• Game playing
• Spam fighting
• Logistics planning (logistics: the detailed organization and implementation of a complex operation)
• Robotics
• Machine translation
Identifying AI systems and branches of AI
• Machine learning: the science of getting a computer to act without explicit programming. Deep learning is a subset of machine learning. There are three types of machine learning algorithms:
  – Supervised learning: data sets are labeled so that patterns can be detected and used to label new data sets.
  – Unsupervised learning: data sets aren't labeled and are sorted according to similarities or differences.
  – Reinforcement learning: the algorithm learns by trial and error, guided by rewards and penalties for its actions.
• Machine vision: the science of allowing computers to see. This technology captures and analyzes visual information using a camera, analog-to-digital conversion and digital signal processing. It is often compared to human eyesight.
Identifying AI systems and branches of AI
• Natural language processing (NLP): one of the older and best-known examples of NLP is spam detection, which looks at the subject line and the text of an email and decides if it's junk or not. Current approaches to NLP are based on machine learning.
• Robotics: a field of engineering focused on the design and manufacturing of robots. Robots are often used to perform tasks that are difficult for humans to perform or to perform consistently. They are used in assembly lines for car production.
• Self-driving cars: these use a combination of computer vision, image recognition and deep learning to build the automated skill of piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.
Subsets of Artificial Intelligence
Difference between machine learning and deep learning

Both machine learning and deep learning discover patterns in data, but they involve dramatically different techniques.
Deep learning is a specific kind of machine learning. Both start with training and test data and a model, and go through an optimization process to find the weights that make the model best fit the data. Both can handle numeric and non-numeric problems, though deep learning models tend to produce better results than machine learning models in some areas.
Basic machine learning models do become progressively better at whatever their function is, but they still need some guidance.
Difference between machine learning and deep learning

• In a machine learning model, if an AI algorithm returns an inaccurate prediction, an engineer has to step in and make adjustments.
• With a deep learning model, the algorithm can determine on its own whether a prediction is accurate, through its own neural network.
Homework 1
When should something be considered an agent?

If we're talking about a self-driving taxi, when should we consider something part of the environment versus another agent?

For instance, a telephone pole is part of the environment, but a car might be another agent.

When something's behavior can best be described as having its own performance measure, then we should consider it to be an agent.

Examples
Properties of Task Environment

• Environment: Observability
• Environment: Determinism
• Environment: Episodicity
• Environment: Dynamism
• Environment: Continuity
• Environment: Other Agents
• Environment: Complex
Environment: Observability
• Fully observable
– If an agent's sensors give it access to the complete state of the
environment at each point in time, then we say that the task
environment is fully observable
– Example: Chess, Poker (if the information of both hands is
available)

• Partially observable
  – An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data
  – Example: a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares
  – Example: taxi driving - one can never predict the behavior of traffic exactly
Environment: Determinism
• Deterministic
– The next state of the environment is completely determined by the
current state and the action of agent
– Example: Image analysis, Poker (both hands known)
• Stochastic
– The presence of an element of uncertainty leads to stochastic
environment
– Example: board game with dice (Snakes and Ladders)
– Example: One's tires blow out and one's engine seizes up without
warning
• Strategic
– The environment wholly determined by the preceding state and
the actions of multiple agents
– Example: Chess
Environment: Episodicity
• Episodic
– Subsequent episodes do not depend on what actions occurred in
previous episodes (agent's experience is divided into atomic episodes)
– Example: An agent that has to spot defective parts on an
assembly line bases each decision on the current part, regardless
of previous decisions
• Sequential
– The agent is engaged in a series of connected episodes
– Example: Chess
– Example: Taxi driving
Environment: Dynamism
• Static
– The environment does not change from one state to the next
while the agent is considering its course of action. The only
changes to the environment are those caused by the agent
itself (agent does not need to observe the world during its thinking
process)

– Example: Turns taken during Ludo


– Example: Assembly line of new tractors for rejection of
engine torque
• Dynamic
– The environment changes over time independent of the
actions of the agent - continuously asking the agent what it
wants to do; if it hasn't decided yet, that counts as deciding to
do nothing
– Example: Interactive tutor
Environment: Continuity
• Discrete
  – The number of distinct percepts and actions is limited in a discrete environment (this is also related to how time is handled)
  – Example: a chess game has a finite number of distinct states
• Continuous
  – The number of distinct percepts and actions is not limited in a continuous environment
  – Example: taxi driving is a continuous-state and continuous-time problem
  – Example: taxi-driving actions are also continuous (steering angles, etc.)
Environment: Other Agents

• Single agent
– It is based on the involvement of a single agent
– Example: an agent solving a crossword puzzle by
itself is a single- agent environment
• Multi-agent
– More than one agent is involved resulting in complex
environment
– Example: an agent playing chess is in a two-agent
environment
Environment: Complex

• Complexity of the environment includes


– Knowledge rich
– Input rich

• The agent must have a way of managing this complexity.


Often such conditions lead to the development of
– Sensing strategies
– Attentional mechanisms
• So that the agent may more readily focus its efforts in
such rich environments
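
To tie the six properties together, here is a small sketch (the representation is invented, not from the slides) recording a classification of two task environments discussed above:

from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    name: str
    observable: str   # "fully" / "partially"
    determinism: str  # "deterministic" / "stochastic" / "strategic"
    episodic: bool    # False = sequential
    static: bool      # False = dynamic
    discrete: bool    # False = continuous
    agents: str       # "single" / "multi"

chess = TaskEnvironment("chess", "fully", "strategic", False, True, True, "multi")
taxi = TaskEnvironment("taxi driving", "partially", "stochastic", False, False, False, "multi")
print(chess, taxi, sep="\n")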
Environment Types
Types of Agent
Simple Reflex Agents
• The agent selects an action on the basis of the current percept, ignoring the rest of the percept history
• Example: the vacuum agent whose agent function is tabulated is a simple reflex agent
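
A minimal sketch of such a vacuum agent (the locations and rules are invented for the example): the decision uses only the current percept, a (location, status) pair.

def reflex_vacuum_agent(percept):
    """Condition-action rules over the current percept only."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

print(reflex_vacuum_agent(("A", "dirty")))  # suck
print(reflex_vacuum_agent(("A", "clean")))  # right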
Simple Reflex Agents
• Some weak points are:
  – tables may become very large
  – a table-based agent has no intelligence in the agent itself
  – the entire work is done by the designer of the table
  – the agent just looks up the table to decide how to act and has no autonomy
  – all actions are predetermined, with no concept of learning
Reflex Agents with State
• The agent maintains some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state
• Example: for the braking problem of a car, the internal state is not too extensive: just the previous frame from the camera, to detect when the brake lights ahead switch on
Reflex Agents with State
• Updating internal state requires two kinds of knowledge in the agent program
  – First, we need some information about how the world evolves independently of the agent: for example, that an overtaking car generally will be closer behind than it was a moment ago
  – Second, we need some information about how the agent's own actions affect the world: for example, that when the agent turns the steering wheel clockwise, the car turns to the right
• This knowledge about "how the world works", whether implemented in simple Boolean circuits or in complete scientific theories, is called a model of the world
• An agent that uses such a model is called a model-based agent (a minimal sketch follows)
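
A minimal sketch of the model-based reflex loop (the "world model" here is deliberately tiny and invented: it only remembers which squares have been seen clean):

state = {"clean": set()}  # internal state: what we remember beyond the current percept

def update_state(state, percept):
    """Fold the new percept into the internal state (the model step)."""
    location, status = percept
    if status == "clean":
        state["clean"].add(location)
    else:
        state["clean"].discard(location)

def model_based_agent(percept):
    update_state(state, percept)
    location, status = percept
    if status == "dirty":
        return "suck"
    # The decision uses remembered state, not just the current percept:
    return "no-op" if state["clean"] >= {"A", "B"} else "move"

print(model_based_agent(("A", "clean")))  # move (B not yet known to be clean)
print(model_based_agent(("B", "clean")))  # no-op (model says both squares clean)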
Goal-based Agents
• The agent needs some sort of goal information as well as a current
state description that describes situations that are desirable
• Example: Knowing about the current state of the environment is not
always enough to decide what to do. For example, at a road junction,
the taxi can turn left, turn right, or go straight on
• The correct decision depends on where the taxi is trying to get
to, i.e. the goal
• The reflex agent brakes when it sees brake lights. A goal-based agent, in
principle, could reason that if the car in front has its brake lights on, it will
slow down
Goal-based Agents
• This agent appears less efficient
• It is more flexible because the knowledge that supports its
decisions is represented explicitly and can be modified. If it starts
to rain, the agent can update its knowledge of how effectively its
brakes will operate; this will automatically cause all of the
relevant behaviors to be altered to suit the new conditions. For
the reflex agent, on the other hand, we would have to rewrite
many condition-action rules
• The percept-sequence is not known a priori and must be
determined by fair exploration of numerous options
Utility-based Agents
• Goals alone are not really enough to generate high-quality behavior in
most environments
• For example, there are many action sequences that will get the taxi to
its destination (thereby achieving the goal) but some are quicker, safer,
more reliable, or cheaper than others
• Goals just provide a crude binary distinction between "happy" and
"unhappy" states, whereas a more general performance measure
should allow a comparison of different world states according to
exactly how happy they would make the agent if they could be
achieved
• Because "happy" does not sound very scientific, the customary
terminology is to say that if one world state is preferred to another, then
it has higher utility for the agent
Utility-based Agents

• This type of agent provides a more general framework with different


preferences for different goals
• A utility function maps a state or a sequence of states to a real
valued utility
• The agent objective is to maximize the expected utility
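
A minimal sketch (the routes and numbers are invented): the utility function turns the binary goal "reach the destination" into a graded score, so the agent can prefer the faster route when the added risk is small.

# Hypothetical routes to the same destination: (minutes, accident risk) pairs.
routes = {"highway": (20, 0.02), "back streets": (35, 0.01)}

def utility(minutes, risk):
    """Real-valued score: faster and safer is better."""
    return -minutes - 1000 * risk

best = max(routes, key=lambda r: utility(*routes[r]))
print(best)  # highway (utility -40 vs. -45)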
Utility-based Agents

Learning Agents

• Learning is important for true autonomy


• Autonomous agent must be accompanied with some learning built
into it
• Learning agents are able to operate in initially unknown
environments
• Let’s see four conceptual components of a learning agent
Learning Agents
1. Learning element: responsible for making modifications to the performance element
2. Performance element: responsible for selecting external actions
3. Critic: provides feedback on how the agent is doing and determines how the performance element can be improved
4. Problem generator: responsible for suggesting exploratory experiences
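
A skeletal sketch of how the four components might be wired together (all names are invented; each component is modeled as a plain function here):

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # adjusts the performance element
        self.critic = critic                            # scores behavior against a standard
        self.problem_generator = problem_generator      # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(self.performance_element, feedback)
        action = self.performance_element(percept)
        return self.problem_generator(action)  # may substitute an exploratory action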
Jazak Allah khair
