
Galgotias College of Engineering and Technology

ARTIFICIAL INTELLIGENCE
KCS 071
B.TECH IV YR / VII SEM
(2022-23)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Course Outcomes (on completion of this course, the student will be able to):

CO1: Understand the basics of the theory and practice of Artificial Intelligence as a discipline and about intelligent agents.
CO2: Understand search techniques and gaming theory.
CO3: Apply knowledge representation techniques and problem solving strategies to common AI applications.
CO4: Be aware of techniques used for classification and clustering.
CO5: Be aware of the basics of pattern recognition and the steps required for it.

UNIT 1
Introduction–Definition – Future of Artificial Intelligence – Characteristics of Intelligent
Agents–Typical Intelligent Agents – Problem Solving Approach to Typical AI problems.

UNIT 2
Problem solving Methods – Search Strategies- Uninformed – Informed – Heuristics –
Local Search Algorithms and Optimization Problems – Searching with Partial
Observations – Constraint Satisfaction Problems – Constraint Propagation –
Backtracking Search – Game Playing – Optimal Decisions in Games – Alpha – Beta
Pruning – Stochastic Games.

UNIT 3
First Order Predicate Logic – Prolog Programming – Unification – Forward Chaining-
Backward Chaining – Resolution – Knowledge Representation – Ontological
Engineering- Categories and Objects – Events – Mental Events and Mental Objects –
Reasoning Systems for Categories – Reasoning with Default Information.

UNIT 4
Architecture for Intelligent Agents – Agent communication – Negotiation and
Bargaining – Argumentation among Agents – Trust and Reputation in Multi-agent
systems.

UNIT 5
AI applications – Language Models – Information Retrieval- Information Extraction –
Natural Language Processing – Machine Translation – Speech Recognition.

UNIT 1

INTRODUCTION TO ARTIFICIAL INTELLIGENCE


Introduction – Definition – Future of Artificial Intelligence – Characteristics of
Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to Typical
AI problems.

1.1 INTRODUCTION

Intelligence (natural):
 It is a natural process.
 It is actually hereditary.
 Knowledge is required for intelligence.
 No human is an expert; we may get better solutions from other humans.

Artificial Intelligence:
 It is programmed by humans.
 It is not hereditary.
 A knowledge base (KB) and electricity are required to generate output.
 Expert systems are made which aggregate many persons' experience and ideas.

 Artificial Intelligence is concerned with the design of intelligence in an artificial device. The
term was coined by John McCarthy in 1956.
 Intelligence is the ability to acquire, understand and apply the knowledge to achieve goals in the
world.
 AI is the study of the mental faculties through the use of computational models
 AI is the study of intellectual/mental processes as computational processes.
 AI program will demonstrate a high level of intelligence to a degree that equals or exceeds the
intelligence required of a human in performing some task.
 AI is unique, sharing borders with Mathematics, Computer Science, Philosophy,
Psychology, Biology, Cognitive Science and many others.
 Although there is no clear definition of AI or even Intelligence, it can be described as an attempt to
build machines that like humans can think and act, able to learn and use knowledge to solve
problems on their own.

History of AI:

Important research that laid the groundwork for AI:

 In 1931, Goedel laid the foundation of Theoretical Computer Science: he published the first
universal formal language and showed that mathematics itself is either flawed or allows for
unprovable but true statements.

 In 1936, Turing reformulated Goedel's result and Church's extension thereof.

 In 1956, John McCarthy coined the term "Artificial Intelligence" as the topic of the Dartmouth
Conference, the first conference devoted to the subject.

 In 1957, The General Problem Solver (GPS) demonstrated by Newell, Shaw & Simon
 In 1958, John McCarthy (MIT) invented the Lisp language.
 In 1959, Arthur Samuel (IBM) wrote the first game-playing program, for checkers, to achieve sufficient
skill to challenge a world champion.
 In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced the idea of interactive graphics into
computing.
 In 1966, Ross Quillian (PhD dissertation, Carnegie Inst. of Technology; now CMU) demonstrated
semantic nets
 In 1967, the Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland
at Stanford) was demonstrated to interpret mass spectra of organic chemical compounds; it was the first successful
knowledge-based program for scientific reasoning.
 In 1967, Doug Engelbart invented the mouse at SRI
 In 1968, Marvin Minsky & Seymour Papert published Perceptrons, demonstrating limits of simple neural
nets.
 In 1972, Prolog developed by Alain Colmerauer.
 In the mid-1980s, neural networks became widely used with the backpropagation algorithm (first described
by Werbos in 1974).
 In the 1990s, major advances occurred in all areas of AI, with significant demonstrations in machine learning, intelligent
tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining,
natural language understanding and translation, vision, virtual reality, games, and other topics.
 In 1997, Deep Blue beats the World Chess Champion Kasparov
 In 2002, iRobot, founded by researchers at the MIT Artificial Intelligence Lab, introduced Roomba, a
vacuum cleaning robot. By 2006, two million had been sold.

Foundations of Artificial Intelligence:

o Philosophy
 e.g., foundational issues (can a machine think?), issues of knowledge and belief, mutual knowledge
o Psychology and Cognitive Science
 e.g., problem solving skills
o Neuro-Science
 e.g., brain architecture
o Computer Science And Engineering
 e.g., complexity theory, algorithms, logic and inference, programming languages, and
system building.
o Mathematics and Physics
 e.g., statistical modeling, continuous mathematics, statistical physics, and complex systems

Sub Areas of AI:


1) Game Playing
Deep Blue Chess program beat world champion Garry Kasparov.
2) Speech Recognition
PEGASUS spoken language interface to American Airlines' EAASY SABRE reservation system,
which allows users to obtain flight information and make reservations over the telephone. The
1990s saw significant advances in speech recognition, so that limited systems are now
successful.
3) Computer Vision
Face recognition programs in use by banks, government, etc. The ALVINN system from CMU
autonomously drove a van from Washington, D.C. to San Diego (all but 52 of 2,849 miles),
averaging 63 mph day and night, and in all weather conditions. Handwriting recognition, electronics
and manufacturing inspection, photo interpretation, baggage inspection, reverse engineering to
automatically construct a 3D geometric model.
4) Expert Systems
Application-specific systems that rely on obtaining the knowledge of human experts in an area
and programming that knowledge into a system.
a. Diagnostic Systems : MYCIN system for diagnosing bacterial infections of the blood and
suggesting treatments. Intellipath pathology diagnosis system (AMA approved). Pathfinder
medical diagnosis system, which suggests tests and makes diagnoses. Whirlpool customer
assistance center.
b. System Configuration
DEC's XCON system for custom hardware configuration. Radiotherapy treatment planning.
c. Financial Decision Making
Credit card companies, mortgage companies, banks, and the U.S. government employ
AI systems to detect fraud and expedite financial transactions. For example, AMEX
credit check.
d. Classification Systems
Put information into one of a fixed set of categories using several sources of information.
E.g., financial decision making systems. NASA developed a system for classifying very
faint areas in astronomical images into either stars or galaxies with very high accuracy by
learning from human experts' classifications.
5) Mathematical Theorem Proving
Use inference methods to prove new theorems.
6) Natural Language Understanding
AltaVista's translation of web pages. Translation of Caterpillar Truck manuals into 20 languages.
7) Scheduling and Planning
Automatic scheduling for manufacturing. DARPA's DART system used in Desert Storm and
Desert Shield operations to plan logistics of people and supplies. American Airlines rerouting
contingency planner. European space agency planning and scheduling of spacecraft assembly,
integration and verification.
8) Artificial Neural Networks
9) Machine Learning

1.2 DEFINITION
Building AI Systems:

1) Perception
Intelligent biological systems are physically embodied in the world and experience the world
through their sensors (senses). For an autonomous vehicle, input might be images from a camera
and range information from a rangefinder. For a medical diagnosis system, perception is the set
of symptoms and test results that have been obtained and input to the system manually.
2) Reasoning
Inference, decision-making, classification from what is sensed and what the internal "model" is
of the world. Might be a neural network, logical deduction system, Hidden Markov Model
induction, heuristic searching a problem space, Bayes Network inference, genetic algorithms, etc.
Includes areas of knowledge representation, problem solving, decision theory, planning, game
theory, machine learning, uncertainty reasoning, etc.
3) Action
Biological systems interact within their environment by actuation, speech, etc. All behavior is
centered around actions in the world. Examples include controlling the steering of a Mars rover
or autonomous vehicle, or suggesting tests and making diagnoses for a medical diagnosis system.
Includes areas of robot actuation, natural language generation, and speech synthesis.

The definitions of AI:


AI is a very wide field of science and engineering which makes intelligent machines and especially intelligent
computer programs. It is related to the similar task of using computers to understand human intelligence.
Scientists want to automate human intelligence for the following reasons:
(i) To understand and reason about human intelligence in a better way.
(ii) To make smarter programs.
(iii) To develop useful and efficient techniques to solve complex problems.

Definitions of AI vary along two main dimensions. Roughly, the ones on top are concerned with thought
processes and reasoning, whereas the ones on the bottom address behavior.
The study of how to make computers do things at which at the moment, people are better.
“Artificial Intelligence is the ability of a computer to act like a human being”.

 Systems that think like humans


 Systems that act like humans
 Systems that think rationally.
 Systems that act rationally.

a) "The exciting new effort to make computers b) "The study of mental faculties through the
think . . . machines with minds, in the full and use of computational models" (Charniak
literal sense" (Haugeland, 1985) and McDermott, 1985)

"The automation of] activities that we "The study of the computations that
associate with human thinking, activities such make it possible to perceive, reason, and
as decision-making, problem solving, act" (Winston, 1992)
learning..."(Bellman, 1978)
c) "The art of creating machines that perform d) "A field of study that seeks to explain and
functions that require intelligence when emulate intelligent behavior in terms of
performed by people" (Kurzweil, 1990) computational processes" (Schalkoff, 1
990)
"The study of how to make computers do
things at which, at the moment, people are "The branch of computer science that is
better" (Rich and Knight, 1 99 1 ) concerned with the automation of
intelligent behavior" (Luger and
Stubblefield, 1993)

Some definitions of artificial intelligence, organized into four categories

The definitions on the top, (a) and (b) are concerned with reasoning, whereas those on the bottom, (c)
and (d) address behavior. The definitions on the left, (a) and (c) measure success in terms of human
performance, and those on the right, (b) and (d) measure the ideal concept of intelligence called
rationality

(a) Intelligence - Ability to apply knowledge in order to perform better in an environment.
(b) Artificial Intelligence - Study and construction of agent programs that perform well in
a given environment, for a given agent architecture.
(c) Agent - An entity that takes action in response to percepts from an environment.
(d) Rationality - property of a system which does the “right thing” given what it knows.
(e) Logical Reasoning - A process of deriving new sentences from old, such that the new
sentences are necessarily true if the old ones are true.

Four Approaches of Artificial Intelligence:


 Acting humanly: The Turing test approach.
 Thinking humanly: The cognitive modelling approach.
 Thinking rationally: The laws of thought approach.
 Acting rationally: The rational agent approach.

Think, Human-Like: Cognitive Science Approach – "Machines that think like humans"
Think, Rationally: Laws of Thought Approach – "Machines that think rationally"
Act, Human-Like: Turing Test Approach – "Machines that behave like humans"
Act, Rationally: Rational Agent Approach – "Machines that behave rationally"

Scientific Goal: To determine which ideas about knowledge representation, learning, rule
systems, search, and so on, explain various sorts of real intelligence.
Engineering Goal: To solve real world problems using AI techniques such as knowledge
representation, learning, rule systems, search, and so on.
Traditionally, computer scientists and engineers have been more interested in the engineering goal,
while psychologists, philosophers and cognitive scientists have been more interested in the scientific
goal.
Cognitive Science: Think Human-Like

a. Requires a model for human cognition. Precise enough models allow simulation by
computers.

b. Focus is not just on behavior and I/O, but looks at the reasoning process.

c. Goal is not just to produce human-like behavior but to produce a sequence of steps of the
reasoning process, similar to the steps followed by a human in solving the same task.

Laws of thought: Think Rationally

a. The study of mental faculties through the use of computational models; that is, the study of
the computations that make it possible to perceive, reason, and act.

b. Focus is on inference mechanisms that are provably correct and guarantee an optimal solution.

c. Goal is to formalize the reasoning process as a system of logical rules and procedures of inference.

d. Develop systems of representation to allow inferences like "Socrates is a man. All men are
mortal. Therefore Socrates is mortal."

1.3 ACTING HUMANLY: THE TURING TEST APPROACH

The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence. A computer passes the test if a human interrogator, after
posing some written questions, cannot tell whether the written responses come from a person
or from a computer.

a. The art of creating machines that perform functions requiring intelligence when performed by people; that
is, the study of how to make computers do things which, at the moment, people do better.

b. Focus is on action, and not intelligent behavior centered around the representation of the world

c. Example: Turing Test

o 3 rooms contain: a person, a computer and an interrogator.

o The interrogator can communicate with the other 2 by teletype (so that the machine cannot
imitate the appearance or voice of the person).
o The interrogator tries to determine which is the person and which is the machine.

o The machine tries to fool the interrogator into believing that it is the human, and the
person also tries to convince the interrogator that he or she is the human.

o If the machine succeeds in fooling the interrogator, then conclude that the machine
is intelligent.

Figure: Turing Test

To pass the Turing Test, the computer would need the following capabilities:
 natural language processing to enable it to communicate successfully in English;
 knowledge representation to store what it knows or hears;
 automated reasoning to use the stored information to answer questions and to
draw new conclusions;
 machine learning to adapt to new circumstances and to detect and extrapolate patterns.

Total Turing Test includes a video signal so that the interrogator can test the subject’s
perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
“through the hatch.” To pass the total Turing Test, the computer will need

• computer vision to perceive objects, and robotics to manipulate objects and move
about.

Thinking humanly: The cognitive modelling approach

To analyse whether a given program thinks like a human, we must have some way of determining
how humans think. The interdisciplinary field of cognitive science brings together computer
models from AI and experimental techniques from psychology to try to construct precise and
testable theories of the workings of the human mind.

Although cognitive science is a fascinating field in itself, we are not going to discuss it
all that much here. We will occasionally comment on similarities or differences between
AI techniques and human cognition. Real cognitive science, however, is necessarily based on
experimental investigation of actual humans or animals, and we assume that the reader only
has access to a computer for experimentation. We will simply note that AI and cognitive science
continue to fertilize each other, especially in the areas of vision, natural language, and learning.

Thinking rationally: The “laws of thought” approach

The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that
is, irrefutable reasoning processes. His famous syllogisms provided patterns for argument
structures that always gave correct conclusions given correct premises.

For example, "Socrates is a man; all men are mortal; therefore Socrates is mortal."

These laws of thought were supposed to govern the operation of the mind, and initiated the field
of logic.

Acting rationally: The rational agent approach

Acting rationally means acting so as to achieve one's goals, given one's beliefs. An
agent is just something that perceives and acts.

The right thing: that which is expected to maximize goal achievement, given the
available information

Does not necessarily involve thinking.

For example, the blinking reflex; but it should be in the service of rational action.

Rational agent: Act Rationally

a. Tries to explain and emulate intelligent behavior in terms of computational processes; that is, it is
concerned with the automation of intelligence.

b. Focus is on systems that act sufficiently if not optimally in all situations.

c. Goal is to develop systems that are rational and sufficient

Difference between Natural & Artificial Intelligence

Natural Intelligence:
 Exhibited by human beings.
 Highly refined; no electricity is required to generate output.
 No one is an expert; we can get a better solution from one another.
 Intelligence increases under supervision.

Artificial Intelligence:
 Programmed by humans in machines.
 It exists in a computer system, so electrical energy is required for activation of output.
 Expert systems exist, which collect the ideas of human beings.
 Intelligence increases by updating the technology and algorithms used.

The difference between strong AI and weak AI:

Strong AI makes the bold claim that computers can be made to think on a level (at least) equal to humans. It truly
reasons and solves complex problems. In strong AI, the programs themselves are explanations for any solution.

Weak AI simply states that some "thinking-like" features can be added to computers to make them more useful tools,
and this has already started to happen (witness expert systems, drive-by-wire cars and speech recognition software).
Weak AI deals with the creation of some form of computer-based artificial intelligence which can reason and solve
problems in a limited domain. Some thinking-like features may be added to a machine, but true intelligence is absent;
here we have to work out the explanation of a solution in our own way rather than depending on the machine.

1.4 FUTURE OF ARTIFICIAL INTELLIGENCE

 Transportation: Although it could take a decade or more to perfect them, autonomous


cars will one day ferry us from place to place.

 Manufacturing: AI powered robots work alongside humans to perform a limited range


of tasks like assembly and stacking, and predictive analysis sensors keep equipment
running smoothly.

 Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more


quickly and accurately diagnosed, drug discovery is sped up and streamlined, virtual
nursing assistants monitor patients and big data analysis helps to create a more
personalized patient experience.
 Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist
human instructors and facial analysis gauges the emotions of students to help determine
who’s struggling or bored and better tailor the experience to their individual needs.

 Media: Journalism is harnessing AI, too, and will continue to benefit from it.
Bloomberg uses Cyborg technology to help make quick sense of complex financial
reports. The Associated Press employs the natural language abilities of Automated
Insights to produce 3,700 earnings report stories per year, nearly four times more
than in the recent past.

 Customer Service: Last but hardly least, Google is working on an AI assistant that can
place human-like calls to make appointments at, say, your neighborhood hair salon. In
addition to words, the system understands context and nuance.

1.5 CHARACTERISTICS OF INTELLIGENT AGENTS

Intelligent Agents:
The word agent is derived from the idea of an agency hiring a person to do a particular piece of work
on behalf of a user. In AI, an agent is a program which perceives its environment through sensors
and acts upon it accordingly by using actuators. E.g.: software agents, robotic agents, nano robots for body
check-ups / biological agents, Internet search agents, etc. Software agents carry the following properties:

 Intelligent agents are autonomous.


 Ability to perceive data and signals from the environment.
 Adapting to change in surroundings.
 Transportable or mobile over networks.
 Ability to learn, reason, and interact with humans.
Situatedness

The agent receives some form of sensory input from its environment, and it performs some
action that changes its environment in some way.

Examples of environments: the physical world and the Internet.

Autonomy

The agent can act without direct intervention by humans or other agents and that it has control
over its own actions and internal state.

Adaptivity

The agent is capable of


(1) reacting flexibly to changes in its environment;
(2) taking goal-directed initiative (i.e., is pro-active), when appropriate; and
(3) Learning from its own experience, its environment, and interactions with others.
Sociability

The agent is capable of interacting in a peer-to-peer manner with other agents or humans

1.6 AGENTS AND ITS TYPES

Figure: Agents and Environments

An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.

 Human Sensors:
 Eyes, ears, and other organs for sensors.
 Human Actuators:
 Hands, legs, mouth, and other body parts.
 Robotic Sensors:
 Mic, cameras and infrared range finders for sensors
 Robotic Actuators:
 Motors, Display, speakers etc
An agent can be:

Human-Agent: A human agent has eyes, ears, and other organs which work as sensors,
and hands, legs, and the vocal tract which work as actuators.

Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for
sensors and various motors for actuators.

Software Agent: Software agent can have keystrokes, file contents as sensory input
and act on those inputs and display output on the screen.

Hence the world around us is full of agents such as thermostat, cell phone, camera, and
even we are also agents. Before moving forward, we should first know about sensors,
effectors, and actuators.

Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.
Actuators: Actuators are the components of machines that convert energy into motion.
The actuators are only responsible for moving and controlling a system. An actuator can
be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.

Figure: Effectors

1.7 PROPERTIES OF ENVIRONMENT

An environment is everything in the world which surrounds the agent, but it is not a part of an
agent itself. An environment can be described as a situation in which an agent is present.

The environment is where the agent lives and operates; it provides the agent with something to sense
and act upon.

Fully observable vs Partially Observable:

If an agent's sensors can sense or access the complete state of the environment at each point of time, then
it is a fully observable environment; else it is partially observable. In other words, in a fully observable
environment, the agent's sensors give it access to the complete state of the environment at each point. In a
partially observable environment, noise and inaccurate sensors make predictions about the state unclear.

A fully observable environment is easy, as there is no need to maintain an internal state to keep track
of the history of the world.

If an agent has no sensors at all, then the environment is called unobservable.

Example: chess – the board is fully observable, as are opponent’s moves.


Driving– what is around the next bend is not observable and hence partially observable.

Deterministic vs Stochastic

 If an agent's current state and selected action can completely determine the next state of the
environment, then such an environment is called a deterministic environment.
 A stochastic environment is random in nature and cannot be determined completely byan agent.

 In a deterministic, fully observable environment, agent does not need to worry about uncertainty.
E.g.: Taxi driving is stochastic because one can never predict the behavior of traffic exactly. The vacuum
cleaner world is deterministic.

Episodic vs Sequential

 In an episodic environment, there is a series of one-shot actions, and only the current percept is
required for the action.

 However, in Sequential environment, an agent requires memory of past actions to determine the
next best actions.
E.g.: For a taxi-driving agent, the intensity with which the brakes are applied may have long-term consequences.

Single-agent vs Multi-agent

 If only one agent is involved in an environment, and operating by itself then such an environment
is called single agent environment.

 However, if multiple agents are operating in an environment, then such an environment is called a
multi-agent environment.

 The agent design problems in the multi-agent environment are different from single agent
environment.
Ex: An agent solving a crossword puzzle alone is a single-agent environment. Chess playing is a two-agent
environment. Robot soccer is a multi-agent (cooperative multi-agent) environment.

Static vs Dynamic

 If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.
 Static environments are easy to deal with because an agent does not need to keep looking
at the world while deciding on an action.

 However for dynamic environment, agents need to keep looking at the world at each
action.

 Taxi driving is an example of a dynamic environment whereas Crossword puzzles are


an example of a static environment.

Discrete vs Continuous

 If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment; else it
is called a continuous environment.

 A chess game comes under discrete environment as there is a finite number of moves
that can be performed.

 A self-driving car is an example of a continuous environment.

Known vs Unknown
 Known and unknown are not actually a feature of an environment, but it is an agent's
state of knowledge to perform an action.

 In a known environment, the results for all actions are known to the agent. While in
unknown environment, agent needs to learn how it works in order to perform an action.

 It is quite possible that a known environment to be partially observable and an Unknown


environment to be fully observable.

Accessible vs. Inaccessible

 If an agent can obtain complete and accurate information about the state of the
environment, then such an environment is called an accessible environment; else it
is called inaccessible.

 An empty room whose state can be defined by its temperature is an example of an


accessible environment.

 Information about an event on earth is an example of Inaccessible environment.

Task environments are essentially the "problems" to which rational agents are the
"solutions."

PEAS: Performance Measure, Environment, Actuators, Sensors

The performance measure decides the criterion for the success of an agent's behavior. When an agent is
plunked down in an environment, it generates a sequence of actions according to the percepts it
receives. The sequence of actions causes the environment to go through a sequence of states. If the
sequence is desirable, then the agent has performed well.

Performance

The output which we get from the agent. All the necessary results that an agent gives
after processing comes under its performance.

Environment

All the surrounding things and conditions of an agent fall in this section. It basically
consists of all the things under which the agents work.

Actuators

The devices, hardware or software through which the agent performs any actions or
processes any information to produce a result are the actuators of the agent.

Sensors

The devices through which the agent observes and perceives its environment are the
sensors of the agent.

Figure: Examples of agent types and their PEAS descriptions

Rational Agent - A system is rational if it does the "right thing", given what it knows.

Characteristic of Rational Agent

 The agent's prior knowledge of the environment.


 The performance measure that defines the criterion of success.
 The actions that the agent can perform.
 The agent's percept sequence to date.

For every possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.

An omniscient agent knows the actual outcome of its actions and can act accordingly;
but omniscience is impossible in reality.

An ideal rational agent perceives and then does things; it has a greater performance measure.
Eg. Crossing a road: here perception occurs first, on both sides, and only then the action. No
perception occurs in a degenerate agent.
Eg. A clock: it does not view its surroundings; no matter what happens outside, the
clock works based on its inbuilt program.

An ideal agent is described by ideal mappings: “specifying which action an agent ought to
take in response to any given percept sequence provides a design for an ideal agent”.

Eg. The SQRT function calculation in a calculator.

Doing actions in order to modify future percepts, sometimes called information
gathering, is an important part of rationality.

A rational agent should be autonomous: it should learn from its own experience rather than
relying only on its prior knowledge.
Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever perceived.

Agent function:
Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any
given percept sequence to an action. Internally, the agent function can be implemented by an agent program
f : P* → A, where P* is a sequence of zero or more percepts and A is an action taken by the agent.
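To make the mapping f : P* → A concrete, the following is a minimal table-driven sketch in Python (our own illustrative code, not part of the syllabus); the lookup table plays the role of the agent function, and the percept names mirror the vacuum-cleaner tabulation shown later in Fig B.

# A table-driven agent program: the table maps complete percept sequences to actions.
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table        # dict: tuple of percepts -> action
        self.percepts = []        # the percept sequence P* observed so far

    def program(self, percept):
        self.percepts.append(percept)
        # Look up the whole percept sequence, i.e. apply f : P* -> A
        return self.table.get(tuple(self.percepts), "NoOp")

# Hypothetical usage with vacuum-world style percepts (location, status):
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent.program(("A", "Dirty")))   # -> Suck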

The Structure of Intelligent Agents

Agent = Architecture + Agent Program


Architecture = the machinery that an agent executes on. (Hardware)
Agent Program = an implementation of an agent function. (Algorithm,
Logic – Software)
Agent program
Internally, the agent function for an artificial agent will be implemented by an agent program. It is
important to keep these two ideas distinct. The agent function is an abstract mathematical
description; the agent program is a concrete implementation, running on the agent architecture.

To illustrate these ideas, we will use a very simple example-the vacuum-cleaner world shown
in Fig A below.
This particular world has just two locations: squares A and B. The vacuum agent
perceives which square it is in and whether there is dirt in the square. It can choose to move left,
move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the
current square is dirty, then suck, otherwise move to the other square. A partial tabulation of this
agent function is shown in Fig B.

Fig A: A vacuum-cleaner world with just two locations.

Agent function

Percept Sequence Action

[A, Clean] Right

[A, Dirty] Suck

[B, Clean] Left

[B, Dirty] Suck

[A, Clean], [A, Clean] Right

[A, Clean], [A, Dirty] Suck

Fig B: Partial tabulation of a simple agent function for the example: vacuum-cleaner world

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

Fig: The REFLEX-VACUUM-AGENT program is invoked for each new percept (location, status)
and returns an action each time.
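The pseudocode above translates almost line for line into Python; the following is a small illustrative sketch (the function and variable names are our own, not from the notes):

# Simple reflex vacuum agent: decides using only the current percept.
def reflex_vacuum_agent(percept):
    location, status = percept          # percept = (location, status)
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

# Example percepts and the actions chosen:
for p in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(p, "->", reflex_vacuum_agent(p))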

For an Automated Car:
Environment: road (artificial/natural)
Actuators: brakes & accelerator
Sensors: cameras
Performance measure: mileage
Rational Agent
For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
Ex: vacuum-cleaner world

A simple agent that cleans a square if it is dirty and moves to the other square if not

Is it rational?

Assumption:

performance measure: 1 point for each clean square at each time step

environment is known a priori

actions = {left, right, suck, no-op}

agent is able to perceive the location and dirt in that location

Given different assumptions, it might not be rational anymore.

Omniscience, Learning & Autonomy

1. Distinction between rationality and omniscience

o expected performance vs. actual performance

2. Agents can perform actions in order to modify future percepts so as to obtain useful

information(information gathering, exploration)

3. An agent can also learn from what it perceives

4. An agent is autonomous if its behavior is determined by its own experience (with ability to

learn and adapt)

1.8 TYPES OF AGENTS

Agents can be grouped into five classes based on their degree of perceived intelligence
and capability:

 Simple Reflex Agents


 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agent

The Simple reflex agents

 The Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history (past State).

 These agents only succeed in the fully observable environment.

 The Simple reflex agent does not consider any part of percepts history during their
decision and action process.

 The Simple reflex agent works on Condition-action rule, which means it maps the
current state to action. Such as a Room Cleaner agent, it works only if there is dirt in
the room.

 Problems for the simple reflex agent design approach:

o They have very limited intelligence

o They do not have knowledge of non-perceptual parts of the current state


o The condition-action rules are mostly too big to generate and to store.

o Not adaptive to changes in the environment.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.

Ex: if car-in-front-is-braking then initiate-braking.

Figure A simple reflex agent

Model Based Reflex Agents

 The Model-based agent can work in a partially observable environment, and track the
situation.
 A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
o Internal State: It is a representation of the current state based on percept history.
 These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
 Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.

Figure A model-based reflex agent
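As a rough illustration of the two factors above (a world model plus an internal state updated from the percept history), here is a generic Python skeleton; the class and parameter names, and the shape of the model, are our own assumptions for illustration.

# Model-based reflex agent skeleton: keeps an internal state that is updated from
# the last action, the current percept, and a model of how the world evolves.
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                       # internal representation of the world
        self.last_action = None
        self.update_state = update_state      # model: (state, last_action, percept) -> new state
        self.rules = rules                    # list of (condition, action) pairs over the state

    def program(self, percept):
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:  # first matching condition-action rule fires
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"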

Goal Based Agents

 The knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.

 The agent needs to know its goal which describes desirable situations.

 Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.

 They choose an action, so that they can achieve the goal.

 These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is called
searching and planning, which makes an agent proactive.

Figure A goal-based agent

Utility Based Agents

 These agents are similar to the goal-based agent but provide an extra component of
utility measurement (“Level of Happiness”) which makes them different by providing
a measure of success at a given state.

 A utility-based agent acts based not only on goals but also on the best way to achieve the goal.

 The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.

 The utility function maps each state to a real number to check how efficiently each
action achieves the goals.

Figure A utility-based agent
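A tiny sketch of the idea that a utility function maps each state to a real number and the agent picks the action whose predicted outcome scores highest; the route names, predicted times, and helper functions below are hypothetical, chosen only for illustration.

# Utility-based action selection: choose the action whose predicted resulting state
# has the highest utility ("level of happiness", a real number).
def choose_action(state, actions, result, utility):
    # result(state, action) predicts the next state; utility(state) returns a float.
    return max(actions, key=lambda a: utility(result(state, a)))

# Hypothetical example: a taxi choosing between routes by expected travel time.
routes = ["highway", "city"]
predicted_minutes = {"highway": 25, "city": 40}
result = lambda state, a: predicted_minutes[a]       # the "state" reached is the travel time
utility = lambda minutes: -minutes                   # shorter trips are better
print(choose_action(None, routes, result, utility))  # -> highway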


Learning Agents

 A learning agent in AI is the type of agent which can learn from its past experiences, or
it has learning capabilities.

 It starts to act with basic knowledge and is then able to act and adapt automatically
through learning.

 A learning agent has mainly four conceptual components, which are:

a. Learning element: It is responsible for making improvements by learning from


environment

b. Critic: The learning element takes feedback from the critic, which describes how well
the agent is doing with respect to a fixed performance standard.

c. Performance element: It is responsible for selecting external action

d. Problem generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.

 Hence, learning agents are able to learn, analyze performance, and look for new ways
to improve the performance.

Figure Learning Agents

1.9 PROBLEM SOLVING APPROACH TO TYPICAL AI PROBLEMS

Problem-solving agents

In Artificial Intelligence, Search techniques are universal problem-solving methods.


Rational agents or problem-solving agents in AI mostly use these search strategies or
algorithms to solve a specific problem and provide the best result. Problem-solving agents are
goal-based agents and use atomic representation. In this topic, we will learn various
problem-solving search algorithms.

Some of the problems most popularly solved with the help of artificial intelligence
are:

1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.

Problem Searching

 In general, searching refers to finding the information one needs.
 Searching is the most commonly used technique of problem solving in artificial
intelligence.

 The searching algorithm helps us to search for the solution of a particular problem.

Problem: Problems are the issues which come across any system. A solution is needed to
solve that particular problem.

Steps : Solve Problem Using Artificial Intelligence

The process of solving a problem consists of five steps. These are:

Figure Problem Solving in Artificial Intelligence

1. Defining the Problem: The definition of the problem must be stated precisely. It should
contain the possible initial as well as final situations, which should result in an acceptable solution.

2. Analyzing the Problem: The problem and its requirements must be analyzed, as a
few features can have an immense impact on the resulting solution.

3. Identification of Solutions: This phase generates a reasonable number of solutions to
the given problem in a particular range.

4. Choosing a Solution: From all the identified solutions, the best solution is chosen based
on the results produced by the respective solutions.

5. Implementation: After choosing the best solution, its implementation is done.

Measuring problem-solving performance

We can evaluate an algorithm’s performance in four ways:

Completeness: Is the algorithm guaranteed to find a solution when there is one?
Optimality: Does the strategy find the optimal solution?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is needed to perform the search?

Search Algorithm Terminologies

 Search: Searching is a step by step procedure to solve a search-problem in a given


search space. A search problem can have three main factors:

1. Search Space: Search space represents a set of possible solutions, which a system
may have.

2. Start State: It is a state from where agent begins the search.

3. Goal test: It is a function which observes the current state and returns whether the
goal state is achieved or not.

 Search tree: A tree representation of the search problem is called a search tree. The root of
the search tree is the root node, which corresponds to the initial state.

 Actions: It gives the description of all the available actions to the agent.

 Transition model: A description of what each action does can be represented as a


transition model.

 Path Cost: It is a function which assigns a numeric cost to each path.

 Solution: It is an action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution that has the lowest cost among all solutions.
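The components listed above (search space, start state, goal test, actions, transition model, and path cost) can be gathered into one small interface. The following Python sketch uses our own naming and is only illustrative, not code from the notes; the toy problems below all fit this shape.

# Generic search problem: the pieces any search algorithm needs.
class SearchProblem:
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, state):
        raise NotImplementedError            # actions available in this state

    def result(self, state, action):
        raise NotImplementedError            # transition model

    def goal_test(self, state):
        raise NotImplementedError            # is this a goal state?

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1               # default: each step costs 1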

Example Problems

A Toy Problem is intended to illustrate or exercise various problem-solving methods.


A real-world problem is one whose solutions people actually care about.

Toy Problems

Vacuum World

States: The state is determined by both the agent location and the dirt locations. The
agent is in one of the 2 locations, each of which might or might not contain dirt. Thus there are
2*2^2=8 possible world states.

Initial state: Any state can be designated as the initial state.

Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.

Transition model: The actions have their expected effects, except that moving Left in
the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have
no effect. The complete state space is shown in the figure below.

Goal test: This checks whether all the squares are clean.

Path cost: Each step costs 1, so the path cost is the number of steps in the path.

Figure 1.12 Vacuum World State Space Graph
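In the spirit of the generic problem sketch given earlier, the vacuum world formulation above can be written down directly; the state encoding and names here are our own illustrative choices, not code from the notes.

# Vacuum world as a search problem. A state is (agent_location, dirt_at_A, dirt_at_B).
INITIAL_STATE = ("A", True, True)
ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Suck":                     # sucking cleans the current square
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    if action == "Left":                     # no effect if already in the leftmost square
        return ("A", dirt_a, dirt_b)
    return ("B", dirt_a, dirt_b)             # "Right": no effect if already in square B

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b         # goal: all squares are clean

# The 2 * 2^2 = 8 world states mentioned above can be enumerated directly:
states = [(loc, a, b) for loc in "AB" for a in (True, False) for b in (True, False)]
print(len(states))                           # -> 8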

1) 8- Puzzle Problem

Figure 1.13 8- Puzzle Problem

States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.

Initial state: Any state can be designated as the initial state. Note that any given goal
can be reached from exactly half of the possible initial states.

Actions: The simplest formulation defines the actions as movements of the blank space Left,
Right, Up, or Down. Different subsets of these are possible depending on where the blank is.

Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in the figure above, the resulting state has the 5 and the blank
switched.

Goal test: This checks whether the state matches the goal configuration shown in the figure.

Path cost: Each step costs 1, so the path cost is the number of steps in the path.

Queens Problem

Figure 1.14 Queens Problem

 States: Any arrangement of 0 to 8 queens on the board is a state.


 Initial state: No queens on the board.
 Actions: Add a queen to any empty square.
 Transition model: Returns the board with a queen added to the specified square.
 Goal test: 8 queens are on the board, none attacked.

Consider the given problem and describe the operators involved in it. Consider the water
jug problem: You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any
measuring marker on it. There is a pump that can be used to fill the jugs with water. How can
you get exactly 2 gallons of water into the 4-gallon jug?

Explicit Assumptions: A jug can be filled from the pump, water can be poured out of a
jug on to the ground, water can be poured from one jug to another and that there are no other
measuring devices available.

Here the initial state is (0, 0). The goal state is (2, n) for any value of n.

State Space Representation: We will represent a state of the problem as a tuple (x, y),
where x represents the amount of water in the 4-gallon jug and y represents the amount of water
in the 3-gallon jug. Note that 0 ≤ x ≤ 4 and 0 ≤ y ≤ 3.

To solve this we have to make some assumptions not mentioned in the problem. They
are:

 We can fill a jug from the pump.


 We can pour water out of a jug to the ground.
 We can pour water from one jug to another.
 There is no measuring device available.

Operators - we must define a set of operators that will take us from one state to another.

Table 1.1

Sr. | Current State                  | Next State        | Description
1   | (x, y) if x < 4                | (4, y)            | Fill the 4-gallon jug
2   | (x, y) if y < 3                | (x, 3)            | Fill the 3-gallon jug
3   | (x, y) if x > 0                | (x – d, y)        | Pour some water out of the 4-gallon jug
4   | (x, y) if y > 0                | (x, y – d)        | Pour some water out of the 3-gallon jug
5   | (x, y) if x > 0                | (0, y)            | Empty the 4-gallon jug on the ground
6   | (x, y) if y > 0                | (x, 0)            | Empty the 3-gallon jug on the ground
7   | (x, y) if x + y >= 4 and y > 0 | (4, y – (4 – x))  | Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
8   | (x, y) if x + y >= 3 and x > 0 | (x – (3 – y), 3)  | Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
9   | (x, y) if x + y <= 4 and y > 0 | (x + y, 0)        | Pour all the water from the 3-gallon jug into the 4-gallon jug
10  | (x, y) if x + y <= 3 and x > 0 | (0, x + y)        | Pour all the water from the 4-gallon jug into the 3-gallon jug
11  | (0, 2)                         | (2, 0)            | Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12  | (2, y)                         | (0, y)            | Empty the 2 gallons in the 4-gallon jug on the ground

Figure: Solution

Table 1.2

Solution

S.No. | Gallons in 4-gal jug (x) | Gallons in 3-gal jug (y) | Rule Applied
1     | 0 | 0 | Initial state
2     | 4 | 0 | 1. Fill the 4-gallon jug
3     | 1 | 3 | 8. Pour from 4 into 3 until 3 is full
4     | 1 | 0 | 6. Empty the 3-gallon jug
5     | 0 | 1 | 10. Pour all of 4 into 3
6     | 4 | 1 | 1. Fill the 4-gallon jug
7     | 2 | 3 | 8. Pour from 4 into 3 until 3 is full

 4-gallon one and a 3-gallon Jug

 No measuring mark on the jug.

 There is a pump to fill the jugs with water.

 How can you get exactly 2 gallons of water into the 4-gallon jug?
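As a sketch of how the state space and operators above can be searched mechanically (a preview of the search strategies covered in Unit 2), here is a small breadth-first search for the water jug problem; this is our own illustrative code, not part of the original notes. It returns one shortest sequence of states from (0, 0) to a state with 2 gallons in the 4-gallon jug.

from collections import deque

# Successor states of (x, y): x = water in the 4-gallon jug, y = water in the 3-gallon jug.
def successors(state):
    x, y = state
    return {
        (4, y),                                    # fill the 4-gallon jug
        (x, 3),                                    # fill the 3-gallon jug
        (0, y),                                    # empty the 4-gallon jug
        (x, 0),                                    # empty the 3-gallon jug
        (min(4, x + y), y - (min(4, x + y) - x)),  # pour 3-gallon into 4-gallon
        (x - (min(3, x + y) - y), min(3, x + y)),  # pour 4-gallon into 3-gallon
    }

def solve(start=(0, 0)):
    frontier = deque([[start]])                    # queue of paths, explored breadth-first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if x == 2:                                 # goal: 2 gallons in the 4-gallon jug
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])

print(solve())
# One possible shortest solution:
# [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]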
