
CS 8691 – Artificial Intelligence Department of CSE 2019 - 20

UNIT I INTRODUCTION
Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent
Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems.

Artificial Intelligence:
 “Artificial Intelligence is the ability of a computer to act like a human being”.
 Artificial intelligence systems consist of the people, procedures, hardware, software, data, and
knowledge needed to develop computer systems and machines that demonstrate the
characteristics of intelligence.

Programming Without AI:
 A computer program without AI can answer only the specific questions it is meant to solve.
 Modification of the program leads to a change in its structure.
 Modification is not quick and easy; it may affect the program adversely.

Programming With AI:
 A computer program with AI can answer the generic questions it is meant to solve.
 AI programs can absorb new modifications by putting highly independent pieces of information
together; hence you can modify even a minute piece of information in the program without
affecting its structure.
 Program modification is quick and easy.

Four Approaches of Artificial Intelligence:


 Acting humanly: The Turing test approach.
 Thinking humanly: The cognitive modelling approach.
 Thinking rationally: The laws of thought approach.
 Acting rationally: The rational agent approach.

Acting humanly: The Turing Test approach


The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence. A computer passes the test if a human interrogator, after posing
some written questions, cannot tell whether the written responses come from a person or from a
computer.

The computer would need to possess the following capabilities:

 natural language processing to enable it to communicate successfully in English;


 knowledge representation to store what it knows or hears;

 automated reasoning to use the stored information to answer questions and to draw new
conclusions;
 machine learning to adapt to new circumstances and to detect and extrapolate patterns.

St. Joseph’s Group of Institutions Unit I Page 1 of 12



Thinking humanly: The cognitive modelling approach


To decide whether a given program thinks like a human, we must first have some way of
determining how humans think. The interdisciplinary field of cognitive science brings together
computer models from AI and experimental techniques from psychology to construct precise and
testable theories of the workings of the human mind. Real cognitive science is necessarily based on
experimental investigation of actual humans or animals. AI and cognitive science continue to
fertilize each other, especially in the areas of vision, natural language, and learning.

Thinking rationally: The “laws of thought” approach


The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,”
that is, irrefutable reasoning processes. His famous syllogisms provided patterns for argument
structures that always give correct conclusions given correct premises.
For example: “Socrates is a man; all men are mortal; therefore Socrates is mortal.”
These laws of thought were supposed to govern the operation of the mind, and initiated the field of
logic.

Acting rationally: The rational agent approach


Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is just
something that perceives and acts.
The right thing is that which is expected to maximize goal achievement, given the available
information. Acting rationally does not necessarily involve thinking. For example, the blinking
reflex involves no deliberation; thinking, when it occurs, should be in the service of rational action.

Agents and environments


An agent is anything that can be viewed as perceiving its environment through sensors and acting
upon that environment through actuators.

 Human Sensors:
Eyes, ears, and other organs for sensors.
 Human Actuators:
Hands, legs, mouth, and other body parts.
 Robotic Sensors:
Microphones, cameras, and infrared range finders.
 Robotic Actuators:
Motors, displays, speakers, etc.

Agent Characteristics
 Situatedness
The agent receives some form of sensory input from its environment, and it performs some
action that changes its environment in some way.
Examples of environments: the physical world and the Internet.
 Autonomy
The agent can act without direct intervention by humans or other agents and that it has control
over its own actions and internal state.
 Adaptivity
The agent is capable of
(1) reacting flexibly to changes in its environment;
(2) taking goal-directed initiative (i.e., being pro-active) when appropriate; and
(3) learning from its own experience, its environment, and interactions with others.
 Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or humans.

Properties of Environment
An environment is everything in the world that surrounds the agent but is not a part
of the agent itself. An environment can be described as the situation in which an agent is present.
The environment is where the agent lives and operates; it provides the agent with something to
sense and act upon.
1. Fully observable vs Partially Observable:
 If an agent's sensors can access the complete state of the environment at each point in time,
then it is a fully observable environment; otherwise it is partially observable.
 A fully observable environment is easy, as there is no need to maintain an internal state to
keep track of the history of the world.
 If an agent has no sensors at all, the environment is called unobservable.
 Example: chess – the board is fully observable, as are the opponent's moves.
Driving – what is around the next bend is not observable, so the environment is partially
observable.

2. Deterministic vs Stochastic:
 If the agent's current state and selected action completely determine the next state of the
environment, the environment is called deterministic.
 A stochastic environment is random in nature and cannot be completely determined by the
agent.
 In a deterministic, fully observable environment, the agent does not need to worry about
uncertainty.

3. Episodic vs Sequential:
 In an episodic environment, there is a series of one-shot actions, and only the current percept
is required to choose an action.
 In a sequential environment, however, the agent requires memory of past actions to determine
the next best action.

4. Single-agent vs Multi-agent
 If only one agent is involved in an environment and operates by itself, the environment is
called a single-agent environment.
 If multiple agents are operating in an environment, it is called a multi-agent environment.

 The agent design problems in a multi-agent environment are different from those in a
single-agent environment.

5. Static vs Dynamic:
 If the environment can change while an agent is deliberating, it is called a dynamic
environment; otherwise it is static.
 Static environments are easy to deal with because the agent does not need to keep looking at
the world while deciding on an action.
 In a dynamic environment, however, the agent needs to keep looking at the world before each
action.
 Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an
example of a static environment.

6. Discrete vs Continuous:
 If an environment has a finite number of percepts and actions that can be performed within
it, it is called a discrete environment; otherwise it is continuous.
 A chess game is a discrete environment, as there is a finite number of moves that can be
performed.
 A self-driving car operates in a continuous environment.

7. Known vs Unknown
 Known and unknown are not actually features of an environment but of the agent's state of
knowledge about it.
 In a known environment, the results of all actions are known to the agent, while in an unknown
environment, the agent needs to learn how it works in order to act.
 It is quite possible for a known environment to be partially observable and for an unknown
environment to be fully observable.

8. Accessible vs. Inaccessible


 If an agent can obtain complete and accurate information about the state of the environment,
the environment is called accessible; otherwise it is inaccessible.
 An empty room whose state can be defined by its temperature is an example of an accessible
environment.
 Information about an event elsewhere on earth is an example of an inaccessible environment.

Task environments, which are essentially the "problems" to which rational agents are the "solutions."

PEAS: Performance Measure, Environment, Actuators, Sensors


Performance Measure:
The output which we get from the agent. All the necessary results that an agent produces after
processing come under its performance.

Environment:
All the surrounding things and conditions of an agent fall in this section. It basically consists of all the
things under which the agents work.

Actuators:
The devices, hardware or software through which the agent performs any actions or processes any
information to produce a result are the actuators of the agent.

Sensors:
The devices through which the agent observes and perceives its environment are the sensors of the
agent.

Consider, e.g., the task of designing an automated taxi driver:


Agent Type: Taxi Driver
 Performance measure: safe, fast, legal, comfortable trip, maximize profits
 Environment: roads, other traffic, pedestrians, customers
 Actuators: steering wheel, accelerator, brake, signal, horn
 Sensors: cameras, speedometer, GPS, engine sensors, keyboard
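A PEAS description is just structured data, so it can be written down directly. The sketch below is purely illustrative (the `PEAS` class name is invented here, not a standard API); the field values restate the taxi-driver table above.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS task-environment description: Performance measure,
    Environment, Actuators, Sensors."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# The taxi-driver agent from the table above, as data.
taxi_driver = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "engine sensors", "keyboard"],
)

print("GPS" in taxi_driver.sensors)  # True
```

Writing the PEAS description first, before any agent code, is the point of the exercise: it pins down the task environment the agent must be rational in.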

Rational Agent - A system is rational if it does the “right thing”, given what it knows.
The rationality of an agent depends on:
 The agent's prior knowledge of the environment.
 The performance measure that defines the criterion of success.
 The actions that the agent can perform.
 The agent's percept sequence to date.

For every possible percept sequence, a rational agent should select an action that is expected
to maximize its performance measure, given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
 An omniscient agent knows the actual outcome of its actions and can act accordingly; but
omniscience is impossible in reality.
 An ideal rational agent perceives and acts so as to achieve the greatest performance measure.
Eg. Crossing a road: perception occurs on both sides first, and only then action.
 No perception occurs in a degenerate agent.
Eg. A clock. It does not observe its surroundings; no matter what happens outside, the clock
works based on its inbuilt program.
 An ideal agent is described by ideal mappings: “specifying which action an agent ought to take
in response to any given percept sequence provides a design for an ideal agent”.
Eg. The SQRT function in a calculator.
 Doing actions in order to modify future percepts - sometimes called information gathering -
is an important part of rationality.
 A rational agent should be autonomous - it should learn from its own experience to
compensate for partial or incorrect prior knowledge.

The Structure of Intelligent Agents


Agent = Architecture + Agent Program
Architecture = the machinery that an agent executes on. (Hardware)
Agent Program = an implementation of an agent function. (Algorithm , Logic – Software)
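The Agent = Architecture + Agent Program split can be made concrete with a small sketch. The class and function names below are invented for illustration, and a trivial agent function stands in for real decision logic: the architecture is only the machinery that feeds percepts in and carries actions out.

```python
class Architecture:
    """The machinery the agent runs on: it delivers percepts to the
    agent program and executes the actions the program returns."""
    def __init__(self, agent_program):
        self.agent_program = agent_program  # the software half of the agent

    def step(self, percept):
        # Hardware's job: pass the percept in, carry the action out.
        return self.agent_program(percept)

def echo_agent_program(percept):
    """A trivial agent function: maps each percept directly to an action."""
    return f"act-on-{percept}"

agent = Architecture(echo_agent_program)
print(agent.step("light"))  # act-on-light
```

The same architecture could run any of the agent programs described below (simple reflex, model-based, goal-based, ...) simply by swapping the function passed in.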


Types of AI Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All of these agents can improve their performance and generate better actions over time.

1)The Simple reflex agents


 The Simple reflex agents are the simplest agents. These agents take decisions on the basis of
the current percepts and ignore the rest of the percept history(past State).
 These agents only succeed in the fully observable environment.
 The Simple reflex agent does not consider any part of percepts history during their decision
and action process.
 The Simple reflex agent works on Condition-action rule, which means it maps the current state
to action. Such as a Room Cleaner agent, it works only if there is dirt in the room.
 Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They have no knowledge of non-perceptual parts of the current state.
o The condition-action rule tables are usually too big to generate and store.
o They are not adaptive to changes in the environment.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.


Ex: if car-in-front-is-braking then initiate-braking.

function SIMPLE-REFLEX-AGENT(percept) returns an action
    persistent: rules, a set of condition–action rules

    state ← INTERPRET-INPUT(percept)
    rule ← RULE-MATCH(state, rules)
    action ← rule.ACTION
    return action
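The pseudocode above can be turned into a runnable sketch. This assumes a simple two-square vacuum-cleaner world (an assumption for illustration), with INTERPRET-INPUT and RULE-MATCH reduced to a dictionary lookup over condition-action rules; only the current percept is used, never the percept history.

```python
def interpret_input(percept):
    """Reduce a (location, status) percept to a condition string."""
    location, status = percept
    return "dirty" if status == "Dirty" else f"clean-at-{location}"

# Condition-action rules: each condition maps directly to an action.
RULES = {
    "dirty": "Suck",
    "clean-at-A": "Right",
    "clean-at-B": "Left",
}

def simple_reflex_agent(percept):
    """Select an action using only the current percept (no history)."""
    state = interpret_input(percept)
    rule_action = RULES[state]
    return rule_action

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # Right
```

Note how the rule table enumerates every condition explicitly; this is exactly why such tables become "too big to generate and store" in richer environments.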

2) Model Based Reflex Agents:


 The Model-based agent can work in a partially observable environment, and track the situation.
 A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a Model-based
agent.
o Internal State: It is a representation of the current state based on percept history.
 These agents have the model, "which is knowledge of the world" and based on the model they
perform actions.
 Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
    persistent: state, the agent’s current conception of the world state
                model, a description of how the next state depends on the current state and action
                rules, a set of condition–action rules
                action, the most recent action, initially none

    state ← UPDATE-STATE(state, action, percept, model)
    rule ← RULE-MATCH(state, rules)
    action ← rule.ACTION
    return action
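As a concrete (and deliberately simplified) instance of the pseudocode, the sketch below tracks the dirt status of both vacuum-world squares as its internal state, so the agent can act sensibly even though each percept only reveals the current square - i.e., a partially observable environment.

```python
class ModelBasedReflexAgent:
    """Keeps an internal state (dirt status of each square) updated from
    the current percept, and acts on that state rather than the raw percept."""
    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}  # internal world model
        self.location = None

    def update_state(self, percept):
        """UPDATE-STATE: fold the new percept into the internal state."""
        location, status = percept
        self.location = location
        self.state[location] = status

    def __call__(self, percept):
        self.update_state(percept)
        if self.state[self.location] == "Dirty":
            return "Suck"
        # Visit the other square if the model says it may still need cleaning.
        other = "B" if self.location == "A" else "A"
        if self.state[other] in ("Unknown", "Dirty"):
            return "Right" if other == "B" else "Left"
        return "NoOp"  # model says everything is clean

agent = ModelBasedReflexAgent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
```

Unlike the simple reflex agent, this one can conclude that the *other* square is already clean and stop, which requires remembering past percepts.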

3) Goal Based Agents:


o Knowledge of the current state of the environment is not always sufficient for an agent to
decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
o They choose an action so that they can achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding whether
the goal is achieved. Such consideration of different scenarios is called searching and
planning, and it makes an agent proactive.

4) Utility Based Agents


o These agents are similar to goal-based agents but add an extra component of utility
measurement ("level of happiness"), which distinguishes them by providing a measure of
success at a given state.
o Utility-based agents act based not only on goals but also on the best way to achieve the goal.


o The utility-based agent is useful when there are multiple possible alternatives and the agent
has to choose the best action.
o The utility function maps each state to a real number to check how efficiently each action
achieves the goals.
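A minimal sketch of utility-based action selection follows. The utility values and outcome predictions here are hand-made assumptions for illustration; a real agent would derive them from its model of the world.

```python
# Hypothetical utilities ("level of happiness") for possible outcome states.
UTILITY = {"at-goal-fast": 1.0, "at-goal-slow": 0.6, "stuck": 0.0}

# Hypothetical outcome prediction: which state each action is expected to reach.
OUTCOME = {"highway": "at-goal-fast", "backroad": "at-goal-slow", "wait": "stuck"}

def utility_based_choice(actions):
    """Pick the action whose predicted outcome has the highest utility.
    All actions here reach the goal or fail; utility ranks *how well*."""
    return max(actions, key=lambda action: UTILITY[OUTCOME[action]])

print(utility_based_choice(["highway", "backroad", "wait"]))  # highway
```

A goal-based agent would treat "highway" and "backroad" as equally acceptable (both reach the goal); the utility function is what lets the agent prefer the faster one.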

5. Learning Agents
o A learning agent in AI is an agent that can learn from its past experiences; that is, it has
learning capabilities.
o It starts by acting with basic knowledge and then adapts automatically through learning.
o A learning agent has four main conceptual components:
a. Learning element: responsible for making improvements by learning from the
environment.
b. Critic: the learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
c. Performance element: responsible for selecting external actions.
d. Problem generator: responsible for suggesting actions that will lead to new and
informative experiences.
o Hence, learning agents are able to learn, analyze their performance, and look for new ways to
improve it.


Problem Solving Approach to Typical AI problems.


Problem-solving agents:
In artificial intelligence, search techniques are universal problem-solving methods. Rational
agents, or problem-solving agents, mostly use these search strategies or algorithms to solve a
specific problem and provide the best result. Problem-solving agents are goal-based agents that
use an atomic representation. In this topic, we will learn various problem-solving search algorithms.

 Some of the problems most popularly solved with the help of artificial intelligence are:
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.

Problem Searching
 In general, searching refers to finding the information one needs.
 Searching is the most commonly used technique of problem solving in artificial intelligence.
 A searching algorithm helps us search for a solution to a particular problem.

Problem: Problems are the issues that come up in any system. A solution is needed to solve a
particular problem.

The three classes of problems:
 Ignorable, in which solution steps can be ignored.
 Recoverable, in which solution steps can be undone.
 Irrecoverable, in which solution steps cannot be undone.

Steps : Solve Problem Using Artificial Intelligence

 The process of solving a problem consists of five steps. These are:

1. Defining the Problem: The definition of the problem must be stated precisely. It should
contain the possible initial as well as final situations, which should result in an acceptable
solution.
2. Analysing the Problem: The problem and its requirements must be analysed, as a few
features can have an immense impact on the resulting solution.
3. Identification of Solutions: This phase generates a reasonable number of solutions to the
given problem within a particular range.
4. Choosing a Solution: From all the identified solutions, the best solution is chosen based on
the results produced by the respective solutions.
5. Implementation: After choosing the best solution, it is implemented.

Search Algorithm Terminologies:

 Search: Searching is a step-by-step procedure to solve a search problem in a given search space.
A search problem can have three main factors:
1. Search Space: The search space represents the set of possible solutions a system may
have.
2. Start State: The state from which the agent begins the search.
3. Goal Test: A function which observes the current state and returns whether the goal
state has been achieved.
 Search Tree: A tree representation of a search problem is called a search tree. The root of the
search tree is the root node, which corresponds to the initial state.
 Actions: A description of all the actions available to the agent.
 Transition Model: A description of what each action does, represented as a transition model.
 Path Cost: A function which assigns a numeric cost to each path.
 Solution: An action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution with the lowest path cost among all solutions.
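The terminology above (start state, actions, transition model, goal test, path cost) maps directly onto a small problem class. This is a generic sketch, not a specific library's API; the toy graph and state names are invented for illustration.

```python
class SearchProblem:
    """A search problem: initial state, actions, transition model,
    goal test, and a unit step cost."""
    def __init__(self, initial, goal, graph):
        self.initial = initial  # start state
        self.goal = goal        # used by the goal test
        self.graph = graph      # state -> list of successor states

    def actions(self, state):
        """Actions available in a state: here, which successor to move to."""
        return self.graph.get(state, [])

    def result(self, state, action):
        """Transition model: in this toy graph the action *is* the successor."""
        return action

    def goal_test(self, state):
        return state == self.goal

    def path_cost(self, path):
        """Each step costs 1, so a path's cost is its number of steps."""
        return len(path) - 1

problem = SearchProblem("S", "G", {"S": ["A", "B"], "A": ["G"], "B": []})
print(problem.goal_test("G"))              # True
print(problem.path_cost(["S", "A", "G"]))  # 2
```

The three toy problems below (vacuum world, 8-puzzle, 8-queens) are each just a particular choice of these five ingredients.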

Example Problems
A Toy Problem is intended to illustrate or exercise various problem-solving methods. A real-
world problem is one whose solutions people actually care about.
Toy Problems:

1) Vacuum World
States : The state is determined by both the agent location and the dirt locations. The agent is in
one of 2 locations, each of which might or might not contain dirt. Thus there are 2 × 2² = 8
possible world states.
Initial state: Any state can be designated as the initial state.
Actions: In this simple environment, each state has just three actions: Left, Right, and Suck.
Larger environments might also include Up and Down.
Transition model: The actions have their expected effects, except that moving Left in the leftmost
square, moving Right in the rightmost square, and Sucking in a clean square have no effect. The
complete state space is shown in Figure.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
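The formulation above is small enough to enumerate directly. A sketch, with states written as (agent location, dirt flags for squares A and B):

```python
from itertools import product

LOCATIONS = ("A", "B")

# All 2 * 2^2 = 8 states: agent location plus a dirt flag per square.
STATES = [(loc, dirt) for loc in LOCATIONS
          for dirt in product((False, True), repeat=2)]

def result(state, action):
    """Transition model: Left/Right move the agent (no effect at the edges),
    Suck cleans the current square (no effect if it is already clean)."""
    loc, dirt = state
    dirt = dict(zip(LOCATIONS, dirt))
    if action == "Left":
        loc = "A"
    elif action == "Right":
        loc = "B"
    elif action == "Suck":
        dirt[loc] = False
    return (loc, tuple(dirt[l] for l in LOCATIONS))

def goal_test(state):
    """Goal: all squares are clean."""
    return not any(state[1])

print(len(STATES))                          # 8
print(result(("A", (True, True)), "Suck"))  # ('A', (False, True))
```

Moving Left in the leftmost square is a no-op, exactly as the transition model above specifies, because the location simply stays "A".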

2) 8- Puzzle Problem

States: A state description specifies the location of each of the eight tiles and the blank in one of the
nine squares.
Initial state: Any state can be designated as the initial state. Note that any given goal can be reached
from exactly half of the possible initial states.

Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up,
or Down. Different subsets of these are possible depending on where the blank is.
Transition model: Given a state and action, this returns the resulting state; for example, if we apply
Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
Goal test: This checks whether the state matches the goal configuration shown in Figure.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
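The 8-puzzle formulation can be sketched as follows, with states represented as 3×3 tuples (an encoding chosen here for illustration) and 0 standing for the blank; actions move the blank, and the available subset depends on where the blank is.

```python
def find_blank(state):
    """Locate the blank (0) in a 3x3 tuple-of-tuples state."""
    for r, row in enumerate(state):
        if 0 in row:
            return r, row.index(0)

# Blank movements as (row delta, column delta).
MOVES = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}

def actions(state):
    """Legal blank movements: those that stay on the 3x3 board."""
    r, c = find_blank(state)
    return [a for a, (dr, dc) in MOVES.items()
            if 0 <= r + dr < 3 and 0 <= c + dc < 3]

def result(state, action):
    """Transition model: swap the blank with the adjacent tile."""
    r, c = find_blank(state)
    dr, dc = MOVES[action]
    grid = [list(row) for row in state]
    grid[r][c], grid[r + dr][c + dc] = grid[r + dr][c + dc], grid[r][c]
    return tuple(tuple(row) for row in grid)

start = ((1, 2, 3), (4, 0, 5), (6, 7, 8))
print(actions(start))         # ['Up', 'Down', 'Left', 'Right']
print(result(start, "Left"))  # ((1, 2, 3), (0, 4, 5), (6, 7, 8))
```

With the blank in the centre all four actions are legal; with the blank in a corner only two are, which is exactly the "different subsets of these are possible" point above.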

3) 8 – Queens Problem:

 States: Any arrangement of 0 to 8 queens on the board is a state.


 Initial state: No queens on the board.
 Actions: Add a queen to any empty square.
 Transition model: Returns the board with a queen added to the specified square.
 Goal test: 8 queens are on the board, none attacked.
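The goal test for this formulation can be sketched as follows. As a simplification (an assumption of this sketch, since the formulation above allows a queen on any empty square), a candidate state is encoded with one queen per column, so it is just a tuple of row indices; the goal test then checks that 8 queens are placed and no pair attacks.

```python
def attacks(r1, c1, r2, c2):
    """Two queens attack if they share a row, a column, or a diagonal."""
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(rows):
    """rows[c] is the row of the queen in column c; the goal requires
    8 queens on the board with no pair attacking."""
    if len(rows) != 8:
        return False
    return not any(attacks(rows[i], i, rows[j], j)
                   for i in range(8) for j in range(i + 1, 8))

solution = (0, 4, 7, 5, 2, 6, 1, 3)  # one known 8-queens solution
print(goal_test(solution))                   # True
print(goal_test((0, 1, 2, 3, 4, 5, 6, 7)))   # False: one long diagonal
```

The one-queen-per-column encoding shrinks the search space dramatically compared with "any arrangement of 0 to 8 queens", which is why incremental formulations usually adopt it.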

