Notes
UNIT I INTRODUCTION
Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent
Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems.
Artificial Intelligence:
“Artificial Intelligence is the ability of a computer to act like a human being”.
Artificial intelligence systems consist of the people, procedures, hardware, software, data, and
knowledge needed to develop computer systems and machines that demonstrate the
characteristics of intelligence, such as:
automated reasoning – to use the stored information to answer questions and to draw new
conclusions;
machine learning – to adapt to new circumstances and to detect and extrapolate patterns.
Human Sensors:
Eyes, ears, and other organs for sensors.
Human Actuators:
Hands, legs, mouth, and other body parts.
Robotic Sensors:
Microphones, cameras, and infrared range finders for sensors.
Robotic Actuators:
Motors, displays, speakers, etc.
Properties of Environment
An environment is everything in the world that surrounds the agent, but it is not a part
of the agent itself. An environment can be described as the situation in which an agent is present.
The environment is where the agent lives and operates; it provides the agent with something to
sense and act upon.
1. Fully observable vs Partially Observable:
If an agent sensor can sense or access the complete state of an environment at each point of time
then it is a fully observable environment, else it is partially observable.
A fully observable environment is easy to deal with, as there is no need to maintain an internal
state to keep track of the history of the world.
If an agent has no sensors at all, then the environment is called unobservable.
Example: chess – the board is fully observable, as are opponent’s moves.
Driving – what is around the next bend is not observable and hence partially observable.
2. Deterministic vs Stochastic:
If an agent's current state and selected action can completely determine the next state of the
environment, then such environment is called a deterministic environment.
A stochastic environment is random in nature and cannot be determined completely by an
agent.
In a deterministic, fully observable environment, an agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
In an episodic environment, there is a series of one-shot actions, and only the current percept
is required for the action.
However, in a sequential environment, an agent requires memory of past actions to determine
the next best action.
4. Single-agent vs Multi-agent
If only one agent is involved in an environment and is operating by itself, then such an
environment is called a single-agent environment.
However, if multiple agents are operating in an environment, then such an environment is
called a multi-agent environment.
5. Static vs Dynamic:
If the environment can change while an agent is deliberating, then such an environment is
called a dynamic environment; otherwise it is called a static environment.
Static environments are easy to deal with because an agent does not need to keep looking at the
world while deciding on an action.
In a dynamic environment, however, the agent needs to keep looking at the world before each action.
Taxi driving is an example of a dynamic environment whereas Crossword puzzles are an
example of a static environment.
6. Discrete vs Continuous:
If there are a finite number of percepts and actions that can be performed in an environment,
then such an environment is called a discrete environment; otherwise it is called a continuous
environment.
A chess game comes under a discrete environment, as there is a finite number of moves that can
be performed.
A self-driving car is an example of a continuous environment.
7. Known vs Unknown
Known and unknown are not actually features of an environment; they describe the agent's state of
knowledge about how to perform an action.
In a known environment, the results of all actions are known to the agent, while in an unknown
environment, the agent needs to learn how the environment works in order to act.
It is quite possible for a known environment to be partially observable and for an unknown
environment to be fully observable.
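As a quick illustration of these properties, the short Python sketch below tags a few example
task environments along the dimensions above (known vs unknown is omitted, since it describes
the agent's knowledge rather than the environment). The chosen examples and property values are
illustrative assumptions, not a definitive classification.

# A minimal sketch (assumed examples) classifying task environments
# along the properties described above.
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    name: str
    fully_observable: bool
    deterministic: bool
    episodic: bool
    single_agent: bool
    static: bool
    discrete: bool

ENVIRONMENTS = [
    # Chess: fully observable, deterministic, sequential, two agents, static, discrete.
    TaskEnvironment("chess", True, True, False, False, True, True),
    # Taxi driving: partially observable, stochastic, sequential, multi-agent, dynamic, continuous.
    TaskEnvironment("taxi driving", False, False, False, False, False, False),
    # Crossword puzzle: fully observable, deterministic, sequential, single agent, static, discrete.
    TaskEnvironment("crossword puzzle", True, True, False, True, True, True),
]

for env in ENVIRONMENTS:
    print(env)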
Task environments are essentially the "problems" to which rational agents are the "solutions."
Environment:
All the surrounding things and conditions of an agent fall under this heading. It basically consists of
everything under which the agent works.
Actuators:
The devices, hardware, or software through which the agent performs any action or processes any
information to produce a result are the actuators of the agent.
Rational Agent - A system is rational if it does the “right thing”, given what it knows.
Characteristics of a Rational Agent:
The agent's prior knowledge of the environment.
The performance measure that defines the criterion of success.
The actions that the agent can perform.
The agent's percept sequence to date.
For every possible percept sequence, a rational agent should select an action that is expected
to maximize its performance measure, given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
An omniscient agent knows the actual outcome of its actions and can act accordingly; but
omniscience is impossible in reality.
An ideal rational agent perceives and does things; it has a greater performance measure.
E.g., crossing a road: first, perception of both sides occurs, and only then the action.
No perception occurs in a degenerate agent.
E.g., a clock: it does not view its surroundings, and no matter what happens outside, the clock
works based on its built-in program.
An ideal agent is described by ideal mappings: “specifying which action an agent ought to take in
response to any given percept sequence provides a design for an ideal agent”.
E.g., the SQRT function calculation in a calculator.
Doing actions in order to modify future percepts - sometimes called information gathering -
is an important part of rationality.
A rational agent should be autonomous - it should learn from its own experience rather than
relying only on its built-in prior knowledge.
Types of AI Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability: the simple reflex agent, the model-based reflex agent, the goal-based agent, the
utility-based agent, and the learning agent. All these agents can improve their performance and
generate better actions over time.
A simple reflex agent selects actions on the basis of the current percept only, using
condition–action rules, as in the following pseudocode:
function SIMPLE-REFLEX-AGENT(percept)
returns an action
persistent: rules, a set of condition–action rules
state ← INTERPRET-INPUT(percept)
rule ← RULE-MATCH(state, rules)
action ← rule.ACTION
return action
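A minimal runnable version of this pseudocode in Python is sketched below, using the
two-location vacuum world as the environment. The rule table and helper names are illustrative
assumptions, not a fixed API.

# A simple reflex agent for the two-location vacuum world (squares A and B).
# The percept is a (location, status) pair; the rule table below is an
# illustrative assumption.

def interpret_input(percept):
    # The percept already identifies the state, so no further interpretation is needed.
    return percept

RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    rule_action = RULES[state]          # rule match: state -> action
    return rule_action

# Usage:
print(simple_reflex_agent(("A", "Dirty")))   # Suck
print(simple_reflex_agent(("A", "Clean")))   # Right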
o The Utility-based agent is useful when there are multiple possible alternatives, and an agent
has to choose in order to perform the best action.
o The utility function maps each state to a real number to check how efficiently each action
achieves the goals.
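As a hedged sketch of this idea, the Python fragment below maps each candidate resulting state
to a real number with a hypothetical utility function and picks the action whose outcome scores
highest; the state encoding and utility values are assumptions for illustration only.

# Utility-based action selection (illustrative assumption: utility = number of clean squares).

def utility(state):
    return sum(1 for status in state.values() if status == "Clean")

def best_action(actions_with_outcomes):
    # actions_with_outcomes maps each action to the state it would produce.
    return max(actions_with_outcomes, key=lambda a: utility(actions_with_outcomes[a]))

outcomes = {
    "Suck":  {"A": "Clean", "B": "Dirty"},
    "Right": {"A": "Dirty", "B": "Dirty"},
}
print(best_action(outcomes))   # Suck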
5. Learning Agents
o A learning agent in AI is the type of agent which can learn from its past experiences, or it has
learning capabilities.
o It starts acting with basic knowledge and is then able to act and adapt automatically through
learning.
o A learning agent has mainly four conceptual components, which are:
a. Learning element: It is responsible for making improvements by learning from the
environment.
b. Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting actions that will lead
to new and informative experiences.
o Hence, learning agents are able to learn, analyze their performance, and look for new ways to
improve it.
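As a rough structural sketch (all class, method, and rule names here are invented for
illustration, not a standard API), the four components can be wired together as follows:

# A rough structural sketch of a learning agent's four components.

class LearningAgent:
    def __init__(self, rules, performance_standard):
        self.rules = dict(rules)                         # used by the performance element
        self.performance_standard = performance_standard

    def performance_element(self, percept):
        # Selects an external action from the current rules.
        return self.rules.get(percept, "NoOp")

    def critic(self, percept, action):
        # Scores the action against the fixed performance standard.
        return self.performance_standard(percept, action)

    def learning_element(self, percept, action, feedback):
        # Improves the rules when the critic reports poor performance.
        if feedback < 0:
            self.rules[percept] = "NoOp"

    def problem_generator(self):
        # Suggests an exploratory action to gain new, informative experience.
        return "Explore"

# Usage with a toy performance standard:
agent = LearningAgent({"Dirty": "Suck"}, lambda p, a: 1 if a == "Suck" else -1)
action = agent.performance_element("Dirty")
agent.learning_element("Dirty", action, agent.critic("Dirty", action))
print(action, agent.problem_generator())   # Suck Explore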
Some of the problems most popularly solved with the help of artificial intelligence are:
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.
Problem Searching
In general, searching refers to finding the information one needs.
Searching is the most commonly used technique of problem solving in artificial intelligence.
A searching algorithm helps us to search for the solution to a particular problem.
Problem: Problems are the issues which come across any system. A solution is needed to solve a
particular problem.
Search: Searching is a step by step procedure to solve a search-problem in a given search space.
A search problem can have three main factors:
1. Search Space: Search space represents a set of possible solutions, which a system may
have.
2. Start State: It is a state from where agent begins the search.
3. Goal test: It is a function which observes the current state and returns whether the goal
state is achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of the
search tree is the root node, which corresponds to the initial state.
Actions: It gives the description of all the available actions to the agent.
Transition model: A description of what each action does can be represented as a transition
model.
Path Cost: It is a function which assigns a numeric cost to each path.
Solution: It is an action sequence which leads from the start node to the goal node.
Optimal Solution: A solution is optimal if it has the lowest cost among all solutions.
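These ingredients can be captured in a small problem interface, with an uninformed search
routine returning a solution as an action sequence. The sketch below is illustrative only: the
Problem class, its method names, and the choice of breadth-first search are assumptions, not a
prescribed implementation.

# A minimal sketch of a search problem and breadth-first search over it.
from collections import deque

class Problem:
    def __init__(self, initial):
        self.initial = initial                 # start state
    def actions(self, state):                  # available actions in a state
        raise NotImplementedError
    def result(self, state, action):           # transition model
        raise NotImplementedError
    def goal_test(self, state):                # is this a goal state?
        raise NotImplementedError
    def step_cost(self, state, action):        # path cost of a single step
        return 1

def breadth_first_search(problem):
    frontier = deque([(problem.initial, [])])  # (state, action sequence so far)
    explored = {problem.initial}
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path                        # solution: start node -> goal node
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                                # no solution exists

When every step costs 1, the shortest action sequence found this way is also an optimal solution.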
1) Vacuum World
States: The state is determined by both the agent location and the dirt locations. The agent is in
one of the 2 locations, each of which might or might not contain dirt. Thus there are 2*2^2=8
possible world states.
Initial state: Any state can be designated as the initial state.
Actions: In this simple environment, each state has just three actions: Left, Right, and Suck.
Larger environments might also include Up and Down.
Transition model: The actions have their expected effects, except that moving Left in the leftmost
square, moving Right in the rightmost square, and Sucking in a clean square have no effect. The
complete state space is shown in Figure.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
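The vacuum world formulation above can be written out concretely; the encoding below
(state = (agent location, frozenset of dirty squares)) is one illustrative choice and could be
plugged into a search routine such as the breadth-first sketch shown earlier.

# The two-square vacuum world, encoded directly from the formulation above.
# States are (agent location, frozenset of dirty squares):
# 2 locations x 2^2 dirt configurations = 8 states.
from itertools import product

LOCATIONS = ["A", "B"]
DIRT_CONFIGS = [frozenset(), frozenset({"A"}), frozenset({"B"}), frozenset({"A", "B"})]

def all_states():
    return [(loc, dirt) for loc, dirt in product(LOCATIONS, DIRT_CONFIGS)]

def result(state, action):
    # Transition model: Left in the leftmost square, Right in the rightmost square,
    # and Suck in a clean square have no effect.
    loc, dirty = state
    if action == "Left":
        return ("A", dirty)
    if action == "Right":
        return ("B", dirty)
    return (loc, dirty - {loc})                # Suck cleans the current square

def goal_test(state):
    return not state[1]                        # all squares are clean

def path_cost(path):
    return len(path)                           # each step costs 1

print(len(all_states()))                                  # 8
print(result(("A", frozenset({"A", "B"})), "Suck"))       # ('A', frozenset({'B'}))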
2) 8-Puzzle Problem
States: A state description specifies the location of each of the eight tiles and the blank in one of the
nine squares.
Initial state: Any state can be designated as the initial state. Note that any given goal can be reached
from exactly half of the possible initial states.
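One common way to encode such a state (an assumption for illustration, since the notes do not
fix a representation) is a tuple of nine entries read row by row, with 0 standing for the blank:

# An 8-puzzle state as a tuple of 9 entries (0 = the blank square).
# The goal layout below is one common convention, assumed for illustration.

GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def goal_test(state):
    return state == GOAL

def actions(state):
    # The blank can slide Left/Right/Up/Down when it is not on an edge in that direction.
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, action):
    # Transition model: swap the blank with the neighbouring tile in the chosen direction.
    i = state.index(0)
    j = i + {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 2, 5,
         3, 0, 4,
         6, 7, 8)
print(actions(start))                   # ['Left', 'Right', 'Up', 'Down']
print(goal_test(result(start, "Up")))   # False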
3) 8-Queens Problem: