Imp Q and A

The document categorizes agents into five types based on their intelligence and capabilities: Simple Reflex Agents, Model-Based Reflex Agents, Goal-Based Agents, Utility-Based Agents, and Learning Agents. It also discusses algorithms such as Minimax and A* Search, highlighting their properties and applications in problem-solving. Additionally, it covers Constraint Satisfaction Problems (CSP) and various environmental properties affecting agent behavior.


1. TYPES OF AGENTS

Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
• Simple Reflex Agents

• Model-Based Reflex Agents

• Goal-Based Agents

• Utility-Based Agents

• Learning Agents

Simple Reflex Agents

• Simple reflex agents are the simplest agents. They take decisions on the basis of the current percept and ignore the rest of the percept history (past states).

• These agents succeed only in fully observable environments.

• A simple reflex agent does not consider any part of the percept history during its decision and action process.

• The simple reflex agent works on condition-action rules, which map the current state to an action. For example, a room-cleaner agent works only if there is dirt in the room.

• Problems for the simple reflex agent design approach:

o They have very limited intelligence

o They have no knowledge of non-perceptual parts of the current state.

o The rule table is often too big to generate and to store.

o Not adaptive to changes in the environment.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.

Ex: if car-in-front-is-braking then initiate-braking.
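
To make the condition-action rule concrete, here is a minimal Python sketch of a simple reflex vacuum agent; the percept format and action names are illustrative assumptions, not part of the original notes.

# Minimal simple reflex agent: maps the current percept directly to an action.
# The percept format (location, status) and action names are assumptions.
def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g., ("A", "Dirty")
    if status == "Dirty":               # rule: if dirty then suck
        return "Suck"
    elif location == "A":               # rule: if at square A then move right
        return "Right"
    else:                               # rule: if at square B then move left
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck

Note that the agent consults only the current percept; nothing from earlier percepts influences the choice.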


Model-Based Reflex Agents

• A model-based agent can work in a partially observable environment by keeping track of the situation.

• A model-based agent has two important factors:

o Model: It is knowledge about "how things happen in the world," so it is called a Model-based
agent.

o Internal State: It is a representation of the current state based on percept history.

• These agents have the model ("knowledge of the world") and perform actions based on that model.

Updating the agent state requires information about:

o How the world evolves

o How the agent's action affects the world.
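
A minimal sketch of how the internal state and model fit together, assuming a two-square vacuum world; the update rules below are illustrative, not from the notes.

# Model-based reflex agent: keeps an internal state updated from the percept
# history, so it can act sensibly in a partially observable environment.
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}        # internal state: best guess of the world
        self.location = None

    def update_state(self, percept, last_action):
        # Model: "how the agent's action affects the world" - a Suck cleans the
        # square we were on; the fresh percept then updates our world picture.
        if last_action == "Suck" and self.location is not None:
            self.state[self.location] = "Clean"
        self.location, status = percept
        self.state[self.location] = status

    def act(self, percept, last_action=None):
        self.update_state(percept, last_action)
        if self.state[self.location] == "Dirty":
            return "Suck"
        return "Right" if self.location == "A" else "Left"

agent = ModelBasedReflexAgent()
print(agent.act(("A", "Dirty")))  # -> Suck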


Goal-Based Agents

o Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.

o The agent needs to know its goal which describes desirable situations.

o Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.

o They choose an action so that they can achieve the goal.

o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive (see the sketch below).
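
A minimal sketch of the searching a goal-based agent performs: breadth-first search for an action sequence that reaches the goal. The toy one-dimensional world and its transition model are illustrative assumptions.

from collections import deque

# Breadth-first search: returns a sequence of actions from start to goal.
def bfs_plan(start, goal, successors):
    frontier = deque([(start, [])])          # (state, actions taken so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:                    # goal test: desirable situation reached
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None                              # no action sequence achieves the goal

# Toy world: positions 0..4 on a line; the agent can step left or right.
def successors(s):
    return [(a, s + d) for a, d in [("Right", 1), ("Left", -1)] if 0 <= s + d <= 4]

print(bfs_plan(0, 3, successors))  # -> ['Right', 'Right', 'Right']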

Utility-Based Agents

o These agents are similar to goal-based agents but add an extra component of utility measurement ("level of happiness"), which distinguishes them by providing a measure of success at a given state.

o Utility-based agents act based not only on goals but also on the best way to achieve the goal.

o The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.

o The utility function maps each state to a real number to check how efficiently each action achieves the
goals.
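
A small sketch of that choice among alternatives; the states, actions, and utility numbers are illustrative assumptions.

# Pick the action whose resulting state has the highest utility.
def choose_action(state, actions, result, utility):
    # result(state, action) -> next state; utility(state) -> real number
    return max(actions, key=lambda a: utility(result(state, a)))

# Both routes reach the goal; utility distinguishes how well they do so.
result = lambda s, a: {"fast": "arrived_quickly", "safe": "arrived_safely"}[a]
utility = {"arrived_quickly": 0.7, "arrived_safely": 0.9}.get
print(choose_action("start", ["fast", "safe"], result, utility))  # -> safe
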
Learning Agents

o A learning agent in AI is an agent that can learn from its past experiences; it has learning capabilities.

o It starts to act with basic knowledge and is then able to act and adapt automatically through learning.

o A learning agent has mainly four conceptual components, which are:

a. Learning element: It is responsible for making improvements by learning from the environment.

b. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

c. Performance element: It is responsible for selecting external actions.

d. Problem generator: This component is responsible for suggesting actions that will lead to new and
informative experiences.

o Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it; a schematic sketch of the four components follows.
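
This sketch wires the four components together; the concrete learning rule (incremental action-value averaging) and the reward numbers are illustrative assumptions.

import random

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}   # knowledge improved by learning
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        # Selects the external action (greedy on the learned values).
        return max(self.actions, key=self.values.get)

    def problem_generator(self):
        # Suggests exploratory actions that lead to new, informative experiences.
        return random.choice(self.actions)

    def learning_element(self, action, feedback):
        # Improves the agent's knowledge using the critic's feedback.
        self.counts[action] += 1
        self.values[action] += (feedback - self.values[action]) / self.counts[action]

def critic(action):
    # Scores the action against a fixed performance standard (assumed rewards).
    return {"A": 1.0, "B": 0.2}[action]

agent = LearningAgent(["A", "B"])
for _ in range(20):
    a = agent.problem_generator()            # explore
    agent.learning_element(a, critic(a))     # learn from the critic's feedback
print(agent.performance_element())           # -> 'A' (almost surely, after exploring)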

Minimax Algorithm
• It is a backtracking algorithm
• Search the tree to the end

• Assign utility values to terminal nodes

• Find the best move for MAX (on MAX’s turn), assuming:

– MAX will make the move that maximizes MAX’s utility

– MIN will make the move that minimizes MAX’s utility

• In the example game tree, MAX should make the leftmost move (the sketch below uses the standard textbook tree, where this holds)
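
A minimal Python sketch of the procedure just described; the small hand-built game tree is supplied here as an assumption, since the original figure is not included.

# Minimax: search the tree to the end, evaluate terminal nodes, and back the
# values up - MAX maximizes, MIN minimizes.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):       # terminal node: its utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX to move at the root; each inner list is a MIN node over terminal utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # -> 3: the leftmost move is best for MAX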


Minimax Properties
• Complete if tree is finite

• Optimal if playing against an opponent with the same strategy (utility function)

• Time complexity is O(b^m)

• Space complexity is O(bm) (depth-first exploration)

• If we have 100 seconds to make a move

– and can explore 10^4 nodes/second

– we can consider 10^6 nodes/move

• Standard approach is

– Apply a cutoff test (depth limit, quiescence)

– Evaluate nodes at cutoff (evaluation function estimates desirability of position)

Alpha-Beta Pruning
• Cuts off search by exploring fewer nodes

• Typically can only look 3-4 ply in allowable chess time

• Alpha-beta pruning shrinks the search space without sacrificing optimality

– By applying common sense

– If one line of play allows the queen to be captured and a better move is available

– Then don't search further down the bad path

– If one line would be bad for the opponent, ignore that line as well

Maintain an [alpha, beta] window at each node during the depth-first search:

alpha = lower bound, updated at MAX levels

beta = upper bound, updated at MIN levels

If alpha >= beta, prune (stop exploring the remaining successors).
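
A sketch extending the minimax code above with the [alpha, beta] window; the example tree is again an assumed stand-in for the missing figure.

# Alpha-beta pruning: alpha (lower bound) rises at MAX levels, beta (upper
# bound) falls at MIN levels; when alpha >= beta the remaining children are pruned.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):       # terminal node: its utility value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # MIN above will never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                # MAX above already has a better option
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3, same as minimax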


Alpha Beta Properties
• Pruning does not affect final result

• Good move ordering improves effectiveness of pruning

• With perfect ordering, time complexity is O(b^(m/2))

Constraint Satisfaction Problems (CSP)



A constraint satisfaction problem (CSP) requires a value, selected from a given finite domain, to be
assigned to each variable in the problem, so that all constraints relating the variables are satisfied. Many
combinatorial problems in operational research, such as scheduling and timetabling, can be formulated
as CSPs.

A CSP is a standard search problem where, instead of treating the state as a black box, the state is defined by variables and values.

• CSP:

• state is defined by variables X_i with values from domain D_i

• goal test is a set of constraints specifying allowable combinations of values for subsets of variables

Allows useful general-purpose algorithms with more power than standard search algorithms

Varieties of constraints

• Unary constraints involve a single variable,

e.g., SA ≠ green

• Binary constraints involve pairs of variables,

e.g., SA ≠ WA

• Higher-order constraints involve 3 or more variables,

e.g., SA ≠ WA ≠ NT

• Preferences (soft constraints): e.g., red is better than green. They need not be satisfied, but you get credit for satisfying them.

→ Constraint Optimization Problems.

Real-world CSPs
• Assignment problems
e.g., who teaches what class
• Timetabling problems
e.g., which class is offered when and where?
• Transportation scheduling
• Factory scheduling
• Hardware configuration
• Floor planning
Notice that many real-world problems involve real-valued variables.
Examples of CSPs
1. Graph/ Map Coloring
2. Cryptarithmetic Problems
3. Sudoku Problems
4. 4-Queens Problem
5. Puzzles, etc.
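
A minimal backtracking solver for example 1 (map coloring), using the standard Australia map; the naive variable ordering (no heuristics) is an intentional simplification.

# Variables are regions, domains are colors, and the binary constraints
# require adjacent regions to take different colors.
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def consistent(var, color, assignment):
    # Check the binary constraints: no assigned neighbor may share this color.
    return all(assignment.get(n) != color for n in NEIGHBORS[var])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):        # every variable has a value
        return assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        if consistent(var, color, assignment):
            assignment[var] = color
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                  # undo and try the next value
    return None                                  # no value works: backtrack

print(backtrack({}))  # e.g., {'WA': 'red', 'NT': 'green', 'SA': 'blue', ...}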

A* Search
• Uses a heuristic function h(n) and the path cost g(n) to reach node 'n' on the way from the initial state to the goal state
– f(n) = g(n) + h(n)
• Finds the shortest path through the search space.
• Note that UCS and Best-first both improve search
– UCS keeps solution cost low
– Best-first helps find solution quickly
– A* combines these approaches
• It gives fast and optimal results.
• It is optimal and complete.
• It can solve complex problems.
• It requires more memory.

A* Search - Algorithm
i. Put the initial node in the OPEN list.
ii. If OPEN is empty, return failure.
iii. Select the node from OPEN with the smallest value of (g + h). If this node is the goal, return success.
iv. Expand node 'n': generate all successors of the node and compute (g + h) for each successor.
v. If node 'n' is already in OPEN/CLOSED, attach it to the back pointer.
vi. Go to step (iii).
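
A priority-queue sketch of the algorithm above on a toy 3x3 grid; the grid, unit step costs, and Manhattan-distance heuristic (admissible here) are illustrative assumptions.

import heapq

# A*: always expand the OPEN node with the smallest f = g + h.
def a_star(start, goal, neighbors, h):
    open_list = [(h(start), 0, start, [start])]      # (f, g, node, path)
    closed = set()
    while open_list:                                 # step ii: fail if OPEN empties
        f, g, node, path = heapq.heappop(open_list)  # step iii: smallest (g + h)
        if node == goal:
            return path                              # success
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in neighbors(node):            # step iv: expand successors
            if nxt not in closed:
                heapq.heappush(open_list,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

goal = (2, 2)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance

def neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx <= 2 and 0 <= y + dy <= 2]

print(a_star((0, 0), goal, neighbors, h))  # one of the shortest 4-step paths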
Power of f
• If the heuristic function is wrong, it either
– overestimates (guesses too high), or
– underestimates (guesses too low)
• Overestimating is worse than underestimating
• A* returns the optimal solution if h(n) is admissible
– a heuristic function is admissible if it never overestimates the true cost to the nearest goal
– if a search finds the optimal solution using an admissible heuristic, the search is admissible

Environment Properties
• Fully observable vs. Partially observable
– The environment is fully observable if the agent's sensors give it access to the complete state of the environment at each point in time.
• Deterministic vs. Stochastic / strategic
– The environment is deterministic if its next state is completely determined by the current state and the action executed by the agent.
• Episodic vs. Sequential
– In an episodic environment, the agent's experience is divided into atomic episodes.
• Single agent vs. Multi agent
• Static vs. Dynamic
– The environment is dynamic if it can change while the agent is deliberating.
• Discrete vs. Continuous
– The discrete/continuous distinction applies to the state of the environment, to the way time is
handled, and to the percepts and actions of the agent
