Intelligent and Expert System (IE)

An intelligent agent is an autonomous entity that acts upon its environment using sensors and actuators to achieve goals, with a thermostat as an example. The document discusses the components of intelligent systems, differentiates between rational and omniscient agents, and outlines various types of environments in which intelligent agents operate. It also explains the role of autonomy, the PEAS framework for designing intelligent agents, and the importance of perception in decision-making processes.


Intelligent and Expert System

Q1 Define an intelligent agent. Explain the components of an intelligent agent and give an example.
ANS- An intelligent agent is an autonomous entity that acts upon its environment using sensors and actuators to achieve its goals. An intelligent agent may also learn from its environment in pursuit of those goals. A thermostat is a simple example of an intelligent agent.

Components of an Intelligent Agent:


Sensors: A sensor is a device that detects changes in the environment and sends that information to other components. An agent observes its environment through its sensors.
Actuators: Actuators are the components of a machine that convert energy into motion; they are responsible for moving and controlling the system. An actuator can be an electric motor, a gear, a rail, etc.
Effectors: Effectors are the devices that actually affect the environment, such as legs, wheels, arms, fingers, wings, fins, or a display screen.

Examples:
Self-Driving Car: Uses sensors (cameras, radar, lidar) to perceive its
surroundings, and actuators (steering, braking) to navigate and drive safely.
Smart Assistant (e.g., Siri, Alexa): Perceives user commands through voice
recognition and acts by providing information, controlling smart devices, or
performing tasks.
Recommendation System (e.g., Netflix, Amazon): Analyzes user preferences
and behaviors to recommend movies, products, or other content.
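The thermostat example above can be written as a simple percept-to-action mapping. This is a minimal sketch (the function name, target, and tolerance below are illustrative, not from the original example): the sensed temperature is the percept, and the returned command stands in for the actuator signal.

```python
# Minimal sketch of a thermostat as a simple reflex agent.
# The percept is the sensed temperature; the action is a heater command.

def thermostat_agent(sensed_temperature, target=20.0, tolerance=0.5):
    """Map a percept (temperature) to an action for the heater actuator."""
    if sensed_temperature < target - tolerance:
        return "heater_on"
    elif sensed_temperature > target + tolerance:
        return "heater_off"
    return "no_change"

print(thermostat_agent(18.0))  # heater_on
print(thermostat_agent(22.0))  # heater_off
```
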

Q2. Differentiate between a rational agent and an omniscient agent. Why is rationality more practical in AI?
ANS- Rational Agent
Definition: A rational agent is one that takes actions to maximize its
performance measure based on the information it has and its current
environment. It does not need to have complete knowledge of the
environment but aims to act in a way that is expected to yield the best possible
outcome given its current understanding.
Characteristics:
• Limited Knowledge: A rational agent works with the information it
currently possesses, which may be incomplete or imperfect.
• Decision-Making: It uses its knowledge to make decisions that are
expected to lead to the best results, considering its goals and constraints.
• Adaptability: It can learn and adapt its actions based on feedback and
changes in the environment.
Example: A chess-playing AI program that evaluates the current board state
and makes the best move based on its knowledge of chess strategies and
possible outcomes. It doesn’t know the entire future of the game but aims to
make the best move at each turn.
Omniscient Agent
Definition: An omniscient agent is one that has complete and perfect
knowledge of the environment and all possible future states. It knows
everything about the current state, future states, and any uncertainties in the
environment.
Characteristics:
• Complete Knowledge: It has perfect information about the environment,
including all possible outcomes and the effects of every action.
• Optimal Decision-Making: With complete knowledge, an omniscient
agent can make decisions that are guaranteed to be the best possible
since it knows the entire outcome of every possible action.
• Infeasibility in Practice: The concept of an omniscient agent is
theoretical because it assumes complete knowledge which is often
impossible to achieve in real-world scenarios.
Example: An agent that can predict every possible move in a game of chess and
know exactly how every move will affect the game outcome. This level of
knowledge is unrealistic for most real-world applications.
Why Rationality is More Practical in AI
1. Limited Information: In real-world scenarios, complete information
about the environment is rarely available. Rational agents work with
whatever information they have, making decisions that maximize their
performance based on this partial information.
2. Computational Feasibility: An omniscient agent requires a level of
computational power and information processing that is usually
impractical. Rational agents, by contrast, are designed to make good
decisions within the limits of their information and computational
resources.
3. Adaptability: Rational agents can adapt and improve their performance
based on experience and learning. They can handle uncertainties and
incomplete knowledge more effectively than an omniscient agent, which
is a theoretical construct with fixed, perfect knowledge.
4. Practicality: Designing an AI system that operates under the assumption
of omniscience is not feasible for most practical applications. Rationality
allows AI systems to be useful and effective in real-world environments
where uncertainties and limited information are common.

Q3. What are the different types of environments that intelligent agents operate in?
ANS- As per Russell and Norvig, an environment can have various features
from the point of view of an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible

1. Fully Observable vs. Partially Observable
• Fully Observable: The agent has complete information about the
environment at all times. It can see everything it needs to make a
decision.
o Example: A chess game where the agent (or player) can see the
entire board and all the pieces.
• Partially Observable: The agent only has limited or incomplete
information about the environment. It might need to make decisions
based on guesses or past experiences.
o Example: A self-driving car can't always see what's behind a
building or around a corner, making the environment partially
observable.
2. Static vs. Dynamic
• Static: The environment remains unchanged while the agent is making
decisions. The world only changes when the agent acts.
o Example: A crossword puzzle where the state of the game doesn’t
change unless the player writes something.
• Dynamic: The environment changes on its own, even if the agent does
nothing. The agent must keep up with these changes.
o Example: A stock trading system where prices fluctuate constantly,
even when no trades are made by the agent.
3. Discrete vs. Continuous
• Discrete: The environment has a limited number of distinct states or
actions that the agent can choose from.
o Example: A board game like Monopoly, where you can only move
to specific spaces.
• Continuous: The environment has a range of possible states or actions,
often involving real numbers.
o Example: A robot navigating through a room where it can move in
any direction and to any point, not just predefined spots.
4. Deterministic vs. Stochastic
• Deterministic: The outcome of an action is always predictable. If the
agent performs a certain action, the result is always the same.
o Example: A math calculation where 2 + 2 always equals 4.
• Stochastic: The outcome of an action can vary, even if the agent
performs the same action multiple times. There's an element of
randomness.
o Example: Rolling a die in a game where the result can be any
number between 1 and 6, even if you roll it the same way.
5. Single-agent vs. Multi-agent
• Single-agent: The agent operates alone in the environment, without any
other agents affecting its decisions.
o Example: A puzzle-solving game where you’re the only player.
• Multi-agent: The agent operates in an environment where other agents
are present, and they can cooperate or compete with each other.
o Example: A soccer game where multiple players (agents) work
together on a team or compete against another team.
6. Episodic vs. Sequential
• Episodic: The agent's actions are divided into episodes, where each
episode is independent of the others. What happens in one episode
doesn’t affect the others.
o Example: A series of independent image recognition tasks where
each image is processed separately.
• Sequential: The current action influences future actions, and the agent’s
decisions build on one another over time.
o Example: Playing chess, where each move affects future moves
and the overall outcome of the game.
7. Known vs. Unknown
• Known: The agent knows the rules of the environment, including how
actions lead to outcomes. The agent is aware of how the environment
works.
o Example: A video game where the rules and objectives are clearly
defined from the start.
• Unknown: The agent does not know the rules or how its actions will
affect the environment. It must learn or explore to figure out how things
work.
o Example: A new maze where the agent has to discover the layout
and the rules for escaping.
8. Accessible vs. Inaccessible
• Accessible: The agent has access to all the relevant information needed
to make decisions.
o Example: An agent navigating a map with all landmarks and routes
clearly marked.
• Inaccessible: The agent lacks access to some crucial information and
must make decisions with incomplete data.
o Example: A weather prediction system that doesn’t have access to
all global weather data, relying instead on limited regional data.

Q4 Explain the role of autonomy in intelligent agents. How does an autonomous agent differ from a non-autonomous one?
ANS- Role of Autonomy in Intelligent Agents
1. Independent Decision-Making: An autonomous agent can make its own
decisions based on its goals, knowledge, and perception of the
environment. It doesn’t rely on human input to make choices about what
actions to take.
2. Adaptation and Learning: Autonomous agents can adapt their behavior
based on experience and feedback from their environment. This means
they can improve their performance over time and handle new
situations more effectively.
3. Real-Time Operation: These agents can act in real-time, responding to
changes in the environment without waiting for external commands. This
is crucial for tasks that require immediate responses, like autonomous
driving or real-time monitoring systems.
4. Goal-Oriented Behavior: Autonomy allows agents to pursue their goals
and objectives independently. They are designed to act in a way that
maximizes their effectiveness in achieving their predefined goals.
Difference Between Autonomous and Non-Autonomous Agents
Autonomous Agents
• Self-Governed: Autonomous agents operate independently, making
decisions and taking actions on their own based on their programming
and the information they gather from their environment.
• Decision-Making: They have the capability to make decisions without
needing continuous input or oversight from humans.
• Examples:
o Self-Driving Cars: They navigate roads, avoid obstacles, and make
driving decisions without human intervention.
o Robotic Vacuum Cleaners: They clean floors autonomously,
avoiding obstacles and deciding the cleaning path on their own.
Non-Autonomous Agents
• Dependent on Human Input: Non-autonomous agents require
continuous human input or supervision to make decisions and take
actions. They often perform predefined tasks based on explicit
instructions given by users.
• Limited Decision-Making: They follow a set of rules or commands and
do not have the capability to adapt or make decisions beyond their
programmed instructions.
• Examples:
o Traditional Software Tools: These may require users to manually
enter commands or make decisions, such as spreadsheets where
users input data and formulas.
o Manual Remote-Controlled Robots: These require an operator to
control every action, like using a remote to drive a toy car.

Q5 Describe the PEAS framework used in designing intelligent agents. Apply this to a real-world example, such as a self-driving car.
ANS- PEAS Representation
PEAS is a model used to describe the task environment of an AI agent. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is an acronym for four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here, the performance measure is the objective criterion for the success of the agent's behavior.
PEAS for self-driving cars:
Let's suppose a self-driving car then PEAS representation will be:
Performance: Safety, time, legal drive, comfort
Environment: Roads, other vehicles, road signs, pedestrians
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
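The four PEAS elements can also be recorded as a simple data structure. This is a minimal sketch (the class and field names are illustrative, not a standard API), populated with the self-driving-car description above:

```python
# Minimal sketch: a PEAS description held in a dataclass.
from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)

print(self_driving_car.sensors[0])  # camera
```
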
Q6. What is perception in the context of artificial intelligence? How
does it impact an agent’s decision-making process?
ANS- In AI, perception is like the agent's way of "seeing" and "hearing" what’s
happening around it. It involves collecting information from the environment
so the agent can understand and react to it.
How Perception Works:
1. Collecting Information:
o Sensors: These are like the agent’s eyes and ears. For example, a
camera for seeing and a microphone for hearing.
2. Understanding the Information:
o Processing: The agent takes the raw data (like pictures or sounds)
and figures out what it means. For instance, recognizing a stop
sign or detecting a person.
3. Making Decisions:
o Action: Based on what it "sees" or "hears," the agent decides what
to do. For example, if the agent sees a stop sign, it might decide to
stop moving.
Impact on Decisions:
• Better Decisions: Good perception helps the agent make better choices
because it has accurate information.
• Quick Reactions: If the agent can quickly understand its surroundings, it
can react faster, like avoiding obstacles on the road.
• Learning and Adapting: Over time, the agent learns from its experiences
and improves how it perceives and reacts.
Example:
Self-Driving Car:
• Sensors: The car uses cameras and radar to "see" the road, other cars,
and pedestrians.
• Processing: It figures out what’s in front of it and where things are.
• Decision: It decides whether to turn, stop, or go based on what it "sees."
Q7 Explain the concept of sensors and actuators in AI. How
do they contribute to the perception and action cycle of an
intelligent agent?
ANS- Sensors
Definition: Sensors are devices or systems that gather information from the
environment. They act like the "senses" of the agent, allowing it to perceive
and understand what's happening around it.
Function:
• Data Collection: Sensors collect data such as images, sounds,
temperatures, or distances.
• Information Processing: This data is then processed to build a picture of
the environment. For example, cameras capture visual information,
while microphones pick up sounds.
Examples:
• Cameras: For visual input, like detecting objects or reading signs.
• Microphones: For capturing audio, such as recognizing spoken
commands.
• Lidar: For measuring distances and creating a 3D map of surroundings.
Contribution to Perception:
• Sensors provide the raw data needed for the agent to understand its
environment. This is the first step in the perception cycle, where the
agent "sees" or "hears" what is happening around it.
Actuators
Definition: Actuators are devices or mechanisms that enable the agent to take
actions based on its decisions. They are like the "muscles" of the agent,
allowing it to interact with and change its environment.
Function:
• Action Execution: Actuators perform actions such as moving, turning, or
manipulating objects.
• Response to Decisions: They carry out the decisions made by the agent
based on the information from sensors.
Examples:
• Motors: For moving parts, like wheels in a robot or car.
• Servos: For precise movements, like adjusting a camera angle.
• Brakes and Accelerators: For controlling speed and stopping in vehicles.
Contribution to Action:
• Actuators implement the agent's decisions and change its environment.
This is the final step in the action cycle, where the agent acts based on its
understanding of the environment.
Perception and Action Cycle
1. Perception:
o Sensors gather data from the environment.
o Processing: The data is interpreted to understand the current
situation (e.g., recognizing an obstacle in the path).
2. Decision-Making:
o The agent makes a decision based on the processed information
(e.g., deciding to avoid the obstacle).
3. Action:
o Actuators execute the decision by performing actions (e.g.,
steering the car around the obstacle).
4. Feedback Loop:
o The agent continuously repeats this cycle, using sensors to
perceive changes in the environment, making decisions, and acting
to adjust its behavior.
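The four steps above can be sketched as a loop. This is a minimal illustration (the function names and the obstacle example are hypothetical, not from the document): sense, decide, act, then repeat with fresh percepts.

```python
# Minimal sketch of the perception-action cycle as a feedback loop.

def sense(world):
    return world["obstacle_ahead"]          # percept from a sensor

def decide(obstacle_ahead):
    return "steer_around" if obstacle_ahead else "go_straight"

def act(world, action):
    if action == "steer_around":
        world["obstacle_ahead"] = False     # actuator changes the environment
    return world

world = {"obstacle_ahead": True}
for _ in range(2):                          # two passes of the cycle
    action = decide(sense(world))
    world = act(world, action)
print(world["obstacle_ahead"])  # False
```
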

Q8 Describe a perceptual cycle for a robotic vacuum cleaner. How does it use perception to navigate its environment?
ANS- A perceptual cycle for a robotic vacuum cleaner describes how it
continuously gathers information from its surroundings, processes that
information, makes decisions, and takes action to navigate and clean a room.
Perceptual Cycle of a Robotic Vacuum Cleaner
1. Perception (Sensing the Environment)
o The robotic vacuum cleaner uses various sensors to gather
information about its surroundings:
▪ Infrared sensors: Detect obstacles like walls or furniture.
▪ Bump sensors: Register when the vacuum physically bumps
into something.
▪ Cliff sensors: Prevent it from falling down stairs by sensing
drop-offs.
▪ Dirt sensors: Identify areas with more dirt or dust to focus
on.
These sensors help the robot "see" its environment, including where it can
move, where obstacles are, and which areas need more cleaning.
2. Processing Information
o The data from the sensors is processed by the robot's onboard
computer. This processing includes:
▪ Obstacle detection: Determining where walls or objects like
furniture are located.
▪ Path planning: Figuring out the most efficient route to clean
the area without hitting obstacles.
▪ Dirt detection: Recognizing which spots need extra cleaning
based on how much dirt is detected.
The robot uses this information to create a real-time map of the environment
and make decisions about its next action.
3. Decision-Making
o After processing the sensor data, the robot makes decisions about
what to do next:
▪ Avoiding obstacles: If it senses an obstacle, it will decide to
change direction.
▪ Adjusting path: It chooses a new path if an area is blocked
or if it has already been cleaned.
▪ Focusing on dirtier spots: If it detects a high concentration
of dirt, it might go over the area multiple times to clean it
thoroughly.
4. Action (Actuating Movement)
o The robotic vacuum then uses its actuators to carry out the actions
decided by its onboard computer:
▪ Motors: Control the wheels to move the vacuum in different
directions (forward, backward, turn).
▪ Brushes and suction: Start or adjust cleaning functions to
pick up dirt.
For example, if the robot senses an obstacle in front of it, it will steer away and
choose a new direction.
5. Feedback Loop
o The vacuum cleaner continuously repeats this cycle. As it moves, it
constantly uses its sensors to gather new information about its
surroundings.
o If the environment changes (e.g., a chair is moved), the vacuum
updates its perception, makes new decisions, and takes
appropriate actions.
How the Robotic Vacuum Cleaner Uses Perception to Navigate
• Obstacle Avoidance: The vacuum uses sensors like infrared and bump
sensors to detect walls, furniture, and other obstacles. When it senses an
obstacle, it adjusts its path to avoid collisions.
• Path Optimization: The robot scans its environment and calculates the
most efficient path to clean the area, ensuring it covers the room
without getting stuck or missing spots.
• Stair and Cliff Detection: Cliff sensors detect any drop-offs, like stairs,
preventing the robot from falling. If it detects a cliff, it changes direction.
• Adaptive Cleaning: Dirt sensors help the vacuum identify dirtier spots
and spend more time cleaning those areas, adjusting its behavior based
on real-time feedback.
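The decision-making step above can be sketched as a priority-ordered rule set. This is a minimal illustration (the rules and the dirt threshold are hypothetical simplifications, not an actual vacuum controller): cliff safety is checked first, then obstacle avoidance, then dirt-focused cleaning.

```python
# Minimal sketch of the vacuum's decision step, given sensor readings.

def vacuum_decide(cliff, bump, dirt_level):
    if cliff:
        return "reverse"       # cliff sensor: back away from a drop-off
    if bump:
        return "turn"          # bump sensor: steer away from an obstacle
    if dirt_level > 5:
        return "spot_clean"    # dirt sensor: linger on a dirty patch
    return "forward"

print(vacuum_decide(cliff=False, bump=True, dirt_level=2))  # turn
```
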

Q9 What are the main challenges in Natural Language Processing (NLP)? Discuss with examples.
ANS- Natural Language Processing (NLP) faces several challenges, primarily
due to the complexity, ambiguity, and diversity of human language. Here are
some key challenges, along with examples:
1. Ambiguity
• Challenge: Words or sentences can have more than one meaning.
• Example: In search engines, the word "bank" could mean a financial
institution or the side of a river. The system needs to decide the correct
meaning based on context, like when someone searches for "river bank"
or "bank account."
2. Context Understanding
• Challenge: Machines struggle to understand how context changes the
meaning of words.
• Example: Virtual assistants like Siri or Alexa might misunderstand
requests without context. If you say, "Turn it off," they need to know
what "it" refers to—like the TV or the lights—from previous interactions.
3. Sentiment Analysis
• Challenge: Detecting emotion in language, especially sarcasm, is hard.
• Example: On social media, a comment like "I just love getting stuck in
traffic" is sarcastic, but a sentiment analysis tool might mistakenly read it
as a positive statement, thinking "love" means the user is happy.
4. Speech Recognition
• Challenge: Machines can struggle to understand spoken words, especially when
people have different accents or there's background noise.
• Example: Speech recognition might confuse "bare" and "bear" because
they sound the same.

5. Named Entity Recognition (NER)
• Challenge: Identifying names of people, companies, and places can be
tough when words overlap with common terms.
• Example: In news articles, the word "Apple" can refer to the tech
company or the fruit. NER systems must correctly classify this based on
the sentence—like recognizing "Apple launched a new product" as the
company.

Q Explain the difference between syntax, semantics, and pragmatics in NLP. Provide an example of each.
ANS- In Natural Language Processing (NLP), syntax, semantics, and pragmatics
are key aspects that help machines understand and process human language.
Here’s how they differ:
1. Syntax
• Definition: Syntax refers to the structure and rules that govern how
words are arranged to form sentences. It focuses on grammar, word
order, and sentence formation.
• Example: In English, the sentence "The dog chased the cat" follows
proper syntax because the subject ("The dog") comes before the verb
("chased"), and the object ("the cat") follows the verb. A sentence like
"Chased the cat the dog" is syntactically incorrect.
2. Semantics
• Definition: Semantics is about the meaning of words and sentences. It
focuses on understanding what a sentence means.
• Example: The sentence "Colorless green ideas sleep furiously" is
syntactically correct, but semantically it doesn’t make sense because the
words "colorless" and "green" contradict each other, and ideas can't
"sleep furiously."
3. Pragmatics
• Definition: Pragmatics is about understanding the real meaning behind
what someone says, based on the situation or context.
• Example: If someone says, "Can you pass the salt?" in a dinner setting,
the pragmatic meaning is not about the ability to pass the salt but rather
a polite request to pass it. The literal meaning would be a question about
physical ability, but pragmatics helps us understand the actual request.

Q What is the difference between informed and uninformed search strategies? Provide an example of each.
ANS- Uninformed (blind) search strategies use no domain knowledge beyond the problem definition; they explore the search space systematically, guided only by the structure of the tree or graph.
Informed (heuristic) search strategies use additional domain knowledge, in the form of a heuristic function that estimates the distance to the goal, to guide the search toward promising nodes.
Example of uninformed search: Breadth-first search, which expands nodes level by level regardless of where the goal lies.
Example of informed search: A* search, which expands the node with the lowest f(n) = g(n) + h(n), where h(n) is a heuristic estimate of the remaining cost to the goal.
Q List and briefly explain the different types of uninformed search
strategies. Which one is the most efficient in terms of memory
usage?
ANS- Uninformed/Blind Search:
An uninformed search does not use any domain knowledge, such as the closeness or location of the goal. It operates in a brute-force way: it only has information about how to traverse the tree and how to identify leaf and goal nodes. Because the search tree is explored without any information about the search space (such as the initial state, operators, or goal test), it is also called blind search. It examines nodes of the tree until it reaches a goal node.
It can be divided into six main types:

o Breadth-first search
o Uniform cost search
o Depth-first search
o Depth-limited search
o Iterative deepening depth-first search
o Bidirectional search

BREADTH-FIRST SEARCH:

o Breadth-first search is the most common search strategy for traversing a tree or graph. Because this algorithm searches breadthwise in a tree or graph, it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.

EXAMPLE
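As a minimal sketch of the FIFO-queue behavior described above (the graph below is an illustrative toy example, not the figure from the original notes), BFS can be written as:

```python
# Minimal sketch of breadth-first search: a FIFO queue of paths.
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # expand oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": ["G"]}
print(bfs(graph, "S", "G"))  # ['S', 'B', 'D', 'G']
```
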
Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called depth-first search because it starts from the root node and follows each path to its greatest depth before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is otherwise similar to the BFS algorithm.

EXAMPLE-

In the search tree below, we have shown the flow of depth-first search, and it will follow the order:

Root node ---> Left node ---> Right node.

It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack, as E has no other successor and the goal node has not yet been found. After backtracking, it will traverse node C and then G, where it terminates because it has found the goal node.
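The S, A, B, D, E, C, G traversal described above can be reproduced with an explicit stack. This is a minimal sketch (the adjacency list is a reconstruction of the tree described in the text):

```python
# Minimal sketch of depth-first search: a LIFO stack of paths.

def dfs(graph, start, goal):
    stack = [[start]]                     # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()                # expand deepest path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push children in reverse so the leftmost child is explored first.
        for neighbor in reversed(graph.get(node, [])):
            stack.append(path + [neighbor])
    return None

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"],
         "D": [], "E": [], "G": []}
print(dfs(graph, "S", "G"))  # ['S', 'C', 'G']
```
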
Depth-Limited Search Algorithm:
A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search solves the infinite-path drawback of depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.

Depth-limited search can terminate with two conditions of failure:

o Standard failure value: It indicates that the problem does not have any solution.
o Cutoff failure value: It indicates that there is no solution to the problem within the given depth limit.

Example:
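As a minimal sketch of the two failure values (the graph and the `"cutoff"` sentinel below are illustrative choices, not a standard convention), depth-limited search can be written recursively:

```python
# Minimal sketch of depth-limited search. A node at the depth limit is
# treated as having no successors; "cutoff" and None distinguish the
# cutoff failure value from the standard failure value.

def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                   # cutoff failure value
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None   # None = standard failure

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dls(graph, "A", "D", limit=2))  # ['A', 'B', 'D']
print(dls(graph, "A", "D", limit=1))  # cutoff
```
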
Uniform-cost Search Algorithm:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node that has the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where an optimal-cost path is required. Uniform-cost search is implemented using a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

Example:
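The priority-queue behavior described above can be sketched with `heapq` (the weighted graph below is an illustrative toy example):

```python
# Minimal sketch of uniform-cost search: a priority queue ordered by
# cumulative path cost, expanding the cheapest path first.
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]      # (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            heapq.heappush(frontier,
                           (cost + step_cost, neighbor, path + [neighbor]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
print(ucs(graph, "S", "G"))  # (5, ['S', 'B', 'G'])
```

Note that the direct-looking route through A costs 6, but UCS correctly returns the cheaper route through B at cost 5.
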

Iterative deepening depth-first Search:

The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.

This algorithm performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found.

This search algorithm combines the benefits of breadth-first search's fast (shallowest-first) search and depth-first search's memory efficiency.

Iterative deepening is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs several iterations until it finds the goal node. The iterations performed by the algorithm are:

1st iteration – A

2nd iteration – A B C

3rd iteration – A B D E C F G

4th iteration – A B D H I E C F K G

In the fourth iteration, the algorithm will find the goal node.
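The iterations above can be sketched as repeated depth-limited DFS calls. This is a minimal illustration (the adjacency list reconstructs the example tree, with K as the goal):

```python
# Minimal sketch of iterative deepening DFS: run depth-limited DFS
# with limits 0, 1, 2, ... until the goal is found.

def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        found = depth_limited(graph, child, goal, limit - 1, path + [child])
        if found:
            return found
    return None

def iddfs(graph, start, goal, max_depth=10):
    for limit in range(max_depth + 1):    # gradually increase the limit
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["H", "I"], "F": ["K"]}
print(iddfs(graph, "A", "K"))  # ['A', 'C', 'F', 'K']
```
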

Bidirectional Search Algorithm:

The bidirectional search algorithm runs two simultaneous searches: one from the initial state, called the forward search, and one from the goal node, called the backward search. Bidirectional search replaces one large search graph with two smaller subgraphs, one starting the search from the initial vertex and the other from the goal vertex. The search stops when the two frontiers intersect.

Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Example:
In the search tree below, the bidirectional search algorithm is applied. This algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction.

The algorithm terminates at node 9, where the two searches meet.
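A minimal sketch of the idea, using BFS from both ends (the adjacency list below is an assumed undirected path 1-2-3-9-10-16 standing in for the figure; real bidirectional search needs reversible edges for the backward half):

```python
# Minimal sketch of bidirectional search: two BFS frontiers, expanded
# a layer at a time; the search stops where the frontiers intersect.
from collections import deque

def bidirectional(graph, start, goal):
    if start == goal:
        return start
    f_front, b_front = {start}, {goal}
    f_queue, b_queue = deque([start]), deque([goal])
    while f_queue and b_queue:
        for queue, own, other in ((f_queue, f_front, b_front),
                                  (b_queue, b_front, f_front)):
            for _ in range(len(queue)):   # expand one full layer
                node = queue.popleft()
                for nb in graph.get(node, []):
                    if nb in other:
                        return nb         # the two searches meet here
                    if nb not in own:
                        own.add(nb)
                        queue.append(nb)
    return None

# Undirected chain 1-2-3-9-10-16 (an assumed stand-in for the figure).
graph = {1: [2], 2: [1, 3], 3: [2, 9], 9: [3, 10], 10: [9, 16], 16: [10]}
print(bidirectional(graph, 1, 16))  # 9
```
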

Most Efficient in Terms of Memory Usage:

• Depth-First Search (DFS) is the most memory-efficient, because it only needs to store a single path from the root to a leaf node, plus the unexplored siblings of each node on that path. This means its memory use is proportional to the depth of the search tree rather than its breadth.

Q Explain the A* search algorithm. How does it use heuristics to guide the search process?

ANS- A* Algorithm-
• A* Algorithm is one of the best and popular techniques used for
path finding and graph traversals.
• A lot of games and web-based maps use this algorithm for finding
the shortest path efficiently.
• It is essentially a best first search algorithm.

The A* algorithm extends the path that minimizes the following function:

f(n) = g(n) + h(n)

Here,

• 'n' is the last node on the path
• g(n) is the cost of the path from the start node to node 'n'
• h(n) is a heuristic function that estimates the cost of the cheapest path from node 'n' to the goal node

Algorithm-

The implementation of the A* algorithm involves maintaining two lists: OPEN and CLOSED.

• OPEN contains those nodes that have been evaluated by the heuristic function but have not yet been expanded into successors.
• CLOSED contains those nodes that have already been visited.
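The OPEN list maps naturally onto a priority queue ordered by f(n) = g(n) + h(n). This is a minimal sketch (the toy graph and the heuristic table `h` are illustrative assumptions, with h chosen to be admissible for this graph):

```python
# Minimal sketch of A*: OPEN is a heap ordered by f = g + h,
# CLOSED is the set of already-expanded nodes.
import heapq

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in closed:
                g2 = g + cost
                heapq.heappush(open_list,
                               (g2 + h[neighbor], g2, neighbor,
                                path + [neighbor]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 5)]}
h = {"S": 7, "A": 6, "B": 4, "G": 0}
print(a_star(graph, h, "S", "G"))  # (8, ['S', 'A', 'B', 'G'])
```

Here the heuristic steers the search through A and B (total cost 8) rather than the expensive direct edge A→G (which would cost 13).
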

Q Compare and contrast depth-first search (DFS) and breadth-first search (BFS). What are the strengths and weaknesses of each?
ANS- Both BFS and DFS are uninformed strategies, but they traverse the search tree differently.
BFS explores the tree level by level using a FIFO queue.
• Strengths: It is complete (it will find a solution if one exists) and, when all step costs are equal, it finds the shallowest (optimal) solution.
• Weaknesses: It can require a very large amount of memory, since it must store every node on the current frontier, which grows exponentially with depth.
DFS follows each path to its greatest depth using a stack before backtracking.
• Strengths: It is very memory-efficient, storing only the current path and the unexplored siblings along it.
• Weaknesses: It is not complete on infinite-depth search trees (it can follow an infinite path forever) and does not guarantee the shallowest solution.
