
MODULE 1

An agent can be broadly defined as any entity that interacts with its environment by receiving input
through sensors and performing actions through actuators. These interactions are essential for the
agent to achieve its goals. Agents can range from simple programs to complex robots.
The environment refers to the external world in which the agent operates and interacts. It provides
the context or scenario within which the agent perceives, learns, and acts.

What are the key characteristics of AI Agents?


AI agents are designed to operate in complex environments and perform tasks intelligently.
1. Autonomy:
o AI agents can operate independently without requiring constant human supervision.
They can make decisions and take actions autonomously based on their programming
and interaction with the environment.
o Example: An autonomous robot performing routine maintenance in a factory.
2. Perception:
o They can sense and interpret their environment through various sensors like cameras,
microphones, or other devices. This enables them to understand the state of their
surroundings and act accordingly.
o Example: A self-driving car using cameras and LIDAR to detect road conditions and
traffic.
3. Reactivity:
o AI agents can assess their environment in real-time and respond dynamically to
changes to achieve their goals.
o Example: A robotic vacuum cleaner avoiding obstacles like furniture while cleaning.
4. Reasoning and Decision-Making:
o They use reasoning techniques and algorithms to analyze data and make informed
decisions that align with their objectives.
o Example: A recommendation engine analyzing user preferences to suggest
personalized content.
5. Learning:
o AI agents incorporate machine learning, deep learning, or reinforcement learning to
improve their performance over time by learning from interactions and feedback.
o Example: A chatbot improving its responses based on user interactions.
6. Communication:
o AI agents can communicate effectively with humans or other agents through natural
language processing, speech recognition, or message exchange.
o Example: Virtual assistants like Siri or Alexa understanding and responding to voice
commands.
7. Goal-Oriented Behavior:
o They are designed to achieve specific objectives, which can be pre-defined or learned
through their interactions with the environment.
o Example: A delivery drone ensuring packages are delivered to the correct locations
efficiently.
8. Adaptivity:
o AI agents can adapt to new environments, situations, or challenges by learning and
modifying their behavior. This allows them to remain effective even in dynamic or
uncertain conditions.
o Example: An e-commerce recommendation system adjusting its suggestions based on
changing user preferences.
9. Rationality:
o AI agents strive to make rational decisions, maximizing their performance based on
available information and their goals. They aim to act in ways that achieve the best
possible outcomes.
o Example: A chess-playing AI choosing moves that maximize its chances of winning
based on strategic evaluation.

Importance of AI Agents:
AI agents are transformative for modern industries and society due to their ability to handle complex
tasks and provide intelligent solutions. Here’s a detailed look at their importance:
1. Automation:
o AI agents automate repetitive and mundane tasks like data entry, customer support,
and basic analysis, allowing humans to focus on higher-value activities. This reduces
operational costs and increases productivity.
o Example: Chatbots handling customer inquiries, allowing support teams to focus on
complex issues.
2. Scalability:
o AI agents can scale effortlessly to manage increasing workloads or customer
interactions without compromising quality and performance.
o Example: Virtual assistants in e-commerce handling thousands of queries
simultaneously during peak shopping seasons.
3. Improved Efficiency:
o AI agents enhance efficiency, productivity and safety in fields like robotics,
transportation, and manufacturing.
o Example: Self-driving cars navigating urban environments safely while reducing
human errors.
4. Increased Sales and Profitability:
o AI agents use data-driven insights to personalize customer experiences, increasing
engagement and conversion rates.
o Example: AI-driven recommendation systems in online stores increasing average
order values.
5. Predictive Analytics:
o AI agents excel in predictive analytics, enabling better planning and decision-making
in fields like healthcare and finance.
o AI agents analyze historical and real-time data to predict trends and outcomes, aiding
in proactive decision-making.
o Example: Predicting patient admission rates or identifying potential disease outbreaks
to improve resource allocation.
6. Cost Savings:
o Automating tasks and reducing errors leads to significant cost savings for businesses.
o Example: AI-powered fraud detection systems reducing financial losses in banking.
7. Multi-Agent Collaboration:
o Specialized AI agents can collaborate to solve complex problems more efficiently
than a single agent could, improving overall outcomes.
o Example: In logistics, multiple AI agents coordinate to optimize supply chain
operations.
8. Enhanced Decision-Making:
o AI agents analyze large volumes of data to provide actionable insights, helping
organizations make informed decisions.
o Example: Business intelligence tools suggesting optimal inventory levels based on
historical sales data.
9. Innovation and New Capabilities:
o AI agents enable the development of new products and services that were previously
infeasible.
o Example: AI systems generating realistic art, music, or design prototypes.
10. Safety and Risk Reduction:
o AI agents perform hazardous tasks, reducing risks to human workers in dangerous
environments.
o Example: Robots inspecting and repairing pipelines in extreme conditions.
Define PEAS and explain different agent types with their PEAS descriptions.
PEAS is an acronym that stands for Performance Measure, Environment, Actuators, and Sensors.
It is a framework used in artificial intelligence to design and analyze intelligent agent systems.
A worked example follows the four components below.
1. Performance Measure (P):
This component defines the criteria for evaluating the success or effectiveness of the
agent's behavior. It specifies the goals or objectives that the agent is trying to achieve. The
performance measure provides a quantitative measure of how well the agent is
performing in its environment.
2. Environment (E):
The environment represents the external system or context within which the agent
operates. It encompasses all the elements, properties, and dynamics that the agent interacts
with, perceives, and affects through its actions. The environment defines the context in which
the agent's behavior is evaluated.
3. Actuators (A):
Actuators are the mechanisms or devices through which the agent can influence or modify
the environment. They enable the agent to perform actions or operations that affect the
state of the environment. Actuators could include physical actuators such as motors or
effectors, as well as virtual or simulated actions in software-based systems.
4. Sensors (S):
Sensors are the mechanisms or devices through which the agent perceives or gathers
information about the state of the environment. They enable the agent to sense and
observe relevant aspects of the environment, such as objects, events, or conditions. Sensors
provide input to the agent, allowing it to make informed decisions and take appropriate
actions.
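
As a concrete illustration, here is the PEAS description of an automated taxi driver, a standard textbook example, written as a small Python structure. The field values are illustrative, not exhaustive:

from dataclasses import dataclass

@dataclass
class PEAS:
    """PEAS description of an agent's task environment."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# Illustrative PEAS description of an automated taxi driver.
automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "profit"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "engine sensors"],
)

print(automated_taxi.performance_measure)   # the criteria the taxi is judged by
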
Concept of Rationality in Intelligent Agents
Rationality in intelligent agents refers to the ability of the agent to make decisions and take actions
that maximize its success in achieving specified goals, based on the information it has and the
constraints it faces. Rational agents aim to achieve the best possible outcomes given their knowledge,
beliefs, and the evidence available to them. Rationality is evaluated based on the agent's ability to
achieve its goals effectively and efficiently.
The concept of rationality refers to the quality of making decisions and taking actions that are logical,
reasonable, and aligned with specific goals or principles. In the context of AI, rationality involves an
intelligent agent behaving in a way that maximizes its performance measure, given its knowledge,
perceptions, and constraints. It focuses on selecting actions expected to yield the most favorable
outcomes under the given circumstances.
Performance Measure:
Rationality is evaluated based on the consequences of the agent's behavior in achieving its goals. The
performance measure defines the criteria for success, such as the number of clean squares in a given
environment for a vacuum cleaner, or the overall utility of the actions taken by the agent.
Agent's Behavior:
A rational agent is one that selects actions expected to maximize its performance measure, given its
knowledge about the environment, the available actions, and the percept sequence received so far. The
agent aims to achieve the best possible outcomes based on the evidence provided by its percepts and its
built-in knowledge.
Prior Knowledge and Perception:
Rationality depends on the agent's prior knowledge of the environment and its ability to perceive and
interpret sensory information accurately. The agent's actions are guided by its understanding of the
environment and its perception of the current state.
Adaptability:
A rational agent may need to adapt its behavior over time based on changes in the environment or new
information received through perceptions. It may explore different actions and strategies to maximize
its performance measure in dynamic or uncertain environments.
Optimality:
Rationality does not necessarily imply achieving perfect or optimal outcomes in all situations. Instead,
it involves selecting actions that are expected to lead to the most favorable outcomes given the
available information and constraints. The agent's rationality is evaluated relative to its goals and the
context of the environment.
Uncertainty Handling:
Rational agents often deal with incomplete or uncertain information about their environment. They use
probabilistic reasoning and decision theory to act in the face of such uncertainty.
Optimality vs. Practicality
In the context of rationality, optimality refers to selecting actions that achieve the best possible
outcomes based on the performance measure. However, real-world constraints like limited time or
computational resources often make achieving perfect optimality impractical. Practicality involves
finding solutions that are "good enough" to meet goals efficiently within these constraints.
Goal Orientation
Goal orientation ensures that a rational agent’s actions are aligned with its predefined objectives. The
agent prioritizes tasks and decisions that directly contribute to achieving these goals while adapting to
changes in the environment or new information.
Discuss the different types of environments that agents can interact with.
1. Fully Observable vs. Partially Observable:
o In a fully observable environment, the agent has complete knowledge of the
environment's state at any given time. This means the agent can directly perceive all
relevant information needed to make decisions.
o In a partially observable environment, the agent's knowledge of the environment is
limited or incomplete. The agent may only have access to partial information about
the environment, requiring it to maintain beliefs or hypotheses about the unobserved
aspects of the environment.
o Fully observable environments simplify decision-making, whereas partially
observable environments demand reasoning under uncertainty.
2. Single Agent vs. Multiagent:
o In a single-agent environment, there is only one autonomous agent that interacts
with the environment. The agent's actions influence the environment, but there are no
other agents present to influence the agent's decision-making.
o In a multiagent environment, there are multiple autonomous agents, each with its
own goals and behaviors, interacting with each other and the environment. The
actions of one agent may affect the actions and outcomes of other agents. Multiagent
environments require coordination, competition, or cooperation strategies.
3. Deterministic vs. Stochastic:
o A deterministic environment provides predictable outcomes for actions. Given the
same initial conditions and action, the environment's state transition is always the
same, which makes it easier to model.
o In contrast, a stochastic environment introduces randomness or uncertainty into the
outcomes of actions. The same action taken under identical conditions may lead to
different outcomes with certain probabilities.
4. Episodic vs. Sequential:
o An episodic environment consists of self-contained episodes where the agent's actions
and outcomes are independent of previous episodes. Each episode starts anew, and
there is no dependency between them.
o In a sequential environment, the agent's actions and outcomes are interdependent
across time steps. The agent's actions in one time step may affect subsequent states
and outcomes, leading to a sequential decision-making process that requires planning
and foresight.
5. Static vs. Dynamic:
o In a static environment, the elements and properties of the environment remain
constant over time. There are no changes or external influences that affect the
environment's state or dynamics.
o A dynamic environment undergoes changes or transitions over time. These
changes can be due to external factors, agent actions, or inherent dynamics of the
environment, requiring the agent to adapt and respond to evolving conditions.
6. Discrete vs. Continuous:
o A discrete environment has a countable state space, action space, or time. The
environment's elements and properties are distinct and finite.
o A continuous environment has an uncountable state space, action space, or time. The
environment's elements and properties exist on a continuous scale, making it
challenging to discretize or represent them explicitly.
7. Known vs. Unknown:
o In a known environment, the agent has complete knowledge of the environment's
properties, dynamics, and rules. The agent can accurately predict the consequences of
its actions.
o In an unknown environment, the agent has limited or incomplete information about
the environment. The agent may need to explore, learn, and update its knowledge
through interactions to make informed decisions and achieve its goals.
---------------------------------

Minimax Algorithm
The minimax algorithm is a fundamental decision rule in game theory and artificial intelligence,
especially for two-player, zero-sum games. It ensures optimal decision-making by assuming both
players play perfectly. The algorithm alternates between minimizing and maximizing values as it
evaluates the game tree.
How Minimax Works:
1. Terminal State Evaluation:
o Assign utility values to terminal nodes of the game tree based on the outcome for the
maximizing player.
2. Minimization and Maximization:
o At MIN nodes (opponent’s turn), choose the lowest value among child nodes.
o At MAX nodes (player's turn), select the highest value among child nodes, aiming to
maximize the score.
3. Backing Up Values:
o Propagate these values upward through the tree, assigning a minimax value to each
node.
4. Optimal Move Selection:
o At the root node, the maximizing player selects the branch leading to the child node
with the highest minimax value, ensuring the best possible outcome against a perfect
opponent.
Example of minimax (the game-tree figure is omitted; the tree has a MAX root with three MIN
children B, C, and D, each with terminal children):

Step-by-Step Process:
1. Terminal State Evaluation:
o The algorithm starts by evaluating the utility values of the three bottom-left nodes
(children of node B).
o Utility values for these nodes are determined using the UTILITY function:
Values: 3, 12, and 8
2. Minimization:
o At node B, which is a MIN node (opponent's turn), the minimum value is selected:
Min(3, 12, 8) = 3
o This value (3) is "backed up" to node B.
3. Repeat for Other Nodes:
o A similar process is applied to nodes C and D:
 For node C, the child node utility values yield a minimum value of 2.
 For node D, the minimum value is also 2.
4. Maximization at Root:
o At the root node (a MAX node, player's turn), the algorithm selects the maximum
value among the backed-up values from its children (B, C, D):
Max(3, 2, 2) = 3
o The value 3 becomes the backed-up value of the root node.

The root node's final minimax value is 3, indicating that the maximizing player should choose the
move leading to node B for the optimal outcome. This process ensures rational decision-making based
on the assumption of perfect play by both players.
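
The following is a minimal Python sketch of minimax on the same tree. The tree is encoded as nested lists of terminal utilities; the child values assumed for nodes C and D are illustrative, chosen only so that their minima match the values stated above:

def minimax(node, is_max):
    """Return the minimax value of a node.

    A node is either a terminal utility (an int) or a list of child nodes;
    MAX and MIN levels alternate as the recursion descends.
    """
    if isinstance(node, int):          # terminal state: return its utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# The example tree: a MAX root over MIN nodes B, C, and D.
B = [3, 12, 8]    # MIN backs up min(3, 12, 8) = 3
C = [2, 4, 6]     # illustrative children whose minimum is 2
D = [14, 5, 2]    # illustrative children whose minimum is 2
print(minimax([B, C, D], is_max=True))   # prints 3: MAX should move toward B
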
------------------
List the four basic types of agent programs in any intelligent system and explain how to convert
them into learning agents.
Four Basic Types of Agent Programs
1. Simple Reflex Agents:
o These agents act based on condition-action rules, reacting to the current percept
without considering past states.
2. Model-Based Reflex Agents:
o These agents maintain an internal state that tracks unobservable aspects of the
environment, enabling more informed decisions.
3. Goal-Based Agents:
o These agents select actions based on their ability to achieve specific goals. They
consider the outcomes of actions and plan accordingly.
4. Utility-Based Agents:
o These agents optimize actions to maximize a utility function, weighing trade-offs to
achieve the most desirable outcomes.
Converting Agent Programs into Learning Agents
To convert these agents into learning agents, a learning component must be added that enables them
to improve their performance over time through experience. This involves four main components (a
skeletal sketch follows the list):
1. Performance Element:
o The component responsible for selecting actions based on percepts (unchanged from
the original agent design).
2. Learning Element:
o Responsible for improving the agent's performance based on feedback and
experience. It improves the agent’s knowledge and strategies.
3. Critic:
o Evaluates the agent's performance against a fixed standard, providing feedback to the
learning element. It helps identify areas for improvement.
4. Problem Generator:
o Suggests exploratory actions to discover new knowledge or optimize performance,
enabling the agent to test and refine its strategies.
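
Below is a skeletal Python sketch, with illustrative names rather than a prescribed design, showing how the four components cooperate on each step:

class LearningAgent:
    """Skeleton wiring together the four components of a learning agent."""

    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # maps a percept to an action
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # scores behavior against a fixed standard
        self.problem_generator = problem_generator      # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                             # evaluate the latest outcome
        self.learning_element(self.performance_element, feedback)   # refine knowledge and strategies
        exploratory = self.problem_generator(percept)               # occasionally suggest an experiment
        if exploratory is not None:
            return exploratory
        return self.performance_element(percept)
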
Application to Agent Types:
 Simple Reflex Agent:
o Add a learning element that updates condition-action rules based on feedback.
o Example: A thermostat that learns optimal temperature settings based on user
preferences.
 Model-Based Reflex Agent:
o Enhance the internal state representation by incorporating learning to improve
predictions about the environment.
o Example: A self-driving car that learns patterns in traffic flow to improve route
planning.
 Goal-Based Agent:
o Enable the agent to learn new ways to achieve its goals or refine its planning
strategies based on past successes and failures.
o Example: A robot learning more efficient paths to complete tasks in a warehouse.
 Utility-Based Agent:
o Add a learning element to optimize the utility function dynamically based on
observed outcomes.
o Example: A recommendation system adapting its utility function to account for
changing user preferences.
-------------------------------------
STRUCTURE OF AGENT

The task of AI is to design an agent program which implements the agent function. The structure of an
intelligent agent is a combination of architecture and agent program.

Agent = Architecture + Agent program

1. Architecture:

- Architecture is the underlying machinery or computational framework on which the agent operates.
This can include hardware components, software platforms, communication protocols, etc.
- The architecture provides the necessary resources and capabilities for the agent to perceive its
environment, process information, and act upon it.

2. Agent Function (f):

- The agent function is the mapping between percepts (inputs from the environment) and actions
(outputs or decisions made by the agent).
- The agent function determines how the agent interprets sensory information and selects appropriate
actions based on its goals and objectives.

3. Agent Program:
- The agent program is the concrete implementation of the agent function within the chosen
architecture.
- It consists of the actual code or algorithms that execute on the architecture to perceive, reason, and
act within the environment.
- The agent program may incorporate various AI techniques and algorithms to process sensory data,
make decisions, and adapt to changing conditions. A minimal illustration is given below.
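
As a concrete, if naive, illustration of the distinction: the sketch below implements a table-driven agent program for a hypothetical two-location vacuum world, storing the agent function f explicitly as a lookup table. The table entries are illustrative assumptions:

def table_driven_agent_program(table):
    """Agent program realizing the agent function as an explicit table from
    whole percept sequences to actions (simple, but grows impractically fast)."""
    percepts = []

    def program(percept):
        percepts.append(percept)                 # remember the full percept sequence
        return table.get(tuple(percepts), "no-op")

    return program

# Illustrative table for a two-location vacuum world with squares A and B.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move-right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = table_driven_agent_program(table)
print(agent(("A", "clean")))    # -> 'move-right'
print(agent(("B", "dirty")))    # -> 'suck' (matches the two-percept entry)
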

---------------------------------

Describe the main components and architectures of intelligent agents (e.g., simple reflex
agents, model-based agents, goal-based agents). Elaborate on each of them with a
suitable diagram, with advantages, limitations, and applications.

SIMPLE REFLEX AGENT

Simple reflex agents operate based on condition-action rules, typically of the form "if condition, then
action."
Their effectiveness, however, is limited to fully observable environments where the agent has complete
knowledge of the current state.
They operate on a very simple principle: if a certain condition is observed in the current state of the
environment (as perceived by the agent), then take a specific action. They have no memory and no
ability to consider past experiences or future consequences of their actions.
These agents consider only the current percept, the information received from the environment
through sensors at a specific point in time. A minimal code sketch follows the components list below.

Components:
 Sensors: Perceive the current state of the environment.
 Actuators: Execute predefined actions based on condition-action rules.
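
A minimal Python sketch of a simple reflex vacuum-cleaner agent, with the condition-action rules written directly as code; the two-location world is an illustrative assumption:

def simple_reflex_vacuum_agent(percept):
    """Decide using only the current percept, via condition-action rules."""
    location, status = percept
    if status == "dirty":        # rule: if dirty, then suck
        return "suck"
    if location == "A":          # rule: if at A and clean, then move right
        return "move-right"
    return "move-left"           # rule: if at B and clean, then move left

print(simple_reflex_vacuum_agent(("A", "dirty")))   # -> 'suck'
print(simple_reflex_vacuum_agent(("B", "clean")))   # -> 'move-left'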

Advantages of Simple Reflex Agents


 Simplicity: Easy to design and implement.
 Speed: Fast decision-making due to straightforward condition-action mapping.
 Efficiency: Suitable for fully observable and well-defined environments.
Limitations:
 Ineffective in partially observable or complex environments.
 Limited Intelligence: Incapable of handling complex behaviors or adapting to dynamic
environments.
 Lack of Knowledge: Operate only on current percepts without memory of past or future
states of environment.
 Non-Adaptive: Cannot learn or modify behavior based on environmental changes.
Applications:
 Thermostats.
 Simple robotic systems (e.g., line-following robots).

MODEL-BASED REFLEX AGENT

A model-based reflex agent is an intelligent agent that uses an internal model of the environment to
make decisions and take actions. It uses the percept history together with this internal model to decide
what to do.
The agent maintains an internal state that represents its current understanding of the environment's
state. This internal state is updated based on the agent's percepts and actions, and it provides a
structured representation of both the observable and unobservable aspects of the environment.
When faced with a new percept, the model-based agent consults its internal model to determine the best
course of action. It evaluates possible actions based on their expected outcomes and selects the action
that is most likely to achieve its goals or improve its situation. A minimal sketch follows the
components list below.

Components:
 Internal State: Tracks changes in the environment.
 Sensors and Actuators: Similar to simple reflex agents.
 Update Function: Updates the internal state based on new percepts.
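
A minimal sketch showing the update function folding each percept into the internal state before the rules fire; the two-square world and rules are illustrative assumptions:

class ModelBasedReflexVacuumAgent:
    """Maintains an internal model of which squares are clean in a two-square world."""

    def __init__(self):
        self.model = {"A": "unknown", "B": "unknown"}   # internal state

    def update_state(self, percept):
        location, status = percept
        self.model[location] = status                   # fold the new percept into the model

    def act(self, percept):
        self.update_state(percept)
        location, _ = percept
        if self.model[location] == "dirty":
            return "suck"
        other = "B" if location == "A" else "A"
        # Consult the model: visit the other square only if it may still be dirty.
        return "move-to-" + other if self.model[other] != "clean" else "no-op"

agent = ModelBasedReflexVacuumAgent()
print(agent.act(("A", "clean")))   # -> 'move-to-B', since B's status is unknown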

Advantages:
 Internal State Tracking: Can handle partially observable environments by maintaining an
internal model.
 Reasoning Ability: Better decision-making through understanding of environmental
dynamics.
 Flexibility: Adaptable to changes in the environment over time.
Limitations:
 Complexity: Higher computational demands to update and maintain the internal state.
 Resource Intensive: Requires more memory and processing power than simple reflex agents.
 Limited Planning: Cannot explicitly plan for long-term goals.
Applications:
 Self-driving cars.
 Industrial process monitoring systems.
GOAL-BASED AGENTS

Goal-based agents are intelligent agents that operate with a clear understanding of their objectives or
goals. They are equipped with knowledge about the desirable outcomes they aim to achieve.
Goal-based agents select actions based on their potential to move closer to achieving the specified
goals. The agent evaluates available actions and chooses those that are most likely to lead to goal
attainment.
Goal-based agents engage in searching and planning processes to identify the most effective path
towards goal achievement, choosing each action so that it brings them closer to the goal (see the
search sketch after the components list).

Components:
 Goal Information: Defines desired outcomes.
 Decision Logic: Evaluates actions based on their contribution to goals.
 Sensors, Actuators, and Internal State: Shared with model-based agents.
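
The search sketch referenced above: the agent plans an action sequence that reaches its goal using breadth-first search. The room map is an illustrative assumption:

from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search for an action sequence that reaches the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:                        # the goal test drives action selection
            return path
        for action, nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                                  # goal unreachable

# Illustrative four-room map: plan a route from room A to the goal room D.
edges = {"A": [("go-B", "B"), ("go-C", "C")], "B": [("go-D", "D")],
         "C": [("go-D", "D")], "D": []}
print(plan_to_goal("A", "D", lambda s: edges[s]))   # -> ['go-B', 'go-D']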

Advantages:
 Purposeful Actions: Decisions are directed toward achieving specific goals.
 Flexibility: Can adapt strategies based on the goal and changing conditions.
 Search and Planning: Supports advanced decision-making through goal evaluation and
planning.

Limitations:
 Computationally Expensive: Search and planning can require significant time and resources.
 Goal Dependence: Performance heavily depends on the clarity and specificity of goals.
 Inflexibility in Ambiguity: Struggles when goals are vague or conflicting.

Applications:
 Autonomous delivery robots.
 Navigation systems.

UTILITY-BASED AGENT
A utility-based agent is a type of intelligent agent that not only considers goals but also takes into
account the desirability, or utility, of different states in achieving those goals. It evaluates actions
based on a utility function representing the desirability of outcomes.

Utility-based agents incorporate a utility function that assigns a real number (utility) to each possible
state or outcome. This utility represents the desirability or preference of the agent for that state. The
utility function allows the agent to evaluate the consequences of its actions and choose the one that
maximizes overall utility. A minimal sketch follows the components list below.

Components:
 Utility Function: Assigns value to different states or outcomes.
 Sensors, Actuators, Internal State, and Goal Information: Shared with goal-based agents.
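
The sketch referenced above: a hypothetical utility function assigns a real number to each predicted outcome, and the agent selects the action that maximizes it. The outcome model and weights are illustrative assumptions:

# Hypothetical outcome model for a route choice.
outcomes = {
    "highway":  {"speed": 0.9, "risk": 0.6},
    "backroad": {"speed": 0.5, "risk": 0.1},
}

def utility(outcome):
    """Assign a real number expressing the desirability of an outcome."""
    return 0.7 * outcome["speed"] - 0.3 * outcome["risk"]   # illustrative weights

def choose_action(actions):
    """Select the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(outcomes[a]))

print(choose_action(["highway", "backroad"]))   # -> 'highway' (0.45 vs. 0.32)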

Advantages:
 Optimal Decision-Making: Considers trade-offs to maximize utility across conflicting goals.
 Comprehensive Evaluation: Evaluates actions based on the desirability of outcomes.
 Adaptability: Can handle uncertain and dynamic environments effectively.

Limitations:
 Utility Function Design: Defining a suitable utility function can be complex.
 High Computational Demands: Requires significant processing for evaluating and
comparing utilities.
 Scalability Issues: Challenging to compute utilities in large and complex environments.

Applications:
 Recommendation systems.
 Financial trading algorithms.

LEARNING AGENT

A learning agent improves its performance over time by learning from its interactions with the
environment. Its learning element is responsible for making improvements to the agent's behavior: it
receives feedback from the environment and updates the agent's knowledge or policy accordingly.

Components:
1. Performance Element: Executes actions.
2. Learning Element: Improves agent behavior.
3. Critic: Provides feedback on performance.
4. Problem Generator: Encourages exploration for learning.
Advantages:
 Adaptability: Can improve performance over time by learning from interactions with the
environment.
 Exploration: Capable of discovering new strategies and adapting to unknown situations.
 Continuous Improvement: Enhances decision-making through feedback and experience.
Limitations:
 Initial Performance: May perform poorly initially as it requires time to learn effectively.
 Data Dependency: Relies on sufficient and high-quality data for training and improvement.
 Resource Intensive: Requires computational power and memory for learning algorithms and
feedback processing.
----------------------------------------

What are zero-sum games?


A zero-sum game is a type of game in which the total gain and loss among all players add up to zero.
This means one player's gain is exactly balanced by the losses of the other players.
 Example: Chess or poker, where one player's win implies the other player's loss of an
equivalent value.

What are Stochastic Games?


Stochastic games are an extension of game theory that includes randomness in state transitions. They
combine elements of Markov decision processes (MDPs) with multi-agent decision-making.
 Characteristics:
o The game consists of a sequence of states.
o Players choose actions, and the next state is determined probabilistically.
o Rewards are distributed based on the current state and actions.
 Example: Multiplayer board games with dice rolls influencing the state transitions.
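
A minimal sketch of the probabilistic transition just described: given a state and a joint action (one action per player), the next state is sampled from a transition distribution. The states, actions, and probabilities are illustrative assumptions:

import random

# Illustrative transition model for one state/joint-action pair.
transitions = {
    ("s0", ("advance", "block")): [("s1", 0.7), ("s0", 0.3)],
}

def next_state(state, joint_action):
    """Sample the next state from the transition distribution."""
    results = transitions[(state, joint_action)]
    states, probs = zip(*results)
    return random.choices(states, weights=probs, k=1)[0]

print(next_state("s0", ("advance", "block")))   # 's1' with probability 0.7, else 's0'
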
Write and explain the Alpha-Beta algorithm with an example. Elaborate on alpha-beta pruning.
What do you understand by forward pruning?
Alpha-Beta Algorithm
The Alpha-Beta Pruning algorithm is an optimization technique for the minimax algorithm used in
game search trees. It reduces the number of nodes evaluated while ensuring the same optimal decision
is reached as the original minimax algorithm. The algorithm introduces two bounds:
 Alpha (α): The best value found so far for the maximizing player (MAX).
 Beta (β): The best value found so far for the minimizing player (MIN).
The effectiveness of alpha–beta pruning is highly dependent on the order in which the states are
examined.
How Alpha-Beta Pruning Works
1. Depth-First Search:
o The algorithm performs a depth-first traversal of the game tree.
2. Pruning Condition:
o Stop exploring a branch if it cannot influence the final decision (when α ≥ β).
3. Efficiency:
o Alpha-Beta Pruning improves efficiency by discarding branches that do not
contribute to the final outcome.
Algorithm sketch and worked example:
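Since the original algorithm and example figures are not reproduced here, the following minimal Python sketch runs alpha-beta pruning on the same illustrative tree used in the minimax example. It returns the same root value, 3, while skipping branches that cannot affect the result:

import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning.

    alpha: best value guaranteed to MAX so far on this path.
    beta:  best value guaranteed to MIN so far on this path.
    """
    if isinstance(node, int):                  # terminal state: return its utility
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                  # MIN will never let play reach here
                break                          # prune the remaining children
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                  # MAX already has a better option
                break                          # prune the remaining children
        return value

# Same illustrative tree as before: a MAX root over MIN nodes B, C, and D.
root = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(root, is_max=True))   # -> 3; C's remaining children are pruned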

Benefits of Alpha-Beta Pruning


 Efficiency: Reduces the effective branching factor, allowing deeper searches in the same
time.
 Scalability: Enables handling of complex games like chess and Go.
 Move Ordering: Improves pruning efficiency when good moves are evaluated first.
Forward Pruning
Forward pruning is a technique in game tree search that involves intentionally skipping the
exploration of some branches of the tree, based on heuristic judgments about their likelihood of
being relevant to the final decision. Unlike Alpha-Beta pruning, which guarantees the same result as a
full minimax search by pruning only irrelevant branches, forward pruning makes a deliberate trade-off
between accuracy and efficiency.

How Forward Pruning Works


1. Heuristic Selection:
o The algorithm evaluates the importance or promise of each branch based on
predefined heuristics (e.g., estimated utility, position strength, or game-specific
factors).
o Only a limited number of "promising" branches are explored further.
2. Thresholding:
o Some implementations use thresholds or limits to discard branches that fall below a
certain score or are less promising than others.
3. Application:
o Forward pruning is commonly used in situations where computational resources are
limited, such as real-time decision-making in games or dynamic systems.
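A minimal sketch of one simple forward-pruning scheme, beam-style pruning: children are ranked by a heuristic and only the most promising few are searched further. The heuristic and beam width are illustrative assumptions, and, unlike alpha-beta pruning, the result is not guaranteed to be optimal:

def beam_minimax(node, is_max, heuristic, beam_width=2):
    """Minimax with forward pruning: expand only the beam_width best-looking children."""
    if isinstance(node, int):
        return node
    ranked = sorted(node, key=heuristic, reverse=is_max)   # heuristically rank the children
    kept = ranked[:beam_width]                             # branches outside the beam are skipped
    values = [beam_minimax(c, not is_max, heuristic, beam_width) for c in kept]
    return max(values) if is_max else min(values)

def heuristic(node):
    """Illustrative estimate: a terminal's value, or the mean of a subtree's leaves."""
    if isinstance(node, int):
        return node
    scores = [heuristic(c) for c in node]
    return sum(scores) / len(scores)

root = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(beam_minimax(root, is_max=True, heuristic=heuristic))   # -> 3 here, but may miss the optimum in general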
Advantages:
 Reduces computation further in large game trees.
 Speeds up decision-making in real-time applications.
Disadvantages:
 Risk of missing the optimal decision as some branches are ignored entirely.
