Chapter 2 - Intelligent Agents
Agent: An entity that perceives the environment and acts upon it.
Environment: The external world in which the agent operates, which provides input
(percepts) and reacts to the agent’s actions.
Rational Agent: An agent that acts to achieve the best outcome based on its knowledge and
abilities. Rationality depends on the performance measure that defines success, the agent's
prior knowledge of the environment, the actions available to the agent, and the agent's
percept sequence to date.
A rational agent chooses actions that are expected to maximize its performance based on the
percepts and knowledge it has.
Perfect Rationality: Achieving the best possible outcome, considering all possible actions.
Bounded Rationality: Agents that make reasonable decisions given the limits of time,
information, and resources.
4. Types of Agents
Simple Reflex Agents: These agents respond to specific stimuli with predefined rules
(condition-action rules). Example: A thermostat that turns on the heater when it senses
cold.
Model-based Agents: These agents maintain an internal model of the world and base
their decisions on past percepts as well as the current one.
Goal-based Agents: These agents act to achieve a defined goal, beyond just reacting to
stimuli.
Utility-based Agents: These agents try to maximize their overall utility, or the expected
value of their actions.
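The thermostat mentioned above is a canonical simple reflex agent. A minimal sketch, assuming an illustrative threshold of 20 °C, shows how a single condition-action rule maps a percept directly to an action:

```python
# A simple reflex agent: a thermostat driven by one condition-action rule.
# The threshold value is an assumption chosen for illustration.

def thermostat_agent(percept_temperature, threshold=20.0):
    """Condition-action rule: if the sensed temperature is cold, heat."""
    if percept_temperature < threshold:
        return "heater_on"
    return "heater_off"

print(thermostat_agent(15.0))  # cold percept -> heater_on
print(thermostat_agent(25.0))  # warm percept -> heater_off
```

Note that the agent keeps no state: identical percepts always produce identical actions, which is exactly what distinguishes it from the model-based agents above.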
Summary
Intelligent agents perceive the environment and act in a way that maximizes success
based on rational decisions.
Their structure consists of sensors, actuators, and an agent program.
Different agent types offer varying degrees of complexity, from simple reflex agents to
advanced learning agents.
This overview introduces the foundational concepts of intelligent agents. Each type of agent has
a role depending on the complexity of tasks it is designed to perform, and understanding these
distinctions is key in designing AI systems.
Key Terms:
Rational Agent
Percept
Actuator
Utility
Learning Agent
Chapter 3 – Problem Solving
Problem Solving by searching
Problem solving agents
Problem Formulation
Search strategies
In AI, problem-solving often involves searching through various possibilities to find a solution.
An intelligent agent must find a sequence of actions that leads from the initial state to a goal
state. The process of systematically exploring possible options is called searching.
Key Components: the initial state, the available actions, the goal test, and the path cost.
2. Problem-Solving Agents
A problem-solving agent is a type of intelligent agent that makes decisions by searching for
sequences of actions leading to desirable outcomes.
Characteristics:
They are goal-driven: The agent takes actions to achieve a specific goal.
They operate in a well-defined problem space where the initial state, goal state, and actions are
clearly defined.
They follow a process that typically includes formulating a problem, searching for a solution,
and executing the solution.
The agent uses a search algorithm to explore the problem space and find a path to the goal.
3. Problem Formulation
Problem formulation is the first step in problem-solving. It involves clearly defining the
problem by specifying the initial state, the available actions and their effects (the
transition model), the goal test, and the path cost function.
4. Search Strategies
Search strategies are methods used by agents to explore the problem space. They can be
categorized into two main types:
Uninformed (blind) search: These strategies have no additional information about the problem
beyond the initial state, goal state, and possible actions. Examples include breadth-first
search and depth-first search.
Informed (heuristic) search: These strategies use additional knowledge (heuristics) to guide
the search more efficiently toward the goal. Examples include greedy best-first search and A*
search.
5. Game Playing (Adversarial Search)
In many cases, AI agents are designed to compete in games, which are a special type of search
problem. The challenge is not only to find a solution but to win against an adversary. Key
techniques include:
1. Minimax Algorithm:
o Used for two-player games where one player tries to maximize the score and the
opponent tries to minimize it.
o It ensures the best possible result against an optimal adversary.
2. Alpha-Beta Pruning:
o An optimization technique for the minimax algorithm that cuts off branches that won't
affect the final decision, improving efficiency without changing the outcome.
3. Evaluation Function:
o In complex games, it's not feasible to search all the way to terminal states. Instead,
evaluation functions estimate the desirability of intermediate states, helping guide
decisions.
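The minimax and alpha-beta ideas above can be sketched together. The game tree below is a hypothetical 3-ply example given as nested lists, with leaf values representing utilities for the MAX player:

```python
import math

# Minimax with alpha-beta pruning over a small game tree given as nested
# lists; numeric leaves are terminal utilities for the MAX player.

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    if isinstance(node, (int, float)):       # terminal state
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # prune remaining siblings
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # 3
```

Pruning cuts off siblings that cannot change the result (e.g. once the second MIN node sees the leaf 2, its remaining children are skipped), so the value returned is identical to plain minimax.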
Summary
Key Terms:
Initial State
Goal State
Breadth-First Search
Depth-First Search
Heuristic
Minimax
Alpha-Beta Pruning
Chapter 4 – Knowledge and Reasoning
Logical Agents
Propositional Logic
Knowledge representation
Knowledge-based systems
1. Logical Agents
Logical agents are a type of intelligent agent that makes decisions based on formal logic. They
use knowledge-based systems to represent information about the world and apply logical
reasoning to draw conclusions and choose actions.
2. Propositional Logic
Propositional logic is a simple but powerful form of logic used to represent facts about the
world. It works with propositions—statements that are either true or false.
Basic Elements:
Propositions: Simple statements, like "It is raining" (denoted as P), which can be either
true or false.
Logical connectives: These are operators that combine propositions:
o AND (∧): True if both propositions are true.
o OR (∨): True if at least one proposition is true.
o NOT (¬): Reverses the truth value of a proposition.
o Implication (→): If the first proposition is true, then the second must be true.
o Biconditional (↔): True if both propositions have the same truth value.
Truth tables are used to show the truth values of complex propositions under every
combination of truth values of their components. For example, P → Q is false only when P is
true and Q is false.
Inference in propositional logic involves applying rules to derive new facts from known facts.
Two important inference methods are Modus Ponens (from P and P → Q, infer Q) and resolution.
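The implication connective can be checked mechanically. This small sketch enumerates all truth-value assignments and prints the truth table for P → Q, using the standard definition (false only when P is true and Q is false):

```python
from itertools import product

# Print the truth table for implication (P -> Q).
# Implication is defined as: not P, or Q.

def implies(p, q):
    return (not p) or q

rows = []
for p, q in product([True, False], repeat=2):
    rows.append((p, q, implies(p, q)))
    print(f"P={p!s:5}  Q={q!s:5}  P->Q={implies(p, q)!s}")
```

The same enumeration idea extends to any connective: evaluate the formula under every assignment and record the results row by row.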
3. Knowledge Representation
Knowledge representation refers to how information about the world is structured and stored in
an AI system. It is essential for enabling reasoning, understanding, and decision-making.
4. Knowledge-Based Systems
Knowledge-based systems (KBS) are AI systems that reason and make decisions based on a
structured set of knowledge. They use a knowledge base combined with an inference engine to
derive new information or make decisions.
Key components:
1. Knowledge Base: A collection of facts and rules about the world. This includes:
o Declarative knowledge: Statements about what is true (e.g., "All birds can fly").
o Procedural knowledge: Instructions on how to perform tasks (e.g., "How to
calculate an area").
2. Inference Engine: The part of the system that applies logical reasoning to the knowledge
base. It uses rules of inference to deduce new facts or determine actions. Inference
engines work in two primary ways:
o Forward chaining: Starts with the known facts and applies inference rules to
extract more data until it reaches the goal or conclusion.
o Backward chaining: Starts with the goal and works backward by finding rules
that could lead to the goal and checking whether their conditions are met.
3. User Interface: The interface through which users interact with the system, providing
input and receiving output.
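Forward chaining, as described above, can be sketched in a few lines. The rule base below is hypothetical (the classic croaks/frog toy example); each rule is a pair of premises and a conclusion:

```python
# Forward chaining over a small hypothetical rule base. Each rule is a
# (premises, conclusion) pair; facts is the initial knowledge base.

def forward_chain(rules, facts):
    """Repeatedly fire rules whose premises all hold until nothing new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)          # derive a new fact
                changed = True
    return facts

rules = [
    ({"croaks", "eats_flies"}, "frog"),
    ({"frog"}, "green"),
]
derived = forward_chain(rules, {"croaks", "eats_flies"})
print(sorted(derived))  # ['croaks', 'eats_flies', 'frog', 'green']
```

Backward chaining would run the same rules in the opposite direction: start from a goal such as "green" and recursively check whether some rule's premises can be established.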
Applications of knowledge-based systems include:
Expert Systems: Designed to mimic human experts in specific domains, such as medical
diagnosis or legal reasoning. They apply domain-specific knowledge to make decisions.
Decision Support Systems: Used in businesses to support decision-making based on a
structured set of rules or knowledge.
Summary
Logical agents use formal logic to reason and make decisions based on a knowledge
base.
Propositional logic deals with true/false statements and uses logical connectives to build
more complex expressions.
Knowledge representation is crucial for structuring and storing information that AI
systems use to reason, including methods like propositional logic, first-order logic, and
semantic networks.
Knowledge-based systems combine a knowledge base and inference engine to draw
conclusions or solve problems, with applications in expert systems and decision-making.
Key Terms:
Logical Agent
Propositional Logic
Knowledge Base
First-Order Logic
Inference Engine
Expert System
Chapter 5 – Uncertain Knowledge and Reasoning
Quantifying Uncertainty
Probabilistic Reasoning
Probabilistic Reasoning over time
Making simple decisions
Making complex decisions
1. Quantifying Uncertainty
In many real-world situations, agents must make decisions without having complete or perfect
knowledge. Uncertainty arises due to incomplete information, unpredictable outcomes, or the
complexity of the environment. To handle this, we use probabilistic methods to quantify
uncertainty and make decisions based on likelihood rather than certainty.
Key concepts include probability (a number between 0 and 1 measuring how likely an event is),
prior probability (belief before evidence is observed), conditional probability (belief given
observed evidence), and random variables.
2. Probabilistic Reasoning
Bayesian Networks (also known as belief networks) are a key tool for probabilistic reasoning:
they represent random variables as nodes in a directed acyclic graph, with edges encoding
conditional dependencies between them.
Bayes' Theorem is a critical rule for updating beliefs based on new evidence:
Bayes' Theorem formula:
P(A|B) = P(B|A) · P(A) / P(B)
Where:
o P(A|B) is the probability of event A given that event B has occurred.
o P(B|A) is the probability of event B given that event A has occurred.
o P(A) and P(B) are the unconditional (marginal) probabilities of events A and B.
Bayes' Theorem allows agents to update their beliefs about an event when new data or evidence
becomes available.
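A minimal numeric sketch makes the update concrete. The numbers below are hypothetical: a test with 90% sensitivity and a 5% false-positive rate, applied to a condition with a 1% prior probability:

```python
# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B), where P(B) is computed
# by total probability over the two ways the evidence B can occur.
# All numbers are illustrative assumptions, not real data.

def bayes_posterior(prior, likelihood, false_positive_rate):
    p_b = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_b

posterior = bayes_posterior(prior=0.01, likelihood=0.90,
                            false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154
```

Even with a fairly accurate test, the posterior stays low because the prior is small, which is exactly the kind of belief update Bayes' Theorem formalizes.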
3. Probabilistic Reasoning over Time
When reasoning about processes that evolve over time, agents need to consider probabilistic
reasoning over time. This deals with predicting future states based on current and past
information, using models such as Hidden Markov Models (HMMs) and Dynamic Bayesian Networks.
4. Making Simple Decisions
In simple decision-making scenarios, an agent selects the best action by considering the
possible outcomes and their associated probabilities.
Decision Theory provides a framework for making decisions under uncertainty by combining
probabilities with utilities (the value or benefit of outcomes). The goal is to maximize expected
utility.
Expected Utility: The weighted average of the utility values of all possible outcomes, where
each outcome's utility is multiplied by its probability of occurring. It is calculated as:
Expected Utility = Σ P(outcome) × U(outcome)
Where:
o P(outcome) is the probability of the outcome.
o U(outcome) is the utility (value) of the outcome.
Agents choose actions that maximize this expected utility. For example, a self-driving car might
choose a route based on the probabilities of traffic congestion and the utility of arriving on time.
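The route-choice example can be sketched directly. The probabilities and utilities below are illustrative assumptions, not real traffic data:

```python
# Expected utility: EU = sum over outcomes of P(outcome) * U(outcome).
# Outcome probabilities and utility values are illustrative assumptions.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# Route A: fast but often congested; Route B: slower but reliable.
route_a = [(0.6, 100), (0.4, 20)]   # (probability, utility) pairs
route_b = [(0.9, 70), (0.1, 40)]

routes = {"A": route_a, "B": route_b}
best = max(routes, key=lambda r: expected_utility(routes[r]))
print(best)  # A
```

Here route A wins (EU ≈ 68 vs. 67) even though it is riskier, because expected utility weighs the large payoff of the fast outcome against its probability.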
5. Making Complex Decisions
In more complex scenarios, agents must make decisions that involve multiple variables,
interrelated outcomes, or decisions that affect future choices. These are handled by
sequential decision-making models like Markov Decision Processes (MDPs).
Markov Decision Processes (MDPs) are a mathematical framework used to model decision-
making in environments where outcomes are partly random and partly under the control of the
agent.
The agent's goal is to find a policy that maximizes the expected cumulative reward over time.
This is useful in applications like robotics, where an agent must decide on a sequence of actions
to achieve long-term goals.
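A standard way to compute such a policy is value iteration. The tiny two-state MDP below is a hypothetical example; transitions map a (state, action) pair to (probability, next_state, reward) triples, and gamma is the discount factor:

```python
# Value iteration on a tiny hypothetical MDP. Repeatedly backs up each
# state's value as the best action's expected discounted return, until
# the values stop changing.

def value_iteration(states, actions, transitions, gamma=0.9, eps=1e-6):
    values = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q = [sum(p * (r + gamma * values[s2])
                     for p, s2, r in transitions[(s, a)])
                 for a in actions]
            best = max(q)                        # Bellman backup
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:
            return values

states = ["s0", "s1"]
actions = ["stay", "move"]
transitions = {
    ("s0", "stay"): [(1.0, "s0", 0.0)],
    ("s0", "move"): [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
    ("s1", "stay"): [(1.0, "s1", 2.0)],
    ("s1", "move"): [(1.0, "s0", 0.0)],
}
V = value_iteration(states, actions, transitions)
print(round(V["s1"], 2))  # ≈ 20.0, i.e. 2 / (1 - 0.9)
```

The optimal policy can then be read off by picking, in each state, the action that achieves the maximum in the backup (here: move toward s1 and stay there).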
Summary
Quantifying uncertainty is essential in real-world problem-solving, where complete
knowledge is not available.
Probabilistic reasoning allows agents to make decisions based on the likelihood of
different outcomes, using tools like Bayesian networks and Bayes' Theorem.
Probabilistic reasoning over time helps agents predict future states by using models
such as Hidden Markov Models and Dynamic Bayesian Networks.
Simple decision-making involves selecting actions that maximize expected utility,
combining probabilities with the value of outcomes.
Complex decision-making uses frameworks like Markov Decision Processes to make
sequential decisions in uncertain environments, considering long-term rewards.
Key Terms:
Probability
Bayesian Network
Bayes' Theorem
Hidden Markov Model (HMM)
Markov Decision Process (MDP)
Expected Utility
Policy
Chapter 6 – Learning
Learning from Examples and Observations
Knowledge in Learning
Learning Probabilistic models
Neural Networks
2. Knowledge in Learning
Types of Knowledge:
o Declarative Knowledge: Knowledge of facts and information (e.g., knowing that Paris is
the capital of France).
o Procedural Knowledge: Knowledge of how to perform tasks (e.g., knowing how to ride a
bicycle).
o Conditional Knowledge: Knowledge of when and why to apply declarative and
procedural knowledge (e.g., knowing when to use a specific math formula).
Role of Knowledge in Learning:
o Knowledge serves as the foundation for further learning and application. It enables
learners to connect new information with prior understanding, facilitating better
retention and comprehension.
4. Neural Networks
Definition: Neural networks are a subset of machine learning techniques inspired by the
structure and function of the human brain. They consist of interconnected layers of nodes
(neurons) that process data and learn complex patterns.
Key Features:
o Architecture: Neural networks typically consist of an input layer, one or more hidden
layers, and an output layer. Each layer transforms the data and passes it to the next
layer.
o Activation Functions: Functions applied to each neuron’s output to introduce non-
linearity, allowing the network to learn complex relationships in the data.
o Training: Neural networks are trained using algorithms like backpropagation, which
adjusts the weights of connections based on the error between predicted and actual
outputs.
Applications:
o Neural networks are used in various applications, including image recognition, natural
language processing, and game playing, showcasing their versatility and effectiveness in
handling complex data.
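The layered architecture described above can be sketched in a few lines. The weights below are hand-picked (not learned) so that the network computes XOR, illustrating how a hidden layer of non-linear units represents a function that no single linear unit can:

```python
import math

# A feed-forward network with one hidden layer. Weights are hand-set
# (not trained) so the network computes XOR: the two hidden units act
# as OR-like and NAND-like gates, and the output unit ANDs them.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: weighted sums followed by the sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_network(x1, x2):
    hidden = layer([x1, x2],
                   weights=[[20, 20], [-20, -20]],  # OR-like, NAND-like
                   biases=[-10, 30])
    (out,) = layer(hidden, weights=[[20, 20]], biases=[-30])
    return round(out)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_network(a, b))
```

In practice these weights would be found by training with backpropagation rather than set by hand; the forward pass shown here is the same computation either way.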
Conclusion
3. Perception
Definition: Perception refers to the process through which machines interpret sensory
data to understand their environment. This can involve visual, auditory, and tactile
information processing.
Key Components:
o Computer Vision: The ability of machines to interpret and make decisions based on
visual data from the world, such as recognizing objects, faces, and scenes.
o Sensor Fusion: Combining data from multiple sensors (e.g., cameras, microphones) to
enhance perception accuracy and reliability.
Applications:
o Autonomous vehicles, surveillance systems, augmented reality, and industrial
automation.
4. Robotics
Definition: Robotics is the branch of technology that deals with the design, construction,
operation, and use of robots. It combines elements from engineering, computer science,
and artificial intelligence to create machines that can perform tasks autonomously or
semi-autonomously.
Key Components:
o Actuators: Mechanisms that allow robots to move and interact with their environment,
such as motors and servos.
o Control Systems: Algorithms and software that govern robot behavior, allowing for task
execution and environmental interaction.
o Mobility: The ability of robots to navigate different terrains and environments, which
can involve wheels, legs, or flying capabilities.
Applications:
o Manufacturing (industrial robots), healthcare (surgical robots), space exploration, and
home automation (robotic vacuum cleaners).
Conclusion