
Chapter 2 – Intelligent Agents

 Introduction, Agents and Requirements
 Acting of Intelligent Agents (Rationality)
 Structure of Intelligent Agents
 Agent Types

1. Introduction to Intelligent Agents

An intelligent agent is an entity capable of perceiving its environment, processing this information, and taking actions to achieve specific goals. Intelligent agents are fundamental in artificial intelligence (AI) because they can autonomously make decisions and execute tasks.

Agents and Environments:

 Agent: An entity that perceives the environment and acts upon it.
 Environment: The external world in which the agent operates, which provides input
(percepts) and reacts to the agent’s actions.

2. Acting of Intelligent Agents (Rationality)

Rational Agent: An agent that acts to achieve the best outcome based on its knowledge and
abilities. Rationality depends on:

1. The performance measure: How success is evaluated.
2. Percept history: All past percepts the agent has experienced.
3. Knowledge: Information the agent has about its environment.
4. Available actions: The options the agent can take.

A rational agent chooses actions that are expected to maximize its performance based on the
percepts and knowledge it has.

Perfect Rationality: Achieving the best possible outcome, considering all possible actions.
Bounded Rationality: Agents that make reasonable decisions given the limits of time,
information, and resources.

3. Structure of Intelligent Agents

An intelligent agent’s structure consists of the following key components:

1. Sensors: Devices or functions that gather information from the environment (input).
2. Actuators: The components that allow the agent to act upon the environment (output).
3. Agent Program: The brain of the agent, which processes percepts and determines actions.
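
To make this structure concrete, here is a minimal Python sketch (the class and method names, and the environment interface, are illustrative assumptions, not from any particular library) of how sensors, the agent program, and actuators fit together:

    class Agent:
        """Skeleton agent: sensors feed percepts to the agent program,
        which chooses an action for the actuators to carry out."""

        def perceive(self, environment):
            # Sensor: obtain the current percept (hypothetical environment call).
            return environment.current_percept()

        def agent_program(self, percept):
            # Agent program: map a percept to an action; subclasses override this.
            raise NotImplementedError

        def step(self, environment):
            percept = self.perceive(environment)
            action = self.agent_program(percept)
            environment.apply(action)  # Actuator: act upon the environment.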

Agents are commonly categorized into the following types:

 Simple Reflex Agents: These agents respond to specific stimuli with predefined rules
(condition-action rules). Example: A thermostat that turns on the heater when it senses
cold.
 Model-based Agents: These agents maintain an internal model of the world and base
their decisions on past percepts as well as the current one.
 Goal-based Agents: These agents act to achieve a defined goal, beyond just reacting to
stimuli.
 Utility-based Agents: These agents try to maximize their overall utility, or the expected
value of their actions.

4. Types of Agents

Intelligent agents are classified based on their complexity and capabilities:

1. Simple Reflex Agents:
o Act only based on the current percept.
o No memory or consideration of future consequences.
o Example: A light switch that turns off if it is bright (see the reflex-agent sketch after this list).
2. Model-based Reflex Agents:
o Maintain an internal state or model of the environment.
o Can remember past states and adapt actions based on changes.
o Example: A robot vacuum that remembers areas it has cleaned.
3. Goal-based Agents:
o Actions are driven by goals.
o The agent assesses actions based on whether they help achieve a specific goal.
o Example: A self-driving car with a goal to reach a destination.
4. Utility-based Agents:
o Use a utility function to evaluate each action.
o Consider not only goals but also trade-offs, preferences, and resource
optimization.
o Example: A financial trading bot optimizing for profit while minimizing risk.
5. Learning Agents:
o Have the ability to improve their performance over time by learning from their
environment and experiences.
o Example: Machine learning algorithms that adjust their strategies based on data
patterns.
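
The simple reflex case can be written directly as condition-action rules. A minimal sketch of the thermostat example (the temperature thresholds are made-up values for illustration):

    def thermostat_agent(temperature):
        """Simple reflex agent: acts only on the current percept."""
        if temperature < 18:        # condition-action rule: too cold
            return "heater_on"
        if temperature > 24:        # condition-action rule: warm enough
            return "heater_off"
        return "no_op"              # comfortable range: do nothing

    print(thermostat_agent(15))  # heater_on
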
Summary

 Intelligent agents perceive the environment and act in a way that maximizes success
based on rational decisions.
 Their structure consists of sensors, actuators, and an agent program.
 Different agent types offer varying degrees of complexity, from simple reflex agents to
advanced learning agents.

This overview introduces the foundational concepts of intelligent agents. Each type of agent has
a role depending on the complexity of tasks it is designed to perform, and understanding these
distinctions is key in designing AI systems.

Key Terms:

 Rational Agent
 Percept
 Actuator
 Utility
 Learning Agent
Chapter 3 – Problem Solving

 Problem Solving by Searching
 Problem-Solving Agents
 Problem Formulation
 Search Strategies
 Games as Search Problems

1. Problem Solving by Searching

In AI, problem-solving often involves searching through various possibilities to find a solution.
An intelligent agent must find a sequence of actions that leads from the initial state to a goal
state. The process of systematically exploring possible options is called searching.

Key Components:

 Initial State: Where the agent begins.
 Actions: The possible moves or steps the agent can take.
 Goal State: The desired outcome or destination.
 State Space: The set of all possible states reachable from the initial state.
 Solution: A sequence of actions leading from the initial state to the goal state.

2. Problem-Solving Agents

A problem-solving agent is a type of intelligent agent that makes decisions by searching for
sequences of actions leading to desirable outcomes.

Characteristics:

 They are goal-driven: The agent takes actions to achieve a specific goal.
 They operate in a well-defined problem space where the initial state, goal state, and actions are
clearly defined.
 They follow a process that typically includes formulating a problem, searching for a solution,
and executing the solution.

The agent uses a search algorithm to explore the problem space and find a path to the goal.

3. Problem Formulation

Problem formulation is the first step in problem-solving. It involves clearly defining the problem by specifying:

1. Initial State: The starting point for the agent.
2. Goal State: The desired outcome the agent aims to reach.
3. Actions: The available actions that can change the state.
4. Transition Model: Describes how each action changes the current state to a new state.
5. Path Cost: A measure of how much each action costs (in terms of resources like time, effort, etc.).

A well-formulated problem allows the agent to systematically explore possible solutions.
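
These five components map directly onto code. Below is a minimal, hypothetical sketch of a problem definition in Python (the interface is illustrative, not from a specific textbook library):

    class Problem:
        """A search problem defined by its five components."""

        def __init__(self, initial_state, goal_state):
            self.initial_state = initial_state   # 1. initial state
            self.goal_state = goal_state         # 2. goal state

        def actions(self, state):
            # 3. actions available in `state` (filled in per problem).
            raise NotImplementedError

        def result(self, state, action):
            # 4. transition model: state reached by doing `action` in `state`.
            raise NotImplementedError

        def step_cost(self, state, action):
            # 5. path cost contribution of one action (e.g., time or distance).
            return 1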

4. Search Strategies

Search strategies are methods used by agents to explore the problem space. They can be
categorized into two main types:

Uninformed Search (Blind Search):

These strategies have no additional information about the problem beyond the initial state, goal
state, and possible actions. Examples include:

1. Breadth-First Search (BFS):
o Explores all nodes at the present depth level before moving on to the next depth level.
o Complete, and optimal when all actions have the same cost; can be slow and memory-intensive. (See the sketch after this list.)
2. Depth-First Search (DFS):
o Explores as far as possible along a branch before backtracking.
o Uses less memory than BFS but may not find the shortest path, or may even fail to find a solution in infinite state spaces.
3. Uniform-Cost Search (UCS):
o Explores the least costly path first, where the cost refers to the path cost.
o Guarantees finding the least expensive solution.
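
A minimal BFS sketch in Python (the interface is an assumption for illustration: successors(state) yields (action, next_state) pairs, and states are hashable):

    from collections import deque

    def breadth_first_search(initial_state, goal_test, successors):
        """BFS: explore states level by level; return a list of actions or None."""
        frontier = deque([(initial_state, [])])
        explored = {initial_state}
        while frontier:
            state, path = frontier.popleft()
            if goal_test(state):
                return path                      # shallowest solution found
            for action, next_state in successors(state):
                if next_state not in explored:   # avoid revisiting states
                    explored.add(next_state)
                    frontier.append((next_state, path + [action]))
        return None                              # no solution exists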

Informed Search (Heuristic Search):

These strategies use additional knowledge (heuristics) to guide the search more efficiently
toward the goal. Examples include:

1. Greedy Best-First Search:
o Expands the node that appears to be closest to the goal, using a heuristic function.
o Faster, but does not guarantee an optimal solution.
2. A* Search:
o Combines the cost of the path so far (from the initial state) with the heuristic estimate to choose the next node.
o It is both complete and optimal if the heuristic is admissible (never overestimates the cost to reach the goal).
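
A minimal A* sketch under the same assumed interface, extended so that successors(state) yields (action, next_state, cost) triples and heuristic(state) estimates the remaining cost to the goal:

    import heapq
    import itertools

    def a_star_search(initial_state, goal_test, successors, heuristic):
        """A*: expand by f(n) = g(n) + h(n); optimal if `heuristic` is admissible."""
        counter = itertools.count()   # tie-breaker so the heap never compares states
        frontier = [(heuristic(initial_state), next(counter), 0, initial_state, [])]
        best_g = {initial_state: 0}
        while frontier:
            _, _, g, state, path = heapq.heappop(frontier)
            if goal_test(state):
                return path
            for action, next_state, cost in successors(state):
                new_g = g + cost
                if new_g < best_g.get(next_state, float("inf")):
                    best_g[next_state] = new_g
                    f = new_g + heuristic(next_state)
                    heapq.heappush(frontier, (f, next(counter), new_g,
                                              next_state, path + [action]))
        return None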

5. Games as Search Problems

In many cases, AI agents are designed to compete in games, which are a special type of search
problem. The challenge is not only to find a solution but to win against an adversary.

Key elements of game-playing as a search problem:

 Initial State: The starting configuration of the game.
 Successor Function: Defines the legal moves or actions a player can take from a given state.
 Terminal State: The state where the game ends, which could be a win, loss, or draw.
 Utility Function (Payoff): Provides a numerical value (like a win/loss score) for terminal states, guiding the agent's decisions.

Search Techniques in Games:

1. Minimax Algorithm:
o Used for two-player games where one player tries to maximize the score and the
opponent tries to minimize it.
o It ensures the best possible result against an optimal adversary.
2. Alpha-Beta Pruning:
o An optimization technique for the minimax algorithm that cuts off branches that won't affect the final decision, improving efficiency without changing the outcome. (A combined minimax/alpha-beta sketch follows this list.)
3. Evaluation Function:
o In complex games, it's not feasible to search all the way to terminal states. Instead,
evaluation functions estimate the desirability of intermediate states, helping guide
decisions.
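
A combined sketch of minimax with alpha-beta pruning and a depth cutoff (the game interface — successors, is_terminal, evaluate — is assumed for illustration; evaluate scores a state from the maximizing player's point of view):

    def minimax(state, depth, alpha, beta, maximizing,
                successors, is_terminal, evaluate):
        """Minimax with alpha-beta pruning, cut off at a fixed depth."""
        if depth == 0 or is_terminal(state):
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for child in successors(state):
                value = max(value, minimax(child, depth - 1, alpha, beta, False,
                                           successors, is_terminal, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break          # prune: the minimizer will avoid this branch
            return value
        value = float("inf")
        for child in successors(state):
            value = min(value, minimax(child, depth - 1, alpha, beta, True,
                                       successors, is_terminal, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break              # prune: the maximizer will avoid this branch
        return value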

Summary

 Problem-solving by searching helps agents systematically explore solutions to a given problem.
 Problem-solving agents rely on well-defined states, goals, and actions to navigate the problem space.
 Problem formulation is critical for structuring problems in a way that allows efficient search.
 Search strategies can be uninformed or informed, depending on whether they use heuristics.
 Games provide interesting examples of adversarial search, requiring strategies like minimax and
alpha-beta pruning.
Key Terms:

 Initial State
 Goal State
 Breadth-First Search
 Depth-First Search
 Heuristic
 Minimax
 Alpha-Beta Pruning
Chapter 4 – Knowledge and Reasoning

 Logical Agents
 Propositional Logic
 Knowledge Representation
 Knowledge-Based Systems

1. Logical Agents

Logical agents are a type of intelligent agent that makes decisions based on formal logic. They
use knowledge-based systems to represent information about the world and apply logical
reasoning to draw conclusions and choose actions.

Key features of logical agents:

 They use declarative knowledge, meaning the knowledge is explicitly stated in sentences or formulas.
 They make inferences by applying rules of logic.
 They operate on a knowledge base (KB), which is a collection of knowledge represented in a formal language.

Logical agents follow these steps:

1. Perceive the environment.
2. Update the knowledge base.
3. Infer new information using logical rules.
4. Act based on the conclusions drawn.

2. Propositional Logic

Propositional logic is a simple but powerful form of logic used to represent facts about the
world. It works with propositions—statements that are either true or false.

Basic Elements:

 Propositions: Simple statements, like "It is raining" (denoted as P), which can be either
true or false.
 Logical connectives: These are operators that combine propositions:
o AND (∧): True if both propositions are true.
o OR (∨): True if at least one proposition is true.
o NOT (¬): Reverses the truth value of a proposition.
o Implication (→): If the first proposition is true, then the second must be true.
o Biconditional (↔): True if both propositions have the same truth value.

Truth tables are used to show the truth values of complex propositions. For example:

 P ∧ Q is true if both P and Q are true.
 P → Q means if P is true, then Q must also be true.

Inference in propositional logic involves applying rules to derive new facts from known facts.
Two important inference methods are:

 Modus Ponens: If P → Q and P is true, then Q is true. (See the sketch after this list.)
 Modus Tollens: If P → Q and Q is false, then P must be false.
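
A toy sketch of Modus Ponens as repeated rule application (purely illustrative: propositions are plain strings and each rule is a (premise, conclusion) pair):

    def modus_ponens(facts, rules):
        """From P -> Q and P, conclude Q; repeat until nothing new follows."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                if premise in facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # P -> Q, Q -> R, and P is known: infer Q, then R.
    print(modus_ponens({"P"}, [("P", "Q"), ("Q", "R")]))  # {'P', 'Q', 'R'}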

3. Knowledge Representation

Knowledge representation refers to how information about the world is structured and stored in
an AI system. It is essential for enabling reasoning, understanding, and decision-making.

There are different methods for representing knowledge in AI, including:

1. Propositional Logic: Uses simple true/false statements, as discussed above. It is easy to use but limited in its expressiveness.
2. First-Order Logic (FOL): Extends propositional logic by allowing the use of
quantifiers and relations. In FOL, we can talk about objects, their properties, and
relationships between them. Example: "All humans are mortal" is expressed as ∀x
(Human(x) → Mortal(x)).
3. Semantic Networks: A graphical representation where nodes represent objects or
concepts, and edges represent relationships. Example: A network might show that
"Birds" are related to "Flying" through an "ability" edge.
4. Frames: Structured data representations for describing stereotypical situations. Each
frame has slots (attributes) and fillers (values). For example, a "Car" frame might have
slots like "Make," "Model," and "Year."
5. Rules: In rule-based systems, knowledge is represented as if-then rules. Example: "If it
is raining, then take an umbrella."
6. Ontologies: Formal representations of a set of concepts within a domain and the
relationships between them. Ontologies are often used in knowledge-based systems for
more complex reasoning.

Knowledge representation enables AI systems to handle both factual and conceptual information, supporting more sophisticated reasoning.

4. Knowledge-Based Systems

Knowledge-based systems (KBS) are AI systems that reason and make decisions based on a
structured set of knowledge. They use a knowledge base combined with an inference engine to
derive new information or make decisions.

Key components:

1. Knowledge Base: A collection of facts and rules about the world. This includes:
o Declarative knowledge: Statements about what is true (e.g., "All birds can fly").
o Procedural knowledge: Instructions on how to perform tasks (e.g., "How to
calculate an area").
2. Inference Engine: The part of the system that applies logical reasoning to the knowledge
base. It uses rules of inference to deduce new facts or determine actions. Inference
engines work in two primary ways:
o Forward chaining: Starts with the known facts and applies inference rules to
extract more data until it reaches the goal or conclusion.
o Backward chaining: Starts with the goal and works backward by finding rules that could lead to the goal and checking whether their conditions are met. (See the sketch after this list.)
3. User Interface: The interface through which users interact with the system, providing
input and receiving output.
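
A minimal backward-chaining sketch (illustrative only: facts are strings, each rule is a (premises, conclusion) pair, and the rule set is assumed to be acyclic so the recursion terminates):

    def backward_chain(goal, facts, rules):
        """Prove `goal` by working backward from rules whose conclusion matches."""
        if goal in facts:
            return True                      # goal is a known fact
        for premises, conclusion in rules:
            if conclusion == goal and all(
                    backward_chain(p, facts, rules) for p in premises):
                return True                  # every premise was provable
        return False

    # "If raining and outside, then wet."
    rules = [({"raining", "outside"}, "wet")]
    print(backward_chain("wet", {"raining", "outside"}, rules))  # True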

Applications of Knowledge-Based Systems:

 Expert Systems: Designed to mimic human experts in specific domains, like medical
diagnosis or legal reasoning. They apply domain-specific knowledge to make decisions.
 Decision Support Systems: Used in businesses to help with decision-making based on a
structured set of rules or knowledge.

Summary

 Logical agents use formal logic to reason and make decisions based on a knowledge
base.
 Propositional logic deals with true/false statements and uses logical connectives to build
more complex expressions.
 Knowledge representation is crucial for structuring and storing information that AI
systems use to reason, including methods like propositional logic, first-order logic, and
semantic networks.
 Knowledge-based systems combine a knowledge base and inference engine to draw
conclusions or solve problems, with applications in expert systems and decision-making.

Key Terms:

 Logical Agent
 Propositional Logic
 Knowledge Base
 First-Order Logic
 Inference Engine
 Expert System
Chapter 5 – Uncertain Knowledge and Reasoning

 Quantifying Uncertainty
 Probabilistic Reasoning
 Probabilistic Reasoning over Time
 Making Simple Decisions
 Making Complex Decisions

1. Quantifying Uncertainty

In many real-world situations, agents must make decisions without having complete or perfect
knowledge. Uncertainty arises due to incomplete information, unpredictable outcomes, or the
complexity of the environment. To handle this, we use probabilistic methods to quantify
uncertainty and make decisions based on likelihood rather than certainty.

Key concepts:

 Probability: A numerical measure of how likely an event is to occur, ranging from 0 (impossible) to 1 (certain).
 Random Variables: Variables whose possible values are outcomes of random phenomena (e.g., whether it rains tomorrow).
 Probability Distribution: A function that describes the likelihood of different outcomes for a random variable.

2. Probabilistic Reasoning

Probabilistic reasoning involves making inferences and decisions based on probabilities. It helps agents reason under uncertainty by calculating the likelihood of different outcomes.

Bayesian Networks (also known as belief networks) are a key tool for probabilistic reasoning:

 Bayesian Networks represent the relationships between random variables using a directed acyclic graph (DAG). Each node represents a random variable, and edges represent dependencies between them.
 The network uses conditional probabilities to express how each variable depends on its parents in the graph.
 For example, in a medical diagnosis system, a Bayesian network might model the probability of diseases based on symptoms.

Bayes' Theorem is a critical rule for updating beliefs based on new evidence:
 Bayes’ Theorem formula: P(A|B) = P(B|A) · P(A) / P(B), where:
o P(A|B) is the probability of event A given that event B has occurred.
o P(B|A) is the probability of event B given that A has occurred.
o P(A) and P(B) are the prior probabilities of events A and B on their own.

Bayes' Theorem allows agents to update their beliefs about an event when new data or evidence
becomes available.
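
A quick numeric sketch (all numbers are made up for illustration): updating the probability of a disease after a positive test.

    def bayes(p_b_given_a, p_a, p_b):
        """Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
        return p_b_given_a * p_a / p_b

    # Assumed numbers: P(pos|disease) = 0.9, P(disease) = 0.01,
    # false-positive rate P(pos|healthy) = 0.05.
    p_pos = 0.9 * 0.01 + 0.05 * 0.99      # total probability of a positive test
    print(bayes(0.9, 0.01, p_pos))        # ~0.154: belief rises from 1% to ~15%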

3. Probabilistic Reasoning Over Time

When reasoning about processes that evolve over time, agents need to consider probabilistic
reasoning over time. This deals with predicting future states based on current and past
information.

Two common models for reasoning over time are:

1. Hidden Markov Models (HMMs):
o A statistical model that represents systems where the true state is hidden (not directly observable), but there are observable outcomes related to the state.
o HMMs consist of:
 States: Possible situations in which the system can be.
 Transitions: The probability of moving from one state to another.
 Observations: The data or signals received at each time step.
o HMMs are used in applications like speech recognition, where the system infers the sequence of spoken words based on audio signals. (A forward-algorithm sketch follows this list.)
2. Dynamic Bayesian Networks:
o Extends Bayesian networks to model time-varying processes. These networks
include nodes representing variables at different time steps, allowing agents to
reason about how events evolve over time.
o Used for more complex problems where multiple variables interact over time,
such as weather forecasting or financial modeling.
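
The core HMM computation can be sketched with the forward algorithm, which sums over hidden-state sequences to score an observation sequence. A minimal sketch with dict-based probability tables (all structures are illustrative):

    def hmm_forward(observations, states, start_p, trans_p, emit_p):
        """Forward algorithm: probability of the observation sequence."""
        # alpha[s] = P(observations so far, current hidden state = s)
        alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
        for obs in observations[1:]:
            alpha = {s: emit_p[s][obs] * sum(alpha[prev] * trans_p[prev][s]
                                             for prev in states)
                     for s in states}
        return sum(alpha.values())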

4. Making Simple Decisions

In simple decision-making scenarios, an agent selects the best action by considering the possible outcomes and their associated probabilities.

Decision Theory provides a framework for making decisions under uncertainty by combining probabilities with utilities (the value or benefit of outcomes). The goal is to maximize expected utility.

 Expected Utility: The weighted average of the utility values of all possible outcomes, where each outcome's utility is multiplied by its probability of occurring. It is calculated as:

Expected Utility = Σ P(outcome) × U(outcome)

Where:
o P(outcome) is the probability of the outcome.
o U(outcome) is the utility (value) of the outcome.

Agents choose actions that maximize this expected utility. For example, a self-driving car might
choose a route based on the probabilities of traffic congestion and the utility of arriving on time.
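
The route-choice example can be sketched numerically (all probabilities and utilities are made-up values for illustration):

    def expected_utility(outcomes):
        """Expected utility: sum of P(outcome) * U(outcome)."""
        return sum(p * u for p, u in outcomes)

    # (probability, utility) pairs for each route.
    highway = [(0.7, 10), (0.3, -5)]   # fast, unless congestion hits
    backroad = [(1.0, 4)]              # slower but predictable
    print(expected_utility(highway))   # 5.5 -> choose the highway
    print(expected_utility(backroad))  # 4.0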

5. Making Complex Decisions

In more complex scenarios, agents must make decisions that involve multiple variables,
interrelated outcomes, or decisions that affect future choices. These are handled by sequential
decision-making models like Markov Decision Processes (MDPs).

Markov Decision Processes (MDPs) are a mathematical framework used to model decision-
making in environments where outcomes are partly random and partly under the control of the
agent.

 MDPs consist of:
o States: Possible situations the agent can be in.
o Actions: Choices available to the agent at each state.
o Transition Model: Probabilities of moving from one state to another given a
particular action.
o Rewards: Immediate benefit received after transitioning to a new state.
o Policy: A strategy that specifies the action to take in each state to maximize long-
term rewards.

The agent's goal is to find a policy that maximizes the expected cumulative reward over time.
This is useful in applications like robotics, where an agent must decide on a sequence of actions
to achieve long-term goals.
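
A standard way to compute such a policy is value iteration. A minimal sketch (the transition and reward interfaces are assumptions for illustration; every state is assumed to have at least one action):

    def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
        """Iterate the Bellman update until state values converge.
        transition(s, a) yields (prob, next_state) pairs; gamma discounts the future."""
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                best = max(sum(p * (reward(s, a, s2) + gamma * V[s2])
                               for p, s2 in transition(s, a))
                           for a in actions(s))
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                return V   # a greedy policy w.r.t. V maximizes long-term reward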

In some cases, decision-making involves Partially Observable Markov Decision Processes (POMDPs), where the agent doesn't have full knowledge of the current state, making decisions even more challenging.

Summary

 Quantifying uncertainty is essential in real-world problem-solving, where complete
knowledge is not available.
 Probabilistic reasoning allows agents to make decisions based on the likelihood of
different outcomes, using tools like Bayesian networks and Bayes' Theorem.
 Probabilistic reasoning over time helps agents predict future states by using models
such as Hidden Markov Models and Dynamic Bayesian Networks.
 Simple decision-making involves selecting actions that maximize expected utility,
combining probabilities with the value of outcomes.
 Complex decision-making uses frameworks like Markov Decision Processes to make
sequential decisions in uncertain environments, considering long-term rewards.

Key Terms:

 Probability
 Bayesian Network
 Bayes' Theorem
 Hidden Markov Model (HMM)
 Markov Decision Process (MDP)
 Expected Utility
 Policy
Chapter 6 – Learning

 Learning from Examples and Observations
 Knowledge in Learning
 Learning Probabilistic Models
 Neural Networks

1. Learning from Examples and Observations

 Definition: Learning from examples and observations involves acquiring knowledge through experience and practice rather than formal instruction. This method allows learners to derive patterns and insights by analyzing data and examples.
 Key Concepts:
o Supervised Learning: A process where a model learns from labeled data (examples with known outcomes) to make predictions or classifications. (See the sketch after this list.)
o Unsupervised Learning: Learning from data without explicit labels, focusing on
discovering hidden patterns or groupings within the data.
o Reinforcement Learning: A type of learning where an agent learns to make decisions by
taking actions in an environment to maximize cumulative rewards.
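
Supervised learning can be illustrated with the simplest possible model, a 1-nearest-neighbor classifier (a toy sketch; the data points are made up):

    def nearest_neighbor_predict(train, x):
        """Predict the label of the training example closest to x."""
        def sq_dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        _, label = min(train, key=lambda example: sq_dist(example[0], x))
        return label

    # Labeled examples: (features, label) pairs.
    train = [((1.0, 1.0), "A"), ((5.0, 5.0), "B")]
    print(nearest_neighbor_predict(train, (1.5, 0.8)))  # "A"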

2. Knowledge in Learning

 Types of Knowledge:
o Declarative Knowledge: Knowledge of facts and information (e.g., knowing that Paris is
the capital of France).
o Procedural Knowledge: Knowledge of how to perform tasks (e.g., knowing how to ride a
bicycle).
o Conditional Knowledge: Knowledge of when and why to apply declarative and
procedural knowledge (e.g., knowing when to use a specific math formula).
 Role of Knowledge in Learning:
o Knowledge serves as the foundation for further learning and application. It enables
learners to connect new information with prior understanding, facilitating better
retention and comprehension.

3. Learning Probabilistic Models

 Definition: Probabilistic models are mathematical frameworks that incorporate uncertainty into the learning process. These models help in making predictions based on observed data, accounting for variability and incomplete information.
 Key Components:
o Random Variables: Variables whose values are subject to chance. They can represent
various outcomes in a given context.
o Probability Distributions: Functions that describe the likelihood of different outcomes
for random variables.
o Bayesian Inference: A method for updating the probability estimate for a hypothesis as
more evidence or information becomes available.
 Applications:
o Probabilistic models are widely used in fields such as finance, medicine, and machine
learning, enabling systems to make informed decisions despite uncertainty.

4. Neural Networks

 Definition: Neural networks are a subset of machine learning techniques inspired by the
structure and function of the human brain. They consist of interconnected layers of nodes
(neurons) that process data and learn complex patterns.
 Key Features:
o Architecture: Neural networks typically consist of an input layer, one or more hidden
layers, and an output layer. Each layer transforms the data and passes it to the next
layer.
o Activation Functions: Functions applied to each neuron’s output to introduce non-
linearity, allowing the network to learn complex relationships in the data.
o Training: Neural networks are trained using algorithms like backpropagation, which adjusts the weights of connections based on the error between predicted and actual outputs. (A forward-pass sketch follows this list.)
 Applications:
o Neural networks are used in various applications, including image recognition, natural
language processing, and game playing, showcasing their versatility and effectiveness in
handling complex data.
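
A tiny forward pass makes the layer structure concrete (a toy sketch; the weights are made-up numbers, which a real network would learn via a training algorithm such as backpropagation):

    import math

    def neuron(inputs, weights, bias):
        """One neuron: weighted sum of inputs through a sigmoid activation."""
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    def forward(x, layers):
        """Pass data through each layer; a layer is a list of (weights, bias)."""
        for layer in layers:
            x = [neuron(x, w, b) for w, b in layer]
        return x

    # A 2-2-1 network (input, one hidden layer, output) with illustrative weights.
    hidden = [([0.5, -0.6], 0.1), ([0.3, 0.8], -0.2)]
    output = [([1.0, -1.0], 0.0)]
    print(forward([1.0, 0.0], [hidden, output]))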

Conclusion

Understanding these concepts in learning, knowledge representation, probabilistic models, and neural networks is crucial for developing effective learning systems and technologies. Emphasizing examples and observations enhances the learning experience, making it more applicable and relevant.
Chapter 7 – Communicating, Perceiving, and Acting

 Natural Language Processing
 Natural Language for Communication
 Perception
 Robotics

1. Natural Language Processing (NLP)

 Definition: Natural Language Processing is a field of artificial intelligence that focuses on the interaction between computers and humans through natural language. It enables machines to understand, interpret, and generate human language in a valuable way.
 Key Components:
o Text Analysis: The process of analyzing text data to extract meaningful information,
such as sentiment, topics, and key phrases.
o Speech Recognition: Converting spoken language into text, allowing for voice-activated
systems and applications.
o Machine Translation: Automatically translating text or speech from one language to
another, facilitating cross-linguistic communication.
 Applications:
o Chatbots, virtual assistants (like Siri and Alexa), sentiment analysis in social media, and
automated translation services.

2. Natural Language for Communication

 Definition: Natural language communication involves using spoken or written language to convey information and express thoughts, feelings, and intentions between humans and machines.
 Key Aspects:
o Human-Computer Interaction: Designing systems that can understand and respond to
human language effectively, improving usability and accessibility.
o Contextual Understanding: The ability of systems to interpret language based on
context, which is crucial for understanding nuances, idioms, and cultural references.
 Challenges:
o Ambiguity in language, idiomatic expressions, and varying sentence structures can
complicate machine understanding.

3. Perception

 Definition: Perception refers to the process through which machines interpret sensory
data to understand their environment. This can involve visual, auditory, and tactile
information processing.
 Key Components:
o Computer Vision: The ability of machines to interpret and make decisions based on
visual data from the world, such as recognizing objects, faces, and scenes.
o Sensor Fusion: Combining data from multiple sensors (e.g., cameras, microphones) to
enhance perception accuracy and reliability.
 Applications:
o Autonomous vehicles, surveillance systems, augmented reality, and industrial
automation.

4. Robotics

 Definition: Robotics is the branch of technology that deals with the design, construction,
operation, and use of robots. It combines elements from engineering, computer science,
and artificial intelligence to create machines that can perform tasks autonomously or
semi-autonomously.
 Key Components:
o Actuators: Mechanisms that allow robots to move and interact with their environment,
such as motors and servos.
o Control Systems: Algorithms and software that govern robot behavior, allowing for task
execution and environmental interaction.
o Mobility: The ability of robots to navigate different terrains and environments, which
can involve wheels, legs, or flying capabilities.
 Applications:
o Manufacturing (industrial robots), healthcare (surgical robots), space exploration, and
home automation (robotic vacuum cleaners).

Conclusion

Chapter 7 explores the integration of natural language processing, communication, perception, and robotics, highlighting how these technologies contribute to the development of intelligent systems capable of interacting with humans and their environments. Understanding these concepts is vital for advancing in the fields of AI and robotics.
