AI Notes

The document discusses different types of agents and environments in artificial intelligence. It describes simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. It also covers properties of environments, state space representation, problem-solving techniques, and search algorithms used in AI.

What are Agents and Environments?

An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.

A human agent has sensory organs such as eyes, ears, nose, tongue, and skin, which parallel the sensors, and other organs such as hands, legs, and mouth for effectors.

A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.

A software agent has encoded bit strings as its programs and actions.
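As a minimal sketch (not from the notes; the class and method names are illustrative assumptions), the sensor/effector cycle can be written as a percept-to-action loop:

class Agent:
    def perceive(self, environment):
        # Read the current percept from the environment via the "sensors".
        return environment.current_percept()

    def act(self, percept):
        # Map the percept to an action; concrete agents override this.
        raise NotImplementedError

def run(agent, environment, steps=10):
    # The basic agent loop: sense, decide, act upon the environment.
    for _ in range(steps):
        percept = agent.perceive(environment)
        action = agent.act(percept)
        environment.apply(action)  # the "effectors" change the environment

The agent types described below differ mainly in how act() chooses an action.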

Properties of Environment
An environment can have the following properties −

Discrete / Continuous − If there are a limited number of distinct, clearly defined, states of the
environment, the environment is discrete (For example, chess); otherwise it is continuous (For example,
driving).

Observable / Partially Observable − If it is possible to determine the complete state of the environment at each point in time from the percepts, it is observable; otherwise it is only partially observable.

Static / Dynamic − If the environment does not change while an agent is acting, then it is static;
otherwise it is dynamic.

Single agent / Multiple agents − The environment may contain other agents, which may be of the same kind as the agent or of a different kind.

Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the complete state of the
environment, then the environment is accessible to that agent.

Deterministic / Non-deterministic − If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic.

Episodic / Non-episodic − In an episodic environment, each episode consists of the agent perceiving and
then acting. The quality of its action depends just on the episode itself. Subsequent episodes do not
depend on the actions in the previous episodes. Episodic environments are much simpler because the
agent does not need to think ahead.

Types of AI Agents

Simple Reflex agent:

Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept alone, ignoring the rest of the percept history.

These agents succeed only in fully observable environments.

A simple reflex agent does not consider any part of the percept history during its decision and action process.

The simple reflex agent works on condition-action rules, which means it maps the current state directly to an action. For example, a room-cleaner agent cleans only if there is dirt in the room.
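The room-cleaner example fits in a few lines of code. This is an illustrative sketch (the percept format and action names are assumptions, modeled on the classic two-cell vacuum world):

def simple_reflex_vacuum(percept):
    # The agent looks only at the current percept and keeps no history.
    location, status = percept
    if status == "Dirty":              # condition-action rule: dirt -> suck
        return "Suck"
    return "Right" if location == "A" else "Left"

print(simple_reflex_vacuum(("A", "Dirty")))   # -> Suck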

Problems for the simple reflex agent design approach:

They have very limited intelligence.

They have no knowledge of the non-perceptual parts of the current state.

The set of condition-action rules is usually too large to generate and store.

They are not adaptive to changes in the environment.

Model-based reflex agent

The model-based agent can work in a partially observable environment and track the situation.

A model-based agent has two important factors:

Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.

Internal State: a representation of the current state based on the percept history.

These agents maintain the model, which is knowledge of the world, and perform actions based on it.

Updating the agent state requires information about:

How the world evolves.

How the agent's actions affect the world.
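A hedged sketch of this structure (the model interface and rule format are illustrative assumptions):

class ModelBasedReflexAgent:
    def __init__(self, model, rules):
        self.model = model        # knowledge of how the world evolves
        self.rules = rules        # list of (condition, action) pairs
        self.state = None         # internal state built from percept history
        self.last_action = None

    def act(self, percept):
        # Update the internal state from the model, the last action, and
        # the new percept; this is how the agent "tracks the situation".
        self.state = self.model.update(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        return None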
Goal-based agents

Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.

The agent also needs to know its goal, which describes desirable situations.

Goal-based agents expand the capabilities of the model-based agent by adding this "goal" information.

They choose their actions so as to achieve the goal.

These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, and it is what makes an agent proactive.
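One way this can look in code, as a hedged sketch (the search procedure stands for any planner, such as the search algorithms described later in these notes; all names are illustrative):

class GoalBasedAgent:
    def __init__(self, goal_test, successors, search):
        self.goal_test = goal_test      # is this state a desirable situation?
        self.successors = successors    # model: state -> [(action, next_state)]
        self.search = search            # planner returning a list of actions
        self.plan = []

    def act(self, state):
        # Plan a sequence of actions to the goal, then execute it step by step.
        if not self.plan:
            self.plan = self.search(state, self.goal_test, self.successors) or []
        return self.plan.pop(0) if self.plan else None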
Utility-based agents

These agents are similar to the goal-based agent but add an extra component of utility measurement, which distinguishes them by providing a measure of success in a given state.

A utility-based agent acts based not only on goals but also on the best way to achieve the goal.

The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action among them.

The utility function maps each state to a real number, which measures how efficiently each action achieves the goals.
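Action selection then reduces to maximizing predicted utility. A minimal sketch (result and utility are assumed, illustrative functions):

def utility_based_action(state, actions, result, utility):
    # result(state, action) predicts the next state using the agent's model;
    # utility(state) maps a state to a real number (its degree of success).
    return max(actions, key=lambda action: utility(result(state, action)))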
Learning Agents

A learning agent in AI is an agent that can learn from its past experiences; that is, it has learning capabilities.

It starts acting with basic knowledge and is then able to act and adapt automatically through learning.

A learning agent has mainly four conceptual components:

Learning element: responsible for making improvements by learning from the environment.

Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

Performance element: responsible for selecting external actions.

Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
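The four components might fit together as in this hedged skeleton (all names and interfaces are illustrative assumptions):

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # compares behavior to a fixed standard
        self.problem_generator = problem_generator      # proposes informative experiments

    def step(self, percept):
        # The critic scores recent behavior; the learning element uses that
        # feedback to improve how the performance element chooses actions.
        feedback = self.critic.evaluate(percept)
        self.learning_element.improve(self.performance_element, feedback)
        # Occasionally try a new, informative action instead of the best-known one.
        exploratory = self.problem_generator.suggest(percept)
        return exploratory or self.performance_element.select_action(percept)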
State Space Representation in AI

A state space is a way to mathematically represent a problem by defining all the possible states in which
the problem can be. This is used in search algorithms to represent the initial state, goal state, and
current state of the problem. Each state in the state space is represented using a set of variables.

State space representation consists of identifying an INITIAL STATE (where to begin) and a GOAL STATE (the final destination), and then following a specific sequence of actions that moves through the intermediate states.
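As a concrete, illustrative example (an assumption, not from the notes), the state space of a two-cell vacuum world can be enumerated explicitly, with each state a tuple of variables:

from itertools import product

# Each state is (robot_location, cell_A_dirty, cell_B_dirty).
STATES = list(product(["A", "B"], [True, False], [True, False]))  # 8 states

INITIAL_STATE = ("A", True, True)          # robot at A, both cells dirty

def is_goal(state):
    _, dirty_a, dirty_b = state
    return not dirty_a and not dirty_b     # goal: no dirt anywhere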

A simple example: consider a 4th-order system represented by a single 4th-order differential equation with input x and output z. We can define four new variables, q1 through q4, as the state. For this problem, a state space representation is easy to find.
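Assuming the system is linear and time-invariant (which this example implies), the four variables collect into a state vector and the single 4th-order equation becomes four coupled 1st-order equations, in the standard state-space form:

\dot{\mathbf{q}} = A\mathbf{q} + Bx, \qquad z = C\mathbf{q} + Dx, \qquad \mathbf{q} = (q_1, q_2, q_3, q_4)^{\mathsf{T}}

Here each row of the 4x4 matrix A encodes one of the first-order equations that replace the original 4th-order one.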

Problem Solving in Artificial Intelligence

The reflex agent of AI directly maps states to actions. Whenever such agents fail to operate, because the state-to-action mapping is too large to store and handle, the stated problem is decomposed and handed to a problem-solving agent, which breaks the large problem into smaller subproblems and resolves them one by one. The final, integrated actions produce the desired outcome.

Depending on the problem and its working domain, different types of problem-solving agents are defined and used at an atomic level, without any visible internal state, together with a problem-solving algorithm. The problem-solving agent performs precisely by defining problems and their solutions. So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and heuristic algorithms, to solve a problem.

We can also say that a problem-solving agent is a result-driven agent that always focuses on satisfying its goals.
There are basically three types of problem in artificial intelligence:

1. Ignorable: In which solution steps can be ignored.

2. Recoverable: In which solution steps can be undone.

3. Irrecoverable: In which solution steps cannot be undone.

Steps of problem solving in AI: The problems of AI are directly associated with the nature of humans and their activities, so we need a finite number of steps to solve a problem, which makes the task easier.

The following steps are required to solve a problem:

Problem definition: Detailed specification of inputs and acceptable system solutions.

Problem analysis: Analyse the problem thoroughly.

Knowledge representation: Collect detailed information about the problem and define all possible techniques.

Problem-solving: Selection of the best technique.

Components to formulate the associated problem:

Initial State: the state from which the agent starts and which directs it toward the specified goal; it also initializes the problem-solving process for the specific problem class.

Action: this stage of problem formulation defines, as a function of a state, all the possible actions that can be taken from it.

Transition: this stage of problem formulation integrates the actual action chosen in the previous stage and yields the resulting state, which is forwarded to the next stage.

Goal test: this stage determines whether the specified goal has been achieved via the integrated transition model; once the goal is reached, the action stops and control passes to the next stage, which determines the cost of achieving the goal.

Path costing: this component of problem solving numerically assigns the cost of achieving the goal. It covers all hardware, software, and human working costs.
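Taken together, these five components define a search problem. A hedged sketch of how they can be bundled (an illustrative container, not a standard API):

class Problem:
    # The five components of problem formulation in one place.
    def __init__(self, initial_state, actions, transition, goal_test, path_cost):
        self.initial_state = initial_state
        self.actions = actions        # actions(state) -> iterable of actions
        self.transition = transition  # transition(state, action) -> next state
        self.goal_test = goal_test    # goal_test(state) -> bool
        self.path_cost = path_cost    # path_cost(g, state, action, next_state) -> number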

Search Algorithms in AI

There are two types of search algorithms explained below:

1. Uninformed Search Algorithms

Uninformed search algorithms do not have any domain knowledge. They work in a brute-force manner and hence are also called brute-force algorithms. They do not know how far away the goal node is; all they know is how to traverse the tree and tell the difference between a leaf node and a goal node. Every node is examined without prior knowledge, hence the name blind search algorithms.

Uninformed search algorithms are of two main types:

a. Breadth-first search (BFS)

b. Depth-first search (DFS)

Breadth-First Search (BFS)

We’re spreading out across the tree or graph when working with breadth-first search. We start at a
node—we’ll call it the search key—and from there, we explore all its neighbors at the same depth.
Once we’ve covered those, we level up and do it again. It is implemented using the queue data
structure that works on the concept of first in first out (FIFO). It is a complete algorithm as it returns a
solution if a solution exists.

Example: If the search starts from root node A to reach goal node G, it will traverse A-B-C-D-G. It
traverses level-wise, i.e., explores the shallowest node first.
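A hedged Python sketch of BFS over a graph given as an adjacency mapping; the graph below is an illustrative reconstruction chosen so the nodes are expanded in the order A, B, C, D, G, as in the example:

from collections import deque

def bfs(graph, start, goal):
    # Explore the shallowest nodes first using a FIFO queue of paths.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path        # complete: returns a solution if one exists
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"]}
print(bfs(graph, "A", "G"))    # ['A', 'C', 'G']; nodes expanded A, B, C, D, G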
Depth-First Search (DFS)

We’re diving deep into the tree or graph when dealing with a depth-first search. We kick off from a node—let’s call it the search key—and explore all the nodes down the branch. Once we’ve done that, we backtrack and repeat. We use a stack data structure to make this work. The key concept here? Last in, first out (LIFO).

Example: If the search starts from root node A to reach goal node G, it will traverse A-B-D-G. It
traverses depth-wise, i.e., explores the deepest node first.
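A hedged recursive sketch (the call stack plays the role of the LIFO stack); the graph is an illustrative reconstruction matching the example's A-B-D-G traversal:

def dfs(graph, start, goal, visited=None):
    # Explore the deepest nodes along a branch before backtracking.
    if visited is None:
        visited = set()
    visited.add(start)
    if start == goal:
        return [start]
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            path = dfs(graph, neighbor, goal, visited)
            if path:
                return [start] + path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "D": ["G"]}
print(dfs(graph, "A", "G"))   # ['A', 'B', 'D', 'G']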

2. Informed Search Algorithms

Informed search algorithms have domain knowledge: they contain the problem description plus extra information, such as how far away the goal node is. You might also know them as heuristic search algorithms. Although they might not always supply the best solution, they will supply one in a timely manner. They can solve complex problems more efficiently than uninformed ones.

They are mainly of two types:

❖ Greedy Best First Search

❖ A* Search

Greedy Best First Search

In this algorithm, we expand the node that appears closest to the goal node. The heuristic function h(x) roughly estimates this closeness, so nodes are expanded in order of the evaluation function f(n) = h(n). We implement this algorithm using a priority queue. It is not an optimal algorithm and can get stuck in loops.

Example: Let’s say we need to find the lowest-cost path from root node A to any goal state using greedy search. In that case, the solution turns out to be A-B-E-H. It will start with B because it has a lower cost than C, then E because it has a lower cost than D and G2.
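A hedged sketch using heapq as the priority queue; the graph and heuristic values are illustrative assumptions chosen so the search follows A-B-E-H:

import heapq

def greedy_best_first(graph, h, start, goal):
    # Always expand the node with the smallest heuristic value: f(n) = h(n).
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "E": ["G2", "H"]}
h = {"A": 5, "B": 2, "C": 4, "D": 3, "E": 1, "G2": 2, "H": 0}
print(greedy_best_first(graph, h, "A", "H"))   # ['A', 'B', 'E', 'H']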

A* Search

A* search is a combination of greedy search and uniform cost search. In this algorithm, we denote the total cost (the evaluation function) by f(x), which sums the cost used in uniform cost search, represented by g(x), and the heuristic cost used in greedy search, represented by h(x):

f(x) = g(x) + h(x)

In this, we define g(x) as the backward cost, which is the cumulative cost from the root node to the
current node, and we define h(x) as the forward cost, which is the approximate distance between the
goal node and the current node.
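A hedged sketch; the weighted graph and heuristic values are illustrative assumptions. Here g accumulates the backward cost from the root, and h estimates the forward cost to the goal:

import heapq

def a_star(graph, h, start, goal):
    # Expand the node with the smallest f(x) = g(x) + h(x).
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)]}
h = {"A": 3, "B": 4, "C": 1, "G": 0}
print(a_star(graph, h, "A", "G"))   # (['A', 'C', 'G'], 5): cheaper than A-B-G at cost 6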
