Unit 2 BIM 5
Artificial intelligence is defined as the study of rational agents. A rational agent could be
anything that makes decisions, such as a person, firm, machine, or software. It carries out the action
with the best expected outcome after considering past and current percepts (the agent's perceptual
inputs at a given instant). An AI system is composed of an agent and its environment. The agents act
in their environment, and the environment may contain other agents.
Following are the main three terms involved in the structure of an AI agent:
• Architecture: the machinery (hardware together with sensors and actuators) that the agent runs on.
• Agent function: a mapping from the sequence of percepts seen so far to an action.
• Agent program: the concrete implementation of the agent function that runs on the architecture.
An agent is anything that can perceive its environment through sensors and act upon that
environment through actuators (also called effectors); a minimal code skeleton follows the examples below.
Examples of Agents:
• A human agent has eyes, ears, and other sensory organs which act as sensors, and hands, legs,
mouth, and other body parts which act as actuators.
• A robotic agent has cameras and infrared range finders which act as sensors, and various
motors which act as actuators.
• A software agent receives keystrokes, file contents, and incoming network packets as sensory
input, and acts on its environment by displaying on the screen, writing files, and sending
network packets.
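The agent abstraction above can be written as a small program skeleton. Below is a minimal sketch in Python; the names Agent, agent_function, and step are illustrative assumptions, not part of any standard library:

```python
# A minimal sketch of the agent abstraction: sensors feed percepts in,
# the agent function maps the percept (and its history) to an action,
# and actuators carry the chosen action out.

class Agent:
    def __init__(self):
        self.percept_history = []               # everything perceived so far

    def agent_function(self, percept):
        """Map the current percept (and history) to an action."""
        raise NotImplementedError

    def step(self, percept):
        self.percept_history.append(percept)    # sensor input arrives
        action = self.agent_function(percept)   # decide what to do
        return action                            # handed to the actuators
```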
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability:
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agent
Simple Reflex Agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of the
current percept. (Percept history is the history of everything the agent has perceived to date.)
The agent function is based on condition-action rules. A condition-action rule maps a state, i.e.
a condition, to an action: if the condition is true, the action is taken; otherwise it is not
(a code sketch follows the list of problems below).
This agent function only succeeds when the environment is fully observable. For simple reflex
agents operating in partially observable environments, infinite loops are often unavoidable; it
may be possible to escape such loops if the agent can randomize its actions.
Problems with simple reflex agents:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The table of condition-action rules is usually too big to generate and store.
• If any change occurs in the environment, the collection of rules needs to be updated.
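As an illustration of condition-action rules, here is a minimal sketch of a simple reflex agent for the textbook two-square vacuum world; the percept format and action names are assumptions made only for this example:

```python
# Simple reflex agent for a two-square vacuum world (textbook example).
# It looks only at the current percept (location, status) and fires the
# first condition-action rule that matches; no percept history is kept.

def simple_reflex_vacuum_agent(percept):
    location, status = percept            # e.g. ('A', 'Dirty')
    if status == 'Dirty':                 # condition -> action rules
        return 'Suck'
    if location == 'A':
        return 'Right'
    if location == 'B':
        return 'Left'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))   # Suck
print(simple_reflex_vacuum_agent(('A', 'Clean')))   # Right
```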
Model-Based Reflex Agents
A model-based agent works by finding a rule whose condition matches the current situation. It
can handle partially observable environments by using a model of the world. The agent has to
keep track of an internal state, which is adjusted by each percept and depends on the percept
history. The current state is stored inside the agent, which maintains some kind of structure
describing the part of the world that cannot be seen (a sketch in code follows the list below).
Updating the state requires information about:
• how the world evolves independently of the agent, and
• how the agent's actions affect the world.
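A rough sketch of the same idea in code, assuming a toy two-square vacuum world: the agent keeps an internal model of squares it cannot currently see and updates it from each percept before choosing an action:

```python
# Model-based reflex agent sketch: an internal model remembers the last
# known status of each vacuum-world square, so the agent can act sensibly
# even though it only perceives the square it is currently on.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {'A': None, 'B': None}     # internal state (unknown at start)

    def update_state(self, percept):
        location, status = percept
        self.model[location] = status            # how percepts change the model
        return location

    def act(self, percept):
        location = self.update_state(percept)
        if self.model[location] == 'Dirty':
            return 'Suck'
        # If everything we know about is clean, there is nothing left to do.
        if all(v == 'Clean' for v in self.model.values()):
            return 'NoOp'
        return 'Right' if location == 'A' else 'Left'

agent = ModelBasedVacuumAgent()
print(agent.act(('A', 'Clean')))   # Right (square B is still unknown)
```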
Goal-based agents
These agents take decisions based on how far they currently are from their goal (a description
of desirable situations). Every action they take is intended to reduce their distance from the
goal. This gives the agent a way to choose among multiple possibilities, selecting the one that
reaches a goal state. The knowledge that supports its decisions is represented explicitly and can
be modified, which makes these agents more flexible. They usually require search and planning,
and the goal-based agent's behavior can easily be changed.
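A goal-based agent needs some form of search to find actions that lead to a goal state. The tiny grid world and breadth-first search below are an illustrative sketch under that assumption, not a prescribed implementation:

```python
from collections import deque

# Goal-based agent sketch: instead of reacting to the current percept,
# the agent searches for a plan (a sequence of states) that reaches the goal.

def plan_to_goal(start, goal, neighbours):
    """Breadth-first search returning a list of states from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A 3x3 grid where the agent can move up/down/left/right.
def grid_neighbours(cell):
    x, y = cell
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < 3 and 0 <= b < 3]

print(plan_to_goal((0, 0), (2, 2), grid_neighbours))
```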
Utility-based agents
Utility-based agents are used when there are multiple possible alternatives and the agent must
decide which one is best. They choose actions based on a preference (utility) for each state.
Sometimes achieving the desired goal is not enough; we may look for a quicker, safer, or cheaper
trip to reach a destination. Agent happiness should be taken into consideration, and utility
describes how "happy" the agent is. Because of the uncertainty in the world, a utility-based
agent chooses the action that maximizes the expected utility. A utility function maps a state
onto a real number which describes the associated degree of happiness.
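The idea of maximizing expected utility can be sketched in a few lines; the outcome probabilities and utility values below are invented purely for illustration:

```python
# Utility-based agent sketch: each action leads to possible outcome states
# with some probability; the agent picks the action whose expected utility
# (sum of probability * utility over outcomes) is highest.

def expected_utility(action, outcomes, utility):
    return sum(p * utility[state] for state, p in outcomes[action].items())

def choose_action(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Illustrative numbers: a "fast" route is quicker but riskier than a "safe" one.
utility = {'arrived_quickly': 10, 'arrived_late': 4, 'accident': -100}
outcomes = {
    'fast_route': {'arrived_quickly': 0.70, 'arrived_late': 0.25, 'accident': 0.05},
    'safe_route': {'arrived_quickly': 0.20, 'arrived_late': 0.79, 'accident': 0.01},
}
print(choose_action(['fast_route', 'safe_route'], outcomes, utility))  # safe_route
```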
Learning Agent:
A learning agent in AI is an agent that can learn from its past experiences; it has learning
capabilities. It starts acting with basic knowledge and then adapts automatically through
learning.
A learning agent has four main conceptual components (sketched in code after the list):
1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative
experiences.
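These four components can be sketched as methods of a single class. The reward-based update below is a toy assumption made only to show how the pieces fit together:

```python
import random

# Learning agent skeleton: the critic scores behaviour against a performance
# standard, the learning element uses that feedback to improve the
# performance element, and the problem generator proposes exploratory actions.

class LearningAgent:
    def __init__(self, rules):
        self.rules = dict(rules)              # performance element's knowledge

    def performance_element(self, percept):
        # Select an external action using the current rules.
        return self.rules.get(percept, 'NoOp')

    def critic(self, reward):
        # Feedback with respect to a fixed performance standard
        # (here simply a numeric reward signal).
        return reward

    def learning_element(self, percept, action, feedback):
        # Toy update: remember actions that earned positive feedback.
        if feedback > 0:
            self.rules[percept] = action

    def problem_generator(self, actions):
        # Occasionally suggest an exploratory action to gain new experience.
        return random.choice(actions)

agent = LearningAgent({'dirty': 'Suck'})
tried = agent.problem_generator(['Suck', 'Left', 'Right'])
agent.learning_element('dusty', tried, feedback=agent.critic(reward=1))
print(agent.rules)
```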
Agent Environment in AI
An environment is everything in the world that surrounds the agent but is not part of the agent
itself. An environment can be described as the situation in which an agent is present.
The environment is where the agent lives and operates, and it provides the agent with something
to sense and act upon. An environment is mostly said to be non-deterministic.
Features of Environment
As per Russell and Norvig, an environment can have various features from the point of view of
an agent:
1. Fully observable vs Partially observable:
o If an agent's sensors give it access to the complete state of the environment at each point in
time, then such an environment is called fully observable; otherwise it is partially observable.
2. Deterministic vs Stochastic:
o If the next state of the environment is completely determined by the current state and the
agent's action, then such an environment is called deterministic; otherwise it is stochastic.
3. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
o However, in a sequential environment, an agent requires memory of past actions to
determine the next best action.
4. Single-agent vs Multi-agent:
o If only one agent is involved in an environment and operates by itself, then such an
environment is called a single-agent environment.
o However, if multiple agents are operating in an environment, then such an environment is
called a multi-agent environment.
o The agent design problems in a multi-agent environment are different from those in a
single-agent environment.
5. Static vs Dynamic:
o If the environment can change itself while an agent is deliberating, then such an environment
is called a dynamic environment; otherwise it is called a static environment.
o Static environments are easy to deal with because the agent does not need to keep looking
at the world while deciding on an action.
o However, in a dynamic environment, the agent needs to keep looking at the world before each
action.
o Taxi driving is an example of a dynamic environment, whereas a crossword puzzle is an
example of a static environment.
6. Discrete vs Continuous:
o If there is a finite number of percepts and actions that can be performed in an environment,
then such an environment is called a discrete environment; otherwise it is called a continuous
environment.
o A chess game is a discrete environment, as there is a finite number of moves that can be
performed.
o A self-driving car operates in a continuous environment.
7. Known vs Unknown:
o Known and unknown are not actually features of an environment but describe the agent's state
of knowledge about how to act in it.
o In a known environment, the results of all actions are known to the agent, while in an unknown
environment the agent needs to learn how the environment works in order to perform an action.
o It is quite possible for a known environment to be partially observable and for an unknown
environment to be fully observable.
8. Accessible vs Inaccessible:
o If an agent can obtain complete and accurate information about the environment's state, then
such an environment is called an accessible environment; otherwise it is called inaccessible.
o An empty room whose state can be defined by its temperature is an example of an accessible
environment.
o Information about an event on Earth is an example of an inaccessible environment.
PEAS System of AI
We know that there are different types of agents in AI. The PEAS system is used to group similar
agents together. A PEAS description specifies the performance measure, environment, actuators,
and sensors of the respective agent. Most of the highest-performing agents are rational agents.
Rational Agent: A rational agent considers all possibilities and chooses to perform the most
efficient action; for example, it chooses the shortest path with low cost for high efficiency.
PEAS stands for Performance measure, Environment, Actuators, Sensors.
1. Performance Measure: The performance measure is the unit used to define the success of an
agent. Performance varies between agents based on their different percepts.
2. Environment: The environment is the surroundings of an agent at every instant. It keeps
changing with time if the agent is set in motion. There are 5 major types of
environments:
• Fully Observable & Partially Observable
• Episodic & Sequential
• Static & Dynamic
• Discrete & Continuous
• Deterministic & Stochastic
3. Actuator: An actuator is the part of the agent that delivers the output of an action to the
environment.
4. Sensor: Sensors are the receptive parts of an agent that take in input for the agent.
Agent | Performance Measure | Environment | Actuators | Sensors
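As an illustration, a PEAS description can be recorded as a plain data structure. The automated taxi driver values below follow the widely used textbook example; the class name PEAS is an assumption for this sketch:

```python
from dataclasses import dataclass, field

# A PEAS description recorded as a simple data structure, illustrated with
# the classic automated taxi driver example.

@dataclass
class PEAS:
    agent: str
    performance_measure: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    agent='Automated taxi driver',
    performance_measure=['safe', 'fast', 'legal', 'comfortable trip', 'maximize profits'],
    environment=['roads', 'other traffic', 'pedestrians', 'customers'],
    actuators=['steering', 'accelerator', 'brake', 'signal', 'horn'],
    sensors=['cameras', 'sonar', 'speedometer', 'GPS', 'odometer'],
)
print(taxi.agent, taxi.sensors)
```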
Artificial intelligence classified as “soft” or “weak” is response-based AI. In other words, the
technology is not actively thinking for itself.
Common "soft" artificial intelligence systems may be as close as your pocket! Personal
assistants like Apple's Siri and Amazon's Alexa are excellent examples of "soft" AI.
For instance, imagine getting ready for the day and wondering what attire is appropriate. If you
were to own some sort of device with a technological assistant, you could simply ask, “What’s the
weather like today?”
The personal assistant technology is not actively thinking about what the weather outside is like.
Instead, it is using keywords and phrases through speech recognition algorithms to provide a
response. The words “weather” and “today” can be used to prompt the technology to use location
data and information from meteorology services to tell you whether or not you need an umbrella,
a jacket, snow boots, or some strong SPF sunscreen.
Therefore, we can think of “soft” or “weak” AI as an input-output system. Information goes in,
the input data is processed, the technology uses algorithms to complete a desired task or function,
and finally, an action or answer is then spit back out. Yet the technology is not using the
information to learn and get smarter.
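A toy sketch of that input-output idea, assuming invented keywords, handlers, and canned responses (this is not how any real assistant is implemented):

```python
# Toy "soft AI" sketch: the system does not understand the request; it only
# matches keywords and routes them to a fixed handler that returns a response.

def weather_handler(query):
    return "Looks like rain today - take an umbrella."   # canned answer

def time_handler(query):
    return "It is 9:15 AM."

HANDLERS = {
    'weather': weather_handler,
    'time': time_handler,
}

def assistant(query):
    for keyword, handler in HANDLERS.items():
        if keyword in query.lower():
            return handler(query)
    return "Sorry, I don't know how to help with that."

print(assistant("What's the weather like today?"))
```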
“Soft” artificial intelligence doesn’t seem so scary. It obeys our commands with no mind of its
own. However, the other classification of AI, "hard" artificial intelligence, is a slightly
different story.
The most prominent difference between "soft" and "hard" AI is that "hard" AI does not just take
in information; it actively works to comprehend the information and carry out tasks of its own
volition. Where "soft" AI is predictable, "hard" AI is more like the human brain itself. A
technological entity with the ability to think and process for itself is more representative of
human intelligence and action than algorithms that merely match inputs to canned responses and
tasks.
Some of the best examples of “hard” AI are actually robots that have been taught to play poker or
video games. By examining the variables of a game mathematically, they can develop their own
strategies. This association-style processing allows the technology to learn instead of stagnating
in the ask and answer style of “soft” AI.