
UNIT 2

Intelligent Agents in Artificial Intelligence

An intelligent agent (IA) is an entity that makes decisions in order to put
artificial intelligence into action. It can also be described as a software entity
that carries out operations on behalf of users or programs after sensing the
environment, using actuators to initiate actions in that environment.

This agent has some level of autonomy that allows it to perform specific,
predictable, and repetitive tasks for users or applications.

It is termed 'intelligent' because of its ability to learn while performing tasks.

The two main functions of intelligent agents include perception and action.
Perception is done through sensors while actions are initiated through actuators.

Intelligent agents consist of sub-agents that form a hierarchical structure.


Lower-level tasks are performed by these sub-agents.

The higher-level agents and lower-level agents form a complete system that can
solve difficult problems through intelligent behaviours or responses.

Artificial intelligence is often defined as the study of rational agents. A rational
agent can be anything that makes decisions, such as a person, firm, machine, or
piece of software. It carries out the action with the best outcome after
considering past and current percepts (the agent's perceptual inputs at a given
instant). An AI system is composed of an agent and its environment. The agents
act in their environment, and the environment may contain other agents.
An agent is anything that can be viewed as:
 perceiving its environment through sensors and
 acting upon that environment through actuators
Note: Every agent can perceive its own actions (but not always the effects)
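
A minimal sketch of this sense-act loop in Python. The Environment class, its
get_percept() and execute() methods, and the toy "counter" task are hypothetical,
introduced only to show how sensors, the agent, and actuators fit together:

    class Environment:
        """A toy environment: a counter the agent can push toward a target."""
        def __init__(self):
            self.value = 0

        def get_percept(self):
            # What the agent's sensors observe at this instant.
            return {"value": self.value}

        def execute(self, action):
            # The effect of the agent's actuators on the environment.
            if action == "increment":
                self.value += 1


    class Agent:
        def program(self, percept):
            # Decide on an action from the current percept.
            return "increment" if percept["value"] < 3 else "stop"


    env, agent = Environment(), Agent()
    for _ in range(5):
        percept = env.get_percept()      # sense through sensors
        action = agent.program(percept)  # decide
        env.execute(action)              # act through actuators
        print(percept, "->", action)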

To understand the structure of intelligent agents, we should be familiar with
architecture and agent programs. Architecture is the machinery that the agent
executes on. It is a device with sensors and actuators, for example a robotic car,
a camera, or a PC. An agent program is an implementation of an agent
function. An agent function is a map from the percept sequence (the history of
everything the agent has perceived to date) to an action.

Agent = Architecture + Agent Program
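
The distinction between the agent function (a map from percept sequences to
actions) and the agent program (its implementation) can be illustrated with a
table-driven sketch in Python. The percepts and table entries below are
hypothetical; for realistic problems such a table would be far too large to store,
as noted later in this unit.

    def make_table_driven_agent(table):
        percepts = []                          # percept history seen so far
        def program(percept):
            percepts.append(percept)
            # The agent function: percept sequence -> action.
            return table.get(tuple(percepts), "no_op")
        return program

    # Hypothetical table for a two-square vacuum world: (location, status) percepts.
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }

    agent_program = make_table_driven_agent(table)
    print(agent_program(("A", "Clean")))   # -> Right
    print(agent_program(("B", "Dirty")))   # -> Suck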

Examples of Agent:
 A software agent has keystrokes, file contents, and received network packets
acting as sensors, and screen displays, files, and sent network packets acting as
actuators.
 A human agent has eyes, ears, and other organs which act as sensors, and
hands, legs, mouth, and other body parts acting as actuators.

 A robotic agent has cameras and infrared range finders which act as
sensors and various motors acting as actuators.

The structure of intelligent agents

Characteristics of intelligent agents

Intelligent agents have the following distinguishing characteristics:

 They have some level of autonomy that allows them to perform certain
tasks on their own.
 They have a learning ability that enables them to learn even as tasks are
carried out.
 They can interact with other entities such as agents, humans, and systems.
 New rules can be accommodated by intelligent agents incrementally.
 They exhibit goal-oriented habits.
 They are knowledge-based. They use knowledge regarding
communications, processes, and entities.

The IA structure consists of three main parts: architecture, agent function, and
agent program.

1. Architecture: This refers to machinery or devices that consist of
actuators and sensors. The intelligent agent executes on this machinery.
Examples include a personal computer, a car, or a camera.
2. Agent function: This is the mapping from a percept sequence to an
action. A percept sequence is the history of everything the intelligent
agent has perceived.
3. Agent program: This is an implementation or execution of the agent
function. The agent function is produced through the agent program’s
execution on the physical architecture (environment).

PAGE

The Principles on Artificial Intelligence (referred to here as PAGE) are the
OECD's AI Principles, adopted in May 2019 by the OECD member countries
and several non-member countries. These principles are designed to promote
the innovative and trustworthy development and application of AI while
respecting human rights and democratic values. They serve as a global standard
to guide governments, organizations, and developers in the ethical and
responsible use of AI technologies.
The OECD AI Principles consist of five values-based principles for the
responsible stewardship of trustworthy AI and five recommendations for
policymakers. The five values-based principles include:

1. Inclusive Growth, Sustainable Development, and Well-Being: AI
should benefit people and the planet by driving inclusive growth,
sustainable development, and well-being.
2. Human-Centered Values and Fairness: AI systems should be designed
in a way that respects human rights, dignity, and autonomy, ensuring
fairness and avoiding bias.
3. Transparency and Explainability: The operations and outcomes of AI
systems should be transparent, and stakeholders should have a clear
understanding of how decisions are made.
4. Robustness, Security, and Safety: AI systems must function reliably
and securely throughout their lifecycle, with measures in place to ensure
safety and mitigate risks.
5. Accountability: Organizations and individuals responsible for AI
systems should be accountable for their proper functioning and adherence
to these principles.

Additionally, the five recommendations for policymakers include:

1. Investing in AI research and development: Encourage investments in
AI R&D to drive innovation while considering ethical and societal
impacts.
2. Fostering a digital ecosystem for AI: Support policies that enable the
development of AI infrastructure and data ecosystems.
3. Building human capacity and preparing for labor market
transformation: Enhance education and training to prepare the
workforce for changes brought by AI technologies.
4. International cooperation for trustworthy AI: Promote global
collaboration to address AI challenges and opportunities collectively.
5. Developing AI standards and regulations: Establish clear guidelines
and standards to govern AI development and deployment responsibly.

These principles are intended to ensure that AI technologies are used in ways
that are beneficial, fair, and transparent, promoting trust and accountability in
AI systems across different sectors and countries.

Properties of IA

An intelligent agent is a system that perceives its environment, processes this
information, and takes actions to achieve specific goals. The properties of an
intelligent agent include:

1. Autonomy: An intelligent agent operates without direct human
intervention, making decisions and taking actions independently based on
its perception of the environment and internal state.
2. Reactivity: An intelligent agent responds to changes in its environment
in a timely manner. It can perceive relevant stimuli and react accordingly
to maintain its performance and achieve its objectives.
3. Proactivity: Beyond merely reacting to the environment, an intelligent
agent exhibits goal-directed behavior. It takes the initiative to fulfill its
designed goals and can plan and execute actions to achieve long-term
objectives.
4. Social Ability: An intelligent agent can interact with other agents
(including humans) to share information, negotiate, and collaborate. This
property is crucial for multi-agent systems where agents need to work
together to solve complex problems.
5. Learning: An intelligent agent can improve its performance over time by
learning from experiences, adapting to new situations, and updating its
knowledge base. This involves using techniques from machine learning
and artificial intelligence to enhance decision-making.
6. Flexibility: An intelligent agent can operate in a variety of environments
and can adapt its behavior based on different contexts and conditions. It is
not limited to a single predefined set of actions.
7. Perception: An intelligent agent has the ability to sense its environment
through various inputs (e.g., sensors, data feeds). This perception is
crucial for understanding the current state of the environment and making
informed decisions.
8. Rationality: An intelligent agent acts in a way that is expected to achieve
its goals, given its knowledge and resources. Rationality implies making
decisions that maximize the expected outcome based on the agent's
objectives and information.
9. Persistence: An intelligent agent continues to pursue its goals over time,
even when faced with obstacles or changes in the environment. It exhibits
perseverance and resilience in achieving its objectives.

These properties enable intelligent agents to perform a wide range of tasks in
diverse applications, from autonomous vehicles and personal assistants to
complex systems in healthcare, finance, and robotics.

Configuration of an Intelligent Agent

The configuration of an intelligent agent involves several key components and
modules that collectively enable the agent to perceive its environment, make
decisions, and take actions. Here is a detailed breakdown of the main
components typically found in an intelligent agent:

1. Sensors:
o Perception Module: The sensors allow the agent to perceive its
environment by collecting data. This could include cameras,
microphones, GPS, temperature sensors, or any other type of input
device relevant to the agent's environment.
o Data Preprocessing: Raw sensor data is often noisy or incomplete.
Preprocessing includes filtering, normalization, and extraction of
relevant features to prepare the data for further processing.
2. Effectors:
o Actuation Module: Effectors are the components that allow the
agent to take actions in the environment. These could include
motors, displays, speakers, or any other output device that the
agent can control to affect the environment.
3. Knowledge Base:
o World Model: A representation of the environment, including
static and dynamic information. This model helps the agent
understand the context in which it operates.
o Domain Knowledge: Specific information about the tasks the
agent is designed to perform, including rules, constraints, and
historical data.
4. Decision-Making Module:
o Goal Management: Defines the objectives the agent aims to
achieve. This includes both short-term and long-term goals.
o Planning: Generates a sequence of actions to achieve the agent’s
goals. Planning algorithms can range from simple rule-based
systems to complex methods like A* or probabilistic planners.
o Reasoning: Involves logical inference and decision-making
processes that help the agent determine the best course of action
based on its knowledge base and current perceptions.
5. Learning Module:
o Machine Learning Algorithms: Techniques such as supervised
learning, unsupervised learning, reinforcement learning, and neural
networks that allow the agent to learn from experience and adapt
its behavior over time.
o Training Data: Historical data and experiences that the agent uses
to improve its models and decision-making processes.
6. Communication Module:
o Inter-Agent Communication: Protocols and interfaces for
communicating with other agents, either to share information or to
coordinate actions.
o Human-Agent Interaction: Interfaces for interacting with human
users, including natural language processing for understanding and
generating human language.
7. Control Architecture:
o Reactive Layer: Handles immediate responses to environmental
changes. This layer often includes simple, rule-based systems for
rapid reaction.
o Deliberative Layer: Manages long-term planning and decision-
making, using more complex reasoning and planning algorithms.
o Hybrid Systems: Combines reactive and deliberative approaches
to balance quick responses with thoughtful planning (a minimal
sketch follows this list).
8. Monitoring and Evaluation:
o Performance Metrics: Criteria for evaluating the agent’s success
in achieving its goals and performing tasks efficiently.
o Feedback Loop: Mechanisms for receiving feedback from the
environment and adjusting behavior accordingly.
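
The reactive/deliberative split mentioned under "Control Architecture" above can
be illustrated with a minimal hybrid-control sketch in Python. The percept
format, the braking rule, and the toy planner are hypothetical simplifications
rather than a standard implementation:

    class HybridAgent:
        def __init__(self, goal):
            self.goal = goal
            self.plan = []                 # actions produced by the deliberative layer

        def reactive_layer(self, percept):
            # Immediate, rule-based response to urgent situations.
            if percept.get("obstacle_ahead"):
                return "brake"
            return None

        def deliberative_layer(self, percept):
            # Longer-term planning: (re)plan when no plan is available.
            if not self.plan:
                self.plan = self.make_plan(percept["position"], self.goal)
            return self.plan.pop(0) if self.plan else "wait"

        def make_plan(self, start, goal):
            # Placeholder planner: step one unit at a time toward the goal.
            return ["move_forward"] * max(goal - start, 0)

        def act(self, percept):
            # The reactive layer takes priority over the deliberative layer.
            return self.reactive_layer(percept) or self.deliberative_layer(percept)

    agent = HybridAgent(goal=3)
    print(agent.act({"obstacle_ahead": False, "position": 0}))  # move_forward
    print(agent.act({"obstacle_ahead": True, "position": 1}))   # brake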

An intelligent agent’s configuration will vary based on its specific application
and the complexity of the tasks it needs to perform. However, these core
components provide a framework for designing and implementing a wide range
of intelligent agents, from simple bots to advanced autonomous systems.

PEAS Description of an IA

The PEAS framework (Performance measure, Environment, Actuators, Sensors)
is a useful tool for specifying the task environment of an intelligent agent.
Here’s how it can be applied to describe an intelligent agent:

1. Performance Measure

The performance measure defines the criteria for evaluating the success of the
agent’s actions. These criteria are specific to the agent’s goals and can include
various metrics depending on the application.

Examples:

 Accuracy in task completion


 Efficiency in resource utilization (time, energy, etc.)
 User satisfaction
 Safety and reliability
 Achievement of specific objectives (e.g., delivery times, error rates)

2. Environment

The environment is the context within which the agent operates. It includes
everything the agent can interact with, and it can vary significantly based on the
type of agent.
Examples:

 Physical world (for robots, autonomous vehicles)


 Virtual environments (for software agents, game bots)
 Mixed environments (for augmented reality agents)
 Specific domains (e.g., a smart home, a hospital, a stock market)

3. Actuators

Actuators are the components that allow the agent to take actions and affect the
environment. These vary based on whether the agent is a physical entity or a
software system.

Examples:

 Motors and servos (for robots)


 Display screens and speakers (for virtual assistants)
 Network interfaces (for software agents)
 Manipulators and effectors (for industrial robots)

4. Sensors

Sensors are the components that allow the agent to perceive its environment.
They provide the necessary input data for the agent to understand and react to
its surroundings.

Examples:

 Cameras, LIDAR, RADAR (for autonomous vehicles)


 Microphones (for voice-activated assistants)
 Temperature, humidity, and light sensors (for smart home systems)
 Network monitoring tools (for cybersecurity agents)
Example: Autonomous Vehicle

Performance Measure

 Safety (accidents avoided)


 Efficiency (fuel consumption, travel time)
 Passenger comfort and satisfaction
 Compliance with traffic laws

Environment

 Urban streets, highways, rural roads


 Varying weather conditions (rain, snow, fog)
 Dynamic elements (other vehicles, pedestrians, cyclists)
 Traffic signals and road signs

Actuators

 Steering mechanism
 Accelerators and brakes
 Indicator lights
 Horn

Sensors

 Cameras
 LIDAR and RADAR systems
 Ultrasonic sensors
 GPS and inertial measurement units
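
One lightweight way to record a PEAS description in code is as a simple data
structure. The sketch below is only an illustration, encoding the autonomous-
vehicle example above with a Python dataclass:

    from dataclasses import dataclass, field

    @dataclass
    class PEAS:
        performance_measure: list = field(default_factory=list)
        environment: list = field(default_factory=list)
        actuators: list = field(default_factory=list)
        sensors: list = field(default_factory=list)

    autonomous_vehicle = PEAS(
        performance_measure=["safety", "efficiency", "passenger comfort",
                             "compliance with traffic laws"],
        environment=["urban streets", "highways", "weather conditions",
                     "pedestrians", "traffic signals and road signs"],
        actuators=["steering", "accelerator", "brakes", "indicator lights", "horn"],
        sensors=["cameras", "LIDAR", "RADAR", "ultrasonic sensors", "GPS", "IMU"],
    )
    print(autonomous_vehicle)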

Example: Virtual Personal Assistant


Performance Measure

 Accuracy of responses
 User satisfaction and engagement
 Task completion rate (e.g., setting reminders, answering queries)
 Response time

Environment

 Digital platforms (smartphones, computers)


 Online services (email, calendars, weather updates)
 User’s personal data and preferences

Actuators

 Display screen
 Speakers
 Network interface (for sending emails, fetching information)

Sensors

 Microphone (for voice input)


 Keyboard and touchscreen inputs
 Camera (optional, for visual recognition)

Using the PEAS framework helps in designing and understanding the specific
requirements and capabilities of an intelligent agent, ensuring that all aspects of
its operation are considered and integrated effectively.

Categories of intelligent agents


Intelligent agents can be categorized based on their design, capabilities, and
application areas. Here are the main categories of agents. These categories
highlight the diversity of intelligent agents and their applications across various
domains, from simple rule-based systems to complex autonomous and
collaborative entities. Each type of agent is suited to specific tasks and
environments, making them versatile tools in the field of artificial intelligence.

Simple reflex agents

Simple reflex agents ignore the rest of the percept history and act only on the
basis of the current percept. Percept history is the history of all that an agent
has perceived to date. The agent function is based on the condition-action
rule. A condition-action rule maps a state (i.e., a condition) to an action. If the
condition is true, the action is taken; otherwise it is not. This agent
function only succeeds when the environment is fully observable. For simple
reflex agents operating in partially observable environments, infinite loops are
often unavoidable. It may be possible to escape from infinite loops if the agent
can randomize its actions.
Example: A thermostat that turns on the heating when the temperature drops
below a certain threshold.
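
The thermostat example can be written directly as a condition-action rule that
looks only at the current percept. A minimal sketch (the 20-degree threshold and
percept format are arbitrary assumptions):

    def thermostat_agent(percept):
        # Condition-action rule on the current percept only; no percept history.
        if percept["temperature"] < 20.0:   # condition
            return "heating_on"             # action
        return "heating_off"

    print(thermostat_agent({"temperature": 18.5}))   # heating_on
    print(thermostat_agent({"temperature": 22.0}))   # heating_off
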
Problems with simple reflex agents are:

 Very limited intelligence.
 No knowledge of the non-perceptual parts of the state.
 The condition-action rule table is usually too big to generate and store.
 If any change occurs in the environment, the collection of rules needs to
be updated.
Model-based reflex agents

A model-based reflex agent works by finding a rule whose condition matches
the current situation. It can handle partially observable environments by using
a model of the world. The agent has to keep track of an internal state, which is
adjusted by each percept and depends on the percept history. The current state
is stored inside the agent, which maintains some kind of structure describing
the part of the world that cannot be seen.
Example: A robot vacuum cleaner that keeps a map of the rooms it has cleaned
and plans its path accordingly.
Updating the state requires information about :

 how the world evolves independently from the agent, and
 how the agent’s actions affect the world.
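
A minimal sketch of a model-based reflex agent, in the spirit of the robot-vacuum
example above. The percept format and the internal "model" (a set of cells known
to be clean) are hypothetical simplifications:

    class ModelBasedVacuum:
        def __init__(self):
            self.cleaned = set()            # internal model: cells known to be clean

        def update_state(self, percept):
            # How the latest percept changes the internal model.
            if percept["status"] == "Clean":
                self.cleaned.add(percept["location"])

        def act(self, percept):
            self.update_state(percept)
            if percept["status"] == "Dirty":
                return "Suck"
            # Use the model to head toward a cell not yet known to be clean.
            for cell in ("A", "B", "C"):
                if cell not in self.cleaned:
                    return f"MoveTo({cell})"
            return "Stop"

    agent = ModelBasedVacuum()
    print(agent.act({"location": "A", "status": "Dirty"}))   # Suck
    print(agent.act({"location": "A", "status": "Clean"}))   # MoveTo(B)
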
Goal-based agents

These kinds of agents make decisions based on how far they currently are from
their goal (a description of desirable situations). Every action is intended to
reduce the distance from the goal. This gives the agent a way to choose among
multiple possibilities, selecting the one that reaches a goal state. The knowledge
that supports its decisions is represented explicitly and can be modified, which
makes these agents more flexible. They usually require search and planning,
and a goal-based agent's behaviour can easily be changed.
Example: A navigation system that finds the shortest path to a destination.
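
The navigation example can be sketched as a goal-based agent that searches for a
sequence of actions (here, road segments) leading to the goal state. The road map
below is made up purely for illustration, and breadth-first search stands in for
more sophisticated planning:

    from collections import deque

    def shortest_path(graph, start, goal):
        # Breadth-first search: returns a list of nodes from start to goal.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path
            for neighbour in graph.get(node, []):
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append(path + [neighbour])
        return None

    road_map = {
        "Home": ["A", "B"],
        "A": ["C"],
        "B": ["C", "Office"],
        "C": ["Office"],
    }
    print(shortest_path(road_map, "Home", "Office"))  # ['Home', 'B', 'Office']
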
Utility-based agents

Utility-based agents choose actions based on a preference (utility) for each
state. When there are multiple possible alternatives, a utility-based agent is
used to decide which one is best. Sometimes achieving the desired goal is not
enough: we may look for a quicker, safer, or cheaper trip to reach a destination.
The agent's happiness should be taken into consideration, and utility describes
how “happy” the agent is. Because of the uncertainty in the world, a utility
agent chooses the action that maximizes the expected utility. A utility function
maps a state onto a real number that describes the associated degree of
happiness.
Example: An autonomous trading agent that makes investment decisions to
maximize profit.
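
Maximizing expected utility can be sketched as follows; the candidate actions,
outcome probabilities, and utility values are entirely made up for illustration:

    def expected_utility(outcomes):
        # outcomes: list of (probability, utility) pairs for one action.
        return sum(p * u for p, u in outcomes)

    def choose_action(action_models):
        # Pick the action whose expected utility is highest.
        return max(action_models, key=lambda a: expected_utility(action_models[a]))

    # Hypothetical trip options: each action has (probability, utility) outcomes.
    action_models = {
        "highway":    [(0.9, 80), (0.1, -100)],  # usually fast, small risk of a jam
        "back_roads": [(1.0, 60)],               # slower but predictable
    }
    print(choose_action(action_models))          # highway (expected utility 62 vs 60)
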
Learning Agent:
A learning agent in AI is an agent that can learn from its past experiences; in
other words, it has learning capabilities. It starts out acting with basic
knowledge and then adapts automatically through learning.
A learning agent has mainly four conceptual components, which are:

1. Learning element: It is responsible for making improvements by learning
from the environment.
2. Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting actions
that will lead to new and informative experiences.
Example: A recommendation system that improves its suggestions based on
user feedback.
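
A skeleton of the four components, sketched in Python. The learning problem
here (estimating the average reward of two hypothetical actions and preferring
the better one) is only an illustration of how the pieces fit together:

    import random

    class LearningAgent:
        def __init__(self, actions):
            self.estimates = {a: 0.0 for a in actions}   # learned knowledge
            self.counts = {a: 0 for a in actions}

        def performance_element(self):
            # Select an external action using what has been learned so far.
            return max(self.estimates, key=self.estimates.get)

        def critic(self, reward):
            # Feedback on how well the agent is doing (here: the raw reward).
            return reward

        def learning_element(self, action, feedback):
            # Improve the estimates using the critic's feedback.
            self.counts[action] += 1
            n = self.counts[action]
            self.estimates[action] += (feedback - self.estimates[action]) / n

        def problem_generator(self):
            # Occasionally suggest an exploratory action for new experiences.
            return random.choice(list(self.estimates)) if random.random() < 0.2 else None

    agent = LearningAgent(["route_1", "route_2"])
    for _ in range(100):
        action = agent.problem_generator() or agent.performance_element()
        reward = 1.0 if action == "route_2" else 0.5      # hypothetical environment
        agent.learning_element(action, agent.critic(reward))
    print(agent.performance_element())                     # usually route_2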

Agent Environment in AI

An environment is everything in the world that surrounds the agent but is not
part of the agent itself. An environment can be described as the situation in
which an agent is present.

The environment is where the agent lives and operates; it provides the agent
with something to sense and act upon. An environment is mostly said to be
non-deterministic.

Features of Environment

An environment can have various features from the point of view of an agent:

1. Fully observable vs Partially Observable


2. Static vs Dynamic
3. Competitive vs Collaborative
4. Discrete vs Continuous
5. Deterministic vs Stochastic
6. Single-agent vs Multi-agent
7. Episodic vs sequential
8. Known vs Unknown
9. Accessible vs Inaccessible

1. Fully observable vs Partially Observable:
o If an agent's sensors can sense or access the complete state of the
environment at each point in time, then it is a fully observable
environment; otherwise it is partially observable.
o A fully observable environment is easy, as there is no need to maintain an
internal state to keep track of the history of the world.
o If an agent has no sensors in an environment, then such an environment is
called unobservable.
o Example:
o Chess – the board is fully observable, so are the opponent’s
moves
o Driving – the environment is partially observable because what’s
around the corner is not known.

2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine
the next state of the environment, then such environment is called a
deterministic environment.
o A stochastic environment is random in nature and cannot be determined
completely by an agent.
o In a deterministic, fully observable environment, agent does not need to
worry about uncertainty.
o Example:
Chess – there are only a limited number of possible moves for a piece in
the current state, and these moves can be determined.
Self-driving cars – the outcomes of a self-driving car's actions are not
unique; they vary from time to time.

3. Competitive vs Collaborative:
o An agent is said to be in a competitive environment when it competes
against another agent to optimize the output.
o The game of chess is competitive as the agents compete with each other
to win the game which is the output.
o An agent is said to be in a collaborative environment when multiple
agents cooperate to produce the desired output.
o When multiple self-driving cars are found on the roads, they cooperate
with each other to avoid collisions and reach their destination which is
the output desired.

4. Single-agent vs Multi-agent
o If only one agent is involved in an environment, and operating by itself
then such an environment is called single agent environment.
o However, if multiple agents are operating in an environment, then such
an environment is called a multi-agent environment.
o The agent design problems in the multi-agent environment are different
from single agent environment.
o The game of football is multi-agent as it involves 11 players in each
team.

5. Static vs Dynamic:
o If the environment can change itself while an agent is deliberating then
such environment is called a dynamic environment else it is called a static
environment.
o Static environments are easy to deal with because the agent does not need
to keep looking at the world while deciding on an action.
o However, in a dynamic environment, the agent needs to keep looking at
the world before each action.
o Taxi driving is an example of a dynamic environment whereas Crossword
puzzles are an example of a static environment.

6. Discrete vs Continuous:
o If in an environment there are a finite number of percepts and actions that
can be performed within it, then such an environment is called a discrete
environment; otherwise it is called a continuous environment.
o A chess game comes under a discrete environment, as there is a finite
number of moves that can be performed.
o A self-driving car is an example of a continuous environment, as its
actions (driving, parking, etc.) cannot be enumerated.

7. Known vs Unknown
o Known and unknown are not actually features of the environment itself,
but of the agent's state of knowledge needed to perform an action.
o In a known environment, the results of all actions are known to the agent,
while in an unknown environment the agent needs to learn how it works
in order to perform an action.
o It is quite possible for a known environment to be partially observable
and for an unknown environment to be fully observable.

8. Accessible vs Inaccessible
o If an agent can obtain complete and accurate information about the state
of the environment, then such an environment is called an accessible
environment; otherwise it is called inaccessible.
o An empty room whose state can be defined by its temperature is an
example of an accessible environment.
o Information about an event on earth is an example of Inaccessible
environment.

9. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only
the current percept is required for the action.
o However, in Sequential environment, an agent requires memory of past
actions to determine the next best actions.
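
These features can be recorded per task in a small summary structure. The
sketch below restates, for the chess and taxi-driving examples discussed above,
the classifications given in this section (the values are simplified labels, not an
exhaustive analysis):

    environments = {
        "chess": {
            "observable": "fully",
            "deterministic": True,
            "agents": "multi (competitive)",
            "dynamic": False,      # static: the board does not change while deliberating
            "discrete": True,
            "episodic": False,     # sequential: earlier moves affect later ones
        },
        "taxi driving": {
            "observable": "partially",
            "deterministic": False,   # stochastic
            "agents": "multi",
            "dynamic": True,
            "discrete": False,        # continuous
            "episodic": False,
        },
    }
    for task, features in environments.items():
        print(task, features)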

What is PEAS Representation?

The PEAS framework is used to provide a high-level description of agents.
PEAS stands for:

 P = Performance Measure
 E = Environment
 A = Actuators
 S = Sensors
It is a valuable method for creating and evaluating intelligent systems. Now, let
us see how the PEAS representation can be implemented in self-driving cars.

 Performance Measure:

The performance measure for a self-driving car is typically a blend
of safety, time, legal regulations, and passenger comfort. Safety is the
most crucial performance metric, as the car must prevent accidents. Next,
the time factor is vital since the agent must respond quickly to situations
(timely application of brakes, the opening of airbags, etc.). It must also
follow traffic signals and drive legally. Also, it is expected for the agent
to ensure passenger comfort by maintaining optimal internal atmospheric
conditions as well as minimizing the effects of vibrations and shocks.

 Environment:
A self-driving car's environment refers to the exterior circumstances in
which it operates. It must be capable of driving on a variety of terrains
(hilly roads) or road conditions (wet surfaces). Traffic signals, road
signs (speed limits, exits, bends, etc.), pedestrians, and other vehicles on
the road also influence driving conditions.
 Actuators:
Actuators enable self-driving cars to interact with the environment. The
steering wheel, accelerator, brake pedals, indicators, and horn
are examples of actuators. The steering wheel is an important actuator as
it enables the car to move and change directions. The accelerator and
brake pedals are also significant since they allow control of the car's
speed. Moreover, indicators and the horn are important as they will
enable the car to communicate lane changes to drivers or pedestrians.
 Sensors:
Sensors are essential for self-driving cars to sense their
environment. Cameras, GPS, speedometers, accelerometers, and
sonars are examples of sensors. Cameras are especially important as they
allow the car to detect objects such as other vehicles, pedestrians, and
traffic signs. Another essential sensor is the GPS, which assists the car in
determining its location and planning its route. The speedometer also
monitors the vehicle's speed, while the accelerometer measures its
acceleration. Moreover, the sonar detects items in the vicinity of the car
using sound waves, allowing the car to drive the road safely and quickly,
particularly for identifying objects outside the camera's range.

Examples of Agents

Agents in AI can be considered in three different categories.

1. Human agent: A human agent possesses sensors in the form of eyes,
ears, etc., while the hands, legs, mouth, and other bodily parts act as
actuators.
2. Software agent: A software agent is a pre-programmed agent that can
display values on the screen, accept inputs, and store data. Keystrokes for
typed inputs and microphone inputs for speech can be examples of
sensors while responding with information to voice commands or a
screen displaying a file can be considered actuators.
3. Robotic agent: A robotic agent is configured with several sensors to
execute tasks in the environment. Cameras are used as sensors, while
motors and links work as actuators to perform actions.

Driverless Cars

 Performance Measure: The measure for driverless cars is safe navigation
and efficient route planning, ensuring passenger safety and timely
arrivals.
 Environment: The environment includes roads, traffic patterns,
pedestrians, and weather conditions, which the car must interact with
while navigating.
 Actuators: Actuators consist of steering, acceleration, and braking
systems that execute the car's movements as directed by its AI
algorithms.
 Sensors: Sensors, such as cameras, LiDAR, GPS, and radar, collect real-
time data about the car's surroundings, enabling it to perceive and
respond to the environment.

Virtual Personal Assistants

 Performance Measure: Virtual assistants aim for accurate responses, task
completion, and user satisfaction as performance indicators.
 Environment: The environment encompasses user queries and internet
resources where virtual assistants source information.
 Actuators: Text-to-speech conversion and displays are actuators that
allow virtual assistants to communicate and provide information to users.
 Sensors: Microphones and cameras serve as sensors, gathering data about
user queries and contextual cues to tailor responses effectively.

Medical Diagnosis AI

 Performance Measure: The accuracy of diagnoses, minimizing false
positives and negatives, is the performance measure for medical diagnosis AI.
 Environment: The environment includes patient data and medical
knowledge, providing the context within which the AI makes diagnostic
recommendations.
 Actuators: Actuators generate reports and recommendations that assist
medical professionals in decision-making.
 Sensors: Sensors collect patient records and lab results, supplying the
data required for the AI to make accurate diagnostic assessments.

Playing soccer

 Performance Measure: scoring goals, defending, speed

 Environment: playground, teammates, opponents, ball

 Actuators: body, dribbling, tackling, passing the ball, shooting

 Sensors: camera, ball sensor, location sensor, other players locator

Exploring the subsurface oceans of Titan

 Performance Measure: safety, image quality, video quality

 Environment: ocean, water

 Actuators: mobile diver, steering, brake, accelerator

 Sensors: video, accelerometers, depth sensor, GPS

Shopping for used AI books on the Internet

 Performance Measure: price, quality, authors, book review

 Environment: web, vendors, shippers


 Actuators: fill-in the form, follow URL, display to the user

 Sensors: HTML

Playing a tennis match

 Performance Measure: winning

 Environment: playground, racquet, ball, opponent

 Actuators: ball, racquet, joint arm

 Sensors: ball locator, camera, racquet sensor, opponent locator

Practicing tennis against a wall

 Performance Measure: hit speed, hit accuracy

 Environment: playground, racquet, ball, wall

 Actuators: ball, racquet, joint arm

 Sensors: ball locator, camera, racquet sensor

Performing a high jump

 Performance Measure: safety, altitude

 Environment: wall

 Actuators: jumping apparatus


 Sensors: camera, height sensor

Knitting a sweater

 Performance Measure: size, looking, comfort

 Environment: craft, pattern

 Actuators: needles, yarn, jointed-arms

 Sensors: pattern sensor

Bidding on an item at an auction

 Performance Measure: cost, value, necessity, quality

 Environment: auctioneer, items, bidders

 Actuators: speaker, display items

 Sensors: camera, price monitor

Additional PEAS examples:

Hospital Management System

 Performance Measure: Patient's health, Admission process, Payment

 Environment: Hospital, Doctors, Nurses, Patients, Staff

 Actuators: Prescription, Diagnosis, Scan report

 Sensors: Symptoms, Patient's response

Automated Car Drive

 Performance Measure: Comfortable trip, Safety, Maximum distance

 Environment: Roads, Traffic, Vehicles

 Actuators: Steering wheel, Accelerator, Brake, Mirror

 Sensors: Camera, GPS, Odometer

Subject Tutoring

 Performance Measure: Maximize scores, Improvement in students

 Environment: Classroom, Desk, Chair, Board, Staff, Students

 Actuators: Smart displays, Corrections

 Sensors: Eyes, Ears, Notebooks

Part-picking robot

 Performance Measure: Percentage of parts in correct bins

 Environment: Conveyor belt with parts; bins

 Actuators: Jointed arms and hand

 Sensors: Camera, joint angle sensors

Satellite image analysis system

 Performance Measure: Correct image categorization

 Environment: Downlink from orbiting satellite

 Actuators: Display of scene categorization

 Sensors: Color pixel arrays

Vacuum cleaner

 Performance Measure: Cleanliness, security, battery

 Environment: Room, table, carpet, floors

 Actuators: Wheels, brushes

 Sensors: Camera, sensors

Chatbot system

 Performance Measure: Helpful responses, accurate responses

 Environment: Messaging platform, internet, website

 Actuators: Sender mechanism, typer

 Sensors: NLP algorithms

Autonomous vehicle

 Performance Measure: Efficient navigation, safety, time, comfort

 Environment: Roads, traffic, pedestrians, road signs

 Actuators: Brake, accelerator, steer, horn

 Sensors: Cameras, GPS, speedometer
