Ai 1: artificial intelligence unit 1
Uploaded by Vikram Nairy

UNIT - 1

2 MARKS QUESTIONS
1. Define AI.
Artificial Intelligence (AI) is the branch of computer science dedicated to creating
systems and machines capable of performing tasks that typically require human intelligence.
This includes activities such as learning, reasoning, problem-solving, perception, language
understanding, and decision-making.

2. Define AI with respect to “Thinking humanly”.


Artificial Intelligence (AI), with respect to "Thinking humanly", is the branch of computer
science and engineering that focuses on creating systems that mimic how humans think,
reason, and solve problems. To achieve this, the interdisciplinary field of cognitive science
brings together computer models from AI and experimental techniques from psychology to
construct precise and testable theories of the workings of the human mind.

3. Define AI with respect to “Thinking rationally”.


Artificial Intelligence (AI), with respect to thinking rationally, is the branch of computer
science and engineering that focuses on creating systems that can reason, draw inferences,
and make decisions in a logically consistent manner, similar to human rational thinking.
4. Define AI with respect to “Acting humanly”.
Artificial Intelligence (AI), with respect to “Acting humanly,” is the branch of computer
science and engineering that focuses on creating systems capable of performing tasks and
exhibiting behaviors indistinguishable from those of humans. This involves mimicking
human actions, interactions, and responses to provide a seamless, human-like experience in
various contexts.
5. Define AI with respect to “Acting rationally”.
Defining AI with respect to "Acting Rationally" refers to the rational agent approach,
which involves an agent that senses and acts. In this context, AI is about capturing
intelligence abstractly, focusing on rationality rather than modelling human behaviour.
According to Russell and Norvig's textbook "Artificial Intelligence: A Modern Approach",
AI can be classified into four categories, with "Acting Rationally" being one of them. This
approach deals with behaviour, where a system is considered rational if it does the "right
thing" given what it knows, unlike humans who can make mistakes.

6. What is Turing Test?


The Turing Test, proposed by Alan Turing in 1950, is a test of a machine's ability to exhibit
intelligent behaviour indistinguishable from that of a human. A human interrogator poses
written questions; the machine passes the test if the interrogator cannot tell whether the
written responses come from a person or from a machine. The test is significant in artificial
intelligence because it provides an operational definition of intelligent behaviour.

7. List capabilities needed by computers to pass Turing Test.

· Natural language processing to enable it to communicate successfully in English;

· Knowledge representation to store what it knows or hears;

· Automated reasoning to use the stored information to answer questions and to draw
new conclusions;

· Machine learning to adapt to new circumstances and to detect and extrapolate patterns.

8. Define cognitive modelling.


Cognitive modeling involves creating computer models that simulate human cognitive
processes to test theories about how the mind works. By comparing the behavior of these
models to human behavior, researchers can determine if the models accurately represent
human thought processes.

9. List any four foundations of AI.


1. Philosophy
2. Mathematics and statistics
3. Economics
4. Neuroscience
10. List any four Applications of AI.
1.AI in Astronomy
2.AI in Healthcare
3.AI in Gaming
4.AI in Education
11. Define percept and percept sequence.
A percept refers to the agent's perceptual inputs at any given instant. A percept sequence is
the complete history of everything the agent has ever perceived.
12. Define performance measure.
A performance measure embodies the criterion for success of an agent's behaviour. When
an agent is plunked down in an environment, it generates a sequence of actions according to
the percepts it receives. This sequence of actions causes the environment to go through a
sequence of states.
13. List any four state of the art AI applications.
1. Natural Language Processing (NLP) and Conversational AI
2. Computer Vision and Image Recognition
3. Autonomous Vehicles
4. Healthcare AI and Medical Diagnosis

14. Define agent. Give an example.


An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
Example: A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth,
and other
body parts for actuators. A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators. A software agent receives keystrokes, file contents,
and network packets as sensory inputs and acts on the environment by displaying on the
screen, writing files, and sending network packets.

15. List various terminology associated with agent and environment.


Here are various terminologies associated with agents and environments:

Agent:
- Simple reflex agent
- Model-based reflex agent
- Goal-based agent
- Utility-based agent
- Learning agent

Environment:
- Fully observable environment
- Partially observable environment
- Deterministic environment
- Probabilistic environment
- Episodic environment
- Sequential environment
- Dynamic environment
- Static environment

Agent-Environment Interaction:
- Perception
- Action
- Sensor
- Actuator

16. Differentiate between agent function and agent program.


Agent Function:

● Definition: The agent function defines the mapping from percept histories to actions.
In simpler terms, it specifies what action the agent should take in response to a
sequence of percepts (observations or inputs).
● Nature: It is an abstract concept that describes the agent's behavior in terms of its
input-output mapping without detailing how this mapping is implemented.
● Example: For a vacuum-cleaning agent, the agent function could be to "clean any
dirty square it perceives".

Agent Program:

● Definition: The agent program is an actual implementation of the agent function. It is


the concrete representation of the agent's behavior in code or a specific algorithm.
● Nature: It is the actual software or hardware that embodies the agent function. It
takes inputs (percepts) and produces outputs (actions) as per the defined agent
function.
● Example: For the vacuum-cleaning agent, the agent program could be a piece of
software that uses sensors to detect dirty squares and actuators to move and clean
those squares.
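The distinction above can be made concrete with a short sketch in Python. The two-square world ("A" and "B") and all names here are illustrative assumptions, not the textbook's code:

```python
# Agent PROGRAM: concrete code implementing the abstract agent FUNCTION
# for a hypothetical two-square vacuum world.

def vacuum_agent_program(percept):
    """Map the current percept (location, status) to an action."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                   # clean the current square
    elif location == "A":
        return "Right"                  # move toward the other square
    else:
        return "Left"

# The agent FUNCTION it implements, written out as an explicit table:
#   ("A", "Dirty") -> "Suck"      ("A", "Clean") -> "Right"
#   ("B", "Dirty") -> "Suck"      ("B", "Clean") -> "Left"
```

The agent function is the abstract mapping (the table in the comment); `vacuum_agent_program` is just one of many possible programs that implement it.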

17. Define rational agents.


A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome. An agent is something that acts. Computer agents are
not mere programs, but they are expected to have the following attributes also:
1. Operating under autonomous control
2. Perceiving their environment
3. Persisting over a prolonged time period
4. Adapting to change
18. What is a task environment? Give an example.
We must think about task environments which are essentially the "problems" to which
rational agents are the "solutions."
Example : Specifying the task environment of a simple vacuum-cleaner agent:
Performance Measure:
- Maximizes cleanliness of the environment (percentage of clean cells).
- Minimizes time and energy used for cleaning.
Environment:
- A grid of cells with dirt and static obstacles.
- Bounded by walls.
Actuators:
- Movement in four directions (up, down, left, right).
- Vacuum to clean dirt in the current cell.
Sensors:
- Detects dirt in the current cell.
- Detects obstacles in adjacent cells.
- Detects boundaries of the environment.
19. Differentiate between omniscience and rationality.

Omniscience:
- Definition: Complete and infinite knowledge about all aspects of the environment and the outcomes of all possible actions.
- Knowledge scope: Knows the actual outcome of every action.
- Feasibility: Impractical and impossible to achieve in reality.
- Nature: Idealised and theoretical.

Rationality:
- Definition: The ability to make decisions that maximise performance based on available information and prior knowledge.
- Knowledge scope: Limited to the agent's percepts and prior knowledge.
- Feasibility: Practical and achievable.
- Nature: Realistic and applied in real-world scenarios.
20. Write PEAS for an automated taxi driver.
P (Performance measure): safe, fast, legal, comfortable trip; maximize profits.
E (Environment): roads, other traffic, pedestrians, customers.
A (Actuators): steering, accelerator, brake, signal, horn, display.
S (Sensors): cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard.

21. Write PEAS for a medical diagnosis system. (ref below table)
Agent Type: Medical diagnosis System
P (Performance measure): healthy patient, minimum costs, lawsuits.
E (Environment): patient, hospital, staff.
A (Actuators): Display questions, tests, diagnoses, treatments, referrals.
S (Sensors): Keyboard entry of symptoms, findings, patient's answers.
22. Write PEAS for a satellite image analysis system. (ref below table)
23. Write PEAS for a part-picking robot. (ref below table)
24. Write PEAS for a refinery controller. (ref below table)
25. Write PEAS for an Interactive English tutor. (ref below table)
26. Write PEAS for a Vacuum cleaner agent.
PEAS (Performance measure, Environment, Actuators, Sensors) description for a vacuum
cleaner agent:
● Performance measure: Cleanliness of the floor, efficient use of power, time taken to
clean, minimal noise.
● Environment: Varied floor types (carpet, tile, hardwood), obstacles (furniture, walls),
different room layouts.
● Actuators: Wheels for movement, vacuum suction, brushes, beater bar, dirt container.
● Sensors: Dirt sensors, cliff sensors, bump sensors, wheel encoders, cameras or
infrared sensors for navigation.
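Since a PEAS description is just structured data, it can be sketched, for illustration only, as a simple record; the field names and entries below are assumptions drawn from the list above:

```python
from collections import namedtuple

# A PEAS description grouped into one record (illustrative field names).
PEAS = namedtuple("PEAS", ["performance", "environment", "actuators", "sensors"])

vacuum_peas = PEAS(
    performance=["cleanliness", "power efficiency", "time taken", "minimal noise"],
    environment=["carpet", "tile", "hardwood", "obstacles", "room layouts"],
    actuators=["wheels", "vacuum suction", "brushes", "beater bar", "dirt container"],
    sensors=["dirt sensor", "cliff sensor", "bump sensor", "wheel encoders", "camera"],
)
```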
27. What are fully observable and partially observable Environments? Give an example.
*Fully observable Environment
If an agent's sensors give it access to the complete state of the environment at each point in
time, then we say that the task environment is fully observable.
A task environment is effectively fully observable if the sensors detect all aspects that are
relevant to the choice of action; relevance, in turn, depends on the performance measure.
Fully observable environments are convenient because the agent need not maintain any
internal state to keep track of the world.
Example: a chess game, where the entire board is visible to both players.
*Partially observable Environment
An environment might be partially observable because of noisy and inaccurate sensors or
because parts of the state are simply missing from the sensor data.
For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in
other squares, and an automated taxi cannot see what other drivers are thinking.

28. What are Deterministic and stochastic Environments? Give an example.


1. Deterministic Environment:
- In a deterministic environment, outcomes are entirely predictable given initial conditions
and rules governing the system. There is no randomness or uncertainty involved.
- Example: A pendulum swinging under ideal conditions (ignoring air resistance and other
external factors) follows deterministic laws of physics. If you know the initial position and
velocity, you can predict exactly where the pendulum will be at any future time.
2. Stochastic Environment:
- In a stochastic environment, outcomes are influenced by randomness or probability. Even
with known initial conditions, there is variability in the outcomes due to random factors or
events.
- Example: Stock market fluctuations are a classic example of a stochastic environment.
While economic principles and market trends influence stock prices, they are also affected by
unpredictable events like economic crises, geopolitical events, or natural disasters. These
factors introduce randomness into the outcomes.

29. What are Episodic and sequential Environments? Give an example.


In an episodic task environment, the agent's experience is divided into atomic episodes.
Each episode consists of the agent perceiving and then performing a single action, and the
next episode does not depend on the actions taken in previous episodes. Example: an agent
that spots defective parts on an assembly line bases each decision on the current part,
regardless of previous decisions.
In a sequential environment, the current decision could affect all future decisions.
Examples: chess and taxi driving.

30. What are Static and dynamic Environments? Give an example.


In a static environment, the conditions remain unchanged while an agent is deliberating. This
means the agent can take its time to decide on an action without worrying about changes in
the environment or the passage of time. An example of a static environment is a crossword
puzzle. Once the puzzle is presented, it remains the same until the agent (the person solving
it) makes changes by filling in the answers.
A dynamic environment changes while the agent is deliberating. The agent must continuously
monitor the environment and quickly decide on actions to adapt to these changes. An
example of a dynamic environment is taxi driving. The positions and actions of other
vehicles, pedestrians, and traffic signals constantly change, requiring the driving algorithm to
make rapid, real-time decisions to navigate safely.

31. What are Discrete and continuous Environments? Give an example.


Discrete vs. continuous: The discrete/continuous distinction can be applied to the state of the
environment, to the way time is handled, and to the percepts and actions of the agent. For
example, a discrete-state environment such as a chess game has a finite number of distinct
states. Chess also has a discrete set of percepts and actions. Taxi driving is a continuous-state
and continuous-time problem: the speed and location of the taxi and of the other vehicles
sweep through a range of continuous values and do so smoothly over time. Taxi-driving
actions are also continuous (steering angles, etc.).
32. What are Single agent and multiagent? Give an example.
In a single-agent environment, one agent operates by itself, for example an agent solving a
crossword puzzle. In a multiagent environment, other agents are present whose behaviour
affects the agent's performance measure: chess is a competitive two-agent environment, and
taxi driving is a partially cooperative multiagent environment.
33. Distinguish between fully and partially observable environments.

Fully observable environment:
- Definition: The agent's sensors give it access to the complete state of the environment at each point in time.
- Effectiveness: Effectively fully observable if the sensors detect all aspects relevant to the choice of action; relevance depends on the performance measure.
- Convenience: The agent need not maintain any internal state to keep track of the world.
- Example: A chess game, where all pieces are visible to the agent.

Partially observable environment:
- Definition: The agent's sensors do not give it access to the complete state of the environment at each point in time.
- Effectiveness: Sensors are noisy or inaccurate, or parts of the state are simply missing from the sensor data.
- Convenience: The agent must maintain some internal state to infer missing information or deal with uncertainty.
- Example: A vacuum agent with only a local dirt sensor cannot detect dirt in other squares; an automated taxi cannot perceive other drivers' intentions.

34. List the limitations of table driven agents.


The TABLE-DRIVEN-AGENT does do what we want: it implements the desired agent
function. However, it has serious limitations:
- The lookup table needs one entry for every possible percept sequence, so it becomes astronomically large.
- No physical agent has the space to store such a table, and no designer has the time to build it.
- The agent has no autonomy and cannot learn or adapt beyond what the table specifies.
The key challenge for AI is therefore to write programs that, to the extent possible, produce
rational behavior from a small amount of code rather than from a large number of table
entries.
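A minimal sketch of the table-driven scheme, assuming a toy two-square vacuum world, shows why the table blows up: every distinct percept sequence needs its own entry.

```python
# TABLE-DRIVEN-AGENT sketch: look up the entire percept sequence seen so
# far in a table. The table is a toy example; real tables grow
# exponentially with the length of the percept sequence.

percepts = []  # percept sequence observed so far

TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ...one entry per possible percept SEQUENCE, not per percept
}

def table_driven_agent(percept):
    percepts.append(percept)
    return TABLE.get(tuple(percepts), "NoOp")  # unlisted sequences do nothing
```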

35. List the four basic kinds of agent programs.


Simple reflex agents
Model-based reflex agents

Goal-based agents
Utility-based agents

36. What are the limitations of simple reflex agent?
Simple reflex agents select actions on the basis of the current percept only, ignoring the rest
of the percept history. They work only if the correct decision can be made from the current
percept alone, that is, only if the environment is fully observable. They have very limited
intelligence, and in partially observable environments they can easily get stuck in infinite
loops.


37. What is a utility function?
An agent's preferences between world states are captured by a utility function, which assigns
a single number to express the desirability of a state. Utilities are combined with outcome
probabilities of actions to give an expected utility for each action.
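As a small worked example (the probabilities and utilities below are made up for illustration), the expected utility of an action is the sum of the utilities of its outcome states weighted by their probabilities:

```python
# Expected utility: EU(a) = sum over outcomes s of P(s | a) * U(s).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Action A: 80% chance of a state worth 10, 20% chance of a state worth 0.
# Action B: a guaranteed state worth 7.
eu_a = expected_utility([(0.8, 10), (0.2, 0)])   # 8.0
eu_b = expected_utility([(1.0, 7)])              # 7.0
best = "A" if eu_a > eu_b else "B"               # the rational choice here is "A"
```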
38. Distinguish between goal based and utility based agents.

Goal-based agents:
- Definition: Aim to achieve a specific goal.
- Decision process: Plan and execute actions to reach the goal.
- Behaviour: Focus on reaching a specific end state.
- Flexibility: Less adaptable to changing circumstances.
- Example: A robot navigating a maze to reach the exit.

Utility-based agents:
- Definition: Aim to maximise overall utility.
- Decision process: Evaluate and choose actions based on utility.
- Behaviour: Consider multiple factors and trade-offs.
- Flexibility: More flexible and adaptive.
- Example: A self-driving car balancing safety and efficiency.

39. What are Model-based reflex agents?

● A model-based agent has two important factors:


○ Model: It is knowledge about "how things happen in the world," so it is
called a Model-based agent.
○ Internal State: It is a representation of the current state based on
percept history.

● These agents have the model, "which is knowledge of the world" and based
on the model they perform actions.

● Updating the agent state requires information about:


1. How the world evolves
2. How the agent's action affects the world.
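A minimal sketch, assuming a two-square vacuum world, of how a model-based reflex agent keeps and updates internal state; all names are illustrative, not textbook code:

```python
# Model-based reflex agent: maintains a belief about each square and
# updates it from percepts (how the world evolves) and from its own
# actions (how the agent's actions affect the world).

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: believed status of each square; None = unknown.
        self.model = {"A": None, "B": None}

    def step(self, percept):
        location, status = percept
        self.model[location] = status          # update belief from the percept
        if status == "Dirty":
            self.model[location] = "Clean"     # effect of our own Suck action
            return "Suck"
        if self.model["A"] != "Clean":
            return "Left"                      # go check a square not known clean
        if self.model["B"] != "Clean":
            return "Right"
        return "NoOp"                          # everything believed clean
```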

40. List the four conceptual components of a learning agent.


● A learning agent has four main conceptual components:
1. Learning element: It is responsible for making improvements by
learning from the environment.
2. Critic: The learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed
performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.
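How the four components fit together on each step can be sketched as follows; the component bodies here are trivial stand-ins for illustration, not a real implementation:

```python
# One step of a learning agent: critic -> learning element -> either the
# problem generator (explore) or the performance element (exploit).

def critic(percept):
    """Compare the percept against a fixed performance standard."""
    return 1.0 if percept == "goal" else -0.1   # feedback signal

def learning_agent_step(percept, knowledge, explore):
    feedback = critic(percept)                  # critic: how well are we doing?
    knowledge["score"] = knowledge.get("score", 0) + feedback  # learning element
    if explore:
        return "try-something-new"              # problem generator: informative experiment
    return "best-known-action"                  # performance element: external action
```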
Long Answer Questions (3,4,5,6 Marks)
1.Write a note on “Acting humanly, the Turing test approach”.
Acting humanly: The Turing Test approach
o Test proposed by Alan Turing in 1950
o The computer is asked questions by a human interrogator.
The computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or not. Programming a
computer to pass, the computer need to possess the following capabilities:
● Natural language processing to enable it to communicate successfully in English.
● Knowledge representation to store what it knows or hears
● Automated reasoning to use the stored information to answer questions and to
draw new conclusions
● Machine learning to adapt to new circumstances and to detect and extrapolate
patterns
To pass the complete Turing Test, the computer will need
● Computer vision to perceive the objects
● Robotics to manipulate objects and move about.

2.Explain “Thinking humanly: The cognitive modeling approach”


Thinking humanly: The cognitive modeling approach
We need to get inside actual working of the human mind:
● through introspection – trying to capture our own thoughts as they go by;
● through psychological experiments
Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver", tried
to compare the trace of its reasoning steps to traces of human subjects solving the same
problems. The interdisciplinary field of cognitive science brings together computer models
from AI and experimental techniques from psychology to construct precise and testable
theories of the workings of the human mind.
3. Explain “Acting rationally: The rational agent approach”
An agent is something that acts. Computer agents are not mere programs, but they are
expected to have the following attributes also :
● operating under autonomous control,
● perceiving their environment,
● persisting over a prolonged time period,
● adapting to change.
A rational agent is one that acts so as to achieve the best outcome.

4. Explain “Thinking rationally: The laws of thought approach”


The Greek philosopher Aristotle was one of the first to attempt to codify "right
thinking", that is, irrefutable reasoning processes. His syllogisms provided patterns for
argument structures that always yielded correct conclusions when given correct premises,
for example: "Socrates is a man; all men are mortal; therefore Socrates is mortal."
These laws of thought were supposed to govern the operation of the mind; their study
initiated the field called logic.
5. Briefly explain any five foundations of AI.
Economics
● How should we make decisions so as to maximize payoff?
● How should we do this when the payoff may be far in the future?
❖ The science of economics got its start in 1776, when Scottish philosopher Adam
Smith treated it as a science, using the idea that economies can be thought of as
consisting of individual agents maximizing their own economic well-being.
❖ Decision theory, which combines probability theory with utility theory, provides a
formal and complete framework for decisions (economic or otherwise) made under
uncertainty— that is, in cases where probabilistic descriptions appropriately capture
the decision maker’s environment.
Psychology:
● How do humans and animals think and act?
❖ The Behaviorism movement, led by John Watson(1878-1958).
❖ Behaviorists insisted on studying only objective measures of the percepts(stimulus)
given to an animal and its resulting actions(or response).
❖ Behaviorism discovered a lot about rats and pigeons but had less success at
understanding humans.
❖ Cognitive psychology, views the brain as an information processing device.
❖ A common view among psychologists is that a cognitive theory should be like a
computer program (Anderson, 1980), i.e., it should describe a detailed information
processing mechanism whereby some cognitive function might be implemented.
Neuroscience:
● How does the brain process information?
❖ Neuroscience is the study of the nervous system, particularly the brain.
❖ Aristotle wrote, "Of all the animals, man has the largest brain in
proportion to his size."
❖ Nicolas Rashevsky (1936, 1938) was the first to apply mathematical
models to the study of the nervous system.
❖ The measurement of intact brain activity began in 1929 with the
invention by Hans Berger of the electroencephalograph (EEG).
❖ The recent development of functional magnetic resonance imaging
(fMRI) is giving neuroscientists detailed images of brain activity, enabling
measurements that correspond in interesting ways to ongoing cognitive
processes
Control theory and cybernetics:
● How can artifacts operate under their own control?
❖ Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water
clock with a regulator that maintained a constant flow rate.
❖ This invention changed the definition of what an artifact could do.
❖ Modern control theory, especially the branch known as stochastic optimal control, has
as its goal the design of systems that maximize an objective function over time.
❖ This roughly matches our view of AI: designing systems that behave optimally.
❖ The tools of logical inference and computation allowed AI researchers to consider
problems such as language, vision, and planning that fell completely outside the
control theorist’s purview.
Computer engineering:
● How can we build an efficient computer?
❖ For artificial intelligence to succeed, we need two things: intelligence and an
artifact.
❖ The computer has been the artifact(object) of choice.
❖ The first operational computer was the electromechanical Heath Robinson, built in
1940 by Alan Turing's team for a single purpose: deciphering German messages.
❖ The first operational programmable computer was the Z-3, the invention of
Konrad Zuse in Germany in 1941.
❖ The first electronic computer, the ABC, was assembled by John Atanasoff and his
student Clifford Berry between 1940 and 1942 at Iowa State University.
❖ The first programmable machine was a loom, devised in 1805 by Joseph Marie
Jacquard (1752- 1834) that used punched cards to store instructions for the pattern
to be woven.
6. Explain any six AI applications.
7. Explain any six state of the art AI applications.
Autonomous control: The ALVINN computer vision system was trained to steer a car to
keep it following a lane. It was placed in CMU's NAVLAB computer-controlled minivan and
used to navigate across the United States.
Game playing: IBM's Deep Blue became the first computer program to defeat the world
champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an
exhibition match (Goodman and Keene, 1997).
Diagnosis: Medical diagnosis programs based on probabilistic analysis have been able to
perform at the level of an expert physician in several areas of medicine.
Autonomous planning and scheduling: A hundred million miles from Earth, NASA's
Remote Agent program became the first on-board autonomous planning program to control
the scheduling of operations for a spacecraft (Jonsson et al., 2000). Remote Agent generated
plans from high-level goals specified from the ground, and it monitored the operation of the
spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems
as they occurred.
Robotics: Many surgeons now use robot assistants in microsurgery. HipNav (DiGioia et al.,
1996) is a system that uses computer vision techniques to create a three-dimensional model
of a patient's internal anatomy and then uses robotic control to guide the insertion of a hip
replacement prosthesis.
Language understanding and problem solving: PROVERB (Littman et al., 1999) is a
computer program that solves crossword puzzles better than most humans, using constraints
on possible word fillers, a large database of past puzzles, and a variety of information
sources including dictionaries and online databases such as a list of movies and the actors
that appear in them.
8. Explain History of AI.
The gestation of artificial intelligence (1943-1955): There were a number of early examples
of work that can be characterized as AI, but it was Alan Turing who first articulated a
complete vision of AI in his 1950 article "Computing Machinery and Intelligence." Therein,
he introduced the Turing test, machine learning, genetic algorithms, and reinforcement
learning.
The birth of artificial intelligence (1956): McCarthy convinced Minsky, Claude Shannon,
and Nathaniel Rochester to help him bring together U.S. researchers interested in automata
theory, neural nets, and the study of intelligence. They organized a two-month workshop at
Dartmouth in the summer of 1956. Perhaps the longest-lasting thing to come out of the
workshop was an agreement to adopt McCarthy's new name for the field: Artificial
Intelligence.
Early enthusiasm, great expectations (1952-1969): The early years of AI were full of
successes, in a limited way.
General Problem Solver (GPS) was a computer program created in 1957 by Herbert Simon
and Allen Newell to build a universal problem-solver machine. The order in which the
program considered subgoals and possible actions was similar to that in which humans
approached the same problems. Thus, GPS was probably the first program to embody the
"thinking humanly" approach. At IBM, Nathaniel Rochester and his colleagues produced
some of the first AI programs. Herbert Gelernter (1959) constructed the Geometry Theorem
Prover, which was able to prove theorems that many students of mathematics would find
quite tricky. Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts
Institute of Technology (MIT). In 1963, McCarthy started the AI lab at Stanford. Tom
Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ
tests.
A dose of reality (1966-1973): From the beginning, AI researchers were not shy about
making predictions of their coming successes. The following statement by Herbert Simon in
1957 is often quoted: "It is not my aim to surprise or shock you, but the simplest way I can
summarize is to say that there are now in the world machines that think, that learn and that
create. Moreover, their ability to do these things is going to increase rapidly until, in a visible
future, the range of problems they can handle will be coextensive with the range to which the
human mind has been applied."
Knowledge-based systems: The key to power? (1969-1979) Dendral was an influential
pioneer project in artificial intelligence (AI) of the 1960s, and the computer software expert
system that it produced. Its primary aim was to help organic chemists in identifying unknown
organic molecules, by analyzing their mass spectra and using knowledge of chemistry. It was
done at Stanford University by Edward Feigenbaum, Bruce Buchanan, Joshua Lederberg, and
Carl Djerassi.
AI becomes an industry (1980-present): In 1981, the Japanese announced the "Fifth
Generation" project, a 10-year plan to build intelligent computers running Prolog. Overall,
the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988.
The return of neural networks (1986-present): Psychologists including David Rumelhart
and Geoff Hinton continued the study of neural-net models of memory.
AI becomes a science (1987-present): In recent years, approaches based on hidden Markov
models (HMMs) have come to dominate the area. Speech technology and the related field of
handwritten character recognition are already making the transition to widespread industrial
and consumer applications. The Bayesian network formalism was invented to allow efficient
representation of, and rigorous reasoning with, uncertain knowledge.
The emergence of intelligent agents (1995-present): One of the most important
environments for intelligent agents is the Internet.
9. Explain the concept of agents and environments with an example.
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators. This simple idea is illustrated in Figure 1.2.
● A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth,
and other body parts for actuators.
● A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
● A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.

10.Write a note on rationality and rational agent.


Rational Agent: A rational agent is one that does the right thing; conceptually speaking,
every entry in the table for the agent function is filled out correctly. Obviously, doing the
right thing is better than doing the wrong thing. The right action is the one that will cause the
agent to be most successful.
Rationality: What is rational at any given time depends on four things:
● The performance measure that defines the criterion of success.
● The agent's prior knowledge of the environment.
● The actions that the agent can perform.
● The agent's percept sequence to date.
This leads to a definition of a rational agent: For each possible percept sequence, a rational
agent should select an action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in knowledge the agent has.
11.How do you specify a task environment using PEAS description? Explain with an
example.
We must think about task environments, which are essentially the "problems" to which
rational agents are the "solutions." Specifying the rationality of the simple vacuum-cleaner
agent requires specification of:
● the performance measure
● the environment
● the agent's actuators and sensors.
PEAS: All of these are grouped together under the heading of the task environment. We call
this the PEAS (Performance, Environment, Actuators, Sensors) description. In designing an
agent, the first step must always be to specify the task environment as fully as possible.

12.Explain rationality with example.


Rationality: What is rational at any given time depends on four things:
● The performance measure that defines the criterion of success.
● The agent's prior knowledge of the environment.
● The actions that the agent can perform.
● The agent's percept sequence to date.
This leads to a definition of a rational agent: for each possible percept sequence, a rational
agent should select an action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in knowledge the agent has.
For example, the simple vacuum-cleaner agent is rational under these assumptions: the
performance measure awards one point for each clean square at each time step, the geography
of the environment is known, and the only available actions are Left, Right, and Suck. Under
these conditions, sucking when the current square is dirty and moving otherwise maximizes
the expected score.
13.Explain the various categories of environments with example.
Fully observable vs. partially observable. If an agent's sensors give it access to the
complete state of the environment at each point in time, then we say that the task
environment is fully observable. A task environment is effectively fully observable if the
sensors detect all aspects that are relevant to the choice of action. An environment might be
partially observable because of noisy and inaccurate sensors, or because parts of the state
are simply missing from the sensor data.
Deterministic vs. stochastic. If the next state of the environment is completely determined
by the current state and the action executed by the agent, then we say the environment is
deterministic; otherwise, it is stochastic.
Episodic vs. sequential. In an episodic task environment, the agent's experience is divided
into atomic episodes. Each episode consists of the agent perceiving and then performing a
single action. Crucially, the next episode does not depend on the actions taken in previous
episodes. For example, an agent that has to spot defective parts on an assembly line bases
each decision on the current part, regardless of previous decisions. In sequential
environments, on the other hand, the current decision could affect all future decisions.
Chess and taxi driving are sequential.
Discrete vs. continuous. The discrete/continuous distinction can be applied to the state of the
environment, to the way time is handled, and to the percepts and actions of the agent. For
example, a discrete-state environment such as a chess game has a finite number of distinct
states. Chess also has a discrete set of percepts and actions. Taxi driving is a
continuous-state and continuous-time problem: the speed and location of the taxi and of the
other vehicles sweep through a range of continuous values and do so smoothly over time.
Taxi-driving actions are also continuous (steering angles, etc.).
Single agent vs. multiagent. An agent solving a crossword puzzle by itself is clearly in a
single-agent environment, whereas an agent playing chess is in a two-agent environment. As
one might expect, the hardest case is partially observable, stochastic, sequential,
continuous, and multiagent.
14.Explain Agent programs with example.
The agent programs all have the same skeleton: they take the current percept as input from
the sensors and return an action to the actuators. Notice the difference between the agent
program, which takes the current percept as input, and the agent function, which takes the
entire percept history. The agent program takes just the current percept as input because
nothing more is available from the environment; if the agent's actions depend on the entire
percept sequence, the agent will have to remember the percepts.
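This skeleton can be sketched as a table-driven agent program: it remembers the full percept
sequence and looks the action up in a table keyed by that sequence. The table entries and the
two-square vacuum world below are illustrative assumptions, not from the text; real agents
avoid this design because the table grows impossibly large.

```python
# Minimal sketch of a table-driven agent program (illustrative names only).
# It appends each percept to the remembered history and looks up an action
# keyed by the entire percept sequence so far.

def make_table_driven_agent(table):
    percepts = []  # the remembered percept history

    def program(percept):
        percepts.append(percept)
        # Fall back to "NoOp" when the sequence is not in the table.
        return table.get(tuple(percepts), "NoOp")

    return program

# Hypothetical lookup table for a two-square vacuum world:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
```

Because the key is the whole history, calling the same agent twice with different percepts
consults different table entries, which is exactly why the table blows up for real problems.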
15.Explain simple reflex agent with a neat diagram.
The simplest kind of agent is the simple reflex agent. These agents select actions on the basis
of the current percept, ignoring the rest of the percept history. For example, the vacuum agent
is a simple reflex agent, because its decision is based only on the current location and on
whether that location contains dirt.
● Selects an action on the basis of only the current percept, e.g. the vacuum agent.
● Large reduction in possible percept/action situations.
● Implemented through condition-action rules, e.g. if dirty then suck.

Characteristics
● Only works if the environment is fully observable.
● Lacking history, it can easily get stuck in infinite loops.
● One solution is to randomize actions.
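The condition-action rules above can be sketched as a short program; this is a minimal
illustration of the two-square vacuum world, with square names "A"/"B" assumed for the example.

```python
# Sketch of a simple reflex vacuum agent: the decision uses only the
# current percept (location, status), never the percept history.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":       # condition-action rule: if dirty then suck
        return "Suck"
    elif location == "A":       # clean square A -> move to B
        return "Right"
    else:                       # clean square B -> move back to A
        return "Left"
```

Note that with clean squares the agent shuttles between A and B forever, illustrating the
infinite-loop weakness listed above.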
16.Briefly explain model based reflex agent with a diagram.
The most effective way to handle partial observability is for the agent to keep track of the
part of the world it can't see now. That is, the agent should maintain some sort of internal
state that depends on the percept history and thereby reflects at least some of the unobserved
aspects of the current state. Updating this internal state information as time goes by requires
two kinds of knowledge to be encoded in the agent program. First, we need some information
about how the world evolves independently of the agent-for example, that an overtaking car
generally will be closer behind than it was a moment ago. Second, we need some information
about how the agent's own actions affect the world-for example, that when the agent turns the
steering wheel clockwise, the car turns to the right or that after driving for five minutes
northbound on the freeway one is usually about five miles north of where one was five
minutes ago. This knowledge about "how the world works", whether implemented in simple
Boolean circuits or in complete scientific theories, is called a model of the world. An
agent that uses such a model is called a model-based agent.
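Both kinds of knowledge can be seen in a small sketch: the internal state tracks what each
square looked like the last time it was perceived, and the update after "Suck" encodes how
the agent's own action changes the world. The two-square world and all names are illustrative
assumptions.

```python
# Sketch of a model-based reflex vacuum agent. The internal state reflects
# unobserved parts of the world, so the agent acts sensibly even though each
# percept covers only its current square (partial observability).

def make_model_based_vacuum_agent():
    state = {"A": "Unknown", "B": "Unknown"}  # internal model of the world

    def program(percept):
        location, status = percept
        state[location] = status          # update model from the percept
        if status == "Dirty":
            state[location] = "Clean"     # effect of our own Suck action
            return "Suck"
        other = "B" if location == "A" else "A"
        if state[other] != "Clean":       # model: other square may need work
            return "Right" if location == "A" else "Left"
        return "NoOp"                     # model: everything known clean

    return program

agent = make_model_based_vacuum_agent()
```

Unlike the simple reflex agent, this one stops (returns "NoOp") once its model says both
squares are clean, instead of shuttling forever.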

17.Briefly explain goal based reflex agent with a diagram


Knowing about the current state of the environment is not always enough to decide what to
do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The
correct decision depends on where the taxi is trying to get to. In other words, as well as a
current state description, the agent needs some sort of goal information that describes
situations that are desirable, for example, being at the passenger's destination. The agent
program can combine this with information about the results of possible actions (the
same information as was used to update internal state in the reflex agent) in order to choose
actions that achieve the goal. Figure 1.13 shows the goal-based agent's structure.

18.Explain utility based agent with a diagram.


Goals alone are not really enough to generate high-quality behavior in most environments.
For example, there are many action sequences that will get the taxi to its destination (thereby
achieving the goal) but some are quicker, safer, more reliable, or cheaper than others. Goals
just provide a crude binary distinction between "happy" and "unhappy" states, whereas a
more general performance measure should allow a comparison of different world states
according to exactly how happy they would make the agent if they could be achieved.
Because "happy" does not sound very scientific, the customary terminology is to say that if
one world state is preferred to another, then it has higher utility for the agent.
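The idea can be sketched as a utility function that ranks goal-achieving alternatives. The
routes, travel times, risk numbers, and weights below are all illustrative assumptions; the
point is only that utility gives a graded comparison where goals give a binary one.

```python
# Sketch of a utility-based choice: every route reaches the destination
# (so the goal alone cannot distinguish them), but utility ranks them by
# trading off travel time against risk.

routes = [
    {"name": "highway",    "minutes": 20, "risk": 0.3},
    {"name": "back_roads", "minutes": 35, "risk": 0.1},
]

def utility(route, time_weight=1.0, risk_weight=100.0):
    # Higher utility = better; penalize both travel time and risk.
    return -(time_weight * route["minutes"] + risk_weight * route["risk"])

best = max(routes, key=utility)  # the most-preferred (highest-utility) route
```

With these weights the slower but safer route wins; changing the weights changes the
preference, which is how a performance measure can compare "how happy" different world
states would make the agent.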

19.Explain learning agent with a diagram


All agents can improve their performance through learning. A learning agent can be divided
into four conceptual components, as shown in Figure 1.15. The most important distinction is
between the learning element, which is responsible for making improvements, and the
performance element, which is responsible for selecting external actions. The performance
element is what we have previously considered to be the entire agent: it takes in percepts and
decides on actions. The learning element uses feedback from the critic on how the agent is
doing and determines how the performance element should be modified to do better in the
future. The last component of the learning agent is the problem generator. It is responsible
for suggesting actions that will lead to new and informative experiences. If the performance
element had its way, it would keep doing the actions that are best given what it already
knows; but if the agent is willing to explore a little, it might discover much better actions
for the long run. The problem generator's job is to suggest these exploratory actions. This
is what scientists do when they carry out experiments.
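The four components can be sketched together in a toy agent: the performance element picks
actions from learned values, the critic's feedback arrives as a reward, the learning element
updates the values, and the problem generator occasionally injects an exploratory action.
The class, the update rate, and the exploration probability are all illustrative assumptions.

```python
# Sketch of the learning-agent decomposition from the text.
import random

class LearningAgent:
    def __init__(self, actions, explore_prob=0.1):
        self.values = {a: 0.0 for a in actions}  # what has been learned so far
        self.explore_prob = explore_prob

    def act(self):
        # Problem generator: occasionally suggest an exploratory action.
        if random.random() < self.explore_prob:
            return random.choice(list(self.values))
        # Performance element: pick the best action given current knowledge.
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Learning element: move the action's value toward the critic's reward.
        self.values[action] += 0.5 * (reward - self.values[action])

# Demo with exploration disabled so the choice is deterministic:
agent = LearningAgent(["left", "right"], explore_prob=0.0)
agent.learn("right", 1.0)  # critic reports that "right" went well
```

After one piece of critic feedback the performance element already prefers "right"; with a
nonzero `explore_prob` it would sometimes try "left" anyway, which is the problem generator's
role.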
