
Artificial Intelligence

V SEM CSE
There are three ways to determine how humans think:
• Through introspection – trying to catch our own thoughts as they go by
• Psychological experiments – observing a person in action
• Brain imaging – observing the brain in action

Once we have a sufficiently precise theory of the mind, it becomes possible to
express the theory as a computer program.
If the program’s input/output behavior matches the corresponding human behavior,
that is evidence that some of the program’s mechanisms could also be operating
in humans.
Ex: GPS (General Problem Solver)
III. Thinking Rationally
• The laws of thought approach
• Thinking Rationally is the idea that thinking can be modeled as a logical process
where conclusions are drawn based on symbolic logic.
• Rational agents are theoretical entities that are used to model how humans think
and make decisions.
• The goal is to create systems that can solve problems and make decisions in a way
that is consistent with the principles of rational thinking.
• Rational agents are used in game theory and decision theory to help develop AI
that can mimic human behavior.
Two main obstacles:
• It is not easy to take informal knowledge and state it in the formal terms
required by logical notation, particularly when the knowledge is less than
100% certain.
• There is a big difference between solving a problem “in principle” and
solving it in practice.
IV. Acting Rationally
• It’s the Rational Agent Approach
• An agent is just something that acts
• Computer agents are expected to do more: operate autonomously, perceive their
environment, persist over a prolonged time period, adapt to change, and create and
pursue goals
• Acting Rationally in AI means acting to achieve one’s goals, given one’s beliefs
or understanding about the world.
• A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
• An agent is a system that perceives an environment and acts within that environment
• An intelligent agent is one that acts rationally with respect to its goals
• Two benefits:
• It is more general than the laws-of-thought approach, because correct inference
is just one of several possible mechanisms for achieving rationality.
• It is more amenable to scientific development than the approaches based on
human behavior or human thought.
THE STATE OF THE ART
What can AI do today?

• Robotic vehicles:
• A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert at
22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.
• STANLEY is a Volkswagen Touareg outfitted with cameras, radar, and laser rangefinders to
sense the environment and onboard software to command the steering, braking, and
acceleration.
• The following year CMU’s BOSS won the Urban Challenge, safely driving in traffic through the
streets of a closed Air Force base, obeying traffic rules and avoiding pedestrians and other
vehicles.
• Speech recognition:
• A traveler calling United Airlines to book a flight can have the entire conversation guided by an
automated speech recognition and dialog management system.
• Autonomous planning and scheduling:
• A hundred million miles from Earth, NASA’s Remote Agent program became the first on-board
autonomous planning program to control the scheduling of operations for a spacecraft.
• REMOTE AGENT generated plans from high-level goals specified from the ground and monitored the
execution of those plans—detecting, diagnosing, and recovering from problems as they occurred.
• Game playing:
• IBM’s DEEP BLUE became the first computer program to defeat the world champion in a chess match
when it bested Garry Kasparov (the Russian chess grandmaster) by a score of 3.5 to 2.5 in an exhibition
match.
• Spam fighting:
• Each day, learning algorithms classify over a billion messages as spam, saving the recipient from having
to waste time deleting what, for many users, could comprise 80% or 90% of all messages, if not
classified away by algorithms. Because the spammers are continually updating their tactics, it is difficult
for a static programmed approach to keep up, and learning algorithms work best.
• Logistics planning:
• During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool,
DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation.
• This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting points,
destinations, routes, and conflict resolution among all parameters. The AI planning techniques generated
in hours a plan that would have taken weeks with older methods. The Defense Advanced Research
Projects Agency (DARPA) stated that this single application more than paid back DARPA’s 30-year
investment in AI.
• Robotics:
• The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners for home use.
• The company also deploys the more rugged PackBot to Iraq and Afghanistan, where it is used to handle
hazardous materials, clear explosives, and identify the location of snipers.
• Machine Translation:
• A computer program automatically translates from Arabic to English, allowing an English speaker to
see the headline “Ardogan Confirms That Turkey Would Not Accept Any Pressure, Urging Them to
Recognize Cyprus.”
• The program uses a statistical model built from examples of Arabic-to-English translations and from
examples of English text totaling two trillion words (Brants et al., 2007). None of the computer
scientists on the team speak Arabic, but they do understand statistics and machine learning algorithms.
Intelligent Agents
AGENTS AND ENVIRONMENTS
• An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
• A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal
tract, and so on for actuators.
• A robotic agent might have cameras and infrared range finders for sensors and
various motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen, writing
files, and sending network packets.
• An agent’s choice of action at any given instant can depend on the entire percept
sequence observed to date, but not on anything it hasn’t perceived.
• The term percept refers to the agent’s perceptual inputs at any given instant.
• An agent’s percept sequence is the complete history of everything the agent has
ever perceived.
• In general, an agent’s choice of action at any given instant can depend on the
entire percept sequence observed to date, but not on anything it hasn’t perceived.
• An agent’s behavior is described by the agent function that maps any given
percept sequence to an action.
(Figure 2.1: Agents interact with environments through sensors and actuators.)
• The agent function for an artificial agent will be implemented by an agent program.
• The agent function is an abstract mathematical description; the agent program is a concrete
implementation, running within some physical system.
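As a hedged illustration of this distinction, the sketch below implements an agent function given explicitly as a lookup table over percept sequences; the returned closure is the agent program that realizes it. All names and the example table are hypothetical, not from the slides.

```python
# Minimal sketch: the agent function is an abstract mapping from percept
# sequences to actions; the agent program is the concrete code realizing it.
def make_table_driven_agent(table):
    """Return an agent program implementing the agent function given by 'table'."""
    percept_sequence = []  # complete history of everything perceived so far

    def program(percept):
        percept_sequence.append(percept)
        # Look up the action for the entire percept sequence observed to date.
        return table.get(tuple(percept_sequence), "NoOp")

    return program

# Hypothetical usage with a two-square vacuum world:
table = {(("A", "Dirty"),): "Suck", (("A", "Clean"),): "Right"}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # -> "Suck"
```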

• This particular world has just two locations: squares A and B. The vacuum agent
perceives which square it is in and whether there is dirt in the square. It can
choose to move left, move right, suck up the dirt, or do nothing.
• One very simple agent function is the following: if the current square is dirty,
then suck; otherwise, move to the other square.
(A vacuum-cleaner world with just two locations.)
• A partial tabulation of this agent function is shown in Figure 2.3.
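A minimal code sketch of that agent function (the square and action names are assumptions for illustration):

```python
# Simple vacuum agent function: suck if the current square is dirty,
# otherwise move to the other square.
def vacuum_agent(percept):
    location, status = percept      # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"              # move to square B
    else:
        return "Left"               # move back to square A

print(vacuum_agent(("B", "Clean")))  # -> "Left"
```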

• What is the right way to fill out the table?
• In other words, what makes an agent good or bad, intelligent or stupid?
GOOD BEHAVIOR: THE CONCEPT OF RATIONALITY

• A rational agent is one that does the right thing - every entry in the table for the agent
function is filled out correctly
• doing the right thing is better than doing the wrong thing, but what does it mean to do
the right thing?
• Example: When an agent is plunked down in an environment, it generates a sequence of
actions according to the percepts it receives. This sequence of actions causes the
environment to go through a sequence of states. If the sequence is desirable, then the
agent has performed well. This notion of desirability is captured by a performance
measure that evaluates any given sequence of environment states.
• There is not one fixed performance measure for all tasks and agents. As a general
rule, it is better to design performance measures according to what one actually wants in
the environment, rather than according to how one thinks the agent should behave. Even
when the obvious pitfalls are avoided, there remain some knotty issues to untangle.
Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
Definition:
• For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
Omniscience, learning, and autonomy
• We need to be careful to distinguish between rationality and omniscience.
• An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality.
• rationality is not the same as perfection. Rationality maximizes expected
performance, while perfection maximizes actual performance.
• Doing actions in order to modify future percepts—sometimes called information
gathering—is an important part of rationality
• Our definition requires a rational agent not only to gather information but also to
learn as much as possible from what it perceives. The agent’s initial configuration
could reflect some prior knowledge of the environment, but as the agent gains
experience this may be modified and augmented.
AUTONOMY
• To the extent that an agent relies on the prior knowledge of its designer rather than
on its own percepts, we say that the agent lacks autonomy. A rational agent
should be autonomous—it should learn what it can to compensate for partial or
incorrect prior knowledge.
• For example, a vacuum-cleaning agent that learns to foresee where and when
additional dirt will appear will do better than one that does not. As a practical
matter, one seldom requires complete autonomy from the start: when the agent has
had little or no experience, it would have to act randomly unless the designer gave
some assistance.
• So, just as evolution provides animals with enough built-in reflexes to survive
long enough to learn for themselves, it would be reasonable to provide an artificial
intelligent agent with some initial knowledge as well as an ability to learn. After
sufficient experience of its environment, the behavior of a rational agent can
become effectively independent of its prior knowledge. Hence, the incorporation
of learning allows one to design a single rational agent that will succeed in a vast
variety of environments.
THE NATURE OF ENVIRONMENTS
• task environments – Problems
• rational agents – solutions
Specifying the task environment
• A task environment is specified by its Performance measure, Environment, Actuators, and
Sensors; we call this the PEAS description.
• In designing an agent, the first step must always be to specify the task environment as fully as
possible.
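To make the PEAS idea concrete, here is an illustrative sketch for an automated taxi; the specific field values are examples only, not a complete specification.

```python
# Illustrative PEAS description for an automated taxi (example values only).
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
)
```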
Properties of task environments
Fully observable vs. partially observable:

• If an agent’s sensors give it access to the complete state of the environment at each point in time, then we
say that the task environment is fully observable.
• A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the
choice of action; relevance, in turn, depends on the performance measure. Fully observable environments
are convenient because the agent need not maintain any internal state to keep track of the world.
• An environment might be partially observable because of noisy and inaccurate sensors or because parts
of the state are simply missing from the sensor data
• for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares,
and an automated taxi cannot see what other drivers are thinking.
• If the agent has no sensors at all then the environment is unobservable. One might think that in such cases
the agent’s plight is hopeless
Single agent vs. multi-agent:
• an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an
agent playing chess is in a two-agent environment.
• There are, however, some subtle issues:
• First, we have described how an entity may be viewed as an agent, but we have not explained which
entities must be viewed as agents.
• Does an agent A (the taxi driver, for example) have to treat an object B (another vehicle) as an agent,
or can it be treated merely as an object behaving according to the laws of physics? The key distinction
is whether B’s behavior is best described as maximizing a performance measure whose value depends
on agent A’s behavior.
• For example, in chess, the opponent entity B is trying to maximize its performance measure, which,
by the rules of chess, minimizes agent A’s performance measure. Thus, chess is a competitive
multiagent environment.
• In the taxi-driving environment, on the other hand, avoiding collisions maximizes the performance
measure of all agents, so it is a partially cooperative multiagent environment.
• Deterministic vs. stochastic:
• If the next state of the environment is completely determined by the current state and the action
executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.
• In principle, an agent need not worry about uncertainty in a fully observable, deterministic
environment.
• If the environment is partially observable, however, then it could appear to be stochastic.
• Most real situations are so complex that it is impossible to keep track of all the unobserved
aspects; for practical purposes, they must be treated as stochastic. Taxi driving is clearly stochastic
in this sense, because one can never predict the behavior of traffic exactly;
Episodic vs. sequential:
• In an episodic task environment, the agent’s experience is divided into atomic episodes. In each
episode the agent receives a percept and then performs a single action. Crucially, the next episode
does not depend on the actions taken in previous episodes.
• Many classification tasks are episodic.
• In sequential environments, on the other hand, the current decision could affect all future
decisions.
• Chess and taxi driving are sequential: in both cases, short-term actions can have long-term
consequences.
• Episodic environments are much simpler than sequential environments because the agent does not
need to think ahead.
Static vs. dynamic:
• If the environment can change while an agent is deliberating, then we say the environment is
dynamic for that agent; otherwise, it is static.
• Static environments are easy to deal with because the agent need not keep looking at the world
while it is deciding on an action, nor need it worry about the passage of time.
• Dynamic environments, on the other hand, are continuously asking the agent what it wants to do;
if it hasn’t decided yet, that counts as deciding to do nothing.
• If the environment itself does not change with the passage of time but the agent’s performance
score does, then we say the environment is semi-dynamic.
• Taxi driving is clearly dynamic: the other cars and the taxi itself keep moving while the driving
algorithm dithers about what to do next. Chess, when played with a clock, is semi-dynamic.
Crossword puzzles are static.
Discrete vs. continuous:
• The discrete/continuous distinction applies to the state of the environment, to the way time is
handled, and to the percepts and actions of the agent. For example, the chess environment has a
finite number of distinct states (excluding the clock).
• Chess also has a discrete set of percepts and actions.
• Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi
and of the other vehicles sweep through a range of continuous values and do so smoothly over
time. Taxi-driving actions are also continuous (steering angles, etc.).
• Input from digital cameras is discrete, strictly speaking, but is typically treated as representing
continuously varying intensities and locations.
Known vs. unknown:
• In a known environment, the outcomes (or outcome probabilities if the environment is stochastic)
for all actions are given.
• if the environment is unknown, the agent will have to learn how it works in order to make good
decisions.
• the distinction between known and unknown environments is not the same as the one between
fully and partially observable environments. It is quite possible for a known environment to be
partially observable—for example, in solitaire card games, I know the rules but am still unable to
see the cards that have not yet been turned over.
• Conversely, an unknown environment can be fully observable—in a new video game, the screen
may show the entire game state but I still don’t know what the buttons do until I try them.
THE STRUCTURE OF AGENTS
• The job of AI is to design an agent program that implements the agent function—the mapping
from percepts to actions.
• This program will run on some sort of computing device with physical sensors and
actuators; we call this the architecture:
agent = architecture + program
• the program we choose has to be one that is appropriate for the architecture. If the program is
going to recommend actions like Walk, the architecture had better have legs.
• the architecture makes the percepts from the sensors available to the program, runs the program,
and feeds the program’s action choices to the actuators as they are generated.
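A hedged sketch of this relationship in code: the architecture repeatedly reads a percept from the sensors, runs the agent program on it, and passes the chosen action to the actuators. The sensors and actuators objects and their read()/execute() methods are assumptions for illustration, not a real API.

```python
# Sketch of an architecture's run loop: agent = architecture + program.
# 'sensors' and 'actuators' are hypothetical objects with read()/execute() methods.
def run(program, sensors, actuators, steps=1000):
    for _ in range(steps):
        percept = sensors.read()     # architecture makes the percept available
        action = program(percept)    # the agent program chooses an action
        actuators.execute(action)    # architecture carries the action out
```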
Agent programs
• The agent programs have the same skeleton: they take the current percept as input from the
sensors and return an action to the actuators.
• The agent program takes the current percept as input, whereas the agent function
takes the entire percept history.
• The agent program takes just the current percept as input because nothing more is available from
the environment; if the agent’s actions need to depend on the entire percept sequence, the agent
will have to remember the percepts.
• four basic kinds of agent programs that embody the principles underlying almost all intelligent
systems:
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents;
• Utility-based agents
Simple reflex agents
• These agents select actions on the basis of the current percept, ignoring the rest of the percept
history.
• For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple reflex
agent, because its decision is based only on the current location and on whether that location
contains dirt. An agent program for this agent is shown in Figure 2.8.
• Simple reflex behaviors occur even in more complex environments. Imagine yourself as the driver
of the automated taxi. If the car in front brakes and its brake lights come on, then you should
notice this and initiate braking. In other words, some processing is done on the visual input to
establish the condition we call “The car in front is braking.” Then, this triggers some established
connection in the agent program to the action “initiate braking.” We call such a connection a
condition–action rule
• if car-in-front-is-braking then initiate-braking.
(Figure: A simple reflex agent. It acts according to a rule whose condition matches
the current state, as defined by the percept.)
(Figure: Schematic diagram of a simple reflex agent.)
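A minimal sketch of such an agent program, assuming condition–action rules are given as (condition, action) pairs; interpret_input stands in for whatever processing turns raw percepts into a state description such as "car-in-front-is-braking".

```python
# Simple reflex agent sketch: it ignores percept history and returns the action
# of the first rule whose condition matches the current percept.
def make_simple_reflex_agent(rules, interpret_input):
    def program(percept):
        state = interpret_input(percept)
        for condition, action in rules:
            if condition(state):
                return action
        return "NoOp"
    return program

# Hypothetical rule set for the braking example in the text.
rules = [(lambda s: s == "car-in-front-is-braking", "initiate-braking")]
driver = make_simple_reflex_agent(rules, interpret_input=lambda p: p)
print(driver("car-in-front-is-braking"))  # -> "initiate-braking"
```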


Model-based reflex agents
• The most effective way to handle partial observability is for the agent to keep track of the part of
the world it can’t see now.
• That is, the agent should maintain some sort of internal state that depends on the percept history
and thereby reflects at least some of the unobserved aspects of the current state.
• Updating this internal state information as time goes by requires two kinds of knowledge to be
encoded in the agent program.
• First, we need some information about how the world evolves independently of the agent—
• example, that an overtaking car generally will be closer behind than it was a moment ago.
• Second, we need some information about how the agent’s own actions affect the world—
• example, that when the agent turns the steering wheel clockwise, the car turns to the right, or that
after driving for five minutes northbound on the freeway, one is usually about five miles north of
where one was five minutes ago.
• This knowledge about “how the world works”—whether implemented in simple Boolean circuits
or in complete scientific theories—is called a model of the world. An agent that uses such a model
is called a model-based agent.

Figure 2.12 A model-based reflex agent. It keeps track of the


current state of the world, using an internal model. It then
chooses an action in the same way as the reflex agent.
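A hedged code sketch of the same idea: the agent keeps an internal state, updates it from the previous state, the most recent action, the current percept, and a model of how the world works, and then applies condition–action rules as before. The update_state function and the rule format are assumptions.

```python
# Model-based reflex agent sketch: internal state is updated using a model of
# how the world evolves and how the agent's own actions affect it.
def make_model_based_reflex_agent(rules, update_state, model, initial_state=None):
    state = initial_state or {}
    last_action = None

    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept, model)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        last_action = "NoOp"
        return last_action

    return program
```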
Goal-based agents
• Knowing something about the current state of the environment is not always enough to decide
what to do.
• For example, at a road junction, the taxi can turn left, turn right, or go straight on.
• The correct decision depends on where the taxi is trying to get to. In other words, as well as a
current state description, the agent needs some sort of goal information that describes
situations that are desirable—for example, being at the passenger’s destination.

Figure 2.13 A model-based, goal-based agent. It keeps track of


the world state as well as a set of goals it is trying to achieve,
and chooses an action that will (eventually) lead to the
achievement of its goals.
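One way to read the figure in code, as a rough sketch: the agent uses its model to predict the state resulting from each available action and picks one whose predicted outcome satisfies the goal. The predict and goal_satisfied helpers are assumptions; real goal-based agents typically rely on search and planning to find multi-step action sequences.

```python
# One-step goal-based action selection sketch (assumed helpers: predict,
# goal_satisfied). Multi-step goals would require search or planning instead.
def goal_based_action(state, actions, predict, goal_satisfied):
    for action in actions:
        if goal_satisfied(predict(state, action)):
            return action
    return None  # no single action achieves the goal from here
```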
Utility-based agents
• Goals alone are not enough to generate high-quality behavior in most environments.
• For example, many action sequences will get the taxi to its destination (thereby achieving the goal)
but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a crude
binary distinction between “happy” and “unhappy” states.
• A more general performance measure should allow a comparison of different world states
according to exactly how happy they would make the agent. Because “happy” does not sound very
scientific, economists and computer scientists use the term utility instead.
Figure 2.14 A model-based, utility-based agent. It uses a
model of the world, along with a utility function that
measures its preferences among states of the world. Then
it chooses the action that leads to the best expected utility,
where expected utility is computed by averaging over all
possible outcome states, weighted by the probability of
the outcome.
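Following the caption above, a small sketch of expected-utility action selection: expected utility is the probability-weighted average of the utilities of the possible outcome states, and the agent picks the action that maximizes it. The outcomes(state, action) and utility(state) helpers are assumptions.

```python
# Expected-utility action selection sketch. 'outcomes' is assumed to return
# (outcome_state, probability) pairs; 'utility' scores a single state.
def expected_utility(state, action, outcomes, utility):
    return sum(p * utility(s) for s, p in outcomes(state, action))

def best_action(state, actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(state, a, outcomes, utility))
```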
Learning agents
