
Module 1

Introduction
AI is one of the newest fields in science and engineering. Work started in earnest soon after
World War II, and the name itself was coined in 1956. AI currently encompasses a huge variety of
subfields, ranging from the general (learning and perception) to the specific, such as playing chess,
proving mathematical theorems, writing poetry, driving a car on a crowded street, and diagnosing
diseases. AI is relevant to any intellectual task; it is truly a universal field.
1.1 WHAT IS AI?
• In Figure 1.1 we see eight definitions of AI, laid out along two dimensions. The definitions
on top are concerned with thought processes and reasoning, whereas the ones on the bottom
address behavior.
• The definitions on the left measure success in terms of fidelity to human performance,
whereas the ones on the right measure against an ideal performance measure, called
rationality. A system is rational if it does the “right thing,” given what it knows.
• Historically, all four approaches to AI have been followed, each by different people with
different methods. A human-centered approach must be in part an empirical science,
involving observations and hypotheses about human behavior. A rationalist approach
involves a combination of mathematics and engineering.

1.1.1 Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence. A computer passes the test if a human interrogator, after
posing some written questions, cannot tell whether the written responses come from a person or
from a computer. The computer would need to possess the following capabilities:

• natural language processing to enable it to communicate successfully in English;


• knowledge representation to store what it knows or hears;
• automated reasoning to use the stored information to answer questions and to draw new
conclusions;
• machine learning to adapt to new circumstances and to detect and extrapolate patterns.
Turing’s test deliberately avoided direct physical interaction between the interrogator and
the computer, because physical simulation of a person is unnecessary for intelligence. However,
the so-called total Turing Test includes a video signal so that the interrogator can test the
subject’s perceptual abilities, as well as the opportunity for the interrogator to pass physical
objects “through the hatch.” To pass the total Turing Test, the computer will need:
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.
1.1.2 Thinking humanly: The cognitive modelling approach
• If we are going to say that a given program thinks like a human, we must have some way
of determining how humans think. We need to get inside the actual workings of human
minds. There are three ways to do this:
• through introspection—trying to catch our own thoughts as they go by;
• through psychological experiments—observing a person in action; and
• through brain imaging—observing the brain in action.
• Once we have a sufficiently precise theory of the mind, it becomes possible to express the
theory as a computer program. If the program’s input–output behavior matches
corresponding human behavior, that is evidence that some of the program’s mechanisms
could also be operating in humans.
• In the early days of AI there was often confusion between the approaches: an author would
argue that an algorithm performs well on a task and that it is therefore a good model of
human performance, or vice versa. Modern authors separate the two kinds of claims; this
distinction has allowed both AI and cognitive science to develop more rapidly. The two

fields continue to fertilize each other, most notably in computer vision, which incorporates
neurophysiological evidence into computational models.
1.1.3 Thinking rationally: The “laws of thought” approach
• The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,”
that is, certain reasoning processes. His syllogisms provided patterns for argument
structures that always yielded correct conclusions when given correct premises.
• Logicians in the 19th century developed a precise notation for statements about all kinds of
objects in the world and the relations among them. The so-called logicist tradition within
artificial intelligence hopes to build on such programs to create intelligent systems.
• There are two main obstacles to this approach. First, it is not easy to take informal
knowledge and state it in the formal terms required by logical notation, particularly when
the knowledge is less than 100% certain. Second, there is a big difference between solving
a problem “in principle” and solving it in practice.
1.1.4 Acting rationally: The rational agent approach
• An agent is just something that acts (agent comes from the Latin agere, to do). Of course,
all computer programs do something, but computer agents are expected to do more: operate
autonomously, perceive their environment, persist over a prolonged time period, adapt to
change, and create and pursue goals.
• A rational agent is one that acts to achieve the best outcome or, when there is uncertainty,
the best expected outcome. In the “laws of thought” approach to AI, the emphasis was on
correct inferences. Making correct inferences is sometimes part of being a rational agent,
because one way to act rationally is to reason logically to the conclusion that a given action
will achieve one’s goals and then to act on that conclusion.
• The rational-agent approach has two advantages over the other approaches. First, it is more
general than the “laws of thought” approach because correct inference is just one of several
possible mechanisms for achieving rationality. Second, it is more amenable to scientific
development than are approaches based on human behavior or human thought. The
standard of rationality is mathematically well defined and completely general, and can be
“unpacked” to generate agent designs that provably achieve it.

1.2 THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE


We provide a brief history of the disciplines that contributed ideas, viewpoints, and techniques
to AI.

1.2.1 Philosophy
• Aristotle (384–322 B.C.) was the first to formulate a precise set of laws governing the
rational part of the mind. He developed an informal system of syllogisms for proper
reasoning, which in principle allowed one to generate conclusions mechanically, given
initial premises. Much later, Ramon Lull (d.1315) had the idea that useful reasoning could
actually be carried out by a mechanical artifact.
• Thomas Hobbes (1588–1679) proposed that reasoning was like numerical computation.
Around 1500, Leonardo da Vinci (1452–1519) designed but did not build a mechanical
calculator. Gottfried Wilhelm Leibniz (1646–1716) built a mechanical device intended to
carry out operations on concepts rather than numbers, but its scope was rather limited.
René Descartes (1596–1650) gave the first clear discussion of the distinction between
mind and matter and of the problems that arise.
• Given a physical mind that manipulates knowledge, the next problem is to establish the
source of knowledge. The empiricism movement, starting with Francis Bacon’s (1561–1626)
Novum Organum, is characterized by a dictum of John Locke (1632–1704): “Nothing is in the
understanding, which was not first in the senses.”
• David Hume’s (1711–1776) A Treatise of Human Nature (Hume, 1739) proposed what is
now known as the principle of induction: that general rules are acquired by exposure to
repeated associations between their elements. Building on the work of Ludwig Wittgenstein
(1889– 1951) and Bertrand Russell (1872–1970), the famous Vienna Circle, led by Rudolf
Carnap (1891–1970), developed the doctrine of logical positivism.
• The confirmation theory of Carnap and Carl Hempel (1905–1997) attempted to analyze
the acquisition of knowledge from experience. The final element in the philosophical
picture of the mind is the connection between knowledge and action. This question is vital
to AI because intelligence requires action as well as reasoning. Moreover, only by
understanding how actions are justified can we understand how to build an agent whose
actions are justifiable (or rational).
1.2.2 Mathematics
• Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal
science required a level of mathematical formalization in three fundamental areas: logic,
computation, and probability.
• The idea of formal logic can be traced back to the philosophers of ancient Greece, but its
mathematical development really began with the work of George Boole (1815–1864), who

worked out the details of propositional, or Boolean, logic (Boole, 1847). In 1879, Gottlob
Frege (1848–1925) extended Boole’s logic to include objects and relations, creating the
first-order logic that is used today.
• Alfred Tarski (1902–1983) introduced a theory of reference that shows how to relate the
objects in a logic to objects in the real world.
• The next step was to determine the limits of what could be done with logic and computation.
The first nontrivial algorithm is thought to be Euclid’s algorithm for computing greatest
common divisors.
• The word algorithm (and the idea of studying them) comes from al-Khowarazmi, a Persian
mathematician of the 9th century, whose writings also introduced Arabic numerals and
algebra to Europe.
• Boole and others discussed algorithms for logical deduction, and, by the late 19th century,
efforts were under way to formalize general mathematical reasoning as logical deduction.
• In 1930, Kurt Gödel (1906–1978) showed that there exists an effective procedure to prove
any true statement in the first-order logic of Frege and Russell, but that first-order logic
could not capture the principle of mathematical induction needed to characterize the natural
numbers.
• Besides logic and computation, the third great contribution of mathematics to AI is the
theory of probability. The Italian Gerolamo Cardano (1501–1576) first framed the idea of
probability, describing it in terms of the possible outcomes of gambling events.
• In 1654, Blaise Pascal (1623–1662), in a letter to Pierre Fermat (1601–1665), showed how
to predict the future of an unfinished gambling game and assign average payoffs to the
gamblers.
• Probability quickly became an invaluable part of all the quantitative sciences, helping to
deal with uncertain measurements and incomplete theories. James Bernoulli (1654–1705),
Pierre Laplace (1749–1827), and others advanced the theory and introduced new statistical
methods.
• Thomas Bayes (1702–1761) proposed a rule for updating probabilities in the light of new
evidence. Bayes’ rule underlies most modern approaches to uncertain reasoning in AI
systems.
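As a small illustration of how Bayes’ rule updates a probability in the light of new evidence, the following Python sketch computes a posterior from an assumed prior and assumed likelihoods; the disease-test numbers are invented for the example, not taken from the text.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Hypothetical numbers: a condition with 1% prevalence and an imperfect test.
p_h = 0.01              # prior P(condition)
p_e_given_h = 0.95      # likelihood P(positive test | condition)
p_e_given_not_h = 0.05  # false-positive rate P(positive test | no condition)

# Total probability of the evidence (law of total probability).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior probability of the condition given a positive test.
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(condition | positive test) = {p_h_given_e:.3f}")  # about 0.161
```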
1.2.3 Economics
• The science of economics got its start in 1776, when Scottish philosopher Adam Smith
(1723–1790) published An Inquiry into the Nature and Causes of the Wealth of Nations.

While the ancient Greeks and others had made contributions to economic thought, Smith
was the first to treat it as a science, using the idea that economies can be thought of as
consisting of individual agents maximizing their own economic well-being.
• Most people think of economics as being about money, but economists will say that they
are really studying how people make choices that lead to preferred outcomes.
• Decision theory, which combines probability theory with utility theory, provides a formal
and complete framework for decisions (economic or otherwise) made under uncertainty.
1.2.4 Neuroscience
• How do brains process information?
Neuroscience is the study of the nervous system, particularly the brain. Although the exact
way in which the brain enables thought is one of the great mysteries of science, the fact
that it does enable thought has been appreciated for thousands of years because of the
evidence that strong blows to the head can lead to mental incapacitation. Nicolas Rashevsky
(1936, 1938) was the first to apply mathematical models to the study of the nervous system.

• We now have some data on the mapping between areas of the brain and the parts of the
body that they control or from which they receive sensory input. Such mappings are able
to change radically over the course of a few weeks, and some animals seem to have multiple
maps. Moreover, we do not fully understand how other areas can take over functions when
one area is damaged. There is almost no theory on how an individual memory is stored.
• The measurement of intact brain activity began in 1929 with the invention by Hans Berger
of the electroencephalograph (EEG). The recent development of functional magnetic
resonance imaging (fMRI) (Ogawa et al., 1990; Cabeza and Nyberg, 2001) is giving
neuroscientists unprecedentedly detailed images of brain activity, enabling measurements
that correspond in interesting ways to ongoing cognitive processes.
• These are augmented by advances in single-cell recording of neuron activity. Individual
neurons can be stimulated electrically, chemically, or even optically (Han and Boyden,
2007), allowing neuronal input–output relationships to be mapped.
• Despite these advances, we are still a long way from understanding how cognitive
processes actually work. The truly amazing conclusion is that a collection of simple cells
can lead to thought, action, and consciousness or, in the pithy words of John Searle (1992),
brains cause minds.

1.2.5 Psychology
• How do humans and animals think and act?
The origins of scientific psychology are usually traced to the work of the German physicist
Hermann von Helmholtz (1821–1894) and his student Wilhelm Wundt (1832–1920).
Helmholtz applied the scientific method to the study of human vision, and his Handbook
of Physiological Optics is even now described as “the single most important treatise on the
physics and physiology of human vision” (Nalwa, 1993, p.15).

1.2.6 Computer engineering
• How can we build an efficient computer?
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The
computer has been the artifact of choice. The modern digital electronic computer was
invented independently and almost simultaneously by scientists in three countries
embattled in World War II. The first operational computer was the electro-mechanical
Heath Robinson, built in 1940 by Alan Turing’s team for a single purpose.
• Each generation of computer hardware has brought an increase in speed and capacity and
a decrease in price. Performance doubled every 18 months or so until around 2005, when
power dissipation problems led manufacturers to start multiplying the number of CPU
cores rather than the clock speed.
1.2.7 Control theory and cybernetics
• How can artifacts operate under their own control?
Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock
with a regulator that maintained a constant flow rate. This invention changed the definition
of what an artifact could do. Previously, only living things could modify their behavior in
response to changes in the environment.
• Modern control theory, especially the branch known as stochastic optimal control, has as
its goal the design of systems that maximize an objective function over time. This roughly
matches our view of AI: designing systems that behave optimally.
• Why, then, are AI and control theory two different fields, despite the close connections
among their founders? The answer lies in the close coupling between the mathematical
techniques that were familiar to the participants and the corresponding sets of problems
that were encompassed in each world view.
• Calculus and matrix algebra, the tools of control theory, lend themselves to systems that
are describable by fixed sets of continuous variables, whereas AI was founded in part as a
way to escape from these perceived limitations.
• The tools of logical inference and computation allowed AI researchers to consider
problems such as language, vision, and planning that fell completely outside the control
theorist’s purview.
1.2.8 Linguistics
• How does language relate to thought?

In 1957, B. F. Skinner published Verbal Behavior. This was a comprehensive, detailed
account of the behaviorist approach to language learning, written by the foremost expert
in the field. But curiously, a review of the book became as well known as the book itself,
and served to almost kill off interest in behaviorism.
Modern linguistics and AI, then, were “born” at about the same time, and grew up together,
intersecting in a hybrid field called computational linguistics or natural language
processing. The problem of understanding language soon turned out to be considerably
more complex than it seemed in 1957.
Understanding language requires an understanding of the subject matter and context, not
just an understanding of the structure of sentences. This might seem obvious, but it was
not widely appreciated until the 1960s.
Much of the early work in knowledge representation (the study of how to put knowledge
into a form that a computer can reason with) was tied to language and informed by research
in linguistics, which was connected in turn to decades of work on the philosophical analysis
of language.
1.3 THE HISTORY OF ARTIFICIAL INTELLIGENCE
• The gestation of artificial intelligence (1943–1955)
• The birth of artificial intelligence (1956)
• Early enthusiasm, great expectations (1952–1969)
• A dose of reality (1966–1973)
• Knowledge-based systems: The key to power? (1969–1979)
• AI becomes an industry (1980–present)
• The return of neural networks (1986–present)
• AI adopts the scientific method (1987–present)
• The emergence of intelligent agents (1995–present)
• The availability of very large data sets (2001–present)

2.1 AGENTS AND ENVIRONMENTS

• An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators. This simple idea is illustrated in Figure 2.1.

• A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on
for actuators. A robotic agent might have cameras and infrared range finders for sensors and
various motors for actuators. A software agent receives keystrokes, file contents, and network
packets as sensory inputs and acts on the environment by displaying on the screen, writing
files, and sending network packets.

• Example—the vacuum-cleaner world shown in Figure 2.2.

• This particular world has just two locations: squares A and B.

• The vacuum agent perceives which square it is in and whether there is dirt in the square.

• It can choose to move left, move right, suck up the dirt, or do nothing. One very simple
agent function is the following: if the current square is dirty, then suck; otherwise, move
to the other square.
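A minimal Python sketch of this agent function is given below; the percept format (location, status) and the action names are illustrative assumptions rather than a fixed interface.

```python
def reflex_vacuum_agent(percept):
    """Agent function for the two-square vacuum world.
    percept is assumed to be a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:  # location == 'B'
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(reflex_vacuum_agent(('A', 'Clean')))  # Right
```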

2.2 GOOD BEHAVIOR: THE CONCEPT OF RATIONALITY

• A rational agent is one that does the right thing—conceptually speaking, every entry in the
table for the agent function is filled out correctly.

• When an agent is plunked down in an environment, it generates a sequence of actions
according to the percepts it receives. This sequence of actions causes the environment to go
through a sequence of states. If the sequence is desirable, then the agent has performed well.
This notion of desirability is captured by a performance measure that evaluates any given
sequence of environment states.

2.2.1 Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
Let us assume the following:
• The performance measure awards one point for each clean square at each time
step, over a “lifetime” of 1000 time steps.
• The “geography” of the environment is known a priori (Figure 2.2) but the dirt
distribution and the initial location of the agent are not. Clean squares stay clean
and sucking cleans the current square. The Left and Right actions move the
agent left and right except when this would take the agent outside the
environment, in which case the agent remains where it is.
• The only available actions are Left, Right, and Suck.
• The agent correctly perceives its location and whether that location contains dirt.

We claim that under these circumstances the agent is indeed rational; its expected performance
is at least as high as any other agent’s.
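One way to make this claim concrete is to simulate the two-square world under the assumptions above and total the performance measure (one point per clean square per time step, for 1000 steps). The sketch below only illustrates that bookkeeping; it reuses the reflex_vacuum_agent function sketched in Section 2.1 and assumes both squares start dirty with the agent in square A.

```python
def simulate(agent, steps=1000):
    """Run the two-square vacuum world and return the total score."""
    dirt = {'A': True, 'B': True}   # assumed initial dirt distribution
    location = 'A'                  # assumed initial location
    score = 0
    for _ in range(steps):
        status = 'Dirty' if dirt[location] else 'Clean'
        action = agent((location, status))
        if action == 'Suck':
            dirt[location] = False
        elif action == 'Right':
            location = 'B'
        elif action == 'Left':
            location = 'A'
        # One point for each clean square at each time step.
        score += sum(1 for square in dirt if not dirt[square])
    return score

print(simulate(reflex_vacuum_agent))  # 1998 of a possible 2000 in this setup
```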

2.2.2 Omniscience, learning, and autonomy

• An omniscient agent knows the actual outcome of its actions and can act accordingly; but
omniscience is impossible in reality.

• Our definition requires a rational agent not only to gather information but also to learn
as much as possible from what it perceives.

• The agent’s initial configuration could reflect some prior knowledge of the environment, but
as the agent gains experience this may be modified and augmented. To the extent that an
agent relies on the prior knowledge of its designer rather than on its own percepts, we say
that the agent lacks autonomy.

2.3 THE NATURE OF ENVIRONMENTS

2.3.1 Specifying the task environment

• To determine the rationality of the simple vacuum-cleaner agent, we had to specify the
performance measure, the environment, and the agent’s actuators and sensors. We group all
these under the heading of the task environment.

Performance measures to be considered for an automated taxi:


• getting to the correct destination;
• minimizing fuel consumption and wear and tear;
• minimizing the trip time or cost;
• minimizing violations of traffic laws and disturbances to other drivers;
• maximizing safety and passenger comfort;
• maximizing profits.

The actuators for an automated taxi include those available to a human driver:

• Control over the engine through the accelerator and control over steering and braking.
• It will need output to a display screen or voice synthesizer to talk back to the passengers, and
• perhaps some way to communicate with other vehicles.

The basic sensors for the taxi will include one or more controllable video cameras so that it can see the road;
it might augment these with infrared or sonar sensors to detect distances to other cars and obstacles.
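Taken together, the performance measure, environment, actuators, and sensors specify the task environment, often abbreviated PEAS. A compact way to record this for the automated taxi, written here as an ordinary Python dictionary whose entries paraphrase the lists above (the exact entries are illustrative):

```python
# Sketch of a PEAS description for the automated taxi (entries are illustrative).
taxi_task_environment = {
    "performance_measure": ["correct destination", "fuel and wear", "trip time or cost",
                            "traffic-law violations", "safety and comfort", "profit"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators": ["steering", "accelerator", "brake", "display or voice output"],
    "sensors": ["video cameras", "infrared or sonar range sensors", "GPS", "speedometer"],
}
```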

2.3.2 Properties of task environments

Fully observable vs. partially observable:

• If an agent’s sensors give it access to the complete state of the environment at each point
in time, then we say that the task environment is fully observable. The sensors detect all
aspects that are relevant to the choice of action; relevance, in turn, depends on the
performance measure.

• An environment might be partially observable because of noisy and inaccurate sensors or
because parts of the state are simply missing from the sensor data.

• Example: a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in
other squares, and an automated taxi cannot see what other drivers are thinking.
• If the agent has no sensors at all, then the environment is unobservable.

Single agent vs. multiagent:

• For example, an agent solving a crossword puzzle by itself is clearly in a single-agent
environment, whereas an agent playing chess is in a two-agent environment.

• For example, in chess, the opponent entity B is trying to maximize its performance measure,
which, by the rules of chess, minimizes agent A’s performance measure. Thus, chess is a
competitive multiagent environment.

• In the taxi-driving environment, on the other hand, avoiding collisions maximizes the
performance measure of all agents, so it is a partially cooperative multiagent environment.
It is also partially competitive because, for example, only one car can occupy a parking
space.

Deterministic vs. stochastic:

• If the next state of the environment is completely determined by the current state and
the action executed by the agent, then we say the environment is deterministic; otherwise, it
is stochastic.

• If the environment is partially observable, however, then it could appear to be stochastic.
Taxi driving is clearly stochastic in this sense, because one can never predict the behavior of
traffic exactly; moreover, one’s tires blow out and one’s engine seizes up without warning.

Episodic vs. sequential:

• In an episodic task environment, the agent’s experience is divided into atomic episodes. In
each episode the agent receives a percept and then performs a single action. Crucially, the
next episode does not depend on the actions taken in previous episodes.

• In sequential environments, on the other hand, the current decision could affect all future
decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have
long-term consequences.

Static vs. dynamic:

• If the environment can change while an agent is deliberating, then we say the environment is
dynamic for that agent; otherwise, it is static. Static environments are easy to deal with
because the agent need not keep looking at the world while it is deciding on an action, nor
need it worry about the passage of time. Crossword puzzles are static.

• Dynamic environments, on the other hand, are continuously asking the agent what it wants to
do; if it hasn’t decided yet, that counts as deciding to do nothing. Taxi driving is clearly
dynamic.

Discrete vs. continuous:

• The discrete/continuous distinction applies to the state of the environment, to the way time is
handled, and to the percepts and actions of the agent. For example, the chess environment
has a finite number of distinct states (excluding the clock). Chess also has a discrete set of
percepts and actions.

• Taxi driving is a continuous-state and continuous-time problem: the speed and location of the
taxi and of the other vehicles sweep through a range of continuous values and do so smoothly
over time. Taxi-driving actions are also continuous (steering angles, etc.). Input from digital
cameras is discrete, strictly speaking, but is typically treated as representing continuously
varying intensities and locations.

Known vs. unknown:

• In a known environment, the outcomes (or outcome probabilities if the environment is
stochastic) for all actions are given. It is quite possible for a known environment to be
partially observable—for example, in solitaire card games.

• Conversely, an unknown environment can be fully observable—in a new video game, the
screen may show the entire game state, but we still don’t know what the buttons do until we
try them.

2.4 THE STRUCTURE OF AGENTS

• The job of AI is to design an agent program that implements the agent function—the mapping from
percepts to actions.
• The program will run on some sort of computing device with physical sensors and actuators—we call
this the architecture:
o agent = architecture + program.

2.4.1 Agent Programs


• The agent programs take the current percept as input from the sensors and return an action
to the actuators.
• The agent program takes just the current percept as input because nothing more is available
from the environment; if the agent’s actions need to depend on the entire percept sequence,
the agent will have to remember the percepts.
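A hedged sketch of the most literal implementation is a table-driven agent that remembers the whole percept sequence and looks it up in an explicit table of actions. The tiny table below is an invented example for the two-square vacuum world; for realistic environments such a table would be astronomically large.

```python
def make_table_driven_agent(table):
    """Return an agent program that appends each percept to a remembered
    sequence and looks the whole sequence up in a table of actions."""
    percepts = []

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), 'NoOp')

    return program

# Illustrative table indexed by percept sequences.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
agent = make_table_driven_agent(table)
print(agent(('A', 'Clean')))  # Right
print(agent(('B', 'Dirty')))  # Suck
```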

Four basic kinds of agent programs that embody the principles underlying almost all intelligent
systems:

• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents.

2.4.2 Simple Reflex Agents:

• The simplest kind of agent is the simple reflex agent. These agents select actions on the basis
of the current percept, ignoring the rest of the percept history. For example, the vacuum
agent whose agent function is tabulated in Figure 2.3 is a simple reflex agent, because its
decision is based only on the current location and on whether that location contains dirt.

• Simple reflex behaviors occur even in more complex environments. In the automated taxi, if
the car in front brakes and its brake lights come on, then you should notice this and initiate
braking; a small code sketch of this condition-action rule appears below.

• We use rectangles to denote the current internal state of the agent’s decision process, and
ovals to represent the background information used in the process. The agent program is
shown in Figure 2.10.
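A minimal Python sketch of this structure, using the brake-light situation as the single condition-action rule; the interpret_input helper and the rule format are assumptions made for the example, not the book’s code.

```python
def interpret_input(percept):
    """Abstract the raw percept into a state description (illustrative)."""
    return {'car_in_front_is_braking': percept.get('brake_lights_on', False)}

# Condition-action rules: (predicate on the state, action to take).
rules = [
    (lambda state: state['car_in_front_is_braking'], 'initiate_braking'),
]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in rules:
        if condition(state):
            return action
    return 'do_nothing'

print(simple_reflex_agent({'brake_lights_on': True}))   # initiate_braking
print(simple_reflex_agent({'brake_lights_on': False}))  # do_nothing
```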

2.4.3 Model-Based Agents:

• The agent should maintain some sort of internal state that depends on the percept history
and thereby reflects at least some of the unobserved aspects of the current state.

• For driving tasks such as changing lanes, the agent needs to keep track of where the other
cars are if it can’t see them all at once. Updating this internal state information as time goes
by requires two kinds of knowledge to be encoded in the agent program.

• First, we need some information about how the world evolves independently of the agent—
for example, that an overtaking car generally will be closer behind than it was a moment ago.
Second, we need some information about how the agent’s own actions affect the world—
for example, that when the agent turns the steering wheel clockwise, the car turns to the right.

• This knowledge about “how the world works”—whether implemented in simple Boolean
circuits or in complete scientific theories—is called a model of the world. An agent that uses
such a model is called a model-based agent. Figure 2.11 gives the structure of the model-based
reflex agent with internal state, showing how the current percept is combined with the
old internal state to generate the updated description of the current state, based on the agent’s
model of how the world works.
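The sketch below shows one way this combination of old state, last action, current percept, and model might look in code; the class, the transition_model function, and the rule are all invented for illustration.

```python
class ModelBasedReflexAgent:
    """Keeps an internal state and updates it with a model of the world."""

    def __init__(self, transition_model, rules):
        self.state = {}                            # best guess at the world state
        self.last_action = None
        self.transition_model = transition_model   # "how the world works"
        self.rules = rules                         # condition-action rules

    def __call__(self, percept):
        # Combine old state, last action, and the new percept using the model.
        self.state = self.transition_model(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = 'do_nothing'
        return self.last_action

# Trivial illustrative model: percepts directly name state variables.
def transition_model(state, action, percept):
    new_state = dict(state)
    new_state.update(percept)
    return new_state

agent = ModelBasedReflexAgent(transition_model,
                              [(lambda s: s.get('car_behind_close'), 'hold_lane')])
print(agent({'car_behind_close': True}))  # hold_lane
```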

2.4.4 Goal-Based Agents:
• A goal-based agent needs some sort of goal information that describes situations that are
desirable—for example, being at the passenger’s destination. The agent program can combine
this with the model to choose actions that achieve the goal. Figure 2.13 shows the goal-based
agent’s structure.

• Sometimes the agent has to consider long sequences of twists and turns in order to find a way
to achieve the goal. Search and planning are the subfields of AI devoted to finding action
sequences that achieve the agent’s goals; a small search sketch appears at the end of this section.

• Although the goal-based agent appears less efficient, it is more flexible because the knowledge
that supports its decisions is represented explicitly and can be modified.

• The goal-based agent’s behavior can easily be changed to go to a different destination, simply
by specifying that destination as the goal. The reflex agent’s rules for when to turn and when
to go straight will work only for a single destination; they must all be replaced to go
somewhere new.
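As a rough sketch of the search component of a goal-based agent, the snippet below finds an action sequence to an arbitrary destination by breadth-first search over a tiny invented city map; changing the goal changes the behaviour with no rewriting of rules.

```python
from collections import deque

def plan_route(model, start, goal):
    """Breadth-first search over a model (location -> list of neighbours)
    to find an action sequence that reaches the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        location, path = frontier.popleft()
        if location == goal:
            return path
        for nxt in model.get(location, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [f"drive to {nxt}"]))
    return None  # no route found

# Hypothetical map; the goal is just a parameter.
city_map = {'Airport': ['Downtown'], 'Downtown': ['Airport', 'Suburb'], 'Suburb': ['Downtown']}
print(plan_route(city_map, 'Airport', 'Suburb'))  # ['drive to Downtown', 'drive to Suburb']
```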

2.4.5 Utility-Based Agents:


• Goals alone are not enough to generate high-quality behaviour in most environments. For
example, many action sequences will get the taxi to its destination (thereby achieving the
goal) but some are quicker, safer, more reliable, or cheaper than others.

• Goals just provide a crude binary distinction between “happy” and “unhappy” states. A more
general performance measure should allow a comparison of different world states according
to exactly how happy they would make the agent.

• In two kinds of cases, goals are inadequate but a utility-based agent can still make rational
decisions. First, when there are conflicting goals, only some of which can be achieved (for
example, speed and safety), the utility function specifies the appropriate trade-off.

• Second, when there are several goals that the agent can aim for, none of which can be
achieved with certainty, utility provides a way in which the likelihood of success can be
weighed against the importance of the goals.
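A small sketch of this idea: choose the action with the highest expected utility, where the outcome probabilities and utility values below are invented purely to illustrate a speed-versus-safety trade-off.

```python
def expected_utility(action, outcomes, utility):
    """Probability-weighted sum of the utilities of an action's outcomes."""
    return sum(p * utility(result) for result, p in outcomes[action])

def choose_action(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Invented example: a fast but risky route versus a slower, safer one.
outcomes = {
    'fast_route': [('on time', 0.7), ('accident', 0.3)],
    'safe_route': [('on time', 0.6), ('late', 0.4)],
}
utility = {'on time': 10, 'late': 4, 'accident': -100}.get
print(choose_action(list(outcomes), outcomes, utility))  # safe_route
```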

2.4.6 Learning Agents:

• A learning agent can be divided into four conceptual components, as shown in Figure 2.15.

• The performance element: it takes in percepts and decides on actions.

• The learning element uses feedback from the critic on how the agent is doing and
determines how the performance element should be modified to do better in the future.

• The most important distinction is between the learning element, which is responsible for
making improvements, and the performance element, which is responsible for selecting
external actions.

• The critic tells the learning element how well the agent is doing with respect to a fixed
performance standard.

• The last component of the learning agent is the problem generator. It is responsible for
suggesting actions that will lead to new and informative experiences.

• Example: The performance element consists of the collection of knowledge and procedures the
taxi has for selecting its driving actions. The taxi goes out on the road and drives, using this
performance element. The critic observes the world and passes information along to the
learning element. The problem generator might identify certain areas of behaviour in need
of improvement and suggest experiments, such as trying out the brakes on different road
surfaces under different conditions.
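A skeleton showing how the four components might be wired together; the component interfaces here are assumptions made for illustration, and the bodies are placeholders rather than a working learner.

```python
class LearningAgent:
    """Connects the four conceptual components of a learning agent."""

    def __init__(self, performance_element, critic, learning_element, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.critic = critic                            # scores behaviour against a fixed standard
        self.learning_element = learning_element        # modifies the performance element
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(self.performance_element, feedback)
        exploratory = self.problem_generator(percept)
        if exploratory is not None:
            return exploratory                          # try something new and informative
        return self.performance_element(percept)        # otherwise act normally
```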

2.4.7 How the components of agent programs work:

• We can place the representations along an axis of increasing complexity and expressive
power—atomic, factored, and structured.

• In an atomic representation each state of the world is indivisible—it has no internal
structure. Consider the problem of finding a driving route from one end of a country to the
other via some sequence of cities.

• For the purposes of solving this problem, it may suffice to reduce the state of the world to just the
name of the city we are in—a single atom of knowledge; a “black box” whose only
discernible property is that of being identical to or different from another black box.

• For other purposes, we might need to pay attention to how much gas is in the tank, our current GPS coordinates,
whether or not the oil warning light is working, how much spare change we have for toll
crossings, what station is on the radio, and so on.

• A factored representation splits up each state into a fixed set of variables or attributes,
each of which can have a value.

• Two different factored states can share some attributes (such as being at some particular GPS
location) and not others (such as having lots of gas or having no gas); this makes it much
easier to work out how to turn one state into another.

• With factored representations, we can also represent uncertainty—for example, ignorance
about the amount of gas in the tank can be represented by leaving that attribute blank.

• Many areas of AI are based on factored representations, including constraint satisfaction
algorithms, propositional logic, planning, Bayesian networks, and many machine learning
algorithms.

• In a structured representation, objects such as cows and trucks and their various and varying
relationships can be described explicitly (see Figure 2.16(c)).

• Structured representations underlie relational databases, first-order logic, first-order
probability models, knowledge-based learning, and much of natural language understanding.

• In fact, almost everything that humans express in natural language concerns objects and their
relationships. The axis along which atomic, factored, and structured representations lie is the
axis of increasing expressiveness.
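A small sketch of how the same driving state might be written down under each kind of representation; the variable names and relations are invented for the example.

```python
# Atomic: the state is a single indivisible name (a "black box").
atomic_state = "in_city_B"

# Factored: a fixed set of variables, each with a value; None marks an unknown value.
factored_state = {"city": "B", "fuel": None, "oil_warning_light": False, "toll_change": 2.50}

# Structured: explicit objects and the relationships between them.
structured_state = {
    "objects": ["truck1", "cow1", "road1"],
    "relations": [("on", "truck1", "road1"), ("blocking", "cow1", "road1")],
}
```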

