AI Unit-1 Material
Syllabus
Introduction: What is AI, Foundations of AI, History of AI, The State of Art.
Intelligent Agents: Agents and Environments, Good Behavior: The Concept of Rationality,
The Nature of Environments, The Structure of Agents.
CHAPTER-1 INTRODUCTION
The field of artificial intelligence, or AI, goes further still: it attempts not just to understand but also
to build intelligent entities.
AI is one of the newest sciences. Work started in earnest soon after World War II, and the name
itself was coined in 1956.
Along with molecular biology, AI is regularly cited as the "field I would most like to be in" by
scientists in other disciplines.
A student in physics might reasonably feel that all the good ideas have already been taken by Galileo,
Newton, Einstein, and the rest. AI, on the other hand, still has openings for several full-time
Einsteins.
AI systematizes and automates intellectual tasks and is therefore potentially relevant to any sphere of
human intellectual activity. In this sense, it is truly a universal field.
The Turing Test (Turing, 1950):
Turing predicted that by 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes.
He anticipated all the major arguments against AI raised in the following 50 years.
He suggested the major components of AI: knowledge, reasoning, language understanding, learning, computer
vision & robotics.
Problem: the Turing Test is not reproducible, constructive, or amenable to mathematical analysis.
i. Philosophy (428 B.C.-present):
Can formal rules be used to draw valid conclusions?
How does the mind arise from a physical brain?
Where does knowledge come from?
How does knowledge lead to action?
Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part of the
mind.
iv. Neuroscience (1861-present):
NOTE: Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020.
v. Psychology (1879-present):
viii. Linguistics (1957-present):
B. Early enthusiasm, 1952–69:
Lots of progress was made. Programs were written that could:
– plan and learn,
– play games,
– prove theorems and, in general, solve problems.
A major feature of the period was the use of microworlds: restricted, toy problem domains.
C. A dose of reality, 1966–1973:
General-purpose, brute-force techniques don't work, so knowledge-rich solutions are needed.
The early 1970s saw the emergence of expert systems: systems capable of exploiting knowledge about tightly
focused domains to solve problems normally considered the preserve of human experts.
Ed Feigenbaum’s knowledge principle: [Knowledge] is power, and computers that amplify that knowledge
will amplify every dimension of power.
Expert systems success stories:
o MYCIN — diagnosing blood infections in humans;
o DENDRAL — interpreting mass spectrometer data;
o R1/XCON — configuring DEC VAX hardware;
o PROSPECTOR — finding promising sites for mineral deposits.
Expert systems emphasised knowledge representation: rules, frames, semantic nets.
Problems:
– the knowledge elicitation bottleneck;
– marrying expert system & traditional software;
– breaking into the mainstream.
H. Large datasets, 2001–present:
UNIT-I CHAPTER-2 INTELLIGENT AGENTS
1.2.1 Agents and Environments…
An agent is a computer system that
– is situated in an environment,
– is capable of perceiving its environment, and
– is capable of acting in its environment with the goal of satisfying its design objectives.
(or)
Anything that can be viewed as perceiving its environment through sensors and acting upon that
environment through actuators.
Examples for different agents:
i. Human “agent”:
– environment: physical world;
– sensors: eyes, ears, . . .
– effectors: hands, legs, . . .
ii. Software agent:
– environment: (e.g.) UNIX operating system;
– sensors: keystrokes, file contents, network packets . . .
– effectors: rm, chmod, . . .
iii. Robot:
– environment: physical world;
– sensors: sonar, camera;
– effectors: wheels.
Important terms…
Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever perceived.
Agent function: f : P* → A
Mathematically speaking, we say that an agent's behavior is described by the agent function, which maps any given
percept sequence to an action (here P* denotes the set of all percept sequences and A the set of actions).
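As a minimal illustration of the agent function (hypothetical code for a simple two-square vacuum world, not material from the syllabus), the mapping from a percept sequence to an action might look like this:

# A minimal sketch of an agent function f : P* -> A (illustrative only).
# The percepts and actions below belong to a hypothetical two-square vacuum world.
def vacuum_agent_function(percept_sequence):
    """Map the complete percept sequence to an action.

    Each percept is a (location, status) pair, e.g. ("A", "Dirty"). Only the
    latest percept is needed in this simple case, but the function is still
    defined over the whole history, as f : P* -> A requires.
    """
    location, status = percept_sequence[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Example: the agent has perceived two percepts so far.
print(vacuum_agent_function([("A", "Clean"), ("A", "Dirty")]))  # -> Suck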
1.2.2 Good behavior – Concept of Rationality…
Rational Agent…
A rational agent is one that does the right thing: conceptually speaking, every entry in the table for the agent function
is filled out correctly.
Obviously, doing the right thing is better than doing the wrong thing. The right action is the one that will cause the
agent to be most successful.
Performance measures:
A performance measure embodies the criterion for success of an agent's behavior; it evaluates the sequence of environment states caused by the agent's actions.
Rationality:
What is rational at any given time depends on four things: the performance measure that defines the criterion of success, the agent's prior knowledge of the environment, the actions that the agent can perform, and the agent's percept sequence to date.
1.2.3 The Nature of Environments…
I. Fully observable vs. partially observable…
If an agent's sensors give it access to the complete state of the environment at each point in time, then we say
that the task environment is fully observable.
A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the
choice of action;
An environment might be partially observable because of noisy and inaccurate sensors or because parts of the
state are simply missing from the sensor data.
II. Deterministic vs. stochastic…
If the next state of the environment is completely determined by the current state and the action executed by the
agent, then we say the environment is deterministic; otherwise, it is stochastic.
III. Episodic vs. sequential…
In an episodic task environment, the agent's experience is divided into atomic episodes.
Each episode consists of the agent perceiving and then performing a single action.
Crucially, the next episode does not depend on the actions taken in previous episodes.
In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and
taxi driving are sequential.
IV. Discrete vs. continuous…
A discrete-state environment such as a chess game has a finite number of distinct states.
Chess also has a discrete set of percepts and actions.
Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the
other vehicles sweep through a range of continuous values and do so smoothly over time.
Taxi-driving actions are also continuous (steering angles, etc.).
V. Single agent vs. multiagent…
An agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent
playing chess is in a two-agent environment.
As one might expect, the hardest case is partially observable, stochastic, sequential, dynamic, continuous, and
multiagent.
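As an informal summary example (the environments and labels below follow the usual textbook classification and are given here only for illustration), the properties of two familiar task environments can be recorded as follows:

# Informal classification of two task environments along the dimensions above.
# "Strategic" means deterministic except for the actions of other agents.
environments = {
    "chess with a clock": {
        "observable": "fully",
        "deterministic": "strategic",
        "episodic": "sequential",
        "static": "semi-dynamic",   # the clock keeps running
        "discrete": "discrete",
        "agents": "multi-agent",
    },
    "taxi driving": {
        "observable": "partially",
        "deterministic": "stochastic",
        "episodic": "sequential",
        "static": "dynamic",
        "discrete": "continuous",
        "agents": "multi-agent",
    },
}
for name, properties in environments.items():
    print(name, "->", properties)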
1.2.4 The Structure of Agents…
The job of AI is to design the agent program that implements the agent function mapping percepts to actions.
Agent = architecture + program
The agent programs all have the same skeleton: they take the current percept as input from the sensors and
return an action to the actuators.
Notice the difference between the agent program, which takes the current percept as input, and the agent
function, which takes the entire percept history.
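A minimal sketch of this skeleton (hypothetical names, not prescribed code) is shown below; the program receives only the current percept, and any history it needs must be kept in its own internal state:

# Skeleton of an agent program: it is called with the current percept only and
# returns an action; any percept history it needs must be stored internally.
class AgentProgram:
    def __init__(self):
        self.percepts = []              # internal memory of past percepts

    def __call__(self, percept):
        self.percepts.append(percept)   # remember the history if it is needed
        return self.choose_action(percept)

    def choose_action(self, percept):
        raise NotImplementedError       # supplied by a concrete agent type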
Agent Types/Categories…
i. Table-driven agents:
-- use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup
table.
ii. Simple reflex agents:
-- are based on condition-action rules, implemented with an appropriate production system. They are stateless devices
which do not have memory of past world states.
iii. Agents with memory (Model):
-- have internal state, which is used to keep track of past states of the world.
iv. Agents with goals:
-- are agents that, in addition to state information, have goal information that describes desirable situations. Agents of
this kind take future events into consideration.
v. Utility-based agents:
--base their decisions on classic axiomatic utility theory in order to act rationally.
vi. Learning Agents:
i. Table Driven Agent:
It uses table lookup of percept-action pairs defining all possible condition-action rules necessary to interact in an
environment (a minimal code sketch follows the list of drawbacks below).
Drawbacks:
– Too big to generate and to store (the game tree for chess has roughly 10^120 nodes, for example)
– No knowledge of non-perceptual parts of the current state
– Not adaptive to changes in the environment; requires entire table to be updated if changes occur
– Looping: Can't make actions conditional
– Take a long time to build the table
– No autonomy
– Even with learning, need a long time to learn the table entries
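The hypothetical sketch below makes the idea and its main drawback concrete: every possible percept sequence must appear as a key in the table.

# Table-driven agent: the action is looked up using the entire percept sequence
# as the key. The tiny table below is hypothetical; for a real environment it
# would be astronomically large, which is exactly the problem listed above.
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table
        self.percepts = []

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent(("A", "Clean")))   # -> Right
print(agent(("B", "Dirty")))   # -> Suck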
ii. Simple Reflex Agent:
Characteristics:
Only works if the environment is fully observable.
Lacking history, easily get stuck in infinite loops.
One solution is to randomize actions.
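A sketch of a simple reflex agent for the same hypothetical vacuum world used earlier: the action depends only on the current percept, matched against condition-action rules.

# Simple reflex agent: condition-action rules applied to the current percept
# only; no memory of past percepts is kept.
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":          # rule: if dirty then suck
        return "Suck"
    if location == "A":            # rule: if at A and clean then move right
        return "Right"
    return "Left"                  # rule: if at B and clean then move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left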
iii. Model-Based Reflex Agent:
The most effective way to handle partial observability is for the agent to keep track of the part of the world it
cannot see now.
That is, the agent should maintain some sort of internal state that depends on the percept history and thereby
reflects at least some of the unobserved aspects of the current state.
Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the
agent program.
First, we need some information about how the world evolves independently of the agent; for example,
an overtaking car generally will be closer behind than it was a moment ago.
Second, we need some information about how the agent's own actions affect the world; for example,
when the agent turns the steering wheel clockwise, the car turns to the right, and after driving for five
minutes northbound on the freeway one is usually about five miles north of where one was five minutes
ago.
This knowledge about "how the world works", whether implemented in simple Boolean circuits or
in complete scientific theories, is called a model of the world. An agent that uses such a model is called a
model-based agent.
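A minimal sketch of a model-based reflex agent (the update and rule functions are hypothetical placeholders for the two kinds of knowledge described above):

# Model-based reflex agent: keeps an internal state, updated using (a) a model
# of how the world evolves and (b) a model of what the agent's actions do.
# Both update_state and rules are hypothetical functions supplied by the designer.
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                   # internal picture of the current world state
        self.last_action = None
        self.update_state = update_state  # (state, last_action, percept) -> new state
        self.rules = rules                # state -> action (condition-action rules)

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action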
iv. Goal-Based Agent:
Knowing about the current state of the environment is not always enough to decide what to do. For example,
at a road junction, the taxi can turn left, turn right, or go straight on.
The correct decision depends on where the taxi is trying to get to.
In other words, as well as a current state description, the agent needs some sort of goal information that
describes situations that are desirable; for example, being at the passenger's destination.
The agent program can combine this with information about the results of possible actions (the same
information as was used to update internal state in the reflex agent) in order to choose actions that achieve the
goal.
Sometimes goal-based action selection is straightforward—for example, when goal satisfaction results
immediately from a single action. Sometimes it will be more tricky—for example, when the agent has to
consider long sequences of twists and turns in order to find a way to achieve the goal.
Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent’s
goals.
Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports
its decisions is represented explicitly and can be modified.
If it starts to rain, the agent can update its knowledge of how effectively its brakes will operate; this will
automatically cause all of the relevant behaviors to be altered to suit the new conditions.
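A rough sketch of how a goal-based agent could look ahead (here a simple breadth-first search over a hypothetical successors model and goal test stands in for the search and planning techniques mentioned above):

from collections import deque

# Goal-based agent sketch: search forward through a model of action results until
# a state satisfying the goal is found, then return the first action on that path.
# successors(state) -> iterable of (action, next_state) and is_goal(state) -> bool
# are hypothetical placeholders; states are assumed to be hashable.
def goal_based_action(state, successors, is_goal):
    frontier = deque([(state, [])])          # (state, actions taken so far)
    visited = {state}
    while frontier:
        current, actions = frontier.popleft()
        if is_goal(current):
            return actions[0] if actions else None   # already at the goal
        for action, next_state in successors(current):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                              # no action sequence reaches the goal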
v. Utility-Based Agent:
Goals alone are not really enough to generate high-quality behavior in most environments. For example, there
are many action sequences that will get the taxi to its destination (thereby achieving the goal) but some are
quicker, safer, more reliable, or cheaper than others.
Goals just provide a crude binary distinction between "happy" and "unhappy" states, whereas a more general
performance measure should allow a comparison of different world states according to exactly how happy
they would make the agent if they could be achieved.
Because "happy" does not sound very scientific, the customary terminology is to say that if one world state is
preferred to another, then it has higher utility for the agent.
A utility function maps a state (or a sequence of states) onto a real number, which describes the associated
degree of happiness.
A complete specification of the utility function allows rational decisions in two kinds of cases where goals are
inadequate.
First, when there are conflicting goals, only some of which can be achieved (for example, speed and safety), the
utility function specifies the appropriate tradeoff.
Second, when there are several goals that the agent can aim for, none of which can be achieved with certainty,
utility provides a way in which the likelihood of success can be weighed up against the importance of the
goals.
Partial observability and stochasticity are ubiquitous in the real world, and so, therefore, is decision making
under uncertainty.
Technically speaking, a rational utility-based agent chooses the action that maximizes the expected utility of
the action outcomes—that is, the utility the agent expects to derive, on average, given the probabilities and
utilities of each outcome.
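A minimal sketch of this choice rule (the outcomes and utility functions are hypothetical placeholders), selecting the action with the highest expected utility:

# Expected-utility action selection (sketch). For each action, outcomes(state, action)
# returns (probability, resulting_state) pairs and utility(state) returns a real
# number; both are hypothetical placeholders supplied by the designer.
def best_action(state, actions, outcomes, utility):
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions, key=expected_utility)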
vi. Learning Agents:
a. Learning element - is responsible for making improvements.
b. Performance element - is responsible for selecting external actions. The performance element is what we have
previously considered to be the entire agent: it takes in percepts and decides on actions.
c. Critic - provides feedback on how the agent is doing with respect to a fixed performance standard; the learning
element uses this feedback to determine how the performance element should be modified to do better in the future.
d. Problem generator - It is responsible for suggesting actions that will lead to new and informative experiences.
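A skeletal sketch of how the four components might fit together (all names are illustrative placeholders, not a prescribed design):

# Skeleton of a learning agent: the critic evaluates behaviour against a fixed
# performance standard, the learning element uses that feedback to adjust the
# performance element, and the problem generator occasionally proposes
# exploratory actions that lead to new and informative experiences.
class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # percept -> action
        self.learning_element = learning_element        # (performance_element, feedback) -> None
        self.critic = critic                            # percept -> feedback
        self.problem_generator = problem_generator      # () -> exploratory action or None

    def __call__(self, percept):
        feedback = self.critic(percept)
        self.learning_element(self.performance_element, feedback)
        exploratory = self.problem_generator()
        if exploratory is not None:
            return exploratory
        return self.performance_element(percept)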
*********************