
Module 1

Introduction to Artificial Intelligence


• Homo sapiens: the name is Latin for "wise man".

• Philosophy of AI - “Can a machine think and behave like humans do?”

• In simple words, Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to the way intelligent humans think.

• Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.

• AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
What is AI?

Views of AI fall into four categories:


i. Thinking humanly
ii. Thinking rationally
iii. Acting humanly
iv. Acting rationally
Thinking humanly: The cognitive modeling approach
• If we are going to say that a given program thinks like
a human, we must have some way of determining
how humans think.
• We need to get inside the actual working of human
minds.
• There are three ways to do it:
I. Through introspection: trying to catch our own thoughts as they go by.
II. Through psychological experiments: observing a person in action.
III. Through brain imaging: observing the brain in action.
• Comparison of the trace of computer program reasoning steps to
traces of human subjects solving the same problem.
• Cognitive Science brings together computer models from AI and
experimental techniques from psychology to try to construct precise
and testable theories of the working of the human mind.
• Cognitive Science is now distinct from AI, though the two fields continue to fertilize each other in the areas of vision and natural language.
• Once we have a sufficiently precise theory of the mind, it becomes
possible to express the theory as a computer program.
• If the program’s input-output behaviour matches corresponding
human behaviour, that is evidence that the program’s mechanisms
could also be working in humans.
• For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver".
Thinking rationally: The “laws of thought” approach
• Aristotle was one of the first to attempt to codify "right thinking", that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises.
E.g., Socrates is a man;
All men are mortal;
Therefore, Socrates is mortal. (logic)
There are two main obstacles to this approach.
1. It is not easy to take informal knowledge and state it in the
formal terms required by logical notation, particularly when
the knowledge is less than 100% certain.
2. Second, there is a big difference between solving a problem "in principle" and solving it in practice.
Acting humanly: The Turing Test approach
• Turing (1950) wrote "Computing Machinery and Intelligence".
• "Can machines think?" or "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation
Game.
• A computer passes the test if a human interrogator, after
posing some written questions, cannot tell whether the
written responses come from a person or from a machine.
• Suggested major components of AI: knowledge,
reasoning, language understanding, learning.
To pass the Turing Test, the computer would need to possess the following capabilities:
• Natural Language Processing: to enable it to communicate successfully in English.
• Knowledge representation: to store what it knows or hears.
• Automated reasoning: to use the stored information to answer questions and to draw new conclusions.
• Machine Learning: to adapt to new circumstances and to detect and extrapolate patterns.
To pass the Total Turing Test, the computer additionally needs:
• Computer vision: to perceive objects.
• Robotics: to manipulate objects and move about.
Acting rationally: The rational agent approach
• An agent is just something that acts.
• All computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
• A rational agent is one that acts so as to achieve the best
outcome or, when there is uncertainty, the best expected
outcome.
• In the "laws of thought" approach to AI, the emphasis was on correct inferences.
• On the other hand, correct inference is not all of rationality; in
some situations, there is no provably correct thing to do, but
something must still be done.
• For example, recoiling from a hot stove is a reflex action that is
usually more successful than a slower action taken after careful
deliberation.
What does it mean for a person/system to "behave rationally"?
– Take the right/best action to achieve the goals, based on his/its knowledge and belief.
– Example: Assume I don't want to get wet in the rain (my goal), so I bring an umbrella (my action). Do I behave rationally?
– The answer is dependent on my knowledge and
belief.
– If I’ve heard the forecast for rain and I believe it,
then bringing the umbrella is rational.
– If I’ve not heard the forecast for rain and I do not
believe that it is going to rain, then bringing the
umbrella is not rational.
“Behave rationally” does not always achieve the goals
successfully
• Example:
– My goals: (i) do not get wet if it rains; (ii) do not look stupid (such as bringing an umbrella when it is not raining)
– My knowledge/belief – weather forecast for rain and I
believe it
– My rational behavior – bring an umbrella
– The outcome of my behavior: If rain, then my rational
behavior achieves both goals; If no rain, then my rational
behavior fails to achieve the 2nd goal
• The success of "behaving rationally" is limited by my knowledge and belief.
THE STATE OF THE ART
What can AI do today?
Robotic vehicles:
A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.
Speech recognition:
A traveler calling United Airlines to book a flight can
have the entire conversation guided by an automated
speech recognition and dialog management system.
Autonomous planning and scheduling:
• A hundred million miles from Earth, NASA’s
Remote Agent program became the first on-board
autonomous planning program to control the
scheduling of operations for a spacecraft (Jonsson
et al., 2000).
• REMOTE AGENT generated plans from high-
level goals specified from the ground and
monitored the execution of those plans—detecting,
diagnosing, and recovering from problems as they
occurred. The successor program MAPGEN (Al-Chang
et al., 2004) plans the daily operations for NASA's
Mars Exploration Rovers.
Game playing:
• IBM’s DEEP BLUE became the first computer program to
defeat the world champion in a chess match when it bested
Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match
(Goodman and Keene, 1997).
• Kasparov said that he felt a “new kind of intelligence” across
the board from him. Newsweek magazine described the match
as “The brain’s last stand.” The value of IBM’s stock increased
by $18 billion.
• Human champions studied Kasparov’s loss and were able to
draw a few matches in subsequent years, but the most recent
human-computer matches have been won convincingly by the
computer.
Spam fighting:
• Each day, learning algorithms classify over a billion
messages as spam, saving the recipient from having
to waste time deleting what, for many users, could
comprise 80% or 90% of all messages, if not
classified away by algorithms.
• Because the spammers are continually updating their
tactics, it is difficult for a static programmed
approach to keep up, and learning algorithms work
best (Sahami et al., 1998; Goodman and Heckerman,
2004).
Logistics planning:
• During the Persian Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Re-planning Tool, DART (Cross and
Walker, 1994), to do automated logistics planning and
scheduling for transportation.
• This involved up to 50,000 vehicles, cargo, and people at a
time, and had to account for starting points, destinations, routes,
and conflict resolution among all parameters.
• The AI planning techniques generated in hours a plan that
would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.
Robotics:
• The iRobot Corporation has sold over two million Roomba robotic
vacuum cleaners for home use.
• The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear
explosives, and identify the location of snipers.

Machine Translation:
• A computer program automatically translates from Arabic to
English, allowing an English speaker to see the headline “Ardogan
Confirms That Turkey Would Not Accept Any Pressure, Urging Them
to Recognize Cyprus.”
• The program uses a statistical model built from examples of Arabic-
to-English translations and from examples of English text totaling
two trillion words (Brants et al., 2007).
• None of the computer scientists on the team speak Arabic, but
they do understand statistics and machine learning algorithms.
Chapter 2: Intelligent Agents
Agents and environment: An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This simple idea is illustrated in Figure 2.1.

Percept − It is the agent's perceptual input at a given instant.
Percept Sequence − It is the history of all that an agent has perceived till date.
Agent Function − It is a map from the percept sequence to an action.
Performance Measure of Agent − It is the criterion that determines how successful an agent is.
Behavior of Agent − It is the action that the agent performs after any given sequence of percepts.
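These notions can be made concrete in a short Python sketch (illustrative only; the class and method names are assumptions, not from the textbook): the agent records its percept sequence and maps it to an action.

class Agent:
    """Minimal agent skeleton: perceives, remembers, and acts."""
    def __init__(self):
        self.percept_sequence = []  # history of everything perceived to date

    def agent_function(self, percept):
        # The agent function maps the full percept sequence to an action.
        self.percept_sequence.append(percept)
        return self.select_action()

    def select_action(self):
        # Concrete agents override this with their decision logic.
        raise NotImplementedError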
The vacuum-cleaner world shown in Figure 2.2.
• This particular world has just two locations: squares A and B.
The vacuum agent perceives which square it is in and whether
there is dirt in the square.
• It can choose to move left, move right, suck up the dirt, or do
nothing.
• One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square.
• A partial tabulation of this agent function is shown in Figure 2.3, and an agent program that implements it is given in Figure 2.8.
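As a hedged sketch, this agent function can be written in a few lines of Python (the encoding of a percept as a (location, status) pair is an assumption; the logic mirrors the tabulated agent function):

def reflex_vacuum_agent(percept):
    # A percept is a (location, status) pair, e.g. ('A', 'Dirty').
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # -> 'Suck'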
Good Behavior: The Concept of Rationality

• A rational agent is one that does the right thing: conceptually speaking, every entry in the table for the agent function is filled out correctly.

Rationality
• Rational at any given time depends on four things:
– The performance measure that defines the criterion of
success.
– The agent's prior knowledge of the environment.
– The actions that the agent can perform.
– The agent's percept sequence to date.
• A definition of a rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Consider the simple vacuum-cleaner agent that cleans a
square if it is dirty and moves to the other square if not;
this is the agent function tabulated in Figure 2.3. Is this a
rational agent? That depends! First, we need to say what
the performance measure is, what is known about the
environment, and what sensors and actuators the agent has.
Let us assume the following:
• The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1,000 time steps.
• The “geography” of the environment is known a priori
(Figure 2.2) but the dirt distribution and the initial location of
the agent are not. Clean squares stay clean and sucking
cleans the current square. The Left and Right actions move
the agent left and right except when this would take the
agent outside the environment, in which case the agent
remains where it is.
• The only available actions are Left, Right, and Suck.
• The agent correctly perceives its location and whether that
location contains dirt. We claim that under these
circumstances the agent is indeed rational.
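To make this performance measure concrete, here is a minimal simulation sketch, assuming the environment dynamics described above and reusing the reflex_vacuum_agent sketched earlier:

import random

def simulate(agent, steps=1000):
    # Assumed dynamics of the two-square vacuum world.
    status = {'A': random.choice(['Clean', 'Dirty']),
              'B': random.choice(['Clean', 'Dirty'])}
    location = random.choice(['A', 'B'])
    score = 0
    for _ in range(steps):
        action = agent((location, status[location]))
        if action == 'Suck':
            status[location] = 'Clean'
        elif action == 'Right':
            location = 'B'
        elif action == 'Left':
            location = 'A'
        # Award one point for each square that is clean at this time step.
        score += sum(1 for square in status if status[square] == 'Clean')
    return score

# With the reflex agent, the score approaches the maximum of
# 2 squares x 1000 steps = 2000 points.
print(simulate(reflex_vacuum_agent))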
The nature of environment
• Task environments, which are essentially the
“problems” to which rational agents are the
“solutions.”
• The specification of the performance measure, the environment, and the agent's actuators and sensors is called the PEAS (Performance, Environment, Actuators, Sensors) description.
• In designing an agent, the first step must always be
to specify the task environment as fully as possible.
• PEAS description of an automated taxi driver.
• What is the performance measure to which we would like our
automated driver to aspire? Desirable qualities include getting to
the correct destination; minimizing fuel consumption and wear
and tear; minimizing the trip time or cost; minimizing violations
of traffic laws and disturbances to other drivers; maximizing
safety and passenger comfort; maximizing profits. Obviously,
some of these goals conflict, so tradeoffs will be required.

What is the driving environment that the taxi will face? Any taxi
driver must deal with a variety of roads, ranging from rural lanes
and urban alleys to 12-lane freeways. The roads contain other
traffic, pedestrians, stray animals, road works, police cars,
puddles, and potholes. The taxi must also interact with potential
and actual passengers.
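A PEAS description can be recorded as a simple data structure; the sketch below is illustrative (the PEAS class is an assumption, and the field contents paraphrase the taxi discussion above):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # criteria of success
    environment: list  # where the agent operates
    actuators: list    # how the agent acts
    sensors: list      # how the agent perceives

taxi = PEAS(
    performance=['correct destination', 'minimal fuel and wear',
                 'minimal trip time/cost', 'legality', 'safety', 'comfort'],
    environment=['roads', 'other traffic', 'pedestrians', 'passengers'],
    actuators=['steering', 'accelerator', 'brake', 'signal', 'horn'],
    sensors=['cameras', 'speedometer', 'GPS', 'sonar'],
)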
Properties of the Agent's State of Knowledge: Known vs. Unknown
• Describes the agent’s (or designer’s) state of knowledge
about the “laws of physics” of the environment
– if the environment is known, then the outcomes (or outcome
probabilities if stochastic) for all actions are given.
– if the environment is unknown, then the agent will have to learn
how it works in order to make good decisions
• This distinction is orthogonal to the other task-environment properties: known is not the same as fully observable.
• A known environment can be partially observable (e.g., solitaire card games).
• An unknown environment can be fully observable (e.g., a game whose rules I don't know).
The structure of agents

• Agent = Architecture + Program


• The job of AI: design an agent program implementing the agent function
• The agent program runs on some computing device with physical
sensors and actuators: the agent architecture
• All agents have the same skeleton:
– Input: current percepts
– Output: action
– Program: manipulates input to produce output.
• The agent function takes the entire percept history as input
• The agent program takes only the current percept as input.
• if the actions need to depend on the entire percept sequence, the
agent will have to remember the percepts
The Table-Driven Agent
The table represents the agent function explicitly, e.g. for the simple vacuum cleaner, as sketched below.
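A minimal Python sketch of the table-driven agent program (the lookup scheme follows the idea of an explicit agent-function table; the concrete table entries are illustrative assumptions):

percepts = []  # the full percept sequence observed so far

# Illustrative table mapping percept sequences (as tuples) to actions.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
    # ... one entry for every possible percept sequence
}

def table_driven_agent(percept):
    # Look up the entire percept history in the table.
    percepts.append(percept)
    return table.get(tuple(percepts), 'NoOp')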


Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time.
These are given below:
• Simple Reflex Agent
• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent
Simple reflex agents

• The Simple reflex agents are the simplest agents. These agents take
decisions on the basis of the current percepts and ignore the rest
of the percept history.
• These agents only succeed in the fully observable environment.
• The Simple reflex agent does not consider any part of percepts
history during their decision and action process.
• The Simple reflex agent works on the condition-action rule, which means it maps the current state to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
• Problems with the simple reflex agent design approach (see the sketch after this list):
– They have very limited intelligence.
– They do not have knowledge of non-perceptual parts of the current state.
– The condition-action table is mostly too big to generate and to store.
– They are not adaptive to changes in the environment.
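A minimal sketch of the condition-action rule scheme, assuming an illustrative rule set and a hypothetical interpret_input helper:

def interpret_input(percept):
    # Hypothetical helper: a real agent would abstract raw sensor data here.
    return percept

# Illustrative condition-action rules: (condition predicate, action).
rules = [
    (lambda state: state == 'Dirty', 'Suck'),
    (lambda state: state == 'ObstacleAhead', 'Brake'),
]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in rules:
        if condition(state):  # the first matching rule fires
            return action
    return 'NoOp'  # no rule matched the current percept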
Model-based reflex agent
• The Model-based agent can work in a partially observable
environment, and track the situation.
• A model-based agent has two important factors:
– Model: It is knowledge about "how things happen in the world,"
so it is called a Model-based agent.
– Internal State: It is a representation of the current state based on
percept history.
• These agents have the model, "which is knowledge of the
world" and based on the model they perform actions.
• Updating the agent state requires information about:
– How the world evolves
– How the agent's action affects the world.
• For the braking problem, the internal state is not too
extensive— just the previous frame from the camera, allowing
the agent to detect when two red lights at the edge of the
vehicle go on or off simultaneously.
• For other driving tasks such as changing lanes, the agent
needs to keep track of where the other cars are if it can’t see
them all at once. And for any driving to be possible at all, the
agent needs to keep track of where its keys are.
• Updating this internal state information as time goes by
requires two kinds of knowledge to be encoded in the agent
program.
• First, we need some information about how the world evolves
independently of the agent—for example, that an overtaking
car generally will be closer behind than it was a moment ago.
• Second, we need some information about how the
agent’s own actions affect the world—for example,
that when the agent turns the steering wheel
clockwise, the car turns to the right, or that after
driving for five minutes northbound on the freeway,
one is usually about five miles north of where one
was five minutes ago.
• This knowledge about “how the world works”—
whether implemented in simple Boolean circuits or in
complete scientific theories—is called a model of the
world. An agent that uses such a model is called a
model-based agent.
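A hedged skeleton of a model-based reflex agent (the structure follows the description above; update_state and rule_match are hypothetical placeholders for the model and the rules):

def update_state(state, last_action, percept):
    # Hypothetical model: a real agent would combine how the world evolves
    # and how its own actions affect the world; here we keep the percept.
    return percept

def rule_match(state):
    # Hypothetical condition-action rules over the internal state.
    return 'Suck' if state == 'Dirty' else 'NoOp'

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = None        # internal representation of the world
        self.last_action = None  # needed to predict the action's effect

    def __call__(self, percept):
        self.state = update_state(self.state, self.last_action, percept)
        self.last_action = rule_match(self.state)
        return self.last_action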
Goal-based agents
• The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.

• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by
having the "goal" information.

• They choose an action, so that they can achieve the goal.

• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
• Sometimes goal-based action selection is straightforward: for example, when goal satisfaction results immediately from a single action.
• Sometimes it will be trickier: for example, when the agent has to consider long sequences of twists and turns to find a way to achieve the goal.
• Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
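As a minimal illustration of searching for an action sequence, here is a breadth-first-search sketch (the is_goal and successors functions are problem-specific assumptions):

from collections import deque

def plan_to_goal(start, is_goal, successors):
    # Breadth-first search for an action sequence reaching a goal state.
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan  # the sequence of actions that achieves the goal
        for action, next_state in successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # the goal is unreachable

# Example: plan a move to square B in the two-square vacuum world.
moves = lambda s: [('Right', 'B')] if s == 'A' else [('Left', 'A')]
print(plan_to_goal('A', lambda s: s == 'B', moves))  # -> ['Right']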
Utility-based agents
• These agents are similar to the goal-based agent but
provide an extra component of utility measurement
which makes them different by providing a measure of
success at a given state.
• Utility-based agents act based not only on goals but also on the best way to achieve the goal.
• The Utility-based agent is useful when there are multiple
possible alternatives, and an agent has to choose in
order to perform the best action.
• The utility function maps each state to a real number to
check how efficiently each action achieves the goals.
Advantages of utility-based agents over goal-based agents:
• With conflicting goals, utility specifies an appropriate tradeoff.
• With several goals, none of which can be achieved with certainty, utility selects the proper tradeoff between the importance of the goals and the likelihood of success.
Remaining drawbacks:
• Still complicated to implement.
• Require sophisticated perception, reasoning, and learning.
• May require expensive computation.
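A utility-based choice can be sketched as picking the action with the highest expected utility (illustrative only; the outcome model and utility function are assumptions):

def expected_utility(action, outcomes, utility):
    # outcomes(action) yields (probability, resulting state) pairs.
    return sum(p * utility(s) for p, s in outcomes(action))

def utility_based_choice(actions, outcomes, utility):
    # Pick the action that maximizes expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Example: two routes with uncertain travel times (utility = -minutes).
outcomes = lambda a: {'highway': [(0.8, 20), (0.2, 60)],
                      'back_road': [(1.0, 35)]}[a]
print(utility_based_choice(['highway', 'back_road'], outcomes,
                           lambda minutes: -minutes))  # -> 'highway'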
Learning Agents
• Problem: the previous agent programs describe methods for selecting actions, but how are these agent programs themselves programmed?
• Programming by hand is inefficient and ineffective!
• Solution: build learning machines and then teach them (rather than instruct them).
• Advantage: robustness of the agent program toward initially-unknown environments.
• Performance element: selects actions based on percepts; corresponds to the previous agent programs.
• Learning element: introduces improvements; uses feedback from the critic on how the agent is doing and determines improvements for the performance element.
• Critic: tells how the agent is doing with respect to the performance standard.
• Problem generator: suggests actions that will lead to new and informative experiences, forcing exploration of new, stimulating scenarios. (A minimal wiring sketch follows this list.)
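A minimal wiring sketch of these four components (illustrative structure only; each component is supplied as a caller-defined function):

class LearningAgent:
    """Wires the four components together; each is a callable."""
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects actions
        self.learning_element = learning_element        # proposes improvements
        self.critic = critic                            # scores behavior
        self.problem_generator = problem_generator      # suggests experiments

    def step(self, percept):
        feedback = self.critic(percept)  # judge vs. the performance standard
        self.learning_element(feedback, self.performance_element)
        exploratory_action = self.problem_generator(percept)
        # Explore when the problem generator suggests it; otherwise exploit.
        return exploratory_action or self.performance_element(percept)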
Example: Taxi Driving
• After the taxi makes a quick left turn across three lanes, the critic
observes the shocking language used by other drivers.
• From this experience, the learning element formulates a rule saying
this was a bad action.
• The performance element is modified by adding the new rule.
• The problem generator might identify certain areas of behavior in
need of improvement, and suggest trying out the brakes on different
road surfaces under different conditions.
