Introduction To Artificial Intelligence
• AI is accomplished by studying how the human brain thinks and how humans
learn, decide, and work while trying to solve a problem, and then using the
outcomes of this study as a basis for developing intelligent software and systems.
What is AI?
Machine Translation:
• A computer program automatically translates from Arabic to
English, allowing an English speaker to see the headline “Ardogan
Confirms That Turkey Would Not Accept Any Pressure, Urging Them
to Recognize Cyprus.”
• The program uses a statistical model built from examples of Arabic-
to-English translations and from examples of English text totaling
two trillion words (Brants et al., 2007).
• None of the computer scientists on the team speaks Arabic, but
they do understand statistics and machine learning algorithms.
Chap-2: Intelligent Agents
Agents and environment: An agent is anything that can be viewed as
perceiving its environment through sensors and acting upon that
environment through actuators. This simple idea is illustrated in Figure 2.1.
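A minimal sketch of this percept-action loop in Python (the Agent and Environment interfaces and the method names percept() and execute() are illustrative assumptions, not from the text or Figure 2.1):

class Agent:
    def program(self, percept):
        """Map the current percept to an action (overridden by each agent type)."""
        raise NotImplementedError

def run(agent, environment, steps):
    """Couple an agent to its environment: sense, decide, act, repeat."""
    for _ in range(steps):
        percept = environment.percept()    # read the sensors
        action = agent.program(percept)    # run the agent program
        environment.execute(action)       # drive the actuators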
Rationality
• What is rational at any given time depends on four things:
– The performance measure that defines the criterion of
success.
– The agent's prior knowledge of the environment.
– The actions that the agent can perform.
– The agent's percept sequence to date.
• A definition of a rational agent: For each possible
percept sequence, a rational agent should select an
action that is expected to maximize its performance
measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent
has.
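One literal reading of this definition is a table-driven agent whose table maps every possible percept sequence to an action, as in AIMA's table-driven-agent pseudocode. The sketch below assumes such a table is given; it is impractical in general, since the table grows with every new percept:

def table_driven_agent(table):
    """Return an agent program that looks up actions by the percept sequence to date."""
    percepts = []  # the percept sequence observed so far
    def program(percept):
        percepts.append(percept)
        # Look up the action for the full percept sequence; None if absent.
        return table.get(tuple(percepts))
    return program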
Consider the simple vacuum-cleaner agent that cleans a
square if it is dirty and moves to the other square if not;
this is the agent function tabulated in Figure 2.3. Is this a
rational agent? That depends! First, we need to say what
the performance measure is, what is known about the
environment, and what sensors and actuators the agent has.
Let us assume the following:
• The performance measure awards one point for each clean
square at each time step, over a “lifetime” of 1000 time
steps.
• The “geography” of the environment is known a priori
(Figure 2.2) but the dirt distribution and the initial location of
the agent are not. Clean squares stay clean and sucking
cleans the current square. The Left and Right actions move
the agent left and right except when this would take the
agent outside the environment, in which case the agent
remains where it is.
• The only available actions are Left, Right, and Suck.
• The agent correctly perceives its location and whether that
location contains dirt. We claim that under these
circumstances the agent is indeed rational.
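As a rough illustration, the sketch below simulates this two-square vacuum world under the assumptions above and scores the reflex agent with the stated performance measure (one point per clean square per time step, over 1000 steps); the function and variable names are my own, not from Figure 2.3:

import random

def reflex_vacuum_agent(location, dirty):
    """Agent function: suck if the current square is dirty, else move to the other square."""
    if dirty:
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

def simulate(steps=1000):
    """Random dirt and start location (unknown a priori); return the total score."""
    dirt = {'A': random.choice([True, False]), 'B': random.choice([True, False])}
    location = random.choice(['A', 'B'])
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, dirt[location])
        if action == 'Suck':
            dirt[location] = False    # sucking cleans the current square
        elif action == 'Right':
            location = 'B'            # moving off the edge leaves the agent in place
        elif action == 'Left':
            location = 'A'
        score += sum(1 for square in dirt if not dirt[square])  # one point per clean square
    return score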
The nature of environments
• Task environments are essentially the
“problems” to which rational agents are the
“solutions.”
• The specification of the performance measure, the
environment, and the agent’s actuators and sensors is
called the PEAS (Performance, Environment,
Actuators, Sensors) description.
• In designing an agent, the first step must always be
to specify the task environment as fully as possible.
• PEAS description of an automated taxi driver.
• What is the performance measure to which we would like our
automated driver to aspire? Desirable qualities include getting to
the correct destination; minimizing fuel consumption and wear
and tear; minimizing the trip time or cost; minimizing violations
of traffic laws and disturbances to other drivers; maximizing
safety and passenger comfort; maximizing profits. Obviously,
some of these goals conflict, so tradeoffs will be required.
• What is the driving environment that the taxi will face? Any taxi
driver must deal with a variety of roads, ranging from rural lanes
and urban alleys to 12-lane freeways. The roads contain other
traffic, pedestrians, stray animals, road works, police cars,
puddles, and potholes. The taxi must also interact with potential
and actual passengers.
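As a sketch, a PEAS description can be recorded as a simple data structure. The dataclass below restates the taxi example; the actuator and sensor entries follow the standard automated-taxi description and are an assumption here, since the text above lists only the performance measure and the environment:

from dataclasses import dataclass

@dataclass
class PEAS:
    """PEAS (Performance, Environment, Actuators, Sensors) task description."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=['correct destination', 'minimal fuel and wear', 'minimal trip time/cost',
                 'few traffic violations', 'safety and comfort', 'profit'],
    environment=['roads', 'other traffic', 'pedestrians', 'stray animals', 'passengers'],
    actuators=['steering', 'accelerator', 'brake', 'signal', 'horn', 'display'],  # assumed
    sensors=['cameras', 'sonar', 'speedometer', 'GPS', 'odometer', 'keyboard'],   # assumed
)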
Properties of the Agent’s State of Knowledge: Known vs. unknown
• Describes the agent’s (or designer’s) state of knowledge
about the “laws of physics” of the environment
– if the environment is known, then the outcomes (or outcome
probabilities if stochastic) for all actions are given.
– if the environment is unknown, then the agent will have to learn
how it works in order to make good decisions
• This distinction is orthogonal to the other task-environment
properties: known is not the same as fully observable.
• a known environment can be partially observable (Ex: a
solitaire card game, where I know the rules but cannot see
the cards that have not yet been turned over)
• an unknown environment can be fully observable (Ex: a game
whose rules I don't know, even though I can see the whole state)
The structure of agents
• Agents can be grouped into five classes based on their
degree of perceived intelligence and capability. All these
agents can improve their performance and generate
better actions over time.
These are given below:
• Simple Reflex Agent
• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent
Simple reflex agents
• Simple reflex agents are the simplest agents. They make
decisions on the basis of the current percept and ignore the rest
of the percept history.
• These agents succeed only in fully observable environments.
• The simple reflex agent does not consider any part of the percept
history during its decision and action process.
• The simple reflex agent works on condition-action rules, which
map the current state directly to an action. For example, a
room-cleaner agent sucks only if there is dirt in the room (see
the sketch after the list below).
• Problems for the simple reflex agent design approach:
– They have very limited intelligence
– They do not have knowledge of non-perceptual parts of the current state
– The condition-action rule tables are mostly too big to generate and to store.
– Not adaptive to changes in the environment.
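A minimal sketch of the condition-action rule idea for the room-cleaner example; the rule encoding and the percept format (a dict with 'location' and 'dirty' keys) are illustrative assumptions:

# Each rule pairs a condition on the current percept with an action.
RULES = [
    (lambda percept: percept['dirty'], 'Suck'),
    (lambda percept: percept['location'] == 'A', 'Right'),
    (lambda percept: percept['location'] == 'B', 'Left'),
]

def simple_reflex_agent(percept):
    """Act on the current percept only; no percept history is stored."""
    for condition, action in RULES:
        if condition(percept):
            return action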
Model-based reflex agent
• The model-based agent can work in a partially observable
environment and track the situation.
• A model-based agent has two important factors:
– Model: It is knowledge about "how things happen in the world,"
so it is called a Model-based agent.
– Internal State: It is a representation of the current state based on
percept history.
• These agents have the model, "which is knowledge of the
world" and based on the model they perform actions.
• Updating the agent state requires information about:
– How the world evolves
– How the agent's action affects the world.
• For the braking problem, the internal state is not too
extensive: just the previous frame from the camera, allowing
the agent to detect when two red lights at the edge of the
vehicle go on or off simultaneously.
• For other driving tasks such as changing lanes, the agent
needs to keep track of where the other cars are if it can’t see
them all at once. And for any driving to be possible at all, the
agent needs to keep track of where its keys are.
• Updating this internal state information as time goes by
requires two kinds of knowledge to be encoded in the agent
program.
• First, we need some information about how the world evolves
independently of the agent—for example, that an overtaking
car generally will be closer behind than it was a moment ago.
• Second, we need some information about how the
agent’s own actions affect the world—for example,
that when the agent turns the steering wheel
clockwise, the car turns to the right, or that after
driving for five minutes northbound on the freeway,
one is usually about five miles north of where one
was five minutes ago.
• This knowledge about “how the world works”—
whether implemented in simple Boolean circuits or in
complete scientific theories—is called a model of the
world. An agent that uses such a model is called a
model-based agent.
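A sketch of the agent skeleton this describes: the internal state is updated from the old state, the most recent action, and the new percept, using the model of how the world evolves and how the agent's actions affect it; the class and method names are illustrative assumptions:

class ModelBasedReflexAgent:
    """Reflex agent with internal state, for partially observable environments."""
    def __init__(self, model, rules):
        self.state = {}          # internal state: the agent's best guess at the world
        self.last_action = None  # most recent action, needed to predict its effects
        self.model = model       # how the world evolves and how actions change it
        self.rules = rules       # condition-action rules, matched against the state

    def program(self, percept):
        # Combine the old state, the last action, and the new percept into a new state.
        self.state = self.model(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action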
Goal-based agents
• Knowledge of the current state of the environment is not always sufficient
for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by
having the "goal" information.
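As a minimal sketch, goal information lets the agent pick actions by predicting their outcomes with the model and testing them against the goal. The one-step lookahead and the names below (goal_test, result) are simplifying assumptions; a real goal-based agent searches or plans over whole action sequences:

def goal_based_agent(state, actions, result, goal_test):
    """Choose an action whose predicted outcome satisfies the goal.

    result(state, action) is the model's prediction of the next state.
    """
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None  # no single action reaches the goal; multi-step search would be needed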