Unit 1
Fundamentals of AI
(Artificial Intelligence)
Topics to be covered
What is AI?
Four Main Approaches to Artificial Intelligence
The Foundations of Artificial Intelligence
The History of Artificial Intelligence
The State of the Art (Applications of AI)
Agents and Environments
The Concept of Rationality
Omniscience, learning and autonomy
The Nature of Environments
Specifying the task environment
Properties of task environments
The Structure of Agents
Agent programs
Types of agent programs
What is AI?
AI is a branch of computer science dealing with the simulation of intelligent behaviour in
computers.
AI is the study of how to make computers do things which, at the moment, people do better.
AI is the study and design of intelligent agents where an intelligent agent is a system that
perceives its environment and takes actions.
According to John McCarthy (the father of AI), AI is the science and engineering of making
intelligent machines, especially intelligent computer programs (1956).
Agents and Environments
Figure: Partial tabulation of a simple agent function for the vacuum-cleaner world
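To make the tabulation concrete, here is a minimal sketch in Python, assuming the standard two-square vacuum world with locations A and B; the location names and action strings are illustrative assumptions, not fixed by the figure.

# Table-driven vacuum agent: a lookup from the current percept to an action.
# Assumes the two-square world (locations "A" and "B"); for this simple
# agent the latest percept alone determines the action.
AGENT_TABLE = {
    ("A", "Clean"): "Right",  # square is clean, move to the other square
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(percept):
    """Return the action tabulated for a (location, status) percept."""
    return AGENT_TABLE[percept]

print(vacuum_agent(("A", "Dirty")))  # -> Suck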
The Concept of Rationality
A rational agent is one that does the right thing - conceptually speaking, every entry in
the table for the agent function is filled out correctly.
Doing the right thing is better than doing the wrong thing. The right action is the one
that will cause the agent to be most successful. Therefore, we need some way to
measure success.
Performance measure: A performance measure embodies the criterion for
success of an agent's behaviour. When an agent is plunked down in an
environment, it generates a sequence of actions according to the percepts it
receives. This sequence of actions causes the environment to go through a
sequence of states. If the sequence is desirable, then the agent has performed
well. This notion of desirability is captured by a performance measure that
evaluates any given sequence of environment states.
The Concept of Rationality
There is not one fixed measure suitable for all agents.
We could ask the agent for a subjective opinion of how happy it is with its own
performance, but some agents would be unable to answer, and others would delude
themselves.
Example (vacuum-cleaner agent): We might propose to measure performance by
the amount of dirt cleaned up in a single eight-hour shift. A rational agent can
maximize this performance measure by cleaning up the dirt, then dumping it all back
on the floor, then cleaning it up again, and so on.
As a general rule, it is better to design performance measures according to what one
actually wants in the environment, rather than according to how one thinks the agent
should behave.
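To make the contrast concrete, here is a hedged sketch; the state encoding (a dict with a "clean" map and a running "dirt_sucked" counter) is an assumption for illustration. The first measure rewards dirt sucked up and can be gamed by dumping and re-cleaning; the second rewards clean squares per time step, which is what we actually want in the environment.

def dirt_cleaned_measure(states):
    # Flawed: rewards the total amount of dirt sucked up, so an agent can
    # maximize it by dumping dirt back on the floor and cleaning it again.
    return states[-1]["dirt_sucked"]

def clean_squares_measure(states):
    # Better: one point per clean square per time step, rewarding the
    # environment actually being clean rather than the agent's busywork.
    return sum(sum(state["clean"].values()) for state in states)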
The Concept of Rationality
What is rational at any given time depends on four things:
1. The performance measure that defines the criterion of success.
2. The agent’s prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
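Read operationally, the definition says the agent maximizes expected performance given its evidence. A minimal sketch, assuming a hypothetical outcomes(percepts, action) model that folds in the agent's prior knowledge and returns (state, probability) pairs:

def rational_action(actions, percepts, outcomes, performance):
    # Choose the action whose expected performance is highest, given the
    # percept sequence to date and the agent's built-in knowledge (both
    # folded into the assumed `outcomes` model).
    def expected_performance(action):
        return sum(prob * performance(state)
                   for state, prob in outcomes(percepts, action))
    return max(actions, key=expected_performance)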
Omniscience, learning and autonomy
An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality.
Doing actions in order to modify future percepts - sometimes called information
gathering - is an important part of rationality.
The definition of a rational agent requires the agent not only to gather information
but also to learn as much as possible from what it perceives.
To the extent that an agent relies on the prior knowledge of its designer rather
than on its own percepts, we say that the agent lacks autonomy.
A rational agent should be autonomous - it should learn what it can to compensate
for partial or incorrect prior knowledge.
Specifying the task environment
Task environments are the "problems" to which rational agents are the "solutions."
Specifying a task environment means specifying the performance measure, the
environment, and the agent's actuators and sensors.
We call this the PEAS description:
Performance measure
Environment
Actuators
Sensors
In designing an agent, the first step must always be to specify the task
environment as fully as possible.
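Since a PEAS description is just structured data, it can be written down directly. A sketch using the textbook automated-taxi example; the Python class and field names are our own choice, not a standard API:

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # criteria for success
    environment: list  # where the agent operates
    actuators: list    # how the agent acts on the environment
    sensors: list      # how the agent perceives the environment

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "accelerometer"],
)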
Examples of agent types and their PEAS descriptions
Model-based reflex agents
The figure shows the structure of the model-based reflex agent with internal state,
showing how the current percept is combined with the old internal state to
generate the updated description of the current state, based on the agent’s model
of how the world works.
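In the spirit of the standard MODEL-BASED-REFLEX-AGENT pseudocode, a sketch of that structure; the model's update_state method and the rule objects (with matches and action attributes) are placeholder interfaces:

class ModelBasedReflexAgent:
    def __init__(self, model, rules, initial_state):
        self.model = model    # the agent's model of how the world works
        self.rules = rules    # condition-action rules
        self.state = initial_state
        self.action = None    # most recent action, needed to update the state

    def __call__(self, percept):
        # Combine the old internal state, the last action, and the new
        # percept, via the model, into an updated state description.
        self.state = self.model.update_state(self.state, self.action, percept)
        # Then act like a reflex agent: match a rule against the state.
        rule = next(r for r in self.rules if r.matches(self.state))
        self.action = rule.action
        return self.action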
Goal-based agents
Knowing something about the current state of the environment is not always
enough to decide what to do.
For example, at a road junction, the taxi can turn left, turn right, or go straight on.
The correct decision depends on where the taxi is trying to get to.
In other words, as well as a current state description, the agent needs some sort
of goal information that describes situations that are desirable - for example,
being at the passenger’s destination.
The agent program can combine this with the model (the same information as was
used in the model-based reflex agent) to choose actions that achieve the goal.
Goal-based agents
Figure: A model-based, goal-based agent
The figure shows the goal-based agent's structure. It keeps track of the world state
as well as a set of goals it is trying to achieve, and chooses an action that will lead
to the achievement of its goals.
Search and planning are the subfields of AI devoted to finding action sequences
that achieve the agent's goals.
Although the goal-based agent appears less efficient, it is more flexible, because
the knowledge that supports its decisions is represented explicitly and can be
modified.
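As a minimal illustration of the search idea, a breadth-first sketch; the deterministic result(state, action) transition model and goal_test predicate are assumed interfaces, and states must be hashable:

from collections import deque

def plan(start, goal_test, actions, result):
    # Breadth-first search for an action sequence that achieves the goal.
    # `result(state, action)` is the agent's model of how the world works;
    # `goal_test(state)` encodes the goal information.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path                   # action sequence reaching the goal
        for action in actions(state):
            successor = result(state, action)
            if successor not in visited:
                visited.add(successor)
                frontier.append((successor, path + [action]))
    return None                           # no action sequence achieves the goal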
Utility-based agents
Goals alone are not enough to generate high-quality behaviour in most
environments.
For example, many action sequences will get the taxi to its destination (thereby
achieving the goal) but some are quicker, safer, more reliable, or cheaper than
others.
These agents are similar to goal-based agents but add an extra component, a utility
measure, which provides a measure of success at a given state.
A utility-based agent acts based not only on its goals but also on the best way to
achieve them.
Utility-based agents are useful when there are multiple possible alternatives and the
agent has to choose the best action among them.
Utility-based agents
The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
A complete specification of the utility function allows rational decisions in two
kinds of cases where goals are inadequate.
First, when there are conflicting goals, only some of which can be achieved (for
example, speed and safety), the utility function specifies the appropriate trade-off.
Second, when there are several goals that the agent can aim for, none of which
can be achieved with certainty, utility provides a way in which the likelihood of
success can be weighed against the importance of the goals.
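For the conflicting-goals case, the trade-off can be made explicit as a weighted sum. A sketch with illustrative attribute names and weights (all assumptions, not fixed values):

def taxi_utility(state, w_speed=0.3, w_safety=0.7):
    # Trades off two conflicting goals: faster trips score higher, but so
    # do safer ones; the weights specify the appropriate trade-off.
    return w_speed * state["speed_score"] + w_safety * state["safety_score"]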
Utility-based agents
The figure shows the utility-based agent's structure. It uses a model of the world,
along with a utility function that measures its preferences among states of the world.
It then chooses the action that leads to the best expected utility, where expected
utility is computed by averaging over all possible outcome states, weighted by the
probability of each outcome.
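Putting that description into code, a minimal sketch; the probabilistic model interface, mapping a state and action to (outcome state, probability) pairs, is an assumption:

class UtilityBasedAgent:
    def __init__(self, model, utility, actions):
        self.model = model      # (state, action) -> [(next_state, prob), ...]
        self.utility = utility  # maps a state to a real number
        self.actions = actions

    def choose(self, state):
        # Expected utility: average the utility over all possible outcome
        # states, weighted by the probability of each outcome.
        def expected_utility(action):
            return sum(prob * self.utility(s)
                       for s, prob in self.model(state, action))
        # Pick the action that leads to the best expected utility.
        return max(self.actions, key=expected_utility)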