Artificial Intelligence
Satrio Dewanto
Book:
Artificial Intelligence: A Modern Approach
Stuart Russell & Peter Norvig
Prentice Hall, 2009
WHAT IS INTELLIGENCE?
For thousands of years, people have tried
to understand how we think.
Philosophy
Mathematics
Neuroscience
Psychology
Economics
WHAT IS AI?
Artificial Intelligence is a branch of
computer science that aims to produce
“intelligent” thought and/or behaviour
with machines.
AI attempts to build intelligent entities
AI is both science and engineering:
the science of understanding intelligent
entities: developing theories that attempt
to explain and predict the nature of such
entities;
the engineering of intelligent entities.
ACADEMIC DISCIPLINES IMPORTANT TO AI
Philosophy: logic, methods of reasoning, mind as a physical
system, foundations of learning, language,
rationality.
Banks
automatic check readers, signature verification systems
automated loan application classification
Telephone Companies
automatic voice recognition for directory inquiries
automatic fraud detection
classification of phone numbers into groups
Computer Companies
automated diagnosis for help-desk applications
TYPICAL AI APPLICATION AREAS
natural language processing - understanding,
generating, translating;
planning;
vision - scene recognition, object recognition,
face recognition;
robotics;
theorem proving;
speech recognition;
game playing;
problem solving;
expert systems etc.
WHAT IS AN INTELLIGENT AGENT?
An intelligent agent is a system that:
Perceives its environment (which may be the
physical world, a user via a graphical user
interface, a collection of other agents, the Internet,
or other complex environment).
Reasons to interpret perceptions, draw inferences,
solve problems, and determine actions.
Acts upon that environment to realize a set of goals
or tasks for which it was designed.
[Diagram: the intelligent agent receives input from the user/environment through its sensors and acts back on that environment through its effectors (output).]
AGENT
An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through
actuators
Goal-based agents
Utility-based agents
Learning Agents
TABLE-DRIVEN AGENT
The decision process is a table lookup indexed by the entire percept history.
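As an illustration only (not from the slides), here is a minimal Python sketch of a table-driven agent for a toy vacuum world; the percepts, actions, and table entries are assumptions.

```python
# Minimal sketch of a table-driven agent (hypothetical percepts/actions).
# The action is chosen by looking up the ENTIRE percept history in a table,
# which is why this design does not scale: the table grows with every
# possible history.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table          # maps tuple(percept history) -> action
        self.percepts = []          # full history of everything perceived

    def act(self, percept):
        self.percepts.append(percept)
        # Look up the whole history; fall back to a no-op if unlisted.
        return self.table.get(tuple(self.percepts), "noop")

# Example table for a toy two-room vacuum world (illustrative values only).
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move_right",
    (("A", "clean"), ("B", "dirty")): "suck",
}

agent = TableDrivenAgent(table)
print(agent.act(("A", "clean")))   # -> move_right
print(agent.act(("B", "dirty")))   # -> suck
```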
SIMPLE REFLEX AGENTS
No memory: the agent acts on the current percept only.
Fails if the environment is partially observable.
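A minimal sketch of a simple reflex agent, again assuming a toy vacuum world; the condition-action rules below are illustrative assumptions, not part of the slides.

```python
# Minimal sketch of a simple reflex agent: condition-action rules applied
# to the CURRENT percept only. No history is kept, so the agent cannot
# cope with anything it cannot sense right now (partial observability).

def simple_reflex_agent(percept):
    location, status = percept          # e.g. ("A", "dirty")
    if status == "dirty":
        return "suck"
    elif location == "A":
        return "move_right"
    else:
        return "move_left"

print(simple_reflex_agent(("A", "dirty")))   # -> suck
print(simple_reflex_agent(("B", "clean")))   # -> move_left
```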
If a reflex agent were able to keep track of its previous states (i.e. maintain a
description or “model” of its previous history in the world), and had knowledge
of how the world evolves, then it would have more information about its current
state, and hence be able to make better informed decisions about its actions.
Implicit in the knowledge about how the world evolves will be knowledge of
how the agent’s own actions affect its state. It will need this so that it can
update its own state without having to rely on sensory information.
We say that such reflex agents have an internal state. They will have access
to information/predictions that are not currently available to their sensors. This
will give them more scope for intelligent action than simple reflex agents.
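The idea of an internal state can be sketched as follows; the two-room world, the update rule, and the action names are assumptions for illustration.

```python
# Sketch of a reflex agent with internal state ("model-based").
# The world model and the update rule below are illustrative assumptions.

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal state: what the agent believes about rooms it has seen.
        self.state = {"A": "unknown", "B": "unknown"}
        self.location = "A"

    def update_state(self, percept):
        location, status = percept
        self.location = location
        self.state[location] = status   # remember what was just observed

    def act(self, percept):
        self.update_state(percept)
        if self.state[self.location] == "dirty":
            self.state[self.location] = "clean"   # predict effect of own action
            return "suck"
        # Prefer a room not yet known to be clean.
        for room, status in self.state.items():
            if status != "clean":
                return f"move_to_{room}"
        return "noop"

agent = ModelBasedReflexAgent()
print(agent.act(("A", "dirty")))   # -> suck
print(agent.act(("A", "clean")))   # -> move_to_B
```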
GOAL-BASED AGENTS
Goals provide a reason to prefer one action over another.
We need to predict the future: we need to plan & search
GOAL-BASED AGENTS
Even with the increased knowledge of the current state of the world
provided by an agent’s internal state, the agent may still not have enough
information to tell it what to do.
The appropriate action for the agent will often depend on what its goals
are, and so it must be provided with some goal information.
Given knowledge of how the world evolves, and of how its own actions will
affect its state, an agent will be able to determine the consequences of all
(or many) possible actions. It can then compare each of these against its
goal to determine which action(s) achieves its goal, and hence which
action(s) to take.
Goal-based agents may appear less efficient, but they are far more
flexible: we can simply specify a new goal, rather than re-programming all
the rules.
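A minimal sketch of the goal-based idea: predict the consequences of action sequences and pick one that reaches the goal. The tiny state space and the breadth-first search used here are illustrative assumptions.

```python
# Sketch of a goal-based agent: it predicts the consequences of action
# sequences (breadth-first search over a tiny assumed state space) and
# picks a sequence that reaches the goal. Changing the goal needs no new
# rules, only a new goal test.

from collections import deque

# Hypothetical transition model: state -> {action: next_state}
TRANSITIONS = {
    "home":    {"walk": "street", "drive": "highway"},
    "street":  {"walk": "shop"},
    "highway": {"drive": "office"},
}

def plan(start, goal_test):
    """Return a list of actions leading from start to a goal state, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, nxt in TRANSITIONS.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

print(plan("home", lambda s: s == "office"))  # -> ['drive', 'drive']
print(plan("home", lambda s: s == "shop"))    # -> ['walk', 'walk']
```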
UTILITY-BASED AGENTS
Some solutions to goal states are better than others.
Which one is best is given by a utility function.
Which combination of goals is preferred?
UTILITY-BASED AGENTS
Goals alone are not enough to generate high quality behaviour. Often there are
many sequences of actions that can result in the same goal being achieved.
We talk about the utility of particular states. Any utility-based agent can be
described as possessing a utility function that maps a state, or sequence of
states, onto a real number that represents its utility or usefulness.
When there are conflicting goals, only some of which can be achieved, the
utility function quantifies the appropriate trade-offs.
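A minimal sketch of utility-based choice; the candidate actions, predicted outcomes, and weights in the utility function are invented for illustration.

```python
# Sketch of a utility-based choice: each candidate outcome state is scored
# by a utility function, and the agent picks the action whose predicted
# outcome has the highest utility. The weights below are assumptions and
# encode a trade-off between two conflicting goals.

def utility(state):
    # Trade off: arrive quickly vs. spend little money.
    return -2.0 * state["travel_time"] - 1.0 * state["cost"]

# Predicted outcomes of each available action (illustrative numbers).
outcomes = {
    "take_taxi": {"travel_time": 10, "cost": 30},
    "take_bus":  {"travel_time": 40, "cost": 5},
    "walk":      {"travel_time": 90, "cost": 0},
}

best_action = max(outcomes, key=lambda a: utility(outcomes[a]))
print(best_action, utility(outcomes[best_action]))  # -> take_taxi -50.0
```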
LEARNING AGENTS
How does an agent improve over time?
By monitoring its performance and suggesting better models, new action rules, etc.
[Diagram: learning agent. Its components evaluate the current world state, change the action rules, model the world and decide on the actions to be taken, and suggest explorations.]
LEARNING AGENTS
The most powerful agents are able to learn by actively exploring and
experimenting with their environment.
Note that not all learning agents will need a problem generator – a
teacher, or the agent’s normal mode of operation, may provide
sufficient feedback for learning.
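A minimal sketch of a learning agent in this spirit: a performance element chooses actions, a critic's reward signal drives a learning element that adjusts the rules, and occasional random exploration plays the role of a problem generator. The bandit-style weights and the reward signal are assumptions for illustration.

```python
# Sketch of a learning agent: actions are chosen from learned rule weights,
# feedback (reward) from the environment adjusts those weights, and random
# exploration occasionally tries something new. All details here are
# illustrative assumptions.

import random

class LearningAgent:
    def __init__(self, actions, learning_rate=0.1, explore_prob=0.2):
        self.weights = {a: 0.0 for a in actions}   # learned action rules
        self.lr = learning_rate
        self.explore_prob = explore_prob           # exploration rate

    def act(self):
        # Performance element: usually exploit, sometimes explore.
        if random.random() < self.explore_prob:
            return random.choice(list(self.weights))
        return max(self.weights, key=self.weights.get)

    def learn(self, action, reward):
        # The critic's feedback (reward) drives the learning element.
        self.weights[action] += self.lr * (reward - self.weights[action])

agent = LearningAgent(["left", "right"])
for _ in range(200):
    a = agent.act()
    reward = 1.0 if a == "right" else 0.0   # assumed environment feedback
    agent.learn(a, reward)
print(max(agent.weights, key=agent.weights.get))   # -> usually "right"
```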