Artificial Intelligence

The document discusses the history and development of artificial intelligence, describing early work in logic and neural networks, challenges faced as AI failed to scale up, and recent successes in areas like games, speech recognition, and intelligent agents that can perceive and act rationally in an environment. Key concepts discussed include the Turing test, different approaches to AI like thinking rationally versus acting rationally, and applications of AI systems in everyday life like mail sorting and fraud detection.
Copyright
© Attribution Non-Commercial (BY-NC)

ARTIFICIAL INTELLIGENCE

Satrio Dewanto

Book:
Artificial Intelligence: A Modern Approach
Stuart Russell & Peter Norvig
Prentice Hall, 2009
WHAT IS INTELLIGENCE ?
 For thousands of years, people have tried
to understand how we think.
 Philosophy
 Mathematics
 Neuroscience
 Psychology
 Economics
WHAT IS AI ?
 Artificial Intelligence is a branch of
computer science that aims to produce
“intelligent” thought and/or behaviour
with machines.
 AI attempts to build intelligent entities
 AI is both science and engineering:
 the science of understanding intelligent
entities: developing theories that attempt
to explain and predict the nature of such
entities.
 the engineering of intelligent entities.
ACADEMIC DISCIPLINES
IMPORTANT TO AI
 Philosophy: logic, methods of reasoning, mind as physical system, foundations of learning, language, rationality.
 Mathematics: formal representation and proof, algorithms, computation.
 Economics: decision theory.
 Neuroscience: neurons as information processing units.
 Psychology / Cognitive Science: how people behave, perceive, process information, and represent knowledge.
 Computer engineering: building fast computers.
 Control theory: designing systems that maximize an objective function over time.
 Linguistics: knowledge representation, grammar.
FOUR CATEGORIES OF AI
 Systems that act like humans.
 Turing test.
 Systems that think like humans.
 Cognitive science.
 Systems that think rationally.
 Logical Approach.
 Systems that act rationally.
 Intelligent agent approach.
ACTING HUMANLY
 Emphasis on how to tell that a machine
is intelligent, not on how to make it
intelligent.
 When can we count a machine as being
intelligent?
 Can machines think?
 Can machines behave intelligently?
 The most famous response is due to Alan
Turing, British mathematician and
computing pioneer.
TURING TEST
 The interrogator has to decide which is a machine
and which is a human by asking questions of both.
If the machine can fool the interrogator into
believing it is the human, then that machine
passes the Turing Test.
THINKING HUMANLY
 Try to understand :
 How the mind works.
 How do we think?
 The discipline of cognitive science:
particularly influential in vision, natural
language processing, and learning.
THINKING RATIONALLY
 Trying to understand how we actually
think is one route to AI, but what about
how we should think?
 Use logic to capture the laws of rational
thought as symbols.
 Reasoning involves manipulating symbols
according to well-defined rules (like
algebra).
 The result is idealised reasoning.
ACTING RATIONALLY
 Acting rationally = acting to achieve one's goals,
given one's beliefs.
 An agent is a system that perceives and acts;
an intelligent agent is one that acts rationally
w.r.t. the goals we delegate to it.
 The emphasis shifts from designing the
theoretically best decision-making procedure to
the best decision-making procedure possible in
the circumstances.
 Logic may be used in the service of finding the
best action.
HISTORY OF AI
Stone age (1943-1956)
 Early work on neural networks and logic.
 The Logic Theorist (Allen Newell and
Herbert Simon).
 Birth of AI: the Dartmouth workshop,
summer 1956.
 John McCarthy’s name for the field:
artificial intelligence.
HISTORY OF AI
Early enthusiasm, great expectations (1952-1969)
 McCarthy (1958)
 defined Lisp
 invented time-sharing
 Advice Taker
 Learning without knowledge
 Neural modeling
 Evolutionary learning
 Samuel’s checkers player: learning
 Robinson’s resolution method.
 Minsky: the microworlds (e.g. the blocks world).
 Many small demonstrations of “intelligent” behavior.
 Simon’s over-optimistic predictions
HISTORY OF AI
Dark ages (1966-1973)
 AI did not scale up.
 The fact that a program can find a solution in principle
does not mean that the program contains any of the
mechanisms needed to find it in practice.
 Failure of the natural language translation approach based
on simple grammars and word dictionaries.
 The famous retranslation English -> Russian -> English of
“the spirit is willing but the flesh is weak” into
“the vodka is good but the meat is rotten”.
 Funding for natural language processing stopped.
HISTORY OF AI
Renaissance (1969-1979)
 Change of problem-solving paradigm:
 from search-based problem solving to knowledge-based
problem solving.
 The first expert systems:
 Dendral: infers molecular structure from the
information provided by a mass spectrometer.
 Mycin: diagnoses blood infections.
 Prospector: recommended exploratory drilling at
a geological site that proved to contain a large
molybdenum deposit.
HISTORY OF AI
Industrial age (1980-present)
 The first successful commercial expert systems.
 Many AI companies.
 Exploration of different learning strategies
(explanation-based learning, case-based
reasoning, genetic algorithms, neural
networks, etc.).
HISTORY OF AI
The return of neural networks (1986-present)
 The reinvention of the back-propagation
learning algorithm for neural networks,
first found in 1969 by Bryson and Ho.
 Many successful applications of neural
networks.
HISTORY OF AI
Maturity (1987-present)
 Change in the content and methodology of AI
research:
 build on existing theories rather than propose
new ones.
 base claims on theorems and experiments
rather than on intuition.
 show relevance to real-world applications rather
than toy examples.
HISTORY OF AI
Intelligent agents (1995-present)
 The realization that the previously isolated
subfields of AI (speech recognition,
problem solving and planning, robotics,
computer vision, machine learning,
knowledge representation, etc.) need to be
reorganized when their results are to be
tied together into a single agent design.
STATE OF THE ART IN AI
 Deep Blue defeated Kasparov, the chess world champion (1997).
 The program FRITZ, running on an ordinary PC, drew with V. Kramnik,
the human world champion (2002).
 PEGASUS, a speech understanding system, is able to handle
transactions such as finding the cheapest airfare.
 MARVEL: a real-time expert system that monitors the stream of data from
the Voyager spacecraft and signals any anomalies.
 A robotic system drives a car at 55 mph on the public highway.
 A diagnostic expert system is correcting the diagnoses of a reputable
expert.
 Intelligent agents for a variety of domains are proliferating at a very
high rate.
 Subject matter experts teach a learning agent their reasoning in
military center of gravity determination.
INTELLIGENT SYSTEMS IN OUR
EVERYDAY LIFE
 Post Office
 automatic address recognition and sorting of mail

 Banks
 automatic check readers, signature verification systems
 automated loan application classification

 Telephone Companies
 automatic voice recognition for directory inquiries
 automatic fraud detection,
 classification of phone numbers into groups

 Credit Card Companies


 automated fraud detection, automated screening of
applications

 Computer Companies
 automated diagnosis for help-desk applications
TYPICAL AI APPLICATION AREAS
 natural language processing: understanding, generating, translating;
 planning;
 vision: scene recognition, object recognition, face recognition;
 robotics;
 theorem proving;
 speech recognition;
 game playing;
 problem solving;
 expert systems, etc.
WHAT IS AN INTELLIGENT AGENT ?
 An intelligent agent is a system that:
 Perceives its environment (which may be the
physical world, a user via a graphical user
interface, a collection of other agents, the Internet,
or other complex environment).
 Reasons to interpret perceptions, draw inferences,
solve problems, and determine actions.
 Acts upon that environment to realize a set of goals
or tasks for which it was designed.

[Diagram: the user/environment feeds input to the agent’s sensors; the Intelligent Agent produces output through its effectors back to the user/environment.]
AGENT
 An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators.
 Human agent: eyes, ears, and other organs for
sensors; hands, legs, mouth, and other body parts
for actuators.
 Robotic agent: cameras and infrared range finders
for sensors; various motors for actuators.
AGENTS AND ENVIRONMENTS

 The agent function maps from percept histories to actions:

  f : P* -> A

 The agent program runs on the physical architecture to
produce f.

 agent = architecture + program
VACUUM-CLEANER WORLD

 Percepts: location and state of the
environment, e.g., [A, Dirty], [B, Clean]
 Actions: Left, Right, Suck, NoOp


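The two-square world above is small enough to implement directly. The following is a minimal sketch of a reflex agent function for it; the function name and percept encoding are my own, chosen to mirror the percepts and actions listed on the slide:

```python
def reflex_vacuum_agent(percept):
    """Map a percept (location, status) to an action.

    Locations are 'A' (left square) and 'B' (right square);
    status is 'Dirty' or 'Clean'. Actions: 'Suck', 'Left', 'Right'.
    """
    location, status = percept
    if status == 'Dirty':
        return 'Suck'      # always clean the current square first
    if location == 'A':
        return 'Right'     # square A is clean, so move to B
    return 'Left'          # square B is clean, so move to A
```

For example, `reflex_vacuum_agent(('A', 'Dirty'))` returns `'Suck'`. Note the agent looks only at the current percept, not the percept history, so it never chooses `NoOp`: a limitation the model-based agent later in these slides addresses.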
RATIONAL AGENT
 Rational Agent: For each possible percept
sequence, a rational agent should select an action
that is expected to maximize its performance
measure, based on the evidence provided by the
percept sequence and whatever built-in knowledge
the agent has.

 Performance measure: an objective criterion for
success of an agent's behavior.
 E.g., the performance measure of a vacuum-cleaner
agent could be the amount of dirt cleaned up, the
amount of time taken, the amount of electricity
consumed, the amount of noise generated, etc.
RATIONAL AGENT
 Rationality is distinct from omniscience (all-
knowing with infinite knowledge)

 Agents can perform actions in order to modify
future percepts so as to obtain useful
information (information gathering,
exploration).
 An agent is autonomous if its behavior is
determined by its own percepts and experience
(with the ability to learn and adapt), without
depending solely on built-in knowledge.
TASK ENVIRONMENT
 Before we design an intelligent agent,
we must specify its “task environment”.
 PEAS
 Performance measure
 Environment
 Actuators
 Sensors
PEAS
 Example: Agent = taxi driver

 Performance measure: Safe, fast, legal,
comfortable trip, maximize profits
 Environment: Roads, other traffic,
pedestrians, customers
 Actuators: Steering wheel, accelerator,
brake, signal, horn
 Sensors: Cameras, sonar, speedometer, GPS,
odometer, engine sensors, keyboard
PEAS
 Example: Agent = Medical diagnosis
system
 Performance measure: Healthy patient,
minimize costs, lawsuits
 Environment: Patient, hospital, staff
 Actuators: Screen display (questions,
tests, diagnoses, treatments, referrals)
 Sensors: Keyboard (entry of symptoms,
findings, patient's answers)
PEAS
 Example: Agent = Part-picking robot
 Performance measure: Percentage of
parts in correct bins.
 Environment: Conveyor belt with
parts, bins.
 Actuators: Jointed arm and hand.
 Sensors: Camera, joint angle
sensors.
AGENT TYPES
 Six basic types, in order of increasing generality:
 Table-driven agents
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
 Learning agents
TABLE-DRIVEN AGENT
[Diagram: the current state of the decision process is found by table lookup over the entire percept history.]
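A table-driven agent can be sketched in a few lines. The names and the table contents below are my own illustration (keyed on the first steps of the two-square vacuum world), not from the slides; the point is that the table is indexed by the *entire* percept history, which is why it grows intractably:

```python
def make_table_driven_agent(table):
    """Return an agent that appends each percept to its history
    and looks the whole history up in the supplied table."""
    percepts = []  # grows with every percept ever received

    def agent(percept):
        percepts.append(percept)
        # the key is the full percept history so far, not just
        # the latest percept
        return table.get(tuple(percepts))

    return agent

# Illustrative table covering the first two steps only.
table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
```

With even a handful of percept values, the number of possible histories (and hence table entries) explodes exponentially with the lifetime of the agent, which motivates the rule-based agents that follow.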
SIMPLE REFLEX AGENTS

[Diagram: condition-action rules map the current percept directly to an action.]
 No memory; fails if the environment is only partially observable.
 Example: the vacuum-cleaner world.
SIMPLE REFLEX AGENTS
 Finding the Action appropriate for a given set of Percepts in a look-up
table is clearly going to be impossible for all but the most trivial
situations, because of the prohibitive number of entries such a table would
require.
 However, one can often summarise portions of the look-up table by noting
commonly occurring input/output associations which can be written as
condition-action rules:
 if {set of percepts} then {set of actions}
 These are sometimes also called situation-action rules, productions,
or if-then rules.
 Generally, large sets of rules like these, e.g.
 if it is raining then put up umbrella
 may together produce actions that appear intelligent, and they can be
implemented very efficiently.
 Humans have many learned responses and innate reflexes of this form.
 However, we shall see that their range of applicability is actually very
narrow.
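The condition-action rules described above can be represented directly as (condition, action) pairs that are tried in order. This is a minimal sketch under my own encoding (percepts as a dictionary, conditions as predicates); the umbrella rule mirrors the slide's example:

```python
# Each rule is a (condition, action) pair; the first matching rule fires.
rules = [
    (lambda p: p.get('raining'), 'put up umbrella'),
    (lambda p: p.get('status') == 'Dirty', 'Suck'),
]

def simple_reflex_agent(percept, rules):
    """Return the action of the first rule whose condition matches
    the current percept, or a default no-op. No memory: the decision
    depends only on the percept passed in."""
    for condition, action in rules:
        if condition(percept):
            return action
    return 'NoOp'
```

Because every decision is a single pass over the rule list, such agents are very efficient to run, but, as the slide notes, their range of applicability is narrow: nothing outside the current percept can influence the choice.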
MODEL-BASED REFLEX AGENTS
[Diagram: the agent keeps a description of the current world state, maintained by modeling how the world changes and how its own actions change the world.]
 Sometimes it is unclear what to do without a clear goal.
MODEL-BASED REFLEX AGENTS
 One problem with Simple Reflex Agents is that their actions depend only on the
current information provided by their sensors.

 If a reflex agent were able to keep track of its previous states (i.e. maintain a
description or “model” of its previous history in the world), and had knowledge
of how the world evolves, then it would have more information about its current
state, and hence be able to make better informed decisions about its actions.

 Implicit in the knowledge about how the world evolves will be knowledge of
how the agent’s own actions affect its state. It will need this so that it can
update its own state without having to rely on sensory information.

 We say that such reflex agents have an internal state. They will have access
to information/predictions that are not currently available to their sensors. This
will give them more scope for intelligent action than simple reflex agents.
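An internal state makes a visible difference even in the vacuum world. The sketch below is my own illustration of the idea: the agent maintains a believed status for each square, updates it from percepts and from the predicted effects of its own actions, and can therefore decide to stop once it believes everything is clean, something a simple reflex agent can never do:

```python
class ModelBasedReflexAgent:
    """Two-square vacuum agent with an internal model of the world."""

    def __init__(self):
        # believed status of each square; None means unknown
        self.model = {'A': None, 'B': None}
        self.location = 'A'

    def update_model(self, percept):
        """Fold the current percept into the internal state."""
        location, status = percept
        self.location = location
        self.model[location] = status

    def act(self, percept):
        self.update_model(percept)
        if self.model[self.location] == 'Dirty':
            # predict the effect of our own action on the world
            self.model[self.location] = 'Clean'
            return 'Suck'
        if all(s == 'Clean' for s in self.model.values()):
            return 'NoOp'  # believed done: only possible with memory
        return 'Right' if self.location == 'A' else 'Left'
```

Once the agent has seen (or cleaned) both squares, its model lets it answer "is there anything left to do?" without any new sensory evidence.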
GOAL-BASED AGENTS
Goals provide a reason to prefer one action over another.
To choose actions we need to predict the future: we need to plan and search.
GOAL-BASED AGENTS
 Even with the increased knowledge of the current state of the world
provided by an agent’s internal state, the agent may still not have enough
information to tell it what to do.

 The appropriate action for the agent will often depend on what its goals
are, and so it must be provided with some goal information.

 Given knowledge of how the world evolves, and of how its own actions will
affect its state, an agent will be able to determine the consequences of all
(or many) possible actions. It can then compare each of these against its
goal to determine which action(s) achieves its goal, and hence which
action(s) to take.

 If a long sequence of actions is required to reach the goal, then Search
and Planning are the sub-fields of AI that must be called into action. In
simple reflex agents, such information would have to be pre-computed and
built into the system by the designer.

 Goal based agents may appear less efficient, but they are far more
flexible. We can simply specify a new goal, rather than re-programming all
the rules.
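The determine-consequences-and-compare-against-the-goal loop above is exactly what a search algorithm does. This is a minimal breadth-first-search sketch; the state graph (rooms and actions) is a made-up example, not from the slides, and breadth-first search is just one of several applicable search strategies:

```python
from collections import deque

def search_plan(start, goal, successors):
    """Breadth-first search: return the shortest action sequence
    from start to goal, or None if the goal is unreachable.
    `successors` maps a state to a list of (action, next_state)."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Tiny illustrative world: move between rooms to reach a goal room.
world = {
    'hall':    [('go-kitchen', 'kitchen'), ('go-lounge', 'lounge')],
    'kitchen': [('go-pantry', 'pantry')],
    'lounge':  [],
}
```

The flexibility the slide mentions falls out directly: to pursue a new goal, we call `search_plan` with a different `goal` argument instead of rewriting any rules.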
UTILITY-BASED AGENTS
Some solutions to goal states are better than others.
Which one is best is given by a utility function.
Which combination of goals is preferred?
UTILITY-BASED AGENTS
 Goals alone are not enough to generate high quality behaviour. Often there are
many sequences of actions that can result in the same goal being achieved.

 Given appropriate criteria, it may be possible to choose the ‘best’ sequence of
actions from a number that all result in the goal being achieved.

 We talk about the utility of particular states. Any utility based agent can be
described as possessing a utility function that maps a state, or sequence of
states, on to a real number that represents its utility or usefulness.

 We can then use the utility to choose between alternative sequences of
actions/states that lead to a given goal being achieved.

 When there are conflicting goals, only some of which can be achieved, the
utility is able to quantify the appropriate trade-offs.
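Selecting among goal-achieving alternatives with a utility function is a one-liner once the function is defined. The candidate routes and the trade-off weights below are made up for illustration; the point is the shape: a utility function maps each outcome to a real number, and the agent maximises it:

```python
def best_plan(plans, utility):
    """Among alternative plans that all achieve the goal, pick the
    one whose outcome has the highest utility."""
    return max(plans, key=utility)

# Illustrative: three routes to the same destination, scored by a
# utility that trades off travel time against monetary cost.
routes = [
    {'name': 'highway',    'time': 30, 'cost': 5.0},
    {'name': 'back-roads', 'time': 50, 'cost': 1.0},
    {'name': 'toll-road',  'time': 25, 'cost': 9.0},
]

def utility(route):
    # weighting cost twice as heavily as a minute of time (arbitrary)
    return -route['time'] - 2.0 * route['cost']
```

With these (arbitrary) weights the highway wins; changing the weights changes the trade-off, which is exactly how a utility function quantifies conflicting preferences.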
LEARNING AGENTS
How does an agent improve over time?
By monitoring its performance and suggesting better modeling, new action rules, etc.
[Diagram: the critic evaluates the current world state; the learning element changes the action rules; the problem generator suggests explorations; the performance element models the world and decides on the actions to be taken.]
LEARNING AGENTS
 The most powerful agents are able to learn by actively exploring and
experimenting with their environment.

 A general learning agent has four basic components:
 The Performance Element – which takes in percepts and decides on
appropriate actions in the same way as a non-learning agent.
 The Critic – which uses a fixed standard of performance to tell the
learning element how well the agent is doing.
 The Learning Element – that receives information from the critic and
makes appropriate improvements to the performance element.
 The Problem Generator – that suggests actions that will lead to new
and informative experiences (e.g. as in carrying out experiments).

 Note that not all learning agents will need a problem generator – a
teacher, or the agent’s normal mode of operation, may provide
sufficient feedback for learning.
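The four components can be wired together in a short loop. Everything in this sketch (the class, the toy two-action domain, the fixed reward standard) is my own illustration of the structure, not an implementation from the slides:

```python
import itertools

class LearningAgent:
    """Skeleton wiring the four components together in a toy domain
    where the agent must learn which of two actions earns reward."""

    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # learned estimates

    def performance_element(self):
        # decide on an action using current (learned) knowledge
        return max(self.values, key=self.values.get)

    def learning_element(self, action, reward, rate=0.5):
        # move the estimate toward the critic's reward signal
        self.values[action] += rate * (reward - self.values[action])

def critic(action):
    # fixed standard of performance: only action 'b' is rewarded
    return 1.0 if action == 'b' else 0.0

def run(agent, actions, steps=4):
    """Alternate problem-generator exploration with exploitation."""
    explorer = itertools.cycle(actions)  # problem generator
    for step in range(steps):
        if step % 2 == 0:
            action = next(explorer)  # try something new
        else:
            action = agent.performance_element()  # use what we know
        agent.learning_element(action, critic(action))
    return agent.performance_element()
```

Note the role of the problem generator here: without the exploration steps, the agent would keep exploiting its initial (uninformed) estimates and never discover that 'b' is the rewarded action.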
