
YAYASAN PERGURUAN TINGGI KOMPUTER

UNIVERSITAS PUTRA INDONESIA “YPTK” PADANG


FAKULTAS ILMU KOMPUTER

FOUNDATIONS OF
ARTIFICIAL INTELLIGENCE

Chapter 1: Introduction
Lecturer:
Dr. Ir. Gunadi Widi Nurcahyo, MSc.

Outline
• Course overview
• What is AI?
• A brief history
• The state of the art

Course overview
• Introduction and Agents
• Search
• Logic
• Planning
• Uncertainty
• Learning
• Natural Language Processing

What is AI?
Views of AI fall into four categories:

  Thinking humanly  |  Thinking rationally
  Acting humanly    |  Acting rationally

The textbook advocates "acting rationally"

Acting humanly: Turing Test
• Turing (1950) "Computing machinery and intelligence":
• "Can machines think?"  "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game

• Predicted that by 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes
• Anticipated all major arguments against AI in following 50 years
• Suggested major components of AI: knowledge, reasoning,
language understanding, learning

Thinking humanly: cognitive modeling
• 1960s "cognitive revolution": information-
processing psychology
• Requires scientific theories of internal activities
of the brain
• How to validate? Requires
1) Predicting and testing behavior of human subjects
(top-down)
or 2) Direct identification from neurological data
(bottom-up)
• Both approaches (roughly, Cognitive Science and Cognitive Neuroscience) are now distinct from AI

Thinking rationally: "laws of
thought"
• Aristotle: what are correct arguments/thought
processes?
• Several Greek schools developed various forms of
logic: notation and rules of derivation for thoughts; may
or may not have proceeded to the idea of
mechanization
• Direct line through mathematics and philosophy to
modern AI
• Problems:
1. Not all intelligent behavior is mediated by logical deliberation
2. What is the purpose of thinking? What thoughts should I
have?

Acting rationally: rational agent


• Rational behavior: doing the right thing
• The right thing: that which is expected to
maximize goal achievement, given the
available information
• Doesn't necessarily involve thinking – e.g.,
blinking reflex – but thinking should be in
the service of rational action

Rational agents
• An agent is an entity that perceives and acts
• This course is about designing rational agents
• Abstractly, an agent is a function from percept
histories to actions:
[f: P* → A]
• For any given class of environments and tasks,
we seek the agent (or class of agents) with the
best performance
• Caveat: computational limitations make perfect
rationality unachievable
→ design best program for given machine resources

AI prehistory
• Philosophy: logic, methods of reasoning, mind as physical system; foundations of learning, language, rationality
• Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability, probability
• Economics: utility, decision theory
• Neuroscience: physical substrate for mental activity
• Psychology: phenomena of perception and motor control, experimental techniques
• Computer engineering: building fast computers
• Control theory: design systems that maximize an objective function over time
• Linguistics: knowledge representation, grammar

Abridged history of AI
• 1943 McCulloch & Pitts: Boolean circuit model of brain
• 1950 Turing's "Computing Machinery and Intelligence"
• 1956 Dartmouth meeting: "Artificial Intelligence" adopted
• 1952—69 Look, Ma, no hands!
• 1950s Early AI programs, including Samuel's checkers
program, Newell & Simon's Logic Theorist,
Gelernter's Geometry Engine
• 1965 Robinson's complete algorithm for logical reasoning
• 1966—73 AI discovers computational complexity
Neural network research almost disappears
• 1969—79 Early development of knowledge-based systems
• 1980-- AI becomes an industry
• 1986-- Neural networks return to popularity
• 1987-- AI becomes a science
• 1995-- The emergence of intelligent agents

State of the art


• Deep Blue defeated the reigning world chess champion
Garry Kasparov in 1997
• Proved a mathematical conjecture (Robbins conjecture)
unsolved for decades
• No hands across America (driving autonomously 98% of
the time from Pittsburgh to San Diego)
• During the 1991 Gulf War, US forces deployed an AI
logistics planning and scheduling program that involved
up to 50,000 vehicles, cargo, and people
• NASA's on-board autonomous planning program
controlled the scheduling of operations for a spacecraft
• Proverb solves crossword puzzles better than most
humans

Intelligent Agents

Chapter 2

Outline
• Agents and environments
• Rationality
• PEAS (Performance measure,
Environment, Actuators, Sensors)
• Environment types
• Agent types

Agents
• An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through
actuators
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators

Agents and environments

• The agent function maps from percept histories to actions:
  [f: P* → A]
• The agent program runs on the physical
architecture to produce f
• agent = architecture + program
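As a rough sketch of this equation in Python (the environment interface here is assumed for illustration, not given in the slides), the architecture can be pictured as the loop that feeds percepts to the agent program and executes the actions it returns:

    # Hypothetical architecture: couples an agent program to an environment.
    def run(environment, program, steps=100):
        for _ in range(steps):
            percept = environment.percept()    # sense through sensors
            action = program(percept)          # the agent program decides
            environment.execute(action)        # act through actuators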

Vacuum-cleaner world

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp

A vacuum-cleaner agent
• Partial tabulation of the agent function:

  Percept sequence            Action
  [A, Clean]                  Right
  [A, Dirty]                  Suck
  [B, Clean]                  Left
  [B, Dirty]                  Suck
  [A, Clean], [A, Clean]      Right
  [A, Clean], [A, Dirty]      Suck
  …

Rational agents
• An agent should strive to "do the right thing",
based on what it can perceive and the actions it
can perform. The right action is the one that will
cause the agent to be most successful
• Performance measure: An objective criterion for
success of an agent's behavior
• E.g., performance measure of a vacuum-cleaner
agent could be amount of dirt cleaned up,
amount of time taken, amount of electricity
consumed, amount of noise generated, etc.

Rational agents
• Rational Agent: For each possible percept
sequence, a rational agent should select
an action that is expected to maximize its
performance measure, given the evidence
provided by the percept sequence and
whatever built-in knowledge the agent
has.

Rational agents
• Rationality is distinct from omniscience
(all-knowing with infinite knowledge)
• Agents can perform actions in order to
modify future percepts so as to obtain
useful information (information gathering,
exploration)
• An agent is autonomous if its behavior is
determined by its own experience (with
ability to learn and adapt)

PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design
• Consider, e.g., the task of designing an
automated taxi driver:
– Performance measure
– Environment
– Actuators
– Sensors

PEAS
• Must first specify the setting for intelligent agent
design
• Consider, e.g., the task of designing an
automated taxi driver:
– Performance measure: Safe, fast, legal, comfortable
trip, maximize profits
– Environment: Roads, other traffic, pedestrians,
customers
– Actuators: Steering wheel, accelerator, brake, signal,
horn
– Sensors: Cameras, sonar, speedometer, GPS,
odometer, engine sensors, keyboard

PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions,
tests, diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)

PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of
parts in correct bins
• Environment: Conveyor belt with parts,
bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors

PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's
score on test
• Environment: Set of students
• Actuators: Screen display (exercises,
suggestions, corrections)
• Sensors: Keyboard

Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.
• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current
state and the action executed by the agent. (If the
environment is deterministic except for the actions of
other agents, then the environment is strategic)
• Episodic (vs. sequential): The agent's experience is
divided into atomic "episodes" (each episode consists of
the agent perceiving and then performing a single
action), and the choice of action in each episode
depends only on the episode itself.

Environment types
• Static (vs. dynamic): The environment is
unchanged while an agent is deliberating. (The
environment is semidynamic if the environment
itself does not change with the passage of time
but the agent's performance score does)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent): An agent
operating by itself in an environment.

Environment types
                   Chess with      Chess without   Taxi driving
                   a clock         a clock
Fully observable   Yes             Yes             No
Deterministic      Strategic       Strategic       No
Episodic           No              No              No
Static             Semi            Yes             No
Discrete           Yes             Yes             No
Single agent       No              No              No

• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent

Agent functions and programs


• An agent is completely specified by the
agent function mapping percept
sequences to actions
• One agent function (or a small
equivalence class) is rational
• Aim: find a way to implement the rational
agent function concisely

Table-lookup agent
• function TABLE-DRIVEN-AGENT(percept) returns an action
    static: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences
    append percept to the end of percepts
    action ← LOOKUP(percepts, table)
    return action

• Drawbacks:
  – Huge table
  – Takes a long time to build the table
  – No autonomy
  – Even with learning, it would take a long time to learn the table entries
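One way to render this lookup scheme in Python, as a sketch (the example table covers only length-1 percept histories of the vacuum world):

    # Sketch: behavior is a lookup keyed by the entire percept history.
    def make_table_driven_agent(table):
        percepts = []                             # history of all percepts
        def program(percept):
            percepts.append(percept)
            return table.get(tuple(percepts))     # None if history not in table
        return program

    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
    }
    agent = make_table_driven_agent(table)
    print(agent(("A", "Dirty")))                  # -> Suck

Even for this toy world the table grows exponentially with the history length, which is the first drawback above.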

Agent program for a vacuum-cleaner agent

• function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
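The same agent as a minimal Python sketch; unlike the table-driven agent it ignores the percept history entirely:

    # Sketch of the reflex vacuum agent: reacts only to the current percept.
    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"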

Agent types
• Four basic types in order of increasing
generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents

Simple reflex agents

• function SIMPLE-REFLEX-AGENT(percept) returns an action
    static: rules, a set of condition–action rules
    state ← INTERPRET-INPUT(percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action

Model-based reflex agents

• function REFLEX-AGENT-WITH-STATE(percept) returns an action
    static: state, a description of the current world state
            rules, a set of condition–action rules
            action, the most recent action, initially none
    state ← UPDATE-STATE(state, action, percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action

Goal-based agents

Utility-based agents

Learning agents

Solving problems by
searching
Chapter 3

Outline
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms

Problem-solving agents

Example: Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras,
Bucharest

Example: Romania

Problem types
• Deterministic, fully observable → single-state problem
  – Agent knows exactly which state it will be in; solution is a sequence
• Non-observable → sensorless problem (conformant problem)
  – Agent may have no idea where it is; solution is a sequence
• Nondeterministic and/or partially observable → contingency problem
  – percepts provide new information about current state
  – often interleave search, execution
• Unknown state space → exploration problem

Example: vacuum world

• Single-state, start in #5.
  Solution? [Right, Suck]

• Sensorless, start in {1,2,3,4,5,6,7,8};
  e.g., Right goes to {2,4,6,8}.
  Solution? [Right, Suck, Left, Suck]

• Contingency
  – Nondeterministic: Suck may dirty a clean carpet
  – Partially observable: location, dirt at current location
  – Percept: [L, Clean], i.e., start in #5 or #7
  Solution? [Right, if dirt then Suck]
Single-state problem formulation
A problem is defined by four items:

1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
   – e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
3. goal test, can be
   – explicit, e.g., x = "at Bucharest"
   – implicit, e.g., Checkmate(x)
4. path cost (additive)
   – e.g., sum of distances, number of actions executed, etc.
   – c(x,a,y) is the step cost, assumed to be ≥ 0

• A solution is a sequence of actions leading from the initial state to a goal state
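A sketch of the four items in Python, using an illustrative (and deliberately incomplete) fragment of the Romania map:

    # Sketch: a problem = initial state + successor function + goal test
    # + step costs. Distances below are the usual Romania road lengths.
    ROADS = {
        "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
        "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
        "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    }

    class RouteProblem:
        def __init__(self, initial, goal):
            self.initial, self.goal = initial, goal
        def successors(self, state):                   # S(x): action-state pairs
            return [("go " + city, city) for city in ROADS.get(state, {})]
        def goal_test(self, state):                    # explicit goal test
            return state == self.goal
        def step_cost(self, state, action, result):    # c(x, a, y) >= 0
            return ROADS[state][result]

    problem = RouteProblem("Arad", "Bucharest")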

Selecting a state space


• Real world is absurdly complex
  → state space must be abstracted for problem solving
• (Abstract) state = set of real states
• (Abstract) action = complex combination of real actions
  – e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
• For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
• (Abstract) solution =
– set of real paths that are solutions in the real world
• Each abstract action should be "easier" than the original
problem

Vacuum world state space graph

• states?
• actions?
• goal test?
• path cost?

Vacuum world state space graph

• states? integer dirt and robot location


• actions? Left, Right, Suck
• goal test? no dirt at all locations
• path cost? 1 per action

Example: The 8-puzzle

• states?
• actions?
• goal test?
• path cost?

Example: The 8-puzzle

• states? locations of tiles


• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move

[Note: optimal solution of n-Puzzle family is NP-hard]
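As an illustrative sketch (the encoding is a choice made here, not given in the slides), the 8-puzzle's states and blank-moving actions might be coded as:

    # Sketch: a state is a 9-tuple read row by row, 0 marking the blank.
    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def successors(state):
        i = state.index(0)                 # position of the blank
        col = i % 3
        for action, delta in (("Left", -1), ("Right", 1), ("Up", -3), ("Down", 3)):
            if action == "Left" and col == 0:
                continue                   # blank may not leave its row
            if action == "Right" and col == 2:
                continue
            j = i + delta
            if not 0 <= j < 9:
                continue                   # blank may not leave the board
            s = list(state)
            s[i], s[j] = s[j], s[i]        # slide the neighboring tile
            yield action, tuple(s)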

Example: robotic assembly

• states?: real-valued coordinates of robot joint angles and parts of the object to be assembled
• actions?: continuous motions of robot joints
• goal test?: complete assembly
• path cost?: time to execute

Tree search algorithms


• Basic idea:
– offline, simulated exploration of state space by
generating successors of already-explored states
(a.k.a. expanding states)

Tree search example
Implementation: general tree search
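The algorithm figure is not reproduced in these notes; the following Python sketch captures the idea, with the fringe discipline supplied by the caller (reusing the RouteProblem sketch from earlier):

    from collections import deque

    # Sketch of generic tree search: the pop policy determines the strategy.
    def tree_search(problem, pop):
        fringe = deque([(problem.initial, [problem.initial])])
        while fringe:
            state, path = pop(fringe)                # choose a leaf to expand
            if problem.goal_test(state):
                return path
            for action, result in problem.successors(state):
                fringe.append((result, path + [result]))
        return None                                  # no candidates left: failure

    # FIFO pop gives breadth-first search; LIFO pop gives depth-first search.
    print(tree_search(problem, lambda f: f.popleft()))
    # -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']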

Implementation: states vs. nodes

• A state is a (representation of a) physical configuration
• A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), depth

• The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

Search strategies
• A search strategy is defined by picking the order of node
expansion
• Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)

Uninformed search strategies
• Uninformed search strategies use only the
information available in the problem
definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search

Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
  – fringe is a FIFO queue, i.e., new successors go at end
Properties of breadth-first search


• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)

• Space is the bigger problem (more than time)

Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
• Equivalent to breadth-first if step costs all equal
• Complete? Yes, if step cost ≥ ε
• Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
• Optimal? Yes – nodes expanded in increasing order of g(n)
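A Python sketch of this strategy with a binary heap as the priority queue (again on the RouteProblem fragment):

    import heapq

    # Sketch of uniform-cost search: the fringe is ordered by path cost g(n).
    def uniform_cost_search(problem):
        fringe = [(0, problem.initial, [problem.initial])]   # (g, state, path)
        while fringe:
            g, state, path = heapq.heappop(fringe)           # least-cost node
            if problem.goal_test(state):
                return g, path
            for action, result in problem.successors(state):
                g2 = g + problem.step_cost(state, action, result)
                heapq.heappush(fringe, (g2, result, path + [result]))
        return None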

Depth-first search
• Expand deepest unexpanded node
• Implementation:
  – fringe = LIFO queue, i.e., put successors at front

Properties of depth-first search


• Complete? No: fails in infinite-depth spaces, spaces with loops
  – Modify to avoid repeated states along path
    → complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
  – but if solutions are dense, may be much faster than breadth-first
• Space? O(b·m), i.e., linear space!
• Optimal? No

Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors

• Recursive implementation:
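The slide's algorithm figure is lost here; one way to sketch the recursive implementation in Python, distinguishing outright failure from hitting the cutoff:

    # Sketch of recursive depth-limited search: returns a path to a goal,
    # "cutoff" if the depth limit was reached, or None on outright failure.
    def depth_limited_search(problem, state, limit, path=None):
        path = path or [state]
        if problem.goal_test(state):
            return path
        if limit == 0:
            return "cutoff"
        cutoff = False
        for action, result in problem.successors(state):
            outcome = depth_limited_search(problem, result, limit - 1, path + [result])
            if outcome == "cutoff":
                cutoff = True
            elif outcome is not None:
                return outcome
        return "cutoff" if cutoff else None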

Iterative deepening search

• Iterative deepening search with limits l = 0, 1, 2, 3 (figures)
Iterative deepening search
• Number of nodes generated in a depth-limited search to depth d with branching factor b:
  N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d

• Number of nodes generated in an iterative deepening search to depth d with branching factor b:
  N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 3·b^(d−2) + 2·b^(d−1) + 1·b^d

• For b = 10, d = 5:
  – N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  – N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456

• Overhead = (123,456 − 111,111)/111,111 = 11%
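Iterative deepening is then a short loop over the depth-limited search sketched earlier:

    from itertools import count

    # Sketch: try limits 0, 1, 2, ... until a solution appears or the
    # search fails everywhere without hitting the cutoff.
    def iterative_deepening_search(problem):
        for limit in count():
            outcome = depth_limited_search(problem, problem.initial, limit)
            if outcome != "cutoff":
                return outcome          # a path, or None for failure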

Properties of iterative
deepening search
• Complete? Yes
• Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
• Space? O(b·d)
• Optimal? Yes, if step cost = 1

Summary of algorithms

Criterion    Breadth-     Uniform-       Depth-    Depth-    Iterative
             first        cost           first     limited   deepening
Complete?    Yes          Yes            No        No        Yes
Time         O(b^(d+1))   O(b^⌈C*/ε⌉)    O(b^m)    O(b^l)    O(b^d)
Space        O(b^(d+1))   O(b^⌈C*/ε⌉)    O(b·m)    O(b·l)    O(b·d)
Optimal?     Yes          Yes            No        No        Yes

(completeness and optimality hold under the conditions noted on the preceding slides)
Repeated states
• Failure to detect repeated states can turn
a linear problem into an exponential one!

Graph search
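The algorithm figure is missing here; the idea is tree search plus a closed set of already-expanded states, sketched below in the same style as the earlier tree search:

    from collections import deque

    # Sketch of graph search: never expand a state twice.
    def graph_search(problem, pop):
        closed = set()
        fringe = deque([(problem.initial, [problem.initial])])
        while fringe:
            state, path = pop(fringe)
            if problem.goal_test(state):
                return path
            if state not in closed:
                closed.add(state)                    # remember expanded states
                for action, result in problem.successors(state):
                    fringe.append((result, path + [result]))
        return None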

Summary
• Problem formulation usually requires abstracting away
real-world details to define a state space that can
feasibly be explored

• Variety of uninformed search strategies

• Iterative deepening search uses only linear space and not much more time than other uninformed algorithms
