Presentation 1
Chapters 1 & 2
What is artificial intelligence?
▪ There is no clear consensus on the definition of AI
▪ Artificial
• Produced by human art or effort, rather than originating naturally.
▪ Intelligence
• The ability to acquire knowledge and use it.
• The computational part of the ability to achieve goals.
▪ So AI has been defined as:
• The study of ideas that enable computers to be intelligent.
• The part of computer science concerned with the design of computer systems that exhibit human intelligence (from the Concise Oxford Dictionary).
• The science and engineering of making intelligent machines, especially intelligent computer programs.
▪ AI can have two purposes:
• One is to use the power of computers to augment human thinking.
• The other is to use a computer's artificial intelligence to understand how humans think.
Definition of AI
             HUMAN               RATIONAL
Thinking:    Thinking humanly    Thinking rationally
Acting:      Acting humanly      Acting rationally
Thinking rationally: Laws of Thought
Acting rationally
▪ Rational behavior: doing the right thing
▪ The right thing: that which is expected to maximize goal achievement, given the
available information
▪ Doesn’t necessarily involve thinking, but thinking should be in the service
of rational action
▪ Giving answers to questions is ‘acting’.
Thinking humanly: Cognitive Science
▪ How do humans think?
▪ We learn about human thought in three ways:
introspection, psychological experiments & brain imaging
▪ Humans as observed from 'inside'
▪ How do we know how humans think?
▪ Cognitive Science
▪ "The exciting new effort to make computers think … machines with minds in the full and literal sense."
▪ "The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning."
Acting humanly: The Turing test
[Diagram: a human INTERROGATOR converses with both a HUMAN and an AI SYSTEM, trying to determine which is which]
The model must be able to store data, retrieve data, expect data & take actions.
www.boibot.com
Suggested major components of AI: knowledge, reasoning, language understanding, learning.
But the Turing test is not reproducible, constructive, or amenable to mathematical analysis.
AI prehistory
AI prehistory (cont.)
The gestation of AI
▪ Newell and Simon: The Logic Theorist (LT), about which Simon claimed, "We have invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem."
Great Expectations (1952-1969)
▪ Newell and Simon developed the General Problem Solver (GPS) to imitate human problem-solving.
A dose of reality (1966-1973)
▪ U.S. National Research Council: funded an attempt, begun in 1957, to speed up the translation of Russian scientific papers. The project failed.
▪ Herbert Simon (1957) said: "It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create."
The knowledge base, the key to power
(1980-present)
AI today
▪ Games
▪ Automatic control
▪ Diagnostics
▪ Robotics
▪ Many other application fields:
smart home, driving assistance, image recognition, personal assistants, smart grids, video analytics, …
Agents and environments
[Diagram: an agent perceives the environment through sensors (percepts) and acts on it through actuators (actions); the "?" marks the agent program that maps percepts to actions]
Agent =
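The percept-action cycle above can be sketched in Python. This is a minimal illustration only: `CounterEnvironment`, the `increment` action, and the method names `sense`/`apply` are made-up placeholders, not part of any standard library.

```python
# Sketch of the agent-environment loop: the agent receives percepts
# through its sensors and returns actions that its actuators apply
# to the environment.

class CounterEnvironment:
    """Toy stand-in environment: the percept is just a counter."""
    def __init__(self):
        self.count = 0

    def sense(self):
        return self.count          # what the agent's sensors deliver

    def apply(self, action):
        if action == "increment":  # what the agent's actuators do
            self.count += 1

def agent_program(percept):
    """A trivial agent function: map every percept to the same action."""
    return "increment"

def run_agent(agent_program, environment, steps=10):
    """Drive the percept -> action cycle for a fixed number of steps."""
    for _ in range(steps):
        percept = environment.sense()
        action = agent_program(percept)
        environment.apply(action)
    return environment

env = run_agent(agent_program, CounterEnvironment(), steps=5)
print(env.count)  # 5
```

The point of the sketch is the separation of concerns: the environment owns the state, the agent program owns the mapping from percepts to actions.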
Intelligent Agent
[Diagram: the agent's decision process ("?") interacting with its environment]
AI in Robotics
[Diagram: a robot's decision process sensing and acting on its environment]
AI in Games
[Diagram: both the agent and you (the human player) make decisions within the game]
AI in Medicine
AI in the WEB
[Example: searching text on the web]
Rational agent
Environment: [room A] and [room B], both possibly with dirt that does not respawn
Actions: [move left], [move right] or [suck]
Senses: current location only: [dirty or clean]
Rational agent
An agent function for the vacuum agent (percept → action):
[A, Clean] → move right
[A, Dirty] → suck
[B, Clean] → move left
[B, Dirty] → suck
Rational agent
You want to express the performance measure in terms of the environment, not the agent.
For example, suppose we describe the measure as "suck up the most dirt".
A rational vacuum agent would then suck up dirt and dump it back, only to suck it up again...
This will not lead to a clean floor.
If we do not know how often dirt will reappear, a rational agent might need to learn.
Rational agent
Learning can use prior knowledge to estimate how often dirt tends to reappear, but should value actual observations more.
The agent might need to explore and take sub-optimal short-term actions to find a better long-term solution.
Rationality depends on:
▪ The performance measure
▪ Prior knowledge of the environment
▪ The actions available
▪ The history of sensed information (the percept sequence)
You need to know all of these before you can determine rationality.
Agent types
Agents can also be classified into four categories:
▪ Simple reflex
These agents select actions on the basis of the current percept, ignoring the rest of the percept history (condition-action rules).
They act only on the most recent percept, not the whole history.
Our vacuum agent is of this type, as it only looks at the current state and not any previous one.
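A simple reflex vacuum agent for the two-room world from the earlier slides can be sketched as follows (the percept and action names are illustrative):

```python
# Simple reflex vacuum agent: the chosen action depends only on the
# current percept (location, status), never on the percept history.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"  # room A is clean, move to room B
    return "Left"       # room B is clean, move to room A

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```

Note there is no stored state at all: calling the function twice with the same percept always yields the same action.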
▪ Model-based reflex
➢ The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now.
➢ The agent needs to keep a representation of the environment in memory (called internal state).
➢ This internal state is updated with each observation and then dictates actions. It should be from the agent's perspective, not a global perspective.
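A model-based variant for the same vacuum world might look like this (a sketch; the `model` dictionary and the `NoOp` action are illustrative assumptions):

```python
# Model-based reflex vacuum agent: keeps an internal state (its own
# beliefs about each room) to compensate for only sensing the current
# location, and updates that state with every new percept.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": "Unknown", "B": "Unknown"}  # internal state

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status  # update state from the observation
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"              # believes the whole world is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right (room B's status is still unknown)
print(agent(("B", "Clean")))  # NoOp  (both rooms now believed clean)
```

Unlike the simple reflex agent, the same percept can produce different actions depending on what the agent has seen before.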
▪ Goal based
➢ The global perspective is the same, but the agents could have different goals (the stars in the two pictures).
➢ The goal-based agent is more general than the model-based agent.
▪ Utility based
➢ A utility-based agent maps the sequence of states (or actions) to a real value.
➢ Goals can only describe outcomes in general terms such as "success" or "failure"; there is no degree of success.
➢ If you want to go upstairs, a goal-based agent could find the closest way up...
➢ A utility-based agent could also accommodate your preferences between stairs vs. elevator.
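The stairs-vs-elevator contrast can be made concrete. The options and the utility weights below are invented purely for illustration:

```python
# Goal-based vs. utility-based choice for "go upstairs".
options = {
    "stairs":   {"upstairs": True,  "effort": 5, "wait": 0},
    "elevator": {"upstairs": True,  "effort": 1, "wait": 3},
    "stand":    {"upstairs": False, "effort": 0, "wait": 0},
}

# Goal-based view: any option achieving the goal counts as "success";
# there is no ranking among the successful options.
goal_ok = [name for name, o in options.items() if o["upstairs"]]

# Utility-based view: map each option to a real value that also
# encodes preferences (here: effort is disliked more than waiting).
def utility(o):
    return (10 if o["upstairs"] else 0) - 2 * o["effort"] - o["wait"]

best = max(options, key=lambda name: utility(options[name]))
print(goal_ok)  # ['stairs', 'elevator']
print(best)     # elevator
```

Both stairs and elevator satisfy the goal, but only the utility function can express that, for these weights, the elevator is preferred.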
PEAS
Performance measure??
Environment??
Actuators??
Sensors??
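For the vacuum world from the earlier slides, the four PEAS questions might be answered as follows (an illustrative plain-dictionary encoding, not a standard format):

```python
# PEAS description of the two-room vacuum world,
# encoded as a plain dictionary for illustration.
peas_vacuum = {
    "Performance": "a clean floor (dirt removed and kept removed)",
    "Environment": "rooms A and B, each possibly containing dirt",
    "Actuators":   ["move left", "move right", "suck"],
    "Sensors":     ["current location", "dirty or clean"],
}
for component, value in peas_vacuum.items():
    print(f"{component}: {value}")
```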
Environment Types
• Episodic: the agent's experience is divided into atomic episodes; the next episode does not depend on the actions taken in previous episodes.
Image analysis is episodic.
              Fully        Non-            Continuous   Adversarial
              Observable   deterministic
Checkers          X                                          X
Cards (king)                    X                            X
Robot car                       X               X
Summary
➢ Agents interact with environments through actuators and sensors
➢ The agent function describes what the agent does in all circumstances
➢ The performance measure evaluates the environment sequence
➢ A perfectly rational agent maximizes expected performance
➢ Agent programs implement (some) agent functions
➢ PEAS descriptions define task environments
➢ Environments are categorized along several dimensions:
• observable? deterministic? episodic? static? discrete? single-agent?
➢ Several basic agent architectures exist:
• reflex, reflex with state, goal-based, utility-based