
Introduction to AI

Chapters 1 & 2

Dr. Farid Zaky

1
What is artificial intelligence?
▪ There is no clear consensus on the definition of AI
▪ Artificial
• Produced by human art or effort, rather than originating naturally.
▪ Intelligence
• is the ability to acquire knowledge and use it.
• is the computational part of the ability to achieve goals.
▪ So AI has been defined as:
• the study of ideas that enable computers to be intelligent.
• the part of computer science concerned with the design of computer systems that exhibit human intelligence (from the Concise Oxford Dictionary).
• the science and engineering of making intelligent machines, especially intelligent computer programs.
▪ AI can have two purposes.
• One is to use the power of computers to augment human thinking.
• The other is to use a computer's artificial intelligence to understand how humans
think.
2
Definition of AI

             HUMAN                     RATIONAL
THOUGHT      Systems that think        Systems that think
             like humans               rationally
BEHAVIOUR    Systems that act          Systems that act
             like humans               rationally

3
Thinking rationally: Laws of Thought

▪ Humans are not always 'rational'


▪ Rational - defined in terms of logic?
▪ Logic can’t express everything (e.g. uncertainty)
▪ Logical approach is often not feasible in terms of computation time (needs ‘guidance’)
▪ Aristotle: what are correct arguments/thought processes?
▪ Several Greek schools developed various forms of logic
▪ "The study of mental faculties through the use of computational models"
(Charniak and McDermott)
▪ “The study of the computations that make it possible to perceive, reason, and act”
(Winston)

4
Acting rationally
▪ Rational behavior: doing the right thing
▪ The right thing: that which is expected to maximize goal achievement, given the
available information
▪ Doesn’t necessarily involve thinking, but thinking should be in the service
of rational action
▪ Giving answers to questions is ‘acting’.

▪ I don't care whether a system:
➢ replicates human thought processes
➢ makes the same decisions as humans
➢ uses purely logical reasoning

5
Thinking humanly: Cognitive Science
▪ How do humans think?
▪ We can learn about human thought in three ways:
introspection, psychological experiments & brain imaging
▪ Humans as observed from ‘inside’
▪ How do we know how humans think?
▪ Cognitive Science
▪ "The exciting new effort to make computers think … machines with minds in the full and literal sense."
▪ "The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning."

6
Acting humanly: The Turing test

Turing (1950) “Computing machinery and intelligence”


"Can machines think?" → "Can machines behave intelligently?"

[Diagram: an interrogator exchanges text messages with a hidden human and a hidden AI system, and must decide which is which.]

To pass, the model must have the ability to store data, retrieve data, expect data & take actions.
www.boibot.com
Suggested major components of AI: knowledge, reasoning, language understanding, learning
But the Turing test is not reproducible, constructive, or amenable to mathematical analysis.
AI prehistory

▪ Philosophy (from 384 BC, Aristotle)
✓ logic, methods of reasoning
✓ mind as a physical system (or not)
✓ foundations of learning, language, rationality
▪ Mathematics (George Boole, from 1847)
✓ formal logic, proof theory
✓ algorithms, computation, decidability, tractability, probability
▪ Economics (from 1776, Adam Smith)
✓ published An Inquiry into the Nature and Causes of the Wealth of Nations

8
AI prehistory (cont.)

▪ Neuroscience (from 1861, Broca)
✓ study of aphasia (speech deficit) in brain-damaged patients
▪ Psychology (from Hermann von Helmholtz (1821-1894))
✓ applied the scientific method to the study of human vision; his Handbook of Physiological Optics
▪ Computer Science (from 1940, Stibitz)
✓ computer efficiency
▪ Control theory (from 1948, Wiener)
✓ homeostatic systems, stability
✓ simple optimal agent designs
▪ Linguistics (from 1957, Chomsky)
✓ knowledge representation, grammar

9
The gestation of AI

▪ Warren McCulloch and Walter Pitts (1943): built a neuron model that could represent propositional logic.

▪ SNARC (Stochastic Neural Analog Reinforcement Calculator): the first neural network computer, built in 1950 by Marvin Minsky and Dean Edmonds at Harvard.

▪ Newell and Simon: the Logic Theorist (LT), about which Simon claimed, "We have invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem."

10
Great Expectations (1952-1969)
▪ Newell and Simon: developed the General Problem Solver (GPS) to imitate human problem solving.

▪ Arthur Samuel: wrote a series of programs for checkers.

▪ Minsky: supervised a series of students who chose limited problems that appeared to require intelligence to solve, which became known as microworlds.

▪ McCarthy: defined the high-level language Lisp and advocated solving problems with logic.

11
A dose of reality (1966-1973)
▪ U.S. National Research Council:
funded an attempt in 1957 to speed up the translation of Russian scientific papers. The project failed.

▪ Herbert Simon (1957) said: “It is not my aim to surprise or shock you
but the simplest way I can summarize is to say that there are now in
the world machines that think, that learn and that create”

12
The knowledge base, the key to power

▪ DENDRAL (Dendritic Algorithm), 1969-1979, by Feigenbaum at Stanford:
a program to solve the problem of inferring molecular structure from the information provided by a mass spectrometer.

▪ AI becomes an industry (1980-present): the R1 expert system
helped configure orders for new computer systems and saved the company about $40 million a year.

▪ The return of neural networks (1986-present)

• the back-propagation learning algorithm (first found in 1969 by Bryson and Ho) was rediscovered.

13
(1980 — present)

▪ AI adopts the scientific method (1987–present)


▪ The emergence of intelligent agents (1995— present)
➢ Work by Allen Newell, John Laird, and Paul Rosenbloom on SOAR (Newell, 1990; Laird et al., 1987) is the best-known example of a complete agent architecture.
The goal of the Soar project is to develop the fixed computational building blocks necessary for general intelligent agents.
➢ The availability of very large data sets (2001—present)

14
AI today

▪ Games
▪ Automatic control
▪ Diagnostics
▪ Robotics
▪ Many application fields:
smart home, driving assistance, image recognition, personal assistants,
smart grids, video analytics,….

15
Agents and environments
[Diagram: the agent receives percepts from the environment through its sensors; the agent program ("?") chooses actions, which actuators carry out on the environment.]

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Agents include humans, robots, thermostats, etc.
There are two ways to describe an agent's behaviour based on what it senses:
▪ The agent function maps from percept histories to actions.
▪ The agent program runs on the physical architecture to produce the agent function.
16
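To make the sense-decide-act cycle concrete, here is a minimal sketch in Python; Environment is a hypothetical class with percept() and execute() methods, not something from the slides:

    # Minimal agent loop: sense, decide, act.
    def run(environment, agent_program, steps=100):
        for _ in range(steps):
            percept = environment.percept()    # sensors: observe the environment
            action = agent_program(percept)    # the "?" box: map percept to action
            environment.execute(action)       # actuators: act on the environment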
Rational agent

▪ An agent/robot must be able to perceive and interact with the environment.

▪ A rational agent is one that always takes the best action.

▪ Agent = architecture + program

17
Intelligent Agent

[Diagram: an agent, its control policy shown as "?", interacting with its environment.]

• It's all about how the control policy function ("?") of this agent makes its decisions.
18
AI in Finance

[Diagram: a trading agent perceives the stock or bond market and outputs trading decisions.]

19
AI in Robotics

[Diagram: a robot perceives its environment and outputs decisions (actions).]

20
AI in Games

[Diagram: a game-playing agent perceives the game, including you as its opponent, and outputs decisions (moves).]
21
AI in Medicine

[Diagram: a diagnostic agent perceives you and your doctor and outputs a decision (a diagnosis).]

22
AI in the WEB

[Diagram: a search agent takes your search text, consults its database of the World Wide Web, and outputs a decision (search results, or "Feeling lucky").]

23
Rational agent

Consider the case of a simple vacuum agent:

Environment: [room A] and [room B], both possibly with dirt that does not respawn
Actions: [move left], [move right] or [suck]
Senses: the status of the current location only: [dirty] or [clean]
24
Rational agent
An agent function for the vacuum agent maps each percept sequence to an action.

A corresponding agent program:

if status is [Dirty]: return [suck]
if at [room A]: return [move right]
if at [room B]: return [move left]
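A runnable version of this agent program, as a small Python sketch (the names are illustrative):

    # Simple reflex vacuum agent; percept = (location, status).
    def vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"   # head for the other room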
25
Rational agent

In order to determine whether the vacuum agent is rational, we need a performance measure.
Under which of these metrics is the agent program on the previous
slide rational?
1. Have a clean floor in A and B
2. Have a clean floor as fast as possible
3. Have a clean floor with moving as little as possible
4. Maximize the amount of time sucking

26
Rational agent
You want to express the performance measure in terms of the environment, not the agent.
For example, suppose we describe the measure as: "suck up the most dirt".
A rational vacuum agent would then suck up dirt, dump it back, and suck it up again...
This will not lead to a clean floor.
If we do not know how often dirt will reappear, a rational agent might need to learn.

27
Rational agent
Learning can use prior knowledge to estimate how often dirt tends to reappear,
but should value actual observations more

The agent might need to explore and take sub-optimal short-term actions to find a
better long-term solution
A rational agent depends on:
▪ Performance measure
▪ Prior knowledge of the environment
▪ Actions available
▪ History of sensed information
You need to know all of these before you can determine rationality
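As an illustration, here is a sketch that scores the vacuum agent under the measure "one point per clean square per time step"; the measure, like all names here, is an assumption made for this example:

    # Score an agent under an assumed performance measure:
    # +1 for every clean square at every time step.
    def simulate(agent, steps=10):
        dirt = {"A": True, "B": True}   # True means the square is dirty
        location = "A"
        score = 0
        for _ in range(steps):
            status = "Dirty" if dirt[location] else "Clean"
            action = agent((location, status))
            if action == "Suck":
                dirt[location] = False
            elif action == "Right":
                location = "B"
            elif action == "Left":
                location = "A"
            score += sum(1 for d in dirt.values() if not d)   # count clean squares
        return score

    # e.g. simulate(vacuum_agent), with the agent sketched earlier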
28
Agent types
We can also classify agents into four categories:
▪ Simple reflex
These agents select actions on the basis of the current percept, ignoring the rest of the percept history, using condition-action rules:
they act only on the most recent percept and not on the whole history.

Our vacuum agent is of this type, as it only looks at the current state and not any previous percepts.

These rules can be generalized as:

"if state = s then do action a" (such agents often fail or loop infinitely)
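In code, condition-action rules are often just a lookup table keyed by the current percept; a sketch (the table form is an illustration, not the slides' notation):

    # Condition-action rules for the vacuum world as a lookup table.
    RULES = {
        ("A", "Dirty"): "Suck",
        ("B", "Dirty"): "Suck",
        ("A", "Clean"): "Right",
        ("B", "Clean"): "Left",
    }

    def reflex_agent(percept):
        return RULES[percept]   # uses only the current percept, no history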
29
Agent types

▪ Model-based reflex
➢ The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now.
➢ The agent needs to keep a representation of the environment in memory (called internal state).
➢ This internal state is updated with each observation and then dictates actions.
➢ The internal state should be kept from the agent's perspective, not a global perspective (the same global state might call for different actions).
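A sketch of this idea for the vacuum world; the internal model and its update rule are assumptions made for illustration:

    # Model-based reflex vacuum agent: remembers what each room looked like
    # the last time it was observed.
    class ModelBasedVacuumAgent:
        def __init__(self):
            self.model = {"A": "Unknown", "B": "Unknown"}   # internal state

        def __call__(self, percept):
            location, status = percept
            self.model[location] = status            # update internal state
            if status == "Dirty":
                self.model[location] = "Clean"       # it will be clean after sucking
                return "Suck"
            other = "B" if location == "A" else "A"
            if self.model[other] != "Clean":         # other room may still be dirty
                return "Right" if location == "A" else "Left"
            return "NoOp"                            # everything known to be clean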


30
Agent types

The global perspective is the same, but the agents could have different goals (the stars).

[Two pictures: the same global state, with different goal (star) locations for each agent.]

Goals are not global information.
31
Agent types
▪ Goal based
➢ Knowing something about the current state of the environment is not always enough to decide what to do.

➢ The goal-based agent is more general than the model-based agent.

▪ Utility based
➢ A utility-based agent maps the sequence of states (or actions) to a real value.

➢ Goals can only describe outcomes in general terms such as "success" or "failure"; there is no degree of success.

➢ If you want to go upstairs, a goal-based agent could find the closest way up...

➢ A utility-based agent could also accommodate your preferences between stairs and elevator.
32
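A tiny sketch of the difference; the routes and weights are invented for illustration:

    # Both routes satisfy the goal "be upstairs"; a utility function can
    # still rank them by preference.
    def utility(route):
        return -(2.0 * route["effort"] + 1.0 * route["minutes"])

    routes = [
        {"name": "stairs",   "effort": 5.0, "minutes": 1.0},
        {"name": "elevator", "effort": 0.5, "minutes": 3.0},
    ]
    best = max(routes, key=utility)   # the goal test alone could not choose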
PEAS

To design a rational agent, we must specify the task environment

Consider, e.g., the task of designing an automated taxi:

Performance measure??
Environment??
Actuators??
Sensors??
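For reference, the standard answers for the taxi from Russell and Norvig, written here as a Python dict (the dict form itself is just an illustration):

    # PEAS description of the automated taxi (after Russell & Norvig).
    TAXI_PEAS = {
        "Performance": ["safe", "fast", "legal", "comfortable trip",
                        "maximize profits"],
        "Environment": ["roads", "other traffic", "pedestrians", "customers"],
        "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
        "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                        "engine sensors", "keyboard"],
    }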

33
Environment Types

• Fully observable: an environment is called fully observable if what your agent can sense at any point in time is completely sufficient to make the optimal decision, i.e. its sensors can see the entire state of the environment.
• Partially observable: in contrast, in some other environments agents need memory to make the best decision.

34
Environment Types cont.

• Deterministic: a deterministic environment is one where your agent's actions uniquely determine the outcome.

• Non-deterministic / "stochastic": in such environments there is a certain amount of randomness.
• Stochastic: the model explicitly deals with probabilities ("there's a 25% chance of rain tomorrow").
• Non-deterministic: the possibilities are listed without being quantified ("there's a chance of rain tomorrow").

35
Environment Types cont.

• Discrete vs. continuous: a discrete environment is one where you have finitely many action choices and finitely many things you can sense. For example, in chess there are finitely many board positions and finitely many things you can do. Taxi driving, by contrast, is continuous: its states and actions range over continuous values.
• The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

36
Environment Types cont.

• Benign: in benign environments, the environment might be random (it might be stochastic), but it has no objective of its own that would contradict your objective. For example, weather is benign.
• Adversarial: contrast this with adversarial environments, such as many games, like chess, where your opponent is really out there to get you.

37
Environment Types cont.
• Episodic: the agent's experience is divided into atomic episodes; the next episode does not depend on the actions taken in previous episodes. Image analysis is episodic.

• Sequential: the current decision could affect all future decisions. Chess and taxi driving are sequential.

38
             | Fully Observable | Non-deterministic | Continuous | Adversarial
Checkers     |        X         |                   |            |      X
Cards (king) |                  |         X         |            |      X
Robot car    |                  |         X         |     X      |

45
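The completed table can also be written as data; a sketch in Python (the field names are assumptions mirroring the table's columns):

    # Which properties each example task has, per the table above.
    from dataclasses import dataclass

    @dataclass
    class EnvType:
        fully_observable: bool
        non_deterministic: bool
        continuous: bool
        adversarial: bool

    ENVIRONMENTS = {
        "Checkers":     EnvType(True,  False, False, True),
        "Cards (king)": EnvType(False, True,  False, True),
        "Robot car":    EnvType(False, True,  True,  False),
    }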
Summary
➢ Agents interact with environments through actuators and sensors
➢ The agent function describes what the agent does in all circumstances
➢ The performance measure evaluates the environment sequence
➢ A perfectly rational agent maximizes expected performance
➢ Agent programs implement (some) agent functions
➢ PEAS descriptions define task environments
➢ Environments are categorized along several dimensions:
• observable? deterministic? episodic? static? discrete? single-agent?
➢ Several basic agent architectures exist:
• reflex, reflex with state, goal-based, utility-based

46
