Introduction to AI

&
Intelligent Agents
Artificial Intelligence in Movies
AI in Computer Games

https://www.analyticsindiamag.com/top-5-video-games-that-have-made-the-best-use-of-ai/
What is Intelligence?

• Intelligence:
– “the capacity to learn and solve problems” (Webster's dictionary)
– in particular,
• the ability to solve novel problems
• the ability to act rationally
• the ability to act like humans

• Artificial Intelligence
– build and understand intelligent entities or agents
– 2 main approaches: “engineering” versus “cognitive modeling”
What’s involved in Intelligence?

• Ability to interact with the real world


– to perceive, understand, and act
– e.g., speech recognition and understanding and synthesis
– e.g., image understanding
– e.g., ability to take actions, have an effect

• Reasoning and Planning


– modeling the external world, given input
– solving new problems, planning, and making decisions
– ability to deal with unexpected problems, uncertainties

• Learning and Adaptation


– we are continuously learning and adapting
– our internal models are always being “updated”
• e.g., a baby learning to categorize and recognize animals
Academic Disciplines relevant to AI
• Philosophy Logic, methods of reasoning, mind as physical
system, foundations of learning, language,
rationality.

• Mathematics Formal representation and proof, algorithms,


computation, (un)decidability, (in)tractability

• Probability/Statistics modeling uncertainty, learning from data

• Economics utility, decision theory, rational economic agents

• Neuroscience neurons as information processing units.

• Psychology/ how do people behave, perceive, process cognitive


Cognitive Science information, represent knowledge.

• Computer building fast computers


engineering

• Control theory design systems that maximize an objective


function over time

• Linguistics knowledge representation, grammars


Can we build hardware as complex as the brain?

• How complicated is our brain?


– a neuron, or nerve cell, is the basic information processing unit
– estimated to be on the order of 10^12 neurons in a human brain
– many more synapses (10^14) connecting these neurons
– cycle time: 10^-3 seconds (1 millisecond)

• How complex can we make computers?


– 10^8 or more transistors per CPU
– supercomputer: hundreds of CPUs, 10^12 bits of RAM
– cycle times: order of 10^-9 seconds

• Conclusion
– YES: in the near future we can have computers with as many basic
processing elements as our brain, but with
• far fewer interconnections (wires or synapses) than the brain
• much faster updates than the brain
– but building hardware is very different from making a computer
behave like a brain!
AIs beat humans at their own game

1997: IBM's Deep Blue supercomputer beat reigning
World Chess Champion Garry Kasparov
– following rules, brute-force searching

"I'm not afraid to admit that I'm afraid,"
- Kasparov
Can Computers beat Humans at Chess?
• Chess Playing is a classic AI problem
– well-defined problem
– very complex: difficult for humans to play well

[Chart: chess ratings (1200 to 3000) from 1966 to 1997; computer programs rise from Deep Thought to Deep Blue, overtaking the Human World Champion's rating of about 2800]

• Conclusion:
– YES: today’s computers can beat even the best human
Can Computers Talk?
• This is known as “speech synthesis”
– translate text to phonetic form
• e.g., “fictitious” -> fik-tish-es
– use pronunciation rules to map phonemes to actual sound
• e.g., “tish” -> sequence of basic audio sounds

• Difficulties
– sounds made by this “lookup” approach sound unnatural
– sounds are not independent
• e.g., “act” and “action”
• modern systems (e.g., at AT&T) can handle this pretty well
– a harder problem is emphasis, emotion, etc
• humans understand what they are saying
• machines don’t: so they sound unnatural

• Conclusion:
– YES, for individual words
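The lookup approach described above can be sketched in a few lines. The mini-lexicon and audio-unit names below are illustrative assumptions, not a real phoneme inventory:

```python
# Toy sketch of lookup-based speech synthesis: text -> phonemes via a
# dictionary, phonemes -> basic audio units via a second table.

PHONEME_LEXICON = {   # assumed toy entries, not a real lexicon
    "fictitious": ["f", "ik", "tish", "es"],
}

AUDIO_UNITS = {       # each phoneme maps to a basic sound unit
    "f": "<f-sound>", "ik": "<ik-sound>",
    "tish": "<tish-sound>", "es": "<es-sound>",
}

def synthesize(word):
    """Return the sequence of basic audio units for a known word."""
    phonemes = PHONEME_LEXICON.get(word.lower())
    if phonemes is None:
        return None  # unknown word: a real system would fall back to rules
    return [AUDIO_UNITS[p] for p in phonemes]

print(synthesize("fictitious"))
```

Because each unit is looked up independently, the output ignores coarticulation between neighbouring sounds, which is exactly why this approach sounds unnatural.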
Can Computers Recognize Speech?

• Speech Recognition:
– mapping sounds from a microphone into a list of words
– classic problem in AI, very difficult
• “Let's talk about how to wreck a nice beach”

• (I really said “________________________”)

• Recognizing single words from a small vocabulary


• systems can do this with high accuracy (order of 99%)
• e.g., directory inquiries
– limited vocabulary (area codes, city names)
– computer tries to recognize you first, if unsuccessful hands
you over to a human operator
– saves millions of dollars a year for the phone companies
Recognizing human speech (ctd.)

• Recognizing normal speech is much more difficult


– speech is continuous: where are the boundaries between words?
• e.g., “John’s car has a flat tire”
– large vocabularies
• can be many thousands of possible words
• we can use context to help figure out what someone said
– e.g., hypothesize and test
– try telling a waiter in a restaurant:
“I would like some dream and sugar in my coffee”
– background noise, other speakers, accents, colds, etc
– on normal speech, modern systems are only about 60-70% accurate

• Conclusion:
– NO, normal speech is too complex to accurately recognize
– YES, for restricted problems (small vocabulary, single speaker)
Can Computers Understand speech?

• Understanding is different from recognition:


– “Time flies like an arrow”
• assume the computer can recognize all the words
• how many different interpretations are there?
– 1. time passes quickly like an arrow?
– 2. command: time the flies the way an arrow times the flies
– 3. command: only time those flies which are like an arrow
– 4. “time-flies” are fond of arrows
• only 1. makes any sense,
– but how could a computer figure this out?
– clearly humans use a lot of implicit commonsense
knowledge in communication

• Conclusion: NO, much of what we say is beyond the capabilities of
a computer to understand at present.
• But in the near future, YES.
Can Computers Learn and Adapt ?
• Learning and Adaptation
– consider a computer learning to drive on the freeway
– we could teach it lots of rules about what to do
– or we could let it drive and steer it back on course when it heads for
the embankment
• systems like this are under development (e.g., Daimler Benz)
• e.g., RALPH at CMU
– in mid 90’s it drove 98% of the way from Pittsburgh to San
Diego without any human assistance
– machine learning allows computers to learn to do things without
explicit programming
– many successful applications:
• requires some “set-up”: does not mean your PC can learn to
forecast the stock market or become a brain surgeon

• Conclusion: YES, computers can learn and adapt, when presented


with information in the appropriate way
Can Computers “see”?

• Recognition v. Understanding (like Speech)


– Recognition and Understanding of Objects in a scene
• look around this room
• you can effortlessly recognize objects
• human brain can map 2d visual image to 3d “map”

• Why is visual recognition a hard problem?

• Conclusion:
– mostly NO: computers can only “see” certain types of objects under
limited circumstances
– YES for certain constrained problems (e.g., face recognition)
Can computers plan and make optimal decisions?

• Intelligence
– involves solving problems and making decisions and plans
– e.g., you want to take a holiday in Brazil
• you need to decide on dates, flights
• you need to get to the airport, etc
• involves a sequence of decisions, plans, and actions

• What makes planning hard?


– the world is not predictable:
• your flight is canceled or there’s a backup on the 405
– there are a potentially huge number of details
• do you consider all flights? all dates?
– no: commonsense constrains your solutions
– AI systems are only successful in constrained planning problems

• Conclusion: NO, real-world planning and decision-making is still beyond
the capabilities of modern computers
– exception: very well-defined, constrained problems
– BUT in the near future, YES


Applications of Artificial Intelligence

• Speech Recognition

• Machine Translation

• Facial Recognition and Automatic Tagging


Applications of Artificial Intelligence

• Virtual Personal Assistants

• Self Driving Car

• Chatbots

https://www.edureka.co/blog/artificial-intelligence-tutorial/
Summary of State of AI Systems in Practice
• Speech synthesis, recognition and understanding
– very useful for limited vocabulary applications
– unconstrained speech understanding is still too hard

• Computer vision
– works for constrained problems (hand-written zip-codes)
– understanding real-world, natural scenes is still too hard

• Learning
– adaptive systems are used in many applications: have their limits

• Planning and Reasoning


– only works for constrained problems: e.g., chess
– real-world is too complex for general systems

• Overall:
– many components of intelligent systems are “doable”
– there are many interesting research problems remaining
Intelligent Systems in Your Everyday Life
• Post Office
– automatic address recognition and sorting of mail

• Banks
– automatic check readers, signature verification systems
– automated loan application classification

• Customer Service
– automatic voice recognition

• The Web
– Identifying your age, gender, location, from your Web surfing
– Automated fraud detection

• Digital Cameras
– Automated face detection and focusing

• Computer Games
– Intelligent characters/agents
Hard or Strong AI

• Generally, artificial intelligence research aims to create AI that


can replicate human intelligence completely.

• Strong AI refers to a machine that approaches or supersedes


human intelligence,
◊ If it can do typically human tasks,
◊ If it can apply a wide range of background knowledge and
◊ If it has some degree of self-consciousness.

• Strong AI aims to build machines whose overall intellectual


ability is indistinguishable from that of a human being.
Soft or Weak AI
• Weak AI refers to the use of software to study or accomplish
specific problem solving or reasoning tasks that do not
encompass the full range of human cognitive abilities.

• Example : a chess program such as Deep Blue.

• Weak AI does not achieve self-awareness and does not demonstrate the full
range of human-level cognitive abilities; it is merely an intelligent,
specific problem-solver.
The Turing Test
(Can machines think? A. M. Turing, 1950)

• Requires:
– Natural language
– Knowledge representation
– Automated reasoning
– Machine learning
– (vision, robotics) for full test
Acting humanly: Turing test

• Turing (1950), "Computing Machinery and Intelligence"

• "Can machines think?" → "Can machines behave intelligently?"

• Operational test for intelligent behavior: the Imitation Game

• Suggests major components required for AI:


- knowledge representation
- reasoning,
- language/image understanding,
- learning

* Question: is it important that an intelligent system act like a human?


Turing Test - Details

◊ 3 rooms contain: a person, a computer, and an interrogator.

◊ The interrogator can communicate with the other 2 by teletype (so that the
machine cannot imitate the appearance or voice of the person).

◊ The interrogator tries to determine which is the person and which is the
machine.

◊ The machine tries to fool the interrogator to believe that it is the human,
and the person also tries to convince the interrogator that it is the
human.

◊ If the machine succeeds in fooling the interrogator, then conclude that the
machine is intelligent.

Goal is to develop systems that are human-like.


Intelligent Agents
Agents
• An agent is anything that can perceive its environment
through sensors and act upon that environment
through actuators

• Human agent: eyes, ears, and other organs for


sensors; hands, legs, mouth, and other body parts for
actuators

• Robotic agent: camera and microphone for sensors;


various motors for actuators

• Percept- agent’s perceptual inputs at any given instant.


• Percept Sequence- the complete history of everything the agent
has ever perceived.
Agents and environments

• The agent function maps from percept histories to actions:


f: P* → A

• The agent program runs on the physical architecture to produce f
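The mapping f: P* → A can be sketched as an agent program that appends each percept to the history and applies f to the whole sequence. The vacuum-world percepts and the trivial policy below are illustrative assumptions:

```python
# Sketch of an agent program implementing f: P* -> A.

class Agent:
    def __init__(self, agent_function):
        self.history = []            # the percept sequence, P*
        self.f = agent_function      # maps percept histories to actions

    def step(self, percept):
        self.history.append(percept)
        return self.f(self.history)

def last_percept_policy(history):
    """A trivial f that looks only at the most recent percept."""
    location, status = history[-1]
    return "Pull_up" if status == "Dirty" else "Right"

agent = Agent(last_percept_policy)
print(agent.step(("A", "Dirty")))    # -> Pull_up
print(agent.step(("A", "Clean")))    # -> Right
```

Note that although f is defined over the whole history, this particular policy ignores everything but the last percept; richer agents use more of the history.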


The vacuum-cleaner world

• Environment: square A and B


• Percepts: [location and content] e.g. [A, Dirty]
• Actions: Left, Right, Pull_up, and NoOp
The vacuum-cleaner world

Percept sequence Action


[A,Clean] Right
[A, Dirty] Pull_up
[B, Clean] Left
[B, Dirty] Pull_up
[A, Clean],[A, Clean] Right
[A, Clean],[A, Dirty] Pull_up
… …
The vacuum-cleaner world

function REFLEX-VACUUM-AGENT ([location, status]) return an action


if status == Dirty then return Pull_up
else if location == A then return Right
else if location == B then return Left
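The pseudocode above transcribes directly into Python:

```python
# Direct Python transcription of REFLEX-VACUUM-AGENT.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Pull_up"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Pull_up
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left
```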
Rational agents
• For each possible percept sequence, a rational
agent should select an action that is expected to
maximize its performance measure, given the
evidence provided by the percept sequence, and
whatever built-in knowledge the agent has.

• E.g., performance measure of a vacuum-cleaner


agent could be amount of dirt cleaned up, amount
of time taken, amount of electricity consumed,
amount of noise generated, etc.
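One possible way to combine those criteria into a single performance measure is a weighted sum; the weights below are illustrative assumptions, not part of the slide:

```python
# A candidate performance measure for the vacuum-cleaner agent:
# reward dirt cleaned, penalize time, electricity, and noise.
def performance(dirt_cleaned, seconds, joules, noise_events):
    return 10 * dirt_cleaned - 0.1 * seconds - 0.01 * joules - noise_events

print(performance(dirt_cleaned=5, seconds=60, joules=100, noise_events=2))
```

Different weight choices define different rational behaviors, which is why the performance measure must be specified before asking whether an agent is rational.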
Environments
• To design an agent we must specify its task
environment.

• PEAS description of the task environment:


– Performance
– Environment
– Actuators
– Sensors
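A PEAS description can be held in a small data structure; the field contents below are the taxi-driver example that follows:

```python
# A simple container for a PEAS task-environment description.
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
print(taxi.sensors)
```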
PEAS-EXAMPLE
• Example: Agent = taxi driver

– Performance measure: Safe, fast, legal,


comfortable trip, maximize profits

– Environment: Roads, other traffic, pedestrians,


customers

– Actuators: Steering wheel, accelerator, brake,


signal, horn

– Sensors: Cameras, sonar, speedometer, GPS,


odometer, engine sensors, keyboard
PEAS-EXAMPLE

• Example: Agent = Medical diagnosis system

Performance measure: Healthy patient, minimize


costs, lawsuits

Environment: Patient, hospital, staff

Actuators: Screen display (questions, tests,


diagnoses, treatments, referrals)

Sensors: Keyboard (entry of symptoms, findings,


patient's answers)
PEAS-EXAMPLE

• Example: Agent = Part-picking robot

• Performance measure: Percentage of parts in


correct bins

• Environment: Conveyor belt with parts, bins

• Actuators: Jointed arm and hand

• Sensors: Camera, joint angle sensors


Environment types
• Fully observable (vs. partially observable): An
agent's sensors give it access to the complete
state of the environment at each point in time.

• Deterministic (vs. stochastic): The next state of


the environment is completely determined by the
current state and the action executed by the
agent. (If the environment is deterministic except
for the actions of other agents, then the
environment is strategic)

• Episodic (vs. sequential): An agent’s action is


divided into atomic episodes. Decisions do not
depend on previous decisions/actions.
Environment types

• Static (vs. dynamic): The environment is


unchanged while an agent is deliberating. (The
environment is semidynamic if the environment
itself does not change with the passage of time
but the agent's performance score does)

• Discrete (vs. continuous): A limited number of


distinct, clearly defined percepts and actions.
How do we represent or abstract or model the
world?

• Single agent (vs. multi-agent): An agent operating


by itself in an environment. Does the other agent
interfere with my performance measure?
task                    observable  determ./     episodic/   static/   discrete/   agents
environment                         stochastic   sequential  dynamic   continuous
crossword puzzle        fully       determ.      sequential  static    discrete    single
chess with clock        fully       strategic    sequential  semi      discrete    multi
poker                   partial     stochastic   sequential  static    discrete    multi
backgammon              fully       stochastic   sequential  static    discrete    multi
taxi driving            partial     stochastic   sequential  dynamic   continuous  multi
medical diagnosis       partial     stochastic   sequential  dynamic   continuous  single
image analysis          fully       determ.      episodic    semi      continuous  single
part-picking robot      partial     stochastic   episodic    dynamic   continuous  single
refinery controller     partial     stochastic   sequential  dynamic   continuous  single
interactive Eng. tutor  partial     stochastic   sequential  dynamic   discrete    multi
Environment types

Fully vs. partially observable: an environment is fully observable when the
sensors can detect all aspects that are relevant to the choice of action.

Deterministic vs. stochastic: if the next environment state is completely
determined by the current state and the executed action, then the environment
is deterministic.

Episodic vs. sequential: in an episodic environment, the agent's experience
is divided into atomic episodes. The choice of action depends only on the
episode itself.

Static vs. dynamic: if the environment can change while the agent is choosing
an action, the environment is dynamic. It is semi-dynamic if the agent's
performance score changes even when the environment remains the same.

Discrete vs. continuous: this distinction can be applied to the state of the
environment, the way time is handled, and to the percepts/actions of the agent.

Single vs. multi-agent: does the environment contain other agents who are
also maximizing some performance measure that depends on the current
agent's actions?

                 Solitaire  Backgammon  Internet shopping  Taxi
Observable??     FULL       FULL        PARTIAL            PARTIAL
Deterministic??  YES        NO          YES                NO
Episodic??       NO         NO          NO                 NO
Static??         YES        YES         SEMI               NO
Discrete??       YES        YES         YES                NO
Single-agent??   YES        NO          NO                 NO
Environment types

• The simplest environment is


– Fully observable, deterministic, episodic, static,
discrete and single-agent.
• Most real situations are:
– Partially observable, stochastic, sequential,
dynamic, continuous and multi-agent.
Agent types

• Table Driven agents

• Simple reflex agents

• Model-based reflex agents

• Goal-based agents

• Utility-based agents

• Learning agents
Table Driven Agent
• keeps the current state of the decision process
• looks up the action in a table indexed by the entire percept history
Table Driven Agent.

Drawbacks:
• Huge table
• Take a long time to build the table
• No autonomy
• Even with learning, need a long time to learn the table
entries
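The huge-table drawback is visible even in the tiny vacuum world. The sketch below indexes actions by the entire percept sequence (entries adapted from the earlier vacuum-world table); the table grows exponentially with the length of the history:

```python
# Sketch of a table-driven agent: the action is looked up using the
# ENTIRE percept sequence, which is why the table explodes in size.
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Pull_up",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Pull_up",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Pull_up",
}

percepts = []   # the full percept history

def table_driven_agent(percept):
    percepts.append(percept)
    return TABLE.get(tuple(percepts))  # None once the history leaves the table

print(table_driven_agent(("A", "Clean")))   # -> Right
print(table_driven_agent(("A", "Dirty")))   # -> Pull_up
```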
Simple reflex agents

• no memory: selects actions based on the current percept only
• fails if the environment is partially observable
• example: vacuum cleaner world
Simple reflex agents

Example condition-action rule:
if car-in-front-is-braking then initiate-braking
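A rule of this kind can be written as a condition-action pair checked against the current percept only; the predicate names below are illustrative:

```python
# Simple reflex agent: scan a list of (condition, action) rules and
# fire the first rule whose condition matches the current percept.
RULES = [
    (lambda p: p.get("car_in_front_is_braking"), "initiate_braking"),
    (lambda p: p.get("light_is_red"),            "stop"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no_op"

print(simple_reflex_agent({"car_in_front_is_braking": True}))  # -> initiate_braking
print(simple_reflex_agent({}))                                 # -> no_op
```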
Simple reflex agents
Model-based reflex agents
• maintain a description of the current world state by modeling:
– how the world changes on its own
– how the agent's actions change the world

• This can work even with partial information

• But it is unclear what to do
without a clear goal
Model-based reflex agents

Example:
• “how the world evolves independently of the agent”
– an overtaking car generally will be closer behind than it was a
moment ago
• “how the agent’s own actions affect the world”
– when the agent turns the steering clockwise, the car turns to
the right.
– After driving for five minutes northbound on the freeway, one
is usually about five miles north of where one was five
minutes ago.

• The details of how models and states are represented vary


widely depending on the type of environment and the particular
technology used in the agent design.
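For the overtaking example above, a model-based reflex agent can be sketched with internal state that is updated from each percept before a rule fires. The distance values and the 10-unit rule are illustrative assumptions:

```python
# Sketch of a model-based reflex agent: internal state models the
# overtaking car behind, and the rule consults that state.
class ModelBasedReflexAgent:
    def __init__(self):
        self.behind_distance = None   # internal model of the world

    def update_state(self, percept):
        # "how the world evolves": record the overtaking car's distance
        self.behind_distance = percept["car_behind_distance"]

    def step(self, percept):
        self.update_state(percept)
        if self.behind_distance is not None and self.behind_distance < 10:
            return "let_pass"
        return "keep_lane"

agent = ModelBasedReflexAgent()
print(agent.step({"car_behind_distance": 25}))  # -> keep_lane
print(agent.step({"car_behind_distance": 5}))   # -> let_pass
```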
Model-based reflex agents
Goal-based agents

• Knowing about the current state of the


environment is not always enough to
decide what to do (e.g. decision at a road
junction)
• The agent needs some sort of goal
information that describes situations that
are desirable
• The agent program can combine this with
information about the results of possible
actions in order to choose actions that
achieve the goal
• Usually requires search and planning
Goal-based agents
Goals provide reason to prefer one action over the other.
We need to predict the future: we need to plan & search
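"Predicting the future" can be sketched as search over a toy state graph; the states and links below are illustrative assumptions, and breadth-first search stands in for a real planner:

```python
# Goal-based action selection as breadth-first search from a start
# state to a goal state over a toy map.
from collections import deque

GRAPH = {
    "home": ["airport", "station"],
    "airport": ["brazil"],
    "station": [],
    "brazil": [],
}

def plan(start, goal):
    """Return a sequence of states from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan("home", "brazil"))   # -> ['home', 'airport', 'brazil']
```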
Utility-based agents

• Goals alone are not really enough to generate high


quality behavior in most environments – they just
provide a binary distinction between happy and
unhappy states
• A more general performance measure should allow a
comparison of different world states according to
exactly how happy they would make the agent if they
could be achieved
• Happy – Utility (the quality of being useful)
• A utility function maps a state onto a real number
which describes the associated degree of happiness
Utility-based agents
Some solutions to goal states are better than others.
Which one is best is given by a utility function.
Which combination of goals is preferred?
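Choosing among goal-reaching states can be sketched as scoring each reachable state with a utility function and picking the best action. The outcomes and weights below are illustrative assumptions:

```python
# Utility-based choice: map each action to its resulting state, score
# the states with a utility function, and pick the maximizer.
OUTCOMES = {                 # action -> resulting state (toy, deterministic)
    "fly_direct":   {"cost": 500, "hours": 12},
    "fly_stopover": {"cost": 300, "hours": 20},
}

def utility(state):
    # happiness decreases with cost and travel time (weights are assumed)
    return -(state["cost"] + 20 * state["hours"])

def choose_action(outcomes):
    return max(outcomes, key=lambda a: utility(outcomes[a]))

print(choose_action(OUTCOMES))   # -> fly_stopover
```

With these weights the cheaper, slower flight wins; doubling the time penalty would flip the choice, showing how the utility function encodes the trade-off between goals.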
Learning agents

• Turing – instead of actually programming intelligent


machines by hand, which is too much work, build
learning machines and then teach them

• Learning also allows the agent to operate in initially


unknown environments and to become more
competent than its initial knowledge alone might allow
Learning agents
How does an agent improve over time?
By monitoring its performance and suggesting
better modeling, new action rules, etc.

• critic: evaluates the current world state
• learning element: changes the action rules
• performance element ("old agent"): models the world and decides on
actions to be taken
• problem generator: suggests explorations
Learning agents
• Learning element – responsible for making
improvements
• Performance element – responsible for selecting
external actions (it is what we had defined as the
entire agent before)
• Learning element uses feedback from the critic on
how the agent is doing and determines how the
performance element should be modified to do better
in the future
• Problem generator is responsible for suggesting
actions that will lead to new and informative
experiences
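The four components can be wired together in a toy loop: the critic gives feedback, the learning element nudges the performance element's rule, and the problem generator proposes new experiences. All numbers here are illustrative assumptions, not a real learning algorithm:

```python
# Toy learning agent: a threshold rule is improved using critic feedback.
import random

random.seed(0)   # reproducible run

class LearningAgent:
    def __init__(self):
        self.threshold = 0.9           # performance element's rule parameter

    def performance_element(self, percept):
        return "act" if percept > self.threshold else "wait"

    def critic(self, percept, action):
        # feedback: the "right" rule would act exactly when percept > 0.5
        return 1.0 if (action == "act") == (percept > 0.5) else -1.0

    def learning_element(self, feedback, percept):
        if feedback < 0:               # modify the performance element
            self.threshold += 0.5 * (percept - self.threshold)

    def problem_generator(self):
        return random.random()         # suggest a new, informative experience

agent = LearningAgent()
for _ in range(50):
    p = agent.problem_generator()
    a = agent.performance_element(p)
    agent.learning_element(agent.critic(p, a), p)
print(agent.threshold)   # drifts down from 0.9 toward 0.5
```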
