
CS 480

Introduction to Artificial Intelligence

August 29, 2023


Announcements / Reminders
 Contribute to the discussion on Blackboard, please
 Please follow the Week 01 To Do List instructions (if you
haven't already):
– Go through the Syllabus
– Set up the Python environment on your computer
– READ the assigned material

2
Teaching Assistants
Name e-mail Office hours
Donthula, Sumanth [email protected] TBD
Nagaraju, Ashish [email protected] TBD
Sun, Haoyu [email protected] TBD
Vishwanath, Tejass [email protected] TBD

TAs will:
• assist you with your assignments,
• hold office hours to answer your questions,
• grade your lab work (a specific TA will be assigned to you).

Take advantage of their time and knowledge!


DO NOT email them with questions unrelated to lab grading.
Make time to meet them during their office hours.
Add a [CS480 Fall 2023] prefix to your email subject when contacting TAs, please.

3
Turing Test: Optional Viewing Material

4
Plan for Today
 Intelligent Agents

5
Identifying Problems Suitable for AI
Most AI problems exhibit the following three
characteristics:

 they tend to be large,
 they are computationally complex and cannot be solved
by a straightforward algorithm,
 they tend to require a significant amount of human
expertise to solve.

6
Agent-Based Modeling

Source: https://www.youtube.com/watch?v=E_-9hFzmxkw

7
Intelligent (Autonomous) Agents

8
Intelligent Agents in Action

Source: https://www.youtube.com/watch?v=kopoLzvh5jY

9
Agent
Agent:
An agent is just something that acts (from the Latin
agere, to do).

Of course, we would prefer “acting” to be:

 autonomous
 situated in some environment (that could be
really complex)
 adaptive
 creative and goal-oriented
10
Rational Agent
Rational Agent:
A rational agent is one that acts so as to achieve
the best outcome, or when there is uncertainty,
the best expected outcome.*

* no worries, we will make it a little less vague soon

11
AI: Constructing Agents
You can say that:
AI is focused on the study and construction of
agents that do the right thing.

12
Agent

13
Agent
Agent:
An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators.

14
Percepts and Percept Sequences
 Percept: the content / information that the agent’s
sensors are currently perceiving / capturing

 Percept Sequence: a complete history of
everything the agent has ever perceived
 any practical issues that you can see here?
 what can a percept sequence be used for?

15
Percepts, Knowledge, Actions, States
 Agent’s choice of action / decision at any given
moment:
 CAN depend on:
 built-in knowledge
 the entire percept sequence
 CANNOT depend on anything it hasn’t perceived

 Agent’s action CAN change the environment state

Knowledge is power, right?


16
Agent

Now what
about this?

17
Agent Function / Program

Agent
Function /
Program

18
Agent Function / Program
 Specifying an action choice for every possible
percept sequence would define an agent

 The action <-> percept sequence mapping IS the
agent function
 The agent function describes agent behavior
 The agent function is an abstract concept
 The agent program implements the agent function
19
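As a sketch of the distinction above (table and names are illustrative, not from the course): a table-driven agent program implements the agent function by literally looking up the percept sequence observed so far.

```python
# Hypothetical agent function, written out as an explicit table mapping
# percept sequences to actions. Real tables are astronomically large,
# which is why agent programs compute actions instead of storing them.
TABLE = {
    ("A",): "go",
    ("A", "B"): "stop",
}

def table_driven_agent(table):
    """Build an agent program that implements the tabulated agent function."""
    percepts = []  # the percept sequence: everything perceived so far

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "noop")  # default for unlisted sequences

    return program
```

The table grows exponentially with the length of the percept sequence, which is one reason the agent function stays an abstract concept while the agent program does the real work.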
Vacuum Cleaner Agent Example

20
Vacuum Cleaner Agent Example

21
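The two-square vacuum world behind this example (locations A and B, percept = (location, status)) admits a very short agent program; a sketch under those assumptions, not code distributed with the course:

```python
def vacuum_agent(percept):
    """Percept is a (location, status) pair; returns an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Current square is clean: move to the other square.
    return "Right" if location == "A" else "Left"
```

Notice this program needs only the current percept, not the whole percept sequence.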
Actions Have Consequences
 An agent can act upon its environment, but how
do we know if the end result is “right”?
 After all, actions have consequences: either good
or bad.
 Recall that agent actions change environment
state!
 If state changes are desirable, an agent performs
well.
 Performance measure evaluates state changes.
22
Performance Measure: A Tip
It is better to design performance
measures according to what one actually
wants to be achieved in the
environment, rather than according to
how one thinks the agent should behave.

23
Performance Measure: A Warning
If it is difficult to specify the
performance measure, agents may end
up optimizing the wrong objective. In
such cases, design agents that handle
uncertainty about the true objective well.

24
Rationality
Rational decisions at the moment depend on:
 The performance measure that defines success
criteria
 The agent’s prior knowledge of the environment
 The actions that the agent can perform
 The agent’s percept sequence so far

25
Rational Agent
For each possible percept sequence, a
rational agent should select an action that is
expected to maximize its performance
measure, given the evidence provided by the
percept sequence and whatever built-in
knowledge the agent has.

26
Rationality in Reality
 An omniscient agent will ALWAYS know the
final outcome of its action. Impossible in
reality. That would be perfection.
 Rationality maximizes what is EXPECTED to
happen
 Perfection maximizes what WILL happen
 Performance can be improved by
information gathering and learning
27
Designing the Agent for the Task

Analyze the Problem / Task (PEAS) →
Select Agent Architecture →
Select Internal Representations →
Apply Corresponding Algorithms

28
Task Environment | PEAS
In order to start the agent design process we need
to specify / define:
 The Performance measure
 The Environment in which the agent will operate
 The Actuators that the agent will use to affect
the environment
 The Sensors that the agent will use to perceive
the environment

29
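A PEAS description is just four lists, so it can be written down as plain data; a minimal sketch with illustrative entries for a taxi-driver agent (the specific items are assumptions, not a definitive specification):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # success criteria
    environment: list  # what the agent operates in
    actuators: list    # how it affects the environment
    sensors: list      # how it perceives the environment

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "GPS", "speedometer", "engine sensors"],
)
```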
PEAS: Taxi Driver Example

30
PEAS: Other Examples

31
Task Environment Properties
Key dimensions by which task environments can be
categorized:
 Fully vs partially observable (can be unobservable too)
 Single agent vs multiagent
 multiagent: competitive vs. cooperative
 Deterministic vs. nondeterministic (stochastic)
 Episodic vs. sequential
 Static vs. dynamic
 Discrete vs. continuous
 Known vs. unknown (to the agent)
32
Fully Observable Environment

Source: Pixabay (www.pexels.com)

33
Partially Observable Environment

Source: https://en.wikipedia.org/wiki/Fog_of_war

34
Partially Observable Environment

Undiscovered

Source: https://en.wikipedia.org/wiki/Fog_of_war

35
Single-agent System

Source: cottonbro (www.pexels.com)

36
Single-agent System

Environment

Agent

Source: cottonbro (www.pexels.com)

37
Multiagent System

Source: Vlada Karpovich (www.pexels.com)

38
Multiagent System

Agents

Environment

Source: Vlada Karpovich (www.pexels.com)

39
Deterministic vs. Nondeterministic
 Deterministic environment:
 next state is completely determined by the current
state and agent action
 deterministic AND fully observable environment: no
need to worry about uncertainty
 deterministic AND partially observable ***may***
appear nondeterministic
 Nondeterministic (stochastic) environment:
 next state is NOT completely determined by the current
state and agent action

40
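The distinction can be made concrete with transition models; a sketch with toy state names of my own choosing: a deterministic model gives one successor state per (state, action) pair, a stochastic one gives a distribution.

```python
import random

# Deterministic: each (state, action) pair has exactly one successor.
DETERMINISTIC = {("s0", "go"): "s1", ("s1", "go"): "s2"}

# Nondeterministic (stochastic): each (state, action) pair has a
# probability distribution over successors.
STOCHASTIC = {("s0", "go"): {"s1": 0.8, "s0": 0.2}}

def sample_successor(model, state, action, rng=random):
    """Sample the next state from a stochastic transition model."""
    dist = model[(state, action)]
    states = list(dist)
    return rng.choices(states, weights=[dist[s] for s in states])[0]
```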
Episodic vs. Sequential
 Episodic environment:
 agent experience is divided into individual,
independent, and atomic episodes
 one percept - one action.
 next action is not a function of previous action: not
necessary to memorize it
 Sequential environment:
 current decision / action COULD affect all future
decisions / actions
 better keep track of it
41
Static vs. Dynamic
 Static environment:
 environment CANNOT change while the agent is
taking its time to decide
 Dynamic environment:
 environment CAN change while the agent is taking its
time to decide -> decision / action may be dated
 speed is important

42
Discrete vs. Continuous
 Discrete environment:
 state changes are discrete
 time changes are discrete
 percepts are discrete
 Continuous environment:
 state changes are continuous (“fluid”)
 time changes are continuous
 percepts / actions can be continuous

43
Known vs. Unknown (to Agent)
 Known environment:
 agent knows all outcomes to its actions (or their
probabilities)
 agent “knows how the environment works”
 Unknown environment:
 agent “doesn’t know all the details about the inner
workings of the environment”
 learning and exploration can be necessary

44
Task Environment Characteristics

45
Hardest Case / Problem
 Partially observable (incomplete information,
uncertainty)
 Multiagent (complex interactions)
 Nondeterministic (uncertainty)
 Sequential (planning usually necessary)
 Dynamic (changing environment, uncertainty)
 Continuous (infinite number of states)
 Unknown (agent needs to learn / explore,
uncertainty)
46
Designing the Agent for the Task

Analyze the Problem / Task (PEAS) →
Select Agent Architecture →
Select Internal Representations →
Apply Corresponding Algorithms

47
Agent Structure / Architecture

Agent = Architecture + Program

48
Typical Agent Architectures
 Simple reflex agent
 Model-based reflex agent:
 Goal-based reflex agent
 Utility-based reflex agent

49
Simple Reflex Agent

50
Simple Reflex Agent

51
Simple Reflex Agent: Challenges?

52
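A simple reflex agent maps the current percept straight to an action through condition-action rules; a sketch with made-up rules (the percept fields are hypothetical):

```python
# Condition-action rules, tried in order. Note the agent consults
# ONLY the current percept: no history, no internal state.
RULES = [
    (lambda p: p["obstacle_ahead"], "brake"),
    (lambda p: p["light"] == "green", "go"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
    return "wait"  # no rule matched
```

One challenge is visible immediately: with only the current percept, the agent cannot remember what it saw a moment ago, so in a partially observable environment identical percepts produce identical actions and the agent can loop forever.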
Model-based Reflex Agent

53
Model-based Reflex Agent

54
Model-based Goal-based Agent

55
Model-based Utility-based Agent

56
Model-based Agents: Challenges?

57
Typical Agent Architectures
 Simple reflex agent: uses condition-action rules
 Model-based reflex agent: keeps track of the
unobserved parts of the environment by maintaining
internal state:
 “how the world works”: state transition model
 how percepts and the environment are related: sensor model

 Goal-based reflex agent: maintains a model of the
world and goals to select decisions (that lead to the goal)
 Utility-based reflex agent: maintains a model of the
world and a utility function to select PREFERRED decisions
(those that lead to the best expected utility:
EU(a) = Σ p(outcome) × U(outcome))

58
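The utility-based selection rule in the last bullet can be sketched directly: compute each action's expected utility as Σ p(outcome) × U(outcome) and take the maximum (the numbers below are toy values, my own):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(action_outcomes):
    """Pick the action whose expected utility is highest."""
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))
```

For example, an action with a certain utility of 5 loses to one with a 50/50 chance of 20 or -4, since 0.5 × 20 + 0.5 × (-4) = 8 > 5.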
Learning Agent

59
Designing the Agent for the Task

Analyze the Problem / Task (PEAS) →
Select Agent Architecture →
Select Internal Representations →
Apply Corresponding Algorithms

60
State and Transition Representations

 Atomic: state representation has NO internal structure
 Factored: state representation includes fixed attributes (which
can have values)
 Structured: state representation includes objects and their
relationships

61
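One concrete state written in all three representations (the example domain is mine, chosen only to illustrate the distinction):

```python
# Atomic: the state is an opaque label with no internal structure.
atomic = "state_42"

# Factored: the state is a fixed set of attributes with values.
factored = {"location": "gas_station", "fuel": 0.25, "engine_on": False}

# Structured: the state contains objects and relations between them.
structured = {
    "objects": ["car1", "pump3"],
    "relations": [("at", "car1", "pump3"), ("low_fuel", "car1")],
}
```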
State and Transition Representations

Complexity, level of detail, and expressiveness increase from atomic to structured; so does the difficulty of processing.

62
Representations and Algorithms

Atomic:
 Searching
 Hidden Markov models
 Markov decision process
 Finite state machines

Factored:
 Constraint satisfaction algorithms
 Propositional logic
 Planning
 Bayesian algorithms
 Some machine learning algorithms

Structured:
 Relational database algorithms
 First-order logic
 First-order probability models
 Natural language understanding (some)

63
Designing the Agent for the Task

Analyze the Problem / Task (PEAS) →
Select Agent Architecture →
Select Internal Representations →
Apply Corresponding Algorithms

64
