
Contents

Chapter 1
1. Acting humanly: The Turing test approach
2. Thinking humanly: The cognitive modeling approach
3. Thinking rationally: The "laws of thought" approach
4. Acting rationally: The rational agent approach
Summary from the book

Chapter 2
Good Behavior: The Concept of Rationality
PEAS Examples
Properties of task environments / Environment Types
The Structure of Agents
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
Summary from the book

Chapter 1

Intelligence: how our brain can perceive, understand, predict, and manipulate a world more complicated than itself.

Artificial intelligence (AI): concerned not just with understanding but also with building intelligent entities (rational agents) [machines that can compute how to act effectively and safely in a wide variety of novel situations].
In other words, it is the study of rational agents and their environments.
What Is AI (intelligence)?
(To define AI, there have been four main approaches. We will discuss each in detail, look at its definition, and then arrive at the best definition and the best approach; the textbook we are following leans toward the fourth approach. The definition of AI depends on our definition of intelligence, so we must define intelligence first in order to define AI.)
Some have defined intelligence:
1. in terms of fidelity to human performance;
2. by an abstract, formal definition called rationality [loosely speaking, doing the "right thing"].
The subject matter itself also varies:
1. intelligence as a property of internal thought processes and reasoning;
2. intelligent behavior, an external characterization.

From these two dimensions—human vs. rational and thought vs. behavior—there are four possible combinations:

1. Acting humanly: The Turing test approach

Turing test: a measure of a machine's ability to exhibit intelligent behavior that cannot be distinguished from that of a human; it focuses on natural language understanding and conversation.

Total Turing test: a more comprehensive measure for evaluating artificial general intelligence, including problem-solving, reasoning, creativity, perception, and more.

Internal capabilities of a human:

1. natural language processing to communicate successfully in a human language:
   a. understanding the language (reading or hearing);
   b. generating it (writing or speaking).
2. knowledge representation to store what it knows or hears (with no database).
3. automated reasoning to answer questions and to draw new conclusions (inference).
4. machine learning to adapt to new circumstances and to detect and extrapolate patterns (learning).

External capabilities of a human:
5. computer vision and speech recognition to perceive the world (understanding the world as a human would: emotions, human expressions, ...).
6. robotics to manipulate objects and move about (i.e., it has a body with which it moves and interacts).

These six disciplines compose most of AI.

The quest for "artificial flight" succeeded when engineers and inventors stopped imitating birds and started using wind tunnels and learning about aerodynamics.
(The goal is not imitation but understanding how the thing works; this approach did not apply that principle.)

2. Thinking humanly: The cognitive modeling approach

(This approach models the cognitive process, i.e. how cognition happens in a human.)

To say that a program thinks like a human, we must know how humans think. We can learn about human thought in three ways:

1. introspection: trying to catch our own thoughts as they go by;
2. psychological experiments: observing a person in action;
3. brain imaging: observing the brain in action.

Cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.

3. Thinking rationally: The "laws of thought" approach

(Rational thinking.)

Syllogisms
- Aristotle's syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises.

Logicist
- These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.

Probability
- Logic as conventionally understood requires knowledge of the world that is certain—a condition that, in reality, is seldom achieved.
(If the basic premise is wrong, everything built on it is wrong.)
- The theory of probability fills this gap:
1. It allows the construction of a comprehensive model of rational thought (leading from raw perceptual information to an understanding of how the world works to predictions about the future).
2. What it does not do is generate intelligent behavior.
3. For that, we need a theory of rational action. Rational thought, by itself, is not enough.

4. Acting rationally: The rational agent approach

(Rational action.)

Agent and rational agent:
- Agent: just something that acts (a robot, a software program, etc.).
- All computer programs do something, but computer agents are expected to do more:
1. operate autonomously (act independently);
2. perceive their environment (take in the environment around them);
3. persist over a prolonged time period (run for long periods);
4. adapt to change (i.e. improve themselves);
5. create and pursue goals.
- Rational agent: one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
(This is the final definition of AI, based on this last approach; it is the definition this book settles on.)

The rational-agent approach to AI has two advantages over the other approaches:
1. First, it is more general than the “laws of thought” approach
because correct inference is just one of several possible
mechanisms for achieving rationality.
2. Second, it is more amenable to scientific development. The
standard of rationality is mathematically well defined and
completely general. We can often work back from this
specification to derive agent designs that provably achieve
it—something that is largely impossible if the goal is to
imitate human behavior or thought processes.

Summary from the book

This chapter defines AI and establishes the cultural background against which it has developed. Some of the important points are as follows:

• Different people approach AI with different goals in mind. Two important questions to ask are: Are you concerned with thinking, or behavior? Do you want to model humans, or try to achieve the optimal results?

• According to what we have called the standard model, AI is concerned mainly with rational action. An ideal intelligent agent takes the best possible action in a situation. We study the problem of building agents that are intelligent in this sense.

• Two refinements to this simple idea are needed: first, the ability of any agent, human or otherwise, to choose rational actions is limited by the computational intractability of doing so; second, the concept of a machine that pursues a definite objective needs to be replaced with that of a machine pursuing objectives to benefit humans, but uncertain as to what those objectives are.

• Philosophers (going back to 400 BCE) made AI conceivable by suggesting that the mind is in some ways like a machine, that it operates on knowledge encoded in some internal language, and that thought can be used to choose what actions to take.

Chapter 2

1. Agent and rational agent:

- Agent: just something that acts (a robot, a software program, etc.).
(An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.)
- All computer programs do something, but computer agents are expected to do more:
1. operate autonomously (act independently);
2. perceive their environment (take in the environment around them);
3. persist over a prolonged time period (run for long periods);
4. adapt to change (i.e. improve themselves);
5. create and pursue goals.
- Rational agent: one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome given the available information and resources.
- Intelligent agents:
1. Robotic agents (physical presence).
2. Interface agents: standalone or web-based applications (software agent, software robot, or softbot).
- Performance measure (utility function): an objective criterion for success of an agent's behavior (an objective measure for judging whether the machine's behavior succeeds).
- Percept: the content an agent's sensors are perceiving.
- An agent's percept sequence: the complete history of everything the agent has ever perceived.
- Agent function: maps any given percept sequence to an action.
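As a sketch, the agent function can be represented literally as a lookup table from percept sequences to actions. The example below assumes a hypothetical two-cell vacuum world (locations "A" and "B"); the table entries are illustrative and necessarily incomplete.

```python
# A sketch of the agent function as a literal lookup table, for a
# hypothetical two-cell vacuum world; the table is illustrative and
# necessarily incomplete.
def table_driven_agent():
    percepts = []  # the percept sequence seen so far
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
        # ... a complete table needs an entry for every possible sequence
    }

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return agent
```

The table grows with the number of possible percept sequences, which is why the agent programs described later replace the table with a compact program.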

Good Behavior: The Concept of Rationality

A rational agent is one that does the right thing. Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?
- Autonomy: to the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts and learning processes, we say that the agent lacks autonomy.

Specifying the task environment

- PEAS (Performance measure, Environment, Actuators, Sensors).

P: A performance measure is a criterion used to evaluate how well an intelligent system or agent is performing in a specific task or environment.

E: The environment is the external context in which an intelligent system or agent operates.

A: Actuators are the components or mechanisms that allow an intelligent system or agent to take actions or manipulate its environment.

S: Sensors are the devices or components that enable an intelligent system or agent to perceive and collect information from its environment.
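A PEAS description is just a four-part specification, so it can be sketched as a small data structure. The field values below follow the taxi-driver example; the class itself is an illustrative assumption, not part of the book.

```python
# A sketch of a PEAS specification as a data structure; the values follow
# the automated-taxi example, and the class itself is illustrative.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi_driver = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip",
                         "maximize profits"],
    environment=["roads", "other traffic", "police", "pedestrians",
                 "customers", "weather"],
    actuators=["steering wheel", "accelerator", "brake", "signal",
               "horn", "display", "speech"],
    sensors=["cameras", "LIDAR", "speedometer", "GPS",
             "accelerometer", "microphone", "touchscreen"],
)
```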

PEAS Examples

Taxi driver / Autonomous taxi
- Performance measure: safe, fast, legal, comfortable trip; maximize profits; minimize impact on other road users.
- Environment: roads, other traffic, police, pedestrians, customers, weather.
- Actuators: steering wheel, accelerator, brake, signal, horn, display, speech.
- Sensors: cameras, LIDAR, speedometer, GPS, accelerometer, microphone, touchscreen.

Medical diagnosis system
- Performance measure: healthy patient, reduced cost.
- Environment: patient, hospital, staff.
- Actuators: display of questions, tests, diagnoses, treatments.
- Sensors: touchscreen/voice entry of symptoms and findings.

Satellite image analysis system
- Performance measure: correct categorization of objects and terrain.
- Environment: orbiting satellite, downlink, weather.
- Actuators: display of scene categorization.
- Sensors: high-resolution digital camera.

Part-picking robot
- Performance measure: percentage of parts in correct bins.
- Environment: conveyor belt with parts; bins.
- Actuators: jointed arm and hand.
- Sensors: camera, tactile and joint angle sensors.

Refinery controller
- Performance measure: purity, yield, safety.
- Environment: refinery, raw materials, operators.
- Actuators: valves, pumps, heaters, stirrers, displays.
- Sensors: temperature, pressure, flow, chemical sensors.

Interactive English tutor
- Performance measure: student's score on test.
- Environment: students, testing agency.
- Actuators: display of exercises, feedback, speech.
- Sensors: keyboard entry, voice.

Spam filter
- Performance measure: minimizing false positives and false negatives.
- Environment: a user's email account, email server.
- Actuators: mark as spam, delete, etc.
- Sensors: incoming messages, other information about the user's account.

Properties of task environments / Environment Types

Task Environment                  | Observable | Agents | Deterministic | Episodic   | Static  | Discrete
Crossword puzzle                  | Fully      | Single | Deterministic | Sequential | Static  | Discrete
Chess with a clock                | Fully      | Multi  | Deterministic | Sequential | Semi    | Discrete
Poker                             | Partially  | Multi  | Stochastic    | Sequential | Static  | Discrete
Backgammon                        | Fully      | Multi  | Stochastic    | Sequential | Static  | Discrete
Taxi driving / Autonomous driving | Partially  | Multi  | Stochastic    | Sequential | Dynamic | Continuous
Medical diagnosis                 | Partially  | Single | Stochastic    | Sequential | Dynamic | Continuous
Image analysis                    | Fully      | Single | Deterministic | Episodic   | Semi    | Continuous
Part-picking robot                | Partially  | Single | Stochastic    | Episodic   | Dynamic | Continuous
Refinery controller               | Partially  | Single | Stochastic    | Sequential | Dynamic | Continuous
English tutor                     | Partially  | Multi  | Stochastic    | Sequential | Dynamic | Discrete
Word Jumble Solver                | Fully      | Single | Deterministic | Episodic   | Static  | Discrete
Scrabble                          | Partially  | Multi  | Stochastic    | Sequential | Static  | Discrete

1. The Structure of Agents

The job of AI is to design an agent program that implements the agent function. We assume this program will run on some sort of computing device with physical sensors and actuators; we call this the agent architecture.

Points:
- agent = architecture + program.
- agent program: the program that implements the agent function.
- agent function: the mapping from percepts to actions.

- Agent programs
We describe the agent programs in a simple pseudocode language.

Can AI do for general intelligent behavior what Newton did for square roots (the huge tables of square roots have been replaced by a five-line program for Newton's method)? We believe the answer is yes.
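That five-line square-root program can be sketched as follows (the tolerance is an arbitrary illustrative choice):

```python
# Newton's method for square roots: the "five-line program" mentioned
# above. The tolerance is an arbitrary illustrative choice.
def newton_sqrt(x, tolerance=1e-10):
    guess = x / 2.0 if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0  # average the guess with x/guess
    return guess
```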

There are four basic kinds of agent programs that embody the principles underlying almost all intelligent systems (they can be reused as generic solutions):
1. Simple reflex agents.
2. Model-based reflex agents.
3. Goal-based agents.
4. Utility-based agents.

1. Simple reflex agents

1. Definition: these agents select actions on the basis of the current percept, ignoring the rest of the percept history (they do rule matching).

2. Case: suppose that a simple reflex vacuum agent is deprived of its location sensor and has only a dirt sensor. It can Suck in response to [Dirty]; what should it do in response to [Clean]? It can enter an infinite loop. Escape from infinite loops is possible if the agent can randomize its actions.

3. How it works: condition–action rules (also called situation–action rules, production rules, or if–then rules).

4. Example with pseudocode: the vacuum agent

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

5. Pseudocode: (it acts according to a rule whose condition matches the current state, as defined by the percept)

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition–action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

6. Figure: (agent diagram not reproduced)
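The REFLEX-VACUUM-AGENT pseudocode above translates directly into Python (a minimal sketch of the two-cell world):

```python
# Direct Python translation of REFLEX-VACUUM-AGENT: the action depends
# only on the current percept, a (location, status) pair.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
```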

2. Model-based reflex agents

1. Definition: the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state (it keeps track of the current state of the world using an internal model, then chooses an action in the same way as the reflex agent).

2. Case: if the agent reaches a state that is new to it, it has two options: a random choice, or a choice based on similar states it has seen. Uncertainty about the current state may be unavoidable, but the agent still has to make a decision.

3. How it works:
First, we need some information about how the world changes over time, i.e. a transition model of the world (how the world works: how different actions or events lead to changes in the state of the system or the environment). It can be divided roughly into two parts: the effects of the agent's actions (how the agent's actions influence the state of the world) and how the world evolves independently of the agent (the changes that occur in the world without any direct influence from the agent).

Second, we need some information about how the state of the world is reflected in the agent's percepts (a sensor model).
- Sensor model: how the state of the world is reflected in the agent's percepts (translating sensory data into a meaningful representation of the world).

Together, the transition model and sensor model allow an agent to keep track of the state of the world—to the extent possible given the limitations of the agent's sensors.

4. Pseudocode: (it keeps track of the current state of the world, using an internal model, then chooses an action in the same way as the reflex agent)

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              transition-model, a description of how the next state depends on the current state and action
              sensor-model, a description of how the current world state is reflected in the agent's percepts
              rules, a set of condition–action rules
              action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept, transition-model, sensor-model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

5. Figure: (agent diagram not reproduced)
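A minimal Python sketch of the same idea, assuming the hypothetical two-cell vacuum world; the internal model and the NoOp stopping rule are illustrative additions that a pure reflex agent cannot express:

```python
# A minimal sketch of a model-based reflex agent for a hypothetical
# two-cell vacuum world (locations "A" and "B"); the internal model and
# the NoOp stopping rule are illustrative additions.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": "Unknown", "B": "Unknown"}  # internal world state
        self.last_action = None
        self.last_location = None

    def __call__(self, percept):
        location, status = percept
        # Transition model: sucking cleans the cell we were in.
        if self.last_action == "Suck" and self.last_location is not None:
            self.model[self.last_location] = "Clean"
        # Sensor model: the percept reveals the status of the current cell.
        self.model[location] = status
        # Rule matching on the tracked state, as in the reflex agent.
        if status == "Dirty":
            action = "Suck"
        elif self.model["A"] == "Clean" and self.model["B"] == "Clean":
            action = "NoOp"  # internal state lets the agent stop
        elif location == "A":
            action = "Right"
        else:
            action = "Left"
        self.last_action, self.last_location = action, location
        return action
```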

3. Goal-based agents

1. Definition: as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable.

2. Case: sometimes goal-based action selection is straightforward; sometimes it is trickier (consider long action sequences).

3. How it works: it keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of its goals. Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified (meaning the goal can be changed as needed).

4. Pseudocode:

function GOAL-BASED-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              goals, a list of the agent's current goals
              rules, a set of condition–action rules
  state ← UPDATE-STATE(state, percept)
  for each goal in goals:
    if goal is achieved in state then remove goal from goals
  rule ← RULE-MATCH(state, goals, rules)
  action ← rule.ACTION
  return action

5. Figure: (agent diagram not reproduced)
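Goal-directed action selection can be sketched in a hypothetical grid world where the goal is a target cell and each action is chosen because it leads toward the goal; the world and all names are illustrative assumptions:

```python
# A minimal sketch of a goal-based agent in a hypothetical grid world:
# the goal is a target cell, and actions are chosen because they lead
# toward it. All names and the world itself are illustrative.
def goal_based_agent(state, goal):
    x, y = state
    gx, gy = goal
    if (x, y) == (gx, gy):
        return "NoOp"      # the goal is achieved
    if x < gx:
        return "Right"     # moves the agent closer to the goal
    if x > gx:
        return "Left"
    return "Up" if y < gy else "Down"
```

Changing the `goal` argument changes the agent's behavior without rewriting its rules, which is the flexibility the text describes.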

4. Utility-based agents

1. Definition: a more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent (because "happy" does not sound very scientific, computer scientists use the term utility instead).

2. Case: a utility-based agent uses a model of the world, along with a utility function that measures its preferences among states of the world. It then chooses the action that leads to the best expected utility (where expected utility is computed by averaging over all possible outcome states, weighted by the probability of each outcome).

3. How it works: an agent's utility function is essentially an internalization of the performance measure. Provided that the internal utility function and the external performance measure are in agreement, an agent that chooses actions to maximize its utility will be rational according to the external performance measure.

A utility-based agent has many advantages in terms of flexibility and learning:
1. First, when there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate trade-off.
2. Second, when there are several goals that the agent can aim for, none of which can be achieved with certainty, a rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes.

4. Pseudocode:

function UTILITY-BASED-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              utility-function, a function that measures the desirability of a state
  state ← UPDATE-STATE(state, percept)
  max-utility ← −∞
  best-action ← none
  for each action in GENERATE-POSSIBLE-ACTIONS(state):
    expected-utility ← CALCULATE-EXPECTED-UTILITY(action, state)
    if expected-utility > max-utility then
      max-utility ← expected-utility
      best-action ← action
  return best-action

5. Figure: (agent diagram not reproduced)
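Expected-utility action selection can be sketched directly; the actions, outcome probabilities, and utility values below are illustrative assumptions:

```python
# A minimal sketch of expected-utility action selection; the actions,
# outcome probabilities, and utility values are illustrative assumptions.
def expected_utility(outcomes):
    # Average utility over possible outcome states, weighted by probability.
    return sum(prob * utility for prob, utility in outcomes)

def utility_based_agent(possible_actions):
    # possible_actions maps each action to a list of (probability, utility)
    # pairs describing its possible outcome states.
    return max(possible_actions,
               key=lambda a: expected_utility(possible_actions[a]))

# Example: a fast route is usually great but occasionally disastrous,
# while a safe route is reliably decent.
actions = {
    "fast_route": [(0.9, 10.0), (0.1, -100.0)],  # expected utility: -1.0
    "safe_route": [(1.0, 5.0)],                  # expected utility: 5.0
}
```

Here the conflicting goals of speed and safety are traded off through the utility numbers: the rare disaster makes the fast route's expected utility negative, so the agent prefers the safe route.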

Summary from the book

This chapter has been something of a whirlwind tour of AI, which we have
conceived of as the science of agent design. The major points to recall are as
follows:
- An agent is something that perceives and acts in an environment. The
agent function for an agent specifies the action taken by the agent in
response to any percept sequence.
- The performance measure evaluates the behavior of the agent in an
environment. A rational agent acts so as to maximize the expected
value of the performance measure, given the percept sequence it has
seen so far.
- A task environment specification includes the performance measure,
the external environment, the actuators, and the sensors. In designing
an agent, the first step must always be to specify the task environment
as fully as possible.
- Task environments vary along several significant dimensions. They can
be fully or partially observable, single-agent or multiagent, deterministic
or nondeterministic, episodic or sequential, static or dynamic, discrete
or continuous, and known or unknown.
- In cases where the performance measure is unknown or hard to specify
correctly, there is a significant risk of the agent optimizing the wrong
objective. In such cases the agent design should reflect uncertainty
about the true objective.
- The agent program implements the agent function. There exists a
variety of basic agent program designs reflecting the kind of information
made explicit and used in the decision process. The designs vary in
efficiency, compactness, and flexibility. The appropriate design of the
agent program depends on the nature of the environment.
- Simple reflex agents respond directly to percepts, whereas
model-based reflex agents maintain internal state to track aspects of the
world that are not evident in the current percept. Goal-based agents act
to achieve their goals, and utility-based agents try to maximize their own
expected “happiness.”
- All agents can improve their performance through learning.
