
UNIT 1

INTRODUCTION

1
What is AI

• Artificial Intelligence (AI) is a branch of science that deals with helping machines find solutions to complex problems in a more human-like fashion.
• It is the automation of activities that we associate with human thinking, activities such as decision making, problem solving, and learning.

2
What is AI
• This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way.
• AI is an umbrella term that encompasses a wide variety
of technologies, including machine learning, deep
learning, and natural language processing (NLP).
• AI allows machines to match, or even improve upon,
the capabilities of the human mind.

3
What is AI
• Artificial intelligence refers to computer systems that are
capable of performing tasks traditionally associated with
human intelligence.
• AI systems learn by processing massive amounts of data and
looking for patterns to model in their own decision-making.
• In many cases, humans will supervise an AI’s learning
process, reinforcing good decisions and discouraging bad
ones, but some AI systems are designed to learn without
supervision.

4
What is AI
• AI systems improve their performance on specific tasks over time, allowing them to adapt to new inputs and make decisions without being explicitly programmed to do so.

5
What is AI
• AI requires specialized hardware and software for
writing and training machine learning algorithms.
• No single programming language is used exclusively in
AI, but Python, R, Java, C++ and Julia are all popular
languages among AI developers.
• In general, AI systems work by ingesting large amounts
of labeled training data, analyzing that data for
correlations and patterns, and using these patterns to
make predictions about future states.
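The ingest-analyze-predict loop described above can be sketched with a toy supervised learner. The following is a minimal, hypothetical illustration in Python (a 1-nearest-neighbor classifier over invented data, not any specific production system):

```python
import math

# Labeled training data: (feature vector, label) pairs.
# The features and labels here are invented for illustration.
training_data = [
    ((1.0, 1.0), "spam"),
    ((0.9, 1.2), "spam"),
    ((5.0, 5.5), "ham"),
    ((5.2, 4.8), "ham"),
]

def predict(features):
    """Classify a new input by the nearest labeled example (1-NN)."""
    # Find the training example with the smallest Euclidean distance.
    _, label = min(training_data,
                   key=lambda ex: math.dist(ex[0], features))
    return label

print(predict((1.1, 0.9)))  # near the "spam" cluster -> spam
print(predict((5.1, 5.0)))  # near the "ham" cluster -> ham
```

The "learning" here is simply storing labeled examples; the prediction step finds the pattern (proximity) that generalizes to unseen inputs.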

6
What is AI
• Views of AI fall into four categories along two dimensions:
– The definitions on top are concerned with thought processes/reasoning, while the ones on the bottom address behavior/action.
– The definitions on the left measure success according to human standards, while the ones on the right measure success against an ideal performance measure.
• According to an ideal concept of intelligence, a system is rational if it does the "right thing," given what it knows.

Thinking Humanly | Thinking Rationally
Acting Humanly | Acting Rationally

7
How to Achieve AI?
[Diagram: the four approaches surrounding AI: Thinking humanly, Thinking rationally, Acting humanly, Acting rationally]

8
Acting humanly: The Turing Test approach

• The Turing Test, proposed by Alan Turing in 1950, was designed to provide a satisfactory operational definition of intelligence.
• A computer passes the test if, after posing some written questions, a human interrogator cannot tell whether the written responses come from a person or from a computer.
• If the interrogator cannot reliably distinguish the human from the computer, the computer is deemed intelligent.
• For example: ELIZA, MGONZ, and ALICE are a few such programs.

9
Acting humanly: The Turing Test approach

[Diagram: Turing Test setup: an experimenter/interrogator exchanges written messages with both a computer and a human control]

10
Acting humanly: The Turing Test approach
• Turing suggested that the computer would need to possess the following capabilities:
1. Natural Language Processing to enable it to communicate
2. Knowledge representation to store what it knows or learns
3. Automated reasoning to use the stored information to answer
questions and to draw new conclusions
4. Machine learning to adapt to new circumstances and to detect
and extrapolate patterns.
• Turing’s test deliberately avoided direct physical interaction
between the interrogator and the computer.

11
Acting humanly: The Turing Test approach

• The total Turing Test includes a video signal so that the interrogator can test the subject’s perceptual abilities.
• To pass the total Turing Test, the computer will need
 Computer vision to perceive objects, and
 Robotics to manipulate objects and move about.

12
Thinking humanly: The cognitive modeling
approach
• Real intelligence requires thinking → think like a human
• First, know how a human thinks:
– Introspect one’s own thoughts
– Psychological experiments to understand how someone thinks
– Brain imaging (e.g., MRI)
• Then, build programs and models that think like humans
– This resulted in the field of cognitive science: a merger between AI and psychology.

13
Thinking humanly: The cognitive modeling
approach
• The human thinking process is difficult to understand: how does the mind arise from the brain? Think also about unconscious tasks such as vision and speech understanding.
• Humans are not perfect! We make a lot of systematic mistakes:

14
Thinking rationally: The “laws of thought”
approach
• The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking”: irrefutable (undisputable) reasoning processes.
• Instead of thinking like a human: think rationally.
• Find out how correct thinking must proceed: the laws of thought.
• Aristotle’s syllogism: “Socrates is a man; all men are mortal; therefore Socrates is mortal.”
• This initiated logic: a traditional and important branch of
mathematics and computer science.

15
Thinking rationally: The “laws of
thought” approach
• The logicist tradition within artificial intelligence hopes to
build on such programs to create intelligent systems.
• There are two main obstacles to this approach.
1. It is not easy to take informal knowledge and state it in the
formal terms required by logical notation, particularly
when the knowledge is less than 100% certain.
2. There is a big difference between solving a problem “in
principle” and solving it in practice

16
Acting rationally: The rational agent approach

• Rational agent: one that acts so as to achieve the best outcome.
• Logical thinking is only one aspect of appropriate behavior: reflexes such as pulling your hand away from a hot surface are not the result of careful deliberation, yet they are clearly rational.
• Sometimes there is no provably correct thing to do, yet something must be done.
• Instead of insisting on how the program should think, we insist on how the program should act: we care only about the final result.
• Advantages:
– More general than the “thinking rationally” approach.
– Mathematically principled: proven to achieve rationality, unlike human behavior or thought.

17
Acting rationally: The rational agent approach

This is how birds fly. | Humans tried to mimic birds for centuries. | This is how we finally achieved “artificial flight.”

18
The foundations of AI
1. Philosophy: knowledge representation, logic, foundations of AI (is AI possible?)
2. Mathematics: search, analysis of search algorithms, logic
3. Economics: expert systems, decision theory, principles of rational behavior
4. Psychology: behavioristic insights into AI programs
5. Neuroscience (brain science): learning, neural nets
6. Computer science & engineering: systems for AI
7. Control theory and cybernetics: information theory and AI, entropy, robotics
8. Linguistics
19
History of AI
• 1. The gestation of artificial intelligence (1943–1955)
• Between 1943 and 1955, there was notable progress in the gestation of artificial intelligence (AI).
 Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
 Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
 Year 1950: Alan Turing, an English mathematician and pioneer of machine learning, published "Computing Machinery and Intelligence" in 1950, in which he proposed a test. The test, now called the Turing test, checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence.
 Year 1951: Marvin Minsky and Dean Edmonds created the first artificial neural network (ANN), named SNARC. They utilized 3,000 vacuum tubes to mimic a network of 40 neurons.

20
History of AI
• 2. The birth of Artificial Intelligence (1952-1956)
• From 1952 to 1956, AI surfaced as a unique domain of investigation.
 Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-Playing Program, the world's first self-learning game-playing program.
 Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program," which was named the "Logic Theorist." This program proved 38 of 52 mathematical theorems and found new and more elegant proofs for some theorems.
 Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was launched as an academic field.

21
History of AI
• 3. Early enthusiasm, great expectations (1952–1969).
• The period from 1956 to 1974 is commonly known as the "Golden Age" of
artificial intelligence (AI). In this timeframe, AI researchers and innovators
were filled with enthusiasm and achieved remarkable advancements in
the field.
• Year 1958: During this period, Frank Rosenblatt introduced the
perceptron, one of the early artificial neural networks with the ability to
learn from data. This invention laid the foundation for modern neural
networks. Simultaneously, John McCarthy developed the Lisp
programming language, which swiftly found favor within the AI
community, becoming highly popular among developers.
• Year 1959: Arthur Samuel is credited with introducing the phrase
"machine learning" in a pivotal paper in which he proposed that
computers could be programmed to surpass their creators in
performance. Additionally, Oliver Selfridge made a notable contribution to
machine learning with his publication "Pandemonium: A Paradigm for
Learning." This work outlined a model capable of self-improvement,
enabling it to discover patterns in events more effectively.

22
History of AI
• 3. Early enthusiasm, great expectations (1952–1969).
• Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow
created STUDENT, one of the early programs for natural language
processing (NLP), with the specific purpose of solving algebra word
problems.
• Year 1965: The initial expert system, Dendral, was devised by Edward
Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi. It
aided organic chemists in identifying unfamiliar organic compounds.
• Year 1966: The researchers emphasized developing algorithms that can
solve mathematical problems. Joseph Weizenbaum created the first
chatbot in 1966, which was named ELIZA. Furthermore, Stanford Research
Institute created Shakey, the earliest mobile intelligent robot incorporating
AI, computer vision, navigation, and NLP. It can be considered a precursor
to today's self-driving cars and drones.

23
History of AI
• 3. Early enthusiasm, great expectations (1952–1969).
• Year 1968: Terry Winograd developed SHRDLU, which was the
pioneering multimodal AI capable of following user instructions to
manipulate and reason within a world of blocks.
• Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning
algorithm known as backpropagation, which enabled the
development of multilayer artificial neural networks.
• This represented a significant advancement beyond the perceptron
and laid the groundwork for deep learning.
• Additionally, Marvin Minsky and Seymour Papert authored the
book "Perceptrons," which elucidated the constraints of basic
neural networks.

24
History of AI
• 4. Knowledge-based systems: (1969–1979)
• Year 1972: The first intelligent humanoid robot was built in Japan,
which was named WABOT-1.
• Year 1973: James Lighthill published the report titled "Artificial
Intelligence: A General Survey," resulting in a substantial reduction
in the British government's backing for AI research.
• The period from 1974 to 1980 was the first AI winter. An AI winter refers to a time period in which computer scientists dealt with a severe shortage of government funding for AI research.

25
History of AI
• 5. AI becomes an industry (1980–present)
• In 1980, the first national conference of the American Association
of Artificial Intelligence was held at Stanford University.
• Year 1980: After the AI winter, AI came back with "expert systems".
• Expert systems were programmed to emulate the decision-making
ability of a human expert. Additionally, Symbolics Lisp machines
were brought into commercial use, marking the onset of an AI
recovery.
• Year 1981: Danny Hillis created parallel computers tailored for AI
and various computational functions, featuring an architecture akin
to contemporary GPUs.

26
History of AI
• 5. AI becomes an industry (1980–present)
• Year 1984: Marvin Minsky and Roger Schank introduced the phrase
"AI winter" during a gathering of the Association for the
Advancement of Artificial Intelligence.
• Year 1985: Judea Pearl introduced Bayesian network causal
analysis, presenting statistical methods for encoding uncertainty in
computer systems.

27
History of AI
• 6. The return of neural networks (1986–1993)
• In the mid-1980s, at least four different groups reinvented the back-propagation learning algorithm first found in 1969 by Bryson and Ho.
• The algorithm was applied to many learning problems in computer science and psychology, and the results were widely disseminated in the collection Parallel Distributed Processing.
• With the separation of AI and cognitive science, modern neural network research bifurcated into two fields: one concerned with creating effective network architectures and algorithms and understanding their mathematical properties, the other concerned with careful modeling of the properties of actual neurons and ensembles of neurons.

28
History of AI
• 7. The emergence of intelligent agents (1993-2011)
• Between 1993 and 2011, there were significant leaps forward in artificial
intelligence (AI), particularly in the development of intelligent computer
programs.
• Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by defeating world chess champion Garry Kasparov, marking the first time a computer triumphed over a reigning world chess champion.
• Sepp Hochreiter and Jürgen Schmidhuber introduced the Long Short-Term
Memory recurrent neural network, revolutionizing the capability to
process entire sequences of data such as speech or video.
• Year 2002: for the first time, AI entered the home in the form of Roomba,
a vacuum cleaner.

29
History of AI
• 7. The emergence of intelligent agents (1993-2011)
• Year 2006: By 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.
• Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released
the paper titled "Utilizing Graphics Processors for Extensive Deep
Unsupervised Learning," introducing the concept of employing
GPUs for the training of expansive neural networks.
• Year 2011: Apple launched Siri, a voice-activated personal assistant
capable of generating responses and executing actions in response
to voice commands.

30
History of AI
• 8. Deep learning, big data and artificial general intelligence
(2011-present)
• Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
• Year 2012: Google launched an Android app feature, "Google Now",
which was able to provide information to the user as a prediction.
• Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky presented a
deep CNN structure that emerged victorious in the ImageNet
challenge, sparking the proliferation of research and application in
the field of deep learning.

31
History of AI
• 8. Deep learning, big data and artificial general intelligence
(2011-present)
• Year 2013: DeepMind unveiled deep reinforcement learning, a CNN
that acquired skills through repetitive learning and rewards,
ultimately surpassing human experts in playing games.
• Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the famous Turing test. In the same year, Ian Goodfellow and his team pioneered generative adversarial networks (GANs), a type of machine learning framework employed for producing images, altering pictures, and crafting deepfakes.

32
History of AI
• 8. Deep learning, big data and artificial general intelligence
(2011-present)
• Year 2018: The "Project Debater" from IBM debated on complex
topics with two master debaters and also performed extremely
well.
• Google demonstrated an AI program, "Duplex," a virtual assistant that booked a hairdresser appointment over the phone, and the person on the other end did not notice that she was talking to a machine.
• Year 2021: OpenAI unveiled the Dall-E multimodal AI system,
capable of producing images based on textual prompts.
• Year 2022: In November, OpenAI launched ChatGPT, offering a chat-
oriented interface to its GPT-3.5 LLM.

33
The State of the Art
• 1. Robotic vehicles: A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.
• STANLEY is a Volkswagen Touareg outfitted with cameras, radar,
and laser rangefinders to sense the environment and onboard
software to command the steering, braking, and acceleration.
• 2. Speech recognition: A traveler calling United Airlines to book a
flight can have the entire conversation guided by an automated
speech recognition and dialog management system.

34
The State of the Art
• 3. Autonomous planning and scheduling: A hundred million miles
from Earth, NASA’s Remote Agent program became the first on-
board autonomous planning program to control the scheduling of
operations for a spacecraft.
• REMOTE AGENT generated plans from high-level goals specified from the ground and monitored the execution of those plans.
• It also helped in detecting, diagnosing, and recovering from problems as they occurred.
• Another program MAPGEN plans the daily operations for NASA’s
Mars Exploration Rovers, and MEXAR2 did mission planning

35
The State of the Art
• 4. Game playing: IBM’s DEEP BLUE became the first computer
program to defeat the world champion in a chess match when it
bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition
match.
• Kasparov said that he felt a “new kind of intelligence” across the
board from him.
• Newsweek magazine described the match as “The brain’s last
stand.”
• The value of IBM’s stock increased by $18 billion.
• Human champions studied Kasparov’s loss and were able to draw
a few matches in subsequent years, but the most recent human-
computer matches have been won convincingly by the computer

36
The State of the Art
• 5.Spam fighting: Each day, learning algorithms classify over a
billion messages as spam, saving the recipient from having to
waste time deleting what, for many users, could comprise 80% or
90% of all messages, if not classified away by algorithms.
• Because the spammers are continually updating their tactics, it is
difficult for a static programmed approach to keep up, and
learning algorithms work best

37
The State of the Art
• 6. Logistics planning: During the Persian Gulf crisis of 1991, U.S.
forces deployed a Dynamic Analysis and Replanning Tool, DART
(Cross and Walker, 1994), to do automated logistics planning and
scheduling for transportation.
• This involved up to 50,000 vehicles, cargo, and people at a time,
and had to account for starting points, destinations, routes, and
conflict resolution among all parameters.
• The AI planning techniques generated in hours a plan that would
have taken weeks with older methods.
• The Defense Advanced Research Project Agency (DARPA) stated
that this single application more than paid back DARPA’s 30-year
investment in AI

38
The State of the Art
• 7. Robotics: The iRobot Corporation has sold over two million
Roomba robotic vacuum cleaners for home use.
• The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear
explosives, and identify the location of snipers.
• 8. Machine Translation: A computer program automatically
translates from Arabic to English, allowing an English speaker to
see the headlines.
• The program uses a statistical model built from examples of
Arabic-to-English translations and from examples of English text
totaling two trillion words.
• None of the computer scientists on the team speak Arabic, but
they do understand statistics and machine learning algorithms
39
INTELLIGENT AGENTS

40
Agents and Environments
• An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through actuators.
• A human agent has eyes, ears, and other organs for sensors and
hands, legs, vocal tract for actuators.
• A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.
• A software agent receives keystrokes, file contents, and network
packets as sensory inputs and acts on the environment by
displaying on the screen, writing files, and sending network
packets.

41
Agents and Environments

42
Agents and Environments
• Percept is the agent’s perceptual inputs at any given instant.
• An agent’s percept sequence is the complete history of
everything the agent has ever perceived.
• An agent’s choice of action at any given instant can depend
on the entire percept sequence observed but not on
anything it hasn’t perceived.
• An agent’s behavior is described by the agent function that
maps any given percept sequence to an action.
• A table can be constructed for the agent function that describes any given agent.
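The agent-function table described above can be written directly in code. Below is a minimal, hypothetical table-driven agent in Python; the percepts and actions are invented for illustration (a two-square vacuum world with squares A and B):

```python
# Agent function as an explicit lookup table: percept sequence -> action.
# Keys are tuples of all percepts seen so far (hypothetical vacuum world).
TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    (("A", "Clean"), ("B", "Clean")): "Left",
}

def make_table_driven_agent(table):
    percepts = []  # the complete percept history
    def agent(percept):
        percepts.append(percept)
        # Look up the action for the entire percept sequence observed.
        return table.get(tuple(percepts), "NoOp")
    return agent

agent = make_table_driven_agent(TABLE)
print(agent(("A", "Clean")))  # -> Right
print(agent(("B", "Dirty")))  # -> Suck
```

Note how the table grows exponentially with the length of the percept sequence, which is exactly why table-driven agents are impractical except as a conceptual device.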

43
Agents and Environments
• The table is, an external characterization of the agent.
• Internally, the agent function for an artificial agent will be
implemented by an agent program.
• The agent function is an abstract mathematical description.
• The agent program is a concrete implementation, running
within some physical system.

44
45
Good Behavior: The concept of Rationality
• A rational agent is one that does the right thing; conceptually, every entry in the table for the agent function is filled out correctly.
• When an agent is placed in an environment, it generates a
sequence of actions according to the percepts it receives.
• This sequence of actions causes the environment to go through a
sequence of states.
• If the sequence is desirable, then the agent has performed well.
• The desirability feature of an agent is captured by a performance
measure that evaluates any given sequence of environment
states.
• There is not one fixed performance measure for all tasks and
agents
46
Good Behavior: The concept of Rationality
• 1. Rationality
• What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
• This leads to a definition of a rational agent:
• RATIONAL AGENT
• For each possible percept sequence, a rational agent should select
an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
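The definition above says a rational agent selects the action expected to maximize its performance measure. A minimal sketch of that selection rule, with invented actions, outcome probabilities, and scores (all hypothetical, not from the text):

```python
# Hypothetical outcome model: action -> list of (probability, score).
# The numbers are invented for illustration only.
OUTCOMES = {
    "Suck":  [(0.9, 10), (0.1, 0)],  # usually cleans (score 10)
    "Right": [(1.0, 1)],
    "Left":  [(1.0, 1)],
}

def expected_performance(action):
    """Expected value of the performance measure for an action."""
    return sum(p * score for p, score in OUTCOMES[action])

def rational_choice(actions):
    """Select the action that maximizes expected performance."""
    return max(actions, key=expected_performance)

print(rational_choice(["Suck", "Right", "Left"]))  # -> Suck
```

The point is the decision rule, not the numbers: rationality maximizes *expected* performance given what the agent knows, not the actual outcome after the fact.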

47
Good Behavior: The concept of Rationality
• 2. Omniscience, learning, and autonomy
• An omniscient agent knows the actual outcome of its actions and
can act accordingly.
• Omniscience is impossible in reality.
• Rationality maximizes expected performance, while perfection
maximizes actual performance.
• As the rational choice depends only on the percept sequence,
rationality does not require omniscience.
• Doing actions in order to modify future percepts is called information gathering.
• It is an important part of rationality.

48
Good Behavior: The concept of Rationality

• A rational agent should not only gather information but also learn as much as possible from what it perceives.
• The agent’s initial configuration could reflect some prior
knowledge of the environment.
• As the agent gains experience this may be modified and
augmented.
• There are extreme cases in which the environment is
completely known a priori.
• In such cases, the agent need not perceive or learn; it simply
acts correctly.

49
Good Behavior: The concept of Rationality
• When an agent relies on the prior knowledge of its designer rather than on its own percepts, the agent is said to lack autonomy.
• A rational agent should be autonomous—it should learn what it
can to compensate for partial or incorrect prior knowledge.
• When the agent has little or no experience, it would have to act
randomly unless the designer gave some assistance.
• It would be reasonable to provide an artificial intelligent agent
with some initial knowledge as well as an ability to learn.
• After sufficient experience of its environment, the behavior of a
rational agent can become effectively independent of its prior
knowledge

50
The Nature of Environments
• Task environments are essentially the “problems” to which rational agents are the “solutions.”
• Specifying the Task Environment:-
• The Performance measure, the Environment, and the agent’s
Actuators and Sensors are grouped as task environment
called as PEAS (Performance, Environment, Actuators,
Sensors)
• In designing an agent, the first step must always be to
specify the task environment as fully as possible.
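A PEAS specification can be written down as simple structured data. The entries below are a hypothetical example for a robotic vacuum-cleaner agent; the specific items are illustrative assumptions, not taken from the text:

```python
# Hypothetical PEAS task-environment specification for a vacuum agent.
peas = {
    "Performance": ["cleanliness", "battery efficiency", "time taken"],
    "Environment": ["rooms", "carpeted floors", "furniture", "dirt"],
    "Actuators":   ["wheels", "brushes", "vacuum motor"],
    "Sensors":     ["camera", "dirt sensor", "bump sensor", "cliff sensor"],
}

# Print the specification one component per line.
for component, items in peas.items():
    print(f"{component}: {', '.join(items)}")
```

Writing the four components out explicitly like this is the "first step" the slide describes: once they are fixed, the appropriate agent design follows from them.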

51
The Nature of Environments

52
The Nature of Environments
• Properties of task environments:-
• The range of task environments in AI is obviously vast.
• Generally, a small number of dimensions is used to categorize task environments.
• These dimensions determine the appropriate agent design and the applicability of each of the principal families of techniques for agent implementation.

53
The Nature of Environments
• 1. Fully Observable vs Partially Observable
• When an agent’s sensors are capable of sensing or accessing the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• A fully observable environment is easy to deal with, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Examples:
– Chess – the board is fully observable, and so are the opponent’s
moves.
– Driving – the environment is partially observable because
what’s around the corner is not known.

54
The Nature of Environments
• 2. Deterministic vs Stochastic
• When the next state of the environment is completely determined by the current state and the agent’s action, the environment is said to be deterministic.
• A stochastic environment is random in nature: the outcome is not unique and cannot be completely determined by the agent.
• Examples:
– Chess – there would be only a few possible moves for a chess
piece at the current state and these moves can be determined.
– Self-Driving Cars- the actions of a self-driving car are not
unique, it varies time to time.

55
The Nature of Environments
• 3. Competitive vs Collaborative
• An agent is said to be in a competitive environment when it
competes against another agent to optimize the output.
• The game of chess is competitive as the agents compete with each
other to win the game which is the output.
• An agent is said to be in a collaborative environment when multiple
agents cooperate to produce the desired output.
• When multiple self-driving cars are found on the roads, they
cooperate with each other to avoid collisions and reach their
destination which is the output desired.

56
The Nature of Environments
• 4. Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a single-
agent environment.
• A person left alone in a maze is an example of the single-agent
system.
• An environment involving more than one agent is a multi-agent
environment.
• The game of football is multi-agent as it involves 11 players in each
team.

57
The Nature of Environments
• 5. Dynamic vs Static
• An environment that keeps constantly changing itself when
the agent is up with some action is said to be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the
environment keeps changing every instant.
• An idle environment with no change in its state is called a
static environment.
• An empty house is static as there’s no change in the
surroundings when an agent enters.

58
The Nature of Environments
• 6. Discrete vs Continuous
• If an environment consists of a finite number of actions that can be
calculated in the environment to obtain the output, it is said to be a
discrete environment.
• The game of chess is discrete as it has only a finite number of
moves. The number of moves might vary with every game, but still,
it’s finite.
• The environment in which the actions are performed cannot be
numbered i.e. is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments as
their actions are driving, parking, etc. which cannot be numbered.

59
The Nature of Environments
• 7.Episodic vs Sequential
• In an Episodic task environment, each of the agent’s actions is
divided into atomic incidents or episodes.
• There is no dependency between current and previous incidents.
• In each incident, an agent receives input from the environment and
then performs the corresponding action.
• Example: Consider a pick-and-place robot used to detect defective parts on a conveyor belt. Each time, the robot (agent) makes a decision about the current part only, i.e., there is no dependency between current and previous decisions.

60
The Nature of Environments
• 7.Episodic vs Sequential
• In a Sequential environment, the previous decisions can affect all
future decisions.
• The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future.
• Example:
– Checkers- Where the previous move can affect all the following
moves.

61
The Nature of Environments
• 8. Known vs Unknown
• In a known environment, the output for all probable actions is
given.
• In case of unknown environment, for an agent to make a
decision, it has to gain knowledge about how the
environment works.

62
The Nature of Environments

63
The Structure of Agents
• Behavior of an agent is the action that is performed after
any given sequence of percepts.
• The job of AI is to design an agent program that implements
the agent function that maps from percepts to actions.
• This program will run on some sort of computing device with
physical sensors and actuators, this is called as the
Architecture.
Agent = Architecture + Program .
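The Agent = Architecture + Program separation can be sketched as a minimal loop, in which the architecture shuttles percepts and actions between the environment and the agent program (all function names here are illustrative, not from the slides):

```python
def run_agent(program, sensors, actuators, steps):
    """Architecture: makes percepts from the sensors available to the
    program and feeds the program's action choices to the actuators."""
    for _ in range(steps):
        percept = sensors()        # read the environment
        action = program(percept)  # agent program: percept -> action
        actuators(action)          # act on the environment

# A trivial agent program and a toy environment to drive it.
log = []
percepts = iter(["Dirty", "Clean", "Dirty"])
run_agent(program=lambda p: "Suck" if p == "Dirty" else "NoOp",
          sensors=lambda: next(percepts),
          actuators=log.append,
          steps=3)
print(log)  # -> ['Suck', 'NoOp', 'Suck']
```

Note that the same architecture (`run_agent`) can run any agent program; only the `program` argument changes as the agent designs below become more sophisticated.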
64
The Structure of Agents
• The program should be appropriate for the architecture.
• The architecture might be an ordinary PC, or it might be a
robotic car with several onboard computers, cameras, and
other sensors.
• Generally, the architecture makes the percepts from the
sensors available to the program, runs the program, and
feeds the program’s action choices to the actuators as they
are generated.
65
The Structure of Agents
• Agent programs:-
• The agent programs take the current percept as input from the
sensors and return an action to the actuators.
• The agent program takes the current percept as input because
nothing more is available from the environment.
• If the agent’s actions need to depend on the entire percept
sequence, the agent will have to remember the percepts.
• The table-driven approach to agent construction keeps the complete
percept sequence in memory and looks it up in a table that lists an
action for every possible sequence.
• A TABLE-DRIVEN-AGENT built this way implements the desired agent
function, but the table quickly becomes vast.
• The key challenge for AI is to find out how to write programs that,
to the extent possible, produce rational behavior from a smallish
program rather than from a vast table
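A minimal sketch of the table-driven idea (the vacuum-world table entries below are illustrative):

```python
def make_table_driven_agent(table):
    """TABLE-DRIVEN-AGENT: remembers the entire percept sequence and
    looks it up in a table mapping sequences to actions."""
    percepts = []  # remembered percept history
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))
    return program

# Table entries for the first steps of a two-location vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
assert agent(("A", "Clean")) == "Right"
assert agent(("B", "Dirty")) == "Suck"
```

Even this toy table needs one entry per percept *sequence*, which is why the table explodes in size and a compact program is preferred.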
66
The Structure of Agents
• The agent program for a simple reflex agent in the two-state
vacuum environment
function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
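The same pseudocode as a runnable Python sketch:

```python
def reflex_vacuum_agent(percept):
    """REFLEX-VACUUM-AGENT: acts only on the current percept
    [location, status], ignoring all percept history."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

assert reflex_vacuum_agent(("A", "Dirty")) == "Suck"
assert reflex_vacuum_agent(("A", "Clean")) == "Right"
assert reflex_vacuum_agent(("B", "Clean")) == "Left"
```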
67
The Structure of Agents
• 1. Simple Reflex Agents
• Percept history is the history of all that an agent has perceived to
date.
• Simple reflex agents ignore the rest of the percept history and act
only on the basis of the current percept.
• The agent function is based on the condition-action rule.
• A condition-action rule is a rule that maps a state i.e., a condition
to an action.
• If the condition is true, then the action is taken, else not.
• This agent function only succeeds when the environment is fully
observable.
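The condition-action rule idea can be sketched as a list of (condition, action) pairs tried in order; the rule contents below are made up for illustration:

```python
def simple_reflex_agent(rules):
    """Acts on the current percept only: fires the first
    condition-action rule whose condition matches."""
    def program(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "NoOp"
    return program

# Illustrative rules: brake if the car in front is braking.
rules = [
    (lambda p: p.get("car_in_front_is_braking"), "initiate-braking"),
    (lambda p: True, "keep-driving"),  # default rule
]
agent = simple_reflex_agent(rules)
assert agent({"car_in_front_is_braking": True}) == "initiate-braking"
assert agent({"car_in_front_is_braking": False}) == "keep-driving"
```

Because the conditions test only the current percept, the agent fails whenever the right action depends on something it cannot currently perceive.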
68
1. Simple Reflex Agents
• Problems with Simple reflex agents are :
1. Very limited intelligence.
2. No knowledge of non-perceptual parts of the state.
3. Usually too big to generate and store.
4. If there occurs any change in the environment, then the
collection of rules needs to be updated.
69
1. Simple Reflex Agents
70
1. Simple Reflex Agents
71
2. Model-Based Reflex Agents
• It works by finding a rule whose condition matches the current
situation.
• A model-based agent can handle partially observable
environments by the use of a model about the world.
• The agent has to keep track of the internal state which is adjusted
by each percept and that depends on the percept history.
• The current state is stored inside the agent which maintains some
kind of structure describing the part of the world which cannot be
seen.
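A minimal sketch of this internal-state bookkeeping (the update function and rules below are illustrative):

```python
def model_based_reflex_agent(update_state, rules):
    """Keeps an internal state (a model of the world), adjusted from
    the last action and each new percept, so the agent can track the
    part of the environment it cannot currently see."""
    state = {}
    last_action = None
    def program(percept):
        nonlocal last_action
        update_state(state, last_action, percept)  # model update
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        return "NoOp"
    return program

# Toy model: remember the last observed status of each square.
def update(state, action, percept):
    location, status = percept
    state[location] = status

agent = model_based_reflex_agent(
    update,
    [(lambda s: "Dirty" in s.values(), "Suck"),
     (lambda s: True, "Move")],
)
assert agent(("A", "Dirty")) == "Suck"
```

The rules now test the internal state rather than the raw percept, which is what lets the agent cope with partial observability.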
72
2. Model-Based Reflex Agents
73
2. Model-Based Reflex Agents
74
3.Goal-Based Agents
• These kinds of agents take decisions based on how far they are
currently from their expected output.
• Their every action is intended to reduce their distance from the
expected output.
• This allows the agent to choose among multiple possibilities,
selecting the one which reaches a goal state.
• The knowledge that supports its decisions is represented explicitly
and can be modified, which makes these agents more flexible.
• They usually require search and planning.
• The goal-based agent’s behavior can easily be changed.
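The search-and-planning step can be illustrated with breadth-first search over a toy state space (the one-dimensional grid below is invented for illustration):

```python
from collections import deque

def goal_based_agent(goal, successors, start):
    """Plans with breadth-first search: returns the shortest action
    sequence leading from the start state to the goal state."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # goal unreachable

# Toy world: positions 0..4 on a line; move Left/Right to reach 3.
succ = lambda s: [("Right", min(s + 1, 4)), ("Left", max(s - 1, 0))]
assert goal_based_agent(3, succ, 1) == ["Right", "Right"]
```

Changing the behavior only requires changing the `goal` argument, which is exactly the flexibility the slide describes.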
75
3.Goal-Based Agents
76
4. Utility-Based Agents
• Agents that are built around a measure of the usefulness (utility) of
each state are called utility-based agents.
• They choose actions based on a preference (utility) for each state
rather than only achieving the desired goal.
• These agents look for a quicker, safer, cheaper way to reach a
destination.
• Because of the uncertainty in the world, a utility agent chooses
the action that maximizes the expected utility.
• A utility function maps a state onto a real number that describes the
associated degree of happiness (how happy the agent and the user
are).
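Expected-utility maximisation can be sketched as follows (the route-choice probabilities and travel times below are made up):

```python
def utility_based_agent(actions, outcomes, utility):
    """Chooses the action with the highest expected utility:
    sum over outcomes of probability * utility(outcome)."""
    def expected_utility(action):
        return sum(p * utility(o) for o, p in outcomes(action))
    return max(actions, key=expected_utility)

# Toy route choice: 'fast' risks a traffic jam, 'safe' is steady.
route_outcomes = {
    "fast": [("arrive_in_10", 0.6), ("arrive_in_40", 0.4)],
    "safe": [("arrive_in_20", 1.0)],
}
minutes = {"arrive_in_10": 10, "arrive_in_40": 40, "arrive_in_20": 20}
utility = lambda o: -minutes[o]  # shorter travel time = happier

best = utility_based_agent(["fast", "safe"],
                           lambda a: route_outcomes[a],
                           utility)
print(best)  # -> 'safe' (EU of fast = -22, EU of safe = -20)
```

Unlike a goal-based agent, which only distinguishes goal from non-goal states, the utility function here grades every outcome, so uncertainty can be weighed against payoff.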
77
4. Utility-Based Agents
78