www.acuityeducare.com
Acuity Educare: Artificial Intelligence
BSC IT : SEM – V, UNIT 1
Q. Define Artificial Intelligence? Explain any one in detail?

Answer:
1. For thousands of years, we have tried to understand how we think; that is, how a mere handful of matter can perceive, understand, predict, and manipulate a world far larger and more complicated than itself. The field of artificial intelligence, or AI, goes further still: it attempts not just to understand but also to build intelligent entities. AI is one of the newest fields in science and engineering. Work started in earnest soon after World War II, and the name itself was coined in 1956. Along with molecular biology, AI is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest. AI, on the other hand, still has openings for several full-time Einsteins and Edisons. AI currently encompasses a huge variety of subfields, ranging from the general (learning and perception) to the specific, such as playing chess, proving mathematical theorems, writing poetry, driving a car on a crowded street, and diagnosing diseases. AI is relevant to any intellectual task; it is truly a universal field.
2. In the following table we see eight definitions of AI, laid out along two dimensions. The definitions on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior. The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows. Historically, all four approaches to AI have been followed, each by different people with different methods. A human-centered approach must be in part an empirical science, involving observations and hypotheses about human behavior. A rationalist approach involves a combination of mathematics and engineering. The various groups have both disparaged and helped each other.
Thinking Humanly:
"The exciting new effort to make computers think . . . machines with minds, in the full and literal sense." (Haugeland, 1985)
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . ." (Bellman, 1978)

Thinking Rationally:
"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Acting Humanly:
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Acting Rationally:
"Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
"AI . . . is concerned with intelligent behavior in artifacts." (Nilsson, 1998)
3. Acting humanly: The Turing Test approach
a) The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. Chapter 26 discusses the details of the test and whether a computer would really be intelligent if it passed. For now, we note that programming a computer to pass a rigorously applied test provides plenty to work on. The computer would need to possess the following capabilities:
 natural language processing to enable it to communicate successfully in English;
 knowledge representation to store what it knows or hears;
 automated reasoning to use the stored information to answer questions and to draw new conclusions;
 machine learning to adapt to new circumstances and to detect and extrapolate patterns.
b) Turing's test deliberately avoided direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will additionally need:
 computer vision to perceive objects, and
 robotics to manipulate objects and move about.
c) These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 60 years later. Yet AI researchers have devoted little effort to passing the Turing Test, believing that it is more important to study the underlying principles of intelligence than to duplicate an exemplar. The quest for "artificial flight" succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making "machines that fly so exactly like pigeons that they can fool even other pigeons."
4. Thinking humanly: The cognitive modeling approach
a) If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds. There are three ways to do this: through introspection—trying to catch our own thoughts as they go by; through psychological experiments—observing a person in action; and through brain imaging—observing the brain in action. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input–output behavior matches corresponding human behavior, that is evidence that some of the program's mechanisms could also be operating in humans. For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver" (Newell and Simon, 1961), were not content merely to have their program solve problems correctly. They were more concerned with comparing the trace of its reasoning steps to traces of human subjects solving the same problems. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
b) Cognitive science is a fascinating field in itself, worthy of several textbooks and at least one encyclopedia (Wilson and Keil, 1999). We will occasionally comment on similarities or differences between AI techniques and human cognition. Real cognitive science, however, is necessarily based on experimental investigation of actual humans or animals. We will leave that for other books, as we assume the reader has only a computer for experimentation. In the early days of AI there was often confusion between the approaches: an author would argue that an algorithm performs well on a task and that it is therefore a good model of human performance, or vice versa. Modern authors separate the two kinds of claims; this distinction has allowed both AI and cognitive science to develop more rapidly. The two fields continue to fertilize each other, most notably in computer vision, which incorporates neurophysiological evidence into computational models.
5. Thinking rationally: The "laws of thought" approach
a) The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises—for example, "Socrates is a man; all men are mortal; therefore, Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.
b) Logicians in the 19th century developed a precise notation for statements about all kinds of objects in the world and the relations among them. (Contrast this with ordinary arithmetic notation, which provides only for statements about numbers.) By 1965, programs existed that could, in principle, solve any solvable problem described in logical notation. (Although if no solution exists, the program might loop forever.) The so-called logicist tradition within artificial intelligence hopes to build on such programs to create intelligent systems.
c) There are two main obstacles to this approach. First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain. Second, there is a big difference between solving a problem "in principle" and solving it in practice. Even problems with just a few hundred facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first. Although both of these obstacles apply to any attempt to build computational reasoning systems, they appeared first in the logicist tradition.
6. Acting rationally: The rational agent approach
a) An agent is just something that acts (agent comes from the Latin agere, to do). Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. In the "laws of thought" approach to AI, the emphasis was on correct inferences. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve one's goals and then to act on that conclusion. On the other hand, correct inference is not all of rationality; in some situations, there is no provably correct thing to do, but something must still be done. There are also ways of acting rationally that cannot be said to involve inference. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.
b) All the skills needed for the Turing Test also allow an agent to act rationally. Knowledge representation and reasoning enable agents to reach good decisions. We need to be able to generate comprehensible sentences in natural language to get by in a complex society. We need learning not only for erudition, but also because it improves our ability to generate effective behavior.
c) The rational-agent approach has two advantages over the other approaches. First, it is more general than the "laws of thought" approach because correct inference is just one of several possible mechanisms for achieving rationality. Second, it is more amenable to scientific development than are approaches based on human behavior or human thought. The standard of rationality is mathematically well defined and completely general, and can be "unpacked" to generate agent designs that provably achieve it. Human behavior, on the other hand, is well adapted for one specific environment and is defined by, well, the sum total of all the things that humans do.

Q. List the foundations of AI? Explain any one in detail?

Answer:
1. Following are some of the foundations of AI:
a) Philosophy
b) Mathematics
c) Economics
d) Neuroscience
e) Psychology
f) Computer Engineering
g) Control Theory and Cybernetics
h) Linguistics
2. Philosophy:
a) What is Philosophy? It asks questions such as:
 Can formal rules be used to draw valid conclusions?
 How does the mind arise from a physical brain?
 Where does knowledge come from?
 How does knowledge lead to action?
b) It's one thing to say that the mind operates, at least in part, according to logical rules, and to build physical systems that emulate some of those rules; it's another to say that the mind itself is such a physical system. The question of how knowledge leads to action is vital to AI because intelligence requires action as well as reasoning. Moreover, only by understanding how actions are justified can we understand how to build an agent whose actions are justifiable.
3. Mathematics:
a) Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic, computation, and probability.
 What are the formal rules to draw valid conclusions?
 What can be computed?
 How do we reason with uncertain information?
b) The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors. Gödel's incompleteness theorem can be interpreted as showing that some functions on the integers cannot be represented by an algorithm—that is, they cannot be computed. This motivated Alan Turing (1912–1954) to try to characterize exactly which functions are computable—capable of being computed.
c) Besides logic and computation, the third great contribution of mathematics to AI is the theory of probability.
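As a small, hedged illustration of the algorithmic tradition mentioned in b), here is Euclid's algorithm in Python (the function name gcd and the sample inputs are our own choices):

def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    # the last nonzero value of a is the greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # prints 12

The loop terminates because the remainder strictly decreases, which is what makes this a genuine algorithm in Turing's sense: a procedure guaranteed to compute its function in finitely many steps.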
4. Economics:
a) Decision theory, which combines probability theory with utility theory, provides a formal and complete framework for decisions (economic or otherwise) made under uncertainty—that is, in cases where probabilistic descriptions appropriately capture the decision maker's environment. This is suitable for "large" economies where each agent need pay no attention to the actions of other agents as individuals. For "small" economies, the situation is much more like a game: the actions of one player can significantly affect the utility of another (either positively or negatively).
 How should we make decisions so as to maximize payoff?
 How should we do this when others may not go along?
 How should we do this when the payoff may be far in the future?
b) Work in economics and operations research has contributed much to our notion of rational agents, yet for many years AI research developed along entirely separate paths. One reason was the apparent complexity of making rational decisions.
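To make the decision-theoretic idea in a) concrete, here is a minimal sketch in Python. The actions, probabilities, and utilities are toy numbers of our own invention, not from the text; the point is only that a decision-theoretic agent picks the action with the highest probability-weighted utility.

# Each action maps to a list of (probability, utility) outcome pairs;
# the probabilities for one action are assumed to sum to 1.
actions = {
    "take_umbrella": [(0.3, 60), (0.7, 50)],    # rain / no rain
    "leave_umbrella": [(0.3, -100), (0.7, 70)],
}

def expected_utility(outcomes):
    # Expected utility = sum of probability * utility over outcomes.
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # take_umbrella 53.0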
5. Neuroscience:
a) Neuroscience is the study of the nervous system, particularly the brain. Although the exact way in which the brain enables thought is one of the great mysteries of science, the fact that it does enable thought has been appreciated for thousands of years. Brains and digital computers have somewhat different properties: computers have a cycle time that is a million times faster than a brain, but the brain makes up for that with far more storage and interconnection than even a high-end personal computer, although the largest supercomputers have a capacity that is similar to the brain's. (It should be noted, however, that the brain does not seem to use all of its neurons simultaneously.)
6. Psychology:
a) Cognitive psychology, which views the brain as an information-processing device, can be traced back at least to the works of William James (1842–1910).
b) The three key steps of a knowledge-based agent are:
 the stimulus must be translated into an internal representation,
 the representation is manipulated by cognitive processes to derive new internal representations, and
 these are in turn retranslated back into action.
c) It is now a common (although far from universal) view among psychologists that "a cognitive theory should be like a computer program" (Anderson, 1980); that is, it should describe a detailed information-processing mechanism whereby some cognitive function might be implemented.
7. Computer Engineering:
a) For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice. The modern digital electronic computer was invented independently and almost simultaneously by scientists in three countries embattled in World War II.
b) Since that time, each generation of computer hardware has brought an increase in speed and capacity and a decrease in price. Performance doubled every 18 months or so until around 2005, when power dissipation problems led manufacturers to start multiplying the number of CPU cores rather than the clock speed. Current expectations are that future increases in power will come from massive parallelism—a curious convergence with the properties of the brain.
c) AI also owes a debt to the software side of computer science, which has supplied the operating systems, programming languages, and tools needed to write modern programs (and papers about them). But this is one area where the debt has been repaid: work in AI has pioneered many ideas that have made their way back to mainstream computer science, including time sharing, interactive interpreters, personal computers with windows and mice, rapid development environments, the linked list data type, automatic storage management, and key concepts of symbolic, functional, declarative, and object-oriented programming.
8. Control Theory and Cybernetics:
a) Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock with a regulator that maintained a constant flow rate. This invention changed the definition of what an artifact could do. Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time. This roughly matches our view of AI: designing systems that behave optimally.
b) Calculus and matrix algebra, the tools of control theory, lend themselves to systems that are describable by fixed sets of continuous variables, whereas AI was founded in part as a way to escape from these perceived limitations. The tools of logical inference and computation allowed AI researchers to consider problems such as language, vision, and planning that fell completely outside the control theorist's purview.
9. Linguistics:
a) Modern linguistics and AI were "born" at about the same time, and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing. The problem of understanding language soon turned out to be considerably more complex than it seemed in 1957. Understanding language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences.
b) This might seem obvious, but it was not widely appreciated until the 1960s. Much of the early work in knowledge representation (the study of how to put knowledge into a form that a computer can reason with) was tied to language and informed by research in linguistics, which was connected in turn to decades of work on the philosophical analysis of language.

Q. Explain history of Artificial Intelligence?

Answer:
1. The gestation of artificial intelligence (1943–1955)
a) The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts (1943). They drew on three sources: knowledge of the basic physiology and function of neurons in the brain; a formal analysis of propositional logic due to Russell and Whitehead; and Turing's theory of computation. They proposed a model of artificial neurons in which each neuron is characterized as being "on" or "off," with a switch to "on" occurring in response to stimulation by a sufficient number of neighboring neurons.
2. The birth of artificial intelligence (1956)
a) McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural nets, and the study of intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956. The Dartmouth workshop did not lead to any new breakthroughs, but it did introduce all the major figures to each other. For the next 20 years, the field would be dominated by these people and their students and colleagues.
3. Early enthusiasm, great expectations (1952–1969)
a) The early years of AI were full of successes—in a limited way. Given the primitive computers and programming tools of the time, and the fact that only a few years earlier computers were seen as things that could do arithmetic and no more, it was astonishing whenever a computer did anything remotely clever.
4. A dose of reality (1966–1973)
a) Terms such as "visible future" can be interpreted in various ways, but Simon also made more concrete predictions: that within 10 years a computer would be chess champion, and a significant mathematical theorem would be proved by machine. These predictions came true (or approximately true) within 40 years rather than 10. Three kinds of difficulty stood in the way:
 The first kind of difficulty arose because most early programs knew nothing of their subject matter; they succeeded by means of simple syntactic manipulations.
 The second kind of difficulty was the intractability of many of the problems that AI was attempting to solve. Most of the early AI programs solved problems by trying out different combinations of steps until the solution was found. This strategy worked initially because microworlds contained very few objects and hence very few possible actions and very short solution sequences.
 A third difficulty arose because of some fundamental limitations on the basic structures being used to generate intelligent behavior.
5. Knowledge-based systems: The key to power? (1969–1979)
a) The picture of problem solving that had arisen during the first decade of AI research was of a general-purpose search mechanism trying to string together elementary reasoning steps to find complete solutions. Such approaches have been called weak methods because, although general, they do not scale up to large or difficult problem instances.
b) The alternative to weak methods is to use more powerful, domain-specific knowledge that allows larger reasoning steps and can more easily handle typically occurring cases in narrow areas of expertise. One might say that to solve a hard problem, you have to almost know the answer already.
c) The widespread growth of applications to real-world problems caused a concurrent increase in the demands for workable knowledge representation schemes. A large number of different representation and reasoning languages were developed.
6. AI becomes an industry (1980–present)
a) The first successful commercial expert system, R1, began operation at the Digital Equipment Corporation (McDermott, 1982). The program helped configure orders for new computer systems; by 1986, it was saving the company an estimated $40 million a year.
b) Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988, including hundreds of companies building expert systems, vision systems, robots, and software and hardware specialized for these purposes.
c) Soon after that came a period called the "AI Winter," in which many companies fell by the wayside as they failed to deliver on extravagant promises.
7. The return of neural networks (1986–present)
a) In the mid-1980s at least four different groups reinvented the back-propagation learning algorithm first found in 1969 by Bryson and Ho. The algorithm was applied to many learning problems in computer science and psychology, and the widespread dissemination of the results in the collection Parallel Distributed Processing (Rumelhart and McClelland, 1986) caused great excitement.
8. AI adopts the scientific method (1987–present)
a) Recent years have seen a revolution in both the content and the methodology of work in artificial intelligence. It is now more common to build on existing theories than to propose brand-new ones, and to base claims on rigorous theorems or hard experimental evidence. In terms of methodology, AI has finally come firmly under the scientific method. To be accepted, hypotheses must be subjected to rigorous empirical experiments, and the results must be analyzed statistically for their importance (Cohen, 1995). It is now possible to replicate experiments by using shared repositories of test data and code.
b) Using improved methodology and theoretical frameworks, the field arrived at an understanding in which neural nets can now be compared with corresponding techniques from statistics, pattern recognition, and machine learning, and the most promising technique can be applied to each application. As a result of these developments, so-called data mining technology has spawned a vigorous new industry.
9. The emergence of intelligent agents (1995–present)
a) Perhaps encouraged by the progress in solving the subproblems of AI, researchers have also started to look at the "whole agent" problem again. One of the most important environments for intelligent agents is the Internet. AI systems have become so common in Web-based applications that the "-bot" suffix has entered everyday language. Moreover, AI technologies underlie many Internet tools, such as search engines, recommender systems, and Web site aggregators.
b) One consequence of trying to build complete agents is the realization that the previously isolated subfields of AI might need to be reorganized somewhat when their results are to be tied together. In particular, it is now widely appreciated that sensory systems (vision, sonar, speech recognition, etc.) cannot deliver perfectly reliable information about the environment.
c) Hence, reasoning and planning systems must be able to handle uncertainty. A second major consequence of the agent perspective is that AI has been drawn into much closer contact with other fields, such as control theory and economics, that also deal with agents. Recent progress in the control of robotic cars has derived from a mixture of approaches ranging from better sensors and control-theoretic integration of sensing, localization, and mapping to a degree of high-level planning.
10. The availability of very large data sets (2001–present)
a) Throughout the 60-year history of computer science, the emphasis has been on the algorithm as the main subject of study. But some recent work in AI suggests that for many problems, it makes more sense to worry about the data and be less picky about what algorithm to apply. This is true because of the increasing availability of very large data sources.

Q. Explain various applications of AI?

Answer:
1. Robotic vehicles: A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge. STANLEY is a Volkswagen Touareg outfitted with cameras, radar, and laser rangefinders to sense the environment and onboard software to command the steering, braking, and acceleration (Thrun, 2006). The following year CMU's BOSS won the Urban Challenge, safely driving in traffic through the streets of a closed Air Force base, obeying traffic rules and avoiding pedestrians and other vehicles.
2. Speech recognition: A traveler calling United Airlines to book a flight can have the entire conversation guided by an automated speech recognition and dialog management system.
3. Autonomous planning and scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft (Jonsson et al., 2000). Remote Agent generated plans from high-level goals specified from the ground and monitored the execution of those plans—detecting, diagnosing, and recovering from problems as they occurred.
4. Game playing: IBM's DEEP BLUE became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997). Kasparov said that he felt a "new kind of intelligence" across the board from him. Newsweek magazine described the match as "The brain's last stand." The value of IBM's stock increased by $18 billion. Human champions studied Kasparov's loss and were able to draw a few matches in subsequent years, but the most recent human-computer matches have been won convincingly by the computer.
5. Spam fighting: Each day, learning algorithms classify over a billion messages as spam, saving the recipient from having to waste time deleting what, for many users, could comprise 80% or 90% of all messages if not classified away by algorithms. Because the spammers are continually updating their tactics, it is difficult for a static programmed approach to keep up, and learning algorithms work best. (A minimal sketch of such a learned filter appears after this list.)
6. Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting points, destinations, routes, and conflict resolution among all parameters. The AI planning techniques generated in hours a plan that would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.
7. Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners for home use. The company also deploys the more rugged PackBot to Iraq and Afghanistan, where it is used to handle hazardous materials, clear explosives, and identify the location of snipers.
8. Machine translation: A computer program automatically translates from Arabic to English, allowing an English speaker to see the headline "Ardogan Confirms That Turkey Would Not Accept Any Pressure, Urging Them to Recognize Cyprus." The program uses a statistical model built from examples of Arabic-to-English translations and from examples of English text totaling two trillion words (Brants et al., 2007). None of the computer scientists on the team speak Arabic, but they do understand statistics and machine learning algorithms.
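The spam-fighting entry above says that learning algorithms do the classifying. As a hedged illustration only (a toy word-counting classifier in the naive Bayes style, with made-up training messages, not any production system), such a filter can be sketched like this:

from collections import Counter

# Toy training data; real filters learn from millions of labeled messages.
spam = ["win money now", "free money offer"]
ham = ["meeting notes attached", "lunch tomorrow"]

def word_counts(msgs):
    return Counter(w for m in msgs for w in m.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def score(msg, counts, total):
    # Product of per-word probabilities with add-one (Laplace) smoothing.
    p = 1.0
    for w in msg.split():
        p *= (counts[w] + 1) / (total + len(vocab))
    return p

msg = "free money"
print("spam" if score(msg, spam_counts, spam_total) >
      score(msg, ham_counts, ham_total) else "ham")  # prints: spam

Because the word statistics are learned from data, retraining on new messages adapts the filter as spammers change tactics, which is the advantage over a static hand-programmed rule set.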
Q. Define Agent and Environment?

Answer:
1. An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on for actuators. A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators. A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
2. We use the term percept to refer to the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything the agent has ever perceived. In general, an agent's choice of action at any given instant can depend on the entire percept sequence observed to date, but not on anything it hasn't perceived. By specifying the agent's choice of action for every possible percept sequence, we have said more or less everything there is to say about the agent. Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action.
3. We can imagine tabulating the agent function that describes any given agent; for most agents, this would be a very large table—infinite, in fact, unless we place a bound on the length of percept sequences we want to consider. Given an agent to experiment with, we can, in principle, construct this table by trying out all possible percept sequences and recording which actions the agent does in response. The table is, of course, an external characterization of the agent. Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.
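A minimal sketch of this table idea in Python (the percepts, actions, and the handful of table entries are our own illustrative choices; a real table for even a simple world would be astronomically large):

# The "table" maps complete percept sequences (as tuples) to actions;
# only a few entries of the two-square vacuum world are shown.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []  # the agent program must remember the whole history

def table_driven_agent(percept):
    # Append the new percept and look up the full sequence so far.
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("B", "Dirty")))  # Suck

Note how the program (a lookup) is distinct from the function (the abstract mapping the table encodes), exactly the distinction drawn in point 3.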
Q. Explain the concept of rationality?

Answer:
1. A rational agent is one that does the right thing—conceptually speaking, every entry in the table for the agent function is filled out correctly. Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing? When an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well. This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states. Notice that we said environment states, not agent states. If we define success in terms of the agent's opinion of its own performance, an agent could achieve perfect rationality simply by deluding itself that its performance was perfect.
2. Rationality: What is rational at any given time depends on four things:
 The performance measure that defines the criterion of success.
 The agent's prior knowledge of the environment.
 The actions that the agent can perform.
 The agent's percept sequence to date.
3. This leads to a definition of a rational agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
4. Omniscience, learning, and autonomy
a) We need to be careful to distinguish between rationality and omniscience. An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality. Rationality is not the same as perfection. Rationality maximizes expected performance, while perfection maximizes actual performance. Retreating from a requirement of perfection is not just a question of being fair to agents. The point is that if we expect an agent to do what turns out to be the best action after the fact, it will be impossible to design an agent to fulfill this specification. Our definition of rationality does not require omniscience, then, because the rational choice depends only on the percept sequence to date.
b) We must also ensure that we haven't inadvertently allowed the agent to engage in decidedly underintelligent activities.
c) Our definition requires a rational agent not only to gather information but also to learn as much as possible from what it perceives. The agent's initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented. There are extreme cases in which the environment is completely known a priori. In such cases, the agent need not perceive or learn; it simply acts correctly.
d) To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge.
Q. List various nature of environment?

Answer:
1. Specifying the task environment
a) In our discussion of rationality we had to specify the performance measure, the environment, and the agent's actuators and sensors. We group all these under the heading of the task environment. For the acronymically minded, we call this the PEAS (Performance, Environment, Actuators, Sensors) description. In designing an agent, the first step must always be to specify the task environment as fully as possible.
b) The following PEAS description summarizes the taxi's task environment:
 Performance measure: safe, fast, legal, comfortable trip, maximize profits.
 Environment: roads, other traffic, pedestrians, customers.
 Actuators: steering, accelerator, brake, signal, horn, display.
 Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard.
c) The basic PEAS elements can be sketched in the same way for a number of additional agent types. It may come as a surprise to some readers that such a list of agent types includes some programs that operate in the entirely artificial environment defined by keyboard input and character output on a screen. "Surely," one might say, "this is not a real environment, is it?" In fact, what matters is not the distinction between "real" and "artificial" environments, but the complexity of the relationship among the behavior of the agent, the percept sequence generated by the environment, and the performance measure.
d) In contrast, some software agents (or software robots or softbots) exist in rich, unlimited domains. Imagine a softbot Web site operator designed to scan Internet news sources and show the interesting items to its users, while selling advertising space to generate revenue. To do well, that operator will need some natural language processing abilities, it will need to learn what each user and advertiser is interested in, and it will need to change its plans dynamically—for example, when the connection for one news source goes down or when a new one comes online.
2. Properties of task environments
a) Fully observable vs. partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data—for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. If the agent has no sensors at all, then the environment is unobservable.
b) Single agent vs. multiagent: The distinction between single-agent and multiagent environments may seem simple enough. For example, an agent solving a puzzle by itself is in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
c) Deterministic vs. stochastic: If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.
d) Episodic vs. sequential: In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. Many classification tasks are episodic. Sequential environments such as chess and taxi driving are different: in both cases, short-term actions can have long-term consequences. Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.
e) Static vs. dynamic: If the environment can change while the agent is deliberating, it is dynamic for that agent; otherwise, it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. Dynamic environments, by contrast, are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing.
f) Discrete vs. continuous: The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states.
g) Known vs. unknown: Strictly speaking, this distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the "laws of physics" of the environment. In a known environment, the outcomes (or outcome probabilities if the environment is stochastic) for all actions are given. If the environment is unknown, the agent will have to learn how it works in order to make good decisions.

Q. Explain the structure of Agent in detail?

Answer:
1. The job of AI is to design an agent program that implements the agent function—the mapping from percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators—we call this the architecture: agent = architecture + program.
2. Agent programs
a) The agent programs that we design here all have the same skeleton: they take the current percept as input from the sensors and return an action to the actuators. Notice the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history. The agent program takes just the current percept as input because nothing more is available from the environment; if the agent's actions need to depend on the entire percept sequence, the agent will have to remember the percepts.
b) We outline four basic kinds of agent programs that embody the principles underlying almost all intelligent systems:
 Simple reflex agents;
 Model-based reflex agents;
 Goal-based agents; and
 Utility-based agents.
Each kind of agent program combines particular components in particular ways to generate actions.
3. Simple reflex agents
a) The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history. Simple reflex behaviors occur even in more complex environments. Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. In other words, some processing is done on the visual input to establish the condition we call "The car in front is braking." Then, this triggers some established connection in the agent program to the action "initiate braking." We call such a connection a condition–action rule.
b) Simple reflex agents have the admirable property of being simple, but they turn out to be of limited intelligence. The agent in Figure 2.10 will work only if the correct decision can be made on the basis of only the current percept—that is, only if the environment is fully observable. Even a little bit of unobservability can cause serious trouble.
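A minimal sketch of a simple reflex agent for the two-square vacuum world, in the spirit of the textbook's REFLEX-VACUUM-AGENT pseudocode (the function name and percept encoding are our own):

def reflex_vacuum_agent(percept):
    # Decide using ONLY the current percept: (location, status).
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right

Nothing is remembered between calls, which is precisely why such an agent works only when the current percept suffices, i.e., when the environment is fully observable.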
4. Model-based reflex agents
a) The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
b) Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program. First, we need some information about how the world evolves independently of the agent—for example, that an overtaking car generally will be closer behind than it was a moment ago. Second, we need some information about how the agent's own actions affect the world—for example, that when the agent turns the steering wheel clockwise, the car turns to the right. This knowledge about how the world works is called a model of the world.
c) Regardless of the kind of representation used, it is seldom possible for the agent to determine the current state of a partially observable environment exactly. Instead, the box labeled "what the world is like now" (Figure 2.11) represents the agent's "best guess" (or sometimes best guesses). For example, an automated taxi may not be able to see around the large truck that has stopped in front of it and can only guess about what may be causing the hold-up. Thus, uncertainty about the current state may be unavoidable, but the agent still has to make a decision.
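A hedged skeleton of the model-based idea (the state-update rule below is a deliberately crude stand-in; a real model would encode how the world evolves on its own and how the agent's actions change it):

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}          # best guess about the unobserved world
        self.last_action = None

    def update_state(self, percept):
        # Stand-in model: fold the new percept and the last action
        # into the internal state (the "best guess" of point c).
        self.state["last_percept"] = percept
        self.state["last_action"] = self.last_action

    def rule_match(self):
        # Stand-in condition-action rules keyed on the current guess.
        location, status = self.state["last_percept"]
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    def __call__(self, percept):
        self.update_state(percept)
        self.last_action = self.rule_match()
        return self.last_action

agent = ModelBasedReflexAgent()
print(agent(("A", "Dirty")))  # Suck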
5. Goal-based agents
a) Knowing something about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable—for example, being at the passenger's destination. The agent program can combine this with the model (the same information as was used in the model-based reflex agent) to choose actions that achieve the goal.
b) Sometimes goal-based action selection is straightforward—for example, when goal satisfaction results immediately from a single action. Sometimes it will be more tricky—for example, when the agent has to consider long sequences of twists and turns in order to find a way to achieve the goal.
c) Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified. If it starts to rain, the agent can update its knowledge of how effectively its brakes will operate; this will automatically cause all of the relevant behaviors to be altered to suit the new conditions. For the reflex agent, on the other hand, we would have to rewrite many condition–action rules. The goal-based agent's behavior can easily be changed to go to a different destination, simply by specifying that destination as the goal. The reflex agent's rules for when to turn and when to go straight will work only for a single destination; they must all be replaced to go somewhere new.
6. Utility-based agents
a) Goals alone are not enough to generate high-quality behavior in most environments. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. Because "happy" does not sound very scientific, economists and computer scientists use the term utility instead. With utility-based agent programs we design decision-making agents that must handle the uncertainty inherent in stochastic or partially observable environments.
b) A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning. Choosing the utility-maximizing course of action is also a difficult task, requiring ingenious algorithms.

www.acuityeducare.com
Acuity Educare: Artificial Intelligence
BSC IT : SEM – V, UNIT 2
Q. What are Problem solving agents?

Answer:
1. Intelligent agents are supposed to maximize their performance measure. Achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it. Goals help organize behavior by limiting the objectives that the agent is trying to achieve and hence the actions it needs to consider. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
2. We will consider a goal to be a set of world states—exactly those states in which the goal is satisfied. The agent's task is to find out how to act, now and in the future, so that it reaches a goal state. Before it can do this, it needs to decide (or we need to decide on its behalf) what sorts of actions and states it should consider. Problem formulation is the process of deciding what actions and states to consider, given a goal.
3. If the agent has no additional information—i.e., if the environment is unknown—then it has no choice but to try one of the actions at random. In general, an agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value.
4. The process of looking for a sequence of actions that reaches the goal is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
5. Thus, we have a simple "formulate, search, execute" design for the agent (a minimal sketch follows below). After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do—typically, the first action of the sequence—and then removing that step from the sequence. Once the solution has been executed, the agent will formulate a new goal.
6. Notice that while the agent is executing the solution sequence it ignores its percepts when choosing an action because it knows in advance what they will be. An agent that carries out its plans with its eyes closed, so to speak, must be quite certain of what is going on. Control theorists call this an open-loop system, because ignoring the percepts breaks the loop between agent and environment.
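A hedged sketch of the "formulate, search, execute" loop in Python; formulate_goal, formulate_problem, and search are placeholders of our own for whatever goal, problem, and search procedure a task supplies:

def simple_problem_solving_agent(percept, agent_state, formulate_goal,
                                 formulate_problem, search):
    # agent_state carries the remaining action sequence between calls.
    if not agent_state["seq"]:                 # no plan left: plan anew
        goal = formulate_goal(percept)
        problem = formulate_problem(percept, goal)
        agent_state["seq"] = search(problem) or []
        if not agent_state["seq"]:
            return None                        # no solution found
    # Execute: pop and return the first action of the remaining plan.
    return agent_state["seq"].pop(0)

# Tiny demo with stand-in formulate/search procedures:
state = {"seq": []}
act = simple_problem_solving_agent(
    percept="at-A", agent_state=state,
    formulate_goal=lambda p: "at-B",
    formulate_problem=lambda p, g: (p, g),
    search=lambda prob: ["Right"])
print(act)  # Right

Each call returns the next action of the current plan and replans only when the plan is exhausted, which is exactly the open-loop behavior described in point 6.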

Q. Explain the concept of toy problem with example?

Answer:
1. The problem-solving approach has been applied to a vast array of task environments. We list some of the best known here, distinguishing between toy and real-world problems. A toy problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description and hence is usable by different researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually care about. Such problems tend not to have a single agreed-upon description, but we can give the general flavor of their formulations.
2. Toy problems
a) The first example we examine is the vacuum world. This can be formulated as a problem as follows:
 States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 x 2^2 = 8 possible world states. A larger environment with n locations has n * 2^n states.
 Initial state: Any state can be designated as the initial state.
 Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.
 Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
 Goal test: This checks whether all the squares are clean.
 Path cost: Each step costs 1, so the path cost is the number of steps in the path.
3. Compared with the real world, this toy problem has discrete locations, discrete dirt, reliable cleaning, and it never gets any dirtier.
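Under the formulation above, the vacuum world can be written down directly. This is a minimal sketch; the state encoding (agent location plus a frozenset of dirty squares) is our own choice:

# A state is (location, frozenset_of_dirty_squares): 2 x 2^2 = 8 states.
def actions(state):
    return ["Left", "Right", "Suck"]

def result(state, action):
    # Transition model: Left in A, Right in B, and Suck on a clean
    # square all leave the state unchanged.
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    return (loc, dirt - {loc})        # Suck cleans the current square

def goal_test(state):
    return not state[1]               # goal: no dirty squares remain

start = ("A", frozenset({"A", "B"}))
print(result(start, "Suck"))          # ('A', frozenset({'B'}))
print(goal_test(("B", frozenset())))  # True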
Q. Explain the concept of toy problem with example?
Answer: Q. Explain 8-Puzzle example in detail? Amswer:
1. The problem-solving approach has been applied to a vast array of task environments. We list some of the best
known here, distinguishing between toy and real-world problems. A toy problem is intended to illustrate or exercise Answer: 1. We have already seen how the route-finding problem is defined in terms of specified locations and transitions along
various problem-solving methods. It can be given a concise, exact description and hence is usable by different
links between them. Route-finding algorithms are used in a variety of applications. Some, such as Web sites and in-
researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually 1. The 8-puzzle consists of a 3×3 board with eight numbered tiles and a blank space. A tile adjacent to the blank car systems that provide driving directions, are relatively straightforward extensions of the Romania example.
care about. Such problems tend not to have a single agreed-upon description, but we can give the general flavor of space can slide into the space. The object is to reach a specified goal state, such as the one shown on the right of the Others, such as routing video streams in computer networks, military operations planning, and airline travel-
their formulations. figure. The standard formulation is as follows: planning systems, involve much more complex specifications. Consider the airline travel problems that must be
 States: solved by a travel-planning Web site:
A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
 Initial state:

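A compact sketch of this formulation in Python, using a 9-tuple state read row by row with 0 marking the blank (an illustrative encoding, not one fixed by the notes):

# 8-puzzle: state is a 9-tuple, index 0..8 read row by row; 0 marks the blank.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}   # blank movement

def actions(state):
    """Legal moves of the blank, respecting the 3x3 edges."""
    i = state.index(0)
    acts = []
    if i >= 3:        acts.append("Up")
    if i <= 5:        acts.append("Down")
    if i % 3 != 0:    acts.append("Left")
    if i % 3 != 2:    acts.append("Right")
    return acts

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]                  # slide the tile into the blank
    return tuple(s)

print(actions(GOAL))                         # ['Down', 'Right']
print(result(GOAL, "Right"))                 # (1, 0, 2, 3, 4, 5, 6, 7, 8)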
Q. List some real world problems?

Answer:

1. We have already seen how the route-finding problem is defined in terms of specified locations and transitions along links between them. Route-finding algorithms are used in a variety of applications. Some, such as Web sites and in-car systems that provide driving directions, are relatively straightforward extensions of the Romania example. Others, such as routing video streams in computer networks, military operations planning, and airline travel-planning systems, involve much more complex specifications. Consider the airline travel problems that must be solved by a travel-planning Web site:
• States: Each state obviously includes a location (e.g., an airport) and the current time. Furthermore, because the cost of an action (a flight segment) may depend on previous segments, their fare bases, and their status as domestic or international, the state must record extra information about these "historical" aspects.
• Initial state: This is specified by the user's query.
• Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight's destination as the current location and the flight's arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
Commercial travel advice systems use a problem formulation of this kind, with many additional complications to handle the byzantine fare structures that airlines impose. Any seasoned traveler knows, however, that not all air travel goes according to plan.
2. The traveling salesperson problem (TSP) is a touring problem in which each city must be visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-hard, but an enormous amount of effort has been expended to improve the capabilities of TSP algorithms. In addition to planning trips for traveling salespersons, these algorithms have been used for tasks such as planning movements of automatic circuit-board drills and of stocking machines on shop floors.
3. A VLSI layout problem requires positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield. The layout problem comes after the logical design phase and is usually split into two parts: cell layout and channel routing. In cell layout, the primitive components of the circuit are grouped into cells, each of which performs some recognized function. Each cell has a fixed footprint (size and shape) and requires a certain number of connections to each of the other cells. The aim is to place the cells on the chip so that they do not overlap and so that there is room for the connecting wires to be placed between the cells. Channel routing finds a specific route for each wire through the gaps between the cells. These search problems are extremely complex, but definitely worth solving.
4. Robot navigation is a generalization of the route-finding problem described earlier. Rather than following a discrete set of routes, a robot can move in a continuous space with (in principle) an infinite set of possible actions and states. For a circular robot moving on a flat surface, the space is essentially two-dimensional. When the robot has arms and legs or wheels that must also be controlled, the search space becomes many-dimensional. Advanced techniques are required just to make the search space finite.
Q. Explain the process of "Searching for solutions"?

Answer:

1. Having formulated some problems, we now need to solve them. A solution is an action sequence, so search algorithms work by considering various possible action sequences. The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem.
2. This is the essence of search—following up one option now and putting the others aside for later, in case the first choice does not lead to a solution. The process of expanding nodes on the frontier continues until either a solution is found or there are no more states to expand.
3. Search algorithms all share this basic structure; they vary primarily according to how they choose which state to expand next—the so-called search strategy. Loopy paths are a special case of the more general concept of redundant paths, which exist whenever there is more than one way to get from one state to another. Fortunately, there is no need to consider loopy paths. We can rely on more than intuition for this: because path costs are additive and step costs are nonnegative, a loopy path to any given state is never better than the same path with the loop removed. In some cases, it is possible to define the problem itself so as to eliminate redundant paths.
4. As the saying goes, algorithms that forget their history are doomed to repeat it. The way to avoid exploring redundant paths is to remember where one has been. To do this, we augment the TREE-SEARCH algorithm with a data structure called the explored set (also known as the closed list), which remembers every expanded node. Newly generated nodes that match previously generated nodes—ones in the explored set or the frontier—can be discarded instead of being added to the frontier. The new algorithm is called GRAPH-SEARCH.
5. Example:
a) Figure shows the first few steps in growing the search tree for finding a route from Arad to Bucharest. The root node of the tree corresponds to the initial state, In(Arad). The first step is to test whether this is a goal state. (Clearly it is not, but it is important to check so that we can solve trick problems like "starting in Arad, get to Arad.") Then we need to consider taking various actions. We do this by expanding the current state; that is, applying each legal action to the current state, thereby generating a new set of states. In this case, we add three branches from the parent node In(Arad) leading to three new child nodes: In(Sibiu), In(Timisoara), and In(Zerind). Now we must choose which of these three possibilities to consider further.
b) Suppose we choose Sibiu first. We check to see whether it is a goal state (it is not) and then expand it to get In(Arad), In(Fagaras), In(Oradea), and In(RimnicuVilcea). We can then choose any of these four or go back and choose Timisoara or Zerind. Each of these six nodes is a leaf node, that is, a node with no children in the tree. The set of all leaf nodes available for expansion at any given point is called the frontier.
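A sketch of the GRAPH-SEARCH scheme just described, in Python. The Problem interface used here (initial, actions, result, goal_test) is an assumed convention for these sketches, and the frontier is kept as a FIFO queue purely for concreteness; states must be hashable.

from collections import deque

def graph_search(problem):
    """Generic graph search: frontier plus explored set."""
    frontier = deque([(problem.initial, [])])     # (state, path of actions)
    in_frontier = {problem.initial}
    explored = set()
    while frontier:
        state, path = frontier.popleft()          # choose a node to expand
        in_frontier.discard(state)
        if problem.goal_test(state):
            return path
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and child not in in_frontier:
                frontier.append((child, path + [action]))
                in_frontier.add(child)
    return None                                   # no more states to expand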
Q. List various uninformed Search Strategies?

Answer:

1. The term uninformed search (also called blind search) means that the strategies have no additional information about states beyond that provided in the problem definition. All they can do is generate successors and distinguish a goal state from a non-goal state. All search strategies are distinguished by the order in which nodes are expanded. Strategies that know whether one non-goal state is "more promising" than another are called informed search or heuristic search strategies.
2. Breadth-first search
a) Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.
b) Breadth-first search is an instance of the general graph-search algorithm in which the shallowest unexpanded node is chosen for expansion. This is achieved very simply by using a FIFO queue for the frontier. Thus, new nodes (which are always deeper than their parents) go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first. There is one slight tweak on the general graph-search algorithm, which is that the goal test is applied to each node when it is generated rather than when it is selected for expansion. This decision is explained below, where we discuss time complexity. Note also that the algorithm, following the general template for graph search, discards any new path to a state already in the frontier or explored set; it is easy to see that any such path must be at least as deep as the one already found. Thus, breadth-first search always has the shallowest path to every node on the frontier.
c) Two lessons can be learned from the above diagram.
• First, the memory requirements are a bigger problem for breadth-first search than is the execution time. One might wait for days for the solution to an important problem with search depth 12, but no personal computer has the petabyte of memory it would take. Fortunately, other strategies require less memory.
• Second, time is still a major factor. If your problem has a solution at depth 16, then (given our assumptions) it will take about 350 years for breadth-first search (or indeed any uninformed search) to find it. In general, exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.
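A minimal sketch of breadth-first search with the goal test applied at generation time, under the same assumed Problem interface as above:

from collections import deque

def breadth_first_search(problem):
    """BFS: FIFO frontier, goal test applied when a node is generated."""
    node = problem.initial
    if problem.goal_test(node):
        return []
    frontier = deque([(node, [])])
    reached = {node}                         # states in the frontier or explored
    while frontier:
        state, path = frontier.popleft()
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in reached:
                if problem.goal_test(child):     # test at generation time
                    return path + [action]
                reached.add(child)
                frontier.append((child, path + [action]))
    return None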
3. Uniform-cost search
a) When all step costs are equal, breadth-first search is optimal because it always expands the shallowest unexpanded node. By a simple extension, we can find an algorithm that is optimal with any step-cost function. Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n). This is done by storing the frontier as a priority queue ordered by g.
b) In addition to the ordering of the queue by path cost, there are two other significant differences from breadth-first search. The first is that the goal test is applied to a node when it is selected for expansion (as in the generic graph-search algorithm shown in Figure 3.7) rather than when it is first generated. The reason is that the first goal node that is generated may be on a suboptimal path. The second difference is that a test is added in case a better path is found to a node currently on the frontier.
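A sketch of uniform-cost search along these lines; the step_cost method is a further assumption of the Problem interface used in these sketches.

import heapq
from itertools import count

def uniform_cost_search(problem):
    """UCS: frontier is a priority queue ordered by path cost g(n)."""
    tie = count()                                   # tie-breaker for the heap
    frontier = [(0, next(tie), problem.initial, [])]
    best_g = {problem.initial: 0}
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                                # stale entry: a better path is known
        if problem.goal_test(state):                # goal test at expansion time
            return path
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2                  # found a better path to child
                heapq.heappush(frontier, (g2, next(tie), child, path + [action]))
    return None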
4. Depth-first search
a) Depth-first search always expands the deepest node in the current frontier of the search tree. The progress of the search is illustrated in Figure 3.16. The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the frontier, so then the search "backs up" to the next deepest node that still has unexplored successors.
b) The depth-first search algorithm is an instance of the graph-search algorithm in Figure 3.7; whereas breadth-first search uses a FIFO queue, depth-first search uses a LIFO queue. A LIFO queue means that the most recently generated node is chosen for expansion. This must be the deepest unexpanded node because it is one deeper than its parent—which, in turn, was the deepest unexpanded node when it was selected.

Q. Explain various INFORMED (HEURISTIC) SEARCH STRATEGIES?

Answer:

1. An informed search strategy—one that uses problem-specific knowledge beyond the definition of the problem itself—can find solutions more efficiently than can an uninformed strategy. The general approach we consider is called best-first search. Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function, f(n). The evaluation function is construed as a cost estimate, so the node with the lowest evaluation is expanded first. The implementation of best-first graph search is identical to that for uniform-cost search, except for the use of f instead of g to order the priority queue.
2. Greedy best-first search
a) Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n). Greedy best-first tree search is also incomplete even in a finite state space, much like depth-first search.
b) A* search: Minimizing the total estimated solution cost
• The most widely known form of best-first search is called A∗ search (pronounced "A-star search"). It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal:
f(n) = g(n) + h(n) .
• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, we have f(n) = estimated cost of the cheapest solution through n.
• Thus, if we are trying to find the cheapest solution, a reasonable thing to try first is the node with the lowest value of g(n) + h(n). It turns out that this strategy is more than just reasonable: provided that the heuristic function h(n) satisfies certain conditions, A∗ search is both complete and optimal. The algorithm is identical to UNIFORM-COST-SEARCH except that A∗ uses g + h instead of g.
c) Conditions for optimality: Admissibility and consistency
• The first condition we require for optimality is that h(n) be an admissible heuristic. An admissible heuristic is one that never overestimates the cost to reach the goal. Because g(n) is the actual cost to reach n along the current path, and f(n) = g(n) + h(n), we have as an immediate consequence that f(n) never overestimates the true cost of a solution along the current path through n.
• Admissible heuristics are by nature optimistic because they think the cost of solving the problem is less than it actually is.
3. Memory-bounded heuristic search
a) The simplest way to reduce memory requirements for A∗ is to adapt the idea of iterative deepening to the heuristic search context, resulting in the iterative-deepening A∗ (IDA∗) algorithm. The main difference between IDA∗ and standard iterative deepening is that the cutoff used is the f-cost (g + h) rather than the depth; at each iteration, the cutoff value is the smallest f-cost of any node that exceeded the cutoff on the previous iteration. IDA∗ is practical for many problems with unit step costs and avoids the substantial overhead associated with keeping a sorted queue of nodes. Unfortunately, it suffers from the same difficulties with real-valued costs as does the iterative version of uniform-cost search.
b) Two other memory-bounded algorithms are RBFS and MA∗. Recursive best-first search (RBFS) is a simple recursive algorithm that attempts to mimic the operation of standard best-first search, but using only linear space. Like A∗ tree search, RBFS is an optimal algorithm if the heuristic function h(n) is admissible. Its space complexity is linear in the depth of the deepest optimal solution, but its time complexity is rather difficult to characterize: it depends both on the accuracy of the heuristic function and on how often the best path changes as nodes are expanded. It seems sensible, therefore, to use all available memory. Two algorithms that do this are MA∗ (memory-bounded A∗) and SMA∗ (simplified MA∗). SMA∗ is—well—simpler, so we will describe it. SMA∗ proceeds just like A∗, expanding the best leaf until memory is full. At this point, it cannot add a new node to the search tree without dropping an old one. SMA∗ always drops the worst leaf node—the one with the highest f-value.
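Because A∗ is identical to uniform-cost search except that the queue is ordered by f = g + h, only a small change to the earlier sketch is needed. The heuristic h is passed in as a function, and the same assumed Problem interface is used.

import heapq
from itertools import count

def a_star_search(problem, h):
    """A*: uniform-cost search ordered by f = g + h instead of g."""
    tie = count()                                    # tie-breaker for the heap
    start = problem.initial
    frontier = [(h(start), next(tie), 0, start, [])]  # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                                 # stale queue entry
        if problem.goal_test(state):
            return path
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h(child), next(tie), g2, child, path + [action]))
    return None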
Q. Explain HEURISTIC FUNCTIONS with example?

Answer:

1. We look at heuristics for the 8-puzzle, in order to shed light on the nature of heuristics in general. The 8-puzzle was one of the earliest heuristic search problems. The object of the puzzle is to slide the tiles horizontally or vertically into the empty space until the configuration matches the goal configuration.
2. The average solution cost for a randomly generated 8-puzzle instance is about 22 steps. The branching factor is about 3. (When the empty tile is in the middle, four moves are possible; when it is in a corner, two; and when it is along an edge, three.) This means that an exhaustive tree search to depth 22 would look at about 3^22 ≈ 3.1×10^10 states.
3. A graph search would cut this down by a factor of about 170,000 because only 9!/2 = 181,440 distinct states are reachable. This is a manageable number, but the corresponding number for the 15-puzzle is roughly 10^13, so the next order of business is to find a good heuristic function. If we want to find the shortest solutions by using A∗, we need a heuristic function that never overestimates the number of steps to the goal. There is a long history of such heuristics for the 15-puzzle; here are two commonly used candidates:
a) h1 = the number of misplaced tiles. For Figure 3.28, all of the eight tiles are out of position, so the start state would have h1 = 8. h1 is an admissible heuristic because it is clear that any tile that is out of place must be moved at least once.
b) h2 = the sum of the distances of the tiles from their goal positions. Because tiles cannot move along diagonals, the distance we will count is the sum of the horizontal and vertical distances. This is sometimes called the city block distance or Manhattan distance. h2 is also admissible because all any move can do is move one tile one step closer to the goal. Tiles 1 to 8 in the start state give a Manhattan distance of h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18.
As expected, neither of these overestimates the true solution cost, which is 26.

Q. Explain Local Search Algorithms in detail?

Answer:

1. The search algorithms that we have seen so far are designed to explore search spaces systematically. This systematicity is achieved by keeping one or more paths in memory and by recording which alternatives have been explored at each point along the path. When a goal is found, the path to that goal also constitutes a solution to the problem. In many problems, however, the path to the goal is irrelevant.
2. If the path to the goal does not matter, we might consider a different class of algorithms, ones that do not worry about paths at all. Local search algorithms operate using a single current node (rather than multiple paths) and generally move only to neighbors of that node. Typically, the paths followed by the search are not retained. Although local search algorithms are not systematic, they have two key advantages: (1) they use very little memory—usually a constant amount; and (2) they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.
3. In addition to finding goals, local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function. To understand local search, we find it useful to consider the state-space landscape. A landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function). If elevation corresponds to cost, then the aim is to find the lowest valley—a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak—a global maximum. (You can convert from one to the other just by inserting a minus sign.) Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.
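For concreteness, here is a sketch of the two 8-puzzle heuristics h1 and h2 described in the heuristic-functions answer above, reusing the 9-tuple state encoding assumed earlier.

# Illustrative h1 and h2 for the 8-puzzle; states are 9-tuples with 0 as the blank.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)      # the start state used in the text
print(h1(start), h2(start))              # 8 18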
Q. Explain Hill Climbing Search in detail?

Answer:

1. The hill-climbing search algorithm is simply a loop that continually moves in the direction of increasing value—that is, uphill. It terminates when it reaches a "peak" where no neighbor has a higher value. The algorithm does not maintain a search tree, so the data structure for the current node need only record the state and the value of the objective function. Hill climbing does not look ahead beyond the immediate neighbors of the current state. This resembles trying to find the top of Mount Everest in a thick fog while suffering from amnesia.
2. To illustrate hill climbing, we will use the 8-queens problem. Local search algorithms typically use a complete-state formulation, where each state has 8 queens on the board, one per column. The successors of a state are all possible states generated by moving a single queen to another square in the same column (so each state has 8×7 = 56 successors). The heuristic cost function h is the number of pairs of queens that are attacking each other, either directly or indirectly. The global minimum of this function is zero, which occurs only at perfect solutions.
3. Hill climbing is sometimes called greedy local search because it grabs a good neighbor state without thinking ahead about where to go next. Although greed is considered one of the seven deadly sins, it turns out that greedy algorithms often perform quite well. Hill climbing often makes rapid progress toward a solution because it is usually quite easy to improve a bad state.
4. Unfortunately, hill climbing often gets stuck for the following reasons:
a) Local maxima: a local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be drawn upward toward the peak but will then be stuck with nowhere else to go.
b) Ridges: ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.
c) Plateaux: a plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible. A hill-climbing search might get lost on the plateau.
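A minimal sketch of the hill-climbing loop, with a toy one-dimensional demo; the neighbors and value functions are supplied by the caller.

def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: stop when no neighbor is better."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current                  # a peak (possibly only a local maximum)
        current = best

# Toy demo: maximize f(x) = -(x - 3)**2 over the integers.
print(hill_climbing(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))  # 3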
Q. Explain Simulated annealing algorithm in detail?

Answer:

1. A hill-climbing algorithm that never makes "downhill" moves toward states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk—that is, moving to a successor chosen uniformly at random from the set of successors—is complete but extremely inefficient. Therefore, it seems reasonable to try to combine hill climbing with a random walk in some way that yields both efficiency and completeness. Simulated annealing is such an algorithm.
2. In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a low-energy crystalline state. To explain simulated annealing, we switch our point of view from hill climbing to gradient descent (i.e., minimizing cost) and imagine the task of getting a ping-pong ball into the deepest crevice in a bumpy surface.
3. If we just let the ball roll, it will come to rest at a local minimum. If we shake the surface, we can bounce the ball out of the local minimum. The trick is to shake just hard enough to bounce the ball out of local minima but not hard enough to dislodge it from the global minimum. The simulated-annealing solution is to start by shaking hard (i.e., at a high temperature) and then gradually reduce the intensity of the shaking (i.e., lower the temperature).
4. Simulated annealing was first used extensively to solve VLSI layout problems in the early 1980s. It has been applied widely to factory scheduling and other large-scale optimization tasks.
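A sketch of simulated annealing along these lines; the geometric cooling schedule used in the demo is an illustrative choice, not one prescribed by the notes.

import math
import random

def simulated_annealing(initial, neighbors, value, schedule):
    """Random moves; a bad move is accepted with probability exp(delta / T)."""
    current = initial
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt                    # downhill moves are allowed early on
        t += 1

# Toy demo with a geometric cooling schedule.
best = simulated_annealing(
    0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2,
    schedule=lambda t: 10 * (0.95 ** t) if t < 300 else 0)
print(best)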
Q. List various Searches with non deterministic approaches?

Answer:

1. The erratic vacuum world
a) As an example, we use the vacuum world. Recall that the state space has eight states. There are three actions—Left, Right, and Suck—and the goal is to clean up all the dirt (states 7 and 8). If the environment is observable, deterministic, and completely known, then the problem is trivially solvable by any of the algorithms and the solution is an action sequence. For example, if the initial state is 1, then the action sequence [Suck, Right, Suck] will reach a goal state, 8.
b) Now suppose that we introduce nondeterminism in the form of a powerful but erratic vacuum cleaner. In the erratic vacuum world, the Suck action works as follows:
• When applied to a dirty square the action cleans the square and sometimes cleans up dirt in an adjacent square, too.
• When applied to a clean square the action sometimes deposits dirt on the carpet.
c) Solutions for nondeterministic problems can contain nested if–then–else statements; this means that they are trees rather than sequences. This allows the selection of actions based on contingencies arising during execution. Many problems in the real, physical world are contingency problems because exact prediction is impossible.
2. AND–OR search trees
a) We begin by constructing search trees, but here the trees have a different character. In a deterministic environment, the only branching is introduced by the agent's own choices in each state. We call these nodes OR nodes. In the vacuum world, for example, at an OR node the agent chooses Left or Right or Suck. In a nondeterministic environment, branching is also introduced by the environment's choice of outcome for each action. We call these nodes AND nodes. For example, the Suck action in state 1 leads to a state in the set {5, 7}, so the agent would need to find a plan for state 5 and for state 7. These two kinds of nodes alternate, leading to an AND–OR tree.
b) A solution for an AND–OR search problem is a subtree that (1) has a goal node at every leaf, (2) specifies one action at each of its OR nodes, and (3) includes every outcome branch at each of its AND nodes. The solution is shown in bold lines in the figure; it corresponds to the plan given in Equation (4.3). (The plan uses if–then–else notation to handle the AND branches, but when there are more than two branches at a node, it might be better to use a case construct.)
c) If the current state is identical to a state on the path from the root, then it returns with failure. This doesn't mean that there is no solution from the current state; it simply means that if there is a noncyclic solution, it must be reachable from the earlier incarnation of the current state, so the new incarnation can be discarded. With this check, we ensure that the algorithm terminates in every finite state space, because every path must reach a goal, a dead end, or a repeated state. Notice that the algorithm does not check whether the current state is a repetition of a state on some other path from the root, which is important for efficiency.
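A sketch of AND–OR search as just described. Here results(s, a) returning the set of possible outcome states is an assumed interface, and a plan is represented as [action, {outcome-state: subplan}].

def and_or_search(problem):
    """Returns a conditional plan for a nondeterministic problem, or None."""
    return or_search(problem, problem.initial, [])

def or_search(problem, state, path):
    if problem.goal_test(state):
        return []
    if state in path:
        return None                          # repeated state on this path: fail
    for action in problem.actions(state):
        plan = and_search(problem, problem.results(state, action), [state] + path)
        if plan is not None:
            return [action, plan]
    return None

def and_search(problem, states, path):
    """Needs a sub-plan for EVERY possible outcome state."""
    plans = {}
    for s in states:
        plan = or_search(problem, s, path)
        if plan is None:
            return None
        plans[s] = plan                      # read as "if state == s then plans[s]"
    return plans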
3. Try, try again
a) Consider the slippery vacuum world, which is identical to the ordinary (non-erratic) vacuum world except that movement actions sometimes fail, leaving the agent in the same location. For example, moving Right in state 1 leads to the state set {1, 2}. Figure 4.12 shows part of the search graph; clearly, there are no longer any acyclic solutions from state 1, and AND-OR-GRAPH-SEARCH would return with failure. There is, however, a cyclic solution, which is to keep trying Right until it works. We can express this solution by adding a label to denote some portion of the plan and using that label later instead of repeating the plan itself. Thus, our cyclic solution is
[Suck, L1 : Right, if State = 5 then L1 else Suck] .
b) (A better syntax for the looping part of this plan would be "while State = 5 do Right".) In general a cyclic plan may be considered a solution provided that every leaf is a goal state and that a leaf is reachable from every point in the plan. The key realization is that a loop in the state space back to a state L translates to a loop in the plan back to the point where the subplan for state L is executed.
c) Given the definition of a cyclic solution, an agent executing such a solution will eventually reach the goal provided that each outcome of a nondeterministic action eventually occurs. Is this condition reasonable? It depends on the reason for the nondeterminism. If the action rolls a die, then it's reasonable to suppose that eventually a six will be rolled. If the action is to insert a hotel card key into the door lock, but it doesn't work the first time, then perhaps it will eventually work, or perhaps one has the wrong key (or the wrong room!). After seven or eight tries, most people will assume the problem is with the key and will go back to the front desk to get a new one. One way to understand this decision is to say that the initial problem formulation (observable, nondeterministic) is abandoned in favor of a different formulation (partially observable, deterministic) where the failure is attributed to an unobservable property of the key.
Q. List various Searches with Partial Observations?

Answer:

1. The key concept required for solving partially observable problems is the belief state, representing the agent's current belief about the possible physical states it might be in, given the sequence of actions and percepts up to that point. We begin with the simplest scenario for studying belief states, which is when the agent has no sensors at all; then we add in partial sensing as well as nondeterministic actions.
2. Searching with no observation
a) When the agent's percepts provide no information at all, we have what is called a sensorless problem or sometimes a conformant problem. At first, one might think the sensorless agent has no hope of solving a problem if it has no idea what state it's in; in fact, sensorless problems are quite often solvable. Moreover, sensorless agents can be surprisingly useful, primarily because they don't rely on sensors working properly.
b) To solve sensorless problems, we search in the space of belief states rather than physical states. Notice that in belief-state space, the problem is fully observable because the agent always knows its own belief state.
problems are a special case in which PERCEPT(s)=s for every state s, while sensorless problems are a special
2. Searching with no observation case in which PERCEPT(s)=null . When observations are partial, it will usually be the case that several states
a) When the agent’s percepts provide no information at all, we have what is called a sensor less problem or could have produced any given percept.
sometimes a conformant problem. At first, one might think the sensorless agent has no hope of solving a problem c) The ACTIONS, STEP-COST, and GOAL-TEST are constructed from the underlying physical problem just as
if it has no idea what state it’s in; in fact, sensorless problems are quite often solvable. Moreover, sensorless for sensorless problems, but the transition model is a bit more complicated. We can think of transitions from one
agents can be surprisingly useful, primarily because they don’t rely on sensors working properly. belief state to the next for a particular action as occurring in three stages:

4. Solving partially observable problems
a) The preceding section showed how to derive the RESULTS function for a nondeterministic belief-state problem from an underlying physical problem and the PERCEPT function. Given such a formulation, the AND–OR search algorithm of Figure 4.11 can be applied directly to derive a solution. Figure 4.16 shows part of the search tree for the local-sensing vacuum world, assuming an initial percept [A, Dirty]. The solution is the conditional plan
[Suck, Right, if Bstate = {6} then Suck else [ ]] .
b) Notice that, because we supplied a belief-state problem to the AND–OR search algorithm, it returned a conditional plan that tests the belief state rather than the actual state. This is as it should be: in a partially observable environment the agent won't be able to execute a solution that requires testing the actual state.
5. An agent for partially observable environments
a) The design of a problem-solving agent for partially observable environments is quite similar to the simple problem-solving agent: the agent formulates a problem, calls a search algorithm (such as AND-OR-GRAPH-SEARCH) to solve it, and executes the solution.
b) There are two main differences. First, the solution to a problem will be a conditional plan rather than a sequence; if the first step is an if–then–else expression, the agent will need to test the condition in the if-part and execute the then-part or the else-part accordingly. Second, the agent will need to maintain its belief state as it performs actions and receives percepts. This process resembles the prediction–observation–update process in Equation (4.5) but is actually simpler because the percept is given by the environment rather than calculated by the agent.
c) In partially observable environments—which include the vast majority of real-world environments—maintaining one's belief state is a core function of any intelligent system. This function goes under various names, including monitoring, filtering and state estimation.
d) If the agent is not to "fall behind," the computation has to happen as fast as percepts are coming in. As the environment becomes more complex, the exact update computation becomes infeasible and the agent will have to compute an approximate belief state.

Q. Define Online Search Algorithm?

Answer:

1. Offline search algorithms compute a complete solution before setting foot in the real world and then execute the solution. In contrast, an online search agent interleaves computation and action: first it takes an action, then it observes the environment and computes the next action. Online search is a good idea in dynamic or semidynamic domains—domains where there is a penalty for sitting around and computing too long.
2. Online search is also helpful in nondeterministic domains because it allows the agent to focus its computational efforts on the contingencies that actually arise rather than those that might happen but probably won't. Online search is a necessary idea for unknown environments, where the agent does not know what states exist or what its actions do. In this state of ignorance, the agent faces an exploration problem and must use its actions as experiments in order to learn enough to make deliberation worthwhile.
Q. What is Online Search Problem?

Answer:

1. An online search problem must be solved by an agent executing actions, rather than by pure computation. We assume a deterministic and fully observable environment, but we stipulate that the agent knows only the following:
a) ACTIONS(s), which returns a list of actions allowed in state s;
b) the step-cost function c(s, a, s')—note that this cannot be used until the agent knows that s' is the outcome; and
c) GOAL-TEST(s).
2. Note in particular that the agent cannot determine RESULT(s, a) except by actually being in s and doing a. The agent might have access to an admissible heuristic function h(s) that estimates the distance from the current state to a goal state. Typically, the agent's objective is to reach a goal state while minimizing cost.
3. The cost is the total path cost of the path that the agent actually travels. It is common to compare this cost with the path cost of the path the agent would follow if it knew the search space in advance—that is, the actual shortest path (or shortest complete exploration). In the language of online algorithms, this is called the competitive ratio; we would like it to be as small as possible.
4. Dead ends are a real difficulty for robot exploration—staircases, ramps, cliffs, one-way streets, and all kinds of natural terrain present opportunities for irreversible actions. To make progress, we simply assume that the state space is safely explorable—that is, some goal state is reachable from every reachable state. State spaces with reversible actions, such as mazes and 8-puzzles, can be viewed as undirected graphs and are clearly safely explorable. Even in safely explorable environments, no bounded competitive ratio can be guaranteed if there are paths of unbounded cost. This is easy to show in environments with irreversible actions, but in fact it remains true for the reversible case as well. For this reason, it is common to describe the performance of online search algorithms in terms of the size of the entire state space rather than just the depth of the shallowest goal.
Q. Explain Online search agents?

Answer:

1. After each action, an online agent receives a percept telling it what state it has reached; from this information, it can augment its map of the environment. The current map is used to decide where to go next. This interleaving of planning and action means that online search algorithms are quite different from the offline search algorithms we have seen previously.
2. An online depth-first search agent is shown in the following diagram. This agent stores its map in a table, RESULT[s, a], that records the state resulting from executing action a in state s. Whenever an action from the current state has not been explored, the agent tries that action. The difficulty comes when the agent has tried all the actions in a state.
3. It is fairly easy to see that the agent will, in the worst case, end up traversing every link in the state space exactly twice. For exploration, this is optimal; for finding a goal, on the other hand, the agent's competitive ratio could be arbitrarily bad if it goes off on a long excursion when there is a goal right next to the initial state. An online variant of iterative deepening solves this problem; for an environment that is a uniform tree, the competitive ratio of such an agent is a small constant.
4. Because of its method of backtracking, ONLINE-DFS-AGENT works only in state spaces where the actions are reversible. There are slightly more complex algorithms that work in general state spaces, but no such algorithm has a bounded competitive ratio.
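A sketch of such an online depth-first agent. The class interface is an illustrative choice; the backtracking step relies on the reversibility assumption discussed above (once all actions at a state have been tried, some recorded action leads back to each predecessor).

class OnlineDFSAgent:
    """Sketch of an online depth-first exploration agent."""

    def __init__(self, actions, goal_test):
        self.actions = actions            # actions(s) -> list of legal actions
        self.goal_test = goal_test
        self.result = {}                  # learned map: (state, action) -> state
        self.untried = {}                 # state -> actions not yet tried
        self.unbacktracked = {}           # state -> stack of predecessor states
        self.s = self.a = None            # previous state and action

    def __call__(self, s_prime):          # called with the state just observed
        if self.goal_test(s_prime):
            return None
        self.untried.setdefault(s_prime, list(self.actions(s_prime)))
        if self.s is not None and self.result.get((self.s, self.a)) != s_prime:
            self.result[(self.s, self.a)] = s_prime       # augment the map
            self.unbacktracked.setdefault(s_prime, []).append(self.s)
        if self.untried[s_prime]:
            self.a = self.untried[s_prime].pop()          # try a fresh action
        elif self.unbacktracked.get(s_prime):
            back = self.unbacktracked[s_prime].pop()      # backtrack one step
            self.a = next(b for b in self.actions(s_prime)
                          if self.result.get((s_prime, b)) == back)
        else:
            return None                                   # exploration finished
        self.s = s_prime
        return self.a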
SEM V: UNIT 3

Q. What is adversarial search?
OR
Q. What is the contribution of AI in games?

Answer:

1. Multiagent environments are environments in which each agent needs to consider the actions of other agents and how they affect its own welfare. The unpredictability of these other agents can introduce contingencies into the agent's problem-solving process. Competitive environments, in which the agents' goals are in conflict, give rise to adversarial search problems—often known as games.
2. Mathematical game theory, a branch of economics, views any multiagent environment as a game, provided that the impact of each agent on the others is "significant," regardless of whether the agents are cooperative or competitive.
3. In AI, the most common games are of a rather specialized kind—what game theorists call deterministic, turn-taking, two-player, zero-sum games of perfect information (such as chess). In our terminology, this means deterministic, fully observable environments in which two agents act alternately and in which the utility values at the end of the game are always equal and opposite. For example, if one player wins a game of chess, the other player necessarily loses. It is this opposition between the agents' utility functions that makes the situation adversarial.
4. The state of a game is easy to represent, and agents are usually restricted to a small number of actions whose outcomes are defined by precise rules. Games are interesting because they are too hard to solve. For example, chess has an average branching factor of about 35, and games often go to 50 moves by each player, so the search tree has about 35^100 or 10^154 nodes (although the search graph has "only" about 10^40 distinct nodes). Games, like the real world, therefore require the ability to make some decision even when calculating the optimal decision is infeasible. Games also penalize inefficiency severely. Whereas an implementation of A∗ search that is half as efficient will simply take twice as long to run to completion, a chess program that is half as efficient in using its available time probably will be beaten into the ground, other things being equal. Game-playing research has therefore spawned a number of interesting ideas on how to make the best possible use of time.
5. Pruning allows us to ignore portions of the search tree that make no difference to the final choice, and heuristic evaluation functions allow us to approximate the true utility of a state without doing a complete search.
6. We first consider games with two players, whom we call MAX and MIN for reasons that will soon become obvious. MAX moves first, and then they take turns moving until the game is over. At the end of the game, points are awarded to the winning player and penalties are given to the loser. A game can be formally defined as a kind of search problem with the following elements:

Q. Write a note on Optimal Decisions in Games?

Answer:

1. In a normal search problem, the optimal solution would be a sequence of actions leading to a goal state—a terminal state that is a win. In adversarial search, MIN has something to say about it. MAX therefore must find a contingent strategy, which specifies MAX's move in the initial state, then MAX's moves in the states resulting from every possible response by MIN, then MAX's moves in the states resulting from every possible response by MIN to those moves, and so on. This is exactly analogous to the AND–OR search algorithm with MAX playing the role of OR and MIN equivalent to AND. Roughly speaking, an optimal strategy leads to outcomes at least as good as any other strategy when one is playing an infallible opponent.
2. Even a simple game like tic-tac-toe is too complex for us to draw the entire game tree on one page, so we will switch to the trivial game in Figure 5.2. The possible moves for MAX at the root node are labeled a1, a2, and a3. The possible replies to a1 for MIN are b1, b2, b3, and so on. This particular game ends after one move each by MAX and MIN. (In game parlance, we say that this tree is one move deep, consisting of two half-moves, each of which is called a ply.) The utilities of the terminal states in this game range from 2 to 14.
3. Given a game tree, the optimal strategy can be determined from the minimax value of each node, which we write as MINIMAX(n). The minimax value of a node is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there to the end of the game. Obviously, the minimax value of a terminal state is just its utility. Furthermore, given a choice, MAX prefers to move to a state of maximum value, whereas MIN prefers a state of minimum value. So we have the following:
MINIMAX(s) =
  UTILITY(s)                                              if TERMINAL-TEST(s)
  max over a in ACTIONS(s) of MINIMAX(RESULT(s, a))       if PLAYER(s) = MAX
  min over a in ACTIONS(s) of MINIMAX(RESULT(s, a))       if PLAYER(s) = MIN

Q. Explain Optimal decisions in multiplayer games?

Answer:

1. Many popular games allow more than two players. Let us examine how to extend the minimax idea to multiplayer games. This is straightforward from the technical viewpoint, but raises some interesting new conceptual issues.
2. First, we need to replace the single value for each node with a vector of values. For example, in a three-player game with players A, B, and C, a vector (vA, vB, vC) is associated with each node. For terminal states, this vector gives the utility of the state from each player's viewpoint. (In two-player, zero-sum games, the two-element vector can be reduced to a single value because the values are always opposite.) The simplest way to implement this is to have the UTILITY function return a vector of utilities.
3. Now we have to consider nonterminal states. Consider the node marked X in the game tree shown in Figure 5.4. In that state, player C chooses what to do. The two choices lead to terminal states with utility vectors (vA = 1, vB = 2, vC = 6) and (vA = 4, vB = 2, vC = 3). Since 6 is bigger than 3, C should choose the first move. This means that if state X is reached, subsequent play will lead to a terminal state with utilities (vA = 1, vB = 2, vC = 6). Hence, the backed-up value of X is this vector. The backed-up value of a node n is always the utility vector of the successor state with the highest value for the player choosing at n.
4. Multiplayer games usually involve alliances, whether formal or informal, among the players. Alliances are made and broken as the game proceeds. How are we to understand such behavior? Are alliances a natural consequence of optimal strategies for each player in a multiplayer game? It turns out that they can be. For example, suppose A and B are in weak positions and C is in a stronger position. Then it is often optimal for both A and B to attack C rather than each other, lest C destroy each of them individually. In this way, collaboration emerges from purely selfish behavior. Of course, as soon as C weakens under the joint onslaught, the alliance loses its value, and either A or B could violate the agreement.
5. If the game is not zero-sum, then collaboration can also occur with just two players. Suppose, for example, that there is a terminal state with utilities (vA = 1000, vB = 1000) and that 1000 is the highest possible utility for each player. Then the optimal strategy is for both players to do everything possible to reach this state—that is, the players will automatically cooperate to achieve a mutually desirable goal.
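A direct transcription of the MINIMAX recursion above into Python; the game interface (actions, result, terminal_test, utility) is an assumed convention for this sketch.

def minimax_decision(state, game):
    """Choose the move for MAX with the highest minimax value."""
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), game))

def max_value(state, game):
    if game.terminal_test(state):
        return game.utility(state)
    return max(min_value(game.result(state, a), game)
               for a in game.actions(state))

def min_value(state, game):
    if game.terminal_test(state):
        return game.utility(state)
    return min(max_value(game.result(state, a), game)
               for a in game.actions(state))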
Q. Write a note on Alpha Beta Pruning?

Answer:

1. The problem with minimax search is that the number of game states it has to examine is exponential in the depth of the tree. Unfortunately, we can't eliminate the exponent, but it turns out we can effectively cut it in half. The trick is that it is possible to compute the correct minimax decision without looking at every node in the game tree. That is, we can borrow the idea of pruning to eliminate large parts of the tree from consideration. The particular technique we examine is called alpha–beta pruning. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
2. Consider again the two-ply game tree. Let's go through the calculation of the optimal decision once more, this time paying careful attention to what we know at each point in the process. The outcome is that we can identify the minimax decision without ever evaluating two of the leaf nodes. Another way to look at this is as a simplification of the formula for MINIMAX. Let the two unevaluated successors of node C in the following diagram have values x and y. Then the value of the root node is given by
MINIMAX(root) = max(min(3, 12, 8), min(2, x, y), min(14, 5, 2))
              = max(3, min(2, x, y), 2)
              = max(3, z, 2)   where z = min(2, x, y) ≤ 2
              = 3 .
3. In other words, the value of the root and hence the minimax decision are independent of the values of the pruned leaves x and y. Alpha–beta pruning can be applied to trees of any depth, and it is often possible to prune entire subtrees rather than just leaves. The general principle is this: consider a node n somewhere in the tree such that Player has a choice of moving to that node.
4. Remember that minimax search is depth-first, so at any one time we just have to consider the nodes along a single path in the tree. Alpha–beta pruning gets its name from the following two parameters that describe bounds on the backed-up values that appear anywhere along the path:
• α = the value of the best (i.e., highest-value) choice we have found so far at any choice point along the path for MAX;
• β = the value of the best (i.e., lowest-value) choice we have found so far at any choice point along the path for MIN.

Q. Write a note on STOCHASTIC GAMES?

Answer:

1. In real life, many unpredictable external events can put us into unforeseen situations. Many games mirror this unpredictability by including a random element, such as the throwing of dice. We call these stochastic games. Backgammon is a typical game that combines luck and skill. Dice are rolled at the beginning of a player's turn to determine the legal moves.
2. Although White knows what his or her own legal moves are, White does not know what Black is going to roll and thus does not know what Black's legal moves will be. That means White cannot construct a standard game tree of the sort we saw in chess and tic-tac-toe. A game tree in backgammon must include chance nodes in addition to MAX and MIN nodes.
3. The next step is to understand how to make correct decisions. Obviously, we still want to pick the move that leads to the best position. However, positions do not have definite minimax values. Instead, we can only calculate the expected value of a position: the average over all possible outcomes of the chance nodes.
4. This leads us to generalize the minimax value for deterministic games to an expectiminimax value for games with chance nodes. Terminal nodes and MAX and MIN nodes (for which the dice roll is known) work exactly the same way as before. For chance nodes we compute the expected value, which is the sum of the value over all outcomes, weighted by the probability of each chance action:
EXPECTIMINIMAX(s) =
  UTILITY(s)                                                  if TERMINAL-TEST(s)
  max over a of EXPECTIMINIMAX(RESULT(s, a))                  if PLAYER(s) = MAX
  min over a of EXPECTIMINIMAX(RESULT(s, a))                  if PLAYER(s) = MIN
  sum over r of P(r) × EXPECTIMINIMAX(RESULT(s, r))           if PLAYER(s) = CHANCE,
where r represents a possible dice roll (or other chance event).
5. The branches leading from each chance node denote the possible dice rolls; each branch is labeled with the roll and its probability. There are 36 ways to roll two dice, each equally likely; but because a 6–5 is the same as a 5–6, there are only 21 distinct rolls. The six doubles (1–1 through 6–6) each have a probability of 1/36, so we say P(1–1) = 1/36. The other 15 distinct rolls each have a 1/18 probability.

Q. Write a note on State of Art Game Programs?

Answer:

1. Chess:
IBM's DEEP BLUE chess program, now retired, is well known for defeating world champion Garry Kasparov in a widely publicized exhibition match. Deep Blue ran on a parallel computer with 30 IBM RS/6000 processors doing alpha–beta search. The unique part was a configuration of 480 custom VLSI chess processors that performed move generation and move ordering for the last few levels of the tree, and evaluated the leaf nodes. Deep Blue searched up to 30 billion positions per move, reaching depth 14 routinely. HYDRA can be seen as the successor to DEEP BLUE. HYDRA runs on a 64-processor cluster with 1 gigabyte per processor and with custom hardware in the form of FPGA (Field Programmable Gate Array) chips. HYDRA reaches 200 million evaluations per second, about the same as Deep Blue.
2. Checkers:
Jonathan Schaeffer and colleagues developed CHINOOK, which runs on regular PCs and uses alpha–beta search. Chinook defeated the long-running human champion in an abbreviated match in 1990, and since 2007 CHINOOK has been able to play perfectly by using alpha–beta search combined with a database of 39 trillion endgame positions.
3. Othello:
Othello, also called Reversi, is probably more popular as a computer game than as a board game. It has a smaller search space than chess, usually 5 to 15 legal moves, but evaluation expertise had to be developed from scratch. In 1997, the LOGISTELLO program (Buro, 2002) defeated the human world champion, Takeshi Murakami, by six games to none. It is generally acknowledged that humans are no match for computers at Othello.
4. Backgammon:
The inclusion of uncertainty from dice rolls makes deep search an expensive luxury. Most work on backgammon has gone into improving the evaluation function. Gerry Tesauro (1992) combined reinforcement learning with neural networks to develop a remarkably accurate evaluator that is used with a search to depth 2 or 3. After playing more than a million training games against itself, Tesauro's program, TD-GAMMON, is competitive with top human players. The program's opinions on the opening moves of the game have in some cases radically altered the received wisdom.
5. Bridge:
Bridge is a card game of imperfect information: a player's cards are hidden from the other players. Bridge is also a multiplayer game with four players instead of two, although the players are paired into two teams. As in Section 5.6, optimal play in partially observable games like bridge can include elements of information gathering, communication, and careful weighing of probabilities. Many of these techniques are used in the Bridge Baron program (Smith et al., 1998), which won the 1997 computer bridge championship. While it does not play optimally, Bridge Baron is one of the few successful game-playing systems to use complex, hierarchical plans.
6. Scrabble:
Most people think the hard part about Scrabble is coming up with good words, but given the official dictionary, it turns out to be rather easy to program a move generator to find the highest-scoring move (Gordon, 1994). That doesn't mean the game is solved, however: merely taking the top-scoring move each turn results in a good but not expert player. The problem is that Scrabble is both partially observable and stochastic: you don't know what letters the other player has or what letters you will draw next. So playing Scrabble well combines the difficulties of backgammon and bridge.
Page 4 of 11 Page 5 of 11 Page 6 of 11
YouTube - Abhay More | Telegram - abhay_more YouTube - Abhay More | Telegram - abhay_more YouTube - Abhay More | Telegram - abhay_more
607A, 6th floor, Ecstasy business park, city of joy, JSD road, mulund (W) | 8591065589/022-25600622 607A, 6th floor, Ecstasy business park, city of joy, JSD road, mulund (W) | 8591065589/022-25600622 607A, 6th floor, Ecstasy business park, city of joy, JSD road, mulund (W) | 8591065589/022-25600622

TRAINING -> CERTIFICATION -> PLACEMENT BSC IT : SEM – V AI: U 3 TRAINING -> CERTIFICATION -> PLACEMENT BSC IT : SEM – V AI: U 3 TRAINING -> CERTIFICATION -> PLACEMENT BSC IT : SEM – V AI: U 3

dictionary, it turns out to be rather easy to program a move generator to find the highest- scoring 4. Each time the agent program is called, it does three things. A 4×4 grid of rooms. The agent always starts in the square labeled [1,1], facing to
move (Gordon, 1994). That doesn’t mean the game is solved, however: merely taking the top- the right. The locations of the gold and the wumpus are chosen randomly, with a
scoring move each turn results in a good but not expert player. The problem is that Scrabble is both uniform distribution, from the squares other than the start square. In addition, each
partially observable and stochastic: you don’t know what letters the other player has or what letters square other than the start can be a pit, with probability 0.2.
you will draw next. So playing Scrabble well combines the difficulties of backgammon and bridge. c) Actuators:
The agent can move Forward, TurnLeft by 90◦, or TurnRight by 90◦. The agent dies a
Q. Define Logical Agents?

Answer:

1. Humans, it seems, know things; and what they know helps them do things. These are not empty
statements. They make strong claims about how the intelligence of humans is achieved—not by
purely reflex mechanisms but by processes of reasoning that operate on internal representations
of knowledge. In AI, this approach to intelligence is embodied in knowledge-based agents.
2. The problem-solving agents know things, but only in a very limited, inflexible sense. For
example, the transition model for the 8-puzzle—knowledge of what the actions do—is hidden
inside the domain-specific code of the RESULT function.
3. The idea of representing states as assignments of values to variables is a step in the right
direction, enabling some parts of the agent to work in a domain-independent way and allowing
for more efficient algorithms. We develop logic as a general class of representations to support
knowledge-based agents.
4. Such agents can combine and recombine information to suit myriad purposes. Often, this
process can be quite far removed from the needs of the moment—as when a mathematician
proves a theorem or an astronomer calculates the earth's life expectancy. Knowledge-based
agents can accept new tasks in the form of explicitly described goals; they can achieve
competence quickly by being told or learning new knowledge about the environment; and they
can adapt to changes in the environment by updating the relevant knowledge.

Q. Write a note on Knowledge based Agent?

Answer:

1. The central component of a knowledge-based agent is its knowledge base, or KB. A knowledge
base is a set of sentences. (Here "sentence" is used as a technical term. It is related but not
identical to the sentences of English and other natural languages.) Each sentence is expressed
in a language called a knowledge representation language and represents some assertion about
the world. Sometimes we dignify a sentence with the name axiom, when the sentence is taken
as given without being derived from other sentences.
2. There must be a way to add new sentences to the knowledge base and a way to query what is
known. The standard names for these operations are TELL and ASK, respectively. Both
operations may involve inference—that is, deriving new sentences from old. Inference must
obey the requirement that when one ASKs a question of the knowledge base, the answer should
follow from what has been told (or TELLed) to the knowledge base previously.
3. Following diagram shows the outline of a knowledge-based agent program. Like all our agents, it
takes a percept as input and returns an action. The agent maintains a knowledge base, KB,
which may initially contain some background knowledge.
4. Each time the agent program is called, it does three things.
● First, it TELLs the knowledge base what it perceives.
● Second, it ASKs the knowledge base what action it should perform. In the process of
answering this query, extensive reasoning may be done about the current state of the
world, about the outcomes of possible action sequences, and so on.
● Third, the agent program TELLs the knowledge base which action was chosen, and the
agent executes the action.
The details of the representation language are hidden inside three functions that implement
the interface between the sensors and actuators on one side and the core representation
and reasoning system on the other:
● MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent perceived
the given percept at the given time.
● MAKE-ACTION-QUERY constructs a sentence that asks what action should be done at
the current time.
● Finally, MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen
action was executed.
The details of the inference mechanisms are hidden inside TELL and ASK.
5. A knowledge-based agent can be built simply by TELLing it what it needs to know. Starting with an
empty knowledge base, the agent designer can TELL sentences one by one until the agent knows
how to operate in its environment. This is called the declarative approach to system building. In
contrast, the procedural approach encodes desired behaviors directly as program code. We now
understand that a successful agent often combines both declarative and procedural elements in its
design, and that declarative knowledge can often be compiled into more efficient procedural code.
We can also provide a knowledge-based agent with mechanisms that allow it to learn for itself.
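The three-step loop in point 4 translates directly into code. Below is a minimal sketch of that loop; KnowledgeBase and the three MAKE-* helpers here are simplified stand-ins written for this example, not a real inference engine.

class KnowledgeBase:
    # Toy KB: it only stores sentences; a real ASK would perform inference.
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        return "NoOp"    # placeholder answer; real KBs reason here

def make_percept_sentence(percept, t):
    return f"Percept({percept}, t={t})"

def make_action_query(t):
    return f"BestAction(t={t})?"

def make_action_sentence(action, t):
    return f"Action({action}, t={t})"

kb, t = KnowledgeBase(), 0

def kb_agent(percept):
    global t
    kb.tell(make_percept_sentence(percept, t))   # 1. TELL what it perceives
    action = kb.ask(make_action_query(t))        # 2. ASK what to do
    kb.tell(make_action_sentence(action, t))     # 3. TELL which action was chosen
    t += 1
    return action

print(kb_agent(["Stench", None, None, None, None]))   # -> NoOp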
Q. Explain wumpus world problem?

Answer:

1. The wumpus world is a cave consisting of rooms connected by passageways. Lurking
somewhere in the cave is the terrible wumpus, a beast that eats anyone who enters its room. The
wumpus can be shot by an agent, but the agent has only one arrow. Some rooms contain
bottomless pits that will trap anyone who wanders into these rooms (except for the wumpus,
which is too big to fall in). The only mitigating feature of this bleak environment is the possibility
of finding a heap of gold. Although the wumpus world is rather tame by modern computer game
standards, it illustrates some important points about intelligence.
2. The precise definition of the task environment is given by the PEAS description:
a) Performance measure:
+1000 for climbing out of the cave with the gold, –1000 for falling into a pit or being
eaten by the wumpus, –1 for each action taken and –10 for using up the arrow. The
game ends either when the agent dies or when the agent climbs out of the cave.
b) Environment:
A 4×4 grid of rooms. The agent always starts in the square labeled [1,1], facing to
the right. The locations of the gold and the wumpus are chosen randomly, with a
uniform distribution, from the squares other than the start square. In addition, each
square other than the start can be a pit, with probability 0.2.
c) Actuators:
The agent can move Forward, TurnLeft by 90°, or TurnRight by 90°. The agent dies a
miserable death if it enters a square containing a pit or a live wumpus. (It is safe, albeit smelly,
to enter a square with a dead wumpus.) If an agent tries to move forward and bumps into a
wall, then the agent does not move. The action Grab can be used to pick up the gold if it is in
the same square as the agent. The action Shoot can be used to fire an arrow in a straight line
in the direction the agent is facing. The arrow continues until it either hits (and hence kills) the
wumpus or hits a wall. The agent has only one arrow, so only the first Shoot action has any
effect. Finally, the action Climb can be used to climb out of the cave, but only from square [1,1].
d) Sensors:
The agent has five sensors, each of which gives a single bit of information:
● In the square containing the wumpus and in the directly (not diagonally) adjacent squares,
the agent will perceive a Stench.
● In the squares directly adjacent to a pit, the agent will perceive a Breeze.
● In the square where the gold is, the agent will perceive a Glitter.
● When an agent walks into a wall, it will perceive a Bump.
● When the wumpus is killed, it emits a woeful Scream that can be perceived anywhere in the
cave.
3. Steps:
a) The agent's initial knowledge base contains the rules of the environment, as described
previously; in particular, it knows that it is in [1,1] and that [1,1] is a safe square; we denote that
with an "A" and "OK," respectively, in square [1,1].
b) The first percept is [None, None, None, None, None], from which the agent can conclude that its
neighboring squares, [1,2] and [2,1], are free of dangers—they are OK.
c) A cautious agent will move only into a square that it knows to be OK. Let us suppose the agent
decides to move forward to [2,1]. The agent perceives a breeze (denoted by "B") in [2,1], so
there must be a pit in a neighboring square. The pit cannot be in [1,1], by the rules of the game,
so there must be a pit in [2,2] or [3,1] or both. The notation "P?" indicates a possible pit in those
squares. At this point, there is only one known square that is OK and that has not yet been
visited. So the prudent agent will turn around, go back to [1,1], and then proceed to [1,2].
d) The agent perceives a stench in [1,2], resulting in a new state of knowledge. The stench in [1,2]
means that there must be a wumpus nearby. But the wumpus cannot be in [1,1], by the rules of
the game, and it cannot be in [2,2] (or the agent would have detected a stench when it was in
[2,1]). Therefore, the agent can infer that the wumpus is in [1,3]. The notation W! indicates this
inference. Moreover, the lack of a breeze in [1,2] implies that there is no pit in [2,2]. Yet the
agent has already inferred that there must be a pit in either [2,2] or [3,1], so this means it must
be in [3,1]. This is a fairly difficult inference, because it combines knowledge gained at different
times in different places and relies on the lack of a percept to make one crucial step.
e) The agent has now proved to itself that there is neither a pit nor a wumpus in [2,2], so it is OK to
move there.
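The pit inference in step d) can be verified mechanically. The sketch below (squares, adjacency and names are hard-coded for this one example, our own encoding) enumerates every pit placement for the two suspect squares and keeps only the worlds consistent with the percepts: Breeze in [2,1] and no Breeze in [1,2].

from itertools import product

frontier = [(2, 2), (3, 1)]
adjacent = {(2, 1): [(1, 1), (2, 2), (3, 1)],
            (1, 2): [(1, 1), (2, 2), (1, 3)]}

def breezy(square, pits):
    # A square is breezy exactly when some neighbor holds a pit.
    return any(n in pits for n in adjacent[square])

consistent = []
for has_pit in product([False, True], repeat=len(frontier)):
    pits = {sq for sq, p in zip(frontier, has_pit) if p}
    if breezy((2, 1), pits) and not breezy((1, 2), pits):
        consistent.append(pits)

# Every consistent world has a pit in [3,1] and none in [2,2]:
print(consistent)                              # -> [{(3, 1)}]
print(all((3, 1) in w for w in consistent))    # -> True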
Q. Write a note on Logic? State some of the desirable properties of Logic?

Answer:

1. Knowledge bases consist of sentences. These sentences are expressed according to the syntax
of the representation language, which specifies all the sentences that are well formed. The notion
of syntax is clear enough in ordinary arithmetic: "x + y = 4" is a well-formed sentence, whereas
"x4y+ =" is not.
2. A logic must also define the semantics or meaning of sentences. The semantics defines the truth
of each sentence with respect to each possible world. For example, the semantics for arithmetic
specifies that the sentence "x + y = 4" is true in a world where x is 2 and y is 2, but false in a world
where x is 1 and y is 1. In standard logics, every sentence must be either true or false in each
possible world—there is no "in between."
3. When we need to be precise, we use the term model in place of "possible world." Whereas
possible worlds might be thought of as (potentially) real environments that the agent might or
might not be in, models are mathematical abstractions, each of which simply fixes the truth or
falsehood of every relevant sentence.
4. Now that we have a notion of truth, we are ready to talk about logical reasoning. This involves
the relation of logical entailment between sentences—the idea that a sentence follows logically
from another sentence. In mathematical notation, we write α |= β to mean that the sentence α
entails the sentence β. The formal definition of entailment is this: α |= β if and only if, in every
model in which α is true, β is also true. Using the notation just introduced, we can write
α |= β if and only if M(α) ⊆ M(β).
5. An inference algorithm that derives only entailed sentences is called sound or truth-preserving.
Soundness is a highly desirable property. An unsound inference procedure essentially makes
things up as it goes along—it announces the discovery of nonexistent needles. It is easy to see
that model checking, when it is applicable, is a sound procedure. The property of completeness is
also desirable: an inference algorithm is complete if it can derive any sentence that is entailed.
For real haystacks, which are finite in extent, it seems obvious that a systematic examination can
always decide whether the needle is in the haystack. For many knowledge bases, however, the
haystack of consequences is infinite, and completeness becomes an important issue.
Fortunately, there are complete inference procedures for logics that are sufficiently expressive to
handle many knowledge bases.
6. The final issue to consider is grounding—the connection between logical reasoning processes
and the real environment in which the agent exists. In particular, how do we know that KB is true
in the real world? (After all, KB is just "syntax" inside the agent's head.) This is a philosophical
question. A simple answer is that the agent's sensors create the connection. For example, our
wumpus-world agent has a smell sensor. The agent program creates a suitable sentence
whenever there is a smell. Then, whenever that sentence is in the knowledge base, it is true in
the real world.
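Point 4's definition suggests a direct entailment algorithm: enumerate every model and check that wherever the KB is true, α is true as well. The sketch below does this for a two-symbol propositional example; representing sentences as Python functions over a model (a dict of symbol assignments) is our own simplification.

from itertools import product

symbols = ["P", "Q"]
KB    = lambda m: m["P"] and (not m["P"] or m["Q"])   # P and (P => Q)
alpha = lambda m: m["Q"]

def entails(kb, sentence, symbols):
    # Model checking: sound and, for finitely many symbols, complete.
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not sentence(model):
            return False      # a model of KB in which alpha is false
    return True               # M(KB) is a subset of M(alpha)

print(entails(KB, alpha, symbols))   # -> True: {P, P => Q} |= Q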
Q. What is the purpose of Syntax in Propositional Logic?

Answer:

1. The syntax of propositional logic defines the allowable sentences. The atomic sentences consist
of a single proposition symbol. Each such symbol stands for a proposition that can be true or
false. We use symbols that start with an uppercase letter and may contain other letters or
subscripts, for example: P, Q, R, W1,3 and North. The names are arbitrary but are often chosen
to have some mnemonic value—we use W1,3 to stand for the proposition that the wumpus is in
[1,3]. Remember that symbols such as W1,3 are atomic, i.e., W, 1, and 3 are not meaningful
parts of the symbol. There are two proposition symbols with fixed meanings: True is the always-
true proposition and False is the always-false proposition.
2. Complex sentences are constructed from simpler sentences, using parentheses and logical
connectives. There are five connectives in common use: ¬ (not), ∧ (and), ∨ (or), ⇒ (implies),
and ⇔ (if and only if).
3. The BNF grammar by itself is ambiguous; a sentence with several operators can be parsed by
the grammar in multiple ways. To eliminate the ambiguity we define a precedence for each
operator. The "not" operator (¬) has the highest precedence, which means that in the sentence
¬A ∧ B the ¬ binds most tightly, giving us the equivalent of (¬A) ∧ B rather than ¬(A ∧ B).
(The notation for ordinary arithmetic is the same: −2 + 4 is 2, not −6.) When in doubt, use
parentheses to make sure of the right interpretation. Square brackets mean the same thing as
parentheses; the choice of square brackets or parentheses is solely to make it easier for a
human to read a sentence.
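The precedence rule can be checked against Python's own boolean operators, which happen to follow the same convention (not binds tighter than and); this small demonstration is ours, not part of the syllabus text.

from itertools import product

for A, B in product([True, False], repeat=2):
    implicit = not A and B      # how the bare expression actually parses
    tight    = (not A) and B    # the (¬A) ∧ B reading
    loose    = not (A and B)    # the ¬(A ∧ B) reading
    print(A, B, implicit == tight, implicit == loose)
# The third column is always True; the two readings disagree exactly
# when B is false, which is why parentheses matter when in doubt.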


Q. What are Models for first-order logic?
Q. What is First Order Logic?
Q. What is First Order Predicate Logic (FOPL)?

Answer:

1. The models of a logical language are the formal structures that constitute the possible
worlds under consideration. Each model links the vocabulary of the logical sentences to
elements of the possible world, so that the truth of any sentence can be determined.
Thus, models for propositional logic link proposition symbols to predefined truth values.
Models for first-order logic are much more interesting. First, they have objects in them!
The domain of a model is the set of objects or domain elements it contains. The domain
is required to be nonempty—every possible world must contain at least one object.
Mathematically speaking, it doesn't matter what these objects are—all that matters is
how many there are in each particular model.
2. Figure shows a model with five objects:
● Richard the Lionheart, King of England from 1189 to 1199;
● his younger brother, the evil King John, who ruled from 1199 to 1215;
● the left legs of Richard and John;
● and a crown.
3. The objects in the model may be related in various ways. In the figure, Richard and John
are brothers. Formally speaking, a relation is just the set of tuples of objects that
are related. (A tuple is a collection of objects arranged in a fixed order and is written with
angle brackets surrounding the objects.) Thus, the brotherhood relation in this model is
the set { ⟨Richard the Lionheart, King John⟩, ⟨King John, Richard the Lionheart⟩ }. The crown
is on King John's head, so the "on head" relation contains just one tuple,
⟨the crown, King John⟩.
● The "brother" and "on head" relations are binary relations—that is, they
relate pairs of objects.
● The model also contains unary relations, or properties: the "person" property is
true of both Richard and John; the "king" property is true only of John
(presumably because Richard is dead at this point); and
● the "crown" property is true only of the crown.
4. Certain kinds of relationships are best considered as functions, in that a given object must
be related to exactly one object in this way. For example, each person has one left leg, so
the model has a unary "left leg" function that includes the following mappings:
(Richard the Lionheart) → Richard's left leg
(King John) → John's left leg.
5. So far, we have described the elements that populate models for first-order logic. The
other essential part of a model is the link between those elements and the vocabulary of
the logical sentences.
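In code, such a model is just data. Below is one possible encoding of the five-object model and its intended interpretation; the dictionary layout and the holds helper are our own illustration, not a standard API.

objects = {"Richard", "John", "RichardsLeftLeg", "JohnsLeftLeg", "TheCrown"}

interpretation = {
    # predicate symbols -> relations (sets of tuples over the objects)
    "Brother": {("Richard", "John"), ("John", "Richard")},
    "OnHead":  {("TheCrown", "John")},
    "Person":  {("Richard",), ("John",)},
    "King":    {("John",)},
    "Crown":   {("TheCrown",)},
    # function symbols -> functions (dicts from argument tuples to objects)
    "LeftLeg": {("Richard",): "RichardsLeftLeg", ("John",): "JohnsLeftLeg"},
}

def holds(predicate, *args):
    # Truth of an atomic sentence: do the referents stand in the relation?
    return tuple(args) in interpretation[predicate]

print(holds("Brother", "Richard", "John"))      # -> True
print(holds("OnHead", "TheCrown", "Richard"))   # -> False
print(interpretation["LeftLeg"][("John",)])     # -> JohnsLeftLeg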
Q. Write a note on Symbols and interpretations in First Order Logic?

Answer:

1. The basic syntactic elements of first-order logic are the symbols that stand for objects,
relations, and functions. The symbols, therefore, come in three kinds: constant
symbols, which stand for objects; predicate symbols, which stand for relations; and
function symbols, which stand for functions. We adopt the convention that these
symbols will begin with uppercase letters. For example, we might use the constant
symbols Richard and John; the predicate symbols Brother, OnHead, Person, King,
and Crown; and the function symbol LeftLeg.
2. As in propositional logic, every model must provide the information required to
determine if any given sentence is true or false. Thus, in addition to its objects,
relations, and functions, each model includes an interpretation that specifies exactly
which objects, relations and functions are referred to by the constant, predicate, and
function symbols. One possible interpretation for our example—which a logician would
call the intended interpretation—is as follows:
● Richard refers to Richard the Lionheart and John refers to the evil King John.
● Brother refers to the brotherhood relation, that is, the set of tuples of objects given in
Equation (8.1); OnHead refers to the "on head" relation that holds between the
crown and King John; Person, King, and Crown refer to the sets of objects that are
persons, kings, and crowns.
● LeftLeg refers to the "left leg" function, that is, the mapping given in Equation.
3. In summary, a model in first-order logic consists of a set of objects and an interpretation
that maps constant symbols to objects, predicate symbols to relations on those objects,
and function symbols to functions on those objects. Just as with propositional logic,
entailment, validity, and so on are defined in terms of all possible models.
4. Terms:
A term is a logical expression that refers to an object. Constant symbols are therefore
terms, but it is not always convenient to have a distinct symbol to name every object. For
example, in English we might use the expression "King John's left leg" rather than giving
a name to his leg. This is what function symbols are for: instead of using a constant
symbol, we use
LeftLeg(John).
In the general case, a complex term is formed by a function symbol followed by a
parenthesized list of terms as arguments to the function symbol. It is important to
remember that a complex term is just a complicated kind of name. It is not a "subroutine
call" that "returns a value." There is no LeftLeg subroutine that takes a person as input
and returns a leg.
5. Atomic sentences:
Now that we have both terms for referring to objects and predicate symbols for referring to
relations, we can put them together to make atomic sentences that state facts. An atomic sentence
(or atom for short) is formed from a predicate symbol optionally followed by a parenthesized list of
terms, such as
Brother(Richard, John).
This states, under the intended interpretation given earlier, that Richard the Lionheart is the brother
of King John. Atomic sentences can have complex terms as arguments. Thus,
Married(Father(Richard), Mother(John))
states that Richard the Lionheart's father is married to King John's mother (again, under a
suitable interpretation). An atomic sentence is true in a given model if the relation referred
to by the predicate symbol holds among the objects referred to by the arguments. We can
use logical connectives to construct more complex sentences, with the same syntax and
semantics as in propositional calculus.
6. Quantifiers:
Once we have a logic that allows objects, it is only natural to want to express properties
of entire collections of objects, instead of enumerating the objects by name. Quantifiers
let us do this.
a) Universal quantification (∀)
● ∀ is usually pronounced "For all . . .". (Remember that the upside-down A stands
for "all.") Thus, the sentence ∀x King(x) ⇒ Person(x) says, "For all x, if x is a king,
then x is a person." The symbol x is called a variable. By convention, variables are
lowercase letters. A variable is a term all by itself, and as such can also serve as the
argument of a function—for example, LeftLeg(x). A term with no variables is called a
ground term.
● Intuitively, the sentence ∀x P, where P is any logical expression, says that P is
true for every object x. More precisely, ∀x P is true in a given model if P is true in
all possible extended interpretations constructed from the interpretation given in
the model, where each extended interpretation specifies a domain element to
which x refers.
● The universally quantified sentence ∀x King(x) ⇒ Person(x) is true in the
original model if the sentence King(x) ⇒ Person(x) is true under each possible
extended interpretation.
b) Existential quantification (∃)
● Universal quantification makes statements about every object. Similarly, we can
make a statement about some object in the universe without naming it, by using
an existential quantifier.
● To say, for example, that King John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John).
∃x is pronounced "There exists an x such that . . ." or "For some x . . .".
● Intuitively, the sentence ∃x P says that P is true for at least one object x. More precisely,
∃x P is true in a given model if P is true in at least one extended interpretation that assigns x
to a domain element.
7. Equality:
First-order logic includes one more way to make atomic sentences, other than using a
predicate and terms as described earlier. We can use the equality symbol to signify
that two terms refer to the same object. For example,
Father(John) = Henry
says that the object referred to by Father(John) and the object referred to by Henry are the same.
Because an interpretation fixes the referent of any term, determining the truth of an equality
sentence is simply a matter of seeing that the referents of the two terms are the same object.
Q. Write a short note on KNOWLEDGE ENGINEERING IN FIRST-ORDER LOGIC

Answer:
1. The general process of knowledge-base construction is called knowledge engineering.
A knowledge engineer is someone who investigates a particular domain, learns what
concepts are important in that domain, and creates a formal representation of the
objects and relations in the domain.
2. Knowledge engineering projects vary widely in content, scope, and difficulty, but all
such projects include the following steps:
a) Identify the task:
The knowledge engineer must delineate the range of questions that the knowledge
base will support and the kinds of facts that will be available for each specific
problem instance. This step is analogous to the PEAS process for designing agents.
b) Assemble the relevant knowledge:
The knowledge engineer might already be an expert in the domain, or might need to
work with real experts to extract what they know—a process called knowledge
acquisition. At this stage, the knowledge is not represented formally. The idea is to
understand the scope of the knowledge base, as determined by the task, and to
understand how the domain actually works.
c) Decide on a vocabulary of predicates, functions, and constants:
That is, translate the important domain-level concepts into logic-level names. This
involves many questions of knowledge-engineering style. Like programming style,
this can have a significant impact on the eventual success of the project.
d) Encode general knowledge about the domain:
The knowledge engineer writes down the axioms for all the vocabulary terms. This
pins down (to the extent possible) the meaning of the terms, enabling the expert to
check the content. Often, this step reveals misconceptions or gaps in the
vocabulary that must be fixed by returning to step 3 and iterating through the
process.
e) Encode a description of the specific problem instance:
If the ontology is well thought out, this step will be easy. It will involve writing
simple atomic sentences about instances of concepts that are already part of the
ontology.
f) Pose queries to the inference procedure and get answers:
This is where the reward is: we can let the inference procedure operate on the axioms
and problem-specific facts to derive the facts we are interested in knowing. Thus, we
avoid the need for writing an application-specific solution algorithm.
g) Debug the knowledge base:
The answers to queries will seldom be correct on the first try. More precisely, the
answers will be correct for the knowledge base as written, assuming that the
inference procedure is sound, but they will not be the ones that the user is expecting.

Q. Write a note on Unification and Lifting?

Answer:
1. Unification:
a) Lifted inference rules require finding substitutions that make different logical
expressions look identical. This process is called unification and is a key component
of all first-order inference algorithms. The UNIFY algorithm takes two sentences and returns
a unifier for them if one exists:
UNIFY(p, q) = θ where SUBST(θ, p) = SUBST(θ, q).
b) Let us look at some examples of how UNIFY should behave. Suppose we have a
query AskVars(Knows(John, x)): whom does John know?
Answers to this query can be found by finding all sentences in the knowledge base
that unify with Knows(John, x). Here are the results of unification with four different
sentences that might be in the knowledge base:
UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}
UNIFY(Knows(John, x), Knows(y, Bill)) = {x/Bill, y/John}
UNIFY(Knows(John, x), Knows(y, Mother(y))) = {y/John, x/Mother(John)}
UNIFY(Knows(John, x), Knows(x, Elizabeth)) = fail.
c) The last unification fails because x cannot take on the values John and Elizabeth at
the same time. Now, remember that Knows(x, Elizabeth) means "Everyone knows
Elizabeth," so we should be able to infer that John knows Elizabeth. The problem
arises only because the two sentences happen to use the same variable name, x.
2. Lifting:
a) The inference that John is evil—that is, that {x/John} solves the query Evil(x)—works
like this: to use the rule that greedy kings are evil, find some x such that x is a king
and x is greedy, and then infer that this x is evil. More generally, if there is some
substitution θ that makes each of the conjuncts of the premise of the implication
identical to sentences already in the knowledge base, then we can assert the
conclusion of the implication, after applying θ.
b) In this case, the substitution θ = {x/John} achieves that aim. We can actually make the
inference step do even more work. Suppose that instead of knowing Greedy(John),
we know that everyone is greedy:
∀y Greedy(y).
c) Then we would still like to be able to conclude that Evil(John), because we know that
John is a king (given) and John is greedy (because everyone is greedy). What we
need for this to work is to find a substitution both for the variables in the implication
sentence and for the variables in the sentences that are in the knowledge base. In
this case, applying the substitution {x/John, y/John} to the implication premises
King(x) and Greedy(x) and the knowledge-base sentences King(John) and Greedy(y)
will make them identical. Thus, we can infer the conclusion of the implication.
d) This inference process can be captured as a single inference rule that we call
Generalized Modus Ponens: For atomic sentences pi, pi', and q, where there is a
substitution θ such that SUBST(θ, pi') = SUBST(θ, pi), for all i:
    p1', p2', . . . , pn',  (p1 ∧ p2 ∧ . . . ∧ pn ⇒ q)
    -----------------------------------------------
                      SUBST(θ, q)
e) Generalized Modus Ponens is a lifted version of Modus Ponens—it raises Modus
Ponens from ground (variable-free) propositional logic to first-order logic.
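A working unifier is short enough to sketch in full. The term encoding below (lowercase strings as variables, capitalized strings as constants, tuples as compound terms) is our own convention; the behavior mirrors the UNIFY examples above. Refinements such as a full occur check are omitted.

def is_var(t):
    return isinstance(t, str) and t[:1].islower()

def substitute(t, theta):
    # Apply the substitution, following variable bindings transitively.
    while is_var(t) and t in theta:
        t = theta[t]
    if isinstance(t, tuple):
        return tuple(substitute(a, theta) for a in t)
    return t

def unify(x, y, theta=None):
    if theta is None:
        theta = {}
    x, y = substitute(x, theta), substitute(y, theta)
    if x == y:
        return theta
    if is_var(x):
        theta[x] = y; return theta
    if is_var(y):
        theta[y] = x; return theta
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None                  # e.g. two different constants: fail

print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane")))    # {'x': 'Jane'}
print(unify(("Knows", "John", "x"), ("Knows", "y", "Bill")))       # {'y': 'John', 'x': 'Bill'}
print(unify(("Knows", "John", "x"), ("Knows", "x", "Elizabeth")))  # None (fail)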

Q. Write a note on Forward Chaining?

Answer:
1. A forward-chaining algorithm for propositional definite clauses rests on a simple idea: start
with the atomic sentences in the knowledge base and apply Modus Ponens in the
forward direction, adding new atomic sentences, until no further inferences can be made.
Here, we explain how the algorithm is applied to first-order definite clauses. Definite
clauses such as Situation ⇒ Response are especially useful for systems that make
inferences in response to newly arrived information. Many systems can be defined this
way, and forward chaining can be implemented very efficiently.
2. Consider the following problem:
The law says that it is a crime for an American to sell weapons to hostile nations. The
country Nono, an enemy of America, has some missiles, and all of its missiles were sold
to it by Colonel West, who is American.
3. We will prove that West is a criminal. First, we will represent these facts as first-order
definite clauses.
". . . it is a crime for an American to sell weapons to hostile nations":
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x).
"Nono . . . has some missiles." The sentence ∃x Owns(Nono, x) ∧ Missile(x) is transformed
into two definite clauses by Existential Instantiation, introducing a new constant M1:
Owns(Nono, M1)
Missile(M1)
"All of its missiles were sold to it by Colonel West":
Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono).
We will also need to know that missiles are weapons:
Missile(x) ⇒ Weapon(x)
and we must know that an enemy of America counts as "hostile":
Enemy(x, America) ⇒ Hostile(x).
"West, who is American . . .":
American(West).
"The country Nono, an enemy of America . . .":
Enemy(Nono, America).
4. This knowledge base contains no function symbols and is therefore an instance of the
class of Datalog knowledge bases. Datalog is a language that is restricted to first-order
definite clauses with no function symbols. Datalog gets its name because it can represent
the type of statements typically made in relational databases.
Q. Compare Forward and Backward Chaining?

Answer:

Sr. No  Parameter        Forward Chaining                   Backward Chaining
1       Strategy         Data driven                        Goal driven
2       Initial state    New data                           Possible conclusion
3       Process          Complete for definite clauses      Efficient, but can lack completeness
4       Aim              Conclusion(s)                      The data necessary to support the goal
5       Approach         Opportunistic                      Conservative
6       Practical        NA                                 Number of possible answers is reasonable
7       Appropriate for  Planning, monitoring, control      Diagnostics, debugging
8       Reasoning        Bottom-up reasoning                Top-down reasoning
9       Type of search   BFS                                DFS

Q. Write a short note on Backward Chaining?

Answer:
1. The second major family of logical inference algorithms uses backward chaining. These
algorithms work backward from the goal, chaining through rules to find known
facts that support the proof. Backward chaining has some disadvantages compared
with forward chaining.
2. Figure shows a backward-chaining algorithm for definite clauses. FOL-BC-ASK(KB,
goal) will be proved if the knowledge base contains a clause of the form lhs ⇒ goal,
where lhs (left-hand side) is a list of conjuncts. An atomic fact like American(West) is
considered as a clause whose lhs is the empty list. Now a query that contains variables
might be proved in multiple ways. For example, the query Person(x) could be proved
with the substitution {x/John} as well as with {x/Richard}. So we implement FOL-BC-ASK
as a generator—a function that returns multiple times, each time giving one
possible result.
3. Backward chaining is a kind of AND/OR search—the OR part because the goal query
can be proved by any rule in the knowledge base, and the AND part because all the
conjuncts in the lhs of a clause must be proved. Backward chaining, as we have written
it, is clearly a depth-first search algorithm. This means that its space requirements are
linear in the size of the proof (neglecting, for now, the space required to accumulate the
solutions). It also means that backward chaining (unlike forward chaining) suffers from
problems with repeated states and incompleteness.

Q. Write a short note on Resolution?

Answer:
1. As in the propositional case, first-order resolution requires that sentences be in
conjunctive normal form (CNF)—that is, a conjunction of clauses, where each clause is
a disjunction of literals. Literals can contain variables, which are assumed to be
universally quantified. For example, the sentence
∀x American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)
becomes, in CNF,
¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x).
2. Every sentence of first-order logic can be converted into an inferentially equivalent CNF
sentence. In particular, the CNF sentence will be unsatisfiable just when the original
sentence is unsatisfiable, so we have a basis for doing proofs by contradiction on the
CNF sentences. The procedure for conversion to CNF is similar to the propositional case.
We illustrate the procedure by translating the sentence "Everyone who loves all
animals is loved by someone," or
∀x [∀y Animal(y) ⇒ Loves(x, y)] ⇒ [∃y Loves(y, x)].
● Eliminate implications:
∀x [¬∀y (¬Animal(y) ∨ Loves(x, y))] ∨ [∃y Loves(y, x)].
● Move ¬ inwards:
In addition to the usual rules for negated connectives, we need rules for
negated quantifiers. Thus, we have
¬∀x p becomes ∃x ¬p
¬∃x p becomes ∀x ¬p.
Our sentence goes through the following transformations:
∀x [∃y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃y Loves(y, x)].
∀x [∃y ¬¬Animal(y) ∧ ¬Loves(x, y)] ∨ [∃y Loves(y, x)].
∀x [∃y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃y Loves(y, x)].
Notice how a universal quantifier (∀y) in the premise of the implication has
become an existential quantifier. The sentence now reads "Either there is some
animal that x doesn't love, or (if this is not the case) someone loves x." Clearly,
the meaning of the original sentence has been preserved.
● Standardize variables:
For sentences like (∃x P(x)) ∨ (∃x Q(x)) which use the same variable name twice,
change the name of one of the variables. This avoids confusion later when we
drop the quantifiers. Thus, we have
∀x [∃y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃z Loves(z, x)].
● Skolemize:
Skolemization is the process of removing existential quantifiers by elimination. In
the simple case, it is just like the Existential Instantiation rule of Section 9.1:
translate ∃x P(x) into P(A), where A is a new constant. However, we can't apply
Existential Instantiation to our sentence above because it doesn't match the
pattern ∃v α; only parts of the sentence match the pattern. If we blindly apply the
rule to the two matching parts we get
∀x [Animal(A) ∧ ¬Loves(x, A)] ∨ Loves(B, x),
which has the wrong meaning entirely: it says that everyone either fails to love a
particular animal A or is loved by some particular entity B. In fact, our original
sentence allows each person to fail to love a different animal or to be loved by a
different person. Thus, we want the Skolem entities to depend on x:
∀x [Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x).
Here F and G are Skolem functions. The general rule is that the arguments of
the Skolem function are all the universally quantified variables in whose scope
the existential quantifier appears. As with Existential Instantiation, the
Skolemized sentence is satisfiable exactly when the original sentence is
satisfiable.
● Drop universal quantifiers:
At this point, all remaining variables must be universally quantified. Moreover,
the sentence is equivalent to one in which all the universal quantifiers have been
moved to the left. We can therefore drop the universal quantifiers:
[Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x).
● Distribute ∨ over ∧:
[Animal(F(x)) ∨ Loves(G(x), x)] ∧ [¬Loves(x, F(x)) ∨ Loves(G(x), x)].
This step may also require flattening out nested conjunctions and disjunctions.
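As a simplified, propositional companion to the first-order procedure above, the sketch below performs resolution refutation on clause sets; encoding clauses as frozensets of string literals (with "~P" for ¬P) is our own choice. It proves KB |= Q by deriving the empty clause from KB together with the negated query.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # All resolvents of two clauses: cancel one complementary pair at a time.
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def resolution_entails(clauses, query):
    clauses = set(clauses) | {frozenset({negate(query)})}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:             # empty clause: contradiction
                            return True
                        new.add(frozenset(r))
        if new <= clauses:                    # no progress: not entailed
            return False
        clauses |= new

kb = [frozenset({"~P", "Q"}),                 # P => Q, in clause form
      frozenset({"P"})]
print(resolution_entails(kb, "Q"))            # -> True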
Assignment 5:

Q1 Define Classical planning.
● With propositional inference there are many actions and states to represent—for example, in the wumpus world.
● Factored representation is one in which a state of the world is represented by a collection of variables. We use a language called PDDL, the Planning Domain Definition Language, that allows us to express all actions with one action schema.
● PDDL describes the four things we need to define a search problem: the initial state, the actions that are available in a state, the result of applying an action, and the goal test.
● A state is represented as a conjunction of fluents that are ground, functionless atoms. For example, Poor ∧ Unknown.
● Actions are described by a set of action schemas that implicitly define the ACTIONS(s) and RESULT(s, a) functions needed to do a problem-solving search.
● Classical planning concentrates on problems where most actions leave most things unchanged. Think of a world consisting of a bunch of objects on a flat surface.
● The action of nudging an object causes that object to change its location by a vector Δ. A concise description of the action should mention only Δ; it shouldn't have to mention all the objects that stay in place.
● PDDL does that by specifying the result of an action in terms of what changes; everything that stays the same is left unmentioned.

Q2 Give an example of an Air cargo transport with PDDL description.
● An air cargo transport problem involves loading and unloading cargo and flying it from place to place.
● The problem can be defined with three actions: Load, Unload, and Fly.
● The actions affect two predicates: In(c, p) means that cargo c is inside plane p, and At(x, a) means that object x (either plane or cargo) is at airport a.
● The approach we use is to say that a piece of cargo ceases to be At anywhere when it is In a plane; the cargo only becomes At the new airport when it is unloaded.
● Problem: [figure in the source: the PDDL definition of the air cargo transport problem]
● The following plan is a solution to the problem:
[Load(C1, P1, SFO), Fly(P1, SFO, JFK), Unload(C1, P1, JFK), Load(C2, P2, JFK), Fly(P2, JFK, SFO), Unload(C2, P2, SFO)].

Q3 Give an example of The blocks world with PDDL description.
● One of the most famous planning domains is known as the blocks world.
● This domain consists of a set of cube-shaped blocks sitting on a table. The blocks can be stacked, but only one block can fit directly on top of another.
● A robot arm can pick up a block and move it to another position, either on the table or on top of another block.
● The arm can pick up only one block at a time, so it cannot pick up a block that has another one on it. The goal will always be to build one or more stacks of blocks, specified in terms of what blocks are on top of what other blocks.
● For example, a goal might be to get block A on B and block B on C.
● We use On(b, x) to indicate that block b is on x, where x is either another block or the table. The action for moving block b from the top of x to the top of y will be Move(b, x, y).
● Now, one of the preconditions on moving b is that no other block be on it. Basic PDDL does not allow quantifiers, so instead we introduce a predicate Clear(x) that is true when nothing is on x.
● The action Move moves a block b from x to y if both b and y are clear. A first attempt at the Move schema is
Action(Move(b, x, y),
PRECOND: On(b, x) ∧ Clear(b) ∧ Clear(y),
EFFECT: On(b, y) ∧ Clear(x) ∧ ¬On(b, x) ∧ ¬Clear(y)).
● Problem: After the move is made, b is still clear but y is not. Unfortunately, this schema also does not maintain Clear properly when x or y is the table. When x is the Table, this action has the effect Clear(Table), but the table should not become clear; and when y = Table, it has the precondition Clear(Table), but the table does not have to be clear for us to move a block onto it.
● First, we introduce another action to move a block b from x to the table:
Action(MoveToTable(b, x),
PRECOND: On(b, x) ∧ Clear(b),
EFFECT: On(b, Table) ∧ Clear(x) ∧ ¬On(b, x)).
● Second, we take the interpretation of Clear(x) to be "there is a clear space on x to hold a block." Under this interpretation, Clear(Table) will always be true. The only problem is that nothing prevents the planner from using Move(b, x, Table) instead of MoveToTable(b, x).
● Solution is the sequence: [MoveToTable(C, A), Move(B, Table, C), Move(A, Table, B)].

Q4 Write a short note on Complexity of classical planning.
Or
Write a short note on PlanSAT, Bounded PlanSAT.
● PlanSAT is the question of whether there exists any plan that solves a planning problem.
● Bounded PlanSAT asks whether there is a solution of length k or less; this can be used to find an optimal plan.
● The first result is that both decision problems are decidable for classical planning.
● The proof follows from the fact that the number of states is finite.
● But if we add function symbols to the language, then the number of states becomes infinite, and PlanSAT becomes only semidecidable: an algorithm exists that will terminate with the correct answer for any solvable problem, but may not terminate on unsolvable problems.
● The Bounded PlanSAT problem remains decidable even in the presence of function symbols.
● Bounded PlanSAT is NP-complete while PlanSAT is in P; in other words, optimal planning is usually hard, but sub-optimal planning is sometimes easy.
● To do well on easier-than-worst-case problems, we need good search heuristics.
● That's the true advantage of the classical planning formalism: it has facilitated the development of very accurate domain-independent heuristics, whereas systems based on successor state axioms in first-order logic have had less success in coming up with good heuristics.

Q5 What are the different ways of planning?
Or
Explain algorithm for planning as state space search.

1) Forward (progression) state-space search:
● It was assumed that forward state-space search was too inefficient to be practical.
● First, forward search is prone to exploring irrelevant actions.
● An uninformed forward-search algorithm would have to start enumerating these 10 billion actions to find one that leads to the goal.
● Second, planning problems often have large state spaces.
● In any state there is a minimum of 450 actions (when all the packages are at airports with no planes) and a maximum of 10,450 (when all packages and planes are at the same airport).
● On average, let's say there are about 2000 possible actions per state, so the search graph up to the depth of the obvious solution has about 2000^41 nodes.
● Clearly, even this relatively small problem instance is hopeless without an accurate heuristic.
● Although many real-world applications of planning have relied on domain-specific heuristics, it turns out that strong domain-independent heuristics can be derived automatically; that is what makes forward search feasible.

2) Backward (regression) relevant-states search:
● In regression search we start at the goal and apply the actions backward until we find a sequence of steps that reaches the initial state. It is called relevant-states search because we only consider actions that are relevant to the goal (or current state).
● As in belief-state search there is a set of relevant states to consider at each step, not just a single state.
● In general, backward search works only when we know how to regress from a state description to the predecessor state description.
● Backward search keeps the branching factor lower than forward search, for most problem domains.
● However, the fact that backward search uses state sets rather than individual states makes it harder to come up with good heuristics. That is the main reason why the majority of current systems favor forward search.

3) Heuristics for planning:
● An admissible heuristic can be derived by defining a relaxed problem that is easier to solve. The exact cost of a solution to this easier problem then becomes the heuristic for the original problem.
● By definition, there is no way to analyze an atomic state, and thus it requires some ingenuity by a human analyst to define good domain-specific heuristics for search problems with atomic states.
● Planning uses a factored representation for states and action schemas. That makes it possible to define good domain-independent heuristics and for programs to automatically apply a good domain-independent heuristic for a given problem.
● A key idea in defining heuristics is decomposition: dividing a problem into parts, solving each part independently, and then combining the parts.
● The subgoal independence assumption is that the cost of solving a conjunction of subgoals is approximated by the sum of the costs of solving each subgoal independently.
● The subgoal independence assumption can be optimistic or pessimistic. It is optimistic when there are negative interactions between the subplans for each subgoal—for example, when an action in one subplan deletes a goal achieved by another subplan.
● It is pessimistic, and therefore inadmissible, when subplans contain redundant actions—for instance, two actions that could be replaced by a single action in the merged plan.

Q6 What is planning graph? Explain with example.
● A planning graph can be used to give better heuristic estimates.
● A planning problem asks if we can reach a goal state from the initial state. A planning graph is a polynomial-size approximation to the tree of all possible action sequences, and it can be constructed quickly.
● The planning graph can't answer definitively whether G is reachable from S0, but it can estimate how many steps it takes to reach G.
● A planning graph is a directed graph organized into levels: first a level S0 for the initial state, consisting of nodes representing each fluent that holds in S0; then a level A0 consisting of nodes for each ground action that might be applicable in S0; then alternating levels Si followed by Ai; until we reach a termination condition.
● Planning graphs work only for propositional planning problems—ones with no variables.

Q7 What are the different steps of planning graph for heuristic estimation?
● A planning graph is a rich source of information about the problem.
● First, if any goal literal fails to appear in the final level of the graph, then the problem is unsolvable.
● Second, we can estimate the cost of achieving any goal literal gi from state s as the level at which gi first appears in the planning graph constructed from initial state s.
● The estimate might not always be accurate, however, because planning graphs allow several actions at each level, whereas the heuristic counts just the level and not the number of actions.
● For this reason, it is common to use a serial planning graph for computing heuristics.
● A serial graph insists that only one action can actually occur at any given time step; this is done by adding mutex links between every pair of nonpersistence actions.
● Level costs extracted from serial graphs are often quite reasonable estimates of actual costs.
● To estimate the cost of a conjunction of goals, there are three simple approaches.
● The max-level heuristic simply takes the maximum level cost of any of the goals; this is admissible, but not necessarily accurate.
● The level sum heuristic, following the subgoal independence assumption, returns the sum of the level costs of the goals.
● Finally, the set-level heuristic finds the level at which all the literals in the conjunctive goal appear in the planning graph without any pair of them being mutually exclusive.
● But a planning graph cannot detect the impossibility, because any two of the three subgoals are achievable.

Q8 Explain GRAPHPLAN algorithm.
● The GRAPHPLAN algorithm repeatedly adds a level to a planning graph with EXPAND-GRAPH.
[Figure in the source: the GRAPHPLAN algorithm pseudocode]
● The first line of GRAPHPLAN initializes the planning graph to a one-level (S0) graph representing the initial state.
● The positive fluents from the problem description's initial state are shown, as are the relevant negative fluents.
● Not shown are the unchanging positive literals (such as Tire(Spare)) and the irrelevant negative literals.
● The goal At(Spare, Axle) is not present in S0, so we need not call EXTRACT-SOLUTION—we are certain that there is no solution yet.
● Instead, EXPAND-GRAPH adds into A0 the three actions whose preconditions exist at level S0 (i.e., all the actions except PutOn(Spare, Axle)), along with persistence actions for all the literals in S0.
● The effects of the actions are added at level S1.
● EXPAND-GRAPH then looks for mutex relations and adds them to the graph.
Q9 Define EXTRACT-SOLUTION problem.

● If EXTRACT-SOLUTION is called again with the same level and goals, we can find
the recorded no-good and immediately return failure rather than searching again.
● If EXTRACT-SOLUTION fails to find a solution, then there must have been at least
one set of goals that were not achievable and were marked as a no-good.
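To connect Q6 through Q9, here is a toy computation of level cost on a relaxed planning graph. Delete effects and mutexes are deliberately ignored, so this yields exactly the kind of optimistic estimate the text warns about; the spare-tire actions are paraphrased for the example, not copied from the book's figure.

actions = [
    ({"At(Spare,Trunk)"}, {"At(Spare,Ground)"}),                      # Remove(Spare,Trunk)
    ({"At(Flat,Axle)"}, {"At(Flat,Ground)"}),                         # Remove(Flat,Axle)
    ({"At(Spare,Ground)", "At(Flat,Ground)"}, {"At(Spare,Axle)"}),    # PutOn(Spare,Axle)
]
state = {"At(Spare,Trunk)", "At(Flat,Axle)"}

level = {lit: 0 for lit in state}    # level cost: first layer where a literal appears
layer, changed = 0, True
while changed:                       # expand until the graph levels off
    changed = False
    layer += 1
    known = set(level)               # literals available before this layer
    for pre, eff in actions:
        if pre <= known:
            for lit in eff:
                if lit not in level:
                    level[lit] = layer
                    changed = True

print(level["At(Spare,Axle)"])   # -> 2: a lower bound on the real plan length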

Q10 Write a short note on Classical Planning Approaches.

● Each state is represented as a conjunction of fluents that are ground, functionless
atoms.
● The representation of states is carefully designed so that a state can be treated
either as a conjunction of fluents, which can be manipulated by logical inference,
or as a set of fluents, which can be manipulated with set operations. The set
semantics is sometimes easier to deal with.
● Actions are described by a set of action schemas that implicitly define the
ACTIONS(s) and RESULT(s, a) functions needed to do a problem-solving search.
● Classical planning concentrates on problems where most actions leave most
things unchanged.
Example:
● Think of a world consisting of a bunch of objects on a flat surface.
● The action of nudging an object causes that object to change its location
by a vector.
● PDDL does that by specifying the result of an action in terms of what
changes; everything that stays the same is left unmentioned.

Example of First Order Logic:
● A set of ground (variable-free) actions can be represented by a single
action schema.
● The schema is a lifted representation – it lifts the level of reasoning from
propositional logic to a restricted subset of first-order logic.
● For example, here is an action schema for flying a plane from one location
to another. The schema consists of the action name, a list of all the variables
used in the schema, a precondition and an effect:
Action(Fly(p, from, to),
PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
EFFECT: ¬At(p, from) ∧ At(p, to))
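The add/delete-list reading of "specify only what changes" can be executed directly. The encoding below (namedtuple fields and set-of-strings states) is our own sketch; the blocks-world names follow Q3.

from collections import namedtuple

Action = namedtuple("Action", "name precond add_list del_list")

def result(state, action):
    # RESULT(s, a): apply the action if applicable; every fluent not named
    # in the add or delete list simply persists, as PDDL intends.
    assert action.precond <= state, "action not applicable in this state"
    return (state - action.del_list) | action.add_list

# The Q3 start state: C on A, A and B on the table, C and B clear.
state = {"On(C,A)", "On(A,Table)", "On(B,Table)", "Clear(C)", "Clear(B)"}

# First step of the Q3 solution [MoveToTable(C,A), Move(B,Table,C), ...]:
move_c_to_table = Action("MoveToTable(C,A)",
                         precond={"On(C,A)", "Clear(C)"},
                         add_list={"On(C,Table)", "Clear(A)"},
                         del_list={"On(C,A)"})

state = result(state, move_c_to_table)
print(sorted(state))   # C is now on the table and A has become clear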
