
4BCS605 - Artificial Intelligence

Module - 4
Syllabus
BOOKS and
REFERENCES
 TEXT BOOKS:
 [1] Stuart J. Russell, Peter Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 2015, ISBN-13: 978-9332543515
 REFERENCES:
 [1] Elaine Rich, Kevin Knight, Artificial Intelligence, McGraw Hill Education, 2017, ISBN-13: 978-0070087705
 [2] Nils J. Nilsson, Artificial Intelligence: A New Synthesis, Morgan Kaufmann, 1998, ISBN-13: 978-1558605350
QUANTIFYING UNCERTAINTY

 Agents may need to handle uncertainty, whether due to partial observability, nondeterminism,
or a combination of the two. An agent may never know for certain what state it’s in or where it
will end up after a sequence of actions.
 Let’s consider an example of uncertain reasoning: diagnosing a dental patient’s toothache.
 Diagnosis—whether for medicine, automobile repair, or whatever—almost always involves
uncertainty.
 Let us try to write rules for dental diagnosis using propositional logic, so that we can see how
the logical approach breaks down.
 Consider the following simple rule:
Toothache ⇒ Cavity .
 The problem is that this rule is wrong. Not all patients with toothaches have cavities; some of
them have gum disease, an abscess, or one of several other problems:
Toothache ⇒ Cavity ∨ GumProblem ∨ Abscess . . .
 Unfortunately, in order to make the rule true, we have to add an almost unlimited list of possible
problems. We could try turning the rule into a causal rule:
Cavity ⇒ Toothache .
 But this rule is not right either; not all cavities cause pain.
QUANTIFYING UNCERTAINTY

 The only way to fix the rule is to make it logically exhaustive: to augment
the left-hand side with all the qualifications required for a cavity to cause a
toothache.
 Trying to use logic to cope with a domain like medical diagnosis thus fails
for three main reasons:
 Laziness: It is too much work to list the complete set of antecedents or
consequents needed to ensure an exceptionless rule and too hard to use such
rules.
 Theoretical ignorance: Medical science has no complete theory for the
domain.
 Practical ignorance: Even if we know all the rules, we might be uncertain
about a particular patient because not all the necessary tests have been or
can be run.
QUANTIFYING UNCERTAINTY

 The agent’s knowledge can at best provide only a degree of belief in the relevant sentences.
 Our main tool for dealing with degrees of belief is probability theory.
 Probability provides a way of summarizing the uncertainty that comes from our laziness and ignorance, thereby solving the qualification problem.
Uncertainty and rational
decisions

 To make rational choices under uncertainty, an agent must first have preferences between the different possible outcomes of the various plans.
 An outcome is a completely specified state, including such factors as whether the agent arrives on time and the length of the wait at the airport.
 We use utility theory to represent and reason with preferences.
 The term utility is used here in the sense of “the quality of being useful,” not in the sense of the electric company or water works.
 Utility theory says that every state has a degree of usefulness, or utility, to an agent and that the agent will prefer states with higher utility.
 The utility of a state is relative to an agent.
Uncertainty and rational
decisions

 Preferences, as expressed by utilities, are combined with probabilities in the general theory of rational decisions called decision theory:
Decision theory = probability theory + utility theory .
 The fundamental idea of decision theory is that an agent is rational if and only if it chooses the action that yields the highest expected utility, averaged over all the possible outcomes of the action. This is called the principle of maximum expected utility (MEU).
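The MEU principle can be sketched in a few lines. The actions, outcome probabilities, and utilities below are illustrative assumptions (in the spirit of the airport example), not values from the text:

```python
# Sketch of the maximum-expected-utility (MEU) principle.
# Each action maps to a list of (probability, utility) pairs over its outcomes.
# All numbers here are assumed, purely for illustration.

def expected_utility(outcomes):
    """Expected utility of one action: sum of probability * utility."""
    return sum(p * u for p, u in outcomes)

actions = {
    # leave very early: almost surely on time, but a long, tedious wait
    "leave_90_min_early": [(0.95, 60), (0.05, -100)],
    # leave later: a pleasanter trip, but a real chance of missing the flight
    "leave_30_min_early": [(0.70, 90), (0.30, -100)],
}

# the rational agent picks the action with the highest expected utility
best = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outs in actions.items():
    print(a, expected_utility(outs))
print("MEU choice:", best)
```

With these assumed numbers, leaving 90 minutes early wins (expected utility 52 versus 33) even though its best-case outcome is worse, which is exactly the averaging over outcomes that MEU prescribes.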
BASIC PROBABILITY NOTATION

 Like logical assertions, probabilistic assertions are about possible worlds.
 Whereas logical assertions say which possible worlds are strictly ruled out (all those in which the assertion is false), probabilistic assertions talk about how probable the various worlds are.
 In probability theory, the set of all possible worlds is called the sample space, written Ω. The possible worlds are mutually exclusive and exhaustive—two possible worlds cannot both be the case, and one possible world must be the case.
 Elements of the sample space, that is, particular possible worlds, are written ω.
 A fully specified probability model associates a numerical probability P(ω) with each possible world.
 The basic axioms of probability theory say that every possible world has a probability between 0 and 1 and that the total probability of the set of possible worlds is 1:
0 ≤ P(ω) ≤ 1 for every ω, and Σω∈Ω P(ω) = 1 .
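The two axioms can be checked on a concrete sample space. A minimal sketch, assuming the sample space of two fair dice (and using exact Fraction arithmetic):

```python
from itertools import product
from fractions import Fraction

# Sample space Ω: all 36 ordered outcomes of rolling two fair dice
# (an assumed example, not from the text).
omega = list(product(range(1, 7), repeat=2))

# A fully specified probability model: P(w) = 1/36 for each world w.
P = {w: Fraction(1, 36) for w in omega}

# Axiom 1: 0 <= P(w) <= 1 for every possible world w.
assert all(0 <= p <= 1 for p in P.values())
# Axiom 2: the total probability of the sample space is 1.
assert sum(P.values()) == 1

print(len(omega), sum(P.values()))
```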
BASIC PROBABILITY NOTATION

 Probabilistic assertions and queries are not usually about particular possible worlds, but about sets of them.
 In AI, the sets are always described by propositions in a formal language.
BASIC PROBABILITY NOTATION

 For a roll of two dice, probabilities such as P(Total = 11) and P(doubles) are called unconditional or prior
probabilities (and sometimes just “priors” for short); they refer to degrees of belief in
propositions in the absence of any other information.
 Most of the time, however, we have some information, usually called evidence, that has
already been revealed.
 For example, the first die may already be showing a 5 and we are waiting with bated
breath for the other one to stop spinning. In that case, we are interested not in the
unconditional probability of rolling doubles, but the conditional or posterior probability
(or just “posterior” for short) of rolling doubles given that the first die is a 5.
 This probability is written P(doubles | Die1 =5), where the “ | ” is pronounced “given.”
 Similarly, if I am going to the dentist for a regular checkup, the probability P(cavity)=0.2
might be of interest; but if I go to the dentist because I have a toothache, it’s P(cavity |
toothache)=0.6 that matters.
 Note that the precedence of “ | ” is such that any expression of the form P(. . . | . . .)
always means P((. . .)|(. . .)).
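The prior and posterior probabilities for the dice example can be computed by direct enumeration of the sample space. A minimal sketch (the helper P and the event names are mine, not from the text):

```python
from itertools import product
from fractions import Fraction

omega = list(product(range(1, 7), repeat=2))  # 36 equiprobable worlds

def P(event):
    """Probability of a proposition: total probability of worlds where it holds."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def doubles(w): return w[0] == w[1]
def total11(w): return w[0] + w[1] == 11
def die1_is_5(w): return w[0] == 5

print(P(total11))   # prior P(Total = 11): 2 worlds, (5,6) and (6,5)
print(P(doubles))   # prior P(doubles): 6 worlds

# Posterior: restrict attention to worlds consistent with the evidence Die1 = 5.
posterior = P(lambda w: doubles(w) and die1_is_5(w)) / P(die1_is_5)
print(posterior)    # P(doubles | Die1 = 5)
```

Here the evidence does not change the probability of doubles (both prior and posterior are 1/6), because the dice are independent; the point is the mechanics of conditioning on evidence.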
BASIC PROBABILITY NOTATION

 Mathematically speaking, conditional probabilities are defined in terms of unconditional probabilities as follows: for any propositions a and b, we have
P(a | b) = P(a ∧ b) / P(b) ,
which holds whenever P(b) > 0.
 The definition of conditional probability can be written in a different form called the product rule:
P(a ∧ b) = P(a | b) P(b) .
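The definition P(a | b) = P(a ∧ b) / P(b) and the product rule can be checked numerically. In the sketch below, P(cavity | toothache) = 0.6 comes from the text, while P(toothache) = 0.2 is an assumed value used only for illustration:

```python
from fractions import Fraction

# P(cavity | toothache) = 0.6 appears in the text;
# P(toothache) = 0.2 is an ASSUMED value for this illustration.
p_toothache = Fraction(1, 5)
p_cavity_given_toothache = Fraction(3, 5)

# Product rule: P(cavity AND toothache) = P(cavity | toothache) * P(toothache)
p_cavity_and_toothache = p_cavity_given_toothache * p_toothache
print(p_cavity_and_toothache)                 # the joint probability

# The definition recovers the conditional: P(a | b) = P(a AND b) / P(b)
print(p_cavity_and_toothache / p_toothache)   # back to P(cavity | toothache)
```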
The language of propositions
in probability assertions

 Variables in probability theory are called random variables and their names begin with an uppercase letter.
 Probability density functions (sometimes called pdfs) differ in meaning from discrete distributions: for a continuous random variable, P(X = x) denotes a density, a probability per unit of value, rather than a probability. Actual probabilities are obtained by integrating the density over an interval.
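One way to see the difference: a density value can exceed 1, because it is probability per unit, not a probability. A small sketch, assuming X uniform on [0, 0.5], where the density is constantly 2.0 yet the total probability still integrates to 1:

```python
# Density of a uniform random variable on [0, 0.5] (assumed example).
# f(x) is a probability per unit of x, not a probability, so f(x) = 2.0 is fine.
def f(x):
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

# Probabilities come from integrating the density over an interval.
# A crude midpoint Riemann sum over [0, 0.5] should come out close to 1.
n = 100_000
dx = 0.5 / n
total = sum(f((i + 0.5) * dx) * dx for i in range(n))
print(f(0.25), round(total, 6))
```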
Probability axioms and their
reasonableness

 The basic axioms of probability (Equations (13.1) and (13.2)) imply certain relationships among the degrees of belief that can be accorded to logically related propositions.
 For example, we can derive the familiar relationship between the probability of a proposition and the probability of its negation:
P(¬a) = 1 − P(a) .
Probability axioms and their
reasonableness

 We can also derive the well-known formula for the probability of a disjunction, sometimes called the inclusion–exclusion principle:
P(a ∨ b) = P(a) + P(b) − P(a ∧ b) .
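The inclusion–exclusion principle can be verified by enumeration. A minimal sketch on the two-dice sample space, with assumed events a = "first die shows 6" and b = "total is 7":

```python
from itertools import product
from fractions import Fraction

omega = list(product(range(1, 7), repeat=2))  # 36 equiprobable worlds

def P(event):
    """Probability of a proposition via enumeration of the sample space."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def a(w): return w[0] == 6            # first die shows 6 (assumed event)
def b(w): return w[0] + w[1] == 7     # total is 7 (assumed event)

# inclusion-exclusion: P(a OR b) = P(a) + P(b) - P(a AND b)
lhs = P(lambda w: a(w) or b(w))
rhs = P(a) + P(b) - P(lambda w: a(w) and b(w))
print(lhs, rhs)
assert lhs == rhs
```

The subtraction matters because the world (6, 1) satisfies both events and would otherwise be counted twice.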
INFERENCE USING FULL JOINT
DISTRIBUTIONS
 The frequentist position is that the numbers can come
only from experiments.
 The objectivist view is that probabilities are real
aspects of the universe—propensities of objects to
behave in certain ways—rather than being just
descriptions of an observer’s degree of belief.
 The subjectivist view describes probabilities as a way
of characterizing an agent’s beliefs, rather than as
having any external physical significance.
INFERENCE USING FULL JOINT
DISTRIBUTIONS
 The subjective Bayesian view allows any self-
consistent ascription of prior probabilities to
propositions, but then insists on proper Bayesian
updating as evidence arrives.
 The principle of indifference attributed to Laplace
(1816) states that propositions that are syntactically
“symmetric” with respect to the evidence should be
accorded equal probability.
INFERENCE USING FULL JOINT
DISTRIBUTIONS
 The process is called marginalization, or summing
out—because we sum up the probabilities for each
possible value of the other variables, thereby taking
them out of the equation:
P(Y) = Σz P(Y, z) .
 A variant uses conditional probabilities instead of joint probabilities:
P(Y) = Σz P(Y | z) P(z) .
 This rule is called conditioning.
 Marginalization and conditioning turn out to be
useful rules for all kinds of derivations involving
probability expressions.
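Marginalization and conditioning can be run directly on a full joint distribution. The sketch below uses the full-joint table from the textbook's dentist example (variables Toothache, Cavity, Catch); the helper functions are mine:

```python
# Full joint distribution over (Toothache, Cavity, Catch),
# as in the textbook's dentist example; the eight entries sum to 1.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.016, (True,  False, False): 0.064,
    (False, True,  True):  0.072, (False, True,  False): 0.008,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def marginal(var_index, value):
    """Marginalization: sum out all other variables to get P(X = value)."""
    return sum(p for w, p in joint.items() if w[var_index] == value)

p_cavity = marginal(1, True)       # P(cavity)
p_toothache = marginal(0, True)    # P(toothache)

# Conditioning via the definition: P(cavity | toothache)
#   = P(cavity AND toothache) / P(toothache)
p_joint_ct = sum(p for w, p in joint.items() if w[0] and w[1])
print(round(p_cavity, 3), round(p_joint_ct / p_toothache, 3))
```

Summing the four entries with Cavity = true gives P(cavity) = 0.2, and conditioning on toothache gives P(cavity | toothache) = 0.6, matching the numbers quoted earlier in the text.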
BAYES’ RULE AND ITS USE

 Writing the product rule in its two forms, P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a), and dividing through by P(a), we obtain
P(b | a) = P(a | b) P(b) / P(a) .
 This equation is known as Bayes’ rule (also Bayes’ law or Bayes’ theorem).
 This simple equation underlies most modern AI systems for probabilistic inference.
 The more general case of Bayes’ rule for multivalued variables can be written in the P notation as follows:
P(Y | X) = P(X | Y) P(Y) / P(X) .
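Bayes' rule P(b | a) = P(a | b) P(b) / P(a) is a one-liner to apply. The sketch below reuses the dental propositions; P(cavity) = 0.2 appears in the text, while the likelihood P(toothache | cavity) = 0.6 and the evidence probability P(toothache) = 0.2 are assumed values for illustration:

```python
# Bayes' rule with a = toothache, b = cavity.
# P(cavity) = 0.2 is from the text; the other two numbers are ASSUMED.
p_cavity = 0.2
p_toothache_given_cavity = 0.6   # assumed likelihood (cause -> effect)
p_toothache = 0.2                # assumed evidence probability

# P(cavity | toothache) = P(toothache | cavity) * P(cavity) / P(toothache)
p_cavity_given_toothache = p_toothache_given_cavity * p_cavity / p_toothache
print(round(p_cavity_given_toothache, 3))
```

This direction of use is the common one in practice: causal (cause-to-effect) probabilities such as P(toothache | cavity) are often easier to estimate, and Bayes' rule converts them into the diagnostic probability P(cavity | toothache) that the agent actually needs.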
