
Unit V

Probabilistic Reasoning



UNCERTAIN KNOWLEDGE AND REASONING

• Uncertainty
• Review of Probability
• Probabilistic Reasoning
• Bayesian Networks
• Inferences in Bayesian networks
• Approximate Inference in Bayesian Networks
• Relational and First Order Probability Models

UNCERTAIN KNOWLEDGE AND REASONING

• Uncertainty
• Uncertainty (Example)
• Nature of Uncertain Knowledge

Review of Probability

• Probability
• Random Variable
• Types of Random Variables
• Atomic Events
• Prior Probability
• Conditional Probability
• Basic Axioms of Probability (see the summary below)

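For reference, the standard definitions behind the topics above (general facts, not transcribed from the slides):

```latex
% Axioms of probability (for propositions a, b)
0 \le P(a) \le 1, \qquad P(\mathit{true}) = 1, \qquad P(\mathit{false}) = 0
P(a \lor b) = P(a) + P(b) - P(a \land b)

% Prior (unconditional) probability: P(a) before any evidence is observed.
% Conditional probability and the product rule (defined when P(b) > 0):
P(a \mid b) = \frac{P(a \land b)}{P(b)}, \qquad P(a \land b) = P(a \mid b)\,P(b)
```
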
Probabilistic Reasoning

• Using Axioms of Probability
• Conditional Probability
• Bayes’ Rule (see the formula below)
• Example: Hypothesis for Flu Based on Symptoms
• Bayes’ Theorem
• Applying Bayes’ Rule
• Bayesian Network
• Bayesian Network Example (Burglar Alarm)
• Conditional Probability Tables

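For reference, Bayes’ rule follows directly from the product rule; in the flu example the hypothesis H stands for “has flu” and the evidence E for the observed symptoms:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
            = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}
```
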
Bayesian Network

• Joint Probability Distribution
• Drawbacks of Joint Probability Distribution
• Bayesian Network
• Bayesian Network Example 1
• Conditional Probability Table
• Example 2
• Example 3

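A minimal Python sketch of the burglar-alarm example (the CPT values are the ones commonly used in the textbook version of this network and should be read as illustrative). It shows how any entry of the joint distribution is obtained by multiplying one entry from each node’s conditional probability table.

```python
# Burglar-alarm Bayesian network: Burglary and Earthquake are parents of Alarm;
# Alarm is the parent of JohnCalls and MaryCalls.
P_B = 0.001                      # P(Burglary)
P_E = 0.002                      # P(Earthquake)
P_A = {(True, True): 0.95,       # P(Alarm | Burglary, Earthquake)
       (True, False): 0.94,
       (False, True): 0.29,
       (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}  # P(JohnCalls | Alarm)
P_M = {True: 0.70, False: 0.01}  # P(MaryCalls | Alarm)

def joint(b, e, a, j, m):
    """P(b, e, a, j, m) = P(b) P(e) P(a|b,e) P(j|a) P(m|a)."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return pb * pe * pa * pj * pm

# e.g. probability that both neighbours call, the alarm sounds,
# but there is neither a burglary nor an earthquake:
print(joint(False, False, True, True, True))   # ~0.000628
```
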
Semantics of Bayesian Network

• Method for Constructing a Bayesian Network
• Compactness and Node Ordering
• Conditional Independence Relations in Bayesian Networks

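The semantics behind these slides can be summarized in one equation: each entry of the full joint distribution equals the product of the corresponding CPT entries, one per node given its parents:

```latex
P(x_1, \ldots, x_n) = \prod_{i=1}^{n} P\bigl(x_i \mid \mathit{parents}(X_i)\bigr)
```

On compactness: if each of n Boolean variables has at most k parents, the network is specified by at most n·2^k numbers, versus 2^n − 1 for the full joint table.
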
Inferences in Bayesian Network

• Purpose
• Notations
• Example
• Types of Inferences
• Inference by Enumeration (sketched below)
• Inference by Variable Elimination

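Continuing the alarm-network sketch, a hedged illustration of inference by enumeration: the query P(Burglary | JohnCalls = true, MaryCalls = true) is answered by summing the joint, expressed as a product of CPT entries, over the hidden variables Earthquake and Alarm and then normalizing. Variable elimination computes the same sums while caching intermediate factors, which avoids the repeated work of plain enumeration. (The code assumes the CPT dictionaries and joint() from the earlier sketch are in scope.)

```python
from itertools import product

def enumerate_query_burglary(j_obs=True, m_obs=True):
    """P(Burglary | JohnCalls=j_obs, MaryCalls=m_obs) by brute-force enumeration."""
    dist = {}
    for b in (True, False):
        # Sum out the hidden variables Earthquake and Alarm.
        dist[b] = sum(joint(b, e, a, j_obs, m_obs)
                      for e, a in product((True, False), repeat=2))
    norm = sum(dist.values())
    return {b: p / norm for b, p in dist.items()}

print(enumerate_query_burglary())   # P(Burglary=true | j, m) is roughly 0.284
```
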
Approximate Inference in Bayesian Networks

• Exact inference is not feasible in large, multiply connected networks.
• Sampling (Monte Carlo) methods make approximate inference feasible; three such methods are sketched below:
• Prior Sampling
• Rejection Sampling
• Likelihood Weighting

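A minimal sketch of prior (direct) sampling on the same alarm network (again assuming the CPT dictionaries from the earlier sketch are in scope): each complete sample is generated by drawing the variables in topological order, each conditioned on its already-sampled parents.

```python
import random

def prior_sample():
    """Draw one complete assignment (b, e, a, j, m) in topological order."""
    b = random.random() < P_B
    e = random.random() < P_E
    a = random.random() < P_A[(b, e)]
    j = random.random() < P_J[a]
    m = random.random() < P_M[a]
    return b, e, a, j, m

# With N samples, P(x1, ..., xn) is estimated by the fraction of samples equal to (x1, ..., xn).
```
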
Approximate Inference in Bayesian Networks

• Rejection Sampling

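A hedged sketch of rejection sampling for P(Burglary | JohnCalls = true, MaryCalls = true), reusing prior_sample() above: samples inconsistent with the evidence are thrown away, and the query variable is tallied over the samples that remain. Its main weakness is that when the evidence is unlikely, almost every sample is rejected.

```python
def rejection_sample_burglary(n=100_000):
    counts = {True: 0, False: 0}
    for _ in range(n):
        b, e, a, j, m = prior_sample()
        if j and m:                 # keep only samples consistent with the evidence
            counts[b] += 1
    kept = counts[True] + counts[False]
    return {b: c / kept for b, c in counts.items()} if kept else None

print(rejection_sample_burglary())  # approaches ~0.284 for True, given enough kept samples
```
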
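The list above also names likelihood weighting; as a hedged sketch of that method in its usual form, evidence variables are fixed to their observed values and each sample is weighted by how likely it makes that evidence, so no sample is wasted:

```python
def likelihood_weighted_sample(j_obs=True, m_obs=True):
    """One weighted sample for the query P(Burglary | j_obs, m_obs).
    Non-evidence variables are sampled; evidence variables are clamped and
    contribute their conditional probability to the sample weight."""
    weight = 1.0
    b = random.random() < P_B
    e = random.random() < P_E
    a = random.random() < P_A[(b, e)]
    weight *= P_J[a] if j_obs else 1 - P_J[a]   # JohnCalls is evidence
    weight *= P_M[a] if m_obs else 1 - P_M[a]   # MaryCalls is evidence
    return b, weight

def likelihood_weighting_burglary(n=100_000):
    totals = {True: 0.0, False: 0.0}
    for _ in range(n):
        b, w = likelihood_weighted_sample()
        totals[b] += w
    norm = totals[True] + totals[False]
    return {b: t / norm for b, t in totals.items()}

print(likelihood_weighting_burglary())   # again approaches ~0.284 for True
```
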
Relational and First Order Probability Models
• First-order logic commits to the existence of objects and relations
among them and can express facts about some or all of the objects
in a domain.
• Bayesian networks are essentially propositional: the set of random
variables is fixed and finite, and each has a fixed domain of possible
values.
• If we can find a way to combine probability theory with the
expressive power of first-order representations, we expect to be
able to increase dramatically the range of problems that can be
handled.

Relational and First Order Probability Models

• Example: an online book retailer
• The retailer wants to provide overall evaluations of products based on the
  recommendations received from its customers.
• The evaluation takes the form of a posterior distribution over the quality of
  the book, given the available evidence.
• A simple approach would be to average the recommendations, but this fails to
  take into account the fact that some customers are kinder than others and
  some are less honest than others.

Relational and First Order Probability Models

• Online book retailer: Bayesian network

Relational and First Order Probability Models

• Online book retailer: first-order probability model
• The situation seems tailor-made for a first-order language.
• We would like to say something like
  Recommendation(c, b) ∼ RecCPT(Honest(c), Kindness(c), Quality(b))
  with the intended meaning that a customer’s recommendation for a book depends
  on the customer’s honesty and kindness and the book’s quality, according to
  some fixed CPT.

Relational and First Order Probability Models

• Relational Probability Model (RPM)
• Like first-order logic, RPMs have constant, function, and predicate symbols.
• Each function has a type signature, that is, a specification of the type of
  each argument and of the function’s value.
• For the book-recommendation domain, the types are Customer and Book, and the
  type signatures for the functions and predicates are as follows:
  Honest : Customer → {true, false}
  Kindness : Customer → {1, 2, 3, 4, 5}
  Quality : Book → {1, 2, 3, 4, 5}
  Recommendation : Customer × Book → {1, 2, 3, 4, 5}

Relational and First Order Probability Models

• Relational Probability Model
• Honest(c) ∼ ⟨0.99, 0.01⟩
• Kindness(c) ∼ ⟨0.1, 0.1, 0.2, 0.3, 0.3⟩
• Quality(b) ∼ ⟨0.05, 0.2, 0.4, 0.2, 0.15⟩
• Recommendation(c, b) ∼ RecCPT(Honest(c), Kindness(c), Quality(b))
• We can refine the model by introducing a context-specific independence:
  Recommendation(c, b) ∼ if Honest(c) then HonestRecCPT(Kindness(c), Quality(b))
                          else ⟨0.4, 0.1, 0.0, 0.1, 0.4⟩

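A hedged Python sketch of this relational probability model: the priors are the ones given above, but RecCPT/HonestRecCPT are not specified in the source, so the honest_rec_cpt table below is purely hypothetical, included only to make the generative process concrete.

```python
import random

KINDNESS_PRIOR = [0.1, 0.1, 0.2, 0.3, 0.3]      # P(Kindness(c) = 1..5)
QUALITY_PRIOR  = [0.05, 0.2, 0.4, 0.2, 0.15]    # P(Quality(b) = 1..5)
DISHONEST_DIST = [0.4, 0.1, 0.0, 0.1, 0.4]      # recommendation dist. for dishonest customers

def sample_discrete(dist):
    """Sample a value in {1..5} from a length-5 probability vector."""
    return random.choices(range(1, 6), weights=dist)[0]

def honest_rec_cpt(kindness, quality):
    """HYPOTHETICAL stand-in for HonestRecCPT: an honest recommendation centred
    on the book's quality and nudged upward by the customer's kindness."""
    centre = min(5, max(1, round(0.8 * quality + 0.2 * kindness)))
    dist = [0.05] * 5
    dist[centre - 1] = 0.8
    total = sum(dist)
    return [p / total for p in dist]

def sample_recommendation():
    honest   = random.random() < 0.99            # Honest(c) ~ <0.99, 0.01>
    kindness = sample_discrete(KINDNESS_PRIOR)   # Kindness(c)
    quality  = sample_discrete(QUALITY_PRIOR)    # Quality(b)
    if honest:                                   # context-specific independence
        return sample_discrete(honest_rec_cpt(kindness, quality))
    return sample_discrete(DISHONEST_DIST)

print(sample_recommendation())
```
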
Relational and First Order Probability Models

• Relational Probability Model
• We can elaborate this model in endless ways to make it more realistic. For
  example, suppose that an honest customer who is a fan of a book’s author
  always gives the book a 5, regardless of quality:
  Recommendation(c, b) ∼ if Honest(c) then
                            if Fan(c, Author(b)) then Exactly(5)
                            else HonestRecCPT(Kindness(c), Quality(b))
                          else ⟨0.4, 0.1, 0.0, 0.1, 0.4⟩

Relational and First Order Probability Models

• Relational Probability Model
• How can the system reason about whether, say, C1 is a fan of Author(B2)
  when Author(B2) is unknown?

Relational and First Order Probability Models

• Relational Probability Model
• The answer is that the system may have to reason about all possible authors.
• Suppose (to keep things simple) that there are just two authors, A1 and A2.
  Then Author(B2) is a random variable with two possible values, A1 and A2,
  and it is a parent of Recommendation(C1, B2).
• The variables Fan(C1, A1) and Fan(C1, A2) are parents too. The conditional
  distribution for Recommendation(C1, B2) is then essentially a multiplexer in
  which the Author(B2) parent acts as a selector to choose which of Fan(C1, A1)
  and Fan(C1, A2) actually gets to influence the recommendation.

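Continuing the sketch above, a hedged illustration of the multiplexer idea: the two-author set follows the slide, while the uniform prior over Author(B2) and the Fan priors are illustrative assumptions, not from the source.

```python
def sample_recommendation_unknown_author(honest, kindness, quality):
    """Multiplexer CPD: Author(B2) selects which Fan(C1, a) parent is active."""
    authors = ["A1", "A2"]                              # author set from the slide
    author = random.choice(authors)                     # uniform prior over Author(B2) (assumption)
    fan = {a: random.random() < 0.1 for a in authors}   # Fan(C1, a) priors (assumption)
    if honest:
        if fan[author]:                                 # only the selected author's Fan matters
            return 5                                    # Exactly(5)
        return sample_discrete(honest_rec_cpt(kindness, quality))
    return sample_discrete(DISHONEST_DIST)

print(sample_recommendation_unknown_author(honest=True, kindness=4, quality=3))
```
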
