Risk Management Lecture 6

The document discusses risk analysis in information and systems engineering, focusing on decision theory, expected value approaches, and the value of information. It includes examples, such as the Burger Prince restaurant's decision-making process regarding opening a new location, utilizing expected value calculations and Bayesian statistics. The document emphasizes the importance of quantifying the value of information and making informed decisions based on probabilities and potential outcomes.


1

INSE 6320 -- Week 6


Risk Analysis for Information and Systems Engineering

• Value of Information
• Risk Profile
• Sensitivity Analysis
• Bayesian Belief Networks

Dr. M. AMAYRI Concordia University


2

Decision Theory
• A decision is a choice between alternatives based on estimates of the values
of those alternatives.

• A decision problem is characterized by decision alternatives, states of nature, and resulting payoffs.

• The decision alternatives are the different possible strategies the decision
maker can employ.

• The states of nature refer to future events, not under the control of the
decision maker, which will ultimately affect decision results. States of nature
should be defined so that they are mutually exclusive and contain all possible
future events that could affect the results of all potential decisions.

• Decision theory problems are generally represented as one of the following: Influence Diagram, Payoff Table, or Decision Tree
3

Decision Making with Probabilities

• Expected Value Approach


▪ If probabilistic information regarding the states of nature is
available, one may use the expected value (EV) approach.
▪ Here the expected return for each decision is calculated by
summing the products of the payoff under each state of nature
and the probability of the respective state of nature occurring.
▪ The decision yielding the best expected return is chosen.
4

Expected Value of a Decision Alternative

• The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative.
• The expected value (EV) of decision alternative di is defined as:

EV(di) = Σ (j = 1 to N) P(sj) Vij

where:
N = the number of states of nature
P(sj ) = the probability of state of nature sj
Vij = the payoff corresponding to decision alternative di and state of nature sj
5

Example: Burger Prince


Burger Prince Restaurant is contemplating opening a new restaurant
on Main Street. It has three different models, each with a different
seating capacity. Burger Prince estimates that the average number of
customers per hour will be 80, 100, or 120, with prior probabilities
0.4, 0.2, and 0.4.

• Payoff Table

Average Number of Customers Per Hour
           s1 = 80    s2 = 100   s3 = 120
Model A    $10,000    $15,000    $14,000
Model B    $ 8,000    $18,000    $12,000
Model C    $ 6,000    $16,000    $21,000
6

Example: Burger Prince


• Expected Value Approach
Calculate the expected value for each decision. The decision tree below can assist in this
calculation. Here d1, d2, d3 represent the decision alternatives of Models A, B, and C, and
s1, s2, s3 represent the states of nature of 80, 100, and 120 customers per hour.

Decision tree branches (payoffs):
  d1: Model A — s1 (.4): 10,000; s2 (.2): 15,000; s3 (.4): 14,000
  d2: Model B — s1 (.4): 8,000; s2 (.2): 18,000; s3 (.4): 12,000
  d3: Model C — s1 (.4): 6,000; s2 (.2): 16,000; s3 (.4): 21,000
7

Example: Burger Prince

Expected Value For Each Decision

EV(d1: Model A) = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
EV(d2: Model B) = .4(8,000) + .2(18,000) + .4(12,000) = $11,600
EV(d3: Model C) = .4(6,000) + .2(16,000) + .4(21,000) = $14,000

Choose the model with the largest EV, which is Model C.


8
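The EV calculation above can be sketched in a few lines of Python (a minimal illustration; the dictionary layout and function name are my own, while the payoffs and priors come from the Burger Prince example):

```python
# Minimal sketch of the expected value approach.
payoffs = {                       # V_ij for states s1 = 80, s2 = 100, s3 = 120
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}
priors = [0.4, 0.2, 0.4]          # P(s1), P(s2), P(s3)

def expected_value(values, probs):
    """EV(di): sum of payoffs weighted by the state-of-nature probabilities."""
    return sum(p * v for p, v in zip(probs, values))

ev = {d: expected_value(v, priors) for d, v in payoffs.items()}
best = max(ev, key=ev.get)        # EVs: $12,600 / $11,600 / $14,000
print(best)                       # Model C
```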

CAL Building Revisited

• Suppose market research was conducted in the community where the complex
will be built. This research allowed the company to estimate that the
probability of low demand will be 0.35, and the probability of high demand
will be 0.65. Which decision alternative should they select?

STATES OF NATURE
Alternatives Low (0.35) High (0.65)
Small 8 8
Medium 5 15
Large -11 22
9

CAL Building Revisited

STATES OF NATURE
Alternatives Low High
(0.35) (0.65) Expected value (EV)
Small 8 8 8(0.35) + 8(0.65) = 8
Medium 5 15 5(0.35) + 15(0.65) = 11.5
Large -11 22 -11(0.35) + 22(0.65) = 10.45

Recall that this is a profit payoff table. Thus since the decision to build a
medium complex has the highest expected profit, this is our best decision.
10

Value of information

• Information has a massive value to any organization and in particular to


management.

• Information brings with it efficacy and quality which is something every business
needs in these challenging times.

• Value of information may be described as the maximum price one should pay
to learn the actual value of an uncertainty before deciding on a course of action.

• Objective: Quantify the value of information -- both perfect and imperfect

• Principle: Information should help a decision maker make decisions that are
better than decisions without information
11

Expected Value of Perfect Information

• Frequently information is available which can improve the probability


estimates for the states of nature.

• The expected value of perfect information (EVPI) is the increase in the
expected profit that would result if one knew with certainty which state of
nature would occur:

EVPI = |EVwPI – EVwoPI|

where
▪ EVPI = expected value of perfect information
▪ EVwPI = expected value with perfect information about the states of nature
▪ EVwoPI = expected value without perfect information about the states of nature
12

Expected Value of Perfect Information


• EVPI Calculation
▪ Step 1: Determine the optimal return corresponding to each state of
nature.
▪ Step 2: Compute the expected value of these optimal returns.
▪ Step 3: Subtract the EV of the optimal decision from the amount
determined in step (2).

Example: Burger Prince

• Expected Value of Perfect Information


Calculate the expected value for the optimum payoff for each state of
nature and subtract the EV of the best decision.

EVPI= .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000


13
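The three-step EVPI calculation for the Burger Prince data can be sketched as follows (variable names are my own):

```python
# Sketch of the three-step EVPI calculation.
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}
priors = [0.4, 0.2, 0.4]

# Step 1: optimal return under each state of nature.
best_per_state = [max(v[j] for v in payoffs.values()) for j in range(3)]
# -> [10000, 18000, 21000]

# Step 2: expected value of these optimal returns (EVwPI).
ev_with_pi = sum(p * v for p, v in zip(priors, best_per_state))   # 16,000

# Step 3: subtract the EV of the optimal decision (EVwoPI = 14,000).
ev_without_pi = max(sum(p * v for p, v in zip(priors, v_)) for v_ in payoffs.values())
evpi = ev_with_pi - ev_without_pi
print(evpi)   # 2000.0
```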

Expected Value of Perfect Information: Burger Prince

A B C D E F
1 PAYOFF TABLE
2
3 Decision State of Nature Expected Recommended
4 Alternative s1 = 80 s2 = 100 s3 = 120 Value Decision
5 d1 = Model A 10,000 15,000 14,000 12600
6 d2 = Model B 8,000 18,000 12,000 11600
7 d3 = Model C 6,000 16,000 21,000 14000 d3 = Model C
8 Probability 0.4 0.2 0.4
9 Maximum Expected Value 14000
10
11 Maximum Payoff EVwPI EVPI
12 10,000 18,000 21,000 16000 2000
14

Bayes’ Theorem
• Bayesian Statistics play a role in assessing additional information obtained from
various sources.
• This additional information may assist in refining original probability estimates, and
help improve decision making.

• Bayes’ Theorem provides a procedure to calculate posterior probabilities:

P(Aj|B) = P(B|Aj)P(Aj) / [P(B|A1)P(A1) + P(B|A2)P(A2) + … + P(B|An)P(An)]

Posterior probabilities: probabilities determined after the additional
information becomes available.
Prior probabilities: probability estimates determined based on current
information, before the new information becomes available.
15

Expected Value of Sample Information


Computing Branch (Posterior) Probabilities

▪ Step 1: For each state of nature, multiply the prior probability by its
conditional probability for the indicator -- this gives the joint probabilities for
the states and indicator.
▪ Step 2: Sum these joint probabilities over all states -- this gives the marginal
probability for the indicator.
▪ Step 3: For each state, divide its joint probability by the marginal probability
for the indicator -- this gives the posterior probability distribution.

Expected Value and Efficiency of Sample Information


• The expected value of sample information (EVSI) is the additional expected profit
possible through knowledge of the sample or survey information.
• Efficiency of sample information is the ratio of EVSI to EVPI.
16

Expected Value of Sample Information


The expected value of sample information (EVSI) is the expected gain from making
decisions based on sample information:
EVSI = |EVwSI – EVwoSI|
where
• EVSI = expected value of sample information.
• EVwSI = expected value with sample information about the states of nature.
• EVwoSI = expected value without sample information about the states of nature = EVwoPI.

• EVSI Calculation
▪ Step 1: Determine the optimal decision and its expected return for the possible
outcomes of the sample or survey using the posterior probabilities for the
states of nature.
▪ Step 2: Compute the expected value of these optimal returns.
▪ Step 3: Subtract the EV of the optimal decision obtained without using the
sample information from the amount determined in Step 2.
17

Example: Burger Prince


Burger Prince Restaurant is contemplating opening a new restaurant
on Main Street. It has three different models, each with a different
seating capacity. Burger Prince estimates that the average number
of customers per hour will be 80, 100, or 120. The prior probabilities
are P(s1)=.4, P(s2)=.2 and P(s3)=.4

• Payoff Table

Average Number of Customers Per Hour


s1 = 80 s2 = 100 s3 = 120

Model A $10,000 $15,000 $14,000


Model B $ 8,000 $18,000 $12,000
Model C $ 6,000 $16,000 $21,000
18


Example: Burger Prince


• Sample Information
Burger Prince must decide whether or not to purchase a marketing
survey from Stanton Marketing for $1,000. The results of the survey are
"favorable (F)" or "unfavorable (U)". The conditional probabilities are:
P(favorable|80 customers per hour) = P(F|s1) = .2
P(favorable|100 customers per hour) = P(F|s2) = .5
P(favorable|120 customers per hour) = P(F|s3) = .9

Should Burger Prince have the survey performed by Stanton Marketing?


▪ How to calculate the posterior probabilities P(sj|F) and P(sj|U)?
20

Example: Burger Prince


Should Burger Prince have the survey performed by Stanton Marketing?

We have to know the posterior probabilities

How to calculate the posterior probabilities P(sj|F) and P(sj|U)?


21

Example: Burger Prince


• Solution Spreadsheet for Posterior Probabilities
22
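Since the spreadsheet itself is not reproduced here, the three-step posterior computation it performs can be sketched in Python (priors and conditionals are those given earlier; the names are my own):

```python
# Sketch of the three-step branch (posterior) probability computation
# for the Burger Prince survey.
priors = {"s1": 0.4, "s2": 0.2, "s3": 0.4}
p_f_given_s = {"s1": 0.2, "s2": 0.5, "s3": 0.9}   # P(F|sj); P(U|sj) = 1 - P(F|sj)

def posteriors(prior, likelihood):
    # Step 1: joint probabilities P(sj and indicator).
    joint = {s: prior[s] * likelihood[s] for s in prior}
    # Step 2: marginal probability of the indicator.
    marginal = sum(joint.values())
    # Step 3: posterior distribution P(sj | indicator).
    return {s: joint[s] / marginal for s in joint}, marginal

post_f, p_f = posteriors(priors, p_f_given_s)
post_u, p_u = posteriors(priors, {s: 1 - p for s, p in p_f_given_s.items()})

print(round(p_f, 2), round(p_u, 2))                 # 0.54 0.46
print({s: round(p, 3) for s, p in post_f.items()})  # {'s1': 0.148, 's2': 0.185, 's3': 0.667}
print({s: round(p, 3) for s, p in post_u.items()})  # {'s1': 0.696, 's2': 0.217, 's3': 0.087}
```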

Example: Burger Prince

[Decision tree: chance node 1 branches on the survey outcome, F or U; decision
nodes 2 (after F) and 3 (after U) each branch into d1, d2, d3, leading to
chance nodes 4–9.]
23

Example: Burger Prince


If the survey outcome is favorable (F), with P(F) = .54:
EV(d1|F) = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
EV(d2|F) = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
EV(d3|F) = .148(6,000) + .185(16,000) + .667(21,000) = $17,855
24

Example: Burger Prince


If the survey outcome is favorable (F), with P(F) = .54, the best decision is d3 ($17,855):
EV(d1|F) = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
EV(d2|F) = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
EV(d3|F) = .148(6,000) + .185(16,000) + .667(21,000) = $17,855

If the survey outcome is unfavorable (U), with P(U) = .46, the best decision is d1 ($11,433):
EV(d1|U) = .696(10,000) + .217(15,000) + .087(14,000) = $11,433
EV(d2|U) = .696(8,000) + .217(18,000) + .087(12,000) = $10,554
EV(d3|U) = .696(6,000) + .217(16,000) + .087(21,000) = $9,475
25

Example: Burger Prince


• Expected Value of Sample Information
If the outcome of the survey is "favorable" choose Model C. If it is unfavorable,
choose Model A.

EVSI = |EVwSI – EVwoSI| = ?

• Efficiency of Sample Information


The efficiency of the survey:
EVSI/EVPI = ?
26

Example: Burger Prince


• Expected Value of Sample Information
If the outcome of the survey is "favorable" choose Model C. If it is unfavorable,
choose Model A.

EVSI = |EVwSI – EVwoSI| = |.54($17,855) + .46($11,433) - $14,000| = $900.88


▪ EVwSI = P(F)*best{EV(di|F)} + P(U)*best{EV(di|U)}
▪ EVwoSI = EVwoPI = best{EV(di)}

Since the EVSI ($900.88) is less than the $1,000 cost of the survey, the survey
should not be purchased; Burger Prince would be willing to pay at most $900.88 for this survey.

• Efficiency of Sample Information


The efficiency of the survey:
EVSI/EVPI = ($900.88)/($2000) = .4504
The information from the survey is 45.04% as efficient as perfect information.
27
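The EVSI and efficiency figures above can be reproduced with a short sketch (using the rounded posterior probabilities from the slides; the function and variable names are my own):

```python
# Sketch reproducing the EVSI and efficiency-of-sample-information figures.
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}
post_f = [0.148, 0.185, 0.667]    # P(sj|F)
post_u = [0.696, 0.217, 0.087]    # P(sj|U)
p_f, p_u = 0.54, 0.46             # P(F), P(U)

def best_ev(probs):
    """Best expected payoff over the decision alternatives."""
    return max(sum(p * v for p, v in zip(probs, vals)) for vals in payoffs.values())

ev_with_si = p_f * best_ev(post_f) + p_u * best_ev(post_u)   # EVwSI ≈ 14,900.88
ev_without_si = best_ev([0.4, 0.2, 0.4])                     # EVwoSI = 14,000
evsi = abs(ev_with_si - ev_without_si)
efficiency = evsi / 2000                                     # EVSI / EVPI
print(round(evsi, 2), round(efficiency, 4))                  # 900.88 0.4504
```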

Risk Analysis
• Risk analysis helps the decision maker recognize the difference between:
▪ the expected value of a decision alternative, and the payoff that might actually occur
• The risk profile for a decision alternative shows the possible payoffs for the decision alternative
along with their associated probabilities.
• Sensitivity analysis helps the decision maker by describing how changes in the state-of-nature
probabilities and/or changes in the payoffs affect the recommended decision alternative.

[Chart — risk profile for the Model C decision alternative: profit of $6,000
with probability .40, $16,000 with probability .20, and $21,000 with
probability .40; x axis: Profit ($thousands), y axis: Probability.]
28

Sensitivity Analysis
• Some of the quantities in a decision analysis, particularly the probabilities,
are often intelligent guesses at best.
• It is important to accompany any decision analysis with a sensitivity
analysis.
• Sensitivity analysis can be used to determine how changes to the
following inputs affect the recommended decision alternative:
▪ probabilities for the states of nature
▪ values of the payoffs
• If a small change in the value of one of the inputs causes a change in the
recommended decision alternative, extra effort and care should be taken in
estimating the input value.
29

Sensitivity Analysis
• One approach to sensitivity analysis is to arbitrarily assign different values
to the probabilities of the states of nature and/or the payoffs and resolve the
problem. If the recommended decision changes, then you know that the
solution is sensitive to the changes.

• For the special case of two states of nature, a graphical technique can be
used to determine how sensitive the solution is to the probabilities
associated with the states of nature.
30

CAL Building: Sensitivity Analysis


• This problem has two states of nature. Previously, we stated that CAL
Condominiums estimated that the probability of future low demand is 0.35
and 0.65 is the probability of high demand. These probabilities yielded the
recommended decision to build the medium complex.

• In order to see how sensitive this recommendation is to changing probability


values, we will let p equal the probability of low demand.
Thus (1-p) is the probability of high demand. Therefore

EV( small) = 8p + 8(1-p)= 8


EV( medium) = 5p + 15(1-p) = 15 – 10p
EV( large) = -11p + 22(1-p) = 22 – 33p
31

CAL Building: Sensitivity Analysis


• Next we will plot the expected value lines for each decision by plotting p on the x
axis and EV on the y axis.
EV( small) = 8; EV( medium) = 15 – 10p; EV( large) = 22 – 33p
[Graph: EV (y axis) plotted against p (x axis), showing the lines EV(large),
EV(medium), and EV(small), with intersection points B1 and B2.]
32

CAL Building: Sensitivity Analysis


• Since CAL condominiums list payoffs are in terms of profits, we know that the highest
profits is desirable.
• Look over the entire range of p (p=0 to p=1) and determine the range over which each
decision yields the highest profits.

[Graph: the same EV lines; as p increases from 0 to 1, the highest line
changes from EV(large) to EV(medium) to EV(small).]
33

CAL Building: Sensitivity Analysis


• Do not estimate the values of B1 or B2 (the points where the lines intersect)
from the graph. Determine the exact intersection points algebraically.

• B1 is the point where the EV( large) line intersects with the EV( medium) line:
To find this point set these two lines equal to each other and solve for p.
22-33p= 15-10p
7= 23p So B1 equals 0.3043
p=7/23= 0.3043

• B2 is the point where the EV( medium) line intersects with the EV( small) line:
15-10p = 8
7 = 10p So B2 equals 0.7
p = 0.7
34
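The breakeven computation above can be sketched as follows (the function names are my own; p is the probability of low demand):

```python
# Sketch of the two-state sensitivity analysis for the CAL Building example.
def ev_small(p):  return 8 * p + 8 * (1 - p)      # = 8
def ev_medium(p): return 5 * p + 15 * (1 - p)     # = 15 - 10p
def ev_large(p):  return -11 * p + 22 * (1 - p)   # = 22 - 33p

# Exact breakeven points, from setting adjacent EV lines equal:
b1 = 7 / 23   # EV(large) = EV(medium): 22 - 33p = 15 - 10p
b2 = 7 / 10   # EV(medium) = EV(small): 15 - 10p = 8
print(round(b1, 4), b2)   # 0.3043 0.7

def recommend(p):
    """Decision with the highest expected profit at a given p."""
    evs = {"large": ev_large(p), "medium": ev_medium(p), "small": ev_small(p)}
    return max(evs, key=evs.get)

print(recommend(0.2), recommend(0.5), recommend(0.9))   # large medium small
```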

CAL Building: Sensitivity Analysis

[Graph: the EV lines with the breakeven points marked at p = 0.3043 and p = 0.7.]
35

CAL Building: Sensitivity Analysis


• From the graph we see that if the probability of low demand (p) is between 0 and
0.3043, we recommend building a large complex.

• From the graph we see that if the probability of low demand (p) is between 0.3043 and
0.7, we recommend building a medium complex.

• From the graph we see that if the probability of low demand (p) is between 0.7 and 1,
we recommend building a small complex.

From this sensitivity analysis we see that if CAL Condos’ estimate of 0.35 for the
probability of low demand were slightly lower (below 0.3043), the recommended
decision would change from the medium to the large complex.
36

Bayesian Networks
• What are they?
▪ Bayesian networks (also called Bayesian Belief Networks or Bayesian nets) are a
framework for representing and analyzing models involving uncertainty

▪ A Bayesian Net is a probabilistic graphical model that represents a set of random variables
and their conditional dependencies via a directed graph (DAG)
▪ A conditional probability table (CPT) is associated with each node

Example (network: B and E are parents of A; A is a parent of J and M):
You have a new security alarm installed. It reliably detects Theft (B), but
also responds to minor earthquakes (E). Two neighbours, John (J) and Mary (M),
promise to call the police when they hear the alarm (A). John always calls
when he hears the alarm, but sometimes confuses the alarm with the phone
ringing and calls then also. On the other hand, Mary likes loud music and
sometimes doesn’t hear the alarm. Given evidence about who has and hasn’t
called, you’d like to estimate the probability of a Theft.
• Influence diagrams are an example of Bayesian networks


37

Bayesian Networks

A Bayesian network is made up of:


1. A Directed Acyclic Graph
A
Arrows indicate
dependencies, i.e.
B causal connections

C D

2. A set of tables for each node in the graph

P(A) = 0.4

A      P(B|A)        B      P(D|B)        B      P(C|B)
true   0.3           true   0.95          true   0.1
false  0.99          false  0.98          false  0.6
38
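As a sketch of what this structure buys us: the joint distribution factorizes over the graph as P(A, B, C, D) = P(A) P(B|A) P(C|B) P(D|B). The CPT values below are read from the tables above; the helper names are my own.

```python
# Sketch: factorized joint of the four-node network above.
from itertools import product

p_a_true = 0.4
p_b_given_a = {True: 0.3, False: 0.99}    # P(B=true | A)
p_c_given_b = {True: 0.1, False: 0.6}     # P(C=true | B)
p_d_given_b = {True: 0.95, False: 0.98}   # P(D=true | B)

def bern(p_true, value):
    """Probability of a Boolean value, given P(value = true)."""
    return p_true if value else 1 - p_true

def joint(a, b, c, d):
    return (bern(p_a_true, a)
            * bern(p_b_given_a[a], b)
            * bern(p_c_given_b[b], c)
            * bern(p_d_given_b[b], d))

print(joint(True, True, True, True))   # 0.4 * 0.3 * 0.1 * 0.95 ≈ 0.0114

# Sanity check: the joint sums to 1 over all 16 assignments.
total = sum(joint(*vals) for vals in product([True, False], repeat=4))
assert abs(total - 1.0) < 1e-9
```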

Directed Acyclic Graph

• Each node in the graph is a random variable with a conditional probability
distribution.
• A node X is a parent of another node Y if there is an arrow from node X to
node Y, e.g. A is a parent of B.
• Informally, an arrow from node X to node Y means X has a direct influence on Y.
• Each arrow denotes conditional dependence.
39

Set of Tables for Each Node

• Each node Xi has a conditional probability distribution P(Xi | Parents(Xi))
that quantifies the effect of the parents on the node.
• The parameters are the probabilities in these conditional probability
tables (CPTs).

P(A) = 0.4

A      P(B|A)        B      P(C|B)        B      P(D|B)
true   0.3           true   0.1           true   0.95
false  0.99          false  0.6           false  0.98
40

Set of Tables for Each Node

Conditional Probability Distribution for C given B

B      P(C|B)
true   0.1
false  0.6

For a given combination of values of the parents (B in this example), the
entries for P(C=true | B) and P(C=false | B) must add up to 1,
e.g., P(C=true | B=false) + P(C=false | B=false) = 1
41

Bayesian Net Concepts

• Chain Rule: P(A,B) = P(A) P(B|A)


• Conditional Independence: P(A|B,C) = P(A|B)
• Bayes Rule:
P(A|B) = P(B|A) P(A) / P(B)
(posterior = likelihood × prior / evidence)
Example: Fuel efficiency for vehicles
• Let’s assume that we already have P(Mpg,Horse)
P(Mpg, Horse):
  P(good, low) = 0.36    P(good, high) = 0.04
  P(bad, low)  = 0.12    P(bad, high)  = 0.48

How would you rewrite this using the Chain rule? i.e. how to calculate conditional probabilities?
42

Review: Chain Rule

The joint distribution P(Mpg, Horse):
           low     high
  good     0.36    0.04
  bad      0.12    0.48

factors as P(Mpg) * P(Horse|Mpg), with:
  P(Mpg):        P(good) = 0.4,  P(bad) = 0.6
  P(Horse|Mpg):  P(low|good) = 0.89,  P(high|good) = 0.11,
                 P(low|bad)  = 0.21,  P(high|bad)  = 0.79
43

Review: Chain Rule

• Chain Rule: P(A,B) = P(A) P(B|A)

Applying the chain rule to each entry of the joint:
  P(good, low)  = P(good) * P(low|good)  = 0.4 * 0.89 ≈ 0.36
  P(good, high) = P(good) * P(high|good) = 0.4 * 0.11 ≈ 0.04
  P(bad, low)   = P(bad)  * P(low|bad)   = 0.6 * 0.21 ≈ 0.12
  P(bad, high)  = P(bad)  * P(high|bad)  = 0.6 * 0.79 ≈ 0.48

with P(good) = 0.4, P(bad) = 0.6 and the conditionals from the previous slide.
44

How to Make a Bayesian Net

P(Mpg, Horse) = P(Mpg) * P(Horse | Mpg)


45
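A quick sketch of this factorization run in the other direction: starting from the joint table, recover P(Mpg) and P(Horse|Mpg), then confirm the chain rule reproduces the joint (the dictionary layout is my own):

```python
# Sketch: recover the factors of P(Mpg, Horse) = P(Mpg) * P(Horse|Mpg).
joint = {("good", "low"): 0.36, ("good", "high"): 0.04,
         ("bad", "low"): 0.12, ("bad", "high"): 0.48}

# Marginal P(Mpg): sum out Horse.
p_mpg = {m: sum(p for (mm, _), p in joint.items() if mm == m)
         for m in ("good", "bad")}

# Conditional P(Horse|Mpg): divide each joint entry by the marginal.
p_horse_given_mpg = {(h, m): joint[(m, h)] / p_mpg[m] for (m, h) in joint}

print({m: round(v, 2) for m, v in p_mpg.items()})   # {'good': 0.4, 'bad': 0.6}
# The chain rule reproduces every entry of the joint:
for (m, h), p in joint.items():
    assert abs(p_mpg[m] * p_horse_given_mpg[(h, m)] - p) < 1e-12
```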

How to Make a Bayesian Net

P(Mpg, Horse) = P(Mpg) * P(Horse | Mpg)

Mpg

Horse
46

How to Make a Bayesian Net

P(Mpg, Horse) = P(Mpg) * P(Horse | Mpg)


P(Mpg)

P(good) = 0.4
Mpg
P( bad) = 0.6

P(Horse|Mpg)

P( low|good) = 0.90
Horse P( low| bad) = 0.21
P(high|good) = 0.10
P(high| bad) = 0.79

• Each node is a probability function


• Each arc denotes conditional dependence
47

How to Make a Bayesian Net

Step 1: Rewrite joint using the Chain rule.

P(Mpg, Horse, Accel) = P(Mpg) P(Horse | Mpg) P(Accel | Mpg, Horse)


48

How to Make a Bayesian Net

Step 1: Rewrite joint using the Chain rule.

P(Mpg, Horse, Accel) = P(Mpg) P(Horse | Mpg) P(Accel | Mpg, Horse)

Mpg

Horse

Accel
49

How to Make a Bayesian Net


P(Mpg):  P(good) = 0.4,  P(bad) = 0.6

P(Horse|Mpg):
  P(low|good)  = 0.90    P(high|good) = 0.10
  P(low|bad)   = 0.21    P(high|bad)  = 0.79

P(Accel|Mpg, Horse):
  P(slow|good, low)  = 0.97    P(fast|good, low)  = 0.03
  P(slow|good, high) = 0.15    P(fast|good, high) = 0.85
  P(slow|bad, low)   = 0.90    P(fast|bad, low)   = 0.10
  P(slow|bad, high)  = 0.05    P(fast|bad, high)  = 0.95

(Network: Mpg → Horse, Mpg → Accel, Horse → Accel.)

* Note: I made these up too…


50

How to Make a Bayesian Net


P(Mpg):  P(good) = 0.4,  P(bad) = 0.6

P(Horse|Mpg):
  P(low|good)  = 0.89    P(high|good) = 0.11
  P(low|bad)   = 0.21    P(high|bad)  = 0.79

P(Accel|Mpg, Horse):
  P(slow|good, low)  = 0.97    P(fast|good, low)  = 0.03
  P(slow|good, high) = 0.15    P(fast|good, high) = 0.85
  P(slow|bad, low)   = 0.90    P(fast|bad, low)   = 0.10
  P(slow|bad, high)  = 0.05    P(fast|bad, high)  = 0.95

A Miracle Occurs!
You are told by a domain expert that Accel is independent of Mpg given Horse!

i.e., P(Accel | Mpg, Horse) = P(Accel | Horse)


51

How to Make a Bayesian Net


Thank you, domain expert! Now I only need to learn 5 parameters instead of 7
from my data! My parameter estimates will be more accurate as a result!

P(Mpg):  P(good) = 0.4,  P(bad) = 0.6

P(Horse|Mpg):
  P(low|good)  = 0.89    P(high|good) = 0.11
  P(low|bad)   = 0.21    P(high|bad)  = 0.79

P(Accel|Horse):
  P(slow|low)  = 0.22    P(fast|low)  = 0.78
  P(slow|high) = 0.64    P(fast|high) = 0.36

(Network: Mpg → Horse → Accel.)
52

Example: Simple Bayesian Network

S ∈ {no, light, heavy} Smoking Cancer

P(S=no) 0.80 C ∈ {none, benign, malignant}


P(S=light) 0.15
P(S=heavy) 0.05
Smoking= no light heavy
P(C=none) 0.96 0.88 0.60
P(C=benign) 0.03 0.08 0.25
P(C=malig) 0.01 0.04 0.15
53
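As a small sketch of inference in this two-node network, the marginal P(Cancer) follows by summing over Smoking, P(C) = Σs P(S=s) P(C|S=s) (the dictionary layout is my own; the probabilities are from the CPTs above):

```python
# Sketch: marginalize over Smoking to get P(Cancer).
p_s = {"no": 0.80, "light": 0.15, "heavy": 0.05}
p_c_given_s = {                   # columns of the CPT above
    "no":    {"none": 0.96, "benign": 0.03, "malig": 0.01},
    "light": {"none": 0.88, "benign": 0.08, "malig": 0.04},
    "heavy": {"none": 0.60, "benign": 0.25, "malig": 0.15},
}

p_c = {c: sum(p_s[s] * p_c_given_s[s][c] for s in p_s)
       for c in ("none", "benign", "malig")}

print({c: round(p, 4) for c, p in p_c.items()})
# {'none': 0.93, 'benign': 0.0485, 'malig': 0.0215}
```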

More Complex Bayesian Network

Nodes represent variables; links represent “causal” relations.

[Diagram: Age and Gender at the top, pointing to Exposure to Toxics and
Smoking, which point to Cancer; Cancer points to Serum Calcium and Lung Tumor.]
54

More Complex Bayesian Network

[Diagram: the same network, with Cancer labeled as the condition.]
55

More Complex Bayesian Network

[Diagram: the same network, with Serum Calcium and Lung Tumor labeled as
observable symptoms.]
56

Independence

Age and Gender are independent.

P(A,G) = P(G)P(A)
P(A|G) = P(A)    A ⊥ G
P(G|A) = P(G)    G ⊥ A

P(A,G) = P(G|A) P(A) = P(G)P(A)
P(A,G) = P(A|G) P(G) = P(A)P(G)
57

Conditional Independence

Cancer is independent of Age and Gender given Smoking.

P(C|A,G,S) = P(C|S)
58


The event (S) can be caused either by event (E) or event (M). The event (M) also causes the event
(B). The Bayesian network and corresponding conditional probability tables for this situation are
shown below.
63

1- Compute the following entry


2- What is the probability that event B occurs?

3- What is the probability that event M is occurring given that B occurred?

4- What is the probability that event E is present given that event M is occurring?
64
65

What is the probability that event B occurs?


66

What is the probability that event M is occurring given that B occurred?


67

What is the probability that event E is present given that event M is occurring?
