
UC Berkeley – Computer Science

CS188: Introduction to Artificial Intelligence

Practice Midterm 2, Fall 2017

1. Bayes Net Inference

Consider the following Bayes' net where all variables are binary.

[Bayes' net diagram]
I. Assume that we would like to perform inference to obtain P(X_n | Y_1 = y_1, Y_2 = y_2, ⋯, Y_n = y_n).
(a) What is the number of rows in the largest factor generated by inference by enumeration, for this query P(X_n | Y_1 = y_1, Y_2 = y_2, ⋯, Y_n = y_n)?
__________________


(b) Suppose we decide to perform variable elimination to calculate the query P(X_n | Y_1 = y_1, Y_2 = y_2, ⋯, Y_n = y_n), and we eliminate the variables in the ordering X_1, X_2, ⋯, X_{n−1}, Z. Write out the form and the size of the factor generated by the elimination of X_i and Z, where i ∈ {1, ⋯, n − 1}.

                           The new factor generated    The size of the new factor
After X_i is eliminated    __________________          __________________
After Z is eliminated      __________________          __________________


(c) Find the best and the worst variable elimination orderings for this query, where the quality of an ordering is measured by the sum of the sizes of the factors that are generated.
Best ordering: __________________
Worst ordering: __________________

II. Assume now we want to use variable elimination to calculate another query P(Y_n | X_1, X_2, ⋯, X_{n−1}).
(a) Mark all of the following variables that produce a constant factor after being eliminated, for all possible elimination orderings.

☐ Y_1    ☐ Y_2    ☐ Y_{n−1}    ☐ Z

(b) List all the variables that can be ignored (i.e., whose conditional probability tables can be ruled out from the initial factor set) in performing the query P(Y_n | X_1, X_2, ⋯, X_{n−1}). Briefly explain why these variables can be ignored. (Hint: you can use the results from the previous part or the conditional independencies encoded in the given BN.)
Variables that can be ignored when computing the query P(Y_n | X_1, X_2, ⋯, X_{n−1}):
__________________

Reason:
__________________
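
For intuition about what each elimination step produces, here is a minimal sketch of variable elimination in Python. The Bayes' net diagram did not survive into this copy, so the structure below (a two-step chain X1 → X2 with observed children Y1 and Y2) and every CPT number are illustrative assumptions, not the exam's figure:

```python
# Minimal variable-elimination sketch on an ASSUMED two-step chain
# X1 -> X2, with observed children Y1 and Y2 (X1 -> Y1, X2 -> Y2).
# All CPT numbers are made up for illustration. Query: P(X2 | Y1=1, Y2=1).
P_X1 = {0: 0.6, 1: 0.4}
P_X2_given_X1 = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}
P_Y_given_X = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7}

def restricted_joint(x1, x2):
    # Product of all CPTs with the evidence Y1=1, Y2=1 plugged in.
    return (P_X1[x1] * P_Y_given_X[(x1, 1)]
            * P_X2_given_X1[(x1, x2)] * P_Y_given_X[(x2, 1)])

# Eliminating X1 sums it out, leaving a new factor over X2 alone.
f_X2 = {x2: sum(restricted_joint(x1, x2) for x1 in (0, 1)) for x2 in (0, 1)}

# Normalize to obtain the posterior P(X2 | Y1=1, Y2=1).
Z = sum(f_X2.values())
print({x2: v / Z for x2, v in f_X2.items()})
```

Eliminating a variable multiplies together the factors that mention it and sums it out; the size of the surviving factor is exactly what parts (b) and (c) ask you to track.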

2. Decision Network

In this problem, we will model a lottery as a decision network, where we are deciding whether
or not it is worth buying a lottery ticket and playing the lottery.
We can consider the outcome of a lottery to be a node X, with A as the decision to play the lottery or not and U as our utility. The resulting decision network may look something like this:

[Decision network diagram: chance node X and decision node A are both parents of the utility node U]
For all parts of this question, assume that we have a utility function U(x) = x^2, where x is the amount of money we pay and then (hopefully) win in the lottery, and that the price of a lottery ticket is $4.

Q1: For this first part, assume that the lottery is [0, 9/10; 14, 1/10]. Fill in the following table for U | X, A.

X     A        U | X, A
0     buy      __________________
14    buy      __________________
0     no buy   __________________
14    no buy   __________________

What is MEU({})?

What action achieves MEU({})?
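
As a sanity check on the table above, here is a minimal sketch of the expected-utility computation. It assumes U(x) = x^2 is applied to net money, i.e. winnings minus the $4 ticket when buying and 0 otherwise; that reading of the prompt is an assumption, not an official solution:

```python
# Hedged sketch: U(x) = x^2 on ASSUMED net money, $4 ticket,
# lottery X ~ [0 w.p. 9/10; 14 w.p. 1/10].
TICKET = 4
lottery = {0: 9/10, 14: 1/10}   # outcome -> probability

def utility(net_money):
    return net_money ** 2

def expected_utility(action):
    if action == "buy":
        return sum(p * utility(x - TICKET) for x, p in lottery.items())
    return utility(0)            # no ticket bought, no money won

eus = {a: expected_utility(a) for a in ("buy", "no buy")}
best = max(eus, key=eus.get)
print("EU per action:", eus, "-> MEU({}) =", eus[best], "via", best)
```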



Now, the organization running the lottery has announced a change in policy: there are two lotteries, and when someone buys a lottery ticket, the ticket is randomly assigned to one of the two lotteries.
We can incorporate this into our model by adding a node T that indicates which ticket gets bought; the resulting decision network looks like this:

[Decision network diagram: T is a parent of X; X and A are parents of U]
In addition, we have the following conditional probability tables, which we will use for the remaining parts of this problem:

T           P(T)
lottery1    1/2
lottery2    1/2

T           X     P(X | T)
lottery1    0     9/10
lottery1    14    1/10
lottery2    0     7/10
lottery2    5     3/10

Q2: What is the new MEU?
What action achieves MEU({})?


Q3: What is VPI(T)?
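
The same assumed conventions extend to the two-lottery model. The sketch below mechanizes Q2 and Q3 using the identity VPI(T) = E_T[MEU({T})] − MEU({}), i.e. the expected value of acting after observing T, minus the value of acting blind:

```python
# Same assumptions as before (U(x) = x^2 on net money, $4 ticket);
# the CPTs are the ones from the tables above.
TICKET = 4
P_T = {"lottery1": 1/2, "lottery2": 1/2}
P_X_given_T = {"lottery1": {0: 9/10, 14: 1/10},
               "lottery2": {0: 7/10, 5: 3/10}}

def utility(net_money):
    return net_money ** 2

def expected_utility(action, t=None):
    # EU of an action; if t is given, condition on the ticket type T=t.
    tickets = {t: 1.0} if t is not None else P_T
    return sum(w * p * utility(x - TICKET if action == "buy" else 0)
               for tt, w in tickets.items()
               for x, p in P_X_given_T[tt].items())

meu = max(expected_utility(a) for a in ("buy", "no buy"))
# VPI(T) = E_T[ MEU({T=t}) ] - MEU({})
meu_knowing_T = sum(P_T[t] * max(expected_utility(a, t)
                                 for a in ("buy", "no buy"))
                    for t in P_T)
print("MEU({}) =", meu, " VPI(T) =", meu_knowing_T - meu)
```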







3. Sampling

[Model-Q1: a Bayes' net over the binary variables A, B, C, D, E, with its conditional probability tables]

i. (3 pts) Check the boxes above the Bayes' nets below that could also be valid for the above probability tables.

[Candidate Bayes' net diagrams]

ii. (2 pts) Caryn wants to compute the distribution P(A,C|E=1) using prior sampling on Model-Q1
(given at the top of this page). She draws a bunch of samples. The first of these is (0, 0, 1, 1, 0),
given in (A, B, C, D, E) format. What’s the probability of drawing this sample?
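
Under prior sampling, the probability of drawing a particular full sample is the product of the CPT entries it selects, one per variable given its parents. Model-Q1's diagram and tables are not reproduced here, so the structure (A, B → C; B → D; C → E) and the numbers in this sketch are stand-ins rather than the exam's values:

```python
# Prior-sampling probability of a full sample = product of CPT entries.
# Structure and numbers below are ASSUMED, not Model-Q1's.
P_A = {1: 0.5}                                   # P(A=1)
P_B = {1: 0.6}                                   # P(B=1)
P_C_given_AB = {(0, 0): {1: 0.8}, (0, 1): {1: 0.5},
                (1, 0): {1: 0.3}, (1, 1): {1: 0.1}}
P_D_given_B = {0: {1: 0.7}, 1: {1: 0.4}}
P_E_given_C = {0: {1: 0.2}, 1: {1: 0.9}}

def p(table, value):
    # P(value) for a binary CPT row stored as {1: p_one}.
    return table[1] if value == 1 else 1.0 - table[1]

def prior_sample_prob(a, b, c, d, e):
    return (p(P_A, a) * p(P_B, b) * p(P_C_given_AB[(a, b)], c)
            * p(P_D_given_B[b], d) * p(P_E_given_C[c], e))

print(prior_sample_prob(0, 0, 1, 1, 0))   # sample in (A, B, C, D, E) format
```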


iii. (2 pts) Give an example of an inference query for Model-Q1 with one query variable and one
evidence variable that could be estimated more efficiently (in terms of runtime) using rejection
sampling than by using prior sampling. If none exist, state “not possible”.

iv. (2 pts) Give an example of an inference query for Model-Q1 with one query variable and one
evidence variable for which rejection sampling provides no efficiency advantage (in terms of
runtime) over using prior sampling. If none exist, state “not possible”.
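
For parts iii and iv, the relevant intuition is that rejection sampling only beats prior sampling when the evidence variable is sampled early in topological order, so a mismatch lets us abandon the sample before drawing the rest. A toy sketch with an assumed chain A → E (numbers made up, unrelated to Model-Q1):

```python
import random

# Evidence on A (sampled first) permits early rejection; evidence on E
# (sampled last) means the full sample is always drawn before rejecting.
P_A = {1: 0.3}                               # P(A=1)
P_E_given_A = {0: {1: 0.2}, 1: {1: 0.9}}     # P(E=1 | A)

def trial_evidence_on_A(a_obs):
    a = int(random.random() < P_A[1])
    if a != a_obs:
        return None                  # early rejection: E never sampled
    e = int(random.random() < P_E_given_A[a][1])
    return (a, e)

def trial_evidence_on_E(e_obs):
    a = int(random.random() < P_A[1])          # full work always done
    e = int(random.random() < P_E_given_A[a][1])
    return (a, e) if e == e_obs else None      # reject only at the end
```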

v. (2 pts) Now Caryn wants to determine P(A,C|E=1) for Model-Q1 using likelihood weighting. She draws the five samples shown below, which are given in (A, B, C, D, E) format, where the leftmost sample is "Sample 1" and the rightmost is "Sample 5". What are the weights of the samples S1 and S3?

S1: (0, 0, 1, 1, 1)  S2: (0, 0, 1, 1, 1)  S3: (1, 0, 1, 1, 1)  S4: (0, 1, 0, 0, 1)  S5: (0, 1, 0, 0, 1)

weight(S1): _______________________  weight(S3): ________________________

vi. (1 pt) For the same samples as in part v, compute P(A=1,C=1|E=1) for Model-Q1. Express your
answer as a simplified fraction (e.g. 2 / 3 instead of 4 / 6).
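
Mechanically, a sample's likelihood weight is the product of P(e | parents(e)) over the evidence variables only; the conditional query is then estimated as a weighted count. The sketch below assumes E's parent is C and uses a stand-in CPT, so the printed numbers illustrate the procedure rather than the exam's answers:

```python
# Likelihood weighting with evidence E=1. ASSUMED: E's parent is C,
# with a stand-in CPT; non-evidence variables are sampled, not weighted.
P_E_given_C = {0: {1: 0.2}, 1: {1: 0.9}}   # P(E=1 | C)

samples = [(0, 0, 1, 1, 1), (0, 0, 1, 1, 1), (1, 0, 1, 1, 1),
           (0, 1, 0, 0, 1), (0, 1, 0, 0, 1)]   # (A, B, C, D, E) format

# With a single evidence node, weight(sample) = P(E=1 | c).
weights = [P_E_given_C[c][1] for (_, _, c, _, _) in samples]

# Part vi mechanics: weighted fraction of samples with A=1 and C=1.
hits = sum(w for s, w in zip(samples, weights) if s[0] == 1 and s[2] == 1)
print("weights:", weights, " estimate:", hits / sum(weights))
```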





vii. (2 pts) Select True or False for each of the following:

☐ True  ☐ False   When there is no evidence, prior sampling is guaranteed to yield the exact same answer as inference by enumeration.

☐ True  ☐ False   When collecting a sample during likelihood weighting, evidence variables are not sampled.

☐ True  ☐ False   When collecting a sample during rejection sampling, variables can be sampled in any order.

☐ True  ☐ False   Gibbs sampling is a technique for performing approximate inference, not exact inference.
