Midterm Fall02
CLOSED BOOK
(one sheet of notes and a calculator allowed)
Write your answers on these pages and show your work. If you feel that a question is not fully
specified, state any assumptions you need to make in order to solve the problem. You may use
the backs of these sheets for scratch work.
Write your name on this and all other pages of this exam. Make sure your exam contains six
problems on twelve pages.
Name ________________________________________________________________
Student ID ________________________________________________________________
Problem Score Max
1 ______ 15
2 ______ 20
3 ______ 5
4 ______ 20
5 ______ 10
6 ______ 30
F1 F2 F3 Output
a a a +
c b c +
c a c +
b a a -
a b c -
b b c -
a) What score would the information gain calculation assign to each of the features?
Be sure to show all your work (use the back of this or the previous sheet if needed).
b) Which feature would be chosen as the root of the decision tree being built? ____________
(Break ties in favor of F1 over F2 over F3.)
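The gain scores in parts (a) and (b) can be checked with a short script (a sketch, not part of the original exam; the dataset is transcribed from the table above):

```python
from math import log2

# Training set from the problem: (F1, F2, F3, label)
examples = [
    ("a", "a", "a", "+"),
    ("c", "b", "c", "+"),
    ("c", "a", "c", "+"),
    ("b", "a", "a", "-"),
    ("a", "b", "c", "-"),
    ("b", "b", "c", "-"),
]

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((labels.count(l) / n) * log2(labels.count(l) / n)
                for l in set(labels))

def info_gain(feature_index):
    """Entropy of the whole set minus the weighted entropy after splitting."""
    all_labels = [ex[-1] for ex in examples]
    remainder = 0.0
    for v in {ex[feature_index] for ex in examples}:
        subset = [ex[-1] for ex in examples if ex[feature_index] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(all_labels) - remainder

for i, name in enumerate(["F1", "F2", "F3"]):
    print(name, round(info_gain(i), 3))   # F1: 0.667, F2: 0.082, F3: 0.0
```

F1 scores highest, so it would be chosen as the root, consistent with the tie-breaking rule above.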
c) Show the next interior node, if any, that the C5 algorithm would add to the decision tree.
Again, be sure to show all your work. (Even if this second interior node does not completely
separate the training data, stop after adding this second node.)
Be sure to label all the leaf nodes in the decision tree that you have created.
d) Assuming you have the following tuning set, what would the first round of HW1’s pruning
algorithm do to the decision tree produced in Part c? (If you need to break any ties, default to
“-”.) Justify your answer.
F1 F2 F3 Output
b a b +
a b a +
c c a -
a) For each of the following search strategies, indicate which goal state is reached (if any) and
list, in order, all the states popped off the OPEN list. When all else is equal, nodes should
be removed from OPEN in alphabetical order.
Iterative Deepening
Hill Climbing
A*
[Search graph figure: start state S; intermediate states A, B, C, D, E, F; goal states G1, G2, G3
(each with h = 0). The edge costs and per-state heuristic values shown in the original figure did
not survive extraction.]
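Since the figure's costs cannot be recovered, here is a generic A* sketch on a small hypothetical graph (the states, edge costs, and heuristic values below are illustrative assumptions, not the exam's graph); ties on f are broken alphabetically, as the question requires:

```python
import heapq

# Hypothetical graph: edges as {state: [(neighbor, cost), ...]}
# and an assumed-admissible heuristic h per state.
edges = {
    "S": [("A", 3), ("C", 5)],
    "A": [("B", 2)],
    "C": [("B", 3)],
    "B": [("G", 4)],
    "G": [],
}
h = {"S": 6, "A": 5, "C": 4, "B": 3, "G": 0}

def a_star(start, goal):
    """Return (cost, path, popped) using f = g + h; ties broken alphabetically."""
    frontier = [(h[start], start, 0, [start])]   # (f, state, g, path)
    closed = set()
    popped = []                                  # order states leave OPEN
    while frontier:
        f, state, g, path = heapq.heappop(frontier)
        if state in closed:
            continue
        closed.add(state)
        popped.append(state)
        if state == goal:
            return g, path, popped
        for nbr, cost in edges[state]:
            if nbr not in closed:
                heapq.heappush(frontier, (g + cost + h[nbr], nbr, g + cost, path + [nbr]))
    return None

print(a_star("S", "G"))  # (9, ['S', 'A', 'B', 'G'], ['S', 'A', 'B', 'C', 'G'])
```

Because heapq compares tuples element by element, putting the state name second in the tuple gives the alphabetical tie-breaking for equal f-values.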
b) (Note: This question talks about search spaces in general and is not referring to the specific
search space used in Part a.)
iii. If h(node) is an admissible heuristic, for what range of Wgt values is the above
scoring function guaranteed to produce an admissible search strategy?
Explain your answer.
[ (P ∨ Q) ∧ (Q ∨ R) ] ⇒ (P ∨ R)
ii) When Fido is hungry, Fido barks; but Fido barking does not
necessarily mean that Fido is hungry.
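Whether the wff above, [ (P ∨ Q) ∧ (Q ∨ R) ] ⇒ (P ∨ R), is valid can be checked by brute force over all eight truth assignments (a sketch, assuming the question asks about validity):

```python
from itertools import product

def wff(p, q, r):
    # [ (P ∨ Q) ∧ (Q ∨ R) ] ⇒ (P ∨ R), with A ⇒ B encoded as (not A) or B
    antecedent = (p or q) and (q or r)
    return (not antecedent) or (p or r)

# Collect every assignment that falsifies the wff.
counterexamples = [(p, q, r) for p, q, r in product([False, True], repeat=3)
                   if not wff(p, q, r)]
print(counterexamples)  # [(False, True, False)]: P=F, Q=T, R=F falsifies it
```

So the wff is not valid: Q alone satisfies both disjuncts of the antecedent without making P ∨ R true.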
1 P ∧ Z given
2 (¬R ∧ ¬W) ∨ (¬P) given
3 (W ∧ Q) ⇒ P given
4 Q ∨ W given
5 Q ⇒ (S ∨ P) given
6 (P ∧ Q) ⇒ (S ∨ R) given
10
11
12
13
14
15
16
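The conclusion the proof is supposed to reach is not visible in this extraction, but a brute-force model check over the six givens shows which literals they force, which is useful for checking one's proof steps (a sketch, not part of the original exam):

```python
from itertools import product

VARS = ["P", "Z", "R", "W", "Q", "S"]

def implies(a, b):
    return (not a) or b

def premises(m):
    """True iff assignment m satisfies all six given wffs."""
    P, Z, R, W, Q, S = (m[v] for v in VARS)
    return (
        P and Z and                              # 1: P ∧ Z
        ((not R and not W) or (not P)) and       # 2: (¬R ∧ ¬W) ∨ (¬P)
        implies(W and Q, P) and                  # 3: (W ∧ Q) ⇒ P
        (Q or W) and                             # 4: Q ∨ W
        implies(Q, S or P) and                   # 5: Q ⇒ (S ∨ P)
        implies(P and Q, S or R)                 # 6: (P ∧ Q) ⇒ (S ∨ R)
    )

models = [dict(zip(VARS, vals)) for vals in product([False, True], repeat=6)
          if premises(dict(zip(VARS, vals)))]
# Literals true in every model are entailed by the givens.
forced = {v for v in VARS if all(m[v] for m in models)}
print(sorted(forced))  # ['P', 'Q', 'S', 'Z']
```

The givens have exactly one model, in which P, Z, Q, and S are true and R and W are false; any of those literals is a legitimate target for the derivation.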
Simulated Annealing
Expected-Value Calculations
Interpretations
Minimax Algorithm
CLOSED List
Assume that each training example is described using features F1 through F3 and that each feature
has three possible values: a, b, or c. Each training example is labeled as belonging to either the
positive (+) category or the negative (-) category. Also assume that you have set aside 10% of the
training examples for a test set and another 10% for a tuning set.
Your task is to cast this as a search problem where your search algorithm’s job is to find the best
propositional-logic wff for this task. Your wff should use implication (⇒) as its main connective
and the wff should imply when an example belongs to the positive category (e.g., as done in the
sample wff above).
a) Describe a search space that you could use for learning such wffs. Show a sample node.
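One possible state representation, as an illustration rather than the required answer: a node is a disjunction of conjunctions of feature = value literals, read as implying the positive category, and children are generated by adding literals or adding disjuncts (the two hypothetical operators below are the kind asked for in part c):

```python
# Hypothetical node representation for the wff-learning search space.
# A node is a list of conjunctions of (feature, value) literals, read as
#     (conj_1 ∨ conj_2 ∨ ...) ⇒ Positive

def specialize(node, i, literal):
    """Operator 1: add a literal to conjunction i (the wff covers fewer examples)."""
    child = [set(c) for c in node]     # copy so the parent node is unchanged
    child[i].add(literal)
    return [frozenset(c) for c in child]

def generalize(node, literal):
    """Operator 2: add a new one-literal conjunction (the wff covers more examples)."""
    return list(node) + [frozenset([literal])]

# Sample node: ((F1 = c) ∨ (F1 = a ∧ F2 = a)) ⇒ Positive
sample = [frozenset({("F1", "c")}), frozenset({("F1", "a"), ("F2", "a")})]
```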
c) Describe two (2) operators that your search algorithm would use. Be concrete.
i)
ii)
d) Would you prefer to use a goal test for this task or would you prefer to look for the
highest scoring state you can find? Justify your answer.
g) Show one (1) state (other than the initial state) that might arise during your search and
show three (3) possible children of this state.
h) What would be the analog to decision-tree pruning for this task? Explain how and why
“pruning” could be done in this style of machine learning.