Review Exercises - Solution

Uploaded by

Nguyen Thong

Part 1: General Search

Problem 1:
1. Give the values calculated by minimax for all states in the tree. Do not use
alpha-beta pruning

2. Indicate which branches of the tree will be pruned by alpha-beta pruning
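The game tree itself is in the figure, which is not reproduced here, so the concrete minimax values cannot be listed; the procedure, however, can be sketched. The tree below is a stand-in example (tuples are internal nodes, plain numbers are leaf utilities), not the tree from the exercise:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf, pruned=None):
    """Minimax with alpha-beta pruning; without pruning, drop the break."""
    if not isinstance(node, tuple):          # leaf: return its utility
        return node
    if maximizing:
        v = -math.inf
        for i, child in enumerate(node):
            v = max(v, alphabeta(child, False, alpha, beta, pruned))
            alpha = max(alpha, v)
            if alpha >= beta:                # remaining siblings are pruned
                if pruned is not None:
                    pruned.extend(node[i + 1:])
                break
        return v
    v = math.inf
    for i, child in enumerate(node):
        v = min(v, alphabeta(child, True, alpha, beta, pruned))
        beta = min(beta, v)
        if alpha >= beta:
            if pruned is not None:
                pruned.extend(node[i + 1:])
            break
    return v

tree = ((3, 12, 8), (2, 4, 6), (14, 5, 2))   # illustrative max-over-min tree
cut = []
print(alphabeta(tree, True, pruned=cut))      # minimax value 3; cut == [4, 6]
```

On this example the second min-node is cut off after its first leaf (2 ≤ alpha = 3), so leaves 4 and 6 are never examined; the same reasoning applies branch by branch to the tree in the figure.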


Problem 2:
Consider the state space graph shown above. A is the start state and G is the goal state.
The costs for each edge are shown on the graph. Each edge can be traversed in both
directions. Note that the heuristic h1 is consistent but the heuristic h2 is not consistent.
1. Possible paths returned
For each of the following graph search strategies (do not answer for tree search), mark
which, if any, of the listed paths it could return. Note that for some search strategies the
specific path returned might depend on tie-breaking behavior. In any such cases, make
sure to mark all paths that could be returned under some tie-breaking scheme.
Search Algorithm A-B-D-G A-C-D-G A-B-C-D-F-G
Depth first search x x x
Breadth first search x x
Uniform cost search x
A* search with heuristic h1 x
A* search with heuristic h2 x

2. Heuristic function properties


Suppose you are completing the new heuristic function h3 shown below. All the values
are fixed except h3(B).
Node A B C D E F G
h3 10 ? 9 7 1.5 4.5 0

For each of the following conditions, write the set of values that are possible for h3(B).
For example, to denote all non-negative numbers, write [0, ∞], to denote the empty set,

write ∅, and so on.


• What values of h3(B) make h3 admissible?
0 ≤ h3(B) ≤ 12
• What values of h3(B) make h3 consistent?
9 ≤ h3(B) ≤ 10
• What values of h3(B) will cause A* graph search to expand node A, then node C,
then node B, then node D in order?
12 < h3(B) < 13
Problem 3:

You are given the initial state (a) and the goal state (b) of an 8-puzzle as shown
below:

a. Apply A* using Manhattan distance heuristic function. Draw the search tree
including possible expanded states during the algorithm procedure. Compute the
triple (g, h, f) for each state. Mark the optimal strategy found.
b. For each of the following heuristics h(n) (where n is the state) for the N-puzzle
problem, identify its admissibility and justify your answer.
- h(n) = 1.
- h(n) = the number of misplaced tiles at state n.
- h(n) = (number of tiles out of row + number of tiles out of column).

What does an 8-puzzle search tree look like? For example:
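The initial and goal states are in the missing figure, so the (g, h, f) triples cannot be filled in here; the sketch below shows how A* with the Manhattan-distance heuristic runs on an assumed pair of states (the start state is illustrative, not the one from the figure):

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout; 0 is the blank

def manhattan(state):
    """h(n): sum of tile distances (rows + columns) to their goal cells."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        dist += abs(idx // 3 - g // 3) + abs(idx % 3 - g % 3)
    return dist

def neighbors(state):
    """States reachable by sliding one tile into the blank."""
    z = state.index(0)
    r, c = divmod(z, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            s[z], s[nr * 3 + nc] = s[nr * 3 + nc], s[z]
            yield tuple(s)

def astar(start):
    """Expand by lowest f = g + h; return g of the goal (solution length)."""
    frontier = [(manhattan(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if s == GOAL:
            return g
        for n in neighbors(s):
            if g + 1 < best_g.get(n, float("inf")):
                best_g[n] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(n), g + 1, n))

print(astar((1, 2, 3, 4, 0, 6, 7, 5, 8)))   # 2 (slide 5 up, then 8 left)
```

For part (a), record the (g, h, f) triple of each state popped from the frontier to reproduce the search tree by hand.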


Part 2: Logic
Problem 1:

Consider the following axioms:

a. All hounds howl at night.


b. Anyone who has any cats will not have any mice.
c. Light sleepers do not have anything which howls at night.
d. John has either a cat or a hound.
e. (Conclusion) If John is a light sleeper, then John does not have any mice.
1. Represent these sentences in First order Logic.
∀ x (HOUND(x) → HOWL(x))
∀ x ∀ y (HAVE (x,y) ∧ CAT (y) → ¬ ∃ z (HAVE(x,z) ∧ MOUSE (z)))
∀ x (LS(x) → ¬ ∃ y (HAVE (x,y) ∧ HOWL(y)))
∃ x (HAVE (John,x) ∧ (CAT(x) ∨ HOUND(x)))
LS(John) → ¬ ∃ z (HAVE(John,z) ∧ MOUSE(z))

2. Convert the above FoL sentences to CNF form.


1. ¬ HOUND(x) ∨ HOWL(x)
2. ¬ HAVE(x,y) ∨ ¬ CAT(y) ∨ ¬ HAVE(x,z) ∨ ¬ MOUSE(z)
3. ¬ LS(x) ∨ ¬ HAVE(x,y) ∨ ¬ HOWL(y)
4.
a. HAVE(John,a)
b. CAT(a) ∨ HOUND(a)
5.
a. LS(John)
b. HAVE(John,b)
c. MOUSE(b)

3. Use Resolution to prove (or disprove) the Conclusion.


[1.,4.(b):] 6. CAT(a) ∨ HOWL(a)
[2,5.(c):] 7. ¬ HAVE(x,y) ∨ ¬ CAT(y) ∨ ¬ HAVE(x,b)
[7,5.(b):] 8. ¬ HAVE(John,y) ∨ ¬ CAT(y)
[6,8:] 9. ¬ HAVE(John,a) ∨ HOWL(a)
[4.(a),9:] 10. HOWL(a)
[3,10:] 11. ¬ LS(x) ∨ ¬ HAVE(x,a)
[4.(a),11:] 12. ¬ LS(John)
[5.(a),12:] 13. □

Problem 2:

Consider the following axioms:

a. Every child loves Santa.


b. Everyone who loves Santa loves any reindeer.
c. Rudolph is a reindeer, and Rudolph has a red nose.
d. Anything which has a red nose is weird or is a clown.
e. No reindeer is a clown.
f. Scrooge does not love anything which is weird.
g. (Conclusion) Scrooge is not a child.
1. Represent these sentences in First order Logic.
1. Every child loves Santa.
∀ x (CHILD(x) → LOVES(x,Santa))
2. Everyone who loves Santa loves any reindeer.
∀ x (LOVES(x,Santa) → ∀ y (REINDEER(y) → LOVES(x,y)))
3. Rudolph is a reindeer, and Rudolph has a red nose.
REINDEER(Rudolph) ∧ REDNOSE(Rudolph)
4. Anything which has a red nose is weird or is a clown.
∀ x (REDNOSE(x) → WEIRD(x) ∨ CLOWN(x))
5. No reindeer is a clown.
¬ ∃ x (REINDEER(x) ∧ CLOWN(x))
6. Scrooge does not love anything which is weird.
∀ x (WEIRD(x) → ¬ LOVES(Scrooge,x))
7. (Conclusion) Scrooge is not a child.
¬ CHILD(Scrooge)

2. Convert the above FoL sentences to CNF form.


(1) ~Child(x) v L(x,Santa)
(2) ~L(x,Santa) v ~Reind(y) v L(x,y)
(3) Reind(Rudolph)
(4) Red(Rudolph)
(5) ~Red(x) v Weird(x) v Clown(x)
(6) ~Reind(x) v ~ Clown(x)
(7) ~Weird(x) v ~L(Scrooge, x)
(8) Child(Scrooge)    (negated conclusion)

3. Use Resolution to prove (or disprove) the Conclusion.


(9)=(1)+(8): L(Scrooge,Santa)
(10)=(3)+(6): ~Clown(Rudolph)
(11)=(10)+(5)+(4): Weird(Rudolph)
(12)=(9)+(2): ~Reind(y) v L(Scrooge,y)
(13)=(11)+(7): ~L(Scrooge,Rudolph)
(14)=(12)+(13): ~Reind(Rudolph)
(15)=(3)+(14): NIL
Problem 3:
There are three suspects for a murder: Adams, Brown, and Clark.
Adams says, "I didn't do it. The victim was an old acquaintance of Brown's. But Clark
hated him." Brown states, "I didn't do it. I didn't know the guy. Besides, I was out of
town all week." Clark says, "I didn't do it. I saw both Adams and Brown downtown
with the victim that day; one of them must have done it."
Assume that the two innocent men are telling the truth, but that the guilty man might
not be. Write out the facts as sentences in Propositional Logic, and use propositional
resolution to solve the crime.

1. Adams is lying and the others are telling the truth:

2. Brown is lying and the others are telling the truth:


3. Clark is lying and the others are telling the truth:
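The three cases above can also be checked mechanically. The sketch below brute-forces the facts rather than running propositional resolution, using hypothetical proposition names (knew_b = the victim was an acquaintance of Brown's, out = Brown was out of town, dt_a / dt_b = Adams / Brown were seen downtown, hated_c = Clark hated the victim); only the guilty man is allowed to lie:

```python
from itertools import product

def consistent_suspects():
    guilty_options = set()
    for guilty in "ABC":
        for knew_b, out, dt_a, dt_b, hated_c in product([False, True], repeat=5):
            if out and dt_b:   # Brown cannot be both out of town and downtown
                continue
            statements = {
                "A": guilty != "A" and knew_b and hated_c,
                "B": guilty != "B" and not knew_b and out,
                "C": guilty != "C" and dt_a and dt_b and guilty in "AB",
            }
            # the two innocent men must be telling the truth
            if all(statements[p] for p in "ABC" if p != guilty):
                guilty_options.add(guilty)
    return guilty_options

print(consistent_suspects())
```

Cases 1 and 3 are contradictory (a truthful Brown and a truthful Clark disagree about Brown's whereabouts, and a truthful Adams and a truthful Brown disagree about whether Brown knew the victim), so only case 2 survives: Brown is the murderer.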

Problem 4:
Consider this Knowledge Base in propositional logic:

KB = { A, B, A ∨ C, K ∧ E ⟷ A ∧ B, ¬C → D, E ∨ F → ¬D}

Check if these sentences are entailed by the KB:

a) B ∧ C ?

b) C ∨ E → F ∧ B ?

A
B
A∨C
K ∧ E ⟷ A ∧ B = (K ∧ E → A ∧ B) ∧ (A ∧ B → K ∧ E)
= (¬ (K ∧ E) ∨ (A ∧ B)) ∧ (¬ (A ∧ B) ∨ (K ∧ E))
= ((¬K ∨ ¬E) ∨ (A ∧ B)) ∧ ((¬A ∨ ¬B) ∨ (K ∧ E))
= ((¬K ∨ ¬E ∨ A) ∧ (¬K ∨ ¬E ∨ B)) ∧ ((¬A ∨ ¬B ∨ K) ∧ (¬A ∨ ¬B ∨ E))
= (¬K ∨ ¬E ∨ A) ∧ (¬K ∨ ¬E ∨ B) ∧ (¬A ∨ ¬B ∨ K) ∧ (¬A ∨ ¬B ∨ E)
¬C → D = C ∨ D
E ∨ F → ¬D = ¬(E ∨ F) ∨ ¬D = (¬E ∧ ¬F) ∨ ¬D = (¬E ∨ ¬D) ∧ (¬F ∨ ¬D)

a) ¬(B ∧ C) = ¬B ∨ ¬C
A, B, A ∨ C, ¬K ∨ ¬E ∨ A, ¬K ∨ ¬E ∨ B, ¬A ∨ ¬B ∨ K, ¬A ∨ ¬B ∨ E, C ∨ D, ¬E ∨ ¬D, ¬F ∨
¬D, ¬B ∨ ¬C
¬B ∨ K, ¬B ∨ E, C ∨ ¬B ∨ K, C ∨ ¬B ∨ E, B, ¬K ∨ ¬E ∨ B, C ∨ D, ¬E ∨ ¬D, ¬F ∨ ¬D, ¬B ∨
¬C
K, E, C ∨ K, C ∨ E, ¬C, ¬K ∨ ¬E ∨ ¬C, C ∨ D, ¬E ∨ ¬D, ¬F ∨ ¬D
K, E, D, ¬K ∨ ¬E ∨ D, ¬E ∨ ¬D, ¬F ∨ ¬D
¬E, ¬F, ¬K ∨ ¬E, ¬K ∨ ¬E ∨ ¬F, K, E
E and ¬E resolve to the empty clause, so the set is unsatisfiable: KB ⊨ B ∧ C.

b) ¬(C ∨ E → F ∧ B) = ¬(¬(C ∨ E) ∨ (F∧ B)) = (C ∨ E) ∧ ¬(F∧ B) = (C ∨ E) ∧ (¬F ∨ ¬B)

A, B, A ∨ C, ¬K ∨ ¬E ∨ A, ¬K ∨ ¬E ∨ B, ¬A ∨ ¬B ∨ K, ¬A ∨ ¬B ∨ E, C ∨ D, ¬E ∨ ¬D, ¬F ∨
¬D, C ∨ E, ¬F ∨ ¬B
¬B ∨ K, ¬B ∨ E, C ∨ ¬B ∨ K, C ∨ ¬B ∨ E, B, ¬K ∨ ¬E ∨ B, C ∨ D, ¬E ∨ ¬D, ¬F ∨ ¬D, C ∨ E,
¬F ∨ ¬B
K, E, C ∨ K, C ∨ E, ¬F, ¬K ∨ ¬E ∨ ¬F, C ∨ D, ¬E ∨ ¬D, ¬F ∨ ¬D
C ∨ ¬E, C ∨ ¬F, K, E, C ∨ K, C ∨ E, ¬F, ¬K ∨ ¬E ∨ ¬F
C, ¬K ∨ ¬F, C ∨ ¬K ∨ ¬F, C ∨ ¬F, K, C ∨ K
¬F, ¬F ∨ C
No empty clause can be derived (the KB is consistent with F being false), so KB ⊭ C ∨ E → F ∧ B.
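The two entailment questions can be cross-checked by brute-force model enumeration over the seven propositions (a sketch, independent of the resolution derivations above):

```python
from itertools import product

def imp(p, q):
    """Material implication p → q."""
    return (not p) or q

# all models of KB = {A, B, A∨C, K∧E ↔ A∧B, ¬C→D, E∨F→¬D}
models = [
    (A, B, C, D, E, F, K)
    for A, B, C, D, E, F, K in product([False, True], repeat=7)
    if A and B and (A or C)
    and ((K and E) == (A and B))
    and imp(not C, D)
    and imp(E or F, not D)
]

a = all(B and C for (A, B, C, D, E, F, K) in models)               # B ∧ C
b = all(imp(C or E, F and B) for (A, B, C, D, E, F, K) in models)  # C∨E → F∧B
print(a, b)
```

Every model of the KB fixes A = B = C = K = E = true and D = false, leaving only F free; B ∧ C therefore holds in all models (entailed), while C ∨ E → F ∧ B fails in the model with F false (not entailed).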

Problem 5:
Consider this Knowledge Base in propositional logic:

𝐾𝐵 = {𝐴 ⇒ 𝐵 ∨ 𝐶, 𝐴 ⇒ 𝐷, 𝐶 ∧ 𝐷 ⇒ ¬𝐸, 𝐵 ⇒ 𝐸, 𝐴}

Check if this sentence is entailed by the KB:

𝐵 ⇒ ¬𝐶

¬𝐴 ∨ 𝐵 ∨ 𝐶 , ¬𝐴 ∨ 𝐷, ¬𝐶 ∨ ¬𝐷 ∨ ¬E, ¬𝐵 ∨ E, 𝐴, 𝑩, 𝑪
𝐵 ∨ 𝐶, 𝐷, ¬𝐶 ∨ ¬𝐷 ∨ ¬E, ¬𝐵 ∨ E, 𝑩, 𝑪 (Resolution 𝐴)
𝐶 ∨ E, E, 𝐷, ¬𝐶 ∨ ¬𝐷 ∨ ¬E, 𝑪 (Resolution 𝐵)
E, 𝐷, ¬𝐷 ∨ ¬E (Resolution 𝐶)
E, ¬E (Resolution on D) → empty clause, so KB ⊨ B ⇒ ¬C
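The derivation can be confirmed by enumerating the models of the KB (a brute-force sketch):

```python
from itertools import product

def imp(p, q):
    """Material implication p → q."""
    return (not p) or q

# models of KB = {A⇒B∨C, A⇒D, C∧D⇒¬E, B⇒E, A}
models = [
    (A, B, C, D, E)
    for A, B, C, D, E in product([False, True], repeat=5)
    if A and imp(A, B or C) and imp(A, D)
    and imp(C and D, not E) and imp(B, E)
]

entailed = all(imp(B, not C) for (A, B, C, D, E) in models)
print(entailed)
```

Only two models satisfy the KB (B true forces E and hence ¬C; B false forces C), and B ⇒ ¬C holds in both, agreeing with the resolution proof.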
Part 3: Machine learning
Problem 1:
For the Bayes’ Network shown below:

1. What is the probability of having disease D and getting a positive result on test A?
P(+d, +a)

2. What is the probability of not having disease D and getting a positive result on test
A?

P(−d, +a)

3. What is the probability of having disease D given a positive result on test A?

P(+d| + a)

4. What is the probability of having disease D given a positive result on test B?

P(+d| + b)
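The CPTs are in the figure, which is not reproduced here, so the numbers below are illustrative assumptions only (a prior P(+d) plus true- and false-positive rates for tests A and B); the sketch shows how each of the four quantities is computed:

```python
# hypothetical numbers; replace with the CPT values from the figure
p_d = 0.01                          # P(+d)
p_a = {True: 0.9, False: 0.05}      # P(+a | +d), P(+a | -d)
p_b = {True: 0.8, False: 0.10}      # P(+b | +d), P(+b | -d)

p_d_a = p_d * p_a[True]             # 1. P(+d, +a) = P(+d) P(+a | +d)
p_nd_a = (1 - p_d) * p_a[False]     # 2. P(-d, +a) = P(-d) P(+a | -d)
post_a = p_d_a / (p_d_a + p_nd_a)   # 3. P(+d | +a) by Bayes' rule

p_d_b = p_d * p_b[True]
p_nd_b = (1 - p_d) * p_b[False]
post_b = p_d_b / (p_d_b + p_nd_b)   # 4. P(+d | +b)

print(round(p_d_a, 4), round(post_a, 4), round(post_b, 4))
```

With these assumed numbers the posterior P(+d | +a) ≈ 0.154: even a fairly accurate test leaves the disease unlikely when the prior is small, which is the point of the exercise.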

Problem 2:
The following table records the factors used to decide whether to play tennis outside
over the previous 14 days.

Create an ID3 decision tree for the above dataset.
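The 14-day table is not reproduced above; the code below assumes the standard play-tennis dataset (Quinlan's, which this exercise usually uses) and shows how ID3 selects its root by information gain:

```python
import math
from collections import Counter

# assumed standard play-tennis data: (outlook, temperature, humidity, wind, play)
rows = [
    ("sunny", "hot", "high", "weak", "no"),
    ("sunny", "hot", "high", "strong", "no"),
    ("overcast", "hot", "high", "weak", "yes"),
    ("rain", "mild", "high", "weak", "yes"),
    ("rain", "cool", "normal", "weak", "yes"),
    ("rain", "cool", "normal", "strong", "no"),
    ("overcast", "cool", "normal", "strong", "yes"),
    ("sunny", "mild", "high", "weak", "no"),
    ("sunny", "cool", "normal", "weak", "yes"),
    ("rain", "mild", "normal", "weak", "yes"),
    ("sunny", "mild", "normal", "strong", "yes"),
    ("overcast", "mild", "high", "strong", "yes"),
    ("overcast", "hot", "normal", "weak", "yes"),
    ("rain", "mild", "high", "strong", "no"),
]
attrs = ["outlook", "temperature", "humidity", "wind"]

def entropy(labels):
    """H(S) = -sum p log2 p over class proportions."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(rows, i):
    """Information gain of splitting on attribute i."""
    labels = [r[-1] for r in rows]
    rem = 0.0
    for v in {r[i] for r in rows}:
        sub = [r[-1] for r in rows if r[i] == v]
        rem += len(sub) / len(rows) * entropy(sub)
    return entropy(labels) - rem

for i, a in enumerate(attrs):
    print(a, round(gain(rows, i), 3))
```

Outlook has the highest gain (≈ 0.247 vs. 0.152 for humidity), so it becomes the root; ID3 then recurses on each outlook branch with the same gain computation.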


Problem 3:
The table below summarizes the characteristics of some animals.

Name          Give Birth  Can Fly  Live in Water  Have Legs  Class
human yes no no yes mammals
python no no no no non-mammals
salmon no no yes no non-mammals
whale yes no yes no mammals
frog no no sometimes yes non-mammals
komodo no no no yes non-mammals
bat yes yes no yes mammals
pigeon no yes no yes non-mammals
cat yes no no yes mammals
Leopard shark yes no yes no non-mammals
turtle no no sometimes yes non-mammals
penguin no no sometimes yes non-mammals
porcupine yes no no yes mammals
eel no no yes no non-mammals
salamander no no sometimes yes non-mammals
Gila monster no no no yes non-mammals
platypus no no no yes mammals
owl no yes no yes non-mammals
dolphin yes no yes no mammals
eagle no yes no yes non-mammals
Using Naïve Bayes to predict this new observed animal:

Give Birth  Can Fly  Live in Water  Have Legs  Class
Yes         No       Yes            No         ?
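A direct way to carry out the computation is to score each class as P(class) · ∏ P(attribute | class) from the 20-row table above (no smoothing, matching the hand calculation):

```python
from collections import Counter

# rows from the table: (give_birth, can_fly, live_in_water, have_legs, class)
data = [
    ("yes", "no", "no", "yes", "mammals"),          # human
    ("no", "no", "no", "no", "non-mammals"),        # python
    ("no", "no", "yes", "no", "non-mammals"),       # salmon
    ("yes", "no", "yes", "no", "mammals"),          # whale
    ("no", "no", "sometimes", "yes", "non-mammals"),# frog
    ("no", "no", "no", "yes", "non-mammals"),       # komodo
    ("yes", "yes", "no", "yes", "mammals"),         # bat
    ("no", "yes", "no", "yes", "non-mammals"),      # pigeon
    ("yes", "no", "no", "yes", "mammals"),          # cat
    ("yes", "no", "yes", "no", "non-mammals"),      # leopard shark
    ("no", "no", "sometimes", "yes", "non-mammals"),# turtle
    ("no", "no", "sometimes", "yes", "non-mammals"),# penguin
    ("yes", "no", "no", "yes", "mammals"),          # porcupine
    ("no", "no", "yes", "no", "non-mammals"),       # eel
    ("no", "no", "sometimes", "yes", "non-mammals"),# salamander
    ("no", "no", "no", "yes", "non-mammals"),       # Gila monster
    ("no", "no", "no", "yes", "mammals"),           # platypus
    ("no", "yes", "no", "yes", "non-mammals"),      # owl
    ("yes", "no", "yes", "no", "mammals"),          # dolphin
    ("no", "yes", "no", "yes", "non-mammals"),      # eagle
]

def predict(query):
    """Naive Bayes: argmax over P(c) * prod_i P(x_i | c)."""
    counts = Counter(row[-1] for row in data)
    scores = {}
    for c, count in counts.items():
        rows_c = [r for r in data if r[-1] == c]
        p = count / len(data)                  # prior P(c)
        for i, v in enumerate(query):          # likelihoods P(x_i | c)
            p *= sum(1 for r in rows_c if r[i] == v) / count
        scores[c] = p
    return max(scores, key=scores.get), scores

label, scores = predict(("yes", "no", "yes", "no"))
print(label, scores)
```

The mammal score is (7/20)(6/7)(6/7)(2/7)(2/7) ≈ 0.021 versus (13/20)(1/13)(10/13)(3/13)(4/13) ≈ 0.0027 for non-mammals, so the new animal is classified as a mammal.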
Problem 4:
Given the following neural network with initialized weights as in the picture, explain
the network architecture, knowing that we are trying to distinguish between nails and
screws and that examples of training tuples are as follows: T1{0.6, 0.1, nail},
T2{0.2, 0.3, screw}.

Let the learning rate η be 0.1 and the weights be as indicated in the figure above.
Do the forward propagation of the signals in the network using T1 as input, then
perform the back propagation of the error. Show the changes of the weights.

What encoding is used for the outputs?


10 for class “nail”, 01 for class “screw”

Forward pass for T1 - calculate the outputs o6 and o7


o1=0.6, o2=0.1, target output 1 0, i.e. class “nail”
Activations of the hidden units:
net3 = o1*w13 + o2*w23 + b3 = 0.6*0.1 + 0.1*(-0.2) + 0.1 = 0.14
o3 = 1/(1 + e^(-net3)) = 0.53

net4 = o1*w14 + o2*w24 + b4 = 0.6*0 + 0.1*0.2 + 0.2 = 0.22
o4 = 1/(1 + e^(-net4)) = 0.55

net5 = o1*w15 + o2*w25 + b5 = 0.6*0.3 + 0.1*(-0.4) + 0.5 = 0.64
o5 = 1/(1 + e^(-net5)) = 0.65

Activations of the output units:


net6 = o3*w36 + o4*w46 + o5*w56 + b6 = 0.53*(-0.4) + 0.55*0.1 + 0.65*0.6 - 0.1 = 0.13
o6 = 1/(1 + e^(-net6)) = 0.53

net7 = o3*w37 + o4*w47 + o5*w57 + b7 = 0.53*0.2 + 0.55*(-0.1) + 0.65*(-0.2) + 0.6 = 0.52
o7 = 1/(1 + e^(-net7)) = 0.63

Backward pass for T1 - calculate the output errors δ6 and δ7


(note that d6=1, d7=0 for class “nail”)
δ6 = (d6-o6) * o6 * (1-o6)=(1-0.53)*0.53*(1-0.53)=0.12
δ7 = (d7-o7) * o7 * (1-o7)=(0-0.63)*0.63*(1-0.63)=-0.15

Calculate the new weights between the hidden and output units (η=0.1)
∆w36= η * δ6 * o3 = 0.1*0.12*0.53=0.006
w36new = w36old + ∆w36 = -0.4+0.006=-0.394

∆w37 = η * δ7 * o3 = 0.1*(-0.15)*0.53 = -0.008
w37new = w37old + ∆w37 = 0.2 - 0.008 = 0.192

Similarly for w46new , w47new , w56new and w57new


For the biases b6 and b7
∆b6 = η * δ6 * 1 = 0.1*0.12 = 0.012
b6new = b6old + ∆b6 = -0.1 + 0.012 = -0.088

Similarly for b7

Calculate the errors of the hidden units δ3, δ4 and δ5


δ3 = o3 * (1-o3) * (w36*δ6 + w37*δ7) = 0.53*(1-0.53)*(-0.4*0.12 + 0.2*(-0.15)) = -0.019
Similarly for δ4 and δ5

Calculate the new weights between the input and hidden units (η=0.1)

∆w13= η * δ3 * o1 = 0.1*(-0.019)*0.6=-0.0011
w13new = w13old + ∆w13 = 0.1-0.0011=0.0989

Similarly for w23new, w14new, w24new, w15new and w25new; b3, b4 and b5
Repeat the same procedure for the other training examples
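The hand computation above can be checked numerically; the script below reruns the forward and backward pass for T1 with the same weights and reproduces the rounded values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# weights and biases as given in the figure / worked solution
w13, w23, b3 = 0.1, -0.2, 0.1
w14, w24, b4 = 0.0, 0.2, 0.2
w15, w25, b5 = 0.3, -0.4, 0.5
w36, w46, w56, b6 = -0.4, 0.1, 0.6, -0.1
w37, w47, w57, b7 = 0.2, -0.1, -0.2, 0.6
eta = 0.1

# forward pass for T1 = (0.6, 0.1), target (1, 0) = class "nail"
o1, o2, d6, d7 = 0.6, 0.1, 1.0, 0.0
o3 = sigmoid(o1*w13 + o2*w23 + b3)     # net3 = 0.14
o4 = sigmoid(o1*w14 + o2*w24 + b4)     # net4 = 0.22
o5 = sigmoid(o1*w15 + o2*w25 + b5)     # net5 = 0.64
o6 = sigmoid(o3*w36 + o4*w46 + o5*w56 + b6)
o7 = sigmoid(o3*w37 + o4*w47 + o5*w57 + b7)

# backward pass: output errors, then one hidden error as an example
delta6 = (d6 - o6) * o6 * (1 - o6)
delta7 = (d7 - o7) * o7 * (1 - o7)
delta3 = o3 * (1 - o3) * (w36*delta6 + w37*delta7)

# weight updates (hidden→output, bias, and one input→hidden example)
w36_new = w36 + eta * delta6 * o3
w37_new = w37 + eta * delta7 * o3
b6_new = b6 + eta * delta6
w13_new = w13 + eta * delta3 * o1

print(round(o6, 2), round(o7, 2))            # 0.53 0.63
print(round(w36_new, 3), round(b6_new, 3))   # -0.394 -0.088
```

The remaining updates (w46, w47, w56, w57, δ4, δ5, and the other input-layer weights) follow the same pattern, as does the pass for T2.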
