Machine Learning UNIT-5
[Chess position diagram; the board layout did not survive extraction]
Inductive learning:
Given:
• Instances
• Hypotheses
• Target Concept
• Training examples of target concept
Determine:
• Hypotheses consistent with the training examples
Analytical learning:
Given:
• Instances
• Hypotheses
• Target Concept
• Training examples of target concept
• Domain theory for explaining examples
Determine:
• Hypotheses consistent with the training examples
and the domain theory
Given:
• Instances: pairs of objects
• Hypotheses: sets of horn clause rules
• Target Concept: Safe-to-stack(x,y)
• Training Example: Safe-to-stack(OBJ1,OBJ2)
On(OBJ1,OBJ2)
Isa(OBJ1,BOX)
Isa(OBJ2,ENDTABLE)
Color(OBJ1,RED)
Color(OBJ2,BLUE)
Volume(OBJ1,.1)
Density(OBJ1,.1)
...
• Domain Theory:
Safe-To-Stack(x,y) :- Not(Fragile(y))
Safe-To-Stack(x,y) :- Lighter(x,y)
Lighter(x,y) :- Weight(x,wx), Weight(y,wy),
Less(wx,wy)
Weight(x,w) :- Volume(x,v), Density(x,d),
Equal(w, v*d)
Weight(x,5) :- Isa(x, ENDTABLE)
...
Determine:
• Hypotheses consistent with training examples and
domain theory
Initialize hypothesis = {}
Explanation:
Safe-to-Stack(OBJ1,OBJ2)
Lighter(OBJ1,OBJ2)
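The explanation above is produced by backward-chaining through the domain theory. The following is a minimal hand-coded sketch (the helper functions are illustrative, not PROLOG-EBG itself), using the ground facts of the training example:

```python
# Minimal sketch (not PROLOG-EBG itself): backward-chain through the
# domain theory to prove Safe-to-Stack(OBJ1,OBJ2) from the training
# example's ground facts, collecting the facts used as the explanation.

facts = {
    ("On", "OBJ1", "OBJ2"),
    ("Isa", "OBJ1", "BOX"),
    ("Isa", "OBJ2", "ENDTABLE"),
    ("Volume", "OBJ1", 0.1),
    ("Density", "OBJ1", 0.1),
}

def weight(x):
    """Weight(x,w) via the two domain-theory clauses; returns (w, facts used)."""
    if ("Isa", x, "ENDTABLE") in facts:           # Weight(x,5) :- Isa(x,ENDTABLE)
        return 5, [("Isa", x, "ENDTABLE")]
    v = next(c for (p, a, c) in facts if p == "Volume" and a == x)
    d = next(c for (p, a, c) in facts if p == "Density" and a == x)
    return v * d, [("Volume", x, v), ("Density", x, d)]  # Weight(x,w) :- Volume, Density

def safe_to_stack(x, y):
    """Safe-To-Stack(x,y) :- Lighter(x,y); Lighter via Weight comparison."""
    wx, ex = weight(x)
    wy, ey = weight(y)
    if wx < wy:                                   # Less(wx,wy) => Lighter(x,y)
        return True, ex + ey + [("Less", wx, wy)]
    return False, []

ok, explanation = safe_to_stack("OBJ1", "OBJ2")
```

The explanation is exactly the set of leaf facts used in the proof: OBJ1's volume and density, OBJ2 being an end table, and the weight comparison.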
Training Example:
On(OBJ1, OBJ2)
Type(OBJ1, Box)             Type(OBJ2, EndTable)
Material(OBJ1, Cardboard)   Material(OBJ2, Wood)
Color(OBJ1, Red)            Color(OBJ2, Blue)
Owner(OBJ1, Fred)           Owner(OBJ2, Louise)
Volume(OBJ1, 2)             Fragile(OBJ2, Yes)
Density(OBJ1, 0.3)
Explanation of the training example, shown with its generalized (variablized) counterpart:
Safe-to-Stack(OBJ1,OBJ2)    /    Safe-to-Stack(x,y)
Lighter(OBJ1,OBJ2)          /    Lighter(x,y)
Type(OBJ2,ENDTABLE)         /    ...
Regressing the target concept through the explanation yields the weakest preimage:
Volume(x,vx), Density(x,dx), Equal(wx,vx*dx), Less-Than(wx,5), Type(y,ENDTABLE)
Regress returns the list of expressions forming the weakest preimage of Frontier with respect to Rule: the rule's Consequent is unified with the matching Frontier literal by a most general unifier θ, chosen so that for some substitution S, S(θ(Consequent)) equals the training example's instantiation of Consequent.
Example:
Regress({Volume(x,vx), Density(x,dx), Equal(wx,vx*dx),
         Less-Than(wx,wy), Weight(y,wy)},    (Frontier)
        Weight(z,5) :- Type(z,ENDTABLE),     (Rule)
        Weight(y,wy),                        (Literal)
        {OBJ2/z})                            (example substitution)
Consequent ← Weight(z,5)
Antecedents ← Type(z,ENDTABLE)
θ ← {y/z, 5/wy}   (S = {OBJ2/y})
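The worked Regress call above can be reproduced with a minimal unifier. This is an illustrative sketch, not Mitchell's algorithm verbatim: terms are tuples, strings prefixed with `?` are variables, and nested terms are not unified (none need to be in this example).

```python
# Sketch of one Regress step: unify the rule head with the frontier
# literal, then apply the resulting substitution to
# (Frontier - {literal}) + rule body.

def unify(a, b):
    """Most general unifier of two flat terms, as a dict, or None."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    s = {}
    for x, y in zip(a[1:], b[1:]):
        x, y = s.get(x, x), s.get(y, y)
        if x == y:
            continue
        if isinstance(x, str) and x.startswith("?"):
            s[x] = y
        elif isinstance(y, str) and y.startswith("?"):
            s[y] = x
        else:
            return None
    return s

def subst(term, s):
    return (term[0],) + tuple(s.get(t, t) for t in term[1:])

def regress(frontier, head, body, literal):
    s = unify(head, literal)
    new = [t for t in frontier if t != literal] + list(body)
    return [subst(t, s) for t in new]

frontier = [("Volume", "?x", "?vx"), ("Density", "?x", "?dx"),
            ("Equal", "?wx", ("*", "?vx", "?dx")),
            ("LessThan", "?wx", "?wy"), ("Weight", "?y", "?wy")]
head = ("Weight", "?z", 5)                 # Weight(z,5) :- Type(z,ENDTABLE)
body = [("Type", "?z", "ENDTABLE")]
result = regress(frontier, head, body, ("Weight", "?y", "?wy"))
```

The unifier comes out as {y/z, 5/wy}, and `result` is the weakest preimage shown above: Volume(x,vx), Density(x,dx), Equal(wx,vx*dx), LessThan(wx,5), Type(y,ENDTABLE).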
Is it learning?
• Are you learning when you get better over time at chess?
• Even though you already know everything in principle, once you know the rules of the game...
• Are you learning when you sit in a mathematics class?
• Even though those theorems follow deductively from the axioms you've already learned...
Lecture slides for textbook Machine Learning, © T. Mitchell, McGraw Hill, 1997
Inductive and Analytical Learning
What We Would Like
Domain theory:
Cup :- Stable, Liftable, OpenVessel
Stable :- BottomIsFlat
Liftable :- Graspable, Light
Graspable :- HasHandle
OpenVessel :- HasConcavity, ConcavityPointsUp
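KBANN's first stage translates each domain-theory clause into a sigmoid unit. The following is a minimal sketch under the conventional setup (a "large" weight W for each non-negated antecedent, −W for each negated one, and bias −(P − 0.5)·W where P counts the non-negated antecedents, so the unit's net input is positive only when the clause body holds):

```python
import math

# Sketch of KBANN clause-to-unit translation (W = 4.0 is the
# conventional "large" weight; names below follow the Cup theory).

W = 4.0

def clause_to_unit(antecedents):
    """antecedents: list of (name, positive?) pairs -> (weights, bias)."""
    weights = {name: (W if pos else -W) for name, pos in antecedents}
    n_pos = sum(1 for _, pos in antecedents if pos)
    bias = -(n_pos - 0.5) * W
    return weights, bias

def unit_output(weights, bias, inputs):
    net = bias + sum(w * inputs.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))     # sigmoid activation

# Liftable :- Graspable, Light
weights, bias = clause_to_unit([("Graspable", True), ("Light", True)])
both = unit_output(weights, bias, {"Graspable": 1.0, "Light": 1.0})  # high
one  = unit_output(weights, bias, {"Graspable": 1.0, "Light": 0.0})  # low
```

With both antecedents true the net input is +W/2 and the unit fires; with one antecedent false it is −W/2 and the unit stays off. Backpropagation then refines these initial weights against the training examples.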
Training examples:
[Table of cup and non-cup examples, each described by the attributes BottomIsFlat, ConcavityPointsUp, Expensive, Fragile, HandleOnTop, HandleOnSide, HasConcavity, HasHandle, Light, MadeOfCeramic, MadeOfPaper, MadeOfStyrofoam; the check marks did not survive extraction]
KBANN
Neural Net Equivalent to Domain Theory
[Network diagram: input units Expensive, BottomIsFlat, MadeOfCeramic, MadeOfStyrofoam, MadeOfPaper, HasHandle, HandleOnTop, HandleOnSide, Light, HasConcavity, ConcavityPointsUp, Fragile; hidden units Stable, Graspable, OpenVessel, Liftable; output unit Cup]
Creating a Network Equivalent to the Domain Theory
Result of refining the network
[Network diagram with the same units as above (inputs Expensive, BottomIsFlat, MadeOfCeramic, MadeOfStyrofoam, MadeOfPaper, HasHandle, HandleOnTop, HandleOnSide, Light, HasConcavity, ConcavityPointsUp, Fragile; hidden units Stable, Graspable, Open-Vessel, Liftable; output Cup), after refinement; edge key: large positive weight, large negative weight, negligible weight]
KBANN Results
Hypothesis space search in KBANN
[Diagram: in the hypothesis space, a region of hypotheses that fit the training data equally well; the initial hypothesis for KBANN (the domain-theory network) lies near this region, while the initial hypothesis for BACKPROPAGATION does not]
EBNN
Key idea:
• Previously learned approximate domain theory
• Domain theory represented by a collection of neural networks
• Learn target function as another neural network
Explanation of training example in terms of domain theory:
[Diagram: instance attributes BottomIsFlat=T, ConcavityPointsUp=T, Expensive=T, Fragile=T, HandleOnTop=F, HandleOnSide=T, HasConcavity=T, HasHandle=T, Light=T, MadeOfCeramic=T, MadeOfPaper=F, MadeOfStyrofoam=F feed the domain-theory networks Stable, Graspable, OpenVessel, Liftable, yielding Cup = T; training derivatives are extracted from this explanation]
Modified Objective for Gradient Descent

E = \sum_i \left[ \left( f(x_i) - \hat{f}(x_i) \right)^2 + \mu_i \sum_j \left( \frac{\partial A(x)}{\partial x^j} - \frac{\partial \hat{f}(x)}{\partial x^j} \right)^2_{x = x_i} \right]

where

\mu_i \equiv 1 - \frac{|A(x_i) - f(x_i)|}{c}

• f(x) is the target function
• f̂(x) is the neural net approximation to f(x)
• A(x) is the domain theory approximation to f(x)
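This objective can be evaluated numerically. Below is a hedged 1-D sketch: the function names (`f_true`, `f_hat`, `A`) and the finite-difference slope estimates are illustrative choices, not EBNN's actual implementation.

```python
# Sketch of the EBNN objective for a 1-D problem: fit the training
# values of f, and additionally fit the slopes of the domain-theory
# approximation A wherever A predicts the training value well.

def ebnn_loss(xs, f_true, f_hat, A, c, eps=1e-5):
    total = 0.0
    for x in xs:
        value_err = (f_true(x) - f_hat(x)) ** 2
        # mu_i: trust A's slope in proportion to A's accuracy at x_i
        mu = 1.0 - abs(A(x) - f_true(x)) / c
        dA  = (A(x + eps) - A(x - eps)) / (2 * eps)          # slope of A
        dfh = (f_hat(x + eps) - f_hat(x - eps)) / (2 * eps)  # slope of f_hat
        total += value_err + mu * (dA - dfh) ** 2
    return total

# If f_hat matches f_true exactly and also matches A's slopes, loss is 0:
loss = ebnn_loss([0.0, 1.0], f_true=lambda x: 2 * x, f_hat=lambda x: 2 * x,
                 A=lambda x: 2 * x, c=1.0)
```

When the domain theory is exactly right at x_i, μ_i = 1 and the slope constraint is enforced fully; when A is badly wrong there, μ_i shrinks toward 0 and the objective falls back to ordinary value fitting.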
[Figure: a target function f(x) with training values f(x1), f(x2), f(x3); two hypotheses g and h that both fit the training values but differ elsewhere]
14 lecture slides for textbook Machine Learning, c T. Mitchell, McGraw Hill, 1997
Hypothesis Space Search in EBNN
[Diagram: in the hypothesis space, the search performed by TANGENTPROP compared with the search performed by BACKPROPAGATION]
Search in FOCL
[Search tree of candidate specializations of Cup :- ]
Cup :- HasHandle  [2+,3–]
Cup :- ¬HasHandle  [2+,3–]
Cup :- Fragile  [2+,4–]
...
Cup :- BottomIsFlat, Light, HasConcavity, ConcavityPointsUp  [4+,2–]   (generated by the domain theory)
  Cup :- BottomIsFlat, Light, HasConcavity, ConcavityPointsUp, HandleOnTop  [0+,2–]
  Cup :- BottomIsFlat, Light, HasConcavity, ConcavityPointsUp, ¬HandleOnTop  [4+,0–]
  Cup :- BottomIsFlat, Light, HasConcavity, ConcavityPointsUp, HandleOnSide  [2+,0–]
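FOCL, like FOIL, scores each candidate specialization with an information-gain measure. The sketch below applies FOIL's gain to the cover counts shown in the search tree; reading the counts off the figure (parent clause covering [4+,2–], children [4+,0–], [2+,0–], [0+,2–]) is an assumption about which child is which.

```python
import math

# FOIL information gain of refining a clause covering (p0 positives,
# n0 negatives) into one covering (p1, n1). The leading factor is the
# number of positives still covered after the refinement.

def foil_gain(p0, n0, p1, n1):
    if p1 == 0:
        return 0.0
    info = lambda p, n: -math.log2(p / (p + n))   # bits to signal "positive"
    return p1 * (info(p0, n0) - info(p1, n1))

# Candidate specializations of the [4+,2-] domain-theory clause:
gain_not_handle_on_top = foil_gain(4, 2, 4, 0)   # child covers [4+,0-]
gain_handle_on_side    = foil_gain(4, 2, 2, 0)   # child covers [2+,0-]
gain_handle_on_top     = foil_gain(4, 2, 0, 2)   # child covers [0+,2-]
```

Adding ¬HandleOnTop keeps all four positives while eliminating both negatives, so it scores highest, matching the branch the search tree prefers.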
FOCL Results