Analytical Learning
This talk is based on:
Tom M. Mitchell. Machine Learning [1]. McGraw-Hill, 1997. Chapter 11.
1 Introduction
So far, we have studied inductive learning methods.
Induction fails when there is very little data. In fact, computational learning theory (CLT) gives us a lower bound on the number of training examples any inductive learner needs.
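One such bound, from Chapter 7 of the same book, says that a consistent learner over hypothesis space H needs

m ≥ (1/ε)(ln|H| + ln(1/δ))

training examples to guarantee error at most ε with probability at least 1 − δ. Note the ln|H| term: it is exactly what prior knowledge can attack, by shrinking the hypothesis space.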
Q: Can we break that bound?
Yes, if we re-state the learning problem.
Learning algorithm accepts explicit prior knowledge as an input, in addition to the training data.
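Formally (following Mitchell, Chapter 11): given a hypothesis space H, training examples D of a target concept, and a domain theory B, output a hypothesis h ∈ H that is consistent with both D and B.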
Inverted deduction systems also use background knowledge, but they use it to augment the description of instances.
∀⟨xᵢ, f(xᵢ)⟩ ∈ D : (B ∧ h ∧ xᵢ) → f(xᵢ)
This results in increasing the size of H.
In explanation-based learning the prior knowledge is used to reduce the size of H. EBL assumes that
∀⟨xᵢ, f(xᵢ)⟩ ∈ D : (B ∧ xᵢ) → f(xᵢ)
and outputs h such that
∀⟨xᵢ, f(xᵢ)⟩ ∈ D : (h ∧ xᵢ) → f(xᵢ)
and
D ∧ B → h
Want a program to recognize "chessboard positions in which black will lose its queen within two moves."
Because there are so many possible chessboards, we would need to provide a great many training examples.
And yet, humans can learn this concept really quickly. Why?
Humans appear to rely heavily on explaining the training example in terms of their prior
knowledge.
2 Learning with Perfect Domain Theories
Consider learning the target concept SafeToStack(x,y), given a domain theory B:
SafeToStack(x,y) ← ¬Fragile(y)
SafeToStack(x,y) ← Lighter(x,y)
Lighter(x,y) ← Weight(x,wx) ∧ Weight(y,wy) ∧ LessThan(wx,wy)
Weight(x,w) ← Volume(x,v) ∧ Density(x,d) ∧ Equal(w,times(v,d))
Weight(x,5) ← Type(x, endtable)
Fragile(x) ← Material(x,Glass)
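To make the representation concrete, here is one plausible Python encoding of these clauses; the layout ((head, body) pairs over (predicate, args) literals) is our own convention, not Mitchell's or Prolog's:

# Each Horn clause is a (head, body) pair; a literal is (predicate, args);
# "~" marks negation; lowercase strings are variables. This data layout
# is an illustrative assumption, not a standard.
DOMAIN_THEORY = [
    (("SafeToStack", ("x", "y")), [("~Fragile", ("y",))]),
    (("SafeToStack", ("x", "y")), [("Lighter", ("x", "y"))]),
    (("Lighter", ("x", "y")),
     [("Weight", ("x", "wx")), ("Weight", ("y", "wy")),
      ("LessThan", ("wx", "wy"))]),
    (("Weight", ("x", "w")),
     [("Volume", ("x", "v")), ("Density", ("x", "d")),
      ("Equal", ("w", ("times", "v", "d")))]),
    (("Weight", ("x", 5)), [("Type", ("x", "endtable"))]),
    (("Fragile", ("x",)), [("Material", ("x", "Glass"))]),
]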
Find h that is consistent with training examples and domain theory.
2.1 Prolog-EBG
Prolog-EBG(TargetConcept, TrainingExamples, DomainTheory)
1. LearnedRules = {}
2. Pos = the positive examples from TrainingExamples.
3. for each PositiveExample in Pos that is not covered by LearnedRules do
1. Explanation = an explanation in terms of DomainTheory showing that PositiveExample satisfies the TargetConcept.
2. SufficientConditions = the most general set of features of PositiveExample sufficient to
satisfy the TargetConcept according to the Explanation.
3. LearnedRules = LearnedRules + {TargetConcept ← SufficientConditions}.
4. return LearnedRules
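The covering loop above translates almost line for line into code. Below is a minimal Python sketch; the helpers explain, generalize, and covers are hypothetical caller-supplied functions (a real system would implement them via backward chaining and goal regression, respectively):

from dataclasses import dataclass, field

@dataclass
class HornClause:
    head: tuple                                # the TargetConcept literal
    body: list = field(default_factory=list)   # SufficientConditions

def prolog_ebg(target_concept, training_examples, domain_theory,
               explain, generalize, covers):
    # Sequential covering: learn one maximally general Horn clause per
    # positive example not yet covered by an earlier clause.
    learned_rules = []
    positives = [ex for ex in training_examples if ex["label"]]
    for example in positives:
        if any(covers(rule, example) for rule in learned_rules):
            continue  # already covered; skip
        # 1. Explain: prove, using domain_theory, that the example
        #    satisfies the target concept.
        explanation = explain(example, target_concept, domain_theory)
        # 2. Analyze: regress the target concept through the proof to
        #    get the weakest (most general) sufficient conditions.
        sufficient = generalize(explanation, target_concept)
        # 3. Refine: add TargetConcept <- SufficientConditions.
        learned_rules.append(HornClause(target_concept, sufficient))
    return learned_rules

Each example here is assumed to be a dict with a boolean "label" key; that, too, is an illustrative choice rather than anything mandated by the algorithm.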
Give a proof, using the domain theory, that the (positive) training example satisfies the target concept.
In our ongoing example, the positive example SafeToStack(o1,o2) can be explained using the domain theory as follows:
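Using the training example from the book (renamed o1, o2 here: a box with Volume(o1,2) and Density(o1,0.3), stacked on an endtable o2), the proof runs:

SafeToStack(o1,o2)
← Lighter(o1,o2)
← Weight(o1,0.6) ∧ LessThan(0.6,5) ∧ Weight(o2,5)

where Weight(o1,0.6) follows from Volume(o1,2) ∧ Density(o1,0.3) ∧ Equal(0.6,times(2,0.3)), and Weight(o2,5) follows from Type(o2,endtable). Regressing the target concept through this explanation yields the general rule

SafeToStack(x,y) ← Volume(x,vx) ∧ Density(x,dx) ∧ Equal(wx,times(vx,dx)) ∧ LessThan(wx,5) ∧ Type(y,endtable)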
Since all the candidate hypotheses are generated from B it follows that the inductive bias of Prolog-EBG is simply B, right?
Almost. We also have to consider how it chooses from among the alternative clauses.
Since it uses sequential covering, adding Horn clauses one at a time, we can say that it prefers small sets of Horn clauses.
So, the inductive bias is B plus a preference for small sets of maximally general Horn clauses.
The inductive bias is largely determined by the input domain theory, not the algorithm.
In Prolog-EBG the learned ℎ follows (logically) directly from B alone, independent of D. So, why do we need examples?
Examples focus Prolog-EBG on generating rules that cover the distribution of instances that occur.
So, will it ever learn to classify an instance that could not be classified by B?
No. Since 𝐵 → ℎ, any classification entailed by ℎ is also entailed by 𝐵.
OK, so is this a problem with all analytical learning methods?
No. For example, let B contain a statement like
GrandDaughter(sister(x),spouse(y)) ← GrandDaughter(x,y)
This rule entails nothing by itself, but once we have a single example it lets us infer additional GrandDaughter() facts.
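For example (with hypothetical names): given the single training example GrandDaughter(Victor,Sharon), the rule immediately entails GrandDaughter(sister(Victor),spouse(Sharon)), a classification that B alone could not have produced without that example.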
Another example is provided by assertions known as determinations. If we are trying to learn the concept "people who speak Portuguese", a determination might state that the language a person speaks is determined by their nationality. Given a single positive example of a Brazilian who speaks Portuguese, we can then generalize to all Brazilians.
I think it's also because EBL seems a lot harder to implement. (It's not, since Prolog, Soar, etc. already do it for you.)
URLs
1. Machine Learning book at Amazon, http://www.amazon.com/exec/obidos/ASIN/0070428077/multiagentcom/
2. Deep Blue homepage, http://www.research.ibm.com/deepblue/
3. Prodigy Homepage, http://www.cs.cmu.edu/~prodigy/
4. Soar Homepage, http://ai.eecs.umich.edu/soar/