Analytical Learning
This talk is based on

Tom M. Mitchell. Machine Learning [1]. McGraw-Hill, 1997. Chapter 11.

1 Introduction
So far, we have studied inductive learning methods.
Induction fails when there is very little data; in fact, computational learning theory gives us a bound on the number of training examples required.
Q: Can we break that bound?
Yes, if we re-state the learning problem.

1.1 New Learning Problem

Learning algorithm accepts explicit prior knowledge as an input, in addition to the training data.
Inverted deduction systems also use background knowledge, but they use it to augment the description of instances.
∀⟨xᵢ, f(xᵢ)⟩ ∈ D : (B ∧ h ∧ xᵢ) → f(xᵢ). This results in increasing the size of H.
In explanation-based learning the prior knowledge is used to reduce the size of H. EBL assumes that the domain theory B', together with each instance, entails its classification,
∀⟨xᵢ, f(xᵢ)⟩ ∈ D : (B' ∧ xᵢ) → f(xᵢ),
and outputs an h that is consistent with the training examples,
∀⟨xᵢ, f(xᵢ)⟩ ∈ D : (h ∧ xᵢ) → f(xᵢ),
and that follows deductively from the data and the domain theory: D ∧ B' → h.

1.2 EBL Example

Want program to recognize "chessboard positions in which black will lose its queen within
two moves."
Because there are so many possible chessboard positions, we would need to provide a lot of examples.
And yet, humans can learn this concept really quickly. Why?
Humans appear to rely heavily on explaining the training example in terms of their prior
knowledge.

As a running example (from Mitchell), suppose we want to learn SafeToStack(x,y): it is safe to stack object x on object y. The domain theory:
SafeToStack(x,y) ← ¬Fragile(y)
SafeToStack(x,y) ← Lighter(x,y)
Lighter(x,y) ← Weight(x,wx) ∧ Weight(y,wy) ∧ LessThan(wx,wy)
Weight(x,w) ← Volume(x,v) ∧ Density(x,d) ∧ Equal(w,times(v,d))
Weight(x,5) ← Type(x, endtable)
Fragile(x) ← Material(x,Glass)
Find h that is consistent with training examples and domain theory.
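
For concreteness, the domain theory above could be encoded as data. The following is a minimal Python sketch; the tuple-based clause representation and the "?"-prefixed variable convention are illustrative choices, not part of the original talk.

# Each Horn clause is a (head, body) pair: the body literals jointly imply the head.
# Variables are strings beginning with '?'; everything else is a constant.
domain_theory = [
    (("SafeToStack", "?x", "?y"), [("Not", ("Fragile", "?y"))]),
    (("SafeToStack", "?x", "?y"), [("Lighter", "?x", "?y")]),
    (("Lighter", "?x", "?y"), [("Weight", "?x", "?wx"),
                               ("Weight", "?y", "?wy"),
                               ("LessThan", "?wx", "?wy")]),
    (("Weight", "?x", "?w"), [("Volume", "?x", "?v"),
                              ("Density", "?x", "?d"),
                              ("Equal", "?w", ("times", "?v", "?d"))]),
    (("Weight", "?x", 5), [("Type", "?x", "endtable")]),
    (("Fragile", "?x"), [("Material", "?x", "Glass")]),
]

A prover can chain backward through these clauses to explain a training example, and the generalization step can then regress the target concept through that explanation.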

2 Learning With Perfect Domain Theories


A perfect domain theory is correct and complete.
A domain theory is correct if each of its assertions is a truthful statement about the world.
A domain theory is complete with respect to the target concept and instance space X if it covers every positive example in the instance space.
So, if we have a perfect domain theory, why do we need to learn?
Chess. Often the theory leads to too many deductions (large breadth), making it impossible to find the optimal strategy; the examples help to focus the search.
Perfect domain theories are often unrealistic, but learning with them is a first step before learning with imperfect theories (next chapter).
Prolog-EBG is an EBL learner. It uses sequential covering.

2.1 Prolog-EBG
Prolog-EBG(TargetConcept, TrainingExamples, DomainTheory)

1. LearnedRules = {}
2. Pos = the positive examples from TrainingExamples.
3. for each PositiveExample in Pos that is not covered by LearnedRules do
1. Explanation = an explanation, in terms of DomainTheory, of how PositiveExample satisfies the TargetConcept.
2. SufficientConditions = the most general set of features of PositiveExample sufficient to
satisfy the TargetConcept according to the Explanation.
3. LearnedRules = LearnedRules + {TargetConcept ← SufficientConditions}.
4. return LearnedRules
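
The steps above translate into a short sequential-covering loop. This is a minimal Python sketch, not a definitive implementation: it assumes explain (a theorem prover over the domain theory) and generalize (regression of the target concept through the explanation to its weakest preimage) are supplied by the caller, and that each training example carries a boolean label attribute; none of these helpers appear in the original talk.

def prolog_ebg(target_concept, training_examples, domain_theory,
               explain, generalize, covers):
    # Sequential covering: add one maximally general Horn clause per
    # positive example that the rules learned so far do not yet cover.
    learned_rules = []
    positives = [ex for ex in training_examples if ex.label]
    for example in positives:
        if any(covers(rule, example) for rule in learned_rules):
            continue  # already covered by an earlier clause
        # 1. Explain: prove from the domain theory that this example
        #    satisfies the target concept.
        explanation = explain(domain_theory, example, target_concept)
        # 2. Analyze: compute the most general conditions on the example
        #    under which this proof still goes through.
        sufficient_conditions = generalize(explanation, target_concept)
        # 3. Refine: add the clause TargetConcept <- SufficientConditions.
        learned_rules.append((target_concept, sufficient_conditions))
    return learned_rules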

2.1.1 Explaining the Example

Give a proof, using the domain theory, that the (positive) training example satisfies the target concept.
In our ongoing example, the positive example SafeToStack(o1,o2) can be explained using the domain theory as follows.
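
A hand-traced sketch of that explanation in Python follows. The instance description (Volume(o1)=2, Density(o1)=0.3, Type(o2)=endtable) is borrowed from Mitchell's worked example and is illustrative; only the facts the proof actually uses are listed.

# Facts describing the training example; its many other attributes (color,
# owner, material, ...) are irrelevant to the explanation, which is exactly
# what EBL exploits.
facts = {
    ("Volume", "o1", 2),
    ("Density", "o1", 0.3),
    ("Type", "o2", "endtable"),
}

def weight(obj):
    # Weight(x,5) <- Type(x, endtable)
    if ("Type", obj, "endtable") in facts:
        return 5
    # Weight(x,w) <- Volume(x,v) ^ Density(x,d) ^ Equal(w, times(v,d))
    vol = next(v for (p, o, v) in facts if p == "Volume" and o == obj)
    den = next(v for (p, o, v) in facts if p == "Density" and o == obj)
    return vol * den

def lighter(x, y):
    # Lighter(x,y) <- Weight(x,wx) ^ Weight(y,wy) ^ LessThan(wx,wy)
    return weight(x) < weight(y)

def safe_to_stack(x, y):
    # SafeToStack(x,y) <- Lighter(x,y)   (the ¬Fragile clause is not needed here)
    return lighter(x, y)

# Weight(o1) = 2 * 0.3 = 0.6, Weight(o2) = 5, and 0.6 < 5, so Lighter(o1,o2)
# holds and hence SafeToStack(o1,o2) holds.
print(safe_to_stack("o1", "o2"))  # True

Regressing SafeToStack(x,y) through this proof is what yields the general sufficient conditions in step 2 of the Prolog-EBG loop.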


2.1.4 Inductive Bias of Prolog-EBG

Since all the candidate hypotheses are generated from B it follows that the inductive bias of Prolog-EBG is simply B, right?
Almost. We also have to consider how it chooses from among the alternative clauses.
Since it uses sequential covering, adding one Horn clause per uncovered positive example, we can say that it prefers small sets of Horn clauses.
So, the inductive bias is B plus a preference for small sets of maximally general Horn clauses.
The inductive bias is largely determined by the input domain theory, not the algorithm.

3 Thinking about EBL


EBL as a theory-guided (rational) generalization of examples.
EBL as example-guided reformulation of theories. That is, reformulating the domain theory into a more usable form.
EBL as simply restating what the learner already knows. Remember that what one knows in principle is very different from what one knows
in practice (e.g., F = ma).

3.1 Knowledge Level Learning

In Prolog-EBG the hypothesis h follows logically from B alone, independent of D. So, why do we need examples?
Examples focus Prolog-EBG on generating rules that cover the distribution of instances that occur.
So, will it ever learn to classify an instance that could not be classified by B?
No. Since B → h, any classification entailed by h is also entailed by B.
OK, so is this a problem with all analytical learning methods?
No. For example, let B contain a statement like

GrandDaughter(sister(x),spouse(y)) ← GrandDaughter(x,y)

This rule entails nothing new on its own, but once we observe one example it can entail additional GrandDaughter() assertions: from an observed GrandDaughter(Ann, Bob) it entails GrandDaughter(sister(Ann), spouse(Bob)).
Another example is provided by assertions known as determinations. If we are trying to identify "people who speak Portuguese", a
determination might be

The language spoken by a person is determined by their nationality.

Together with a single positive example (say, one Brazilian who speaks Portuguese), this determination licenses the generalization that all Brazilians speak Portuguese, which B alone does not entail.

4 EBL of Search Control Knowledge


Search is an endemic problem in AI (in other words, AI is search). But often the spaces we search are huge.
Search problem:


I think it's also because EBL seems a lot harder to implement. (It's not, since Prolog, Soar, etc. already do it for you.)

URLs
1. Machine Learning book at Amazon, http://www.amazon.com/exec/obidos/ASIN/0070428077/multiagentcom/
2. Deep Blue homepage, http://www.research.ibm.com/deepblue/
3. Prodigy Homepage, http://www.cs.cmu.edu/~prodigy/
4. Soar Homepage, http://ai.eecs.umich.edu/soar/

This talk is available at http://jmvidal.cse.sc.edu/talks/analyticallearning/


Copyright © 2003 José M. Vidal. All rights reserved.
10 April 2003, 12:16PM
