
LEARNING FROM EXAMPLES

Dr. J. Ujwala Rekha


Introduction
• Learning agents improve their behavior through
diligent study of their own experiences.
• Any component of an agent can be improved by
learning from data.
• The improvements, and the techniques used to make
them, depend on four major factors:
– Which component is to be improved
– What prior knowledge the agent already has
– What representation is used for the data and the
component
– What feedback is available to learn from
Components to be Learned
• A direct mapping from conditions on the current state to
actions
• A means to infer relevant properties of the world from the
percept sequence
• Information about the way the world evolves and about the
results of possible actions the agent can take
• Utility information indicating the desirability of world states
• Action-value information indicating the desirability of actions
• Goals that describe classes of states whose achievement
maximizes the agent’s utility
Representation and Prior Knowledge
• Representation of knowledge
– Propositional logic
– Predicate logic
– Bayesian networks
• Inductive Learning: given examples of a function’s
inputs and outputs, the AI system attempts to learn a
general function that also applies to new data.
• Deductive Learning: going from a known general
rule to a new rule that is logically entailed
Feedback to Learn From
• There are three types of feedback that determine the
three main types of learning:
• Unsupervised Learning: the agent learns patterns in
the input even though no explicit feedback is supplied.
The most common unsupervised learning is clustering.
• Supervised Learning: the agent observes example
input-output pairs and learns a function that maps
from input to output
• Reinforcement Learning: the agent learns from a
series of reinforcements (rewards or punishments).
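Clustering, named above as the most common form of unsupervised learning, can be sketched with a tiny 1-D k-means loop. The data points and initial centers below are invented for illustration; the agent receives only unlabeled inputs and discovers groups with no feedback:

```python
# Minimal 1-D k-means sketch: alternate assigning points to the nearest
# center and moving each center to the mean of its assigned points.

def kmeans_1d(points, centers, iterations=10):
    """Cluster 1-D points around the given initial centers."""
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0])
print(centers)   # two cluster means, one near 1.0 and one near 9.0
```

No labels or rewards appear anywhere: the structure (two groups) emerges from the inputs alone, which is what distinguishes this from the supervised and reinforcement settings.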
Supervised Learning
• When the output is one of a finite set of
values such as sunny, cloudy or rainy, the
learning problem is called classification
• It is called Boolean or binary classification if
there are only two values.
• When the output is a number such as
tomorrow’s temperature, the learning
problem is called regression.
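The classification/regression split can be made concrete with one learner used both ways. This is a hedged sketch using a nearest-neighbour rule on made-up data, not a method from the slides:

```python
# The same nearest-neighbour idea serves classification (finite output
# set) or regression (numeric output); only the output type differs.

def nearest(x, examples):
    """Return the output of the training example whose input is closest to x."""
    return min(examples, key=lambda ex: abs(ex[0] - x))[1]

# Classification: outputs come from the finite set {"sunny", "cloudy", "rainy"}.
weather = [(0.1, "sunny"), (0.5, "cloudy"), (0.9, "rainy")]
print(nearest(0.45, weather))     # "cloudy"

# Regression: outputs are numbers (e.g. tomorrow's temperature).
temps = [(1, 20.0), (2, 22.0), (3, 25.0)]
print(nearest(2.2, temps))        # 22.0
```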
Supervised Learning
• Figure 18.1 (a) shows some data with an exact fit by a straight
line, 0.4x + 3.
• Figure 18.1 (b) shows a high-degree polynomial that is also a
consistent hypothesis because it agrees with all the data.
• This illustrates a fundamental problem in inductive learning:
how do we choose from among multiple consistent
hypotheses?
• According to Ockham’s razor, prefer the simplest hypothesis
consistent with the data.
• Defining simplicity is not easy, but it seems clear that a
degree-1 polynomial is simpler than a degree-7 polynomial.
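The degree-1 hypothesis of Figure 18.1(a) can be recovered by ordinary least squares. A minimal sketch, with data points generated here (an assumption) to lie exactly on y = 0.4x + 3:

```python
# Ordinary least squares for the line y = a*x + b, written out by hand.

def fit_line(xs, ys):
    """Closed-form least-squares fit of a straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

xs = [0, 1, 2, 3, 4, 5]
ys = [0.4 * x + 3 for x in xs]        # points exactly on the line 0.4x + 3
a, b = fit_line(xs, ys)
print(a, b)   # recovers slope 0.4 and intercept 3.0
```

A degree-7 polynomial would also pass through these six points, but the two-parameter line is the simplest hypothesis consistent with them, which is the choice Ockham's razor recommends.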
Learning Decision Trees
• A decision tree represents a function that takes as input a
vector of attribute values and returns a “decision”: a single
output value.
• The input and output values can be discrete or continuous.
• A decision tree reaches its decision by performing a sequence
of tests.
• Each internal node in the tree corresponds to a test of the
value of one of the input attributes and the branches from
the node are labeled with the possible values of the attribute.
• Each leaf node in the tree specifies a value to be returned by
the function.
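The structure just described maps directly onto nested data: internal nodes test one attribute, branches carry the attribute's possible values, and leaves hold the returned decision. A minimal sketch, with attributes ("Patrons", "Hungry") and values chosen here for illustration:

```python
# A decision tree as nested tuples: (attribute, {value: subtree-or-leaf}).

tree = ("Patrons",                       # attribute tested at the root
        {"None": "No",                   # leaf: decision returned directly
         "Some": "Yes",
         "Full": ("Hungry",              # internal node: another test
                  {"Yes": "Yes",
                   "No": "No"})})

def decide(node, example):
    """Perform the sequence of tests from the root until a leaf is reached."""
    while isinstance(node, tuple):
        attribute, branches = node
        node = branches[example[attribute]]
    return node

print(decide(tree, {"Patrons": "Full", "Hungry": "Yes"}))   # "Yes"
```

Note that classifying an example touches only the attributes on one root-to-leaf path, not the whole input vector.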
Learning Decision Trees
• Overfitting happens when a model learns the detail and
noise in the training data to the extent that it negatively
impacts the model’s performance on new data.
• Overfitting becomes more likely as the hypothesis space
and the number of input attributes grow, and less likely
as the number of training examples increases.
• For decision trees, a technique called decision tree
pruning combats overfitting.
• Pruning works by eliminating nodes that are not clearly
relevant.
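One way to decide that a node is "not clearly relevant" is reduced-error pruning, a hedged sketch of which follows (this is one common strategy, not necessarily the one the slides intend): collapse a subtree into its majority leaf whenever that does not hurt accuracy on held-out validation data. The tree and data below are invented; as a simplification, each subtree is scored on the whole validation set rather than only the examples reaching it.

```python
from collections import Counter

def decide(node, example):
    """Follow attribute tests until a leaf value is reached."""
    while isinstance(node, tuple):
        attr, branches = node
        node = branches[example[attr]]
    return node

def leaves(node):
    """All leaf values under a node."""
    if not isinstance(node, tuple):
        return [node]
    return [leaf for child in node[1].values() for leaf in leaves(child)]

def accuracy(node, data):
    return sum(decide(node, x) == y for x, y in data) / len(data)

def prune(node, validation):
    """Bottom-up: prune the children first, then try collapsing this node."""
    if not isinstance(node, tuple):
        return node
    attr, branches = node
    pruned = (attr, {v: prune(c, validation) for v, c in branches.items()})
    majority = Counter(leaves(pruned)).most_common(1)[0][0]
    if accuracy(majority, validation) >= accuracy(pruned, validation):
        return majority            # the split was not clearly relevant
    return pruned

# The "Raining" test fits noise: both of its branches predict "Go",
# so collapsing it loses nothing on the validation set.
tree = ("Hungry", {"Yes": ("Raining", {"Yes": "Go", "No": "Go"}),
                   "No": "Stay"})
validation = [({"Hungry": "Yes", "Raining": "Yes"}, "Go"),
              ({"Hungry": "Yes", "Raining": "No"}, "Go"),
              ({"Hungry": "No", "Raining": "No"}, "Stay")]
pruned = prune(tree, validation)
print(pruned)   # ("Hungry", {"Yes": "Go", "No": "Stay"})
```

The irrelevant "Raining" node is removed while the genuinely predictive "Hungry" test survives, giving a smaller tree that generalizes at least as well on the held-out data.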
THANK YOU
