
Instance-based learning

In machine learning, instance-based learning (sometimes called memory-based learning[1]) is a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Because computation is postponed until a new instance is observed, these algorithms are sometimes referred to as "lazy."[2]

It is called instance-based because it constructs hypotheses directly from the training instances
themselves.[3] This means that the hypothesis complexity can grow with the data:[3] in the worst case, a
hypothesis is a list of n training items and the computational complexity of classifying a single new instance
is O(n). One advantage that instance-based learning has over other methods of machine learning is the ease with which the model adapts to previously unseen data: the learner can simply store a new instance or discard an old one.

Examples of instance-based learning algorithms are the k-nearest neighbors algorithm, kernel machines and
RBF networks.[2]: ch. 8  These store (a subset of) their training set; when predicting a value/class for a new
instance, they compute distances or similarities between this instance and the training instances to make a
decision.
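The distance-based prediction described above can be sketched as a minimal k-nearest-neighbors classifier. This is an illustrative sketch, not code from any cited reference; the data points and function name are invented for the example.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest stored instances.

    `train` is a list of (feature_vector, label) pairs. All training
    instances are kept in memory, so a single prediction costs O(n)
    distance computations, where n = len(train).
    """
    # Sort stored instances by Euclidean distance to the query.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.2, 0.1)))  # prints "a"
```

Note that no model is fit in advance: all work happens at prediction time, which is exactly the "lazy" behavior described above.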

To reduce the memory cost of storing all training instances, as well as the risk of overfitting to noise in the training set, instance reduction algorithms have been proposed.[4]
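One classic instance reduction technique is Hart's condensed nearest-neighbor rule, which keeps a training instance only if the instances retained so far would misclassify it under a 1-NN rule. The sketch below is illustrative and uses invented data; it is one example of the family of reduction algorithms surveyed in [4], not the specific method of that paper.

```python
import math

def condense(train):
    """Hart's condensed nearest-neighbor rule (a simple instance reduction).

    Starting from one stored instance, repeatedly scan the training set
    and add any instance that the current store misclassifies with 1-NN.
    Instances the store already classifies correctly are never added.
    """
    store = [train[0]]
    changed = True
    while changed:
        changed = False
        for x, label in train:
            nearest = min(store, key=lambda s: math.dist(s[0], x))
            if nearest[1] != label:
                store.append((x, label))
                changed = True
    return store

train = [((0.0, 0.0), "a"), ((0.1, 0.1), "a"), ((0.2, 0.0), "a"),
         ((1.0, 1.0), "b"), ((1.1, 0.9), "b"), ((0.9, 1.0), "b")]
reduced = condense(train)
# the reduced store is typically much smaller than the full training set
```

On this toy data the store shrinks to one instance per well-separated cluster, trading a small risk of error near class boundaries for much lower memory use.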

See also
Analogical modeling

References
1. Walter Daelemans; Antal van den Bosch (2005). Memory-Based Language Processing.
Cambridge University Press.
2. Tom Mitchell (1997). Machine Learning. McGraw-Hill.
3. Stuart Russell and Peter Norvig (2003). Artificial Intelligence: A Modern Approach, second
edition, p. 733. Prentice Hall. ISBN 0-13-080302-2
4. D. Randall Wilson; Tony R. Martinez (2000). "Reduction techniques for instance-based
learning algorithms". Machine Learning.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Instance-based_learning&oldid=1024885183"
