Human-Level Concept Learning Through Probabilistic Program Induction

This paper introduces a Bayesian Program Learning (BPL) framework that is capable of one-shot learning of visual concepts, such as characters from different alphabets, in a way that generalizes similarly to human learning. The BPL framework represents concepts as simple probabilistic programs built compositionally from parts. It learns by constructing programs that best explain example images under a Bayesian criterion. In experiments, the BPL framework achieved human-level one-shot learning performance on character recognition tasks, outperforming deep learning models.


1. Introduction
When people are presented with something new, they generalize from it quickly. For example, someone who has learnt to drive one car can usually drive other cars, even models quite different from the one they learnt in; machines do not share this ability. Machine learning algorithms can require hundreds or even thousands of training images before they achieve human-like accuracy.
This paper presents a computational model that captures these human learning capabilities for a large class of simple visual concepts, such as handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain the given examples under a Bayesian criterion. On a challenging one-shot classification task, it achieves human-level performance, and in some cases it has even passed visual Turing tests.

● This paper introduces the Bayesian Program Learning (BPL) framework, which is capable of learning a large class of visual concepts from just a single example and generalizing in ways similar to how people generalize.
● Concepts are represented as simple probabilistic programs.
● The framework brings together three key ideas, compositionality, causality, and learning to learn, each of which has been separately influential in cognitive science and machine learning.
● Learning proceeds by constructing programs that best explain the observed examples under a Bayesian criterion (see the sketch after this list).
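
To make "best explain under a Bayesian criterion" concrete, here is a minimal Python sketch. The `log_prior` and `log_likelihood` callables are hypothetical stand-ins for BPL's type prior and image model; this is an illustration of the scoring idea, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): scoring candidate programs under a
# Bayesian criterion. `log_prior` and `log_likelihood` are hypothetical
# stand-ins for BPL's actual type prior and image model.

def log_posterior_score(program, image, log_prior, log_likelihood):
    """Unnormalized log posterior:
    log P(program | image) = log P(image | program) + log P(program) + const.
    """
    return log_likelihood(image, program) + log_prior(program)

def best_explaining_program(candidates, image, log_prior, log_likelihood):
    """Return the candidate program that best explains the example image."""
    return max(candidates,
               key=lambda p: log_posterior_score(p, image,
                                                 log_prior, log_likelihood))
```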

2. Bayesian Program Learning


This approach learns simple stochastic programs to represent concepts, building them compositionally from parts, subparts, and spatial relations. BPL defines a generative model that can sample new concept types by combining parts and subparts.

The generative process for types P(ψ) and tokens P(θ(m) | ψ) is described by the pseudocode in Fig. 3B of the paper and detailed, along with the image model P(I(m) | θ(m)), in its section S2. The model "learns to learn" by fitting each conditional distribution to a background set of characters from 30 alphabets, using both the image and the stroke data; this image set was also used to pretrain the alternative deep learning models.
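
The two-level sampling (a type ψ, then a token θ(m) conditioned on it) can be illustrated with a toy Python sketch. All distributions below (number of strokes, control points, relations, motor noise) are invented placeholders, not the conditional distributions the paper fits to the background set.

```python
import random

# Illustrative sketch of BPL's two-level generative process, loosely
# following the pseudocode in Fig. 3B. Every distribution here is a
# made-up placeholder, not the paper's fitted conditionals.

def sample_type():
    """Sample a character type psi: strokes built from subparts, plus relations."""
    n_strokes = random.randint(1, 4)                    # number of parts (strokes)
    strokes = [[(random.random(), random.random())      # control points per subpart
                for _ in range(random.randint(2, 4))]
               for _ in range(n_strokes)]
    relations = [random.choice(["independent", "start", "end", "along"])
                 for _ in range(n_strokes)]             # how each stroke attaches
    return {"strokes": strokes, "relations": relations}

def sample_token(psi):
    """Sample a token theta(m) | psi by perturbing the type with motor noise."""
    def jitter(v):
        return v + random.gauss(0, 0.02)
    return {"strokes": [[(jitter(x), jitter(y)) for (x, y) in s]
                        for s in psi["strokes"]],
            "relations": psi["relations"]}

# An image I(m) would then be rendered from the token via the image model
# P(I(m) | theta(m)), e.g. tracing the strokes with ink and pixel noise (omitted).
```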
3. Results

● In the one-shot classification experiment on handwritten characters, humans had an error rate of about 4.5%, BPL about 3.3%, and a deep CNN about 13.5% (a scoring sketch follows this list).
● BPL benefits from modeling the underlying causal process by which concepts are generated.
● The human capacity for one-shot learning goes beyond classification; it also supports generating new examples of a concept.
● A Siamese DNN customised for one-shot learning was also compared.
● Even with less prior experience, BPL was able to beat the DNNs.
● Generating entirely novel concepts that are still meaningful is a form of creativity; when asked to produce such new characters, BPL again gave good results.
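
As a rough illustration of how a one-shot classification experiment can be scored with a BPL-style model, here is a hypothetical one-way sketch; the paper's actual procedure refits induced programs to the test image (and in both directions), which is omitted here. `fit_program` and `log_likelihood` are assumed helpers, not the paper's API.

```python
# Hypothetical sketch of one-shot classification with a BPL-style score.
# `fit_program` (program induction from a single image) and `log_likelihood`
# (the image model) are assumed helpers, not the paper's API.

def classify_one_shot(test_image, train_images, fit_program, log_likelihood):
    """Pick the class whose single training example best explains the test
    image: argmax over classes c of log P(I_test | I_c)."""
    def score(c):
        program = fit_program(train_images[c])      # induce a program from one example
        return log_likelihood(test_image, program)  # evaluate it on the test image
    return max(train_images, key=score)             # train_images: class -> image
```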

4. Discussion
● The principles of compositionality, causality, and learning to learn should help in building machines with better one-shot learning algorithms.
● BPL still sees less information than humans do; for example, it has no explicit knowledge of cues such as symmetry or parallel lines.
● The approach can be extended beyond visual perception.
● Audio recognition is a promising application for this kind of algorithm.
● Comparing how children parse and generalize at different stages of learning would help in evaluating different BPL-style algorithms, which currently model adult learning.
