Deep Learning Unit 2


Machine Learning

Introduction to Neural Learning:

Predict, Compare, and Learn; Hot and Cold Learning; Measuring Error; Gradient Descent; Breaking Gradient Descent; Alpha in Code; Memorizing; Gradient Descent Learning with Multiple Inputs and Outputs.
Step 1: Predict

A simple neural network making a prediction.
What is input data?

It's a number that you recorded in the real world somewhere. It's usually something that is easily knowable, like today's temperature, a baseball player's batting average, or yesterday's stock price.
What is a prediction?

A prediction is what the neural network tells you, given the input data, such as:

• Given the temperature, it is 0% likely that people will wear sweatsuits today.
• Given a baseball player's batting average, he is 30% likely to hit a home run.
• Given yesterday's stock price, today's stock price will be 101.52.

Is this prediction always right?

No. Sometimes a neural network will make mistakes, but it can learn from them. For example, if it predicts too high, it will adjust its weight to predict lower next time, and vice versa.
How does the network learn?

• Trial and error! First, it tries to make a prediction. Then, it sees whether the prediction was too high or too low.
• Finally, it changes the weight (up or down) to predict more accurately the next time it sees the same input.
The interface for the neural network is simple: it accepts an input variable as information and a weights variable as knowledge, and it outputs a prediction.
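A minimal sketch of that interface in Python, assuming a single input and a single weight (the values 0.1 and 8.5 are illustrative, not from the text):

weight = 0.1  # the network's "knowledge" (an illustrative starting value)

def neural_network(input, weight):
    # The prediction is simply the input scaled by the weight.
    return input * weight

number_of_toes = [8.5]  # input data recorded in the real world (illustrative)
pred = neural_network(number_of_toes[0], weight)
print(pred)  # 0.85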
Compare

• Comparing gives a measurement of how much a prediction "missed" by.
• Measuring error is one of the most important and complicated subjects of deep learning.
• One simple way of measuring error is mean squared error. It's but one of many ways to evaluate the accuracy of a neural network.
• The output of the compare logic is a "hot or cold" type signal: given some prediction, you calculate an error measure that says either "a lot" or "a little."
• It won't tell you why you missed, what direction you missed in, or what you should do to fix the error. It more or less says "big miss," "little miss," or "perfect prediction." What to do about the error is captured in the next step, learn.
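In code, the single-prediction version of mean squared error looks like this (the knob_weight, input, and goal_pred values are illustrative):

knob_weight = 0.5  # illustrative weight
input = 0.5        # illustrative input
goal_pred = 0.8    # the value the prediction should have been

pred = input * knob_weight
# Squaring makes the error positive and amplifies big misses,
# giving the "a lot" vs. "a little" signal described above.
error = (pred - goal_pred) ** 2
print(error)  # 0.3025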
Learn

• Learning tells each weight how it can change to reduce the error.
• Learning is all about error attribution, or the art of figuring out how each weight played its part in creating the error.
Why measure error?

• Measuring error simplifies the problem. The goal of training a neural network is to make correct predictions. That's what you want.
• You want the network to take input that you can easily calculate (today's stock price) and predict things that are hard to calculate (tomorrow's stock price). That's what makes a neural network useful.
What's the simplest form of neural learning?

• Learning using the hot and cold method.

Hot and cold learning

• Hot and cold learning means wiggling the weights to see which direction reduces the error the most, moving the weights in that direction, and repeating until the error gets to 0.
Characteristics of hot and cold learning

• Hot and cold learning is simple. After making a prediction, you predict two more times, once with a slightly higher weight and again with a slightly lower weight.
• You then move the weight depending on which direction gave a smaller error. Repeating this enough times eventually reduces the error to 0, as the sketch below shows.
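A sketch of the whole procedure, assuming illustrative values (input 0.5, goal 0.8, and a wiggle of 0.001):

weight = 0.5
input = 0.5
goal_prediction = 0.8
step_amount = 0.001  # how much to wiggle the weight each iteration

for iteration in range(1101):
    prediction = input * weight
    error = (prediction - goal_prediction) ** 2

    up_prediction = input * (weight + step_amount)    # try knob up
    up_error = (up_prediction - goal_prediction) ** 2

    down_prediction = input * (weight - step_amount)  # try knob down
    down_error = (down_prediction - goal_prediction) ** 2

    # Move the weight in whichever direction gave the smaller error.
    if down_error < up_error:
        weight -= step_amount
    else:
        weight += step_amount

print(weight, error)  # weight approaches 1.6, error approaches 0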
Problem 1: It's inefficient.
You have to predict multiple times to make a single knob_weight update.

Problem 2: Sometimes it's impossible to predict the exact goal prediction.
Because the weight moves by a fixed step amount, it can straddle the goal and oscillate around it forever without ever landing on it exactly, as sketched below.
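A small illustration of Problem 2, with illustrative values chosen so that predictions move in steps of 0.05 and can reach 0.80 or 0.85 but never the goal 0.83:

weight, input, goal = 0.5, 0.5, 0.83
step_amount = 0.1  # fixed wiggle size

for iteration in range(30):
    up_error = (goal - input * (weight + step_amount)) ** 2
    down_error = (goal - input * (weight - step_amount)) ** 2
    # Always take the better of the two fixed-size steps.
    weight += step_amount if up_error < down_error else -step_amount

# The weight ends up bouncing between ~1.6 and ~1.7; the prediction
# alternates between 0.80 and 0.85 and never hits 0.83 exactly.
print(weight)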
Calculating both direction and amount from error

direction_and_amount represents how you want to change weight.

• The first part is the pure error, which equals (pred - goal_pred).
• The second part is the multiplication by the input, which performs scaling, negative reversal, and stopping, modifying the pure error so it's ready to update weight.
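Put together, one learning step looks like the following sketch (the weight, goal_pred, and input values are illustrative):

weight, goal_pred, input = 0.5, 0.8, 0.5  # illustrative values

for iteration in range(20):
    pred = input * weight
    error = (pred - goal_pred) ** 2
    # Pure error times input: scaled, reversed, and stopped as needed.
    direction_and_amount = (pred - goal_pred) * input
    weight = weight - direction_and_amount
    print("Error:", error, "Prediction:", pred)

# The error shrinks toward 0 and the prediction approaches 0.8.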
What is the pure error?

• The pure error is (pred - goal_pred), which indicates the raw direction and amount by which you missed.
• If this is a positive number, you predicted too high, and vice versa.
• If this is a big number, you missed by a big amount, and so on.
What are scaling, negative reversal, and stopping?

• These three attributes have the combined effect of translating the pure error into the absolute amount you want to change weight.
• They do so by addressing three major edge cases where the pure error isn't sufficient to make a good modification to weight.
What is stopping?

• Stopping is the first (and simplest) effect on the pure error caused by multiplying it by input.
• Imagine plugging a CD player into your stereo. If you turned the volume all the way up but the CD player was off, the volume change wouldn't matter.
• Stopping addresses this in a neural network: if input is 0, it forces direction_and_amount to also be 0. You don't learn (change the volume) when input is 0, because there's nothing to learn.
• When input is 0, every weight value gives the same error, and moving the weight makes no difference, because pred is always 0.
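A quick check of stopping, with illustrative values:

weight, goal_pred, input = 0.5, 0.8, 0.0  # input is 0 (illustrative)

pred = input * weight  # always 0, no matter what weight is
direction_and_amount = (pred - goal_pred) * input
print(direction_and_amount)  # -0.0, i.e. zero: the weight is never updated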
What is negative reversal?

• This is probably the most difficult and important effect. Normally (when input is positive), moving weight upward makes the prediction move upward.
• But if input is negative, the weight update suddenly changes direction! When input is negative, moving weight up makes the prediction go down. It's reversed!
• How do you address this? Multiplying the pure error by input reverses the sign of direction_and_amount whenever input is negative. This is negative reversal, ensuring that weight moves in the correct direction even if input is negative.
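A sketch with a negative input (values illustrative); the sign flip from multiplying by input keeps the updates pointed the right way:

weight, goal_pred, input = 0.5, 0.8, -0.5  # negative input (illustrative)

for iteration in range(20):
    pred = input * weight
    pure_error = pred - goal_pred
    # Multiplying by the negative input flips the sign of the update.
    direction_and_amount = pure_error * input
    weight = weight - direction_and_amount

print(weight, input * weight)  # weight nears -1.6; prediction nears 0.8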
What is scaling?

• Scaling is the third effect on the pure error caused by multiplying it by input.
• Logically, if input is big, your weight update should also be big.
• This is more of a side effect, because it often goes out of control. Later, you'll use alpha to address when that happens.
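A sketch of scaling running away and alpha reining it in (the values, including alpha = 0.1, are illustrative):

weight, goal_pred, input = 0.5, 0.8, 2.0  # a big input (illustrative)
alpha = 0.1  # learning rate (illustrative)

for iteration in range(20):
    pred = input * weight
    direction_and_amount = (pred - goal_pred) * input
    # Without alpha the update is scaled by input twice (once in pred,
    # once here), so each step overshoots and the error blows up.
    # Multiplying by alpha keeps the steps small enough to converge.
    weight = weight - alpha * direction_and_amount

print(weight, input * weight)  # weight nears 0.4; prediction nears 0.8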
