
Artificial Neural Networks

What can they do? How do they work? What might we use them for in our project? Why are they so cool?

History

Late 1800s: Neural networks appear as an analogy to biological systems
1960s and 70s: Simple neural networks appear, then fall out of favor because the perceptron is not effective by itself and there were no good algorithms for multilayer nets
1986: The backpropagation algorithm appears, and neural networks have a resurgence in popularity

Applications

Handwriting recognition
Recognizing spoken words
Face recognition (you will get a chance to play with this later!)
ALVINN
TD-Gammon

ALVINN

Autonomous Land Vehicle In a Neural Network
Robotic car created in the 1980s by Dean Pomerleau
In 1995 it drove 1000 miles in traffic at speeds of up to 120 MPH and steered the car coast to coast (throttle and brakes controlled by a human)

30 x 32 image as input, 4 hidden units, and 30 outputs

TD-GAMMON

Plays backgammon
Created by Gerry Tesauro in the early 90s
Uses a variation of temporal-difference (TD) learning (similar to what we might use); a neural network was used to learn the evaluation function
Trained on over 1 million games played against itself
Plays competitively at world-class level

Basic Idea

Modeled on biological systems, though this association has become much looser
Learn to classify objects (and can do more than this)
Learn from given training data of the form (x1, ..., xn, output)

Properties

Inputs are flexible: any real values, highly correlated or independent
Target function may be discrete-valued, real-valued, or a vector of discrete or real values
Outputs are real numbers between 0 and 1
Resistant to errors in the training data
Long training time
Fast evaluation
The function produced can be difficult for humans to interpret

Perceptrons

Basic unit in a neural network; a linear separator
Parts:
    N inputs, x1 ... xn
    Weights for each input, w1 ... wn
    A bias input x0 (constant) and associated weight w0
    Weighted sum of inputs, y = w0*x0 + w1*x1 + ... + wn*xn
    A threshold function, i.e. output 1 if y > 0, -1 if y <= 0

Diagram

Inputs x0, x1, ..., xn with weights w0, w1, ..., wn feed into the weighted sum y = Σ wi*xi, which passes through the threshold: output 1 if y > 0, -1 otherwise.
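As a rough illustration of the unit in the diagram, here is a minimal perceptron sketch in Python (the function name and the numpy dependency are my own choices, not part of the slides):

```python
import numpy as np

def perceptron_output(x, w):
    """Weighted sum of the inputs followed by the threshold function.

    x: inputs, including the constant bias input x0
    w: one weight per input, including the bias weight w0
    """
    y = np.dot(w, x)           # y = w0*x0 + w1*x1 + ... + wn*xn
    return 1 if y > 0 else -1  # threshold: 1 if y > 0, -1 otherwise
```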

Linear Separator
A single perceptron can separate data where one straight line divides the + points from the - points, but not the XOR case, where no single line separates the two classes.

Boolean Functions
With boolean inputs x1, x2 in {0, 1} and a constant bias input x0 = -1:

x1 AND x2:  w0 = 1.5, w1 = 1, w2 = 1
x1 OR x2:   w0 = 0.5, w1 = 1, w2 = 1
NOT x1:     w0 = -0.5, w1 = -1

Thus all boolean functions can be represented by layers of perceptrons!
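As a quick sanity check of these weights, here is a short sketch that reuses the perceptron_output helper defined above (my own illustration, not from the slides):

```python
import numpy as np

AND_w = np.array([1.5, 1.0, 1.0])   # weights [w0, w1, w2], bias input x0 = -1
OR_w  = np.array([0.5, 1.0, 1.0])
NOT_w = np.array([-0.5, -1.0])      # weights [w0, w1]

for x1 in (0, 1):
    for x2 in (0, 1):
        x = np.array([-1.0, x1, x2])
        print(x1, x2,
              "AND:", perceptron_output(x, AND_w),
              "OR:",  perceptron_output(x, OR_w))
    print("NOT", x1, "->", perceptron_output(np.array([-1.0, x1]), NOT_w))
```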

Perceptron Training Rule


wi ← wi + Δwi
Δwi = η (t − o) xi

where:
    wi: the weight of input i
    η: the learning rate, between 0 and 1
    t: the target output
    o: the actual output
    xi: the i-th input
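A minimal sketch of one application of this rule in Python, building on the perceptron_output sketch above (the function name and default learning rate are mine):

```python
def perceptron_train_step(w, x, t, eta=0.1):
    """Apply the perceptron training rule once: wi <- wi + eta * (t - o) * xi."""
    o = perceptron_output(x, w)   # actual output of the unit (+1 or -1)
    return w + eta * (t - o) * x  # updated weight vector
```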

Gradient Descent

The perceptron training rule may not converge if the points are not linearly separable
Gradient descent tries to fix this by changing the weights based on the total error over all training points, rather than the error on individual points
If the data is not linearly separable, it will converge to the best fit

Gradient Descent
Error function:  E(w) = (1/2) Σ_{d∈D} (t_d − o_d)²

Weight update:   wi ← wi + Δwi,  where  Δwi = −η ∂E/∂wi = η Σ_{d∈D} (t_d − o_d) x_id

Gradient Descent Algorithm


GRADIENT-DESCENT(training_examples, η)
Each training example is a pair (x, t), where x is the vector of input values, t is the target output value, and η is the learning rate (0 < η < 1)
Initialize each wi to some small random value
Until the termination condition is met, Do
----Initialize each Δwi to zero
----For each (x, t) in training_examples, Do
--------Input the instance x to the unit and compute the output o
--------For each linear unit weight wi, Do: Δwi ← Δwi + η (t − o) xi
----For each linear unit weight wi, Do: wi ← wi + Δwi
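A small Python sketch of this batch loop for a single linear unit, under the assumption that each input vector already includes the bias input (the function name, epoch count, and default η are my own):

```python
import numpy as np

def gradient_descent(training_examples, eta=0.05, epochs=100):
    """training_examples: list of (x, t) pairs, x a numpy array that includes the bias input."""
    n = len(training_examples[0][0])
    w = np.random.uniform(-0.05, 0.05, size=n)   # small random initial weights
    for _ in range(epochs):                      # termination condition: fixed epoch count
        delta_w = np.zeros(n)                    # initialize each delta wi to zero
        for x, t in training_examples:
            o = np.dot(w, x)                     # linear unit output
            delta_w += eta * (t - o) * x         # accumulate eta * (t - o) * xi
        w += delta_w                             # update all weights at once
    return w
```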

Gradient Descent Issues

Converging to a minimum can be very slow (the while loop may have to run many times)
May converge to a local minimum
Stochastic gradient descent:
    Update the weights after each training example rather than all at once
    Takes less memory
    Can sometimes avoid local minima
    η must decrease over time in order for it to converge
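For contrast, a sketch of the stochastic variant, which updates after every example and shrinks η over time (the names and the particular decay schedule are my own choices):

```python
import numpy as np

def stochastic_gradient_descent(training_examples, eta=0.05, epochs=100):
    n = len(training_examples[0][0])
    w = np.random.uniform(-0.05, 0.05, size=n)
    for epoch in range(epochs):
        step = eta / (1 + epoch)          # eta must decrease with time for convergence
        for x, t in training_examples:
            o = np.dot(w, x)
            w += step * (t - o) * x       # update immediately after each example
    return w
```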

Multi-layer Neural Networks

A single perceptron can only learn linearly separable functions
We would like to build networks of perceptrons, but how do we determine the error of the output for an internal node?
Solution: the backpropagation algorithm

Differentiable Threshold Unit

We need a differentiable threshold unit in order to continue
Our old threshold function (1 if y > 0, -1 otherwise) is not differentiable
One solution is the sigmoid unit

Graph of Sigmoid Function

Sigmoid Function
Output:  o = σ(w · x)

σ(y) = 1 / (1 + e^(−y))

dσ(y)/dy = σ(y) (1 − σ(y))
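A tiny sketch of the sigmoid and its derivative; the convenient identity σ'(y) = σ(y)(1 − σ(y)) is what the backpropagation rule below relies on:

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def sigmoid_derivative(y):
    s = sigmoid(y)
    return s * (1.0 - s)   # d(sigma)/dy = sigma(y) * (1 - sigma(y))
```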

Variable Definitions

xij = the input to unit j from unit i
wij = the weight associated with the input to unit j from unit i
oj = the output computed by unit j
tj = the target output for unit j
outputs = the set of units in the final layer of the network
Downstream(j) = the set of units whose immediate inputs include the output of unit j

Backpropagation Rule
Error on training example d:
    E_d(w) = (1/2) Σ_{k∈outputs} (t_k − o_k)²

Weight update:
    Δw_ij = −η ∂E_d/∂w_ij

For output units:
    Δw_ij = η (t_j − o_j) o_j (1 − o_j) x_ij

For internal units:
    Δw_ij = η δ_j x_ij,  where  δ_j = o_j (1 − o_j) Σ_{k∈Downstream(j)} δ_k w_jk

Backpropagation Algorithm

For simplicity, the following algorithm is for a two-layer neural network, with one output layer and one hidden layer

Thus, Downstream(j) = outputs for any internal node j
Note: any boolean function can be represented by a two-layer neural network!

BACKPROPAGATION(training_examples, η, n_in, n_out, n_hidden)

Create a feed-forward network with n_in inputs, n_hidden units in the hidden layer, and n_out output units
Initialize all the network weights to small random numbers (e.g. between -0.05 and 0.05)
Until the termination condition is met, Do
---Propagate the input forward through the network:
------Input the instance x to the network and compute the output o_u for every unit u in the network
---Propagate the errors backward through the network:
------For each network output unit k, calculate its error term δ_k:  δ_k = o_k (1 − o_k) (t_k − o_k)
------For each hidden unit h, calculate its error term δ_h:  δ_h = o_h (1 − o_h) Σ_{k∈outputs} w_hk δ_k
------Update each network weight w_ij:  w_ij ← w_ij + Δw_ij,  where  Δw_ij = η δ_j x_ij
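Here is a compact Python sketch of this two-layer version, vectorized with numpy rather than looping over individual units; it reuses the sigmoid helper above, and the fixed epoch count and bias handling are my own assumptions:

```python
import numpy as np

def backpropagation(training_examples, eta, n_in, n_out, n_hidden, epochs=1000):
    """training_examples: list of (x, t) pairs of numpy arrays with lengths n_in and n_out."""
    # Initialize all network weights to small random numbers; the extra column is the bias weight.
    w_hidden = np.random.uniform(-0.05, 0.05, (n_hidden, n_in + 1))
    w_out = np.random.uniform(-0.05, 0.05, (n_out, n_hidden + 1))
    for _ in range(epochs):                       # termination condition: fixed number of epochs
        for x, t in training_examples:
            # Propagate the input forward through the network.
            x_b = np.append(x, 1.0)               # append a constant bias input
            o_h = sigmoid(w_hidden @ x_b)         # hidden-unit outputs
            o_h_b = np.append(o_h, 1.0)
            o_k = sigmoid(w_out @ o_h_b)          # output-unit outputs
            # Propagate the errors backward through the network.
            delta_k = o_k * (1 - o_k) * (t - o_k)                     # output error terms
            delta_h = o_h * (1 - o_h) * (w_out[:, :-1].T @ delta_k)   # hidden error terms
            # Update each network weight: w_ij <- w_ij + eta * delta_j * x_ij
            w_out += eta * np.outer(delta_k, o_h_b)
            w_hidden += eta * np.outer(delta_h, x_b)
    return w_hidden, w_out
```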

Momentum

Add a fraction α (0 <= α < 1) of the previous update for a weight to the current update
May allow the learner to avoid local minima
May speed up convergence to the global minimum
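In update form this is Δw_ij(n) = η δ_j x_ij + α Δw_ij(n−1). A one-function sketch (the names are mine):

```python
def momentum_update(w, grad_step, prev_update, alpha=0.3):
    """Blend the current gradient step with a fraction alpha of the previous update."""
    update = grad_step + alpha * prev_update   # delta_w(n) = eta*delta*x + alpha*delta_w(n-1)
    return w + update, update                  # new weights plus the update to reuse next time
```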

When to Stop Learning

Learn until the error on the training set is below some threshold
    Bad idea! Can result in overfitting: if you match the training examples too well, your performance on the real problem may suffer
Instead, learn while trying to get the best result on some validation data
    Data from your training set that is not trained on, but instead used to check the function
    Stop when performance seems to be decreasing on the validation data, while saving the best network seen so far
    There may be local minima, so watch out!
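A rough sketch of this early-stopping loop, assuming hypothetical train_one_epoch and validation_error helpers (neither comes from the slides), with a patience counter to tolerate temporary dips caused by local minima:

```python
import copy

def train_with_early_stopping(network, train_data, validation_data,
                              train_one_epoch, validation_error,
                              max_epochs=1000, patience=10):
    best_net, best_error, bad_epochs = None, float("inf"), 0
    for _ in range(max_epochs):
        train_one_epoch(network, train_data)               # hypothetical training step
        err = validation_error(network, validation_data)   # hypothetical error measure
        if err < best_error:
            best_net, best_error, bad_epochs = copy.deepcopy(network), err, 0  # save best so far
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                     # performance keeps decreasing: stop
                break
    return best_net
```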

Representational Capabilities

Boolean functions: every boolean function can be represented exactly by some network with two layers of units
    The size may be exponential in the number of inputs
Continuous functions: can be approximated to arbitrary accuracy with two layers of units
Arbitrary functions: any function can be approximated to arbitrary accuracy with three layers of units

Example: Face Recognition

From Machine Learning by Tom M. Mitchell
Input: 30 by 32 pictures of people with the following properties:
    Wearing eyeglasses or not
    Facial expression: happy, sad, angry, neutral
    Direction in which they are looking: left, right, up, straight ahead
Output: determine which category the picture fits into for one of these properties (we will talk about direction)

Input Encoding

Each pixel is an input: 30 * 32 = 960 inputs
The value of each pixel (0 to 255) is linearly mapped onto the range of reals between 0 and 1

Output Encoding

Could use a single output node with the classifications assigned to 4 values (e.g. 0.2, 0.4, 0.6, and 0.8)
Instead, use 4 output nodes (one for each value)
    This is 1-of-N output encoding
    It provides more degrees of freedom to the network
The sigmoid function can never reach 0 or 1, so use target values of 0.1 and 0.9 instead of 0 and 1
Example: (0.9, 0.1, 0.1, 0.1) = left, (0.1, 0.9, 0.1, 0.1) = right, etc.
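A short sketch of both encodings (the pixel scaling from the previous slide and the 1-of-N targets here); the direction ordering follows the example above, and the helper names are mine:

```python
import numpy as np

DIRECTIONS = ["left", "right", "up", "straight"]

def encode_input(pixels):
    """Linearly map 0-255 pixel values of a 30 x 32 image onto reals between 0 and 1."""
    return np.asarray(pixels, dtype=float).reshape(960) / 255.0

def encode_target(direction):
    """1-of-N encoding using 0.1/0.9 instead of 0/1, since the sigmoid never reaches 0 or 1."""
    t = np.full(len(DIRECTIONS), 0.1)
    t[DIRECTIONS.index(direction)] = 0.9
    return t
```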

Network structure
960 inputs (x1 ... x960) feed into 3 hidden units, which feed into the output units.

Other Parameters

Training rate: η = 0.3
Momentum: α = 0.3
Used full gradient descent (as opposed to stochastic)
Weights in the output units were initialized to small random values, but input weights were initialized to 0 (this yields better visualizations of the learned weights)

Result: 90% accuracy on the test set!

Try it yourself!

Get the code from http://www.cs.cmu.edu/~tom/mlbook.html


Go to the Software and Data page, then follow the "Neural network learning to recognize faces" link
Follow the documentation

You can also copy the code and data from my ACM account (provided you have one too), although you will want a fresh copy of facetrain.c and imagenet.c from the website

/afs/acm.uiuc.edu/user/jcander1/Public/NeuralNetwork
