UNIT-4
Back Propagation in Neural Network
Syllabus: Standard Back-Propagation Neural Net, Gaussian Machine, Cauchy Machine, Boltzmann Machine with Learning, Simple Recurrent Net.

Back-Propagation Network
• The back-propagation learning algorithm is one of the most important developments in neural networks (Bryson and Ho, 1969; Werbos, 1974; Parker, 1985; Rumelhart, 1986).
• This learning algorithm is applied to multilayer feed-forward networks consisting of processing elements with continuous, differentiable activation functions.

Overview of the Back-Propagation Neural Network
• The aim is to train the net to achieve a balance between the ability to respond correctly to the input patterns used for training and the ability to give reasonable responses to input that is similar, but not identical, to that used in training.
• Training a network by back propagation involves three stages:
• the feedforward of the input training pattern,
• the calculation and back propagation of the associated error, and
• the adjustment of the weights.

Architecture

Nomenclature
• The nomenclature we use in the training algorithm for the backpropagation net is as follows:

Activation Function
• An activation function for a backpropagation net should have several important characteristics: it should be continuous, differentiable, and monotonically non-decreasing.
• For computational efficiency, it is desirable that its derivative be easy to compute.
• One of the most typical activation functions is the binary sigmoid function, which has range (0, 1) and is defined as
  f(x) = 1 / (1 + e^(-x)),   with derivative f'(x) = f(x) [1 - f(x)].
• The bipolar sigmoid function is closely related: g(x) = 2 f(x) - 1, which has range (-1, 1) and derivative g'(x) = (1/2) [1 + g(x)] [1 - g(x)].

Updating of Weights and Biases

Drawbacks
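As a concrete illustration of the material above, the sketch below implements the binary sigmoid and the three training stages (feedforward, back propagation of the error, and weight adjustment) for a tiny fully connected network trained on XOR. This is a minimal from-scratch example, not the notation of any particular textbook; the network size (2 inputs, 4 hidden units, 1 output), the learning rate, and all variable names are illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    # Binary sigmoid, range (0, 1): f(x) = 1 / (1 + e^(-x)).
    # Its derivative is cheap to compute from the output itself:
    # f'(x) = f(x) * (1 - f(x)).
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_h, b_h, w_o, b_o):
    # Stage 1: feedforward of the input pattern.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w_h, b_h)]
    y = sigmoid(sum(w * hi for w, hi in zip(w_o, h)) + b_o)
    return h, y

def train_step(x, t, w_h, b_h, w_o, b_o, lr):
    h, y = forward(x, w_h, b_h, w_o, b_o)
    # Stage 2: back-propagate the associated error; each delta
    # term uses the sigmoid derivative f(net) * (1 - f(net)).
    delta_o = (t - y) * y * (1.0 - y)
    delta_h = [delta_o * w * hi * (1.0 - hi) for w, hi in zip(w_o, h)]
    # Stage 3: adjust the weights and biases.
    for j, hj in enumerate(h):
        w_o[j] += lr * delta_o * hj
    b_o += lr * delta_o
    for j, dj in enumerate(delta_h):
        for i, xi in enumerate(x):
            w_h[j][i] += lr * dj * xi
        b_h[j] += lr * dj
    return b_o  # floats are immutable, so return the updated bias

# Illustrative training run on XOR (sizes and rate are assumptions).
random.seed(1)
w_h = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(4)]
b_h = [random.uniform(-0.5, 0.5) for _ in range(4)]
w_o = [random.uniform(-0.5, 0.5) for _ in range(4)]
b_o = random.uniform(-0.5, 0.5)
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    # Sum of squared errors over all training patterns.
    return sum((t - forward(x, w_h, b_h, w_o, b_o)[1]) ** 2
               for x, t in patterns)

err_before = total_error()
for _ in range(5000):
    for x, t in patterns:
        b_o = train_step(x, t, w_h, b_h, w_o, b_o, lr=0.5)
err_after = total_error()
```

Note how the error calculation reuses the activations computed during the feedforward stage, which is exactly why an activation function with an easily computed derivative is desirable.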