
Adaline


Units with a linear activation function are called linear
units.

A network with a single linear unit is called an Adaline
(adaptive linear neuron).

The input-output relationship is linear.

Adaline uses bipolar activation for its input signals and its
target output.

The weights between the input and the output are adjustable.

The bias in Adaline acts like an adjustable weight on a
connection from a unit whose activation is always 1.

Adaline is a net which has only one output unit.
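The bullets above describe the forward computation: the bias plus the weighted sum of the inputs. A minimal sketch in Python (the function name is hypothetical, not from the slides):

```python
def adaline_net_input(x, w, b):
    """Net input of an Adaline: bias plus the weighted sum of inputs."""
    return b + sum(xi * wi for xi, wi in zip(x, w))
```

With bipolar inputs [1, -1], weights [0.1, 0.1], and bias 0.1, this gives a net input of 0.1.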

The Adaline network may be trained using the delta rule.

The delta rule may also be called the least mean
squares (LMS) rule or the Widrow-Hoff rule.

This learning rule minimizes the mean squared error
between the activation and the target value.

This allows Adaline to keep learning on all training patterns
even when the output is already correct.

The delta rule is derived from the Gradient descent
method (it can be generalized to more than one layer).

The gradient descent approach continues forever,
converging only asymptotically to the solution.

The delta rule updates the weights between the
connections so as to minimize the difference between
the net input to the output unit and the target value.

The major aim is to minimize the error over all training
patterns.
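The update just described can be sketched as a single delta-rule step: the error is the target minus the net input, and each weight moves in proportion to that error times its input (the bias is treated as a weight on a constant input of 1). A minimal sketch in Python, with a hypothetical function name:

```python
def delta_rule_step(x, t, w, b, alpha):
    """One delta-rule (LMS) step: w_i += alpha*(t - y_in)*x_i, b += alpha*(t - y_in)."""
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))  # net input, not a thresholded output
    err = t - y_in                                   # error uses the net input directly
    w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
    b = b + alpha * err
    return w, b
```

Because the error is measured against the net input rather than a thresholded output, the step is nonzero even when the thresholded output is already correct, which is exactly why Adaline keeps refining its weights on every pattern.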
Architecture
Implementation of the OR function using Adaline


w1=w2=b=0.1

α=0.1
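A training sketch in Python for the OR example, using the stated initial values w1 = w2 = b = 0.1 and α = 0.1, with bipolar inputs and targets (−1 for false, +1 for true); the epoch count is an assumption, not from the slides:

```python
# Bipolar OR training patterns: (inputs, target).
patterns = [([1, 1], 1), ([1, -1], 1), ([-1, 1], 1), ([-1, -1], -1)]
w, b, alpha = [0.1, 0.1], 0.1, 0.1  # initial weights, bias, learning rate

for epoch in range(100):            # a fixed number of epochs, for illustration
    for x, t in patterns:
        y_in = b + w[0] * x[0] + w[1] * x[1]  # net input
        err = t - y_in                        # delta-rule error
        w[0] += alpha * err * x[0]
        w[1] += alpha * err * x[1]
        b += alpha * err
```

After training, the weights and bias settle near 0.5 each (the least-squares solution for bipolar OR), and the sign of the net input matches the OR target on every pattern.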
