
Aug 2014

A simple Scilab 5.3.3 code for ANN (Artificial Neural Networks)

1. A brief introduction to ANN with backpropagation of error.

The basic component of an ANN is the sigmoid function f(net) = 1/(1 + exp(-c*net)).


Here net is the total input to the neuron. When net is sufficiently large, f(net) is close to 1; when net is well below zero (strongly negative), f(net) is close to 0.
This simulates a neuron firing between 0 and 1 as its input crosses a threshold.
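
This saturation can be checked directly in Scilab (a minimal sketch, using the same sigmoid constant c = 1 as in the code of section 3):

// Sketch: the sigmoid saturates towards 0 and 1
c = 1;                                   // sigmoid constant
x = [-10 0 10];                          // a strongly negative, a zero and a large net input
f = ones(x) ./ (ones(x) + exp(-c*x));    // f(net) = 1/(1+exp(-c*net))
disp(f)                                  // prints approximately 0.  0.5  1.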

The ANN consists of a network of such neurons arranged in an input layer, a hidden layer and an output layer, connected by the weight matrices w1 and w2.

The output is obtained as follows:


net1 = w1*ein      (ein is the input vector)
h    = f(net1)     (h is the vector of hidden-neuron outputs)
net2 = w2*h
aout = f(net2)     (aout is the ANN output)

The desired output during training is zout.
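
In Scilab the forward pass reads directly from these equations (a sketch, assuming w1, w2, ein and zout are already defined and using the sig function from section 3):

net1 = w1*ein;         // net input to the hidden layer
h    = sig(1, net1);   // hidden-neuron outputs
net2 = w2*h;           // net input to the output layer
aout = sig(1, net2);   // ANN output
err  = zout - aout;    // error used during training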

The algorithm starts by initialising w1 and w2 with random values. The error

E = (z1 - aout1)^2 + (z2 - aout2)^2 + ... = err1^2 + err2^2 + ...

is then minimized by adjusting the matrices w1 and w2.
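
In Scilab this total squared error is a one-line computation (a minimal sketch, assuming zout and aout are column vectors):

E = sum((zout - aout).^2);   // err1^2 + err2^2 + ...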

The corrections to w1 and w2 are obtained approximately as below; this can be improved with an accelerating factor (say alp) and with the sigmoid derivative f(x)*(1-f(x)). The correction to the output-layer matrix w2 multiplies each of its entries by the corresponding output error, and the correction to the input-layer matrix w1 combines the column sums of that correction with the components of the input:

[dw2] = [ w11*err1   w12*err1
          w21*err2   w22*err2
          w31*err3   w32*err3 ]

[dw1] = [ e1*sr1   e2*sr1   e3*sr1
          e1*sr2   e2*sr2   e3*sr2 ]

with

[ sr1  sr2 ] = column sums of [dw2]

where w11, w12, ... are the entries of w2, e1, e2, e3 are the components of the input vector ein, and the labels dw1, dw2 match those used in the code of section 3. A vectorized Scilab sketch of this update step follows.
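
This sketch is equivalent to the loop-based construction used in the code of section 3 (assuming err is the no x 1 error vector and ein the ni x 1 input vector):

cdw2 = w2 .* (err * ones(1, nh));   // dw2(i,j) = w2(i,j)*err(i)
sr   = sum(cdw2, 1);                // [sr1 sr2]: column sums of the correction to w2
dw1  = sr' * ein';                  // dw1(j,i) = sr(j)*ein(i)
w2   = w2 + cdw2;                   // update the output-layer weights
w1   = w1 + dw1;                    // update the input-layer weights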

A very simple Scilab 5.3.3 code will be found below.


2. Example

Consider a network with ni = 3 input nodes, nh = 2 hidden nodes and no = 3 output nodes.


Let the input be

ein =
    1.
    2.
    3.

and let the desired output be

zout =
    1.
    1.
    1.

Then, with the initial random matrices

w1 =
    0.2113249   0.0002211   0.6653811
    0.7560439   0.3303271   0.62839

w2 =
    0.8497452   0.0683740
    0.6857310   0.5608486
    0.8782165   0.6623569
we get

aout =
    0.6966785
    0.7611054
    0.8069129

and the error

err =
    0.3033215
    0.2388946
    0.1930871
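
These initial values can be reproduced directly (a minimal check, using the sig function from section 3):

w1   = [0.2113249 0.0002211 0.6653811; 0.7560439 0.3303271 0.62839];
w2   = [0.8497452 0.0683740; 0.6857310 0.5608486; 0.8782165 0.6623569];
ein  = [1; 2; 3];
zout = [1; 1; 1];
aout = sig(1, w2*sig(1, w1*ein));   // approximately 0.697, 0.761, 0.807 as above
err  = zout - aout;                 // approximately 0.303, 0.239, 0.193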

After 10 iterations we get

aout =
    0.9484678
    0.9001817
    0.8724000

and the error is reduced to

err =
    0.0515322
    0.0998183
    0.1276000
As already mentioned, the speed of convergence can be improved with an accelerating factor and the sigmoid derivative; one possible form is sketched below.
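
A sketch only (alp is a hypothetical accelerating factor; the correction to w2 is weighted by the sigmoid derivative f(x)*(1-f(x)) evaluated at the output, and dw1 could be scaled in the same way):

alp  = 0.5;                                // hypothetical accelerating factor
fp   = aout .* (1 - aout);                 // f(x)*(1-f(x)) at the output
cdw2 = w2 .* ((err .* fp) * ones(1, nh));  // derivative-weighted correction to w2
w2   = w2 + alp*cdw2;                      // scaled update of the output-layer weights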

3. Simple Scilab code for ANN

// ANN using Scilab 5.3.3

ni = 3      // input nodes
nh = 2      // hidden nodes
no = 3      // output nodes
c  = 1      // sigmoid constant
niter = 5   // number of training iterations

function [sg] = sig(c, x)
    // elementwise sigmoid: sg = 1/(1 + exp(-c*x)) for a column vector x
    nx = length(x);
    on = ones(1, nx);
    sg = on' + exp(-c*x);
    sg = on' ./ sg;
endfunction

function [dw] = dw2(err, w2)
    // correction matrix for w2: dw(i,j) = w2(i,j)*err(i)
    terr = [];            // build the total error matrix by replicating err
    sz = size(w2);
    nh = sz(1,2);         // number of hidden nodes (columns of w2)
    for i = 1:nh
        terr = [terr, err];
    end
    dw = w2.*terr;        // correction matrix
endfunction

w1 = rand(nh, ni)     // random input-layer weight matrix (nh x ni)
w2 = rand(no, nh)     // random output-layer weight matrix (no x nh)

ein  = (1:ni)'        // input vector
zout = ones(1, no)'   // desired output vector

neth = w1*ein         // net input to the hidden layer
hout = sig(1, neth)   // hidden-layer output
neta = w2*hout        // net input to the output layer
aout = sig(1, neta)   // output vector

err = zout - aout     // error vector
starterr = err;       // keep the initial error for comparison

for iter = 1:niter

    cdw2 = dw2(err, w2);     // correction to w2
    rscdw2 = sum(cdw2, 1);   // [sr1 sr2]: sum down each column of cdw2

    ts = [];                 // replicate the column sums across ni columns
    for i = 1:ni
        ts = [ts, rscdw2'];
    end

    eint = [];               // replicate the input vector across nh rows
    for i = 1:nh
        eint = [eint; ein'];
    end
    dw1 = eint.*ts;          // correction to w1: dw1(j,i) = ein(i)*sr(j)

    w1 = w1 + dw1;           // new w1 matrix
    w2 = w2 + cdw2;          // new w2 matrix

    neth = w1*ein;           // net input to the hidden layer
    hout = sig(1, neth);     // hidden-layer output
    neta = w2*hout;          // net input to the output layer
    aout = sig(1, neta);     // output vector
    err  = zout - aout;      // error vector

end;

// final ANN input/output

neth = w1*ein;         // net input to the hidden layer
hout = sig(1, neth);   // hidden-layer output
neta = w2*hout;        // net input to the output layer
aout = sig(1, neta);   // output vector

ein     // display the input vector
zout    // display the desired output vector
aout    // display the final ANN output
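
To try the listing, save it under a hypothetical name such as ann.sce and run it from the Scilab console; it then displays ein, zout and the final aout:

// exec('ann.sce');   // hypothetical file name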
