W 1

This script trains a distributed delay neural network to model a signal-classification task: it defines input and target signals, builds the network, trains it, and then tests it on a new input sequence, computing the error between the network output and the expected targets.


k1 = 0:0.01:1;
p1 = sin(4 * pi * k1);
t1(1:length(k1)) = -1;
k2 = 2:0.01:4.06;
p2 = cos(cos(k2).*k2.^2-k2)

p2 = 1×207
-0.8663 -0.8330 -0.7959 -0.7551 -0.7108 -0.6629 -0.6117 -0.5573

t2(1:length(k2)) = 1;
R = [1,4,7];

P = [repmat(p1,1,R(1)),p2,repmat(p1,1,R(2)),p2,repmat(p1,1,R(3)),p2];
T = [repmat(t1,1,R(1)),t2,repmat(t1,1,R(2)),t2,repmat(t1,1,R(3)),t2];
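For readers following along outside MATLAB, the signal construction above can be sketched in NumPy (a rough equivalent, with `numpy` assumed; MATLAB's `repmat` maps to `np.tile`):

```python
import numpy as np

# Class -1: a sine burst over k1 = 0:0.01:1 (101 samples).
k1 = np.arange(0, 1 + 0.005, 0.01)
p1 = np.sin(4 * np.pi * k1)
t1 = -np.ones(len(k1))

# Class +1: a chirp-like cosine burst over k2 = 2:0.01:4.06 (207 samples).
k2 = np.arange(2, 4.06 + 0.005, 0.01)
p2 = np.cos(np.cos(k2) * k2**2 - k2)
t2 = np.ones(len(k2))

# Alternate R[i] copies of the sine burst with one cosine burst,
# mirroring the repmat-based concatenation in the MATLAB script.
R = [1, 4, 7]
P = np.concatenate([np.concatenate([np.tile(p1, r), p2]) for r in R])
T = np.concatenate([np.concatenate([np.tile(t1, r), t2]) for r in R])
```

The targets are constant per segment (-1 or +1), so the network's job is to detect which waveform is currently passing through its delay lines.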
net = distdelaynet({0:4,0:4},8);
display(net);

net =

Neural Network

name: 'Distributed Delay Neural Network'


userdata: (your custom info)

dimensions:

numInputs: 1
numLayers: 2
numOutputs: 1
numInputDelays: 4
numLayerDelays: 4
numFeedbackDelays: 4
numWeightElements: 8
sampleTime: 1

connections:

biasConnect: [1; 1]
inputConnect: [1; 0]
layerConnect: [0 0; 1 0]
outputConnect: [0 1]

subobjects:

input: Equivalent to inputs{1}


output: Equivalent to outputs{2}

inputs: {1x1 cell array of 1 input}


layers: {2x1 cell array of 2 layers}
outputs: {1x2 cell array of 1 output}
biases: {2x1 cell array of 2 biases}
inputWeights: {2x1 cell array of 1 weight}
layerWeights: {2x2 cell array of 1 weight}

functions:

adaptFcn: 'adaptwb'
adaptParam: (none)
derivFcn: 'defaultderiv'
divideFcn: 'dividerand'
divideParam: .trainRatio, .valRatio, .testRatio
divideMode: 'time'
initFcn: 'initlay'
performFcn: 'mse'
performParam: .regularization, .normalization
plotFcns: {'plotperform', 'plottrainstate', 'ploterrhist',
'plotregression', 'plotresponse', 'ploterrcorr',
'plotinerrcorr'}
plotParams: {1x7 cell array of 7 params}
trainFcn: 'trainlm'
trainParam: .showWindow, .showCommandLine, .show, .epochs,
.time, .goal, .min_grad, .max_fail, .mu, .mu_dec,
.mu_inc, .mu_max

weight and bias values:

IW: {2x1 cell} containing 1 input weight matrix


LW: {2x2 cell} containing 1 layer weight matrix
b: {2x1 cell} containing 2 bias vectors

methods:

adapt: Learn while in continuous use


configure: Configure inputs & outputs
gensim: Generate Simulink model
init: Initialize weights & biases
perform: Calculate performance
sim: Evaluate network outputs given inputs
train: Train network with examples
view: View diagram
unconfigure: Unconfigure inputs & outputs

[Ps,Pi,Ai,Ts] = preparets(net,con2seq(P),con2seq(T));
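`preparets` shifts the cell-array sequences so the network's tapped delay lines (here `0:4`, i.e. the current sample plus up to 4 past samples) have valid initial states `Pi` and `Ai`. A minimal sketch of what such a 5-tap delay line does to an input sequence (an illustrative helper, not a Neural Network Toolbox function):

```python
import numpy as np

def delay_matrix(p, d=4):
    """Build a (d+1)-tap delayed input matrix by hand.

    Row i holds the input delayed by i samples; the first d time steps
    are dropped because their delay taps would be undefined (preparets
    returns them separately as initial delay states instead).
    """
    p = np.asarray(p)
    return np.stack([p[d - i : len(p) - i] for i in range(d + 1)])

p = np.arange(10.0)
D = delay_matrix(p, d=4)
# D has shape (5, 6): column j stacks [p[j+4], p[j+3], p[j+2], p[j+1], p[j]]
```

Each column of `D` is the window the first layer sees at one time step, which is why the input weight matrix below is 8×5: 8 neurons, each connected to 5 delay taps.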
net.trainFcn = 'trainbr';
net.trainParam.epochs = 100;
net.trainParam.goal = 1e-5; net = init(net);
net = train(net,Ps,Ts); Y = net(Ps,Pi,Ai);
W = net.IW{1}

W = 8×5
4.1687 0.4032 -0.6016 -0.4438 -3.3814
-3.0684 -0.4697 0.7308 1.6303 2.8240
-2.8784 -0.1226 0.1204 1.3845 0.4538
-1.1068 -0.2526 0.1477 1.1033 2.2513
-5.3620 0.6758 2.4581 2.4845 0.2972
-5.1815 -1.7582 -1.3219 1.5159 6.7418
10.7999 -7.0795 -10.1697 -2.7963 9.3315
-1.9219 0.5817 5.1222 2.6459 -6.7074

LW = net.LW{2,1}

LW = 1×40
-4.9549 -5.1514 -7.6916 -0.2093 -3.5259 1.9662 -2.6684 3.6533

b1 = net.b{1}

b1 = 8×1
-1.9220
-2.5901
1.1196
-2.4545
0.5653
-0.8311
-0.7048
-1.8846

b2 = net.b{2}

b2 = -3.0843

error = cell2mat(Y)-cell2mat(Ts);
mse_error = sqrt(mse(error))

mse_error = 0.3803
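Note that `sqrt(mse(error))` is the root-mean-square error, even though the variable is named `mse_error`. The same metric in NumPy (a sketch; the array names are illustrative, not from the original workspace):

```python
import numpy as np

def rmse(y, t):
    """Root-mean-square error: sqrt(mean((y - t).^2))."""
    e = np.asarray(y, dtype=float) - np.asarray(t, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

# A perfect prediction gives 0.0; targets here are the +/-1 segment labels.
assert rmse([1.0, -1.0], [1.0, -1.0]) == 0.0
```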

X = 1:length(Y);
plot(X,cell2mat(Ts),X,cell2mat(Y)),grid;
legend('reference','output');

plot(X,error),grid;
legend('error');

R = [1,6,7];
P2 = [repmat(p1,1,R(1)),p2,repmat(p1,1,R(2)),p2,repmat(p1,1,R(3)),p2];
T2 = [repmat(t1,1,R(1)),t2,repmat(t1,1,R(2)),t2,repmat(t1,1,R(3)),t2];
[Ps2,Pi2,Ai2,Ts2] = preparets(net,con2seq(P2),con2seq(T2));
Y2 = net(Ps2,Pi2,Ai2);
X2 = 1:length(Y2);
error2 = cell2mat(Y2)-cell2mat(Ts2);
mse_error2 = sqrt(mse(error2));