
CHAPTER 1

1. INTRODUCTION
1.1 Overview:
The principle of the switched reluctance motor (SRM) has been known for more than 150 years, but only recent advances in power-electronic drive technology have made adjustable-speed drives based on the SRM commercially successful.
Driven by the enormous demand for variable-speed drives and by the development of power semiconductors, the conventional reluctance machine has re-emerged as the Switched Reluctance Machine. The name "switched reluctance", first used by one of the authors of [1], describes the two defining features of the machine configuration: (a) switched, (b) reluctance.
The machine is "switched" because it operates in a continuous switching mode, and "reluctance" because both the stator and the rotor consist of variable-reluctance magnetic circuits; in other words, the machine has a doubly salient structure.
An SRM has salient poles on both the stator and the rotor. Each stator pole carries a simple concentrated winding, whereas the rotor contains no windings or permanent magnets [2]-[4]; it is made of soft magnetic material (laminated steel). Two diametrically opposite windings are connected together to form one motor phase. During rotor rotation, a circuit with a single controlled switch per phase is sufficient to supply unidirectional current. For forward motoring operation the stator phase winding must be excited while the rate of change of phase inductance is positive; otherwise the machine develops braking torque or no torque at all. Because the SRM has a simple, rugged construction, low manufacturing cost, fault-tolerance capability and high efficiency, the SRM drive is gaining increasing recognition among electric drives. Its disadvantages are that it requires an electronic controller and a shaft position sensor, and that the doubly salient structure causes acoustic noise and torque ripple. SRMs are typically designed to achieve good utilization of the converter rating.

1.2 Advantages, Limitations and Applications of SRM.
1.2.1 Advantages:
In an SRM, only the stator carries phase windings, while the rotor is made of steel laminations without any conductors or permanent magnets. This gives the SRM several advantages over conventional motors.
(a) The SRM drive maintains high efficiency over a wide speed and load range, because there are no windings on the rotor, so rotor copper loss and the associated heat loss are eliminated and the drive efficiency increases.
(b) Since there are no windings or permanent magnets on the rotor, no brushes on the stator, and the rotor poles are salient, the rotor inertia is lower than that of a conventional motor, so the SRM can accelerate more quickly.
(c) With no brush-commutator mechanical speed limit and no windings or permanent magnets on the rotor, the machine can run up to very high speeds. It can also operate at low speeds while providing full rated torque.
(d) The absence of rotor windings and permanent magnets reduces the cost of the SRM drive.
(e) It supports four-quadrant operation: it can run in the forward or backward direction, in either motoring or generating mode.
(f) Its rugged construction is suitable for high-temperature and high-vibration environments.
(g) Most of the losses occur in the stator, which can easily be cooled.
(h) The torque produced by an SRM is independent of the polarity of the phase current, allowing the use of simplified power converters with a reduced number of semiconductor switches.
1.2.2 Limitations of SRM:
Along with the above advantages, SRM drives also have some limitations:
(a) The doubly salient structure causes inherent torque ripple and acoustic noise.
(b) The converter used with an SRM drive requires a high kVA rating.
(c) Because the winding inductance is very high and the stored energy must be removed after each excitation, a long energy-removal period is usually required, limiting the maximum current to a relatively low range.
(d) The SRM drive cannot operate directly from an ac or dc supply; it requires current-pulse signals for torque production.

The requirement of a rotor position sensor, high torque pulsation [5-7] and acoustic noise [8-10] are the major drawbacks of the SRM drive, and these may limit its use in some applications.
1.2.3 Applications of Switched Reluctance Motor Drives:
The SRM drive has great potential in motion control because it gives high performance in harsh conditions such as high temperature and dusty environments [11-13]. Typical applications include:
(1) Electric vehicles
(2) Aerospace [14,15]
(3) Household appliances such as washing machines and vacuum cleaners [16]
(4) Variable-speed and servo-type applications
1.2.4 Direct Torque Control of Switched Reluctance Motor:
Because of its doubly salient structure, the SRM drive suffers from high torque ripple and acoustic noise. Various methods have been proposed to reduce the torque ripple. One method is skewing the rotor [20], [21]. Another is direct torque control (DTC) of the SRM. DTC is an advanced vector control method that controls the torque of the SRM through the magnitude of the flux linkage and the acceleration or deceleration of the stator flux vector.

1.3 Motivation
1.3.1 Switched Reluctance Motor:
The SRM works on the reluctance principle. The main difference between the synchronous reluctance machine and the switched reluctance machine is this: if the excitation of a synchronous machine fails, it behaves as a synchronous reluctance machine, which can run only if the numbers of stator and rotor poles are the same. The beauty of the switched reluctance motor is that it rotates on the reluctance principle even when the numbers of stator and rotor poles differ. The first aim of the SRM model is to represent both the flux-linkage and the inductance-profile characteristics. The second aim is to design a machine capable of operating over a wide speed range in all four quadrants of the torque-speed plane; high performance and high efficiency can also be achieved with SRM drives by using an optimization technique [11,12]. The third aim of the research is to improve reliability, positioning accuracy and the evaluation of performance characteristics.
1.3.2 Direct Torque Control of Switched Reluctance Motor:
To improve the dynamic performance of switched reluctance motor drives, vector control is preferred. The main disadvantage of conventional vector control, however, is the complexity of the coordinate transformation. This problem can be solved by using an advanced vector control technique known as direct torque control.

1.4 Objectives
i. To study the principle of operation of the switched reluctance motor drive and obtain the mathematical model of the SRM.
ii. To design SRMs with different numbers of phases and observe the major changes that occur between the phase configurations.
iii. To observe how the characteristics change when the turn-on and turn-off angles are varied.
iv. To observe how the actual speed tracks the reference speed when a PID controller is used.
v. To implement an advanced vector control technique, DTC, in order to reduce the torque ripple of the SRM.

1.5 Thesis Outline
This thesis contains six chapters, outlined below.
Chapter 1 Presents a brief overview of the switched reluctance motor drive: introduction, advantages, disadvantages, applications, control strategy, motivation and objectives.
Chapter 2 The principle of operation of the SRM, its elementary operation, converter topologies for the SRM drive and the various voltage states.
Chapter 3 Mathematical modelling of the SRM, its torque equation, the PID controller and the block-diagram representation of the SRM drive.
Chapter 4 Simulation models and results for 3-phase, 4-phase and 5-phase switched reluctance motor drives.
Chapter 5 Direct torque control of the 3-phase switched reluctance motor drive and its simulation results.
Chapter 6 Gives the overall conclusion and the scope for future work.

CHAPTER 2

2. PRINCIPLE OF OPERATION OF THE SWITCHED RELUCTANCE MOTOR
2.1 Introduction
The machine's operation and salient features can be deduced from the torque expression, which relates the machine flux linkage (or inductance) to rotor position. The torque versus speed characteristics of the machine in all four quadrants can be derived from its inductance versus rotor-position characteristics. A switched reluctance machine can be designed with any number of phases; a single-phase machine offers low performance but suits high-volume applications.

2.2 Switched Reluctance Motor Configuration

The switched reluctance motor is made up of laminated stator and rotor cores with Ns = 2mq poles on the stator and Nr poles on the rotor, where m is the number of phases and each phase consists of concentrated windings placed on 2q stator poles. The SRM has a salient-pole stator with concentrated windings and a salient-pole rotor with no windings or permanent magnets. Because both the stator and the rotor have salient poles, the machine has a doubly salient, singly excited structure with different numbers of stator and rotor poles. It is constructed so that the rotor can never rest in a position where the torque due to current in every phase is zero. Common stator/rotor pole configurations are 6/4, 8/6 and 10/8. On the stator, the coils on two diametrically opposite poles are connected in series to form a single phase, so the 6/4 stator/rotor pole configuration represents a 3-phase switched reluctance motor; similarly, the 8/6 and 10/8 configurations represent 4-phase and 5-phase machines respectively.

Fig.2.1 6/4 switched reluctance motor configuration

Similarly, the 8/6 SRM configuration has 8 stator and 6 rotor poles, and the 10/8 configuration has 10 stator and 8 rotor poles.
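The pole-count relation Ns = 2mq can be sketched in a few lines. The rule Nr = Ns − 2 used below is only inferred from the 6/4, 8/6 and 10/8 examples listed above (with q = 1); it is an illustrative assumption, not a general design law.

```python
# Stator/rotor pole counts for an SRM from Ns = 2*m*q (thesis notation).
# Nr = Ns - 2 is an assumption matching the 6/4, 8/6, 10/8 configurations.
def srm_pole_config(m: int, q: int = 1) -> tuple[int, int]:
    """Return (stator poles Ns, rotor poles Nr) for an m-phase SRM."""
    ns = 2 * m * q
    nr = ns - 2
    return ns, nr

for phases in (3, 4, 5):
    ns, nr = srm_pole_config(phases)
    print(f"{phases}-phase SRM: {ns}/{nr} configuration")
```

Running this reproduces the three configurations named in the text: 6/4, 8/6 and 10/8.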

2.3 Principle of Operation:

The principle of operation of the switched reluctance motor is that an electromagnetic system moves toward the stable equilibrium position of minimum magnetic reluctance. When two diametrically opposite stator poles are excited, the nearest rotor poles are attracted toward them, producing torque. When the rotor poles become aligned with the excited stator poles, that phase is de-energised and the adjacent stator phase is energised to attract another pair of rotor poles. The switched reluctance motor runs by repeating this process.

When the stator and rotor poles are aligned with each other, the position is known as the aligned position. At the aligned position the phase inductance reaches its maximum value, known as La, because the reluctance reaches its minimum value. The phase inductance decreases gradually as the rotor poles move away from the aligned position. When the rotor poles are completely unaligned from the stator poles, the phase inductance reaches its minimum value, known as Lu, and the reluctance reaches its maximum value.

2.4 Elementary Operation of Switched Reluctance Motor:

Fig.2.2 Operation of SRM (a) Phase ‘c’ aligned (b) Phase ‘a’ aligned

• In fig. 2.2(a) the rotor poles r1 & r1′ and the stator poles c & c′ are aligned. When current is applied to phase ‘a’ in the direction shown in the figure, flux is established through the stator poles a & a′ and the rotor poles r2 & r2′, which tends to pull the rotor poles r2 & r2′ towards the stator poles a & a′ respectively. When they are aligned, the stator current of phase ‘a’ is turned off, as shown in fig. 2.2(b).

• The stator winding b is then excited, pulling r1 & r1′ towards b & b′ in a clockwise direction. Likewise, energization of the c-phase winding results in the alignment of r2 & r2′ with c & c′ respectively.

• It takes three phase energizations to move the rotor by 90°, and one revolution of rotor movement is effected by switching the currents in each phase as many times as there are rotor poles. Switching the currents in the sequence a-c-b reverses the rotor rotation.
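The stepping described above can be sketched as a short helper. This is a minimal illustration, assuming the 3-phase 6/4 machine of fig. 2.2: three energizations move the rotor 90°, so each energization advances it 30°, and reversing the sequence to a-c-b reverses the rotation.

```python
# Illustrative excitation sequence for a 3-phase 6/4 SRM: three phase
# energizations move the rotor 90 deg (so 30 deg per energization), and
# the reversed sequence a-c-b reverses the rotation direction.
def excitation_sequence(steps, reverse=False):
    """Return a list of (phase, cumulative rotor angle in degrees)."""
    phases = ("a", "c", "b") if reverse else ("a", "b", "c")
    step = 90.0 / 3                  # 30 deg of rotor travel per energization
    return [(phases[k % 3], (k + 1) * step) for k in range(steps)]

# One full revolution takes 12 energizations: each phase fires once per
# rotor pole, and the 6/4 machine has 4 rotor poles.
```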

2.4 The Relation Between Inductance and Rotor Position (Nonlinear Analysis):

Fig.2.3 Basic Rotor Positions in a Two-Pole SRM

The relationship between the flux linkage and the rotor position, as a function of current, gives rise to the torque characteristics. The stator and rotor pole arcs and the number of rotor poles determine the shape of the inductance profile.

The following angles can be derived from figures 2.3 and 2.4:

θ1 = (1/2)[2π/Pr − (βs + βr)] ....................... (2.1)

θ2 = θ1 + βs ....................... (2.2)

θ3 = θ2 + (βr − βs) ....................... (2.3)

θ4 = θ3 + βs ....................... (2.4)

θ5 = θ4 + θ1 = 2π/Pr ....................... (2.5)

where βs and βr are the stator and rotor pole arcs respectively, and Pr is the number of rotor poles.

Fig.2.4 Inductance Profile for Switched Reluctance Motor


1. 0-θ1 and θ4-θ5: In these regions the stator and rotor poles do not overlap. The inductance is minimum and almost constant; this minimum value is known as the unaligned inductance, Lu. These regions contribute nothing to torque production.

2. θ1-θ2: In this region the rotor pole begins to overlap the stator pole, so the flux path is predominantly through the stator and rotor laminations. The inductance increases with rotor position, giving a positive slope. Current in the winding during this period produces motoring (positive) torque. The region ends when the rotor pole completely overlaps the stator pole.

3. θ2-θ3: In this region the rotor pole completely overlaps the stator pole, giving a low-reluctance flux path. The inductance is at its maximum and constant; it is known as the aligned inductance, La. Since the torque is a function of the rate of change of inductance with rotor position and the inductance here is constant, the torque is zero even though current flows in this interval.

4. θ3-θ4: In this region the rotor pole moves away from the stator pole. It mirrors the θ1-θ2 region: as the misalignment of the rotor pole increases, the inductance decreases, giving a negative slope, so negative torque is produced. This is nothing but the generation of electrical energy from the mechanical input to the switched reluctance machine.

This analysis shows that the ideal inductance profile cannot be achieved in an actual motor because of saturation.
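The four regions above can be summarized as an idealized trapezoidal inductance profile. The sketch below is illustrative: the angle values in the usage example are assumed, while Lu = 0.7 mH and La = 20 mH are the values from the motor specification used later in chapter 4.

```python
# Idealized (trapezoidal) phase-inductance profile over one rotor-pole
# pitch [0, th5], following regions 1-4 above. Saturation is ignored,
# which is exactly the idealization the text notes a real motor cannot achieve.
def phase_inductance(theta, th1, th2, th3, th4, th5, Lu, La):
    """Inductance L(theta); angles in any consistent unit, th1 < ... < th5."""
    theta = theta % th5
    if theta < th1 or theta >= th4:      # unaligned regions: L = Lu, no torque
        return Lu
    if theta < th2:                      # rising slope: motoring torque
        return Lu + (La - Lu) * (theta - th1) / (th2 - th1)
    if theta < th3:                      # aligned region: L = La, zero torque
        return La
    return La - (La - Lu) * (theta - th3) / (th4 - th3)  # falling: generating

# Example with assumed angles (degrees) and Lu/La from the chapter-4 motor:
L = lambda t: phase_inductance(t, 10, 25, 35, 50, 60, 0.7e-3, 20e-3)
```

The sign of the slope of this profile in each region is what determines motoring, zero, or generating torque in the analysis above.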

2.5 Converters for Switched Reluctance Motor Drives:
2.5.1 Power Converter Topology:
To achieve smooth rotation and optimal torque output, phase-to-phase switching with respect to rotor position is required in the switched reluctance motor drive. This switching logic can only be realized with semiconductor devices, so the power converter topology has a great impact on the SRM's performance.
Because the torque produced in the switched reluctance motor is independent of the excitation current polarity, only one switch per phase winding is required, whereas other ac machines require two switches per phase to control the current. In an ac motor the winding is also not in series with the switches, so a shoot-through fault causes irreparable damage. In the switched reluctance motor the winding is in series with the switch, so during a shoot-through fault the rate of rise of current is limited by the winding inductance, giving the protective relay time to isolate the fault. The SRM drive is also more reliable because the phases are independent of each other: even if one winding is damaged, the motor can continue uninterrupted operation at reduced power output.

2.6 Asymmetric Bridge Converter:

For the switched reluctance motor we use as many asymmetric half-bridge converters as there are phases; each phase of the motor is connected to its own asymmetric bridge. For example, a three-phase switched reluctance motor uses three half-bridge converters, whose six outputs feed the six input terminals of the motor. As shown in the figure below, each phase uses an asymmetric bridge containing two IGBTs and two diodes, with the phase winding connected between them. When both switches Sa1 and Sa2 are turned on, current circulates through phase ‘A’. When the current exceeds the commanded value, Sa1 and Sa2 are turned off; the energy stored in the winding then keeps the current flowing in the same direction by forward-biasing D1 and D2, so the winding discharges and the current falls below the commanded value.

The other phases operate in the same way as phase ‘A’. The complete inverter circuit used for the switched reluctance motor drive is shown below.

Fig.2.5 Asymmetric H-bridge Drive Circuit for SRM

The figure above represents the asymmetric H-bridge for the SRM; ‘L’ and ‘R’ denote the inductance and resistance of each phase winding. Its operation can be explained as follows.

Suppose the rotor poles r1 and r1′ are aligned with the stator poles c and c′. Switches Sa1 and Sa2 are then turned on to excite the a-phase and produce rotation in the positive direction. Reluctance torque is generated so that stator poles a, a′ and rotor poles r2, r2′ face each other, and the rotor rotates in the clockwise direction. The other phases are then excited in turn to align the next rotor-pole pair, and in this manner the switched reluctance motor rotates.

Assuming a linear magnetic circuit, the switched reluctance motor torque T is expressed as follows, with ia, ib and ic denoting the respective phase currents.

T = (1/2)[(dLa/dθ)·ia² + (dLb/dθ)·ib² + (dLc/dθ)·ic²] ……………………… (2.6)

This equation is valid only when the magnetic circuit is linear.

2.7 Stator Current Control by Modified Hysteresis-Band Control:

The asymmetric H-bridge shown in the figure can apply a three-level voltage (+E, 0, -E) to the stator winding.

Positive voltage mode: When both switches Sa1 and Sa2 are turned on, the source voltage E is applied to the winding and the winding current increases. In this case V = E and the current flows in the downward direction shown in the figure below.

Fig.2.6(a) Positive voltage mode

Negative voltage mode: When both switches Sa1 and Sa2 are turned off while current flows in the winding, the two diodes conduct, the voltage -E is applied to the winding and the current decreases. In this case V = -E; the current direction remains the same but its magnitude falls.

Return current mode: One of the switches Sa1 or Sa2 is turned off while current flows in the winding. When Sa1 is turned off, the corresponding diode conducts, zero voltage is applied across the winding and the current decreases; however, this decrease is slower than in the negative voltage mode.

Because the inductor is an energy-storing device, in this mode it discharges through one switch and one diode. The voltage across the phase winding is zero, but the current direction remains the same. Thus only unipolar current flows in the switched reluctance motor, producing unidirectional torque.
Fig.2.6(b) Negative Voltage Mode

Fig. 2.6(c) Return Current Mode
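The three voltage modes combine into the hysteresis-band current control named in the section title. The sketch below is a minimal illustration under assumed numbers (band width, supply voltage, winding R and L): it uses soft chopping (+E to raise the current, the zero-voltage return mode above the band), reserves -E for de-energizing the phase, and holds L constant even though in a real SRM the inductance varies with rotor position.

```python
# Hysteresis-band control of one SRM phase using the three voltage modes
# above (+E, 0, -E). All numbers (band, E, R, L) are illustrative, and L
# is held constant, unlike the position-dependent inductance of a real SRM.
def hysteresis_voltage(i, i_ref, band, phase_on):
    """Select the applied winding voltage (as a fraction of E) for one step."""
    if not phase_on:
        return -1.0          # negative voltage mode: drive the current to zero
    if i < i_ref - band:
        return +1.0          # positive voltage mode: both switches on
    if i > i_ref + band:
        return 0.0           # return current mode: one switch off, freewheel
    return None              # inside the band: keep the previous state

def simulate(i_ref=10.0, band=0.5, E=100.0, R=0.01, L=5e-3,
             dt=1e-5, steps=5000):
    i, u = 0.0, +1.0
    for _ in range(steps):
        sel = hysteresis_voltage(i, i_ref, band, phase_on=True)
        if sel is not None:
            u = sel
        # R-L winding dynamics: L di/dt = u*E - R*i  (forward Euler)
        i += dt * (u * E - R * i) / L
    return i
```

After the start-up transient, the simulated current chops between the voltage modes and stays near the commanded value i_ref.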


CHAPTER 3

3. MATHEMATICAL MODELLING AND CONTROL OF SWITCHED RELUCTANCE MOTOR DRIVE
3.1 Mathematical Modeling of Switched Reluctance Motor Drive

The equivalent circuit of the switched reluctance motor can be derived by neglecting the mutual inductance between the phases, as follows. The voltage applied to a phase is the sum of the resistive voltage drop and the rate of change of flux linkage with respect to time:

V = Rs·i + dλ(θ, i)/dt ……………………… (3.1)

where Rs is the resistance per phase and λ is the flux linkage per phase, given by

λ = L(θ, i)·i ……………………… (3.2)

where L is the inductance, dependent on the rotor position and the phase current. The phase voltage equation is therefore

V = Rs·i + d{L(θ, i)·i}/dt
  = Rs·i + L(θ, i)·di/dt + i·(dθ/dt)·dL(θ, i)/dθ
  = Rs·i + L(θ, i)·di/dt + ωm·i·dL(θ, i)/dθ ……………………… (3.3)

The three terms on the right-hand side represent the resistive voltage drop, the inductive voltage drop and the induced emf respectively; the result is equivalent to the voltage equation of a series-excited dc motor.

The induced emf e is obtained as

e = ωm·i·dL(θ, i)/dθ = Kb·i·ωm ……………………… (3.4)

where Kb may be construed as an emf constant, similar to that of a dc series-excited machine, and is given by

Kb = dL(θ, i)/dθ ……………………… (3.5)

Substituting for the flux linkage in the voltage equation and multiplying by the current gives the instantaneous input power:

Pi = V·i = Rs·i² + i²·dL(θ, i)/dt + L(θ, i)·i·di/dt ……………………… (3.6)

The equivalent circuit for a single-phase SRM is therefore as shown below.

Fig.3.1 Single-Phase Equivalent Circuit of Switched Reluctance Motor

To draw meaningful inferences, the above equation needs to be expressed in terms of known variables. Using the identity

d/dt[(1/2)·L(θ, i)·i²] = L(θ, i)·i·di/dt + (1/2)·i²·dL(θ, i)/dt ……………………… (3.7)

and substituting into (3.6), we get

Pi = Rs·i² + d/dt[(1/2)·L(θ, i)·i²] + (1/2)·i²·dL(θ, i)/dt ……………………… (3.8)

where Pi, the instantaneous input power, is expressed as the sum of the winding resistive loss Rs·i², the rate of change of field energy d/dt[(1/2)·L(θ, i)·i²], and the air-gap power Pa = (1/2)·i²·dL(θ, i)/dt.

Time can also be expressed in terms of rotor position and speed:

t = θ/ωm ……………………… (3.9)

The air-gap power can then be written as

Pa = (1/2)·i²·dL(θ, i)/dt = (1/2)·i²·(dL(θ, i)/dθ)·(dθ/dt) = (1/2)·ωm·i²·dL(θ, i)/dθ ……………………… (3.10)

The air-gap power is also the product of the electromagnetic torque and the rotor speed:

Pa = ωm·Te ……………………… (3.11)

Equating the two expressions for the air-gap power gives

Te = (1/2)·i²·dL(θ, i)/dθ ……………………… (3.12)

This shows that the electromagnetic torque is independent of the current direction: since Te is directly proportional to i², the torque is unidirectional whatever the sign of the current. Te is, however, directly proportional to dL(θ, i)/dθ. If dL(θ, i)/dθ > 0, positive torque is produced and electrical power is converted into mechanical power (motoring); if dL(θ, i)/dθ < 0, negative torque is produced and mechanical power is converted into electrical power (generating).

This completes the development of the equivalent circuit and of the equations for evaluating the electromagnetic torque and input power of the switched reluctance motor for both dynamic and steady-state operation [1].
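Equation (3.12) can be checked with a two-line function; the numerical values below are illustrative only.

```python
# Sketch of the torque relation Te = (1/2) * i**2 * dL/dtheta (eq. 3.12):
# the torque sign follows the inductance slope, not the current sign.
def electromagnetic_torque(i: float, dL_dtheta: float) -> float:
    """Instantaneous phase torque from current and inductance slope."""
    return 0.5 * i * i * dL_dtheta

# Same torque for +i and -i (unidirectional torque) ...
assert electromagnetic_torque(10.0, 0.05) == electromagnetic_torque(-10.0, 0.05)
# ... motoring when dL/dtheta > 0, generating when dL/dtheta < 0.
assert electromagnetic_torque(10.0, 0.05) > 0   # motoring
assert electromagnetic_torque(10.0, -0.05) < 0  # generating
```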
3.2 PID Controller:

Owing to its simple control structure, ease of design and low cost, the conventional proportional-integral-derivative (PID) controller is the most widely used controller in industry; more than 90% of control loops are of the PID type. Its formulas are simple and can easily be adapted to various controlled plants.

The PID controller corrects the error between the reference variable and the actual variable, so that the system can adjust the process accordingly. The general structure of a PID controller is shown below.

Fig.3.2 Structure of PID controller

With PID control the actuating signal consists of the proportional error signal added to the derivative and the integral of the error signal.

The transfer function of the PID controller in the block diagram above is

G_PID = kp + ki/s + kd·s ……………………… (3.13)

where kp is the proportional gain, kd the derivative gain constant and ki the integral gain constant.

3.3 Function of the Proportional-Integral-Derivative Controller:

3.3.1 Proportional Gain Constant:

In proportional control the actuating signal is proportional to the error signal, the error being the difference between the reference input signal and the feedback signal obtained from the output.

For satisfactory performance of a control system, a compromise has to be made between the maximum overshoot and the steady-state error. With the proportional constant, the maximum overshoot can be reduced to some extent by modifying the actuating signal, without sacrificing steady-state accuracy.

3.3.2 Integral Gain Constant:

With integral control action the actuating signal consists of the proportional error signal added to the integral of the error signal.

The integrator reduces the steady-state error through low-frequency compensation; it also makes the actual variable track the reference variable more quickly.

3.3.3 Derivative Gain Constant:

With derivative control action the actuating signal consists of the proportional error signal added to the derivative of the error signal.

The differentiator improves the transient response through high-frequency compensation. The steady-state error is not affected by derivative control action. Since the derivative of the error is used in the actuating signal, if the error varies with time the derivative action reduces it.

PID control thus combines the advantages of proportional, integral and derivative control actions. The effect of increasing each of kp, ki and kd in a closed-loop system is summarized in the table below.
Gain / Effect    Rise Time       Overshoot    Settling Time    Steady-State Error
kp               Decrease        Increase     Small change     Decrease
ki               Decrease        Increase     Increase         Eliminate
kd               Small change    Decrease     Decrease         Small change

Table 3.1 Effects of kp, kd, ki on a closed-loop system
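A minimal discrete-time sketch of the controller of eq. (3.13) is given below; the gains, time step and first-order plant are illustrative assumptions, not values tuned for the SRM drive of this thesis.

```python
# Discrete-time sketch of the PID law of eq. (3.13), G_PID = kp + ki/s + kd*s,
# used here as a speed controller whose output is a torque command. All
# numerical values (gains, dt, plant) are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, reference, actual):
        error = reference - actual
        self.integral += error * self.dt                  # integral term
        derivative = (error - self.prev_error) / self.dt  # derivative term
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: drive an assumed first-order speed plant toward a 1000 rpm reference.
pid = PID(kp=2.0, ki=20.0, kd=0.01, dt=1e-3)
speed = 0.0
for _ in range(2000):
    torque_cmd = pid.step(1000.0, speed)
    speed += 1e-3 * (torque_cmd - 0.5 * speed)  # illustrative plant dynamics
```

The integral term is what drives the steady-state error toward zero, matching the "Eliminate" entry for ki in Table 3.1.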

3.4 Block Diagram Representation of Switched Reluctance Motor Drive:

Figure.3.3 Block Diagram of Traditional Feedback Control

This gives the closed-loop control of the switched reluctance motor, so the actual speed tracks the reference speed and the machine always remains in synchronism. A PID controller is used as the speed controller; its output, together with the rotor position θ, goes to the current-command block, which gives the reference current signal. This is compared with the actual current signal to obtain the current error signal, which is used to generate the gate pulses for the power converter. For a 3-phase machine three half-bridge converters are used, for a 4-phase machine four, and for a 5-phase machine five, in order to supply the required input to the switched reluctance motor.

CHAPTER 4

4. MODELLING AND SIMULATION OF SRM DRIVE

4.1 Switched Reluctance Motor Specification:

Stator Resistance : 0.01 ohm/phase
Friction : 0.01 N.m.s
Inertia : 0.0082 kg.m²
Initial Speed : 0 rad/sec
Position : 0 rad
Unaligned Inductance : 0.7 mH
Aligned Inductance : 20 mH
Maximum Current : 450 A
Maximum Flux Linkage : 0.486 Wb-turn

4.2 Modelling of the Three-Phase Switched Reluctance Motor Drive:

In the block diagram of figure 3.3 we use a speed controller, here a PID controller, whose input is the speed error (the difference between the speed reference and the filtered speed-feedback signal) and whose output is the unmodified torque command. The torque command goes to the current-command controller, which, together with the feedback from the position sensor, produces the reference current. This is compared with the actual current fed back from the motor to give the current error signal, which goes to the hysteresis-band controller; its output acts as the gate signal for the converter. A dc supply feeds the converter, which converts it into two-level ac signals; three half-bridge converters are used to produce the 3-phase ac input for the motor. At the motor output we obtain the flux linkage, current, output torque and actual speed.

4.2.1 Simulation Results for Three-Phase Switched Reluctance Motor:

The various characteristics of the 3-phase switched reluctance motor are given below.

Figure.4.1 Voltage v/s Time characteristics

This is the output voltage of the converter, which becomes the input voltage of the three-phase switched reluctance motor drive. It shows that the three phase voltages are 120° apart from each other.

Figure.4.2 Torque v/s Time characteristics

Here the torque is directly proportional to the square of the current, so it is independent of the current direction but depends upon dL/dθ: if dL/dθ is positive the torque is positive, otherwise it is negative. This torque contains considerable noise and harmonics.

Figure.4.3 Flux Linkage v/s Time characteristics

Figure.4.4 Current v/s Time characteristics

Since the flux linkage and the current are proportional to each other, the current varies as the flux linkage varies. Initially the current is very high because of inrush; afterwards it stays within 10 to 20 A.
Figure.4.5 Speed v/s Time characteristics

Figure 4.6 Inductance vs. Time characteristics

The relation between the speed and the inductance is that once the actual speed tracks the reference speed, the inductance remains constant. Initially the inductance varies; once tracking is achieved, the inductance settles down and remains constant.

Figure 4.6 shows that the inductance of the stator phase winding is a function of the angular position of the rotor. It can also be observed that the unaligned inductance is 0.8 mH and the aligned inductance is 18 mH.
4.3 Modelling Four Phase Switched Reluctance Motor Drive:

This model is similar to the 3-phase SRM; the only difference is that the power converter block uses four half-bridge converters to produce a 4-phase AC supply. The four phase voltages are 90° apart from each other and become the input voltages of the four-phase switched reluctance motor drive. The advantage is that the reference speed can be tracked more quickly as the number of phases increases.

4.3.1 Simulation Results for Four Phase Switched Reluctance Motor:

The various characteristics of the 4-phase switched reluctance motor are given below.

Figure 4.7 Voltage vs. Time characteristics

Here the four output voltages of the inverters, Va, Vb, Vc and Vd, are 90° apart from each other and supply the 4-phase switched reluctance motor.

Figure 4.8 Torque vs. Time characteristics


Here the torque is directly proportional to the square of the current, so the torque is independent of the current direction; it depends instead on dL/dθ. If dL/dθ is positive the torque is positive, otherwise it is negative. This torque contains considerable noise and harmonics, but less than that of the 3-phase switched reluctance motor.

Figure 4.9 Flux Linkage vs. Time characteristics

Figure 4.10 Current vs. Time characteristics

The flux linkage and the current are proportional to each other, so they vary almost identically with time. Initially the current is very high because of inrush; afterwards it remains within 5 to 10 A.
Figure 4.11 Speed vs. Time characteristics (actual and reference speed)

Figure 4.12 Inductance vs. Time characteristics


The relation between the speed and the inductance is that once the actual speed tracks the reference speed, the inductance remains constant. Initially the inductance varies; once tracking is achieved, the inductance settles down and remains constant. Since this is a 4-phase machine, it has four inductances with a phase difference between them. These fluctuate until the actual speed tracks the reference speed and then settle down. Because this is a 4-phase switched reluctance motor, the actual speed tracks the reference speed more quickly than in the 3-phase case, here in about 0.1 s.

Figure 4.12 shows that the inductance of the stator phase winding is a function of the angular position of the rotor. It can also be observed that the unaligned inductance is nearly 0.8 mH and the aligned inductance is 17 to 18 mH.
4.4 Modelling Five Phase Switched Reluctance Motor Drive:

Here the speed controller is a PID controller whose input is the speed error, that is, the difference between the speed reference and the filtered speed feedback signal, and whose output is an unmodified torque command. That torque command goes to the current command controller which, together with the feedback from the position sensor, produces the reference current. The reference current is compared with the actual current fed back from the SRM output to give the current error signal, which goes to the hysteresis band controller. That signal acts as the gate signal for the converter. A DC supply is given to the converter, which converts it into two-level AC signals. Here, five half-bridge converters are used to produce the 5-phase AC supply, with the phase voltages 72° apart from each other, that forms the input of the switched reluctance motor. At the motor output, the flux linkage, current, output torque, and actual speed are obtained. The advantage is that the reference speed can be tracked more quickly as the number of phases increases.

4.4.1 Simulation Results for Five Phase Switched Reluctance Motor:

The various characteristics of the 5-phase switched reluctance motor are given below.

Figure 4.13 Voltage vs. Time characteristics

Here the five output voltages of the inverters, Va, Vb, Vc, Vd and Ve, are 72° apart from each other and supply the 5-phase switched reluctance motor.
Figure 4.14 Torque vs. Time characteristics

Here the torque is directly proportional to the square of the current, so the torque is independent of the current direction; it depends instead on dL/dθ. If dL/dθ is positive the torque is positive, otherwise it is negative. This torque contains considerable noise and harmonics, but less than those of the 3-phase and 4-phase switched reluctance motors.

Figure 4.15 Flux Linkage vs. Time characteristics


Figure 4.16 Current vs. Time characteristics

The flux linkage and the current are proportional to each other, so they vary almost identically with time. Initially the current is very high because of inrush; afterwards it remains within 5 A.

Figure 4.17 Speed vs. Time characteristics (actual and reference speed)

As this is a 5-phase switched reluctance motor, the actual speed tracks the reference speed more quickly than in the 4-phase and 3-phase cases, here in about 0.02 s.

Figure 4.18 Inductance vs. Time characteristics

The relation between the speed and the inductance is that once the actual speed tracks the reference speed, the inductance remains constant. Initially the inductance varies; once tracking is achieved, the inductance settles down and remains constant. Since this is a 5-phase machine, it has five inductances with a phase difference between them. These fluctuate until the actual speed tracks the reference speed and then settle down.

Figure 4.18 shows that the inductance of the stator phase winding is a function of the angular position of the rotor. It can also be observed that the unaligned inductance is nearly 0.8 mH and the aligned inductance is nearly 17 to 18 mH.

CHAPTER 5

5. ARTIFICIAL NEURAL NETWORKS

5.1 Introduction
Artificial Neural Networks are relatively crude electronic models based on the neural structure of the brain. The brain basically learns from experience. It is natural proof that some problems beyond the scope of current computers are indeed solvable by small, energy-efficient packages. This brain modelling also promises a less technical way to develop machine solutions. These biologically inspired methods of computing are thought to be the next major advancement in the computing industry. Even simple animal brains are capable of functions that are currently impossible for computers. Computers do rote things well, like keeping ledgers or performing complex math, but computers have trouble recognizing even simple patterns, much less generalizing those patterns of the past into actions for the future.
Now, advances in biological research promise an initial understanding of the natural thinking mechanism. This research shows that the brain stores information as patterns. Some of these patterns are very complicated and give us the ability to recognize individual faces from many different angles. This process of storing information as patterns, utilizing those patterns, and then solving problems encompasses a new field in computing. This field does not utilize traditional programming but involves the creation of massively parallel networks and the training of those networks to solve specific problems. This field also utilizes words very different from traditional computing, words like behave, react, self-organize, learn, generalize, and forget. An artificial neural network (ANN), often just called a "neural network" (NN), is a mathematical or computational model based on biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase.
In more practical terms neural networks are non-linear statistical data modeling tools. They
can be used to model complex relationships between inputs and outputs or to find patterns in
data. A neural network is an interconnected group of nodes, akin to the vast network of
neurons in the human brain.

5.2 Background
There is no precise agreed-upon definition among researchers as to what a neural network is,
but most would agree that it involves a network of simple processing elements (neurons),
which can exhibit complex global behavior, determined by the connections between the
processing elements and element parameters. The original inspiration for the technique was
from examination of the central nervous system and the neurons (and their axons, dendrites
and synapses) which constitute one of its most significant information processing elements
(see Neuroscience). In a neural network model, simple nodes (called variously "neurons",
"neurodes", "PEs" ("processing elements") or "units") are connected together to form a
network of nodes — hence the term "neural network." While a neural network does not have
to be adaptive per se, its practical use comes with algorithms designed to alter the strength
(weights) of the connections in the network to produce a desired signal flow.
These networks are also similar to the biological neural networks in the sense that functions
are performed collectively and in parallel by the units, rather than there being a clear
delineation of subtasks to which various units are assigned (see also connectionism).
Currently, the term Artificial Neural Network (ANN) tends to refer mostly to neural network
models employed in statistics, cognitive psychology and artificial intelligence. Neural
network models designed with emulation of the central nervous system (CNS) in mind are a
subject of theoretical neuroscience (computational neuroscience).
In modern software implementations of artificial neural networks the approach inspired by
biology has more or less been abandoned for a more practical approach based on statistics
and signal processing. In some of these systems neural networks, or parts of neural networks
(such as artificial neurons) are used as components in larger systems that combine both
adaptive and non-adaptive elements. While the more general approach of such adaptive
systems is more suitable for real-world problem solving, it has far less to do with the
traditional artificial intelligence connectionist models. What they do, however, have in
common is the principle of non-linear, distributed, parallel and local processing and
adaptation.

5.3 Analogy to the brain

29
The exact workings of the human brain are still a mystery. Yet some aspects of this amazing processor are known. In particular, the most basic element of the human brain is a specific type of cell which, unlike the rest of the body, does not appear to regenerate. Because this type of cell is the only part of the body that is not slowly replaced, it is assumed that these cells are what provide us with our abilities to remember, think, and apply previous experiences to our every action. These cells, all 100 billion of them, are known as neurons. Each of these neurons can connect with up to 200,000 other neurons, although 1,000 to 10,000 are typical. The power of the human mind comes from the sheer numbers of these basic components and the multiple connections between them. It also comes from genetic programming and learning. The individual neurons are complicated. They have a myriad of parts, sub-systems, and control mechanisms. They convey information via a host of electrochemical pathways.
There are over one hundred different classes of neurons, depending on the classification
method used. Together these neurons and their connections form a process which is not
binary, not stable, and not synchronous. In short, it is nothing like the currently available
electronic computers, or even artificial neural networks. These artificial neural networks try
to replicate only the most basic elements of this complicated, versatile, and powerful
organism. They do it in a primitive way. But for the software engineer who is trying to solve
problems, neural computing was never about replicating human brains. It is about machines
and a new way to solve problems.
5.4 Artificial Neurons and How They Work
The fundamental processing element of a neural network is a neuron. This building block of human awareness encompasses a few general capabilities. Basically, a biological neuron receives inputs from other sources, combines them in some way, performs a generally nonlinear operation on the result, and then outputs the final result. Figure 4.1 shows the relationship of these four parts. Within humans there are many variations on this basic type of neuron, further complicating man's attempts at electrically replicating the process of thinking. Yet all natural neurons have the same four basic components, known by their biological names: dendrites, soma, axon, and synapses. Dendrites are hair-like extensions of the soma which act like input channels. These input channels receive their input through the synapses of other neurons. The soma then processes these incoming signals over time and turns the processed value into an output which is sent out to other neurons through the axon and the synapses. Recent experimental data has provided further evidence that biological neurons are structurally more complex than the existing artificial neurons built into today's artificial neural networks. As biology provides a better understanding of neurons, and as technology advances, network designers can continue to improve their systems by building upon man's understanding of the biological brain. But currently, the goal of artificial neural networks is not the grandiose recreation of the brain. On the contrary, neural network researchers are seeking an understanding of nature's capabilities for which people can engineer solutions to problems that have not been solved by traditional computing. To do this, the basic unit of neural networks, the artificial neuron, simulates the four basic functions of natural neurons.
5.5Models
Neural network models in artificial intelligence are usually referred to as artificial neural networks (ANNs); these are essentially simple mathematical models defining a function f : X → Y. Each type of ANN model corresponds to a class of such functions.
5.6Learning
However interesting such functions may be in themselves, what has attracted the most
interest in neural networks is the possibility of learning, which in practice means the
following:
Given a specific task to solve and a class of functions F, learning means using a set of observations in order to find the function f* in F which solves the task in an optimal sense.
This entails defining a cost function C such that, for the optimal solution f*, C(f*) ≤ C(f) for all f in F (no solution has a cost less than the cost of the optimal solution).
The cost function C is an important concept in learning, as it is a measure of how far away we are from an optimal solution to the problem that we want to solve. Learning algorithms search through the solution space in order to find a function that has the smallest possible cost. For applications where the solution is dependent on some data, the cost must necessarily be a function of the observations, otherwise we would not be modelling anything related to the data. It is frequently defined as a statistic to which only approximations can be made. As a simple example, consider the problem of finding the model f which minimizes C = E[(f(x) − y)²] for data pairs (x, y) drawn from some distribution D. In practical situations we would only have N samples from D and thus, for the above example, we would only minimize C = (1/N) Σᵢ (f(xᵢ) − yᵢ)². Thus, the cost is minimized over a sample of the data rather than the true data distribution.
When N tends to infinity, some form of online learning must be used, where the cost is partially minimized as each new example is seen. While online learning is often used when D is fixed, it is most useful in the case where the distribution changes slowly over time. In neural network methods, some form of online learning is frequently also used for finite datasets.
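As an illustration of minimizing the cost over a sample rather than the true distribution, the empirical mean-squared-error cost above can be sketched in a few lines of Python (the function and data here are illustrative, not from the text):

```python
def empirical_cost(f, xs, ys):
    """Empirical MSE cost: (1/N) * sum of (f(x_i) - y_i)^2 over the N samples."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Example: samples generated from y = 2x; the model f(x) = 2x has zero cost,
# while a mismatched model f(x) = 1.5x has a strictly larger cost.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 2.0, 4.0, 6.0]
print(empirical_cost(lambda x: 2 * x, xs, ys))    # 0.0
print(empirical_cost(lambda x: 1.5 * x, xs, ys))  # 0.875
```

A learning algorithm would search the class of candidate models for the one with the smallest such cost.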

5.7 Simple Neuron



A neuron with a single scalar input and no bias appears on the left below.
Fig 5.1 Simple neuron
The scalar input p is transmitted through a connection that multiplies its strength by the scalar
weight w to form the product wp, again a scalar. Here the weighted input wp is the only
argument of the transfer function f, which produces the scalar output a. The neuron on the
right has a scalar bias, b. You can view the bias as simply being added to the product wp as
shown by the summing junction or as shifting the function f to the left by an amount b. The
bias is much like a weight, except that it has a constant input of 1.
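The computation just described, a = f(wp + b), can be sketched directly (a hypothetical illustration, not toolbox code):

```python
def neuron(p, w, b, f):
    """Single-input neuron: scalar input p, weight w, bias b, transfer function f."""
    return f(w * p + b)

# With a linear transfer function the output is simply w*p + b.
purelin = lambda n: n
print(neuron(2.0, 3.0, -1.0, purelin))  # 3*2 - 1 = 5.0
```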
5.8 Transfer Functions
Many transfer functions are included in this toolbox. A complete list of them can be found in
the reference pages. Three of the most commonly used functions are shown below.

5.8.1 Hard-Limit Transfer Function


The hard-limit transfer function shown above limits the output of the neuron to either 0, if the
net input argument n is less than 0, or 1, if n is greater than or equal to 0. This function is
used in Chapter 3, “Perceptrons,” to create neurons that make classification decisions.
The toolbox has a function, hardlim, to realize the mathematical hard-limit transfer function
shown above. Try the following code:

n = -5:0.1:5;
plot(n,hardlim(n),'c+:');

It produces a plot of the function hardlim over the range -5 to +5. All the mathematical transfer functions in the toolbox can be realized with a function having the same name. The linear transfer function is shown below.

5.8.2 Linear Transfer Function

The linear transfer function produces an output equal to its input; neurons of this type are used, for example, in the linear output layers of function-approximation networks.

5.8.3 Log-Sigmoid Transfer Function

The log-sigmoid transfer function takes the input, which can have any value between plus and minus infinity, and squashes the output into the range 0 to 1.
This transfer function is commonly used in backpropagation networks, in part because it is differentiable. The symbol in the square to the right of each transfer function graph shown above represents the associated transfer function. These icons replace the general f in the boxes of network diagrams to show the particular transfer function being used. For a complete listing of transfer functions and their icons, see the reference pages. You can also specify your own transfer functions. You can experiment with a simple neuron and various transfer functions by running the demonstration program nnd2n1.
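The three transfer functions discussed above can be sketched as plain Python stand-ins (the names follow the toolbox functions, but these definitions are illustrative):

```python
import math

def hardlim(n):
    """Hard limit: 0 if the net input n is less than 0, otherwise 1."""
    return 0 if n < 0 else 1

def purelin(n):
    """Linear: the output equals the input."""
    return n

def logsig(n):
    """Log-sigmoid: squashes any real input into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))

print(hardlim(-0.5), hardlim(0.0))  # 0 1
print(purelin(2.5))                 # 2.5
print(logsig(0.0))                  # 0.5
```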

5.8.4 Neuron with Vector Input

A neuron with a single R-element input vector is shown below. Here the individual element inputs p1, p2, ..., pR are multiplied by weights w1,1, w1,2, ..., w1,R and the weighted values are fed to the summing junction. Their sum is simply Wp, the dot product of the (single-row) matrix W and the vector p.

The neuron has a bias b, which is summed with the weighted inputs to form the net input n. This sum, n, is the argument of the transfer function f:

n = w1,1 p1 + w1,2 p2 + ... + w1,R pR + b

This expression can, of course, be written in MATLAB® code as

n = W*p + b

However, you will seldom be writing code at this level, for such code is already built into functions to define and simulate entire networks.
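The vector-input computation n = Wp + b can be sketched with NumPy (the weight and input values here are illustrative):

```python
import numpy as np

# W is a single-row (1 x R) weight matrix, p an R-element column vector.
W = np.array([[1.0, -2.0, 0.5]])       # 1 x R
p = np.array([[2.0], [1.0], [4.0]])    # R x 1
b = 0.5

# Net input: w1,1*p1 + w1,2*p2 + ... + w1,R*pR + b
n = (W @ p).item() + b
print(n)  # 1*2 - 2*1 + 0.5*4 + 0.5 = 2.5
```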
5.9 Abbreviated Notation
The figure of a single neuron shown above contains a lot of detail. When you consider
networks with many neurons, and perhaps layers of many neurons, there is so much detail
that the main thoughts tend to be lost. Thus, the authors have devised an abbreviated notation
for an individual neuron. This notation, which is used later in circuits of multiple neurons, is
shown.

Here the input vector p is represented by the solid dark vertical bar at the left. The dimensions of p are shown below the symbol p in the figure as Rx1. (Note that a capital letter, such as R in the previous sentence, is used when referring to the size of a vector.) Thus, p is a vector of R input elements. These inputs postmultiply the single-row, R-column matrix W. As before, a constant 1 enters the neuron as an input and is multiplied by a scalar bias b. The net input to the transfer function f is n, the sum of the bias b and the product Wp. This sum is passed to the transfer function f to get the neuron's output a, which in this case is a scalar. Note that if there were more than one neuron, the network output would be a vector.

A layer of a network is defined in the previous figure. A layer includes the combination of the weights, the multiplication and summing operation (here realized as a vector product Wp), the bias b, and the transfer function f. The array of inputs, vector p, is not included in or called a layer. Each time this abbreviated network notation is used, the sizes of the matrices are shown just below their matrix variable names. This notation will allow you to understand the architectures and follow the matrix mathematics associated with them.
When a specific transfer function is to be used in a figure, the symbol for that transfer function replaces the f shown above. Here are some examples.

You can experiment with a two-element neuron by running the demonstration program
nnd2n2.
5.10 Network Architectures
Two or more of the neurons shown earlier can be combined in a layer, and a particular network could contain one or more such layers. First consider a single layer of neurons.
5.10.1 A Layer of Neurons
A one-layer network with R input elements and S neurons is shown below.
In this network, each element of the input vector p is connected to each neuron input through the weight matrix W. The ith neuron has a summer that gathers its weighted inputs and bias to form its own scalar net input n(i). The various n(i) taken together form an S-element net input vector n.

Fig 5.2 Single-layer neural network
Finally, the neuron layer outputs form a column vector a. The expression for a is shown at the
bottom of the figure.
Note that it is common for the number of inputs to a layer to be different from the number of neurons (i.e., R is not necessarily equal to S). A layer is not constrained to have the number of its inputs equal to the number of its neurons. You can create a single (composite) layer of neurons having different transfer functions simply by putting two of the networks shown earlier in parallel. Both networks would have the same inputs, and each network would create some of the outputs. The input vector elements enter the network through the weight matrix W.

Note that the row indices on the elements of matrix W indicate the destination neuron of the
weight, and the column indices indicate which source is the input for that weight. Thus, the
indices in w1,2 say that the strength of the signal from the second input element to the first
(and only) neuron is w1,2. The S neuron R input one-layer network also can be drawn in
abbreviated notation.

Here p is an R length input vector, W is an SxR matrix, and a and b are S length vectors. As
defined previously, the neuron layer includes the weight matrix, the multiplication
operations, the bias vector b, the summer, and the transfer function boxes.
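The one-layer computation a = f(Wp + b), with W of size SxR and b of length S, can be sketched as follows (the sizes and values are illustrative):

```python
import numpy as np

# A layer of S = 2 neurons with R = 3 inputs.
W = np.array([[0.5, -1.0, 2.0],
              [1.0,  0.0, 1.0]])   # S x R weight matrix
b = np.array([0.1, -0.2])          # S-element bias vector
p = np.array([1.0, 2.0, 0.5])      # R-element input vector

logsig = lambda n: 1.0 / (1.0 + np.exp(-n))
a = logsig(W @ p + b)              # S-element output vector
print(a.shape)  # (2,)
```

Each row of W holds the weights of one neuron, so the matrix-vector product computes all S net inputs at once.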
5.10.2 Inputs and Layers
To describe networks having multiple layers, the notation must be extended. Specifically, it
needs to make a distinction between weight matrices that are connected to inputs and weight
matrices that are connected between layers. It also needs to identify the source and
destination for the weight matrices. We will call weight matrices connected to inputs input
weights; we will call weight matrices coming from layer outputs layer weights. Further,
superscripts are used to identify the source (second index) and the destination (first index) for
the various weights and other elements of the network. To illustrate, the one-layer multiple
input network shown earlier is redrawn in abbreviated form below.

As you can see, the weight matrix connected to the input vector p is labeled as an input
weight matrix (IW1,1) having a source 1 (second index) and a destination 1 (first index).
Elements of layer 1, such as its bias, net input, and output have a superscript 1 to say that they
are associated with the first layer. “Multiple Layers of Neurons” uses layer weight (LW)
matrices as well as input weight (IW) matrices.
5.10.3 Multiple Layers of Neurons
A network can have several layers. Each layer has a weight matrix W, a bias vector b, and an output vector a. To distinguish between the weight matrices, output vectors, etc., for each of these layers in the figures, the number of the layer is appended as a superscript to the variable of interest. You can see the use of this layer notation in the three-layer network shown below, and in the equations at the bottom of the figure.

Fig 5.3 Multilayer neural network


The network shown above has R1 inputs, S1 neurons in the first layer, S2 neurons in the
second layer, etc. It is common for different layers to have different numbers of neurons. A
constant input 1 is fed to the bias for each neuron. Note that the outputs of each intermediate
layer are the inputs to the following layer. Thus layer 2 can be analyzed as a one-layer
network with S1 inputs, S2 neurons, and an S2xS1 weight matrix W2. The input to layer 2 is
a1; the output
is a2. Now that all the vectors and matrices of layer 2 have been identified, it can be treated
as a single-layer network on its own. This approach can be taken with any layer of the
network. The layers of a multilayer network play different roles. A layer that produces the
network output is called an output layer. All other layers are called hidden layers. The three-
layer network shown earlier has one output layer (layer 3) and two hidden layers (layer 1 and
layer 2). Some authors refer to the inputs as a fourth layer. This toolbox does not use that
designation. The same three-layer network can also be drawn using abbreviated notation.

Multiple-layer networks are quite powerful. For instance, a network of two layers, where the first layer is sigmoid and the second layer is linear, can be trained to approximate any function (with a finite number of discontinuities) arbitrarily well. Here it is assumed that the output of the third layer, a3, is the network output of interest, and this output is labeled as y. This notation is used to specify the output of multilayer networks.
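The layer-by-layer computation described above can be sketched as follows (randomly chosen illustrative sizes and weights, not a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)
logsig = lambda n: 1.0 / (1.0 + np.exp(-n))

# Three-layer sketch: two sigmoid hidden layers, one linear output layer.
R1, S1, S2, S3 = 4, 5, 3, 1
W1, b1 = rng.standard_normal((S1, R1)), np.zeros(S1)
W2, b2 = rng.standard_normal((S2, S1)), np.zeros(S2)
W3, b3 = rng.standard_normal((S3, S2)), np.zeros(S3)

p = rng.standard_normal(R1)
a1 = logsig(W1 @ p + b1)    # layer 1 output becomes the input to layer 2
a2 = logsig(W2 @ a1 + b2)   # layer 2 output becomes the input to layer 3
y  = W3 @ a2 + b3           # network output y = a3 (linear output layer)
print(y.shape)  # (1,)
```

The intermediate vectors a1 and a2 show how each layer can be analyzed as a one-layer network whose input is the previous layer's output.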
5.10.4 Input and Output Processing Functions
Network inputs might have associated processing functions. Processing functions transform
user input data to a form that is easier or more efficient for a network. For instance,
mapminmax transforms input data so that all values fall into the interval [-1, 1]. This can
speed up learning for many networks. removeconstantrows removes the values for input
elements that always have the same value because these input elements are not providing any
useful information to the network. The third common processing function is
fixunknowns, which recodes unknown data (represented in the user’s data with NaN values)
into a numerical form for the network. fixunknowns preserves information about which
values are known and which are unknown. Similarly, network outputs can also have associated processing functions. Output processing functions are used to transform user-provided target vectors for network use. Then, network outputs are reverse-processed using the same functions to produce output data with the same characteristics as the original user-provided targets. Both mapminmax and removeconstantrows are often associated with network outputs. However, fixunknowns is not. Unknown values in targets (represented by NaN values) do not need to be altered for network use.
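As a rough stand-in for the default behaviour of mapminmax (mapping a data range linearly onto [-1, 1], with reverse processing for outputs), one might write:

```python
def mapminmax(xs, lo=-1.0, hi=1.0):
    """Linearly rescale xs so its minimum maps to lo and its maximum to hi.
    A simplified stand-in for the toolbox function of the same name."""
    xmin, xmax = min(xs), max(xs)
    scale = (hi - lo) / (xmax - xmin)
    mapped = [lo + (x - xmin) * scale for x in xs]
    settings = (xmin, xmax, lo, hi)   # kept so outputs can be reverse-processed
    return mapped, settings

def reverse(ys, settings):
    """Undo the mapping, recovering values in the original data range."""
    xmin, xmax, lo, hi = settings
    return [xmin + (y - lo) * (xmax - xmin) / (hi - lo) for y in ys]

data = [2.0, 4.0, 6.0, 10.0]
mapped, s = mapminmax(data)
print(mapped)              # [-1.0, -0.5, 0.0, 1.0]
print(reverse(mapped, s))  # recovers [2.0, 4.0, 6.0, 10.0]
```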

5.11 Training an Artificial Neural Network

Once a network has been structured for a particular application, that network is ready to be trained. To start this process the initial weights are chosen randomly. Then the training, or learning, begins. There are two approaches to training: supervised and unsupervised. Supervised training involves a mechanism of providing the network with the desired output, either by manually "grading" the network's performance or by providing the desired outputs along with the inputs. Unsupervised training is where the network has to make sense of the inputs without outside help. The vast bulk of networks utilize supervised training. Unsupervised training is used to perform some initial characterization of the inputs.
Training can also be classified on the basis of how the training pairs are presented to the network: incremental training and batch training. In incremental training the weights and biases of the network are updated each time an input is presented to the network. In batch training the weights and biases are only updated after all of the inputs have been presented.
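The difference between incremental and batch updates can be sketched for a single linear neuron y = w*x trained on pairs (x, t); the update rule and learning rate here are illustrative:

```python
def incremental_pass(w, pairs, lr):
    """The weight is updated after every training pair."""
    for x, t in pairs:
        w += lr * (t - w * x) * x
    return w

def batch_pass(w, pairs, lr):
    """Gradient contributions are accumulated over all pairs,
    and the weight is updated once at the end of the pass."""
    grad = sum((t - w * x) * x for x, t in pairs)
    return w + lr * grad

pairs = [(1.0, 2.0), (2.0, 4.0)]   # consistent with the target rule t = 2x
print(incremental_pass(0.0, pairs, 0.1))  # 0.92
print(batch_pass(0.0, pairs, 0.1))        # 1.0
```

After one pass the two schemes give different weights because the incremental scheme uses the already-updated weight when processing the second pair.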

5.11.1 Supervised Training

In supervised training, both the inputs and the outputs are provided.

The network then processes the inputs and compares its resulting outputs against the desired

outputs. Errors are then propagated back through the system, causing the system to adjust the

weights are continually tweaked. The set of data which enables the training is called the

“training set”. During the training of a network the same set of data is processed many times

as the connection weights are ever refined. However, some networks never learn. This could

be because the input data does not contain the specific information from which the desired

output is derived. Networks also don’t converge if there is not enough data to enable

complete learning. Many layered networks with multiple nodes are capable of memorizing

data. To determine whether the system is simply memorizing its data in some non-significant
way, supervised training needs to hold back a set of data to be used to test the system after it
has undergone its training. A typical diagram for supervised training of a network is given in
Figure 5.4.

Fig 5.4 Supervised training

If a network simply cannot solve the problem, the designer then has to review the inputs and

outputs, the number of layers, the number of elements per layer, the connections between the

layers, the summation, transfer, and training functions, and even the initial weights

themselves. Those changes required to create a successful network constitute a process

wherein the “art” of neural networking occurs. Another part of the designer’s creativity

governs the rules of training. There are many laws (algorithms) used to implement the

adaptive feedback required to adjust the weights during training. The most common

technique is backward-error propagation, more commonly known as back-propagation. Yet,

training is not just a technique. It involves a “feel”, and conscious analysis, to insure that the

network is not ‘over trained’. Initially, an artificial neural network configures itself with the
general statistical trends of the data. Later, it continues to “learn” about other aspects of the
data which may be spurious from a general viewpoint. When finally the system has been

correctly trained, and no further learning is needed, the weights can, if

desired, be “frozen”.
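The supervised training loop described above – forward pass, comparison with the desired outputs, and back-propagation of errors to adjust the weights – can be sketched as follows. This is an illustrative Python example, not the MATLAB controller used in this work; the network size, learning rate and epoch count are hypothetical choices.

```python
import numpy as np

# A small feed-forward network (2 inputs, 3 hidden sigmoid neurons,
# 1 output) trained by back-propagation on the XOR truth table.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)     # desired outputs

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(size=(2, 3)); b1 = np.zeros(3)      # random initial weights
W2 = rng.normal(size=(3, 1)); b2 = np.zeros(1)

def mse():
    return float(np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) - T) ** 2))

before = mse()
for _ in range(5000):                    # batch training loop
    H = sig(X @ W1 + b1)                 # forward pass, hidden layer
    Y = sig(H @ W2 + b2)                 # forward pass, output layer
    dY = (Y - T) * Y * (1 - Y)           # output error, propagated back
    dH = (dY @ W2.T) * H * (1 - H)       # hidden-layer error
    W2 -= 0.5 * H.T @ dY; b2 -= 0.5 * dY.sum(0)     # adjust the weights
    W1 -= 0.5 * X.T @ dH; b1 -= 0.5 * dH.sum(0)
after = mse()
print(after < before)    # the error shrinks as the weights are tweaked
```

In practice a held-back test set, as described above, would then be used to check that the network has generalized rather than memorized the four training pairs.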

5.11.2 Unsupervised or Adaptive Training

The other type of training is called unsupervised training. In

unsupervised training, the network is provided with inputs but not with desired outputs. The
system itself must then decide what features it will use to group the input data. This is often

referred to as self-organization or adaptation. At the present time, unsupervised learning is not

well understood. This adaptation to the environment is the promise which would enable

science fiction types of robots to continually learn on their own as they encounter new

situations and new environments. Life is filled with situations where exact training sets do not

exist. Some of these situations involve military action where new combat techniques and new

weapons might be encountered. Because of this unexpected aspect to life and the human

desire to be prepared, there continues to be research into, and hope for, this field. Yet, at the

present time, the vast bulk of neural network work is in systems with supervised learning.

Supervised learning is achieving results. One of the leading researchers into unsupervised

learning is Teuvo Kohonen, an electrical engineer at the Helsinki University of Technology.

He has developed a self-organizing network, sometimes called an auto-associator that learns

without the benefit of knowing the right answer. It is an unusual looking network in that it

contains one single layer with many connections. The weights for those connections have to

be initialized and the inputs have to be normalized. The neurons are set up to compete in a

winner-take-all fashion. Kohonen continues his research into networks that are structured
differently from the standard feed-forward, back-propagation approaches. Kohonen’s work
deals with the grouping of neurons into fields.
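A minimal sketch of the winner-take-all competition described above (a distance-based Python illustration, not Kohonen’s exact algorithm; the clusters, starting weights and learning rate are hypothetical):

```python
import numpy as np

# Winner-take-all competitive learning: the single neuron whose weight
# vector is closest to the input "wins", and only its weights move
# toward that input. Toy 2-D data drawn from two clusters.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0.0, 0.0], 0.05, (50, 2)),   # cluster A
                  rng.normal([1.0, 1.0], 0.05, (50, 2))])  # cluster B
W = np.array([[0.4, 0.4],      # initialized weights, one row per
              [0.6, 0.6]])     # competing neuron
lr = 0.1
for _ in range(20):
    for x in rng.permutation(data):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))  # competition
        W[winner] += lr * (x - W[winner])   # only the winner learns
print(np.round(W, 1))   # the rows settle near the two cluster centres
```

Without being told the “right answer”, each neuron self-organizes onto one group in the input data.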


5.11.3 Comparison Of Traditional Computing With Expert Systems

Neural networks offer a different way to analyze data, and to

recognize patterns within that data, than traditional computing methods. However, they are

not a solution for all computing problems. Traditional computing methods work well for

problems that can be well characterized. Balancing checkbooks, keeping ledgers and keeping

tabs of inventory are well defined and do not require the special characteristics of neural

networks. Table 5.1 identifies the basic differences between the two computing approaches.

Traditional computers are ideal for many applications. They can process data, track

inventories, network results and protect equipment. These applications do not need the

special characteristics of neural networks. Expert systems are an extension of traditional

computing and are sometimes called the fifth generation

of computing. The fifth generation involves artificial intelligence.

CHARACTERISTICS       TRADITIONAL COMPUTING        ARTIFICIAL NEURAL
                      (including Expert Systems)   NETWORKS

Processing style      Sequential                   Parallel
Functions             Logically (left-brained)     Gestalt (right-brained)
                      via rules, concepts,         via images, pictures,
                      calculations                 controls
Learning method       By rules (didactically)      By example (Socratically)
Applications          Accounting, word             Sensor processing, speech
                      processing, math,            recognition, pattern
                      inventory, digital           recognition, text
                      communications               recognition

Table 5.1 Comparison of computing approaches

Efforts to make expert systems general have run into a number of problems.

As the complexity of the system increases, the system simply demands too much computing

resources and becomes too slow. Expert systems have been found to be feasible only when
narrowly confined. Artificial neural networks offer a completely different approach to
problem solving and they are sometimes called the sixth generation of computing. They try to

provide a tool that both programs itself and learns on its own. Neural networks are structured

to provide the capability to solve problems without the benefits of an expert and without the

need of programming. They can seek patterns in data that no one knows are there.

A comparison of artificial intelligence’s expert systems and neural networks is

contained in Table 5.2. Yet, despite the advantages of neural networks over both expert

systems and more traditional computing in these specific areas, neural nets are not complete

solutions. They learn, and as such,

they do continue to make “mistakes”.

Table 5.2 Comparisons of Expert Systems and Neural Networks


CHARACTERISTICS          VON NEUMANN ARCHITECTURE     ARTIFICIAL NEURAL
                         USED FOR EXPERT SYSTEMS      NETWORKS

Processors               VLSI (traditional            Variety of technologies;
                         processors)                  hardware development is
                                                      ongoing
Memory                   Separate                     The same
Processing approach      Processes problem one rule   Multiple, simultaneously
                         at a time; sequential
Connections              Externally programmable      Dynamically self-
                                                      programming
Self-learning            Only algorithmic             Continuously adaptable
                         parameters modified
Fault tolerance          None without special         Significant, in the very
                         processors                   nature of the
                                                      interconnected neurons
Use of neurobiology      None                         Moderate
in design
Programming              Through a rule-based shell;  Self-programming, but the
                         complicated                  network must be properly
                                                      set up
Ability to be fast       Requires big processors      Requires multiple custom-
                                                      built chips
5.12 Major Components Of Artificial Neuron

This section describes the five major components which make
up an artificial neuron. These components are valid whether the neuron is used for input,
output, or is in one of the hidden layers.

Component 1.

Weighting Factors : A neuron usually receives many inputs simultaneously.

Each input has its own relative weight which gives the input the impact that it needs on the

processing element’s summation function. These weights perform the same type of function

as do the varying synaptic strengths of biological neurons. In both cases, some inputs are

made more important than others so that they have a greater effect on the processing element

as they combine to produce a neural response. Weights are a measure of an input’s connection

strength. These strengths can be modified in response to various training sets and according
to a network’s specific topology or through its learning rules.

Component 2.

Summation Function : The first step in a processing element’s operation is

to compute the weighted sum of all of the inputs. Mathematically, the inputs and the

corresponding weights are vectors which can be represented as (i1, i2 … in) and (w1, w2 … wn).

The total input signal is the dot, or inner, product of these two vectors. This simplistic

summation function is found by multiplying each component of the ‘i’ vector by the

corresponding component of the ‘w’ vector and then adding up all the products: input 1 =
i1 * w1, input 2 = i2 * w2, etc., and these are added as input 1 + input 2 + … + input n. The
result is a single number, not a multi-element vector.
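For example, the summation function is simply the dot product of the two vectors (the input and weight values below are hypothetical):

```python
import numpy as np

# The neuron's net input is the dot (inner) product of the input
# vector i and the corresponding connection-weight vector w.
i = np.array([0.5, 1.0, -0.2])   # example inputs
w = np.array([0.8, 0.1, 0.4])    # corresponding weights
net = float(np.dot(i, w))        # i1*w1 + i2*w2 + i3*w3
print(round(net, 2))             # 0.5*0.8 + 1.0*0.1 + (-0.2)*0.4 = 0.42
```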

Component 3.

Transfer Function: The result of the summation function, the weighted sum,

is transformed to a working output through an algorithmic process known as the transfer

function. In the transfer function the summation total can be compared with some threshold

to determine the neural output. If the sum is greater than the threshold value, the processing
element generates a signal. If the sum of the input and weight products is less than the

threshold, no signal (or some inhibitory signal) is generated. Both types of response are

significant. The threshold, or transfer function, is generally non-linear. Linear (straight-line)

functions are limited because the output is simply proportional to

the input. Linear functions are not very useful.

Sample transfer functions are shown in Figure 5.6 below.

Figure 5.6. Sample Transfer Functions
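Common transfer functions of this kind can be sketched as follows (illustrative Python, not the toolbox implementations; note that MATLAB’s ‘tansig’ is mathematically equivalent to tanh):

```python
import numpy as np

# Three common transfer functions applied to the weighted sum: a
# hard-limit threshold, the logistic sigmoid, and the tan-sigmoid.
def hardlim(s, threshold=0.0):
    return 1.0 if s > threshold else 0.0   # fires only above threshold

def logsig(s):
    return 1.0 / (1.0 + np.exp(-s))        # smooth, bounded in (0, 1)

def tansig(s):
    return np.tanh(s)                      # smooth, bounded in (-1, 1)

print(hardlim(0.42), round(logsig(0.0), 2), round(float(tansig(0.0)), 2))
# → 1.0 0.5 0.0
```

The hard limit implements the threshold comparison described above; the sigmoids are the non-linear functions that make multi-layer networks useful.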

Component 4.

Scaling and Limiting : After the processing element’s transfer function, the

result can pass through additional processes which scale and limit. This scaling simply

multiplies a scale factor times the transfer value, and then adds an offset. Limiting is the

mechanism which insures that the scaled result does not exceed an upper or lower bound.
This limiting is in addition to the hard
limits that the original transfer function may have performed.
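A sketch of this scale-then-limit step (the scale factor, offset and bounds below are hypothetical):

```python
# Scaling multiplies the transfer value by a scale factor and adds an
# offset; limiting then clamps the result to an upper and lower bound.
def scale_and_limit(t, scale=2.0, offset=0.1, lo=0.0, hi=1.0):
    s = scale * t + offset          # scaling: factor times value, plus offset
    return max(lo, min(hi, s))      # limiting: clamp to [lo, hi]

print(round(scale_and_limit(0.3), 2))   # 2*0.3 + 0.1 = 0.7, inside bounds
print(scale_and_limit(0.9))             # 1.9 exceeds the bound → 1.0
```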

Component 5.

Output Function (Competition) : Each processing element is allowed one

output signal which it may output to hundreds of other neurons. This is just like the biological

neuron, where there are many inputs and only one output action. Normally, the output is

directly equivalent to the transfer function’s result. Some network topologies, however,

modify the transfer result to incorporate competition among neighboring processing

elements. Neurons are allowed to compete with each other, inhibiting processing elements

unless they have great strength. Competition can occur at one or both of two levels. First,

competition determines which artificial neuron will be active, or provides an output. Second,

competitive inputs help determine which processing element will

participate in the learning or adaptation process.

5.13 Neural Network Applications

Neural networks have been applied in many fields. Some
applications mentioned in the literature are listed below:

Aerospace

High performance aircraft autopilot, flight path simulation, aircraft control systems,

autopilot enhancements, aircraft component simulation, aircraft component fault detection

Automotive

Automobile automatic guidance system, warranty activity analysis

Banking

Check and other document reading, credit application evaluation
Defense

Weapon steering, target tracking, object discrimination, facial recognition, new kinds

of sensors, sonar, radar and image signal processing including data compression,

feature extraction and noise suppression, signal/image identification

Electronics

Code sequence prediction, integrated circuit chip layout, process control, chip failure

analysis, machine vision, voice synthesis, nonlinear modeling

Entertainment

Animation, special effects, market forecasting

Financial

Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit

line use analysis, portfolio trading program, corporate

financial analysis, currency price prediction

Insurance

Policy application evaluation, product optimization

Manufacturing

Manufacturing process control, product design and analysis, process and machine

diagnosis, real-time particle identification, visual quality inspection systems, beer

testing, welding quality analysis, paper quality prediction, computer chip quality

analysis, analysis of grinding operations, chemical product design analysis, machine

maintenance analysis, project bidding, planning and management, dynamic

modeling of chemical process systems

Medical

Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of

transplant times, hospital expense reduction, hospital

quality improvement and emergency room test advisement

Oil and Gas

Exploration

Robotics

Trajectory control, forklift robot, manipulator controllers, vision systems

Speech

Speech recognition, speech compression, vowel classification, text to speech synthesis

Securities

Market analysis, automatic bond rating, stock trading advisory systems

Telecommunications

Image and data compression, automated information services, real-time translation of

spoken language, customer payment processing systems

Transportation

Truck brake diagnosis systems, vehicle scheduling, routing systems

5.14 MATLAB – ANN Toolbox Functions

What is MATLAB ?

MATLAB is a high performance language for technical computing. It

integrates computation, visualization and programming in an easy – to – use environment

where problems and solutions are expressed in familiar

mathematical notation. Typical uses include

* Math and computation

* Algorithm development
* Data acquisition

* Modeling, Simulation and Prototyping

* Data analysis, exploration and visualization

* Scientific and engineering graphics

* Application development, including graphical user interface

building

MATLAB is an interactive system whose basic data element is an array that

does not require dimensioning. This allows you to solve many technical computing problems,

especially those with matrix and vector formulations, in a fraction of the time it would take to

write a program in a scalar, noninteractive language such as C or FORTRAN.

MATLAB features a family of add-on application-specific solutions called

toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply

specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-

files) that extend the MATLAB environment to solve particular classes of problems. Areas in

which toolboxes are available include Signal Processing, Control Systems, Neural Networks,

Fuzzy Logic, Wavelets,

Simulation and many others.

The MATLAB System

The MATLAB System consists of five main parts:

1. Development Environment

2. The MATLAB Mathematical Function Library

3. The MATLAB Language

4. Graphics
5. The MATLAB Application Program Interface (API)

 Creating a Network (NEWFF)

The first step in training a feed-forward network is to create the
network object. The function newff creates a trainable feed-forward network. It requires four
inputs and returns the network object. The first input is an R-by-2 matrix of minimum and
maximum values for each of the R elements of the input vector. The second input is an array

containing the sizes of each layer. The third input is a cell array containing the names of the

transfer functions to be used in each layer. The final input contains the name of the training

function to be used.

SYNTAX :-

NEWFF (PR, [S1 S2 … SN1], {TF1 TF2 … TFN1}, BTF, BLF, PF) takes,

PR – Rx2 matrix of min and max values for input elements

Si – Size of the ith layer, for N1 layers

TFi – Transfer function of the ith layer, default = ‘tansig’

BTF – Back prop network training function, default = ‘trainlm’

BLF – Back prop weight/bias learning function, default = ‘learngdm’

PF – Performance function, default = ‘mse’

and returns an N layer feed – forward back prop network

Initializing Weights (INIT, INITNW, RANDS)

Before training a feed forward network, the weights and biases must be

initialized. The initial weights and biases are created with the command init. This function

takes a network object as input and returns a network object with all weights and biases

initialized. Here is how a network is initialized:
SYNTAX: - net = init (net);

The specific technique which is used to initialize a given

network will depend on how the network parameters net.initFcn and net.layers {i}.initFcn are

set. The parameter net.initFcn is used to determine the overall initialization function for the

network. The default initialization function for the feed forward network is initlay, which

allows each layer to use its own initialization function. With this setting for net.initFcn, the
parameter net.layers{i}.initFcn is used to determine the initialization method for each layer.

The function initnw is normally used for layers of feed forward

networks where the transfer function is sigmoid. The function rands is normally used for

layers of feed forward networks where the transfer function is linear. The initialization

function init is called by newff, therefore the network is automatically initialized with the

default parameters when it is created, and init

does not have to be called separately.

 Training

There are two types of training procedures according to the way in which the

inputs are applied to the network. They are ‘incremental training’ where each training pair

will be applied one after the other and ‘batch training’ in

which entire set of training pairs will be applied at once.

The syntaxes for them are as below:

SYNTAX: - Batch Training : net = train (net, p, t);

Incremental Training : [net, a, e] = adapt (net, p, t);

Where ‘p’ and ‘t’ together constitute the training set: ‘p’ is the input vector and ‘t’
is the target vector presented to the feed-forward network. With these functions the network will

be trained according to the training algorithm

mentioned in ‘newff’.

 SIMULATION

The function sim simulates a network. sim takes the input vector p and the
network object net, and returns the network output ‘k’.

SYNTAX: - K = sim (net, p);

 TRAINING PARAMETERS

SYNTAX                    DESCRIPTION

net.trainparam.epochs     indicates the maximum number of epochs for training
net.trainparam.lr         specifies the learning rate
net.trainparam.goal       specifies the performance goal
net.trainparam.show       specifies the number of epochs between showing progress

The above syntaxes are applicable when batch training is used, but if
we opt for incremental training we must replace ‘train’ with ‘adapt’.

MATLAB PROGRAM FOR ANN CONTROLLER TO TRAIN AN ANN
BLOCK FOR SERIES VOLTAGE REGULATOR IN UPQC

p= [0.92866 0.92866 0.92448 0.92448 0.92413 0.92415 0.92418 0.92418 0.92898 0.92908

0.92908 0.92473 0.92473 0.92475 0.92486 0.92464 0.92468 0.92468 0.92942 0.99225 ;

0.16319 0.16319 0.16292 0.16292 0.16277 0.16273 0.16267 0.16267 0.16244 0.16221

0.16221 0.16213 0.16213 0.16208 0.16182 0.16177 0.16167 0.16167 0.16141 0.17659];

%p=p';

o=[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ;1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];

net.numInputs=2;

net.numLayers=1;

net=newff([0.92413 0.99225;0.16141 0.17651],[20,2],{'tansig','poslin'},'traingdm');

net.trainparam.lr=0.02;

net.trainparam.epochs=500;
net.trainparam.goal=0.0;
net.trainParam.min_grad=1e-40;

net=train(net,p,o);

%net = newlind(p,o);

gensim(net,-1)

MATLAB PROGRAM FOR ANN CONTROLLER TO TRAIN AN ANN
BLOCK FOR SHUNT VOLTAGE REGULATOR IN UPQC

p=[0.14911 0.14911 0.14835 0.14835 0.14825 0.14824 0.14823 0.14823 0.14892 0.14886

0.14886 0.14814 0.14814 0.14812 0.14806 0.14801 0.14799 0.14799 0.14867 0.15413 ;

0.014604 0.014604 0.014546 0.014546 0.014514 0.014504 0.014492 0.014492 0.014443

0.014394 0.014394 0.014377 0.014377 0.014366 0.014311 0.014301 0.01428 0.01428

0.014226 0.017879];

%p=p';

o=[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ;1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];

net.numInputs=2;

net.numLayers=1;

net=newff([0.92413 0.99225;0.16141 0.17651],[20,2],{'tansig','poslin'},'traingdm');

net.trainparam.lr=0.02;

net.trainparam.epochs=500;

net.trainparam.goal=0.0;
net.trainParam.min_grad=1e-40;

net=train(net,p,o);

%net = newlind(p,o);

gensim(net,-1)

