Modeling Induction Motors

Abstract: In this paper, a novel technique for on-line estimation of most
Received: November 21st, 2011. Accepted: April 25th, 2012.
Tarek I. Haweel
obtained. Being adaptive, these parameters can be monitored in real time. The rest of the paper is organized as follows: Section 2 summarizes the essential features of the steady-state and dynamic models of the induction motor. Section 3 describes Volterra neural networks. Section 4 introduces the proposed method and presents the results obtained. Finally, Section 5 concludes the work.

2. Steady State and Dynamic Models of An Induction Motor

A. Steady State Equivalent Circuit
The steady-state motor model can be deduced from the description of the stator and rotor electrical circuits. With this physical approach, five electrical elements are defined: the stator and rotor resistances (Rs and Rr), the stator and rotor leakage inductances (Lls and Llr) and a magnetizing inductance (Lm). This definition leads to the usual equivalent circuit for steady-state operation of an induction motor [9]. From this equivalent circuit, one can obtain expressions for the motor torque, stator current, input power factor and efficiency [9].

B. Mathematical Dynamic Model of an Induction Motor
The induction motor can be represented in the stator stationary reference frame (α-β coordinate axes) by a second-order differential equation relating the stator input voltages and currents as [10]
d²i_s/dt² + g1 di_s/dt + g0 i_s = h1 dv_s/dt + h0 v_s    (1)

where the coefficients of equation (1) are given in the following way:

g1 = (Rs Lr + Rr Ls)/σs − jωr
g0 = (Lr Rs/σs)(1/Tr − jωr)
h1 = Lr/σs
h0 = (Lr/σs)(1/Tr − jωr)    (2)
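The coefficients of equation (2) can be evaluated numerically for a given operating point. The following sketch uses the experimental parameter values reported later in Table 1 and an arbitrary example rotor speed; equal stator and rotor self-inductances reflect the Lls ≈ Llr assumption used later in the paper.

```python
# Numerical evaluation of the coefficients of equation (2).
# Machine parameters are the experimental values from Table 1;
# the rotor speed w_r is an arbitrary example operating point.
Rs, Rr = 0.512, 0.174               # stator / rotor resistance (ohm)
Ls = Lr = 0.1503                    # self-inductances (H), assuming Lls = Llr
Lm = 0.1437                         # magnetizing inductance (H)
w_r = 2 * 3.141592653589793 * 48.0  # rotor speed (rad/s), example value

sigma_s = Ls * Lr - Lm ** 2         # leakage index
Tr = Lr / Rr                        # rotor time constant

g1 = (Rs * Lr + Rr * Ls) / sigma_s - 1j * w_r
g0 = (Lr * Rs / sigma_s) * (1.0 / Tr - 1j * w_r)
h1 = Lr / sigma_s
h0 = (Lr / sigma_s) * (1.0 / Tr - 1j * w_r)
```

By construction g0 = Rs·h0 at any speed, which gives a quick consistency check on an implementation.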
These coefficients are functions of the machine parameters and the motor speed (ωr), where Ls and Lr are the stator and rotor self-inductances respectively, defined by Ls = Lm + Lls and Lr = Lm + Llr; Tr = Lr/Rr is the rotor time constant and σs = Ls Lr − Lm² is known as the leakage index. The stator phase voltages and currents can also be transformed to the stationary reference frame (α-β axes) [18]. The stator voltage and current equations, which are complex, can be written as

v_s = v_sα + j v_sβ    (3)

i_s = i_sα + j i_sβ    (4)

where v_sα, v_sβ are the α-axis and β-axis stator voltage components in the stationary reference frame and i_sα, i_sβ are the corresponding stator current components. Substituting (3) and (4) into equation (1), the second derivatives of the stator current components at constant speed can be derived as
d²i_sα/dt² + A1 di_sα/dt + ωr di_sβ/dt + A4 i_sα + ωr A5 i_sβ = A2 dv_sα/dt + A3 v_sα + ωr A2 v_sβ
d²i_sβ/dt² + A1 di_sβ/dt − ωr di_sα/dt + A4 i_sβ − ωr A5 i_sα = A2 dv_sβ/dt + A3 v_sβ − ωr A2 v_sα    (5)

where A1 = (Rs Lr + Rr Ls)/σs, A2 = Lr/σs, A3 = Rr/σs, A4 = Rr Rs/σs and A5 = Rs Lr/σs.

Applying the following constraints to equation (5):
Constraint 1: v_sα = Vs cos ωs t
Constraint 2: v_sβ = Vs sin ωs t
Constraint 3: i_sα = Is cos(ωs t − φ)
Constraint 4: i_sβ = Is sin(ωs t − φ)
where ωs is the stator supply angular frequency (rad/sec) and φ is the phase angle between the stator voltage and current components. This results in the following equations:

i_sβ = [A2 ωs v_sα + A3 v_sβ − A1 ωs i_sα − A2 ωr v_sα + A5 ωr i_sα] / (ωs ωr + A4 − ωs²)    (6)

i_sα = [A3 v_sα − A2 ωs v_sβ + A1 ωs i_sβ + A2 ωr v_sβ − A5 ωr i_sβ] / (ωs ωr + A4 − ωs²)    (7)
Equations (6) and (7) represent the model of the induction motor. These equations may be put in the form
i_sβ = K1 v_sα + K2 v_sβ + K3 i_sα + K4 ωr v_sα + K5 ωr i_sα    (8)
i_sα = K2 v_sα − K1 v_sβ − K3 i_sβ − K4 ωr v_sβ − K5 ωr i_sβ    (9)
where

K1 = A2 ωs / (ωs ωr + A4 − ωs²)    (10)
K2 = A3 / (ωs ωr + A4 − ωs²)    (11)
K3 = −A1 ωs / (ωs ωr + A4 − ωs²)    (12)
K4 = −A2 / (ωs ωr + A4 − ωs²)    (13)
K5 = A5 / (ωs ωr + A4 − ωs²)    (14)
At constant known motor speed, and if the leakage inductances in the stator and rotor circuits are considered the same [9] (Lls ≈ Llr), equations (10)-(14) may be solved together to obtain the electrical machine parameters in the form
Rs = ωs K5 / K1    (15)

Ls = −(K3 + Rs K1) / (ωs K2)    (16)

Rr = ωs K2 Ls / K1    (17)

σs = Rr (1 − K2 Rs) / (K2 ωs (ωr − ωs))    (18)

Lm = √(Ls² − σs)    (19)
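The chain from the convergent K coefficients back to the machine parameters can be checked numerically end to end: K1-K5 are first generated from assumed parameter values through the definitions (10)-(14), then inverted with (15)-(19). This is a self-consistency sketch under the Lls ≈ Llr assumption; the parameter and frequency values are illustrative, not measurements.

```python
import math

# Forward pass: build K1..K5 from assumed machine parameters, eqs. (10)-(14).
Rs, Rr, Ls, Lm = 0.512, 0.174, 0.1503, 0.1437
Lr = Ls                                 # Lls = Llr assumption
sigma = Ls * Lr - Lm ** 2               # leakage index sigma_s
w_s = 2 * math.pi * 50.0                # supply angular frequency (rad/s)
w_r = 2 * math.pi * 48.0                # rotor speed (rad/s), example value

A1 = (Rs * Lr + Rr * Ls) / sigma
A2 = Lr / sigma
A3 = Rr / sigma
A4 = Rr * Rs / sigma
A5 = Rs * Lr / sigma
D = w_s * w_r + A4 - w_s ** 2           # common denominator of (10)-(14)

K1, K2, K3, K4, K5 = A2 * w_s / D, A3 / D, -A1 * w_s / D, -A2 / D, A5 / D

# Inverse pass: recover the electrical parameters, eqs. (15)-(19).
Rs_hat = w_s * K5 / K1                                               # (15)
Ls_hat = -(K3 + Rs_hat * K1) / (w_s * K2)                            # (16)
Rr_hat = w_s * K2 * Ls_hat / K1                                      # (17)
sigma_hat = Rr_hat * (1 - K2 * Rs_hat) / (K2 * w_s * (w_r - w_s))    # (18)
Lm_hat = math.sqrt(Ls_hat ** 2 - sigma_hat)                          # (19)
```

The recovered values match the assumed ones to machine precision, since (15)-(19) invert (10)-(14) exactly when Ls = Lr.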
3. Volterra Neural Networks
Consider a continuous and smooth mapping of the form
y = f(x);  y ∈ R^n,  x ∈ R^m    (20)

where y ∈ R^n and x ∈ R^m indicate that the dimensional spaces of y and x are n and m respectively. Each output can be expanded in a Taylor series around some fixed point, say 0 = (0, 0, …, 0), resulting in
y_k = f_k(x) = a_k(0) + Σ_{j1=1}^{m} b_k(j1) x_{j1} + Σ_{j1=1}^{m} Σ_{j2=1}^{m} c_k(j1, j2) x_{j1} x_{j2} + …
      + Σ_{j1=1}^{m} Σ_{j2=1}^{m} … Σ_{jr=1}^{m} q_k(j1, j2, …, jr) x_{j1} x_{j2} … x_{jr} + … ;  k = 1, …, n    (21)
where a_k(0) = f_k(0), b_k(j1) are the linear expansion coefficients, c_k(j1, j2) are the quadratic expansion coefficients and so on. A truncated version of the original infinite series is always employed. In the case of dynamical systems such as a time series, where the vector x may be formed as a collection of past samples, the term Volterra series expansion is used. The expansion coefficients q_k(j1, j2, …, jr) are called the rth-order Volterra kernel. It is noted that the number of coefficients is proportional to m^r. The number of coefficients may be reduced by assuming that the expansion coefficients are symmetric, that is, all the q_k(j1, j2, …, jr) are the same for all the r! permutations of the indices (j1, j2, …, jr) [11]. In this case, equation (21) becomes
y_k = f_k(x) = a_k(0) + Σ_{j1=1}^{m} b_k(j1) x_{j1} + Σ_{j1=1}^{m} Σ_{j2=j1}^{m} c_k(j1, j2) x_{j1} x_{j2} + …
      + Σ_{j1=1}^{m} Σ_{j2=j1}^{m} … Σ_{jr=j(r−1)}^{m} q_k(j1, j2, …, jr) x_{j1} x_{j2} … x_{jr} + … ;  k = 1, …, n    (22)
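The coefficient counts above are easy to tabulate: the full rth-order kernel of an m-dimensional input has m^r entries, while the symmetric form of (22) keeps one coefficient per multiset of indices, i.e. C(m + r − 1, r) of them. A small helper (the function name is ours):

```python
from math import comb

def volterra_term_counts(m: int, r: int):
    """Number of r-th order kernel coefficients for an m-dimensional input:
    the full kernel has m**r entries, while the symmetric expansion of
    equation (22) keeps one coefficient per multiset of indices."""
    return m ** r, comb(m + r - 1, r)

# Second-order kernel of the 4-input configuration used later in Section 4:
full, symmetric = volterra_term_counts(4, 2)   # -> (16, 10)
```

For the four-input, second-order configuration of Section 4 this gives 1 + 4 + 10 = 15 coefficients in total, matching the terms of equation (26).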
Moreover, a pruned version, in which only a subset of the index combinations (j1, j2, …, jr) is retained, may also be employed [12]. The Volterra series expansion provides an important tool for dealing with nonlinear systems and models. Recently, this expansion has been utilized in many applications [13]-[16], [8]. The Volterra Neural Network (VNN) [8] employs a truncated Volterra series expansion of the input vector. The expanded input vector is then utilized as the actual input to a normal NN connection. The VNN adopts only linear transfer functions for all the neurons involved. Provided that a sufficient kernel order is employed in Volterra-expanding the input vectors, only one layer need be utilized. The linearity of the neuron transfer functions leads to explicit formulas describing the input/target patterns. These formulas are completely determined by the Volterra expansion coefficients after convergence, which are the biases (the a's in equations (21), (22)) and the weights (the rest of the coefficients in equations (21), (22)). To clarify the VNN, let us assume that it is required to associate a set of N two-element input patterns [x1 x2] with a set of N two-element desired (target) patterns [d1 d2] employing a VNN. If a symmetric second-order kernel Volterra expansion is used, the expanded input vector Xe according to (22) is:
Xe = [1   x1   x2   x1²   x2²   x1x2]    (23)
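The symmetric second-order expansion of (22)-(23) can be generated for any input length. A minimal sketch (the helper name is ours); note that it emits the cross term x1x2 between the two squares, one term per index multiset, which is a slightly different ordering than the listing in (23) but only a labeling convention:

```python
from itertools import combinations_with_replacement

def expand_second_order(x):
    """Symmetric second-order Volterra expansion of an input vector:
    a leading 1 (for the bias), the linear terms, then the products
    x_j1 * x_j2 with j2 >= j1, as in equations (22) and (23)."""
    out = [1.0]
    out.extend(x)
    out.extend(a * b for a, b in combinations_with_replacement(x, 2))
    return out

Xe = expand_second_order([2.0, 3.0])   # -> [1.0, 2.0, 3.0, 4.0, 6.0, 9.0]
```

For a four-element input, as used in Section 4, the expanded vector has 15 elements.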
The expanded input vector is now the new input for a single layer (flat) neural network structure as shown in Fig.1. The number of neurons in the output layer equals the number of elements in the output vectors (two in our example). There are weights w(m,n) connecting the nth expanded input element to the mth output neuron as shown. The output layer neurons employ linear transfer functions that lead to explicit formulas relating the output/desired patterns to the input patterns. Such formulas are the outstanding feature of VNN. The formulas in our example are given in matrix form as
[y1]   [w(1,0)  w(1,1)  w(1,2)  w(1,3)  w(1,4)  w(1,5)]   [ 1    ]
[y2] = [w(2,0)  w(2,1)  w(2,2)  w(2,3)  w(2,4)  w(2,5)] · [ x1   ]
                                                          [ x2   ]
                                                          [ x1²  ]
                                                          [ x2²  ]
                                                          [ x1x2 ]
    (24)
It is noted that w(1,0) and w(2,0) are the biases. The VNN is trained incrementally. That is, the weights/biases are updated each time an input pattern is presented to the network using the error vector defined in our example as
[e1, e2]k = [d1, d2]k − [y1, y2]k ,  k ∈ [1, N]    (25)
where [d1, d2]k is the kth desired pattern, [y1, y2]k is the kth output vector and [e1, e2]k is the kth error vector. The presentation of all N patterns constitutes an epoch. Epochs are repeated until convergence is achieved. After convergence, the error vectors tend to null, the output vectors tend to the desired patterns and the N input/desired pattern associations are complete. The equations relating the desired patterns to the input patterns are the same as in (24) with the y vector replaced by the d vector.
The algorithm employed in training the VNN is the LMS-Newton (LMSN) with variable convergence factor [17],[8]. This algorithm achieves a uniform and fast second order convergence using estimates for the Hessian matrix at each update. The LMSN adaptive algorithm [17] has been extended to the multiple input/output cases to match the VNN architecture [8].
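The incremental update loop has the following shape. The paper trains with the LMS-Newton algorithm with a variable convergence factor [17], [8]; the sketch below substitutes plain LMS to keep the example short: the pattern-by-pattern structure, the error vector of (25) and the epoch loop are the same, only the step-size/Hessian handling differs. The function name and learning rate are ours.

```python
def train_flat_vnn(patterns, targets, n_out, lr=0.1, epochs=500):
    """Incrementally train a single-layer (flat) linear network on
    Volterra-expanded input patterns. Plain LMS stands in for the
    LMS-Newton algorithm used in the paper; convergence is slower but
    the incremental weight-update structure is the same."""
    n_in = len(patterns[0])
    # w[m][n] connects the n-th expanded input element to the m-th output;
    # the bias w(m, 0) is carried by the leading 1 in each expanded pattern.
    w = [[0.0] * n_in for _ in range(n_out)]
    for _ in range(epochs):                        # one epoch = all N patterns
        for xe, d in zip(patterns, targets):
            y = [sum(wm[n] * xe[n] for n in range(n_in)) for wm in w]
            e = [dk - yk for dk, yk in zip(d, y)]  # error vector, eq. (25)
            for m in range(n_out):
                for n in range(n_in):
                    w[m][n] += lr * e[m] * xe[n]
    return w
```

Because the targets are linear in the expanded inputs, the weights converge to the expansion coefficients themselves, which is what makes the trained network readable as explicit formulas.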
4. Induction Motor Characterization Based on the VNN
This section illustrates the implementation of the VNN in characterizing the induction motor. The first part is a parametric characterization; that is, the VNN weights (after training) are used to estimate the actual electrical parameters of the induction motor. The second part is non-parametric: the VNN weights provide a set of equations which relate a number of induction motor performance criteria to the essential induction motor inputs.
A. Parametric Characterization
The tested motor was a 9.8 HP, 220-Volt, 50-Hz, delta-connected, slip-ring induction motor. The rated stator current per phase was 15.1 A at 1450 rpm. A DC generator of about the same rating is coupled to the motor. Its equivalent circuit parameters have been determined by the tests given in [9]. The VNN may be employed to obtain the electrical machine parameters as follows. The stator input voltages, currents and motor speed are measured instantaneously. The measured stator phase voltages and currents are transformed to the corresponding α-β components. Equation (8) has been employed, although equation (9) may be used as well. A VNN configuration with four inputs, v_sα, v_sβ, i_sα and ωr, and one target, i_sβ, is constructed (n = 1 and m = 4). A truncated Volterra expansion of the input up to the second kernel is implemented. Referring to equation (22) for the symmetric Volterra series expansion and assigning

v_sα = x1;  v_sβ = x2;  i_sα = x3;  ωr = x4;  i_sβ = y1

the symmetric second-order kernel expansion yields (dropping k for simplicity)

i_sβ = a(0) + b(1)v_sα + b(2)v_sβ + b(3)i_sα + b(4)ωr + c(1,1)v_sα² + c(1,2)v_sα v_sβ +
       c(1,3)v_sα i_sα + c(1,4)v_sα ωr + c(2,2)v_sβ² + c(2,3)v_sβ i_sα + c(2,4)v_sβ ωr +
       c(3,3)i_sα² + c(3,4)i_sα ωr + c(4,4)ωr²    (26)
The number of input patterns employed is 100, each containing four sample values of the four inputs; the output patterns are the corresponding 100 target current samples. Comparing equations (8) and (26), it is clear that
K1 = b(1);  K2 = b(2);  K3 = b(3);  K4 = c(1,4);  K5 = c(3,4)
The rest of the Volterra series expansion coefficients, which are not involved in equation (8), have been reset to zero during training to save time. The adaptive session has been run for two epochs only, where an epoch is a complete presentation of all input patterns. Figure 2.a shows the estimated value of the stator phase current using the VNN together with the measured one. From this figure, one can conclude that the estimated and measured values of the stator phase current are in reasonable agreement. This is also confirmed by Fig. 2.b, which shows the squared error achieved during the session. After about 20 iterations the error power has decreased to about −50 dB (relative to unity), meaning that the target current has been reached with reasonable accuracy (around 10⁻⁵). The coefficients estimated after the two epochs are:
(27)
Equation (27) is used to characterize the steady-state model of an induction motor by relating the motor performance characteristics to the motor inputs. This characterization is valid for obtaining the motor performance characteristics accurately under any loading condition. It is worth mentioning here that such explicit equations cannot be obtained using conventional neural networks.
5. Conclusion
A novel technique for on-line estimation of most electrical parameters, as well as for deducing the steady-state performance characteristics of an induction motor, has been proposed. In the proposed technique, measurements of some essential quantities such as the stator voltages, currents and motor speed are employed in suitable VNN configurations with second-order kernels. Explicit formulae relating the convergent VNN weights and biases to the acquired electrical parameters are provided. Other formulae are also obtained to get the steady-state motor torque, stator current, input power factor and efficiency at any supply voltage and motor speed. The accuracy of the parameters estimated using the proposed technique is reasonable compared to the nominal values. An excellent match between the steady-state performance characteristics obtained from the trained VNN and those obtained experimentally has been achieved.
References
[1]. C. L. Becnel, J. W. Kilgrore and E. F. Merrill, "Determining Motor Efficiency by Field Testing," IEEE Transactions on Industry Applications, vol. IA-23, no. 3, pp. 440-443, 1987.
[2]. T. W. Jian, D. W. Novotny and N. L. Schmitz, "Characteristic Induction Motor Values for Variable Voltage Part Load Performance Optimization," IEEE Transactions on Power Apparatus and Systems, vol. PAS-102, pp. 38-46, 1983.
[3]. D. S. Kirschen, D. W. Novotny and T. A. Lipo, "Optimal Efficiency of an Induction Motor Drives," IEEE Transactions on Energy Conversion, vol. EC-2, no. 1, pp. 70-75, 1987.
[4]. R. Krishnan and A. S. Bharadwaj, "A Review of Parameter Sensitivity and Adaptation in Indirect Vector Controlled Induction Motor Drive Systems," IEEE Transactions on Power Electronics, vol. 6, no. 4, pp. 695-703, Oct. 1991.
[5]. T. Iwasaki and T. Kataoka, "Application of an Extended Kalman Filter to Parameter Identification of an Induction Motor," IEEE-IAS Annual Meeting Conference Record, pp. 248-253, 1989.
[6]. L. Loron and G. Laliberte, "Application of the Extended Kalman Filter to Parameter Estimation of Induction Motors," The European Power Electronics Association, pp. 73-78, Sep. 1993.
[7]. T. Kataoka, S. Toda and Y. Sato, "On-Line Estimation of Induction Motor Parameters by Extended Kalman Filter," The European Power Electronics Association, pp. 325-329, Sep. 1993.
[8]. T. I. Haweel and F. A. Alturki, "Modeling Nonlinear Multi-Input Multi-Output Systems Using Neural Networks Based on Volterra Series," Proceedings of the IASTED International Conference on Control and Applications (CA 2009), Cambridge, UK, pp. 133-139, Jul. 2009.
[9]. M. Liwschitz-Garik and C. C. Whipple, Alternating-Current Machines, Van Nostrand, 1961.
[10]. M. V. Reyes, K. Miaami and G. C. Verghese, "Recursive Speed and Parameter Estimation," Proc. IEEE-IAS Annual Meeting, pp. 607-611, 1989.
[11]. J. Lee and V. J. Mathews, "A Fast Recursive Least Squares Adaptive Second-Order Volterra Filter and Its Performance Analysis," IEEE Transactions on Signal Processing, vol. 41, no. 3, pp. 1087-1101, Mar. 1993.
[12]. R. K. Pearson et al., "Identification of Structurally Constrained Second-Order Volterra Models," IEEE Transactions on Signal Processing, vol. 44, no. 11, pp. 2837-2846, Nov. 1996.
[13]. T. I. Haweel, "Block Adaptive Volterra Filtering," International Journal of Circuit Theory and Applications, vol. 29, no. 4, pp. 389-396, 2001.
[14]. B. Weng and K. E. Barner, "Time-Varying Volterra System Identification Using Kalman Filtering," Proceedings of the 40th Annual Conference on Information Sciences and Systems, 22-24 Mar. 2006, pp. 1617-1622.
[15]. E. Seagroves, B. Walcott and D. Feinauer, "Efficient Implementation of Volterra Systems Using a Multi-Linear SVD," Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS 2007), Nov. 28 - Dec. 1, 2007, pp. 762-765.
[16]. N. Bjorsell, P. Suchanek and D. Ronnow, "Measuring Volterra Kernels of Analog-to-Digital Converters Using a Stepped Three-Tone Scan," IEEE Transactions on Instrumentation and Measurement, vol. 57, no. 4, pp. 666-671, Apr. 2008.
[17]. P. S. R. Diniz, M. L. R. de Campos and A. Antoniou, "Analysis of LMS-Newton Adaptive Filtering With Variable Convergence Factor," IEEE Transactions on Signal Processing, vol. 43, no. 3, pp. 617-627, Mar. 1995.
[18]. D. W. Novotny and T. A. Lipo, Vector Control and Dynamics of AC Drives, Clarendon Press, Oxford, 1996.
Figure 1. VNN Example

Table 1. Estimated and measured induction motor electrical parameters

Electrical Parameter   Employing VNN   Experimentally   % |error|
Rs (ohms)              0.49675         0.512            2.9 %
Ls (Henry)             0.15364         0.1503           2.1 %
Rr (ohms)              0.17954         0.174            3.1 %
σs (Henry²)            —               —                —
Lm (Henry)             0.14659         0.1437           2 %
Figure 2a. Stator phase current (A) vs. time (0 to 0.20 sec.): (—) experimental, (o) VNN output.
Figure 2b. Squared error between experimental and VNN output vs. time (0 to 0.20 sec.).
Figure 2. Agreement between measured and VNN estimated stator phase current
(Horizontal axis: per-unit speed, 0.84 to 0.98.)
Figure 3. Measured performance criteria (-- ) and convergent VNN outputs (o)
Tarek I. Haweel is a full professor in Digital Signal Processing at Assiut University in Egypt, currently on leave at the Electrical Engineering Department at Majmaah University in Saudi Arabia. His research interests include adaptive signal processing, neural networks, image processing, speech processing and applied mathematics. Prof. Haweel is an IEEE senior member. He has received two prestigious academic awards in Egypt and is the author of many original refereed articles.