EEE 541 — Control Systems
Conventional control theory relies on the key assumption of small-range operation for the linear model to be valid. When the operation range is large, a linear controller is likely to perform poorly or to be unstable, because the nonlinearities in the system cannot be properly compensated for. Another assumption of linear control is that the system is indeed linearizable. However, in control systems there are many nonlinearities whose discontinuous nature does not allow linear approximation. These nonlinearities, such as Coulomb friction, valve hysteresis, reactor dead-zones and backlash, are often found in control engineering. Their effects cannot be derived from a linear model and need a nonlinear technique.

In designing linear controllers, it is usually necessary to assume that the parameters of the system model are reasonably well known. However, many control problems involve uncertainties in the model parameters. These may range from a slow variation of the parameters (e.g., fouling of heat exchangers) to an abrupt change in parameters (e.g., the inertial parameters of a robot when a new object is grasped). A linear controller based on inaccurate or obsolete values of the model parameters may exhibit significant performance degradation or even instability. Nonlinearities can be intentionally introduced into the controller part of a control system so that model uncertainties can be tolerated. To implement high-performance control systems when the plant dynamic characteristics are poorly known, or when large and unpredictable variations occur, a class of nonlinear control systems has evolved which provides potential solutions. Four controllers designed for this purpose are robust controllers, adaptive controllers, fuzzy logic controllers and neural controllers. A brief overview of robust control is given in Chapter 10. In this chapter we deal with adaptive, fuzzy and neural controllers.

An adaptive controller is a controller that can modify its behaviour in response to changes in the dynamics of the process and the disturbances. Adaptive control can be considered a special type of nonlinear feedback control in which the states of the process can be separated into two categories that change at different rates. The slowly changing states are viewed as parameters; there is a fast time scale for the ordinary feedback and a slower one for updating the regulator parameters. One of the goals of adaptive control is to compensate for parameter variations, which may occur due to nonlinear actuators, changes in the operating conditions of the process, and non-stationary disturbances acting on the process. Some systems to be controlled have parameter uncertainty at the beginning of the control operation; for example, in robot manipulation the mass and the link lengths are the uncertain parameters. Unless such parameter uncertainty is gradually reduced on-line by an adaptation or estimation technique, it may cause inaccuracy or instability of the control system. In other systems, like a pH control system, the system dynamics may be well known at the beginning but may experience unpredictable parameter variations (e.g., due to buffering) as the control operation goes on. Without continuous redesign of the controller, the initial controller may not be able to control the changing plant well. Thus, the basic objective of an adaptive controller is to maintain a consistent performance of a system in the presence of uncertainty or unknown variation in the plant parameters.
An adaptive controller is a controller with adjustable parameters and a mechanism for adjusting the parameters. An adaptive control system can be thought of as having two loops. One loop is a normal feedback loop with the process (plant) and the controller. The other loop is a parameter adjustment loop. A block diagram of an adaptive system is shown in Fig. 16.1. The parameter adjustment loop is often slower than the normal feedback loop. (Fig. 16.1. Adaptive controller.)

There are two main approaches for designing adaptive controllers. One is known as the Model Reference Adaptive Control method, while the other is called the Self-Tuning method.

Model-Reference Adaptive Control (MRAC)
The Model-Reference Adaptive Control system is an adaptive servo system in which the desired performance is expressed in terms of a reference model, which gives the desired response to the reference signal. Such a system can be schematically represented by Fig. 16.2. (Fig. 16.2. Model Reference Adaptive Controller.)

MRAC is composed of four parts: a plant containing unknown parameters, a reference model for compactly specifying the desired output of the control system, a feedback control law containing adjustable parameters, and an adaptation mechanism for updating those parameters. The ordinary feedback loop is known as the inner loop and the parameter adjustment loop is called the outer loop.

Plant: The plant is assumed to have a known structure, although the parameters are unknown. For example, consider a plant described by

    G_p(s) = k_p / [s(τ s + 1)]        ...(16.1)

where k_p is the system gain and τ is the time constant. Here the order of the system is known (second order), but the system gain k_p and the time constant τ are not known. Similarly, consider a nonlinear system, say a single-link manipulator, whose dynamics can be given as

    m l² q̈ + m g l cos q = τ        ...(16.2)

where m is the link mass, l is the link length, q̈ is the acceleration and τ is the joint torque of the manipulator. Here the structure of the dynamics is known, but parameters such as the mass and the link length are not known (at least not fully known). In such cases, adaptive control provides the solution.

Reference Model: A reference model is used to specify the ideal response of the adaptive control system to the external command. The choice of the reference model has to satisfy two requirements:
1. It should reflect the performance specification of the control task, such as rise time, settling time, overshoot or equivalent frequency-domain characteristics.
2. The ideal behaviour specified by the reference model should be achievable by the adaptive control system.

Controller: The controller is usually parameterized by a number of adjustable parameters. This implies that there exist different sets of controller parameter values for which the desired control task is achievable. Usually the control law is linear in terms of the adjustable parameters (linear parameterization). Existing adaptive control designs normally require linear parameterization of the controller in order to obtain adaptation mechanisms with guaranteed stability and tracking convergence.

Adaptation Mechanism: The adaptation mechanism is used to adjust the parameters in the control law. In MRAC systems the adaptation law searches for parameters such that the response of the plant under adaptive control becomes the same as that of the reference model; that is, the adaptation mechanism drives the tracking error to zero. The adaptation mechanism is designed to guarantee the stability of the control system as well as convergence of the tracking error to zero.
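To make the role of the reference model concrete, the short Python sketch below (not part of the original text; the damping ratio, natural frequency and simulation settings are illustrative assumptions) simulates a second-order reference model and reports the peak and final values of its step response, i.e., the kind of rise-time/overshoot specification the model encodes.

    import numpy as np

    # Minimal sketch: a second-order reference model
    #   y_m'' + 2*zeta*wn*y_m' + wn^2*y_m = wn^2*r
    # integrated by simple Euler steps. All numbers are illustrative only.
    zeta, wn = 0.7, 2.0          # hypothetical damping ratio and natural frequency
    dt, t_end = 0.001, 6.0
    r = 1.0                      # unit step command
    ym, ym_dot = 0.0, 0.0
    history = []
    for k in range(int(t_end / dt)):
        ym_ddot = wn**2 * (r - ym) - 2.0 * zeta * wn * ym_dot
        ym_dot += dt * ym_ddot
        ym += dt * ym_dot
        history.append(ym)

    print("peak value (overshoot indicator):", max(history))
    print("final value:", history[-1])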
We will use Lyapunov theory to synthesize the adaptation mechanism, although other approaches such as hyperstability and passivity theory exist in the literature. The design of an MRAC usually involves the following three steps:
1. Choose a control law containing variable parameters.
2. Choose an adaptation law for adjusting those parameters.
3. Analyze the convergence properties of the resulting closed-loop control system.

We will formally discuss the design methodology of an MRAC using Lyapunov theory after introducing the MIT rule, an original approach to model reference adaptive control. Consider a closed-loop system in which the controller has one adjustable parameter θ. The desired closed-loop response is specified by a model whose output is y_m. Let e be the error between the output y of the closed-loop system and the output y_m of the reference model. The variable control parameter θ is adapted in such a way that the cost function

    J(θ) = (1/2) e²        ...(16.3)

is minimized. The cost function can be minimized by changing the parameter in the direction of the negative gradient of J (the gradient descent approach):

    dθ/dt = −γ e (∂e/∂θ)        ...(16.4)

Eqn. (16.4) is known as the MIT rule.

Application to a first-order control system: Consider a system described by the model

    dy/dt = −a y + b u        ...(16.5)

where u is the control variable and y is the measured output. Assume that we want to obtain a closed-loop system described by

    dy_m/dt = −a_m y_m + b_m u_c        ...(16.6)

where u_c is the command (reference) signal. Let the controller be given by

    u(t) = θ₁ u_c(t) − θ₂ y(t)        ...(16.7)

If the two controller parameters are chosen as

    θ₁ = b_m / b        ...(16.8)
    θ₂ = (a_m − a) / b        ...(16.9)

then substituting the control law (16.7) in the plant eqn. (16.5) gives the reference model eqn. (16.6). This implies that any adaptation mechanism that drives the controller parameters to the values given by eqns. (16.8, 16.9) will result in perfect model following. In the following we derive an adaptation mechanism for the controller parameters θ₁ and θ₂ using the MIT rule (16.4), assuming that the plant parameters a and b are unknown. The tracking error is given as e = y − y_m. Substituting the control law (16.7) in eqn. (16.5) and differentiating with respect to the controller parameters gives the sensitivity derivatives

    ∂e/∂θ₁ = [b / (p + a + bθ₂)] u_c
    ∂e/∂θ₂ = −[b / (p + a + bθ₂)] y

where p = d/dt is the differential operator. These formulae cannot be used directly because the process parameters a and b are not known. We therefore use the following approximation, which is exact when the parameters take their perfect-model-following values:

    p + a + bθ₂ ≈ p + a_m

With this approximation, the update laws for the controller parameters using the MIT rule (16.4) become

    dθ₁/dt = −γ [a_m / (p + a_m)] u_c · e
    dθ₂/dt =  γ [a_m / (p + a_m)] y · e

Example 16.1: In this example we illustrate MRAC for a first-order system using the MIT rule through simulation. The plant dynamics are given by dy/dt = −a y + b u, and the plant response is required to follow the reference model dy_m/dt = −a_m y_m + b_m u_c. The adaptation gain is chosen to be 1.5. The initial values of both controller parameters are chosen to be zero, and the initial conditions of the plant and the model are both zero. Two different reference signals, a constant r(t) = 2 and a sinusoid of amplitude 2, are used in the simulation.

Solution: 1. We first select the reference signal to be r(t) = 2. The tracking performance and parameter estimates obtained by implementing the MIT-rule control law are shown in Fig. 16.3. The results show that although the tracking error converges to zero, the controller parameters θ₁ and θ₂, and hence the estimated plant parameters b̂ = b_m/θ₁ and â = a_m − θ₂ b̂, do not converge to their true values. 2. We now consider the sinusoidal reference signal. The tracking performance and parameter estimates are shown in Fig. 16.5. In this case the tracking error again converges to zero, and the plant parameter estimates â and b̂ now converge to their true values. This example clearly shows that a continuously persisting (persistently exciting) input is necessary for convergence of the parameter estimation.
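The following Python sketch (not from the text) illustrates the MIT-rule update laws derived above for the first-order plant. The plant parameters a and b, the reference-model parameters a_m and b_m, and the sinusoidal command are assumed values chosen only for illustration; the adaptation gain of 1.5 matches the one quoted in Example 16.1.

    import numpy as np

    # Sketch of MIT-rule MRAC for the first-order plant dy/dt = -a*y + b*u.
    # All numerical values below are illustrative assumptions.
    a, b = 1.0, 0.5            # "unknown" plant parameters (assumed for simulation)
    am, bm = 2.0, 2.0          # reference model dym/dt = -am*ym + bm*uc
    gamma = 1.5                # adaptation gain
    dt, T = 0.001, 40.0
    y = ym = th1 = th2 = 0.0
    phi1 = phi2 = 0.0          # states of the filter am/(p + am) applied to uc and y

    for k in range(int(T / dt)):
        t = k * dt
        uc = 2.0 * np.sin(3.0 * t)          # persistently exciting command (assumed)
        u = th1 * uc - th2 * y              # adjustable control law
        e = y - ym                          # tracking error
        # filtered sensitivity signals (approximation p + a + b*th2 ~ p + am)
        phi1 += dt * (-am * phi1 + am * uc)
        phi2 += dt * (-am * phi2 + am * y)
        # MIT-rule parameter updates
        th1 += dt * (-gamma * phi1 * e)
        th2 += dt * ( gamma * phi2 * e)
        # plant and reference model integration (Euler)
        y  += dt * (-a * y + b * u)
        ym += dt * (-am * ym + bm * uc)

    print("theta1 =", round(th1, 3), " theta2 =", round(th2, 3))
    print("implied plant estimates: b_hat =", round(bm / th1, 3),
          " a_hat =", round(am - th2 * (bm / th1), 3))

With the persistently exciting sinusoidal command, the printed estimates approach the assumed true values (b = 0.5, a = 1); replacing the command with a constant reproduces the non-convergence of the parameters noted in Example 16.1.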
In the self-tuning (pole-placement) design, the controller is described by the polynomial control law R u = T r − S y (eqn. 16.78), and the closed-loop poles are assigned by solving a Diophantine equation. This Diophantine equation determines only the polynomials R and S. Other conditions must be introduced to also determine the polynomial T. One such condition is model following, where the plant response y has to follow the model:

    A_m(q) y(t) = B_m(q) r(t)        ...(16.82)

This condition can be met if

    B T / (A R + B S) = B_m / A_m        ...(16.83)

The selection of R, S and T to satisfy eqn. (16.83) can be done in many ways. We will give two ways in this chapter.

A Pure Feed-forward Solution: If we select R = B A_m, S = 0 and T = A B_m, then eqn. (16.83) is satisfied. The controller thus obtained is a pure feed-forward compensator without feedback; it simply cancels the process dynamics and adds the desired dynamics. But if the process has poles or zeros outside the unit disc, then the system will be unstable.

Error Feedback: If the polynomials S and T are equal, the control law eqn. (16.78) can be written as

    R u = S (r − y) = S e        ...(16.84)

where e is the control error. This means that the control law is based on feedback from the error only. It is easy to verify that the polynomials R = B(A_m − B_m) and S = T = A B_m satisfy eqn. (16.83), giving a feedback solution. Here also the closed-loop system will be unstable if the process has poles or zeros outside the unit disc.

Fuzzy Logic Control
Fuzzy logic has emerged as one of the active areas of research, particularly in control applications.
Fuzzy logic is a very powerful method of reasoning when mathematical models are not available and input data are imprecise. Its applications, mainly to control, are being studied throughout the world by control engineers. The results of these studies have shown that fuzzy logic is indeed a powerful control tool when it comes to controlling systems or processes which are complex. Some studies have also shown that fuzzy logic performs better when compared to conventional control mechanisms like PID. Wherever logic in the spirit of human thinking can be introduced, fuzzy logic finds ready application. By sacrificing some amount of information we get a more robust summary of the information. What does this really mean? Though we are conditioned to think in precise quantities, at a subconscious level we think and take actions that are fuzzy in nature, and that is the way we perceive nature and react to it. Before going into the details, let us look at how fuzzy logic has become a household jargon.

Though fuzzy logic originated in the U.S. some thirty years back, the researchers there were sceptical about its applicability to real-world applications, and some even scoffed it off as nothing but probability. On the other hand, the Japanese watched closely the pioneering work done by Mamdani and his associates in steam engine control and started applying fuzzy control even to consumer goods like cameras, air conditioners and vacuum cleaners. Fuzzy-based products thus became highly competitive due to better performance, high reliability, robustness, low power consumption, cheapness, etc. One of the landmark successes of fuzzy control was the complete automation of a subway train's drive control system in Japan. With fuzzy logic gaining wider acceptance in recent years, it is predicted that by the end of the decade fuzzy logic will replace much of conventional logic. Many projects which were nearly impossible earlier are now finding a new way out by fuzzifying them.

In the classical control paradigm, much stress is laid on the precision of the inputs, the intermediate steps that process them, and the modeling of the system in question. In spite of this we observe that, many a time, such sophisticated classical controllers find it difficult to perform in real-world control problems. This is because the real world is so complex that, regardless of the complexity of our model of the problem and the care taken in designing such models, there exist many parameters that have not been properly accounted for and many more of which we are totally ignorant. A fuzzy logic solution, on the other hand, is tolerant of imprecision in the inputs and in the model of the system, and still produces the output that is desired of the system. This was put more effectively by Lotfi A. Zadeh, the father of fuzzy set theory, when he said, "Most applications of fuzzy logic exploit its tolerance for imprecision. Because precision is costly, it makes sense to minimize the precision needed to perform a task." Thus the applicability of fuzzy logic is indeed promising.

The thinking process involved in the fuzzy realm is not complex; it is simple and elegant. The simplicity arises because it eludes mathematics to a great extent: a person who does not know anything about the mathematics of a crane can still operate it like an expert crane operator. The main objective of fuzzy logic is to represent and reason with some particular form of knowledge expressed in a linguistic form.
However, when using a language-oriented approach for knowledge representation, one has to build a conceptual framework to tackle its inherent vagueness. So let us start with a brief introduction to fuzzy-set concepts.

Traditional (or crisp) sets are based on a two-valued logic: objects are either members or not members. Every individual object is assigned a membership value μ of either 1 or 0 that discriminates between members and non-members of the crisp set. For example, the crisp set High (Fig. 16.12) in terms of temperature may be defined such that

    μ_High(T) = 0 if T < 30°C
    μ_High(T) = 1 if T ≥ 30°C

where T is the actual temperature and High is the linguistic variable that describes members of the set. The membership function μ_High is the degree of belongingness to the category High: if the temperature is 50°C, μ_High(50°C) = 1, and the temperature is 100% high and definitely not low. (Fig. 16.12. Fuzzy-set and crisp-set definitions of the linguistic variable High.)

Fuzzy-set theory, on the other hand, is based on multivalued logic and deals with concepts that are not sharply defined. In fuzzy logic, membership in the linguistic variable High (Fig. 16.12) may be defined as

    μ_High(T) = 0                  if 0°C ≤ T < 20°C
    μ_High(T) = (T − 20)/10        if 20°C ≤ T < 30°C
    μ_High(T) = 1                  if 30°C ≤ T ≤ 40°C

The domain of a fuzzy set is the range of allowable values of the variable. In this example the domain of the fuzzy set High is any value of T from 0°C to 40°C. The membership value μ_High goes from zero (no membership) to unity (complete membership) through intermediate values (partial membership). A temperature of 25°C has a degree of membership of (25 − 20)/10 = 0.50 in the fuzzy set High, i.e., μ_High(25°C) = 0.50.

The variable Temperature may have many fuzzy sets associated with it (for example Low, Medium, High, etc.), and the domains of all the fuzzy sets constitute what is known as the Universe of Discourse. The fuzzy sets Low, Medium and High associated with T may be defined as shown in Fig. 16.13; here the universe of discourse is the range of values from 0°C to 40°C. A temperature of 22°C belongs to the set Low to a degree of 0 (μ_Low(22°C) = 0), to the set Medium to a degree of 0.8 (μ_Medium(22°C) = 0.8) and to the set High to a degree of 0.2 (μ_High(22°C) = 0.2). As Fig. 16.13 shows, fuzzy sets always overlap to some degree, indicating that their boundaries are fuzzy or imprecise. A temperature of 22°C is both Medium and High; you might want to call it "the temperature is slightly high". A more precise statement is μ_Low(22°C) = 0, μ_Medium(22°C) = 0.8, μ_High(22°C) = 0.2. (Fig. 16.13. Overlapping sets for temperature.)

Traditional set theory and logic have always maintained two inviolate laws: the Law of Non-Contradiction and the Law of Excluded Middle. The Law of Non-Contradiction states that an element cannot belong both to a set A and to its complement Ā. Mathematically this can be written as

    A ∩ Ā = φ        ...(16.85)

where φ is the null set. The Law of Excluded Middle declares that an element must belong either to the set A or to Ā; there is no middle. Mathematically this can be written as

    A ∪ Ā = U        ...(16.86)

where U is the universal set. Fuzzy-set theory and logic flout both these laws. Hence, fuzzy logic represents a significant paradigm shift: it transcends traditional set theory and logic, and reduces them to special limiting cases.
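As a small illustration of the membership functions discussed above, the sketch below (not from the text) codes the fuzzy set High exactly as defined, together with assumed shapes for Low and Medium chosen so that the grades quoted for 22°C (0, 0.8 and 0.2) are reproduced; the true shapes in Fig. 16.13 are not legible in this copy.

    # Crisp and fuzzy memberships for the linguistic variable "High",
    # plus assumed triangular/trapezoidal sets "Low" and "Medium".
    def mu_high_crisp(T):
        return 1.0 if T >= 30.0 else 0.0

    def mu_high(T):                      # fuzzy High as defined in the text
        if T < 20.0:
            return 0.0
        if T < 30.0:
            return (T - 20.0) / 10.0
        return 1.0

    def mu_low(T):                       # assumed: full below 10 degC, zero above 20 degC
        if T <= 10.0:
            return 1.0
        if T < 20.0:
            return (20.0 - T) / 10.0
        return 0.0

    def mu_medium(T):                    # assumed triangular set centred at 20 degC
        if 10.0 < T <= 20.0:
            return (T - 10.0) / 10.0
        if 20.0 < T < 30.0:
            return (30.0 - T) / 10.0
        return 0.0

    for T in (22.0, 25.0):
        print(T, "degC -> Low:", mu_low(T), " Medium:", mu_medium(T),
              " High:", round(mu_high(T), 2), " crisp High:", mu_high_crisp(T))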
Fuzzy Set Theory and Operations
In this section we shall dwell briefly on some of the basic elements of fuzzy set theory that are required in fuzzy control applications. Consider a universe of discourse U that consists of elements which shall be denoted by x. A fuzzy set A can be defined in U as follows:

    A = {(μ_A(x), x) | x ∈ U}        ...(16.87)

where μ_A(x) denotes the degree to which an element x in U belongs to A. For this reason μ_A is known as the fuzzy membership function of A, and μ_A(x) is known as the membership grade of x in the fuzzy set A.

Set Union: The union of two fuzzy sets A and B in U is a fuzzy set A ∪ B defined as

    A ∪ B = {(μ_{A∪B}(x), x) | x ∈ U}        ...(16.88)
    μ_{A∪B}(x) = max(μ_A(x), μ_B(x))   for all x ∈ U        ...(16.89)

Set Intersection: The intersection of two fuzzy sets A and B in U is a fuzzy set A ∩ B defined as

    A ∩ B = {(μ_{A∩B}(x), x) | x ∈ U}        ...(16.90)
    μ_{A∩B}(x) = min(μ_A(x), μ_B(x))   for all x ∈ U        ...(16.91)

Set Complement: The complement of a fuzzy set A is defined as

    Ā = {(μ_Ā(x), x) | x ∈ U}        ...(16.92)
    μ_Ā(x) = 1 − μ_A(x)   for all x ∈ U        ...(16.93)

The following example illustrates the above operations. Consider the fuzzy definition of High as given in Section 16.2.2, along with the notion of low pressure (Lo-Press), defined with zero membership above 1.5 atm.

(Fig. 16.23. Design of the rule base based on the system response.)

When e is ZE and ė is PL (point D) or PS (point H), the process state y is moving away from the set point. Thus a positive change in the previous control output is intended to reverse this trend and make y, instead of moving away from the set point, start moving towards it. If e is ZE and, at the same time, ė is NL (point B) or NS (point F), y is moving away from the set point. Thus a negative change in the previous control output is intended to reverse this trend and make y, instead of moving away from the set point, start moving towards it. The prototypes of these fuzzy control rules are summarized in Table 16.2. Better control performance can be obtained by using finer fuzzy-partitioned subspaces, for example with the term set {NL, NM, NS, ZE, PS, PM, PL}.

Fuzzy inference engine
There are two types of approaches employed in the design of the inference engine of a FLC:
* Composition-based inference.
* Individual rule-based inference.

In the composition-based inference, the fuzzy max-min composition operator is employed. The rule-base matrix represents the fuzzy relation matrix R. R is first composed with e, and then the resulting fuzzy relation is composed with ė:

    O = ė ∘ (e ∘ R)        ...(16.102)

(Table 16.2. Rule Base Design: Prototype of Fuzzy Control Rules.)

The final result of the above compositions is O, from which the defuzzified output can be extracted.

Reconsider the temperature controller example discussed earlier. The rules activated were rule no. 13 and rule no. 14. Rule 13 indicates that the valve action should be zero, while rule 14 indicates that it should be positive-small. Both rules have a claim on how much the valve should change. The truth of a control rule (its weight value, the extent to which each active rule is applicable) is obtained as the minimum of its antecedent membership grades, for example

    min(0.8, 1.0) = 0.8

The basic function of the second type of inference (individual rule-based inference) is to compute the overall value of the control output variable based on the individual contributions of each rule in the rule base. Each such individual contribution represents the value of the control output variable as computed by a single rule.
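A minimal sketch (illustrative, not the text's software) of individual-rule inference for the two activated rules of the temperature-controller example follows: the antecedent grades, the +2.5% singleton consequent of rule 14, and the resulting 2% valve change are taken from the worked defuzzification example below.

    # Firing strength of a rule = min of its antecedent membership grades,
    # followed by a truth-weighted average over singleton rule outputs.
    def firing_strength(mu_e, mu_de):
        return min(mu_e, mu_de)

    rules = [
        {"name": "rule 13", "truth": firing_strength(0.2, 1.0), "output": 0.0},  # ZE (0 %)
        {"name": "rule 14", "truth": firing_strength(0.8, 1.0), "output": 2.5},  # PS (+2.5 %)
    ]

    # centre-of-gravity over singleton outputs = weighted average of rule outputs
    u = sum(r["truth"] * r["output"] for r in rules) / sum(r["truth"] for r in rules)
    print("valve change = %.1f %%" % u)   # (0*0.2 + 2.5*0.8)/(0.2 + 0.8) = 2.0 %

The antecedent grades (0.2, 1.0) and (0.8, 1.0) are illustrative; only the resulting rule truths 0.2 and 0.8 and the final 2% change come from the text.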
The output of the fuzzification module, representing the current crisp values of the process state variables, is matched to each rule antecedent, and a degree of match for each rule is established. Based on this degree of match, the value of the control output variable in the rule consequent is modified, i.e., the clipped fuzzy set representing the fuzzy value of the control output variable is determined. The set of all clipped fuzzy sets represents the overall fuzzy output.

Defuzzification module
The functions of a defuzzification module (DM) are as follows:
* It performs the so-called defuzzification, which converts the set of modified control output values into a single point-wise value.
* It performs an output denormalization, which maps the point-wise value of the control output onto its physical domain. This map is not needed if non-normalized fuzzy sets are used.

A defuzzification strategy is aimed at producing a non-fuzzy control action that best represents the possibility distribution of the inferred fuzzy control action. There are many procedures outlined in the literature for defuzzification: centre of gravity/area, centre of sums, centre of largest area, first of maxima, middle of maxima, and height. Of these, the centre of gravity (COG) is the most efficient, in that it gives a defuzzified output which conveys the real meaning of the action to be taken at that instant. The COG strategy generates the centre of gravity of the possibility distribution of the control action. In the case of a discrete universe, this method yields

    u = [ Σ_{i=1}^{m} μ_i O_i ] / [ Σ_{i=1}^{m} μ_i ]        ...(16.103)

where m is the number of quantization levels of the output, μ_i represents the truth of the i-th rule and O_i represents the action that the i-th rule would dictate.

In the temperature controller example considered in Section 16.2.4, rule no. 13 would make zero change, while rule no. 14 would make a +2.5% change (Fig. 16.22). The actual output is an average of the active rule outputs (Fig. 16.22), weighted by their corresponding truths. Thus the output is given by

    (0 × 0.2 + 2.5 × 0.8) / (0.2 + 0.8) = 2%

For this particular example, the sum of the truths of the activated rules 13 and 14 is 0.8 + 0.2 = 1.0. This is neither a necessary, nor a desirable, nor a common event; it is simply the happenstance of this example.

Case Study 2
We shall now take up the example of temperature control in two stirred tanks in series to explain the methodology of designing a fuzzy logic controller. The bench-scale process shown in Fig. 16.24 consists of two stirred tanks connected by a long pipe which introduces a time delay. The temperature of the water in the second tank, T₂, is controlled by adjusting the heat duty of the electrical heater in the first tank. An additional heater can be used to introduce load disturbances manually. The volume of water in each tank is held constant by using overflow lines. The water flow rate through the system is controlled manually and can be adjusted to introduce a second type of disturbance. (Fig. 16.24. Schematic diagram of the bench-scale plant.)

Assuming a constant flow rate of 0.05 kg/s, the dynamic model of the heating process is obtained as

    T₂(s) / q₁(s) = G(s) = 4.16 exp(−T_d s) / [(τ₁ s + 1)(τ₂ s + 1)]        ...(16.104)

where T₂ is the outlet water temperature and q₁ is the control input. The time delay T_d = 53 s is determined empirically.
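For illustration, the sketch below shows how such a delayed model can be discretized numerically with a zero-order hold, as is done in the text next. The gain of 4.16, the 53 s dead time and the 5.3 s sampling time are from the text, but the two tank time constants are not legible in this copy, so the values used here are placeholders only.

    import numpy as np
    from scipy.signal import cont2discrete

    # Delay-free part of the stirred-tank model, discretized with a ZOH at Ts = 5.3 s.
    K, tau1, tau2 = 4.16, 40.0, 10.0        # tau1, tau2: assumed placeholder values
    Ts, Td = 5.3, 53.0

    num = [K]
    den = np.polymul([tau1, 1.0], [tau2, 1.0])   # (tau1*s + 1)(tau2*s + 1)
    numd, dend, _ = cont2discrete((num, den), Ts, method='zoh')

    print("discrete numerator  :", np.round(numd.ravel(), 6))
    print("discrete denominator:", np.round(dend, 6))
    # The dead time is an exact multiple of the sampling period, so it can be
    # appended to the discrete model as extra shift (delay) states:
    print("delay in samples:", Td / Ts)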
Using the techniques of Chapter 11, the digital model is found with a zero-order hold included in the process. The sampling time is chosen to be 5.3 s. From eqn. (11.56) the pulse transfer function G(z) of the delay-free part of the process is obtained (16.105). The time-delay term exp(−T_d s) is approximated by a Padé approximation, which for a second-order approximation is given by

    exp(−T_d s) ≈ (1 − T_d s/2 + T_d² s²/12) / (1 + T_d s/2 + T_d² s²/12)        ...(16.106)

The digital model is then given in the form y(k) = [B(z)/A(z)] u(k) (16.107), with the polynomial coefficients as in (16.108). It can also be written in state-space form, following Chapter 12, as

    x(k + 1) = A x(k) + B u(k)        ...(16.109)

where A is a 12 × 12 matrix in shift-register (companion) form: its first column contains the plant coefficients (the values 1.545096 and 0.3876429 appear in the first two rows), a chain of ones above the diagonal shifts the delayed states, and the remaining entries are zero; the extra states account for the ten-sample transport delay (53 s / 5.3 s = 10). The input matrix is

    B = [0 0 0 0 0 0 0 0 0 0 0.00658785 0.008872325]ᵀ

Also x₁(k) = T₂(k) and u(k) = q₁(k).

The above system is controlled using the fuzzy logic controller designed as described in the earlier sections. The inputs to the FLC are the error {NL, NM, NS, NZ, PZ, PS, PM, PL} and the change in error {NL, NM, NS, NZ, PZ, PS, PM, PL}. The heat input q₁ is the output of the FLC and is defined by singletons. The response of the system using the FLC is compared with that of a PID controller in Fig. 16.25. (Fig. 16.25. Step input response: FLC vs PID.)

Neural Networks
Artificial neural networks have emerged from studies of how the brain performs. The human brain is made up of many millions of individual processing elements, called neurons, that are highly interconnected. A schematic diagram of a single biological neuron is shown in Fig. 16.26. Information from the outputs of other neurons, in the form of electrical pulses, is received by the cell at connections called synapses. The synapses connect to the cell inputs, or dendrites, and the single output of the neuron appears at the axon. An electrical pulse is sent down the axon when the total input stimulus from all of the dendrites exceeds a certain threshold. (Fig. 16.26. A biological neuron.)

Artificial neural networks are made up of simplified individual models of the biological neuron that are connected together to form a network. Information is stored in the network, often in the form of different connection strengths, or weights, associated with the synapses in the artificial neuron models. Some of the properties of neural networks are:
* Inherent parallelism in the network architecture due to the repeated use of simple neuron processing elements. This leads to the possibility of very fast hardware implementations of neural networks.
* Capability of learning information by example. The learning mechanism is often achieved by appropriate adjustment of the weights in the synapses of the artificial neuron models.
* Ability to generalize to new inputs (i.e., a trained network is capable of providing a sensible output when presented with input data that has not been used before).
* Robustness to the noisy data that occur in real-world applications.
* Fault tolerance. In general, network performance does not significantly degenerate if some of the network connections become faulty.
These properties of neural networks indicate their potential in solving complex, ill-defined problems. The considerable interest in neural networks that has occurred in recent years is not only due to the significant processing power they offer: their massively parallel structure makes handling large volumes of data more realizable, with the potential for ever-improving performance through dynamic learning. A network of "neuron-like" units operates on data "all at once" rather than "step by step" as in a conventional computer. Neural networks can be viewed as large-dimensional nonlinear dynamical systems, defined by a set of first-order nonlinear differential equations.

A neural network is a system with inputs and outputs and is composed of many simple and similar processing elements. The most commonly used model is depicted in Fig. 16.27 and is based on the model proposed by McCulloch and Pitts in 1943. Each neuron input x_i is weighted by a value w_i. A bias, or offset, in the node is characterized by an additional constant input of 1 weighted by the value w_b. The output y is obtained by summing the weighted inputs and passing the result through a non-linear activation function. Various non-linearities are possible, and some of these, e.g., hard limiter, threshold logic, sigmoid and tanh functions, are shown in Fig. 16.28.

The processing elements each have a number of internal parameters called weights. Changing the weights of an element alters the behaviour of the whole network. The goal is to adjust the weights of the network to achieve a desired input/output relationship; this process is known as training the network. The network can be considered memoryless in the sense that, if the weights are kept constant, the output vector depends only on the current input vector and is independent of past inputs. Such a network may also be called a neural computer.

(Fig. 16.28. Activation functions. Fig. 16.29. General control system.)

Neural network controller
Plants such as dynamical systems in manufacturing and process industries and other areas, including medical applications, are typically inaccurately modeled, time varying, and subject to disturbances, but with moderately slow response times in relation to modern computer processing speeds. The niche of interest to artificial neural networks comprises those processes which also involve a degree of non-linearity, which is the main limitation of the otherwise successful conventional control methods, especially where the non-linearity is of unknown structure or is very severe.

A neural controller performs a specific form of adaptive control, with the controller taking the form of a nonlinear multilayer network and the adaptable parameters being the strengths of the interconnections between the neurons. In summary, a controller that is designed around a neural network architecture should exhibit three important characteristics: the utilization of large amounts of sensory information, collective processing capability, and adaptation.

A general diagram for such a control system is shown in Fig. 16.29. The feedback and feed-forward controllers and the prefilter can all be implemented as multilayered neural networks. The learning process gradually tunes the weights of the neural network so that the error signal between the desired and actual plant response is minimized.
Since the error signal is the input to the feedback controller, the training of the network will lead to a gradual switching from feedback to feed-forward action as the error signal becomes small. In this text, only the implementation of the feed-forward controller as a neural network is considered. An immediate consequence of the increased use of feed-forward control action is to speed up the response of the system. In the following sections four control learning methods are explained and the error back-propagation algorithm is described, which is the method used here to adapt the weights in the neural network that is used as the controller. Simulations for a very simple plant, to demonstrate the operation of the control learning methods, are presented.

The most commonly used neural network architecture is the multi-layer perceptron. The network architecture consists of an input layer, a hidden layer and an output layer, as shown in Fig. 16.30. The output and hidden layers are made up of a number of nodes as described in Section 16.3.2. The input layer, however, is essentially a direct link to the inputs of the hidden layer and is included by convention. Sigmoidal activation functions for the nodes in the hidden and output layers are the most common choice, although variants on this are also possible. The outputs of each node in a layer are connected to the inputs of all the nodes in the subsequent layer. Data flows through the network in one direction only, from input to output; hence, this type of network is called a feed-forward network.

The network is trained in a supervised fashion. This means that during training both the network inputs and target outputs are used. A number of algorithms have been proposed for training the MLN, and the most popular is the back-propagation algorithm, which will be described in the next section. Briefly, with this algorithm a set of input and corresponding output data that the network is required to learn is collected. An input pattern is applied to the network and an output is generated. This output is compared to the corresponding target output and an error is produced. The error is then propagated back through the network, from output to input, and the network weights are adjusted in such a way as to minimize a cost function, typically the sum of the squared errors. The procedure is repeated for all the data in the training set, and numerous passes of the complete training data set are usually necessary before the cost function is reduced to a sufficiently small value.

An important feature of the MLN is that it can accurately represent almost any non-linear function relating the inputs and the outputs. Hence the MLN finds many applications, including the modeling and control of real non-linear processes.

Back-Propagation Algorithm
The back-propagation algorithm is central to much current work on learning in neural networks. The algorithm gives a prescription for changing the weights W_ij in any feed-forward network so as to learn a training set of input-output pairs. Since most control applications use a two-layer MLN, we provide the back-propagation algorithm for this case. The network is shown in Fig. 16.30. x is the n × 1 input vector and y is the m × 1 output vector. The hidden layer consists of h computational units. W_ij represents a typical connection weight between the hidden layer and the output layer, while W_jk represents a typical connection weight between the input layer and the hidden layer. (Fig. 16.30. A two-layered network.)
Forward Propagation
The forward response of such a network is given as follows. The input of the j-th hidden unit is

    S_j = Σ_{k=1}^{n} W_jk x_k        ...(16.110)

The output of the j-th hidden unit is

    y_j = f(S_j)        ...(16.111)

where f is the squashing function, generally taken as the sigmoidal activation

    f(s) = 1 / (1 + e^(−s))        ...(16.112)

Finally, the input to the i-th output unit is

    S_i = Σ_{j=1}^{h} W_ij y_j        ...(16.113)

and the output y_i is given as

    y_i = f(S_i)        ...(16.114)

Backward Propagation
The instantaneous error back-propagation algorithm is derived following the simple gradient descent principle, where a typical weight W_pq is updated as

    W_pq(t + 1) = W_pq(t) − η ∂E/∂W_pq        ...(16.115)
The hidden neurons have a 1 sigmoid transfer function, Ax) Initial weights were selected randomly in the range 1+e*)" 0.0 — 0.01, d'was taken to be 1.0 and the:cases the function of the plant chosen was yk +) u(h) 14970)" Indirect Learning Epochs (xX 10000) Fig. 16.32, Error. CONTROL’ SYSTEMS ENGINEERING | ‘The plots of the error and the desired and predicted values are given in Fi ‘ rand given in Figs, 16.32 3 respectively, From this itis observed that the predicted value reaches the desired atu 16.3 it beconies stagnant after thousarids of epochs. slowly and also General Learning Architecture ‘The architecture is shown in Fig. 16.34, which provides a method for training the neural controller that does minimize the overall error. The training sequence is as follow : A plant inpat w is selected and applied to the plant to,obtain a corresponding y, and the network is trained to reproduce w at its output from y. The trained network will be able to'take a desired response d and produce the appropriate u, making the actual plant output y approach d. This will work if the d happens to be sufficiently close to one of the y’s that were used during the training phase. Thus, the success of this method is intimately tied to the ability of the neural network to generalize or learn to respond correctly to inputs it has not specifically been trained for. One of the drawbacks of this method is that the system cannot be selectively trained to respond in regions of interest because which plant inputs u correspond to the desired outputs Fis not known, Thus the input space of the plant is uniformly populated with training samples, so that the network can interpolate the intermediate points. Indirect Learning 6080 +100 120° 140 * Epochs (X 10000) Fig. 16.33. Prediction. “he Fig. 16.34. General leaming architecture ADVANCES IN CONTROL SYSTEMS Example 16.7 : Reconsider the system of example 16.6. From the Figs. 16.35 and 16. observed that the error reaches 0, but still the desired output is not obtained. Since with this training method, we cannot place the training samples in the regions of interest, we cannot guarantee what the error will be in these regions. General learning 0.09 a 0.08 0.07 0.06 0.05 0.04 0.03 0.02 0.01 °9 2 40 60 80 100 120 Epochs (x 2000) Fig. 16.35. Error. ‘ : General learning Epochs (x 2000) Fig. 16.36, Prediction, ; ae case, the general procedure may not be efficient since the network may have to lear the response of the plant over a large operational range than is actually necessary. One ta we is to combine the general method with the specialized procedure 0. CONTROL SYSTEMS ENGINEERING d Learning Architecture specialize re 637 shows this architecture which is used for tra yin regions of specialization only. The network is trained using desired response d, as eo tne network to find the plant input, u that drives the system output, y tothe desired d, This is accomplished by using the error between the desired and actual responses of the plant to adjust the weights of the network using steepest descent procedure. During each iteration the weights are adjusted to maximally decrease the error. This architecture can specifically learn jn the region of interest, and it may be trained on-line, fine-tuning itself while actually performing useful work. The general learning architecture, on the other hand, -dynamical and therefore, input- rust be trained offline. Feed-forward neural networks are non off-line training of the neural network presents 00 stability problem output stable. 
Consequently, for the control systems. ag architecture. again co siem of example 16.6. The plots of the error desired ant vedicted values are given ii: igs: 16:38 and 16.39 respectively. First the = ie is trained ae general learning architecture ‘and then the specialized learning is To This is also called hybrid learning « tt due to this there is a definite advantage. General ape create better initial weights for ‘specialized training. Thus ‘ag ean speed the learning process by: reducing the number of iterations of the ensuing speci: antage of preliminary general learning iereat it may result in networks that can adapt more easily if the operating points of the system change or new ones are added. awe Forward Inverse Learning Architecture ‘his architecture is shown in Fig 1640 for training the neural controller on-line. Training involves using the desired response d, as input to the network. But here the neural network model NN1 is frst trained as the plant ising the input/output sequence of the plant. Then using another neural network NN2, with input as desired response d, output u is obtained that drives the model to y =d. This architecture is trained using error back propagation. The error found ie,, ¢, is back propagated in NN1 and also the error of in NN2. Because of initial a ra network mode, this will have a tondeney to create better initial weights See hus the learning process is speeded up by this training. Another advantage eel ing is that this may result in networks that can adopt more easily if the points of the system change or new ones are added. ing the neural controller to operate" [ABVANCES IN CONTROL SYSTEMS aT Specialized Learning 0.06 + TF r Squared error 3 oot a pocns (2000) Fig; 16.38. Error. seca Learing : 1008 =— sons | 10% i 3 1002s rE z 1.002) B 1.0015 © 1.001 10005 | | 0 5 io is Epochs (X 10000) 16.39. Prediction. Fig. 16.40, Forward inverse arcbitectre, CONTROL SYSTEMS ENGINEERING CE Pela ss 9 Reconsider the system described in exampl i mpl 16: example 16.6. The Figs. 16.41 and 2 io that though the error values first vary, eventually it reaches zeros and #0 the peat 298 Ue actos the desited value. This proces more nceurate results and it in also Loss ioe consuming. conclusions Neural networks were used for controlling physical systems. Specifically, four different methods for using error back propagation to train a feed-forward neural network controller to act as the inverse of the plant. The general learning method attempts to produce the inverse of the plant ver the entire state space, but it can be very difficult to use it alone to provide adequate cctical control application. In order to circumvent this problem, error was the network exactly on the operational performance in a pra hack-propagated through the plant, which allows to train ‘also explained to gain the advantages ial disadvantages. Forward inverse range of the plant. Also the method of hybrid learning was of the two different methods and to avoid their potenti very accurate learning architecture. Jearning was explored and was found that it was a 16:1. Consider the process + Determine a controller that can give the closed loop system where a is an unknown paramete! os. OF sea)" 06 05 04 Err Nanda Epochs (x 100) Fig. 16.41, Error
