Control Systems Theory
Wikibooks.org
On the 28th of April 2012 the contents of the English as well as German Wikibooks and Wikipedia projects were licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license. A URI to this license is given in the list of figures. If this document is a derived work from the contents of one of these projects and the content was still licensed by the project under this license at the time of derivation, this document has to be licensed under the same, a similar or a compatible license, as stated in section 4b of the license. The list of contributors is included in the chapter Contributors. The licenses GPL, LGPL and GFDL are included in the chapter Licenses, since this book and/or parts of it may or may not be licensed under one or more of these licenses, and thus require inclusion of these licenses. The licenses of the figures are given in the list of figures. This PDF was generated by the LaTeX typesetting software. The LaTeX source code is included as an attachment (source.7z.txt) in this PDF file. To extract the source from the PDF file, we recommend the use of the http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ utility, or clicking the paper clip attachment symbol on the lower left of your PDF viewer and selecting Save Attachment. After extracting it from the PDF file you have to rename it to source.7z. To uncompress the resulting archive we recommend the use of http://www.7-zip.org/. The LaTeX source itself was generated by a program written by Dirk Hünniger, which is freely available under an open source license from http://de.wikibooks.org/wiki/Benutzer:Dirk_Huenniger/wb2pdf. This distribution also contains a configured version of the pdflatex compiler with all necessary packages and fonts needed to compile the LaTeX source included in this PDF file.
Contents

1 Preface
2 Introduction (This Wikibook; What are Control Systems?; Classical and Modern; Who is This Book For?; What are the Prerequisites?; How is this Book Organized?; Differential Equations Review; History; Branches of Control Engineering; MATLAB; About Formatting)
3 System Identification (Systems; System Properties; Initial Time; Additivity; Homogeneity; Linearity; Memory; Causality; Time-Invariance; LTI Systems; Lumpedness; Relaxed; Stability; Inputs and Outputs)
4 Digital and Analog (Digital and Analog; Analog; Digital; Hybrid Systems; Continuous and Discrete; Sampling and Reconstruction)
5 System Metrics (System Metrics; Standard Inputs; Steady State; Target Value; Rise Time; Percent Overshoot; Steady-State Error; Settling Time; System Order; System Type; Visually)
6 System Modeling (The Control Process; External Description; Internal Description; Complex Descriptions; Representations; Analysis; Modeling Examples; Manufacture)
7 Transforms (Transforms; Laplace Transform; Fourier Transform; Complex Plane; Euler's Formula; MATLAB; Further Reading)
8 Transfer Functions (Transfer Functions; Impulse Response; Convolution; Convolution Theorem; Using the Transfer Function; Frequency Response)
9 Sampled Data Systems (Ideal Sampler; Geometric Series; The Star Transform; The Z-Transform; Star ↔ Z; Z ↔ Fourier; Reconstruction; Further Reading)
10 System Delays (Delays; Time Shifts; Delays and Stability; Delay Margin; Transform-Domain Delays; Modified Z-Transform)
11 Poles and Zeros (Poles and Zeros; Time-Domain Relationships; What are Poles and Zeros; Effects of Poles and Zeros; Second-Order Systems; Higher-Order Systems)
12 State-Space Equations (Time-Domain Approach; State-Space; State Variables; Multi-Input, Multi-Output; State-Space Equations; Obtaining the State-Space Equations; State-Space Representation; Discretization; Note on Notations; MATLAB Representation)
13 Solutions for Linear Systems (State Equation Solutions; Solving for x(t) With Zero Input; Solving for x(t) With Non-Zero Input; State-Transition Matrix)
14 Time-Variant System Solutions (General Time Variant Solution; Time-Variant, Zero Input; Time-Variant, Non-zero Input)
15 Digital State-Space (Digital Systems; Relating Continuous and Discrete Systems; Converting Difference Equations; Solving for x[n]; Time Variant Solutions; MATLAB Calculations)
16 Eigenvalues and Eigenvectors (Eigenvalues and Eigenvectors; Characteristic Equation; Exponential Matrix Decomposition; Non-Unique Eigenvalues; Equivalence Transformations)
17 MIMO Systems (Multi-Input, Multi-Output; State-Space Representation; Transfer Function Matrix; Discrete MIMO Systems)
18 System Realization (Realization; Realization Conditions; Realizing the Transfer Matrix)
19 Gain (What is Gain?; Responses to Gain; Gain and Stability)
20 Block Diagrams (Systems in Series; Systems in Parallel; State Space Model; Adders and Multipliers; Simplifying Block Diagrams; External Sites)
21 Feedback Loops (Feedback; Basic Feedback Structure; Negative vs Positive Feedback; Feedback Loop Transfer Function; Open Loop vs Closed Loop; Placement of a Controller; Second-Order Systems; System Sensitivity)
22 Signal Flow Diagrams (Signal Flow Diagrams; Mason's Gain Formula)
23 Bode Plots (Bode Plots; Bode Gain Plots; Bode Phase Plots; Bode Procedure; Examples; Further Reading)
24 Nichols Charts (Nichols Charts)
25 Stability (Stability; BIBO Stability; Determining BIBO Stability; Poles and Stability; Poles and Eigenvalues; Transfer Functions Revisited; State-Space and Stability; Marginal Stability)
26 Introduction to Digital Controls (Discrete-Time Stability; Input-Output Stability; Stability of Transfer Function; Lyapunov Stability; Poles and Eigenvalues; Finite Wordlengths)
27 Routh-Hurwitz Criterion (Stability Criteria; Routh-Hurwitz Criteria)
28 Jury's Test (Routh-Hurwitz in Digital Systems; Jury's Test; Further Reading)
29 Root Locus (The Problem; Root-Locus; The Root-Locus Procedure; Root Locus Rules; Root Locus Equations; Root Locus and Stability; Examples)
30 Nyquist Criterion (Nyquist Stability Criteria; Contours; Argument Principle; The Nyquist Contour; Nyquist Criteria; Nyquist ↔ Bode; Nyquist in the Z Domain)
31 State-Space Stability (State-Space Stability; Eigenvalues and Poles; Impulse Response Matrix; Positive Definiteness; Lyapunov Stability)
32 Controllability and Observability (System Interaction; Controllability; Observability; Duality Principle)
33 System Specifications (System Specification; Steady-State Accuracy; Sensitivity; Disturbance Rejection; Control Effort)
34 Controllers and Compensators (Controllers; Proportional Controllers; Derivative Controllers; Integral Controllers; PID Controllers; Bang-Bang Controllers; Compensation; Phase Compensation; Phase Lead; Phase Lag; Phase Lead-Lag; External Sites)
35 Nonlinear Systems (Nonlinear General Solution; Linearization)
36 Common Nonlinearities (Hysteresis; Backlash; Dead-Zone; Inverse Nonlinearities)
37 Noise Driven Systems (Noise-Driven Systems; Probability Refresher; Noise-Driven System Description; Mean System Response; System Covariance; Alternate Analysis)
38 Appendix: Physical Models (Physical Models; Electrical Systems; Mechanical Systems; Civil/Construction Systems; Chemical Systems)
39 Appendix: Z Transform Mappings (Z Transform Mappings; Bilinear Transform; Matched Z-Transform; Simpson's Rule; (w, v) Transform; Z-Forms)
40 Appendix: Transforms (Laplace Transform; Fourier Transform; Z-Transform; Modified Z-Transform; Star Transform; Bilinear Transform; Wikipedia Resources)
41 System Representations (System Representations; General Description; State-Space Equations; Transfer Functions; Transfer Matrix)
42 Matrix Operations (Laws of Matrix Algebra; Transpose Matrix; Determinant; Inverse; Eigenvalues; Eigenvectors; Left-Eigenvectors; Generalized Eigenvectors; Transformation Matrix; MATLAB)
43 Appendix: MATLAB (MATLAB; Step Response; Classical ↔ Modern; z-Domain Digital Filters; State-Space Digital Filters; Root Locus Plots; Digital Root-Locus; Bode Plots; Nyquist Plots; Lyapunov Equations; Controllability; Observability; Further Reading)
44 Glossary and List of Equations (A, B, C; D, E, F; G, H, I; J, K, L; M, N, O; P, Q, R; S, T, U, V; W, X, Y, Z)
45 List of Equations (Fundamental Equations; Basic Inputs; Error Constants; System Descriptions; Feedback Loops; Transforms; Transform Theorems; State-Space Methods; Root Locus; Lyapunov Stability; Controllers and Compensators)
46 Resources and Further Reading (Wikibooks; Wikiversity; Wikipedia; Software; External Publications; External Resources)
47 Contributors
48 Licenses (GNU General Public License; GNU Free Documentation License; GNU Lesser General Public License)
1 Preface
This book will discuss the topic of Control Systems, which is an interdisciplinary engineering topic. Methods considered here will consist of both "Classical" control methods and "Modern" control methods. Also, discretely sampled systems (digital/computer systems) will be considered in parallel with the more common analog methods. This book will not focus on any single engineering discipline (electrical, mechanical, chemical, etc.), although readers should have a solid foundation in the fundamentals of at least one discipline.

This book will require prior knowledge of linear algebra, integral and differential calculus, and at least some exposure to ordinary differential equations. In addition, a prior knowledge of integral transforms, specifically the Laplace and Z transforms, will be very beneficial. Also, prior knowledge of the Fourier Transform will shed more light on certain subjects. Wikibooks with information on calculus topics or transformation topics required for this book are listed below:

Calculus
Linear Algebra
Signals and Systems
Digital Signal Processing
2 Introduction
2.1 This Wikibook
This book was written at Wikibooks, a free online community where people write open-content textbooks. Any person with internet access is welcome to participate in the creation and improvement of this book. Because this book is continuously evolving, there are no finite "versions" or "editions" of this book. Permanent links to known good versions of the pages may be provided. All other works that reference or cite this book should include a link to the project page at Wikibooks: http://en.wikibooks.org/wiki/Control_Systems. Printed or other distributed versions of this book should likewise contain that link. All text contributions are the property of the respective contributors, and this book may only be used under the terms of the GFDL.
accelerates faster. Then we can reduce the supply back down to 10 volts once it reaches ideal speed. This is clearly a simplistic example, but it illustrates an important point: we can add special "Controller units" to preexisting systems, to improve performance and meet new system specifications.

Here are some formal definitions of terms used throughout this book:

Control System: A Control System is a device, or a collection of devices, that manage the behavior of other devices. Some devices are not controllable. A control system is an interconnection of components connected or related in such a manner as to command, direct, or regulate itself or another system.

Controller: A controller is a control system that manages the behavior of another device or system.

Compensator: A Compensator is a control system that regulates another system, usually by conditioning the input or the output to that system. Compensators are typically employed to correct a single design flaw, with the intention of affecting other aspects of the design in a minimal manner.

There are essentially two methods to approach the problem of designing a new control system: the Classical Approach, and the Modern Approach.
Modern Control Methods, instead of changing domains to avoid the complexities of time-domain ODE mathematics, convert the differential equations into a system of lower-order time-domain equations called State Equations, which can then be manipulated using techniques from linear algebra. This book will consider Modern Methods second.

A third distinction that is frequently made in the realm of control systems is to divide analog methods (classical and modern, described above) from digital methods. Digital Control Methods were designed to try and incorporate the emerging power of computer systems into previous control methodologies. A special transform, known as the Z-Transform, was developed that can adequately describe digital systems, but at the same time can be converted (with some effort) into the Laplace domain. Once in the Laplace domain, the digital system can be manipulated and analyzed in a very similar manner to Classical analog systems. For this reason, this book will not make a hard and fast distinction between Analog and Digital systems, and instead will attempt to study both paradigms in parallel.
Algebra

Calculus: The reader should have a good understanding of differentiation and integration. Partial differentiation, multiple integration, and functions of multiple variables will be used occasionally, but students are not necessarily required to know those subjects well. These advanced calculus topics could better be treated as a co-requisite instead of a pre-requisite.

Linear Algebra: State-space system representation draws heavily on linear algebra techniques. Students should know how to operate on matrices, and should understand basic matrix operations (addition, multiplication, determinant, inverse, transpose). Students would also benefit from a prior understanding of Eigenvalues and Eigenvectors, but those subjects are covered in this text.

Ordinary Differential Equations: All linear systems can be described by a linear ordinary differential equation. It is beneficial, therefore, for students to understand these equations. Much of this book describes methods to analyze these equations. Students should know what a differential equation is, and they should also know how to find the general solutions of first and second order ODEs.

Engineering Analysis: The Engineering Analysis book reinforces many of the advanced mathematical concepts used here, and this book will refer to its relevant sections for further information on some subjects. It is essentially a math book, but with a focus on various engineering applications. It relies on a previous knowledge of the other math books in this list.

Signals and Systems: The Signals and Systems book will provide a basis in the field of systems theory, of which control systems is a subset. Readers who have not read the Signals and Systems book will be at a severe disadvantage when reading this book.
Section 2 will contain a brief primer on digital information, for students who are not necessarily familiar with it. This is done so that digital and analog signals can be considered in parallel throughout the rest of the book. Next, this book will introduce the state-space method of system description and control. After section 3, topics in the book will use state-space and transform methods interchangeably (and occasionally simultaneously). It is important, therefore, that these three chapters be well read and understood before venturing into the later parts of the book.

After the "basic" sections of the book, we will delve into specific methods of analyzing and designing control systems. First we will discuss Laplace-domain stability analysis techniques (Routh-Hurwitz, root-locus), and then frequency methods (Nyquist Criteria, Bode Plots). After the classical methods are discussed, this book will then discuss Modern methods of stability analysis. Finally, a number of advanced topics will be touched upon, depending on the knowledge level of the various contributors.

As the subject matter of this book expands, so too will the prerequisites. For instance, when this book is expanded to cover nonlinear systems, a basic background knowledge of nonlinear mathematics will be required.
2.6.1 Versions
This wikibook has been expanded to include multiple versions of its text, differentiated by the material covered and the order in which the material is presented. Each different version is composed of the chapters of this book, included in a different order. This book covers a wide range of information, so if you don't need all the information that this book has to offer, perhaps one of the other versions would be right for you and your educational needs. Each separate version has a table of contents outlining the different chapters that are included in that version. Also, each separate version comes complete with a printable version, and some even come with PDF versions as well. Take a look at the All Versions Listing Page to find the version of the book that is right for you and your needs.
http://en.wikibooks.org/wiki/Control%20Systems%2FAll%20Versions
dP/dt = rP

where dP/dt is the interest (the rate of change of the principal), and r is the interest rate. Notice in this case that P is a function of time (t), and can be rewritten to reflect that:
dP(t)/dt = rP(t)

To solve this basic, first-order equation, we can use a technique called "separation of variables", where we move all instances of the letter P to one side, and all instances of t to the other:
dP(t)/P(t) = r dt

And integrating both sides gives us:

ln |P(t)| = rt + C

This is all fine and good, but generally we like to get rid of the logarithm by raising both sides to a power of e:

P(t) = e^(rt + C)

Where we can separate out the constant as such:

D = e^C
P(t) = D e^(rt)

D is a constant that represents the initial conditions of the system, in this case the starting principal.

Differential equations are particularly difficult to manipulate, especially once we get to higher orders of equations. Luckily, several methods of abstraction have been created that allow us to work with ODEs, but at the same time not have to worry about the complexities of them. The classical method, as described above, uses the Laplace, Fourier, and Z Transforms to convert ODEs in the time domain into polynomials in a complex domain. These complex polynomials are significantly easier to solve than the ODE counterparts. The modern method instead breaks differential equations into systems of low-order equations, and expresses this system in terms of matrices. It is a common precept in ODE theory that an ODE of order N can be broken down into N equations of order 1.

Readers who are unfamiliar with differential equations might be able to read and understand the material in this book reasonably well. However, all readers are encouraged to read the related sections in Calculus.
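As a quick numerical check of the solution above, the following MATLAB/Octave sketch integrates dP/dt = rP with ode45 and compares the result against the closed-form answer P(t) = D·e^(rt). The interest rate r = 0.05 and starting principal D = 1000 are arbitrary values chosen only for illustration.

% Compound interest ODE: dP/dt = r*P, with P(0) = D
r = 0.05;            % interest rate (arbitrary example value)
D = 1000;            % starting principal (arbitrary example value)
tspan = [0 20];      % integrate over 20 time units

% Numerical solution of the ODE
[t, P_num] = ode45(@(t, P) r*P, tspan, D);

% Closed-form solution P(t) = D*exp(r*t)
P_exact = D*exp(r*t);

% The two should agree to within the solver tolerance
max_error = max(abs(P_num - P_exact))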
2.8 History
The field of control systems started essentially in the ancient world. Early civilizations, notably the Greeks and the Arabs, were heavily preoccupied with the accurate measurement of time, the result of which were several "water clocks" that were designed and implemented.
However, there was very little in the way of actual progress made in the field of engineering until the beginning of the Renaissance in Europe. Leonhard Euler (for whom Euler's Formula is named) discovered a powerful integral transform, but Pierre-Simon Laplace used the transform (later called the Laplace Transform) to solve complex problems in probability theory. Joseph Fourier was a court mathematician in France under Napoleon I. He created a special function decomposition called the Fourier Series, that was later generalized into an integral transform, and named in his honor (the Fourier Transform).
The "golden age" of control engineering occurred between 1910-1945, where mass communication methods were being created and two world wars were being fought. During this period, some of the most famous names in controls engineering were doing their work: Nyquist and Bode. Hendrik Wade Bode and Harry Nyquist, especially in the 1930 s while working with Bell Laboratories, created the bulk of what we now call "Classical Control Methods". These methods were based o the results of the Laplace and Fourier Transforms, which had been previously known, but were made popular by Oliver Heaviside around the turn of the century. Previous to Heaviside, the transforms were not widely used, nor respected mathematical tools. Bode is credited with the "discovery" of the closed-loop feedback system, and the logarithmic plotting technique that still bears his name (bode plots). Harry Nyquist did extensive research in the eld of system stability and information theory. He created a powerful stability criteria that has been named for him (The Nyquist Criteria). Modern control methods were introduced in the early 1950 s, as a way to bypass some of the shortcomings of the classical methods. Rudolf Kalman is famous for his work in modern control theory, and an adaptive controller called the Kalman Filter was named in his
11
Introduction honor. Modern control methods became increasingly popular after 1957 with the invention of the computer, and the start of the space program. Computers created the need for digital control methodologies, and the space program required the creation of some "advanced" control techniques, such as "optimal control", "robust control", and "nonlinear control". These last subjects, and several more, are still active areas of study among research engineers.
12
2.10 MATLAB
Information about using MATLAB for control systems can be found in the Appendix.

MATLAB is a programming tool that is commonly used in the field of control engineering. We will discuss MATLAB in specific sections of this book devoted to that purpose. MATLAB will not appear in discussions outside these specific sections, although MATLAB may be used in some example problems. An overview of the use of MATLAB in control engineering can be found in the appendix at: Control Systems/MATLAB. For more information on MATLAB in general, see: MATLAB Programming. For more information about properly referencing MATLAB, see: Resources.

Nearly all textbooks on the subject of control systems, linear systems, and system analysis will use MATLAB as an integral part of the text. Students who are learning this subject at an accredited university will certainly have seen this material in their textbooks, and are likely to have had MATLAB work as part of their classes. It is from this perspective that the MATLAB appendix is written. In the future, this book may be expanded to include information on Simulink, as well as MATLAB. There are a number of other software tools that are useful in the analysis and design of control systems. Additional information can be added in the appendix of this book, depending on the experience and prior knowledge of contributors.
Inverse Laplace Transform
x(t) = (1/(2πj)) ∫ F(s) e^(st) ds
Equations that are named in this manner will also be copied into the List of Equations Glossary at the end of the book, for easy reference. Italics will be used for English variables, functions, and equations that appear in the main text. For example e, j, f(t) and X(s) are all italicized. Wikibooks contains a LaTeX mathematics formatting engine, although an attempt will be made not to employ formatted mathematical equations inline with other text because of the difference in size and font. Greek letters and other non-English characters will not be italicized in the text unless they appear in the midst of multiple variables which are italicized (as a convenience to the editor). Scalar time-domain functions and variables will be denoted with lower-case letters, along with a t in parentheses, such as: x(t), y(t), and h(t). Discrete-time functions will be written in a similar manner, except with an [n] instead of a (t). Fourier, Laplace, Z, and Star transformed functions will be denoted with capital letters followed by the appropriate variable in parentheses. For example: F(s), X(jω), Y(z), and F*(s). Matrices will be denoted with capital letters. Matrices which are functions of time will be denoted with a capital letter followed by a t in parentheses. For example: A(t) is a matrix, a(t) is a scalar function of time. Transforms of time-variant matrices will be displayed in uppercase bold letters, such as H(s). Math equations rendered using LaTeX will appear on separate lines, and will be indented from the rest of the text.
3 System Identification
3.1 Systems
Systems, in one sense, are devices that take input and produce an output. A system can be thought to operate on the input to produce the output. The output is related to the input by a certain relationship known as the system response. The system response usually can be modeled with a mathematical relationship between the system input and the system output.
Techniques such as the Laplace Transform require that the initial time of the system be zero. The initial time of a system is typically denoted by t0. The value of any variable at the initial time t0 will be denoted with a 0 subscript. For instance, the value of variable x at time t0 is given by:

x(t0) = x0

Likewise, any time t with a positive subscript is a point in time after t0, in ascending order:

t0 < t1 < t2 < ··· < tn

So t1 occurs after t0, and t2 occurs after both points. In a similar fashion as above, a variable with a positive subscript (unless it is specifying an index into a vector) also occurs at that point in time:

x(t1) = x1
3.4 Additivity
A system satisfies the property of additivity if a sum of inputs results in a sum of outputs. By definition: an input of x3(t) = x1(t) + x2(t) results in an output of y3(t) = y1(t) + y2(t). To determine whether a system is additive, use the following test: Given a system f that takes an input x and outputs a value y, assume two inputs (x1 and x2) produce two outputs: y1 = f(x1)
y2 = f (x2 ) Now, create a composite input that is the sum of the previous inputs: x3 = x1 + x2 Then the system is additive if the following equation is true: y3 = f (x3 ) = f (x1 + x2 ) = f (x1 ) + f (x2 ) = y1 + y2
Systems that satisfy this property are called additive. Additive systems are useful because a sum of simple inputs can be used to analyze the system response to a more complex input.
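As an illustration, the short MATLAB sketch below applies the additivity test numerically to two memoryless example systems: f(x) = 3x, which is additive, and g(x) = x², which is not. Both functions are hypothetical examples chosen only to demonstrate the test; a single numeric test can disprove additivity but cannot prove it in general.

% Additivity test on two example (memoryless) systems
f = @(x) 3*x;      % candidate 1: additive
g = @(x) x.^2;     % candidate 2: not additive

x1 = 2; x2 = 5;    % two arbitrary test inputs
x3 = x1 + x2;      % composite input

% f passes: f(x1 + x2) equals f(x1) + f(x2)
additive_f = abs(f(x3) - (f(x1) + f(x2))) < 1e-12    % returns 1 (true)

% g fails: g(x1 + x2) differs from g(x1) + g(x2)
additive_g = abs(g(x3) - (g(x1) + g(x2))) < 1e-12    % returns 0 (false)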
3.5 Homogeneity
A system satisfies the condition of homogeneity if an input scaled by a certain factor produces an output scaled by that same factor. By definition: an input of ax1 results in an output of ay1. In other words, to see if function f() is homogeneous, perform the following test: Stimulate the system f with an arbitrary input x to produce an output y:

y = f(x)

Now, create a second input x1, scale it by a multiplicative factor C (C is an arbitrary constant value), and produce a corresponding output y1:

y1 = f(Cx1)

Now, assign x to be equal to x1:

x1 = x

Then, for the system to be homogeneous, the following equation must be true:

y1 = f(Cx) = Cf(x) = Cy

Systems that are homogeneous are useful in many applications, especially applications with gain or amplification.
3.6 Linearity
A system is considered linear if it satisfies the conditions of Additivity and Homogeneity. In short, a system is linear if the following is true: Take two arbitrary inputs, and produce two arbitrary outputs: y1 = f(x1)
y2 = f(x2) Now, a linear combination of the inputs should produce a linear combination of the outputs: f(Ax + By) = f(Ax) + f(By) = Af(x) + Bf(y) This condition of additivity and homogeneity is called superposition. A system is linear if it satisfies the condition of superposition.
Example: consider the system described by the differential equation

dy(t)/dt + y(t) = x(t)
To determine whether this system is linear, construct a new composite input: x(t) = Ax1 (t) + Bx2 (t) Now, create the expected composite output: y (t) = Ay1 (t) + By2 (t) Substituting the two into our original equation:
d[Ay1(t) + By2(t)]/dt + [Ay1(t) + By2(t)] = Ax1(t) + Bx2(t)
Finally, convert the various composite terms into the respective variables, to prove that this system is linear:
dy(t)/dt + y(t) = x(t)
For the record, derivatives and integrals are linear operators, and ordinary differential equations typically are linear equations.
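The superposition property of the example system dy(t)/dt + y(t) = x(t) can also be checked numerically. A minimal sketch, assuming MATLAB with the Control System Toolbox (or Octave with the control package), simulates the system with lsim for two arbitrary inputs and for their weighted sum, and compares the responses:

% The system dy/dt + y = x has transfer function 1/(s + 1)
sys = tf(1, [1 1]);

t  = (0:0.01:10)';        % time vector
x1 = sin(t);              % first arbitrary input
x2 = ones(size(t));       % second arbitrary input (a step)
A  = 2; B = -3;           % arbitrary scaling constants

y1 = lsim(sys, x1, t);
y2 = lsim(sys, x2, t);
y3 = lsim(sys, A*x1 + B*x2, t);   % response to the linear combination

% For a linear system these should match (up to numerical error)
max(abs(y3 - (A*y1 + B*y2)))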
3.7 Memory
A system is said to have memory if the output from the system is dependent on past inputs (or future inputs!) to the system. A system is called memoryless if the output is only dependent on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications. Systems that have memory are called dynamic systems, and systems that do not have memory are static systems.
3.8 Causality
Causality is a property that is very similar to memory. A system is called causal if it is only dependent on past and/or current inputs. A system is called anti-causal if the output of the system is dependent only on future inputs. A system is called non-causal if the output depends on past and/or current and future inputs. A system design that is not causal cannot be physically implemented. If the system can't be built, the design is generally worthless.
3.9 Time-Invariance
A system is called time-invariant if the system relationship between the input and output signals is not dependent on the passage of time. If the input signal x(t) produces an output y(t), then any time-shifted input x(t + δ) results in a time-shifted output y(t + δ). This property can be satisfied if the transfer function of the system is not a function of time except as expressed by the input and output. If a system is time-invariant, then the system block is commutative with an arbitrary delay. This facet of time-invariant systems will be discussed later. To determine if a system f is time-invariant, perform the following test: Apply an arbitrary input x to a system and produce an arbitrary output y:
y(t) = f(x(t))

Apply a second input x1 to the system, and produce a second output:

y1(t) = f(x1(t))

Now, assign x1 to be equal to the first input x, time-shifted by a given constant value δ:

x1(t) = x(t − δ)

Finally, a system is time-invariant if y1 is equal to y shifted by the same value δ:

y1(t) = y(t − δ)
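The time-invariance test can also be illustrated numerically. In the sketch below, the hypothetical memoryless system y(t) = t·x(t) fails the test because its response to a delayed input is not a delayed copy of the original response, while y(t) = 2·x(t) passes. Both systems and the input are arbitrary choices for illustration.

% Time-invariance test on two example memoryless systems
t     = (0:0.01:5)';
delta = 1;                        % time shift, in seconds
n     = round(delta/0.01);        % shift expressed in samples

x  = sin(2*pi*t);                 % arbitrary test input
xd = sin(2*pi*(t - delta));       % the same input, delayed by delta

% System A: y = 2*x  (time-invariant)
yA  = 2*x;
yAd = 2*xd;
% Compare the response to the delayed input with the delayed response
errA = max(abs(yAd(n+1:end) - yA(1:end-n)))    % essentially zero

% System B: y = t.*x  (time-varying)
yB  = t.*x;
yBd = t.*xd;
errB = max(abs(yBd(n+1:end) - yB(1:end-n)))    % clearly nonzero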
3.11 Lumpedness
A system is said to be lumped if one of the two following conditions is satisfied: 1. There are a finite number of states that the system can be in. 2. There are a finite number of state variables. The concepts of "states" and "state variables" are relatively advanced, and they will be discussed in more detail in the discussion about modern controls. Systems which are not lumped are called distributed. A simple example of a distributed system is a system with delay, that is, A(s)y(t) = B(s)u(t − τ), which has an infinite number of state variables (here we use s to denote the Laplace variable). However, although distributed systems are quite common, they are very difficult to analyze in practice, and there are few tools available to work with such systems. Fortunately, in most cases, a delay can be sufficiently modeled with the Pade approximation. This book will not discuss distributed systems much.
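As an example of the Pade approximation mentioned above, the sketch below (assuming MATLAB with the Control System Toolbox) replaces a pure 0.5-second delay with a second-order rational transfer function, turning the distributed delay into a lumped model. The delay length, approximation order, and the first-order plant are all arbitrary choices for illustration.

% Second-order Pade approximation of a 0.5 second delay
T = 0.5;                     % delay, in seconds
[num, den] = pade(T, 2);     % rational approximation of exp(-T*s)
delay_approx = tf(num, den)

% Compare against the exact delay on a step response
sys_exact  = tf(1, [1 1], 'InputDelay', T);
sys_lumped = series(delay_approx, tf(1, [1 1]));
step(sys_exact, sys_lumped)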
3.12 Relaxed
A system is said to be relaxed if the system is causal, and at the initial time t0 the output of the system is zero, i.e., there is no stored energy in the system. y(t0) = f(x(t0)) = 0 In terms of differential equations, a relaxed system is said to have "zero initial state". Systems without an initial state are easier to work with, but systems that are not relaxed can frequently be modified to approximate relaxed systems.
3.13 Stability
Control Systems engineers will frequently say that an unstable system has "exploded". Some physical systems actually can rupture or explode when they go unstable. Stability is a very important concept in systems, but it is also one of the hardest function properties to prove. There are several different criteria for system stability, but the most common requirement is that the system must produce a finite output when subjected to a finite input. For instance, if 5 volts is applied to the input terminals of a given circuit, it would be best if the circuit output didn't approach infinity, and the circuit itself didn't melt or explode. This type of stability is often known as "Bounded Input, Bounded Output" stability, or BIBO. There are a number of other types of stability, most of which are based off the concept of BIBO stability. Because stability is such an important and complicated topic, an entire section of this text is devoted to its study.
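As a small preview of the stability chapters, the sketch below (assuming the Control System Toolbox) checks two example transfer functions by inspecting their pole locations: when all poles have negative real parts the output stays bounded for any bounded input, while a pole in the right half-plane leads to an unbounded response. Both transfer functions are arbitrary examples.

% Pole locations decide BIBO stability for LTI transfer functions
stable_sys   = tf(1, [1 3 2]);     % poles at s = -1 and s = -2
unstable_sys = tf(1, [1 -1]);      % pole at s = +1

pole(stable_sys)         % all real parts negative -> stable
pole(unstable_sys)       % positive real part -> unstable

isstable(stable_sys)     % returns 1
isstable(unstable_sys)   % returns 0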
Figure 3
Figure 4
4.1.3 Quantized
A signal is called Quantized if it can take only certain values, and cannot take other values. This concept is best illustrated with examples: 1. Students with a strong background in physics will recognize this concept as being the root word in "Quantum Mechanics". In quantum mechanics, it is known that energy comes only in discrete packets. An electron bound to an atom, for example, may occupy one of several discrete energy levels, but not intermediate levels. 2. Another common example is population statistics. For instance, a common statistic is that a household in a particular country may have an average of "3.5 children", or some other fractional number. Actual households may have 3 children, or they may have 4 children, but no household has 3.5 children. 3. People with a computer science background will recognize that integer variables are quantized because they can only hold certain integer values, not fractions or decimal points.
The last example concerning computers is the most relevant, because quantized systems are frequently computer-based. Systems that are implemented with computer software and hardware will typically be quantized. Here is an example waveform of a quantized signal. Notice how the magnitude of the wave can only take certain values, and that creates a step-like appearance. This image is discrete in magnitude, but is continuous in time:
Figure 5
4.2 Analog
By definition: Analog A signal is considered analog if it is defined for all points in time and if it can take any real magnitude value within its range. An analog system is a system that represents data using a direct conversion from one form to another. In other words, an analog system is a system that is continuous in both time and magnitude.
Where ω is the output in terms of rad/sec, and f(v) is the motor's conversion function between the input voltage (v) and the output. For any value of v we can calculate out specifically what the rotational speed of the motor should be.
4.3 Digital
Digital data is represented by discrete number values. By definition: Digital A signal or system is considered digital if it is both discrete-time and quantized. Digital data always have a certain granularity, and therefore there will almost always be an error associated with using such data, especially if we want to account for all real numbers. The tradeoff, of course, to using a digital system is that our powerful computers with our powerful, Moore's law microprocessor units, can be instructed to operate on digital data only. This benefit more than makes up for the shortcomings of a digital representation system. Discrete systems will be denoted inside square brackets, as is a common notation in texts that deal with discrete values. For instance, we can denote a discrete data set of ascending numbers, starting at 1, with the following notation: x[n] = [1 2 3 4 5 6 ...] n, or other letters from the central area of the alphabet (m, i, j, k, l, for instance) are commonly used to denote discrete time values. Analog, or "non-discrete" values are denoted in regular expression syntax, using parentheses. Here is an example of an analog waveform and the digital equivalent. Notice that the digital waveform is discrete in both time and magnitude:
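The distinction can be illustrated numerically. The sketch below samples a continuous sine wave (making it discrete in time) and then rounds each sample to a fixed set of levels (making it quantized), producing a digital signal. The sampling period and the quantization step of 0.25 are arbitrary choices for illustration.

% Build a digital (discrete-time and quantized) version of an analog signal
Ts = 0.1;                     % sampling period, in seconds
n  = 0:50;                    % discrete time index
x  = sin(2*pi*0.5*n*Ts);      % sampled (discrete-time) signal

q  = 0.25;                    % quantization step (arbitrary)
xq = q*round(x/q);            % quantized signal: only multiples of 0.25

stairs(n, xq); hold on;       % step-like digital waveform
plot(n, x, '--'); hold off;
legend('digital (quantized)', 'sampled only');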
Figure 8
5 System Metrics
5.1 System Metrics
When a system is being designed and analyzed, it doesn't make any sense to test the system with all manner of strange input functions, or to measure all sorts of arbitrary performance metrics. Instead, it is in everybody's best interest to test the system with a set of standard, simple, reference functions. Once the system is tested with the reference functions, there are a number of different metrics that we can use to determine the system performance. It is worth noting that the metrics presented in this chapter represent only a small number of possible metrics that can be used to evaluate a given system. This wikibook will present other useful metrics along the way, as their need becomes apparent.
Unit Step Function

u(t) = 0 for t < 0;  u(t) = 1 for t ≥ 0
The unit step function is a highly important function, not only in control systems engineering, but also in signal processing, systems analysis, and all branches of engineering. If the unit step function is input to a system, the output of the system is known as the step response. The step response of a system is an important tool, and we will study step responses in detail in later chapters.
Figure 9
Ramp
A unit ramp is defined in terms of the unit step function, as such:
Unit Ramp Function

r(t) = t u(t)

It is important to note that the unit step function is simply the derivative of the unit ramp function; equivalently, the ramp is the integral of the step:

r(t) = ∫ u(t) dt = t u(t)
This denition will come in handy when we learn about the Laplace Transform.
Figure 10
Parabolic
A unit parabolic input is similar to a ramp input:
Unit Parabolic Function

p(t) = (1/2) t² u(t)

Notice also that the unit parabolic input is equal to the integral of the ramp function:

p(t) = ∫ r(t) dt = ∫ t u(t) dt = (1/2) t² u(t) = (1/2) t r(t)
Again, this result will become important when we learn about the Laplace Transform.
Figure 11
Also, sinusoidal and exponential functions are considered basic, but they are too difficult to use in initial analysis of a system.
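As a rough sketch of how these reference functions might be generated numerically in MATLAB (the time vector and amplitude below are arbitrary illustration choices, not values from the text):

% Sketch: the three standard reference inputs on a common time axis.
t = 0:0.01:5;          % time axis, illustration value
A = 1;                 % input amplitude, illustration value
u = A * ones(size(t)); % step input, A*u(t)
r = A * t;             % ramp input, A*t*u(t)
p = A * 0.5 * t.^2;    % parabolic input, (A/2)*t^2*u(t)
plot(t, u, t, r, t, p);
legend('step', 'ramp', 'parabola');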
The amount of time it takes for the transient response to end and the steady-state response to begin is known as the settling time. It is common for a systems engineer to try to improve the step response of a system. In general, it is desired for the transient response to be reduced, the rise and settling times to be shorter, and the steady-state to approach a particular desired "reference" output.
The width of this settling band varies from source to source, although common values are 20%, 10%, or 5% of the target value. The settling time will be denoted as ts.
The highest exponent in the denominator is s², so the system is order 2. Also, since the denominator is of higher degree than the numerator, this system is proper. In the above example, G(s) is a second-order transfer function because the highest power of s in the denominator is 2. Second-order functions are the easiest to work with.
G(s) = K · ∏_i (s − s_i) / ( s^M · ∏_j (s − s_j) )
Poles at the origin are called integrators, because they have the effect of performing integration on the input signal. We call the parameter M the system type. Note that increased system type numbers correspond to larger numbers of poles at s = 0. More poles at the origin generally have a beneficial effect on the system, but they increase the order of the system, and make it increasingly difficult to implement physically. System type will generally be denoted with a letter like N, M, or m. Because these variables are typically reused for other purposes, this book will make a clear distinction when they are employed. Now, we will define a few terms that are commonly used when discussing system type. These new terms are Position Error, Velocity Error, and Acceleration Error. These names are throwbacks to physics terms where acceleration is the derivative of velocity, and velocity is the derivative of position. Note that none of these terms are meant to deal with movement, however.
Position Error
The position error, denoted by the position error constant Kp, is the amount of steady-state error of the system when stimulated by a unit step input. We define the position error constant as follows:
Position Error Constant

Kp = lim_{s→0} G(s)
Where G(s) is the transfer function of our system.
Velocity Error
The velocity error is the amount of steady-state error when the system is stimulated with a ramp input. We define the velocity error constant as such:
Velocity Error Constant

Kv = lim_{s→0} s G(s)
Acceleration Error
The acceleration error is the amount of steady-state error when the system is stimulated with a parabolic input. We define the acceleration error constant to be:
Acceleration Error Constant

Ka = lim_{s→0} s² G(s)
Now, this table will show briefly the relationship between the system type, the kind of input (step, ramp, parabolic), and the steady-state error of the system:
Unit System Input
Type, M | Au(t)            | Ar(t)        | Ap(t)
0       | ess = A/(1 + Kp) | ess = ∞      | ess = ∞
1       | ess = 0          | ess = A/Kv   | ess = ∞
2       | ess = 0          | ess = 0      | ess = A/Ka
> 2     | ess = 0          | ess = 0      | ess = 0
Where the constant M is the order of the digital system. Now, we will show how to find the various error constants in the Z-Domain:
Z-Domain Error Constants
Error Constant | Equation
Kp             | Kp = lim_{z→1} G(z)
Kv             | Kv = lim_{z→1} (z − 1) G(z)
Ka             | Ka = lim_{z→1} (z − 1)² G(z)
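As a quick sketch of how the continuous-domain error constants can be evaluated with the MATLAB Symbolic Math Toolbox (the plant G(s) = 10/(s(s+2)) below is an arbitrary type-1 example, not one from the text):

% Sketch: error constants for a hypothetical type-1 system.
syms s
G = 10/(s*(s + 2));
Kp = limit(G, s, 0)       % Inf  -> zero steady-state error to a step
Kv = limit(s*G, s, 0)     % 5    -> ramp error ess = A/Kv
Ka = limit(s^2*G, s, 0)   % 0    -> infinite error to a parabolic input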
5.11 Visually
Here is an image of the various system metrics, acting on a system in response to a step input:
Figure 14
The target value is the value of the input step response. The rise time is the time at which the waveform first reaches the target value. The overshoot is the amount by which the waveform exceeds the target value. The settling time is the time it takes for the system to settle into a particular bounded region. This bounded region is denoted with two short dotted lines above and below the target value.
6 System Modeling
6.1 The Control Process
It is the job of a control engineer to analyze existing systems, and to design new systems to meet specific needs. Sometimes new systems need to be designed, but more frequently a controller unit needs to be designed to improve the performance of existing systems. When designing a system, or implementing a controller to augment an existing system, we need to follow some basic steps:
1. Model the system mathematically
2. Analyze the mathematical model
3. Design the system/controller
4. Implement the system/controller and test
The vast majority of this book is going to be focused on (2), the analysis of the mathematical systems. This chapter alone will be devoted to a discussion of the mathematical modeling of the systems.
Figure 15
If the system can be represented by a mathematical function h(t, r), where t is the time that the output is observed and r is the time that the input is applied, then we can relate the system function h(t, r) to the input x and the output y through the use of an integral:
General System Description
y(t) = ∫_{−∞}^{∞} h(t, r) x(r) dr
This integral form holds for all linear systems, and every linear system can be described by such an equation. If a system is causal, then there is no output of the system before time r, and we can change the limits of the integration:
y(t) = ∫_{0}^{t} h(t, r) x(r) dr
If the system is also time-invariant, the system function depends only on the difference t − r:

y(t) = ∫_{0}^{t} h(t − r) x(r) dr
This equation is known as the convolution integral, and we will discuss it more in the next chapter. Every Linear Time-Invariant (LTI) system can be used with the Laplace Transform, a powerful tool that allows us to convert an equation from the time domain into the S-Domain, where many calculations are easier. Time-variant systems cannot be used with the Laplace Transform.
y (t) = C (t)x(t) + D(t)u(t) We will discuss the state space equations more when we get to the section on modern controls.
6.5 Representations
To recap, we will prepare a table with the various system properties, and the available methods for describing the system:

Properties                          | State-Space Equations | Laplace Transform | Transfer Matrix
Linear, Time-Variant, Distributed   | no                    | no                | no
Linear, Time-Variant, Lumped        | yes                   | no                | no
Linear, Time-Invariant, Distributed | no                    | yes               | no
Linear, Time-Invariant, Lumped      | yes                   | yes               | yes

We will discuss all these different types of system representation later in the book.
6.6 Analysis
Once a system is modeled using one of the representations listed above, the system needs to be analyzed. We can determine the system metrics and then we can compare those metrics to our specification. If our system meets the specifications we are finished with the design process. However, if the system does not meet the specifications (as is typically the case), then suitable controllers and compensators need to be designed and added to the system. Once the controllers and compensators have been designed, the job isn't finished: we need to analyze the new composite system to ensure that the controllers work properly. Also, we need to ensure that the systems are stable: unstable systems can be dangerous.
Brief Overview of the Math
Frequency domain modeling is a matter of determining the impulse response of a system to a random process.
Figure 16
G(ω) is the frequency response function of the system and S_YY(ω) is the one-sided output PSD or auto power spectral density function. The frequency response function, G(ω), is related to the impulse response function (transfer function) by

g(τ) = (1/2π) ∫_{−∞}^{∞} e^{iωτ} H(ω) dω
Note that some texts will state that this is only valid for random processes which are stationary. Other texts suggest stationary and ergodic, while still others state weakly stationary processes. Some texts do not distinguish between strictly stationary and weakly stationary. From practice, the rule of thumb is: if the PSD of the input process is the same from hour to hour and day to day, then the input PSD can be used and the above equation is valid.
Notes
See a full explanation with example at ControlTheoryPro.com2
Hovering Helicopter Example3
Reaction Torque Cancellation Example4
List of all examples at ControlTheoryPro.com5
6.8 Manufacture
Once the system has been properly designed we can prototype our system and test it. Assuming our analysis was correct and our design is good, the prototype should work as expected. Now we can move on to manufacture and distribute our completed systems.
7 Transforms
7.1 Transforms
There are a number of transforms that we will be discussing throughout this book, and the reader is assumed to have at least a small prior knowledge of them. It is not the intention of this book to teach the topic of transforms to an audience that has had no previous exposure to them. However, we will include a brief refresher here to refamiliarize people who maybe cannot remember the topic perfectly. If you do not know what the Laplace Transform or the Fourier Transform are yet, it is highly recommended that you use this page as a simple guide, and look the information up on other sources. Specically, Wikipedia1 has lots of information on these subjects.
Where the function f(a) is the function being transformed, and g(a,b) is known as the kernel of the transform. Typically, the only dierence between the various integral transforms is the kernel.
http://en.wikipedia.org/wiki/ http://en.wikipedia.org/wiki/Laplace%20transform
The Laplace Transform converts an equation from the time-domain into the so-called "S-domain", or the Laplace domain, or even the "Complex domain". These are all different names for the same mathematical space and they all may be used interchangeably in this book and in other texts on the subject. The Transform can only be applied under the following conditions:
1. The system or signal in question is analog.
2. The system or signal in question is linear.
3. The system or signal in question is time-invariant.
4. The system or signal in question is causal.
Laplace transform results have been tabulated extensively. More information on the Laplace transform, including a transform table, can be found in the Appendix3.
If we have a linear differential equation in the time domain:

y(t) = a x(t) + b x′(t) + c x″(t)

With zero initial conditions, we can take the Laplace transform of the equation as such:

Y(s) = a X(s) + b s X(s) + c s² X(s)

And separating, we get:

Y(s) = X(s)[a + bs + cs²]
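As a sketch, the same derivative bookkeeping can be seen from the MATLAB Symbolic Math Toolbox, which reports the initial-condition term explicitly (the symbolic function x(t) here is a placeholder, not a function from the text):

% Sketch: Laplace transform of a derivative of an unspecified signal x(t).
syms t s
syms x(t)
laplace(diff(x(t), t), t, s)
% returns  s*laplace(x(t), t, s) - x(0)
% which reduces to s*X(s) when the initial condition x(0) is zero.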
The inverse transform converts a function from the Laplace domain back into the time domain.
y2(t) = a2 x2(t)

We can arrange these equations into matrix form, as shown:

[y1(t); y2(t)] = [a1 0; 0 a2] [x1(t); x2(t)]

And write this symbolically as:

y(t) = Ax(t)

We can take the Laplace transform of both sides:

L[y(t)] = Y(s) = L[Ax(t)] = A L[x(t)] = A X(s)

Which is the same as taking the transform of each individual equation in the system of equations.
http://en.wikibooks.org/wiki/Circuit%20Theory
Figure 17 Circuit diagram for the RL circuit example problem. VL is the voltage over the inductor, and is the quantity we are trying to find.
Let's say that we have a first-order RL series electric circuit. The resistor has resistance R, the inductor has inductance L, and the voltage source has input voltage Vin. The system output of our circuit is the voltage over the inductor, Vout. In the time domain, we have the following first-order differential equations to describe the circuit:
Vout(t) = VL(t) = L di(t)/dt
Vin(t) = R i(t) + L di(t)/dt

However, since the circuit is essentially acting as a voltage divider, we can put the output in terms of the input as follows:

Vout(t) = [L di(t)/dt] / [R i(t) + L di(t)/dt] · Vin(t)

This is a very complicated equation, and will be difficult to solve unless we employ the Laplace transform:

Vout(s) = [Ls / (R + Ls)] Vin(s)

We can divide top and bottom by L, and move Vin to the other side:

Vout(s)/Vin(s) = s / (s + R/L)

And using a simple table look-up, we can solve this for the time-domain relationship between the circuit input and the circuit output:

vout(t)/vin(t) = (d/dt)[ e^{−Rt/L} ] u(t)
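As a sketch, the same circuit can be entered as a transfer function object in MATLAB; the component values R = 1 ohm and L = 0.5 H below are arbitrary illustration values, not values from the text:

% Sketch: the RL voltage divider as a transfer function.
R = 1;  L = 0.5;
sys = tf([L 0], [L R]);   % Vout(s)/Vin(s) = Ls/(Ls + R)
step(sys);                % step response decays like e^(-Rt/L)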
F(s) = (2s + 1) / ((s + 1)(s + 2)) = A/(s + 1) + B/(s + 2)

This looks impossible, because we have a single equation with 3 unknowns (s, A, B), but in reality s can take any arbitrary value, and we can "plug in" values for s to solve for A and B, without needing other equations. For instance, in the above equation, we can multiply through by the denominator, and cancel terms:

(2s + 1) = A(s + 2) + B(s + 1)

Now, when we set s = −2, the A term disappears, and we are left with B = 3. When we set s = −1, we can solve for A = −1. Putting these values back into our original equation, we have:

F(s) = −1/(s + 1) + 3/(s + 2)
Remember, since the Laplace transform is a linear operator, the following relationship holds true:

L^{-1}[F(s)] = L^{-1}[−1/(s + 1) + 3/(s + 2)] = L^{-1}[−1/(s + 1)] + L^{-1}[3/(s + 2)]

Finding the inverse transform of these smaller terms should be an easier process than finding the inverse transform of the whole function. Partial fraction expansion is a useful, and oftentimes necessary, tool for finding the inverse of an S-domain equation.
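The same expansion can be checked numerically with MATLAB's residue function; this is only a sketch of that check, and the comment notes the expected correspondence:

% Sketch: partial fraction expansion of (2s+1)/((s+1)(s+2)).
num = [2 1];                % 2s + 1
den = conv([1 1], [1 2]);   % (s+1)(s+2)
[r, p, k] = residue(num, den)
% The residue 3 is reported at the pole -2 and the residue -1 at the
% pole -1, matching B = 3 and A = -1 above.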
Consider a function with a repeated pole:

F(s) = (79s² + 916s + 1000) / (s (s + 10)³)
A(s + 10)³ + Bs + Cs(s + 10) + Ds(s + 10)² = 79s² + 916s + 1000

Canceling terms wouldn't be enough here; we will open the brackets (separated onto multiple lines):

As³ + 30As² + 300As + 1000A + Bs + Cs² + 10Cs + Ds³ + 20Ds² + 100Ds = 79s² + 916s + 1000

Let's compare coefficients:

A + D = 0
30A + C + 20D = 79
300A + B + 10C + 100D = 916
1000A = 1000

And solving gives us:

A = 1, B = 26, C = 69, D = −1

We know from the Laplace Transform table that the following relation holds:
1/(s + α)^{n+1}  ⇔  (t^n / n!) e^{−αt} u(t)
We can plug in our values for A, B, C, and D into our expansion, and try to convert it into the form above.

F(s) = A/s + B/(s + 10)³ + C/(s + 10)² + D/(s + 10)
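MATLAB's residue function also handles the repeated pole at s = −10; the sketch below only checks the hand calculation, and the comment is hedged because residue reports repeated-pole residues one per power of (s + 10):

% Sketch: expansion of (79s^2 + 916s + 1000)/(s(s+10)^3).
num = [79 916 1000];
den = [1 30 300 1000 0];     % s(s+10)^3
[r, p, k] = residue(num, den)
% Up to ordering, the residues recover D = -1, C = 69, B = 26 (powers
% 1..3 of (s+10)) and A = 1 at the pole s = 0.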
F(s) = (7s + 26) / (s² − 80s + 1681) = (As + B) / (s² − 80s + 1681)
When the solution of the denominator is a complex number, we use a complex representation A + iB, like 3 + i4, as opposed to the use of a single letter (e.g. D), which is for real numbers:

As + B = 7s + 26
A = 7
B = 26

We will need to reform it into two fractions that look like this (without changing its value):

ω / ((s + α)² + ω²)  ⇔  e^{−αt} sin(ωt) u(t)
(s + α) / ((s + α)² + ω²)  ⇔  e^{−αt} cos(ωt) u(t)

Let's start with the denominator (for both fractions): The roots of s² − 80s + 1681 are 40 + j9 and 40 − j9.

(As + B) / ((s + a)² + ω²) = (As + B) / ((s − 40)² + 9²)
F(s) = (90s² − 1110) / (s (s − 3)(s² − 12s + 37)) = A/s + B/(s − 3) + (Cs + D)/(s² − 12s + 37)
We multiply through by the denominators to make the equation rational:

A(s − 3)(s² − 12s + 37) + Bs(s² − 12s + 37) + (Cs + D)s(s − 3) = 90s² − 1110

And then we combine terms:

As³ − 15As² + 73As − 111A + Bs³ − 12Bs² + 37Bs + Cs³ − 3Cs² + Ds² − 3Ds = 90s² − 1110

Comparing coefficients:

A + B + C = 0
−15A − 12B − 3C + D = 90
73A + 37B − 3D = 0
−111A = −1110

Now, we can solve for A, B, C and D:

A = 10, B = −10, C = 0, D = 120

And now for the "fitting": The roots of s² − 12s + 37 are 6 + j and 6 − j.
F(s) = A (1/s) + B (1/(s − 3)) + C (s/((s − 6)² + 1²)) + D (1/((s − 6)² + 1²))

No need to fit the fraction of D, because it is complete; no need to bother fitting the fraction of C, because C is equal to zero.

F(s) = 10 (1/s) − 10 (1/(s − 3)) + 0 · (s/((s − 6)² + 1²)) + 120 (1/((s − 6)² + 1²))
From our chapter on system metrics, you may recognize the value of the system at time infinity as the steady-state value of the system. The difference between the steady-state value and the expected output value we remember as being the steady-state error of the system. Using the Final Value Theorem, we can find the steady-state value and the steady-state error of the system in the Complex S domain.
lim_{s→0} s · (1 + s)/(1 + 2s + s²)
And since the unit ramp is the integral of the unit step, we can multiply the above result times 1/s to get the transform of the unit ramp:

L[r(t)] = 1/s²

Again, we can multiply by 1/s to get the transform of the unit parabola:

L[p(t)] = 1/s³
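These three transforms can be checked quickly with the Symbolic Math Toolbox; this is only a sketch of that check:

% Sketch: transforms of the standard inputs.
syms t
laplace(heaviside(t))         % 1/s
laplace(t*heaviside(t))       % 1/s^2
laplace(t^2/2*heaviside(t))   % 1/s^3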
http://en.wikipedia.org/wiki/Fourier%20Transform
The Fourier Transform converts a time-domain signal into its frequency-domain representation, as a function of the radial frequency ω. The Fourier Transform is defined as such:
Fourier Transform
F(jω) = F[f(t)] = ∫_{0}^{∞} f(t) e^{−jωt} dt
This operation can be performed using this MATLAB command: fourier
We can now show that the Fourier Transform is equivalent to the Laplace transform, when the following condition is true:

s = jω

Because the Laplace and Fourier Transforms are so closely related, it does not make much sense to use both transforms for all problems. This book, therefore, will concentrate on the Laplace transform for nearly all subjects, except those problems that deal directly with frequency values. For frequency problems, it makes life much easier to use the Fourier Transform representation. Like the Laplace Transform, the Fourier Transform has been extensively tabulated. Properties of the Fourier transform, in addition to a table of common transforms, are available in the Appendix8.
The inverse Fourier Transform is defined as:

f(t) = (1/2π) ∫_{−∞}^{∞} F(jω) e^{jωt} dω
Figure 18
Using the above equivalence, we can show that the Laplace transform is always equal to the Fourier Transform, if the variable s is an imaginary number. However, the Laplace transform is different if s is a real or a complex variable. As such, we generally define s to have both a real part and an imaginary part, as such:

s = σ + jω

And we can show that s = jω if σ = 0. Since the variable s can be broken down into 2 independent values, it is frequently of some value to graph the variable s on its own special "S-plane". The S-plane graphs the variable σ on the horizontal axis, and the value of jω on the vertical axis. This axis arrangement is shown at right.
e^{jθ} = cos(θ) + j sin(θ)

This formula will be used extensively in some of the chapters of this book, so it is important to become familiar with it now.
7.6 MATLAB
The MATLAB symbolic toolbox contains functions to compute the Laplace and Fourier transforms automatically. The function laplace, and the function fourier, can be used to calculate the Laplace and Fourier transforms of the input functions, respectively. For instance, the code:

t = sym('t');
fx = 30*t^2 + 20*t;
laplace(fx)

produces the output:

ans = 60/s^3 + 20/s^2

We will discuss these functions more in The Appendix9.
8 Transfer Functions
8.1 Transfer Functions
This operation can be performed using this MATLAB command: tf
A Transfer Function is the ratio of the output of a system to the input of a system, in the Laplace domain, considering its initial conditions and equilibrium point to be zero. If we have an input function of X(s), and an output function Y(s), we define the transfer function H(s) to be:
Transfer Function

H(s) = Y(s) / X(s)
Readers who have read the Circuit Theory1 book will recognize the transfer function as being the Laplace transform of a circuit s impulse response.
Figure 19
http://en.wikibooks.org/wiki/Circuit%20Theory
We denote the input of the system as x(t), and the output of the system as y(t). The relationship between the input and the output is denoted as the impulse response, h(t). We define the impulse response as being the relationship between the system output to its input. We can use the following equation to define the impulse response:

h(t) = y(t) / x(t)
δ(t) = 0, for t ≠ 0
The impulse function is also known as the delta function because it's denoted with the Greek lower-case letter δ. The delta function is typically graphed as an arrow towards infinity, as shown below:
Figure 20
It is drawn as an arrow because it is difficult to show a single point at infinity in any other graphing method. Notice how the arrow only exists at location 0, and does not exist for any other time t. The delta function works with regular time shifts just like any other function. For instance, we can graph the function δ(t − N) by shifting the function δ(t) to the right, as such:
Figure 21
An examination of the impulse function will show that it is related to the unit-step function as follows:

δ(t) = du(t)/dt

and

u(t) = ∫ δ(t) dt
The impulse function is not defined at point t = 0, but the impulse function must always satisfy the following condition, or else it is not a true impulse function:

∫_{−∞}^{∞} δ(t) dt = 1
The response of a system to an impulse input is called the impulse response. Now, to get the Laplace Transform of the impulse function, we take the derivative of the unit step function, which means we multiply the transform of the unit step function by s:
L[δ(t)] = s · (1/s) = 1
8.3 Convolution
This operation can be performed using this MATLAB command: conv
However, the impulse response cannot be used to find the system output from the system input in the same manner as the transfer function. If we have the system input and the impulse response of the system, we can calculate the system output using the convolution operation as such:

y(t) = h(t) ∗ x(t)

Remember: an asterisk means convolution, not multiplication! Where "∗" (asterisk) denotes the convolution operation. Convolution is a complicated combination of multiplication, integration and time-shifting. We can define the convolution between two functions, a(t) and b(t), as the following:
Convolution
(a ∗ b)(t) = (b ∗ a)(t) = ∫_{−∞}^{∞} a(τ) b(t − τ) dτ
(The variable τ (Greek tau) is a dummy variable for integration). This operation can be difficult to perform. Therefore, many people prefer to use the Laplace Transform (or another transform) to convert the convolution operation into a multiplication operation, through the Convolution Theorem.
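As a rough numerical sketch, the convolution integral can be approximated in MATLAB with conv; the signals and step size below are arbitrary illustration choices:

% Sketch: approximating continuous convolution numerically.
dt = 0.01;
t  = 0:dt:5;
h  = exp(-t);            % impulse response of a hypothetical system
x  = ones(size(t));      % unit step input
y  = conv(x, h) * dt;    % scale by dt to approximate the integral
y  = y(1:length(t));     % keep the first 5 seconds
plot(t, y);              % approaches 1 - e^(-t), the step response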
y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
L[f(t)g(t)] = F(s) ∗ G(s)

This also serves as a good example of the property of Duality.
Using the Transfer Function
If the complex Laplace variable is s, then we generally denote the transfer function of a system as either G(s) or H(s). If the system input is X(s), and the system output is Y(s), then the transfer function can be defined as such:

H(s) = Y(s) / X(s)
If we know the input to a given system, and we have the transfer function of the system, we can solve for the system output by multiplying: Transfer Function Description Y (s) = H (s)X (s)
Plugging that result into our relation for the transfer function gives us:

Y(s) = X(s)H(s) = (1/s) H(s) = H(s)/s
And we can see that the step response is simply the impulse response divided by s.
We can separate out our numerator and denominator polynomials as such:

num = [79 916 1000];
den = [1 30 300 1000 0];
sys = tf(num, den);

Now, we can get our step response from the step function, and plot it for time from 1 to 10 seconds:

T = 1:0.001:10;
step(sys, T);
Figure 22
Because the frequency response and the transfer function are so closely related, typically only one is ever calculated, and the other is gained by simple variable substitution. However, despite the close relationship between the two representations, they are both useful individually, and are each used for different purposes.
In the equation above, notice that each term in the series has a coefficient value, a. We can optionally factor out this coefficient, if the resulting equation is easier to work with:
http://en.wikipedia.org/wiki/Geometric%20progression
a Σ_{k=0}^{n} r^k = a (r^0 + r^1 + r^2 + r^3 + ... + r^n)
Once we have an infinite series in either of these formats, we can conveniently solve for the total sum of this series using the following equation:

a Σ_{k=0}^{n} r^k = a (1 − r^{n+1}) / (1 − r)
Let's say that we start our series off at a number that isn't zero. Let's say for instance that we start our series off at n = 1, or n = 100. Let's see:

Σ_{k=m}^{n} a r^k = a (r^m − r^{n+1}) / (1 − r)
With that result out of the way, now we need to worry about making this series converge. In the above sum, we know that n is approaching infinity (because this is an infinite sum). Therefore, any term that contains the variable n is a matter of worry when we are trying to make this series converge. If we examine the above equation, we see that there is one term in the entire result with an n in it, and from that, we can set a fundamental inequality to govern the geometric series:

r^{n+1} < ∞

To satisfy this equation, we must satisfy the following condition:
Geometric convergence condition

r ≤ 1

Therefore, we come to the final result: The geometric series converges if and only if the value of r is less than one.
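A quick numerical sketch of this convergence in MATLAB, using an arbitrary ratio r = 0.5:

% Sketch: partial sum versus closed-form limit of a geometric series.
r = 0.5;  a = 1;
partial = a * sum(r.^(0:50))   % approximately 2
closed  = a / (1 - r)          % exactly 2, since |r| < 1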
F*(s) = L*[f(t)] = Σ_{i=0}^{∞} f(iT) e^{−siT}
The Star Transform depends on the sampling time T and is different for a single signal depending on the speed at which the signal is sampled. Since the Star Transform is defined as an infinite series, it is important to note that some inputs to the Star Transform will not converge, and therefore some functions do not have a valid Star Transform. Also, it is important to note that the Star Transform may only be valid under a particular region of convergence. We will cover this topic more when we discuss the Z-transform.
X*(s) = Σ [ residues of X(λ) · 1/(1 − e^{−T(s−λ)}) ] evaluated at the poles of X(λ)
This math is advanced for most readers, so we can also use an alternate method, as follows:

X*(s) = (1/T) Σ_{n=−∞}^{∞} X(s + jnω_s) + x(0)/2
Neither one of these methods is particularly easy, however, and therefore we will not discuss the relationship between the Laplace transform and the Star Transform any more than is absolutely necessary in this book. Suffice it to say, however, that the Laplace transform and the Star Transform are related mathematically.
Given:

Y(s) = X*(s)H(s)

Then:

Y*(s) = X*(s)H*(s)

Given:

Y(s) = X(s)H(s)

Then:

Y*(s) = XH*(s) ≠ X*(s)H*(s)

Where XH*(s) is the Star Transform of the product of X(s)H(s).
X(z) = Z{x[n]} = Σ_{n=−∞}^{∞} x[n] z^{−n}
Z-Transform properties, and a table of common transforms can be found in: the Appendix5 .
Like the Star Transform, the Z Transform is defined as an infinite series, and therefore we need to worry about convergence. In fact, there are a number of instances that have identical Z-Transforms, but different regions of convergence (ROC). Therefore, when talking about the Z transform, you must include the ROC, or you are missing valuable information.
Figure 23
The transfer function in the Z domain operates exactly the same as the transfer function in the S Domain:

H(z) = Y(z) / X(z)
Z{h[n]} = H(z)

Similarly, the value h[n], which represents the response of the digital system, is known as the impulse response of the system. It is important to note, however, that the definition of an "impulse" is different in the analog and digital domains.
Where C is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC). The contour or path, C, must encircle all of the poles of X(z). There is more information about complex integrals in the book Engineering Analysis6 .
6 http://en.wikibooks.org/wiki/Engineering%20Analysis
This math is relatively advanced compared to some other material in this book, and therefore little or no further attention will be paid to solving the inverse Z-Transform in this manner. Z transform pairs are heavily tabulated in reference texts, so many readers can consider that to be the primary method of solving for inverse Z transforms. There are a number of Z-transform pairs available in table form in The Appendix7.
This equation can be used to find the steady-state response of a system, and also to calculate the steady-state error of the system.
9.5 Star ↔ Z
The Z transform is related to the Star transform through the following change of variables:

z = e^{sT}

Notice that in the Z domain, we don't maintain any information on the sampling period, so converting to the Z domain from a Star Transformed signal loses that information. When converting back to the star domain, however, the value for T can be re-inserted into the equation, if it is still available. Also of some importance is the fact that the Z transform is bilateral, while the Star Transform is unilateral. This means that we can only convert between the two transforms if the sampled signal is zero for all values of n < 0. Because the two transforms are so closely related, it can be said that the Z transform is simply a notational convenience for the Star Transform. With that said, this book could easily use the Star Transform for all problems, and ignore the added burden of Z transform notation entirely. A common example of this is Richard Hamming's book "Numerical Methods for Scientists and Engineers", which uses the Fourier Transform for all problems, considering the Laplace, Star, and Z-Transforms to be merely notational conveniences. However, the Control Systems wikibook is under the impression that the correct utilization of different transforms can make problems easier to solve, and we will therefore use a multi-transform approach.
9.5.1 Z plane
Note: The lower-case z is the name of the variable, and the upper-case Z is the name of the Transform and the plane.
z is a complex variable with a real part and an imaginary part. In other words, we can define z as such:

z = Re(z) + j Im(z)

Since z can be broken down into two independent components, it often makes sense to graph the variable z on the Z-plane. In the Z-plane, the horizontal axis represents the real part of z, and the vertical axis represents the magnitude of the imaginary part of z. Notice also that if we define z in terms of the star-transform relation:

z = e^{sT}

we can separate out s into real and imaginary parts:

s = σ + jω

We can plug this into our equation for z:

z = e^{(σ + jω)T} = e^{σT} e^{jωT}

Through Euler's formula, we can separate out the complex exponential as such:

z = e^{σT} (cos(ωT) + j sin(ωT))

If we define two new variables, M and φ:

M = e^{σT}
φ = ωT

We can write z in terms of M and φ. Notice that it is Euler's equation:

z = M cos(φ) + jM sin(φ)

Which is clearly a polar representation of z, with the magnitude of the polar function (M) based on the real part of s, and the angle of the polar function (φ) based on the imaginary part of s.
Note that we can remove the unit step function, and change the limits of the sum:

X(z) = Σ_{n=0}^{∞} e^{−2n} z^{−n}

This is because the series is 0 for all time less than n = 0. If we try to combine the n terms, we get the following result:

X(z) = Σ_{n=0}^{∞} ((e² z)^{−1})^n

Once we have our series in this form, we can break this down to look like our geometric series:

a = 1
r = (e² z)^{−1}

And finally, we can find our final value, using the geometric series formula:

a Σ_{k=0}^{n} r^k = a (1 − r^{n+1})/(1 − r) = (1 − ((e² z)^{−1})^{n+1}) / (1 − (e² z)^{−1})

Again, we know that to make this series converge, we need to make the r value less than 1:

|(e² z)^{−1}| < 1, which requires |e² z| > 1

And finally we obtain the region of convergence for this Z-transform:

|z| > 1/e² = e^{−2}
z and s are complex variables, and therefore we need to take the magnitude in our ROC calculations. The "Absolute Value symbols" are actually the "magnitude calculation", and it is defined as such: if x = A + jB, then

|x| = √(A² + B²)
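As a quick sketch, the closed-form result of this example can also be obtained with the Symbolic Math Toolbox; note that ztrans does not report the region of convergence, so that part of the analysis still has to be stated by hand:

% Sketch: Z-transform of x[n] = e^(-2n) u[n].
syms n z
X = ztrans(exp(-2*n), n, z)
% returns z/(z - exp(-2)), which equals 1/(1 - e^(-2) z^(-1));
% the ROC |z| > e^(-2) must be added separately.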
9.5.3 Laplace ↔ Z
There are no easy, direct ways to convert between the Laplace transform and the Z transform. Nearly all methods of conversion reproduce some aspects of the original equation faithfully, and incorrectly reproduce other aspects. For some of the main mapping techniques between the two, see the Z Transform Mappings Appendix8. However, there are some topics that we need to discuss. First and foremost, conversions between the Laplace domain and the Z domain are not linear; this leads to some of the following problems:

1. L[G(z)H(z)] ≠ G(s)H(s)
2. Z[G(s)H(s)] ≠ G(z)H(z)

This means that when we combine two functions in one domain multiplicatively, we must find a combined transform in the other domain. Here is how we denote this combined transform:

Z[G(s)H(s)] = GH(z)

Notice that we use a horizontal bar over top of the multiplied functions, to denote that we took the transform of the product, not of the individual pieces. However, if we have a system that incorporates a sampler, we can show a simple result. If we have the following format:

Y(s) = X*(s)H(s)

Then we can put everything in terms of the Star Transform:

Y*(s) = X*(s)H*(s)

and once we are in the star domain, we can do a direct change of variables to reach the Z domain:

Y(z) = X(z)H(z)

Note that we can only make this equivalence relationship if the system incorporates an ideal sampler, and therefore one of the multiplicative terms is in the star domain.
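In practice, the Control System Toolbox implements several of these mapping techniques; the sketch below uses an arbitrary first-order plant and sampling time purely for illustration:

% Sketch: converting a continuous model to a discrete equivalent.
sysc = tf(1, [1 1]);         % G(s) = 1/(s+1), illustration value
T = 0.1;                     % sampling period, illustration value
sysd = c2d(sysc, T, 'zoh')   % zero-order-hold equivalent G(z)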
9.5.4 Example
Let s say that we have the following equation in the Laplace domain:
Y(s) = A*(s)B(s) + C(s)D(s)

And because we have a discrete sampler in the system, we want to analyze it in the Z domain. We can break up this equation into two separate terms, and transform each:

Z[A*(s)B(s)] = A(z)B(z)

And

Z[C(s)D(s)] = CD(z)

And when we add them together, we get our result:

Y(z) = A(z)B(z) + CD(z)
9.6 Z ↔ Fourier
By substituting variables, we can relate the Star transform to the Fourier Transform as well:

e^{sT} = e^{jω}
e^{(σ + jω)T} = e^{jω}

If we assume that T = 1, we can relate the two equations together by setting the real part of s to zero. Notice that the relationship between the Laplace and Fourier transforms is mirrored here, where the Fourier transform is the Laplace transform with no real part to the transform variable. There are a number of discrete-time variants to the Fourier transform as well, which are not discussed in this book. For more information about these variants, see Digital Signal Processing9.
9.7 Reconstruction
Some of the easiest reconstruction circuits are called "Holding circuits". Once a signal has been transformed using the Star Transform (passed through an ideal sampler), the signal must be "reconstructed" using one of these hold systems (or an equivalent) before it can be analyzed in a Laplace-domain system. If we have a sampled signal denoted by the Star Transform X*(s), we want to reconstruct that signal into a continuous-time waveform, so that we can manipulate it using Laplace-transform techniques. Let's say that we have the sampled input signal, a reconstruction circuit denoted G(s), and an output denoted with the Laplace-transform variable Y(s). We can show the relationship as follows:
9 http://en.wikibooks.org/wiki/Digital%20Signal%20Processing
Y(s) = X*(s)G(s)

Reconstruction circuits, then, are physical devices that we can use to convert a digital, sampled signal into a continuous-time domain, so that we can take the Laplace transform of the output signal.
Figure 24
A zero-order hold circuit is a circuit that essentially inverts the sampling process: The value of the sampled signal at time t is held on the output for T time. The output waveform of a zero-order hold circuit therefore looks like a staircase approximation to the original waveform. The transfer function for a zero-order hold circuit, in the Laplace domain, is written as such:
Zero-Order Hold

G_h0(s) = (1 − e^{−Ts}) / s
The Zero-order hold is the simplest reconstruction circuit, and (like the rest of the circuits on this page) assumes zero processing delay in converting between digital to analog.
A continuous input signal (gray) and the sampled signal with a zero-order hold
Figure 26
The zero-order hold creates a step output waveform, but this isn't always the best way to reconstruct the circuit. Instead, the First-Order Hold circuit takes the derivative of the waveform at the time t, and uses that derivative to make a guess as to where the output waveform is going to be at time (t + T). The first-order hold circuit then "draws a line" from the current position to the expected future position, as the output of the waveform.
First-Order Hold

G_h1(s) = [(1 + Ts) / T] · [(1 − e^{−Ts}) / s]²
Keep in mind, however, that the next value of the signal will probably not be the same as the expected value of the next data point, and therefore the first-order hold may have a number of discontinuities.
Figure 27
An input signal (grey) and the first-order hold circuit output (red)
This circuit is more complicated than either of the other hold circuits, but sometimes added complexity is worth it if we get better performance from our reconstruction circuit.
Figure 28
Figure 29: An input signal (grey) and the output signal through a linear approximation circuit
10 System Delays
10.1 Delays
A system can be built with an inherent delay. Delays are units that cause a time-shift in the input signal, but that don't affect the signal characteristics. An ideal delay is a delay system that doesn't affect the signal characteristics at all, and that delays the signal for an exact amount of time. Some delays, like processing delays or transmission delays, are unintentional. Other delays however, such as synchronization delays, are an integral part of a system. This chapter will talk about how delays are utilized and represented in the Laplace Domain. Once we represent a delay in the Laplace domain, it is an easy matter, through change of variables, to express delays in other domains.
Figure 30
Oftentimes, the phase margin is approximated by the following relationship:
Delay Margin (approx)

φ_m ≈ 100ζ

The Greek letter zeta (ζ) is a quantity called the damping ratio, and we discuss this quantity in more detail in the next chapter.
Transform-Domain Delays
The delayed Z-transform of a signal x[n], delayed by Δ sampling periods, is defined as:

Z(x[n], Δ) = X(z, Δ) = Σ_{n=−∞}^{∞} x[n − Δ] z^{−n}

which corresponds to Z[X(s) e^{−ΔTs}] in the Laplace domain. The modified Z-transform is obtained from the delayed Z-transform through the substitution Δ = 1 − m:

X(z, m) = Z(x[n], m) = Σ_{n=−∞}^{∞} x[n + m − 1] z^{−n}
http://en.wikipedia.org/wiki/Advanced%20Z-transform
The poles are located at s = −l, −m, −n. Now, we can use partial fraction expansion to separate out the transfer function:

H(s) = a / ((s + l)(s + m)(s + n)) = A/(s + l) + B/(s + m) + C/(s + n)
Using the inverse transform on each of these component fractions (looking up the transforms in our table), we get the following:

h(t) = A e^{−lt} u(t) + B e^{−mt} u(t) + C e^{−nt} u(t)

But, since s is a complex variable, l, m and n can all potentially be complex numbers, with a real part (σ) and an imaginary part (jω). If we just look at the first term:

A e^{−lt} u(t) = A e^{−(σ_l + jω_l)t} u(t) = A e^{−σ_l t} e^{−jω_l t} u(t)

Using Euler's Equation1 on the imaginary exponent, we get:
1 http://en.wikipedia.org/wiki/Euler%27s_identity
A e^{−σ_l t} [cos(ω_l t) − j sin(ω_l t)] u(t)

And taking the real part of this equation, we are left with our final result:

A e^{−σ_l t} cos(ω_l t) u(t)

We can see from this equation that every pole will have an exponential part, and a sinusoidal part to its response. We can also go about constructing some rules:
1. if σ_l = 0, the response of the pole is a perfect sinusoid (an oscillator)
2. if ω_l = 0, the response of the pole is a perfect exponential.
3. if σ_l > 0, the exponential part of the response will decay towards zero.
4. if σ_l < 0, the exponential part of the response will rise towards infinity.
From the last two rules, we can see that all poles of the system must have negative real parts, and therefore they must all have the form (s + l) for the system to be stable. We will discuss stability in later chapters.
Where N(s) and D(s) are simple polynomials. Zeros are the roots of N(s) (the numerator of the transfer function) obtained by setting N(s) = 0 and solving for s. The polynomial order of a function is the value of the highest exponent in the polynomial. Poles are the roots of D(s) (the denominator of the transfer function), obtained by setting D(s) = 0 and solving for s. Because of our restriction above, that a transfer function must not have more zeros than poles, we can state that the polynomial order of D(s) must be greater than or equal to the polynomial order of N(s).
11.3.1 Example
Consider the transfer function:

H(s) = (s + 2) / (s² + 0.25)

We define N(s) and D(s) to be the numerator and denominator polynomials, as such:

N(s) = s + 2
D(s) = s² + 0.25

We set N(s) to zero, and solve for s:
N(s) = s + 2 = 0   →   s = −2

So we have a zero at s = −2. Now, we set D(s) to zero, and solve for s to obtain the poles of the equation:

D(s) = s² + 0.25 = 0   →   s = +i√0.25, −i√0.25

And simplifying this gives us poles at: −i/2, +i/2. Remember, s is a complex variable, and it can therefore take imaginary and real values.
H(s) = ω_n² / (s² + 2ζω_n s + ω_n²)

Where ζ is called the damping ratio of the function, and ω_n is called the natural frequency of the system. If ζ and ω_n are exactly known for a second order system, the time responses can be easily plotted and stability can easily be checked. More information on second order systems can be found here2.
http://wikis.controltheorypro.com/index.php?title=Second_Order_Systems
12 State-Space Equations
12.1 Time-Domain Approach
The "Classical" method of controls (what we have been studying so far) has been based mostly in the transform domain. When we want to control the system in general we use the Laplace transform (Z-Transform for digital systems) to represent the system, and when we want to examine the frequency characteristics of a system, we use the Fourier Transform. The question arises, why do we do this? Let s look at a basic second-order Laplace Transform transfer function: Y (s) 1+s = G(s) = X (s) 1 + 2s + 5s2 And we can decompose this equation in terms of the system inputs and outputs: (1 + 2s + 5s2 )Y (s) = (1 + s)X (s) Now, when we take the inverse Laplace transform of our equation, we can see that: dy (t) d2 y (t) dx(t) +5 = x(t) + dt dt2 dt
y (t) + 2
The Laplace transform is masking the fact that we are dealing with second-order differential equations. The Laplace transform moves a system out of the time-domain into the complex frequency domain, to study and manipulate our systems as algebraic polynomials instead of linear ODEs. Given the complexity of differential equations, why would we ever want to work in the time domain? It turns out that if we decompose our higher-order differential equations into multiple first-order equations, we can find a new method for easily manipulating the system without having to use integral transforms. The solution to this problem is state variables. By taking our multiple first-order differential equations, and analyzing them in vector form, we can not only do the same things we were doing in the time domain using simple matrix algebra, but now we can easily account for systems with multiple inputs and multiple outputs, without adding much unnecessary complexity. This demonstrates why the "modern" state-space approach to controls has become popular.
12.2 State-Space
w:State space (controls)1 In a state space system, the internal state of the system is explicitly accounted for by an equation known as the state equation. The system output is given in terms of a combination of the current system state, and the current system input, through the output equation. These two equations form a system of equations known collectively as statespace equations. The state-space is the vector space that consists of all the possible internal states of the system. Because the state-space must be nite, a system can only be described by state-space equations if the system is lumped. For a system to be modeled using the state-space method, the system must meet this requirement: 1. The system must be lumped This text mostly considers linear state space systems, where the state and output equations satisfy the superposition principle and the state space is linear. However, the state-space approach is equally valid for nonlinear systems although some specic methods are not applicable to nonlinear systems. State Central to the state-space notation is the idea of a state. A state of a system is the current value of internal elements of the system, that change separately (but not completely unrelated) to the output of the system. In essence, the state of a system is an explicit account of the values of the internal system components. Here are some examples: Consider an electric circuit with both an input and an output terminal. This circuit may contain any number of inductors and capacitors. The state variables may represent the magnetic and electric elds of the inductors and capacitors, respectively. Consider a spring-mass-dashpot system. The state variables may represent the compression of the spring, or the acceleration at the dashpot. Consider a chemical reaction where certain reagents are poured into a mixing container, and the output is the amount of the chemical product produced over time. The state variables may represent the amounts of un-reacted chemicals in the container, or other properties such as the quantity of thermal energy in the container (that can serve to facilitate the reaction).
http://en.wikipedia.org/wiki/State%20space%20%28controls%29
92
Input variables
A SISO (Single Input Single Output) system will only have a single input value, but a MIMO system may have multiple inputs. We need to define all the inputs to the system, and we need to arrange them into a vector.
Output variables
This is the system output value, and in the case of MIMO systems, we may have several. Output variables should be independent of one another, and only dependent on a linear combination of the input vector and the state vector.
State Variables
The state variables represent values from inside the system, that can change over time. In an electric circuit, for instance, the node voltages or the mesh currents can be state variables. In a mechanical system, the forces applied by springs, gravity, and dashpots can be state variables. We denote the input variables with u, the output variables with y, and the state variables with x. In essence, we have the following relationship:

y = f(x, u)

Where f(x, u) is our system. Also, the state variables can change with respect to the current state and the system input:

x′ = g(x, u)

Where x′ is the rate of change of the state variables. We will define f(u, x) and g(u, x) in the next chapter.
y(t) = h[t, x(t), u(t)]

Note: If x′(t) and y(t) are not linear combinations of x(t) and u(t), the system is said to be nonlinear. We will attempt to discuss non-linear systems in a later chapter.
The first equation shows that the system state change is dependent on the previous system state, the initial state of the system, the time, and the system inputs. The second equation shows that the system output is dependent on the current system state, the system input, and the current time. If the system state change x′(t) and the system output y(t) are linear combinations of the system state and input vectors, then we can say the systems are linear systems, and we can rewrite them in matrix form:
State Equation

x′ = A(t)x(t) + B(t)u(t)

Output Equation

y(t) = C(t)x(t) + D(t)u(t)

If the systems themselves are time-invariant, we can re-write this as follows:

x′ = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

The State Equation shows the relationship between the system's current state and its input, and the future state of the system. The Output Equation shows the relationship between the system state and its input, and the output.
These equations show that in a given system, the current output is dependent on the current input and the current state. The future state is also dependent on the current state and the current input. It is important to note at this point that the state space equations of a particular system are not unique, and there are an infinite number of ways to represent these equations by manipulating the A, B, C and D matrices using row operations. There are a number of "standard forms" for these matrices, however, that make certain computations easier. Converting between these forms will require knowledge of linear algebra.
State-Space Basis Theorem: Any system that can be described by a finite number of nth order differential equations or nth order difference equations, or any system that can be approximated by them, can be described using state-space equations. The general solutions to the state-space equations, therefore, are solutions to all such sets of equations.
12.5.1 Matrices: A B C D
Our system has the form:

x′(t) = g[t0, t, x(t), x(0), u(t)]
y(t) = h[t, x(t), u(t)]

We've bolded several quantities to try and reinforce the fact that they can be vectors, not just scalar quantities. If these systems are time-invariant, we can simplify them by removing the time variables:

x′(t) = g[x(t), x(0), u(t)]
y(t) = h[x(t), u(t)]

Now, if we take the partial derivatives of these functions with respect to the input and the state vector at time t0, we get our system matrices:

A = g_x[x(0), x(0), u(0)]
B = g_u[x(0), x(0), u(0)]
C = h_x[x(0), u(0)]
D = h_u[x(0), u(0)]
In our time-invariant state space equations, we write these matrices and their relationships as:

x′(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

We have four constant matrices: A, B, C, and D. We will explain these matrices below:
Matrix A
Matrix A is the system matrix, and relates how the current state affects the state change x′. If the state change is not dependent on the current state, A will be the zero matrix. The exponential of the state matrix, e^{At}, is called the state transition matrix, and is an important function that we will describe below.
Matrix B
Matrix B is the control matrix, and determines how the system input affects the state change. If the state change is not dependent on the system input, then B will be the zero matrix.
Matrix C
Matrix C is the output matrix, and determines the relationship between the system state and the system output.
Matrix D
Matrix D is the feed-forward matrix, and allows for the system input to affect the system output directly. A basic feedback system like those we have previously considered does not have a feed-forward element, and therefore for most of the systems we have already considered, the D matrix is the zero matrix.
If the matrix and vector dimensions do not agree with one another, the equations are invalid and the results will be meaningless. Matrices and vectors must have compatible dimensions or they cannot be combined using matrix operations. For the rest of the book, we will be using the small template on the right as a reminder about the matrix dimensions, so that we can keep a constant notation throughout the book.
(A, B, C, D)
x′ = Ax + Bu
y = Cx + Du
We can create the state variable vector x in the following manner:

x1 = y(t)
x2 = dy(t)/dt
x3 = d²y(t)/dt²

Now, we can define the state vector x in terms of the individual x components, and we can create the future state vector as well:

x = [x1; x2; x3],   x′ = [x1′; x2′; x3′]

And with that, we can assemble the state-space equations for the system:

x′ = [0 1 0; 0 0 1; −a0 −a1 −a2] x(t) + [0; 0; 1] u(t)
y(t) = [1 0 0] x(t)

Granted, this is only a simple example, but the method should become apparent to most readers.
From a transfer function of the form

T(s) = (a_{m−1} s^{m−1} + ... + a1 s + a0) / (s^n + b_{n−1} s^{n−1} + ... + b1 s + b0)

we can write the corresponding state-space matrices directly:

A = [0 1 0 ... 0;
     0 0 1 ... 0;
     ...
     0 0 0 ... 1;
     −b0 −b1 −b2 ... −b_{n−1}]

B = [0; 0; ...; 0; 1]

C = [a0 a1 ... a_{m−1}]

D = 0
This form of the equations is known as the controllable canonical form of the system matrices, and we will discuss this later. Notice that to perform this method, the denominator and numerator polynomials must be monic: the coefficient of the highest-order term must be 1. If the coefficient of the highest order term is not 1, you must divide your equation by that coefficient to make it 1.
d²y(t)/dt² = −a1 x1 − a2 x2 + x3
x3′ = −a0 y(t) + u(t)

The state-space equations for the system will then be given by:

x′ = [0 1 0; −a1 −a2 1; −a0 0 0] x(t) + [0; 0; 1] u(t)
y(t) = [1 0 0] x(t)

x may also be used in any number of variable transformations, as a matter of mathematical convenience. However, the variables y and u correspond to physical signals, and may not be arbitrarily selected, redefined, or transformed as x can be.
As we can see from this equation, even though we have a valid state equation, the variables x1 and x2 don't necessarily correspond to any measurable physical event, but are instead dummy variables constructed by the user to help define the system. Note, however, that the variables y and u do correspond to physical values, and cannot be changed.
12.8 Discretization
If we have a system (A, B, C, D) that is defined in continuous time, we can discretize the system so that an equivalent process can be performed using a digital computer. We can use the definition of the derivative, as such:

x′(t) = lim_{T→0} [x(t + T) − x(t)] / T
And substituting this into the state equation with some approximation (and ignoring the limit for now) gives us:

lim_{T→0} [x(t + T) − x(t)] / T = Ax(t) + Bu(t)

x(t + T) = x(t) + Ax(t)T + Bu(t)T

x(t + T) = (1 + AT)x(t) + (BT)u(t)

We are able to remove that limit because in a discrete system, the time interval between samples is positive and non-negligible. By definition, a discrete system is only defined at certain time points, and not at all time points as the limit would have indicated. In a discrete system, we are interested only in the value of the system at discrete points. If those points are evenly spaced by every T seconds (the sampling time), then the samples of the system occur at t = kT, where k is an integer. Substituting kT for t into our equation above gives us:

x(kT + T) = (1 + AT)x(kT) + TBu(kT)

Or, using the square-bracket shorthand that we've developed earlier, we can write:

x[k + 1] = (1 + AT)x[k] + TBu[k]

In this form, the state-space system can be implemented quite easily into a digital computer system using software, not complicated analog hardware. We will discuss this relationship and digital systems more specifically in a later chapter. We will write out the discrete-time state-space equations as:

x[n + 1] = A_d x[n] + B_d u[n]
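As a sketch, the forward-Euler approximation derived above can be compared with the exact zero-order-hold discretization from the Control System Toolbox; the matrices and sampling time below are arbitrary illustration values:

% Sketch: Euler discretization versus exact zero-order-hold discretization.
A = [0 1; -2 -3];  B = [0; 1];
C = [1 0];         D = 0;
T = 0.05;                          % sampling period, illustration value
Ad_euler = eye(2) + A*T;           % the (1 + AT) term from the derivation
Bd_euler = B*T;
sysd = c2d(ss(A, B, C, D), T);     % exact: Ad = expm(A*T)
norm(sysd.A - Ad_euler)            % small when T is small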
Where A′ is the transpose of matrix A. The prime notation is also frequently used to denote the time-derivative. Most of the matrices that we will be talking about are time-invariant; there is no ambiguity because we will never take the time derivative of a time-invariant matrix. However, for a time-variant matrix we will use the following notations to distinguish between the time-derivative and the transpose:

A(t)′ denotes the transpose.
A′(t) denotes the time-derivative.

Note that certain variables which are time-variant are not written with the (t) postscript, such as the variables x, y, and u. For these variables, the default behavior of the prime is the time-derivative, such as in the state equation. If the transpose needs to be taken of one of these vectors, the (t)′ postfix will be added explicitly to correspond to our notation above. For instances where we need to use the Hermitian transpose, we will use the notation:

A^H

This notation is common in other literature, and raises no obvious ambiguities here.
Integrating both sides, and raising both sides to a power of e, we obtain the result:

x(t) = e^{At + C}

Where C is a constant. We can assign D = e^C to make the equation easier, but we also know that D will then be the initial conditions of the system. This becomes obvious if we plug the value zero into the variable t. The final solution to this equation then is given as:

x(t) = e^{A(t − t0)} x(t0)

We call the matrix exponential e^{At} the state-transition matrix, and calculating it, while difficult at times, is crucial to analyzing and manipulating systems. We will talk more about calculating the matrix exponential below.
13.3.1 Example
Take the derivative of the following with respect to time:

e^{−At} x(t)

The product rule from differentiation reminds us that if we have two functions multiplied together, f(t)g(t), and we differentiate with respect to t, then the result is:

f′(t)g(t) + f(t)g′(t)

If we set our functions accordingly:
f(t) = e^{−At}
g(t) = x(t)
f′(t) = −Ae^{−At}
g′(t) = x′(t)
Then the result of the product rule is:

e^{−At} x′(t) − e^{−At} A x(t)

If we look at this result, it is the same as the left-hand side of our equation above. Using the result from our example, we can condense the left side of our equation into a derivative:

d(e^{−At} x(t))/dt = e^{−At} B u(t)

Now we can integrate both sides, from the initial time (t0) to the current time (t), using a dummy variable τ, and we will get closer to our result. Finally, if we premultiply by e^{At}, we get our final result:

General State Equation Solution

x(t) = e^{A(t−t0)} x(t0) + ∫_{t0}^{t} e^{A(t−τ)} B u(τ) dτ
If we plug this solution into the output equation, we get:

General Output Equation Solution

y(t) = C e^{A(t−t0)} x(t0) + C ∫_{t0}^{t} e^{A(t−τ)} B u(τ) dτ + D u(t)

This is the general Time-Invariant solution to the state-space equations, with non-zero input. These equations are important results, and students who are interested in a further study of control systems would do well to memorize these equations.
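As a rough check of these solutions, the response can also be computed numerically rather than by evaluating the convolution integral by hand. The sketch below assumes MATLAB's Control System Toolbox; the matrices, the step input, and the initial state are illustrative assumptions:

A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;    % illustrative system
sys = ss(A, B, C, D);
t  = 0:0.01:5;
u  = ones(size(t));                % unit-step input u(t)
x0 = [1; -1];                      % non-zero initial state x(t0)
[y, t, x] = lsim(sys, u, t, x0);   % numerically evaluates the solution above
plot(t, y)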
The matrix exponential can be calculated directly from its Taylor-series expansion:

e^{At} = Σ_{n=0}^{∞} (At)^n / n!
More information about diagonal matrices and Jordan-form matrices can be found in Engineering Analysis4. Also, we can attempt to diagonalize the matrix A into a diagonal matrix or a Jordan canonical matrix. The exponential of a diagonal matrix is simply the diagonal elements individually raised to that exponential. The exponential of a Jordan canonical matrix is slightly more complicated, but there is a useful pattern that can be exploited to find the solution quickly. Interested readers should read the relevant passages in Engineering Analysis5. The state-transition matrix, and matrix exponentials in general, are very important tools in control engineering.
4 5
http://en.wikibooks.org/wiki/Engineering%20Analysis http://en.wikibooks.org/wiki/Engineering%20Analysis
If A is a high-order matrix, this inverse can be difficult to solve. If the A matrix is in the Jordan Canonical form, then the matrix exponential can be generated quickly using the following formula:

e^{Jt} = e^{λt} [ 1   t   t²/2!   ...   t^n/n!
                  0   1   t       ...   t^(n−1)/(n−1)!
                  ⋮                ⋱    ⋮
                  0   0   0       ...   1 ]

Where λ is the eigenvalue (the value on the diagonal) of the Jordan canonical matrix.
Another method of calculating the matrix exponential is spectral decomposition. If A has n distinct eigenvalues λ_i, with right-eigenvectors v_i and left-eigenvectors w_i, then:

e^{At} = Σ_{i=1}^{n} e^{λ_i t} v_i w_i′

Note that w_i′ is the transpose of the i-th left-eigenvector, not the derivative of it. We will discuss the concepts of "eigenvalues", "eigenvectors", and the technique of spectral decomposition in more detail in a later chapter.
We can find the eigenvalues of this matrix as λ = i, −i. If we plug these values into our eigenvector equation, we get:

[ −i   1 ] v1 = 0
[ −1  −i ]

[ i    1 ] v2 = 0
[ −1   i ]

And we can solve for our eigenvectors:

v1 = [ 1 ]     v2 = [ 1  ]
     [ i ]          [ −i ]

Now, using spectral decomposition, we can construct the state-transition matrix:

e^{At} = e^{it} v1 w1 + e^{−it} v2 w2 = e^{it} [1; i][1/2  −i/2] + e^{−it} [1; −i][1/2  i/2]

If we remember Euler's Identity, we can decompose the complex exponentials into sinusoids. Performing the vector multiplications, all the imaginary terms cancel out, and we are left with our result:

e^{At} = [ cos t    sin t ]
         [ −sin t   cos t ]
The reader is encouraged to perform the multiplications, and attempt to derive this result.
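A quick numerical spot-check of this result can also be done in MATLAB (base MATLAB only; the time instant chosen is an arbitrary assumption):

A = [0 1; -1 0];
t = 0.7;                                          % arbitrary time instant
err = norm(expm(A*t) - [cos(t) sin(t); -sin(t) cos(t)]);
disp(err)                                         % should be very close to zero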
function [phi] = statetrans(A)
    t = sym('t');
    phi = expm(A * t);
end

Use this MATLAB function to find the state-transition matrix for the following matrices (warning: the calculation may take some time):

1. A1 = [2 0; 0 2]
2. A2 = [0 1; -1 0]
3. A3 = [2 1; 0 2]

Matrix 1 is a diagonal matrix, Matrix 2 has complex eigenvalues, and Matrix 3 is in Jordan canonical form. These three matrices should be representative of some of the common forms of system matrices. The following code snippets are the input commands into MATLAB to produce these matrices, and the output results:
Matrix A1
>> A1 = [2 0 ; 0 2];
>> statetrans(A1)
ans =
[ exp(2*t),        0]
[        0, exp(2*t)]

Matrix A2
>> A2 = [0 1 ; -1 0];
>> statetrans(A2)
ans =
[  cos(t), sin(t)]
[ -sin(t), cos(t)]

Matrix A3
>> A3 = [2 1 ; 0 2];
>> statetrans(A3)
ans =
[ exp(2*t), t*exp(2*t)]
[        0,   exp(2*t)]
Inverse Laplace Method
s = sym('s');
[n,n] = size(A);
in = inv(s*eye(n) - A);
eAt = ilaplace(in);

Spectral Decomposition
t = sym('t');
[n,n] = size(A);
[V, e] = eig(A);
W = inv(V);
sum = zeros(n);
for I = 1:n
    sum = sum + expm(e(I,I)*t)*V(:,I)*W(I,:);
end;
eAt = sum;

All three of these methods should produce the same answers. The student is encouraged to verify this.
x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ

Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q

The function Φ is called the state-transition matrix, because it (like the matrix exponential from the time-invariant case) controls the change of states in the state equation. However, unlike the time-invariant case, we cannot define this as a simple exponential. In fact, Φ can't be defined in general, because it will actually be a different function for every system. However, the state-transition matrix does follow some basic properties that we can use to determine the state-transition matrix. In a time-variant system, the general solution is obtained when the state-transition matrix is determined. For that reason, the first thing (and the most important thing) that we need to do here is find that matrix. We will discuss the solution to that matrix below.
this matrix typically is composed of exponential or sinusoidal functions. The exact form of the state-transition matrix is dependent on the system itself, and on the form of the system's differential equation. There is no single "template solution" for this matrix.

The state-transition matrix is not completely unknown; it must always satisfy the following relationships:

∂Φ(t, t0)/∂t = A(t)Φ(t, t0)
Φ(τ, τ) = I

And Φ also must have the following properties:

1. Φ(t2, t1)Φ(t1, t0) = Φ(t2, t0)
2. Φ^{−1}(t, τ) = Φ(τ, t)
3. Φ^{−1}(t, τ)Φ(t, τ) = I
4. dΦ(t, t0)/dt evaluated at t = t0 is A(t0)

If the system is time-invariant, we can define Φ as:

Φ(t, t0) = e^{A(t−t0)}

The reader can verify that this solution for a time-invariant system satisfies all the properties listed above. However, in the time-variant case, there are many different functions that may satisfy these requirements, and the solution is dependent on the structure of the system. The state-transition matrix must be determined before analysis on the time-varying solution can continue. We will discuss some of the methods for determining this matrix below.
x (t) = A(t)x(t) The solutions to this equation form an n-dimensional vector space in the interval T = (a, b). Any set of n linearly-independent solutions {x1 , x2 , ..., xn } to the equation above is called a fundamental set of solutions. Readers who have a background in Linear Algebra1 may recognize that the fundamental set is a basis set for the solution space. Any basis set that spans the entire solution space is a valid fundamental set. A fundamental matrix is formed by creating a matrix out of the n fundamental vectors. We will denote the fundamental matrix with a script capital X: X = x1 x2 xn The fundamental matrix will satisfy the state equation: X (t) = A(t)X (t) Also, any matrix that solves this equation can be a fundamental matrix if and only if the determinant of the matrix is non-zero for all time t in the interval T. The determinant must be non-zero, because we are going to use the inverse of the fundamental matrix to solve for the state-transition matrix.
http://en.wikibooks.org/wiki/Linear%20Algebra
X(t) = [ e^{−t}   (1/2)e^{t}
         0         e^{−t}   ]

the first task is to find the inverse of the fundamental matrix. Because the fundamental matrix is a 2 × 2 matrix, the inverse can be given easily through the common formula (here the determinant of X(t) is e^{−2t}):

X^{−1}(t) = [ e^{t}   −(1/2)e^{3t}
              0         e^{t}     ]

The state-transition matrix is given by:

Φ(t, t0) = X(t)X^{−1}(t0) = [ e^{−t}   (1/2)e^{t} ] [ e^{t0}   −(1/2)e^{3t0} ]
                            [ 0         e^{−t}    ] [ 0           e^{t0}    ]

Φ(t, t0) = [ e^{−(t−t0)}   (1/2)(e^{t+t0} − e^{−t+3t0})
             0              e^{−(t−t0)}               ]
Method 2

If A(t) commutes with its own integral, that is, if

A(t) ∫_τ^t A(θ) dθ = ( ∫_τ^t A(θ) dθ ) A(t)

then the state-transition matrix can be written directly as a matrix exponential:

Φ(t, τ) = exp( ∫_τ^t A(θ) dθ )
The state-transition matrix will commute as described above if any of the following conditions are true:

1. A is a constant matrix (time-invariant).
2. A is a diagonal matrix.
3. A(t) = Ā f(t), where Ā is a constant matrix and f(t) is a single-valued function (not a matrix).

If none of the above conditions are true, then you must use method 3.

Method 3

If A(t) can be decomposed as the following sum:
A(t) = Σ_{i=1}^{n} M_i f_i(t)

Where M_i is a constant matrix such that M_i M_j = M_j M_i, and f_i is a single-valued function. If A(t) can be decomposed in this way, then the state-transition matrix can be given as:

Φ(t, τ) = Π_{i=1}^{n} exp( M_i ∫_τ^t f_i(θ) dθ )
It will be left as an exercise for the reader to prove that if A(t) is time-invariant, the equation in method 2 above reduces to the state-transition matrix e^{A(t−τ)}.
Where f1(t) = t, and f2(t) = 1. Using the formula described above gives us:

Φ(t, τ) = exp( M1 ∫_τ^t θ dθ ) exp( M2 ∫_τ^t dθ )

Solving the two integrals gives us:

Φ(t, τ) = exp( [ (t²−τ²)/2       0
                 0         (t²−τ²)/2 ] ) exp( [ 0        t−τ
                                                −(t−τ)   0   ] )

The first term is a diagonal matrix, and the solution to that matrix function is all the individual elements of the matrix raised as an exponent of e. The second term can be decomposed as:

[ 0        t−τ ]  =  (t−τ) [ 0    1 ]
[ −(t−τ)   0   ]           [ −1   0 ]

Recognizing the second factor as the sinusoidal matrix exponential computed earlier, the complete state-transition matrix is:

Φ(t, τ) = e^{(t²−τ²)/2} [ cos(t−τ)    sin(t−τ)
                          −sin(t−τ)   cos(t−τ) ]
15 Digital State-Space
15.1 Digital Systems
Digital systems, expressed previously as dierence equations or Z-Transform transfer functions can also be used with the state-space representation. Also, all the same techniques for dealing with analog systems can be applied to digital systems, with only minor changes.
Starting from the general continuous-time solution:

x(t) = e^{A(t−t0)} x(t0) + ∫_{t0}^{t} e^{A(t−τ)} B u(τ) dτ
Now, we apply a zero-order hold on our input, to make the system digital. Notice that we set our start time t0 = kT, because we are only interested in the behavior of our system during a single sample period:
We were able to remove u(kT) from the integral because it did not rely on τ. We now define a new function, Θ, as follows:

Θ(t, t0) = ∫_{t0}^{t} Φ(t, τ) B dτ
Inserting this new expression into our equation, and setting t = (k + 1)T, gives us:

x((k + 1)T) = Φ((k + 1)T, kT) x(kT) + Θ((k + 1)T, kT) u(kT)

Now Φ(T) and Θ(T) are constant matrices, and we can give them new names. The d subscript denotes that they are digital versions of the coefficient matrices:

A_d = Φ((k + 1)T, kT)
B_d = Θ((k + 1)T, kT)

We can use these values in our state equation, converting to our bracket notation instead:

x[k + 1] = A_d x[k] + B_d u[k]
http://en.wikipedia.org/wiki/Discretization
x[k] = e^{AkT} x[0] + ∫_0^{kT} e^{A(kT−τ)} B u(τ) dτ

Now, if we want to analyze the k+1 term, we can solve the equation again:

x[k + 1] = e^{A(k+1)T} x[0] + ∫_0^{(k+1)T} e^{A((k+1)T−τ)} B u(τ) dτ

Separating out the variables, and breaking the integral into two parts, gives us:

x[k + 1] = e^{AT} ( e^{AkT} x[0] + ∫_0^{kT} e^{A(kT−τ)} B u(τ) dτ ) + ∫_{kT}^{(k+1)T} e^{A(kT+T−τ)} B u(τ) dτ

If we substitute in a new variable α = (k + 1)T − τ, and if we recognize the following relationship:

e^{AkT} x[0] + ∫_0^{kT} e^{A(kT−τ)} B u(τ) dτ = x[k]

We get our final result:

x[k + 1] = e^{AT} x[k] + ( ∫_0^{T} e^{Aα} dα ) B u[k]
Comparing this equation to our regular solution gives us a set of relationships for converting the continuous-time system into a discrete-time system. Here, we will use "d" subscripts to denote the system matrices of a discrete system, and we will use a "c" subscript to denote the system matrices of a continuous system.

Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q

A_d = e^{A_c T}
B_d = ( ∫_0^{T} e^{A_c τ} dτ ) B_c
C_d = C_c
D_d = D_c
Note: The MATLAB command c2d performs this continuous-to-discrete conversion.

If the A_c matrix is nonsingular, then we can find its inverse and instead define B_d as:
B_d = A_c^{−1} (A_d − I) B_c
The dierences in the discrete and continuous matrices are due to the fact that the underlying equations that describe our systems are dierent. Continuous-time systems are represented by linear dierential equations, while the digital systems are described by dierence equations. High order terms in a dierence equation are delayed copies of the signals, while high order terms in the dierential equations are derivatives of the analog signal. If we have a complicated analog system, and we would like to implement that system in a digital computer, we can use the above transformations to make our matrices conform to the new paradigm.
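The sketch below, which assumes the Control System Toolbox is available, lets c2d perform the zero-order-hold conversion and compares the result against the matrix-exponential expressions above. The continuous-time system and the sampling time are illustrative assumptions:

Ac = [0 1; -2 -3]; Bc = [0; 1]; Cc = [1 0]; Dc = 0;   % assumed continuous system
T  = 0.1;                                             % assumed sampling time
sysd = c2d(ss(Ac, Bc, Cc, Dc), T, 'zoh');
Ad = expm(Ac*T);                          % Ad = e^(Ac*T)
Bd = Ac \ ((Ad - eye(2)) * Bc);           % Bd = inv(Ac)(Ad - I)Bc, Ac nonsingular
disp(norm(sysd.A - Ad))                   % both differences should be ~0
disp(norm(sysd.B - Bd))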
15.3.2 Notation
Because the coefficient matrices for the discrete systems are computed differently from the continuous-time coefficient matrices, and because the matrices technically represent different things, it is not uncommon in the literature to denote these matrices with different variables. For instance, the following variables are frequently used in place of A and B:

Φ = A_d
R = B_d

These substitutions would give us a system defined by the ordered quadruple (Φ, R, C, D) for representing our equations. As a matter of notational convenience, we will use the letters A and B to represent these matrices throughout the rest of this book.
x_2[n] = y[n + 1]
x_3[n] = y[n + 2]

Which in turn gives us 3 first-order difference equations:

x_1[n + 1] = y[n + 1] = x_2[n]
x_2[n + 1] = y[n + 2] = x_3[n]
x_3[n + 1] = y[n + 3]

Again, we say that matrix x is a vertical vector of the 3 state variables we have defined, and we can write our state equation in the same form as if it were a continuous-time system:

x[n + 1] = [  0     1     0
              0     0     1
             −a_0  −a_1  −a_2 ] x[n] + [ 0
                                         0
                                         1 ] u[n]

y[n] = [ 1  0  0 ] x[n]
x[3] = Ax[2] + Bu[2] = A³x[0] + A²Bu[0] + ABu[1] + Bu[2]

With a little algebraic trickery, we can reduce this pattern to a single equation:

General State Equation Solution

x[n] = A^n x[n_0] + Σ_{m=0}^{n−1} A^{n−1−m} B u[m]
Substituting this result into the output equation gives us:

General Output Equation Solution

y[n] = C A^n x[n_0] + Σ_{m=0}^{n−1} C A^{n−1−m} B u[m] + D u[n]
Where Φ, the state-transition matrix, is defined in a manner similar to the state-transition matrix in the continuous case. However, some of the properties in the discrete-time case are different. For instance, the inverse of the state-transition matrix does not need to exist, and in many systems it does not exist.

Φ[k, k_0] = Π_{m=1}^{k−k_0} A[k − m]
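For a time-invariant digital system, the closed-form sum above can be checked against direct iteration of the state equation. The following sketch uses base MATLAB and an illustrative, stable system:

Ad = [0.9 0.1; 0 0.8]; Bd = [0; 1];      % illustrative discrete-time system
x0 = [1; 0]; N = 20;
u  = ones(1, N);                         % step input u[k] = 1
x = x0;
for k = 1:N
    x = Ad*x + Bd*u(k);                  % iterate x[k+1] = Ad x[k] + Bd u[k]
end
xc = Ad^N * x0;                          % closed-form solution
for m = 0:N-1
    xc = xc + Ad^(N-1-m) * Bd * u(m+1);
end
disp(norm(x - xc))                       % should be ~0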
(A − λI)v = 0

Another value worth finding is the left-eigenvectors of a system, defined as w in the modified characteristic equation:

Left-Eigenvector Equation

wA = λw

For more information about eigenvalues, eigenvectors, and left eigenvectors, read the appropriate sections in the following books: Linear Algebra1 and Engineering Analysis2
16.2.1 Diagonalization
Note: The transition matrix T should not be confused with the sampling time of a discrete system. If needed, we will use subscripts to differentiate between the two.

If the matrix A has a complete set of distinct eigenvalues, the matrix can be diagonalized. A diagonal matrix is a matrix that only has entries on the diagonal, and all the rest of the entries in the matrix are zero. We can define a transformation matrix, T, that satisfies the diagonalization transformation:

A = T D T^{−1}

Which in turn will satisfy the relationship:

e^{At} = T e^{Dt} T^{−1}

The right-hand side of the equation may look more complicated, but because D is a diagonal matrix here (not to be confused with the feed-forward matrix from the output equation), the calculations are much easier. We can define the transition matrix, and the inverse transition matrix, in terms of the eigenvectors and the left eigenvectors:

T = [ v_1  v_2  v_3  ...  v_n ]
1 2
http://en.wikibooks.org/wiki/Linear%20Algebra http://en.wikibooks.org/wiki/Engineering%20Analysis
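The diagonalization relationship above is easy to verify numerically for a matrix with distinct eigenvalues (base MATLAB; the matrix and the time instant below are illustrative assumptions):

A = [0 1; -2 -3];                     % illustrative matrix with distinct eigenvalues
[T, D] = eig(A);                      % columns of T are the eigenvectors v_i
t = 0.5;                              % arbitrary time instant
lhs = expm(A*t);
rhs = T * diag(exp(diag(D)*t)) / T;   % T e^{Dt} T^{-1}
disp(norm(lhs - rhs))                 % should be ~0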
Using spectral decomposition, we can write the matrix exponential as:

e^{At} = Σ_{i=1}^{n} e^{λ_i t} v_i w_i′

Notice that this equation only holds in this form if the matrix A has a complete set of n distinct eigenvalues. Since w_i′ is a row vector, and x(0) is a column vector of the initial system states, we can combine those two into a scalar coefficient α_i:

e^{At} x(t_0) = Σ_{i=1}^{n} α_i e^{λ_i t} v_i
Since the state-transition matrix determines how the system responds to an input, we can see that the system eigenvalues and eigenvectors are a key part of the system response. Let us plug this decomposition into the general solution to the state equation:

State Equation Spectral Decomposition

x(t) = Σ_{i=1}^{n} α_i e^{λ_i t} v_i + Σ_{i=1}^{n} ∫_{0}^{t} e^{λ_i (t−τ)} v_i w_i′ B u(τ) dτ
http://en.wikibooks.org/wiki/Engineering%20Analysis%2FSpectral%20Decomposition
16.3.2 Decoupling
For people who are familiar with linear algebra, the left-eigenvector of the matrix A must be in the null space of the matrix B to decouple the system. If a system can be designed such that the following relationship holds true: wi B = 0 then the system response from that particular eigenvalue will not be aected by the system input u, and we say that the system has been decoupled. Such a thing is dicult to do in practice.
|wi vi |
Systems with smaller condition numbers are better, for a number of reasons: 1. Large condition numbers lead to a large transient response of the system 2. Large condition numbers make the system eigenvalues more sensitive to changes in the system. We will discuss the issue of eigenvalue sensitivity more in a later section.
16.3.4 Stability
We will talk about stability at length in later chapters, but is a good time to point out a simple fact concerning the eigenvalues of the system. Notice that if the eigenvalues of the system matrix A are positive, or (if they are complex) that they have positive real parts, that the system state (and therefore the system output, scaled by the C matrix) will approach innity as time t approaches innity. In essence, if the eigenvalues are positive, the system will not satisfy the condition of BIBO stability, and will therefore become unstable. Another factor that is worth mentioning is that a manufactured system never exactly matches the system model, and there will always been inaccuracies in the specications of the component parts used, within a certain tolerance. As such, the system matrix will be slightly dierent from the mathematical model of the system (although good systems will not be severely dierent), and therefore the eigenvalues and eigenvectors of the system will not be the same values as those derived from the model. These facts give rise to several results:
Non-Unique Eigenvalues 1. Systems with high condition numbers may have eigenvalues that dier by a large amount from those derived from the mathematical model. This means that the system response of the physical system may be very dierent from the intended response of the model. 2. Systems with high condition numbers may become unstable simply as a result of inaccuracies in the component parts used in the manufacturing process. For those reasons, the system eigenvalues and the condition number of the system matrix are highly important variables to consider when analyzing and designing a system. We will discuss the topic of stability in more detail in later chapters.
Eigenvalues and Eigenvectors [b e a c d] because the generalized eigenvectors are listed in order after the regular eigenvector that they are generated from. Regular eigenvectors can be listed in any order.
d=a d=b
If a generates the correct vector d, we will order our eigenvectors as: [a d b c ] but if b generates the correct vector, we can order it as: [a b d c]
http://en.wikibooks.org/wiki/Engineering%20Analysis%2FMatrix%20Forms
J = [ D   0    ...  0
      0   J_1  ...  0
      ⋮   ⋮    ⋱   ⋮
      0   0    ...  J_n ]

Where D is the diagonal block produced by the regular eigenvectors that are not associated with generalized eigenvectors (if any). The J_n blocks are standard Jordan blocks with a size corresponding to the number of eigenvectors/generalized eigenvectors in each sequence. In each J_n block, the eigenvalue associated with the regular eigenvector of the sequence is on the main diagonal, and there are 1's in the super-diagonal.

Equivalence Transformations
x̄′(t) = Āx̄(t) + B̄u(t)
ȳ(t) = C̄x̄(t) + D̄u(t)

Where:

Ā = PAP^{−1}
B̄ = PB
C̄ = CP^{−1}
D̄ = D

We call the matrix P the equivalence transformation between the two sets of equations. It is important to note that the eigenvalues of the matrix A (which are of primary importance to the system) do not change under the equivalence transformation. The eigenvectors of A and the eigenvectors of Ā are related by the matrix P.
Eigenvalues and Eigenvectors If a system is time-variant, it can frequently be useful to use a Lyapunov transformation to convert the system to an equivalent system with a constant A matrix. This is not always possible in general, however it is possible if the A(t) matrix is periodic.
y (t) = CV 1 V x(t) + Du(t) Since our system matrix is now diagonal (or Jordan canonical), the calculation of the state-transition matrix is simplied: eV AV Where is a diagonal matrix.
1
17 MIMO Systems
17.1 Multi-Input, Multi-Output
Systems with more than one input and/or more than one output are known as Multi-Input Multi-Output systems, or they are frequently known by the abbreviation MIMO. This is in contrast to systems that have only a single input and a single output (SISO), like we have been discussing previously.
MIMO Systems x2 = a1 x2 a0 (x1 + x4 ) + u1 (t) x4 = a2 (x4 x1 ) + u2 (t) And nally we can assemble our state space equations: 0 1 a a 1 x = 0 0 0 a2 0
0 0 0 0 a0 1 x+ 0 0 0 0 0 a2
0 0 u1 0 u2 1
1 0 0 0 y1 = x(t) 0 0 0 1 y2
L[Y(t)] = L[CX(t)] + L[DU(t)]

Which gives us the result:

sX(s) − X(0) = AX(s) + BU(s)
Y(s) = CX(s) + DU(s)

Where X(0) is the initial condition of the system state vector in the time domain. If the system is relaxed, we can ignore this term, but for completeness we will continue the derivation with it. We can separate out the variables in the state equation as follows:

sX(s) − AX(s) = X(0) + BU(s)

Then factor out an X(s):

[sI − A]X(s) = X(0) + BU(s)

And then we can multiply both sides by the inverse of [sI − A] to give us our state equation:

X(s) = [sI − A]^{−1}X(0) + [sI − A]^{−1}BU(s)
Now, if we plug in this value for X(s) into our output equation, above, we get a more complicated equation:

Y(s) = C([sI − A]^{−1}X(0) + [sI − A]^{−1}BU(s)) + DU(s)

And we can distribute the matrix C to give us our answer:

Y(s) = C[sI − A]^{−1}X(0) + C[sI − A]^{−1}BU(s) + DU(s)

Now, if the system is relaxed, and therefore X(0) is 0, the first term of this equation becomes 0. In this case, we can factor out a U(s) from the remaining two terms:

Y(s) = (C[sI − A]^{−1}B + D)U(s)

We can make the following substitution to obtain the Transfer Function Matrix, or more simply, the Transfer Matrix, H(s):

Transfer Matrix

C[sI − A]^{−1}B + D = H(s)

And rewrite our output equation in terms of the transfer matrix as follows:

Transfer Matrix Description

Y(s) = H(s)U(s)

If Y(s) and X(s) are 1 × 1 vectors (a SISO system), then we have our external description:

Y(s) = H(s)X(s)

Now, since X(s) = X(s), and Y(s) = Y(s), then H(s) must be equal to H(s). These are simply two different ways to describe the same exact equation, the same exact system.
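The transfer matrix can be formed directly from this definition. The sketch below assumes the Symbolic Math Toolbox and uses an illustrative two-input, two-output system:

syms s
A = [0 1; -2 -3];                          % illustrative system matrices
B = eye(2);
C = eye(2);
D = zeros(2);
H = simplify(C / (s*eye(2) - A) * B + D)   % H(s) = C(sI - A)^(-1)B + D, an r x q matrix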
17.3.1 Dimensions
If our system has q inputs, and r outputs, our transfer function matrix will be an r q matrix.
Y(z) = C[zI − A]^{−1}zX(0) + C[zI − A]^{−1}BU(z) + DU(z)

If X(0) is zero, that term drops out, and we can derive a Transfer Function Matrix in the Z domain as well:

Y(z) = (C[zI − A]^{−1}B + D)U(z)
Transfer Matrix

C[zI − A]^{−1}B + D = H(z)

Transfer Matrix Description

Y(z) = H(z)U(z)
Y(z)/U(z) = H(z) = C(zI − A)^{−1}B + D

So the response of a digital system can be derived from the pulse response equation by:

Y(z) = H(z)U(z)

And we can set U(z) to a step input through the following Z transform:

u(t) ⇔ U(z) = z/(z − 1)

Plugging this into our pulse response we get our step response:

Y(z) = (C(zI − A)^{−1}B + D) z/(z − 1)

Y(z) = H(z) z/(z − 1)
18 System Realization
18.1 Realization
Realization is the process of taking a mathematical model of a system (either in the Laplace domain or the State-Space domain), and creating a physical system. Some systems are not realizable. An important point to keep in mind is that the Laplace domain representation, and the state-space representations are equivalent, and both representations describe the same physical systems. We want, therefore, a way to convert between the two representations, because each one is well suited for particular methods of analysis. The state-space representation, for instance, is preferable when it comes time to move the system design from the drawing board to a constructed physical device. For that reason, we call the process of converting a system from the Laplace representation to the state-space representation "realization".
G(s) = G(∞) + G_sp(s)

Where G_sp(s) is a strictly proper transfer matrix. Also, we can use this to find the value of our D matrix:

D = G(∞)

We can define d(s) to be the lowest common denominator polynomial of all the entries in G(s):

Remember, q is the number of inputs, p is the number of internal system states, and r is the number of outputs.

d(s) = s^r + a_1 s^{r−1} + ... + a_{r−1} s + a_r

Then we can define G_sp as:

G_sp(s) = (1/d(s)) N(s)

Where

N(s) = N_1 s^{r−1} + ... + N_{r−1} s + N_r

And the N_i are p × q constant matrices. If we remember our method for converting a transfer function to a state-space equation, we can follow the same general method, except that the new matrix A will be a block matrix, where each block is the size of the transfer matrix:
A = [ −a_1 I_p   −a_2 I_p   ...   −a_{r−1} I_p   −a_r I_p
       I_p        0         ...    0              0
       0          I_p       ...    0              0
       ⋮          ⋮          ⋱     ⋮              ⋮
       0          0         ...    I_p            0       ]

B = [ I_p
      0
      ⋮
      0   ]

C = [ N_1   N_2   ...   N_{r−1}   N_r ]
19 Gain
19.1 What is Gain?
Gain is a proportional value that shows the relationship between the magnitude of the input and the magnitude of the output signal at steady state. Many systems contain a method by which the gain can be altered, providing more or less "power" to the system. However, increasing gain or decreasing gain beyond a particular safety zone can cause the system to become unstable. Consider the given second-order system:

T(s) = 1 / (s² + 2s + 1)

We can include an arbitrary gain term, K, in this system that will represent an amplification, or a power increase:

T(s) = K · 1 / (s² + 2s + 1)
In a state-space system, the gain term k can be inserted as follows: x (t) = Ax(t) + kBu(t)
y (t) = Cx(t) + kDu(t) The gain term can also be inserted into other places in the system, and in those cases the equations will be slightly dierent.
Figure 31
20 Block Diagrams
When designing or analyzing a system, often it is useful to model the system graphically. Block Diagrams are a useful and simple method for analyzing a system graphically. A "block" looks on paper exactly how it sounds:
Figure 32

If we have two systems, f(t) and g(t), we can put them in series with one another so that the output of system f(t) is the input to system g(t). Now, we can analyze them depending on whether we are using our classical or modern methods. If we define the output of the first system as h(t), we can define h(t) as:

h(t) = x(t) ∗ f(t)

Now, we can define the system output y(t) in terms of h(t) as:

y(t) = h(t) ∗ g(t)

We can expand h(t):

y(t) = [x(t) ∗ f(t)] ∗ g(t)

But, since convolution is associative, we can re-write this as:

y(t) = x(t) ∗ [f(t) ∗ g(t)]
Figure 33
Figure 34
In the time domain we know that:

y(t) = x(t) ∗ [f(t) ∗ g(t)]

But, in the frequency domain we know that convolution becomes multiplication, so we can re-write this as:

Y(s) = X(s)[F(s)G(s)]

We can represent our system in the frequency domain as:
Figure 35
Systems in Series
System 1:
x′_F = A_F x_F + B_F u
y_F = C_F x_F + D_F u

System 2:
x′_G = A_G x_G + B_G y_F
y_G = C_G x_G + D_G y_F

And we can substitute these equations together to form the complete response of system H, which has input u and output y_G:

Series state equation

[ x′_G ]   [ A_G   B_G C_F ] [ x_G ]   [ B_G D_F ]
[ x′_F ] = [ 0     A_F     ] [ x_F ] + [ B_F     ] u

Series output equation

[ y_G ]   [ C_G   D_G C_F ] [ x_G ]   [ D_G D_F ]
[ y_F ] = [ 0     C_F     ] [ x_F ] + [ D_F     ] u
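With the Control System Toolbox, the same series interconnection can be formed automatically and the combined matrices inspected. The two first-order systems below are illustrative assumptions, and MATLAB may order the combined states differently than the block form shown above:

F = ss(-1, 1, 1, 0);                  % system F: input u, output yF
G = ss(-2, 1, 1, 0);                  % system G: input yF, output yG
H = series(F, G);                     % equivalent to H = G*F
[Ah, Bh, Ch, Dh] = ssdata(H)          % combined (A, B, C, D) matrices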
Figure 36
Blocks may not be placed in parallel without the use of an adder. Blocks connected by an adder as shown above have a total transfer function of:

Y(s) = X(s)[F(s) + G(s)]

Since the Laplace transform is linear, we can easily transfer this to the time domain by converting the multiplication to convolution:

y(t) = x(t) ∗ [f(t) + g(t)]
Figure 37
Figure 38
In this image, the strange-looking block in the center is either an integrator or an ideal delay, and can be represented in the transfer domain as:

1/s   or   1/z

Depending on the time characteristics of the system. If we only consider continuous-time systems, we can replace the funny block in the center with an integrator:
Figure 39
Y(s)/U(s) = B(s) · (1/(s − A(s))) · C(s) + D(s)

We will explain how we got this result, and how we deal with feedforward and feedback loop structures, in the next chapter.
1. Cascaded Blocks: Y = (P1 P2) X
2. Combining Blocks in Parallel: Y = P1 X ± P2 X
3. Removing a Block from a Forward Loop: Y = P1 X ± P2 X
4. Eliminating a Feedback Loop: Y = P1 (X ∓ P2 Y)
5. Removing a Block from a Feedback Loop: Y = P1 (X ∓ P2 Y)

The remaining transformations in the quick-reference table rearrange summers and pick-off nodes (the corresponding block diagrams are not reproduced here):

Z = W ± X ± Y,   Z = P X ± Y,   Z = P (X ± Y),   Y = P X,   Y = P X,   Z = W ± X,   Z = X ± Y
http://wikis.controltheorypro.com/index.php?title=Block_Diagram_Quick_Reference
21 Feedback Loops
21.1 Feedback
A feedback loop is a common and powerful tool when designing a control system. Feedback loops take the system output into consideration, which enables the system to adjust its performance to meet a desired output response. When talking about control systems it is important to keep in mind that engineers typically are given existing systems such as actuators, sensors, motors, and other devices with set parameters, and are asked to adjust the performance of those systems. In many cases, it may not be possible to open the system (the "plant") and adjust it from the inside: modications need to be made external to the system to force the system response to act as desired. This is performed by adding controllers, compensators, and feedback structures to the system.
Figure 63
This is a basic feedback structure. Here, we are using the output value of the system to help us prepare the next output value. In this way, we can create systems that correct errors. Here we see a feedback loop with a value of one. We call this a unity feedback.

Here is a list of some relevant vocabulary that will be used in the following sections:

Plant
The term "Plant" is a carry-over term from chemical engineering to refer to the main system process. The plant is the preexisting system that does not (without the aid of a controller
http://en.wikipedia.org/wiki/Feedback
or a compensator) meet the given specifications. Plants are usually given "as is", and are not changeable. In the picture above, the plant is denoted with a P.

Controller
A controller, or a "compensator", is an additional system that is added to the plant to control the operation of the plant. The system can have multiple compensators, and they can appear anywhere in the system: before the pick-off node, after the summer, before or after the plant, and in the feedback loop. In the picture above, our compensator is denoted with a C.

Some texts, or texts in other disciplines, may refer to a "summer" as an adder.

Summer
A summer is a symbol on a system diagram (denoted above with parenthesis) that conceptually adds two or more input signals, and produces a single sum output signal.

Pick-off node
A pick-off node is simply a fancy term for a split in a wire.

Forward Path
The forward path in the feedback loop is the path after the summer, that travels through the plant and towards the system output.

Reverse Path
The reverse path is the path after the pick-off node, that loops back to the beginning of the system. This is also known as the "feedback path".

Unity feedback
When the multiplicative value of the feedback path is 1.
Figure 64
Now, we will derive the I/O relationship into the state-space equations. If we examine the inner-most feedback loop, we can see that the forward path has an integrator system, 1/s, and the feedback loop has the matrix value A. If we take the transfer function only of this loop, we get:

T_inner(s) = (1/s) / (1 − A(1/s)) = 1 / (s − A)

Pre-multiplying by the factor B, and post-multiplying by C, we get the transfer function of the entire lower half of the loop:

T_lower(s) = B · (1 / (s − A)) · C

We can see that the upper path (D) and the lower path T_lower are added together to produce the final result:

T_total(s) = B · (1 / (s − A)) · C + D
Now, for an alternate method, we can assume that x′ is the value of the inner feedback loop, right before the integrator. This makes sense, since the integral of x′ should be x (which we see from the diagram that it is). Solving for x′, with an input of u, we get:

x′ = Ax + Bu

This is because the value coming from the feedback branch is equal to the value x times the feedback loop matrix A, and the value coming from the left of the summer is the input u times the matrix B. If we keep things in terms of x and u, we can see that the system output is the sum of u times the feed-forward value D, and the value of x times the value C:

y = Cx + Du

These last two equations are precisely the state-space equations of our system.
The reader is encouraged to use the above equations to derive the result by themselves. The function E(s) is known as the error signal. The error signal is the dierence between the system output (Y(s)), and the system input (X(s)). Notice that the error signal is now the direct input to the system G(s). X(s) is now called the reference input. The purpose of the negative feedback loop is to make the system output equal to the system input, by identifying large dierences between X(s) and Y(s) and correcting for them.
e(t1) = x(t1) − y(t1) = 5 − 2 = 3

Which means the elevator has 3 more floors to go. Finally, at time t4, when the elevator reaches the top, the error signal is:

e(t4) = x(t4) − y(t4) = 5 − 5 = 0

And when the error signal is zero, the elevator stops moving. In essence, we can define three cases:

e(t) is positive: In this case, the elevator goes up one floor, and checks again.
e(t) is zero: The elevator stops.
e(t) is negative: The elevator goes down one floor, and checks again.
y(t) = Cx(t) + Du(t)

The plant is considered to be pre-existing, and the matrices A, B, C, and D are considered to be internal to the plant (and therefore unchangeable). Also, in a typical system, the state variables are either fictional (in the sense of dummy variables), or are not measurable. For these reasons, we need to add external components, such as a gain element, or a feedback element to the plant to enhance performance. Consider the addition of a gain matrix K installed at the input of the plant, and a negative feedback element F that is multiplied by the system output y, and is added to the input signal of the plant. There are two cases:

1. The feedback element F is subtracted from the input before multiplication of the K gain matrix.
2. The feedback element F is subtracted from the input after multiplication of the K gain matrix.

In case 1, the feedback element F is subtracted from the input before the multiplicative gain is applied to the input. If v is the input to the entire system, then we can define u as:

u(t) = Fv(t) − FKy(t)

In case 2, the feedback element F is subtracted from the input after the multiplicative gain is applied to the input. If v is the input to the entire system, then we can define u as:

u(t) = Fv(t) − Ky(t)
Figure 65
Let's say that we have the generalized system shown above. The top part, Gp(s), represents all the systems and all the controllers on the forward path. The bottom part, Gb(s), represents all the feedback processing elements of the system. The letter "K" in the beginning of the system is called the Gain. We will talk about the gain more in later chapters. We can define the Closed-Loop Transfer Function as follows:

Closed-Loop Transfer Function

Hcl(s) = KGp(s) / (1 + Gp(s)Gb(s))

If we "open" the loop, and break the feedback node, we can define the Open-Loop Transfer Function as:

Open-Loop Transfer Function

Hol(s) = Gp(s)Gb(s)

We can redefine the closed-loop transfer function in terms of this open-loop transfer function:

Hcl(s) = KGp(s) / (1 + Hol(s))
These results are important, and they will be used without further explanation or derivation throughout the rest of the book.
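A small sketch of these definitions, assuming the Control System Toolbox is available; the gain value and the forward- and feedback-path blocks are illustrative assumptions:

K  = 5;                               % assumed gain
Gp = tf(1, [1 2 1]);                  % illustrative forward-path system
Gb = tf(1, [0.1 1]);                  % illustrative feedback-path system
Hol = Gp*Gb;                          % open-loop transfer function Gp(s)Gb(s)
Hcl = K * feedback(Gp, Gb)            % closed-loop: K*Gp / (1 + Gp*Gb)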
Figure 66

1. In front of the system, before the feedback loop.
2. Inside the feedback loop, in the forward path, before the plant.
3. In the forward path, after the plant.
4. In the feedback loop, in the reverse path.
5. After the feedback loop.

Each location has certain benefits and problems, and hopefully we will get a chance to talk about all of them.
Figure 67
Figure 68 Angular position servo and signal ow graph. C = desired angle command, L = actual load angle, KP = position loop gain, VC = velocity command, VM = motor velocity sense voltage, KV = velocity loop gain, VIC = current command, VIM = current sense voltage, KC = current loop gain, VA = power amplier output voltage, LM = motor inductance, VM = voltage across motor inductance, IM = motor current, RM = motor resistance, RS = current sense resistance, KM = motor torque constant (Nm/amp) , T = torque, M = momment of inertia of all rotating components = angular acceleration, = angular velocity, = mechanical damping, GM = motor back EMF constant, GT = tachometer conversion gain constant,. There is one forward path (shown in a dierent color) and six feedback loops. The drive shaft assumed to be sti enough to not treat as a spring. Constants are shown in black and variables in purple.
22.2.2 Loops
A loop is a structure in a signal ow diagram that leads back to itself. A loop does not contain the beginning and ending points, and the end of the loop is the same node as the beginning of a loop. Loops are said to touch if they share a node or a line in common. The Loop gain is the total gain of the loop, as you travel from one point, around the loop, back to the starting point.
If the given system has no pairs of loops that do not touch, for instance, B and all additional letters after B will be zero.
M = y_out / y_in = Σ_{k=1}^{N} M_k Δ_k / Δ

Where M is the total gain of the system, represented as the ratio of the output gain (y_out) to the input gain (y_in) of the system. M_k is the gain of the k-th forward path, and Δ_k is the value of Δ computed for the portion of the graph that does not touch the k-th forward path.
http://en.wikipedia.org/wiki/Mason%27s%20rule
23 Bode Plots
23.1 Bode Plots
A Bode Plot is a useful tool that shows the gain and phase response of a given LTI system for dierent frequencies. Bode Plots are generally used with the Fourier Transform of a given system.
Figure 69 An example of a Bode magnitude and phase plot set. The Magnitude plot is typically on the top, and the Phase plot is typically on the bottom of the set.
The frequency of the bode plots are plotted against a logarithmic frequency axis. Every tickmark on the frequency axis represents a power of 10 times the previous value. For instance, on a standard Bode plot, the values of the markers go from (0.1, 1, 10, 100, 1000, ...) Because each tickmark is a power of 10, they are referred to as a decade. Notice that the "length" of a decade increases as you move to the right on the graph.
Bode Plots The bode Magnitude plot measures the system Input/Output ratio in special units called decibels. The Bode phase plot measures the phase shift in degrees (typically, but radians are also used).
23.1.1 Decibels
A Decibel is a ratio between two numbers on a logarithmic scale. A Decibel is not itself a number, and cannot be treated as such in normal calculations. To express a ratio between two numbers (A and B) as a decibel, we apply the following formula:

dB = 20 log(A / B)

Where dB is the decibel result. Or, if we just want to take the decibels of a single number C, we could just as easily write:

dB = 20 log(C)
To get the magnitude gain plot, we must first transition the transfer function into the frequency response by using the change of variables:

s = jω

From here, we can say that our frequency response is a composite of two parts, a real part R and an imaginary part X:

T(jω) = R(ω) + jX(ω)

We will use these forms below.
We say that the values for all zn and pm are called break points of the Bode plot. These are the values where the Bode plots experience the largest change in direction. Break points are sometimes also called "break frequencies", "cuto points", or "corner points".
However, it is frequently difficult to transition a function that is in "numerator/denominator" form to "real + imaginary" form. Luckily, our decibel calculation comes in handy. Let's say we have a frequency response defined as a fraction with numerator and denominator polynomials defined as:

T(jω) = Π_n |jω + z_n| / Π_m |jω + p_m|

If we convert both sides to decibels, the logarithms from the decibel calculations convert multiplication of the arguments into additions, and the divisions into subtractions:

Gain = Σ_n 20 log(jω + z_n) − Σ_m 20 log(jω + p_m)
And calculating out the gain of each term and adding them together will give the gain of the system at that frequency.
Where A is a non-zero constant (can be negative or positive). Here are the steps involved in sketching the approximate Bode magnitude plots:

Bode Magnitude Plots

Step 1: Factor the transfer function into pole-zero form.
Step 2: Find the frequency response from the transfer function.
Step 3: Use logarithms to separate the frequency response into a sum of decibel terms.
Step 4: Use ω = 0 to find the starting magnitude.
Step 5: The locations of every pole and every zero are called break points. At a zero breakpoint, the slope of the line increases by 20 dB/decade. At a pole, the slope of the line decreases by 20 dB/decade.
Step 6: At a zero breakpoint, the value of the actual graph differs from the value of the straight-line graph by 3 dB. A zero is +3 dB over the straight line, and a pole is -3 dB below the straight line.
Step 7: Sketch the actual Bode plot as a smooth curve that follows the straight lines of the previous point, and travels through the breakpoints.

Here are the steps to drawing the Bode phase plots:

Bode Phase Plots

Step 1: If A is positive, start your graph (with zero slope) at 0 degrees. If A is negative, start your graph with zero slope at 180 degrees (or -180 degrees, they are the same thing).
Step 2: For every zero, slope the line up at 45 degrees per decade, starting at ω = z_n/10 (1 decade before the break frequency). Multiple zeros means the slope is steeper.
Step 3: For every pole, slope the line down at 45 degrees per decade, starting at ω = p_m/10 (1 decade before the break frequency). Multiple poles means the slope is steeper.
Step 4: Flatten the slope again when the phase has changed by 90 degrees (for a zero) or -90 degrees (for a pole) (or larger values, for multiple poles or multiple zeros).
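The straight-line sketch can always be compared against an exact plot. The sketch below assumes the Control System Toolbox and uses an illustrative system with a zero at -1 and poles at -10 and -100:

G = zpk(-1, [-10 -100], 100);   % illustrative pole-zero-gain model
bode(G); grid on                % exact magnitude and phase plots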
23.5 Examples
23.5.1 Example: Constant Gain
Draw the bode plot of an amplier system, with a constant gain increase of 6dB.
Bode Plots Because the gain value is constant, and is not dependent on the frequency, we know that the value of the magnitude graph is constant at all places on the graph. There are no break points, so the slope of the graph never changes. We can draw the graph as a straight, horizontal line at 6dB:
Figure 70
Then we convert our magnitude into logarithms:

Gain = 20 log(2) − 20 log(jω)

Notice that the location of the break point for the pole is located at ω = 0, which is all the way to the left of the graph. Also, we notice that inserting 0 in for ω gives us an undefined value (which approaches negative infinity, as the limit). We know, because there is a single pole at zero, that the graph to the right of zero (which is everywhere) has a
slope of -20 dB/decade. We can determine from our magnitude calculation, by plugging in ω = 1, that the second term drops out, and the magnitude at that point is 6 dB. We now have the slope of the line, and a point that it intersects, and we can draw the graph:
Figure 71
Bode Plots
Figure 72
Further Reading
Figure 73
http://wikis.controltheorypro.com/index.php?title=Bode_Plot
24 Nichols Charts
24.1 Nichols Charts
This page will talk about the use of Nichols charts to analyze frequency-domain characteristics of control systems. w:Nichols plot1
http://en.wikipedia.org/wiki/Nichols%20plot
25 Stability
25.1 Stability
When a system becomes unstable, the output of the system approaches infinity (or negative infinity), which often poses a security problem for people in the immediate vicinity. Also, systems which become unstable often incur a certain amount of physical damage, which can become costly. This chapter will talk about system stability, what it is, and why it matters. The chapters in this section are heavily mathematical, and many require a background in linear differential equations. Readers without a strong mathematical background might want to review the necessary chapters in the Calculus1 and Ordinary Differential Equations2 books (or equivalent) before reading this material. A negative coefficient in the characteristic polynomial (with a positive leading coefficient) indicates that the system is unstable; a zero coefficient indicates that the system is at best marginally stable, and may be unstable.
1 2
http://en.wikibooks.org/wiki/Calculus http://en.wikibooks.org/wiki/Ordinary%20Differential%20Equations
Stability Uniformly Stable, Asymptoticly Stable, and Unstable. All of these words mean slightly dierent things.
y_M = f(M)
y_{−M} = f(−M)

Now, all three outputs should be finite for all possible values of M and x, and they should satisfy the following relationship:

y_{−M} ≤ y_x ≤ y_M

If this condition is satisfied, then the system is BIBO stable. A SISO linear time-invariant (LTI) system is BIBO stable if and only if g(t) is absolutely integrable over [0, ∞), that is:

∫_0^∞ |g(t)| dt ≤ M < ∞
25.3.1 Example
Consider the system:

h(t) = 2/t

We can apply our test, selecting an arbitrarily large finite constant M, and an arbitrary input x such that −M < x < M. As M approaches infinity (but does not reach infinity), we can show that:

y_M = lim_{M→∞} 2/M = 0^+

And:

y_{−M} = lim_{M→∞} 2/(−M) = 0^−

So now, we can write out our inequality:

y_{−M} ≤ y_x ≤ y_M
0^− ≤ y_x < 0^+

And this inequality should be satisfied for all possible values of x. However, we can see that when x is zero, we have the following:

y_x = lim_{x→0} 2/x = ∞
Which means that x is between -M and M, but the value yx is not between y-M and yM . Therefore, this system is not stable.
Stability
Where Hcl is the closed-loop transfer function, and Hol is the open-loop transfer function. Again, we dene the open-loop transfer function as the product of the forward path and the feedback elements, as such: Hol (s) = KGp(s)Gb(s) Now, we can dene F(s) to be the characteristic equation. F(s) is simply the denominator of the closed-loop transfer function, and can be dened as such: Characteristic Equation F (s) = 1 + Hol = D(s) We can say conclusively that the roots of the characteristic equation are the poles of the transfer function. Now, we know a few simple facts: 1. The locations of the poles of the closed-loop transfer function determine if the system is stable or not 2. The zeros of the characteristic equation are the poles of the closed-loop transfer function. 3. The characteristic equation is always a simpler equation than the closed-loop transfer function. These functions combined show us that we can focus our attention on the characteristic equation, and nd the roots of that equation.
Marginal Stability
g[n] = CΦ[n, n_0]B   for n > 0,   and   g[n] = 0   for n ≤ 0

A digital system is BIBO stable if and only if there exists a positive constant L such that for all non-negative k:

Σ_{n=0}^{k} |g[n]| ≤ L
Finite Wordlengths
27 Routh-Hurwitz Criterion
27.1 Stability Criteria
The Routh-Hurwitz stability criterion provides a simple algorithm to decide whether or not the zeros of a polynomial are all in the left half of the complex plane (such a polynomial is called at times "Hurwitz"). A Hurwitz polynomial is a key requirement for a linear continuous-time time invariant to be stable (all bounded inputs produce bounded outputs). Necessary stability conditions Conditions that must hold for a polynomial to be Hurwitz. If any of them fails - the polynomial is not stable. However, they may all hold without implying stability. Sucient stability conditions Conditions that if met imply that the polynomial is stable. However, a polynomial may be stable without implying some or any of them. The Routh criteria provides condition that are both necessary and sucient for a polynomial to be Hurwitz.
If we simplify this equation, we will have an equation with a numerator N(s), and a denominator D(s): T (s) = N (s) D(s)
Routh-Hurwitz Criterion
Therefore, if N is odd, the top row will be all the odd coefficients. If N is even, the top row will be all the even coefficients. We can fill in the remainder of the Routh Array as follows:

s^N      | a_N      a_{N−2}   a_{N−4}   ...
s^{N−1}  | a_{N−1}  a_{N−3}   a_{N−5}   ...
         | b_{N−1}  b_{N−3}   ...
         | c_{N−1}  c_{N−3}   ...
⋮
s^0      |

Now, we can define all our b, c, and other coefficients, until we reach row s^0. To fill them in, we use the following formulae:

b_{N−1} = −(1/a_{N−1}) det[ a_N      a_{N−2}
                            a_{N−1}  a_{N−3} ]

And
b_{N−3} = −(1/a_{N−1}) det[ a_N      a_{N−4}
                            a_{N−1}  a_{N−5} ]
For each row that we are computing, we call the left-most element in the row directly above it the pivot element. For instance, in row b, the pivot element is a_{N−1}, and in row c, the pivot element is b_{N−1}, and so on and so forth until we reach the bottom of the array. To obtain any element, we negate the determinant of the following matrix, and divide by the pivot element:

[ k   m
  l   n ]

Where: k is the left-most element two rows above the current row; l is the pivot element; m is the element two rows up, and one column to the right of the current element; n is the element one row up, and one column to the right of the current element.
For the characteristic polynomial s³ + 2s² + 4s + 3, we set up the first two rows of the array and compute the remaining coefficients:

s³  | 1        4
s²  | 2        3
s¹  | b_{N−1}  b_{N−3}
s⁰  | c_{N−1}  c_{N−3}

b_{N−1} = −(1/2) det[ 1  4
                      2  3 ] = 5/2

b_{N−3} = −(1/2) det[ 1  0
                      2  0 ] = 0

c_{N−1} = −(2/5) det[ 2    3
                      5/2  0 ] = 3

c_{N−3} = 0

And filling these values into our Routh Array, we can determine whether the system is stable:

s³  | 1    4   0
s²  | 2    3   0
s¹  | 5/2  0   0
s⁰  | 3    0   0

From this array, we can clearly see that all of the signs of the first column are positive, there are no sign changes, and therefore there are no poles of the characteristic equation in the RHP.
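The conclusion can be cross-checked in MATLAB by computing the roots of the same characteristic polynomial directly (base MATLAB only):

p = [1 2 4 3];           % coefficients of s^3 + 2s^2 + 4s + 3
r = roots(p)
all(real(r) < 0)         % returns 1 (true): every pole is in the left half-plane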
28 Jury s Test
28.1 Routh-Hurwitz in Digital Systems
Because of the dierences in the Z and S domains, the Routh-Hurwitz criteria can not be used directly with digital systems. This is because digital systems and continuous-time systems have dierent regions of stability. However, there are some methods that we can use to analyze the stability of digital systems. Our rst option (and arguably not a very good option) is to convert the digital system into a continuous-time representation using the bilinear transform. The bilinear transform converts an equation in the Z domain into an equation in the W domain, that has properties similar to the S domain. Another possibility is to use Jury s Stability Test. Jury s test is a procedure similar to the RH test, except it has been modied to analyze digital systems in the Z domain directly.
Jury s Test Once the Jury Array has been formed, all the following relationships must be satisifed until the end of the array: |b0 | > |bN 1 | |c0 | > |cN 2 | |d0 | > |dN 3 | And so on until the last row of the array. If all these conditions are satisifed, the system is stable. While you are constructing the Jury Array, you can be making the tests of Rule 4. If the Array fails Rule 4 at any point, you can stop calculating the array: your system is unstable. We will discuss the construction of the Jury Array below.
Now, once we have the first row of our coefficients written out, we add another row of coefficients (we will use b for this row, and c for the next row, as per our previous convention), and we will calculate the values of the lower rows from the values of the upper rows. Each new row that we add will have one fewer coefficient than the row before it:

1)       a_0       a_1       a_2       ...   a_N
2)       a_N       a_{N−1}   a_{N−2}   ...   a_0
3)       b_0       b_1       b_2       ...   b_{N−1}
4)       b_{N−1}   b_{N−2}   b_{N−3}   ...   b_0
⋮
2N−3)    v_0       v_1       v_2
Note: The last row is the (2N−3) row, and always has 3 elements. This test doesn't make sense if N = 1, but in that case you already know the pole!

Once we get to a row with 2 members, we can stop constructing the array. To calculate the values of the odd-numbered rows, we can use the following formulae. The even-numbered rows are equal to the previous row in reverse order. We will use k as an arbitrary subscript value. These formulae are reusable for all elements in the array:
b_k = det[ a_0   a_{N−k}
           a_N   a_k     ]

c_k = det[ b_0       b_{N−1−k}
           b_{N−1}   b_k       ]

d_k = det[ c_0       c_{N−2−k}
           c_{N−2}   c_k       ]
This pattern can be carried on to all lower rows of the array, if needed.
Where we are using R as the subtractive element from the above equations. Since row c had R = 1, and row d had R = 2, we can follow the pattern and for row e set R = 3. Plugging this value of R into our equation above gives us:

e_k = det[ d_0       d_{N−3−k}
           d_{N−3}   d_k       ]

And since we want e_5 we know that k is 5, so we can substitute that into the equation:

e_5 = det[ d_0       d_{N−3−5} ]  =  det[ d_0       d_{N−8} ]
         [ d_{N−3}   d_5       ]        [ d_{N−3}   d_5     ]
29 Root Locus
29.1 The Problem
Consider a system like a radio. The radio has a "volume" knob, that controls the amount of gain of the system. High volume means more power going to the speakers, low volume means less power to the speakers. As the volume value increases, the poles of the transfer function of the radio change, and they might potentially become unstable. We would like to nd out if the radio becomes unstable, and if so, we would like to nd out what values of the volume cause it to become unstable. Our current methods would require us to plug in each new value for the volume (gain, "K"), and solve the open-loop transfer function for the roots. This process can be a long one. Luckily, there is a method called the root-locus method, that allows us to graph the locations of all the poles of the system for all values of gain, K.
29.2 Root-Locus
As we change gain, we notice that the system poles and zeros actually move around in the S-plane1. This fact can make life particularly difficult, when we need to solve higher-order equations repeatedly, for each new gain value. The solution to this problem is a technique known as Root-Locus graphs. Root-Locus allows you to graph the locations of the poles and zeros for every value of gain, by following several simple rules. Let's say we have a closed-loop transfer function for a particular system:

N(s)/D(s) = KG(s) / (1 + KG(s)H(s))

Where N is the numerator polynomial and D is the denominator polynomial of the transfer function, respectively. Now, we know that to find the poles of the equation, we must set the denominator to 0, and solve the characteristic equation. In other words, the locations of the poles of a specific equation must satisfy the following relationship:

D(s) = 1 + KG(s)H(s) = 0

From this same equation, we can manipulate the equation as such:
http://en.wikibooks.org/wiki/S-plane
Root Locus
1 + KG(s)H(s) = 0
KG(s)H(s) = −1

And finally, by converting to polar coordinates:

∠KG(s)H(s) = 180°

Now we have 2 equations that govern the locations of the poles of a system for all gain values:

The Magnitude Equation
1 + KG(s)H(s) = 0

The Angle Equation
∠KG(s)H(s) = 180°
KGH(z) = −1

We can now convert this to polar coordinates, and take the angle of the polynomial:

∠KGH(z) = 180°
We are now left with two important equations:

The Magnitude Equation
1 + KGH(z) = 0

The Angle Equation
∠KGH(z) = 180°

If you will compare the two, the Z-domain equations are nearly identical to the S-domain equations, and act exactly the same. For the remainder of the chapter, we will only consider the S-domain equations, with the understanding that digital systems operate in nearly the same manner.
Note: We generally use capital letters for functions in the frequency domain, but a(s) and b(s) are unimportant enough to be lower-case.

Now, we can assume that G(s)H(s) is a fraction of some sort, with a numerator and a denominator that are both polynomials. We can express this equation using arbitrary functions a(s) and b(s), as such:

a(s)/b(s) = −1/K
Root Locus We will refer to these functions a(s) and b(s) later in the procedure. We can start drawing the root-locus by rst placing the roots of b(s) on the graph with an X . Next, we place the roots of a(s) on the graph, and mark them with an O . poles are marked on the graph with an X and zeros are marked with an O by common convention. These letters have no particular meaning Next, we examine the real-axis. starting from the right-hand side of the graph and traveling to the left, we draw a root-locus line on the real-axis at every point to the left of an odd number of poles or zeros on the real-axis. This may sound tricky at rst, but it becomes easier with practice. double poles or double zeros count as two. Now, a root-locus line starts at every pole. Therefore, any place that two poles appear to be connected by a root locus line on the real-axis, the two poles actually move towards each other, and then they "break away", and move o the axis. The point where the poles break o the axis is called the breakaway point. From here, the root locus lines travel towards the nearest zero. It is important to note that the s-plane is symmetrical about the real axis, so whatever is drawn on the top-half of the S-plane, must be drawn in mirror-image on the bottom-half plane. Once a pole breaks away from the real axis, they can either travel out towards innity (to meet an implicit zero), or they can travel to meet an explicit zero, or they can re-join the real-axis to meet a zero that is located on the real-axis. If a pole is traveling towards innity, it always follows an asymptote. The number of asymptotes is equal to the number of implicit zeros at innity.
Root Locus Rules

The roots of a(s) are the zeros of the open-loop transfer function. Mark the roots of a(s) on the graph with an O. There should be a number of O's less than or equal to the number of X's. There are p − z zeros located at infinity. These zeros at infinity are called "implicit zeros". All branches of the root-locus will move from a pole to a zero (some branches, therefore, may travel towards infinity).

Rule 4
A point on the real axis is a part of the root-locus if it is to the left of an odd number of poles and zeros.

Rule 5
The gain at any point on the root locus can be determined by the inverse of the absolute value of the magnitude equation:

|b(s)/a(s)| = |K|

Rule 6
The root-locus diagram is symmetric about the real axis. All complex roots are conjugates.

Rule 7
Two roots that meet on the real axis will break away from the axis at certain break-away points. If we set s = σ (no imaginary part), we can use the following equation:

K = −b(σ)/a(σ)

And differentiate to find the local maximum:

dK/dσ = d/dσ [ b(σ)/a(σ) ]

Rule 8
The breakaway lines of the root locus are separated by angles of π/α, where α is the number of poles intersecting at the breakaway point.

Rule 9
The breakaway root-loci follow asymptotes that intersect the real axis at angles φ given by:

φ = (π + 2Nπ)/(p − z),   N = 0, 1, ..., p − z − 1
The origin of these asymptotes, OA, is given as the sum of the pole locations, minus the sum of the zero locations, divided by the difference between the number of poles and zeros:

OA = ( Σ Pi − Σ Zi ) / (p − z)

Where the first sum runs over the p pole locations and the second sum runs over the z zero locations.
Rule 10
The branches of the root locus cross the imaginary axis at points where the angle equation value is π (i.e., 180°).

Rule 11
The angles that a root locus branch makes with a complex-conjugate pole or zero is determined by analyzing the angle equation at a point infinitesimally close to the pole or zero. The angle of departure, φd, is given by:

Σ φi + Σ θi + φd = 180°

and the angle of arrival, φa, is given by:

Σ θi + Σ φi + φa = 180°

where the sums are taken over the angles from the other poles (φi) and zeros (θi) to the pole or zero in question.
Root Locus and Stability

The angles of the asymptotes are given by:

Angle of Asymptotes
φk = (2k + 1)·π / (p − z)

for values of k = [0, 1, ..., Na − 1]. The angles for the asymptotes are measured from the positive real axis.

The origin of the asymptotes is given by:

Origin of Asymptotes
σ0 = (ΣP − ΣZ) / (p − z)

Where ΣP is the sum of all the locations of the poles, and ΣZ is the sum of all the locations of the explicit zeros.

The breakaway points are located at the roots of the following equation:

dGH(s)/ds = 0   or   dGH(z)/dz = 0
Once you solve for z, the real roots give you the breakaway/reentry points. Complex roots correspond to a lack of breakaway/reentry. The breakaway point equation can be difficult to solve, so many times the actual location is approximated.
Region | S-Domain | Z-Domain
Stable Region | Left-Hand S Plane (σ < 0) | Inside the Unit Circle (|z| < 1)
Marginally Stable Region | The vertical axis (σ = 0) | The Unit Circle (|z| = 1)
Unstable Region | Right-Hand S Plane (σ > 0) | Outside the Unit Circle (|z| > 1)
29.7 Examples
29.7.1 Example 1: First-Order System
Find the root-locus of the open-loop system:

T(s) = 1 / (1 + 2s)

If we look at the characteristic equation, we can quickly solve for the single pole of the system:

D(s) = 1 + 2s = 0
s = −1/2

We plot that point on our root-locus graph, and everything on the real axis to the left of that single point is on the root locus (from the rules, above). Therefore, the root locus of our system looks like this:
Figure 74 From this image, we can see that for all values of gain this system is stable.
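If MATLAB or Octave is available, this result is easy to check numerically. The snippet below is a minimal sketch using the Control System Toolbox (or Octave's control package); it simply reproduces the root locus of the open-loop system above:

% Root locus of the open-loop system T(s) = 1/(1 + 2s)
num = 1;
den = [2 1];              % denominator 2s + 1
sys = tf(num, den);       % build the transfer function object
rlocus(sys);              % plot closed-loop pole locations for all gains K > 0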
Is this system stable? To answer this question, we can plot the root-locus. First, we draw the poles on the graph at locations −1, −2, and −3. The real axis between the first and second poles is on the root-locus, as well as the real axis to the left of the third pole. We know also that there is going to be breakaway from the real axis at some point. The origin of asymptotes is located at:

OA = (−1 − 2 − 3) / 3 = −2,

with the asymptote angles computed for N = 0, 1, 2.

We know that the breakaway occurs between the first and second poles, so we will estimate the exact breakaway point. Drawing the root-locus gives us the graph below.
Figure 75
We can see that for low values of gain the system is stable, but for higher values of gain, the system becomes unstable.
If we look at the denominator, we have poles at the origin, −1, and −2. Following Rule 4, we know that the real axis between the first two poles, and the real axis after the third pole, are all on the root-locus. We also know that there is going to be a breakaway point between the first two poles, so that they can approach the complex conjugate zeros. If we use the quadratic equation on the numerator, we can find that the zeros are located at:

s = (−2.25 + j0.75), (−2.25 − j0.75)

If we draw our graph, we get the following:
Figure 76
We can see from this graph that the system is stable for all values of K.
D(s) = s³ + 5s² + 8s + 6

Now, we can generate the coefficient vectors from the numerator and denominator:

num = [0 0 1 2];
den = [1 5 8 6];

Next, we can feed these vectors into the rlocus command:

rlocus(num, den);

Note: In Octave, we need to create a system structure first, by typing:

sys = tf(num, den);
rlocus(sys);

Either way, we generate the following graph:
Figure 77
30 Nyquist Criterion
30.1 Nyquist Stability Criteria
The Nyquist Stability Criteria is a test for system stability, just like the Routh-Hurwitz1 test, or the Root-Locus2 methodology. However, the Nyquist Criteria can also give us additional information about a system. Routh-Hurwitz and Root-Locus can tell us where the poles of the system are for particular values of gain. By altering the gain of the system, we can determine if any of the poles move into the RHP, and therefore become unstable. The Nyquist Criteria, however, can tell us things about the frequency characteristics of the system. For instance, some systems with constant gain might be stable for low-frequency inputs, but become unstable for high-frequency inputs.

Here is an example of a system responding differently to different frequency input values: Consider an ordinary glass of water. If the water is exposed to ordinary sunlight, it is unlikely to heat up too much. However, if the water is exposed to microwave radiation (from inside your microwave oven, for instance), the water will quickly heat up to a boil.

Also, the Nyquist Criteria can tell us things about the phase of the input signals, the time-shift of the system, and other important information.
30.2 Contours
A contour is a complicated mathematical construct, but luckily we only need to worry ourselves with a few points about them. We will denote contours with the Greek letter Γ (gamma). Contours are lines, drawn on a graph, that follow certain rules:

1. The contour must close (it must form a complete loop).
2. The contour may not cross directly through a pole of the system.
3. Contours must have a direction (clockwise or counterclockwise, generally).
4. A contour is called "simple" if it has no self-intersections. We only consider simple contours here.
Once we have such a contour, we can develop some important theorems about them, and nally use these theorems to derive the Nyquist stability criterion.
http://en.wikipedia.org/wiki/Argument%20principle
Argument Principle

u = 2σ + 1
v = 2ω

Now, to transform Γ, we will plug every point of the contour into F(s), and the resultant values will be the points of F(Γ). We will solve for complex values u and v, and we will start with the vertices, because they are the simplest examples:

u + jv = F(I) = 3 + j2
u + jv = F(J) = 3 − j2
u + jv = F(K) = −1 − j2
u + jv = F(L) = −1 + j2

We can take the lines in between the vertices as a function of s, and plug the entire function into the transform. Luckily, because we are using straight lines, we can simplify very much:

Line from I to J: σ = 1, u = 3, v = 2ω
Line from J to K: ω = −1, u = 2σ + 1, v = −2
Line from K to L: σ = −1, u = −1, v = 2ω
Line from L to I: ω = 1, u = 2σ + 1, v = 2

And when we graph these functions, from vertex to vertex, we see that the resultant contour in the F(s) plane is a square, but not centered at the origin, and larger in size. Notice how the contour encircles the origin of the F(s) plane one time. This will be important later on.
We can see clearly that F(s) has a zero at s = −0.5, and a complex conjugate set of poles at s = −0.5 + j0.5 and s = −0.5 − j0.5. We will use the same unit square contour, Γ, from above:

I = 1 + j1
J = 1 − j1
K = −1 − j1
L = −1 + j1

We can see clearly that the poles and the zero of F(s) lie within Γ. Setting F(s) to u + jv and solving, we get the following relationship:

u + jv = F(σ + jω) = ((σ + 0.5) + jω) / ((σ² − ω² + σ + 0.5) + j(2σω + ω))
This is a little difficult now, because we need to simplify this whole expression, and separate it out into real and imaginary parts. There are two methods of doing this, neither of which is short or easy enough to demonstrate here in its entirety:
1. We convert the numerator and denominator polynomials into a polar representation in terms of r and θ, then perform the division, and then convert back into rectangular format.
2. We plug each segment of our contour into this equation, and simplify numerically.
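The second, numerical approach is straightforward to sketch in MATLAB or Octave. The snippet below is an illustration only (the sampling of the contour is an arbitrary choice): it evaluates F(s) = (s + 0.5)/(s² + s + 0.5) along the unit-square contour and plots the mapped contour, so the encirclements of the origin can be counted.

% Map the unit-square contour through F(s) = (s + 0.5)/(s^2 + s + 0.5)
t = linspace(0, 1, 200).';                   % parameter along each edge
I = 1+1j; J = 1-1j; K = -1-1j; L = -1+1j;    % vertices of the contour Gamma
edges = [I J; J K; K L; L I];                % the four edges, traversed in order
s = [];
for k = 1:4
    s = [s; edges(k,1) + t*(edges(k,2) - edges(k,1))];   % sample points on edge k
end
F = (s + 0.5) ./ (s.^2 + s + 0.5);           % evaluate F(s) at every contour point
plot(real(F), imag(F)); grid on;
xlabel('u'); ylabel('v');                    % the image contour F(Gamma)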
http://en.wikipedia.org/wiki/Nyquist%20stability%20criterion
A feedback control system is stable, if and only if the contour Γ_F(s) in the F(s) plane encircles the (−1, 0) point a number of times equal to the number of poles of F(s) enclosed by Γ.

w:Nyquist plot5

In other words, if P is zero then N must equal zero. Otherwise, N must equal P. Essentially, we are saying that Z must always equal zero, because Z is the number of zeros of the characteristic equation (and therefore the number of poles of the closed-loop transfer function) that are in the right-half of the s-plane. Keep in mind that we don't necessarily know the locations of all the zeros of the characteristic equation. So if we find, using the Nyquist criterion, that N is not equal to P, then we know that there must be a zero in the right-half plane, and that therefore the system is unstable.
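In practice, the Nyquist plot of the open-loop transfer function is usually drawn with software rather than by hand. The following is a minimal MATLAB/Octave sketch; the transfer function below is an arbitrary illustration, not a system from the text:

% Nyquist plot of an arbitrary open-loop system KG(s)H(s) = 5/(s^2 + 3s + 2)
GH = tf(5, [1 3 2]);
nyquist(GH);                 % draw the mapped contour
% With no open-loop poles in the RHP (P = 0), closed-loop stability requires
% that the plot not encircle the (-1, 0) point (N = 0).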
http://en.wikipedia.org/wiki/Nyquist%20plot
31 State-Space Stability
31.1 State-Space Stability
If a system is represented in the state-space domain, it doesn't make sense to convert that system to a transfer function representation (or even a transfer matrix representation) in an attempt to use any of the previous stability methods. Luckily, there are other analysis methods that can be used with the state-space representation to determine if a system is stable or not. First, let us introduce the notion of instability:

Unstable
A system is said to be unstable if the system response approaches infinity as time approaches infinity. If our system is G(t), then, we can say a system is unstable if:

lim_{t→∞} ||G(t)|| = ∞
Also, a key concept when we are talking about stability of systems is the concept of an equilibrium point:

Equilibrium Point
Given a system f such that:

x′(t) = f(x(t))

A particular state xe is called an equilibrium point if

f(xe) = 0

for all time t in the interval [t0, ∞), where t0 is the starting time of the system. An equilibrium point is also known as a "stationary point", a "critical point", a "singular point", or a "rest state" in other books or literature.

The definitions below typically require that the equilibrium point be zero. If we have an equilibrium point xe = a, then we can use the following change of variables to make the equilibrium point zero:

x̄ = xe − a = 0

We will also see below that a system's stability is defined in terms of an equilibrium point. Related to the concept of an equilibrium point is the notion of a zero point:

Zero State
A state xz is a zero state if xz = 0. A zero state may or may not be an equilibrium point.
A time-invariant system is asymptotically stable if all the eigenvalues of the system matrix A have negative real parts. If a system is asymptotically stable, it is also BIBO stable. However the inverse is not true: a system that is BIBO stable might not be asymptotically stable.

Uniform Asymptotic Stability
A system is defined to be uniformly asymptotically stable if the system is asymptotically stable for all values of t0.

Exponential Stability
A system is defined to be exponentially stable if the system response decays exponentially towards zero as time approaches infinity.

For linear systems, uniform asymptotic stability is the same as exponential stability. This is not the case with non-linear systems.
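This eigenvalue test is simple to carry out numerically; the sketch below uses an arbitrary example matrix, not one from the text:

% Asymptotic stability check for x'(t) = A x(t)
A = [0 1; -2 -3];                     % example system matrix
lambda = eig(A);                      % eigenvalues of A
if all(real(lambda) < 0)
    disp('All eigenvalues have negative real parts: asymptotically stable');
else
    disp('At least one eigenvalue has a non-negative real part');
end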
1. A time-invariant system is marginally stable if and only if all the eigenvalues of the system matrix A are zero or have negative real parts, and those with zero real parts are simple roots of the minimal polynomial of A.
2. The equilibrium x = 0 of the state equation is uniformly stable if all eigenvalues of A have non-positive real parts, and there is a complete set of distinct eigenvectors associated with the eigenvalues with zero real parts.
3. The equilibrium x = 0 of the state equation is exponentially stable if and only if all eigenvalues of the system matrix A have negative real parts.
(sI − A)X(s) = BU(s)

Assuming (sI − A) is nonsingular, we can multiply both sides by the inverse:

X(s) = (sI − A)⁻¹ BU(s)

Now, if we remember our formula for finding the matrix inverse from the adjoint matrix:

A⁻¹ = adj(A) / |A|

We can use that definition here:

X(s) = adj(sI − A) BU(s) / |sI − A|

Let's look at the denominator (which we will now call D(s)) more closely. To be stable, the following condition must be true:
D(s) = |sI − A| = 0

And if we substitute λ for s, we see that this is actually the characteristic equation of matrix A! This means that the values for s that satisfy the equation (the poles of our transfer function) are precisely the eigenvalues of matrix A. In the S domain, it is required that all the poles of the system be located in the left-half plane, and therefore all the eigenvalues of A must have negative real parts.
The impulse response matrix of the system is

G(t, τ) = C(t)φ(t, τ)B(τ)   for t ≥ τ, and 0 otherwise.

The system is uniformly stable if and only if there exists a finite positive constant L such that for all time t and all initial conditions t0 with t ≥ t0 the following integral is satisfied:

∫₀ᵗ ||G(t, τ)|| dτ ≤ L

In other words, the above integral must have a finite value, or the system is not uniformly stable.

In the time-invariant case, the impulse response matrix reduces to:

G(t) = C e^{At} B   if t ≥ 0
G(t) = 0            if t < 0

In a time-invariant system, we can use the impulse response matrix to determine if the system is uniformly BIBO stable by taking a similar integral:

∫₀ᵗ ||G(t)|| dt ≤ L
f(x) is positive semi-definite if f(x) ≥ 0 for all x.
f(x) is negative definite if f(x) < 0 for all x ≠ 0, and f(x) = 0 only if x = 0.
f(x) is negative semi-definite if f(x) ≤ 0 for all x.

A matrix X is positive definite if all its principal minors are positive. Also, a matrix X is positive definite if all its eigenvalues have positive real parts. These two methods may be used interchangeably.

Positive definiteness is a very important concept. So much so that the Lyapunov stability test depends on it. The other categorizations are not as important, but are included here for completeness.
M = ∫₀^∞ e^{A′t} N e^{At} dt

If the matrix M can be calculated in this manner, the system is asymptotically stable.
32.2 Controllability
We will start off with the definitions of the term controllability, and the related term reachability.

Controllability
A system with internal state vector x is called controllable if and only if the system states can be changed by changing the system input.

Reachability
A particular state x1 is called reachable if there exists an input that transfers the state of the system from the initial state x0 to x1 in some finite time interval [t0, t).

We can also write out the definition of reachability more precisely: A state x1 is called reachable at time t1 if for some finite initial time t0 there exists an input u(t) that transfers the state x(t) from the origin at t0 to x1. A system is reachable at time t1 if every state x1 in the state-space is reachable at time t1.

Similarly, we can more precisely define the concept of controllability: A state x0 is controllable at time t0 if for some finite time t1 there exists an input u(t) that transfers the state x(t) from x0 to the origin at time t1. A system is called controllable at time t0 if every state x0 in the state-space is controllable.
Each one of these conditions is both necessary and sufficient. If any one test fails, all the tests will fail, and the system is not reachable. If any test is positive, then all the tests will be positive, and the system is reachable.
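One of the standard tests referred to above is the rank of the controllability matrix, which MATLAB and Octave build with the ctrb command. The following is a minimal sketch with arbitrary example matrices:

% Controllability/reachability rank test for the pair (A, B)
A = [0 1; -2 -3];
B = [0; 1];
Co = ctrb(A, B);                      % controllability matrix [B, A*B]
if rank(Co) == size(A, 1)
    disp('System is controllable');
else
    disp('System is not controllable');
end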
32.2.3 Gramians
Gramians are complicated mathematical functions that can be used to determine specific things about a system. For instance, we can use gramians to determine whether a system is controllable or reachable. Gramians, because they are more complicated than other methods, are typically only used when other methods of analyzing a system fail (or are too difficult).

All the gramians presented on this page are matrices with dimension p × p (the same size as the system matrix A). All the gramians presented here will be described using the general case of linear time-variant systems. To change these into LTI (time-invariant) equations, the following substitutions can be used:

φ(t, τ) → e^{A(t − τ)}
φ′(t, τ) → e^{A′(t − τ)}

Where we are using the notation X′ to denote the transpose of a matrix X (as opposed to the traditional notation Xᵀ).
Reachability Gramian

Wr(t0, t1) = ∫_{t0}^{t1} φ(t1, τ) B(τ) B′(τ) φ′(t1, τ) dτ

The system is reachable if the rank of the reachability gramian is the same as the rank of the system matrix:

rank(Wr) = p
Controllability Gramian

Wc(t0, t1) = ∫_{t0}^{t1} φ(t0, τ) B(τ) B′(τ) φ′(t0, τ) dτ
The system is controllable if the rank of the controllability gramian is the same as the rank of the system matrix:

rank(Wc) = p

If the system is time-invariant, there are two important points to be made. First, the reachability gramian and the controllability gramian reduce to be the same equation. Therefore, for LTI systems, if we have found one gramian, then we automatically know both gramians. Second, the controllability gramian can also be found as the solution to the following Lyapunov equation:

A Wc + Wc A′ = −B B′

Many software packages, notably MATLAB, have functions to solve the Lyapunov equation. By using this last relation, we can also solve for the controllability gramian using these existing functions.
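As a small illustration of that last point: MATLAB's lyap function solves equations of the form AX + XA′ + Q = 0, so for a stable LTI system the controllability gramian can be computed directly. The matrices below are arbitrary examples:

% Controllability gramian from the Lyapunov equation A*Wc + Wc*A' + B*B' = 0
A = [0 1; -2 -3];
B = [0; 1];
Wc = lyap(A, B*B');                   % infinite-horizon controllability gramian
disp(rank(Wc));                       % full rank (= 2 here) means controllable
% Equivalently: Wc = gram(ss(A, B, eye(2), 0), 'c')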
32.3 Observability
The state-variables of a system might not be able to be measured for any of the following reasons:

1. The location of the particular state variable might not be physically accessible (a capacitor or a spring, for instance).
2. There are no appropriate instruments to measure the state variable, or the state-variable might be measured in units for which there does not exist any measurement device.
3. The state-variable is a derived "dummy" variable that has no physical meaning.

If things cannot be directly observed, for any of the reasons above, it can be necessary to calculate or estimate the values of the internal state variables, using only the input/output relation of the system, and the output history of the system from the starting time. In other words, we must ask whether or not it is possible to determine what the inside of the system (the internal system states) is like, by only observing the outside performance of the system (input and output). We can provide the following formal definition of mathematical observability:

Observability
A system with an initial state, x(t0), is observable if and only if the value of the initial state can be determined from the system output y(t) that has been observed through the time interval t0 < t < tf. If the initial state cannot be so determined, the system is unobservable.

Complete Observability
A system is said to be completely observable if all the possible initial states of the system can be observed. Systems that fail this criterion are said to be unobservable.
Detectability
A system is detectable if all states that cannot be observed decay to zero asymptotically.

Constructability
A system is constructable if the present state of the system can be determined from the present and past outputs and inputs to the system. If a system is observable, then it is also constructable. The relationship does not work the other way around.

A system state xi is unobservable at a given time ti if the zero-input response of the system is zero for all time t. If a system is observable, then the only state that produces a zero output for all time is the zero state. We can use this concept to define the term state-observability.

State-Observability
A system is completely state-observable at time t0, or the pair (A, C) is observable at t0, if the only state that is unobservable at t0 is the zero state x = 0.
32.3.1 Constructability
A state x is unconstructable at a time t1 if for every finite time t < t1 the zero-input response of the system is zero for all time t. A system is completely state constructable at time t1 if the only state x that is unconstructable at t0 is x = 0. If a system is observable at an initial time t0, then it is constructable at some time t > t0, if it is constructable at t1.
x′(t) = Ax(t)
y(t) = Cx(t)
Therefore, we can show that the observability of the system is dependent only on the coefficient matrices A and C. We can show precisely how to determine whether a system is observable, using only these two matrices. If we have the observability matrix Q:

Observability Matrix

Q = [C; CA; CA²; ... ; CA^{p−1}]

we can show that the system is observable if and only if the Q matrix has a rank of p. Notice that the Q matrix has the dimensions pr × p.

MATLAB allows one to easily create the observability matrix with the obsv command. To create the observability matrix Q, simply type

Q = obsv(A, C)

where A and C are mentioned above. Then, in order to determine if the system is observable or not, one can use the rank command to determine if it has full rank.
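Putting those commands together, a quick observability check might look like the sketch below (example matrices, not from the text):

% Observability rank test for the pair (A, C)
A = [0 1; -2 -3];
C = [1 0];
Q = obsv(A, C);                       % observability matrix [C; C*A]
if rank(Q) == size(A, 1)
    disp('System is observable');
else
    disp('System is not observable');
end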
Observability Gramian

Wo(t0, t1) = ∫_{t0}^{t1} φ′(τ, t0) C′(τ) C(τ) φ(τ, t0) dτ
A system is completely state observable at time t0 < t < t1 if and only if the rank of the observability gramian is equal to the size p of the system matrix A. If the system (A, B, C, D) is time-invariant, we can construct the observability gramian as the solution to the Lyapunov equation:

A′ Wo + Wo A = −C′ C
Constructability Gramian

Wcn(t0, t1) = ∫_{t0}^{t1} φ′(τ, t1) C′(τ) C(τ) φ(τ, t1) dτ
A system is completely state observable at an initial time t0 if and only if there exists a finite t1 such that:

rank(Wo) = rank(Wcn) = p

Notice that the constructability and observability gramians are very similar, and typically they can both be calculated at the same time, only substituting in different values into the state-transition matrix.
33 System Specications
33.1 System Specication
There are a number of different specifications that might need to be met by a new system design. In this chapter we will talk about some of the specifications that systems use, and some of the ways that engineers analyze and quantify systems.
Figure 78
Proportional controllers are simply gain values. These are essentially multiplicative coefficients, usually denoted with a K. A P controller can only force the system poles to a spot on the system's root locus. A P controller cannot be used for arbitrary pole placement. We refer to this kind of controller by a number of different names: proportional controller, gain, and zeroth-order controller.
Figure 79
In the Laplace domain, we can show the derivative of a signal using the following notation:

D(s) = L{f′(t)} = sF(s) − f(0)

Since most systems that we are considering have zero initial conditions, this simplifies to:

D(s) = L{f′(t)} = sF(s)

The derivative controllers are implemented to account for future values, by taking the derivative, and controlling based on where the signal is going to be in the future. Derivative controllers should be used with care, because even small amounts of high-frequency noise can cause very large derivatives, which appear like amplified noise. Also, derivative controllers are difficult to implement perfectly in hardware or software, so frequently solutions involving only integral controllers or proportional controllers are preferred over using derivative controllers.

Notice that derivative controllers are not proper systems, in that the order of the numerator of the system is greater than the order of the denominator of the system. This quality of being a non-proper system also makes certain mathematical analysis of these systems difficult.
Integral Controllers
Figure 80
L{ ∫₀ᵗ f(τ) dτ } = (1/s) F(s)
Integral controllers of this type add up the area under the curve for past time. In this manner, a PI controller (and eventually a PID) can take account of the past performance of the controller, and correct based on past errors.
Figure 81
PID controllers are combinations of the proportional, derivative, and integral controllers. Because of this, PID controllers have large amounts of flexibility. We will see below that there are definite limits on PID control.
The transfer function of a standard PID controller is the sum of the proportional, integral, and derivative terms, D(s) = Kp + Ki/s + Kd·s. Notice that we can write the transfer function of a PID controller in a slightly different way:
D(s) = (A0 + A1·s) / (B0 + B1·s)
This form of the equation will be especially useful to us when we look at polynomial design.
Figure 82
Seborg, Dale E.; Edgar, Thomas F.; Mellichamp, Duncan A. (2003). Process Dynamics and Control, Second Edition. John Wiley & Sons,Inc. ISBN 0471000779
3) Controller tuning relations
4) Frequency response techniques
5) Computer simulation
6) On-line tuning after the control system is installed
7) Trial and error
And we can convert this into a canonical equation by manipulating the above equation to obtain:

D(z) = (a0 + a1·z⁻¹ + a2·z⁻²) / (1 + b1·z⁻¹ + b2·z⁻²)

Where:

a0 = Kp + Ki·T/2 + Kd/T
a1 = −Kp + Ki·T/2 − 2Kd/T
a2 = Kd/T
b1 = −1
b2 = 0

Once we have the Z-domain transfer function of the PID controller, we can convert it into the digital time domain:

y[n] = x[n]·a0 + x[n−1]·a1 + x[n−2]·a2 − y[n−1]·b1 − y[n−2]·b2
And finally, from this difference equation, we can create a digital filter structure to implement the PID. For more information about digital filter structures, see Digital Signal Processing2.
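One possible direct (unoptimized) realization of that difference equation is sketched below; the gains, sample period, and error signal are arbitrary placeholders, not values from the text:

% Minimal digital PID sketch built from the difference equation above
Kp = 1.0; Ki = 0.5; Kd = 0.1; T = 0.01;      % example gains and sample period
a0 = Kp + Ki*T/2 + Kd/T;
a1 = -Kp + Ki*T/2 - 2*Kd/T;
a2 = Kd/T;
b1 = -1; b2 = 0;
x = randn(1, 100);                           % placeholder error signal x[n]
y = zeros(size(x));                          % controller output y[n]
for n = 3:length(x)
    y(n) = x(n)*a0 + x(n-1)*a1 + x(n-2)*a2 - y(n-1)*b1 - y(n-2)*b2;
end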
34.7 Compensation
There are a number of different compensation units that can be employed to help fix certain system metrics that are outside of a proper operating range. Most commonly, the phase characteristics are in need of compensation, especially if the magnitude response is to remain constant.
http://en.wikibooks.org/wiki/Digital%20Signal%20Processing
To make the compensator work correctly, the following property must be satisfied:

|z| < |p|

And both the pole and zero location should be close to the origin, in the LHP. Because there is only one pole and one zero, they both should be located on the real axis. Phase lead compensators help to shift the poles of the transfer function to the left, which is beneficial for stability purposes.
However, in the lag compensator, the location of the pole and zero should be swapped:

|p| < |z|

Both the pole and the zero should be close to the origin, on the real axis. The phase lag compensator helps to improve the steady-state error of the system. The pole and zero of the lag compensator should be very close together, to help prevent the poles of the system from shifting right, and therefore reducing system stability.
T_lead-lag(s) = ((s − z1)(s − z2)) / ((s − p1)(s − p2))

Where typically the following relationship must hold true:

|p1| > |z1| > |z2| > |p2|
35 Nonlinear Systems
35.1 Nonlinear General Solution
w:Non-linear control1

A nonlinear system, in general, can be defined as follows:

x′(t) = f(t, t0, x, x0)
x(t0) = x0

Where f is a nonlinear function of the time, the system state, and the initial conditions. If the initial conditions are known, we can simplify this as:

x′(t) = f(t, x)

The general solution of this equation (or the most general form of a solution that we can state without knowing the form of f) is given by:
x(t) = x0 + ∫_{t0}^{t} f(τ, x) dτ
and we can prove that this is the general solution to the above equation, because when we differentiate both sides we recover the original differential equation.
x_n(t) = x0 + ∫_{t0}^{t} f(τ, x_{n−1}(τ)) dτ
The x_n series of equations will converge on the solution to the equation as n approaches infinity.
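To make the successive-approximation idea concrete, here is a small numerical sketch for the scalar example x′ = t·x, x(0) = 1; the example system and grid are arbitrary choices, not taken from the text:

% Picard iteration (successive approximations) for x'(t) = t*x(t), x(0) = 1
t = linspace(0, 1, 101);          % time grid
x = ones(size(t));                % x_0(t): start from the constant initial condition
for n = 1:5                       % a few iterations of the integral equation
    f = t .* x;                   % f(tau, x_{n-1}(tau))
    x = 1 + cumtrapz(t, f);       % x_n(t) = x0 + integral of f from 0 to t
end
plot(t, x, t, exp(t.^2/2), '--'); % compare against the exact solution exp(t^2/2)
legend('Picard approximation', 'exact solution');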
35.2 Linearization
Nonlinear systems are difficult to analyze, and for that reason one of the best methods for analyzing those systems is to find a linear approximation to the system. Frequently, such approximations are only good for certain operating ranges, and are not valid beyond certain bounds. The process of finding a suitable linear approximation to a nonlinear system is known as linearization.
Figure 83
This image shows a linear approximation (dashed line) to a non-linear system response (solid line). This linear approximation, like most, is accurate within a certain range, but becomes
more inaccurate outside that range. Notice how the curve and the linear approximation diverge towards the right of the graph.
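As a concrete sketch of the linearization idea, a nonlinear state equation x′ = f(x) can be linearized about an equilibrium by computing the Jacobian numerically. The damped-pendulum model below is an arbitrary example, not a system from the text:

% Linearize x' = f(x) about an equilibrium point with a numerical Jacobian
f  = @(x) [x(2); -9.81*sin(x(1)) - 0.5*x(2)];   % example: damped pendulum
xe = [0; 0];                                    % equilibrium (pendulum hanging down)
n = numel(xe); h = 1e-6; A = zeros(n);
for k = 1:n
    dx = zeros(n, 1); dx(k) = h;
    A(:, k) = (f(xe + dx) - f(xe - dx)) / (2*h); % central-difference Jacobian column
end
disp(A);          % linearized model: x' is approximately A*(x - xe) near xe
disp(eig(A));     % eigenvalues of A indicate local stability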
36 Common Nonlinearities
There are some nonlinearities that happen so frequently in physical systems that they are called "Common nonlinearities". These common nonlinearities include Hysteresis, Backlash, and Dead-zone.
36.1 Hysteresis
Continuing with the example of a household thermostat, let's say that your thermostat is set at 70 degrees (Fahrenheit). The furnace turns on, and the house heats up to 70 degrees, and then the thermostat dutifully turns the furnace off again. However, there is still a large amount of residual heat left in the ducts, and the hot air from the vents on the ground may not all have risen up to the level of the thermostat. This means that after the furnace turns off, the house may continue to get hotter, maybe even to uncomfortable levels.

So the furnace turns off, the house heats up to 80 degrees, and then the air conditioner turns on. The temperature of the house cools down to 70 degrees again, and the A/C turns back off. However, the house continues to cool down, and then it gets too cold, and the furnace needs to turn back on.

As we can see from this example, a bang-bang controller, if poorly designed, can cause big problems, and it can waste lots of energy. To avoid this, we implement the idea of Hysteresis, which is a set of threshold values that allow for overflow outputs. Implementing hysteresis, our furnace now turns off when we get to 65 degrees, and the house slowly warms up to 75 degrees, and doesn't turn on the A/C unit. This is a far preferable solution.
36.2 Backlash
E.g., a mechanical gear train.
36.3 Dead-Zone
A dead-zone is a kind of nonlinearity in which the system doesn't respond to the given input until the input reaches a particular level.
37.2.1 Expectation
The expectation operator, E, is used to find the expected, or mean, value of a given random variable. The expectation operator is defined as:
E[x] = ∫_{−∞}^{∞} x·f_x(x) dx
If we have two variables that are independent of one another, the expectation of their product is the product of their expectations; in particular, if either variable is zero-mean, the expectation of the product is zero.
37.2.2 Covariance
The covariance matrix, Q, is the expectation of a random vector times its transpose:

E[x(t)x′(t)] = Q(t)

If we take the value of the x transpose at a different point in time, we can calculate out the covariance as:

E[x(t)x′(s)] = Q(t)·δ(t − s)

Where δ is the impulse function.
System Covariance
x(t) = φ(t, t0)x0 + ∫_{t0}^{t} φ(t, τ) B(τ) v(τ) dτ
If we take the expected value of this function, it should give us the expected value of the output of the system. In other words, we would like to determine what the expected output of our system is going to be by adding a new, noise input.
E[x(t)] = E[ φ(t, t0)x0 + ∫_{t0}^{t} φ(t, τ) B(τ) v(τ) dτ ]
In the second term of this equation, neither φ nor B are random variables, and therefore they can come outside of the expectation operation. Since v is zero-mean, its expectation is zero. Therefore, the second term is zero. In the first term, φ is not a random variable, but x0 does create a dependency on the output x(t), and we need to take the expectation of it. This means that:

E[x(t)] = φ(t, t0) E[x0]

In other words, the expected output of the system is, on average, the value that the output would be if there were no noise. Notice that if our noise vector v were not zero-mean, this result would not hold.
E[x(t)x′(t)] = E[ (φ(t, t0)x0 + ∫_{t0}^{t} φ(t, τ)B(τ)v(τ)dτ) (φ(t, t0)x0 + ∫_{t0}^{t} φ(t, τ)B(τ)v(τ)dτ)′ ]
If we multiply this out term by term, and cancel out the expectations that have a zero value, we get the following result:

E[x(t)x′(t)] = φ(t, t0) E[x0 x0′] φ′(t, t0) = P

We call this result P, and we can find the first derivative of P by using the chain rule:

P′(t) = A(t)φ(t, t0) P0 φ′(t, t0) + φ(t, t0) P0 φ′(t, t0) A′(t)

Where
P0 = E[x0 x0′]

We can reduce this to:

P′(t) = A(t)P(t) + P(t)A′(t) + B(t)Q(t)B′(t)

In other words, we can analyze the system without needing to calculate the state-transition matrix. This is a good thing, because it can often be very difficult to calculate the state-transition matrix.
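For a stable LTI system, setting P′(t) = 0 in this equation gives another Lyapunov equation, so the steady-state covariance can be computed with the same lyap function mentioned earlier. The matrices below are arbitrary illustrations:

% Steady-state covariance of x' = A*x + B*v, with E[v(t)v'(s)] = Q*delta(t - s)
A = [0 1; -2 -3];
B = [0; 1];
Q = 0.1;                     % noise intensity
P = lyap(A, B*Q*B');         % solves A*P + P*A' + B*Q*B' = 0
disp(P);                     % steady-state value of E[x(t)x'(t)]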
x(t) = φ(t, t0)x0 + ∫_{t0}^{t} φ(t, τ) B(τ) v(τ) dτ
We can run into a problem because in a gaussian distribution, especially systems with high variance (especially systems with infinite variance), the value of v can momentarily become undefined (approach infinity), which will cause the value of x to likewise become undefined at certain points. This is unacceptable, and makes further analysis of this problem difficult. Let us look again at our original equation, with zero control input:

x′(t) = A(t)x(t) + B(t)v(t)

We can multiply both sides by dt, and get the following result:

dx = A(t)x(t)dt + B(t)v(t)dt

We can define a new differential, dw(t), which is an infinitesimal function of time, as:

dw(t) = v(t)dt

This new term, dw, is a random process known as a Wiener process, which is the result of transforming a gaussian process in this manner. Now, we can integrate both sides of this equation:
x(t) = x(t0) + ∫_{t0}^{t} A(τ)x(τ) dτ + ∫_{t0}^{t} B(τ) dw(τ)
w:Ito Calculus1

However, this leads us to an unusual place, and one for which we are (probably) not prepared to continue further: in the last term on the right-hand side, we are attempting to integrate with respect to a function, not a variable. In this instance, the standard Riemann integrals that we are all familiar with cannot solve this equation. There are advanced techniques known as Ito Calculus, however, that can solve this equation, but these methods are currently outside the scope of this book.
http://en.wikipedia.org/wiki/Ito%20Calculus
Figure 84
39.2.1 Prewarping
The W domain is not the same as the Laplace domain, but if we employ the process of prewarping before we take the bilinear transform, we can make our results match more closely to the desired Laplace Domain representation. Using prewarping, we can show the effect of the bilinear transform graphically:
Figure 85
The shape of the graph before and after prewarping is the same as it is without prewarping. However, the destination domain is the S-domain, not the W-domain.
And once we are in this form, we can make a direct conversion between the s and z planes using the following mapping:

Matched Z Transform
s + α → 1 − z⁻¹ e^{−αT}

Pro: A good direct mapping in terms of s and a single coefficient.
Con: Requires the Laplace-domain function to be decomposed using partial fraction expansion.
s = (3/T) · (z² − 1) / (z² + 4z + 1)

CON: Essentially multiplies the order of the transfer function by a factor of 2. This makes things difficult when you are trying to physically implement the system.
Pro: Directly maps a function in terms of z and s into a function in terms of only z.
Con: Requires a function that is already in terms of s, z and α.
39.6 Z-Forms
Category:Control Systems1
http://en.wikibooks.org/wiki/Category%3AControl%20Systems
40 Appendix: Transforms
40.1 Laplace Transform
w:Laplace Transform1

When we talk about the Laplace transform, we are actually talking about the version of the Laplace transform known as the unilateral Laplace Transform. The other version, the bilateral Laplace Transform (not related to the Bilinear Transform, below), is not used in this book. The Laplace Transform is defined as:

Laplace Transform
F(s) = L{f(t)} = ∫₀^∞ f(t) e^{−st} dt

And the Inverse Laplace Transform is defined as:

Inverse Laplace Transform

f(t) = L⁻¹{F(s)} = (1/2πi) ∫_{c−i∞}^{c+i∞} e^{st} F(s) ds
[Table of common Laplace transform pairs (tⁿ, e^{−at}u(t), cos(ωt)u(t), sin(ωt)u(t), cosh(ωt)u(t), sinh(ωt)u(t), damped sinusoids, and related signals), together with a table of Laplace transform properties: linearity, differentiation, frequency division, frequency integration, time integration, scaling, the initial value theorem, the final value theorem, frequency shifts, time shifts, and the convolution theorem. In the properties, f(t) = L⁻¹{F(s)}, g(t) = L⁻¹{G(s)}, and s = σ + jω.]
40.1.3 Convergence of the Laplace Integral
40.1.4 Properties of the Laplace Transform
40.2 Fourier Transform

The Fourier Transform is defined as:

Fourier Transform

F(jω) = F[f(t)] = ∫₀^∞ f(t) e^{−jωt} dt

And the Inverse Fourier Transform is defined as:

Inverse Fourier Transform

f(t) = F⁻¹{F(jω)} = (1/2π) ∫_{−∞}^{∞} F(jω) e^{jωt} dω
[First half of a table of common Fourier transform pairs; the recoverable frequency-domain entries include 1, e^{−jωc}, πδ(ω) + 1/(jω), 1/(jω + b), and π[δ(ω + ω0) + δ(ω − ω0)].]
http://en.wikipedia.org/wiki/Fourier%20Transform
[Second half of the Fourier transform pair table: sin(ω0·t), sin(ω0·t + θ), the rectangular pulse p_τ(t), the sinc function, the triangular pulse, and the two-sided exponential e^{−a|t|} with Re{a} > 0.]

Notes:
1. sinc(x) = sin(x)/x
2. p_τ(t) is the rectangular pulse function of width τ
3. u(t) is the Heaviside step function
4. δ(t) is the Dirac delta function
Rows 1 through 4 of the Fourier transform property table cover linearity, the shift in the time domain, the shift in the frequency domain (the dual of the time shift), and time scaling: g(at) corresponds to (1/|a|)·G(ω/a). If |a| is large, then g(at) is concentrated around 0 and (1/|a|)·G(ω/a) spreads out and flattens.

Signal | Unitary Fourier transform, angular frequency G(ω) | Unitary Fourier transform, ordinary frequency G(f) | Remarks
5. G(t) | g(−ω) | g(−f) | Duality property of the Fourier transform. Results from swapping "dummy" variables of t and ω.
6. dⁿg(t)/dtⁿ | (iω)ⁿ G(ω) | (i2πf)ⁿ G(f) | Generalized derivative property of the Fourier transform.
7. tⁿ g(t) | iⁿ dⁿG(ω)/dωⁿ | (i/2π)ⁿ dⁿG(f)/dfⁿ | This is the dual to 6.
8. (g ∗ h)(t) | √(2π) G(ω)H(ω) | G(f)H(f) | g ∗ h denotes the convolution of g and h; this rule is the convolution theorem.
9. g(t)h(t) | (G ∗ H)(ω)/√(2π) | (G ∗ H)(f) | This is the dual of 8.
10. Purely real even g(t) | G(ω) is a purely real even function | G(f) is a purely real even function |
11. Purely real odd g(t) | G(ω) is a purely imaginary odd function | G(f) is a purely imaginary odd function |
40.2.3 Convergence of the Fourier Integral
40.2.4 Properties of the Fourier Transform
40.3 Z-Transform
w:Z-transform3 The Z-transform is used primarily to convert discrete data sets into a continuous representation. The Z-transform is notationally very similar to the star transform, except that the
3 http://en.wikipedia.org/wiki/Z-transform
Z transform does not take explicit account of the sampling period. The Z transform has a number of uses in the field of digital signal processing, and the study of discrete signals in general, and is useful because Z-transform results are extensively tabulated, whereas star-transform results are not. The Z Transform is defined as:

Z Transform
X(z) = Z[x[n]] = Σ_{n = −∞}^{∞} x[n] z⁻ⁿ
[Table of common Z-transform pairs X(z) with their regions of convergence. Recoverable entries include n·aⁿu[−n−1], n²·aⁿu[n], n²·aⁿu[−n−1], cos(ω0·n)u[n], sin(ω0·n)u[n], aⁿcos(ω0·n)u[n], and aⁿsin(ω0·n)u[n], with ROCs such as |z| > 1, |z| < 1, |z| > |a|, and |z| < |a|.]
F*(s) = L*[f(t)] = Σ_{i=0}^{∞} f(iT) e^{−siT}

Star transform pairs can be obtained by plugging z = e^{sT} into the Z-transform pairs, above.
Where T is the sampling time of the discrete signal. Frequencies in the w domain are related to frequencies in the s domain through the following relationship:

ω_w = (2/T) · tan(ω_s · T / 2)
This relationship is called the frequency warping characteristic of the bilinear transform. To counteract the effects of frequency warping, we can pre-warp the Z-domain equation using the inverse warping characteristic. If the equation is prewarped before it is transformed, the resulting poles of the system will line up more faithfully with those in the s-domain.

Bilinear Frequency Prewarping
ω = (2/T) · arctan(ω_a · T / 2)
Applying these transformations before applying the bilinear transform actually enables direct conversions between the S-Domain and the Z-Domain. The act of applying one of these frequency warping characteristics to a function before transforming is called prewarping.
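In MATLAB's Control System Toolbox (and in Octave's control package), the bilinear transform with prewarping is available through the c2d command; the system and match frequency below are arbitrary illustrations, not examples from the text:

% Discretize G(s) with the bilinear (Tustin) transform, prewarped at 10 rad/s
Gs = tf(100, [1 2 100]);                    % example continuous-time system
T  = 0.01;                                  % sample period
Gz_tustin  = c2d(Gs, T, 'tustin');          % plain bilinear transform
Gz_prewarp = c2d(Gs, T, 'prewarp', 10);     % prewarped so responses match at 10 rad/s
bode(Gs, Gz_tustin, Gz_prewarp);            % compare the frequency responses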
41 System Representations
41.1 System Representations
This is a table of times when it is appropriate to use each different type of system representation:

Properties | State-Space Equations | Transfer Function | Transfer Matrix
Linear, Distributed | no | no | no
Linear, Lumped | yes | no | no
Linear, Time-Invariant, Distributed | no | yes | no
Linear, Time-Invariant, Lumped | yes | yes | yes
State-Space Equations (Time-Variant):

x′(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)

These are the digital versions of the equations listed above. All the variables have the same meanings, except that the systems are digital.

Digital State Equations (Time-Invariant):

x[t + 1] = Ax[t] + Bu[t]
y[t] = Cx[t] + Du[t]

Digital State Equations (Time-Variant):

x[t + 1] = A[t]x[t] + B[t]u[t]
y[t] = C[t]x[t] + D[t]u[t]
Transfer Matrix
42 Matrix Operations
For more about this subject, see: Linear Algebra1 and Engineering Analysis2
1 2
http://en.wikibooks.org/wiki/Linear%20Algebra http://en.wikibooks.org/wiki/Engineering%20Analysis
42.3 Determinant
The determinant of a matrix is a scalar value. It is denoted similarly to absolute-value in scalars: |X|. A matrix has an inverse if the matrix is square, and if the determinant of the matrix is non-zero.
42.4 Inverse
The inverse of a matrix A, which we will denote here by "B", is any matrix that satisfies the following equation:

AB = BA = I

Matrices that have such a companion are known as "invertible" matrices, or "non-singular" matrices. Matrices which do not have an inverse that satisfies this equation are called "singular" or "non-invertible". An inverse can be computed in a number of different ways:

1. Append the matrix A with the identity matrix of the same size. Use row-reductions to make the left side of the appended matrix an identity. The right side of the appended matrix will then be the inverse:
[A | I] → [I | B]

2. The inverse matrix is given by the adjoint matrix divided by the determinant. The adjoint matrix is the transpose of the cofactor matrix:

A⁻¹ = adj(A) / |A|
42.5 Eigenvalues
The eigenvalues of a matrix, denoted by the Greek letter λ (lambda), are the solutions to the characteristic equation of the matrix:

|X − λI| = 0

Eigenvalues only exist for square matrices. Non-square matrices do not have eigenvalues. If the matrix X is a real matrix, the eigenvalues will either be all real, or else there will be complex conjugate pairs.
42.6 Eigenvectors
The eigenvectors of a matrix are the nullspace solutions of the characteristic equation:

(X − λᵢI)vᵢ = 0

There is at least one distinct eigenvector for every distinct eigenvalue. Multiples of an eigenvector are also themselves eigenvectors. However, eigenvectors that are not linearly independent are called "non-distinct" eigenvectors, and can be ignored.
42.7 Left-Eigenvectors
Left eigenvectors are the left-hand nullspace solutions to the characteristic equation:

wᵢ(A − λᵢI) = 0

These are also the rows of the inverse transition matrix.
42.10 MATLAB
The MATLAB programming environment was specially designed for matrix algebra and manipulation. The following is a brief refresher about how to manipulate matrices in MATLAB:

Addition
To add two matrices together, use a plus sign ("+"):
C = A + B;

Multiplication
To multiply two matrices together, use an asterisk ("*"):
C = A * B;
If your matrices are not the correct dimensions, MATLAB will issue an error.

Transpose
To find the transpose of a matrix, use the apostrophe ("'"):
C = A';

Determinant
To find the determinant, use the det function:
d = det(A);

Inverse
To find the inverse of a matrix, use the function inv:
C = inv(A);

Eigenvalues and Eigenvectors
To find the eigenvalues and eigenvectors of a matrix, use the eig command:
[V, D] = eig(A);
Where D is a square matrix with the eigenvalues of A in the diagonal entries, and V is the matrix whose columns are the corresponding eigenvectors. If the eigenvalues are not distinct, the eigenvectors will be repeated. MATLAB will not calculate the generalized eigenvectors.

Left Eigenvectors
To find the left eigenvectors, assuming there is a complete set of distinct right-eigenvectors, we can take the inverse of the eigenvector matrix:
[V, D] = eig(A);
C = inv(V);
The rows of C will be the left-eigenvectors of the matrix A.
For more information about MATLAB, see the wikibook MATLAB Programming3. Category:Control Systems4
3 4
http://en.wikibooks.org/wiki/MATLAB%20Programming http://en.wikibooks.org/wiki/Category%3AControl%20Systems
43 Appendix: MATLAB
Warning: This page would highly benefit from some screenshots of various systems. Users who have MATLAB or Octave available are highly encouraged to produce some screenshots for the systems here.
43.1 MATLAB
This page assumes a prior knowledge of the fundamentals of MATLAB. For more information about MATLAB, see MATLAB Programming1.

MATLAB is a programming language that is specially designed for the manipulation of matrices. Because of its computational power, MATLAB is a tool of choice for many control engineers to design and simulate control systems. This page is going to discuss using MATLAB for control systems design and analysis. MATLAB has a number of plugin modules called "Toolboxes". Nearly all the functions described below are located in the control systems toolbox. If your system has the control systems toolbox installed, you can get more information about the toolbox by typing help control at the MATLAB prompt.

Also, there is an open-source competitor to MATLAB called Octave. Octave is similar to MATLAB, but there are also some differences. This page will focus on MATLAB, but another page could be added to focus on Octave. As of Sept 10th, 2006, all the MATLAB commands listed below have been implemented in GNU Octave.

This page will use the {{MATLAB CMD}} template to show MATLAB functions that can be used to perform different tasks. MATLAB is a copyrighted product produced by The Mathworks. For more information about MATLAB and The Mathworks, see Control Systems/Resources2.
sys = ss(A, B(:,2), C(3,:), D);

This page will refer to this technique as "input-output isolation".
This system can effectively be modeled as two vectors of coefficients, NUM and DEN:

NUM = [5, 10]
DEN = [1, 4, 5]

Now, we can use the MATLAB step command to produce the step response to this system:

step(NUM, DEN, t);

Where t is a time vector. If no results on the left-hand side are supplied by you, the step function will automatically produce a graphical plot of the step response. If, however, you use the following format:

[y, x, t] = step(NUM, DEN, t);

Then MATLAB will not produce a plot automatically, and you will have to produce one yourself.

Now, let's look at the modern, state-space approach. If we have the matrices A, B, C and D, we can plug these into the step function, as shown:

step(A, B, C, D);

or, we can optionally include a vector for time, t:

step(A, B, C, D, t);

Again, if we supply results on the left-hand side of the equation, MATLAB will not automatically produce a plot for us. This operation can be performed using this MATLAB command: plot

If we didn't get an automatic plot, and we want to produce our own, we type:
[y, x, t] = step(NUM, DEN, t);

And then we can create a graph using the plot command:

plot(t, y);

y is the output magnitude of the step response, while x is the internal state of the system from the state-space equations:

x′ = Ax + Bu
y = Cx + Du
Where n(z) and d(z) are the numerator and denominator polynomials of the transfer function, respectively. The filter command can be used to apply an input vector x to the filter. The output, y, can be obtained from the following code:

y = filter(n, d, x);

The word "filter" may be a bit of a misnomer in this case, but the fact remains that this is the method to apply an input to a digital system. Once we have the output magnitude vector, we can plot it using our plot command:

plot(y);

This operation can be performed using this MATLAB command: ones

To get the step response of the digital system, we must first create a step function using the ones command:

u = ones(1, N);

Where N is the number of samples that we want to take in our digital system (not to be confused with "n", our numerator coefficient). Once we have produced our unit step function, we can pass this function through our digital filter as such:

y = filter(n, d, u);

And we can plot y:

plot(y);
x[k + 1] = Ax[k] + Bu[k]
y[k] = Cx[k] + Du[k]

We can convert automatically to the pulse response using the ss2tf function that we used above:

[NUM, DEN] = ss2tf(A, B, C, D);

Then, we can filter it with our prepared unit-step sequence vector, u:

y = filter(NUM, DEN, u);

This will give us the step response of the digital system in the state-space representation.
Or:

rlocus(A, B, C, D, K);

If K is not supplied, MATLAB will supply an automatic gain value for you. Once we have our values [r, K], we can plot a root locus:

plot(r);

The rlocus command cannot be used with MIMO systems, so if your system is a MIMO system, you must separate out your coefficient matrices to isolate each separate input-output pair, and graph each individually.
This operation can be performed using this MATLAB command: logspace

When talking about Bode plots in decibels, it makes the most sense (and is the most common occurrence) to also use a logarithmic frequency scale. To create such a logarithmic sequence in omega, we use the logspace command, as such:

omega = logspace(a, b, n);

This command produces n points, spaced logarithmically, from 10^a up to 10^b. If we use the bode command without left-hand arguments, MATLAB will produce a graph of the Bode phase and magnitude plots automatically.

The bode command, if used with a MIMO system, will use subplots to produce all the input-output relationship graphs on a single plot window. For a system with multiple inputs and multiple outputs, this can become difficult to see clearly. In these cases, it is typically better to separate out your coefficient matrices to isolate each individual input-output pair.
43.12 Observability
An observability matrix can be constructed using the command obsv
3 4
http://en.wikibooks.org/wiki/MATLAB%20Programming http://wikis.controltheorypro.com/index.php?title=Category:MATLAB
44.1 A, B, C
Acceleration Error
The amount of steady-state error of the system when stimulated by a unit parabolic input.
Acceleration Error Constant
A system metric that determines the amount of acceleration error in the system.
Adaptive Control
A branch of control theory where controller systems are able to change their response characteristics over time, as the input characteristics to the system change.
Adaptive Gain
When control gain is varied depending on system state or condition, such as a disturbance.
Additivity
A system is additive if a sum of inputs results in a sum of outputs.
Analog System
A system that is continuous in time and magnitude.
ARMA
Autoregressive Moving Average, see http://en.wikipedia.org/wiki/Autoregressive_moving_average_model
ATO
Analog Timed Output. Control loop output is correlated to a timed contact closure.
A/M
Auto-Manual. Control modes, where auto typically means output is computer-driven (calculated), while manual can be field-driven or merely using a static setpoint.
Bilinear Transform
A variant of the Z-transform, see http://en.wikibooks.org/wiki/Digital_Signal_Processing/Bilinear_Transform
Block Diagram
A visual way to represent a system that displays individual system components as boxes, and connections between systems as arrows.
Bode Plots
A set of two graphs, a "magnitude" and a "phase" graph, that are both plotted on log scale paper. The magnitude graph is plotted in decibels versus frequency, and the phase graph is plotted in degrees versus frequency. Used to analyze the frequency characteristics of the system.
Bounded Input, Bounded Output
BIBO. If the input to the system is finite, then the output must also be finite. A condition for stability.
Cascade
When the output of a control loop is fed to/from another loop.
Causal
A system whose output does not depend on future inputs. All physical systems must be causal.
Classical Approach
See Classical Controls.
Classical Controls
A control methodology that uses the transform domain to analyze and manipulate the Input-Output characteristics of a system.
Closed Loop
A controlled system using feedback or feedforward.
Compensator
A control system that augments the shortcomings of another system.
Condition Number
Conditional Stability
A system with variable gain is conditionally stable if it is BIBO stable for certain values of gain, but not BIBO stable for other values of gain.
Continuous-Time
A system or signal that is defined at all points t.
Control Rate
The rate at which control is computed and any appropriate output sent. Lower bound is sample rate.
Control System
A system or device that manages the behavior of another system or device.
Controller
See Control System.
Convolution
A complex operation on functions defined by the integral of the two functions multiplied together, and time-shifted.
Convolution Integral
The integral form of the convolution operation.
CQI
Control Quality Index, = 1 − abs(PV − SP)/max[PVmax − SP, SP − PVmin], 1 being ideal.
CV
Controlled variable
44.2 D, E, F
Damping Ratio
A constant that determines the damping properties of a system.
Deadtime
Time shift between the output change and the related effect (typ. at least one control sample). One sees "Lag" used for this action sometimes.
Digital
A system that is both discrete-time, and quantized.
Direct action
Target output increase is required to bring the process variable (PV) to setpoint (SP) when PV is below SP. Thus, PV increases with output increase directly.
Discrete magnitude
See quantized.
Discrete time
A system or signal that is only defined at specific points in time.
Distributed
A system is distributed if it has both an infinite number of states, and an infinite number of state variables. See Lumped.
Dynamic
A system is called dynamic if it has memory. See Instantaneous, Memory.
Eigenvalues
Solutions to the characteristic equation of a matrix. If the matrix is itself a function of time, the eigenvalues might be functions of time. In this case, they are frequently called eigenfunctions.
Eigenvectors
The nullspace vectors of the characteristic equation for particular eigenvalues. Used to determine state-transitions, among other things. See http://en.wikibooks.org/wiki/Control_Systems/Eigenvalues_and_Eigenvectors
Euler's Formula
An equation that relates complex exponentials to complex sinusoids.
Exponential Weighted Average (EWA)
Apportions fractional weight to new and existing data to form a working average. Example: EWA = 0.70*EWA + 0.30*latest; see Filtering.
External Description
A description of a system that relates the input of the system to the output, without explicitly accounting for the internal states of the system.
Feedback
The output of the system is passed through some sort of processing unit H, and that result is fed into the plant as an input.
Feedforward
When a priori knowledge is used to forecast at least part of the control response.
Filtering (noise)
Use of signal smoothing techniques to reject undesirable components like noise. Can be as simple as using exponential weighted averaging on the input.
Final Value Theorem
A theorem that allows the steady-state value of a system to be determined from the transfer function.
FOH
First order hold
Frequency Response
The response of a system to sinusoids of different frequencies. The Fourier Transform of the impulse response.
Fourier Transform
An integral transform, similar to the Laplace Transform, that analyzes the frequency characteristics of a system.
See http://en.wikibooks.org/wiki/Waves/Fourier_Transforms
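A minimal sketch, not from the original text, of the exponential weighted average described in the Filtering and EWA entries above (the data and the 0.70/0.30 weights are only illustrative):
raw = 5 + randn(1, 100);      % hypothetical noisy measurement sequence
ewa = zeros(size(raw));
ewa(1) = raw(1);              % initialize the working average with the first sample
for k = 2:length(raw)
    ewa(k) = 0.70*ewa(k-1) + 0.30*raw(k);   % EWA = 0.70*EWA + 0.30*latest
end
plot(1:length(raw), raw, 1:length(raw), ewa);   % smoothed trace versus raw data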
44.3 G, H, I
Game Theory A branch of study that is related to control engineering, and especially optimal control. Multiple competing entities, or "players", attempt to minimize their own cost and maximize the cost of their opponents.
Gain A constant multiplier in a system that is typically implemented as an amplifier or attenuator. Gain can be changed, but is typically not a function of time. Adaptive control can use time-adaptive gains that change with time.
General Description An external description of a system that relates the system output to the system input, the system response, and a time constant through integration.
Hendrik Wade Bode Electrical engineer who did work in control theory and communications. Primarily remembered in control engineering for his introduction of the Bode plot.
Harry Nyquist Electrical engineer who did extensive work in controls and information theory. Remembered in this book primarily for his introduction of the Nyquist Stability Criterion.
Homogeneity Property of a system whose scaled input results in an equally scaled output.
Hybrid Systems Systems which have both analog and digital components.
Impulse A function, denoted δ(t), that is the derivative of the unit step.
Impulse Response The system output when the system is stimulated by an impulse input. The Inverse Laplace Transform of the transfer function of the system.
Initial Conditions The conditions of the system at time t = t0, where t0 is the first time the system is stimulated.
Initial Value Theorem A theorem that allows the initial conditions of the system to be determined from the transfer function.
Input-Output Description See External Description.
Instantaneous A system is instantaneous if it doesn't have memory, and if the current output of the system depends only on the current input. See Dynamic, Memory.
Integrated Absolute Error (IAE) The absolute error (ideal vs. actual performance) integrated over the analysis period.
Integrated Squared Error (ISE) The squared error (ideal vs. actual performance) integrated over the analysis period.
Integrators A system pole at the origin of the S-plane. Has the effect of integrating the system input.
Inverse Fourier Transform An integral transform that converts a function from the frequency domain into the time domain.
Inverse Laplace Transform An integral transform that converts a function from the S-domain into the time domain.
Inverse Z-Transform An integral transform that converts a function from the Z-domain into the discrete time domain.
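As a hedged illustration of the Impulse Response entry above, the sketch below builds an example transfer function and plots its impulse and step responses. It assumes the Control Systems Toolbox in MATLAB, or the control package in Octave (run pkg load control first); the plant G(s) = 1/(s^2 + 3s + 2) is only an example.
G = tf(1, [1 3 2]);    % example system with poles at s = -1 and s = -2
figure; impulse(G);    % impulse response: inverse Laplace transform of G(s)
figure; step(G);       % step response, for comparison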
44.4 J, K, L
Lag When the observed process response to an output change is slower than the control rate.
Laplace Transform An integral transform that converts a function from the time domain into a complex frequency domain.
Laplace Transform Domain A complex domain where the Laplace Transform of a function is graphed. The imaginary part of s is plotted along the vertical axis, and the real part of s is plotted along the horizontal axis.
Left Eigenvectors Left-hand nullspace solutions to the characteristic equation of a matrix for given eigenvalues. The rows of the inverse transition matrix.
Linear
A system that satisfies the superposition principle. See Additive and Homogeneous.
Linear Time-Invariant LTI. See Linear, and Time-Invariant.
Low Clamp User-applied lower bound on control output signal.
L/R Local/Remote operation.
LQR Linear Quadratic Regulator.
Lumped A system with a finite number of states, or a finite number of state variables.
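A short, hedged sketch of the LQR entry above, computing an optimal state-feedback gain for an assumed double-integrator plant; it requires the Control Systems Toolbox (MATLAB) or the control package (Octave), and the weighting matrices are illustrative only.
A = [0 1; 0 0];         % example state matrix (double integrator)
B = [0; 1];             % example input matrix
Q = eye(2);             % state weighting matrix
R = 1;                  % input weighting
K = lqr(A, B, Q, R);    % optimal state-feedback gain for u = -K*x
eig(A - B*K)            % closed-loop eigenvalues: should lie in the left half-plane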
44.5 M, N, O
Magnitude The gain component of the frequency response. This is often all that is considered in saying that a discrete filter's response is well matched to the analog's. At zero frequency the magnitude is the DC gain.
Marginal Stability A system has an oscillatory, undamped response, as determined by having purely imaginary poles or purely imaginary eigenvalues.
Mason's Rule See http://en.wikipedia.org/wiki/Masons_rule
MATLAB Commercial software having a Control Systems toolbox. Also see Octave.
Memory A system has memory if its current output is dependent on previous and current inputs.
MFAC Model Free Adaptive Control.
MIMO A system with multiple inputs and multiple outputs.
Modern Approach See Modern Controls.
Modern Controls
A control methodology that uses the state-space representation to analyze and manipulate the Internal Description of a system.
Modified Z-Transform A version of the Z-Transform, expanded to allow for an arbitrary processing delay.
MPC Model Predictive Control.
MRAC Model Reference Adaptive Control.
MV Can denote Manipulated variable or Measured variable (not the same).
Natural Frequency The fundamental frequency of the system; the frequency for which the system's frequency response is largest.
Negative Feedback A feedback system where the output signal is subtracted from the input signal, and the difference is input to the plant.
The Nyquist Criteria A necessary and sufficient condition of stability that can be derived from Bode plots.
Nonlinear Control A branch of control engineering that deals exclusively with nonlinear systems. We do not cover nonlinear systems in this book.
OCTAVE Open-source software having a Control Systems toolbox. Also see MATLAB.
Offset The discrepancy between the desired and actual value after settling. P-only control can give offset.
Oliver Heaviside Electrical engineer who introduced the Laplace Transform as a tool for control engineering.
Open Loop When the loop is not closed, the system's behavior has a free-running component rather than a controlled one.
Optimal Control A branch of control engineering that deals with the minimization of system cost, or the maximization of system performance.
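A minimal sketch of the Negative Feedback entry above: the plant G(s) and the feedback element H(s) below are assumed examples, and the Control Systems Toolbox (MATLAB) or control package (Octave) is required.
G = tf(10, [1 2 0]);    % example plant with an integrator (pole at s = 0)
H = tf(1, [0.1 1]);     % example sensor / feedback element
T = feedback(G, H);     % closed loop G/(1 + GH); negative feedback is the default sign
step(T);                % closed-loop step response (setpoint tracking)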
Order The order of a polynomial is the highest exponent of the independent variable in that polynomial. The order of a system is the order of the Transfer Function's denominator polynomial.
Output Equation An equation that relates the current system input and the current system state to the current system output.
Overshoot Measures the extent of the system response beyond the desired value (setpoint tracking).
44.6 P, Q, R
Parabolic A parabolic input is defined by the equation $\frac{1}{2}t^2 u(t)$.
Partial Fraction Expansion A method by which a complex fraction is decomposed into a sum of simple fractions.
Percent Overshoot PO, the amount by which the step response overshoots the reference value, as a percentage of the reference value.
Phase The directional component of the frequency response, not typically well matched between a discrete filter equivalent and the analog version, especially as the frequency approaches the Nyquist limit. The final value in the limit drives system stability, and stems from the poles and zeros of the characteristic equation.
PID Proportional-Integral-Derivative.
Plant A central system which has been provided, and must be analyzed or controlled.
PLC Programmable Logic Controller.
Pole A value of s that causes the denominator of the transfer function to become zero, and therefore causes the transfer function itself to approach infinity.
Pole-Zero Form
The transfer function is factored so that the locations of all the poles and zeros are clearly evident.
Position Error The amount of steady-state error of a system stimulated by a unit step input.
Position Error Constant A constant that determines the position error of a system.
Positive Feedback A feedback system where the system output is added to the system input, and the sum is input into the plant.
PSD The power spectral density, which shows the distribution of power in the spectrum of a particular signal.
Pulse Response The response of a digital system to a unit step input, in terms of the transfer matrix.
PV Process variable.
Quantized A system is quantized if it can only output certain discrete values.
Quarter-decay The time or number of control rates required for the process overshoot to be limited to within 1/4 of the maximum peak overshoot (PO) after an SP change. If the PO is 25% at sample time N, this would be time N+k when the subsequent PV remains < SP*1.0625, presuming the process is settling.
Raise-Lower An output type that works from the present position rather than as a completely new computed spanned output. For R/L, the % change should be applied to the working clamps, i.e. 5%*(hi clamp - lo clamp).
Ramp A ramp is defined by the function tu(t).
Reconstructors A system that converts a digital signal into an analog signal.
Reference Value The target input value of a feedback system.
Relaxed A system is relaxed if the initial conditions are zero.
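As a hedged illustration of the Percent Overshoot entry above, the following sketch estimates PO directly from a simulated step response; the second-order example system and its parameters are assumed, and the Control Systems Toolbox (MATLAB) or control package (Octave) is required.
wn = 2; zeta = 0.3;                      % example natural frequency and damping ratio
G  = tf(wn^2, [1 2*zeta*wn wn^2]);       % standard second-order transfer function
[y, t] = step(G);                        % unit step response samples
PO = (max(y) - y(end)) / y(end) * 100;   % overshoot as a percentage of the (approximate) final value
fprintf('Percent overshoot: %.1f %%\n', PO);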
Reverse Action A target output decrease is required to bring the process variable (PV) to setpoint (SP) when PV is below SP. Thus, PV decreases with an output increase.
Rise Time The amount of time it takes for the step response of the system to reach within a certain range of the reference value; typically measured from 10% to 90% of the reference value (a range of 80%).
Robust Control A branch of control engineering that deals with systems subject to external and internal noise and disruptions.
44.7 S, T, U, V
Samplers A system that converts an analog signal into a digital signal.
Sampled-Data Systems See Hybrid Systems.
Sampling Time In a discrete system, the sampling time is the amount of time between samples. It reflects the lower bound for the control rate.
SCADA Supervisory Control and Data Acquisition.
S-Domain The domain of the Laplace Transform of a signal or system.
Second-order System A system whose transfer function has a second-order denominator polynomial, characterized by a natural frequency and a damping ratio.
Settling Time The amount of time it takes for the system's oscillatory response to be damped to within a certain band of the steady-state value. That band is typically 10%.
Signal Flow Diagram A method of visually representing a system, using arrows to represent the direction of signals in the system.
SISO Single input, single output.
Span The designed operation region of the item, = high range - low range. The working span can be smaller if output clamps are used.
Stability Typically "BIBO Stability": a system with a well-behaved input will result in a well-behaved output. "Well-behaved" in this sense is arbitrary.
Star Transform A version of the Laplace Transform that acts on discrete signals. This transform is implemented as an infinite sum.
State Equation An equation that relates the future states of a system to the current state and the current system input.
State Transition Matrix A coefficient matrix, or a matrix function, that relates how the system state changes in response to the system input. In time-invariant systems, the state-transition matrix is the matrix exponential of the system matrix.
State-Space Equations A set of equations, typically written in matrix form, that relates the input, the system state, and the output. Consists of the state equation and the output equation. See http://en.wikibooks.org/wiki/Control_Systems/State-Space_Equations
State-Variable A vector that describes the internal state of the system.
Stability The system output cannot approach infinity as time approaches infinity. See BIBO, Lyapunov Stability.
Step Response The response of a system when stimulated by a unit-step input. A unit step is a setpoint change for setpoint tracking.
Steady State The output value of the system as time approaches infinity.
Steady State Error At steady state, the amount by which the system output differs from the reference value.
Superposition A system satisfies the condition of superposition if it is both additive and homogeneous.
System Identification A method of trying to identify the system characterization, typically through least-squares analysis of input, output, and noise data vectors. May use an ARMA-type framework.
System Type
The number of ideal integrators in the system.
Time-Invariant A system is time-invariant if an input time-shifted by an arbitrary delay produces an output shifted by that same delay.
Transfer Function The ratio of the system output to its input, in the S-domain. The Laplace Transform of the system's impulse response.
Transfer Function Matrix The Laplace transform of the state-space equations of a system, which provides an external description of a MIMO system.
Uniform Stability Also "Uniform BIBO Stability": a system where an input signal in the range [0, 1] results in a finite output from the initial time until infinite time. See http://en.wikibooks.org/wiki/Control_Systems/Stability.
Unit Step An input defined by u(t). Practically, a setpoint change.
Unity Feedback A feedback system where the feedback loop element H has a transfer function of 1.
Velocity Error The amount of steady-state error when the system is stimulated by a ramp input.
Velocity Error Constant A constant that determines the amount of velocity error in a system.
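A hedged sketch tying together the Position Error Constant and Steady State Error entries above for a unity-feedback system: Kp is the DC gain of the open loop, and the steady-state step error is 1/(1 + Kp). The example plant is assumed, and the Control Systems Toolbox (MATLAB) or control package (Octave) is required.
G   = tf(4, [1 3 2]);   % example type-0 open-loop transfer function
Kp  = dcgain(G);        % position error constant: G(s) evaluated at s = 0
ess = 1/(1 + Kp);       % steady-state error for a unit step, unity feedback
fprintf('Kp = %.2f, steady-state step error = %.3f\n', Kp, ess);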
44.8 W, X, Y, Z
W-plane The reference plane used in the bilinear transform.
Wind-up When the numerics of the computed control adjustment can "wind up", yielding a control correction with an inappropriate component unless prevented. An example is the "I" contribution of PID if the output has been disconnected during the PID calculation.
Zero A value of s that causes the numerator of the transfer function to become zero, and therefore causes the transfer function itself to become zero.
Zero Input Response
The response of a system with zero external input. Relies only on the value of the system state to produce output.
Zero State Response The response of the system with zero system state. The output of the system depends only on the system input.
ZOH Zero order hold.
Z-Transform An integral transform that is related to the Laplace transform through a change of variables. The Z-Transform is used primarily with digital systems. See http://en.wikibooks.org/wiki/Digital_Signal_Processing/Z_Transform
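As a hedged illustration of the ZOH and Z-Transform entries above, the sketch below discretizes an assumed continuous plant with a zero order hold and displays the resulting Z-domain transfer function; it requires the Control Systems Toolbox (MATLAB) or the control package (Octave).
G  = tf(1, [1 1]);        % example continuous first-order plant
Ts = 0.1;                 % sample time in seconds
Gz = c2d(G, Ts, 'zoh')    % zero-order-hold equivalent discrete-time system, shown in z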
45 List of Equations
The following is a list of the important equations from the text, arranged by subject. For more information about these equations, including the meaning of each variable and symbol, the uses of these functions, or the derivations of these equations, see the relevant pages in the main text.
Convolution
$(a * b)(t) = \int_{-\infty}^{\infty} a(\tau)\, b(t - \tau)\, d\tau$
Eigenvalue Equations (right and left eigenvectors)
$Av = \lambda v \qquad wA = \lambda w$
Decibels
$dB = 20 \log(C)$
Position Error Constant (discrete systems)
$K_p = \lim_{z \to 1} G(z)$
Velocity Error Constant (discrete systems)
$K_v = \lim_{z \to 1} (z - 1)\, G(z)$
System Descriptions
General Description
$y(t) = \int_{-\infty}^{\infty} g(t, r)\, x(r)\, dr$
Convolution Description
$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau$
45.6 Transforms
Laplace Transform
$F(s) = \mathcal{L}[f(t)] = \int_{0}^{\infty} f(t)\, e^{-st}\, dt$
Inverse Laplace Transform
$f(t) = \mathcal{L}^{-1}[F(s)] = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} e^{st} F(s)\, ds$
Fourier Transform
$F(j\omega) = \mathcal{F}[f(t)] = \int_{0}^{\infty} f(t)\, e^{-j\omega t}\, dt$
Inverse Fourier Transform
$f(t) = \mathcal{F}^{-1}[F(j\omega)] = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(j\omega)\, e^{j\omega t}\, d\omega$
Star Transform
$F^{*}(s) = \mathcal{L}[f^{*}(t)] = \sum_{i=0}^{\infty} f(iT)\, e^{-siT}$
Z Transform
$X(z) = \mathcal{Z}\{x[n]\} = \sum_{n = -\infty}^{\infty} x[n]\, z^{-n}$
Inverse Z Transform
$x[n] = \mathcal{Z}^{-1}\{X(z)\} = \frac{1}{2\pi j} \oint_{C} X(z)\, z^{n-1}\, dz$
Modified Z Transform
$X(z, m) = \mathcal{Z}(x[n], m) = \sum_{n = -\infty}^{\infty} x[n + m - 1]\, z^{-n}$
State-Space Methods
Solution of the time-invariant state equation
$x(t) = e^{A(t - t_0)} x(t_0) + \int_{t_0}^{t} e^{A(t - \tau)} B\, u(\tau)\, d\tau$
Solution of the discrete state equation
$x[n] = A^{n} x[0] + \sum_{m=0}^{n-1} A^{n-1-m} B\, u[m]$
Solution of the time-variant state equation (state transition matrix $\phi$)
$x(t) = \phi(t, t_0)\, x(t_0) + \int_{t_0}^{t} \phi(t, \tau)\, B(\tau)\, u(\tau)\, d\tau$
Impulse response matrices
$G(t, \tau) = C(t)\, \phi(t, \tau)\, B(\tau) \text{ for } t \ge \tau, \text{ and } 0 \text{ otherwise}$
$G[n] = \begin{cases} C A^{n-1} B & \text{if } n > 0 \\ 0 & \text{if } n \le 0 \end{cases}$
Root Locus
The Characteristic Equation
$1 + KG(s)H(s) = 0 \qquad 1 + K\,GH(z) = 0$
The Angle Equation
$\angle KG(s)H(s) = 180^{\circ} \qquad \angle K\,GH(z) = 180^{\circ}$
Number of Asymptotes
$N_a = P - Z$
Angle of Asymptotes
$\phi_k = \frac{(2k + 1)\pi}{P - Z}$
Breakaway Point Locations
$\frac{d\, G(s)H(s)}{ds} = 0 \quad \text{or} \quad \frac{d\, GH(z)}{dz} = 0$
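A minimal sketch, not from the original text, of plotting the root locus that these equations describe; the loop gain GH below is an assumed example, and the Control Systems Toolbox (MATLAB) or control package (Octave) is required.
GH = tf(1, [1 6 8 0]);   % example loop gain with open-loop poles at s = 0, -2, -4
rlocus(GH);              % closed-loop pole locations as the gain K varies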
Digital PID Controller
$D(z) = K_p + K_i \frac{T}{2}\,\frac{z + 1}{z - 1} + K_d\,\frac{z - 1}{Tz}$
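A hedged sketch of building the digital PID controller D(z) above as a discrete transfer function and closing the loop around an assumed example plant. The gains, sample time, and plant are illustrative only; the Control Systems Toolbox (MATLAB) or control package (Octave) is required, and tf('z', T) is assumed to be available for creating the discrete variable z.
T  = 0.1;                      % sample time
Kp = 2; Ki = 1; Kd = 0.5;      % example controller gains
z  = tf('z', T);               % discrete-time variable with sample time T
Dz = Kp + Ki*(T/2)*(z + 1)/(z - 1) + Kd*(z - 1)/(T*z);   % the D(z) equation above
Gz = c2d(tf(1, [1 2 1]), T, 'zoh');                      % example plant, discretized with a ZOH
step(feedback(Dz*Gz, 1));                                % closed-loop step response, unity feedback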
1 http://en.wikibooks.org/wiki/Linear%20algebra
2 http://en.wikibooks.org/wiki/Linear%20Algebra%20with%20Differential%20Equations
3 http://en.wikibooks.org/wiki/Algebra%2FComplex%20Numbers
4 http://en.wikibooks.org/wiki/Calculus
5 http://en.wikibooks.org/wiki/Signals%20and%20Systems
6 http://en.wikibooks.org/wiki/Engineering%20Analysis
7 http://en.wikibooks.org/wiki/Engineering%20Tables
8 http://en.wikibooks.org/wiki/Analog%20and%20Digital%20Conversion
9 http://en.wikibooks.org/wiki/MATLAB%20Programming
10 http://en.wikibooks.org/wiki/Signal%20Processing
11 http://en.wikibooks.org/wiki/Digital%20Signal%20Processing
Resources and Further Reading
Communication Systems12
Embedded Control Systems Design13
46.2 Wikiversity
v:Control Systems14 The Wikiversity project also contains a number of collaborative learning efforts in the field of control systems, and related subjects. As best as possible, we will attempt to list those efforts here: v:Control Systems15 Wikiversity is also a place to host learning materials, such as assignments, tests, and reading plans. It is the goal of the authors of this book to create such materials for use in conjunction with this book. As such materials are added to wikiversity, they will be referenced here.
46.3 Wikipedia
There are a number of Wikipedia articles on the topics covered in this book, and those articles will be linked to from the appropriate pages of this book. However, some of the articles that are of general use to the book are: w:Control theory16 w:Control engineering17 w:Process control18 A complete listing of all Wikipedia articles related to this topic can be found at: w:Category:Control theory19 .
46.4 Software
46.4.1 Root Locus
Root-Locus is a free program that was used to create several of the images in this book. That software can be obtained from the following web address: http://www.geocities.com/aseldawy/root_locus.html
Explicit permission has been granted by the author of the program to include screenshots on wikibooks. Images generated from the Root-Locus program should be included in Category:Root Locus Images20, and appropriately tagged as a screenshot of a free software program.
46.4.2 MATLAB
MATLAB, Simulink, the Control Systems Toolbox and the Symbolic Toolbox are trademarks of The MathWorks, Inc. Other product or brand names are trademarks or registered trademarks of their respective holders. For more information about MATLAB, or to purchase a copy, visit: http://www.themathworks.com For information about the proper way to refer to MATLAB, please see: http://www.mathworks.com/company/pressroom/editorial_guidelines.html All MATLAB code appearing in this book has been released under the terms of the GFDL by the respective authors. All screenshots, graphs, and images relating to MATLAB have been produced in Octave, with changes to the original MATLAB code made as necessary.
46.4.3 Octave
Octave is a free open-source alternative program to MATLAB. Octave utilizes a scripting language that is very similar to that of MATLAB, although there are several differences. Most of the basic examples described in this book will work equally well in MATLAB or Octave, with no changes or only minor changes. For more information, or to download a copy of Octave, visit: http://octave.sourceforge.net
47 Contributors
Edits 1 16 1 2 1 23 2 4 58 2 1 1 1 1 1 1 2 5 1 1 1
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
User Abletried1 Adrignola2 Agentx3r3 Anith 5554 Antonysigma5 Avicennasis6 Az15687 Bestable8 Billymac009 Bo Dorku10 Cbarlog11 CommonsDelinker12 Constant31413 DSP-user14 Danny B.15 Deepcyan16 Derby County FC17 Dirk Hnniger18 Discostu519 Doleszki20 Druzhnik21
http://en.wikibooks.org/w/index.php?title=User:Abletried http://en.wikibooks.org/w/index.php?title=User:Adrignola http://en.wikibooks.org/w/index.php?title=User:Agentx3r http://en.wikibooks.org/w/index.php?title=User:Anith_555 http://en.wikibooks.org/w/index.php?title=User:Antonysigma http://en.wikibooks.org/w/index.php?title=User:Avicennasis http://en.wikibooks.org/w/index.php?title=User:Az1568 http://en.wikibooks.org/w/index.php?title=User:Bestable http://en.wikibooks.org/w/index.php?title=User:Billymac00 http://en.wikibooks.org/w/index.php?title=User:Bo_Dorku http://en.wikibooks.org/w/index.php?title=User:Cbarlog http://en.wikibooks.org/w/index.php?title=User:CommonsDelinker http://en.wikibooks.org/w/index.php?title=User:Constant314 http://en.wikibooks.org/w/index.php?title=User:DSP-user http://en.wikibooks.org/w/index.php?title=User:Danny_B. http://en.wikibooks.org/w/index.php?title=User:Deepcyan http://en.wikibooks.org/w/index.php?title=User:Derby_County_FC http://en.wikibooks.org/w/index.php?title=User:Dirk_H%C3%BCnniger http://en.wikibooks.org/w/index.php?title=User:Discostu5 http://en.wikibooks.org/w/index.php?title=User:Doleszki http://en.wikibooks.org/w/index.php?title=User:Druzhnik
Contributors 2 6 9 2 1 1 1 2 2 1 2 1 1 3 8 1 1 2 25 1 12 2 1 2 3 Ducleotide22 Ennui23 Esj8824 Feraudyh25 Fishpi26 Foreign127 Frigotoni28 Hagindaz29 Hammer of Moradin30 Helptry31 Herbythyme32 Herbythyme is the Antichrist of Mumfum33 HethrirBot34 Hypergeek1435 Inductiveload36 Istevie37 Jawnsy38 JenVan39 Jguk40 Jmah41 Jomegat42 Josephkiran43 Kayau44 Kevinp245 Leisuresuitwally46
22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46
http://en.wikibooks.org/w/index.php?title=User:Ducleotide http://en.wikibooks.org/w/index.php?title=User:Ennui http://en.wikibooks.org/w/index.php?title=User:Esj88 http://en.wikibooks.org/w/index.php?title=User:Feraudyh http://en.wikibooks.org/w/index.php?title=User:Fishpi http://en.wikibooks.org/w/index.php?title=User:Foreign1 http://en.wikibooks.org/w/index.php?title=User:Frigotoni http://en.wikibooks.org/w/index.php?title=User:Hagindaz http://en.wikibooks.org/w/index.php?title=User:Hammer_of_Moradin http://en.wikibooks.org/w/index.php?title=User:Helptry http://en.wikibooks.org/w/index.php?title=User:Herbythyme http://en.wikibooks.org/w/index.php?title=User:Herbythyme_is_the_Antichrist_of_Mumfum http://en.wikibooks.org/w/index.php?title=User:HethrirBot http://en.wikibooks.org/w/index.php?title=User:Hypergeek14 http://en.wikibooks.org/w/index.php?title=User:Inductiveload http://en.wikibooks.org/w/index.php?title=User:Istevie http://en.wikibooks.org/w/index.php?title=User:Jawnsy http://en.wikibooks.org/w/index.php?title=User:JenVan http://en.wikibooks.org/w/index.php?title=User:Jguk http://en.wikibooks.org/w/index.php?title=User:Jmah http://en.wikibooks.org/w/index.php?title=User:Jomegat http://en.wikibooks.org/w/index.php?title=User:Josephkiran http://en.wikibooks.org/w/index.php?title=User:Kayau http://en.wikibooks.org/w/index.php?title=User:Kevinp2 http://en.wikibooks.org/w/index.php?title=User:Leisuresuitwally
External Resources 11 4 1 2 1 1 1 5 1 2 14 8 1 10 13 1 2 1 1 2 1 1 1 4 1 Lpkeys47 Macsdev48 Mike.lifeguard49 Mintz l50 Mreiki51 Murughendra52 Napalm Llama53 Nithinvgeorge54 Nonstandard55 Nostraticispeak56 Nrs13557 Panic2k458 Pedro Fonini59 QuiteUnusual60 Recent Runes61 Rmaax62 Ro890Z63 Roman12364 SPat65 Satyabrata66 Scru32367 Sdayal68 Shaers2169 Simoneau70 Soeb71
47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
http://en.wikibooks.org/w/index.php?title=User:Lpkeys http://en.wikibooks.org/w/index.php?title=User:Macsdev http://en.wikibooks.org/w/index.php?title=User:Mike.lifeguard http://en.wikibooks.org/w/index.php?title=User:Mintz_l http://en.wikibooks.org/w/index.php?title=User:Mreiki http://en.wikibooks.org/w/index.php?title=User:Murughendra http://en.wikibooks.org/w/index.php?title=User:Napalm_Llama http://en.wikibooks.org/w/index.php?title=User:Nithinvgeorge http://en.wikibooks.org/w/index.php?title=User:Nonstandard http://en.wikibooks.org/w/index.php?title=User:Nostraticispeak http://en.wikibooks.org/w/index.php?title=User:Nrs135 http://en.wikibooks.org/w/index.php?title=User:Panic2k4 http://en.wikibooks.org/w/index.php?title=User:Pedro_Fonini http://en.wikibooks.org/w/index.php?title=User:QuiteUnusual http://en.wikibooks.org/w/index.php?title=User:Recent_Runes http://en.wikibooks.org/w/index.php?title=User:Rmaax http://en.wikibooks.org/w/index.php?title=User:Ro890Z http://en.wikibooks.org/w/index.php?title=User:Roman123 http://en.wikibooks.org/w/index.php?title=User:SPat http://en.wikibooks.org/w/index.php?title=User:Satyabrata http://en.wikibooks.org/w/index.php?title=User:Scruff323 http://en.wikibooks.org/w/index.php?title=User:Sdayal http://en.wikibooks.org/w/index.php?title=User:Shaffers21 http://en.wikibooks.org/w/index.php?title=User:Simoneau http://en.wikibooks.org/w/index.php?title=User:Soeb
Contributors 1 10 1 1 1 4 4 3 1 1 1823 4 1 4 5 1 1 3 Sonia72 Spradlig73 Supermackin74 Tawker75 Tdenewiler76 Thenub31477 Tim.greatrex78 Ubigene79 Upul80 Van der Hoorn81 Whiteknight82 Wknight811183 XMollioTKs84 Xania85 Xris86 YMS87 Z3588 Zoomzoom89
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
http://en.wikibooks.org/w/index.php?title=User:Sonia http://en.wikibooks.org/w/index.php?title=User:Spradlig http://en.wikibooks.org/w/index.php?title=User:Supermackin http://en.wikibooks.org/w/index.php?title=User:Tawker http://en.wikibooks.org/w/index.php?title=User:Tdenewiler http://en.wikibooks.org/w/index.php?title=User:Thenub314 http://en.wikibooks.org/w/index.php?title=User:Tim.greatrex http://en.wikibooks.org/w/index.php?title=User:Ubigene http://en.wikibooks.org/w/index.php?title=User:Upul http://en.wikibooks.org/w/index.php?title=User:Van_der_Hoorn http://en.wikibooks.org/w/index.php?title=User:Whiteknight http://en.wikibooks.org/w/index.php?title=User:Wknight8111 http://en.wikibooks.org/w/index.php?title=User:XMollioTKs http://en.wikibooks.org/w/index.php?title=User:Xania http://en.wikibooks.org/w/index.php?title=User:Xris http://en.wikibooks.org/w/index.php?title=User:YMS http://en.wikibooks.org/w/index.php?title=User:Z35 http://en.wikibooks.org/w/index.php?title=User:Zoomzoom
List of Figures
GFDL: GNU Free Documentation License. http://www.gnu.org/licenses/fdl.html
cc-by-sa-3.0: Creative Commons Attribution ShareAlike 3.0 License. http://creativecommons.org/licenses/by-sa/3.0/
cc-by-sa-2.5: Creative Commons Attribution ShareAlike 2.5 License. http://creativecommons.org/licenses/by-sa/2.5/
cc-by-sa-2.0: Creative Commons Attribution ShareAlike 2.0 License. http://creativecommons.org/licenses/by-sa/2.0/
cc-by-sa-1.0: Creative Commons Attribution ShareAlike 1.0 License. http://creativecommons.org/licenses/by-sa/1.0/
cc-by-2.0: Creative Commons Attribution 2.0 License. http://creativecommons.org/licenses/by/2.0/
cc-by-2.0: Creative Commons Attribution 2.0 License. http://creativecommons.org/licenses/by/2.0/deed.en
cc-by-2.5: Creative Commons Attribution 2.5 License. http://creativecommons.org/licenses/by/2.5/deed.en
cc-by-3.0: Creative Commons Attribution 3.0 License. http://creativecommons.org/licenses/by/3.0/deed.en
GPL: GNU General Public License. http://www.gnu.org/licenses/gpl-2.0.txt
LGPL: GNU Lesser General Public License. http://www.gnu.org/licenses/lgpl.html
PD: This image is in the public domain.
ATTR: The copyright holder of this file allows anyone to use it for any purpose, provided that the copyright holder is properly attributed. Redistribution, derivative work, commercial use, and all other use is permitted.
EURO: This is the common (reverse) face of a euro coin. The copyright on the design of the common face of the euro coins belongs to the European Commission. Reproduction is authorised in a format without relief (drawings, paintings, films) provided they are not detrimental to the image of the euro.
LFK: Lizenz Freie Kunst. http://artlibre.org/licence/lal/de
CFR: Copyright free use.
EPL: Eclipse Public License. http://www.eclipse.org/org/documents/epl-v10.php
Copies of the GPL, the LGPL as well as a GFDL are included in chapter Licenses. Please note that images in the public domain do not require attribution. You may click on the image numbers in the following table to open the webpage of the images in your web browser.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
Jules Boilly
Inductiveload91 Spradlig92
image source obtained from en:User:Petr.adamek93 (with permission) and previously saved as PD in PNG format. touched up a little and converted to SVG by en:User:Rbj94
PD PD GFDL GFDL GFDL GFDL GFDL GFDL GFDL GFDL GFDL GFDL GFDL GFDL PD GFDL GFDL GFDL GFDL GFDL GFDL GFDL GFDL PD PD
26 27 28 29 30 31 32 33 34 35 36 37 38
PD PD PD PD GFDL GFDL PD PD PD PD PD PD PD
91 92 93 94 95 96 97 98 99 100 101
http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3ASpradlig http://en.wikibooks.org/wiki/%3Aen%3AUser%3APetr.adamek http://en.wikibooks.org/wiki/%3Aen%3AUser%3ARbj http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload
39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66
Inductiveload102 Inductiveload103 Inductiveload104 Inductiveload105 Inductiveload106 Inductiveload107 Inductiveload108 Inductiveload109 Inductiveload110 Inductiveload111 Inductiveload112 Inductiveload113 Inductiveload114 Inductiveload115 Inductiveload116 Inductiveload117 Inductiveload118 Inductiveload119 Inductiveload120 Inductiveload121 Inductiveload122 Inductiveload123 Inductiveload124 Inductiveload125
102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload http://en.wikibooks.org/wiki/User%3AInductiveload
67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85
cc-by-sa-2.5 Constant314126 User:Netnet127 User:Netnet128 User:Netnet129 User:Netnet130 User:Netnet131 PD PD PD PD PD GFDL GFDL GFDL cc-by-sa-3.0 PD PD PD PD PD GFDL GFDL GFDL
126 127 128 129 130 131 132 133 134 135
http://en.wikibooks.org/wiki/User%3AConstant314 http://en.wikibooks.org/wiki/User%3ANetnet http://en.wikibooks.org/wiki/User%3ANetnet http://en.wikibooks.org/wiki/User%3ANetnet http://en.wikibooks.org/wiki/User%3ANetnet http://en.wikibooks.org/wiki/User%3ANetnet http://en.wikibooks.org/wiki/User%3ANetnet http://en.wikibooks.org/wiki/User%3ANetnet http://en.wikibooks.org/wiki/User%3ANetnet http://en.wikibooks.org/wiki/%3Ask%3Auser%3Arobo
48 Licenses
48.1 GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007 Copyright 2007 Free Software Foundation, Inc. <http://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a programto make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) oer you this License giving you legal permission to copy, distribute and/or modify it. For the developers and authors protection, the GPL clearly explains that there is no warranty for this free software. For both users and authors sake, the GPL requires that modied versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modied versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it eectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program nonfree. The precise terms and conditions for copying, distribution and modication follow. TERMS AND CONDITIONS 0. Denitions. 
This License refers to version 3 of the GNU General Public License. Copyright also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. The Program refers to any copyrightable work licensed under this License. Each licensee is addressed as you. Licensees and recipients may be individuals or organizations. To modify a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a modied version of the earlier work or a work based on the earlier work. A covered work means either the unmodied Program or a work based on the Program. To propagate a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modication), making available to the public, and in some countries other activities as well. To convey a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays Appropriate Legal Notices to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The source code for a work means the preferred form of the work for making modications to it. Object code means any non-source form of a work. A Standard Interface means an interface that either is an ocial standard dened by a recognized standards body, or, in the case of interfaces specied for a particular programming language, one that is widely used among developers working in that language. The System Libraries of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A Major Component, in this context, means a major essential component (kernel, window system, and so on) of the specic operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The Corresponding Source for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the works System Libraries, or generalpurpose tools or generally available free programs which are used unmodied in performing those activities but which are not part of the work. 
For example, Corresponding Source includes interface denition les associated with source les for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specically designed to require, such as by intimate data communication or control ow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly arms your unlimited permission to run the unmodied Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users Legal Rights From AntiCircumvention Law. No covered work shall be deemed part of an eective technological measure under any applicable law fullling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modication of the work as a means of enforcing, against the works users, your or third parties legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Programs source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may oer support or warranty protection for a fee. 5. Conveying Modied Source Versions. 
You may convey a work based on the Program, or the modications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: * a) The work must carry prominent notices stating that you modied it, and giving a relevant date. * b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modies the requirement in section 4 to keep intact all notices. * c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. * d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an aggregate if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilations users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: * a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source xed on a durable physical medium customarily used for software interchange. * b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written oer, valid for at least three years and valid for as long as you oer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. * c) Convey individual copies of the object code with a copy of the written oer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. * d) Convey the object code by oering access from a designated place (gratis or for a charge), and oer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. 
If the place to copy the object code is a network server, the Corresponding Source may be on a dierent server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to nd the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. * e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being oered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A User Product is either (1) a consumer product, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, normally used refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or nonconsumer uses, unless such uses represent the only signicant mode of use of the product. Installation Information for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modied versions of a covered work in that User Product from a modied version of its Corresponding Source. The information must suce to ensure that the continued functioning of the modied object code is in no case prevented or interfered with solely because modication has been made. If you convey an object code work under this section in, or with, or specically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a xed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modied object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modied or installed by the recipient, or for the User Product in which it has been modied or installed. Access to a network may be denied when the modication itself materially and adversely aects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. 
Additional permissions are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: * a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or * b) Requiring preservation of specied reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or * c) Prohibiting misrepresentation of the origin of that material, or requiring that modied versions of such material be marked in reasonable ways as dierent from the original version; or * d) Limiting the use for publicity purposes of names of licensors or authors of the material; or * e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or * f) Requiring indemnication of licensors and authors of that material by anyone who conveys the material (or modied versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered further restrictions within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source les, a statement of the additional terms that apply to those les, or a notice indicating where to nd the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). 
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and nally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder noties you of the violation by some reasonable means, this is the rst time you have received notice of violation of this License (for any work)
from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An entity transaction is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the partys predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable eorts. You may not impose any further restrictions on the exercise of the rights granted or armed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, oering for sale, or importing the Program or any portion of it. 11. Patents. A contributor is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributors contributor version. A contributors essential patent claims are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modication of the contributor version. For purposes of this denition, control includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributors essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a patent license is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To grant such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. Knowingly relying means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

A patent license is discriminatory if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.

14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the copyright line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

<program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'. This is free software, and you are welcome to redistribute it under certain conditions; type 'show c' for details.

The hypothetical commands 'show w' and 'show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a copyright disclaimer for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.
Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
* C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
* D. Preserve all the copyright notices of the Document.
* E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
* F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
* G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
* H. Include an unaltered copy of this License.
* I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
* J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
* K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
* L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
* M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
* N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
* O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity.
If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License. However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/. Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of
this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.

11. RELICENSING

"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.

"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008. The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

ADDENDUM: How to use this License for your documents

To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:

Copyright (C) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with ... Texts." line with this: with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST. If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.

If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.