LabVIEW™
Analysis Concepts
Support

Worldwide Technical Support and Product Information: ni.com

National Instruments Corporate Headquarters
11500 North Mopac Expressway
Austin, Texas 78759-3504 USA
Tel: 512 683 0100

Worldwide Offices
Australia 1800 300 800, Austria 43 0 662 45 79 90 0, Belgium 32 0 2 757 00 20, Brazil 55 11 3262 3599, Canada (Calgary) 403 274 9391, Canada (Ottawa) 613 233 5949, Canada (Québec) 450 510 3055, Canada (Toronto) 905 785 0085, Canada (Vancouver) 514 685 7530, China 86 21 6555 7838, Czech Republic 420 224 235 774, Denmark 45 45 76 26 00, Finland 385 0 9 725 725 11, France 33 0 1 48 14 24 24, Germany 49 0 89 741 31 30, Greece 30 2 10 42 96 427, India 91 80 51190000, Israel 972 0 3 6393737, Italy 39 02 413091, Japan 81 3 5472 2970, Korea 82 02 3451 3400, Malaysia 603 9131 0918, Mexico 001 800 010 0793, Netherlands 31 0 348 433 466, New Zealand 0800 553 322, Norway 47 0 66 90 76 60, Poland 48 22 3390150, Portugal 351 210 311 210, Russia 7 095 783 68 51, Singapore 65 6226 5886, Slovenia 386 3 425 4200, South Africa 27 0 11 805 8197, Spain 34 91 640 0085, Sweden 46 0 8 587 895 00, Switzerland 41 56 200 51 51, Taiwan 886 2 2528 7227, Thailand 662 992 7519, United Kingdom 44 0 1635 523545

For further support information, refer to the Technical Support and Professional Services appendix. To comment on the documentation, send email to [email protected].

© 2000–2004 National Instruments Corporation. All rights reserved.
Important Information
Warranty
The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects in materials and workmanship, for a period of 90 days from date of shipment, as evidenced by receipts or other documentation. National Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives notice of such defects during the warranty period. National Instruments does not warrant that the operation of the software shall be uninterrupted or error free.

A Return Material Authorization (RMA) number must be obtained from the factory and clearly marked on the outside of the package before any equipment will be accepted for warranty work. National Instruments will pay the shipping costs of returning to the owner parts which are covered by warranty.

National Instruments believes that the information in this document is accurate. The document has been carefully reviewed for technical accuracy. In the event that technical or typographical errors exist, National Instruments reserves the right to make changes to subsequent editions of this document without prior notice to holders of this edition. The reader should consult National Instruments if errors are suspected. In no event shall National Instruments be liable for any damages arising out of or related to this document or the information contained in it.

EXCEPT AS SPECIFIED HEREIN, NATIONAL INSTRUMENTS MAKES NO WARRANTIES, EXPRESS OR IMPLIED, AND SPECIFICALLY DISCLAIMS ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. CUSTOMER'S RIGHT TO RECOVER DAMAGES CAUSED BY FAULT OR NEGLIGENCE ON THE PART OF NATIONAL INSTRUMENTS SHALL BE LIMITED TO THE AMOUNT THERETOFORE PAID BY THE CUSTOMER. NATIONAL INSTRUMENTS WILL NOT BE LIABLE FOR DAMAGES RESULTING FROM LOSS OF DATA, PROFITS, USE OF PRODUCTS, OR INCIDENTAL OR CONSEQUENTIAL DAMAGES, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

This limitation of the liability of National Instruments will apply regardless of the form of action, whether in contract or tort, including negligence. Any action against National Instruments must be brought within one year after the cause of action accrues. National Instruments shall not be liable for any delay in performance due to causes beyond its reasonable control. The warranty provided herein does not cover damages, defects, malfunctions, or service failures caused by owner's failure to follow the National Instruments installation, operation, or maintenance instructions; owner's modification of the product; owner's abuse, misuse, or negligent acts; and power failure or surges, fire, flood, accident, actions of third parties, or other events outside reasonable control.
Copyright
Under the copyright laws, this publication may not be reproduced or transmitted in any form, electronic or mechanical, including photocopying, recording, storing in an information retrieval system, or translating, in whole or in part, without the prior written consent of National Instruments Corporation. For a listing of the copyrights, conditions, and disclaimers regarding components used in USI (Xerces C++, ICU, and HDF5), refer to the USICopyrights.chm. This product includes software developed by the Apache Software Foundation (http://www.apache.org/). Copyright © 1999 The Apache Software Foundation. All rights reserved. Copyright © 1995–2003 International Business Machines Corporation and others. All rights reserved. NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities, Copyright © 1998, 1999, 2000, 2001, 2003 by the Board of Trustees of the University of Illinois. All rights reserved.
Trademarks
CVI, LabVIEW, National Instruments, NI, and ni.com are trademarks of National Instruments Corporation. MATLAB is a registered trademark of The MathWorks, Inc. Other product and company names mentioned herein are trademarks or trade names of their respective companies.
Patents
For patents covering National Instruments products, refer to the appropriate location: Help»Patents in your software, the patents.txt file on your CD, or ni.com/patents.
INSTRUMENTS, THE USER OR APPLICATION DESIGNER IS ULTIMATELY RESPONSIBLE FOR VERIFYING AND VALIDATING THE SUITABILITY OF NATIONAL INSTRUMENTS PRODUCTS WHENEVER NATIONAL INSTRUMENTS PRODUCTS ARE INCORPORATED IN A SYSTEM OR APPLICATION, INCLUDING, WITHOUT LIMITATION, THE APPROPRIATE DESIGN, PROCESS AND SAFETY LEVEL OF SUCH SYSTEM OR APPLICATION.
Contents
About This Manual
Conventions ... xv
Related Documentation ... xv
PART I
Signal Processing and Signal Analysis

Chapter 1
Introduction to Digital Signal Processing and Analysis in LabVIEW
The Importance of Data Analysis ... 1-1
Sampling Signals ... 1-2
Aliasing ... 1-4
  Increasing Sampling Frequency to Avoid Aliasing ... 1-6
  Anti-Aliasing Filters ... 1-7
Converting to Logarithmic Units ... 1-8
  Displaying Results on a Decibel Scale ... 1-9
Classifying Filters by Impulse Response ... 3-3
  Filter Coefficients ... 3-4
Characteristics of an Ideal Filter ... 3-5
Practical (Nonideal) Filters ... 3-6
  Transition Band ... 3-6
  Passband Ripple and Stopband Attenuation ... 3-7
Sampling Rate ... 3-8
FIR Filters ... 3-9
  Taps ... 3-11
  Designing FIR Filters ... 3-11
    Designing FIR Filters by Windowing ... 3-14
    Designing Optimum FIR Filters Using the Parks-McClellan Algorithm ... 3-15
    Designing Equiripple FIR Filters Using the Parks-McClellan Algorithm ... 3-16
    Designing Narrowband FIR Filters ... 3-16
    Designing Wideband FIR Filters ... 3-19
IIR Filters ... 3-19
  Cascade Form IIR Filtering ... 3-20
    Second-Order Filtering ... 3-22
    Fourth-Order Filtering ... 3-23
  IIR Filter Types ... 3-23
    Minimizing Peak Error ... 3-24
    Butterworth Filters ... 3-24
    Chebyshev Filters ... 3-25
    Chebyshev II Filters ... 3-26
    Elliptic Filters ... 3-27
    Bessel Filters ... 3-28
  Designing IIR Filters ... 3-30
    IIR Filter Characteristics in LabVIEW ... 3-31
    Transient Response ... 3-32
Comparing FIR and IIR Filters ... 3-33
Nonlinear Filters ... 3-33
  Example: Analyzing Noisy Pulse with a Median Filter ... 3-34
Selecting a Digital Filter Design ... 3-35
Discrete Fourier Transform (DFT) ... 4-5
  Relationship between N Samples in the Frequency and Time Domains ... 4-5
  Example of Calculating DFT ... 4-6
  Magnitude and Phase Information ... 4-8
Frequency Spacing between DFT Samples ... 4-9
FFT Fundamentals ... 4-12
  Computing Frequency Components ... 4-13
  Fast FFT Sizes ... 4-14
  Zero Padding ... 4-14
  FFT VI ... 4-15
Displaying Frequency Information from Transforms ... 4-16
Two-Sided, DC-Centered FFT ... 4-17
  Mathematical Representation of a Two-Sided, DC-Centered FFT ... 4-18
  Creating a Two-Sided, DC-Centered FFT ... 4-19
Power Spectrum ... 4-22
  Converting a Two-Sided Power Spectrum to a Single-Sided Power Spectrum ... 4-23
  Loss of Phase Information ... 4-25
Computations on the Spectrum ... 4-25
  Estimating Power and Frequency ... 4-25
  Computing Noise Level and Power Spectral Density ... 4-27
Computing the Amplitude and Phase Spectrums ... 4-28
  Calculating Amplitude in Vrms and Phase in Degrees ... 4-29
Frequency Response Function ... 4-30
Cross Power Spectrum ... 4-31
Frequency Response and Network Analysis ... 4-31
  Frequency Response Function ... 4-32
  Impulse Response Function ... 4-33
  Coherence Function ... 4-33
Windowing ... 4-34
Averaging to Improve the Measurement ... 4-35
  RMS Averaging ... 4-35
  Vector Averaging ... 4-36
  Peak Hold ... 4-36
  Weighting ... 4-37
Echo Detection ... 4-37
Characteristics of Different Smoothing Windows ... 5-11
  Main Lobe ... 5-12
  Side Lobes ... 5-12
  Rectangular (None) ... 5-13
  Hanning ... 5-14
  Hamming ... 5-15
  Kaiser-Bessel ... 5-15
  Triangle ... 5-16
  Flat Top ... 5-17
  Exponential ... 5-18
Windows for Spectral Analysis versus Windows for Coefficient Design ... 5-19
  Spectral Analysis ... 5-19
  Windows for FIR Filter Coefficient Design ... 5-21
Choosing the Correct Smoothing Window ... 5-21
Scaling Smoothing Windows ... 5-23
Moment about the Mean ... 10-5
  Skewness ... 10-6
  Kurtosis ... 10-6
Histogram ... 10-6
Mean Square Error (mse) ... 10-7
Root Mean Square (rms) ... 10-8
Probability ... 10-8
  Random Variables ... 10-8
    Discrete Random Variables ... 10-9
    Continuous Random Variables ... 10-9
  Normal Distribution ... 10-10
    Computing the One-Sided Probability of a Normally Distributed Random Variable ... 10-11
    Finding x with a Known p ... 10-12
  Probability Distribution and Density Functions ... 10-12
Chapter 12 Optimization
Introduction to Optimization ... 12-1
  Constraints on the Objective Function ... 12-2
  Linear and Nonlinear Programming Problems ... 12-2
    Discrete Optimization Problems ... 12-2
    Continuous Optimization Problems ... 12-2
  Solving Problems Iteratively ... 12-3
Linear Programming ... 12-3
  Linear Programming Simplex Method ... 12-4
Nonlinear Programming ... 12-4
  Impact of Derivative Use on Search Method Selection ... 12-5
  Line Minimization ... 12-5
  Local and Global Minima ... 12-5
    Global Minimum ... 12-6
    Local Minimum ... 12-6
  Downhill Simplex Method ... 12-6
  Golden Section Search Method ... 12-7
    Choosing a New Point x in the Golden Section ... 12-8
  Gradient Search Methods ... 12-9
    Caveats about Converging to an Optimal Solution ... 12-10
    Terminating Gradient Search Methods ... 12-10
  Conjugate Direction Search Methods ... 12-11
  Conjugate Gradient Search Methods ... 12-12
    Theorem A ... 12-12
    Theorem B ... 12-13
    Difference between Fletcher-Reeves and Polak-Ribiere ... 12-14
Chapter 13 Polynomials
General Form of a Polynomial ... 13-1
Basic Polynomial Operations ... 13-2
  Order of Polynomial ... 13-2
  Polynomial Evaluation ... 13-2
  Polynomial Addition ... 13-3
  Polynomial Subtraction ... 13-3
  Polynomial Multiplication ... 13-3
  Polynomial Division ... 13-3
  Polynomial Composition ... 13-5
  Greatest Common Divisor of Polynomials ... 13-5
  Least Common Multiple of Two Polynomials ... 13-6
  Derivatives of a Polynomial ... 13-7
  Integrals of a Polynomial ... 13-8
    Indefinite Integral of a Polynomial ... 13-8
    Definite Integral of a Polynomial ... 13-8
  Number of Real Roots of a Real Polynomial ... 13-8
Rational Polynomial Function Operations ... 13-11
  Rational Polynomial Function Addition ... 13-11
  Rational Polynomial Function Subtraction ... 13-11
  Rational Polynomial Function Multiplication ... 13-12
  Rational Polynomial Function Division ... 13-12
  Negative Feedback with a Rational Polynomial Function ... 13-12
  Positive Feedback with a Rational Polynomial Function ... 13-12
  Derivative of a Rational Polynomial Function ... 13-13
  Partial Fraction Expansion ... 13-13
    Heaviside Cover-Up Method ... 13-14
Orthogonal Polynomials ... 13-15
  Chebyshev Orthogonal Polynomials of the First Kind ... 13-15
  Chebyshev Orthogonal Polynomials of the Second Kind ... 13-16
  Gegenbauer Orthogonal Polynomials ... 13-16
  Hermite Orthogonal Polynomials ... 13-17
  Laguerre Orthogonal Polynomials ... 13-17
  Associated Laguerre Orthogonal Polynomials ... 13-18
  Legendre Orthogonal Polynomials ... 13-18
Evaluating a Polynomial with a Matrix ... 13-19
  Polynomial Eigenvalues and Vectors ... 13-20
Entering Polynomials in LabVIEW ... 13-22
Analysis Stages of the Train Wheel PtByPt VI ... 14-13
  DAQ Stage ... 14-13
  Filter Stage ... 14-13
  Analysis Stage ... 14-14
  Events Stage ... 14-15
  Report Stage ... 14-15
Conclusion ... 14-16
Conventions
This manual uses the following conventions:

»   The » symbol leads you through nested menu items and dialog box options to a final action. The sequence File»Page Setup»Options directs you to pull down the File menu, select the Page Setup item, and select Options from the last dialog box.

Note   This icon denotes a note, which alerts you to important information.

bold   Bold text denotes items that you must select or click in the software, such as menu items and dialog box options. Bold text also denotes parameter names.

italic   Italic text denotes variables, emphasis, a cross-reference, or an introduction to a key concept. This font also denotes text that is a placeholder for a word or value that you must supply.

monospace   Text in this font denotes text or characters that you should enter from the keyboard, sections of code, programming examples, and syntax examples. This font is also used for the proper names of disk drives, paths, directories, programs, subprograms, subroutines, device names, functions, operations, variables, filenames, and extensions.
Related Documentation
The following documents contain information that you might find helpful as you read this manual:
- LabVIEW Measurements Manual
- The Fundamentals of FFT-Based Signal Analysis and Measurement in LabVIEW and LabWindows/CVI Application Note, available on the National Instruments Web site at ni.com/info, where you enter the info code rdlv04
- LabVIEW Help, available by selecting Help»VI, Function, & How-To Help
- LabVIEW User Manual
- Getting Started with LabVIEW
- On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform (Proceedings of the IEEE, Volume 66, No. 1, January 1978)
Part I
Signal Processing and Signal Analysis
This part describes signal processing and signal analysis concepts.

Chapter 1, Introduction to Digital Signal Processing and Analysis in LabVIEW, provides a background in basic digital signal processing and an introduction to signal processing and measurement VIs in LabVIEW.

Chapter 2, Signal Generation, describes the fundamentals of signal generation.

Chapter 3, Digital Filtering, introduces the concept of filtering, compares analog and digital filters, describes finite impulse response (FIR) and infinite impulse response (IIR) filters, and describes how to choose the appropriate digital filter for a particular application.

Chapter 4, Frequency Analysis, describes the fundamentals of the discrete Fourier transform (DFT), the fast Fourier transform (FFT), basic signal analysis computations, computations performed on the power spectrum, and how to use FFT-based functions for network measurement.

Chapter 5, Smoothing Windows, describes spectral leakage, how to use smoothing windows to decrease spectral leakage, the different types of smoothing windows, how to choose the correct type of smoothing window, the differences between smoothing windows used for spectral analysis and smoothing windows used for filter coefficient design, and the importance of scaling smoothing windows.
Chapter 6, Distortion Measurements, describes harmonic distortion, total harmonic distortion (THD), signal noise and distortion (SINAD), and when to use distortion measurements.

Chapter 7, DC/RMS Measurements, introduces measurement analysis techniques for making DC and RMS measurements of a signal.

Chapter 8, Limit Testing, provides information about setting up an automated system for performing limit testing, specifying limits, and applications for limit testing.
Digital signals are everywhere in the world around us. Telephone companies use digital signals to represent the human voice. Radio, television, and hi-fi sound systems are all gradually converting to the digital domain because of its superior fidelity, noise reduction, and signal processing flexibility. Data is transmitted from satellites to earth ground stations in digital form. NASA pictures of distant planets and outer space are often processed digitally to remove noise and extract useful information. Economic data, census results, and stock market prices are all available in digital form. Because of the many advantages of digital signal processing, analog signals also are converted to digital form before they are processed with a computer. This chapter provides a background in basic digital signal processing and an introduction to signal processing and measurement VIs in LabVIEW.
By analyzing and processing the digital data, you can extract the useful information from the noise and present it in a form more comprehensible than the raw data, as shown in Figure 1-2.
The LabVIEW block diagram programming approach and the extensive set of LabVIEW signal processing and measurement VIs simplify the development of analysis applications.
Sampling Signals
Measuring the frequency content of a signal requires digitization of a continuous signal. To use digital signal processing techniques, you must first convert an analog signal into its digital representation. In practice, the conversion is implemented by using an analog-to-digital (A/D) converter. Consider an analog signal x(t) that is sampled every Δt seconds. The time interval Δt is the sampling interval or sampling period. Its reciprocal, 1/Δt, is the sampling frequency, with units of samples/second. Each of the discrete values of x(t) at t = 0, Δt, 2Δt, 3Δt, and so on, is a sample. Thus, x(0), x(Δt), x(2Δt), ..., are all samples. The signal x(t) thus can be represented by the following discrete set of samples.

{x(0), x(Δt), x(2Δt), x(3Δt), ..., x(kΔt), ...}

Figure 1-3 shows an analog signal and its corresponding sampled version. The sampling interval is Δt. The samples are defined at discrete points in time.
[Figure 1-3: an analog signal and its corresponding sampled version, with amplitude plotted against time]
The following notation represents the individual samples.

x[i] = x(iΔt), for i = 0, 1, 2, ...

If N samples are obtained from the signal x(t), then you can represent x(t) by the following sequence.

X = {x[0], x[1], x[2], x[3], ..., x[N - 1]}

The preceding sequence representing x(t) is the digital representation, or the sampled version, of x(t). The sequence X = {x[i]} is indexed on the integer variable i and does not contain any information about the sampling rate. So knowing only the values of the samples contained in X gives you no information about the sampling frequency. One of the most important parameters of an analog input system is the frequency at which the DAQ device samples an incoming signal. The sampling frequency determines how often an A/D conversion takes place. Sampling a signal too slowly can result in an aliased signal.
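Although this manual's examples are LabVIEW VIs, the sampling relationships above are easy to sketch in text-based code. The following minimal Python/NumPy sketch (the 10 Hz tone, sampling rate, and record length are arbitrary illustrative choices, not values from this manual) builds the sequence x[i] = x(iΔt):

```python
import numpy as np

fs = 1000.0            # sampling frequency, samples/s (assumed value)
dt = 1.0 / fs          # sampling interval, s
N = 100                # number of samples
i = np.arange(N)       # sample index 0, 1, ..., N - 1

# x[i] = x(i * dt): the sampled version of an analog 10 Hz sine wave
x = np.sin(2 * np.pi * 10.0 * i * dt)

# The array x alone carries no record of dt or fs, so the sampling
# frequency must be stored alongside the samples to interpret them.
```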
Aliasing
An aliased signal provides a poor representation of the analog signal. Aliasing causes a false lower frequency component to appear in the sampled data of a signal. Figure 1-4 shows an adequately sampled signal and an undersampled signal.
In Figure 1-4, the undersampled signal appears to have a lower frequency than the actual signal: three cycles instead of ten cycles. Increasing the sampling frequency increases the number of data points acquired in a given time period. Often, a fast sampling frequency provides a better representation of the original signal than a slower sampling rate. For a given sampling frequency, the maximum frequency you can accurately represent without aliasing is the Nyquist frequency. The Nyquist frequency equals one-half the sampling frequency, as shown by the following equation.

fN = fs/2

where fN is the Nyquist frequency and fs is the sampling frequency. Signals with frequency components above the Nyquist frequency appear aliased between DC and the Nyquist frequency. In an aliased signal, frequency components actually above the Nyquist frequency appear as
frequency components below the Nyquist frequency. For example, a component at frequency fN < f0 < fs appears as the frequency fs - f0. Figures 1-5 and 1-6 illustrate the aliasing phenomenon. Figure 1-5 shows the frequencies contained in an input signal acquired at a sampling frequency, fs, of 100 Hz.
[Figure 1-5: magnitude spectrum of the input signal, with components F1 = 25 Hz, F2 = 70 Hz, F3 = 160 Hz, and F4 = 510 Hz]
Figure 1-6 shows the frequency components and the aliases for the input signal from Figure 1-5.
[Figure 1-6: actual frequencies (solid arrows) and alias frequencies (dashed arrows) of the input signal, shown on a magnitude-versus-frequency plot from 0 to 500 Hz]
In Figure 1-6, frequencies below the Nyquist frequency of fs/2 = 50 Hz are sampled correctly. For example, F1 appears at the correct frequency. Frequencies above the Nyquist frequency appear as aliases. For example, aliases for F2, F3, and F4 appear at 30 Hz, 40 Hz, and 10 Hz, respectively.
The alias frequency equals the absolute value of the difference between the closest integer multiple of the sampling frequency and the input frequency, as shown in the following equation.

AF = |CIMSF - IF|

where AF is the alias frequency, CIMSF is the closest integer multiple of the sampling frequency, and IF is the input frequency. For example, you can compute the alias frequencies for F2, F3, and F4 from Figure 1-6 with the following equations.

Alias F2 = |100 - 70| = 30 Hz
Alias F3 = |(2)100 - 160| = 40 Hz
Alias F4 = |(5)100 - 510| = 10 Hz
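The alias-frequency rule translates directly into code. Here is a minimal Python sketch (not from this manual; the helper name alias_frequency is our own) that reproduces the Figure 1-6 results:

```python
def alias_frequency(input_freq, fs):
    """AF = |CIMSF - IF|: distance to the closest integer multiple of fs."""
    cimsf = round(input_freq / fs) * fs
    return abs(cimsf - input_freq)

# The Figure 1-6 example, sampled at fs = 100 Hz:
for f in (25, 70, 160, 510):
    print(f, "Hz appears at", alias_frequency(f, 100), "Hz")
# 25 Hz is below the Nyquist frequency and appears at itself;
# 70, 160, and 510 Hz alias to 30, 40, and 10 Hz respectively.
```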
[Figure 1-7: effects of sampling a sine wave at different rates. A: 1 sample/cycle; B: 7 samples/4 cycles; C: 2 samples/cycle; D: 10 samples/cycle]
In case A of Figure 1-7, the sampling frequency fs equals the frequency f of the sine wave. fs is measured in samples/second. f is measured in cycles/second. Therefore, in case A, one sample per cycle is acquired. The reconstructed waveform appears as an alias at DC. In case B of Figure 1-7, fs = (7/4)f, or 7 samples/4 cycles. In case B, increasing the sampling rate increases the frequency of the waveform. However, the signal aliases to a frequency less than the original signal: three cycles instead of four. In case C of Figure 1-7, increasing the sampling rate to fs = 2f results in the digitized waveform having the correct frequency, or the same number of cycles as the original signal. In case C, the reconstructed waveform more accurately represents the original sinusoidal wave than case A or case B. By increasing the sampling rate to well above f, for example, fs = 10f = 10 samples/cycle, you can accurately reproduce the waveform. Case D of Figure 1-7 shows the result of increasing the sampling rate to fs = 10f.
Anti-Aliasing Filters
In the digital domain, you cannot distinguish alias frequencies from the frequencies that actually lie between 0 and the Nyquist frequency. Even when the sampling frequency is more than twice the highest frequency of interest, pickup from stray signals, such as signals from power lines or local radio stations, can contain frequencies higher than the Nyquist frequency. Frequency components of stray signals above the Nyquist frequency might alias into the desired frequency range of a test signal and cause erroneous results. Therefore, you need to remove alias frequencies from an analog signal before the signal reaches the A/D converter. Use an anti-aliasing analog lowpass filter before the A/D converter to remove frequencies higher than the Nyquist frequency. A lowpass filter allows low frequencies to pass but attenuates high frequencies. By attenuating the frequencies higher than the Nyquist frequency, the anti-aliasing analog lowpass filter prevents the sampling of aliasing components. An anti-aliasing analog lowpass filter should exhibit a flat passband frequency response with a good high-frequency alias rejection and a fast roll-off in the transition band. Because you apply the anti-aliasing filter to the analog signal before it is converted to a digital signal, the anti-aliasing filter is an analog filter.
Figure 1-8 shows both an ideal anti-alias filter and a practical anti-alias filter. The following information applies to Figure 1-8:
- f1 is the maximum input frequency.
- Frequencies less than f1 are desired frequencies.
- Frequencies greater than f1 are undesired frequencies.
[Figure 1-8: (a) an ideal anti-alias filter response, cutting off sharply at f1; (b) a practical anti-alias filter response, with a transition band between f1 and f2]
An ideal anti-alias filter, shown in Figure 1-8a, passes all the desired input frequencies and cuts off all the undesired frequencies. However, an ideal anti-alias filter is not physically realizable. Figure 1-8b illustrates actual anti-alias filter behavior. Practical anti-alias filters pass all frequencies < f1 and cut off all frequencies > f2. The region between f1 and f2 is the transition band, which contains a gradual attenuation of the input frequencies. Although you want to pass only signals with frequencies < f1, the signals in the transition band might cause aliasing. Therefore, in practice, use a sampling frequency greater than two times the highest frequency in the transition band. Using a sampling frequency greater than two times the highest frequency in the transition band means fs might be more than 2f1.
The following equations define the decibel. Equation 1-1 defines the decibel in terms of power. Equation 1-2 defines the decibel in terms of amplitude.

dB = 10 log10(P/Pr)    (1-1)

where P is the measured power and Pr is the reference power; P/Pr is the power ratio.

dB = 20 log10(A/Ar)    (1-2)

where A is the measured amplitude and Ar is the reference amplitude; A/Ar is the voltage ratio.

Equations 1-1 and 1-2 require a reference value to measure power and amplitude in decibels. The reference value serves as the 0 dB level. Several conventions exist for specifying a reference value. You can use the following common conventions to specify a reference value for calculating decibels:
- Use the reference one volt-rms squared (1 Vrms²) for power, which yields the unit of measure dBVrms.
- Use the reference one volt-rms (1 Vrms) for amplitude, which yields the unit of measure dBV.
- Use the reference 1 mW into a load of 50 Ω for radio frequencies, where 0 dB is 0.22 Vrms, which yields the unit of measure dBm.
- Use the reference 1 mW into a load of 600 Ω for audio frequencies, where 0 dB is 0.78 Vrms, which yields the unit of measure dBm.

Whether you use the amplitude or the power (the amplitude squared) of the same signal, the resulting decibel level is exactly the same: doubling the logarithm's multiplier from 10 to 20 is equivalent to squaring the ratio. Therefore, you obtain the same decibel level and display regardless of whether you use the amplitude or power spectrum.
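As a quick illustration of Equations 1-1 and 1-2, the following Python sketch (illustrative only; the function names are our own) shows that a doubled amplitude and the correspondingly quadrupled power give the same decibel level:

```python
import numpy as np

def power_db(p, p_ref=1.0):
    """Equation 1-1: dB = 10 log10(P / Pr)."""
    return 10.0 * np.log10(p / p_ref)

def amplitude_db(a, a_ref=1.0):
    """Equation 1-2: dB = 20 log10(A / Ar)."""
    return 20.0 * np.log10(a / a_ref)

# Doubling an amplitude quadruples the power; both views give the same
# decibel level, which is why amplitude and power spectra display alike.
print(amplitude_db(2.0))   # ~6.02 dB
print(power_db(4.0))       # ~6.02 dB
```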
For example, consider displaying a signal that contains two components, one with an amplitude of 100 V and one with an amplitude of 0.1 V, on a device with a display height of 10 cm. Using a linear scale, if the device requires the entire display height to display the 100 V amplitude, the device displays 10 V of amplitude per centimeter of height. If the device displays 10 V/cm, displaying the 0.1 V amplitude of the signal requires a height of only 0.1 mm. Because a height of 0.1 mm is barely visible on the display screen, you might overlook the 0.1 V amplitude component of the signal. Using a logarithmic scale in decibels allows you to see the 0.1 V amplitude component of the signal. Table 1-1 shows the relationship between the decibel and the power and voltage ratios.
Table 1-1. Decibels and Power and Voltage Ratio Relationship
dB     Power Ratio    Voltage Ratio
+40    10000          100
+20    100            10
+6     4              2
+3     2              1.4
0      1              1
-3     0.5            0.7
-6     0.25           0.5
-20    0.01           0.1
-40    0.0001         0.01
Table 1-1 shows how you can compress a wide range of amplitudes into a small set of numbers by using the logarithmic decibel scale.
Signal Generation
The generation of signals is an important part of any test or measurement system. The following applications are examples of uses for signal generation:
- Simulate signals to test your algorithm when real-world signals are not available, for example, when you do not have a DAQ device for obtaining real-world signals or when access to real-world signals is not possible.
- Generate signals to apply to a digital-to-analog (D/A) converter.
Measurement                                    Signal
Total harmonic distortion                      Sine wave
Intermodulation distortion                     Multitone (two sine waves)
Frequency response                             Multitone (many sine waves, impulse, chirp), broadband noise
Interpolation                                  Sinc
Measurement                                    Signal
Rise time, fall time, overshoot, undershoot    Pulse
Jitter                                         Square wave
These signals form the basis for many tests and are used to measure the response of a system to a particular stimulus. Some of the common test signals available in most signal generators are shown in Figure 2-1 and Figure 2-2.
[Figure 2-1: common test signals, plotted as amplitude versus time. 1: Sine Wave; 2: Square Wave; 3: Triangle Wave; 4: Sawtooth Wave; 5: Ramp; 6: Impulse]
[Figure 2-2: common test signals, continued. 7: Sinc; 8: Pulse; 9: Chirp]
The most useful way to view the common test signals is in terms of their frequency content. The common test signals have the following frequency content characteristics:
- Sine waves have a single frequency component.
- Square waves consist of the superposition of many sine waves at odd harmonics of the fundamental frequency. The amplitude of each harmonic is inversely proportional to its frequency.
- Triangle and sawtooth waves have harmonic components that are multiples of the fundamental frequency.
- An impulse contains all frequencies that can be represented for a given sampling rate and number of samples.
- Chirp signals are sinusoids swept from a start frequency to a stop frequency, thus generating energy across a given frequency range. Chirp patterns have discrete frequencies that lie within a certain range. The discrete frequencies of chirp patterns depend on the sampling rate, the start and end frequencies, and the number of samples.
Multitone Generation
Except for the sine wave, the common test signals do not allow full control over their spectral content. For example, the harmonic components of a square wave are fixed in frequency, phase, and amplitude relative to the fundamental. However, you can generate multitone signals with a specific amplitude and phase for each individual frequency component. A multitone signal is the superposition of several sine waves or tones, each with a distinct amplitude, phase, and frequency. A multitone signal is typically created so that an integer number of cycles of each individual tone are contained in the signal. If an FFT of the entire multitone signal is computed, each of the tones falls exactly onto a single frequency bin, which means no spectral spread or leakage occurs. Multitone signals are a part of many test specifications and allow the fast and efficient stimulus of a system across an arbitrary band of frequencies. Multitone test signals are used to determine the frequency response of a device and with appropriate selection of frequencies, also can be used to measure such quantities as intermodulation distortion.
Crest Factor
The relative phases of the constituent tones with respect to each other determine the crest factor of a multitone signal with specified amplitude. The crest factor is defined as the ratio of the peak magnitude to the RMS value of the signal. For example, a sine wave has a crest factor of 1.414:1. For the same maximum amplitude, a multitone signal with a large crest factor contains less energy than one with a smaller crest factor. In other words, a large crest factor means that the amplitude of a given component sine tone is lower than the same sine tone in a multitone signal with a smaller crest factor. A higher crest factor results in individual sine tones with lower signal-to-noise ratios. Therefore, proper selection of phases is critical to generating a useful multitone signal. To avoid clipping, the maximum value of the multitone signal should not exceed the maximum capability of the hardware that generates the signal, which means a limit is placed on the maximum amplitude of the signal. You can generate a multitone signal with a specific amplitude by using different combinations of the phase relationships and amplitudes of the constituent sine tones. A good approach to generating a signal is to choose amplitudes and phases that result in a lower crest factor.
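The crest factor definition translates into a few lines of code. This Python/NumPy sketch (not part of the manual's VI set) confirms the 1.414:1 figure quoted above for a sine wave:

```python
import numpy as np

def crest_factor(x):
    """Ratio of the peak magnitude to the RMS value of a signal."""
    x = np.asarray(x, dtype=float)
    return np.abs(x).max() / np.sqrt(np.mean(x ** 2))

# A pure sine wave has a crest factor of sqrt(2), about 1.414.
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
print(crest_factor(np.sin(2 * np.pi * 5 * t)))   # ~1.414
```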
Phase Generation
The following schemes are used to generate tone phases of multitone signals:
- Varying the phase difference between adjacent frequency tones linearly from 0 to 360 degrees
- Varying the tone phases randomly

Varying the phase difference between adjacent frequency tones linearly from 0 to 360 degrees allows the creation of multitone signals with very low crest factors. However, the resulting multitone signals have the following potentially undesirable characteristics:
- The multitone signal is very sensitive to phase distortion. If in the course of generating the multitone signal the hardware or signal path induces nonlinear phase distortion, the crest factor might vary considerably.
- The multitone signal might display some repetitive time-domain characteristics, as shown in the multitone signal in Figure 2-3.
Figure 2-3. Multitone Signal with Linearly Varying Phase Difference between Adjacent Tones
The signal in Figure 2-3 resembles a chirp signal in that its frequency appears to decrease from left to right. The apparent decrease in frequency from left to right is characteristic of multitone signals generated by linearly varying the phase difference between adjacent frequency tones. Having a signal that is more noise-like than the signal in Figure 2-3 often is more desirable. Varying the tone phases randomly results in a multitone signal whose amplitudes are nearly Gaussian in distribution as the number of tones increases. Figure 2-4 illustrates a signal created by varying the tone phases randomly.
Figure 2-4. Multitone Signal with Random Phase Difference between Adjacent Tones
In addition to being more noise-like, the signal in Figure 2-4 also is much less sensitive to phase distortion. Multitone signals with the sort of phase relationship shown in Figure 2-4 generally achieve a crest factor between 10 dB and 11 dB.
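To make the random-phase scheme concrete, the following Python/NumPy sketch generates a ten-tone multitone with random phases; the frequencies, record length, and seed are arbitrary illustrative choices, not values from this manual:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000.0, 1000                  # 1 s of data -> 1 Hz bin spacing
t = np.arange(n) / fs

freqs = np.arange(10.0, 110.0, 10.0)  # ten tones, 10 Hz to 100 Hz, each
                                      # completing an integer number of cycles
phases = rng.uniform(0, 2 * np.pi, freqs.size)

tones = [np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases)]
multitone = np.sum(tones, axis=0)
multitone /= np.abs(multitone).max() # scale to the generator's full range

# Crest factor in dB; random phases often land near the 10-11 dB range
# quoted above, though the exact value depends on the phase draw.
cf_db = 20 * np.log10(1.0 / np.sqrt(np.mean(multitone ** 2)))
```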
A multitone signal has significant advantages over the swept sine and stepped sine approaches. For a given range of frequencies, the multitone approach can be much faster than the equivalent swept sine measurement, due mainly to settling time issues. For each sine tone in a stepped sine
measurement, you must wait for the settling time of the system to end before starting the measurement. The settling time issue for a swept sine can be even more complex. If the system has low-frequency poles and/or zeroes or high Q-resonances, the system might take a relatively long time to settle. For a multitone signal, you must wait only once for the settling time. A multitone signal containing one period of the lowest frequency (actually one period of the highest frequency resolution) is enough for the settling time. After the response to the multitone signal is acquired, the processing can be very fast. You can use a single fast Fourier transform (FFT) to measure many frequency points (amplitude and phase) simultaneously.

The swept sine approach is more appropriate than the multitone approach in certain situations. Each measured tone within a multitone signal is more sensitive to noise because the energy of each tone is lower than that in a single pure tone. For example, consider a single sine tone of amplitude 10 V peak and frequency 100 Hz. A multitone signal containing 10 tones, including the 100 Hz tone, might have a maximum amplitude of 10 V. However, the 100 Hz tone component has an amplitude somewhat less than 10 V. The lower amplitude of the 100 Hz tone component is due to the way that all the sine tones sum. Assuming the same level of noise, the signal-to-noise ratio (SNR) of the 100 Hz component is better for the case of the swept sine approach. In the multitone approach, you can mitigate the reduced SNR by adjusting the amplitudes and phases of the tones, applying higher energy where needed, and applying lower energy at less critical frequencies. When viewing the response of a system to a multitone stimulus, any energy between FFT bins is due to noise or unit-under-test (UUT) induced distortion.

The frequency resolution of the FFT is limited by your measurement time. If you want to measure your system at 1.000 kHz and 1.001 kHz, using two independent sine tones is the best approach. Using two independent sine tones, you can perform the measurement in a few milliseconds, while a multitone measurement requires at least one second. The difference in measurement speed is because you must wait long enough to obtain the required number of samples to achieve a frequency resolution of 1 Hz. Some applications, such as finding the resonant frequency of a crystal, combine a multitone measurement for coarse measurement and a narrow-range sweep for fine measurement.
Noise Generation
You can use noise signals to perform frequency response measurements or to simulate certain processes. Several types of noise are typically used, namely uniform white noise, Gaussian white noise, and periodic random noise. The term white in the definition of noise refers to the frequency domain characteristic of noise. Ideal white noise has equal power per unit bandwidth, resulting in a flat power spectral density across the frequency range of interest. Thus, the power in the frequency range from 100 Hz to 110 Hz is the same as the power in the frequency range from 1,000 Hz to 1,010 Hz. In practical measurements, achieving the flat power spectral density requires an infinite number of samples. Thus, when making measurements of white noise, the power spectra are usually averaged, with a greater number of averages resulting in a flatter power spectrum. The terms uniform and Gaussian refer to the probability density function (PDF) of the amplitudes of the time-domain samples of the noise. For uniform white noise, the PDF of the amplitudes of the time domain samples is uniform within the specified maximum and minimum levels. In other words, all amplitude values between some limits are equally likely or probable. Thermal noise produced in active components tends to be uniform white in distribution. Figure 2-5 shows the distribution of the samples of uniform white noise.
For Gaussian white noise, the PDF of the amplitudes of the time domain samples is Gaussian. If uniform white noise is passed through a linear system, the resulting output is Gaussian white noise. Figure 2-6 shows the distribution of the samples of Gaussian white noise.
Periodic random noise (PRN) is a summation of sinusoidal signals with the same amplitudes but with random phases. PRN consists of all sine waves with frequencies that can be represented with an integral number of cycles in the requested number of samples. Because PRN contains only integral-cycle sinusoids, you do not need to window PRN before performing spectral analysis. PRN is self-windowing and therefore has no spectral leakage. PRN does not have energy at all frequencies as white noise does but has energy only at discrete frequencies that correspond to harmonics of a fundamental frequency. The fundamental frequency is equal to the sampling frequency divided by the number of samples. However, the level of noise at each of the discrete frequencies is the same. You can use PRN to compute the frequency response of a linear system with one time record instead of averaging the frequency response over several time records, as you must for nonperiodic random noise sources. Figure 2-7 shows the spectrum of PRN and the averaged spectra of white noise.
Figure 2-7. Spectral Representation of Periodic Random Noise and Averaged White Noise
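A PRN record like the one described above can be sketched in a few lines. This Python/NumPy example (illustrative, not the LabVIEW implementation; the record length and seed are arbitrary) sums equal-amplitude, random-phase sinusoids at every integral-cycle frequency:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024                        # samples in one record
k = np.arange(1, n // 2)        # every frequency that completes an
                                # integral number of cycles in the record
phases = rng.uniform(0, 2 * np.pi, k.size)
i = np.arange(n)

# Sum of equal-amplitude sinusoids with one random phase per frequency.
prn = np.sin(2 * np.pi * np.outer(k, i) / n + phases[:, None]).sum(axis=0)

# Every component completes an integral number of cycles in the record,
# so an FFT of prn shows energy only at the bin frequencies: PRN is
# self-windowing and produces no spectral leakage.
```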
Normalized Frequency
In the analog world, a signal frequency is measured in hertz (Hz), or cycles per second. But a digital system often uses a digital frequency, which is the ratio between the analog frequency and the sampling frequency, as shown by the following equation.

digital frequency = analog frequency / sampling frequency

The digital frequency is known as the normalized frequency and is measured in cycles per sample.
Some of the Signal Generation VIs use a frequency input f that is assumed to use normalized frequency units of cycles per sample. The normalized frequency ranges from 0.0 to 1.0, which corresponds to a real frequency range of 0 to the sampling frequency fs. The normalized frequency also wraps around 1.0, so a normalized frequency of 1.1 is equivalent to 0.1. For example, a signal at the Nyquist frequency of fs/2 is sampled twice per cycle, that is, two samples/cycle. This frequency corresponds to a normalized frequency of 1/2 cycles/sample = 0.5 cycles/sample. The reciprocal of the normalized frequency, 1/f, gives you the number of times the signal is sampled in one cycle, that is, the number of samples per cycle. When you use a VI that requires the normalized frequency as an input, you must convert your frequency units to the normalized units of cycles per sample. You must use normalized units of cycles per sample with the following Signal Generation VIs:
- Sine Wave
- Square Wave
- Sawtooth Wave
- Triangle Wave
- Arbitrary Wave
- Chirp Pattern
If you are used to working in frequency units of cycles, you can convert cycles to cycles per sample by dividing the frequency in cycles by the number of samples generated. For example, a frequency of two cycles divided by 50 samples results in a normalized frequency of f = 1/25 cycles/sample. This means that it takes 25, the reciprocal of f, samples to generate one cycle of the sine wave. However, you might need to use frequency units of Hz, cycles per second. If you need to convert from Hz to cycles per sample, divide your frequency in Hz by the sampling rate given in samples per second, as shown in the following equation.

cycles/sample = (cycles per second) / (samples per second)

For example, you divide a frequency of 60 Hz by a sampling rate of 1,000 Hz to get the normalized frequency of f = 0.06 cycles/sample.
Therefore, it takes almost 17, or 1/0.06, samples to generate one cycle of the sine wave. The Signal Generation VIs create many common signals required for network analysis and simulation. You also can use the Signal Generation VIs in conjunction with National Instruments hardware to generate analog output signals.
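The Hz-to-normalized conversion is a one-line computation. A minimal Python sketch (the helper name is our own) of the 60 Hz example above:

```python
def to_cycles_per_sample(f_hz, fs_hz):
    """Convert a frequency in Hz to normalized cycles/sample."""
    return f_hz / fs_hz

f = to_cycles_per_sample(60.0, 1000.0)   # 0.06 cycles/sample
samples_per_cycle = 1.0 / f              # ~16.7 samples per cycle
```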
Phase Control
The wave VIs have a phase in input that specifies the initial phase in degrees of the first sample of the generated waveform. The wave VIs also have a phase out output that indicates the phase of the next sample of the generated waveform. In addition, a reset phase input specifies whether the phase of the first sample generated when the wave VI is called is the phase specified in the phase in input or the phase available in the phase out output when the VI last executed. A TRUE value for reset phase sets the initial phase to phase in. A FALSE value for reset phase sets the initial phase to the value of phase out when the VI last executed. All the wave VIs are reentrant, which means they can keep track of phase internally. The wave VIs accept frequency in normalized units of cycles per sample. The only pattern VI that uses normalized units is the Chirp Pattern VI. Wire FALSE to the reset phase input to allow for continuous sampling simulation.
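The phase bookkeeping described above is easy to emulate. The following NumPy sketch uses a hypothetical sine_wave helper whose phase_in and phase_out values play the roles of the phase in and phase out terminals; feeding phase out back into phase in corresponds to wiring FALSE to reset phase.

import numpy as np

def sine_wave(n, f_norm, phase_in=0.0):
    # n samples of a sine wave at f_norm cycles/sample, starting at phase_in degrees.
    i = np.arange(n)
    samples = np.sin(2 * np.pi * f_norm * i + np.radians(phase_in))
    phase_out = (phase_in + 360.0 * f_norm * n) % 360.0   # phase of the next sample
    return samples, phase_out

block1, ph = sine_wave(100, 0.02)                # first call
block2, ph = sine_wave(100, 0.02, phase_in=ph)   # continues without a phase jump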
Digital Filtering
This chapter introduces the concept of filtering, compares analog and digital filters, describes finite impulse response (FIR) and infinite impulse response (IIR) filters, and describes how to choose the appropriate digital filter for a particular application.
Introduction to Filtering
The filtering process alters the frequency content of a signal. For example, the bass control on a stereo system alters the low-frequency content of a signal, while the treble control alters the high-frequency content. Changing the bass and treble controls filters the audio signal. Two common filtering applications are removing noise and decimation. Decimation consists of lowpass filtering and reducing the sample rate. The filtering process assumes that you can separate the signal content of interest from the raw signal. Classical linear filtering assumes that the signal content of interest is distinct from the remainder of the signal in the frequency domain.
Digital filters have the following advantages compared to analog filters:
- Digital filters are software programmable, which makes them easy to build and test.
- Digital filters require only the arithmetic operations of multiplication and addition/subtraction.
- Digital filters do not drift with temperature or humidity or require precision components.
- Digital filters have a superior performance-to-cost ratio.
- Digital filters do not suffer from manufacturing variations or aging.
Traditional filter classification begins with classifying a filter according to its impulse response.
Impulse Response
An impulse is a short duration signal that goes from zero to a maximum value and back to zero again in a short time. Equation 3-1 provides the mathematical definition of an impulse.

x_0 = 1
x_i = 0 for all i ≠ 0    (3-1)
The impulse response of a filter is the response of the filter to an impulse and depends on the values upon which the filter operates. Figure 3-1 illustrates impulse response.
The Fourier transform of the impulse response is the frequency response of the filter. The frequency response of a filter provides information about the output of the filter at different frequencies. In other words, the frequency response of a filter reflects the gain of the filter at different frequencies. For an ideal filter, the gain is one in the passband and zero in the stopband. An ideal filter passes all frequencies in the passband to the output unchanged but passes none of the frequencies in the stopband to the output.
A cash register computing a running total provides a simple example of a filtering operation. The following statements describe the operation of the cash register: The cash register adds the cost of each item to produce the running total y[k]. The following equation computes y[k] up to the kth item.

y[k] = x[k] + x[k−1] + x[k−2] + x[k−3] + … + x[1]    (3-2)

Therefore, the total for N items is y[N].
y[k] equals the total up to the kth item. y[k−1] equals the total up to the (k−1)th item. Therefore, Equation 3-2 can be rewritten as the following equation.

y[k] = y[k−1] + x[k]    (3-3)
Add a tax of 8.25% and rewrite Equations 3-2 and 3-3 as the following equations.

y[k] = 1.0825x[k] + 1.0825x[k−1] + 1.0825x[k−2] + 1.0825x[k−3] + … + 1.0825x[1]    (3-4)

y[k] = y[k−1] + 1.0825x[k]    (3-5)
Equations 3-4 and 3-5 identically describe the behavior of the cash register. However, Equation 3-4 describes the behavior of the cash register only in terms of the input, while Equation 3-5 describes the behavior in terms of both the input and the output. Equation 3-4 represents a nonrecursive, or FIR, operation. Equation 3-5 represents a recursive, or IIR, operation. Equations that describe the operation of a filter and have the same form as Equations 3-2, 3-3, 3-4, and 3-5 are difference equations. FIR filters are the simplest filters to design. If a single impulse is present at the input of an FIR filter and all subsequent inputs are zero, the output of an FIR filter becomes zero after a finite time. Therefore, the impulse response of an FIR filter is finite. The time required for the filter output to reach zero equals the number of filter coefficients, in samples. Refer to the FIR Filters section of this chapter for more information about FIR filters. Because IIR filters operate on current and past input values and current and past output values, the impulse response of an IIR filter never reaches zero and is an infinite response. Refer to the IIR Filters section of this chapter for more information about IIR filters.
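The cash register equations translate directly into code. The following Python sketch computes the taxed running total both nonrecursively (Equation 3-4) and recursively (Equation 3-5) and confirms that the two formulations agree.

items = [1.00, 2.50, 0.99, 4.25]       # item costs x[1..N]
TAX = 1.0825

# Nonrecursive (FIR-like): each total is a weighted sum of all inputs so far
fir_totals = [TAX * sum(items[:k + 1]) for k in range(len(items))]

# Recursive (IIR-like): each total reuses the previous output
iir_totals, y = [], 0.0
for x in items:
    y = y + TAX * x                    # y[k] = y[k-1] + 1.0825 * x[k]
    iir_totals.append(y)

assert all(abs(a - b) < 1e-9 for a, b in zip(fir_totals, iir_totals))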
Filter Coefficients
In Equation 3-4, the multiplying constant for each term is 1.0825. In Equation 3-5, the multiplying constants are 1 for y[k−1] and 1.0825 for x[k]. The multiplying constants are the coefficients of the filter. For an IIR filter, the coefficients multiplying the inputs are the forward coefficients. The coefficients multiplying the outputs are the reverse coefficients.
Figure 3-2 shows the ideal frequency response of each of the preceding filter types.
Figure 3-2. Ideal Frequency Responses of the Filter Types (amplitude versus frequency, with cut-off frequency fc)
In Figure 3-2, the filters exhibit the following behavior:
- The lowpass filter passes all frequencies below fc.
- The highpass filter passes all frequencies above fc.
- The bandpass filter passes all frequencies between fc1 and fc2.
- The bandstop filter attenuates all frequencies between fc1 and fc2.
The frequency points fc, fc1, and fc2 specify the cut-off frequencies for the different filters. When designing filters, you must specify the cut-off frequencies. The passband of the filter is the frequency range that passes through the filter. An ideal filter has a gain of one (0 dB) in the passband so the amplitude of the signal neither increases nor decreases. The stopband of the filter is the range of frequencies that the filter attenuates. Figure 3-3 shows the passband (PB) and the stopband (SB) for each filter type.
Figure 3-3. Passbands (PB) and Stopbands (SB) of the Four Filter Types (amplitude versus frequency)
The filters in Figure 3-3 have the following passband and stopband characteristics:
- The lowpass and highpass filters have one passband and one stopband.
- The bandpass filter has one passband and two stopbands.
- The bandstop filter has two passbands and one stopband.
Transition Band
Figure 3-4 shows the passband, the stopband, and the transition band for each type of practical filter.
Figure 3-4. Passband, Stopband, and Transition Regions of Practical Lowpass, Highpass, and Bandpass Filters
In each plot in Figure 3-4, the x-axis represents frequency, and the y-axis represents the magnitude of the filter in dB. The passband is the region within which the gain of the filter varies from 0 dB to −3 dB.
The ratio of the amplitudes shows how close the passband or stopband is to the ideal. For example, for a passband ripple of −0.02 dB, Equation 3-6 yields the following set of equations.

−0.02 = 20 log10( Ao(f) / Ai(f) )    (3-7)

Ao(f) / Ai(f) = 10^(−0.001) = 0.9977    (3-8)
Equations 3-7 and 3-8 show that the ratio of input and output amplitudes is close to unity, which is the ideal for the passband. Practical filter design attempts to approximate the ideal desired magnitude response, subject to certain constraints. Table 3-1 compares the characteristics of ideal filters and practical filters.
Table 3-1. Characteristics of Ideal and Practical Filters
Ideal Filters              | Practical Filters
No ripple in the passband  | Might contain ripples
No ripple in the stopband  | Might contain ripples
No transition regions      | Have transition regions
Practical filter design involves compromise, allowing you to emphasize a desirable filter characteristic at the expense of a less desirable characteristic. The compromises you can make depend on whether the filter is an FIR or IIR filter and the design algorithm.
Sampling Rate
The sampling rate is important to the success of a filtering operation. The maximum frequency component of the signal of interest usually determines the sampling rate. In general, choose a sampling rate 10 times higher than the highest frequency component of the signal of interest. Make exceptions to the previous sampling rate guideline when filter cut-off frequencies must be very close to either DC or the Nyquist frequency. Filters with cut-off frequencies close to DC or the Nyquist frequency might
have a slow rate of convergence. You can take the following actions to overcome the slow convergence:
- If the cut-off is too close to the Nyquist frequency, increase the sampling rate.
- If the cut-off is too close to DC, reduce the sampling rate.
FIR Filters
Finite impulse response (FIR) filters are digital filters that have a finite impulse response. FIR filters operate only on current and past input values and are the simplest filters to design. FIR filters also are known as nonrecursive filters, convolution filters, and moving average (MA) filters. FIR filters perform a convolution of the filter coefficients with a sequence of input values and produce an equally numbered sequence of output values. Equation 3-9 defines the finite convolution an FIR filter performs.
y_i = Σ(k = 0 to n−1) h_k x_(i−k)    (3-9)
where x is the input sequence to filter, y is the filtered sequence, and h is the set of FIR filter coefficients. FIR filters have the following characteristics:
- FIR filters can achieve linear phase because of filter coefficient symmetry in the realization.
- FIR filters are always stable.
- FIR filters allow you to filter signals using convolution. Therefore, you generally can associate a delay with the output sequence, as shown in the following equation.

delay = (n − 1)/2

where n is the number of FIR filter coefficients.
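A minimal NumPy sketch of Equation 3-9, using a 5-tap moving-average filter as the coefficient set h:

import numpy as np

h = np.ones(5) / 5.0                          # 5-tap moving-average FIR filter
x = np.sin(2 * np.pi * 0.01 * np.arange(200)) # input sequence to filter
y = np.convolve(x, h)[:len(x)]                # Equation 3-9, truncated to len(x)
delay = (len(h) - 1) / 2                      # 2 samples of linear-phase delay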
Figure 3-5 shows a typical magnitude and phase response of an FIR filter compared to normalized frequency.
Figure 3-5. FIR Filter Magnitude and Phase Response Compared to Normalized Frequency
In Figure 3-5, the discontinuities in the phase response result from the discontinuities introduced when you use the absolute value to compute the magnitude response. The discontinuities in phase are on the order of pi. However, the phase is clearly linear.
Taps
The terms tap and taps frequently appear in descriptions of FIR filters, FIR filter design, and FIR filtering operations. Figure 3-6 illustrates the process of tapping.
Figure 3-6. Tapping Off an n-Sample Shift Register (inputs x_n, x_(n−1), x_(n−2), …; coefficient products h_0 x_n, …)
Figure 3-6 represents an n-sample shift register containing the input samples [x_i, x_(i−1), …]. The term tap comes from the process of tapping off of the shift register to form each h_k x_(i−k) term in Equation 3-9. Taps usually refers to the number of filter coefficients for an FIR filter.
The VI in Figure 3-7 completes the following steps to compute the frequency response of the filter, as sketched in the code after this list.
1. Pass an impulse signal through the filter.
2. Pass the filtered signal out of the Case structure to the FFT VI. The Case structure specifies the filter type: lowpass, highpass, bandpass, or bandstop. The signal passed out of the Case structure is the impulse response of the filter.
3. Use the FFT VI to perform a Fourier transform on the impulse response and to compute the frequency response of the filter, such that the impulse response and the frequency response comprise the Fourier transform pair h(t) ↔ H(f), where h(t) is the impulse response and H(f) is the frequency response.
4. Use the Array Subset function to reduce the data returned by the FFT VI. Half of the real FFT result is redundant, so the VI needs to process only half of the data returned by the FFT VI.
5. Use the Complex To Polar function to obtain the magnitude-and-phase form of the data returned by the FFT VI. The magnitude-and-phase form of the complex output from the FFT VI is easier to interpret than the rectangular form.
6. Unwrap the phase and convert it to degrees.
7. Convert the magnitude to decibels.
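The following SciPy sketch walks through the same seven steps, substituting scipy.signal and NumPy calls for the LabVIEW VIs; the firwin design stands in for whatever filter the Case structure selects.

import numpy as np
from scipy import signal

b = signal.firwin(31, [0.2, 0.4], pass_zero=False)  # an example bandpass FIR filter
impulse = np.zeros(512)
impulse[0] = 1.0                                    # step 1: impulse input
h_t = signal.lfilter(b, [1.0], impulse)             # steps 1-2: impulse response h(t)
H_f = np.fft.fft(h_t)[:256]                         # steps 3-4: keep non-redundant half
magnitude = np.abs(H_f)                             # step 5: magnitude-and-phase form
phase_deg = np.degrees(np.unwrap(np.angle(H_f)))    # step 6: unwrap, convert to degrees
mag_db = 20 * np.log10(magnitude + 1e-12)           # step 7: magnitude in decibels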
Figure 3-8 shows the magnitude and phase responses returned by the VI in Figure 3-7.
Figure 3-8. Magnitude and Phase Response of a Bandpass Equiripple FIR Filter
In Figure 3-8, the discontinuities in the phase response result from the discontinuities introduced when you use the absolute value to compute the magnitude response. However, the phase response is linear because all frequencies in the system have the same propagation delay. Because FIR filters have ripple in the magnitude response, designing FIR filters presents the following challenges:
- Designing a filter with a magnitude response as close to the ideal as possible
- Designing a filter that distributes the ripple in a desired fashion
For example, a lowpass filter has an ideal characteristic magnitude response. A particular application might allow some ripple in the passband and more ripple in the stopband. The filter design algorithm must balance the relative ripple requirements while producing the sharpest transition band. The most common techniques for designing FIR filters are windowing and the Parks-McClellan algorithm, also known as Remez Exchange.
Truncating the ideal impulse response results in the Gibbs phenomenon, which appears as oscillatory behavior near cut-off frequencies in the FIR filter frequency response. You can reduce the effects of the Gibbs phenomenon by using a smoothing window to smooth the truncation of the ideal impulse response. By tapering the FIR coefficients at each end, you can decrease the height of the side lobes in the frequency response. However, decreasing the side lobe heights causes the main lobe to widen, resulting in a wider transition band at the cut-off frequencies. Selecting a smoothing window therefore requires a trade-off: decreasing the height of the side lobes near the cut-off frequencies increases the width of the transition band, and decreasing the width of the transition band increases the height of the side lobes. Designing FIR filters by windowing has the following disadvantages:
- Inefficiency: windowing results in an unequal distribution of ripple and in a wider transition band than other design techniques.
- Difficulty in specification: windowing increases the difficulty of specifying a cut-off frequency that has a specific attenuation. Filter designers must specify the ideal cut-off frequency, the sampling frequency, the number of taps, and the window type.
Designing FIR filters by windowing does not require a large amount of computational resources. Therefore, windowing is the fastest technique for designing FIR filters. However, windowing is not necessarily the best technique for designing FIR filters.
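Assuming SciPy is available, a windowed design is a one-line call; comparing a tapered window with a plain truncation (boxcar) makes the Gibbs trade-off visible.

from scipy import signal

taps_hamming = signal.firwin(51, cutoff=0.3, window='hamming')  # tapered coefficients
taps_boxcar = signal.firwin(51, cutoff=0.3, window='boxcar')    # plain truncation:
# lower side lobes but a wider transition band for 'hamming', the reverse for 'boxcar'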
FIR filters you design using the Parks-McClellan algorithm have an optimal response. However, the design process is complex, requires a large amount of computational resources, and takes much longer than designing FIR filters by windowing.
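SciPy's remez function implements the Parks-McClellan algorithm; the following sketch designs an equiripple lowpass filter with a passband edge at 0.2 and a stopband edge at 0.3, with frequencies normalized so that 0.5 is the Nyquist frequency.

from scipy import signal

taps = signal.remez(numtaps=65,
                    bands=[0, 0.2, 0.3, 0.5],   # passband and stopband edges
                    desired=[1, 0],             # gain of 1 in passband, 0 in stopband
                    fs=1.0)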
The cut-off frequency for equiripple filters specifies the edge of the passband, the stopband, or both. The ripple in the passband and stopband of equiripple filters causes the following magnitude responses:
- Passband: a magnitude response greater than or equal to 1 minus the passband ripple
- Stopband: a magnitude response less than or equal to the stopband attenuation
For example, if you specify a lowpass filter, the passband cut-off frequency is the highest frequency for which the passband conditions are true. Similarly, the stopband cut-off frequency is the lowest frequency for which the stopband conditions are true.
You must specify the following parameters when developing narrowband filter specifications:
- Filter type, such as lowpass, highpass, bandpass, or bandstop
- Passband ripple on a linear scale
- Sampling frequency
- Passband frequency, which refers to passband width for bandpass and bandstop filters
- Stopband frequency, which refers to stopband width for bandpass and bandstop filters
- Center frequency for bandpass and bandstop filters
- Stopband attenuation in decibels
Figure 3-9 shows the block diagram of a VI that estimates the frequency response of a narrowband FIR bandpass filter by transforming the impulse response into the frequency domain.
Figure 3-9. Estimating the Frequency Response of a Narrowband FIR Bandpass Filter
Figure 3-10 shows the filter response from zero to the Nyquist frequency that the VI in Figure 3-9 returns.
Figure 3-10. Estimated Frequency Response of a Narrowband FIR Bandpass Filter from Zero to Nyquist
In Figure 3-10, the narrow passband centers around 1 kHz, which is the passband that the front panel controls in Figure 3-10 specify. Figure 3-11 shows the filter response in detail.
Figure 3-11. Detail of the Estimated Frequency Response of a Narrowband FIR Bandpass Filter
In Figure 3-11, the narrow passband clearly centers around 1 kHz, and the stopband attenuation is 60 dB below the passband.
Refer to the works of Vaidyanathan, P. P. and Neuvo, Y. et al. in Appendix A, References, for more information about designing IFIR filters.
Figure 3-12. Frequency Response of a Wideband FIR Lowpass Filter from Zero to Nyquist
In Figure 3-12, the front panel controls define a narrow bandwidth between the stopband at 23.9 kHz and the Nyquist frequency at 24 kHz. However, the frequency response of the filter runs from zero to 23.9 kHz, which makes the filter a wideband filter.
IIR Filters
Infinite impulse response (IIR) filters, also known as recursive filters and autoregressive moving-average (ARMA) filters, operate on current and past input values and current and past output values. The impulse response
of an IIR filter is the response of the general IIR filter to an impulse, as Equation 3-1 defines impulse. Theoretically, the impulse response of an IIR filter never reaches zero and is an infinite response. The following general difference equation characterizes IIR filters.

y_i = (1/a_0) [ Σ(j = 0 to Nb−1) b_j x_(i−j) − Σ(k = 1 to Na−1) a_k y_(i−k) ]    (3-10)
where bj is the set of forward coefficients, Nb is the number of forward coefficients, ak is the set of reverse coefficients, and Na is the number of reverse coefficients. Equation 3-10 describes a filter with an impulse response of theoretically infinite length for nonzero coefficients. However, in practical filter applications, the impulse response of a stable IIR filter decays to near zero in a finite number of samples. In most IIR filter designs and all of the LabVIEW IIR filters, coefficient a0 is 1. The output sample at the current sample index i is the sum of scaled current and past inputs and scaled past outputs, as shown by Equation 3-11.
y_i = Σ(j = 0 to Nb−1) b_j x_(i−j) − Σ(k = 1 to Na−1) a_k y_(i−k)    (3-11)
where x_i is the current input, x_(i−j) represents the past inputs, and y_(i−k) represents the past outputs. IIR filters might have ripple in the passband, the stopband, or both. IIR filters have a nonlinear-phase response. Taking the z-transform of Equation 3-11 gives the direct-form transfer function of the filter.

H(z) = [ Σ(j = 0 to Nb−1) b_j z^(−j) ] / [ 1 + Σ(k = 1 to Na−1) a_k z^(−k) ]    (3-12)
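Assuming SciPy, Equation 3-11 corresponds to scipy.signal.lfilter, which takes the forward coefficients b and the reverse coefficients a, with a[0] = 1, in exactly this direct form.

import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.2)        # example forward (b) and reverse (a) coefficients
x = np.random.randn(1000)           # input sequence
y = signal.lfilter(b, a, x)         # y[i] = sum(b[j] x[i-j]) - sum(a[k] y[i-k])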
Equation 3-11 is a direct-form IIR filter. A direct-form IIR filter often is sensitive to errors introduced by coefficient quantization and by computational precision limits. Also, a filter with an initially stable design can become unstable with increasing coefficient length. The filter order is proportional to the coefficient length: as the coefficient length increases, the filter order increases, and as the filter order increases, the filter becomes more unstable. You can lessen the sensitivity of a filter to error by writing Equation 3-12 as a ratio of z transforms, which divides the direct-form transfer function into lower order sections, or filter stages. By factoring Equation 3-12 into second-order sections, the transfer function of the filter becomes a product of second-order filter functions, as shown in Equation 3-13.

H(z) = Π(k = 1 to Ns) [ b_0k + b_1k z^(−1) + b_2k z^(−2) ] / [ 1 + a_1k z^(−1) + a_2k z^(−2) ]    (3-13)
where Ns is the number of stages, Ns = ⌊Na/2⌋ is the largest integer less than or equal to Na/2, and Na ≥ Nb. You can describe the filter structure defined by Equation 3-13 as a cascade of second-order filters. Figure 3-13 illustrates cascade filtering.
Figure 3-13. Cascaded Filter Stages: x(i) → Stage 1 → Stage 2 → … → Stage Ns → y(i)
You implement each individual filter stage in Figure 3-13 with the direct-form II filter structure for the following reasons:
- The direct-form II filter structure requires a minimum number of arithmetic operations.
- The direct-form II filter structure requires a minimum number of delay elements, or internal filter states.
Each kth stage has one input, one output, and two past internal states, s_k[i−1] and s_k[i−2].
If n is the number of samples in the input sequence, the filtering operation proceeds as shown in the following equations.

y_0[i] = x[i]
s_k[i] = y_(k−1)[i] − a_1k s_k[i−1] − a_2k s_k[i−2],    k = 1, 2, …, Ns
y_k[i] = b_0k s_k[i] + b_1k s_k[i−1] + b_2k s_k[i−2],    k = 1, 2, …, Ns

for each sample i = 0, 1, 2, …, n−1.
Second-Order Filtering
For lowpass and highpass filters, which have a single cut-off frequency, you can design second-order filter stages directly. The resulting IIR lowpass or highpass filter contains cascaded second-order filters. Each second-order filter stage has the following characteristics:
- k = 1, 2, …, Ns, where k is the second-order filter stage number and Ns is the total number of second-order filter stages.
- Each second-order filter stage has two reverse coefficients, (a_1k, a_2k). The total number of reverse coefficients equals 2Ns.
- Each second-order filter stage has three forward coefficients, (b_0k, b_1k, b_2k). The total number of forward coefficients equals 3Ns.
In Signal Processing VIs with Reverse Coefficients and Forward Coefficients parameters, the Reverse Coefficients and Forward Coefficients arrays contain the coefficients for one second-order filter stage, followed by the coefficients for the next second-order filter stage, and so on. For example, an IIR filter with two second-order filter stages must have a total of four reverse coefficients and six forward coefficients, as shown in the following equations.

Total number of reverse coefficients = 2Ns = 2 × 2 = 4
Reverse Coefficients = {a11, a21, a12, a22}
Total number of forward coefficients = 3Ns = 3 × 2 = 6
Forward Coefficients = {b01, b11, b21, b02, b12, b22}
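SciPy stores cascaded second-order stages in a similar flattened layout: one row per stage, each row holding [b0, b1, b2, a0, a1, a2]. The sketch below designs a fourth-order filter as Ns = 2 second-order stages and filters with the cascade.

import numpy as np
from scipy import signal

sos = signal.butter(4, 0.2, output='sos')        # shape (2, 6): two second-order stages
y = signal.sosfilt(sos, np.random.randn(1000))
# sos[k] = [b0k, b1k, b2k, 1.0, a1k, a2k] for stage k, a layout analogous to the
# Forward Coefficients and Reverse Coefficients arrays described above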
Fourth-Order Filtering
For bandpass and bandstop filters, which have two cut-off frequencies, fourth-order filter stages are a more direct form of filter design than second-order filter stages. IIR bandpass or bandstop filters resulting from fourth-order filter design contain cascaded fourth-order filters. Each fourth-order filter stage has the following characteristics:
- k = 1, 2, …, Ns, where k is the fourth-order filter stage number and Ns is the total number of fourth-order filter stages, Ns = ⌊(Na + 1)/4⌋.
- Each fourth-order filter stage has four reverse coefficients, (a_1k, a_2k, a_3k, a_4k). The total number of reverse coefficients equals 4Ns.
- Each fourth-order filter stage has five forward coefficients, (b_0k, b_1k, b_2k, b_3k, b_4k). The total number of forward coefficients equals 5Ns.
You implement cascade stages in fourth-order filtering in the same manner as in second-order filtering. The following equations show how the filtering operation for fourth-order stages proceeds.

y_0[i] = x[i]
s_k[i] = y_(k−1)[i] − a_1k s_k[i−1] − a_2k s_k[i−2] − a_3k s_k[i−3] − a_4k s_k[i−4]
y_k[i] = b_0k s_k[i] + b_1k s_k[i−1] + b_2k s_k[i−2] + b_3k s_k[i−3] + b_4k s_k[i−4]

where k = 1, 2, …, Ns.
The IIR filter designs differ in the sharpness of the transition between the passband and the stopband and where they exhibit their various characteristicsin the passband or the stopband.
Butterworth Filters
Butterworth filters have the following characteristics:
- Smooth response at all frequencies
- Monotonic decrease from the specified cut-off frequencies
- Maximal flatness, with the ideal response of unity in the passband and zero in the stopband
- Half-power frequency, or 3 dB down frequency, that corresponds to the specified cut-off frequencies
The advantage of Butterworth filters is their smooth, monotonically decreasing frequency response. Figure 3-14 shows the frequency response of a lowpass Butterworth filter.
As shown in Figure 3-14, after you specify the cut-off frequency of a Butterworth filter, LabVIEW sets the steepness of the transition proportional to the filter order. Higher order Butterworth filters approach the ideal lowpass filter response. Butterworth filters do not always provide a good approximation of the ideal filter response because of the slow rolloff between the passband and the stopband.
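The order dependence is easy to verify numerically. Assuming SciPy, the sketch below designs Butterworth filters of increasing order and confirms that each has the same −3 dB (1/√2) gain at the cut-off while the transition steepens.

import numpy as np
from scipy import signal

for order in (2, 5, 10):
    b, a = signal.butter(order, 0.25)           # cut-off at 0.25 (Nyquist = 1.0)
    w, h = signal.freqz(b, a)
    gain_at_fc = np.abs(h[np.argmin(np.abs(w / np.pi - 0.25))])
    print(order, gain_at_fc)                    # about 0.707 (-3 dB) for every order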
Chebyshev Filters
Chebyshev filters have the following characteristics:
- Minimization of peak error in the passband
- Equiripple magnitude response in the passband
- Monotonically decreasing magnitude response in the stopband
- Sharper rolloff than Butterworth filters
Compared to a Butterworth filter, a Chebyshev filter can achieve a sharper transition between the passband and the stopband with a lower order filter. The sharp transition between the passband and the stopband of a Chebyshev filter produces smaller absolute errors and faster execution speeds than a Butterworth filter.
In Figure 3-15, the maximum tolerable error constrains the equiripple response in the passband. Also, the sharp rolloff appears in the stopband.
Chebyshev II Filters
Chebyshev II filters have the following characteristics:
- Minimization of peak error in the stopband
- Equiripple magnitude response in the stopband
- Monotonically decreasing magnitude response in the passband
- Sharper rolloff than Butterworth filters
Chebyshev II filters are similar to Chebyshev filters. However, Chebyshev II filters differ from Chebyshev filters in the following ways:
- Chebyshev II filters minimize peak error in the stopband instead of the passband, which is an advantage of Chebyshev II filters over Chebyshev filters.
- Chebyshev II filters have an equiripple magnitude response in the stopband instead of the passband.
- Chebyshev II filters have a monotonically decreasing magnitude response in the passband instead of the stopband.
In Figure 3-16, the maximum tolerable error constrains the equiripple response in the stopband. Also, the smooth monotonic rolloff appears in the passband. Chebyshev II filters have the same advantage over Butterworth filters that Chebyshev filters have: a sharper transition between the passband and the stopband with a lower order filter, resulting in a smaller absolute error and faster execution speed.
Elliptic Filters
Elliptic filters have the following characteristics:
- Minimization of peak error in the passband and the stopband
- Equiripples in the passband and the stopband
Compared with the same order Butterworth or Chebyshev filters, the elliptic filters provide the sharpest transition between the passband and the stopband, which accounts for their widespread use.
In Figure 3-17, the same maximum tolerable error constrains the ripple in both the passband and the stopband. Also, even low-order elliptic filters have a sharp transition edge.
Bessel Filters
Bessel filters have the following characteristics:
- Maximally flat response in both magnitude and phase
- Nearly linear-phase response in the passband
You can use Bessel filters to reduce nonlinear-phase distortion inherent in all IIR filters. High-order IIR filters and IIR filters with a steep rolloff have a pronounced nonlinear-phase distortion, especially in the transition regions of the filters. You also can obtain linear-phase response with FIR filters. Figure 3-18 shows the magnitude and phase responses of a lowpass Bessel filter.
Figure 3-18. Bessel Magnitude Response for Filter Orders 2, 5, and 10
In Figure 3-18, the magnitude is smooth and monotonically decreasing at all frequencies. Figure 3-19 shows the phase response of a lowpass Bessel filter.
Figure 3-19. Bessel Phase Response for Filter Orders 2, 5, and 10
Figure 3-19 shows the nearly linear phase in the passband. Also, the phase monotonically decreases at all frequencies. Like Butterworth filters, Bessel filters require high-order filters to minimize peak error, which accounts for their limited use.
Because the same mathematical theory applies to designing IIR and FIR filters, the block diagram in Figure 3-20 of a VI that returns the frequency response of an IIR filter and the block diagram in Figure 3-7 of a VI that returns the frequency response of an FIR filter share common design elements. The main difference between the two VIs is that the Case structure on the left side of Figure 3-20 specifies the IIR filter design and filter type instead of specifying only the filter type. The VI in Figure 3-20 computes the frequency response of an IIR filter by following the same steps outlined in the Designing FIR Filters section of this chapter. Figure 3-21 shows the magnitude and the phase responses of a bandpass elliptic IIR filter.
Figure 3-21. Magnitude and Phase Responses of a Bandpass Elliptic IIR Filter
In Figure 3-21, the phase information is clearly nonlinear. When deciding whether to use an IIR or FIR filter to process data, remember that IIR filters provide nonlinear phase information. Refer to the Comparing FIR and IIR Filters section and the Selecting a Digital Filter Design section of this chapter for information about differences between FIR and IIR filters and selecting an appropriate filter design.
Transient Response
The transient response occurs because the initial filter state is zero or has values at negative indexes. The duration of the transient response depends on the filter type. The duration of the transient response for lowpass and highpass filters equals the filter order.

delay = order

The duration of the transient response for bandpass and bandstop filters equals twice the filter order.

delay = 2 × order

You can eliminate the transient response on successive calls to an IIR filter VI by enabling state memory. To enable state memory for continuous filtering, wire a value of TRUE to the init/cont input of the IIR filter VI. Figure 3-22 shows the transient response and the steady state for an IIR filter.
Figure 3-22. Transient Response and Steady State for an IIR Filter
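The init/cont behavior can be emulated with scipy.signal.lfilter's zi state argument: carrying the final state of one call into the next, like wiring TRUE to init/cont, removes the transient from successive calls.

import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.1)
x = np.random.randn(2000)
zi = np.zeros(max(len(a), len(b)) - 1)            # zero initial state: transient here
y1, zi = signal.lfilter(b, a, x[:1000], zi=zi)
y2, zi = signal.lfilter(b, a, x[1000:], zi=zi)    # continues in steady state
assert np.allclose(np.concatenate([y1, y2]), signal.lfilter(b, a, x))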
Nonlinear Filters
Smoothing windows, IIR filters, and FIR filters are linear because they satisfy the superposition and proportionality principles, as shown in Equation 3-14.

L{a·x(t) + b·y(t)} = a·L{x(t)} + b·L{y(t)}    (3-14)
where a and b are constants, x(t) and y(t) are signals, L{} is a linear filtering operation, and inputs and outputs are related through the convolution operation, as shown in Equations 3-9 and 3-11. A nonlinear filter does not satisfy Equation 3-14. Also, you cannot obtain the output signals of a nonlinear filter through the convolution operation because a set of coefficients cannot characterize the impulse response of the filter. Nonlinear filters provide specific filtering characteristics that are difficult to obtain using linear techniques. The median filter, a nonlinear filter, combines lowpass filter characteristics and high-frequency characteristics. The lowpass filter characteristics allow the median filter to remove high-frequency noise. The high-frequency characteristics allow the median filter to detect edges, which preserves edge information.
The VI in Figure 3-23 generates a noisy pulse with an expected peak noise amplitude greater than 100% of the expected pulse amplitude. The signal the VI in Figure 3-23 generates has the following ideal pulse values:
- Amplitude of 5.0 V
- Delay of 64 samples
- Width of 32 samples
Figure 3-24 shows the noisy pulse, the filtered pulse, and the estimated pulse parameters returned by the VI in Figure 3-23.
Figure 3-24. Noisy Pulse and Pulse Filtered with Median Filter
In Figure 3-24, you can track the pulse signal produced by the median filter, even though noise obscures the pulse. You can remove the high-frequency noise with the Median Filter VI to achieve the 50% pulse-to-noise ratio the Pulse Parameters VI needs to complete the analysis accurately.
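A sketch of the same experiment with scipy.signal.medfilt in place of the Median Filter VI, using the ideal pulse values listed above:

import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
pulse = np.zeros(256)
pulse[64:96] = 5.0                                # 5.0 V pulse, delay 64, width 32
noisy = pulse + 6.0 * (rng.random(256) - 0.5)     # peak noise > 100% of the pulse
filtered = signal.medfilt(noisy, kernel_size=9)   # edges survive, spikes do not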
Use Figure 3-25 as a guideline for selecting the appropriate filter for an analysis application.
Figure 3-25. Decision Flowchart for Selecting a Filter (linear phase? → FIR filter; otherwise, ripple and rolloff requirements lead to IIR designs such as Chebyshev or elliptic filters)
Figure 3-25 can provide guidance for selecting an appropriate filter type. However, you might need to experiment with several filter types to find the best type.
Frequency Analysis
This chapter describes the fundamentals of the discrete Fourier transform (DFT), the fast Fourier transform (FFT), basic signal analysis computations, computations performed on the power spectrum, and how to use FFT-based functions for network measurement. Use the NI Example Finder to find examples of using the digital signal processing VIs and the measurement analysis VIs to perform FFT and frequency analysis.
In the frequency domain, you can separate conceptually the sine waves that add to form the complex time-domain signal. Figure 4-1 shows single frequency components, which spread out in the time domain, as distinct impulses in the frequency domain. The amplitude of each frequency line is the amplitude of the time waveform for that frequency component. The representation of a signal in terms of its individual frequency components is the frequency-domain representation of the signal. The frequency-domain representation might provide more insight about the signal and the system from which it was generated. The samples of a signal obtained from a DAQ device constitute the time-domain representation of the signal. Some measurements, such as harmonic distortion, are difficult to quantify by inspecting the time waveform on an oscilloscope. When the same signal is displayed in the frequency domain by an FFT Analyzer, also known as a Dynamic Signal Analyzer, you easily can measure the harmonic frequencies and amplitudes.
Parseval's Relationship
Parseval's Theorem states that the total energy computed in the time domain must equal the total energy computed in the frequency domain. It is a statement of conservation of energy. The following equation defines the continuous form of Parseval's relationship.

∫ x(t) x*(t) dt = ∫ |X(f)|² df

The discrete form for a sequence of n samples is given by Equation 4-1.

Σ(i = 0 to n−1) |x_i|² = (1/n) Σ(k = 0 to n−1) |X_k|²    (4-1)
where x_i ↔ X_k is a discrete FFT pair and n is the number of elements in the sequence. Figure 4-2 shows the block diagram of a VI that demonstrates Parseval's relationship.
The VI in Figure 4-2 produces a real input sequence. The upper branch on the block diagram computes the energy of the time-domain signal using the left side of Equation 4-1. The lower branch on the block diagram converts the time-domain signal to the frequency domain and computes the energy of the frequency-domain signal using the right side of Equation 4-1.
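The same check takes a few lines in NumPy; note the 1/n factor on the frequency-domain side of Equation 4-1.

import numpy as np

x = np.random.randn(1024)
time_energy = np.sum(x * x)                                 # left side of Equation 4-1
freq_energy = np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x)   # right side of Equation 4-1
assert np.isclose(time_energy, freq_energy)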
In Figure 4-3, the total computed energy in the time domain equals the total computed energy in the frequency domain.
Fourier Transform
The Fourier transform provides a method for examining a relationship in terms of the frequency domain. The most common applications of the Fourier transform are the analysis of linear time-invariant systems and spectral analysis. The following equations define the two-sided Fourier transform and its inverse.

X(f) = F{x(t)} = ∫(−∞ to ∞) x(t) e^(−j2πft) dt

x(t) = F⁻¹{X(f)} = ∫(−∞ to ∞) X(f) e^(j2πft) df
Two-sided means that the mathematical implementation of the forward and inverse Fourier transform considers all negative and positive frequencies and time of the signal. Single-sided means that the mathematical implementation of the transforms considers only the positive frequencies and time history of the signal. A Fourier transform pair consists of the signal representation in both the time and frequency domain. The following relationship commonly denotes a Fourier transform pair.

x(t) ↔ X(f)
DFT
Suppose you obtained N samples of a signal from a DAQ device. If you apply the DFT to N samples of this time-domain representation of the signal, the result also is of length N samples, but the information it contains is of the frequency-domain representation.
where Δt is the sampling interval and fs = 1/Δt is the sampling rate in samples per second (S/s). The reciprocal of the total acquisition time, 1/(NΔt), is the smallest frequency that the system can resolve through the DFT or related routines.
Equation 4-3 defines the DFT. The equation results in X[k], the frequency-domain representation of the sampled signal.

X[k] = Σ(i = 0 to N−1) x[i] e^(−j2πik/N),    for k = 0, 1, 2, …, N−1    (4-3)
where x[i] is the time-domain representation of the sampled signal and N is the total number of samples. Both the time domain x and the frequency domain X have a total of N samples. Similar to the time spacing of Δt between the samples of x in the time domain, you have a frequency spacing, or frequency resolution, of Δf between the components of X in the frequency domain, which Equation 4-4 defines.

Δf = fs/N = 1/(N·Δt)    (4-4)
where Δf is the frequency resolution, fs is the sampling rate, N is the number of samples, Δt is the sampling interval, and N·Δt is the total acquisition time. To improve the frequency resolution, that is, to decrease Δf, you must increase N while keeping fs constant or decrease fs while keeping N constant. Both approaches are equivalent to increasing N·Δt, the time duration of the acquired samples.
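A small numerical sketch of Equation 4-4, showing that doubling N at a fixed fs, and therefore doubling the acquisition time, halves Δf:

fs = 1000.0                        # sampling rate in S/s
for N in (500, 1000, 2000):
    df = fs / N                    # Equation 4-4: frequency resolution
    T = N / fs                     # total acquisition time N * dt
    print(N, T, df)                # 500 samples -> 2 Hz, 2000 samples -> 0.5 Hz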
Figure 4-5. A Four-Sample Time-Domain Sequence with x[0] = x[1] = x[2] = x[3] = +1 V
The DFT calculation makes use of Euler's identity, which is given by the following equation.

e^(−jθ) = cos(θ) − j·sin(θ)

If you use Equation 4-3 to calculate the DFT of the sequence shown in Figure 4-5 and use Euler's identity, you get the following equations.
X[0] = Σ(i = 0 to N−1) x_i e^(−j2πi·0/N) = x[0] + x[1] + x[2] + x[3] = 4

X[1] = x[0] + x[1](cos(π/2) − j·sin(π/2)) + x[2](cos(π) − j·sin(π)) + x[3](cos(3π/2) − j·sin(3π/2)) = (1 − j − 1 + j) = 0

X[2] = x[0] + x[1](cos(π) − j·sin(π)) + x[2](cos(2π) − j·sin(2π)) + x[3](cos(3π) − j·sin(3π)) = (1 − 1 + 1 − 1) = 0

X[3] = x[0] + x[1](cos(3π/2) − j·sin(3π/2)) + x[2](cos(3π) − j·sin(3π)) + x[3](cos(9π/2) − j·sin(9π/2)) = (1 + j − 1 − j) = 0

where X[0] is the DC component and N is the number of samples.
Therefore, except for the DC component, all other values for the sequence shown in Figure 4-5 are zero, which is as expected. However, the calculated value of X[0] depends on the value of N. Because in this example N = 4, X[0] = 4. If N = 10, the calculation results in X[0] = 10. This dependency of X[k] on N also occurs for the other frequency components. Therefore, you usually divide the DFT output by N to obtain the correct magnitude of the frequency component.
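A two-line NumPy check of this normalization, using the constant sequence of Figure 4-5:

import numpy as np

x = np.ones(4)                     # the four-sample sequence of Figure 4-5
X = np.fft.fft(x)                  # X[0] = 4, all other components 0
amplitudes = np.abs(X) / len(x)    # dividing by N recovers the 1 V amplitude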
The magnitude of the DFT of a real signal has even symmetry, and the phase has odd symmetry.
Because of this symmetry, the N samples of the DFT contain repetition of information. Because of this repetition of information, only half of the samples of the DFT actually need to be computed or displayed because you can obtain the other half from this repetition. If the input signal is complex, the DFT is asymmetrical, and you cannot use only half of the samples to obtain the other half.
Table 4-1. X[p] for N = 8

X[0]: DC
X[1]: Δf
X[2]: 2Δf
X[3]: 3Δf
X[4]: 4Δf (Nyquist frequency)
X[5]: −3Δf
X[6]: −2Δf
X[7]: −Δf
The negative entries in the second column beyond the Nyquist frequency represent negative frequencies, that is, those elements with an index value greater than p = N/2. For N = 8, X[1] and X[7] have the same magnitude; X[2] and X[6] have the same magnitude; and X[3] and X[5] have the same magnitude. The difference is that X[1], X[2], and X[3] correspond to positive frequency components, while X[5], X[6], and X[7] correspond to negative frequency components. X[4] is at the Nyquist frequency. Figure 4-7 illustrates the complex output sequence X for N = 8.
A representation where you see the positive and negative frequencies is the two-sided transform. When N is odd, there is no component at the Nyquist frequency. Table 4-2 lists the values of f for X[p] when N = 7 and p = (N−1)/2 = (7−1)/2 = 3.
Table 4-2. X[p] for N = 7

X[0]: DC
X[1]: Δf
X[2]: 2Δf
X[3]: 3Δf
X[4]: −3Δf
X[5]: −2Δf
X[6]: −Δf
For N = 7, X[1] and X[6] have the same magnitude; X[2] and X[5] have the same magnitude; and X[3] and X[4] have the same magnitude. However, X[1], X[2], and X[3] correspond to positive frequencies, while X[4], X[5], and X[6] correspond to negative frequencies. Because N is odd, there is no component at the Nyquist frequency.
Figure 4-8. FFT Output Sequence Layout: DC, Positive Frequencies, Negative Frequencies
Figure 4-8 also shows a two-sided transform because it represents the positive and negative frequencies.
FFT Fundamentals
Directly implementing the DFT on N data samples requires approximately N² complex operations and is a time-consuming process. The FFT is a fast algorithm for calculating the DFT. The following equation defines the DFT.

X(k) = Σ(n = 0 to N−1) x(n) e^(−j2πnk/N)
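The following NumPy sketch implements the N² direct computation as a matrix product and checks it against the FFT; both produce the same X(k), but the FFT does so in far fewer operations.

import numpy as np

def dft_direct(x):
    # Direct DFT: builds the full N x N matrix, roughly N^2 complex operations.
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

x = np.random.randn(256)
assert np.allclose(dft_direct(x), np.fft.fft(x))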
The following measurements comprise the basic functions for FFT-based signal analysis:
- FFT
- Power spectrum
- Cross power spectrum
You can use the basic functions as the building blocks for creating additional measurement functions, such as the frequency response, impulse response, coherence, amplitude spectrum, and phase spectrum.
The FFT and the power spectrum are useful for measuring the frequency content of stationary or transient signals. The FFT produces the average frequency content of a signal over the total acquisition. Therefore, use the FFT for stationary signal analysis or in cases where you need only the average energy at each frequency line. An FFT is equivalent to a set of parallel filters of bandwidth Δf centered at each frequency increment from DC to (fs/2) − (fs/N). Therefore, frequency lines also are known as frequency bins or FFT bins. Refer to the Power Spectrum section of this chapter for more information about the power spectrum.
Setting k = 0 in the DFT equation shows that the DC component is the sum of the samples.

X(0) = Σ(n = 0 to N−1) x(n) e^(−j2πn(0)/N) = Σ(n = 0 to N−1) x(n)
The DC component is the dot product of x(n) with [cos(0) − j·sin(0)], or with 1.0. The first bin, or frequency component, is the dot product of x(n) with cos(2πn/N) − j·sin(2πn/N). Here, cos(2πn/N) is a single cycle of the cosine wave, and sin(2πn/N) is a single cycle of the sine wave. In general, bin k is the dot product of x(n) with k cycles of the cosine wave for the real part of X(k) and with k cycles of the sine wave for the imaginary part of X(k). The use of the FFT for frequency analysis implies two important relationships. The first relationship links the highest frequency that can be analyzed to the sampling frequency and is given by the following equation.

F_max = fs/2

where F_max is the highest frequency that can be analyzed and fs is the sampling frequency. Refer to the Windowing section of this chapter for more information about F_max.
The second relationship links the frequency resolution to the total acquisition time, which is related to the sampling frequency and the block size of the FFT, and is given by the following equation.

Δf = 1/T = fs/N

where Δf is the frequency resolution, T is the acquisition time, fs is the sampling frequency, and N is the block size of the FFT.
The FFT-based VIs also compute the DFT efficiently when the input sequence size is factorable into small prime numbers, as shown in Equation 4-5.

N = 2^m × 3^k × 5^j,    for m, k, j = 0, 1, 2, 3, …    (4-5)
For the input sequence size defined by Equation 4-5, the FFT-based VIs can compute the DFT with speeds comparable to an FFT whose input sequence size is a power of two. Common input sequence sizes that are factorable as the product of small prime numbers include 480, 640, 1,000, and 2,000.
Zero Padding
Zero padding is a technique typically employed to make the size of the input sequence equal to a power of two. In zero padding, you add zeros to the end of the input sequence so that the total number of samples is equal to the next higher power of two. For example, if you have 10 samples of a signal, you can add six zeros to make the total number of samples equal to 16, or 2^4, which is a power of two. Figure 4-9 illustrates padding 10 samples of a signal with zeros to make the total number of samples equal 16.
The addition of zeros to the end of the time-domain waveform does not improve the underlying frequency resolution associated with the time-domain signal. The only way to improve the frequency resolution of the time-domain signal is to increase the acquisition time and acquire longer time records. In addition to making the total number of samples a power of two so that faster computation is made possible by using the FFT, zero padding can lead to an interpolated FFT result, which can produce a higher display resolution.
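A minimal sketch of the example above: padding 10 samples to 16 yields more display bins, but the underlying resolution is still fixed by the original 10-sample acquisition.

import numpy as np

x = np.random.randn(10)                      # 10 acquired samples
padded = np.concatenate([x, np.zeros(6)])    # pad to 16 = 2**4 samples
X = np.fft.fft(padded)                       # 16 bins: an interpolated spectrum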
FFT VI
The polymorphic FFT VI computes the FFT of a signal and has two instancesReal FFT and Complex FFT. The difference between the two instances is that the Real FFT instance computes the FFT of a real-valued signal, whereas the Complex FFT instance computes the FFT of a complex-valued signal. However, the outputs of both instances are complex. Most real-world signals are real valued. Therefore, you can use the Real FFT instance for most applications. You also can use the Complex FFT instance by setting the imaginary part of the signal to zero. An example of an application where you use the Complex FFT instance is when the signal consists of both a real and an imaginary component. A signal consisting of a real and an imaginary component occurs frequently
in the field of telecommunications, where you modulate a waveform by a complex exponential. The process of modulation by a complex exponential results in a complex signal, as shown in Figure 4-10.
Figure 4-10. Modulation of x(t) by the Complex Exponential exp(jωt)
Figure 4-12 shows the display and Δf that the VI in Figure 4-11 returns.
Two other common ways of presenting frequency information are displaying the DC component in the center and displaying one-sided spectrums. Refer to the Two-Sided, DC-Centered FFT section of this chapter for information about displaying the DC component in the center. Refer to the Power Spectrum section of this chapter for information about displaying one-sided spectrums.
Two-Sided, DC-Centered FFT
Shifting a signal in frequency by f0 corresponds to multiplying the time-domain signal by a complex exponential, producing the Fourier transform pair shown in the following relationship.

x(t) e^(j2πf0·t) ↔ X(f − f0)

The Nyquist frequency is given by the following equation.

f_N = fs/2 = 1/(2Δt)

f0 is set to the index corresponding to f_N because causing the DC component to appear in the location of the Nyquist component requires a frequency shift equal to f_N. Setting f0 to the index corresponding to f_N results in the discrete Fourier transform pair shown in the following relationship.

x_i e^(jπi) ↔ X_(k − n/2)

where n is the number of elements in the discrete sequence, x_i is the time-domain sequence, and X_k is the frequency-domain representation of x_i. Expanding the exponential term in the time-domain sequence produces the following equation.

e^(jπi) = cos(πi) + j·sin(πi) = { +1 if i is even; −1 if i is odd }    (4-6)

Equation 4-6 represents a sequence of alternating +1 and −1. Equation 4-6 means that negating the odd elements of the original time-domain sequence and performing an FFT on the new sequence produces a spectrum whose DC component appears in the center of the sequence.
Therefore, if the original input sequence is

X = {x0, x1, x2, x3, …, x(n−1)}

then the sequence

Y = {x0, −x1, x2, −x3, …}    (4-7)

generates a DC-centered spectrum.
In Figure 4-13, the For Loop iterates through the input sequence, alternately multiplying array elements by 1.0 and −1.0, until it processes the entire input array. Figure 4-14 shows the block diagram of a VI that generates a time-domain sequence and uses the Nyquist Shift and Power Spectrum VIs to produce a DC-centered spectrum.
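In NumPy, the Nyquist Shift preprocessing reduces to one multiplication by an alternating-sign sequence; np.fft.fftshift gives the same DC-centered ordering by rotating the FFT output instead.

import numpy as np

x = np.random.randn(256)                             # even-sized time-domain sequence
y = x * (-1.0) ** np.arange(len(x))                  # negate the odd elements (Eq. 4-7)
centered = np.abs(np.fft.fft(y)) ** 2                # DC now sits at the center bin
same = np.fft.fftshift(np.abs(np.fft.fft(x)) ** 2)   # equivalent rotation
assert np.allclose(centered, same)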
In the VI in Figure 4-14, the Nyquist Shift VI preprocesses the time-domain sequence by negating every other element in the sequence. The Power Spectrum VI transforms the data into the frequency domain. To display the frequency axis of the processed data correctly, you must supply x0, which is the x-axis value of the initial frequency bin. For a DC-centered spectrum, the following equation computes x0.

x0 = −n/2

Figure 4-15 shows the time-domain sequence and DC-centered spectrum the VI in Figure 4-14 returns.
In the DC-centered spectrum display in Figure 4-15, the DC component appears in the center of the display at f = 0. The overall format resembles that commonly found in tables of Fourier transform pairs. You can create DC-centered spectra for even-sized input sequences by negating the odd elements of the input sequence. You cannot create DC-centered spectra by directly negating the odd elements of an input time-domain sequence containing an odd number of elements because the Nyquist frequency appears between two frequency bins. To create DC-centered spectra for odd-sized input sequences, you must rotate the FFT arrays by the amount given in the following relationship.

(n − 1)/2
For a DC-centered spectrum created from an odd-sized input sequence, the following equation computes x0.

x0 = −(n − 1)/2
Power Spectrum
As described in the Magnitude and Phase Information section of this chapter, the DFT or FFT of a real signal is a complex number, having a real and an imaginary part. You can obtain the power in each frequency component represented by the DFT or FFT by squaring the magnitude of that frequency component. Thus, the power in the kth frequency component, that is, the kth element of the DFT or FFT, is given by the following equation.

power = |X[k]|²

where |X[k]| is the magnitude of the frequency component. Refer to the Magnitude and Phase Information section of this chapter for information about computing the magnitude of the frequency components. The power spectrum returns an array that contains the two-sided power spectrum of a time-domain signal and that shows the power in each of the frequency components. You can use Equation 4-8 to compute the two-sided power spectrum from the FFT.

Power Spectrum S_AA(f) = FFT*(A) × FFT(A) / N²    (4-8)
where FFT*(A) denotes the complex conjugate of FFT(A). The complex conjugate of FFT(A) results from negating the imaginary part of FFT(A). The values of the elements in the power spectrum array are proportional to the magnitude squared of each frequency component making up the time-domain signal. Because the DFT or FFT of a real signal is symmetric, the power at a positive frequency of kΔf is the same as the power at the corresponding negative frequency of −kΔf, excluding DC and Nyquist components. The total power in the DC component is |X[0]|². The total power in the Nyquist component is |X[N/2]|².
A plot of the two-sided power spectrum shows negative and positive frequency components at a height given by the following relationship.

A_k²/4

where A_k is the peak amplitude of the sinusoidal component at frequency k. The DC component has a height of A_0², where A_0 is the amplitude of the DC component in the signal. Figure 4-16 shows the power spectrum result from a time-domain signal that consists of a 3 Vrms sine wave at 128 Hz, a 3 Vrms sine wave at 256 Hz, and a DC component of 2 VDC. A 3 Vrms sine wave has a peak voltage of 3.0 × √2, or about 4.2426 V. The power spectrum is computed from the basic FFT function, as shown in Equation 4-8.
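The following NumPy sketch builds the same test signal and evaluates Equation 4-8; the two-sided spectrum shows 9/2 = 4.5 Vrms² at ±128 Hz and ±256 Hz and 4 Vrms² at DC, matching the heights given above.

import numpy as np

fs, N = 1024.0, 1024
t = np.arange(N) / fs
a = 3.0 * np.sqrt(2)                               # 3 Vrms -> about 4.2426 V peak
sig = 2.0 + a * np.sin(2 * np.pi * 128 * t) + a * np.sin(2 * np.pi * 256 * t)
X = np.fft.fft(sig)
Saa = (X * np.conj(X)).real / N ** 2               # Equation 4-8
# Saa[128] = Saa[N - 128] = 4.5; Saa[256] = Saa[N - 256] = 4.5; Saa[0] = 4.0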
To convert the two-sided power spectrum to the single-sided form, discard the second half of the array and multiply every point except for DC by two, as shown in the following equations.

G_AA(i) = S_AA(i),    i = 0 (DC)
G_AA(i) = 2 × S_AA(i),    i = 1 to N/2 − 1    (4-9)

where S_AA(i) is the two-sided power spectrum, G_AA(i) is the single-sided power spectrum, and N is the length of the two-sided power spectrum. You discard the remainder of the two-sided power spectrum S_AA, N/2 through N − 1. The non-DC values in the single-sided spectrum have a height given by the following relationship.

A_k²/2

where A_k/√2 is the root mean square (rms) amplitude of the sinusoidal component at frequency k. The units of a power spectrum are often quantity squared rms, where quantity is the unit of the time-domain signal. For example, the single-sided power spectrum of a voltage waveform is in volts rms squared, Vrms². Figure 4-17 shows the single-sided spectrum of the signal whose two-sided spectrum Figure 4-16 shows.
In Figure 4-17, the height of the non-DC frequency components is twice the height of the non-DC frequency component in Figure 4-16. Also, the spectrum in Figure 4-17 stops at half the frequency of that in Figure 4-16.
You can estimate the frequency of a peak to a resolution finer than Δf by computing the power-weighted average of the frequencies around a detected peak in the power spectrum, as shown in the following equation.

Estimated Frequency = [ Σ(i = j−3 to j+3) Power(i) × (i·Δf) ] / [ Σ(i = j−3 to j+3) Power(i) ]

where j is the array index of the apparent peak of the frequency of interest. The span j ± 3 is reasonable because it represents a spread wider than the main lobes of the smoothing windows listed in Table 5-3, Correction Factors and Worst-Case Amplitude Errors for Smoothing Windows, of Chapter 5, Smoothing Windows.
You can estimate the power in Vrms² of a discrete peak frequency component by summing the power in the bins around the peak. In other words, you compute the area under the peak. You can use the following equation to estimate the power of a discrete peak frequency component.

Estimated Power = Σ(i = j−3 to j+3) Power(i)    (4-10)
Equation 4-10 is valid only for a spectrum made up of discrete frequency components. It is not valid for a continuous spectrum. Also, if two or more frequency peaks are within six lines of each other, they contribute to inflating the estimated powers and skewing the actual frequencies. You can reduce this effect by decreasing the number of lines spanned by Equation 4-10. If two peaks are within six lines of each other, it is likely that they are already interfering with one another because of spectral leakage. If you want the total power in a given frequency range, sum the power in each bin included in the frequency range and divide by the noise power bandwidth of the smoothing window. Refer to Chapter 5, Smoothing Windows, for information about the noise power bandwidth of smoothing windows.
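Both estimates are simple array reductions. The following NumPy sketch, with a hypothetical peak_estimate helper, applies the weighted-average frequency formula and Equation 4-10 over the bins j − 3 to j + 3.

import numpy as np

def peak_estimate(power, j, df):
    # Weighted-average frequency and summed power over bins j-3 .. j+3.
    idx = np.arange(j - 3, j + 4)
    p = power[idx]
    est_freq = np.sum(p * idx * df) / np.sum(p)   # weighted-average frequency
    est_power = np.sum(p)                         # Equation 4-10: area under the peak
    return est_freq, est_power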
4-26
ni.com
Chapter 4
Frequency Analysis
$$\text{phase} = \arctan\left(\frac{\text{imaginary part}}{\text{real part}}\right) \tag{4-11}$$

where the arctangent function returns values of phase between −π and +π, a full range of 2π radians. The rectangular-to-polar conversion function operates on the scaled complex spectrum given by the following relationship.

$$\frac{\text{FFT}(A)}{N} \tag{4-13}$$
Using the rectangular-to-polar conversion function to convert the complex spectrum to its magnitude (r) and phase (φ) is equivalent to using Equations 4-11 and 4-12. The two-sided amplitude spectrum actually shows half the peak amplitude at the positive and negative frequencies. To convert to the single-sided form, multiply each frequency component, other than DC, by two and discard the second half of the array. The units of the single-sided amplitude spectrum are then in quantity peak and give the peak amplitude of each sinusoidal component making up the time-domain signal. To obtain the single-sided phase spectrum, discard the second half of the array.
In these equations, i is the frequency line number, or array index, of the FFT of A. The magnitude in Vrms gives the rms voltage of each sinusoidal component of the time-domain signal. The amplitude spectrum is closely related to the power spectrum. You can compute the single-sided power spectrum by squaring the single-sided rms amplitude spectrum. Conversely, you can compute the amplitude spectrum by taking the square root of the power spectrum. Refer to the Power Spectrum section of this chapter for information about computing the power spectrum.
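A short NumPy sketch of the conversions described above, assuming the FFT(A)/N scaling of Equation 4-13; the function name is illustrative.

```python
import numpy as np

def amplitude_phase(x):
    """Single-sided amplitude (quantity peak) and phase spectra."""
    N = len(x)
    spectrum = np.fft.fft(x) / N       # Equation 4-13: scale the FFT by N
    half = spectrum[:N // 2]           # discard the second half of the array
    amplitude = np.abs(half)           # rectangular-to-polar: magnitude r
    phase = np.angle(half)             # rectangular-to-polar: phase, in radians
    amplitude[1:] *= 2.0               # double every component except DC
    return amplitude, phase
```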
Use the following equation to view the phase spectrum in degrees.

$$\text{Phase Spectrum in Degrees} = \frac{180}{\pi} \cdot \text{Phase}\left[\text{FFT}(A)\right]$$
The frequency response of a system is described by the magnitude, |H|, and phase, ∠H, at each frequency. The gain of the system is the same as its magnitude and is the ratio of the output magnitude to the input magnitude at each frequency. The phase of the system is the difference between the output phase and the input phase at each frequency.
[Figure 4-19: a stimulus signal (A) is applied to the network under test, and the response signal (B) is measured.]
In Figure 4-19, you apply a stimulus to the network under test and measure the stimulus and response signals. From the measured stimulus and response signals, you compute the frequency response function. The frequency response function gives the gain and phase versus frequency of a network. You use Equation 4-14 to compute the frequency response function.

$$H(f) = \frac{S_{AB}(f)}{S_{AA}(f)} \tag{4-14}$$

where H(f) is the frequency response function, A is the stimulus signal, B is the response signal, $S_{AB}(f)$ is the cross power spectrum of A and B, and $S_{AA}(f)$ is the power spectrum of A. The frequency response function is in two-sided complex form, having real and imaginary parts. To convert to the frequency response gain and the frequency response phase, use the rectangular-to-polar conversion function from Equation 4-13. To convert to single-sided form, discard the second half of the response function array.

You might want to take several frequency response function readings and compute the average. Complete the following steps to compute the average frequency response function.
1. Compute the average $S_{AB}(f)$ by finding the sum in complex form and dividing the sum by the number of measurements.
2. Compute the average $S_{AA}(f)$ by finding the sum and dividing the sum by the number of measurements.
3. Substitute the average $S_{AB}(f)$ and the average $S_{AA}(f)$ into Equation 4-14.
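The following sketch implements this averaging procedure. The spectrum definitions S_AB = X*·Y and S_AA = X*·X are conventional assumptions here (the excerpt does not spell them out); any constant scaling cancels in the ratio.

```python
import numpy as np

def averaged_frf(stimulus_records, response_records):
    """Average frequency response function per Equation 4-14.

    Sums S_AB(f) and S_AA(f) in complex form over all measurements,
    then divides (the averaging constants cancel in the ratio).
    """
    N = len(stimulus_records[0])
    Sab = np.zeros(N, dtype=complex)
    Saa = np.zeros(N, dtype=complex)
    for a, b in zip(stimulus_records, response_records):
        A = np.fft.fft(a)
        B = np.fft.fft(b)
        Sab += np.conj(A) * B        # cross power spectrum of A and B
        Saa += np.conj(A) * A        # power spectrum of A
    H = Sab / Saa                    # Equation 4-14
    return H[:N // 2]                # single-sided form: discard second half
```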
Coherence Function
The coherence function provides an indication of the quality of the frequency response function measurement and of how much of the response energy is correlated to the stimulus energy. If another signal is present in the response, either from excessive noise or from another signal source, the quality of the network response measurement is poor. You can use the coherence function to identify both excessive noise and which of multiple signal sources are contributing to the response signal. Use Equation 4-15 to compute the coherence function.

$$\gamma^2(f) = \frac{\left(\text{Magnitude of the Average } S_{AB}(f)\right)^2}{\left(\text{Average } S_{AA}(f)\right)\left(\text{Average } S_{BB}(f)\right)} \tag{4-15}$$
where SAB is the cross power spectrum, SAA is the power spectrum of A, and SBB is the power spectrum of B. Equation 4-15 yields a coherence factor with a value between zero and one versus frequency. A value of zero for a given frequency line indicates no correlation between the response and the stimulus signal. A value of one for a given frequency line indicates that 100% of the response energy is due to the stimulus signal and that no interference is occurring at that frequency. For a valid result, the coherence function requires an average of two or more readings of the stimulus and response signals. For only one reading, the coherence function registers unity at all frequencies.
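A sketch of Equation 4-15 follows, using the same conventional spectrum definitions as in the frequency response sketch above. As the text notes, with a single record the result is unity at all frequencies.

```python
import numpy as np

def coherence(stimulus_records, response_records):
    """Coherence per Equation 4-15; needs two or more records to be meaningful."""
    N = len(stimulus_records[0])
    Sab = np.zeros(N, dtype=complex)
    Saa = np.zeros(N)
    Sbb = np.zeros(N)
    for a, b in zip(stimulus_records, response_records):
        A = np.fft.fft(a)
        B = np.fft.fft(b)
        Sab += np.conj(A) * B
        Saa += np.abs(A)**2
        Sbb += np.abs(B)**2
    M = len(stimulus_records)
    gamma2 = np.abs(Sab / M)**2 / ((Saa / M) * (Sbb / M))
    return gamma2[:N // 2]           # values between zero and one per line
```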
Windowing
In practical applications, you obtain only a finite number of samples of the signal. The FFT assumes that this time record repeats. If you have an integral number of cycles in your time record, the repetition is smooth at the boundaries. However, in practical applications, you usually have a nonintegral number of cycles. In the case of a nonintegral number of cycles, the repetition results in discontinuities at the boundaries. These artificial discontinuities were not originally present in your signal and result in a smearing or leakage of energy from your actual frequency to all other frequencies. This phenomenon is spectral leakage. The amount of leakage depends on the amplitude of the discontinuity, with a larger amplitude causing more leakage.

A signal that is exactly periodic in the time record is composed of sine waves with exact integral cycles within the time record. Such a perfectly periodic signal has a spectrum with energy contained in exact frequency bins. A signal that is not periodic in the time record has a spectrum with energy split or spread across multiple frequency bins. The FFT spectrum models the time domain as if the time record repeated itself forever. It assumes that the analyzed record is just one period of an infinitely repeating periodic signal.

Because the amount of leakage is dependent on the amplitude of the discontinuity at the boundaries, you can use windowing to reduce the size of the discontinuity and reduce spectral leakage. Windowing consists of multiplying the time-domain signal by another time-domain waveform, known as a window, whose amplitude tapers gradually and smoothly toward zero at the edges. The result is a windowed signal with very small or no discontinuities and therefore reduced spectral leakage. You can choose from among many different types of windows. The one you choose depends on your application and some prior knowledge of the signal you are analyzing. Refer to Chapter 5, Smoothing Windows, for more information about windowing.
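The effect is easy to demonstrate numerically. In this sketch (signal frequency and record length are arbitrary choices), a sine wave with a noninteger number of cycles is transformed with and without a Hann window; the windowed spectrum shows far less energy smeared away from the tone.

```python
import numpy as np

N = 1024
fs = 1024.0
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 100.5 * t)    # 100.5 cycles: noninteger, causes leakage

n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # tapers smoothly to zero at edges

raw = np.abs(np.fft.fft(x))[:N // 2]
windowed = np.abs(np.fft.fft(x * hann))[:N // 2]

# Compare energy far from the tone: the windowed spectrum is much cleaner
print(raw[300], windowed[300])
```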
RMS Averaging
RMS averaging reduces signal fluctuations but not the noise floor. The noise floor is not reduced because RMS averaging averages the energy, or power, of the signal. RMS averaging also causes averaged RMS quantities of single-channel measurements to have zero phase. RMS averaging for dual-channel measurements preserves important phase information. RMS-averaged measurements are computed according to the following equations.

$$\text{FFT spectrum} = \sqrt{\langle X^{*} \cdot X \rangle}$$

$$\text{power spectrum} = \langle X^{*} \cdot X \rangle$$

$$\text{cross spectrum} = \langle X^{*} \cdot Y \rangle$$

$$\text{frequency response:} \quad H_1 = \frac{\langle X^{*} \cdot Y \rangle}{\langle X^{*} \cdot X \rangle}, \qquad H_2 = \frac{\langle Y^{*} \cdot Y \rangle}{\langle Y^{*} \cdot X \rangle}, \qquad H_3 = \frac{H_1 + H_2}{2}$$

where X is the complex FFT of signal x (stimulus), Y is the complex FFT of signal y (response), X* is the complex conjugate of X, Y* is the complex conjugate of Y, and ⟨X⟩ is the average of X, real and imaginary parts being averaged separately.
Vector Averaging
Vector averaging eliminates noise from synchronous signals. Vector averaging computes the average of complex quantities directly: the real part is averaged separately from the imaginary part. Averaging the real part separately from the imaginary part can reduce the noise floor for random signals because random signals are not phase coherent from one time record to the next. Because the parts are averaged separately, vector averaging reduces noise but usually requires a trigger. Vector-averaged measurements are computed according to the following equations.

$$\text{FFT spectrum} = \langle X \rangle$$

$$\text{power spectrum} = \langle X \rangle^{*} \cdot \langle X \rangle$$

$$\text{cross spectrum} = \langle X \rangle^{*} \cdot \langle Y \rangle$$

$$\text{frequency response} = \frac{\langle Y \rangle}{\langle X \rangle} \qquad (H_1 = H_2 = H_3)$$

where X is the complex FFT of signal x (stimulus), Y is the complex FFT of signal y (response), X* is the complex conjugate of X, and ⟨X⟩ is the average of X, real and imaginary parts being averaged separately.
Peak Hold
Peak hold averaging retains the peak levels of the averaged quantities. Peak hold averaging is performed at each frequency line separately, retaining peak levels from one FFT record to the next.

$$\text{FFT spectrum} = \sqrt{\text{MAX}(X^{*} \cdot X)}$$

$$\text{power spectrum} = \text{MAX}(X^{*} \cdot X)$$

where X is the complex FFT of signal x (stimulus) and X* is the complex conjugate of X.
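The three averaging modes can be sketched as follows; the FFT scaling and function name are illustrative choices.

```python
import numpy as np

def spectrum_averaging(records):
    """RMS averaging, vector averaging, and peak hold of power spectra.

    records -- list of equal-length time records (one per FFT block).
    """
    ffts = [np.fft.fft(r) / len(r) for r in records]
    powers = [np.abs(X)**2 for X in ffts]          # X* . X per record

    rms_avg = np.mean(powers, axis=0)              # <X* . X>: noise floor stays
    X_avg = np.mean(ffts, axis=0)                  # <X>: real and imaginary
                                                   # parts averaged separately
    vector_avg = np.abs(X_avg)**2                  # <X>* . <X>: noise floor drops
    peak_hold = np.max(powers, axis=0)             # MAX(X* . X) per frequency line
    return rms_avg, vector_avg, peak_hold
```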
Weighting
When performing RMS or vector averaging, you can weight each new spectral record using either linear or exponential weighting. Linear weighting combines N spectral records with equal weighting. When the number of averages is completed, the analyzer stops averaging and presents the averaged results. Exponential weighting emphasizes new spectral data more than old and is a continuous process. Weighting is applied according to the following equation.

$$Y_i = \frac{N-1}{N}\,Y_{i-1} + \frac{1}{N}\,X_i$$

where $X_i$ is the result of the analysis performed on the ith block, $Y_i$ is the result of the averaging process from $X_1$ to $X_i$, N = i for linear weighting, and N is a constant for exponential weighting (N = 1 for i = 1).
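A sketch of the weighting recursion; the constant used for exponential weighting (n_const) is an arbitrary illustrative choice.

```python
def weighted_average(blocks, mode="linear", n_const=10):
    """Running average Y_i = ((N-1)/N) * Y_(i-1) + (1/N) * X_i."""
    y = None
    for i, x in enumerate(blocks, start=1):
        if mode == "linear":
            N = i                        # equal weighting of all records
        else:
            N = 1 if i == 1 else n_const # exponential: N constant, N = 1 at i = 1
        y = x if y is None else (N - 1) / N * y + x / N
    return y
```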
Echo Detection
Echo detection using Hilbert transforms is a common measurement for the analysis of modulation systems. Equation 4-16 describes a time-domain signal. Equation 4-17 yields the Hilbert transform of the time-domain signal.

$$x(t) = A e^{-t/\tau} \cos(2\pi f_0 t) \tag{4-16}$$

$$x_H(t) = A e^{-t/\tau} \sin(2\pi f_0 t) \tag{4-17}$$

where A is the amplitude, $f_0$ is the natural resonant frequency, and τ is the time decay constant. Equation 4-18 yields the natural logarithm of the magnitude of the analytic signal $x_A(t)$.

$$\ln\left|x_A(t)\right| = \ln\left|x(t) + j x_H(t)\right| = -\frac{t}{\tau} + \ln A \tag{4-18}$$
The result from Equation 4-18 has the form of a line with slope m = −1/τ. Therefore, you can extract the time constant of the system by graphing ln|x_A(t)|. Figure 4-20 shows a time-domain signal containing an echo signal.
The following conditions make the echo signal difficult to locate in Figure 4-20:
• The time delay between the source and the echo signal is short relative to the time decay constant of the system.
• The echo amplitude is small compared to the source.
You can make the echo signal visible by plotting the magnitude of xA(t) on a logarithmic scale, as shown in Figure 4-21.
In Figure 4-21, the discontinuity is plainly visible and indicates the location of the time delay of the echo. Figure 4-22 shows a section of the block diagram of the VI used to produce Figures 4-20 and 4-21.
The VI in Figure 4-22 completes the following steps to detect an echo.
1. Processes the input signal with the Fast Hilbert Transform VI to produce the analytic signal x_A(t).
2. Computes the magnitude of x_A(t) with the 1D Rectangular To Polar VI.
3. Computes the natural log of |x_A(t)| to detect the presence of an echo.
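In a text-based environment, the same steps can be sketched with SciPy's Hilbert transform. The signal parameters, echo amplitude, and echo delay below are invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

fs = 10000.0
t = np.arange(0, 0.5, 1 / fs)
tau, f0 = 0.05, 500.0               # decay constant and resonant frequency

x = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)        # source signal
d = int(0.1 * fs)                                        # echo delayed by 0.1 s
x[d:] += 0.05 * np.exp(-t[:-d] / tau) * np.cos(2 * np.pi * f0 * t[:-d])

xa = hilbert(x)                     # step 1: analytic signal x + j*xH
mag = np.abs(xa)                    # step 2: magnitude of xA(t)
log_env = np.log(mag)               # step 3: ln|xA(t)| is a line of slope -1/tau
                                    # with a visible jump at the echo delay
```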
Smoothing Windows
This chapter describes spectral leakage, how to use smoothing windows to decrease spectral leakage, the different types of smoothing windows, how to choose the correct type of smoothing window, the differences between smoothing windows used for spectral analysis and smoothing windows used for filter coefficient design, and the importance of scaling smoothing windows. Applying a smoothing window to a signal is windowing. You can use windowing to complete the following analysis operations:
• Define the duration of the observation.
• Reduce spectral leakage.
• Separate a small amplitude signal from a larger amplitude signal with frequencies very close to each other.
• Design FIR filter coefficients.
The Windows VIs provide a simple method of improving the spectral characteristics of a sampled signal. Use the NI Example Finder to find examples of using the Windows VIs.
Spectral Leakage
According to the Shannon Sampling Theorem, you can completely reconstruct a continuous-time signal from discrete, equally spaced samples if the highest frequency in the time signal is less than half the sampling frequency. Half the sampling frequency equals the Nyquist frequency. The Shannon Sampling Theorem bridges the gap between continuous-time signals and discrete-time signals. Refer to Chapter 1, Introduction to Digital Signal Processing and Analysis in LabVIEW, for more information about the Shannon Sampling Theorem. In practical signal-sampling applications, digitizing a time signal results in a finite record of the signal, even when you carefully observe the Shannon Sampling Theorem and sampling conditions. Even when the data meets the Nyquist criterion, the finite sampling record might cause energy leakage, called spectral leakage. Therefore, even though you use proper signal
acquisition techniques, the measurement might not result in a scaled, single-sided spectrum because of spectral leakage. In spectral leakage, the energy at one frequency appears to leak out into all other frequencies. Spectral leakage results from an assumption in the FFT and DFT algorithms that the time record exactly repeats throughout all time. Thus, signals in a time record are periodic at intervals that correspond to the length of the time record. When you use the FFT or DFT to measure the frequency content of data, the transforms assume that the finite data set is one period of a periodic signal. Therefore, the finiteness of the sampling record results in a truncated waveform with different spectral characteristics from the original continuous-time signal, and the finiteness can introduce sharp transition changes into the measured data. The sharp transitions are discontinuities. Figure 5-1 illustrates discontinuities.
[Figure 5-1: a repeated time record plotted against time, showing one period and the discontinuity at the record boundary.]
The discontinuities shown in Figure 5-1 produce leakage of spectral information. Spectral leakage produces a discrete-time spectrum that appears as a smeared version of the original continuous-time spectrum.
In Figure 5-2, Graph 1 shows the sampled time-domain waveform. Graph 2 shows the periodic time waveform of the sine wave from Graph 1. In Graph 2, the waveform repeats to fulfill the assumption of periodicity for the Fourier transform. Graph 3 shows the spectral representation of the waveform. Because the time record in Graph 2 is periodic with no discontinuities, its spectrum appears in Graph 3 as a single line showing the frequency of the sine wave. The waveform in Graph 2 does not have any discontinuities because the data set is from an integer number of cycles (in this case, one). The following methods are the only methods that guarantee you always acquire an integer number of cycles:
• Sample synchronously with respect to the signal you measure, so that you deliberately acquire an integral number of cycles.
• Capture a transient signal that fits entirely into the time record.
In most measurement applications, however, you cannot guarantee that you are sampling an integer number of cycles. If the time record contains a noninteger number of cycles, spectral leakage occurs because the noninteger cycle frequency component of the signal does not correspond exactly to one of the spectrum frequency lines. Spectral leakage distorts the measurement in such a way that energy from a given frequency component appears to spread over adjacent frequency lines or bins, resulting in a smeared spectrum. You can use smoothing windows to minimize the effects of performing an FFT over a noninteger number of cycles. Because of the assumption of periodicity of the waveform, artificial discontinuities between successive periods occur when you sample a noninteger number of cycles. The artificial discontinuities appear as very high frequencies in the spectrum of the signal, frequencies that are not present in the original signal. The high frequencies of the discontinuities can be much higher than the Nyquist frequency and alias somewhere between 0 and fs/2. Therefore, spectral leakage occurs. The spectrum you obtain by using the DFT or FFT is a smeared version of the spectrum and is not the actual spectrum of the original signal. Figure 5-3 shows a sine wave sampled at a noninteger number of cycles and the Fourier transform of the sine wave.
In Figure 5-3, Graph 1 consists of 1.25 cycles of the sine wave. In Graph 2, the waveform repeats periodically to fulfill the assumption of periodicity for the Fourier transform. Graph 3 shows the spectral representation of the waveform. The energy is spread, or smeared, over a wide range of frequencies. The energy has leaked out of one of the FFT lines and smeared itself into all the other lines, causing spectral leakage. Spectral leakage occurs because of the finite time record of the input signal. To overcome spectral leakage completely, you would need an infinite time record, from −infinity to +infinity, with which the FFT would calculate a single line at the correct frequency. However, waiting for infinite time is not possible in practice. To overcome the limitations of a finite time record, windowing is used to reduce the spectral leakage. In addition to causing amplitude accuracy errors, spectral leakage can obscure adjacent frequency peaks. Figure 5-4 shows the spectrum for two close frequency components when no smoothing window is used and when a Hanning window is used.
[Figure 5-4: spectra of two close frequency components, in dBV versus Hz (roughly 100 Hz to 300 Hz), with no window and with a Hann window applied.]
In Figure 5-4, the second peak stands out more prominently in the windowed signal than it does in the signal with no smoothing window applied.
Windowing Signals
Use smoothing windows to improve the spectral characteristics of a sampled signal. When performing Fourier or spectral analysis on finite-length data, you can use smoothing windows to minimize the discontinuities of truncated waveforms, thus reducing spectral leakage.
The amount of spectral leakage depends on the amplitude of the discontinuity. As the discontinuity becomes larger, spectral leakage increases, and vice versa. Smoothing windows reduce the amplitude of the discontinuities at the boundaries of each period and act like predefined, narrowband, lowpass filters. The process of windowing a signal involves multiplying the time record by a smoothing window of finite length whose amplitude varies smoothly and gradually towards zero at the edges. The length, or time interval, of a smoothing window is defined in terms of number of samples. Multiplication in the time domain is equivalent to convolution in the frequency domain. Therefore, the spectrum of the windowed signal is a convolution of the spectrum of the original signal with the spectrum of the smoothing window. Windowing changes the shape of the signal in the time domain, as well as affecting the spectrum that you see. Figure 5-5 illustrates convolving the original spectrum of a signal with the spectrum of a smoothing window.
[Figure 5-5: the signal spectrum convolved (*) with the window spectrum.]
Even if you do not apply a smoothing window to a signal, a windowing effect still occurs. The acquisition of a finite time record of an input signal produces the effect of multiplying the signal in the time domain by a uniform window. The uniform window has a rectangular shape and uniform height. The multiplication of the input signal in the time domain by the uniform window is equivalent to convolving the spectrum of the signal with
the spectrum of the uniform window in the frequency domain, which has a sinc function characteristic. Figure 5-6 shows the result of applying a Hamming window to a time-domain signal.
In Figure 5-6, the time waveform of the windowed signal gradually tapers to zero at the ends because the Hamming window minimizes the discontinuities along the transition edges of the waveform. Applying a smoothing window to time-domain data before the transform of the data into the frequency domain minimizes spectral leakage. Figure 5-7 shows the effects of the following smoothing windows on a signal:
• None (uniform)
• Hanning
• Flat top
Figure 5-7. Power Spectrum of 1 Vrms Signal at 256 Hz with Uniform, Hanning, and Flat Top Windows
The data set for the signal in Figure 5-7 consists of an integer number of cycles, 256, in a 1,024-point record. If the frequency components of the original signal match a frequency line exactly, as is the case when you acquire an integer number of cycles, you see only the main lobe of the spectrum. The smoothing windows have a main lobe around the frequency of interest. The main lobe is a frequency-domain characteristic of windows. The uniform window has the narrowest lobe. The Hanning and flat top windows introduce some spreading. The flat top window has a broader main lobe than the uniform or Hanning windows. For an integer number of cycles, all smoothing windows yield the same peak amplitude reading and have excellent amplitude accuracy. Side lobes do not appear because the spectrum of the smoothing window approaches zero at f intervals on either side of the main lobe. Figure 5-7 also shows the values at frequency lines of 254 Hz through 258 Hz for each smoothing window. The amplitude error at 256 Hz equals 0 dB for each smoothing window. The graph shows the spectrum values between 240 Hz and 272 Hz. The actual values in the resulting spectrum
array for each smoothing window at 254 Hz through 258 Hz are shown below the graph. Δf equals 1 Hz. If a time record does not contain an integer number of cycles, the continuous spectrum of the smoothing window is shifted from the main lobe center at a fraction of Δf that corresponds to the difference between the frequency component and the FFT line frequencies. This shift causes the side lobes to appear in the spectrum. In addition, amplitude error occurs at the frequency peak because sampling of the main lobe is off center and smears the spectrum. Figure 5-8 shows the effect of spectral leakage on a signal whose data set consists of 256.5 cycles.
Figure 5-8. Power Spectrum of 1 Vrms Signal at 256.5 Hz with Uniform, Hanning, and Flat Top Windows
In Figure 5-8, for a noninteger number of cycles, the Hanning and flat top windows introduce much less spectral leakage than the uniform window. Also, the amplitude error is smaller with the Hanning and flat top windows. The flat top window demonstrates very good amplitude accuracy but has a wider spread and higher side lobes than the Hanning window.
Figure 5-9 shows the block diagram of a VI that measures the windowed and nonwindowed spectrums of a signal composed of the sum of two sinusoids.
Figure 5-9. Measuring the Spectrum of a Signal Composed of the Sum of Two Sinusoids
Figure 5-10 shows the amplitudes and frequencies of the two sinusoids and the measurement results. The frequencies shown are in units of cycles.
Figure 5-10. Windowed and Nonwindowed Spectrums of the Sum of Two Sinusoids
In Figure 5-10, the nonwindowed spectrum shows leakage that is more than 20 dB at the frequency of the smaller sinusoid. You can apply more sophisticated techniques to get a more accurate description of the original time-continuous signal in the frequency domain. However, in most applications, applying a smoothing window is sufficient to obtain a better frequency representation of the signal.
Main Lobe
The center of the main lobe of a smoothing window occurs at each frequency component of the time-domain signal. By convention, the widths of the main lobe at 3 dB and 6 dB below the main lobe peak characterize the shape of the main lobe. The unit of measure for the main lobe width is FFT bins, or frequency lines. The width of the main lobe of the smoothing window spectrum limits the frequency resolution of the windowed signal. Therefore, the ability to distinguish two closely spaced frequency components increases as the main lobe of the smoothing window narrows. However, as the main lobe narrows and spectral resolution improves, the window energy spreads into its side lobes, increasing spectral leakage and decreasing amplitude accuracy. A trade-off occurs between amplitude accuracy and spectral resolution.
Side Lobes
Side lobes occur on each side of the main lobe and approach zero at multiples of fs/N from the main lobe. The side lobe characteristics of the smoothing window directly affect the extent to which adjacent frequency components leak into adjacent frequency bins. The side lobe response of a strong sinusoidal signal can overpower the main lobe response of a nearby weak sinusoidal signal.
Maximum side lobe level and side lobe roll-off rate characterize the side lobes of a smoothing window. The maximum side lobe level is the largest side lobe level in decibels relative to the main lobe peak gain. The side lobe roll-off rate is the asymptotic decay rate in decibels per decade of frequency of the peaks of the side lobes. Table 5-1 lists the characteristics of several smoothing windows.
Table 5-1. Characteristics of Smoothing Windows

Smoothing Window    −3 dB Main Lobe Width (bins)    −6 dB Main Lobe Width (bins)
Uniform (none)      0.88                            1.21
Hanning             1.44                            2.00
Hamming             1.30                            1.81
Blackman-Harris     1.62                            2.27
Exact Blackman      1.61                            2.25
Blackman            1.64                            2.30
Flat Top            2.94                            3.56
Rectangular (None)
The rectangular window has a value of one over its length. The following equation defines the rectangular window.

$$w(n) = 1.0 \quad \text{for } n = 0, 1, 2, \ldots, N-1$$
where N is the length of the window and w is the window value. Applying a rectangular window is equivalent to not using any window because the rectangular function just truncates the signal to within a finite time interval. The rectangular window has the highest amount of spectral leakage. Figure 5-12 shows the rectangular window for N = 32.
The rectangular window is useful for analyzing transients that have a duration shorter than that of the window. Transients are signals that exist only for a short time duration. The rectangular window also is used in order tracking, where the effective sampling rate is proportional to the speed of the shaft in rotating machines. In order tracking, the rectangular window detects the main mode of vibration of the machine and its harmonics.
Hanning
The Hanning window has a shape similar to that of half a cycle of a cosine wave. The following equation defines the Hanning window.

$$w(n) = 0.5 - 0.5\cos\left(\frac{2\pi n}{N}\right) \quad \text{for } n = 0, 1, 2, \ldots, N-1$$
where N is the length of the window and w is the window value. Figure 5-13 shows a Hanning window with N = 32.
The Hanning window is useful for analyzing transients longer than the time duration of the window and for general-purpose applications.
Hamming
The Hamming window is a modified version of the Hanning window. The shape of the Hamming window is similar to that of a cosine wave. The following equation defines the Hamming window.

$$w(n) = 0.54 - 0.46\cos\left(\frac{2\pi n}{N}\right) \quad \text{for } n = 0, 1, 2, \ldots, N-1$$
where N is the length of the window and w is the window value. Figure 5-14 shows a Hamming window with N = 32.
The Hanning and Hamming windows are similar, as shown in Figures 5-13 and 5-14. However, in the time domain, the Hamming window does not get as close to zero near the edges as does the Hanning window.
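Both definitions generate directly from the equations above (note the denominator N, the DFT-even form); a quick sketch confirms the edge behavior just described.

```python
import numpy as np

def hanning(N):
    n = np.arange(N)
    return 0.5 - 0.5 * np.cos(2 * np.pi * n / N)

def hamming(N):
    n = np.arange(N)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / N)

print(hanning(32)[0], hamming(32)[0])   # 0.0 vs 0.08: the Hamming window
                                        # does not reach zero at the edges
```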
Kaiser-Bessel
The Kaiser-Bessel window is a flexible smoothing window whose shape you can modify by adjusting the beta input. Thus, depending on your application, you can change the shape of the window to control the amount of spectral leakage. Figure 5-15 shows the Kaiser-Bessel window for different values of beta.
For small values of beta, the shape is close to that of a rectangular window. Actually, for beta = 0.0, you do get a rectangular window. As you increase beta, the window tapers off more to the sides. The Kaiser-Bessel window is useful for detecting two signals of almost the same frequency but with significantly different amplitudes.
Triangle
The shape of the triangle window is that of a triangle. The following equation defines the triangle window.

$$w(n) = 1 - \left|\frac{2n - N}{N}\right| \quad \text{for } n = 0, 1, 2, \ldots, N-1$$
Flat Top
The flat top window has the best amplitude accuracy of all the smoothing windows at 0.02 dB for signals exactly between integral cycles. Because the flat top window has a wide main lobe, it has poor frequency resolution. The following equation defines the flat top window.
$$w(n) = \sum_{k=0}^{m} (-1)^k a_k \cos(k\omega), \qquad \omega = \frac{2\pi n}{N}$$

where the $a_k$ are the flat top window coefficients.
The flat top window is most useful in accurately measuring the amplitude of single frequency components with little nearby spectral energy in the signal.
Exponential
The shape of the exponential window is that of a decaying exponential. The following equation defines the exponential window.

$$w[n] = e^{\,n\,\frac{\ln(f)}{N-1}} = f^{\,\frac{n}{N-1}} \quad \text{for } n = 0, 1, 2, \ldots, N-1$$

where N is the length of the window, w is the window value, and f is the final value. The initial value of the window is one, and the window gradually decays toward zero. You can adjust the final value of the exponential window to between 0 and 1. Figure 5-18 shows the exponential window for N = 32, with the final value specified as 0.1.
The exponential window is useful for analyzing transient response signals whose duration is longer than the length of the window. The exponential window damps the end of the signal, ensuring that the signal fully decays by the end of the sample block. You can apply the exponential window to signals that decay exponentially, such as the response of structures with light damping that are excited by an impact, such as the impact of a hammer.
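A sketch of the exponential window as defined above:

```python
import numpy as np

def exponential_window(N, final_value):
    """w[n] = final_value ** (n / (N - 1)): starts at 1, decays to final_value."""
    n = np.arange(N)
    return final_value ** (n / (N - 1))

w = exponential_window(32, 0.1)
print(w[0], w[-1])   # 1.0 at the first sample, 0.1 at the last
```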
Spectral Analysis
The smoothing windows designed for spectral analysis must be DFT even. A smoothing window is DFT even if its dot product, or inner product, with integral cycles of sine sequences is identically zero. In other words, the DFT of a DFT-even sequence has no imaginary component. Figures 5-19 and 5-20 show the Hanning window for a sample size of 8 and one cycle of a sine pattern for a sample size of 8.
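You can verify the DFT-even property numerically; this sketch uses the size-8 Hanning window and the one-cycle sine pattern of Figures 5-19 and 5-20.

```python
import numpy as np

N = 8
n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # DFT-even Hanning, sample size 8
sine = np.sin(2 * np.pi * n / N)               # one integral cycle of a sine

print(np.dot(hann, sine))                      # ~0: inner product is zero
print(np.abs(np.fft.fft(hann).imag).max())     # ~0: DFT has no imaginary part
```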
In Figure 5-19, the DFT-even Hanning window is not symmetric about its midpoint. The last point of the window is not equal to its first point, similar to one complete cycle of the sine pattern shown in Figure 5-20. Smoothing windows for spectral analysis are spectral windows and include the following window types:
• Scaled time-domain window
• Hanning window
• Hamming window
• Triangle window
• Blackman window
• Exact Blackman window
• Blackman-Harris window
• Flat top window
• Kaiser-Bessel window
• General cosine window
• Cosine tapered window
where N is the length of the window and w is the window value. Equation 5-2 defines a symmetrical Hanning window for filter coefficient design.

$$w[i] = 0.5\left(1 - \cos\left(\frac{2\pi i}{N-1}\right)\right) \quad \text{for } i = 0, 1, 2, \ldots, N-1 \tag{5-2}$$

where N is the length of the window and w is the window value. By modifying a spectral window, as shown in Equation 5-2, you can define a symmetrical window for designing filter coefficients. Refer to Chapter 3, Digital Filtering, for more information about designing digital filters.
If you want to accurately measure the amplitude of the component in a given frequency bin, choose a smoothing window with a wide main lobe. If the signal spectrum is rather flat or broadband in frequency content, use the uniform window, or no window. In general, the Hanning window is satisfactory in 95% of cases. It has good frequency resolution and reduced spectral leakage. If you do not know the nature of the signal but you want to apply a smoothing window, start with the Hanning window. Table 5-2 lists different types of signals and the appropriate windows that you can use with them.
Table 5-2. Signals and Windows
Type of Signal                                                                          Window
Transients whose duration is shorter than the length of the window                      Rectangular
Transients whose duration is longer than the length of the window                       Exponential, Hanning
General-purpose applications                                                            Hanning
Spectral analysis (frequency-response measurements)                                     Hanning (random excitation), Rectangular (pseudorandom excitation)
Separation of two tones with frequencies very close to each other
  but with widely differing amplitudes                                                  Kaiser-Bessel
Separation of two tones with frequencies very close to each other
  but with almost equal amplitudes                                                      Rectangular
Accurate single-tone amplitude measurements                                             Flat top
Sine wave or combination of sine waves                                                  Hanning
Sine wave and amplitude accuracy is important                                           Flat top
Narrowband random signal (vibration data)                                               Hanning
Broadband random (white noise)                                                          Uniform
Closely spaced sine waves                                                               Uniform, Hamming
Excitation signals (hammer blow)                                                        Force
Response signals                                                                        Exponential
Unknown content                                                                         Hanning
Initially, you might not have enough information about the signal to select the most appropriate smoothing window for the signal. You might need to experiment with different smoothing windows to find the best one. Always compare the performance of different smoothing windows to find the best one for the application.
Table 5-3. Correction Factors and Worst-Case Amplitude Errors for Smoothing Windows
Distortion Measurements
This chapter describes harmonic distortion, total harmonic distortion (THD), signal noise and distortion (SINAD), and when to use distortion measurements.
Defining Distortion
Applying a pure single-frequency sine wave to a perfectly linear system produces an output signal having the same frequency as that of the input sine wave. However, the output signal might have a different amplitude and/or phase than the input sine wave. Also, when you apply a composite signal consisting of several sine waves at the input, the output signal consists of the same frequencies but different amplitudes and/or phases. Many real-world systems act as nonlinear systems when their input limits are exceeded, resulting in distorted output signals. If the input limits of a system are exceeded, the output consists of one or more frequencies that did not originally exist at the input. For example, if the input to a nonlinear system consists of two frequencies f1 and f2, the frequencies at the output might have the following components:
• f1 and harmonics, or integer multiples, of f1
• f2 and harmonics of f2
• Sums and differences of f1, f2, and their harmonics
The number of new frequencies at the output, their corresponding amplitudes, and their relationships with respect to the original frequencies vary depending on the transfer function. Distortion measurements quantify the degree of nonlinearity of a system. Common distortion measurements include the following measurements:
• Total harmonic distortion (THD)
• Total harmonic distortion + noise (THD + N)
• Signal noise and distortion (SINAD)
• Intermodulation distortion
Application Areas
You can make distortion measurements for many devices, such as A/D and D/A converters, audio processing devices, analog tape recorders, cellular phones, radios, televisions, stereos, and loudspeakers. Measurements of harmonics often provide a good indication of the cause of the nonlinearity of a system. For example, nonlinearities that are asymmetrical around zero produce mainly even harmonics. Nonlinearities symmetrical around zero produce mainly odd harmonics. You can use distortion measurements to diagnose faults such as bad solder joints, torn speaker cones, and incorrectly installed components. However, nonlinearities are not always undesirable. For example, many musical sounds are produced specifically by driving a device into its nonlinear region.
Harmonic Distortion
When a signal x(t) of a particular frequency f1 passes through a nonlinear system, the output of the system consists of f1 and its harmonics. The following expression describes the relationship between f1 and its harmonics.

$$f_1,\; f_2 = 2f_1,\; f_3 = 3f_1,\; f_4 = 4f_1,\; \ldots,\; f_n = nf_1$$

The degree of nonlinearity of the system determines the number of harmonics and their corresponding amplitudes the system generates. In general, as the nonlinearity of a system increases, the amplitudes of the harmonics become higher. As the nonlinearity of a system decreases, the amplitudes of the harmonics become lower. Figure 6-1 illustrates an example of a nonlinear system where the output y(t) is the cube of the input signal x(t).
[Figure 6-1: the input cos(ωt) passes through a cubing system to produce the output cos³(ωt).]
The following equation defines the input for the system shown in Figure 6-1.

$$x(t) = \cos(\omega t)$$
Equation 6-1 defines the output of the system shown in Figure 6-1.

$$y(t) = x^3(t) = 0.5\cos(\omega t) + 0.25\left[\cos(\omega t) + \cos(3\omega t)\right] \tag{6-1}$$

In Equation 6-1, the output contains not only the input fundamental frequency ω but also the third harmonic 3ω. A common cause of harmonic distortion is clipping. Clipping occurs when a system is driven beyond its capabilities. Symmetrical clipping results in odd harmonics. Asymmetrical clipping creates both even and odd harmonics.
THD
To determine the total amount of nonlinear distortion a system introduces, also known as total harmonic distortion (THD), measure the amplitudes of the harmonics the system introduces relative to the amplitude of the fundamental frequency. The following equation yields THD.

$$\text{THD} = \frac{\sqrt{A_2^2 + A_3^2 + A_4^2 + \cdots}}{A_1}$$

where A1 is the amplitude of the fundamental frequency, A2 is the amplitude of the second harmonic, A3 is the amplitude of the third harmonic, A4 is the amplitude of the fourth harmonic, and so on. You usually report the results of a THD measurement in terms of the highest-order harmonic present in the measurement, such as THD through the seventh harmonic. The following equation yields the percentage total harmonic distortion (%THD).

$$\%\text{THD} = 100 \cdot \frac{\sqrt{A_2^2 + A_3^2 + A_4^2 + \cdots}}{A_1}$$
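The %THD computation maps directly to code; the amplitude values in this sketch are invented for illustration.

```python
import numpy as np

def percent_thd(amplitudes):
    """%THD from amplitudes [A1, A2, A3, ...], A1 being the fundamental."""
    a = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.sqrt(np.sum(a[1:]**2)) / a[0]

# THD through the fourth harmonic for illustrative amplitudes
print(percent_thd([1.0, 0.03, 0.01, 0.005]))   # about 3.2 %
```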
THD + N
Real-world signals usually contain noise. A system can introduce additional noise into the signal. THD + N measures signal distortion while taking into account the amount of noise power present in the signal. Measuring THD + N requires measuring the amplitude of the fundamental frequency and the power present in the remaining signal after removing the fundamental frequency. The following equation yields THD + N.

$$\text{THD} + \text{N} = \frac{\sqrt{A_2^2 + A_3^2 + \cdots + N^2}}{\sqrt{A_1^2 + A_2^2 + A_3^2 + \cdots + N^2}}$$

where N is the noise power. A low THD + N measurement means that the system has a low amount of harmonic distortion and a low amount of noise from interfering signals, such as AC mains hum and wideband white noise. As with THD, you usually report the results of a THD + N measurement in terms of the highest-order harmonic present in the measurement, such as THD + N through the third harmonic. The following equation yields the percentage total harmonic distortion + noise (%THD + N).

$$\%(\text{THD} + \text{N}) = 100 \cdot \sqrt{\frac{A_2^2 + A_3^2 + \cdots + N^2}{A_1^2 + A_2^2 + A_3^2 + \cdots + N^2}}$$
SINAD
Similar to THD + N, SINAD takes into account both harmonics and noise. However, SINAD is the reciprocal of THD + N. The following equation yields SINAD.

$$\text{SINAD} = \frac{\text{Fundamental} + \text{Noise} + \text{Distortion}}{\text{Noise} + \text{Distortion}}$$

You can use SINAD to characterize the performance of FM receivers in terms of sensitivity, adjacent channel selectivity, and alternate channel selectivity.
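One plausible spectrum-based reading of these definitions is sketched below. The bin bookkeeping (how many bins around the fundamental count as "fundamental", and excluding DC) is an assumption of the sketch, not something the text specifies.

```python
import numpy as np

def thd_n_and_sinad(power, fund_bin, span=1):
    """THD + N and SINAD from a single-sided power spectrum.

    Bins within +/- span of fund_bin are treated as the fundamental;
    all other non-DC bins count as noise plus distortion.
    """
    total = np.sum(power[1:])                        # exclude the DC bin
    fundamental = np.sum(power[fund_bin - span:fund_bin + span + 1])
    noise_dist = total - fundamental
    thd_n = np.sqrt(noise_dist / total)              # ratio of rms quantities
    sinad = total / noise_dist                       # (fund + N + D) / (N + D)
    return thd_n, sinad
```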
DC/RMS Measurements
Two of the most common measurements of a signal are its direct current (DC) and root mean square (RMS) levels. This chapter introduces measurement analysis techniques for making DC and RMS measurements of a signal.
[Figure 7-1: a voltage waveform plotted against time, with the measurement window marked from t1 to t2.]
The DC level of a continuous signal V(t) from time t1 to time t2 is given by the following equation.

$$V_{dc} = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} V(t)\,dt$$
For digitized signals, the discrete-time version of the previous equation is given by the following equation.

$$V_{dc} = \frac{1}{N} \sum_{i=1}^{N} V_i$$

For a sampled system, the DC value is defined as the mean value of the samples acquired in the specified measurement time window. Between pure DC signals and fast-moving dynamic signals is a gray zone where signals become more complex, and measuring the DC level of these signals becomes challenging. Real-world signals often contain a significant amount of dynamic influence, and often you do not want the dynamic part of the signal. The DC measurement identifies the static DC signal hidden in the dynamic signal, for example, the voltage generated by a thermocouple in an industrial environment, where external noise or hum from the mains power can disturb the DC signal significantly.
Similarly, the RMS level of a continuous signal V(t) from time t1 to time t2 is given by the following equation.

$$V_{rms} = \sqrt{\frac{1}{t_2 - t_1} \int_{t_1}^{t_2} V^2(t)\,dt}$$
The RMS level of a discrete signal Vi is given by the following equation.

$$V_{rms} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} V_i^2}$$
One difficulty is encountered when measuring the dynamic part of a signal using an instrument that does not offer an AC-coupling option. A true RMS measurement includes the DC part in the measurement, which you might not want.
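Both levels follow directly from the equations above; the test signal in this sketch, a 1.0 VDC level with a 0.5 V peak tone riding on it, is invented for illustration and shows how the true RMS value includes the DC part.

```python
import numpy as np

def dc_rms(v):
    v = np.asarray(v, dtype=float)
    v_dc = np.mean(v)                  # V_dc: mean of the samples
    v_rms = np.sqrt(np.mean(v**2))     # V_rms: true RMS, includes the DC part
    return v_dc, v_rms

t = np.arange(1000) / 1000.0
dc, rms = dc_rms(1.0 + 0.5 * np.sin(2 * np.pi * 10 * t))
print(dc, rms)    # ~1.0 V and ~1.06 V = sqrt(1.0 + 0.5**2 / 2)
```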
[Figure 7-2: a voltage waveform observed from time t1 to time t2.]
An RMS measurement is an averaged quantity because it is the average energy in the signal over a measurement period. You can improve the RMS measurement accuracy by using a longer averaging time, equivalent to the integration time or measurement time. There are several different strategies to use for making DC and RMS measurements, each dependent on the type of error or noise sources. When choosing a strategy, you must decide if accuracy or speed of the measurement is more important.
[Figure 7-3: a sine tone measured over a window from t1 to t2 that does not cover an integer number of periods.]
Any remaining partial period, shown in Figure 7-3 with vertical hatching, introduces an error in the average value and therefore in the DC measurement. Increasing the averaging time reduces this error because the integration is always divided by the measurement time t2 t1. If you know
the period of the sine tone, you can take a more accurate measurement of the DC value by using a measurement period equal to an integer number of periods of the sine tone. The most severe error occurs when the measurement time is a half-period different from an integer number of periods of the sine tone because this is the maximum area under or over the signal curve.
Figure 7-4. Digits versus Measurement Time for 1.0 VDC Signal with 0.5 V Single Tone
Figure 7-5. Digits versus Measurement Time for DC + Tone Using Hann Window
You can use other types of window functions to further reduce the necessary measurement time or greatly increase the resulting accuracy. Figure 7-6 shows that the Low Sidelobe (LSL) window can achieve more than six equivalent number of digits (ENOD) of worst-case accuracy when averaging your DC signal over only five periods of the sine tone (same test signal).
Figure 7-6. Digits versus Measurement Time for DC + Tone Using LSL Window
Applying the window to the signal increases RMS measurement accuracy significantly, but the improvement is not as large as in DC measurements. For this example, the LSL window achieves six digits of accuracy when the measurement time reaches eight periods of the sine tone.
You also must make sure that the window is scaled correctly or that you update the scaling after applying the window. The most useful window functions are pre-scaled by their coherent gain, the mean value of the window function, so that the resulting mean value of the scaled window function is always 1.00. DC measurements do not need additional scaling when you use a properly scaled window function. For RMS measurements, each window has a specific equivalent noise bandwidth that you must use to scale integrated RMS measurements. You must scale RMS measurements using windows by the reciprocal of the square root of the equivalent noise bandwidth.
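A sketch of these scaling rules follows. The formulas used for coherent gain and equivalent noise bandwidth are standard definitions, stated here as assumptions since the excerpt does not spell them out.

```python
import numpy as np

def windowed_dc_rms(v, w):
    """DC and RMS measurements through a window scaled by its coherent gain."""
    v = np.asarray(v, dtype=float)
    w = np.asarray(w, dtype=float)

    w = w / np.mean(w)                  # scale by coherent gain: mean becomes 1.00
    enbw = np.mean(w**2)                # relative equivalent noise bandwidth

    v_dc = np.mean(v * w)               # DC needs no further scaling
    v_rms = np.sqrt(np.mean((v * w)**2) / enbw)   # scale RMS by 1/sqrt(ENBW)
    return v_dc, v_rms
```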
You can use bandpass or bandstop filtering before RMS computations to measure the RMS power in a specific band of frequencies. You also can use the Fast Fourier Transform (FFT) to pick out specific frequencies for RMS processing. Refer to Chapter 4, Frequency Analysis, for more information about the FFT. The RMS level of a specific sine tone that is part of a complex or noisy signal can be extracted very accurately using frequency domain processing, leveraging the power of the FFT, and using the benefits of windowing.
Limit Testing
This chapter provides information about setting up an automated system for performing limit testing, specifying limits, and applications for limit testing. You can use limit testing to monitor a waveform and determine if it always satisfies a set of conditions, usually upper and lower limits. The region bounded by the specified limits is a mask. The result of a limit or mask test is generally a pass or fail.
The following sections describe steps 1 and 3 in further detail. Assume that the signal to be monitored starts at x = x0 and all the data points are evenly spaced. The spacing between each point is denoted by dx.
Specifying a Limit
Limits are classified into two types, continuous limits and segmented limits, as shown in Figure 8-1. The top graph in Figure 8-1 shows a continuous limit. A continuous limit is specified using a set of x and y points {{x1, x2, x3, …}, {y1, y2, y3, …}}. Completing step 1 creates a limit with the first point at x0 and all other points at a uniform spacing of dx (x0 + dx, x0 + 2dx, …). This is done through a linear interpolation of the x and y values that define the limit. In Figure 8-1, black dots represent the
points at which the limit is defined and the solid line represents the limit you create. Creating the limit in step 1 reduces test times in step 3. If the spacing between the samples changes, you can repeat step 1. The limit is undefined in the region x0 < x < x1 and for x > x4.
[Figure 8-1: the top graph shows a continuous limit defined by the points (x1, y1) through (x4, y4); the bottom graph shows a segmented limit defined by the points (x1, y1) through (x5, y5) in two segments. Both x-axes begin at x0.]
The bottom graph of Figure 8-1 shows a segmented limit. The first segment is defined using a set of x and y points {{x1, x2}, {y1, y2}}. The second segment is defined using a set of points {x3, x4, x5} and {y3, y4, y5}. You can define any number of such segments. As with continuous limits, step 1 uses linear interpolation to create a limit with the first point at x0 and all other points with a uniform spacing of dx. The limit is undefined in the region x0 < x < x1 and in the region x > x5. Also, the limit is undefined in the region x2 < x < x3.
Table 8-1. Upper Limit Values for the ADSL Power Spectral Density Mask

Frequency Band (kHz)    Maximum (Upper Limit) Value (dBm/Hz)
0.3 to 4.0              −97.5
4.0 to 25.9             −92.5 + 21.5 log2(f/4,000)
25.9 to 138.0           −34.5
138.0 to 307.0          −34.5 − 48.0 log2(f/138,000)
307.0 to 1,221.0        −90
The limit is specified as an array of a set of x and y points, [{0.3, 4.0}{−97.5, −97.5}, {4.0, 25.9}{−92.5 + 21.5 log2(f/4,000), −92.5 + 21.5 log2(f/4,000)}, …, {307.0, 1,221.0}{−90, −90}]. Each element of the array corresponds to a segment. Figure 8-2 shows the segmented limit plot specified using the formulas shown in Table 8-1. The x-axis is on a logarithmic scale.
Limit Testing
After you define your mask, you acquire a signal using a DAQ device. The sample rate is set at 1/dx S/s. Compare the signal with the limit. In step 1, you create a limit value at each point where the signal is defined. In step 3, you compare the signal with the limit. For the upper limit, if the data point is less than or equal to the limit point, the test passes. If the data point is greater than the limit point, the test fails. For the lower limit, if the data point is greater than or equal to the limit point, the test passes. If the data point is less than the limit point, the test fails. Figure 8-3 shows the result of limit testing in a continuous mask case. The test signal falls within the mask at all the points it is sampled, other than points b and c. Thus, the limit test fails. Point d is not tested because it falls outside the mask.
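A compact sketch of steps 1 and 3 for a continuous limit follows; the function name and the pass/fail convention mirror the description above.

```python
import numpy as np

def limit_test(signal, x0, dx, limit_x, limit_y, upper=True):
    """Interpolate the limit onto the signal's sample points and compare.

    Points where the limit is undefined (outside [limit_x[0], limit_x[-1]])
    are not tested.
    """
    x = x0 + dx * np.arange(len(signal))
    tested = (x >= limit_x[0]) & (x <= limit_x[-1])
    limit = np.interp(x[tested], limit_x, limit_y)    # step 1: interpolation
    s = np.asarray(signal)[tested]
    return bool(np.all(s <= limit) if upper else np.all(s >= limit))  # step 3
```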
Figure 8-4 shows the result of limit testing in a segmented mask case. All the points fall within the mask. Points b and c are not tested because the mask is undefined at those points. Thus, the limit test passes. Point d is not tested because it falls outside the mask.
Applications
You can use limit mask testing in a wide range of test and measurement applications. For example, you can use limit mask testing to determine that the power spectral density of ADSL signals meets the recommendations in the ANSI T1.413 specification. Refer to the Specifying a Limit Using a Formula section of this chapter for more information about ADSL signal limits. The following sections provide examples of when you can use limit mask testing. In all these examples, the specifications are recommended by standards-generating bodies, such as the CCITT, ITU-T, ANSI, and IEC, to ensure that all the test and measurement systems conform to a universally accepted standard. In some other cases, the limit testing specifications are proprietary and are strictly enforced by companies for quality control.
Figure 8-5. Upper and Lower Limit for V.34 Modem Transmitted Spectrum
The ITU-T V.34 recommendation contains specifications for a modem operating at data signaling rates up to 33,600 bits/s. It specifies that the spectrum for the line signal that transmits data conforms to the template shown in Figure 8-5. For example, for a normalized frequency of 1.0, the spectrum must always lie between −3 dB and 1 dB. All modems must meet this specification. A modem manufacturer can set up an automated test system to monitor the transmit spectrum for the signals that the modem outputs. If the spectrum conforms to the specification, the modem passes the test and is ready for customer use. Recommendations such as the ITU-T V.34 are essential to ensure interoperability between modems from different manufacturers and to provide high-quality service to customers.
Part II
Mathematics
This part provides information about mathematical concepts commonly used in analysis applications. Chapter 9, Curve Fitting, describes how to extract information from a data set to obtain a functional description. Chapter 10, Probability and Statistics, describes fundamental concepts of probability and statistics and how to use these concepts to solve real-world problems. Chapter 11, Linear Algebra, describes how to use the Linear Algebra VIs to perform matrix computation and analysis. Chapter 12, Optimization, describes basic concepts and methods used to solve optimization problems. Chapter 13, Polynomials, describes polynomials and operations involving polynomials.
Curve Fitting
This chapter describes how to extract information from a data set to obtain a functional description. Use the NI Example Finder to find examples of using the Curve Fitting VIs.
Curve fitting finds the set of coefficients that minimizes the least square error given by Equation 9-1.

$$e(a) = \sum_{i} \left[f(x_i, a) - y(x_i)\right]^2 \tag{9-1}$$

where e(a) is the least square error, y(x) is the observed data set, f(x, a) is the functional description of the data set, and a is the set of curve coefficients that best describes the curve. For example, if a = {a0, a1}, the following equation yields the functional description.

$$f(x, a) = a_0 + a_1 x$$

The least squares algorithm finds a by solving the system defined by Equation 9-2.

$$\frac{\partial}{\partial a}\, e(a) = 0 \tag{9-2}$$
To solve the system defined by Equation 9-2, you set up and solve the Jacobian system generated by expanding Equation 9-2. After you solve the system for a, you can use the functional description f(x, a) to obtain an estimate of the observed data set for any value of x. The Curve Fitting VIs automatically set up and solve the Jacobian system and return the set of coefficients that best describes the data set. You can concentrate on the functional description of the data without having to solve the system in Equation 9-2.
You can modify the block diagram to fit exponential and polynomial curves by replacing the Linear Fit VI with the Exponential Fit VI or the General Polynomial Fit VI. Figure 9-2 shows a multiplot graph of the result of fitting a line to a noisy data set.
The practical applications of curve fitting include the following applications:
• Removing measurement noise
• Filling in missing data points, such as when one or more measurements are missing or improperly recorded
• Interpolating, which is estimating data between data points, such as when the time between measurements is not small enough
• Extrapolating, which is estimating data beyond the data points, such as looking for data values before or after a measurement
• Differentiating digital data, such as finding the derivative of the data points by modeling the discrete data with a polynomial and differentiating the resulting polynomial equation
• Integrating digital data, such as finding the area under a curve when you have only the discrete points of the curve
• Obtaining the trajectory of an object based on discrete measurements of its velocity, which is the first derivative, or its acceleration, which is the second derivative
$$y_i = b_0 x_{i0} + \cdots + b_{k-1} x_{i,k-1} = \sum_{j=0}^{k-1} b_j x_{ij}, \qquad i = 0, 1, \ldots, n-1 \tag{9-3}$$
where $x_{ij}$ is the observed data contained in the observation matrix H, n is the number of elements in the set of observed data and the number of rows of H, b is the set of coefficients that fit the linear model, and k is the number of coefficients.
You can rewrite Equation 9-3 in matrix form as the following equation.

$$Y = HB$$

The general LS linear fit model is a multiple linear regression model. A multiple linear regression model uses several variables, $x_{i0}, x_{i1}, \ldots, x_{i,k-1}$, to predict one variable, $y_i$. In most analysis situations, you acquire more observation data than coefficients, so Equation 9-3 might not yield all the coefficients in set B exactly. The fit problem becomes finding the coefficient set B that minimizes the difference between the observed data $y_i$ and the predicted value $z_i$. Equation 9-4 defines $z_i$.
$$z_i = \sum_{j=0}^{k-1} b_j x_{ij} \tag{9-4}$$
You can use the least chi-square plane method to find the solution set B that minimizes the quantity given by Equation 9-5.
$$\chi^2 = \sum_{i=0}^{n-1} \left(\frac{y_i - z_i}{\sigma_i}\right)^2 = \sum_{i=0}^{n-1} \left(\frac{y_i - \sum_{j=0}^{k-1} b_j x_{ij}}{\sigma_i}\right)^2 = \left|H_0 B - Y_0\right|^2 \tag{9-5}$$
In Equation 9-5, $\sigma_i$ is the standard deviation. If the measurement errors are independent and normally distributed with constant standard deviation, $\sigma_i = \sigma$, Equation 9-5 also is the least-square estimation. You can use the following methods to minimize $\chi^2$ in Equation 9-5:
• Solve the normal equations of the least-square problem using LU or Cholesky factorization.
• Minimize $\chi^2$ by finding the least-square solution of the equations.
Solving the normal equations involves completing the following steps.
1. Set the partial derivatives of $\chi^2$ to zero with respect to $b_0, b_1, \ldots, b_{k-1}$, as shown by Equation 9-6.

$$\frac{\partial \chi^2}{\partial b_0} = 0, \quad \frac{\partial \chi^2}{\partial b_1} = 0, \quad \ldots, \quad \frac{\partial \chi^2}{\partial b_{k-1}} = 0 \tag{9-6}$$

2. Rearrange the resulting equations into the normal-equation form of Equation 9-7.

$$H_0^T H_0 B = H_0^T Y_0 \tag{9-7}$$
Equations of the form given by Equation 9-7 are called normal equations of the least-square problems. You can solve them using LU or Cholesky factorization algorithms. However, the solution from the normal equations is susceptible to roundoff error. The preferred method of minimizing $\chi^2$ is to find the least-square solution of the equations. Equation 9-8 defines the form of the least-square solution.

$$H_0 B = Y_0 \tag{9-8}$$
You can use QR or SVD factorization to find the solution set B for Equation 9-8. For QR factorization, you can use the Householder algorithm, the Givens algorithm, or the Givens 2 algorithm, which also is known as the fast Givens algorithm. Different algorithms can give you different precision. In some cases, if one algorithm cannot solve the equation, another algorithm might solve it. You can try different algorithms to find the one best suited for the observation data.
In a polynomial fit, the fit model is the polynomial given by Equation 9-9.

y_i = \sum_{j=0}^{k-1} b_j x_i^j = b_0 + b_1 x_i + b_2 x_i^2 + \cdots + b_{k-1} x_i^{k-1}, \quad i = 0, 1, 2, \ldots, n-1   (9-9)
Comparing Equations 9-3 and 9-9 shows that x_{ij} = x_i^j, as shown by the following equations.

x_{i0} = x_i^0 = 1, \quad x_{i1} = x_i, \quad x_{i2} = x_i^2, \quad \ldots, \quad x_{i,k-1} = x_i^{k-1}

Because x_{ij} = x_i^j, you can build the observation matrix H as shown by the following equation.

H = \begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^{k-1} \\ 1 & x_1 & x_1^2 & \cdots & x_1^{k-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{n-1} & x_{n-1}^2 & \cdots & x_{n-1}^{k-1} \end{bmatrix}
Instead of using x_{ij} = x_i^j, you also can choose another function formula to fit the data sets {x_i, y_i}. In general, you can select x_{ij} = f_j(x_i). Here, f_j(x_i)
is the function model that you choose to fit your observation data. In polynomial fit, f_j(x_i) = x_i^j. In general, you can build H as shown in the following equation.

H = \begin{bmatrix} f_0(x_0) & f_1(x_0) & f_2(x_0) & \cdots & f_{k-1}(x_0) \\ f_0(x_1) & f_1(x_1) & f_2(x_1) & \cdots & f_{k-1}(x_1) \\ \vdots & \vdots & \vdots & & \vdots \\ f_0(x_{n-1}) & f_1(x_{n-1}) & f_2(x_{n-1}) & \cdots & f_{k-1}(x_{n-1}) \end{bmatrix}

The following equation defines the fit model.

y_i = b_0 f_0(x_i) + b_1 f_1(x_i) + \cdots + b_{k-1} f_{k-1}(x_i)
The mean square error (MSE) measures how well the fitted values match the observed values, as shown in the following equation.

MSE = \frac{1}{n} \sum_{i=0}^{n-1} (f_i - y_i)^2
where f is the sequence representing the fitted values, y is the sequence representing the observed values, and n is the number of observed sample points.
Linear Fit
The Linear Fit VI fits experimental data to a straight line of the general form described by the following equation.

y = mx + b

The Linear Fit VI calculates the coefficients a_0 and a_1 that best fit the experimental data (x[i] and y[i]) to a straight-line model described by the following equation.

y[i] = a_0 + a_1 x[i]

where y[i] is a linear combination of the coefficients a_0 and a_1.
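A straight-line fit is a first-order polynomial fit. As a sketch, assuming Python with NumPy and hypothetical data, numpy.polyfit with degree 1 returns the same two coefficients:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.9, 3.1, 5.0, 7.2])

# polyfit returns coefficients highest power first, so [a1, a0]
a1, a0 = np.polyfit(x, y, 1)
print(a0, a1)   # y[i] is modeled as a0 + a1*x[i]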
Exponential Fit
The Exponential Fit VI fits data to an exponential curve of the general form described by the following equation.

y = a e^{bx}

The following equation specifically describes the exponential curve resulting from the exponential fit algorithm.

y[i] = a_0 e^{a_1 x[i]}
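One common way to fit such a model, sketched below with hypothetical data, is to linearize it: taking the natural logarithm gives ln y = ln a_0 + a_1 x, a straight-line fit in log space. This is only one possible approach and is not necessarily the algorithm the Exponential Fit VI uses.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.5, 4.1, 11.0, 30.1])   # must be positive to take the log

# Fit ln(y) = ln(a0) + a1*x as a straight line
a1, ln_a0 = np.polyfit(x, np.log(y), 1)
a0 = np.exp(ln_a0)
print(a0, a1)   # model: y[i] = a0 * exp(a1 * x[i])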
Computing Covariance
The General LS Linear Fit VI returns a k × k matrix of covariances between the coefficients a_k. The General LS Linear Fit VI uses the following equation to compute the covariance matrix C.

C = (H_0^T H_0)^{-1}
To build H, set each column of H to the independent functions evaluated at each x value x[i]. For example, suppose the independent functions are 1, sin(x), cos(x), and x². If the data set contains 100 x values, the following equation defines H.

H = \begin{bmatrix} 1 & \sin(x_0) & \cos(x_0) & x_0^2 \\ 1 & \sin(x_1) & \cos(x_1) & x_1^2 \\ 1 & \sin(x_2) & \cos(x_2) & x_2^2 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & \sin(x_{99}) & \cos(x_{99}) & x_{99}^2 \end{bmatrix}

If the data set contains N data points and if k coefficients (a_0, a_1, …, a_{k-1}) exist for which to solve, H is an N × k matrix with N rows and k columns. Therefore, the number of rows in H equals the number of data points N. The number of columns in H equals the number of coefficients k.
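The following sketch, assuming Python with NumPy and 100 hypothetical x values, builds this H and computes the covariance matrix from it:

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 100)

# Each column of H is one independent function evaluated at every x value
H = np.column_stack([np.ones_like(x), np.sin(x), np.cos(x), x**2])
print(H.shape)                  # (100, 4): N rows, k columns

# Covariance matrix C = (H^T H)^-1, a k x k matrix
C = np.linalg.inv(H.T @ H)
print(C.shape)                  # (4, 4)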
Probability and Statistics
10
This chapter describes fundamental concepts of probability and statistics and how to use these concepts to solve real-world problems. Use the NI Example Finder to find examples of using the Probability and Statistics VIs.
Statistics
Statistics allow you to summarize data and draw conclusions for the present by condensing large amounts of data into a form that brings out all the essential information and is yet easy to remember. To condense data, you must reduce it to single numbers that make the data more intelligible and help draw useful inferences. For example, in a season, a sports player participates in 51 games and scores a total of 1,568 points. The total of 1,568 points includes 45 points in Game A, 36 points in Game B, 51 points in Game C, 45 points in Game D, and 40 points in Game E. As the number of games increases, remembering how many points the player scored in each individual game becomes increasingly difficult. If you divide the total number of points that the player scored by the number of games played, you obtain a single number that tells you the average number of points the player scored per game. Equation 10-1 yields the points per game average for the player.

\frac{1{,}568 \text{ points}}{51 \text{ games}} = 30.7 \text{ points per game average}   (10-1)
Computing percentage provides a method for making comparisons. For example, the officials of an American city are considering installing a traffic signal at a major intersection. The purpose of the traffic signal is to protect motorists turning left from oncoming traffic. However, the city has only enough money to fund one traffic signal but has three intersections that potentially need the signal. Traffic engineers study each of the three intersections for a week. The engineers record the total number of cars using the intersection, the number of cars travelling straight through the
intersection, the number of cars making left-hand turns, and the number of cars making right-hand turns. Table 10-1 shows the data for one of the intersections.
Table 10-1. Data for One Major Intersection
Day      Total Number of Cars      Number of Cars    Number of Cars    Number of Cars
         Using the Intersection    Turning Left      Turning Right     Continuing Straight
1        1,258                     528               330               400
2        1,306                     549               340               417
3        1,355                     569               352               434
4        1,227                     515               319               393
5        1,334                     560               346               428
6        694                       291               180               223
7        416                       174               108               134
Totals   7,590                     3,186             1,975             2,429
Looking only at the raw data from each intersection might make determining which intersection needs the traffic signal difficult because the raw numbers can vary widely. However, computing the percentage of cars turning at each intersection provides a common basis for comparison. To obtain the percentage of cars turning left, divide the number of cars turning left by the total number of cars using the intersection and multiply that result by 100. For the intersection whose data is shown in Table 10-1, the following equation gives the percentage of cars turning left.

\frac{3{,}186}{7{,}590} \times 100 = 42\%

Given the data for the other two intersections, the city officials can obtain the percentage of cars turning left at those two intersections. Converting the raw data to a percentage condenses the information for the three intersections into single numbers representing the percentage of cars that turn left at each intersection. The city officials can compare the percentage of cars turning left at each intersection and rank the intersections in order of highest percentage of cars turning left to the lowest percentage of cars
turning left. Ranking the intersections can help determine where the traffic signal is needed most. Thus, in a broad sense, the term statistics implies different ways to summarize data to derive useful and important information from it.
Mean
The mean value is the average value for a set of data samples. The following equation defines an input sequence X consisting of n samples.

X = {x_0, x_1, x_2, x_3, …, x_{n-1}}

The following equation yields the mean value for input sequence X.

\bar{x} = \frac{1}{n} (x_0 + x_1 + x_2 + x_3 + \cdots + x_{n-1})

The mean equals the sum of all the sample values divided by the number of samples, as shown in Equation 10-1.
Median
The median of a data sequence is the midpoint value in the sorted version of the sequence. The median is useful for making qualitative statements, such as whether a particular data point lies in the upper or lower portion of an input sequence. The following equation represents the sorted sequence of an input sequence X.

S = {s_0, s_1, s_2, …, s_{n-1}}

You can sort the sequence either in ascending order or in descending order. The following equation yields the median value of S.

x_{median} = \begin{cases} s_i & n \text{ is odd} \\ 0.5 (s_{k-1} + s_k) & n \text{ is even} \end{cases}   (10-2)

where i = (n − 1)/2 and k = n/2. Equation 10-3 defines a sorted sequence consisting of an odd number of samples sorted in descending order.

S = {5, 4, 3, 2, 1}   (10-3)
In Equation 10-3, the median is the midpoint value 3. Equation 10-4 defines a sorted sequence consisting of an even number of samples sorted in ascending order. S = {1, 2, 3, 4} (10-4)
The sorted sequence in Equation 10-4 has two midpoint values, 2 and 3. Using Equation 10-2 for n even, the following equation yields the median value for the sorted sequence in Equation 10-4.

x_{median} = 0.5 (s_{k-1} + s_k) = 0.5 (2 + 3) = 2.5
Sample Variance
Sample variance measures the spread or dispersion of the sample values. You can use the sample variance as a measure of the consistency. The sample variance is always positive, except when all the sample values are equal to each other and in turn, equal to the mean. The sample variance s² for an input sequence X equals the sum of the squares of the deviations of the sample values from the mean divided by n − 1, as shown in the following equation.

s^2 = \frac{1}{n-1} \left[ (x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2 \right]
Population Variance
The population variance σ² for an input sequence X equals the sum of the squares of the deviations of the sample values from the mean divided by n, as shown in the following equation.

\sigma^2 = \frac{1}{n} \left[ (x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2 \right]
Standard Deviation
The standard deviation s of an input sequence equals the positive square root of the sample variance s², as shown in the following equation.

s = \sqrt{s^2}
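The following sketch, assuming Python with NumPy and the five game scores used earlier in this chapter, computes the statistics defined so far; the ddof argument selects between the sample divisor n − 1 and the population divisor n:

import numpy as np

x = np.array([45.0, 36.0, 51.0, 45.0, 40.0])   # points in Games A-E

mean = np.mean(x)                # average value
median = np.median(x)            # midpoint of the sorted sequence
s2 = np.var(x, ddof=1)           # sample variance, divides by n - 1
sigma2 = np.var(x, ddof=0)       # population variance, divides by n
s = np.std(x, ddof=1)            # positive square root of s2
print(mean, median, s2, sigma2, s)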
Mode
The mode of an input sequence is the value that occurs most often in the input sequence. The following equation defines an input sequence X. X = { 0, 1, 3, 3, 4, 4, 4, 5, 5, 7 } The mode of X is 4 because 4 is the value that occurs most often in X.
Moment About Mean
The mth-order moment about the mean of an input sequence X is defined by the following equation.

\sigma_x^m = \frac{1}{n} \sum_{i=0}^{n-1} (x_i - \bar{x})^m

where n is the number of elements in X and x̄ is the mean of X. For m = 2, the moment about the mean equals the population variance σ².
Skewness
Skewness is a measure of symmetry and corresponds to the third-order moment.
Kurtosis
Kurtosis is a measure of peakedness and corresponds to the fourth-order moment.
Histogram
A histogram is a bar graph that displays frequency data and is an indication of the data distribution. A histogram provides a method for graphically displaying data and summarizing key information. Equation 10-5 defines a data sequence. X = {0, 1, 3, 3, 4, 4, 4, 5, 5, 8} (10-5)
To compute a histogram for X, divide the total range of values into the following eight intervals, or bins: 0–1, 1–2, 2–3, 3–4, 4–5, 5–6, 6–7, and 7–8.
The histogram display for X indicates the number of data samples that lie in each interval, excluding the upper boundary. Figure 10-1 shows the histogram for the sequence in Equation 10-5.
Figure 10-1 shows that no data samples are in the 2–3 and 6–7 intervals. One data sample lies in each of the intervals 0–1, 1–2, and 7–8. Two data samples lie in each of the intervals 3–4 and 5–6. Three data samples lie in the 4–5 interval. The number of intervals in the histogram affects the resolution of the histogram. A common method of determining the number of intervals to use in a histogram is Sturges' Rule, which is given by the following equation.

Number of Intervals = 1 + 3.3 log(n)

where n is the number of data points.

Mean Square Error
The following equation defines the mean square error (mse) of two sequences x and y.

mse = \frac{1}{n} \sum_{i=0}^{n-1} (x_i - y_i)^2

You can use the mse to compare two sequences. For example, system S1 receives a digital signal x and produces an output signal y1. System S2 produces y2 when it receives x. Theoretically, y1 = y2. To verify that y1 = y2, you want to compare y1 and y2. Both y1 and y2 contain a large number of data points. Because y1 and y2 are large, an element-by-element comparison is difficult. You can calculate the mse of y1 and y2. If the mse is smaller than an acceptable tolerance, y1 and y2 are equivalent.
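The following sketch, assuming Python with NumPy, computes the histogram of the sequence from Equation 10-5 with a Sturges-style bin count and then uses the mse to compare two hypothetical long sequences; the tolerance value is arbitrary:

import numpy as np

X = np.array([0, 1, 3, 3, 4, 4, 4, 5, 5, 8])

# 'sturges' asks NumPy to pick the bin count from Sturges' Rule
counts, edges = np.histogram(X, bins='sturges')
print(counts, edges)

# mse comparison instead of an element-by-element comparison
y1 = np.sin(np.linspace(0.0, 1.0, 100000))
y2 = y1 + 1e-9                          # hypothetical second system output
mse = np.mean((y1 - y2) ** 2)
print(mse < 1e-12)                      # True: y1 and y2 are equivalent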
Root Mean Square
The root mean square (rms) value of an input sequence X is the positive square root of the mean of the squares of the samples, as shown in the following equation.

x_{rms} = \sqrt{ \frac{1}{n} \sum_{i=0}^{n-1} x_i^2 }

where n is the number of elements in X. Root mean square is a widely used quantity for analog signals. The following equation yields the root mean square voltage V_rms for a sine voltage waveform.

V_{rms} = \frac{V_p}{\sqrt{2}}

where V_p is the peak amplitude of the signal.
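The following sketch, assuming Python with NumPy, checks the V_rms relationship numerically for a sine waveform with a hypothetical peak amplitude of 5 V:

import numpy as np

vp = 5.0
t = np.linspace(0.0, 1.0, 100000, endpoint=False)
v = vp * np.sin(2.0 * np.pi * t)        # exactly one full cycle

rms = np.sqrt(np.mean(v ** 2))          # root mean square definition
print(rms, vp / np.sqrt(2.0))           # both approximately 3.5355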
Probability
In any random experiment, a chance, or probability, always exists that a particular event will or will not occur. The probability that event A will occur is the ratio of the number of outcomes favorable to A to the total number of equally likely outcomes. You can assign a number between zero and one to an event as an indication of the probability that the event will occur. If you are absolutely sure that the event will occur, its probability is 100% or one. If you are sure that the event will not occur, its probability is zero.
Random Variables
Many experiments generate outcomes that you can interpret in terms of real numbers. Some examples are the number of cars passing a stop sign during a day, the number of voters favoring candidate A, and the number of accidents at a particular intersection. Random variables are the numerical outcomes of an experiment whose values can change from experiment to experiment.
Figure 10-2. Histogram
Figure 10-2 shows that most of the values for x are between zero and 100 hours. The histogram values drop off smoothly for larger values of x. The value of x can equal any value between zero and the largest observed value, making x a continuous random variable. You can approximate the histogram in Figure 10-2 by an exponentially decaying curve. The exponentially decaying curve is a mathematical model for the behavior of the data sample. If you want to know the probability that a randomly selected battery will last longer than 400 hours, you can approximate the probability value by the area under the curve to the right of the value 400. The function that models the histogram of the random variable is the probability density function. Refer to the Probability
Distribution and Density Functions section of this chapter for more information about the probability density function. A random variable X is continuous if it can take on an infinite number of possible values associated with intervals of real numbers and a probability density function f(x) exists such that the following relationships and equations are true.

f(x) \ge 0 \text{ for all } x

\int_{-\infty}^{\infty} f(x) \, dx = 1

P(a \le X \le b) = \int_a^b f(x) \, dx   (10-6)
The chance that X will assume a specific value X = a is extremely small. The following equation shows solving Equation 10-6 for a specific value of X.

P(X = a) = \int_a^a f(x) \, dx = 0
Because X can assume an infinite number of possible values, the probability of it assuming a specific value is zero.
Normal Distribution
The normal distribution is a continuous probability distribution. The functional form of the normal distribution is the normal density function. The following equation defines the normal density function f(x).

f(x) = \frac{1}{\sqrt{2\pi}\, s} \, e^{-(x - \bar{x})^2 / (2 s^2)}
The normal density function has a symmetric bell shape. The following parameters completely determine the shape and location of the normal density function: the mean x̄, which sets the center of the curve, and the variance s², which sets the spread of the curve.
If a random variable has a normal distribution with a mean equal to zero and a variance equal to one, the random variable has a standard normal distribution.
The choice of the probability density function is fundamental to obtaining a correct probability value.
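For example, under the standard normal density you can compute the probability that X falls in an interval from the error function, which the Python standard library provides. This is a generic numerical illustration, not a description of a particular VI:

from math import erf, sqrt

def std_normal_prob(a, b):
    """P(a <= X <= b) for a standard normal random variable X."""
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # distribution function
    return phi(b) - phi(a)

print(std_normal_prob(-1.0, 1.0))   # approximately 0.6827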
In addition to the normal distribution method, you can use the following methods to compute p:
- Chi-Square distribution
- F distribution
- T distribution
The distribution function F(x) of a random variable gives the probability that the variable takes a value no greater than x. The following equation defines F(x) in terms of the density function f.

F(x) = \int_{-\infty}^{x} f(\xi) \, d\xi   (10-7)

where \int_{-\infty}^{\infty} f(x) \, dx = 1. By performing differentiation, you can derive the following equation from Equation 10-7.

f(x) = \frac{dF(x)}{dx}
You can use a histogram to obtain a denormalized discrete representation of f(x). The discrete representation of f(x) satisfies the following equation.

\sum_{i=0}^{n-1} x_i \, \Delta x = 1

The following equation yields the sum of the elements h_l of the histogram.

\sum_{l=0}^{m-1} h_l = n

where m is the number of samples in the histogram and n is the number of samples in the input sequence representing the function. Therefore, to obtain an estimate of F(x) and f(x), normalize the histogram by a factor of Δx = 1/n and let h_j = x_j. Figure 10-3 shows the block diagram of a VI that generates F(x) and f(x) for Gaussian white noise.
Figure 10-3. Generating Probability Distribution Function and Probability Density Function
The VI in Figure 10-3 uses 25,000 samples, 2,500 in each of the 10 loop iterations, to compute the probability distribution function for Gaussian white noise. The Integral x(t) VI computes the probability distribution function. The Derivative x(t) VI performs differentiation on the probability distribution function to compute the probability density function. Figure 10-4 shows the results the VI in Figure 10-3 returns.
Figure 10-4. Input Signal, Probability Distribution Function, and Probability Density Function
Figure 10-4 shows the last block of Gaussian-distributed noise samples, the plot of the probability distribution function F(x), and the plot of the probability density function f(x). The plot of F(x) monotonically increases and is limited to the maximum value of 1.00 as the value of the x-axis increases. The plot of f(x) shows a Gaussian distribution that conforms to the specific pattern of the noise signal.
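The following sketch reproduces the idea of Figure 10-3 in Python with NumPy, under the assumption that a histogram stands in for the Integral and Derivative VIs: cumulative summation estimates F(x), and bin-wise normalization estimates f(x).

import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(25000)     # Gaussian white noise samples

counts, edges = np.histogram(noise, bins=100)
n = noise.size
dx = edges[1] - edges[0]

F = np.cumsum(counts) / n              # distribution estimate, rises to 1.0
f = counts / (n * dx)                  # density estimate, bell shaped
print(F[-1])                           # 1.0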
Linear Algebra
11
This chapter describes how to use the Linear Algebra VIs to perform matrix computation and analysis. Use the NI Example Finder to find examples of using the Linear Algebra VIs.
Types of Matrices
Whatever the application, it is always necessary to find an accurate solution for the system of equations in a very efficient way. In matrix-vector notation, such a system of linear algebraic equations has the following form.

Ax = b

where A is an n × n matrix, b is a given vector consisting of n elements, and x is the unknown solution vector to be determined. A matrix is a 2D array of elements with m rows and n columns. The elements in the 2D array might be real numbers, complex numbers, functions, or operators. The matrix A shown below is an array of m rows and n columns with m × n elements.

A = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,n-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,n-1} \\ \vdots & \vdots & & \vdots \\ a_{m-1,0} & a_{m-1,1} & \cdots & a_{m-1,n-1} \end{bmatrix}
Here, a_{i,j} denotes the (i, j)th element, located in the ith row and the jth column. In general, such a matrix is a rectangular matrix. When m = n, so that the
number of rows is equal to the number of columns, the matrix is a square matrix. An m × 1 matrix (m rows and one column) is a column vector. A row vector is a 1 × n matrix (one row and n columns). If all the elements other than the diagonal elements are zero, that is, a_{i,j} = 0 for i ≠ j, such a matrix is a diagonal matrix. For example,

A = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 9 \end{bmatrix}

is a diagonal matrix. A diagonal matrix with all the diagonal elements equal to one is an identity matrix, also known as a unit matrix. If all the elements below the main diagonal are zero, the matrix is an upper triangular matrix. On the other hand, if all the elements above the main diagonal are zero, the matrix is a lower triangular matrix. When all the elements are real numbers, the matrix is a real matrix. When at least one of the elements of the matrix is a complex number, the matrix is a complex matrix.
Determinant of a Matrix
One of the most important attributes of a matrix is its determinant. In the simplest case, the determinant of a 2 × 2 matrix

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}

is given by ad − bc. The determinant of a larger square matrix is formed by expansion in terms of the determinants of submatrices of its elements. For example, if

A = \begin{bmatrix} 2 & 5 & 3 \\ 6 & 1 & 7 \\ 1 & 6 & 9 \end{bmatrix}

then the determinant of A, denoted by |A|, is

|A| = 2 \begin{vmatrix} 1 & 7 \\ 6 & 9 \end{vmatrix} - 5 \begin{vmatrix} 6 & 7 \\ 1 & 9 \end{vmatrix} + 3 \begin{vmatrix} 6 & 1 \\ 1 & 6 \end{vmatrix} = 2(9 - 42) - 5(54 - 7) + 3(36 - 1) = -196
The determinant of a diagonal matrix, an upper triangular matrix, or a lower triangular matrix is the product of its diagonal elements. The determinant reveals many important properties of the matrix. For example, if the determinant of the matrix is zero, the matrix is singular. In other words, a matrix with a nonzero determinant is nonsingular. Refer to the Matrix Inverse and Solving Systems of Linear Equations section of this chapter for more information about singularity and the solution of linear equations and matrix inverses.
Transpose of a Matrix
The transpose of a real matrix is formed by interchanging its rows and columns. If the matrix B represents the transpose of A, denoted by A^T, then b_{j,i} = a_{i,j}. For the matrix A defined above,

B = A^T = \begin{bmatrix} 2 & 6 & 1 \\ 5 & 1 & 6 \\ 3 & 7 & 9 \end{bmatrix}

In the case of complex matrices, we define complex conjugate transposition. If the matrix D represents the complex conjugate transpose of a complex matrix C (if a = x + iy, then the complex conjugate a* = x − iy), then

D = C^H, \quad d_{i,j} = c^*_{j,i}

That is, the matrix D is obtained by replacing every element in C by its complex conjugate and then interchanging the rows and columns of the resulting matrix. A real matrix is a symmetric matrix if the transpose of the matrix is equal to the matrix itself. The example matrix A is not a symmetric matrix. If a complex matrix C satisfies the relation C = C^H, C is a Hermitian matrix.
Linear Independence
A set of vectors x_1, x_2, …, x_n is linearly dependent if and only if there exist scalars α_1, α_2, …, α_n, not all zero, such that

α_1 x_1 + α_2 x_2 + \cdots + α_n x_n = 0   (11-1)
In simpler terms, if one of the vectors can be written in terms of a linear combination of the others, the vectors are linearly dependent. If the only set of α_i for which Equation 11-1 holds is α_1 = 0, α_2 = 0, …, α_n = 0, the set of vectors x_1, x_2, …, x_n is linearly independent. So in this case, none of the vectors can be written in terms of a linear combination of the others. Given any set of vectors, Equation 11-1 always holds for α_1 = 0, α_2 = 0, …, α_n = 0. Therefore, to show the linear independence of the set, you must show that α_1 = 0, α_2 = 0, …, α_n = 0 is the only set of α_i for which Equation 11-1 holds. For example, first consider the vectors

x = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad y = \begin{bmatrix} 3 \\ 4 \end{bmatrix}

α_1 = 0 and α_2 = 0 are the only values for which the relation α_1 x + α_2 y = 0 holds true. Therefore, these two vectors are linearly independent of each other. Now consider the vectors

x = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad y = \begin{bmatrix} 2 \\ 4 \end{bmatrix}

If α_1 = 2 and α_2 = −1, then α_1 x + α_2 y = 0. Therefore, these two vectors are linearly dependent on each other. You must understand this definition of linear independence of vectors to fully appreciate the concept of the rank of the matrix.
Matrix Rank
The rank of a matrix A, denoted by ρ(A), is the maximum number of linearly independent columns in A. If you look at the example matrix A, you find that all the columns of A are linearly independent of each other. That is, none of the columns can be obtained by forming a linear combination of the other columns. Hence, the rank of the matrix is 3. Consider one more example matrix, B, where

B = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 2 & 3 \\ 2 & 0 & 2 \end{bmatrix}
This matrix has only two linearly independent columns because the third column of B is linearly dependent on the first two columns. Hence, the rank of this matrix is 2. It can be shown that the number of linearly independent columns of a matrix is equal to the number of linearly independent rows, so the rank can never be greater than the smaller dimension of the matrix. Consequently, if A is an n × m matrix, then

\rho(A) \le \min(n, m)

where min denotes the minimum of the two numbers. In matrix theory, the rank of a square matrix pertains to the highest order nonsingular submatrix that can be formed from it. A matrix is singular if its determinant is zero. So the rank pertains to the highest order submatrix that you can obtain whose determinant is not zero. For example, consider a 4 × 4 matrix B for which det(B) = 0 but that contains a 3 × 3 submatrix whose determinant is 1. Hence, the rank of B is 3. A square matrix has full rank only if its determinant is different from zero. Matrix B is not a full-rank matrix.
Magnitude (Norms) of Matrices
There are several ways to compute the norm of a matrix. These include the 2-norm (Euclidean norm), the 1-norm, the Frobenius norm (F-norm), and the Infinity norm (inf-norm). Each norm has its own physical interpretation. Consider a unit ball containing the origin. The Euclidean norm of a vector is simply the factor by which the ball must be expanded or shrunk in order to encompass the given vector exactly, as shown in Figure 11-1.
Figure 11-1a shows a unit ball of radius = 1 unit. Figure 11-1b shows a vector of length \sqrt{2^2 + 2^2} = \sqrt{8} = 2\sqrt{2}. As shown in Figure 11-1c, the unit ball must be expanded by a factor of 2\sqrt{2} before it can exactly encompass the given vector. Hence, the Euclidean norm of the vector is 2\sqrt{2}. The norm of a matrix is defined in terms of an underlying vector norm. It is the maximum relative stretching that the matrix does to any vector. With the vector 2-norm, the unit ball expands by a factor equal to the norm. On the other hand, with the matrix 2-norm, the unit ball might become ellipsoidal (an ellipse in two dimensions), with some axes longer than others. The longest axis determines the norm of the matrix. Some matrix norms are much easier to compute than others. The 1-norm is obtained by finding the sum of the absolute values of all the elements in each column of the matrix. The largest of these sums is the 1-norm. In mathematical terms, the 1-norm is simply the maximum absolute column sum of the matrix.
\|A\|_1 = \max_j \sum_{i=0}^{n-1} |a_{i,j}|
For example, for the matrix

A = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}

the 1-norm is

\|A\|_1 = \max(3, 7) = 7
The inf-norm of a matrix is the maximum absolute row sum of the matrix.
\|A\|_\infty = \max_i \sum_{j=0}^{n-1} |a_{i,j}|   (11-2)
In this case, you add the magnitudes of all elements in each row of the matrix. The maximum value that you get is the inf-norm. For the example matrix A above,

\|A\|_\infty = \max(4, 6) = 6
The 2-norm is the most difficult to compute because it is given by the largest singular value of the matrix. Refer to the Matrix Factorization section of this chapter for more information about singular values.
The condition number of a matrix measures the sensitivity of the solution of a system of linear equations to errors in the data. The following equation defines the condition number of a matrix A.

cond(A) = \|A\|_p \cdot \|A^{-1}\|_p

where p can be one of the four norm types described in the Magnitude (Norms) of Matrices section of this chapter. For example, to find the condition number of a matrix A, you can find the 2-norm of A, the 2-norm of the inverse of the matrix A, denoted by A⁻¹, and then multiply them together. The inverse of a square matrix A is a square matrix B such that AB = I, where I is the identity matrix. As described earlier in this chapter,
the 2-norm is difficult to calculate on paper. You can use the Matrix Norm VI to compute the 2-norm. For example,
A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad A^{-1} = \begin{bmatrix} -2 & 1 \\ 1.5 & -0.5 \end{bmatrix}

\|A\|_2 = 5.4650, \quad \|A^{-1}\|_2 = 2.7325, \quad cond(A) = 14.9331

The condition number can vary between 1 and infinity. A matrix with a large condition number is nearly singular, while a matrix with a condition number close to 1 is far from being singular. The matrix A above is nonsingular. However, consider the matrix

B = \begin{bmatrix} 1 & 0.99 \\ 1.99 & 2 \end{bmatrix}
The condition number of this matrix is 47,168, and hence the matrix is close to being singular. A matrix is singular if its determinant is equal to zero. However, the determinant is not a good indicator for assessing how close a matrix is to being singular. For the matrix B above, the determinant (0.0299) is nonzero. However, the large condition number indicates that the matrix is close to being singular. Remember that the condition number of a matrix is always greater than or equal to one; the latter being true for identity and permutation matrices. A permutation matrix is an identity matrix with some rows and columns exchanged. The condition number is a very useful quantity in assessing the accuracy of solutions to linear systems.
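The following sketch, assuming Python with NumPy, reproduces these norm and condition number values:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(np.linalg.norm(A, 1))        # 6.0, maximum absolute column sum
print(np.linalg.norm(A, np.inf))   # 7.0, maximum absolute row sum
print(np.linalg.norm(A, 2))        # approximately 5.4650, largest singular value
print(np.linalg.cond(A))           # approximately 14.9331, using the 2-norm

B = np.array([[1.0, 0.99],
              [1.99, 2.0]])
print(np.linalg.det(B))            # 0.0299: nonzero, yet B is nearly singular
print(np.linalg.cond(B))           # approximately 47,000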
You can perform several basic operations on matrices. Multiplying a matrix by a scalar multiplies each element of the matrix by that scalar. For example,

2 \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 4 \\ 6 & 8 \end{bmatrix}

Two (or more) matrices can be added or subtracted only if they have the same number of rows and columns. If both matrices A and B have m rows and n columns, their sum C is an m × n matrix defined as C = A ± B, where c_{i,j} = a_{i,j} ± b_{i,j}. For example,

\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 2 & 4 \\ 5 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 8 & 5 \end{bmatrix}

For multiplication of two matrices, the number of columns of the first matrix must be equal to the number of rows of the second matrix. If matrix A has m rows and n columns and matrix B has n rows and p columns, their product C is an m × p matrix defined as C = AB, where
c_{i,j} = \sum_{k=0}^{n-1} a_{i,k} \, b_{k,j}
For example,

\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 2 & 4 \\ 5 & 1 \end{bmatrix} = \begin{bmatrix} 12 & 6 \\ 26 & 16 \end{bmatrix}

So you multiply the elements of the first row of A by the corresponding elements of the first column of B and add all the results to get the element in the first row and first column of C. Similarly, to calculate the element in the ith row and the jth column of C, multiply the elements in the ith row of A by the corresponding elements in the jth column of B, and then add them all. This is shown pictorially in Figure 11-2.
Figure 11-2. Matrix Multiplication
Matrix multiplication, in general, is not commutative, that is, AB ≠ BA. Also, multiplication of a matrix by an identity matrix results in the original matrix.
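The following sketch, assuming Python with NumPy, repeats the example product and demonstrates that AB ≠ BA in general:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[2.0, 4.0],
              [5.0, 1.0]])

print(A @ B)                           # [[12, 6], [26, 16]], as computed above
print(B @ A)                           # a different matrix: not commutative
print(np.allclose(A @ np.eye(2), A))   # True: identity leaves A unchanged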
The dot product of two vectors X and Y is defined by the following equation.

X \cdot Y = \sum_{i=0}^{n-1} x_i \, y_i
where n is the number of elements in X and Y. Both vectors must have the same number of elements. The dot product is a scalar quantity and has many practical applications. For example, consider the vectors a = 2i + 4j and b = 2i + j in a two-dimensional rectangular coordinate system, as shown in Figure 11-3.
Figure 11-3. Vectors a = 2i + 4j and b = 2i + j Separated by an Angle θ = 36.86°
Then the dot product of these two vectors is given by

d = \begin{bmatrix} 2 \\ 4 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ 1 \end{bmatrix} = (2 \times 2) + (4 \times 1) = 8

The angle between these two vectors is given by

θ = \cos^{-1}\left( \frac{a \cdot b}{|a||b|} \right) = \cos^{-1}\left( \frac{8}{10} \right) = 36.86°

where |a| denotes the magnitude of a. As a second application, consider a body on which a constant force a acts, as shown in Figure 11-4. The work W done by a in displacing the body is defined as the product of |d| and the component of a in the direction of displacement d. That is,

W = |a||d| \cos θ = a \cdot d
Figure 11-4. A Constant Force a Acting on a Body
On the other hand, the outer product of these two vectors is a matrix. The (i, j)th element of this matrix is obtained using the following formula.

a_{i,j} = x_i \, y_j

For example,

\begin{bmatrix} 1 \\ 2 \end{bmatrix} \begin{bmatrix} 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 6 & 8 \end{bmatrix}
The eigenvalue problem is to find the nontrivial solutions of the following equation.

Ax = λx   (11-3)

In Equation 11-3, λ is an eigenvalue. Similar matrices have the same eigenvalues. In Equation 11-3, x is the eigenvector that corresponds to the eigenvalue. An eigenvector of a matrix is a nonzero vector that does not rotate when the matrix is applied to it. Calculating eigenvalues and eigenvectors is a fundamental operation of linear algebra and allows you to solve many problems, such as systems of differential equations, when you understand what they represent. Consider an eigenvector x of a matrix A as a nonzero vector that does not rotate when x is multiplied by A, except perhaps to point in precisely the opposite direction. x may change length or reverse its direction, but it will not turn sideways. In other words, there is some scalar constant λ such that Equation 11-3 holds true. The value λ is an eigenvalue of A. Consider the following example. One of the eigenvectors of the matrix A, where

A = \begin{bmatrix} 2 & 3 \\ 3 & 5 \end{bmatrix}

is

x = \begin{bmatrix} 0.62 \\ 1.00 \end{bmatrix}
Multiplying the matrix A and the vector x simply causes the vector x to be expanded by a factor of 6.85. Hence, the value 6.85 is one of the eigenvalues of the matrix A. For any constant α, the vector αx also is an eigenvector with eigenvalue λ because

A(αx) = αAx = λ(αx)

In other words, an eigenvector of a matrix determines a direction in which the matrix expands or shrinks any vector lying in that direction by a scalar multiple, and the expansion or contraction factor is given by the corresponding eigenvalue. A generalized eigenvalue problem is to find a scalar λ and a nonzero vector x such that

Ax = λBx

where B is another n × n matrix. The following are some important properties of eigenvalues and eigenvectors:
- The eigenvalues of a matrix are not necessarily all distinct. In other words, a matrix can have multiple eigenvalues.
- All the eigenvalues of a real matrix need not be real. However, complex eigenvalues of a real matrix must occur in complex conjugate pairs.
- The eigenvalues of a diagonal matrix are its diagonal entries, and the eigenvectors are the corresponding columns of an identity matrix of the same dimension.
- A real symmetric matrix always has real eigenvalues and eigenvectors.
- Eigenvectors can be scaled arbitrarily.
There are many practical applications in the field of science and engineering for an eigenvalue problem. For example, the stability of a structure and its natural modes and frequencies of vibration are determined by the eigenvalues and eigenvectors of an appropriate matrix. Eigenvalues also are very useful in analyzing numerical methods, such as convergence analysis of iterative methods for solving systems of algebraic equations and the stability analysis of methods for solving systems of differential equations. The EigenValues and Vectors VI has an Input Matrix input, which is an N N real square matrix. The matrix type input specifies the type of the input matrix. The matrix type input could be 0, indicating a general matrix, or 1, indicating a symmetric matrix. A symmetric matrix always has real
eigenvalues and eigenvectors. A general matrix has no special property such as symmetry or triangular structure. The output option input specifies what needs to be computed. A value of 0 indicates that only the eigenvalues need to be computed. A value of 1 indicates that both the eigenvalues and the eigenvectors should be computed. It is computationally expensive to compute both the eigenvalues and the eigenvectors. So it is important that you use the output option input of the EigenValues and Vectors VI carefully. Depending on your particular application, you might just want to compute the eigenvalues or both the eigenvalues and the eigenvectors. Also, a symmetric matrix needs less computation than a nonsymmetric matrix. Choose the matrix type carefully.
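The following sketch, assuming Python with NumPy, computes the eigenvalues and eigenvectors of the symmetric example matrix; eigh exploits symmetry in the same spirit as the matrix type input:

import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 5.0]])

# eigh is specialized for symmetric matrices: real eigenvalues, less work
w, v = np.linalg.eigh(A)
print(w)                               # approximately [0.146, 6.854]

# Each column of v is an eigenvector satisfying A x = lambda x
x = v[:, 1]
print(np.allclose(A @ x, w[1] * x))    # True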
Matrix Inverse and Solving Systems of Linear Equations
The inverse of a square matrix A, denoted by A⁻¹, satisfies the following relationship.

A A^{-1} = A^{-1} A = I

where I is the identity matrix. The inverse of a matrix exists only if the determinant of the matrix is not zero, that is, if the matrix is nonsingular. In general, you can find the inverse of only a square matrix. However, you can compute the pseudoinverse of a rectangular matrix. Refer to the Matrix Factorization section of this chapter for more information about the pseudoinverse of a rectangular matrix.
Otherwise, the matrix is nonsingular. If the matrix is nonsingular, its inverse A⁻¹ exists, and the system Ax = b has a unique solution, x = A⁻¹b, regardless of the value for b. On the other hand, if the matrix is singular, the number of solutions is determined by the right-hand-side vector b. If A is singular and Ax = b, then A(x + γz) = b for any scalar γ, where z is a nonzero vector such that Az = 0. Thus, if a singular system has a solution, the solution cannot be unique. Explicitly computing the inverse of a matrix is prone to numerical inaccuracies. Therefore, you should not solve a linear system of equations by multiplying the inverse of the matrix A by the known right-hand-side vector. The general strategy to solve such a system of equations is to transform the original system into one whose solution is the same as that of the original system but is easier to compute. One way to do so is to use the Gaussian Elimination technique. The Gaussian Elimination technique has three basic steps. First, express the matrix A as a product

A = LU

where L is a unit lower triangular matrix and U is an upper triangular matrix. Such a factorization is LU factorization. Given this, the linear system Ax = b can be expressed as LUx = b. Such a system then can be solved by first solving the lower triangular system Ly = b for y by forward-substitution. This is the second step in the Gaussian Elimination technique. For example, if

L = \begin{bmatrix} a & 0 \\ b & c \end{bmatrix}, \quad y = \begin{bmatrix} p \\ q \end{bmatrix}, \quad b = \begin{bmatrix} r \\ s \end{bmatrix}

then

p = \frac{r}{a}, \quad q = \frac{s - bp}{c}
The first element of y can be determined easily due to the lower triangular nature of the matrix L. Then you can use this value to compute the remaining elements of the unknown vector sequentially, hence the name forward-substitution. The final step involves solving the upper triangular system Ux = y by back-substitution. For example, if

U = \begin{bmatrix} a & b \\ 0 & c \end{bmatrix}, \quad x = \begin{bmatrix} m \\ n \end{bmatrix}, \quad y = \begin{bmatrix} p \\ q \end{bmatrix}

then

n = \frac{q}{c}, \quad m = \frac{p - bn}{a}

In this case, the last element of x can be determined easily and then used to determine the other elements sequentially, hence the name back-substitution. So far, this chapter has described the case of square matrices. Because a nonsquare matrix is necessarily singular, the system of equations must have either no solution or a nonunique solution. In such a situation, you usually find a unique solution x that satisfies the linear system in an approximate sense. You can use the Linear Algebra VIs to compute the inverse of a matrix, compute LU decomposition of a matrix, and solve a system of linear equations. It is important to identify the input matrix properly, as it helps avoid unnecessary computations, which in turn helps to minimize numerical inaccuracies. The four possible matrix types are general matrices, positive definite matrices, and lower and upper triangular matrices. A real matrix is positive definite only if it is symmetric and the quadratic form x^T A x is positive for all nonzero vectors x. If the input matrix is square but does not have full rank (a rank-deficient matrix), the VI finds the least square solution x. The least square solution is the one that minimizes the norm of Ax − b. The same also holds true for nonsquare matrices.
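The following sketch, assuming Python with SciPy and a hypothetical 2 × 2 system, applies the same three steps: LU factorization, then forward- and back-substitution:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# Step 1: factor A = LU (with partial pivoting)
lu, piv = lu_factor(A)

# Steps 2 and 3: solve L y = b by forward-substitution,
# then U x = y by back-substitution
x = lu_solve((lu, piv), b)
print(x, np.allclose(A @ x, b))   # solution and a residual check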
Matrix Factorization
The Matrix Inverse and Solving Systems of Linear Equations section of this chapter describes how a linear system of equations can be transformed into a system whose solution is simpler to compute. The basic idea is to factor the input matrix into a product of several simpler matrices. The LU decomposition technique factors the input matrix as a product of upper and lower triangular matrices. Other commonly used factorization methods are Cholesky, QR, and the Singular Value
Decomposition (SVD). You can use these factorization methods to solve many matrix problems, such as solving linear systems of equations, inverting a matrix, and finding the determinant of a matrix. If the input matrix A is symmetric and positive definite, an LU factorization can be computed such that A = U^T U, where U is an upper triangular matrix. This is Cholesky factorization. This method requires only about half the work and half the storage compared to LU factorization of a general matrix by Gaussian Elimination. You can determine if a matrix is positive definite by using the Test Positive Definite VI. A matrix Q is orthogonal if its columns are orthonormal, that is, if Q^T Q = I, the identity matrix. The QR factorization technique factors a matrix as the product of an orthogonal matrix Q and an upper triangular matrix R, that is, A = QR. QR factorization is useful for both square and rectangular matrices. A number of algorithms are possible for QR factorization, such as the Householder transformation, the Givens transformation, and the Fast Givens transformation. The Singular Value Decomposition (SVD) method decomposes a matrix into the product of three matrices: A = USV^T. U and V are orthogonal matrices. S is a diagonal matrix whose diagonal values are called the singular values of A. The singular values of A are the nonnegative square roots of the eigenvalues of A^T A, and the columns of U and V, which are called the left and right singular vectors, are orthonormal eigenvectors of AA^T and A^T A, respectively. SVD is useful for solving analysis problems such as computing the rank, norm, condition number, and pseudoinverse of matrices.
Pseudoinverse
The pseudoinverse of a scalar σ is defined as 1/σ if σ ≠ 0, and zero otherwise. In the case of scalars, the pseudoinverse is the same as the inverse. You now can define the pseudoinverse of a diagonal matrix by transposing the matrix and then taking the scalar pseudoinverse of each entry. Then the pseudoinverse of a general real m × n matrix A, denoted by A⁺, is given by the following equation.

A^+ = V S^+ U^T
The pseudoinverse exists regardless of whether the matrix is square or rectangular. If A is square and nonsingular, the pseudoinverse is the same as the usual matrix inverse. You can use the PseudoInverse Matrix VI to compute the pseudoinverse of real and complex matrices.
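The following sketch, assuming Python with NumPy and a hypothetical rectangular matrix, computes the pseudoinverse; numpy.linalg.pinv also works through the SVD:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # rectangular, so no ordinary inverse exists

A_pinv = np.linalg.pinv(A)        # A+ = V S+ U^T, computed from the SVD
print(A_pinv.shape)               # (2, 3)
print(np.allclose(A_pinv @ A, np.eye(2)))   # True because A has full column rank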
Optimization
12
This chapter describes basic concepts and methods used to solve optimization problems. Refer to Appendix A, References, for a list of references to more information about optimization.
Introduction to Optimization
Optimization is the search for a set of parameters that minimize a function. For example, you can use optimization to define an optimal set of parameters for the design of a specific application, such as the optimal parameters for designing a control mechanism for a system or the conditions that minimize the cost of a manufacturing process. Generally, optimization problems involve a set of possible solutions X and the objective function f(x), also known as the cost function. f(x) is the function of the variable or variables you want to minimize or maximize. The optimization process either minimizes or maximizes f(x) until reaching the optimal value for f(x). When minimizing f(x), the optimal solution x* ∈ X satisfies the following condition.

f(x^*) \le f(x) \quad \text{for all } x \in X   (12-1)

The optimization process searches for the value of x* that minimizes f(x), subject to the constraint x* ∈ X, where X is the constraint set. A value that satisfies the conditions defined in Equation 12-1 is a global minimum. Refer to the Local and Global Minima section of this chapter for more information about global minima. In the case of maximization, x* satisfies the following condition.

f(x^*) \ge f(x) \quad \text{for all } x \in X

A value satisfying the preceding condition is a global maximum. This chapter describes optimization in terms of minimizing f(x).
Note  Currently, LabVIEW does not include VIs you can use to solve linear programming problems.

Optimization also is known as mathematical programming. Programming also refers to scheduling or planning. Linear and nonlinear programming are subsets of mathematical programming. The objective of mathematical programming is the same as optimization: maximizing or minimizing f(x).
Linear Programming
Linear programming problems have the following characteristics:
- Linear objective function
- Solution set X with a polyhedron shape defined by linear inequality constraints
- Continuous f(x)
- Partially combinatorial structure
Solving linear programming problems involves finding the optimal value of f(x), where f(x) is a linear combination of variables, as shown in Equation 12-2.

f(x) = a_1 x_1 + \cdots + a_n x_n   (12-2)
The value of f(x) in Equation 12-2 can have the following constraints:
- Primary constraints of x_1 ≥ 0, …, x_n ≥ 0
- Additional constraints, M = m_1 + m_2 + m_3 in number:
  m_1 of the following form

  a_{i1} x_1 + \cdots + a_{in} x_n \le b_i, \; (b_i \ge 0), \quad i = 1, \ldots, m_1

  m_2 of the following form

  a_{j1} x_1 + \cdots + a_{jn} x_n \ge b_j, \; (b_j \ge 0), \quad j = m_1 + 1, \ldots, m_1 + m_2

  m_3 equality constraints of the following form

  a_{k1} x_1 + \cdots + a_{kn} x_n = b_k, \; (b_k \ge 0), \quad k = m_1 + m_2 + 1, \ldots, M
Any vector x that satisfies all the constraints on the value of f(x) constitutes a feasible answer to the linear programming problem. The vector yielding the best result for f(x) is the optimal solution. The following relationship represents the standard form of the linear programming problem.

\min \{ c^T x : Ax = b, \; x \ge 0 \}

where x ∈ IR^n is the vector of unknowns, c ∈ IR^n is the cost vector, and A ∈ IR^{m×n} is the constraint matrix. At least one member of solution set X is at a vertex of the polyhedron that describes X.
Although both the linear programming simplex method and the downhill simplex method use the concept of a simplex, the methods have nothing else in common. Refer to the Downhill Simplex Method section of this chapter for information about the downhill simplex method.
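Although LabVIEW does not include linear programming VIs, the following sketch shows the standard form in Python, assuming SciPy is available; the cost vector and constraints are hypothetical:

from scipy.optimize import linprog

# min c^T x subject to A_eq x = b_eq and x >= 0 (the default bounds)
c = [1.0, 2.0]
A_eq = [[1.0, 1.0]]
b_eq = [4.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)
print(res.x, res.fun)    # optimal solution [4, 0] with minimum value 4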
Nonlinear Programming
Nonlinear programming problems have either a nonlinear f(x) or a solution set X defined by nonlinear equations and inequalities. Nonlinear programming is a broad category of optimization problems and includes the following subcategories:
- Quadratic programming problems
- Least-squares problems
- Convex problems
Line Minimization
The process of iteratively searching along a vector for the minimum value on the vector is line minimization or line searching. Line minimization can help establish a search direction or verify that the chosen search direction is likely to produce an optimal solution. Nonlinear programming search algorithms use line minimization to solve the subproblems leading to an optimal value for f(x). The search algorithm searches along a vector until it reaches the minimum value on the vector. After the search algorithm reaches the minimum on one vector, the search continues along another vector, usually orthogonal to the first vector. The line search continues along the new vector until reaching its minimum value. The line minimization process continues until the search algorithm finds the optimal solution.
Global Minimum
In terms of solution set X, x* is a global minimum of f over X if it satisfies the following relationship.

f(x^*) \le f(x) \quad \text{for all } x \in X
Local Minimum
A local minimum is a minimum of the function over a subset of the domain. In terms of solution set X, x* is a local minimum of f over X if x* ∈ X and an ε > 0 exists such that the following relationship is true.

f(x^*) \le f(x) \quad \text{for all } x \in X \text{ with } \|x - x^*\| < ε
Figure 12-1 illustrates a function of x where the domain is any value between 32 and 65; x ∈ [32, 65]. In Figure 12-1, A is a local minimum because you can find an ε > 0 such that f(x_A) ≤ f(x) for all x with |x − x_A| < ε; ε = 1 would suffice. Similarly, C is a local minimum. B is the global minimum because f(x_B) ≤ f(x) for all x ∈ [32, 65].
Although both the downhill simplex method and the linear programming simplex method use the concept of a simplex, the methods have nothing else in common. Refer to the Linear
Programming Simplex Method section of this chapter for information about the linear programming simplex method and the geometry of the simplex. Most practical applications involve solution sets that are nondegenerate simplexes. A nondegenerate simplex encloses a finite volume of N dimensions. If you take any point of the nondegenerate simplex as the origin of the simplex, the remaining N points of the simplex define vector directions spanning the N-dimensional space. The downhill simplex method requires that you define an initial simplex by specifying N + 1 starting points. No effective means of determining the initial starting point exists. You must use your judgement about the best location from which to start. After deciding upon an initial starting point P0, you can use Equation 12-3 to determine the other points needed to define the initial simplex.

P_i = P_0 + λ e_i   (12-3)
where e_i is a unit vector and λ is an estimate of the characteristic length scale of the problem. Starting with the initial simplex defined by the points from Equation 12-3, the downhill simplex method performs a series of reflections. A reflection moves from a point on the simplex through the opposite face of the simplex to a point where the function f is smaller. The configuration of the reflections conserves the volume of the simplex, which maintains the nondegeneracy of the simplex. The method continues to perform reflections until the function value reaches a predetermined tolerance. Because of the multidimensional nature of the downhill simplex method, the value it finds for f(x) might not be the optimal solution. You can verify that the value for f(x) is the optimal solution by repeating the process. When you repeat the process, use the optimal solution from when you first ran the method as P0. Reinitialize the method to N + 1 starting points using Equation 12-3.
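The following sketch, assuming Python with SciPy and a hypothetical objective function, runs a downhill simplex (Nelder-Mead) search from an initial point P0:

import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # hypothetical objective

p0 = np.zeros(2)                                   # initial starting point P0
res = minimize(f, p0, method='Nelder-Mead', options={'xatol': 1e-8})
print(res.x)                                       # approximately [1, -2]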
Golden Section Search
Given a bracketing triplet of points a < b < c for which f(b) < f(a) and f(b) < f(c), the relationship in Equation 12-4, the minimum of the function is within the interval (a, c). The search method starts by choosing a new point x either between a and b or between b and c. For example, choose a point x between b and c and evaluate f(x). If f(b) < f(x), the new bracketing triplet is a < b < x. If f(b) > f(x), the new bracketing triplet is b < x < c. In each instance, the middle point, b or x, is the current optimal minimum found during the current iteration of the search.
Let W = (b − a)/(c − a) denote the fractional position of b in the current bracketing triplet. A new point x is an additional fractional distance Z beyond b, as shown in Equation 12-5.

Z = \frac{x - b}{c - a}   (12-5)
Given Equation 12-5, the next bracketing triplet can have either a length of W + Z relative to the current bracketing triplet or a length of 1 − W. To minimize the possible worst case, choose Z such that the following equations are true.

W + Z = 1 − W

Z = 1 − 2W   (12-6)
Given Equation 12-6, the new x is the point in the interval symmetric to b. Therefore, Equation 12-7 is true.

|b − a| = |x − c|   (12-7)
You can infer from Equation 12-7 that x is within the larger segment because Z is positive only if W < 1/2. If the fraction Z is optimal for the current bracketing triplet, the same fraction W was optimal for the previous bracketing triplet. Therefore, the fractional distance of x from b to c equals the fractional distance of b from a to c, as shown in Equation 12-8.

\frac{Z}{1 - W} = W   (12-8)
Equations 12-6 and 12-8 yield the following quadratic equation.

W^2 - 3W + 1 = 0   (12-9)

W = \frac{3 - \sqrt{5}}{2} \approx 0.38197
Therefore, the middle point b of the optimal bracketing interval a < b < c is the fractional distance of 0.38197 from one of the end points and the fractional distance of 0.61803 from the other end point. 0.38197 and 0.61803 comprise the golden mean, or golden section, of the Pythagoreans. The golden section search method uses a bracketing triplet and measures from point b to find a new point x a fractional distance of 0.38197 into the larger interval, either (a, b) or (b, c), on each iteration of the search method. Even when starting with an initial bracketing triplet whose segments are not within the golden section, the process of successively choosing a new point x at the golden mean quickly causes the method to converge linearly to the correct, self-replicating golden section. After the search method converges to the self-replicating golden section, each new function evaluation brackets the minimum to an interval only 0.61803 times the size of the preceding interval.
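The following sketch implements the golden section step in plain Python: each iteration places the new point x a fraction 0.38197 into the larger segment and keeps the sub-triplet that still brackets the minimum. The test function and initial triplet are hypothetical.

def golden_section_minimize(f, a, b, c, tol=1e-8):
    """Shrink a bracketing triplet a < b < c around a minimum of f."""
    W = 0.38197                          # golden section fraction
    while abs(c - a) > tol:
        if (b - a) > (c - b):            # left segment is larger
            x = b - W * (b - a)
            if f(x) < f(b):
                a, b, c = a, x, b
            else:
                a, b, c = x, b, c
        else:                            # right segment is larger
            x = b + W * (c - b)
            if f(x) < f(b):
                a, b, c = b, x, c
            else:
                a, b, c = a, b, x
    return b

print(golden_section_minimize(lambda x: (x - 2.0) ** 2, 0.0, 1.0, 5.0))   # 2.0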
An iterative descent method produces a sequence of approximations that successively decrease the value of the objective function, so that f(x_{k+1}) < f(x_k), where k is the iteration number, f(x_{k+1}) is the objective function value at iteration k + 1, and f(x_k) is the objective function value at iteration k. Successively decreasing f improves the current estimate of the solution. The iterative descent process attempts to decrease f to its minimum.
The following equations and relationships provide a general definition of the gradient search method of solving nonlinear programming problems.

x_{k+1} = x_k + α_k d_k, \quad k = 0, 1, \ldots   (12-10)

where d_k is the search direction and α_k is the step size. In Equation 12-10, if the gradient of the objective function ∇f(x_k) ≠ 0, the gradient search method needs a positive value for α_k and a value for d_k that fulfills the following relationship.

∇f(x_k)^T d_k < 0

Iterations of gradient search methods continue until x_{k+1} = x_k.
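The following sketch, assuming Python with NumPy, a fixed step size, and a hypothetical quadratic objective, illustrates Equation 12-10 with the steepest-descent choice d_k = −∇f(x_k):

import numpy as np

def gradient_descent(grad, x0, alpha=0.1, tol=1e-12, max_iter=10000):
    """Iterate x_{k+1} = x_k + alpha * d_k with d_k = -grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad(x)                     # satisfies grad(x)^T d < 0
        x_new = x + alpha * d
        if np.allclose(x_new, x, atol=tol):
            break                        # x_{k+1} = x_k: stop iterating
        x = x_new
    return x

# f(x) = (x0 - 1)^2 + (x1 + 2)^2, so grad f = [2(x0 - 1), 2(x1 + 2)]
g = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])
print(gradient_descent(g, [0.0, 0.0]))   # approximately [1, -2]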
Use the accuracy input of the Optimization VIs to specify a value for . The nonlinear programming optimization VIs iteratively compare the difference between the highest and lowest input values to the value of accuracy until two consecutive approximations do not differ by more than the value of accuracy. When two consecutive approximations do not differ by more than the value of accuracy, the VI stops.
You can approximate the objective function f by its Taylor series expansion about a point P, as shown in Equation 12-11.

f(x) = f(P) + \sum_i \frac{\partial f}{\partial x_i} x_i + \frac{1}{2} \sum_{i,j} \frac{\partial^2 f}{\partial x_i \partial x_j} x_i x_j + \cdots \approx c - b \cdot x + \frac{1}{2} x^T A x   (12-11)

where c = f(P) and b = −∇f evaluated at P.
The components of matrix A are the second partial derivatives of f. Matrix A is the Hessian matrix of f at P. The following equation gives the gradient of f.

∇f = Ax − b
The following equation shows how the gradient ∇f changes with movement along a direction.

δ(∇f) = A(δx)

After the search method reaches the minimum by moving in direction u, it moves in a new direction v. To fulfill the condition that minimization along one vector does not interfere with minimization along another vector, the change in the gradient of f must remain perpendicular to u, as shown in Equation 12-12.

0 = u \cdot δ(∇f) = u^T A v   (12-12)
When Equation 12-12 is true for two vectors u and v, u and v are conjugate vectors. When Equation 12-12 is true pairwise for all members of a set of vectors, the set of vectors is a conjugate set. Performing successive line minimizations of a function along a conjugate set of vectors prevents the search method from having to repeat the minimization along any member of the conjugate set. If a conjugate set of vectors contains N linearly independent vectors, performing N line minimizations arrives at the minimum for functions having the quadratic form shown in Equation 12-11. If a function does not have exactly the form of Equation 12-11, repeated cycles of N line minimizations eventually converge quadratically to the minimum.
Theorem A
Theorem A has the following conditions:
- A is a symmetric, positive-definite, n × n matrix.
- g_0 is an arbitrary vector.
- h_0 = g_0.
The following equations define the two sequences of vectors for i = 0, 1, 2, ….

g_{i+1} = g_i - λ_i A h_i   (12-13)

h_{i+1} = g_{i+1} + γ_i h_i   (12-14)

where the chosen values for λ_i and γ_i make g_{i+1} · g_i = 0 and h_{i+1} · A h_i = 0, as shown in the following equations.

γ_i = -\frac{g_{i+1} \cdot A h_i}{h_i \cdot A h_i}   (12-15)

λ_i = \frac{g_i \cdot g_i}{g_i \cdot A h_i}   (12-16)

If the denominators equal zero, take λ_i = 0, γ_i = 0. The following equations are true for all i ≠ j.

g_i \cdot g_j = 0

h_i \cdot A h_j = 0   (12-17)

The elements in the sequence that Equation 12-13 produces are mutually orthogonal. The elements in the sequence that Equation 12-14 produces are mutually conjugate. Because Equation 12-17 is true, you can rewrite Equations 12-15 and 12-16 as the following equations.

γ_i = \frac{g_{i+1} \cdot g_{i+1}}{g_i \cdot g_i} = \frac{(g_{i+1} - g_i) \cdot g_{i+1}}{g_i \cdot g_i}   (12-18)

λ_i = \frac{g_i \cdot h_i}{h_i \cdot A h_i}
Theorem B
The following theorem defines a method for constructing the vector sequences from Equations 12-13 and 12-14 when the Hessian matrix A is unknown:
- g_i is the vector sequence defined by Equation 12-13.
- h_i is the vector sequence defined by Equation 12-14.
- g_i = −∇f(P_i) for some point P_i.
- Proceed from P_i in the direction h_i to the local minimum of f at point P_{i+1}. Set the value for g_{i+1} according to Equation 12-19.

g_{i+1} = −∇f(P_{i+1})   (12-19)
The vector g_{i+1} that Equation 12-19 yields is the same as the vector that Equation 12-13 yields when the Hessian matrix A is known. Therefore, you can optimize f without having knowledge of the Hessian matrix A and without the computational resources to calculate and store it. You construct the direction sequence h_i with line minimization of the gradient vector and the latest vector in the g sequence.
The Fletcher-Reeves method uses the first term from Equation 12-18 for γ_i, as shown in Equation 12-20.

γ_i = \frac{g_{i+1} \cdot g_{i+1}}{g_i \cdot g_i}   (12-20)

The Polak-Ribiere method uses the second term from Equation 12-18 for γ_i, as shown in Equation 12-21.

γ_i = \frac{(g_{i+1} - g_i) \cdot g_{i+1}}{g_i \cdot g_i}   (12-21)
Equation 12-20 equals Equation 12-21 for functions with exact quadratic forms. However, most functions in practical applications do not have exact quadratic forms. Therefore, after you find the minimum for the quadratic form, you might need another set of iterations to find the actual minimum.
When the Polak-Ribiere method reaches the minimum for the quadratic form, it resets the direction h along the local gradient, essentially starting the conjugate-gradient process again. Therefore, the Polak-Ribiere method can make the transition to additional iterations more efficiently than the Fletcher-Reeves method.
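In practice you rarely code these updates by hand. The following sketch, assuming Python with SciPy, applies a nonlinear conjugate gradient method to the quadratic form of Equation 12-11 with hypothetical A and b; only the gradient is supplied, in the spirit of Theorem B:

import numpy as np
from scipy.optimize import minimize

# f(x) = 0.5 x^T A x - b.x with A = [[3, 1], [1, 3]] and b = [1, 2]
def f(x):
    return 0.5 * (3*x[0]**2 + 2*x[0]*x[1] + 3*x[1]**2) - x[0] - 2*x[1]

def grad(x):
    return np.array([3*x[0] + x[1] - 1.0, x[0] + 3*x[1] - 2.0])

# method='CG' is a nonlinear conjugate gradient method; it rebuilds the
# conjugate direction sequence from gradients without forming the Hessian
res = minimize(f, np.zeros(2), jac=grad, method='CG')
print(res.x)          # approximately [0.125, 0.625], the solution of A x = b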
Polynomials
13
Polynomials have many applications in various areas of engineering and science, such as curve fitting, system identification, and control design. This chapter describes polynomials and operations involving polynomials. Equation 13-1 gives the general form of a polynomial P(x).

P(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n   (13-1)
where P(x) is the nth-order polynomial; the highest power n is the order of the polynomial if a_n ≠ 0; and a_0, a_1, …, a_n are the constant coefficients of the polynomial, which can be either real or complex. You can rewrite Equation 13-1 in its factored form, as shown in Equation 13-2.

P(x) = a_n (x - r_1)(x - r_2) \cdots (x - r_n)   (13-2)

where r_1, r_2, …, r_n are the roots of the polynomial. The root r_i of P(x) satisfies the following equation.

P(x)\big|_{x = r_i} = 0, \quad i = 1, 2, \ldots, n

In general, P(x) might have repeated roots, such that Equation 13-3 is true.

P(x) = a_n (x - r_1)^{k_1} (x - r_2)^{k_2} \cdots (x - r_l)^{k_l} (x - r_{l+1})(x - r_{l+2}) \cdots (x - r_{l+j})   (13-3)

The following conditions are true for Equation 13-3:
- r_1, r_2, …, r_l are the repeated roots of the polynomial
- k_i is the multiplicity of the root r_i, i = 1, 2, …, l
A polynomial of order n must have n roots. If the polynomial coefficients are all real, the roots of the polynomial are either real or complex conjugate numbers.
The following equations define two polynomials used in the following sections.

P(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 = a_3 (x - p_1)(x - p_2)(x - p_3)   (13-4)

Q(x) = b_0 + b_1 x + b_2 x^2 = b_2 (x - q_1)(x - q_2)   (13-5)
Order of Polynomial
The largest exponent of the variable determines the order of a polynomial. The order of P(x) in Equation 13-4 is three because of the term x^3. The order of Q(x) in Equation 13-5 is two because of the term x^2.
Polynomial Evaluation
Polynomial evaluation determines the value of a polynomial for a particular value of x, as shown by the following equation.

P(x)|x = x0 = a0 + a1x0 + a2x0^2 + a3x0^3 = a0 + x0(a1 + x0(a2 + x0a3))
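The nested form on the right is Horner's method, which evaluates the polynomial with one multiplication and one addition per coefficient. A minimal sketch (illustrative, not NI's Polynomial Evaluation VI):

```python
def poly_eval(coeffs, x0):
    """Evaluate a0 + a1*x + ... + an*x**n at x = x0 using the nested form."""
    result = 0.0
    for a in reversed(coeffs):   # work from an down to a0
        result = result * x0 + a
    return result

# P(x) = 5 - 3x - x^2 + 2x^3, evaluated at x0 = 2
print(poly_eval([5.0, -3.0, -1.0, 2.0], 2.0))   # 5 - 6 - 4 + 16 = 11
```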
Polynomial Addition
The addition of two polynomials involves adding together coefficients whose variables have the same exponent. The following equation shows the result of adding together the polynomials defined by Equations 13-4 and 13-5.

P(x) + Q(x) = (a0 + b0) + (a1 + b1)x + (a2 + b2)x^2 + a3x^3
Polynomial Subtraction
Subtracting one polynomial from another involves subtracting coefficients whose variables have the same exponent. The following equation shows the result of subtracting the polynomials defined by Equations 13-4 and 13-5.

P(x) - Q(x) = (a0 - b0) + (a1 - b1)x + (a2 - b2)x^2 + a3x^3
Polynomial Multiplication
Multiplying one polynomial by another polynomial involves multiplying each term of one polynomial by each term of the other polynomial. The following equations show the result of multiplying the polynomials defined by Equations 13-4 and 13-5.

P(x)Q(x) = (a0 + a1x + a2x^2 + a3x^3)(b0 + b1x + b2x^2)
         = a0(b0 + b1x + b2x^2) + a1x(b0 + b1x + b2x^2) + a2x^2(b0 + b1x + b2x^2) + a3x^3(b0 + b1x + b2x^2)
         = a3b2x^5 + (a3b1 + a2b2)x^4 + (a3b0 + a2b1 + a1b2)x^3 + (a2b0 + a1b1 + a0b2)x^2 + (a1b0 + a0b1)x + a0b0
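In coefficient form, addition is an element-wise sum and multiplication is a convolution of the coefficient sequences, which matches the term-by-term expansion above. A small sketch with made-up coefficient values:

```python
import numpy as np

def poly_add(a, b):
    """Add coefficient lists given in ascending order of power."""
    out = [0.0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

a = [5.0, -3.0, -1.0, 2.0]       # P(x) = 5 - 3x - x^2 + 2x^3
b = [1.0, -2.0, 1.0]             # Q(x) = 1 - 2x + x^2

print(poly_add(a, b))            # [6.0, -5.0, 0.0, 2.0]
print(list(np.convolve(a, b)))   # order-5 product, ascending coefficients
```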
Polynomial Division
Dividing the two polynomials P(x) and Q(x) results in the quotient U(x) and remainder V(x), such that the following equation is true.

P(x) = Q(x)U(x) + V(x)

For example, the following equations define polynomials P(x) and Q(x).

P(x) = 5 - 3x - x^2 + 2x^3    (13-6)
Q(x) = 1 - 2x + x^2    (13-7)
Complete the following steps to divide P(x) by Q(x).
1. Divide the highest-order term in Equation 13-6 by the highest-order term in Equation 13-7.

   2x^3 / x^2 = 2x    (13-8)

2. Multiply the result of Equation 13-8 by Q(x) from Equation 13-7.

   2xQ(x) = 2x - 4x^2 + 2x^3    (13-9)

3. Subtract the product of Equation 13-9 from P(x).

   (2x^3 - x^2 - 3x + 5) - (2x^3 - 4x^2 + 2x) = 3x^2 - 5x + 5

   The highest-order term becomes 3x^2.

4. Repeat step 1 through step 3 using 3x^2 as the highest term of P(x).
   a. Divide 3x^2 by the highest-order term in Equation 13-7.

      3x^2 / x^2 = 3    (13-10)

   b. Multiply the result of Equation 13-10 by Q(x) from Equation 13-7.

      3Q(x) = 3x^2 - 6x + 3    (13-11)

   c. Subtract the result of Equation 13-11 from 3x^2 - 5x + 5.

      (3x^2 - 5x + 5) - (3x^2 - 6x + 3) = x + 2
Because the order of the remainder x + 2 is lower than the order of Q(x), the polynomial division procedure stops. The following equations give the quotient polynomial U(x) and the remainder polynomial V(x) for the division of Equation 13-6 by Equation 13-7.

U(x) = 3 + 2x

V(x) = 2 + x
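You can check this result numerically. NumPy's polynomial division uses descending order of power, so the coefficient arrays below are listed from the highest power down to the constant term:

```python
import numpy as np

P = np.array([2.0, -1.0, -3.0, 5.0])   # 2x^3 - x^2 - 3x + 5
Q = np.array([1.0, -2.0, 1.0])         # x^2 - 2x + 1

U, V = np.polydiv(P, Q)
print(U)   # [2. 3.]  ->  U(x) = 2x + 3
print(V)   # [1. 2.]  ->  V(x) = x + 2
```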
Polynomial Composition
Polynomial composition involves replacing the variable x in a polynomial with another polynomial. For example, replacing x in Equation 13-4 with the polynomial from Equation 13-5 results in the following equation.

P(Q(x)) = a0 + a1Q(x) + a2(Q(x))^2 + a3(Q(x))^3 = a0 + Q(x){a1 + Q(x)[a2 + a3Q(x)]}

where P(Q(x)) denotes the composite polynomial.
The following equations define two polynomials P(x) and Q(x).

P(x) = U(x)R(x)    (13-12)

Q(x) = V(x)R(x)    (13-13)

where U(x), V(x), and R(x) are polynomials. The following conditions are true for Equations 13-12 and 13-13:
• U(x) and R(x) are factors of P(x).
• V(x) and R(x) are factors of Q(x).
• P(x) is a multiple of U(x) and R(x).
• Q(x) is a multiple of V(x) and R(x).
• R(x) is a common factor of polynomials P(x) and Q(x).
If P(x) and Q(x) have the common factor R(x), and if R(x) is divisible by every other common factor of P(x) and Q(x) without a remainder, R(x) is the greatest common divisor of P(x) and Q(x). If the greatest common divisor R(x) of polynomials P(x) and Q(x) is equal to a constant, P(x) and Q(x) are coprime. You can find the greatest common divisor of two polynomials by using Euclid's division algorithm, an iterative procedure of polynomial division. If the order of P(x) is larger than the order of Q(x), you can complete the following steps to find the greatest common divisor R(x).
1. Divide P(x) by Q(x) to obtain the quotient polynomial Q1(x) and remainder polynomial R1(x).

   P(x) = Q(x)Q1(x) + R1(x)

2. Divide Q(x) by R1(x) to obtain the new quotient polynomial Q2(x) and new remainder polynomial R2(x).

   Q(x) = R1(x)Q2(x) + R2(x)

3. Continue dividing each remainder by the next remainder to obtain Q3(x) and R3(x), Q4(x) and R4(x), and so on.

   R1(x) = R2(x)Q3(x) + R3(x)
   R2(x) = R3(x)Q4(x) + R4(x)
   ...

When the remainder polynomial becomes zero, as shown by the following equation,

   Rn-1(x) = Rn(x)Qn+1(x)

the greatest common divisor R(x) of polynomials P(x) and Q(x) equals Rn(x).
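A sketch of this iteration using NumPy's polynomial division; the helper name and the monic normalization at the end are illustrative choices, not part of the algorithm as stated above:

```python
import numpy as np

def poly_gcd(p, q, tol=1e-10):
    """Greatest common divisor of two polynomials by Euclid's algorithm.

    p and q hold coefficients in descending order of power.  Division
    repeats until the remainder vanishes; the last nonzero remainder
    is the GCD (up to a constant factor).
    """
    p = np.trim_zeros(np.asarray(p, dtype=float), 'f')
    q = np.trim_zeros(np.asarray(q, dtype=float), 'f')
    while q.size and np.max(np.abs(q)) > tol:
        _, r = np.polydiv(p, q)
        r = np.trim_zeros(r, 'f')     # drop leading zeros from the remainder
        p, q = q, r
    return p / p[0]                   # normalize to a monic polynomial

# (x - 1)(x - 2) and (x - 1)(x + 3) share the factor (x - 1).
print(poly_gcd([1.0, -3.0, 2.0], [1.0, 2.0, -3.0]))   # [ 1. -1.]
```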
If L(x) is a common multiple of P(x) and Q(x) and has the lowest order among all the common multiples of P(x) and Q(x), L(x) is the least common multiple of P(x) and Q(x). If L(x) is the least common multiple of P(x) and Q(x) and R(x) is the greatest common divisor of P(x) and Q(x), dividing the product of P(x) and Q(x) by R(x) obtains L(x), as shown by the following equation.

L(x) = P(x)Q(x)/R(x) = U(x)R(x)V(x)R(x)/R(x) = U(x)V(x)R(x)
Derivatives of a Polynomial
Finding the derivative of a polynomial involves finding the sum of the derivatives of the terms of the polynomial. Equation 13-14 defines an nth-order polynomial, T(x).

T(x) = c0 + c1x + c2x^2 + ... + cnx^n    (13-14)

The first derivative of T(x) is a polynomial of order n - 1, as shown by the following equation.

(d/dx)T(x) = c1 + 2c2x + 3c3x^2 + ... + ncnx^(n-1)

The second derivative of T(x) is a polynomial of order n - 2, as shown by the following equation.

(d^2/dx^2)T(x) = 2c2 + 6c3x + ... + n(n - 1)cnx^(n-2)

The following equation defines the kth derivative of T(x).

(d^k/dx^k)T(x) = k!ck + ((k + 1)!/1!)c(k+1)x + ((k + 2)!/2!)c(k+2)x^2 + ... + (n!/(n - k)!)cnx^(n-k)

where k ≤ n. The Newton-Raphson method of finding the zeros of an arbitrary equation is an application where you need to determine the derivative of a polynomial.
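In coefficient form, differentiation multiplies each coefficient by its exponent and shifts the sequence down one place. A minimal sketch (function names are illustrative):

```python
def poly_derivative(coeffs):
    """First derivative of c0 + c1*x + ... + cn*x**n (ascending coefficients)."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def poly_kth_derivative(coeffs, k):
    """kth derivative: apply the first-derivative rule k times."""
    for _ in range(k):
        coeffs = poly_derivative(coeffs)
    return coeffs

# T(x) = 1 + 2x + 3x^2
print(poly_derivative([1.0, 2.0, 3.0]))          # [2.0, 6.0] -> 2 + 6x
print(poly_kth_derivative([1.0, 2.0, 3.0], 2))   # [6.0]      -> 6
```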
Integrals of a Polynomial
Finding the integral of a polynomial involves the summation of integrals of the terms of the polynomial.
∫T(x)dx = c + c0x + (1/2)c1x^2 + (1/3)c2x^3 + ... + (1/(n + 1))cnx^(n+1)
Because the derivative of a constant is zero, c can be an arbitrary constant. For convenience, you can set c to zero.
To evaluate the definite integral of T(x) over the interval (a, b), evaluate the antiderivative at x = b and at x = a and take the difference.
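In coefficient form, integration divides each coefficient by its new exponent and prepends the constant c. A short sketch combining the antiderivative with a definite-integral evaluation (names are illustrative):

```python
def poly_integral(coeffs, c=0.0):
    """Antiderivative of c0 + c1*x + ... (ascending); c is the constant term."""
    return [c] + [a / (i + 1) for i, a in enumerate(coeffs)]

def poly_eval(coeffs, x):
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

T = [1.0, 2.0, 3.0]                 # T(x) = 1 + 2x + 3x^2
F = poly_integral(T)                # F(x) = x + x^2 + x^3
print(poly_eval(F, 2.0) - poly_eval(F, 0.0))   # integral over (0, 2) = 14
```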
Given a polynomial P(x), the following equations define the Sturm functions.

P0(x) = P(x)

P1(x) = (d/dx)P(x)

Pi(x) = -{Pi-2(x) - Pi-1(x)[Pi-2(x)/Pi-1(x)]},    i = 2, 3, ...

where Pi(x) is the Sturm function and [Pi-2(x)/Pi-1(x)] represents the quotient polynomial resulting from the division of Pi-2(x) by Pi-1(x). You can calculate Pi(x) until it becomes a constant. For example, the following equations show the calculation of the Sturm functions over the interval (-2, 1).

P0(x) = P(x) = 1 - 4x + 2x^3

P1(x) = (d/dx)P(x) = -4 + 6x^2

P2(x) = -{P0(x) - P1(x)(x/3)} = -1 + (8/3)x

P3(x) = -{P1(x) - P2(x)((9/4)x + 27/32)} = 101/32
To evaluate the Sturm functions at the boundaries of the interval (-2, 1), you do not have to calculate the exact values. You only need to know the signs of the values of the Sturm functions. Table 13-1 lists the signs of the Sturm functions for the interval (-2, 1).
Table 13-1. Signs of the Sturm Functions for the Interval (-2, 1)

| x  | P0(x) | P1(x) | P2(x) | P3(x) |
| -2 |   -   |   +   |   -   |   +   |
| 1  |   -   |   +   |   +   |   +   |
In Table 13-1, notice the number of sign changes for each boundary. For x = -2, the evaluation of Pi(x) results in three sign changes. For x = 1, the evaluation of Pi(x) results in one sign change. The difference in the number of sign changes between the two boundaries corresponds to the number of real roots that lie in the interval. For the calculation of the Sturm functions over the interval (-2, 1), the difference in the number of sign changes is two, which means two real roots of polynomial P(x) lie in the interval (-2, 1). Figure 13-1 shows the result of evaluating P(x) over (-2, 1).
Figure 13-1. Evaluation of P(x) over the Interval (-2, 1)
In Figure 13-1, the two real roots lie at approximately -1.5 and 0.26.
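The following sketch reproduces this root count with NumPy's polynomial division; the helper names are illustrative, not NI's Sturm function VIs:

```python
import numpy as np

def sturm_sequence(p):
    """Sturm functions of p (descending coefficients): P0 = p, P1 = p',
    and Pi = -(remainder of P(i-2) / P(i-1)) until a constant remains."""
    seq = [np.asarray(p, dtype=float)]
    seq.append(np.polyder(seq[0]))
    while seq[-1].size > 1:
        _, r = np.polydiv(seq[-2], seq[-1])
        r = np.trim_zeros(r, 'f')
        if r.size == 0:
            break                     # repeated roots end the sequence early
        seq.append(-r)
    return seq

def sign_changes(seq, x):
    signs = [v for v in (np.polyval(s, x) for s in seq) if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

p = [2.0, 0.0, -4.0, 1.0]             # P(x) = 1 - 4x + 2x^3
seq = sturm_sequence(p)
print(sign_changes(seq, -2) - sign_changes(seq, 1))   # 2 roots in (-2, 1)
```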
Figure 13-2. [Block diagram of a system with elements F1 and F2 in a negative feedback configuration]
For the system shown in Figure 13-2, the following equation yields the transfer function of the system.

H(x) = F1(x)/(1 + F1(x)F2(x)) = B1(x)A2(x)/(A1(x)A2(x) + B1(x)B2(x))

where F1(x) = B1(x)/A1(x) and F2(x) = B2(x)/A2(x).
Figure 13-3. [Block diagram of a system with elements F1 and F2 in a positive feedback configuration]
For the system shown in Figure 13-3, the following equation yields the transfer function of the system.

H(x) = F1(x)/(1 - F1(x)F2(x)) = B1(x)A2(x)/(A1(x)A2(x) - B1(x)B2(x))
Orthogonal Polynomials
A set of polynomials Pi(x) are orthogonal polynomials over the interval a < x < b if each polynomial in the set satisfies the following equations.

∫[a,b] w(x)Pn(x)Pm(x) dx = 0,    n ≠ m

∫[a,b] w(x)Pn(x)Pn(x) dx ≠ 0,    n = m
The interval (a, b) and the weighting function w(x) vary depending on the set of orthogonal polynomials. One of the most important applications of orthogonal polynomials is to solve differential equations.
Chebyshev orthogonal polynomials of the first kind satisfy the following equations.

∫[-1,1] (1/√(1 - x^2)) Tn(x)Tm(x) dx = 0,    n ≠ m

∫[-1,1] (1/√(1 - x^2)) Tn(x)Tn(x) dx = π/2 for n ≠ 0, and π for n = 0
Chebyshev orthogonal polynomials of the second kind satisfy the following equations.

∫[-1,1] √(1 - x^2) Un(x)Um(x) dx = 0,    n ≠ m

∫[-1,1] √(1 - x^2) Un(x)Un(x) dx = π/2,    n = m
Gegenbauer orthogonal polynomials satisfy the following equations.

∫[-1,1] (1 - x^2)^(a - 1/2) Cn^a(x)Cm^a(x) dx = 0,    n ≠ m, a ≠ 0

∫[-1,1] (1 - x^2)^(a - 1/2) Cn^a(x)Cn^a(x) dx = π 2^(1 - 2a) Γ(n + 2a) / (n!(n + a)(Γ(a))^2),    n = m, a ≠ 0
where Γ(z) is the Gamma function, defined by the following equation.

Γ(z) = ∫[0,∞) t^(z - 1) e^(-t) dt
Hermite orthogonal polynomials satisfy the following equations.

∫(-∞,∞) e^(-x^2) Hn(x)Hm(x) dx = 0,    n ≠ m

∫(-∞,∞) e^(-x^2) Hn(x)Hn(x) dx = 2^n n! √π,    n = m
Laguerre orthogonal polynomials satisfy the following equations.

∫[0,∞) e^(-x) Ln(x)Lm(x) dx = 0,    n ≠ m

∫[0,∞) e^(-x) Ln(x)Ln(x) dx = 1,    n = m
Generalized Laguerre orthogonal polynomials satisfy the following equations.

∫[0,∞) e^(-x) x^a Ln^a(x)Lm^a(x) dx = 0,    n ≠ m

∫[0,∞) e^(-x) x^a Ln^a(x)Ln^a(x) dx = Γ(a + n + 1)/n!,    n = m
Legendre orthogonal polynomials satisfy the following equations.

∫[-1,1] Pn(x)Pm(x) dx = 0,    n ≠ m

∫[-1,1] Pn(x)Pn(x) dx = 2/(2n + 1),    n = m
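You can verify relations like these numerically. The sketch below checks the Legendre case with Gauss-Legendre quadrature, which integrates polynomial products of this degree exactly; the helper name is illustrative:

```python
import numpy as np
from numpy.polynomial import legendre

x, w = legendre.leggauss(20)   # 20-point rule: exact up to degree 39

def inner(n, m):
    """Weighted inner product of the nth and mth Legendre polynomials."""
    return np.sum(w * legendre.Legendre.basis(n)(x) * legendre.Legendre.basis(m)(x))

print(inner(3, 4))   # ~0, since n != m
print(inner(3, 3))   # 2/(2*3 + 1) = 0.2857...
```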
The following equations define a polynomial P(x) and a matrix G of values at which to evaluate it.

P(x) = a0 + a1x + a2x^2    (13-16)

G = [ g1  g2 ]
    [ g3  g4 ]    (13-17)

In 2D polynomial evaluation, you evaluate P(x) at each element of matrix G, as shown by the following equation.

P(G) = [ P(x)|x=g1   P(x)|x=g2 ]
       [ P(x)|x=g3   P(x)|x=g4 ]
When performing matrix polynomial evaluation, you replace the variable x with matrix G, as shown by the following equation.

P([G]) = a0I + a1G + a2G^2

where I is the identity matrix of the same size as G. In the following equations, actual values replace the variables a and g in Equations 13-16 and 13-17.

P(x) = 5 + 3x + 2x^2    (13-18)

G = [ 1  2 ]
    [ 3  4 ]    (13-19)
The following equation shows the matrix evaluation of the polynomial P(x) from Equation 13-18 with matrix G from Equation 13-19.

P([G]) = 5[ 1  0 ] + 3[ 1  2 ] + 2[ 1  2 ][ 1  2 ]
          [ 0  1 ]    [ 3  4 ]    [ 3  4 ][ 3  4 ]

       = [ 5  0 ] + [ 3   6 ] + [ 14  20 ]
         [ 0  5 ]   [ 9  12 ]   [ 30  44 ]

       = [ 22  26 ]
         [ 39  61 ]
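A short NumPy sketch of the same computation (the function name is illustrative):

```python
import numpy as np

def poly_matrix_eval(coeffs, G):
    """Evaluate a0*I + a1*G + a2*G@G + ... for ascending coefficients."""
    result = np.zeros_like(G, dtype=float)
    power = np.eye(G.shape[0])          # G**0 = I
    for a in coeffs:
        result = result + a * power
        power = power @ G
    return result

G = np.array([[1.0, 2.0], [3.0, 4.0]])
print(poly_matrix_eval([5.0, 3.0, 2.0], G))
# [[22. 26.]
#  [39. 61.]]
```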
The polynomial eigenvalue problem finds the eigenvalues λ and eigenvectors x that satisfy Equation 13-20.

α(λ)x = (C0 + λC1 + λ^2 C2 + ... + λ^(n-1) Cn-1 + λ^n Cn)x = 0    (13-20)

The following conditions apply to Equation 13-20:
• α(λ) is the matrix polynomial whose coefficients are square matrices.
• Ci is a square matrix of size m × m, i = 0, 1, ..., n.
• λ is the eigenvalue of α(λ).
• x is the corresponding eigenvector of α(λ) and has length m.
• 0 is the zero vector and has length m.
You can write the polynomial eigenvalue problem as a generalized eigenvalue problem, as shown by the following equation.

Az = λBz

where

A = [  0    I    0   ...   0   ]
    [  0    0    I   ...   0   ]
    [  .    .    .   ...   .   ]
    [  0    0    0   ...   I   ]
    [ -C0  -C1  -C2  ... -Cn-1 ]

and is an nm × nm matrix;

B = [ I              ]
    [    I           ]
    [       ...      ]
    [           I    ]
    [             Cn ]

and is an nm × nm matrix;

z = [ x          ]
    [ λx         ]
    [ λ^2 x      ]
    [ ...        ]
    [ λ^(n-1) x  ]

and is an nm × 1 vector.
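The sketch below builds this block companion pencil and hands it to a generalized eigensolver; the function name and the example matrices are illustrative, not NI's VI:

```python
import numpy as np
from scipy.linalg import eig

def polyeig(C):
    """Solve (C[0] + lam*C[1] + ... + lam**n * C[n]) x = 0 by building
    the pencil A z = lam B z described above."""
    n = len(C) - 1                      # polynomial order
    m = C[0].shape[0]
    A = np.zeros((n * m, n * m))
    B = np.eye(n * m)
    for i in range(n - 1):              # identity blocks above the last row
        A[i * m:(i + 1) * m, (i + 1) * m:(i + 2) * m] = np.eye(m)
    for i in range(n):                  # last block row holds -C0 ... -C(n-1)
        A[(n - 1) * m:, i * m:(i + 1) * m] = -C[i]
    B[(n - 1) * m:, (n - 1) * m:] = C[n]
    lam, z = eig(A, B)
    return lam, z[:m, :]                # first m rows of z hold x

# Quadratic eigenvalue problem (C0 + lam*C1 + lam^2*C2) x = 0
C0 = np.array([[2.0, 0.0], [0.0, 3.0]])
C1 = np.array([[0.0, 1.0], [1.0, 0.0]])
C2 = np.eye(2)
lam, X = polyeig([C0, C1, C2])
residual = (C0 + lam[0] * C1 + lam[0]**2 * C2) @ X[:, 0]
print(np.allclose(residual, 0))        # True: the eigenpair satisfies 13-20
```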
In Figure 13-4, you enter the polynomial coefficients into the array controls, P(x) and Q(x), in ascending order of power. Also, the VI displays the results of the addition in P(x) + Q(x) in ascending order of power, based on the order of the two input arrays.
Part III
Point-By-Point Analysis
This part describes the concepts of point-by-point analysis, answers frequently asked questions about point-by-point analysis, and describes a case study that illustrates the use of the Point By Point VIs.
Point-By-Point Analysis
14
This chapter describes the concepts of point-by-point analysis, answers frequently asked questions about point-by-point analysis, and describes a case study that illustrates the use of the Point By Point VIs. Use the NI Example Finder to find examples of using the Point By Point VIs.
| Parameter     | Description |
| input data    | Incoming data |
| output data   | Outgoing, analyzed data |
| initialize    | Routine that resets the internal state of a VI |
| sample length | Setting for your data acquisition system or computation system that best represents the area of interest in the data |
Refer to the Case Study of Point-By-Point Analysis section of this chapter for an example of a point-by-point analysis system.
For example, the Value Has Changed PtByPt VI can respond to change events such as the following:
• Receiving the input data
• Detecting the change
• Generating a Boolean TRUE value that triggers initialization in another VI
• Transferring the input data to another VI for processing
Figure 14-1 shows the Value Has Changed PtByPt VI triggering initialization in another VI and transferring data to that VI. In this case, the input data is a parameter value for the target VI.
Many point-by-point applications do not require use of the initialize parameter because initialization occurs automatically whenever an operator quits an application and then starts again.
Figure 14-2. Using the First Call? Function with a While Loop
Error codes usually identify invalid parameters and settings. For higher-level error checking, configure your program to monitor and respond to irregularities in data acquisition or in computation. For example, you create a form of error checking when you range check your data.

A Point By Point VI generates an error code once at the initial call to the VI or at the first call to the VI after you initialize your application. Because Point By Point VIs generate error codes only once, they can perform optimally in a real-time, deterministic application. The Point By Point VIs generate an error code to inform you of any invalid parameters or settings when they detect an error during the first call. In subsequent calls, the Point By Point VIs set the error code to zero and continue running, generating no error codes. You can program your application to take one of the following actions in response to the first error:
• Report the error and continue running.
• Report the error and stop.
• Ignore the error and continue running. This is the default behavior.
The following programming sequence describes how to use the Value Has Changed PtByPt VI to build a point-by-point error checking mechanism for Point By Point VIs that have an error parameter.
1. Choose a parameter that you want to monitor closely for errors.
2. Wire the parameter value as input data to the Value Has Changed PtByPt VI.
3. Transfer the output data, which is always the unchanged input data in the Value Has Changed PtByPt VI, to the target VI.
4. Pass the TRUE event generated by the Value Has Changed PtByPt VI to the target VI to trigger initialization, as shown in Figure 14-1. The Value Has Changed PtByPt VI outputs a TRUE value whenever the input parameter value changes.
For the first call that follows initialization of the target VI, LabVIEW checks for errors. Initialization of the target VI and error checking occur every time the input parameter changes.
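A minimal sketch of this pattern in Python, assuming a class name and behavior chosen only for illustration (it is not the Value Has Changed PtByPt VI itself):

```python
class ValueHasChanged:
    """Pass the input through unchanged and report TRUE whenever the
    value differs from the previous call, as in the sequence above."""
    def __init__(self):
        self.previous = None

    def __call__(self, value):
        changed = self.previous is not None and value != self.previous
        self.previous = value
        return value, changed

monitor = ValueHasChanged()
for cutoff in [0.25, 0.25, 0.30]:
    value, changed = monitor(cutoff)
    if changed:
        # A TRUE event would reinitialize the target VI, which then
        # re-runs its first-call error check with the new parameter.
        print("reinitialize target with", value)
```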
What Are the Differences between Point-By-Point Analysis and Array-Based Analysis in LabVIEW?
Tables 14-2 and 14-3 compare array-based LabVIEW analysis to point-by-point analysis from multiple perspectives. In Table 14-2, the differences between two automotive fuel delivery systems, carburation and fuel injection, demonstrate the differences between array-based data analysis and point-by-point analysis.
Table 14-2. Comparison of Traditional and Newer Paradigms

| Paradigm | Automotive Technology | Data Analysis Technology |
| Traditional | Carburation. Fuel accumulates in a float bowl. Engine vacuum draws fuel through a single set of metering valves that serve all combustion chambers. Somewhat efficient combustion occurs. | Array-Based Analysis. Prepare a buffer unit of data. Analyze data. Produce a buffer of analyzed data. Generate report. |
| Newer | Fuel Injection. Fuel flows continuously from gas tank. Fuel sprays directly into each combustion chamber at the moment of combustion. Responsive, precise combustion occurs. | Point-By-Point Analysis. Receive continuous stream of data. Filter and analyze data continuously. Generate real-time events and reports continuously. |
Table 14-3 presents other comparisons between array-based and point-by-point analysis.
Table 14-3. Comparison of Array-Based and Point-By-Point Analysis

| Characteristic | Array-Based Analysis | Data Acquisition and Analysis with Point By Point VIs |
| Compatibility | Limited compatibility with real-time systems | Compatible with real-time systems; backward compatible with array-based systems |
| Data typing | Array-oriented | Scalar-oriented |
| Interruptions | Interruptions critical | Interruptions tolerated |
| Operation | You observe, offline | You control, online |
| Performance and programming | Compensate for startup data loss (4 to 5 seconds) with complex state machines | Startup data loss does not occur; initialize the data acquisition system once and run continuously |
| Point of view | Reflection of a process, like a mirror | Direct, natural flow of a process |
| Programming | Specify a buffer | No explicit buffers |
| Results | Output a report | Output a report and an event in real time |
| Run-time behavior | Delayed processing | Real time |
| Run-time behavior | Stop | Continue |
| Run-time behavior | Wait | Now |
| Work style | Asynchronous | Synchronous |
Note If you create custom VIs to use in your own point-by-point application, be sure to enable re-entrant execution. Re-entrant execution is enabled by default in almost all Point By Point VIs.
Deterministic performance: point-by-point analysis is the natural companion to many deterministic systems because it efficiently integrates with the flow of a real-time data signal.
For example, the application described in the Case Study of Point-By-Point Analysis section of this chapter acquires a few thousand samples per second to detect defective train wheels. The input data for the train wheel application comes from the signal generated by a train that is moving at 60 km to 70 km per hour. The sample length corresponds to the minimum distance between wheels.

A typical point-by-point analysis application analyzes a long series of sample units, but you are likely to have interest in only a few of those sample units. To identify those crucial samples of interest, the point-by-point application focuses on transitions, such as the end of the relevant signal. The train wheel detection application in the Case Study of Point-By-Point Analysis section of this chapter uses the end of a signal to identify crucial samples of interest. The instant the application identifies the transition point, it captures the maximum amplitude reading of the current sample unit. This particular amplitude reading corresponds to the complete signal for the wheel on the train whose signal has just ended. You can use this real-time amplitude reading to generate an event or a report about that wheel and that train.
You can continue to work without point-by-point analysis as long as you can control your processes without high-speed, deterministic, point-by-point data acquisition. However, if you dedicate resources in a real-time data acquisition application, use point-by-point analysis to achieve the full potential of your application.
The Train Wheel PtByPt VI offers a solution for detecting defective train wheels. Figures 14-3 and 14-4 show the front panel and the block diagram, respectively, for the Train Wheel PtByPt VI.
Figure 14-4. Block Diagram of the Train Wheel PtByPt VI

Note This example focuses on implementing a point-by-point analysis program in LabVIEW. The issues of ideal sampling periods and approaches to signal conditioning are beyond the scope of this example.
The following list briefly describes what occurs in each stage and corresponds to the labeled portions of the block diagram in Figure 14-4.
1. In the data acquisition stage (DAQ), waveform data flows into the While Loop.
2. In the Filter stage, separation of low- and high-frequency components of the waveform occurs.
3. In the Analysis stage, detection of the train, wheel, and energy level of the waveform for each wheel occurs.
4. In the Events stage, responses to signal transitions of trains and wheels occur.
5. In the Report stage, the logging of trains, wheels, and trains that might have defective wheels occurs.
The low-frequency component of train wheel movement represents the normal noise of operation. Defective and normal wheels generate the same low-frequency component in the signal. The peak of the curve represents the moment when the wheel moves directly above the strain gauge. The lowest points of the bell curve represent the beginning and end of the wheel, respectively, as the wheel passes over the strain gauge.
The signal for a train wheel also contains a high-frequency component that reflects the quality of the wheel. In operation, a defective train wheel generates more energy than a normal train wheel. In other words, the high-frequency component for a defective wheel has greater amplitude.
DAQ Stage
Data moves into the Point By Point VIs through the input data parameter. The point-by-point detection application operates on the continuous stream of waveform data that comes from the wheels of a moving train. For a train moving at 60 km to 70 km per hour, a few hundred to a few thousand samples per second are likely to give you sufficient information to detect a defective wheel.
Filter Stage
The Train Wheel PtByPt VI must filter low- and high-frequency components of the train wheel waveform. Two Butterworth Filter PtByPt VIs perform the following tasks:
• Extract the low-frequency components of the waveform.
• Extract the high-frequency components of the waveform.
In the Train Wheel PtByPt VI, the Butterworth Filter PtByPt VIs use the following parameters:
• order specifies the amount of the waveform data that the VI filters at a given time and is the filter resolution. 2 is acceptable for the Train Wheel PtByPt VI.
• fl specifies the low cut-off frequency, which is the minimum signal strength that identifies the departure of a train wheel from the strain gauge. 0.01 is acceptable for the Train Wheel PtByPt VI.
• fh specifies the high cut-off frequency, which is the minimum signal strength that identifies the end of high-frequency waveform information. 0.25 is acceptable for the Train Wheel PtByPt VI.
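The sketch below is a single-pole lowpass stage, far simpler than the Butterworth Filter PtByPt VI, written only to show the essential point-by-point property: one sample in, one sample out, with internal state that persists between calls instead of a buffered array. The class name and cutoff mapping are illustrative assumptions.

```python
import math

class OnePoleLowpass:
    """One-sample-in, one-sample-out lowpass stage with persistent state."""
    def __init__(self, fl):
        self.a = math.exp(-2.0 * math.pi * fl)   # fl: normalized cutoff
        self.state = 0.0

    def __call__(self, sample, initialize=False):
        if initialize:
            self.state = sample                  # reset internal state
        self.state = self.a * self.state + (1.0 - self.a) * sample
        return self.state

lowpass = OnePoleLowpass(fl=0.01)
residual = [s - lowpass(s) for s in [0.0, 1.0, 1.0, 1.0]]
print(residual)   # input minus lowpass output isolates the fast changes
```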
Analysis Stage
The point-by-point detection application must analyze the low- and high-frequency components separately. The Array Max & Min PtByPt VI extracts waveform data that reveals the level of energy in the waveform for each wheel, the end of each train, and the end of each wheel. Three separate Array Max & Min PtByPt VIs perform the following discrete tasks:
• Identify the maximum high-frequency value for each wheel.
• Identify the end of each train.
• Identify the end of each wheel.
Note The name Array Max & Min PtByPt VI contains the word array only to match the name of the array-based form of this VI. You do not need to allocate arrays for the Array Max & Min PtByPt VI.

In the Train Wheel PtByPt VI, the Array Max & Min PtByPt VIs use the following parameters and functions:
• sample length specifies the size of the portion of the waveform that the Train Wheel PtByPt VI analyzes. To calculate the ideal sample length, consider the speed of the train, the minimum distance between wheels, and the number of samples you receive per second. 100 is acceptable for the Train Wheel PtByPt VI. The Train Wheel PtByPt VI uses sample length to calculate values for all three Array Max & Min PtByPt VIs.
• The Multiply function sets a longer portion of the waveform to analyze. When this longer portion fails to display signal activity for train wheels, the Array Max & Min PtByPt VIs identify the end of the train. 4 is acceptable for the Train Wheel PtByPt VI.
• threshold provides a comparison point to identify when no train wheel signals exist in the signal that you are acquiring. threshold is wired to the Greater? function. 3 is an acceptable setting for threshold in the Train Wheel PtByPt VI.
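A sketch of the same idea: track the running maximum over each block of sample length samples without ever allocating an array. The class name and return convention are illustrative, not the VI's interface.

```python
class BlockMax:
    """Emit the maximum of every `sample_length` consecutive samples."""
    def __init__(self, sample_length):
        self.sample_length = sample_length
        self.count = 0
        self.current_max = float('-inf')

    def __call__(self, sample):
        self.current_max = max(self.current_max, sample)
        self.count += 1
        if self.count == self.sample_length:
            block_max = self.current_max
            self.count = 0
            self.current_max = float('-inf')
            return block_max
        return None                     # block not yet complete

block_max = BlockMax(sample_length=4)
for s in [1, 5, 2, 3, 7, 0, 1, 2]:
    out = block_max(s)
    if out is not None:
        print(out)                      # 5, then 7
```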
Events Stage
After the Analysis stage identifies maximum and minimum values, the Events stage detects when these values cross a threshold setting. The Train Wheel PtByPt VI logs every wheel and every train that it detects. Two Boolean Crossing PtByPt VIs perform the following tasks:
• Generate an event each time the Array Max & Min PtByPt VIs detect the transition point in the signal that indicates the end of a wheel.
• Generate an event every time the Array Max & Min PtByPt VIs detect the transition point in the signal that indicates the end of a train.
The Boolean Crossing PtByPt VIs respond to transitions. When the amplitude of a single wheel waveform falls below the threshold setting, the end of the wheel has arrived at the strain gauge. For the Train Wheel PtByPt VI, 3 is a good threshold setting to identify the end of a wheel. When the signal strength falls below the threshold setting, the Boolean Crossing PtByPt VIs recognize a transition event and pass that event to a report. Analysis of the high-frequency signal identifies which wheels, if any, might be defective. When the Train Wheel PtByPt VI encounters a potentially defective wheel, the VI passes the information directly to the report at the moment the end-of-wheel event is detected.

In the Train Wheel PtByPt VI, the Boolean Crossing PtByPt VIs use the following parameters:
• initialize resets the VI for a new session of continuous data acquisition.
• direction specifies the kind of Boolean crossing.
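A sketch of edge detection on a Boolean stream, assuming an illustrative class name and a falling direction matching the end-of-wheel case above:

```python
class BooleanCrossing:
    """Emit TRUE when a Boolean input changes in the chosen direction."""
    def __init__(self, direction='falling'):
        self.direction = direction
        self.previous = None

    def __call__(self, value, initialize=False):
        if initialize or self.previous is None:
            self.previous = value
            return False
        if self.direction == 'falling':
            event = self.previous and not value
        else:                           # rising
            event = value and not self.previous
        self.previous = value
        return event

threshold = 3.0
crossing = BooleanCrossing('falling')
for amplitude in [5.0, 4.0, 2.0, 1.0]:
    print(crossing(amplitude > threshold))   # TRUE once, at 4.0 -> 2.0
```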
Report Stage
The Train Wheel PtByPt VI reports on all wheels for all trains that pass through the data acquisition system. The Train Wheel PtByPt VI also reports any potentially defective wheels. Every time a wheel passes the strain gauge, the Train Wheel PtByPt VI captures its waveform, analyzes it, and reports the event. Table 14-4 describes the components of a report on a single train wheel.
Table 14-4. Components of a Report on a Single Train Wheel

| Information Source | Meaning of Results |
| Counter mechanism for waveform events | Stage One: Wheel number four has passed the strain gauge. |
| Analysis of highpass filter data | Stage Two: Wheel number four has passed the strain gauge and the wheel might be defective. |
| Counter mechanism for end-of-train events | Stage Three: Wheel number four in train number eight has passed the strain gauge, and the wheel might be defective. |
The Train Wheel PtByPt VI uses point-by-point analysis to generate a report, not to control an industrial process. However, the Train Wheel PtByPt VI acquires data in real time, and you can modify the application to generate real-time control responses, such as stopping the train when the Train Wheel PtByPt VI encounters a potentially defective wheel.
Conclusion
When acquiring data with real-time performance, point-by-point analysis helps you analyze data in real time. Point-by-point analysis occurs continuously and instantaneously. While you acquire data, you filter and analyze it, point by point, to extract the information you need and to make an appropriate response. This case study demonstrates the effectiveness of the point-by-point approach for generation of both events and reports in real time.
References
This appendix lists the reference material used to produce the analysis VIs, including the signal processing and mathematics VIs. Refer to the following documents for more information about the theories and algorithms implemented in the analysis library.

Baher, H. Analog & Digital Signal Processing. New York: John Wiley & Sons, 1990.

Bailey, David H. and Paul N. Swarztrauber. The Fractional Fourier Transform and Applications. Society for Industrial and Applied Mathematics Review 33, no. 3 (September 1991): 389–404.

Bates, D. M. and D. G. Watts. Nonlinear Regression Analysis and its Applications. New York: John Wiley & Sons, 1988.

Bertsekas, Dimitri P. Nonlinear Programming. 2nd ed. Belmont, Massachusetts: Athena Scientific, 1999.

Bracewell, R. N. Numerical Transforms. Science 248 (11 May 1990).

Burden, R. L. and J. D. Faires. Numerical Analysis. 3rd ed. Boston: Prindle, Weber & Schmidt, 1985.

Chen, C. H. et al. Signal Processing Handbook. New York: Marcel Dekker, Inc., 1988.

Chugani, Mahesh L., Abhay R. Samant, and Michael Cerna. LabVIEW Signal Processing. Upper Saddle River, New Jersey: Prentice Hall PTR, 1998.

Crandall, R. E. Projects in Scientific Computation. Berlin: Springer, 1994.

DeGroot, M. Probability and Statistics. 2nd ed. Reading, Massachusetts: Addison-Wesley Publishing Co., 1986.

Dowdy, S. and S. Wearden. Statistics for Research. 2nd ed. New York: John Wiley & Sons, 1991.

Dudewicz, E. J. and S. N. Mishra. Modern Mathematical Statistics. New York: John Wiley & Sons, 1988.
Duhamel, P. et al. On Computing the Inverse DFT. IEEE Transactions.

Dunn, O. and V. Clark. Applied Statistics: Analysis of Variance and Regression. 2nd ed. New York: John Wiley & Sons, 1987.

Ecker, Joseph G. and Michael Kupferschmid. Introduction to Operations Research. New York: Krieger Publishing, 1991.

Elliot, D. F. Handbook of Digital Signal Processing Engineering Applications. San Diego: Academic Press, 1987.

Fahmy, M. F. Generalized Bessel Polynomials with Application to the Design of Bandpass Filters. Circuit Theory and Applications 5 (1977): 337–342.

Gander, W. and J. Hrebicek. Solving Problems in Scientific Computing using Maple and MATLAB. Berlin: Springer, 1993.

Golub, G. H. and C. F. Van Loan. Matrix Computations. Baltimore: The Johns Hopkins University Press, 1989.

Harris, Fredric J. On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform. Proceedings of the IEEE 66, no. 1 (1978).

Lanczos, C. A Precision Approximation of the Gamma Function. Journal SIAM Numerical Analysis series B, no. 1 (1964): 87–96.

Maisel, J. E. Hilbert Transform Works With Fourier Transforms to Dramatically Lower Sampling Rates. Personal Engineering and Instrumentation News 7, no. 2 (February 1990).

Miller, I. and J. E. Freund. Probability and Statistics for Engineers. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1987.

Mitra, Sanjit K. and James F. Kaiser. Handbook for Digital Signal Processing. New York: John Wiley & Sons, 1993.

Neter, J. et al. Applied Linear Regression Models. Richard D. Irwin, Inc., 1983.

Neuvo, Y., C. Y. Dong, and S. K. Mitra. Interpolated Finite Impulse Response Filters. IEEE Transactions on ASSP ASSP-32, no. 6 (June 1984).

O'Neill, M. A. Faster Than Fast Fourier. BYTE (April 1988).
Oppenheim, A. V. and R. W. Schafer. Discrete-Time Signal Processing. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1989.

Oppenheim, Alan V. and Alan S. Willsky. Signals and Systems. New York: Prentice-Hall, Inc., 1983.

Parks, T. W. and C. S. Burrus. Digital Filter Design. New York: John Wiley & Sons, Inc., 1987.

Pearson, C. E. Numerical Methods in Engineering and Science. New York: Van Nostrand Reinhold Co., 1986.

Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C: The Art of Scientific Computing. Cambridge: Cambridge University Press, 1994.

Qian, Shie and Dapang Chen. Joint Time-Frequency Analysis. New York: Prentice-Hall, Inc., 1996.

Rabiner, L. R. and B. Gold. Theory and Application of Digital Signal Processing. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1975.

Rockey, K. C., H. R. Evans, D. W. Griffiths, and D. A. Nethercot. The Finite Element Method: A Basic Introduction for Engineers. New York: John Wiley & Sons, 1983.

Sorensen, H. V. et al. On Computing the Split-Radix FFT. IEEE Transactions on ASSP ASSP-34, no. 1 (February 1986).

Sorensen, H. V. et al. Real-Valued Fast Fourier Transform Algorithms. IEEE Transactions on ASSP ASSP-35, no. 6 (June 1987).

Spiegel, M. Schaum's Outline Series on Theory and Problems of Probability and Statistics. New York: McGraw-Hill, 1975.

Stoer, J. and R. Bulirsch. Introduction to Numerical Analysis. New York: Springer-Verlag, 1987.

Vaidyanathan, P. P. Multirate Systems and Filter Banks. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1993.

Wichman, B. and D. Hill. Building a Random-Number Generator: A Pascal Routine for Very-Long-Cycle Random-Number Sequences. BYTE (March 1987): 127–128.
Wilkinson, J. H. and C. Reinsch. Linear Algebra. Vol. 2 of Handbook for Automatic Computation. New York: Springer, 1971.

Williams, John R. and Kevin Amaratunga. Introduction to Wavelets in Engineering. International Journal for Numerical Methods in Engineering 37 (1994): 2365–2388.

Zwillinger, Daniel. Handbook of Differential Equations. San Diego: Academic Press, 1992.
Visit the following sections of the National Instruments Web site at ni.com for technical support and professional services:
• Support: Online technical support resources include the following:
  – Self-Help Resources: For immediate answers and solutions, visit our extensive library of technical support resources available in English, Japanese, and Spanish at ni.com/support. These resources are available for most products at no cost to registered users and include software drivers and updates, a KnowledgeBase, product manuals, step-by-step troubleshooting wizards, conformity documentation, example code, tutorials and application notes, instrument drivers, discussion forums, a measurement glossary, and so on.
  – Assisted Support Options: Contact NI engineers and other measurement and automation professionals by visiting ni.com/support. Our online system helps you define your question and connects you to the experts by phone, discussion forum, or email.
• Training and Certification: Visit ni.com/training for self-paced training, eLearning virtual classrooms, interactive CDs, and Certification program information. You also can register for instructor-led, hands-on courses at locations around the world.
• System Integration: If you have time constraints, limited in-house technical resources, or other project challenges, NI Alliance Program members can help. To learn more, call your local NI office or visit ni.com/alliance.
If you searched ni.com and could not find the answers you need, contact your local office or NI corporate headquarters. Phone numbers for our worldwide offices are listed at the front of this manual. You also can visit the Worldwide Offices section of ni.com/niglobal to access the branch office Web sites, which provide up-to-date contact information, support phone numbers, email addresses, and current events.