Advanced Mechatronics Systems - Laboratory - Mechatronics Control Systems - 1st Ed

This document provides an overview of signal processing and sensors. It discusses topics such as analog signal processing, the Fourier transform, Laplace transform, linear time-invariant systems, common signals like sinusoids and impulses, and common analog systems like filters. It also provides descriptions of different types of sensors such as accelerometers, pressure sensors, temperature sensors, and ultrasonic thickness gauges.


Contents

Signal processing
    Analog signal processing
    Fourier transform
    Fast Fourier transform ........... 24
    Laplace transform ................ 31
    Linear system .................... 47
    Time-invariant system ............ 49
    Dirac delta function ............. 51
    Heaviside step function .......... 72
    Ramp function .................... 75
    Digital signal processing ........ 77
    Time domain ...................... 82
    Z-transform ...................... 83
    Frequency domain ................. 93
    Initial value theorem ............ 95
    Final value theorem .............. 95

Sensors
    Sensor ........................... 97
    Accelerometer .................... 100
    Capacitive sensing ............... 108
    Capacitive displacement sensor ... 111
    Current sensor ................... 114
    Electro-optical sensor ........... 115
    Galvanometer ..................... 115
    Hall effect sensor ............... 121
    Inductive sensor ................. 123
    Infrared ......................... 124
    Linear encoder ................... 137
    Photoelectric sensor ............. 142
    Photodiode ....................... 143
    Piezoelectric accelerometer ...... 148
    Pressure sensor .................. 150
    Resistance thermometer ........... 154
    Thermistor ....................... 165
    Torque sensor .................... 170
    Ultrasonic thickness gauge ....... 171
    List of sensors .................. 171

References
    Article Sources and Contributors ........... 179
    Image Sources, Licenses and Contributors ... 182

Article Licenses
    License .......................... 184

Signal processing
Analog signal processing
Analog signal processing is any signal processing conducted on analog signals by analog means. "Analog" indicates
something that is mathematically represented as a set of continuous values, in contrast to "digital," which uses a
series of discrete quantities to represent a signal. Analog values are typically represented as a voltage, electric
current, or electric charge in components of an electronic device. An error or noise affecting such a physical
quantity results in a corresponding error in the signal it represents.
Examples of analog signal processing include crossover filters in loudspeakers, "bass", "treble" and "volume"
controls on stereos, and "tint" controls on TVs. Common analog processing elements include capacitors, resistors,
inductors and transistors.

Tools used in analog signal processing


A system's behavior can be mathematically modeled and is represented in the time domain as h(t) and in the
frequency domain as H(s), where s is a complex number in the form of s=a+ib, or s=a+jb in electrical engineering
terms (electrical engineers use j because current is represented by the variable i). Input signals are usually called x(t)
or X(s) and output signals are usually called y(t) or Y(s).

Convolution
Convolution is the basic concept in signal processing that states an input signal can be combined with the system's
function to find the output signal. It is the integral of the product of two waveforms after one has been reversed and
shifted; the symbol for convolution is *.

The convolution integral

    (f * g)(t) = \int_{a}^{b} f(\tau)\, g(t - \tau)\, d\tau

is used to find the convolution of a signal and a system; typically a = -\infty and b = +\infty.

Consider two waveforms f and g. By calculating the convolution, we determine how much a reversed function g
must be shifted along the x-axis to become identical to function f. The convolution function essentially reverses and
slides function g along the axis, and calculates the integral of their (f and the reversed and shifted g) product for each
possible amount of sliding. When the functions match, the value of (f * g) is maximized, because multiplying
matching positive areas (peaks) or negative areas (troughs) contributes positively to the integral.
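As a quick numerical sketch, the convolution integral can be approximated by a discrete sum; the decaying-exponential input and rectangular-pulse system below are made-up examples, not taken from the text:

```python
import numpy as np

# Discretize two signals on a common time grid.
dt = 0.001
t = np.arange(0, 2, dt)
f = np.exp(-5 * t)               # decaying exponential input
g = np.where(t < 0.5, 1.0, 0.0)  # rectangular pulse standing in for the system

# The convolution integral becomes a discrete sum scaled by dt.
y = np.convolve(f, g)[:len(t)] * dt

# For t < 0.5 the analytic value of (f*g)(t) is (1 - e^{-5t}) / 5.
t0 = 0.3
print(abs(y[int(t0 / dt)] - (1 - np.exp(-5 * t0)) / 5) < 5e-3)  # True
```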


Fourier transform
The Fourier transform is a function that transforms a signal or system in the time domain into the frequency domain,
but it only works for certain signals and systems. The constraint on which systems or signals can be transformed by
the Fourier transform is that they be absolutely integrable:

    \int_{-\infty}^{\infty} |f(t)|\, dt < \infty

This is the Fourier transform integral:

    F(j\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt

Most of the time the Fourier transform integral isn't used to determine the transform; usually a table of transform
pairs is used to find the Fourier transform of a signal or system. The inverse Fourier transform is used to go from
the frequency domain back to the time domain:

    f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(j\omega)\, e^{j\omega t}\, d\omega

Each signal or system that can be transformed has a unique Fourier transform: exactly one time signal and one
frequency signal go together as a pair.

Laplace transform
The Laplace transform is a generalized Fourier transform. It allows a transform of any system or signal because it is
a transform into the complex plane instead of just the j\omega axis, as the Fourier transform is. The major difference is
that the Laplace transform has a region of convergence for which the transform is valid. This implies that a signal in
frequency may correspond to more than one signal in time; the correct time signal for the transform is determined by
the region of convergence. If the region of convergence includes the j\omega axis, j\omega can be substituted for s and the
result is the Fourier transform. The Laplace transform is:

    X(s) = \int_{-\infty}^{\infty} x(t)\, e^{-st}\, dt

and the inverse Laplace transform, if all the singularities of X(s) are in the left half of the complex plane, is:

    x(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} X(s)\, e^{st}\, ds
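The transform integral can be sketched numerically for the one-sided signal x(t) = e^{-t} (an arbitrary example), whose Laplace transform is 1/(s+1) with region of convergence Re(s) > -1:

```python
import numpy as np

# Numerical sketch: one-sided Laplace transform of x(t) = e^{-t}.
dt = 0.0001
t = np.arange(0, 50, dt)
x = np.exp(-t)

s = 2.0 + 3.0j                       # a point inside the region of convergence
X = np.sum(x * np.exp(-s * t)) * dt  # discretized transform integral
print(abs(X - 1 / (s + 1)) < 1e-3)   # True: matches X(s) = 1/(s+1)
```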

Bode plots
Bode plots are plots of magnitude vs. frequency and phase vs. frequency for a system. The magnitude axis is in
decibels (dB). The phase axis is in either degrees or radians. The frequency axes are on a logarithmic scale. These are
useful because, for sinusoidal inputs, the output amplitude is the input amplitude multiplied by the value of the
magnitude plot at that frequency, and the output is shifted by the value of the phase plot at that frequency.
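The two curves of a Bode plot can be computed directly by evaluating H(s) on s = j\omega; the first-order low-pass filter below, with a 100 Hz cutoff, is a hypothetical example:

```python
import numpy as np

# First-order low-pass filter H(s) = 1 / (1 + s/wc), a made-up example.
wc = 2 * np.pi * 100.0               # cutoff frequency in rad/s (100 Hz)
w = np.logspace(0, 5, 500)           # logarithmic frequency axis, rad/s
H = 1.0 / (1.0 + 1j * w / wc)        # evaluate H on s = jw
mag_db = 20 * np.log10(np.abs(H))    # magnitude curve, dB
phase_deg = np.degrees(np.angle(H))  # phase curve, degrees

# Evaluated exactly at the cutoff, the gain is -3.01 dB and the phase -45 deg.
Hc = 1.0 / (1.0 + 1j)
print(20 * np.log10(abs(Hc)), np.degrees(np.angle(Hc)))
```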

Domains
Time domain
This is the domain that most people are familiar with. A plot in the time domain shows the amplitude of the signal
with respect to time.


Frequency domain
A plot in the frequency domain shows either the phase shift or the magnitude of a signal at each frequency present in
it. These can be found by taking the Fourier transform of a time signal and are plotted similarly to a Bode plot.

Signals
While any signal can be used in analog signal processing, there are many types of signals that are used very
frequently.

Sinusoids
Sinusoids are the building blocks of analog signal processing. All real-world signals can be represented as an infinite
sum of sinusoidal functions via a Fourier series. A sinusoidal function can be represented in terms of an exponential
by the application of Euler's formula.
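Euler's formula, and the representation of a cosine as a pair of complex exponentials, can be checked numerically:

```python
import cmath
import math

theta = 0.7
# Euler's formula: e^{i theta} = cos(theta) + i sin(theta).
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs) < 1e-12)  # True

# A real sinusoid as a sum of exponentials: cos = (e^{i t} + e^{-i t}) / 2.
cos_from_exp = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
print(abs(cos_from_exp.real - math.cos(theta)) < 1e-12)  # True
```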

Impulse
An impulse (Dirac delta function) is defined as a signal that has an infinite magnitude and an infinitesimally narrow
width with an area under it of one, centered at zero. An impulse can be represented as an infinite sum of sinusoids
that includes all possible frequencies. It is not, in reality, possible to generate such a signal, but it can be sufficiently
approximated with a large-amplitude, narrow pulse to produce the theoretical impulse response in a network to a
high degree of accuracy. The symbol for an impulse is \delta(t). If an impulse is used as an input to a system, the output
is known as the impulse response. The impulse response defines the system because all possible frequencies are
represented in the input.
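In discrete time the same idea is easy to demonstrate: feeding a unit impulse into a simple LTI difference equation (a made-up first-order system) returns its impulse response directly:

```python
# Feed a unit impulse into the discrete LTI system y[n] = 0.5*y[n-1] + x[n]
# and read off its impulse response.
N = 8
x = [1.0] + [0.0] * (N - 1)   # discrete stand-in for delta(t)
y = []
prev = 0.0
for xn in x:
    prev = 0.5 * prev + xn
    y.append(prev)

print(y[:4])  # [1.0, 0.5, 0.25, 0.125] -- the response 0.5**n
```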

Step
A unit step function, also called the Heaviside step function, is a signal that has a magnitude of zero before zero and
a magnitude of one after zero. The symbol for a unit step is u(t). If a step is used as the input to a system, the output
is called the step response. The step response shows how a system responds to a sudden input, similar to turning on a
switch. The period before the output stabilizes is called the transient part of a signal. The step response can be
multiplied with other signals to show how the system responds when an input is suddenly turned on.
The unit step function is related to the Dirac delta function by:

    u(t) = \int_{-\infty}^{t} \delta(\tau)\, d\tau
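This relation can be illustrated numerically: the running integral of a narrow unit-area pulse (a stand-in for \delta(t)) approximates the unit step:

```python
import numpy as np

dt = 0.001
t = np.arange(-1, 1, dt)

# A narrow pulse of unit area standing in for delta(t).
delta = np.zeros_like(t)
i0 = len(t) // 2                     # index of t = 0
delta[i0:i0 + 10] = 100.0            # 10 samples * 100 * dt = unit area
print(round(np.sum(delta) * dt, 6))  # 1.0

# The running integral of the impulse approximates the unit step u(t).
u = np.cumsum(delta) * dt
print(round(u[0], 6), round(u[-1], 6))  # 0.0 1.0
```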

Systems
Linear time-invariant (LTI)
Linearity means that if a linear combination of two inputs is applied to a system, the output is the same linear
combination of the corresponding outputs: if inputs x1(t) and x2(t) produce outputs y1(t) and y2(t), then the input
a x1(t) + b x2(t) produces the output a y1(t) + b y2(t). An example of a linear system is a first order low-pass or
high-pass filter. Linear systems are made out of analog devices that demonstrate linear properties. These devices
don't have to be entirely linear, but must have a region of operation that is linear. An operational amplifier is a
non-linear device, but has a region of operation that is linear, so it can be modeled as linear within that region of
operation. Time-invariance means it doesn't matter when you start a system; the same output will result. For
example, if you have a system and put an input into it today, you would get the same output if you started the system
tomorrow instead. There aren't any real systems that are LTI, but many systems can be modeled as LTI for simplicity
in determining what their output will be. All systems have some dependence on things like temperature, signal level
or other factors that cause them to be non-linear or non-time-invariant, but most are stable enough to model as LTI.
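Both properties can be checked numerically for a simple first-order difference equation (a made-up LTI system):

```python
import numpy as np

def lti(x):
    # Simple LTI system: y[n] = 0.5*y[n-1] + x[n].
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (0.5 * y[n - 1] if n else 0.0) + x[n]
    return y

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=32), rng.normal(size=32)

# Linearity: the response to 2*x1 + 3*x2 equals 2*y1 + 3*y2.
print(np.allclose(lti(2 * x1 + 3 * x2), 2 * lti(x1) + 3 * lti(x2)))  # True

# Time-invariance: delaying the input delays the output identically.
shifted = np.concatenate([np.zeros(5), x1[:-5]])
print(np.allclose(lti(shifted)[5:], lti(x1)[:-5]))  # True
```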



Linearity and time-invariance are important because LTI systems are the only systems that can be easily solved
using conventional analog signal processing methods. Once a system becomes non-linear or non-time-invariant, it
becomes a non-linear differential equations problem, and very few of those can actually be solved.
(Haykin & Van Veen 2003)

Common systems
Some common systems used in everyday life are filters, AM/FM radio, electric guitars and musical instrument
amplifiers. Filters are used in almost everything that has electronic circuitry. Radio and television are good examples
of everyday uses of filters. When a channel is changed on an analog television set or radio, an analog filter is used to
pick out the carrier frequency on the input signal. Once it's isolated, the television or radio information being
broadcast is used to form the picture and/or sound. Another common analog system is an electric guitar and its
amplifier. The guitar uses a magnet with a coil wrapped around it (inductor) to turn the vibration of the strings into a
small electric current. The current is then filtered, amplified and sent to a speaker in the amplifier. Most amplifiers
are analog because they are easier and cheaper to make than digital amplifiers. There are also many analog
guitar effects pedals, although a large number of pedals are now digital (they turn the input current into a digitized
value, perform an operation on it, then convert it back into an analog signal).

References
Haykin, Simon, and Barry Van Veen. Signals and Systems. 2nd ed. Hoboken, NJ: John Wiley and Sons, Inc., 2003.
McClellan, James H., Ronald W. Schafer, and Mark A. Yoder. Signal Processing First. Upper Saddle River, NJ: Pearson Education, Inc., 2003.

Fourier transform
The Fourier transform is a mathematical operation that decomposes a function into its constituent frequencies,
known as its frequency spectrum. For instance, the transform of a musical chord made up of pure notes (without
overtones) is a mathematical representation of the amplitudes and phases of the individual notes that make it up. The
composite waveform depends on time, and therefore is called the time domain representation. The frequency
spectrum is a function of frequency and is called the frequency domain representation. Each value of the function is a
complex number (called complex amplitude) that encodes both a magnitude and phase component. The term "Fourier
transform" refers to both the transform operation and to the complex-valued function it produces.
In the case of a periodic function, like the musical chord, the Fourier transform can be simplified to the calculation
of a discrete set of complex amplitudes, called Fourier series coefficients. Also, when a time-domain function is
sampled to facilitate storage and/or computer-processing, it is still possible to recreate a version of the original
Fourier transform according to the Poisson summation formula, also known as discrete-time Fourier transform.
These topics are addressed in separate articles. For an overview of those and other related operations, refer to
Fourier analysis or List of Fourier-related transforms.


Definition
There are several common conventions for defining the Fourier transform \hat{f} of an integrable function
f : R \to C (Kaiser 1994). This article will use the definition:

    \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx,   for every real number \xi.

When the independent variable x represents time (with SI unit of seconds), the transform variable \xi represents
frequency (in hertz). Under suitable conditions, f can be reconstructed from \hat{f} by the inverse transform:

    f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi,   for every real number x.

For other common conventions and notations, including using the angular frequency \omega instead of the frequency \xi,
see Other conventions and Other notations below. The Fourier transform on Euclidean space is treated separately, in
which the variable x often represents position and \xi momentum.

Introduction
The motivation for the Fourier transform comes from the study of Fourier series. In the study of Fourier series,
complicated functions are written as the sum of simple waves mathematically represented by sines and cosines. Due
to the properties of sine and cosine, it is possible to recover the amplitude of each wave in the sum by an integral. In
many cases it is desirable to use Euler's formula, which states that e^{2\pi i\theta} = \cos 2\pi\theta + i \sin 2\pi\theta, to write Fourier series
in terms of the basic waves e^{2\pi i\theta}. This has the advantage of simplifying many of the formulas involved, and provides
a formulation for Fourier series that more closely resembles the definition followed in this article. Re-writing sines
and cosines as complex exponentials makes it necessary for the Fourier coefficients to be complex valued. The usual
interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function
and the phase (or the initial angle) of the wave. These complex exponentials sometimes contain negative
"frequencies". If \theta is measured in seconds, then the waves e^{2\pi i\theta} and e^{-2\pi i\theta} both complete one cycle per second, but
they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of
cycles per unit time, but is still closely related.

There is a close connection between the definition of Fourier series and the Fourier transform for functions f which
are zero outside of an interval. For such a function, we can calculate its Fourier series on any interval that includes
the points where f is not identically zero. The Fourier transform is also defined for such a function. As we increase
the length of the interval on which we calculate the Fourier series, the Fourier series coefficients begin to look
like the Fourier transform and the sum of the Fourier series of f begins to look like the inverse Fourier transform. To
explain this more precisely, suppose that T is large enough so that the interval [-T/2, T/2] contains the interval on
which f is not identically zero. Then the n-th series coefficient c_n is given by:

    c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(x)\, e^{-2\pi i (n/T) x}\, dx.

Comparing this to the definition of the Fourier transform, it follows that c_n = \frac{1}{T} \hat{f}(n/T), since f(x) is zero outside
[-T/2, T/2]. Thus the Fourier coefficients are just the values of the Fourier transform sampled on a grid of width 1/T.
As T increases the Fourier coefficients more closely represent the Fourier transform of the function.

Under appropriate conditions, the sum of the Fourier series of f will equal the function f. In other words, f can be
written:

    f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{2\pi i (n/T) x} = \sum_{n=-\infty}^{\infty} \hat{f}(\xi_n)\, e^{2\pi i \xi_n x}\, \Delta\xi,

where the last sum is simply the first sum rewritten using the definitions \xi_n = n/T and \Delta\xi = (n+1)/T - n/T = 1/T.


This second sum is a Riemann sum, and so by letting T \to \infty it will converge to the integral for the inverse Fourier
transform given in the definition section. Under suitable conditions this argument may be made precise (Stein &
Shakarchi 2003).

In the study of Fourier series the numbers c_n could be thought of as the "amount" of the wave in the Fourier series of
f. Similarly, as seen above, the Fourier transform can be thought of as a function that measures how much of each
individual frequency is present in our function f, and we can recombine these waves by using an integral (or
"continuous sum") to reproduce the original function.

The following images provide a visual illustration of how the Fourier transform measures whether a frequency is
present in a particular function. The function depicted, f(t), oscillates at 3 hertz (if t measures seconds) and tends
quickly to 0. This function was specially chosen to have a real Fourier transform which can easily be plotted. The
first image contains its graph. In order to calculate \hat{f}(3) we must integrate e^{-2\pi i (3t)} f(t). The second image shows the
plot of the real and imaginary parts of this function. The real part of the integrand is almost always positive: when
f(t) is negative, the real part of e^{-2\pi i (3t)} is negative as well, and because they oscillate at the same rate, when f(t) is
positive, so is the real part of e^{-2\pi i (3t)}. The result is that when you integrate the real part of the integrand you get a
relatively large number (in this case 0.5). On the other hand, when you try to measure a frequency that is not present,
as in the case when we look at \hat{f}(5), the integrand oscillates enough so that the integral is very small. The general
situation may be a bit more complicated than this, but this in spirit is how the Fourier transform measures how much
of an individual frequency is present in a function f(t).

[Figure: original function showing an oscillation at 3 hertz.]
[Figure: real and imaginary parts of the integrand for the Fourier transform at 3 hertz.]
[Figure: real and imaginary parts of the integrand for the Fourier transform at 5 hertz.]
[Figure: Fourier transform with the 3 and 5 hertz components labeled.]
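The measurement described above can be reproduced numerically. The specific function is not given in the text; a natural choice consistent with its description (assumed here purely for illustration) is f(t) = cos(6\pi t)\, e^{-\pi t^2}:

```python
import numpy as np

# A function that oscillates at 3 Hz and decays quickly (assumed example).
dt = 0.001
t = np.arange(-5, 5, dt)
f = np.cos(6 * np.pi * t) * np.exp(-np.pi * t**2)

def ft(xi):
    # Numerical Fourier transform integral at frequency xi (in Hz).
    return np.sum(f * np.exp(-2j * np.pi * xi * t)) * dt

print(round(ft(3).real, 3))  # 0.5 -- the 3 Hz component is present
print(round(ft(5).real, 3))  # 0.0 -- no 5 Hz component
```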

Properties of the Fourier transform


Here we assume f(x), g(x), and h(x) are integrable functions: Lebesgue-measurable on the real line and satisfying

    \int_{-\infty}^{\infty} |f(x)|\, dx < \infty.

We denote the Fourier transforms of these functions by \hat{f}(\xi), \hat{g}(\xi), and \hat{h}(\xi) respectively.

Basic properties
The Fourier transform has the following basic properties (Pinsky 2002).

Linearity
    For any complex numbers a and b, if h(x) = a f(x) + b g(x), then \hat{h}(\xi) = a \hat{f}(\xi) + b \hat{g}(\xi).
Translation
    For any real number x_0, if h(x) = f(x - x_0), then \hat{h}(\xi) = e^{-2\pi i x_0 \xi} \hat{f}(\xi).
Modulation
    For any real number \xi_0, if h(x) = e^{2\pi i x \xi_0} f(x), then \hat{h}(\xi) = \hat{f}(\xi - \xi_0).
Scaling
    For a non-zero real number a, if h(x) = f(ax), then \hat{h}(\xi) = \frac{1}{|a|} \hat{f}(\xi/a). The case a = -1 leads to the
    time-reversal property, which states: if h(x) = f(-x), then \hat{h}(\xi) = \hat{f}(-\xi).
Conjugation
    If h(x) = \overline{f(x)}, then \hat{h}(\xi) = \overline{\hat{f}(-\xi)}. In particular, if f is real, then one has the reality condition
    \hat{f}(-\xi) = \overline{\hat{f}(\xi)}. And if f is purely imaginary, then \hat{f}(-\xi) = -\overline{\hat{f}(\xi)}.
Duality
    If h(x) = \hat{f}(x), then \hat{h}(\xi) = f(-\xi).
Convolution
    If h(x) = (f * g)(x), then \hat{h}(\xi) = \hat{f}(\xi)\, \hat{g}(\xi).
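Several of these properties can be verified numerically under the e^{-2\pi i x \xi} convention; here the translation property is checked for a Gaussian test function (the function and the shift x0 = 1.3 are arbitrary choices):

```python
import numpy as np

dt = 0.001
x = np.arange(-10, 10, dt)
f = np.exp(-np.pi * x**2)          # a Gaussian test function

def ft(g, xi):
    # Numerical Fourier transform under the e^{-2 pi i x xi} convention.
    return np.sum(g * np.exp(-2j * np.pi * xi * x)) * dt

xi, x0 = 0.7, 1.3
# Translation: the transform of f(x - x0) is e^{-2 pi i x0 xi} * fhat(xi).
lhs = ft(np.exp(-np.pi * (x - x0)**2), xi)
rhs = np.exp(-2j * np.pi * x0 * xi) * ft(f, xi)
print(abs(lhs - rhs) < 1e-6)  # True
```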

Uniform continuity and the Riemann-Lebesgue lemma

The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of
integrable functions have several strong properties.

[Figure: the rectangular function is Lebesgue integrable.]
[Figure: the sinc function, the Fourier transform of the rectangular function, is bounded and continuous, but not
Lebesgue integrable.]

The Fourier transform \hat{f} of any integrable function f is uniformly continuous, and \|\hat{f}\|_{\infty} \le \|f\|_{1} (Katznelson
1976). By the Riemann-Lebesgue lemma (Stein & Weiss 1971),

    \hat{f}(\xi) \to 0 \quad \text{as } |\xi| \to \infty.

Furthermore, \hat{f} is bounded and continuous, but need not be integrable. For example, the Fourier transform of the
rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper
integrals behave analogously to the alternating harmonic series, converging to a sum without being absolutely
convergent.
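The rectangular-function example can be checked numerically: the transform integral of rect evaluates to the normalized sinc function:

```python
import numpy as np

dt = 0.0005
x = np.arange(-1, 1, dt)
rect = np.where(np.abs(x) <= 0.5, 1.0, 0.0)  # the rectangular function

def ft(xi):
    # Numerical Fourier transform under the e^{-2 pi i x xi} convention.
    return np.sum(rect * np.exp(-2j * np.pi * xi * x)) * dt

# The transform of rect is sinc(xi) = sin(pi xi)/(pi xi), numpy's np.sinc.
for xi in [0.25, 1.5, 3.7]:
    print(abs(ft(xi).real - np.sinc(xi)) < 2e-3)  # True each time
```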


It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both f and \hat{f} are
integrable, the inverse equality

    f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi

holds almost everywhere. That is, the Fourier transform is injective on L^1(R). (But if f is continuous, then equality
holds for every x.)

Plancherel theorem and Parseval's theorem


Let f(x) and g(x) be integrable, and let \hat{f}(\xi) and \hat{g}(\xi) be their Fourier transforms. If f(x) and g(x) are also
square-integrable, then we have Parseval's theorem (Rudin 1987, p. 187):

    \int_{-\infty}^{\infty} f(x)\, \overline{g(x)}\, dx = \int_{-\infty}^{\infty} \hat{f}(\xi)\, \overline{\hat{g}(\xi)}\, d\xi,

where the bar denotes complex conjugation.

The Plancherel theorem, which is equivalent to Parseval's theorem, states (Rudin 1987, p. 186):

    \int_{-\infty}^{\infty} |f(x)|^2\, dx = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2\, d\xi.

The Plancherel theorem makes it possible to define the Fourier transform for functions in L^2(R), as described in
Generalizations below. The Plancherel theorem has the interpretation in the sciences that the Fourier transform
preserves the energy of the original quantity. Depending on the author, either of these theorems might be referred to
as the Plancherel theorem or as Parseval's theorem.

See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.
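The discrete analogue of Parseval's theorem holds exactly for the DFT, with a factor 1/N under NumPy's default normalization:

```python
import numpy as np

# Discrete Parseval check: sum |x|^2 == (1/N) * sum |X|^2 for the DFT.
rng = np.random.default_rng(1)
x = rng.normal(size=256)
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
print(np.isclose(energy_time, energy_freq))  # True
```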

Poisson summation formula


The Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation
of a function to values of the function's continuous Fourier transform. It has a variety of useful forms that are derived
from the basic one by application of the Fourier transform's scaling and time-shifting properties. One such form
leads directly to a proof of the Nyquist-Shannon sampling theorem.
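The basic form of the formula, \sum_n f(n) = \sum_k \hat{f}(k), can be checked numerically for a Gaussian, whose transform under the convention of this article is known in closed form (the width a = 0.5 is an arbitrary choice):

```python
import numpy as np

# Poisson summation for f(x) = exp(-pi*a*x^2); under this convention
# fhat(xi) = a**-0.5 * exp(-pi*xi**2/a).
a = 0.5
n = np.arange(-50, 51)
lhs = np.sum(np.exp(-np.pi * a * n**2))            # sum of f over integers
rhs = np.sum(a ** -0.5 * np.exp(-np.pi * n**2 / a))  # sum of fhat over integers
print(np.isclose(lhs, rhs))  # True
```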

Convolution theorem
The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable
functions with Fourier transforms \hat{f}(\xi) and \hat{g}(\xi) respectively, then the Fourier transform of the convolution is
given by the product of the Fourier transforms \hat{f}(\xi) and \hat{g}(\xi) (under other conventions for the definition of the
Fourier transform a constant factor may appear).

This means that if:

    h(x) = (f * g)(x) = \int_{-\infty}^{\infty} f(y)\, g(x - y)\, dy,

where * denotes the convolution operation, then:

    \hat{h}(\xi) = \hat{f}(\xi)\, \hat{g}(\xi).

In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system
with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, \hat{g}(\xi)
represents the frequency response of the system.

Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the
Fourier transform of f(x) is given by the convolution of the respective Fourier transforms \hat{p}(\xi) and \hat{q}(\xi).
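The discrete counterpart of the convolution theorem can be verified with the FFT, zero-padding so the linear convolution is not truncated (the random signals are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(size=64)
g = rng.normal(size=64)

# Zero-pad so the full linear convolution fits, then compare both sides.
N = 128
h = np.convolve(f, g)                      # length-127 linear convolution
lhs = np.fft.fft(h, N)                     # transform of the convolution
rhs = np.fft.fft(f, N) * np.fft.fft(g, N)  # product of the transforms
print(np.allclose(lhs, rhs))  # True
```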


Cross-correlation theorem
In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x):

    h(x) = (f \star g)(x) = \int_{-\infty}^{\infty} \overline{f(y)}\, g(x + y)\, dy,

then the Fourier transform of h(x) is:

    \hat{h}(\xi) = \overline{\hat{f}(\xi)}\, \hat{g}(\xi).

As a special case, the autocorrelation of the function f(x) is:

    h(x) = (f \star f)(x) = \int_{-\infty}^{\infty} \overline{f(y)}\, f(x + y)\, dy,

for which

    \hat{h}(\xi) = \overline{\hat{f}(\xi)}\, \hat{f}(\xi) = |\hat{f}(\xi)|^2.
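The autocorrelation case can be verified discretely: after wrapping the lags into the DFT's native indexing, the transform of the autocorrelation equals the power spectrum |\hat{f}|^2:

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.normal(size=64)
N = 128

# Autocorrelation at lags -63..63 via np.correlate, then wrapped so that
# lag 0 sits at index 0 (the DFT's native indexing of negative lags).
h = np.correlate(f, f, mode="full")        # 127 lags, -63..63
hp = np.concatenate([h, [0.0]])            # pad to length N
lhs = np.fft.fft(np.roll(hp, -63))         # transform of the autocorrelation
rhs = np.abs(np.fft.fft(f, N)) ** 2        # power spectrum |fhat|^2
print(np.allclose(lhs, rhs))  # True
```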

Eigenfunctions
One important choice of an orthonormal basis for L^2(R) is given by the Hermite functions

    \psi_n(x) = \frac{2^{1/4}}{\sqrt{n!}}\, e^{-\pi x^2}\, He_n(2x\sqrt{\pi}),

where He_n(x) are the "probabilist's" Hermite polynomials, defined by He_n(x) = (-1)^n \exp(x^2/2)\, D^n \exp(-x^2/2).
Under this convention for the Fourier transform, we have that

    \hat{\psi}_n(\xi) = (-i)^n\, \psi_n(\xi).

In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier
transform on L^2(R) (Pinsky 2002). However, this choice of eigenfunctions is not unique. There are only four
different eigenvalues of the Fourier transform (\pm 1 and \pm i) and any linear combination of eigenfunctions with the
same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L^2(R) as a direct
sum of four spaces H_0, H_1, H_2, and H_3, where the Fourier transform acts on H_k simply by multiplication by i^k. This
approach to define the Fourier transform is due to N. Wiener (Duoandikoetxea 2001). Among other properties,
Hermite functions decrease exponentially fast in both frequency and time domains, and they are used to define a
generalization of the Fourier transform, namely the fractional Fourier transform used in time-frequency analysis
(Boashash 2003).
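The eigenfunction relation can be checked numerically for the first two Hermite functions, assuming the normalization \psi_n(x) = 2^{1/4} (n!)^{-1/2} e^{-\pi x^2} He_n(2x\sqrt{\pi}), so that He_0(u) = 1 and He_1(u) = u:

```python
import numpy as np

dx = 0.001
x = np.arange(-10, 10, dx)

# First two Hermite functions under the assumed normalization.
psi0 = 2 ** 0.25 * np.exp(-np.pi * x**2)
psi1 = 2 ** 0.25 * (2 * np.sqrt(np.pi) * x) * np.exp(-np.pi * x**2)

def ft(f, xi):
    # Numerical Fourier transform under the e^{-2 pi i x xi} convention.
    return np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx

xi = 0.4
i0 = np.argmin(np.abs(x - xi))       # grid index nearest x = xi
print(abs(ft(psi0, xi) - psi0[i0]) < 1e-6)          # eigenvalue (-i)^0 = 1
print(abs(ft(psi1, xi) - (-1j) * psi1[i0]) < 1e-6)  # eigenvalue (-i)^1 = -i
```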

Fourier transform on Euclidean space


The Fourier transform can be defined in any arbitrary number of dimensions n. As with the one-dimensional case,
there are many conventions. For an integrable function f(x) this article takes the definition:

    \hat{f}(\xi) = \int_{\mathbb{R}^n} f(x)\, e^{-2\pi i x \cdot \xi}\, dx,

where x and \xi are n-dimensional vectors, and x \cdot \xi is the dot product of the vectors. The dot product is sometimes
written as \langle x, \xi \rangle.

All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and
Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the
Riemann-Lebesgue lemma holds (Stein & Weiss 1971).


Uncertainty principle
Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform \hat{f}(\xi) must be. In
particular, the scaling property of the Fourier transform may be seen as saying: if we "squeeze" a function in x, its
Fourier transform "stretches out" in \xi. It is not possible to arbitrarily concentrate both a function and its Fourier
transform.

The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an
uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the
symplectic form on the time-frequency domain: from the point of view of the linear canonical transformation, the
Fourier transform is rotation by 90° in the time-frequency domain, and preserves the symplectic form.

Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is
normalized:

    \int_{-\infty}^{\infty} |f(x)|^2\, dx = 1.

It follows from the Plancherel theorem that \hat{f}(\xi) is also normalized.

The spread around x = 0 may be measured by the dispersion about zero (Pinsky 2002), defined by

    D_0(f) = \int_{-\infty}^{\infty} x^2 |f(x)|^2\, dx.

In probability terms, this is the second moment of |f(x)|^2 about zero.

The uncertainty principle states that, if f(x) is absolutely continuous and the functions x f(x) and f'(x) are square
integrable, then

    D_0(f)\, D_0(\hat{f}) \ge \frac{1}{16\pi^2}

(Pinsky 2002). The equality is attained only in the case f(x) = C_1 e^{-\pi\sigma x^2} (hence \hat{f}(\xi) = \sigma^{-1/2} C_1 e^{-\pi\xi^2/\sigma}), where \sigma > 0
is arbitrary and C_1 is such that f is L^2-normalized (Pinsky 2002). In other words, f is a (normalized) Gaussian
function centered at zero, and its Fourier transform is a Gaussian function whose width is inversely related to \sigma.

In fact, this inequality implies that:

    \left( \int_{-\infty}^{\infty} (x - x_0)^2 |f(x)|^2\, dx \right) \left( \int_{-\infty}^{\infty} (\xi - \xi_0)^2 |\hat{f}(\xi)|^2\, d\xi \right) \ge \frac{1}{16\pi^2}

for any x_0 and \xi_0 in R (Stein & Shakarchi 2003).

In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, to within a factor of
Planck's constant. With this constant properly taken into account, the inequality above becomes the statement of the
Heisenberg uncertainty principle (Stein & Shakarchi 2003).

A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as:

    H(|f|^2) + H(|\hat{f}|^2) \ge \log(e/2),

where H(p) is the differential entropy of the probability density function p(x):

    H(p) = -\int_{-\infty}^{\infty} p(x) \log p(x)\, dx,

where the logarithms may be in any base which is consistent. The equality is attained for a Gaussian, as in the
previous case.
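The equality case can be checked numerically: for the normalized Gaussian f(x) = 2^{1/4} e^{-\pi x^2}, which is its own Fourier transform under this convention, the product of dispersions attains the bound 1/(16\pi^2):

```python
import numpy as np

dx = 0.001
x = np.arange(-8, 8, dx)

# Normalized Gaussian; under this convention it equals its own transform.
f = 2 ** 0.25 * np.exp(-np.pi * x**2)
print(round(np.sum(f**2) * dx, 6))  # 1.0 (normalized)

# Dispersion about zero: D0 = integral of x^2 |f(x)|^2 dx.
D0 = np.sum(x**2 * f**2) * dx
bound = 1 / (16 * np.pi**2)
print(np.isclose(D0 * D0, bound))  # True: the Gaussian attains equality
```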


Spherical harmonics
Let the set of homogeneous harmonic polynomials of degree k on R^n be denoted by A_k. The set A_k consists of the
solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the
Hermite polynomials in dimension one. Specifically, if f(x) = e^{-\pi|x|^2} P(x) for some P(x) in A_k, then
\hat{f}(\xi) = i^{-k} f(\xi). Let the set H_k be the closure in L^2(R^n) of linear combinations of functions of the form f(|x|)P(x)
where P(x) is in A_k. The space L^2(R^n) is then a direct sum of the spaces H_k, the Fourier transform maps each
space H_k to itself, and it is possible to characterize the action of the Fourier transform on each space H_k (Stein &
Weiss 1971). Let f(x) = f_0(|x|)P(x) (with P(x) in A_k); then \hat{f}(\xi) = F_0(|\xi|)P(\xi), where

    F_0(r) = 2\pi i^{-k} r^{-(n+2k-2)/2} \int_0^\infty f_0(s)\, J_{(n+2k-2)/2}(2\pi r s)\, s^{(n+2k)/2}\, ds.

Here J_{(n+2k-2)/2} denotes the Bessel function of the first kind with order (n+2k-2)/2. When k = 0 this gives a
useful formula for the Fourier transform of a radial function (Grafakos 2004).

Restriction problems
In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier
transform of an integrable function is continuous, and the restriction of this function to any set is defined. But for a
square-integrable function the Fourier transform could be a general class of square integrable functions. As such, the
restriction of the Fourier transform of an L^2(R^n) function cannot be defined on sets of measure 0. It is still an active
area of study to understand restriction problems in L^p for 1 < p < 2. Surprisingly, it is possible in some cases to
define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit
sphere in R^n is of particular interest. In this case the Tomas-Stein restriction theorem states that the restriction of the
Fourier transform to the unit sphere in R^n is a bounded operator on L^p provided 1 \le p \le (2n + 2)/(n + 3).

One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial
sum operator. Consider an increasing collection of measurable sets E_R indexed by R \in (0, \infty), such as balls of radius
R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function f_R defined by:

    f_R(x) = \int_{E_R} \hat{f}(\xi)\, e^{2\pi i x \cdot \xi}\, d\xi.

Suppose in addition that f is in L^p(R^n). For n = 1 and 1 < p < \infty, if one takes E_R = (-R, R), then f_R converges to f in
L^p as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for
n > 1. In the case that E_R is taken to be a cube with side length R, then convergence still holds. Another natural
candidate is the Euclidean ball E_R = \{\xi : |\xi| < R\}. In order for this partial sum operator to converge, it is necessary
that the multiplier for the unit ball be bounded in L^p(R^n). For n \ge 2 it is a celebrated theorem of Charles Fefferman
that the multiplier for the unit ball is never bounded unless p = 2 (Duoandikoetxea 2001). In fact, when p \ne 2, this
shows that not only may f_R fail to converge to f in L^p, but for some functions f \in L^p(R^n), f_R is not even an element of
L^p.

Fourier transform on other function spaces


The definition of the Fourier transform by the integral formula

    \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx

is valid for Lebesgue integrable functions f; that is, f in L^1(R). The image of L^1 is a subset of the space C_0(R) of
continuous functions that tend to zero at infinity (the Riemann-Lebesgue lemma), although it is not the entire space.
Indeed, there is no simple characterization of the image.
It is possible to extend the definition of the Fourier transform to other spaces of functions. Since compactly
supported smooth functions are integrable and dense in L2(R), the Plancherel theorem allows us to extend the


definition of the Fourier transform to general functions in L^2(R) by continuity arguments. Further,
\mathcal{F} : L^2(R) \to L^2(R) is a unitary operator (Stein & Weiss 1971, Thm. 2.3). In particular, the image of L^2(R) under the
Fourier transform is L^2(R) itself. The Fourier transform in L^2(R) is no longer given by an ordinary Lebesgue
integral, although it can be computed by an improper integral, here meaning that for an L^2 function f,

    \hat{f}(\xi) = \lim_{R \to \infty} \int_{-R}^{R} f(x)\, e^{-2\pi i x \xi}\, dx,

where the limit is taken in the L^2 sense. Many of the properties of the Fourier transform in L^1 carry over to L^2 by a
suitable limiting argument.
The definition of the Fourier transform can be extended to functions in L^p(R) for 1 \le p \le 2 by decomposing such
functions into a fat tail part in L^2 plus a fat body part in L^1. In each of these spaces, the Fourier transform of a
function in L^p(R) lands in the dual space L^q(R), with q = p/(p-1), by the Hausdorff-Young inequality. However,
except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier
transform of functions in L^p for the range 2 < p < \infty requires the study of distributions (Katznelson 1976). In fact, it
can be shown that there are functions in L^p with p > 2 for which the Fourier transform is not defined as a function
(Stein & Weiss 1971).

Tempered distributions
The Fourier transform maps the space of Schwartz functions to itself, and gives a homeomorphism of the space to
itself (Stein & Weiss 1971). Because of this it is possible to define the Fourier transform of tempered distributions.
These include all the integrable functions mentioned above, as well as well-behaved functions of polynomial growth
and distributions of compact support, and have the added advantage that the Fourier transform of any tempered
distribution is again a tempered distribution.
The following two facts provide some motivation for the definition of the Fourier transform of a distribution. First let f and g be integrable functions, and let f̂ and ĝ be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula (Stein & Weiss 1971):

    ∫ f̂(x) g(x) dx = ∫ f(x) ĝ(x) dx.

Secondly, every integrable function f defines a distribution T_f by the relation

    T_f(φ) = ∫ f(x) φ(x) dx

for all Schwartz functions φ. In fact, given a distribution T, we define its Fourier transform T̂ by the relation

    T̂(φ) = T(φ̂)

for all Schwartz functions φ. It follows that the Fourier transform of T_f is T_{f̂}.
Distributions can be differentiated and the above mentioned compatibility of the Fourier transform with
differentiation and convolution remains true for tempered distributions.
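As a concrete illustration, the multiplication formula can be checked numerically for a pair of Gaussians whose transforms are known in closed form; the grid and the particular test functions below are illustrative choices, not taken from the text.

```python
import numpy as np

# Numerical check of the multiplication formula: for integrable f and g,
# the integral of f̂·g equals the integral of f·ĝ.
# Here f(x) = exp(-pi x^2) and g(x) = exp(-pi x^2 / 4), whose transforms
# (ordinary-frequency convention) are known in closed form:
#   f̂(ξ) = exp(-pi ξ^2),   ĝ(ξ) = 2 exp(-4 pi ξ^2).
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

f = np.exp(-np.pi * x**2)
g = np.exp(-np.pi * x**2 / 4)
f_hat = np.exp(-np.pi * x**2)           # f is its own transform
g_hat = 2 * np.exp(-4 * np.pi * x**2)

lhs = np.sum(f_hat * g) * dx            # ∫ f̂(x) g(x) dx
rhs = np.sum(f * g_hat) * dx            # ∫ f(x) ĝ(x) dx
print(lhs, rhs)                         # both ≈ 2/sqrt(5) ≈ 0.894427
```

Both integrands reduce to Gaussians with the same total integral 2/√5, so the two sides agree to the accuracy of the numerical quadrature.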


Generalizations
Fourier–Stieltjes transform
The Fourier transform of a finite Borel measure μ on R^n is given by (Pinsky 2002):

    μ̂(ξ) = ∫_{R^n} e^{−2πix·ξ} dμ(x).

This transform continues to enjoy many of the properties of the Fourier transform of integrable functions. One notable difference is that the Riemann–Lebesgue lemma fails for measures (Katznelson 1976). In the case that dμ = f(x) dx, the formula above reduces to the usual definition of the Fourier transform of f. In the case that μ is the probability distribution associated to a random variable X, the Fourier–Stieltjes transform is closely related to the characteristic function, but the typical conventions in probability theory take e^{ix·ξ} instead of e^{−2πix·ξ} (Pinsky 2002). In the case when the distribution has a probability density function, this definition reduces to the Fourier transform applied to the probability density function, again with a different choice of constants.
The Fourier transform may be used to give a characterization of continuous measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure (Katznelson 1976).
Furthermore, the Dirac delta function is not a function but it is a finite Borel measure. Its Fourier transform is a constant function (whose specific value depends upon the form of the Fourier transform used).

Locally compact abelian groups


The Fourier transform may be generalized to any locally compact abelian group. A locally compact abelian group is an abelian group which is at the same time a locally compact Hausdorff topological space in which the group operations are continuous. If G is a locally compact abelian group, it has a translation-invariant measure μ, called Haar measure. For a locally compact abelian group G it is possible to place a topology on the set of characters Ĝ so that Ĝ is also a locally compact abelian group. For a function f in L1(G) its Fourier transform is defined by (Katznelson 1976):

    f̂(ξ) = ∫_G f(x) ξ(x)* dμ(x)   for any character ξ in Ĝ,

where * denotes complex conjugation.
Locally compact Hausdorff space


The Fourier transform may be generalized to any locally compact Hausdorff space, which recovers the topology but
loses the group structure.
Given a locally compact Hausdorff topological space X, the space A = C0(X) of continuous complex-valued functions on X which vanish at infinity is in a natural way a commutative C*-algebra, via pointwise addition, multiplication, and complex conjugation, and with the uniform norm. Conversely, the characters of this algebra A, denoted Φ_A, are naturally a topological space and can be identified with the points of X, each character being evaluation at a point x, and one has an isometric isomorphism C0(X) → C0(Φ_A). In the case where X = R is the real line, this is exactly the Fourier transform.


Non-abelian groups
The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact.
Unlike the Fourier transform on an abelian group, which is scalar-valued, the Fourier transform on a non-abelian
group is operator-valued (Hewitt & Ross 1971, Chapter 8). The Fourier transform on compact groups is a major tool
in representation theory (Knapp 2001) and non-commutative harmonic analysis.
Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U^(σ) on the Hilbert space H_σ of finite dimension d_σ for each σ ∈ Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on H_σ defined by

    μ̂(σ) = ∫_G U̅^(σ)_g dμ(g),

where U̅^(σ) is the complex-conjugate representation of U^(σ) acting on H_σ. As in the abelian case, if μ is absolutely continuous with respect to the left-invariant probability measure λ on G, then it is represented as dμ = f dλ for some f ∈ L1(λ). In this case, one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ.
The mapping μ ↦ μ̂ defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (E_σ) indexed by Σ of (bounded) linear operators E_σ : H_σ → H_σ for which the norm

    ‖E‖ = sup_{σ∈Σ} ‖E_σ‖

is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isomorphism of C*-algebras into a subspace of C∞(Σ), in which M(G) is equipped with the product given by convolution of measures and C∞(Σ) the product given by multiplication of operators in each index σ.
The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f ∈ L2(G), then

    f(g) = ∑_{σ∈Σ} d_σ tr(f̂(σ) U^(σ)_g),

where the summation is understood as convergent in the L2 sense.


The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the
development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to
noncommutative groups is Tannaka-Krein duality, which replaces the group of characters with the category of
representations. However, this loses the connection with harmonic functions.

Alternatives
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no
frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the
magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by
phase (the argument of the Fourier transform at a point), and standing waves are not localized in time: a sine wave continues out to infinity without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent.
As alternatives to the Fourier transform, in time-frequency analysis one uses time-frequency transforms or time-frequency distributions to represent signals in a form that has some time information and some frequency information; by the uncertainty principle, there is a trade-off between these. They can be generalizations of the Fourier transform, such as the short-time Fourier transform or the fractional Fourier transform, or they can use different functions to represent signals, as in wavelet transforms and chirplet transforms, the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform (Boashash 2003).
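The time-frequency trade-off described above can be illustrated with a minimal short-time Fourier transform sketch; the sample rate, window length and two-tone test signal below are invented for the example.

```python
import numpy as np

# A minimal short-time Fourier transform: slide a window along the signal
# and FFT each segment, yielding time-localized frequency content that a
# single whole-signal transform cannot separate in time.
fs = 1000                                  # sample rate in Hz (illustrative)
t = np.arange(2 * fs) / fs                 # two seconds of samples
# First second: 50 Hz tone; second second: 200 Hz tone.
sig = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t),
                         np.sin(2 * np.pi * 200 * t))

win = 200                                  # window length in samples
freqs = np.fft.rfftfreq(win, d=1 / fs)
# Dominant frequency within each non-overlapping window:
peaks = [freqs[np.argmax(np.abs(np.fft.rfft(sig[i:i + win])))]
         for i in range(0, len(sig) - win + 1, win)]
print(peaks)    # ~50 Hz for the early windows, ~200 Hz for the late ones
```

A shorter window would localize the transition more sharply in time at the cost of coarser frequency resolution, which is exactly the uncertainty-principle trade-off.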

Applications
Analysis of differential equations
Fourier transforms and the closely related Laplace transforms are widely used in solving differential equations. The Fourier transform is compatible with differentiation in the following sense: if f(x) is a differentiable function with Fourier transform f̂(ξ), then the Fourier transform of its derivative is 2πiξ f̂(ξ). This can be used to transform differential equations into algebraic equations. Note that this technique only applies to problems whose domain is the whole set of real numbers. By extending the Fourier transform to functions of several variables, partial differential equations with domain R^n can also be translated into algebraic equations.
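A minimal sketch of this technique on a periodic grid, assuming NumPy's FFT conventions: the equation u(x) − u″(x) = f(x) becomes the algebraic relation (1 + k²)û(k) = f̂(k). The grid size and the manufactured right-hand side are illustrative choices.

```python
import numpy as np

# Solve u(x) - u''(x) = f(x) with periodic boundary conditions by dividing
# in Fourier space: (1 + k^2) * û(k) = f̂(k).
N = 64
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers 0, 1, ..., -1

f = 2 * np.cos(x)                       # manufactured so that u(x) = cos(x)
u = np.fft.ifft(np.fft.fft(f) / (1 + k**2)).real

print(np.max(np.abs(u - np.cos(x))))    # agreement to machine precision
```

The division by (1 + k²) replaces what would be a linear solve in physical space; this is the spectral-method version of turning a differential equation into an algebraic one.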

Fourier transform spectroscopy


The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g.
infrared (FTIR). In NMR an exponentially-shaped free induction decay (FID) signal is acquired in the time domain
and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in
magnetic resonance imaging (MRI) and mass spectrometry.
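A hedged numerical illustration of the NMR picture: a synthetic free induction decay (a decaying complex exponential) transforms to a Lorentzian line centred at the oscillation frequency. All parameters below are invented for the example.

```python
import numpy as np

# Synthetic FID: exponential decay times a complex oscillation. Its Fourier
# transform is a Lorentzian line centred at the oscillation frequency.
fs, T2, f0 = 100.0, 0.5, 12.0            # sample rate (Hz), decay (s), line (Hz)
t = np.arange(1024) / fs
fid = np.exp(-t / T2) * np.exp(2j * np.pi * f0 * t)

spectrum = np.fft.fft(fid)
freqs = np.fft.fftfreq(len(t), d=1 / fs)
peak = freqs[np.argmax(np.abs(spectrum))]
print(peak)                              # ≈ 12 Hz, the nearest bin to the line centre
```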

Other notations
Other common notations for the Fourier transform f̂(ξ) include f̃(ξ), F(ξ), and 𝓕(f)(ξ).
Denoting the Fourier transform by a capital letter corresponding to the letter of the function being transformed (such as f(x) and F(ξ)) is especially common in the sciences and engineering. In electronics, omega (ω) is often used instead of ξ due to its interpretation as angular frequency; sometimes the transform is written as F(jω), where j is the imaginary unit, to indicate its relationship with the Laplace transform, and sometimes it is written informally as F(2πf) in order to use ordinary frequency.
The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form

    f̂(ξ) = A(ξ) e^{iφ(ξ)}

in terms of the two real functions A(ξ) and φ(ξ), where

    A(ξ) = |f̂(ξ)|

is the amplitude and

    φ(ξ) = arg(f̂(ξ))

is the phase (see arg function).


Then the inverse transform can be written:

    f(x) = ∫_{−∞}^{∞} A(ξ) e^{i(2πxξ + φ(ξ))} dξ,

which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e^{2πixξ} whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ).
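The amplitude and phase interpretation carries over to the discrete setting; the following sketch (with an arbitrarily chosen frequency and phase) reads A and φ off a DFT bin of a sampled cosine.

```python
import numpy as np

# Reading amplitude A and phase φ off a DFT bin. For a pure cosine
# cos(2π f0 t + φ0) sampled over an integer number of periods, the bin
# at f0 holds (N/2)·exp(i φ0).
N, f0, phi0 = 64, 5, 0.7
t = np.arange(N) / N                      # one "second", so bin k = k Hz
X = np.fft.fft(np.cos(2 * np.pi * f0 * t + phi0))

amplitude = 2 * np.abs(X[f0]) / N         # recovers the unit amplitude
phase = np.angle(X[f0])                   # recovers the initial phase φ0
print(amplitude, phase)                   # ≈ 1.0 and ≈ 0.7
```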
The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted 𝓕, and 𝓕(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that 𝓕 can also be seen as a linear transformation on the function space, and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write 𝓕f instead of 𝓕(f).
Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ of its variable, and this is denoted either as 𝓕f(ξ) or as (𝓕f)(ξ). Notice that in the former case, it is implicitly understood that 𝓕 is applied first to f and then the resulting function is evaluated at ξ, not the other way around.
In mathematics and various applied sciences it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like 𝓕(f(x)) formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example,

    𝓕(rect(x)) = sinc(ξ)

is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or

    𝓕(f(x + x₀)) = e^{2πix₀ξ} 𝓕(f(x))

is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x₀.

Other conventions
The Fourier transform can also be written in terms of angular frequency ω = 2πξ, whose units are radians per second. The substitution ξ = ω/(2π) into the formulas above produces this convention:

    f̂(ω) = ∫_{R^n} f(x) e^{−iω·x} dx.

Under this convention, the inverse transform becomes:

    f(x) = 1/(2π)^n ∫_{R^n} f̂(ω) e^{iω·x} dω.

Unlike the convention followed in this article, when the Fourier transform is defined this way it is no longer a unitary transformation on L2(R^n). There is also less symmetry between the formulas for the Fourier transform and its inverse.
Another convention is to split the factor of (2π)^n evenly between the Fourier transform and its inverse, which leads to the definitions:

    f̂(ω) = 1/(2π)^{n/2} ∫_{R^n} f(x) e^{−iω·x} dx,
    f(x) = 1/(2π)^{n/2} ∫_{R^n} f̂(ω) e^{iω·x} dω.

Under this convention, the Fourier transform is again a unitary transformation on L2(R^n). It also restores the symmetry between the Fourier transform and its inverse.
Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. Other than that, the choice is (again) a matter of convention.
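One way to see that the conventions agree is to check numerically that the non-unitary angular-frequency transform equals the ordinary-frequency transform at ξ = ω/(2π); here this is done for a Gaussian by direct numerical integration (the grid and test frequency are illustrative).

```python
import numpy as np

# The conventions differ only by substitution: the non-unitary angular-
# frequency transform F(ω) = ∫ f(x) e^{-iωx} dx equals the ordinary-
# frequency transform f̂(ξ) at ξ = ω/(2π). Test function: f(x) = exp(-πx²),
# whose ordinary-frequency transform is exp(-πξ²) in closed form.
x = np.linspace(-15.0, 15.0, 300001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

omega = 3.0                                          # arbitrary test frequency
F_ang = np.sum(f * np.exp(-1j * omega * x)) * dx     # non-unitary angular form
f_hat = np.exp(-np.pi * (omega / (2 * np.pi))**2)    # closed form at ξ = ω/2π

print(abs(F_ang - f_hat))                            # agreement to quadrature error
```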


Summary of popular forms of the Fourier transform

Ordinary frequency ξ (hertz), unitary:

    f̂(ξ) = ∫_{R^n} f(x) e^{−2πix·ξ} dx,    f(x) = ∫_{R^n} f̂(ξ) e^{2πix·ξ} dξ.

Angular frequency ω (rad/s), non-unitary:

    f̂(ω) = ∫_{R^n} f(x) e^{−iω·x} dx,    f(x) = 1/(2π)^n ∫_{R^n} f̂(ω) e^{iω·x} dω.

Angular frequency ω (rad/s), unitary:

    f̂(ω) = 1/(2π)^{n/2} ∫_{R^n} f(x) e^{−iω·x} dx,    f(x) = 1/(2π)^{n/2} ∫_{R^n} f̂(ω) e^{iω·x} dω.

As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined

    E(e^{it·X}) = ∫ e^{it·x} dμ_X(x).

As in the case of the "non-unitary angular frequency" convention above, no factor of 2π appears in either the integral or the exponent. Unlike all of the conventions appearing above, this convention takes the opposite sign in the exponent.

Tables of important Fourier transforms

The following tables record some closed-form Fourier transforms. For functions f(x), g(x) and h(x), denote their Fourier transforms by f̂, ĝ, and ĥ respectively. Formulas are written in the unitary, ordinary-frequency convention f̂(ξ) = ∫ f(x) e^{−2πixξ} dx; the corresponding angular-frequency forms follow from the substitutions described under Other conventions. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.

Functional relationships
The Fourier transforms in this table may be found in (Erdélyi 1954) or the appendix of (Kammler 2000).

101. Linearity: a f(x) + b g(x) ⟷ a f̂(ξ) + b ĝ(ξ) (in every convention).
102. Shift in time domain: f(x − a) ⟷ e^{−2πiaξ} f̂(ξ).
103. Shift in frequency domain, dual of 102: e^{2πiax} f(x) ⟷ f̂(ξ − a).
104. Scaling in the time domain: f(ax) ⟷ |a|^{−1} f̂(ξ/a). If |a| is large, then f(ax) is concentrated around 0 and |a|^{−1} f̂(ξ/a) spreads out and flattens.
105. Duality: f̂(x) ⟷ f(−ξ). Here f̂ needs to be calculated using the same method as in the Fourier transform column; the rule results from swapping the "dummy" variables x and ξ.
106. Differentiation: f⁽ⁿ⁾(x) ⟷ (2πiξ)ⁿ f̂(ξ).
107. Dual of 106: xⁿ f(x) ⟷ (i/(2π))ⁿ f̂⁽ⁿ⁾(ξ).
108. Convolution theorem: (f ∗ g)(x) ⟷ f̂(ξ) ĝ(ξ). The notation f ∗ g denotes the convolution of f and g.
109. Dual of 108: f(x) g(x) ⟷ (f̂ ∗ ĝ)(ξ).
110. For f(x) purely real: Hermitian symmetry, f̂(−ξ) = f̂(ξ)*, where * indicates the complex conjugate.
111. For f(x) a purely real even function: f̂(ξ) is a purely real even function.
112. For f(x) a purely real odd function: f̂(ξ) is a purely imaginary odd function.

Square-integrable functions
The Fourier transforms in this table may be found in (Campbell & Foster 1948), (Erdélyi 1954), or the appendix of (Kammler 2000). Formulas are written in the unitary, ordinary-frequency convention.

201. rect(x) ⟷ sinc(ξ): the rectangular pulse and the normalized sinc function, here defined as sinc(x) = sin(πx)/(πx).
202. sinc(x) ⟷ rect(ξ): dual of rule 201. The rectangular function is an ideal low-pass filter, and the sinc function is the non-causal impulse response of such a filter.
203. tri(x) ⟷ sinc²(ξ), where tri(x) is the triangular function.
204. sinc²(x) ⟷ tri(ξ): dual of rule 203.
205. e^{−ax} u(x) ⟷ 1/(a + 2πiξ), where u(x) is the Heaviside unit step function and a > 0.
206. e^{−αx²} ⟷ √(π/α) e^{−π²ξ²/α}: this shows that, for the unitary Fourier transforms, the Gaussian function exp(−αx²) is its own Fourier transform for some choice of α (here α = π). For this to be integrable we must have Re(α) > 0.
207. e^{−a|x|} ⟷ 2a/(a² + 4π²ξ²), for a > 0. That is, the Fourier transform of a decaying exponential function is a Lorentzian function.
208. sech(πx) ⟷ sech(πξ): the hyperbolic secant is its own Fourier transform.
209. The Gauss–Hermite functions h_n(x) = H_n(x√(2π)) e^{−πx²}, where H_n is the n-th Hermite polynomial, are (up to normalization) eigenfunctions of the Fourier transform operator, with ĥ_n(ξ) = (−i)ⁿ h_n(ξ). For a derivation, see Hermite polynomial. The formula reduces to 206 for n = 0.

Distributions
The Fourier transforms in this table may be found in (Erdélyi 1954) or the appendix of (Kammler 2000). Formulas are written in the unitary, ordinary-frequency convention.

301. 1 ⟷ δ(ξ). The distribution δ(ξ) denotes the Dirac delta function.
302. δ(x) ⟷ 1: dual of rule 301.
303. e^{2πiax} ⟷ δ(ξ − a): this follows from 103 and 301.
304. cos(2πax) ⟷ (δ(ξ − a) + δ(ξ + a))/2: this follows from rules 101 and 303 using Euler's formula cos(t) = (e^{it} + e^{−it})/2.
305. sin(2πax) ⟷ (δ(ξ − a) − δ(ξ + a))/(2i): this follows from 101 and 303 using sin(t) = (e^{it} − e^{−it})/(2i).
306. cos(ax²) ⟷ √(π/a) cos(π²ξ²/a − π/4), for a > 0.
307. sin(ax²) ⟷ −√(π/a) sin(π²ξ²/a − π/4), for a > 0.
308. xⁿ ⟷ (i/(2π))ⁿ δ⁽ⁿ⁾(ξ). Here, n is a natural number and δ⁽ⁿ⁾ is the n-th distributional derivative of the Dirac delta function. This rule follows from rules 107 and 301. Combining this rule with 101, we can transform all polynomials.
309. 1/x ⟷ −iπ sgn(ξ). Here sgn(ξ) is the sign function. Note that 1/x is not a distribution; it is necessary to use the Cauchy principal value when testing against Schwartz functions. This rule is useful in studying the Hilbert transform.
310. 1/xⁿ ⟷ −iπ ((−2πiξ)ⁿ⁻¹/(n − 1)!) sgn(ξ), where 1/xⁿ is the homogeneous distribution defined by the distributional derivative ((−1)ⁿ⁻¹/(n − 1)!) dⁿ/dxⁿ log|x|.
311. |x|^α ⟷ −2 sin(πα/2) Γ(α + 1)/|2πξ|^{α+1}. This formula is valid for 0 > α > −1. For α > 0 some singular terms arise at the origin that can be found by differentiating 318. If Re α > −1, then |x|^α is a locally integrable function, and so a tempered distribution. The function α ↦ |x|^α is a holomorphic function from the right half-plane to the space of tempered distributions. It admits a unique meromorphic extension to a tempered distribution, also denoted |x|^α, for α ≠ −2, −4, ... (See homogeneous distribution.)
312. sgn(x) ⟷ 1/(iπξ): the dual of rule 309. This time the Fourier transform needs to be considered as a Cauchy principal value.
313. u(x) ⟷ (1/2)(1/(iπξ) + δ(ξ)), where u(x) is the Heaviside unit step function; this follows from rules 101, 301, and 312.
314. ∑_{n=−∞}^{∞} δ(x − nT) ⟷ (1/T) ∑_{k=−∞}^{∞} δ(ξ − k/T). This function is known as the Dirac comb function. This result can be derived from 302 and 102, together with the fact that ∑_n e^{2πinx} = ∑_k δ(x − k) as distributions.
315. J₀(x) ⟷ 2 rect(πξ)/√(1 − 4π²ξ²). The function J₀(x) is the zeroth order Bessel function of first kind.
316. Jₙ(x) ⟷ 2(−i)ⁿ Tₙ(2πξ) rect(πξ)/√(1 − 4π²ξ²). This is a generalization of 315. The function Jₙ(x) is the n-th order Bessel function of first kind. The function Tₙ(x) is the Chebyshev polynomial of the first kind.
317. log|x| ⟷ −(1/2)(1/|ξ|) − γ δ(ξ), with 1/|ξ| understood as a suitably regularized distribution; γ is the Euler–Mascheroni constant.
318. x^{−α} u(x) ⟷ Γ(1 − α) (2πiξ)^{α−1} (principal branch of the power). This formula is valid for 1 > α > 0. Use differentiation to derive formulas for higher exponents; u is the Heaviside function.

Two-dimensional functions
Formulas are written in the unitary, ordinary-frequency convention.

400. Definition: f̂(ξₓ, ξᵧ) = ∬ f(x, y) e^{−2πi(xξₓ + yξᵧ)} dx dy. The variables x, y, ξₓ, ξᵧ are real numbers; the integrals are taken over the entire plane.
401. e^{−π(a²x² + b²y²)} ⟷ (1/|ab|) e^{−π(ξₓ²/a² + ξᵧ²/b²)}. Both functions are Gaussians, which may not have unit volume.
402. circ(√(x² + y²)) ⟷ J₁(2π|ξ|)/|ξ|, with |ξ| = √(ξₓ² + ξᵧ²). The function circ(r) = 1 for 0 ≤ r ≤ 1, and is 0 otherwise. This is the Airy distribution, and is expressed using J₁ (the order-1 Bessel function of the first kind). (Stein & Weiss 1971, Thm. IV.3.3)
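Rule 400's separable kernel means a two-dimensional transform factorizes into one-dimensional transforms along each axis, which is easy to confirm numerically:

```python
import numpy as np

# The 2-D DFT kernel e^{-2πi(xξx + yξy)} separates into a product of 1-D
# kernels, so a 2-D FFT equals row transforms followed by column transforms.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8))

direct = np.fft.fft2(a)
nested = np.fft.fft(np.fft.fft(a, axis=0), axis=1)

print(np.allclose(direct, nested))   # True: the kernel separates
```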


Formulas for general n-dimensional functions
Formulas are written in the unitary, ordinary-frequency convention.

500. Definition: f̂(ξ) = ∫_{R^n} f(x) e^{−2πi x·ξ} dx.
501. χ_{[0,1]}(|x|) (1 − |x|²)^δ ⟷ π^{−δ} Γ(δ + 1) |ξ|^{−n/2−δ} J_{n/2+δ}(2π|ξ|). The function χ_{[0,1]} is the indicator function of the interval [0,1]. The function Γ(x) is the gamma function. The function J_{n/2+δ} is a Bessel function of the first kind, with order n/2 + δ. Taking n = 2 and δ = 0 produces 402. (Stein & Weiss 1971, Thm. 4.13)
502. |x|^{−α} ⟷ π^{α−n/2} (Γ((n − α)/2)/Γ(α/2)) |ξ|^{α−n}, for 0 < Re α < n. See Riesz potential. The formula also holds for all α ≠ n, n + 2, ... by analytic continuation, but then the function and its Fourier transform need to be understood as suitably regularized tempered distributions. See homogeneous distribution.

References
Boashash, B., ed. (2003), Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Oxford: Elsevier Science, ISBN 0080443354.
Bochner, S.; Chandrasekharan, K. (1949), Fourier Transforms, Princeton University Press.
Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), Boston: McGraw-Hill, ISBN 0071160434.
Campbell, George; Foster, Ronald (1948), Fourier Integrals for Practical Applications, New York: D. Van Nostrand Company, Inc.
Duoandikoetxea, Javier (2001), Fourier Analysis, American Mathematical Society, ISBN 0-8218-2172-5.
Dym, H.; McKean, H. (1985), Fourier Series and Integrals, Academic Press, ISBN 978-0122264511.
Erdélyi, Arthur, ed. (1954), Tables of Integral Transforms, 1, New York: McGraw-Hill.
Fourier, J. B. Joseph (1822), Théorie Analytique de la Chaleur [1], Paris.
Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Prentice-Hall, ISBN 0-13-035399-X.
Hewitt, Edwin; Ross, Kenneth A. (1970), Abstract Harmonic Analysis. Vol. II: Structure and Analysis for Compact Groups. Analysis on Locally Compact Abelian Groups, Die Grundlehren der mathematischen Wissenschaften, Band 152, Berlin, New York: Springer-Verlag, MR0262773.
Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer-Verlag, ISBN 978-3540006626.
James, J. F. (2011), A Student's Guide to Fourier Transforms (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-17683-5.
Kaiser, Gerald (1994), A Friendly Guide to Wavelets, Birkhäuser, ISBN 0-8176-3711-7.
Kammler, David (2000), A First Course in Fourier Analysis, Prentice Hall, ISBN 0-13-578782-3.
Katznelson, Yitzhak (1976), An Introduction to Harmonic Analysis, Dover, ISBN 0-486-63331-4.
Knapp, Anthony W. (2001), Representation Theory of Semisimple Groups: An Overview Based on Examples [2], Princeton University Press, ISBN 978-0-691-09089-4.
Pinsky, Mark (2002), Introduction to Fourier Analysis and Wavelets, Brooks/Cole, ISBN 0-534-37660-6.
Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, Boca Raton: CRC Press, ISBN 0-8493-2876-4.
Rudin, Walter (1987), Real and Complex Analysis (3rd ed.), Singapore: McGraw-Hill, ISBN 0-07-100276-6.
Stein, Elias; Shakarchi, Rami (2003), Fourier Analysis: An Introduction, Princeton University Press, ISBN 0-691-11384-X.
Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9.
Wilson, R. G. (1995), Fourier Series and Optical Transform Techniques in Contemporary Optics, New York: Wiley, ISBN 0471303577.
Yosida, K. (1968), Functional Analysis, Springer-Verlag, ISBN 3-540-58654-7.

External links

The Discrete Fourier Transformation (DFT): Definition and numerical examples [3] - A Matlab tutorial
Fourier Series Applet [4] (Tip: drag magnitude or phase dots up or down to change the wave form).
Stephan Bernsee's FFTlab [5] (Java Applet)
Stanford Video Course on the Fourier Transform [6]
Weisstein, Eric W., "Fourier Transform [7]" from MathWorld.
The DFT à Pied: Mastering The Fourier Transform in One Day [8] at The DSP Dimension
An Interactive Flash Tutorial for the Fourier Transform [9]

References
[1] http://books.google.com/?id=TDQJAAAAIAAJ&printsec=frontcover&dq=Th%C3%A9orie+analytique+de+la+chaleur&q
[2] http://books.google.com/?id=QCcW1h835pwC
[3] http://www.nbtwiki.net/doku.php?id=tutorial:the_discrete_fourier_transformation_dft
[4] http://www.westga.edu/~jhasbun/osp/Fourier.htm
[5] http://www.dspdimension.com/fftlab/
[6] http://www.academicearth.com/courses/the-fourier-transform-and-its-applications
[7] http://mathworld.wolfram.com/FourierTransform.html
[8] http://www.dspdimension.com/admin/dft-a-pied/
[9] http://www.fourier-series.com/f-transform/index.html


Fast Fourier transform

A fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its
inverse. There are many distinct FFT algorithms involving a wide range of mathematics, from simple
complex-number arithmetic to group theory and number theory; this article gives an overview of the available
techniques and some of their general properties, while the specific algorithms are described in subsidiary articles
linked below.
A DFT decomposes a sequence of values into components of different frequencies. This operation is useful in many
fields (see discrete Fourier transform for properties and applications of the transform) but computing it directly from
the definition is often too slow to be practical. An FFT is a way to compute the same result more quickly: computing
a DFT of N points in the naive way, using the definition, takes O(N²) arithmetical operations, while an FFT can compute the same result in only O(N log N) operations. The difference in speed can be substantial, especially for long data sets where N may be in the thousands or millions; in practice, the computation time can be reduced by several orders of magnitude in such cases, and the improvement is roughly proportional to N / log(N). This huge improvement made many DFT-based algorithms practical; FFTs are of great importance to a wide variety of applications, from digital signal processing and solving partial differential equations to algorithms for quick multiplication of large integers.
The most well known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms only depend on the fact that e^{−2πi/N} is an N-th primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.
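A quick sketch of this adaptation using NumPy: the inverse transform is obtained from a forward FFT by conjugating before and after and dividing by N.

```python
import numpy as np

# Because the inverse DFT differs from the forward DFT only by the sign of
# the exponent and a 1/N factor, a forward FFT routine can compute the
# inverse: conjugate, transform, conjugate again, divide by N.
rng = np.random.default_rng(1)
X = rng.standard_normal(16) + 1j * rng.standard_normal(16)

x_via_forward = np.conj(np.fft.fft(np.conj(X))) / len(X)
print(np.allclose(x_via_forward, np.fft.ifft(X)))   # True
```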
The FFT has been described as "the most important numerical algorithm of our lifetime".[1]

Definition and speed


An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the only
difference is that an FFT is much faster. (In the presence of round-off error, many FFT algorithms are also much
more accurate than evaluating the DFT definition directly, as discussed below.)
Let x₀, ..., x_{N−1} be complex numbers. The DFT is defined by the formula

    X_k = ∑_{n=0}^{N−1} x_n e^{−2πikn/N},    k = 0, ..., N − 1.
Evaluating this definition directly requires O(N²) operations: there are N outputs X_k, and each output requires a sum of N terms. An FFT is any method to compute the same results in O(N log N) operations. More precisely, all known FFT algorithms require Θ(N log N) operations (technically, O only denotes an upper bound), although there is no known proof that better complexity is impossible.
To illustrate the savings of an FFT, consider the count of complex multiplications and additions. Evaluating the DFT's sums directly involves N² complex multiplications and N(N − 1) complex additions [of which O(N) operations can be saved by eliminating trivial operations such as multiplications by 1]. The well-known radix-2 Cooley–Tukey algorithm, for N a power of 2, can compute the same result with only (N/2) log₂ N complex multiplications (again, ignoring simplifications of multiplications by 1 and similar) and N log₂ N complex additions. In practice, actual performance on modern computers is usually dominated by factors other than arithmetic and is a complicated subject (see, e.g., Frigo & Johnson, 2005), but the overall improvement from O(N²) to O(N log N) remains.
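The two operation counts correspond to two ways of computing the same numbers: the O(N²) matrix form of the definition and a library FFT. A minimal comparison, with sizes chosen arbitrarily:

```python
import numpy as np

# The O(N^2) evaluation of the DFT definition, compared against an FFT.
# Both produce the same outputs X_k; only the operation count differs.
def naive_dft(x):
    N = len(x)
    n = np.arange(N)
    # N x N matrix of kernel values e^{-2πikn/N}: N^2 multiplications.
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

rng = np.random.default_rng(2)
x = rng.standard_normal(128) + 1j * rng.standard_normal(128)
print(np.allclose(naive_dft(x), np.fft.fft(x)))   # True
```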



Algorithms
Cooley–Tukey algorithm
By far the most common FFT is the Cooley–Tukey algorithm. This is a divide and conquer algorithm that recursively breaks down a DFT of any composite size N = N₁N₂ into many smaller DFTs of sizes N₁ and N₂, along with O(N) multiplications by complex roots of unity traditionally called twiddle factors (after Gentleman and Sande, 1966).
This method (and the general idea of an FFT) was popularized by a publication of J. W. Cooley and J. W. Tukey in 1965, but it was later discovered (Heideman & Burrus, 1984) that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms).
The most well-known use of the Cooley–Tukey algorithm is to divide the transform into two pieces of size N/2 at each step, and it is therefore limited to power-of-two sizes, but any factorization can be used in general (as was known to both Gauss and Cooley/Tukey). These are called the radix-2 and mixed-radix cases, respectively (and other variants such as the split-radix FFT have their own names as well). Although the basic idea is recursive, most traditional implementations rearrange the algorithm to avoid explicit recursion. Also, because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT, such as those described below.
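A teaching sketch of the recursive radix-2 decimation-in-time form (restricted to power-of-two N; not an optimized implementation):

```python
import numpy as np

# Recursive radix-2 Cooley-Tukey FFT: split into even- and odd-indexed
# halves, transform each recursively, then combine with twiddle factors.
def fft_radix2(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)                          # assumed a power of two
    if N == 1:
        return x
    even = fft_radix2(x[0::2])          # DFT of size N/2
    odd = fft_radix2(x[1::2])           # DFT of size N/2
    # Twiddle factors e^{-2πik/N} for k = 0..N/2-1.
    t = np.exp(-2j * np.pi * np.arange(N // 2) / N) * odd
    return np.concatenate([even + t, even - t])

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
```

Production implementations replace the explicit recursion with an iterative, in-place butterfly schedule, as noted in the text.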

Other FFT algorithms


There are other FFT algorithms distinct from Cooley–Tukey. For N = N₁N₂ with coprime N₁ and N₂, one can use the Prime-Factor (Good–Thomas) algorithm (PFA), based on the Chinese Remainder Theorem, to factorize the DFT similarly to Cooley–Tukey but without the twiddle factors. The Rader–Brenner algorithm (1976) is a Cooley–Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability; it was later superseded by the split-radix variant of Cooley–Tukey (which achieves the same multiplication count but with fewer additions and without sacrificing accuracy). Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader–Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite N. Bruun's algorithm applies to arbitrary even composite sizes.) Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial z^N − 1, here into real-coefficient polynomials of the form z^M − 1 and z^{2M} + az^M + 1.
Another polynomial viewpoint is exploited by the Winograd algorithm, which factorizes z^N − 1 into cyclotomic polynomials; these often have coefficients of 1, 0, or −1, and therefore require few (if any) multiplications, so Winograd can be used to obtain minimal-multiplication FFTs and is often used to find efficient algorithms for small factors. Indeed, Winograd showed that the DFT can be computed with only O(N) irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; unfortunately, this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of prime sizes.
Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo prime N, expresses a DFT of prime size N as a cyclic convolution of (composite) size N − 1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the same size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity nk = −(k − n)²/2 + n²/2 + k²/2.
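A sketch of Bluestein's re-expression, using the identity nk = n²/2 + k²/2 − (k − n)²/2 to turn an arbitrary-length (here prime-length) DFT into a single convolution evaluated with zero-padded power-of-two FFTs:

```python
import numpy as np

# Bluestein's chirp-z trick: with w = e^{-2πi/N} and nk = n²/2 + k²/2 - (k-n)²/2,
# the DFT X_k = Σ x_n w^{nk} becomes a convolution of x_n·w^{n²/2} with w^{-m²/2},
# which can be computed by padded power-of-two FFTs of any length >= 2N-1.
def bluestein_dft(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    chirp = np.exp(-1j * np.pi * n**2 / N)      # w^{n²/2}
    M = 1 << (2 * N - 1).bit_length()           # power-of-two pad length >= 2N-1
    a = np.zeros(M, dtype=complex)
    a[:N] = x * chirp
    b = np.zeros(M, dtype=complex)
    b[:N] = np.conj(chirp)                      # w^{-m²/2} for m = 0..N-1
    b[M - N + 1:] = np.conj(chirp[1:][::-1])    # ...and for m = -(N-1)..-1 (wrapped)
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))   # circular convolution
    return conv[:N] * chirp                     # multiply back by w^{k²/2}

x = np.arange(7, dtype=float)                   # prime length N = 7
print(np.allclose(bluestein_dft(x), np.fft.fft(x)))   # True
```

This is a sketch of the idea rather than an optimized routine; library implementations reuse the transform of the chirp sequence across calls of the same length.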


FFT algorithms specialized for real and/or symmetric data


In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry

and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists
of taking an ordinary algorithm (e.g. CooleyTukey) and removing the redundant parts of the computation, saving
roughly a factor of two in time and memory. Alternatively, it is possible to express an even-length real-input DFT as
a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real
data), followed by O(N) post-processing operations.
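The half-length trick can be sketched as follows, assuming NumPy's `numpy.fft.fft` as the underlying complex FFT; the even/odd packing and the O(N) post-processing butterfly are spelled out explicitly, and the function name `real_fft_via_half` is illustrative:

```python
import numpy as np

def real_fft_via_half(x):
    # DFT of a real sequence of even length N via ONE complex FFT of
    # length N/2 plus O(N) post-processing.
    x = np.asarray(x, dtype=float)
    N = x.size
    z = x[0::2] + 1j * x[1::2]           # pack even/odd samples into complex
    Z = np.fft.fft(z)                    # half-length complex FFT
    k = np.arange(N // 2)
    Zrev = np.conj(np.roll(Z[::-1], 1))  # conj(Z[(M - k) mod M]), M = N/2
    Fe = 0.5 * (Z + Zrev)                # DFT of the even-indexed samples
    Fo = -0.5j * (Z - Zrev)              # DFT of the odd-indexed samples
    W = np.exp(-2j * np.pi * k / N)      # twiddle factors
    X = np.empty(N, dtype=complex)
    X[:N // 2] = Fe + W * Fo             # first half of the outputs
    X[N // 2] = Fe[0] - Fo[0]            # the k = N/2 (Nyquist) output
    X[N // 2 + 1:] = np.conj(X[1:N // 2][::-1])  # conjugate symmetry
    return X
```

The final line is exactly the symmetry X_{N−k} = X_k* mentioned above: only the first N/2 + 1 outputs need to be computed.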
It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley
transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be
found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs.
Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has
not proved popular.
There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can
gain another factor of (roughly) two in time and memory and the DFT becomes the discrete cosine/sine transform(s)
(DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via
FFTs of real data combined with O(N) pre/post processing.
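As one concrete construction of this kind (an illustrative choice, not the only one), a type-II DCT can be obtained from a length-2N FFT of the evenly extended real sequence plus O(N) post-processing; the naive DCT sum is included only to check the result:

```python
import numpy as np

def dct2_via_fft(x):
    # Type-II DCT from an FFT of the even extension [x, reversed(x)]:
    # DCT2[k] = Re( 0.5 * exp(-i pi k / (2N)) * FFT(y)[k] ), y = [x, x[::-1]]
    x = np.asarray(x, dtype=float)
    N = x.size
    Y = np.fft.fft(np.concatenate([x, x[::-1]]))[:N]
    k = np.arange(N)
    return 0.5 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * Y)

def dct2_naive(x):
    # Direct O(N^2) sum: DCT2[k] = sum_n x[n] cos(pi k (n + 1/2) / N)
    x = np.asarray(x, dtype=float)
    N = x.size
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * kk * (n + 0.5) / N))
                     for kk in range(N)])
```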

Computational issues
Bounds on complexity and operation counts
A fundamental question of longstanding theoretical interest is to prove lower bounds on the complexity and exact
operation counts of fast Fourier transforms, and many open problems remain. It is not even rigorously proved
whether DFTs truly require Ω(N log N) (i.e., order N log N or greater) operations, even for the simple case of power-of-two sizes, although no algorithms with lower complexity are known. In particular, the count of arithmetic
operations is usually the focus of such questions, although actual performance on modern-day computers is
determined by many other factors such as cache or CPU pipeline optimization.
Following pioneering work by Winograd (1978), a tight Θ(N) lower bound is known for the number of real multiplications required by an FFT. It can be shown that only 4N − 2log₂²N − 2log₂N − 4 irrational real multiplications are required to compute a DFT of power-of-two length N = 2^m. Moreover, explicit algorithms
that achieve this count are known (Heideman & Burrus, 1986; Duhamel, 1990). Unfortunately, these algorithms
require too many additions to be practical, at least on modern computers with hardware multipliers.
A tight lower bound is not known on the number of required additions, although lower bounds have been proved
under some restrictive assumptions on the algorithms. In 1973, Morgenstern proved an Ω(N log N) lower bound
on the addition count for algorithms where the multiplicative constants have bounded magnitudes (which is true for
most but not all FFT algorithms). Pan (1986) proved an Ω(N log N) lower bound assuming a bound on a measure
of the FFT algorithm's "asynchronicity", but the generality of this assumption is unclear. For the case of power-of-two N, Papadimitriou (1979) argued that the number N log₂ N of complex-number additions achieved
by Cooley–Tukey algorithms is optimal under certain assumptions on the graph of the algorithm (his assumptions imply, among other things, that no additive identities in the roots of unity are exploited). (This argument would imply that at least 2N log₂ N real additions are required, although this is not a tight bound because extra additions are required as part of complex-number multiplications.) Thus far, no published FFT algorithm has achieved fewer than N log₂ N complex-number additions (or their equivalent) for power-of-two N.
A third problem is to minimize the total number of real multiplications and additions, sometimes called the
"arithmetic complexity" (although in this context it is the exact count and not the asymptotic complexity that is being


considered). Again, no tight lower bound has been proven. Since 1968, however, the lowest published count for power-of-two N was long achieved by the split-radix FFT algorithm, which requires 4N log₂ N − 6N + 8 real multiplications and additions for N > 1. This was recently reduced to approximately (34/9) N log₂ N (Johnson and Frigo, 2007; Lundy and Van Buskirk, 2007). A slightly larger count (but still better than split radix for N ≥ 256) was shown to be provably optimal for N ≤ 512 under additional restrictions on the possible algorithms (split-radix-like flowgraphs with unit-modulus multiplicative factors), by reduction to a Satisfiability Modulo Theories problem solvable by brute force (Haynal & Haynal, 2011).
Most of the attempts to lower or prove the complexity of FFT algorithms have focused on the ordinary complex-data
case, because it is the simplest. However, complex-data FFTs are so closely related to algorithms for related
problems such as real-data FFTs, discrete cosine transforms, discrete Hartley transforms, and so on, that any
improvement in one of these would immediately lead to improvements in the others (Duhamel & Vetterli, 1990).

Accuracy and approximations


All of the FFT algorithms discussed above compute the DFT exactly (in exact arithmetic, i.e. neglecting
floating-point errors). A few "FFT" algorithms have been proposed, however, that compute the DFT approximately,
with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the
approximation error for increased speed or other properties. For example, an approximate FFT algorithm by
Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast
multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs
(time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for
approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). Only the Edelman algorithm
works equally well for sparse and non-sparse data, however, since it is based on the compressibility (rank deficiency)
of the Fourier matrix itself rather than the compressibility (sparsity) of the data.
Even the "exact" FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors
are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms. The upper bound on the relative error for the Cooley–Tukey algorithm is O(ε log N), compared to O(ε N^{3/2}) for the naïve DFT formula (Gentleman and Sande, 1966), where ε is the machine floating-point relative precision. In fact, the root mean square (rms) errors are much better than these upper bounds, being only O(ε √(log N)) for Cooley–Tukey and O(ε √N) for the naïve DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use inaccurate trigonometric recurrence formulas. Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable.
In fixed-point arithmetic, the finite-precision errors accumulated by FFT algorithms are worse, with rms errors growing as O(√N) for the Cooley–Tukey algorithm (Welch, 1969). Moreover, even achieving this accuracy requires careful attention to scaling in order to minimize the loss of precision, and fixed-point FFT algorithms involve rescaling at each intermediate stage of decompositions like Cooley–Tukey.
To verify the correctness of an FFT implementation, rigorous guarantees can be obtained in O(N log N) time by a
simple procedure checking the linearity, impulse-response, and time-shift properties of the transform on random
inputs (Ergün, 1995).
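A minimal sketch of such a verification procedure, assuming NumPy and using randomized checks of the three properties (the function name `check_fft` is illustrative):

```python
import numpy as np

def check_fft(fft, N, trials=20, tol=1e-9, seed=0):
    # Randomized checks of three DFT properties: linearity, the impulse
    # response, and the circular time-shift (rotation) property.
    rng = np.random.default_rng(seed)
    k = np.arange(N)
    for _ in range(trials):
        x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        y = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        a, b = rng.standard_normal(2)
        # Linearity: FFT(a x + b y) == a FFT(x) + b FFT(y)
        if not np.allclose(fft(a * x + b * y), a * fft(x) + b * fft(y), atol=tol):
            return False
        # An impulse at position p transforms to exp(-2 pi i p k / N)
        p = int(rng.integers(N))
        e = np.zeros(N, dtype=complex)
        e[p] = 1.0
        if not np.allclose(fft(e), np.exp(-2j * np.pi * p * k / N), atol=tol):
            return False
        # Rotating x by one sample multiplies the spectrum by a linear phase
        if not np.allclose(fft(np.roll(x, 1)),
                           fft(x) * np.exp(-2j * np.pi * k / N), atol=tol):
            return False
    return True
```

This is only a probabilistic sanity check in the spirit of the cited procedure, not a reproduction of its exact analysis; an implementation that, say, conjugates its output is caught by the impulse and time-shift checks even though it is perfectly linear.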


Multidimensional FFTs
As defined in the multidimensional DFT article, the multidimensional DFT

X_k = Σ_{n=0}^{N−1} e^{−2πi k·(n/N)} x_n

transforms an array x_n with a d-dimensional vector of indices n = (n₁, n₂, …, n_d) by a set of d nested summations (over n_j = 0, …, N_j − 1 for each j), where the division n/N, defined as n/N = (n₁/N₁, …, n_d/N_d), is performed element-wise. Equivalently, it is simply the composition of a sequence of d sets of one-dimensional DFTs, performed along one dimension at a time (in any order).
This compositional viewpoint immediately provides the simplest and most common multidimensional DFT
algorithm, known as the row-column algorithm (after the two-dimensional case, below). That is, one simply performs a sequence of d sets of one-dimensional FFTs (by any of the above algorithms): first you transform along the n₁ dimension, then along the n₂ dimension, and so on (or actually, any ordering will work). This method is easily shown to have the usual O(N log N) complexity, where N = N₁ · N₂ ⋯ N_d is the total number of data points transformed. In particular, there are N/N₁ transforms of size N₁, N/N₂ transforms of size N₂, etcetera, so the complexity of the sequence of FFTs is:

(N/N₁) O(N₁ log N₁) + ⋯ + (N/N_d) O(N_d log N_d) = O(N log(N₁ ⋯ N_d)) = O(N log N).

In two dimensions, the x_n can be viewed as an N₁ × N₂ matrix, and this algorithm corresponds to first performing the FFT of all the rows and then of all the columns (or vice versa), hence the name.
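The two-dimensional row-column algorithm can be sketched in a few lines, assuming NumPy's one-dimensional `fft` as the base transform:

```python
import numpy as np

def fft2_rowcol(x):
    # Row-column 2-D DFT: 1-D FFTs of every row, then of every column.
    x = np.asarray(x, dtype=complex)
    rows_done = np.array([np.fft.fft(row) for row in x])   # transform rows
    return np.array([np.fft.fft(col) for col in rows_done.T]).T  # then columns
```

NumPy's own `np.fft.fft2` computes the same transform (it happens to transform the axes in the opposite order, which, as noted above, does not matter) and can be used as a cross-check.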
In more than two dimensions, it is often advantageous for cache locality to group the dimensions recursively. For example, a three-dimensional FFT might first perform two-dimensional FFTs of each planar "slice" for each fixed n₁, and then perform the one-dimensional FFTs along the n₁ direction. More generally, an asymptotically optimal cache-oblivious algorithm consists of recursively dividing the dimensions into two groups (n₁, …, n_{d/2}) and (n_{d/2+1}, …, n_d) that are transformed recursively (rounding if d is not even) (see Frigo and Johnson, 2005). Still, this remains a straightforward variation of the row-column algorithm that ultimately requires only a one-dimensional FFT algorithm as the base case, and still has O(N log N) complexity. Yet another variation is to perform matrix transpositions in between transforming subsequent dimensions, so that the transforms operate on contiguous data; this is especially important for out-of-core and distributed memory situations where accessing non-contiguous data is extremely time-consuming.
There are other multidimensional FFT algorithms that are distinct from the row-column algorithm, although all of them have O(N log N) complexity. Perhaps the simplest non-row-column FFT is the vector-radix FFT algorithm, which is a generalization of the ordinary Cooley–Tukey algorithm where one divides the transform dimensions by a vector r = (r₁, r₂, …, r_d) of radices at each step. (This may also have cache benefits.) The simplest case of vector-radix is where all of the radices are equal (e.g. vector-radix-2 divides all of the dimensions by two), but this is not necessary. Vector radix with only a single non-unit radix at a time, i.e. r = (1, …, 1, r, 1, …, 1), is essentially a row-column algorithm. Other, more complicated, methods include polynomial transform algorithms due to Nussbaumer (1977), which view the transform in terms of convolutions and polynomial products. See Duhamel and Vetterli (1990) for more information and references.


Other generalizations
An O(N^{5/2} log N) generalization to spherical harmonics on the sphere S² with N² nodes was described by Mohlenkamp (1999), along with an algorithm conjectured (but not proven) to have O(N² log² N) complexity; Mohlenkamp also provides an implementation in the libftsh library [2]. A spherical-harmonic algorithm with O(N² log N) complexity is described by Rokhlin and Tygert (2006).
Various groups have also published "FFT" algorithms for non-equispaced data, as reviewed in Potts et al. (2001).
Such algorithms do not strictly compute the DFT (which is only defined for equispaced data), but rather some
approximation thereof (a non-uniform discrete Fourier transform, or NDFT, which itself is often computed only
approximately).

References
[1] (Strang, 1994)
[2] http://www.math.ohiou.edu/~mjm/research/libftsh.html

Brenner, N.; Rader, C. (1976). "A New Principle for Fast Fourier Transformation". IEEE Acoustics, Speech &
Signal Processing 24 (3): 264–266. doi:10.1109/TASSP.1976.1162805.
Brigham, E. O. (2002). The Fast Fourier Transform. New York: Prentice-Hall
Cooley, James W.; Tukey, John W. (1965). "An algorithm for the machine calculation of complex Fourier series".
Math. Comput. 19 (90): 297–301. doi:10.1090/S0025-5718-1965-0178586-1.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, 2001. Introduction to Algorithms,
2nd. ed. MIT Press and McGraw-Hill. ISBN 0-262-03293-7. Especially chapter 30, "Polynomials and the FFT."
Duhamel, Pierre (1990). "Algorithms meeting the lower bounds on the multiplicative complexity of length-2^n
DFTs and their connection with practical algorithms". IEEE Trans. Acoust. Speech. Sig. Proc. 38 (9): 1504–1511.
doi:10.1109/29.60070.
P. Duhamel and M. Vetterli, 1990, Fast Fourier transforms: a tutorial review and a state of the art
(doi:10.1016/0165-1684(90)90158-U), Signal Processing 19: 259–299.
A. Edelman, P. McCorquodale, and S. Toledo, 1999, The Future Fast Fourier Transform?
(doi:10.1137/S1064827597316266), SIAM J. Sci. Computing 20: 1094–1114.
D. F. Elliott, & K. R. Rao, 1982, Fast transforms: Algorithms, analyses, applications. New York: Academic
Press.
Funda Ergün, 1995, Testing multivariate linear functions: Overcoming the generator bottleneck
(doi:10.1145/225058.225167), Proc. 27th ACM Symposium on the Theory of Computing: 407–416.
M. Frigo and S. G. Johnson, 2005, "The Design and Implementation of FFTW3
(http://fftw.org/fftw-paper-ieee.pdf)," Proceedings of the IEEE 93: 216–231.
Carl Friedrich Gauss, 1866. "Nachlass: Theoria interpolationis methodo nova tractata," Werke band 3, 265–327.
Göttingen: Königliche Gesellschaft der Wissenschaften.
W. M. Gentleman and G. Sande, 1966, "Fast Fourier transforms—for fun and profit," Proc. AFIPS 29: 563–578.
doi:10.1145/1464291.1464352
H. Guo and C. S. Burrus, 1996, Fast approximate Fourier transform via wavelets transform
(doi:10.1117/12.255236), Proc. SPIE Intl. Soc. Opt. Eng. 2825: 250–259.
H. Guo, G. A. Sitton, C. S. Burrus, 1994, The Quick Discrete Fourier Transform
(doi:10.1109/ICASSP.1994.389994), Proc. IEEE Conf. Acoust. Speech and Sig. Processing (ICASSP) 3:
445–448.

Steve Haynal and Heidi Haynal, "Generating and Searching Families of FFT Algorithms
(http://jsat.ewi.tudelft.nl/content/volume7/JSAT7_13_Haynal.pdf)", Journal on Satisfiability, Boolean Modeling and
Computation vol. 7, pp. 145–187 (2011).



Heideman, M. T.; Johnson, D. H.; Burrus, C. S. (1984). "Gauss and the history of the fast Fourier transform".
IEEE ASSP Magazine 1 (4): 14–21. doi:10.1109/MASSP.1984.1162257.
Heideman, Michael T.; Burrus, C. Sidney (1986). "On the number of multiplications necessary to compute a
length-2^n DFT". IEEE Trans. Acoust. Speech. Sig. Proc. 34 (1): 91–95. doi:10.1109/TASSP.1986.1164785.
S. G. Johnson and M. Frigo, 2007. "A modified split-radix FFT with fewer arithmetic operations
(http://www.fftw.org/newsplit.pdf)," IEEE Trans. Signal Processing 55 (1): 111–119.
T. Lundy and J. Van Buskirk, 2007. "A new matrix approach to real FFTs and convolutions of length 2^k,"
Computing 80 (1): 23–45.
Kent, Ray D. and Read, Charles (2002). Acoustic Analysis of Speech. ISBN 0-7693-0112-6. Cites Strang, G.
(1994, May–June). Wavelets. American Scientist, 82, 250–255.
Morgenstern, Jacques (1973). "Note on a lower bound of the linear complexity of the fast Fourier transform". J.
ACM 20 (2): 305–306. doi:10.1145/321752.321761.
Mohlenkamp, M. J. (1999). "A fast transform for spherical harmonics"
(http://www.math.ohiou.edu/~mjm/research/MOHLEN1999P.pdf). J. Fourier Anal. Appl. 5 (2-3): 159–184. doi:10.1007/BF01261607.
Nussbaumer, H. J. (1977). "Digital filtering using polynomial transforms". Electronics Lett. 13 (13): 386–387.
doi:10.1049/el:19770280.
V. Pan, 1986, The trade-off between the additive complexity and the asyncronicity of linear and bilinear
algorithms (doi:10.1016/0020-0190(86)90035-9), Information Proc. Lett. 22: 11–14.
Christos H. Papadimitriou, 1979, Optimality of the fast Fourier transform (doi:10.1145/322108.322118), J. ACM
26: 95–102.
D. Potts, G. Steidl, and M. Tasche, 2001. "Fast Fourier transforms for nonequispaced data: A tutorial
(http://www.tu-chemnitz.de/~potts/paper/ndft.pdf)", in: J.J. Benedetto and P. Ferreira (Eds.), Modern Sampling
Theory: Mathematics and Applications (Birkhauser).
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Chapter 12. Fast Fourier Transform"
(http://apps.nrbook.com/empanel/index.html#pg=600), Numerical Recipes: The Art of Scientific Computing (3rd ed.),
New York: Cambridge University Press, ISBN 978-0-521-88068-8
Rokhlin, Vladimir; Tygert, Mark (2006). "Fast algorithms for spherical harmonic expansions". SIAM J. Sci.
Computing 27 (6): 1903–1928. doi:10.1137/050623073.
James C. Schatzman, 1996, Accuracy of the discrete Fourier transform and the fast Fourier transform
(http://portal.acm.org/citation.cfm?id=240432), SIAM J. Sci. Comput. 17: 1150–1166.
Shentov, O. V.; Mitra, S. K.; Heute, U.; Hossen, A. N. (1995). "Subband DFT. I. Definition, interpretations and
extensions". Signal Processing 41 (3): 261–277. doi:10.1016/0165-1684(94)00103-7.
Sorensen, H. V.; Jones, D. L.; Heideman, M. T.; Burrus, C. S. (1987). "Real-valued fast Fourier transform
algorithms". IEEE Trans. Acoust. Speech Sig. Processing 35 (35): 849–863. doi:10.1109/TASSP.1987.1165220.
See also Sorensen, H.; Jones, D.; Heideman, M.; Burrus, C. (1987). "Corrections to "Real-valued fast Fourier
transform algorithms"". IEEE Transactions on Acoustics, Speech, and Signal Processing 35 (9): 1353.
doi:10.1109/TASSP.1987.1165284.
Welch, Peter D. (1969). "A fixed-point fast Fourier transform error analysis". IEEE Trans. Audio Electroacoustics
17 (2): 151–157. doi:10.1109/TAU.1969.1162035.
Winograd, S. (1978). "On computing the discrete Fourier transform". Math. Computation 32 (141): 175–199.
doi:10.1090/S0025-5718-1978-0468306-4. JSTOR 2006266.


External links
Fast Fourier Algorithm (http://www.cs.pitt.edu/~kirk/cs1501/animations/FFT.html)
Fast Fourier Transforms (http://cnx.org/content/col10550/), Connexions online book edited by C. Sidney
Burrus, with chapters by C. Sidney Burrus, Ivan Selesnick, Markus Pueschel, Matteo Frigo, and Steven G.
Johnson (2008).
Links to FFT code and information online. (http://www.fftw.org/links.html)
National Taiwan University FFT (http://www.cmlab.csie.ntu.edu.tw/cml/dsp/training/coding/transform/
fft.html)
FFT programming in C++ — Cooley–Tukey algorithm. (http://www.librow.com/articles/article-10)
Online documentation, links, book, and code. (http://www.jjj.de/fxt/)
Using FFT to construct aggregate probability distributions (http://www.vosesoftware.com/ModelRiskHelp/
index.htm#Aggregate_distributions/Aggregate_modeling_-_Fast_Fourier_Transform_FFT_method.htm)
Sri Welaratna, " 30 years of FFT Analyzers (http://www.dataphysics.com/support/library/downloads/articles/
DP-30 Years of FFT.pdf)", Sound and Vibration (January 1997, 30th anniversary issue). A historical review of
hardware FFT devices.
FFT Basics and Case Study Using Multi-Instrument (http://www.multi-instrument.com/doc/D1002/
FFT_Basics_and_Case_Study_using_Multi-Instrument_D1002.pdf)
FFT Textbook notes, PPTs, Videos (http://numericalmethods.eng.usf.edu/topics/fft.html) at Holistic
Numerical Methods Institute.
ALGLIB FFT Code (http://www.alglib.net/fasttransforms/fft.php) GPL Licensed multilanguage (VBA, C++,
Pascal, etc.) numerical analysis and data processing library.

Laplace transform
In mathematics, the Laplace transform is a widely used integral transform. Denoted ℒ{f(t)}, it is a linear
operator of a function f(t) with a real argument t (t ≥ 0) that transforms it to a function F(s) with a complex argument
s. This transformation is essentially bijective for the majority of practical uses; the respective pairs of f(t) and F(s)
are matched in tables. The Laplace transform has the useful property that many relationships and operations over the
originals f(t) correspond to simpler relationships and operations over the images F(s).[1] The Laplace transform has
many important applications throughout the sciences. It is named for Pierre-Simon Laplace who introduced the
transform in his work on probability theory.
The Laplace transform is related to the Fourier transform, but whereas the Fourier transform resolves a function or
signal into its modes of vibration, the Laplace transform resolves a function into its moments. Like the Fourier
transform, the Laplace transform is used for solving differential and integral equations. In physics and engineering, it
is used for analysis of linear time-invariant systems such as electrical circuits, harmonic oscillators, optical devices,
and mechanical systems. In this analysis, the Laplace transform is often interpreted as a transformation from the
time-domain, in which inputs and outputs are functions of time, to the frequency-domain, where the same inputs and
outputs are functions of complex angular frequency, in radians per unit time. Given a simple mathematical or
functional description of an input or output to a system, the Laplace transform provides an alternative functional
description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system
based on a set of specifications.


History
The Laplace transform is named after mathematician and astronomer Pierre-Simon Laplace, who used the transform
in his work on probability theory. From 1744, Leonhard Euler investigated integrals of the form

z = ∫ X(x) e^{ax} dx

as solutions of differential equations but did not pursue the matter very far.[2] Joseph Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form

∫ X(x) e^{−ax} a^x dx,
which some modern historians have interpreted within modern Laplace transform theory.[3] [4]
These types of integrals seem first to have attracted Laplace's attention in 1782 where he was following in the spirit
of Euler in using the integrals themselves as solutions of equations.[5] However, in 1785, Laplace took the critical
step forward when, rather than just looking for a solution in the form of an integral, he started to apply the transforms
in the sense that was later to become popular. He used an integral of the form

∫ x^s φ(x) dx,

akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the
transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of
its properties, beginning to appreciate its potential power.[6]
Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only
apply to a limited region of space as the solutions were periodic. In 1809, Laplace applied his transform to find
solutions that diffused indefinitely in space.[7]

Formal definition
The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

F(s) = ℒ{f(t)} = ∫₀^∞ e^{−st} f(t) dt.

The parameter s is a complex number:

s = σ + iω,

with real numbers σ and ω.
The meaning of the integral depends on types of functions of interest. A necessary condition for existence of the integral is that f must be locally integrable on [0, ∞). For locally integrable functions that decay at infinity or are of exponential type, the integral can be understood as a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at ∞. Still more generally, the integral can be understood in a weak sense, and this is dealt with below.
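The defining integral can be checked numerically for a known transform pair; this sketch truncates the integral at a cutoff T and applies Simpson's rule (the helper name `laplace_numeric` and the chosen cutoff are illustrative assumptions, adequate only for rapidly decaying integrands):

```python
import math

def laplace_numeric(f, s, T=40.0, n=4000):
    # Approximate F(s) = integral_0^T e^{-s t} f(t) dt by Simpson's rule.
    # Assumes the integrand is negligible beyond the cutoff T.
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * math.exp(-s * t) * f(t)
    return total * h / 3.0

# Known pair: f(t) = e^{-a t}  <-->  F(s) = 1/(s + a); here a = 2 and s = 1,
# so the result should be close to 1/3.
F = laplace_numeric(lambda t: math.exp(-2.0 * t), s=1.0)
```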
One can define the Laplace transform of a finite Borel measure μ by the Lebesgue integral[8]

ℒ{μ}(s) = ∫_{[0,∞)} e^{−st} dμ(t).

An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes

ℒ{f}(s) = ∫_{0⁻}^∞ e^{−st} f(t) dt,

where the lower limit of 0⁻ is short notation to mean

lim_{ε→0⁺} ∫_{−ε}^∞.


This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.

Probability theory
In pure and applied probability, the Laplace transform is defined by means of an expectation value. If X is a random variable with probability density function f, then the Laplace transform of f is given by the expectation

ℒ{f}(s) = E[e^{−sX}].

By abuse of language, this is referred to as the Laplace transform of the random variable X itself. Replacing s by −t gives the moment generating function of X. The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory.

Bilateral Laplace transform


When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is normally
intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform or two-sided
Laplace transform by extending the limits of integration to be the entire real axis. If that is done the common
unilateral transform simply becomes a special case of the bilateral transform where the definition of the function
being transformed is multiplied by the Heaviside step function.
The bilateral Laplace transform is defined as follows:

F(s) = ℒ{f}(s) = ∫_{−∞}^∞ e^{−st} f(t) dt.
Inverse Laplace transform


The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula):

f(t) = ℒ^{−1}{F}(t) = (1/2πi) lim_{T→∞} ∫_{γ−iT}^{γ+iT} e^{st} F(s) ds,

where γ is a real number so that the contour path of integration is in the region of convergence of F(s). An alternative formula for the inverse Laplace transform is given by Post's inversion formula.

Region of convergence
If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit

lim_{R→∞} ∫₀^R f(t) e^{−st} dt

exists. The Laplace transform converges absolutely if the integral

∫₀^∞ |f(t) e^{−st}| dt

exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense.
The set of values for which F(s) converges absolutely is either of the form Re{s} > a or else Re{s} ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is

known as the abscissa of absolute convergence, and depends on the growth behavior of f(t).[9] Analogously, the two-sided transform converges absolutely in a strip of the form a < Re{s} < b, and possibly including the lines Re{s} = a or Re{s} = b.[10] The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence.
Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s₀, then it automatically converges for all s with Re{s} > Re{s₀}. Therefore the region of convergence is a half-plane of the form Re{s} > a, possibly including some points of the boundary line Re{s} = a. In the region of convergence Re{s} > Re{s₀}, the Laplace transform of f can be expressed by integrating by parts as the integral

F(s) = (s − s₀) ∫₀^∞ e^{−(s−s₀)t} β(t) dt,  where β(u) = ∫₀^u e^{−s₀t} f(t) dt.

That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace
transform of some other function. In particular, it is analytic.
A variety of theorems, in the form of Paley–Wiener theorems, exist concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence.
In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re{s} ≥ 0. As a result, LTI systems are stable provided the poles of the Laplace transform of the impulse response function have negative real part.

Properties and theorems


The Laplace transform has a number of properties that make it useful for analyzing linear dynamical systems. The most significant advantage is that differentiation and integration become multiplication and division, respectively, by s (similarly to logarithms changing multiplication of numbers to addition of their logarithms). Because of this property, the Laplace variable s is also known as operator variable in the L domain: either derivative operator or (for s⁻¹) integration operator. The transform turns integral equations and differential equations to polynomial equations, which are much easier to solve. Once solved, use of the inverse Laplace transform reverts to the time domain.
Given the functions f(t) and g(t), and their respective Laplace transforms F(s) and G(s):

F(s) = ℒ{f(t)},  G(s) = ℒ{g(t)},

the following table is a list of properties of the unilateral Laplace transform:[11]


Properties of the unilateral Laplace transform

Linearity: a f(t) + b g(t) ⟷ a F(s) + b G(s). Can be proved using basic rules of integration.

Frequency differentiation: t f(t) ⟷ −F′(s), where F′ is the first derivative of F(s).

Frequency differentiation (more general form): tⁿ f(t) ⟷ (−1)ⁿ F⁽ⁿ⁾(s), the nth derivative of F(s).

Differentiation: f′(t) ⟷ s F(s) − f(0). Here f is assumed to be a differentiable function, and its derivative is assumed to be of exponential type. This can then be obtained by integration by parts.

Second differentiation: f″(t) ⟷ s² F(s) − s f(0) − f′(0). Here f is assumed twice differentiable and the second derivative to be of exponential type. Follows by applying the differentiation property to f′(t).

General differentiation: f⁽ⁿ⁾(t) ⟷ sⁿ F(s) − sⁿ⁻¹ f(0) − ⋯ − f⁽ⁿ⁻¹⁾(0). Here f is assumed to be n-times differentiable, with nth derivative of exponential type. Follows by mathematical induction.

Frequency integration: f(t)/t ⟷ ∫_s^∞ F(σ) dσ.

Integration: ∫₀^t f(τ) dτ = (u ∗ f)(t) ⟷ (1/s) F(s), where u(t) is the Heaviside step function. Note that (u ∗ f)(t) is the convolution of u(t) and f(t).

Time scaling: f(at) ⟷ (1/a) F(s/a), for a > 0.

Frequency shifting: e^{at} f(t) ⟷ F(s − a).

Time shifting: f(t − a) u(t − a) ⟷ e^{−as} F(s), where u(t) is the Heaviside step function.

Multiplication: f(t) g(t) ⟷ (1/2πi) ∫_{c−i∞}^{c+i∞} F(σ) G(s − σ) dσ; the integration is done along the vertical line Re(σ) = c that lies entirely within the region of convergence of F.[12]

Convolution: (f ∗ g)(t) ⟷ F(s) G(s); f(t) and g(t) are extended by zero for t < 0 in the definition of the convolution.

Complex conjugation: f*(t) ⟷ F*(s*).

Cross-correlation: (f ⋆ g)(t) ⟷ F*(−s*) G(s).

Periodic function: if f(t) is a periodic function of period T, so that f(t) = f(t + T) for all t ≥ 0, then f(t) ⟷ (1/(1 − e^{−Ts})) ∫₀^T e^{−st} f(t) dt. This is the result of the time shifting property and the geometric series.

Initial value theorem:

f(0⁺) = lim_{s→∞} s F(s).

Final value theorem:

f(∞) = lim_{s→0} s F(s), if all poles of s F(s) are in the left half-plane.

The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions or other difficult algebra. If a function's poles are in the right-hand plane (e.g. e^t or sin(t)) the behaviour of this formula is undefined.
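A quick numerical illustration, using the standard table pair f(t) = 1 − e^{−t} ⟷ F(s) = 1/(s(s + 1)); the only pole of s F(s) is at s = −1, in the left half-plane, so the theorem applies:

```python
import math

# Final value theorem: lim_{t->oo} f(t) = lim_{s->0} s F(s), valid when all
# poles of s F(s) lie in the open left half-plane.
# Table pair used here: f(t) = 1 - e^{-t}  <-->  F(s) = 1/(s (s + 1)).

def sF(s):
    # s * F(s), which simplifies algebraically to 1 / (s + 1)
    return s * (1.0 / (s * (s + 1.0)))

final_from_theorem = sF(1e-9)             # approximate the limit s -> 0+
final_from_time = 1.0 - math.exp(-30.0)   # f(t) evaluated at large t
```

Both quantities come out close to 1, the true long-term value, without any partial fraction work on the time side.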

Proof of the Laplace transform of a function's derivative


It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows (integrating by parts):

ℒ{f′(t)} = ∫_{0⁻}^∞ e^{−st} f′(t) dt = [e^{−st} f(t)]_{0⁻}^∞ + s ∫_{0⁻}^∞ e^{−st} f(t) dt,

yielding

ℒ{f′(t)} = s ℒ{f(t)} − f(0⁻),

and in the bilateral case,

ℒ{f′(t)} = s ∫_{−∞}^∞ e^{−st} f(t) dt = s ℒ{f(t)}.

The general result

ℒ{f⁽ⁿ⁾(t)} = sⁿ ℒ{f(t)} − sⁿ⁻¹ f(0⁻) − ⋯ − f⁽ⁿ⁻¹⁾(0⁻),

where f⁽ⁿ⁾ is the n-th derivative of f, can then be established with an inductive argument.

Evaluating improper integrals


Let ℒ{f(t)} = F(s), then (see the table above)

ℒ{f(t)/t} = ∫_s^∞ F(p) dp,

or

∫₀^∞ (f(t)/t) e^{−st} dt = ∫_s^∞ F(p) dp.

Letting s → 0, we get the identity

∫₀^∞ (f(t)/t) dt = ∫₀^∞ F(p) dp.

For example,

∫₀^∞ ((cos at − cos bt)/t) dt = ∫₀^∞ (p/(p² + a²) − p/(p² + b²)) dp = ln(b/a).

Another example is the Dirichlet integral.
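The frequency-integration identity behind this example can be spot-checked numerically at s = 1 for f(t) = cos t − cos 2t, comparing a truncated quadrature of f(t) e^{−st}/t against the closed-form value of ∫_s^∞ F(p) dp (the cutoff and step count here are illustrative choices):

```python
import math

def integrand(t, s=1.0):
    # (cos t - cos 2t)/t * e^{-s t}, with the removable singularity at t = 0
    if t == 0.0:
        return 0.0
    return (math.cos(t) - math.cos(2.0 * t)) / t * math.exp(-s * t)

def simpson(g, a, b, n=4000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    acc = g(a) + g(b)
    for i in range(1, n):
        acc += (4 if i % 2 == 1 else 2) * g(a + i * h)
    return acc * h / 3.0

# Left side: L{f(t)/t}(1) for f(t) = cos t - cos 2t, truncated at t = 40.
lhs = simpson(integrand, 0.0, 40.0)
# Right side: integral_1^oo [p/(p^2+1) - p/(p^2+4)] dp = (1/2) ln((1+4)/(1+1)).
rhs = 0.5 * math.log(5.0 / 2.0)
```

The two values agree to several digits, and letting s → 0 in the same setup recovers the ln(b/a) example above.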


Relationship to other transforms


Laplace–Stieltjes transform
The (unilateral) Laplace–Stieltjes transform of a function g: R → R is defined by the Lebesgue–Stieltjes integral

{ℒ*g}(s) = ∫₀^∞ e^{−st} dg(t).

The function g is assumed to be of bounded variation. If g is the antiderivative of f:

g(x) = ∫₀^x f(t) dt,

then the Laplace–Stieltjes transform of g and the Laplace transform of f coincide. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to g. So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function.[13]
Fourier transform
The continuous Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument s = iω or s = 2πfi:

    f̂(ω) = F{f(t)} = L{f(t)}|_{s = iω} = ∫_{-∞}^∞ e^(-iωt) f(t) dt.

This expression excludes the scaling factor 1/√(2π), which is often included in definitions of the Fourier transform.

This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system.

The above relation is valid as stated if and only if the region of convergence (ROC) of F(s) contains the imaginary axis, σ = 0. For example, the function f(t) = cos(ω₀t) has a Laplace transform F(s) = s/(s² + ω₀²) whose ROC is Re(s) > 0. As s = iω₀ is a pole of F(s), substituting s = iω₀ in F(s) does not yield the Fourier transform of f(t)u(t), which is proportional to the Dirac delta function δ(ω - ω₀).

However, a relation of the form

    lim_{σ→0⁺} F(σ + iω) = f̂(ω)

holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley–Wiener theorems.


Mellin transform
The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables. If in the Mellin transform

    G(s) = M{g(θ)} = ∫_0^∞ θ^(s-1) g(θ) dθ

we set θ = e^(-t), we get a two-sided Laplace transform.


Z-transform
The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of

    z = e^(sT),

where T = 1/f_s is the sampling period (in units of time, e.g., seconds) and f_s is the sampling rate (in samples per second or hertz).

Let

    Δ_T(t) = Σ_{n=0}^∞ δ(t - nT)

be a sampling impulse train (also called a Dirac comb) and

    x_q(t) = x(t) Δ_T(t) = Σ_{n=0}^∞ x(nT) δ(t - nT)

be the continuous-time representation of the sampled x(t), where x[n] = x(nT) are the discrete samples of x(t).

The Laplace transform of the sampled signal x_q(t) is

    X_q(s) = Σ_{n=0}^∞ x[n] e^(-nsT).

This is precisely the definition of the unilateral Z-transform of the discrete function x[n],

    X(z) = Σ_{n=0}^∞ x[n] z^(-n),

with the substitution of z = e^(sT).

Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal:

    X_q(s) = X(z)|_{z = e^(sT)}.

The similarity between the Z and Laplace transforms is expanded upon in the theory of time scale calculus.
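The substitution z = e^(sT) can be checked on a concrete sampled signal. The sketch below (an assumed example: x(t) = e^(-t) sampled with period T, so x[n] = r^n with r = e^(-T)) compares a truncated Z-transform sum, evaluated at z = e^(sT), with its geometric closed form:

```python
import math

# Assumed example: sampling x(t) = exp(-t) with period T gives x[n] = r**n,
# r = exp(-T), whose Z-transform is the geometric series 1/(1 - r/z).
# Evaluating at z = exp(s*T) reproduces the Laplace transform of the
# ideally sampled signal, sum over n of x[n]*exp(-n*s*T).
T = 0.1
s = 1.0
r = math.exp(-T)
z = math.exp(s * T)

truncated = sum((r ** n) * z ** (-n) for n in range(2000))
closed_form = 1.0 / (1.0 - r / z)
print(truncated, closed_form)   # the two values agree
```

Since |r/z| = e^(-(1+s... wait, e^(-T(1+s))) < 1, the series converges and the truncation error is negligible.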


Borel transform
The integral form of the Borel transform

    F(s) = ∫_0^∞ f(z) e^(-sz) dz

is a special case of the Laplace transform for f an entire function of exponential type, meaning that

    |f(z)| ≤ A e^(B|z|)

for some constants A and B. The generalized Borel transform allows a different weighting function to be used, rather
than the exponential function, to transform functions not of exponential type. Nachbin's theorem gives necessary and
sufficient conditions for the Borel transform to be well defined.
Fundamental relationships
Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theories of the Laplace, Fourier, Mellin, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.

Table of selected Laplace transforms


The following table provides Laplace transforms for many common functions of a single variable. For definitions
and explanations, see the Explanatory Notes at the end of the table.
Because the Laplace transform is a linear operator:
The Laplace transform of a sum is the sum of Laplace transforms of each term.

The Laplace transform of a multiple of a function is that multiple times the Laplace transformation of that
function.

The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is
why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t). The entries
of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system
where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal
systems is not the same as that of anticausal systems.
Function; time domain f(t); Laplace s-domain F(s); region of convergence; reference

unit impulse:  δ(t)  ↔  1;  all s.  (inspection)

delayed impulse:  δ(t - τ)  ↔  e^(-τs);  all s.  (time shift of unit impulse)

unit step:  u(t)  ↔  1/s;  Re(s) > 0.  (integrate unit impulse)

delayed unit step:  u(t - τ)  ↔  e^(-τs)/s;  Re(s) > 0.  (time shift of unit step)

ramp:  t · u(t)  ↔  1/s²;  Re(s) > 0.  (integrate unit impulse twice)

delayed nth power with frequency shift:  ((t - τ)^n / n!) e^(-α(t-τ)) u(t - τ)  ↔  e^(-τs)/(s + α)^(n+1);  Re(s) > -α.  (integrate unit step, apply frequency shift, apply time shift)

nth power (for integer n):  (t^n / n!) u(t)  ↔  1/s^(n+1);  Re(s) > 0, n > -1.  (integrate unit step n times)

qth power (for complex q):  (t^q / Γ(q + 1)) u(t)  ↔  1/s^(q+1);  Re(s) > 0, Re(q) > -1.

nth power with frequency shift:  (t^n / n!) e^(-αt) u(t)  ↔  1/(s + α)^(n+1);  Re(s) > -α.  (integrate unit step, apply frequency shift)

exponential decay:  e^(-αt) u(t)  ↔  1/(s + α);  Re(s) > -α.  (frequency shift of unit step)

two-sided exponential decay:  e^(-α|t|)  ↔  2α/(α² - s²);  -α < Re(s) < α.  (frequency shift of unit step; bilateral transform)

exponential approach:  (1 - e^(-αt)) u(t)  ↔  α/(s(s + α));  Re(s) > 0.  (unit step minus exponential decay)

sine:  sin(ωt) u(t)  ↔  ω/(s² + ω²);  Re(s) > 0.

cosine:  cos(ωt) u(t)  ↔  s/(s² + ω²);  Re(s) > 0.

hyperbolic sine:  sinh(αt) u(t)  ↔  α/(s² - α²);  Re(s) > |α|.

hyperbolic cosine:  cosh(αt) u(t)  ↔  s/(s² - α²);  Re(s) > |α|.

exponentially decaying sine wave:  e^(-αt) sin(ωt) u(t)  ↔  ω/((s + α)² + ω²);  Re(s) > -α.

exponentially decaying cosine wave:  e^(-αt) cos(ωt) u(t)  ↔  (s + α)/((s + α)² + ω²);  Re(s) > -α.

nth root:  t^(1/n) u(t)  ↔  s^(-(n+1)/n) Γ(1 + 1/n);  Re(s) > 0.

natural logarithm:  ln(t) u(t)  ↔  -(1/s)(ln(s) + γ);  Re(s) > 0.

Bessel function of the first kind, of order n:  J_n(ωt) u(t)  ↔  (√(s² + ω²) - s)^n / (ω^n √(s² + ω²));  Re(s) > 0, n > -1.

Modified Bessel function of the first kind, of order n:  I_n(ωt) u(t)  ↔  (s - √(s² - ω²))^n / (ω^n √(s² - ω²));  Re(s) > |ω|.

Bessel function of the second kind, of order 0:  Y₀(αt) u(t)  ↔  -(2/π) arsinh(s/α) / √(s² + α²);  Re(s) > 0.

Error function:  erf(t) u(t)  ↔  (1/s) e^(s²/4) erfc(s/2);  Re(s) > 0.

Explanatory notes:

u(t) represents the Heaviside step function.
δ(t) represents the Dirac delta function.
Γ(z) represents the Gamma function.
γ is the Euler–Mascheroni constant.
t, a real number, typically represents time, although it can represent any independent dimension.
s is the complex angular frequency, and Re(s) = σ is its real part.
α, β, τ, and ω are real numbers.
n is an integer.


s-Domain equivalent circuits and impedances


The Laplace transform is often used in circuit analysis, and simple conversions to the s-Domain of circuit elements
can be made. Circuit elements can be transformed into impedances, very similar to phasor impedances.
Here is a summary of equivalents:

    Resistor:  Z_R(s) = R
    Inductor:  Z_L(s) = sL, in series with a voltage source L·i(0⁻)
    Capacitor: Z_C(s) = 1/(sC), in series with a voltage source v(0⁻)/s

Note that the resistor is exactly the same in the time domain and the s-Domain. The sources are put in if there are
initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor
has an initial current through it, the sources inserted in the s-Domain account for that.
The equivalents for current and voltage sources are simply derived from the transformations in the table above.

Examples: How to apply the properties and theorems


The Laplace transform is used frequently in engineering and physics; the output of a linear time-invariant system can
be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace
space turns the convolution into a multiplication; the latter being easier to solve because of its algebraic form. For
more information, see control theory.
The Laplace transform can also be used to solve differential equations and is used extensively in electrical
engineering. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be
solved by the formal rules of algebra. The original differential equation can then be solved by applying the inverse
Laplace transform. The English electrical engineer Oliver Heaviside first proposed a similar scheme, although
without using the Laplace transform; and the resulting operational calculus is credited as the Heaviside calculus.


Example 1: Solving a differential equation


In nuclear physics, the following fundamental relationship governs radioactive decay: the number of radioactive atoms N in a sample of a radioactive isotope decays at a rate proportional to N. This leads to the first order linear differential equation

    dN/dt = -λN,

where λ is the decay constant. The Laplace transform can be used to solve this equation.

Rearranging the equation to one side, we have

    dN/dt + λN = 0.

Next, we take the Laplace transform of both sides of the equation:

    s Ñ(s) - N₀ + λ Ñ(s) = 0,

where

    Ñ(s) = L{N(t)}

and

    N₀ = N(0).

Solving, we find

    Ñ(s) = N₀ / (s + λ).

Finally, we take the inverse Laplace transform to find the general solution

    N(t) = N₀ e^(-λt),

which is indeed the correct form for radioactive decay.
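The solution can be cross-checked by integrating the decay equation directly. A minimal sketch, with illustrative values for λ, N₀, and the step size:

```python
import math

# Forward-Euler integration of dN/dt = -lam*N, compared against the
# closed-form solution N(t) = N0*exp(-lam*t) obtained via the transform.
# lam, N0 and the step size are arbitrary illustrative values.
lam, N0 = 0.5, 1000.0
dt, steps = 1e-4, 20000          # integrate out to t = 2.0
N = N0
for _ in range(steps):
    N -= lam * N * dt
t = steps * dt
exact = N0 * math.exp(-lam * t)
print(N, exact)                  # numerically close
```

The small residual gap is the first-order truncation error of the Euler scheme, which shrinks with dt.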

Example 2: Deriving the complex impedance for a capacitor


In the theory of electrical circuits, the current flow in a capacitor is proportional to the capacitance and rate of change in the electrical potential (in SI units). Symbolically, this is expressed by the differential equation

    i = C dv/dt,

where C is the capacitance (in farads) of the capacitor, i = i(t) is the electric current (in amperes) through the capacitor as a function of time, and v = v(t) is the voltage (in volts) across the terminals of the capacitor, also as a function of time.

Taking the Laplace transform of this equation, we obtain

    I(s) = C (s V(s) - V₀),

where

    I(s) = L{i(t)},  V(s) = L{v(t)},

and

    V₀ = v(0).

Solving for V(s) we have

    V(s) = I(s)/(sC) + V₀/s.

The definition of the complex impedance Z (in ohms) is the ratio of the complex voltage V divided by the complex current I while holding the initial state V₀ at zero:

    Z(s) = V(s)/I(s) |_{V₀ = 0}.

Using this definition and the previous equation, we find:

    Z(s) = 1/(sC),

which is the correct expression for the complex impedance of a capacitor.

Example 3: Method of partial fraction expansion


Consider a linear time-invariant system with transfer function

    H(s) = 1 / ((s + α)(s + β)).

The impulse response is simply the inverse Laplace transform of this transfer function:

    h(t) = L⁻¹{H(s)}.

To evaluate this inverse transform, we begin by expanding H(s) using the method of partial fraction expansion:

    H(s) = P/(s + α) + R/(s + β).

The unknown constants P and R are the residues located at the corresponding poles of the transfer function. Each residue represents the relative contribution of that singularity to the transfer function's overall shape. By the residue theorem, the inverse Laplace transform depends only upon the poles and their residues. To find the residue P, we multiply both sides of the equation by s + α to get

    (s + α) H(s) = P + R (s + α)/(s + β).

Then by letting s = -α, the contribution from R vanishes and all that is left is

    P = 1/(β - α).

Similarly, the residue R is given by

    R = 1/(α - β).

Note that R = -P, and so the substitution of R and P into the expanded expression for H(s) gives

    H(s) = (1/(β - α)) · (1/(s + α) - 1/(s + β)).

Finally, using the linearity property and the known transform for exponential decay (see Item #3 in the Table of Laplace Transforms, above), we can take the inverse Laplace transform of H(s) to obtain

    h(t) = (1/(β - α)) (e^(-αt) - e^(-βt)),

which is the impulse response of the system.
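The residue computation can be mimicked numerically by evaluating (s + α)H(s) just next to the pole. A minimal sketch with assumed pole locations α = 2, β = 5:

```python
# Numerical residue extraction for H(s) = 1/((s + alpha)*(s + beta)),
# using illustrative pole locations alpha = 2, beta = 5.  The residue at
# s = -alpha is the value of (s + alpha)*H(s) as s approaches -alpha.
alpha, beta = 2.0, 5.0
H = lambda s: 1.0 / ((s + alpha) * (s + beta))

eps = 1e-8
P = ((-alpha + eps) + alpha) * H(-alpha + eps)   # expect 1/(beta - alpha)
R = ((-beta + eps) + beta) * H(-beta + eps)      # expect 1/(alpha - beta)

# the partial fractions should rebuild H at any test point
check = P / (1.0 + alpha) + R / (1.0 + beta)     # evaluated at s = 1
print(P, R, check, H(1.0))
```

With these values P ≈ 1/3 and R ≈ -1/3, and the partial-fraction sum reproduces H(1) = 1/18.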


Example 3.2: Convolution


The same result can be achieved using the convolution property, as if the system were a series of filters with transfer functions 1/(s + a) and 1/(s + b). That is, the inverse of

    H(s) = 1/((s + a)(s + b)) = (1/(s + a)) · (1/(s + b))

is

    h(t) = e^(-at) * e^(-bt) = ∫_0^t e^(-ax) e^(-b(t - x)) dx = (e^(-at) - e^(-bt)) / (b - a).
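The convolution integral can be verified on a grid. A minimal sketch with illustrative values a = 1, b = 3, comparing a Riemann-sum convolution at a single time point with the closed form:

```python
import math

# Discretized convolution of exp(-a*t) and exp(-b*t), evaluated at one time
# point and compared with (exp(-a*t) - exp(-b*t))/(b - a).  The values of
# a, b and the grid are illustrative.
a, b, dt = 1.0, 3.0, 1e-3
t_end = 2.0
n = int(t_end / dt)

conv = sum(math.exp(-a * k * dt) * math.exp(-b * (t_end - k * dt))
           for k in range(n)) * dt
closed = (math.exp(-a * t_end) - math.exp(-b * t_end)) / (b - a)
print(conv, closed)
```

Refining dt tightens the agreement, since the Riemann-sum error is first order in the step size.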

Example 4: Mixing sines, cosines, and exponentials


Time function:

    f(t) = e^(-αt) [cos(ωt) + ((β - α)/ω) sin(ωt)] u(t)

Laplace transform:

    F(s) = (s + β) / ((s + α)² + ω²)

Starting with the Laplace transform

    F(s) = (s + β) / ((s + α)² + ω²),

we find the inverse transform by first adding and subtracting the same constant α to the numerator:

    F(s) = (s + α)/((s + α)² + ω²) + (β - α)/((s + α)² + ω²).

By the shift-in-frequency property, we have

    f(t) = e^(-αt) L⁻¹{ s/(s² + ω²) + ((β - α)/ω) · ω/(s² + ω²) }.

Finally, using the Laplace transforms for sine and cosine (see the table, above), we have

    f(t) = e^(-αt) [cos(ωt) + ((β - α)/ω) sin(ωt)] u(t).

Example 5: Phase delay


Time function:

    f(t) = sin(ωt + φ)

Laplace transform:

    F(s) = (s sin(φ) + ω cos(φ)) / (s² + ω²)

Starting with the Laplace transform,

    F(s) = (s sin(φ) + ω cos(φ)) / (s² + ω²),

we find the inverse by first rearranging terms in the fraction:

    F(s) = sin(φ) · s/(s² + ω²) + cos(φ) · ω/(s² + ω²).

We are now able to take the inverse Laplace transform of our terms:

    f(t) = sin(φ) cos(ωt) + cos(φ) sin(ωt).

This is just the sine of the sum of the arguments, yielding:

    f(t) = sin(ωt + φ).

We can apply similar logic to find that

    L{cos(ωt + φ)} = (s cos(φ) - ω sin(φ)) / (s² + ω²).
Notes
[1] Korn & Korn 1967, §8.1
[2] Euler 1744, (1753) and (1769)
[3] Lagrange 1773
[4] Grattan-Guinness 1997, p. 260
[5] Grattan-Guinness 1997, p. 261
[6] Grattan-Guinness 1997, pp. 261–262
[7] Grattan-Guinness 1997, pp. 262–266
[8] Feller 1971, §XIII.1
[9] Widder 1941, Chapter II, §1
[10] Widder 1941, Chapter VI, §2
[11] (Korn & Korn 1967, pp. 226–227)
[12] Bracewell 2000, Table 14.1, p. 385
[13] Feller 1971, p. 432


References
Modern
Arendt, Wolfgang; Batty, Charles J. K.; Hieber, Matthias; Neubrander, Frank (2002), Vector-Valued Laplace
Transforms and Cauchy Problems, Birkhäuser Basel, ISBN 3764365498.
Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), Boston: McGraw-Hill,
ISBN 0071160434.
Davies, Brian (2002), Integral transforms and their applications (Third ed.), New York: Springer,
ISBN 0-387-95314-0.
Feller, William (1971), An introduction to probability theory and its applications. Vol. II., Second edition, New
York: John Wiley & Sons, MR0270403.
Korn, G. A.; Korn, T. M. (1967), Mathematical Handbook for Scientists and Engineers (2nd ed.), McGraw-Hill
Companies, ISBN 0-0703-5370-0.
Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, Boca Raton: CRC Press,
ISBN 0-8493-2876-4.
Schwartz, Laurent (1952), "Transformation de Laplace des distributions", Comm. Sém. Math. Univ. Lund [Medd.
Lunds Univ. Mat. Sem.] 1952: 196–206, MR0052555.
Siebert, William McC. (1986), Circuits, Signals, and Systems, Cambridge, Massachusetts: MIT Press,
ISBN 0-262-19229-2.
Widder, David Vernon (1941), The Laplace Transform, Princeton Mathematical Series, v. 6, Princeton University
Press, MR0005923.
Widder, David Vernon (1945), "What is the Laplace transform?", The American Mathematical Monthly 52 (8):
419–425, doi:10.2307/2305640, ISSN 0002-9890, JSTOR 2305640, MR0013447.

Historical
Deakin, M. A. B. (1981), "The development of the Laplace transform", Archive for the History of the Exact
Sciences 25 (4): 343–390, doi:10.1007/BF01395660
Deakin, M. A. B. (1982), "The development of the Laplace transform", Archive for the History of the Exact
Sciences 26: 351–381

Euler, L. (1744), "De constructione aequationum", Opera omnia, 1st series 22: 150–161.
Euler, L. (1753), "Methodus aequationes differentiales", Opera omnia, 1st series 22: 181–213.
Euler, L. (1769), "Institutiones calculi integralis, Volume 2", Opera omnia, 1st series 12, Chapters 3–5.
Grattan-Guinness, I. (1997), "Laplace's integral solutions to partial differential equations", in Gillispie, C. C.,
Pierre Simon Laplace 1749–1827: A Life in Exact Science, Princeton: Princeton University Press,
ISBN 0-691-01185-0.
Lagrange, J. L. (1773), Mémoire sur l'utilité de la méthode, Œuvres de Lagrange, 2, pp. 171–234.


External links
Online Computation (http://wims.unice.fr/wims/wims.cgi?lang=en&+module=tool/analysis/fourierlaplace.
en) of the transform or inverse transform, wims.unice.fr
Tables of Integral Transforms (http://eqworld.ipmnet.ru/en/auxiliary/aux-inttrans.htm) at EqWorld: The
World of Mathematical Equations.
Weisstein, Eric W., " Laplace Transform (http://mathworld.wolfram.com/LaplaceTransform.html)" from
MathWorld.
Laplace Transform Module by John H. Mathews (http://math.fullerton.edu/mathews/c2003/
LaplaceTransformMod.html)
Good explanations of the initial and final value theorems (http://fourier.eng.hmc.edu/e102/lectures/
Laplace_Transform/)
Laplace Transforms (http://www.mathpages.com/home/kmath508/kmath508.htm) at MathPages
Laplace and Heaviside (http://www.intmath.com/Laplace/1a_lap_unitstepfns.php) at Interactive maths.
Laplace Transform Table and Examples (http://www.vibrationdata.com/Laplace.htm) at Vibrationdata.
Examples (http://www.exampleproblems.com/wiki/index.php/PDE:Laplace_Transforms) of solving
boundary value problems (PDEs) with Laplace Transforms
Computational Knowledge Engine (http://www.wolframalpha.com/input/?i=laplace+transform+example)
allows easy calculation of Laplace transforms and their inverse transforms.

Linear system
A linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically
exhibit features and properties that are much simpler than the general, nonlinear case. As a mathematical abstraction
or idealization, linear systems find important applications in automatic control theory, signal processing, and
telecommunications. For example, the propagation medium for wireless communication systems can often be
modeled by linear systems.
A general deterministic system can be described by an operator, H, that maps an input, x(t), as a function of t, to an output, y(t), a type of black box description. Linear systems satisfy the properties of superposition and scaling or homogeneity. Given two valid inputs

    x₁(t), x₂(t),

as well as their respective outputs

    y₁(t) = H{x₁(t)},  y₂(t) = H{x₂(t)},

a linear system must satisfy

    α y₁(t) + β y₂(t) = H{α x₁(t) + β x₂(t)}

for any scalar values α and β.

The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than for many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components.
Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace
transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).
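The superposition property can be checked numerically on any candidate system. A minimal sketch, using an assumed example pair: a small FIR filter (linear) contrasted with a squaring system (nonlinear):

```python
# Superposition check: H{a*x1 + b*x2} should equal a*H{x1} + b*H{x2}
# for a linear system.  The FIR coefficients and test signals are
# arbitrary illustrative values.
def fir(x, h=(1.0, -0.5, 0.25)):
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def square(x):                      # a nonlinear system for contrast
    return [v * v for v in x]

x1 = [1.0, 2.0, 0.0, -1.0]
x2 = [0.5, -1.0, 3.0, 2.0]
a, b = 2.0, -3.0
combo = [a * u + b * v for u, v in zip(x1, x2)]

lin_lhs = fir(combo)
lin_rhs = [a * u + b * v for u, v in zip(fir(x1), fir(x2))]
print(lin_lhs == lin_rhs)   # superposition holds for the FIR filter

sq_lhs = square(combo)
sq_rhs = [a * u + b * v for u, v in zip(square(x1), square(x2))]
print(sq_lhs == sq_rhs)     # superposition fails for the squarer
```

The filter passes the test for any choice of scalars, while the squarer fails it, which is exactly the distinction the definition draws.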

Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the
geometric sense.
A common use of linear models is to describe a nonlinear system by linearization. This is usually done for
mathematical convenience.

Time-varying impulse response


The time-varying impulse response h(t₂, t₁) of a linear system is defined as the response of the system at time t = t₂ to a single impulse applied at time t = t₁. In other words, if the input x(t) to a linear system is

    x(t) = δ(t - t₁),

where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is

    y(t)|_{t = t₂} = h(t₂, t₁),

then the function h(t₂, t₁) is the time-varying impulse response of the system.

Time-varying convolution integral


Continuous time
The output of any continuous time linear system is related to the input by the time-varying convolution integral:

    y(t) = ∫_{-∞}^∞ h(t, s) x(s) ds

or, equivalently,

    y(t) = ∫_{-∞}^∞ h(t, t - τ) x(t - τ) dτ,

where τ = t - s represents the lag time between the stimulus at time s and the response at time t.

Discrete time
The output of any discrete time linear system is related to the input by the time-varying convolution sum:

    y[n] = Σ_m h[n, m] x[m]

or equivalently,

    y[n] = Σ_k h[n, n - k] x[n - k],

where k = n - m represents the lag time between the stimulus at time m and the response at time n.

Causality
A linear system is causal if and only if the system's time-varying impulse response is identically zero whenever the time t of the response is earlier than the time s of the stimulus. In other words, for a causal system, the following condition must hold:

    h(t, s) = 0 for t < s.
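The discrete-time convolution sum, with a causal kernel, can be sketched in a few lines. The kernel below is an assumed illustrative choice, not taken from the text:

```python
# Discrete time-varying convolution sum: y[n] = sum over m of h[n][m] * x[m].
# The kernel is an illustrative causal, time-varying choice: a decaying
# memory 0.5**(n - m) whose gain 1 + 0.1*n drifts with the response time n,
# and which is zero whenever the response precedes the stimulus (n < m).
N = 6
h = [[(0.5 ** (n - m)) * (1 + 0.1 * n) if n >= m else 0.0
      for m in range(N)] for n in range(N)]
x = [1.0, 0.0, 2.0, 0.0, 0.0, 0.0]

y = [sum(h[n][m] * x[m] for m in range(N)) for n in range(N)]
print(y)
print(all(h[n][m] == 0.0 for n in range(N) for m in range(N) if n < m))  # causal
```

Because the kernel depends on n and m separately (not just on the lag n - m), this system is linear but not time-invariant.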


Time-invariant system
A time-invariant (TIV) system is one whose output does not depend explicitly on time: if the input signal x(t) produces an output y(t), then any time-shifted input, x(t + δ), results in a time-shifted output y(t + δ).

This property can be satisfied if the transfer function of the system is not a function of time except as expressed by the input and output. This property can also be stated in another way in terms of a schematic: if a system is time-invariant, then the system block commutes with an arbitrary delay.

Simple example
To demonstrate how to determine if a system is time-invariant, consider the two systems:

    System A: y(t) = t · x(t)
    System B: y(t) = 10 · x(t)

Since System A explicitly depends on t outside of x(t) and y(t), it is time-variant. System B, however, does not depend explicitly on t, so it is time-invariant.

Formal example
A more formal proof of why Systems A and B above differ is now presented. To perform this proof, the second definition will be used.

System A: start with a delay of the input

    x_d(t) = x(t + δ),
    y₁(t) = t · x(t + δ).

Now delay the output by δ:

    y₂(t) = y(t + δ) = (t + δ) · x(t + δ).

Clearly y₁(t) ≠ y₂(t), therefore the system is not time-invariant.

System B: start with a delay of the input

    x_d(t) = x(t + δ),
    y₁(t) = 10 · x(t + δ).

Now delay the output by δ:

    y₂(t) = y(t + δ) = 10 · x(t + δ).

Clearly y₁(t) = y₂(t), therefore the system is time-invariant. Although there are many other proofs, this is the easiest.
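The shift test above can be replicated on sampled signals. A minimal sketch, with an arbitrary test signal and delay, contrasting a system of the form y(t) = t·x(t) with one of the form y(t) = 10·x(t):

```python
# Discrete replication of the formal example: System A, y(t) = t*x(t), does
# not commute with a shift, while System B, y(t) = 10*x(t), does.  The test
# signal and delay are arbitrary illustrative choices.
def shift(sig, d):
    # delay a sampled signal by d samples, zero-padding at the start
    return [sig[n - d] if 0 <= n - d < len(sig) else 0.0
            for n in range(len(sig))]

ts = list(range(8))
x = [0.0, 1.0, 4.0, 2.0, 0.0, 0.0, 0.0, 0.0]
d = 2

sysA = lambda sig: [t * v for t, v in zip(ts, sig)]
sysB = lambda sig: [10.0 * v for v in sig]

a_shift_first = sysA(shift(x, d))
a_shift_last = shift(sysA(x), d)
print(a_shift_first == a_shift_last)   # False: System A is time-variant

b_shift_first = sysB(shift(x, d))
b_shift_last = shift(sysB(x), d)
print(b_shift_first == b_shift_last)   # True: System B is time-invariant
```

Only System B gives the same output whether the shift is applied before or after the system.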


Abstract example
We can denote the shift operator by T_r, where r is the amount by which a vector's index set should be shifted. For example, the "advance-by-1" system

    x(t + 1) = δ(t + 1) * x(t)

can be represented in this abstract notation by

    x̃₁ = T₋₁ x̃,

where x̃ is the function given by

    x̃(t) = x(t) for all t,

with the system yielding the shifted output

    x̃₁(t) = x(t + 1).

So T₋₁ is an operator that advances the input vector by 1.

Suppose we represent a system by an operator H. This system is time-invariant if it commutes with the shift operator, i.e.,

    T_r H = H T_r for all r.

If our system equation is given by

    ỹ = H x̃,

then it is time-invariant if we can apply the system operator H on x̃ followed by the shift operator T_r, or we can apply the shift operator T_r followed by the system operator H, with the two computations yielding equivalent results.

Applying the system operator first gives

    T_r H x̃ = T_r ỹ = ỹ_r.

Applying the shift operator first gives

    H T_r x̃ = H x̃_r.

If the system is time-invariant, then

    H x̃_r = ỹ_r.


Dirac delta function


The Dirac delta function, or δ function, is (informally) a generalized function depending on a real parameter such that it is zero for all values of the parameter except when the parameter is zero, and its integral over the parameter from -∞ to ∞ is equal to one.[1] [2] It was introduced by theoretical physicist Paul Dirac. In the context of signal processing it is often referred to as the unit impulse function. It is a continuous analog of the Kronecker delta function, which is usually defined on a finite domain and takes values 0 and 1.
From a purely mathematical viewpoint,
the Dirac delta is not strictly a
function, because any extended-real
function that is equal to zero
everywhere but a single point must
have total integral zero.[3] While for
many purposes the Dirac delta can be
manipulated as a function, formally it
can be defined as a distribution that is
also a measure. In many applications,
the Dirac delta is regarded as a kind of
limit (a weak limit) of a sequence of
functions having a tall spike at the
origin. The approximating functions of
the sequence are thus "approximate" or
"nascent" delta functions.

Schematic representation of the Dirac delta function by a line surmounted by an arrow.


The height of the arrow is usually used to specify the value of any multiplicative constant,
which will give the area under the function. The other convention is to write the area next
to the arrowhead.

Overview
The graph of the delta function is
usually thought of as following the
whole x-axis and the positive y-axis.
(This informal picture can sometimes
be misleading, for example in the
limiting case of the sinc function.)

(Figure: the Dirac delta function as the limit, in the sense of distributions, of a sequence of zero-centered Gaussians δ_a(x) as a → 0.)


Despite its name, the delta function is not truly a function, at least not a usual one with domain in the reals. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0, yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. Rigorous treatment of the Dirac delta requires measure theory, the theory of distributions, or a hyperreal framework.
The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a
point charge, point mass or electron point. For example, to calculate the dynamics of a baseball being hit by a bat,
one can approximate the force of the bat hitting the baseball by a delta function. In doing so, one not only simplifies
the equations, but one also is able to calculate the motion of the baseball by only considering the total impulse of the
bat against the ball rather than requiring knowledge of the details of how the bat transferred energy to the ball.
In applied mathematics, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of
functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions
centered at the origin with variance tending to zero.
An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of Cauchy
distribution) explicitly appears in an 1827 text of Augustin Louis Cauchy.[4] Simon Denis Poisson considered the
issue in connection with the study of wave propagation as did Gustav Kirchhoff somewhat later. Kirchhoff and
Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord
Kelvin's notion of a point heat source. At the end of the 19th century, Oliver Heaviside used formal Fourier series to
manipulate the unit impulse.[5] The Dirac delta function as such was introduced as a "convenient notation" by Paul
Dirac in his influential 1927 book Principles of Quantum Mechanics.[6] He called it the "delta function" since he
used it as a continuous analogue of the discrete Kronecker delta.

Definitions
The Dirac delta can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,

    δ(x) = +∞ for x = 0,  δ(x) = 0 for x ≠ 0,

and which is also constrained to satisfy the identity

    ∫_{-∞}^∞ δ(x) dx = 1.[7]

This is merely a heuristic characterization. The Dirac delta is not a true function, as no function has the above properties.[6] Moreover there exist descriptions of the delta function which differ from the above conceptualization. For example, sinc(x/a)/a becomes the delta function in the limit as a → 0,[8] yet this function does not approach zero for values of x outside the origin; rather, it oscillates between 1/x and -1/x more and more rapidly as a approaches zero.
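The "nascent delta" picture can be made concrete numerically: integrating a smooth test function against increasingly narrow Gaussians converges to the function's value at 0. A minimal sketch, with cos(x) as an arbitrary illustrative test function:

```python
import math

# A "nascent" delta: Gaussians of shrinking width eps, integrated against a
# smooth test function, converge to the value of that function at 0.  The
# test function cos(x) and the grid are arbitrary illustrative choices.
def smeared(phi, eps, half_width=10.0, n=200001):
    # trapezoidal approximation of the integral of phi(x) * g_eps(x)
    h = 2 * half_width / (n - 1)
    total = 0.0
    for k in range(n):
        x = -half_width + k * h
        g = math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * phi(x) * g * h
    return total

results = {eps: smeared(math.cos, eps) for eps in (1.0, 0.1, 0.01)}
print(results)   # values approach cos(0) = 1 as eps shrinks
```

For a Gaussian of standard deviation eps the integral equals e^(-eps²/2) exactly, so the convergence to 1 is visible already at eps = 0.1.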
The Dirac delta function can be rigorously defined either as a distribution or as a measure.

As a measure
One way to rigorously define the delta function is as a measure, which accepts as an argument a subset A of the real line R, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise.[9] If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies

    ∫_{-∞}^∞ f(t) δ(dt) = f(0)


for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure; in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative: no true function for which the property

    ∫_{-∞}^∞ f(t) δ(t) dt = f(0)

holds.[10] As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.

As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function[11]

    H(x) = 1 if x ≥ 0,  H(x) = 0 if x < 0.

This means that H(x) is the integral of the cumulative indicator function 1_(-∞, x] with respect to the measure δ; to wit,

    H(x) = ∫_R 1_(-∞, x](t) δ(dt) = δ((-∞, x]).

Thus in particular the integral of the delta function against a continuous function can be properly understood as a Stieltjes integral:[12]

    ∫_{-∞}^∞ f(t) δ(t) dt = ∫_{-∞}^∞ f(t) dH(t).

All higher moments of δ are zero. In particular, the characteristic function and moment generating function are both equal to one.

As a distribution
In the theory of distributions a generalized function is thought of not as a function itself, but only in relation to how it affects other functions when it is "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function against a sufficiently "good" test function φ is. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.

A typical space of test functions consists of all smooth functions on R with compact support. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by[13]

    δ[φ] = φ(0)     (1)

for every test function φ.

For δ to be properly a distribution, it must be "continuous" in a suitable sense. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N there is an integer M_N and a constant C_N such that for every test function φ with support contained in [-N, N], one has the inequality[14]

    |S[φ]| ≤ C_N sup{ |φ^(k)(x)| : x ∈ [-N, N], k ≤ M_N }.

With the δ distribution, one has such an inequality (with C_N = 1) with M_N = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}).

The delta distribution can also be defined in a number of equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that, for every test function φ, one has

    δ[φ] = -∫_{-∞}^∞ φ'(x) H(x) dx.

Intuitively, if integration by parts were permitted, then the latter integral should simplify to

    ∫_{-∞}^∞ φ(x) H'(x) dx,


and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case one does have

    δ[φ] = ∫_{-∞}^∞ φ(x) dH(x).

In the context of measure theory, the Dirac measure gives rise to a distribution by integration. Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.

Generalizations
The delta function can be defined in n-dimensional Euclidean space R^n as the measure such that

    ∫_{R^n} f(x) δ(dx) = f(0)

for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x₁, x₂, ..., x_n), one has[15]

    δ(x) = δ(x₁) δ(x₂) ··· δ(x_n).     (2)
The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case.[16]
However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of
distributions can only be defined under quite narrow circumstances.[17]
The notion of a Dirac measure makes sense on any set whatsoever.[9] Thus if X is a set, x₀ ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by

    δ_{x₀}(A) = 1 if x₀ ∈ A,  δ_{x₀}(A) = 0 otherwise

is the delta measure or unit mass concentrated at x₀.


Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x₀ ∈ M is defined as the following distribution:

    δ_{x₀}[φ] = φ(x₀)     (3)

for all compactly supported smooth real-valued functions φ on M.[18] A common special case of this construction is when M is an open set in the Euclidean space R^n.
when M is an open set in the Euclidean space Rn.
On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible; however, a variety of techniques from abstract analysis are available. For instance, the mapping x ↦ δ_x is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.[19]


Properties
Scaling and symmetry
The delta function satisfies the following scaling property for a non-zero scalar α:[20]

    ∫_{-∞}^∞ δ(αx) dx = ∫_{-∞}^∞ δ(u) du/|α| = 1/|α|,

and so

    δ(αx) = δ(x)/|α|.     (4)

In particular, the delta function is an even distribution, in the sense that

    δ(-x) = δ(x),

which is homogeneous of degree -1.

Algebraic properties
The distributional product of δ with x is equal to zero:

    x δ(x) = 0.

Conversely, if x f(x) = x g(x), where f and g are distributions, then

    f(x) = g(x) + c δ(x)

for some constant c.[21]

Translation
The integral of the time-delayed Dirac delta is given by:

    ∫ f(t) δ(t − T) dt = f(T)

This is sometimes referred to as the sifting property[22] or the sampling property. The delta function is said to "sift
out" the value at t = T.
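The sifting property lends itself to a quick numerical check by standing in for δ(t − T) with a narrow, normalized Gaussian pulse. The grid, the pulse width eps, and the test signal below are arbitrary illustrative choices, not part of the theory:

```python
import numpy as np

# Grid and a smooth test signal f(t)
t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
f = np.exp(-t**2) * np.cos(3.0 * t)

# Narrow normalized Gaussian standing in for delta(t - T), with T = 2
T, eps = 2.0, 0.01
delta_T = np.exp(-(t - T)**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

# Sifting property: integral of f(t) * delta(t - T) dt ~= f(T)
sifted = np.sum(f * delta_T) * dt
f_at_T = np.exp(-T**2) * np.cos(3.0 * T)
assert abs(sifted - f_at_T) < 1e-3
```

As eps shrinks (with the grid refined accordingly), the quadrature approaches f(T) exactly.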
It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the
same amount:

    (f(t) * δ(t − T)) = ∫ f(τ) δ(t − T − τ) dτ = f(t − T)

(using (4): δ(−t) = δ(t)). This holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform
below). As a special case, for instance, we have the identity (understood in the distribution sense)

    ∫ δ(ξ − x) δ(x − η) dx = δ(ξ − η)


Composition with a function


More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar
change of variables formula holds, that

    ∫ δ(g(x)) f(g(x)) |g′(x)| dx = ∫ δ(u) f(u) du

provided that g is a continuously differentiable function with g′ nowhere zero.[23] That is, there is a unique way to
assign meaning to the distribution δ∘g so that this identity holds for all compactly supported test functions f. This
distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x0, then

    δ(g(x)) = δ(x − x0)/|g′(x0)|

It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by

    δ(g(x)) = Σ_i δ(x − x_i)/|g′(x_i)|

where the sum extends over all roots of g(x), which are assumed to be simple.[23] Thus, for example,

    δ(x² − α²) = (1/(2|α|)) [ δ(x + α) + δ(x − α) ]

In the integral form the generalized scaling property may be written as

    ∫ f(x) δ(g(x)) dx = Σ_i f(x_i)/|g′(x_i)|
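The root-sum formula for δ(g(x)) can be verified numerically by replacing δ with a narrow Gaussian. The choice g(x) = x² − α², the width eps, and the grid below are illustrative assumptions:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]
alpha, eps = 2.0, 1e-2

def delta_eps(u):
    # Narrow normalized Gaussian standing in for the delta function
    return np.exp(-u**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

# Check: integral f(x) delta(x^2 - alpha^2) dx ~= (f(alpha) + f(-alpha)) / (2|alpha|)
f = np.cos(x)
lhs = np.sum(f * delta_eps(x**2 - alpha**2)) * dx
rhs = (np.cos(alpha) + np.cos(-alpha)) / (2.0 * abs(alpha))
assert abs(lhs - rhs) < 1e-3
```

The two roots x = ±α each contribute f(x_i)/|g′(x_i)| with g′(±α) = ±2α, exactly as the sum formula predicts.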
Properties in n dimensions
The delta distribution in an n-dimensional space satisfies the following scaling property instead:

    δ(αx) = δ(x)/|α|^n

so that δ is a homogeneous distribution of degree −n. Under any reflection or rotation ρ, the delta function is
invariant:

    δ(ρx) = δ(x)

As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function[24] g: Rn → Rn
uniquely so that the identity

    ∫ δ(g(x)) f(g(x)) |det g′(x)| dx = ∫ δ(u) f(u) du

holds for all compactly supported functions f.


Using the coarea formula from geometric measure theory, one can also define the composition of the delta function
with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In
the special case of a continuously differentiable function g: Rn → R such that the gradient of g is nowhere zero, the
following identity holds[25]

    ∫ f(x) δ(g(x)) dx = ∫_{g⁻¹(0)} f(x)/|∇g(x)| dσ(x)

where the integral on the right is over g⁻¹(0), the (n − 1)-dimensional surface defined by g(x) = 0, with respect to the
Minkowski content measure. This is known as a simple layer integral.


Fourier transform
The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one
finds[26]

    δ̂(ξ) = ∫ e^(−2πixξ) δ(x) dx = 1

Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier
transform under the duality pairing ⟨·,·⟩ of tempered distributions with Schwartz functions. Thus δ̂ is defined as
the unique tempered distribution satisfying

    ⟨δ̂, φ⟩ = ⟨δ, φ̂⟩

for all Schwartz functions φ. And indeed it follows from this that δ̂ = 1.
As a result of this identity, the convolution of the delta function with any other tempered distribution S is simply S:

    S * δ = S
That is to say that δ is an identity element for the convolution on tempered distributions, and in fact the space of
compactly supported distributions under convolution is an associative algebra with identity the delta function. This
property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant
system, and applying the linear time-invariant system measures its impulse response. The impulse response can be
computed to any desired degree of accuracy by choosing a suitable approximation for δ, and once it is known, it
characterizes the system completely. See LTI system theory: Impulse response and convolution.
The inverse Fourier transform of the tempered distribution f(ξ) = 1 is the delta function. Formally, this is expressed

    ∫ e^(2πixξ) dξ = δ(x)

and more rigorously, it follows since

    ⟨1, φ̂⟩ = φ(0) = ⟨δ, φ⟩

for all Schwartz functions φ.


In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel
on R. Formally, one has

    ∫ e^(i2πξ₁t) [e^(i2πξ₂t)]* dt = ∫ e^(−i2π(ξ₂−ξ₁)t) dt = δ(ξ₂ − ξ₁)

This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution

    f(t) = e^(i2πξ₁t)

is

    f̂(ξ) = δ(ξ − ξ₁)

which again follows by imposing self-adjointness of the Fourier transform.


By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be[27]

    ∫₀^∞ δ(t − a) e^(−st) dt = e^(−sa)
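In the discrete-time setting familiar from signal processing, the corresponding facts are exact: the DFT of a unit sample is identically 1, and a delayed sample transforms to a pure phase ramp. A minimal sketch (the length N and delay k are arbitrary choices):

```python
import numpy as np

# Discrete-time analogue: the DFT of a unit impulse is identically 1
N = 64
impulse = np.zeros(N)
impulse[0] = 1.0
spectrum = np.fft.fft(impulse)
assert np.allclose(spectrum, np.ones(N))

# A delayed impulse delta[n - k] transforms to a pure phase ramp
k = 5
delayed = np.zeros(N)
delayed[k] = 1.0
expected = np.exp(-2j * np.pi * k * np.arange(N) / N)
assert np.allclose(np.fft.fft(delayed), expected)
```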

Distributional derivatives
The distributional derivative of the Dirac delta distribution is the distribution δ′ defined on compactly supported
smooth test functions φ by[28]

    δ′[φ] = −δ[φ′] = −φ′(0)

The first equality here is a kind of integration by parts, for if δ were a true function then

    ∫ δ′(x) φ(x) dx = −∫ δ(x) φ′(x) dx

The kth derivative of δ is defined similarly as the distribution given on test functions by

    δ⁽ᵏ⁾[φ] = (−1)^k φ⁽ᵏ⁾(0)

In particular, δ is an infinitely differentiable distribution.


The first derivative of the delta function is the distributional limit of the difference quotients:[29]

    δ′(x) = lim_{h→0} ( δ(x + h) − δ(x) ) / h

More properly, one has

    δ′ = lim_{h→0} (1/h) (τ_h δ − δ)

where τ_h is the translation operator, defined on functions by τ_h φ(x) = φ(x + h), and on a distribution S by

    (τ_h S)[φ] = S[τ_{−h} φ]

In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole
situated at the origin. Accordingly, it is referred to as a dipole or the doublet function.[30]
The derivative of the delta function satisfies a number of basic properties, including:

    δ′(−x) = −δ′(x)
    x δ′(x) = −δ(x)

Furthermore, the convolution of δ′ with a compactly supported smooth function f is

    δ′ * f = δ * f′ = f′

which follows from the properties of the distributional derivative of a convolution.
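The identity δ′ * f = f′ can be checked numerically by convolving with the derivative of a narrow Gaussian, a nascent δ′. The width eps, grid, and test function are illustrative assumptions:

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 8001)
dt = t[1] - t[0]
eps = 0.05

# Derivative of a narrow normalized Gaussian: a nascent delta-prime
g = np.exp(-t**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))
g_prime = -t / eps**2 * g

f = np.sin(t) * np.exp(-t**2 / 8.0)
conv = np.convolve(f, g_prime, mode='same') * dt

# Compare with the analytic derivative f'(t), away from the grid edges
f_prime = (np.cos(t) - t / 4.0 * np.sin(t)) * np.exp(-t**2 / 8.0)
err = np.max(np.abs(conv - f_prime)[400:-400])
assert err < 1e-2
```

The residual error is O(eps²), coming from the Gaussian smoothing of f′.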

Higher dimensions
More generally, on an open set U in the n-dimensional Euclidean space Rn, the Dirac delta distribution centered at a
point a ∈ U is defined by[31]

    δ_a[φ] = φ(a)

for all φ ∈ S(U), the space of all smooth compactly supported functions on U. If α = (α₁, ..., αₙ) is any multi-index
and ∂^α denotes the associated mixed partial derivative operator, then the αth derivative ∂^α δ_a of δ_a is given by[31]

    ∂^α δ_a[φ] = (−1)^|α| δ_a[∂^α φ] = (−1)^|α| ∂^α φ(a)

That is, the αth derivative of δ_a is the distribution whose value on any test function φ is the αth derivative of φ at a
(with the appropriate positive or negative sign).
The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More
generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface,
and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as
multipoles.


Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions
with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an
integer m and coefficients c_α such that[32]

    S = Σ_{|α| ≤ m} c_α ∂^α δ_a
Representations of the delta function


The delta function can be viewed as the limit of a sequence of functions

    δ(x) = lim_{ε→0} η_ε(x)

where η_ε(x) is sometimes called a nascent delta function. This limit is meant in a weak sense: either that

(5)    lim_{ε→0} ∫ η_ε(x) f(x) dx = f(0)

for all continuous functions f having compact support, or that this limit holds for all smooth functions f with compact
support. The difference between these two slightly different modes of weak convergence is often subtle: the former
is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions.

Approximations to the identity


Typically a nascent delta function can be constructed in the following manner. Let η be an absolutely integrable
function on R of total integral 1, and define

    η_ε(x) = ε⁻¹ η(x/ε)

In n dimensions, one uses instead the scaling

    η_ε(x) = ε⁻ⁿ η(x/ε)

Then a simple change of variables shows that η_ε also has integral 1.[33] One shows easily that (5) holds for all
continuous compactly supported functions f, and so η_ε converges weakly to δ in the sense of measures. If the initial
η = η₁ is itself smooth and compactly supported then the sequence is called a mollifier.
The η_ε constructed in this way are known as an approximation to the identity.[34] This terminology is because the
space L1(R) of absolutely integrable functions is closed under the operation of convolution of functions: f * g ∈ L1(R)
whenever f and g are in L1(R). However, there is no identity in L1(R) for the convolution product: no element h such
that f * h = f for all f. Nevertheless, the sequence does approximate such an identity in the sense that

    f * η_ε → f    as ε → 0

This limit holds in the sense of mean convergence (convergence in L1). Further conditions on the η_ε, for instance that
it be a mollifier associated to a compactly supported function,[35] are needed to ensure pointwise convergence almost
everywhere.
The standard mollifier is given by η_ε(x) = ε⁻¹ η(x/ε), where η is a suitably normalized bump function. For instance,

    η(x) = c · exp( 1/(|x|² − 1) )  for |x| < 1,  and  η(x) = 0 otherwise,

where the constant c is chosen so that η has total integral 1.
In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can
be obtained by taking η₁ to be a hat function. With this choice of η₁, one has

    η_ε(x) = ε⁻¹ max(1 − |x/ε|, 0)

which are all continuous and compactly supported, although not smooth and so not a mollifier.
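A minimal numerical sketch of the hat-function approximation to the identity (the grid, test function, and eps values are arbitrary choices): as eps shrinks, ∫ η_ε(x) f(x) dx approaches f(0).

```python
import numpy as np

def hat(u):
    # Triangular hat function: support [-1, 1], total integral 1
    return np.maximum(1.0 - np.abs(u), 0.0)

x = np.linspace(-5.0, 5.0, 100001)
dx = x[1] - x[0]
f = np.exp(-x**2) * (1.0 + x)      # smooth test function with f(0) = 1

errors = []
for eps in (0.5, 0.1, 0.02):
    eta_eps = hat(x / eps) / eps    # rescaled copy, still integral 1
    val = np.sum(eta_eps * f) * dx  # -> f(0) as eps -> 0
    errors.append(abs(val - 1.0))

# The error shrinks with eps (roughly like eps**2 for this smooth f)
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-3
```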

Probabilistic considerations
In the context of probability theory, it is natural to impose the additional condition that the initial η₁ in an
approximation to the identity should be positive, as such a function then represents a probability distribution.
Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or
undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and
minimum of the input function. Taking η₁ to be any probability distribution at all, and letting η_ε(x) = η₁(x/ε)/ε as
above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if,
in addition, η has mean 0 and has small higher moments. For instance, if η₁ is the uniform distribution on [−1/2, 1/2],
also known as the rectangular function, then:[36]

    η_ε(x) = (1/ε) rect(x/ε)

Another example is with the Wigner semicircle distribution

    η_ε(x) = (2/(πε²)) √(ε² − x²)  for |x| ≤ ε,  and 0 otherwise.

This is continuous and compactly supported, but not a mollifier because it is not smooth.

Semigroups
Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the
convolution of η_ε with η_δ must satisfy

    η_ε * η_δ = η_{ε+δ}

for all ε, δ > 0. Convolution semigroups in L1 that form a nascent delta function are always an approximation to the
identity in the above sense; however, the semigroup condition is quite a strong restriction.
In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to
physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics,
semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on
functions of x, then a convolution semigroup arises by solving the initial value problem

    ∂η/∂t = A η,    lim_{t→0⁺} η(t, x) = δ(x)

in which the limit is as usual understood in the weak sense. Setting η_ε(x) = η(ε, x) gives the associated nascent delta
function.
Some examples of physically important convolution semigroups arising from such a fundamental solution include
the following.
The heat kernel
The heat kernel, defined by

    η_ε(x) = (1/√(2πε)) e^(−x²/(2ε))

represents the temperature in an infinite wire at time t > 0, if a unit of heat energy is stored at the origin of the wire at
time t = 0. This semigroup evolves according to the one-dimensional heat equation:

    ∂η/∂t = (1/2) ∂²η/∂x²


In probability theory, η_ε(x) is a normal distribution of variance ε and mean 0. It represents the probability density at
time t = ε of the position of a particle starting at the origin following a standard Brownian motion. In this context, the
semigroup condition is then an expression of the Markov property of Brownian motion.
In higher-dimensional Euclidean space Rn, the heat kernel is

    η_ε(x) = (2πε)^(−n/2) e^(−|x|²/(2ε))

and has the same physical interpretation, mutatis mutandis. It also represents a nascent delta function in the sense
that η_ε → δ in the distribution sense as ε → 0.
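The semigroup property η_ε * η_δ = η_{ε+δ} of the heat kernel amounts to the fact that variances add under Gaussian convolution, which is easy to confirm numerically (the grid and the variances a, b below are arbitrary choices):

```python
import numpy as np

def heat_kernel(x, t):
    # 1-D heat kernel: normal density with mean 0 and variance t
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

x = np.linspace(-20.0, 20.0, 8001)
dx = x[1] - x[0]

a, b = 0.3, 0.7
conv = np.convolve(heat_kernel(x, a), heat_kernel(x, b), mode='same') * dx

# Semigroup property: eta_a * eta_b = eta_{a+b}
assert np.max(np.abs(conv - heat_kernel(x, a + b))) < 1e-6
```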
The Poisson kernel
The Poisson kernel

    η_ε(x) = (1/π) · ε/(ε² + x²)

is the fundamental solution of the Laplace equation in the upper half-plane.[37] It represents the electrostatic potential
in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is
also closely related to the Cauchy distribution. This semigroup evolves according to the equation

    ∂η/∂ε = −(−∂²/∂x²)^(1/2) η(ε, x)

where the operator is rigorously defined as the Fourier multiplier

    F[(−∂²/∂x²)^(1/2) f](ξ) = |2πξ| F f(ξ)
Oscillatory integrals
In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may
have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the
associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the
Euler–Tricomi equation of transonic gas dynamics,[38] is the rescaled Airy function

    ε^(−1/3) Ai(x ε^(−1/3))
Although using the Fourier transform, it is easy to see that this generates a semigroup in some sense, it is not
absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions
constructed as oscillatory integrals only converge in the sense of distributions (an example is the Dirichlet kernel
below), rather than in the sense of measures.
Another example is the Cauchy problem for the wave equation in R^(1+1):[39]

The solution u represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at
the origin.
Other approximations to the identity of this kind include the sinc function

    η_ε(x) = sin(x/ε)/(πx)
and the Bessel function


Plane wave decomposition


One approach to the study of a linear partial differential equation

    L u = f,

where L is a differential operator on Rn, is to seek first a fundamental solution, which is a solution of the equation

    L u = δ(x)
When L is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case
of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier
first to consider an equation of the form

    L u = h,

where h is a plane wave function, meaning that it has the form

    h = h(x · ξ)

for some vector ξ. Such an equation can be resolved (if the coefficients of L are analytic functions) by the
Cauchy–Kovalevskaya theorem or (if the coefficients of L are constant) by quadrature. So, if the delta function can
be decomposed into plane waves, then one can in principle solve linear partial differential equations.
Such a decomposition of the delta function into plane waves was part of a general technique first introduced
essentially by Johann Radon, and then developed in this form by Fritz John (1955).[40] Choose k so that n+k is an
even integer, and for a real number s, put

Then δ is obtained by applying a power of the Laplacian to the integral, with respect to the unit sphere measure dω, of
g(x · ξ) for ξ in the unit sphere S^(n−1):

The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test
function φ,

The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation).
This is essentially a form of the inversion formula for the Radon transform, because it recovers the value of φ(x)
from its integrals over hyperplanes. For instance, if n is odd and k = 1, then the integral on the right-hand side is

where Rφ(ξ, p) is the Radon transform of φ:

    Rφ(ξ, p) = ∫_{x·ξ = p} φ(x) dσ(x)
An alternative equivalent expression of the plane wave decomposition, from Gel'fand & Shilov (1966–1968, I, §3.10),
is

for n even, and


for n odd.

Fourier kernels
In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series
associated with a periodic function converges to the function. The Nth partial sum of the Fourier series of a function f
of period 2π is defined by convolution (on the interval [−π, π]) with the Dirichlet kernel:

    D_N(x) = Σ_{n=−N}^{N} e^(inx) = sin((N + 1/2)x)/sin(x/2)

Thus,

    s_N(f)(x) = (D_N * f)(x) = (1/2π) ∫_{−π}^{π} D_N(x − y) f(y) dy = Σ_{n=−N}^{N} a_n e^(inx)

where

    a_n = (1/2π) ∫_{−π}^{π} f(y) e^(−iny) dy

A fundamental result of elementary Fourier series states that the Dirichlet kernel tends to a multiple of the delta
function as N → ∞. This is interpreted in the distribution sense, that

    (1/2π) ∫ D_N(x) f(x) dx → f(0)    as N → ∞

for every compactly supported smooth function f. Thus, formally one has

    δ(x) = (1/2π) Σ_{n=−∞}^{∞} e^(inx)

on the interval [−π, π].


In spite of this, the result does not hold for all compactly supported continuous functions: that is, D_N does not
converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction
of a variety of summability methods in order to produce convergence. The method of Cesàro summation leads to the
Fejér kernel[41]

    F_N(x) = (1/N) Σ_{k=0}^{N−1} D_k(x) = (1/N) ( sin(Nx/2)/sin(x/2) )²

The Fejér kernels tend to the delta function in the stronger sense that[42]

    (1/2π) ∫ F_N(x) f(x) dx → f(0)    as N → ∞

for every compactly supported continuous function f. The implication is that the Fourier series of any continuous
function is Cesàro summable to the value of the function at every point.
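A numerical sketch of this convergence (the test function and the values of N are arbitrary choices): with the normalization used here, (1/2π) ∫ F_N(x) φ(x) dx is the Nth Cesàro mean of the Fourier series of φ at 0, and it tends to φ(0).

```python
import numpy as np

def fejer(x, N):
    # Fejer kernel F_N(x) = (1/N) * (sin(N*x/2) / sin(x/2))**2, with F_N(0) = N
    s2 = np.sin(x / 2.0) ** 2
    safe = np.where(s2 < 1e-30, 1.0, s2)
    return np.where(s2 < 1e-30, float(N), np.sin(N * x / 2.0) ** 2 / (N * safe))

x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
phi = np.cos(x) + 0.5 * np.sin(3.0 * x) + 2.0   # continuous 2*pi-periodic test function

for N in (10, 100, 1000):
    cesaro_mean = np.sum(fejer(x, N) * phi) * dx / (2.0 * np.pi)
    # Exact Cesaro mean at 0 is 3 - 1/N for this phi; it tends to phi(0) = 3
    assert abs(cesaro_mean - 3.0) < 2.0 / N
```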


Hilbert space theory


The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L2 of square-
integrable functions. Indeed, smooth compactly supported functions are dense in L2, and the action of the delta
distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of L2 and to
give a stronger topology on which the delta function defines a bounded linear functional.
Sobolev spaces
The Sobolev embedding theorem for Sobolev spaces on the real line R implies that any square-integrable function f
such that

    ‖f‖²_{H1} = ∫ (1 + ξ²) |f̂(ξ)|² dξ < ∞

is automatically continuous, and satisfies in particular

    sup_x |f(x)| ≤ C ‖f‖_{H1}

Thus δ is a bounded linear functional on the Sobolev space H1. Equivalently, δ is an element of the continuous dual
space H−1 of H1. More generally, in n dimensions, one has δ ∈ H−s(Rn) provided s > n/2.
Spaces of holomorphic functions
In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if D is a domain in the
complex plane with smooth boundary, then

    f(z) = (1/2πi) ∮_{∂D} f(ζ) dζ/(ζ − z),    z ∈ D

for all holomorphic functions f in D that are continuous on the closure of D. As a result, the delta function δ_z is
represented on this class of holomorphic functions by the Cauchy integral:

    δ_z[f] = f(z) = (1/2πi) ∮_{∂D} f(ζ) dζ/(ζ − z)

More generally, let H2(∂D) be the Hardy space consisting of the closure in L2(∂D) of all holomorphic functions in D
continuous up to the boundary of D. Then functions in H2(∂D) uniquely extend to holomorphic functions in D, and
the Cauchy integral formula continues to hold. In particular for z ∈ D, the delta function δ_z is a continuous linear
functional on H2(∂D). This is a special case of the situation in several complex variables in which, for smooth
domains D, the Szegő kernel plays the role of the Cauchy integral.
Resolutions of the identity
Given a complete orthonormal basis set of functions {φₙ} in a separable Hilbert space, for example, the
normalized eigenvectors of a compact self-adjoint operator, any vector f can be expressed as

    f = Σₙ αₙ φₙ

The coefficients {αₙ} are found as

    αₙ = ⟨φₙ, f⟩

which may be represented by the notation

    αₙ = φₙ† f,

a form of the bra–ket notation of Dirac.[43] Adopting this notation, the expansion of f takes the dyadic form:[44]

    f = Σₙ φₙ (φₙ† f)

Letting I denote the identity operator on the Hilbert space, the expression

    I = Σₙ φₙ φₙ†

is called a resolution of the identity. When the Hilbert space is the space L2(D) of square-integrable functions on a
domain D, the quantity

    φₙ φₙ†

is an integral operator, and the expression for f can be rewritten as

    f(x) = Σₙ ∫_D φₙ(x) φₙ*(ξ) f(ξ) dξ

The right-hand side converges to f in the L2 sense. It need not hold in a pointwise sense, even when f is a continuous
function. Nevertheless, it is common to abuse notation and write

    f(x) = ∫ δ(x − ξ) f(ξ) dξ

resulting in the representation of the delta function:[45]

    δ(x − ξ) = Σₙ φₙ(x) φₙ*(ξ)

With a suitable rigged Hilbert space (Φ, L2(D), Φ*), where Φ ⊂ L2(D) contains all compactly supported smooth
functions, this summation may converge in Φ*, depending on the properties of the basis φₙ. In most cases of
practical interest, the orthonormal basis comes from an integral or differential operator, in which case the series
converges in the distribution sense.[46]
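In a finite-dimensional Hilbert space the resolution of the identity is an exact matrix statement, which makes for a compact sanity check. The orthonormal basis below comes from a QR factorization of a random matrix (an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# A complete orthonormal basis {phi_n} of R^N from a QR factorization
N = 16
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
basis = [Q[:, n] for n in range(N)]

f = rng.standard_normal(N)

# Coefficients alpha_n = <phi_n, f>
alpha = np.array([phi @ f for phi in basis])

# Resolution of the identity: sum_n phi_n phi_n^T = I
resolution = sum(np.outer(phi, phi) for phi in basis)
assert np.allclose(resolution, np.eye(N))

# Expansion f = sum_n alpha_n phi_n reconstructs f exactly
f_rec = sum(a * phi for a, phi in zip(alpha, basis))
assert np.allclose(f_rec, f)
```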

Infinitesimal delta functions


Cauchy used an infinitesimal α to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function δ_α
satisfying

    ∫ F(x) δ_α(x) dx = F(0)

in a number of articles in 1827.[47] Cauchy defined an infinitesimal in
Cours d'Analyse (1827) in terms of a sequence tending to zero. Namely, such a null sequence becomes an
infinitesimal in Cauchy's and Lazare Carnot's terminology.
Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null
sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a
suitable ultrafilter. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the
context of an infinitesimal-enriched continuum provided by the hyperreals.

Dirac comb
A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Shah
distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete-time signal
analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense,

    Ш(x) = Σ_{n=−∞}^{∞} δ(x − n)

which is a sequence of point masses at each of the integers.

[Figure: A Dirac comb is an infinite series of Dirac delta functions spaced at intervals of T.]


Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant
because if φ is any Schwartz function, then the periodization of φ is given by the convolution

    (φ * Ш)(x) = Σ_{n=−∞}^{∞} φ(x − n)

In particular,

    (φ * Ш)^ = φ̂ · Ш

is precisely the Poisson summation formula.[48]
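The Poisson summation formula Σₙ φ(n) = Σₖ φ̂(k) can be checked to near machine precision with a Gaussian, whose Fourier transform (in the convention f̂(ξ) = ∫ f(x) e^(−2πixξ) dx) is again a Gaussian. The width parameter a below is an arbitrary choice:

```python
import numpy as np

# phi(x) = exp(-pi*a*x**2)  has  phihat(xi) = a**-0.5 * exp(-pi*xi**2/a)
a = 2.0
n = np.arange(-50, 51).astype(float)

lhs = np.sum(np.exp(-np.pi * a * n**2))              # sum_n phi(n)
rhs = np.sum(np.exp(-np.pi * n**2 / a)) / np.sqrt(a)  # sum_k phihat(k)
assert abs(lhs - rhs) < 1e-12
```

Both sums converge so fast that truncating at |n| = 50 is far beyond what is needed.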

Sokhatsky–Weierstrass theorem
The Sokhatsky–Weierstrass theorem, important in quantum mechanics, relates the delta function to the distribution
p.v. 1/x, the Cauchy principal value of the function 1/x, defined by

    ⟨p.v. 1/x, φ⟩ = lim_{ε→0⁺} ∫_{|x|>ε} φ(x)/x dx

Sokhatsky's formula states that[49]

    lim_{ε→0⁺} 1/(x ± iε) = p.v. 1/x ∓ iπ δ(x)

Here the limit is understood in the distribution sense, that for all compactly supported smooth functions f,

    lim_{ε→0⁺} ∫ f(x)/(x ± iε) dx = ∓iπ f(0) + p.v. ∫ f(x)/x dx
Relationship to the Kronecker delta


The Kronecker delta δ_ij is the quantity defined by

    δ_ij = 1 if i = j, and δ_ij = 0 otherwise

for all integers i, j. This function then satisfies the following analog of the sifting property: if (a_i) is any doubly
infinite sequence, then

    Σ_{i=−∞}^{∞} a_i δ_ik = a_k

Similarly, for any real or complex valued continuous function f on R, the Dirac delta satisfies the sifting property

    ∫ f(x) δ(x − x₀) dx = f(x₀)

This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.[50]
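The discrete sifting property is a one-liner to demonstrate (the sequence and index are arbitrary illustrative choices):

```python
import numpy as np

# Sifting with the Kronecker delta: sum_i a_i * delta_{ik} = a_k
a = np.array([3.0, -1.0, 4.0, 1.0, 5.0])
k = 2
kron = (np.arange(len(a)) == k).astype(float)   # row k of the identity matrix
assert np.sum(a * kron) == a[k]
```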


Applications to probability theory


In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a
partially discrete, partially continuous distribution, using a probability density function (which is normally used to
represent fully continuous distributions). For example, the probability density function f(x) of a discrete
distribution consisting of points x = {x₁, ..., xₙ}, with corresponding probabilities p₁, ..., pₙ, can be written as

    f(x) = Σ_{i=1}^{n} p_i δ(x − x_i)

As another example, consider a distribution which 6/10 of the time returns a standard normal distribution, and 4/10
of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density
function of this distribution can be written as

    f(x) = 0.6 · (1/√(2π)) e^(−x²/2) + 0.4 δ(x − 3.5)

The delta function is also used in a completely different way to represent the local time of a diffusion process (like
Brownian motion). The local time of a stochastic process B(t) is given by

    ℓ(x, t) = ∫₀ᵗ δ(x − B(s)) ds

and represents the amount of time that the process spends at the point x in the range of the process. More precisely,
in one dimension this integral can be written

    ℓ(x, t) = lim_{ε→0⁺} (1/2ε) ∫₀ᵗ 1_{[x−ε, x+ε]}(B(s)) ds

where 1_{[x−ε, x+ε]} is the indicator function of the interval [x−ε, x+ε].
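The mixed density above can be sampled directly, and the Monte Carlo moments match the ones implied by the formula. The sample size and seed below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

# Mixture: with probability 0.4 return exactly 3.5, else draw from N(0, 1)
n = 200_000
is_atom = rng.random(n) < 0.4
samples = np.where(is_atom, 3.5, rng.standard_normal(n))

# Mean implied by f(x) = 0.6*N(0,1) + 0.4*delta(x - 3.5) is 0.4 * 3.5 = 1.4
assert abs(samples.mean() - 1.4) < 0.05
# The atom at 3.5 carries probability mass ~0.4
assert abs(np.mean(samples == 3.5) - 0.4) < 0.01
```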

Application to quantum mechanics


We give an example of how the delta function is expedient in quantum mechanics. The wave function of a particle
gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to
be elements of the Hilbert space L2 of square-integrable functions, and the total probability of finding a particle
within a given interval is the integral of the magnitude of the wave function squared over the interval. A set {φₙ} of
wave functions is orthonormal if they are normalized by

    ⟨φₙ | φₘ⟩ = δₙₘ

where δₙₘ here refers to the Kronecker delta. A set of orthonormal wave functions is complete in the space of
square-integrable functions if any wave function ψ can be expressed as a combination of the φₙ:

    ψ = Σₙ cₙ φₙ

with cₙ = ⟨φₙ | ψ⟩. Complete orthonormal systems of wave functions appear naturally as the eigenfunctions of
the Hamiltonian (of a bound system) in quantum mechanics that measures the energy levels, which are called the
eigenvalues. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra–ket notation, as
above, this equality implies the resolution of the identity:

    I = Σₙ |φₙ⟩⟨φₙ|

Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable may be continuous
rather than discrete. An example is the position observable, Qψ(x) = x ψ(x). The spectrum of the position (in one
dimension) is the entire real line, and is called a continuous spectrum. However, unlike the Hamiltonian, the position
operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of
available functions by allowing distributions as well: that is, to replace the Hilbert space of quantum mechanics by


an appropriate rigged Hilbert space.[51] In this context, the position operator has a complete set of
eigen-distributions, labeled by the points y of the real line, given by

    φ_y(x) = δ(x − y)

The eigenfunctions of position are denoted by φ_y = |y⟩ in Dirac notation, and are known as position eigenstates.

Similar considerations apply to the eigenstates of the momentum operator, or indeed any other self-adjoint
unbounded operator P on the Hilbert space, provided the spectrum of P is continuous and there are no degenerate
eigenvalues. In that case, there is a set Ω of real numbers (the spectrum), and a collection φ_y of distributions indexed
by the elements of Ω, such that

    P φ_y = y φ_y

That is, the φ_y are the eigenvectors of P. If the eigenvectors are normalized so that

    ⟨φ_y, φ_y′⟩ = δ(y − y′)

in the distribution sense, then for any test function ψ,

    ψ(x) = ∫_Ω c(y) φ_y(x) dy,    where    c(y) = ⟨ψ, φ_y⟩

That is, as in the discrete case, there is a resolution of the identity

    I = ∫_Ω |φ_y⟩⟨φ_y| dy
where the operator-valued integral is again understood in the weak sense. If the spectrum of P has both continuous
and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral
over the continuous spectrum.
The delta function also has many more specialized applications in quantum mechanics, such as the delta potential
models for a single and double potential well.

Application to structural mechanics


The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures.
The governing equation of a simple mass–spring system excited by a sudden force impulse I at time t = 0 can be
written

    m d²ξ/dt² + k ξ = I δ(t)

where m is the mass, ξ the deflection, and k the spring constant.

As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli
theory,

    EI d⁴w/dx⁴ = q(x)

where EI is the bending stiffness of the beam, w the deflection, x the spatial coordinate, and q(x) the load
distribution. If a beam is loaded by a point force F at x = x₀, the load distribution is written

    q(x) = F δ(x − x₀)

As integration of the delta function results in the Heaviside step function, it follows that the static deflection of a
slender beam subject to multiple point loads is described by a set of piecewise polynomials.
Also a point moment acting on a beam can be described by delta functions. Consider two opposing point forces F
a distance d apart. They then produce a moment M = Fd acting on the beam. Now, let the distance d approach
the limit zero, while M is kept constant. The load distribution, assuming a clockwise moment acting at x = 0, is
written

    q(x) = lim_{d→0} [ F δ(x) − F δ(x − d) ] = M δ′(x)

Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again
results in piecewise polynomial deflection.
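The impulse response of the mass–spring system can be simulated by noting that the delta forcing simply sets the initial momentum: ξ(0) = 0, ξ′(0) = I/m, after which the free response is ξ(t) = I/(mω) sin(ωt) with ω = √(k/m). A sketch with arbitrary parameter values, using a symplectic Euler integrator:

```python
import numpy as np

# m*x'' + k*x = imp*delta(t): the impulse sets x(0) = 0, x'(0) = imp/m,
# giving the analytic response x(t) = imp/(m*w) * sin(w*t), w = sqrt(k/m)
m, k, imp = 2.0, 8.0, 1.0
w = np.sqrt(k / m)

dt = 1e-4
t = np.arange(0.0, 5.0, dt)
x, v = 0.0, imp / m                 # state just after the impulse
xs = np.empty_like(t)
for i in range(len(t)):
    xs[i] = x
    v -= (k / m) * x * dt           # semi-implicit (symplectic) Euler step
    x += v * dt

analytic = imp / (m * w) * np.sin(w * t)
assert np.max(np.abs(xs - analytic)) < 1e-3
```

The symplectic Euler step preserves the oscillation energy well, so the numeric trace tracks the analytic impulse response over many periods.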

Notes
[1] Dirac 1958, §15 The δ function, p. 58
[2] Gel'fand & Shilov 1968, Volume I, §§1.1, 1.3
[3] Vladimirov 1971, §5.1
[4] Laugwitz 1989, p. 230
[5] A more complete historical account can be found in van der Pol & Bremmer 1987, §V.4.
[6] Dirac 1958, §15

[7] Gel'fand & Shilov 1968, Volume I, §1.1, p. 1
[8] Weisstein, Eric W., "Delta Function" (http://mathworld.wolfram.com/DeltaFunction.html) from MathWorld.
[9] Rudin 1966, §1.20
[10] Hewitt & Stromberg 1963, §19.61
[11] Driggers 2003, p. 2321. See also Bracewell 1986, Chapter 5 for a different interpretation. Other conventions for assigning the value of
the Heaviside function at zero exist, and some of these are not consistent with what follows.
[12] Hewitt & Stromberg 1965, §9.19
[13] Strichartz 1994, §2.2
[14] Hörmander 1983, Theorem 2.1.5
[15] Bracewell 1986, Chapter 5
[16] Hörmander 1983, §3.1
[17] Strichartz 1994, §2.3; Hörmander 1983, §8.2
[18] Dieudonné 1972, §17.3.3
[19] Federer 1969, §2.5.19
[20] Strichartz 1994, Problem 2.6.2
[21] Vladimirov 1971, Chapter 2, Example 3(d)
[22] Weisstein, Eric W., "Sifting Property" (http://mathworld.wolfram.com/SiftingProperty.html) from MathWorld.
[23] Gel'fand & Shilov 1966–1968, Vol. 1, §II.2.5
[24] Further refinement is possible, namely to submersions, although these require a more involved change of variables formula.
[25] Hörmander 1983, §6.1
[26] In some conventions for the Fourier transform.
[27] Bracewell 1986
[28] Gel'fand & Shilov 1966, p. 26
[29] Gel'fand & Shilov 1966, §2.1
[30] Weisstein, Eric W., "Doublet Function" (http://mathworld.wolfram.com/DoubletFunction.html) from MathWorld.
[31] Hörmander 1983, p. 56
[32] Hörmander 1983, p. 56; Rudin 1991, Theorem 6.25
[33] Stein & Weiss, Theorem 1.18
[34] Rudin 1991, §II.6.31
[35] More generally, one only needs η = η1 to have an integrable radially symmetric decreasing rearrangement.
[36] Aratyn & Rasinariu 2006, §5.3.1 Dirac delta function, p. 314
[37] Stein & Weiss 1971, §I.1
[38] Vallée & Soares 2004, §7.2
[39] Hörmander 1983, §7.8

[40] See also Courant & Hilbert 1962, §14.
[41] Lang 1997, p. 312
[42] In the terminology of Lang (1997), the Fejér kernel is a Dirac sequence, whereas the Dirichlet kernel is not.



[43] The development of this section in bra–ket notation is found in Levin 2002, Coordinate-space wave functions and completeness, pp. 109ff
[44] Davis & Thomson 2000, Perfect operators, p. 344
[45] Davis & Thomson 2000, Equation 8.9.11, p. 344
[46] de la Madrid, Bohm & Gadella 2002
[47] See Laugwitz (1989).
[48] Córdoba 1988; Hörmander 1983, §7.2
[49] Vladimirov 1971, §5.7
[50] Hartmann 1997, pp. 154–155
[51] Isham 1995, §6.2

References
Aratyn, Henrik; Rasinariu, Constantin (2006), A short course in mathematical methods with Maple (http://books.google.com/?id=JFmUQGd1I3IC&pg=PA314), World Scientific, ISBN 9812564616.
Bracewell, R. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw-Hill.
Córdoba, A., "La formule sommatoire de Poisson", C.R. Acad. Sci. Paris, Series I 306: 373–376.
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume II, Wiley-Interscience.
Davis, Howard Ted; Thomson, Kendall T (2000), Linear algebra and linear operators in engineering with applications in Mathematica (http://books.google.com/?id=3OqoMFHLhG0C&pg=PA344#v=onepage&q), Academic Press, ISBN 012206349X
Dieudonné, Jean (1976), Treatise on analysis. Vol. II, New York: Academic Press [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-215502-4, MR0530406.
Dieudonné, Jean (1972), Treatise on analysis. Vol. III, Boston, MA: Academic Press, MR0350769
Dirac, Paul (1958), Principles of quantum mechanics (4th ed.), Oxford at the Clarendon Press,
ISBN978-0198520115.
Driggers, Ronald G. (2003), Encyclopedia of Optical Engineering, CRC Press, ISBN978-0824709402.
Federer, Herbert (1969), Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, 153,
New York: Springer-Verlag, pp.xiv+676, ISBN978-3540606567, MR0257325.
Gel'fand, I.M.; Shilov, G.E. (1966–1968), Generalized functions, 1–5, Academic Press.
Hartman, William M. (1997), Signals, sound, and sensation (http://books.google.com/books?id=3N72rIoTHiEC), Springer, ISBN 9781563962837.
Hewitt, E; Stromberg, K (1963), Real and abstract analysis, Springer-Verlag.
Hrmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., 256,
Springer, ISBN3-540-12104-8, MR0717035.
Isham, C. J. (1995), Lectures on quantum theory: mathematical and structural foundations, Imperial College
Press, ISBN9788177641905.
John, Fritz (1955), Plane waves and spherical means applied to partial differential equations, Interscience
Publishers, New York-London, MR0075429.
Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York:
Springer-Verlag, ISBN978-0-387-94841-6, MR1476913.
Laugwitz, D. (1989), "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820", Arch. Hist. Exact Sci. 39 (3): 195–245, doi:10.1007/BF00329867.
Levin, Frank S. (2002), "Coordinate-space wave functions and completeness" (http://books.google.com/
?id=oc64f4EspFgC&pg=PA109), An introduction to quantum theory, Cambridge University Press, pp.109ff,
ISBN0521598419
Li, Y. T.; Wong, R. (2008), "Integral and series representations of the Dirac delta function", Commun. Pure Appl. Anal. 7 (2): 229–247, doi:10.3934/cpaa.2008.7.229, MR2373214.
de la Madrid, R.; Bohm, A.; Gadella, M. (2002), "Rigged Hilbert Space Treatment of Continuous Spectrum", Fortschr. Phys. 50 (2): 185–216, arXiv:quant-ph/0109154, doi:10.1002/1521-3978(200203)50:2<185::AID-PROP185>3.0.CO;2-S.

McMahon, D. (2005-11-22), "An Introduction to State Space" (http://www.mhprofessional.com/product.
php?isbn=0071455469&cat=&promocode=), Quantum Mechanics Demystified, A Self-Teaching Guide,
Demystified Series, New York: McGraw-Hill, pp.108, doi:10.1036/0071455469, ISBN0-07-145546-9, retrieved
2008-03-17.
van der Pol, Balth.; Bremmer, H. (1987), Operational calculus (3rd ed.), New York: Chelsea Publishing Co.,
ISBN978-0-8284-0327-6, MR904873.
Rudin, W. (1991), Functional Analysis (2nd ed.), McGraw-Hill, ISBN0-07-054236-8.
Soares, Manuel; Valle, Olivier (2004), Airy functions and applications to physics, London: Imperial College
Press.
Saichev, A I; Woyczyński, Wojbor Andrzej (1997), "Chapter 1: Basic definitions and operations" (http://books.google.com/?id=42I7huO-hiYC&pg=PA3), Distributions in the Physical and Engineering Sciences: Distributional and fractal calculus, integral transforms, and wavelets, Birkhäuser, ISBN 0817639241.
Schwartz, L. (1950–1951), Théorie des distributions, 1–2, Hermann.
Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University
Press, ISBN0-691-08078-X.
Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN0849382734.
Vladimirov, V. S. (1971), Equations of mathematical physics, Marcel Dekker, ISBN0-8247-1713-9.
Weisstein, Eric W., "Delta Function" (http://mathworld.wolfram.com/DeltaFunction.html) from MathWorld.
Yamashita, H. (2006), "Pointwise analysis of scalar fields: A nonstandard approach", Journal of Mathematical
Physics 47 (9): 092301, Bibcode2006JMP....47i2301Y, doi:10.1063/1.2339017
Yamashita, H. (2007), "Comment on "Pointwise analysis of scalar fields: A nonstandard approach" [J. Math.
Phys. 47, 092301 (2006)]", Journal of Mathematical Physics 48 (8): 084101, Bibcode2007JMP....48h4101Y,
doi:10.1063/1.2771422

External links
KhanAcademy.org video lesson (http://www.khanacademy.org/video/dirac-delta-function)
The Dirac Delta function (http://www.physicsforums.com/showthread.php?t=73447), a tutorial on the Dirac
delta function.
Video Lectures - Lecture 23 (http://ocw.mit.edu/courses/mathematics/
18-03-differential-equations-spring-2010/video-lectures/lecture-23-use-with-impulse-inputs), a lecture by Arthur
Mattuck.
Dirac Delta Function (http://planetmath.org/encyclopedia/DiracDeltaFunction.html) on PlanetMath
The Dirac delta measure is a hyperfunction (http://www.osaka-kyoiku.ac.jp/~ashino/pdf/chinaproceedings.pdf)
We show the existence of a unique solution and analyze a finite element approximation when the source term is a
Dirac delta measure (http://www.ing-mat.udec.cl/~rodolfo/Papers/BGR-3.pdf)
Non-Lebesgue measures on R. Lebesgue-Stieltjes measure, Dirac delta measure. (http://www.mathematik.
uni-muenchen.de/~lerdos/WS04/FA/content.html)


Heaviside step function


The Heaviside step function, or the unit step function, usually denoted by H (but sometimes u or θ), is a discontinuous function whose value is zero for negative argument and one for positive argument. It seldom matters what value is used for H(0), since H is mostly used as a distribution. Some common choices can be seen below.
The function is used in the mathematics of control theory and signal processing to represent a signal that switches on at a specified time and stays switched on indefinitely. It is also used in structural mechanics together with the Dirac delta function to describe different types of structural loads. It was named after the English polymath Oliver Heaviside.

[Figure: The Heaviside step function, using the half-maximum convention]
It is the cumulative distribution function of a random variable which is almost surely 0. (See constant random
variable.)
The Heaviside function is the integral of the Dirac delta function: H′ = δ. This is sometimes written as

H(x) = \int_{-\infty}^{x} \delta(s)\, ds,

although this expansion may not hold (or even make sense) for x = 0, depending on which formalism one uses to give meaning to integrals involving δ.

Discrete form
An alternative form of the unit step, as a function of a discrete variable n:

H[n] = \begin{cases} 0, & n < 0 \\ 1, & n \ge 0 \end{cases}

where n is an integer. Unlike the usual (not discrete) case, the definition of H[0] is significant.
The discrete-time unit impulse is the first difference of the discrete-time step:

\delta[n] = H[n] - H[n-1].

This function is the cumulative summation of the Kronecker delta:

H[n] = \sum_{k=-\infty}^{n} \delta[k]

where

\delta[n] = \delta_{n,0}

is the discrete unit impulse function.
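These discrete-time identities are easy to check numerically. A minimal Python sketch (the function names are illustrative, not from any particular library):

```python
def H(n):
    """Discrete Heaviside step: 0 for n < 0, 1 for n >= 0."""
    return 1 if n >= 0 else 0

def kronecker_delta(n):
    """Discrete unit impulse: 1 at n == 0, else 0."""
    return 1 if n == 0 else 0

# The unit impulse is the first difference of the unit step ...
assert all(H(n) - H(n - 1) == kronecker_delta(n) for n in range(-5, 6))

# ... and the step is the cumulative summation of the impulse.
assert all(sum(kronecker_delta(k) for k in range(-10, n + 1)) == H(n)
           for n in range(-5, 6))
```

Note that, unlike the continuous case, H[0] = 1 matters here: both identities above depend on it.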


Analytic approximations
For a smooth approximation to the step function, one can use the logistic function

H(x) \approx \frac{1}{2}\left(1 + \tanh kx\right) = \frac{1}{1 + e^{-2kx}},

where a larger k corresponds to a sharper transition at x = 0. If we take H(0) = 1/2, equality holds in the limit:

H(x) = \lim_{k \to \infty} \frac{1}{2}\left(1 + \tanh kx\right).

There are many other smooth, analytic approximations to the step function.[1] Among the possibilities are:

H(x) = \lim_{k \to \infty} \left(\frac{1}{2} + \frac{1}{\pi}\arctan kx\right)

H(x) = \lim_{k \to \infty} \left(\frac{1}{2} + \frac{1}{2}\operatorname{erf} kx\right)

These limits hold pointwise and in the sense of distributions. In general, however, pointwise convergence need not imply distributional convergence, and distributional convergence need not imply pointwise convergence.
In general, any cumulative distribution function (c.d.f.) of a continuous probability distribution that is peaked around zero and has a parameter that controls for variance can serve as an approximation, in the limit as the variance approaches zero. For example, all three of the above approximations are c.d.f.s of common probability distributions: the logistic, Cauchy and normal distributions, respectively.
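The limiting behaviour of the logistic approximation can be checked numerically; a small Python sketch (names are illustrative):

```python
import math

def heaviside(x):
    """Heaviside step, half-maximum convention: H(0) = 1/2."""
    return 0.5 if x == 0 else (1.0 if x > 0 else 0.0)

def logistic_step(x, k):
    """Smooth approximation: 1 / (1 + exp(-2*k*x)) = (1 + tanh(k*x)) / 2."""
    return 0.5 * (1.0 + math.tanh(k * x))

# As k grows, the approximation approaches H pointwise,
# including H(0) = 1/2 at the discontinuity.
for x in (-1.0, -0.1, 0.0, 0.1, 1.0):
    assert abs(logistic_step(x, 1000.0) - heaviside(x)) < 1e-6
```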

Integral representations
Often an integral representation of the Heaviside step function is useful:

H(x) = \lim_{\varepsilon \to 0^{+}} -\frac{1}{2\pi i}\int_{-\infty}^{\infty} \frac{1}{\tau + i\varepsilon}\, e^{-i x \tau}\, d\tau.
Zero argument
Since H is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen for H(0). Indeed when H is considered as a distribution or an element of L^∞ (see Lp space) it does not even make sense to talk of a value at zero, since such objects are only defined almost everywhere. If using some analytic approximation (as in the examples above) then often whatever happens to be the relevant limit at zero is used.
There exist, however, reasons for choosing a particular value.
H(0) = 1/2 is often used since the graph then has rotational symmetry; put another way, H − 1/2 is then an odd function. In this case the following relation with the sign function holds for all x:

H(x) = \frac{1}{2}\left(1 + \operatorname{sgn} x\right).

H(0) = 1 is used when H needs to be right-continuous. For instance cumulative distribution functions are usually taken to be right continuous, as are functions integrated against in Lebesgue–Stieltjes integration. In this case H is the indicator function of a closed semi-infinite interval:

H(x) = \mathbf{1}_{[0,\infty)}(x).

H(0) = 0 is used when H needs to be left-continuous. In this case H is the indicator function of an open semi-infinite interval:

H(x) = \mathbf{1}_{(0,\infty)}(x).


Antiderivative and derivative


The ramp function is the antiderivative of the Heaviside step function:

R(x) := \int_{-\infty}^{x} H(\xi)\, d\xi = x\, H(x).

The distributional derivative of the Heaviside step function is the Dirac delta function:

\frac{dH(x)}{dx} = \delta(x).

Fourier transform
The Fourier transform of the Heaviside step function is a distribution. Using one choice of constants for the definition of the Fourier transform we have

\hat{H}(s) = \lim_{N\to\infty}\int_{-N}^{N} e^{-2\pi i x s} H(x)\, dx = \frac{1}{2}\left(\delta(s) - \frac{i}{\pi}\,\mathrm{p.v.}\frac{1}{s}\right).

Here \mathrm{p.v.}\frac{1}{s} is the distribution that takes a test function \varphi to the Cauchy principal value of \int_{-\infty}^{\infty} \frac{\varphi(s)}{s}\, ds. The limit appearing in the integral is also taken in the sense of (tempered) distributions.

Algebraic representation
If x is a decimal number with no more than m decimal digits, the Heaviside step function can be represented by means of an algebraic expression involving a Kronecker delta function and a pair of arbitrary integers satisfying a suitable constraint; the simplest choice of these integers depends on whether x is an integer or belongs to a set of decimal numbers with m decimal digits.

Hyperfunction representation
This can be represented as a hyperfunction as

H(x) = \left(1 - \frac{1}{2\pi i}\log z,\ -\frac{1}{2\pi i}\log z\right).

References
[1] Weisstein, Eric W., "Heaviside Step Function" (http://mathworld.wolfram.com/HeavisideStepFunction.html) from MathWorld.

Ramp function
The ramp function is an elementary unary real function, easily computable as the mean of its independent variable and its absolute value.
This function is applied in engineering (e.g., in the theory of DSP). The name "ramp function" derives from the appearance of its graph.

Definitions

[Figure: Graph of the ramp function]

The ramp function (R(x) : ℝ → ℝ) may be defined analytically in several ways. Possible definitions are:
The mean of a straight line with unity gradient and its modulus:

R(x) = \frac{x + |x|}{2};

this can be derived by noting the following definition of \max(a,b),

\max(a,b) = \frac{a + b + |a - b|}{2},

for which a = x and b = 0.
The Heaviside step function multiplied by a straight line with unity gradient:

R(x) = x\, H(x).

The convolution of the Heaviside step function with itself:

R(x) = (H * H)(x).

The integral of the Heaviside step function:

R(x) = \int_{-\infty}^{x} H(\xi)\, d\xi.
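The equivalence of the first two definitions (mean of line and modulus; step times line), and the idempotence property shown later, can be spot-checked in a few lines of Python; the function names are illustrative:

```python
def heaviside(x):
    """Heaviside step, half-maximum convention (irrelevant here since x*H(x)=0 at 0)."""
    return 0.5 if x == 0 else (1.0 if x > 0 else 0.0)

def ramp_mean(x):
    """Definition via the mean of the line y = x and its modulus."""
    return (x + abs(x)) / 2.0

def ramp_step(x):
    """Definition via the Heaviside step function times the line y = x."""
    return x * heaviside(x)

for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert ramp_mean(x) == ramp_step(x)
    # Iteration invariance: R(R(x)) = R(x).
    assert ramp_mean(ramp_mean(x)) == ramp_mean(x)
```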


Analytic properties
Non-negativity
In the whole domain the function is non-negative, so its absolute value is itself, i.e.

\forall x \in \mathbb{R}:\ R(x) \ge 0

and

|R(x)| = R(x).

Proof: by definition [2], R is non-negative in the first quadrant and zero in the second; so it is non-negative everywhere.

Derivative
Its derivative is the Heaviside step function:

R'(x) = H(x) \quad \text{for } x \ne 0.

From this property, definition [5] follows.

Fourier transform

where δ(x) is the Dirac delta (in this formula, its derivative appears).

Laplace transform
The single-sided Laplace transform of R(x) is given as follows:

\mathcal{L}\{R(x)\}(s) = \int_{0}^{\infty} e^{-sx} R(x)\, dx = \frac{1}{s^{2}}.

Algebraic properties
Iteration invariance
Every iterated function of the ramp mapping is itself, as

R(R(x)) = R(x).

Proof: R(R(x)) = \frac{R(x) + |R(x)|}{2} = \frac{R(x) + R(x)}{2} = R(x). We applied the non-negativity property.


References
[1] Mathworld (http://mathworld.wolfram.com/RampFunction.html)

Digital signal processing


Digital signal processing (DSP) is concerned with the representation of discrete time signals by a sequence of
numbers or symbols and the processing of these signals. Digital signal processing and analog signal processing are
subfields of signal processing. DSP includes subfields like: audio and speech signal processing, sonar and radar
signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing,
signal processing for communications, control of systems, biomedical signal processing, seismic data processing,
etc.
The goal of DSP is usually to measure, filter and/or compress continuous real-world analog signals. The first step is
usually to convert the signal from an analog to a digital form, by sampling and then digitizing it using an
analog-to-digital converter (ADC), which turns the analog signal into a stream of numbers. However, often, the
required output signal is another analog output signal, which requires a digital-to-analog converter (DAC). Even if
this process is more complex than analog processing and has a discrete value range, the application of computational
power to digital signal processing allows for many advantages over analog processing in many applications, such as
error detection and correction in transmission as well as data compression.[1]
DSP algorithms have long been run on standard computers, on specialized processors called digital signal processors, and on purpose-built hardware such as application-specific integrated circuits (ASICs). Today there are additional
technologies used for digital signal processing including more powerful general purpose microprocessors,
field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial apps such as motor
control), and stream processors, among others.[2]

Signal sampling
With the increasing use of computers the usage of and need for digital signal processing has increased. In order to
use an analog signal on a computer it must be digitized with an analog-to-digital converter. Sampling is usually
carried out in two stages, discretization and quantization. In the discretization stage, the space of signals is partitioned into equivalence classes and the signal is replaced with a representative signal of the corresponding equivalence class. In the quantization stage the representative signal values are approximated
by values from a finite set.
The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency of the signal, but this requires an infinite number of samples. In practice, the sampling frequency is often significantly more than twice that required by the signal's limited bandwidth.
A digital-to-analog converter is used to convert the digital signal back to analog. The use of a digital computer is a
key ingredient in digital control systems.
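The aliasing that the sampling theorem guards against can be demonstrated in a few lines of Python; the 9 Hz / 1 Hz pair below is a made-up example, chosen so the above-Nyquist tone lands exactly on a lower frequency:

```python
import math

fs = 10.0            # sampling frequency in Hz; Nyquist frequency is fs/2 = 5 Hz
indices = range(20)  # sample indices

def sample(f):
    """Sample sin(2*pi*f*t) at the instants t = n/fs."""
    return [math.sin(2 * math.pi * f * n / fs) for n in indices]

# 9 Hz is above the Nyquist frequency, so it aliases: its samples are
# indistinguishable from those of an inverted 1 Hz sine (i.e. -1 Hz).
aliased = sample(9.0)
low = sample(1.0)
assert all(abs(a + b) < 1e-9 for a, b in zip(aliased, low))
```

This is why an anti-aliasing filter normally precedes the ADC: once sampled, the 9 Hz and (inverted) 1 Hz tones cannot be told apart.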


DSP domains
In DSP, engineers usually study digital signals in one of the following domains: time domain (one-dimensional
signals), spatial domain (multidimensional signals), frequency domain, and wavelet domains. They choose the
domain in which to process a signal by making an informed guess (or by trying different possibilities) as to which
domain best represents the essential characteristics of the signal. A sequence of samples from a measuring device
produces a time or spatial domain representation, whereas a discrete Fourier transform produces the frequency
domain information, that is the frequency spectrum. Autocorrelation is defined as the cross-correlation of the signal
with itself over varying intervals of time or space.

Time and space domains


The most common processing approach in the time or space domain is enhancement of the input signal through a
method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding
samples around the current sample of the input or output signal. There are various ways to characterize filters; for
example:
A "linear" filter is a linear transformation of input samples; other filters are "non-linear". Linear filters satisfy the
superposition condition, i.e. if an input is a weighted linear combination of different signals, the output is an
equally weighted linear combination of the corresponding output signals.
A "causal" filter uses only previous samples of the input or output signals; while a "non-causal" filter uses future
input samples. A non-causal filter can usually be changed into a causal filter by adding a delay to it.
A "time-invariant" filter has constant properties over time; other filters such as adaptive filters change in time.
Some filters are "stable", others are "unstable". A stable filter produces an output that converges to a constant
value with time, or remains bounded within a finite interval. An unstable filter can produce an output that grows
without bounds, with bounded or even zero input.
A "finite impulse response" (FIR) filter uses only the input signals, while an "infinite impulse response" filter
(IIR) uses both the input signal and previous samples of the output signal. FIR filters are always stable, while IIR
filters may be unstable.
Filters can be represented by block diagrams which can then be used to derive a sample processing algorithm to
implement the filter using hardware instructions. A filter may also be described as a difference equation, a collection
of zeroes and poles or, if it is an FIR filter, an impulse response or step response.
The output of a digital filter to any given input may be calculated by convolving the input signal with the impulse
response.
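A direct (if naive) implementation of this convolution sum can be sketched in Python, using an illustrative 3-tap moving-average FIR filter:

```python
def fir_filter(x, h):
    """Output of an FIR filter: convolve input x with impulse response h
    (causal, zero initial state)."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

# 3-tap moving average: a simple linear, causal, time-invariant, stable filter.
h = [1/3, 1/3, 1/3]
x = [0, 0, 3, 3, 3, 0, 0]
assert fir_filter(x, h) == [0, 0, 1, 2, 3, 2, 1]

# Filtering a unit impulse recovers the impulse response, as the name suggests.
assert fir_filter([1, 0, 0, 0], h) == [1/3, 1/3, 1/3, 0]
```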

Frequency domain
Signals are converted from time or space domain to the frequency domain usually through the Fourier transform.
The Fourier transform converts the signal information to a magnitude and phase component of each frequency. Often
the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component
squared.
The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The
engineer can study the spectrum to determine which frequencies are present in the input signal and which are
missing.
In addition to frequency information, phase information is often needed. This can be obtained from the Fourier
transform. With some applications, how the phase varies with frequency can be a significant consideration.
Filtering, particularly in non-realtime work can also be achieved by converting to the frequency domain, applying
the filter and then converting back to the time domain. This is a fast, O(n log n) operation, and can give essentially



any filter shape including excellent approximations to brickwall filters.
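This transform, filter, invert approach can be sketched in Python, assuming NumPy is available; the tone frequencies and 20 Hz cutoff are made-up example values:

```python
import numpy as np

fs = 1000                                     # sample rate, Hz
t = np.arange(fs) / fs                        # one second of samples
signal = np.sin(2 * np.pi * 5 * t)            # 5 Hz component to keep
noisy = signal + np.sin(2 * np.pi * 50 * t)   # 50 Hz component to remove

# Transform to the frequency domain, zero every bin above the cutoff
# (an ideal "brickwall" low-pass), then transform back.
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(len(noisy), d=1 / fs)
spectrum[freqs > 20] = 0
filtered = np.fft.irfft(spectrum, n=len(noisy))

assert np.max(np.abs(filtered - signal)) < 1e-9
```

The example is deliberately friendly (both tones sit exactly on DFT bins); with real signals the implicit periodicity of the DFT makes windowing or overlap-add methods necessary.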
There are some commonly used frequency domain transformations. For example, the cepstrum converts a signal to
the frequency domain through Fourier transform, takes the logarithm, then applies another Fourier transform. This
emphasizes the frequency components with smaller magnitude while retaining the order of magnitudes of frequency
components.
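The cepstrum recipe just described (Fourier transform, logarithm, Fourier transform again) can be sketched in Python, assuming NumPy; the impulse-plus-echo input is a hypothetical example where the cepstrum's usefulness is easy to see:

```python
import numpy as np

def real_cepstrum(x):
    """Fourier transform -> log magnitude -> inverse Fourier transform."""
    spectrum = np.fft.fft(x)
    return np.fft.ifft(np.log(np.abs(spectrum))).real

# An echo at lag 25 samples shows up as a cepstral peak at "quefrency" 25.
x = np.zeros(256)
x[0] = 1.0          # unit impulse
x[25] = 0.5         # echo at half amplitude
c = real_cepstrum(x)
assert np.argmax(c[1:128]) + 1 == 25
```

The echo multiplies the spectrum by 1 + 0.5 e^{-jω·25}; the logarithm turns that multiplicative ripple into an additive one, which the second transform isolates as a peak.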
Frequency domain analysis is also called spectrum- or spectral analysis.

Z-plane analysis
Whereas analog filters are usually analysed in terms of transfer functions in the s plane using Laplace transforms, digital filters are analysed in the z plane in terms of Z-transforms. A digital filter may be described in the z plane by its characteristic collection of zeroes and poles. The z plane provides a means for mapping digital frequency (samples/second) to real and imaginary z components, where z = re^{jω} for continuous periodic signals and ω = 2πF (F is the digital frequency). This is useful for providing a visualization of the frequency response of a digital system or signal.
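Evaluating a transfer function on the unit circle z = e^{jω} gives exactly this frequency-response picture; a Python sketch with an illustrative two-point averager, whose zero at z = -1 nulls the Nyquist frequency:

```python
import cmath
import math

def freq_response(b, a, omega):
    """Evaluate H(z) = B(z)/A(z) at z = e^{j*omega} on the unit circle.

    b and a hold the coefficients of z^0, z^-1, z^-2, ... in the
    numerator and denominator polynomials, respectively.
    """
    z = cmath.exp(1j * omega)
    num = sum(bk * z ** (-k) for k, bk in enumerate(b))
    den = sum(ak * z ** (-k) for k, ak in enumerate(a))
    return num / den

# Two-point averager y[n] = (x[n] + x[n-1]) / 2:
# H(z) = (1 + z^-1)/2, a single zero at z = -1 and a pole at z = 0.
b, a = [0.5, 0.5], [1.0]
assert abs(freq_response(b, a, 0.0) - 1.0) < 1e-12   # passes DC unchanged
assert abs(freq_response(b, a, math.pi)) < 1e-12     # blocks Nyquist
```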

Wavelet
In numerical analysis and functional
analysis, a discrete wavelet transform
(DWT) is any wavelet transform for which
the wavelets are discretely sampled. As with
other wavelet transforms, a key advantage it
has over Fourier transforms is temporal
resolution: it captures both frequency and
location information (location in time).
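One level of the simplest DWT, the Haar wavelet, can be written out directly; a pure-Python sketch (orthonormal normalization assumed, even-length input):

```python
import math

def haar_dwt(x):
    """One level of the discrete Haar wavelet transform.

    Returns (approximation, detail): scaled local sums and local
    differences, each half the length of the even-length input.
    """
    s = math.sqrt(2)
    approx = [(x[2*i] + x[2*i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level: perfect reconstruction."""
    s = math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_dwt(x)
rec = haar_idwt(approx, detail)
assert all(abs(u - v) < 1e-12 for u, v in zip(x, rec))
```

Applying the same split recursively to the approximation coefficients yields the multi-level decomposition used in schemes such as JPEG 2000 (which uses longer wavelet filters, but the same idea).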

Applications
The main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, RADAR, SONAR, seismology and biomedicine. Specific examples are speech compression and transmission in digital mobile phones, room correction of sound in hi-fi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, medical imaging such as CAT scans and MRI, MP3 compression, computer graphics, image manipulation, hi-fi loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

[Figure: An example of the 2D discrete wavelet transform that is used in JPEG2000. The original image is high-pass filtered, yielding the three large images, each describing local changes in brightness (details) in the original image. It is then low-pass filtered and downscaled, yielding an approximation image; this image is high-pass filtered to produce the three smaller detail images, and low-pass filtered to produce the final approximation image in the upper-left.]

Implementation



Depending on the requirements of the application, digital signal processing tasks can be implemented on general purpose computers (e.g. supercomputers, mainframe computers, or personal computers) or with embedded processors that may or may not include specialized microprocessors called digital signal processors.
Often when the processing requirement is not real-time, processing is economically done with an existing
general-purpose computer and the signal data (either input or output) exists in data files. This is essentially no
different than any other data processing except DSP mathematical techniques (such as the FFT) are made use of and
the sampled data is usually assumed to be uniformly sampled in time or space. For example: processing digital
photographs with software such as Photoshop.
However, when the application requirement is real-time, DSP is often implemented using specialised
microprocessors such as the DSP56000, the TMS320, or the SHARC. These often process data using fixed-point
arithmetic, although some versions are available which use floating point arithmetic and are more powerful. For
faster applications FPGAs[3] might be used. Beginning in 2007, multicore implementations of DSPs have started to
emerge from companies including Freescale and Stream Processors, Inc. For faster applications with vast usage,
ASICs might be designed specifically. For slow applications, a traditional slower processor such as a microcontroller
may be adequate. Also a growing number of DSP applications are now being implemented on Embedded Systems
using powerful PCs with a Multi-core processor.

Techniques

Bilinear transform
Discrete Fourier transform
Discrete-time Fourier transform
Filter design
LTI system theory
Minimum phase
Transfer function
Z-transform
Goertzel algorithm
s-plane

Related fields

Analog signal processing


Automatic control
Computer Engineering
Computer Science
Data compression
Dataflow programming
Electrical engineering
Fourier Analysis
Information theory
Machine Learning
Real-time computing
Stream processing
Telecommunication

Time series
Wavelet


References
[1] James D. Broesch, Dag Stranneby and William Walker. Digital Signal Processing: Instant access. Butterworth-Heinemann. p. 3.
[2] Dag Stranneby and William Walker (2004). Digital Signal Processing and Applications (http://books.google.com/books?id=NKK1DdqcDVUC&pg=PA241) (2nd ed.). Elsevier. ISBN 0750663448.
[3] JpFix. "FPGA-Based Image Processing Accelerator" (http://www.jpfix.com/About_Us/Articles/FPGA-Based_Image_Processing_Ac/fpga-based_image_processing_ac.html). Retrieved 2008-05-10.

Further reading
Alan V. Oppenheim, Ronald W. Schafer, John R. Buck : Discrete-Time Signal Processing, Prentice Hall, ISBN
0-13-754920-2
Boaz Porat: A Course in Digital Signal Processing, Wiley, ISBN 0471149616
Richard G. Lyons: Understanding Digital Signal Processing, Prentice Hall, ISBN 0-13-108989-7
Jonathan Yaakov Stein, Digital Signal Processing, a Computer Science Perspective, Wiley, ISBN 0-471-29546-9
Sen M. Kuo, Woon-Seng Gan: Digital Signal Processors: Architectures, Implementations, and Applications,
Prentice Hall, ISBN 0-13-035214-4
Bernard Mulgrew, Peter Grant, John Thompson: Digital Signal Processing - Concepts and Applications, Palgrave
Macmillan, ISBN 0-333-96356-3
Steven W. Smith: Digital Signal Processing - A Practical Guide for Engineers and Scientists, Newnes, ISBN
0-7506-7444-X, ISBN 0-9660176-3-3 (http://www.dspguide.com)
Paul A. Lynn, Wolfgang Fuerst: Introductory Digital Signal Processing with Computer Applications, John Wiley
& Sons, ISBN 0-471-97984-8
James D. Broesch: Digital Signal Processing Demystified, Newnes, ISBN 1-878707-16-7
John G. Proakis, Dimitris Manolakis: Digital Signal Processing - Principles, Algorithms and Applications,
Pearson, ISBN 0-13-394289-9
Hari Krishna Garg: Digital Signal Processing Algorithms, CRC Press, ISBN 0-8493-7178-3
P. Gaydecki: Foundations Of Digital Signal Processing: Theory, Algorithms And Hardware Design, Institution of
Electrical Engineers, ISBN 0-85296-431-5
Gibson, John. "Spectral Delay as a Compositional Resource." eContact! 11.4, Toronto Electroacoustic
Symposium 2009 (TES) / Symposium électroacoustique 2009 de Toronto (http://cec.concordia.ca/econtact/
11_4/Gibson_spectraldelay.html) (December 2009). Montréal: CEC.
Paul M. Embree, Damon Danieli: C++ Algorithms for Digital Signal Processing, Prentice Hall, ISBN
0-13-179144-3
Anthony Zaknich: Neural Networks for Intelligent Signal Processing, World Scientific Pub Co Inc, ISBN
981-238-305-0
Vijay Madisetti, Douglas B. Williams: The Digital Signal Processing Handbook, CRC Press, ISBN
0-8493-8572-5
Stergios Stergiopoulos: Advanced Signal Processing Handbook: Theory and Implementation for Radar, Sonar,
and Medical Imaging Real-Time Systems, CRC Press, ISBN 0-8493-3691-0
Joyce Van De Vegte: Fundamentals of Digital Signal Processing, Prentice Hall, ISBN 0-13-016077-6
Ashfaq Khan: Digital Signal Processing Fundamentals, Charles River Media, ISBN 1-58450-281-9
Jonathan M. Blackledge, Martin Turner: Digital Signal Processing: Mathematical and Computational Methods,
Software Development and Applications, Horwood Publishing, ISBN 1-898563-48-9
Bimal Krishna, K. Y. Lin, Hari C. Krishna: Computational Number Theory & Digital Signal Processing, CRC
Press, ISBN 0-8493-7177-5
Doug Smith: Digital Signal Processing Technology: Essentials of the Communications Revolution, American
Radio Relay League, ISBN 0-87259-819-5



Henrique S. Malvar: Signal Processing with Lapped Transforms, Artech House Publishers, ISBN 0-89006-467-9
Charles A. Schuler: Digital Signal Processing: A Hands-On Approach, McGraw-Hill, ISBN 0-07-829744-3
James H. McClellan, Ronald W. Schafer, Mark A. Yoder: Signal Processing First, Prentice Hall, ISBN
0-13-090999-8
Artur Krukowski, Izzet Kale: DSP System Design: Complexity Reduced Iir Filter Implementation for Practical
Applications, Kluwer Academic Publishers, ISBN 1-4020-7558-8
Kainam Thomas Wong (http://www.eie.polyu.edu.hk/~enktwong/): Statistical Signal Processing lecture
notes (http://ece.uwaterloo.ca/~ece603/) at the University of Waterloo, Canada.
John G. Proakis: A Self-Study Guide for Digital Signal Processing, Prentice Hall, ISBN 0-13-143239-7

Time domain
Time domain is a term used to describe the analysis of mathematical functions, physical signals or time series of
economic or environmental data, with respect to time. In the time domain, the signal or function's value is known for
all real numbers, for the case of continuous time, or at various separate instants in the case of discrete time. An
oscilloscope is a tool commonly used to visualize real-world signals in the time domain. Speaking non-technically, a
time domain graph shows how a signal changes over time, whereas a frequency domain graph shows how much of
the signal lies within each given frequency band over a range of frequencies.

Origin of term
The use of the contrasting terms "time domain" and "frequency domain" developed in US communication
engineering in the 1950s and early 1960s, with the terms appearing together in 1961.[1] [2]

References
[1] Earliest Known Uses of Some of the Words of Mathematics (T) (http:/ / jeff560. tripod. com/ t. html), Jeff Miller, March 25, 2009
[2] Trench, W. F. (1961), "A General Class of Discrete Time-Invariant Filters", Journal of the Society for Industrial and Applied Mathematics 9: 405–421.


Z-transform
In mathematics and signal processing, the Z-transform converts a discrete time-domain signal, which is a sequence
of real or complex numbers, into a complex frequency-domain representation.
It can be considered as a discrete-time equivalent of the Laplace transform. This similarity is explored in the theory
of time scale calculus.

History
The basic idea now known as the Z-transform was known to Laplace, and re-introduced in 1947 by W. Hurewicz as
a tractable way to solve linear, constant-coefficient difference equations.[1] It was later dubbed "the z-transform" by
Ragazzini and Zadeh in the sampled-data control group at Columbia University in 1952.[2] [3]
The modified or advanced Z-transform was later developed and popularized by E. I. Jury.[4] [5]
The idea contained within the Z-transform is also known in mathematical literature as the method of generating
functions which can be traced back as early as 1730 when it was introduced by de Moivre in conjunction with
probability theory.[6] From a mathematical view the Z-transform can also be viewed as a Laurent series where one
views the sequence of numbers under consideration as the (Laurent) expansion of an analytic function.

Definition
The Z-transform, like many integral transforms, can be defined as either a one-sided or two-sided transform.

Bilateral Z-transform
The bilateral or two-sided Z-transform of a discrete-time signal x[n] is the formal power series X(z) defined as

X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=-\infty}^{\infty} x[n] z^{-n}

where n is an integer and z is, in general, a complex number:

z = A e^{j\phi} = A(\cos\phi + j\sin\phi)

where A is the magnitude of z, j is the imaginary unit, and \phi is the complex argument (also referred to as angle or phase) in radians.

Unilateral Z-transform
Alternatively, in cases where x[n] is defined only for n ≥ 0, the single-sided or unilateral Z-transform is defined as

X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=0}^{\infty} x[n] z^{-n}.

In signal processing, this definition can be used to evaluate the Z-transform of the unit impulse response of a discrete-time causal system.
An important example of the unilateral Z-transform is the probability-generating function, where the component x[n] is the probability that a discrete random variable takes the value n, and the function X(z) is usually written as X(s), in terms of s = z^{-1}. The properties of Z-transforms (below) have useful interpretations in the context of probability theory.
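The unilateral sum can be evaluated numerically by truncation; a Python sketch with the illustrative signal x[n] = 0.5^n, whose closed form 1/(1 − 0.5 z^{−1}) follows from the geometric series:

```python
def unilateral_z(x, z, terms=200):
    """Truncated unilateral Z-transform: sum_{n=0}^{terms-1} x(n) * z**(-n).

    x is given as a function of n; the truncation is accurate only
    where the infinite series converges (here, for |z| > 0.5).
    """
    return sum(x(n) * z ** (-n) for n in range(terms))

z = 2.0  # a point well inside the region of convergence |z| > 0.5
approx = unilateral_z(lambda n: 0.5 ** n, z)
assert abs(approx - 1 / (1 - 0.5 / z)) < 1e-12
```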


Geophysical definition
In geophysics, the usual definition for the Z-transform is a power series in z as opposed to z^{-1}. This convention is used by Robinson and Treitel and by Kanasewich. The geophysical definition is

X(z) = \sum_{n} x[n] z^{n}.
The two definitions are equivalent; however, the difference results in a number of changes. For example, the location
of zeros and poles move from inside the unit circle using one definition, to outside the unit circle using the other
definition. Thus, care is required to note which definition is being used by a particular author.

Inverse Z-transform
The inverse Z-transform is

where

is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC).

The contour or path,

, must encircle all of the poles of

A special case of this contour integral occurs when


unit circle which is always guaranteed when

is the unit circle (and can be used when the ROC includes the
is stable, i.e. all the poles are within the unit circle). The inverse

Z-transform simplifies to the inverse discrete-time Fourier transform:

The Z-transform with a finite range of n and a finite number of uniformly-spaced z values can be computed
efficiently via Bluestein's FFT algorithm. The discrete-time Fourier transform (DTFT) (not to be confused with the
discrete Fourier transform (DFT)) is a special case of such a Z-transform obtained by restricting z to lie on the unit
circle.
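This relationship can be checked numerically: sampling a finite sequence's Z-transform at the N-th roots of unity on the unit circle reproduces that sequence's DFT. A pure-Python sketch (the helper names are ours; a direct O(N²) DFT is used for clarity, not speed):

```python
import cmath

def z_transform_at(x, z):
    """Evaluate the finite Z-transform sum x[n] * z^{-n} directly."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

def dft(x):
    """Direct O(N^2) discrete Fourier transform, for comparison."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
N = len(x)
# Sampling the Z-transform at z = e^{j 2 pi k / N} (points on the unit
# circle) gives exactly the DFT bins of the length-N sequence.
samples = [z_transform_at(x, cmath.exp(2j * cmath.pi * k / N)) for k in range(N)]
spectrum = dft(x)
```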

Region of convergence
The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform summation
converges.

Example 1 (no ROC)

Let x[n] = 0.5^n for all integers n. Expanding x[n] on the interval (−∞, ∞) it becomes

x[n] = (..., 0.5^{−2}, 0.5^{−1}, 1, 0.5, 0.5^2, ...)

Looking at the sum

X(z) = Σ_{n=−∞}^{∞} 0.5^n z^{−n}

the terms with n → +∞ converge only when |z| > 0.5, while the terms with n → −∞ converge only when |z| < 0.5. Therefore, there are no values of z that satisfy this condition: the sum converges nowhere, and this x[n] has no ROC.


Example 2 (causal ROC)

[Figure: ROC shown in blue, the unit circle as a dotted grey circle, and the circle |z| = 0.5 shown as a dashed black circle.]

Let x[n] = 0.5^n u[n] (where u[n] is the Heaviside step function). Expanding x[n] on the interval (−∞, ∞) it becomes

x[n] = (1, 0.5, 0.5^2, 0.5^3, ...) for n ≥ 0, and 0 for n < 0.

Looking at the sum

X(z) = Σ_{n=0}^{∞} 0.5^n z^{−n} = Σ_{n=0}^{∞} (0.5 z^{−1})^n = 1 / (1 − 0.5 z^{−1})

The last equality arises from the infinite geometric series, and the equality only holds if |0.5 z^{−1}| < 1, which can be rewritten in terms of z as |z| > 0.5. Thus, the ROC is |z| > 0.5. In this case the ROC is the complex plane with a disc of radius 0.5 at the origin "punched out".
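The geometric-series result of this example can be verified numerically: partial sums converge to the closed form inside the ROC and blow up outside it (an illustrative sketch; the function names are ours):

```python
def truncated_sum(z, n_terms=200):
    """Partial sum of 0.5^n * z^{-n} for n = 0 .. n_terms - 1."""
    return sum(0.5 ** n * z ** (-n) for n in range(n_terms))

def closed_form(z):
    """X(z) = 1 / (1 - 0.5 z^{-1}), valid on the ROC |z| > 0.5."""
    return 1.0 / (1.0 - 0.5 / z)

# Inside the ROC (|z| > 0.5) the partial sums converge to the closed form.
z_in = 2.0
approx = truncated_sum(z_in)
exact = closed_form(z_in)

# Outside the ROC (|z| < 0.5) the terms grow like (0.5/|z|)^n, so the
# partial sums diverge.
diverging = truncated_sum(0.25)
```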


Example 3 (anticausal ROC)

[Figure: ROC shown in blue, the unit circle as a dotted grey circle, and the circle |z| = 0.5 shown as a dashed black circle.]

Let x[n] = −(0.5)^n u[−n−1] (where u[n] is the Heaviside step function). Expanding x[n] on the interval (−∞, ∞) it becomes nonzero only for n ≤ −1. Looking at the sum

X(z) = −Σ_{n=−∞}^{−1} 0.5^n z^{−n} = −Σ_{m=1}^{∞} (2z)^m = 1 − 1/(1 − 2z) = 1 / (1 − 0.5 z^{−1})

Using the infinite geometric series, again, the equality only holds if |2z| < 1, which can be rewritten in terms of z as |z| < 0.5. Thus, the ROC is |z| < 0.5. In this case the ROC is a disc centered at the origin and of radius 0.5.

What differentiates this example from the previous example is only the ROC. This is intentional, to demonstrate that the transform result alone is insufficient.

Examples conclusion
Examples 2 and 3 clearly show that the Z-transform X(z) of x[n] is unique when and only when the ROC is specified. Creating the pole-zero plot for the causal and anticausal case shows that the ROC for either case does not include the pole that is at 0.5. This extends to cases with multiple poles: the ROC will never contain poles.

In example 2, the causal system yields an ROC that includes |z| = ∞, while the anticausal system in example 3 yields an ROC that includes z = 0.


[Figure: ROC shown as a blue ring.]

In systems with multiple poles it is possible to have an ROC that includes neither |z| = ∞ nor z = 0. The ROC creates a circular band. For example,

x[n] = 0.5^n u[n] − 0.75^n u[−n−1]

has poles at 0.5 and 0.75. The ROC will be 0.5 < |z| < 0.75, which includes neither the origin nor infinity. Such a system is called a mixed-causality system, as it contains a causal term 0.5^n u[n] and an anticausal term −0.75^n u[−n−1].

The stability of a system can also be determined by knowing the ROC alone. If the ROC contains the unit circle (i.e., |z| = 1) then the system is stable. In the above systems the causal system (Example 2) is stable because |z| > 0.5 contains the unit circle.
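The stability criterion for a causal system (all poles strictly inside the unit circle) is easy to check numerically once the poles are known. A sketch for a second-order denominator, solved with the quadratic formula (the helper names are ours, not a standard API):

```python
import cmath

def poles_of_quadratic(a0, a1, a2):
    """Poles in z of H(z) = 1 / (a0 + a1 z^{-1} + a2 z^{-2}).
    Multiplying the denominator through by z^2 gives
    a0 z^2 + a1 z + a2 = 0, solved by the quadratic formula."""
    disc = cmath.sqrt(a1 * a1 - 4 * a0 * a2)
    return ((-a1 + disc) / (2 * a0), (-a1 - disc) / (2 * a0))

def causal_stable(poles):
    """A causal system is stable iff every pole lies inside the unit circle."""
    return all(abs(p) < 1.0 for p in poles)

# Denominator (1 - 0.5 z^{-1})(1 - 0.75 z^{-1}) = 1 - 1.25 z^{-1} + 0.375 z^{-2}
# has poles at z = 0.5 and z = 0.75, so a causal realization is stable.
poles = poles_of_quadratic(1.0, -1.25, 0.375)
```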
If you are provided a Z-transform of a system without an ROC (i.e., an ambiguous x[n]), you can determine a unique x[n] provided you desire the following:
Stability
Causality
If you need stability then the ROC must contain the unit circle. If you need a causal system then the ROC must contain infinity and the system function will be a right-sided sequence. If you need an anticausal system then the ROC must contain the origin and the system function will be a left-sided sequence. If you need both stability and causality, all the poles of the system function must be inside the unit circle. The unique x[n] can then be found.

Properties


Properties of the z-transform

Below, x[n] ↔ X(z) denotes a Z-transform pair with ROC R (similarly x1[n] ↔ X1(z) with ROC1, and x2[n] ↔ X2(z) with ROC2):

Linearity: a1 x1[n] + a2 x2[n] ↔ a1 X1(z) + a2 X2(z); ROC: at least the intersection of ROC1 and ROC2
Time expansion: x_K[n] = x[r] for n = rK, 0 otherwise ↔ X(z^K); ROC: R^{1/K} (K a positive integer)
Time shifting: x[n − k] ↔ z^{−k} X(z); ROC: R, except z = 0 if k > 0 and z = ∞ if k < 0
Scaling in the z-domain: a^n x[n] ↔ X(z/a); ROC: |a|·R
Time reversal: x[−n] ↔ X(1/z); ROC: 1/R
Complex conjugation: x*[n] ↔ X*(z*); ROC: R
Real part: Re{x[n]} ↔ (1/2)[X(z) + X*(z*)]; ROC: R
Imaginary part: Im{x[n]} ↔ (1/(2j))[X(z) − X*(z*)]; ROC: R
Differentiation: n x[n] ↔ −z dX(z)/dz; ROC: R
Convolution: x1[n] * x2[n] ↔ X1(z) X2(z); ROC: at least the intersection of ROC1 and ROC2
Cross-correlation: r_{x1,x2}[n] = x1*[−n] * x2[n] ↔ X1*(1/z*) X2(z); ROC: at least the intersection of the ROCs of X1*(1/z*) and X2(z)
First difference: x[n] − x[n − 1] ↔ (1 − z^{−1}) X(z); ROC: at least the intersection of the ROC of X(z) and |z| > 0
Accumulation: Σ_{k=−∞}^{n} x[k] ↔ X(z) / (1 − z^{−1})
Multiplication: x1[n] x2[n] ↔ (1/(2πj)) ∮_C X1(v) X2(z/v) v^{−1} dv
Parseval's relation: Σ_{n=−∞}^{∞} x1[n] x2*[n] = (1/(2πj)) ∮_C X1(v) X2*(1/v*) v^{−1} dv
Initial value theorem: x[0] = lim_{z→∞} X(z), if x[n] is causal
Final value theorem: x[∞] = lim_{z→1} (z − 1) X(z), only if the poles of (z − 1) X(z) are inside the unit circle
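Two of the properties above, convolution and time shifting, can be spot-checked numerically for finite sequences (an illustrative sketch; the helper names are ours):

```python
def zt(x, z):
    """Finite-sum Z-transform of a causal sequence at the point z."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

def convolve(x1, x2):
    """Linear convolution of two finite causal sequences."""
    y = [0.0] * (len(x1) + len(x2) - 1)
    for i, a in enumerate(x1):
        for j, b in enumerate(x2):
            y[i + j] += a * b
    return y

x1 = [1.0, 2.0, 3.0]
x2 = [0.5, -1.0, 0.25]
z = 1.5 + 0.5j

# Convolution property: Z{x1 * x2} = X1(z) X2(z)
lhs = zt(convolve(x1, x2), z)
rhs = zt(x1, z) * zt(x2, z)

# Time-shifting property: Z{x[n - 2]} = z^{-2} X(z)
shifted = [0.0, 0.0] + x1
```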

Table of common Z-transform pairs

Here, u[n] is the unit step (u[n] = 1 for n ≥ 0, 0 otherwise) and δ[n] is the unit impulse. Some of the most common pairs:

δ[n] ↔ 1; ROC: all z
δ[n − n0] ↔ z^{−n0}; ROC: z ≠ 0
u[n] ↔ 1 / (1 − z^{−1}); ROC: |z| > 1
−u[−n − 1] ↔ 1 / (1 − z^{−1}); ROC: |z| < 1
a^n u[n] ↔ 1 / (1 − a z^{−1}); ROC: |z| > |a|
−a^n u[−n − 1] ↔ 1 / (1 − a z^{−1}); ROC: |z| < |a|
n a^n u[n] ↔ a z^{−1} / (1 − a z^{−1})^2; ROC: |z| > |a|
−n a^n u[−n − 1] ↔ a z^{−1} / (1 − a z^{−1})^2; ROC: |z| < |a|
cos(ω0 n) u[n] ↔ (1 − z^{−1} cos ω0) / (1 − 2 z^{−1} cos ω0 + z^{−2}); ROC: |z| > 1
sin(ω0 n) u[n] ↔ (z^{−1} sin ω0) / (1 − 2 z^{−1} cos ω0 + z^{−2}); ROC: |z| > 1

Relationship to Laplace transform

The bilinear transform is a useful approximation for converting continuous-time filters (represented in Laplace space) into discrete-time filters (represented in z space), and vice versa. To do this, you can use the following substitutions in H(s) or H(z):

s = (2/T) (z − 1)/(z + 1)   from Laplace to z (Tustin transformation), or

z = (2 + sT)/(2 − sT)   from z to Laplace.

Through the bilinear transformation, the complex s-plane (of the Laplace transform) is mapped to the complex z-plane (of the z-transform). While this mapping is (necessarily) nonlinear, it is useful in that it maps the entire jω axis of the s-plane onto the unit circle in the z-plane. As such, the Fourier transform (which is the Laplace transform evaluated on the jω axis) becomes the discrete-time Fourier transform. This assumes that the Fourier transform exists; i.e., that the jω axis is in the region of convergence of the Laplace transform.
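As a worked sketch of the Tustin substitution, a first-order analog lowpass H(s) = a/(s + a) can be discretized by hand: substituting s = (2/T)(z − 1)/(z + 1) and collecting powers of z^{−1} gives the coefficients below (the function name and the parameter values are ours, for illustration):

```python
def tustin_first_order(a, T):
    """Discretize H(s) = a / (s + a) with the bilinear (Tustin) map
    s -> (2/T) (z - 1) / (z + 1).

    Algebra: H(z) = a T (1 + z^{-1}) / ((2 + aT) + (aT - 2) z^{-1}).
    Returns ((b0, b1), d1) for H(z) = (b0 + b1 z^{-1}) / (1 + d1 z^{-1})
    after normalizing the leading denominator coefficient to 1."""
    aT = a * T
    b0 = aT / (2 + aT)
    b1 = aT / (2 + aT)
    d1 = (aT - 2) / (2 + aT)
    return (b0, b1), d1

# DC check: at z = 1 (i.e. s = 0) the discrete filter should match the
# analog prototype's gain H(0) = 1.
(b0, b1), d1 = tustin_first_order(a=10.0, T=0.01)
dc_gain = (b0 + b1) / (1 + d1)
```

Note that |d1| < 1, so the single pole of the discrete filter lies inside the unit circle: the bilinear transform maps the stable left half-plane pole of H(s) to a stable pole of H(z).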

Relationship to Fourier transform


The Z-transform is a generalization of the discrete-time Fourier transform (DTFT). The DTFT can be found by evaluating the Z-transform X(z) at z = e^{jω} (where ω is the normalized frequency), or, in other words, by evaluating it on the unit circle. In order to determine the frequency response of the system, the Z-transform must be evaluated on the unit circle, meaning that the system's region of convergence must contain the unit circle. Otherwise, the DTFT of the system does not exist.


Linear constant-coefficient difference equation


The linear constant-coefficient difference (LCCD) equation is a representation for a linear system based on the autoregressive moving-average equation:

Σ_{p=0}^{N} α_p y[n − p] = Σ_{q=0}^{M} β_q x[n − q]

Both sides of the above equation can be divided by α_0, if it is not zero, normalizing α_0 = 1, and the LCCD equation can be written

y[n] = Σ_{q=0}^{M} β_q x[n − q] − Σ_{p=1}^{N} α_p y[n − p]

This form of the LCCD equation makes it more explicit that the "current" output y[n] is a function of past outputs y[n − p], the current input x[n], and previous inputs x[n − q].

Transfer function
Taking the Z-transform of the above equation (using linearity and time-shifting laws) yields

Y(z) Σ_{p=0}^{N} α_p z^{−p} = X(z) Σ_{q=0}^{M} β_q z^{−q}

and rearranging results in

H(z) = Y(z)/X(z) = (Σ_{q=0}^{M} β_q z^{−q}) / (Σ_{p=0}^{N} α_p z^{−p}) = (β_0 + β_1 z^{−1} + ... + β_M z^{−M}) / (α_0 + α_1 z^{−1} + ... + α_N z^{−N})
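The normalized LCCD equation maps directly to code: each output sample is a weighted sum of current and past inputs minus a weighted sum of past outputs. A minimal direct-form sketch with zero initial conditions (names are ours, not a standard API):

```python
def lccd_filter(b, a, x):
    """Run the difference equation
        a[0] y[n] = sum_q b[q] x[n-q] - sum_{p>=1} a[p] y[n-p]
    over an input sequence x, assuming zero initial conditions."""
    y = []
    for n in range(len(x)):
        acc = sum(b[q] * x[n - q] for q in range(len(b)) if n - q >= 0)
        acc -= sum(a[p] * y[n - p] for p in range(1, len(a)) if n - p >= 0)
        y.append(acc / a[0])
    return y

# Example: y[n] = 0.5 y[n-1] + x[n], i.e. H(z) = 1 / (1 - 0.5 z^{-1}).
# Driving it with a unit impulse should produce the impulse response 0.5^n.
h = lccd_filter(b=[1.0], a=[1.0, -0.5], x=[1.0] + [0.0] * 7)
```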

Zeros and poles

From the fundamental theorem of algebra the numerator has M roots (corresponding to zeros of H) and the denominator has N roots (corresponding to poles). Rewriting the transfer function in terms of poles and zeros

H(z) = ((1 − q_1 z^{−1})(1 − q_2 z^{−1}) ... (1 − q_M z^{−1})) / ((1 − p_1 z^{−1})(1 − p_2 z^{−1}) ... (1 − p_N z^{−1}))

where q_k is the k-th zero and p_k is the k-th pole. The zeros and poles are commonly complex, and when plotted on the complex plane (z-plane) the result is called the pole-zero plot.

In addition, there may also exist zeros and poles at z = 0 and z = ∞. If we take these poles and zeros as well as multiple-order zeros and poles into consideration, the numbers of zeros and poles are always equal.

By factoring the denominator, partial fraction decomposition can be used, which can then be transformed back to the time domain. Doing so would result in the impulse response and the linear constant-coefficient difference equation of the system.


Output response
If such a system H(z) is driven by a signal X(z), then the output is Y(z) = H(z) X(z). By performing partial fraction decomposition on Y(z) and then taking the inverse Z-transform, the output y[n] can be found. In practice, it is often useful to fractionally decompose Y(z)/z before multiplying that quantity by z to generate a form of Y(z) which has terms with easily computable inverse Z-transforms.
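For two distinct first-order poles this procedure can be carried out explicitly: each partial-fraction term inverts to a geometric sequence, and the result can be cross-checked against a direct convolution of the two impulse responses (an illustrative sketch; names are ours):

```python
def partial_fraction_two_poles(p1, p2):
    """Residues for Y(z) = 1 / ((1 - p1 z^{-1})(1 - p2 z^{-1}))
    = A / (1 - p1 z^{-1}) + B / (1 - p2 z^{-1}), with distinct poles."""
    A = p1 / (p1 - p2)
    B = p2 / (p2 - p1)
    return A, B

p1, p2 = 0.5, 0.25
A, B = partial_fraction_two_poles(p1, p2)

# Inverse Z-transform of each first-order term is a geometric sequence,
# so (taking the causal inverse, ROC |z| > 0.5): y[n] = A p1^n + B p2^n.
y = [A * p1 ** n + B * p2 ** n for n in range(8)]

# Cross-check: Y(z) is also the product of the transforms of p1^n u[n]
# and p2^n u[n], so y[n] must equal their convolution.
conv = [sum(p1 ** k * p2 ** (n - k) for k in range(n + 1)) for n in range(8)]
```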

References
[1] E. R. Kanasewich (1981). Time Sequence Analysis in Geophysics (http://books.google.com/books?id=k8SSLy-FYagC&pg=PA185) (3rd ed.). University of Alberta. pp. 185–186. ISBN 9780888640741.
[2] J. R. Ragazzini and L. A. Zadeh (1952). "The analysis of sampled-data systems". Trans. Am. Inst. Elec. Eng. 71 (II): 225–234.
[3] Cornelius T. Leondes (1996). Digital Control Systems Implementation and Computational Techniques (http://books.google.com/books?id=aQbk3uidEJoC&pg=PA123). Academic Press. p. 123. ISBN 9780120127795.
[4] Eliahu Ibrahim Jury (1958). Sampled-Data Control Systems. John Wiley & Sons.
[5] Eliahu Ibrahim Jury (1973). Theory and Application of the Z-Transform Method. Krieger Pub Co. ISBN 0-88275-122-0.
[6] Eliahu Ibrahim Jury (1964). Theory and Application of the Z-Transform Method. John Wiley & Sons. p. 1.

Further reading
Refaat El Attar, Lecture notes on Z-Transform, Lulu Press, Morrisville NC, 2005. ISBN 1-4116-1979-X.
Ogata, Katsuhiko, Discrete Time Control Systems 2nd Ed, Prentice-Hall Inc, 1995, 1987. ISBN 0-13-034281-5.
Alan V. Oppenheim and Ronald W. Schafer (1999). Discrete-Time Signal Processing, 2nd Edition, Prentice Hall
Signal Processing Series. ISBN 0-13-754920-2.

External links
Z-Transform table of some common Laplace transforms (http://www.swarthmore.edu/NatSci/echeeve1/Ref/
LPSA/LaplaceZTable/LaplaceZFuncTable.html)
Mathworld's entry on the Z-transform (http://mathworld.wolfram.com/Z-Transform.html)
Z-Transform threads in Comp.DSP (http://www.dsprelated.com/comp.dsp/keyword/Z_Transform.php)
Z-Transform Module by John H. Mathews (http://math.fullerton.edu/mathews/c2003/ZTransformIntroMod.html)
University of Penn Z-Transform Intro (http://www.ling.upenn.edu/courses/ling525/z.html) ( PDF version
(http://dev.gentoo.org/~redhatter/misc/z-transform.pdf) with more readable formulae)

Frequency domain
In electronics, control systems engineering, and statistics, frequency domain is a term used to describe the domain
for analysis of mathematical functions or signals with respect to frequency, rather than time.[1]
Speaking non-technically, a time-domain graph shows how a signal changes over time, whereas a frequency-domain
graph shows how much of the signal lies within each given frequency band over a range of frequencies. A
frequency-domain representation can also include information on the phase shift that must be applied to each
sinusoid in order to be able to recombine the frequency components to recover the original time signal.
A given function or signal can be converted between the time and frequency domains with a pair of mathematical
operators called a transform. An example is the Fourier transform, which decomposes a function into the sum of a
(potentially infinite) number of sine wave frequency components. The 'spectrum' of frequency components is the
frequency domain representation of the signal. The inverse Fourier transform converts the frequency domain
function back to a time function.
A spectrum analyzer is the tool commonly used to visualize real-world signals in the frequency domain.
(Note that recent advances in the field of signal processing have also made it possible to define representations or transforms that result in a joint time-frequency domain, with the instantaneous frequency being a key link between the time domain and the frequency domain.)

Magnitude and phase


In using the Laplace, Z-, or Fourier transforms, the frequency spectrum is complex, describing the magnitude and
phase of a signal, or of the response of a system, as a function of frequency. In many applications, phase information
is not important. By discarding the phase information it is possible to simplify the information in a frequency domain
representation to generate a frequency spectrum or spectral density. A spectrum analyzer is a device that displays the
spectrum.
The power spectral density is a frequency-domain description that can be applied to a large class of signals that are
neither periodic nor square-integrable; to have a power spectral density a signal needs only to be the output of a
wide-sense stationary random process.

Different frequency domains


Although "the" frequency domain is spoken of in the singular, there are a number of different mathematical
transforms which are used to analyze time functions and are referred to as "frequency domain" methods. These are
the most common transforms, and the fields in which they are used:

Fourier series – repetitive signals, oscillating systems
Fourier transform – nonrepetitive signals, transients
Laplace transform – electronic circuits and control systems
Wavelet transform – digital image processing, signal compression
Z-transform – discrete signals, digital signal processing

More generally, one can speak of the transform domain with respect to any transform. The above transforms can be
interpreted as capturing some form of frequency, and hence the transform domain is referred to as a frequency
domain.


Discrete frequency domain


The Fourier transform of a periodic signal only has energy at a base frequency and its harmonics. Another way of
saying this is that a periodic signal can be analyzed using a discrete frequency domain. Dually, a discrete-time signal
gives rise to a periodic frequency spectrum. Combining these two, if we start with a time signal which is both
discrete and periodic, we get a frequency spectrum which is both periodic and discrete. This is the usual context for a
discrete Fourier transform.

Partial frequency-domain example


Due to popular simplifications of the hearing process and titles such as Plomp's "The Ear as a Frequency Analyzer,"
the inner ear is often thought of as converting time-domain sound waveforms to frequency-domain spectra. The
frequency domain is not actually a very accurate or useful model for hearing, but a time/frequency space or
time/place space can be a useful description.[2]

History of term
The use of the terms "frequency domain" and "time domain" arose in communication engineering in the 1950s and
early 1960s, with "frequency domain" appearing in 1953.[3] See time domain: origin of term for details.[4]

References
[1] Broughton, S.A.; Bryan, K. (2008). Discrete Fourier Analysis and Wavelets: Applications to Signal and Image Processing. New York: Wiley. p. 72.
[2] Boashash, B., ed. (2003). Time-Frequency Signal Analysis and Processing: A Comprehensive Reference. Oxford: Elsevier Science. ISBN 0080443354.
[3] Zadeh, L. A. (1953), "Theory of Filtering", Journal of the Society for Industrial and Applied Mathematics 1: 35–51.
[4] Earliest Known Uses of Some of the Words of Mathematics (T) (http://jeff560.tripod.com/t.html), Jeff Miller, March 25, 2009.

Further reading
Boashash, B. (Sept 1988). "Note on the Use of the Wigner Distribution for Time Frequency Signal Analysis". IEEE Transactions on Acoustics, Speech, and Signal Processing 36 (9): 1518–1521. doi:10.1109/29.90380.
Boashash, B. (April 1992). "Estimating and Interpreting the Instantaneous Frequency of a Signal – Part I: Fundamentals". Proceedings of the IEEE 80 (4): 519–538. doi:10.1109/5.135376.



Initial value theorem


In mathematical analysis, the initial value theorem is a theorem used to relate frequency domain expressions to the time domain behavior as time approaches zero.[1]

Let F(s) be the (one-sided) Laplace transform of f(t). The initial value theorem then says[2]

f(0+) = lim_{s→∞} s F(s)
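The limit can be approximated numerically. For f(t) = e^{−t}, whose transform is F(s) = 1/(s + 1), the theorem should recover f(0+) = 1 (a quick sketch; the function name is ours):

```python
def F(s):
    """Laplace transform of f(t) = exp(-t): F(s) = 1 / (s + 1)."""
    return 1.0 / (s + 1.0)

# Initial value theorem: f(0+) = lim_{s -> infinity} s F(s).
# Evaluating s F(s) at increasingly large s should approach f(0+) = 1.
estimates = [s * F(s) for s in (1e2, 1e4, 1e6)]
```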

Notes
[1] http://fourier.eng.hmc.edu/e102/lectures/Laplace_Transform/node17.html
[2] Robert H. Cannon, Dynamics of Physical Systems, Courier Dover Publications, 2003, page 567.

Final value theorem


In mathematical analysis, the final value theorem (FVT) is one of several similar theorems used to relate frequency
domain expressions to the time domain behavior as time approaches infinity. A final value theorem allows the time
domain behavior to be directly calculated by taking a limit of a frequency domain expression, as opposed to
converting to a time domain expression and taking its limit.
Mathematically, if f(t) has a finite limit as t → ∞, then

lim_{t→∞} f(t) = lim_{s→0} s F(s)

where F(s) is the (unilateral) Laplace transform of f(t).[1] [2]

Example where FVT holds

For example, for a system described by transfer function

H(s) = 6 / (s + 2)

the Laplace transform of the impulse response is H(s) itself, and so the impulse response converges to

lim_{t→∞} h(t) = lim_{s→0} s · 6/(s + 2) = 0.

That is, the system returns to zero after being disturbed by a short impulse. However, the Laplace transform of the unit step response is

Y(s) = (1/s) · 6/(s + 2)

and so the step response converges to

lim_{t→∞} y(t) = lim_{s→0} s · (1/s) · 6/(s + 2) = 6/2 = 3

and so a zero-state system will follow an exponential rise to a final value of 3.
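Both limits can be approximated numerically by evaluating s·F(s) at a small s, taking H(s) = 6/(s + 2) as in the example above (a quick sketch):

```python
def H(s):
    """Transfer function from the example: H(s) = 6 / (s + 2)."""
    return 6.0 / (s + 2.0)

s_small = 1e-8
# FVT applied to the impulse response (whose transform is H(s) itself):
impulse_final = s_small * H(s_small)        # s H(s) as s -> 0, expect 0
# FVT applied to the step response (whose transform is H(s)/s):
step_final = s_small * H(s_small) / s_small  # = H(s) as s -> 0, expect 3
```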


Example where FVT does not hold

However, for a system described by the transfer function

H(s) = 9 / (s^2 + 9)

the final value theorem appears to predict the final value of the impulse response to be 0 and the final value of the step response to be 1. However, neither time-domain limit exists, and so the final value theorem predictions are not valid. In fact, both the impulse response and step response oscillate, and (in this special case) the final value theorem describes the average values around which the responses oscillate.
There are two checks performed in control theory which confirm valid results for the final value theorem:
1. All roots of the denominator of F(s) must have negative real parts.
2. F(s) must not have more than one pole at the origin.
Rule 1 was not satisfied in this example, in that the roots of the denominator are +j3 and −j3.

Notes
[1] Wang, Ruye (2010-02-17). "Initial and Final Value Theorems" (http://fourier.eng.hmc.edu/e102/lectures/Laplace_Transform/node17.html). Retrieved 2011-10-21.
[2] Alan V. Oppenheim, Alan S. Willsky, S. Hamid Nawab (1997). Signals & Systems. New Jersey, USA: Prentice Hall. ISBN 0-13-814757-4.

External links
http://wikis.controltheorypro.com/index.php?title=Final_Value_Theorem
http://fourier.eng.hmc.edu/e102/lectures/Laplace_Transform/node17.html Final value for Laplace
http://www.engr.iupui.edu/~skoskie/ECE595s7/handouts/fvt_proof.pdf Final value proof for Z-transforms


Sensors
Sensor
A sensor (also called a detector) is a device that measures a physical
quantity and converts it into a signal which can be read by an observer
or by an instrument. For example, a mercury-in-glass thermometer
converts the measured temperature into expansion and contraction of a
liquid which can be read on a calibrated glass tube. A thermocouple
converts temperature to an output voltage which can be read by a
voltmeter. For accuracy, most sensors are calibrated against known
standards.
[Figure: Thermocouple sensor for high-temperature measurement.]

Use
Sensors are used in everyday objects such as touch-sensitive elevator buttons (tactile sensor) and lamps which dim or
brighten by touching the base. There are also innumerable applications for sensors of which most people are never
aware. Applications include cars, machines, aerospace, medicine, manufacturing and robotics.
A sensor is a device which receives and responds to a signal. A sensor's sensitivity indicates how much the sensor's output changes when the measured quantity changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, the sensitivity is 1 cm/°C (it is basically the slope Δy/Δx assuming a linear characteristic). Sensors that measure very small changes must have very high sensitivities. Sensors also have an impact on what they measure; for instance, a room-temperature thermometer inserted into a hot cup of liquid cools the liquid while the liquid heats the thermometer. Sensors need to be designed to have a small effect on what is measured; making the sensor smaller often improves this and may introduce other advantages. Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a significantly higher speed and sensitivity compared with macroscopic approaches.
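Sensitivity as a slope can be estimated from calibration data by least squares; for the thermometer example above the slope comes out to 1 cm/°C (a sketch; the function name and data values are illustrative):

```python
def sensitivity(outputs, inputs):
    """Least-squares slope dy/dx of a (nearly) linear sensor characteristic.
    For an ideal linear sensor this is the constant sensitivity."""
    n = len(inputs)
    mx = sum(inputs) / n
    my = sum(outputs) / n
    num = sum((x - mx) * (y - my) for x, y in zip(inputs, outputs))
    den = sum((x - mx) ** 2 for x in inputs)
    return num / den

temps_C = [20.0, 21.0, 22.0, 23.0]       # measured property (degC)
column_cm = [5.0, 6.0, 7.0, 8.0]         # mercury column position (cm)
slope = sensitivity(column_cm, temps_C)  # -> 1.0 cm per degC
```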

Classification of measurement errors


A good sensor obeys the following rules:
Is sensitive to the measured property only
Is insensitive to any other property likely to be encountered in its
application
Does not influence the measured property
[Figure: Infrared sensor.]
Ideal sensors are designed to be linear or linear to some simple
mathematical function of the measurement, typically logarithmic. The
output signal of such a sensor is linearly proportional to the value or simple function of the measured property. The

sensitivity is then defined as the ratio between output signal and measured property. For example, if a sensor
measures temperature and has a voltage output, the sensitivity is a constant with the unit [V/K]; this sensor is linear

because the ratio is constant at all points of measurement.

Sensor deviations
If the sensor is not ideal, several types of deviations can be observed:
The sensitivity may in practice differ from the value specified. This is called a sensitivity error, but the sensor is
still linear.
Since the range of the output signal is always limited, the output signal will eventually reach a minimum or
maximum when the measured property exceeds the limits. The full scale range defines the maximum and
minimum values of the measured property.
If the output signal is not zero when the measured property is zero, the sensor has an offset or bias. This is defined
as the output of the sensor at zero input.
If the sensitivity is not constant over the range of the sensor, this is called nonlinearity. Usually this is defined by
the amount the output differs from ideal behavior over the full range of the sensor, often noted as a percentage of
the full range.
If the deviation is caused by a rapid change of the measured property over time, there is a dynamic error. Often,
this behavior is described with a bode plot showing sensitivity error and phase shift as function of the frequency
of a periodic input signal.
If the output signal slowly changes independent of the measured property, this is defined as drift
(telecommunication).
Long term drift usually indicates a slow degradation of sensor properties over a long period of time.
Noise is a random deviation of the signal that varies in time.
Hysteresis is an error caused when the measured property reverses direction: there is some finite lag in
time for the sensor to respond, creating a different offset error in one direction than in the other.
If the sensor has a digital output, the output is essentially an approximation of the measured property. The
approximation error is also called digitization error.
If the signal is monitored digitally, the limited sampling frequency can also cause a dynamic error; and if the
measured variable or added noise changes periodically at a frequency near a multiple of the sampling rate,
aliasing errors may occur.
The sensor may to some extent be sensitive to properties other than the property being measured. For example,
most sensors are influenced by the temperature of their environment.
All these deviations can be classified as systematic errors or random errors. Systematic errors can sometimes be
compensated for by means of some kind of calibration strategy. Noise is a random error that can be reduced by
signal processing, such as filtering, usually at the expense of the dynamic behavior of the sensor.
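Systematic gain (sensitivity) and offset errors of the kind listed above can be compensated by a simple two-point calibration, as sketched below (the function name and the error values are illustrative):

```python
def two_point_calibration(raw_lo, raw_hi, true_lo, true_hi):
    """Return a function mapping raw sensor readings to calibrated values,
    assuming the deviation is a systematic gain-plus-offset error."""
    gain = (true_hi - true_lo) / (raw_hi - raw_lo)
    offset = true_lo - gain * raw_lo
    return lambda raw: gain * raw + offset

# A hypothetical sensor with a sensitivity error (gain 1.1 instead of 1)
# and a bias (+0.4):
measure = lambda true_value: 1.1 * true_value + 0.4

# Calibrate using two known reference points (0 and 100):
calibrate = two_point_calibration(measure(0.0), measure(100.0), 0.0, 100.0)
```

After calibration, readings anywhere in the range are corrected, since both the gain and the offset have been identified from the two reference points.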

Resolution
The resolution of a sensor is the smallest change it can detect in the quantity that it is measuring. Often in a digital
display, the least significant digit will fluctuate, indicating that changes of that magnitude are only just resolved. The
resolution is related to the precision with which the measurement is made. For example, a scanning tunneling probe
(a fine tip near a surface collects an electron tunnelling current) can resolve atoms and molecules.

Sensors in Nature
Further information: Sense
All living organisms contain biological sensors with functions similar to those of the mechanical devices described.
Most of these are specialized cells that are sensitive to:

Light, motion, temperature, magnetic fields, gravity, humidity, moisture, vibration, pressure, electrical fields,
sound, and other physical aspects of the external environment
Physical aspects of the internal environment, such as stretch, motion of the organism, and position of appendages
(proprioception)
Environmental molecules, including toxins, nutrients, and pheromones
Estimation of biomolecules interaction and some kinetics parameters
Internal metabolic milieu, such as glucose level, oxygen level, or osmolality
Internal signal molecules, such as hormones, neurotransmitters, and cytokines
Differences between proteins of the organism itself and of the environment or alien creatures.

Biosensor
In biomedicine and biotechnology, sensors which detect analytes thanks to a biological component, such as cells, protein, nucleic acid or biomimetic polymers, are called biosensors; a non-biological sensor, even an organic (carbon-chemistry) one, for biological analytes is referred to as a sensor or nanosensor (such as microcantilevers). This terminology applies for both in vitro and in vivo applications. The encapsulation of the biological component in biosensors presents a slightly different problem than ordinary sensors do; this can be done by means of a semipermeable barrier, such as a dialysis membrane or a hydrogel, or a 3D polymer matrix, which either physically constrains the sensing macromolecule or binds it chemically (the macromolecule is bound to the scaffold).[1]

References
[1] Wolfbeis, O. S. (2000). "Fiber-optic chemical sensors and biosensors." Anal Chem 72(12): 81R-89R

External links
Capacitive Position/Displacement Sensor Theory/Tutorial (http://www.capsensortheory.com)
Capacitive Position/Displacement Overview (http://www.capsensors.com)
Comparing Capacitive and Eddy-Current Sensors (http://www.lionprecision.com/tech-library/technotes/
article-0011-cve.html)
M. Kretschmar and S. Welsby (2005), Capacitive and Inductive Displacement Sensors, in Sensor Technology
Handbook, J. Wilson editor, Newnes: Burlington, MA.
C. A. Grimes, E. C. Dickey, and M. V. Pishko (2006), Encyclopedia of Sensors (10-Volume Set), American
Scientific Publishers. ISBN 1-58883-056-X
Sensors (http://www.mdpi.com/journal/sensors) - Open access journal of MDPI (http://www.mdpi.net)
M. Pohanka, O. Pavlis, and P. Skladal. Rapid Characterization of Monoclonal Antibodies using the Piezoelectric
Immunosensor (http://www.mdpi.org/sensors/papers/s7030341.pdf). Sensors 2007, 7, 341-353
SensEdu; how sensors work (http://www.sensedu.com/)
Clifford K. Ho, Alex Robinson, David R. Miller and Mary J. Davis. Overview of Sensors and Needs for
Environmental Monitoring (http://www.mdpi.net/sensors/papers/s5010004.pdf). Sensors 2005, 5, 4-37
Wireless hydrogen sensor (http://news.ufl.edu/2006/05/24/hydrogen-sensor/)
Sensors and Actuators A: Physical (http://www.elsevier.com/wps/find/journaldescription.cws_home/
504103/description#description) - Elsevier journal
Sensors and Actuators B: Chemical (http://www.elsevier.com/wps/find/journaldescription.cws_home/
504104/description#description) - Elsevier journal
Automotive Electronic Sensors (http://www.cvel.clemson.edu/auto/sensors/auto-sensors.html)


Accelerometer

An accelerometer is a device that measures proper acceleration, also
called the four-acceleration. This is not necessarily the same as the
coordinate acceleration (change of velocity of the device in
three-dimensional space), but is rather the type of acceleration
associated with the phenomenon of weight experienced by a test mass
that resides in the frame of reference of the accelerometer device. For
an example of where these types of acceleration differ, an
accelerometer will measure a value when sitting on the ground,
because masses there have weights, even though they do not change
velocity. However, an accelerometer in gravitational free fall toward
the center of the Earth will measure a value of zero because, even
though its speed is increasing, it is in a frame of reference in which it is
weightless.

[Figure: A depiction of an accelerometer designed at Sandia National Laboratories.]

An accelerometer thus measures weight per unit of (test) mass, a quantity of acceleration also known as specific
force, or g-force (although it is not a force, and these quantities are badly-named). Another way of stating this is that
by measuring weight, an accelerometer measures the acceleration of the free-fall reference frame (inertial reference
frame) relative to itself (the accelerometer). This measurable acceleration is not the ordinary acceleration of Newton
(in three dimensions), but rather four-acceleration, which is acceleration away from a geodesic path in
four-dimensional space-time.
Most accelerometers do not display the value they measure, but supply it to other devices. Real accelerometers also
have practical limitations in how quickly they respond to changes in acceleration, and cannot respond to changes
above a certain frequency of change.
Single- and multi-axis models of accelerometer are available to detect magnitude and direction of the proper
acceleration (or g-force), as a vector quantity, and can be used to sense orientation (because direction of weight
changes), coordinate acceleration (so long as it produces g-force or a change in g-force), vibration, shock, and falling
(a case where the proper acceleration changes, since it tends toward zero). Micromachined accelerometers are
increasingly present in portable electronic devices and video game controllers, to detect the position of the device or
provide for game input.
Pairs of accelerometers extended over a region of space can be used to detect differences (gradients) in the proper
accelerations of frames of references associated with those points. These devices are called gravity gradiometers, as
they measure gradients in the gravitational field. Such pairs of accelerometers in theory may also be able to detect
gravitational waves.

Physical principles
An accelerometer measures proper acceleration, which is the acceleration it experiences relative to freefall and is the
acceleration felt by people and objects. Put another way, at any point in spacetime the equivalence principle
guarantees the existence of a local inertial frame, and an accelerometer measures the acceleration relative to that
frame.[1] Such accelerations are popularly measured in terms of g-force.
An accelerometer at rest relative to the Earth's surface will indicate approximately 1 g upwards, because any point on
the Earth's surface is accelerating upwards relative to the local inertial frame (the frame of a freely falling object near
the surface). To obtain the acceleration due to motion with respect to the Earth, this "gravity offset" must be
subtracted and corrections for effects caused by the Earth's rotation relative to the inertial frame.

The reason for the appearance of a gravitational offset is Einstein's equivalence principle,[2] which states that the
effects of gravity on an object are indistinguishable from acceleration. When held fixed in a gravitational field by,
for example, applying a ground reaction force or an equivalent upward thrust, the reference frame for an
accelerometer (its own casing) accelerates upwards with respect to a free-falling reference frame. The effects of this
acceleration are indistinguishable from any other acceleration experienced by the instrument, so that an
accelerometer cannot detect the difference between sitting in a rocket on the launch pad, and being in the same
rocket in deep space while it uses its engines to accelerate at 1 g. For similar reasons, an accelerometer will read zero
during any type of free fall. This includes use in a coasting spaceship in deep space far from any mass, a spaceship
orbiting the Earth, an airplane in a parabolic "zero-g" arc, or any free-fall in vacuum. Another example is free-fall at
a sufficiently high altitude that atmospheric effects can be neglected.
However this does not include a (non-free) fall in which air resistance produces drag forces that reduce the
acceleration, until constant terminal velocity is reached. At terminal velocity the accelerometer will indicate 1 g
acceleration upwards. For the same reason a skydiver, upon reaching terminal velocity, does not feel as though he or
she were in "free-fall", but rather experiences a feeling similar to being supported (at 1 g) on a "bed" of uprushing
air.
Acceleration is quantified in the SI unit metres per second squared (m/s²), in the CGS unit gal (Gal), or popularly in
terms of g-force (g).
For the practical purpose of finding the acceleration of objects with respect to the Earth, such as for use in an inertial
navigation system, a knowledge of local gravity is required. This can be obtained either by calibrating the device at
rest,[3] or from a known model of gravity at the approximate current position.
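Calibrating at rest can be as simple as averaging the magnitude of readings taken while the device is known to be stationary. A minimal sketch of that idea (the function name and sample data are hypothetical):

```python
import math

def calibrate_gravity(stationary_samples):
    """Estimate local gravity as the mean magnitude of readings at rest."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in stationary_samples]
    return sum(mags) / len(mags)

# Hypothetical stationary readings in m/s^2 (small noise on x and y):
samples = [(0.02, -0.01, 9.80), (0.01, 0.00, 9.82), (-0.02, 0.01, 9.81)]
g_local = calibrate_gravity(samples)  # close to 9.81 m/s^2
```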

Structure
Conceptually, an accelerometer behaves as a damped mass on a spring. When the accelerometer experiences an
acceleration, the mass is displaced to the point that the spring is able to accelerate the mass at the same rate as the
casing. The displacement is then measured to give the acceleration.
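At steady state the spring force balances the inertial force, so the measured displacement maps linearly to acceleration. A sketch of that relation (the component values are illustrative, not from a real device):

```python
def acceleration_from_displacement(x, k, m):
    """Steady-state spring-mass model: k * x = m * a  ->  a = (k / m) * x."""
    return (k / m) * x

# Illustrative values: a 1 mg proof mass on a 1 N/m spring,
# displaced by 10 nm, implies about 0.01 m/s^2 of acceleration.
a = acceleration_from_displacement(10e-9, k=1.0, m=1e-6)
```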
In commercial devices, piezoelectric, piezoresistive and capacitive components are commonly used to convert the
mechanical motion into an electrical signal. Piezoelectric accelerometers rely on piezoceramics (e.g. lead zirconate
titanate) or single crystals (e.g. quartz, tourmaline). They are unmatched in terms of their upper frequency range, low
packaged weight and high temperature range. Piezoresistive accelerometers are preferred in high shock applications.
Capacitive accelerometers typically use a silicon micro-machined sensing element. Their performance is superior in
the low frequency range and they can be operated in servo mode to achieve high stability and linearity.
Modern accelerometers are often small micro electro-mechanical systems (MEMS), and are indeed the simplest
MEMS devices possible, consisting of little more than a cantilever beam with a proof mass (also known as seismic
mass). Damping results from the residual gas sealed in the device. As long as the Q-factor is not too low, damping
does not result in a lower sensitivity.
Under the influence of external accelerations the proof mass deflects from its neutral position. This deflection is
measured in an analog or digital manner. Most commonly, the capacitance between a set of fixed beams and a set of
beams attached to the proof mass is measured. This method is simple, reliable, and inexpensive. Integrating
piezoresistors in the springs to detect spring deformation, and thus deflection, is a good alternative, although a few
more process steps are needed during the fabrication sequence. For very high sensitivities quantum tunneling is also
used; this requires a dedicated process making it very expensive. Optical measurement has been demonstrated on
laboratory scale.
Another, far less common, type of MEMS-based accelerometer contains a small heater at the bottom of a very small
dome, which heats the air inside the dome to cause it to rise. A thermocouple on the dome determines where the
heated air reaches the dome and the deflection off the center is a measure of the acceleration applied to the sensor.

Most micromechanical accelerometers operate in-plane, that is, they are designed to be sensitive only to a direction
in the plane of the die. By integrating two devices perpendicularly on a single die a two-axis accelerometer can be
made. By adding an additional out-of-plane device three axes can be measured. Such a combination always has a
much lower misalignment error than three discrete models combined after packaging.
Micromechanical accelerometers are available in a wide variety of measuring ranges, reaching up to thousands of
g's. The designer must make a compromise between sensitivity and the maximum acceleration that can be measured.

Applications
Engineering
Accelerometers can be used to measure vehicle acceleration. They allow for performance evaluation of both the
engine/drive train and the braking systems.
Accelerometers can be used to measure vibration on cars, machines, buildings, process control systems and safety
installations. They can also be used to measure seismic activity, inclination, machine vibration, dynamic distance and
speed with or without the influence of gravity. Applications for accelerometers that measure gravity, wherein an
accelerometer is specifically configured for use in gravimetry, are called gravimeters.
Notebook computers equipped with accelerometers can contribute to the Quake-Catcher Network (QCN), a BOINC
project aimed at scientific research of earthquakes.[4]

Biology
Accelerometers are also increasingly used in the biological sciences. High-frequency recordings of bi-axial[5] or
tri-axial acceleration[6] (>10 Hz) allow the discrimination of behavioral patterns while animals are out of sight.
Furthermore, recordings of acceleration allow researchers to quantify the rate at which an animal is expending
energy in the wild, either by determination of limb-stroke frequency[7] or by measures such as overall dynamic body
acceleration.[8] Such approaches have mostly been adopted by marine scientists, owing to the inability to study animals in
the wild by visual observation; however, an increasing number of terrestrial biologists are adopting similar
approaches.
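A metric in the spirit of overall dynamic body acceleration can be computed by treating a per-axis running mean as the static (gravity) component and summing the absolute residuals. This is only a sketch of the general idea, with an arbitrary smoothing window:

```python
def odba(ax, ay, az, window=5):
    """ODBA-style metric: per-axis running mean is taken as the static
    (gravity) component; the result sums the absolute dynamic residuals."""
    def dynamic(series):
        out = []
        for i in range(len(series)):
            lo = max(0, i - window // 2)
            hi = min(len(series), i + window // 2 + 1)
            static = sum(series[lo:hi]) / (hi - lo)  # local static estimate
            out.append(abs(series[i] - static))
        return out
    dx, dy, dz = dynamic(ax), dynamic(ay), dynamic(az)
    return [x + y + z for x, y, z in zip(dx, dy, dz)]
```

A larger window folds more of the animal's movement into the "static" estimate; real studies choose it to match the behaviour of interest.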

Industry
Accelerometers are also used for machinery health monitoring, reporting the vibration and its changes over time of
shafts at the bearings of rotating equipment such as turbines, pumps,[9] fans,[10] rollers,[11] compressors,[12] and
cooling towers.[13] Vibration monitoring programs have been proven to warn of impending failure, save money, reduce
downtime, and improve safety in plants worldwide by detecting conditions such as wear and tear of bearings, shaft
misalignment, rotor imbalance, gear failure[14] or bearing fault,[15] which, if not attended to promptly, can lead to
costly repairs. Accelerometer vibration data allow the user to monitor machines and detect these faults before the
rotating equipment fails completely. Vibration monitoring programs are used in industries such as automotive
manufacturing,[16] machine tool applications,[17] pharmaceutical production,[18] power generation[19] and power
plants,[20] pulp and paper,[21] sugar mills, food and beverage production, water and wastewater, hydropower,
petrochemical and steel manufacturing.
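A vibration-monitoring program of this kind ultimately reduces to tracking a vibration metric against alarm limits. A minimal sketch using the RMS of a vibration record (the limit and data are hypothetical):

```python
import math

def vibration_rms(samples):
    """Root-mean-square of a vibration record (mean removed first)."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

def needs_inspection(samples, limit_rms):
    """Flag a bearing or shaft when its vibration RMS exceeds an alarm limit."""
    return vibration_rms(samples) > limit_rms
```

Real programs trend such metrics over weeks or months, so a slowly rising RMS can be caught long before the alarm limit is reached.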

Building and structural monitoring


Accelerometers are used to measure the motion and vibration of a structure that is exposed to dynamic loads.[22]
Dynamic loads originate from a variety of sources including:

Human activities: walking, running, dancing or skipping
Working machines inside a building or in the surrounding area
Construction work: driving piles, demolition, drilling and excavating
Moving loads on bridges
Vehicle collisions
Impact loads: falling debris
Concussion loads: internal and external explosions
Collapse of structural elements
Wind loads and wind gusts
Air blast pressure
Loss of support because of ground failure
Earthquakes and aftershocks

Measuring and recording how a structure responds to these inputs is critical for assessing the safety and viability of a
structure. This type of monitoring is called Dynamic Monitoring.

Medical applications
Zoll's AED Plus uses CPR-D-padz, which contain an accelerometer to measure the depth of CPR chest compressions.
Within the last several years, Nike, Polar and other companies have produced and marketed sports watches for
runners that include footpods, containing accelerometers to help determine the speed and distance for the runner
wearing the unit.
In Belgium, accelerometer-based step counters are promoted by the government to encourage people to walk a few
thousand steps each day.
Herman Digital Trainer uses accelerometers to measure strike force in physical training.[23] [24]

Navigation
An Inertial Navigation System (INS) is a navigation aid that uses a computer and motion sensors (accelerometers)
to continuously calculate via dead reckoning the position, orientation, and velocity (direction and speed of
movement) of a moving object without the need for external references. Other terms used to refer to inertial
navigation systems or closely related devices include inertial guidance system, inertial reference platform, and
many other variations.
An accelerometer alone is unsuitable for determining changes in altitude over distances where the vertical decrease of
gravity is significant, such as for aircraft and rockets. In the presence of a gravitational gradient, the calibration and
data-reduction process is numerically unstable.[25][26]
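Dead reckoning from accelerometer data amounts to integrating acceleration twice. The one-dimensional Euler sketch below illustrates the principle; in practice, integration errors accumulate quickly, which is why the calibration and gravity-model issues above matter so much:

```python
def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """Integrate gravity-compensated 1-D acceleration samples twice
    (simple Euler steps) to track velocity and position."""
    v, x = v0, x0
    for a in accels:
        v += a * dt   # velocity is the integral of acceleration
        x += v * dt   # position is the integral of velocity
    return v, x

# Constant 2 m/s^2 for 1 s in 0.1 s steps gives v close to 2 m/s.
v, x = dead_reckon([2.0] * 10, dt=0.1)
```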


Transport
Accelerometers are used to detect apogee in both professional[27] and amateur[28] rocketry.
Accelerometers are also being used in Intelligent Compaction rollers. Accelerometers are used alongside gyroscopes
in inertial guidance systems.[29]
One of the most common uses for MEMS accelerometers is in airbag deployment systems for modern automobiles.
In this case the accelerometers are used to detect the rapid negative acceleration of the vehicle to determine when a
collision has occurred and the severity of the collision. Another common automotive use is in electronic stability
control systems, which use a lateral accelerometer to measure cornering forces. The widespread use of
accelerometers in the automotive industry has pushed their cost down dramatically.[30] Another automotive
application is the monitoring of noise, vibration and harshness (NVH), conditions that cause discomfort for drivers
and passengers and may also be indicators of mechanical faults.
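Airbag-style crash detection boils down to thresholding the deceleration signal. The sketch below uses invented threshold values for illustration; requiring the reading to stay above the threshold for several consecutive samples keeps a single noise spike from firing the system:

```python
def crash_detected(decel_samples, threshold_g=40.0, min_consecutive=3):
    """Fire only when deceleration (in g) stays above the threshold for
    several consecutive samples, rejecting isolated spikes."""
    run = 0
    for g in decel_samples:
        run = run + 1 if g > threshold_g else 0
        if run >= min_consecutive:
            return True
    return False
```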
Tilting trains use accelerometers and gyroscopes to calculate the required tilt.[31]

Volcanology
Modern electronic accelerometers are used in remote sensing devices intended for the monitoring of active volcanoes
to detect the motion of magma.[32]

Consumer electronics
Accelerometers are increasingly being incorporated into personal
electronic devices.
Motion input
Some smartphones, digital audio players and personal digital assistants
contain accelerometers for user interface control; often the
accelerometer is used to present landscape or portrait views of the
device's screen, based on the way the device is being held.
Automatic Collision Notification (ACN) systems also use
accelerometers in a system to call for help in event of a vehicle crash.
Prominent ACN systems include Onstar AACN service, Ford Link's
911 Assist, Toyota's Safety Connect, Lexus Link, or BMW Assist.
Many accelerometer-equipped smartphones also have ACN software
available for download. ACN systems are activated by detecting
crash-strength G-forces.
Nintendo's Wii video game console uses a controller called a Wii
Remote that contains a three-axis accelerometer and was designed
primarily for motion input. Users also have the option of buying an
additional motion-sensitive attachment, the Nunchuk, so that motion
input can be recorded from both of the user's hands independently. An accelerometer is
also used in the Nintendo 3DS system.

Galaxy Nexus, an example of a smartphone with a built-in accelerometer.[33]

The Sony PlayStation 3 uses the DualShock 3 controller, which contains a three-axis accelerometer that can be used to make
steering more realistic in racing games such as MotorStorm and Burnout Paradise.
The Nokia 5500 sport features a 3D accelerometer that can be accessed from software. It is used for step recognition
(counting) in a sport application, and for tap gesture recognition in the user interface. Tap gestures can be used for
controlling the music player and the sport application, for example changing to the next song by tapping through
clothing when the device is in a pocket. Other uses for accelerometer in Nokia phones include Pedometer
functionality in Nokia Sports Tracker. Some other devices provide the tilt sensing feature with a cheaper component,
which is not a true accelerometer.
Sleep-phase alarm clocks use accelerometric sensors to detect the movement of a sleeper, so that they can wake the person
when he or she is not in a REM phase and therefore awakes more easily.
Orientation sensing
A number of 21st century devices use accelerometers to align the screen depending on the direction the device is
held, for example switching between portrait and landscape modes. Such devices include many tablet PCs and some
smartphones and digital cameras.
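Orientation sensing of this kind usually reduces to checking which in-plane axis gravity dominates while the device is held still. A sketch, with an assumed axis and sign convention (real devices differ):

```python
def screen_orientation(ax, ay):
    """Pick an orientation from whichever in-plane axis gravity dominates.
    Assumed convention: +y points toward the top of the screen, +x right."""
    if abs(ay) >= abs(ax):
        return "portrait" if ay < 0 else "portrait-upside-down"
    return "landscape-left" if ax < 0 else "landscape-right"
```

Real implementations also apply hysteresis and ignore readings taken while the device is in motion, so the screen does not flip on every jolt.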
For example, Apple uses the LIS302DL accelerometer in the iPhone, iPod Touch and the 4th- and 5th-generation iPod
Nano, allowing the device to know when it is tilted on its side. Third-party developers have expanded its use with
fanciful applications such as electronic bobbleheads.[34] The BlackBerry Storm phone was also an early user of this
orientation sensing feature.
The Nokia N95 and Nokia N82 have accelerometers embedded inside them. The accelerometer was primarily used as a tilt sensor for
tagging the orientation of photos taken with the built-in camera; later, thanks to a firmware update, it became available
to other applications.
As of January 2009, almost all new mobile phones and digital cameras contain at least a tilt sensor and sometimes an
accelerometer for the purpose of auto image rotation, motion-sensitive mini-games, and to correct shake when taking
photographs.
Image stabilization
Camcorders use accelerometers for image stabilization. Still cameras use accelerometers for anti-blur capturing. The
camera holds off snapping the CCD "shutter" when the camera is moving. When the camera is still (if only for a
millisecond, as could be the case for vibration), the CCD is "snapped". An example application which has used such
technology is the Glogger VS2,[35] a phone application which runs on Symbian OS based phone with accelerometer
such as the Nokia N96. Some digital cameras contain accelerometers to determine the orientation of the photo being
taken and also for rotating the current picture when viewing.
Device integrity
Many laptops feature an accelerometer which is used to detect drops. If a drop is detected, the heads of the hard disk
are parked to avoid data loss and possible head or disk damage by the ensuing shock.
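Drop detection exploits the fact, noted earlier, that an accelerometer reads near zero during free fall. A sketch (the threshold is an illustrative value in g):

```python
import math

def is_falling(ax, ay, az, threshold=0.4):
    """In free fall an accelerometer reads near 0 g; the heads should be
    parked when the total magnitude (in g) drops below a small threshold."""
    return math.sqrt(ax * ax + ay * ay + az * az) < threshold

# On a desk the reading is ~1 g, so not falling; dropped, it is ~0 g.
```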

Gravimetry
A gravimeter, or gravitometer, is an instrument used in gravimetry for measuring the local gravitational field. A
gravimeter is a type of accelerometer, except that accelerometers are susceptible to all vibrations, including noise,
that cause oscillatory accelerations. This is counteracted in the gravimeter by integral vibration isolation and signal
processing. Though the essential principle of design is the same as in accelerometers, gravimeters are typically
designed to be much more sensitive than accelerometers in order to measure very tiny changes within the Earth's
gravity of 1 g. In contrast, other accelerometers are often designed to measure 1000 g or more, and many perform
multi-axial measurements. The constraints on temporal resolution are usually less severe for gravimeters, so that resolution
can be increased by processing the output with a longer "time constant".


Types of accelerometer

Piezoelectric accelerometer
Shear mode accelerometer
Surface micromachined capacitive (MEMS)
Thermal (submicrometre CMOS process)
Bulk micromachined capacitive
Bulk micromachined piezoelectric resistive
Capacitive spring mass base
Electromechanical servo (Servo Force Balance)
Null-balance
Strain gauge
Resonance
Magnetic induction
Optical
Surface acoustic wave (SAW)
Laser accelerometer
DC response

High temperature
Low frequency
High gravity
Triaxial
Modally tuned impact hammers
Seat pad accelerometers
Pendulous integrating gyroscopic accelerometer

References
[1] Einstein, Albert (1920). "20" (http:/ / www. bartleby. com/ 173/ 20. html). Relativity: The Special and General Theory. New York: Henry
Holt. p. 168. ISBN 1-58734-092-5.
[2] Penrose, Roger (2005) [2004]. "17.4 The Principle of Equivalence". The Road to Reality. New York: Knopf. pp. 393–394.
ISBN 0-470-08578-9.
[3] "Accelerometer Design and Applications" (http:/ / www. analog. com/ en/ technical-library/ faqs/ design-center/ faqs/ CU_faq_MEMs/
resources/ fca. html). Analog Devices. . Retrieved 2008-12-23.
[4] "Quake-Catcher Network Downloads" (http:/ / qcn. stanford. edu/ downloads/ index. php). Quake-Catcher Network. . Retrieved 15 July
2009. "If you have a Mac laptop (2006 or later), a Thinkpad (2003 or later), or a desktop with a USB sensor, you can download software to
turn your computer into a Quake-Catcher Sensor"
[5] Yoda et al. (2001) Journal of Experimental Biology 204(4): 685–690
[6] Shepard et al. (2008) Endangered Species Research http:/ / www. int-res. com/ articles/ esr2008/ theme/ Tracking/ TMVpp1. pdf
[7] Kawabe et al. (2003) Fisheries Science 69(5): 959–965
[8] Wilson et al. (2006) Journal of Animal Ecology 75(5): 1081–1090
[9] http:/ / www. wilcoxon. com/ knowdesk/ Know%20the%20health%20of%20your%20pumps. pdf
[10] http:/ / www. wilcoxon. com/ knowdesk/ Guidance%20for%20mounting%204-20mA%20sensors%20on%20fans. pdf
[11] http:/ / www. wilcoxon. com/ knowdesk/ Vibration%20monitoring%20of%20slow%20speed%20rollers. pdf
[12] http:/ / www. wilcoxon. com/ knowdesk/ LF%20VM%20on%20compressor%20gear%20set. pdf
[13] http:/ / www. wilcoxon. com/ knowdesk/ PT104%20VM%20of%20cooling%20towers%20and%20fans. pdf
[14] http:/ / www. wilcoxon. com/ knowdesk/ gear. pdf
[15] http:/ / www. wilcoxon. com/ knowdesk/ bearing. pdf
[16] http:/ / www. wilcoxon. com/ knowdesk/ auto. pdf
[17] http:/ / www. wilcoxon. com/ knowdesk/ rep11. pdf
[18] http:/ / www. wilcoxon. com/ knowdesk/ pharmac. pdf
[19] http:/ / www. wilcoxon. com/ knowdesk/ rep9. pdf
[20] http:/ / www. wilcoxon. com/ knowdesk/ pwrplnt. pdf

[21] http:/ / www. wilcoxon. com/ knowdesk/ pulp_pap. pdf
[22] O. Sircovich Saar "Dynamics in the Practice of Structural Design" 2006 WIT Press ISBN 1-84564-161-2
[23] The Contender 3 Episode 1 SPARQ testing ESPN
[24] Welcome to GoHerman.com innovator of interactive personal training for fitness, MARTIAL ARTS & MMA (http:/ / www. goherman.
com/ martialarts. aspx). Goherman.com. Retrieved on 17 October 2011.
[25] ''Vertical Speed Measurement'', by Ed Hahn in sci.aeronautics.airliners, 1996-11-22 (http:/ / yarchive. net/ air/ airliners/ ins_novert. html).
Yarchive.net. Retrieved on 17 October 2011.
[26] US patent 6640165 (http:/ / worldwide. espacenet. com/ textdoc?DB=EPODOC& IDX=US6640165), Hayward, Kirk W. and Stephenson,
Larry G., "Method and system of determining altitude of flying object", issued 2003-10-28
[27] Dual Deployment (http:/ / westrocketry. com/ articles/ DualDeploy/ DualDeployment. html). Westrocketry.com. Retrieved on 17 October
2011.
[28] PICO altimeter (http:/ / www. picoalt. com/ ). Picoalt.com. Retrieved on 17 October 2011.
[29] "Design of an integrated strapdown guidance and control system for a tactical missile" WILLIAMS, D. E.RICHMAN, J.FRIEDLAND, B.
(Singer Co., Kearfott Div., Little Falls, NJ) AIAA-1983-2169 IN: Guidance and Control Conference, Gatlinburg, TN, August 15–17, 1983,
Collection of Technical Papers (A83-41659 1963). New York, American Institute of Aeronautics and Astronautics, 1983, p. 57-66.
[30] http:/ / mafija. fmf. uni-lj. si/ seminar/ files/ 2007_2008/ MEMS_accelerometers-koncna. pdf
[31] Tilting trains shorten transit time (http:/ / www. memagazine. org/ backissues/ membersonly/ june98/ features/ tilting/ tilting. html).
Memagazine.org. Retrieved on 17 October 2011.
[32] USGS volcano monitoring (http:/ / vulcan. wr. usgs. gov/ Glossary/ Seismicity/ description_seismic_monitoring. html).
Vulcan.wr.usgs.gov (18 May 1980). Retrieved on 17 October 2011.
[33] http:/ / www. google. com/ nexus/ #/ tech-specs
[34] Fun with the iPhone accelerometer (http:/ / blog. medallia. com/ 2007/ 08/ fun_with_the_iphone_accelerome. html). Blog.medallia.com.
Retrieved on 17 October 2011.
[35] Glogger (http:/ / m. eyetap. org). M.eyetap.org (17 February 2007). Retrieved on 17 October 2011.

External links
Thinking About Accelerometers and Gravity by Dave Redell, LUNAR #322 (http://www.lunar.org/docs/
LUNARclips/v5/v5n1/Accelerometers.html)
Practical Guide to Accelerometers (http://www.sensr.com/pdf/practical-guide-to-accelerometers.pdf)
How to Design an Accelerometer (http://www.memsuniverse.com/?page_id=1548)
Different types of Accelerometer (http://www.sagem-ds.com/eng/site.php?spage=02010301)
Considerations When Selecting an Accelerometer (http://www.pcb.com/techsupport/docs/vib/
TN_17_VIB-0805.pdf)
Introduction to Accelerometer Basics: Designs, Conditioning, and Mounting (http://www.pcb.com/
techsupport/tech_accel.php)
Database of acceleration data (http://www.opensignals.net/index.php?title=Accelerometry)


Capacitive sensing
In electrical engineering, capacitive sensing is a technology based on capacitive coupling that is used in many
different types of sensors, including those to detect and measure proximity, position or displacement, humidity, fluid
level, and acceleration. Capacitive sensing as a human interface device (HID) technology, for example to replace the
computer mouse, is growing increasingly popular.[1] Capacitive touch sensors are used in many devices such as
laptop trackpads, digital audio players, computer displays, mobile phones, mobile devices, tablets and others. Design
engineers increasingly select capacitive sensors for their versatility, reliability and robustness, unique
human-device interface and cost reduction relative to mechanical switches.
Capacitive sensors detect anything that is conductive or has a dielectric different from that of air. While capacitive
sensing applications can replace mechanical buttons with capacitive alternatives, other technologies such as
multi-touch and gesture-based touchscreens are also premised on capacitive sensing.[2]

Sensor design
Capacitive sensors can be constructed from many different media, such as copper, Indium tin oxide (ITO) and
printed ink. Copper capacitive sensors can be implemented on standard FR4 PCBs as well as on flexible material.
ITO allows the capacitive sensor to be up to 90% transparent (for one layer solutions, such as touch phone screens).
The size and spacing of the capacitive sensor are both very important to the sensor's performance. In addition to the
size of the sensor, and its spacing relative to the ground plane, the type of ground plane used is very important. Since
the parasitic capacitance of the sensor is related to the electric field's (e-field) path to ground, it is important to
choose a ground plane that limits the concentration of e-field lines with no conductive object present.
Designing a capacitance sensing system requires first picking the type of sensing material (FR4, Flex, ITO, etc.).
One also needs to understand the environment the device will operate in, such as the full operating temperature
range, what radio frequencies are present and how the user will interact with the interface.
There are two types of capacitive sensing system: mutual capacitance,[3] where the object (finger, conductive stylus)
alters the mutual coupling between row and column electrodes, which are scanned sequentially;[4] and self- or
absolute capacitance where the object (such as a finger) loads the sensor or increases the parasitic capacitance to
ground. In both cases, the difference of a preceding absolute position from the present absolute position yields the
relative motion of the object or finger during that time. The technologies are elaborated in the following section.
Surface capacitance
In this basic technology, only one side of the insulator is coated with a conductive layer. A small voltage is applied
to the conductive layer, resulting in a uniform electrostatic field. When a conductor, such as a human finger, touches
the uncoated surface, a capacitor is dynamically formed. Due to the sheet resistance of the surface, each corner is
measured to have a different effective capacitance. The sensor's controller can determine the location of the touch
indirectly from the change in the capacitance as measured from the four corners of the panel; the larger the change in
capacitance, the closer the touch is to that corner. As it has no moving parts, it is moderately durable. But it has
limited resolution, is prone to false signals from parasitic capacitive coupling, and needs calibration during
manufacture. It is therefore most often used in simple applications such as industrial controls and kiosks.[5]
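The four-corner location estimate described above can be sketched as a weighted average of the corner capacitance changes. The normalization here is a simplified illustration; real controllers apply calibration and linearization on top:

```python
def touch_position(c_tl, c_tr, c_bl, c_br):
    """Estimate a normalized (x, y) touch location from the capacitance
    change at the four panel corners: the nearer the touch to a corner,
    the larger that corner's share of the total change."""
    total = c_tl + c_tr + c_bl + c_br
    x = (c_tr + c_br) / total   # weight of the right-hand corners
    y = (c_bl + c_br) / total   # weight of the bottom corners
    return x, y
```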

Projected capacitance
Projected capacitive touch (PCT) technology is a capacitive technology which allows more accurate and flexible
operation, by etching the conductive layer. An X-Y grid is formed either by etching one layer to form a grid pattern
of electrodes, or by etching two separate, perpendicular layers of conductive material with parallel lines or tracks to
form the grid; comparable to the pixel grid found in many liquid crystal displays (LCD).[6]
The greater resolution of PCT allows operation with no direct contact, such that the conducting layers can be coated
with further protective insulating layers, and operate even under screen protectors, or behind weather and
vandal-proof glass. Due to the top layer of a PCT being glass, PCT is a more robust solution versus resistive touch
technology. Depending on the implementation, an active or passive stylus can be used instead of or in addition to a
finger. This is common with point of sale devices that require signature capture. Gloved fingers may or may not be
sensed, depending on the implementation and gain settings. Conductive smudges and similar interference on the
panel surface can interfere with the performance. Such conductive smudges come mostly from sticky or sweaty
finger tips, especially in high humidity environments. Collected dust, which adheres to the screen due to the moisture
from fingertips can also be a problem. There are two types of PCT: self-capacitance and mutual capacitance.
Mutual capacitance
Mutual capacitive sensors have a capacitor at each intersection of each row and each column. A 12-by-16 array, for
example, would have 192 independent capacitors. A voltage is applied to the rows or columns. Bringing a finger or
conductive stylus near the surface of the sensor changes the local electric field which reduces the mutual capacitance.
The capacitance change at every individual point on the grid can be measured to accurately determine the touch
location by measuring the voltage in the other axis. Mutual capacitance allows multi-touch operation where multiple
fingers, palms or styli can be accurately tracked at the same time.
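Because each row-column intersection is measured independently, multi-touch detection is essentially a scan for cells whose capacitance change exceeds a threshold. A sketch over a toy grid of measured changes (units arbitrary):

```python
def find_touches(delta_c, threshold):
    """Scan a mutual-capacitance grid (rows x cols of capacitance drops)
    and report every intersection whose change exceeds the threshold.
    Multiple fingers produce multiple, independently resolvable hits."""
    return [(r, c)
            for r, row in enumerate(delta_c)
            for c, d in enumerate(row)
            if d > threshold]

grid = [[0, 0, 0],
        [0, 5, 0],
        [0, 0, 4]]
touches = find_touches(grid, threshold=2)  # [(1, 1), (2, 2)]
```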
Self-capacitance
Self-capacitance sensors can have the same X-Y grid as mutual capacitance sensors, but the columns and rows
operate independently. With self-capacitance, current senses the capacitive load of a finger on each column or row.
This produces a stronger signal than mutual capacitance sensing, but it is unable to resolve accurately more than one
finger, which results in "ghosting", or misplaced location sensing.

Circuit design
Capacitance is typically measured indirectly, by using it to control the frequency of an oscillator, or to vary the level
of coupling (or attenuation) of an AC signal.
The design of a simple capacitance meter is often based on a relaxation oscillator. The capacitance to be sensed
forms a portion of the oscillator's RC circuit or LC circuit. Basically the technique works by charging the unknown
capacitance with a known current. (The equation of state for a capacitor is i = C dv/dt. This means that the
capacitance equals the current divided by the rate of change of voltage across the capacitor.) The capacitance can be
calculated by measuring the charging time required to reach the threshold voltage (of the relaxation oscillator), or
equivalently, by measuring the oscillator's frequency. Both of these are proportional to the RC (or LC) time constant
of the oscillator circuit.
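The constant-current relation i = C dv/dt rearranges directly into the measurement a relaxation-oscillator capacitance meter performs: the sketch below converts a measured charging time into capacitance (component values are illustrative):

```python
def capacitance_from_charge_time(current_a, time_s, v_threshold):
    """From i = C dv/dt with a constant charging current:
    C = I * t / V_threshold (time to ramp from 0 V to the threshold)."""
    return current_a * time_s / v_threshold

# 10 uA charging to a 1 V threshold in 1 ms implies C = 10 nF.
C = capacitance_from_charge_time(10e-6, 1e-3, 1.0)
```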
The primary source of error in capacitance measurements is stray capacitance, which if not guarded against, may
fluctuate between roughly 10 pF and 10 nF. The stray capacitance can be held relatively constant by shielding the
(high impedance) capacitance signal and then connecting the shield to (a low impedance) ground reference. Also, to
minimize the unwanted effects of stray capacitance, it is good practice to locate the sensing electronics as near the
sensor electrodes as possible.
Another measurement technique is to apply a fixed-frequency AC-voltage signal across a capacitive divider. This
consists of two capacitors in series, one of a known value and the other of an unknown value. An output signal is
then taken from across one of the capacitors. The value of the unknown capacitor can be found from the ratio of
capacitances, which equals the ratio of the output/input signal amplitudes, as could be measured by an AC voltmeter.
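For the series divider, equal charge on the two capacitors means the voltage splits in inverse proportion to capacitance, so measuring the amplitude across the known capacitor yields the unknown value. A sketch under that assumption:

```python
def unknown_capacitance(c_known, v_in, v_out_across_known):
    """Series capacitive divider: with the output taken across the known
    capacitor, Vout / Vin = C_unknown / (C_known + C_unknown), so
    C_unknown = C_known * Vout / (Vin - Vout)."""
    return c_known * v_out_across_known / (v_in - v_out_across_known)

# Equal capacitors split the signal in half: 100 pF known, Vout = Vin/2.
C = unknown_capacitance(100e-12, v_in=2.0, v_out_across_known=1.0)
```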
More accurate instruments may use a capacitance bridge configuration, similar to a Wheatstone bridge.[7] The
capacitance bridge helps to compensate for any variability that may exist in the applied signal.

Comparison with other touchscreen technologies


Since capacitive screens respond only to conductive materials (most commonly a human finger), they
can be cleaned with cloths without accidental command input. Capacitive touchscreens are more responsive than
resistive touchscreens.[8]
A standard stylus cannot be used for capacitive sensing unless it is tipped with some form of conductive material,
such as anti-static conductive foam.[9] However, capacitive styli, as distinct from standard styli, can be used as well
as finger input on capacitive screens. Capacitive touchscreens are more expensive to manufacture and offer a
significantly lower degree of accuracy than resistive touchscreens.[8] Some cannot be used with gloves, and can fail
to sense correctly with even a small amount of water on the screen.
Power supplies with high electronic noise can reduce accuracy.

Capacitive stylus
A capacitive stylus is a special type of stylus that works on capacitive touchscreens primarily designed for fingers,
such as those on the iPhone and most Android devices. Capacitive styli differ from standard styli designed for resistive touchscreens.
According to a report by ABI Research, styli are especially needed in China for handwriting recognition because of
the nature of its writing system.[10]

References
[1] Larry K. Baxter (1996). Capacitive Sensors (http://books.google.com/books?id=Tjd2laRnO4wC&pg=PA138&dq=capacitive+sensors+mouse&lr=&as_brr=3&as_pt=ALLTYPES&ei=sMOASeONJIHmyATKlLWzCw). John Wiley and Sons. p. 138. ISBN 9780780353510.
[2] Wilson, Tracy. "HowStuffWorks "Multi-touch Systems"" (http://electronics.howstuffworks.com/iphone2.htm). Retrieved August 9, 2009.
[3] US Pat. Nos. 5,305,017 and 5,861,875
[4] e.g. U.S. Pat. No. 4,736,191
[5] "Please Touch! Explore The Evolving World Of Touchscreen Technology" (http://electronicdesign.com/Articles/Index.cfm?AD=1&ArticleID=18592). electronicdesign.com. Retrieved 2009-09-02.
[6] "Capacitive Touch (Touch Sensing Technologies - Part 2)" (http://www.touchadvance.com/2011/06/capacitive-touch-touch-sensing.html). www.TouchAdvance.com. Retrieved 2011-20-2011.
[7] http://newton.ex.ac.uk/teaching/CDHW/Sensors/#Capacitance - Basic impedance measurement techniques
[8] Doku, Ernest. "Resistive Vs. Capacitive Touchscreens, What's The Difference?" (http://mobilenews.omio.com/mobile-phone-guides/resistive-vs-capacitive-touchascreens-whats-the-difference/). Retrieved March 31, 2010.
[9] http://pocketnow.com/tweaks-hacks/how-to-make-a-free-capacitive-stylus
[10] http://www.intomobile.com/2008/12/15/abi-research-capacitive-touchscreens-not-the-wave-of-the-future-for-most-mobile-phones/


External links
Build A Touch-Sensor Solution For Wet Environments (http://electronicdesign.com/Articles/Index.
cfm?AD=1&ArticleID=19873)
Capacitive Touch Sensing Design (http://www.slideshare.net/daniel_smith/
capsense-programmable-capacitive-touch-sensing-design-in-minutes-presentation)
Capacitive sensor theory (http://www.lionprecision.com/tech-library/technotes/cap-0020-sensor-theory.html)
- How Capacitive Sensors Work and How to Use Them Effectively
Apple: Accessibility Solutions for iPhone, iPad, and iPod touch (http://www.apple.com/accessibility/
resources/iphone.html)
CNET News: HTC patents stylus for capacitive screens (http://news.cnet.com/8301-17938_105-10310397-1.
html)
Dagi's capacitive stylus (http://www.youtube.com/watch?v=ai4Sr1ktzVY) (video)
GottaBeMobile: iPad Stylus Query (http://www.gottabemobile.com/2010/07/28/ipad-stylus-query/)
(comparative review)
The New York Times: Q&A: Can a Stylus Work on an iPhone? (http://gadgetwise.blogs.nytimes.com/2009/
08/19/qa-can-a-stylus-work-on-an-iphone/)
MSP430 Capacitive Touch Microcontroller from Texas Instruments (http://ti.com/captouch)

Capacitive displacement sensor


Capacitive displacement sensors are non-contact devices capable of high-resolution measurement of the position
and/or change of position of any conductive target.[1] They are also able to measure the thickness or density of
non-conductive materials.[2] Capacitive displacement sensors are used in a wide variety of applications including
semiconductor processing, assembly of precision equipment such as disk drives, precision thickness measurements,
machine tool metrology and assembly line testing. These types of sensors can be found in machining and
manufacturing facilities around the world.

Basic capacitive theory


Capacitance is an electrical property which is created by applying an electrical charge to two conductive objects with
a gap between them. This property is most commonly illustrated using the example of two parallel conductive plates
with a gap between them and a charge applied to them. In this situation, the capacitance can be expressed by the
equation:[3]

C = ε₀KA/d

where C is the capacitance, ε₀ is the permittivity of free space, K is the dielectric constant of the material in
the gap, A is the area of the plates, and d is the distance between the plates.
A capacitive sensing system for conductive materials uses a similar model, but in place of one of the conductive
plates, is the sensor, and in place of the other, is the conductive target to be measured. Since the area of the probe and
target remain constant, and the dielectric of the material in the gap (usually air) also remains constant, "any change in
capacitance is a result of a change in the distance between the probe and the target." [4] Therefore, the equation above
can be simplified to:

C ∝ 1/d

where ∝ indicates a proportional relationship. Due to this proportional relationship, a capacitive sensing system is
able to measure changes in capacitance and translate these changes into distance measurements.
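As an illustration of this relationship, the parallel-plate model can be used both ways: to predict the capacitance for a given gap, and to recover the gap from a capacitance reading. The probe geometry below (a 10 mm diameter probe, air gap) is made up for the sketch, not taken from the text:

```python
# Sketch: parallel-plate capacitance C = eps0 * K * A / d, and its
# inversion to recover the probe-target distance.  Probe area and
# dielectric constant are illustrative values only.
import math

EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(area_m2, gap_m, dielectric_k=1.0):
    """Parallel-plate capacitance in farads."""
    return EPSILON_0 * dielectric_k * area_m2 / gap_m

def gap_from_capacitance(c_farads, area_m2, dielectric_k=1.0):
    """Invert the model: recover the probe-target gap from a reading."""
    return EPSILON_0 * dielectric_k * area_m2 / c_farads

# A 10 mm diameter probe 100 micrometres from a grounded target:
area = math.pi * (5e-3) ** 2
c = capacitance(area, 100e-6)          # a few picofarads
d = gap_from_capacitance(c, area)      # recovers the 100 um gap
```

In practice the electronics measure a change in capacitance rather than an absolute value, but the same inverse relationship applies.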

Capacitive displacement sensor


For nonconducting materials, the difference between the dielectric constant of air and that of the material is used to
change the capacitance between two fixed plates.

Applications
Precision positioning
One of the more common applications of capacitive sensors is for precision positioning. Capacitive displacement
sensors can be used to measure the position of objects down to the nanometer level. This type of precise positioning
is used in the semiconductor industry where silicon wafers need to be positioned for exposure. Capacitive sensors are
also used to pre-focus the electron microscopes used in testing and examining the wafers.

Disc drive industry


In the disc drive industry, capacitive displacement sensors are used to measure the runout (a measure of how much
the axis of rotation deviates from an ideal fixed line) of disc drive spindles. By knowing the exact runout of these
spindles, disc drive manufacturers are able to determine the maximum amount of data that can be placed onto the
drives. Capacitive sensors are also used to ensure that disc drive platters are orthogonal to the spindle before data is
written to them.

Precision thickness measurements


Capacitive displacement sensors can be used to make very precise thickness measurements. Capacitive displacement
sensors operate by measuring changes in position. If the position of a reference part of known thickness is measured,
other parts can be subsequently measured and the differences in position can be used to determine the thickness of
these parts.[5] In order for this to be effective using a single probe, the parts must be completely flat and measured on
a perfectly flat surface. If the part to be measured has any curvature or deformity, or simply does not rest firmly
against the flat surface, the distance between the part to be measured and the surface it is placed upon will be
erroneously included in the thickness measurement. This error can be eliminated by using two capacitive sensors to
measure a single part. Capacitive sensors are placed on either side of the part to be measured. By measuring the parts
from both sides, curvature and deformities are taken into account in the measurement and their effects are not
included in the thickness readings.
The thickness of plastic materials can be measured with the material placed between two electrodes a set distance
apart. These form a type of capacitor. The plastic when placed between the electrodes acts as a dielectric and
displaces air (which has dielectric constant of 1, different than the plastic). Consequently the capacitance between
the electrodes changes. The capacitance changes can then be measured and correlated with the material's thickness.[6]
Capacitive sensor circuits can be constructed that are able to detect changes in capacitance on the order of 10⁻⁵
picofarads (10 attofarads).[7]
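The two-sided measurement described above reduces to simple arithmetic: with two opposing probes a known distance apart, the part thickness is that distance minus the two measured gaps. A sketch with illustrative numbers:

```python
# Sketch of the two-probe thickness scheme: two capacitive sensors face
# each other a known distance apart, and the part thickness is that
# distance minus the two measured gaps.  All numbers are illustrative.

def thickness(probe_spacing_m, gap_a_m, gap_b_m):
    """Part thickness from two opposing gap measurements."""
    return probe_spacing_m - gap_a_m - gap_b_m

# Probes 5.000 mm apart; sensor A reads a 1.200 mm gap, sensor B 1.300 mm:
t = thickness(5.000e-3, 1.200e-3, 1.300e-3)   # 2.5 mm part
```

Because each probe measures its own gap, any tilt or curvature of the part changes the two gaps in opposite directions and cancels out of the sum, which is why the two-probe arrangement removes the flatness requirement.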

Non-conductive targets
While capacitive displacement sensors are most often used to sense changes in position of conductive targets, they
can also be used to sense the thickness and/or density of non-conductive targets.[4] A non-conductive object placed
between the probe and a conductive target will have a different dielectric constant than the air in the gap and will
therefore change the capacitance between probe and target (see the first equation above). By analyzing this change
in capacitance, the thickness and density of the non-conductor can be determined.
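One common idealization of this situation (an assumption of this sketch, not spelled out in the text) treats the air gap and the slab as two capacitors in series, giving C = ε₀A / ((d − t) + t/K) for a slab of thickness t and dielectric constant K in a total gap d. Inverting that expression recovers the slab thickness from the measured capacitance:

```python
# Sketch: a non-conductive slab of thickness t and dielectric constant K
# in an air gap d modelled as two series capacitors,
#   C = eps0 * A / ((d - t) + t / K).
# Illustrative series-capacitor model, not a specific sensor's formula.

EPS0 = 8.854e-12  # permittivity of free space, F/m

def cap_with_slab(area, gap, slab_t, k):
    """Capacitance with a dielectric slab partially filling the gap."""
    return EPS0 * area / ((gap - slab_t) + slab_t / k)

def slab_thickness(c, area, gap, k):
    """Solve (gap - t) + t/K = eps0*A/c for the slab thickness t."""
    return (gap - EPS0 * area / c) * k / (k - 1.0)

area, gap, k = 1e-4, 1e-3, 3.0        # 1 cm^2 plates, 1 mm gap, K = 3
c = cap_with_slab(area, gap, 0.4e-3, k)
t = slab_thickness(c, area, gap, k)   # recovers the 0.4 mm slab
```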


Machine tool metrology


Capacitive displacement sensors are often used in metrology applications. In many cases, sensors are used to
measure shape errors in the part being produced. But they also can measure the errors arising in the equipment used
to manufacture the part, a practice known as machine tool metrology.[8] In many cases, the sensors are used to
analyze and optimize the rotation of spindles in various machine tools, examples include surface grinders, lathes,
milling machines, and air bearing spindles.[9] By measuring errors in the machines themselves, rather than simply
measuring errors in the final products, problems can be dealt with and fixed earlier in the manufacturing process.

Assembly line testing


Capacitive displacement sensors are often used in assembly line testing. Sometimes they are used to test assembled
parts for uniformity, thickness or other design features. At other times, they are used to simply look for the presence
or absence of a certain component, such as glue.[10] Using capacitive sensors to test assembly line parts can help to
prevent quality concerns further along in the production process.

Comparison to eddy current displacement sensors


Capacitive displacement sensors share many similarities with eddy current (or inductive) displacement sensors;
however, capacitive sensors use an electric field as opposed to the magnetic field used by eddy current
sensors.[11][12] This leads to a variety of differences between the two sensing technologies, the most notable being
that capacitive sensors are generally capable of higher-resolution measurements, and that eddy current sensors work
in dirty environments while capacitive sensors do not.[11]

References
[1] Lion Precision Capacitive Sensor Overview (http://www.lionprecision.com/capacitive-sensors/index.html#apps), an overview of capacitive sensing technology from Lion Precision.
[2] Jon S. Wilson (2005). Sensor Technology Handbook (http://books.google.com/books?id=fdeToUK8edMC&pg=PT94). Newnes. p. 94. ISBN 0750677295.
[3] Paul Allen Tipler (1982). Physics, Second Edition. Worth Publishers. pp. 653-660. ISBN 0879011351.
[4] Capacitive Sensor Operation and Optimization - How Capacitive Sensors Work and How to Use Them Effectively (http://www.lionprecision.com/tech-library/technotes/cap-0020-sensor-theory.html), an in-depth discussion of capacitive sensor theory from Lion Precision.
[5] Capacitive Thickness Measurements (http://www.lionprecision.com/tech-library/appnotes/app-flash/f-cap-0030-thickness.html), a tutorial on capacitive thickness measurements.
[6] Film thickness gauge (http://www.lionprecision.com/tech-library/appnotes/dual-0010-film-thick.html)
[7] U.S. Patent (http://www.patsnap.com/patents/view/US4947131.html)
[8] Lawrence Livermore National Laboratory: Engineering Precision into Laboratory Projects (https://www.llnl.gov/str/Blaedel.html), examples of advances made by LLNL in the field of precision measurement.
[9] Eric R. Marsh (2009). Precision Spindle Metrology. Destech Pubns Inc. ISBN 1605950033.
[10] Sensing Glue on Paper (http://www.lionprecision.com/tech-library/appnotes/app-flash/f-cap-0040-glue.html), a tutorial on using capacitive sensors for glue sensing.
[11] Lion Precision Capacitive Eddy Current Comparison (http://www.lionprecision.com/tech-library/technotes/tech-flash/cve/f-article-0011-cve.html), a comparison between capacitive and eddy current sensing technology from Lion Precision.
[12] Users Manual for Siemens Capacitive Sensors, p. 54 (http://www.tech-pedia.com/Electrical_Engineering/Circuit Components/proximity sensors/capacitive/snrs_3.pdf)


External links
Medical Engineering (http://www.tectrends.com/cgi/showan?an=00174300) - Patient Monitoring Using
Capacitive Sensors
Capacitive Sensors for Motion Control (http://www.capacitance-sensors.com/capacitive_sensor_tutorial.htm)
- Tutorial on Capacitive Sensors for Nanopositioning Applications
Capacitive Sensor Theory (http://www.lionprecision.com/tech-library/technotes/cap-0020-sensor-theory.
html) - How Capacitive Sensors Work and How to Use Them Effectively

Current sensor
A current sensor is a device that detects electrical current (AC or DC) in a wire and generates a signal proportional
to it. The generated signal can be an analog voltage or current, or even a digital output. It can then be used to display
the measured current in an ammeter, stored for further analysis in a data acquisition system, or used for control
purposes.
The sensed current and the output signal can be:
AC current input,
analog output, which duplicates the wave shape of the sensed current
bipolar output, which duplicates the wave shape of the sensed current
unipolar output, which is proportional to the average or RMS value of the sensed current
DC current input,
unipolar, with a unipolar output, which duplicates the wave shape of the sensed current
digital output, which switches when the sensed current exceeds a certain threshold

Technologies

Hall effect IC sensor.


Transformer or Current clamp meter, (suitable for AC current only).
Resistor, whose voltage is directly proportional to the current through it.
Fiber optic current sensor, using an interferometer to measure the phase change in the light produced by a
magnetic field.
Rogowski coil, electrical device for measuring alternating current (AC) or high speed current pulses.
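As a minimal illustration of the resistive option listed above (not any specific device's interface), the current is inferred from the voltage drop across a known sense resistance via Ohm's law:

```python
# Sketch of shunt-resistor current sensing: I = V / R for a sense
# resistor of known value.  The shunt value and reading are illustrative.

def current_from_shunt(v_drop_v, r_shunt_ohm):
    """Current through the shunt, from the measured voltage drop."""
    return v_drop_v / r_shunt_ohm

# 75 mV measured across a 1 milliohm shunt corresponds to 75 A:
i = current_from_shunt(0.075, 0.001)
```

The shunt is deliberately made small so that its voltage drop (and power dissipation) barely disturbs the circuit being measured.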


Electro-optical sensor
Electro-optical sensors are electronic detectors that convert light, or a change in light, into an electronic signal.
They are used in many industrial and consumer applications, for example:
Lamps that turn on automatically in response to darkness
Position sensors that activate when an object interrupts a light beam
Flash detection, to synchronize one photographic flash to another

Galvanometer
A galvanometer is a type of ammeter: an instrument for detecting and
measuring electric current. It is an analog electromechanical transducer
that produces a rotary deflection of some type of pointer in response to
electric current flowing through its coil in a magnetic field.

D'Arsonval/Weston galvanometer movement with the moving coil shown in orange.

Galvanometers were the first instruments used to detect and measure electric currents. Sensitive galvanometers were
used to detect signals from long submarine cables, and were used to discover the electrical activity of the heart and
brain. Some galvanometers used a solid pointer on a scale to show measurements; other very sensitive types used a
tiny mirror and a beam of light to provide mechanical amplification of tiny signals. Initially a laboratory instrument
relying on the Earth's own magnetic field to provide restoring force for the pointer, galvanometers were developed
into compact, rugged, sensitive portable instruments that were essential to the development of electrotechnology. A
type of galvanometer that permanently recorded measurements was the chart recorder. The term has expanded to
include uses of the same mechanism in recording, positioning, and servomechanism equipment.

History
The deflection of a magnetic compass needle by current in a wire was first described by Hans Oersted in 1820. The
phenomenon was studied both for its own sake and as a means of measuring electrical current. The earliest
galvanometer was reported by Johann Schweigger at the University of Halle on 16 September 1820. André-Marie
Ampère also contributed to its development. Early designs increased the effect of the magnetic field due to the
current by using multiple turns of wire; the instruments were at first called "multipliers" due to this common design
feature. The term "galvanometer", in common use by 1836, was derived from the surname of Italian electricity
researcher Luigi Galvani, who discovered in 1791 that electric current could make a frog's leg jerk.
Originally the instruments relied on the Earth's magnetic field to provide the restoring force for the compass needle;
these were called "tangent" galvanometers and had to be oriented before use. Later instruments of the "astatic" type
used opposing magnets to become independent of the Earth's field and would operate in any orientation. The most
sensitive form, the Thompson or mirror galvanometer, was invented by William Thomson (Lord Kelvin) and
patented by him in 1858. Instead of a compass needle, it used tiny magnets attached to a small lightweight mirror,
suspended by a thread; the deflection of a beam of light greatly magnified the deflection due to small currents.
Alternatively the deflection of the suspended magnets could be observed directly through a microscope.
The ability to quantitatively measure voltage and current allowed Georg Ohm to formulate Ohm's Law, which states
that the voltage across a conductor is directly proportional to the current through it.

The early moving-magnet form of galvanometer had the disadvantage that it was affected by any magnets or iron
masses near it, and its deflection was not linearly proportional to the current. In 1882 Jacques-Arsène d'Arsonval and
Marcel Deprez developed a form with a stationary permanent magnet and a moving coil of wire, suspended by fine
wires which provided both an electrical connection to the coil and the restoring torque to return to the zero position.
An iron tube within the coil concentrated the magnetic field. A mirror attached to the coil deflected a beam of light
to indicate the coil position. The concentrated magnetic field and delicate suspension made these instruments
sensitive; d'Arsonval's initial instrument could detect ten microamperes.[1]
Edward Weston extensively improved the design. He replaced the fine wire suspension with a pivot, and provided
restoring torque and electrical connections through spiral springs rather like those in a wristwatch balance wheel. He
developed a method of stabilizing the magnetic field of the permanent magnet, so that the instrument would have
consistent accuracy over time. He replaced the light beam and mirror with a knife-edge pointer, which could be
directly read; a mirror under the pointer and in the same plane as the scale eliminated parallax error in observation.
To maintain the field strength, Weston's design used a very narrow slot in which the coil was mounted, with a
minimal air-gap and soft iron pole pieces; this made the deflection of the instrument more linear with respect to coil
current. Finally, the coil was wound on a light former made of conductive metal, which acted as a damper. By 1888
Edward Weston had patented and brought out a commercial form of this instrument, which became a standard
component in electrical equipment. It was known as the "portable" instrument because it was little affected by
mounting position or by transporting it from place to place. This design is almost universally used in moving-coil
meters today.

Operation
The most familiar use is as an analog measuring instrument, often
called a meter. It is used to measure the direct current (flow of electric
charge) through an electric circuit. The D'Arsonval/Weston form used
today is constructed with a small pivoting coil of wire in the field of a
permanent magnet. The coil is attached to a thin pointer that traverses a
calibrated scale. A tiny torsion spring pulls the coil and pointer to the
zero position.
When a direct current (DC) flows through the coil, the coil generates a
magnetic field. This field acts against the permanent magnet. The coil
twists, pushing against the spring, and moves the pointer. The hand
points at a scale indicating the electric current. Careful design of the pole pieces ensures that the magnetic field is
uniform, so that the angular deflection of the pointer is proportional to the current. A useful meter generally contains
provision for damping the mechanical resonance of the moving coil and pointer, so that the pointer settles quickly to
its position without oscillation.

D'Arsonval/Weston galvanometer movement. Part of the magnet's left pole piece is broken out to show the coil.
The basic sensitivity of a meter might be, for instance, 100 microamperes full scale (with a voltage drop of, say, 50
millivolts at full current). Such meters are often calibrated to read some other quantity that can be converted to a
current of that magnitude. The use of current dividers, often called shunts, allows a meter to be calibrated to measure
larger currents. A meter can be calibrated as a DC voltmeter if the resistance of the coil is known by calculating the
voltage required to generate a full scale current. A meter can be configured to read other voltages by putting it in a
voltage divider circuit. This is generally done by placing a resistor in series with the meter coil. A meter can be used
to read resistance by placing it in series with a known voltage (a battery) and an adjustable resistor. In a preparatory
step, the circuit is completed and the resistor adjusted to produce full scale deflection. When an unknown resistor is
placed in series in the circuit the current will be less than full scale and an appropriately calibrated scale can display
the value of the previously-unknown resistor.
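The scaling arithmetic described above can be sketched numerically. The 100 microampere / 50 mV movement is the text's own example (implying a 500 ohm coil); the target current and voltage ranges below are illustrative:

```python
# Sketch of meter-range scaling: a parallel shunt extends the current
# range, a series multiplier resistor makes a voltmeter.  The movement
# is the text's example (100 uA full scale, 50 mV drop); target ranges
# are illustrative.

def coil_resistance(v_full_scale, i_full_scale):
    """Coil resistance from the full-scale voltage drop and current."""
    return v_full_scale / i_full_scale

def shunt_for_range(i_range, i_fs, r_coil):
    """Parallel shunt so i_range total gives full-scale deflection."""
    return i_fs * r_coil / (i_range - i_fs)

def multiplier_for_range(v_range, i_fs, r_coil):
    """Series resistor so v_range across the pair gives full deflection."""
    return v_range / i_fs - r_coil

r_coil = coil_resistance(50e-3, 100e-6)              # 500 ohms
r_shunt = shunt_for_range(1.0, 100e-6, r_coil)       # ~0.05 ohms for 1 A
r_mult = multiplier_for_range(10.0, 100e-6, r_coil)  # 99.5 kohms for 10 V
```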


Because the pointer of the meter is usually a small distance above the scale of the meter, parallax error can occur
when the operator attempts to read the scale line that "lines up" with the pointer. To counter this, some meters
include a mirror along the markings of the principal scale. The accuracy of the reading from a mirrored scale is
improved by positioning one's head while reading the scale so that the pointer and the reflection of the pointer are
aligned; at this point, the operator's eye must be directly above the pointer and any parallax error has been
minimized.

Types
Today the main type of galvanometer
mechanism still used is the moving coil
D'Arsonval/Weston mechanism, which is
used in traditional analog meters.

Thompson reflecting galvanometer.

Tangent galvanometer
A tangent galvanometer is an early measuring instrument used for the
measurement of electric current. It works by using a compass needle to
compare a magnetic field generated by the unknown current to the magnetic
field of the Earth. It gets its name from its operating principle, the tangent law
of magnetism, which states that the tangent of the angle a compass needle
makes is proportional to the ratio of the strengths of the two perpendicular
magnetic fields. It was first described by Claude Pouillet in 1837.
A tangent galvanometer consists of a coil of insulated copper wire wound on
a circular non-magnetic frame. The frame is mounted vertically on a
horizontal base provided with levelling screws. The coil can be rotated on a
vertical axis passing through its centre. A compass box is mounted horizontally at the centre of a circular scale. It
consists of a tiny, powerful magnetic needle pivoted at the centre of the coil. The magnetic needle is free to rotate in
the horizontal plane. The circular scale is divided into four quadrants, each graduated from 0° to 90°. A long thin
aluminium pointer is attached to the needle at its centre and at right angles to it. To avoid errors due to parallax, a
plane mirror is mounted below the compass needle.

Tangent galvanometer made by J.H. Bunnell Co. around 1890.
In operation, the instrument is first rotated until the magnetic field of the Earth, indicated by the compass needle, is
parallel with the plane of the coil. Then the unknown current is applied to the coil. This creates a second magnetic
field on the axis of the coil, perpendicular to the Earth's magnetic field. The compass needle responds to the vector
sum of the two fields, and deflects to an angle equal to the tangent of the ratio of the two fields. From the angle read
from the compass's scale, the current could be found from a table.[2]
The current supply wires have to be wound in a small helix, like a pig's tail, otherwise the field due to the wire will
affect the compass needle and an incorrect reading will be obtained.


Theory
The galvanometer is oriented so that the plane of the coil is parallel to the local magnetic meridian, that is, to the
horizontal component B_H of the Earth's magnetic field. When a current passes through the galvanometer coil, a
second magnetic field B perpendicular to the coil is created, of strength:

B = μ₀nI / (2r)

where I is the current in amperes, n is the number of turns of the coil and r is the radius of the coil. These two
perpendicular magnetic fields add vectorially, and the compass needle points along the direction of their resultant, at
an angle of:

θ = arctan(B / B_H)

From the tangent law, B = B_H tan θ, i.e.

μ₀nI / (2r) = B_H tan θ

or

I = (2rB_H / μ₀n) tan θ

or

I = K tan θ, where K = 2rB_H / (μ₀n) is called the reduction factor of the tangent galvanometer.
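These relations are easy to evaluate numerically. The coil geometry and horizontal field strength below are illustrative values, not from the text:

```python
# Sketch of the tangent-galvanometer relations: B = mu0*n*I/(2r) at the
# coil centre, and I = K * tan(theta) with reduction factor
# K = 2*r*B_H / (mu0*n).  Coil geometry and B_H are illustrative.
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def reduction_factor(radius_m, turns, b_horizontal_t):
    """K = 2*r*B_H / (mu0*n)."""
    return 2 * radius_m * b_horizontal_t / (MU0 * turns)

def current_from_deflection(theta_deg, k):
    """I = K * tan(theta)."""
    return k * math.tan(math.radians(theta_deg))

# 10 cm radius coil, 50 turns, B_H = 30 microtesla:
k = reduction_factor(0.10, 50, 30e-6)
i45 = current_from_deflection(45.0, k)   # at 45 degrees, I equals K
```

The 45° case makes the resolution argument below concrete: near 45° the tangent changes roughly in proportion to the angle, whereas near 0° or 90° a large change in current barely moves the needle.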

One problem with the tangent galvanometer is that its resolution degrades at both high currents and low currents.
The maximum resolution is obtained when the value of θ is 45°. When θ is close to 0° or 90°, a large percentage
change in the current will only move the needle a few degrees.
Geomagnetic field measurement
A tangent galvanometer can also be used to measure the magnitude of the horizontal component of the geomagnetic
field. When used in this way, a low-voltage power source, such as a battery, is connected in series with a rheostat,
the galvanometer, and an ammeter. The galvanometer is first aligned so that the coil is parallel to the geomagnetic
field, whose direction is indicated by the compass when there is no current through the coils. The battery is then
connected and the rheostat is adjusted until the compass needle deflects 45 degrees from the geomagnetic field,
indicating that the magnitude of the magnetic field at the center of the coil is the same as that of the horizontal
component of the geomagnetic field. This field strength can be calculated from the current as measured by the
ammeter, the number of turns of the coil, and the radius of the coils.
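At the 45° deflection described above, the coil's field equals the horizontal geomagnetic component, so B_H follows directly from the coil-centre field formula. A sketch with illustrative coil and current values:

```python
# Sketch of the geomagnetic measurement: with the needle deflected 45
# degrees, the coil field equals the horizontal geomagnetic component,
# so B_H = mu0 * n * I / (2 * r).  All numbers are illustrative.
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def horizontal_field(current_a, turns, radius_m):
    """Coil-centre field; equals B_H when the deflection is 45 degrees."""
    return MU0 * turns * current_a / (2 * radius_m)

# 95.5 mA through 50 turns of 10 cm radius at 45 degrees deflection:
b_h = horizontal_field(95.5e-3, 50, 0.10)   # roughly 30 microtesla
```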


Astatic galvanometer
The astatic galvanometer was developed by Leopoldo Nobili in
1825.[3]
Unlike a compass-needle galvanometer, the astatic galvanometer has
two magnetic needles parallel to each other, but with the magnetic
poles reversed. The needle assembly is suspended by a silk thread, and
has no net magnetic dipole moment. It is not affected by the earth's
magnetic field. The lower needle is inside the current sensing coils and
is deflected by the magnetic field created by the passing current.

Mirror galvanometer
Extremely sensitive measuring equipment once used mirror galvanometers that substituted a mirror for the pointer.
A beam of light reflected from the mirror acted as a long, massless pointer. Such instruments were used as receivers
for early trans-Atlantic telegraph systems, for instance. The moving beam of light could also be used to
make a record on a moving photographic film, producing a graph of current versus time, in a device called an
oscillograph. The string galvanometer was a type of mirror galvanometer so sensitive that it was used to make the
first electrocardiogram of the electrical activity of the human heart.

Ballistic galvanometer
A Ballistic galvanometer is an instrument with a high inertia, arranged so that its deflection is proportional to the
total charge sent through the meter's coil.

Uses
Past uses
A major early use for galvanometers was for finding faults in telecommunications cables. They were superseded in
this application late in the 20th century by time-domain reflectometers.
Probably the largest use of galvanometers was the D'Arsonval/Weston type movement used in analog meters in
electronic equipment. Since the 1980s, galvanometer-type analog meter movements have been displaced by analog
to digital converters (ADCs) for some uses. A digital panel meter (DPM) contains an analog to digital converter and
numeric display.

An automatic exposure unit from an 8 mm movie camera, based on a galvanometer mechanism (center) and a CdS
photoresistor in the opening at left.


The advantages of a digital instrument are higher precision and
accuracy, but factors such as power consumption or cost may still
favor application of analog meter movements.
Galvanometer mechanisms were also used to position the pens in
analog strip chart recorders such as used in electrocardiographs,
electroencephalographs and polygraphs. Strip chart recorders with
galvanometer driven pens may have a full scale frequency response of
100 Hz and several centimeters of deflection. The writing mechanism
may be a heated tip on the needle writing on heat-sensitive paper, or a
hollow ink-fed pen. In some types the pen is continuously pressed
against the paper, so the galvanometer must be strong enough to move
the pen against the friction of the paper. In other types, such as the
Rustrak recorders, the needle is only intermittently pressed against the
writing medium; at that moment, an impression is made and then the
pressure is removed, allowing the needle to move to a new position and
the cycle repeats. In this case, the galvanometer need not be especially
strong.

Modern closed-loop galvanometer-driven laser scanning mirror from Scanlab.

Galvanometer mechanisms were also used in exposure mechanisms in film cameras.

Modern uses
Most modern uses for the galvanometer mechanism are in positioning and control systems. Galvanometer
mechanisms are divided into moving magnet and moving coil galvanometers; in addition, they are divided into
closed-loop and open-loop - or resonant - types.
Mirror galvanometer systems are used as beam positioning or beam steering elements in laser scanning systems. For
example, for material processing with high-power lasers, mirror galvanometers are typically high-power galvanometer
mechanisms used with closed loop servo control systems. The newest galvanometers designed for beam steering
applications can have frequency responses over 10 kHz with appropriate servo technology. Examples of
manufacturers of such systems are Cambridge Technology Inc. (www.camtech.com) - now part of General Scanning
(www.gsig.com) - and Scanlab (www.scanlab.de). Closed-loop mirror galvanometers are also used in
stereolithography, in laser sintering, in laser engraving, in laser beam welding, in laser TV, in laser displays, and in
imaging applications such as Optical Coherence Tomography (OCT) retinal scanning. Almost all of these
galvanometers are of the moving magnet type.
Open loop, or resonant mirror galvanometers, are mainly used in laser-based barcode scanners, in some printing
machines, in some imaging applications, in military applications, and in space systems. Their non-lubricated
bearings are especially of interest in applications that require a high vacuum.
A galvanometer mechanism is used for the head positioning servos in hard disk drives and CD and DVD players.
These are all of the moving coil type, in order to keep mass, and thus access times, as low as possible.


References
[1] Joseph F. Keithley, The Story of Electrical and Magnetic Measurements: From 500 B.C. to the 1940s, John Wiley and Sons, 1999, ISBN 0780311930, pp. 196-198.
[2] Tangent Galvanometer (http://physics.kenyon.edu/EarlyApparatus/Electrical_Measurements/Tangent_Galvanometer/Tangent_Galvanometer.html)
[3] Greenslade, Thomas. "Instruments for Natural Philosophy - Astatic Galvanometer" (http://physics.kenyon.edu/EarlyApparatus/Electrical_Measurements/Astatic_Galvanometer/Astatic_Galvanometer.html). Retrieved 2010-12-19.

External links
Galvanometer - Interactive Java Tutorial (http://www.magnet.fsu.edu/education/tutorials/java/galvanometer/
index.html) National High Magnetic Field Laboratory
Selection of historic galvanometer (http://vlp.mpiwg-berlin.mpg.de/technology/search?-max=10&-title=1&
-op_varioid=numerical&varioid=7) in the Virtual Laboratory of the Max Planck Institute for the History of
Science

Hall effect sensor


A Hall effect sensor is a transducer that varies its output voltage
in response to a magnetic field. Hall effect sensors are used for
proximity switching, positioning, speed detection, and current
sensing applications.
In its simplest form, the sensor operates as an analogue transducer,
directly returning a voltage. With a known magnetic field, its
distance from the Hall plate can be determined. Using groups of
sensors, the relative position of the magnet can be deduced.
Electricity carried through a conductor will produce a magnetic
field that varies with current, and a Hall sensor can be used to
measure the current without interrupting the circuit. Typically, the
sensor is integrated with a wound core or permanent magnet that
surrounds the conductor to be measured.
Frequently, a Hall sensor is combined with circuitry that allows
the device to act in a digital (on/off) mode, and may be called a
switch in this configuration. Commonly seen in industrial
applications such as the pictured pneumatic cylinder, they are also
used in consumer equipment; for example some computer printers
use them to detect missing paper and open covers. When high
reliability is required, they are used in keyboards.

The magnetic piston (1) in this pneumatic cylinder will cause the Hall effect sensors (2 and 3) mounted on its outer wall to activate when it is fully retracted or extended.


Clutch with Hall Effect sensor.

Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine
ignition timing, tachometers and anti-lock braking systems. They are used in brushless DC electric motors to detect
the position of the permanent magnet. In the pictured wheel with two equally spaced magnets, the voltage from the
sensor will peak twice for each revolution. This arrangement is commonly used to regulate the speed of disc drives.
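As a rough numerical sketch of this speed-sensing arrangement (the function and figures are illustrative, not from any particular product): each magnet passing the sensor yields one voltage pulse, so the rotation rate follows directly from the pulse rate.

```python
def shaft_rpm(pulse_count, interval_s, magnets_per_rev=2):
    """Rotation speed implied by Hall sensor pulses counted over an interval."""
    revolutions = pulse_count / magnets_per_rev
    return revolutions / interval_s * 60.0

# With two magnets on the wheel, 40 pulses in 0.5 s means 2400 RPM.
print(shaft_rpm(40, 0.5))
```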

Hall probe
A Hall probe contains an indium compound semiconductor crystal such as indium antimonide, mounted on an
aluminum backing plate, and encapsulated in the probe head. The plane of the crystal is perpendicular to the probe
handle. Connecting leads from the crystal are brought down through the handle to the circuit box.
When the Hall Probe is held so that the magnetic field lines are passing at right angles through the sensor of the
probe, the meter gives a reading of the value of magnetic flux density (B). A current is passed through the crystal
which, when placed in a magnetic field has a Hall effect voltage developed across it. The Hall effect is seen when a
conductor is passed through a uniform magnetic field. The natural electron drift of the charge carriers causes the
magnetic field to apply a Lorentz force (the force exerted on a charged particle in an electromagnetic field) to these
charge carriers. The result is what is seen as a charge separation, with a build up of either positive or negative
charges on the bottom or on the top of the plate. The crystal measures 5 mm square. The probe handle, being made of a non-ferrous material, has no disturbing effect on the field.
A Hall probe is sensitive enough to measure the Earth's magnetic field. It must be held so that the Earth's field lines are passing directly through it, and is then rotated quickly so the field lines pass through the sensor in the opposite direction. The change in the flux density reading is double the Earth's magnetic flux density. A Hall probe must first be calibrated against a known value of magnetic field strength; for a solenoid, the Hall probe is placed in the center.
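The charge separation described above builds up until its electric force balances the Lorentz force, giving the classic Hall voltage V_H = IB/(nqt). A minimal sketch (the example values are illustrative, not taken from the text):

```python
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs

def hall_voltage(current_a, field_t, carrier_density_m3, thickness_m):
    """Hall voltage V_H = I*B / (n*q*t) across a current-carrying plate."""
    return (current_a * field_t) / (
        carrier_density_m3 * ELEMENTARY_CHARGE * thickness_m)

# Illustrative values: 1 mA drive, 0.1 T field, low carrier density, 0.1 mm plate.
print(hall_voltage(1e-3, 0.1, 1e22, 1e-4))  # on the order of 0.6 mV
```

The low carrier density of semiconductors such as indium antimonide is what makes the voltage large enough to measure, which is why Hall probes use them rather than ordinary metals.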


Hall Effect Sensor Interface


Hall effect sensors may require analog circuitry to be interfaced to microprocessors. These interfaces may include input diagnostics, fault protection for transient conditions, and short/open-circuit detection. They may also provide and monitor the current to the Hall effect sensor itself. There are precision IC products available to handle these features.

Further reading
Ed Ramsden (2006). Hall-Effect Sensors: Theory and Applications [1] (2nd, illustrated ed.). Elsevier. ISBN 0750679344.
R. S. Popović (2004). Hall Effect Devices [2] (2nd, illustrated ed.). CRC Press. ISBN 0750308559.
Classical Hall effect in scanning gate experiments: A. Baumgartner et al., Phys. Rev. B 74, 165426 (2006),
doi:10.1103/PhysRevB.74.165426

External links
AH Hall Effect Sensor [3]

References
[1] http://books.google.com/books?id=R8VAjMitH1QC&printsec=frontcover#v=onepage&q=&f=false
[2] http://books.google.com/books?id=_H5n-5sO5BAC&pg=PA1&dq=hall+effect+sensor#v=onepage&q=hall%20effect%20sensor&f=false
[3] http://www.ahest.net/

Inductive sensor
An inductive sensor is an electronic proximity sensor,
which detects metallic objects without touching them.
Elements of a simple inductive sensor: 1. field sensor; 2. oscillator; 3. demodulator; 4. flip-flop; 5. output.

The sensor consists of an induction loop. Electric current generates a magnetic field, which collapses, generating a current that falls asymptotically toward zero from its initial level when the input electricity ceases. The inductance of the loop changes according to the material inside it, and since metals are much more effective inductors than other materials, the presence of metal increases the current flowing through the loop.
This change can be detected by sensing circuitry, which can signal to some other device whenever metal is detected.
Common applications of inductive sensors include metal detectors, traffic lights, car washes, and a host of automated industrial processes. Because the sensor does not require physical contact, it is particularly useful for applications where access presents challenges or where dirt is prevalent. The sensing range is rarely greater than 6 cm, however, and it has no directionality.
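The asymptotic current decay described above follows the standard RL time constant τ = L/R; a target that changes the loop inductance changes τ, which the sensing circuitry can detect. A minimal sketch under that textbook model (component values are illustrative):

```python
import math

def coil_current(t_s, i0_a, resistance_ohm, inductance_h):
    """Current decay i(t) = I0 * exp(-R*t/L) after the drive is removed."""
    return i0_a * math.exp(-resistance_ohm * t_s / inductance_h)

# tau = L/R = 1 ms here; after one time constant the current is ~36.8% of I0.
print(coil_current(1e-3, 1.0, 10.0, 0.01))
```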



Infrared
Infrared (IR) light is electromagnetic radiation with a wavelength longer than that of visible light, measured from the nominal edge of visible red light at 0.74 micrometres (µm) and extending conventionally to 300 µm. These wavelengths correspond to a frequency range of approximately 1 to 400 THz,[1] and include most of the thermal radiation emitted by objects near room temperature. Microscopically, IR light is typically emitted or absorbed by molecules when they change their rotational-vibrational movements.
Infrared light is used in industrial, scientific, and medical applications.
Night-vision devices using infrared illumination allow people or
animals to be observed without detection. In astronomy, imaging at
infrared wavelengths allows observation of objects obscured by
interstellar dust. Infrared imaging cameras are used to detect heat loss
in insulated systems, observe changing blood flow in the skin, and
overheating of electrical apparatus.

An image of two people in mid-infrared ("thermal") light (false-color).
Much of the energy from the Sun arrives on Earth in the form of infrared radiation. Sunlight at zenith provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation.[2] The balance between absorbed and emitted infrared radiation has a critical effect on the Earth's climate.

This infrared space telescope image has blue, green, and red corresponding to 3.4, 4.6, and 12 µm wavelengths respectively.

Overview

Light comparison:[3]

Name         Wavelength           Frequency            Photon energy
Gamma ray    less than 0.01 nm    more than 30 EHz     100 keV - 300+ GeV
X-ray        0.01 nm to 10 nm     30 PHz - 30 EHz      120 eV to 120 keV
Ultraviolet  10 nm - 390 nm       30 PHz - 790 THz     3 eV to 124 eV
Visible      390 nm - 750 nm      790 THz - 405 THz    1.7 eV - 3.3 eV
Infrared     750 nm - 1 mm        405 THz - 300 GHz    1.24 meV - 1.7 eV
Microwave    1 mm - 1 meter       300 GHz - 300 MHz    1.24 µeV - 1.24 meV
Radio        1 mm - 100,000 km    300 GHz - 3 Hz       12.4 feV - 1.24 meV
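The wavelength and photon-energy columns of the comparison above are linked by E = hc/λ; a quick check of the infrared boundary values (a sketch using CODATA constants):

```python
H_PLANCK = 6.62607015e-34   # J*s
C_LIGHT = 2.99792458e8      # m/s
EV = 1.602176634e-19        # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c / lambda, expressed in electronvolts."""
    return H_PLANCK * C_LIGHT / wavelength_m / EV

# Boundaries of the infrared row: 750 nm -> ~1.65 eV, 1 mm -> ~1.24 meV.
print(photon_energy_ev(750e-9), photon_energy_ev(1e-3))
```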

Infrared imaging is used extensively for military and civilian purposes. Military applications include target acquisition, surveillance, night vision, homing, and tracking. Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, remote temperature sensing, short-ranged wireless communication, spectroscopy, and weather forecasting. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, detect objects such as planets, and view highly red-shifted objects from the early days of the universe.[4]
Humans at normal body temperature radiate chiefly at wavelengths around 12 µm (micrometres), as shown by Wien's displacement law.
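Wien's displacement law places the blackbody emission peak at λ_max = b/T; a quick computation (the constant is the CODATA value):

```python
WIEN_B = 2.897771955e-3  # m*K, Wien's displacement constant

def peak_wavelength_m(temperature_k):
    """Wavelength of peak blackbody emission, lambda_max = b / T."""
    return WIEN_B / temperature_k

# Skin at roughly 305 K peaks near 9.5 um; the Sun (~5778 K) near 500 nm.
print(peak_wavelength_m(305), peak_wavelength_m(5778))
```

Quoted figures for the human emission peak vary between roughly 9.5 µm and 12 µm depending on whether the peak is taken per unit wavelength or per unit frequency.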
At the atomic level, infrared energy elicits vibrational modes in a molecule through a change in the dipole moment,
making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared
spectroscopy examines absorption and transmission of photons in the infrared energy range, based on their frequency
and intensity.[5]

Different regions in the infrared


Objects generally emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of
the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Therefore, the
infrared band is often subdivided into smaller sections.

Commonly used sub-division scheme


A commonly used sub-division scheme is:[6]
Near-infrared (NIR, IR-A DIN): 0.75-1.4 µm. Defined by the water absorption, and commonly used in fiber optic telecommunication because of low attenuation losses in the SiO2 glass (silica) medium. Image intensifiers are sensitive to this area of the spectrum. Examples include night vision devices such as night vision goggles.

Short-wavelength infrared (SWIR, IR-B DIN): 1.4-3 µm. Water absorption increases significantly at 1,450 nm. The 1,530 to 1,560 nm range is the dominant spectral region for long-distance telecommunications.

Mid-wavelength infrared (MWIR, IR-C DIN; also called intermediate infrared, IIR): 3-8 µm. In guided missile technology the 3-5 µm portion of this band is the atmospheric window in which the homing heads of passive IR "heat-seeking" missiles are designed to work, homing on to the infrared signature of the target aircraft, typically the jet engine exhaust plume.

Long-wavelength infrared (LWIR, IR-C DIN): 8-15 µm. This is the "thermal imaging" region, in which sensors can obtain a completely passive picture of the outside world based on thermal emissions only, requiring no external light or thermal source such as the sun, moon, or an infrared illuminator. Forward-looking infrared (FLIR) systems use this area of the spectrum. This region is also called the "thermal infrared."

Far infrared (FIR): 15-1,000 µm (see also far-infrared laser).

NIR and SWIR are sometimes called "reflected infrared", while MWIR and LWIR are sometimes referred to as "thermal infrared." Due to the nature of the blackbody radiation curves, typical 'hot' objects, such as exhaust pipes, often appear brighter in the MW compared to the same object viewed in the LW.


CIE division scheme


The International Commission on Illumination (CIE) recommended the division of infrared radiation into the
following three bands:[7]
IR-A: 700 nm - 1400 nm (0.7 µm - 1.4 µm, 215 THz - 430 THz)
IR-B: 1400 nm - 3000 nm (1.4 µm - 3 µm, 100 THz - 215 THz)
IR-C: 3000 nm - 1 mm (3 µm - 1000 µm, 300 GHz - 100 THz)

ISO 20473 scheme


ISO 20473 specifies the following scheme:[8]
Near infrared (NIR): 0.78 - 3 µm
Mid infrared (MIR): 3 - 50 µm
Far infrared (FIR): 50 - 1000 µm

Astronomy division scheme


Astronomers typically divide the infrared spectrum as follows:[9]
Near infrared (NIR): (0.7-1) to 5 µm
Mid infrared (MIR): 5 to (25-40) µm
Far infrared (FIR): (25-40) to (200-350) µm

These divisions are not precise and can vary depending on the publication. The three regions are used for observation
of different temperature ranges, and hence different environments in space.

Sensor response division scheme


A third scheme divides up the band based on the response of various detectors:[10]
Near infrared: from 0.7 to 1.0 µm (from the approximate end of the response of the human eye to that of silicon).
Short-wave infrared: 1.0 to 3 µm (from the cut-off of silicon to that of the MWIR atmospheric window; InGaAs covers to about 1.8 µm, and the less sensitive lead salts cover this region).
Mid-wave infrared: 3 to 5 µm (defined by the atmospheric window and covered by indium antimonide [InSb] and HgCdTe, and partially by lead selenide [PbSe]).
Long-wave infrared: 8 to 12, or 7 to 14 µm: the atmospheric window (covered by HgCdTe and microbolometers).
Very-long wave infrared (VLWIR): 12 to about 30 µm, covered by doped silicon.

Plot of atmospheric transmittance in part of the infrared region.
These divisions are justified by the different human response to this radiation: near infrared is the region closest in wavelength to the radiation detectable by the human eye; mid and far infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. bands, water absorption), and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). Unfortunately, international standards for these specifications are not currently available.
The boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. But particularly intense light (e.g., from IR lasers, or from bright daylight with the visible light removed by colored gels) can be detected up to approximately 780 nm, and will be perceived as red light, although sources of up to 1,050 nm can be seen as a dull red glow in intense sources.[11] The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm.

Telecommunication bands in the infrared


In optical communications, the part of the infrared spectrum that is used is divided into several bands based on availability of light sources, transmitting/absorbing materials (fibers), and detectors:[12]

O band (Original): 1260-1360 nm
E band (Extended): 1360-1460 nm
S band (Short wavelength): 1460-1530 nm
C band (Conventional): 1530-1565 nm
L band (Long wavelength): 1565-1625 nm
U band (Ultralong wavelength): 1625-1675 nm

The C-band is the dominant band for long-distance telecommunication networks. The S and L bands are based on
less well established technology, and are not as widely deployed.

Heat
Infrared radiation is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for only 49%[13] of the heating of the Earth, with the rest being caused by visible light that is absorbed and then re-radiated at longer wavelengths. Visible-light or ultraviolet-emitting lasers can char paper, and incandescently hot objects emit visible radiation. Objects at room temperature will emit radiation mostly concentrated in the 8 to 25 µm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law).[14]
Heat is energy in transient form that flows due to temperature difference. Unlike heat transmitted by thermal
conduction or thermal convection, radiation can propagate through a vacuum.
The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a
surface which describes how its thermal emissions deviate from the ideal of a black body. To further explain, two
objects at the same physical temperature will not "appear" the same temperature in an infrared image if they have
differing emissivities.
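The gray-body radiation law P = εσAT⁴ makes the effect concrete; a sketch with illustrative emissivity values (not from the text):

```python
STEFAN_BOLTZMANN = 5.670374419e-8  # W / (m^2 * K^4)

def radiated_power_w(emissivity, area_m2, temperature_k):
    """Thermal power radiated by a surface, gray-body approximation."""
    return emissivity * STEFAN_BOLTZMANN * area_m2 * temperature_k ** 4

# Two 1 m^2 plates, both at 300 K: matte paint (eps ~ 0.95) radiates about
# 19x more than polished metal (eps ~ 0.05), so they "appear" at very
# different temperatures to an infrared camera.
print(radiated_power_w(0.95, 1.0, 300.0), radiated_power_w(0.05, 1.0, 300.0))
```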


Applications
Night vision
Infrared is used in night vision equipment when there is insufficient
visible light to see.[15] Night vision devices operate through a process
involving the conversion of ambient light photons into electrons which
are then amplified by a chemical and electrical process and then
converted back into visible light.[15] Infrared light sources can be used
to augment the available ambient light for conversion by night vision
devices, increasing in-the-dark visibility without actually using a
visible light source.[15]
The use of infrared light and night vision devices should not be
confused with thermal imaging which creates images based on
differences in surface temperature by detecting infrared radiation (heat)
that emanates from objects and their surrounding environment.[16]

Active-infrared night vision: the camera illuminates the scene at infrared wavelengths invisible to the human eye. Despite a dark back-lit scene, active-infrared night vision delivers identifying details, as seen on the display monitor.

Thermography

A thermographic image of a dog

Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or in the case of very hot objects in the NIR or visible it is termed pyrometry. Thermography (thermal imaging) is mainly used in military
pyrometry. Thermography (thermal imaging) is mainly used in military
and industrial applications but the technology is reaching the public
market in the form of infrared cameras on cars due to the massively
reduced production costs.

Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 900-14,000 nanometers, or 0.9-14 µm) and produce images of that radiation.
Since infrared radiation is emitted by all objects based on their temperatures, according to the black body radiation
law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of
radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in
temperature (hence the name).


Hyperspectral imaging

Hyperspectral thermal infrared emission measurement: an outdoor scan in winter conditions, ambient temperature −15 °C, image produced with a Specim LWIR hyperspectral imager. Relative radiance spectra from various targets in the image are shown with arrows. The infrared spectra of the different objects, such as the watch clasp, have clearly distinctive characteristics. The contrast level indicates the temperature of the object.[17]

A hyperspectral image, a basis for chemical imaging, is a "picture" containing a continuous spectrum through a wide spectral range.
Hyperspectral imaging is gaining importance in the applied
spectroscopy particularly in the fields of NIR, SWIR, MWIR, and
LWIR spectral regions. Typical applications include biological,
mineralogical, defence, and industrial measurements.
A thermal infrared hyperspectral camera can be applied similarly to a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are
typically applied for geological measurements, outdoor surveillance
and UAV applications.[18]

Other imaging
In infrared photography, infrared filters are used to capture the
near-infrared spectrum. Digital cameras often use infrared blockers.
Cheaper digital cameras and camera phones have less effective filters
and can "see" intense near-infrared, appearing as a bright purple-white
color. This is especially pronounced when taking pictures of subjects
near IR-bright areas (such as near a lamp), where the resulting infrared
interference can wash out the image. There is also a technique called
'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources makes terahertz photography technically more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy.

Infrared light from the LED of an Xbox 360 remote control as seen by a digital camera.

Tracking
Infrared tracking, also known as infrared homing, refers to a passive missile guidance system which uses the
emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles which use
infrared seeking are often referred to as "heat-seekers", since infrared (IR) is just below the visible spectrum of light
in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft
generate and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects
in the background.[19]


Heating
Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants, and to remove ice from the wings of aircraft (de-icing). FIR is also gaining popularity as a safe heat-therapy method in natural health care and physiotherapy. Infrared can be used in cooking and heating food, as it predominantly heats the opaque, absorbent objects rather than the air around them.
Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings,
forming of plastics, annealing, plastic welding, print drying. In these applications, infrared heaters replace
convection ovens and contact heating. Efficiency is achieved by matching the wavelength of the infrared heater to
the absorption characteristics of the material.

Communications
IR data transmission is also employed in short-range communication among computer peripherals and personal
digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association.
Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation which is
focused by a plastic lens into a narrow beam. The beam is modulated, i.e. switched on and off, to encode the data.
The receiver uses a silicon photodiode to convert the infrared radiation to an electric current. It responds only to the
rapidly pulsing signal created by the transmitter, and filters out slowly changing infrared radiation from ambient
light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate
walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote
controls to command appliances. Infrared remote control protocols such as RC-5 and SIRC are used to communicate over infrared.
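The carrier modulation described above can be sketched as on-off keying of a square-wave carrier; the timing below loosely follows the NEC remote protocol's pulse-distance coding, but the sample grid and function names are our own illustration.

```python
import math

CARRIER_HZ = 38_000    # common IR remote carrier frequency
SAMPLE_HZ = 1_000_000  # 1 MHz time grid used for this sketch

def burst(duration_us, on):
    """LED drive samples: a square-wave carrier while 'on', silence otherwise."""
    n = duration_us * SAMPLE_HZ // 1_000_000
    if not on:
        return [0] * n
    return [1 if math.sin(2 * math.pi * CARRIER_HZ * i / SAMPLE_HZ) > 0 else 0
            for i in range(n)]

def encode_bit(bit):
    """Pulse-distance coding: a fixed 560 us carrier mark, followed by a
    space whose length (560 or 1690 us) carries the bit value."""
    return burst(560, True) + burst(1690 if bit else 560, False)
```

The receiver's photodiode front end band-passes around the carrier frequency, which is what rejects the slowly varying infrared from ambient light.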
Free space optical communication using infrared lasers can be a relatively inexpensive way to install a
communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber optic
cable.
Infrared lasers are used to provide the light for optical fiber communications systems. Infrared light with a wavelength around 1,330 nm (least dispersion) or 1,550 nm (best transmission) is the best choice for standard silica fibers.
IR data transmission of encoded audio versions of printed signs is being researched as an aid for visually impaired
people through the RIAS (Remote Infrared Audible Signage) project.

Spectroscopy
Infrared vibrational spectroscopy (see also near infrared spectroscopy) is a technique which can be used to identify
molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency which is
characteristic of that bond. A group of atoms in a molecule (e.g. CH2) may have multiple modes of oscillation
caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in
the molecule, then it will absorb a photon which has the same frequency. The vibrational frequencies of most
molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from 4000-400 cm⁻¹, the mid-infrared. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3200 cm⁻¹).
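Spectroscopists' wavenumbers (cm⁻¹) convert to wavelength via λ(µm) = 10,000 / ν̃; a one-line helper:

```python
def wavenumber_to_um(wavenumber_cm1):
    """Convert an IR spectroscopy wavenumber in cm^-1 to wavelength in micrometres."""
    return 10_000.0 / wavenumber_cm1

# The mid-infrared range quoted above, 4000-400 cm^-1, spans 2.5-25 um.
print(wavenumber_to_um(4000), wavenumber_to_um(400))
```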


Meteorology
Weather satellites equipped with scanning radiometers produce thermal
or infrared images which can then enable a trained analyst to determine
cloud heights and types, to calculate land and surface water
temperatures, and to locate ocean surface features. The scanning is
typically in the range 10.3-12.5 µm (IR4 and IR5 channels).

High, cold ice clouds such as cirrus or cumulonimbus show up bright white; lower, warmer clouds such as stratus or stratocumulus show up as grey, with intermediate clouds shaded accordingly. Hot land surfaces will show up as dark grey or black. One disadvantage of infrared imagery is that low cloud such as stratus or fog can be a similar temperature to the surrounding land or sea surface and does not show up. However, using the difference in brightness of the IR4 channel (10.3-11.5 µm) and the near-infrared channel (1.58-1.64 µm), low cloud can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied.

IR satellite picture taken 1315 Z on 15 October 2006. A frontal system can be seen in the Gulf of Mexico with embedded cumulonimbus cloud. Shallower cumulus and stratocumulus can be seen off the Eastern Seaboard.
These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream which are
valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to
protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using
color-digitized techniques, the gray shaded thermal images can be converted to color for easier identification of
desired information.

Climatology
In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange
between the earth and the atmosphere. These trends provide information on long term changes in the Earth's climate.
It is one of the primary parameters studied in research into global warming together with solar radiation.
A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband
infrared radiometer with sensitivity for infrared radiation between approximately 4.5 µm and 50 µm.


Astronomy
Astronomers observe objects in the infrared portion of the
electromagnetic spectrum using optical components, including mirrors,
lenses and solid state digital detectors. For this reason it is classified as
part of optical astronomy. To form an image, the components of an
infrared telescope need to be carefully shielded from heat sources, and
the detectors are chilled using liquid helium.
The sensitivity of Earth-based infrared telescopes is significantly
limited by water vapor in the atmosphere, which absorbs a portion of
the infrared radiation arriving from space outside of selected
atmospheric windows. This limitation can be partially alleviated by
placing the telescope observatory at a high altitude, or by carrying the
telescope aloft with a balloon or an aircraft. Space telescopes do not
suffer from this handicap, and so outer space is considered the ideal
location for infrared astronomy.

Beta Pictoris, the light blue dot off center, as seen in infrared. It combines two images; the inner disc is at 3.6 µm.

The infrared portion of the spectrum has several useful benefits for
astronomers. Cold, dark molecular clouds of gas and dust in our galaxy
will glow with radiated heat as they are irradiated by embedded stars.
Infrared can also be used to detect protostars before they begin to emit
visible light. Stars emit a smaller portion of their energy in the infrared
spectrum, so nearby cool objects such as planets can be more readily
detected. (In the visible light spectrum, the glare from the star will
drown out the reflected light from a planet.)
Infrared light is also useful for observing the cores of active galaxies
which are often cloaked in gas and dust. Distant galaxies with a high
redshift will have the peak portion of their spectrum shifted toward
longer wavelengths, so they are more readily observed in the
infrared.[4]

The Spitzer Space Telescope is a dedicated infrared space observatory currently in orbit around the Sun. NASA image.


Art history
Infrared reflectograms, as called by art historians,[20] are taken of
paintings to reveal underlying layers, in particular the underdrawing or
outline drawn by the artist as a guide. This often uses carbon black
which shows up well in reflectograms, so long as it has not also been
used in the ground underlying the whole painting. Art historians are
looking to see if the visible layers of paint differ from the
under-drawing or layers in between - such alterations are called
pentimenti when made by the original artist. This is very useful
information in deciding whether a painting is the prime version by the
original artist or a copy, and whether it has been altered by
over-enthusiastic restoration work. Generally the more pentimenti, the
more likely a painting is to be the prime version. It also gives useful
insights into working practices.[21]

The Arnolfini Portrait by Jan van Eyck, National Gallery, London.

Among many other changes in the Arnolfini Portrait of 1434 (left), the
man's face was originally higher by about the height of his eye; the
woman's was higher, and her eyes looked more to the front. Each of his
feet was underdrawn in one position, painted in another, and then
overpainted in a third. These alterations are seen in infra-red

reflectograms.[22]
Similar uses of infrared are made by historians on various types of objects, especially very old written documents
such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the
Dunhuang Caves.[23] Carbon black used in ink can show up extremely well.

Biological systems
The pit viper has a pair of infrared sensory pits on its head. There is
uncertainty regarding the exact thermal sensitivity of this biological
infrared detection system.[24] [25]
Other organisms that have thermoreceptive organs are pythons (family
Pythonidae), some boas (family Boidae), the Common Vampire Bat
(Desmodus rotundus), a variety of jewel beetles (Melanophila
acuminata),[26] darkly pigmented butterflies (Pachliopta aristolochiae
and Troides rhadamantus plateni), and possibly blood-sucking bugs
(Triatoma infestans).[27]

Thermographic image of a snake eating a mouse

Photobiomodulation
Near-infrared light therapy, or photobiomodulation, is used for treatment of chemotherapy-induced oral ulceration as well as wound healing. There is some work relating to anti-herpes-virus treatment.[28] Research projects include work on central nervous system healing effects via cytochrome c oxidase upregulation and other possible mechanisms.[29]

Thermographic image of a fruit bat.


Health hazard
Strong infrared radiation in certain industrial high-heat settings may constitute a hazard to the eyes, damaging or even blinding the user, all the more so because the radiation is invisible. Special IR-proof protective goggles must therefore be worn in such places.[30]

The Earth as an infrared emitter


The Earth's surface and the clouds absorb visible and invisible
radiation from the sun and re-emit much of the energy as infrared back
to the atmosphere. Certain substances in the atmosphere, chiefly cloud
droplets and water vapor, but also carbon dioxide, methane, nitrous
oxide, sulfur hexafluoride, and chlorofluorocarbons,[31] absorb this
infrared, and re-radiate it in all directions including back to Earth. Thus
the greenhouse effect keeps the atmosphere and surface much warmer
than if the infrared absorbers were absent from the atmosphere.[32]

History of infrared science

Brief diagram showing the greenhouse effect

The discovery of infrared radiation is ascribed to William Herschel, the astronomer, in the early 19th century.
Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light
from the sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature
recorded on a thermometer. He was surprised at the result and called this new radiation "Calorific Rays". The term
'Infrared' did not appear until late in the 19th century.[33]
Other important dates include:[10]
1737: Émilie du Châtelet predicted what is today known as infrared radiation in Dissertation sur la nature et la
propagation du feu.
1835: Macedonio Melloni makes the first thermopile IR detector.
1860: Gustav Kirchhoff formulates the blackbody theorem
1873: Willoughby Smith discovers the photoconductivity of selenium.

1879: The Stefan–Boltzmann law is formulated empirically: the power radiated by a blackbody is proportional to
the fourth power of its absolute temperature, T⁴.
1880s & 1890s: Lord Rayleigh and Wilhelm Wien both solve part of the blackbody equation, but both solutions
are approximations that "blow up" outside their useful ranges. This problem was known as the "ultraviolet
catastrophe" and the "infrared catastrophe".
1901: Max Planck published the blackbody equation and theorem. He solved the problem by quantizing the
allowable energy transitions.
1905: Albert Einstein develops the theory of the photoelectric effect, determining the photon. Also William
Coblentz in spectroscopy and radiometry.
1917: Theodore Case develops thallous sulfide detector; British develop the first infra-red search and track
(IRST) in World War I and detect aircraft at a range of one mile (1.6 km).
1935: Lead salts - early missile guidance in World War II.
1938: Teau Ta - predicted that the pyroelectric effect could be used to detect infrared radiation.
1945: The Zielgerät 1229 "Vampir" infrared weapon system is introduced as the first man-portable infrared
device used in a military application.
1952: H. Welker discovers InSb.
1950s: Paul Kruse (at Honeywell) and Texas Instruments form infrared images before 1955.

1950s and 1960s: Nomenclature and radiometric units defined by Fred Nicodemenus, G. J. Zissis and R. Clark;
Jones defines D*.
1958: W.D. Lawson (Royal Radar Establishment in Malvern) discovers IR detection properties of HgCdTe.
1958: Falcon & Sidewinder missiles developed using infrared and the first textbook on infrared sensors appears
by Paul Kruse, et al.
1961: J. Cooper demonstrated pyroelectric detection.
1962: Kruse and ? Rodat advance HgCdTe; Signal Element and Linear Arrays available.
1965: First IR Handbook; first commercial imagers (Barnes, Agema (now part of FLIR Systems Inc.)); Richard
Hudson's landmark text; F4 TRAM FLIR by Hughes; phenomenology pioneered by Fred Simmons and A. T.
Stair; U.S. Army's night vision lab formed (now the Night Vision and Electronic Sensors Directorate (NVESD)),
and Ratches develops detection, recognition and identification modeling there.
1970: Willard Boyle & George E. Smith propose CCD at Bell Labs for picture phone.
1972: Common module program started by NVESD.
1978: Infrared imaging astronomy comes of age, observatories planned, IRTF on Mauna Kea opened; 32 by 32
and 64 by 64 arrays are produced in InSb, HgCdTe and other materials.
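The T⁴ dependence in the 1879 entry above can be checked numerically. This is a plain illustration of the Stefan–Boltzmann law, with no particular detector or material in mind:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_power(area_m2, temp_k):
    """Total power in watts radiated by an ideal blackbody: P = sigma * A * T^4."""
    return SIGMA * area_m2 * temp_k ** 4

# Doubling the absolute temperature multiplies the radiated power by 2**4 = 16.
ratio = blackbody_power(1.0, 600.0) / blackbody_power(1.0, 300.0)
```

The steep T⁴ scaling is what makes warm objects such strong infrared emitters relative to their surroundings.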

References
[1] Dr. S. C. Liew. "Electromagnetic Waves" (http://www.crisp.nus.edu.sg/~research/tutorial/em.htm). Centre for Remote Imaging, Sensing and Processing. Retrieved 2006-10-27.
[2] "Reference Solar Spectral Irradiance: Air Mass 1.5" (http://rredc.nrel.gov/solar/spectra/am1.5/). Retrieved 2009-11-12.
[3] C. R. Nave. Hyperphysics: Electromagnetic Spectrum (http://hyperphysics.phy-astr.gsu.edu/hbase/ems3.html#c1)
[4] "IR Astronomy: Overview" (http://www.ipac.caltech.edu/Outreach/Edu/importance.html). NASA Infrared Astronomy and Processing Center. Retrieved 2006-10-30.
[5] Reusch, William (1999). "Infrared Spectroscopy" (http://www.cem.msu.edu/~reusch/VirtualText/Spectrpy/InfraRed/infrared.htm). Michigan State University. Retrieved 2006-10-27.
[6] Byrnes, James (2009). Unexploded Ordnance Detection and Mitigation. Springer. pp. 21–22. ISBN 9781402092527.
[7] Henderson, Roy. "Wavelength considerations" (http://web.archive.org/web/20071028072110/http://info.tuwien.ac.at/iflt/safety/section1/1_1_1.htm). Institut für Umform- und Hochleistungs. Archived from the original (http://info.tuwien.ac.at/iflt/safety/section1/1_1_1.htm) on 2007-10-28. Retrieved 2007-10-18.
[8] "ISO 20473:2007". ISO.
[9] IPAC Staff. "Near, Mid and Far-Infrared" (http://www.ipac.caltech.edu/Outreach/Edu/Regions/irregions.html). NASA IPAC. Retrieved 2007-04-04.
[10] Miller, Principles of Infrared Technology (Van Nostrand Reinhold, 1992), and Miller and Friedman, Photonic Rules of Thumb, 2004. ISBN 9780442012106.
[11] D. R. Griffin, R. Hubbard, G. Wald (1947). "The Sensitivity of the Human Eye to Infra-Red Radiation". J. Opt. Soc. Am. 37 (7): 546–553. doi:10.1364/JOSA.37.000546.
[12] Ramaswami, Rajiv (May 2002). "Optical Fiber Communication: From Transmission to Networking" (http://ieeexplore.ieee.org/iel5/35/21724/01006983.pdf) (PDF). IEEE. Retrieved 2006-10-18.
[13] "Introduction to Solar Energy" (http://www.azsolarcenter.com/design/documents/passive.DOC) (DOC). Passive Solar Heating & Cooling Manual. Rodale Press, Inc. 1980. Retrieved 2007-08-12.
[14] McCreary, Jeremy (October 30, 2004). "Infrared (IR) basics for digital photographers: capturing the unseen (Sidebar: Black Body Radiation)" (http://dpfwiw.com/ir.htm). Digital Photography For What It's Worth. Retrieved 2006-11-07.
[15] "How Night Vision Works" (http://www.atncorp.com/HowNightVisionWorks). American Technologies Network Corporation. Retrieved 2007-08-12.
[16] Bryant, Lynn (2007-06-11). "How does thermal imaging work? A closer look at what is behind this remarkable technology" (http://www.video-surveillance-guide.com/how-does-thermal-imaging-work.htm). Retrieved 2007-08-12.
[17] Holma, H. (May 2011). Thermische Hyperspektralbildgebung im langwelligen Infrarot (http://www.photonik.de/index.php?id=11&np=5&artid=848&L=1). Photonik.
[18] Frost & Sullivan, Technical Insights, Aerospace & Defence (Feb 2011): World First Thermal Hyperspectral Camera for Unmanned Aerial Vehicles (http://www.frost.com/prod/servlet/segment-toc.pag?segid=D870-00-48-00-00&ctxixpLink=FcmCtx3&ctxixpLabel=FcmCtx4)
[19] Mahulikar, S. P., Sonawane, H. R., & Rao, G. A. (2007). "Infrared signature studies of aerospace vehicles". Progress in Aerospace Sciences 43 (7–8): 218–245.

[20] "IR Reflectography for Non-destructive Analysis of Underdrawings in Art Objects" (http://www.sensorsinc.com/artanalysis.html). Sensors Unlimited, Inc. Retrieved 2009-02-20.
[21] "The Mass of Saint Gregory: Examining a Painting Using Infrared Reflectography" (http://www.clevelandart.org/exhibcef/ConsExhib/html/grien.html). The Cleveland Museum of Art. Retrieved 2009-02-20.
[22] National Gallery Catalogues: The Fifteenth Century Netherlandish Paintings by Lorne Campbell, 1998, ISBN 185709171.
[23] "International Dunhuang Project: An Introduction to digital infrared photography and its application within IDP" (paper PDF, 6.4 MB) (http://idp.bl.uk/pages/technical_resources.a4d). Idp.bl.uk. Retrieved 2011-11-08.
[24] B. S. Jones; W. F. Lynn; M. O. Stone (2001). "Thermal Modeling of Snake Infrared Reception: Evidence for Limited Detection Range". Journal of Theoretical Biology 209 (2): 201–211. doi:10.1006/jtbi.2000.2256. PMID 11401462.
[25] V. Gorbunov; N. Fuchigami; M. Stone; M. Grace; V. V. Tsukruk (2002). "Biological Thermal Detection: Micromechanical and Microthermal Properties of Biological Infrared Receptors". Biomacromolecules 3 (1): 106–115. doi:10.1021/bm015591f. PMID 11866562.
[26] Evans, W. G. (1966). "Infrared receptors in Melanophila acuminata De Geer". Nature 202 (4928): 211. Bibcode 1964Natur.202..211E. doi:10.1038/202211a0.
[27] A. L. Campbell, A. L. Naik, L. Sowards, M. O. Stone (2002). "Biological infrared imaging and sensing". Micron 33 (2): 211–225. doi:10.1016/S0968-4328(01)00010-5. PMID 11567889.
[28] Hargate, G (2006). "A randomised double-blind study comparing the effect of 1072-nm light against placebo for the treatment of herpes labialis". Clinical and Experimental Dermatology 31 (5): 638–41. doi:10.1111/j.1365-2230.2006.02191.x. PMID 16780494.
[29] Desmet, KD; Paz, DA; Corry, JJ; Eells, JT; Wong-Riley, MT; Henry, MM; Buchmann, EV; Connelly, MP; et al. (2006). "Clinical and experimental applications of NIR-LED photobiomodulation". Photomedicine and Laser Surgery 24 (2): 121–8. doi:10.1089/pho.2006.24.121. PMID 16706690.
[30] "The artist's complete health and ... - Monona Rossol, Graphic Artists Guild (U.S.) - Google Books" (http://books.google.com/books?id=E7-9unTgJrwC&pg=PA33&lpg=PA33&dq=infrared+protective+goggles). Books.google.com. Retrieved 2011-11-08.
[31] "Global Sources of Greenhouse Gases" (http://www.eia.doe.gov/oiaf/1605/gg01rpt/emission.html). Emissions of Greenhouse Gases in the United States 2000. Energy Information Administration. 2002-05-02. Retrieved 2007-08-13.
[32] "Clouds & Radiation" (http://earthobservatory.nasa.gov/Library/Clouds/). Retrieved 2007-08-12.
[33] "Herschel Discovers Infrared Light" (http://coolcosmos.ipac.caltech.edu/cosmic_classroom/classroom_activities/herschel_bio.html). Coolcosmos.ipac.caltech.edu. Retrieved 2011-11-08.

External links
Infrared: A Historical Perspective (http://www.omega.com/literature/transactions/volume1/historical1.html)
(Omega Engineering)
Infrared Data Association (http://www.irda.org/), a standards organization for infrared data interconnection
SIRC Protocol (http://yengal-marumugam.blogspot.com/2011/06/sirc-part-i-basics.html)
How to build a USB infrared receiver to control PCs remotely (http://www.ocinside.de/html/modding/
usb_ir_receiver/usb_ir_receiver.html)
Infrared Waves (http://imagers.gsfc.nasa.gov/ems/infrared.html): detailed explanation of infrared light.
(NASA)

Linear encoder
A linear encoder is a sensor,
transducer or readhead paired with a
scale that encodes position. The sensor
reads the scale in order to convert the
encoded position into an analog or
digital signal, which can then be
decoded into position by a digital
readout (DRO) or motion controller.
The encoder can be either incremental
or absolute. Motion can be determined
by change in position over time. Linear
encoder technologies include optical,
magnetic, inductive, capacitive and
eddy current. Optical technologies
include shadow, self imaging and
interferometric. Linear encoders are
used in metrology instruments, motion
systems and high precision machining
tools ranging from digital calipers and
coordinate measuring machines to
stages, CNC Mills, manufacturing
gantry tables and semiconductor
steppers.

Typical linear optical encoders.

Physical principle
Linear encoders are transducers that
exploit many different physical
properties in order to encode position:

Visualization of magnetic structures of a linear encoder (Recorded with MagView).

Scale / reference based


Optical
Optical linear encoders [1], [2] dominate the high-resolution market and may employ shuttering/Moiré, diffraction
or holographic principles. Typical incremental scale periods vary from hundreds of micrometres down to
sub-micrometre and, following interpolation, can provide resolutions as fine as a nanometre. Light sources used
include infrared LEDs, visible LEDs, miniature light bulbs and laser diodes.
Magnetic
Magnetic linear encoders [3] employ either active (magnetized) or passive (variable reluctance) scales and position
may be sensed using sense-coils, Hall Effect or magnetoresistive readheads. With coarser scale periods than optical
encoders (typically a few hundred micrometers to several millimeters) resolutions in the order of a micrometer are
the norm.

Capacitive
Capacitive linear encoders work by sensing the capacitance between a reader and scale. Typical applications are
digital calipers. One of the disadvantages is sensitivity to uneven dirt, which can locally change the relative
permittivity.
Inductive
Inductive technology is robust to contaminants, allowing calipers and other measurement tools that are coolant-proof
[4]. A popular application of the inductive measuring principle is the Inductosyn [5]. In effect it is a resolver
unwound into a linear system. The Spherosyn encoder [6] is based on the principle of electromagnetic induction and
uses coils to sense nickel-chrome ball-bearings mounted within a tube.
Eddy current
US Patent 3820110, "Eddy current type digital encoder and position reference", gives an example of this type of
encoder, which uses a scale coded with high and low permeability, non-magnetic materials, which is detected and
decoded by monitoring changes in inductance of an AC circuit that includes an inductive coil sensor. Maxon [7]
makes an example (rotary encoder) product (the MILE encoder).

Without scales
Optical image sensor
The sensors are based on an image correlation method. The sensor takes subsequent pictures of the surface being
measured and compares the images for displacement [8]. Resolutions down to 1 nm are possible.[9]

Applications
There are two main areas of application for linear encoders:

Measurement
Measurement applications include coordinate-measuring machines (CMM), laser scanners, calipers, gear
measurement [10], tension testers [11] and digital readouts (DROs).

Motion systems
Servo-controlled motion systems employ linear encoders to provide accurate, high-speed movement. Typical
applications include robotics, machine tools, pick-and-place PCB assembly equipment; semiconductors handling and
test equipment, wire bonders, printers and digital presses.[12]


Output Signal Formats


Incremental signals
Linear encoders can have analog or digital outputs.
Analog
The industry-standard analog output for linear encoders
is sine and cosine quadrature signals. These are usually
transmitted differentially so as to improve noise
immunity. An early industry standard was 12 µA
peak-to-peak current signals, but more recently this has
been replaced with 1 V peak-to-peak voltage signals.
Compared to digital transmission, the analog signals'
lower bandwidth helps to minimise EMC emissions.
Quadrature sine/cosine signals can be monitored easily
by using an oscilloscope in XY mode to display a
circular Lissajous Figure. Highest accuracy signals are
obtained if the Lissajous Figure is circular (no gain or
phase error) and perfectly centred. Modern encoder
systems employ circuitry to trim these error mechanisms
automatically. The overall accuracy of the linear encoder
is a combination of the scale accuracy and errors
introduced by the readhead. Scale contributions to the
error budget include linearity and slope (scaling factor
error). Readhead error mechanisms are usually described
as cyclic error or sub-divisional error (SDE) as they
repeat every scale period. The largest contributor to
readhead inaccuracy is signal offset, followed by signal
imbalance (ellipticity) and phase error (the quadrature
The sine and cosine outputs.
signals not being exactly 90 apart). Overall signal size
does not affect encoder accuracy, however,
signal-to-noise and jitter performance may degrade with smaller signals. Automatic signal compensation
mechanisms can include automatic offset compensation (AOC), automatic balance compensation (ABC) and
automatic gain control (AGC). Phase is more difficult to compensate dynamically and is usually applied as one time
compensation during installation or calibration. Other forms of inaccuracy include signal distortion (frequently
harmonic distortion of the sine/cosine signals).
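Because the two channels are in quadrature, position within one scale period can be recovered with a two-argument arctangent. The sketch below is illustrative only; the offset and gain parameters stand in for the AOC/ABC trims described above and are not any vendor's API:

```python
import math

def interpolate_position(sin_v, cos_v, scale_period_um,
                         sin_offset=0.0, cos_offset=0.0, gain_ratio=1.0):
    """Estimate position within one scale period from quadrature signals.

    sin_offset, cos_offset and gain_ratio are illustrative stand-ins for
    the automatic offset (AOC) and balance (ABC) corrections; a real
    readhead trims these continuously.
    """
    s = (sin_v - sin_offset) * gain_ratio  # balance the two channel amplitudes
    c = cos_v - cos_offset
    phase = math.atan2(s, c)               # angle around the Lissajous circle
    fraction = (phase / (2 * math.pi)) % 1.0
    return fraction * scale_period_um      # position within one scale period
```

Counting whole scale periods separately and adding this fractional part gives the interpolated position along the scale.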

Digital
Many linear encoders interpolate the
analogue sine/cosine signals in order to
sub-divide the scale period, providing
a higher measurement resolution. The
output of the interpolation process is quadrature square waves, with the distance between edges of the two channels
being the resolution of the encoder.

The A and B quadrature channels

The reference mark or index pulse will also be processed digitally and will be a pulse, usually one to four
units-of-resolution wide.
The major advantage of encoders with built-in interpolation and digital signal transmission is improved noise
immunity. However, the high-frequency, fast-edge-speed signals may produce more EMC emissions.
Incremental encoders with built-in digital processing make it possible to transmit position to any subsequent
electronics such as a position counter.
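Such a position counter for the A/B square waves can be modelled with a small transition table, where every valid edge adds or subtracts one unit of resolution. This is a minimal sketch, not tied to any particular counter IC:

```python
# Valid Gray-code transitions of the (A, B) pair; +1 = forward, -1 = reverse.
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Count units of resolution from a sequence of (A, B) logic-level samples."""
    count = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        state = a << 1 | b
        # Repeated samples and illegal double transitions contribute nothing.
        count += TRANSITIONS.get((prev, state), 0)
        prev = state
    return count
```

Because each edge on either channel changes the count by one, the edge spacing of the two channels corresponds exactly to one unit of resolution, as described above.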

Absolute reference signals


As well as analog or digital incremental output signals, linear encoders can provide absolute reference or positioning
signals.
Reference mark
Most incremental, linear encoders can produce an index or reference mark pulse providing a datum position along
the scale for use at power-up or following a loss of power. This index signal must be able to identify position within
one, unique period of the scale. The reference mark may comprise a single feature on the scale, an autocorrelator
pattern (typically a Barker code) or a chirp pattern.
Distance coded reference marks (DCRM) are placed onto the scale in a unique pattern allowing a minimal movement
(typically moving past two reference marks) to define the readhead's position. Multiple, equally spaced reference
marks may also be placed onto the scale such that following installation, the desired marker can be selected - usually
via a magnet or optically.
Absolute code
With suitably encoded scales (multitrack, vernier, digital code, or pseudo-random code) an encoder can determine its
position without movement or needing to find a reference position. Such absolute encoders also communicate using
serial communication protocols. Many of these protocols are proprietary (Fanuc, Mitsubishi, EnDat, DriveCliq,
Panasonic, Yaskawa), but open standards such as BiSS [13] are now appearing, which avoid tying users to a
particular supplier.


Limit switches
Many linear encoders include built-in limit switches - either optical or magnetic. Two limit switches are frequently
included such that on power-up the controller can determine if the encoder is at an end-of-travel and in which
direction to drive the axis.

Physical arrangement / protection


Linear encoders may be either enclosed or open. Enclosed linear encoders are employed in dirty, hostile
environments such as machine-tools. They typically comprise an aluminium extrusion enclosing a glass or metal
scale. Flexible lip seals allow an internal, guided readhead to read the scale. Accuracy is limited due to the friction
and hysteresis imposed by this mechanical arrangement.
For the highest accuracy, lowest measurement hysteresis and lowest friction applications, open linear encoders are
used.
Linear encoders may use transmissive (glass) or reflective scales, employing Ronchi or phase gratings. Scale
materials include chrome on glass, metal (stainless steel, gold plated steel, Invar), ceramics (Zerodur) and plastics.
The scale may be self supporting, be thermally mastered to the substrate (via adhesive or adhesive tape) or track
mounted. Track mounting may allow the scale to maintain its own coefficient of thermal expansion and allows large
equipment to be broken down for shipment.

Encoder terms
Resolution
Repeatability
Hysteresis
Signal-to-noise ratio / Noise / Jitter
Lissajous Figure
Quadrature
Index / Reference Mark / Datum / Fiducial
Distance Coded Reference Marks (DCRM)

Book
David S. Nyce: Linear Position Sensors: Theory and Application, New Jersey, John Wiley & Sons Inc. (2003)
Walcher Hans: Position Sensing: Angle and Distance Measurement for Engineers, Butterworth Heinemann
(1994)

References
[1] http://www.microesys.com/m2/index.html
[2] http://www.renishaw.com/en/6433.aspx
[3] http://www.rls.si/default.asp?prod=LMencoders
[4] http://www.mitutoyo.com/pdf/ABS1813-293.pdf
[5] http://www.ruhle.com/bar_scale.htm
[6] http://www.newall.com/LEDs/operation.htm
[7] http://www.maxonmotor.com/downloads/Flyer_EC6_MILE_e_03.09.pdf
[8] http://www.intacton.com/us/products/INTACTON/OpticalMotionSensors_Context/LengthSpeedSensors_Optical_Context_Technology_PopupBase.html
[9] http://www.mitutoyo.com/pdf/1976_MICSYS.pdf
[10] http://www.wenzel-cmm.co.uk/Industries.asp?SE=9
[11] http://www.instron.co.uk/wa/product/Tension-Testers.aspx
[12] http://global.oce.com/products/productionprinting/digitalpresses/color/default.aspx
[13] http://www.biss-interface.com/


Photoelectric sensor
A photoelectric sensor, or photoeye, is a device used to detect the distance, absence, or presence of an object by
using a light transmitter, often infrared, and a photoelectric receiver. They are used extensively in industrial
manufacturing. There are three different functional types: opposed (a.k.a. through beam), retroreflective, and
proximity-sensing (a.k.a. diffused).

Types
A self-contained photoelectric sensor contains the optics, along with the electronics. It requires only a power source.
The sensor performs its own modulation, demodulation, amplification, and output switching. Some self-contained
sensors provide such options as built-in control timers or counters. Because of technological progress, self-contained
photoelectric sensors have become increasingly small.
Remote photoelectric sensors used for remote sensing contain only the optical components of a sensor. The circuitry
for power input, amplification, and output switching are located elsewhere, typically in a control panel. This allows
the sensor, itself, to be very small. Also, the controls for the sensor are more accessible, since they may be bigger.
When space is restricted or the environment too hostile even for remote sensors, fiber optics may be used. Fiber
optics are passive mechanical sensing components. They may be used with either remote or self-contained sensors.
They have no electrical circuitry and no moving parts, and can safely pipe light into and out of hostile
environments.[1]

Sensing Modes
An opposed (through beam) arrangement consists of a receiver located within the line-of-sight of the transmitter. In
this mode, an object is detected when the light beam is blocked from getting to the receiver from the transmitter.
A retroreflective arrangement places the transmitter and receiver at the same location and uses a reflector to bounce
the light beam back from the transmitter to the receiver. An object is sensed when the beam is interrupted and fails to
reach the receiver.
A proximity-sensing (diffused) arrangement is one in which the transmitted radiation must reflect off the object in
order to reach the receiver. In this mode, an object is detected when the receiver sees the transmitted source rather
than when it fails to see it.
Some photoeyes have two different operational types, light operate and dark operate. Light operate photoeyes
become operational when the receiver "receives" the transmitter signal. Dark operate photoeyes become operational
when the receiver "does not receive" the transmitter signal.
The detecting range of a photoelectric sensor is its "field of view": the maximum distance from which the sensor can
retrieve information, minus the minimum distance. A minimum detectable object is the smallest object the sensor can
detect. More accurate sensors can often have minimum detectable objects of minuscule size.


References
[1] Types of Optical Sensors, Banner Engineering Corporation (http://info.bannersalesforce.com/xpedio/groups/public/documents/literature/pr_p1_t1_e.pdf.pdf)

Mounting Brackets & Components for Photoelectric/Automation Sensors, 2010, SoftNoze USA Inc, Frankfort, NY USA
(http://www.softnoze.com/products.html)
Guide to Sensing, 2002, Banner Engineering Corporation, P/N 120236 (http://info.bannersalesforce.com/intradoc-cgi/nph-idc_cgi.exe?IdcService=GET_FILE&dDocName=120236&RevisionSelectionMethod=Latest&Rendition=web)
http://sensors-transducers.globalspec.com/LearnMore/Sensors_Transducers_Detectors/Proximity_Presence_Sensing/Photoelectric_Sensors

Photodiode
A photodiode is a type of photodetector capable of converting light
into either current or voltage, depending upon the mode of operation.[1]
The common, traditional solar cell used to generate electric solar
power is a large area photodiode.
Photodiodes are similar to regular semiconductor diodes except that
they may be either exposed (to detect vacuum UV or X-rays) or
packaged with a window or optical fiber connection to allow light to
reach the sensitive part of the device. Many diodes designed for use
specifically as a photodiode use a PIN junction rather than a p-n
junction, to increase the speed of response. A photodiode is designed to
operate in reverse bias.[2]
Three Si and one Ge (bottom) photodiodes

Principle of operation
A photodiode is a p-n junction or PIN structure. When a photon of
sufficient energy strikes the diode, it excites an electron, thereby
creating a free electron (and a positively charged electron hole). This
mechanism is also known as the inner photoelectric effect. If the absorption occurs in the junction's depletion
region, or one diffusion length away from it, these carriers are swept from the junction by the built-in field of the
depletion region. Thus holes move toward the anode, and electrons toward the cathode, and a photocurrent is
produced. This photocurrent is the sum of both the dark current (without light) and the light current, so the dark
current must be minimized to enhance the sensitivity of the device.[3]

Symbol for photodiode.

Photovoltaic mode
When used in zero bias or photovoltaic mode, the flow of photocurrent out of the device is restricted and a voltage
builds up. This mode exploits the photovoltaic effect, which is the basis for solar cells; a traditional solar cell is just
a large-area photodiode.

Photoconductive mode
In this mode the diode is often reverse biased (with the cathode positive), dramatically reducing the response time at
the expense of increased noise. This increases the width of the depletion layer, which decreases the junction's
capacitance resulting in faster response times. The reverse bias induces only a small amount of current (known as
saturation or back current) along its direction while the photocurrent remains virtually the same. For a given spectral
distribution, the photocurrent is linearly proportional to the illuminance (and to the irradiance).[4]
Although this mode is faster, the photoconductive mode tends to exhibit more electronic noise. The leakage current
of a good PIN diode is so low (<1 nA) that the Johnson–Nyquist noise of the load resistance in a typical circuit often
dominates.
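The bias behaviour described above can be illustrated with the ideal-diode equation plus a light-generated term. All component values here are arbitrary illustrative assumptions, not taken from any datasheet:

```python
import math

def photodiode_current(v_bias, photocurrent, i_sat=1e-9, v_t=0.02585):
    """Terminal current of an idealized photodiode: I = I_sat*(exp(V/V_t) - 1) - I_ph.

    Under reverse bias (v_bias < 0) the exponential term vanishes, so the
    output is dominated by the light-generated term; this is why the
    photocurrent remains virtually the same as the reverse bias changes.
    """
    return i_sat * (math.exp(v_bias / v_t) - 1.0) - photocurrent
```

Sweeping `v_bias` over negative values leaves the output essentially fixed at the saturation current plus the photocurrent, matching the description above.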

Other modes of operation


Avalanche photodiodes have a similar structure to regular photodiodes, but they are operated with much higher
reverse bias. This allows each photo-generated carrier to be multiplied by avalanche breakdown, resulting in internal
gain within the photodiode, which increases the effective responsivity of the device.
A phototransistor is in essence a bipolar transistor encased in a transparent case so that light can reach the
base-collector junction. The electrons that are generated by photons in the base-collector junction are injected into
the base, and this photodiode current is amplified by the transistor's current gain (or hfe). If the emitter is left
unconnected, the phototransistor becomes a photodiode. While phototransistors have a higher responsivity for light
they are not able to detect low levels of light any better than photodiodes. Phototransistors also have significantly
longer response times.


Materials
The material used to make a photodiode is critical to defining its properties, because only photons with sufficient
energy to excite electrons across the material's bandgap will produce significant photocurrents.
Materials commonly used to produce photodiodes include:[5]

Material                   Electromagnetic spectrum wavelength range (nm)
Silicon                    190–1100
Germanium                  400–1700
Indium gallium arsenide    800–2600
Lead(II) sulfide           <1000–3500

Because of their greater bandgap, silicon-based photodiodes generate less noise than germanium-based photodiodes.

Unwanted photodiodes
Any p-n junction, if illuminated, is potentially a photodiode. Semiconductor devices such as transistors and ICs
contain p-n junctions, and will not function correctly if they are illuminated by unwanted electromagnetic radiation
(light) of wavelength suitable to produce a photocurrent; this is avoided by encapsulating devices in opaque
housings. If these housings are not completely opaque to high-energy radiation (ultraviolet, X-rays, gamma rays),
transistors and ICs can malfunction due to induced photo-currents. Plastic cases are more vulnerable than metal ones.

Features
Critical performance parameters of a photodiode include:
Responsivity
The ratio of generated photocurrent to incident light
power, typically expressed in A/W when used in
photoconductive mode. The responsivity may also be
expressed as a Quantum efficiency, or the ratio of the
number of photogenerated carriers to incident photons
and thus a unitless quantity.
Dark current
The current through the photodiode in the absence of light, when it is operated in photoconductive mode. The
dark current includes photocurrent generated by background radiation and the saturation current of the
semiconductor junction. Dark current must be accounted for by calibration if a photodiode is used to make an
accurate optical power measurement, and it is also a source of noise when a photodiode is used in an optical
communication system.

Response of a silicon photodiode vs. wavelength of the incident light
Noise-equivalent power
(NEP) The minimum input optical power to generate photocurrent, equal to the rms noise current in a 1 hertz
bandwidth. The related characteristic detectivity (D) is the inverse of NEP, 1/NEP; and the specific detectivity
(D*) is the detectivity normalized to the area (A) of the photodetector, D* = D√A. The NEP is roughly the
minimum detectable input power of a photodiode.
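These figures of merit are simple ratios and can be sketched directly; the numbers in the comments are illustrative, not from any datasheet:

```python
import math

def photocurrent(power_w, responsivity_a_per_w):
    """Generated photocurrent in amperes: I = R * P."""
    return responsivity_a_per_w * power_w

def specific_detectivity(nep_w, area_cm2):
    """Specific detectivity D* = sqrt(A) / NEP, assuming a 1 Hz bandwidth."""
    return math.sqrt(area_cm2) / nep_w

# Example: a detector with responsivity 0.5 A/W receiving 2 mW of light
# generates 1 mA of photocurrent.
i = photocurrent(2e-3, 0.5)
```

Dark current would be subtracted from the measured current before applying the responsivity, per the calibration note above.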

When a photodiode is used in an optical communication system, these parameters contribute to the sensitivity of the
optical receiver, which is the minimum input power required for the receiver to achieve a specified bit error rate.

Applications
P-N photodiodes are used in similar applications to other photodetectors, such as photoconductors, charge-coupled
devices, and photomultiplier tubes. They may be used to generate an output which is dependent upon the
illumination (analog; for measurement and the like), or to change the state of circuitry (digital; either for control and
switching, or digital signal processing).
Photodiodes are used in consumer electronics devices such as compact disc players, smoke detectors, and the
receivers for infrared remote control devices used to control equipment from televisions to air conditioners. For
many applications either photodiodes or photoconductors may be used. Either type of photosensor may be used for
light measurement, as in camera light meters, or to respond to light levels, as in switching on street lighting after
dark.
Photosensors of all types may be used to respond to incident light, or to a source of light which is part of the same
circuit or system. A photodiode is often combined into a single component with an emitter of light, usually a
light-emitting diode (LED), either to detect the presence of a mechanical obstruction to the beam (slotted optical
switch), or to couple two digital or analog circuits while maintaining extremely high electrical isolation between
them, often for safety (optocoupler).
Photodiodes are often used for accurate measurement of light intensity in science and industry. They generally have
a more linear response than photoconductors.
They are also widely used in various medical applications, such as detectors for computed tomography (coupled with
scintillators), instruments to analyze samples (immunoassay), and pulse oximeters.
PIN diodes are much faster and more sensitive than p-n junction diodes, and hence are often used for optical
communications and in lighting regulation.
P-N photodiodes are not used to measure extremely low light intensities. Instead, if high sensitivity is needed,
avalanche photodiodes, intensified charge-coupled devices or photomultiplier tubes are used for applications such as
astronomy, spectroscopy, night vision equipment and laser rangefinding.

Comparison with photomultipliers


Advantages compared to photomultipliers:
1. Excellent linearity of output current as a function of incident light
2. Spectral response from 190 nm to 1100 nm (silicon); longer wavelengths with other semiconductor materials
3. Low noise
4. Ruggedized to mechanical stress
5. Low cost
6. Compact and light weight
7. Long lifetime
8. High quantum efficiency, typically 80%
9. No high voltage required

Disadvantages compared to photomultipliers:


1. Small area
2. No internal gain (except avalanche photodiodes, but their gain is typically 10^2 to 10^3, compared to up to 10^8 for the photomultiplier)
3. Much lower overall sensitivity

4. Photon counting only possible with specially designed, usually cooled photodiodes, with special electronic
circuits
5. Response time for many designs is slower

Photodiode array
A one-dimensional array of hundreds or thousands of photodiodes can be used as a position sensor, for example as
part of an angle sensor.[6] One advantage of photodiode arrays (PDAs) is that they allow for high-speed parallel
read-out, since the driving electronics need not be built in as in a traditional CMOS or CCD sensor.

References
This article incorporates public domain material from the General Services Administration document "Federal
Standard 1037C" [7].
[1] IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006) "Photodiode" (http://goldbook.iupac.org/P04598.html).
[2] James F. Cox (26 June 2001). Fundamentals of Linear Electronics: Integrated and Discrete (http://books.google.com/books?id=FbezraN9tvEC&pg=PA91). Cengage Learning. p. 91. ISBN 978-0-7668-3018-9. Retrieved 20 August 2011.
[3] Filip Tavernier, Michiel Steyaert. High-Speed Optical Receivers with Integrated Photodiode in Nanoscale CMOS. Springer, 2011. ISBN 1441999248. Chapter 3: From Light to Electric Current - The Photodiode.
[4] "Photodiode slide" (http://hyperphysics.phy-astr.gsu.edu/hbase/Electronic/photdet.html).
[5] Held, G. Introduction to Light Emitting Diode Technology and Applications. CRC Press (Worldwide, 2008). Ch. 5, p. 116. ISBN 1-4200-7662-0.
[6] Wei Gao (2010). Precision Nanometrology: Sensors and Measuring Systems for Nanomanufacturing (http://books.google.com/books?id=N0ys_sSxD60C&pg=PA15). Springer. pp. 15-16. ISBN 9781849962537.
[7] http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm

Gowar, John. Optical Communication Systems, 2nd ed. Prentice-Hall, Hempstead, UK, 1993. ISBN 0-13-638727-6.

External links
Technical Information Hamamatsu Photonics (http://sales.hamamatsu.com/assets/html/ssd/si-photodiode/
index.htm)
Using the Photodiode to convert the PC to a Light Intensity Logger (http://www.emant.com/324003.page)
Design Fundamentals for Phototransistor Circuits (http://www.fairchildsemi.com/an/AN/AN-3005.pdf)
Working principles of photodiodes (http://ece-www.colorado.edu/~bart/book/book/chapter4/ch4_7.htm)

Piezoelectric accelerometer
A piezoelectric accelerometer is an accelerometer that utilizes the piezoelectric effect of certain materials to
measure dynamic changes in mechanical variables (e.g., acceleration, vibration, and mechanical shock).
As with all transducers, piezoelectric accelerometers convert one form of energy into another and provide an
electrical signal in response to a quantity, property, or condition that is being measured. Using the general sensing
method upon which all accelerometers are based, acceleration acts upon a seismic mass that is restrained by a spring
or suspended on a cantilever beam, and converts a physical force into an electrical signal. Before the acceleration can
be converted into an electrical quantity it must first be converted into either a force or a displacement. This
conversion is done via the mass-spring system shown in the figure to the right.
[Figure: A depiction of how a piezoelectric accelerometer works in theory.]

Introduction
The word piezoelectric finds its roots in the Greek word piezein, which means to squeeze or press. When a physical
force is exerted on the accelerometer, the seismic mass loads the piezoelectric element according to Newton's second
law of motion (F = ma). The force exerted on the piezoelectric material can be observed in the change in the
electrostatic force or voltage generated by the piezoelectric material. This differs from the piezoresistive effect in
that piezoresistive materials experience a change in the resistance of the material rather than a change in charge or
voltage. Physical force exerted on the piezoelectric can be classified as one of two types: bending or compression.
Stress of the compression type can be understood as a force exerted on one side of the piezoelectric while the
opposing side rests against a fixed surface, while bending involves a force being exerted on the piezoelectric from
both sides.
[Figure: The cross-section of a piezoelectric accelerometer.]
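The sensing chain described above (acceleration to force to charge to voltage) can be sketched numerically. This is a simplified open-circuit model: F = ma, charge q = d·F via an assumed piezoelectric charge constant d, and V = q/C; the mass, acceleration, charge constant, and capacitance values are illustrative only, not taken from any real device:

```python
def piezo_output_voltage(mass_kg, accel_ms2, d_c_per_n, cap_f):
    """Open-circuit voltage of an idealized compression-mode piezo accelerometer.

    The seismic mass converts acceleration to force (F = m*a), the piezo element
    converts force to charge (q = d*F), and the element capacitance converts
    charge to voltage (V = q/C)."""
    force = mass_kg * accel_ms2        # N
    charge = d_c_per_n * force         # C
    return charge / cap_f              # V

# Hypothetical numbers: 10 g seismic mass, 100 m/s^2 shock,
# a quartz-like charge constant of 2.3 pC/N, 100 pF element capacitance.
v = piezo_output_voltage(0.010, 100.0, 2.3e-12, 100e-12)  # about 23 mV
```

The small output and high source impedance of such a model illustrate why, as noted below, amplification and impedance conversion are needed.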
Piezoelectric materials used for accelerometers fall into two categories. The first, and more widely used, is
single-crystal materials (usually quartz). Though these materials offer a long life span in terms of sensitivity, their
disadvantage is that they are generally less sensitive than some piezoelectric ceramics. The other category is ceramic
materials such as barium titanate, lead zirconate titanate, lead metaniobate, and other materials whose composition is
considered proprietary by the company responsible for their development. In addition to having a higher piezoelectric
constant (sensitivity) than single-crystal materials, ceramics are less expensive to produce. The disadvantage of
piezoelectric ceramics, however, is that their sensitivity degrades with time, making the longevity of the device less
than that of single-crystal materials.
In applications when low sensitivity piezoelectrics are used, two or more crystals can be connected together for
output multiplication. The proper material can be chosen for particular applications based on the sensitivity,
frequency response, bulk-resistivity, and thermal response. Due to the low output signal and high output impedance
that piezoelectric accelerometers possess, there is a need for amplification and impedance conversion of the signal
produced. In the past this problem has been solved using a separate (external) amplifier/impedance converter. This
method, however, is generally impractical due to the noise that is introduced as well as the physical and
environmental constraints posed on the system as a result. Today IC amplifiers/impedance converters are
commercially available and are generally packaged within the case of the accelerometer itself.


History
Behind the mystery of the operation of the piezoelectric accelerometer lie some very fundamental concepts
governing the behavior of crystallographic structures. In 1880, Pierre and Jacques Curie published an experimental
demonstration connecting mechanical stress and surface charge on a crystal. This phenomenon became known as the
piezoelectric effect. Closely related to this phenomenon is the Curie point, named for the physicist Pierre Curie,
which is the temperature above which piezoelectric material loses spontaneous polarization of its atoms.
The development of the commercial piezoelectric accelerometer came about through a number of attempts to find
the most effective method to measure the vibration on large structures such as bridges and on vehicles in motion
such as aircraft. One attempt involved using the resistance strain gage as a device to build an accelerometer.
Incidentally, it was Hans J. Meier who, through his work at MIT, is given credit as the first to construct a
commercial strain gage accelerometer (circa 1938).[1] However, the strain gage accelerometers were fragile and
could only produce low resonant frequencies and they also exhibited a low frequency response. These limitations in
dynamic range made it unsuitable for testing naval aircraft structures. On the other hand, the piezoelectric sensor was
proven to be a much better choice over the strain gage in designing an accelerometer. The high modulus of elasticity
of piezoelectric materials makes the piezoelectric sensor a more viable solution to the problems identified with the
strain gage accelerometer.
Simply stated, the inherent properties of the piezoelectric accelerometers made it a much better alternative to the
strain gage types because of its high frequency response, and its ability to generate high resonant frequencies. The
piezoelectric accelerometer allowed for a reduction in its physical size at the manufacturing level and it also
provided for a higher g (standard gravity) capability relative to the strain gage type. By comparison, the strain gage
type exhibited a flat frequency response above 200 Hz while the piezoelectric type provided a flat response up to
10,000 Hz.[1] These improvements made it possible for measuring the high frequency vibrations associated with the
quick movements and short duration shocks of aircraft which before was not possible with the strain gage types.
Before long, the technological benefits of the piezoelectric accelerometer became apparent and in the late 1940s,
large scale production of piezoelectric accelerometers began. Today, piezoelectric accelerometers are used for
instrumentation in the fields of engineering, health and medicine, aeronautics and many other different industries.

Manufacturing
There are two common methods used to manufacture accelerometers. One is based upon the principles of
piezoresistance and the other is based on the principles of piezoelectricity. Both methods ensure that unwanted
orthogonal acceleration vectors are excluded from detection.
Manufacturing an accelerometer that uses piezoresistance first starts with a semiconductor layer that is attached to a
handle wafer by a thick oxide layer. The semiconductor layer is then patterned to the accelerometer's geometry. This
semiconductor layer has one or more apertures so that the underlying mass will have the corresponding apertures.
Next the semiconductor layer is used as a mask to etch out a cavity in the underlying thick oxide. A mass in the
cavity is supported in cantilever fashion by the piezoresistive arms of the semiconductor layer. Directly below the
accelerometer's geometry is a flex cavity that allows the mass in the cavity to flex or move in a direction that is
orthogonal to the surface of the accelerometer.
Accelerometers based upon piezoelectricity are constructed with two piezoelectric transducers. The unit consists of a
hollow tube that is sealed by a piezoelectric transducer on each end. The transducers are oppositely polarized and are
selected to have a specific series capacitance. The tube is then partially filled with a heavy liquid and the
accelerometer is excited. While excited the total output voltage is continuously measured and the volume of the
heavy liquid is microadjusted until the desired output voltage is obtained. Finally the outputs of the individual
transducers are measured, the residual voltage difference is tabulated, and the dominant transducer is identified.


Applications of piezoelectric accelerometers


Piezoelectric accelerometers are used in many different industries, environments and applications. Piezoelectric
measuring devices are widely used today in the laboratory, on the production floor, and as original equipment for
measuring and recording dynamic changes in mechanical variables including shock and vibration.

References
[1] Walter, Patrick L. The History of the Accelerometer: 1920s-1996, Prologue and Epilogue. 2006.

Norton, Harry N. (1989). Handbook of Transducers. Prentice Hall PTR. ISBN 013382599X.

External links

'Piezoelectric Tranducers' (http://endevco.com/resources/tp_pdf/TP225.pdf)


'Piezoelectric Sensors' (http://www.piezocryst.com/piezoelectric_sensors.php)
'The Piezoelectric Effect' (http://www.pcb.com/techsupport/tech_gen.php)
'Piezoelectric Accelerometers - Theory and Application' (http://www.new.mmf.de/theory.htm)

Pressure sensor
A pressure sensor measures pressure, typically of gases or liquids.
Pressure is an expression of the force required to stop a fluid from
expanding, and is usually stated in terms of force per unit area. A
pressure sensor usually acts as a transducer; it generates a signal as a
function of the pressure imposed. For the purposes of this article, such
a signal is electrical.
Pressure sensors are used for control and monitoring in thousands of
everyday applications. Pressure sensors can also be used to indirectly
measure other variables such as fluid/gas flow, speed, water level, and
altitude. Pressure sensors can alternatively be called pressure transducers, pressure transmitters, pressure senders,
pressure indicators, piezometers and manometers, among other names.

Digital air pressure sensor

Pressure sensors can vary drastically in technology, design, performance, application suitability and cost. A
conservative estimate would be that there may be over 50 technologies and at least 300 companies making pressure
sensors worldwide.
There is also a category of pressure sensors that are designed to measure in a dynamic mode for capturing very
high-speed changes in pressure. Example applications for this type of sensor would be the measuring of combustion
pressure in an engine cylinder or in a gas turbine. These sensors are commonly manufactured out of piezoelectric
materials such as quartz.
[Figure: Compact digital barometric pressure sensor.]
Some pressure sensors, such as those found in some traffic enforcement cameras, function in a binary (on/off)
manner, i.e., when pressure is applied to a pressure sensor, the sensor acts to complete or break an electrical circuit.
These types of sensors are also known as a pressure switch.


Types of pressure measurements


Pressure sensors can be classified in terms of pressure ranges they
measure, temperature ranges of operation, and most importantly
the type of pressure they measure. In terms of pressure type,
pressure sensors can be divided into five categories:
Absolute pressure sensor
This sensor measures the pressure relative to perfect vacuum
pressure (0 PSI or no pressure). Atmospheric pressure is 101.325
kPa (14.7 PSI) at sea level with reference to vacuum.
Gauge pressure sensor
This sensor is used in different applications because it can be calibrated to measure the pressure relative to a given
atmospheric pressure at a given location. A tire pressure gauge is an example of gauge pressure indication. When the
tire pressure gauge reads 0 PSI, there is really 14.7 PSI (atmospheric pressure) in the tire.
[Figure: Silicon piezoresistive pressure sensors.]
Vacuum pressure sensor
This sensor is used to measure pressure less than the atmospheric pressure at a given location. This has the potential
to cause some confusion, as industry may refer to a vacuum sensor as one which is referenced either to atmospheric
pressure (i.e. measuring negative gauge pressure) or relative to absolute vacuum.
Differential pressure sensor
This sensor measures the difference between two or more pressures introduced as inputs to the sensing unit, for
example, measuring the pressure drop across an oil filter. Differential pressure is also used to measure flow or level
in pressurized vessels.
Sealed pressure sensor
This sensor is the same as the gauge pressure sensor except that it is previously calibrated by manufacturers to
measure pressure relative to sea level pressure.

Pressure-sensing technology
There are two basic categories of analog pressure sensors.
Force collector types: These types of electronic pressure sensors generally use a force collector (such as a
diaphragm, piston, bourdon tube, or bellows) to measure strain (or deflection) due to applied force (pressure) over an area.
Piezoresistive strain gauge
Uses the piezoresistive effect of bonded or formed strain gauges to detect strain due to applied pressure.
Common technology types are Silicon (Monocrystalline), Polysilicon Thin Film, Bonded Metal Foil, Thick
Film, and Sputtered Thin Film. Generally, the strain gauges are connected to form a Wheatstone bridge circuit
to maximize the output of the sensor. This is the most commonly employed sensing technology for general
purpose pressure measurement. Generally, these technologies are suited to measure absolute, gauge, vacuum,
and differential pressures.
Capacitive
Uses a diaphragm and pressure cavity to create a variable capacitor to detect strain due to applied pressure.
Common technologies use metal, ceramic, and silicon diaphragms. Generally, these technologies are most
applied to low pressures (Absolute, Differential and Gauge)
Electromagnetic

Measures the displacement of a diaphragm by means of changes in inductance (reluctance), LVDT, Hall
Effect, or by eddy current principle.
Piezoelectric
Uses the piezoelectric effect in certain materials such as quartz to measure the strain upon the sensing
mechanism due to pressure. This technology is commonly employed for the measurement of highly dynamic
pressures.
Optical
Techniques include the use of the physical change of an optical fiber to detect strain due to applied pressure. A
common example of this type utilizes Fiber Bragg Gratings. This technology is employed in challenging
applications where the measurement may be highly remote, under high temperature, or may benefit from
technologies inherently immune to electromagnetic interference. Another analogous technique utilizes an
elastic film constructed in layers that can change reflected wavelengths according to the applied pressure
(strain).[1]
Potentiometric
Uses the motion of a wiper along a resistive mechanism to detect the strain caused by applied pressure.
Other types: These types of electronic pressure sensors use other properties (such as density) to infer pressure of a
gas or liquid.
Resonant
Uses the changes in resonant frequency in a sensing mechanism to measure stress, or changes in gas density,
caused by applied pressure. This technology may be used in conjunction with a force collector, such as those
in the category above. Alternatively, resonant technology may be employed by exposing the resonating element
itself to the media, whereby the resonant frequency is dependent upon the density of the media. Sensors have
been made out of vibrating wire, vibrating cylinders, quartz, and silicon MEMS. Generally, this technology is
considered to provide very stable readings over time.
Thermal
Uses the changes in thermal conductivity of a gas due to density changes to measure pressure. A common
example of this type is the Pirani gauge.
Ionization
Measures the flow of charged gas particles (ions) which varies due to density changes to measure pressure.
Common examples are the Hot and Cold Cathode gauges.
Others
There are numerous other ways to derive pressure from its density (speed of sound, mass, index of refraction)
among others.

Applications
There are many applications for pressure sensors:
Pressure sensing
This is where the measurement of interest is pressure, expressed as a force per unit area. This is useful in weather
instrumentation, aircraft, automobiles, and any other machinery that has pressure functionality implemented.
Altitude sensing
This is useful in aircraft, rockets, satellites, weather balloons, and many other applications. All these applications
make use of the relationship between changes in pressure relative to the altitude. This relationship is governed by the
following equation[2]:

h = 145366.45 × [1 − (P / 1013.25)^0.190284]

where h is the pressure altitude in feet and P is the station pressure in millibars.

This equation is calibrated for an altimeter, up to 36,090 feet (11,000 m). Outside that range, an error will be
introduced which can be calculated differently for each different pressure sensor. These error calculations will factor
in the error introduced by the change in temperature as we go up.
Barometric pressure sensors can have an altitude resolution of less than 1 meter, which is significantly better than
GPS systems (about 20 meters altitude resolution). In navigation applications altimeters are used to distinguish
between stacked road levels for car navigation and floor levels in buildings for pedestrian navigation.
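The pressure-to-altitude conversion can be sketched with the NOAA pressure-altitude formula cited as reference [2]; the constants assume standard-atmosphere sea-level pressure of 1013.25 mb, and the formula is only valid up to about 36,090 ft:

```python
def pressure_altitude_ft(station_pressure_mb):
    """Pressure altitude in feet from station pressure in millibars,
    per the NOAA pressure-altitude formula (valid up to ~36,090 ft)."""
    return 145366.45 * (1.0 - (station_pressure_mb / 1013.25) ** 0.190284)

# At standard sea-level pressure the computed altitude is zero;
# lower pressure readings map to higher altitudes.
h0 = pressure_altitude_ft(1013.25)
```

The monotonic, nonlinear mapping is why sub-meter altitude resolution only requires a barometric sensor with good pressure resolution, not a linear transfer function.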
Flow sensing
This is the use of pressure sensors in conjunction with the venturi effect to measure flow. Differential pressure is
measured between two segments of a venturi tube that have a different aperture. The pressure difference between the
two segments is proportional to the square of the flow rate through the venturi tube. A low pressure sensor is almost
always required as the pressure difference is relatively small.
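A minimal sketch of venturi flow sensing, assuming ideal (lossless) Bernoulli flow and the continuity equation; the bore sizes, fluid density, and measured differential pressure below are illustrative, not from any real meter:

```python
import math

def venturi_flow_rate(dp_pa, rho, a1_m2, a2_m2):
    """Volumetric flow rate (m^3/s) from the pressure drop across an ideal
    venturi, via Bernoulli + continuity:
        dp = (rho/2) * Q^2 * (1/A2^2 - 1/A1^2)
    where A1 is the entry area and A2 the (smaller) throat area."""
    return math.sqrt(2.0 * dp_pa / (rho * (1.0 / a2_m2**2 - 1.0 / a1_m2**2)))

# Water (1000 kg/m^3), 50 mm entry bore narrowing to a 25 mm throat,
# 2 kPa differential pressure reading (all hypothetical values).
a1 = math.pi * 0.025**2    # entry cross-section, m^2
a2 = math.pi * 0.0125**2   # throat cross-section, m^2
q = venturi_flow_rate(2000.0, 1000.0, a1, a2)   # roughly 1 L/s
```

Because the drop grows with the square of the flow, even modest flow rates produce only small differentials, which is why a low-range pressure sensor is usually required.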
Level / depth sensing
A pressure sensor may also be used to calculate the level of a fluid. This technique is commonly employed to
measure the depth of a submerged body (such as a diver or submarine), or level of contents in a tank (such as in a
water tower). For most practical purposes, fluid level is directly proportional to pressure. In the case of fresh water
where the contents are under atmospheric pressure, 1 psi = 27.7 inH2O and 1 mmH2O = 9.81 Pa. The basic equation for
such a measurement is

P = ρ · g · h

where P = pressure, ρ = density of the fluid, g = standard gravity, and h = height of the fluid column above the pressure sensor.
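The level measurement reduces to inverting the hydrostatic relation P = ρgh; a minimal sketch for fresh water under atmospheric pressure:

```python
def fluid_depth_m(gauge_pressure_pa, density_kg_m3=1000.0, g=9.80665):
    """Depth of fluid above the sensor from gauge pressure: h = P / (rho * g)."""
    return gauge_pressure_pa / (density_kg_m3 * g)

# One meter of fresh water corresponds to about 9.807 kPa of gauge pressure.
depth = fluid_depth_m(9806.65)   # 1.0 m
```

For a vented (gauge) sensor the atmospheric contribution cancels automatically; a sealed or absolute sensor would need the barometric pressure subtracted first.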
Leak testing
A pressure sensor may be used to sense the decay of pressure due to a system leak. This is commonly done by either
comparison to a known leak using differential pressure, or by means of utilizing the pressure sensor to measure
pressure change over time.
Ratiometric Correction of Transducer Output
Piezoresistive transducers configured as Wheatstone bridges often exhibit ratiometric behavior with respect not only
to the measured pressure, but also the transducer supply voltage:

Vout = P × S × (Vs / Vs,ideal)

where:
Vout is the output voltage of the transducer,
P is the actual measured pressure,
S is the nominal transducer scale factor (given an ideal transducer supply voltage) in units of voltage per pressure,
Vs is the actual transducer supply voltage,
Vs,ideal is the ideal transducer supply voltage.

Correcting measurements from transducers exhibiting this behavior requires measuring the actual transducer supply
voltage as well as the output voltage and applying the inverse transform of this behavior to the output signal:

P = Vout × Vs,ideal / (S × Vs)

NOTE: Common mode signals often present in transducers configured as Wheatstone bridges are not considered in
this analysis.
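The ratiometric correction can be sketched by inverting the model numerically; the scale factor and supply voltages below are hypothetical, and common-mode effects are ignored as in the note above:

```python
def corrected_pressure(v_out, s_v_per_pa, v_supply, v_supply_ideal):
    """Invert the ratiometric bridge model
        v_out = P * S * (v_supply / v_supply_ideal)
    to recover the measured pressure P."""
    return v_out * v_supply_ideal / (s_v_per_pa * v_supply)

# Hypothetical bridge: S = 10 uV/Pa at an ideal 5.0 V supply,
# actual supply sagging to 4.9 V, measured output 98 mV.
p = corrected_pressure(v_out=0.098, s_v_per_pa=10e-6,
                       v_supply=4.9, v_supply_ideal=5.0)   # 10 kPa
```

Without the supply-voltage term, the same 98 mV reading would be misread as 9.8 kPa, a 2% error tracking the 2% supply sag.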


References
[1] 'Elastic hologram' pages 113-117, Proc. of the IGC 2010, ISBN 978-0-9566139-1-2; http://www.dspace.cam.ac.uk/handle/1810/225960
[2] http://www.wrh.noaa.gov/slc/projects/wxcalc/formulas/pressureAltitude.pdf National Oceanic and Atmospheric Administration

Resistance thermometer
Resistance thermometers, also called resistance temperature detectors or resistive thermal devices (RTDs), are
sensors used to measure temperature by correlating the resistance of the RTD element with temperature. Most RTD
elements consist of a length of fine coiled wire wrapped around a ceramic or glass core. The element is usually quite
fragile, so it is often placed inside a sheathed probe to protect it. The RTD element is made from a pure material
whose resistance at various temperatures has been documented. The material has a predictable change in resistance
as the temperature changes; it is this predictable change that is used to determine temperature.
As they are almost invariably made of platinum, they are often called platinum resistance thermometers (PRTs).
They are slowly replacing the use of thermocouples in many industrial applications below 600 °C, due to higher
accuracy and repeatability.[1]

R vs T relationship of various metals


Common RTD sensing elements constructed of platinum, copper or nickel have a unique, repeatable and predictable
resistance versus temperature relationship (R vs T) and operating temperature range. The R vs T relationship is
defined as the amount of resistance change of the sensor per degree of temperature change.
Platinum is a noble metal and has the most stable resistance-to-temperature relationship over the largest temperature
range. Nickel elements have a limited temperature range because the amount of change in resistance per degree of
change in temperature becomes very non-linear at temperatures over 572 °F (300 °C). Copper has a very linear
resistance-to-temperature relationship; however, copper oxidizes at moderate temperatures and cannot be used over
302 °F (150 °C).
Platinum is the best metal for RTDs because it follows a very linear resistance-to-temperature relationship and it
follows the R vs T relationship in a highly repeatable manner over a wide temperature range. The unique properties
of platinum make it the material of choice for temperature standards over the range of −272.5 °C to 961.78 °C, and it
is used in the sensors that define the International Temperature Standard, ITS-90. Platinum is chosen because of its
linear resistance-temperature relationship and its chemical inertness.
The basic differentiator between metals used as resistive elements is the linear approximation of the R vs T
relationship between 0 and 100 °C, referred to as alpha, α. The equation below defines α; its units are ohm/ohm/°C:

α = (R100 − R0) / (100 × R0)

where R0 is the resistance of the sensor at 0 °C and R100 is the resistance of the sensor at 100 °C.
Pure platinum has an alpha of 0.003925 ohm/ohm/°C and is used in the construction of laboratory-grade RTDs.
Conversely, two widely recognized standards for industrial RTDs, IEC 60751 and ASTM E-1137, specify an alpha of
0.00385 ohm/ohm/°C. Before these standards were widely adopted several different alpha values were used. It is
still possible to find older probes that are made with platinum that have alpha values of 0.003916 ohm/ohm/°C and
0.003902 ohm/ohm/°C.
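The definition of alpha can be checked numerically; for example, an IEC 60751 Pt100 element has R0 = 100 Ω and R100 = 138.50 Ω:

```python
def rtd_alpha(r0, r100):
    """Mean temperature coefficient alpha = (R100 - R0) / (100 * R0),
    in ohm/ohm/degC, from the 0 degC and 100 degC resistances."""
    return (r100 - r0) / (100.0 * r0)

# IEC 60751 Pt100: 100.00 ohm at 0 degC, 138.50 ohm at 100 degC.
a = rtd_alpha(100.0, 138.50)   # 0.00385 ohm/ohm/degC
```

Plugging in the laboratory-grade value R100 = 139.25 Ω for the same R0 would recover the pure-platinum alpha of 0.003925 instead.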
These different alpha values for platinum are achieved by doping: carefully introducing impurities into the platinum.
The impurities introduced during doping become embedded in the lattice structure of the platinum and result in a
different R vs T curve and hence a different alpha value.[2]

Calibration
To characterize the R vs T relationship of any RTD over a temperature range that represents the planned range of
use, calibration must be performed at temperatures other than 0 °C and 100 °C. Two common calibration methods are
the fixed-point method and the comparison method.[3]
Fixed-point calibration, used for the highest-accuracy calibrations, uses the triple point, freezing point or melting
point of pure substances such as water, zinc, tin, and argon to generate a known and repeatable temperature.
These cells allow the user to reproduce actual conditions of the ITS-90 temperature scale. Fixed-point calibrations
provide extremely accurate calibrations (within ±0.001 °C). A common fixed-point calibration method for
industrial-grade probes is the ice bath. The equipment is inexpensive, easy to use, and can accommodate several
sensors at once. The ice point is designated as a secondary standard because its accuracy is ±0.005 °C (±0.009 °F),
compared to ±0.001 °C (±0.0018 °F) for primary fixed points.
In comparison calibrations, commonly used with secondary SPRTs and industrial RTDs, the thermometers being
calibrated are compared to calibrated thermometers by means of a bath whose temperature is uniformly stable.
Unlike fixed-point calibrations, comparisons can be made at any temperature between −100 °C and 500 °C
(−148 °F to 932 °F). This method might be more cost-effective since several sensors can be calibrated
simultaneously with automated equipment. These electrically heated and well-stirred baths use silicone oils and
molten salts as the medium for the various calibration temperatures.

RTD Element Types


There are three main categories of RTD sensors: thin-film, wire-wound, and coiled elements. While these types
are the ones most widely used in industry, there are some places where other, more exotic shapes are used; for
example, carbon resistors are used at ultra-low temperatures (−173 °C to −273 °C).[4]
Carbon resistor elements are widely available and are very inexpensive. They have very reproducible results at
low temperatures. They are the most reliable form at extremely low temperatures. They generally do not suffer
from significant hysteresis or strain gauge effects.
Strain-free elements use a wire coil minimally supported within a sealed housing filled with an inert gas. These
sensors are used up to 961.78 °C and are used in the SPRTs that define ITS-90. They consist of platinum wire
loosely coiled over a support structure so the element is free to expand and contract with temperature, but they are
very susceptible to shock and vibration, as the loops of platinum can sway back and forth causing deformation.
Thin-film elements have a sensing element that is formed by depositing a very thin layer of resistive material,
normally platinum, on a ceramic substrate; this layer is usually just 10 to 100 angstroms (1 to 10 nanometers)
thick.[5] This film is then coated with an epoxy or glass that helps protect the deposited film and also acts as a
strain relief for the external lead-wires. Disadvantages of this type are that they are not as stable as their
wire-wound or coiled counterparts. They can also only be used over a limited temperature range, due to the different
expansion rates of the substrate and the deposited resistive material giving a "strain gauge" effect that can be seen
in the resistive temperature coefficient. These elements work at temperatures up to 300 °C.


Wire-wound elements can have greater accuracy, especially for wide temperature ranges. The coil diameter
provides a compromise between mechanical stability and allowing expansion of the wire to minimize strain and
consequential drift. The sensing wire is wrapped around an insulating mandrel or core. The winding core can be
round or flat, but must be an electrical insulator. The coefficient of thermal expansion of the winding core
material is matched to the sensing wire to minimize any mechanical strain, since strain on the element wire will
result in a thermal measurement error. The sensing wire is connected to a larger wire, usually referred to as the
element lead or wire. This wire is selected to be compatible with the sensing wire so that the combination does
not generate an emf that would distort the thermal measurement. These elements work at temperatures up to 660 °C.

Coiled elements have largely replaced wire-wound elements in industry. This design has a wire coil which can
expand freely over temperature, held in place by some mechanical support which lets the coil keep its shape. This
strain free design allows the sensing wire to expand and contract free of influence from other materials in the
This design is similar to that of a SPRT, the primary standard upon which ITS-90 is based, while providing the
durability necessary for industrial use. The basis of the sensing element is a small coil of platinum sensing wire.
This coil resembles a filament in an incandescent light bulb. The housing or mandrel is a hard fired ceramic oxide
tube with equally spaced bores that run transverse to the axis. The coil is inserted in the bores of the mandrel and
then packed with a very finely ground ceramic powder. This permits the sensing wire to move while still
remaining in good thermal contact with the process. These elements work at temperatures up to 850 °C.

The current international standard which specifies tolerances and the temperature-to-electrical-resistance relationship
for platinum resistance thermometers is IEC 60751:2008; ASTM E1137 is also used in the United States. By far the
most common devices used in industry have a nominal resistance of 100 ohms at 0 °C and are called Pt100 sensors
('Pt' is the symbol for platinum). The sensitivity of a standard 100 Ω sensor is a nominal 0.385 Ω/°C (corresponding
to a temperature coefficient of 0.00385 Ω/Ω/°C). RTDs with temperature coefficients of 0.00375 and 0.00392 Ω/Ω/°C,
as well as a variety of others, are also available.

Function
Resistance thermometers are constructed in a number of forms and offer greater stability, accuracy and repeatability
in some cases than thermocouples. While thermocouples use the Seebeck effect to generate a voltage, resistance
thermometers use electrical resistance and require a power source to operate. The resistance ideally varies linearly
with temperature.
The platinum detecting wire needs to be kept free of contamination to remain stable. A platinum wire or film is
supported on a former in such a way that it gets minimal differential expansion or other strains from its former, yet is
reasonably resistant to vibration. RTD assemblies made from iron or copper are also used in some applications.
Commercial platinum grades are produced which exhibit a temperature coefficient of resistance of 0.00385/°C
(0.385%/°C), the European Fundamental Interval.[6] The sensor is usually made to have a resistance of 100 Ω at 0 °C.
This is defined in BS EN 60751:1996 (taken from IEC 60751:1995). The American Fundamental Interval is
0.00392/°C,[7] based on using a purer grade of platinum than the European standard. The American standard is from
the Scientific Apparatus Manufacturers Association (SAMA), which is no longer active in this standards field. As a
result, the "American standard" is hardly the standard even in the US.
Measurement of resistance requires a small current to be passed through the device under test. This can cause
resistive heating, causing significant loss of accuracy if manufacturers' limits are not respected, or the design does
not properly consider the heat path. Mechanical strain on the resistance thermometer can also cause inaccuracy. Lead
wire resistance can also be a factor; adopting three- and four-wire, instead of two-wire, connections can eliminate
connection lead resistance effects from measurements (see below); three-wire connection is sufficient for most
purposes and almost universal industrial practice. Four-wire connections are used for the most precise applications.


Advantages and limitations


Advantages of platinum resistance thermometers:

High accuracy
Low drift
Wide operating range
Suitable for precision applications

Limitations:
RTDs in industrial applications are rarely used above 660 °C. At temperatures above 660 °C it becomes
increasingly difficult to prevent the platinum from being contaminated by impurities from the metal sheath of
the thermometer; this is why laboratory standard thermometers replace the metal sheath with a glass
construction. At very low temperatures, say below −270 °C (3 K), there are very few phonons, so the resistance
of an RTD is determined mainly by impurities and boundary scattering and is thus essentially independent of
temperature. As a result, the sensitivity of the RTD is essentially zero, and the device is not useful.
Compared to thermistors, platinum RTDs are less sensitive to small temperature changes and have a slower
response time. However, thermistors have a smaller temperature range and poorer stability.
Sources of error:
The common error sources of a PRT are:
Interchangeability: the closeness of agreement between the specific PRT's resistance-vs.-temperature
relationship and a predefined resistance-vs.-temperature relationship, commonly defined by IEC 60751.[8]
Insulation Resistance: Error caused by the inability to measure the actual resistance of the element, because
current leaks into or out of the circuit through the sheath, between the element leads, or between the elements.[9]
Stability: Ability to maintain R vs T over time as a result of thermal exposure.[10]
Repeatability: Ability to maintain R vs T under the same conditions after experiencing thermal cycling
throughout a specified temperature range.[11]
Hysteresis: Change in the characteristics of the materials from which the RTD is built due to exposures to varying
temperatures.[12]
Stem Conduction: Error that results from the PRT sheath conducting heat into or out of the process.
Calibration/Interpolation: Errors that occur due to calibration uncertainty at the calibration points, or between
calibration points due to propagation of uncertainty or curve-fit errors.
Lead Wire: Errors that occur because a four-wire or three-wire measurement is not used; these errors increase
with higher-gauge (thinner) wire.
A two-wire connection adds the lead resistance in series with the PRT element.
A three-wire connection relies on all three leads having equal resistance.
Self Heating: Error produced by the heating of the PRT element due to the power applied.
Time Response: Errors are produced during temperature transients because the PRT cannot respond to changes
fast enough.
Thermal EMF: Thermal EMF errors are produced by the EMF adding to or subtracting from the applied sensing
voltage, primarily in DC systems.


RTDs vs Thermocouples
The two most common ways of measuring industrial temperatures are with resistance temperature detectors (RTDs)
and thermocouples. Choice between them is usually determined by four factors.
What are the temperature requirements? If process temperatures are between −200 and 500 °C (−328 and 932 °F), an
industrial RTD is the preferred option. Thermocouples have a range of −180 to 2320 °C (−292 to 4208 °F),[13] so
for temperatures above 500 °C (932 °F) they are the only contact temperature measurement device.
What are the time-response requirements? If the process requires a very fast response to temperature changes
(fractions of a second, as opposed to the several seconds, e.g. 2.5 to 10 s, typical of RTDs), then a thermocouple is
the best choice. Time response is measured by immersing the sensor in water moving at 1 m/s (3 ft/s) and recording
the time to reach 63.2% of a step change.
What are the size requirements? A standard RTD sheath is 3.175 to 6.35 mm (0.125 to 0.250 in) in diameter;
sheath diameters for thermocouples can be less than 1.6 mm (0.063 in).
What are the accuracy and stability requirements? If a tolerance of ±2 °C is acceptable and the highest level of
repeatability is not required, a thermocouple will serve. RTDs are capable of higher accuracy and can maintain
stability for many years, while thermocouples can drift within the first few hours of use.

Construction

These elements nearly always require insulated leads. At temperatures below about 250 °C, PVC, silicone
rubber or PTFE insulators are used. Above this, glass fibre or ceramic is used. The measuring point, and usually
most of the leads, require a housing or protective sleeve, often made of a metal alloy which is chemically inert to the
process being monitored. Selecting and designing protection sheaths can require more care than the actual sensor, as
the sheath must withstand chemical or physical attack and provide convenient attachment points.

Wiring configurations
Two-wire configuration

The simplest resistance thermometer configuration uses two wires. It is only used when high accuracy is not
required, as the resistance of the connecting wires is added to that of the sensor, leading to errors of measurement.
This configuration allows use of up to 100 metres of cable. This applies equally to balanced-bridge and fixed-bridge
systems.
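To get a feel for the size of this error, the following sketch estimates the apparent temperature offset caused by two-wire cable resistance for a Pt100. The cable length and the copper resistance per metre used in the example are assumed illustrative figures, not standardized values.

```cpp
// Apparent temperature error of a two-wire Pt100 measurement: both leads
// are in series with the sensing element, so their resistance reads as
// extra temperature. A Pt100 changes by roughly 0.385 ohm per degree C.
float TwoWirePt100ErrorC(float cableLengthM, float ohmPerMetre)
{
    float leadResistance = 2.0f * cableLengthM * ohmPerMetre; // out and back
    return leadResistance / 0.385f;                           // offset in degrees C
}
```

For example, 100 m of cable at an assumed 0.034 Ω/m gives 6.8 Ω of lead resistance, which a two-wire circuit reads as an offset of roughly +17.7 °C; this is why two-wire connections are restricted to low-accuracy use.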


Three-wire configuration

In order to minimize the effects of the lead resistances, a three-wire configuration can be used. Using this method the
two leads to the sensor are on adjoining arms. There is a lead resistance in each arm of the bridge so that the
resistance is cancelled out, so long as the two lead resistances are accurately the same. This configuration allows up
to 600 metres of cable.
Error on the schematic: a three-wire RTD is connected in the following manner. One lead is connected to R1; that
wire's lead resistance is measured as part of the RTD resistance. One wire (of the two on the other end of the RTD)
is connected to the lower end of R3, so this wire's lead resistance is measured with R3, the reference resistor. The
remaining wire is connected to the supply return (ground); this resistance is normally considered too low to matter
in the measurement, and it is in series with the currents through both the RTD and the reference resistor (R3). The
lead-resistance effects are all translated into common-mode voltages that are rejected (common-mode rejection) by
the instrumentation amplifier. The wires are typically the same gauge and made of the same material, to minimise
temperature-coefficient issues.

Four-wire configuration

The four-wire resistance thermometer configuration increases the accuracy and reliability of the resistance being
measured: the resistance error due to lead wire resistance is zero. In the diagram above a standard two-terminal RTD
is used with another pair of wires to form an additional loop that cancels out the lead resistance. The above
Wheatstone bridge method uses a little more copper wire and is not a perfect solution. Below is a better
configuration, the four-wire Kelvin connection: it provides full cancellation of spurious effects, and cable resistance
of up to 15 Ω can be handled.


Classifications of RTDs
The highest accuracy of all PRTs is achieved by Standard Platinum Resistance Thermometers (SPRTs). This accuracy
is achieved at the expense of durability and cost. SPRT elements are wound from reference-grade platinum wire.
Internal lead wires are usually made from platinum, while internal supports are made from quartz or fused silica. The
sheaths are usually made from quartz or sometimes Inconel, depending on temperature range. Larger-diameter
platinum wire is used, which drives up the cost and results in a lower resistance for the probe (typically 25.5 ohms).
SPRTs have a wide temperature range (−200 °C to 1000 °C) and are accurate to approximately ±0.001 °C over the
temperature range. SPRTs are only appropriate for laboratory use.
Another classification of laboratory PRTs is Secondary Standard Platinum Resistance Thermometers (Secondary
SPRTs). They are constructed like the SPRT, but the materials are more cost-effective. Secondary SPRTs commonly
use reference-grade, high-purity, smaller-diameter platinum wire, metal sheaths and ceramic-type insulators. Internal
lead wires are usually a nickel-based alloy. Secondary SPRTs are limited in temperature range (−200 °C to 500 °C)
and are accurate to approximately ±0.03 °C over the temperature range.
Industrial PRTs are designed to withstand industrial environments. They can be almost as durable as a thermocouple.
Depending on the application, industrial PRTs can use thin-film or coil-wound elements. The internal lead
wires can range from PTFE-insulated stranded nickel-plated copper to silver wire, depending on the sensor size and
application. Sheath material is typically stainless steel; higher temperature applications may demand Inconel. Other
materials are used for specialized applications.

Applications
Sensor assemblies can be categorized into two groups by how they are installed or interface with the process:
immersion or surface mounted.
Immersion sensors take the form of a stainless steel tube and some type of process connection fitting. They are installed
into the process with sufficient immersion length to ensure good contact with the process medium and reduce
external influences.[14] A variation of this style includes a separate thermowell that provides additional protection
for the sensor.[15] These styles are used to measure fluid or gas temperatures in pipes and tanks. Most sensors
have the sensing element located at the tip of the stainless steel tube. An averaging style RTD however, can
measure an average temperature of air in a large duct.[16] This style of immersion RTD has the sensing element
distributed along the entire probe length and provides an average temperature. Lengths range from 3 to 60 feet.
Surface-mounted sensors are used when immersion into a process fluid is not possible, due to the configuration of
the piping or tank, or because the fluid properties may not allow an immersion-style sensor. Configurations range
from tiny cylinders[17] to large blocks which are mounted by clamps[18], adhesives, or bolted into place. Most
require the addition of insulation to isolate them from the cooling or heating effects of the ambient conditions to
ensure accuracy. Other applications may require special waterproofing or pressure seals. A heavy-duty underwater
temperature sensor is designed for complete submersion in rivers, cooling ponds, or sewers. Steam autoclaves require
a sensor that is sealed against intrusion by steam during the vacuum cycle process.


Immersion sensors generally have the best measurement accuracy because they are in direct contact with the process
fluid. Surface-mounted sensors measure the pipe surface as a close approximation of the internal process fluid
temperature.

History
The application of the tendency of electrical conductors to increase their electrical resistance with rising temperature
was first described by Sir William Siemens at the Bakerian Lecture of 1871 before the Royal Society of Great
Britain. The necessary methods of construction were established by Callendar, Griffiths, Holborn and Wien between
1885 and 1900.

Standard resistance thermometer data


Temperature sensors are usually supplied with thin-film elements. The resistance elements are rated in accordance
with BS EN 60751:2008 as:
Tolerance Class   Valid Range
F 0.3             −50 to +500 °C
F 0.15            −30 to +300 °C
F 0.1             0 to +150 °C
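These tolerance classes are commonly associated with the IEC 60751 tolerance formulas for classes AA, A and B. The coefficients below are the widely published ones, and the mapping to the F 0.1 / F 0.15 / F 0.3 film classes is an assumption for illustration; check the standard before relying on it.

```cpp
#include <cmath>

// Tolerance (in +/- degrees C) of a platinum element at temperature tC.
// Commonly published IEC 60751 class formulas (treated here as assumptions):
//   class AA (~F 0.1):  0.10 + 0.0017|t|
//   class A  (~F 0.15): 0.15 + 0.0020|t|
//   class B  (~F 0.3):  0.30 + 0.0050|t|
float PtToleranceC(char tolClass, float tC)
{
    float t = std::fabs(tC);
    switch (tolClass) {
        case 'B':  return 0.30f + 0.0050f * t;
        case 'A':  return 0.15f + 0.0020f * t;
        default:   return 0.10f + 0.0017f * t;   // class AA
    }
}
```

With these formulas, a class-B element at 100 °C would be within ±0.80 °C of the true temperature.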

Resistance thermometer elements can be supplied which function up to 1000 °C. The relation between temperature
and resistance is given by the Callendar-Van Dusen equation:

R_T = R_0 · [1 + A·T + B·T² + C·(T − 100)·T³]   (for T < 0 °C)
R_T = R_0 · [1 + A·T + B·T²]                    (for T ≥ 0 °C)

Here, R_T is the resistance at temperature T, R_0 is the resistance at 0 °C, and the constants (for an alpha = 0.00385
platinum RTD) are:

A = 3.9083×10⁻³ °C⁻¹
B = −5.775×10⁻⁷ °C⁻²
C = −4.183×10⁻¹² °C⁻⁴

Since the B and C coefficients are relatively small, the resistance changes almost linearly with the temperature.
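The Callendar-Van Dusen relation can be sketched in C++ as follows; the coefficient values are the commonly published IEC 60751 constants for an alpha = 0.00385 element.

```cpp
#include <cmath>

// Callendar-Van Dusen resistance of an alpha = 0.00385 platinum RTD.
// r0 is the 0 degree C resistance (100 ohms for Pt100, 1000 ohms for Pt1000).
float CvdResistance(float r0, float tC)
{
    const double A =  3.9083e-3;   // per degree C
    const double B = -5.775e-7;    // per degree C squared
    const double C = -4.183e-12;   // per degree C^4, used only below 0 C
    double r = 1.0 + A * tC + B * tC * tC;
    if (tC < 0.0)
        r += C * (tC - 100.0) * tC * tC * tC;  // extra term for t < 0 C
    return (float)(r0 * r);
}
```

For example, CvdResistance(100.0f, 100.0f) evaluates to about 138.5 Ω and CvdResistance(100.0f, -50.0f) to about 80.31 Ω, in agreement with the Pt100 values tabulated below.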

Values for various popular resistance thermometers

Temperature   Pt100     Pt1000    PTC       NTC       NTC       NTC       NTC       NTC
in °C         in Ω      in Ω      in Ω      in Ω      in Ω      in Ω      in Ω      in Ω
              Typ: 404  Typ: 501  Typ: 201  Typ: 101  Typ: 102  Typ: 103  Typ: 104  Typ: 105
−50            80.31     803.1    1032
−45            82.29     822.9    1084
−40            84.27     842.7    1135      50475
−35            86.25     862.5    1191      36405
−30            88.22     882.2    1246      26550
−25            90.19     901.9    1306      19560     26083
−20            92.16     921.6    1366      14560     19414
−15            94.12     941.2    1430      10943     14596
−10            96.09     960.9    1493       8299     11066
−5             98.04     980.4    1561                 8466     31389
0             100.00    1000.0    1628                 6536     23868
5             101.95    1019.5    1700                 5078     18299
10            103.90    1039.0    1771                 3986     14130
15            105.85    1058.5    1847                          10998
20            107.79    1077.9    1922                           8618
25            109.73    1097.3    2000                           6800     15000
30            111.67    1116.7    2080                           5401     11933
35            113.61    1136.1    2162                           4317      9522
40            115.54    1155.4    2244                           3471      7657
45            117.47    1174.7    2330                                     6194
50            119.40    1194.0    2415                                     5039
55            121.32    1213.2    2505                                     4299     27475
60            123.24    1232.4    2595                                     3756     22590
65            125.16    1251.6    2689                                              18668
70            127.07    1270.7    2782                                              15052
75            128.98    1289.8    2880                                              12932
80            130.89    1308.9    2977                                              10837
85            132.80    1328.0    3079                                               9121
90            134.70    1347.0    3180                                               7708
95            136.60    1366.0    3285                                               6539
100           138.50    1385.0    3390
105           140.39    1403.9
110           142.29    1422.9
150           157.31    1573.1
200           175.84    1758.4
The function for temperature value acquisition (C++)

The following code computes the temperature of a Pt100 or Pt1000 sensor from its measured resistance, by linear
interpolation in the Pt100 table above.
float GetPt100Temperature(float r)
{
    // Pt100 resistance values from the table above: -50 °C to +110 °C in
    // 5 °C steps, then 150 °C, 200 °C and 250 °C.
    static const float Pt100[] = {
         80.31f,  82.29f,  84.27f,  86.25f,  88.22f,
         90.19f,  92.16f,  94.12f,  96.09f,  98.04f,
        100.00f, 101.95f, 103.90f, 105.85f, 107.79f,
        109.73f, 111.67f, 113.61f, 115.54f, 117.47f,
        119.40f, 121.32f, 123.24f, 125.16f, 127.07f,
        128.98f, 130.89f, 132.80f, 134.70f, 136.60f,
        138.50f, 140.39f, 142.29f, 157.31f, 175.84f,
        195.84f };

    int t = -50;
    if (r <= Pt100[0])
        return t;                          // clamp below the table
    for (int i = 1; t < 250; ++i) {
        // Step size: 5 °C up to 110 °C, then one 40 °C and two 50 °C steps.
        int dt = (t < 110) ? 5 : (t > 110) ? 50 : 40;
        if (r < Pt100[i])                  // interpolate linearly in this interval
            return t + (r - Pt100[i - 1]) * dt / (Pt100[i] - Pt100[i - 1]);
        t += dt;
    }
    return t;                              // clamp above the table
}

float GetPt1000Temperature(float r)
{
    // A Pt1000 element has ten times the resistance of a Pt100 element.
    return GetPt100Temperature(r / 10);
}

References
[1] Common RTD sensing elements constructed of platinum, copper or nickel have a unique, repeatable and predictable resistance-versus-temperature
relationship (R vs T) and operating temperature range. The R vs T relationship is defined as the amount of resistance change of
the sensor per degree of temperature change. Frequently asked questions about RTDs (http://www.burnsengineering.com/pgd.asp?pgid=docfaq), retrieved 2009-09-18
[2] http://www.burnsengineering.com/tech-papers/
[3] Calibration (http://www.burnsengineering.com/document/papers/Calibration-Why_When_How_Handout.pdf), retrieved 2011-11-16
[4] Carbon Resistors (http://www.bipm.org/utils/common/pdf/its-90/TECChapter11.pdf), retrieved 2011-11-16
[5] RTD Element Types (http://canteach.candu.org/library/20030701.pdf), retrieved 2011-11-16
[6] http://www.instrumentationservices.net/hand-held-thermometers.php
[7] http://hyperphysics.phy-astr.gsu.edu/hbase/electric/restmp.html
[8] Interchangeability (http://www.burnsengineering.com/document/pdf/interchangeability.pdf), retrieved 2009-09-18
[9] Insulation Resistance (http://www.burnsengineering.com/document/pdf/a080211.pdf), retrieved 2009-09-18
[10] Stability (http://www.burnsengineering.com/document/pdf/a080306.pdf), retrieved 2009-09-18
[11] Repeatability (http://www.burnsengineering.com/document/papers/PRT_Error_Sources_Part_4_Repeatability.pdf), retrieved 2009-09-18
[12] Hysteresis (http://www.burnsengineering.com/document/papers/PRT_Error_Sources_Part_5_Hysteresis.pdf), retrieved 2009-09-18
[13] http://www.omega.com/temperature/Z/pdf/z241-245.pdf
[14] Small Line Direct Immersion (http://www.burnsengineering.com/document/pdf/a110425.pdf), retrieved 2011-11-16
[15] Thermowell (http://www.burnsengineering.com/document/pdf/A080528.pdf), retrieved 2011-11-16
[16] Averaging Sensors (http://www.burnsengineering.com/document/pdf/a100415.pdf), retrieved 2011-11-16
[17] Elbow Thermowell (http://www.burnsengineering.com/document/pdf/A080218.pdf), retrieved 2011-11-16
[18] Surface Sensor (http://www.burnsengineering.com/document/pdf/A071011.pdf), retrieved 2011-11-16


External links
Practical computational RTD linearization techniques for DAQ and embedded systems. (http://garga.iet.unipi.
it/II/Linearizing RTD.pdf)
The Callendar van Dusen coefficients (http://www.uniteksys.com/Graphics/CalVan.pdf) - How to calculate
Callendar van Dusen coefficients
RESISTANCE THERMOMETRY: PRINCIPLES AND APPLICATIONS OF RESISTANCE
THERMOMETERS AND THERMISTORS (http://www.minco.com/download-media.aspx?id=2284)
Additional information on RTDs (http://www.burnsengineering.com/rtdology/)
Additional information on RTD applications (http://www.burnsengineering.com/application-notes/)

Thermistor
A thermistor is a type of resistor whose resistance varies significantly
with temperature, more so than in standard resistors. The word is a
portmanteau of thermal and resistor. Thermistors are widely used as
inrush current limiters, temperature sensors, self-resetting overcurrent
protectors, and self-regulating heating elements.
Thermistors differ from resistance temperature detectors (RTD) in that
the material used in a thermistor is generally a ceramic or polymer,
while RTDs use pure metals. The temperature response is also
different; RTDs are useful over larger temperature ranges, while thermistors typically achieve a higher precision
within a limited temperature range, typically −90 °C to 130 °C.[1]

Negative temperature coefficient (NTC)


(Figure: NTC thermistor, bead type, with insulated wires)
(Figure: thermistor circuit symbol)

Assuming, as a first-order approximation, that the relationship between resistance and temperature is linear, then:

ΔR = k·ΔT

where
ΔR = change in resistance
ΔT = change in temperature
k = first-order temperature coefficient of resistance

Thermistors can be classified into two types, depending on the sign of k. If k is positive, the resistance increases
with increasing temperature, and the device is called a positive temperature coefficient (PTC) thermistor, or
posistor. If k is negative, the resistance decreases with increasing temperature, and the device is called a negative
temperature coefficient (NTC) thermistor. Resistors that are not thermistors are designed to have a k as close to
zero as possible, so that their resistance remains nearly constant over a wide temperature range.

Instead of the temperature coefficient k, sometimes the temperature coefficient of resistance α_T (alpha sub T) is
used. It is defined as[2]

α_T = (1/R(T)) · (dR/dT)

This α_T coefficient should not be confused with the a parameter below.

Steinhart-Hart equation
In practice, the linear approximation (above) works only over a small temperature range. For accurate temperature
measurements, the resistance/temperature curve of the device must be described in more detail. The Steinhart-Hart
equation is a widely used third-order approximation:

1/T = a + b·ln(R) + c·(ln(R))³

where a, b and c are called the Steinhart-Hart parameters, and must be specified for each device. T is the temperature
in kelvins and R is the resistance in ohms. To give resistance as a function of temperature, the above can be
rearranged into:

R = exp( (y − x/2)^(1/3) − (y + x/2)^(1/3) )

where x = (1/c)·(a − 1/T) and y = √( (b/(3c))³ + x²/4 ).

The error in the Steinhart-Hart equation is generally less than 0.02 °C in the measurement of temperature over a
200 °C range.[3] As an example, typical values for a thermistor with a resistance of 3000 Ω at room temperature
(25 °C = 298.15 K) are a = 1.40×10⁻³, b = 2.37×10⁻⁴ and c = 9.90×10⁻⁸.
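The forward relation, 1/T = a + b·ln(R) + c·(ln R)³, can be sketched as a small helper; the coefficient values used in the note below are example values for a 3 kΩ device and would differ for a real part.

```cpp
#include <cmath>

// Temperature in kelvins from resistance in ohms, via the Steinhart-Hart
// equation: 1/T = a + b*ln(R) + c*(ln R)^3.
double SteinhartHartKelvin(double r, double a, double b, double c)
{
    double lnR = std::log(r);
    return 1.0 / (a + b * lnR + c * lnR * lnR * lnR);
}
```

With a = 1.40e-3, b = 2.37e-4 and c = 9.90e-8 (rounded example values), a reading of 3000 Ω evaluates to roughly 298 to 299 K, i.e. close to 25 °C.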

B parameter equation
NTC thermistors can also be characterised with the B (or β) parameter equation, which is essentially the
Steinhart-Hart equation with a = 1/T₀ − (1/B)·ln(R₀), b = 1/B and c = 0:

1/T = 1/T₀ + (1/B)·ln(R/R₀)

where the temperatures are in kelvins and R₀ is the resistance at temperature T₀ (usually 25 °C = 298.15 K). Solving
for R yields:

R = R₀·exp(B·(1/T − 1/T₀))

or, alternatively,

R = r∞·exp(B/T)

where r∞ = R₀·exp(−B/T₀). This can be solved for the temperature:

T = B / ln(R/r∞)

The B-parameter equation can also be written as ln(R) = B/T + ln(r∞). This can be used to convert the function
of resistance vs. temperature of a thermistor into a linear function of ln(R) vs. 1/T. The average slope of this
function will then yield an estimate of the value of the B parameter.
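Under the B-parameter model, 1/T = 1/T₀ + (1/B)·ln(R/R₀), conversion in both directions is a one-liner; the R₀, T₀ and B values used in the test below are arbitrary illustrative numbers.

```cpp
#include <cmath>

// B-parameter model: 1/T = 1/T0 + (1/B) * ln(R / R0), temperatures in kelvins.
double NtcTemperatureK(double r, double r0, double t0, double b)
{
    return 1.0 / (1.0 / t0 + std::log(r / r0) / b);
}

// Inverse: resistance at temperature t (kelvins) under the same model.
double NtcResistance(double t, double r0, double t0, double b)
{
    return r0 * std::exp(b * (1.0 / t - 1.0 / t0));
}
```

The two functions are exact inverses of each other, so converting a resistance to a temperature and back recovers the original value.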

Conduction model
Many NTC thermistors are made from a pressed disc or cast chip of a semiconductor such as a sintered metal oxide.
They work because raising the temperature of a semiconductor increases the number of electrons able to move about
and carry charge - it promotes them into the conduction band. The more charge carriers that are available, the more
current a material can conduct. This is described in the formula:

I = n·A·v·e

where
I = electric current (amperes)
n = density of charge carriers (count/m³)
A = cross-sectional area of the material (m²)
v = average velocity of the charge carriers (m/s)
e = charge of an electron (1.602×10⁻¹⁹ coulombs)

The current is measured using an ammeter. Over large changes in temperature, calibration is necessary. Over small
changes in temperature, if the right semiconductor is used, the resistance of the material is linearly proportional to
the temperature. There are many different semiconducting thermistors, with ranges from about 0.01 kelvin to 2,000
kelvins (−273.14 °C to about 1,700 °C).
Most PTC thermistors are of the "switching" type, which means that their resistance rises suddenly at a certain
critical temperature. The devices are made of a doped polycrystalline ceramic containing barium titanate (BaTiO3)
and other compounds. The dielectric constant of this ferroelectric material varies with temperature. Below the Curie
point temperature, the high dielectric constant prevents the formation of potential barriers between the crystal grains,
leading to a low resistance. In this region the device has a small negative temperature coefficient. At the Curie point
temperature, the dielectric constant drops sufficiently to allow the formation of potential barriers at the grain
boundaries, and the resistance increases sharply. At even higher temperatures, the material reverts to NTC behaviour.
The equations used for modeling this behaviour were derived by W. Heywang and G. H. Jonker in the 1960s.
Another type of PTC thermistor is the polymer PTC, which is sold under brand names such as "Polyswitch",
"Semifuse", and "Multifuse". This consists of a slice of plastic with carbon grains embedded in it. When the plastic is
cool, the carbon grains are all in contact with each other, forming a conductive path through the device. When the
plastic heats up, it expands, forcing the carbon grains apart, and causing the resistance of the device to rise rapidly.
Like the BaTiO3 thermistor, this device has a highly nonlinear resistance/temperature response and is used for
switching, not for proportional temperature measurement.
Yet another type of thermistor is a silistor, a thermally sensitive silicon resistor. Silistors are similarly constructed
and operate on the same principles as other thermistors, but employ silicon as the semiconductive component
material.

Self-heating effects
When a current flows through a thermistor, it will generate heat which will raise the temperature of the thermistor
above that of its environment. If the thermistor is being used to measure the temperature of the environment, this
electrical heating may introduce a significant error if a correction is not made. Alternatively, this effect itself can be
exploited. It can, for example, make a sensitive air-flow device employed in a sailplane rate-of-climb instrument, the
electronic variometer, or serve as a timer for a relay as was formerly done in telephone exchanges.
The electrical power input to the thermistor is just:

P_E = I·V

where I is the current and V is the voltage drop across the thermistor. This power is converted to heat, and this heat
energy is transferred to the surrounding environment. The rate of transfer is well described by Newton's law of
cooling:

P_T = K·(T(R) − T₀)

where T(R) is the temperature of the thermistor as a function of its resistance R, T₀ is the temperature of the
surroundings, and K is the dissipation constant, usually expressed in units of milliwatts per degree Celsius. At
equilibrium, the two rates must be equal: P_E = P_T.

The current and voltage across the thermistor will depend on the particular circuit configuration. As a simple
example, if the voltage across the thermistor is held fixed, then by Ohm's law we have I = V/R, and the equilibrium
equation can be solved for the ambient temperature as a function of the measured resistance of the thermistor:

T₀ = T(R) − V²/(K·R)
The dissipation constant is a measure of the thermal connection of the thermistor to its surroundings. It is generally
given for the thermistor in still air and in well-stirred oil. Typical values for a small glass-bead thermistor are
1.5 mW/°C in still air and 6.0 mW/°C in stirred oil. If the temperature of the environment is known beforehand, then
a thermistor may be used to measure the value of the dissipation constant. For example, the thermistor may be used
as a flow-rate sensor, since the dissipation constant increases with the rate of flow of a fluid past the thermistor.
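At equilibrium the electrical power V²/R equals the dissipated power K·(T − T₀), so the ambient temperature can be recovered from a self-heated reading. The numbers in the note below use the still-air dissipation constant quoted above together with an assumed 10 kΩ bead.

```cpp
// Ambient temperature inferred from a self-heated thermistor held at a fixed
// voltage: at equilibrium V*V / R = K * (T(R) - T0), so T0 = T(R) - V*V/(K*R).
// tOfR is the thermistor temperature implied by its resistance R; any
// temperature unit works as long as K uses the same unit (watts per degree).
double AmbientTemperature(double tOfR, double volts, double ohms, double kWattsPerDeg)
{
    return tOfR - (volts * volts) / (kWattsPerDeg * ohms);
}
```

For a 10 kΩ bead at 1 V with K = 1.5 mW/°C, the self-heating correction is 1/(0.0015 × 10000) ≈ 0.067 °C, small but visible in precision work.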

Applications
PTC thermistors can be used as current-limiting devices for circuit protection, as replacements for fuses. Current
through the device causes a small amount of resistive heating. If the current is large enough to generate more heat
than the device can lose to its surroundings, the device heats up, causing its resistance to increase, and therefore
causing even more heating. This creates a self-reinforcing effect that drives the resistance upwards, reducing the
current and voltage available to the device.
PTC thermistors are used as timers in the degaussing coil circuit of most CRT displays and televisions. When the
display unit is initially switched on, current flows through the thermistor and degaussing coil. The coil and
thermistor are intentionally sized so that the current flow will heat the thermistor to the point that the degaussing
coil shuts off in under a second. For effective degaussing, it is necessary that the magnitude of the alternating
magnetic field produced by the degaussing coil decreases smoothly and continuously, rather than sharply
switching off or decreasing in steps; the PTC thermistor accomplishes this naturally as it heats up. A degaussing
circuit using a PTC thermistor is simple, reliable (for its simplicity), and inexpensive.
NTC thermistors are used as resistance thermometers in low-temperature measurements of the order of 10K.
NTC thermistors can be used as inrush-current limiting devices in power supply circuits. They present a higher
resistance initially, which prevents large currents from flowing at turn-on; they then heat up, and their resistance
drops to allow higher current flow during normal operation. These thermistors are usually much larger than
measuring-type thermistors, and are purposely designed for this application.
NTC thermistors are regularly used in automotive applications. For example, they monitor things like coolant
temperature and/or oil temperature inside the engine and provide data to the ECU and, indirectly, to the
dashboard.
NTC thermistors can be also used to monitor the temperature of an incubator.
Thermistors are also commonly used in modern digital thermostats and to monitor the temperature of battery
packs while charging.


History
The thermistor was invented by Samuel Ruben in 1930.[4] Because early thermistors were difficult to produce and
applications for the technology were limited, commercial production of thermistors did not begin until the 1930s.[5]
A famous early use of the thermistor principle was the HP200A, the first product made by Hewlett-Packard in 1939.
It was a Wien bridge oscillator, and its principal innovation was using an incandescent lamp as its stabilizing PTC
thermistor.[6]

References
[1] http://www.microchiptechno.com/ntc_thermistors.php
[2] Thermistor Terminology (http://www.ussensor.com/terminology.html). U.S. Sensor
[3] Practical Temperature Measurements (http://cp.literature.agilent.com/litweb/pdf/5965-7822E.pdf). Agilent Application Note
[4] Biomedical Sensors (http://books.google.com/books?id=7cI83YOIUTkC&pg=PA12&dq=Samuel+Ruben+and+Thermistor&hl=en&ei=gHVyTpyUG5Cltwfkp-SFCg&sa=X&oi=book_result&ct=result&resnum=2&ved=0CDEQ6AEwAQ#v=onepage&q&f=false). Momentum Press.
[5] McGee, Thomas (1988). "9". Principles and Methods of Temperature Measurement.
[6] Williams, Jim (June 1990), Bridge Circuits: Marrying Gain and Balance (http://www.linear.com/pc/downloadDocument.do?navId=H0,C1,C1154,C1009,C1026,P1213,D4134), Application Note 43, Linear Technology Inc., pp. 29-33.

External links
The thermistor at bucknell.edu (http://www.facstaff.bucknell.edu/mastascu/eLessonsHTML/Sensors/TempR.html)
Software for thermistor calculation at Sourceforge (http://thermistor.sourceforge.net/)
"Thermistors & Thermocouples: Matching the Tool to the Task in Thermal Validation" (http://img.en25.com/Web/Vaisala/JVT KBull Article_final pdf.pdf) - Journal of Validation Technology


Torque sensor
A torque sensor or torque transducer or torquemeter is a device for measuring and recording the torque on a
rotating system, such as an engine, crankshaft, gearbox, transmission, rotor, or a bicycle crank. Static torque is
relatively easy to measure. Dynamic torque, on the other hand, is not easy to measure, since it generally requires
transfer of some effect (electric or magnetic) from the shaft being measured to a static system.
One way to achieve this is to condition the shaft or a member attached to the shaft with a series of permanent
magnetic domains. The magnetic characteristics of these domains will vary according to the applied torque, and thus
can be measured using non-contact sensors. Such magnetoelastic torque sensors are generally used for in-vehicle
applications on racecars, automobiles, aircraft, and hovercraft.
Commonly, torque sensors or torque transducers use strain gauges applied to a rotating shaft or axle. With this
method, a means to power the strain gauge bridge is necessary, as well as a means to receive the signal from the
rotating shaft. This can be accomplished using slip rings, wireless telemetry, or rotary transformers. Newer types of
torque transducers add conditioning electronics and an A/D converter to the rotating shaft. Stator electronics then
read the digital signals and convert them to a high-level analog output signal, such as ±10 VDC.
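The high-level analog output described above is typically scaled linearly to the transducer's rated torque, so recovering an engineering value is a one-line conversion. A minimal sketch, assuming a hypothetical 500 N·m full-scale rating mapped onto a ±10 V output:

```python
# Hypothetical transducer: full-scale torque mapped linearly onto the
# +/-10 V conditioned analog output. Both constants are assumptions,
# not values from any specific product.
FULL_SCALE_NM = 500.0   # assumed full-scale torque rating (N*m)
FULL_SCALE_V = 10.0     # analog output voltage at full-scale torque

def torque_from_voltage(v_out: float) -> float:
    """Linearly convert the +/-10 V analog output to torque in N*m."""
    if abs(v_out) > FULL_SCALE_V:
        raise ValueError("output voltage outside the +/-10 V range")
    return FULL_SCALE_NM * v_out / FULL_SCALE_V

print(torque_from_voltage(5.0))    # 250.0
print(torque_from_voltage(-10.0))  # -500.0
```

In practice the scale factor comes from the transducer's calibration certificate rather than its nominal rating.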
A more recent development is the use of surface acoustic wave (SAW) devices attached to the shaft and remotely
interrogated. The strain on these tiny devices as the shaft flexes can be read remotely and output without the need for
electronics attached to the shaft. The probable first volume use will be in the automotive field: as of May 2009, Schott
had announced a SAW sensor package viable for in-vehicle use.
Finally, another way to measure torque is by way of twist angle measurement or phase shift measurement, whereby
the angle of twist resulting from applied torque is measured by using two angular position sensors and measuring the
phase angle between them.
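For the twist-angle method, torque follows from the elastic twist of the shaft between the two angle sensors via the torsion formula T = G·J·θ/L for a solid round shaft. The sketch below uses hypothetical shaft dimensions and a typical shear modulus for steel, for illustration only:

```python
import math

# Hypothetical solid steel shaft -- all three constants are assumptions.
G = 79.3e9   # shear modulus of steel (Pa)
D = 0.025    # shaft diameter (m)
L = 0.30     # distance between the two angular position sensors (m)

def torque_from_twist(twist_rad: float) -> float:
    """Torque (N*m) from the twist angle measured between the two angle
    sensors: T = G * J * theta / L, where J = pi * D**4 / 32 is the
    polar second moment of area of a solid round shaft."""
    J = math.pi * D**4 / 32.0
    return G * J * twist_rad / L

# A twist of 0.1 degree over this 0.30 m gauge length:
print(round(torque_from_twist(math.radians(0.1)), 1))  # about 17.7 N*m
```

The same relation shows why this method favours long, compliant shaft sections: a larger L (or smaller J) produces more twist per unit torque and hence better angular resolution.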

External links

Torque Sensor Application Diagrams [1]


Evolution and Future of Torque Measurement Technology [2]
TorqSense Surface Acoustic Wave Torque Measuring Technology [3]
Torque Measurement Primer [4]
Transense Technologies plc [5]
Schott AG [6]
The Basics of Torque Measurement [7]
High Performance Wireless Torque Sensing [8]
ABB Torductor-S [9]
Torquetronic [10]


References
[1] http://www.futek.com/apps_torque.aspx
[2] http://www.lorenz-messtechnik.de/english/company/torque_measurement_technology.php
[3] http://www.sensors.co.uk/technology/pages/torqsense.html
[4] http://www.interfaceforce.com/technical-library/TorquePrimerFeb2010.pdf
[5] http://www.transense.co.uk/technologies/tqs.html
[6] http://www.schott.com/epackaging/newsfiles/20090525092514_061_2009_PI_TorqueSensor_EN_FINAL.pdf
[7] http://www.sendev.com/catalog/pdf/torque-measurement.pdf
[8] http://www.magcanica.com
[9] http://www.abb.com/product/seitp331/0dfa9bd9c21d8f19c125737900413ba7.aspx?tabKey=6
[10] http://www.torquemeters.com

Ultrasonic thickness gauge


An ultrasonic thickness gauge is a measuring instrument for the non-destructive investigation of a material's
thickness using ultrasonic waves.
The first ultrasonic thickness gauge in the world was constructed by the Polish engineer Werner Sobek from
Katowice in 1967. The ultrasonic thickness gauge times the propagation of an ultrasonic pulse through the given
material, and then converts this transit time to a thickness in millimetres, applying a suitable mathematical
formula based on the known speed of sound in the material.
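In the common pulse-echo arrangement, the gauge times one ultrasonic round trip and divides by two, since the pulse crosses the material twice. A minimal sketch of that conversion, assuming a typical longitudinal sound velocity for steel:

```python
# Pulse-echo thickness measurement: thickness = velocity * time / 2.
# The velocity below is a typical textbook value for steel; real gauges
# are calibrated against the specific material being measured.
SOUND_SPEED_STEEL = 5920.0  # longitudinal wave velocity in steel (m/s)

def thickness_mm(round_trip_s: float,
                 velocity_m_s: float = SOUND_SPEED_STEEL) -> float:
    """Convert a measured echo round-trip time (s) to thickness (mm).
    Division by 2 accounts for the pulse traversing the wall twice."""
    return velocity_m_s * round_trip_s / 2.0 * 1000.0

# A 3.38 microsecond echo in steel corresponds to about 10 mm of wall.
print(round(thickness_mm(3.38e-6), 1))  # 10.0
```

A wrong assumed velocity scales every reading proportionally, which is why these gauges are zeroed on a reference block of the same material.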

List of sensors
This is a list of sensors sorted by sensor type.

Acoustic, sound, vibration

Geophone
Hydrophone
Lace Sensor, a guitar pickup
Microphone
Seismometer

Automotive, transportation

Air-fuel ratio meter


Crankshaft position sensor
Curb feeler, used to warn driver of curbs
Defect detector, used on railroads to detect axle and signal problems in passing trains
Engine coolant temperature sensor, or ECT sensor, used to measure the engine temperature
Hall effect sensor, used to time the speed of wheels and shafts
MAP sensor (manifold absolute pressure), used in regulating fuel metering
Mass flow sensor, or mass airflow (MAF) sensor, used to tell the ECU the mass of air entering the engine
Oxygen sensor, used to monitor the amount of oxygen in the exhaust
Parking sensors, used to alert the driver of unseen obstacles during parking manoeuvres

Radar gun, used to detect the speed of other objects


Speedometer, used to measure the instantaneous speed of a land vehicle
Speed sensor, used to detect the speed of an object


Throttle position sensor, used to monitor the position of the throttle in an internal combustion engine
Tire-pressure monitoring sensor, used to monitor the air pressure inside the tires
Torque sensor, or torque transducer or torquemeter, used to measure torque (twisting force) on a rotating system
Transmission fluid temperature sensor, used to measure the temperature of the transmission fluid
Turbine speed sensor (TSS), or input speed sensor (ISS), used to measure the rotational speed of the input shaft or
torque converter
Variable reluctance sensor, used to measure position and speed of moving metal components
Vehicle speed sensor (VSS), used to measure the speed of the vehicle
Water sensor or water-in-fuel sensor, used to indicate the presence of water in fuel
Wheel speed sensor, used for reading the speed of a vehicle's wheel rotation

Chemical

Breathalyzer
Carbon dioxide sensor
Carbon monoxide detector
Catalytic bead sensor
Chemical field-effect transistor

Electrochemical gas sensor


Electronic nose
Electrolyte-insulator-semiconductor sensor
Holographic sensor
Hydrocarbon dew point analyzer
Hydrogen sensor
Hydrogen sulfide sensor
Infrared point sensor
Ion-selective electrode
Nondispersive infrared sensor
Microwave chemistry sensor
Nitrogen oxide sensor
Olfactometer
Optode
Oxygen sensor
Pellistor
pH glass electrode
Potentiometric sensor
Redox electrode
Smoke detector
Zinc oxide nanorod sensor


Electric current, electric potential, magnetic, radio

Current sensor
Galvanometer
Hall effect sensor
Hall probe
Leaf electroscope
Magnetic anomaly detector
Magnetometer
MEMS magnetic field sensor
Metal detector
Radio direction finder
Voltage detector

Environment, weather, moisture, humidity


Actinometer
Bedwetting alarm

Ceilometer
Dew warning
Fish counter
Gas detector
Hook gauge evaporimeter
Humistor
Hygrometer
Leaf sensor
Pyranometer
Pyrgeometer
Psychrometer
Rain gauge
Rain sensor
Seismometers
Snow gauge
Soil moisture sensor
Stream gauge
Tide gauge

Flow, fluid velocity

Air flow meter


Anemometer
Flow sensor
Gas meter
Mass flow sensor
Water meter


Ionising radiation, subatomic particles

Bubble chamber
Cloud chamber
Geiger counter
Neutron detection
Particle detector
Scintillation counter
Scintillator
Wire chamber

Navigation instruments

Air speed indicator


Altimeter
Attitude indicator
Depth gauge
Fluxgate compass

Gyroscope
Inertial reference unit
Magnetic compass
MHD sensor
Ring laser gyroscope
Turn coordinator
Variometer
Vibrating structure gyroscope
Yaw rate sensor

Position, angle, displacement, distance, speed, acceleration

Accelerometer
Auxanometer
Capacitive displacement sensor
Capacitive sensing
Free fall sensor
Gravimeter
Inclinometer
Laser rangefinder
Linear encoder
Linear variable differential transformer (LVDT)
Liquid capacitive inclinometers
Odometer
Photoelectric sensor
Piezoelectric accelerometer
Position sensor
Rotary encoder

Rotary variable differential transformer


Selsyn
Sudden Motion Sensor

Tilt sensor
Tachometer
Ultrasonic thickness gauge

Optical, light, imaging, photon

Charge-coupled device
Colorimeter
Contact image sensor
Electro-optical sensor
Flame detector
Infra-red sensor
Kinetic inductance detector
LED as light sensor
Light-addressable potentiometric sensor
Nichols radiometer
Fiber optic sensors
Photodetector

Photodiode
Photomultiplier tubes
Phototransistor
Photoelectric sensor
Photoionization detector
Photomultiplier
Photoresistor
Photoswitch
Phototube
Scintillometer
Shack-Hartmann
Single-photon avalanche diode
Superconducting nanowire single-photon detector
Transition edge sensor
Visible light photon counter
Wavefront sensor

Pressure

Barograph
Barometer
Boost gauge
Bourdon gauge
Hot filament ionization gauge
Ionization gauge
McLeod gauge
Oscillating U-tube
Permanent Downhole Gauge

Pirani gauge
Pressure sensor

Pressure gauge
Tactile sensor
Time pressure gauge

Force, density, level

Bhangmeter
Hydrometer
Force gauge
Level sensor
Load cell
Magnetic level gauge
Nuclear density gauge
Piezoelectric sensor
Strain gauge
Torque sensor
Viscometer

Thermal, heat, temperature

Bolometer
Bimetallic strip
Calorimeter
Exhaust gas temperature gauge
Gardon gauge
Golay cell
Heat flux sensor
Infrared thermometer
Microbolometer
Microwave radiometer
Net radiometer
Quartz thermometer
Resistance temperature detector
Resistance thermometer
Silicon bandgap temperature sensor
Temperature gauge
Thermistor
Thermocouple
Thermometer


Proximity, presence

Alarm sensor
Doppler radar
Motion detector
Occupancy sensor
Proximity sensor
Passive infrared sensor
Reed switch
Stud finder
Triangulation sensor
Touch switch
Wired glove

Sensor technology
Active pixel sensor
Back-illuminated sensor

Biochip
Biosensor
Capacitance probe
Catadioptric sensor
Carbon paste electrode
Displacement receiver
Electromechanical film
Electro-optical sensor
Fabry-Pérot interferometer
Image sensor
Image sensor format
Inductive sensor
Intelligent sensor
Lab-on-a-chip
Leaf sensor
Machine vision
Microelectromechanical systems
Micro-sensor arrays
Photoelasticity
RADAR

Ground-penetrating radar
Synthetic aperture radar
Sensor array
Sensor fusion
Sensor grid
Sensor node
Soft sensor
SONAR

Staring array
Transducer


Ultrasonic sensor
Video sensor
Visual sensor network
Wheatstone bridge
Wireless sensor network

Other sensors and sensor related properties and concepts

Analog image processing


Digital holography
Frame grabbers
Intensity sensors and their properties
Atomic force microscopy
Chemoreceptor
Compressive sensing
Hyperspectral sensors
Millimeter wave scanner
Magnetic resonance imaging

Diffusion tensor imaging


Functional magnetic resonance imaging
Molecular sensor
Optical coherence tomography
Positron emission tomography
Quantization (signal processing)
Range imaging
Moiré deflectometry
Phase unwrapping techniques
Time-of-flight camera
Structured-light 3D scanner
Omnidirectional camera
Catadioptric sensor
Single-Photon Emission Computed Tomography (SPECT)
Transcranial Magnetic Stimulation (TMS)


Article Sources and Contributors


Analog signal processing Source: http://en.wikipedia.org/w/index.php?oldid=444745824 Contributors: Amalas, Binksternet, Brianhe, Caranha, Carl086, Charles Matthews, David D., Drew335,
Giftlite, Grebaldar, Isarl, Isnow, Jehochman, Jodi.a.schneider, Kingpin13, Parijata, PaulHanson, Pboyd04, Pigsonthewing, Richwales, Rjwilmsi, Sageev, Sigmundg, Smack, Spinningspark,
Superborsuk, That Guy, From That Show!, Tide rolls, Tomas e, 58 anonymous edits
Fourier transform Source: http://en.wikipedia.org/w/index.php?oldid=467820321 Contributors: 9258fahsflkh917fas, A Doon, A. Pichler, Abecedare, Admartch, Adoniscik, Ahoerstemeier,
Akbg, Alejo2083, Alipson, Amaher, AnAj, Andrei Polyanin, Andres, Angalla, Anna Lincoln, Ap, Army1987, Arondals, Asmeurer, Astronautameya, Avicennasis, AxelBoldt, Barak Sh, Bci2,
Bdmy, BehzadAhmadi, BenFrantzDale, BigJohnHenry, Bo Jacoby, Bob K, Bobblewik, Bobo192, BorisG, Bugnot, Burhem, Butala, CSTAR, Caio2112, Cassandra B, Catslash, Cburnett, Ch mad,
Charles Matthews, Chris the speller, ClickRick, Cmghim925, Complexica, Compsonheir, Coppertwig, CrisKatz, Crisfilax, DX-MON, Da nuke, DabMachine, DavidCBryant, Demosta, Dhabih,
Discospinster, DmitTrix, Dmmaus, Dougweller, Dr.enh, DrBob, Drew335, Drilnoth, Dysprosia, EconoPhysicist, Ed g2s, Eliyak, Elkman, Enochlau, Epzcaw, Favonian, Feline Hymnic, Feraudyh,
Fizyxnrd, Forbes72, Fr33kman, Fred Bradstadt, Fropuff, Futurebird, Gaius Cornelius, Gareth Owen, Giftlite, Glenn, GuidoGer, GyroMagician, H2g2bob, HappyCamper, Heimstern,
HenningThielemann, Herr Lip, Hesam7, HirsuteSimia, Hrafeiro, Ht686rg90, I am a mushroom, Igny, Iihki, Ivan Shmakov, Iwfyita, Jaakobou, Jdorje, Jhealy, Jko, Joerite, JohnQPedia, Joriki,
Justwantedtofixonething, KHamsun, KYN, Keenan Pepper, Kevmitch, Kostmo, Kunaporn, Larsobrien, Linas, LokiClock, Looxix, Lovibond, Luciopaiva, Lupin, M1ss1ontomars2k4,
Manik762007, MathKnight, Maxim, Mckee, Metacomet, Michael Hardy, Mikeblas, Millerdl, Moxfyre, Mr. PIM, NTUDISP, Naddy, NathanHagen, Nbarth, Nihil, Nishantjr, Njerseyguy, Nk,
Nmnogueira, NokMok, Od Mishehu, Oleg Alexandrov, Oli Filth, Omegatron, Oreo Priest, Ouzel Ring, PAR, Pak21, Papa November, Paul August, Pedrito, Pete463251, Petergans, Phasmatisnox,
Phils, PhotoBox, PigFlu Oink, Poincarecon, PsiEpsilon, PtDw832, Publichealthguru, Quintote, Qwfp, R.e.b., Rainwarrior, Rbj, Red Winged Duck, Riesz, Rifleman 82, Rijkbenik, Rjwilmsi,
RobertHannah89, Rror, Rs2, Rurz2007, SKvalen, Safenner1, Sai2020, Sandb, Sbyrnes321, SebastianHelm, Sepia tone, Sgoder, Sgreddin, Shreevatsa, Silly rabbit, Slawekb, SlimDeli, Snigbrook,
Snoyes, Sohale, Soulkeeper, SpaceFlight89, Spanglej, Sprocedato, Stevan White, Stevenj, Stpasha, StradivariusTV, Sunev, Sverdrup, Sylvestersteele, Sawomir Biay, THEN WHO WAS
PHONE?, TYelliot, Tabletop, Tahome, TakuyaMurata, TarryWorst, Tetracube, The Thing That Should Not Be, Thenub314, Thermochap, Thinking of England, Tim Goodwyn, Tim Starling,
Tinos, Tobias Bergemann, TranceThrust, Tunabex, Ujjalpatra, User A1, Vadik wiki, Vasi, Verdy p, VeryNewToThis, VictorAnyakin, Vidalian Tears, Vnb61, Voronwae, WLior, Wavelength,
Wiki Edit Testing, WikiDao, Wile E. Heresiarch, Writer130, Wwheaton, Ybhatti, YouRang?, Zoz, Zvika, 541 anonymous edits
Fast Fourier transform Source: http://en.wikipedia.org/w/index.php?oldid=469768938 Contributors: 16@r, 2ganesh, Adam Zivner, Adashiel, Akoesters, Amitparikh, Apexfreak, Artur Nowak,
Audriusa, Avalcarce, AxelBoldt, Bartosz, BehnamFarid, Bemoeial, Bender235, Bender2k14, Blablahblablah, Bmitov, Boxplot, Cameronc, Captain Disdain, Coneslayer, Conversion script,
Crossz, Cxxl, Daniel Brockman, David spector, Davidmeo, Dcoetzee, Dcwill1285, DeadTotoro, Decrease789, Dekart, Delirium, Djg2006, Dmsar, Domitori, Donarreiskoffer, DrBob, Efryevt,
Eras-mus, Excirial, Eyreland, Faestning, Fredrik, Fresheneesz, Furrykef, Gareth Owen, Gene93k, Geoffeg, Giftlite, Glutamin, Gopimanne, GreenSpigot, Grendelkhan, Gunnar Larsson, H2g2bob,
HalfShadow, Hashar, Haynals, Headbomb, Helwr, HenningThielemann, Herry41341, Herve661, Hess88, Hmo, Hyacinth, Ixfd64, Jaredwf, Jeltz, Jitse Niesen, Johnbibby, JustUser,
Kellybundybrain, Kkmurray, Klokbaske, Kuashio, LMB, Lavaka, LeoTrottier, LiDaobing, Lorem Ip, LouScheffer, Lupo, MarylandArtLover, Materialscientist, MaxSem, Maxim, Michael Hardy,
MrOllie, Mschlindwein, Mwilde, Nagesh Anupindi, Nbarth, Nixdorf, Norm mit, Ntsimp, Oleg Alexandrov, Oli Filth, Omkar lon, Palfrey, Pankajp, Pit, Pt, QueenCake, Quibik, Quintote,
Qwertyus, Qwfp, R. J. Mathar, R.e.b., RTV User 545631548625, Requestion, Riemann'sZeta, Rjwilmsi, Roadrunner, Robertvan1, Rogerbrent, Rubin joseph 10, Sam Hocevar, Sangwine,
SciCompTeacher, SebastianHelm, Smallman12q, Solongmarriane, Spectrogram, Squell, Steina4, Stevenj, TakuyaMurata, Tarquin, Teorth, The Anome, The Yowser, Thenub314, Tim32,
TimBentley, Timendum, Tuhinbhakta, Twexcom, Ulterior19802005, Unyoyega, Vincent kraeutler, Wik, Wikichicheng, Yacht, Ylai, ZeroOne, Zven, Zxcvbnm, 194 anonymous edits
Laplace transform Source: http://en.wikipedia.org/w/index.php?oldid=469094868 Contributors: 213.253.39.xxx, A. Pichler, Abb615, Ahoerstemeier, Alansohn, Alejo2083, Alexthe5th, Alfred
Centauri, Alll, Amfortas, Andrei Polyanin, Android Mouse, Anonymous Dissident, Anterior1, Ap, Ascentury, AugPi, AvicAWB, AxelBoldt, BWeed, Bart133, Bdmy, BehnamFarid, Bemoeial,
BenFrantzDale, BigJohnHenry, Blablablob, Bookbuddi, Bplohr, Cburnett, Cfp, Charles Matthews, Chris 73, Chrislewis.au, Chronulator, Chubby Chicken, Cic, Clark89, Cnilep, Commander
Nemet, Conversion script, Cronholm144, Cutler, Cyp, CyrilB, DabMachine, Danpovey, Dantonel, Dillard421, Dissipate, Don4of4, Doraemonpaul, Dragon0128, Drew335, Drilnoth, DukeEgr93,
Dysprosia, Ec5618, ElBarto, Electron9, Eli Osherovich, Ellywa, Emiehling, Eshylay, Fblasqueswiki, Fcueto, Fintor, First Harmonic, Flekstro, Fofti, Foom, Fred Bradstadt, Fresheneesz,
Futurebird, Gene Ward Smith, Gerrit, Ghostal, Giftlite, Glickglock, Glrx, Gocoolrao, Grafen, GregRM, Guardian of Light, GuidoGer, H2g2bob, Haham hanuka, Hair Commodore, HalJor,
Haukurth, Hereforhomework2, Hesam7, Humanengr, Intangir, Isheden, JAIG, JPopovic, Janto, Javalenok, Jitse Niesen, Jmnbatista, JohnCD, Johndarrington, JonathonReinhart, Jscott.trapp,
Julian Mendez, Jwmillerusa, KSmrq, Karipuf, Kensaii, Kenyon, Ketiltrout, Kevin Baas, Kiensvay, Kingpin13, KittySaturn, KoenDelaere, Kri, Kubigula, LachlanA, Lambiam, Lantonov,
Lbs6380, Le Docteur, Lightmouse, Linas, Looxix, Lupin, M ayadi78, Macl, Maksim-e, Manop, MarkSutton, Mars2035, Martynas Patasius, MaxEnt, Maximus Rex, Mekong Bluesman,
Metacomet, Michael Hardy, MiddaSantaClaus, Mike.lifeguard, Mild Bill Hiccup, Mlewis000, Mohqas, Morpo, Morqueozwald, Mschlindwein, Msiddalingaiah, Msmdmmm, N5iln, Nbarth, Neil
Parker, Nein, Netheril96, Nixdorf, Nuwewsco, Octahedron80, Ojigiri, Oleg Alexandrov, Oli Filth, Omegatron, Peter.Hiscocks, Petr Burian, Pgadfor, Phgao, Pokyrek, Prunesqualer, Pyninja,
PyonDude, Qwerty Binary, Rbj, Rboesch, Rdrosson, Reaper Eternal, Reedy, RexNL, Reyk, Rifleman 82, Rojypala, Ron Ritzman, Rovenhot, Rs2, Salvidrim, Scforth, Schaapli, Scls19fr,
SebastianHelm, Serrano24, Shay Guy, Sifaka, Silly rabbit, Skizzik, Slawekb, SocratesJedi, Starwiz, Stevenj, StradivariusTV, Stutts, Swagat konchada, Sawomir Biay, T boyd, Tarquin, Tbsmith,
TedPavlic, The Thing That Should Not Be, TheProject, Thegeneralguy, Thumperward, Tide rolls, Tim Starling, Tobias Bergemann, Tobias Hoevekamp, Walter.Arrighetti, Weyes, Wiml,
Wknight94, XJamRastafire, Xenonice, Yardleydobon, Yunshui, Ziusudra, Zvika, 442 anonymous edits
Linear system Source: http://en.wikipedia.org/w/index.php?oldid=465031026 Contributors: Allan McInnes, Athkalani, Ben pcc, BenFrantzDale, Berland, Btyner, Cburnett, Charles Matthews,
Cihan, Creidieki, DR04, Digfarenough, El C, First Harmonic, FocalPoint, Frozenport, Gwaihir, Hadal, Icairns, Jgreeter, Jitse Niesen, K-UNIT, Krsti, Lommer, Lowercase Sigma, MarkSweep,
Mathmanta, Maurice Carbonaro, Michael Hardy, Mks004, Nixdorf, Nvrmnd, Pak21, Paolo.dL, Rbj, Reyk, Sanfranman59, Sapphic, Srleffler, Treesmill, Vanakaris, Vanish2, VolvoMan,
WhiteHatLurker, , 40 anonymous edits
Time-invariant system Source: http://en.wikipedia.org/w/index.php?oldid=465517496 Contributors: Arroww, Belizefan, Cburnett, Giftlite, HatlessAtlas, HenningThielemann, JHunterJ,
Jamelan, Jmaliakal, Michael Hardy, Mwilde, Nbarth, Oleg Alexandrov, OlivierMiR, Prinsen, R'n'B, Rinconsoleao, Ro8269, Siddhant, Sterrys, , 44 anonymous edits
Dirac delta function Source: http://en.wikipedia.org/w/index.php?oldid=469802413 Contributors: 130.94.122.xxx, 134.132.11.xxx, Aaron north, Abrhm17, Adavis444, AdjustShift, Alexxauw,
Ap, Ashok567, AugPi, AxelBoldt, Baxtrom, Bdmy, Bejitunksu, BenFrantzDale, Bender235, Benwing, Bnitin, Bob K, Bob K31416, Brews ohare, Btyner, C S, Caiyu, Camembert, Centrx,
Ceplusplus, Chaohuang, Charles Matthews, Chas zzz brown, Christopher.Gordon3, Cj67, Classicalecon, Coppertwig, Crasshopper, CronoDAS, Curtdbz, Cyan, Czhangrice, Daniel.Cardenas,
Darkknight911, David Martland, Davidcarfi, Delta G, Donarreiskoffer, Dpaddy, Dr.enh, Dudesleeper, Emvee, Fibonacci, Fred Stober, Fyrael, Galdobr, Genie05, Gianluigi, Giftlite, Haonhien,
Headbomb, Henrygb, Heptadecagon, Holmansf, Hwasungmars, Ivanip, JJL, JWroblewski, Jason Quinn, Jkl, Joel7687, Joelholdsworth, JohnBlackburne, Jshadias, Jujutacular, Justin545, Karl-H,
Kidburla, KlappCK, Knucmo2, Kri, Kupirijo, Laurascudder, Le Docteur, Lethe, Light current, Lingwitt, LokiClock, Looxix, Lzur, M gol, MagnaMopus, Mark.przepiora, MarkSweep,
MathKnight, Mboverload, McSly, Meigel, Melchoir, Michael Hardy, Mikez, Mild Bill Hiccup, Moxfyre, Msa11usec, Mrten Berglund, Nbarth, Oleg Alexandrov, Oli Filth, Omegatron, PAR,
Papa November, Paul August, Philtime, Plasticup, Pyrop, Qef, Qwfp, RAE, Rattatosk, Rbj, Rex the first, Rkr1991, Rmartin16, Ronhjones, Rumping, Salgueiro, Sayanpatra, Shapeev,
Shervinafshar, SimpsonDG, Sinawiki, Skizzik, Slashme, Slawekb, Smjg, Spendabuk, Spike Wilbury, Stca74, StefanosNikolaou, SteffenGodskesen, Stevenj, Stevvers, Sullivan.t.j, Sverdrup,
Sawomir Biay, TMA, Tardis, Tarquin, The Anome, TheSeven, Thenub314, Thingg, TimothyRias, Tkuvho, Tobias Bergemann, TonyMath, Tosha, Treisijs, Tyrrell McAllister, W0lfie, WLior,
Wearedeltax, Wtmitchell, Wvbailey, XJamRastafire, XaosBits, Yaris678, Yurik, Zfeinst, Zundark, Zvika, Zytsef, , 250 anonymous edits
Heaviside step function Source: http://en.wikipedia.org/w/index.php?oldid=467969251 Contributors: Abdull, Albmont, Anonymous Dissident, Astroview120mm, AugPi, AxelBoldt, Baxtrom,
Benwing, BigrTex, Billlion, Bobo italy, Boothinator, Burn, Cburnett, Cgibbard, Charles Matthews, Charlesriver, Chill doubt, Daniele.tampieri, Davunderscorei, EdC, Elana1945, Falcorian, Fred
Stober, Freeridr, Freiddie, Geoffrey.landis, Giftlite, Glenn, Gubbubu, Hashar, Henning Makholm, Henrygb, Heron, Hirak 99, Jshadias, Kku, KlappCK, KnightRider, Ksoileau, Linas, Lumidek,
MathMartin, McSly, Md2perpe, Metacomet, Michael Hardy, Mohamed Al-Dabbagh, Occultations, Oleg Alexandrov, Omegatron, PAR, Papa November, Pgan002, Pigpag, Quietbritishjim, Rbj,
Rex the first, Rs2, ScottAlanHill, Slawekb, Spoon!, StevenJohnston, Stevvers, Sullivan.t.j, Sawomir Biay, T00h00, Tarquin, The Anome, Tobias Bergemann, Windrider it, XJamRastafire,
Yelyos, iedas, , 67 anonymous edits
Ramp function Source: http://en.wikipedia.org/w/index.php?oldid=468974823 Contributors: Alekh, BenFrantzDale, Brad7777, Draicone, Giftlite, Gubbubu, Jxramos, McSly, Mild Bill Hiccup,
PAR, PGSONIC, Papa November, Pigpag, Qef, Rumping, Seemeel, Tobias Bergemann, Tomas e, Tomer shalev, 10 anonymous edits
Digital signal processing Source: http://en.wikipedia.org/w/index.php?oldid=469564735 Contributors: .:Ajvol:., 3dB dar, Abdul muqeet, Abdull, Adoniscik, AeroPsico, Agasta, Alberto
Orlandini, Alinja, Allen4names, Allstarecho, Alphachimp, Andy M. Wang, Ap, Atlant, AxelBoldt, Bact, Belizefan, Ben-Zin, Betacommand, Binksternet, Boleslav Bobcik, Bongwarrior, Brion
VIBBER, CLW, Caiaffa, Can't sleep, clown will eat me, CapitalR, Captain-tucker, Cburnett, Cgrislin, Chinneeb, Coneslayer, Conversion script, Cst17, Ctmt, Cybercobra, Cyberparam, DARTH
SIDIOUS 2, DJS, Dawdler, Dicklyon, Discospinster, Dr Alan Hewitt, Dreadstar, Drew335, Ds13, Dspanalyst, Dysprosia, Eart4493, Easterbrook, EmadIV, Epbr123, ErrantX, Essjay, Excirial,
Fieldday-sunday, Finlay McWalter, Frmatt, Furrykef, Gbuffett, Ged Davies, Gfoltz9, Giftlite, Gimboid13, Glenn, Glrx, Graham87, Greglocock, Gkhan, HelgeStenstrom, Helwr, Hezarfenn,
Hooperbloob, Indeterminate, Inductiveload, Iquinnm, JPushkarH, Jeancey, Jefchip, Jeff8080, Jehochman, Jeremyp1988, Jmrowland, Jnothman, Joaopchagas2, Johnpseudo, Jondel, Jshadias,
Kesaloma, King Lopez, Kku, Kurykh, Landen99, Ldo, LilHelpa, Loisel, Lzur, MER-C, Mac, Martpol, Materialscientist, Matt Britt, Mcapdevila, Mdd4696, Michael Hardy, MightCould,
Mistman123, Moberg, Mohammed hameed, Moxfyre, Mutaza, Mwilde, Mysid, Niceguyedc, Nixdorf, Nvd, OSP Editor, Ohnoitsjamie, Olivier, Omegatron, Omeomi, One half 3544, Orange
Suede Sofa, OrgasGirl, PEHowland, Paileboat, Pantech solutions, Patrick, Pdfpdf, Pgan002, Phoebe, Pinkadelica, Prari, Raanoo, Rade Kutil, Rbj, Recurring dreams, Refikh, Rememberway,



RexNL, Reza1615, Rickyrazz, Rohan2kool, Ronz, SWAdair, SamShearman, Sandeeppasala, Savidan, Semitransgenic, Seth Nimbosa, ShashClp, Shekure, Siddharth raw, Skarebo, SkiAustria,
Smack, Soumyasch, SpaceFlight89, Spectrogram, Stan Sykora, Stillastillsfan, Style, Swagato Barman Roy, Tbackstr, Tertiary7, The Anome, The Photon, Thisara.d.m, Tobias Hoevekamp, Togo,
Tomer Ish Shalom, Tony Sidaway, Tot12, Travelingseth, Trlkly, Trolleymusic, Turgan, TutterMouse, Van helsing, Vanished user 34958, Vermolaev, Versus22, Vinras, Violetriga, Vizcarra,
Wayne Hardman, Webshared, Whoop whoop, Wknight8111, Wolfkeeper, Ww, Y(J)S, Yangjy0113, Yaroslavvb, Yewyew66, Yswismer, Zvika, 349 anonymous edits
Time domain Source: http://en.wikipedia.org/w/index.php?oldid=445935002 Contributors: ANDROBETA, Ancheta Wis, Atlant, Barek, CBM, Charles Matthews, Deville, Dgrant, Dicklyon,
Eubulides, Fredrik, Joy, Luigi30, Mandarax, Marius, Materialscientist, Meanos, Melcombe, Michael Hardy, Nbarth, Omegatron, Owen, Soosed, Splash, Sysy, Yamara, 12 anonymous edits
Z-transform Source: http://en.wikipedia.org/w/index.php?oldid=469333731 Contributors: A multidimensional liar, Abdull, Albmont, Alejo2083, Andrei Stroe, AndrewKay, Andyjsmith,
Apourbakhsh, Arroww, Arvindn, Ashkan2000, Bahram.zahir, BenFrantzDale, BeowulfNode, BigJohnHenry, Billymac00, Bjcairns, Boodlepounce, Cburnett, Charles Matthews, ChooseAnother,
Clarkgwillison, Crasshopper, Ctbolt, Daniel5Ko, Daveros2008, David Eppstein, Dicklyon, Dspdude, E-boy, ESkog, Eleleszek, Eli Osherovich, Emperorbma, Epbr123, Foryayfan, Fred Bradstadt,
Galorr, Gandalf61, Gene Ward Smith, Giftlite, Gligoran, Glrx, Gscshoyru, Guiermo, Hakeem.gadi, I dream of horses, IluvatarTheOne, JAIG, Jalanpalmer, Jatoo, Johnbanjo, Jtxx000, Jujutacular,
Kenyon, Kri, Kwamikagami, LachlanA, Larry V, Linas, LizardJr8, LokiClock, Lorem Ip, Mani excel, Mckee, Metacomet, Michael Hardy, Michael93555, Mostafa mahdieh, Nejko, Neutiquam,
Ninly, Nixdorf, Oleg Alexandrov, Oli Filth, PAR, Peni, Peytonbland, Rabbanis, Rbehrns, Rbj, Rdrosson, Redhatter, Rgclegg, Robin48gx, SKvalen, Safulop, Salgueiro, Scls19fr, ShashClp, Sitar
Physics, Slaunger, Smallman12q, SocratesJedi, Stevenj, Sverdrup, Taral, The Anome, TheDeadManCometh, Twinesurge, Ucgajhe, Unmitigated Success, Vegaswikian, Virtualphtn, Vizcarra,
[email protected], Wine Guy, Wireless friend, WordsOnLitmusPaper, Zama Zalotta, Zihengpan, Zvika, Zvn, 199 anonymous edits
Frequency domain Source: http://en.wikipedia.org/w/index.php?oldid=462140084 Contributors: 48v, Alberto Orlandini, Atlant, Binksternet, Boxplot, Burhem, Cburnett, Charles Matthews,
Chetvorno, Chinasaur, Dicklyon, Dingo1729, Dreadstar, EconoPhysicist, Furrykef, Gadfium, Giftlite, Jamelan, James Grayhame, Jcw69, Jimfbleak, Larsobrien, Lotje, Marius, Mcld, Meanos,
Melcombe, Michael Hardy, Nbarth, Omegatron, Pjvpjv, RDBury, Rubenyi, SeanMack, Shwab, SpaceFlight89, Sunbeam60, User A1, VSteiger, Wendell, Woohookitty, WurmWoode, 41
anonymous edits
Initial value theorem Source: http://en.wikipedia.org/w/index.php?oldid=466503727 Contributors: Babakathy, Endothermic, Geometry guy, Giftlite, Michael Hardy, Qetuth
Final value theorem Source: http://en.wikipedia.org/w/index.php?oldid=466515677 Contributors: DARTH SIDIOUS 2, Einshrek, Endothermic, Geometry guy, Giftlite, Glrx, Michael Hardy,
Orborde, Qetuth, Raven1977, TedPavlic, 21 anonymous edits
Sensor Source: http://en.wikipedia.org/w/index.php?oldid=469669172 Contributors: 01101001, 12st12st12, 16@r, A.K., AK Auto, Achalmeena, Alexandrov, Allstarecho, Alteripse, Amaltheus,
AmyKondo, Analogauthority, Andrewpmk, Anlace, Anthony Appleyard, AntonioSajonia, Ashoksharmaz87, Atlan, Atlantia, Atul44885, Axel.mulder, Barkeep, Beetstra, Benvogel, BillC,
Binksternet, Blainster, Blobglob, Bobo192, Boing! said Zebedee, Bongwarrior, CENSI, Calltech, Caltas, Caltrop, Can't sleep, clown will eat me, CanadianLinuxUser, Capricorn42, Chaosdruid,
ChemGardener, Clan-destine, Cowman109, Cpl Syx, D climacus, DH85868993, DanaRuff, Dancter, Darth Panda, Dawnseeker2000, Dbcv, Decltype, DerHexer, Dhiva1, Diannaa, Dicklyon,
Dieselbub, Digresser, Dina, Dmr2, E Wing, ERK, Eaolson, Edebraal, Ehynes, Electricsforlife, EncMstr, Epbr123, Eric119, Evilrabidplotbunnies, EyeSerene, Fabartus, Falcon8765, Femto,
Finejon, Flowerpotman, Funandtrvl, F, Galoubet, Geekstuff, Gene93k, Geodesic42, Giftlite, Gilliam, Glaurung, Glenn, GoingBatty, GrayFullbuster, Green-eyed girl, Grubber, Guidod, Gzkn,
HalfShadow, Hankwang, Harriv, Haukurth, Helix84, Henkjanvanderpol, Heron, Hukseflux, Iamjp180, Iautomation, Ikalogic, Indurilo, Jackelfive, Jaeger5432, Jaxhere, Jc3s5h, Jlmerrill, Jni,
JoanneB, Jon Awbrey, JonHarder, Jondel, Jreferee, Karafias, Kaverin, Keyence, Kingpin13, Kisss256, Kkolodziej, KnowledgeOfSelf, Korg, LOL, LeaveSleaves, Leonard G., Levineps, Light
current, Lights, Lotje, Love Krittaya, Mac, Madurasn, Mako098765, Malo, Manavbhardwaj, Mark Kretschmar, Masgatotkaca, MauriceKA, Mdd, Meaghan, Meekywiki, Mesdale, Metricopolus,
Mikr18, Minakshimeena, Mion, Miroslavpohanka, Momergil, Momo san, Monkey230, Mortenoesterlundjoergensen, MrOllie, Mrh30, NawlinWiki, Ncmvocalist, Neelix, Nemilar, Newbi, Nick
Alexeev, Nornen3, Nposs, Oli Filth, Omair.majid, Onebravemonkey, Ot, Patrick, Patrick Berry, Pavithrabtech, Payno, Pb30, Pearle, Peruvianllama, Petrb, Phantom784, Phemanth88, Pigletpoglet,
Pit, Praveen khm, Pro66, Publunch, Qatter, R'n'B, RJFJR, Raasgat, Rabbiz, Radudobra, RainbowOfLight, Rajeev1988, RalphWiggum, Rdsmith4, Renaissancee, Rettetast, ReturnofPenitron,
Rigadoun, Rpyle731, Rubin joseph 10, Sadik ulubey, Sanders muc, Sapphic, Semiwiki, Sergio.ballestrero, Shadowjams, Shawnhath, SilentC, Silivrenion, Sinbaddasail0r, Sir Nicholas de
Mimsy-Porpington, Sjorford, Slowking Man, Smack, Snowdog, Snowolf, Solarmicrowavefive, Sole Soul, Sonett72, Sourcer66, SpeedyGonsales, Squidonius, Srleffler, Stephenb, Steve Pucci,
Stoneygirl45, Stuartfost, SuperMono, Sven Manguard, Synthy, The Anome, The Photon, The flamingo, TheFeds, Thubing, Tiddly Tom, TigerShark, Tohd8BohaithuGh1, Tommy2010,
Troy.peterson, Ummairsaeed, V8rik, Vactivity, Vanished User 8a9b4725f8376, Veinor, Vsmith, West.andrew.g, WikHead, WikiLeon, Wikifan798, YourEyesOnly, Z4ngetsu, ZX81, Zzuuzz, 479
anonymous edits
Accelerometer Source: http://en.wikipedia.org/w/index.php?oldid=466144676 Contributors: Aayushg1991, Aceleo, AcoustiMax, Aido2002, Alex.g, Aliento, AlistairMcMillan, Aluvus, Andy
Dingley, Antonbarabashov, Ariefwn, Ashlux, Ateachout, Auntof6, Axel.mulder, Baburaj kp, Beland, BigHairRef, Buechner, CalumCook234, Cambrant, Camw, CaptainG, CezarkennySeF,
Chadcarr, Chancek, Chances1, Charles Matthews, Chilichaz33, Chris the speller, Christopherjfoster, Chriswaterguy, ColinHelvensteijn, Dale Arnett, Dancter, Dane13, Daniel Olsen, Dave.bradi,
Davwillev, Debresser, Decan tulahi, Demorphica, Denisarona, Dicklyon, Dmd, Dmn, Dontmentionit, Dr Gangrene, Dravick, Drbreznjev, Drunken Pirate, Duwem, E smith2000, Ed g2s, Edcolins,
Edgar v. Hinber, Editore99, Eh kia, Elangsto, Electric hits, Electron9, Emerson7, Eridani, EugeneUngar, Ev, Evil saltine, Finfindiscotheque, Fixentries, Frederick Munster, Fredrik,
FrummerThanThou, Fstanchina, Fuutott, Getsnoopy, Gioto, Giulianorock, Glaurung, Glengarry, Glenn, Graeme Bartlett, Greensoda, Greg L, Gufiak, Headbomb, Heron, Hillshum, Hipocrite,
HiroJudgement, Hooperbloob, Ibbn, Icairns, Iluvcapra, Incompetence, Indigoanalysis, Iridescent, Irregular Shed, J04n, JJ Harrison, JamesBWatson, JanBurg, Jarus, Jayr168,
Jfdjhowyxbvhieutqxm, Jhsounds, Jim.henderson, Jmrowland, Jo Weber, JoaoRicardo, JohannVisagie, Johntheadams, Jorgbrown, Jsherm2, Juan Fco. Araya, Jwigton, KKL, KSchmitt6755,
KTo288, Katieh5584, Kelly Martin, Kenyon, Khommel, Khullah, Kku, Kokoo, Kwikwish, Kylegordon, Laderaranch, Leonard G., Linas, Lonelymiesarchie, Lumos3, Luotianci, Lwoodyiii,
Maande10, Maclarcs, Mangogirl2, Marcmarroquin, Marklwil, Marshallsumter, Martarius, Mawich, Mbierman, Michael.poplawski, Mike1024, MikelZap, Mikemurphy, Mit-Mit, Mizouman,
Mmarroquin, Mmkils, Motumboe, Mozzerati, Mtffm, NawlinWiki, Nicopipo2, Nneonneo, Nopetro, Nposs, Ohconfucius, Omegatron, Oneiros, Onion25, Oxwil, Pichote, Pietrow, Pinethicket,
Pinkgothic, Primenay13, Pwnage97, Quintote, Qutezuce, Ramblinknight, Rated325, Referencellc, ReyBrujo, Rich Farmbrough, RichG, Richiekim, Rjwilmsi, Rlsheehan, Rmsuperstar99, Robin
Johnson, Ronz, Rustyguts, Rwalker, Ryanrs, SF007, Salsb, Sasha5113, SatyrTN, Sbharris, Sbmehta, Scepia, Sebastiangarth, SecretDisc, Seldo, Serrano24, Shopingjs, Skimaniac, Smadidas,
Socrates2008, Srleffler, StephanACS, Sternutator, Steve Pucci, Suihkulokki, Symmetric, Syrthiss, TestingInfo, Themfromspace, Timneu22, Timo Honkasalo, Tmcw, Topbanana, Tordail,
Tushar.bhatnagar, UncleSamPatriot, Unwill, Vclaw, Vegaswikian, Viknesh1996, WLU, Waleswatcher, Wasell, WilcoxonResearch, Wkkm007, Wolfkeeper, Woohookitty, Youaremyrefuge,
Zaybertamer, Zazou25, Ztolstoy, Zunaid, 421 anonymous edits
Capacitive sensing Source: http://en.wikipedia.org/w/index.php?oldid=469657551 Contributors: Alpabarot, Baxterlkb, Beanfarmers, Beland, Belmond, BenFrantzDale, Boemanneke,
Cambridgeblue1971, Cander0000, CharlesC, Dancter, Darkfrog24, DavidAmis, Dicklyon, Diego Moya, Ebe123, Geekstuff, HarryHenryGebel, HereToHelp, Heron, Iancbend, Jerryobject,
Jogloran, John of Reading, Jovianeye, Juan M. Gonzalez, Kinema, Kissmojo, LjL, MER-C, Madison Alex, Malcolmst, Mandarax, Mark Kretschmar, Mikiemike, Mindmatrix, MrOllie, My
Ubuntu, Myscrnnm, Nikevich, Nuggetboy, Old Moonraker, Picopiyush, PlantTrees, R'n'B, RayAYang, Ripounet, Seaphoto, Sigmundur, Sole Soul, Steve Farnell, Tobias Bergemann,
Toomanycode, Tuetschek, U235, Wolfman1800, Wuffyz, Wwmarket, Yutsi, 74 anonymous edits
Capacitive displacement sensor Source: http://en.wikipedia.org/w/index.php?oldid=469049445 Contributors: BenFrantzDale, Fratrep, Katharineamy, Mark Kretschmar, Materialscientist,
MegaSloth, Music Sorter, Piezoelectric, Rich Farmbrough, Sole Soul, 11 anonymous edits
Current sensor Source: http://en.wikipedia.org/w/index.php?oldid=427256418 Contributors: DavideAndrea, Elmsfu, Hooperbloob, John of Reading, Micru, Mikiemike, Mion, Susts, Telempe,
10 anonymous edits
Electro-optical sensor Source: http://en.wikipedia.org/w/index.php?oldid=458552789 Contributors: Aspects, AvicAWB, Ceramic2metal, Dicklyon, Drmies, Eurocopter, Katharineamy,
Postcard Cathy, Time Immemorial, Vikramrudyani, Ynhockey, 1971, 8 anonymous edits
Galvanometer Source: http://en.wikipedia.org/w/index.php?oldid=469509035 Contributors: 168..., AJim, Adi4094, Alexius08, Algont, Allstarecho, Andrewa, Arch dude, Arjen Dijksman,
Atlant, Aulis Eskola, Bemoeial, BenFrantzDale, Bookandcoffee, Bryan Derksen, Chetvorno, DaGizza, Darnir redhat, Dean.jenkins, Dennywuh, Dolovis, Dr Greg, Dysprosia, ELApro, Flippin42,
Fuzlyssa, Gauravjuvekar, Gene Nygaard, Glenn, Heron, Herorev, Hotshot288, Infinoid, Janke, Japo, Jdclevenger, Jef-Infojef, Jim.henderson, Jmauro2000, Jpsfitz, Jrdioko, K Eliza Coyne, KJS77,
Kesac, Krallja, Kuamudhan, La Alquimista, Levent, Light current, Lohusalu, Magus732, Mani1, MarcusBritish, Mashford, Mav, Mikeblas, NERIUM, Naohiro19, NewEnglandYankee,
NuclearWarfare, OwenX, PACO, Pavel Vozenilek, Radagast83, Rich Farmbrough, Rifleman 82, Rsduhamel, Schaxel, Sectryan, Shd, Sonett72, SpuriousQ, Suckindiesel, Tcncv, The Thing That
Should Not Be, Theresa knott, Tholly, Titoxd, Ugophy, Ugophy2, WaltBusterkeys, Wangkairu, Waveguy, Wayne Slam, William Avery, Williamgelbart, Wtshymanski, Zoicon5, 150 anonymous
edits
Hall effect sensor Source: http://en.wikipedia.org/w/index.php?oldid=466983838 Contributors: A-Day, Adams13, Aeonx, Alansohn, Alvis, Android79, Angr, Antzervos, Attilios,
BenFrantzDale, Cacycle, Cubaexplorer, Danpeirce, Dsperlich, Enric Naval, Es330td, Evil Monkey, Glenn, Hooperbloob, Inwind, Janke, Joe Gazz84, Jwortzel, Karol Langner, Kongr43gpen,
Ksk2005, Little Mountain 5, Magnus Manske, Mbutts, Mikeblas, Mion, Mralik, Nposs, Omegatron, Ot, Pocketrocket24, Remuel, Rewolff, Sam Hocevar, Sillybilly, Sjakkalle, Smack, Srleffler,
Stephenb, Trurle, Yves-Laurent, Zhangzhe0101, 62 anonymous edits
Inductive sensor Source: http://en.wikipedia.org/w/index.php?oldid=449157817 Contributors: Amalas, ChrisHodgesUK, Crystallina, Discospinster, Dougofborg, Gillsensors, Hooperbloob,
Inductiveload, John of Reading, Lokalan, Loupeter, Mion, O keyes, Positek, Ratarsed, Rossheth, Sameboat, TheoClarke, Voidxor, 14 anonymous edits


Article Sources and Contributors


Infrared Source: http://en.wikipedia.org/w/index.php?oldid=470012210 Contributors: 129.128.164.xxx, 15.253, 1exec1, 200.191.188.xxx, 28421u2232nfenfcenc, 5 albert square, ABF, Aappo,
Abb3w, Accurizer, Adam850, Adamantios, Aetheling, Ahoerstemeier, Ajmint, Alan D, Alansohn, Albertktau, Alfio, Allstarecho, Alphachimp, Altermike, Amp14, Anaraug, Andres, AndrewJD,
Andrewpmk, Andyjsmith, Anilocra, Anna Lincoln, Anon user, Antandrus, Antilived, Anwar saadat, ArchonMagnus, Arcos9000, Ario, Armando, Arthur Holland, AssegaiAli, Atthack, Aur,
Axlq, B5Fan2258, Bachrach44, Barefootguru, Barkeep, BarretB, Bart133, Basawala, Basssaa, Bcebul, Benplowman, Bergsten, Betacommand, Bisqwit, BoP, Bobblewik, Boccobrock,
Bongwarrior, Bowlhover, Brandon, BrendanRyan, Brianga, Brn2bme, Bryan Derksen, Buchloe, Butane Goddess, Byzantime, Calvin 1998, CambridgeBayWeather, Camw, CanDo, Canderson7,
Capricorn42, Carmichael95, Catonahottinroof, Centrx, Ceyockey, Chamberlain2007, Chris Capoccia, Chris the speller, Chrisd87, Christophenstein, Chromatest, Chromaticity, Cmdrjameson,
Cody.pope, CommonsDelinker, Condem, Conversion script, Cool Cosmos, Creativeunit, Crucis, Ctroy36, DARTH SIDIOUS 2, DOTA fanatic, DV8 2XL, DVdm, Daniel5127, Danim, David
Woodward, Davidl9999, Davipo, Dbiel, Debolaz, Deglr6328, Dendodge, Denniss, Deor, DerHexer, Detective spooner, Dicklyon, Dicky747, Discospinster, Diyar se, Dlohcierekim's sock,
Dlrohrer2003, Doyley, DrBob, DragonHawk, Dreadstar, E Wing, ESkog, Ed Poor, Edward321, El C, Elansey, Eleland, Elmoro47, Emc2, Emperorbma, Energee5, Epbr123, Epolk, Erebus
Morgaine, Eroney, Eu.stefan, EvenT, Everyking, Excirial, FKmailliW, Faradayplank, Farosdaughter, Father Goose, Felix116, Femto, Ferkelparade, Flightmight, Foobar, ForrestVoight, Fotaun,
Fradeve11, Fram, Fred Bradstadt, GDonato, Gail, Gaius Cornelius, Gdarin, Gene Nygaard, Geni, GianniG46, Giftlite, Glenn, Gorm, Gosub, Graeme Bartlett, GraemeL, Graham87, Greggklein,
GreyCat, Gscshoyru, Gurch, Hadal, Haham hanuka, Hamza1232, Hankwang, Haploidavey, Happysailor, Hawkhkg11, Headbomb, Heathert87, Heron, Hmains, Hmmc12, Homonihilis, Hugh2414,
Hunt9, Hydrogen Iodide, II MusLiM HyBRiD II, IRSPECS, Ian Strachan, Icefall5, Indium, InternetMeme, Iridescent, Itomchandler, Ixfd64, J.delanoy, JHMM13, JLaTondre, Ja 62, JaGa,
Jacobolus, Jamesmorrey, Jaraalbe, Jasonglchu, Jauhienij, Javert, Jcobbpscorp, Jhsounds, Jim Seffrin, JinJian, Jman8, Jmencisom, Jmundo, Jncraton, Jodamn, Joeang, Johnbod,
Johnpatrickgreenwood, JorisvS, Jpk, Jpvinall, Jrockley, Julius Sahara, Jusdafax, Jwinius, Kaenneth, Kane5187, Keilana, King of Hearts, Kingpin13, Kitten454, Kjkolb, Klahs, Kleinost,
Knorzeyboi, KoshVorlon, Krashlandon, Krm500, Krouge, Kukini, Kuru, Kyoko, Ladybug37, Lauramadeline, LaurelESH, Lcarscad, Lectonar, Lee Carre, Leonard^Bloom, Lewiswalker9999,
Lightdarkness, Lightmouse, Limideen, LindsayH, Lir, Logical2u, Looxix, Loren.wilton, Lotje, Lotusoneny, Lucky number 49, Luna Santin, Lupo, M.O.X, M.veenstra, MD87, MER-C, MHD,
MJ94, MONGO, Mac, Makeemlighter, Makemi, Manofthesea, MarkS, MarkSutton, Marshallsumter, Martarius, Martin Kozk, Martyjmch, Master Jay, Masterofpsi, Mat-C, Materialscientist,
Matt Deres, MatthewBChambers, Mav, Maxamilliona, Mconst, Mekong Bluesman, Melaen, Mentifisto, Michael Hardy, Michaell83, Michilans, Microcell, Mighty Draco, Mike Rosoft, Mike4ty4,
Mikemoral, Mild Bill Hiccup, Mistry the mistro, Mkch, Mkubica, Mohammad Qasim, Moppet65535, Moriori, Mortense, Mosely, MrOllie, Mvidito-mlaniak, Mwanner, Mygerardromance, N5iln,
NawlinWiki, Neothemagic, Neurolysis, Newportm, Nick.follows, Nicolaasuni, Night-vision-guru, Nihiltres, Nishkid64, Nk, Nlu, Nono64, Novacatz, Nukeless, Nutscode, Ocaasi, Oddbodz,
OlEnglish, Onorem, Orionus, Oxymoron83, P69, PRRfan, PSimeon, Pagw, ParticleMan, Patrick, Pcbene, Penguin, Perey, Peterlewis, Pfaff9, Pgk, Phantomsteve, Philg88, Philip Trueman, Piano
non troppo, Pinethicket, Pinzo, Pion, Plasticup, Pmd1991, Poi9999, Poopiebut1, Professor marginalia, Profnick, Protious, Quantum.explorer, Quinacrine, RJHall, RPS Deepan, Radon210,
Raffaele.castagno, RainbowOfLight, Ramsis II, RandomHat, Raoulduke47, Raskolnikov The Penguin, Raul654, Rayc, RedWolf, Reddi, Redfarmer, Reelx09, RexNL, Rhobite, Rich Farmbrough,
RingtailedFox, Rjwilmsi, Rmhermen, Rnt20, Rob Hooft, Rob184rob, Robertw0925, RockMancuso, Rodhullandemu, RokerHRO, Romit3, Ronhjones, Roscoe x, RoyBoy, RoySmith, Rrburke,
Rror, SDS, SJP, Sadalsuud, Saintswithin, Salamurai, Samtheboy, Sankalpdravid, Saric, Scasey1960, SchuminWeb, ScooterSES, SeanMack, Seaphoto, Sengkang, Sheeson, ShelfSkewed,
Shoeofdeath, Shutupthaddeus666, Sjakkalle, Skarebo, Skizzik, Socialservice, Sodium, Some jerk on the Internet, Soroush83, Sparkgap, Splash, SpuriousQ, Srleffler, Srt vinay, Starionwolf,
Stephan t, Stephenb, Steve Quinn, Stevenmitchell, Stizz, Strayan, StuartH, Sunlight saunas, SuperHamster, SwedishPsycho, TWCarlson, Tabletop, Tamfang, Tar7arus, Tarani, Tarquin,
TastyPoutine, Tb, Teacher123, Tedder, Tels, Template namespace initialisation script, Thanujan23, The Anome, The Electric Eel, The Thing That Should Not Be, The undertow, The_ansible,
Theanphibian, Thedjatclubrock, Thermoworld, Thingg, Thunderbird2, Tiddly Tom, Tide rolls, Tigershrike, Tobias Hoevekamp, Tranh Nguyen, Tree Biting Conspiracy, Tresiden, Truegreen,
Tv316, USNV, Unyoyega, VMS Mosaic, Vanangamudiyan, Vary, Velella, Versageek, Viskonsas, Vivaronaldo, Volcanictelephone, Vsmith, Waerloeg, Wapcaplet, Waterden, Wavelength, Wayne
Hardman, Wayne Slam, Westley Turner, WikHead, WikipedianMarlith, Wikipelli, Wikisteff, William Avery, Wimt, Wjbeaty, WmRowan, Wolfgobbler, Wolfmankurd, Writeeddie,
Wtfunkymonkey, Wtmitchell, Wtshymanski, Xntrick03, Yamamoto Ichiro, Yansa, Yashkochar, Yemal, Yogi m, Yorkshiresky, Yt95, Yurik, Yyy, ZacBowling, Zakamoka1, Zenecan, Zerbey,
Zoicon5, ZooFari, Zundark, ~shuri, jlfr, 1279 anonymous edits
Linear encoder Source: http://en.wikipedia.org/w/index.php?oldid=463085769 Contributors: C mobile, CWA-FRB, Colonies Chris, Fiducial, Fresheneesz, Lightmouse, Magioladitis,
Malcolma, Matesy, Mechgyver, Mild Bill Hiccup, Nagle, Pt130135, R'n'B, Radiojon, 28 anonymous edits
Photoelectric sensor Source: http://en.wikipedia.org/w/index.php?oldid=464968373 Contributors: Abutorsam007, Armadilloz, Banana04131, Bellazambo, Biscuittin, Creutiman, Ingolfson,
JethroElfman, Jusdafax, Kingturtle, Materialscientist, McBEC, Mild Bill Hiccup, Mion, Nihola, Oli Filth, PhilKnight, R'n'B, SN2216, Vrenator, Whispering, 30 anonymous edits
Photodiode Source: http://en.wikipedia.org/w/index.php?oldid=470169382 Contributors: A. B., A0602336, Adam C C, AledJames, Anachron, Andres, Arande2, Atcold, Atlant, Audriusa,
Bdesham, Beetstra, Big Bird, BigRiz, Bookofjude, Breno, Bwrs, Capricorn42, Chn Andy, Css, Ctroy36, Cyfal, DJIndica, DV8 2XL, Daisydaisy, Dancter, Ddcc, Deglr6328, DerHexer,
Devindunseith, Dicklyon, Djjhs, Dlegros, Electron9, Em3ryguy, Epo, Erud, Evan 124, Favonian, Ferkelparade, Fotodiod, Francs2000, Frecklefoot, Fredrik, Fulldecent, Gh5046, Giant12111,
Giftlite, Glenn, H0dges, Hede2000, HenrikS, Heron, Hooperbloob, Intellec7, J.delanoy, Jaraalbe, Jeffthejiff, Jhchang, John of Reading, Jstahley, KaiMartin, Karthikndr, Kevglynn, Kkmurray,
KnightRider, KrakatoaKatie, LeaveSleaves, Liftarn, Linas, Lindosland, LouScheffer, Lucas the scot, MadScientistVX, Mako098765, Materialscientist, Minna Sora no Shita, Mitjaprelovsek,
Moonkey, Morcheeba, Murtasa, NANOIDENT, Nakon, Nedim Ardoa, Nickptar, Nikai, Nilfanion, Nrnkpeukdzr, Ohnoitsjamie, Oli Filth, Omegatron, Pb30, Perry Bebbington, Pharaoh of the
Wizards, Philip Trueman, Pol098, Puffin, RTC, Rabin, Rannphirt anaithnid, Rcopley, Rdsmith4, Reddi, Robertmuil, Roscoe x, RuM, Sannse, Scwks, Semiwiki, Sgc2002, Shadwstalkr, Shanes,
Shkirenko edik, Silverbone, Snafflekid, Spiel496, Srdhrp vict, Srleffler, Tarchon, Terry1944, Texasron, The Photon, Tim Starling, Tls60, Tmaull, Vercalos, Vssun, Weihao.chiu, Whogue,
WingkeeLEE, Wtshymanski, Xagent86, 286 anonymous edits
Piezoelectric accelerometer Source: http://en.wikipedia.org/w/index.php?oldid=443188884 Contributors: 5 albert square, Andrew Jameson, Anna Frodesiak, Archiem, Dancter, JanBurg,
Lamro, Lbnoliac, M-le-mot-dit, Marcmarroquin, MilborneOne, Pinethicket, Pro crast in a tor, Shinerunner, Tide rolls, Wjejskenewr, 46 anonymous edits
Pressure sensor Source: http://en.wikipedia.org/w/index.php?oldid=470008109 Contributors: A winterburn, AFreyler, AGeorgas, Ali Esfandiari, AmyKondo, Axel.mulder, Beetstra,
CambridgeBayWeather, Cdeverille, Chase me ladies, I'm the Cavalry, Cst17, Dancter, Danntm, Dawnseeker2000, Dawop, Dtgriscom, Duk, El Wray, Ericwsyow, Firsfron, Frasermoo, GB fan,
GLammel, Glenn, Glogger, GoingBatty, HazardousWaster, Hebrides, Hooperbloob, Insanity Incarnate, JForget, Jayden54, Jdwca123, Kendall.waters, Kmashal, Logicserve, Longciaran,
Mahira75249, Michael Hardy, Mion, MrOllie, Murphystout, Napalm Llama, NebY, Nnh, Nposs, Ohnoitsjamie, Oli Filth, Ownage2214, PrestonH, Rebekahpedia, Redecke, Robert K S,
Smurrayinchester, Srleffler, ThreeOfCups, Totlll, WBardwin, Whitepaw, Woohookitty, 173 anonymous edits
Resistance thermometer Source: http://en.wikipedia.org/w/index.php?oldid=465630142 Contributors: Ahoerstemeier, AlphaTeller, Anonymous6494, ArsniureDeGallium, AvicAWB,
Bassbonerocks, Billymac00, Bobblewik, Calebcs, Callunckleart, CanadianLinuxUser, Carnildo, Catapult, Chochopk, Chrubb, Closedmouth, Codeczero, Edward, EliV, Femto, Gabriel Kielland,
Gaius Cornelius, Gene Nygaard, Grahamwild, Grantmidnight, Hankwang, Heron, Hoemaco, Howcheng, Ihg, Ixfd64, Jaan513, Japanese Searobin, Jgoldfar, Jwortzel, Karpouzi, Kimmyb76,
Kjkolb, Lankiveil, MER-C, Microfrost, Mild Bill Hiccup, Mortense, Mpj28202, Mpj96, Nemilar, Nick Number, Nitecrawler, Noisy, PHermans, Phildong1339, Pol098, Psanderson, Qqzzccdd,
RainbowOfLight, Rasbach, Redeemer, Rjwilmsi, Robert - Northern VA, Roiwikiacct, Rorro, Rwl10267, Sadalmelik, SchuminWeb, Shikharsaxena, Shirifan, Sho Uemura, Skraz, Srleffler,
Stokito, Surecan, Surendhar Murugan, Suspekt, THEN WHO WAS PHONE?, Tasior, Tbhotch, Tkeep, Tom Hawkins, Toon05, Torrentcat, Turian, UncivilFire, Vadim Makarov, Vuo, William
Avery, WingkeeLEE, Xxslayer88xx, Yummy mummy, Zgozvrm, 176 anonymous edits
Thermistor Source: http://en.wikipedia.org/w/index.php?oldid=468762037 Contributors: 63.192.137.xxx, ABF, Adamw, Aditya, Alansohn, Ashtead Tutor, Awickert, Back0ut, Bemasher,
Bobblewik, Brian Helsinki, Brighterorange, Bryan Derksen, Buster2058, CambridgeBayWeather, Captain-tucker, Carbuncle, Centrx, Chakri srivatsa, Choihei, Chuunen Baka, Closedmouth,
Conversion script, Cookie4869, Dhaluza, Diablod666, Dittos12, Donarreiskoffer, Doodle77, Drigz, EliV, Finell, Gene Nygaard, God Emperor, GoingBatty, Graham87, Greg L, Grend3l, Grunt,
Halmstad, Hankwang, Henry W. Schmitt, Heron, Hooperbloob, Howcheng, Ibdelfest, Iopq, James086, Janz94b, Jim.henderson, Jimgeorge, Jni, Jnmurfin, Jwortzel, Karch, Karpouzi, Kjkolb,
Komeil, Leonard G., Light current, Lochlan1, Lou Mueller, Madhero88, Madmanguruman, Magioladitis, Manavbhardwaj, Mardetanha, Masgatotkaca, Maximus Rex, Mayank2507, Mdurante,
Mhking, Mion, Mircealutic, Morcheeba, Mortense, Myanw, N5iln, Nasanbat, NascarEd, Ndgrahams, Nedim Ardoa, Neeners, Nemilar, Neparis, NickFr, Oh Snap, Ohnoitsjamie, Oli Filth,
Omegatron, Oscarthecat, PAR, Papep, Plainman, Plugwash, PureGenius, Quintote, RUL3R, RainbowOfLight, Ramaksoud2000, Raven4x4x, Ray Van De Walker, Requestion, Rob-bob7-0,
Salvar, Sbyrnes321, SchfiftyThree, Searchme, Seraphal, Shaddack, Shirifan, Sho Uemura, SkyLined, Sodium, SpaceFlight89, Srleffler, Stannered, Szopen, Tabby, Tempodivalse, Teslaton,
Tgmsfu, Thorseth, Tim Starling, Tommy2010, Tonsofpcs, Tsca, Txomin, UncivilFire, Unfocused, Ussensor, Vadim Makarov, Veledan, Vichou, Waulfgang, West.andrew.g, Willking1979,
Wmahan, Yaris678, Yoganate79, Zoicon5, Zondor, Zotel, Zundark, Zureks, 307 anonymous edits
Torque sensor Source: http://en.wikipedia.org/w/index.php?oldid=468759166 Contributors: Amalas, Atlant, BD2412, Beetstra, Chase me ladies, I'm the Cavalry, ChemGardener, Dieseltaylor,
Guy Macon, Habensm, Hooperbloob, Hu12, Just zis Guy, you know?, Mion, Mushin, Nposs, Oxtoby, Sinbaddasail0r, Steelheadwook, Titaniumviper, Vachhaninimit, Ward20, Winczner,
Xboxgamer733, 35 anonymous edits
Ultrasonic thickness gauge Source: http://en.wikipedia.org/w/index.php?oldid=398578024 Contributors: Biscuittin, Hooperbloob, Makawity, Mission Fleg, Remo34, Sadads
List of sensors Source: http://en.wikipedia.org/w/index.php?oldid=468319377 Contributors: AJim, Agrihouse, Amalas, AmyKondo, Anger22, Barkeep, Bovineone, Dancter, Darya1970,
Dave2208, Dbcv, Dicklyon, E Wing, Ed.gemdjian, Edgar181, Gene Nygaard, Gillsensors, Gmoose1, HCPotter, Hankwang, Hathawayc, J.delanoy, Jondel, Kj cheetham, Leonard G.,
Marcmarroquin, Mark Kretschmar, Matsyes, Melaen, Mindmatrix, Mion, Mjm1964, NawlinWiki, Nay Min Thu, Neier, NickelShoe, Nposs, PMDrive1061, Papadim.G, Patrick Berry, Pegship,
Per Ardua, Pierre cb, Publunch, Reskin, Shawn Worthington Laser Plasma, Sole Soul, Soni master, Srleffler, Swinman, Tabby, The wub, Thrapper, Tls60, Trfs, Vachhaninimit, Wikipelli,
Winstonwolf33, Wtshymanski, Zazou25, 66 anonymous edits


Image Sources, Licenses and Contributors


Image:Function ocsillating at 3 hertz.svg Source: http://en.wikipedia.org/w/index.php?title=File:Function_ocsillating_at_3_hertz.svg License: Creative Commons Attribution-Sharealike 3.0
Contributors: Thenub314
Image:Onfreq.svg Source: http://en.wikipedia.org/w/index.php?title=File:Onfreq.svg License: GNU Free Documentation License Contributors: Original: Nicholas Longo, SVG conversion:
DX-MON (Richard Mant)
Image:Offfreq.svg Source: http://en.wikipedia.org/w/index.php?title=File:Offfreq.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Thenub314
Image:Fourier transform of oscillating function.svg Source: http://en.wikipedia.org/w/index.php?title=File:Fourier_transform_of_oscillating_function.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Thenub314
File:Rectangular function.svg Source: http://en.wikipedia.org/w/index.php?title=File:Rectangular_function.svg License: GNU Free Documentation License Contributors: Axxgreazz,
Bender235, Darapti, Omegatron
File:Sinc function (normalized).svg Source: http://en.wikipedia.org/w/index.php?title=File:Sinc_function_(normalized).svg License: GNU Free Documentation License Contributors:
Bender235, Juiced lemon, Krishnavedala, Omegatron, Pieter Kuiper
File:S-Domain circuit equivalency.svg Source: http://en.wikipedia.org/w/index.php?title=File:S-Domain_circuit_equivalency.svg License: Public Domain Contributors: Ordoon (original
creator) and Flekstro
Image:Dirac distribution PDF.svg Source: http://en.wikipedia.org/w/index.php?title=File:Dirac_distribution_PDF.svg License: GNU Free Documentation License Contributors: Original SVG
by Omegatron Original PNG version by PAR This version adapted by Qef
Image:Dirac function approximation.gif Source: http://en.wikipedia.org/w/index.php?title=File:Dirac_function_approximation.gif License: Public Domain Contributors: Oleg Alexandrov
Image:DiracComb.png Source: http://en.wikipedia.org/w/index.php?title=File:DiracComb.png License: Public Domain Contributors: PAR, 1 anonymous edits
Image:Dirac distribution CDF.svg Source: http://en.wikipedia.org/w/index.php?title=File:Dirac_distribution_CDF.svg License: GNU Free Documentation License Contributors: Juiced
lemon, Omegatron, Pieter Kuiper
Image:Ramp_function.svg Source: http://en.wikipedia.org/w/index.php?title=File:Ramp_function.svg License: Public Domain Contributors: Qef
Image:Jpeg2000 2-level wavelet transform-lichtenstein.png Source: http://en.wikipedia.org/w/index.php?title=File:Jpeg2000_2-level_wavelet_transform-lichtenstein.png License: Creative
Commons Attribution-ShareAlike 3.0 Unported Contributors: Alessio Damato
Image:Region of convergence 0.5 causal.svg Source: http://en.wikipedia.org/w/index.php?title=File:Region_of_convergence_0.5_causal.svg License: Creative Commons
Attribution-ShareAlike 3.0 Unported Contributors: en:User:Cburnett
Image:Region of convergence 0.5 anticausal.svg Source: http://en.wikipedia.org/w/index.php?title=File:Region_of_convergence_0.5_anticausal.svg License: Creative Commons
Attribution-ShareAlike 3.0 Unported Contributors: en:User:Cburnett
Image:Region of convergence 0.5 0.75 mixed-causal.svg Source: http://en.wikipedia.org/w/index.php?title=File:Region_of_convergence_0.5_0.75_mixed-causal.svg License: Creative
Commons Attribution-ShareAlike 3.0 Unported Contributors: en:User:Cburnett
File:Type K and type S.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Type_K_and_type_S.jpg License: Public Domain Contributors: Skatebiker, 3 anonymous edits
File:Infrared Transceiver Circuit.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Infrared_Transceiver_Circuit.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Praveen khm
Image:Accelerometer.png Source: http://en.wikipedia.org/w/index.php?title=File:Accelerometer.png License: Public Domain Contributors: Original uploader was SatyrTN at en.wikipedia
File:Galaxy Nexus smartphone.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Galaxy_Nexus_smartphone.jpg License: Creative Commons Attribution 2.5 Contributors: Faramarz,
MB-one, SF007, 1 anonymous edits
Image:Galvanometer scheme.svg Source: http://en.wikipedia.org/w/index.php?title=File:Galvanometer_scheme.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
User:Fred the Oyster
Image:D'Arsonval ammeter movement.jpg Source: http://en.wikipedia.org/w/index.php?title=File:D'Arsonval_ammeter_movement.jpg License: Public Domain Contributors: Bartlett & Co.
N.Y.
Image:Thompson mirror galvanometer use.png Source: http://en.wikipedia.org/w/index.php?title=File:Thompson_mirror_galvanometer_use.png License: Public Domain Contributors:
Norman Hugh Schneider and Sylvanus P. Thompson
Image:Western Union standard galvanometer.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Western_Union_standard_galvanometer.jpg License: Public Domain Contributors:
Thomas Dixon Lockwood
File:Astatic Galvanometer brass and ivory.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Astatic_Galvanometer_brass_and_ivory.jpg License: Creative Commons Zero
Contributors: 
Image:Autoexpmeter.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Autoexpmeter.JPG License: Public Domain Contributors: Original uploader was Janke at en.wikipedia
File:DynAXIS L.jpg Source: http://en.wikipedia.org/w/index.php?title=File:DynAXIS_L.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Scanlab7
File:Cylinders with Hall sensors.png Source: http://en.wikipedia.org/w/index.php?title=File:Cylinders_with_Hall_sensors.png License: GNU Free Documentation License Contributors:
User:IMeowbot
File:Clutch with Hall Effect sensor.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Clutch_with_Hall_Effect_sensor.jpg License: Creative Commons Attribution 2.0 Contributors:
Deadstar, FlickreviewR, Stunteltje, Wdwd
Image:Hall sensor tach.gif Source: http://en.wikipedia.org/w/index.php?title=File:Hall_sensor_tach.gif License: GNU Free Documentation License Contributors: User:IMeowbot
File:Budowa_czujnika_indukcyjnego_(ubt).svg Source: http://en.wikipedia.org/w/index.php?title=File:Budowa_czujnika_indukcyjnego_(ubt).svg License: Creative Commons Attribution 2.5
Contributors: user:tsca
File:Ir girl.png Source: http://en.wikipedia.org/w/index.php?title=File:Ir_girl.png License: GNU Free Documentation License Contributors: Better than Hustler, Flying Saucer, Masgatotkaca,
Pieter Kuiper
File:Wide-field Infrared Survey Explorer first-light image.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Wide-field_Infrared_Survey_Explorer_first-light_image.jpg License:
Public Domain Contributors: NASA/JPL-Caltech/UCLA
File:Atmosfaerisk spredning.gif Source: http://en.wikipedia.org/w/index.php?title=File:Atmosfaerisk_spredning.gif License: Public Domain Contributors: Adoniscik, Cepheiden,
Jim.henderson, Maksim, 1 anonymous edits
File:Active-Infrared-Night-Vision.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Active-Infrared-Night-Vision.jpg License: Attribution Contributors: Extreme CCTV. Original
uploader was Night-vision-guru at en.wikipedia. Later version(s) were uploaded by Beao at en.wikipedia.
File:Infrared dog.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Infrared_dog.jpg License: Public Domain Contributors: NASA/IPAC
Image:Specim aisaowl outdoor.png Source: http://en.wikipedia.org/w/index.php?title=File:Specim_aisaowl_outdoor.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Aappo
File:P1020168.JPG Source: http://en.wikipedia.org/w/index.php?title=File:P1020168.JPG License: GNU Free Documentation License Contributors: RockMancuso at en.wikipedia
File:US IR satpic.JPG Source: http://en.wikipedia.org/w/index.php?title=File:US_IR_satpic.JPG License: Public Domain Contributors: Original uploader was Yorkshiresky at en.wikipedia
image:ESO - Beta Pictoris planet finally imaged (by).jpg Source: http://en.wikipedia.org/w/index.php?title=File:ESO_-_Beta_Pictoris_planet_finally_imaged_(by).jpg License: unknown
Contributors: ESO/A.-M. Lagrange et al.
File:Spitzer- Telescopio.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Spitzer-_Telescopio.jpg License: Public Domain Contributors: Bricktop, Clh288, Denniss, Huntster,
OS2Warp
File:Van Eyck - Arnolfini Portrait.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Van_Eyck_-_Arnolfini_Portrait.jpg License: Public Domain Contributors: Jan van Eyck
File:wiki snake eats mouse.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Wiki_snake_eats_mouse.jpg License: Creative Commons Attribution-ShareAlike 3.0 Unported
Contributors: Arno / Coen
File:wiki bat.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Wiki_bat.jpg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Arno



File:Greenhouse Effect.svg Source: http://en.wikipedia.org/w/index.php?title=File:Greenhouse_Effect.svg License: GNU Free Documentation License Contributors: User:Rugby471
File:Optical_Encoders.png Source: http://en.wikipedia.org/w/index.php?title=File:Optical_Encoders.png License: Creative Commons Attribution-Share Alike Contributors: Fiducial
File:Visualisierung der magnetischen Struktur eines Linearencoders (Aufnahme mit MagView).jpg Source:
http://en.wikipedia.org/w/index.php?title=File:Visualisierung_der_magnetischen_Struktur_eines_Linearencoders_(Aufnahme_mit_MagView).jpg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: User:Matesy
File:Circular Lissajous.gif Source: http://en.wikipedia.org/w/index.php?title=File:Circular_Lissajous.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: Fiducial
File:Quadrature Diagram.svg Source: http://en.wikipedia.org/w/index.php?title=File:Quadrature_Diagram.svg License: Public Domain Contributors: Sagsaw
File:Fotodio.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Fotodio.jpg License: GNU Free Documentation License Contributors: Flying Saucer, JuTa
Image:Photodiode symbol.svg Source: http://en.wikipedia.org/w/index.php?title=File:Photodiode_symbol.svg License: GNU Free Documentation License Contributors: Omegatron, Rocket000, Sergey kudryavtsev, 5 anonymous edits
File:Photodiode operation.png Source: http://en.wikipedia.org/w/index.php?title=File:Photodiode_operation.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Kennlinie_Photodiode_1.png: Gregor Hess (Ghe42) derivative work: Materialscientist (talk)
File:Response silicon photodiode.svg Source: http://en.wikipedia.org/w/index.php?title=File:Response_silicon_photodiode.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: KaiMartin
Image:PD-icon.svg Source: http://en.wikipedia.org/w/index.php?title=File:PD-icon.svg License: Public Domain Contributors: Alex.muller, Anomie, Anonymous Dissident, CBM, MBisanz, Quadell, Rocket000, Strangerer, Timotheus Canens, 1 anonymous edits
Image:PiezoAccelTheory.gif Source: http://en.wikipedia.org/w/index.php?title=File:PiezoAccelTheory.gif License: GNU Free Documentation License Contributors: Archiem
Image:PiezoAccel.jpg Source: http://en.wikipedia.org/w/index.php?title=File:PiezoAccel.jpg License: GNU Free Documentation License Contributors: Archiem
Image:Pressuresensor.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pressuresensor.jpg License: GNU General Public License Contributors: Badseed, Redecke, Wdwd, 3 anonymous edits
Image:Digital-barometric-pressure-sensor.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Digital-barometric-pressure-sensor.jpg License: Creative Commons Attribution 3.0 Contributors: GLammel
Image:P sensors.JPG Source: http://en.wikipedia.org/w/index.php?title=File:P_sensors.JPG License: GNU Free Documentation License Contributors: Kmashal
File:Thin Film PRT.png Source: http://en.wikipedia.org/w/index.php?title=File:Thin_Film_PRT.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: AlphaTeller
File:Wire Wound PRT.png Source: http://en.wikipedia.org/w/index.php?title=File:Wire_Wound_PRT.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: AlphaTeller
File:Coil Element PRT.png Source: http://en.wikipedia.org/w/index.php?title=File:Coil_Element_PRT.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: AlphaTeller
Image:Rtdconstruction.gif Source: http://en.wikipedia.org/w/index.php?title=File:Rtdconstruction.gif License: unknown Contributors: Original uploader was Psanderson at en.wikipedia
Image:twowire.gif Source: http://en.wikipedia.org/w/index.php?title=File:Twowire.gif License: unknown Contributors: Original uploader was Psanderson at en.wikipedia
Image:threewire.gif Source: http://en.wikipedia.org/w/index.php?title=File:Threewire.gif License: unknown Contributors: Psanderson at en.wikipedia
Image:fourwire.gif Source: http://en.wikipedia.org/w/index.php?title=File:Fourwire.gif License: unknown Contributors: Psanderson at en.wikipedia
Image:4wirebetter.gif Source: http://en.wikipedia.org/w/index.php?title=File:4wirebetter.gif License: GNU Free Documentation License Contributors: Original uploader was Psanderson at en.wikipedia
Image:NTC bead.jpg Source: http://en.wikipedia.org/w/index.php?title=File:NTC_bead.jpg License: Creative Commons Attribution-ShareAlike 2.0 Germany Contributors: Ansgar Hellwig
Image:Thermistor.svg Source: http://en.wikipedia.org/w/index.php?title=File:Thermistor.svg License: Public Domain Contributors: jjbeard


License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/

