Aberration
Aberration is something that deviates from the normal way; in optics the term has a specifically defined meaning:
Aberrations are departures of the performance of an optical system from the predictions of
paraxial optics.[1] Aberration leads to blurring of the image produced by an image-forming
optical system. It occurs when light from one point of an object after transmission through the
system does not converge into (or does not diverge from) a single point. Instrument-makers need
to correct optical systems to compensate for aberration. The articles on reflection, refraction and
caustics discuss the general features of reflected and refracted rays.
Aberrations fall into two classes: monochromatic and chromatic. Monochromatic aberrations are
caused by the geometry of the lens and occur both when light is reflected and when it is
refracted. They appear even when using monochromatic light, hence the name.
Chromatic aberrations are caused by dispersion, the variation of a lens's refractive index with
wavelength. They do not appear when monochromatic light is used.
Monochromatic aberrations
Piston
Tilt
Defocus
Spherical aberration
Coma
Astigmatism
Field curvature
Image distortion
Piston and tilt are not actually true optical aberrations, as they do not represent or model
curvature in the wavefront. If an otherwise perfect wavefront is "aberrated" by piston and tilt, it
will still form a perfect, aberration-free image, only shifted to a different position. Defocus is the
lowest-order true optical aberration.
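As an illustrative sketch (a simple power-series description of the wavefront error, not taken from the article above; the coefficients a0 to a3 are hypothetical), the lowest-order terms over pupil coordinates (x, y) are:

\[
W(x, y) \approx \underbrace{a_0}_{\text{piston}} \;+\; \underbrace{a_1 x + a_2 y}_{\text{tilt}} \;+\; \underbrace{a_3\,(x^2 + y^2)}_{\text{defocus}} \;+\; \dots
\]

Only the quadratic (defocus) and higher-order terms curve the wavefront; the constant and linear terms merely shift the image, which is why piston and tilt are not true aberrations.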
Coma (optics)
In optics (especially telescopes), coma (also known as comatic aberration) in an optical system refers to an aberration, inherent to certain optical designs or caused by imperfections in the lens or other components, that makes off-axis point sources such as stars appear distorted, with a comet-like tail (coma). Specifically, coma is defined as a variation in magnification over
the entrance pupil. In refractive or diffractive optical systems, especially those imaging a wide
spectral range, coma can be a function of wavelength, in which case it is a form of chromatic
aberration.
Coma is an inherent property of telescopes using parabolic mirrors. Light from a point source
(such as a star) in the center of the field is perfectly focused at the focal point of the mirror
(unlike a spherical mirror, where light from the outer part of the mirror focuses closer to the mirror than light from the center, an effect known as spherical aberration). However, when the light source is off-center (off-axis), the different parts of the mirror do not reflect the light to the same point. As a result, a point of light that is not in the center of the field appears wedge-shaped. The further off-axis the source is, the worse this effect becomes. This causes stars to appear to have a cometary coma, hence the
name.
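As a rough, commonly quoted approximation for paraboloidal mirrors (stated here as an illustration, not taken from the text above), the angular length of the comatic blur grows linearly with the off-axis angle \(\theta\) and shrinks with the square of the focal ratio F:

\[
\text{coma} \;\approx\; \frac{3\,\theta}{16\,F^{2}},
\]

which is why fast (low focal ratio) paraboloids show the most pronounced coma near the edge of the field.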
Schemes to reduce spherical aberration without introducing coma include Schmidt, Maksutov,
ACF and Ritchey-Chrétien optical systems. Correction lenses for Newtonian reflectors have been
designed which reduce coma in telescopes below f/6. These work by means of a dual lens system
of a plano-convex and a plano-concave lens fitted into an eyepiece adaptor which superficially
resembles a Barlow lens.[1][2]
Coma of a single lens or a system of lenses can be minimized (and in some cases eliminated) by
choosing the curvature of the lens surfaces to match the application. Lenses in which both
spherical aberration and coma are minimized at a single wavelength are called bestform, aplanatic, or aspheric lenses.
Vertical coma is the most common higher-order aberration in the eyes of patients with
keratoconus.[3]
Chromatic aberration
In optics, chromatic aberration (CA, also called achromatism or chromatic distortion) is a
type of distortion in which there is a failure of a lens to focus all colors to the same convergence
point. It occurs because lenses have a different refractive index for different wavelengths of light
(the dispersion of the lens). The refractive index of most transparent materials decreases with increasing wavelength.
Chromatic aberration manifests itself as "fringes" of color along boundaries that separate dark
and bright parts of the image, because each color in the optical spectrum cannot be focused at a
single common point. Since the focal length f of a lens is dependent on the refractive index n,
different wavelengths of light will be focused at different positions.
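For a thin lens in air, the lensmaker's equation makes this dependence explicit (R1 and R2 are the radii of curvature of the two lens surfaces):

\[
\frac{1}{f(\lambda)} \;=\; \bigl(n(\lambda) - 1\bigr)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
\]

Since n(\(\lambda\)) is larger for blue light than for red light in ordinary glasses, blue light is brought to a focus closer to the lens than red light.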
Types of chromatic aberration
Chromatic aberration can be both axial (longitudinal), in that different wavelengths are focused
at a different distance from the lens, different points on the optical axis (focus shift); and
transverse (lateral), in that different wavelengths are focused at different positions in the focal
plane (because the magnification and/or distortion of the lens also varies with wavelength;
indicated in graphs as a (change in) focal length). The acronym LCA is used but is ambiguous, and
may refer to either longitudinal or lateral CA; for clarity, this article uses "axial" (shift in the
direction of the optical axis) and "transverse" (shift perpendicular to the optical axis, in the plane
of the sensor or film).
These two types have different characteristics, and may occur together. Axial CA occurs
throughout the image, and is reduced by stopping down (this increases depth of field, so though
the different wavelengths focus at different distances, they are still in acceptable focus).
Transverse CA does not occur in the center, and increases towards the edge, but is not affected
by stopping down.
In digital sensors, axial CA results in the red and blue planes being defocused (assuming that the
green plane is in focus), which is relatively difficult to remedy in post-processing, while
transverse CA results in the red, green, and blue planes being at different magnifications
(magnification changing along radii, as in geometric distortion), and can be corrected by scaling
the planes appropriately so they line up.
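A minimal sketch of this correction, assuming an RGB image held in a NumPy array; the function name and the per-channel scale factors are illustrative assumptions (a real raw converter estimates the scales from the image or from a lens profile):

```python
import numpy as np

def correct_transverse_ca(image, red_scale=1.002, blue_scale=0.998):
    """Rescale the red and blue planes about the image centre so that
    they line up with the green plane (simple radial magnification model)."""
    h, w, _ = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)

    def resample(plane, scale):
        # For each output pixel, look up the source pixel at coordinates
        # scaled about the centre (nearest-neighbour, for brevity).
        ys = np.clip((yy - cy) / scale + cy, 0, h - 1).round().astype(int)
        xs = np.clip((xx - cx) / scale + cx, 0, w - 1).round().astype(int)
        return plane[ys, xs]

    out = image.copy()
    out[..., 0] = resample(image[..., 0], red_scale)   # red plane
    out[..., 2] = resample(image[..., 2], blue_scale)  # blue plane
    return out
```

Axial CA cannot be removed this way, since it defocuses a plane rather than changing its magnification.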
Even if the image is sharp, it may be distorted compared to ideal pinhole projection. In pinhole
projection, the magnification of an object is inversely proportional to its distance to the camera
along the optical axis so that a camera pointing directly at a flat surface reproduces that flat
surface. Distortion can be thought of as stretching the image non-uniformly, or, equivalently, as a
variation in magnification across the field. While "distortion" can include arbitrary deformation
of an image, the most pronounced mode of distortion produced by conventional imaging optics is "barrel distortion", in which the center of the image is magnified more than the perimeter
(figure 7a). The reverse, in which the perimeter is magnified more than the center, is known as
"pincushion distortion" (figure 7b). This effect is called lens distortion or image distortion, and
there are algorithms to correct it.
Systems free of distortion are called orthoscopic (orthos, right, skopein to look) or rectilinear
(straight lines).
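A minimal sketch of the usual radial model behind such correction algorithms; the coefficient names k1 and k2 and the sign convention are assumptions for illustration (real tools fit these per lens):

```python
def radial_distortion(x, y, k1, k2=0.0):
    """Map ideal (pinhole) normalised image coordinates to distorted ones.
    In this convention k1 < 0 compresses the periphery (barrel distortion)
    and k1 > 0 stretches it (pincushion distortion)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

To correct an image, each pixel of the ideal (rectilinear) output grid is filled by resampling the captured image at the coordinates returned by radial_distortion.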
Figure 8
This aberration is quite distinct from that of the sharpness of reproduction; in unsharp reproduction, the question of distortion arises if only parts of the object can be recognized in the
figure. If, in an unsharp image, a patch of light corresponds to an object point, the center of
gravity of the patch may be regarded as the image point, this being the point where the plane
receiving the image, e.g., a focusing screen, intersects the ray passing through the middle of the
stop. This assumption is justified if a poor image on the focusing screen remains stationary when
the aperture is diminished; in practice, this generally occurs. This ray, named by Abbe a
principal ray (not to be confused with the principal rays of the Gaussian theory), passes through
the center of the entrance pupil before the first refraction, and the center of the exit pupil after the
last refraction. From this it follows that correctness of drawing depends solely upon the principal
rays; and is independent of the sharpness or curvature of the image field. Referring to fig. 8, we
have O'Q'/OQ = a' tan w'/a tan w = 1/N, where N is the scale or magnification of the image. For
N to be constant for all values of w, a' tan w'/a tan w must also be constant. If the ratio a'/a be
sufficiently constant, as is often the case, the above relation reduces to the condition of Airy, i.e.
tan w'/tan w = a constant. This simple relation (see Camb. Phil. Trans., 1830, 3, p. 1) is fulfilled
in all systems which are symmetrical with respect to their diaphragm (briefly named symmetrical
or holosymmetrical objectives), or which consist of two like, but different-sized, components,
placed from the diaphragm in the ratio of their size, and presenting the same curvature to it
(hemisymmetrical objectives); in these systems tan w' / tan w = 1.
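In the notation above, the condition for distortion-free (orthoscopic) imaging can be collected into a single statement:

\[
\frac{O'Q'}{OQ} \;=\; \frac{a' \tan w'}{a \tan w} \;=\; \frac{1}{N} \;=\; \text{constant across the field},
\qquad\text{which, for constant } a'/a,\ \text{reduces to } \frac{\tan w'}{\tan w} = \text{constant}.
\]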
The constancy of a'/a necessary for this relation to hold was pointed out by R. H. Bow (Brit.
Journ. Photog., 1861), and Thomas Sutton (Photographic Notes, 1862); it has been treated by O.
Lummer and by M. von Rohr (Zeit. f. Instrumentenk., 1897, 17, and 1898, 18, p. 4). It requires
the middle of the aperture stop to be reproduced in the centers of the entrance and exit pupils
without spherical aberration. M. von Rohr showed that for systems fulfilling neither the Airy nor
the Bow-Sutton condition, the ratio a' cos w'/a tan w will be constant for one distance of the
object. This combined condition is exactly fulfilled by holosymmetrical objectives reproducing
with the scale 1, and by hemisymmetrical, if the scale of reproduction be equal to the ratio of the
sizes of the two components.
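Optical power meter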
A typical optical power meter consists of a calibrated sensor, measuring amplifier and display.
The sensor primarily consists of a photodiode selected for the appropriate range of wavelengths
and power levels. The display unit shows the measured optical power and the set wavelength. Power meters are calibrated using a traceable calibration standard such as a NIST
standard.
A traditional optical power meter responds to a broad spectrum of light; however, the calibration is wavelength dependent. This is not normally an issue, since the test wavelength is usually known, but it has a couple of drawbacks: firstly, the user must set the meter to the correct test wavelength, and secondly, if other spurious wavelengths are present, wrong readings will result.
Sometimes optical power meters are combined with a different test function such as an Optical
Light Source (OLS) or Visual Fault Locator (VFL), or may be a sub-system in a much larger
instrument. When combined with a light source, the instrument is usually called an Optical Loss
Test Set.
Sensors
The major semiconductor sensor types are Silicon (Si), Germanium (Ge) and Indium Gallium
Arsenide (InGaAs). Additionally, these may be used with attenuating elements for high optical
power testing, or wavelength selective elements so they only respond to particular wavelengths.
These all operate in a similar type of circuit; however, in addition to their basic wavelength response characteristics, each one has some other particular characteristics:
Si detectors tend to saturate at relatively low power levels, and they are only useful in the
visible and 850 nm bands.
Ge detectors saturate at the highest power levels, but have poor low power performance, poor
general linearity over the entire power range, and are generally temperature sensitive. They are
only marginally accurate for "1550 nm" testing, due to a combination of temperature and
wavelength affecting responsivity at e.g. 1580 nm; however, they provide useful performance over the commonly used 850 / 1300 / 1550 nm wavelength bands, so they are extensively
deployed where lower accuracy is acceptable. Other limitations include: non-linearity at low
power levels, and poor responsivity uniformity across the detector area.
InGaAs detectors saturate at intermediate levels. They offer generally good performance, but
are often very wavelength sensitive around 850 nm. So they are largely used for singlemode
fiber testing at 1270 - 1650 nm.
An important part of an optical power meter sensor is the fiber optic connector interface. Careful
optical design is required to avoid significant accuracy problems when used with the wide
variety of fiber types and connectors typically encountered.
Another important component is the sensor input amplifier. This needs very careful design to
avoid significant performance degradation over a wide range of conditions.
A typical OPM measures accurately under most conditions from about 0 dBm (1 milliwatt) to about -50 dBm (10 nanowatts), although the display range may be larger. Above 0 dBm is considered "high power", and specially adapted units may measure up to nearly +30 dBm (1 watt). Below -50 dBm is "low power", and specially adapted units may measure as low as -110
dBm. Irrespective of power meter specifications, testing below about -50 dBm tends to be
sensitive to stray ambient light leaking into fibers or connectors. So when testing at "low power",
some sort of test range / linearity verification (easily done with attenuators) is advisable. At low
power levels, optical signal measurements tend to become noisy, so meters may become very
slow due to use of a significant amount of signal averaging.
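For reference (a standard definition, not stated in the text above), dBm is a logarithmic power scale referenced to 1 mW:

\[
P_{\text{dBm}} = 10 \log_{10}\!\left(\frac{P}{1\,\text{mW}}\right),
\qquad\text{so } 0\,\text{dBm} = 1\,\text{mW},\quad -50\,\text{dBm} = 10\,\text{nW},\quad +30\,\text{dBm} = 1\,\text{W}.
\]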
Measuring the absolute power in a fiber optic signal. For this application, the power meter
needs to be properly calibrated at the wavelength being tested, and set to this wavelength.
Measuring the optical loss in a fiber, in combination with a suitable stable light source. Since this
is a relative test, accurate calibration is not a particular requirement, unless two or more meters
are being used due to distance issues. If a more complex two-way loss test is performed, then
power meter calibration can be ignored, even when two meters are used.
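In such a loss test the measured loss is simply the difference between the reference (source) level and the received level, both expressed in dBm; the figures below are illustrative only:

\[
\text{Loss (dB)} = P_{\text{reference}}(\text{dBm}) - P_{\text{received}}(\text{dBm}),
\qquad\text{e.g. } -3\,\text{dBm} - (-7.5\,\text{dBm}) = 4.5\,\text{dB}.
\]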
Some instruments are equipped for optical test tone detection, to assist in quick cable
continuity testing. Standard test tones are usually 270 Hz, 1 kHz, and 2 kHz. Some units can also determine one of 12 tones,[1] for ribbon fiber continuity testing.
Spectrum analyzer
A spectrum analyzer or spectral analyzer is a device used to examine the spectral composition
of some electrical, acoustic, or optical waveform. It may also measure the power spectrum.
Types
An analog spectrum analyzer uses either a variable band-pass filter whose mid-frequency is
automatically tuned (shifted, swept) through the range of frequencies of which the spectrum is
to be measured or a superheterodyne receiver where the local oscillator is swept through a
range of frequencies.
A digital spectrum analyzer computes the discrete Fourier transform (DFT), a mathematical
process that transforms a waveform into the components of its frequency spectrum.
Some spectrum analyzers (such as "real-time spectrum analyzers") use a hybrid technique where
the incoming signal is first down-converted to a lower frequency using superheterodyne
techniques and then analyzed using fast Fourier transform (FFT) techniques.
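A minimal sketch of the digital (DFT-based) approach, assuming NumPy and a synthetic test waveform (the sampling rate and tone frequencies are illustrative assumptions):

```python
import numpy as np

fs = 1000.0                         # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)     # 1 s of samples -> 1 Hz resolution
# Test waveform: a 50 Hz tone plus a weaker 120 Hz tone
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(x)                    # DFT of the real-valued signal
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # frequency axis, 0 .. fs/2
magnitude_db = 20 * np.log10(np.abs(spectrum) / len(x) + 1e-12)

print("strongest component near", freqs[np.argmax(magnitude_db)], "Hz")
```

A swept analog analyzer produces a similar magnitude-versus-frequency display, but by tuning a filter or local oscillator rather than by computing a transform.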
Typical functionality
Frequency range
Two key parameters for spectrum analysis are frequency and span. The frequency specifies the
center of the display. Span specifies the range between the start and stop frequencies, the
bandwidth of the analysis. Sometimes it is possible to specify the start and stop frequency rather
than center and range.
Marker/peak search
This control sets the position and function of markers and indicates the measured power value. Several spectrum
analyzers have a "Marker Delta" function that can be used to measure Signal to Noise Ratio or
Bandwidth.
Bandwidth/average
This setting controls the resolution bandwidth filter. The spectrum analyzer captures the measurement by sweeping a filter of small bandwidth across the window of frequencies being analyzed.
Amplitude
The maximum value of a signal at a point is called amplitude. A spectrum analyzer that
implements amplitude analysis is called a pulse height analyzer.
View/trace
Manages the parameters of the measurement display. It can store the maximum value seen at each frequency, and a stored trace for comparison.
Operation
Figure: a real-time spectrum analysis of a song, with frequency on the X (horizontal) axis and magnitude on the Y (vertical) axis, advancing in time with the song.
Figure: frequency spectrum of the warm-up period of a switching power supply (spread spectrum), including a waterfall diagram over a few minutes.
Usually, a spectrum analyzer displays a power spectrum over a given frequency range, changing
the display as the properties of the signal change. There is a trade-off between how quickly the
display can be updated and the frequency resolution, which is for example relevant for
distinguishing frequency components that are close together. With a digital spectrum analyzer,
the frequency resolution is Δν = 1 / T, the inverse of the time T over which the waveform is
measured and Fourier transformed (a consequence of the uncertainty principle). With an analog spectrum
analyzer, it is dependent on the bandwidth setting of the bandpass filter. However, an analog
spectrum analyzer will not produce meaningful results if the filter bandwidth (in Hz) is smaller
than the square root of the sweep speed (in Hz/s), which means that an analog spectrum
analyzer can never beat a digital one in terms of frequency resolution for a given acquisition
time. Choosing a wider bandpass filter will improve the signal-to-noise ratio at the expense of a
decreased frequency resolution.
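For example, capturing T = 0.1 s of the waveform gives a frequency resolution of Δν = 1 / 0.1 s = 10 Hz, so resolving two components spaced 1 Hz apart requires a capture of at least 1 s.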
With Fourier transform analysis in a digital spectrum analyzer, it is necessary to sample the input
signal with a sampling frequency νs that is at least twice the bandwidth of the signal, due to the
Nyquist limit. A Fourier transform will then produce a spectrum containing all frequencies from
zero to νs / 2. This can place considerable demands on the required analog-to-digital converter
and processing power for the Fourier transform. Often, one is only interested in a narrow
frequency range, for example between 88 and 108 MHz, which would require at least a sampling
frequency of 216 MHz, not counting the low-pass anti-aliasing filter. In such cases, it can be
more economic to first use a superheterodyne receiver to transform the signal to a lower range,
such as 8 to 28 MHz, and then sample the signal at 56 MHz. This is how an analog-digital-hybrid
spectrum analyzer works.
For use with very weak signals, a pre-amplifier can be used, although harmonic and
intermodulation distortion may lead to the creation of new frequency components that were not
present in the original signal. A newer method, which does not use a high local oscillator (LO) (an oscillator that usually produces a high-frequency signal close to the signal being measured), is used in the latest analyzer generation, such as Aaronia's Spectran series. The advantage of this new method is a very low noise floor near the physical thermal noise limit of -174 dBm (in a 1 Hz bandwidth).
Optical attenuator
An optical attenuator is a device used to reduce the power level of an optical signal, either in
free space or in an optical fiber. The basic types of optical attenuators are fixed, step-wise
variable, and continuously variable.
Attenuators are commonly used in fiber optic communications, either to test power level margins
by temporarily adding a calibrated amount of signal loss, or installed permanently to properly
match transmitter and receiver levels.
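The calibrated loss of an attenuator is normally expressed in decibels (a standard convention, not stated explicitly above):

\[
A\,(\text{dB}) = 10 \log_{10}\!\left(\frac{P_{\text{in}}}{P_{\text{out}}}\right),
\qquad\text{so a 10 dB attenuator passes one tenth of the input power.}
\]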
Sharp bends stress optic fibers and can cause losses. If a received signal is too strong, a temporary
fix is to wrap the cable around a pencil until the desired level of attenuation is achieved.[1]
However, such arrangements are unreliable, since the stressed fiber tends to break over time.
Fixed fiber-optic attenuators
Fixed optical attenuators used in fiber optic systems may use a variety of principles for their
functioning. Preferred attenuators use either doped fibers, or mis-aligned splices, since both of
these are reliable and inexpensive. Inline style attenuators are incorporated into patch cables. The
alternative build-out style attenuator is a small male-female adapter that can be added onto other
cables.[2]
Non-preferred attenuators often use gap loss or reflective principles. Such devices can be sensitive to modal distribution, wavelength, contamination, vibration, temperature, and damage due to power bursts, and may cause back reflections and signal dispersion.
A manual device is useful for one-time setup of a system, and is a near-equivalent to a fixed
attenuator, and may be referred to as an "adjustable attenuator". In contrast, an electrically
controlled attenuator can provide adaptive power optimization.
Attributes of merit for electrically controlled devices include speed of response and avoiding
degradation of the transmitted signal. Dynamic range is usually quite restricted, and power
feedback may mean that long term stability is a relatively minor issue. Speed of response is a
particularly major issue in dynamically reconfigurable systems, where a delay of one millionth of
a second can result in the loss of large amounts of transmitted data. Typical technologies
employed for high speed response include LCD or lithium niobate devices.
There is a class of built-in attenuators that is technically indistinguishable from test attenuators,
except they are packaged for rack mounting, and have no test display.
Variable Fiber Optic Test Attenuators generally use a variable neutral density filter. Despite
relatively high cost, this arrangement has the advantages of being stable, wavelength insensitive,
mode insensitive, and offering a large dynamic range. Other schemes, such as LCD and variable air gap devices, have been tried over the years, but with limited success.
They may be either manually or motor controlled. Motor control gives regular users a distinct
productivity advantage, since commonly used test sequences can be run automatically.
Attenuator instrument calibration is a major issue. The user typically would like an absolute port
to port calibration. Also, calibration should usually be at a number of wavelengths and power
levels, since the device is not always linear. However, a number of instruments do not in fact
offer these basic features, presumably in an attempt to reduce cost. The most accurate variable
attenuator instruments have thousands of calibration points, resulting in excellent overall
accuracy in use.