DualPath Explained
Why one A/D converter per channel is not (always) sufficient
In conventional power analyzers, a signal first undergoes analog conditioning. The output is then optionally fed through an anti-aliasing filter (AAF) before it is digitized by an A/D converter and awaits further processing. The decision for or against the AAF has to be taken before sampling, as this is when aliasing occurs; it cannot be undone later on. RMS values can be determined without risk of aliasing due to their statistical nature, but all other measurements need to be handled with care.

Due to the limitation to a single A/D converter, there are inherently some downsides to be factored in with conventional devices. If the goal is to measure RMS power both over the entire bandwidth and at the fundamental frequency, unfiltered and filtered measurements could be alternated – in theory. In practice, it is extremely difficult to reproduce exactly the same operating point twice. Unless this can be guaranteed, all comparisons between results are void due to lack of repeatability. Besides, this procedure is extremely time-consuming, and if one variant is skipped in order to save time, the results are inevitably error-prone. If the filter remains activated to avoid aliasing with the FFT, bandwidth is sacrificed when measuring RMS values. Switching off the AAF voids the FFT; if it is carried out nevertheless, the quality of the results is questionable. An aliasing error of 50%, for instance, is easily detected, but a deviation of 0.5% could go unnoticed.
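Two of the statements above can be illustrated numerically: aliasing does not affect RMS values, because it merely relocates signal energy in frequency without changing its total amount. The following Python sketch is purely illustrative (the 2 MHz test tone, the 100 MS/s reference rate and the 2.125 MS/s undersampling rate are assumptions, not properties of any particular instrument); it compares the RMS of a sine computed from densely sampled data with the RMS computed from heavily undersampled data of the same sine.

    import numpy as np

    A = 1.0            # amplitude of the test tone (assumed)
    f_sig = 2.0e6      # 2 MHz test tone (assumed)
    f_fast = 100.0e6   # "dense" sampling rate, far above the Nyquist limit
    f_slow = 2.125e6   # undersampling rate, below 2 * f_sig
    N_fast = 800_000   # 8 ms of data at 100 MS/s   -> whole number of cycles
    N_slow = 17_000    # 8 ms of data at 2.125 MS/s -> whole number of cycles

    x_fast = A * np.cos(2 * np.pi * f_sig * np.arange(N_fast) / f_fast)
    x_slow = A * np.cos(2 * np.pi * f_sig * np.arange(N_slow) / f_slow)  # aliased

    rms_fast = np.sqrt(np.mean(x_fast ** 2))
    rms_slow = np.sqrt(np.mean(x_slow ** 2))

    # All three values agree: aliasing shifts energy to the wrong frequency,
    # but the total power and hence the RMS remain intact.
    print(rms_fast, rms_slow, A / np.sqrt(2))

Any spectral quantity derived from the undersampled record, by contrast, would place this energy at the wrong frequency, which is exactly the scenario discussed below.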
2. The instruments of manufacturer Y sample with all filters removed and allow you to apply digital filtering later on in order to isolate narrowband values – does this not achieve the same results as DualPath? No, it merely shows that the Nyquist-Shannon theorem and the subject of aliasing have not been properly grasped. Frequency components for which the sampling rate is insufficient according to the theorem need to be removed before sampling, as they can no longer be identified afterwards. A signal located 50 Hz below the sampling rate, for example, would show up at 50 Hz after sampling. This alias, created by undersampling, is indistinguishable from genuine signal content at 50 Hz, since part of the original information has been wiped out by the insufficient sampling rate.
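This indistinguishability can be verified directly: for a cosine, the samples of a tone located 50 Hz below the sampling rate are numerically identical to the samples of a genuine 50 Hz tone. The Python sketch below is a minimal illustration, assuming an arbitrary sampling rate of 10 kHz and a record length of 1000 samples.

    import numpy as np

    fs = 10_000.0        # assumed sampling rate in Hz
    n = np.arange(1000)  # sample index

    f_high = fs - 50.0   # tone 50 Hz below the sampling rate
    f_low = 50.0         # genuine 50 Hz tone

    x_high = np.cos(2 * np.pi * f_high * n / fs)  # undersampled tone
    x_low = np.cos(2 * np.pi * f_low * n / fs)    # in-band tone

    # cos(2*pi*(fs - 50)*n/fs) = cos(2*pi*n - 2*pi*50*n/fs) = cos(2*pi*50*n/fs),
    # so both records coincide sample for sample; no digital filter applied
    # afterwards can tell them apart.
    print(np.allclose(x_high, x_low))  # True

Once both contributions end up in the same record, only their sum is available, which is why the aliased share cannot be removed after the fact.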
The green line represents a high-frequency (2 MHz) signal before sampling; the red dots show the result of undersampling – here with a rate (2.125 MHz) only slightly higher than the signal frequency. The sampling process, too slow according to the Nyquist-Shannon theorem, creates a “phantom signal” at a frequency of 2.125 MHz – 2 MHz = 125 kHz, which is merely 1/16 of the original frequency and can no longer be distinguished from genuine signal content around 125 kHz. In that range, harmonics of a frequency converter’s switching frequency might appear, for example, and these would in turn be distorted by the “phantom signal”. The knowledge that part of the power measured around 125 kHz stems from the undersampled 2 MHz signal is irretrievably lost. The above values are merely examples; the erroneous frequency components might just as well appear near the fundamental of the unit under test.
Trying to identify these components after the fact and to remove them would be as futile as striving to identify blue objects in a color photograph reduced to black and white – the information “color” is irretrievably lost, and the remaining information “brightness” is by no means equivalent.
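As a numerical cross-check of the figure’s values (a standalone Python sketch, not tied to any particular instrument), the snippet below samples a 2 MHz cosine at 2.125 MS/s: its spectrum peaks at 125 kHz, and the sampled record is identical to that of a genuine 125 kHz tone.

    import numpy as np

    fs = 2.125e6    # sampling rate from the example above
    f_sig = 2.0e6   # 2 MHz signal, well above the Nyquist limit fs/2
    N = 17_000      # 8 ms window -> 125 Hz frequency resolution

    n = np.arange(N)
    x = np.cos(2 * np.pi * f_sig * n / fs)        # undersampled 2 MHz tone
    x_alias = np.cos(2 * np.pi * 125e3 * n / fs)  # genuine 125 kHz tone

    spectrum = np.abs(np.fft.rfft(x)) / N
    freqs = np.fft.rfftfreq(N, d=1 / fs)

    print(freqs[np.argmax(spectrum)])  # 125000.0 Hz -> the "phantom signal"
    print(np.allclose(x, x_alias))     # True: the two records are identical

If genuine content around 125 kHz were present as well, the two contributions would simply add up in the sampled data, which is why the aliased share can no longer be separated afterwards.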