
CHAPTER 3 – BIOMEDICAL SIGNAL PROCESSING

1. Power Spectrum of ECG:

The power spectrum of the ECG signal can provide useful information about the QRS complex.
This section reiterates the notion of the power spectrum presented earlier, but also gives an
interpretation of the power spectrum of the QRS complex. The power spectrum (based on the
FFT) of a set of 512 sample points that contain approximately two heartbeats results in a series
of coefficients with a maximal value near the frequency corresponding to the heart rate.

    The heart rate can be determined by multiplying the normalized frequency of this peak by the
sampling frequency. We can also obtain useful information about the frequency spectrum of the
QRS complex. In order to obtain this information, the QRS complex of the ECG signal must be
selected as a template and zero-padded prior to the power spectrum analysis. The peak of the
resulting frequency spectrum corresponds to the peak energy of the QRS complex.
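To make the arithmetic concrete, the following is a minimal sketch (not from the text) of this heart-rate estimate: it computes an FFT-based power spectrum of 512 ECG samples and converts the peak bin to beats per minute. The sampling frequency and the input file name are assumptions.

import numpy as np

fs = 200.0                         # assumed sampling frequency (Hz)
ecg = np.loadtxt("ecg.txt")[:512]  # hypothetical file of raw ECG samples

spectrum = np.abs(np.fft.rfft(ecg)) ** 2       # FFT-based power spectrum
freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)  # bin frequencies in Hz

peak = np.argmax(spectrum[1:]) + 1             # skip the DC bin
heart_rate_bpm = freqs[peak] * 60.0            # normalized freq x fs, in beats/min
print(f"Estimated heart rate: {heart_rate_bpm:.0f} beats/min")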

    The ECG waveform contains, in addition to the QRS complex, P and T waves, 60-Hz noise from
powerline interference, EMG from muscles, motion artifact from the electrode-skin interface, and
possibly other interference from electrosurgery equipment in the operating room.

Many clinical instruments, such as a cardiotachometer and an arrhythmia monitor, require
accurate real-time QRS detection. It is necessary to extract the signal of interest, the QRS
complex, from the other noise sources such as the P and T waves. Figure 12.1 summarizes the
relative power spectra of the ECG, QRS complexes, P and T waves, motion artifact, and muscle
noise based on our previous research.
Figure 12.1 Relative power spectra of the QRS complex, P and T waves, muscle noise, and motion
artifacts, based on an average of 150 beats.

2. Explain the differentiation technique:

  Differentiation forms the basis of many QRS detection algorithms. Since it is basically a high-
pass filter, the derivative amplifies the higher frequencies characteristic of the QRS complex
while attenuating the lower frequencies of the P and T waves.

      An algorithm based on first and second derivatives, originally developed earlier, was modified
for use in high-speed analysis of recorded ECGs by Ahlstrom and Tompkins (1983). Friesen et al.
(1990) subsequently implemented the algorithm as part of a study to compare noise sensitivity
among certain types of QRS detection algorithms. The figure below shows the signal processing
steps of this algorithm.
                                            

Figure: Various signal stages in the QRS detection algorithm based on differentiation.
(a) Original ECG. (b) Smoothed and rectified first derivative. (c) Smoothed and rectified second
derivative. (d) Smoothed sum of (b) and (c). (e) Square pulse output for each QRS complex.

The absolute values of the first and second derivatives are calculated from the ECG signal:

                              y0(nT) = | x(nT) – x(nT – 2T) | (12.5)

                            y1(nT) = | x(nT) – 2x(nT – 2T) + x(nT – 4T) | (12.6)
   These two data buffers, y0(nT) and y1(nT), are scaled and then summed

                            y2(nT) = 1.3y0(nT) + 1.1y1(nT) (12.7)

The data buffer y2(nT) is now scanned until a certain threshold is met or exceeded

                              y2(iT) ≥ 1.0 (12.8)

Once this condition is met for a data point in y2(iT), the next eight points are compared to the
threshold. If six or more of these eight points meet or exceed the threshold, then the segment
might be part of a QRS complex. In addition to detecting the QRS complex, this algorithm has
the advantage that it produces a pulse whose width is proportional to that of the complex. However, a
disadvantage is that it is particularly sensitive to higher-frequency noise.
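The following is a minimal sketch of Eqs. (12.5)-(12.8), assuming x is a suitably normalized ECG sample array; the smoothing stages shown in the figure are omitted for brevity, and the threshold of 1.0 presumes appropriately scaled data.

import numpy as np

def derivative_qrs_detect(x, threshold=1.0):
    n = len(x)
    y0 = np.zeros(n)
    y1 = np.zeros(n)
    for i in range(4, n):
        y0[i] = abs(x[i] - x[i - 2])                 # Eq. (12.5)
        y1[i] = abs(x[i] - 2 * x[i - 2] + x[i - 4])  # Eq. (12.6)
    y2 = 1.3 * y0 + 1.1 * y1                         # Eq. (12.7)

    onsets = []
    i = 0
    while i < n - 8:
        if y2[i] >= threshold:                       # Eq. (12.8)
            nxt = y2[i + 1:i + 9]                    # the next eight points
            if np.sum(nxt >= threshold) >= 6:        # six of eight qualify
                onsets.append(i)                     # candidate QRS segment
                i += 8                               # skip past this event
        i += 1
    return onsets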

3. Explain template crosscorrelation.

Template crosscorrelation:

  Signals are said to be correlated if the shapes of the waveforms of the two signals match one
another. The correlation coefficient is a value that quantifies the degree of match between the
shapes of two or more signals. This QRS detection technique uses crosscorrelation.

       This technique of correlating one signal with another requires that the two signals be aligned
with one another. In this QRS detection technique, the template of the signal that we are trying to
match stores a digitized form of the signal shape that we wish to detect. Since the template has to
be correlated with the incoming signal, the signal should be aligned with the template. Dobbs et
al. describe two ways of implementing this.

      The first way of aligning the template and the incoming signal is by using the fiducial points
on each signal. These fiducial points have to be assigned to the signal by some external process.
If the fiducial points on the template and the signal are aligned, then the correlation can be
performed.

      Another implementation involves continuous correlation between a segment of the incoming
signal and the template. Whenever a new signal data point arrives, the oldest data point in time is
discarded from the segment (a first-in-first-out data structure). A correlation is performed
between this signal segment and the template segment that has the same number of signal points.
This technique does not require processing time to assign fiducial points to the signal. The
template can be thought of as a window that moves over the incoming signal one data point at a
time. Thus, alignment with the segment of the signal of interest must occur at least once as
the window moves through the signal.

       The value of the crosscorrelation coefficient always falls between +1 and –1. A value of +1
indicates that the signal and the template match exactly. As mentioned earlier, the value of this
coefficient determines how well the shapes of the two waveforms under consideration match;
the magnitude of the actual signal samples does not matter. This shape-matching process for
recognizing QRS complexes conforms with our natural approach to recognizing signals.
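A minimal sketch of this continuous correlation, assuming template holds a digitized QRS shape: the first-in-first-out window slides over the signal one point at a time, and a correlation coefficient near +1 flags a match regardless of amplitude. The threshold value is an assumption.

from collections import deque
import numpy as np

def correlation_detector(signal, template, threshold=0.95):
    template = np.asarray(template, dtype=float)
    m = len(template)
    segment = deque(maxlen=m)        # first-in-first-out signal window
    detections = []
    for n, sample in enumerate(signal):
        segment.append(sample)       # oldest point drops out automatically
        if len(segment) < m:
            continue
        r = np.corrcoef(np.asarray(segment), template)[0, 1]
        if r >= threshold:           # shape match, independent of amplitude
            detections.append(n)     # sample index where the window ends
    return detections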

4. Explain automata-based template matching.

Automata-based template matching:

The algorithm uses some of the basic techniques that are common in many pattern recognition
systems. The ECG signal is first reduced into a set of predefined tokens, which represent certain
shapes of the ECG waveform.

                  The figure below shows the set of tokens that would represent a normal ECG. This
set of tokens is then input to a finite state automaton. The finite state automaton is
essentially a state-transition diagram that can be implemented with IF … THEN control
statements available in most programming languages. The sequence of tokens is fed into the
automaton. For example, a sequence of tokens such as zero, normup, normdown, and normup
would result in the automaton signaling a normal classification for the ECG.

                        

                                     Figure: Reduction of an ECG signal to tokens

            The sequence of tokens must be derived from the ECG signal data. This is done by
forming a sequence of the differences of the input data. The algorithm then groups together those
differences that have the same sign and that exceed a certain predetermined threshold level. It
then sums the differences in each of the groups and associates with each group this sum and the
number of differences in it.
          This QRS detector has an initial learning phase where the program approximately
determines the peak magnitude of a normal QRS complex. Then the algorithm detects a normal
QRS complex each time there is a deflection in the waveform with a magnitude greater than half
of the previously determined peak. The algorithm now teaches the finite state automaton the
sequence of tokens that make up a normal QRS complex. The number and sum values (discussed
in the preceding paragraph) for a normal QRS complex are now set to a certain range of their
respective values in the QRS complex detected.

    

          The algorithm can now assign a waveform token to each of the groups formed previously
based on the values of the number and the sum in each group of differences. For example, if a
particular group of differences has a sum and number value in the ranges (determined in the
learning phase) of a QRS upward or downward deflection, then a normup or normdown token is
generated for that group of differences. If the number and sum values do not fall in this range,
then a noiseup or noisedown token is generated. A zero token is generated if the sum for a
group of differences is zero. Thus, the algorithm reduces the ECG signal data into a sequence of
tokens, which can be fed to the finite state automaton for QRS detection.
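A minimal sketch of such an automaton in the IF … THEN style the text describes, assuming the token names zero, normup, normdown, noiseup, and noisedown; the state names and the reset rule are illustrative, not the book's exact machine.

def classify(tokens):
    state = "START"
    for tok in tokens:
        if state == "START" and tok == "zero":
            state = "BASELINE"              # isoelectric baseline seen
        elif state == "BASELINE" and tok == "normup":
            state = "R_UP"                  # rising edge of the R wave
        elif state == "R_UP" and tok == "normdown":
            state = "R_DOWN"                # falling edge past the R peak
        elif state == "R_DOWN" and tok == "normup":
            return "normal"                 # normal QRS token sequence
        elif tok in ("noiseup", "noisedown"):
            state = "START"                 # noise tokens reset the automaton
    return "unclassified"

print(classify(["zero", "normup", "normdown", "normup"]))   # -> normal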

5. Explain the QRS detection algorithm.

QRS Detection algorithm:

A real-time QRS detection algorithm developed by Pan and Tompkins (1985) was further
described by Hamilton and Tompkins (1986). It recognizes QRS complexes based on analyses of
the slope, amplitude, and width. Figure 12.9 shows the various filters involved in the analysis of
the ECG signal. In order to attenuate noise, the signal is passed through a bandpass filter
composed of cascaded high-pass and low-pass integer filters. Subsequent processes
are differentiation, squaring, and time averaging of the signal.
              

         We designed a bandpass filter from a special class of digital filters that require only integer
coefficients. This permits the microprocessor to do the signal processing using only integer
arithmetic, thereby permitting real-time processing speeds that would be difficult to achieve with
floating-point processing. Since it was not possible to directly design the desired bandpass filter
with this special approach, the design actually consists of cascaded low-pass and high-pass filter
sections. This filter isolates the predominant QRS energy centered at 10 Hz, attenuates the
low frequencies characteristic of P and T waves and baseline drift, and also attenuates the higher
frequencies associated with electromyographic noise and powerline interference.

        The next processing step is differentiation, a standard technique for finding the high slopes
that normally distinguish the QRS complexes from other ECG waves. To this point in the
algorithm, all the processes are accomplished by linear digital filters.

         Next is a nonlinear transformation that consists of point-by-point squaring of the signal
samples. This transformation serves to make all the data positive prior to subsequent integration,
and also accentuates the higher frequencies in the signal obtained from the differentiation
process. These higher frequencies are normally characteristic of the QRS complex.

        The squared waveform passes through a moving window integrator. This integrator sums
the area under the squared waveform over a 150-ms interval, advances one sample interval, and
integrates the new 150-ms window. We chose the window’s width to be long enough to include
the time duration of extended abnormal QRS complexes, but short enough so that it does not
overlap both a QRS complex and a T wave.

       Adaptive amplitude thresholds applied to the bandpass-filtered waveform and to the moving
integration waveform are based on continuously updated estimates of the peak signal level and
the peak noise. After preliminary detection by the adaptive thresholds, decision processes make
the final determination as to whether or not a detected event was a QRS complex. A measurement
algorithm calculates the QRS duration as each QRS complex is detected. Thus, two waveform
features are available for subsequent arrhythmia analysis: RR interval and QRS duration.
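A sketch of the processing chain, assuming a 200 Hz sampling rate. A generic Butterworth bandpass (5-15 Hz) stands in for the integer-coefficient filters described above, and the adaptive thresholding and decision stages are omitted; the function name is hypothetical.

import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_stages(ecg, fs=200):
    # Bandpass roughly centered on the QRS energy near 10 Hz (5-15 Hz here).
    b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    bandpassed = filtfilt(b, a, ecg)

    derivative = np.diff(bandpassed)      # emphasizes the steep QRS slopes
    squared = derivative ** 2             # all-positive; accentuates highs

    width = int(0.150 * fs)               # 150-ms moving-window integrator
    integrated = np.convolve(squared, np.ones(width) / width, mode="same")
    return bandpassed, squared, integrated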
6. Explain the analysis of the ST segment in the ECG.

ST-SEGMENT ANALYZER:

The ST segment represents the period of the ECG just after depolarization (the QRS complex)
and just before repolarization (the T wave). Changes in the ST segment of the ECG may indicate
a deficiency in the blood supply to the heart muscle. Thus, it is important to be able
to make measurements of the ST segment. This section describes a microprocessor-based device
for analyzing the ST segment.

          The figure below shows an ECG with several features marked. The analysis begins by
detecting the QRS waveform; any efficient technique can be used to do this. The R-wave peak is
then established by searching the interval from 60 ms before to 60 ms after the QRS detection
mark for the point of maximal value. The Q wave is the first inflection point prior to the
R wave. This inflection point is recognized by a change in the sign of the slope, a zero slope, or a
significant change in slope. The three-point difference derivative method is used to calculate the
slope. If the ECG signal is noisy, a low-pass digital filter is applied to smooth the data
before calculating the slope.

        The isoelectric line of the ECG must be located and measured. This is done by searching
between the P and Q waves for a 30-ms interval of near-zero slope. In order to determine the
QRS duration, the S point is located as the first inflection point after the R wave using the same
strategy as for the Q wave. Measurements of the QRS duration, R-peak magnitude relative to the
isoelectric line, and the RR interval are then obtained.

                            

Figure: ECG measurements made by the ST-segment analyzer, showing the relevant points of the
ECG: the J point, the T point, and the ST level.

        The J point is the first inflection point after the S point, or may be the S point itself in
certain ECG waveforms. The T-wave peak is first located as the maximal absolute value, relative
to the isoelectric line, between J + 80 ms and R + 400 ms. The onset of the T wave, defined as
the T point, is then found by looking for a 35-ms period on the R side of the T wave in which
the values lie within one sample unit of each other. The T point is among the most difficult
features to identify. If it is not detected, it is assumed to be at J + 120 ms.

        Having identified the various ECG features, ST-segment measurements are made using a
windowed search method. Two boundaries, J + 20 ms and the T point, define the window
limits. The point of maximal depression or elevation in the window is then identified. ST-
segment levels can be expressed as the absolute change relative to the isoelectric line.

       In addition to the ST-segment level, several other parameters are calculated. The ST slope is
defined as the amplitude difference between the ST-segment point and the T point divided by the
corresponding time interval. The ST area is calculated by summing all sample values between
the J and T points after subtracting the isoelectric-line value from each point. An ST index is
calculated as the sum of the ST-segment level and one-tenth of the ST slope.
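A sketch of these ST parameters, assuming the J-point index, T-point index, isoelectric level, and sampling rate have already been found by the feature detection described above; all argument names are hypothetical, and the T point is assumed to lie after the ST point.

import numpy as np

def st_parameters(ecg, j_idx, t_idx, iso_level, fs):
    # Window for the ST search opens 20 ms after the J point and ends at T.
    start = j_idx + int(0.020 * fs)
    window = ecg[start:t_idx + 1] - iso_level
    st_idx = start + int(np.argmax(np.abs(window)))
    st_level = ecg[st_idx] - iso_level     # maximal depression or elevation

    # ST slope: amplitude difference between the ST point and the T point
    # divided by the corresponding time interval.
    st_slope = ((ecg[t_idx] - iso_level) - st_level) / ((t_idx - st_idx) / fs)

    # ST area: isoelectric-corrected samples summed from J to T.
    st_area = float(np.sum(ecg[j_idx:t_idx + 1] - iso_level))

    st_index = st_level + 0.1 * st_slope   # level plus one-tenth of the slope
    return st_level, st_slope, st_area, st_index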

7. Explain a microprocessor-based arrhythmia monitor.

Arrhythmia monitor:

The figure below shows the block diagram of one of our prototype monitors. In addition to the
microprocessor, a portable arrhythmia monitor requires analog and digital support electronics.
Analog amplifiers do the front-end ECG amplification and signal conditioning. An analog-to-digital
(A/D) converter integrated circuit changes the analog ECG to the digital signal representation
needed by the microprocessor.

              

            ROM holds the program that directs the performance of all the functions of the
instrument, and RAM stores the captured ECG signal. Input/output (I/O) ports interface the
audio and visual displays and switch interactions in the device. A modem circuit provides for
communication with a remote computer so that captured ECG signals can be transmitted back to
a central site.

All these technologies are changing rapidly, so we must constantly consider improving the
hardware design. Newer designs permit devices with (1) fewer components, (2) greater
reliability, (3) smaller size and weight, (4) less power consumption, and (5) greater computational
power (thereby permitting more complex signal processing and interpretive algorithms). Thus,
there is a technological force driving us to continually improve the hardware design. In industry,
an engineering compromise must come into play at some point, and the design must be frozen to
produce a viable product. However, we are in a university environment, and so can afford to
indulge ourselves in continual iteration of the design.

8. Describe the original Prony's method.

Original Prony's method:

9. Explain the summarizing steps in the original Prony's method.

Prony's method can be summarized as follows:


10. Explain the analysis of heart valve sounds using Prony's method.

Phonocardiograms (PCG) recorded from subjects with heart valve implants have been widely used
for early detection of valve disorders. The power spectra of the PCG possess certain attributes
which in some cases may be indicative of valve disorders. Although these attributes or decision
parameters have been useful in diagnosis, there are no conclusive rules for early detection of
valve disorders.

              Several researchers have attempted to understand the mechanisms underlying the
generation of the heart sounds. The power spectral peak frequencies of the closing sounds of
bioprosthetic and natural mitral and aortic valves are representative of the vibrational frequencies
of the valve leaflets. Stiffening of the valve material causes a more pronounced high-frequency
spectrum.

             It has been stated that the first heart sound is primarily attributed to oscillations of the
ventricle during its isovolumic phase, and that heart size is inversely related to the power in the
spectrum above 150 Hz.

               A rapidly decelerating valve occluder can provide impulsive excitation for the
cardiovascular system, including the heart cavity, the major vessels, and other partitions of the
chest. These elements have their own resonating modes.

           The motion of the occluder is taken as the input and the phonocardiogram as the output
of the system. The output signal was considered to be a summation of decaying sinusoids, and
the characteristics of the phonocardiogram were considered the diagnostic parameters.

        The input of the model (the valve motion) and the output (the closing sounds of mitral
valve prostheses) were analyzed using Prony's method.

The PCG sound from patient mitral valves was recorded using a PCG amplifier and a
piezoelectric transducer. The mitral sounds occurring after the R wave were located using the
ECG. The valve sounds were digitized at a rate of 2 kHz.

              

Figure: Mitral valve closing sound signal. (……) Signal at the output of the PCG amplifier;
(———) corresponding signal at the input of the amplifier.

The figure shows a typical implanted mitral valve closing sound signal. The signal was assumed
to be composed of damped sinusoids. The order of the AR model was determined to be 6, which
corresponds to three decaying sinusoids.
                    

Figure: (a) Mitral valve closing sound. (b) The corresponding estimated signal and the residue
left after fitting three decaying sinusoids.

Initially the AR coefficients were estimated using the recursive least squares (RLS) algorithm.
After the AR coefficients were found, the parameters Ai, αi, ωi, and θi defined by Prony's
method were estimated by calculating the roots of the prediction polynomial.

                 The original mitral valve closing sounds and the estimated signal, with the residual
signal left after fitting three decaying sinusoids, are shown in the figure above.
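A sketch of Prony's method as applied here, fitting p/2 decaying sinusoids to a short record y sampled with interval T. Ordinary least squares stands in for the RLS estimator mentioned above, and the amplitude/phase step uses a Vandermonde solve; the function name is hypothetical.

import numpy as np

def prony(y, p, T):
    y = np.asarray(y, dtype=float)
    N = len(y)
    # 1) Linear prediction: y[n] = -(a1*y[n-1] + ... + ap*y[n-p]).
    A = np.column_stack([y[p - k - 1:N - k - 1] for k in range(p)])
    a = np.linalg.lstsq(A, -y[p:], rcond=None)[0]

    # 2) Roots of the prediction polynomial give damping and frequency.
    z = np.roots(np.concatenate(([1.0], a)))
    alpha = np.log(np.abs(z)) / T          # damping factors alpha_i
    omega = np.angle(z) / T                # angular frequencies w_i

    # 3) Vandermonde least squares gives complex amplitudes h_i = A_i e^{j theta_i}.
    V = np.vander(z, N, increasing=True).T
    h = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
    return np.abs(h), alpha, omega, np.angle(h)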
6. What is the TP algorithm?

The turning point (TP) algorithm was originally developed to reduce the sampling frequency of
an ECG signal from 200 to 100 Hz.

7. State the need for data compression.

The need for data compression arises from two basic requirements, namely:

1. To meet an operational specification under an existing system performance constraint, such as
a limited bandwidth or interval of transmission.

2. To save costs in the operation of the system.

8. What is the equation for the volume of signal space?

All the forms of signal space, i.e., volume, time, and bandwidth, are interrelated as shown by the
equation

     Volume = f(time × bandwidth)

where f stands for "function of". Any reduction in the volume of signal space can be related to a
reduction in time or bandwidth.

9. How are the direct data compression methods classified?

It is instructive to classify the direct data compression methods under three different heads,
namely:

1. Tolerance-comparison compression
2. Differential pulse code modulation
3. Entropy coding methods

10. What is signal averaging?

Signal averaging:
Signal averaging is a signal processing technique applied in the time domain, intended to
increase the strength of a signal relative to noise that is obscuring it. By averaging a set
of replicate measurements, the signal-to-noise ratio (SNR) will be increased, ideally in
proportion to the square root of the number of measurements.

11. What is the RMS value of the summed noise?

After m repetitions, the summed noise ΣN(iT) has an RMS value of √m · N, where N is the RMS
value of the noise in a single epoch.

12. What does the abbreviation CORTES stand for?

CORTES:

Coordinate Reduction Time Encoding System

13. What do the abbreviations AZTEC and TP stand for?

AZTEC:Amplitude Zone Time Epoch Coding

TP:Turning Point

14. What are the advantages of AZTEC?

Advantages of the AZTEC method:

1. On-line smoothing

2. Data reduction and coding

15. What is the disadvantage of the AZTEC algorithm?

A disadvantage of the smoothing process is the loss of amplitude morphology.



1. What is a data reduction algorithm?

A data reduction algorithm seeks to minimize the number of code bits stored by reducing the
redundancy present in the original signal. We obtain the reduction ratio by dividing the number
of bits of the original signal by the number saved in the compressed signal. We generally desire
a high reduction ratio, but caution against using this parameter as the sole basis of comparison
among data reduction algorithms.

2. Draw the flow chart of the TP algorithm.

3.Define PRD? 

PRD given by
4. What are the signal averaging characteristics?

1. The signal waveform must be repetitive (although it does not have to be periodic). This
means that the signal must occur more than once, but not necessarily at regular intervals.

2. The noise must be random and uncorrelated with the signal. In this application, random means
that the noise is not periodic and that it can only be described statistically (e.g., by its mean and
variance).

3. The temporal position of each signal waveform must be accurately known.

5. Draw the flow chart of the line detection operation of the AZTEC algorithm.


6. Distinguish between AZTEC, TP and CORTES.

A comparison of various ECG data compression techniques:

Method   SF (Hz)   Precision (bits)   CR     PRD (%)   Wave recognition   Comments
AZTEC    500       12                 10.0   28        Implied            Poor P and T fidelity
TP       200       12                 2.0    5.3       No                 Sensitive to SF
CORTES   200       12                 4.8    7.0       Implied            Sensitive to SF; poor P fidelity
7. Draw the flow chart of the CORTES algorithm.

8. What are the advantages of the TP algorithm?

Advantages of the TP algorithm:

1. The main advantage of the TP algorithm is its simplicity.

2. Additionally, it is very fast.

3. It produces a compression ratio of 2:1.

4. Its speed makes it a potential candidate for real-time computation.

5. The original signal can be reconstructed from the compressed data by suitably interpolating
between the saved data points.
9. What are the applications of the AZTEC algorithm?

Applications of the AZTEC algorithm to three different ECG waveforms:

1. Normal sinus rhythm, data reduction of 170/1000

2. Tachycardia, data reduction of 30/1000

3. PVC, data reduction of 138/1000

10. Disadvantages of the turning point algorithm.

Disadvantages:

1. The resulting reconstructed signal typically has a widened QRS complex and sharp edges that
reduce its clinical acceptability.

2. The algorithm introduces short-term time distortion. However, this localized distortion is not
visible when the reconstructed signal is viewed on standard clinical monitors and paper recorders.

1. Explain the data reduction algorithm.

Data reduction algorithm:

A typical computerized medical signal processing system acquires a large amount of data that is
difficult to store and transmit. We need a way to reduce the data storage space while preserving
the significant clinical content for signal reconstruction. In some applications, the process of
reduction and reconstruction requires real-time performance.

A data reduction algorithm seeks to minimize the number of code bits stored by reducing the
redundancy present in the original signal. We obtain the reduction ratio by dividing the number
of bits of the original signal by the number saved in the compressed signal. We generally desire a
high reduction ratio but caution against using this parameter as the sole basis of comparison
among data reduction algorithms. Factors such as bandwidth, sampling frequency, and precision
of the original data generally have considerable effect on the reduction ratio.

    A data reduction algorithm must also represent the data with acceptable fidelity. In biomedical
data reduction, we usually determine the clinical acceptability of the reconstructed signal through
visual inspection. We may also measure the residual, that is, the difference between the
reconstructed signal and the original signal. Such a numerical measure is the percent root-mean-
square difference, PRD, given by

     PRD = 100 × √[ Σ(i=1..n) (xorg(i) – xrec(i))² / Σ(i=1..n) xorg(i)² ]

where n is the number of samples and xorg and xrec are samples of the original and
reconstructed data sequences.

A lossless data reduction algorithm produces zero residual, and the reconstructed signal exactly
replicates the original signal. However, clinically acceptable quality is neither guaranteed by a
low nonzero residual nor ruled out by a high numerical residual. For example, a data reduction
algorithm for an ECG recording may eliminate small-amplitude baseline drift. In this case, the
residual contains negligible clinical information, and the reconstructed ECG signal can be quite
clinically acceptable despite a high residual.
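A minimal sketch of the PRD computation above, assuming x_org and x_rec are equal-length arrays of original and reconstructed samples.

import numpy as np

def prd_percent(x_org, x_rec):
    x_org = np.asarray(x_org, dtype=float)
    residual = x_org - np.asarray(x_rec, dtype=float)   # reconstruction error
    return 100.0 * np.sqrt(np.sum(residual ** 2) / np.sum(x_org ** 2))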

2. Explain the basics of signal averaging.

Basics of signal averaging:

Consider the spectrum of a signal that is corrupted by noise. When the noise bandwidth is
completely separated from the signal bandwidth, the noise can easily be discarded by applying
a linear low-pass filter. On the other hand, when the noise bandwidth overlaps the signal
bandwidth and the noise amplitude is larger than the signal's, a low-pass filter would need to
discard some of the signal energy in order to remove the noise, thereby distorting the signal.

            One predominant application area of signal averaging is electroencephalography. The
EEG recorded from scalp electrodes is difficult to interpret in part because it consists of a
summation of the activity of the billions of brain cells. It is impossible to deduce much about the
activity of the visual or auditory parts of the brain from the EEG. However, if we stimulate a part
of the brain with a flash of light or an acoustical click, an evoked response occurs in the region of
the brain that processes information for the sensory system being stimulated. By summing the
signals that are evoked immediately following many stimuli and dividing by the total number of
stimuli, we obtain an averaged evoked response. This signal can reveal a great deal about the
performance of a sensory system.

           Signal averaging sums a set of time epochs of the signal together with the superimposed
random noise. If the time epochs are properly aligned, the signal waveforms directly sum
together. On the other hand, the uncorrelated noise averages out in time. Thus, the signal-to-
noise ratio (SNR) is improved.

                Signal averaging is based on the following characteristics of the signal and the noise:

1. The signal waveform must be repetitive (although it does not have to be periodic).This means
that the signal must occur more than once but not necessarily at regular intervals.

2. The noise must be random and uncorrelated with the signal. In this application, random means
that the noise is not periodic and that it can only be described statistically (e.g., by its mean and
variance).

3. The temporal position of each signal waveform must be accurately known.

                                              

Figure: (a) The signal and noise bands do not overlap. (b) The signal and noise spectra overlap.

          It is the random nature of noise that makes signal averaging useful. Each time epoch (or
sweep) is intentionally aligned with the previous epochs so that the digitized samples from the
new epoch are added to the corresponding samples from the previous epochs. Thus the time-
aligned repetitive signals S in each epoch add directly, so that after four epochs the signal
amplitude is four times larger than for one epoch (4S). If the noise is random with a mean of
zero and an average rms value N, the rms value after four epochs is the square root of the sum of
squares (i.e., (4N²)^1/2, or 2N). In general, after m repetitions the signal amplitude is mS and the
noise amplitude is √m N. Thus, the SNR improves as the ratio of m to √m (i.e., √m). For
example, averaging 100 repetitions of a signal improves the SNR by a factor of 10. This can be
proven mathematically as follows.

          The input waveform f(t) has a signal portion S(t) and a noise portion N(t). Then
                                                       f(t) = S(t) + N(t)------- (1)

Let f(t) be sampled every T seconds. The value of any sample point in the time epoch (i = 1, 2,
…, n) is the sum of the noise component and the signal component.

                                                   f(iT) = S(iT) + N(iT)-------------  (2)

Each sample point is stored in memory. The value stored in memory location i after m repetitions
is

                    Σ(k=1..m) f_k(iT) = Σ(k=1..m) S_k(iT) + Σ(k=1..m) N_k(iT) -----------(3)

The signal component for sample point i is the same at each repetition if the signal is stable and
the sweeps are aligned together perfectly. Then

                    Σ(k=1..m) S_k(iT) = m S(iT)        -------------(4)

The assumptions for this development are that the signal and noise are uncorrelated and that the
noise is random with a mean of zero. After many repetitions, the summed noise Σ N(iT) has an
rms value of

                    rms[ Σ(k=1..m) N_k(iT) ] = √m N  --------------(5)

where N is the rms noise amplitude of a single epoch. Taking the ratio of Eqs. (4) and (5) gives
the SNR after m repetitions as

                    SNR_m = m S / (√m N) = √m (S/N) = √m SNR  -------------------(6)

Thus, signal averaging improves the SNR by a factor of √m. The figure is a graph illustrating the
results of Eq. (6).
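A small numerical sketch of the √m gain, assuming m perfectly aligned epochs of a fixed waveform in zero-mean Gaussian noise; the Gaussian pulse is an arbitrary stand-in for an evoked response.

import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 100                                  # samples per epoch, sweeps
t = np.arange(n) / n
s = np.exp(-((t - 0.5) ** 2) / 0.002)            # stand-in evoked response

epochs = s + rng.normal(0.0, 1.0, size=(m, n))   # m aligned noisy sweeps
average = epochs.mean(axis=0)                    # averaged evoked response

snr_single = s.std() / 1.0                       # noise rms is 1 per epoch
snr_averaged = s.std() / (average - s).std()     # residual noise after averaging
print(snr_averaged / snr_single)                 # close to sqrt(100) = 10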

3. Explain the line detection operation of the AZTEC algorithm with flow chart.

Line detection operation of the AZTEC algorithm:

Originally developed to preprocess ECGs for rhythm analysis, the AZTEC (Amplitude Zone
Time Epoch Coding) data reduction algorithm decomposes raw ECG sample points into plateaus
and slopes. It provides a sequence of line segments that form a piecewise-linear approximation
to the ECG.
                   

Data reduction:

The above diagram shows the line detection operation, which makes use of zero-order
interpolation (ZOI) to produce horizontal lines. Two variables, Vmx and Vmn, always reflect
the highest and lowest elevations of the current line. The variable LineLen keeps track of the
number of samples examined. We store a plateau if either the difference between Vmxi and Vmni
is greater than a predetermined threshold Vth or if LineLen is greater than 50. The stored values
are the length (LineLen – 1) and the average amplitude of the plateau, (Vmx + Vmn)/2. Note
that Vmxi and Vmni are always updated to the latest sample before line detection begins; this
forces the ZOI to begin from the value of the latest sample.
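A simplified sketch of this plateau (ZOI) stage, assuming batch input; slope production and the separate Vmx/Vmxi bookkeeping of the full flowchart are collapsed into a single pair of running extremes.

def aztec_plateaus(samples, vth, max_len=50):
    lines = []                        # stored (length, amplitude) plateaus
    vmx = vmn = samples[0]            # extremes of the line being built
    line_len = 1
    for v in samples[1:]:
        cand_mx, cand_mn = max(vmx, v), min(vmn, v)
        if (cand_mx - cand_mn) > vth or line_len + 1 > max_len:
            # Close the line without the new sample: length and average.
            lines.append((line_len, (vmx + vmn) / 2))
            vmx = vmn = v             # ZOI restarts from the latest sample
            line_len = 1
        else:
            vmx, vmn = cand_mx, cand_mn
            line_len += 1
    lines.append((line_len, (vmx + vmn) / 2))   # flush the final line
    return lines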

4. Explain the line processing operation of the AZTEC algorithm with flow chart.

Line processing operation of AZTEC algorithm:

The figure below shows the line processing algorithm, which produces either a plateau or a slope
depending on the value of the variable LineMode. We initialize LineMode to _PLATEAU in
order to begin by producing a plateau. The production of an AZTEC slope begins when the
number of samples needed to form a plateau is less than three. Setting LineMode to _SLOPE
indicates that we have entered slope production mode. We then determine the direction or sign of
the current slope by subtracting the previous line amplitude V1 from the current amplitude
Vsi. We also reset the length of the slope, Tsi. The variable Vsi records the current line amplitude
so that any change in the direction of the slope can be tracked.
                  

         When we reenter line processing with LineMode equal to _SLOPE, we either save or
update the slope. The slope is saved either when a plateau of more than three samples can be
formed or when a change in direction is detected. If we detect a new plateau of more than three
samples, we store the current slope and the new plateau. For the slope, the stored values are its
length Tsi and its final elevation V1. Note that Tsi is multiplied by –1 to differentiate a slope
from a plateau (i.e., the minus sign serves as a flag to indicate a slope). We also store the length
and the amplitude of the new plateau, then reset all parameters and return to plateau production.

         If a change in direction is detected in the slope, we first save the parameters for the current
slope and then reset sign, Vsi, Tsi, Vmxi, and Vmni to produce a new AZTEC slope. Now the
algorithm returns to line detection but remains in slope production mode. When there is no new
plateau or change of direction, we simply update the slope’s parameters, Tsi and Vsi, and return
to line detection with LineMode remaining set to _SLOPE. AZTEC does not produce a constant
data reduction ratio. The ratio is frequently as great as 10 or more, depending on the nature of the
signal and the value of the empirically determined threshold.

5. Explain how to reconstruct the original signal from AZTEC data, using an example.

Reconstruction of AZTEC:

Let us illustrate how AZTEC coded data, such as that given below, can be used to reconstruct
the original signal.

          AZTEC coded data: {15, 80, 5, 100, –8, –200, –5, 150, 25, 150}

Solution: We reconstruct the signal by expanding the AZTEC data as plateaus and slopes. The
first two points in the sequence represent a line of 15 samples at amplitude 80; the second pair
represents another segment of 5 samples at constant amplitude 100. As the first value in the
third pair is negative, it represents a line segment with nonzero slope: this line is 8 samples long,
beginning at the end of the previous segment and ending at an amplitude of –200. The fourth pair
likewise represents a slope ending at an amplitude of 150 after 5 samples. The last pair represents
a horizontal line (plateau) 25 samples long with an amplitude of 150.

                   

                                                             Fig.: The AZTEC-reconstructed ECG waveform
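A sketch that expands AZTEC pairs into samples under the conventions of the example above: a positive first value is a plateau length with its amplitude, and a negative first value is a slope length ending at the given amplitude. The function name is hypothetical.

def aztec_expand(coded):
    out = []
    for i in range(0, len(coded), 2):
        length, value = coded[i], coded[i + 1]
        if length > 0:                          # plateau: repeat the amplitude
            out.extend([value] * length)
        else:                                   # slope: ramp to the end value
            n = -length
            start = out[-1]                     # slope starts at last sample
            step = (value - start) / n
            out.extend(start + step * (k + 1) for k in range(n))
    return out

ecg = aztec_expand([15, 80, 5, 100, -8, -200, -5, 150, 25, 150])
print(len(ecg))    # 15 + 5 + 8 + 5 + 25 = 58 samples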


     The reconstructed waveform is shown in the figure. As can be seen, the waveform has a
jagged appearance with step-like quantization, which is not clinically acceptable. To make it more
acceptable, some kind of smoothing of the reconstructed waveform needs to be done. The
literature suggests parabolic fitting to the AZTEC-reconstructed signal, which is claimed to be
both fast and easy. It advocates fitting a parabola to an odd number (2L + 1) of data points.
Thus, if L is taken to be 3, we have

x'(k) = (1/21)[–2x(k–3) + 3x(k–2) + 6x(k–1) + 7x(k) + 6x(k+1) + 3x(k+2) – 2x(k+3)] -------- (1)

where x'(k) is the new data point, the x(k±n) are original (AZTEC) data points, and k±n define
the time relationships.

Note: The smoothing function defined by equation (1) acts as a low-pass filter to reduce the
discontinuities. Its effect is clearly visible in the figure below, where the AZTEC-coded ECG
before and after smoothing is shown for three waveforms: the first a normal sinus rhythm, the
second a case of tachycardia, and the last a PVC.

               

  Applications of the AZTEC algorithm to three different ECG waveforms: (a) normal sinus
rhythm, data reduction of 170/1000; (b) tachycardia, data reduction of 340/1000; (c) PVC, data
reduction of 138/1000.

                A disadvantage of the smoothing process is the loss of amplitude morphology. This
amplitude information is vital in the detection of disorders of the ventricle such as ventricular
hypertrophy.

  

6. Explain the CORTES algorithm with flow chart.


CORTES Algorithm:

We have seen that the basic advantage of the TP approach is that it preserves the important
attributes of the QRS complexes. But its main disadvantage is that the data compression is
independent of the morphology, whether it is the fast-moving QRS complex, the relatively
slow-moving T and P waves, or, for that matter, the isoelectric region.

                  On the contrary, AZTEC has a much better data-reduction capability, adapts its
coding mode to the nature of the signal region itself, and eliminates baseline drift.
However, even after smoothing by a parabolic filter, an ECG waveform reconstructed from AZTEC
data may not be clinically acceptable. Occasional complete loss of the P wave, loss of the PR- and
ST-segment durations, a widening of the QRS complex, and a decrease in its amplitude are some
of the problems associated with such waveforms.

                   Thus both methods, while having certain advantages, have some serious
shortcomings. It was therefore proposed to develop a new data reduction algorithm which, while
having the desirable features of both methods, does not suffer from their disadvantages.
This is precisely what is achieved by the algorithm known as the Coordinate Reduction Time
Encoding System, or CORTES.

CORTES data compression:

The basic philosophy of the algorithm is quite simple:

    1. Apply the TP algorithm to the fast-changing morphologies of the ECG waveform.

    2. Apply the AZTEC algorithm to the isoelectric regions of the ECG waveform.

    The algorithm is best described through the flowchart given below.
                

                                             Flow chart for the CORTES algorithm

   CORTES executes the TP and AZTEC algorithms in parallel on the incoming ECG signal.
Based on an experimentally determined line length threshold Lth, the CORTES algorithm decides
whether the AZTEC data or the TP data are to be retained: whenever an AZTEC line length
exceeds the threshold Lth, CORTES saves the AZTEC line; otherwise it saves the TP data
points. As TP is used to encode the QRS complexes, only AZTEC plateaus are generated; no
slopes are produced. The way it is done on the computer is described below.

                As the ECG signal is sampled, CORTES simultaneously applies AZTEC to the
isoelectric regions of the ECG waveform and the TP algorithm to the higher-frequency regions of
the ECG signal. The algorithm uses a line length threshold Lth that is arrived at by
experimentation on the database.

    Once an AZTEC plateau is produced, CORTES saves the AZTEC data only if the plateau is
longer than Lth, and saves the TP data if the plateau is shorter than or equal to Lth. Only AZTEC
plateaus are generated; no slopes are produced. To preserve the slope of the QRS complex, the
CORTES algorithm uses TP instead of AZTEC to encode the high frequencies that are attributed
to it.
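A sketch of this selection rule, assuming aztec_line is a (length, amplitude) plateau and tp_points holds the TP samples covering the same stretch of signal; both names are hypothetical.

def cortes_select(aztec_line, tp_points, lth):
    length, amplitude = aztec_line
    if length > lth:
        # Long plateau: the region is isoelectric, so keep the AZTEC line.
        return ("AZTEC", length, amplitude)
    # Short plateau: a high-frequency region such as a QRS, so keep TP data.
    return ("TP", tp_points)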
7. Describe the reconstruction of ECG data using the CORTES algorithm, taking an example.

Reconstruction from CORTES data:

     Reconstruction of the original signal from the encoded CORTES data is not simple. The
reason, primarily, is that the line lengths recorded in the AZTEC portion of the CORTES data
represent single units of the sampling rate, while the TP data points are two units apart. Further,
the transitions between lines and turning points need to be looked at carefully, as each is
governed by a different set of rules. The literature suggests the following rules for reconstruction
of CORTES data:

1. If an AZTEC line is followed by TP data, the length of the line is decremented and the
distance between the shortened line and the first turning point is 1.
2. If TP data is followed by a line, then the distance between the last TP data point and the start
of the AZTEC line is 2 time units.

Thus the rules for reconstruction of the ECG waveform from CORTES data are:

a. Lengths of AZTEC lines are in single units of time.

b. The distance between TP values is 2 time units.

c. For a transition from an AZTEC line to TP data:

                     i) the length of the AZTEC line is decremented;

                    ii) the distance between the end of the shortened AZTEC line and the TP data
equals 1.

d. For a transition from TP data to AZTEC, the distance from the last TP data point to the
beginning of the AZTEC line equals 2 time units.

                                   Let us illustrate the procedure of reconstruction of CORTES data
by means of an example.

Example: Given the CORTES data below, reconstruct the ECG signal, where ** is the mark
separating AZTEC data from TP data and *** is the mark separating TP data from AZTEC data.

   The CORTES encoded data is:

{3,10,9,5**10,12,5,-10,-18,-5,***30,10}
                 

       

                                              Figure: Reconstructed ECG data using the CORTES algorithm

8. Explain the TP algorithm with flow chart.

Turning point (TP) algorithm:

The most important morphology of an ECG waveform, undoubtedly, is the QRS complex. As
such, any data reduction algorithm, to be useful, must have the potential to reproduce this
morphology with as near fidelity as possible when the compressed data is used to reconstruct the
signal. This, perhaps, was one of the important considerations in the development of the turning
point (TP) algorithm. As its name indicates, it retains all the turning points and treats them as
non-redundant.

                                                      TP algorithm flow chart

                              The algorithm is very simple in construction and yields a compression
of 50%, except in the most pessimistic condition where the signal alternates between a peak and
a trough. In a monitoring scenario where the highest frequency of interest is assumed to be around
50 Hz and the sampling frequency is 200 Hz, the TP algorithm provides a method of efficiently
reducing the effective sampling rate by at least one half, i.e., to 100 samples per second.

            The way it works is now explained.

  The algorithm considers three sample points at a time. It retains the first sample by taking it
as the reference point X0. The next two points are designated X1 and X2. The algorithm then
decides whether to retain X1 or X2, depending upon which of the two points best conserves the
turning point of the original signal.

            To enable one to better comprehend the operation of the method, the table below shows
the various configurations in which three consecutive sample points can occur.

                                             Figure: Various patterns of the triple X0, X1 and X2

In each frame, the encircled point preserves the slope of the original three points. The algorithm
retains this point and makes it the reference point X0 for the next iteration. It continues by taking
the next two samples, assigning them as X1 and X2, and repeats the process.

Except in the case of patterns 1 and 2, when there is no turning it is always the last point X2 that
is stored; in case of a turning, X1 is retained.

       Since it is not possible for a computer to visually inspect the various patterns and determine
which point is to be stored, a mathematical relationship is developed to determine which point is
to be saved. Let s1 = sign(X1 – X0) and s2 = sign(X2 – X1) be the slopes of the two consecutive
sample pairs; a zero difference (X1 – X0) or (X2 – X1) produces a zero result. X1 is retained when
the product of the slope signs indicates a turning point; otherwise X2 is stored, as summarized
below.

                     Figure: Signs of patterns and point choice
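A minimal sketch of the TP rule, assuming batch input: X1 is kept when the slope signs oppose (a turning point at X1), otherwise X2 is kept, giving roughly one saved point per two inputs. The function names are hypothetical.

def sign(v):
    return (v > 0) - (v < 0)       # -1, 0, or +1

def turning_point(samples):
    saved = [samples[0]]           # the first sample is the reference X0
    x0 = samples[0]
    for i in range(1, len(samples) - 1, 2):
        x1, x2 = samples[i], samples[i + 1]
        s1, s2 = sign(x1 - x0), sign(x2 - x1)
        kept = x1 if s1 * s2 < 0 else x2   # sign change => turning at X1
        saved.append(kept)
        x0 = kept                  # retained point becomes the next X0
    return saved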

  

9. Draw the possible patterns for saving the slope point using the TP algorithm.

The nine possible configurations of the triple X0, X1, X2, the retained (encircled) point in each
frame, and the sign criterion that selects X1 at a turning point and X2 otherwise are shown in the
table "Various patterns of the triple X0, X1 and X2" and the figure "Signs of patterns and point
choice" of question 8.
