
Lossless Compression of Telemetry Data – Methodology and Results

Item Type: Proceedings; text
Authors: Wolfson, Scott C.; Jones, Joshua B.
Citation: Wolfson, S. C., & Jones, J. B. (2022). Lossless Compression of Telemetry Data –
Methodology and Results. International Telemetering Conference Proceedings, 57.
Publisher: International Foundation for Telemetering
Journal: International Telemetering Conference Proceedings
Rights: Copyright © held by the author; distribution rights International Foundation for
Telemetering
Item License: http://rightsstatements.org/vocab/InC/1.0/
Version: Final published version
Link to Item: http://hdl.handle.net/10150/666932


LOSSLESS COMPRESSION OF TELEMETRY DATA –
METHODOLOGY AND RESULTS
Scott C. Wolfson
Joshua B. Jones
U.S. Army Redstone Test Center
Redstone Arsenal, AL 35898-8052
[email protected]

ABSTRACT

The continual advances in military system technologies compel the Test & Evaluation
community to constantly mature, adapt, and apply innovative methodologies and numerical
methods. This progression requires a clear understanding of each technique's capabilities and
limitations. With respect to telemetry, bandwidth is limited while ever higher data throughputs
are desired. These higher throughput requirements can be attributed to factors such as increases
in sensor sample rates and image resolutions. The primary objective of this technical paper is to
provide details pertaining to a lossless compression algorithm, how it was applied to telemetry
data prior to transmission, and several variants of the algorithm. Prior to selecting the algorithm
used, controlled testing was performed on each of the variants to investigate their effects, if any,
on achievable compression ratios. The results from this testing are also included.

INTRODUCTION

Data compression is a subcategory of the scientific field of study known as information
theory and dates back to the work of Harry Nyquist and Ralph Hartley in the 1920s
[1]. Although algorithmically complex, data compression is simply the process of reducing the
number of bits required to represent digital data prior to transmission or storage, and it has been
implemented in numerous commercial and military applications, including digital image
storage, streaming video, cell phone audio, and satellite communications. Data compression
algorithms can be divided into two classes known as lossless and lossy, where lossless
algorithms reduce bits by eliminating redundancy while lossy algorithms reduce bits by
removing inconsequential information [2].

The motivation for the research and experimentation documented in this paper is the trend of
on-system sensor and subsystem bit rates increasing beyond the bandwidth capabilities of the
telemetry link. To narrow the research focus area, lossless compression was
determined to be the most beneficial to the telemetry community. Additionally, algorithm
selection was accomplished using comparison metrics that included compression percentage,
firmware implementation complexity and impact of telemetry dropouts. The following sections
detail the method selected, method variations, encoder implementation and test results.
COMPRESSION ALGORITHM

The initial lossless compression algorithm investigated was block delta encoding. This method
entails the initial transmission of an uncompressed collection of sampled data referred to as
reference frames. Compression is accomplished when subsequent transmissions are limited to
changes, or deltas, from the initial data transmissions. Typical implementations transmit new
reference frames at set time intervals or when the number of deltas exceeds a threshold. Although
data dependent, this algorithm can produce a high compression ratio, especially for video, but is
negatively impacted if telemetry dropouts occur and corrupt either the reference frame or delta
information. This data corruption is permanent and persists until a new reference frame is
transmitted. Because of this potential for large amounts of corrupt data, the block delta encoding
method is not recommended for telemetry use.
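
To make the dropout sensitivity concrete, below is a minimal Python sketch of block delta
encoding. The frame layout, function names, and reference interval are illustrative
assumptions, not a telemetry implementation.

```python
# Illustrative sketch of block delta encoding (assumed frame layout and
# reference interval; not the paper's implementation).

def delta_encode(frames, ref_interval=30):
    """Send a full reference frame periodically; otherwise send only deltas."""
    reference = None
    for i, frame in enumerate(frames):
        if i % ref_interval == 0:
            reference = list(frame)
            yield ('ref', list(frame))      # uncompressed reference frame
        else:
            # Only the (index, value) pairs that differ from the reference.
            yield ('delta', [(j, v) for j, (v, r)
                             in enumerate(zip(frame, reference)) if v != r])

def delta_decode(stream):
    reference = None
    for kind, payload in stream:
        if kind == 'ref':
            reference = list(payload)
            yield list(reference)
        else:
            frame = list(reference)         # a corrupted reference corrupts
            for j, v in payload:            # every frame until the next 'ref'
                frame[j] = v
            yield frame

# Round trip on three 4-pixel "frames":
frames = [[0, 0, 0, 0], [0, 5, 0, 0], [0, 5, 7, 0]]
assert list(delta_decode(delta_encode(frames, ref_interval=3))) == frames
```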

A second approach is referred to as Huffman coding. This algorithm necessitates the
identification of bit patterns and their associated frequency of occurrence. Compression occurs
when unique variable-bit-length tokens are assigned to each bit pattern based on occurrence
frequency, where shorter tokens are assigned to the bit patterns with the highest occurrence
frequency. This method can produce high compression ratios for certain types of data, such as
textual information. For telemetry applications, where data types typically include sensor samples
and/or bus data, the randomness of the data limits the compression efficiency of this method [3].
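
For illustration, the following is a minimal Python sketch of the textbook Huffman
construction; it builds a code table from observed symbol frequencies. This is the general
technique, not the fixed token table used later in this paper.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Textbook Huffman construction: frequent symbols get shorter tokens."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: [total_frequency, unique_tiebreaker, [sym, code], ...]
    heap = [[f, i, [s, ""]] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # two least-frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # descend left: prepend 0
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]          # descend right: prepend 1
        heapq.heappush(heap, [lo[0] + hi[0], tie] + lo[2:] + hi[2:])
        tie += 1
    return {sym: code for sym, code in heap[0][2:]}

# 'e' dominates "tennessee", so it receives the shortest token.
print(huffman_codes("tennessee"))
```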

The selected and recommended method for telemetry usage integrates the delta encoding of
adjacent samples, sample grouping and the Huffman coding of redundant exponents. The
following steps summarize the algorithm process.

1. Select the sample group size. (Example: One row of video consisting of 256 16-bit
unsigned pixel values)
2. Store the sample group.
• S=
29470 29127 29127 30164 30865 30164 28108 26124 … 12869 12642 12195 11974
3. If unsigned, convert the sample group to signed data.
• D0 = S – 32768 = [Data(1:256)] =
-3298 -3641 -3641 -2604 -1903 -2604 -4660 -6644 … -19899 -20126 -20573 -20794
4. Store the first derivative of the sample group.
• D1 = [Data(1) Diff(Data(1:256))] =
-3298 -343 0 1037 701 -701 -2056 -1984 … -460 -227 -447 -221
5. Store the second derivative of the sample group.
• D2 = [D1(1) D1(2) Diff(D1(2:256))] =
-3298 -343 343 1037 -336 -1402 -1335 72 … -227 233 -220 226
6. Select a subgroup size and divide the stored information into subgroups. (Example:
Subgroup Size SS = 4)
• G0 = G1 = G2 =
-3298 -1903 … -19899 -3298 701 … -460 -3298 -336 … -227
-3641 -2604 … -20126 -343 -701 … -227 -343 -1402 … 233
-3641 -4660 … -20573 0 -2056 … -447 343 -1335 … -220
-2604 -6644 … -20794 1037 -1984 … -221 1037 72 … 226
7. Determine the number of bits remaining in each subgroup after redundant leading zeros
or ones are removed and retain the results. These retained values are the redundant
exponents.
• E0 = E1 = E2 =
13 14 … 16 13 13 … 10 13 12 … 9
8. Determine the first derivative of the redundant exponents.
• Diff(E0) = Diff(E1) = Diff(E2) =
13 1 … 0 13 0 … -1 13 -1 … 0
9. Use the following Huffman table to assign Huffman tokens to the redundant exponent
derivatives and retain these values and number of token bits (B).
• Table 1 Huffman Tokens
Exponent Token (binary) Token bits
0 0 1
-1, 1 100, 101 3
-2, 2 1100, 1101 4
-3, 3 11100000, 11110000 8
-4, 4 11100001, 11110001 8
-5, 5 11100010, 11110010 8
-6, 6 11100011, 11110011 8
-7, 7 11100100, 11110100 8
-8, 8 11100101, 11110101 8
-9, 9 11100110, 11110110 8
-10, 10 11100111, 11110111 8
-11, 11 11101000, 11111000 8
-12, 12 11101001, 11111001 8
-13, 13 11101010, 11111010 8
-14, 14 11101011, 11111011 8
-15, 15 11101100, 11111100 8
-16, 16 11101101, 11111101 8
• H0 = H1 = H2 =
11111010 101 … 0    11111010 0 … 100    11111010 100 … 0
• B0 = B1 = B2 =
8 3 … 1    8 1 … 3    8 3 … 1

10. Determine the assemblage requiring the least number of bits to perfectly represent the
data (i.e., SS[sum(E0)]+sum(B0), SS[sum(E1)]+sum(B1), or SS[sum(E2)]+sum(B2)).
11. Send the assemblage with the best compression ratio to the telemetry encoder. (A code
sketch of these steps appears below.)
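
The following Python sketch ties steps 3 through 10 together for a single sample group. The
helper names are ours; the token lengths follow Table 1, and exponent derivatives outside the
table's ±16 range are not handled.

```python
# Sketch of steps 3-10 for one group of unsigned 16-bit samples. Helper
# names are assumptions; token lengths come from Table 1.

def diff(xs):
    """First differences (steps 4, 5 and 8)."""
    return [b - a for a, b in zip(xs, xs[1:])]

def signed_bits(v):
    """Bits left after redundant leading zeros/ones are removed (step 7)."""
    return (v.bit_length() if v >= 0 else (-v - 1).bit_length()) + 1

def token_bits(d):
    """Huffman token length from Table 1 for an exponent derivative d."""
    return {0: 1, 1: 3, 2: 4}.get(abs(d), 8)

def assemblage_bits(samples, ss):
    """Total bits for one candidate: SS*sum(E) + sum(B) from step 10."""
    subgroups = [samples[i:i + ss] for i in range(0, len(samples), ss)]
    exponents = [max(signed_bits(v) for v in g) for g in subgroups]   # step 7
    deltas = exponents[:1] + diff(exponents)                          # step 8
    return ss * sum(exponents) + sum(token_bits(d) for d in deltas)   # step 9

def compress_choice(raw, ss=4):
    """Pick the cheapest of D0/D1/D2 (steps 3-6 and 10)."""
    d0 = [s - 32768 for s in raw]          # step 3: unsigned -> signed
    d1 = d0[:1] + diff(d0)                 # step 4: first derivative
    d2 = d1[:2] + diff(d1[1:])             # step 5: second derivative
    name, data = min((('D0', d0), ('D1', d1), ('D2', d2)),
                     key=lambda c: assemblage_bits(c[1], ss))
    return name, assemblage_bits(data, ss)

# Compression percentage against the 16 bits/sample original:
#   name, bits = compress_choice(samples, ss=4)
#   percent = 100.0 * (1.0 - bits / (16.0 * len(samples)))
```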
Several important characteristics of the selected algorithm pertaining to user configuration
should be noted. To begin, the user-selected group size can have a significant impact on data
integrity. For example, a telemetry dropout results in the received data being corrupted, and these
data errors proliferate through the decompression algorithm for the entirety of the data group.
Intuitively, the impact of the data corruption can be reduced by selecting a smaller group
size, with the tradeoff being compression percentage. The second user-selectable parameter is
subgroup size. The effects of this parameter on compression efficiency are difficult to quantify.
Because of this, testing was performed on several different data types in an attempt to understand
how these algorithm variants affect the achievable compression percentage. The results of this
testing are presented in a subsequent section of this paper.

FPGA IMPLEMENTATION OF THE COMPRESSION ALGORITHM

The architecture of a Field Programmable Gate Array (FPGA) has many advantages when
compared to a microcontroller; the primary advantage with respect to the selected
compression algorithm is concurrent/parallel rather than sequential code execution. This inherent
architectural feature allows the derivatives, redundant exponents, Huffman tokens, and the
assemblage with the best compression ratio to be determined simultaneously. Note that
even with concurrency, latency exists between the input and output of the compression firmware.
This latency is due to the requirement to store the group and exponent information prior to
determining which data is sent to the telemetry encoder. The firmware implementation of the
selected compression algorithm is summarized in Figure 1.

Figure 1 FPGA Firmware Block Diagram


COMPRESSION TEST RESULTS

To test the compression algorithm, several data types and subgroup size variants were selected.
These data types included video pixels as well as accelerometer and magnetometer samples. The
following information summarizes the findings.

Figure 2 One row of 16-Bit grayscale pixel data (256 pixels)

Table 2 Pixel data compression results

                         Compression Percentage (%)
Data Format              SS = 4     SS = 8     SS = 16
D0 (Data Type = 0b01)     0.415      0.855      0.317
D1 (Data Type = 0b10)    25.073     23.975     22.900
D2 (Data Type = 0b11)    22.534     21.826     20.483

Figure 3 Accelerometer on a spin fixture measuring centripetal acceleration (2640 samples)


Table 3 Accelerometer data compression results

                         Compression Percentage (%)
Data Format              SS = 4     SS = 8     SS = 16
D0 (Data Type = 0b01)    -9.214     -8.185     -7.677
D1 (Data Type = 0b10)    51.411     51.364     50.448
D2 (Data Type = 0b11)    47.317     47.055     45.694

Figure 4 X-axis magnetometer on a spin fixture (2640 samples)

Table 4 Magnetometer data compression results (x-axis)

                         Compression Percentage (%)
Data Format              SS = 4     SS = 8     SS = 16
D0 (Data Type = 0b01)    16.417     17.206     17.437
D1 (Data Type = 0b10)    56.523     57.014     56.515
D2 (Data Type = 0b11)    55.388     55.647     55.028
Figure 5 Z-axis magnetometer on a spin fixture (2640 samples)

Table 5 Magnetometer data compression results (z-axis)

                         Compression Percentage (%)
Data Format              SS = 4     SS = 8     SS = 16
D0 (Data Type = 0b01)    16.329     17.194     17.462
D1 (Data Type = 0b10)    55.448     55.442     54.463
D2 (Data Type = 0b11)    54.132     53.873     52.519

CONCLUSION

The results presented illustrate significant compression ratios for the representative telemetry
example data sets tested where the data sets included 16-bit grayscale imagery as well as
accelerometer and magnetometer analog samples. Further experimentation is required to fully
understand and document the effects of sample rates and data slew rates on achievable
compression ratios. For this work the compression algorithm was implemented on an FPGA
using firmware. The algorithm could also be implemented using a microcontroller. Both
implementation methods allow for easy integration into existing and future telemetry encoder
designs without hardware modification; however, the intrinsic concurrency of an FPGA makes
near real-time compression possible when minimal data latency is desired.

Even though the results obtained are encouraging, some improvements to the algorithm are being
explored. One such improvement entails expanding group/exponent memory to concurrently
store the exponents for multiple subgroup sizes. The decision logic would also be modified to
select the frame type with the highest compression ratio based on subgroup size in addition to the
derivatives. Another improvement involves applying a curve fit to the sample group data for the
purpose of eliminating any bias within the sample group. This addition to the compression
algorithm is expected to reduce the full-scale values within the sample group, thus improving the
compression ratio; a sketch of this idea follows. The results from these algorithm enhancements
will be presented in subsequent publications.
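
As a rough illustration of the curve-fit idea, the sketch below subtracts a rounded low-order
polynomial trend before coding. The fit order and coefficient handling are assumptions;
losslessness holds only if the decoder rebuilds the identical rounded trend from the
transmitted coefficients.

```python
import numpy as np

def remove_bias(samples, order=1):
    """Subtract a rounded low-order polynomial trend to shrink residuals.

    Illustrative only: the fit order and coefficient transport are
    assumptions. The coefficients must be sent alongside the residuals,
    and the decoder must recompute the identical rounded trend.
    """
    x = np.arange(len(samples))
    coeffs = np.polyfit(x, samples, order)               # least-squares fit
    trend = np.round(np.polyval(coeffs, x)).astype(int)  # shared rounding rule
    residual = np.asarray(samples, dtype=int) - trend    # smaller full scale
    return residual, coeffs
```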
REFERENCES

[1] Information Theory, Wikipedia, 2022. (Available on-line at
https://en.wikipedia.org/wiki/Information_theory).
[2] Data Compression, Wikipedia, 2022. (Available on-line at
https://en.wikipedia.org/wiki/Data_compression).
[3] Huffman Coding, Wikipedia, 2022. (Available on-line at
https://en.wikipedia.org/wiki/Huffman_coding).
