DC 3
where the quantization error is the difference between the quantized signal and the original signal.

Another approach to the comparison of an original and a reconstructed image is to generate the difference image and judge it visually. Intuitively, the difference image is Di = Pi − Qi, but such an image is hard to judge visually because its pixel values Di tend to be small numbers. If a pixel value of zero represents white, such a difference image would be almost invisible. In the opposite case, where pixel values of zero represent black, such a difference would be too dark to judge. Better results are obtained by calculating

    Di = a(Pi − Qi) + b,

where a is a magnification parameter (typically a small number such as 2) and b is half the maximum value of a pixel (typically 128). Parameter a serves to magnify small differences, while b shifts the difference image from extreme white (or extreme black) to a more comfortable gray.

6. Image Transforms

An image can be compressed by transforming its pixels (which are correlated) to a representation where they are decorrelated. Compression is achieved if the new values are smaller, on average, than the original ones. Lossy compression can be achieved by quantizing the transformed values. The decoder inputs the transformed values from the compressed stream and reconstructs the (precise or approximate) original data by applying the inverse transform. The transforms discussed in this section are orthogonal.

The term decorrelated means that the transformed values are independent of one another. As a result, they can be encoded independently, which makes it simpler to construct a statistical model. An image can be compressed if its representation has redundancy. The redundancy in images stems from pixel correlation. If we transform the image to a representation where the pixels are decorrelated, we have eliminated the redundancy and the image has been fully compressed.
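As an illustration, the scaled difference image described above can be computed as follows (a minimal sketch using NumPy; the function name and the sample pixel values are ours):

```python
import numpy as np

def difference_image(P, Q, a=2, b=128):
    """Visualize reconstruction error: D = a*(P - Q) + b.

    a magnifies small differences; b (half the maximum pixel value)
    shifts the result toward mid-gray so the error is visible whether
    zero means white or black.
    """
    D = a * (P.astype(int) - Q.astype(int)) + b
    return np.clip(D, 0, 255).astype(np.uint8)

# A pixel reconstructed 3 levels too dark shows up as a visible
# deviation from mid-gray (2*3 + 128 = 134):
P = np.array([[100]], dtype=np.uint8)
Q = np.array([[97]], dtype=np.uint8)
print(difference_image(P, Q))  # [[134]]
```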
6.1 Orthogonal Transforms

Image transforms are designed to have two properties: 1. to reduce image redundancy by reducing the sizes of most pixels, and 2. to identify the less important parts of the image by isolating the various frequencies of the image.

We intuitively associate a frequency with a wave. Water waves, sound waves, and electromagnetic waves have frequencies, but pixels in an image can also feature frequencies. Figure 2 shows a small 8×8 bi-level image that illustrates this concept. The top row is uniform, so we can assign it zero frequency. The rows below it have increasing pixel frequencies as measured by the number of color changes along a row. The four waves on the right roughly correspond to the frequencies of the four top rows of the image.

Figure 2: Image frequencies

Image frequencies are important because of the following basic fact: Low frequencies correspond to the important image features, whereas high frequencies correspond to the details of the image, which are less important. Thus, when a transform isolates the various image frequencies, pixels that correspond to high frequencies can be quantized heavily, whereas pixels that correspond to low frequencies should be quantized lightly or not at all. This is how a transform can compress an image very effectively by losing information, but only information associated with unimportant image details.

Practical image transforms should be fast and preferably also simple to implement. This suggests the use of linear transforms. In such a transform, each transformed value (or transform coefficient) ci is a weighted sum of the data items (the pixels) dj that are being transformed, where each item is multiplied by a weight wij. Thus, ci = Σj dj·wij for i, j = 1, 2, ..., n. For n = 4, this is expressed in matrix notation:

    | c1 |   | w11 w12 w13 w14 |   | d1 |
    | c2 | = | w21 w22 w23 w24 | . | d2 |
    | c3 |   | w31 w32 w33 w34 |   | d3 |
    | c4 |   | w41 w42 w43 w44 |   | d4 |

For the general case, we can write C = W·D.
Each row of W is called a "basis vector." The only quantities that have to be computed are the weights wij. The guiding principles are as follows:

1. Reducing redundancy. The first transform coefficient c1 can be large, but the remaining values c2, c3, ... should be small.

2. Isolating frequencies. The first transform coefficient c1 should correspond to zero pixel frequency, and the remaining coefficients should correspond to higher and higher frequencies.

The key to determining the weights wij is the fact that our data items dj are not arbitrary numbers but pixel values, which are nonnegative and correlated. This choice of wij satisfies the first requirement: to reduce pixel redundancy by means of a transform. In order to satisfy the second requirement, the weights wij of row i should feature frequencies that get higher with i. Weights w1j should have zero frequency; they should all be +1's. Weights w2j should have one sign change; i.e., they should be +1, +1, ..., +1, −1, −1, ..., −1. This continues until the last row of weights wnj, which should have the highest frequency: +1, −1, +1, −1, ..., +1, −1. The mathematical discipline of vector spaces coins the term "basis vectors" for our rows of weights. In addition to isolating the various frequencies of pixels dj, this choice results in basis vectors that are orthogonal. The basis vectors are the rows of matrix W, which is why this matrix and, by implication, the entire transform are also termed orthogonal. These considerations are satisfied by the orthogonal matrix

    W = | 1  1  1  1 |
        | 1  1 -1 -1 |
        | 1 -1 -1  1 |
        | 1 -1  1 -1 |

The first basis vector (the top row of W) consists of all 1's, so its frequency is zero. Each of the subsequent vectors has two +1's and two −1's, so they produce small transformed values, and their frequencies (measured as the number of sign changes along the basis vector) get higher. It is also possible to modify this transform to conserve the energy of the data vector. All that's needed is to multiply the transformation matrix W by the scale factor 1/2.
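The properties claimed for W can be checked numerically. A short sketch (the data vector is an arbitrary example of correlated pixel values chosen by us):

```python
import numpy as np

# The 4x4 orthogonal transform matrix from the text: row i has
# i sign changes, so rows correspond to increasing frequencies.
W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]])

# Rows are mutually orthogonal: W.W^T is a diagonal matrix (4I).
print(W @ W.T)

# Correlated pixel data -> one large coefficient, small remainder.
d = np.array([5, 6, 7, 8])
print(W @ d)            # [26 -4  0 -2]

# Scaling W by 1/2 conserves energy: ||(W/2)d||^2 == ||d||^2.
c = (W / 2) @ d
print(np.sum(c**2), np.sum(d**2))   # 174.0 174
```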
Another advantage of W is that it also performs the inverse transform.

6.2 Two-Dimensional Transforms

Given two-dimensional data such as the 4×4 matrix

    D = | 5 6 7 4 |
        | 6 5 7 5 |
        | 7 7 6 6 |
        | 8 8 8 8 |

where each of the four columns is highly correlated, we can apply our simple one-dimensional transform to the columns of D. The result is

    C' = W·D = | 1  1  1  1 |   | 5 6 7 4 |   | 26 26 28 23 |
               | 1  1 -1 -1 | . | 6 5 7 5 | = | -4 -4  0 -5 |
               | 1 -1 -1  1 |   | 7 7 6 6 |   |  0  2  2  1 |
               | 1 -1  1 -1 |   | 8 8 8 8 |   | -2  0 -2 -3 |

Each column of C' is the transform of a column of D. Notice how the top element of each column of C' is dominant, because the data in the corresponding column of D is correlated. Notice also that the rows of C' are still correlated. C' is the first stage in a two-stage process that produces the two-dimensional transform of matrix D. The second stage should transform each row of C', and this is done by multiplying C' by the transpose W^T. Our particular W, however, is symmetric, so we end up with C = C'·W^T = W·D·W^T = W·D·W, or

    C = | 103   1  -5   5 |
        | -13  -3  -5   5 |
        |   5  -1  -3  -1 |
        |  -7   3  -3  -1 |

The elements of C are decorrelated. The top-left element is dominant. It contains most of the total energy of the original D. The elements in the top row and the leftmost column are somewhat large, while the remaining elements are smaller than the original data items. The double-stage, two-dimensional transformation has reduced the correlation in both the horizontal and vertical dimensions. As in the one-dimensional case, excellent compression can be achieved by quantizing the elements of C, especially those that correspond to higher frequencies (i.e., located toward the bottom-right corner of C).

This is the essence of orthogonal transforms. The important transforms are:

1. The Walsh-Hadamard transform: is fast and easy to compute (it requires only additions and subtractions), but its performance, in terms of energy compaction, is lower than that of the DCT.

2. The Haar transform: is a simple, fast transform.
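The two-stage computation above can be reproduced directly (a sketch using the matrices from the text):

```python
import numpy as np

W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]])

D = np.array([[5, 6, 7, 4],
              [6, 5, 7, 5],
              [7, 7, 6, 6],
              [8, 8, 8, 8]])

C1 = W @ D       # stage 1: transform each column of D
C = C1 @ W.T     # stage 2: transform each row of C' (W is symmetric)
print(C)
# The top-left element (103) dominates: it carries most of the energy.
```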
It is the simplest wavelet transform.

3. The Karhunen-Loève transform: is the best one theoretically, in the sense of energy compaction (or, equivalently, pixel decorrelation). However, its coefficients are not fixed; they depend on the data to be compressed. Calculating these coefficients (the basis of the transform) is slow, as is the calculation of the transformed values themselves. Since the coefficients are data dependent, they have to be included in the compressed stream. For these reasons and because the DCT performs almost as well, the KLT is not generally used in practice.

4. The discrete cosine transform (DCT): is an important transform, about as efficient as the KLT in terms of energy compaction, but it uses a fixed basis, independent of the data. There are also fast methods for calculating the DCT. This method is used by JPEG and MPEG audio.

The 1-D discrete cosine transform (DCT) is defined as

    C(u) = a(u) Σ_{x=0}^{N−1} f(x) cos[ (2x+1)uπ / 2N ],   u = 0, 1, ..., N−1

The input is a set of N data values (pixels, audio samples, or other data), and the output is a set of N DCT transform coefficients (or weights) C(u). The first coefficient C(0) is called the DC coefficient, and the rest are referred to as the AC coefficients. Notice that the coefficients are real numbers even if the input data consists of integers. Similarly, the coefficients may be positive or negative even if the input data consists of nonnegative numbers only. The inverse DCT is defined as

    f(x) = Σ_{u=0}^{N−1} a(u) C(u) cos[ (2x+1)uπ / 2N ],   x = 0, 1, ..., N−1

where

    a(u) = √(1/N) for u = 0,   a(u) = √(2/N) for u = 1, 2, ..., N−1.

The corresponding 2-D DCT and inverse 2-D DCT are defined as

    C(u,v) = a(u)a(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x,y) cos[ (2x+1)uπ / 2N ] · cos[ (2y+1)vπ / 2N ]

and

    f(x,y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} a(u)a(v) C(u,v) cos[ (2x+1)uπ / 2N ] · cos[ (2y+1)vπ / 2N ]

The advantage of the DCT is that it can be expressed without complex numbers. The 2-D DCT is also separable (like the 2-D Fourier transform), i.e., it can be obtained by two subsequent 1-D DCTs.
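The 1-D DCT and its inverse can be implemented straight from these definitions (a naive O(N²) sketch; the sample input values are ours):

```python
import math

def a(u, N):
    """Normalization factor: sqrt(1/N) for u = 0, sqrt(2/N) otherwise."""
    return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)

def dct_1d(f):
    """C(u) = a(u) * sum_x f(x) * cos((2x+1) u pi / 2N)."""
    N = len(f)
    return [a(u, N) * sum(f[x] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          for x in range(N)) for u in range(N)]

def idct_1d(C):
    """f(x) = sum_u a(u) * C(u) * cos((2x+1) u pi / 2N)."""
    N = len(C)
    return [sum(a(u, N) * C[u] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                for u in range(N)) for x in range(N)]

f = [12, 10, 8, 10, 12, 10, 8, 11]
C = dct_1d(f)
# The DC coefficient is sqrt(N) times the mean of the input,
# and the inverse DCT recovers the original samples.
print(round(C[0], 3))                     # 28.638
print([round(v, 3) for v in idct_1d(C)])  # [12.0, 10.0, 8.0, 10.0, 12.0, 10.0, 8.0, 11.0]
```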
The important feature of the DCT, the feature that makes it so useful in data compression, is that it takes correlated input data and concentrates its energy in just the first few transform coefficients. If the input data consists of correlated quantities, then most of the N transform coefficients produced by the DCT are zeros or small numbers, and only a few are large (normally the first ones).

Compressing data with the DCT is therefore done by quantizing the coefficients. The small ones are quantized coarsely (possibly all the way to zero), and the large ones can be quantized finely to the nearest integer. After quantization, the coefficients (or variable-size codes assigned to the coefficients) are written on the compressed stream. Decompression is done by performing the inverse DCT on the quantized coefficients. This results in data items that are not identical to the original ones but are not much different.

In practical applications, the data to be compressed is partitioned into sets of N items each, and each set is DCT-transformed and quantized individually. The value of N is critical. Small values of N, such as 3, 4, or 6, result in many small sets of data items. Such a small set is transformed to a small set of coefficients where the energy of the original data is concentrated in a few coefficients, but there are only a few coefficients in such a set! Thus, there are not enough small coefficients to quantize. Large values of N result in a few large sets of data. The problem in such a case is that the individual data items of a large set are normally not correlated and therefore result in a set of transform coefficients where all the coefficients are large. Experience indicates that N = 8 is a good value, and most data compression methods that employ the DCT use this value of N.

7. JPEG

JPEG is a sophisticated lossy/lossless compression method for color or grayscale still images. It does not handle bi-level (black and white) images very well.
It also works best on continuous-tone images, where adjacent pixels have similar colors. An important feature of JPEG is its use of many parameters, allowing the user to adjust the amount of data lost (and thus also the compression ratio) over a very wide range. Often, the eye cannot see any image degradation even at compression factors of 10 or 20. There are two operating modes, lossy (also called baseline) and lossless (which typically produces compression ratios of around 0.5). Most implementations support just the lossy mode. This mode includes progressive and hierarchical coding.

JPEG is a compression method, not a complete standard for image representation. This is why it does not specify image features such as pixel aspect ratio, color space, or interleaving of bitmap rows. JPEG has been designed as a compression method for continuous-tone images. The name JPEG is an acronym that stands for Joint Photographic Experts Group. This was a joint effort by the CCITT and the ISO (the International Standards Organization) that started in June 1987 and produced the first JPEG draft proposal in 1991. The JPEG standard has proved successful and has become widely used for image compression, especially in Web pages.

The main goals of JPEG compression are the following:

1. High compression ratios, especially in cases where image quality is judged as very good to excellent.

2. The use of many parameters, allowing knowledgeable users to experiment and achieve the desired compression/quality trade-off.

3. Obtaining good results with any kind of continuous-tone image, regardless of image dimensions, color spaces, pixel aspect ratios, or other image features.

4. A sophisticated, but not too complex compression method, allowing software and hardware implementations on many platforms.
JPEG includes four modes of operation:

(a) A sequential mode where each image component (color) is compressed in a single left-to-right, top-to-bottom scan;

(b) A progressive mode where the image is compressed in multiple blocks (known as "scans") to be viewed from coarse to fine detail;

(c) A lossless mode that is important in cases where the user decides that no pixels should be lost (the trade-off is low compression ratio compared to the lossy modes); and

(d) A hierarchical mode where the image is compressed at multiple resolutions allowing lower-resolution blocks to be viewed without first having to decompress the following higher-resolution blocks.

Figure 3: Difference between sequential coding (part-by-part) and progressive coding (quality-by-quality)

The main JPEG compression steps are:

1. Color images are transformed from RGB into a luminance/chrominance color space. The eye is sensitive to small changes in luminance but not in chrominance, so the chrominance part can later lose much data and thus be highly compressed, without visually impairing the overall image quality much. This step is optional but important because the remainder of the algorithm works on each color component separately. Without transforming the color space, none of the three color components will tolerate much loss, leading to worse compression.

2. Color images are downsampled by creating low-resolution pixels from the original ones (this step is used only when hierarchical compression is selected; it is always skipped for grayscale images). The downsampling is not done for the luminance component. Downsampling is done either at a ratio of 2:1 both horizontally and vertically (the so-called 2h2v or 4:1:1 sampling) or at ratios of 2:1 horizontally and 1:1 vertically (2h1v or 4:2:2 sampling).
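The fraction of the original image size that survives each sampling ratio of step 2 can be checked with exact arithmetic (a small sketch; the function name is ours):

```python
from fractions import Fraction

def reduced_size(chroma_factor):
    """Fraction of the original size after downsampling: luminance
    (1 of 3 components) is untouched; the two chrominance components
    are each reduced by chroma_factor."""
    return Fraction(1, 3) + Fraction(2, 3) * chroma_factor

print(reduced_size(Fraction(1, 4)))  # 1/2  (2h2v: 2:1 both directions)
print(reduced_size(Fraction(1, 2)))  # 2/3  (2h1v: 2:1 horizontally only)
```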
Since this is done on two of the three color components, 2h2v reduces the image to 1/3 + (2/3)×(1/4) = 1/2 its original size, while 2h1v reduces it to 1/3 + (2/3)×(1/2) = 2/3 its original size. Since the luminance component is not touched, there is no noticeable loss of image quality. Grayscale images don't go through this step.

Figure 4: JPEG encoder and decoder

Figure 5: JPEG encoder

Figure 6: Scheme of the JPEG for RGB images

3. The pixels of each color component are organized in groups of 8×8 pixels called data units, and each data unit is compressed separately. If the number of image rows or columns is not a multiple of 8, the bottom row and the rightmost column are duplicated as many times as necessary. In the noninterleaved mode, the encoder handles all the data units of the first image component, then the data units of the second component, and finally those of the third component. In the interleaved mode the encoder processes the three top-left data units of the three image components, then the three data units to their right, and so on.

4. The discrete cosine transform is then applied to each data unit to create an 8×8 map of frequency components. They represent the average pixel value and successive higher-frequency changes within the group. This prepares the image data for the crucial step of losing information.

5. Each of the 64 frequency components in a data unit is divided by a separate number called its quantization coefficient (QC), and then rounded to an integer. This is where information is irretrievably lost.
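Step 5 can be sketched as elementwise division followed by rounding (the coefficient and QC values below are hypothetical):

```python
import numpy as np

def quantize_unit(G, Q):
    """Divide each DCT coefficient by its quantization coefficient
    (QC) and round to the nearest integer (np.rint rounds halves to
    the nearest even integer)."""
    return np.rint(G / Q).astype(int)

# Hypothetical 2x2 corner of a coefficient block and its QCs:
G = np.array([[180.0, 42.5], [-30.0, 6.0]])
Q = np.array([[16.0, 11.0], [12.0, 12.0]])
print(quantize_unit(G, Q))
# Small coefficients with large QCs collapse to zero; this loss
# is what the inverse step cannot undo.
```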
Large QCs cause more loss, so the high-frequency components typically have larger QCs. Each of the 64 QCs is a JPEG parameter and can, in principle, be specified by the user. In practice, most JPEG implementations use the QC tables recommended by the JPEG standard for the luminance and chrominance image components.

6. The 64 quantized frequency coefficients (which are now integers) of each data unit are encoded using a combination of RLE and Huffman coding.

7. The last step adds headers and all the required JPEG parameters, and outputs the result. The compressed file may be in one of three formats: (1) the interchange format, in which the file contains the compressed image and all the tables needed by the decoder (mostly quantization tables and tables of Huffman codes), (2) the abbreviated format for compressed image data, where the file contains the compressed image and may contain no tables (or just a few tables), and (3) the abbreviated format for table-specification data, where the file contains just tables, and no compressed image. The second format makes sense in cases where the same encoder/decoder pair is used, and they have the same tables built in. The third format is used in cases where many images have been compressed by the same encoder, using the same tables. When those images need to be decompressed, they are sent to a decoder preceded by one file with table-specification data.

The JPEG decoder performs the reverse steps. (Thus, JPEG is a symmetric compression method.) Figures 4 and 5 show the block diagram of the JPEG encoder and decoder. Figure 6 shows JPEG for RGB images.

7.1 Modes of the JPEG algorithm

The progressive mode is a JPEG option. In this mode, higher-frequency DCT coefficients are written on the compressed stream in blocks called "scans." Each scan that is read and processed by the decoder results in a sharper image.
The idea is to use the first few scans to quickly create a low-quality, blurred preview of the image, and then either input the remaining scans or stop the process and reject the image. The trade-off is that the encoder has to save all the coefficients of all the data units in a memory buffer before they are sent in scans, and also go through all the steps for each scan, slowing down the progressive mode.

In the hierarchical mode, the encoder stores the image several times in the output stream, at several resolutions. However, each high-resolution part uses information from the low-resolution parts of the output stream, so the total amount of information is less than that required to store the different resolutions separately. Each hierarchical part may use the progressive mode. The hierarchical mode is useful in cases where a high-resolution image needs to be output in low resolution. Older dot-matrix printers may be a good example of a low-resolution output device still in use.

The lossless mode of JPEG calculates a "predicted" value for each pixel, generates the difference between the pixel and its predicted value, and encodes the difference using the same method (i.e., Huffman or arithmetic coding) employed by step 5 above. The predicted value is calculated using values of pixels above and to the left of the current pixel (pixels that have already been input and encoded).

7.2 Why DCT?

The JPEG committee elected to use the DCT because of its good performance, because it does not assume anything about the structure of the data (the DFT, for example, assumes that the data to be transformed is periodic), and because there are ways to speed it up. The DCT has two key advantages: the decorrelation of the information, by generating coefficients that are almost independent of each other, and the concentration of this information in a greatly reduced number of coefficients. It reduces redundancy while guaranteeing a compact representation.
The JPEG standard calls for applying the DCT not to the entire image but to data units (blocks) of 8×8 pixels. The reasons for this are: (1) Applying the DCT to large blocks involves many arithmetic operations and is therefore slow; applying the DCT to small data units is faster. (2) Experience shows that, in a continuous-tone image, correlations between pixels are short range. A pixel in such an image has a value (color component or shade of gray) that's close to those of its near neighbors, but has nothing to do with the values of far neighbors. The JPEG DCT is therefore executed on data units of 8×8 pixels.

The DCT is JPEG's key to lossy compression. The unimportant image information is reduced or removed by quantizing the 64 DCT coefficients, especially the ones located toward the lower-right. If the pixels of the image are correlated, quantization does not degrade the image quality much. For best results, each of the 64 coefficients is quantized by dividing it by a different quantization coefficient (QC). All 64 QCs are parameters that can be controlled, in principle, by the user.

Mathematically, the DCT is a one-to-one mapping of 64-point vectors from the image domain to the frequency domain. The IDCT is the reverse mapping. If the DCT and IDCT could be calculated with infinite precision and if the DCT coefficients were not quantized, the original 64 pixels would be exactly reconstructed.

7.3 Quantization

After each 8×8 data unit of DCT coefficients Gij is computed, it is quantized. This is the step where information is lost (except for some unavoidable loss because of finite-precision calculations in other steps). Each number in the DCT coefficients matrix is divided by the corresponding number from the particular "quantization table" used, and the result is rounded to the nearest integer.
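Since the 2-D DCT is separable, an 8×8 data unit can be transformed by applying a 1-D DCT basis matrix to its columns and then to its rows (a sketch; for a perfectly uniform block, all the energy lands in the DC coefficient):

```python
import numpy as np

N = 8
# 1-D DCT basis matrix: T[u, x] = a(u) * cos((2x+1) u pi / 2N)
u, x = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
T = np.sqrt(2 / N) * np.cos((2 * x + 1) * u * np.pi / (2 * N))
T[0, :] = np.sqrt(1 / N)

def dct2(block):
    """Separable 2-D DCT: 1-D DCT of the columns, then of the rows."""
    return T @ block @ T.T

# A perfectly uniform (maximally correlated) data unit:
block = np.full((N, N), 100.0)
coeffs = dct2(block)
# DC coefficient = 8 * mean pixel value; every AC coefficient is ~0.
print(round(float(coeffs[0, 0]), 3))   # 800.0
```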
As has already been mentioned, three such tables are needed, for the three color components. The JPEG standard allows for up to four tables, and the user can select any of the four for quantizing each color component. The 64 numbers that constitute each quantization table are all JPEG parameters. In principle, they can all be specified and fine-tuned by the user for maximum compression. In practice, few users have the patience or expertise to experiment with so many parameters, so JPEG software normally uses the following two approaches:

1. Default quantization tables. Two such tables, for the luminance (grayscale) and the chrominance components, are the result of many experiments performed by the JPEG committee. They are included in the JPEG standard and are reproduced here as Table 1. It is easy to see how the QCs in the table generally grow as we move from the upper-left corner to the bottom-right corner. This is how JPEG reduces the DCT coefficients with high spatial frequencies.

2. A simple quantization table Q is computed based on one parameter R specified by the user. A simple expression such as Qij = 1 + (i + j)·R guarantees that QCs start small at the upper-left corner and get bigger toward the lower-right corner. Table 2 shows an example of such a table with R = 2.

    Luminance                              Chrominance
    16  11  10  16  24  40  51  61        17  18  24  47  99  99  99  99
    12  12  14  19  26  58  60  55        18  21  26  66  99  99  99  99
    14  13  16  24  40  57  69  56        24  26  56  99  99  99  99  99
    14  17  22  29  51  87  80  62        47  66  99  99  99  99  99  99
    18  22  37  56  68 109 103  77        99  99  99  99  99  99  99  99
    24  35  55  64  81 104 113  92        99  99  99  99  99  99  99  99
    49  64  78  87 103 121 120 101        99  99  99  99  99  99  99  99
    72  92  95  98 112 100 103  99        99  99  99  99  99  99  99  99

Table 1: Recommended Quantization Tables.

If the quantization is done correctly, very few nonzero numbers will be left in the DCT coefficients matrix, and they will typically be concentrated in the upper-left region.
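The second approach can be sketched in a few lines (the function name is ours):

```python
def simple_qtable(R, n=8):
    """Quantization table Q[i][j] = 1 + (i + j) * R.

    QCs start small at the upper-left corner (low frequencies) and
    grow toward the lower-right corner (high frequencies).
    """
    return [[1 + (i + j) * R for j in range(n)] for i in range(n)]

Q = simple_qtable(2)          # R = 2 reproduces Table 2
print(Q[0][0], Q[7][7])       # 1 29
```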
These numbers are the output of JPEG, but they are further compressed before being written on the output stream. In the JPEG literature this compression is called "entropy coding." Three techniques are used by entropy coding to compress the 8×8 matrix of integers:

     1   3   5   7   9  11  13  15
     3   5   7   9  11  13  15  17
     5   7   9  11  13  15  17  19
     7   9  11  13  15  17  19  21
     9  11  13  15  17  19  21  23
    11  13  15  17  19  21  23  25
    13  15  17  19  21  23  25  27
    15  17  19  21  23  25  27  29

Table 2: The Quantization Table 1 + (i + j) × 2.

1. The 64 numbers are collected by scanning the matrix in zigzags. This produces a string of 64 numbers that starts with some nonzeros and typically ends with many consecutive zeros. Only the nonzero numbers are output (after further compressing them) and are followed by a special end-of-block (EOB) code. This way there is no need to output the trailing zeros (we can say that the EOB is the run-length encoding of all the trailing zeros).

2. The nonzero numbers are compressed using Huffman coding.

3. The first of those numbers (the DC coefficient) is treated differently from the others (the AC coefficients).

7.4 Coding

Each 8×8 matrix of quantized DCT coefficients contains one DC coefficient [at position (0, 0), the top-left corner] and 63 AC coefficients. The DC coefficient is a measure of the average value of the 64 original pixels constituting the data unit. Experience shows that in a continuous-tone image, adjacent data units of pixels are normally correlated in the sense that the average values of the pixels in adjacent data units are close. We already know that the DC coefficient of a data unit is a multiple of the average of the 64 pixels constituting the unit. This implies that the DC coefficients of adjacent data units don't differ much. JPEG outputs the first one (encoded), followed by differences (also encoded) of the DC coefficients of consecutive data units.
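Techniques 1 and 3 can be sketched as follows: a zigzag index generator, a (zero-run, value) pair encoder terminated by EOB, and differential coding of the DC coefficients (all function names and sample values are ours):

```python
def zigzag_order(n=8):
    """Indices of an n x n block in zigzag order: walk the
    anti-diagonals, alternating direction."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

def rle_pairs(v):
    """(zero-run-length, value) pairs; all trailing zeros collapse
    into a single EOB marker."""
    last = max((k for k, val in enumerate(v) if val != 0), default=-1)
    pairs, run = [], 0
    for val in v[:last + 1]:
        if val == 0:
            run += 1
        else:
            pairs.append((run, val))
            run = 0
    return pairs + ['EOB']

def dc_differences(dc):
    """The first DC coefficient is sent as-is; each later one as the
    difference from the previous data unit's DC coefficient."""
    return [dc[0]] + [b - a for a, b in zip(dc, dc[1:])]

print(zigzag_order(8)[:6])   # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
print(rle_pairs([26, -3, 0, 1, 0, 0, 2, 0, 0]))
print(dc_differences([1118, 1114, 1119]))   # [1118, -4, 5]
```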
Example: If the first three 8×8 data units of an image have quantized DC coefficients of 1118, 1114, and 1119, then the JPEG output for the first data unit is 1118 (Huffman encoded), followed by the 63 (encoded) AC coefficients of that data unit. The output for the second data unit will be 1114 − 1118 = −4 (also Huffman encoded), followed by the 63 (encoded) AC coefficients of that data unit, and the output for the third data unit will be 1119 − 1114 = 5 (also Huffman encoded), again followed by the 63 (encoded) AC coefficients of that data unit. This way of handling the DC coefficients is worth the extra trouble, because the differences are small.

Assume that 46 bits encode one color component of the 64 pixels of a data unit, and assume that the other two color components are also encoded into 46-bit numbers. If each pixel originally consists of 24 bits, then this corresponds to a compression factor of 64 × 24/(46 × 3) ≈ 11.13; very impressive!

Each quantized spectral domain is composed of a few nonzero quantized coefficients and a majority of zero coefficients eliminated in the quantization stage. The positioning of the zeros changes from one block to another. As shown in Figure 7, a zigzag scanning of the block is performed in order to create a vector of coefficients with a lot of zero run-lengths. Natural images generally have low-frequency characteristics. By beginning the zigzag scanning at the top left (by the low-frequency zone), the vector generated will at first contain significant coefficients, and then more and more run-lengths of zeros as we move towards the high-frequency coefficients. Figure 7 gives us an example.

Figure 7: Zigzag scanning of a quantized DCT domain, the resulting coefficient vector, and the generation of pairs (zero run-length, DCT coefficient). EOB stands for "end of block."

Pairs of (zero run-length, DCT coefficient value) are then generated and coded by a set of Huffman coders defined in the JPEG standard. The mean values of the blocks (DC coefficients) are coded separately by a DPCM method. Finally, the ".jpg" file is constructed with the union of the bitstreams associated with the coded blocks.

Why the zigzag scan:

1. To group the low-frequency coefficients at the top of the vector.
2. It maps the 8×8 matrix to a 1×64 vector.
3. The zigzag scan is more effective.

8. JPEG-LS

JPEG-LS is a new standard for the lossless (or near-lossless) compression of continuous-tone images. JPEG-LS examines several of the previously seen neighbors of the current pixel, uses them as the context of the pixel, uses the context to predict the pixel and to select a probability distribution out of several such distributions, and uses that distribution to encode the prediction error with a special Golomb code. There is also a run mode, where the length of a run of identical pixels is encoded. Figure 8 below shows the block diagram of the JPEG-LS encoder.

Figure 8: JPEG-LS block diagram

The context used to predict the current pixel x is shown in Figure 9. The encoder examines the context pixels and decides whether to encode the current pixel x in the run mode or in the
