2015 Chapter 10 MMS IT


Chapter 10

BASIC VIDEO COMPRESSION TECHNIQUES
Contents
10.1. Introduction to Video Compression
10.2. Video Compression based on Motion Compensation

1/11/2023 CH-10 2
10.1. Introduction to Video Compression
• A video consists of a time-ordered sequence of frames (images). An obvious approach to video
compression is predictive coding based on previous frames. For example, suppose we use a
predictor whose prediction is simply the previous frame. Compression then proceeds by
subtracting images: instead of subtracting an image from itself (i.e., taking a spatial derivative),
we subtract in time order and code the residual error.
• Video compression techniques take advantage of the repetition of portions of the picture from one
image to the next by concentrating on the changes between neighboring images. There are two
types of redundancy in video frames:
• Spatial redundancy: pixel-to-pixel or spectral correlation within the same frame.
• Temporal redundancy: similarity between two or more different frames.
• The MPEG video compression algorithm relies on two basic techniques:
– Motion compensation for the reduction of temporal redundancy, and
– Transform-domain (DCT) based compression for the reduction of spatial redundancy.
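The predictive-coding idea above can be sketched in a few lines (a toy example with made-up pixel values; real codecs operate on full frames):

```python
import numpy as np

# Hypothetical 8-bit frames; the predictor is simply the previous frame.
prev_frame = np.array([[100, 101], [102, 103]], dtype=np.int16)
curr_frame = prev_frame.copy()
curr_frame[0, 1] += 3  # a small change between consecutive frames

# Code the residual error (difference in time order), not the frame itself.
residual = curr_frame - prev_frame

# The decoder recovers the frame exactly from the previous frame + residual.
reconstructed = prev_frame + residual
print(residual)  # mostly zeros: cheap to compress
```

Because the residual is mostly zeros, it has far lower entropy than the frame itself, which is exactly what the entropy coder exploits.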

10.1. Introduction to Video Compression…
Video compression – MPEG encoding
• The Moving Picture Experts Group (MPEG) method is used to compress video. In principle, a motion picture is a
rapid sequence of frames, each of which is a picture. In other words, a frame is a spatial combination of
pixels, and a video is a temporal combination of frames that are sent one after another.
Compressing video, then, means spatially compressing each frame and temporally compressing a set of frames.
• Spatial compression: The spatial compression of each frame is done with JPEG, or a modification of it. Each
frame is a picture that can be independently compressed.
• Temporal compression: In temporal compression, redundant frames are removed. When we watch television, for
example, we receive 30 frames per second. However, most of the consecutive frames are almost the same.
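The temporal step can be sketched as dropping frames whose difference from the last kept frame is negligible. This is a toy illustration (the threshold value and the frames are hypothetical, not from the MPEG standard):

```python
import numpy as np

# Hypothetical sequence: three nearly identical frames, then a scene change.
rng = np.random.default_rng(1)
base = rng.integers(0, 256, size=(16, 16)).astype(np.int16)
frames = [base, base.copy(), base.copy(),
          rng.integers(0, 256, size=(16, 16)).astype(np.int16)]

THRESHOLD = 1.0  # mean absolute difference below this => "redundant" frame
kept = [0]       # always keep the first frame
for i in range(1, len(frames)):
    mad = np.abs(frames[i] - frames[kept[-1]]).mean()
    if mad >= THRESHOLD:
        kept.append(i)  # keep only frames that differ noticeably

print(kept)  # frames 1 and 2 are dropped as redundant
```

Real MPEG temporal compression is more refined than dropping frames outright: it codes the differences, as the next section describes.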

10.2. Video Compression based on Motion Compensation
• Consecutive frames in a video are similar; in other words, temporal redundancy exists. This
redundancy is exploited so that not every frame of the video needs to be coded independently
as a new image. Instead, the difference between the current frame and other frame(s) in the
sequence is coded; this difference typically has small values and low entropy, which is good for compression.
• A video can be viewed as a sequence of images stacked in the temporal dimension. Since the
frame rate of the video is often relatively high (e.g., >15 frames per second) and the
camera parameters (focal length, position, viewing angle, etc.) usually do not change rapidly
between frames, the contents of consecutive frames are usually similar, unless certain objects
in the scene move extremely fast. In other words, the video has temporal redundancy.
• Temporal redundancy is often significant, and it is exploited so that not every frame of the
video needs to be coded independently as a new image. Instead, the difference between the
current frame and other frame(s) in the sequence is coded. If the redundancy between them is
great enough, the difference images consist mainly of small values and have low entropy,
which is good for compression.
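The entropy claim can be checked numerically (a sketch with synthetic frames; the second frame differs from the first only by small pixel-level changes):

```python
import numpy as np

def shannon_entropy(img):
    """Empirical entropy in bits per pixel of an integer-valued image."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Two hypothetical consecutive frames with strong temporal redundancy.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
frame2 = frame1 + rng.integers(-1, 2, size=(64, 64)).astype(np.int16)

diff = frame2 - frame1  # values in {-1, 0, 1}: small and low-entropy
e_frame = shannon_entropy(frame2)
e_diff = shannon_entropy(diff)
print(f"frame: {e_frame:.2f} bits/px, difference: {e_diff:.2f} bits/px")
```

The full frame needs close to 8 bits per pixel here, while the difference image needs well under 2, which is why coding differences pays off.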

10.2. Video Compression based on Motion Compensation…
• Steps of Video compression based on Motion Compensation (MC):
– Motion Estimation (motion vector search).
– Motion Compensation based Prediction.
– Derivation of the prediction error, i.e., the difference.
• For efficiency, each image is divided into macroblocks of size N x N. By default, N = 16 for
luminance images; for chrominance images, N = 8 if 4:2:0 chroma subsampling is adopted.
Motion compensation is not performed at the pixel level, nor at the level of the video object as
in later video standards (such as MPEG-4); instead, it is performed at the macroblock level. The current
image frame is referred to as the Target frame. A match is sought between the macroblock
under consideration in the Target frame and the most similar macroblock in previous and/or
future frame(s) [referred to as Reference frame(s)]. In that sense, the Target macroblock is
predicted from the Reference macroblock.
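The three steps above can be sketched on a single macroblock. This is a minimal full-search sketch with illustrative frame contents, block size, and search range; real codecs use 16 x 16 luma macroblocks, larger search windows, and faster search strategies:

```python
import numpy as np

N, P = 8, 4  # macroblock size and search range (illustrative values)

def motion_estimate(target, reference, x, y):
    """Full search: find the motion vector (dx, dy) minimizing the Sum of
    Absolute Differences (SAD) between the target macroblock at (x, y)
    and candidate macroblocks in the reference frame."""
    block = target[y:y+N, x:x+N].astype(np.int32)
    best_mv, best_sad = (0, 0), None
    for dy in range(-P, P + 1):
        for dx in range(-P, P + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + N > reference.shape[0] or rx + N > reference.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = reference[ry:ry+N, rx:rx+N].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv

# Reference frame: a bright square; Target frame: the same square moved
# 2 pixels right and 1 pixel down.
reference = np.zeros((32, 32), dtype=np.uint8)
reference[8:16, 8:16] = 200
target = np.zeros((32, 32), dtype=np.uint8)
target[9:17, 10:18] = 200

# Step 1: motion estimation for the target macroblock at (8, 8).
dx, dy = motion_estimate(target, reference, 8, 8)
# Step 2: motion-compensated prediction from the reference macroblock.
predicted = reference[8+dy:8+dy+N, 8+dx:8+dx+N].astype(np.int32)
# Step 3: the prediction error (residual), which is what gets coded.
error = target[8:16, 8:16].astype(np.int32) - predicted
print((dx, dy), int(np.abs(error).sum()))  # a perfect match gives error 0
```

The vector found points from the target block back into the reference frame, so the decoder can rebuild the target macroblock from the reference macroblock plus the coded residual.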
