
Media Representation

Department of Software Engineering


Digital Images

• All images are represented digitally as pixels.

• An image is defined by image width, height, and pixel depth.


• The image width gives the number of pixels that span the
image horizontally and the image height gives the number of
lines in the image.
• Each pixel is further represented by a number of bits, which is
commonly called the pixel depth.
• The number of bits used per pixel in an image depends on the
color space representation (gray or color) and is typically
segregated into channels.
Digital Images

• Three channels represent the red, green, and blue components.
• The total number of bits per pixel is, thus, the sum of the
number of bits used in each channel.
• For instance, in grayscale images, the gray-level value is
encoded on 8 bits for each pixel.
• In color images, each R, G, B channel may be represented by
8 bits each, or 24 bits for a pixel.
Digital Images

• The size of the image can, thus, vary depending on the representations used.
• For example, a color image has a width of 640 and a height of 480. If the R, G, B color channels are represented by 8 bits each, the size of the color image is
• 640 * 480 * 3 * 8 = 7.37 Mbits (921.6 Kbytes).

• If this were a gray image, its size would be


• 640 * 480 * 8 = 2.45 Mbits (307.2 Kbytes)
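A minimal sketch of this calculation in Python (function and variable names are illustrative, not from any library):

def raw_image_size_bits(width, height, channels, bits_per_channel=8):
    # Raw (uncompressed) size of an image in bits.
    return width * height * channels * bits_per_channel

color_bits = raw_image_size_bits(640, 480, channels=3)  # RGB, 8 bits per channel
gray_bits = raw_image_size_bits(640, 480, channels=1)   # grayscale

print(color_bits / 1e6, "Mbits /", color_bits / 8 / 1e3, "Kbytes")  # 7.3728 Mbits / 921.6 Kbytes
print(gray_bits / 1e6, "Mbits /", gray_bits / 8 / 1e3, "Kbytes")    # 2.4576 Mbits / 307.2 Kbytes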
Digital Images

10 by 10 image and its matrix


Digital Images

Aspect Ratio

• Image aspect ratio refers to the width/height ratio of the images, and plays an important role in standards.
• Different applications require different aspect ratios. Some of the commonly used aspect ratios for images are 3:2 (when developing and printing photographs), 4:3 (television images), 16:9 (high-definition images), and 47:20 (anamorphic formats used in cinemas).
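As a quick illustration, an image's aspect ratio can be reduced to its simplest form from the frame's width and height; a small Python sketch is shown below (the 1920 x 1080 resolution used here is just a common HD example, not taken from the slide):

from math import gcd

def aspect_ratio(width, height):
    # Reduce width:height to its simplest integer ratio.
    d = gcd(width, height)
    return width // d, height // d

print(aspect_ratio(640, 480))    # (4, 3)  -> television images
print(aspect_ratio(1920, 1080))  # (16, 9) -> high-definition images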
Digital Images
Aspect Ratio
(example images with 4:3, 16:9, and 47:20 aspect ratios)
Digital Video

Representation of Digital Video

• Video is represented by a sequence of discrete images


shown in quick succession.
• Each image in the video is called a frame, which is
represented as a matrix of pixels defined by a width, height,
and pixel depth.
• The pixel depth is represented in a standardized color space
such as RGB.
Digital Video

Representation of Digital Video

• The rate at which the images are shown is the frame rate
• Film is displayed at 24 frames per second. Television
standards use 30 frames per second (NTSC) or 25 frames per
second (PAL). If the frame rate is too slow, the human eye
perceives an unevenness of motion called flicker
• Although digital video can be considered a three-dimensional
signal—a 2D image changing over time—analog video is
converted to a 1D signal of scan lines
• This scan line conversion was introduced to make analog
television broadcast technology work.
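A minimal sketch of what this three-dimensional view implies for data volume, assuming uncompressed 640 x 480 RGB frames at 8 bits per channel (the resolutions and rates below follow the examples in these slides; the function name is illustrative):

def raw_video_rate_mbits(width, height, channels, bits_per_channel, frame_rate):
    # Raw bit rate of uncompressed video in Mbits per second.
    return width * height * channels * bits_per_channel * frame_rate / 1e6

print(raw_video_rate_mbits(640, 480, 3, 8, 30))  # NTSC: ~221.2 Mbits/s
print(raw_video_rate_mbits(640, 480, 3, 8, 25))  # PAL:  ~184.3 Mbits/s
print(raw_video_rate_mbits(640, 480, 3, 8, 24))  # film: ~176.9 Mbits/s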
Digital Video

Representation of Digital Video

• The electron gun(s) in a television project electrons on the phosphor screen from left to right in a scan line manner and from top to bottom successively for each frame.
• The phosphor screen glows at each location on a scan line
creating a color at all positions on the line.
• The color glow fades quickly, but the frame rate ensures that
electron gun(s) recolor the scan line before it fades.
• Scanning formats, which are an outcome of the analog technology, can be either interlaced or progressive.
Digital Video

Representation of Digital Video

• Digital display technologies display media in a digital format.


• Digital video display on these devices, such as LCD or
plasma, does not require the scanning mechanism
described previously. However, when the technology for
digital video started to evolve, the television instruments were
still rendering analog signals only.
• As a result, the digital video standards have their
representations and formats closely tied to analog TV
standards—NTSC (developed by the National Television
Systems Committee), PAL (Phase Alternating Line), and
SECAM (Système Electronique Couleur Avec Mémoire).
Analog Video

Representation of Analog Video

• Although digital video is thought of as a three-dimensional signal in space and time, the analog video signal used in broadcast is scanned as a one-dimensional signal in time, where the spatiotemporal information is ordered as a function of time according to a predefined scanning convention.
• This 1D signal captures the time-varying image intensity
information only along scanned lines.
• Television requires this analog scanned information to be
broadcast from a broadcast station to all users
Analog Video
Representation of Analog Video
Interlaced Scan
The top figure shows the upper "odd" field consisting of odd-numbered lines. The bottom shows a lower "even" field interspersed with the odd field.
Analog Video
Representation of Analog Video
Progressive Scan
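A minimal sketch of the difference in scan-line ordering, assuming lines are numbered from 1 at the top and the odd field is drawn first (as in the figure above):

def interlaced_order(num_lines):
    # Odd-numbered lines first ("odd" field), then the even-numbered lines
    # ("even" field) interspersed between them.
    odd_field = list(range(1, num_lines + 1, 2))
    even_field = list(range(2, num_lines + 1, 2))
    return odd_field + even_field

def progressive_order(num_lines):
    # Every line in order, top to bottom.
    return list(range(1, num_lines + 1))

print(interlaced_order(10))   # [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
print(progressive_order(10))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]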
Analog Video
YUV subsampling

RGB and YUV schemes

• Video frames, like images, are represented using a color format, which is normally RGB. This RGB color space is used by cathode-ray tube–based display devices, such as the television, to display and render the video signal. For transmission purposes, however, the RGB signal is transformed into a YUV signal.
• The YUV color space aims to decouple the intensity
information (Y or luminance) from the color information (UV or
chrominance).
• Y or Luminance: this refers to the brightness
• UV or chrominance: this refers to the color
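A minimal sketch of the RGB-to-YUV transform, using the commonly cited BT.601-style coefficients; the exact constants vary between standards, so treat these values as an assumption rather than the definitive formula used in these slides:

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (brightness)
    u = 0.492 * (b - y)                    # chrominance: blue difference
    v = 0.877 * (r - y)                    # chrominance: red difference
    return y, u, v

print(rgb_to_yuv(255, 255, 255))  # white: Y at maximum, U and V near 0
print(rgb_to_yuv(0, 0, 0))        # black: Y = 0, U = V = 0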
YUV subsampling

RGB and YUV schemes

• The separation was intended to reduce the transmission bandwidth and is based on experiments with the human visual system, which suggest that humans are more tolerant of color distortions than of intensity distortions.
• In other words, reducing the color resolution does not noticeably affect our perception.
YUV subsampling

RGB and YUV schemes

• Transmission
YUV subsampling

RGB and YUV schemes

• Reception by color TV
YUV subsampling

RGB and YUV schemes

• Reception by Black and White TV


YUV subsampling

RGB and YUV schemes

• RGB is the simplest scheme.
• In RGB, three different values are stored per pixel.
YUV subsampling

RGB and YUV schemes

• Each value is between 0 and 255 and can be viewed on a 3D graph.
• The three values in combination define the pixel's color.
YUV subsampling

RGB and YUV schemes

• 3D view
YUV subsampling

RGB and YUV schemes

• 3D view
YUV subsampling

RGB and YUV schemes

• Every possible color is located within this cube.
• We can see the top white corner, where each of R, G, and B is at its maximum.
YUV subsampling

RGB and YUV schemes

• The point where each of the R, G, and B values is zero is the black corner.
YUV subsampling

RGB and YUV schemes


• From the black corner to the white corner there is a line that represents the gray levels.
• With 24-bit RGB we can encode about 16 million colors.
• But we can only distinguish about 10 million colors.
YUV subsampling

RGB and YUV schemes


• With 24-bit RGB we can encode about 16 million colors.
• But we can only distinguish about 10 million colors.
• Is the RGB system an efficient way to save or transmit images? No.
YUV subsampling

RGB and YUV schemes


• Is the RGB system an efficient way to save or transmit images? No.
• YUV is an alternative to RGB, through which analog signals could be transmitted with backward compatibility with black-and-white TV.
• YUV consumes less bandwidth than RGB.
• In RGB, interference between the chrominance and luminance information is inevitable and tends to worsen when the signal is weak. This is why fluctuating colors, false colors, and intensity variations are seen.
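A minimal sketch of that backward compatibility: a black-and-white receiver can keep only the Y component and ignore U and V (the pixel values below are purely illustrative):

def black_and_white_view(yuv_pixels):
    # Discard the chrominance components and keep only luminance.
    return [y for (y, u, v) in yuv_pixels]

frame = [(180, -20, 35), (90, 5, -10), (220, 0, 0)]  # illustrative (Y, U, V) samples
print(black_and_white_view(frame))  # [180, 90, 220] -> the grayscale picture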
YUV subsampling

RGB and YUV schemes

• YUV also stores information in three components: Y, U, and V
• Y: luminance, or the brightness of the image (like a gray level)
• U: chrominance blue component
• V: chrominance red component
(Y, U, and V channel images)
YUV subsampling

Representation of Digital Video

(Y, Cb, and Cr channel images)
• Cb or U = chrominance blue component
• Cr or V = chrominance red component
YUV subsampling

RGB and YUV schemes

• YUV 3D view
• Note that the centre of one face of the YUV cube is white.
YUV subsampling

RGB and YUV schemes

• YUV 3D view
• Note that the centre of the opposite face of the YUV cube is black.
YUV subsampling

RGB and YUV schemes

• YUV 3D view
• A line connects the centres of these two faces (white and black). This line represents the full range of the luminance component (Y).
YUV subsampling
RGB and YUV schemes
• YUV 3D view
• Every value of Y is located at the centre of a thin slice of this cube.
• The values of Cb (U) and Cr (V) are the coordinates in that two-dimensional plane.
• Lower values of Cb and Cr are located at the lower-left corner, which is why shades of green are seen there.
Digital Video Formats
Analog TV formats

• The analog TV formats such as NTSC, PAL, and SECAM have been around for a long time and have also made their way into VHS technology.
Digital Video Formats
Analog TV formats

• What do NTSC, PAL, and SECAM stand for?


• NTSC (National Television Systems Committee)
• PAL (Phase Alternating Line)
• SECAM (Système Electronique Couleur Avec Mémoire)
Digital Video Formats
Digital Video Formats

• The digital video formats have been established for digital video applications. The CCIR (Consultative Committee for International Radio) body established the ITU-R BT.601 standard, which has been adopted by the popular DV video applications.
• For example, the CIF format (Common Interchange Format) was established for progressive digital broadcast television. It consists of VHS-quality resolutions whose width and height are divisible by 8, a requirement for digital encoding algorithms.
• The Quarter Common Interchange Format (QCIF) was established for digital videoconferencing over ISDN lines.
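The commonly cited resolutions for these formats are sketched below (standard reference figures, not values taken from the slide's own tables):

formats = {
    "CIF": (352, 288),   # Common Interchange Format
    "QCIF": (176, 144),  # Quarter CIF, digital videoconferencing over ISDN
}

for name, (w, h) in formats.items():
    # Both dimensions are divisible by 8, as required by digital encoding algorithms.
    print(name, w, "x", h, "- divisible by 8:", w % 8 == 0 and h % 8 == 0)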
Digital Video Formats
Digital video formats
(tables of digital video formats)
Digital Video Formats

• 4:2:0 subsampling is a method used in digital image and video compression to reduce the amount of data while preserving image quality to a reasonable degree. It is commonly employed in video codecs like MPEG-2, H.264 (AVC), and H.265 (HEVC) to achieve efficient compression.
• The numbers in "4:2:0" represent the ratio of chrominance (color) samples to luminance (brightness) samples in a digital image or video frame.
Digital Video Formats

• (4:) This number refers to the number of luminance (Y) samples in each horizontal row of a 4-pixel-wide block. In other words, for every 4 pixels in a row, there are 4 luminance samples, one per pixel.
• (2:) This number represents the number of chrominance (color) samples in the first horizontal row of those pixels. For every 4 pixels in a row, there are two samples for chrominance.
• (0:) This number indicates that no additional chrominance samples are taken for the second row of pixels. In other words, the chrominance samples are shared across two horizontal rows. This is the most significant reduction in data, as color information is sampled once per 2 x 2 block of pixels rather than for every pixel.
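A minimal sketch of 4:2:0 chroma subsampling on a chrominance plane stored as a list of rows; averaging each 2 x 2 block is one simple choice of filter, and the plane values are illustrative:

def subsample_420(plane):
    # Keep one chroma sample per 2 x 2 block of pixels (the luminance plane is
    # left at full resolution).
    out = []
    for r in range(0, len(plane) - 1, 2):
        row = []
        for c in range(0, len(plane[r]) - 1, 2):
            block_sum = (plane[r][c] + plane[r][c + 1] +
                         plane[r + 1][c] + plane[r + 1][c + 1])
            row.append(block_sum // 4)
        out.append(row)
    return out

cb = [[100, 102, 98, 96],
      [101, 103, 97, 95],
      [110, 112, 108, 106],
      [111, 113, 107, 105]]
print(subsample_420(cb))  # [[101, 96], [111, 106]] -- 4 chroma samples instead of 16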
