EC8093
DIGITAL IMAGE PROCESSING
UNIT- I
DIGITAL IMAGE FUNDAMENTALS
Topics to be covered in Unit-I
This lecture will cover:
• Steps in Digital Image Processing – Components
• Elements of Visual Perception
• Image Sensing and Acquisition
• Image Sampling and Quantization
• Relationships between pixels
• Color image fundamentals - RGB, HSI models
• Two-dimensional mathematical preliminaries
• 2D transforms - DFT, DCT
Objective
• To understand the fundamentals of digital image
processing.
• To know the sampling & quantization technique.
• To know about the color models.
Learning Outcome
• Elucidate the fundamentals of DIP
• Illustrate and describe the fundamental steps and
components in DIP
• Draw the structure of the human eye and relate each part to
image formation and visual perception
• Differentiate Image sensing and acquisition using Single
Sensor, Sensor Strips and Arrays
• Describe the concept of sampling & quantization techniques
• Explain Image Operations on Pixel Basis
• Differentiate and list the applications of Color Models
• Reproduce the basic concept of mathematical tools used in
DIP
• Solve problems on DFT, DCT
Session 1
Introduction about DIP & Applications
Objective:
• To understand the fundamentals of digital image processing &
its Application
Learning Outcome:
• Summarize about fundamentals of DIP
• List the various Application of DIP
• Summarize the concept of Image Representation
• Illustrate and describe the fundamental steps and
components in DIP
What is an Image?
An image is a spatial representation of a two-
dimensional or three-dimensional scene.
An image is an array, or a matrix, of pixels (picture
elements) arranged in columns and rows.
What is Image Processing?
WHY digital image processing…???
Interest in digital image processing methods stems from
two principal application areas:
1. Improvement of pictorial information for human
interpretation
2. Processing of image data for storage, transmission, and
representation for autonomous machine perception
WHAT IS DIGITAL IMAGE PROCESSING?
DIP Definition:
A discipline in which both the input and output of a
process are images.
Image → Process → Image
What Is Digital Image ?
An image may be defined as a two-dimensional function, f(x, y),
where x and y are spatial (plane) coordinates, and the
amplitude of f at any pair of coordinates (x,y) is called the
intensity or gray level of the image at that point.
Digital Image:
When x, y and the intensity values of f are all finite, discrete
quantities, we call the image a digital image.
Digital Image Processing:
The field of digital image processing refers to processing digital
images by means of a digital computer.
What Is Digital Image ?
An Image:
g(x , y)
Discretization
g(i , j )
Quantization
f(i , j) Digital Image
f(i0 , j0) : Picture Element, Image Element, Pel, Pixel
WHAT IS DIGITAL IMAGE PROCESSING?
Image Processing → Image Analysis → Vision

Low-Level Process:
• Reduce Noise
• Contrast Enhancement
• Image Sharpening

Mid-Level Process:
• Segmentation
• Classification

High-Level Process:
• Making Sense of an Ensemble of Recognized Objects
Origins of Digital Image Processing
One of the first applications of digital images was in the newspaper
industry, when pictures were first sent by submarine cable between
London and New York.
Introduction of the Bartlane cable picture transmission system in the
early 1920s reduced the time required to transport a picture across the
Atlantic from more than a week to less than three hours.
A digital picture produced in
1921 from a coded tape by a
telegraph printer with special
type faces.
Fields that Use Digital Image Processing
Today, there is almost no area of technical endeavor that is
not impacted in some way by digital image processing.
Gamma-Ray Imaging
X-Ray Imaging
Imaging in the Ultraviolet Band
Imaging in the Visible and Infrared Bands
Imaging in the Microwave Band
Imaging in the Radio Band
Gamma-Ray Imaging
Bone scan PET
Major uses of imaging
based on gamma rays
include nuclear medicine.
In nuclear medicine, the
approach is to inject a
patient with a radioactive
isotope that emits gamma
rays as it decays.
Images are produced from
the emissions collected by
gamma ray detectors.
Cygnus loop Reactor valve
X-Ray Imaging
Examples: chest X-ray, angiogram, head CT, printed circuit board (PCB), Cygnus Loop.
Imaging in Ultraviolet Band
• Applications of ultraviolet “light” are
varied. They include lithography,
industrial inspection, microscopy,
lasers, biological imaging, and
astronomical observations.
• We illustrate imaging in this band with
examples from microscopy and
astronomy.
• The ultraviolet light itself is not visible,
but when a photon of ultraviolet
radiation collides with an electron in
an atom of a fluorescent material, it
elevates the electron to a higher
energy level.
• Subsequently, the excited electron
relaxes to a lower level and emits light
in the form of a lower-energy photon
in the visible (red) light region
Applications and Research Topics
Applications and Research Topics
Document Handling
Applications and Research Topics
Signature Verification
Applications and Research Topics
Biometrics
Applications and Research Topics
Fingerprint Verification / Identification
Applications and Research Topics
Object Recognition
Applications and Research Topics
Target Recognition
Department of Defense (Army, Air force, Navy)
Applications and Research Topics
Interpretation of Aerial Photography
Interpretation of aerial photography is a problem domain in both
computer vision and registration.
Applications and Research Topics
Autonomous Vehicles
Land, Underwater, Space
Applications and Research Topics
Traffic Monitoring
Applications and Research Topics
Traffic Monitoring
Applications and Research Topics
Face Detection
Applications and Research Topics
Face Recognition
Applications and Research Topics
Face Detection/Recognition Research
Applications and Research Topics
Facial Expression Recognition
Applications and Research Topics
Hand Gesture Recognition
Smart Human-Computer User Interfaces
Sign Language Recognition
Applications and Research Topics
Human Activity Recognition
Applications and Research Topics
Medical Applications
skin cancer breast cancer
Applications and Research Topics
Morphing
Applications and Research Topics
Inserting Artificial Objects into a Scene
Learning Assessment!!! (Direct Poll)
http://etc.ch/kZQF
Session 2
Fundamental Steps in DIP
Objective:
• To understand the fundamentals steps of digital image
processing
Learning Outcome:
• List the fundamental steps of DIP
• Define different processes in DIP
• Illustrate the block diagram of fundamental steps of DIP
Fundamental Steps in Digital Image Processing
Essential steps when processing digital images:
Outputs of these processes generally are digital images:
• Acquisition
• Enhancement
• Restoration
• Color image processing
• Wavelets
• Morphological processing

Outputs of these processes generally are attributes of the image:
• Segmentation
• Representation and description
• Recognition
Fundamental Steps in Digital Image Processing
Image acquisition is the first process.
Generally, the image acquisition stage involves
preprocessing, such as scaling.
Scaling changes the size of the whole image by resampling it
Fundamental Steps in Digital Image Processing
Image enhancement is the process of manipulating an image
so that the result is more suitable than the original for a specific
application.
There is no general “theory” of image enhancement.
When an image is processed for visual interpretation, the
viewer is the ultimate judge of how well a particular method
works.
Fundamental Steps in Digital Image Processing
Image Restoration is an area that also deals with improving the
appearance of an image.
However, unlike enhancement, which is subjective, image
restoration is objective, in the sense that restoration techniques
tend to be based on mathematical or probabilistic models of
image degradation.
Fundamental Steps in Digital Image Processing
Color Image Processing is an area that has been
gaining in importance because of the significant
increase in the use of digital images over the Internet.
Wavelets are the foundation for representing images
in various degrees of resolution.
Fundamental Steps in Digital Image Processing
Compression, as the name implies, deals with
techniques for reducing the storage required to save
an image, or the bandwidth required to transmit it. This
is true particularly in uses of the Internet.
Fundamental Steps in Digital Image Processing
Morphological processing deals with tools for
extracting image components that are useful in the
representation and description of shape.
Segmentation procedures partition an image into its
constituent parts or objects.
A segmentation procedure brings the process a
long way toward successful solution of imaging
problems that require objects to be identified
individually.
In general, the more accurate the segmentation,
the more likely recognition is to succeed.
Fundamental Steps in Digital Image Processing
Representation and description almost always follow the
output of a segmentation stage, which usually is raw pixel
data.
Boundary representation is appropriate when the focus is on
external shape characteristics, such as corners and
inflections.
Regional representation is appropriate when the focus is on
internal properties, such as texture or skeletal shape.
Description, also called feature selection, deals with
extracting attributes that result in some quantitative
information of interest or are basic for differentiating one
class of objects from another.
Recognition is the process that assigns a label (e.g.,
“vehicle”) to an object based on its descriptors. Digital
image processing concludes with the development of methods for
recognition of individual objects.
Learning Assessment!!! (Direct Poll)
http://etc.ch/kZQF
https://directpoll.com/c?XDVhEtaJSgYCRlFBwNAW3VgGY7t8bwQ6
Session 3
Elements of Digital Processing System
Objective:
• To understand the fundamentals elements of digital image
processing
Learning Outcome:
• Identify the fundamental elements/components of DIP
• Illustrate the block diagram of fundamental elements of DIP
System
General Purpose Image Processing System
Components of an Image Processing System
Image Sensors
• Two elements are required to acquire digital images.
• The first is the physical device that is sensitive to the energy
radiated by the object we wish to image (Sensor).
• The second, called a digitizer, is a device for converting the output
of the physical sensing device into digital form.
Specialized Image Processing Hardware
• Usually consists of the digitizer, mentioned before, plus hardware
that performs other primitive operations, such as an arithmetic logic
unit (ALU), which performs arithmetic and logical operations in
parallel on entire images.
• This type of hardware sometimes is called a front-end subsystem,
and its most distinguishing characteristic is speed.
• In other words, this unit performs functions that require fast data
throughputs that the typical main computer cannot handle.
Computer
• The computer in an image processing system is a general-purpose
computer and can range from a PC to a supercomputer.
• In dedicated applications, sometimes specially designed computers are
used to achieve a required level of performance.
Image Processing Software
• Software for image processing consists of specialized modules that
perform specific tasks.
• A well-designed package also includes the capability for the user to write
code that, as a minimum, utilizes the specialized modules.
Mass Storage Capability
• Mass storage capability is a must in image processing
applications.
• An image of size 1024 × 1024 pixels, with 8-bit intensity values, requires
one megabyte of storage space if the image is not compressed.
• Digital storage for image processing applications falls into three
principal categories:
1. Short-term storage for use during processing.
2. On-line storage for relatively fast recall
3. Archival storage, characterized by infrequent access
Mass Storage Capability
• One method of providing short-term storage is computer memory.
Another is by specialized boards, called frame buffers, that store one
or more images and can be accessed rapidly.
• The on-line storage method, allows virtually instantaneous image
zoom, as well as scroll (vertical shifts) and pan (horizontal shifts).
• On-line storage generally takes the form of magnetic disks and
optical-media storage.
• The key factor characterizing on-line storage is frequent access to
the stored data.
• Finally, archival storage is characterized by massive storage
requirements but infrequent need for access.
Image Displays
• The displays in use today are mainly color (preferably flat screen)
TV monitors.
• Monitors are driven by the outputs of the image and graphics
display cards that are an integral part of a computer system.
Hardcopy devices
• Used for recording images, include laser printers, film cameras,
heat-sensitive devices, inkjet units and digital units, such as optical
and CD-Rom disks.
Networking
• Networking is almost a default function in any computer system in use today.
• Because of the large amount of data inherent in image processing
applications, the key consideration in image transmission is
bandwidth.
• In dedicated networks, this typically is not a problem, but
communications with remote sites via the internet are not always as
efficient.
Learning Assessment!!! (Direct Poll)
http://etc.ch/kZQF
Session 4 & 5
Elements of Visual Perception and Image Formation
Objective:
• To understand the mechanism of Visual Perception & Image
Formation in Eye
Learning Outcome:
• Draw and label the anatomy of Eye
• Summarize the mechanism behind Image Formation in eye
• State Brightness Adaptation & Discrimination
• State Mach Band Effect
Preview
• Structure of human eye
• Brightness adaptation and Discrimination
• Image formation in human eye and Image formation model
• Basics of exposure
• Resolution
– Sampling and quantization
• Research issues
Objective:
• To understand the concept of visual perception and various
elements of eye which contributes for perception.
Learning Outcome:
• Draw the structure of Human eye
• Relate each parts of eye to image formation, visual
perception
Structure of the human eye
• The cornea and sclera: outer cover
• The choroid:
– Ciliary body
– Iris diaphragm
– Lens
• The retina
– Cones vision (photopic/bright-light vision): centered
at fovea, highly sensitive to color
– Rods (scotopic/dim-light vision): general view
– Blind spot
Human eye
Cones vs. Rods
Image Formation in the Eye
• The eye lens (compared to an
ordinary optical lens) is flexible.
• Its shape is controlled by the fibers of the
ciliary body: to focus on distant
objects it becomes flatter (and vice
versa).
Image Formation in the Eye
• Distance between the center of the
lens and the retina (focal length):
• varies from 17 mm to 14 mm
(refractive power of lens goes from
minimum to maximum).
• Objects farther than 3 m use
minimum refractive lens powers
(and vice versa).
Image Formation in the Eye
• Example:
• Calculation of the retinal image of an object 15 m high viewed
from a distance of 100 m:
15 / 100 = x / 17  ⇒  x ≈ 2.55 mm
Image Formation in the Eye
Image Formation in the Eye
• Perception takes place by the
relative excitation of light receptors.
• These receptors transform radiant
energy into electrical impulses that
are ultimately decoded by the brain.
Brightness adaptation
• Dynamic range of
human visual system
– on the order of 10^-6 to 10^4
• Cannot accomplish this
range simultaneously
• The current sensitivity
level of the visual
system is called the
brightness adaptation
level
Brightness discrimination
• Weber ratio (from the experiment): ΔIc / I
– I: the background illumination
– ΔIc: the increment of illumination
– Small Weber ratio indicates good discrimination
– Larger Weber ratio indicates poor discrimination
Brightness Adaptation
• Subjective brightness: Intensity as perceived
by the human visual system which is a
logarithmic function of the light intensity
incident on the eye
• Brightness adaptation: For any given set of
conditions, the current sensitivity level of the
visual system is called the brightness
adaptation.
Two phenomena clearly demonstrate that
perceived brightness is not a simple function of
intensity.
• Mach Band Effect
• Simultaneous contrast
MACH BAND EFFECT
• The first is based on the fact that the visual
system tends to undershoot or overshoot around
the boundary of regions of different intensities.
• Figure 2.7(a) shows a striking example of this
phenomenon.
• Although the intensity of the stripes is constant,
we actually perceive a brightness pattern that is
strongly scalloped near the boundaries [Fig.
2.7(c)].
• These seemingly scalloped bands are called Mach
bands after Ernst Mach, who first described the
phenomenon in 1865.
Brightness Adaptation of Human Eye
(cont.)
[Figure: perceived intensity vs. position near the boundary between areas A and B]
In area A the perceived brightness is darker, while in area B it is
brighter. This phenomenon is called the Mach band effect.
MACH BAND EFFECT
• Although the intensity of the stripes is constant, we actually
perceive brightness pattern that is strongly scalloped near the
boundaries. These seemingly scalloped bands are called Mach
bands.
Simultaneous contrast
All small squares have exactly the same intensity
but they appear progressively darker as background becomes lighter.
Simple Image Formation Model
• Binary images: images having only two possible brightness levels (black and
white)
• Gray scale images : “black and white” images
• Color images: can be described mathematically as three gray scale images
• Let f(x,y) be an image function, then
f(x,y) = i(x,y) r(x,y),
where i(x,y): the illumination function
r(x,y): the reflection function
Note: 0 < i(x,y)< ∞ and 0 <r(x,y)< 1.
• For digital images the minimum gray level is usually 0, but the maximum
depends on number of quantization levels used to digitize an image. The
most common is 256 levels, so that the maximum level is 255.
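A minimal NumPy sketch of this model (the illumination gradient, reflectance pattern, and array size below are illustrative assumptions, not values from the slides): form f(x,y) = i(x,y)·r(x,y) and quantize it to 256 gray levels.

```python
import numpy as np

# Synthetic illumination i(x,y): a smooth horizontal gradient (0 < i < infinity).
M, N = 256, 256
i = np.tile(np.linspace(50.0, 200.0, N), (M, 1))

# Synthetic reflectance r(x,y): a bright square on a dark background (0 < r < 1).
r = np.full((M, N), 0.2)
r[96:160, 96:160] = 0.9

# Image formation model: f(x,y) = i(x,y) * r(x,y)
f = i * r

# Quantize to L = 256 gray levels (0 ... 255), the most common case.
f_digital = np.round(255 * (f - f.min()) / (f.max() - f.min())).astype(np.uint8)
print(f_digital.min(), f_digital.max())   # 0 255
```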
Luminance
• Intensity per unit area
• Measured in lumens(lm), gives a measure of the
amount of energy an observer perceives from a
light source
• Emitting or reflecting light from the object
• The luminance of an object is independent of
the luminance of the surrounding objects
BRIGHTNESS
• Brightness is the perceived luminance
• Cannot be measured
• Depends on the luminance of the surround
• Two objects with different surroundings could
have identical luminance but different
brightness
CONTRAST
• Contrast is the difference in visual properties
that makes an object (or its representation in an
image) distinguishable from other objects and
the background
• Contrast is determined by the difference in the
color and brightness of the light reflected or
emitted by an object and other objects within
the same field of view .
HUE
• The hue of a color refers to its “redness”, “greenness” and
so on.
• A hue refers to the gradation of color within the
optical spectrum, or visible spectrum, of light.
• "Hue" may also refer to a particular color within this
spectrum, as defined by its dominant wavelength,
• or the central tendency of its combined wavelengths. For
example, a light wave with a central tendency within 565-
590 nm will be yellow.
SATURATION
• Saturation refers to the intensity of a specific
hue.
• In art or when working with colors, saturation is
the amount of color a certain color has. For
example, black and white have no saturation and
bright red has 100% saturation
Image resolution
• Spatial resolution
– Line pairs per unit distance
– Dots/pixels per unit distance
• dots per inch - dpi
• Intensity resolution
– Smallest discernible change in intensity level
• The more samples in a fixed range, the higher the
resolution
• The more bits, the higher the resolution
The research
• Artificial retina
• Artificial vision
• 3-D interpretation of
line drawing
• Compressed sensing
Summary
• Structure of human eye
– Photo-receptors on retina (cones vs. rods)
• Brightness adaptation
• Brightness discrimination (Weber ratio)
• Be aware of psychovisual effects
• Image formation models
Learning Assessment!!! (Direct Poll)
http://etc.ch/kZQF
Session 6
Image Sensing & Acquisition
Image Sensing and Acquisition
• Images are generated by the combination of an
“illumination” source and the reflection or absorption of
energy from that source by the elements of the “scene”
being imaged.
• Illumination may originate from a source of electromagnetic
energy such as radar, infrared, or X-ray energy.
• Depending on the nature of the source, illumination energy
is reflected from, or transmitted through, objects.
• Examples: Planar surface-Light is reflected, X-rays pass
through a patient’s body for the purpose of generating a
diagnostic X-ray film,
• In some applications, the reflected or transmitted energy is
focused onto a photo converter (e.g., a phosphor screen),
which converts the energy into visible light.
Image Sensing and Acquisition
Fig. Single imaging Sensor
(Ex: Photo diode)
Image Sensing and Acquisition
• Three principal sensor arrangements used to transform
illumination energy into digital images.
• Concept: Incoming energy is transformed into a voltage
by the combination of input electrical power and sensor
material that is responsive to the particular type of
energy being detected. The output voltage waveform is
the response of the sensor(s), and a digital quantity is
obtained from each sensor by digitizing its response.
• Image Acquisition Using a Single Sensor: Most familiar
sensor of this type is the photodiode, which is
constructed of silicon materials and whose output
voltage waveform is proportional to light. The use of a
filter in front of a sensor improves selectivity.
Image Sensing and Acquisition
Combining a single sensor
with motion to generate a
2-D image
• In order to generate a 2-D image using a single sensor, there has to
be relative displacements in both the x- and y-directions between
the sensor and the area to be imaged.
• Figure shows an arrangement used in high-precision scanning,
where a film negative is mounted onto a drum whose mechanical
rotation provides displacement in one dimension.
• The single sensor is mounted on a lead screw that provides motion
in the perpendicular direction. Since mechanical motion can be
controlled with high precision, this method is an inexpensive (but
slow) way to obtain high-resolution images.
• These types of mechanical digitizers sometimes are referred to as
microdensitometers.
Image Acquisition Using Sensor Strips:
• A geometry that is used much more frequently than single
sensors consists of an in-line arrangement of sensors in the form
of a sensor strip, as Figure shows.
• The strip provides imaging elements in one direction. Motion
perpendicular to the strip provides imaging in the other
direction, as shown in Figure below.
Image Acquisition Using Sensor Strips
• This is the type of arrangement used in most flat bed scanners.
• Sensing devices with 4000 or more in-line sensors are possible.
• In-line sensors are used routinely in airborne imaging applications, in
which the imaging system is mounted on an aircraft that flies at a
constant altitude and speed over the geographical area to be imaged.
• One- dimensional imaging sensor strips that respond to various bands
of the electromagnetic spectrum are mounted perpendicular to the
direction of flight.
• The imaging strip gives one line of an image at a time, and the motion
of the strip completes the other dimension of a two-dimensional
image.
Image Acquisition Using Sensor Strips
• Sensor strips mounted in a ring configuration are
used in medical and industrial imaging to obtain
cross-sectional (“slice”) images of 3-D objects, as
Figure shows.
Image Acquisition Using Sensor Strips:
• A rotating X-ray source provides illumination and the portion of
the sensors opposite the source collect the X-ray energy that
pass through the object (the sensors obviously have to be
sensitive to X-ray energy).
• This is the basis for medical and industrial computerized axial
tomography (CAT).
• Output of the sensors must be processed by reconstruction
algorithms whose objective is to transform the sensed data into
meaningful cross-sectional images.
• In other words, images are not obtained directly from the
sensors by motion alone; they require extensive processing.
• A 3-D digital volume consisting of stacked images is generated as
the object is moved in a direction perpendicular to the sensor
ring.
• Other modalities of imaging based on the CAT principle include
magnetic resonance imaging (MRI) and positron emission tomography (PET).
Image Acquisition Using Sensor Arrays:
• Figure shows individual sensors arranged in the form of a 2-D array.
• Numerous electromagnetic and some ultrasonic sensing devices
frequently are arranged in an array format. This is also the predominant
arrangement found in digital cameras.
• A typical sensor for these cameras is a CCD array, which can be
manufactured with a broad range of sensing properties and can be
packaged in rugged arrays of 4000 * 4000 elements or more.
Image Acquisition Using Sensor Arrays:
• CCD sensors are used widely in digital cameras and other
light sensing instruments
• The response of each sensor is proportional to the integral
of the light energy projected onto the surface of the sensor,
a property that is used in astronomical and other
applications requiring low noise images.
• Noise reduction is achieved by letting the sensor integrate
the input light signal over minutes or even hours.
Image Acquisition Using Sensor Arrays:
Fig. An example of the digital image acquisition process (a) Energy
(“illumination”) source (b) An element of a scene (c) Imaging system (d)
Projection of the scene onto the image plane (e) Digitized image
Image Acquisition Using Sensor Arrays:
• This figure shows the energy from an illumination source being
reflected from a scene element
• The energy also could be transmitted through the scene elements.
• The first function performed by the imaging system shown in Fig.
(c) is to collect the incoming energy and focus it onto an image
plane. If the illumination is light, the front end of the imaging
system is a lens, which projects the viewed scene onto the lens
focal plane, as Fig.(d) shows.
• The sensor array, which is coincident with the focal plane, produces
outputs proportional to the integral of the light received at each
sensor.
• Digital and analog circuitry sweep these outputs and convert them
to an analog signal, which is then digitized by another section of the
imaging system.
• The output is a digital image, as shown diagrammatically in Fig. (e).
A Simple Image Model
• When an image is generated from a physical process, its
intensity values are proportional to energy radiated by a
physical source (e.g., electromagnetic waves).
• As a consequence, f(x, y) must be nonzero and finite
• Nature of f(x,y):
– The amount of source light incident on the scene being
viewed
– The amount of light reflected by the objects in the scene
A Simple Image Model
• Illumination & reflectance components:
– Illumination: i(x,y)
– Reflectance: r(x,y)
– f(x,y) = i(x,y) r(x,y)
– 0 < i(x,y) < ∞ and 0 < r(x,y) < 1
(from total absorption to total reflectance)
Learning Assessment!!! (Direct Poll)
http://etc.ch/kZQF
Session 7
Image Sampling & Quantization
Objective:
• To get familiarize with Sampling and Quantization Concept
• To know the sampling & quantization technique
Learning Outcome:
• Differentiate Sampling and Quantization
• Elucidate the concept of Spatial Resolution and Intensity
Resolution
• Define Image Interpolation
Sampling & Quantization
• There are numerous ways to acquire images, but our
objective in all is the same: to generate digital images from
sensed data.
• The output of most sensors is a continuous voltage
waveform whose amplitude and spatial behavior are
related to the physical phenomenon being sensed.
• To create a digital image, we need to convert the
continuous sensed data into digital form.
• This involves two processes: sampling and quantization.
Sampling & Quantization
• The basic idea behind sampling and quantization is illustrated in Fig. 2.16.
• Figure 2.16(a) shows a continuous image f that we want to convert to
digital form.
• An image may be continuous with respect to the x- and y-coordinates,
and also in amplitude.
• To convert it to digital form, we have to sample the function in both
coordinates and in amplitude.
• Digitizing the coordinate values is called sampling.
• Digitizing the amplitude values is called quantization.
Digital Image
Sampling & Quantization
• The spatial and amplitude digitization of f(x,y)
is called:
– image sampling when it refers to spatial
coordinates (x,y) and
– gray-level quantization when it refers to the
amplitude.
Sampling and Quantization
• The section of the real plane spanned by the
coordinates of an image is called the spatial
domain, with x and y being referred to as
spatial variables or spatial coordinates.
Sampling & Quantization
f(x,y) = [ f(0,0)      f(0,1)      ...   f(0,M-1)
           f(1,0)      ...         ...   f(1,M-1)
           ...         ...         ...   ...
           f(N-1,0)    f(N-1,1)    ...   f(N-1,M-1) ]

Digital Image = array of Image Elements / Picture Elements (Pixel, Pel)
Sampling & Quantization
• The digitization process requires decisions
about:
– values for N,M (where N x M: the image array)
and
– the number of discrete gray levels allowed for
each pixel.
Sampling & Quantization
• Usually, in DIP these quantities are integer
powers of two:
N = 2^n, M = 2^m and G = 2^k
(G: number of gray levels)
• Another assumption is that the discrete levels
are equally spaced between 0 and L-1 in the
gray scale.
Examples
Sampling & Quantization
• If b is the number of bits required to store a
digitized image then:
– b = N × M × k (if M = N, then b = N²·k)
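As a quick check of this formula, a tiny helper (illustrative only):

```python
def storage_bits(N, M, k):
    """Number of bits b = N * M * k needed to store an N x M image with k bits/pixel."""
    return N * M * k

# 1024 x 1024 image with k = 8 bits/pixel -> 8,388,608 bits = 1 megabyte
print(storage_bits(1024, 1024, 8) // 8, "bytes")
```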
Storage
Sampling & Quantization
• How many samples and gray levels are
required for a good approximation?
– Resolution (the degree of discernible detail) of an
image depends on sample number and gray level
number.
– i.e. the more these parameters are increased, the
closer the digitized array approximates the original
image.
Sampling & Quantization
• How many samples and gray levels are
required for a good approximation? (cont.)
– But: storage & processing requirements increase
rapidly as a function of N, M, and k
Sampling & Quantization
• Different versions (images) of the same object
can be generated through:
– Varying N, M numbers
– Varying k (number of bits)
– Varying both
Sampling & Quantization
• Conclusions:
– Quality of images increases as N
& k increase
– Sometimes, for fixed N, the
quality improved by decreasing k
(increased contrast)
– For images with large amounts of
detail, few gray levels are needed
Dithering
• Full-color photographs may contain an almost infinite range of color
values
• Dithering is the attempt by a computer program to approximate a color
from a mixture of other colors when the required color is not available
• Dithering is the most common means of reducing the color range of
images down to the 256 (or fewer) colors seen in 8-bit GIF images
• Most images are dithered in a diffusion or
randomized pattern to diminish the harsh
transition from one color to another
• But dithering also reduces the overall
sharpness of an image, and it often
introduces a noticeable grainy pattern in the
image
• This loss of image detail is especially
apparent when full-color photos are
dithered down to the 216-color browser-
safe palette.
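A sketch of one common diffusion dither, Floyd-Steinberg error diffusion, reducing an 8-bit grayscale image to two levels; the slide describes dithering only in general terms, so the particular error-diffusion kernel and level count here are illustrative choices.

```python
import numpy as np

def floyd_steinberg_dither(img, levels=2):
    """Diffusion dithering of a 2-D grayscale array (values 0..255) to `levels` gray levels."""
    out = img.astype(np.float64).copy()
    step = 255.0 / (levels - 1)
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.clip(np.round(old / step) * step, 0, 255)
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error to not-yet-processed neighbors.
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

# Example: dither a smooth gradient down to pure black and white.
gradient = np.tile(np.linspace(0, 255, 256), (64, 1))
binary = floyd_steinberg_dither(gradient, levels=2)
```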
Review: Matrices and Vectors
Definitions
A column vector is an m × 1 matrix:
A row vector is a 1 × n matrix:
A column vector can be expressed as a row
vector by using
the transpose:
Review: Matrices and Vectors
Some Basic Matrix Operations
• The sum of two matrices A and B (of equal
dimension), denoted A + B, is the matrix
with elements aij + bij.
• The difference of two matrices, A − B, has
elements aij − bij.
• The product, AB, of m×n matrix A and p×q
matrix B, is an m×q matrix C whose (i,j)-th
element is formed by multiplying the
entries across the ith row of A times the
entries down the jth column of B; that is,
Review: Matrices and Vectors
Some Basic Matrix Operations (Con’t)
The inner product (also called dot product) of two vectors a and b
is defined as aᵀb = a1·b1 + a2·b2 + ... + an·bn
Note that the inner product is a scalar.
TOEPLITZ MATRIX, CIRCULANT MATRIX
• A Toeplitz matrix has constant elements along the main
diagonal and the subdiagonals.
• Circulant matrix – each of its rows (or
columns) is a circular shift of the previous
row (or column).
ORTHOGONAL AND UNITARY MATRICES
• An orthogonal matrix is such that its inverse is
equal to its transpose:
A⁻¹ = Aᵀ
• A matrix is called unitary if its inverse is equal
to its conjugate transpose:
A⁻¹ = (A*)ᵀ
BLOCK MATRIX
• Any matrix whose elements are matrices
themselves is called a block matrix.
Learning Assessment!!! (Direct Poll)
http://etc.ch/kZQF
Session 8
Color Models
Objective:
• To know about the concept of color models
• To know the application of Color Model Transformations
Learning Outcome:
• Differentiate and list the applications of Color Models
• Reproduce the basic concept of mathematical tools used in
DIP
• Explain Image Operations on Pixel Basis in the context of color
models
Color Fundamentals
• Color Image Processing is divided into two major areas:
• 1) Full-color processing
• Images are acquired with a full-color sensor, such as a
color TV camera or color scanner
• Used in publishing, visualization, and the Internet
• 2) Pseudo color processing
• Assigning a color to a particular monochrome intensity
or range of intensities
Color Fundamentals
• In 1666, Sir Isaac Newton discovered that when a beam of
sunlight passes through a glass prism, the emerging beam
of light is split into a spectrum of colors ranging from violet
at one end to red at the other
Color Fundamentals
• Visible light as a narrow band of frequencies in EM
• A body that reflects light that is balanced in all visible
wavelengths appears white
• However, a body that favors reflectance in a limited range
of the visible spectrum exhibits some shades of color
• Green objects reflect wavelength in the 500 nm to 570 nm
range while absorbing most of the energy at other
wavelengths
Color Fundamentals
• If the light is achromatic (void of color), its only attribute is
its intensity, or amount
• Chromatic light spans EM from 380 to 780 nm
• Three basic quantities to describe the quality:
• 1) Radiance is the total amount of energy that flows from
the light source, and it is usually measured in watts (W)
• 2) Luminance, measured in lumens (lm), gives a
measure of the amount of energy an observer
perceives from a light source
• For example, light emitted from a source operating in the
far infrared region of the spectrum could have significant
energy (radiance), but an observer would hardly perceive
it; its luminance would be almost zero
Color Fundamentals
• 3) Brightness is a subjective descriptor that is practically
impossible to measure. It embodies the achromatic notion
of intensity and is one of the key factors in describing color
sensation
Color Fundamentals
• Cones are the sensors in the eye responsible for color
vision. The 6 to 7 million cones in the human eye can be divided
into three principal categories: red, green, and blue.
Color Fundamentals
• Approximately 65% of all cones are sensitive to red light,
33% are sensitive to green light, and only about 2% are
sensitive to blue (but the blue cones are the most sensitive)
• According to CIE (International Commission on
Illumination) wavelengths of blue = 435.8 nm, green =
546.1 nm, and red = 700 nm
Color Fundamentals
• To distinguish one color from another are brightness, hue,
and saturation
• Brightness embodies the achromatic notion of intensity
• Hue is an attribute associated with the dominant
wavelength in a mixture of light waves. Hue represents
dominant color as perceived by an observer. Thus, when we
call an object red, orange, or yellow, we are referring to its
hue
• Saturation refers to the relative purity, or the amount of white
light mixed with a hue. The pure spectrum colors are fully
saturated, with the degree of saturation being inversely
proportional to the amount of white light added.
Color Fundamentals
• Hue and saturation taken together are called Chromaticity
• Therefore a color may be characterized by its brightness
and chromaticity
• The amounts of red, green, and blue needed to form any
particular color are called the Tristimulus values and are
denoted, X, Y, and Z, respectively
• Tri-chromatic coefficients:
x = X / (X + Y + Z),   y = Y / (X + Y + Z),   z = Z / (X + Y + Z)
and therefore x + y + z = 1
Color Fundamentals
• Another approach for specifying colors is to use the CIE
chromaticity diagram
Color Fundamentals
• For any value of x and y, the corresponding value of z is
obtained by noting that z = 1-(x+y)
• 62% green, 25% red, and 13% blue
• Pure colors are at boundary which are fully saturated
• Any point within boundary represents some mixture of
spectrum colors
• Equal energy and equal fractions of the three primary
colors represents white light
• The saturation at the point of equal energy is zero
• Chromaticity diagram is useful for color mixing
Color Fundamentals
Color Gamut produced
by RGB monitors
Color Gamut produced
by high quality color
printing device
Color Fundamentals
• Printing gamut is irregular because color printing is a
combination of additive and subtractive color mixing, a
process that is much more difficult to control than that of
displaying colors on a monitor, which is based on the
addition of three highly controllable light primaries
Color Models
• Also known as color space or color system
• Purpose is to facilitate the specification of colors in some
standard, generally accepted way
• Oriented either toward hardware (such as monitors and
printers) or toward applications (color graphics for
animation)
• Hardware oriented models most commonly used in
practices are the RGB model for color monitors or color
video cameras, CMY and CMYK models for color printing,
and the HSI model, which corresponds closely with the way
humans describe and interpret color.
• HSI model also has the advantage that it decouples the
color and gray-scale information in an image
RGB Color Models
• Each color appears in its primary spectral components of
red, green, and blue.
• Model based on a Cartesian coordinate system
RGB Color Models
• RGB primary values are at three corners; the secondary
colors cyan, magenta, and yellow are at the other corners;
black is at the origin; and white is at the corner farthest from the origin
• In this model, the gray scale (points of equal
RGB values) extends from black to white
along the line joining these two points
• The different colors in this model are points on or inside
the cube, and are defined by vectors extending from the
origin
• RGB images consist of three component images (R, G, and B planes)
• When fed into an RGB monitor, these three images
combine on the screen to produce a composite color image
RGB Color Models
• Number of bits used to represent each pixel in RGB space is
called the pixel depth
• RGB image has 24 bit pixel depth
• True color or full color image is a 24 bit RGB image
• Total number of colors in a 24-bit image is (2^8)^3 = 16,777,216
RGB Color Models
FIGURE 6.9 (a) Generating the RGB image of the cross-sectional color
plane (127, G, B), shown displayed on an RGB color monitor. (b) The
three hidden surface planes in the color cube of Fig. 6.8: (R = 0), (G = 0), (B = 0).
RGB Color Models
• Many systems in use today are limited to 256 colors; many
applications require even fewer colors.
• Given the variety of systems in current use, it is of
considerable interest to have a subset of colors that are likely
to be reproduced faithfully. This subset of colors is called
the set of safe RGB colors, or the set of all-systems-safe
colors.
• In Internet applications, they are called safe Web colors or
safe browser colors.
• It is assumed that 256 colors is the minimum number
of colors that can be reproduced faithfully by any system.
• Forty of these 256 colors are known to be processed
differently by various operating systems, leaving 216 colors
that are common to most systems.
RGB Color Models
TABLE 6.1 Valid values of each RGB component in a safe color.
Number System   Color Equivalents
Hex             00    33    66    99    CC    FF
Decimal          0    51   102   153   204   255

FIGURE 6.10 (a) The 216 safe RGB colors. (b) All the grays in the
256-color RGB system (grays that are part of the safe color group are
shown underlined).
CMY and CMYK Color Models
• CMY are the secondary colors of light, or, alternatively, the
primary colors of pigments
• For example, when a surface coated with cyan pigment is
illuminated with white light, no red light is reflected from
the surface because cyan subtracts red light from reflected
white light
• Color printers and copiers require CMY data input or
perform RGB to CMY conversion internally
C = 1 − R
M = 1 − G
Y = 1 − B
(assuming all color values have been normalized to the range [0, 1])
CMY and CMYK Color Models
• Equal amounts of the pigment primaries, cyan, magenta,
and yellow should produce black
• In practice, combining these colors for printing produces a
muddy-looking black
• So, in order to produce true black, a fourth color, black, is
added, giving rise to the CMYK color model
HSI Color Model
https://www.lightingschool.eu/portfolio/understanding-the-light/#41-hue-saturation-brightness
• Unfortunately, the RGB, CMY, and other similar color
models are not well suited for describing colors in terms
that are practical for human interpretation
• For example, one does not refer to the color of an
automobile by giving the percentage of the primaries
composing its color
• We do not think of color images as being composed of
three primary images that combine to form that single
image
• When human view a color object, we describe it by its hue,
saturation, and brightness
HSI Color Model
[Figure: color planes with vertices Red, Yellow, Green, Cyan, Blue, Magenta and
White at the center, illustrating hue and saturation]

FIGURE 6.13 Hue and saturation in the HSI color model. The dot is an
arbitrary color point. The angle from the red axis gives the hue, and the
length of the vector is the saturation. The intensity of all colors in any of
these planes is given by the position of the plane on the vertical intensity axis.
Converting from RGB to HSI
H = θ           if B ≤ G
H = 360° − θ    if B > G

where

θ = cos⁻¹ { ½[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) }

S = 1 − 3·min(R, G, B) / (R + G + B)

I = (R + G + B) / 3
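A direct NumPy translation of the RGB-to-HSI equations above (a sketch: it assumes R, G, B are already normalized to [0, 1] and adds a small epsilon to avoid division by zero).

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an H x W x 3 RGB array (values in [0, 1]) to HSI using the formulas above."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10

    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))

    H = np.where(B <= G, theta, 360.0 - theta)                      # hue in degrees
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + eps)
    I = (R + G + B) / 3.0
    return np.stack([H, S, I], axis=-1)

# Pure red should give H = 0, S = 1, I = 1/3.
print(rgb_to_hsi(np.array([[[1.0, 0.0, 0.0]]])))
```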
Converting from HSI to RGB
• RG sector: 0° ≤ H < 120°
B = I(1 − S)
R = I[1 + S·cos H / cos(60° − H)]
G = 3I − (R + B)

• GB sector: 120° ≤ H < 240°
H = H − 120°
R = I(1 − S)
G = I[1 + S·cos H / cos(60° − H)]
B = 3I − (R + G)
Converting from HSI to RGB
• BR sector: 240° ≤ H ≤ 360°
H = H − 240°
G = I(1 − S)
B = I[1 + S·cos H / cos(60° − H)]
R = 3I − (G + B)
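A scalar sketch of the sector equations above for the inverse conversion, assuming H is given in degrees and S, I lie in [0, 1].

```python
import math

def hsi_to_rgb(H, S, I):
    """Convert one HSI triple (H in degrees, S and I in [0, 1]) to an (R, G, B) triple."""
    H = H % 360.0
    if H < 120.0:                                  # RG sector
        B = I * (1 - S)
        R = I * (1 + S * math.cos(math.radians(H)) / math.cos(math.radians(60 - H)))
        G = 3 * I - (R + B)
    elif H < 240.0:                                # GB sector
        H -= 120.0
        R = I * (1 - S)
        G = I * (1 + S * math.cos(math.radians(H)) / math.cos(math.radians(60 - H)))
        B = 3 * I - (R + G)
    else:                                          # BR sector
        H -= 240.0
        G = I * (1 - S)
        B = I * (1 + S * math.cos(math.radians(H)) / math.cos(math.radians(60 - H)))
        R = 3 * I - (G + B)
    return R, G, B

print(hsi_to_rgb(0.0, 1.0, 1.0 / 3.0))   # approximately (1, 0, 0)
```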
HSI Color Model
• HSI (Hue, saturation, intensity) color model, decouples the
intensity component from the color-carrying information
(hue and saturation) in a color image
Intensity to Color Transformations
• Achieving a wider range of pseudocolor enhancement
results than simple slicing technique
• Idea underlying this approach is to perform three
independent transformations on the intensity of any input
pixel
• Three results are then fed separately into the red, green,
and blue channels of a color television monitor
• This produces a composite image whose color content is
modulated by the nature of the transformation functions
• These transformations are functions of intensity, not of position
• Nonlinear function
Intensity to Color Transformations
Introduction to Mathematical Operations in DIP
• Array vs. Matrix Operation

Consider 2 × 2 matrices A = [a11 a12; a21 a22] and B = [b11 b12; b21 b22].

Array (element-wise) product:
A .* B = [ a11·b11   a12·b12 ]
         [ a21·b21   a22·b22 ]

Matrix product:
A * B = [ a11·b11 + a12·b21    a11·b12 + a12·b22 ]
        [ a21·b11 + a22·b21    a21·b12 + a22·b22 ]
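The distinction maps directly onto NumPy's `*` (element-wise array product) and `@` (matrix product); a quick illustration with arbitrary 2 × 2 values.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A * B)   # array (element-wise) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:               [[19 22] [43 50]]
```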
Introduction to Mathematical Operations in DIP
• Linear vs. Nonlinear Operation

Let H denote an operator with H[f(x, y)] = g(x, y). Then H is said to be a
linear operator if

H[a_i·f_i(x, y) + a_j·f_j(x, y)]
    = a_i·H[f_i(x, y)] + a_j·H[f_j(x, y)]      (additivity and homogeneity)
    = a_i·g_i(x, y) + a_j·g_j(x, y)

H is said to be a nonlinear operator if it does not meet the above
qualification.
Arithmetic Operations
• Arithmetic operations between images are array operations.
The four arithmetic operations are denoted as
s(x,y) = f(x,y) + g(x,y)
d(x,y) = f(x,y) – g(x,y)
p(x,y) = f(x,y) × g(x,y)
v(x,y) = f(x,y) ÷ g(x,y)
Example: Addition of Noisy Images for Noise Reduction
Noiseless image: f(x,y)
Noise: n(x,y) (at every pair of coordinates (x,y), the noise is uncorrelated and has
zero average value)
Corrupted image: g(x,y)
g(x,y) = f(x,y) + n(x,y)
Reducing the noise by averaging a set of K noisy images {gi(x,y)}:

ḡ(x,y) = (1/K) Σ_{i=1}^{K} gi(x,y)
Example: Addition of Noisy Images for Noise Reduction
ḡ(x,y) = (1/K) Σ_{i=1}^{K} gi(x,y)

Expected value of the average:

E{ḡ(x,y)} = E{ (1/K) Σ_{i=1}^{K} [f(x,y) + ni(x,y)] }
          = f(x,y) + (1/K) Σ_{i=1}^{K} E{ni(x,y)}
          = f(x,y)

Variance of the average:

σ²_{ḡ(x,y)} = (1/K²) Σ_{i=1}^{K} σ²_{ni(x,y)} = (1/K) σ²_{n(x,y)}

so as K increases, the variability (noise) of the pixel values at each
location decreases.
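A small simulation of this result (the image, noise level, and K below are arbitrary illustration values): averaging K noisy copies reduces the noise standard deviation by roughly √K.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((128, 128), 100.0)          # noiseless image f(x,y)
K = 25
sigma = 20.0

# K noisy observations g_i(x,y) = f(x,y) + n_i(x,y), zero-mean uncorrelated noise.
noisy = f + rng.normal(0.0, sigma, size=(K, *f.shape))

g_bar = noisy.mean(axis=0)              # averaged image

print(np.std(noisy[0] - f))             # ~ 20   (single noisy image)
print(np.std(g_bar - f))                # ~ 20 / sqrt(25) = 4
```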
Example: Addition of Noisy Images for Noise Reduction
► In astronomy, imaging under very low light levels frequently
causes sensor noise to render single images virtually useless
for analysis.
► In astronomical observations, a similar effect is obtained by imaging
the same scene over long periods of time. Image averaging is then
used to reduce the noise.
An Example of Image Subtraction: Mask Mode Radiography
Mask h(x,y): an X-ray image of a region of a patient’s body
Live images f(x,y): X-ray images captured at TV rates after injection of the
contrast medium
Enhanced detail g(x,y)
g(x,y) = f(x,y) - h(x,y)
The procedure gives a movie showing how the contrast medium propagates
through the various arteries in the area being observed.
An Example of Image Multiplication
Set and Logical Operations
Set and Logical Operations
• Let A be the elements of a gray-scale image.
The elements of A are triplets of the form (x, y, z), where x and y
are spatial coordinates and z denotes the intensity at the point (x, y):

A = {(x, y, z) | z = f(x, y)}

• The complement of A is denoted Aᶜ:

Aᶜ = {(x, y, K − z) | (x, y, z) ∈ A}

K = 2^k − 1, where k is the number of intensity bits used to represent z
Set and Logical Operations
• The union of two gray-scale images (sets) A and B is defined as
the set

A ∪ B = {max_z(a, b) | a ∈ A, b ∈ B}
Spatial Operations
• Single-pixel operations
Alter the values of an image’s pixels based on the intensity.
s = T(z)
e.g., an intensity transformation such as the image negative.
Spatial Operations
• Neighborhood operations
The value of this pixel is determined
by a specified operation involving the
pixels in the input image with
coordinates in Sxy
Geometric Spatial Transformations
• Geometric transformation (rubber-sheet transformation)
— A spatial transformation of coordinates
(x, y) = T{(v, w)}
— intensity interpolation that assigns intensity values to the spatially transformed
pixels.
• Affine transform
[x  y  1] = [v  w  1] · [ t11  t12  0 ]
                        [ t21  t22  0 ]
                        [ t31  t32  1 ]
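A sketch of applying such an affine matrix to pixel coordinates in the row-vector convention used above (the rotation angle and sample coordinates are arbitrary examples). In practice this coordinate mapping is combined with intensity interpolation, as noted above.

```python
import numpy as np

# Affine matrix T in the row-vector convention [x y 1] = [v w 1] T; here a pure rotation.
theta = np.deg2rad(30)
T = np.array([[ np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [ 0,             0,             1]])

# Transform a few (v, w) input coordinates to (x, y) output coordinates.
vw = np.array([[ 0,  0, 1],
               [10,  0, 1],
               [ 0, 10, 1]], dtype=float)
xy = vw @ T
print(xy[:, :2])
```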
Image Registration
• Input and output images are available but the transformation
function is unknown.
Goal: estimate the transformation function and use it to
register the two images.
•One of the principal approaches for image registration is to
use tie points (also called control points)
The corresponding points are known precisely in the input
and output (reference) images.
Image Registration
• A simple model based on bilinear approximation:
x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8
Where (v, w) and ( x, y ) are the coordinates of
tie points in the input and reference images.
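Given four or more tie points, the eight coefficients can be estimated by least squares; a sketch with made-up tie-point coordinates.

```python
import numpy as np

# Tie points: (v, w) in the input image, (x, y) in the reference image (hypothetical values).
vw = np.array([[10.0, 10.0], [200.0, 15.0], [12.0, 190.0], [205.0, 201.0]])
xy = np.array([[12.0, 14.0], [202.0, 20.0], [15.0, 195.0], [209.0, 208.0]])

# Design matrix with columns [v, w, v*w, 1] for the bilinear model above.
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(len(vw))])

cx, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)   # c1..c4
cy, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)   # c5..c8
print(cx, cy)
```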
Image Transform
• A particularly important class of 2-D linear transforms,
denoted T(u, v)
T(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) · r(x, y, u, v)

where f(x, y) is the input image,
r(x, y, u, v) is the forward transformation kernel,
variables u and v are the transform variables,
u = 0, 1, 2, ..., M-1 and v = 0, 1, ..., N-1.
Image Transform
• Given T(u, v), the original image f(x, y) can be recovered using
the inverse transformation of T(u, v):

f(x, y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} T(u, v) · s(x, y, u, v)

where s(x, y, u, v) is the inverse transformation kernel,
x = 0, 1, 2, ..., M-1 and y = 0, 1, ..., N-1.
Example: Image Denoising by Using DCT Transform
Forward Transform Kernel
T(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) · r(x, y, u, v)

The kernel r(x, y, u, v) is said to be SEPARABLE if
r(x, y, u, v) = r1(x, u) · r2(y, v)

In addition, the kernel is said to be SYMMETRIC if
r1(x, u) is functionally equal to r2(y, v), so that
r(x, y, u, v) = r1(x, u) · r1(y, u)
The Kernels for 2-D Fourier Transform
The forward kernel:
r(x, y, u, v) = e^(−j2π(ux/M + vy/N))
where j = √(−1)

The inverse kernel:
s(x, y, u, v) = (1/MN) · e^(j2π(ux/M + vy/N))
2-D Fourier Transform
T(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) · e^(−j2π(ux/M + vy/N))

f(x, y) = (1/MN) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} T(u, v) · e^(j2π(ux/M + vy/N))
Probabilistic Methods
Let z_i, i = 0, 1, 2, ..., L−1, denote the values of all possible intensities
in an M × N digital image. The probability, p(z_k), of intensity level
z_k occurring in a given image is estimated as

p(z_k) = n_k / (MN),

where n_k is the number of times that intensity z_k occurs in the image, and

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity is given by

m = Σ_{k=0}^{L−1} z_k · p(z_k)
Probabilistic Methods
The variance of the intensities is given by

σ² = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k)

The n-th moment of the intensity variable z is

u_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n · p(z_k)
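These statistics follow directly from the image histogram; a NumPy sketch for an 8-bit image (the random test image is only for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))            # example 8-bit image
L = 256

n_k = np.bincount(img.ravel(), minlength=L)           # histogram counts
p = n_k / img.size                                     # p(z_k) = n_k / (M*N)
z = np.arange(L)

m = np.sum(z * p)                                      # mean intensity
var = np.sum((z - m) ** 2 * p)                         # variance (2nd central moment)
print(m, var, np.sqrt(var))                            # std dev, as compared in the example slide
```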
Example: Comparison of Standard Deviation Values
[Figure: three images with standard deviations 14.3, 31.6, and 49.2, respectively]
Learning Assessment!!! (Direct Poll)
http://etc.ch/kZQF
Session 9
Two Dimensional DFT & DCT
Objective:
• To know the algorithm of DCT & DFT and its implementation
for real time application
Learning Outcome:
• Reproduce the basic concept of mathematical tools used in
DIP
• Solve problems on DFT, DCT
Discrete Fourier Transform
2-D DFT:
The discrete Fourier transform of a function (image) f(x,y) of size
M × N is given by

F(u,v) = (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x,y) e^(−j2π(ux/M + vy/N))

for u = 0, 1, 2, ..., M−1 and v = 0, 1, 2, ..., N−1.

Similarly, given F(u,v), we obtain f(x,y) by the inverse Fourier
transform, given by

f(x,y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u,v) e^(j2π(ux/M + vy/N))

for x = 0, 1, 2, ..., M−1 and y = 0, 1, 2, ..., N−1.

u and v are the transform or frequency variables, and x and y are
the spatial or image variables.
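These sums are what `numpy.fft.fft2` computes, except that NumPy omits the 1/MN factor in the forward direction; the sketch below rescales to match the convention on this slide and checks one coefficient against the direct double sum.

```python
import numpy as np

f = np.random.default_rng(2).random((4, 4))             # tiny test image
M, N = f.shape

# Library version, scaled to match F(u,v) = (1/MN) * sum_x sum_y f(x,y) e^{-j2pi(ux/M + vy/N)}
F = np.fft.fft2(f) / (M * N)

# Direct evaluation of the definition for one frequency pair (u, v) = (1, 2).
u, v = 1, 2
x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
F_direct = np.sum(f * np.exp(-2j * np.pi * (u * x / M + v * y / N))) / (M * N)

print(np.allclose(F[u, v], F_direct))                    # True
print(np.allclose(np.fft.ifft2(F * M * N).real, f))      # inverse recovers f(x,y)
```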
Properties: the DFT and unitary DFT are
• Symmetric
• Periodic
• Computable by a fast algorithm (FFT)
• Conjugate symmetric about N/2

• CIRCULAR CONVOLUTION THEOREM
• The DFT of the circular convolution of two sequences is equal to
the product of their DFTs.
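A quick numerical check of the circular convolution theorem for 1-D sequences (arbitrary example values).

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, 0.0, 1.0, 2.0])
N = len(a)

# Circular convolution computed directly from its definition.
c = np.array([sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)])

# Theorem: DFT{a circularly convolved with b} = DFT{a} * DFT{b}
lhs = np.fft.fft(c)
rhs = np.fft.fft(a) * np.fft.fft(b)
print(np.allclose(lhs, rhs))   # True
```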
Discrete Cosine Transform
Introduction
• The discrete cosine transform (DCT) is a
technique for converting a signal into
elementary frequency components. It is widely
used in image compression.
• Developed by Ahmed, Natarajan, and Rao
[1974], the DCT is a close relative of the
discrete Fourier transform (DFT). Its
application to image compression was
pioneered by Chen and Pratt [1984].
Definitions
• The N × N Cosine Transform matrix C = {c(k,n)}, also called the Discrete
Cosine Transform (DCT), is defined as

c(k,n) = 1/√N,                        k = 0,        0 ≤ n ≤ N−1
       = √(2/N) cos[π(2n+1)k / 2N],   1 ≤ k ≤ N−1,  0 ≤ n ≤ N−1

The one-dimensional DCT of a sequence {u(n), 0 ≤ n ≤ N−1} is defined as

v(k) = α(k) Σ_{n=0}^{N−1} u(n) cos[π(2n+1)k / 2N],   0 ≤ k ≤ N−1

where α(0) = 1/√N and α(k) = √(2/N) for 1 ≤ k ≤ N−1.

The 2-dimensional DCT is given by

C(u,v) = α(u) α(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x,y) cos[π(2x+1)u / 2N] cos[π(2y+1)v / 2N],   0 ≤ u,v ≤ N−1
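A direct implementation of the 1-D definition above, checked against SciPy's orthonormal DCT-II, which uses the same α(k) normalization; the test vector is arbitrary.

```python
import numpy as np
from scipy.fft import dct

def dct_1d(u):
    """1-D DCT: v(k) = alpha(k) * sum_n u(n) * cos(pi*(2n+1)*k / (2N))."""
    N = len(u)
    n = np.arange(N)
    v = np.empty(N)
    for k in range(N):
        alpha = np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)
        v[k] = alpha * np.sum(u * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
    return v

u = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
print(np.allclose(dct_1d(u), dct(u, type=2, norm='ortho')))   # True
```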
Transformation
• An N × N image is divided into a set of smaller m
× m sub-images.
• The DCT is computed for each sub-image.
• Within each sub-image, only the DCT
components that are non-negligible are
retained, providing a means of image
compression. All other components are
assumed to be zero.
• The remaining non-negligible components are
used to reconstruct the image.
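A sketch of this block-DCT compression idea using 8 × 8 sub-images and a simple magnitude threshold for deciding which coefficients are non-negligible (block size and threshold are illustrative choices).

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(img, block=8, keep_frac=0.1):
    """Zero all but the largest-magnitude DCT coefficients in each block, then reconstruct."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for r in range(0, h, block):
        for c in range(0, w, block):
            sub = img[r:r + block, c:c + block].astype(np.float64)
            coeffs = dctn(sub, norm='ortho')
            # Keep only the largest keep_frac of coefficients; treat the rest as negligible.
            thresh = np.quantile(np.abs(coeffs), 1.0 - keep_frac)
            coeffs[np.abs(coeffs) < thresh] = 0.0
            out[r:r + block, c:c + block] = idctn(coeffs, norm='ortho')
    return out

# Example: compress a smooth synthetic 64 x 64 image; most energy sits in few coefficients.
x = np.linspace(0, np.pi, 64)
img = 100 * np.outer(np.sin(x), np.cos(x)) + 128
rec = block_dct_compress(img, block=8, keep_frac=0.1)
print(np.max(np.abs(rec - img)))     # reconstruction error stays modest for this smooth image
```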
Here is the original image and its discrete
cosine transform
Particularly note the concentration of large DCT coefficients in
the low-frequency zone. The DCT is known to have excellent
energy compaction properties.
Advantages
1. The DCT has the added advantage of mirror
symmetry, making it suitable for block
encoding of an image,
i.e., processing an image via small overlapping
sub-images.
2. Because of the symmetry property of the DCT,
it produces less degradation at each of the sub-
image boundaries than the DFT.
3. It supports data compression.
Properties of DCT
1. The Cosine Transform is real and orthogonal,
i.e., C = C*, C⁻¹ = Cᵀ.
2. The Cosine Transform of a sequence is
related to the DFT of its symmetric extension.
3. The Cosine Transform is a fast transform.
The Cosine Transform of a vector of N
elements can be calculated in N log₂ N
operations via an N-point FFT.
4. The Cosine Transform has excellent energy
compaction for highly correlated data.
5. The basis vectors of the DCT are the eigenvectors
of a symmetric tridiagonal matrix.
6. The N×N DCT is very close to the first-order KL
(Karhunen-Loève) Transform.
Learning Assessment!!! (Direct Poll)
http://etc.ch/kZQF
Thank You…