What is meant by digital image processing
The primary interest in transmitting and handling images in digital form goes back to the 1920s. However, due to the lack of powerful computer systems and the vast storage requirements, the area was not fully explored until the mid-1960s, after the successful Apollo mission and other space programs. Serious work in digital image processing (DIP) began at JPL, where the need to process lunar images was felt after the Apollo mission. DIP, originally established to analyze and improve lunar images, has rapidly grown into a wealth of new applications, thanks to the enormous progress made in both algorithm development and computer engineering. At present, there is no technical area that is not affected in one way or another by DIP. The most important fields of growth appear to be medical image processing (e.g. surgical simulators and tele-surgery), data communication and compression (e.g. HDTV, 3D TV), remote sensing (e.g. meteorological, environmental and military), and computer vision (e.g. robotics, assembly-line inspection, and autonomous systems such as UAVs). Currently the emphasis is shifting towards real-time digital image processing. For the years ahead, trends in computer engineering, especially in parallel/pipeline processing technologies, coupled with new emerging applications, indicate no limit to the horizon of the DIP area.
Applications:
3. Remote Sensing: meteorology and climatology, tracking of earth resources, geographical mapping, prediction of agricultural crops, urban growth and weather, flood and fire control, etc.
4. Radar and Sonar: detection and recognition of various targets, guidance or maneuvering of aircraft or missiles.
5. Image Transmission: HDTV and 3D TV, teleconferencing, communications over computer networks/satellite, military communication, space missions.
Digital Image: a sampled and quantized version of a 2D function that has been acquired by optical or other means, sampled on an equally spaced rectangular grid, and quantized in equal intervals of amplitude. The task of digital image processing involves the handling, transmission, enhancement and analysis of digital images with the aid of digital computers. This calls for the manipulation of 2D signals. There are generally three types of processing that are applied to an image: low-level, intermediate-level and high-level processing, which are described below. Areas of Digital Image Processing (DIP): low-level processing starts with one image and produces a modified version of that image.
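The sampling-and-quantization definition above can be sketched in a few lines. This is a minimal illustration assuming NumPy, not a full acquisition model: a continuous 2D function is sampled on a rectangular grid and its amplitude is quantized into equal intervals.

```python
import numpy as np

def quantize(image, levels=256):
    """Uniformly quantize amplitudes in [0, 1] into `levels` equal intervals."""
    # Scale to [0, levels), truncate to the interval index, clip the endpoint.
    q = np.floor(image * levels)
    return np.clip(q, 0, levels - 1).astype(np.uint8)

# Sample a continuous 2D function f(x, y) = (x + y) / 2 on a 4x4 grid.
x = np.linspace(0.0, 1.0, 4)
f = (x[None, :] + x[:, None]) / 2.0   # sampling step
digital = quantize(f, levels=4)        # quantization step
```

The result is a digital image: an array of integer gray levels on a rectangular grid.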
- An image can be represented either in the spatial domain or in the transform domain. An important consideration in image representation is the fidelity or intelligibility criterion for measuring the quality of an image. Such measures include contrast (gray-level difference within an image), spatial frequencies, color and the sharpness of edge information. Images represented in the spatial domain directly reflect the type and physical nature of the imaging sensor: e.g. the luminance of objects in a scene for pictures taken by a camera, the absorption characteristics of body tissue for X-ray images, the radar cross-section of a target for radar imaging, the temperature profile of a region for infrared imaging, and the gravitational field in an area for geophysical imaging.
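One common formalization of "contrast as gray-level difference" is the Michelson contrast, the gray-level spread normalized by the gray-level sum. The choice of this particular formula is an illustration on my part, not something the notes prescribe:

```python
import numpy as np

def michelson_contrast(image):
    """Contrast as relative gray-level spread: (max - min) / (max + min)."""
    lo, hi = float(image.min()), float(image.max())
    return (hi - lo) / (hi + lo) if (hi + lo) > 0 else 0.0

img = np.array([[50, 100], [150, 200]], dtype=float)
c = michelson_contrast(img)   # (200 - 50) / (200 + 50) = 0.6
```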
Linear statistical models can also be used to model images in the spatial domain. These models allow the development of algorithms that are useful for an entire class or ensemble of images rather than for a single image. Alternatively, unitary (orthogonal) image transforms can be applied to digital images to extract characteristics such as spectral (frequency) content, bandwidth, power spectra, or other salient features for applications such as filtering, compression, and object recognition.
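As a small illustration of a unitary transform revealing spectral content, the 2D DFT of a striped image concentrates its energy at a single spatial frequency, something not obvious from the pixel values themselves. A NumPy sketch (the test image is an arbitrary toy choice):

```python
import numpy as np

# A striped 8x8 image: all spatial variation is along one axis.
img = np.zeros((8, 8))
img[:, ::2] = 1.0                    # vertical stripes of period 2

F = np.fft.fft2(img)
power = np.abs(F) ** 2               # power spectrum

# Parseval's relation: the (suitably scaled) DFT preserves energy.
spatial_energy = np.sum(img ** 2)
spectral_energy = np.sum(power) / img.size
```

The power spectrum is nonzero only at the DC term and at the stripe frequency, exposing the image's spectral content directly.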
1. Image Enhancement (low-level): No imaging system gives images of perfect quality. In image enhancement the aim is to manipulate an image in order to improve its quality. This assumes that an intelligent human viewer is available to recognize and extract useful information from the image. Since human subjective judgement may be either wise or fickle, certain difficulties may arise; a careful study therefore requires subjective testing with a group of human viewers. The psychophysical aspect should always be considered in this process.
Noise removal
- Filtering image data is a standard step in almost every image processing system. Filters remove noise from an image while preserving its details. The choice of filter depends on the filter's behaviour and the type of data.
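A classic detail-preserving noise filter is the median filter: it removes impulse ("salt and pepper") noise without the edge smearing of simple averaging. A minimal NumPy sketch (in practice a library routine such as `scipy.ndimage.median_filter` would be used):

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel by the median of its size x size neighborhood."""
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')   # replicate border pixels
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0                  # isolated impulse ("salt") pixel
clean = median_filter(noisy)         # the outlier is rejected by the median
```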
Edge sharpening
- Sharpening is an image-manipulation technique for making the outlines of a digital image look more distinct. It increases the contrast between edge pixels, emphasizing the transition between dark and light areas; this raises local contrast and brings out fine detail.
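A standard way to realize this is unsharp masking: subtract a blurred copy from the image and add the difference back, which boosts exactly the edge transitions. A NumPy sketch assuming a 3x3 box blur (the kernel choice is illustrative):

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Sharpen by adding back the difference between the image and a
    3x3 box-blurred copy of it (unsharp masking)."""
    pad = np.pad(image.astype(float), 1, mode='edge')
    # 3x3 box blur computed as an average of nine shifted views.
    blur = sum(pad[i:i + image.shape[0], j:j + image.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return image + amount * (image - blur)

step = np.zeros((4, 6))
step[:, 3:] = 100.0          # a vertical dark-to-light edge
sharp = unsharp_mask(step)   # edge contrast increases (over/undershoot)
```

Flat regions are untouched; pixels on either side of the edge are pushed apart, which is the increased local contrast described above.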
Deblurring
- Image deblurring is a recovery process that estimates a sharp latent image from a blurred one, where the blur is caused by camera shake or object motion. It has attracted wide attention in the image processing and computer vision fields, and a number of algorithms have been proposed to address it.
Inverse filtering
- Inverse filtering is the process of recovering the input of a system from its output. It is the simplest approach to restoring the original image once the degradation function is known.
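When the degradation is a known circular convolution, G = H·F in the frequency domain, and dividing by H recovers F exactly (provided H has no zeros). A NumPy sketch with an arbitrary toy kernel; in practice noise makes plain inverse filtering ill-conditioned, which is why regularized variants such as the Wiener filter are preferred:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))               # "original" image (synthetic)

h = np.zeros((8, 8))
h[0, 0], h[0, 1] = 1.0, 0.5          # known degradation kernel (H nonzero)

F, H = np.fft.fft2(f), np.fft.fft2(h)
g = np.real(np.fft.ifft2(F * H))     # degraded image: G = H * F

# Inverse filter: divide out the known degradation function.
restored = np.real(np.fft.ifft2(np.fft.fft2(g) / H))
```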
Radon Transform
Projection theorem
Reconstruction algorithms
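As a minimal illustration of the projection idea behind the Radon transform: each projection is a set of line integrals of the image, and at 0° and 90° these reduce to column sums and row sums. This is a toy sketch, not a full Radon implementation or reconstruction algorithm:

```python
import numpy as np

img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0                  # a 2x2 square "object"

proj_0 = img.sum(axis=0)             # 0-degree projection (column sums)
proj_90 = img.sum(axis=1)            # 90-degree projection (row sums)
```

Every projection integrates the same total mass, consistent with the projection theorem's statement that all projections share the image's DC component.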
2. Image Transforms (intermediate-level): Image transformation maps a digital image to the transform domain using a unitary image transform such as the 2D DFT, the 2D discrete cosine transform (DCT), or the 2D discrete wavelet transform (DWT). In the transform domain, certain useful characteristics of the image that typically cannot be ascertained in the spatial domain are revealed. Image transformation performs both feature extraction and dimensionality reduction, which are crucial for many applications. These operations are considered intermediate-level since images are mapped to reduced-dimensional feature vectors.
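A compact sketch of the 2D DCT-II, built from the orthonormal 1D basis (in practice `scipy.fft.dctn` would be used). It shows the energy-compaction property that makes the DCT useful for dimensionality reduction: a flat block maps to a single DC coefficient.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block via the 1D basis matrix."""
    n = block.shape[0]
    k = np.arange(n)
    # Row k of C is the k-th DCT-II basis vector (orthonormal scaling).
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T           # separable 2D transform

block = np.full((4, 4), 7.0)         # constant block
coeffs = dct2(block)                 # all energy lands in coeffs[0, 0]
```

Because the transform is unitary, the total energy of the block is preserved while being packed into far fewer significant coefficients.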
3. Image Data Compression and Coding: In many applications one needs to transmit or store images in digital form, and the number of bits required is tremendous; it is therefore essential to compress or efficiently code the data. For example, LANDSAT-D sends approximately 3.7 × 10^15 bits of information per year. Storage and/or transmission of such a huge amount of data requires large capacity and/or bandwidth, which would be expensive or impractical. Straight digitization requires 8 bits per picture element (pixel). Using intra-frame coding techniques (DPCM) or transform coding, one can reduce this to 1-2 bits/pixel while preserving image quality. Using frame-to-frame coding (transmitting frame differences), further reduction is possible. Motion-compensated coding detects and estimates motion parameters from video image sequences, and motion-compensated frame differences are transmitted. An area of active research is stereo-video sequence compression and coding.
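The bit rates quoted above imply a simple budget. A back-of-the-envelope sketch, with the 512×512 frame size being my assumption for illustration:

```python
# Illustrative bit budget for one 512x512 frame (frame size assumed).
h = w = 512
raw_bits = h * w * 8              # straight digitization: 8 bits/pixel
coded_bits = h * w * 1.5          # DPCM / transform coding: ~1.5 bits/pixel
ratio = raw_bits / coded_bits     # resulting compression factor
```

At 1.5 bits/pixel the frame shrinks from about 2.1 million bits to roughly 0.4 million, a factor of about 5.3; frame-difference and motion-compensated coding push the rate lower still.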
4. Image Analysis and Computer Vision (high-level): Image analysis and computer vision involve (a) segmentation, (b) feature extraction and (c) classification/recognition. Segmentation techniques are used to isolate the desired object from the scene so that its features can be measured easily and accurately, e.g. separation of targets from the background. The most useful features are then extracted from the segmented objects (targets). Quantitative evaluation of these features allows classification and description of the object.
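The simplest segmentation technique is gray-level thresholding, which separates bright targets from a dark background; a measured feature such as object area can then feed a classifier. A minimal NumPy sketch with toy data and an arbitrary threshold:

```python
import numpy as np

def segment(image, threshold):
    """Binary segmentation: label pixels above the threshold as object."""
    return image > threshold

scene = np.array([[ 20,  25, 200],
                  [ 18, 210, 205],
                  [ 22,  19,  21]])
mask = segment(scene, threshold=128)   # isolates the bright "target" pixels
area = int(mask.sum())                 # a simple extracted feature: area
```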
The overall goal of image analysis is to build an automatic or interactive system that can derive symbolic descriptions from a scene. Pattern recognition can be regarded as the inverse of computer graphics: it starts with a picture or scene and transforms it into an abstract description, i.e. a set of numbers, a string of symbols or a graph. The differences and similarities between the three areas of computer graphics, image processing and pattern recognition are shown in the figure above. A complete image processing or computer vision system is shown below:
Edge extraction
Line extraction
Texture discrimination
Shape recognition
Object recognition
Goals of this course:
Fundamental concepts
Image processing mathematics
Basic techniques
State-of-the-art
Applications
- In the spatial domain, we deal with the image as it is. In the frequency domain, we deal with the rate at which pixel values change across the spatial domain. Transformation is the process of converting a signal from the time/space domain to the frequency domain.