
DIGITAL IMAGE PROCESSING

What is meant by digital image processing?


The processes of acquiring an image of the area containing the text, preprocessing that
image, extracting (segmenting) the individual characters, describing the characters in a
form suitable for computer processing, and recognizing those individual characters are
in the scope of what we call digital image processing.

What are the uses of digital image processing?


It helps to improve images for human interpretation. Information can be processed and
extracted from images for machine interpretation. The pixels in the image can be
manipulated to any desired density and contrast. Images can be stored and retrieved
easily.
Introduction

The primary interest in transmitting and handling images in digital form goes back to the 1920s. However, due to the lack of powerful computer systems and the vast storage requirements, the field was not fully explored until the mid-1960s, after the successful Apollo missions and other space programs. Serious work in digital image processing (DIP) began at JPL when the need for processing lunar images was felt after the Apollo missions. DIP, originally established to analyze and improve lunar images, is rapidly growing into a wealth of new applications, due to the enormous progress made in both algorithm development and computer engineering. At present, there is no technical area that is not affected in one way or another by DIP. The most important fields of growth appear to be medical image processing (e.g. surgical simulators and tele-surgery), data communication and compression (e.g. HDTV, 3D TV), remote sensing (e.g. meteorological, environmental and military), and computer vision (e.g. robotics, assembly-line inspection, autonomous systems such as UAVs). Currently the emphasis is shifting towards real-time digital image processing. For the years ahead, trends in computer engineering, especially in parallel/pipeline processing technologies, coupled with new emerging applications, indicate no limit to the horizon of the DIP area.

Applications:

1. Medical: Automatic detection/classification of tumors in X-ray images, magnetic resonance imaging (MRI), processing of CAT scan and ultrasound images, chromosome identification, blood tests, etc.

2. Computer Vision: Identification of parts in an assembly line, robotics, tele-operation, autonomous systems, bin-picking, etc.

3. Remote Sensing: Meteorology and climatology, tracking of earth resources, geographical mapping, prediction of agricultural crops, urban growth and weather, flood and fire control, etc.

4. Radar and Sonar: Detection and recognition of various targets, guidance or maneuvering of aircraft or missiles.

5. Image Transmission: HDTV and 3D TV, teleconferencing, communications over computer networks/satellite, military communication, space missions.

6. Office Automation: Document storage, retrieval and reproduction.


7. Identification Systems: Facial, iris and fingerprint-based ID systems, airport and bank security, etc.

Digital Image: A sampled and quantized version of a 2D function that has been acquired by optical or other means, sampled on an equally spaced rectangular grid, and quantized in equal intervals of amplitude. The task of digital image processing involves the handling, transmission, enhancement and analysis of digital images with the aid of digital computers, which calls for the manipulation of 2D signals. There are generally three types of processing that are applied to an image: low-level, intermediate-level and high-level processing, which are described below.

Areas of Digital Image Processing (DIP): Low-level processing starts with one image and produces a modified version of that image.

Image Representation and Modelling.

- An image can be represented either in the spatial domain or in the transform domain. An important consideration in image representation is the fidelity or intelligibility criterion for measuring the quality of an image. Such measures include contrast (gray-level difference within an image), spatial frequencies, color and the sharpness of edge information. Images represented in the spatial domain directly reflect the type and physical nature of the imaging sensors; e.g. the luminance of objects in a scene for pictures taken by a camera, the absorption characteristics of body tissue for X-ray images, the radar cross-section of a target for radar imaging, the temperature profile of a region for infrared imaging, and the gravitational field in an area for geophysical imaging.

Linear statistical models can also be used to model the images in the spatial domain. These models
allow development of algorithms that are useful for an entire class or an ensemble of images rather than
for a single image. Alternatively, unitary or orthogonal image transforms can be applied to digital images
to extract such characteristics as spectral (frequency) content, bandwidth, power spectra, or other
salient features for various applications such as filtering, compression, and object recognition.

1.) Image Enhancement (low-level): No imaging system gives images of perfect quality. In image enhancement the aim is to manipulate an image in order to improve its quality. This requires that an intelligent human viewer be available to recognize and extract useful information from the image. Since human subjective judgement can be inconsistent, certain difficulties may arise; a careful study therefore requires subjective testing with a group of human viewers. The psychophysical aspect should always be considered in this process.

Examples of image enhancement (low-level processing) are

 Contrast & gray scale improvement


- The contrast ranges from black at the weakest intensity to white at the strongest. Grayscale images
are distinct from one-bit bi-tonal black-and-white images, which, in the context of computer
imaging, are images with only two colors: black and white (also called bilevel or binary images).
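As a rough illustration of contrast and gray-scale improvement, here is a minimal NumPy sketch of percentile-based contrast stretching; the function name and the percentile limits are illustrative choices, not something prescribed by these notes.

import numpy as np

def stretch_contrast(img, lo_pct=2, hi_pct=98):
    # Linearly map the gray levels between two percentiles onto the full 0..255 range.
    lo, hi = np.percentile(img, (lo_pct, hi_pct))
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

Pixels below the lower percentile become black and those above the upper percentile become white, which widens the usable contrast range of a dull image.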

 Spatial frequency enhancement


- Spatial frequency is the frequency of change per unit distance across an image. Some images have high spatial frequencies, whereas others have low spatial frequencies. An image's spatial frequency content depends on how much detail it contains.
 Pseudo coloring
- Pseudocolor images are originally grayscale images that are assigned colors based on their intensity values. A typical use is thermography, where the only available signal is infrared radiation rather than visible light. Another example is an elevation map.
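A minimal sketch of pseudo coloring, assuming an 8-bit grayscale input and NumPy; the blue-to-red ramp is only a stand-in for a real thermography palette.

import numpy as np

def pseudocolor(gray, lut=None):
    # Map each 8-bit intensity through a 256-entry RGB lookup table.
    if lut is None:
        t = np.linspace(0.0, 1.0, 256)
        lut = np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)  # blue (cold) -> red (hot)
    return (lut[gray] * 255).astype(np.uint8)  # result has shape (H, W, 3)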

 Noise removal
- Filtering image data is a standard process used in almost every image processing system. Filters remove noise from images while preserving their details. The choice of filter depends on the filter's behaviour and the type of data.
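A small sketch of noise removal, assuming SciPy is available; the helper name and parameter choices are illustrative.

import numpy as np
from scipy import ndimage

def denoise(img, method="median", size=3):
    # Median filtering is robust to salt-and-pepper noise and tends to preserve edges.
    if method == "median":
        return ndimage.median_filter(img, size=size)
    # Gaussian low-pass filtering suppresses Gaussian noise but slightly blurs detail.
    return ndimage.gaussian_filter(img, sigma=1.0)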
 Edge sharpening
- Sharpening is an image-manipulation technique for making the outlines of a digital image look
more distinct. Sharpening increases the contrast between edge pixels and emphasizes the transition
between dark and light areas. Sharpening increases local contrast and brings out fine detail.
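One common way to realize edge sharpening is unsharp masking; a minimal SciPy/NumPy sketch follows, with illustrative parameter values.

import numpy as np
from scipy import ndimage

def unsharp_mask(img, sigma=1.0, amount=1.0):
    # Add back the difference between the image and a blurred copy to boost local contrast.
    img = img.astype(np.float64)
    blurred = ndimage.gaussian_filter(img, sigma=sigma)
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)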

 Magnification and Zooming


- Digital zoom is a method of narrowing the apparent angle of view of a digital photograph or video image. It is accomplished by cropping the image down to an area with the same aspect ratio as the original, and then scaling that area up to the dimensions of the original.
- Typically, magnification refers to scaling up visuals or images to see more detail, by increasing resolution, using microscopes, printing techniques, or digital processing. In all cases, the magnification of the image does not change its perspective.
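A minimal sketch of digital zoom as described above (crop, then rescale), assuming a grayscale NumPy image and SciPy; the function name and the bilinear choice are illustrative.

import numpy as np
from scipy import ndimage

def digital_zoom(img, factor=2.0):
    # Crop the central 1/factor region and scale it back up to (roughly) the original size.
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    return ndimage.zoom(crop, (h / ch, w / cw), order=1)  # order=1: bilinear interpolation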
2.) Image Restoration (low-level): As in the image enhancement, the ultimate goal of image
restoration is to improve the quality of an image in some sense. Image restoration involves
recovering or estimating an image that has been degraded by some deterministic and/or
stochastic phenomena. Blur is a deterministic phenomenon which is caused by atmospheric
turbulence (satellite imaging), relative motion between the camera and the object,
defocusing, etc. Noise, on the other hand, is a stochastic phenomenon which corrupts the
images additively and/or multiplicatively. Sources of additive noise are imperfection of
sensors, thermal noise and channel noise. Examples of multiplicative noise are speckle in
coherent imaging systems such as synthetic aperture radar (SAR), lasers and ultrasound
images and also film grain noise. The restoration techniques aim at modelling the
degradation and then applying an appropriate scheme in order to recover the original
image. Some of the typical methods are:

 Image estimation and noise smoothing


- Smoothing is used to reduce noise or to produce a less pixelated image. Most smoothing methods
are based on low-pass filters, but you can also smooth an image using an average or median value
of a group of pixels (a kernel) that moves through the image.
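The moving-kernel averaging mentioned above can be written in a few lines; this sketch assumes SciPy and uses a simple box kernel, with illustrative names and parameters.

import numpy as np
from scipy import ndimage

def box_smooth(img, k=3):
    # Each output pixel is the mean of a k x k neighbourhood (a crude low-pass filter).
    kernel = np.ones((k, k)) / (k * k)
    return ndimage.convolve(img.astype(np.float64), kernel, mode="nearest")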

 Deblurring
- Image deblurring is a recovery process that restores a sharp latent image from a blurred image caused by camera shake or object motion. It has attracted wide attention in the image processing and computer vision fields, and a number of algorithms have been proposed to address the problem.
 Inverse filtering
- Inverse filtering is the process of recovering the input of a system from its output. It is the simplest approach to restoring the original image once the degradation function is known.
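A minimal frequency-domain sketch of inverse filtering with NumPy, assuming the degradation function (point spread function) is known; the small-magnitude guard is an illustrative safeguard, since a naive inverse filter amplifies noise wherever H is close to zero.

import numpy as np

def inverse_filter(blurred, psf, eps=1e-3):
    # Divide the blurred spectrum by the degradation function H, avoiding near-zero bins.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    H_safe = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(G / H_safe))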

 2D Wiener and Kalman filters


- The Wiener filter executes an optimal trade-off between filtering and noise smoothing: it removes the additive noise and inverts the blurring simultaneously. The Wiener filter is real and even.
- The Kalman filter is a computationally efficient, recursive, discrete, linear filter. It can give estimates of past, present and future states of a system even when the underlying model is imprecise or unknown.
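A minimal sketch of 2D Wiener deconvolution with NumPy, assuming a known point spread function and a constant noise-to-signal power ratio; the parameter value and names are illustrative.

import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    # W = conj(H) / (|H|^2 + NSR) trades off inverting the blur against amplifying noise.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))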

Image reconstruction can also be viewed as a special class of restoration, in which two- or higher-dimensional objects are reconstructed from several projections. Applications: CT scanners (medical), astronomy, radar imaging and NDE. Typical methods are:

Radon Transform
Projection theorem
Reconstruction algorithms
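A brief sketch of projection and reconstruction using the Radon transform and filtered back-projection, assuming scikit-image is available; the angle count is an illustrative choice.

import numpy as np
from skimage.transform import radon, iradon

def project_and_reconstruct(image, n_angles=180):
    # Forward-project a 2D slice into a sinogram, then reconstruct by filtered back-projection.
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta)
    return iradon(sinogram, theta=theta)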
3.) Image Transforms (intermediate-level): Image transformation involves mapping digital images to the transform domain using a unitary image transform such as the 2D DFT, the 2D Discrete Cosine Transform (DCT) or the 2D Discrete Wavelet Transform (DWT). In the transform domain, certain useful characteristics of the images, which typically cannot be ascertained in the spatial domain, are revealed. Image transformation performs both feature extraction and dimensionality reduction, which are crucial for various applications. These operations are considered intermediate-level since images are mapped to reduced-dimensional feature vectors.
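A minimal sketch of mapping an image into two transform domains (2D DFT and 2D DCT), assuming NumPy and SciPy; the function name is illustrative.

import numpy as np
from scipy.fft import dctn

def transform_features(img):
    # 2D DFT magnitude spectrum with the zero-frequency term moved to the centre.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    # 2D DCT-II coefficients; energy compacts into the low-frequency corner.
    dct_coeffs = dctn(img.astype(np.float64), norm="ortho")
    return spectrum, dct_coeffs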

4.) Image Data Compression and Coding: In many applications one needs to transmit or store images in digital form, and the number of bits involved is tremendous. It is therefore essential to compress or efficiently code the data; e.g. LANDSAT-D sends approximately 3.7 × 10^15 bits of information per year. Storage and/or transmission of such a huge amount of data requires large capacity and/or bandwidth, which would be expensive or impractical. Straight digitization requires 8 bits per picture element (pixel). Using intraframe coding techniques (e.g. DPCM) or transform coding, one can reduce this to 1-2 bits/pixel while preserving the quality of the images. Using frame-to-frame coding (transmitting frame differences), further reduction is possible. Motion-compensated coding detects and estimates motion parameters from video image sequences, and motion-compensated frame differences are transmitted. An area of active research is stereo-video sequence compression and coding for 3D TV and virtual reality applications.
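To make the DPCM idea concrete, here is a toy 1D predictive-coding sketch along an image row, using the previous pixel as the prediction; real DPCM coders also quantize the residuals, which is where the bit-rate saving comes from. The names are illustrative and NumPy is assumed.

import numpy as np

def dpcm_encode(row):
    # Residual = current pixel minus the previous pixel (the prediction);
    # the first residual is the first pixel itself.
    return np.diff(row.astype(np.int16), prepend=0)

def dpcm_decode(residuals):
    # Accumulating the residuals reproduces the original row exactly (lossless here).
    return np.cumsum(residuals).astype(np.uint8)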

Some of the typical schemes are:


Pixel-by-pixel coding
Predictive coding
Transform coding
Hybrid coding
Frame-to-frame coding
Vector quantization

5.) Image Analysis and Computer Vision (high-level): Image analysis and computer vision involve (a) segmentation, (b) feature extraction and (c) classification/recognition. Segmentation techniques are used to isolate the desired object from the scene so that its features can be measured easily and accurately, e.g. separation of targets from background. The most useful features are then extracted from the segmented objects (targets). Quantitative evaluation of these features allows classification and description of the object.
The overall goal of image analysis is to build an automatic, interactive system which can derive symbolic descriptions from a scene. Pattern recognition can be regarded as the inverse of computer graphics: it starts with a picture or scene and transforms it into an abstract description, such as a set of numbers, a string of symbols or a graph. The differences and similarities between the three areas of computer graphics, image processing and pattern recognition are shown in the figure above. A complete image processing or computer vision system is shown below:

Sensor: Collects the image data.


Preprocessor: Compression of data, noise removal, etc.
Segmentation: Isolating the objects of interest using edge detection and region growing.
Feature Extraction: Extract a representative set of features from segmented objects.
Classifier: Classifies each object or region and extracts attributes from it.
Structural Analyzer: Determines the relationships among the classified primitives. The output is the
description of the original scene.
World Model: This is used to guide each stage of the analyzing system. The results of each stage can be used in turn to refine the world model.

Before we start to analyze a scene, a world model is constructed incorporating as much a priori information about the scene as possible.
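As a toy illustration of the segmentation and feature-extraction stages described above, the following sketch thresholds an image, labels connected regions and measures simple per-region features; SciPy is assumed, and the threshold value is illustrative.

import numpy as np
from scipy import ndimage

def segment_and_measure(img, threshold=128):
    mask = img > threshold                                   # segmentation by global thresholding
    labels, n = ndimage.label(mask)                          # connected-component labelling
    idx = range(1, n + 1)
    areas = ndimage.sum(mask, labels, index=idx)             # feature: region area
    centroids = ndimage.center_of_mass(mask, labels, idx)    # feature: region centroid
    return labels, list(zip(areas, centroids))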
Typical operations applied in image analysis are:

Edge extraction
Line extraction
Texture discrimination
Shape recognition
Object recognition
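As an example of the first operation in the list above, here is a minimal edge-extraction sketch using Sobel gradients, assuming SciPy; the function name is illustrative.

import numpy as np
from scipy import ndimage

def sobel_edges(img):
    # Gradient magnitude from horizontal and vertical Sobel responses; large values mark edges.
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)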
Goals of this course:
Fundamental concepts
Image processing mathematics
Basic techniques
State-of-the-art
Applications

POSSIBLE RELATED QUESTION:

What is the difference between the spatial domain and the transform domain?


- The spatial domain deals with pixels directly, while the transform domain is based on frequency. Both domains offer many algorithms, for example for obtaining a high-quality fused image; the basic averaging method is a simple fusion technique in which all relevant objects are in focus.

- In the spatial domain, we deal with the image as it is. In the frequency domain, we deal with the rate at which pixel values change across the spatial domain. Transformation is the process of converting a signal from the time/space domain to the frequency domain.
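A small sketch that makes the relationship concrete: the same box low-pass filter applied by convolution in the spatial domain and by spectrum multiplication in the frequency domain gives matching results (circular boundary handling assumed; NumPy and SciPy assumed, names illustrative).

import numpy as np
from scipy import ndimage

def compare_domains(img, k=3):
    img = img.astype(np.float64)
    kernel = np.ones((k, k)) / (k * k)
    spatial = ndimage.convolve(img, kernel, mode="wrap")      # convolution in the spatial domain
    freq = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
    freq = np.roll(freq, -(k // 2), axis=(0, 1))              # align the kernel origins
    return spatial, freq                                      # equal up to rounding error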
