Image Processing Updated Report
by
VIPUL VIJAY
Image representation
Image representation is concerned with the characterization of the quantity
that each pixel represents. For example, an image could represent the
luminance of objects in a scene, the absorption characteristics of body
tissue obtained from X-ray imaging, or the temperature profile of a region.
An important consideration in image representation is the fidelity criteria
for measuring the quality of an image or the performance of a processing
technique. Specifying such a measure requires models of the perception of
contrast, spatial frequencies, colour, and so on.
Image enhancement
In image enhancement, the goal is to accentuate certain image features for
subsequent analysis or for image display. Examples include contrast and
edge enhancement, pseudo-colouring, noise filtering, sharpening, and
magnifying. Image enhancement is useful in feature extraction, image
analysis, and visual information display. Enhancement techniques such as
contrast stretching map each grey level to another grey level by a
predetermined transformation. An example is the histogram equalization
method, where the input grey levels are mapped so that the output grey-level
distribution is uniform.
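As an illustration, the following is a minimal NumPy sketch of histogram
equalization for an 8-bit greyscale image; the test image and names are
assumptions made only for this example.

import numpy as np

def equalize_histogram(img):
    """Map input grey levels so the output distribution is roughly uniform."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()                      # cumulative distribution function
    cdf_min = cdf[cdf > 0].min()             # ignore empty leading bins
    # Classic equalization mapping: rescale the CDF to the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Example: a low-contrast test image confined to grey levels 100..150.
img = np.random.randint(100, 151, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
print(img.min(), img.max(), "->", out.min(), out.max())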
Image restoration
Image restoration refers to the removal or minimization of known
degradations in an image. This includes deblurring of images degraded by the
limitations of a sensor or its environment, noise filtering, and correction
of geometric distortions or nonlinearities due to sensors.
Image analysis
Image analysis is concerned with making quantitative measurements from an
image to produce a description of it. For example, the task may be reading a
label on a grocery item or sorting different parts on an assembly line.
Image analysis techniques require the extraction of certain features that
aid in the identification of an object. Segmentation techniques are used to
isolate the desired object from the scene so that measurements can
subsequently be made on it.
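A minimal sketch of this segmentation-then-measurement idea is given below,
using Otsu's global threshold on a synthetic image; the image and the choice
of a global threshold are illustrative assumptions.

import numpy as np

def otsu_threshold(img):
    """Return the grey level that maximizes between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total          # background weight
        w1 = 1.0 - w0                        # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / (w0 * total)
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / (w1 * total)
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200                      # a bright "object" on a dark scene
mask = img > otsu_threshold(img)
print("object area in pixels:", int(mask.sum()))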
Image data compression
The amount of data associated with visual information is so large that its
storage would require enormous capacity. Image data compression techniques
are therefore concerned with reducing the number of bits required to store
or transmit an image without any appreciable loss of information.
Applications include broadcast television, remote sensing via satellite and
aircraft, radar, sonar, and so on.
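As one simple illustration of reducing the number of bits, the sketch below
run-length encodes a binary image row: long runs of identical pixels are
stored as (value, count) pairs instead of individual samples. The data and
function name are assumptions for the example, not a broadcast codec.

import numpy as np

def run_length_encode(row):
    """Encode a 1-D array as a list of (value, run length) pairs."""
    runs = []
    start = 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((int(row[start]), i - start))
            start = i
    return runs

row = np.array([0] * 50 + [1] * 12 + [0] * 38, dtype=np.uint8)
print(run_length_encode(row))   # [(0, 50), (1, 12), (0, 38)] -- 3 pairs instead of 100 pixels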
Chapter 2
Literature survey
[1]
In this paper, an image modelling and estimation algorithm is developed that
can be interpreted as an approach to nonlocal, adaptive, nonparametric
filtering. The proposed approach can be adapted to various noise models,
such as additive coloured noise and non-Gaussian noise, by modifying the
calculation of the coefficients' variances in the basic and Wiener parts of
the algorithm. In addition, the developed method can be modified for
denoising 1-D signals and video, for image restoration, and for other
problems that can benefit from highly sparse signal representations.
The enhancement of sparsity is achieved by grouping similar 2-D image
fragments (e.g., blocks) into 3-D data arrays called "groups." Collaborative
filtering is a special procedure developed to deal with these 3-D groups.
The result is a 3-D estimate consisting of the jointly filtered grouped
image blocks. By attenuating the noise, the collaborative filtering reveals
even the finest details shared by the grouped blocks while preserving the
essential unique features of each individual block. The filtered blocks are
then returned to their original positions. Because these blocks overlap,
many different estimates are obtained for each pixel and need to be
combined. Aggregation is a particular averaging procedure exploited to take
advantage of this redundancy.
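The sketch below is a much-simplified illustration of the grouping and
aggregation steps described above, not the paper's collaborative filter:
similar blocks are stacked into a 3-D group, jointly denoised here by
hard-thresholding their FFT coefficients, and the filtered blocks are
averaged back into place. All parameter values are assumptions.

import numpy as np

def denoise_by_grouping(noisy, block=8, step=4, search=16, n_match=8, thr=3.0, sigma=20.0):
    h, w = noisy.shape
    acc = np.zeros_like(noisy, dtype=np.float64)     # sum of block estimates
    weight = np.zeros_like(noisy, dtype=np.float64)  # number of estimates per pixel
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            ref = noisy[y:y + block, x:x + block]
            # Grouping: collect the most similar blocks in a local search window.
            candidates = []
            for yy in range(max(0, y - search), min(h - block, y + search) + 1, step):
                for xx in range(max(0, x - search), min(w - block, x + search) + 1, step):
                    blk = noisy[yy:yy + block, xx:xx + block]
                    candidates.append((np.sum((blk - ref) ** 2), yy, xx))
            candidates.sort(key=lambda c: c[0])
            chosen = candidates[:n_match]
            group = np.stack([noisy[yy:yy + block, xx:xx + block] for _, yy, xx in chosen])
            # Stand-in for collaborative filtering: zero small 3-D FFT coefficients.
            spec = np.fft.fftn(group)
            spec[np.abs(spec) < thr * sigma] = 0
            filtered = np.real(np.fft.ifftn(spec))
            # Aggregation: return each filtered block to its position, average overlaps.
            for est, (_, yy, xx) in zip(filtered, chosen):
                acc[yy:yy + block, xx:xx + block] += est
                weight[yy:yy + block, xx:xx + block] += 1
    return acc / np.maximum(weight, 1)

# Illustrative usage on a synthetic noisy image.
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 100.0
noisy = clean + np.random.normal(0, 20, clean.shape)
print(np.abs(denoise_by_grouping(noisy) - clean).mean())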
Plenty of denoising methods exist, originating from various disciplines
such as probability theory, statistics, partial differential equations, linear
and nonlinear filtering, and spectral and multiresolution analysis. All
these methods rely on some explicit or implicit assumptions about the
true (noise-free) signal in order to separate it properly from the random
noise.
Single-image haze removal has been a challenging problem due to its
ill-posed nature. This algorithm uses a simple but powerful colour
attenuation prior for haze removal from a single input hazy image.
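A hedged sketch of the colour attenuation idea follows: haze density tends
to grow with the gap between a pixel's brightness and its saturation, so
their difference can serve as a rough depth proxy driving the standard
atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)). The
coefficients and thresholds below are illustrative assumptions, not the
learned parameters reported in the paper.

import numpy as np

def dehaze_colour_attenuation(img, beta=1.0, t_min=0.1):
    """img: float RGB image in [0, 1], shape (H, W, 3)."""
    value = img.max(axis=2)                                  # HSV value (brightness)
    saturation = (value - img.min(axis=2)) / (value + 1e-6)  # HSV saturation
    depth = np.clip(value - saturation, 0, 1)                # rough depth proxy
    transmission = np.clip(np.exp(-beta * depth), t_min, 1)  # t(x) = exp(-beta * d(x))
    # Estimate atmospheric light A from the haziest (deepest) pixels.
    flat_depth = depth.ravel()
    idx = np.argsort(flat_depth)[-int(0.001 * flat_depth.size) - 1:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Invert the scattering model to recover the scene radiance J(x).
    J = (img - A) / transmission[..., None] + A
    return np.clip(J, 0, 1)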
Outdoor images taken in bad weather usually lose contrast and fidelity
because light is absorbed and scattered by the turbid medium, such as
particles and water droplets, in the atmosphere during propagation.
Moreover, most automatic systems, which strongly depend on the definition of
their input images, fail to work normally on such degraded images.
Therefore, improving image haze removal will benefit many image
understanding and computer vision applications, such as aerial imagery,
image classification, image/video retrieval, remote sensing, and video
analysis and recognition.
The dehazing effect of such approaches is limited because a single hazy
image can hardly provide much information. Later, researchers tried to
improve dehazing performance with multiple images: polarization-based
methods, for example, perform dehazing using multiple images taken with
different degrees of polarization.
Another method builds on the fusion of multiple inputs, but derives the two
inputs to be combined by correcting the contrast and by sharpening a
white-balanced version of a single native input image. The multi-scale
implementation of the fusion process results in artifact-free blending.
The feature extraction stage may involve computing depth (and/or colour)
gradients, histograms, and other more complex transformations of the video
data.
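A minimal sketch of such a feature extraction stage is shown below,
computing an intensity histogram and gradient-magnitude statistics per frame
with NumPy; the frame size and the particular feature choices are
assumptions.

import numpy as np

def extract_features(frame, bins=16):
    """frame: float greyscale image in [0, 1]; returns a 1-D feature vector."""
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0), density=True)
    gy, gx = np.gradient(frame)                      # finite-difference gradients
    grad_mag = np.hypot(gx, gy)
    stats = np.array([grad_mag.mean(), grad_mag.std(), grad_mag.max()])
    return np.concatenate([hist, stats])

frame = np.random.rand(120, 160)
print(extract_features(frame).shape)                 # (19,) = 16 histogram bins + 3 stats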
The simplest and most widely used full-reference quality metric is the mean
squared error (MSE), computed by averaging the squared intensity differences
between the distorted and reference image pixels, along with the related
quantity of peak signal-to-noise ratio (PSNR). These are appealing because
they are simple to calculate, have clear physical meanings, and are
mathematically convenient in the context of optimization, but they are not
well matched to perceived visual quality.
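Both quantities are straightforward to compute; a short NumPy sketch for
8-bit images follows (the test arrays are illustrative).

import numpy as np

def mse(ref, dist):
    return np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)

def psnr(ref, dist, peak=255.0):
    m = mse(ref, dist)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noise = np.random.randint(-10, 11, ref.shape)
dist = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"MSE = {mse(ref, dist):.2f}, PSNR = {psnr(ref, dist):.2f} dB")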
Most perceptual quality assessment models can be described with a similar
diagram, although they differ in detail. The stages of the diagram are as
follows:
• Pre-processing
• CSF Filtering
• Channel Decomposition
• Error Normalization
• Error Pooling
A new framework for the design of image quality measures was
proposed, based on the assumption that the human visual system is
highly adapted to extract structural information from the viewing field. It
follows that a measure of structural information change can provide a
good approximation to perceived image distortion.
[Figure: Diagram of SSIM]
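A single-window sketch of the resulting SSIM index is shown below:
luminance, contrast, and structure are compared through local means,
variances, and covariance. Practical implementations slide a Gaussian window
over the image and average the local index map; here the statistics are
taken over one small patch for brevity.

import numpy as np

def ssim_patch(x, y, peak=255.0, k1=0.01, k2=0.03):
    x = x.astype(np.float64); y = y.astype(np.float64)
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

a = np.random.randint(0, 256, (11, 11)).astype(np.float64)
print(ssim_patch(a, a))                                      # identical patches -> 1.0
print(ssim_patch(a, a + np.random.normal(0, 20, a.shape)))   # lower for a noisy copy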
[6]
In this paper, the authors propose a high-resolution multi-scale
encoder-decoder network (HMEDN) to segment medical images, especially
challenging cases with blurry and vanishing boundaries caused by low tissue
contrast. In this network, three kinds of pathways are integrated to extract
meaningful features that capture accurate location and semantic information.
Among the challenges the authors list, low tissue contrast is the most
severe: CT images, especially those from the pelvic area, have blurry and
vanishing boundaries, and compared with natural or MR images, CT images
visibly lack rich and stable texture information.
The relative order of lightness represents the light source directions and
the lightness variation; the naturalness of an enhanced image is related to
preserving this relative order of lightness in different local areas.
ACNNs differ from previous part-based or holistic methods in two ways.
First, ACNNs need not explicitly handle occlusions, which avoids propagating
detection/inpainting errors to later stages. Second, ACNNs unify
representation learning and occlusion-pattern encoding in an end-to-end CNN.
The Gate Unit in the ACNN enables the model to shift attention from occluded
patches to other unobstructed yet distinctive facial regions. Considering
that facial expression is distinguished in specific facial regions, the
authors design a patch-based pACNN that incorporates region decomposition to
find typical facial parts related to expression.
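The toy NumPy sketch below illustrates only the gating idea: each facial
patch yields a feature vector, a gate scores how informative (unobstructed)
the patch is, and the scores re-weight the patches before pooling. The
dimensions and weights are random placeholders, not the trained ACNN
parameters.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
patch_features = rng.normal(size=(24, 128))      # 24 facial patches, 128-D features each
W_gate = rng.normal(scale=0.1, size=(128, 1))    # stand-in for learned gate weights

scores = sigmoid(patch_features @ W_gate)        # one attention score per patch
weighted = patch_features * scores               # attenuate occluded / uninformative patches
representation = weighted.sum(axis=0) / scores.sum()
print(scores.min(), scores.max(), representation.shape)      # pooled (128,) representation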
[10]
In this paper, the authors present a single-image SR algorithm based on a
rational fractal interpolation model, which is more suitable for describing
the structures of an image.
The algorithm proceeds in two steps. First, for each LR image patch, the
isocline method is employed to detect texture, so that more detailed
textures can be obtained, and the LR image is divided into texture regions
and non-texture regions. Second, in the interpolation model the scaling
factors play an important role, whereas the influence of the shape
parameters is minor; based on the relationship between the scaling factors
and the fractal dimension, the scaling factors are accurately calculated
using the image's local structure features.
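As a hypothetical stand-in for the texture-detection step (the paper uses an
isocline-based detector), the sketch below simply labels LR patches as
texture or non-texture by their local variance, to show how an image can be
split into the two kinds of regions before interpolation; the patch size and
threshold are assumptions.

import numpy as np

def split_texture_regions(img, patch=8, var_threshold=50.0):
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)          # True where the patch looks textured
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if img[y:y + patch, x:x + patch].var() > var_threshold:
                mask[y:y + patch, x:x + patch] = True
    return mask

img = np.zeros((64, 64)); img[:, 32:] = np.random.normal(128, 30, (64, 32))
mask = split_texture_regions(img)
print("textured fraction:", mask.mean())         # roughly 0.5 for this synthetic image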
References
[2] Qingsong Zhu, Jiaming Mai, and Ling Shao, “A Fast Single Image Haze
Removal Algorithm Using Color Attenuation Prior”, IEEE Transactions on
Image Processing (Volume: 24, Issue: 11, Nov. 2015)
[5] Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh, and Eero P.
Simoncelli, “Image Quality Assessment: From Error Visibility to Structural
Similarity”, IEEE Transactions on Image Processing (Volume: 13, Issue: 4,
April 2004)
[6] Sihang Zhou, Dong Nie, Ehsan Adeli, Jianping Yin, Jun Lian, and
Dinggang Shen, “High-Resolution Encoder–Decoder Networks for Low-
Contrast Medical Image Segmentation”, IEEE Transactions on Image
Processing (Volume: 29, Issue: 19, June 2019)
[7] Wenqi Ren, Sifei Liu, Lin Ma, Qianqian Xu, Xiangyu Xu, Xiaochun Cao,
Junping Du, and Ming-Hsuan Yang, “Low-Light Image Enhancement via a
Deep Hybrid Network” , IEEE Transactions on Image Processing (Volume:
28, Issue: 9, Sept. 2019)
[8] Xiaojie Guo, Yu Li, and Haibin Ling, “LIME: Low-Light Image
Enhancement via Illumination Map Estimation”, IEEE Transactions on Image
Processing (Volume: 26, Issue: 2, Feb. 2017)
[11] Xinxin Zhang, Ronggang Wang, Da Chen, Yang Zhao, and Wen Gao,
“Handling Outliers by Robust M-Estimation in Blind Image Deblurring”,
IEEE Transactions on Multimedia (Early Access), 2020
[12] Jun Liu, Ming Yan, and Tieyong Zeng, “Surface-aware Blind Image
Deblurring”, IEEE Transactions on Pattern Analysis and Machine
Intelligence (Early Access), 2019
[13] Fei Wen, Rendong Ying, Yipeng Liu, Peilin Liu, and Trieu-Kien Truong,
“A Simple Local Minimal Intensity Prior and an Improved Algorithm for Blind
Image Deblurring”, IEEE Transactions on Circuits and Systems for Video
Technology (Early Access), 2020
[14] Kaiming Nie, Xiaopei Shi, Silu Cheng, Zhiyuan Gao, and Jiangtao Xu, “High
Frame Rate Video Reconstruction and Deblurring based on Dynamic and
Active Pixel Vision Image Sensor”, IEEE Transactions on Circuits and Systems
for Video Technology, 2020