Fast Multispectral Imaging by Spatial Pixel-Binning and Spectral Unmixing
Multispectral imaging systems are widely used in many fields because of their capability to acquire the spectral information of scenes. Their limitation is that, due to the large number of spectral channels, the imaging process can be quite time-consuming when capturing high-resolution (HR) multispectral images. To overcome this limitation, this work proposes a fast multispectral imaging framework based on image-sensor pixel-binning and spectral-unmixing techniques. The framework comprises a fast
imaging stage and a computational reconstruction stage. In the imaging stage only a
few spectral images are acquired in HR, while most spectral images are acquired in
low resolution (LR). The LR images are captured by applying pixel binning on the
image sensor, such that the exposure time can be greatly reduced. In the
reconstruction stage an optimal number of basis spectra are computed and the signal-
dependent noise statistics are estimated. Then the unknown HR images are efficiently
reconstructed by minimizing, in closed form, a cost function that models the spatial and spectral degradations. The effectiveness of the proposed framework is evaluated on real-scene multispectral images. Experimental results validate that, in general, the method outperforms the state of the art in terms of reconstruction accuracy, with a 20-fold or greater improvement in computational efficiency.
Introduction
Multispectral imaging has attracted much interest because of its wide application in fields such as biomedicine [1], remote sensing [2], and color reproduction [3]. A filter-based multispectral imaging system can acquire rich spectral information of scenes at the spatial resolution of the camera sensor. In such a system, the filters split the visible spectrum into many spectral channels, at which the camera acquires images. A filter wheel holds a number of filters that are sequentially rotated into the optical path during imaging. Compared with a filter wheel, tunable filters have no moving parts and can provide rapid, vibration-free changes in light transmission. A multispectral image comprises multiple spectral images acquired at different wavelengths. The proposed framework
comprises the fast imaging stage and the computational reconstruction stage. In the
imaging stage, only a few spectral images are acquired at high resolution, and most
spectral images are captured at low resolution (LR) based on the pixel-binning
operation incorporated in the image sensor. With such a treatment, the exposure time
can be greatly reduced. In the reconstruction stage, the unknown HR spectral images
corresponding to the LR ones are recovered according to the spectral unmixing
principle. Pixel-binning provides a way to improve imaging efficiency at the cost of
spatial resolution degradation, which is worthwhile in some practical applications. In
fact, pixel-binning has been implemented as a fundamental function in some scientific and industrial cameras. A framework that employs the sensor pixel-binning and
spectral unmixing techniques is proposed for fast acquisition of HR multispectral
images.
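As a concrete illustration of the binning operation described above, the following minimal Python/NumPy sketch models on-chip pixel binning as block summation of a high-resolution band; the function name, the 2×2 factor, and the use of summation rather than averaging are illustrative assumptions, not details of the actual sensor.

```python
import numpy as np

def bin_pixels(band, factor=2):
    """Software model of sensor pixel-binning: sum each factor-by-factor block.

    On a real sensor the photo-charges of neighboring pixels are combined
    before readout, which is what allows the exposure time to be reduced;
    here we only mimic the resulting low-resolution image.
    """
    h, w = band.shape
    h_crop, w_crop = h - h % factor, w - w % factor  # drop ragged edges
    blocks = band[:h_crop, :w_crop].reshape(
        h_crop // factor, factor, w_crop // factor, factor)
    return blocks.sum(axis=(1, 3))

# Example: a 512x512 spectral band binned down to 256x256
hr_band = np.random.rand(512, 512)
lr_band = bin_pixels(hr_band, factor=2)
print(lr_band.shape)  # (256, 256)
```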
EXISTING SYSTEM
Disadvantages
PROPOSED SYSTEM
The proposed system is a fast multispectral imaging framework that consists of a fast imaging stage and a
computational reconstruction stage. In the imaging stage, the majority of spectral
images are acquired at low resolution such that the total exposure time can be greatly
reduced. In the reconstruction stage, an optimal number of basis spectra are
computed, based on which the problem of high-resolution image reconstruction is
well-posed and can be solved in closed form. A framework that employs the sensor pixel-binning and spectral unmixing techniques is proposed for fast acquisition of
HR multispectral images. The signal-dependent noise statistics are incorporated in the
framework such that the reconstruction of multispectral images is robust to noise
corruption.
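The sketch below illustrates the spectral-unmixing idea behind the reconstruction stage in Python/NumPy. It is not the paper's actual solver: it builds basis spectra with a plain truncated SVD and estimates the abundance maps by ordinary least squares, whereas the proposed framework additionally models the spatial degradation of the LR bands and the signal-dependent noise statistics. All function and variable names here are assumptions made for illustration.

```python
import numpy as np

def reconstruct_hr_bands(lr_cube, hr_bands, hr_band_idx, n_basis):
    """Toy spectral-unmixing reconstruction (illustrative, not the paper's solver).

    lr_cube     : (C, h, w) array, all C spectral bands at low resolution
    hr_bands    : (K, H, W) array, the K bands captured at high resolution
    hr_band_idx : indices of those K bands among the C channels
    n_basis     : number of basis spectra (K >= n_basis keeps the fit well-posed)
    """
    C = lr_cube.shape[0]
    K, H, W = hr_bands.shape

    # 1. Basis spectra from the LR data via truncated SVD (one possible choice).
    X_lr = lr_cube.reshape(C, -1)             # C x (h*w)
    U, _, _ = np.linalg.svd(X_lr, full_matrices=False)
    E = U[:, :n_basis]                        # C x n_basis basis spectra

    # 2. High-resolution abundance maps from the few HR bands (plain least
    #    squares; the paper instead minimizes a noise-aware cost in closed form).
    E_hr = E[hr_band_idx, :]                  # K x n_basis
    Y_hr = hr_bands.reshape(K, -1)            # K x (H*W)
    A, *_ = np.linalg.lstsq(E_hr, Y_hr, rcond=None)

    # 3. Re-synthesize every band at high resolution.
    return (E @ A).reshape(C, H, W)

# Example with synthetic data: 31 bands, 3 of them captured in HR
cube = np.random.rand(31, 256, 256)
lr = cube[:, ::2, ::2]                        # stand-in for binned LR bands
idx = [5, 15, 25]
hr = reconstruct_hr_bands(lr, cube[idx], idx, n_basis=3)
print(hr.shape)  # (31, 256, 256)
```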
Advantages
Spectral imaging for remote sensing of terrestrial features and objects arose as an
alternative to high-spatial-resolution, large-aperture satellite imaging systems. Early
applications of spectral imaging were oriented toward ground-cover classification,
mineral exploration, and agricultural assessment, employing a small number of
carefully chosen spectral bands spread across the visible and infrared regions of the
electromagnetic spectrum. Improved versions of these early multispectral imaging
sensors continue in use today. A new class of sensor, the hyperspectral imager, has
also emerged, employing hundreds of contiguous bands to detect and identify a
variety of natural and man-made materials. This overview article introduces the
fundamental elements of spectral imaging and discusses the historical evolution of
both the sensors and the target detection and classification applications.
Digital imaging that includes spectral estimation can overcome limitations of typical
digital photography, such as limited color accuracy and constraints to a predefined
viewing condition or a specific output device. An example includes the use of ICC
color management to generate an archive of images rendered for a specific display or
for a specific printing technology. A spectral image offers enhanced opportunities for
image analysis, art conservation science, lighting design, and an archive that can be
used to relate back to an object's physical properties. The Munsell Color Science
Laboratory at Rochester Institute of Technology is involved in a joint research
program with the National Gallery of Art in Washington, D.C., and the Museum of
Modern Art in New York to develop a spectral-imaging system optimized for artwork
imaging, archiving, and reproduction. This paper summarizes the scientific approach.
This paper discusses recent activities of the Jet Propulsion Laboratory (JPL) in the
development of a new type of remote sensing multispectral imaging instrument using
acousto-optic tunable filter (AOTF) as a programmable bandpass filter. This remote
sensor filter provides real-time operation; observational flexibility; measurements of
spectral, spatial, and polarization information using a single instrument; and compact,
solid state structure without moving parts. An AOTF multispectral imaging prototype
system for outdoor field experiments was designed and assembled. Some preliminary
experimental results are reported. The field system is used to investigate spectral and
polarization signatures of natural and man-made objects for evaluation of the
technological feasibility for remote sensing applications. In addition, an airborne
prototype instrument is currently under development.
The first frame-transfer CMOS active pixel sensor (APS) is reported. The sensor
architecture integrates an array of active pixels with an array of passive memory cells.
Charge-integration amplifier-based readout of the memory cells permits binning of pixels for variable-resolution imaging. A 32×32-element prototype sensor with 24-µm pixel pitch was fabricated in 1.2-µm CMOS and
demonstrated.
In this paper we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the
worst-case error of this estimate is not much worse than that of the training error
estimate. The name sanity-check refers to the fact that although we often expect the
leave-one-out estimate to perform considerably better than the training error estimate,
we are here only seeking assurance that its performance will not be considerably
worse. Perhaps surprisingly, such assurance has been given only for rather limited
cases in the prior literature on cross-validation. Any nontrivial bound on the error of
leave-one-out must rely on some notion of algorithmic stability. Previous bounds
relied on the rather strong notion of hypothesis stability, whose application was
primarily limited to nearest-neighbor and other local algorithms. Here we introduce
the new and weaker notion of error stability, and apply it to obtain sanity-check
bounds for leave-one-out for other classes of learning algorithms, including training
error minimization procedures and Bayesian algorithms. We also provide lower
bounds demonstrating the necessity of error stability for proving bounds on the error
of the leave-one-out estimate, and the fact that for training error minimization
algorithms, in the worst case such bounds must still depend on the Vapnik-
Chervonenkis dimension of the hypothesis class.
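Since leave-one-out cross-validation of this kind is one way a quantity such as the number of basis spectra could be selected, a minimal Python sketch of the estimator is given below; the `fit`/`predict` callables and the least-squares learner in the usage example are hypothetical stand-ins, not part of the cited work.

```python
import numpy as np

def loo_cv_error(X, y, fit, predict):
    """Leave-one-out cross-validation estimate of the squared prediction error.

    fit(X_train, y_train) must return a model; predict(model, x) must return a
    scalar prediction for one sample. Both are user-supplied callables.
    """
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i               # hold out sample i
        model = fit(X[mask], y[mask])
        errors[i] = (predict(model, X[i]) - y[i]) ** 2
    return errors.mean()

# Usage with a trivial least-squares learner on synthetic data
fit = lambda Xt, yt: np.linalg.lstsq(Xt, yt, rcond=None)[0]
predict = lambda coef, x: x @ coef
X = np.random.rand(50, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * np.random.randn(50)
print(loo_cv_error(X, y, fit, predict))
```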
MODULES
In the reconstruction module, the problem reduces to a Sylvester matrix equation, which has also been employed in image fusion; unlike in that setting, however, the noise statistics expressed in the covariance matrices here are signal-dependent. By adopting an optimal number of basis spectra, the problem is always well-posed and a closed-form solution can be obtained.
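For illustration, the snippet below solves a toy Sylvester equation AX + XB = Q with SciPy's `solve_sylvester`; the matrices are random stand-ins, not the actual degradation operators or covariance terms used in the reconstruction module.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Toy Sylvester equation A X + X B = Q; the dimensions and matrices are
# purely illustrative and unrelated to the paper's actual operators.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((6, 6))
X_true = rng.standard_normal((4, 6))
Q = A @ X_true + X_true @ B

X = solve_sylvester(A, B, Q)
print(np.allclose(X, X_true))  # True when A and -B share no eigenvalues
```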