Article
Multispectral Demosaicing Based on Iterative-Linear-Regression
Model for Estimating Pseudo-Panchromatic Image
Kyeonghoon Jeong , Sanghoon Kim and Moon Gi Kang *
School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Republic of Korea;
[email protected] (K.J.); [email protected] (S.K.)
* Correspondence: [email protected]
Abstract: This paper proposes a method for demosaicing raw images captured by multispectral
cameras. The proposed method estimates a pseudo-panchromatic image (PPI) via an iterative-linear-
regression model and utilizes the estimated PPI for multispectral demosaicing. The PPI is estimated
through horizontal and vertical guided filtering, with the subsampled multispectral-filter-array (MSFA) image and the low-pass-filtered MSFA as the guide image and filtering input, respectively. The
number of iterations is automatically determined according to a predetermined criterion. Spectral
differences between the estimated PPI and MSFA are calculated for each channel, and each spec-
tral difference is interpolated using directional interpolation. The weights are calculated from the
estimated PPI, and each interpolated spectral difference is combined using the weighted sum. The
experimental results indicate that the proposed method outperforms state-of-the-art methods
with regard to spatial and spectral fidelity for both synthetic and real-world images.
Citation: Jeong, K.; Kim, S.; Kang, M.G. Multispectral Demosaicing Based on Iterative-Linear-Regression Model for Estimating Pseudo-Panchromatic Image. Sensors 2024, 24, 760. https://doi.org/10.3390/s24030760

Academic Editor: Liang-Jian Deng

Received: 14 December 2023; Revised: 13 January 2024; Accepted: 23 January 2024; Published: 24 January 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Commercial cameras, which capture traditional red-green-blue (RGB) images, typically record only three colors in the visible band; they are commonly used for general landscapes and portraits, making them one of the most popular camera types. However, with the development of various industries, there is a growing need to record or identify objects that are not easily discernible in RGB images. To meet this demand, multispectral cameras have been developed. Multispectral imaging has become an increasingly important tool in various fields, such as remote sensing [1], agriculture [2], and biomedical imaging [3]. These imaging systems capture information from multiple spectral bands, thereby providing valuable information that is not visible in traditional grayscale or RGB imaging.

There are various methods for acquiring multispectral images, including rotating structures of different optical filters for each band. Although this approach can capture multispectral images with full resolution for each channel, it is unsuitable for capturing moving subjects. To address this issue, cameras employing the one-snapshot method are used. These cameras acquire mosaic images when a photograph is captured. The resulting mosaic image appears similar to the Bayer pattern [4] used in commercial RGB cameras, as shown in Figure 1a. The mosaic patterns of multispectral-filter arrays (MSFAs) vary depending on the manufacturer. The most commonly used pattern is a 4 × 4 array, which can be divided into two cases: one where there is a dominant channel, e.g., green, in Figure 1b [5], and another where all the channels have the same probability of appearance of 1/16, as shown in Figure 1c [6].
Figure 1. Basic CFA and MSFA patterns: (a) Bayer pattern [4]. (b) MSFA with one dominant band [5].
(c) MSFA with no dominant band in IMEC camera [6]. The numbers are the band numbers.
of MSFA; (3) guided filtering restores the high-frequency components of a guide image while preserving details. To this end, we propose a method that uses guided filtering to estimate the PPI and then restores high frequencies for each channel by identifying edges according to the estimated PPI. Our approach is optimized for 4 × 4 MSFA patterns without a dominant band but can be adapted to other patterns.
The main contributions of this study are as follows:
1. We propose a novel iterative-guided-filtering pseudo-panchromatic-image (IGFPPI) estimation method that performs iterative guided filtering in both the horizontal and vertical directions and combines the results.
2. The proposed guided-filtering technique is iterative and automatically determines the
stopping criterion for each image.
3. We use the estimated IGFPPI to determine the weights of each channel, and we obtain
the interpolated spectral-difference domain through a weighted sum of the difference
between the IGFPPI and the spectral channels. Finally, we add the IGFPPI to obtain the demosaicing result, following the demosaicing order of the BTES method.
We conducted extensive experiments to compare the quantitative and qualitative
results of the proposed method for the peak-signal-to-noise ratio (PSNR), the structural-
similarity-index measure (SSIM) [20], and the spectral-angle-mapper (SAM) [21] metrics
to those of previously reported methods. The results indicated that the proposed method
outperformed both traditional and deep-learning methods. In addition to using the syn-
thesized data, we conducted experiments on actual images captured by IMEC cameras.
The demosaicing results for these real-world images suggest that the proposed method
performs well in practical situations.
The remainder of this paper is organized as follows: Section 2 presents related work.
Section 3 describes the proposed method. Section 4 presents the experimental procedures
and results. Section 5 presents our conclusions.
2. Related Work
The proposed algorithm is designed to be effective for multispectral cameras that
acquire images in multispectral bands. This section presents an observational model
that accurately describes the image-acquisition process using multispectral cameras. Our
algorithm builds on the principles of guided image filtering and PPI estimation, which
allows accurate demosaicing of multispectral images. Herein, we comprehensively review
these methods.
The acquired pixel value is modeled as

$$I_k^c = Q\left( \int_a^b E(\lambda)\, R_k(\lambda)\, T^c(\lambda)\, d\lambda \right), \qquad (1)$$

where $I_k^c$ represents the acquired pixel of channel c at pixel k; Q(·) is the quantization function; a and b represent, respectively, the spectral minimum and maximum ranges of the multispectral camera; E(λ) represents the relative spectral power distribution of the light source; $R_k(\lambda)$ is the spectral-reflectance factor of a subject at pixel k; and $T^c(\lambda)$ represents the transmittance of the MSFA channel c.
From the observation model, a raw image of a multispectral camera with N channels
is defined as follows:
$$I_k^{MSFA} = \sum_{c=1}^{N} I_k^c\, M_k^c, \qquad (2)$$

where $I_k^{MSFA}$ represents the raw image, $I_k^c$ represents the full resolution of channel c at pixel k, and $M^c$ represents the binary mask, a special type of image comprising only 0s and 1s that is used to represent the MSFA channel c.
$$\bar{I}^{M} = I^{MSFA} * M, \qquad (4a)$$

$$M = \frac{1}{64}\begin{bmatrix} 1 & 2 & 2 & 2 & 1 \\ 2 & 4 & 4 & 4 & 2 \\ 2 & 4 & 4 & 4 & 2 \\ 2 & 4 & 4 & 4 & 2 \\ 1 & 2 & 2 & 2 & 1 \end{bmatrix}, \qquad (4b)$$
where I MSFA represents a raw image. Second, a high-frequency component is added to the
initial PPI. The high-frequency component is calculated under the assumption that the local
difference of the initial PPI is similar to that of the raw image, where the local difference is
the difference between the value of the arbitrary pixel k and the weighted average value
of its eight nearest neighbors q ∈ Ñ_k with the same channel. The final PPI $\hat{I}_k^M$ at pixel k is defined as follows:

$$\hat{I}_k^{M} = I_k^{MSFA} + \frac{\sum_{q \in \tilde{N}_k} \gamma_q \left( \bar{I}_q^{M} - I_q^{MSFA} \right)}{\sum_{q \in \tilde{N}_k} \gamma_q}, \qquad (5)$$
where γq is the weight calculated from the reciprocal of the difference between k and q in
the raw image I MSFA .
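The initial low-pass PPI of Equations (4a) and (4b) amounts to convolving the raw image with the 5 × 5 kernel M; a minimal sketch follows (reflect-padding at the borders is an implementation assumption not stated in the paper).

```python
import numpy as np

# 5x5 kernel of Eq. (4b); its entries sum to 64, so after the 1/64
# normalization it averages a window that covers every channel of a
# 4x4 MSFA at least once, yielding a low-frequency panchromatic value.
M = np.array([[1, 2, 2, 2, 1],
              [2, 4, 4, 4, 2],
              [2, 4, 4, 4, 2],
              [2, 4, 4, 4, 2],
              [1, 2, 2, 2, 1]], dtype=float) / 64.0

def initial_ppi(raw):
    """Eq. (4a): low-pass filter the raw MSFA image to get the initial PPI."""
    H, W = raw.shape
    pad = np.pad(raw, 2, mode='reflect')  # border handling is an assumption
    out = np.zeros((H, W))
    for di in range(5):          # direct 5x5 correlation; the kernel is
        for dj in range(5):      # symmetric, so this equals convolution
            out += M[di, dj] * pad[di:di + H, dj:dj + W]
    return out
```

Since the kernel is normalized, a constant raw image is mapped to itself, which is a quick sanity check on any implementation.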
$$E(a_k, b_k) = \sum_{l \in \omega_k} \left( a_k I_l + b_k - p_l \right)^2, \qquad (7)$$
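A minimal sketch of guided filtering as reviewed here: the linear coefficients minimizing the cost of Equation (7) have a closed form per window. The ε regularizer added to the variance follows He et al. [10] and does not appear in Equation (7) as shown; square windows are used here for simplicity, whereas the paper's directional variant uses (h × v) windows.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    H, W = img.shape
    pad = np.pad(img, r, mode='edge')
    out = np.zeros((H, W))
    k = 2 * r + 1
    for di in range(k):
        for dj in range(k):
            out += pad[di:di + H, dj:dj + W]
    return out / (k * k)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Per-window linear model q = a * I + b minimizing Eq. (7);
    eps regularizes the coefficient a as in He et al. [10]."""
    mu_I = box_mean(guide, r)
    mu_p = box_mean(src, r)
    cov_Ip = box_mean(guide * src, r) - mu_I * mu_p
    var_I = box_mean(guide * guide, r) - mu_I * mu_I
    a = cov_Ip / (var_I + eps)   # closed-form linear coefficients
    b = mu_p - a * mu_I
    # Average the coefficients of all windows covering each pixel.
    return box_mean(a, r) * guide + box_mean(b, r)
```

When guide and input coincide and ε is small, the filter approximately reproduces the input, which illustrates the edge-preserving property exploited later for PPI estimation.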
3. Proposed Algorithm
In this section, we describe the proposed methods of the two main components. First,
we explain the process of estimating the PPI from the raw image I MSFA . Then, we describe
the process of performing directional multispectral demosaicing using the estimated PPI.
[Figure 2. Flowchart of the IGFPPI estimation: the raw image I^MSFA is low-pass filtered (Equation (4)) and subsampled per channel; the subsampled MSFA channels İ^c serve as guidance for iterative horizontal and vertical guided filtering (t = 1, 2, …), each with its own stopping criterion; the directional results are combined, upsampled, and merged into the estimated PPI Î^PPI.]
The initial PPI, which is denoted as Ī, includes the low-frequency components of
all channels that contribute to the final PPI image. Equation (4) is used to obtain the
initial estimate. Next, we perform subsampling on both Ī and I MSFA for each channel
as a preprocessing step to restore the high frequencies of the final PPI. The subsampled
versions of the raw image I MSFA and the initial PPI Ī in channel c are denoted as İ c and
Ī˙c, respectively. The sizes of I^MSFA and Ī are (W × H), and the sizes of İ^c and Ī˙^c are (W/4 × H/4), where W and H represent the width and height of the image, respectively. We
use the subsampled I˙c as the guidance image and the subsampled Ī˙c as the filtering input
for the iterative guided filtering. Iterative guided filtering is performed separately in the
horizontal and vertical directions. If the window size is increased to estimate more precise
high frequencies, the estimate is closer to the MSFA image, which is the guide image. To
prevent this, we calculate the horizontal and vertical directions separately, and the two
results are combined to obtain the estimation. The window size used to calculate the linear
coefficients is denoted as (h × v); horizontal guided filtering is used when h > v, and
vertical guided filtering is used when v > h.
In the first iteration t = 0, iterative guided filtering is performed in the vertical and
horizontal directions, using the subsampled I˙c as the guidance image and the subsampled
Ī˙0c as the filtering input. The equations for this process are as follows:
The pixel coordinates are represented by (i, j). For t ≥ 1, the iterative guided filtering is repeated using the following expressions:
where $d_t^{c,h}(i,j)$ represents the absolute difference between the results of the horizontal loops of the previous and current step, and $d_t^{c,v}(i,j)$ represents the absolute difference between the results of the vertical loops of the previous and current step. These two values indicate changes in the image. As they converge to zero, there is little change in the pixels at position (i, j). Additionally, $\delta_t^{c,h}(i,j)$ represents the horizontal change in the result of the current step's horizontal iteration. A value close to zero indicates that there is no change in the horizontal direction. Similarly, $\delta_t^{c,v}(i,j)$ represents the vertical change in the result of the current step's vertical iteration. The criterion for pixel change is determined by multiplying these two expressions, as follows:
The pixel change stops when $D_t^{c,h}(i,j) < \epsilon_{pixel}$ for the horizontal direction and when $D_t^{c,v}(i,j) < \epsilon_{pixel}$ for the vertical direction, where $\epsilon_{pixel}$ represents a predefined threshold.
The global condition under which the loop stops is calculated using the following
expressions:
$$MAD^{c,h}(t) = \frac{1}{\dot{W} \times \dot{H}} \sum_{i=1}^{\dot{H}} \sum_{j=1}^{\dot{W}} d_t^{c,h}(i,j),$$
$$MAD^{c,v}(t) = \frac{1}{\dot{W} \times \dot{H}} \sum_{i=1}^{\dot{H}} \sum_{j=1}^{\dot{W}} d_t^{c,v}(i,j), \qquad (13)$$
where Ẇ and Ḣ represent the width and height of the subsampled image, respectively.
The mean absolute difference (MAD) is a measure of the extent to which the entire image
changes and is calculated as the average absolute value of the difference between the results
of the previous and current steps. Ye et al. determined the convergence based solely on
the MAD value [22]. However, our focus is the convergence of the difference between the
current and previous MADs to zero, rather than the value of the MAD approaching zero.
This is because the MAD may not converge to zero, owing to the conditions that prevent
each pixel from changing. The difference in MAD between the current and previous steps
is calculated as follows:
The final number of iterations is determined by finding the smallest value of t that satisfies both $\Delta_{MAD}^{c,h}(t) < \epsilon_{global}$ and $\Delta_{MAD}^{c,v}(t) < \epsilon_{global}$, which is defined as T.
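The global stopping rule can be sketched as follows; the threshold value `eps_global` is a placeholder assumption, since the paper's threshold is not given here.

```python
import numpy as np

def mad(prev, curr):
    """Eq. (13): mean absolute difference between consecutive iterations."""
    return float(np.mean(np.abs(curr - prev)))

def converged(mad_history, eps_global=1e-4):
    """Stop when the change in MAD between consecutive steps falls below
    eps_global (the Delta_MAD criterion): the paper tests the difference
    of consecutive MADs, not whether MAD itself approaches zero."""
    if len(mad_history) < 2:
        return False
    return abs(mad_history[-1] - mad_history[-2]) < eps_global
```

This captures the distinction drawn above: because frozen pixels keep the MAD from reaching zero, convergence is declared when the MAD stops changing, not when it vanishes.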
The process of weighting and summing the results obtained by guided filtering in
the vertical and horizontal directions with the number of iterations obtained earlier is
as follows:
$$\hat{\dot{I}}^{c}(i,j) = \frac{w^{c,h}(i,j)\, \dot{\bar{I}}_{T}^{c,h}(i,j) + w^{c,v}(i,j)\, \dot{\bar{I}}_{T}^{c,v}(i,j)}{w^{c,h}(i,j) + w^{c,v}(i,j)}, \qquad (15)$$
where wc,h (i, j) and wc,v (i, j) are the weights in the horizontal and vertical directions,
respectively, and are defined as follows:
$$w^{c,h}(i,j) = \frac{1}{D_T^{c,h}(i,j)}, \qquad w^{c,v}(i,j) = \frac{1}{D_T^{c,v}(i,j)}, \qquad (16)$$
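The weighted combination of Equations (15) and (16) can be sketched as follows; the small `eps` guarding against division by zero is an added safeguard, not part of the paper's formulation.

```python
import numpy as np

def combine_directions(Ih, Iv, Dh, Dv, eps=1e-8):
    """Eqs. (15)-(16): fuse the horizontal (Ih) and vertical (Iv)
    guided-filtering results, weighting each by the reciprocal of its
    per-pixel change criterion D^{c,h}, D^{c,v} at iteration T."""
    wh = 1.0 / (Dh + eps)  # small D (little change) => large weight
    wv = 1.0 / (Dv + eps)
    return (wh * Ih + wv * Iv) / (wh + wv)
```

A direction whose criterion has nearly converged (small D) therefore dominates the fused estimate, which is the intended behavior of the reciprocal weights.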
where $(m, n) \in \{0, 1, 2, 3\}^2$ determines the grid for upsampling and depends on the subsampled channel c. The indices (m, n) represent the position of a pixel within a 4 × 4 block. For example, if c = 1 in Figure 1c, (m, n) is (3, 3).
$$\Delta_{s1}^{c}(i,j) = \frac{\gamma_{s0}^{NW} \Delta_{s0}^{c}(i-2,j-2) + \gamma_{s0}^{NE} \Delta_{s0}^{c}(i-2,j+2) + \gamma_{s0}^{SE} \Delta_{s0}^{c}(i+2,j+2) + \gamma_{s0}^{SW} \Delta_{s0}^{c}(i+2,j-2)}{\gamma_{s0}^{NW} + \gamma_{s0}^{NE} + \gamma_{s0}^{SE} + \gamma_{s0}^{SW}}. \qquad (18)$$

$$\gamma_{s0}^{NW} = \frac{1}{2\left|\hat{I}^{PPI}(i-2,j-2) - \hat{I}^{PPI}(i,j)\right| + \left|\hat{I}^{PPI}(i-1,j-1) - \hat{I}^{PPI}(i+1,j+1)\right|},$$
$$\gamma_{s0}^{NE} = \frac{1}{2\left|\hat{I}^{PPI}(i-2,j+2) - \hat{I}^{PPI}(i,j)\right| + \left|\hat{I}^{PPI}(i-1,j+1) - \hat{I}^{PPI}(i+1,j-1)\right|},$$
$$\gamma_{s0}^{SE} = \frac{1}{2\left|\hat{I}^{PPI}(i+2,j+2) - \hat{I}^{PPI}(i,j)\right| + \left|\hat{I}^{PPI}(i+1,j+1) - \hat{I}^{PPI}(i-1,j-1)\right|},$$
$$\gamma_{s0}^{SW} = \frac{1}{2\left|\hat{I}^{PPI}(i+2,j-2) - \hat{I}^{PPI}(i,j)\right| + \left|\hat{I}^{PPI}(i+1,j-1) - \hat{I}^{PPI}(i-1,j+1)\right|}. \qquad (19)$$
$$\Delta_{s2}^{c}(i,j) = \frac{\gamma_{s1}^{N} \Delta_{s1}^{c}(i-2,j) + \gamma_{s1}^{E} \Delta_{s1}^{c}(i,j+2) + \gamma_{s1}^{S} \Delta_{s1}^{c}(i+2,j) + \gamma_{s1}^{W} \Delta_{s1}^{c}(i,j-2)}{\gamma_{s1}^{N} + \gamma_{s1}^{E} + \gamma_{s1}^{S} + \gamma_{s1}^{W}}. \qquad (20)$$

$$\gamma_{s1}^{N} = \frac{1}{2\left|\hat{I}^{PPI}(i-2,j) - \hat{I}^{PPI}(i,j)\right| + \left|\hat{I}^{PPI}(i-1,j) - \hat{I}^{PPI}(i+1,j)\right|},$$
$$\gamma_{s1}^{E} = \frac{1}{2\left|\hat{I}^{PPI}(i,j+2) - \hat{I}^{PPI}(i,j)\right| + \left|\hat{I}^{PPI}(i,j+1) - \hat{I}^{PPI}(i,j-1)\right|},$$
$$\gamma_{s1}^{S} = \frac{1}{2\left|\hat{I}^{PPI}(i+2,j) - \hat{I}^{PPI}(i,j)\right| + \left|\hat{I}^{PPI}(i+1,j) - \hat{I}^{PPI}(i-1,j)\right|},$$
$$\gamma_{s1}^{W} = \frac{1}{2\left|\hat{I}^{PPI}(i,j-2) - \hat{I}^{PPI}(i,j)\right| + \left|\hat{I}^{PPI}(i,j-1) - \hat{I}^{PPI}(i,j+1)\right|}. \qquad (21)$$
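The first, diagonal interpolation stage of Equations (18) and (19) can be sketched for a single interior pixel as follows; the exact placement of the absolute-value terms in the weights is our reading of Equation (19), and the small constant avoiding division by zero is an added assumption.

```python
import numpy as np

def diag_weight(ppi, i, j, di, dj):
    """Eq. (19)-style weight for the diagonal neighbor at (i+di, j+dj):
    reciprocal of PPI differences along that direction (di, dj in {-2, 2})."""
    d1 = abs(ppi[i + di, j + dj] - ppi[i, j])
    d2 = abs(ppi[i + di // 2, j + dj // 2] - ppi[i - di // 2, j - dj // 2])
    return 1.0 / (2.0 * d1 + d2 + 1e-8)  # 1e-8 avoids division by zero

def interp_diagonal(delta, ppi, i, j):
    """Eq. (18): weighted average of the four diagonal neighbors of the
    spectral-difference plane delta, guided by the estimated PPI."""
    num = den = 0.0
    for di, dj in [(-2, -2), (-2, 2), (2, 2), (2, -2)]:
        w = diag_weight(ppi, i, j, di, dj)
        num += w * delta[i + di, j + dj]
        den += w
    return num / den
```

On a flat PPI, all four weights are equal and the result reduces to the plain average of the diagonal neighbors; across an edge, the weights suppress neighbors on the far side.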
[Figure 3. Flowchart of the directional demosaicing: the raw image I^MSFA is masked into its channels I^c; the spectral differences ∆^c between each channel and the IGFPPI estimate Î^PPI are directionally interpolated through weighted combination and weighted sum (WC & WS); and Î^PPI is added back to obtain each demosaiced channel Î^c.]
Î c = ÎPPI + ∆c . (22)
4. Experiment Results
4.1. Metrics
To evaluate the quality of the demosaicing results, we used quantitative metrics, such
as the PSNR, SSIM, and SAM.
The PSNR, which measures the logarithm of the average difference between the
reference image and the estimated image, was calculated as follows:
$$PSNR(\mathbf{x}, \hat{\mathbf{x}}) = 10 \log_{10} \frac{MAX^2}{MSE(\mathbf{x}, \hat{\mathbf{x}})}, \qquad MSE(\mathbf{x}, \hat{\mathbf{x}}) = \frac{\|\mathbf{x} - \hat{\mathbf{x}}\|^2}{WH}, \qquad (23)$$
where MAX represents the maximum value of the image, MSE represents the mean
squared error between the reference image x and the estimated image x̂, and W and H
represent the width and height of the image, respectively.
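Equation (23) in code; the default `max_val` of 255 is an assumption about the image bit depth.

```python
import numpy as np

def psnr(x, x_hat, max_val=255.0):
    """Eq. (23): PSNR in dB from the mean squared error over the image."""
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Larger PSNR values indicate a smaller average error between the reference and the estimate.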
The SSIM was used to evaluate the similarity between the reference image x and the estimated image x̂. It was calculated using the following equation:

$$SSIM(\mathbf{x}, \hat{\mathbf{x}}) = \frac{(2\mu_{\mathbf{x}}\mu_{\hat{\mathbf{x}}} + c_1)(2\sigma_{\mathbf{x}\hat{\mathbf{x}}} + c_2)}{(\mu_{\mathbf{x}}^2 + \mu_{\hat{\mathbf{x}}}^2 + c_1)(\sigma_{\mathbf{x}}^2 + \sigma_{\hat{\mathbf{x}}}^2 + c_2)}, \qquad (24)$$
where µx and µx̂ represent the means of the image vectors x and x̂, respectively. The
standard deviations of x and x̂ are represented by σx and σx̂ , respectively. The covariance
between x and x̂ is represented by σxx̂ , and c1 and c2 are constants used to prevent the
denominator from approaching zero.
The SAM is commonly used to evaluate multispectral images. It represents the average
of the angles formed by the reference and estimated image vectors and is calculated using
the following formula:
$$SAM(\mathbf{x}, \hat{\mathbf{x}}) = \cos^{-1} \frac{\mathbf{x} \cdot \hat{\mathbf{x}}}{\|\mathbf{x}\| \|\hat{\mathbf{x}}\|}. \qquad (25)$$
For the PSNR and SSIM, larger values indicated better performance, and for the SAM,
smaller values indicated better performance.
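Equation (25) in code, averaging per-pixel spectral angles over the image as the text describes; forming the vectors per pixel along the spectral dimension is our assumption about the convention used.

```python
import numpy as np

def sam(x, x_hat):
    """Eq. (25): spectral angle (radians) between reference and estimate,
    computed per pixel over the spectral dimension, then averaged.

    x, x_hat : (H, W, N) multispectral cubes
    """
    dot = np.sum(x * x_hat, axis=-1)
    norm = np.linalg.norm(x, axis=-1) * np.linalg.norm(x_hat, axis=-1)
    # Clip to the valid arccos domain to guard against rounding error.
    ang = np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0))
    return float(np.mean(ang))
```

Identical spectra give an angle of zero; spectrally orthogonal pixels give π/2, so smaller averages mean better spectral fidelity.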
Figure 4. Experimental results for PPI estimation: (a) Original. (b) PPID. (c) IGFPPI.
The results for the PSNR, SSIM, and SAM of TT31 are presented in Tables 1–3, respectively. In the tables, a dark-gray background indicates the best score and a light-gray
background indicates the second-best score. Of the 35 images in the TT31 dataset, the
proposed method had the best PSNR for 19 images and the second-best PSNR for 16 images.
Additionally, it had the best SSIM for 20 images, the second-best SSIM for 15 images, the
best SAM for 18 images, and the second-best SAM for 17. The average PSNR, SSIM, and
SAM values for the TT31 dataset indicated that the proposed method outperformed the
other methods.
Table 1. PSNR (dB) comparison for TT31.
Figures 5 and 6 present the qualitative evaluation results for TT31, including those for
the Butterfly and ChartCZP images, with the images cropped to highlight differences. We
obtained red, green, and blue channels from the multispectral demosaicing image cube and
represented them as the RGB images for qualitative evaluation. Figures 5a–h and 6a–h show
RGB images from which we extracted channel 16 for red, channel 6 for green, and channel 1
for blue from the multispectral image cube. Figures 5i–p and 6i–p show the error maps of
Figures 5a–h and 6a–h. CM1 produced the blurriest images, and CM2 and CM3 estimated the high frequencies reasonably well, although artifacts are visible. CM4 and CM6 nearly perfectly restored the high frequencies in the resolution chart; however, the mosaic pattern was not entirely removed from the general color images. CM6 performs demosaicing using a network that erases the mosaic pattern for each channel. This method demosaics the 16 channels of an MSFA, although the arrangement differs from that in the CM6 paper. In its experimental results, only the evaluation metrics for the monotone chart images are high, because the mosaic pattern is easily erased in images where the changes in all channels are uniform but remains in images where a large change occurs
in a specific color. In general, the outcomes of CM5 and PM (referring to the proposed
method) appeared to be similar. However, for images such as the resolution chart, PM
exhibited superior high-frequency recovery and less color aliasing than CM5. Overall, the
image produced by PM had fewer mosaic pattern artifacts and less color aliasing than those
produced by the conventional methods.
For quantitative evaluation of the TT59 dataset, we computed the PSNR, SSIM, and
SAM values, which are presented in Tables 4–6, respectively. Of the 16 images in the
TT59 dataset, the proposed method had the best PSNR for 10 images, and the second-best
PSNR for 4 images. Moreover, it had the best SSIM for 8 images, the second-best SSIM for
7 images, the best SAM for 12 images and the second-best SAM for 4 images. The average
PSNR, SSIM, and SAM values for the TT59 dataset indicated that the proposed method
achieved the best results.
The results for the TT59 dataset were similar to those for the TT31 dataset. In the
gray areas, CM4 and CM6 effectively recovered the high frequencies. However, in the
colored sections, MSFA pattern artifacts were introduced, resulting in grid-like artifacts. By
comparison, CM5 and PM performed better overall, with PM recovering high frequencies
better than CM5, as shown in the resolution chart.
Figure 7 shows the demosaicing results for different MSFA arrangements. Figure 7a–h
shows the MSFAs in which adjacent spectra are grouped in a 2 × 2 shape. Figure 7i–p are
the MSFAs of the original IMEC camera. The proposed method is more robust and exhibits fewer artifacts than the conventional methods. In particular, Figure 7c,d,f
show grid artifacts where the black line of the butterfly is broken, whereas the proposed
method shows reduced grid artifacts compared with other methods.
Table 7 presents a comparison of the execution times, with the desktop specifications
of an Intel i7-11700k processor, 32 GB of memory, and an Nvidia RTX 3090 GPU. CM6
was tested using Pytorch, whereas the remaining methods were tested using MATLAB
R2021a. We measured the average execution time over all the datasets and found that the method with the shortest execution time was CM1, followed by CM5, PM, CM4, CM2, CM6, and CM3.
Figure 5. Experimental results for TT31: (a–h) Butterfly and (i–p) error maps of (a–h).
Figure 6. Cont.
Figure 6. Experimental results for TT31: (a–h) ChartCZP and (i–p) error maps of (a–h).
Figure 7. Cont.
Figure 7. Experimental results for various MSFAs: (a–h) Demosaicing results for different arrange-
ment MSFAs. (i–p) Demosaicing results for original MSFA.
(a) full-size image (b) CM1 cropped image (c) CM2 cropped image (d) CM3 cropped image
(e) CM4 cropped image (f) CM5 cropped image (g) CM6 cropped image (h) PM cropped image
5. Conclusions
We propose an IGFPPI method for PPI estimation and a directional-multispectral-
demosaicing method using the estimated PPI obtained from IGFPPI. Guided filtering was
used to estimate the PPI from the raw image of the MSFA, where a Gaussian filter was used
to obtain the PPI of the low-frequency components, and horizontal and vertical guided
filtering was used to estimate the high-frequency components. Using the estimated PPI, we
performed directional interpolation in the spectral-difference domain to obtain the final
demosaiced multispectral image.
In extensive experiments, among the methods tested, the proposed method achieved
the best quantitative scores for the PSNR, SSIM, and SAM and exhibited the best restoration
of high frequencies and the least color artifacts in a qualitative evaluation, with a reasonable
computation time. The proposed method also achieved good results for real-world images.
Furthermore, our proposed method can be adapted to perform multispectral demosaicing
in the case of a periodic MSFA and when the spectral transmittance of the MSFA varies. In
future research, we will focus on image-fusion demosaicing using both multispectral and
color filter arrays.
Author Contributions: Conceptualization, K.J.; methodology, K.J.; software, K.J. and S.K.; validation,
K.J., S.K. and M.G.K.; funding acquisition, M.G.K.; supervision, K.J. and M.G.K. All authors have
read and agreed to the published version of the manuscript.
Funding: This work was supported by the National Research Foundation of Korea (NRF) grant
funded by the Korean government (MSIT) (No. 2022R1A2C200289711).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data are contained within the article.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Adam, E.; Mutanga, O.; Rugege, D. Multispectral and hyperspectral remote sensing for identification and mapping of wetland
vegetation: A review. Wetl. Ecol. Manag. 2010, 18, 281–296. [CrossRef]
2. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-based multispectral remote sensing for precision agriculture: A comparison
between different cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136. [CrossRef]
3. Wu, Y.; Zeng, F.; Zhao, Y.; Wu, S. Emerging contrast agents for multispectral optoacoustic imaging and their biomedical
applications. Chem. Soc. Rev. 2021, 50, 7924–7940. [CrossRef] [PubMed]
4. Bayer, B.E. Color Imaging Array. U.S. Patent 3,971,065, 20 July 1976.
5. Monno, Y.; Tanaka, M.; Okutomi, M. Multispectral demosaicking using adaptive kernel upsampling. In Proceedings of the 2011
18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; IEEE: Piscataway, NJ, USA,
2011; pp. 3157–3160.
6. Geelen, B.; Tack, N.; Lambrechts, A. A compact snapshot multispectral imager with a monolithically integrated per-pixel
filter mosaic. In Advanced Fabrication Technologies for Micro/Nano Optics and Photonics VII; SPIE: Bellingham, WA, USA, 2014;
Volume 8974, pp. 80–87.
7. Kimmel, R. Demosaicing: Image reconstruction from color CCD samples. IEEE Trans. Image Process. 1999, 8, 1221–1228. [CrossRef]
[PubMed]
8. Lu, W.; Tan, Y.P. Color filter array demosaicking: new method and performance measures. IEEE Trans. Image Process. 2003,
12, 1194–1210. [PubMed]
9. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Residual interpolation for color image demosaicking. In Proceedings of the 2013
IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013; IEEE: Piscataway, NJ,
USA, 2013; pp. 2304–2308.
10. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [CrossRef] [PubMed]
11. Gharbi, M.; Chaurasia, G.; Paris, S.; Durand, F. Deep joint demosaicking and denoising. ACM Trans. Graph. (ToG) 2016, 35, 1–12.
[CrossRef]
12. Rathi, V.; Goyal, P. Generic multispectral demosaicking based on directional interpolation. IEEE Access 2022, 10, 64715–64728.
[CrossRef]
13. Miao, L.; Qi, H.; Ramanath, R.; Snyder, W.E. Binary tree-based generic demosaicking algorithm for multispectral filter arrays.
IEEE Trans. Image Process. 2006, 15, 3550–3558. [CrossRef] [PubMed]
14. Shinoda, K.; Ogawa, S.; Yanagi, Y.; Hasegawa, M.; Kato, S.; Ishikawa, M.; Komagata, H.; Kobayashi, N. Multispectral filter
array and demosaicking for pathological images. In Proceedings of the 2015 Asia-Pacific Signal and Information Processing
Association Annual Summit and Conference (APSIPA), Hong Kong, China, 16–19 December 2015; IEEE: Piscataway, NJ, USA,
2015; pp. 697–703.
15. Mihoubi, S.; Losson, O.; Mathon, B.; Macaire, L. Multispectral demosaicing using pseudo-panchromatic image. IEEE Trans.
Comput. Imaging 2017, 3, 982–995. [CrossRef]
16. Feng, K.; Zhao, Y.; Chan, J.C.W.; Kong, S.G.; Zhang, X.; Wang, B. Mosaic convolution-attention network for demosaicing
multispectral filter array images. IEEE Trans. Comput. Imaging 2021, 7, 864–878. [CrossRef]
17. Liu, S.; Zhang, Y.; Chen, J.; Lim, K.P.; Rahardja, S. A Deep Joint Network for Multispectral Demosaicking Based on Pseudo-
Panchromatic Images. IEEE J. Sel. Top. Signal Process. 2022, 16, 622–635. [CrossRef]
18. Zhao, B.; Zheng, J.; Dong, Y.; Shen, N.; Yang, J.; Cao, Y.; Cao, Y. PPI Edge Infused Spatial-Spectral Adaptive Residual Network for
Multispectral Filter Array Image Demosaicing. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5405214. [CrossRef]
19. Chen, Y.; Zhang, H.; Wang, Y.; Ying, A.; Zhao, B. ADMM-DSP: A Deep Spectral Image Prior for Snapshot Spectral Image
Demosaicing. IEEE Trans. Ind. Inform. 2023, early access.
20. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: from error visibility to structural similarity. IEEE
Trans. Image Process. 2004, 13, 600–612. [CrossRef] [PubMed]
21. Kruse, F.A.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system
(SIPS)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [CrossRef]
22. Ye, W.; Ma, K.K. Color image demosaicing using iterative residual interpolation. IEEE Trans. Image Process. 2015, 24, 5879–5891.
[CrossRef] [PubMed]
23. Monno, Y.; Kikuchi, S.; Tanaka, M.; Okutomi, M. A practical one-shot multispectral imaging system using a single image sensor.
IEEE Trans. Image Process. 2015, 24, 3048–3059. [CrossRef] [PubMed]
24. Monno, Y.; Teranaka, H.; Yoshizaki, K.; Tanaka, M.; Okutomi, M. Single-sensor RGB-NIR imaging: High-quality system design
and prototype implementation. IEEE Sens. J. 2018, 19, 497–507. [CrossRef]
25. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized assorted pixel camera: postcapture control of resolution, dynamic
range, and spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [CrossRef] [PubMed]
26. Pichette, J.; Laurence, A.; Angulo, L.; Lesage, F.; Bouthillier, A.; Nguyen, D.K.; Leblond, F. Intraoperative video-rate hemodynamic
response assessment in human cortex using snapshot hyperspectral optical imaging. Neurophotonics 2016, 3, 045003–045003.
[CrossRef] [PubMed]
27. Brauers, J.; Aach, T. A color filter array based multispectral camera. In 12. Workshop Farbbildverarbeitung; Lehrstuhl für
Bildverarbeitung: Ilmenau, Germany, 2006; pp. 55–64.
28. Mizutani, J.; Ogawa, S.; Shinoda, K.; Hasegawa, M.; Kato, S. Multispectral demosaicking algorithm based on inter-channel
correlation. In Proceedings of the 2014 IEEE Visual Communications and Image Processing Conference, Valletta, Malta, 7–10
December 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 474–477.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.