Article

Infrared and Visible Image Fusion via Sparse Representation and Guided Filtering in Laplacian Pyramid Domain

1 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
2 School of Computer Science and Technology, Xinjiang University, Urumqi 830046, China
3 National Key Laboratory of Space Integrated Information System, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
4 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Submission received: 16 August 2024 / Revised: 21 September 2024 / Accepted: 27 September 2024 / Published: 13 October 2024

Abstract

Fusing infrared and visible images can fully leverage the respective advantages of each modality, providing a more comprehensive and richer set of information for fields such as military surveillance, night navigation, and environmental monitoring. In this paper, a novel infrared and visible image fusion method based on sparse representation and guided filtering in the Laplacian pyramid (LP) domain is introduced. Each source image is decomposed by the LP into a low-frequency band and a series of high-frequency bands. Sparse representation, which has proven highly effective in image fusion, is used to process the low-frequency band; guided filtering, which has excellent edge-preserving properties and effectively maintains spatial continuity, is combined with the weighted sum of eight-neighborhood-based modified Laplacian (WSEML) to process the high-frequency bands. Finally, the inverse LP transform is used to reconstruct the fused image. We conducted simulation experiments on the publicly available TNO dataset to validate the superiority of the proposed algorithm in fusing infrared and visible images. Our algorithm preserves both the thermal radiation characteristics of the infrared image and the detailed features of the visible image.

1. Introduction

Infrared and visible image fusion is a process that integrates the complementary information from infrared (IR) and visible light images to produce a single image that is more informative and suitable for human perception or automated analysis tasks [1]. This technique leverages the distinct advantages of both imaging modalities to enhance the visibility of features that are not apparent in either image alone [2,3].
Unlike visible light images, infrared images capture the thermal radiation emitted by objects. This allows for the detection of living beings, machinery, and other heat sources, even in total darkness or through obstructions like smoke and fog. IR imaging is invaluable for applications requiring visibility in low-light conditions, such as night-time surveillance, search and rescue operations, and wildlife observation [4].
Visible light images provide high-resolution details and color information, which are crucial for human interpretation and understanding of a scene. From photography to video surveillance, visible light imaging is the most common form of imaging, offering a straightforward depiction of the environment as perceived by the human eye. The fusion process integrates the thermal information from infrared images with the detail and color information from visible images [5,6,7,8]. This results in images that highlight both the thermal signatures and the detailed scene information. By combining these two types of images, the fused image enhances the ability to detect and recognize subjects and objects in various conditions, including complete darkness, smoke, fog, and camouflage situations.
Various algorithms and techniques, including multi-resolution analysis, image decomposition, and feature-based methods, have been developed to fuse the images. A major challenge in image fusion is to maintain and highlight the essential details from both source images in the combined image, while avoiding artifacts and ensuring that no crucial information is lost [9,10,11,12,13,14,15,16,17]. For some applications, such as surveillance and automotive safety, the ability to process and fuse images in real time is crucial. This creates difficulties in terms of processing efficiency and the fine-tuning of algorithms.
During the fusion process, some information may be lost or confused, especially in areas with strong contrast or rich details, where the fusion algorithm might not fully retain the information from each image. Additionally, noise or artifacts may be introduced during the fusion process, affecting the quality of the final image. To enhance the performance of the fused image in terms of both thermal radiation characteristics and detail clarity, a fusion method utilizing sparse representation and guided filtering in the Laplacian pyramid domain is constructed. Sparse representation has demonstrated excellent results in image fusion; it is used to process the low-frequency sub-bands, and guided filtering combined with the weighted sum of eight-neighborhood-based modified Laplacian (WSEML) is utilized to process the high-frequency sub-bands. Through experiments and validation on the publicly available TNO dataset, our algorithm has achieved significant fusion effects, incorporating both infrared characteristics and scene details. This is advantageous for subsequent target detection and recognition tasks.
The paper is structured as follows: Section 2 reviews related research. Section 3 introduces the Laplacian pyramid transform. Section 4 details the proposed fusion approach. Section 5 shows the experimental results and discussion. Finally, Section 6 concludes the paper. This structure ensures a clear progression through the background research, foundational concepts, algorithmic details, empirical findings, and concluding remarks, thereby comprehensively addressing the topic of image fusion in the Laplacian pyramid domain.

2. Related Works

2.1. Deep Learning on Image Fusion

Deep learning has achieved significant results in the field of image processing, with popular architectures including CNNs [18], GANs [19], the Swin Transformer [20,21], the Vision Transformer [22], and Mamba [23]. Deep learning has significantly advanced the field of image fusion by introducing models that can learn complex representations and fusion rules from data, leading to superior fusion performance compared with traditional techniques. Deep-learning models can automatically extract and merge the most pertinent features from both infrared and visible images. This process produces fused images that effectively combine the thermal information from infrared images with the detailed texture and color from visible images [24,25,26].
CNNs are widely employed as deep-learning models for image fusion. They excel at capturing spatial hierarchies in images through their deep architecture, making them ideal for tasks that involve spatial data, like images. In the context of image fusion, CNNs can be trained to identify and merge the salient features from both infrared and visible images, ensuring that the fused image retains critical information from both sources [27]. Liu et al. [28] introduced the fusion of infrared and visible images using CNNs. Their experimental findings showcase that this approach attains cutting-edge outcomes in both visual quality and objective metrics. Similarly, Yang et al. [29] devised a method for image fusion leveraging multi-scale convolutional neural networks alongside saliency weight maps.
GANs have also been applied to image fusion with promising results [30,31]. A GAN consists of two networks: a generator that creates images and a discriminator that evaluates them. For image fusion, the generator can be trained to produce fused images from input images, while the discriminator ensures that the fused images are indistinguishable from real images in terms of quality and information content. This approach can result in high-quality fused images that effectively blend the characteristics of both modalities. Chang et al. [32] presented a GAN model incorporating dual fusion paths and a U-type discriminator. Experimental findings illustrate that this approach outperforms other methods.
Deep learning offers a powerful framework for image fusion, with the potential to significantly enhance the quality and usefulness of fused images across a wide range of applications. Ongoing research in this field focuses on developing more efficient, adaptable, and interpretable models that can provide even better fusion results.

2.2. Traditional Methods of Image Fusion

Traditional methods for image fusion focus on combining the complementary information from source images to enhance the visibility of features and improve the overall quality of the resulting image. These techniques are generally categorized by the domain in which the fusion takes place: transform-domain and spatial-domain methods [33,34,35,36,37].
In transform-domain methods, Chen et al. [38] introduced a spatial-frequency collaborative fusion framework for image fusion; this algorithm utilizes the properties of the nonsubsampled shearlet transform for decomposition and reconstruction. Chen et al. [39] introduced a fusion approach tailored for image fusion, emphasizing edge consistency and correlation-driven integration. Through nonsubsampled shearlet transform decomposition, detail layers containing image details and textures are obtained alongside a base layer containing primary features. Li et al. [40] introduced a method for fusing infrared and visible images, leveraging low-pass filtering and sparse representation. Chen et al. [41] introduced multi-focus image fusion with complex sparse representation (CSR); this model leverages the properties of hypercomplex signals to obtain directional information from real-valued signals by extending them to complex form. It then decomposes these directional components into sparse coefficients using specific directional dictionaries. Unlike traditional SR models, this approach excels at capturing geometric structures in images, because CSR coefficients offer accurate measurements of detailed information along particular directions.
In spatial domain methods, Li et al. [42] introduced a neural-network-based approach to assess focus properties using measures like spatial frequency, visibility, and edge features within the source image blocks.

3. Laplacian Pyramid Transform

The Laplacian pyramid of an image can be obtained by computing the difference between every two consecutive layers of the Gaussian pyramid [43,44,45]. Suppose $G_0$ is the matrix of the original image and $G_k$ is the $k$th layer of its Gaussian pyramid decomposition; similarly, $G_{k-1}$ is the $(k-1)$th layer, with the 0th layer being the image itself. $G_k$ is defined as follows [44]:
$$G_k(i,j) = \sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, G_{k-1}(2i+m,\ 2j+n), \quad 1 \le k \le N,\ 0 \le i < R_k,\ 0 \le j < C_k$$
where $N$ is the maximum number of layers in the Gaussian pyramid decomposition; $R_k$ and $C_k$ are the numbers of rows and columns of the $k$th layer, respectively; and $w(m,n)$ is a $5 \times 5$ low-pass window function [44,45]:
$$w = \frac{1}{256}\begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix}$$
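As a concrete illustration, the Reduce step above (low-pass filtering with $w$ followed by downsampling by a factor of two) can be sketched in a few lines of NumPy/SciPy. This is a minimal, illustrative implementation under standard boundary-handling assumptions (reflective padding); it is not the authors' code, and the names are chosen for this sketch only.

```python
import numpy as np
from scipy.ndimage import convolve

# The 5x5 generating kernel w(m, n) given above.
w = np.array([[1,  4,  6,  4, 1],
              [4, 16, 24, 16, 4],
              [6, 24, 36, 24, 6],
              [4, 16, 24, 16, 4],
              [1,  4,  6,  4, 1]], dtype=np.float64) / 256.0

def reduce_layer(g_prev):
    """Reduce: low-pass filter G_{k-1} with w, then keep every second row and column."""
    blurred = convolve(g_prev, w, mode='reflect')
    return blurred[::2, ::2]
```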
To compute the difference between the $k$th layer image $G_k$ and the $(k-1)$th layer image $G_{k-1}$ of the Gaussian pyramid, the low-resolution image $G_k$ must first be upsampled to the size of the high-resolution image $G_{k-1}$. The upsampling operation, which is the opposite of image downsampling (Reduce), is called Expand:
$$G_k^{\ast} = \mathrm{Expand}(G_k)$$
where $G_k^{\ast}$ and $G_{k-1}$ have the same dimensions. The operation interpolates and enlarges the $k$th layer image $G_k$ as follows:
$$G_k^{\ast}(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, G_k\!\left(\frac{i-m}{2},\ \frac{j-n}{2}\right), \quad 1 \le k \le N,\ 0 \le i < R_{k-1},\ 0 \le j < C_{k-1}$$
where
$$G_k\!\left(\frac{i-m}{2},\ \frac{j-n}{2}\right) = \begin{cases} G_k\!\left(\frac{i-m}{2},\ \frac{j-n}{2}\right), & \text{when } \frac{i-m}{2} \text{ and } \frac{j-n}{2} \text{ are integers} \\ 0, & \text{else} \end{cases}$$
From this definition, it can be inferred that the newly interpolated pixels between the original pixels are determined by a weighted average of the original pixel intensities.
At this point, the difference between the expanded $k$th layer image $G_k^{\ast}$ and the $(k-1)$th layer image $G_{k-1}$ in the pyramid is obtained as
$$\mathrm{LP}_{k-1} = G_{k-1} - G_k^{\ast} = G_{k-1} - \mathrm{Expand}(G_k)$$
The above expression generates the $(k-1)$th level of the Laplacian pyramid. Since $G_k$ is obtained from $G_{k-1}$ through low-pass filtering and downsampling, it contains significantly less detail than $G_{k-1}$, and the interpolated image $G_k^{\ast} = \mathrm{Expand}(G_k)$ therefore still contains less detail than $G_{k-1}$. $\mathrm{LP}_{k-1}$, the difference between $G_{k-1}$ and $G_k^{\ast}$, thus reflects the information gap between these two Gaussian pyramid layers: it contains the high-frequency detail lost when $G_k$ is obtained by blurring and downsampling $G_{k-1}$.
The complete definition of the Laplacian pyramid is as follows:
$$\begin{cases} \mathrm{LP}_k = G_k - \mathrm{Expand}(G_{k+1}), & 0 \le k < N \\ \mathrm{LP}_N = G_N, & k = N \end{cases}$$
Thus, $\mathrm{LP}_0, \mathrm{LP}_1, \ldots, \mathrm{LP}_N$ form the Laplacian pyramid of the image, where each layer is the difference between the corresponding Gaussian pyramid layer and the upsampled version of the layer above it. This process is akin to bandpass filtering; therefore, the Laplacian pyramid can also be referred to as a bandpass tower decomposition.
The decomposition process of the Laplacian pyramid can be summarized in four steps: low-pass filtering, downsampling, interpolation, and bandpass filtering. Figure 1 shows the decomposition and reconstruction process of the Laplacian pyramid transform. The series of pyramid images obtained through Laplacian decomposition can be reconstructed into the original image through an inverse transformation. Below, we derive the reconstruction from the complete pyramid definition above:
$$\begin{aligned} G_0 &= \mathrm{LP}_0 + \mathrm{Expand}(G_1) \\ G_1 &= \mathrm{LP}_1 + \mathrm{Expand}(G_2) \\ &\ \,\vdots \\ G_{N-1} &= \mathrm{LP}_{N-1} + \mathrm{Expand}(G_N) \\ G_N &= \mathrm{LP}_N \end{aligned}$$
In summary, the reconstruction formula for the Laplacian pyramid can be expressed as
$$\begin{cases} G_N = \mathrm{LP}_N, & k = N \\ G_k = \mathrm{LP}_k + \mathrm{Expand}(G_{k+1}), & 0 \le k < N \end{cases}$$
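Putting Reduce and Expand together, the full decomposition and reconstruction can be sketched as follows. This continues the illustrative NumPy code above (reusing `w`, `convolve`, and `reduce_layer`) and is only a minimal sketch of the transform, not the authors' implementation.

```python
def expand_layer(g, shape):
    """Expand: zero-insert g onto a grid of size `shape`, then filter with 4*w."""
    up = np.zeros(shape, dtype=np.float64)
    up[::2, ::2] = g                      # original samples; interpolated pixels start at 0
    return convolve(up, 4.0 * w, mode='reflect')

def lp_decompose(image, levels):
    """Return [LP_0, ..., LP_{N-1}, G_N] for an N-level Laplacian pyramid."""
    gaussian = [np.asarray(image, dtype=np.float64)]
    for _ in range(levels):
        gaussian.append(reduce_layer(gaussian[-1]))
    pyramid = [gaussian[k] - expand_layer(gaussian[k + 1], gaussian[k].shape)
               for k in range(levels)]
    pyramid.append(gaussian[-1])          # top level: LP_N = G_N
    return pyramid

def lp_reconstruct(pyramid):
    """Invert the decomposition: G_k = LP_k + Expand(G_{k+1})."""
    g = pyramid[-1]
    for lp in reversed(pyramid[:-1]):
        g = lp + expand_layer(g, lp.shape)
    return g
```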

4. Proposed Fusion Method

In this section, we present a technique for fusing infrared and visible images using sparse representation and guided filtering within the Laplacian pyramid framework. The method involves four main stages: image decomposition, low-frequency fusion, high-frequency fusion, and image reconstruction. The structure of the proposed method is shown in Figure 2.

4.1. Image Decomposition

The original image undergoes decomposition into a Laplacian pyramid (LP), yielding a low-frequency band $LP_N$ and a series of high-frequency bands. The LP transform is applied separately to the source images $A$ and $B$, resulting in $L_A^k$ and $L_B^k$, which represent the $k$th pyramid layers of the source images. When $k = N$, $L_A^N$ and $L_B^N$ are the decomposed top-level bands (i.e., the low-frequency information).

4.2. Low-Frequency Fusion

The low-frequency band effectively encapsulates the general structure and energy of the image. Sparse representation [1] has demonstrated efficacy in image fusion tasks; hence, it is employed to process the low-frequency band.
The sliding window technique is used to partition $L_A^N$ and $L_B^N$ into $n \times n$ image patches, from upper left to lower right, with a step length of $s$ pixels. Suppose there are $T$ patches, denoted $\{p_A^i\}_{i=1}^{T}$ and $\{p_B^i\}_{i=1}^{T}$, in $L_A^N$ and $L_B^N$, respectively.
For each position $i$, rearrange $p_A^i$ and $p_B^i$ into column vectors $V_A^i$ and $V_B^i$, and then subtract each vector's mean value to obtain the zero-mean vectors $\hat{V}_A^i$ and $\hat{V}_B^i$ using the following equations [1]:
$$\hat{V}_A^i = V_A^i - \bar{v}_A^i \cdot \mathbf{1}$$
$$\hat{V}_B^i = V_B^i - \bar{v}_B^i \cdot \mathbf{1}$$
where $\mathbf{1}$ denotes an all-one $n^2 \times 1$ vector, and $\bar{v}_A^i$ and $\bar{v}_B^i$ are the mean values of all the elements in $V_A^i$ and $V_B^i$, respectively.
To compute the sparse coefficient vectors $\alpha_A^i$ and $\alpha_B^i$ of $\hat{V}_A^i$ and $\hat{V}_B^i$, we employ the orthogonal matching pursuit (OMP) technique, applying the following formulas:
$$\alpha_A^i = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \left\| \hat{V}_A^i - D\alpha \right\|_2 < \varepsilon$$
$$\alpha_B^i = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \left\| \hat{V}_B^i - D\alpha \right\|_2 < \varepsilon$$
Here, D represents the learned dictionary obtained through the K-singular value decomposition (K-SVD) approach.
Next, $\alpha_A^i$ and $\alpha_B^i$ are combined using the "max-L1" rule to produce the fused sparse vector:
$$\alpha_F^i = \begin{cases} \alpha_A^i, & \text{if } \left\|\alpha_A^i\right\|_1 > \left\|\alpha_B^i\right\|_1 \\ \alpha_B^i, & \text{else} \end{cases}$$
The fused result of $V_A^i$ and $V_B^i$ is then calculated as
$$V_F^i = D\alpha_F^i + \bar{v}_F^i \cdot \mathbf{1}$$
where the merged mean value $\bar{v}_F^i$ is computed as follows:
$$\bar{v}_F^i = \begin{cases} \bar{v}_A^i, & \text{if } \alpha_F^i = \alpha_A^i \\ \bar{v}_B^i, & \text{else} \end{cases}$$
The above process is iterated over all the source image patches in $\{p_A^i\}_{i=1}^{T}$ and $\{p_B^i\}_{i=1}^{T}$ to generate all the fused vectors $\{V_F^i\}_{i=1}^{T}$. Let $L_F^N$ denote the low-pass fused result. Each $V_F^i$ is reshaped into a patch $p_F^i$ and placed at its original position in $L_F^N$. Since the patches overlap, each pixel value in $L_F^N$ is averaged over the number of times it is accumulated.
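A compact sketch of this patch-wise rule is given below. It assumes a pre-learned dictionary D whose columns are unit-norm atoms (e.g., obtained via K-SVD) and uses scikit-learn's orthogonal_mp solver, whose tol argument bounds the squared residual norm, so ε² is passed; the function and variable names are illustrative and this is not the paper's code.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def fuse_low_frequency(lA, lB, D, patch=6, step=1, eps=0.1):
    """Fuse two low-frequency bands patch-by-patch with the max-L1 sparse rule."""
    H, W = lA.shape
    fused = np.zeros((H, W))
    hits = np.zeros((H, W))
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            vA = lA[i:i + patch, j:j + patch].reshape(-1, 1)
            vB = lB[i:i + patch, j:j + patch].reshape(-1, 1)
            mA, mB = vA.mean(), vB.mean()
            # Sparse-code the zero-mean vectors with OMP (tol = squared residual bound).
            aA = orthogonal_mp(D, vA - mA, tol=eps ** 2).ravel()
            aB = orthogonal_mp(D, vB - mB, tol=eps ** 2).ravel()
            # "max-L1" rule: keep the code (and its mean) with the larger L1 norm.
            if np.abs(aA).sum() > np.abs(aB).sum():
                aF, mF = aA, mA
            else:
                aF, mF = aB, mB
            vF = D @ aF + mF
            fused[i:i + patch, j:j + patch] += vF.reshape(patch, patch)
            hits[i:i + patch, j:j + patch] += 1.0
    return fused / np.maximum(hits, 1.0)   # average overlapping contributions
```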

4.3. High-Frequency Fusion

The high-frequency bands contain detailed information. The activity level measure, named WSEML, is defined as follows [46]:
$$\mathrm{WSEML}_S(i,j) = \sum_{m=-r}^{r}\sum_{n=-r}^{r} W(m+r+1,\ n+r+1) \times \mathrm{EML}_S(i+m,\ j+n)$$
where $S \in \{L_A^k, L_B^k\}$ and $r = 1$; the $3 \times 3$ normalized weighting matrix $W$ is defined as follows:
$$W = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$$
and $\mathrm{EML}_S$ is computed by
$$\begin{aligned} \mathrm{EML}_S(i,j) = {} & \left| 2S(i,j) - S(i-1,j) - S(i+1,j) \right| + \left| 2S(i,j) - S(i,j-1) - S(i,j+1) \right| \\ & + \frac{1}{\sqrt{2}}\left| 2S(i,j) - S(i-1,j-1) - S(i+1,j+1) \right| + \frac{1}{\sqrt{2}}\left| 2S(i,j) - S(i-1,j+1) - S(i+1,j-1) \right| \end{aligned}$$
Two zero-valued matrices, $\mathrm{map}_A$ and $\mathrm{map}_B$, are initialized and then computed by
$$\mathrm{map}_A(i,j) = \begin{cases} 1, & \text{if } \mathrm{WSEML}_{L_A^k}(i,j) \ge \mathrm{WSEML}_{L_B^k}(i,j) \\ 0, & \text{else} \end{cases}, \quad 0 \le k < N$$
$$\mathrm{map}_B(i,j) = 1 - \mathrm{map}_A(i,j)$$
Guided filtering, denoted as $G_{r,\varepsilon}(p, I)$, is a linear filtering technique [47,48], where $r$ and $\varepsilon$ control the size of the filter kernel and the degree of blurring, respectively, and $p$ and $I$ denote the input image and the guidance image. To enhance the spatial continuity of the high-pass bands, guided filtering is applied to $\mathrm{map}_A$ and $\mathrm{map}_B$ with the corresponding high-pass bands $L_A^k$ and $L_B^k$ as the guidance images:
$$\mathrm{map}_A = G_{r,\varepsilon}\left(\mathrm{map}_A, L_A^k\right)$$
$$\mathrm{map}_B = G_{r,\varepsilon}\left(\mathrm{map}_B, L_B^k\right)$$
The filtered maps $\mathrm{map}_A$ and $\mathrm{map}_B$ are then normalized, and the fused high-pass bands $L_F^k$ are calculated by
$$L_F^k(i,j) = \mathrm{map}_A(i,j) \times L_A^k(i,j) + \mathrm{map}_B(i,j) \times L_B^k(i,j), \quad 0 \le k < N$$
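The sketch below illustrates this high-frequency rule end to end. The guided filter is written as a simple box-filter implementation of He et al.'s algorithm for self-containment, the diagonal $1/\sqrt{2}$ weighting follows the EML definition above, and the defaults for $r$ and $\varepsilon$ are illustrative; all names are assumptions for this sketch, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def eml(s):
    """Eight-neighborhood modified Laplacian of a band s (diagonals weighted by 1/sqrt(2))."""
    s = np.asarray(s, dtype=np.float64)
    c = 1.0 / np.sqrt(2.0)
    up, down = np.roll(s, 1, 0), np.roll(s, -1, 0)
    left, right = np.roll(s, 1, 1), np.roll(s, -1, 1)
    return (np.abs(2 * s - up - down)
            + np.abs(2 * s - left - right)
            + c * np.abs(2 * s - np.roll(up, 1, 1) - np.roll(down, -1, 1))
            + c * np.abs(2 * s - np.roll(up, -1, 1) - np.roll(down, 1, 1)))

def wseml(s):
    """Weighted sum of EML over a 3x3 window with [1 2 1; 2 4 2; 1 2 1]/16 weights."""
    w3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16.0
    return convolve(eml(s), w3, mode='reflect')

def guided_filter(p, I, r=3, eps=1e-6):
    """Box-filter guided filter G_{r,eps}(p, I) after He et al."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size, mode='reflect')
    mI, mp = mean(I), mean(p)
    a = (mean(I * p) - mI * mp) / (mean(I * I) - mI * mI + eps)
    b = mp - a * mI
    return mean(a) * I + mean(b)

def fuse_high_frequency(hA, hB, r=3, eps=1e-6):
    """Fuse two high-pass bands with WSEML decision maps smoothed by guided filtering."""
    hA = np.asarray(hA, dtype=np.float64)
    hB = np.asarray(hB, dtype=np.float64)
    mapA = (wseml(hA) >= wseml(hB)).astype(np.float64)
    wA = guided_filter(mapA, hA, r, eps)
    wB = guided_filter(1.0 - mapA, hB, r, eps)
    total = wA + wB
    wA, wB = wA / (total + 1e-12), wB / (total + 1e-12)   # normalize the weights per pixel
    return wA * hA + wB * hB
```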

4.4. Image Reconstruction

The corresponding inverse LP transform is performed on the fused bands to reconstruct the final fused image.

5. Experimental Results and Discussion

5.1. Experimental Setup

In this section, we conducted simulation experiments using the TNO public dataset [49] and compared the results through qualitative and quantitative evaluations. Figure 3 shows examples from the TNO dataset. We compared our algorithm with eight other image fusion algorithms, namely, ICA [50], ADKLT [51], MFSD [52], MDLatLRR [53], PMGI [54], RFNNest [55], EgeFusion [56], and LEDIF [57]. For quantitative evaluation, we adopted 10 commonly used evaluation metrics: the edge-based similarity measurement $Q^{AB/F}$ [58,59,60,61,62,63], the human-perception-inspired metric $Q_{CB}$ [64,65], the structural-similarity-based metric $Q_E$ [64], the feature mutual information metric $Q_{FMI}$ [66], the gradient-based metric $Q_G$ [64], the mutual information metric $Q_{MI}$ [58,67], the nonlinear correlation information entropy $Q_{NCIE}$ [64], the normalized mutual information $Q_{NMI}$ [64], the phase-congruency-based metric $Q_P$ [64], and the structural-similarity-based metric introduced by Yang et al., $Q_Y$ [64,68,69]. $Q^{AB/F}$ measures the amount of edge information transferred from the source images to the fused image using a Sobel edge detector. $Q_{CB}$ is a perceptual fusion metric based on human visual system (HVS) models. $Q_E$ takes both the original images and their edge images into consideration. $Q_{FMI}$ calculates the regional mutual information between corresponding windows in the fused image and the two source images. $Q_G$ is obtained from the weighted average of the edge information preservation values. $Q_{MI}$ computes how much information from the source images is transferred to the fused image. $Q_{NCIE}$ is an information-theory-based metric. $Q_{NMI}$ is a quantitative measure of the mutual dependence of two variables. $Q_P$ provides an absolute measure of image features. $Q_Y$ is a fusion metric based on SSIM. Higher values indicate better fusion performance.
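For illustration, the mutual-information metric $Q_{MI}$ can be computed as the sum of the mutual information between each source image and the fused image. The sketch below uses 256-bin joint histograms and is an illustrative re-implementation under that assumption, not the exact evaluation code used in our experiments.

```python
import numpy as np

def mutual_information(x, y, bins=256):
    """Mutual information between two images from a joint gray-level histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def q_mi(img_a, img_b, fused):
    """Q_MI as the total mutual information transferred from both sources to the fused image."""
    return mutual_information(img_a, fused) + mutual_information(img_b, fused)
```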
The parameters of the compared algorithms are the default values given in the respective articles. For our method, the parameters are as follows: $r = 3$, $\varepsilon = 10^{-6}$; the dictionary size is 256, with K-SVD iterated 180 times; the patch size is $6 \times 6$, the step length is 1, and the error tolerance is 0.1 [1].
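As a rough illustration of the dictionary-learning setup, the sketch below trains a 256-atom dictionary from zero-mean 6 × 6 patches. scikit-learn does not provide K-SVD, so MiniBatchDictionaryLearning is used here purely as a stand-in; the sampling size, iteration defaults, and function names are illustrative assumptions rather than the settings or code used in our experiments.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_dictionary(training_images, n_atoms=256, patch=6, seed=0):
    """Learn an n_atoms-column dictionary from zero-mean patch vectors (K-SVD stand-in)."""
    patches = np.vstack([
        extract_patches_2d(np.asarray(img, dtype=np.float64), (patch, patch),
                           max_patches=2000, random_state=seed).reshape(-1, patch * patch)
        for img in training_images
    ])
    patches -= patches.mean(axis=1, keepdims=True)    # zero-mean, as in Section 4.2
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=seed)
    learner.fit(patches)
    D = learner.components_.T                          # columns become dictionary atoms
    return D / np.linalg.norm(D, axis=0, keepdims=True)
```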

5.2. Analysis of LP Decomposition Levels

Figure 4 shows the fusion results of LP with different decomposition levels. From the figure, it can be observed that the fusion effects in Figure 4a–c are poor, with severe artifacts. The fusion results in Figure 4d–f are relatively similar. Table 1 provides evaluation metrics for 42 image pairs under different LP decomposition levels. Since the fusion results are poor for decomposition levels 1–3, we first exclude these settings. Comparing the average metric values for decomposition levels 4–6, we see that at level 4, five metrics are optimal. Therefore, we set the LP decomposition level to 4.

5.3. Qualitative and Quantitative Analysis

Figure 5 illustrates the fusion outcomes of various methods applied to Data 1 alongside the corresponding metric data in Table 2. The ICA, ADKLT, PMGI, and RFNNest methods are observed to produce fused images that appear blurred, failing to maintain the thermal radiation characteristics and details present in the source images. Both MFSD and LEDIF methods yield similar fusion results, preserving human thermal radiation characteristics but suffering from noticeable loss of brightness information in specific areas. Conversely, the MDLatLRR and EgeFusion algorithms demonstrate over-sharpening effects, leading to artifacts and significant distortion in the fused images. Our algorithm enables comprehensive complementarity between the infrared and visible images while fully preserving the thermal infrared characteristics.
From Table 2, it can be observed that our algorithm achieves optimal objective metrics on Data 1, with a $Q^{AB/F}$ value of 0.5860, $Q_{CB}$ value of 0.6029, $Q_E$ value of 0.7047, $Q_{FMI}$ value of 0.9248, $Q_G$ value of 0.5838, $Q_{MI}$ value of 2.7156, $Q_{NCIE}$ value of 0.8067, $Q_{NMI}$ value of 0.3908, $Q_P$ value of 0.3280, and $Q_Y$ value of 0.8802.
Figure 6 displays the fusion results of various methods applied to Data 2, along with the corresponding metric data shown in Table 3. Observing the fusion results, it is evident that the ICA, ADKLT, and PMGI algorithms produced fused images that are blurred and exhibit low brightness. The MFSD, RFNNest, and LEDIF methods suffered from some loss of thermal radiation information. In contrast, the MDLatLRR and EgeFusion algorithms resulted in sharpened images, enhancing the human subjects but potentially causing distortion in other areas due to the sharpening effect. Our algorithm achieved the best fusion result.
From Table 3, it is apparent that our algorithm achieved superior objective metrics on Data 2, with a $Q^{AB/F}$ value of 0.6880, $Q_{CB}$ value of 0.6771, $Q_E$ value of 0.7431, $Q_{FMI}$ value of 0.9623, $Q_G$ value of 0.6860, $Q_{MI}$ value of 3.6399, $Q_{NCIE}$ value of 0.8112, $Q_{NMI}$ value of 0.5043, $Q_P$ value of 0.2976, and $Q_Y$ value of 0.9458.
Figure 7 depicts the fusion results of various methods applied to Data 3, accompanied by the corresponding metric data shown in Table 4. Analyzing the fusion outcomes, it is evident that the ICA and ADKLT algorithms produced blurry fused images with significant loss of information. The MFSD method introduced artifacts in certain regions. While the MDLatLRR and EgeFusion algorithms increased the overall brightness, they also introduced artifacts. The PMGI and RFNNest algorithms resulted in distorted fused images. The LEDIF algorithm achieved commendable fusion results, albeit with some artifacts present. Our algorithm yielded the best fusion result, achieving moderate brightness and preserving the thermal radiation characteristics.
From Table 4, it is apparent that our algorithm attained optimal objective metrics on Data 3, with a $Q^{AB/F}$ value of 0.7252, $Q_{CB}$ value of 0.6830, $Q_E$ value of 0.8105, $Q_{FMI}$ value of 0.8887, $Q_G$ value of 0.7182, $Q_{MI}$ value of 4.4156, $Q_{NCIE}$ value of 0.8131, $Q_{NMI}$ value of 0.6674, $Q_P$ value of 0.8141, and $Q_Y$ value of 0.9395.
Figure 8 displays the fusion results of various methods applied to Data 4, alongside the corresponding metric data shown in Table 5. Upon reviewing the fusion outcomes, it is evident that the fusion images produced by the ICA, ADKLT, MFSD, PMGI, and LEDIF algorithms exhibit some loss of brightness information. The MDLatLRR and EgeFusion algorithms sharpened the fused image, while the RFNNest method resulted in a darker fused image with some information loss. In contrast, our algorithm produced a fused image with complementary information.
From Table 5, it is notable that our algorithm achieved optimal objective metrics on Data 4, with a $Q^{AB/F}$ value of 0.5947, $Q_{CB}$ value of 0.5076, $Q_E$ value of 0.6975, $Q_{FMI}$ value of 0.9059, $Q_G$ value of 0.5915, $Q_{MI}$ value of 2.5337, $Q_{NCIE}$ value of 0.8062, $Q_{NMI}$ value of 0.3571, $Q_P$ value of 0.5059, and $Q_Y$ value of 0.8553.
Figure 9 provides detailed insights into the objective performance of the various fusion methods across 42 pairs of data from the TNO dataset. The horizontal axis represents the number of data pairs used in our experiments, while the vertical axis represents the metric values. Each method’s scores across different source images are plotted as curves, with the average score indicated in the legend. Figure 9 illustrates that most methods show consistent trends across the metrics examined, and nearly all fusion methods demonstrate stable performance across all test images, with few exceptions. Therefore, comparisons based on average values in Table 6 hold significant value.

5.4. Experimental Expansion

We expanded our proposed algorithm to include the fusion of multi-focus images from the Lytro [70] and MFI-WHU datasets [71], selecting 20 and 30 groups of data for testing, respectively. The simulation results for one of the data groups are shown in Figure 10. This extension involved a comparative evaluation against eight methods: ICA [50], FusionDN [72], PMGI [54], U2Fusion [73], LEGFF [74], ZMFF [75], EgeFusion [56], and LEDIF [57]. The assessment utilized both subjective visual inspection and objective metrics. Figure 11 and Figure 12 provide detailed insights into the objective performance of various fusion methods on the Lytro and MFI-WHU datasets, with the corresponding average metric values shown in Table 7 and Table 8. From the results in Figure 10, it is evident that the ICA and PMGI algorithms tended to produce fused images with noticeable blurriness, impacting the clarity of detailed information within the fused images. The fused images produced by the FusionDN and U2Fusion algorithms exhibited dark regions in specific areas, such as hair regions in portraits, which detracted from overall visual quality. The fusion results of the LEGFF, ZMFF, and LEDIF algorithms are quite similar, all achieving fully focused fusion effects. The fused image generated by the EgeFusion algorithm showed distortions that made it challenging to discern detailed parts of the image. Our algorithm demonstrated promising results both visually and quantitatively when compared with the other algorithms. Subjective visual assessment indicated that our method effectively enhanced the presentation of complementary information in the fused images, preserving clarity and detail across different focus levels.

6. Conclusions

To enhance the clarity and thermal radiation fidelity of infrared and visible image fusion, a fusion method based on sparse representation and guided filtering in the Laplacian pyramid domain is introduced. The Laplacian pyramid serves as an efficient multi-scale transform that decomposes the original image into distinct low- and high-frequency components. Low-frequency bands, crucial for capturing overall scene structure and thermal characteristics, are processed using the sparse representation technique. Sparse representation ensures that key features are preserved while reducing noise and maintaining thermal radiation attributes. High-frequency bands, which encompass fine details and textures vital for visual clarity, are enhanced using guided filtering integrated with WSEML. This approach successfully combines the contextual details from the source images, ensuring that the fused output maintains sharpness and fidelity across different scales. We carried out thorough simulation tests using the well-known TNO dataset to assess the performance of our algorithm. The results demonstrate that our method successfully preserves thermal radiation characteristics while enhancing scene details in the fused images. By continuing to innovate within the framework of sparse representation and guided filtering in the Laplacian pyramid domain, we aim to contribute significantly to the advancement of image fusion techniques, particularly in scenarios where preserving thermal characteristics and enhancing visual clarity are paramount. Moreover, we extended our approach to conducting fusion experiments on multi-focus images, achieving satisfactory results in capturing diverse focal points within a single fused output.
In our future research, we plan to further refine and expand our algorithm’s capabilities. Specifically, we aim to explore enhancements tailored for the fusion of synthetic aperture radar (SAR) and optical images [76]. By integrating SAR data, which provide unique insights into surface properties and structures, with optical imagery, which offers high-resolution contextual information, we anticipate developing a robust fusion framework capable of addressing diverse application scenarios effectively. Additionally, research on change detection based on fusion models is also one of our future research directions [77,78,79,80].

Author Contributions

The experimental measurements and data collection were carried out by L.L., Y.S., M.L. (Ming Lv), Z.J., M.L. (Minqin Liu), X.Z. (Xiaobin Zhao), X.Z. (Xueyu Zhang), and H.M. The manuscript was written by L.L. with the assistance of Y.S., M.L. (Ming Lv), Z.J., M.L. (Minqin Liu), X.Z. (Xiaobin Zhao), X.Z. (Xueyu Zhang), and H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant Nos. 92152109 and 62261053; the Technology Innovation Program of Beijing Institute of Technology under Grant No. 2024CX02065; the Cross-Media Intelligent Technology Project of Beijing National Research Center for Information Science and Technology (BNRist) under Grant No. BNR2019TD01022; and the Tianshan Talent Training Project-Xinjiang Science and Technology Innovation Team Program (2023TSYCTD0012).

Data Availability Statement

The TNO dataset can be accessed via the following link: https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029 (accessed on 2 July 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164. [Google Scholar] [CrossRef]
  2. Huo, X.; Deng, Y.; Shao, K. Infrared and visible image fusion with significant target enhancement. Entropy 2022, 24, 1633. [Google Scholar] [CrossRef]
  3. Luo, Y.; Luo, Z. Infrared and visible image fusion: Methods, datasets, applications, and prospects. Appl. Sci. 2023, 13, 10891. [Google Scholar] [CrossRef]
  4. Li, L.; Lv, M.; Jia, Z.; Jin, Q.; Liu, M.; Chen, L.; Ma, H. An effective infrared and visible image fusion approach via rolling guidance filtering and gradient saliency map. Remote Sens. 2023, 15, 2486. [Google Scholar] [CrossRef]
  5. Ma, X.; Li, T.; Deng, J. Infrared and visible image fusion algorithm based on double-domain transform filter and contrast transform feature extraction. Sensors 2024, 24, 3949. [Google Scholar] [CrossRef]
  6. Wang, Q.; Yan, X.; Xie, W.; Wang, Y. Image fusion method based on snake visual imaging mechanism and PCNN. Sensors 2024, 24, 3077. [Google Scholar] [CrossRef]
  7. Feng, B.; Ai, C.; Zhang, H. Fusion of infrared and visible light images based on improved adaptive dual-channel pulse coupled neural network. Electronics 2024, 13, 2337. [Google Scholar] [CrossRef]
  8. Yang, H.; Zhang, J.; Zhang, X. Injected infrared and visible image fusion via L1 decomposition model and guided filtering. IEEE Trans. Comput. Imaging 2022, 8, 162–173. [Google Scholar]
  9. Zhang, X.; Boutat, D.; Liu, D. Applications of fractional operator in image processing and stability of control systems. Fractal Fract. 2023, 7, 359. [Google Scholar] [CrossRef]
  10. Zhang, X.; He, H.; Zhang, J. Multi-focus image fusion based on fractional order differentiation and closed image matting. ISA Trans. 2022, 129, 703–714. [Google Scholar] [CrossRef]
  11. Zhang, X.; Yan, H. Medical image fusion and noise suppression with fractional-order total variation and multi-scale decomposition. IET Image Process. 2021, 15, 1688–1701. [Google Scholar] [CrossRef]
  12. Yan, H.; Zhang, X. Adaptive fractional multi-scale edge-preserving decomposition and saliency detection fusion algorithm. ISA Trans. 2020, 107, 160–172. [Google Scholar] [CrossRef] [PubMed]
  13. Zhang, X.; Yan, H.; He, H. Multi-focus image fusion based on fractional-order derivative and intuitionistic fuzzy sets. Front. Inf. Technol. Electron. Eng. 2020, 21, 834–843. [Google Scholar] [CrossRef]
  14. Zhang, J.; Ding, J.; Chai, T. Fault-tolerant prescribed performance control of wheeled mobile robots: A mixed-gain adaption approach. IEEE Trans. Autom. Control 2024, 69, 5500–5507. [Google Scholar] [CrossRef]
  15. Zhang, J.; Xu, K.; Wang, Q. Prescribed performance tracking control of time-delay nonlinear systems with output constraints. IEEE/CAA J. Autom. Sin. 2024, 11, 1557–1565. [Google Scholar] [CrossRef]
  16. Wu, D.; Wang, Y.; Wang, H.; Wang, F.; Gao, G. DCFNet: Infrared and visible image fusion network based on discrete wavelet transform and convolutional neural network. Sensors 2024, 24, 4065. [Google Scholar] [CrossRef]
  17. Wei, Q.; Liu, Y.; Jiang, X.; Zhang, B.; Su, Q.; Yu, M. DDFNet-A: Attention-based dual-branch feature decomposition fusion network for infrared and visible image fusion. Remote Sens. 2024, 16, 1795. [Google Scholar] [CrossRef]
  18. Li, X.; He, H.; Shi, J. HDCCT: Hybrid densely connected CNN and transformer for infrared and visible image fusion. Electronics 2024, 13, 3470. [Google Scholar] [CrossRef]
  19. Mao, Q.; Zhai, W.; Lei, X.; Wang, Z.; Liang, Y. CT and MRI image fusion via coupled feature-learning GAN. Electronics 2024, 13, 3491. [Google Scholar] [CrossRef]
  20. Wang, Z.; Chen, Y.; Shao, W. SwinFuse: A residual swin transformer fusion network for infrared and visible images. IEEE Trans. Instrum. Meas. 2023, 71, 5016412. [Google Scholar] [CrossRef]
  21. Ma, J.; Tang, L.; Fan, F. SwinFusion: Cross-domain long-range learning for general image fusion via swin transformer. IEEE-CAA J. Autom. Sin. 2022, 9, 1200–1217. [Google Scholar] [CrossRef]
  22. Gao, F.; Lang, P.; Yeh, C.; Li, Z.; Ren, D.; Yang, J. An interpretable target-aware vision transformer for polarimetric HRRP target recognition with a novel attention loss. Remote Sens. 2024, 16, 3135. [Google Scholar] [CrossRef]
  23. Huang, L.; Chen, Y.; He, X. Spectral-spatial Mamba for hyperspectral image classification. Remote Sens. 2024, 16, 2449. [Google Scholar] [CrossRef]
  24. Zhang, X.; Demiris, Y. Visible and infrared image fusion using deep learning. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10535–10554. [Google Scholar] [CrossRef]
  25. Zhang, X.; Ye, P.; Xiao, G. VIFB: A visible and infrared image fusion benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  26. Li, H.; Wu, X. CrossFuse: A novel cross attention mechanism based infrared and visible image fusion approach. Inf. Fusion 2024, 103, 102147. [Google Scholar] [CrossRef]
  27. Liu, Y.; Chen, X.; Wang, Z. Deep learning for pixel-level image fusion: Recent advances and future prospects. Inf. Fusion 2018, 42, 158–173. [Google Scholar] [CrossRef]
  28. Liu, Y.; Chen, X.; Cheng, J. Infrared and visible image fusion with convolutional neural networks. Int. J. Wavelets Multiresolut. Inf. Process. 2018, 16, 1850018. [Google Scholar] [CrossRef]
  29. Yang, C.; He, Y. Multi-scale convolutional neural networks and saliency weight maps for infrared and visible image fusion. J. Vis. Commun. Image Represent. 2024, 98, 104015. [Google Scholar] [CrossRef]
  30. Wei, H.; Fu, X.; Wang, Z.; Zhao, J. Infrared/Visible light fire image fusion method based on generative adversarial network of wavelet-guided pooling vision transformer. Forests 2024, 15, 976. [Google Scholar] [CrossRef]
  31. Ma, J.; Xu, H. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995. [Google Scholar] [CrossRef]
  32. Chang, L.; Huang, Y. DUGAN: Infrared and visible image fusion based on dual fusion paths and a U-type discriminator. Neurocomputing 2024, 578, 127391. [Google Scholar] [CrossRef]
  33. Lv, M.; Jia, Z.; Li, L.; Ma, H. Multi-focus image fusion via PAPCNN and fractal dimension in NSST domain. Mathematics 2023, 11, 3803. [Google Scholar] [CrossRef]
  34. Lv, M.; Li, L.; Jin, Q.; Jia, Z.; Chen, L.; Ma, H. Multi-focus image fusion via distance-weighted regional energy and structure tensor in NSCT domain. Sensors 2023, 23, 6135. [Google Scholar] [CrossRef]
  35. Li, L.; Lv, M.; Jia, Z.; Ma, H. Sparse representation-based multi-focus image fusion method via local energy in shearlet domain. Sensors 2023, 23, 2888. [Google Scholar] [CrossRef]
  36. Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178. [Google Scholar] [CrossRef]
  37. Liu, Y.; Wang, L.; Cheng, J. Multi-focus image fusion: A survey of the state of the art. Inf. Fusion 2020, 64, 71–91. [Google Scholar] [CrossRef]
  38. Chen, H.; Deng, L. SFCFusion: Spatial-frequency collaborative infrared and visible image fusion. IEEE Trans. Instrum. Meas. 2024, 73, 5011615. [Google Scholar] [CrossRef]
  39. Chen, H.; Deng, L.; Zhu, L.; Dong, M. ECFuse: Edge-consistent and correlation-driven fusion framework for infrared and visible image fusion. Sensors 2023, 23, 8071. [Google Scholar] [CrossRef]
  40. Li, X.; Tan, H. Infrared and visible image fusion based on domain transform filtering and sparse representation. Infrared Phys. Technol. 2023, 131, 104701. [Google Scholar] [CrossRef]
  41. Chen, Y.; Liu, Y. Multi-focus image fusion with complex sparse representation. IEEE Sens. J. 2024; early access. [Google Scholar]
  42. Li, S.; Kwok, J.T.; Wang, Y. Multifocus image fusion using artificial neural networks. Pattern Recognit. Lett. 2002, 23, 985–997. [Google Scholar] [CrossRef]
  43. Chang, C.I.; Liang, C.C.; Hu, P.F. Iterative Gaussian–Laplacian pyramid network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5510122. [Google Scholar] [CrossRef]
  44. Burt, P.J.; Adelson, E.H. The laplacian pyramid as a compact image code. IEEE Trans. Commun. 1983, 31, 532–540. [Google Scholar] [CrossRef]
  45. Chen, J.; Li, X.; Luo, L. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition. Inf. Sci. 2020, 508, 64–78. [Google Scholar] [CrossRef]
  46. Yin, M.; Liu, X.; Liu, Y. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Trans. Instrum. Meas. 2019, 68, 49–64. [Google Scholar] [CrossRef]
  47. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef]
  48. Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875. [Google Scholar]
  49. Available online: https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029 (accessed on 1 May 2024).
  50. Mitianoudis, N.; Stathaki, T. Pixel-based and region-based image fusion schemes using ICA bases. Inf. Fusion 2007, 8, 131–142. [Google Scholar] [CrossRef]
  51. Bavirisetti, D.P.; Dhuli, R. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform. IEEE Sens. J. 2016, 16, 203–209. [Google Scholar] [CrossRef]
  52. Bavirisetti, D.P.; Dhuli, R. Two-scale image fusion of visible and infrared images using saliency detection. Infrared Phys. Technol. 2016, 76, 52–64. [Google Scholar] [CrossRef]
  53. Li, H.; Wu, X.; Kittler, J. MDLatLRR: A novel decomposition method for infrared and visible image fusion. IEEE Trans. Image Process. 2020, 29, 4733–4746. [Google Scholar] [CrossRef]
  54. Zhang, H.; Xu, H.; Xiao, Y. Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12797–12804. [Google Scholar]
  55. Li, H.; Wu, X.; Kittler, J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images. Inf. Fusion 2021, 73, 72–86. [Google Scholar] [CrossRef]
  56. Tang, H.; Liu, G. EgeFusion: Towards edge gradient enhancement in infrared and visible image fusion with multi-scale transform. IEEE Trans. Comput. Imaging 2024, 10, 385–398. [Google Scholar] [CrossRef]
  57. Xiang, W.; Shen, J.; Zhang, L.; Zhang, Y. Infrared and visual image fusion based on a local-extrema-driven image filter. Sensors 2024, 24, 2271. [Google Scholar] [CrossRef] [PubMed]
  58. Qu, X.; Yan, J.; Xiao, H. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Autom. Sin. 2008, 34, 1508–1514. [Google Scholar] [CrossRef]
  59. Li, S.; Han, M.; Qin, Y.; Li, Q. Self-attention progressive network for infrared and visible image fusion. Remote Sens. 2024, 16, 3370. [Google Scholar] [CrossRef]
  60. Li, L.; Zhao, X.; Hou, H.; Zhang, X.; Lv, M.; Jia, Z.; Ma, H. Fractal dimension-based multi-focus image fusion via coupled neural P systems in NSCT domain. Fractal Fract. 2024, 8, 554. [Google Scholar] [CrossRef]
  61. Zhai, H.; Ouyang, Y.; Luo, N. MSI-DTrans: A multi-focus image fusion using multilayer semantic interaction and dynamic transformer. Displays 2024, 85, 102837. [Google Scholar] [CrossRef]
  62. Li, L.; Ma, H.; Jia, Z.; Si, Y. A novel multiscale transform decomposition based multi-focus image fusion framework. Multimed. Tools Appl. 2021, 80, 12389–12409. [Google Scholar] [CrossRef]
  63. Li, B.; Zhang, L.; Liu, J.; Peng, H. Multi-focus image fusion with parameter adaptive dual channel dynamic threshold neural P systems. Neural Netw. 2024, 179, 106603. [Google Scholar] [CrossRef]
  64. Liu, Z.; Blasch, E.; Xue, Z. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 94–109. [Google Scholar] [CrossRef]
  65. Zhai, H.; Chen, Y.; Wang, Y. W-shaped network combined with dual transformers and edge protection for multi-focus image fusion. Image Vis. Comput. 2024, 150, 105210. [Google Scholar] [CrossRef]
  66. Haghighat, M.; Razian, M. Fast-FMI: Non-reference image fusion metric. In Proceedings of the IEEE 8th International Conference on Application of Information and Communication Technologies, Astana, Kazakhstan, 15–17 October 2014; pp. 424–426. [Google Scholar]
  67. Wang, X.; Fang, L.; Zhao, J.; Pan, Z.; Li, H.; Li, Y. MMAE: A universal image fusion method via mask attention mechanism. Pattern Recognit. 2025, 158, 111041. [Google Scholar] [CrossRef]
  68. Zhang, X.; Li, W. Hyperspectral pathology image classification using dimension-driven multi-path attention residual network. Expert Syst. Appl. 2023, 230, 120615. [Google Scholar] [CrossRef]
  69. Zhang, X.; Li, Q. FD-Net: Feature distillation network for oral squamous cell carcinoma lymph node segmentation in hyperspectral imagery. IEEE J. Biomed. Health Inform. 2024, 28, 1552–1563. [Google Scholar] [CrossRef]
  70. Nejati, M.; Samavi, S.; Shirani, S. Multi-focus image fusion using dictionary-based sparse representation. Inf. Fusion 2015, 25, 72–84. [Google Scholar] [CrossRef]
  71. Zhang, H.; Le, Z. MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion. Inf. Fusion 2021, 66, 40–53. [Google Scholar] [CrossRef]
  72. Xu, H.; Ma, J.; Le, Z. FusionDN: A unified densely connected network for image fusion. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12484–12491. [Google Scholar]
  73. Xu, H.; Ma, J.; Jiang, J. U2Fusion: A unified unsupervised image fusion network. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 502–518. [Google Scholar] [CrossRef]
  74. Zhang, Y.; Xiang, W. Local extreme map guided multi-modal brain image fusion. Front. Neurosci. 2022, 16, 1055451. [Google Scholar] [CrossRef]
  75. Hu, X.; Jiang, J.; Liu, X.; Ma, J. ZMFF: Zero-shot multi-focus image fusion. Inf. Fusion 2023, 92, 127–138. [Google Scholar] [CrossRef]
  76. Li, J.; Zhang, J.; Yang, C.; Liu, H.; Zhao, Y.; Ye, Y. Comparative analysis of pixel-level fusion algorithms and a new high-resolution dataset for SAR and optical image fusion. Remote Sens. 2023, 15, 5514. [Google Scholar] [CrossRef]
  77. Li, L.; Ma, H.; Jia, Z. Multiscale geometric analysis fusion-based unsupervised change detection in remote sensing images via FLICM model. Entropy 2022, 24, 291. [Google Scholar] [CrossRef] [PubMed]
  78. Li, L.; Ma, H.; Zhang, X.; Zhao, X.; Lv, M.; Jia, Z. Synthetic aperture radar image change detection based on principal component analysis and two-level clustering. Remote Sens. 2024, 16, 1861. [Google Scholar] [CrossRef]
  79. Li, L.; Ma, H.; Jia, Z. Change detection from SAR images based on convolutional neural networks guided by saliency enhancement. Remote Sens. 2021, 13, 3697. [Google Scholar] [CrossRef]
  80. Li, L.; Ma, H.; Jia, Z. Gamma correction-based automatic unsupervised change detection in SAR images via FLICM model. J. Indian Soc. Remote Sens. 2023, 51, 1077–1088. [Google Scholar] [CrossRef]
Figure 1. Laplacian pyramid. (a) Three-level Laplacian pyramid decomposition diagram; (b) Three-level Laplacian reconstruction diagram.
Figure 2. The structure of the proposed method.
Figure 3. Examples from the TNO dataset.
Figure 4. Fusion results of different decomposition levels in LP. (a) 1 level; (b) 2 level; (c) 3 level; (d) 4 level; (e) 5 level; (f) 6 level.
Figure 5. Results on Data 1. (a) ICA; (b) ADKLT; (c) MFSD; (d) MDLatLRR; (e) PMGI; (f) RFNNest; (g) EgeFusion; (h) LEDIF; (i) Proposed.
Figure 6. Results on Data 2. (a) ICA; (b) ADKLT; (c) MFSD; (d) MDLatLRR; (e) PMGI; (f) RFNNest; (g) EgeFusion; (h) LEDIF; (i) Proposed.
Figure 7. Results on Data 3. (a) ICA; (b) ADKLT; (c) MFSD; (d) MDLatLRR; (e) PMGI; (f) RFNNest; (g) EgeFusion; (h) LEDIF; (i) Proposed.
Figure 8. Results on Data 4. (a) ICA; (b) ADKLT; (c) MFSD; (d) MDLatLRR; (e) PMGI; (f) RFNNest; (g) EgeFusion; (h) LEDIF; (i) Proposed.
Figure 9. Objective performance of different methods on the TNO dataset.
Figure 10. Results on Lytro-01. (a) Near focus; (b) Far focus; (c) ICA; (d) FusionDN; (e) PMGI; (f) U2Fusion; (g) LEGFF; (h) ZMFF; (i) EgeFusion; (j) LEDIF; (k) Proposed.
Figure 11. Objective performance of different methods on the Lytro dataset.
Figure 12. Objective performance of different methods on the MFI-WHU dataset.
Table 1. The average objective evaluation of different LP decomposition levels on 42 pairs of data from the TNO dataset.
Levels | Q_AB/F | Q_CB | Q_E | Q_FMI | Q_G | Q_MI | Q_NCIE | Q_NMI | Q_P | Q_Y
1 | 0.5686 | 0.5392 | 0.6205 | 0.9068 | 0.5592 | 3.8195 | 0.8155 | 0.5440 | 0.3016 | 0.8307
2 | 0.5669 | 0.5467 | 0.6655 | 0.9124 | 0.5565 | 3.1350 | 0.8099 | 0.4438 | 0.3402 | 0.8317
3 | 0.5727 | 0.5394 | 0.6764 | 0.9138 | 0.5619 | 2.6628 | 0.8075 | 0.3760 | 0.3644 | 0.8301
4 | 0.5768 | 0.5306 | 0.6699 | 0.9140 | 0.5654 | 2.4378 | 0.8065 | 0.3460 | 0.3716 | 0.8233
5 | 0.5765 | 0.5131 | 0.6521 | 0.9138 | 0.5655 | 2.3160 | 0.8060 | 0.3321 | 0.3832 | 0.8079
6 | 0.5775 | 0.5113 | 0.6292 | 0.9133 | 0.5662 | 2.4575 | 0.8064 | 0.3540 | 0.3871 | 0.7980
Table 2. The objective evaluation of different methods on Data 1.
Method | Q_AB/F | Q_CB | Q_E | Q_FMI | Q_G | Q_MI | Q_NCIE | Q_NMI | Q_P | Q_Y
ICA | 0.4017 | 0.4461 | 0.5300 | 0.9139 | 0.3956 | 1.8567 | 0.8038 | 0.2775 | 0.2654 | 0.7064
ADKLT | 0.4026 | 0.5404 | 0.4651 | 0.8778 | 0.3976 | 1.5936 | 0.8034 | 0.2382 | 0.1851 | 0.7098
MFSD | 0.4247 | 0.5756 | 0.5898 | 0.9017 | 0.4203 | 1.3551 | 0.8031 | 0.1983 | 0.2056 | 0.7252
MDLatLRR | 0.3248 | 0.4957 | 0.4136 | 0.8874 | 0.3184 | 1.0944 | 0.8028 | 0.1556 | 0.2958 | 0.6882
PMGI | 0.3880 | 0.5035 | 0.4399 | 0.9024 | 0.3803 | 1.8901 | 0.8041 | 0.2747 | 0.2028 | 0.7361
RFNNest | 0.3372 | 0.4939 | 0.3991 | 0.9031 | 0.3300 | 1.7239 | 0.8036 | 0.2546 | 0.2155 | 0.6856
EgeFusion | 0.1968 | 0.4298 | 0.3371 | 0.8688 | 0.1901 | 1.1886 | 0.8029 | 0.1665 | 0.2154 | 0.4970
LEDIF | 0.5058 | 0.5702 | 0.6512 | 0.9087 | 0.5001 | 1.2948 | 0.8030 | 0.1929 | 0.2572 | 0.8143
Proposed | 0.5860 | 0.6029 | 0.7047 | 0.9248 | 0.5838 | 2.7156 | 0.8067 | 0.3908 | 0.3280 | 0.8802
Table 3. The objective evaluation of different methods on Data 2.
Method | Q_AB/F | Q_CB | Q_E | Q_FMI | Q_G | Q_MI | Q_NCIE | Q_NMI | Q_P | Q_Y
ICA | 0.4002 | 0.4417 | 0.4899 | 0.9569 | 0.3987 | 2.3254 | 0.8051 | 0.3427 | 0.2676 | 0.7434
ADKLT | 0.4043 | 0.5699 | 0.4124 | 0.9249 | 0.3993 | 1.8767 | 0.8041 | 0.2756 | 0.1595 | 0.7093
MFSD | 0.4175 | 0.6009 | 0.6229 | 0.9539 | 0.4128 | 1.7852 | 0.8039 | 0.2594 | 0.1677 | 0.6909
MDLatLRR | 0.3382 | 0.4503 | 0.5120 | 0.9142 | 0.3370 | 1.2513 | 0.8030 | 0.1769 | 0.2772 | 0.7223
PMGI | 0.4605 | 0.5269 | 0.5454 | 0.9516 | 0.4610 | 2.1395 | 0.8043 | 0.3089 | 0.1939 | 0.7885
RFNNest | 0.4098 | 0.5803 | 0.4507 | 0.9460 | 0.4066 | 2.1851 | 0.8048 | 0.3098 | 0.1841 | 0.7168
EgeFusion | 0.2011 | 0.3987 | 0.3715 | 0.8835 | 0.1971 | 1.1956 | 0.8029 | 0.1666 | 0.2133 | 0.5511
LEDIF | 0.5870 | 0.5920 | 0.6801 | 0.9538 | 0.5845 | 1.5422 | 0.8034 | 0.2297 | 0.2578 | 0.8901
Proposed | 0.6880 | 0.6771 | 0.7431 | 0.9623 | 0.6860 | 3.6399 | 0.8112 | 0.5043 | 0.2976 | 0.9458
Table 4. The objective evaluation of different methods on Data 3.
Method | Q_AB/F | Q_CB | Q_E | Q_FMI | Q_G | Q_MI | Q_NCIE | Q_NMI | Q_P | Q_Y
ICA | 0.6748 | 0.6689 | 0.7446 | 0.8854 | 0.6642 | 4.1877 | 0.8113 | 0.6531 | 0.7358 | 0.8365
ADKLT | 0.5891 | 0.6599 | 0.6499 | 0.8739 | 0.5764 | 3.7880 | 0.8098 | 0.5907 | 0.6140 | 0.7521
MFSD | 0.6183 | 0.6423 | 0.7634 | 0.8751 | 0.6071 | 3.5683 | 0.8091 | 0.5492 | 0.6331 | 0.7636
MDLatLRR | 0.3124 | 0.4782 | 0.4074 | 0.8460 | 0.3083 | 2.4512 | 0.8060 | 0.4063 | 0.5687 | 0.5580
PMGI | 0.5529 | 0.2891 | 0.5425 | 0.8676 | 0.5400 | 3.2741 | 0.8082 | 0.5181 | 0.5801 | 0.5961
RFNNest | 0.5053 | 0.6186 | 0.5145 | 0.8723 | 0.4964 | 3.6997 | 0.8095 | 0.5728 | 0.6163 | 0.7138
EgeFusion | 0.2452 | 0.4732 | 0.3511 | 0.8070 | 0.2414 | 2.1513 | 0.8053 | 0.3561 | 0.4598 | 0.5115
LEDIF | 0.6390 | 0.6455 | 0.7146 | 0.8829 | 0.6314 | 3.4861 | 0.8088 | 0.5387 | 0.7371 | 0.8444
Proposed | 0.7252 | 0.6830 | 0.8105 | 0.8887 | 0.7182 | 4.4156 | 0.8131 | 0.6674 | 0.8141 | 0.9395
Table 5. The objective evaluation of different methods on Data 4.
Method | Q_AB/F | Q_CB | Q_E | Q_FMI | Q_G | Q_MI | Q_NCIE | Q_NMI | Q_P | Q_Y
ICA | 0.4523 | 0.3979 | 0.5932 | 0.9004 | 0.4478 | 2.1008 | 0.8045 | 0.3153 | 0.4024 | 0.7236
ADKLT | 0.3585 | 0.4032 | 0.3922 | 0.8670 | 0.3529 | 1.7737 | 0.8038 | 0.2697 | 0.2615 | 0.6098
MFSD | 0.4416 | 0.4786 | 0.6176 | 0.8861 | 0.4388 | 1.4931 | 0.8033 | 0.2229 | 0.3066 | 0.6666
MDLatLRR | 0.3157 | 0.4746 | 0.3772 | 0.8874 | 0.3131 | 1.2763 | 0.8029 | 0.1830 | 0.4091 | 0.6339
PMGI | 0.3799 | 0.3587 | 0.4497 | 0.8783 | 0.3764 | 1.7162 | 0.8035 | 0.2594 | 0.3257 | 0.7108
RFNNest | 0.2971 | 0.4159 | 0.3138 | 0.8920 | 0.2961 | 2.0997 | 0.8046 | 0.3137 | 0.3343 | 0.6153
EgeFusion | 0.2123 | 0.4800 | 0.3351 | 0.8582 | 0.2101 | 1.2046 | 0.8029 | 0.1720 | 0.2723 | 0.4726
LEDIF | 0.5120 | 0.4597 | 0.6724 | 0.8911 | 0.5081 | 1.5419 | 0.8033 | 0.2354 | 0.3847 | 0.7865
Proposed | 0.5947 | 0.5076 | 0.6975 | 0.9059 | 0.5915 | 2.5337 | 0.8062 | 0.3571 | 0.5059 | 0.8553
Table 6. The average objective evaluation of the different methods on 42 pairs of data from the TNO dataset.
Method | Q_AB/F | Q_CB | Q_E | Q_FMI | Q_G | Q_MI | Q_NCIE | Q_NMI | Q_P | Q_Y
ICA | 0.4317 | 0.4496 | 0.5277 | 0.9074 | 0.4197 | 2.1172 | 0.8048 | 0.3167 | 0.3192 | 0.7050
ADKLT | 0.4078 | 0.4733 | 0.4205 | 0.8789 | 0.3919 | 1.7968 | 0.8041 | 0.2704 | 0.2341 | 0.6745
MFSD | 0.4274 | 0.5103 | 0.5657 | 0.8948 | 0.4124 | 1.6584 | 0.8038 | 0.2459 | 0.2467 | 0.6627
MDLatLRR | 0.3364 | 0.4735 | 0.4251 | 0.8915 | 0.3274 | 1.3278 | 0.8033 | 0.1924 | 0.3453 | 0.6478
PMGI | 0.4258 | 0.4580 | 0.5123 | 0.8961 | 0.4121 | 2.3462 | 0.8055 | 0.3399 | 0.2777 | 0.7095
RFNNest | 0.3480 | 0.4679 | 0.3692 | 0.8988 | 0.3347 | 2.1126 | 0.8047 | 0.3067 | 0.2306 | 0.6146
EgeFusion | 0.2041 | 0.4421 | 0.3164 | 0.8606 | 0.1964 | 1.2972 | 0.8032 | 0.1850 | 0.2504 | 0.4683
LEDIF | 0.5222 | 0.5062 | 0.6390 | 0.8996 | 0.5085 | 1.8827 | 0.8044 | 0.2810 | 0.3165 | 0.7919
Proposed | 0.5768 | 0.5306 | 0.6699 | 0.9140 | 0.5654 | 2.4378 | 0.8065 | 0.3460 | 0.3716 | 0.8233
Table 7. The average objective evaluation of different methods on 20 pairs of data from the Lytro dataset.
Method | Q_AB/F | Q_CB | Q_E | Q_FMI | Q_G | Q_MI | Q_NCIE | Q_NMI | Q_P | Q_Y
ICA | 0.6248 | 0.6334 | 0.7991 | 0.8949 | 0.6191 | 6.2557 | 0.8247 | 0.8360 | 0.6340 | 0.8339
FusionDN | 0.6018 | 0.6008 | 0.7663 | 0.8833 | 0.5952 | 5.7908 | 0.8221 | 0.7684 | 0.6221 | 0.8224
PMGI | 0.3901 | 0.5656 | 0.4736 | 0.8815 | 0.3857 | 5.8641 | 0.8225 | 0.8004 | 0.4620 | 0.6738
U2Fusion | 0.6143 | 0.5682 | 0.7835 | 0.8844 | 0.6093 | 5.7765 | 0.8221 | 0.7725 | 0.6657 | 0.7912
LEGFF | 0.6810 | 0.6751 | 0.8195 | 0.8937 | 0.6754 | 5.6138 | 0.8214 | 0.7473 | 0.7565 | 0.8817
ZMFF | 0.7087 | 0.7412 | 0.8687 | 0.8925 | 0.7030 | 6.6271 | 0.8271 | 0.8838 | 0.7853 | 0.9313
EgeFusion | 0.3576 | 0.4034 | 0.5032 | 0.8472 | 0.3541 | 3.2191 | 0.8120 | 0.4248 | 0.5405 | 0.5991
LEDIF | 0.7051 | 0.6898 | 0.8390 | 0.8932 | 0.7005 | 5.7546 | 0.8222 | 0.7659 | 0.7665 | 0.9146
Proposed | 0.7503 | 0.7745 | 0.8819 | 0.8997 | 0.7487 | 7.4854 | 0.8332 | 0.9980 | 0.8302 | 0.9700
Table 8. The average objective evaluation of different methods on 30 pairs of data from the MFI-WHU dataset.
Method | Q_AB/F | Q_CB | Q_E | Q_FMI | Q_G | Q_MI | Q_NCIE | Q_NMI | Q_P | Q_Y
ICA | 0.5940 | 0.7460 | 0.7562 | 0.8674 | 0.5877 | 6.0569 | 0.8242 | 0.8304 | 0.6298 | 0.8594
FusionDN | 0.5243 | 0.4996 | 0.6556 | 0.8527 | 0.5187 | 5.3504 | 0.8203 | 0.7179 | 0.5856 | 0.7638
PMGI | 0.4237 | 0.5933 | 0.5061 | 0.8558 | 0.4177 | 5.4884 | 0.8210 | 0.7614 | 0.4750 | 0.7031
U2Fusion | 0.5502 | 0.5156 | 0.6970 | 0.8565 | 0.5447 | 5.1498 | 0.8194 | 0.6991 | 0.6212 | 0.7830
LEGFF | 0.6190 | 0.6060 | 0.7067 | 0.8692 | 0.6106 | 4.8291 | 0.8183 | 0.6555 | 0.7075 | 0.8266
ZMFF | 0.6395 | 0.7102 | 0.7994 | 0.8631 | 0.6322 | 5.7795 | 0.8228 | 0.7914 | 0.6834 | 0.8804
EgeFusion | 0.2874 | 0.3277 | 0.3757 | 0.8255 | 0.2841 | 2.8055 | 0.8111 | 0.3761 | 0.5191 | 0.5539
LEDIF | 0.6599 | 0.6585 | 0.7610 | 0.8673 | 0.6538 | 5.1592 | 0.8199 | 0.7031 | 0.6968 | 0.8971
Proposed | 0.7348 | 0.8204 | 0.8467 | 0.8779 | 0.7312 | 8.2343 | 0.8412 | 1.1244 | 0.7876 | 0.9825
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
