Journals / Computers / Volume 10 / Issue 10 / 10.3390/computers10100129
Article

A Novel Multi-Modality Image Simultaneous Denoising and Fusion Method Based on Sparse Representation

1
Computer Information Systems Department, State University of New York at Buffalo State, Buffalo, NY 14222, USA
2
College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
3
Department of Mathematics & Computer and Information Science, Mansfield University of Pennsylvania, Mansfield, PA 16933, USA
*
Author to whom correspondence should be addressed.
Submission received: 3 September 2021 / Revised: 5 October 2021 / Accepted: 7 October 2021 / Published: 13 October 2021
(This article belongs to the Special Issue Feature Paper in Computers)

Abstract

Multi-modality image fusion, applied to improve image quality, has drawn great attention from researchers in recent years. However, noise is inevitably generated in images captured by different types of imaging sensors, and it can seriously degrade the performance of multi-modality image fusion. In the conventional approach to fusing noisy images, the source images are denoised first, and the denoised images are then fused. However, image denoising decreases the sharpness of the source images, which affects the fusion performance; moreover, running denoising and fusion as separate stages increases the computational cost. To fuse noisy multi-modality image pairs accurately and efficiently, a simultaneous multi-modality image fusion and denoising method is proposed. In the proposed method, noisy source images are decomposed into cartoon and texture components. Cartoon-texture decomposition not only separates the source images into structure and detail components, to which different fusion schemes can be applied, but also isolates the image noise within the texture components. A Gaussian scale mixture (GSM) based sparse representation model is presented for the denoising and fusion of the texture components, and a spatial-domain fusion rule is applied to the cartoon components. Comparative experimental results confirm that the proposed simultaneous image denoising and fusion method is superior to state-of-the-art methods in terms of both visual and quantitative evaluations.

Graphical Abstract

    1. Introduction

    Since an image obtained by a single sensor cannot, in most cases, contain sufficient information about a scene, additional information from other images captured of the same scene can serve as a necessary complement to reduce the limitations of a single image and enhance visibility [1,2,3]. Multi-modality image fusion can merge the complementary information from different sensor modalities into the originally captured image [4,5]. Image fusion is now widely used in remote sensing, medical imaging, and robotics to improve image quality. Traditional image fusion methods often suppose that there is no noise in the source image pairs [6,7,8,9]. However, due to the limitations of sensor-related techniques, image noise appears in the images captured by all types of commercial, professional, and scientific cameras [10,11,12], which can seriously affect image analysis. To improve image quality, both image denoising and image fusion have therefore drawn increasing attention in the image processing area.
    In the past decade, similar image processing techniques have been applied to image denoising and fusion with great success. Wavelet, multi-scale transform, and total variation-based algorithms are the three most widely used approaches in both image fusion and denoising [13,14,15]. Sparse representation (SR) has also proved effective in both image denoising and fusion [16,17,18].
    Conventional image fusion methods handle the noise of source images in two steps: the source images are denoised first, and the denoised images are then fused. Since denoising may decrease both the sharpness and contrast of the source images, fusing denoised images may introduce inaccuracies in image details. Additionally, both image fusion and denoising are time-consuming. To further improve the efficiency of image fusion, a number of simultaneous image denoising and fusion methods have been proposed in the past few years, most of which are developed within the SR framework.
    However, most existing SR-based simultaneous fusion and denoising methods do not specialize in image restoration, so both structure and detail information may be degraded in the denoising process. To address this limitation, a novel simultaneous multi-modality image denoising and fusion method is proposed. The proposed method consists of three steps. First, the source images are decomposed into cartoon and texture components by a total variation-based method; in this step, image noise is isolated within the texture components. Second, a GSM-based SR model specialized for image restoration is used for the denoising and fusion of the texture components, while a spatial-domain method is applied to the fusion of the cartoon components. The GSM-based SR model can denoise and sparse code the noisy texture components simultaneously; the sparse coefficients are fused using the Max-L1 fusion rule, and the fused coefficients are inversely transformed into a denoised and fused image. Finally, the fused texture and cartoon components are integrated into the fused image. The main contributions can be summarized as follows:
    • This paper proposes an image denoising and fusion framework that can fuse and denoise multi-modality images simultaneously. In the proposed framework, image noise is decomposed into the texture components, which are fused and denoised simultaneously by an SR-based method. For the cartoon components, a proper spatial-domain fusion rule is implemented. The denoised and fused image is obtained by integrating the fused texture and cartoon components.
    • This paper proposes a cartoon-texture decomposition based method to separate image noise from detail information. Source images are decomposed into cartoon and texture components, with the noisy components assigned to the texture components. Therefore, only the texture components need denoising, which retains the structure information of the cartoon components. The detail and structure information of the source images is also separated in this step.
    • This paper proposes a GSM-based SR model for simultaneous denoising and fusion of texture components. According to a GSM model, SR can remove the noise of texture components, and preserve the image texture information. During the denoising process, sparse coefficients without noisy information can be obtained for fusion.
    The rest of this paper is structured as follows: Section 2 discusses the related work; Section 3 presents the proposed framework; Section 4 describes the experiments and analyzes the results; and Section 5 concludes this paper.

    2. Related Work

    2.1. Sparse Representation in Image Denoising

    The SR technique, which represents image patches as sparse linear combinations of atoms in an over-complete redundant dictionary, has been a popular research topic in image processing in recent years [19,20,21,22]. In the image denoising field, SR-based research focuses on two related issues: dictionary construction and the statistical modeling of sparse coefficients. For dictionary construction, the K-SVD dictionary learning proposed by Elad [23], multi-scale dictionary learning [24], and the online dictionary learning of Mairal [25] are the most popular methods. For the statistical modeling of sparse coefficients, denoising is conducted during the acquisition of the sparse coefficients. Zoran and Weiss [26] presented Gaussian mixture models for sparse coefficients in image denoising. A variational Bayesian model and a centralized Laplacian model were proposed for image denoising by Ji [27] and Dong [28], respectively. These sparse models achieve advanced performance in image denoising.

    2.2. Dictionary Construction and Image Decomposition

    For image fusion, the key issues of SR-based methods can be categorized into dictionary construction and source image decomposition [10,29]. For dictionary construction, Yang and Li [30] applied a fixed DCT dictionary to multi-focus image fusion in the first application of SR-based image fusion. Duan presented a dual-tree complex shearlet dictionary for enhancing the sharpness and contrast of infrared-visible image fusion [31]. K-SVD based dictionary construction methods were implemented in image fusion by Yin [32] and Zhu [33], which improved the detail performance in both fused multi-focus and medical images. Both Kim and Zhu used principal component analysis (PCA) bases of the source images to construct dictionaries for image fusion [34,35,36]; the constructed dictionaries, consisting of PCA bases, were compact and informative, which decreased the computational cost and improved the fusion performance. For the decomposition of source images, Kim implemented Gaussian smoothing in image decomposition, which strengthened the visual effect of the fused image [34]. Liu introduced multi-scale transform filters for the decomposition of source images, which improved the performance of SR-based fusion methods in both medical and infrared-visible scenes [29]. Liu and Yin presented a morphological component analysis (MCA) based cartoon-texture decomposition method for image decomposition [37], and also proposed proper SR-based fusion rules for the fusion of the cartoon and texture components. According to the discussion above, dictionary construction, image decomposition, and models specialized for sparse coefficients are the three key issues of both image denoising and fusion.

    2.3. Simultaneous Image Denoising and Fusion Method

    Li and Yin [10] developed a dictionary learning method based on group-related sparse representation. They used the intrinsic geometrical structure of sparse representation, in the form of clusters, to build a dictionary. This method ensures the group-structure sparsity of local atoms in different groups of both noise-free and noisy images. Kim and Han [34] presented a joint patch clustering-based dictionary learning (JCPD) method for SR-based image fusion. This method trains a few sub-dictionaries using a PCA-based method, which can construct a denoised, compact dictionary for sparse representation. Additionally, Kim and Han set the error tolerance for denoising in the sparse coding process according to the image noise rate, so that denoised sparse coefficients can be obtained for image fusion. An adaptive sparse representation (ASR) model was proposed by Liu and Wang [38] for simultaneous image denoising and fusion; they used the geometric similarity of image patches to build a few compact sub-dictionaries for both denoising and fusion. Li [39] proposed a medical image fusion, denoising, and enhancement method based on low-rank sparse component decomposition and dictionary learning (FDESD). Low-rank and sparse regularization terms are first incorporated into the dictionary learning model; then, a weighted nuclear norm and a sparse constraint are imposed on the sparse components to remove noise and preserve texture details; finally, the fused low-rank and sparse components of the source images are combined to construct the fused image. Li [40] proposed an image fusion method based on three-layer decomposition and sparse representation (FDS). The source image is first decomposed into high- and low-frequency components; the sparse reconstruction error parameter is then adaptively designed and applied to the denoising and fusion of the high-frequency components simultaneously, while a structure-texture decomposition model is used for the low-frequency components. The fused image is obtained by combining the fused high- and low-frequency components.
    Mei [41] first represented image features using fractional-order gradient information, and then used two convex variational models to achieve the fusion of noisy images; an alternating direction method of multipliers was applied to optimize the simultaneous fusion and denoising. Under the assumption that the RGB and near-infrared (NIR) images share the same well-calibrated spatial resolution, multi-scale wavelet analysis was integrated into a multi-spectral fusion and denoising framework to achieve texture transfer and noise removal [42]; a discrepancy model based on the wavelet scale map was used to resolve the discrepancy between the RGB and NIR images, and NIR-guided Laplacian distributions were applied to model the prior of the fused wavelet coefficients, so that the fusion, denoising, and detail preservation of RGB and NIR images could be achieved simultaneously. Wang [43] integrated an energy function into a variational approach to adjust the pixel values of an input image directly: the corresponding histogram is redistributed to be uniform while the image noise is removed, with a total variation term removing the noise, a histogram equalization term enhancing the contrast, and a fidelity term retaining both image structure and texture. Yang [44] used non-locally centralized sparse representation (NCSR) and residual learning with a deep CNN (DnCNN) to achieve internal and external denoising, respectively; the simultaneous denoising and fusion is converted into an adaptive weight-based fusion of the denoised image details obtained by NCSR and DnCNN, with the weights of pixel intensity change and global gradient adaptively adjusted. Since image structure varies considerably across image patches, existing SR-based solutions always need an exceedingly redundant dictionary for signal reconstruction, so visual artifacts and a high computational cost are unavoidable in most cases.

    3. The Proposed Simultaneous Denoising and Fusion Framework

    The proposed simultaneous denoising and fusion framework is demonstrated in Figure 1. In the proposed framework, noisy source images are decomposed into cartoon and texture components. After the cartoon-texture decomposition, the noisy and detailed information of the source images is contained in the texture components. A GSM-based SR model is applied to the denoising and fusion of the texture components: the denoised sparse coefficients are first fused using the Max-L1 fusion rule, and the fused coefficients are then inversely transformed into the denoised and fused texture components. For the cartoon components, a texture-information-based spatial fusion rule is applied. Finally, the fused cartoon and texture components of the source images are integrated to generate the fused image.
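The overall pipeline can be sketched end-to-end as follows. This is a minimal stand-in rather than the authors' implementation: the function names are illustrative, the cartoon-texture split is approximated by a simple blur-and-residual decomposition, and the texture fusion by a per-pixel max-absolute rule, in place of the TV decomposition and GSM-based SR model detailed in the following subsections.

```python
import numpy as np

def smooth(img):
    """3x3 cross-average blur; a crude stand-in for the TV-based cartoon extractor."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0

def decompose(img):
    """Split an image into a structure (cartoon) part and a detail+noise (texture) part."""
    cartoon = smooth(img)
    return cartoon, img - cartoon

def fuse_texture(t1, t2):
    """Stand-in for the GSM-SR Max-L1 rule: keep the stronger detail response per pixel."""
    return np.where(np.abs(t1) >= np.abs(t2), t1, t2)

def fuse_cartoon(c1, c2, t1, t2):
    """Texture-energy weighted average of the structure components (cf. Equation (16))."""
    w1, w2 = np.sum(t1**2), np.sum(t2**2)
    return (w1 * c1 + w2 * c2) / (w1 + w2 + 1e-12)

def fuse_pair(x1, x2):
    """Decompose -> fuse texture and cartoon separately -> recombine (I_f = C_f + T_f)."""
    c1, t1 = decompose(x1)
    c2, t2 = decompose(x2)
    return fuse_cartoon(c1, c2, t1, t2) + fuse_texture(t1, t2)
```

Fusing an image with itself recovers the image, which is a useful sanity check on the decompose/recombine round trip.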

    3.1. Image Cartoon-Texture Decomposition

    Cartoon-texture image decomposition, which can separate the noisy and detailed information of an image, is widely used in image processing. It is a fundamental step for both the denoising and fusion processes in the proposed framework. For image fusion, cartoon-texture decomposition separates the source images into detail and structure components, and different fusion rules are applied to the detail and structure components respectively to preserve image information. In the cartoon-texture decomposition, image noise is assigned to the texture components, so denoising only needs to be performed on the texture components. In the framework, a total variation model is used for cartoon-texture decomposition [45]. The details of this model are shown in Equation (1):
    $$\inf_{u \in BV(\Omega),\, g \in (L^p(\Omega))^2} G_p(u,g) = \|u\|_{BV(\Omega)} + \lambda \left\| f - (u + \operatorname{div} g) \right\|_{L^2(\Omega)}^2 + \mu \left\| \sqrt{g_1^2 + g_2^2} \right\|_{L^p(\Omega)},$$
    where $g = (g_1, g_2)$ is a vector in the space $G$ used to represent digital images, $BV(\Omega)$ denotes the set of functions of bounded variation, $\lambda$ and $\mu$ are regularization parameters, $u$ denotes the cartoon component of the input image, and $f$ denotes the input image. $\|g\|_{L^p}$ denotes the $L^p$ norm of $\sqrt{g_1^2 + g_2^2}$, given in Equation (2):
    $$\|g\|_{L^p} = \left( \int \left( g_1^2 + g_2^2 \right)^{p/2} \, dx \, dy \right)^{1/p}.$$
    The cartoon component $u$ is obtained by solving the optimization problem in Equation (1). Once $u$ is calculated, the texture and noise information $v$ follows directly from Equation (3):
    $$v = f - u.$$
    The decomposed cartoon and texture components of noisy images are shown in Figure 2. The noisy information appears only in the texture components, so the image denoising problem is converted into a denoising problem for the texture components.
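A simplified version of this decomposition can be sketched with a few gradient-descent steps on a smoothed ROF-style TV energy. This is a stand-in for the full Vese-Osher model of Equation (1), with illustrative names and parameter choices; the texture-plus-noise part is then the residual of Equation (3).

```python
import numpy as np

def tv_cartoon(f, lam=0.1, tau=0.05, iters=50):
    """Gradient descent on a smoothed TV energy |grad u| + lam * ||f - u||^2.
    The minimizer u is the cartoon part; v = f - u holds texture and noise."""
    u = f.astype(np.float64).copy()
    for _ in range(iters):
        # forward differences of u (periodic boundary via roll)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + 1e-8)
        px, py = ux / mag, uy / mag
        # backward-difference divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (div + 2.0 * lam * (f - u))
    return u

def total_variation(img):
    """Anisotropic TV: sum of absolute forward differences along both axes."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()
```

On a noisy piecewise-constant image, the extracted cartoon has markedly lower total variation than the input, while `u + v` reconstructs the input exactly.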

    3.2. GSM-Based SR Model for the Denoising and Sparse Representation of Texture Components

    In the proposed method, a GSM-based SR model is employed for the denoising and sparse coding of the texture information. The GSM-based SR model is a statistical model for sparse representation proposed by Dong [46].
    The GSM model decomposes the coefficient vector $\alpha$ into the point-wise product of a Gaussian vector $\beta$ and a hidden scalar multiplier $\theta$, i.e., $\alpha_i = \theta_i \beta_i$, where $\theta$ is a positive scaling variable with probability $P(\theta)$. In the GSM-based SR model, the sparse coefficients $\alpha$ can be obtained by specifying the sparse prior $P(\theta)$. The conditional prior $P(\alpha_i \mid \theta_i)$ can be expressed as Equation (4):
    $$P(\alpha_i \mid \theta_i) = \frac{1}{\theta_i \sqrt{2\pi}} \exp\left( -\frac{(\alpha_i - \mu_i)^2}{2\theta_i^2} \right),$$
    where $\alpha$ and $\mu$ can be decomposed as $\alpha = \Lambda \beta$ and $\mu = \Lambda \gamma$, respectively, and $\Lambda = \operatorname{diag}(\theta) \in \mathbb{R}^{K \times K}$ is the diagonal matrix formed from $\theta$.
    For the image denoising problem, a noisy image can be represented as a degraded version of the original image, modeled as Equation (5):
    $$y = x + \omega,$$
    where $y$ and $x$ are the degraded and original images, respectively, and $\omega$ is additive white Gaussian noise following $N(0, \sigma_n^2)$.
    According to the GSM statistical model in Equation (4) and the image degradation model in Equation (5), the denoising and reconstruction problem for the texture-component patches can be formulated as Equation (6):
    $$(x, \{B_l\}, \{\theta_l\}) = \arg\min_{x, \{B_l\}, \{\theta_l\}} \|y - x\|_2^2 + \sum_{l=1}^{L} \left\{ \left\| \tilde{R}_l x - D \Lambda_l B_l \right\|_F^2 + \sigma_n^2 \left\| B_l - \Gamma_l \right\|_F^2 + 4\sigma_n^2 \log(\theta_l + \varepsilon) \right\},$$
    where $\tilde{R}_l x = [R_{l_1} x, R_{l_2} x, \ldots, R_{l_m} x] \in \mathbb{R}^{n \times m}$ denotes the data matrix formed by $m$ image patches, $R_l \in \mathbb{R}^{n \times N}$ denotes the matrix extracting the $l$-th patch $x_l$ from $x$, $D$ is the dictionary consisting of the PCA bases of all input texture-component image patches, $B_l$ and $\Gamma_l$ represent the first- and second-order statistics of the GSM-based SR model for the $l$-th patch of $x$, $L$ is the total number of patches, and $\varepsilon$ is a small positive number for numerical stability.
    To solve Equation (6), an iterative SR-based method consisting of two alternating optimization problems is applied. In the first problem, $B$ and $\theta$ are fixed to solve for $x$. When $B$ and $\theta$ are fixed, $\hat{X}_l = D \Lambda_l B_l$ is also fixed, so Equation (6) becomes the $L_2$ optimization problem of Equation (7):
    $$x = \arg\min_x \|y - x\|_2^2 + \sum_{l=1}^{L} \left\| \tilde{R}_l x - D \Lambda_l B_l \right\|_F^2.$$
    This problem can be solved [46] in closed form by Equation (8):
    $$x = \left( I + \sum_{l=1}^{L} \tilde{R}_l^T \tilde{R}_l \right)^{-1} \left( y + \sum_{l=1}^{L} \tilde{R}_l^T \hat{X}_l \right),$$
    where $I$ is an identity matrix.
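Because each $\tilde{R}_l^T \tilde{R}_l$ is diagonal — it simply counts how many patches cover each pixel — the matrix inverse in Equation (8) reduces to a per-pixel division. A minimal numpy sketch of this aggregation step, shown for the simplified case of one patch per location (names are illustrative, not from the paper's code):

```python
import numpy as np

def reconstruct(y, patch_estimates, patch_size, stride):
    """Closed-form solution of Eq. (8): x = (I + sum R^T R)^-1 (y + sum R^T x_hat).
    The inverse is a per-pixel division because sum R^T R is diagonal."""
    num = y.astype(np.float64).copy()   # accumulates y + sum R^T x_hat
    den = np.ones_like(num)             # accumulates the diagonal of I + sum R^T R
    H, W = y.shape
    idx = 0
    for i in range(0, H - patch_size + 1, stride):
        for j in range(0, W - patch_size + 1, stride):
            num[i:i + patch_size, j:j + patch_size] += patch_estimates[idx]
            den[i:i + patch_size, j:j + patch_size] += 1.0
            idx += 1
    return num / den
```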
    In the second problem, $x$ is fixed to solve for $B$ and $\theta$. When $x$ is fixed, Equation (6) reduces to Equation (9):
    $$(\{B_l\}, \{\theta_l\}) = \arg\min_{\{B_l\}, \{\theta_l\}} \left\| \tilde{R}_l x - D \Lambda_l B_l \right\|_F^2 + \sigma_n^2 \left\| B_l - \Gamma_l \right\|_F^2 + 4\sigma_n^2 \log(\theta_l + \varepsilon).$$
    Following Dong's method [46], this problem is divided into two sub-problems that alternately fix one of $B$ and $\theta$ and solve for the other. When $\theta$ is fixed, $B_l$ is obtained from Equation (10):
    $$B_l = \arg\min_{B_l} \left\| \tilde{R}_l x - D \Lambda_l B_l \right\|_F^2 + \sigma_n^2 \left\| B_l - \Gamma_l \right\|_F^2.$$
    Since both terms of Equation (10) are $L_2$, this is a classical Wiener filtering problem, solved by Equation (11):
    $$B_l = \left( (D \Lambda_l)^T D \Lambda_l + \sigma_n^2 I \right)^{-1} \left( (D \Lambda_l)^T \tilde{R}_l x + \sigma_n^2 \Gamma_l \right).$$
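Equation (11) is the normal-equation solution of Equation (10). The sketch below is an illustrative implementation (names are not from the paper's code), assuming the $\sigma_n^2$ factor also multiplies the $\Gamma_l$ term, which is what differentiating Equation (10) and setting the gradient to zero gives:

```python
import numpy as np

def update_B(Rx, D, theta, Gamma, sigma_n):
    """Wiener-filter update of Eq. (11): B = (A^T A + s2 I)^-1 (A^T Rx + s2 Gamma),
    where A = D @ diag(theta) and s2 = sigma_n**2."""
    A = D * theta[np.newaxis, :]            # D Lambda, Lambda = diag(theta)
    K = A.shape[1]
    lhs = A.T @ A + sigma_n**2 * np.eye(K)
    rhs = A.T @ Rx + sigma_n**2 * Gamma
    return np.linalg.solve(lhs, rhs)
```

The returned $B_l$ zeroes the gradient of the Equation (10) objective, which is easy to verify numerically.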
    When $B$ is fixed, Equation (9) is transformed into Equation (12):
    $$\theta_l = \arg\min_{\theta_l} \left\| \tilde{R}_l x - D \Lambda_l B_l \right\|_F^2 + 4\sigma_n^2 \log(\theta_l + \varepsilon).$$
    To solve the optimization problem in Equation (12), this paper supposes $x_l = D A_l$. Following Dong's method, and writing $a_l = \|B_l\|_2^2$, $b_l = -2 A_l (B_l)^T$, and $c = 4\sigma_n^2$, $\theta_l$ can be calculated by Equation (13):
    $$\theta_l = \begin{cases} 0, & \text{if } b_l^2 - 8 a_l c < 0, \\ \arg\min_{\theta_l} \left\{ f(0), f(\theta_{l,1}), f(\theta_{l,2}) \right\}, & \text{otherwise}, \end{cases}$$
    where $\theta_{l,1}$ and $\theta_{l,2}$, the stationary points of $f$, are given in Equation (14):
    $$\theta_{l,1} = -\frac{b_l}{4 a_l} + \frac{\sqrt{b_l^2 - 8 a_l c}}{4 a_l}, \qquad \theta_{l,2} = -\frac{b_l}{4 a_l} - \frac{\sqrt{b_l^2 - 8 a_l c}}{4 a_l},$$
    and $f(\theta_l)$ is demonstrated in Equation (15):
    $$f(\theta_l) = \|B_l\|_2^2 \, \theta_l^2 - 2 A_l (B_l)^T \theta_l + 4\sigma_n^2 \ln(\theta_l + \varepsilon).$$
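Equations (13)-(15) amount to evaluating $f$ at zero and at the two stationary points of its quadratic part and keeping the minimizer. A sketch under the same $a_l$, $b_l$, $c$ shorthand (names are illustrative; note the comparison against $f(0)$ depends on the chosen $\varepsilon$):

```python
import numpy as np

def update_theta(A_l, B_l, sigma_n, eps=1e-8):
    """Scalar multiplier update of Eqs. (13)-(15): evaluate f at 0 and at the two
    stationary points of the quadratic part, and keep the minimizer."""
    a = float(np.dot(B_l, B_l))            # a = ||B||^2
    b = -2.0 * float(np.dot(A_l, B_l))     # b = -2 A B^T
    c = 4.0 * sigma_n**2                   # c = 4 sigma_n^2
    f = lambda t: a * t**2 + b * t + c * np.log(t + eps)   # Eq. (15)
    disc = b**2 - 8.0 * a * c
    if disc < 0:                           # no real stationary point: shrink to zero
        return 0.0
    r = np.sqrt(disc)
    candidates = [t for t in (0.0, (-b + r) / (4.0 * a), (-b - r) / (4.0 * a)) if t >= 0.0]
    return min(candidates, key=f)
```

When the sparse code and Gaussian part are uncorrelated the multiplier collapses to zero, suppressing noise; when they agree strongly a large positive multiplier survives.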
    Through the above steps, the noisy information of the source images can be eliminated; simultaneously, the image information is sparse coded into sparse coefficients for coefficient fusion.

    3.3. Details of Fusion Process

    Supposing there are n noisy source images x1, x2, …, xn for multi-modality image fusion, the fusion process is summarized as follows. In the proposed framework, the texture components of each noisy source image are denoised iteratively in the sparse coding process. In each iteration, the dictionary used in sparse coding is obtained by calculating the PCA bases of all the source images. When the iteration reaches the maximum number, sparse coefficients are obtained for fusion. After the coefficients are fused, the fused image is obtained by reconstruction from the fused coefficients. The whole denoising and fusion process of the texture components is demonstrated in Algorithm A1 (shown in Appendix A).
    In this work, the outer loop number k is set to 6 and the inner loop number j to 3 for the elimination of noisy information. From the texture components of all the fused image patches, the texture component of the fused image, Tf, is obtained. The presented texture-information fusion model eliminates the noise in the texture components of noisy images during the sparse coding process, approximating noise-free sparse coefficients of the texture components. Noise in the fused result is suppressed by fusing the approximated coefficients of the texture information. This is the major contribution of the presented SR model.
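The Max-L1 coefficient fusion used here is itself a one-liner: per patch, the coefficient vector with the larger L1 norm is treated as the more active one and kept. A sketch with coefficients stored one patch per column (names illustrative):

```python
import numpy as np

def max_l1_fuse(C1, C2):
    """Max-L1 fusion rule: for each patch (column), keep the coefficient vector
    whose L1 norm - the activity level - is larger."""
    n1 = np.abs(C1).sum(axis=0)
    n2 = np.abs(C2).sum(axis=0)
    return np.where(n1 >= n2, C1, C2)
```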
    To preserve the structure information of all source images, a weighted-average fusion method is applied to the fusion of the cartoon components. The weight of each cartoon component is calculated by Equation (16):
    $$c_r^f = \frac{\sum_{k=1}^{K} \omega_k c_r^k}{\sum_{k=1}^{K} \omega_k},$$
    where $c_r^k$ denotes the cartoon component of the $r$-th image patch of the $k$-th source image, $c_r^f$ denotes the fused cartoon component of the $r$-th image patch, and $\omega_k = \|t_r^k\|_2^2$, with $t_r^k$ the $r$-th denoised patch of the texture components of the $k$-th source image. Once the cartoon-component patches are fused, they are combined to form the fused cartoon-component image $C_f$. In the proposed cartoon-component fusion rule, when a patch from one source image contains more texture than the corresponding patch from the other image, its cartoon component is given a higher weight in the final fused image. The fused and denoised image is obtained by simply adding $C_f$ and $T_f$, as in Equation (17):
    $$I_f = C_f + T_f.$$
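Equation (16) can be sketched for a single patch position as follows (the helper name is illustrative); the final image is then just the sum of Equation (17).

```python
import numpy as np

def fuse_cartoon_patch(cartoon_patches, texture_patches):
    """Eq. (16): weighted average of cartoon patches, each source weighted by the
    energy w_k = ||t_r^k||^2 of its denoised texture patch."""
    weights = np.array([float(np.sum(t**2)) for t in texture_patches])
    weights = weights / weights.sum()
    return sum(w * c for w, c in zip(weights, cartoon_patches))
```

A source whose texture patch carries three times the energy of the other's contributes three times the weight to the fused cartoon patch.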

    4. Experiments and Analyses

    4.1. Experiment Setup

    In the comparative experiments, 20 pairs of multi-focus images, 20 pairs of infrared-visible images, and 20 pairs of medical images are used to test the fusion performance. The multi-focus and infrared-visible image pairs have a resolution of 240×320; the medical image pairs are 256×256. Several representative images are shown in Figure 3: (a)&(b) and (g)&(h) are two typical multi-focus image pairs; (c)&(d) and (i)&(j) are two medical image pairs; and (e)&(f) and (k)&(l) are two infrared-visible image pairs. All the test image pairs were collected by Liu [29] and can be downloaded at quxiaobo.org. Gaussian noise is injected into the source images for simultaneous fusion and denoising testing; in the following comparative experiments, the noise levels are fixed at σ = 0, 10, 20, and 50. All the experiments are programmed in MATLAB 2016b on a desktop with an Intel(R) Core(TM) i9-7900X @ 3.30 GHz CPU and 16.00 GB RAM.
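The noise-injection step of this protocol can be sketched as follows (illustrative function name; a 0-255 pixel range is assumed, consistent with the σ values used):

```python
import numpy as np

def add_gaussian_noise(img, sigma, seed=0):
    """Inject zero-mean Gaussian noise of standard deviation sigma into an
    8-bit-range image, matching the test protocol's sigma in {0, 10, 20, 50}."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 255.0)
```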
    Eight mainstream objective evaluation metrics are used to evaluate all fused images: Tsallis entropy (QTE) [47,48], nonlinear correlation information entropy (QNCIE) [49], edge retention (QAB/F) [50], the phase-congruency-based fusion metric (QP) [51], mutual information (MI) [52,53], Yang's fusion metric (QY) [48,54], the Chen-Blum metric (QCB) [48,55], and visual information fidelity for fusion (VIFF) [56]. These metrics evaluate different characteristics of the fused images. QTE is a divergence measure that evaluates the degree of dependence between two discrete signals. QNCIE measures the nonlinear correlation coefficient (NCC) between the input images and the fused image. QAB/F is a gradient-based quality index measuring how well the edge information of the source images is transferred to the fused image. QP is an image phase-congruency-based metric that evaluates the corner and edge information of the fused image. MI evaluates the information similarity between the source images and the fused image. QY is a structural-similarity-based metric that measures the structural similarity between the source images and the fused image without a reference image. QCB is a human-perception-inspired metric that yields a contrast-preservation value for the fused image. VIFF, also human-perception-inspired, quantifies the information shared between the test and fused images based on Natural Scene Statistics (NSS) theory and a Human Visual System (HVS) model. For all of these metrics, larger values indicate better fusion results.
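As one concrete example of the metrics above, MI can be estimated from a joint intensity histogram. This is a simplified sketch of the idea, not the exact implementation of [52,53]:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of mutual information between two images: how much
    intensity information the two images share (larger = more shared)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                      # joint probability table
    px = p.sum(axis=1, keepdims=True)            # marginal of a
    py = p.sum(axis=0, keepdims=True)            # marginal of b
    nz = p > 0                                   # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```

An image shares far more information with itself than with an unrelated image, which is the behavior the fusion metric exploits.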

    4.2. Comparison of Simultaneous Fusion and Denoising Methods

    In this experiment, the simultaneous denoising and fusion performance of the proposed framework is compared with two existing state-of-the-art methods, FDESD [39] and FDS [40]. For all sparse-representation-based methods, the size of each image patch is set to 8×8 and the overlap to 6 in all the experiments. The dictionary size of FDESD and the proposed method is set to 256.
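The patch settings above imply a sliding stride of patch − overlap = 2 pixels. A sketch of the extraction step (illustrative names):

```python
import numpy as np

def extract_patches(img, patch=8, overlap=6):
    """Extract vectorized sliding patches with the experiments' settings:
    8x8 patches with overlap 6, i.e. a stride of 2 pixels."""
    stride = patch - overlap
    H, W = img.shape
    cols = [img[i:i + patch, j:j + patch].ravel()
            for i in range(0, H - patch + 1, stride)
            for j in range(0, W - patch + 1, stride)]
    return np.stack(cols, axis=1)   # shape: (patch * patch, num_patches)
```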

    4.2.1. Multi-Focus Image Fusion

    Due to the limitation of focus range, camera lenses have difficulty capturing an all-in-focus image in a single shot. Multi-focus image fusion is designed to solve this issue. Moreover, the parameter settings of image sensors can cause multi-focus images to be affected by noise, so simultaneous denoising and fusion can relieve the limitations of the lens focus range. Fourteen image pairs are used in the multi-focus image fusion experiments. To test the simultaneous denoising and fusion performance, these image pairs are tested both as original image pairs and as noisy image pairs, with noise levels set to σ = 10, 20, and 50.
    Selected experimental results 1: A representative image pair from the multi-focus experiments is shown in Figure 4. Rows 1-2 and rows 3-5 show the noisy source images and the processed images, respectively. The noise levels of columns 1 to 4 in Figure 4 are σ = 0, 10, 20, and 50, respectively.
    According to Figure 4, when the noise level is σ = 0, all three methods produce similar fusion results. However, as the noise level increases to σ = 20 and σ = 50, the image fused by FDESD cannot filter out all the noise. To facilitate comparison, the local regions enclosed by blue frames in Figure 4 are enlarged and presented in the lower-left corner of each fused image. These close-up views show that the fusion details produced by the proposed method retain better contrast and sharpness when the noise level rises to σ = 20 and σ = 50. This confirms that, among the three methods, the proposed method achieves the best visual quality in simultaneous multi-focus image fusion and denoising.
    To assess the fusion performance objectively, eight evaluation metrics, i.e., QTE, QNCIE, QAB/F, QP, MI, QY, QCB, and VIFF, are used in the comparison. The corresponding results are listed in Table 1, where the largest values are highlighted in bold. The proposed method achieves the highest scores in seven of the eight metrics (QTE, QNCIE, QP, MI, QY, QCB, and VIFF) at all noise levels. The remaining metric, QAB/F, measures edge retention: FDS scores higher than the proposed method when the noise level is σ = 0, but as the noise level rises to σ = 10, 20, and 50, the QAB/F scores of the proposed method surpass those of FDS. It can therefore be concluded that the proposed method yields the best results in simultaneous image denoising and fusion.
    Selected experimental results 2: A second representative image pair from the multi-focus experiments is shown in Figure 5. Rows 1-2 and rows 3-5 show the noisy source images and the processed images, respectively. The noise levels of columns 1 to 4 in Figure 5 are σ = 0, 10, 20, and 50, respectively.
    According to Figure 5, when the noise level is σ = 0, all three methods produce similar fusion results. However, as the noise level increases to σ = 20 and σ = 50, the image fused by FDESD cannot filter out all the noise. To facilitate comparison, the local regions enclosed by blue frames in Figure 5 are enlarged and presented in the lower-left corner of each fused image. These close-up views show that the fusion details produced by the proposed method retain better contrast and sharpness when the noise level rises to σ = 20 and σ = 50. This confirms that, among the three methods, the proposed method achieves the best visual quality in simultaneous multi-focus image fusion and denoising.
    To assess the fusion performance objectively, the same eight evaluation metrics are used in the comparison. The corresponding results are listed in Table 2, where the largest values are highlighted in bold. The proposed method achieves the highest scores in seven of the eight metrics (QTE, QNCIE, QP, MI, QY, QCB, and VIFF) at all noise levels. It can therefore be concluded that the proposed method yields the best results in simultaneous image denoising and fusion.
    The average experimental results: The average objective evaluation results of simultaneous multi-focus image denoising and fusion are shown in Table 3. These results are consistent with the two groups of experiments demonstrated above: the proposed method has the best overall performance.

    4.2.2. Multi-Modality Medical Image Fusion

    Medical images are widely used in medical diagnosis. However, a single-modality medical image can reflect only one aspect of the diagnostic characteristics, so multi-modality medical image fusion is an effective technique for enhancing the accuracy of medical diagnosis. Additionally, medical imaging sensors have limitations, and medical images often contain noise; simultaneous image denoising and fusion therefore has practical significance in medical image processing. To test the performance of simultaneous multi-modality medical image denoising and fusion, the second experiment compares the fusion results of eight multi-modality medical image pairs at noise levels σ = 0, 10, 20, and 50.
    The selected experimental results -1: In Figure 6, rows 1–2 and rows 3–5 show the noisy source images and processed images, respectively. Columns 1 to 4 of Figure 6 provide the source images and fusion results with noise levels from σ = 0 to 50, respectively.
    The simultaneous denoising and fusion results of a representative multi-modality medical image pair are shown in Figure 6. FDESD can eliminate the image noise at levels σ = 0 to 10, but when the noise level rises to σ = 20 and 50, the images fused by FDESD are still noisy. FDS and the proposed method eliminate image noise at all noise levels. On careful observation, some information residue remains in the magnified regions of the image processed by FDS at noise level σ = 50. In contrast, the results of the proposed method exhibit the best visual quality, without obvious artifacts or residue, at all noise levels.
    In comparison with the other methods, FDS generates fusion results with well-preserved edges when the noise level is set to σ = 20. However, the proposed method outperforms the other two methods in terms of almost all the metrics, as shown in Table 4.
    The selected experimental results -2: In Figure 7, rows 1–2 and rows 3–5 show the noisy source images and processed images, respectively. Columns 1 to 4 of Figure 7 provide the source images and fusion results with noise levels from σ = 0 to 50, respectively.
    The simultaneous denoising and fusion results of a representative multi-modality medical image pair are shown in Figure 7. FDESD can only eliminate the image noise at levels σ = 0 to 10; the images fused by FDESD contain noise when the noise level rises to σ = 20 and 50. FDS and the proposed method eliminate image noise at all noise levels. On careful observation, some information residue remains in the magnified regions of the image processed by FDS at noise level σ = 50. Table 5 shows the results of the eight objective evaluation indicators. In contrast, the results of the proposed method exhibit the best visual quality, without obvious artifacts or residue, at all noise levels.
    The average experimental results: The average objective evaluation results of simultaneous multi-modality medical image denoising and fusion are shown in Table 6. These results are consistent with the two groups of simultaneous multi-modality medical image denoising and fusion experiments demonstrated above. The proposed method has better overall performance than FDESD and FDS.

    4.2.3. Infrared-Visible Image Fusion

    Infrared-visible image fusion is often used in low-light environments for object detection [57]. Due to the sensitivity of camera sensors, photos taken by visible-light sensors are noisy in low-light environments. Infrared sensors also produce noisy images when the temperature of the infrared sensor rises during imaging. Therefore, a simultaneous denoising and fusion technique has practical significance and is necessary for integrating the information of infrared and visible images. Eight infrared-visible image pairs are used in the comparisons of simultaneous infrared-visible image denoising and fusion. The noise levels of the infrared-visible image pairs are set to σ = 0, 10, 20, and 50, respectively.
    The selected experimental results -1: The fusion results of a representative infrared-visible image pair are shown in Figure 8. The first two rows of Figure 8 are the source infrared-visible image pair, and the remaining three rows are the processed images. Rows 3–5 are the images processed by FDESD, FDS, and the proposed method, respectively. Columns 1–4 present the original and processed images with noise levels of σ = 0, 10, 20, and 50, respectively.
    When the noise level is set to σ = 0, the background is clear in the fusion results of FDESD and FDS. However, when the noise level rises to σ = 10, 20, and 50, the backgrounds of the FDESD and FDS results become blurred. Besides that, the contrast of the enlarged object 'walking man' is low in all the images integrated by FDESD and FDS. The noise is eliminated in all images fused by the proposed method. Moreover, the object 'walking man' is clear and sharp in the fused images of the proposed method. Hence, the proposed method achieves the best simultaneous denoising and fusion effect among the three methods.
    Table 7 presents the average quantitative comparisons of the infrared-visible image simultaneous denoising and fusion results. It is obvious that the proposed method shows the best performance in all objective evaluation metrics.
    The selected experimental results -2: The fusion results of a representative infrared-visible image pair are shown in Figure 9. The first two rows of Figure 9 are the source infrared-visible image pair, and the remaining three rows are the processed images. Rows 3–5 are the images processed by FDESD, FDS, and the proposed method, respectively. Columns 1–4 present the original and processed images with noise levels of σ = 0, 10, 20, and 50, respectively.
    When the noise level is set to σ = 0, the background is clear in the fusion results of FDESD and FDS. However, when the noise level rises to σ = 10, 20, and 50, the backgrounds of the FDESD and FDS results become blurred. Besides that, the contrast of the enlarged object is low in all the images integrated by FDESD and FDS. The proposed method successfully eliminates noise in all fused images. Moreover, the enlarged object is clear and sharp in the images fused by the proposed method. According to the results of the eight objective evaluation indicators shown in Table 8, the proposed method achieves the best simultaneous denoising and fusion effect among the three methods.
    The average experimental results: The average objective evaluation results of simultaneous infrared-visible image denoising and fusion are shown in Table 9. These results are consistent with the two groups of simultaneous infrared-visible image denoising and fusion experiments demonstrated above. The proposed method has the best overall performance.

    4.2.4. Comparison of Computational Efficiency

    The average processing time is used to compare the computational efficiency of the three simultaneous denoising and fusion methods. All tests are implemented on the same platform described in the experiment setup. In the comparative experiments, the source codes of FDESD [39] and FDS [40] are provided by the original authors. The average processing times of the different methods are presented in Table 10.
    These results confirm that FDS is the most efficient method, and that the proposed method is much more efficient than FDESD. Although FDS is more efficient than the proposed method, the average processing time of the proposed method is still comparable. FDS uses the original sparse-representation model to eliminate image noise, which improves its efficiency; however, the denoising performance of the original sparse-representation model is limited. Considering both the processing quality and the computational efficiency of the SR model, the proposed method achieves the best overall performance in noisy image fusion among the three methods.

    4.3. Comparison of the Proposed Method with the Conventional Image Fusion and Denoising Method

    In further testing, the proposed method is compared with the conventional image fusion and denoising approach in noisy image fusion. The separate denoising and fusion method (SDF), which consists of a state-of-the-art SR-based image denoising method [58] and one of the best SR-based fusion frameworks [29], is implemented in twenty comparative experiments for each image type. The patch size is set to 8 × 8, and the overlap is set to 6 in all experiments.
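With an 8 × 8 patch size and an overlap of 6 pixels, patches are sampled on a stride-2 grid. A minimal sketch of this sliding-window image-to-patch transformation follows; the helper name is ours:

```python
import numpy as np

def extract_patches(img, patch=8, overlap=6):
    """Collect sliding patches as columns; stride = patch - overlap (2 here)."""
    step = patch - overlap
    h, w = img.shape
    cols = [img[i:i + patch, j:j + patch].ravel()
            for i in range(0, h - patch + 1, step)
            for j in range(0, w - patch + 1, step)]
    return np.stack(cols, axis=1)  # shape: (patch*patch, number_of_patches)
```

For a 256 × 256 image this yields ((256 − 8)/2 + 1)² = 125² = 15,625 patch vectors, which is one reason the patch overlap strongly influences the computation cost of SR-based methods.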

    4.3.1. Comparison of Processing Results

    In the comparative experiments, multi-focus, multi-modality medical, and infrared-visible image pairs are employed. Noise is added to all source images to test the processing performance, with noise levels of 0, 10, 20, and 50, respectively. The source images with noise level 0 are directly fused by the conventional image fusion method. Representative fusion results are shown in Figure 10. Rows 1 to 4 are the fused images with noise levels from 0 to 50, respectively. The processing results of SDF are presented in images (a) to (l): images (a–d), (e–h), and (i–l) are the processed results of the multi-focus, multi-modality medical, and infrared-visible image pairs, respectively. Images (m–x) are processed by the proposed simultaneous image denoising and fusion method: images (m–p), (q–t), and (u–x) are the processed results of the multi-focus, multi-modality medical, and infrared-visible image pairs, respectively.
    According to the processed results shown in Figure 10, the proposed simultaneous image denoising and fusion method shows the best performance in the brightness and contrast of the processed images. Parts of the images processed by SDF show the best performance in detailed information. However, since the denoising process affects the completeness of the detailed information of the source images, some details in the SDF-processed images are incomplete and unclear.
    To evaluate the separate and simultaneous denoising and fusion performance, objective metrics are computed. The results of multi-focus, multi-modality medical, and infrared-visible image denoising and fusion are shown in Table 11, Table 12, and Table 13, respectively. As shown in Table 11, the proposed method obtains better noisy multi-focus image denoising and fusion results in most of the metrics. Table 12 also demonstrates that the proposed method obtains better scores than the comparison method in most objective metrics. In noisy infrared-visible image denoising and fusion, SDF and the proposed method obtain similar scores when the noise level is low; some metrics of the separate processing results are even higher than those of the proposed method. However, as the noise level rises to 20 or higher, the proposed simultaneous processing method achieves clearly better performance in most of the objective metrics.
    Table 14, Table 15, and Table 16 compare the average objective evaluation results of each image type obtained by SDF and the proposed method. These results are consistent with the results of the images demonstrated above. Compared with the conventional image fusion and denoising method SDF, the proposed method has better performance under most conditions.

    4.3.2. Comparison of Computational Efficiency

    Simultaneous image denoising and fusion can decrease the total processing time of noisy image fusion. The average processing times of separate and simultaneous image denoising and fusion are shown in Table 17, where the denoising and fusion times of SDF are listed separately. Since the proposed method performs image denoising and fusion simultaneously, it achieves the minimum total processing time, as shown in Table 17.
    The total computation cost of the proposed method is much lower than that of SDF. On average, the proposed method takes 21.73 s to process an image pair of size 256 × 256, which is more than three times faster than the competitor. For image pairs of size 320 × 240, the proposed method also requires about one-third of the computation time of SDF. Moreover, Table 17 shows that the computation cost of the proposed method is similar to that of conventional image denoising or image fusion alone. In conclusion, compared with SDF, the proposed method shows better performance in both processed image quality and computation cost.

    5. Conclusions and Future Work

    In this paper, a novel framework for simultaneous image denoising and fusion with a novel SR model is proposed. Noisy source images are decomposed into cartoon and texture components, separating the image noise and detailed information into the texture components. To fuse the noisy texture components, a GSM-based image patch denoising and sparse coding model is presented, which codes patches of the noisy components into denoised sparse coefficients. Principal components of the noisy source images are extracted to construct a dictionary, which is used to sparsely code the texture components. The denoised and coded coefficients are fused according to the l1-norm of the hidden scalar multiplier θ. Since the cartoon components are noise free, a conventional spatial-domain weighted average rule is applied to them. The weighted average fusion rule largely preserves the structure information contained in the cartoon components of the source images. The integrated cartoon and texture components are summed to form a denoised and fused image. The fusion results of the proposed method in various experiments are promising compared with other SR-based simultaneous denoising and fusion methods. Additionally, the computational efficiency of the proposed image fusion framework is comparable to that of SR-based simultaneous denoising and fusion methods with similar functions. Compared with separate SR-based denoising and fusion, the proposed method also shows superior performance in both image quality and computation cost.
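The cartoon side of this pipeline can be sketched in a few lines: the two noise-free cartoon components are combined by a spatial-domain weighted average and then summed with the already-fused texture component. The equal weighting below is illustrative, and the helper name is ours; the paper's actual weighting scheme may differ:

```python
import numpy as np

def fuse_cartoon_and_merge(c1, c2, texture_fused, w=0.5):
    """Weighted-average fusion of two cartoon components, followed by
    summation with the fused texture component to form the final image."""
    cartoon_fused = w * c1 + (1.0 - w) * c2   # spatial-domain weighted average
    return np.clip(cartoon_fused + texture_fused, 0, 255)
```

Because the cartoon components carry the large-scale structure, the averaging step preserves structure while the sparse-coded texture component contributes the denoised detail.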
    Although the proposed method achieves better or comparable computation costs relative to existing SR-based methods, it is still time consuming. Since there are plentiful matrix computations in sparse coding and dictionary learning, the processing time of SR-based methods remains considerable. The employment of a group-based sparse model to improve dictionary construction, together with parallel processing of the sparse representation and fusion processes, is a future research topic. Additionally, the proposed method will be extended to the simultaneous denoising and fusion of color and multi-spectral images: each image layer will be processed first, and then the processed layers will be stacked to obtain the denoised fusion result.

    Author Contributions

    Conceptualization, G.Q. and H.L.; methodology, G.Q., G.H. and N.M.; software, G.Q. and H.L.; validation, G.Q., G.H. and M.H.; formal analysis, G.Q. and G.H.; investigation, N.M.; resources, G.Q., G.H. and H.L.; data curation, H.L.; writing—original draft preparation, G.Q.; writing—review and editing, G.Q., G.H., N.M., H.L. and M.H.; visualization, H.L.; supervision, G.Q. and M.H.; project administration, G.H. All authors have read and agreed to the published version of the manuscript.

    Funding

    This research received no external funding.

    Institutional Review Board Statement

    Not applicable.

    Informed Consent Statement

    Not applicable.

    Data Availability Statement

    Not applicable.

    Conflicts of Interest

    The authors declare no conflict of interest.

    Appendix A

    Algorithm A1 Cartoon Components Denoising and Fusion
    Input: n noisy multi-modality images x_1, x_2, …, x_n; the maximum iteration numbers k and j for the outer loop and inner loop, respectively.
    Output: Fused image x_f
    1: Outer Loop
    2: for i1 = 1 to k do
    3:   Image-to-patch transformation:
    4:   Transform each source image into z image patches.
    5:   Obtain the dictionary from the image patches of all source images using the PCA method.
    6:   for l = 1 to z do
    7:     Update the reconstructed image patches x_l^t, t ∈ {1, 2, …, n}, which represent the l-th image patch of x_t, for fixed B_l^t and θ_l^t using Equation (8).
    8:   end for
    9:   Inner Loop
    10:  for i2 = 1 to j do
    11:    for l = 1 to z do
    12:      Update θ_l^t for fixed B_l^t using Equation (11);
    13:      Update B_l^t for fixed θ_l^t using Equation (13).
    14:    end for
    15:  end for
    16: end for
    17: Get the fused hidden scalar multipliers θ_l^f according to the Max-L1 fusion rule as follows:
    18: θ_l^f = θ_l^t, if ‖θ_l^t‖_1 = max(‖θ_l^1‖_1, ‖θ_l^2‖_1, …, ‖θ_l^n‖_1).
    19: The corresponding B_l^f of the l-th image patch is B_l^f = B_l^t, and the corresponding Λ_l^f = Λ_l^t.
    20: Once θ_l^f and B_l^f of the l-th image patch are obtained, the fused image patch is reconstructed by the following equation:
    21: x_l^f = D Λ_l^f B_l^f, where Λ_l^f is a diagonal matrix of θ_l^f.
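Steps 17–21 of Algorithm A1 can be sketched as follows: for each patch index l, the source whose hidden scalar multipliers have the largest l1-norm is selected, and that source's coefficients reconstruct the fused patch as x_l^f = D Λ_l^f B_l^f. A minimal NumPy sketch (the array layout and function name are our own assumptions):

```python
import numpy as np

def max_l1_fuse(D, thetas, Bs):
    """Max-L1 patch fusion.

    D      : (d, m) dictionary.
    thetas : (n, z, m) hidden scalar multipliers (n sources, z patches).
    Bs     : (n, z, m) coefficient vectors aligned with thetas.
    Returns a (d, z) matrix whose columns are the fused patches.
    """
    n, z, m = thetas.shape
    fused = np.empty((D.shape[0], z))
    for l in range(z):
        # select the source whose multipliers carry the largest l1 energy
        t = int(np.argmax([np.abs(thetas[s, l]).sum() for s in range(n)]))
        lam = np.diag(thetas[t, l])           # Λ_l^f, diagonal matrix of θ_l^f
        fused[:, l] = D @ lam @ Bs[t, l]      # x_l^f = D Λ_l^f B_l^f
    return fused
```

The fused patches are then assembled back into the image by the inverse patch-to-image transformation, with overlapping pixels averaged.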

    References

    1. Zhang, J.; Hirakawa, K. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise. IEEE Trans. Image Process. 2017, 26, 1565–1579.
    2. Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Qi, G.; Mazur, N. A Novel Fast Single Image Dehazing Algorithm Based on Artificial Multiexposure Image Fusion. IEEE Trans. Instrum. Meas. 2021, 70, 1–23.
    3. Zheng, M.; Qi, G.; Zhu, Z.; Li, Y.; Wei, H.; Liu, Y. Image Dehazing by an Artificial Image Fusion Method Based on Adaptive Structure Decomposition. IEEE Sens. J. 2020, 20, 8062–8072.
    4. Li, H.; Li, X.; Yu, Z.; Mao, C. Multifocus image fusion by combining with mixed-order structure tensors and multiscale neighborhood. Inf. Sci. 2016, 349, 25–49.
    5. Wang, K.; Zheng, M.; Wei, H.; Qi, G.; Li, Y. Multi-Modality Medical Image Fusion Using Convolutional Neural Network and Contrast Pyramid. Sensors 2020, 20, 2169.
    6. Li, Y.; Sun, Y.; Huang, X.; Qi, G.; Zheng, M.; Zhu, Z. An Image Fusion Method Based on Sparse Representation and Sum Modified-Laplacian in NSCT Domain. Entropy 2018, 20, 522.
    7. Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2017, 33, 100–112.
    8. Li, H.; Qiu, H.; Yu, Z.; Zhang, Y. Infrared and visible image fusion scheme based on NSCT and low-level visual features. Infrared Phys. Technol. 2016, 76, 174–184.
    9. Zhu, Z.; Zheng, M.; Qi, G.; Wang, D.; Xiang, Y. A Phase Congruency and Local Laplacian Energy Based Multi-Modality Medical Image Fusion Method in NSCT Domain. IEEE Access 2019, 7, 20811–20824.
    10. Li, S.; Yin, H.; Fang, L. Group-Sparse Representation With Dictionary Learning for Medical Image Denoising and Fusion. IEEE Trans. Biomed. Eng. 2012, 59, 3450–3459.
    11. Zhu, Z.; Luo, Y.; Qi, G.; Meng, J.; Li, Y.; Mazur, N. Remote Sensing Image Defogging Networks Based on Dual Self-Attention Boost Residual Octave Convolution. Remote Sens. 2021, 13, 3104.
    12. Zhu, Z.; Luo, Y.; Wei, H.; Li, Y.; Qi, G.; Mazur, N.; Li, Y.; Li, P. Atmospheric Light Estimation Based Remote Sensing Image Dehazing. Remote Sens. 2021, 13, 2432.
    13. Jain, P.; Tyagi, V. LAPB: Locally adaptive patch-based wavelet domain edge-preserving image denoising. Inf. Sci. 2015, 294, 164–181.
    14. Li, H.; Qiu, H.; Yu, Z.; Li, B. Multifocus image fusion via fixed window technique of multiscale images and non-local means filtering. Signal Process. 2017, 138, 71–85.
    15. Li, H.; Liu, X.; Yu, Z.; Zhang, Y. Performance improvement scheme of multifocus image fusion derived by difference images. Signal Process. 2016, 128, 474–493.
    16. Tropp, J.A. Greed is Good: Algorithmic Results for Sparse Approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242.
    17. Donoho, D.L.; Elad, M. Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. Proc. Natl. Acad. Sci. USA 2003, 100, 2197–2202.
    18. Liu, H.; Liu, Y.; Sun, F. Robust Exemplar Extraction Using Structured Sparse Coding. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 1816–1821.
    19. Zhao, Y.Q.; Yang, J. Hyperspectral Image Denoising via Sparse Representation and Low-Rank Constraint. IEEE Trans. Geosci. Remote Sens. 2015, 53, 296–308.
    20. Shekhar, S.; Patel, V.M.; Nasrabadi, N.M.; Chellappa, R. Joint Sparse Representation for Robust Multimodal Biometrics Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 113.
    21. An, L.; Chen, X.; Yang, S.; Bhanu, B. Sparse representation matching for person re-identification. Inf. Sci. 2016, 355–356, 74–89.
    22. Liu, H.; Guo, D.; Sun, F. Object Recognition Using Tactile Measurements: Kernel Sparse Coding Methods. IEEE Trans. Instrum. Meas. 2016, 65, 656–665.
    23. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
    24. Ophir, B.; Lustig, M.; Elad, M. Multi-Scale Dictionary Learning Using Wavelets. IEEE J. Sel. Top. Signal Process. 2011, 5, 1014–1024.
    25. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online dictionary learning for sparse coding. In Proceedings of the International Conference on Machine Learning, ICML 2009, Montreal, QC, Canada, 14–18 June 2009; pp. 689–696.
    26. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486.
    27. Ji, S.; Xue, Y.; Carin, L. Bayesian Compressive Sensing. IEEE Trans. Signal Process. 2008, 56, 2346–2356.
    28. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700.
    29. Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164.
    30. Yang, B.; Li, S. Multifocus Image Fusion and Restoration With Sparse Representation. IEEE Trans. Instrum. Meas. 2010, 59, 884–892.
    31. Yin, M.; Duan, P.; Liu, W.; Liang, X. A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation. Neurocomputing 2017, 226, 182–191.
    32. Yin, H.; Li, Y.; Chai, Y.; Liu, Z.; Zhu, Z. A novel sparse-representation-based multi-focus image fusion approach. Neurocomputing 2016, 216, 216–229.
    33. Zhu, Z.; Yin, H.; Chai, Y.; Li, Y.; Qi, G. A novel multi-modality image fusion method based on image decomposition and sparse representation. Inf. Sci. 2018, 432, 516–529.
    34. Kim, M.; Han, D.K.; Ko, H. Joint patch clustering-based dictionary learning for multimodal image fusion. Inf. Fusion 2015, 27, 198–214.
    35. Zhu, Z.; Qi, G.; Chai, Y.; Li, P. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion. Appl. Sci. 2017, 7, 161.
    36. Wang, K.; Qi, G.; Zhu, Z.; Chai, Y. A Novel Geometric Dictionary Construction Approach for Sparse Representation Based Image Fusion. Entropy 2017, 19, 306.
    37. Liu, Z.; Chai, Y.; Yin, H.; Zhou, J.; Zhu, Z. A novel multi-focus image fusion approach based on image decomposition. Inf. Fusion 2017, 35, 102–116.
    38. Liu, Y.; Wang, Z. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Process. 2015, 9, 347–357.
    39. Li, H.; He, X.; Tao, D.; Tang, Y.; Wang, R. Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning. Pattern Recognit. 2018, 79, 130–146.
    40. Li, X.; Zhou, F.; Tan, H. Joint image fusion and denoising via three-layer decomposition and sparse representation. Knowl.-Based Syst. 2021, 224, 107087.
    41. Mei, J.J.; Dong, Y.; Huang, T.Z. Simultaneous image fusion and denoising by using fractional-order gradient information. J. Comput. Appl. Math. 2019, 351, 212–227.
    42. Su, H.; Jung, C.; Yu, L. Multi-Spectral Fusion and Denoising of Color and Near-Infrared Images Using Multi-Scale Wavelet Analysis. Sensors 2021, 21, 3610.
    43. Wang, W.; Zhang, C.; Ng, M.K. Variational model for simultaneously image denoising and contrast enhancement. Opt. Express 2020, 28, 18751–18777.
    44. Yang, F.; Xu, S.; Li, C. Boosting of Denoising Effect with Fusion Strategy. Appl. Sci. 2020, 10, 3857.
    45. Vese, L.A.; Osher, S. Image Denoising and Decomposition with Total Variation Minimization and Oscillatory Functions. J. Math. Imaging Vis. 2004, 20, 7–18.
    46. Dong, W.; Shi, G.; Ma, Y.; Li, X. Image Restoration via Simultaneous Sparse Coding: Where Structured Sparsity Meets Gaussian Scale Mixture. Int. J. Comput. Vis. 2015, 114, 217–232.
    47. Cvejic, N.; Canagarajah, C.N.; Bull, D.R. Image fusion metric based on mutual information and Tsallis entropy. Electron. Lett. 2006, 42, 626–627.
    48. Liu, Z.; Blasch, E.; Xue, Z.; Zhao, J.; Laganiere, R.; Wu, W. Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 94–109.
    49. Wang, Q.; Shen, Y.; Jin, J. Performance evaluation of image fusion techniques. In Image Fusion; 2008; pp. 469–492.
    50. Petrovic, V.S. Subjective tests for image fusion evaluation and objective metric validation. Inf. Fusion 2007, 8, 208–216.
    51. Liu, Z.; Forsyth, D.S.; Laganière, R. A feature-based metric for the quantitative evaluation of pixel-level image fusion. Comput. Vis. Image Underst. 2008, 109, 56–68.
    52. Wang, Q.; Shen, Y.; Zhang, Y.; Zhang, J.Q. Fast quantitative correlation analysis and information deviation analysis for evaluating the performances of image fusion techniques. IEEE Trans. Instrum. Meas. 2004, 53, 1441–1447.
    53. Qi, G.; Chang, L.; Luo, Y.; Chen, Y.; Zhu, Z.; Wang, S. A Precise Multi-Exposure Image Fusion Method Based on Low-level Features. Sensors 2020, 20, 1597.
    54. Yang, C.; Zhang, J.Q.; Wang, X.R.; Liu, X. A novel similarity based quality metric for image fusion. Inf. Fusion 2008, 9, 156–160.
    55. Chen, Y.; Blum, R.S. A new automated quality assessment algorithm for image fusion. Image Vis. Comput. 2009, 27, 1421–1432.
    56. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444.
    57. Huang, X.; Qi, G.; Wei, H.; Chai, Y.; Sim, J. A Novel Infrared and Visible Image Information Fusion Method Based on Phase Congruency and Image Entropy. Entropy 2019, 21, 1135.
    58. Zuo, W.; Zhang, L.; Song, C.; Zhang, D.; Gao, H. Gradient Histogram Estimation and Preservation for Texture Enhanced Image Denoising. IEEE Trans. Image Process. 2014, 23, 2459–2472.
    Computers 10 00129 g001 550
    Figure 1. The Proposed Fusion Framework.
    Figure 1. The Proposed Fusion Framework.
    Computers 10 00129 g001
    Computers 10 00129 g002 550
    Figure 2. The Proposed Fusion Framework. (a) a noisy image, (b) cartoon components, and (c) texture components.
    Figure 2. The Proposed Fusion Framework. (a) a noisy image, (b) cartoon components, and (c) texture components.
    Computers 10 00129 g002
    Computers 10 00129 g003 550
    Figure 3. Parts of Used Representative source images. (al) are selected source images.
    Figure 3. Parts of Used Representative source images. (al) are selected source images.
    Computers 10 00129 g003
    Computers 10 00129 g004 550
    Figure 4. Simultaneous denoising and fusion results of noisy multi-focus image pairs -1. (ah) are source multi-focus images with additional noiseσ=0,10,20,50 respectively; (it) are simultaneous denoising and fusion results of source images with additional noiseσ=0,10,20,50 by FDESD, FDS and proposed method respectively.
    Figure 4. Simultaneous denoising and fusion results of noisy multi-focus image pairs -1. (ah) are source multi-focus images with additional noiseσ=0,10,20,50 respectively; (it) are simultaneous denoising and fusion results of source images with additional noiseσ=0,10,20,50 by FDESD, FDS and proposed method respectively.
    Computers 10 00129 g004
    Computers 10 00129 g005 550
    Figure 5. Simultaneous denoising and fusion results of noisy multi-focus image pairs -2. (ah) are source multi-focus images with additional noiseσ=0,10,20,50 respectively; (it) are simultaneous denoising and fusion results of source images with additional noiseσ=0,10,20,50 by FDESD, FDS and proposed method respectively.
    Figure 5. Simultaneous denoising and fusion results of noisy multi-focus image pairs -2. (ah) are source multi-focus images with additional noiseσ=0,10,20,50 respectively; (it) are simultaneous denoising and fusion results of source images with additional noiseσ=0,10,20,50 by FDESD, FDS and proposed method respectively.
    Computers 10 00129 g005
    Computers 10 00129 g006 550
    Figure 6. Simultaneous denoising and fusion results of FDESD, FDS and the proposed method for noisy multi-modality medical image pairs -1. (ah) are source multi-modality medical images with additional noiseσ=0,10,20,50 respectively; (it) are simultaneous denoising and fusion results of source images with additional noiseσ=0,10,20,50 by FDESD, FDS and proposed method respectively.
    Figure 6. Simultaneous denoising and fusion results of FDESD, FDS and the proposed method for noisy multi-modality medical image pairs -1. (ah) are source multi-modality medical images with additional noiseσ=0,10,20,50 respectively; (it) are simultaneous denoising and fusion results of source images with additional noiseσ=0,10,20,50 by FDESD, FDS and proposed method respectively.
    Computers 10 00129 g006
    Computers 10 00129 g007 550
    Figure 7. Simultaneous denoising and fusion results of FDESD, FDS and the proposed method for noisy multi-modality medical image pairs -2. (ah) are source multi-modality medical images with additional noiseσ=0,10,20,50 respectively; (it) are simultaneous denoising and fusion results of source images with additional noiseσ=0,10,20,50 by FDESD, FDS and proposed method respectively.
    Figure 7. Simultaneous denoising and fusion results of FDESD, FDS and the proposed method for noisy multi-modality medical image pairs -2. (ah) are source multi-modality medical images with additional noiseσ=0,10,20,50 respectively; (it) are simultaneous denoising and fusion results of source images with additional noiseσ=0,10,20,50 by FDESD, FDS and proposed method respectively.
    Computers 10 00129 g007
    Computers 10 00129 g008 550
Figure 8. Simultaneous denoising and fusion results of FDESD, FDS, and the proposed method for noisy infrared-visible image pairs. (a–h) Source infrared-visible images with additive noise σ = 0, 10, 20, 50, respectively; (i–t) simultaneous denoising and fusion results of the source images with additive noise σ = 0, 10, 20, 50 obtained by FDESD, FDS, and the proposed method, respectively.
Figure 9. Simultaneous denoising and fusion results of FDESD, FDS, and the proposed method for noisy infrared-visible image pair 2. (a–h) Source infrared-visible images with additive noise σ = 0, 10, 20, 50, respectively; (i–t) simultaneous denoising and fusion results of the source images with additive noise σ = 0, 10, 20, 50 obtained by FDESD, FDS, and the proposed method, respectively.
Figure 10. Comparison of separate and simultaneous image denoising and fusion results. (a–d) and (m–p) Denoising and fusion results of a multi-focus image with additive noise σ = 0, 10, 20, 50 by SDF and the proposed method, respectively. (e–h) and (q–t) Denoising and fusion results of a multi-modality medical image with additive noise σ = 0, 10, 20, 50 by SDF and the proposed method, respectively. (i–l) and (u–x) Denoising and fusion results of an infrared-visible image with additive noise σ = 0, 10, 20, 50 by SDF and the proposed method, respectively.
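The noisy test inputs shown in the figures above are source images corrupted by additive zero-mean Gaussian noise at σ = 0, 10, 20, 50. A minimal sketch of how such inputs can be generated (clipping back to the 8-bit range is an assumption; the paper does not state its exact protocol):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma):
    """Corrupt an 8-bit image with additive zero-mean Gaussian noise of
    standard deviation sigma, clipping the result to the valid [0, 255] range."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# One noisy copy per noise level used in the experiments (placeholder source image).
img = np.full((64, 64), 128, dtype=np.uint8)
noisy_set = {sigma: add_gaussian_noise(img, sigma) for sigma in (0, 10, 20, 50)}
```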
Table 1. Objective evaluations of simultaneous multi-focus image denoising and fusion (pair 1).

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.6685 | 0.8160 | 0.7287 | 0.7539 | 1.7420 | 0.8963 | 0.6444 | 0.6257 |
| 0 | FDS | 0.7529 | 0.8190 | 0.7479 | 0.8387 | 1.9337 | 0.9248 | 0.7228 | 0.6770 |
| 0 | proposed | 0.9163 | 0.8251 | 0.7305 | 0.8499 | 2.3450 | 0.9657 | 0.7569 | 0.6962 |
| 10 | FDESD | 0.6056 | 0.8106 | 0.5614 | 0.5689 | 1.5623 | 0.7404 | 0.6132 | 0.6009 |
| 10 | FDS | 0.6446 | 0.8152 | 0.5745 | 0.6821 | 1.6617 | 0.7765 | 0.6745 | 0.6346 |
| 10 | proposed | 0.6527 | 0.8156 | 0.5857 | 0.6992 | 1.8871 | 0.7825 | 0.6804 | 0.6450 |
| 20 | FDESD | 0.5563 | 0.8122 | 0.4459 | 0.4078 | 1.4352 | 0.6336 | 0.5728 | 0.5549 |
| 20 | FDS | 0.6053 | 0.8140 | 0.4787 | 0.5188 | 1.5570 | 0.6713 | 0.6245 | 0.5578 |
| 20 | proposed | 0.6102 | 0.8142 | 0.4873 | 0.5538 | 1.5672 | 0.6933 | 0.6371 | 0.5777 |
| 50 | FDESD | 0.4511 | 0.8100 | 0.2633 | 0.1726 | 1.1615 | 0.4413 | 0.4928 | 0.4257 |
| 50 | FDS | 0.5369 | 0.8119 | 0.2619 | 0.2352 | 1.3656 | 0.4568 | 0.4696 | 0.4268 |
| 50 | proposed | 0.5472 | 0.8122 | 0.3014 | 0.3055 | 1.3852 | 0.5030 | 0.5276 | 0.4340 |
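The MI column in these tables is mutual information between the fused result and the source images (fusion-quality MI is commonly reported as the sum of MI(fused, source A) and MI(fused, source B), so higher is better). A minimal histogram-based estimator for a single image pair; the bin count and implementation details here are assumptions, not the paper's exact protocol:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Estimate mutual information (in bits) between two images
    from their normalized joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

A small bin count keeps the joint-histogram estimate stable on modest image sizes; very fine binning (e.g. 256 bins) overestimates MI on sparse histograms.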
Table 2. Objective evaluations of simultaneous multi-focus image denoising and fusion (pair 2).

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.6752 | 0.8098 | 0.7279 | 0.7692 | 1.7583 | 0.9034 | 0.6579 | 0.6372 |
| 0 | FDS | 0.7682 | 0.8197 | 0.7392 | 0.8393 | 1.9872 | 0.9244 | 0.7382 | 0.6804 |
| 0 | proposed | 0.9192 | 0.8317 | 0.7408 | 0.8503 | 2.3581 | 0.9694 | 0.7593 | 0.6985 |
| 10 | FDESD | 0.6193 | 0.8132 | 0.5687 | 0.5793 | 1.5736 | 0.7534 | 0.6328 | 0.6196 |
| 10 | FDS | 0.6487 | 0.8157 | 0.5783 | 0.6894 | 1.6689 | 0.7793 | 0.6784 | 0.6372 |
| 10 | proposed | 0.6576 | 0.8186 | 0.5873 | 0.6998 | 1.8903 | 0.7896 | 0.6864 | 0.6497 |
| 20 | FDESD | 0.5604 | 0.8146 | 0.4494 | 0.4184 | 1.4423 | 0.6406 | 0.5804 | 0.5569 |
| 20 | FDS | 0.6085 | 0.8195 | 0.4808 | 0.5268 | 1.5596 | 0.6792 | 0.6294 | 0.5608 |
| 20 | proposed | 0.6181 | 0.8237 | 0.4906 | 0.5587 | 1.5693 | 0.6987 | 0.6395 | 0.5788 |
| 50 | FDESD | 0.4595 | 0.8187 | 0.2693 | 0.1774 | 1.1709 | 0.4473 | 0.4989 | 0.4267 |
| 50 | FDS | 0.5409 | 0.8121 | 0.2746 | 0.2429 | 1.3726 | 0.4589 | 0.4785 | 0.4326 |
| 50 | proposed | 0.5503 | 0.8173 | 0.3068 | 0.3094 | 1.3873 | 0.5096 | 0.5316 | 0.4389 |
Table 3. Average objective evaluations of simultaneous multi-focus image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.6749 | 0.8096 | 0.7277 | 0.7689 | 1.7580 | 0.9031 | 0.6576 | 0.6369 |
| 0 | FDS | 0.7678 | 0.8194 | 0.7388 | 0.8392 | 1.9867 | 0.9242 | 0.7380 | 0.6801 |
| 0 | proposed | 0.9187 | 0.8314 | 0.7404 | 0.8598 | 2.3579 | 0.9690 | 0.7590 | 0.6981 |
| 10 | FDESD | 0.6189 | 0.8127 | 0.5684 | 0.5789 | 1.5732 | 0.7531 | 0.6325 | 0.6192 |
| 10 | FDS | 0.6482 | 0.8153 | 0.5780 | 0.6889 | 1.6685 | 0.7790 | 0.6781 | 0.6369 |
| 10 | proposed | 0.6572 | 0.8183 | 0.5870 | 0.6994 | 1.8900 | 0.7893 | 0.6861 | 0.6493 |
| 20 | FDESD | 0.5601 | 0.8142 | 0.4490 | 0.4181 | 1.4421 | 0.6402 | 0.5899 | 0.5564 |
| 20 | FDS | 0.6081 | 0.8192 | 0.4803 | 0.5264 | 1.5592 | 0.6788 | 0.6291 | 0.5602 |
| 20 | proposed | 0.6177 | 0.8233 | 0.4902 | 0.5583 | 1.5689 | 0.6983 | 0.6391 | 0.5782 |
| 50 | FDESD | 0.4591 | 0.8182 | 0.2690 | 0.1772 | 1.1703 | 0.4470 | 0.4986 | 0.4264 |
| 50 | FDS | 0.5407 | 0.8118 | 0.2742 | 0.2426 | 1.3723 | 0.4584 | 0.4781 | 0.4323 |
| 50 | proposed | 0.5500 | 0.8171 | 0.3065 | 0.3092 | 1.3869 | 0.5094 | 0.5313 | 0.4386 |
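Averaged tables such as Table 3 report the element-wise mean of the per-pair score tables over the test set. A sketch of that averaging step (the scores below are an illustrative subset taken from Tables 1 and 2 only; the paper's averages cover its full test set, so these values will not reproduce Table 3 exactly):

```python
import numpy as np

# Rows: FDESD, FDS, proposed; columns: Q_TE, Q_NCIE (σ = 0 subset for illustration).
pair1 = np.array([[0.6685, 0.8160],
                  [0.7529, 0.8190],
                  [0.9163, 0.8251]])
pair2 = np.array([[0.6752, 0.8098],
                  [0.7682, 0.8197],
                  [0.9192, 0.8317]])

# Element-wise mean across image pairs yields one averaged score table.
avg = np.mean(np.stack([pair1, pair2]), axis=0)
```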
Table 4. Objective evaluations of simultaneous multi-modality medical image denoising and fusion (pair 1).

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.5571 | 0.8071 | 0.4010 | 0.4271 | 0.9989 | 0.4717 | 0.4252 | 0.3337 |
| 0 | FDS | 0.5677 | 0.8072 | 0.4837 | 0.5333 | 1.0068 | 0.5626 | 0.4686 | 0.3601 |
| 0 | proposed | 0.6537 | 0.8099 | 0.6583 | 0.5371 | 1.2223 | 0.7150 | 0.4698 | 0.4362 |
| 10 | FDESD | 0.4870 | 0.8064 | 0.3067 | 0.2584 | 0.9256 | 0.3998 | 0.3338 | 0.3075 |
| 10 | FDS | 0.5367 | 0.8069 | 0.3355 | 0.3852 | 0.9758 | 0.4635 | 0.3461 | 0.3167 |
| 10 | proposed | 0.5600 | 0.8077 | 0.4082 | 0.3938 | 1.0551 | 0.5408 | 0.3778 | 0.3729 |
| 20 | FDESD | 0.4331 | 0.8057 | 0.2661 | 0.1702 | 0.8386 | 0.3546 | 0.3349 | 0.2880 |
| 20 | FDS | 0.5099 | 0.8066 | 0.2815 | 0.2908 | 0.9459 | 0.3891 | 0.3245 | 0.2908 |
| 20 | proposed | 0.4871 | 0.8067 | 0.3221 | 0.2989 | 0.9477 | 0.4269 | 0.3594 | 0.3904 |
| 50 | FDESD | 0.2975 | 0.8041 | 0.1517 | 0.0663 | 0.5883 | 0.2200 | 0.2901 | 0.2360 |
| 50 | FDS | 0.4542 | 0.8058 | 0.1852 | 0.1548 | 0.8593 | 0.2744 | 0.2338 | 0.2402 |
| 50 | proposed | 0.4875 | 0.8064 | 0.2554 | 0.1903 | 0.9326 | 0.3472 | 0.2994 | 0.3013 |
Table 5. Objective evaluations of simultaneous multi-modality medical image denoising and fusion (pair 2).

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.5582 | 0.8077 | 0.4047 | 0.4306 | 0.9997 | 0.4773 | 0.4302 | 0.3371 |
| 0 | FDS | 0.5691 | 0.8085 | 0.4869 | 0.5361 | 1.0103 | 0.5689 | 0.4690 | 0.3634 |
| 0 | proposed | 0.6575 | 0.8112 | 0.6599 | 0.5389 | 1.2286 | 0.7183 | 0.4708 | 0.4397 |
| 10 | FDESD | 0.4887 | 0.8077 | 0.3093 | 0.2604 | 0.9296 | 0.4063 | 0.3359 | 0.3096 |
| 10 | FDS | 0.5384 | 0.8098 | 0.3392 | 0.3887 | 0.9791 | 0.4663 | 0.3486 | 0.3191 |
| 10 | proposed | 0.5641 | 0.8103 | 0.4094 | 0.3974 | 1.0588 | 0.5437 | 0.3792 | 0.3764 |
| 20 | FDESD | 0.4362 | 0.8079 | 0.2686 | 0.1747 | 0.8406 | 0.3583 | 0.3385 | 0.2901 |
| 20 | FDS | 0.5102 | 0.8081 | 0.2855 | 0.2934 | 0.9481 | 0.3906 | 0.3273 | 0.2975 |
| 20 | proposed | 0.5132 | 0.8091 | 0.3234 | 0.2996 | 0.9497 | 0.4284 | 0.3606 | 0.3937 |
| 50 | FDESD | 0.2994 | 0.8080 | 0.1542 | 0.0676 | 0.5897 | 0.2221 | 0.2932 | 0.2393 |
| 50 | FDS | 0.4576 | 0.8087 | 0.1891 | 0.1586 | 0.8612 | 0.2786 | 0.2367 | 0.2435 |
| 50 | proposed | 0.4902 | 0.8093 | 0.2577 | 0.1938 | 0.9361 | 0.3496 | 0.3008 | 0.3037 |
Table 6. Average objective evaluations of simultaneous multi-modality medical image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.5580 | 0.8075 | 0.4043 | 0.4302 | 0.9994 | 0.4771 | 0.4300 | 0.3369 |
| 0 | FDS | 0.5688 | 0.8082 | 0.4866 | 0.5357 | 1.0101 | 0.5685 | 0.4686 | 0.3631 |
| 0 | proposed | 0.6572 | 0.8108 | 0.6595 | 0.5386 | 1.2282 | 0.7180 | 0.4702 | 0.4393 |
| 10 | FDESD | 0.4883 | 0.8074 | 0.3090 | 0.2601 | 0.9293 | 0.4059 | 0.3355 | 0.3092 |
| 10 | FDS | 0.5381 | 0.8095 | 0.3388 | 0.3882 | 0.9787 | 0.4660 | 0.3482 | 0.3187 |
| 10 | proposed | 0.5635 | 0.8100 | 0.4091 | 0.3971 | 1.0585 | 0.5433 | 0.3786 | 0.3761 |
| 20 | FDESD | 0.4359 | 0.8076 | 0.2684 | 0.1744 | 0.8404 | 0.3581 | 0.3382 | 0.2987 |
| 20 | FDS | 0.5198 | 0.8077 | 0.2851 | 0.2932 | 0.9476 | 0.3902 | 0.3270 | 0.2971 |
| 20 | proposed | 0.5129 | 0.8087 | 0.3231 | 0.2992 | 0.9493 | 0.4281 | 0.3602 | 0.3933 |
| 50 | FDESD | 0.2991 | 0.8078 | 0.1538 | 0.0672 | 0.5893 | 0.2217 | 0.2928 | 0.2390 |
| 50 | FDS | 0.4572 | 0.8083 | 0.1887 | 0.1582 | 0.8607 | 0.2782 | 0.2362 | 0.2431 |
| 50 | proposed | 0.4897 | 0.8090 | 0.2572 | 0.1934 | 0.9358 | 0.3492 | 0.3005 | 0.3033 |
Table 7. Objective evaluations of simultaneous infrared-visible image denoising and fusion (pair 1).

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.2891 | 0.8040 | 0.5818 | 0.3201 | 0.6301 | 0.6924 | 0.4884 | 0.2814 |
| 0 | FDS | 0.2989 | 0.8043 | 0.6229 | 0.4378 | 0.6638 | 0.7774 | 0.5425 | 0.3122 |
| 0 | proposed | 0.3582 | 0.8059 | 0.6543 | 0.5000 | 0.8173 | 0.8785 | 0.5434 | 0.3975 |
| 10 | FDESD | 0.2949 | 0.8041 | 0.3473 | 0.1734 | 0.6460 | 0.4862 | 0.4878 | 0.2870 |
| 10 | FDS | 0.2948 | 0.8041 | 0.4259 | 0.2773 | 0.6531 | 0.5748 | 0.5085 | 0.3011 |
| 10 | proposed | 0.3185 | 0.8048 | 0.4514 | 0.3049 | 0.7232 | 0.6580 | 0.5111 | 0.3727 |
| 20 | FDESD | 0.2808 | 0.8031 | 0.3204 | 0.1188 | 0.6253 | 0.4873 | 0.4561 | 0.2835 |
| 20 | FDS | 0.2928 | 0.8031 | 0.3314 | 0.2081 | 0.6455 | 0.4660 | 0.4624 | 0.2706 |
| 20 | proposed | 0.3217 | 0.8050 | 0.3862 | 0.2355 | 0.7309 | 0.5704 | 0.4641 | 0.3429 |
| 50 | FDESD | 0.2114 | 0.8031 | 0.2259 | 0.0534 | 0.4697 | 0.3914 | 0.4380 | 0.2038 |
| 50 | FDS | 0.2606 | 0.8035 | 0.1894 | 0.1079 | 0.5614 | 0.3176 | 0.3946 | 0.2065 |
| 50 | proposed | 0.2933 | 0.8041 | 0.2378 | 0.1324 | 0.6522 | 0.3856 | 0.3771 | 0.2356 |
Table 8. Objective evaluations of simultaneous infrared-visible image denoising and fusion (pair 2).

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.2906 | 0.8043 | 0.5843 | 0.3236 | 0.6341 | 0.6952 | 0.4897 | 0.2846 |
| 0 | FDS | 0.3006 | 0.8049 | 0.6251 | 0.4398 | 0.6658 | 0.7793 | 0.5449 | 0.3141 |
| 0 | proposed | 0.3593 | 0.8063 | 0.6561 | 0.5011 | 0.8189 | 0.8793 | 0.5452 | 0.3996 |
| 10 | FDESD | 0.2961 | 0.8051 | 0.3488 | 0.1749 | 0.6473 | 0.4881 | 0.4890 | 0.2887 |
| 10 | FDS | 0.2964 | 0.8059 | 0.4268 | 0.2782 | 0.6562 | 0.5767 | 0.5097 | 0.3038 |
| 10 | proposed | 0.3194 | 0.8062 | 0.4539 | 0.3077 | 0.7261 | 0.6595 | 0.5133 | 0.3752 |
| 20 | FDESD | 0.2821 | 0.8052 | 0.3228 | 0.1201 | 0.6275 | 0.4888 | 0.4579 | 0.2851 |
| 20 | FDS | 0.2942 | 0.8050 | 0.3336 | 0.2101 | 0.6473 | 0.4687 | 0.4653 | 0.2728 |
| 20 | proposed | 0.3234 | 0.8068 | 0.3879 | 0.2371 | 0.7327 | 0.5721 | 0.4663 | 0.3447 |
| 50 | FDESD | 0.2135 | 0.8049 | 0.2280 | 0.0553 | 0.4707 | 0.3946 | 0.4399 | 0.2060 |
| 50 | FDS | 0.2631 | 0.8051 | 0.1910 | 0.1097 | 0.5633 | 0.3196 | 0.3971 | 0.2082 |
| 50 | proposed | 0.2946 | 0.8056 | 0.2391 | 0.1340 | 0.6537 | 0.3872 | 0.3787 | 0.2369 |
Table 9. Average objective evaluations of simultaneous infrared-visible image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | FDESD | 0.2903 | 0.8041 | 0.5840 | 0.3234 | 0.6340 | 0.6959 | 0.4892 | 0.2842 |
| 0 | FDS | 0.3003 | 0.8045 | 0.6247 | 0.4395 | 0.6654 | 0.7790 | 0.5445 | 0.3137 |
| 0 | proposed | 0.3590 | 0.8059 | 0.6558 | 0.5008 | 0.8185 | 0.8790 | 0.5448 | 0.3991 |
| 10 | FDESD | 0.2957 | 0.8048 | 0.3483 | 0.1745 | 0.6471 | 0.4877 | 0.4886 | 0.2884 |
| 10 | FDS | 0.2961 | 0.8054 | 0.4266 | 0.2777 | 0.6557 | 0.5763 | 0.5092 | 0.3034 |
| 10 | proposed | 0.3191 | 0.8057 | 0.4535 | 0.3074 | 0.7256 | 0.6591 | 0.5130 | 0.3748 |
| 20 | FDESD | 0.2818 | 0.8047 | 0.3223 | 0.1197 | 0.6272 | 0.4885 | 0.4577 | 0.2848 |
| 20 | FDS | 0.2940 | 0.8046 | 0.3332 | 0.2096 | 0.6470 | 0.4681 | 0.4650 | 0.2722 |
| 20 | proposed | 0.3231 | 0.8062 | 0.3877 | 0.2366 | 0.7323 | 0.5717 | 0.4659 | 0.3444 |
| 50 | FDESD | 0.2131 | 0.8043 | 0.2275 | 0.0548 | 0.4702 | 0.3943 | 0.4395 | 0.2057 |
| 50 | FDS | 0.2627 | 0.8047 | 0.1905 | 0.1093 | 0.5630 | 0.3192 | 0.3966 | 0.2078 |
| 50 | proposed | 0.3942 | 0.8053 | 0.2386 | 0.1335 | 0.6533 | 0.3868 | 0.3785 | 0.235 |
Table 10. The Average Computational Efficiency Comparison of Noisy Image Fusion.

| Method | 256×256 | 240×320 |
|--------|---------|---------|
| FDESD | 36.82 s | 46.53 s |
| FDS | 20.71 s | 24.56 s |
| Proposed | 21.55 s | 26.71 s |
Table 11. Objective evaluations of separate and simultaneous noisy multi-focus image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | SDF | 0.7158 | 0.8189 | 0.6975 | 0.8337 | 1.9508 | 0.8560 | 0.7154 | 0.6665 |
| 0 | proposed | 0.9163 | 0.8251 | 0.7305 | 0.8499 | 2.3450 | 0.9657 | 0.7569 | 0.6962 |
| 10 | SDF | 0.6650 | 0.8171 | 0.4871 | 0.6259 | 1.7849 | 0.7047 | 0.6647 | 0.6326 |
| 10 | proposed | 0.6527 | 0.8156 | 0.5857 | 0.6992 | 1.8871 | 0.7825 | 0.6804 | 0.6450 |
| 20 | SDF | 0.6045 | 0.8134 | 0.3771 | 0.5575 | 1.6920 | 0.6060 | 0.6361 | 0.5696 |
| 20 | proposed | 0.6102 | 0.8142 | 0.4873 | 0.5538 | 1.5672 | 0.6933 | 0.6371 | 0.5777 |
| 50 | SDF | 0.5183 | 0.8150 | 0.2662 | 0.3792 | 1.2276 | 0.4690 | 0.4570 | 0.4154 |
| 50 | proposed | 0.5472 | 0.8122 | 0.3014 | 0.3055 | 1.3852 | 0.5030 | 0.5276 | 0.4340 |
Table 12. Objective evaluations of separate and simultaneous noisy multi-modality medical image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | SDF | 0.6438 | 0.8092 | 0.4780 | 0.4423 | 1.2116 | 0.7236 | 0.5301 | 0.4440 |
| 0 | proposed | 0.6537 | 0.8099 | 0.6583 | 0.5371 | 1.2223 | 0.7150 | 0.4698 | 0.4362 |
| 10 | SDF | 0.4505 | 0.8055 | 0.2155 | 0.2314 | 0.7749 | 0.3289 | 0.4240 | 0.2797 |
| 10 | proposed | 0.5600 | 0.8077 | 0.4082 | 0.3938 | 1.0551 | 0.5408 | 0.3778 | 0.3729 |
| 20 | SDF | 0.4530 | 0.8055 | 0.1845 | 0.1833 | 0.7530 | 0.3245 | 0.3710 | 0.2725 |
| 20 | proposed | 0.4871 | 0.8067 | 0.3221 | 0.2689 | 0.9477 | 0.4269 | 0.3594 | 0.3904 |
| 50 | SDF | 0.4443 | 0.8055 | 0.1348 | 0.1240 | 0.7440 | 0.2724 | 0.3188 | 0.2510 |
| 50 | proposed | 0.4875 | 0.8064 | 0.2554 | 0.1903 | 0.9326 | 0.3472 | 0.2994 | 0.3013 |
Table 13. Objective evaluations of separate and simultaneous noisy infrared-visible image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | SDF | 0.3867 | 0.8070 | 0.6435 | 0.5897 | 0.7985 | 0.7908 | 0.4808 | 0.3042 |
| 0 | proposed | 0.3582 | 0.8059 | 0.6543 | 0.5000 | 0.8173 | 0.8785 | 0.5434 | 0.3975 |
| 10 | SDF | 0.3383 | 0.8064 | 0.4510 | 0.2406 | 0.7029 | 0.6174 | 0.5279 | 0.2820 |
| 10 | proposed | 0.3185 | 0.8048 | 0.4514 | 0.3049 | 0.7232 | 0.6580 | 0.5111 | 0.3727 |
| 20 | SDF | 0.3095 | 0.8051 | 0.3602 | 0.1283 | 0.6554 | 0.5401 | 0.4515 | 0.2540 |
| 20 | proposed | 0.3217 | 0.8050 | 0.3862 | 0.2355 | 0.7309 | 0.5704 | 0.4641 | 0.3429 |
| 50 | SDF | 0.2744 | 0.8054 | 0.2235 | 0.0909 | 0.6228 | 0.3038 | 0.3233 | 0.1893 |
| 50 | proposed | 0.2933 | 0.8041 | 0.2378 | 0.1324 | 0.6522 | 0.3856 | 0.3771 | 0.2356 |
Table 14. Average objective evaluations of separate and simultaneous noisy multi-focus image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | SDF | 0.7165 | 0.8194 | 0.6979 | 0.8343 | 1.9513 | 0.8564 | 0.7159 | 0.6668 |
| 0 | proposed | 0.9166 | 0.8255 | 0.7308 | 0.8503 | 2.3454 | 0.9662 | 0.7574 | 0.6968 |
| 10 | SDF | 0.6655 | 0.8174 | 0.4876 | 0.6265 | 1.7855 | 0.7050 | 0.6653 | 0.6330 |
| 10 | proposed | 0.6532 | 0.8160 | 0.5863 | 0.6999 | 1.8877 | 0.7829 | 0.6808 | 0.6455 |
| 20 | SDF | 0.6048 | 0.8138 | 0.3776 | 0.5579 | 1.6924 | 0.6063 | 0.6367 | 0.5699 |
| 20 | proposed | 0.6106 | 0.8149 | 0.4878 | 0.5542 | 1.5677 | 0.6938 | 0.6375 | 0.5782 |
| 50 | SDF | 0.5188 | 0.8155 | 0.2668 | 0.3798 | 1.2281 | 0.4696 | 0.4575 | 0.4159 |
| 50 | proposed | 0.5477 | 0.8125 | 0.3018 | 0.3059 | 1.3857 | 0.5035 | 0.5280 | 0.4346 |
Table 15. Average objective evaluations of separate and simultaneous noisy multi-modality medical image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | SDF | 0.6443 | 0.8098 | 0.4787 | 0.4428 | 1.2120 | 0.7242 | 0.5307 | 0.4448 |
| 0 | proposed | 0.6546 | 0.8105 | 0.6589 | 0.5378 | 1.2227 | 0.7155 | 0.4704 | 0.4370 |
| 10 | SDF | 0.4511 | 0.8062 | 0.2164 | 0.2319 | 0.7753 | 0.3294 | 0.4248 | 0.2804 |
| 10 | proposed | 0.5606 | 0.8083 | 0.4087 | 0.3945 | 1.0558 | 0.5415 | 0.3784 | 0.3735 |
| 20 | SDF | 0.4534 | 0.8061 | 0.1852 | 0.1838 | 0.7536 | 0.3249 | 0.3716 | 0.2731 |
| 20 | proposed | 0.4876 | 0.8072 | 0.3226 | 0.2696 | 0.9484 | 0.4275 | 0.3599 | 0.3911 |
| 50 | SDF | 0.4448 | 0.8062 | 0.1354 | 0.1248 | 0.7447 | 0.2729 | 0.3193 | 0.2517 |
| 50 | proposed | 0.4879 | 0.8068 | 0.2560 | 0.1908 | 0.9333 | 0.3478 | 0.3001 | 0.3019 |
Table 16. Average objective evaluations of separate and simultaneous noisy infrared-visible image denoising and fusion.

| σ | Method | Q_TE | Q_NCIE | Q_AB/F | Q_P | MI | Q_Y | Q_CB | VIFF |
|---|--------|------|--------|--------|-----|----|-----|------|------|
| 0 | SDF | 0.3873 | 0.8076 | 0.6440 | 0.5905 | 0.7992 | 0.7914 | 0.4813 | 0.3048 |
| 0 | proposed | 0.3590 | 0.8064 | 0.6547 | 0.5006 | 0.8176 | 0.8791 | 0.5439 | 0.3978 |
| 10 | SDF | 0.3389 | 0.8067 | 0.4515 | 0.2412 | 0.7036 | 0.6179 | 0.5286 | 0.2827 |
| 10 | proposed | 0.3189 | 0.8054 | 0.4519 | 0.3056 | 0.7239 | 0.6586 | 0.5119 | 0.3735 |
| 20 | SDF | 0.3102 | 0.8058 | 0.3608 | 0.1289 | 0.6562 | 0.5409 | 0.4522 | 0.2546 |
| 20 | proposed | 0.3223 | 0.8057 | 0.3868 | 0.2362 | 0.7317 | 0.5710 | 0.4648 | 0.3434 |
| 50 | SDF | 0.2750 | 0.8059 | 0.2241 | 0.0916 | 0.6234 | 0.3045 | 0.3238 | 0.1899 |
| 50 | proposed | 0.2938 | 0.8046 | 0.2383 | 0.1328 | 0.6529 | 0.3860 | 0.3778 | 0.2363 |
Table 17. Comparison of Average Computational Efficiency.

| Resolution | SDF: Denoising | SDF: Fusion | SDF: Total | Proposed: Total (Simultaneous Denoising and Fusion) |
|------------|----------------|-------------|------------|------------------------------------------------------|
| 256×256 | 41.77 s | 28.61 s | 70.38 s | 21.73 s |
| 320×240 | 48.72 s | 34.69 s | 83.41 s | 26.83 s |
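Runtimes like those in Tables 10 and 17 are averages over the test images. A minimal wall-clock harness for collecting such averages (function and variable names here are hypothetical, not from the paper):

```python
import time

def average_runtime(fn, inputs, repeats=3):
    """Return the mean wall-clock time in seconds of fn over a set of inputs,
    averaging `repeats` runs per input to reduce timer jitter."""
    per_input = []
    for x in inputs:
        t0 = time.perf_counter()
        for _ in range(repeats):
            fn(x)
        per_input.append((time.perf_counter() - t0) / repeats)
    return sum(per_input) / len(per_input)

# Example: time a trivial stand-in for a fusion routine.
mean_t = average_runtime(lambda n: sum(range(n)), [1000, 2000, 3000])
```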
    Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

    © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Qi, G.; Hu, G.; Mazur, N.; Liang, H.; Haner, M. A Novel Multi-Modality Image Simultaneous Denoising and Fusion Method Based on Sparse Representation. Computers 2021, 10, 129. https://doi.org/10.3390/computers10100129

    Computers, EISSN 2073-431X, Published by MDPI