1. Muthukrishnan V, Jaipurkar S, Damodaran N. Continuum topological derivative - a novel application tool for denoising CT and MRI medical images. BMC Med Imaging 2024; 24:182. [PMID: 39048968] [PMCID: PMC11267933] [DOI: 10.1186/s12880-024-01341-1]
Abstract
BACKGROUND CT and MRI are important diagnostic modalities for exploring the anatomical and tissue properties, respectively, of the human body. Advances such as HRCT, FLAIR and PROPELLER help diagnose diseases more accurately, but they still leave room for improvement because of inherent and instrument noise. In advanced CT and MRI acquisitions, quantum mottle and Gaussian and Rayleigh noise, respectively, are still present. This paper addresses the denoising problem with a continuum topological derivative technique and establishes its reliability through a comparative study against traditional filtering methods such as spatial, adaptive, frequency and transform techniques, using visual inspection and performance metrics. METHODS This study focuses on identifying a novel denoising method by testing different filters on HRCT (High-Resolution Computed Tomography) and MR (Magnetic Resonance) images. The images were acquired from the Image Art Radiological Scan Centre using SOMATOM CT and SIGNA Explorer (1.5 Tesla) machines. To benchmark the proposed CTD (Continuum Topological Derivative) method, various filters were tested on both HRCT and MR images: Gaussian (2D convolution operator), Wiener (deconvolution operator), Laplacian and Laplacian diagonal (2nd-order partial differential operators), Average, Minimum, and Median (ordinary spatial operators), PMAD (anisotropic diffusion operator), Kuan (statistical operator), Frost (exponential convolution operator), and Haar wavelet (time-frequency operator). The aim was to evaluate the effectiveness of the CTD method in removing noise compared with the other filters, with performance metrics used to assess how thoroughly noise was removed. The primary outcome was the removal of quantum mottle noise in HRCT images; the secondary outcome was the removal of Gaussian (foreground) and Rayleigh (background) noise in MR images, with the dynamics of noise removal observed through the performance-metric values. RESULTS Based on the calculated performance metrics, the CTD method successfully removed quantum mottle noise in HRCT images and Gaussian as well as Rayleigh noise in MRI. This is evidenced by the PSNR (Peak Signal-to-Noise Ratio), which consistently ranged from 50 to 65 for all tested images. In addition, the CTD method produced remarkably low residuals, typically on the order of 1e-09, across all images, and its performance metrics consistently outperformed those of the other tested methods. Consequently, the results have significant implications for the quality, structural similarity, and contrast of HRCT and MR images, enabling clinicians to obtain finer details for diagnostic purposes.
CONCLUSION The continuum topological derivative algorithm is found to be effective in removing prominent noise in both CT and MRI images and can serve as a potential tool for recognizing anatomical details in diseased and normal cases. The results obtained from this work are highly encouraging and offer great promise for obtaining accurate diagnostic information in critical structures such as the thoracic cavity carina, the brain SPI globe, lens and 4th ventricle, the middle cerebral artery, and neoplastic lesions. These findings lay the foundation for implementing the proposed CTD technique in routine clinical diagnosis.
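To make the PSNR figures quoted above concrete, the short sketch below (an illustrative example only, not the authors' CTD implementation) applies two of the baseline filters named in the abstract to a synthetic noisy image and reports PSNR; the test image and noise level are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(256, 256)).astype(np.float64)  # stand-in for an HRCT slice
noisy = clean + rng.normal(0, 15, clean.shape)                    # additive Gaussian noise (assumed level)

for name, denoised in {
    "Gaussian filter": gaussian_filter(noisy, sigma=1.5),
    "Median filter": median_filter(noisy, size=3),
}.items():
    print(f"{name}: PSNR = {psnr(clean, denoised):.2f} dB")
```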
Affiliation(s)
- Viswanath Muthukrishnan
- Central Instrumentation & Service Laboratory, Guindy Campus, University of Madras, Chennai, India
- Nedumaran Damodaran
- Central Instrumentation & Service Laboratory, Guindy Campus, University of Madras, Chennai, India
2. Wang W, He J, Liu H, Yuan W. MDC-RHT: Multi-Modal Medical Image Fusion via Multi-Dimensional Dynamic Convolution and Residual Hybrid Transformer. Sensors (Basel) 2024; 24:4056. [PMID: 39000834] [PMCID: PMC11244347] [DOI: 10.3390/s24134056]
Abstract
The fusion of multi-modal medical images has great significance for comprehensive diagnosis and treatment. However, the large differences between the various modalities of medical images make multi-modal medical image fusion a great challenge. This paper proposes a novel multi-scale fusion network based on multi-dimensional dynamic convolution and residual hybrid transformer, which has better capability for feature extraction and context modeling and improves the fusion performance. Specifically, the proposed network exploits multi-dimensional dynamic convolution that introduces four attention mechanisms corresponding to four different dimensions of the convolutional kernel to extract more detailed information. Meanwhile, a residual hybrid transformer is designed, which activates more pixels to participate in the fusion process by channel attention, window attention, and overlapping cross attention, thereby strengthening the long-range dependence between different modes and enhancing the connection of global context information. A loss function, including perceptual loss and structural similarity loss, is designed, where the former enhances the visual reality and perceptual details of the fused image, and the latter enables the model to learn structural textures. The whole network adopts a multi-scale architecture and uses an unsupervised end-to-end method to realize multi-modal image fusion. Finally, our method is tested qualitatively and quantitatively on mainstream datasets. The fusion results indicate that our method achieves high scores in most quantitative indicators and satisfactory performance in visual qualitative analysis.
Affiliation(s)
- Wenqing Wang
- School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China
- Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xi'an University of Technology, Xi'an 710048, China
- Ji He
- School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China
- Han Liu
- School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China
- Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xi'an University of Technology, Xi'an 710048, China
- Wei Yuan
- School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China
3. Zhong Y, Zhang S, Liu Z, Zhang X, Mo Z, Zhang Y, Hu H, Chen W, Qi L. Unsupervised Fusion of Misaligned PAT and MRI Images via Mutually Reinforcing Cross-Modality Image Generation and Registration. IEEE Trans Med Imaging 2024; 43:1702-1714. [PMID: 38147426] [DOI: 10.1109/tmi.2023.3347511]
Abstract
Photoacoustic tomography (PAT) and magnetic resonance imaging (MRI) are two advanced imaging techniques widely used in pre-clinical research. PAT has high optical contrast and deep imaging range but poor soft tissue contrast, whereas MRI provides excellent soft tissue information but poor temporal resolution. Despite recent advances in medical image fusion with pre-aligned multimodal data, PAT-MRI image fusion remains challenging due to misaligned images and spatial distortion. To address these issues, we propose an unsupervised multi-stage deep learning framework called PAMRFuse for misaligned PAT and MRI image fusion. PAMRFuse comprises a multimodal to unimodal registration network to accurately align the input PAT-MRI image pairs and a self-attentive fusion network that selects information-rich features for fusion. We employ an end-to-end mutually reinforcing mode in our registration network, which enables joint optimization of cross-modality image generation and registration. To the best of our knowledge, this is the first attempt at information fusion for misaligned PAT and MRI. Qualitative and quantitative experimental results show the excellent performance of our method in fusing PAT-MRI images of small animals captured from commercial imaging systems.
4. Gupta P, Jain N. Segmentation-Based Fusion of CT and MR Images. J Imaging Inform Med 2024. [PMID: 38528288] [DOI: 10.1007/s10278-024-01078-x]
Abstract
In this paper, a segmentation-based image fusion method is proposed for the fusion of MR and CT images to obtain a high-contrast fused image that contains complementary information from both input images. The proposed method uses the fuzzy C-means method to extract information about the skull from the CT image. This skull information is used to extract soft tissue information from the MR image. Both the skull information and the soft tissue information are then fused using the fusion rule. The efficiency of the proposed method over other state-of-the-art fusion methods is analyzed and compared using qualitative and quantitative analysis. Qualitative analysis shows the improvement in the contrast between the bone and the soft tissue using the proposed method over other state-of-the-art methods, without introducing any artifacts or distortions. Classical and gradient-based quantitative analyses also show significant improvement in the fused image obtained using the proposed method over the five state-of-the-art methods. The percentage improvement in the standard deviation, average gradient, entropy, spatial frequency, QABF, and LABF of the proposed method over the best value obtained by the five state-of-the-art methods is 27.11%, 12.06%, 23.64%, 11.30%, 5.59%, and 13.70%, respectively.
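As a rough illustration of the skull-extraction step, the sketch below runs a plain fuzzy C-means clustering on CT pixel intensities and thresholds the membership of the brightest cluster; it is a generic FCM on intensities, not the authors' pipeline, and the cluster count, fuzzifier m and iteration budget are assumed values.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy C-means on a 1-D feature vector x; returns (memberships, centers)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                     # membership rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)             # weighted cluster centers
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-9
        p = 2.0 / (m - 1.0)
        u = (dist ** -p) / np.sum(dist ** -p, axis=1, keepdims=True)
    return u, centers

ct = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(float)   # stand-in CT slice
u, centers = fuzzy_c_means(ct.ravel())
skull_mask = (u[:, np.argmax(centers)] > 0.5).reshape(ct.shape)            # brightest cluster ~ bone
print("skull pixels:", int(skull_mask.sum()))
```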
Affiliation(s)
- Pragya Gupta
- Department of Electronics and Communication Engineering, Jaypee University of Information Technology, Waknaghat, Solan, 173234, India
- Nishant Jain
- Department of Electronics and Communication Engineering, Jaypee University of Information Technology, Waknaghat, Solan, 173234, India
5. Zhu F, Liu W. A novel medical image fusion method based on multi-scale shearing rolling weighted guided image filter. Math Biosci Eng 2023; 20:15374-15406. [PMID: 37679184] [DOI: 10.3934/mbe.2023687]
Abstract
Medical image fusion is a crucial technology for biomedical diagnoses. However, current fusion methods struggle to balance algorithm design, visual effects, and computational efficiency. To address these challenges, we introduce a novel medical image fusion method based on the multi-scale shearing rolling weighted guided image filter (MSRWGIF). Inspired by the rolling guided filter, we construct the rolling weighted guided image filter (RWGIF) based on the weighted guided image filter. This filter offers progressive smoothing filtering of the image, generating smooth and detailed images. Then, we construct a novel image decomposition tool, MSRWGIF, by replacing non-subsampled shearlet transform's non-sampling pyramid filter with RWGIF to extract richer detailed information. In the first step of our method, we decompose the original images under MSRWGIF to obtain low-frequency subbands (LFS) and high-frequency subbands (HFS). Since LFS contain a large amount of energy-based information, we propose an improved local energy maximum (ILGM) fusion strategy. Meanwhile, HFS employ a fast and efficient parametric adaptive pulse coupled-neural network (AP-PCNN) model to combine more detailed information. Finally, the inverse MSRWGIF is utilized to generate the final fused image from fused LFS and HFS. To test the proposed method, we select multiple medical image sets for experimental simulation and confirm its advantages by combining seven high-quality representative metrics. The simplicity and efficiency of the method are compared with 11 classical fusion methods, illustrating significant improvements in the subjective and objective performance, especially for color medical image fusion.
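The decomposition-plus-energy-rule idea can be caricatured in a few lines: below, a box filter stands in for the paper's rolling weighted guided filter, and a local-energy-maximum rule picks base coefficients, so treat it as an illustrative sketch with assumed window sizes rather than the MSRWGIF/ILGM/AP-PCNN algorithm itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_fuse(img_a, img_b, base_win=15, energy_win=7):
    """Toy two-scale fusion: base layers fused by local-energy maximum,
    detail layers fused by absolute-maximum selection."""
    base_a, base_b = uniform_filter(img_a, base_win), uniform_filter(img_b, base_win)
    det_a, det_b = img_a - base_a, img_b - base_b
    energy_a = uniform_filter(base_a ** 2, energy_win)    # local energy maps
    energy_b = uniform_filter(base_b ** 2, energy_win)
    fused_base = np.where(energy_a >= energy_b, base_a, base_b)
    fused_det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return fused_base + fused_det

rng = np.random.default_rng(0)
ct, mr = rng.random((128, 128)), rng.random((128, 128))   # stand-ins for co-registered slices
print(two_scale_fuse(ct, mr).shape)
```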
Affiliation(s)
- Fang Zhu
- Department of Mathematics, Ministry of General Education, Anhui Xinhua University, Hefei 230088, China
- Wei Liu
- College of Mathematics and Computer Science, Tongling University, Tongling 244061, China
6. Li J, Han D, Wang X, Yi P, Yan L, Li X. Multi-Sensor Medical-Image Fusion Technique Based on Embedding Bilateral Filter in Least Squares and Salient Detection. Sensors (Basel) 2023; 23:3490. [PMID: 37050552] [PMCID: PMC10098979] [DOI: 10.3390/s23073490]
Abstract
A multi-sensor medical-image fusion technique, which integrates useful information from different single-modal images of the same tissue and provides a fused image that is more comprehensive and objective than a single-source image, is becoming an increasingly important technique in clinical diagnosis and treatment planning. The salient information in medical images often visually describes the tissue. To effectively embed salient information in the fused image, a multi-sensor medical image fusion method is proposed based on an embedding bilateral filter in least squares and salient detection via a deformed smoothness constraint. First, source images are decomposed into base and detail layers using a bilateral filter in least squares. Then, the detail layers are treated as superpositions of salient regions and background information; a fusion rule for this layer based on the deformed smoothness constraint and guided filtering was designed to successfully conserve the salient structure and detail information of the source images. A base-layer fusion rule based on modified Laplace energy and local energy is proposed to preserve the energy information of these source images. The experimental results demonstrate that the proposed method outperformed nine state-of-the-art methods in both subjective and objective quality assessments on the Harvard Medical School dataset.
Affiliation(s)
- Jiangwei Li
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
- Dingan Han
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
- Xiaopan Wang
- Guangdong Province Graduate Joint Training Base (Foshan), Foshan University, Foshan 528225, China
- Peng Yi
- Jiangsu Shuguang Photoelectric Co., Ltd., Yangzhou 225009, China
- Liang Yan
- Jiangsu Shuguang Photoelectric Co., Ltd., Yangzhou 225009, China
- Xiaosong Li
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
7. Li X, Wan W, Zhou F, Cheng X, Jie Y, Tan H. Medical image fusion based on sparse representation and neighbor energy activity. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104353]
8. Yang Y, Cao S, Wan W, Huang S. Multi-modal medical image super-resolution fusion based on detail enhancement and weighted local energy deviation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104387]
9. Li W, Zhang Y, Wang G, Huang Y, Li R. DFENet: A dual-branch feature enhanced network integrating transformers and convolutional feature learning for multimodal medical image fusion. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104402]
10. Zhang G, Nie X, Liu B, Yuan H, Li J, Sun W, Huang S. A multimodal fusion method for Alzheimer's disease based on DCT convolutional sparse representation. Front Neurosci 2023; 16:1100812. [PMID: 36685238] [PMCID: PMC9853298] [DOI: 10.3389/fnins.2022.1100812]
Abstract
Introduction The medical information contained in magnetic resonance imaging (MRI) and positron emission tomography (PET) has driven the development of intelligent diagnosis of Alzheimer's disease (AD) and multimodal medical imaging. To address the severe energy loss, low contrast of fused images and spatial inconsistency of traditional sparse-representation-based multimodal medical image fusion methods, a multimodal fusion algorithm for Alzheimer's disease based on discrete cosine transform (DCT) convolutional sparse representation is proposed. Methods The algorithm first performs a multi-scale DCT decomposition of the source medical images and uses the sub-images at different scales as training images, respectively. Different sparse coefficients are obtained by optimally solving the sub-dictionaries at different scales using the alternating direction method of multipliers (ADMM). Secondly, the coefficients of the high-frequency and low-frequency sub-images are fused using an improved L1-norm rule combined with an improved spatial frequency, the novel sum-modified spatial frequency (NMSF), and the inverse DCT is applied to obtain the final fused images. Results and discussion Extensive experimental results show that the proposed method performs well in contrast enhancement and in retaining texture and contour information.
Affiliation(s)
- Guo Zhang
- School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Xixi Nie
- Chongqing Key Laboratory of Image Cognition, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Bangtao Liu
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Hong Yuan
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Jin Li
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Weiwei Sun (correspondence)
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- Shixin Huang
- School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- Department of Scientific Research, The People's Hospital of Yubei District of Chongqing City, Yubei, China
11. Liu Y, Zhou D, Nie R, Hou R, Ding Z, Xia W, Li M. Green fluorescent protein and phase contrast image fusion via Spectral TV filter-based decomposition. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104265]
12. Bi X, Wang P, Wu T, Zha F, Xu P. Non-uniform illumination underwater image enhancement via events and frame fusion. Appl Opt 2022; 61:8826-8832. [PMID: 36256018] [DOI: 10.1364/ao.463099]
Abstract
Absorption and scattering by aqueous media can attenuate light and cause underwater optical imagery difficulty. Artificial light sources are usually used to aid deep-sea imaging. Due to the limited dynamic range of standard cameras, artificial light sources often cause underwater images to be underexposed or overexposed. By contrast, event cameras have a high dynamic range and high temporal resolution but cannot provide frames with rich color characteristics. In this paper, we exploit the complementarity of the two types of cameras to propose an efficient yet simple method for image enhancement of uneven underwater illumination, which can generate enhanced images containing better scene details and colors similar to standard frames. Additionally, we create a dataset recorded by the Dynamic and Active-pixel Vision Sensor that includes both event streams and frames, enabling testing of the proposed method and frame-based image enhancement methods. The experimental results conducted on our dataset with qualitative and quantitative measures demonstrate that the proposed method outperforms the compared enhancement algorithms.
13. Fan C, Hu K, Yuan Y, Li Y. A Data-driven Analysis of Global Research Trends in Medical Image: A Survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.10.047]
14. Tang L, Hui Y, Yang H, Zhao Y, Tian C. Medical image fusion quality assessment based on conditional generative adversarial network. Front Neurosci 2022; 16:986153. [PMID: 36033610] [PMCID: PMC9400712] [DOI: 10.3389/fnins.2022.986153]
Abstract
Multimodal medical image fusion (MMIF) has been proven to effectively improve the efficiency of disease diagnosis and treatment. However, few works have explored dedicated evaluation methods for MMIF. This paper proposes a novel quality assessment method for MMIF based on the conditional generative adversarial networks. First, with the mean opinion scores (MOS) as the guiding condition, the feature information of the two source images is extracted separately through the dual channel encoder-decoder. The features of different levels in the encoder-decoder are hierarchically input into the self-attention feature block, which is a fusion strategy for self-identifying favorable features. Then, the discriminator is used to improve the fusion objective of the generator. Finally, we calculate the structural similarity index between the fake image and the true image, and the MOS corresponding to the maximum result will be used as the final assessment result of the fused image quality. Based on the established MMIF database, the proposed method achieves the state-of-the-art performance among the comparison methods, with excellent agreement with subjective evaluations, indicating that the method is effective in the quality assessment of medical fusion images.
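The final scoring step described above reduces to picking the candidate whose structural similarity to a reference is largest; the snippet below shows that selection using scikit-image's SSIM, with synthetic placeholder images standing in for the generator outputs at each MOS level, so it is only an illustration of the selection rule, not the paper's network.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                        # "true" fused image (placeholder)
candidates = {mos: np.clip(reference + rng.normal(0, 0.05 * mos, reference.shape), 0, 1)
              for mos in (1, 2, 3, 4, 5)}                 # stand-ins for images generated per MOS level

scores = {mos: structural_similarity(reference, img, data_range=1.0)
          for mos, img in candidates.items()}
best_mos = max(scores, key=scores.get)                    # MOS whose image is most similar to the reference
print(best_mos, round(scores[best_mos], 4))
```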
Affiliation(s)
- Lu Tang
- School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Yu Hui
- School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Hang Yang
- School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Yinghong Zhao
- School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Chuangeng Tian
- School of Information and Electrical Engineering, Xuzhou University of Technology, Xuzhou, China
15. Tang W, He F, Liu Y, Duan Y. MATR: Multimodal Medical Image Fusion via Multiscale Adaptive Transformer. IEEE Trans Image Process 2022; 31:5134-5149. [PMID: 35901003] [DOI: 10.1109/tip.2022.3193288]
Abstract
Owing to the limitations of imaging sensors, it is challenging to obtain a medical image that simultaneously contains functional metabolic information and structural tissue details. Multimodal medical image fusion, an effective way to merge the complementary information in different modalities, has become a significant technique to facilitate clinical diagnosis and surgical navigation. With powerful feature representation ability, deep learning (DL)-based methods have improved such fusion results but still have not achieved satisfactory performance. Specifically, existing DL-based methods generally depend on convolutional operations, which can well extract local patterns but have limited capability in preserving global context information. To compensate for this defect and achieve accurate fusion, we propose a novel unsupervised method to fuse multimodal medical images via a multiscale adaptive Transformer termed MATR. In the proposed method, instead of directly employing vanilla convolution, we introduce an adaptive convolution for adaptively modulating the convolutional kernel based on the global complementary context. To further model long-range dependencies, an adaptive Transformer is employed to enhance the global semantic extraction capability. Our network architecture is designed in a multiscale fashion so that useful multimodal information can be adequately acquired from the perspective of different scales. Moreover, an objective function composed of a structural loss and a region mutual information loss is devised to construct constraints for information preservation at both the structural-level and the feature-level. Extensive experiments on a mainstream database demonstrate that the proposed method outperforms other representative and state-of-the-art methods in terms of both visual quality and quantitative evaluation. We also extend the proposed method to address other biomedical image fusion issues, and the pleasing fusion results illustrate that MATR has good generalization capability. The code of the proposed method is available at https://github.com/tthinking/MATR.
16. Santarelli C, Carfagni M, Alparone L, Arienzo A, Argenti F. Multimodal fusion of tomographic sequences of medical images: MRE spatially enhanced by MRI. Comput Methods Programs Biomed 2022; 223:106964. [PMID: 35759822] [DOI: 10.1016/j.cmpb.2022.106964]
Abstract
BACKGROUND AND OBJECTIVE In biomedical fields, image analysis is often necessary for an accurate diagnosis. In order to obtain all the information needed to form an in-depth clinical picture, it may be useful to combine the contents of images taken under different diagnostic modes. Multimodal medical image fusion techniques enable complementary information acquired by different imaging devices to be automatically combined into a unique image. METHODS In this paper, a multimodal medical image fusion method based on multiresolution analysis (MRA) is proposed, with the aim of combining the high geometric content of magnetic resonance imaging (MRI) and the elasticity information of magnetic resonance elastography (MRE), simultaneously acquired on the same organs of a patient. First, the MRE slices are volumetrically interpolated so that each exactly overlaps a slice of MRI. Then, the spatial details of MRI are extracted by means of MRA and injected into the corresponding slices of MRE. Because of the intrinsic dissimilarity between corresponding MRE and MRI slices, the spatial details of MRI are modulated by local or global matching functions. RESULTS The performance of the proposed method is quantitatively assessed by considering the radiometric and geometric consistency of the fused images with respect to their originals, in a comparison with two popular methods from the literature. For a qualitative evaluation, a visual inspection is carried out. CONCLUSIONS The results show that the proposed method enables an effective MRI-MRE fusion that allows the elasticity information and geometric details of the examined organs to be evaluated in a single image.
Affiliation(s)
- Chiara Santarelli
- Department of Industrial Engineering, University of Florence, Via di Santa Marta 3, 50139 Florence, Italy
- Monica Carfagni
- Department of Industrial Engineering, University of Florence, Via di Santa Marta 3, 50139 Florence, Italy
- Luciano Alparone
- Department of Information Engineering, University of Florence, Via di Santa Marta 3, 50139 Florence, Italy
- Alberto Arienzo
- Department of Information Engineering, University of Florence, Via di Santa Marta 3, 50139 Florence, Italy
- Fabrizio Argenti
- Department of Information Engineering, University of Florence, Via di Santa Marta 3, 50139 Florence, Italy
17. Multi-modal medical image fusion based on densely-connected high-resolution CNN and hybrid transformer. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07635-1]
18. Ullah H, Zhao Y, Abdalla FYO, Wu L. Fast local Laplacian filtering based enhanced medical image fusion using parameter-adaptive PCNN and local features-based fuzzy weighted matrices. Appl Intell 2022. [DOI: 10.1007/s10489-021-02834-0]
19. Multimodal medical image fusion based on multichannel coupled neural P systems and max-cloud models in spectral total variation domain. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.059]
20. A Review on the Rule-Based Filtering Structure with Applications on Computational Biomedical Images. J Healthc Eng 2022; 2022:2599256. [PMID: 35299677] [PMCID: PMC8923774] [DOI: 10.1155/2022/2599256]
Abstract
In this paper, we present rule-based fuzzy inference systems that consist of a series of mathematical representations based on fuzzy concepts within the filtering structure. This framework is crucial for understanding and discussing the different principles involved in fuzzy filter design. A number of typical fuzzy multichannel filtering approaches are presented in order to clarify the different fuzzy filter designs and to compare different algorithms. In most practical applications (e.g., biomedical image analysis), the emphasis is placed primarily on fuzzy filtering algorithms, whose main advantages are the restoration of corrupted medical images, interpretability, edge preservation, and the retention of image information relevant to the accurate diagnosis of diseases.
21. Vanitha K, Satyanarayana D, Giri Prasad M. Medical image fusion using fuzzy adaptive reduced pulse coupled neural networks. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-213416]
Abstract
This paper addresses a novel neuro-fuzzy-based approach to setting the weighted linking strength of parameter-adaptive reduced pulse coupled neural networks. In reduced PCNN-based medical image fusion algorithms, it is essential to evaluate the prominence of each pixel in an image, and the fusion performance in turn depends on the linking factor and internal activity. These values of the reduced PCNN therefore need to be set adaptively, with fewer complications and uncertainties. To this end, the weighted linking strength, i.e., the lambda of the reduced PCNN neurons, is carefully set by a fuzzy-based approach: the lambda of each neuron is represented as a fuzzy membership value derived from activity-level measures such as local information entropy and energy. Finally, a new model, the fuzzy adaptive reduced pulse coupled neural network, is developed by reducing the number of parameters and setting them adaptively with fuzzy rules. This leads to a far less complicated network and greater computational efficiency, an important requirement in healthcare settings. The proposed scheme is free from shortcomings such as loss of boundaries and structural details, unwanted artifacts, and degradations. Subjective and objective evaluations show better performance of this new approach compared with existing techniques.
Affiliation(s)
- K. Vanitha
- Department of ECE, JNTUA, Ananthapuramu, AP, India
- D. Satyanarayana
- Department of ECE, RGM College of Engineering, Nandyal, AP, India
22. Lakshmi A, Rajasekaran MP, Jeevitha S, Selvendran S. An Adaptive MRI-PET Image Fusion Model Based on Deep Residual Learning and Self-Adaptive Total Variation. Arab J Sci Eng 2022. [DOI: 10.1007/s13369-020-05201-2]
23. Zhu R, Li X, Huang S, Zhang X. Multimodal medical image fusion using adaptive co-occurrence filter-based decomposition optimization model. Bioinformatics 2022; 38:818-826. [PMID: 34664633] [DOI: 10.1093/bioinformatics/btab721]
Abstract
MOTIVATION Medical image fusion has developed into an important technology, which can effectively merge the significant information of multiple source images into one image. Fused images with abundant and complementary information are desirable, which contributes to clinical diagnosis and surgical planning. RESULTS In this article, the concept of the skewness of pixel intensity (SPI) and a novel adaptive co-occurrence filter (ACOF)-based image decomposition optimization model are proposed to improve the quality of fused images. Experimental results demonstrate that the proposed method outperforms 22 state-of-the-art medical image fusion methods in terms of five objective indices and subjective evaluation, and it has higher computational efficiency. AVAILABILITY AND IMPLEMENTATION First, the concept of SPI is applied to the co-occurrence filter to design ACOF. The initial base layers of source images are obtained using ACOF, which relies on the contents of images rather than fixed scale. Then, the widely used iterative filter framework is replaced with an optimization model to ensure that the base layer and detail layer are sufficiently separated and the image decomposition has higher computational efficiency. The optimization function is constructed based on the characteristics of the ideal base layer. Finally, the fused images are generated by designed fusion rules and linear addition. The code and data can be downloaded at https://github.com/zhunui/acof. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Rui Zhu
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun 130012, China
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Xiongfei Li
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun 130012, China
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Sa Huang
- Department of Radiology, the Second Hospital of Jilin University, Changchun 130041, China
- Xiaoli Zhang
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun 130012, China
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
24. A multiscale double-branch residual attention network for anatomical-functional medical image fusion. Comput Biol Med 2021; 141:105005. [PMID: 34763846] [DOI: 10.1016/j.compbiomed.2021.105005]
Abstract
Medical image fusion technology synthesizes complementary information from multimodal medical images. This technology is playing an increasingly important role in clinical applications. In this paper, we propose a new convolutional neural network, which is called the multiscale double-branch residual attention (MSDRA) network, for fusing anatomical-functional medical images. Our network contains a feature extraction module, a feature fusion module and an image reconstruction module. In the feature extraction module, we use three identical MSDRA blocks in series to extract image features. The MSDRA block has two branches. The first branch uses a multiscale mechanism to extract features of different scales with three convolution kernels of different sizes, while the second branch uses six 3 × 3 convolutional kernels. In addition, we propose the Feature L1-Norm fusion strategy to fuse the features obtained from the input images. Compared with the reference image fusion algorithms, MSDRA consumes less fusion time and achieves better results in visual quality and the objective metrics of Spatial Frequency (SF), Average Gradient (AG), Edge Intensity (EI), Quality-Aware Clustering (QAC), Variance (VAR), and Visual Information Fidelity for Fusion (VIFF).
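Two of the objective metrics listed above, Spatial Frequency (SF) and Average Gradient (AG), have simple closed forms; the sketch below computes them using their common definitions (normalisation conventions vary slightly across papers, so this is an assumed variant, not necessarily the exact one used in the MSDRA evaluation).

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(row-frequency^2 + column-frequency^2) of first differences."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    """AG: mean magnitude of the local intensity gradient."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

fused = np.random.default_rng(0).random((256, 256))       # placeholder fused image
print(f"SF = {spatial_frequency(fused):.4f}, AG = {average_gradient(fused):.4f}")
```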
25. Polinati S, Bavirisetti DP, Rajesh KNVPS, Dhuli R. Multimodal medical image fusion based on content-based decomposition and PCA-Sigmoid. Curr Med Imaging 2021; 18:546-562. [PMID: 34607547] [DOI: 10.2174/1573405617666211004114726]
Abstract
OBJECTIVE The objective of any multimodal medical image fusion algorithm is to assist a radiologist for better decision-making during the diagnosis and therapy by integrating the anatomical (magnetic resonance imaging) and functional (positron emission tomography/single-photon emission computed tomography) information. METHODS We proposed a new medical image fusion method based on content-based decomposition, principal component analysis (PCA), and sigmoid function. We considered empirical wavelet transform (EWT) for content-based decomposition purposes since it can preserve crucial medical image information such as edges and corners. PCA is used to obtain initial weights corresponding to each detail layer. RESULTS In our experiments, we found that direct usage of PCA for detail layer fusion introduces severe artifacts into the fused image due to weight scaling issues. In order to tackle this, we considered using the sigmoid function for better weight scaling. We considered 24 pairs of MRI-PET and 24 pairs of MRI-SPECT images for fusion and the results are measured using four significant quantitative metrics. CONCLUSION Finally, we compared our proposed method with other state-of-the-art transform-based fusion approaches, using traditional and recent performance measures. An appreciable improvement is observed in both qualitative and quantitative results compared to other fusion methods.
Affiliation(s)
- Kandala N V P S Rajesh
- Department of ECE, Gayatri Vidya Parishad College of Engineering (A), Visakhapatnam, India
26. Fu J, Li W, Du J, Xu L. DSAGAN: A generative adversarial network based on dual-stream attention mechanism for anatomical and functional image fusion. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.06.083]
27. Zuo Q, Zhang J, Yang Y. DMC-Fusion: Deep Multi-Cascade Fusion With Classifier-Based Feature Synthesis for Medical Multi-Modal Images. IEEE J Biomed Health Inform 2021; 25:3438-3449. [PMID: 34038372] [DOI: 10.1109/jbhi.2021.3083752]
Abstract
Multi-modal medical image fusion is a challenging yet important task for precision diagnosis and surgical planning in clinical practice. Although single-feature fusion strategies such as DenseFuse have achieved inspiring performance, they tend not to fully preserve the source image features. In this paper, a deep multi-cascade fusion framework with classifier-based feature synthesis is proposed to automatically fuse multi-modal medical images. It consists of a pre-trained autoencoder based on dense connections, a feature classifier and a multi-cascade fusion decoder that fuses high-frequency and low-frequency components separately. The encoder and decoder are transferred from the MS-COCO dataset and pre-trained simultaneously on public multi-modal medical image datasets to extract features. The feature classification is conducted through Gaussian high-pass filtering and peak signal-to-noise ratio thresholding, and the feature maps in each layer of the pre-trained Dense-Block and decoder are divided into high-frequency and low-frequency sequences. Specifically, in the proposed feature fusion block, a parameter-adaptive pulse coupled neural network and an l1-weighted rule are employed to fuse the high-frequency and low-frequency components, respectively. Finally, we design a novel multi-cascade fusion decoder over the whole decoding stage to selectively fuse useful information from different modalities. We also validate our approach for brain disease classification using the fused images, and a statistical significance test is performed to show that the improvement in classification performance is due to the fusion. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in both qualitative and quantitative evaluations.
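The frequency split mentioned above (Gaussian high-pass filtering plus a threshold) can be imitated on raw feature maps as follows; the sigma and cutoff values are invented for illustration, the energy-ratio criterion is a stand-in for the paper's PSNR-style threshold, and random arrays take the place of decoder feature maps, so this is only a sketch of the idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_by_high_frequency(feature_maps, sigma=2.0, energy_cutoff=0.02):
    """Label each 2-D feature map as 'high' or 'low' frequency from the energy
    left after subtracting a Gaussian low-pass version of the map."""
    labels = []
    for fmap in feature_maps:
        high_pass = fmap - gaussian_filter(fmap, sigma)
        ratio = np.mean(high_pass ** 2) / (np.mean(fmap ** 2) + 1e-12)
        labels.append("high" if ratio > energy_cutoff else "low")
    return labels

rng = np.random.default_rng(0)
maps = [gaussian_filter(rng.random((64, 64)), 4), rng.random((64, 64))]  # smooth map vs. noisy map
print(split_by_high_frequency(maps))   # typically: ['low', 'high']
```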
28. Li X, Zhou F, Tan H, Zhang W, Zhao C. Multimodal medical image fusion based on joint bilateral filter and local gradient energy. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.04.052]
29. Bellam K, Krishnaraj N, Jayasankar T, Prakash NB, Hemalakshmi GR. Adaptive Multimodal Image Fusion with a Deep Pyramidal Residual Learning Network. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3763]
Abstract
Multimodal medical imaging is an indispensable requirement in the treatment of various pathologies to accelerate care. Rather than discrete images, a composite image combining complementary features from multimodal images is highly informative for clinical examinations, surgical planning, and progress monitoring. In this paper, a deep learning fusion model is proposed for the fusion of multimodal medical images. Based on pyramidal and residual learning units and strengthened with adaptive fusion rules, the proposed model is tested on image pairs from a standard dataset. Its potential for enhanced image examination is shown by fusion studies with deep network images and quantitative output metrics for magnetic resonance imaging and positron emission tomography (MRI/PET) and magnetic resonance imaging and single-photon emission computed tomography (MRI/SPECT). Testing is performed on 20 pairs of MRI/SPECT and 20 pairs of MRI/PET images. The proposed fusion model achieves Structural Similarity Index Measure (SSIM) values of 0.9502 and 0.8103 for the MRI/SPECT and MRI/PET image sets, signifying the perceptual visual consistency of the fused images. Similarly, Mutual Information (MI) values of 2.7455 and 2.7776 for the MRI/SPECT and MRI/PET image sets indicate the model's ability to carry the information content of the source images into the composite image. Further, the proposed model allows the deployment of its variants, introducing refinements of the basic model suitable for the fusion of low- and high-resolution medical images of diverse modalities.
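Mutual information, one of the two scores reported above, can be estimated from a joint histogram; the sketch below uses the standard histogram estimator (the bin count is an assumed choice, and results depend on it), not the exact evaluation code behind the reported 2.74-2.78 values.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based MI (in bits) between two images of identical shape."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
source = rng.random((128, 128))
fused = 0.7 * source + 0.3 * rng.random((128, 128))        # fused image correlated with the source
print(f"MI(source, fused) = {mutual_information(source, fused):.3f} bits")
```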
Affiliation(s)
- Kiranmai Bellam
- Department of Computer Science, Prairie View A&M University, Prairie View, TX 77429, United States
- N. Krishnaraj
- SRM Institute of Science and Technology, Kattankulathur 603203, Tamilnadu, India
- T. Jayasankar
- Electronics and Communication Engineering Department, University College of Engineering, Bharathidasan Institute of Technology Campus, Anna University, Tiruchirappalli 620024, Tamilnadu, India
- N. B. Prakash
- Department of Electrical and Electronics Engineering, National Engineering College, 628503, Tamilnadu, India
- G. R. Hemalakshmi
- Department of Computer Science and Engineering, National Engineering College, 628503, Tamilnadu, India
30
31. Vanitha K, Satyanarayana D, Prasad MNG. Multi-modal Medical Image Fusion Algorithm Based on Spatial Frequency Motivated PA-PCNN in the NSST Domain. Curr Med Imaging 2021; 17:634-643. [PMID: 33213329] [DOI: 10.2174/1573405616666201118123220]
Abstract
BACKGROUND Image fusion has grown into an effective method in disease-related diagnosis schemes. METHODS In this paper, a new method for combining multimodal medical images using spatial frequency motivated parameter-adaptive PCNN (SF-PAPCNN) is suggested. The multimodal images are decomposed into frequency bands using the NSST decomposition. The coefficients of the low-frequency bands are selected using the maximum rule, and the coefficients of the high-frequency bands are combined by SF-PAPCNN. RESULTS The fused medical image is obtained by applying the inverse NSST to the above coefficients. CONCLUSION Quality metrics such as entropy (ENT), fusion symmetry (FS), standard deviation (STD), mutual information (QMI) and edge strength (QAB/F) are used to validate the efficacy of the suggested scheme.
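The low-frequency "maximum rule" mentioned above is simply a per-coefficient selection of the larger (absolute) value; a minimal NumPy rendering is given below, with the NSST decomposition itself omitted and random arrays standing in for the low-frequency sub-bands.

```python
import numpy as np

def max_rule(coeff_a, coeff_b):
    """Pick, at every position, the coefficient with the larger absolute value."""
    return np.where(np.abs(coeff_a) >= np.abs(coeff_b), coeff_a, coeff_b)

rng = np.random.default_rng(0)
low_a = rng.normal(size=(64, 64))          # stand-in for a low-frequency NSST band of image A
low_b = rng.normal(size=(64, 64))          # stand-in for the corresponding band of image B
fused_low = max_rule(low_a, low_b)
print(fused_low.shape, float(np.abs(fused_low).mean()))
```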
Affiliation(s)
- K Vanitha
- Department of ECE, Jawaharlal Nehru Technological University, Anantapur, India
- D Satyanarayana
- Department of ECE, Rajeev Gandhi Memorial College of Engineering and Technology, Nandyal, India
- M N G Prasad
- Department of ECE, Jawaharlal Nehru Technological University, Anantapur, India
32. Multi-modal brain image fusion based on multi-level edge-preserving filtering. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102280]
33. Wang G, Li W, Huang Y. Medical image fusion based on hybrid three-layer decomposition model and nuclear norm. Comput Biol Med 2020; 129:104179. [PMID: 33360260] [DOI: 10.1016/j.compbiomed.2020.104179]
Abstract
The aim of medical image fusion technology is to synthesize multiple-image information to assist doctors in making scientific decisions. Existing studies have focused on preserving image details while avoiding halo artifacts and color distortions. This paper proposes a novel medical image fusion algorithm based on this research objective. First, the input image is decomposed into structure, texture, and local mean brightness layers using a hybrid three-layer decomposition model that can fully extract the features of the original images without the introduction of artifacts. Secondly, the nuclear norm of the patches, which are obtained using a sliding window, are calculated to construct the weight maps of the structure and texture layers. The weight map of the local mean brightness layer is constructed by calculating the local energy. Finally, remapping functions are applied to enhance each fusion layer, which reconstructs the final fusion image with the inverse operation of decomposition. Subjective and objective experiments confirm that the proposed algorithm has a distinct advantage compared with other state-of-the-art algorithms.
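The nuclear-norm weighting described above amounts to summing the singular values of each image patch; the sketch below does this with a plain SVD over non-overlapping patches for brevity (the paper uses a sliding window, and the patch size here is an assumed value), so it only illustrates how such a weight map could be built.

```python
import numpy as np

def nuclear_norm_map(img, patch=8):
    """Sum of singular values for each non-overlapping patch, as a coarse weight map."""
    h, w = (img.shape[0] // patch) * patch, (img.shape[1] // patch) * patch
    weights = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = img[i:i + patch, j:j + patch]
            weights[i // patch, j // patch] = np.linalg.svd(block, compute_uv=False).sum()
    return weights

layer_a = np.random.default_rng(0).random((128, 128))      # stand-in structure/texture layer of image A
layer_b = np.random.default_rng(1).random((128, 128))      # stand-in layer of image B
w_a, w_b = nuclear_norm_map(layer_a), nuclear_norm_map(layer_b)
mask = w_a >= w_b                                           # per-patch winner used to build the weight map
print(f"fraction of patches won by A: {mask.mean():.2f}")
```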
Affiliation(s)
- Guofen Wang
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Weisheng Li
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Yuping Huang
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
34. Du J, Fang M, Yu Y, Lu G. An adaptive two-scale biomedical image fusion method with statistical comparisons. Comput Methods Programs Biomed 2020; 196:105603. [PMID: 32570007] [DOI: 10.1016/j.cmpb.2020.105603]
Abstract
Two-scale image representation of base and detail in the spatial domain is a well-known decomposition scheme, having lower computational complexity than transform-domain decomposition in the field of image fusion. Unfortunately, for a pseudo-colour input image, the base and detail images obtained via spatial-domain decomposition always display in greyscale. In this paper, a two-scale image fusion method with an adaptive threshold obtained by Otsu's method is proposed for pseudo-colour images in the colour space domain. For greyscale images, the detail and base images are obtained using structural information extracted from the difference image between a global and a local patch size. A local edge-preserving filter for preserving luminance information and local energy with the discussed window size are then adopted to combine the base and detail images. Experimental results show that structural and luminance information is better preserved in terms of subjective and objective evaluations for medical image and protein image fusion. Specifically, a two-step non-parametric statistical test (Friedman test and Nemenyi post-hoc test) with p-values is adopted to analyze the statistical significance of the differences between the proposed and compared methods in terms of objective metric values over 30 co-registered pairs of imaging data.
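One way to read the "adaptive threshold obtained by Otsu's method" step is sketched below: a large-window mean gives the base layer, and Otsu's threshold on the absolute detail separates salient structure from flat background. The window size and the exact role of the threshold are assumptions made for illustration, not the authors' published procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
img = rng.random((128, 128))                       # placeholder greyscale source image

base = uniform_filter(img, size=31)                # coarse base layer (large local patch)
detail = img - base                                # detail layer
t = threshold_otsu(np.abs(detail))                 # adaptive threshold from Otsu's method
salient = np.abs(detail) > t                       # mask of structurally salient detail pixels
print(f"Otsu threshold = {t:.4f}, salient fraction = {salient.mean():.2%}")
```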
Affiliation(s)
- Jiao Du
- School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
- Meie Fang
- School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
- Yufeng Yu
- Department of Statistics, Guangzhou University, Guangzhou 510006, China
- Gang Lu
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096, China
35. Fu J, Li W, Du J, Xiao B. Multimodal medical image fusion via laplacian pyramid and convolutional neural network reconstruction with local gradient energy strategy. Comput Biol Med 2020; 126:104048. [PMID: 33068809] [DOI: 10.1016/j.compbiomed.2020.104048]
Abstract
BACKGROUND In recent years, numerous fusion algorithms have been proposed for multimodal medical images. The Laplacian pyramid is one type of multiscale fusion method. Although pyramid-based fusion algorithms can fuse images well, they suffer from edge degradation, detail loss and image smoothing as the number of decomposition layers increases, which is harmful for medical diagnosis and analysis. METHOD This paper proposes a medical image fusion algorithm based on the Laplacian pyramid and convolutional neural network reconstruction with a local gradient energy strategy, which greatly improves edge quality. First, the multimodal medical images are reconstructed through a convolutional neural network. Then, the Laplacian pyramid is applied in the decomposition and fusion process, with the optimal number of decomposition layers determined by experiments. In addition, a local gradient energy fusion strategy is utilized to fuse the coefficients in each layer. Finally, the fused image is output through the inverse Laplacian transform. RESULTS Compared with existing algorithms, our fusion results show better visual quality, and our algorithm is considerably superior to the compared algorithms on objective indicators. In addition, in our fusion results for Alzheimer's disease and glioma, the disease details are much clearer than those of the compared algorithms, which can provide a reliable basis for doctors to analyze disease and make pathological diagnoses.
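To make the pyramid-plus-gradient-energy idea concrete, here is a compact OpenCV sketch: a two-level Laplacian pyramid is built for each source, the band-pass layers are fused by comparing local gradient energy, and the result is collapsed back. It assumes image sides divisible by four and omits the CNN reconstruction stage, so it is only an outline of the general scheme, not the authors' algorithm.

```python
import cv2
import numpy as np
from scipy.ndimage import uniform_filter

def local_gradient_energy(img, win=5):
    gy, gx = np.gradient(img)
    return uniform_filter(gx ** 2 + gy ** 2, win)

def fuse_laplacian(a, b, levels=2):
    """Fuse two float32 images (sides divisible by 2**levels) with a Laplacian pyramid."""
    band_pass = []
    for _ in range(levels):
        a_down, b_down = cv2.pyrDown(a), cv2.pyrDown(b)
        lap_a, lap_b = a - cv2.pyrUp(a_down), b - cv2.pyrUp(b_down)
        mask = local_gradient_energy(lap_a) >= local_gradient_energy(lap_b)
        band_pass.append(np.where(mask, lap_a, lap_b))     # band-pass layers: gradient-energy rule
        a, b = a_down, b_down
    fused = 0.5 * (a + b)                                  # coarsest level: simple average
    for lap in reversed(band_pass):
        fused = cv2.pyrUp(fused) + lap                     # collapse the pyramid
    return fused

rng = np.random.default_rng(0)
mri = rng.random((256, 256)).astype(np.float32)            # stand-in MRI slice
pet = rng.random((256, 256)).astype(np.float32)            # stand-in PET slice
print(fuse_laplacian(mri, pet).shape)
```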
Affiliation(s)
- Jun Fu
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Weisheng Li
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Jiao Du
- School of Computer Science and Educational Software, Guangzhou University, Guangzhou, 510006, China
- Bin Xiao
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
|
36
|
Zhao C, Wang T, Lei B. Medical image fusion method based on dense block and deep convolutional generative adversarial network. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05421-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
37
|
Robust spiking cortical model and total-variational decomposition for multimodal medical image fusion. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101996] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
38
|
Abstract
In image-based medical decision-making, different modalities of medical images of a given organ of a patient are captured. Each of these images represents a modality that renders the examined organ differently, leading to different observations of a given phenomenon (such as stroke). Accurate analysis of each of these modalities supports more appropriate medical decisions. Multimodal medical imaging is a research field concerned with developing robust algorithms that can fuse image information acquired by different sets of modalities. In this paper, a novel multimodal medical image fusion algorithm is proposed for a wide range of medical diagnostic problems. It is based on the application of a boundary-measured pulse-coupled neural network fusion strategy and an energy-attribute fusion strategy in a non-subsampled shearlet transform domain. Our algorithm was validated on datasets covering several diseases, namely glioma, Alzheimer's disease, and metastatic bronchogenic carcinoma, containing more than 100 image pairs. Qualitative and quantitative evaluation verifies that the proposed algorithm outperforms most current algorithms, providing important ideas for medical diagnosis.
|
39
|
|
40
|
Liu X, Wang C, Bai J, Liao G. Fine-tuning Pre-trained Convolutional Neural Networks for Gastric Precancerous Disease Classification on Magnification Narrow-band Imaging Images. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.100] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
41
|
Multi-modality medical images fusion based on local-features fuzzy sets and novel sum-modified-Laplacian in non-subsampled shearlet transform domain. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101724] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
42
|
Jung H, Kim Y, Jang H, Ha N, Sohn K. Unsupervised Deep Image Fusion with Structure Tensor Representations. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:3845-3858. [PMID: 31976896 DOI: 10.1109/tip.2020.2966075] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Convolutional neural networks (CNNs) have facilitated substantial progress on various problems in computer vision and image processing. However, applying them to image fusion has remained challenging due to the lack of labelled data for supervised learning. This paper introduces a deep image fusion network (DIF-Net), an unsupervised deep learning framework for image fusion. DIF-Net parameterizes the entire image fusion process, comprising feature extraction, feature fusion, and image reconstruction, using a CNN. The purpose of DIF-Net is to generate an output image whose contrast is identical to that of the high-dimensional input images. To realize this, we propose an unsupervised loss function using the structure tensor representation of the multi-channel image contrasts. Unlike traditional fusion methods that require time-consuming optimization or iterative procedures to obtain results, our loss function is minimized by a stochastic deep learning solver on large-scale examples. Consequently, the proposed method can produce fused images that preserve source image details through a single forward pass of a network trained without reference ground-truth labels. The proposed method has broad applicability to various image fusion problems, including multi-spectral, multi-focus, and multi-exposure image fusion. Quantitative and qualitative evaluations show that the proposed technique outperforms existing state-of-the-art approaches for various applications.
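The structure-tensor notion used in this loss can be illustrated with a small NumPy sketch: the 2x2 tensor field summarises local contrast jointly across all channels of a multi-channel stack, and a fusion objective can compare the fused image's tensor against the joint tensor of the sources. This is only our reading of the abstract, not the published DIF-Net loss or network.

```python
# Rough illustration of a structure-tensor "contrast" comparison between a
# fused image and its source stack; an interpretation for exposition only.
import numpy as np

def structure_tensor(channels: np.ndarray) -> np.ndarray:
    """channels: (C, H, W) float array -> (H, W, 2, 2) structure tensor field."""
    gy, gx = np.gradient(channels, axis=(1, 2))           # per-channel gradients
    s = np.zeros(channels.shape[1:] + (2, 2), dtype=np.float64)
    s[..., 0, 0] = np.sum(gx * gx, axis=0)
    s[..., 0, 1] = s[..., 1, 0] = np.sum(gx * gy, axis=0)
    s[..., 1, 1] = np.sum(gy * gy, axis=0)
    return s

def contrast_loss(fused: np.ndarray, sources: np.ndarray) -> float:
    """Mean Frobenius distance between the fused image's structure tensor
    and the joint structure tensor of the source stack (C, H, W)."""
    s_fused = structure_tensor(fused[None, ...])
    s_src = structure_tensor(sources)
    return float(np.mean(np.linalg.norm(s_fused - s_src, axis=(-2, -1))))
```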
|
43
|
Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2019; 2019:5450373. [PMID: 31885682 PMCID: PMC6915023 DOI: 10.1155/2019/5450373] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Revised: 11/07/2019] [Accepted: 11/20/2019] [Indexed: 12/16/2022]
Abstract
In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells while phase-contrast images maintain structural information with high resolution. Fusion of GFP and phase-contrast images is of high significance to the study of subcellular localization, protein functional analysis, and genetic expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs) by carefully taking their own characteristics into account. The fusion problem is modelled as an adversarial game between a generator and a discriminator. The generator aims to create a fused image that well extracts the functional information from the GFP image and the structural information from the phase-contrast image at the same time. The target of the discriminator is to further improve the overall similarity between the fused image and the phase-contrast image. Experimental results demonstrate that the proposed method can outperform several representative and state-of-the-art image fusion methods in terms of both visual quality and objective evaluation.
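A loose PyTorch sketch of the adversarial game described above is given below: the generator is encouraged to retain GFP intensities and phase-contrast gradients, while the discriminator judges whether the fused output resembles a real phase-contrast image. The loss terms, the weighting factor, and the finite-difference gradient operator are assumptions for illustration; the generator and discriminator networks themselves are not defined here.

```python
# Hedged sketch of generator/discriminator objectives for GFP + phase-contrast
# fusion; tensors are assumed to be (N, 1, H, W) floats, d_* are discriminator logits.
import torch
import torch.nn.functional as F

def gradient(x: torch.Tensor) -> torch.Tensor:
    """Simple finite-difference gradient magnitude, padded back to input size."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return F.pad(dx.abs(), (0, 1, 0, 0)) + F.pad(dy.abs(), (0, 0, 0, 1))

def generator_loss(fused, gfp, phase, d_fake, lam: float = 10.0):
    # Content term: keep GFP intensities and phase-contrast structure (gradients).
    content = F.l1_loss(fused, gfp) + F.l1_loss(gradient(fused), gradient(phase))
    # Adversarial term: fool the discriminator into labelling the fused image as real.
    adversarial = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return adversarial + lam * content

def discriminator_loss(d_real, d_fake):
    real = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
    fake = F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    return real + fake
```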
|
44
|
Lin YH, Hua KL, Lu HH, Sun WL, Chen YY. An Adaptive Exposure Fusion Method Using Fuzzy Logic and Multivariate Normal Conditional Random Fields. SENSORS (BASEL, SWITZERLAND) 2019; 19:s19214743. [PMID: 31683704 PMCID: PMC6864834 DOI: 10.3390/s19214743] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/21/2019] [Revised: 10/24/2019] [Accepted: 10/29/2019] [Indexed: 06/10/2023]
Abstract
High dynamic range (HDR) imaging has wide applications in intelligent vision sensing, including enhanced electronic imaging, smart surveillance, self-driving cars, and intelligent medical diagnosis. Exposure fusion is an essential HDR technique which fuses different exposures of the same scene into an HDR-like image. However, determining the appropriate fusion weights is difficult because each differently exposed image contains only a subset of the scene's details. When blending, the problem of local color inconsistency is more challenging; thus, it often requires manual tuning to avoid image artifacts. To address this problem, we present an adaptive coarse-to-fine searching approach to find the optimal fusion weights. In the coarse-tuning stage, fuzzy logic is used to efficiently decide the initial weights. In the fine-tuning stage, a multivariate normal conditional random field (MNCRF) model adjusts the fuzzy-based initial weights, which allows us to consider both intra- and inter-image information in the data. Moreover, a multiscale enhanced fusion scheme is proposed to blend the input images while maintaining the details at each scale level. The proposed fuzzy-based MNCRF fusion method provided a smoother blending result and a more natural look, while the details in the highlighted and dark regions were preserved simultaneously. The experimental results demonstrated that our work outperformed the state-of-the-art methods not only in several objective quality measures but also in a user study analysis.
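Only the coarse weighting stage lends itself to a compact sketch. Below, a triangular well-exposedness membership stands in for the paper's fuzzy rules, and the exposures are blended by normalised per-pixel weights; the MNCRF refinement and the multiscale enhanced fusion are not reproduced, and the membership centre and width are illustrative defaults.

```python
# Sketch of coarse exposure weighting: a triangular membership scores how well
# exposed each pixel is, and exposures are blended by the normalised weights.
import numpy as np

def well_exposedness(img: np.ndarray, centre: float = 0.5, width: float = 0.5) -> np.ndarray:
    """Triangular membership peaking at mid-grey for an image scaled to [0, 1]."""
    return np.clip(1.0 - np.abs(img - centre) / width, 0.0, 1.0)

def coarse_exposure_fusion(exposures: list) -> np.ndarray:
    """exposures: list of float images in [0, 1] with identical shapes."""
    weights = np.stack([well_exposedness(e) for e in exposures])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12    # normalise per pixel
    return np.sum(weights * np.stack(exposures), axis=0)
```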
Affiliation(s)
- Yu-Hsiu Lin
- Department of Electrical Engineering, Ming Chi University of Technology, New Taipei 243, Taiwan.
- Kai-Lung Hua
- Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan.
- Hsin-Han Lu
- Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 106, Taiwan.
- Wei-Lun Sun
- Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 106, Taiwan.
- Yung-Yao Chen
- Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 106, Taiwan.
|
45
|
MRI/CT fusion based on latent low rank representation and gradient transfer. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.04.013] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
46
|
Yang Y, Wu J, Huang S, Fang Y, Lin P, Que Y. Multimodal Medical Image Fusion Based on Fuzzy Discrimination With Structural Patch Decomposition. IEEE J Biomed Health Inform 2019; 23:1647-1660. [DOI: 10.1109/jbhi.2018.2869096] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
47
|
Li W, Du J, Zhao Z, Long J. Fusion of Medical Sensors Using Adaptive Cloud Model in Local Laplacian Pyramid Domain. IEEE Trans Biomed Eng 2019; 66:1172-1183. [DOI: 10.1109/tbme.2018.2869432] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
48
|
Shahdoosti HR, Tabatabaei Z. MRI and PET/SPECT image fusion at feature level using ant colony based segmentation. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.08.017] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|