1. Vipparla C, Krock T, Nouduri K, Fraser J, AliAkbarpour H, Sagan V, Cheng JRC, Kannappan P. Fusion of Visible and Infrared Aerial Images from Uncalibrated Sensors Using Wavelet Decomposition and Deep Learning. Sensors (Basel) 2024; 24:8217. PMID: 39771950; PMCID: PMC11679027; DOI: 10.3390/s24248217.
Abstract
Multi-modal systems extract information about the environment using specialized sensors that are optimized based on the wavelength of the phenomenology and material interactions. To maximize the entropy, complementary systems operating in regions of non-overlapping wavelengths are optimal. VIS-IR (Visible-Infrared) systems have been at the forefront of multi-modal fusion research and are used extensively to represent information in all-day all-weather applications. Prior to image fusion, the image pairs have to be properly registered and mapped to a common resolution palette. However, due to differences in the device physics of image capture, information from VIS-IR sensors cannot be directly correlated, which is a major bottleneck for this area of research. In the absence of camera metadata, image registration is performed manually, which is not practical for large datasets. Most of the work published in this area assumes calibrated sensors and the availability of camera metadata providing registered image pairs, which limits the generalization capability of these systems. In this work, we propose a novel end-to-end pipeline termed DeepFusion for image registration and fusion. First, we design a recursive crop-and-scale wavelet spectral decomposition (WSD) algorithm for automatically extracting the patch of visible data representing the thermal information. After data extraction, both images are registered to a common resolution palette and forwarded to the DNN for image fusion. The fusion performance of the proposed pipeline is compared and quantified with state-of-the-art classical and DNN architectures on open-source and custom datasets, demonstrating the efficacy of the pipeline. Furthermore, we propose a novel keypoint-based metric for quantifying the quality of the fused output.
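For orientation, the generic wavelet-domain fusion step this pipeline builds on can be sketched in a few lines. The sketch below assumes already-registered, same-size grayscale VIS/IR arrays and uses PyWavelets with an average-approximation / max-absolute-detail rule, a common default rather than the paper's recursive crop-and-scale WSD registration or its DNN fusion stage.

```python
# Minimal wavelet-domain VIS-IR fusion sketch (illustrative only).
# Assumes `vis` and `ir` are registered, same-size float32 grayscale arrays.
import numpy as np
import pywt

def wavelet_fuse(vis: np.ndarray, ir: np.ndarray, wavelet: str = "haar", level: int = 2) -> np.ndarray:
    cv = pywt.wavedec2(vis, wavelet, level=level)
    ci = pywt.wavedec2(ir, wavelet, level=level)
    fused = [(cv[0] + ci[0]) / 2.0]                      # average the approximation band
    for dv, di in zip(cv[1:], ci[1:]):                   # per-level detail triplets (H, V, D)
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(dv, di)))
    return pywt.waverec2(fused, wavelet)

# Example with random stand-in data:
rng = np.random.default_rng(0)
vis = rng.random((256, 256), dtype=np.float32)
ir = rng.random((256, 256), dtype=np.float32)
print(wavelet_fuse(vis, ir).shape)
```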
Affiliation(s)
- Chandrakanth Vipparla: Department of Electrical and Computer Engineering, University of Missouri, Columbia, MO 65211, USA
- Timothy Krock: Department of Electrical and Computer Engineering, University of Missouri, Columbia, MO 65211, USA
- Koundinya Nouduri: Department of Electrical and Computer Engineering, University of Missouri, Columbia, MO 65211, USA
- Joshua Fraser: Department of Electrical and Computer Engineering, University of Missouri, Columbia, MO 65211, USA
- Hadi AliAkbarpour: Department of Computer Science, Saint Louis University, St. Louis, MO 63103, USA
- Vasit Sagan: Department of Computer Science, Saint Louis University, St. Louis, MO 63103, USA; Department of Earth, Environmental and Geospatial Sciences, Saint Louis University, St. Louis, MO 63108, USA
- Jing-Ru C. Cheng: Engineer Research and Development Center, U.S. Army Corps of Engineers, Vicksburg, MS 39180, USA
- Palaniappan Kannappan: Department of Electrical and Computer Engineering, University of Missouri, Columbia, MO 65211, USA
2. Chen Y, Liu A, Liu Y, He Z, Liu C, Chen X. Multi-Dimensional Medical Image Fusion With Complex Sparse Representation. IEEE Trans Biomed Eng 2024; 71:2728-2739. PMID: 38652633; DOI: 10.1109/tbme.2024.3391314.
Abstract
In the field of medical imaging, the fusion of data from diverse modalities plays a pivotal role in advancing our understanding of pathological conditions. Sparse representation (SR), a robust signal modeling technique, has demonstrated noteworthy success in multi-dimensional (MD) medical image fusion. However, a fundamental limitation of existing SR models is their lack of directionality, restricting their efficacy in extracting anatomical details from different imaging modalities. To tackle this issue, we propose a novel directional SR model, termed complex sparse representation (ComSR), specifically designed for medical image fusion. ComSR independently represents MD signals over directional dictionaries along specific directions, allowing precise analysis of the intricate details of MD signals. In addition, current studies in medical image fusion mostly concentrate on either 2D or 3D fusion problems. This work bridges that gap by proposing an MD medical image fusion method based on ComSR, presenting a unified framework for both 2D and 3D fusion tasks. Experimental results across six multi-modal medical image fusion tasks, involving 93 pairs of 2D source images and 20 pairs of 3D source images, substantiate the superiority of the proposed method over 11 state-of-the-art 2D fusion methods and 4 representative 3D fusion methods in terms of both visual quality and objective evaluation.
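ComSR itself is not spelled out in the abstract; the sketch below only illustrates the classical patch-wise SR fusion rule it generalizes (code each patch over a shared dictionary and keep, patch by patch, the code with the larger l1 activity). The random unit-norm dictionary and scikit-learn's OMP coder are assumptions for illustration, not the paper's directional dictionaries.

```python
# Classical patch-wise sparse-representation fusion rule (illustrative sketch,
# not the paper's directional ComSR model).
import numpy as np
from sklearn.decomposition import SparseCoder

def extract_patches(img, p=8):
    h, w = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(0, h - p + 1, p)
                     for j in range(0, w - p + 1, p)])

def sr_fuse(img_a, img_b, dictionary, p=8, k=5):
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=k)
    pa, pb = extract_patches(img_a, p), extract_patches(img_b, p)
    ca, cb = coder.transform(pa), coder.transform(pb)
    keep_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)   # max-l1 activity rule
    fused_codes = np.where(keep_a[:, None], ca, cb)
    fused_patches = fused_codes @ dictionary                     # reconstruct fused patches
    h, w = img_a.shape
    out, idx = np.zeros_like(img_a), 0
    for i in range(0, h - p + 1, p):                             # reassemble non-overlapping patches
        for j in range(0, w - p + 1, p):
            out[i:i + p, j:j + p] = fused_patches[idx].reshape(p, p)
            idx += 1
    return out

rng = np.random.default_rng(0)
D = rng.standard_normal((128, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms (illustrative random dictionary)
a, b = rng.random((64, 64)), rng.random((64, 64))
print(sr_fuse(a, b, D).shape)
```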
3. Sedighin F. Tensor Methods in Biomedical Image Analysis. J Med Signals Sens 2024; 14:16. PMID: 39100745; PMCID: PMC11296571; DOI: 10.4103/jmss.jmss_55_23.
Abstract
In the past decade, tensors have become increasingly attractive in many areas of signal and image processing. The main reason is the inefficiency of matrices in representing and analyzing multimodal and multidimensional datasets: matrices cannot preserve the multidimensional correlation of elements in higher-order datasets, which greatly reduces the effectiveness of matrix-based approaches. Tensor-based approaches, by contrast, have demonstrated promising performance. Together, these factors have encouraged researchers to move from matrices to tensors. Among the different signal and image processing applications, the analysis of biomedical signals and images is of particular importance because of the need to extract accurate information from biomedical datasets, which directly affects patients' health. In addition, in many cases several datasets are recorded simultaneously from a patient; a common example is recording electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) of a patient with schizophrenia. In such situations, tensors are among the most effective tools for the simultaneous exploitation of two (or more) datasets, and several tensor-based methods have accordingly been developed for analyzing biomedical data. In this paper, we aim to provide a comprehensive review of tensor-based methods in biomedical image analysis. The presented study and classification of methods and applications show the importance of tensors in biomedical image enhancement and open new directions for future studies.
Affiliation(s)
- Farnaz Sedighin: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
4. Hu Q, Cai W, Xu S, Hu S, Wang L, He X. Adaptive convolutional sparsity with sub-band correlation in the NSCT domain for MRI image fusion. Phys Med Biol 2024; 69:055022. PMID: 38316044; DOI: 10.1088/1361-6560/ad2636.
Abstract
Objective. Multimodal medical image fusion (MMIF) merges diverse medical images with rich information, boosting diagnostic efficiency and accuracy. Owing to its global optimization and single-valued nature, convolutional sparse representation (CSR) outperforms standard sparse representation (SR). To address sensitivity to highly redundant dictionaries and robustness to misregistration, an adaptive convolutional sparsity scheme that measures sub-band correlation in the non-subsampled contourlet transform (NSCT) domain is proposed for MMIF. Approach. The fusion scheme incorporates four main components: image decomposition into two scales, fusion of detail layers, fusion of base layers, and reconstruction of the two scales. We solved a Tikhonov regularization optimization problem on the source images to obtain the base and detail layers. Then, after CSR processing, the detail layers were sparsely decomposed using pre-trained dictionary filters to obtain initial coefficient maps. Sub-band correlation in the NSCT domain was used to refine the fusion coefficient maps, and sparse reconstruction produced the fused detail layer. Meanwhile, the base layers were fused by averaging, and the final fused image was obtained via two-scale reconstruction. Main results. Experimental validation on clinical image sets revealed that the proposed fusion scheme not only effectively eliminates the interference of partial misregistration, but also outperforms representative state-of-the-art fusion schemes in the preservation of structural and textural details, according to both subjective visual evaluation and objective quality evaluation. Significance. The proposed fusion scheme is competitive owing to its low-redundancy dictionary, robustness to misregistration, and better fusion performance. This is achieved by training the dictionary with minimal samples through CSR to adaptively preserve overcompleteness for the detail layers, and by constructing the fusion activity level with sub-band correlation in the NSCT domain to maintain CSR attributes. Additionally, ordering the NSCT for reverse sparse representation further enhances sub-band correlation and promotes the preservation of structural and textural details.
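As a rough, simplified stand-in for the two-scale step described above (the paper obtains base/detail layers from a Tikhonov-regularized problem and fuses the details via CSR with NSCT sub-band correlation), a Gaussian blur can play the role of the base-layer extractor, with base averaging and a max-absolute detail rule:

```python
# Two-scale fusion sketch: base layers averaged, detail layers fused by max-absolute.
# A Gaussian blur stands in for the paper's Tikhonov-regularized base extraction.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fuse(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    base_a, base_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
    det_a, det_b = img_a - base_a, img_b - base_b
    fused_base = 0.5 * (base_a + base_b)                              # averaging rule for base layers
    fused_det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # max-absolute rule for details
    return fused_base + fused_det

rng = np.random.default_rng(1)
mri_t1, mri_t2 = rng.random((128, 128)), rng.random((128, 128))   # stand-in registered slices
print(two_scale_fuse(mri_t1, mri_t2).shape)
```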
Affiliation(s)
- Qiu Hu: School of Information Science and Engineering, NingboTech University, Ningbo 315100, People's Republic of China
- Weiming Cai: School of Information Science and Engineering, NingboTech University, Ningbo 315100, People's Republic of China; Zhejiang Engineering Research Center for Intelligent Marine Ranch Equipment, Ningbo 315100, People's Republic of China
- Shuwen Xu: Third Research Institute of China Electronics Technology Group Corporation, Beijing 100846, People's Republic of China
- Shaohai Hu: Institute of Information Science, Beijing Jiaotong University, Beijing 100044, People's Republic of China; Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing 100044, People's Republic of China
- Lang Wang: School of Information Science and Engineering, NingboTech University, Ningbo 315100, People's Republic of China
- Xinyi He: Ningbo Xiaoshi High School, Ningbo 315100, People's Republic of China
5. Zhang R, Wang Z, Sun H, Deng L, Zhu H. TDFusion: When Tensor Decomposition Meets Medical Image Fusion in the Nonsubsampled Shearlet Transform Domain. Sensors (Basel) 2023; 23:6616. PMID: 37514910; PMCID: PMC10384420; DOI: 10.3390/s23146616.
Abstract
In this paper, a unified optimization model for medical image fusion based on tensor decomposition and the non-subsampled shearlet transform (NSST) is proposed. The model uses NSST and tensor decomposition to fuse the high-frequency (HF) and low-frequency (LF) parts of the two source images into a mixed-frequency fused image; in general, low-frequency and high-frequency information is integrated from the perspective of tensor decomposition (TD) fusion. Due to the structural differences between the high-frequency and low-frequency representations, potential information loss may occur in the fused images. To address this issue, we introduce a joint static and dynamic guidance (JSDG) technique to complement the HF/LF information. To improve the quality of the fused images, we combine the alternating direction method of multipliers (ADMM) with gradient descent for parameter optimization. Finally, the fused images are reconstructed by applying the inverse NSST to the fused high-frequency and low-frequency bands. Extensive experiments confirm the superiority of the proposed TDFusion over the comparison methods.
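The tensor-decomposition ingredient can be illustrated in isolation: stack corresponding sub-bands from the two sources into a third-order tensor and take a low-rank CP approximation as a joint representation. The sketch below assumes TensorLy's NumPy backend and its parafac/cp_to_tensor API; the paper's JSDG term and ADMM/gradient-descent optimization are not reproduced.

```python
# Low-rank CP approximation of stacked sub-bands (illustrative of the TD idea only;
# the paper's JSDG regularization and ADMM solver are not reproduced here).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
band_a = rng.random((64, 64))            # stand-in high-frequency sub-band from source A
band_b = rng.random((64, 64))            # corresponding sub-band from source B

stack = tl.tensor(np.stack([band_a, band_b], axis=-1))   # 64 x 64 x 2 tensor
cp = parafac(stack, rank=8, n_iter_max=200)              # CP/PARAFAC decomposition
low_rank = tl.cp_to_tensor(cp)                            # shared low-rank structure

fused_band = low_rank.mean(axis=-1)                       # naive fusion of the joint model
print(fused_band.shape)
```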
Affiliation(s)
- Rui Zhang: Jiangsu Province Key Lab on Image Processing and Image Communication, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Zhongyang Wang: Jiangsu Province Key Lab on Image Processing and Image Communication, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Haoze Sun: Jiangsu Province Key Lab on Image Processing and Image Communication, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Lizhen Deng: National Engineering Research Center of Communication and Network Technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Hu Zhu: Jiangsu Province Key Lab on Image Processing and Image Communication, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
6. Feature generation and multi-sequence fusion based deep convolutional network for breast tumor diagnosis with missing MR sequences. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104536.
7. Ramprasad MVS, Rahman MZU, Bayleyegn MD. A Deep Probabilistic Sensing and Learning Model for Brain Tumor Classification With Fusion-Net and HFCMIK Segmentation. IEEE Open J Eng Med Biol 2022; 3:178-188. PMID: 36712319; PMCID: PMC9870266; DOI: 10.1109/ojemb.2022.3217186.
Abstract
Goal: Implementation of an artificial intelligence-based medical diagnosis tool for brain tumor classification, called BTFSC-Net. Methods: Medical images are preprocessed using a hybrid probabilistic Wiener filter (HPWF). A deep learning convolutional neural network (DLCNN) is used to fuse MRI and CT images with robust edge analysis (REA) properties, which identify the slopes and edges of the source images. Then, hybrid fuzzy c-means integrated k-means (HFCMIK) clustering is used to segment the disease-affected region from the fused image. Further, hybrid features such as texture, colour, and low-level features are extracted from the fused image using gray-level co-occurrence matrix (GLCM) and redundant discrete wavelet transform (RDWT) descriptors. Finally, a deep learning-based probabilistic neural network (DLPNN) is used to classify malignant and benign tumors. BTFSC-Net attained 99.21% segmentation accuracy and 99.46% classification accuracy. Conclusions: The simulations showed that BTFSC-Net outperformed existing methods.
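Of the hand-crafted features in this pipeline, the GLCM descriptors are the easiest to show concretely. A minimal sketch with scikit-image's graycomatrix/graycoprops follows; the distances, angles, and property set are illustrative choices, not the paper's exact configuration.

```python
# GLCM texture descriptors of the kind used as hand-crafted features in such pipelines
# (illustrative; not the paper's exact feature configuration).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
fused = (rng.random((128, 128)) * 255).astype(np.uint8)   # stand-in fused image

glcm = graycomatrix(fused, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print({k: v.round(3) for k, v in features.items()})
```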
Affiliation(s)
- M V S Ramprasad: Koneru Lakshmaiah Education Foundation (K L University), Guntur 522302, India; GITAM (Deemed to be University), Visakhapatnam, AP 522502, India
- Md Zia Ur Rahman: Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (K L University), Vaddeswaram, Guntur 522502, India
8. Effect of Medical Image Fusion in the Treatment of Poststroke Limb Dysfunction with Acupuncture and Moxibustion of Traditional Chinese Medicine. Biomed Res Int 2022; 2022:8380251. PMID: 36212715; PMCID: PMC9537003; DOI: 10.1155/2022/8380251.
Abstract
According to relevant data, the morbidity and mortality of stroke in China remain high; without effective treatment they will continue to rise, and stroke may become the second leading disease burden in the world. With the continuing advancement and improvement of clinical technology in China, the mortality of stroke patients has dropped significantly. However, after clinical treatment many patients still present a range of sequelae, which makes it difficult to improve their quality of life. The purpose of this paper was to study the effect of medical image fusion in the treatment of poststroke limb dysfunction with traditional Chinese medicine (TCM) acupuncture. The related concepts of medical image fusion and the roles of acupuncture and moxibustion in TCM, stroke, and limb dysfunction are introduced. In this study, acupuncture and moxibustion were analyzed to explore the therapeutic effect of this type of therapy on upper-limb dysfunction caused by phlegm and blood stasis blocking the collaterals, and to provide a scientific method for the treatment and efficacy assessment of upper-limb motor dysfunction after stroke. Before treatment, there was no significant difference in general data or in any index score between the two groups (P > 0.05), and the baseline data showed good balance and comparability. After 3 months of treatment, the FMA score, NIHSS score, Barthel Index, and VAS score of both groups differed significantly from those before treatment (P < 0.05), and there was a significant difference between the trial group and the control group (P < 0.05). The conclusion of the trial is that acupuncture combined with pricking and cupping can significantly improve the motor function of stroke patients with limb dysfunction caused by phlegm and blood stasis blocking the collaterals.
9. Liu Y, Mu F, Shi Y, Cheng J, Li C, Chen X. Brain tumor segmentation in multimodal MRI via pixel-level and feature-level image fusion. Front Neurosci 2022; 16:1000587. PMID: 36188482; PMCID: PMC9515796; DOI: 10.3389/fnins.2022.1000587.
Abstract
Brain tumor segmentation in multimodal MRI volumes is of great significance to disease diagnosis, treatment planning, survival prediction and other relevant tasks. However, most existing brain tumor segmentation methods fail to make sufficient use of multimodal information. The most common approach is to simply stack the original multimodal images or their low-level features as the model input, and many methods treat the data of each modality as equally important for a given segmentation target. In this paper, we introduce multimodal image fusion techniques, including both pixel-level and feature-level fusion, for brain tumor segmentation, aiming to achieve more sufficient and finer utilization of multimodal information. At the pixel level, we present a convolutional network named PIF-Net for 3D MR image fusion to enrich the input modalities of the segmentation model. The fused modalities strengthen the association among the different types of pathological information captured by the source modalities, leading to a modality enhancement effect. At the feature level, we design an attention-based modality selection feature fusion (MSFF) module for multimodal feature refinement to address the differences among modalities for a given segmentation target. A two-stage brain tumor segmentation framework is accordingly proposed based on the above components and the popular V-Net model. Experiments are conducted on the BraTS 2019 and BraTS 2020 benchmarks. The results demonstrate that the proposed pixel-level and feature-level fusion components effectively improve the segmentation accuracy of brain tumors.
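The MSFF module is not reproduced here, but the general idea of attention-weighted fusion of per-modality feature maps can be sketched in PyTorch; the gating design and layer sizes below are assumptions for illustration, not the authors' architecture.

```python
# Generic attention-weighted fusion of per-modality 3D feature maps (a sketch of the
# idea behind modality-selection fusion; gating and sizes are assumptions, not the
# paper's MSFF design).
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    def __init__(self, channels: int, num_modalities: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                                   # global context per channel
            nn.Conv3d(channels * num_modalities, num_modalities, kernel_size=1),
            nn.Softmax(dim=1),                                         # one weight per modality
        )
        self.num_modalities = num_modalities

    def forward(self, feats):                                           # list of [B, C, D, H, W] tensors
        stacked = torch.stack(feats, dim=1)                             # [B, M, C, D, H, W]
        weights = self.gate(torch.cat(feats, dim=1))                    # [B, M, 1, 1, 1]
        weights = weights.view(-1, self.num_modalities, 1, 1, 1, 1)
        return (stacked * weights).sum(dim=1)                           # [B, C, D, H, W]

# Usage with stand-in per-modality feature maps:
fuse = ModalityAttentionFusion(channels=16, num_modalities=2)
f1, f2 = torch.randn(1, 16, 8, 32, 32), torch.randn(1, 16, 8, 32, 32)
print(fuse([f1, f2]).shape)   # torch.Size([1, 16, 8, 32, 32])
```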
Affiliation(s)
- Yu Liu: Department of Biomedical Engineering, Hefei University of Technology, Hefei, China; Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, Hefei University of Technology, Hefei, China
- Fuhao Mu: Department of Biomedical Engineering, Hefei University of Technology, Hefei, China
- Yu Shi: Department of Biomedical Engineering, Hefei University of Technology, Hefei, China
- Juan Cheng: Department of Biomedical Engineering, Hefei University of Technology, Hefei, China; Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, Hefei University of Technology, Hefei, China
- Chang Li: Department of Biomedical Engineering, Hefei University of Technology, Hefei, China; Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, Hefei University of Technology, Hefei, China
- Xun Chen (corresponding author): Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
10. An automatic Computer-Aided Diagnosis system based on the Multimodal fusion of Breast Cancer (MF-CAD). Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.102914.
11. Xu J, Wan C, Yang W, Zheng B, Yan Z, Shen J. A novel multi-modal fundus image fusion method for guiding the laser surgery of central serous chorioretinopathy. Math Biosci Eng 2021; 18:4797-4816. PMID: 34198466; DOI: 10.3934/mbe.2021244.
Abstract
Angiography and color fundus images are of great assistance in localizing central serous chorioretinopathy (CSCR) lesions. However, because these two modalities work independently in guiding laser surgery, they bring considerable inconvenience to ophthalmologists. Hence, a novel fundus image fusion method in the non-subsampled contourlet transform (NSCT) domain, aiming to integrate the multi-modal CSCR information, is proposed. Specifically, the source images are initially decomposed into high-frequency and low-frequency components based on NSCT. Then, an improved deep learning-based method is employed for the fusion of the low-frequency components, which alleviates the tedious process of manually designing fusion rules and enhances the smoothness of the fused images. Fusion of the high-frequency components based on a pulse-coupled neural network (PCNN) follows, to facilitate the integration of detailed information. Finally, the fused images are obtained by applying the inverse transform to the above fusion components. Qualitative and quantitative experiments demonstrate that the proposed scheme is superior to the baseline multi-scale transform (MST) methods in most cases, which not only implies its potential in multi-modal fundus image fusion but also expands the research direction of MST-based fusion methods.
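The PCNN used for the high-frequency rule can be sketched in simplified form: firing counts accumulated over a few iterations serve as an activity measure, and the coefficient with the larger count wins. The parameters below are generic textbook-style values, not the paper's, and the NSCT decomposition and deep-learning low-frequency rule are omitted.

```python
# Simplified pulse-coupled neural network (PCNN) firing map used as an activity
# measure for high-frequency coefficient selection (generic parameters; not the
# paper's exact PCNN variant).
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(coeff: np.ndarray, iterations: int = 20,
                     alpha_theta: float = 0.2, v_theta: float = 20.0,
                     beta: float = 0.1) -> np.ndarray:
    stim = np.abs(coeff)
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    fired = np.zeros_like(stim)
    theta = np.ones_like(stim)
    counts = np.zeros_like(stim)
    for _ in range(iterations):
        link = convolve(fired, kernel, mode="constant")         # linking input from neighbours
        u = stim * (1.0 + beta * link)                          # internal activity
        fired = (u > theta).astype(float)                       # pulse generator
        counts += fired
        theta = np.exp(-alpha_theta) * theta + v_theta * fired  # dynamic threshold
    return counts

def fuse_highfreq(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    ca, cb = pcnn_fire_counts(band_a), pcnn_fire_counts(band_b)
    return np.where(ca >= cb, band_a, band_b)                   # larger firing count wins

rng = np.random.default_rng(0)
print(fuse_highfreq(rng.standard_normal((64, 64)), rng.standard_normal((64, 64))).shape)
```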
Affiliation(s)
- Jianguo Xu: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Cheng Wan: College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Weihua Yang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing 210029, China
- Bo Zheng: School of Information Engineering, Huzhou University, Huzhou 313000, China
- Zhipeng Yan: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing 210029, China
- Jianxin Shen: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
12. Liu J, Li X, Shen S, Jiang X, Chen W, Li Z. Research on Panoramic Stitching Algorithm of Lateral Cranial Sequence Images in Dental Multifunctional Cone Beam Computed Tomography. Sensors (Basel) 2021; 21:2200. PMID: 33801108; PMCID: PMC8004189; DOI: 10.3390/s21062200.
Abstract
In the design of dental multifunctional cone beam computed tomography, a linear scanning strategy not only saves equipment cost but also avoids the need to reposition patients when acquiring lateral cranial sequence images. To obtain panoramic images, we propose a local normalized cross-correlation stitching algorithm based on a Gaussian mixture model (GMM). First, the block-matching and 3D filtering (BM3D) algorithm is used to remove quantum and impulse noise according to the characteristics of X-ray images. Then, segmentation of the irrelevant region and extraction of the region of interest are performed with the GMM. The local normalized cross-correlation is used to complete the registration with a multi-resolution strategy based on the wavelet transform and the particle swarm optimization (PSO) algorithm. Finally, image fusion is achieved with a weighted smoothing fusion algorithm. The experimental results show that the panoramic images obtained by this method perform well in both subjective visual and objective quality evaluations and can be applied to the preoperative diagnosis of clinical dental deformity and postoperative effect evaluation.
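Among the stages listed above, the local normalized cross-correlation similarity is the most self-contained. A plain NumPy sketch of NCC-driven strip alignment follows; the BM3D denoising, GMM segmentation, wavelet multi-resolution, and PSO search are not reproduced, and the synthetic strips are stand-ins.

```python
# Normalized cross-correlation between a template and a search window, the
# similarity measure driving the registration step (plain NumPy sketch).
import numpy as np

def ncc(template: np.ndarray, window: np.ndarray) -> float:
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return float((t * w).sum() / denom) if denom > 0 else 0.0

def best_horizontal_shift(left: np.ndarray, right: np.ndarray,
                          overlap: int = 64, max_shift: int = 32) -> int:
    """Scan candidate shifts between two adjacent X-ray strips and return the one
    whose overlapping columns correlate best."""
    template = left[:, -overlap:]
    scores = [ncc(template, right[:, s:s + overlap]) for s in range(max_shift + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
strip = rng.random((256, 200))
left, right = strip[:, :140], strip[:, 60:]      # synthetic strips with a known 80-px overlap
print(best_horizontal_shift(left, right))        # recovers the alignment shift (16)
```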
Affiliation(s)
- Junyuan Liu: Medical Electronics and Information Technology Engineering Research Center, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xi Li: Foundation Department, Chongqing Medical and Pharmaceutical College, Chongqing 401331, China; School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Siwan Shen: Medical Electronics and Information Technology Engineering Research Center, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xiaoming Jiang: Medical Electronics and Information Technology Engineering Research Center, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Wang Chen: Medical Electronics and Information Technology Engineering Research Center, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Zhangyong Li (corresponding author): Medical Electronics and Information Technology Engineering Research Center, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
13. Multi-modal brain image fusion based on multi-level edge-preserving filtering. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2020.102280.
14. Wang H, Han G, Zhang B, Tao G, Cai H. Exsavi: Excavating both sample-wise and view-wise relationships to boost multi-view subspace clustering. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.07.060.
15.
Abstract
In image-based medical decision-making, different modalities of medical images of a given organ of a patient are captured. Each image represents a modality that renders the examined organ differently, leading to different observations of a given phenomenon (such as stroke). Accurate analysis of each of these modalities supports more appropriate medical decisions. Multimodal medical imaging is a research field devoted to the development of robust algorithms that enable the fusion of image information acquired by different sets of modalities. In this paper, a novel multimodal medical image fusion algorithm is proposed for a wide range of medical diagnostic problems. It is based on a boundary-measured pulse-coupled neural network (PCNN) fusion strategy and an energy-attribute fusion strategy in the non-subsampled shearlet transform (NSST) domain. Our algorithm was validated on datasets covering several diseases, namely glioma, Alzheimer's disease, and metastatic bronchogenic carcinoma, containing more than 100 image pairs. Qualitative and quantitative evaluation verifies that the proposed algorithm outperforms most current algorithms, providing important ideas for medical diagnosis.
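The energy-attribute idea can be sketched independently of the NSST machinery: weight each source by its local energy within a small window. The window size and soft weighting below are generic choices, not the paper's exact rule.

```python
# Local-energy weighted fusion of low-frequency bands, a generic form of the
# "energy attribute" idea (window size and weighting are assumptions; the NSST
# decomposition and boundary-measured PCNN rule are not reproduced).
import numpy as np
from scipy.ndimage import uniform_filter

def energy_fuse(band_a: np.ndarray, band_b: np.ndarray, win: int = 7) -> np.ndarray:
    ea = uniform_filter(band_a ** 2, size=win)      # local energy of source A
    eb = uniform_filter(band_b ** 2, size=win)      # local energy of source B
    wa = ea / (ea + eb + 1e-12)                      # soft, energy-proportional weight
    return wa * band_a + (1.0 - wa) * band_b

rng = np.random.default_rng(0)
lf_mri, lf_ct = rng.random((128, 128)), rng.random((128, 128))   # stand-in low-frequency bands
print(energy_fuse(lf_mri, lf_ct).shape)
```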
16. Wang Y, Wang Y. Fusion of 3-D medical image gradient domain based on detail-driven and directional structure tensor. J Xray Sci Technol 2020; 28:1001-1016. PMID: 32675434; DOI: 10.3233/xst-200684.
Abstract
BACKGROUND: Multi-modal medical image fusion plays a crucial role in many areas of modern medicine, such as diagnosis and therapy planning. OBJECTIVE: Because the structure tensor preserves image geometry, we used it to construct a directional structure tensor and further proposed an improved 3-D medical image fusion method. METHOD: Local entropy metrics were used to construct the gradient weights of the different source images, and the eigenvectors of the traditional structure tensor were combined with the second-order derivatives of the image to construct the directional structure tensor. In addition, guided filtering was employed to obtain the detail components of the source images and to construct a fused gradient field with enhanced detail. Finally, the fused image was generated by solving a functional minimization problem. RESULTS AND CONCLUSION: Experimental results demonstrated that the new method is superior to the traditional structure tensor and multi-scale analysis methods in both visual effect and quantitative assessment.
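A 2-D, slice-level sketch of the plain structure tensor (gradients, Gaussian-smoothed outer products, eigenvalue-based coherence) is given below in NumPy/SciPy; the directional extension with second-order derivatives and the entropy-based gradient weights described in the abstract are not reproduced.

```python
# 2D structure tensor of an image slice (sketch of the basic construct only;
# the paper's directional extension and entropy-based gradient weights are
# not reproduced here).
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_2d(img: np.ndarray, sigma: float = 1.5):
    gy, gx = np.gradient(img.astype(float))
    jxx = gaussian_filter(gx * gx, sigma)          # smoothed outer products of the gradient
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    return jxx, jxy, jyy

def coherence(jxx, jxy, jyy):
    """Eigenvalue-based coherence: high where the slice has a dominant orientation."""
    tmp = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam1, lam2 = 0.5 * (jxx + jyy + tmp), 0.5 * (jxx + jyy - tmp)
    return (lam1 - lam2) / (lam1 + lam2 + 1e-12)

rng = np.random.default_rng(0)
slice2d = rng.random((128, 128))
print(coherence(*structure_tensor_2d(slice2d)).shape)
```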
Affiliation(s)
- Yu Wang: School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Yuanjun Wang: School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China