1. Premalatha R, Dhanalakshmi P. Robust neutrosophic fusion design for magnetic resonance (MR) brain images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104824]
2. Using of Laplacian Re-decomposition image fusion algorithm for glioma grading with SWI, ADC, and FLAIR images. Pol J Med Phys Eng 2021. [DOI: 10.2478/pjmpe-2021-0031]
Abstract
Introduction: Glioma is most often classified into low- and high-grade groups based on the tumor's growth potential and aggressiveness. Traditionally, tissue sampling is used to determine the glioma grade. The aim of this study is to evaluate the efficiency of the Laplacian Re-decomposition (LRD) medical image fusion algorithm for glioma grading using advanced magnetic resonance imaging (MRI) and to identify the best image combination for grading.
Material and methods: Sixty-one patients (17 low-grade and 44 high-grade) underwent susceptibility-weighted imaging (SWI), apparent diffusion coefficient (ADC) mapping, and fluid-attenuated inversion recovery (FLAIR) MRI. The LRD medical image fusion algorithm was used to fuse the different MRI images. To evaluate the effectiveness of LRD in classifying glioma grade, we compared the parameters of the receiver operating characteristic (ROC) curves.
Results: The average Relative Signal Contrast (RSC) of SWI and ADC maps in high-grade glioma is significantly lower than the RSC in low-grade glioma. No significant difference was detected between low- and high-grade glioma on FLAIR images. In our study, the area under the curve (AUC) for low- versus high-grade glioma differentiation was 0.871 on SWI and 0.833 on ADC maps.
Conclusions: Fusing the SWI and ADC map with the LRD medical image fusion algorithm increases the AUC for low- versus high-grade glioma separation to 0.978. We conclude that fusing the SWI and ADC map with LRD yields the highest diagnostic accuracy for low- versus high-grade glioma differentiation, and that the LRD medical image fusion algorithm can be used for glioma grading.
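The evaluation above boils down to comparing ROC curves built from per-patient Relative Signal Contrast values. As a minimal sketch of that comparison (with made-up RSC numbers, not the study's data), the AUCs could be computed like this:

```python
# Minimal sketch of the ROC comparison described above; the RSC values
# below are made-up placeholders, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# 0 = low-grade, 1 = high-grade (the study had 17 low, 44 high)
labels = np.concatenate([np.zeros(17), np.ones(44)])

# Hypothetical RSC measurements; high-grade RSCs are lower on SWI/ADC
# per the paper, so scores are negated before computing the AUC.
rsc_swi = np.concatenate([rng.normal(0.60, 0.10, 17), rng.normal(0.40, 0.10, 44)])
rsc_fused = np.concatenate([rng.normal(0.65, 0.05, 17), rng.normal(0.35, 0.05, 44)])

for name, rsc in [("SWI alone", rsc_swi), ("SWI+ADC fused (LRD)", rsc_fused)]:
    auc = roc_auc_score(labels, -rsc)  # lower RSC => high grade
    print(f"{name}: AUC = {auc:.3f}")
```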
3. Preliminary study of multiple b-value diffusion-weighted images and T1 post enhancement magnetic resonance imaging images fusion with Laplacian Re-decomposition (LRD) medical fusion algorithm for glioma grading. Eur J Radiol Open 2021; 8:100378. [PMID: 34632000] [PMCID: PMC8487979] [DOI: 10.1016/j.ejro.2021.100378]
Abstract
Highlights: The LRD medical image fusion algorithm can be used with MRI images for glioma grading. Fusion of DWI (b50) and T1 post-enhancement (T1Gd) images by LRD has the highest diagnostic value for glioma grading.
Background: The grade of a brain tumor is thought to be the most significant and crucial component in treatment management. Recent developments in medical imaging techniques have led to the introduction of non-invasive methods for brain tumor grading, such as different magnetic resonance imaging (MRI) protocols. Combining different MRI protocols with fusion algorithms is used to improve diagnostic accuracy in tumor grading. This paper investigates the efficiency of the Laplacian Re-decomposition (LRD) fusion algorithm for glioma grading.
Procedures: In this study, 69 patients were examined with MRI, and T1 post-enhancement (T1Gd) and diffusion-weighted images (DWI) were obtained. To evaluate LRD performance for glioma grading, we compared the parameters of the receiver operating characteristic (ROC) curves.
Findings: We found that the average Relative Signal Contrast (RSC) for high-grade gliomas is greater than the RSC for low-grade gliomas in T1Gd images and in all fused images. No significant difference in the RSCs of DWI images was observed between low-grade and high-grade gliomas. However, a significant RSC difference was detected between grade III and grade IV in the T1Gd, b50, and all fused images.
Conclusions: This research suggests that T1Gd images are an appropriate imaging protocol for separating low-grade and high-grade gliomas. According to the findings of this study, the LRD fusion algorithm may be used to increase the diagnostic value of T1Gd and DWI images for distinguishing grade III from grade IV glioma. In conclusion, this article has emphasized the significance of the LRD fusion algorithm as a tool for differentiating grade III and IV gliomas.
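LRD re-decomposes Laplacian pyramid bands into overlapping and non-overlapping domains with decision-graph and local-energy-maximum rules; reproducing all of that is beyond a short sketch, but the classic Laplacian-pyramid fusion it builds on can be outlined as follows. The max-absolute detail rule and averaged base band here are common simplifications, not the paper's DGR/LEM rules:

```python
# Simplified Laplacian-pyramid fusion of two registered grayscale MRI
# slices (e.g., T1Gd and DWI b50). This is the base scheme that LRD
# re-decomposes further; the fusion rules here are deliberately simple.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    lp.append(gp[-1])  # coarsest Gaussian level as the base band
    return lp

def fuse_lp(a, b, levels=4):
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    # max-absolute rule for detail bands, average for the base band
    fused = [np.where(np.abs(pa) >= np.abs(pb), pa, pb)
             for pa, pb in zip(la[:-1], lb[:-1])]
    fused.append(0.5 * (la[-1] + lb[-1]))
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)

# usage: both inputs must be pre-registered and the same size
# fused = fuse_lp(cv2.imread("t1gd.png", 0), cv2.imread("dwi_b50.png", 0))
```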
Key Words
- ADC, apparent diffusion coefficient
- AUC, Area Under the Curve
- BOLD, blood oxygen level dependent imaging
- CBV, Cerebral Blood Volume
- DCE, Dynamic contrast enhancement
- DGR, Decision Graph Re-decomposition
- DWI, Diffusion-weighted imaging
- Diffusion-weighted images
- FA, flip angle
- Fusion algorithm
- GBM, glioblastomas
- GDIE, Gradient Domain Image Enhancement
- Glioma
- Grade
- IRS, Inverse Re-decomposition Scheme
- LEM, Local Energy Maximum
- LP, Laplacian Pyramid
- LRD, Laplacian Re-decomposition
- Laplacian Re-decomposition
- MLD, Maximum Local Difference
- MRI, magnetic resonance imaging
- MRS, Magnetic resonance spectroscopy
- MST, Multi-scale transform
- Magnetic resonance imaging
- NOD, Non-overlapping domain
- OD, overlapping domain
- PACS, picture archiving and communication system
- ROC, receiver operating characteristic curve
- ROI, regions of interest
- RSC, Relative Signal Contrast
- SCE, Susceptibility contrast enhancement
- T1Gd, T1 post enhancement
- TE, time of echo
- TI, time of inversion
- TR, repetition time
4. Multi scale decomposition based medical image fusion using convolutional neural network and sparse representation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102789]
5. Tawfik N, Elnemr HA, Fakhr M, Dessouky MI, Abd El-Samie FE. Survey study of multimodality medical image fusion methods. Multimed Tools Appl 2021; 80:6369-6396. [DOI: 10.1007/s11042-020-08834-5]
6. Wang K, Zheng M, Wei H, Qi G, Li Y. Multi-Modality Medical Image Fusion Using Convolutional Neural Network and Contrast Pyramid. Sensors (Basel) 2020; 20:E2169. [PMID: 32290472] [PMCID: PMC7218740] [DOI: 10.3390/s20082169]
Abstract
Medical image fusion techniques can fuse medical images from different modalities to make medical diagnosis more reliable and accurate, and they play an increasingly important role in many clinical applications. To obtain a fused image with high visual quality and clear structural details, this paper proposes a convolutional neural network (CNN) based medical image fusion algorithm. The proposed algorithm uses a trained Siamese convolutional network to fuse the pixel activity information of the source images and generate a weight map, while a contrast pyramid is used to decompose the source images. The source images are then integrated across spatial frequency bands with a weighted fusion operator. Comparative experiments show that the proposed fusion algorithm effectively preserves the detailed structure information of the source images and achieves good visual quality.
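As a rough sketch of the weight-map half of this pipeline, the tiny Siamese (shared-branch) network below scores per-pixel activity of two registered sources. It is untrained and invented for illustration, and the contrast-pyramid decomposition and band-wise fusion of the actual method are omitted:

```python
# Sketch of a weight-map generator: a small Siamese CNN whose shared
# branch extracts pixel-activity features from each source, followed
# by a 1x1 comparison layer. Untrained; for illustration only.
import torch
import torch.nn as nn

class SiameseWeight(nn.Module):
    def __init__(self):
        super().__init__()
        # shared branch: both sources pass through the same weights
        self.branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.score = nn.Conv2d(32, 1, 1)  # compare concatenated features

    def forward(self, a, b):
        fa, fb = self.branch(a), self.branch(b)
        w = torch.sigmoid(self.score(torch.cat([fa, fb], dim=1)))
        return w  # per-pixel weight for source a; (1 - w) weights source b

# usage on two registered grayscale tensors of shape (1, 1, H, W):
# w = SiameseWeight()(img_a, img_b); fused = w * img_a + (1 - w) * img_b
```

In the paper the network is trained so that the weight map tracks which source carries more salient activity at each pixel, and the blend is applied per pyramid band rather than directly in the spatial domain as above.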
Affiliation(s)
- Kunpeng Wang: School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China; Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang 621010, China
- Mingyao Zheng: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Hongyan Wei: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Guanqiu Qi: Computer Information Systems Department, State University of New York at Buffalo State, Buffalo, NY 14222, USA
- Yuanyuan Li: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
7. Du J, Li W, Tan H. Three-Layer Image Representation by an Enhanced Illumination-Based Image Fusion Method. IEEE J Biomed Health Inform 2019; 24:1169-1179. [PMID: 31352358] [DOI: 10.1109/jbhi.2019.2930978]
Abstract
Recently developed multiscale fusion methods can be improved in two ways: with an advanced image decomposition scheme and with an advanced fusion rule. In this paper, a method based on a three-layer image decomposition and an enhanced illumination fusion rule is proposed. The method includes three steps. First, each input image is decomposed into its corresponding smooth, texture, and edge layers using defined local extrema and low-pass filters in the spatial domain. Second, three different strategies are applied as fusion rules for the three-layer representation. To preserve the illumination closely related to tumors, the illumination is corrected by applying a higher contrast to the decomposed image details, including the texture and edge inputs, such as those found in grayscale CT and MRI images. The final fused image is created by adding the normalized smooth, texture, and edge image layers. Experiments demonstrate that the proposed method performs better than existing state-of-the-art fusion methods.
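A minimal sketch of such a three-layer split and additive recombination is given below, assuming two registered grayscale inputs. The Gaussian filters and the max-absolute/average fusion rules are simplifications of the paper's local-extrema envelopes and illumination-corrected rules:

```python
# Rough three-layer decomposition in the spirit of the method:
# a wide Gaussian gives the smooth base, a band between two Gaussians
# approximates edges, and the fine residual serves as texture.
import cv2
import numpy as np

def three_layers(img):
    img = img.astype(np.float32)
    smooth = cv2.GaussianBlur(img, (0, 0), sigmaX=8)  # low-pass base layer
    mid = cv2.GaussianBlur(img, (0, 0), sigmaX=2)
    edge = mid - smooth      # coarse structure / edge layer
    texture = img - mid      # fine residual detail
    return smooth, texture, edge

def fuse_three_layer(a, b):
    sa, ta, ea = three_layers(a)
    sb, tb, eb = three_layers(b)
    smooth = 0.5 * (sa + sb)                              # average base
    texture = np.where(np.abs(ta) >= np.abs(tb), ta, tb)  # max-abs detail
    edge = np.where(np.abs(ea) >= np.abs(eb), ea, eb)
    # final image is the sum of the fused layers, as in the abstract
    return np.clip(smooth + texture + edge, 0, 255).astype(np.uint8)
```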
8. Qi G, Wang H, Haner M, Weng C, Chen S, Zhu Z. Convolutional neural network based detection and judgement of environmental obstacle in vehicle operation. CAAI Trans Intell Technol 2019. [DOI: 10.1049/trit.2018.1045]
Affiliation(s)
- Guanqiu Qi: Department of Mathematics & Computer and Information Science, Mansfield University of Pennsylvania, Mansfield, PA 16933, USA
- Huan Wang: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Matthew Haner: Department of Mathematics & Computer and Information Science, Mansfield University of Pennsylvania, Mansfield, PA 16933, USA
- Chenjie Weng: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Sixin Chen: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Zhiqin Zhu: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
9. Zhou F, Li X, Zhou M, Chen Y, Tan H. A New Dictionary Construction Based Multimodal Medical Image Fusion Framework. Entropy 2019; 21:e21030267. [PMID: 33266982] [PMCID: PMC7514747] [DOI: 10.3390/e21030267]
Abstract
Training a good dictionary is the key to a successful sparse-representation-based image fusion method. In this paper, we propose a novel dictionary learning scheme for medical image fusion. First, we reinforce the weak information of images by extracting their multi-layer details and adding them back to generate informative patches, and we introduce a simple and effective multi-scale sampling to implement a multi-scale representation of patches while reducing the computational cost. Second, we design a neighborhood energy metric and a multi-scale spatial frequency metric to cluster image patches with similar brightness and detail information into their respective patch groups. We then train the energy sub-dictionary and the detail sub-dictionary separately by K-SVD. Finally, we combine the sub-dictionaries to construct a final dictionary that is complete, compact, and informative. As a main contribution, the proposed online dictionary learning not only obtains an informative and compact dictionary, but also addresses defects of traditional dictionary learning algorithms, such as superfluous patches and low computational efficiency. Experimental results show that our algorithm is superior to some state-of-the-art dictionary-learning-based techniques in both subjective visual effects and objective evaluation criteria.
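A toy sketch of the sparse-coding fusion stage follows. scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD, a single dictionary replaces the paper's energy/detail sub-dictionaries, and a simple max-L1 activity rule selects between the two sources' sparse codes:

```python
# Toy sparse-representation fusion: learn one dictionary over patches
# from both sources, code each source with OMP, keep the sparser-but-
# stronger code per patch, and rebuild. Practical only on small crops,
# since all overlapping patches are extracted.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def fuse_sparse(a, b, patch=8, atoms=64):
    pa = extract_patches_2d(a.astype(np.float32), (patch, patch))
    pb = extract_patches_2d(b.astype(np.float32), (patch, patch))
    X = np.vstack([pa, pb]).reshape(len(pa) + len(pb), -1)
    means = X.mean(axis=1, keepdims=True)  # per-patch DC component
    dico = MiniBatchDictionaryLearning(n_components=atoms, max_iter=10,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5)
    dico.fit(X - means)
    ca = dico.transform(pa.reshape(len(pa), -1) - means[:len(pa)])
    cb = dico.transform(pb.reshape(len(pb), -1) - means[len(pa):])
    # max-L1 rule: per patch, keep the code with higher total activity
    keep_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)
    codes = np.where(keep_a[:, None], ca, cb)
    base = np.where(keep_a[:, None], means[:len(pa)], means[len(pa):])
    rec = codes @ dico.components_ + base
    return reconstruct_from_patches_2d(rec.reshape(-1, patch, patch), a.shape)
```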
Affiliation(s)
- Fuqiang Zhou (corresponding author): School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100083, China
- Xiaosong Li: School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100083, China
- Mingxuan Zhou: School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100083, China
- Yuanze Chen: School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100083, China
- Haishu Tan: School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
10. Li Y, Sun Y, Zheng M, Huang X, Qi G, Hu H, Zhu Z. A Novel Multi-Exposure Image Fusion Method Based on Adaptive Patch Structure. Entropy 2018; 20:e20120935. [PMID: 33266659] [PMCID: PMC7512522] [DOI: 10.3390/e20120935]
Abstract
Multi-exposure image fusion methods are often applied to fuse low-dynamic-range images taken of the same scene at different exposure levels. The fused images not only contain more color and detail information, but also approximate the real visual effect perceived by the human eye. This paper proposes a novel multi-exposure image fusion (MEF) method based on an adaptive patch structure. The proposed algorithm combines image cartoon-texture decomposition, image patch structure decomposition, and the structural similarity index to improve the local contrast of the image. Moreover, the proposed method captures more detailed information from the source images and produces more vivid high-dynamic-range (HDR) images. Specifically, image texture entropy values are used to evaluate local image information for adaptive selection of the image patch size. An intermediate fused image is obtained by the proposed structure patch decomposition algorithm and is then optimized using the structural similarity index to obtain the final fused HDR image. Comparative experiments show that the proposed method obtains high-quality HDR images with better visual effects and more detailed information.
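The patch-structure idea can be sketched as follows: each patch is split into mean intensity, signal strength, and signal structure, which are fused separately. The fixed patch size, half-overlap averaging, and Gaussian well-exposedness weight below are illustrative choices; the paper instead adapts the patch size from texture entropy and refines the result with the structural similarity index:

```python
# Patch-structure multi-exposure fusion sketch: decompose each patch
# across exposures into mean intensity, signal strength, and unit
# structure; fuse each component and reassemble with overlap averaging.
import numpy as np

def fuse_patches(stack, patch=11):
    """stack: (K, H, W) aligned exposures, float in [0, 1]."""
    K, H, W = stack.shape
    out = np.zeros((H, W)); cnt = np.zeros((H, W))
    step = patch // 2  # half-overlapping patches, averaged on overlap
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            X = stack[:, i:i+patch, j:j+patch].reshape(K, -1)
            mu = X.mean(axis=1)                    # mean intensity per exposure
            C = X - mu[:, None]
            s = np.linalg.norm(C, axis=1) + 1e-12  # signal strength
            U = C / s[:, None]                     # unit signal structure
            w_s = s ** 4                           # favor high-contrast structure
            u = (w_s[:, None] * U).sum(axis=0)
            u /= np.linalg.norm(u) + 1e-12         # fused structure direction
            w_e = np.exp(-0.5 * ((mu - 0.5) / 0.2) ** 2)  # well-exposedness
            mu_f = (w_e * mu).sum() / w_e.sum()    # fused mean intensity
            blk = (s.max() * u + mu_f).reshape(patch, patch)
            out[i:i+patch, j:j+patch] += blk
            cnt[i:i+patch, j:j+patch] += 1
    return np.clip(out / np.maximum(cnt, 1), 0, 1)
```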
Affiliation(s)
- Yuanyuan Li: School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou 221116, China; College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yanjing Sun (corresponding author): School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Mingyao Zheng: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xinghua Huang: College of Automation, Chongqing University, Chongqing 400044, China
- Guanqiu Qi: Department of Mathematics and Computer Information Science, Mansfield University of Pennsylvania, Mansfield, PA 16933, USA; School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85287, USA
- Hexu Hu: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Zhiqin Zhu: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
11. Qi G, Zhang Q, Zeng F, Wang J, Zhu Z. Multi-focus image fusion via morphological similarity-based dictionary construction and sparse representation. CAAI Trans Intell Technol 2018. [DOI: 10.1049/trit.2018.0011]
Affiliation(s)
- Guanqiu Qi: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China; School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85287, USA
- Qiong Zhang: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Fancheng Zeng: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Jinchuan Wang: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Zhiqin Zhu: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China