1. Song Y, Dai Y, Liu W, Liu Y, Liu X, Yu Q, Liu X, Que N, Li M. DesTrans: A medical image fusion method based on Transformer and improved DenseNet. Comput Biol Med 2024; 174:108463. [PMID: 38640634] [DOI: 10.1016/j.compbiomed.2024.108463]
Abstract
Medical image fusion can provide doctors with more detailed data and thus improve the accuracy of disease diagnosis. In recent years, deep learning has been widely used in the field of medical image fusion. Traditional medical image fusion methods operate directly on pixels, for example by superimposition; the introduction of deep learning has improved fusion effectiveness, but these methods still suffer from problems such as edge blurring and information redundancy. In this paper, we propose a deep learning network model that integrates a Transformer with an improved DenseNet module, which can be applied to medical images and solves the above problems; the method also transfers to natural images. The use of the Transformer and dense concatenation enhances the feature extraction capability of the method by limiting feature loss, which reduces the risk of edge blurring. We compared this method with several representative traditional methods and more advanced deep learning methods. The experimental results show that the Transformer and the improved DenseNet module have a strong feature extraction capability, and the method yields good results in terms of both visual quality and objective image evaluation metrics.
Affiliation(s)
- Yumeng Song
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Yin Dai
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China; Engineering Center on Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110169, China
- Weibin Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Yue Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xinpeng Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Qiming Yu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xinghan Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ningfeng Que
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Mingzhe Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
2. Lapusan R, Borlan R, Focsan M. Advancing MRI with magnetic nanoparticles: a comprehensive review of translational research and clinical trials. Nanoscale Adv 2024; 6:2234-2259. [PMID: 38694462] [PMCID: PMC11059564] [DOI: 10.1039/d3na01064c]
Abstract
The nexus of advanced technology and medical therapeutics has ushered in a transformative epoch in contemporary medicine. Within this arena, Magnetic Resonance Imaging (MRI) emerges as a paramount tool, intertwining the advancements of technology with the art of healing. MRI's pivotal role is evident in its broad applicability, spanning from neurological diseases to soft-tissue and tumour characterization, and many more applications. Though MRI is already foundational, aspirations remain to further enhance its capabilities. A significant avenue under exploration is the incorporation of innovative nanotechnological contrast agents. Forefront among these are Superparamagnetic Iron Oxide Nanoparticles (SPIONs), recognized for their adaptability and safety profile. SPIONs' intrinsic malleability allows them to be tailored for improved biocompatibility, while their functionality is further broadened when equipped with specific targeting molecules. Yet, the path to optimization is not devoid of challenges, from renal clearance concerns to potential side effects stemming from iron overload. This review endeavors to map the intricate journey of SPIONs as MRI contrast agents, offering a chronological perspective of their evolution and deployment. We provide an in-depth outline of the most representative and impactful pre-clinical and clinical studies centered on the integration of SPIONs in MRI, tracing their trajectory from foundational research to contemporary applications.
Affiliation(s)
- Radu Lapusan
- Biomolecular Physics Department, Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania
- Nanobiophotonics and Laser Microspectroscopy Centre, Interdisciplinary Research Institute on Bio-Nano-Sciences, Babes-Bolyai University, Cluj-Napoca, Romania
- Raluca Borlan
- Nanobiophotonics and Laser Microspectroscopy Centre, Interdisciplinary Research Institute on Bio-Nano-Sciences, Babes-Bolyai University, Cluj-Napoca, Romania
- Monica Focsan
- Biomolecular Physics Department, Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania
- Nanobiophotonics and Laser Microspectroscopy Centre, Interdisciplinary Research Institute on Bio-Nano-Sciences, Babes-Bolyai University, Cluj-Napoca, Romania
3. Chang CW, Peng J, Safari M, Salari E, Pan S, Roper J, Qiu RLJ, Gao Y, Shu HK, Mao H, Yang X. High-resolution MRI synthesis using a data-driven framework with denoising diffusion probabilistic modeling. Phys Med Biol 2024; 69:045001. [PMID: 38241726] [PMCID: PMC10839468] [DOI: 10.1088/1361-6560/ad209c]
Abstract
Objective. High-resolution magnetic resonance imaging (MRI) can enhance lesion diagnosis, prognosis, and delineation. However, gradient power and hardware limitations prohibit recording thin slices or sub-1 mm resolution, and long scan times are not clinically acceptable. Conventional high-resolution images generated using statistical or analytical methods are limited in capturing complex, high-dimensional image data with intricate patterns and structures. This study aims to harness cutting-edge diffusion probabilistic deep learning techniques to create a framework for generating high-resolution MRI from low-resolution counterparts, improving the uncertainty of denoising diffusion probabilistic models (DDPM). Approach. DDPM comprises two processes. The forward process employs a Markov chain to systematically introduce Gaussian noise into low-resolution MRI images. In the reverse process, a U-Net model is trained to denoise the forward-process images and produce high-resolution images conditioned on the features of their low-resolution counterparts. The proposed framework was demonstrated using T2-weighted MRI images from institutional prostate patients and from brain patients in the Brain Tumor Segmentation Challenge 2020 (BraTS2020). Main results. For the prostate dataset, the bicubic interpolation model (Bicubic), a conditional generative adversarial network (CGAN), and the proposed DDPM framework improved the noise quality measure over low-resolution images by 4.4%, 5.7%, and 12.8%, respectively. Our method enhanced the signal-to-noise ratio by 11.7%, surpassing Bicubic (9.8%) and CGAN (8.1%). In the BraTS2020 dataset, the proposed framework and Bicubic enhanced the peak signal-to-noise ratio over resolution-degraded images by 9.1% and 5.8%, respectively. The multi-scale structural similarity indexes were 0.970 ± 0.019, 0.968 ± 0.022, and 0.967 ± 0.023 for the proposed method, CGAN, and Bicubic, respectively. Significance. This study explores a deep learning-based diffusion probabilistic framework for improving MR image resolution. Such a framework can be used to improve clinical workflow by obtaining high-resolution images without the penalty of long scan times. Future investigation will likely focus on prospectively testing the efficacy of this framework with different clinical indications.
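The forward half of the two-process DDPM structure described in this entry admits a well-known closed form; a minimal sketch follows. This is the generic DDPM forward-noising property, not code from the paper, and the linear β schedule and array sizes are illustrative assumptions.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I).

    Closed form of applying t Markov-chain Gaussian noising steps at once;
    abar_t is the cumulative product of (1 - beta_i) up to step t.
    """
    abar_t = np.cumprod(1.0 - betas)[t]
    return np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * rng.standard_normal(x0.shape)

betas = np.linspace(1e-4, 0.02, 1000)      # illustrative linear schedule
rng = np.random.default_rng(0)
x0 = np.zeros((64, 64))                    # stand-in for a low-resolution MRI slice
xt = forward_diffuse(x0, 999, betas, rng)  # at t = 999, xt is almost pure noise
```

In the reverse process, a U-Net conditioned on low-resolution features would be trained to predict and remove this noise step by step.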
Affiliation(s)
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Junbo Peng
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Mojtaba Safari
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Elahheh Salari
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30308, United States of America
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Yuan Gao
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Hui-Kuo Shu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30308, United States of America
4. Safari M, Fatemi A, Archambault L. MedFusionGAN: multimodal medical image fusion using an unsupervised deep generative adversarial network. BMC Med Imaging 2023; 23:203. [PMID: 38062431] [PMCID: PMC10704723] [DOI: 10.1186/s12880-023-01160-w]
Abstract
PURPOSE This study proposed an end-to-end unsupervised medical fusion generative adversarial network, MedFusionGAN, to fuse computed tomography (CT) and high-resolution isotropic 3D T1-Gd magnetic resonance imaging (MRI) sequences into a single image with CT bone structure and MRI soft-tissue contrast, to improve target delineation and reduce radiotherapy planning time. METHODS We used a publicly available multicenter medical dataset (GLIS-RT, 230 patients) from the Cancer Imaging Archive. To improve the model's generalization, we considered different imaging protocols and patients with various brain tumor types, including metastases. The proposed MedFusionGAN consists of one generator network and one discriminator network trained in an adversarial scenario. Content, style, and L1 losses were used to train the generator to preserve the texture and structure information of the MRI and CT images. RESULTS MedFusionGAN successfully generated fused images with MRI soft-tissue and CT bone contrast. The results were quantitatively and qualitatively compared with seven traditional and eight deep learning (DL) state-of-the-art methods. Qualitatively, our method fused the source images at the highest spatial resolution without adding image artifacts. We report nine quantitative metrics quantifying the preservation of structural similarity, contrast, distortion level, and image edges in the fused images. Our method outperformed both traditional and DL methods on six of the nine metrics, and ranked second on three and two metrics when compared with traditional and DL methods, respectively. To compare soft-tissue contrast, intensity profiles along the tumor and tumor contours were evaluated; MedFusionGAN provided a more consistent intensity profile and better segmentation performance. CONCLUSIONS The proposed end-to-end unsupervised method successfully fused MRI and CT images. The fused image could improve target and OAR delineation, an important aspect of radiotherapy treatment planning.
Affiliation(s)
- Mojtaba Safari
- Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval, Québec City, QC, Canada
- Service de Physique Médicale et Radioprotection, Centre Intégré de Cancérologie, CHU de Québec - Université Laval et Centre de recherche du CHU de Québec, Québec City, QC, Canada
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, MS, USA
- Department of Radiation Oncology, Gamma Knife Center, Merit Health Central, Jackson, MS, USA
- Louis Archambault
- Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval, Québec City, QC, Canada
- Service de Physique Médicale et Radioprotection, Centre Intégré de Cancérologie, CHU de Québec - Université Laval et Centre de recherche du CHU de Québec, Québec City, QC, Canada
5. Shimizu K. Near-Infrared Transillumination for Macroscopic Functional Imaging of Animal Bodies. Biology (Basel) 2023; 12:1362. [PMID: 37997961] [PMCID: PMC10668962] [DOI: 10.3390/biology12111362]
Abstract
The classical transillumination technique has been revitalized through recent advancements in optical technology, enhancing its applicability in the realm of biomedical research. With a new perspective on near-axis scattered light, we have harnessed near-infrared (NIR) light to visualize intricate internal light-absorbing structures within animal bodies. By leveraging the principle of differentiation, we have extended the applicability of the Beer-Lambert law even in cases of scattering-dominant media, such as animal body tissues. This approach facilitates the visualization of dynamic physiological changes occurring within animal bodies, thereby enabling noninvasive, real-time imaging of macroscopic functionality in vivo. An important challenge inherent to transillumination imaging lies in the image blur caused by pronounced light scattering within body tissues. By extracting near-axis scattered components from the predominant diffusely scattered light, we have achieved cross-sectional imaging of animal bodies. Furthermore, we have introduced software-based techniques encompassing deconvolution using the point spread function and the application of deep learning principles to counteract the scattering effect. Finally, transillumination imaging has been elevated from two-dimensional to three-dimensional imaging. The effectiveness and applicability of these proposed techniques have been validated through comprehensive simulations and experiments involving human and animal subjects. As demonstrated through these studies, transillumination imaging coupled with emerging technologies offers a promising avenue for future biomedical applications.
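The differential use of the Beer-Lambert law described above can be illustrated with a minimal sketch. The intensities and the absorbance change of 0.3 are hypothetical values, and the cancellation of the static scattering background is the idealized principle, not the author's implementation.

```python
import numpy as np

def absorbance_change(I, I_ref):
    """Differential Beer-Lambert: dA = -ln(I / I_ref).

    Differencing two transillumination frames cancels static attenuation
    and scattering factors common to both, leaving only the change in
    absorption along the light path (e.g. a physiological change).
    """
    return -np.log(I / I_ref)

I_ref = np.full((8, 8), 100.0)   # baseline transmitted intensity (arbitrary units)
I = I_ref * np.exp(-0.3)         # absorption increased by delta(mu_a * d) = 0.3
dA = absorbance_change(I, I_ref) # recovers the 0.3 absorbance change per pixel
```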
Affiliation(s)
- Koichi Shimizu
- School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China
- IPS Research Center, Waseda University, Kitakyushu 808-0135, Japan
6. Sundström E, Jiang M, Najm HK, Tretter JT. Blood Speckle Imaging: An Emerging Method for Perioperative Evaluation of Subaortic and Aortic Valvar Repair. Bioengineering (Basel) 2023; 10:1183. [PMID: 37892913] [PMCID: PMC10604765] [DOI: 10.3390/bioengineering10101183]
Abstract
BACKGROUND This article presents the use of blood speckle imaging (BSI) as an echocardiographic approach for the pre- and post-operative evaluation of subaortic membrane resection and aortic valve repair. METHOD BSI, employing block-matching algorithms, provided detailed visualization of flow patterns and quantification of parameters from ultrasound data. The 9-year-old patient underwent subaortic membrane resection and peeling of extensions of the membrane from under the ventricular-facing surface of all three aortic valve leaflets. RESULT Post-operatively, BSI demonstrated improvements in hemodynamic patterns: quantified changes in flow velocities showed no signs of stenosis and only trivial regurgitation. The asymmetric jet with a shear layer and flow reversal on the posterior aspect of the aorta was corrected, resulting in reduced wall shear stress on the anterior aspect and a reduced oscillatory shear index, which is considered a contributing element in cellular alterations of the aortic wall structure. CONCLUSION This proof-of-concept study demonstrates the potential of BSI as an emerging echocardiographic approach for evaluating subaortic and aortic valvar repair. BSI enhances the quantitative evaluation of the left ventricular outflow tract in immediate surgical outcomes beyond traditional echocardiographic parameters and aids post-operative decision-making. However, larger studies are needed to validate these findings and establish standardized protocols for clinical implementation.
Affiliation(s)
- Elias Sundström
- Department of Engineering Mechanics, FLOW Research Center, KTH Royal Institute of Technology, Teknikringen 8, 100 44 Stockholm, Sweden
- Michael Jiang
- Department of Pediatric Cardiology, Cleveland Clinic, Cleveland, OH 44195, USA
- Hani K. Najm
- Congenital Valve Procedural Planning Center, Department of Pediatric Cardiology, Cleveland, OH 44195, USA
- Division of Pediatric Cardiac Surgery, and the Heart, Vascular, and Thoracic Institute, Cleveland Clinic, Cleveland, OH 44195, USA
- Justin T. Tretter
- Congenital Valve Procedural Planning Center, Department of Pediatric Cardiology, Cleveland, OH 44195, USA
- Division of Pediatric Cardiac Surgery, and the Heart, Vascular, and Thoracic Institute, Cleveland Clinic, Cleveland, OH 44195, USA
7. El-Shafai W, Aly R, Taha TE, Abd El-Samie FE. CNN framework for optical image super-resolution and fusion. J Opt 2023. [DOI: 10.1007/s12596-023-01122-z]
8. Deng R, Jin X, Du D, Li Z. Scan-free time-of-flight-based three-dimensional imaging through a scattering layer. Opt Express 2023; 31:23662-23677. [PMID: 37475446] [DOI: 10.1364/oe.492864]
Abstract
Reconstructing an object's three-dimensional shape behind a scattering layer with a single exposure is of great significance in real-life applications. However, because a single exposure captures little information that is strongly perturbed by the scattering layer and encoded by free-space propagation, existing methods cannot achieve scan-free three-dimensional reconstruction through a scattering layer in macroscopic scenarios within an acquisition time of seconds. In this paper, we propose a scan-free time-of-flight-based three-dimensional reconstruction method that explicitly models and inverts time-of-flight-based scattered-light propagation in a non-confocal imaging system. The non-confocal time-of-flight scattering imaging model maps the three-dimensional object shape to the time-resolved measurements by encoding the shape into the free-space propagation result and then convolving with a scattering blur kernel derived from the diffusion equation. To solve the inverse problem, a reconstruction algorithm consisting of deconvolution and diffractive wave propagation is developed to invert the effects of scattering diffusion and free-space propagation, which reshapes the temporal and spatial distribution of scattered signal photons and recovers the object shape information. Experiments on a real scattering imaging system demonstrate the effectiveness of the proposed method. The single exposure used in the experiment takes only 3.5 s, more than 200 times faster than confocal scanning methods. Experimental results show that the proposed method outperforms existing methods in three-dimensional reconstruction accuracy and imaging limit, both subjectively and objectively. Even though the signal photons captured by a single exposure are too highly scattered and attenuated to present any valid information with time gating, the proposed method can reconstruct three-dimensional objects located behind a scattering layer of 9.6 transport mean free paths (TMFPs), corresponding to a round-trip scattering length of 19.2 TMFPs.
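The deconvolution half of the inversion pipeline in this entry can be sketched in one dimension. The exponential blur kernel and the Wiener-style regularization constant below are assumptions chosen for illustration, not the paper's diffusion-equation-derived kernel.

```python
import numpy as np

def wiener_deconvolve(measured, kernel, eps=1e-3):
    """Invert a known blur kernel in the Fourier domain.

    The eps term regularizes frequencies where the kernel response is
    near zero, which plain inverse filtering would amplify into noise.
    """
    M = np.fft.fft(measured)
    K = np.fft.fft(kernel, n=len(measured))
    return np.real(np.fft.ifft(M * np.conj(K) / (np.abs(K) ** 2 + eps)))

ideal = np.zeros(64)
ideal[20] = 1.0                       # ideal temporal response: one sharp return
kernel = np.exp(-np.arange(64) / 5.0)
kernel /= kernel.sum()                # toy scattering blur with a decaying tail
blurred = np.real(np.fft.ifft(np.fft.fft(ideal) * np.fft.fft(kernel)))
recovered = wiener_deconvolve(blurred, kernel)  # sharp peak restored at index 20
```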
9. Abd El-Fattah I, Ali AM, El-Shafai W, Taha TE, Abd El-Samie FE. Deep-learning-based super-resolution and classification framework for skin disease detection applications. Opt Quantum Electron 2023; 55:427. [DOI: 10.1007/s11082-022-04432-x]
10. El-Shafai W, Aly R, Taha TES, El-Samie FEA. CNN: a tool to fuse multi-modality medical images. J Opt 2023. [DOI: 10.1007/s12596-023-01092-2]
11. Xu W, Fu YL, Xu H, Wong KKL. Medical image fusion using enhanced cross-visual cortex model based on artificial selection and impulse-coupled neural network. Comput Methods Programs Biomed 2023; 229:107304. [PMID: 36586176] [DOI: 10.1016/j.cmpb.2022.107304]
Abstract
OBJECTIVE The traditional ICM is widely used in applications such as image edge detection and image segmentation. However, several model parameters must be set, which tends to reduce accuracy and increase cost. Because medical images have more complex edges, contours and details, more suitable combinatorial algorithms are needed to handle the pathological diagnosis of multiple cerebral infarcts and acute strokes, making the findings more applicable and of good clinical value. METHODS To better solve medical image fusion and diagnosis problems, this paper introduces an image fusion algorithm based on the combination of NSCT and an improved ICM, and proposes low-frequency and high-frequency sub-band fusion rules. The method is applied to the fusion of CT/MRI images and compared with three other fusion algorithms, NSCT-SF-PCNN, NSCT-SR-PCNN and Adaptive-PCNN, and the simulation results of image fusion are analyzed and validated. RESULTS According to the experimental findings, the proposed algorithm performs better than the other fusion algorithms in terms of both five objective evaluation metrics and subjective evaluation. The NSCT transform and the improved ICM were combined, and the outcomes were evaluated against those of the other fusion algorithms. CT/MRI medical images of healthy brain tissue, multiple cerebral infarcts and acute strokes were fused using this technique. CONCLUSION Medical image fusion using Adaptive-PCNN produces satisfactory results, not only in improved image clarity but also in outstanding edge information, high contrast and brightness.
Affiliation(s)
- Wanni Xu
- Xiamen Academy of Arts and Design, Fuzhou University, Xiamen 361024, China; Department of Computer Information Engineering, Nanchang Institute of Technology, Nanchang 330044, China
- You-Lei Fu
- Department of Computer Information Engineering, Nanchang Institute of Technology, Nanchang 330044, China; Fine Art and Design College, Quanzhou Normal University, Quanzhou 362000, China
- Huasen Xu
- Department of Civil Engineering, Shanghai Normal University, Shanghai 201418, China
- Kelvin K L Wong
- Fine Art and Design College, Quanzhou Normal University, Quanzhou 362000, China
12. Baran B, Kozłowski E, Majerek D, Rymarczyk T, Soleimani M, Wójcik D. Application of Machine Learning Algorithms to the Discretization Problem in Wearable Electrical Tomography Imaging for Bladder Tracking. Sensors (Basel) 2023; 23:1553. [PMID: 36772593] [PMCID: PMC9918926] [DOI: 10.3390/s23031553]
Abstract
The article presents the implementation of artificial intelligence algorithms for the discretization problem in electrical impedance tomography (EIT) adapted for urinary tract monitoring. The primary objective of discretization is to create a finite element mesh (FEM) classifier that separates inclusion elements from the background. In general, the classifier is designed to detect the area of elements belonging to an inclusion, revealing the shape of that object. We show the adaptation of supervised learning methods such as logistic regression, decision trees, and linear and quadratic discriminant analysis to the problem of tracking the urinary bladder using EIT. Our study focuses on developing and comparing various algorithms for discretization, which complement methods for the inverse problem. The innovation of the presented solutions lies in the algorithms originally adapted for EIT to allow tracking of the bladder. We claim that a robust measurement solution combining sensors and statistical methods can track the placement and shape change of the bladder, yielding effective information about the studied object. This article also presents the developed device, its functions and its working principle. The development of such a device and its accompanying information technology came about in response to particularly strong market demand for modern technical solutions for urinary tract rehabilitation.
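The supervised element-labelling idea in this entry can be sketched with plain NumPy. The two per-element features and the toy labels below are hypothetical stand-ins for real EIT-derived features; the paper's other models (decision trees, LDA/QDA) would slot into the same train-then-label pattern.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=500):
    """Gradient-descent logistic regression labelling each finite-element
    mesh cell as inclusion (1) or background (0)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        g = sigmoid(X @ w + b) - y      # gradient of the per-sample log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# hypothetical per-element features: (distance to expected bladder centre,
# normalised boundary-voltage response)
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = train_logreg(X, y)
labels = (sigmoid(X @ w + b) > 0.5).astype(int)  # per-element inclusion mask
```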
Affiliation(s)
- Bartłomiej Baran
- Research & Development Centre Netrix S.A., 20-704 Lublin, Poland
- Edward Kozłowski
- Faculty of Management, Lublin University of Technology, 20-618 Lublin, Poland
- Dariusz Majerek
- Faculty of Fundamentals of Technology, Lublin University of Technology, 20-618 Lublin, Poland
- Tomasz Rymarczyk
- Research & Development Centre Netrix S.A., 20-704 Lublin, Poland
- WSEI University, 20-209 Lublin, Poland
- Manuchehr Soleimani
- Department of Electronic and Electrical Engineering, University of Bath, Bath BA2 7AY, UK
- Dariusz Wójcik
- Research & Development Centre Netrix S.A., 20-704 Lublin, Poland
- WSEI University, 20-209 Lublin, Poland
13. El-Bakary EM, El-Shafai W, El-Rabaie S, Zahran O, El-Halawany M, El-Samie FEA. Efficient secure optical DWT-based watermarked 3D video transmission over MC-CDMA wireless channel. J Opt 2023. [DOI: 10.1007/s12596-022-01067-9]
14. Tian Z, Tao S, Bai L, Xu Y, Liu X, Kuang C. A novel fusion method for X-ray phase contrast imaging based on fast adaptive bidimensional empirical mode decomposition. J Xray Sci Technol 2023; 31:1341-1362. [PMID: 37840465] [DOI: 10.3233/xst-230180]
Abstract
BACKGROUND X-ray phase contrast imaging (XPCI) can separate the attenuation, refraction, and scattering signals of an object, and image fusion enables the concentration of this distinctive information into a single image. Some fusion methods have been applied in the XPCI field, but wavelet-based decomposition approaches often lose original data. OBJECTIVE To explore the application value of a novel image fusion method for XPCI and computed tomography (CT) systems. METHODS Fast adaptive bidimensional empirical mode decomposition (FABEMD) is used for image decomposition to avoid unnecessary information loss. A parameter δ is proposed to guide the fusion of the bidimensional intrinsic mode functions, which contain high-frequency information, using a pulse-coupled neural network with morphological gradients (MGPCNN). The residual images are fused with an energy-attribute fusion strategy, and image preprocessing and enhancement are performed on the result to ensure its quality. The method is compared against other image fusion approaches, such as discrete wavelet transforms and anisotropic diffusion fusion. RESULTS The δ-guided FABEMD-MGPCNN method achieved either first or second position in the objective evaluation metrics on biological samples when compared with the other image fusion methods, and comparisons are also made with other fusion methods used for XPCI. Finally, the proposed method applied to CT shows the expected results, retaining feature information. CONCLUSIONS The proposed δ-guided FABEMD-MGPCNN method shows feasibility and superiority over traditional and recent image fusion methods for X-ray differential phase contrast imaging and computed tomography systems.
Affiliation(s)
- Zonghan Tian
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science & Engineering, Zhejiang University, Hangzhou, China
- Siwei Tao
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science & Engineering, Zhejiang University, Hangzhou, China
- Ling Bai
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science & Engineering, Zhejiang University, Hangzhou, China
- Yueshu Xu
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science & Engineering, Zhejiang University, Hangzhou, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, China
- Xu Liu
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science & Engineering, Zhejiang University, Hangzhou, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, China
- Ningbo Research Institute, Zhejiang University, Ningbo, China
- Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, China
- Cuifang Kuang
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science & Engineering, Zhejiang University, Hangzhou, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, China
- Ningbo Research Institute, Zhejiang University, Ningbo, China
- Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, China
15
Yadav AS, Kumar S, Karetla GR, Cotrina-Aliaga JC, Arias-Gonzáles JL, Kumar V, Srivastava S, Gupta R, Ibrahim S, Paul R, Naik N, Singla B, Tatkar NS. A Feature Extraction Using Probabilistic Neural Network and BTFSC-Net Model with Deep Learning for Brain Tumor Classification. J Imaging 2022; 9:jimaging9010010. [PMID: 36662108 PMCID: PMC9865827 DOI: 10.3390/jimaging9010010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 12/21/2022] [Accepted: 12/28/2022] [Indexed: 01/03/2023] Open
Abstract
BACKGROUND AND OBJECTIVES Brain Tumor Fusion-based Segments and Classification-Non-enhancing tumor (BTFSC-Net) is a hybrid system for classifying brain tumors that combines medical image fusion, segmentation, feature extraction, and classification procedures. MATERIALS AND METHODS To reduce noise in medical images, the hybrid probabilistic Wiener filter (HPWF) is first applied as a preprocessing step. Then, to combine robust edge analysis (REA) properties in magnetic resonance imaging (MRI) and computed tomography (CT) medical images, a fusion network based on deep learning convolutional neural networks (DLCNN) is developed. Here, the slopes and borders of the brain images are detected using REA. To separate the diseased region from the color image, adaptive fuzzy c-means integrated k-means (HFCMIK) clustering is then applied. To extract hybrid features from the fused image, low-level features based on the redundant discrete wavelet transform (RDWT), empirical color features, and texture characteristics based on the gray-level co-occurrence matrix (GLCM) are also used. Finally, to distinguish between benign and malignant tumors, a deep learning probabilistic neural network (DLPNN) is deployed. RESULTS According to the findings, the proposed BTFSC-Net model performed better than more traditional preprocessing, fusion, segmentation, and classification techniques, reaching 99.21% segmentation accuracy and 99.46% classification accuracy. CONCLUSIONS Earlier approaches have not performed as well as the presented method for image fusion, segmentation, feature extraction, classification operations, and brain tumor classification. These results illustrate that the designed approach performed more effectively, with better accuracy in quantitative evaluation as well as better visual performance.
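The GLCM texture features used in the hybrid feature vector can be illustrated with a minimal sketch; the single horizontal (1, 0) offset, 8 gray levels, and the contrast/energy pair are simplifying assumptions, not the paper's full feature set.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Quantize a grayscale image to `levels` gray levels, build a
    co-occurrence matrix for horizontal neighbors, and derive two
    classic Haralick-style features: contrast and energy."""
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of (left pixel, right pixel) pairs.
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()  # normalize to a joint probability
    idx = np.arange(levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy
```

A full pipeline would typically use several offsets and angles and more features (homogeneity, correlation), e.g. via `skimage.feature.graycomatrix`.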
Affiliation(s)
- Arun Singh Yadav
- Department of Computer Science, University of Lucknow, Lucknow 226007, Uttar Pradesh, India

- Surendra Kumar
- Department of Computer Application, Marwadi University, Rajkot 360003, Gujrat, India

- Girija Rani Karetla
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, NSW 2751, Australia

- José Luis Arias-Gonzáles
- Department of Business, Pontificia Universidad Católica del Perú, Av. Universitaria 1801, San Miguel 15088, Peru

- Vinod Kumar
- Department of Computer Applications, ABES Engineering College, Ghaziabad 201009, Uttar Pradesh, India

- Satyajee Srivastava
- Department of Computer Science and Engineering, University of Engineering and Technology Roorkee, Roorkee 247667, Uttarakhand, India

- Reena Gupta
- Department of Pharmacognosy, Institute of Pharmaceutical Research, GLA University, Mathura 281406, Uttar Pradesh, India

- Sufyan Ibrahim
- Neuro-Informatics Laboratory, Department of Neurological Surgery, Mayo Clinic, Rochester, MN 55905, USA

- Rahul Paul
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02115, USA
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India

- Nithesh Naik
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Curiouz TechLab Private Limited, BIRAC-BioNEST, Manipal Government of Karnataka Bioincubator, Manipal 576104, Karnataka, India
- Correspondence: ; Tel.: +91-83-1087-4339

- Babita Singla
- Chitkara Business School, Chitkara University, Chandigarh 140401, Punjab, India

- Nisha S. Tatkar
- Department of Postgraduate Diploma in Management, Institute of PGDM, Mumbai Education Trust, Mumbai 400050, Maharashtra, India
16
Wang R, Fu G, Li J, Pei Y. Diagnosis after zooming in: A multilabel classification model by imitating doctor reading habits to diagnose brain diseases. Med Phys 2022; 49:7054-7070. [PMID: 35880443 DOI: 10.1002/mp.15871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Revised: 03/18/2022] [Accepted: 06/28/2022] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Computed tomography (CT) is low cost and noninvasive and is a primary diagnostic method for brain diseases. However, it is challenging for junior radiologists to diagnose CT images accurately and comprehensively, so a system is needed that can help doctors diagnose and explain its predictions. Despite the success of deep learning algorithms in medical image analysis, brain disease classification still faces challenges: researchers pay little attention to the complexity of manual labeling requirements and to the incompleteness of prediction explanations. More importantly, most studies only measure the performance of the algorithm but not its effectiveness in doctors' actual diagnoses. METHODS In this paper, we propose a model called DrCT2 that can detect brain diseases without image-level labels and provide a more comprehensive explanation at both the slice and sequence levels. The model achieves reliable performance by imitating the reading habits of human experts: targeted scaling of primary images from the full slice scans and observation of suspicious lesions for diagnosis. We evaluated our model on two open-access data sets: CQ500 and the RSNA Intracranial Hemorrhage Detection Challenge. In addition, we defined three tasks to comprehensively evaluate model interpretability by measuring whether the algorithm can select key images with lesions. To verify the algorithm from the perspective of practical application, three junior radiologists were invited to participate in the experiments, comparing different aspects before and after human-computer cooperation. RESULTS The method achieved F1-scores of 0.9370 on CQ500 and 0.8700 on the RSNA data set. The results show that our model offers good interpretability alongside good performance. Evaluation experiments with human radiologists proved that our model can effectively improve diagnostic accuracy and efficiency. CONCLUSIONS We proposed a model that can detect multiple brain diseases simultaneously. The report generated by the model can help doctors avoid missed diagnoses, and it has good clinical application value.
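The F1-scores reported here are the standard harmonic mean of precision and recall; a minimal reference implementation for the binary case:

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall over 0/1 labels."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

For multilabel classification as in this paper, the same computation is applied per label and then averaged (micro or macro).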
Affiliation(s)
- Ruiqian Wang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China

- Guanghui Fu
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France

- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China

- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Japan
17
Abdelmotaal H, Sharaf M, Soliman W, Wasfi E, Kedwany SM. Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation. BMC Ophthalmol 2022; 22:355. [PMID: 36050661 PMCID: PMC9434904 DOI: 10.1186/s12886-022-02577-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Accepted: 08/23/2022] [Indexed: 11/29/2022] Open
Abstract
Background To assess the ability of the pix2pix generative adversarial network (pix2pix GAN) to synthesize clinically useful optical coherence tomography (OCT) color-coded macular thickness maps based on a modest-sized original fluorescein angiography (FA) dataset and the reverse, to be used as a plausible alternative to either imaging technique in patients with diabetic macular edema (DME). Methods Original images of 1,195 eyes of 708 nonconsecutive diabetic patients with or without DME were retrospectively analyzed. OCT macular thickness maps and corresponding FA images were preprocessed for use in training and testing the proposed pix2pix GAN. The best quality synthesized images using the test set were selected based on the Fréchet inception distance score, and their quality was studied subjectively by image readers and objectively by calculating the peak signal-to-noise ratio, structural similarity index, and Hamming distance. We also used original and synthesized images in a trained deep convolutional neural network (DCNN) to plot the difference between synthesized images and their ground-truth analogues and calculate the learned perceptual image patch similarity metric. Results The pix2pix GAN-synthesized images showed plausible subjectively and objectively assessed quality, which can provide a clinically useful alternative to either image modality. Conclusion Using the pix2pix GAN to synthesize mutually dependent OCT color-coded macular thickness maps or FA images can overcome issues related to machine unavailability or clinical situations that preclude the performance of either imaging technique. Trial registration ClinicalTrials.gov Identifier: NCT05105620, November 2021. “Retrospectively registered”.
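Among the objective quality measures used here, the peak signal-to-noise ratio is the simplest to state exactly; a minimal sketch (8-bit peak value assumed):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image of the same shape; higher is better, inf for identical."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)
```

The structural similarity index and the learned perceptual metric mentioned in the abstract require windowed statistics or a trained network and are usually taken from a library rather than reimplemented.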
Affiliation(s)
- Hazem Abdelmotaal
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt

- Mohamed Sharaf
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt

- Wael Soliman
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt

- Ehab Wasfi
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt

- Salma M Kedwany
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt
18
Zhou J, Wu Z, Jiang Z, Huang K, Guo K, Zhao S. Background selection schema on deep learning-based classification of dermatological disease. Comput Biol Med 2022; 149:105966. [PMID: 36029748 DOI: 10.1016/j.compbiomed.2022.105966] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 07/28/2022] [Accepted: 08/13/2022] [Indexed: 11/03/2022]
Abstract
Skin diseases are among the most common ailments affecting humans. Artificial intelligence based on deep learning can significantly improve the efficiency of identifying skin disorders and alleviate the scarcity of medical resources. However, the distribution of background information in dermatological datasets is imbalanced, causing generalized deep learning models to perform poorly in skin disease classification. We propose a deep learning schema that combines data preprocessing, data augmentation, and residual networks to study how color-based background selection influences a deep model's capacity to learn foreground lesion attributes in a skin disease classification problem. First, clinical photographs are annotated by dermatologists, and the original background information is masked with unique colors to generate several subsets with distinct background colors. Sample-balanced training and test sets are generated using random over/undersampling and data augmentation techniques. Finally, the deep learning networks are independently trained on the different background-color subsets to compare the performance of classifiers based on different background information. Extensive experiments demonstrate that color-based background information significantly affects the classification of skin diseases and that classifiers trained on the green subset achieve state-of-the-art performance for classifying black and red skin lesions.
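The background-masking step can be sketched as follows, assuming an RGB image and a binary lesion mask from the dermatologists' annotations; the function name and the green default are illustrative, not from the paper.

```python
import numpy as np

def mask_background(image, lesion_mask, color=(0, 255, 0)):
    """Replace every background pixel (mask == 0) of an RGB image with one
    uniform color, leaving the annotated lesion foreground untouched."""
    out = image.copy()
    out[lesion_mask == 0] = color  # broadcast the color over background pixels
    return out
```

Running this once per candidate color yields the background-color subsets the classifiers are trained on.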
Affiliation(s)
- Jiancun Zhou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China; College of Information and Electronic Engineering, Hunan City University, Yiyang 413000, China

- Zheng Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China

- Zixi Jiang
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China

- Kai Huang
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China

- Kehua Guo
- School of Computer Science and Engineering, Central South University, Changsha 410083, China

- Shuang Zhao
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China
19
Roa C, Pedersen G, Bollinger M, Taylor C, Boswell KM. Taxonomical classification of reef fish with broadband backscattering models and machine learning approaches. J Acoust Soc Am 2022; 152:1020. [PMID: 36050156 DOI: 10.1121/10.0012192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 06/14/2022] [Indexed: 06/15/2023]
Abstract
Commercially available broadband echosounders have the potential to classify acoustic targets based on their scattering responses, which are a function of their species-specific morphological and physiological properties. This is particularly important in complex environments with biologically diverse fish assemblages. Using theoretical acoustic scattering models among 130 fishes across six species, we examine the potential to classify reef fish based on the fine-scale gas-bearing swim bladder morphology quantified from three-dimensional computed-tomography models. Modeled echoes of the swim bladder for an incident broadband sound source (30-200 kHz) and across a range of orientation angles (±44°) are acoustically simulated using the boundary element method. Backscatter models present characteristics that are consistent within species and distinguishable among them. Broadband and multifrequency echoes are classified and compared with Bayesian, support vector machine, k-nearest neighbor, and convolutional neural network estimators. Classifiers have higher accuracies (>70%) when noise is not present and perform better when applied to broadband spectra than multifrequency data (42, 70, 100, 132, 160, 184 kHz). The modeling and classification approaches presented indicate that a taxonomic distinction based on morphologically dependent scattering responses is possible and may provide the capacity to acoustically discriminate among fish species.
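Of the compared estimators, k-nearest neighbors is the simplest to sketch. The minimal version below classifies one echo spectrum by majority vote among its nearest training spectra; the feature layout and k are placeholders, not the paper's configuration.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify one spectrum (1-D feature vector) by majority vote among
    its k nearest training spectra under Euclidean distance."""
    d = np.linalg.norm(train_X - query, axis=1)       # distance to each row
    nearest = np.asarray(train_y)[np.argsort(d)[:k]]  # labels of k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

In practice each row of `train_X` would hold the modeled backscatter spectrum of one fish across orientation angles, and the labels would be species.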
Affiliation(s)
- Camilo Roa
- Institute of Environment and Department of Biological Sciences, Florida International University, Miami, Florida 33199, USA

- Michael Bollinger
- NOAA's National Centers for Coastal Ocean Science, Beaufort, North Carolina 28516, USA

- Christopher Taylor
- NOAA's National Centers for Coastal Ocean Science, Beaufort, North Carolina 28516, USA

- Kevin M Boswell
- Institute of Environment and Department of Biological Sciences, Florida International University, Miami, Florida 33199, USA
20
Saini LK, Mathur P. Medical image fusion by sparse-based modified fusion framework using block total least-square update dictionary learning algorithm. J Med Imaging (Bellingham) 2022; 9:052403. [DOI: 10.1117/1.jmi.9.5.052403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 04/26/2022] [Indexed: 11/14/2022] Open
Affiliation(s)
- Lalit Kumar Saini
- Manipal University Jaipur, Department of Information Technology, Jaipur

- Pratistha Mathur
- Manipal University Jaipur, Department of Information Technology, Jaipur
21
Singh N, Rathore SS, Kumar S. Towards a super-resolution based approach for improved face recognition in low resolution environment. Multimed Tools Appl 2022; 81:38887-38919. [PMID: 35493417 PMCID: PMC9039276 DOI: 10.1007/s11042-022-13160-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Revised: 02/23/2022] [Accepted: 04/10/2022] [Indexed: 06/14/2023]
Abstract
Video surveillance generates a vast amount of data, which can be processed to detect miscreants. Identifying and recognizing an object in surveillance data is intriguing yet difficult due to the low resolution of captured images or video. Super-resolution aims to enhance the resolution of an image to generate a desirable high-resolution one. This paper develops a robust real-time face recognition approach that uses super-resolution to improve images and detect faces in video. Many previously developed face detection systems are constrained by severe distortion in the captured images, and many fail to handle the effects of motion, blur, and noise on the images registered by a camera. The presented approach improves the descriptor count of the image based on the super-resolved faces and mitigates the effect of noise. Furthermore, it implements the super-resolution algorithm on a parallel architecture, overcoming the efficiency drawback and increasing face recognition performance. Experimental analysis on the ORL, Caltech, and Chokepoint datasets has been carried out to evaluate the performance of the presented approach, with PSNR (peak signal-to-noise ratio) and face recognition rate as the performance measures. The results showed significant improvement in the recognition rates for images in which the face did not contain pose expressions and scale variations. For the complicated cases involving scale, pose, and lighting variations, the presented approach yielded an improvement of 5%-6% in each case.
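As a point of reference, the naive nearest-neighbor upsampling baseline that super-resolution methods aim to beat can be written in a few lines; this is not the paper's method, only the trivial comparison point.

```python
import numpy as np

def upsample_nn(img, factor=2):
    """Nearest-neighbor upsampling: each pixel is duplicated into a
    factor x factor block, adding resolution but no new detail."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```

Any learned super-resolution model should score a higher PSNR against the ground-truth high-resolution image than this baseline does.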
Affiliation(s)
- Nalin Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Roorkee, India

- Santosh Singh Rathore
- Department of Information Technology, ABV-Indian Institute of Information Technology and Management, Gwalior, India

- Sandeep Kumar
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Roorkee, India
22
Faragallah OS, Muhammed AN, Taha TS, Geweid GG. PCA based SVD fusion for MRI and CT medical images. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-202884] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This paper presents a new approach to multi-modal medical image fusion based on principal component analysis (PCA) and singular value decomposition (SVD). The main objective of the proposed approach is to facilitate its implementation on a hardware unit so that it works effectively at run time. To evaluate the presented approach, it was tested by fusing four different cases of registered CT and MRI images. Eleven quality metrics (including mutual information and the universal image quality index) were used to evaluate the fused image obtained by the proposed approach and to compare it with the images obtained by other fusion approaches. In the experiments, the quality metrics show that the fused image obtained by the presented approach has better quality and prove it effective in medical image fusion, especially for MRI and CT images. The results also indicate that the approach reduced the processing time and the memory required during the fusion process, enabling a very cheap and fast hardware implementation.
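A common form of PCA-based weight selection for fusing two registered grayscale images looks like the following; this is a generic sketch of the technique, not necessarily the exact PCA-SVD pipeline of this paper.

```python
import numpy as np

def pca_fusion(img1, img2):
    """Fuse two registered grayscale images with PCA weights: the principal
    eigenvector of the 2x2 covariance of the flattened images, normalized
    to sum to one, gives the per-image fusion weights."""
    data = np.stack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(data)                        # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    v = np.abs(vecs[:, np.argmax(vals)])      # principal eigenvector
    w = v / v.sum()                           # normalize to fusion weights
    return w[0] * img1 + w[1] * img2
```

The image with greater variance receives the larger weight, which is why this rule tends to preserve the more detailed modality (e.g. MRI soft tissue over a flat CT region).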
Affiliation(s)
- Osama S. Faragallah
- Department of Information Technology, College of Computers and Information Technology, Taif University, Saudi Arabia

- Abdullah N. Muhammed
- Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt

- Taha S. Taha
- Department of Electronics and Communication Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt

- Gamal G.N. Geweid
- Department of Biomedical Engineering, College of Engineering and Computer Sciences, Marshall University, Huntington, WV, USA
- Department of Electrical Engineering, Faculty of Engineering, Benha University, Benha, Egypt