1
Kasahara A, Iwasaki T, Mizutani T, Ueyama T, Sekine Y, Uehara M, Kodera S, Gonoi W, Iwanaga H, Abe O. [Development of a Deep Learning Model for Judging Late Gadolinium-enhancement in Cardiac MRI]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2024; 80:750-759. [PMID: 38897968] [DOI: 10.6009/jjrt.2024-1421] [Indexed: 06/21/2024]
Abstract
PURPOSE To verify the usefulness of a deep learning model for determining the presence or absence of contrast-enhanced myocardium in late gadolinium-enhancement images in cardiac MRI. METHODS We used 174 late gadolinium-enhancement myocardial short-axis images obtained from contrast-enhanced cardiac MRI performed on a 3.0T MRI system at the University of Tokyo Hospital. Of these, 144 images were used for training; after extracting a region of interest targeting the heart and scaling the signal intensity, data augmentation was applied to obtain 3312 training images. The interpretation reports of two cardiology specialists at our hospital were used as the correct labels. A learning model was constructed using a convolutional neural network and applied to 30 test images. Across all cases, the mean age was 56.4±12.1 years and the male-to-female ratio was 1 : 0.82. RESULTS Before and after data augmentation, sensitivity remained consistent at 93.3%, specificity improved from 0.0% to 100.0%, and accuracy improved from 46.7% to 96.7%. CONCLUSION The prediction accuracy of the deep learning model developed in this research is high, suggesting that it is highly useful.
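The pre-processing and augmentation pipeline described (signal-intensity scaling, then augmenting 144 slices into 3312 training images) can be sketched in NumPy. The specific transforms below (min-max scaling, rotations, mirror flips) are illustrative assumptions, not the augmentations the authors report:

```python
import numpy as np

def scale_intensity(img):
    """Min-max scale a slice's signal intensity into [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

def augment(img):
    """Return simple geometric variants (rotations and mirrored rotations)."""
    variants = []
    for k in range(4):                   # 0/90/180/270-degree rotations
        rot = np.rot90(img, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))  # horizontal mirror of each rotation
    return variants

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "short-axis slice"
scaled = scale_intensity(img)
augmented = augment(scaled)
print(len(augmented))  # 8 variants per input slice
```

With a richer transform set (shifts, small rotations, intensity jitter), a per-slice multiplier of 23 would reproduce the 144-to-3312 expansion reported above.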
Affiliation(s)
- Masae Uehara
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
- Satoshi Kodera
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
- Wataru Gonoi
- Radiology Center, The University of Tokyo Hospital
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital
2
Naeeni Davarani M, Arian Darestani A, Guillen Cañas V, Azimi H, Havadaragh SH, Hashemi H, Harirchian MH. Efficient segmentation of active and inactive plaques in FLAIR-images using DeepLabV3Plus SE with efficientnetb0 backbone in multiple sclerosis. Sci Rep 2024; 14:16304. [PMID: 39009636] [PMCID: PMC11251059] [DOI: 10.1038/s41598-024-67130-6] [Received: 12/05/2023] [Accepted: 07/08/2024] [Indexed: 07/17/2024]
Abstract
This paper introduces an efficient approach for segmenting active and inactive plaques in fluid-attenuated inversion recovery (FLAIR) images of multiple sclerosis (MS), employing a convolutional neural network (CNN) model, DeepLabV3Plus SE with an EfficientNetB0 backbone, and demonstrates its superior performance compared to other CNN architectures. The study encompasses several critical components, including dataset pre-processing techniques and the use of the Squeeze-and-Excitation block (SE-Block) and the atrous spatial separable pyramid block to enhance segmentation capability. Detailed descriptions of pre-processing procedures, such as removing the cranial bone segment, image resizing, and normalization, are provided. The study analyzed a cross-sectional cohort of 100 MS patients with active brain plaques, examining 5000 MRI slices; after filtering, 1500 slices were used for labeling and deep learning. Training adopts the Dice coefficient as the loss function and uses Adam optimization. Model performance was evaluated with multiple metrics, including intersection over union (IoU), Dice score, precision, recall, and F1-score, together with a comparative analysis against other CNN architectures. Results demonstrate the superior segmentation ability of the proposed model, as evidenced by an IoU of 69.87, Dice score of 76.24, precision of 88.89, recall of 73.52, and F1-score of 80.47 for the DeepLabV3+SE_EfficientNetB0 model. This research advances plaque segmentation in FLAIR images and offers a compelling approach with substantial potential for medical image analysis and diagnosis.
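The Squeeze-and-Excitation mechanism mentioned above can be illustrated with a minimal NumPy sketch: squeeze spatial information into a per-channel descriptor, pass it through a small bottleneck, and reweight the channels. The weights and shapes here are toy assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation: reweight the channels of an (H, W, C) map."""
    z = feat.mean(axis=(0, 1))                 # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excitation: FC -> ReLU -> FC -> sigmoid
    return feat * s                            # scale: per-channel reweighting

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 4))  # toy feature map, 4 channels
w1 = rng.standard_normal((2, 4))       # bottleneck FC, reduction ratio 2
w2 = rng.standard_normal((4, 2))       # expansion FC back to 4 channels
out = se_block(feat, w1, w2)
print(out.shape)  # (8, 8, 4)
```

Because the excitation output lies in (0, 1), the block can only attenuate channels, which is what lets the network emphasize plaque-relevant feature maps.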
Affiliation(s)
- Hossein Azimi
- Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
- Sanaz Heydari Havadaragh
- Neurology Department, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hasan Hashemi
- Department of Radiology, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Mohammad Hossein Harirchian
- Iranian Center of Neurological Research, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
3
Jodeiri A, Seyedarabi H, Danishvar S, Shafiei SH, Sales JG, Khoori M, Rahimi S, Mortazavi SMJ. Concurrent Learning Approach for Estimation of Pelvic Tilt from Anterior-Posterior Radiograph. Bioengineering (Basel) 2024; 11:194. [PMID: 38391680] [PMCID: PMC10886461] [DOI: 10.3390/bioengineering11020194] [Received: 01/07/2024] [Revised: 02/02/2024] [Accepted: 02/08/2024] [Indexed: 02/24/2024]
Abstract
Accurate and reliable estimation of pelvic tilt is one of the essential pre-planning factors for total hip arthroplasty, helping to prevent common post-operative complications such as implant impingement and dislocation. Inspired by the latest advances in deep learning-based systems, this paper presents an innovative and accurate method for estimating the functional pelvic tilt (PT) from a standing anterior-posterior (AP) radiograph. We introduce an encoder-decoder-style network based on a concurrent learning approach called VGG-UNET (VGG embedded in U-NET), in which a deep fully convolutional network, VGG, is embedded as the encoder of an image segmentation network, U-NET. In the bottleneck of the VGG-UNET, in addition to the decoder path, another path of lightweight convolutional and fully connected layers combines all feature maps extracted from the final convolution layer of VGG to regress PT. In the test phase, we exclude the decoder path and consider only the single target task, i.e., PT estimation. The absolute errors obtained using VGG-UNET, VGG, and Mask R-CNN are 3.04 ± 2.49, 3.92 ± 2.92, and 4.97 ± 3.87, respectively. VGG-UNET thus yields a more accurate prediction with a lower standard deviation (STD). Our experimental results demonstrate that the proposed multi-task network significantly improves performance compared to the best-reported results based on cascaded networks.
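The concurrent-learning idea above (a shared encoder feeding both a segmentation decoder used only in training and a regression head that survives to test time) can be sketched schematically. Everything below is a toy linear stand-in for the actual VGG-UNET; all names and shapes are assumed for illustration:

```python
import numpy as np

def encoder(x, w_enc):
    """Shared encoder: toy linear layer + ReLU standing in for the VGG backbone."""
    return np.maximum(w_enc @ x, 0.0)

def seg_head(z, w_seg):
    """Auxiliary segmentation decoder head, used only during training."""
    return w_seg @ z

def tilt_head(z, w_reg):
    """Regression head: collapses bottleneck features to a scalar tilt value."""
    return float(w_reg @ z)

rng = np.random.default_rng(1)
x = rng.standard_normal(16)            # flattened toy radiograph
w_enc = rng.standard_normal((8, 16))
w_seg = rng.standard_normal((16, 8))
w_reg = rng.standard_normal(8)

z = encoder(x, w_enc)
train_outputs = (seg_head(z, w_seg), tilt_head(z, w_reg))  # both heads in training
test_output = tilt_head(z, w_reg)      # decoder path dropped at test time
```

The design point is that the segmentation loss shapes the shared encoder during training, while inference pays only the cost of the regression path.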
Affiliation(s)
- Ata Jodeiri
- Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 51666, Iran
- Faculty of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz 51656, Iran
- Hadi Seyedarabi
- Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 51666, Iran
- Sebelan Danishvar
- College of Engineering, Design and Physical Sciences, Brunel University London, Uxbridge UB8 3PH, UK
- Seyyed Hossein Shafiei
- Orthopedic Surgery Research Centre, Sina University Hospital, School of Medicine, Tehran University of Medical Sciences, Tehran 51656, Iran
- Jafar Ganjpour Sales
- Department of Orthopedic Surgery, Shohada Hospital, Tabriz University of Medical Sciences, Tabriz 51656, Iran
- Moein Khoori
- Joint Reconstruction Research Center (JRRC), Tehran University of Medical Sciences, Tehran 51656, Iran
- Shakiba Rahimi
- Orthopedic Surgery Research Centre, Sina University Hospital, School of Medicine, Tehran University of Medical Sciences, Tehran 51656, Iran
4
Wang J, Fang Z, Yao S, Yang F. Ellipse guided multi-task network for fetal head circumference measurement. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104535] [Indexed: 01/03/2023]
5
Automatic Intelligent System Using Medical of Things for Multiple Sclerosis Detection. Comput Intell Neurosci 2023; 2023:4776770. [PMID: 36864930] [PMCID: PMC9974276] [DOI: 10.1155/2023/4776770] [Received: 05/24/2022] [Revised: 07/31/2022] [Accepted: 08/16/2022] [Indexed: 02/25/2023]
Abstract
Malfunctions in the immune system cause multiple sclerosis (MS), which initiates mild to severe nerve damage. MS disturbs the signal communication between the brain and other body parts, and early diagnosis helps reduce its harshness in humans. Magnetic resonance imaging (MRI)-supported MS detection is a standard clinical procedure in which the bio-image recorded with a chosen modality is used to assess the severity of the disease. The proposed research implements a convolutional neural network (CNN)-supported scheme to detect MS lesions in the chosen brain MRI slices. The stages of this framework include (i) image collection and resizing, (ii) deep feature mining, (iii) hand-crafted feature mining, (iv) feature optimization with the firefly algorithm, and (v) serial feature integration and classification. In this work, five-fold cross-validation is executed, and the final result is considered for the assessment. The brain MRI slices with and without the skull section are examined separately, and the attained results are presented. The experimental outcome of this study confirms that VGG16 with a random forest (RF) classifier offered a classification accuracy of >98% for MRI with the skull, and VGG16 with K-nearest neighbor (KNN) provided an accuracy of >98% without the skull.
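The serial feature-integration step named in stage (v), i.e., concatenating deep and hand-crafted feature vectors into one descriptor, can be sketched as follows; the z-normalization before concatenation is an assumption for illustration, not a detail reported in the abstract:

```python
import numpy as np

def serial_fuse(deep_feat, handcrafted_feat):
    """Serially integrate two feature vectors: z-normalize, then concatenate."""
    def znorm(v):
        s = v.std()
        return (v - v.mean()) / s if s > 0 else v - v.mean()
    return np.concatenate([znorm(deep_feat), znorm(handcrafted_feat)])

deep = np.array([0.2, 1.4, -0.7, 3.1])  # e.g. features mined from a CNN layer
hand = np.array([5.0, 2.5])             # e.g. hand-crafted texture descriptors
fused = serial_fuse(deep, hand)
print(fused.shape)  # (6,)
```

Normalizing each vector before concatenation keeps one feature family from dominating the classifier purely by scale.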
6
Rajinikanth V, Vincent PMDR, Srinivasan K, Ananth Prabhu G, Chang CY. A framework to distinguish healthy/cancer renal CT images using the fused deep features. Front Public Health 2023; 11:1109236. [PMID: 36794074] [PMCID: PMC9922737] [DOI: 10.3389/fpubh.2023.1109236] [Received: 11/27/2022] [Accepted: 01/04/2023] [Indexed: 02/01/2023]
Abstract
Introduction: Cancer rates in humans are gradually rising due to a variety of reasons, and sensible detection and management are essential to decrease the disease rates. The kidney is one of the vital organs in human physiology; cancer in the kidney is a medical emergency and needs accurate diagnosis and well-organized management. Methods: The proposed work develops a framework to classify renal computed tomography (CT) images into healthy/cancer classes using pre-trained deep-learning schemes. To improve detection accuracy, this work suggests a threshold filter-based pre-processing scheme, which helps remove artefacts in the CT slices. The stages of this scheme involve: (i) image collection, resizing, and artefact removal, (ii) deep feature extraction, (iii) feature reduction and fusion, and (iv) binary classification using five-fold cross-validation. Results and discussion: The experimental investigation is executed separately for (i) CT slices with the artefact and (ii) CT slices without the artefact. In the experimental outcome of this study, the K-nearest neighbor (KNN) classifier achieved 100% detection accuracy using the pre-processed CT slices. This scheme can therefore be considered for examining clinical-grade renal CT images, as it is clinically significant.
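The five-fold cross-validation protocol used in stage (iv) can be sketched in NumPy; the fold construction below (shuffling, near-equal splits) is the generic procedure, assumed rather than taken from the paper:

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Shuffle n sample indices and split them into 5 disjoint folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), 5)

folds = five_fold_indices(20)
for i, test_fold in enumerate(folds):
    train = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # fit on `train`, evaluate on `test_fold`; together they cover every sample
    assert set(train) | set(test_fold) == set(range(20))
print(len(folds))  # 5
```

Each sample appears in exactly one test fold, so the five per-fold accuracies average into one unbiased estimate of the classifier's performance.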
Affiliation(s)
- Venkatesan Rajinikanth
- Division of Research and Innovation, Department of Computer Science and Engineering, Saveetha School of Engineering, SIMATS, Chennai, Tamil Nadu, India
- P. M. Durai Raj Vincent
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- G. Ananth Prabhu
- Department of Computer Science Engineering, Sahyadri College of Engineering and Management, Mangaluru, India
- Chuan-Yu Chang
- Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin, Taiwan
- Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
7
Aamir S, Rahim A, Aamir Z, Abbasi SF, Khan MS, Alhaisoni M, Khan MA, Khan K, Ahmad J. Predicting Breast Cancer Leveraging Supervised Machine Learning Techniques. Comput Math Methods Med 2022; 2022:5869529. [PMID: 36017156] [PMCID: PMC9398810] [DOI: 10.1155/2022/5869529] [Received: 06/15/2022] [Accepted: 07/28/2022] [Indexed: 02/08/2023]
Abstract
Breast cancer is one of the leading causes of death in women worldwide. The complex nature of breast cancer cells (microcalcifications and masses) makes it quite difficult for radiologists to diagnose properly. Various computer-aided diagnosis (CAD) systems have therefore been developed and are being used to aid radiologists in the diagnosis of cancer cells. However, given the intrinsic risks of delayed and/or incorrect diagnosis, it is indispensable to improve these diagnostic systems. In this regard, machine learning has recently been playing a potential role in the early and precise detection of breast cancer. This paper presents a new machine learning-based framework that utilizes Random Forest, Gradient Boosting, Support Vector Machine, Artificial Neural Network, and Multilayer Perceptron approaches to efficiently predict breast cancer from patient data. For this purpose, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset has been utilized and classified using a hybrid Multilayer Perceptron (MLP) model and a 5-fold cross-validation framework as a working prototype. For improved classification, a connection-based feature selection technique that also eliminates recursive features has been used. The proposed framework has been validated on two separate datasets, i.e., the Wisconsin Prognostic Breast Cancer (WPBC) and Wisconsin Original Breast Cancer (WOBC) datasets. The results demonstrate an improved accuracy of 99.12% due to efficient data preprocessing and feature selection applied to the input data.
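A simple label-correlation filter is one common way to realize the kind of feature selection described above; the sketch below is a generic illustration on synthetic data, not the paper's actual selection technique:

```python
import numpy as np

def rank_by_correlation(X, y):
    """Rank feature columns by absolute Pearson correlation with the label."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]  # most label-correlated feature first

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=50).astype(float)  # toy binary labels
noise = rng.standard_normal((50, 3))           # three uninformative columns
X = np.column_stack([y + 0.1 * rng.standard_normal(50), noise])
ranking = rank_by_correlation(X, y)
print(ranking[0])  # column 0, the informative feature, ranks first
```

Keeping only the top-ranked columns shrinks the input to the downstream MLP, which is the purpose such a selection step serves in a pipeline like this.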
Affiliation(s)
- Sanam Aamir
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Aqsa Rahim
- Faculty of Science and Technology, University of Tromsø, Tromso, Norway
- Zain Aamir
- Department of Data Science, National University of Computer and Emerging Sciences, Islamabad 44000, Pakistan
- Saadullah Farooq Abbasi
- Department of Electrical Engineering, National University of Technology, Islamabad 44000, Pakistan
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Muhammad Attique Khan
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Khyber Khan
- Department of Computer Science, Khurasan University, Jalalabad, Afghanistan
- Jawad Ahmad
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK