1. Inomata S, Yoshimura T, Tang M, Ichikawa S, Sugimori H. Estimation of Left and Right Ventricular Ejection Fractions from cine-MRI Using 3D-CNN. Sensors (Basel) 2023; 23:6580. [PMID: 37514888; PMCID: PMC10384911; DOI: 10.3390/s23146580]
Abstract
Cardiac function indices are conventionally calculated by tracing contours on short-axis cine-MRI images. A 3D-CNN (convolutional neural network), which adds time-series information to images, can estimate cardiac function indices without tracing by taking images with known values and cardiac cycles as input. Because short-axis images depict both the left and right ventricles, it is unclear which ventricle's motion is captured as a feature. This study aims to estimate the ejection fractions by training on short-axis images paired with known left and right ventricular ejection fractions, and to confirm the accuracy of each estimate and whether each index is captured as a feature. A total of 100 patients with publicly available short-axis cine images were used. The dataset was split into training and test sets at an 8:2 ratio, and a regression model was built by training a 3D-ResNet50. Accuracy was assessed using five-fold cross-validation, with the correlation coefficient, MAE (mean absolute error), and RMSE (root mean squared error) as evaluation metrics. For the left ventricular ejection fraction, the mean correlation coefficient was 0.80, the MAE was 9.41, and the RMSE was 12.26; for the right ventricular ejection fraction, the corresponding values were 0.56, 11.35, and 14.95. The correlation coefficient was considerably higher for the left ventricular ejection fraction. Regression modeling using the 3D-CNN thus estimated the left ventricular ejection fraction more accurately, indicating that left ventricular systolic function was captured as a feature.
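The regression setup described above lends itself to a compact illustration. Below is a minimal PyTorch sketch of a 3D-CNN ejection-fraction regressor; the study uses a 3D-ResNet50, but since torchvision only bundles a 3D ResNet-18 (r3d_18), that backbone stands in here, and the clip shape and batch size are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: 3D-CNN regression of ejection fraction from a cine clip.
# Assumes torchvision's r3d_18 as a stand-in for the paper's 3D-ResNet50.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class EFRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = r3d_18(weights=None)           # 3D ResNet backbone
        in_features = self.backbone.fc.in_features
        self.backbone.fc = nn.Linear(in_features, 1)   # single EF output

    def forward(self, x):
        # x: (batch, channels=3, frames, height, width), one cardiac cycle
        return self.backbone(x).squeeze(-1)

model = EFRegressor()
clip = torch.randn(2, 3, 16, 112, 112)  # 2 hypothetical clips, 16 frames each
ef_pred = model(clip)                   # predicted ejection fractions (%)
print(ef_pred.shape)                    # torch.Size([2])
```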
Affiliation(s)
- Soichiro Inomata
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Takaaki Yoshimura
- Department of Health Sciences and Technology, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo 060-8648, Japan
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Minghui Tang
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Department of Diagnostic Imaging, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Shota Ichikawa
- Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Niigata University, Niigata 951-8518, Japan
- Institute for Research Administration, Niigata University, Niigata 950-2181, Japan
- Hiroyuki Sugimori
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Department of Biomedical Science and Engineering, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
2. Anaya-Isaza A, Mera-Jiménez L, Verdugo-Alejo L, Sarasti L. Optimizing MRI-based brain tumor classification and detection using AI: A comparative analysis of neural networks, transfer learning, data augmentation, and the cross-transformer network. Eur J Radiol Open 2023; 10:100484. [PMID: 36950474; PMCID: PMC10027502; DOI: 10.1016/j.ejro.2023.100484]
Abstract
Early detection and diagnosis of brain tumors are crucial to taking adequate preventive measures, as with most cancers. Meanwhile, artificial intelligence (AI) has grown exponentially, even in such complex environments as medicine. Here, a framework is proposed to explore state-of-the-art deep learning architectures for brain tumor classification and detection. An in-house development called the Cross-Transformer is also included, which consists of three scalar products that combine the keys, queries, and values of a self-attention model. Initially, we focused on the classification of three tumor types: glioma, meningioma, and pituitary. The InceptionResNetV2, InceptionV3, DenseNet121, Xception, ResNet50V2, VGG19, and EfficientNetB7 networks were trained on the Figshare brain tumor dataset. Over 97% of classifications were accurate in this experiment, which provided an overview of each network's performance. Subsequently, we focused on tumor detection using the Brain MRI Images for Brain Tumor Detection and The Cancer Genome Atlas Low-Grade Glioma databases. These experiments covered transfer learning, data augmentation, and different image acquisition sequences: T1-weighted images (T1WI), T1-weighted post-gadolinium (T1-Gd), and fluid-attenuated inversion recovery (FLAIR). Based on the results, transfer learning and data augmentation increased accuracy by up to 6%, with a p-value below the significance level of 0.05, and the FLAIR sequence was the most efficient for detection. Finally, the proposed Cross-Transformer proved to be the most effective in terms of training time, requiring approximately half the time of the second-fastest network.
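As a rough illustration of the transfer-learning-plus-augmentation recipe the abstract compares, the following PyTorch/torchvision sketch freezes an ImageNet-pretrained ResNet-50 and attaches a three-class head for glioma, meningioma, and pituitary. The paper's backbones (e.g., ResNet50V2, EfficientNetB7) and exact augmentations differ, so every concrete choice here is an assumption, not the authors' pipeline.

```python
# Minimal sketch: transfer learning with simple data augmentation for
# three-class brain tumor classification (assumed setup, not the paper's).
import torch
import torch.nn as nn
from torchvision import models, transforms

# Slice-level augmentation (illustrative choices).
augment = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Transfer learning: reuse ImageNet features, retrain only the classifier head.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in backbone.parameters():
    p.requires_grad = False                          # freeze pretrained weights
backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # glioma/meningioma/pituitary

logits = backbone(torch.randn(4, 3, 224, 224))       # 4 MRI slices as RGB tensors
print(logits.shape)                                  # torch.Size([4, 3])
```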
3. Ichikawa S, Itadani H, Sugimori H. Toward automatic reformation at the orbitomeatal line in head computed tomography using object detection algorithm. Phys Eng Sci Med 2022; 45:835-845. [PMID: 35793033; DOI: 10.1007/s13246-022-01153-z]
Abstract
Consistent cross-sectional imaging is desirable for accurately detecting lesions and facilitating follow-up in head computed tomography (CT). However, manual reformation causes image variations among technologists and requires additional time. We therefore developed a system that reformats head CT images at the orbitomeatal (OM) line and evaluated its performance using real-world clinical data. Retrospective data were obtained for 681 consecutive patients who underwent non-contrast head CT, and the datasets were randomly divided into training, validation, and testing sets. Four landmarks (the bilateral eyes and external auditory canals) were detected with a trained You Only Look Once (YOLO)v5 model, and the head CT images were reformatted at the OM line. Precision, recall, and mean average precision at an intersection-over-union threshold of 0.5 were computed on the validation sets. Reformation quality in the testing sets was rated by three radiological technologists on a qualitative 4-point scale. The precision, recall, and mean average precision of the trained YOLOv5 model across all categories were 0.688, 0.949, and 0.827, respectively. In our environment, the mean processing time was 23.5 ± 2.4 s per case. The qualitative evaluation showed that the automatically reformatted images had clinically useful quality, with scores of 3 or 4 in 86.8%, 91.2%, and 94.1% of cases for observers 1, 2, and 3, respectively. Our system reformatted head CT images at the OM line with acceptable quality using an object detection algorithm and was highly time efficient.
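The geometric step that follows landmark detection can be sketched as below: given the detected eye and external-auditory-canal positions, the tilt of the OM line is computed and the volume is rotated in the sagittal plane. This is a minimal sketch assuming a (z, y, x) volume layout and hypothetical landmark coordinates, not the authors' implementation.

```python
# Minimal sketch: rotate a head CT volume so the orbitomeatal line lies
# in the axial plane, given two detected landmark centers (assumed layout).
import numpy as np
from scipy.ndimage import rotate

def reformat_to_om_line(volume, eye_zyx, eac_zyx):
    """Rotate a (z, y, x) head CT volume so the OM line becomes in-plane."""
    dz = eye_zyx[0] - eac_zyx[0]             # craniocaudal offset
    dy = eye_zyx[1] - eac_zyx[1]             # anteroposterior offset
    angle = np.degrees(np.arctan2(dz, dy))   # tilt of the OM line (degrees)
    # Rotate in the sagittal (z-y) plane; axes=(0, 1) pairs z with y.
    return rotate(volume, angle, axes=(0, 1), reshape=False, order=1)

volume = np.zeros((160, 256, 256), dtype=np.float32)   # hypothetical scan
reformatted = reformat_to_om_line(volume,
                                  eye_zyx=(80, 60, 128),
                                  eac_zyx=(70, 140, 128))
```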
Affiliation(s)
- Shota Ichikawa
- Graduate School of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, 060-0812, Japan
- Department of Radiological Technology, Kurashiki Central Hospital, 1-1-1 Miwa, Kurashiki, Okayama, 710-8602, Japan
- Hideki Itadani
- Department of Radiological Technology, Kurashiki Central Hospital, 1-1-1 Miwa, Kurashiki, Okayama, 710-8602, Japan
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, 060-0812, Japan
4. Ichikawa S, Hamada M, Sugimori H. A deep-learning method using computed tomography scout images for estimating patient body weight. Sci Rep 2021; 11:15627. [PMID: 34341462; PMCID: PMC8329066; DOI: 10.1038/s41598-021-95170-9]
Abstract
Body weight is an indispensable parameter for determining contrast medium dose, appropriate drug dosing, and radiation dose management. However, accurate patient body weight cannot always be determined at the time of computed tomography (CT) scanning, especially in emergency care, and no time-efficient, highly accurate method currently exists for estimating body weight before diagnostic CT scans. In this study, on the basis of 1831 chest and 519 abdominal CT scout images with the corresponding body weights, we developed and evaluated deep-learning models capable of automatically predicting body weight from CT scout images. In the performance assessment, there were strong correlations between the actual and predicted body weights in both the chest (ρ = 0.947, p < 0.001) and abdominal (ρ = 0.869, p < 0.001) datasets, and the mean absolute errors were 2.75 kg and 4.77 kg for the chest and abdominal datasets, respectively. Our deep-learning method estimates body weight from CT scout images with clinically acceptable accuracy and could potentially be useful for determining contrast medium dose and managing CT dose in adult patients of unknown body weight.
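The reported evaluation reduces to standard regression metrics, which the short sketch below reproduces on hypothetical weights. The abstract does not state whether ρ denotes Pearson's or Spearman's coefficient, so both are computed; the sample values are illustrative assumptions.

```python
# Minimal sketch: evaluating predicted vs. actual body weight with
# correlation and mean absolute error (hypothetical data).
import numpy as np
from scipy.stats import pearsonr, spearmanr

actual = np.array([62.0, 75.5, 48.3, 90.1, 70.2])      # kg, hypothetical
predicted = np.array([60.1, 78.0, 50.0, 86.4, 71.9])   # kg, hypothetical

mae = np.mean(np.abs(actual - predicted))              # mean absolute error
r, _ = pearsonr(actual, predicted)                     # Pearson correlation
rho, _ = spearmanr(actual, predicted)                  # Spearman correlation
print(f"MAE={mae:.2f} kg, Pearson r={r:.3f}, Spearman rho={rho:.3f}")
```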
Affiliation(s)
- Shota Ichikawa
- Graduate School of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, 060-0812, Japan
- Department of Radiological Technology, Kurashiki Central Hospital, 1-1-1 Miwa, Kurashiki, Okayama, 710-8602, Japan
- Misaki Hamada
- Department of Radiological Technology, Kurashiki Central Hospital, 1-1-1 Miwa, Kurashiki, Okayama, 710-8602, Japan
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, 060-0812, Japan