1. Yasaka K, Kawamura M, Sonoda Y, Kubo T, Kiryu S, Abe O. Large multimodality model fine-tuned for detecting breast and esophageal carcinomas on CT: a preliminary study. Jpn J Radiol 2024. doi: 10.1007/s11604-024-01718-w. PMID: 39668277.
Abstract
PURPOSE This study aimed to develop a large multimodality model (LMM) that can detect breast and esophageal carcinomas on chest contrast-enhanced CT.

MATERIALS AND METHODS In this retrospective study, CT images of 401 (age, 62.9 ± 12.9 years; 169 males), 51 (age, 65.5 ± 11.6 years; 23 males), and 120 (age, 64.6 ± 14.2 years; 60 males) patients were used in the training, validation, and test phases, respectively. The numbers of CT images with breast carcinoma, esophageal carcinoma, and no lesion were 927, 2180, and 2087 in the training dataset; 80, 233, and 270 in the validation dataset; and 184, 246, and 6919 in the test dataset. The LMM was fine-tuned using CT images as input and text data ("suspicious of breast carcinoma"/"suspicious of esophageal carcinoma"/"no lesion") as reference data on a desktop computer equipped with a single graphics processing unit. Because of the random nature of the training process, supervised learning was performed 10 times. The model that performed best on the validation dataset was further tested on the time-independent test dataset. Detection performance was evaluated by calculating the area under the receiver operating characteristic curve (AUC).

RESULTS The sensitivities of the fine-tuned LMM for detecting breast and esophageal carcinomas in the test dataset were 0.929 and 0.951, respectively. The diagnostic performance of the fine-tuned LMM for detecting breast and esophageal carcinomas was high, with AUCs of 0.890 (95% CI 0.871-0.909) and 0.880 (95% CI 0.865-0.894), respectively.

CONCLUSIONS The fine-tuned LMM detected both breast and esophageal carcinomas on chest contrast-enhanced CT with high diagnostic performance. The usefulness of large multimodality models in chest cancer imaging had not previously been assessed; here, the fine-tuned model achieved AUCs of 0.890 and 0.880 for breast and esophageal carcinoma detection, respectively.
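As a rough, non-authoritative illustration of the evaluation described in this abstract (per-image detection summarized by AUC with a 95% confidence interval), the sketch below uses hypothetical label and score arrays and a simple bootstrap; it is not the authors' code and the variable names are assumptions.

```python
# Minimal sketch (not the authors' code): per-image detection evaluated by
# AUC with a bootstrap 95% CI, assuming hypothetical label/score arrays.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: 1 = image contains the target carcinoma, 0 = no lesion;
# y_score is the model's confidence for the positive class.
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=1000), 0, 1)

auc = roc_auc_score(y_true, y_score)

# Bootstrap the AUC to obtain a 95% confidence interval.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```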
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Motohide Kawamura
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yuki Sonoda
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Takatoshi Kubo
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Shigeru Kiryu
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
2. Yasaka K, Kanzawa J, Nakaya M, Kurokawa R, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution Deep Learning Reconstruction for 3D Brain MR Imaging: Improvement of Cranial Nerve Depiction and Interobserver Agreement in Evaluations of Neurovascular Conflict. Acad Radiol 2024;31:5118-5127. doi: 10.1016/j.acra.2024.06.010. PMID: 38897913.
Abstract
RATIONALE AND OBJECTIVES To determine whether super-resolution deep learning reconstruction (SR-DLR) improves the depiction of cranial nerves and interobserver agreement when assessing neurovascular conflict on 3D fast asymmetric spin echo (3D FASE) brain MR images, as compared with deep learning reconstruction (DLR).

MATERIALS AND METHODS This retrospective study involved reconstructing 3D FASE MR images of the brain for 37 patients using SR-DLR and DLR. Three blinded readers conducted qualitative image analyses, evaluating the degree of neurovascular conflict, structure depiction, sharpness, noise, and diagnostic acceptability. Quantitative analyses included measuring edge rise distance (ERD), edge rise slope (ERS), and full width at half maximum (FWHM) using the signal intensity profile along a linear region of interest placed across the center of the basilar artery.

RESULTS Interobserver agreement on the degree of neurovascular conflict of the facial nerve was generally higher with SR-DLR (0.429-0.923) than with DLR (0.175-0.689). SR-DLR exhibited increased subjective image noise compared with DLR (p ≥ 0.008). However, all three readers found SR-DLR significantly superior in terms of sharpness (p < 0.001); depiction of cranial nerves, particularly the facial and acoustic nerves, as well as the osseous spiral lamina (p < 0.001); and diagnostic acceptability (p ≤ 0.002). The FWHM (mm)/ERD (mm)/ERS (mm⁻¹) values for SR-DLR and DLR were 3.1-4.3/0.9-1.1/8795.5-10,703.5 and 3.3-4.8/1.4-2.1/5157.9-7705.8, respectively, with SR-DLR yielding significantly sharper images (p ≤ 0.001).

CONCLUSION SR-DLR enhances image sharpness, leading to improved cranial nerve depiction and a tendency toward greater interobserver agreement regarding facial nerve neurovascular conflict.
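The profile metrics reported above (ERD, ERS, and FWHM along a line across the basilar artery) can be illustrated with the small sketch below. The 10%-90% edge thresholds, the synthetic profile, and the helper function are assumptions for illustration only; the study's exact definitions are not reproduced here.

```python
# Minimal sketch (assumptions noted below): edge rise distance (ERD),
# edge rise slope (ERS) and full width at half maximum (FWHM) from a 1D
# signal-intensity profile sampled along a line across a vessel.
import numpy as np

def profile_metrics(profile, pixel_mm, lo_frac=0.1, hi_frac=0.9):
    """Compute FWHM, ERD and ERS for a single-peak intensity profile.

    A 10%-90% rise is assumed for the edge metrics; the paper's exact
    thresholds are not stated in this listing, so treat these as illustrative.
    """
    profile = np.asarray(profile, dtype=float)
    base, peak = profile.min(), profile.max()
    norm = (profile - base) / (peak - base)
    x_mm = np.arange(len(profile)) * pixel_mm

    # FWHM: distance between the half-maximum crossings around the peak.
    above = np.where(norm >= 0.5)[0]
    fwhm = x_mm[above[-1]] - x_mm[above[0]]

    # Edge metrics on the rising (left) side of the peak.
    peak_idx = int(np.argmax(norm))
    rise = norm[: peak_idx + 1]                        # monotonically rising side
    x_lo = np.interp(lo_frac, rise, x_mm[: peak_idx + 1])
    x_hi = np.interp(hi_frac, rise, x_mm[: peak_idx + 1])
    erd = x_hi - x_lo                                  # mm
    ers = (hi_frac - lo_frac) * (peak - base) / erd    # intensity units per mm
    return fwhm, erd, ers

# Synthetic Gaussian-like profile with an assumed 0.5 mm pixel spacing.
x = np.linspace(-10, 10, 81)
profile = 100 + 900 * np.exp(-(x / 2.0) ** 2)
print(profile_metrics(profile, pixel_mm=0.5))
```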
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Jun Kanzawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Moto Nakaya
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Ryo Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Taku Tajima
- Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo 108-8329, Japan
- Hiroyuki Akai
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan; Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
- Naoki Yoshioka
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Masaaki Akahane
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Kuni Ohtomo
- International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi 324-8501, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shigeru Kiryu
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan.
3. Yasaka K, Uehara S, Kato S, Watanabe Y, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution Deep Learning Reconstruction Cervical Spine 1.5T MRI: Improved Interobserver Agreement in Evaluations of Neuroforaminal Stenosis Compared to Conventional Deep Learning Reconstruction. J Imaging Inform Med 2024;37:2466-2473. doi: 10.1007/s10278-024-01112-y. PMID: 38671337; PMCID: PMC11522216.
Abstract
The aim of this study was to investigate whether super-resolution deep learning reconstruction (SR-DLR) is superior to conventional deep learning reconstruction (DLR) with respect to interobserver agreement in the evaluation of neuroforaminal stenosis using 1.5T cervical spine MRI. This retrospective study included 39 patients who underwent 1.5T cervical spine MRI. T2-weighted sagittal images were reconstructed with SR-DLR and DLR. Three blinded radiologists independently evaluated the images in terms of the degree of neuroforaminal stenosis; depictions of the vertebrae, spinal cord and neural foramina; sharpness; noise; artefacts; and diagnostic acceptability. In the quantitative image analyses, a fourth radiologist evaluated the signal-to-noise ratio (SNR) by placing a circular or ovoid region of interest on the spinal cord, and the edge slope based on a linear region of interest placed across the surface of the spinal cord. Interobserver agreement in the evaluations of neuroforaminal stenosis was 0.422-0.571 with SR-DLR and 0.410-0.542 with DLR; the kappa values for reader 1 vs. reader 2 and for reader 2 vs. reader 3 differed significantly. Two of the three readers rated depiction of the spinal cord, sharpness, and diagnostic acceptability as significantly better with SR-DLR than with DLR. Both SNR and edge slope (mm⁻¹) were significantly better with SR-DLR (12.9 and 6031, respectively) than with DLR (11.5 and 3741, respectively) (p < 0.001 for both). In conclusion, compared with DLR, SR-DLR improved interobserver agreement in the evaluation of neuroforaminal stenosis on 1.5T cervical spine MRI.
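Interobserver agreement of the kind summarized above is commonly reported as a weighted kappa between reader pairs. The brief sketch below uses quadratically weighted Cohen's kappa on hypothetical ordinal stenosis grades; the weighting scheme and the data are assumptions, since the exact statistical details are not given in this listing.

```python
# Minimal sketch (not the authors' code): pairwise weighted Cohen's kappa
# for ordinal gradings (e.g., neuroforaminal stenosis scores) from readers.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal grades (0-3) assigned by three readers to the same images.
grades = {
    "reader1": [0, 1, 2, 2, 3, 1, 0, 2, 3, 1],
    "reader2": [0, 1, 2, 3, 3, 1, 1, 2, 3, 1],
    "reader3": [1, 1, 2, 2, 3, 0, 0, 2, 2, 1],
}

# Compute agreement for every reader pair with quadratic weights.
for a, b in combinations(grades, 2):
    kappa = cohen_kappa_score(grades[a], grades[b], weights="quadratic")
    print(f"{a} vs {b}: weighted kappa = {kappa:.3f}")
```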
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Shunichi Uehara
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shimpei Kato
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- Yusuke Watanabe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Taku Tajima
- Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo, 108-8329, Japan
- Hiroyuki Akai
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- Naoki Yoshioka
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Masaaki Akahane
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Kuni Ohtomo
- International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi, 324-8501, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shigeru Kiryu
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan.
4. Hamada A, Yasaka K, Hatano S, Kurokawa M, Inui S, Kubo T, Watanabe Y, Abe O. Deep-Learning Reconstruction of High-Resolution CT Improves Interobserver Agreement for the Evaluation of Pulmonary Fibrosis. Can Assoc Radiol J 2024;75:542-548. doi: 10.1177/08465371241228468. PMID: 38293802.
Abstract
Objective: This study aimed to investigate whether deep-learning reconstruction (DLR) improves interobserver agreement in the evaluation of honeycombing in patients with interstitial lung disease (ILD) who underwent high-resolution computed tomography (CT), compared with hybrid iterative reconstruction (HIR).

Methods: In this retrospective study, 35 consecutive patients with suspected ILD who underwent CT covering the chest region were included. High-resolution, unilateral-lung CT images were reconstructed with DLR and HIR for the right and left lungs. A radiologist placed regions of interest on the lung and measured the standard deviation of CT attenuation (i.e., quantitative image noise). In the qualitative image analyses, 5 blinded readers assessed the presence of honeycombing and reticulation, qualitative image noise, artifacts, and overall image quality using a 5-point scale (except for artifacts, which were evaluated using a 3-point scale).

Results: Quantitative and qualitative image noise with DLR was markedly reduced compared with HIR (P < .001). Artifacts and overall image quality with DLR were significantly improved compared with HIR (P < .001 for 4 of 5 readers). Interobserver agreement in the evaluations of honeycombing and reticulation was higher for DLR (0.557 [0.450-0.693] and 0.525 [0.470-0.541], respectively) than for HIR (0.321 [0.211-0.520] and 0.470 [0.354-0.533], respectively); the difference was statistically significant for honeycombing (P = .014).

Conclusions: Compared with HIR, DLR improved interobserver agreement in the evaluation of honeycombing on CT in patients with ILD.
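The quantitative noise measurement described above (standard deviation of CT attenuation within a region of interest) can be sketched as follows. The circular ROI helper, array names, and synthetic slice are illustrative assumptions, not taken from the study.

```python
# Minimal sketch (illustrative only): quantitative image noise measured as the
# standard deviation of CT attenuation (HU) inside a circular region of interest.
import numpy as np

def roi_noise(image_hu, center_rc, radius_px):
    """Return mean and SD of HU values inside a circular ROI.

    `image_hu` is a 2D array of CT attenuation values; a plain circular ROI
    is assumed here purely for illustration.
    """
    rows, cols = np.ogrid[: image_hu.shape[0], : image_hu.shape[1]]
    mask = (rows - center_rc[0]) ** 2 + (cols - center_rc[1]) ** 2 <= radius_px**2
    values = image_hu[mask]
    return float(values.mean()), float(values.std(ddof=1))

# Hypothetical slice: lung-like background around -850 HU with Gaussian noise.
rng = np.random.default_rng(0)
slice_hu = rng.normal(-850, 40, size=(512, 512))
mean_hu, noise_sd = roi_noise(slice_hu, center_rc=(256, 256), radius_px=30)
print(f"ROI mean = {mean_hu:.1f} HU, image noise (SD) = {noise_sd:.1f} HU")
```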
Affiliation(s)
- Akiyoshi Hamada
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Koichiro Yasaka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Sosuke Hatano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Mariko Kurokawa
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Shohei Inui
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Takatoshi Kubo
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Yusuke Watanabe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
5. Fujita N, Yasaka K, Hatano S, Sakamoto N, Kurokawa R, Abe O. Deep learning reconstruction for high-resolution computed tomography images of the temporal bone: comparison with hybrid iterative reconstruction. Neuroradiology 2024;66:1105-1112. doi: 10.1007/s00234-024-03330-1. PMID: 38514472.
Abstract
PURPOSE We investigated whether the quality of high-resolution computed tomography (CT) images of the temporal bone improves with deep learning reconstruction (DLR) compared with hybrid iterative reconstruction (HIR).

METHODS This retrospective study enrolled 36 patients (15 men, 21 women; age, 53.9 ± 19.5 years) who had undergone high-resolution CT of the temporal bone. Axial and coronal images were reconstructed using DLR, HIR, and filtered back projection (FBP). In qualitative image analyses, two radiologists independently compared the DLR and HIR images with FBP in terms of depiction of structures, image noise, and overall quality, using a 5-point scale (5 = better than FBP, 1 = poorer than FBP) to evaluate image quality. The other two radiologists placed regions of interest on the tympanic cavity and measured the standard deviation of CT attenuation (i.e., quantitative image noise). Scores from the qualitative and quantitative analyses of the DLR and HIR images were compared using, respectively, the Wilcoxon signed-rank test and the paired t-test.

RESULTS Qualitative and quantitative image noise was significantly reduced in DLR images compared with HIR images (all comparisons, p ≤ 0.016). Depiction of the otic capsule, auditory ossicles, and tympanic membrane was significantly improved in DLR images compared with HIR images (both readers, p ≤ 0.003). Overall image quality was significantly superior in DLR images compared with HIR images (both readers, p < 0.001).

CONCLUSION Compared with HIR, DLR provided significantly better-quality high-resolution CT images of the temporal bone.
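The statistical comparisons named in the methods above (Wilcoxon signed-rank test for ordinal quality scores and a paired t-test for quantitative noise) can be outlined as below. The score and noise arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch (hypothetical data): paired comparisons of DLR vs. HIR -
# Wilcoxon signed-rank for ordinal scores, paired t-test for quantitative noise.
import numpy as np
from scipy import stats

# Hypothetical 5-point quality scores (relative to FBP) per patient.
dlr_scores = np.array([5, 4, 5, 5, 4, 5, 4, 5, 5, 4, 5, 5])
hir_scores = np.array([4, 4, 3, 5, 3, 4, 4, 4, 5, 3, 4, 4])

# Hypothetical quantitative noise (SD of CT attenuation in the tympanic cavity).
dlr_noise = np.array([28.1, 30.4, 27.5, 29.9, 31.2, 26.8, 28.7, 30.0])
hir_noise = np.array([35.6, 37.2, 34.9, 38.1, 36.4, 33.8, 35.1, 37.5])

w_stat, w_p = stats.wilcoxon(dlr_scores, hir_scores)   # ordinal, paired
t_stat, t_p = stats.ttest_rel(dlr_noise, hir_noise)    # continuous, paired

print(f"Wilcoxon signed-rank: statistic={w_stat:.1f}, p={w_p:.4f}")
print(f"Paired t-test: t={t_stat:.2f}, p={t_p:.4f}")
```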
Affiliation(s)
- Nana Fujita
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Sosuke Hatano
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Naoya Sakamoto
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Ryo Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan