1
Fujita N, Yasaka K, Hatano S, Sakamoto N, Kurokawa R, Abe O. Deep learning reconstruction for high-resolution computed tomography images of the temporal bone: comparison with hybrid iterative reconstruction. Neuroradiology 2024; 66:1105-1112. [PMID: 38514472] [DOI: 10.1007/s00234-024-03330-1]
Abstract
PURPOSE We investigated whether the quality of high-resolution computed tomography (CT) images of the temporal bone improves with deep learning reconstruction (DLR) compared with hybrid iterative reconstruction (HIR). METHODS This retrospective study enrolled 36 patients (15 men, 21 women; age, 53.9 ± 19.5 years) who had undergone high-resolution CT of the temporal bone. Axial and coronal images were reconstructed using DLR, HIR, and filtered back projection (FBP). In the qualitative image analyses, two radiologists independently compared the DLR and HIR images with FBP in terms of depiction of structures, image noise, and overall quality, using a 5-point scale (5 = better than FBP, 1 = poorer than FBP). Two other radiologists placed regions of interest on the tympanic cavity and measured the standard deviation of CT attenuation (i.e., quantitative image noise). Scores from the qualitative and quantitative analyses of the DLR and HIR images were compared using the Wilcoxon signed-rank test and the paired t-test, respectively. RESULTS Qualitative and quantitative image noise was significantly reduced in DLR images compared with HIR images (all comparisons, p ≤ 0.016). Depiction of the otic capsule, auditory ossicles, and tympanic membrane was significantly improved in DLR images compared with HIR images (both readers, p ≤ 0.003). Overall image quality was significantly superior in DLR images compared with HIR images (both readers, p < 0.001). CONCLUSION Compared with HIR, DLR provided significantly better-quality high-resolution CT images of the temporal bone.
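The quantitative noise measurement used in this study (the standard deviation of CT attenuation within a region of interest) can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the Hounsfield-unit samples below are hypothetical.

```python
import math

def roi_noise(hu_values):
    """Quantitative image noise: the standard deviation of CT attenuation
    (Hounsfield units) within a region of interest."""
    n = len(hu_values)
    mean = sum(hu_values) / n
    # Sample standard deviation (ddof = 1).
    return math.sqrt(sum((v - mean) ** 2 for v in hu_values) / (n - 1))

# Hypothetical ROI samples from the tympanic cavity on paired reconstructions.
dlr_roi = [2.0, -1.0, 1.0, 0.0, -2.0]
hir_roi = [6.0, -4.0, 3.0, -1.0, -4.0]
print(roi_noise(dlr_roi) < roi_noise(hir_roi))  # lower noise favours DLR
```

In the study itself, per-patient noise values like these would then be compared across reconstructions with a paired t-test.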
Affiliation(s)
- Nana Fujita: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Sosuke Hatano: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoya Sakamoto: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Ryo Kurokawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
2
Yasaka K, Uehara S, Kato S, Watanabe Y, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution Deep Learning Reconstruction Cervical Spine 1.5T MRI: Improved Interobserver Agreement in Evaluations of Neuroforaminal Stenosis Compared to Conventional Deep Learning Reconstruction. J Imaging Inform Med 2024. [PMID: 38671337] [DOI: 10.1007/s10278-024-01112-y]
Abstract
The aim of this study was to investigate whether super-resolution deep learning reconstruction (SR-DLR) is superior to conventional deep learning reconstruction (DLR) with respect to interobserver agreement in the evaluation of neuroforaminal stenosis using 1.5T cervical spine MRI. This retrospective study included 39 patients who underwent 1.5T cervical spine MRI. T2-weighted sagittal images were reconstructed with SR-DLR and DLR. Three blinded radiologists independently evaluated the images in terms of the degree of neuroforaminal stenosis, depictions of the vertebrae, spinal cord and neural foramina, sharpness, noise, artefacts and diagnostic acceptability. In the quantitative image analyses, a fourth radiologist evaluated the signal-to-noise ratio (SNR) by placing a circular or ovoid region of interest on the spinal cord, and the edge slope based on a linear region of interest placed across the surface of the spinal cord. Interobserver agreement in the evaluations of neuroforaminal stenosis was 0.422-0.571 with SR-DLR and 0.410-0.542 with DLR; the kappa values for the reader 1 vs. reader 2 and reader 2 vs. reader 3 comparisons differed significantly. Two of the three readers rated depictions of the spinal cord, sharpness, and diagnostic acceptability as significantly better with SR-DLR than with DLR. Both SNR and edge slope (/mm) were also significantly better with SR-DLR (12.9 and 6031, respectively) than with DLR (11.5 and 3741, respectively) (p < 0.001 for both). In conclusion, compared to DLR, SR-DLR improved interobserver agreement in the evaluations of neuroforaminal stenosis using 1.5T cervical spine MRI.
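Interobserver agreement of the kind reported above is commonly quantified with Cohen's kappa. The sketch below computes the unweighted statistic for two readers; note that studies grading ordinal severity often use a weighted variant instead, and the ratings shown here are hypothetical, not the study's data.

```python
def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa between two readers' categorical ratings."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    # Agreement expected by chance from each reader's marginal frequencies.
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical neuroforaminal stenosis grades (0 = none, 1 = mild, 2 = severe).
reader1 = [0, 1, 2, 1, 0, 2, 1, 1]
reader2 = [0, 1, 2, 2, 0, 2, 1, 0]
print(round(cohens_kappa(reader1, reader2), 3))  # → 0.636
```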
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Shunichi Uehara: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shimpei Kato: Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- Yusuke Watanabe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Taku Tajima: Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo, 108-8329, Japan
- Hiroyuki Akai: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan; Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- Naoki Yoshioka: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Masaaki Akahane: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Kuni Ohtomo: International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi, 324-8501, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
3
Lin C, Guo Y, Huang X, Rao S, Zhou J. Esophageal cancer detection via non-contrast CT and deep learning. Front Med (Lausanne) 2024; 11:1356752. [PMID: 38510455] [PMCID: PMC10953501] [DOI: 10.3389/fmed.2024.1356752]
Abstract
Background Esophageal cancer is the seventh most frequently diagnosed cancer and the sixth leading cause of cancer deaths worldwide, with a high mortality rate. Early detection is vital for patients. Traditionally, contrast-enhanced computed tomography (CT) has been used to detect esophageal carcinomas, but with the development of deep learning (DL) technology, non-contrast CT may now also be able to detect them. In this study, we aimed to establish a DL-based diagnostic system to detect esophageal cancer on non-contrast chest CT images. Methods In this retrospective dual-center study, we included 397 patients with pathologically confirmed primary esophageal cancer and non-contrast chest CT images, as well as 250 healthy individuals without esophageal tumors, confirmed through endoscopic examination. The images of these participants served as the training data. Additionally, images from 100 esophageal cancer patients and 100 healthy individuals were enrolled for model validation. Esophagus segmentation was performed using the no-new-Net (nnU-Net) model; based on the segmentation result and extracted features, a decision tree classified whether cancer was present. We compared the diagnostic efficacy of the DL-based method with the performance of radiologists with various levels of experience, and also compared the radiologists' diagnostic performance with and without the aid of the DL-based method. Results The DL-based method demonstrated high diagnostic efficacy in the detection of esophageal cancer, with an AUC of 0.890, sensitivity of 0.900, specificity of 0.880, accuracy of 0.882, and F-score of 0.891. Furthermore, with the aid of the DL-based method, the AUC values of the three radiologists improved significantly, from 0.855/0.820/0.930 to 0.910/0.955/0.965 (p = 0.0004/<0.0001/0.0068, DeLong's test).
Conclusion The DL-based method shows satisfactory sensitivity and specificity for detecting esophageal cancer on non-contrast chest CT images. With its aid, radiologists can achieve a better diagnostic workup for esophageal cancer and reduce the chance of missing cancers when reading CT scans acquired for health check-up purposes.
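The performance figures quoted above (sensitivity, specificity, accuracy, F-score) all follow from a binary confusion matrix. A minimal sketch, using hypothetical labels rather than the study's data:

```python
def detection_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy and F-score for binary labels
    (1 = cancer, 0 = healthy)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f_score

# Hypothetical predictions for 4 cancer and 4 healthy cases.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(detection_metrics(y_true, y_pred))  # each metric is 0.75 here
```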
Affiliation(s)
- Chong Lin: Department of Radiology, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, Fujian, China; Xiamen Municipal Clinical Research Center for Medical Imaging, Xiamen, Fujian, China
- Yi Guo: Department of Radiology, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, Fujian, China; Xiamen Municipal Clinical Research Center for Medical Imaging, Xiamen, Fujian, China
- Xu Huang: Department of Thoracic Surgery, Zhongshan Hospital, Fudan University, Shanghai, China
- Shengxiang Rao: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Jianjun Zhou: Department of Radiology, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, Fujian, China; Xiamen Municipal Clinical Research Center for Medical Imaging, Xiamen, Fujian, China; Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
4
Hamada A, Yasaka K, Hatano S, Kurokawa M, Inui S, Kubo T, Watanabe Y, Abe O. Deep-Learning Reconstruction of High-Resolution CT Improves Interobserver Agreement for the Evaluation of Pulmonary Fibrosis. Can Assoc Radiol J 2024. [PMID: 38293802] [DOI: 10.1177/08465371241228468]
Abstract
Objective: This study aimed to investigate whether deep-learning reconstruction (DLR) improves interobserver agreement in the evaluation of honeycombing, compared with hybrid iterative reconstruction (HIR), in patients with interstitial lung disease (ILD) who underwent high-resolution computed tomography (CT). Methods: In this retrospective study, 35 consecutive patients with suspected ILD who underwent CT covering the chest region were included. High-resolution CT images of each lung were reconstructed with DLR and HIR. A radiologist placed regions of interest on the lung and measured the standard deviation of CT attenuation (i.e., quantitative image noise). In the qualitative image analyses, 5 blinded readers assessed the presence of honeycombing and reticulation, qualitative image noise, artifacts, and overall image quality using a 5-point scale (except for artifacts, which were evaluated using a 3-point scale). Results: Quantitative and qualitative image noise in DLR was significantly reduced compared to that in HIR (P < .001). Artifacts and overall image quality in DLR were significantly improved compared to those of HIR (P < .001 for 4 of the 5 readers). Interobserver agreement in the evaluations of honeycombing and reticulation was higher for DLR (0.557 [0.450-0.693] and 0.525 [0.470-0.541], respectively) than for HIR (0.321 [0.211-0.520] and 0.470 [0.354-0.533], respectively); the difference was statistically significant for honeycombing (P = .014). Conclusions: Compared to HIR, DLR improved interobserver agreement in the evaluation of honeycombing on CT in patients with ILD.
Affiliation(s)
- Akiyoshi Hamada: Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Koichiro Yasaka: Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Sosuke Hatano: Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Mariko Kurokawa: Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Shohei Inui: Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Takatoshi Kubo: Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Yusuke Watanabe: Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Osamu Abe: Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
5
Okimoto N, Yasaka K, Fujita N, Watanabe Y, Kanzawa J, Abe O. Deep learning reconstruction for improving the visualization of acute brain infarct on computed tomography. Neuroradiology 2024; 66:63-71. [PMID: 37991522] [PMCID: PMC10761512] [DOI: 10.1007/s00234-023-03251-5]
Abstract
PURPOSE This study aimed to investigate the impact of deep learning reconstruction (DLR) on the depiction of acute infarcts compared with hybrid iterative reconstruction (Hybrid IR). METHODS This retrospective study included 29 patients with acute infarction (75.8 ± 13.2 years, 20 males) and 26 patients without (64.4 ± 12.4 years, 18 males). Unenhanced head CT images were reconstructed with DLR and Hybrid IR. In the qualitative analyses, three readers evaluated the conspicuity of lesions in five regions and the image quality. In the quantitative analyses, a radiologist placed regions of interest on the lateral ventricle, putamen, and white matter, and the standard deviation of CT attenuation (i.e., quantitative image noise) was recorded. RESULTS Conspicuity of acute infarcts in DLR was superior to that in Hybrid IR, with a statistically significant difference for two readers (p ≤ 0.038). For infarcts imaged within 24 h of onset, conspicuity in DLR was significantly improved compared with Hybrid IR for all readers (p ≤ 0.020). Image noise in DLR was significantly reduced compared with Hybrid IR in both the qualitative and quantitative analyses (p < 0.001 for all). CONCLUSION DLR in head CT helped improve the depiction of acute infarcts, especially those imaged within 24 h of onset.
Affiliation(s)
- Naomasa Okimoto: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan; Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, 4-23-15 Kotobashi, Sumida-ku, Tokyo, 130-8575, Japan
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Nana Fujita: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Yusuke Watanabe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Jun Kanzawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
6
Yasaka K, Sato C, Hirakawa H, Fujita N, Kurokawa M, Watanabe Y, Kubo T, Abe O. Impact of deep learning on radiologists and radiology residents in detecting breast cancer on CT: a cross-vendor test study. Clin Radiol 2024; 79:e41-e47. [PMID: 37872026] [DOI: 10.1016/j.crad.2023.09.022]
Abstract
AIM To investigate the effect of deep learning on the diagnostic performance of radiologists and radiology residents in detecting breast cancer on computed tomography (CT). MATERIALS AND METHODS In this retrospective study, patients undergoing contrast-enhanced chest CT between January 2010 and December 2020 using equipment from two vendors were included. Patients with confirmed breast cancer were divided into training (n=201), validation (n=26), and test (n=30) groups, using processed CT images from either vendor. The trained deep-learning model was applied to test-group patients with (30 females; mean age = 59.2 ± 15.8 years) and without (19 males, 21 females; mean age = 64 ± 15.9 years) breast cancer. The image-based diagnostic performance of the deep-learning model was evaluated with the area under the receiver operating characteristic curve (AUC). Two radiologists and three radiology residents were asked to detect malignant lesions by recording a four-point diagnostic confidence score before and after referring to the deep-learning model's output, and their diagnostic performance was evaluated using jackknife alternative free-response receiver operating characteristic analysis by calculating the figure of merit (FOM). RESULTS The AUCs of the trained deep-learning model on the validation and test data were 0.976 and 0.967, respectively. After referring to the deep-learning model's output, the readers' FOMs improved significantly (readers 1/2/3/4/5: from 0.933/0.962/0.883/0.944/0.867 to 0.958/0.968/0.917/0.947/0.900; p=0.038). CONCLUSION Deep learning can help radiologists and radiology residents detect breast cancer on CT.
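The AUC values reported above have a simple empirical reading: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). A sketch with hypothetical four-point confidence scores, not the study's data:

```python
def empirical_auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of (positive, negative) pairs ranked
    correctly, counting ties as half a win."""
    wins = 0.0
    for sp in pos_scores:
        for sn in neg_scores:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical 4-point diagnostic confidence scores.
with_cancer = [4, 3, 3, 2]
without_cancer = [1, 2, 1, 3]
print(empirical_auc(with_cancer, without_cancer))  # → 0.84375
```

The jackknife FROC figure of merit used for the reader study is a more involved, lesion-localization-aware statistic and is not reproduced here.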
Affiliation(s)
- K Yasaka: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- C Sato: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- H Hirakawa: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- N Fujita: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- M Kurokawa: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Y Watanabe: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- T Kubo: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- O Abe: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
7
Naqa IE, Drukker K. AI in imaging and therapy: innovations, ethics, and impact - introductory editorial. Br J Radiol 2023; 96:20239004. [PMID: 38011226] [PMCID: PMC10546442] [DOI: 10.1259/bjr.20239004]