1. Xing X, Li L, Sun M, Yang J, Zhu X, Peng F, Du J, Feng Y. Deep-learning-based 3D super-resolution CT radiomics model: Predict the possibility of the micropapillary/solid component of lung adenocarcinoma. Heliyon 2024; 10:e34163. [PMID: 39071606] [PMCID: PMC11279278] [DOI: 10.1016/j.heliyon.2024.e34163] Open access.
Abstract
Objective: Invasive lung adenocarcinoma (ILA) with micropapillary (MPP)/solid (SOL) components has a poor prognosis, and preoperative identification is essential for subsequent treatment decision-making. This study aimed to construct and evaluate a super-resolution (SR)-enhanced radiomics model designed to predict the presence of MPP/SOL components preoperatively, enabling more accurate and individualized treatment planning. Methods: Patients who underwent curative-intent ILA resection between March 2018 and November 2023 were included. A deep transfer learning network was applied to CT images to improve their resolution, yielding preoperative super-resolution CT (SR-CT) images. Models were developed from radiomic features extracted from CT and SR-CT images, using a range of classifiers: logistic regression (LR), support vector machine (SVM), k-nearest neighbors (KNN), random forest, extra trees, extreme gradient boosting (XGBoost), light gradient boosting machine (LightGBM), and multilayer perceptron (MLP). Diagnostic performance was assessed by the area under the receiver operating characteristic curve (AUC). Results: A total of 245 patients were recruited, of whom 109 (44.5%) were diagnosed with ILA with MPP/SOL components. On conventional CT images, the SVM model performed best, with AUCs of 0.864 in the training cohort and 0.761 in the test cohort. When the same SVM approach was applied to SR-CT images, the model achieved AUCs of 0.904 in the training cohort and 0.819 in the test cohort. Calibration curves indicated a high goodness of fit, and decision curve analysis (DCA) highlighted the model's clinical utility. Conclusion: The study successfully constructed and evaluated a deep learning (DL)-enhanced SR-CT radiomics model, which outperformed the conventional CT radiomics model in predicting MPP/SOL patterns in ILA.
Continued research and broader validation are necessary to fully harness and refine the clinical potential of radiomics when combined with SR reconstruction technology.
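The modeling pipeline this abstract describes (radiomic feature vectors, an SVM classifier, AUC on a held-out cohort) can be sketched in a few lines. This is a minimal illustration assuming scikit-learn as the toolkit; the features, labels, cohort split, and resulting AUC below are synthetic stand-ins, not the study's data or results.

```python
# Hedged sketch: SVM on (synthetic) radiomic features, scored with AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 245, 30                      # cohort size from the abstract; 30 stand-in features
X = rng.normal(size=(n, d))
# Hypothetical label: MPP/SOL presence driven by the first 3 features plus noise
y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.3f}")
```

In practice the features would come from a radiomics extractor applied to the segmented nodule on CT or SR-CT, and the train/test split would follow the study's cohorts.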
Affiliation(s)
- Xiaowei Xing: Cancer Center, Department of Radiology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, China
- Liangping Li: Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Mingxia Sun: Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Jiahu Yang: Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Xinhai Zhu: Department of Thoracic Surgery, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Fang Peng: Department of Pathology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Jianzong Du: Department of Respiratory Medicine, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Yue Feng: Cancer Center, Department of Radiology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, China
2. Kawashita I, Fukumoto W, Mitani H, Narita K, Chosa K, Nakamura Y, Nagao M, Awai K. Development of a deep-learning algorithm for age estimation on CT images of the vertebral column. Leg Med (Tokyo) 2024; 69:102444. [PMID: 38604090] [DOI: 10.1016/j.legalmed.2024.102444]
Abstract
PURPOSE: Accurate age estimation of cadavers is essential for their identification, but conventional methods often fail to yield adequate estimates, especially in elderly cadavers. We developed a deep learning algorithm for age estimation on CT images of the vertebral column and evaluated its accuracy. METHODS: For development, we included 1,120 CT datasets of the vertebral column, from 140 patients in each of 8 age decades. A regression model based on Visual Geometry Group-16 (VGG16) was improved in estimation accuracy by bagging. To verify its accuracy, we applied the algorithm to estimate the age of 219 cadavers who had undergone postmortem CT (PMCT). The mean difference, the mean absolute error (MAE), and the standard error of the estimate (SEE) between the known and the estimated age were calculated. Correlation analysis using the intraclass correlation coefficient (ICC) and Bland-Altman analysis were performed to assess differences between the known and the estimated age. RESULTS: For the 219 cadavers, the mean difference between the known and the estimated age was 0.30 years; the MAE was 4.36 years and the SEE 5.48 years. The ICC(2,1) was 0.96 (95% confidence interval: 0.95-0.97, p < 0.001). Bland-Altman analysis showed no proportional or fixed errors (p = 0.08 and 0.41, respectively). CONCLUSIONS: Our deep learning algorithm for estimating the age of 219 cadavers on CT images of the vertebral column was more accurate than conventional methods and highly useful.
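The accuracy metrics this abstract reports (mean difference, MAE, SEE) are straightforward to compute once known and estimated ages are paired. The sketch below uses fabricated ages and one common definition of the standard error of the estimate (residual sum of squares over n − 2); it is an illustration, not the study's code.

```python
# Hedged sketch: agreement metrics between known and estimated ages.
import numpy as np

rng = np.random.default_rng(1)
known = rng.uniform(20, 90, size=219)             # 219 cadavers, as in the study
estimated = known + rng.normal(0, 5, size=219)    # hypothetical model error

diff = estimated - known
mean_diff = diff.mean()                            # systematic bias
mae = np.abs(diff).mean()                          # mean absolute error
see = np.sqrt((diff ** 2).sum() / (len(diff) - 2)) # one common SEE definition
print(f"mean diff {mean_diff:+.2f} y, MAE {mae:.2f} y, SEE {see:.2f} y")
```

The ICC(2,1) and Bland-Altman analysis mentioned in the abstract would then be computed on the same paired arrays.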
Affiliation(s)
- Ikuo Kawashita: Department of Diagnostic Radiology, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Wataru Fukumoto: Department of Diagnostic Radiology and Center for Cause of Death Investigation Research, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Hidenori Mitani: Department of Diagnostic Radiology, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Keigo Narita: Department of Diagnostic Radiology, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Keigo Chosa: Department of Diagnostic Radiology, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Yuko Nakamura: Department of Diagnostic Radiology, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Masataka Nagao: Center for Cause of Death Investigation Research, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
- Kazuo Awai: Department of Diagnostic Radiology and Center for Cause of Death Investigation Research, Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
3. Iwano S, Kamiya S, Ito R, Kudo A, Kitamura Y, Nakamura K, Naganawa S. Measurement of solid size in early-stage lung adenocarcinoma by virtual 3D thin-section CT applied artificial intelligence. Sci Rep 2023; 13:21709. [PMID: 38066174] [PMCID: PMC10709591] [DOI: 10.1038/s41598-023-48755-5] Open access.
Abstract
An artificial intelligence (AI) system that reconstructs virtual 3D thin-section CT (TSCT) images from conventional CT images by deep learning was developed. The aim of this study was to investigate whether virtual TSCT could measure the solid size of early-stage lung adenocarcinoma as well as real TSCT. Pairs of original thin-slice CT and simulated thick-slice CT derived from TSCT images (thickness, 0.5-1.0 mm) of 2700 pulmonary nodules were used to train the thin-CT generator in a generative adversarial network (GAN) framework and develop the virtual TSCT AI system. For validation, CT images of 93 stage 0-I lung adenocarcinomas were collected, and virtual TSCTs were reconstructed from conventional 5-mm thick CT images using the AI system. Two radiologists measured and compared the solid size of tumors on conventional CT and on virtual and real TSCT. Agreement between the two observers for solid size measurements on the virtual TSCT was almost perfect (intraclass correlation coefficient = 0.967, P < 0.001). Virtual TSCT showed a significantly stronger correlation than conventional CT (P = 0.003 and P = 0.001 for the two observers, respectively). Agreement between the clinical T stage determined by virtual TSCT and that determined by real TSCT was excellent for both observers (κ = 0.882 and κ = 0.881, respectively). The AI system developed in this study was able to measure the solid size of early-stage lung adenocarcinoma on virtual TSCT as well as on real TSCT.
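The inter-observer agreement statistic quoted here, a two-way random, single-rater intraclass correlation coefficient, can be computed from scratch via ANOVA mean squares. The sketch below implements the standard Shrout-Fleiss ICC(2,1) formula on simulated solid-size measurements by two readers; all values are made up for illustration.

```python
# Hedged sketch: ICC(2,1) from scratch on simulated two-reader measurements.
import numpy as np

def icc_2_1(data):
    """data: (n_subjects, k_raters) array; returns Shrout-Fleiss ICC(2,1)."""
    n, k = data.shape
    grand = data.mean()
    msr = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    msc = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    sse = ((data - data.mean(axis=1, keepdims=True)
                 - data.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                               # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(2)
true_size = rng.uniform(5, 30, size=93)            # 93 tumors, as in the study
readers = np.stack([true_size + rng.normal(0, 1, 93),
                    true_size + rng.normal(0, 1, 93)], axis=1)
icc = icc_2_1(readers)
print(f"ICC(2,1) = {icc:.3f}")
```

With measurement noise small relative to between-subject variation, the ICC approaches 1, matching the "almost perfect agreement" regime the abstract reports.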
Affiliation(s)
- Shingo Iwano: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya 466-8550, Japan
- Shinichiro Kamiya: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya 466-8550, Japan
- Rintaro Ito: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya 466-8550, Japan
- Akira Kudo: Imaging Technology Center, Fujifilm Corporation, 2-26-30 Nishiazabu, Minato-ku, Tokyo 106-8620, Japan
- Yoshiro Kitamura: Imaging Technology Center, Fujifilm Corporation, 2-26-30 Nishiazabu, Minato-ku, Tokyo 106-8620, Japan
- Keigo Nakamura: Imaging Technology Center, Fujifilm Corporation, 2-26-30 Nishiazabu, Minato-ku, Tokyo 106-8620, Japan
- Shinji Naganawa: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya 466-8550, Japan
4. Milanese G, Ledda RE, Sabia F, Ruggirello M, Sestini S, Silva M, Sverzellati N, Marchianò AV, Pastorino U. Ultra-low dose computed tomography protocols using spectral shaping for lung cancer screening: Comparison with low-dose for volumetric LungRADS classification. Eur J Radiol 2023; 161:110760. [PMID: 36878153] [DOI: 10.1016/j.ejrad.2023.110760]
Abstract
PURPOSE: To compare low-dose computed tomography (LDCT) with four ultra-low-dose computed tomography (ULDCT) protocols for pulmonary nodule (PN) classification according to the Lung Reporting and Data System (LungRADS). METHODS: Three hundred sixty-one participants in an ongoing lung cancer screening (LCS) program underwent single-breath-hold double chest CT, comprising LDCT (120 kVp, 25 mAs; CTDIvol 1.62 mGy) and one ULDCT protocol among: fully automated exposure control ("ULDCT1"); fixed tube voltage and current according to patient size ("ULDCT2"); and hybrid approaches with fixed tube voltage ("ULDCT3") or tube-current automated exposure control ("ULDCT4"). Two radiologists (R1, R2) assessed LungRADS 2022 categories on LDCT and then, after 2 weeks, on ULDCT using two different kernels (R1: Qr49 ADMIRE 4; R2: Br49 ADMIRE 3). Intra-subject agreement for LungRADS categories between LDCT and ULDCT was measured by Cohen's kappa with Fleiss-Cohen weights. RESULTS: LDCT-dominant PNs were detected on ULDCT in 87% of cases on Qr49 ADMIRE 4 and 88% on Br49 ADMIRE 3. Intra-subject agreement was κULDCT1 = 0.89 [95% CI 0.82-0.96], κULDCT2 = 0.90 [0.81-0.98], κULDCT3 = 0.91 [0.84-0.99], and κULDCT4 = 0.88 [0.78-0.97] on Qr49 ADMIRE 4, and κULDCT1 = 0.88 [0.80-0.95], κULDCT2 = 0.91 [0.86-0.96], κULDCT3 = 0.87 [0.78-0.95], and κULDCT4 = 0.88 [0.82-0.94] on Br49 ADMIRE 3. PNs classified as LungRADS 4B on LDCT were correctly identified as LungRADS 4B on ULDCT3, which also delivered the lowest radiation exposure among the tested protocols (median effective doses: 0.31, 0.36, 0.27, and 0.37 mSv for ULDCT1-ULDCT4, respectively). CONCLUSIONS: ULDCT with spectral shaping allows detection and characterization of PNs with excellent agreement with LDCT and can be proposed as a feasible approach in LCS.
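The agreement measure used above, Cohen's kappa with Fleiss-Cohen weights, is equivalent to quadratically weighted kappa and is available directly in scikit-learn. The sketch below runs it on simulated LDCT/ULDCT category pairs; the categories, agreement rate, and resulting kappa are invented for illustration, not the study's readings.

```python
# Hedged sketch: quadratically weighted (Fleiss-Cohen) kappa on paired categories.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
categories = np.array([0, 1, 2, 3, 4])             # stand-ins for LungRADS 1-4B
ldct = rng.choice(categories, size=361)            # 361 participants, as in the study
# Simulate ULDCT readings that mostly agree; ~10% shift by one category
uldct = ldct.copy()
flip = rng.random(361) < 0.1
uldct[flip] = np.clip(uldct[flip] + rng.choice([-1, 1], flip.sum()), 0, 4)

kappa = cohen_kappa_score(ldct, uldct, weights="quadratic")
print(f"quadratically weighted kappa = {kappa:.2f}")
```

Quadratic weights penalize disagreements by the squared distance between categories, so an off-by-one LungRADS call costs far less than a 1-versus-4B discrepancy, which suits ordinal scales like this one.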
Affiliation(s)
- Gianluca Milanese: Scienze Radiologiche, Department of Medicine and Surgery, University of Parma, Parma, Italy; Fondazione IRCCS Istituto Nazionale dei Tumori, Thoracic Surgery, Milan, Lombardia, Italy
- Roberta Eufrasia Ledda: Scienze Radiologiche, Department of Medicine and Surgery, University of Parma, Parma, Italy; Fondazione IRCCS Istituto Nazionale dei Tumori, Thoracic Surgery, Milan, Lombardia, Italy
- Federica Sabia: Fondazione IRCCS Istituto Nazionale dei Tumori, Thoracic Surgery, Milan, Lombardia, Italy
- Margherita Ruggirello: Fondazione IRCCS Istituto Nazionale dei Tumori, Department of Diagnostic Imaging and Radiotherapy, Milan, Italy
- Stefano Sestini: Fondazione IRCCS Istituto Nazionale dei Tumori, Thoracic Surgery, Milan, Lombardia, Italy
- Mario Silva: Scienze Radiologiche, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Nicola Sverzellati: Scienze Radiologiche, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Alfonso Vittorio Marchianò: Fondazione IRCCS Istituto Nazionale dei Tumori, Department of Diagnostic Imaging and Radiotherapy, Milan, Italy
- Ugo Pastorino: Fondazione IRCCS Istituto Nazionale dei Tumori, Thoracic Surgery, Milan, Lombardia, Italy
5. Park SH, Han K, Jang HY, Park JE, Lee JG, Kim DW, Choi J. Methods for Clinical Evaluation of Artificial Intelligence Algorithms for Medical Diagnosis. Radiology 2023; 306:20-31. [PMID: 36346314] [DOI: 10.1148/radiol.220182]
Abstract
Adequate clinical evaluation of artificial intelligence (AI) algorithms before adoption in practice is critical. Clinical evaluation aims to confirm acceptable AI performance through adequate external testing and confirm the benefits of AI-assisted care compared with conventional care through appropriately designed and conducted studies, for which prospective studies are desirable. This article explains some of the fundamental methodological points that should be considered when designing and appraising the clinical evaluation of AI algorithms for medical diagnosis. The specific topics addressed include the following: (a) the importance of external testing of AI algorithms and strategies for conducting the external testing effectively, (b) the various metrics and graphical methods for evaluating the AI performance as well as essential methodological points to note in using and interpreting them, (c) paired study designs primarily for comparative performance evaluation of conventional and AI-assisted diagnoses, (d) parallel study designs primarily for evaluating the effect of AI intervention with an emphasis on randomized clinical trials, and (e) up-to-date guidelines for reporting clinical studies on AI, with an emphasis on guidelines registered in the EQUATOR Network library. Sound methodological knowledge of these topics will aid the design, execution, reporting, and appraisal of clinical evaluation of AI.
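One of the performance-evaluation points this article covers, reporting external-test metrics with uncertainty rather than as bare point estimates, is commonly handled with a bootstrap confidence interval. The sketch below illustrates a percentile-bootstrap 95% CI for AUC on a synthetic external test set; the labels, scores, and interval are placeholders, not results from the article.

```python
# Hedged sketch: percentile-bootstrap 95% CI for AUC on an external test set.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=300)                   # synthetic external-test labels
scores = y * 0.8 + rng.normal(0, 0.5, size=300)    # imperfect classifier output

point = roc_auc_score(y, scores)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))          # resample cases with replacement
    if y[idx].min() == y[idx].max():               # skip one-class resamples
        continue
    boot.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {point:.3f} (95% bootstrap CI {lo:.3f}-{hi:.3f})")
```

Resampling whole cases preserves the pairing between label and score, which is the same principle behind the paired comparative designs the article discusses.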
Affiliation(s)
- Seong Ho Park, Kyunghwa Han, Hye Young Jang, Ji Eun Park, June-Goo Lee, Dong Wook Kim, Jaesoon Choi
- From the Department of Radiology and Research Institute of Radiology (S.H.P., J.E.P., D.W.K.) and Department of Biomedical Engineering (J.C.), Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, South Korea; Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, South Korea (K.H.); Department of Radiology, National Cancer Center, Goyang, South Korea (H.Y.J.); and Biomedical Engineering Research Center, Asan Institute for Life Sciences, University of Ulsan College of Medicine, Seoul, South Korea (J.G.L.)
6. de Margerie-Mellon C, Chassagnon G. Artificial intelligence: A critical review of applications for lung nodule and lung cancer. Diagn Interv Imaging 2023; 104:11-17. [PMID: 36513593] [DOI: 10.1016/j.diii.2022.11.007]
Abstract
Artificial intelligence (AI) is a broad concept that usually refers to computer programs that can learn from data and perform specific tasks. In recent years, the growth of deep learning, a successful technique for computer vision tasks that does not require explicit programming, coupled with the availability of large imaging databases, has fostered the development of multiple applications in medical imaging, especially for lung nodules and lung cancer, mostly through convolutional neural networks (CNNs). Some of the first applications of AI in this field were dedicated to the automated detection of lung nodules on X-ray and computed tomography (CT) examinations, with performances now reaching or exceeding those of radiologists. For lung nodule segmentation, CNN-based algorithms applied to CT images show excellent spatial overlap with manual segmentation, even for irregular and ground-glass nodules. A third application of AI is the classification of lung nodules as malignant or benign, which could limit the number of follow-up CT examinations for less suspicious lesions. Several algorithms have demonstrated excellent capabilities for predicting the malignancy risk when a nodule is discovered. These applications of AI for lung nodules are particularly appealing in the context of lung cancer screening. In the field of lung cancer, AI tools applied to lung imaging have been investigated for distinct aims. First, they could play a role in the non-invasive characterization of tumors, especially for histological subtype and somatic mutation prediction, with a potential therapeutic impact. Additionally, they could help predict patient prognosis in combination with clinical data.
Despite these encouraging perspectives, clinical implementation of AI tools is only beginning, because published studies often lack generalizability, the tools' inner workings remain opaque, and data are limited on their impact on radiologists' decisions and on patient outcomes. Radiologists must be active participants in the evaluation of AI tools, as such tools could support their daily work and offer them more time for high-added-value tasks.
Affiliation(s)
- Constance de Margerie-Mellon: Université Paris Cité, Laboratory of Imaging Biomarkers, Center for Research on Inflammation, UMR 1149, INSERM, 75018 Paris, France; Department of Radiology, Hôpital Saint-Louis APHP, 75010 Paris, France
- Guillaume Chassagnon: Université Paris Cité, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin APHP, 75014 Paris, France
7. Application of deep learning-based super-resolution to T1-weighted postcontrast gradient echo imaging of the chest. Radiol Med 2023; 128:184-190. [PMID: 36609662] [PMCID: PMC9938811] [DOI: 10.1007/s11547-022-01587-1]
Abstract
OBJECTIVES: A deep learning-based super-resolution approach for postcontrast volume-interpolated breath-hold examination (VIBE) imaging of the chest was investigated. The aim was to improve image quality, noise, artifacts, and diagnostic confidence without changing the acquisition parameters. MATERIALS AND METHODS: Fifty patients who underwent postcontrast VIBE imaging of the chest at 1.5 T were included in this retrospective study. After acquisition of the standard VIBE (VIBES), a novel deep learning-based algorithm and a denoising algorithm were applied, resulting in enhanced images (VIBEDL). Two radiologists independently evaluated both datasets, rating the sharpness of soft tissue, vessels, bronchial structures, and lymph nodes, the extent of artifacts, cardiac motion artifacts, noise levels, and overall diagnostic confidence on a 4-point Likert scale. In the presence of lung lesions, the largest lesion was rated for sharpness and diagnostic confidence on the same Likert scale, and its largest diameter was measured. RESULTS: The sharpness of soft tissue, vessels, bronchial structures, and lymph nodes, as well as diagnostic confidence, the extent of artifacts, the extent of cardiac motion artifacts, and noise levels were rated superior in VIBEDL (all P < 0.001). There was no significant difference in the diameter or localization of the largest lung lesion between VIBEDL and VIBES. Lesion sharpness and detectability were rated significantly better by both readers with VIBEDL (both P < 0.001). CONCLUSION: Applying a novel deep learning-based super-resolution approach to T1-weighted postcontrast VIBE imaging resulted in improved image quality, noise levels, and diagnostic confidence, as well as a shortened acquisition time.
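Paired ordinal ratings like the Likert scores in this reader study are typically compared with a Wilcoxon signed-rank test. The sketch below, assuming SciPy, runs that test on fabricated standard-versus-enhanced ratings; the rating distributions and p-value are illustrative only.

```python
# Hedged sketch: Wilcoxon signed-rank test on paired 4-point Likert ratings.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(5)
vibe_s = rng.integers(2, 4, size=50)               # 50 patients, as in the study
# Simulate the enhanced series being rated the same or one point better
vibe_dl = np.clip(vibe_s + rng.integers(0, 2, size=50), 1, 4)

stat, p = wilcoxon(vibe_s, vibe_dl)                # zero differences are dropped
print(f"Wilcoxon signed-rank p = {p:.2e}")
```

Because the test ranks within-patient differences, it respects the pairing of the two reconstructions of each examination rather than treating the series as independent samples.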
8. Deep learning reconstruction for 1.5 T cervical spine MRI: effect on interobserver agreement in the evaluation of degenerative changes. Eur Radiol 2022; 32:6118-6125. [DOI: 10.1007/s00330-022-08729-z]
9. Goo JM. Deep Learning-based Super-Resolution Algorithm: Potential in the Management of Subsolid Nodules. Radiology 2021; 299:220-221. [PMID: 33561378] [DOI: 10.1148/radiol.2021204463]
Affiliation(s)
- Jin Mo Goo: From the Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul National University Medical Research Center, 101 Daehak-ro, Jongno-gu, Seoul 110-744, Korea; and Cancer Research Institute, Seoul National University, Seoul, Korea