1. Zhao M, Liu J, Guo Z, Chen X, Zhang S, Zheng T. [Application of electrical impedance tomography imaging technology combined with generative adversarial network in pulmonary ventilation monitoring]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2024; 41:105-113. PMID: 38403610; PMCID: PMC10894735; DOI: 10.7507/1001-5515.202308026.
Abstract
Electrical impedance tomography (EIT) plays a crucial role in pulmonary ventilation monitoring and regional pulmonary function testing. However, the inherent ill-posedness of EIT reconstruction causes significant deviations in the conductivity recovered from noise-contaminated voltage data, making it difficult to obtain accurate conductivity-change distribution images and clear boundary contours. To enhance the image quality of EIT in lung ventilation monitoring, a novel approach integrating EIT with a deep learning algorithm was proposed. First, an optimized operator was introduced to enhance the Kalman filter algorithm, and Tikhonov regularization was incorporated into its state-space expression to obtain an initial reconstructed lung image. The imaging results were then fed into a generative adversarial network model to reconstruct accurate lung contours. Simulation results indicate that the proposed method produces pulmonary images with clear boundaries and shows increased robustness against noise interference. The methodology achieves a satisfactory level of visualization and may serve as a reference for diagnostic imaging modalities such as computed tomography.
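The regularized reconstruction step described above can be illustrated with a one-step linearized difference-EIT solve. The sketch below shows plain Tikhonov regularization only (the paper's Kalman-filter formulation and optimized operator are not reproduced), and the Jacobian `J`, voltage change `delta_v`, and weight `lam` are assumed inputs.

```python
import numpy as np

def tikhonov_difference_eit(J, delta_v, lam=1e-2):
    """One-step linearized difference-EIT reconstruction with Tikhonov regularization.

    J       : (n_measurements, n_pixels) sensitivity (Jacobian) matrix
    delta_v : (n_measurements,) boundary-voltage change between two frames
    lam     : regularization weight (assumed; tuned per setup)
    Returns the conductivity-change image as a flat vector.
    """
    n = J.shape[1]
    A = J.T @ J + (lam ** 2) * np.eye(n)      # regularized normal equations
    return np.linalg.solve(A, J.T @ delta_v)

# toy usage with random data, only to show the shapes involved
rng = np.random.default_rng(0)
J = rng.normal(size=(208, 576))               # e.g. 16-electrode protocol, 24x24 pixel grid
delta_v = rng.normal(size=208)
delta_sigma = tikhonov_difference_eit(J, delta_v)
print(delta_sigma.shape)                      # (576,)
```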
Affiliation(s)
- Zhao Mingkang
- School of Health Sciences & Biomedical Engineering, Hebei University of Technology, Tianjin 300130, P. R. China
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, P. R. China
- Tianjin Key Laboratory of Bioelectricity and Intelligent Health, Tianjin 300130, P. R. China
- Liu Jun
- School of Health Sciences & Biomedical Engineering, Hebei University of Technology, Tianjin 300130, P. R. China
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, P. R. China
- Tianjin Key Laboratory of Bioelectricity and Intelligent Health, Tianjin 300130, P. R. China
- Guo Zhongsheng
- School of Health Sciences & Biomedical Engineering, Hebei University of Technology, Tianjin 300130, P. R. China
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, P. R. China
- Chen Xiangqi
- School of Health Sciences & Biomedical Engineering, Hebei University of Technology, Tianjin 300130, P. R. China
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, P. R. China
- Tianjin Key Laboratory of Bioelectricity and Intelligent Health, Tianjin 300130, P. R. China
- Zhang Shuai
- School of Health Sciences & Biomedical Engineering, Hebei University of Technology, Tianjin 300130, P. R. China
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, P. R. China
- Tianjin Key Laboratory of Bioelectricity and Intelligent Health, Tianjin 300130, P. R. China
- Zheng Tianyu
- School of Health Sciences & Biomedical Engineering, Hebei University of Technology, Tianjin 300130, P. R. China
2. Lizzi F, Postuma I, Brero F, Cabini RF, Fantacci ME, Lascialfari A, Oliva P, Rinaldi L, Retico A. Quantification of pulmonary involvement in COVID-19 pneumonia: an upgrade of the LungQuant software for lung CT segmentation. European Physical Journal Plus 2023; 138:326. PMID: 37064789; PMCID: PMC10088731; DOI: 10.1140/epjp/s13360-023-03896-4.
Abstract
Computed tomography (CT) scans are used to evaluate the severity of lung involvement in patients affected by COVID-19 pneumonia. Here, we present an improved version of the LungQuant automatic segmentation software (LungQuant v2), which implements a cascade of three deep neural networks (DNNs) to segment the lungs and the lung lesions associated with COVID-19 pneumonia. The first network (BB-net) defines a bounding box enclosing the lungs, the second one (U-net 1) outputs the mask of the lungs, and the final one (U-net 2) generates the mask of the COVID-19 lesions. With respect to the previous version (LungQuant v1), three main improvements are introduced: the BB-net, a new term in the loss function in the U-net for lesion segmentation, and a post-processing procedure to separate the right and left lungs. The three DNNs were optimized, trained and tested on publicly available CT scans. We evaluated the system segmentation capability on an independent test set consisting of ten fully annotated CT scans, the COVID-19-CT-Seg benchmark dataset. The test performances are reported by means of the volumetric dice similarity coefficient (vDSC) and the surface dice similarity coefficient (sDSC) between the reference and the segmented objects. LungQuant v2 achieves a vDSC (sDSC) equal to 0.96 ± 0.01 (0.97 ± 0.01) and 0.69 ± 0.08 (0.83 ± 0.07) for the lung and lesion segmentations, respectively. The output of the segmentation software was then used to assess the percentage of infected lungs, obtaining a Mean Absolute Error (MAE) equal to 2%.
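For reference, the volumetric Dice similarity coefficient used to report these results can be computed from two binary masks as in the minimal sketch below (illustrative only, not the LungQuant code):

```python
import numpy as np

def volumetric_dice(pred_mask, ref_mask):
    """Volumetric Dice similarity coefficient (vDSC) between two binary masks.

    pred_mask, ref_mask : boolean numpy arrays of identical shape
    (e.g. 3D lung or lesion masks). Perfect overlap gives 1.0.
    """
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:                  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```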
Affiliation(s)
- Francesca Lizzi
- Pisa Division, National Institute for Nuclear Physics (INFN), Pisa, Italy
- Francesca Brero
- Pavia Division, INFN, Pavia, Italy
- Department of Physics, University of Pavia, Pavia, Italy
- Raffaella Fiamma Cabini
- Pavia Division, INFN, Pavia, Italy
- Department of Mathematics, University of Pavia, Pavia, Italy
- Maria Evelina Fantacci
- Pisa Division, National Institute for Nuclear Physics (INFN), Pisa, Italy
- Department of Physics, University of Pisa, Pisa, Italy
- Alessandro Lascialfari
- Pavia Division, INFN, Pavia, Italy
- Department of Physics, University of Pavia, Pavia, Italy
- Piernicola Oliva
- Department of Chemical, Physical, Mathematical and Natural Sciences, University of Sassari, Sassari, Italy
- Cagliari Division, INFN, Cagliari, Italy
- Lisa Rinaldi
- Pavia Division, INFN, Pavia, Italy
- Department of Physics, University of Pavia, Pavia, Italy
- Alessandra Retico
- Pisa Division, National Institute for Nuclear Physics (INFN), Pisa, Italy
3. Rodriguez-Obregon DE, Mejia-Rodriguez AR, Cendejas-Zaragoza L, Gutiérrez Mejía J, Arce-Santana ER, Charleston-Villalobos S, Aljama-Corrales T, Gabutti A, Santos-Díaz A. Semi-Supervised COVID-19 Volumetric Pulmonary Lesion Estimation on CT Images using Probabilistic Active Contour and CNN Segmentation. Biomed Signal Process Control 2023; 85:104905. PMID: 36993838; PMCID: PMC10030333; DOI: 10.1016/j.bspc.2023.104905.
Abstract
Purpose A semi-supervised two-step methodology is proposed to obtain a volumetric estimation of COVID-19-related lesions on computed tomography (CT) images. Methods First, damaged tissue was segmented from CT images using a probabilistic active contours approach. Second, lung parenchyma was extracted using a previously trained U-Net. Finally, the volumetric estimation of COVID-19 lesions was calculated with respect to the lung parenchyma masks. The approach was validated using a publicly available dataset containing 20 previously labeled and manually segmented COVID-19 CT images. It was then applied to CT scans of 295 COVID-19 patients admitted to an intensive care unit, and lesion estimates were compared between deceased and surviving patients for high- and low-resolution images. Results A comparable median Dice similarity coefficient of 0.66 was achieved for the 20 validation images. For the 295-image dataset, results show a significant difference in lesion percentages between deceased and surviving patients, with p-values of 9.1×10^-4 for low-resolution and 5.1×10^-5 for high-resolution images. Furthermore, the difference in lesion percentages between high- and low-resolution images was 10% on average. Conclusion The proposed approach can help estimate the size of lesions caused by COVID-19 on CT images and may be considered an alternative way to obtain a volumetric segmentation of this novel disease without requiring large amounts of labeled COVID-19 data to train an artificial intelligence algorithm. The low variation between the estimated lesion percentages in high- and low-resolution CT images suggests that the approach is robust, and it may provide valuable information to differentiate between surviving and deceased patients.
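A minimal sketch of the final volumetric step, computing the lesion percentage from binary lesion and lung-parenchyma masks plus the voxel spacing, is shown below; function and argument names are illustrative, not the authors' implementation.

```python
import numpy as np

def lesion_percentage(lesion_mask, lung_mask, voxel_spacing_mm):
    """Percentage of lung parenchyma occupied by lesions, from binary 3D masks.

    lesion_mask, lung_mask : boolean arrays of identical shape
    voxel_spacing_mm       : (dz, dy, dx) spacing in millimetres
    """
    voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0       # mm^3 -> mL
    lesion_in_lung = np.logical_and(lesion_mask, lung_mask)    # restrict lesions to parenchyma
    lung_ml = lung_mask.sum() * voxel_volume_ml
    lesion_ml = lesion_in_lung.sum() * voxel_volume_ml
    return 100.0 * lesion_ml / lung_ml if lung_ml > 0 else 0.0
```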
Affiliation(s)
- Leopoldo Cendejas-Zaragoza
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán, Mexico City, Mexico
- Juan Gutiérrez Mejía
- Tecnologico de Monterrey, School of Medicine and Health Sciences, Mexico City, Mexico
- Alejandro Gabutti
- Department of Radiology and Imaging, Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán, Mexico City, Mexico
- Alejandro Santos-Díaz
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Tecnologico de Monterrey, School of Medicine and Health Sciences, Monterrey, Mexico
4. Khan A, Khan SH, Saif M, Batool A, Sohail A, Waleed Khan M. A Survey of Deep Learning Techniques for the Analysis of COVID-19 and their usability for Detecting Omicron. J Exp Theor Artif Intell 2023. DOI: 10.1080/0952813x.2023.2165724.
Affiliation(s)
- Asifullah Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Islamabad, Pakistan
- Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Islamabad, Pakistan
- Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Computer Systems Engineering, University of Engineering and Applied Sciences (UEAS), Swat, Pakistan
- Mahrukh Saif
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Asiya Batool
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Anabia Sohail
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Computer Science, Faculty of Computing & Artificial Intelligence, Air University, Islamabad, Pakistan
- Muhammad Waleed Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Mechanical and Aerospace Engineering, Columbus, OH, USA
5. Learning Label Diffusion Maps for Semi-Automatic Segmentation of Lung CT Images with COVID-19. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.12.003.
6. Qayyum A, Lalande A, Meriaudeau F. Effective multiscale deep learning model for COVID19 segmentation tasks: A further step towards helping radiologist. Neurocomputing 2022; 499:63-80. PMID: 35578654; PMCID: PMC9095500; DOI: 10.1016/j.neucom.2022.05.009.
Abstract
Infection by SARS-CoV-2, which leads to COVID-19, is still rising, and techniques to either diagnose or evaluate the disease are still thoroughly investigated. The use of CT as a complementary tool to other biological tests remains under scrutiny, as CT scans are prone to many false positives because other lung diseases display similar characteristics on CT. However, fully investigating CT images is of tremendous interest to better understand disease progression, and therefore thousands of scans need to be segmented by radiologists to study infected areas. Over the last year, many deep learning models for lung segmentation in CT were developed. Unfortunately, the lack of large, shared, annotated multicentric datasets led to models that were either under-tested (small datasets) or not properly compared (own metrics, no shared dataset), often resulting in poor generalization performance. To address these issues, we developed a model that uses a multiscale and multilevel feature extraction strategy for COVID-19 segmentation and extensively validated it on several datasets to assess its generalization capability for other segmentation tasks on similar organs. The proposed model uses a novel encoder and decoder with a proposed kernel-based atrous spatial pyramid pooling module at the bottom of the model to extract small features, together with a multistage skip-connection concatenation approach. The results showed that the proposed model can be applied to a small-scale dataset and still produce generalizable performance on other segmentation tasks. The proposed model achieved a Dice score of 90% on a 100-case dataset, 95% on the NSCLC dataset, 88.49% on the COVID19 dataset, and 97.33% on the StructSeg 2019 dataset, compared with existing state-of-the-art models. The proposed solution could be used for COVID-19 segmentation in clinical applications. The source code is publicly available at https://github.com/RespectKnowledge/Mutiscale-based-Covid-_segmentation-usingDeep-Learning-models.
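The atrous spatial pyramid pooling idea mentioned above can be sketched as parallel dilated convolutions whose outputs are concatenated and fused. The PyTorch module below is a generic ASPP sketch, not the paper's kernel-based variant; channel sizes and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions whose
    outputs are concatenated and fused, capturing multiscale context."""
    def __init__(self, in_ch, out_ch, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

# toy usage on a feature map at the bottom of an encoder-decoder network
x = torch.randn(1, 64, 32, 32)
y = ASPP(64, 32)(x)
print(y.shape)   # torch.Size([1, 32, 32, 32])
```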
Affiliation(s)
- Abdul Qayyum
- ImViA Laboratory, University of Bourgogne Franche-Comté, Dijon, France
- Alain Lalande
- ImViA Laboratory, University of Bourgogne Franche-Comté, Dijon, France
- Medical Imaging Department, University Hospital of Dijon, Dijon, France
7. Sexauer R, Yang S, Weikert T, Poletti J, Bremerich J, Roth JA, Sauter AW, Anastasopoulos C. Automated Detection, Segmentation, and Classification of Pleural Effusion From Computed Tomography Scans Using Machine Learning. Invest Radiol 2022; 57:552-559. PMID: 35797580; PMCID: PMC9390225; DOI: 10.1097/rli.0000000000000869.
Abstract
OBJECTIVE This study trained and evaluated algorithms to detect, segment, and classify simple and complex pleural effusions on computed tomography (CT) scans. MATERIALS AND METHODS For detection and segmentation, we randomly selected 160 chest CT scans out of all consecutive patients (January 2016-January 2021, n = 2659) with reported pleural effusion. Effusions were manually segmented and a negative cohort of chest CTs from 160 patients without effusions was added. A deep convolutional neural network (nnU-Net) was trained and cross-validated (n = 224; 70%) for segmentation and tested on a separate subset (n = 96; 30%) with the same distribution of reported pleural complexity features as in the training cohort (eg, hyperdense fluid, gas, pleural thickening and loculation). On a separate consecutive cohort with a high prevalence of pleural complexity features (n = 335), a random forest model was implemented for classification of segmented effusions with Hounsfield unit thresholds, density distribution, and radiomics-based features as input. As performance measures, sensitivity, specificity, and area under the curves (AUCs) for detection/classifier evaluation (per-case level) and Dice coefficient and volume analysis for the segmentation task were used. RESULTS Sensitivity and specificity for detection of effusion were excellent at 0.99 and 0.98, respectively (n = 96; AUC, 0.996, test data). Segmentation was robust (median Dice, 0.89; median absolute volume difference, 13 mL), irrespective of size, complexity, or contrast phase. The sensitivity, specificity, and AUC for classification in simple versus complex effusions were 0.67, 0.75, and 0.77, respectively. CONCLUSION Using a dataset with different degrees of complexity, a robust model was developed for the detection, segmentation, and classification of effusion subtypes. The algorithms are openly available at https://github.com/usb-radiology/pleuraleffusion.git.
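A hedged sketch of the classification step: density-distribution features extracted from the Hounsfield units inside a segmented effusion feed a random forest, as outlined below. The feature set, the hyperdensity threshold, and the synthetic data are illustrative placeholders, not the study's actual inputs (which also included radiomics features).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def effusion_features(hu_values):
    """Simple density-distribution features from HU values inside an effusion mask."""
    return [
        hu_values.mean(),
        hu_values.std(),
        np.percentile(hu_values, 90),
        (hu_values > 30).mean(),   # fraction of hyperdense voxels (assumed threshold)
    ]

# synthetic example: 20 "simple" (watery) and 20 "complex" (denser) effusions
rng = np.random.default_rng(0)
simple = [effusion_features(rng.normal(8, 6, 5000)) for _ in range(20)]
complex_ = [effusion_features(rng.normal(25, 15, 5000)) for _ in range(20)]
X = np.array(simple + complex_)
y = np.array([0] * 20 + [1] * 20)          # 0 = simple, 1 = complex

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc"))
```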
Affiliation(s)
- Raphael Sexauer
- From the Divisions of Research and Analytical Services
- Cardiothoracic Imaging, Department of Radiology
- Shan Yang
- From the Divisions of Research and Analytical Services
- Thomas Weikert
- From the Divisions of Research and Analytical Services
- Cardiothoracic Imaging, Department of Radiology
- Jan Adam Roth
- From the Divisions of Research and Analytical Services
- Basel Institute for Clinical Epidemiology and Biostatistics, University Hospital Basel, Basel, Switzerland
- Alexander Walter Sauter
- From the Divisions of Research and Analytical Services
- Cardiothoracic Imaging, Department of Radiology
8. Kataoka Y, Baba T, Ikenoue T, Matsuoka Y, Matsumoto J, Kumasawa J, Tochitani K, Funakoshi H, Hosoda T, Kugimiya A, Shirano M, Hamabe F, Iwata S, Kitamura Y, Goto T, Hamaguchi S, Haraguchi T, Yamamoto S, Sumikawa H, Nishida K, Nishida H, Ariyoshi K, Sugiura H, Nakagawa H, Asaoka T, Yoshida N, Oda R, Koyama T, Iwai Y, Miyashita Y, Okazaki K, Tanizawa K, Handa T, Kido S, Fukuma S, Tomiyama N, Hirai T, Ogura T. Development and external validation of a deep learning-based computed tomography classification system for COVID-19. Annals of Clinical Epidemiology 2022; 4:110-119. PMID: 38505255; PMCID: PMC10760489; DOI: 10.37737/ace.22014.
Abstract
BACKGROUND We aimed to develop and externally validate a novel machine learning model that can classify CT image findings as positive or negative for SARS-CoV-2 reverse transcription polymerase chain reaction (RT-PCR). METHODS We used 2,928 images from a wide variety of case-control type data sources for the development and internal validation of the machine learning model. A total of 633 COVID-19 cases and 2,295 non-COVID-19 cases were included in the study. We randomly divided cases into training and tuning sets at a ratio of 8:2. For external validation, we used 893 images from 740 consecutive patients at 11 acute care hospitals suspected of having COVID-19 at the time of diagnosis. The dataset included 343 COVID-19 patients. The reference standard was RT-PCR. RESULTS In external validation, the sensitivity and specificity of the model were 0.869 and 0.432 at the low-level cutoff, and 0.724 and 0.721 at the high-level cutoff. The area under the receiver operating characteristic curve was 0.76. CONCLUSIONS Our machine learning model exhibited high sensitivity on the external validation dataset and may assist physicians in ruling out COVID-19 in a timely manner at emergency departments. Further studies are warranted to improve model specificity.
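Operating the model at two cutoffs, as reported above, amounts to thresholding a continuous score and reading off sensitivity and specificity. A minimal sketch, assuming score and label arrays as inputs:

```python
import numpy as np

def sensitivity_specificity(scores, labels, cutoff):
    """Sensitivity and specificity of a continuous classifier score at a cutoff.

    scores : predicted probabilities or scores (numpy array)
    labels : 1 = RT-PCR positive, 0 = negative (numpy array)
    """
    pred = scores >= cutoff
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

# a lower cutoff trades specificity for sensitivity (rule-out use), a higher
# cutoff does the opposite, as in the two operating points reported above
```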
Affiliation(s)
- Yuki Kataoka
- Department of Internal Medicine, Kyoto Min-Iren Asukai Hospital
- Section of Clinical Epidemiology, Department of Community Medicine, Kyoto University Graduate School of Medicine
- Department of Healthcare Epidemiology, Kyoto University Graduate School of Medicine/School of Public Health
- Scientific Research Works Peer Support Group (SRWS-PSG)
- Tomohisa Baba
- Department of Respiratory Medicine, Kanagawa Cardiovascular and Respiratory Center
- Tatsuyoshi Ikenoue
- Human Health Sciences, Kyoto University Graduate School of Medicine
- Graduate School of Data Science, Shiga University
- Yoshinori Matsuoka
- Department of Healthcare Epidemiology, Kyoto University Graduate School of Medicine/School of Public Health
- Department of Emergency Medicine, Kobe City Medical Center General Hospital
- Junichi Matsumoto
- Department of Emergency and Critical Care Medicine, St. Marianna University School of Medicine
- Junji Kumasawa
- Human Health Sciences, Kyoto University Graduate School of Medicine
- Department of Critical Care Medicine, Sakai City Medical Center
- Hiraku Funakoshi
- Department of Emergency and Critical Care Medicine, Department of Interventional Radiology, Tokyo Bay Urayasu Ichikawa Medical Center
- Tomohiro Hosoda
- Department of Infectious Disease, Kawasaki Municipal Kawasaki Hospital
- Aiko Kugimiya
- Department of Respiratory Medicine, Yamanashi Prefectural Central Hospital
- Fumiko Hamabe
- Department of Radiology, National Defense Medical College
- Sachiyo Iwata
- Division of Cardiovascular Medicine, Hyogo Prefectural Kakogawa Medical Center
- Shingo Hamaguchi
- Department of Emergency and Critical Care Medicine, Department of Interventional Radiology, Tokyo Bay Urayasu Ichikawa Medical Center
- Koji Nishida
- Department of Respiratory Medicine, Sakai City Medical Center
- Haruka Nishida
- Department of Emergency Medicine, Kobe City Medical Center General Hospital
- Koichi Ariyoshi
- Department of Emergency Medicine, Kobe City Medical Center General Hospital
- Tomohiro Asaoka
- Department of Infectious Diseases, Osaka City General Hospital
- Naofumi Yoshida
- Division of Cardiovascular Medicine, Department of Internal Medicine, Kobe University Graduate School of Medicine
- Rentaro Oda
- Department of Infectious Diseases, Tokyo Bay Urayasu Ichikawa Medical Center
- Takashi Koyama
- Department of Infectious Diseases, Hyogo Prefectural Amagasaki General Medical Center
- Yui Iwai
- Department of Infectious Diseases, Hyogo Prefectural Amagasaki General Medical Center
- Koya Okazaki
- Department of Respiratory Medicine, Hyogo Prefectural Amagasaki General Medical Center
- Kiminobu Tanizawa
- Department of Respiratory Medicine, Graduate School of Medicine, Kyoto University
- Tomohiro Handa
- Department of Respiratory Medicine, Graduate School of Medicine, Kyoto University
- Department of Advanced Medicine for Respiratory Failure, Graduate School of Medicine, Kyoto University
- Shoji Kido
- Department of Artificial Intelligence Diagnostic Radiology, Graduate School of Medicine, Osaka University
- Shingo Fukuma
- Human Health Sciences, Kyoto University Graduate School of Medicine
- Noriyuki Tomiyama
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine
- Toyohiro Hirai
- Department of Respiratory Medicine, Graduate School of Medicine, Kyoto University
- Takashi Ogura
- Department of Respiratory Medicine, Kanagawa Cardiovascular and Respiratory Center
9. Shiri I, Arabi H, Salimi Y, Sanaat A, Akhavanallaf A, Hajianfar G, Askari D, Moradi S, Mansouri Z, Pakbin M, Sandoughdaran S, Abdollahi H, Radmard AR, Rezaei-Kalantari K, Ghelich Oghli M, Zaidi H. COLI-Net: Deep learning-assisted fully automated COVID-19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images. International Journal of Imaging Systems and Technology 2022; 32:12-25. PMID: 34898850; PMCID: PMC8652855; DOI: 10.1002/ima.22672.
Abstract
We present a deep learning (DL)-based automated whole-lung and COVID-19 pneumonia infectious lesion (COLI-Net) detection and segmentation framework for chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentations of lungs and lesions, respectively. All images were cropped and resized, and the intensity values were clipped and normalized. A residual network with a non-square Dice loss function, built upon TensorFlow, was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for the lungs and lesions, respectively. The relative volume differences for lungs and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, -0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the highest mean relative errors observed for the first-order range feature of the lung (-6.95%) and the least-axis-length shape feature of the lesions (8.68%). We developed an automated DL-guided three-dimensional whole-lung and infected-region segmentation method for COVID-19 patients to provide a fast, consistent, robust framework, immune to human error, for lung and pneumonia lesion detection and quantification.
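Two of the agreement metrics quoted above, the relative volume difference and the relative mean Hounsfield-unit difference, can be written as in the sketch below. These are plausible definitions for illustration and may differ in detail from the paper's exact formulas.

```python
import numpy as np

def relative_volume_difference(pred_mask, ref_mask):
    """Relative volume difference (%) between predicted and reference masks;
    positive values indicate over-segmentation relative to the reference."""
    v_pred, v_ref = pred_mask.sum(), ref_mask.sum()
    return 100.0 * (v_pred - v_ref) / v_ref

def relative_mean_hu_difference(ct_hu, pred_mask, ref_mask):
    """Relative difference (%) of the mean Hounsfield units sampled with the
    predicted versus the reference mask (lung HU are negative, hence abs)."""
    mu_pred = ct_hu[pred_mask.astype(bool)].mean()
    mu_ref = ct_hu[ref_mask.astype(bool)].mean()
    return 100.0 * (mu_pred - mu_ref) / abs(mu_ref)
```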
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Dariush Askari
- Department of Radiology Technology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Shakiba Moradi
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Masoumeh Pakbin
- Clinical Research Development Center, Qom University of Medical Sciences, Qom, Iran
- Saleh Sandoughdaran
- Men's Health and Reproductive Health Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hamid Abdollahi
- Department of Radiologic Technology, Faculty of Allied Medicine, Kerman University of Medical Sciences, Kerman, Iran
- Amir Reza Radmard
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Kiara Rezaei-Kalantari
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Mostafa Ghelich Oghli
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
- Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
10. Aiello M, Esposito G, Pagliari G, Borrelli P, Brancato V, Salvatore M. How does DICOM support big data management? Investigating its use in medical imaging community. Insights Imaging 2021; 12:164. PMID: 34748101; PMCID: PMC8574146; DOI: 10.1186/s13244-021-01081-8.
Abstract
The diagnostic imaging field is experiencing considerable growth, accompanied by increasing production of massive amounts of data. The lack of standardization and privacy concerns are considered the main barriers to big data capitalization. This work aims to verify whether the advanced features of the DICOM standard, beyond imaging data storage, are effectively used in research practice. This issue is analyzed by investigating publicly shared medical imaging databases and assessing how fully the most common medical imaging software tools support DICOM. To this end, 100 public databases and ten medical imaging software tools were selected and examined using a systematic approach. In particular, the DICOM fields related to privacy, segmentation and reporting were assessed in the selected databases; the software tools were evaluated for reading and writing the same DICOM fields. From our analysis, less than a third of the examined databases use the DICOM format to record meaningful information for managing the images. Regarding software, the vast majority does not allow the management, reading and writing of some or all of these DICOM fields. Surprisingly, among the chest computed tomography datasets shared to address the COVID-19 emergency, only two out of 12 were released in DICOM format. Our work shows that DICOM can potentially fully support big data management; however, further efforts are still needed from the scientific and technological community to promote the use of the existing standard, encouraging data sharing and interoperability for a concrete development of big data analytics.
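Checking whether a shared dataset populates DICOM fields beyond pixel data can be done with pydicom. The tags below are illustrative examples of privacy- and content-related attributes, not the exact field list assessed in the study.

```python
import pydicom

def inspect_dicom_metadata(path):
    """Report a few DICOM fields related to privacy and content beyond pixel data."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    return {
        "Modality": ds.get("Modality", None),                              # e.g. CT, SEG, SR
        "PatientIdentityRemoved": ds.get("PatientIdentityRemoved", None),  # (0012,0062)
        "DeidentificationMethod": ds.get("DeidentificationMethod", None),  # (0012,0063)
        "BodyPartExamined": ds.get("BodyPartExamined", None),
        "SeriesDescription": ds.get("SeriesDescription", None),
    }

# usage (file name is a placeholder): print(inspect_dicom_metadata("example.dcm"))
```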
Affiliation(s)
- Marco Aiello
- IRCCS SDN, Via Emanuele Gianturco 113, 80143, Naples, Italy
11. Cendejas-Zaragoza L, Rodriguez-Obregon DE, Mejia-Rodriguez AR, Arce-Santana ER, Santos-Diaz A. COVID-19 Volumetric Pulmonary Lesion Estimation on CT Images using a U-NET and Probabilistic Active Contour Segmentation. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2021; 2021:3850-3853. PMID: 34892074; DOI: 10.1109/embc46164.2021.9629532.
Abstract
A two-step method for obtaining a volumetric estimation of COVID-19-related lesions from CT images is proposed. The first step consists in applying a U-NET convolutional neural network to provide a segmentation of the lung parenchyma. This architecture is trained and validated using the publicly available Thoracic Volume and Pleural Effusion Segmentations in Diseased Lungs for Benchmarking Chest CT Processing Pipelines (PleThora) dataset. The second step consists in obtaining the volumetric lesion estimation using an automatic algorithm based on a probabilistic active contour (PACO) region delimitation approach. Our pipeline successfully segmented COVID-19-related lesions in CT images, with the exception of some mislabeled regions including lung airways and vasculature. The workflow was applied to images from a cohort of 50 patients.
12. Lizzi F, Agosti A, Brero F, Cabini RF, Fantacci ME, Figini S, Lascialfari A, Laruina F, Oliva P, Piffer S, Postuma I, Rinaldi L, Talamonti C, Retico A. Quantification of pulmonary involvement in COVID-19 pneumonia by means of a cascade of two U-nets: training and assessment on multiple datasets using different annotation criteria. Int J Comput Assist Radiol Surg 2021; 17:229-237. PMID: 34698988; PMCID: PMC8547130; DOI: 10.1007/s11548-021-02501-2.
Abstract
Purpose This study aims at exploiting artificial intelligence (AI) for the identification, segmentation and quantification of COVID-19 pulmonary lesions. The limited data availability and the annotation quality are relevant factors in training AI methods. We investigated the effects of using multiple datasets, heterogeneously populated and annotated according to different criteria. Methods We developed an automated analysis pipeline, the LungQuant system, based on a cascade of two U-nets. The first one (U-net 1) is devoted to the identification of the lung parenchyma; the second one (U-net 2) acts on a bounding box enclosing the segmented lungs to identify the areas affected by COVID-19 lesions. Different public datasets were used to train the U-nets and to evaluate their segmentation performances, which have been quantified in terms of the Dice Similarity Coefficients. The accuracy of the LungQuant system in predicting the CT Severity Score (CT-SS) has also been evaluated. Results Both the volumetric DSC (vDSC) and the accuracy showed a dependency on the annotation quality of the released data samples. On an independent dataset (COVID-19-CT-Seg), both the vDSC and the surface DSC (sDSC) were measured between the masks predicted by the LungQuant system and the reference ones. The vDSC (sDSC) values of 0.95 ± 0.01 and 0.66 ± 0.13 (0.95 ± 0.02 and 0.76 ± 0.18, with 5 mm tolerance) were obtained for the segmentation of lungs and COVID-19 lesions, respectively. The system achieved an accuracy of 90% in CT-SS identification on this benchmark dataset. Conclusion We analysed the impact of using data samples with different annotation criteria in training an AI-based quantification system for pulmonary involvement in COVID-19 pneumonia. In terms of vDSC measures, the U-net segmentation strongly depends on the quality of the lesion annotations. Nevertheless, the CT-SS can be accurately predicted on independent test sets, demonstrating the satisfactory generalization ability of LungQuant. Supplementary Information The online version contains supplementary material available at 10.1007/s11548-021-02501-2.
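The CT Severity Score mentioned above is typically obtained by binning the affected fraction of each lobe into a sub-score and summing. The thresholds below follow one commonly used scheme and are assumptions; the LungQuant paper defines its own CT-SS.

```python
import numpy as np

def lobe_score(affected_fraction):
    """Map the affected fraction of a lobe (0-1) to a 0-5 sub-score using one
    commonly adopted CT severity scheme (thresholds are assumptions)."""
    bins = [0.0, 0.05, 0.25, 0.50, 0.75, 1.0]
    if affected_fraction <= 0:
        return 0
    return int(np.searchsorted(bins, affected_fraction, side="left"))

def ct_severity_score(lobe_fractions):
    """Total CT-SS as the sum of the five lobar sub-scores (range 0-25)."""
    return sum(lobe_score(f) for f in lobe_fractions)

print(ct_severity_score([0.0, 0.03, 0.10, 0.30, 0.80]))  # -> 0 + 1 + 2 + 3 + 5 = 11
```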
Affiliation(s)
- Francesca Lizzi
- Scuola Normale Superiore, Pisa, Italy
- National Institute of Nuclear Physics (INFN), Pisa division, Pisa, Italy
- Abramo Agosti
- Department of Mathematics, University of Pavia, Pavia, Italy
- Francesca Brero
- INFN, Pavia division, Pavia, Italy
- Department of Physics, University of Pavia, Pavia, Italy
- Raffaella Fiamma Cabini
- INFN, Pavia division, Pavia, Italy
- Department of Mathematics, University of Pavia, Pavia, Italy
- Maria Evelina Fantacci
- National Institute of Nuclear Physics (INFN), Pisa division, Pisa, Italy
- Department of Physics, University of Pisa, Pisa, Italy
- Silvia Figini
- INFN, Pavia division, Pavia, Italy
- Department of Social and Political Science, University of Pavia, Pavia, Italy
- Alessandro Lascialfari
- INFN, Pavia division, Pavia, Italy
- Department of Physics, University of Pavia, Pavia, Italy
- Francesco Laruina
- Scuola Normale Superiore, Pisa, Italy
- National Institute of Nuclear Physics (INFN), Pisa division, Pisa, Italy
- Piernicola Oliva
- Department of Chemistry and Pharmacy, University of Sassari, Sassari, Italy
- INFN, Cagliari division, Cagliari, Italy
- Stefano Piffer
- Department of Biomedical Experimental Clinical Science "M. Serio", University of Florence, Florence, Italy
- INFN, Florence division, Florence, Italy
- Lisa Rinaldi
- INFN, Pavia division, Pavia, Italy
- Department of Physics, University of Pavia, Pavia, Italy
- Cinzia Talamonti
- Department of Biomedical Experimental Clinical Science "M. Serio", University of Florence, Florence, Italy
- INFN, Florence division, Florence, Italy
- Alessandra Retico
- National Institute of Nuclear Physics (INFN), Pisa division, Pisa, Italy
13. Li Z, Li R, Kiser KJ, Giancardo L, Zheng WJ. Segmenting Thoracic Cavities with Neoplastic Lesions: A Head-to-head Benchmark with Fully Convolutional Neural Networks. ACM-BCB: ACM Conference on Bioinformatics, Computational Biology and Biomedicine 2021; 2021:33. PMID: 35330920; PMCID: PMC8941645; DOI: 10.1145/3459930.3469564.
Abstract
Automatic segmentation of thoracic cavity structures in computed tomography (CT) is a key step for applications ranging from radiotherapy planning to imaging biomarker discovery with radiomics approaches. State-of-the-art segmentation can be provided by fully convolutional neural networks such as the U-Net or V-Net. However, there is a very limited body of work comparing the performance of these architectures on chest CTs with significant neoplastic disease. In this work, we compared four different types of fully convolutional architectures using the same pre-processing and post-processing pipelines. These methods were evaluated using a dataset of CT images and thoracic cavity segmentations from 402 cancer patients. We found that these methods achieved very high segmentation performance under three evaluation criteria: the Dice coefficient, the average symmetric surface distance (ASSD), and the 95% Hausdorff distance (HD95). Overall, the two-stage 3D U-Net model performed slightly better than the other models, with Dice coefficients for the left and right lung reaching 0.947 and 0.952, respectively, while the 3D U-Net model achieved the best performance under HD95 for the right lung and under ASSD for both lungs. These results demonstrate that current state-of-the-art deep learning models can work very well for segmenting not only healthy lungs but also lungs containing different stages of cancerous lesions. The comprehensive lung masks produced by the evaluated methods enabled the creation of imaging-based biomarkers representing both healthy lung parenchyma and neoplastic lesions, allowing these segmented areas to be used for downstream analyses such as treatment planning, prognosis and survival prediction.
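The surface-distance criteria used in this benchmark (ASSD and HD95) can be computed from binary masks as sketched below, assuming the voxel spacing is known. This is a straightforward reference implementation for illustration, not the benchmark's exact code.

```python
import numpy as np
from scipy import ndimage

def surface_distances(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Distances from the surface voxels of mask_a to the surface of mask_b."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)            # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)
    # Euclidean distance map to the surface of b, sampled at the surface of a
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def assd(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance."""
    d_ab = surface_distances(mask_a, mask_b, spacing)
    d_ba = surface_distances(mask_b, mask_a, spacing)
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))

def hd95(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance."""
    d_ab = surface_distances(mask_a, mask_b, spacing)
    d_ba = surface_distances(mask_b, mask_a, spacing)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```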
Affiliation(s)
- Zhao Li
- School of Biomedical Informatics, UTHealth, Houston, Texas
- Rongbin Li
- School of Biomedical Informatics, UTHealth, Houston, Texas
- Kendall J. Kiser
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, Missouri
14. Berta L, Rizzetto F, De Mattia C, Lizio D, Felisi M, Colombo PE, Carrazza S, Gelmini S, Bianchi L, Artioli D, Travaglini F, Vanzulli A, Torresin A. Automatic lung segmentation in COVID-19 patients: Impact on quantitative computed tomography analysis. Phys Med 2021; 87:115-122. PMID: 34139383; PMCID: PMC9188767; DOI: 10.1016/j.ejmp.2021.06.001.
Abstract
Purpose To assess the impact of lung segmentation accuracy in an automatic pipeline for quantitative analysis of CT images. Methods Four different platforms for automatic lung segmentation, based on a convolutional neural network (CNN), region-growing techniques and an atlas-based algorithm, were considered. The platforms were tested using CT images of 55 COVID-19 patients with severe lung impairment. Four radiologists assessed the segmentations using a 5-point qualitative score (QS). For each CT series, a manually revised reference segmentation (RS) was obtained. Histogram-based quantitative metrics (QM) were calculated from the CT histogram using the lung segmentations from all platforms and the RS. The Dice index (DI) and the differences of the QMs (ΔQMs) were calculated between the RS and the other segmentations. Results The highest QS and lower ΔQMs values were associated with the CNN algorithm. However, only 45% of CNN segmentations were judged to need no or only minimal corrections, and in only 17 cases (31%) did automatic segmentations provide the RS without manual corrections. Median DI values for the four algorithms ranged from 0.993 to 0.904. Significant differences for all QMs calculated between automatic segmentations and the RS were found both when data were pooled together and when stratified according to QS, indicating a relationship between qualitative and quantitative measurements. The most unstable QM was the histogram 90th percentile, with median ΔQMs values ranging from 10 HU to 158 HU between different algorithms. Conclusions None of the tested algorithms provided fully reliable segmentation. Segmentation accuracy impacts differently on different quantitative metrics, and each of them should be individually evaluated according to the purpose of subsequent analyses.
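A sketch of the histogram-based quantitative metrics idea: sample the HU values inside a lung mask and summarize them, as below. The aeration thresholds are assumptions and the metric set is only an illustrative subset of what the study evaluated.

```python
import numpy as np

def histogram_metrics(ct_hu, lung_mask):
    """Histogram-based quantitative metrics from the HU values inside a lung mask."""
    hu = ct_hu[lung_mask.astype(bool)]
    return {
        "mean_hu": float(hu.mean()),
        "p90_hu": float(np.percentile(hu, 90)),   # the metric found most segmentation-sensitive
        "well_aerated_%": 100.0 * np.mean((hu >= -950) & (hu < -700)),   # assumed HU range
        "poorly_aerated_%": 100.0 * np.mean((hu >= -700) & (hu < -250)), # assumed HU range
    }
```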
Affiliation(s)
- L Berta
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy
- F Rizzetto
- Department of Radiology, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy; Postgraduate School of Diagnostic and Interventional Radiology, Università degli Studi di Milano, via Festa del Perdono 7, 20122 Milan, Italy
- C De Mattia
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy
- D Lizio
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy
- M Felisi
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy
- P E Colombo
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy
- S Carrazza
- Department of Physics, Università degli Studi di Milano, via Giovanni Celoria 16, 20133 Milan, Italy; Department of Physics, INFN Sezione di Milano, via Giovanni Celoria 16, 20133 Milan, Italy
- S Gelmini
- Department of Physics, Università degli Studi di Milano, via Giovanni Celoria 16, 20133 Milan, Italy
- L Bianchi
- Department of Radiology, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy; Postgraduate School of Diagnostic and Interventional Radiology, Università degli Studi di Milano, via Festa del Perdono 7, 20122 Milan, Italy
- D Artioli
- Department of Radiology, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy
- F Travaglini
- Department of Radiology, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy
- A Vanzulli
- Department of Radiology, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy; Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, via Festa del Perdono 7, 20122 Milan, Italy
- A Torresin
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162 Milan, Italy; Department of Physics, Università degli Studi di Milano, via Giovanni Celoria 16, 20133 Milan, Italy
15. Kiser KJ, Barman A, Stieb S, Fuller CD, Giancardo L. Novel Autosegmentation Spatial Similarity Metrics Capture the Time Required to Correct Segmentations Better Than Traditional Metrics in a Thoracic Cavity Segmentation Workflow. J Digit Imaging 2021; 34:541-553. PMID: 34027588; PMCID: PMC8329111; DOI: 10.1007/s10278-021-00460-3.
Abstract
Automated segmentation templates can save clinicians time compared to de novo segmentation but may still take substantial time to review and correct. It has not been thoroughly investigated which similarity metrics between automated and corrected segmentations best predict clinician correction time. Bilateral thoracic cavity volumes in 329 CT scans were segmented by a U-Net-inspired deep learning segmentation tool and subsequently corrected by a fourth-year medical student. Eight spatial similarity metrics were calculated between the automated and corrected segmentations and associated with correction times using Spearman's rank correlation coefficients. Nine clinical variables were also associated with the metrics and correction times using Spearman's rank correlation coefficients or Mann-Whitney U tests. The added path length, false negative path length, and surface Dice similarity coefficient correlated better with correction time than traditional metrics, including the popular volumetric Dice similarity coefficient (respectively ρ = 0.69, ρ = 0.65, ρ = −0.48 versus ρ = −0.25; correlation p values < 0.001). Clinical variables poorly represented in the autosegmentation tool's training data were often associated with decreased accuracy but not necessarily with prolonged correction time. Metrics used to develop and evaluate autosegmentation tools should correlate with clinical time saved. To our knowledge, this is only the second investigation of which metrics correlate with time saved. Validation of our findings is indicated in other anatomic sites and clinical workflows. Novel spatial similarity metrics may be preferable to traditional metrics for developing and evaluating autosegmentation tools that are intended to save clinicians time.
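The added path length metric highlighted above counts the corrected boundary that was not already present in the automated result. A minimal voxel-based sketch follows; the paper's implementation details (e.g. slice-wise computation and tolerances) may differ.

```python
import numpy as np
from scipy import ndimage

def added_path_length(auto_mask, corrected_mask, tolerance_voxels=0):
    """Added path length (APL): amount of corrected-segmentation boundary that is
    not already present in the automated segmentation's boundary, i.e. boundary
    the reviewer had to redraw. Returned in voxels; a tolerance can forgive
    near-coincident boundaries."""
    a = auto_mask.astype(bool)
    c = corrected_mask.astype(bool)
    auto_surf = a ^ ndimage.binary_erosion(a)
    corr_surf = c ^ ndimage.binary_erosion(c)
    if tolerance_voxels > 0:
        auto_surf = ndimage.binary_dilation(auto_surf, iterations=tolerance_voxels)
    return int(np.sum(corr_surf & ~auto_surf))
```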
Affiliation(s)
- Kendall J. Kiser
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- Arko Barman
- Center for Precision Health, UT Health School of Biomedical Informatics, Houston, TX, USA
- Sonja Stieb
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Luca Giancardo
- Center for Precision Health, UT Health School of Biomedical Informatics, Houston, TX, USA
16. Kirby J, Prior F, Petrick N, Hadjiski L, Farahani K, Drukker K, Kalpathy-Cramer J, Glide-Hurst C, El Naqa I. Introduction to special issue on datasets hosted in The Cancer Imaging Archive (TCIA). Med Phys 2021; 47:6026-6028. PMID: 33202038; DOI: 10.1002/mp.14595.
Affiliation(s)
- Justin Kirby
- Frederick National Laboratory for Cancer Research, Cancer Imaging Informatics Lab, National Institutes of Health, Frederick, MD, USA
- Fred Prior
- Department of Biomedical Informatics, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Nicholas Petrick
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, USA
- Lubomir Hadjiski
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Carri Glide-Hurst
- Department of Radiation Oncology, University of Wisconsin, Madison, WI, USA
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, USA
17. Goncharov M, Pisov M, Shevtsov A, Shirokikh B, Kurmukov A, Blokhin I, Chernina V, Solovev A, Gombolevskiy V, Morozov S, Belyaev M. CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification. Med Image Anal 2021; 71:102054. PMID: 33932751; PMCID: PMC8015379; DOI: 10.1016/j.media.2021.102054.
Abstract
The current COVID-19 pandemic overloads healthcare systems, including radiology departments. Though several deep learning approaches were developed to assist in CT analysis, none considered study triage directly as a computer science problem. We describe two basic setups: identification of COVID-19, to prioritize studies of potentially infected patients so they can be isolated as early as possible; and severity quantification, to highlight patients with severe COVID-19 and thus direct them to a hospital or provide emergency medical care. We formalize these tasks as binary classification and estimation of the affected lung percentage. Though similar problems were well studied separately, we show that existing methods provide reasonable quality only for one of these setups. We employ a multitask approach to consolidate both triage approaches and propose a convolutional neural network to leverage all available labels within a single model. In contrast with related multitask approaches, we show the benefit of applying the classification layers to the most spatially detailed feature map at the upper part of the U-Net instead of the less detailed latent representation at the bottom. We train our model on approximately 1500 publicly available CT studies and test it on a holdout dataset of 123 chest CT studies of patients drawn from the same healthcare system, specifically 32 COVID-19 and 30 bacterial pneumonia cases, 30 cases with cancerous nodules, and 31 healthy controls. The proposed multitask model outperforms the other approaches and achieves ROC AUC scores of 0.87±0.01 vs. bacterial pneumonia, 0.93±0.01 vs. cancerous nodules, and 0.97±0.01 vs. healthy controls in identification of COVID-19, and achieves a Spearman correlation of 0.97±0.01 in severity quantification. We have released our code and shared the annotated lesion masks for 32 CT images of patients with COVID-19 from the test dataset.
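The key architectural point, attaching the classification head to a spatially detailed upper feature map while keeping a per-voxel segmentation head, can be illustrated with the toy PyTorch module below. Shapes and channel counts are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultitaskHead(nn.Module):
    """Toy multitask head: a shared feature map feeds both a per-voxel
    segmentation head and an image-level classification head attached to a
    spatially detailed (upper) feature map rather than the low-resolution
    bottleneck."""
    def __init__(self, in_ch):
        super().__init__()
        self.seg_head = nn.Conv2d(in_ch, 1, kernel_size=1)       # lesion probability map
        self.cls_head = nn.Sequential(                            # COVID-19 vs. other
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, 1)
        )

    def forward(self, upper_features):
        seg_logits = self.seg_head(upper_features)
        cls_logit = self.cls_head(upper_features)
        return seg_logits, cls_logit

feat = torch.randn(2, 32, 128, 128)           # upper-level U-Net features (assumed shape)
seg, cls = MultitaskHead(32)(feat)
print(seg.shape, cls.shape)                    # (2, 1, 128, 128) and (2, 1)
```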
Affiliation(s)
- Mikhail Goncharov
- Skolkovo Institute of Science and Technology, Moscow, Russia; Kharkevich Institute for Information Transmission Problems, Moscow, Russia
- Maxim Pisov
- Skolkovo Institute of Science and Technology, Moscow, Russia
- Alexey Shevtsov
- Kharkevich Institute for Information Transmission Problems, Moscow, Russia
- Boris Shirokikh
- Skolkovo Institute of Science and Technology, Moscow, Russia
- Anvar Kurmukov
- Kharkevich Institute for Information Transmission Problems, Moscow, Russia
- Ivan Blokhin
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Russia
- Valeria Chernina
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Russia
- Alexander Solovev
- Sklifosovsky Clinical and Research Institute for Emergency Medicine, Moscow, Russia
- Victor Gombolevskiy
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Russia
- Sergey Morozov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Russia
- Mikhail Belyaev
- Skolkovo Institute of Science and Technology, Moscow, Russia
18. Kiser KJ, Ahmed S, Stieb S, Mohamed ASR, Elhalawani H, Park PYS, Doyle NS, Wang BJ, Barman A, Li Z, Zheng WJ, Fuller CD, Giancardo L. PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines. Med Phys 2020; 47:5941-5952. PMID: 32749075; PMCID: PMC7722027; DOI: 10.1002/mp.14424.
Abstract
This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations annotated on 402 computed tomography (CT) scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four hundred and two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist's and radiologist's corrections was acceptable. All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at https://doi.org/10.7937/tcia.2020.6c7y-gq39. Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs, where current automated algorithms struggle most. In conjunction with the gross tumor volume segmentations already available from "NSCLC Radiomics", pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor, or for training algorithms to discriminate between them.
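The released NIfTI segmentations can be loaded and turned into volumes with nibabel, as in the sketch below; the file name is a placeholder and the data must be obtained through the TCIA collection cited above.

```python
import nibabel as nib
import numpy as np

def load_mask(path):
    """Load a NIfTI segmentation (e.g. a PleThora thoracic-cavity mask) as a
    boolean array together with its voxel spacing in millimetres."""
    img = nib.load(path)
    mask = np.asarray(img.dataobj) > 0
    spacing = img.header.get_zooms()[:3]
    return mask, spacing

# usage (placeholder file name):
# mask, spacing = load_mask("thoracic_cavity.nii.gz")
# volume_ml = mask.sum() * np.prod(spacing) / 1000.0
```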
Affiliation(s)
- Kendall J. Kiser
- John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sara Ahmed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sonja Stieb
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Abdallah S. R. Mohamed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- MD Anderson Cancer Center-UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Hesham Elhalawani
- Department of Radiation Oncology, Cleveland Clinic Taussig Cancer Center, Cleveland, OH, USA
- Peter Y. S. Park
- Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Nathan S. Doyle
- Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Brandon J. Wang
- Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Arko Barman
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Zhao Li
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- W. Jim Zheng
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- MD Anderson Cancer Center-UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Luca Giancardo
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Department of Radiation Oncology, Cleveland Clinic Taussig Cancer Center, Cleveland, OH, USA