1
Balagurunathan Y, Beers A, McNitt-Gray M, Hadjiiski L, Napel S, Goldgof D, Perez G, Arbelaez P, Mehrtash A, Kapur T, Yang E, Moon JW, Bernardino G, Delgado-Gonzalo R, Farhangi MM, Amini AA, Ni R, Feng X, Bagari A, Vaidhya K, Veasey B, Safta W, Frigui H, Enguehard J, Gholipour A, Castillo LS, Daza LA, Pinsky P, Kalpathy-Cramer J, Farahani K. Lung Nodule Malignancy Prediction in Sequential CT Scans: Summary of ISBI 2018 Challenge. IEEE Trans Med Imaging 2021; 40:3748-3761. [PMID: 34264825; PMCID: PMC9531053; DOI: 10.1109/tmi.2021.3097665]
Abstract
Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening with low-dose CT (LDCT) in reducing lung cancer-related mortality. While lung nodules are detected with high sensitivity, the exam has low specificity, and separating benign from malignant lesions remains difficult. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, focused on predicting lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training and the remainder were used for testing. Participants were evaluated on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, 11 of whom submitted reports with a method description, as mandated by the challenge rules. Participants used quantitative methods, with reported test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume-change estimate (p = .05 with Bonferroni-Holm correction).
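The evaluation described above (ranking by ROC AUC, then comparing predictors under a Bonferroni-Holm correction) can be sketched in a few lines. This is an illustrative sketch only, not the challenge's scoring code; the function names are ours:

```python
from itertools import product

def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of ROC AUC: the probability that a randomly
    chosen malignant nodule scores higher than a benign one (ties count half)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(scores_pos, scores_neg))
    return wins / (len(scores_pos) * len(scores_neg))

def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down procedure: sort p-values ascending,
    compare the k-th smallest (0-indexed) against alpha / (m - k), and
    stop rejecting at the first failure. Returns one reject flag per
    original hypothesis."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject
```

A finding of "no significant differences" as in the abstract corresponds to the step-down procedure failing to reject the pairwise comparisons at the corrected thresholds.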
Affiliation(s)
- Sandy Napel
- Dept. of Radiology, School of Medicine, Stanford University (SU), CA
- Gustavo Perez
- Biomedical Computer Vision Lab (BCV), Universidad de los Andes, Colombia
- Pablo Arbelaez
- Biomedical Computer Vision Lab (BCV), Universidad de los Andes, Colombia
- Alireza Mehrtash
- Robotics and Control Laboratory (RCL), Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC
- Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women’s Hospital, Boston, MA, 02130
- Tina Kapur
- Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women’s Hospital, Boston, MA, 02130
- Ehwa Yang
- Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Jung Won Moon
- Human Medical Imaging & Intervention Center, Seoul 06524, Korea
- Gabriel Bernardino
- Centre Suisse d’Électronique et de Microtechnique, Neuchâtel, Switzerland
- M. Mehdi Farhangi
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Computer Engineering and Computer Science, University of Louisville
- Amir A. Amini
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Xue Feng
- Spingbok Inc
- Department of Biomedical Engineering, University of Virginia, Charlottesville
- Benjamin Veasey
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Wiem Safta
- Computer Engineering and Computer Science, University of Louisville
- Hichem Frigui
- Computer Engineering and Computer Science, University of Louisville
- Joseph Enguehard
- Department of Radiology, Boston Children’s Hospital, and Harvard Medical School
- Ali Gholipour
- Department of Radiology, Boston Children’s Hospital, and Harvard Medical School
- Laura Alexandra Daza
- Department of Biomedical Engineering, Universidad de los Andes, Bogota, Colombia
- Paul Pinsky
- Division of Cancer Prevention, National Cancer Institute (NCI), Washington DC
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute (NCI), Washington DC
2
Enguehard J, O'Halloran P, Gholipour A. Semi Supervised Learning with Deep Embedded Clustering for Image Classification and Segmentation. IEEE Access 2019; 7:11093-11104. [PMID: 31588387; PMCID: PMC6777718; DOI: 10.1109/access.2019.2891970]
Abstract
Deep neural networks usually require large labeled datasets to construct accurate models; however, in many real-world scenarios, such as medical image segmentation, labeling data is a time-consuming and costly task requiring human (expert) intelligence. Semi-supervised methods address this issue by making use of a small labeled dataset and a larger set of unlabeled data. In this article, we present a flexible framework for semi-supervised learning that combines the power of supervised methods, which learn feature representations using state-of-the-art deep convolutional neural networks, with the deep embedded clustering algorithm, which assigns data points to clusters based on their probability distributions and the feature representations learned by the networks. Our proposed semi-supervised learning algorithm based on deep embedded clustering (SSLDEC) learns feature representations iteratively, alternating between labeled and unlabeled data points and computing target distributions from its predictions. During this iterative procedure, the algorithm uses the labeled samples to keep the model consistent with the labels while it simultaneously learns to improve its feature representations and predictions. SSLDEC requires few hyper-parameters and thus does not need large labeled validation sets, which addresses one of the main limitations of many semi-supervised learning algorithms. It is also flexible and can be used with many state-of-the-art deep neural network configurations for image classification and segmentation tasks. To this end, we implemented and tested our approach on benchmark image classification tasks as well as in a challenging medical image segmentation scenario. In benchmark classification tasks, SSLDEC outperformed several state-of-the-art semi-supervised learning methods, achieving 0.46% error on MNIST with 1000 labeled points and 4.43% error on SVHN with 500 labeled points. In the iso-intense infant brain MRI tissue segmentation task, we implemented SSLDEC on a 3D densely connected fully convolutional neural network, where we achieved significant improvement over supervised-only training as well as over a semi-supervised method based on pseudo-labelling. Our results show that SSLDEC can effectively reduce the need for costly expert annotations, enhancing applications such as automatic medical image segmentation.
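The target-distribution step at the heart of this alternation comes from the deep embedded clustering literature: square the soft cluster assignments and renormalize, which sharpens confident assignments while a per-cluster frequency term discourages cluster collapse. The sketch below is illustrative only; `ssl_step` and its callables are hypothetical names for the alternation the abstract describes, not the paper's implementation:

```python
import numpy as np

def target_distribution(q):
    """DEC-style target distribution: p_ij proportional to q_ij^2 / f_j,
    where f_j = sum_i q_ij is the soft cluster frequency; rows are then
    renormalized to sum to 1."""
    weight = q ** 2 / q.sum(axis=0)           # q_ij^2 / f_j
    return (weight.T / weight.sum(axis=1)).T  # renormalize each row

def ssl_step(model_predict, train_on, x_labeled, y_labeled, x_unlabeled):
    """One round of the alternation described in the abstract
    (callables are placeholders for a real training loop)."""
    # 1) a supervised pass keeps the model consistent with the labels
    train_on(x_labeled, y_labeled)
    # 2) sharpened targets are computed from the current predictions on
    #    unlabeled data, and the model is trained toward them
    q = model_predict(x_unlabeled)
    train_on(x_unlabeled, target_distribution(q))
```

Squaring-and-renormalizing pushes each row of `q` toward its dominant cluster, so repeated rounds gradually commit unlabeled points to confident assignments without ever requiring their labels.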
Affiliation(s)
- Joseph Enguehard
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA 02115, USA
- Harvard Medical School, Boston, MA 02115, USA
- Télécom ParisTech, 75013 Paris, France
- Peter O'Halloran
- Department of Radiology, Mount Auburn Hospital, Cambridge, MA 02138, USA
- Ali Gholipour
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA 02115, USA
- Harvard Medical School, Boston, MA 02115, USA