1. Wang J, Zheng N, Wan H, Yao Q, Jia S, Zhang X, Fu S, Ruan J, He G, Chen X, Li S, Chen R, Lai B, Wang J, Jiang Q, Ouyang N, Zhang Y. Deep learning models for thyroid nodules diagnosis of fine-needle aspiration biopsy: a retrospective, prospective, multicentre study in China. Lancet Digit Health 2024; 6:e458-e469. [PMID: 38849291] [DOI: 10.1016/s2589-7500(24)00085-2]
Abstract
BACKGROUND Accurately distinguishing between malignant and benign thyroid nodules through fine-needle aspiration cytopathology is crucial for appropriate therapeutic intervention. However, cytopathologic diagnosis is time consuming and hindered by a shortage of experienced cytopathologists. Reliable assistive tools could improve the efficiency and accuracy of cytopathologic diagnosis. We aimed to develop and test an artificial intelligence (AI)-assistive system for thyroid cytopathologic diagnosis according to the Thyroid Bethesda Reporting System. METHODS 11 254 whole-slide images (WSIs) from 4037 patients were used to train deep learning models. Selected WSIs were manually annotated at the cell level by cytopathologists according to the second edition (2017) of The Bethesda System for Reporting Thyroid Cytopathology (TBSRTC). A retrospective dataset of 5638 WSIs from 2914 patients across four medical centres was used for validation. For the prospective study of model performance, 469 patients were recruited and their 537 thyroid nodule samples were used. The training and validation cohorts were enrolled between Jan 1, 2016, and Aug 1, 2022, and the prospective cohort was recruited between Aug 1, 2022, and Jan 1, 2023. The performance of our AI models was estimated as the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, accuracy, positive predictive value, and negative predictive value. The primary outcomes were the sensitivity and specificity of the model in assisting the cytodiagnosis of thyroid nodules.
FINDINGS The AUROC of TBSRTC III+ (which distinguishes benign from TBSRTC classes III, IV, V, and VI) was 0·930 (95% CI 0·921-0·939) for internal validation at Sun Yat-sen Memorial Hospital of Sun Yat-sen University (SYSMH) and 0·944 (0·929-0·959), 0·939 (0·924-0·955), and 0·971 (0·938-1·000) for The First People's Hospital of Foshan (FPHF), Sichuan Cancer Hospital & Institute (SCHI), and The Third Affiliated Hospital of Guangzhou Medical University (TAHGMU), respectively. The AUROC of TBSRTC V+ (which distinguishes benign from TBSRTC classes V and VI) was 0·990 (95% CI 0·986-0·995) for SYSMH internal validation and 0·988 (0·980-0·995), 0·965 (0·953-0·977), and 0·991 (0·972-1·000) for FPHF, SCHI, and TAHGMU, respectively. In the prospective study at SYSMH, the AUROC of TBSRTC III+ and TBSRTC V+ was 0·977 and 0·981, respectively. With AI assistance, the specificity of junior cytopathologists rose from 0·887 (95% CI 0·844-0·922) to 0·993 (0·974-0·999), and their accuracy improved from 0·877 (0·846-0·904) to 0·948 (0·926-0·965). 186 atypia of undetermined significance samples from 186 patients with BRAF mutation information were collected; 43 of them harboured the BRAFV600E mutation. 91% (39/43) of BRAFV600E-positive atypia of undetermined significance samples were identified as malignant by the AI models. INTERPRETATION In this study, we developed an AI-assisted model, the Thyroid Patch-Oriented WSI Ensemble Recognition (ThyroPower) system, which facilitates rapid and robust cytodiagnosis of thyroid nodules, potentially enhancing the diagnostic capabilities of cytopathologists. Moreover, it serves as a potential solution to mitigate the scarcity of cytopathologists. FUNDING Guangdong Science and Technology Department. TRANSLATION For the Chinese translation of the abstract see the Supplementary Materials section.
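The metrics reported above (AUROC, sensitivity, specificity, accuracy) can be computed from model scores as in the following sketch. The data here are synthetic stand-ins for ThyroPower outputs, and the 0.5 operating threshold is illustrative, not the paper's.

```python
# Sketch: evaluating a binary benign-vs-suspicious classifier the way the
# study reports it. Synthetic scores stand in for real model outputs.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                  # 1 = TBSRTC III+ (suspicious or worse)
scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=500), 0, 1)

auroc = roc_auc_score(y_true, scores)                  # threshold-free ranking quality
y_pred = (scores >= 0.5).astype(int)                   # illustrative operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
print(f"AUROC={auroc:.3f} sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f}")
```

In practice the confidence intervals quoted in the abstract would come from bootstrap resampling of these same quantities.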
Affiliation(s)
- Jue Wang
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Nafen Zheng
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Huan Wan
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qinyue Yao
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
- Shijun Jia
- Department of Pathology, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Chengdu, China
- Xin Zhang
- Department of Pathology, The First People's Hospital of Foshan, Foshan, China
- Sha Fu
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Jingliang Ruan
- Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Gui He
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Xulin Chen
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
- Suiping Li
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
- Rui Chen
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
- Boan Lai
- Department of Pathology, The Third Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Jin Wang
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
- Qingping Jiang
- Department of Pathology, The Third Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Nengtai Ouyang
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Yin Zhang
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.
2. Takeda K, Sakai T, Mitate E. Background removal for debiasing computer-aided cytological diagnosis. Int J Comput Assist Radiol Surg 2024. [PMID: 38918281] [DOI: 10.1007/s11548-024-03169-0]
Abstract
To address the background-bias problem in computer-aided cytology caused by microscopic slide deterioration, this article proposes a deep learning approach for cell segmentation and background removal that requires no cell annotation. A U-Net-based model was trained to separate cells from the background in an unsupervised manner by exploiting the redundancy of the background and the sparsity of cells in liquid-based cytology (LBC) images. The experimental results demonstrate that a U-Net-based model trained on a small set of cytology images can exclude background features and accurately segment cells. This capability supports debiased detection and classification of the cells of interest in oral LBC. Slide deterioration can significantly affect deep learning-based cell classification; the proposed method removes background features at no annotation cost, enabling accurate cytological diagnosis through deep learning on microscopic slide images.
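The prior the paper exploits (redundant background across patches, sparse cells) can be illustrated with a minimal numpy analogue: a per-pixel median over many patches estimates the deteriorated background, and large deviations mark cells. This sketch is not the paper's U-Net, only a toy demonstration of the same assumption on synthetic patches.

```python
# Toy analogue: background is shared across LBC patches, cells are sparse,
# so the per-pixel median recovers the background without any annotation.
import numpy as np

rng = np.random.default_rng(1)
H, W, N = 32, 32, 40
background = rng.normal(0.8, 0.02, size=(H, W))         # deteriorated slide texture
patches = np.repeat(background[None], N, axis=0) + rng.normal(0, 0.01, (N, H, W))
for p in patches:                                       # one dark 4x4 "cell" per patch
    r, c = rng.integers(4, 28, size=2)
    p[r - 2:r + 2, c - 2:c + 2] = 0.2

bg_estimate = np.median(patches, axis=0)                # sparse cells barely affect the median
masks = np.abs(patches - bg_estimate) > 0.3             # foreground (cell) masks
print("mean cell pixels per patch:", masks.sum(axis=(1, 2)).mean())
```

Each synthetic patch carries a 16-pixel cell, and the deviation mask recovers almost exactly those pixels while ignoring the shared background texture.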
Affiliation(s)
- Keita Takeda
- School of Information and Data Sciences, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan.
- Tomoya Sakai
- School of Information and Data Sciences, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan
- Graduate School of Integrated Science and Technology, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan
- Eiji Mitate
- Department of Oral and Maxillofacial Surgery, Kanazawa Medical University, 1-1 Daigaku, Uchinada, Kahoku, Ishikawa, 9200293, Japan
3. Abd-Almoniem E, Abd-Alsabour N, Elsheikh S, Mostafa RR, Elesawy YF. A Novel Validated Real-World Dataset for the Diagnosis of Multiclass Serous Effusion Cytology according to the International System and Ground-Truth Validation Data. Acta Cytol 2024; 68:160-170. [PMID: 38522415] [DOI: 10.1159/000538465]
Abstract
INTRODUCTION The application of artificial intelligence (AI) algorithms to serous fluid cytology has been held back by the lack of standardized, publicly available datasets. Here, we develop a novel public serous effusion cytology dataset and apply AI algorithms to it to test its diagnostic utility and safety for clinical practice. METHODS The work is divided into three phases. Phase 1 entails building the dataset according to the multitiered, evidence-based classification scheme of the International System for Reporting Serous Fluid Cytopathology (TIS), along with ground-truth tissue diagnoses of malignancy. To ensure reliable results from future AI research on this dataset, we carefully consider every preparation and staining step from a real-world cytopathology perspective. In phase 2, we pay special attention to the image-acquisition pipeline to ensure image integrity, and we leverage transfer learning, using the convolutional layers of the VGG16 deep learning model for feature extraction. Finally, in phase 3, we apply a random forest classifier to the constructed dataset. RESULTS The dataset comprises 3,731 images distributed among the four TIS diagnostic categories. The model achieves 74% accuracy on this multiclass classification problem. With a one-versus-all classifier, the fallout rate for images misclassified as negative for malignancy despite carrying a higher-risk diagnosis is 0.13. Most of these misclassified images (77%) belong to the atypia of undetermined significance category, in concordance with real-life statistics. CONCLUSION This is the first and largest publicly available serous fluid cytology dataset based on a standardized diagnostic system. It is also the first dataset to include various types of effusions and pericardial fluid specimens, and the first to include the diagnostically challenging atypical categories. AI algorithms applied to this novel dataset show reliable results that could be incorporated into clinical practice with minimal risk of missing a diagnosis of malignancy. This work provides a foundation for researchers to develop and test further AI algorithms for the diagnosis of serous effusions.
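The phase 2-3 pipeline above (frozen VGG16 convolutional features feeding a random forest) can be sketched as follows. Random vectors stand in for VGG16 activations here, and the synthetic four-class structure is purely illustrative: the real pipeline would first run each image through VGG16's convolutional layers.

```python
# Sketch: pretrained-CNN features -> random forest, as in phases 2-3.
# Placeholder feature vectors substitute for VGG16 conv-layer activations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_per_class, n_feat = 120, 64
# four TIS categories, each with a distinct synthetic "feature signature"
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_class, n_feat)) for k in range(4)])
y = np.repeat(np.arange(4), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"multiclass accuracy: {acc:.2f}")
```

Freezing the convolutional feature extractor and training only a classical classifier is what makes this approach feasible on a dataset of a few thousand images.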
Affiliation(s)
- Esraa Abd-Almoniem
- Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt
- Nadia Abd-Alsabour
- Department of Computer Science, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza, Egypt
- Samar Elsheikh
- Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt
- Rasha R Mostafa
- Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt
- Yasmine Fathy Elesawy
- Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt
4. Civit-Masot J, Luna-Perejon F, Muñoz-Saavedra L, Domínguez-Morales M, Civit A. A lightweight xAI approach to cervical cancer classification. Med Biol Eng Comput 2024. [PMID: 38507122] [DOI: 10.1007/s11517-024-03063-6]
Abstract
Cervical cancer is caused in the vast majority of cases by the human papillomavirus (HPV), transmitted through sexual contact, and requires a specific molecular-based analysis to be detected. Although an HPV vaccine is available, the incidence of cervical cancer is up to ten times higher in areas without adequate healthcare resources. In recent years, liquid-based cytology has been used to overcome these shortcomings and enable mass screening. In addition, classifiers based on convolutional neural networks can be developed to help pathologists diagnose the disease. However, such systems always require a pathologist's final verification of the diagnosis. For this reason, explainable AI (XAI) techniques are needed to highlight the most significant data for the healthcare professional: they can be used to gauge confidence in the results and to expose the image areas used for classification, allowing the professional to mark the areas they consider most important and cross-check them against those detected by the system, thereby enabling incremental learning. In this work, a four-phase optimization process is used to obtain a custom deep learning classifier that distinguishes four severity classes of cervical cancer in liquid-cytology images. The final classifier achieves over 97% accuracy for four classes and 100% for two classes, with execution times under 1 s (including final report generation). Compared with previous works, the proposed classifier achieves better accuracy at a lower computational cost.
Affiliation(s)
- Javier Civit-Masot
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain.
- Francisco Luna-Perejon
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Luis Muñoz-Saavedra
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Manuel Domínguez-Morales
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Computer Engineering Research Institute, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Anton Civit
- Robotics and Computer Technology Lab, ETSII, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
- Computer Engineering Research Institute, Universidad de Sevilla, Reina Mercedes s/n, Seville, 41018, Spain
5. Kim D, Sundling KE, Virk R, Thrall MJ, Alperstein S, Bui MM, Chen-Yost H, Donnelly AD, Lin O, Liu X, Madrigal E, Michelow P, Schmitt FC, Vielh PR, Zakowski MF, Parwani AV, Jenkins E, Siddiqui MT, Pantanowitz L, Li Z. Digital cytology part 2: artificial intelligence in cytology: a concept paper with review and recommendations from the American Society of Cytopathology Digital Cytology Task Force. J Am Soc Cytopathol 2024; 13:97-110. [PMID: 38158317] [DOI: 10.1016/j.jasc.2023.11.005]
Abstract
Digital cytology and artificial intelligence (AI) are gaining greater adoption in the cytology laboratory. However, peer-reviewed real-world data and literature on the current clinical landscape are lacking. The American Society of Cytopathology, in conjunction with the International Academy of Cytology and the Digital Pathology Association, established a special task force of 20 members with expertise and/or interest in digital cytology. The aim of the group was to investigate the feasibility of incorporating digital cytology, specifically cytology whole-slide scanning and AI applications, into the laboratory workflow, and to assess the impact on cytopathologists, cytologists (cytotechnologists), and cytology departments. The task force reviewed the existing literature on digital cytology, conducted a worldwide survey, and held a virtual roundtable discussion on digital cytology and AI with multiple industry corporate representatives. This white paper, presented in two parts, summarizes the current state of digital cytology and AI in global cytology practice. Part 1, presented separately, details a review and best-practice recommendations for incorporating digital cytology into practice. Part 2, presented here, provides a comprehensive review of AI in cytology practice along with best-practice recommendations and legal considerations. Additionally, results of the global cytology survey, highlighting the AI practices and current attitudes of various laboratories, are reported.
Affiliation(s)
- David Kim
- Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
- Kaitlin E Sundling
- The Wisconsin State Laboratory of Hygiene and Department of Pathology and Laboratory Medicine, University of Wisconsin-Madison, Madison, Wisconsin
- Renu Virk
- Department of Pathology and Cell Biology, Columbia University, New York, New York
- Michael J Thrall
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Houston, Texas
- Susan Alperstein
- Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
- Marilyn M Bui
- The Department of Pathology, Moffitt Cancer Center & Research Institute, Tampa, Florida
- Amber D Donnelly
- Diagnostic Cytology Education, University of Nebraska Medical Center, College of Allied Health Professions, Omaha, Nebraska
- Oscar Lin
- Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
- Xiaoying Liu
- Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
- Emilio Madrigal
- Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts
- Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; Department of Pathology, National Health Laboratory Services, Johannesburg, South Africa
- Fernando C Schmitt
- Department of Pathology, Medical Faculty of Porto University, Porto, Portugal
- Philippe R Vielh
- Department of Pathology, Medipath and American Hospital of Paris, Paris, France
- Anil V Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Momin T Siddiqui
- Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
- Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Zaibo Li
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio.
6. Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 2023; 156:106668. [PMID: 36863192] [DOI: 10.1016/j.compbiomed.2023.106668]
Abstract
Deep learning techniques in artificial intelligence (AI) have revolutionized disease diagnosis with their outstanding image classification performance. Despite these results, adoption of the techniques in clinical practice is proceeding at only a moderate pace. One major hindrance is that a trained deep neural network (DNN) provides a prediction but leaves unanswered the questions of why and how that prediction was made. This transparency is of utmost importance in the regulated healthcare domain, where practitioners, patients, and other stakeholders must be able to trust an automated diagnosis system. The application of deep learning to medical imaging must be handled with caution given the health and safety concerns, much like blame attribution in an accident involving an autonomous car. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters and have a 'black box' nature, offering little insight into their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques make model predictions understandable, which builds trust in the system, accelerates disease diagnosis, and supports adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also categorize XAI techniques, discuss open challenges, and outline future directions for XAI of interest to clinicians, regulators, and model developers.
Affiliation(s)
- Sajid Nazir
- Department of Computing, Glasgow Caledonian University, Glasgow, UK.
- Diane M Dickson
- Department of Podiatry and Radiography, Research Centre for Health, Glasgow Caledonian University, Glasgow, UK
- Muhammad Usman Akram
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
7. Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333] [DOI: 10.1016/j.media.2022.102691]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images with computer-aided technologies for cancer screening. Recently, a growing number of deep learning (DL) approaches have made significant achievements in medical image analysis, driving a surge of publications on cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. We then systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nucleus or cell detection and segmentation. Finally, we discuss current challenges and potential research directions in computational cytology.
8. Teramoto A, Tsukamoto T, Michiba A, Kiriyama Y, Sakurai E, Imaizumi K, Saito K, Fujita H. Automated Classification of Idiopathic Pulmonary Fibrosis in Pathological Images Using Convolutional Neural Network and Generative Adversarial Networks. Diagnostics (Basel) 2022; 12:3195. [PMID: 36553202] [PMCID: PMC9777207] [DOI: 10.3390/diagnostics12123195]
Abstract
Interstitial pneumonia of uncertain cause is referred to as idiopathic interstitial pneumonia (IIP). Among the various types of IIP, idiopathic pulmonary fibrosis (IPF) carries an extremely poor prognosis, and accurate differentiation between IPF and non-IPF pneumonia is critical. In this study, we consider deep learning (DL) methods owing to their excellent image classification capabilities. Although DL models require large quantities of training data, collecting many pathological specimens is difficult for rare diseases. We therefore propose an end-to-end scheme that automatically classifies IIPs with a convolutional neural network (CNN) model. To compensate for the scarcity of data on rare diseases, we introduce a two-step training method that generates pathological images of IIPs using a generative adversarial network (GAN). Tissue specimens from 24 patients with IIPs were scanned with a whole-slide scanner, and the resulting images were divided into patches of 224 × 224 pixels. A progressive growing GAN (PGGAN) model was trained on 23,142 IPF images and 7,817 non-IPF images to generate 10,000 images for each of the two categories. The PGGAN-generated images were used alongside real images to train the CNN model. An evaluation of the generated images showed that cells and their locations were well expressed. The best classification performance, a detection sensitivity of 97.2% and a specificity of 69.4% for IPF, was obtained using DenseNet, and classification performance improved when PGGAN-generated images were included. These results indicate that the proposed method may be effective for the diagnosis of IPF.
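The augmentation step described above, pooling GAN-generated patches with real patches before CNN training, can be sketched with placeholder arrays. The counts mirror the abstract (23,142 IPF and 7,817 non-IPF real patches, 10,000 generated per class); the arrays merely stand in for the actual 224 × 224 images.

```python
# Sketch: assembling a mixed real + PGGAN-generated training set.
# Zero-filled arrays are placeholders for image patches.
import numpy as np

real_ipf, real_non = 23142, 7817
gen_per_class = 10000

X_real = np.zeros(real_ipf + real_non)             # placeholder real patches
y_real = np.array([1] * real_ipf + [0] * real_non) # 1 = IPF, 0 = non-IPF
X_gen = np.zeros(2 * gen_per_class)                # placeholder synthetic patches
y_gen = np.array([1] * gen_per_class + [0] * gen_per_class)

X = np.concatenate([X_real, X_gen])
y = np.concatenate([y_real, y_gen])
idx = np.random.default_rng(0).permutation(len(y)) # shuffle before CNN training
X, y = X[idx], y[idx]

ratio_before = real_ipf / real_non
ratio_after = (y == 1).sum() / (y == 0).sum()
print(f"class ratio before {ratio_before:.2f}, after augmentation {ratio_after:.2f}")
```

A side effect of adding equal numbers of generated images per class is that the class imbalance of the real data shrinks, which likely contributes to the reported performance gain.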
Affiliation(s)
- Atsushi Teramoto
- School of Medical Sciences, Fujita Health University, Toyoake 470-1192, Japan
- Tetsuya Tsukamoto
- Graduate School of Medicine, Fujita Health University, Toyoake 470-1192, Japan
- Ayano Michiba
- Graduate School of Medicine, Fujita Health University, Toyoake 470-1192, Japan
- Yuka Kiriyama
- Graduate School of Medicine, Fujita Health University, Toyoake 470-1192, Japan
- Eiko Sakurai
- Graduate School of Medicine, Fujita Health University, Toyoake 470-1192, Japan
- Kazuyoshi Imaizumi
- Graduate School of Medicine, Fujita Health University, Toyoake 470-1192, Japan
- Kuniaki Saito
- School of Medical Sciences, Fujita Health University, Toyoake 470-1192, Japan
- Hiroshi Fujita
- Faculty of Engineering, Gifu University, Gifu 501-1194, Japan
9. Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. [PMID: 36428662] [PMCID: PMC9688236] [DOI: 10.3390/cancers14225569]
Abstract
Medical imaging tools are essential for early-stage lung cancer diagnosis and for monitoring lung cancer during treatment. Various imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, however, including the inability to classify cancer images automatically and unsuitability for patients with other pathologies. A sensitive and accurate approach to the early diagnosis of lung cancer is therefore urgently needed. Deep learning is one of the fastest-growing topics in medical imaging, with applications rapidly emerging across image-based and textual data modalities. With deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
10. Oshita Y, Takeuchi N, Teramoto A, Kondo M, Imaizumi K, Saito K, Fujita H. [Prognosis Prediction of Lung Cancer Patients Using CT Images: Feature Extraction by Convolutional Neural Network and Prediction by Machine Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2022; 78:829-837. [PMID: 35811128] [DOI: 10.6009/jjrt.2022-1224]
Abstract
PURPOSE Lung cancer accounts for the largest number of deaths among malignant tumors, and patients are increasingly concerned about their life expectancy. CT examination is essential for the diagnosis of lung cancer, but accurately predicting prognosis from CT images is difficult. In this study, we developed a method to predict the prognosis of lung cancer patients from CT images using a convolutional neural network (CNN) and machine learning. METHODS CT images of 173 lung cancer patients were collected. We first selected the slice with the largest tumor in each case and extracted features using a CNN. We then performed feature selection using information gain and predicted survival or death with classifiers, using an artificial neural network or naïve Bayes; survival status was predicted at one-year intervals from one to five years. RESULTS Evaluated with three-fold cross-validation, prediction accuracies were around 80% for all periods from one to five years. The predicted survival curve was close in shape to the actual curve. CONCLUSION These results indicate that CNN-based feature extraction combined with machine learning classification may be effective for predicting the prognosis of lung cancer patients from CT images.
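The pipeline above (CNN features, information-gain feature selection, a simple classifier, three-fold cross-validation) can be sketched with scikit-learn. Synthetic features stand in for the CNN activations, mutual information serves as the information-gain criterion, and the resulting accuracies are illustrative, not the paper's ~80%.

```python
# Sketch: CNN-extracted features -> information-gain selection -> naive Bayes,
# scored with 3-fold cross-validation, as in the abstract. Features are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n, n_feat, n_informative = 173, 128, 10            # 173 patients, as in the study
y = rng.integers(0, 2, size=n)                     # 1 = deceased at the chosen horizon
X = rng.normal(size=(n, n_feat))
X[:, :n_informative] += y[:, None] * 1.5           # only 10 features carry signal

pipe = make_pipeline(
    SelectKBest(lambda X, y: mutual_info_classif(X, y, random_state=0), k=20),
    GaussianNB(),                                  # the abstract's naive Bayes option
)
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv)
print("3-fold accuracies:", np.round(scores, 2))
```

Running the selection inside the cross-validation pipeline, rather than once on the full dataset, avoids leaking test-fold information into the chosen features.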
Affiliation(s)
- Yuki Oshita
- Graduate School of Health Sciences, Fujita Health University
- Nonoko Takeuchi
- Graduate School of Health Sciences, Fujita Health University (current address: FUJIFILM Medical Corporation)
- Atsushi Teramoto
- Intelligent Information Engineering Field, Research Promotion Unit, School of Medical Sciences, Fujita Health University
- Kuniaki Saito
- Advanced Diagnostic System Development Field, Research Promotion Unit, School of Medical Sciences, Fujita Health University
11. Xu Z, Ren H, Zhou W, Liu Z. ISANET: Non-small cell lung cancer classification and detection based on CNN and attention mechanism. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103773]
12. Thakur N, Alam MR, Abdul-Ghafar J, Chong Y. Recent Application of Artificial Intelligence in Non-Gynecological Cancer Cytopathology: A Systematic Review. Cancers (Basel) 2022; 14:3529. [PMID: 35884593] [PMCID: PMC9316753] [DOI: 10.3390/cancers14143529]
Abstract
Simple Summary Artificial intelligence (AI) has attracted significant interest in the healthcare sector due to its promising results. Cytological examination is a critical step in the initial diagnosis of cancer. Here, we conducted a systematic review with quantitative analysis to understand the current status of AI applications in non-gynecological (non-GYN) cancer cytology. In our analysis, we found that most of the studies focused on classification and segmentation tasks. Overall, AI showed promising results for non-GYN cancer cytopathology analysis. However, the lack of well-annotated, large-scale datasets with Z-stacking and external cross-validation was the major limitation across all studies. Abstract State-of-the-art artificial intelligence (AI) has recently gained considerable interest in the healthcare sector and has provided solutions to problems through automated diagnosis. Cytological examination is a crucial step in the initial diagnosis of cancer, although it shows limited diagnostic efficacy. Recently, AI applications in the processing of cytopathological images have shown promising results despite the elementary level of the technology. Here, we performed a systematic review with a quantitative analysis of recent AI applications in non-gynecological (non-GYN) cancer cytology to understand the current technical status. We searched the major online databases, including MEDLINE, Cochrane Library, and EMBASE, for relevant English articles published from January 2010 to January 2021. The searched query terms were: “artificial intelligence”, “image processing”, “deep learning”, “cytopathology”, and “fine-needle aspiration cytology.” Out of 17,000 studies, only 26 studies (26 models) were included in the full-text review, whereas 13 studies were included for quantitative analysis. 
The AI models were grouped into eight classes according to target organ: thyroid (n = 11, 39%), urinary bladder (n = 6, 21%), lung (n = 4, 14%), breast (n = 2, 7%), pleural effusion (n = 2, 7%), ovary (n = 1, 4%), pancreas (n = 1, 4%), and prostate (n = 1, 4%). Most of the studies focused on classification and segmentation tasks. Although most of the studies showed impressive results, the sizes of the training and validation datasets were limited. Overall, AI is promising for non-GYN cancer cytopathology analysis, as it is for histopathology and gynecological cytology. However, the lack of well-annotated, large-scale datasets with Z-stacking and external cross-validation was the major limitation found across all studies. Future studies with larger datasets, high-quality annotations, and external validation are required.
13
Transfer learning for histopathology images: an empirical study. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07516-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
14
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 03/15/2022] [Accepted: 05/02/2022] [Indexed: 12/11/2022]
15
Deep convolutional neural network-based classification of cancer cells on cytological pleural effusion images. Mod Pathol 2022; 35:609-614. [PMID: 35013527 PMCID: PMC9042694 DOI: 10.1038/s41379-021-00987-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Revised: 11/20/2021] [Accepted: 11/22/2021] [Indexed: 11/23/2022]
Abstract
Lung cancer is one of the leading causes of cancer-related death worldwide. Cytology plays an important role in the initial evaluation and diagnosis of patients with lung cancer. However, owing to the subjectivity of cytopathologists and region-dependent diagnostic levels, the low consistency of liquid-based cytological diagnosis results in certain proportions of misdiagnoses and missed diagnoses. In this study, we performed a weakly supervised deep learning method for the classification of benign and malignant cells in lung cytological images through a deep convolutional neural network (DCNN). A total of 404 cases of lung cancer cells in effusion cytology specimens from Shanghai Pulmonary Hospital were investigated, in which 266, 78, and 60 cases were used as the training, validation and test sets, respectively. The proposed method was evaluated on 60 whole-slide images (WSIs) of lung cancer pleural effusion specimens. This study showed that the method had an accuracy, sensitivity, and specificity of 91.67%, 87.50%, and 94.44%, respectively, in classifying malignant and benign lesions (or normal). The area under the receiver operating characteristic (ROC) curve (AUC) was 0.9526 (95% confidence interval (CI): 0.9019-0.9909). In contrast, the average accuracies of senior and junior cytopathologists were 98.34% and 83.34%, respectively. The proposed deep learning method will be useful and may assist pathologists with different levels of experience in the diagnosis of cancer cells on cytological pleural effusion images in the future.
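The reported AUC is the standard rank statistic. As a reminder of what that number measures, here is a minimal sketch (an illustrative `auroc` helper, not the study's evaluation code) computing AUROC as the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counted as one half:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs in which the positive case scores higher
    (ties contribute 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating score list gives 1.0; one swapped pair out of four gives 0.75.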
16
Dholey M, Sarkar A, Giri A, Sadhu A, Chaudhury K, Chatterjee J. Pixel-Based Nuclei Segmentation in Fine Needle Aspiration Cytology of Lung Lesions. ADVANCES IN INTELLIGENT SYSTEMS AND COMPUTING 2022:1-12. [DOI: 10.1007/978-981-16-4369-9_1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/19/2023]
17
Ugawa M, Kawamura Y, Toda K, Teranishi K, Morita H, Adachi H, Tamoto R, Nomaru H, Nakagawa K, Sugimoto K, Borisova E, An Y, Konishi Y, Tabata S, Morishita S, Imai M, Takaku T, Araki M, Komatsu N, Hayashi Y, Sato I, Horisaki R, Noji H, Ota S. In silico-labeled ghost cytometry. eLife 2021; 10:e67660. [PMID: 34930522 PMCID: PMC8691837 DOI: 10.7554/elife.67660] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Accepted: 10/26/2021] [Indexed: 11/13/2022] Open
Abstract
Characterization and isolation of a large population of cells are indispensable procedures in biological sciences. Flow cytometry is one of the standards that offers a method to characterize and isolate cells at high throughput. When performing flow cytometry, cells are molecularly stained with fluorescent labels to adopt biomolecular specificity which is essential for characterizing cells. However, molecular staining is costly and its chemical toxicity can cause side effects to the cells which becomes a critical issue when the cells are used downstream as medical products or for further analysis. Here, we introduce a high-throughput stain-free flow cytometry called in silico-labeled ghost cytometry which characterizes and sorts cells using machine-predicted labels. Instead of detecting molecular stains, we use machine learning to derive the molecular labels from compressive data obtained with diffractive and scattering imaging methods. By directly using the compressive 'imaging' data, our system can accurately assign the designated label to each cell in real time and perform sorting based on this judgment. With this method, we were able to distinguish different cell states, cell types derived from human induced pluripotent stem (iPS) cells, and subtypes of peripheral white blood cells using only stain-free modalities. Our method will find applications in cell manufacturing for regenerative medicine as well as in cell-based medical diagnostic assays in which fluorescence labeling of the cells is undesirable.
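The core idea above, assigning a molecular label directly from compressive "imaging" measurements rather than from a reconstructed image, can be caricatured in a toy sketch. This is not the authors' pipeline: the diffractive/scattering optics are replaced by a hypothetical random-pattern projection, and the machine-learned model by a plain linear threshold; all names are illustrative.

```python
def compress(signal, patterns):
    """Compressive measurement: project a high-dimensional waveform onto a
    few fixed patterns (one dot product per pattern)."""
    return [sum(p * s for p, s in zip(pattern, signal)) for pattern in patterns]

def predict_label(measurement, weights, bias):
    """Linear stand-in for the machine-predicted label: threshold a score
    computed directly in the compressed domain, with no image reconstruction."""
    score = sum(w * m for w, m in zip(weights, measurement)) + bias
    return 1 if score > 0 else 0
```

The point of the sketch is the data flow: classification happens on the few compressed values, which is what makes real-time sorting feasible.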
Affiliation(s)
- Masashi Ugawa
- Thinkcyte Inc, Tokyo, Japan
- Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan
- The University of Tokyo, Tokyo, Japan
- Yuri An
- BioResource Research Center, RIKEN, Tsukuba, Japan
- Issei Sato
- Thinkcyte Inc, Tokyo, Japan
- The University of Tokyo, Tokyo, Japan
- Ryoichi Horisaki
- Thinkcyte Inc, Tokyo, Japan
- The University of Tokyo, Tokyo, Japan
- PRESTO, Japan Science and Technology Agency, Kawaguchi, Japan
- Sadao Ota
- Thinkcyte Inc, Tokyo, Japan
- The University of Tokyo, Tokyo, Japan
- PRESTO, Japan Science and Technology Agency, Kawaguchi, Japan
18
Lin C, Chang J, Huang C, Wen Y, Ho C, Cheng Y. Effectiveness of convolutional neural networks in the interpretation of pulmonary cytologic images in endobronchial ultrasound procedures. Cancer Med 2021; 10:9047-9057. [PMID: 34725953 PMCID: PMC8683546 DOI: 10.1002/cam4.4383] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 08/27/2021] [Accepted: 09/26/2021] [Indexed: 01/17/2023] Open
Abstract
BACKGROUND Rapid on-site cytologic evaluation (ROSE) helps to improve the diagnostic accuracy in endobronchial ultrasound (EBUS) procedures. However, cytologists are seldom available to perform ROSE in many institutions. Recent studies have investigated the application of deep learning in cytologic image analysis. As such, the present study analyzed lung cytologic images obtained by EBUS procedures, and employed deep-learning methods to distinguish between benign and malignant cells and to semantically segment malignant cells. METHODS Ninety-seven patients who underwent 104 EBUS procedures were enrolled. Four hundred and ninety-nine lung cytologic images were obtained via ROSE, including 425 malignant and 74 benign; most of the malignant images were lung adenocarcinoma (64.3%). All the images were used to train a residual network model with 101 layers (ResNet101), with suitable hyperparameters selected to classify benign and malignant lung cytologic images. An HRNet model was also employed to mark the area of malignant cells. Automatic patch-cropping was adopted to facilitate dataset preparation. RESULTS Malignant cells were successfully classified by ResNet101 with 98.8% classification accuracy, 98.8% sensitivity, and 98.8% specificity in patch-based classification; 95.5% classification accuracy in image-based classification; and 92.9% classification accuracy in patient-based classification. Malignant cell area was successfully marked by HRNet with a mean intersection over union of 89.2%. The automatic cropping method enabled the system to complete diagnosis within 1 s. CONCLUSIONS This is the first study to combine lung cytologic image deep-learning classification with semantic segmentation. The model was optimized for high accuracy and the automatic cropping facilitates the clinical application of our model. The success in both lung cytologic image classification and semantic segmentation on our dataset shows a promising result for clinical application in the future.
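The abstract reports accuracies at three levels (patch, image, patient) but does not state how lower-level calls are pooled, so the majority-vote aggregation below is an assumption for illustration; `aggregate` and `patient_prediction` are hypothetical helpers, not the study's code.

```python
def aggregate(votes, positive="malignant"):
    """Majority vote over lower-level predictions; ties go to the positive
    class so a borderline case is flagged rather than cleared."""
    pos = sum(1 for v in votes if v == positive)
    return positive if pos * 2 >= len(votes) else "benign"

def patient_prediction(patient_images):
    """patient_images: one list of patch-level labels per image.
    Pool patches into image calls, then image calls into a patient call."""
    image_calls = [aggregate(patches) for patches in patient_images]
    return aggregate(image_calls)
```

This two-stage pooling is one simple way patch-level accuracy can translate into the lower image- and patient-level figures quoted above.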
Affiliation(s)
- Ching‐Kai Lin
- Department of Mechanical Engineering, College of Engineering, National Yang Ming Chiao Tung University, Hsin‐Chu, Taiwan
- Department of Medicine, National Taiwan University Cancer Center, Taipei, Taiwan
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Department of Internal Medicine, National Taiwan University Hsin‐Chu Hospital, Hsin‐Chu, Taiwan
- Jerry Chang
- Department of Mechanical Engineering, College of Engineering, National Yang Ming Chiao Tung University, Hsin‐Chu, Taiwan
- Ching‐Chun Huang
- Department of Computer Science, College of Computer Science, National Yang Ming Chiao Tung University, Hsin‐Chu, Taiwan
- Yueh‐Feng Wen
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Department of Internal Medicine, National Taiwan University Hsin‐Chu Hospital, Hsin‐Chu, Taiwan
- Chao‐Chi Ho
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Yun‐Chien Cheng
- Department of Mechanical Engineering, College of Engineering, National Yang Ming Chiao Tung University, Hsin‐Chu, Taiwan
19
Yan Y, Yao XJ, Wang SH, Zhang YD. A Survey of Computer-Aided Tumor Diagnosis Based on Convolutional Neural Network. BIOLOGY 2021; 10:biology10111084. [PMID: 34827077 PMCID: PMC8615026 DOI: 10.3390/biology10111084] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 10/19/2021] [Accepted: 10/20/2021] [Indexed: 01/10/2023]
Abstract
Simple Summary One of the hottest areas in deep learning is computerized tumor diagnosis and treatment, which frequently includes the identification of tumor markers, the outlining of tumor growth activity, and the staging of various tumor kinds. Several deep learning models based on convolutional neural networks offer high performance and accurate identification, with the potential to improve medical tasks. Owing to breakthroughs in computer algorithms and hardware devices, intelligent algorithms applied to medical images can achieve a diagnostic accuracy that doctors cannot match for some diseases. This paper reviews the progress of tumor detection from traditional computer-aided methods to convolutional neural networks and demonstrates, through practical cases, the potential of convolutional neural networks to move the detection model from experiment to clinical application. Abstract Tumors are new tissues that are harmful to human health. The malignant tumor is one of the main diseases that seriously affect human health and threaten human life. For cancer treatment, early detection of pathological features is essential to reduce cancer mortality effectively. Traditional diagnostic methods include routine laboratory tests of the patient’s secretions and serum, as well as immune and genetic tests. At present, the commonly used clinical imaging examinations include X-ray, CT, MRI, and SPECT scans. With the emergence of new problems in radiation noise reduction, medical image noise reduction technology is increasingly investigated by researchers. At the same time, doctors often need to rely on clinical experience and academic background knowledge in the follow-up diagnosis of lesions, and it remains challenging to advance clinical diagnosis technology. These medical needs have driven research on medical imaging technology and computer-aided diagnosis.
The advantages of convolutional neural networks in tumor diagnosis are increasingly obvious, and research on computer-aided diagnosis based on medical images of tumors has become a major focus in the field. Neural networks have been commonly used to develop intelligent methods that assist medical image diagnosis and have made significant progress. This paper introduces the traditional methods of computer-aided diagnosis of tumors, covers the segmentation and classification of tumor images as well as CNN-based diagnosis methods that help doctors identify tumors, and provides a reference for developing CNN-based computer-aided systems for tumor detection in the future.
20
Weakly supervised learning for classification of lung cytological images using attention-based multiple instance learning. Sci Rep 2021; 11:20317. [PMID: 34645863 PMCID: PMC8514584 DOI: 10.1038/s41598-021-99246-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2021] [Accepted: 09/22/2021] [Indexed: 11/09/2022] Open
Abstract
In cytological examination, suspicious cells are evaluated regarding malignancy and cancer type. To assist this, we previously proposed an automated method based on supervised learning that classifies cells in lung cytological images as benign or malignant. However, it is often difficult to label all cells. In this study, we developed a weakly supervised method for the classification of benign and malignant lung cells in cytological images using attention-based deep multiple instance learning (AD MIL). Images of lung cytological specimens were divided into small patch images and stored in bags. Each bag was then labeled as benign or malignant, and classification was conducted using AD MIL. The distribution of attention weights was also calculated as a color map to confirm the presence of malignant cells in the image. AD MIL using the AlexNet-like convolutional neural network model showed the best classification performance, with an accuracy of 0.916, which was better than that of supervised learning. In addition, an attention map of the entire image based on the attention weights allowed AD MIL to focus on most malignant cells. Our weakly supervised method automatically classifies cytological images with acceptable accuracy, without the complex annotations required for supervised learning.
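Attention-based MIL pools patch embeddings with learned softmax weights, and those same weights give the attention map mentioned above. A minimal sketch of the pooling step, with illustrative names and hand-set gating parameters where a real model would learn them:

```python
import math

def attention_pool(embeddings, w, v):
    """Attention-based MIL pooling: score each patch embedding with a small
    tanh gate, softmax the scores into attention weights, and return the
    weighted bag embedding together with the weights themselves."""
    scores = [sum(wi * math.tanh(vi * e) for wi, vi, e in zip(w, v, emb))
              for emb in embeddings]
    m = max(scores)                              # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(embeddings[0])
    bag = [sum(weights[i] * embeddings[i][d] for i in range(len(embeddings)))
           for d in range(dim)]
    return bag, weights
```

The bag embedding feeds the bag-level benign/malignant classifier, while the per-patch weights can be rendered as the color map described in the abstract.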
21
Development of a Fully Automated Glioma-Grading Pipeline Using Post-Contrast T1-Weighted Images Combined with Cloud-Based 3D Convolutional Neural Network. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11115118] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Glioma is the most common type of brain tumor, and its grade influences its treatment policy and prognosis. Therefore, artificial-intelligence-based tumor grading methods have been studied. However, in most studies, two-dimensional (2D) analysis and manual tumor-region extraction were performed. Additionally, deep learning research that uses medical images experiences difficulties in collecting image data and preparing hardware, thus hindering its widespread use. Therefore, we developed a 3D convolutional neural network (3D CNN) pipeline for realizing a fully automated glioma-grading system by using the pretrained Clara segmentation model provided by NVIDIA and our original classification model. In this method, the brain tumor region was extracted using the Clara segmentation model, and the volume of interest (VOI) created using this extracted region was assigned to a grading 3D CNN and classified as either grade II, III, or IV. Through evaluation using 46 regions, the grading accuracy of all tumors was 91.3%, which was comparable to that of the method using multi-sequence. The proposed pipeline scheme may enable the creation of a fully automated glioma-grading pipeline in a single sequence by combining the pretrained 3D CNN and our original 3D CNN.
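The pipeline's VOI step, cropping a box around the segmented tumor before grading, reduces to a bounding-box computation over the segmentation mask. The sketch below is illustrative only (`voi_bounds` is a hypothetical helper over nested-list masks, not the NVIDIA Clara API):

```python
def voi_bounds(mask, margin=0):
    """Inclusive (z, y, x) bounds of the nonzero voxels in a nested-list 3D
    mask, padded by an optional margin and clipped to the volume."""
    coords = [(z, y, x)
              for z, plane in enumerate(mask)
              for y, row in enumerate(plane)
              for x, v in enumerate(row) if v]
    if not coords:
        return None  # no tumor voxels segmented
    zs, ys, xs = zip(*coords)
    shape = (len(mask), len(mask[0]), len(mask[0][0]))
    lo = [max(min(c) - margin, 0) for c in (zs, ys, xs)]
    hi = [min(max(c) + margin, s - 1) for c, s in zip((zs, ys, xs), shape)]
    return tuple(zip(lo, hi))
```

The resulting box is what would be cropped from the post-contrast T1 volume and passed to the grading 3D CNN.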
22
Victória Matias A, Atkinson Amorim JG, Buschetto Macarini LA, Cerentini A, Casimiro Onofre AS, De Miranda Onofre FB, Daltoé FP, Stemmer MR, von Wangenheim A. What is the state of the art of computer vision-assisted cytology? A Systematic Literature Review. Comput Med Imaging Graph 2021; 91:101934. [PMID: 34174544 DOI: 10.1016/j.compmedimag.2021.101934] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 04/16/2021] [Accepted: 05/04/2021] [Indexed: 11/28/2022]
Abstract
Cytology is a low-cost and non-invasive diagnostic procedure employed to support the diagnosis of a broad range of pathologies. Cells are harvested from tissues by aspiration or scraping, and it is still predominantly performed manually by medical or laboratory professionals extensively trained for this purpose. It is a time-consuming and repetitive process where many diagnostic criteria are subjective and vulnerable to human interpretation. Computer Vision technologies, by automatically generating quantitative and objective descriptions of examinations' contents, can help minimize the chances of misdiagnoses and shorten the time required for analysis. To identify the state-of-art of computer vision techniques currently applied to cytology, we conducted a Systematic Literature Review, searching for approaches for the segmentation, detection, quantification, and classification of cells and organelles using computer vision on cytology slides. We analyzed papers published in the last 4 years. The initial search was executed in September 2020 and resulted in 431 articles. After applying the inclusion/exclusion criteria, 157 papers remained, which we analyzed to build a picture of the tendencies and problems present in this research area, highlighting the computer vision methods, staining techniques, evaluation metrics, and the availability of the used datasets and computer code. As a result, we identified that classic computer vision only is still the most used approach in the analyzed works (101 papers), while fewer works employ deep learning-based methods (70 papers). The most recurrent metric for classification and object detection was accuracy (33 and 5 papers, respectively), while for segmentation it was the Dice Similarity Coefficient (38 papers). Regarding staining techniques, Papanicolaou was the most employed one (130 papers), followed by H&E (20 papers) and Feulgen (5 papers).
Twelve of the datasets used in the papers are publicly available, with the DTU/Herlev dataset being the most used one. We conclude that there still is a lack of high-quality datasets for many types of stains and most of the works are not mature enough to be applied in a daily clinical diagnostic routine. We also identified a growing tendency towards adopting deep learning-based approaches as the methods of choice.
Affiliation(s)
- André Victória Matias
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil.
- Allan Cerentini
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil.
- Felipe Perozzo Daltoé
- Department of Pathology, Federal University of Santa Catarina, Florianópolis, Brazil.
- Marcelo Ricardo Stemmer
- Automation and Systems Department, Federal University of Santa Catarina, Florianópolis, Brazil.
- Aldo von Wangenheim
- Brazilian Institute for Digital Convergence, Federal University of Santa Catarina, Florianópolis, Brazil.
23
A combined microfluidic deep learning approach for lung cancer cell high throughput screening toward automatic cancer screening applications. Sci Rep 2021; 11:9804. [PMID: 33963232 PMCID: PMC8105370 DOI: 10.1038/s41598-021-89352-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2020] [Accepted: 04/26/2021] [Indexed: 02/07/2023] Open
Abstract
Lung cancer is a leading cause of cancer death in both men and women worldwide. The high mortality rate in lung cancer is in part due to late-stage diagnostics as well as spread of cancer cells to organs and tissues by metastasis. Automated lung cancer detection and its sub-type classification from cell images play a crucial role toward an early-stage cancer prognosis and more individualized therapy. The rapid development of machine learning techniques, especially deep learning algorithms, has attracted much interest in its application to medical image problems. In this study, to develop a reliable Computer-Aided Diagnosis (CAD) system for accurately distinguishing between cancer and healthy cells, we grew popular Non-Small Cell Lung Cancer lines in a microfluidic chip followed by staining with Phalloidin, and images were obtained by using an IX-81 inverted Olympus fluorescence microscope. We designed and tested a deep learning image analysis workflow for classification of lung cancer cell-line images into six classes, including five different cancer cell lines (PC-9, SK-LU-1, H-1975, A-427, and A-549) and a normal cell line (16-HBE). Our results demonstrate that ResNet18, a residual learning convolutional neural network, is an efficient and promising method for lung cancer cell-line categorization, with a classification accuracy of 98.37% and F1-score of 97.29%. Our proposed workflow is also able to successfully distinguish normal versus cancerous cell lines with a remarkable average accuracy of 99.77% and F1-score of 99.87%. The proposed CAD system completely eliminates the need for extensive user intervention, enabling the processing of large amounts of image data with robust and highly accurate results.
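The accuracy and F1-score figures above follow the standard binary definitions; as a reference, a minimal sketch with illustrative helper names:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Because F1 ignores true negatives, it can diverge noticeably from accuracy on the imbalanced class mixes typical of cancer-versus-normal datasets.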
24
Kimura R, Teramoto A, Ohno T, Saito K, Fujita H. Virtual digital subtraction angiography using multizone patch-based U-Net. Phys Eng Sci Med 2020; 43:1305-1315. [PMID: 33026591 DOI: 10.1007/s13246-020-00933-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 09/25/2020] [Indexed: 01/20/2023]
Abstract
Digital subtraction angiography (DSA) is a powerful technique for visualizing blood vessels from X-ray images. However, the subtraction images obtained with this technique suffer from artifacts caused by patient motion. To avoid these artifacts, a new method called "Virtual DSA" is proposed, which generates DSA images directly from a single live image without using a mask image. The proposed Virtual DSA method was developed using the U-Net deep learning architecture. In the proposed method, a virtual DSA image only containing the extracted blood vessels was generated by inputting a single live image into U-Net. To extract the blood vessels more accurately, U-Net operates on each small area via a patch-based process. In addition, a different network was used for each zone to use the local information. The evaluation of the live images of the head confirmed accurate blood vessel extraction without artifacts in the virtual DSA image generated with the proposed method. In this study, the NMSE, PSNR, and SSIM indices were 8.58%, 33.86 dB, and 0.829, respectively. These results indicate that the proposed method can visualize blood vessels without motion artifacts from a single live image.
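The patch-based processing described above (run the network per small area, then reassemble the full image) can be sketched with a tiler and its inverse. These are illustrative helpers, not the authors' code: they assume image dimensions divisible by the patch size and omit the per-zone networks applied between splitting and merging.

```python
def split_patches(image, ph, pw):
    """Tile a 2D image (nested lists) into a grid of non-overlapping
    ph x pw patches."""
    h, w = len(image), len(image[0])
    return [[[row[x:x + pw] for row in image[y:y + ph]]
             for x in range(0, w, pw)]
            for y in range(0, h, ph)]

def merge_patches(grid):
    """Inverse of split_patches: stitch the patch grid back into one image."""
    out = []
    for patch_row in grid:
        ph = len(patch_row[0])
        for r in range(ph):
            out.append([px for patch in patch_row for px in patch[r]])
    return out
```

In the paper's scheme, each patch would pass through the U-Net for its zone before merging, so the reassembled output contains only the extracted vessels.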
Affiliation(s)
- Ryusei Kimura
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Atsushi Teramoto
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan.
- Tomoyuki Ohno
- Fujita Health University Bantane Hospital, 3-6-10 Otobashi Nakagawa-ku, Nagoya-city, Aichi, 454-8509, Japan
- Kuniaki Saito
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Hiroshi Fujita
- Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-city, Gifu, 501-1194, Japan
25
Automated Detection and Segmentation of Early Gastric Cancer from Endoscopic Images Using Mask R-CNN. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10113842] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Gastrointestinal endoscopy is widely conducted for the early detection of gastric cancer. However, it is often difficult to detect early gastric cancer lesions and accurately evaluate the invasive regions. Our study aimed to develop a detection and segmentation method for early gastric cancer regions from gastrointestinal endoscopic images. In this method, we first collected 1208 healthy and 533 cancer images. The gastric cancer region was detected and segmented from endoscopic images using Mask R-CNN, an instance segmentation method. An endoscopic image was provided to the Mask R-CNN, and a bounding box and a label image of the gastric cancer region were obtained. As a performance evaluation via five-fold cross-validation, sensitivity and false positives (FPs) per image were 96.0% and 0.10 FP/image, respectively. In the evaluation of segmentation of the gastric cancer region, the average Dice index was 71%. These results indicate that our proposed scheme may be useful for the detection of gastric cancer and evaluation of the invasive region in gastrointestinal endoscopy.
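The Dice index used in the segmentation evaluation above is defined as 2|A∩B| / (|A| + |B|). A minimal sketch over flat binary masks (illustrative helper, binary case only):

```python
def dice(mask_a, mask_b):
    """Dice similarity of two flat binary masks: 2|A∩B| / (|A| + |B|),
    defined as 1.0 when both masks are empty."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 1.0 if size == 0 else 2 * inter / size
```

Identical masks score 1.0 and disjoint masks 0.0, so the paper's average of 71% indicates substantial but imperfect overlap with the annotated cancer regions.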
26
Investigation of pulmonary nodule classification using multi-scale residual network enhanced with 3DGAN-synthesized volumes. Radiol Phys Technol 2020; 13:160-169. [DOI: 10.1007/s12194-020-00564-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2020] [Revised: 04/17/2020] [Accepted: 04/18/2020] [Indexed: 02/06/2023]
27
Hayashi Y. New unified insights on deep learning in radiological and pathological images: Beyond quantitative performances to qualitative interpretation. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100329] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
28
Decision Support System for Lung Cancer Using PET/CT and Microscopic Images. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2020; 1213:73-94. [DOI: 10.1007/978-3-030-33128-3_5] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
29
Matsubara N, Teramoto A, Saito K, Fujita H. Bone suppression for chest X-ray image using a convolutional neural filter. AUSTRALASIAN PHYSICAL & ENGINEERING SCIENCES IN MEDICINE 2019; 43:10.1007/s13246-019-00822-w. [PMID: 31773501 DOI: 10.1007/s13246-019-00822-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2019] [Accepted: 11/19/2019] [Indexed: 12/22/2022]
Abstract
Chest X-rays are used for mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem, but their accuracy still needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network, which is frequently used in the medical field and has excellent performance in image processing. The CNF outputs a value for the bone component of the target pixel by taking as input the pixel values in the neighborhood of the target pixel. By processing all positions in the input image, a bone-extracted image is generated. Finally, a bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using a CNF with six convolutional layers, yielding bone suppression of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only bone components while maintaining soft tissue. These results suggest that the chances of missing abnormalities may be reduced by using the proposed method, which is useful for bone suppression in chest X-ray images.
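The final step described above, subtracting the bone-extracted image from the original radiograph, reduces to a clamped per-pixel subtraction. The sketch below is illustrative (`suppress_bone` is a hypothetical helper assuming 8-bit intensities); the CNF that predicts the bone image is omitted.

```python
def suppress_bone(chest, bone, lo=0, hi=255):
    """Soft-tissue estimate = original chest image minus the predicted
    bone-component image, clamped to the valid intensity range."""
    return [[min(max(c - b, lo), hi) for c, b in zip(crow, brow)]
            for crow, brow in zip(chest, bone)]
```

Clamping matters because an over-predicted bone value would otherwise produce negative intensities in the soft-tissue image.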
Affiliation(s)
- Naoki Matsubara
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Atsushi Teramoto
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan.
- Kuniaki Saito
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Hiroshi Fujita
- Department of Electrical, Electronic & Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-city, Gifu, 501-1194, Japan
Collapse
|
30
|
A method for the automated classification of benign and malignant masses on digital breast tomosynthesis images using machine learning and radiomic features. Radiol Phys Technol 2019; 13:27-36. [PMID: 31686300 DOI: 10.1007/s12194-019-00543-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2019] [Revised: 10/29/2019] [Accepted: 10/29/2019] [Indexed: 10/25/2022]
Abstract
In digital mammography, which is used for the early detection of breast tumors, lesions may be overlooked because they overlap with normal tissue. Digital breast tomosynthesis acquires three-dimensional images, which reduces tissue overlap and makes the shape and distribution of lesions easier to identify. However, it is often difficult to distinguish between benign and malignant breast lesions on these images, and diagnostic accuracy can suffer because interpretation is complicated by the larger number of acquired images. In this study, we developed an automated classification method for diagnosing breast lesions on digital breast tomosynthesis images using radiomics to comprehensively analyze the radiological images. We extracted an analysis area centered on the lesion and calculated 70 radiomic features, including the shape of the lesion, the presence of spiculae, and texture information. The obtained radiomic features were input to four classifiers (support vector machine, random forest, naïve Bayes, and multi-layer perceptron), their accuracies were compared, and the final classification result was taken from the classifier with the highest accuracy. To confirm the effectiveness of the proposed method, we used 24 cases with pathological diagnoses confirmed on biopsy. We also compared classification results with and without dimension reduction using the least absolute shrinkage and selection operator (LASSO). When the support vector machine was used as the classifier, the correct identification rate was 55% for benign tumors and 84% for malignant tumors, the best results among the classifiers. These results indicate that the proposed method may help to more accurately diagnose cases that are difficult to classify on images.
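The workflow this abstract describes (radiomic feature vector, optional LASSO feature selection, four candidate classifiers with the most accurate one retained) can be sketched with scikit-learn. This is an illustrative sketch on synthetic data, not the authors' code: the feature matrix, the LASSO `alpha`, and the cross-validation settings are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 70 radiomic features (lesion shape,
# spiculation, texture, ...) per lesion; labels 0 = benign, 1 = malignant.
X, y = make_classification(n_samples=60, n_features=70, n_informative=8,
                           random_state=0)

classifiers = {
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}

scores = {}
for name, clf in classifiers.items():
    pipe = make_pipeline(
        StandardScaler(),
        SelectFromModel(Lasso(alpha=0.02)),  # LASSO dimension reduction
        clf,
    )
    scores[name] = cross_val_score(pipe, X, y, cv=5).mean()

best = max(scores, key=scores.get)  # keep the most accurate classifier
```

Dropping the `SelectFromModel` step from the pipeline reproduces the "without dimension reduction" comparison mentioned in the abstract.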
|