1
Frewing A, Gibson AB, Robertson R, Urie PM, Corte DD. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology. Arch Pathol Lab Med 2024; 148:603-612. [PMID: 37594900] [DOI: 10.5858/arpa.2022-0460-ra]
Abstract
CONTEXT Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness specific to prostate cancer detection and Gleason grading. OBJECTIVE To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is being used in clinical practice. The challenges to the application of machine learning algorithms in clinical practice are also discussed. DATA SOURCES The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank the papers according to their relevancy. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized based on whether they implemented binary or multi-class classification methods. Data were extracted from papers that contained accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends between classification abilities. CONCLUSIONS It is more difficult to achieve high accuracy metrics for multi-class classification tasks than for binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology is currently not able to replace pathologists but can serve as an important safeguard against misdiagnosis.
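The review above aggregates accuracy, AUC, and κ values across studies. As a generic illustration (not code from the review), Cohen's κ for two raters' binary calls can be computed directly from the observed agreement and the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: inter-rater agreement corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters grading 4 biopsy cores as cancer (1) / benign (0)
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```

κ = 0 indicates chance-level agreement and κ = 1 perfect agreement, which is why review papers prefer it over raw accuracy when comparing graders.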
Affiliation(s)
- Aaryn Frewing
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Alexander B Gibson
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Richard Robertson
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Paul M Urie
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Dennis Della Corte
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
2
Zhu L, Pan J, Mou W, Deng L, Zhu Y, Wang Y, Pareek G, Hyams E, Carneiro BA, Hadfield MJ, El-Deiry WS, Yang T, Tan T, Tong T, Ta N, Zhu Y, Gao Y, Lai Y, Cheng L, Chen R, Xue W. Harnessing artificial intelligence for prostate cancer management. Cell Rep Med 2024; 5:101506. [PMID: 38593808] [PMCID: PMC11031422] [DOI: 10.1016/j.xcrm.2024.101506]
Abstract
Prostate cancer (PCa) is a common malignancy in males. The pathology review of PCa is crucial for clinical decision-making, but traditional pathology review is labor intensive and subjective to some extent. Digital pathology and whole-slide imaging enable the application of artificial intelligence (AI) in pathology. This review highlights the success of AI in detecting and grading PCa, predicting patient outcomes, and identifying molecular subtypes. We propose that AI-based methods could collaborate with pathologists to reduce workload and assist clinicians in formulating treatment recommendations. We also introduce the general process and challenges in developing AI pathology models for PCa. Importantly, we summarize publicly available datasets and open-source codes to facilitate the utilization of existing data and the comparison of the performance of different models to improve future studies.
Affiliation(s)
- Lingxuan Zhu
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China; Department of Etiology and Carcinogenesis, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Changping Laboratory, Beijing, China
- Jiahua Pan
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Weiming Mou
- Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Longxin Deng
- Department of Urology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yinjie Zhu
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Yanqing Wang
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Gyan Pareek
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Minimally Invasive Urology Institute, Providence, RI, USA
- Elias Hyams
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Minimally Invasive Urology Institute, Providence, RI, USA
- Benedito A Carneiro
- The Legorreta Cancer Center at Brown University, Lifespan Cancer Institute, Providence, RI, USA
- Matthew J Hadfield
- The Legorreta Cancer Center at Brown University, Lifespan Cancer Institute, Providence, RI, USA
- Wafik S El-Deiry
- The Legorreta Cancer Center at Brown University, Laboratory of Translational Oncology and Experimental Cancer Therapeutics, Department of Pathology & Laboratory Medicine, The Warren Alpert Medical School of Brown University, The Joint Program in Cancer Biology, Brown University and Lifespan Health System, Division of Hematology/Oncology, The Warren Alpert Medical School of Brown University, Providence, RI, USA
- Tao Yang
- Department of Medical Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fujian 350108, China
- Na Ta
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yan Zhu
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yisha Gao
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yancheng Lai
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China; The First School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Liang Cheng
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Department of Pathology and Laboratory Medicine, Department of Surgery (Urology), Brown University Warren Alpert Medical School, Lifespan Health, and the Legorreta Cancer Center at Brown University, Providence, RI, USA
- Rui Chen
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Wei Xue
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
3
Gambella A, Salvi M, Molinaro L, Patrono D, Cassoni P, Papotti M, Romagnoli R, Molinari F. Improved assessment of donor liver steatosis using Banff consensus recommendations and deep learning algorithms. J Hepatol 2024; 80:495-504. [PMID: 38036009] [DOI: 10.1016/j.jhep.2023.11.013]
Abstract
BACKGROUND & AIMS The Banff Liver Working Group recently published consensus recommendations for steatosis assessment in donor liver biopsy, but few studies reported their use and no automated deep-learning algorithms based on the proposed criteria have been developed so far. We evaluated Banff recommendations on a large monocentric series of donor liver needle biopsies by comparing pathologists' scores with those generated by convolutional neural networks (CNNs) we specifically developed for automated steatosis assessment. METHODS We retrospectively retrieved 292 allograft liver needle biopsies collected between January 2016 and January 2020 and performed steatosis assessment using a former intra-institution method (pre-Banff method) and the newly introduced Banff recommendations. Scores provided by pathologists and CNN models were then compared, and the degree of agreement was measured with the intraclass correlation coefficient (ICC). RESULTS Regarding the pre-Banff method, poor agreement was observed between the pathologist and CNN models for small droplet macrovesicular steatosis (ICC: 0.38), large droplet macrovesicular steatosis (ICC: 0.08), and the final combined score (ICC: 0.16), but none of these reached statistical significance. Interestingly, significantly improved agreement was observed using the Banff approach: ICC was 0.93 for the low-power score (p <0.001), 0.89 for the high-power score (p <0.001), and 0.93 for the final score (p <0.001). Comparing the pre-Banff method with the Banff approach on the same biopsy, pathologist and CNN model assessment showed a mean (±SD) percentage of discrepancy of 26.89 (±22.16) and 1.20 (±5.58), respectively. CONCLUSIONS Our findings support the use of Banff recommendations in daily practice and highlight the need for a granular analysis of their effect on liver transplantation outcomes.
IMPACT AND IMPLICATIONS We developed and validated the first automated deep-learning algorithms for standardized steatosis assessment based on the Banff Liver Working Group consensus recommendations. Our algorithm provides an unbiased automated evaluation of steatosis, which will lay the groundwork for granular analysis of steatosis's short- and long-term effects on organ viability, enabling the identification of clinically relevant steatosis cut-offs for donor organ acceptance. Implementing our algorithm in daily clinical practice will allow for a more efficient and safe allocation of donor organs, improving the post-transplant outcomes of patients.
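The ICC reported in this study comes in several standard variants; as a minimal illustrative sketch (the abstract does not specify which variant the authors used), the one-way random-effects ICC(1,1) can be computed from between-subject and within-subject mean squares:

```python
import numpy as np

def icc_one_way(scores):
    """One-way random-effects ICC(1,1).

    scores: (n_subjects, k_raters) array, e.g. pathologist vs. CNN
    steatosis percentages for the same biopsies (hypothetical data).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    # Between-subject and within-subject mean squares
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((scores - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfect agreement between the two "raters" gives ICC = 1
print(icc_one_way([[10, 10], [30, 30], [60, 60]]))  # 1.0
```

Values near 1 (as for the Banff-based scores above) indicate that rater identity contributes almost no variance relative to true between-biopsy differences.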
Affiliation(s)
- Alessandro Gambella
- Pathology Unit, Department of Medical Sciences, University of Turin, Turin, Italy; Division of Liver and Transplant Pathology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Massimo Salvi
- Department of Electronics and Telecommunications, PolitoBIOMed Lab, Politecnico di Torino, Biolab, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Luca Molinaro
- Division of Pathology, AOU Città Della Salute e Della Scienza di Torino, Turin, Italy
- Damiano Patrono
- General Surgery 2U, Liver Transplant Center, AOU Città Della Salute e Della Scienza di Torino, University of Turin, Turin, Italy
- Paola Cassoni
- Pathology Unit, Department of Medical Sciences, University of Turin, Turin, Italy
- Mauro Papotti
- Division of Pathology, Department of Oncology, University of Turin, Turin, Italy
- Renato Romagnoli
- General Surgery 2U, Liver Transplant Center, AOU Città Della Salute e Della Scienza di Torino, University of Turin, Turin, Italy
- Filippo Molinari
- Department of Electronics and Telecommunications, PolitoBIOMed Lab, Politecnico di Torino, Biolab, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
4
Busby D, Grauer R, Pandav K, Khosla A, Jain P, Menon M, Haines GK, Cordon-Cardo C, Gorin MA, Tewari AK. Applications of artificial intelligence in prostate cancer histopathology. Urol Oncol 2024; 42:37-47. [PMID: 36639335] [DOI: 10.1016/j.urolonc.2022.12.002]
Abstract
The diagnosis of prostate cancer (PCa) depends on the evaluation of core needle biopsies by trained pathologists. Artificial intelligence (AI) derived models have been created to address the challenges posed by pathologists' increasing workload, workforce shortages, and variability in histopathology assessment. These models with histopathological parameters integrated into sophisticated neural networks demonstrate remarkable ability to identify, grade, and predict outcomes for PCa. Though the fully autonomous diagnosis of PCa remains elusive, recently published data suggests that AI has begun to serve as an initial screening tool, an assistant in the form of a real-time interactive interface during histological analysis, and as a second read system to detect false negative diagnoses. Our article aims to describe recent advances and future opportunities for AI in PCa histopathology.
Affiliation(s)
- Dallin Busby
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Ralph Grauer
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Krunal Pandav
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Akshita Khosla
- Department of Internal Medicine, Crozer Chester Medical Center, Philadelphia, PA
- Mani Menon
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- G Kenneth Haines
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Carlos Cordon-Cardo
- Department of Pathology, Icahn School of Medicine at Mount Sinai, New York, NY
- Michael A Gorin
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Ashutosh K Tewari
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
5
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509] [DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNN) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The utmost goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
6
Sun M, Wang J, Gong Q, Huang W. Enhancing gland segmentation in colon histology images using an instance-aware diffusion model. Comput Biol Med 2023; 166:107527. [PMID: 37778210] [DOI: 10.1016/j.compbiomed.2023.107527]
Abstract
In pathological image analysis, determination of gland morphology in histology images of the colon is essential to determine the grade of colon cancer. However, manual segmentation of glands is extremely challenging, and there is a need to develop automatic methods for segmenting gland instances. Recently, owing to its powerful noise-to-image denoising pipeline, the diffusion model has become one of the hot spots in computer vision research and has been explored in the field of image segmentation. In this paper, we propose an instance segmentation method based on the diffusion model that can perform automatic gland instance segmentation. Firstly, we model the instance segmentation process for colon histology images as a denoising process based on a diffusion model. Secondly, to recover details lost during denoising, we use Instance Aware Filters and a multi-scale Mask Branch to construct a global mask instead of predicting only local masks. Thirdly, to improve the distinction between the object and the background, we apply Conditional Encoding to enhance the intermediate features with the original image encoding. To objectively validate the proposed method, we compared several state-of-the-art deep learning models on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset (165 images), the Colorectal Adenocarcinoma Glands (CRAG) dataset (213 images), and the RINGS dataset (1500 images). Our proposed method obtains significantly improved results for CRAG (Object F1 0.853 ± 0.054, Object Dice 0.906 ± 0.043), GlaS Test A (Object F1 0.941 ± 0.039, Object Dice 0.939 ± 0.060), GlaS Test B (Object F1 0.893 ± 0.073, Object Dice 0.889 ± 0.069), and the RINGS dataset (Precision 0.893 ± 0.096, Dice 0.904 ± 0.091). These results demonstrate that our method significantly improves segmentation accuracy over existing approaches.
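The Object Dice reported above is an instance-matched extension of the plain Dice coefficient; for two flattened binary masks, the basic metric (a generic sketch, not the GlaS challenge's evaluation code) is:

```python
def dice_coefficient(pred, target):
    """Dice similarity between two binary masks (flattened 0/1 sequences)."""
    assert len(pred) == len(target)
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks agree perfectly
    return 2.0 * intersection / total if total else 1.0

# Predicted gland mask shares 1 of its 2 foreground pixels with the target
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 2 * 1 / (2 + 1)
```

The object-level variant additionally matches each predicted gland instance to a ground-truth instance before averaging, so fragmented or merged glands are penalized even when pixel overlap is high.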
Affiliation(s)
- Mengxue Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Jiale Wang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Qingtao Gong
- Ulsan Ship and Ocean College, Ludong University, Yantai, 264025, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
7
Salvi M, Manini C, López JI, Fenoglio D, Molinari F. Deep learning approach for accurate prostate cancer identification and stratification using combined immunostaining of cytokeratin, p63, and racemase. Comput Med Imaging Graph 2023; 109:102288. [PMID: 37633031] [DOI: 10.1016/j.compmedimag.2023.102288]
Abstract
BACKGROUND Prostate cancer (PCa) is the most frequently diagnosed cancer in men worldwide, affecting around 1.4 million individuals. Current PCa diagnosis relies on histological analysis of prostate biopsy samples, an activity that is both time-consuming and prone to observer bias. Previous studies have demonstrated that immunostaining of cytokeratin, p63, and racemase can significantly improve the sensitivity and specificity of PCa detection compared to traditional H&E staining. METHODS This study introduces a novel approach that combines diagnosis-specific immunohistochemical (IHC) staining and deep learning techniques to provide reliable stratification of prostate glands. Our approach leverages a customized segmentation network, called K-PPM, that incorporates adaptive kernels and multiscale feature integration to enhance the functional information of IHC. To address the high class imbalance in the dataset, we propose weighted adaptive patch extraction and a specific-class kernel update. RESULTS Our system achieved noteworthy results, with a mean Dice Score Coefficient of 90.36% and a mean absolute error of 1.64% in specific-class gland quantification on whole slides. These findings demonstrate the potential of our system as a valuable support tool for pathologists, reducing workload and decreasing diagnostic inter-observer variability. CONCLUSIONS Our study presents innovative approaches that have broad applicability to other digital pathology areas beyond PCa diagnosis. As a fully automated system, this model can serve as a framework for improving the histological and IHC diagnosis of other types of cancer.
Affiliation(s)
- Massimo Salvi
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Claudia Manini
- Department of Pathology, San Giovanni Bosco Hospital, 10154 Turin, Italy; Department of Sciences of Public Health and Pediatrics, University of Turin, 10124 Turin, Italy
- Jose I López
- Biomarkers in Cancer Group, Biocruces-Bizkaia Health Research Institute, 48903 Barakaldo, Spain
- Dario Fenoglio
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Filippo Molinari
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
8
Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023; 163:107182. [PMID: 37379615] [DOI: 10.1016/j.compbiomed.2023.107182]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners has led to increasing interest in digital pathology research. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming. Furthermore, manual analysis also suffers from intra- and interobserver variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis tasks and providing accurate diagnoses. However, few algorithms have clinical implementations. In this paper, we propose a new deep learning model, the Dense Dilated Multiscale Supervised Attention-Guided (D2MSA) Network, for histopathology image segmentation that makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. The performance of the model has been evaluated for the tasks of gland segmentation and nuclei instance segmentation, both of which are clinically relevant for assessing the state and progress of malignancy. Here, we have used histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
9
Rabilloud N, Allaume P, Acosta O, De Crevoisier R, Bourgade R, Loussouarn D, Rioux-Leclercq N, Khene ZE, Mathieu R, Bensalah K, Pecot T, Kammerer-Jacquet SF. Deep Learning Methodologies Applied to Digital Pathology in Prostate Cancer: A Systematic Review. Diagnostics (Basel) 2023; 13:2676. [PMID: 37627935] [PMCID: PMC10453406] [DOI: 10.3390/diagnostics13162676]
Abstract
Deep learning (DL), often called artificial intelligence (AI), has been increasingly used in pathology thanks to scanners that digitize slides, allowing them to be visualized on monitors and processed with AI algorithms. Many articles have focused on DL applied to prostate cancer (PCa). This systematic review explains the DL applications and their performances for PCa in digital pathology. The literature search was performed using PubMed and Embase to collect relevant articles. Risk of Bias (RoB) was assessed with an adaptation of the QUADAS-2 tool. Out of the 77 included studies, eight focused on pre-processing tasks such as quality assessment or staining normalization. Most articles (n = 53) focused on diagnosis tasks like cancer detection or Gleason grading. Fifteen articles focused on prediction tasks, such as recurrence prediction or genomic correlations. The best performances were reached for cancer detection, with an Area Under the Curve (AUC) of up to 0.99 using algorithms already available for routine diagnosis. A few biases outlined by the RoB analysis are often found in these articles, such as the lack of external validation. This review was registered on PROSPERO under CRD42023418661.
Affiliation(s)
- Noémie Rabilloud
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Pierre Allaume
- Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Oscar Acosta
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Renaud De Crevoisier
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Department of Radiotherapy, Centre Eugène Marquis, 35033 Rennes, France
- Raphael Bourgade
- Department of Pathology, Nantes University Hospital, 44000 Nantes, France
- Nathalie Rioux-Leclercq
- Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Zine-eddine Khene
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Romain Mathieu
- Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Karim Bensalah
- Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Thierry Pecot
- Facility for Artificial Intelligence and Image Analysis (FAIIA), Biosit UAR 3480 CNRS-US18 INSERM, Rennes University, 2 Avenue du Professeur Léon Bernard, 35042 Rennes, France
- Solene-Florence Kammerer-Jacquet
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
10
Fogarty R, Goldgof D, Hall L, Lopez A, Johnson J, Gadara M, Stoyanova R, Punnen S, Pollack A, Pow-Sang J, Balagurunathan Y. Classifying Malignancy in Prostate Glandular Structures from Biopsy Scans with Deep Learning. Cancers (Basel) 2023; 15:2335. [PMID: 37190264] [DOI: 10.3390/cancers15082335]
Abstract
Histopathological classification in prostate cancer remains a challenge, with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into tiles (14,509) and curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning approaches to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context-appropriate for histopathological discrimination with small samples. In our study, the best DL network is able to discriminate cancer grade (GS3/4) from benign with an accuracy of 91%, an F1-score of 0.91, and an AUC of 0.96 in a baseline test (52 patients), while the discrimination of GS3 from GS4 had an accuracy of 68% and an AUC of 0.71 (40 patients).
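The AUC figures above have a direct probabilistic reading: the chance that a randomly chosen cancerous sample is scored higher than a randomly chosen benign one. A minimal Mann-Whitney-style sketch (illustrative only, not the authors' evaluation code, with made-up scores):

```python
def auc_score(labels, scores):
    """AUC as the probability that a positive outranks a negative.

    labels: 1 for cancer, 0 for benign; scores: model outputs.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairwise wins; ties count as half a win
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Scores for two benign (0) and two cancer (1) tiles
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Under this reading, the study's AUC of 0.96 for cancer-vs-benign versus 0.71 for GS3-vs-GS4 quantifies how much harder grade discrimination is than detection.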
Affiliation(s)
- Ryan Fogarty: Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA; Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Dmitry Goldgof: Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Lawrence Hall: Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Alex Lopez: Tissue Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Joseph Johnson: Analytic Microscopy Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Manoj Gadara: Anatomic Pathology Division, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA; Quest Diagnostics, Tampa, FL 33612, USA
- Radka Stoyanova: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Sanoj Punnen: Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Alan Pollack: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Julio Pow-Sang: Genitourinary Cancers, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
11
Karri M, Annavarapu CSR, Acharya UR. Skin lesion segmentation using two-phase cross-domain transfer learning framework. Comput Methods Programs Biomed 2023; 231:107408. [PMID: 36805279] [DOI: 10.1016/j.cmpb.2023.107408]
Abstract
BACKGROUND AND OBJECTIVE Deep learning (DL) models have been used for medical imaging for a long time, but they did not achieve their full potential in the past because of insufficient computing power and scarcity of training data. In recent years, we have seen substantial growth in DL networks because of improved technology and an abundance of data. However, previous studies indicate that even a well-trained DL algorithm may struggle to generalize across data from multiple sources because of domain shifts. Additionally, the ineffectiveness of basic data fusion methods, the complexity of the segmentation target and the low interpretability of current DL models limit their use in clinical decisions. To meet these challenges, we present a new two-phase cross-domain transfer learning system for effective skin lesion segmentation from dermoscopic images. METHODS Our system is based on two significant technical inventions. We examine a two-phase cross-domain transfer learning approach, including model-level and data-level transfer learning, by fine-tuning the system on two datasets, MoleMap and ImageNet. We then present nSknRSUNet, a high-performing DL network, for skin lesion segmentation using broad receptive fields and spatial edge attention feature fusion. We examine the trained model's generalization capabilities on skin lesion segmentation to quantify these two inventions. We cross-examine the model using two skin lesion image datasets, MoleMap and HAM10000, obtained from varied clinical contexts. RESULTS With data-level transfer learning on the HAM10000 dataset, the proposed model obtained a DSC of 94.63% and an accuracy of 99.12%. In cross-examination with data-level transfer learning on the MoleMap dataset, the proposed model obtained a DSC of 93.63% and an accuracy of 97.01%. CONCLUSION Numerous experiments reveal that our system produces excellent performance and improves upon state-of-the-art methods on both qualitative and quantitative measures.
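The DSC figures above are Dice similarity coefficients between the predicted and reference segmentation masks. A minimal sketch of how the DSC is computed, with masks flattened to 0/1 lists for brevity:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two flat binary masks (0/1 values)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # convention: two empty masks are in perfect agreement
    return 2.0 * inter / total if total else 1.0

dsc = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])  # one overlapping pixel
```

The DSC weights the overlap twice relative to the mask sizes, so it is more forgiving of small boundary errors than plain intersection-over-union on the same masks.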
Affiliation(s)
- Meghana Karri: Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India
- Chandra Sekhara Rao Annavarapu: Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India
- U Rajendra Acharya: Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
12
Wu Z, Li X, Zuo J. RAD-UNet: Research on an improved lung nodule semantic segmentation algorithm based on deep learning. Front Oncol 2023; 13:1084096. [PMID: 37035155] [PMCID: PMC10076852] [DOI: 10.3389/fonc.2023.1084096]
Abstract
Objective Because target pixels account for a small proportion of computed tomography (CT) images and closely resemble their surroundings, convolutional neural network-based semantic segmentation models are difficult to develop with deep learning, and feature extraction often leads to under- or oversegmentation of lesions in CT images. In this paper, an improved convolutional neural network segmentation model known as RAD-UNet, based on the U-Net encoder-decoder architecture, is proposed and applied to lung nodule segmentation in CT images. Method The proposed RAD-UNet segmentation model includes several improved components: the U-Net encoder is replaced by a ResNet residual network module; an atrous spatial pyramid pooling module is added after the U-Net encoder; and the U-Net decoder is improved by introducing a cross-fusion feature module with channel and spatial attention. Results The segmentation model was applied to the LIDC dataset and a CT dataset collected by the Affiliated Hospital of Anhui Medical University. The experimental results show that, compared with the existing SegNet [14] and U-Net [15] methods, the proposed model demonstrates better lung lesion segmentation performance. On the two datasets, the mIoU reached 87.76% and 88.13%, and the F1-score reached 93.56% and 93.72%, respectively. Conclusion The experimental results show that the improved RAD-UNet segmentation method achieves more accurate pixel-level segmentation in CT images of lung tumours and identifies lung nodules better than the SegNet [14] and U-Net [15] models. The problems of under- and oversegmentation that occur during segmentation are solved, effectively improving image segmentation performance.
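The mIoU reported above is the intersection-over-union averaged over classes. A small sketch, with masks flattened to label lists and a two-class lesion/background split assumed for illustration:

```python
def iou(pred, truth, cls):
    """Intersection over union for one class over flat label lists."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    # convention: a class absent from both masks scores a perfect 1.0
    return inter / union if union else 1.0

def mean_iou(pred, truth, classes=(0, 1)):
    """mIoU: the per-class IoU averaged over the class set."""
    return sum(iou(pred, truth, c) for c in classes) / len(classes)

m = mean_iou([1, 1, 0, 0], [1, 0, 1, 0])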
Affiliation(s)
- Zezhi Wu: Department of Computer Science, Anhui Medical University, Hefei, Anhui, China
- Xiaoshu Li: Department of Radiology, First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Jianhui Zuo: Department of General Thoracic Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
13
Impact of Stain Normalization on Pathologist Assessment of Prostate Cancer: A Comparative Study. Cancers (Basel) 2023; 15:1503. [PMID: 36900293] [PMCID: PMC10000688] [DOI: 10.3390/cancers15051503]
Abstract
In clinical routine, the quality of whole-slide images plays a key role in the pathologist's diagnosis, and suboptimal staining may be a limiting factor. The stain normalization process helps to solve this problem by standardizing the color appearance of a source image with respect to a target image with optimal chromatic features. The analysis focuses on the evaluation of the following parameters assessed by two experts on original and normalized slides: (i) perceived color quality, (ii) diagnosis for the patient, (iii) diagnostic confidence and (iv) time required for diagnosis. Results show a statistically significant increase in color quality in the normalized images for both experts (p < 0.0001). Regarding prostate cancer assessment, the average times for diagnosis are significantly lower for normalized images than for original ones (first expert: 69.9 s vs. 77.9 s, p < 0.0001; second expert: 37.4 s vs. 52.7 s, p < 0.0001), and a statistically significant increase in diagnostic confidence is also demonstrated. The improvement of poor-quality images and the greater clarity of diagnostically important details in normalized slides demonstrate the potential of stain normalization in the routine practice of prostate cancer assessment.
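Stain normalization maps the color statistics of a source slide onto those of a well-stained target. A simplified per-channel sketch in the spirit of Reinhard-style color transfer; production implementations typically work in the Lab color space or model the H&E stain vectors explicitly (e.g. the Macenko method), so this is only the core idea:

```python
import numpy as np

def normalize_to_target(source, target):
    """Match each channel's mean/std of `source` to those of `target` (uint8 RGB)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    src_mu, src_sd = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mu, tgt_sd = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    # shift/scale every channel so its statistics match the target's
    out = (src - src_mu) / (src_sd + 1e-8) * tgt_sd + tgt_mu
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because only first- and second-order channel statistics are transferred, the method is fast and parameter-free, at the cost of ignoring the stain chemistry that Lab-space and stain-vector methods exploit.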
14
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
15
Oner MU, Ng MY, Giron DM, Chen Xi CE, Yuan Xiang LA, Singh M, Yu W, Sung WK, Wong CF, Lee HK. An AI-assisted tool for efficient prostate cancer diagnosis in low-grade and low-volume cases. Patterns (N Y) 2022; 3:100642. [PMID: 36569545] [PMCID: PMC9768677] [DOI: 10.1016/j.patter.2022.100642]
Abstract
Pathologists diagnose prostate cancer from core needle biopsies. In low-grade and low-volume cases, they look for a few malignant glands out of hundreds within a core, and they may miss a few malignant glands, resulting in repeat biopsies or missed therapeutic opportunities. This study developed a multi-resolution deep-learning pipeline to assist pathologists in detecting malignant glands in core needle biopsies of low-grade and low-volume cases. By analyzing a gland at multiple resolutions, our model exploited morphology and neighborhood information, which were crucial for prostate gland classification. We developed and tested our pipeline on slides from a local cohort of 99 patients in Singapore. In addition, we made the images publicly available; to our knowledge, this is the first digital histopathology dataset of patients of Asian ancestry with prostatic carcinoma. Our multi-resolution classification model achieved an area under the receiver operating characteristic curve (AUROC) of 0.992 (95% confidence interval [CI]: 0.985-0.997) in the external validation study, showing the generalizability of our multi-resolution approach.
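The multi-resolution idea, classifying a gland from concentric fields of view so the model sees both fine morphology and wider neighborhood context, can be sketched as follows; the crop sizes and the naive stride-based subsampling are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def multires_views(image, extents=(32, 64, 128), out_size=32):
    """Concentric center crops at growing extents, subsampled to a common size."""
    h, w = image.shape[:2]
    views = []
    for s in extents:
        # crop an s-by-s window centered on the gland of interest
        y0, x0 = (h - s) // 2, (w - s) // 2
        crop = image[y0:y0 + s, x0:x0 + s]
        step = s // out_size  # naive subsampling in place of proper resizing
        views.append(crop[::step, ::step][:out_size, :out_size])
    return views
```

Each view ends up the same shape, so a classifier can consume them as parallel input channels or branches: the tightest crop preserves glandular morphology while the widest one encodes the surrounding tissue.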
Affiliation(s)
- Mustafa Umit Oner: Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore; School of Computing, National University of Singapore, Singapore 117417, Singapore; Department of Artificial Intelligence Engineering, Bahcesehir University, Istanbul 34353, Turkey
- Mei Ying Ng: Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Danilo Medina Giron: Department of Pathology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Cecilia Ee Chen Xi: Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Louis Ang Yuan Xiang: Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Malay Singh: Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Weimiao Yu: Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore; Institute of Molecular and Cell Biology, Agency for Science, Technology and Research (A∗STAR), Singapore 138673, Singapore
- Wing-Kin Sung: School of Computing, National University of Singapore, Singapore 117417, Singapore; Genome Institute of Singapore, Agency for Science, Technology and Research (A∗STAR), Singapore 138672, Singapore
- Chin Fong Wong: Department of Pathology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Hwee Kuan Lee: Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore; School of Computing, National University of Singapore, Singapore 117417, Singapore; Singapore Eye Research Institute (SERI), Singapore 169856, Singapore; Image and Pervasive Access Lab (IPAL), Singapore 138632, Singapore; Rehabilitation Research Institute of Singapore, Singapore 308232, Singapore; Singapore Institute for Clinical Sciences, Singapore 117609, Singapore
16
Kim I, Kang K, Song Y, Kim TJ. Application of Artificial Intelligence in Pathology: Trends and Challenges. Diagnostics (Basel) 2022; 12:2794. [PMID: 36428854] [PMCID: PMC9688959] [DOI: 10.3390/diagnostics12112794]
Abstract
Given the recent success of artificial intelligence (AI) in computer vision applications, many pathologists anticipate that AI will be able to assist them in a variety of digital pathology tasks. Tremendous advancements in deep learning have enabled image-based diagnosis against the background of digital pathology, and efforts are underway to develop AI-based tools that save pathologists time and eliminate errors. Here, we describe the elements in the development of computational pathology (CPATH), its applicability to AI development, and the challenges it faces, such as algorithm validation and interpretability, computing systems, reimbursement, ethics, and regulations. Furthermore, we present an overview of novel AI-based approaches that could be integrated into pathology laboratory workflows.
Affiliation(s)
- Inho Kim: College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Kyungmin Kang: College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Youngjae Song: College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Tae-Jung Kim: Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Republic of Korea
17
Shi X, Wang L, Li Y, Wu J, Huang H. GCLDNet: Gastric cancer lesion detection network combining level feature aggregation and attention feature fusion. Front Oncol 2022; 12:901475. [PMID: 36106104] [PMCID: PMC9464831] [DOI: 10.3389/fonc.2022.901475]
Abstract
Background Analysis of histopathological slices is the gold standard for diagnosing gastric cancer, but manual identification is time-consuming and relies heavily on the experience of pathologists. Artificial intelligence methods, particularly deep learning, can assist pathologists in finding cancerous tissues and enable automated detection. However, because gastric cancer lesions vary in shape and size and many interfering factors are present, gastric cancer histopathological images (GCHIs) are highly complex, and accurately locating the lesion region is difficult. Traditional deep learning methods cannot effectively extract discriminative features because of their simple decoding, so they cannot detect lesions accurately, and little research has been dedicated to detecting gastric cancer lesions. Methods We propose a gastric cancer lesion detection network (GCLDNet). First, GCLDNet designs a level feature aggregation structure in the decoder, which can effectively fuse deep and shallow features of GCHIs. Second, an attention feature fusion module is introduced to accurately locate the lesion area; it merges attention features of different scales and obtains rich discriminative information focused on the lesion. Finally, focal Tversky loss (FTL) is employed as the loss function to suppress false-negative predictions and mine difficult samples. Results Experimental results on two GCHI datasets, SEED and BOT, show that the Dice scores (DSCs) of GCLDNet are 0.8265 and 0.8991, accuracies (ACCs) are 0.8827 and 0.8949, Jaccard indices (JIs) are 0.7092 and 0.8182, and precisions (PREs) are 0.7820 and 0.8763, respectively. Conclusions Experimental results demonstrate the effectiveness of GCLDNet in the detection of gastric cancer lesions. Compared with other state-of-the-art (SOTA) detection methods, GCLDNet obtains a more satisfactory performance. This research can provide good auxiliary support for pathologists in clinical diagnosis.
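The focal Tversky loss named in the Methods weights false negatives more heavily than false positives and focuses training on hard examples. A minimal sketch over flat probability/target lists, using the commonly cited defaults (alpha on false negatives, beta on false positives, gamma < 1); the exact hyperparameters used by GCLDNet are not stated in the abstract:

```python
def focal_tversky_loss(probs, targets, alpha=0.7, beta=0.3, gamma=0.75):
    """Focal Tversky loss over flat lists of probabilities and 0/1 targets."""
    tp = sum(p * t for p, t in zip(probs, targets))        # soft true positives
    fn = sum((1 - p) * t for p, t in zip(probs, targets))  # soft false negatives
    fp = sum(p * (1 - t) for p, t in zip(probs, targets))  # soft false positives
    tversky = (tp + 1e-8) / (tp + alpha * fn + beta * fp + 1e-8)
    return (1.0 - tversky) ** gamma  # gamma < 1 amplifies the loss on hard examples
```

Setting alpha > beta makes a missed lesion pixel cost more than a spurious one, which is exactly the "suppress false negatives" behavior the abstract describes; gamma < 1 steepens the loss near perfect predictions so easy samples contribute little gradient.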
Affiliation(s)
- Xu Shi: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Long Wang: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Yu Li: Department of Pathology, Chongqing University Cancer Hospital and Chongqing Cancer Institute and Chongqing Cancer Hospital, Chongqing, China
- Jian Wu (corresponding author): Head and Neck Cancer Center, Chongqing University Cancer Hospital and Chongqing Cancer Institute and Chongqing Cancer Hospital, Chongqing, China
- Hong Huang (corresponding author): Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
18
He W, Liu T, Han Y, Ming W, Du J, Liu Y, Yang Y, Wang L, Jiang Z, Wang Y, Yuan J, Cao C. A review: The detection of cancer cells in histopathology based on machine vision. Comput Biol Med 2022; 146:105636. [PMID: 35751182] [DOI: 10.1016/j.compbiomed.2022.105636]
Abstract
Machine vision is employed in defect detection, size measurement, pattern recognition, image fusion, target tracking and 3D reconstruction. Traditional cancer detection is dominated by manual inspection, which wastes time and manpower and relies heavily on the pathologist's skill and work experience. These manual approaches are therefore not convenient for passing on domain knowledge and are not suitable for the rapid development of medical care in the future. Machine vision can iteratively update and learn the domain knowledge of cancer cell pathology detection to achieve automated, high-precision, and consistent detection. Consequently, this paper reviews the use of machine vision to detect cancer cells in histopathology images, as well as the benefits and drawbacks of various detection approaches. First, we review the application of image preprocessing and image segmentation in histopathology for the detection of cancer cells and compare the benefits and drawbacks of different algorithms. Second, for the characteristics of histopathological cancer cell images, we review research progress on shape, color and texture features, among other methods. Furthermore, for the classification of histopathological cancer cell images, the benefits and drawbacks of traditional machine vision approaches and deep learning methods are compared and analyzed. Finally, the above research is discussed and forecast, with the expected future development tendency serving as a guide for future research.
Affiliation(s)
- Wenbin He: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Ting Liu: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongjie Han: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Wuyi Ming: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
- Jinguang Du: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yinxia Liu: Laboratory Medicine of Dongguan Kanghua Hospital, Dongguan, 523808, China
- Yuan Yang: Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120, China
- Leijie Wang: School of Mechanical Engineering, Dongguan University of Technology, Dongguan, 523808, China
- Zhiwen Jiang: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongqiang Wang: Zhengzhou Coal Mining Machinery Group Co., Ltd, Zhengzhou, 450016, China
- Jie Yuan: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Chen Cao: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
19
Gupta S, Kalaivani S, Rajasundaram A, Ameta GK, Oleiwi AK, Dugbakie BN. Prediction Performance of Deep Learning for Colon Cancer Survival Prediction on SEER Data. Biomed Res Int 2022; 2022:1467070. [PMID: 35757479] [PMCID: PMC9225873] [DOI: 10.1155/2022/1467070]
Abstract
Colon and rectal cancers are among the most common cancers globally, and colon cancer is more prevalent in men than in women. Early detection increases the likelihood of survival, and treatment significantly increases the likelihood of eradicating the disease. The Surveillance, Epidemiology, and End Results (SEER) programme is an excellent source of United States cancer statistics: it covers nearly 30% of the US population across various races and geographic locations, and the data are made public via the SEER website once a SEER limited-use data agreement form is submitted and approved. In this study, we investigate colon cancer statistics from the SEER programme, with the objective of creating reliable colon cancer survival and conditional survival prediction algorithms. We present an overview of cancer diagnosis methods and the treatments used, and analyze the prediction performance of multiple deep learning approaches. The performance of multiple deep learning models is thoroughly examined to discover which algorithm surpasses the others, followed by an investigation of the networks' prediction accuracy. The simulation outcomes indicate that automated prediction models can predict colon cancer patient survival: deep autoencoders displayed the best performance, attaining 97% accuracy and a 95% area under the receiver operating characteristic curve (AUC-ROC).
Affiliation(s)
- Surbhi Gupta: Model Institute of Engineering & Technology, Jammu, J&K, India
- S. Kalaivani: School of Information Technology and Engineering, Vellore Institute of Technology (VIT), Vellore, Tamil Nadu, India
- Archana Rajasundaram: Department of Anatomy, Sree Balaji Medical College and Hospital, Chennai, Tamil Nadu, India
- Gaurav Kumar Ameta: Department of Computer Engineering, Indus Institute of Technology & Engineering, Indus University, Ahmedabad, Gujarat, India
- Ahmed Kareem Oleiwi: Department of Computer Technical Engineering, The Islamic University, 54001 Najaf, Iraq
- Betty Nokobi Dugbakie: Department of Chemical Engineering, Kwame Nkrumah University of Science and Technology (KNUST), Ghana
20
Salvi M, De Santi B, Pop B, Bosco M, Giannini V, Regge D, Molinari F, Meiburger KM. Integration of Deep Learning and Active Shape Models for More Accurate Prostate Segmentation in 3D MR Images. J Imaging 2022; 8:133. [PMID: 35621897] [PMCID: PMC9146644] [DOI: 10.3390/jimaging8050133]
Abstract
Magnetic resonance imaging (MRI) has a growing role in the clinical workup of prostate cancer, but manual three-dimensional (3D) segmentation of the prostate is a laborious and time-consuming task. In this scenario, automated algorithms for prostate segmentation can substantially reduce the physicians' workload. In this work, we propose a fully automated hybrid approach for prostate gland segmentation in MR images: an initial segmentation of prostate volumes using a custom-made 3D deep network (VNet-T2), followed by refinement using an Active Shape Model (ASM). While the deep network focuses on the three-dimensional spatial coherence of the shape, the ASM relies on local image information, and this joint effort allows for improved segmentation of the organ contours. Our method is developed and tested on a dataset composed of T2-weighted (T2w) MRI prostatic volumes of 60 male patients. On the test set, the proposed method shows excellent segmentation performance, achieving a mean Dice score and Hausdorff distance of 0.851 and 7.55 mm, respectively. In the future, this algorithm could serve as an enabling technology for the development of computer-aided systems for prostate cancer characterization in MR imaging.
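The Hausdorff distance reported alongside the Dice score measures worst-case contour disagreement: the largest distance from a point on one boundary to the nearest point on the other, symmetrized over both directions. A minimal sketch over 2D point sets; real evaluations run this on the 3D segmentation boundaries, often as the more outlier-robust 95th-percentile variant:

```python
def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two non-empty sets of 2D points."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def directed(xs, ys):
        # worst-case distance from a point in xs to its nearest neighbour in ys
        return max(min(dist(x, y) for y in ys) for x in xs)

    return max(directed(points_a, points_b), directed(points_b, points_a))
```

Because it is a maximum rather than an average, a single stray contour point can dominate the score, which is why it complements the overlap-based Dice metric rather than replacing it.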
Affiliation(s)
- Massimo Salvi: Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Bruno De Santi: Multi-Modality Medical Imaging (M3I), Technical Medical Centre, University of Twente, PB217, 7500 AE Enschede, The Netherlands
- Bianca Pop: Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Martino Bosco: Department of Pathology, Ospedale Michele e Pietro Ferrero, 12060 Verduno, Italy
- Valentina Giannini: Department of Surgical Sciences, University of Turin, 10126 Turin, Italy; Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Daniele Regge: Department of Surgical Sciences, University of Turin, 10126 Turin, Italy; Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Filippo Molinari: Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Kristen M. Meiburger: Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
21
Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, Yang K, Zhu Q, Zhang J, Hong H, Zhang D, Huang K, Cheng L, Shao W. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers (Basel) 2022; 14:1199. [PMID: 35267505] [PMCID: PMC8909166] [DOI: 10.3390/cancers14051199]
Abstract
With the remarkable success of digital histopathology, we have witnessed a rapid expansion of the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems, including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive up-to-date review of deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of deep learning to various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems on pathological image analysis are provided for the convenience of researchers interested in this exciting field.
Affiliation(s)
- Yawen Wu: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Michael Cheng: Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Shuo Huang: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Zongxiang Pei: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Yingli Zuo: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Jianxin Liu: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Kai Yang: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Qi Zhu: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Jie Zhang: Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Honghai Hong: Department of Clinical Laboratory, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510006, China
- Daoqiang Zhang: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Kun Huang: Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Liang Cheng: Departments of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Wei Shao: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China