1. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to the analysis and modeling of medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question about the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We examine this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2. Sweeney PW, Hacker L, Lefebvre TL, Brown EL, Gröhl J, Bohndiek SE. Unsupervised Segmentation of 3D Microvascular Photoacoustic Images Using Deep Generative Learning. Adv Sci (Weinh) 2024:e2402195. [PMID: 38923324] [DOI: 10.1002/advs.202402195]
Abstract
Mesoscopic photoacoustic imaging (PAI) enables label-free visualization of vascular networks in tissues with high contrast and resolution. Segmenting these networks from 3D PAI data and interpreting their physiological and pathological significance is crucial yet challenging due to the time-consuming and error-prone nature of current methods. Deep learning offers a potential solution; however, supervised analysis frameworks typically require human-annotated ground-truth labels. To address this, an unsupervised image-to-image translation deep learning model is introduced, the Vessel Segmentation Generative Adversarial Network (VAN-GAN). VAN-GAN integrates synthetic blood vessel networks that closely resemble real-life anatomy into its training process and learns to replicate the underlying physics of the PAI system in order to segment vasculature from 3D photoacoustic images. Applied to a diverse range of in silico, in vitro, and in vivo data, including patient-derived breast cancer xenograft models and 3D clinical angiograms, VAN-GAN demonstrates its capability to facilitate accurate and unbiased segmentation of 3D vascular networks. By leveraging synthetic data, VAN-GAN reduces the reliance on manual labeling, thus lowering the barrier to entry for high-quality blood vessel segmentation (F1 score: VAN-GAN vs. U-Net = 0.84 vs. 0.87) and enhancing preclinical and clinical research into vascular structure and function.
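The F1 score reported above is, for voxelwise binary masks, identical to the Dice overlap coefficient. A minimal sketch of how such an overlap score might be computed for 3D segmentation volumes (toy arrays, not the paper's data or code):

```python
import numpy as np

def dice_f1(pred: np.ndarray, truth: np.ndarray) -> float:
    """Voxelwise Dice coefficient (equal to the F1 score) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 3D volumes standing in for segmented vascular networks
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True   # 8 predicted voxels
truth[1:3, 1:3, 2:4] = True  # 8 true voxels, 4 of them overlapping
print(dice_f1(pred, truth))  # 2*4 / (8+8) = 0.5
```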
Affiliation(s)
- Paul W Sweeney
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Lina Hacker
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Thierry L Lefebvre
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Emma L Brown
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Janek Gröhl
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Sarah E Bohndiek
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
3. Liu X, Qu L, Xie Z, Zhao J, Shi Y, Song Z. Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation. Biomed Eng Online 2024; 23:52. [PMID: 38851691] [PMCID: PMC11162022] [DOI: 10.1186/s12938-024-01238-8]
Abstract
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
Affiliation(s)
- Xiaoyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Ziyue Xie
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Jiayue Zhao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
4. Liu J, Zhang Y, Wang K, Yavuz MC, Chen X, Yuan Y, Li H, Yang Y, Yuille A, Tang Y, Zhou Z. Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography. Med Image Anal 2024; 97:103226. [PMID: 38852215] [DOI: 10.1016/j.media.2024.103226]
Abstract
The advancement of artificial intelligence (AI) for organ segmentation and tumor detection is propelled by the growing availability of computed tomography (CT) datasets with detailed, per-voxel annotations. However, these AI models often struggle with flexibility for partially annotated datasets and extensibility for new classes due to limitations in the one-hot encoding, architectural design, and learning scheme. To overcome these limitations, we propose a universal, extensible framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes (e.g., organs/tumors). Firstly, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models, enriching semantic encoding compared with one-hot encoding. Secondly, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors and ease the addition of new classes. We train our Universal Model on 3410 CT volumes assembled from 14 publicly available datasets and then test it on 6173 CT volumes from four external datasets. Universal Model achieves first place on six CT tasks in the Medical Segmentation Decathlon (MSD) public leaderboard and leading performance on the Beyond The Cranial Vault (BTCV) dataset. In summary, Universal Model exhibits remarkable computational efficiency (6× faster than other dataset-specific models), demonstrates strong generalization across different hospitals, transfers well to numerous downstream tasks, and more importantly, facilitates the extensibility to new classes while alleviating the catastrophic forgetting of previously learned classes. Codes, models, and datasets are available at https://github.com/ljwztc/CLIP-Driven-Universal-Model.
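The abstract describes a language-driven parameter generator that turns class embeddings into lightweight class-specific heads, so adding a class needs only a new embedding rather than a new output layer. A toy NumPy sketch of that general idea; all names, dimensions, and the single-linear-layer head here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a frozen text embedding per class (e.g. from a
# language model) and shared per-voxel features from a segmentation backbone.
embed_dim, feat_dim, n_voxels = 16, 8, 100
class_embeddings = {"liver": rng.normal(size=embed_dim),
                    "pancreas": rng.normal(size=embed_dim)}
features = rng.normal(size=(n_voxels, feat_dim))  # backbone output

# Parameter generator: maps a class embedding to the weights of a
# lightweight class-specific head (here, a single linear layer).
W_gen = rng.normal(size=(embed_dim, feat_dim)) * 0.1
b_gen = rng.normal(size=feat_dim) * 0.1

def class_head_logits(name: str) -> np.ndarray:
    head_w = class_embeddings[name] @ W_gen + b_gen  # generated head weights
    return features @ head_w                         # per-voxel logits

# One independent sigmoid head per class, so new classes extend the model
# without touching a fixed one-hot output layer.
for name in class_embeddings:
    probs = 1.0 / (1.0 + np.exp(-class_head_logits(name)))
    print(name, probs.shape)
```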
Affiliation(s)
- Jie Liu
- City University of Hong Kong, Hong Kong
- Yixiao Zhang
- Johns Hopkins University, United States of America
- Kang Wang
- University of California, San Francisco, United States of America
- Mehmet Can Yavuz
- University of California, San Francisco, United States of America
- Xiaoxi Chen
- University of Illinois Urbana-Champaign, United States of America
- Yang Yang
- University of California, San Francisco, United States of America
- Alan Yuille
- Johns Hopkins University, United States of America
- Zongwei Zhou
- Johns Hopkins University, United States of America
5. Gao Y, Gonzalez Y, Nwachukwu C, Albuquerque K, Jia X. Predicting treatment plan approval probability for high-dose-rate brachytherapy of cervical cancer using adversarial deep learning. Phys Med Biol 2024; 69:095010. [PMID: 38537309] [PMCID: PMC11023000] [DOI: 10.1088/1361-6560/ad3880]
Abstract
Objective. Predicting the probability that a plan will be approved by the physician is important for automatic treatment planning. Driven by the mathematical foundation of deep learning, which can use a deep neural network to represent functions accurately and flexibly, we developed a deep-learning framework that learns the probability of plan approval for cervical cancer high-dose-rate brachytherapy (HDRBT). Approach. The system consisted of a dose prediction network (DPN) and a plan-approval probability network (PPN). DPN predicts organ-at-risk (OAR) D2cc and CTV D90% of the current fraction from the patient's current anatomy and the prescription dose of HDRBT. PPN outputs the probability of a given plan being acceptable to the physician based on the patient's anatomy and the total dose combining HDRBT and external beam radiotherapy sessions. The networks were trained first separately, for a good initialization, and then jointly via an adversarial process. We collected approved treatment plans of 248 treatment fractions from 63 patients. Among them, 216 plans from 54 patients were employed in a four-fold cross-validation study, and the remaining 32 plans from the other 9 patients were saved for independent testing. Main results. DPN predicted the equivalent dose in 2 Gy fractions for bladder, rectum, and sigmoid D2cc and CTV D90% with relative errors of 11.51% ± 6.92%, 8.23% ± 5.75%, 7.12% ± 6.00%, and 10.16% ± 10.42%, respectively. In a task differentiating clinically approved plans from disapproved plans generated by perturbing doses in ground-truth approved plans by 20%, PPN achieved accuracy, sensitivity, specificity, and area under the curve of 0.70, 0.74, 0.65, and 0.74. Significance. We demonstrated the feasibility of a novel deep-learning framework that predicts the probability of plan approval for HDRBT of cervical cancer, an essential component of automatic treatment planning.
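The "equivalent dose of 2 Gy" above refers to the standard linear-quadratic EQD2 conversion commonly used to combine brachytherapy and external-beam doses. A minimal sketch of that textbook formula; the fraction scheme and alpha/beta value below are illustrative, not taken from the study:

```python
def eqd2(dose_per_fraction: float, n_fractions: int, alpha_beta: float) -> float:
    """Equivalent dose in 2 Gy fractions (linear-quadratic model):
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), with total dose D = n * d."""
    total = n_fractions * dose_per_fraction
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Illustrative example: 4 HDR fractions of 7 Gy, alpha/beta = 3 Gy
# (a value often used for late-responding normal tissue).
print(round(eqd2(7.0, 4, 3.0), 1))  # 28 * (7+3)/(2+3) = 56.0
```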
Affiliation(s)
- Yin Gao
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Yesenia Gonzalez
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Chika Nwachukwu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Kevin Albuquerque
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Xun Jia
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, United States of America
6. Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH. US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation. Comput Biol Med 2024; 172:108282. [PMID: 38503085] [DOI: 10.1016/j.compbiomed.2024.108282]
Abstract
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
Affiliation(s)
- Gang Wang
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China; Department of Bioengineering, Imperial College London, London, UK
- Mingliang Zhou
- School of Computer Science, Chongqing University, Chongqing, China
- Xin Ning
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Prayag Tiwari
- School of Information Technology, Halmstad University, Halmstad, Sweden
- Guang Yang
- Department of Bioengineering, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, London, UK
7. Messeri L, Crockett MJ. Artificial intelligence and illusions of understanding in scientific research. Nature 2024; 627:49-58. [PMID: 38448693] [DOI: 10.1038/s41586-024-07146-0]
Abstract
Scientists are enthusiastically imagining ways in which artificial intelligence (AI) tools might improve research. Why are AI tools so attractive and what are the risks of implementing them across the research pipeline? Here we develop a taxonomy of scientists' visions for AI, observing that their appeal comes from promises to improve productivity and objectivity by overcoming human shortcomings. But proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do. Such illusions obscure the scientific community's ability to see the formation of scientific monocultures, in which some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors. The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less. By analysing the appeal of these tools, we provide a framework for advancing discussions of responsible knowledge production in the age of AI.
Affiliation(s)
- Lisa Messeri
- Department of Anthropology, Yale University, New Haven, CT, USA
- M J Crockett
- Department of Psychology, Princeton University, Princeton, NJ, USA
- University Center for Human Values, Princeton University, Princeton, NJ, USA
8. Zhang X, Yu X, Liang W, Zhang Z, Zhang S, Xu L, Zhang H, Feng Z, Song M, Zhang J, Feng S. Deep learning-based accurate diagnosis and quantitative evaluation of microvascular invasion in hepatocellular carcinoma on whole-slide histopathology images. Cancer Med 2024; 13:e7104. [PMID: 38488408] [PMCID: PMC10941532] [DOI: 10.1002/cam4.7104]
Abstract
BACKGROUND Microvascular invasion (MVI) is an independent prognostic factor associated with early recurrence and poor survival after resection of hepatocellular carcinoma (HCC). However, the traditional pathology approach is relatively subjective, time-consuming, and heterogeneous in the diagnosis of MVI. The aim of this study was to develop a deep-learning model that could significantly improve the efficiency and accuracy of MVI diagnosis. MATERIALS AND METHODS We collected H&E-stained slides from 753 patients with HCC at the First Affiliated Hospital of Zhejiang University. An external validation set of 358 patients was selected from The Cancer Genome Atlas database. The deep-learning model was trained by simulating the method used by pathologists to diagnose MVI. Model performance was evaluated with accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic curve. RESULTS We successfully developed an MVI artificial intelligence diagnostic model (MVI-AIDM), which achieved an accuracy of 94.25% in the independent external validation set. The MVI-positive detection rate of MVI-AIDM was significantly higher than that of the pathologists. Visualization results demonstrated the recognition of micro MVIs that were difficult to differentiate by traditional pathology. Additionally, the model provided automatic quantification of the number of cancer cells and spatial information regarding MVI. CONCLUSIONS We developed a deep-learning diagnostic model that performed well and improved the efficiency and accuracy of MVI diagnosis. The model provided spatial information on MVI that is essential for accurately predicting HCC recurrence after surgery.
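The evaluation metrics named above (accuracy, precision, recall, F1 score) all derive from the four confusion-matrix counts. A minimal, self-contained sketch with toy labels, not the study's data:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy ground truth and predictions (1 = MVI-positive, 0 = MVI-negative)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
print(acc, prec, rec, f1)  # tp=3 fp=1 fn=1 tn=3 -> 0.75 0.75 0.75 0.75
```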
Affiliation(s)
- Xiuming Zhang
- Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Xiaotian Yu
- Department of Computer Science and Technology, Zhejiang University, Hangzhou, P. R. China
- Wenjie Liang
- Department of Radiology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Zhongliang Zhang
- School of Management, Hangzhou Dianzi University, Hangzhou, P. R. China
- Shengxuming Zhang
- Department of Computer Science and Technology, Zhejiang University, Hangzhou, P. R. China
- Linjie Xu
- Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Han Zhang
- Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Zunlei Feng
- Department of Computer Science and Technology, Zhejiang University, Hangzhou, P. R. China
- Mingli Song
- Department of Computer Science and Technology, Zhejiang University, Hangzhou, P. R. China
- Jing Zhang
- Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Shi Feng
- Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
9. Kadirappa R, Deivalakshmi S, Pandeeswari R, Ko SB. DeepHistoNet: A robust deep-learning model for the classification of hepatocellular, lung, and colon carcinoma. Microsc Res Tech 2024; 87:229-256. [PMID: 37750465] [DOI: 10.1002/jemt.24426]
Abstract
In recent years, non-communicable diseases (NCDs) have required more attention, since they require specialized infrastructure for treatment. As per the cancer population registry estimate, nearly 800,000 new cancer cases will be detected yearly. These statistics underscore the need for early cancer detection and diagnosis. Cancer identification can be performed either through manual effort or by computer-aided algorithms. Manual cancer detection is labor-intensive and time-consuming; computer-aided algorithms reduce both the time and the manual effort required. Motivated by the need for a computer-aided diagnosis system for NCDs, we developed a cancer detection methodology. In the present article, a deep learning (DL)-based cancer identification model is developed. In DL-based architectures, features are generally extracted using convolutional neural networks. The proposed attention-guided, densely connected residual, and dilated convolution deep neural network, called DeepHistoNet, acquires precise patterns for classification. Experimentation was carried out on the Kasturba Medical College (KMC), TCGA-LIHC, and LC25000 datasets to prove the robustness of the model. Performance evaluation metrics such as F1 score, sensitivity, specificity, recall, and accuracy validate the experimentation. Experimental results demonstrate that the proposed DeepHistoNet model outperforms other state-of-the-art methods. The proposed model classified the KMC liver dataset with 97.1% accuracy and an area under the receiver operating characteristic curve (AUC-ROC) of 0.9867, the best result obtained compared with state-of-the-art techniques. Performance on the LC25000 dataset was even better: the model achieved 99.8% classification accuracy. To our knowledge, DeepHistoNet is a novel approach for multiple histopathological image classification.
RESEARCH HIGHLIGHTS: A novel robust DL model is proposed for histopathological carcinoma classification. The precise patterns for accurate classification are extracted using densely cross-connected residual blocks. Spatial attention is provided to the network so that spatial information is not lost during feature extraction. DeepHistoNet is trained and evaluated on liver, lung, and colon histopathology datasets to demonstrate its resilience. The results are promising and outperform state-of-the-art techniques. The proposed methodology obtained an AUC-ROC value of 0.9867 with a classification accuracy of 97.1% on the KMC dataset, and classified the LC25000 dataset with 99.8% accuracy, the best results obtained to date.
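The AUC-ROC values reported above can be computed without tracing a curve: AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation, with ties counted half). A small sketch with toy scores, not the paper's outputs:

```python
def auc_roc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg) over all positive/negative pairs."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5  # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

# Toy classifier scores for positive and negative cases
pos = [0.9, 0.8, 0.7]
neg = [0.6, 0.8, 0.2]
print(auc_roc(pos, neg))  # 7.5 wins out of 9 pairs = 0.8333...
```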
Affiliation(s)
| | - Deivalakshmi S
- Department of ECE, National Institute of Technology, Tiruchirappalli, India
| | - Pandeeswari R
- Department of ECE, National Institute of Technology, Tiruchirappalli, India
| | - Seok-Bum Ko
- Department of Electrical and Computer, Biomedical Engineering, University of Saskatchewan, Saskatoon, Canada
| |
10. Yang L, Lei Y, Huang Z, Geng M, Liu Z, Wang B, Luo D, Huang W, Liang D, Pang Z, Hu Z. An interactive nuclei segmentation framework with Voronoi diagrams and weighted convex difference for cervical cancer pathology images. Phys Med Biol 2024; 69:025021. [PMID: 37972412] [DOI: 10.1088/1361-6560/ad0d44]
Abstract
Objective. Nuclei segmentation is crucial for pathologists to accurately classify and grade cancer. However, this process faces significant challenges, such as the complex background structures in pathological images, the high-density distribution of nuclei, and cell adhesion. Approach. In this paper, we present an interactive nuclei segmentation framework that increases the precision of nuclei segmentation. Our framework incorporates expert monitoring to gather as much prior information as possible and accurately segments complex nucleus images through limited pathologist interaction, where only a small portion of the nucleus locations in each image are labeled. The initial contour is determined by the Voronoi diagram generated from the labeled points, which is then input into an optimized weighted convex difference model to regularize partition boundaries in an image. Specifically, we provide theoretical proof of the mathematical model, showing that the objective function monotonically decreases. Furthermore, we explore a postprocessing stage that incorporates histograms, which are simple and easy to handle and prevent arbitrariness and subjectivity in individual choices. Main results. To evaluate our approach, we conduct experiments on both a cervical cancer dataset and a nasopharyngeal cancer dataset. The experimental results demonstrate that our approach achieves competitive performance compared with other methods. Significance. The Voronoi diagram in the paper serves as prior information for the active contour, providing positional information for individual cells. Moreover, the active contour model achieves precise segmentation results while offering mathematical interpretability.
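The Voronoi diagram described above assigns every image location to its nearest labeled nucleus point, giving one cell of the partition per click. A minimal NumPy sketch of such a discrete Voronoi partition from seed points; the coordinates are toy values, not the paper's implementation:

```python
import numpy as np

def voronoi_labels(shape, seeds):
    """Discrete Voronoi partition: each pixel takes the index of its nearest seed."""
    ys, xs = np.indices(shape)
    # Squared Euclidean distance from every pixel to every seed: (n_seeds, H, W)
    dists = [(ys - sy) ** 2 + (xs - sx) ** 2 for sy, sx in seeds]
    return np.argmin(np.stack(dists), axis=0)

# Hypothetical pathologist-clicked nucleus centers on a 10x10 image
seeds = [(2, 2), (2, 7), (7, 4)]
labels = voronoi_labels((10, 10), seeds)
print(labels[2, 2], labels[2, 7], labels[7, 4])  # each seed lies in its own cell: 0 1 2
```

The cell boundaries of this partition are what would seed the initial contours for the convex-difference model.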
Affiliation(s)
- Lin Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- College of Mathematics and Statistics, Henan University, Kaifeng 475004, People's Republic of China
- Yuanyuan Lei
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Mengxiao Geng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- College of Mathematics and Statistics, Henan University, Kaifeng 475004, People's Republic of China
- Zhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Baijie Wang
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Dehong Luo
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Wenting Huang
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Zhifeng Pang
- College of Mathematics and Statistics, Henan University, Kaifeng 475004, People's Republic of China
- Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan 430072, People's Republic of China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
11. Levy JJ, Davis MJ, Chacko RS, Davis MJ, Fu LJ, Goel T, Pamal A, Nafi I, Angirekula A, Suvarna A, Vempati R, Christensen BC, Hayden MS, Vaickus LJ, LeBoeuf MR. Intraoperative margin assessment for basal cell carcinoma with deep learning and histologic tumor mapping to surgical site. NPJ Precis Oncol 2024; 8:2. [PMID: 38172524 PMCID: PMC10764333 DOI: 10.1038/s41698-023-00477-7]
Abstract
Successful treatment of solid cancers relies on complete surgical excision of the tumor, either for definitive treatment or before adjuvant therapy. Intraoperative and postoperative radial sectioning, the most common form of margin assessment, can lead to incomplete excision and increase the risk of recurrence and repeat procedures. Mohs micrographic surgery is associated with complete removal of basal cell and squamous cell carcinoma through real-time margin assessment of 100% of the peripheral and deep margins. Real-time assessment in many tumor types is constrained by tissue size, complexity, and specimen processing/assessment time during general anesthesia. We developed an artificial intelligence platform to reduce tissue preprocessing and histological assessment time through automated grossing recommendations and mapping and orientation of the tumor to the surgical specimen. Using basal cell carcinoma as a model system, our results demonstrate that this approach can address surgical laboratory efficiency bottlenecks for rapid and complete intraoperative margin assessment.
Affiliation(s)
- Joshua J Levy
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA.
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Matthew J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Michael J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Lucy J Fu
- Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA
- Tarushii Goel
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Akash Pamal
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Virginia, Charlottesville, VA, 22903, USA
- Irfan Nafi
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Stanford University, Palo Alto, CA, 94305, USA
- Abhinav Angirekula
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Illinois Urbana-Champaign, Champaign, IL, 61820, USA
- Anish Suvarna
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Ram Vempati
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Brock C Christensen
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Molecular and Systems Biology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Matthew S Hayden
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Louis J Vaickus
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Matthew R LeBoeuf
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
12. Kim B, Oh Y, Wood BJ, Summers RM, Ye JC. C-DARL: Contrastive diffusion adversarial representation learning for label-free blood vessel segmentation. Med Image Anal 2024; 91:103022. [PMID: 37976870 DOI: 10.1016/j.media.2023.103022]
Abstract
Blood vessel segmentation in medical imaging is one of the essential steps for vascular disease diagnosis and interventional planning in a broad spectrum of clinical scenarios in image-based and interventional medicine. Unfortunately, manual annotation of vessel masks is challenging and resource-intensive due to subtle branches and complex structures. To overcome this issue, this paper presents a self-supervised vessel segmentation method, dubbed the contrastive diffusion adversarial representation learning (C-DARL) model. Our model is composed of a diffusion module and a generation module that learns the distribution of multi-domain blood vessel data by generating synthetic vessel images from diffusion latents. Moreover, we employ contrastive learning through a mask-based contrastive loss so that the model can learn more realistic vessel representations. To validate its efficacy, C-DARL is trained using various vessel datasets, including coronary angiograms, abdominal digital subtraction angiograms, and retinal imaging. Experimental results confirm that our model achieves performance improvement over baseline methods with noise robustness, suggesting the effectiveness of C-DARL for vessel segmentation. Our source code is available at https://github.com/boahK/MEDIA_CDARL.
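The exact form of the paper's mask-based contrastive loss is not given in this abstract; a generic InfoNCE-style contrastive objective, sketched below with illustrative names, conveys the core idea of pulling an anchor representation toward its positive and away from negatives:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE-style contrastive loss over L2-normalized feature vectors."""
    unit = lambda v: v / np.linalg.norm(v)
    a, p = unit(np.asarray(anchor, float)), unit(np.asarray(positive, float))
    negs = np.stack([unit(np.asarray(n, float)) for n in negatives])
    # Cosine similarities scaled by temperature; positive comes first
    logits = np.concatenate([[a @ p], negs @ a]) / tau
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                      # small when the positive dominates
```

The loss shrinks as the anchor aligns with its positive and decorrelates from the negatives; C-DARL's actual loss additionally conditions on vessel masks.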
Affiliation(s)
- Boah Kim
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Yujin Oh
- Kim Jaechul Graduate School of AI, Korea Advanced Institute of Science & Technology (KAIST), Daejeon, Republic of Korea
- Bradford J Wood
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Ronald M Summers
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Jong Chul Ye
- Kim Jaechul Graduate School of AI, Korea Advanced Institute of Science & Technology (KAIST), Daejeon, Republic of Korea
13. Wagner SJ, Matek C, Shetab Boushehri S, Boxberg M, Lamm L, Sadafi A, Winter DJE, Marr C, Peng T. Built to Last? Reproducibility and Reusability of Deep Learning Algorithms in Computational Pathology. Mod Pathol 2024; 37:100350. [PMID: 37827448 DOI: 10.1016/j.modpat.2023.100350]
Abstract
Recent progress in computational pathology has been driven by deep learning. While code and data availability are essential to reproduce findings from preceding publications, ensuring a deep learning model's reusability is more challenging. For that, the codebase should be well documented and easy to integrate into existing workflows, and models should be robust to noise and generalizable to data from different sources. Strikingly, only a few computational pathology algorithms have been reused by other researchers so far, let alone employed in a clinical setting. To assess the current state of reproducibility and reusability of computational pathology algorithms, we evaluated peer-reviewed articles available in PubMed, published between January 2019 and March 2021, in 5 use cases: stain normalization; tissue type segmentation; evaluation of cell-level features; genetic alteration prediction; and inference of grading, staging, and prognostic information. We compiled criteria for data and code availability and statistical result analysis and assessed them in 160 publications. We found that only one-quarter (41 of 160 publications) made code publicly available. Among these 41 studies, three-quarters (30 of 41) analyzed their results statistically, half of them (20 of 41) released their trained model weights, and approximately a third (16 of 41) used an independent cohort for evaluation. Our review is intended for both pathologists interested in deep learning and researchers applying algorithms to computational pathology challenges. We provide a detailed overview of publications with published code in the field, list reusable data handling tools, and provide criteria for reproducibility and reusability.
Affiliation(s)
- Sophia J Wagner
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Computation, Information and Technology, Technical University of Munich, Garching, Germany
- Christian Matek
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Institute of Pathology, University Hospital Erlangen, Erlangen, Germany
- Sayedali Shetab Boushehri
- School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Data & Analytics (D&A), Roche Pharma Research and Early Development (pRED), Roche Innovation Center Munich, Germany
- Melanie Boxberg
- Institute of Pathology, Technical University Munich, Munich, Germany; Institute of Pathology Munich-North, Munich, Germany
- Lorenz Lamm
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Helmholtz Pioneer Campus, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Ario Sadafi
- School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Dominik J E Winter
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Life Sciences, Technical University of Munich, Weihenstephan, Germany
- Carsten Marr
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Tingying Peng
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
14. Xu Z, Lim S, Lu Y, Jung SW. Reversed domain adaptation for nuclei segmentation-based pathological image classification. Comput Biol Med 2024; 168:107726. [PMID: 37984206 DOI: 10.1016/j.compbiomed.2023.107726]
Abstract
Despite the fact that digital pathology has provided a new paradigm for modern medicine, the insufficiency of annotations for training remains a significant challenge. Due to the weak generalization abilities of deep-learning models, their performance is notably constrained in domains without sufficient annotations. Our research aims to enhance the model's generalization ability through domain adaptation, increasing the prediction ability for the target domain data while only using the source domain labels for training. To further enhance classification performance, we introduce nuclei segmentation to provide the classifier with more diagnostically valuable nuclei information. In contrast to the general domain adaptation that generates source-like results in the target domain, we propose a reversed domain adaptation strategy that generates target-like results in the source domain, enabling the classification model to be more robust to inaccurate segmentation results. The proposed reversed unsupervised domain adaptation can effectively reduce the disparities in nuclei segmentation between the source and target domains without any target domain labels, leading to improved image classification performance in the target domain. The whole framework is designed in a unified manner so that the segmentation and classification modules can be trained jointly. Extensive experiments demonstrate that the proposed method significantly improves the classification performance in the target domain and outperforms existing general domain adaptation methods.
Affiliation(s)
- Zhixin Xu
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Seohoon Lim
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Yucheng Lu
- Education and Research Center for Socialware IT, Korea University, Seoul, Republic of Korea
- Seung-Won Jung
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
15. Deshpande S, Dawood M, Minhas F, Rajpoot N. SynCLay: Interactive synthesis of histology images from bespoke cellular layouts. Med Image Anal 2024; 91:102995. [PMID: 37898050 DOI: 10.1016/j.media.2023.102995]
Abstract
Automated synthesis of histology images has several potential applications in computational pathology. However, no existing method can generate realistic tissue images with a bespoke cellular layout or user-defined histology parameters. In this work, we propose a novel framework called SynCLay (Synthesis from Cellular Layouts) that can construct realistic and high-quality histology images from user-defined cellular layouts along with annotated cellular boundaries. Tissue image generation based on bespoke cellular layouts through the proposed framework allows users to generate different histological patterns from arbitrary topological arrangement of different types of cells (e.g., neutrophils, lymphocytes, epithelial cells and others). SynCLay generated synthetic images can be helpful in studying the role of different types of cells present in the tumor microenvironment. Additionally, they can assist in balancing the distribution of cellular counts in tissue images for designing accurate cellular composition predictors by minimizing the effects of data imbalance. We train SynCLay in an adversarial manner and integrate a nuclear segmentation and classification model in its training to refine nuclear structures and generate nuclear masks in conjunction with synthetic images. During inference, we combine the model with another parametric model for generating colon images and associated cellular counts as annotations given the grade of differentiation and cellularities (cell densities) of different cells. We assess the generated images quantitatively using the Frechet Inception Distance and report on feedback from trained pathologists who assigned realism scores to a set of images generated by the framework. The average realism score across all pathologists for synthetic images was as high as that for the real images. 
Moreover, with assistance from pathologists, we showcase the ability of the generated images to accurately differentiate between benign and malignant tumors, thus reinforcing their reliability. We demonstrate that the proposed framework can be used to add new cells to a tissue image and alter cellular positions. We also show that augmenting limited real data with the synthetic data generated by our framework can significantly boost prediction performance on the cellular composition prediction task. The implementation of the proposed SynCLay framework is available at https://github.com/Srijay/SynCLay-Framework.
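The Fréchet Inception Distance used for the quantitative assessment above has a standard closed form over feature statistics; a minimal sketch (function name ours, assuming feature vectors have already been extracted from an Inception-style network):

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors (rows = samples)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```

Lower is better; identical feature distributions give a score near zero.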
Affiliation(s)
- Srijay Deshpande
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK.
- Muhammad Dawood
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire, UK; Histofy Ltd, Birmingham, UK
16. Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). J Healthc Inform Res 2023; 7:387-432. [PMID: 37927373 PMCID: PMC10620373 DOI: 10.1007/s41666-023-00144-3]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted in which tumor lesions are detected and localized on images. This is a narrative review covering five different image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, making it different from other review studies where fewer image modalities are reviewed. The goal is to have the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, readily available in one place for future studies. Each modality has pros and cons; for example, mammograms may give a high false-positive rate for radiographically dense breasts, ultrasound's low soft-tissue contrast can result in early-stage false detections, and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images across the five image modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammogram images, the maximum accuracy achieved was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96% using the Xception architecture. Histopathological and ultrasound images achieved higher accuracy (around 99%, using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG) than the other modalities for one or more of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNNs, and a higher number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multiple-image-modality approach could be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna, 9203 Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
17. Wu H, Wang Z, Zhao Z, Chen C, Qin J. Continual Nuclei Segmentation via Prototype-Wise Relation Distillation and Contrastive Learning. IEEE Trans Med Imaging 2023; 42:3794-3804. [PMID: 37610902 DOI: 10.1109/tmi.2023.3307892]
Abstract
Deep learning models have achieved remarkable success in multi-type nuclei segmentation. These models are mostly trained once with the full annotation of all types of nuclei available, but they lack the ability to continually learn new classes due to the problem of catastrophic forgetting. In this paper, we study the practical and important class-incremental continual learning problem, where the model is incrementally updated to new classes without access to previous data. We propose a novel continual nuclei segmentation method that avoids forgetting knowledge of old classes and facilitates the learning of new classes by achieving feature-level knowledge distillation with prototype-wise relation distillation and contrastive learning. Concretely, prototype-wise relation distillation imposes constraints on the inter-class relation similarity, encouraging the encoder to extract similar class distributions for old classes in the feature space. Prototype-wise contrastive learning with a hard sampling strategy enhances the intra-class compactness and inter-class separability of features, improving the performance on both old and new classes. Experiments on two multi-type nuclei segmentation benchmarks, i.e., MoNuSAC and CoNSeP, demonstrate the effectiveness of our method with superior performance over many competitive methods. Codes are available at https://github.com/zzw-szu/CoNuSeg.
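A minimal sketch of the prototype-wise relation idea described above (function names are ours; the paper's actual formulation may differ): class prototypes are mean feature vectors, inter-class relations are their pairwise cosine similarities, and the distillation term penalizes relational drift between the old and updated encoders:

```python
import numpy as np

def class_prototypes(feats, labels, n_classes):
    """Prototype per class: the mean feature vector of that class's samples."""
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def relation_matrix(protos):
    """Inter-class relation: cosine similarity between every pair of prototypes."""
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return p @ p.T

def relation_distillation_loss(protos_old, protos_new):
    """Penalize drift of inter-class relations between old and updated encoders."""
    diff = relation_matrix(protos_old) - relation_matrix(protos_new)
    return float((diff ** 2).mean())
```

During incremental updates, keeping this loss small would preserve how old classes relate to one another even as features shift to accommodate new classes.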
18. Rasheed A, Shirazi SH, Umar AI, Shahzad M, Yousaf W, Khan Z. Cervical cell's nucleus segmentation through an improved UNet architecture. PLoS One 2023; 18:e0283568. [PMID: 37788295 PMCID: PMC10547184 DOI: 10.1371/journal.pone.0283568]
Abstract
Precise segmentation of the nucleus is vital for computer-aided diagnosis (CAD) in cervical cytology. Automated delineation of the cervical nucleus faces notorious challenges due to clumped cells, color variation, noise, and fuzzy boundaries. Owing to its standout performance in medical image analysis, deep learning has gained attention over other techniques. We have proposed a deep learning model, namely C-UNet (Cervical-UNet), to segment cervical nuclei from overlapped, fuzzy, and blurred cervical cell smear images. Cross-scale feature integration based on a bi-directional feature pyramid network (BiFPN) and a wide context unit are used in the encoder of the classic UNet architecture to learn spatial and local features. The decoder of the improved network has two inter-connected decoders that mutually optimize and integrate these features to produce segmentation masks. Each component of the proposed C-UNet is extensively evaluated to judge its effectiveness on a complex cervical cell dataset. Different data augmentation techniques were employed to enhance the proposed model's training. Experimental results have shown that the proposed model outperformed extant models, i.e., CGAN (Conditional Generative Adversarial Network), DeepLabv3, Mask-RCNN (Region-Based Convolutional Neural Network), and FCN (Fully Convolutional Network), on the dataset employed in this study and on the ISBI-2014 (International Symposium on Biomedical Imaging 2014) and ISBI-2015 datasets. The C-UNet achieved an object-level accuracy of 93%, pixel-level accuracy of 92.56%, object-level recall of 95.32%, pixel-level recall of 92.27%, Dice coefficient of 93.12%, and F1-score of 94.96% on the complex cervical image dataset.
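The Dice coefficient reported above has a standard definition for binary masks, 2|A∩B| / (|A|+|B|); a minimal sketch (function name ours):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction scores 1.0 and disjoint masks score near 0, which is why Dice is preferred over plain pixel accuracy for small structures such as nuclei.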
Affiliation(s)
- Assad Rasheed
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Syed Hamad Shirazi
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Arif Iqbal Umar
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Muhammad Shahzad
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Waqas Yousaf
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Zakir Khan
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
19. Levy JJ, Chan N, Marotti JD, Kerr DA, Gutmann EJ, Glass RE, Dodge CP, Suriawinata AA, Christensen BC, Liu X, Vaickus LJ. Large-scale validation study of an improved semiautonomous urine cytology assessment tool: AutoParis-X. Cancer Cytopathol 2023; 131:637-654. [PMID: 37377320 DOI: 10.1002/cncy.22732]
Abstract
BACKGROUND: Adopting a computational approach for the assessment of urine cytology specimens has the potential to improve the efficiency, accuracy, and reliability of bladder cancer screening, which has heretofore relied on semisubjective manual assessment methods. As rigorous, quantitative criteria and guidelines have been introduced for improving screening practices (e.g., The Paris System for Reporting Urinary Cytology), algorithms to emulate semiautonomous diagnostic decision-making have lagged behind, in part because of the complex and nuanced nature of urine cytology reporting. METHODS: In this study, the authors report on the development and large-scale validation of a deep-learning tool, AutoParis-X, which can facilitate rapid, semiautonomous examination of urine cytology specimens. RESULTS: The results of this large-scale, retrospective validation study indicate that AutoParis-X can accurately determine urothelial cell atypia and aggregate a wide variety of cell-related and cluster-related information across a slide to yield an atypia burden score, which correlates closely with overall specimen atypia and is predictive of Paris system diagnostic categories. Importantly, this approach accounts for challenges associated with the assessment of overlapping cell cluster borders, which improves the ability to predict specimen atypia and accurately estimate the nuclear-to-cytoplasm ratio for cells in these clusters. CONCLUSIONS: The authors developed a publicly available, open-source, interactive web application that features a simple, easy-to-use display for examining urine cytology whole-slide images and determining the level of atypia in specific cells, flagging the most abnormal cells for pathologist review. The accuracy of AutoParis-X (and other semiautomated digital pathology systems) indicates that these technologies are approaching clinical readiness and necessitates full evaluation of these algorithms in head-to-head clinical trials.
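The nuclear-to-cytoplasm ratio mentioned above can be illustrated with a toy mask-based computation (a simplification of whatever AutoParis-X actually computes; the function name and masks are ours):

```python
import numpy as np

def nc_ratio(nucleus_mask, cell_mask):
    """Nuclear-to-cytoplasm (N:C) area ratio from binary segmentation masks."""
    nuc = np.count_nonzero(nucleus_mask)
    cyto = np.count_nonzero(cell_mask) - nuc    # cytoplasm = cell area minus nucleus
    return nuc / cyto if cyto > 0 else float("inf")

cell = np.ones((4, 4), dtype=bool)              # toy 16-pixel cell
nucleus = np.zeros((4, 4), dtype=bool)
nucleus[1:3, 1:3] = True                        # toy 4-pixel nucleus
```

Elevated N:C ratios are a classic marker of urothelial atypia, which is why accurate delineation of overlapping cluster borders matters for this estimate.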
Affiliation(s)
- Joshua J Levy
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Department of Dermatology, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
| | - Natt Chan
- Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Jonathan D Marotti
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Darcy A Kerr
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Edward J Gutmann
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Ryan E Glass
- Department of Pathology, University of Pittsburgh Medical Center East, Pittsburgh, Pennsylvania, USA
- Arief A Suriawinata
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Brock C Christensen
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Department of Molecular and Systems Biology, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Department of Community and Family Medicine, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Xiaoying Liu
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Louis J Vaickus
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
20
Tasnadi E, Sliz-Nagy A, Horvath P. Structure preserving adversarial generation of labeled training samples for single-cell segmentation. CELL REPORTS METHODS 2023; 3:100592. [PMID: 37725984 PMCID: PMC10545934 DOI: 10.1016/j.crmeth.2023.100592] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 05/09/2023] [Accepted: 08/24/2023] [Indexed: 09/21/2023]
Abstract
We introduce a generative data augmentation strategy to improve the accuracy of instance segmentation of microscopy data for complex tissue structures. Our pipeline uses regular and conditional generative adversarial networks (GANs) for image-to-image translation to construct synthetic microscopy images along with their corresponding masks to simulate the distribution and shape of the objects and their appearance. The synthetic samples are then used for training an instance segmentation network (for example, StarDist or Cellpose). We show on two single-cell-resolution tissue datasets that our method improves the accuracy of downstream instance segmentation tasks compared with traditional training strategies using either the raw data or basic augmentations. We also compare the quality of the object masks with those generated by a traditional cell population simulation method, finding that our synthesized masks are closer to the ground truth considering Fréchet inception distances.
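The authors compare synthetic masks to the ground truth using Fréchet inception distances. A minimal sketch of the underlying Fréchet (2-Wasserstein) distance between two Gaussians fit to feature sets is shown below; note that the real FID is computed on Inception-network embeddings with full covariance matrices, whereas this simplified toy version assumes diagonal covariances, and the function name `fid_diagonal` and the random feature sets are illustrative assumptions, not from the paper.

```python
import numpy as np

def fid_diagonal(feats_a, feats_b):
    """Frechet distance between Gaussians fit to two feature sets,
    simplified to diagonal covariances:
        d^2 = ||mu_a - mu_b||^2 + sum_i (sd_a_i - sd_b_i)^2
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    sd_a, sd_b = feats_a.std(axis=0), feats_b.std(axis=0)
    return float(np.sum((mu_a - mu_b) ** 2) + np.sum((sd_a - sd_b) ** 2))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 8))    # stand-in for Inception features
close = rng.normal(0.05, 1.0, size=(1000, 8))  # a "good" synthetic set
far = rng.normal(1.0, 2.0, size=(1000, 8))     # a "poor" synthetic set

# Lower distance = synthetic features closer to the real distribution
assert fid_diagonal(real, close) < fid_diagonal(real, far)
```

A set identical to the reference gives distance 0; distributions that drift in mean or spread score higher, which is why the metric is used to rank generated masks against the ground truth.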
Affiliation(s)
- Ervin Tasnadi
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary; Doctoral School of Computer Science, University of Szeged, 6720 Szeged, Hungary; Single-Cell Technologies, Ltd, 6726 Szeged, Hungary.
- Alex Sliz-Nagy
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary
- Peter Horvath
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary; Single-Cell Technologies, Ltd, 6726 Szeged, Hungary; Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014 Helsinki, Finland.
21
Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897 PMCID: PMC10622844 DOI: 10.1016/j.jpi.2023.100335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Revised: 07/17/2023] [Accepted: 07/19/2023] [Indexed: 11/07/2023] Open
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating storing, viewing, processing, and sharing digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology applications, such as automated image analysis, to extract diagnostic information from WSI for improving pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features have diverse applications in several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
22
Drioua WR, Benamrane N, Sais L. Breast Cancer Histopathological Images Segmentation Using Deep Learning. SENSORS (BASEL, SWITZERLAND) 2023; 23:7318. [PMID: 37687772 PMCID: PMC10490494 DOI: 10.3390/s23177318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 08/10/2023] [Accepted: 08/18/2023] [Indexed: 09/10/2023]
Abstract
Hospitals generate a significant amount of medical data every day, which constitutes a very rich database for research. Today, this database remains largely unexploited because the images require annotation, which is a costly and difficult task. Thus, the use of an unsupervised segmentation method could facilitate the process. In this article, we propose two approaches for the semantic segmentation of breast cancer histopathology images: on the one hand, an autoencoder architecture for unsupervised segmentation, and on the other hand, an improved U-Net architecture for supervised segmentation. We evaluate these models on a public dataset of histological images of breast cancer. In addition, the performance of our segmentation methods is measured using several evaluation metrics, such as accuracy, recall, precision, and F1 score. The results are competitive with those of other modern methods.
Affiliation(s)
- Wafaa Rajaa Drioua
- Laboratoire SIMPA, Département d’Informatique, Université des Sciences et de la Technologie d’Oran Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria;
- Nacéra Benamrane
- Laboratoire SIMPA, Département d’Informatique, Université des Sciences et de la Technologie d’Oran Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria;
- Lakhdar Sais
- Centre de Recherche en Informatique de Lens, CRIL, CNRS, Université d’Artois, 62307 Lens, France;
23
Brémond-Martin C, Simon-Chane C, Clouchoux C, Histace A. Brain organoid data synthesis and evaluation. Front Neurosci 2023; 17:1220172. [PMID: 37650105 PMCID: PMC10465177 DOI: 10.3389/fnins.2023.1220172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Accepted: 07/24/2023] [Indexed: 09/01/2023] Open
Abstract
Introduction Datasets containing only a few images are common in the biomedical field. This poses a global challenge for the development of robust deep-learning analysis tools, which require a large number of images. Generative Adversarial Networks (GANs) are an increasingly used solution for expanding small datasets, specifically in the biomedical domain. However, the validation of synthetic images by metrics is still controversial, and psychovisual evaluations are time consuming. Methods We augment a small brain organoid bright-field database of 40 images using several GAN optimizations. We compare these synthetic images to the original dataset using similarity metrics, and we perform a psychovisual evaluation of the 240 generated images. Eight biological experts labeled the full dataset (280 images) as synthetic or natural using custom-built software. We calculate the error rate per loss optimization as well as the hesitation time. We then compare these results to those provided by the similarity metrics. We test the psychovalidated images in a training step of a segmentation task. Results and discussion The generated images are considered as natural as the original dataset, with no increase in the experts' hesitation time. Experts are particularly misled by perceptual and Wasserstein loss optimizations; according to the metrics, these optimizations render the most qualitative images and those most similar to the original dataset. We observe no strong correlation, but some links between certain metrics and the psychovisual decision, depending on the kind of generation. Particular blur-metric combinations could perhaps replace the psychovisual evaluation. Segmentation tasks that use the most psychovalidated images are the most accurate.
Affiliation(s)
- Clara Brémond-Martin
- ETIS Laboratory UMR 8051 (CY Cergy Paris Université, ENSEA, CNRS), Cergy, France
- Witsee, Neoxia, Paris, France
- Camille Simon-Chane
- ETIS Laboratory UMR 8051 (CY Cergy Paris Université, ENSEA, CNRS), Cergy, France
- Aymeric Histace
- ETIS Laboratory UMR 8051 (CY Cergy Paris Université, ENSEA, CNRS), Cergy, France
24
Xu H, Wu A, Ren H, Yu C, Liu G, Liu L. Classification of colorectal cancer consensus molecular subtypes using attention-based multi-instance learning network on whole-slide images. Acta Histochem 2023; 125:152057. [PMID: 37300984 DOI: 10.1016/j.acthis.2023.152057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 06/01/2023] [Indexed: 06/12/2023]
Abstract
Colorectal cancer (CRC) is the third most common and second most lethal cancer globally. It is highly heterogeneous, with different clinical-pathological characteristics, prognostic status, and therapy responses. Thus, the precise diagnosis of CRC subtypes is of great significance for improving the prognosis and survival of CRC patients. Nowadays, the most commonly used molecular-level CRC classification system is the Consensus Molecular Subtypes (CMSs). In this study, we applied a weakly supervised deep learning method, attention-based multi-instance learning (MIL), to formalin-fixed paraffin-embedded (FFPE) whole-slide images (WSIs) to distinguish the CMS1 subtype from the CMS2, CMS3, and CMS4 subtypes, as well as to distinguish CMS4 from the CMS1, CMS2, and CMS3 subtypes. The advantage of MIL is that it trains on bags of tiled instances with bag-level labels only. Our experiment was performed on 1218 WSIs obtained from The Cancer Genome Atlas (TCGA). We constructed three convolutional neural network-based structures for model training and evaluated the ability of the max-pooling and mean-pooling operators to aggregate bag-level scores. The results showed that the 3-layer model achieved the best performance in both comparison groups. When comparing CMS1 with CMS234, max-pooling reached an ACC of 83.86% and the mean-pooling operator reached an AUC of 0.731; when comparing CMS4 with CMS123, mean-pooling reached an ACC of 74.26% and max-pooling reached an AUC of 0.609. Our results imply that WSIs can be used to classify CMSs and that manual pixel-level annotation is not a necessity for computational pathology image analysis.
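The max- versus mean-pooling comparison described above reduces to how per-tile (instance) scores are aggregated into one slide-level (bag) score. A minimal sketch follows; the function name and the toy tile scores are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def aggregate_bag_score(instance_scores, pooling="mean"):
    """Aggregate per-tile (instance) scores into one slide-level (bag) score.

    In multi-instance learning for WSIs, each tile receives a score from a
    CNN, but only the bag-level (slide-level) label is available.
    """
    s = np.asarray(instance_scores, dtype=float)
    if pooling == "max":
        return float(s.max())   # bag is positive if its most suspicious tile is
    if pooling == "mean":
        return float(s.mean())  # bag score reflects overall tile evidence
    raise ValueError(f"unknown pooling: {pooling}")

# Toy slide: mostly benign-looking tiles plus a few highly atypical ones
tile_scores = [0.05, 0.10, 0.08, 0.95, 0.90]
max_score = aggregate_bag_score(tile_scores, "max")    # 0.95
mean_score = aggregate_bag_score(tile_scores, "mean")  # about 0.416
```

Max-pooling is sensitive to a single strongly positive tile, while mean-pooling dilutes rare evidence across all tiles, which is consistent with the two operators performing differently in the two comparison groups.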
Affiliation(s)
- Huilin Xu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China
- Aoshen Wu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China
- He Ren
- Faculty of Medical Instrumentation, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China
- Chenghang Yu
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research), Key Laboratory of Parasite and Vector Biology, National Health Commission of the People's Republic of China, WHO Collaborating Center for Tropical Diseases, Shanghai 200025, China
- Gang Liu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China.
- Lei Liu
- Institutes of Biomedical Sciences and Intelligent Medicine Institute, Fudan University, Shanghai 200032, China.
25
Wang H, Fu T, Du Y, Gao W, Huang K, Liu Z, Chandak P, Liu S, Van Katwyk P, Deac A, Anandkumar A, Bergen K, Gomes CP, Ho S, Kohli P, Lasenby J, Leskovec J, Liu TY, Manrai A, Marks D, Ramsundar B, Song L, Sun J, Tang J, Veličković P, Welling M, Zhang L, Coley CW, Bengio Y, Zitnik M. Scientific discovery in the age of artificial intelligence. Nature 2023; 620:47-60. [PMID: 37532811 DOI: 10.1038/s41586-023-06221-2] [Citation(s) in RCA: 76] [Impact Index Per Article: 76.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 05/16/2023] [Indexed: 08/04/2023]
Abstract
Artificial intelligence (AI) is being increasingly integrated into scientific discovery to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain insights that might not have been possible using traditional scientific methods alone. Here we examine breakthroughs over the past decade that include self-supervised learning, which allows models to be trained on vast amounts of unlabelled data, and geometric deep learning, which leverages knowledge about the structure of scientific data to enhance model accuracy and efficiency. Generative AI methods can create designs, such as small-molecule drugs and proteins, by analysing diverse data modalities, including images and sequences. We discuss how these methods can help scientists throughout the scientific process and the central issues that remain despite such advances. Both developers and users of AI tools need a better understanding of when such approaches need improvement, and challenges posed by poor data quality and stewardship remain. These issues cut across scientific disciplines and require developing foundational algorithmic approaches that can contribute to scientific understanding or acquire it autonomously, making them critical areas of focus for AI innovation.
Affiliation(s)
- Hanchen Wang
- Department of Engineering, University of Cambridge, Cambridge, UK
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- Department of Research and Early Development, Genentech Inc, South San Francisco, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Tianfan Fu
- Department of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Yuanqi Du
- Department of Computer Science, Cornell University, Ithaca, NY, USA
- Wenhao Gao
- Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Kexin Huang
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Ziming Liu
- Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Payal Chandak
- Harvard-MIT Program in Health Sciences and Technology, Cambridge, MA, USA
- Shengchao Liu
- Mila - Quebec AI Institute, Montreal, Quebec, Canada
- Université de Montréal, Montreal, Quebec, Canada
- Peter Van Katwyk
- Department of Earth, Environmental and Planetary Sciences, Brown University, Providence, RI, USA
- Data Science Institute, Brown University, Providence, RI, USA
- Andreea Deac
- Mila - Quebec AI Institute, Montreal, Quebec, Canada
- Université de Montréal, Montreal, Quebec, Canada
- Anima Anandkumar
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- NVIDIA, Santa Clara, CA, USA
- Karianne Bergen
- Department of Earth, Environmental and Planetary Sciences, Brown University, Providence, RI, USA
- Data Science Institute, Brown University, Providence, RI, USA
- Carla P Gomes
- Department of Computer Science, Cornell University, Ithaca, NY, USA
- Shirley Ho
- Center for Computational Astrophysics, Flatiron Institute, New York, NY, USA
- Department of Astrophysical Sciences, Princeton University, Princeton, NJ, USA
- Department of Physics, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Physics and Center for Data Science, New York University, New York, NY, USA
- Joan Lasenby
- Department of Engineering, University of Cambridge, Cambridge, UK
- Jure Leskovec
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Arjun Manrai
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Debora Marks
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Le Song
- BioMap, Beijing, China
- Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Jimeng Sun
- University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Jian Tang
- Mila - Quebec AI Institute, Montreal, Quebec, Canada
- HEC Montréal, Montreal, Quebec, Canada
- CIFAR AI Chair, Toronto, Ontario, Canada
- Petar Veličković
- Google DeepMind, London, UK
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
- Max Welling
- University of Amsterdam, Amsterdam, Netherlands
- Microsoft Research Amsterdam, Amsterdam, Netherlands
- Linfeng Zhang
- DP Technology, Beijing, China
- AI for Science Institute, Beijing, China
- Connor W Coley
- Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Yoshua Bengio
- Mila - Quebec AI Institute, Montreal, Quebec, Canada
- Université de Montréal, Montreal, Quebec, Canada
- Marinka Zitnik
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA.
- Broad Institute of MIT and Harvard, Cambridge, MA, USA.
- Harvard Data Science Initiative, Cambridge, MA, USA.
- Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA, USA.
26
Fu X, Sahai E, Wilkins A. Application of digital pathology-based advanced analytics of tumour microenvironment organisation to predict prognosis and therapeutic response. J Pathol 2023; 260:578-591. [PMID: 37551703 PMCID: PMC10952145 DOI: 10.1002/path.6153] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Revised: 05/25/2023] [Accepted: 06/07/2023] [Indexed: 08/09/2023]
Abstract
In recent years, the application of advanced analytics, especially artificial intelligence (AI), to digital H&E images, and other histological image types, has begun to radically change how histological images are used in the clinic. Alongside the recognition that the tumour microenvironment (TME) has a profound impact on tumour phenotype, the technical development of highly multiplexed immunofluorescence platforms has enhanced the biological complexity that can be captured in the TME with high precision. AI has an increasingly powerful role in the recognition and quantitation of image features and the association of such features with clinically important outcomes, as occurs in distinct stages in conventional machine learning. Deep-learning algorithms are able to elucidate TME patterns inherent in the input data with minimum levels of human intelligence and, hence, have the potential to achieve clinically relevant predictions and discovery of important TME features. Furthermore, the diverse repertoire of deep-learning algorithms able to interrogate TME patterns extends beyond convolutional neural networks to include attention-based models, graph neural networks, and multimodal models. To date, AI models have largely been evaluated retrospectively, outside the well-established rigour of prospective clinical trials, in part because traditional clinical trial methodology may not always be suitable for the assessment of AI technology. However, to enable digital pathology-based advanced analytics to meaningfully impact clinical care, specific measures of 'added benefit' to the current standard of care and validation in a prospective setting are important. This will need to be accompanied by adequate measures of explainability and interpretability. Despite such challenges, the combination of expanding datasets, increased computational power, and the possibility of integration of pre-clinical experimental insights into model development means there is exciting potential for the future progress of these AI applications. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Xiao Fu
- Tumour Cell Biology Laboratory, The Francis Crick Institute, London, UK
- Biomolecular Modelling Laboratory, The Francis Crick Institute, London, UK
- Erik Sahai
- Tumour Cell Biology Laboratory, The Francis Crick Institute, London, UK
- Anna Wilkins
- Tumour Cell Biology Laboratory, The Francis Crick Institute, London, UK
- Division of Radiotherapy and Imaging, Institute of Cancer Research, London, UK
- Royal Marsden Hospitals NHS Trust, London, UK
27
Kodipalli A, Fernandes SL, Gururaj V, Varada Rameshbabu S, Dasar S. Performance Analysis of Segmentation and Classification of CT-Scanned Ovarian Tumours Using U-Net and Deep Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:2282. [PMID: 37443676 DOI: 10.3390/diagnostics13132282] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 06/29/2023] [Accepted: 07/03/2023] [Indexed: 07/15/2023] Open
Abstract
Difficulty in detecting tumours at early stages is a major cause of mortality in patients, despite advances in treatment and research regarding ovarian cancer. Deep learning algorithms were applied as a diagnostic tool to CT scan images of the ovarian region. The images went through a series of pre-processing techniques, and the tumour was then segmented using the U-Net model. The instances were classified into two categories, benign and malignant tumours. Classification was performed using deep learning models such as CNN, ResNet, DenseNet, Inception-ResNet, VGG16, and Xception, along with machine learning models such as Random Forest, Gradient Boosting, AdaBoost, and XGBoost. DenseNet-121 emerged as the best model on this dataset, obtaining an accuracy of 95.7% after optimization was applied to the machine learning models. The current work compares multiple CNN architectures with common machine learning algorithms, with and without optimization techniques applied.
Affiliation(s)
- Ashwini Kodipalli
- Department of Artificial Intelligence & Data Science, Global Academy of Technology, Bangalore 560098, India
- Steven L Fernandes
- Department of Computer Science, Design, Journalism, Creighton University, Omaha, NE 68178, USA
- Vaishnavi Gururaj
- Department of Computer Science, George Mason University, Fairfax, VA 22030, USA
- Shriya Varada Rameshbabu
- Department of Computer Science & Engineering, Global Academy of Technology, Bangalore 560098, India
- Santosh Dasar
- Department of Radiologist, SDM College of Medical Sciences and Hospital, Dharwad 580009, India
28
Ajani SN, Mulla RA, Limkar S, Ashtagi R, Wagh SK, Pawar ME. DLMBHCO: design of an augmented bioinspired deep learning-based multidomain body parameter analysis via heterogeneous correlative body organ analysis. Soft comput 2023:1-21. [PMID: 37362266 PMCID: PMC10248994 DOI: 10.1007/s00500-023-08613-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/23/2023] [Indexed: 06/28/2023]
Abstract
Progressive organ-level disorders in the human body are often correlated with diseases in other body parts. For instance, liver diseases can be linked with heart issues, while cancers can be linked with brain diseases (or psychological conditions). Defining such correlations is a complex task, and existing deep learning models that perform this task either show lower accuracy or are not comprehensive when applied to real-time scenarios. To overcome these issues, this text proposes the design of an augmented bioinspired deep learning-based multidomain body parameter analysis via heterogeneous correlative body organ analysis. The proposed model initially collects temporal and spatial data scans for different body parts and uses a multidomain feature extraction engine to convert these scans into vector sets. These vectors are processed by a Bacterial Foraging Optimizer (BFO), which assists in the identification of highly variant feature sets, which are individually classified into different disease categories. A fusion of Inception Net, XCeption Net, and GoogLeNet models is used to perform these classifications. The classified categories are linked with other disease types via temporal analysis of blood reports. The temporal analysis engine uses a Modified Analytical Hierarchical Processing (MAHP) model to calculate inter-organ disease dependency probabilities. Based on these probabilities, the model generates a patient-level correlation map that clinical experts can use to suggest remedial treatments; under clinical evaluation, the model identified correlations between brain disorders and kidneys, heart diseases and lungs, heart diseases and liver, and brain diseases and different types of cancers with high efficiency. When validated on MITBIH, DEAP, CT Kidney, RIDER, and PLCO data samples, the proposed model improved correlation accuracy by 8.5% and improved precision and recall by 3.2% compared with existing correlation models under similar clinical scenarios.
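The paper's MAHP details are not given in the abstract, but classical Analytical Hierarchical Processing, which MAHP modifies, derives priority weights from a pairwise-comparison matrix via its principal eigenvector. The sketch below illustrates only that classical step; the function name, the organ labels, and the comparison values are all illustrative assumptions, not clinical data.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix:
    the principal (Perron) eigenvector, normalized to sum to 1."""
    m = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(m)
    # Perron eigenvector of a positive matrix has uniform sign; take abs
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Toy matrix: how strongly each organ system's findings weigh on a
# correlation score (m[i, j] = importance of i relative to j)
organs = ["heart", "liver", "kidney"]
m = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
weights = ahp_priorities(m)  # heart weighted highest in this toy matrix
```

Normalizing the eigenvector turns the pairwise judgments into a probability-like weight vector, which is the kind of quantity a dependency-probability engine could consume.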
Affiliation(s)
- Samir N. Ajani
- Department of Computer Science & Engineering (Data Science), St. Vincent Pallotti College of Engineering and Technology, Nagpur, Maharashtra India
- Rais Allauddin Mulla
- Department of Computer Engineering, Vasantdada Patil Pratishthan College of Engineering and Visual Arts, Mumbai, Maharashtra India
- Suresh Limkar
- Department of Artificial Intelligence and Data Science, AISSMS Institute of Information Technology, Pune, Maharashtra India
- Rashmi Ashtagi
- Department of Computer Engineering, Vishwakarma Institute of Technology, Bibwewadi, Pune, 411037 Maharashtra India
- Sharmila K. Wagh
- Department of Computer Engineering, Modern Education Society’s College of Engineering, Pune, Maharashtra India
- Mahendra Eknath Pawar
- Department of Computer Engineering, Vasantdada Patil Pratishthan College of Engineering and Visual Arts, Mumbai, Maharashtra India
29
Li H, Zhong J, Lin L, Chen Y, Shi P. Semi-supervised nuclei segmentation based on multi-edge features fusion attention network. PLoS One 2023; 18:e0286161. [PMID: 37228137 DOI: 10.1371/journal.pone.0286161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 05/09/2023] [Indexed: 05/27/2023] Open
Abstract
The morphology of nuclei carries much of the clinically relevant pathological information, and nuclei segmentation is a vital step in current automated histopathological image analysis. Supervised machine learning-based segmentation models have already achieved outstanding performance given sufficiently precise human annotations. Nevertheless, outlining such labels on numerous nuclei requires professional expertise and is extremely time consuming. Automatic nuclei segmentation with minimal manual intervention is highly needed to promote the effectiveness of clinical pathological research. Semi-supervised learning greatly reduces the dependence on labeled samples while ensuring sufficient accuracy. In this paper, we propose a Multi-Edge Feature Fusion Attention Network (MEFFA-Net) with three feature inputs, including image, pseudo-mask, and edge, which enhances its learning ability by considering multiple features. Only a few labeled nuclei boundaries are used to train annotations on the remaining, mostly unlabeled data. MEFFA-Net creates more precise boundary masks for nucleus segmentation based on pseudo-masks, which greatly reduces the dependence on manual labeling. The MEFFA-Block focuses on the nuclei outline and selects features conducive to segmentation, making full use of the multiple features. Experimental results on public multi-organ databases, including MoNuSeg, CPM-17, and CoNSeP, show that the proposed model achieves mean IoU segmentation scores of 0.706, 0.751, and 0.722, respectively. The model also achieves better results than some cutting-edge methods while reducing the labeling work to 1/8 of that of common supervised strategies. Our method provides a more efficient and accurate basis for nuclei segmentation and further quantification in pathological research.
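The mean IoU scores reported above are averages of a simple per-mask overlap measure. A minimal sketch of IoU, together with the closely related Dice coefficient, is shown below; the function name and the toy 3x3 masks are illustrative assumptions, not from the paper.

```python
import numpy as np

def iou_and_dice(pred, target):
    """Intersection-over-Union and Dice coefficient for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union if union else 1.0            # both empty -> perfect
    denom = pred.sum() + target.sum()
    dice = 2 * inter / denom if denom else 1.0
    return float(iou), float(dice)

# Toy 3x3 masks: prediction and ground truth overlap on 2 pixels
pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [0, 0, 0]])
iou, dice = iou_and_dice(pred, target)  # inter=2, union=4 -> IoU=0.5, Dice=2/3
```

Averaging the per-image IoU over a test set gives the mean IoU figures (0.706, 0.751, 0.722) the abstract cites for the three databases.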
Affiliation(s)
- Huachang Li
  - College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
  - Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Jing Zhong
  - Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Liyan Lin
  - Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Yanping Chen
  - Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Peng Shi
  - College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
  - Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
30
Islam Sumon R, Bhattacharjee S, Hwang YB, Rahman H, Kim HC, Ryu WS, Kim DM, Cho NH, Choi HK. Densely Convolutional Spatial Attention Network for nuclei segmentation of histological images for computational pathology. Front Oncol 2023; 13:1009681. [PMID: 37305563 PMCID: PMC10248729 DOI: 10.3389/fonc.2023.1009681]
Abstract
Introduction Automatic nuclear segmentation in digital microscopic tissue images can help pathologists extract high-quality features for nuclear morphometrics and other analyses. However, image segmentation is a challenging task in medical image processing and analysis. This study aimed to develop a deep learning-based method for nuclei segmentation of histological images for computational pathology. Methods The original U-Net model sometimes falls short in exploring significant features. Herein, we present the Densely Convolutional Spatial Attention Network (DCSA-Net) model, based on U-Net, to perform the segmentation task. The developed model was additionally tested on an external multi-tissue dataset, MoNuSeg. Developing deep learning algorithms that segment nuclei well requires a large quantity of data, which is expensive and often infeasible to obtain. We collected hematoxylin and eosin-stained image data sets from two hospitals to train the model with a variety of nuclear appearances. Because of the limited number of annotated pathology images, we introduce a small publicly accessible data set of prostate cancer (PCa) with more than 16,000 labeled nuclei. To construct our proposed model, we developed the DCSA module, an attention mechanism for capturing useful information from raw images. We also applied several other artificial intelligence-based segmentation methods and tools to compare their results with our proposed technique. Results To assess the performance of nuclei segmentation, we evaluated the model's outputs using the Accuracy, Dice coefficient (DC), and Jaccard coefficient (JC) scores. The proposed technique outperformed the other methods and achieved superior nuclei segmentation with accuracy, DC, and JC of 96.4% (95% confidence interval [CI]: 96.2-96.6), 81.8 (95% CI: 80.8-83.0), and 69.3 (95% CI: 68.2-70.0), respectively, on the internal test data set.
Conclusion Our proposed method demonstrates superior performance in segmenting cell nuclei of histological images from internal and external datasets, and outperforms many standard segmentation algorithms used for comparative analysis.
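The Dice and Jaccard coefficients used above are related (J = D / (2 - D)) and simple to compute on binary masks. The following is a generic sketch of the two metrics, not the study's own evaluation script:

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice coefficient and Jaccard index for a pair of binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    return float(dice), float(jaccard)

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[0, 1], [0, 1]])
d, j = dice_jaccard(pred, gt)  # inter 1, sums 2+2, union 3 -> 0.5, 1/3
```

Because of the fixed relation between the two, reporting both mainly aids comparison across papers that favor one or the other.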
Affiliation(s)
- Rashadul Islam Sumon
  - Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Subrata Bhattacharjee
  - Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Yeong-Byn Hwang
  - Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Hafizur Rahman
  - Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Hee-Cheol Kim
  - Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Wi-Sun Ryu
  - Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
- Dong Min Kim
  - Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
- Nam-Hoon Cho
  - Department of Pathology, Yonsei University Hospital, Seoul, Republic of Korea
- Heung-Kook Choi
  - Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
  - Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
31
Ginley B, Lucarelli N, Zee J, Jain S, Han SS, Rodrigues L, Ozrazgat-Baslanti T, Wong ML, Nadkarni G, Jen KY, Sarder P. Correlating Deep Learning-Based Automated Reference Kidney Histomorphometry with Patient Demographics and Creatinine. bioRxiv 2023:2023.05.18.541348. [PMID: 37292965 PMCID: PMC10245721 DOI: 10.1101/2023.05.18.541348]
Abstract
Background Reference histomorphometric data of healthy human kidneys are largely lacking due to laborious quantitation requirements. Correlating histomorphometric features with clinical parameters through machine learning approaches can provide valuable information about natural population variance. To this end, we leveraged deep learning, computational image analysis, and feature analysis to investigate the relationship of histomorphometry with patient age, sex, and serum creatinine (SCr) in a multinational set of reference kidney tissue sections. Methods A panoptic segmentation neural network was developed and used to segment viable and sclerotic glomeruli, cortical and medullary interstitia, tubules, and arteries/arterioles in the digitized images of 79 periodic acid-Schiff-stained human nephrectomy sections showing minimal pathologic changes. Simple morphometrics (e.g., area, radius, density) were quantified from the segmented classes. Regression analysis aided in determining the relationship of histomorphometric parameters with age, sex, and SCr. Results Our deep-learning model achieved high segmentation performance for all test compartments. The size and density of nephrons and arteries/arterioles varied significantly among healthy humans, with potentially large differences between geographically diverse patients. Nephron size was significantly dependent on SCr. Slight, albeit significant, differences in renal vasculature were observed between sexes. Glomerulosclerosis percentage increased, and cortical density of arteries/arterioles decreased, as a function of age. Conclusions Using deep learning, we automated precise measurements of kidney histomorphometric features. In the reference kidney tissue, several histomorphometric features demonstrated significant correlation to patient demographics and SCr. Deep learning tools can increase the efficiency and rigor of histomorphometric analysis.
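The "simple morphometrics" described (area, radius, density) follow directly from an integer-labelled segmentation map. The sketch below is illustrative only, with an assumed `pixel_size_um` calibration parameter; it is not the authors' pipeline:

```python
import numpy as np

def simple_morphometrics(label_map, pixel_size_um=1.0):
    """Per-object area (um^2), equivalent circular radius (um), and
    object density (objects per um^2) from an integer-labelled
    segmentation map where 0 denotes background."""
    labels = [l for l in np.unique(label_map) if l != 0]
    areas = [(label_map == l).sum() * pixel_size_um**2 for l in labels]
    # Radius of a circle with the same area as the object
    radii = [np.sqrt(a / np.pi) for a in areas]
    field_area = label_map.size * pixel_size_um**2
    density = len(labels) / field_area
    return areas, radii, density

lm = np.zeros((4, 4), dtype=int)
lm[1:3, 1:3] = 1  # one 2x2 "object"
areas, radii, density = simple_morphometrics(lm)
```

Regressing such per-class measurements against age, sex, and SCr is then an ordinary statistical-modeling step.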
Affiliation(s)
- Brandon Ginley
  - Departments of Pathology & Anatomical Sciences, University at Buffalo Jacobs School of Medicine and Biomedical Sciences – The State University of New York, Buffalo, NY, USA
- Nicholas Lucarelli
  - Department of Biomedical Engineering, University of Florida College of Engineering, Gainesville, FL, USA
  - Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Jarcy Zee
  - Department of Biostatistics, Epidemiology, & Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Sanjay Jain
  - Division of Nephrology, Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA
  - Departments of Pediatrics and Pathology, Washington University School of Medicine, St. Louis, MO, USA
- Seung Sook Han
  - Department of Internal Medicine, Seoul National University College of Medicine, Seoul, South Korea
- Luis Rodrigues
  - University Clinic of Nephrology, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
  - Nephrology Unit, Centro Hospitalar e Universitário de Coimbra, Coimbra, Portugal
- Tezcan Ozrazgat-Baslanti
  - Quantitative Health Section, Division of Nephrology, Hypertension, and Renal Transplantation, Department of Medicine, University of Florida, Gainesville, FL, USA
- Michelle L. Wong
  - The Mount Sinai Clinical Intelligence Center, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Girish Nadkarni
  - The Mount Sinai Clinical Intelligence Center, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Division of Nephrology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Kuang-Yu Jen
  - Department of Pathology and Laboratory Medicine, University of California, Davis School of Medicine, Sacramento, CA, USA
- Pinaki Sarder
  - Quantitative Health Section, Division of Nephrology, Hypertension, and Renal Transplantation, Department of Medicine, University of Florida, Gainesville, FL, USA
  - Department of Electrical & Computer Engineering, University of Florida College of Engineering, Gainesville, FL, USA
32
Ding K, Zhou M, Wang H, Gevaert O, Metaxas D, Zhang S. A Large-scale Synthetic Pathological Dataset for Deep Learning-enabled Segmentation of Breast Cancer. Sci Data 2023; 10:231. [PMID: 37085533 PMCID: PMC10121551 DOI: 10.1038/s41597-023-02125-y]
Abstract
The success of training computer-vision models relies heavily on large-scale, real-world images with annotations. Yet such an annotation-ready dataset is difficult to curate in pathology due to privacy protection and the excessive annotation burden. To aid computational pathology, synthetic data generation, curation, and annotation present a cost-effective means to quickly enable the data diversity required to boost model performance at different stages. In this study, we introduce a large-scale synthetic pathological image dataset paired with annotations for nuclei semantic segmentation, termed Synthetic Nuclei and annOtation Wizard (SNOW). SNOW is developed via a standardized workflow by applying an off-the-shelf image generator and nuclei annotator. The dataset contains 20k image tiles and 1,448,522 annotated nuclei overall, released under the CC-BY license. We show that SNOW can be used in both supervised and semi-supervised training scenarios. Extensive results suggest that synthetic-data-trained models are competitive under a variety of model training settings, expanding the scope of using synthetic images to enhance downstream data-driven clinical tasks.
Affiliation(s)
- Kexin Ding
  - Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28262, USA
- Mu Zhou
  - Sensebrain Research, San Jose, CA, 95131, USA
- He Wang
  - Department of Pathology, Yale University, New Haven, CT, 06520, USA
- Olivier Gevaert
  - Stanford Center for Biomedical Informatics Research, Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, 94305, USA
- Dimitris Metaxas
  - Department of Computer Science, Rutgers University, New Brunswick, NJ, 08901, USA
- Shaoting Zhang
  - Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
33
Winkelmaier G, Koch B, Bogardus S, Borowsky AD, Parvin B. Biomarkers of Tumor Heterogeneity in Glioblastoma Multiforme Cohort of TCGA. Cancers (Basel) 2023; 15:2387. [PMID: 37190318 DOI: 10.3390/cancers15082387]
Abstract
Tumor Whole Slide Images (WSIs) are often heterogeneous, which hinders the discovery of biomarkers in the presence of confounding clinical factors. In this study, we present a pipeline for identifying biomarkers from the Glioblastoma Multiforme (GBM) cohort of WSIs in the TCGA archive. The GBM cohort suffers from many technical artifacts, and the discovery of GBM biomarkers is challenging because "age" is the single most confounding factor for predicting outcomes. The proposed approach relies on interpretable features (e.g., nuclear morphometric indices), effective similarity metrics for heterogeneity analysis, and robust statistics for identifying biomarkers. The pipeline first removes artifacts (e.g., pen marks) and partitions each WSI into patches for nuclear segmentation via an extended U-Net for subsequent quantitative representation. Given the variations in fixation and staining that can artificially modulate hematoxylin optical density (HOD), we extended Navab's Lab method to normalize images and reduce the impact of batch effects. The heterogeneity of each WSI is then represented either as probability density functions (PDF) per patient or as the composition of a dictionary predicted from the entire cohort of WSIs. For the PDF- and dictionary-based methods, morphometric subtypes are constructed based on distances computed from optimal transport and linkage analysis, or on consensus clustering with Euclidean distances, respectively. For each inferred subtype, Kaplan-Meier analysis and/or the Cox regression model are used to regress the survival time. Since age is the single most important confounder for predicting survival in GBM and there is an observed violation of the proportionality assumption in the Cox model, we use both age and age-squared, coupled with the likelihood ratio test and forest plots, to evaluate competing statistics. Next, the PDF- and dictionary-based methods are combined to identify biomarkers that are predictive of survival.
The combined model has the advantage of integrating global (e.g., cohort scale) and local (e.g., patient scale) attributes of morphometric heterogeneity, coupled with robust statistics, to reveal stable biomarkers. The results indicate that, after normalization of the GBM cohort, mean HOD, eccentricity, and cellularity are predictive of survival. Finally, we also stratified the GBM cohort as a function of EGFR expression and published genomic subtypes to reveal genomic-dependent morphometric biomarkers.
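The Kaplan-Meier analysis used per inferred subtype is the standard product-limit estimator, S(t) = Π (1 - d_i/n_i) over event times. A minimal plain-Python sketch of the estimator (not the study's statistical code):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up times; events: 1 if the event occurred, 0 if censored.
    Returns a list of (time, survival probability) at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths, n = 0, at_risk
        # Group all subjects sharing this time point
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            at_risk -= 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n
            curve.append((t, surv))
    return curve

curve = kaplan_meier([1, 2, 3], [1, 0, 1])  # censoring at t=2
```

Comparing such curves between morphometric subtypes (e.g., via the log-rank test) is what establishes a candidate biomarker's association with survival.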
Affiliation(s)
- Garrett Winkelmaier
  - Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
- Brandon Koch
  - Department of Biostatistics, College of Public Health, Ohio State University, 281 W. Lane Ave., Columbus, OH 43210, USA
- Skylar Bogardus
  - Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
- Alexander D Borowsky
  - Department of Pathology, UC Davis Comprehensive Cancer Center, University of California Davis, 1 Shields Ave, Davis, CA 95616, USA
- Bahram Parvin
  - Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
  - Pennington Cancer Institute, Renown Health, Reno, NV 89502, USA
34
Lou W, Li H, Li G, Han X, Wan X. Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework. IEEE Trans Med Imaging 2023; 42:947-958. [PMID: 36355729 DOI: 10.1109/tmi.2022.3221666]
Abstract
Recently, deep neural networks, which require a large number of annotated samples, have been widely applied to nuclei instance segmentation of H&E-stained pathology images. However, it is inefficient and unnecessary to label all pixels of a nuclei image dataset, which usually contains similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to training. We then introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our framework trains an existing segmentation model with the augmented samples. Experimental results show that the proposed method obtains performance on par with a fully-supervised baseline while annotating less than 5% of the pixels on some benchmarks.
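A consistency-based selection criterion of the kind described can be sketched as follows: patches whose predictions vary most across augmented views are taken to be the most informative to annotate. The array shapes and the variance-based score below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_patches(prob_maps_per_view, k):
    """Rank patches by prediction inconsistency across augmented views
    and return the indices of the k least consistent patches.
    prob_maps_per_view: array of shape (n_views, n_patches, H, W)
    holding per-pixel foreground probabilities for each view."""
    probs = np.asarray(prob_maps_per_view)
    # Per-pixel variance across views, averaged over each patch
    inconsistency = probs.var(axis=0).mean(axis=(1, 2))
    return [int(i) for i in np.argsort(inconsistency)[::-1][:k]]

# Patch 0: identical predictions in both views; patch 1: contradictory.
view_a = np.array([[[0.5, 0.5], [0.5, 0.5]], [[0.0, 0.0], [0.0, 0.0]]])
view_b = np.array([[[0.5, 0.5], [0.5, 0.5]], [[1.0, 1.0], [1.0, 1.0]]])
chosen = select_patches(np.stack([view_a, view_b]), k=1)
```

Only the selected patches would then be sent for pixel-level annotation, with the rest left for the semi-supervised stage.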
35
Han L, Su H, Yin Z. Phase Contrast Image Restoration by Formulating Its Imaging Principle and Reversing the Formulation With Deep Neural Networks. IEEE Trans Med Imaging 2023; 42:1068-1082. [PMID: 36409800 DOI: 10.1109/tmi.2022.3223677]
Abstract
Phase contrast microscopy, as a noninvasive imaging technique, has been widely used to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle of the specially designed microscope, phase contrast microscopy images contain artifacts such as halo and shade-off which hinder cell segmentation and detection tasks. Some previous works developed simplified computational imaging models for phase contrast microscopes using linear approximations and convolutions. These approximated models do not exactly reflect the imaging principle of the phase contrast microscope, and accordingly the image restoration obtained by solving the corresponding deconvolution problem is imperfect. In this paper, we revisit the optical principle of the phase contrast microscope to precisely formulate its imaging model without any approximation. Based on this model, we propose an image restoration procedure that reverses the imaging model with a deep neural network, instead of mathematically deriving the inverse operator of the model, which is technically impossible. Extensive experiments demonstrate the superiority of the newly derived phase contrast microscopy imaging model and the power of the deep neural network in modeling the inverse imaging procedure. Moreover, the restored images allow high-quality cell segmentation to be achieved by simple thresholding methods. Implementations of this work are publicly available at https://github.com/LiangHann/Phase-Contrast-Microscopy-Image-Restoration.
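"Simple thresholding" on a well-restored image can be illustrated with Otsu's method, which picks the threshold maximising between-class variance of the grey-level histogram. The paper does not specify which thresholding method is used, so this is a generic sketch:

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Otsu's method: the threshold that maximises the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w = np.cumsum(hist)            # class-0 weight up to each bin
    mu = np.cumsum(hist * centers) # class-0 cumulative mean mass
    mu_total = mu[-1]
    best_t, best_var = centers[0], -1.0
    for i in range(n_bins - 1):
        w0, w1 = w[i], 1 - w[i]
        if w0 == 0 or w1 == 0:
            continue
        m0 = mu[i] / w0
        m1 = (mu_total - mu[i]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

# Bimodal toy image: dark background at 10, bright cells at 200
img = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
t = otsu_threshold(img)
mask = img > t
```

On a restored, artifact-free image the foreground/background histogram is close to bimodal, which is exactly the regime where such a global threshold works well.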
36
Haq MM, Ma H, Huang J. NuSegDA: Domain adaptation for nuclei segmentation. Front Big Data 2023; 6:1108659. [PMID: 36936996 PMCID: PMC10018010 DOI: 10.3389/fdata.2023.1108659]
Abstract
The accurate segmentation of nuclei is crucial for cancer diagnosis and further clinical treatment. To successfully train a nuclei segmentation network in a fully-supervised manner for a particular type of organ or cancer, we need a dataset with ground-truth annotations. However, such well-annotated nuclei segmentation datasets are rare, and manually labeling an unannotated dataset is an expensive, time-consuming, and tedious process. Consequently, we need a way to train the nuclei segmentation network with unlabeled data. In this paper, we propose a model named NuSegUDA for nuclei segmentation on an unlabeled dataset (target domain). This is achieved by applying an Unsupervised Domain Adaptation (UDA) technique with the help of another labeled dataset (source domain) that may come from a different type of organ, cancer, or source. We apply the UDA technique in both the feature space and the output space. We additionally utilize a reconstruction network and incorporate adversarial learning so that source-domain images can be accurately translated to the target domain for further training of the segmentation network. We validate the proposed NuSegUDA on two public nuclei segmentation datasets and obtain significant improvement over the baseline methods. Extensive experiments also verify the contribution of the newly proposed image reconstruction adversarial loss and target-translated source supervised loss to the performance boost of NuSegUDA. Finally, considering the scenario where a small number of annotations are available from the target domain, we extend our work and propose NuSegSSDA, a Semi-Supervised Domain Adaptation (SSDA) based approach.
Affiliation(s)
- Mohammad Minhazul Haq
  - Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States
- Hehuan Ma
  - Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States
- Junzhou Huang
  - Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States
37
Guo R, Xie K, Pagnucco M, Song Y. SAC-Net: Learning with weak and noisy labels in histopathology image segmentation. Med Image Anal 2023; 86:102790. [PMID: 36878159 DOI: 10.1016/j.media.2023.102790]
Abstract
Deep convolutional neural networks have been highly effective in segmentation tasks. However, segmentation becomes more difficult when training images include many complex instances to segment, as in the task of nuclei segmentation in histopathology images. Weakly supervised learning can reduce the need for large-scale, high-quality ground truth annotations by involving non-expert annotators or algorithms to generate supervision information for segmentation. However, there is still a significant performance gap between weakly supervised and fully supervised learning approaches. In this work, we propose a weakly-supervised nuclei segmentation method trained in two stages that requires only annotation of the nuclear centroids. First, we generate boundary- and superpixel-based masks as pseudo ground truth labels to train our SAC-Net, a segmentation network enhanced by a constraint network and an attention network to effectively address the problems caused by noisy labels. Then, we refine the pseudo labels at the pixel level based on Confident Learning and train the network again. Our method shows highly competitive performance for cell nuclei segmentation in histopathology images on three public datasets. Code will be available at: https://github.com/RuoyuGuo/MaskGA_Net.
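Pixel-level pseudo-label refinement of the kind described can be illustrated compactly. Note the paper uses Confident Learning; the sketch below substitutes a simpler confidence-threshold rule (keep a pseudo label only where the model agrees with it confidently, otherwise mark the pixel as "ignore"), so it is an illustration of the idea rather than the authors' method:

```python
import numpy as np

def refine_pseudo_labels(probs, pseudo, threshold=0.9):
    """Keep pseudo-labelled pixels whose predicted probability for
    that label exceeds a confidence threshold; mark the rest -1
    ('ignore') so they are excluded from the retraining loss.
    probs: (n_classes, H, W) softmax output; pseudo: (H, W) int labels."""
    # Probability the model assigns to each pixel's pseudo label
    conf = np.take_along_axis(probs, pseudo[None, ...], axis=0)[0]
    return np.where(conf >= threshold, pseudo, -1)

probs = np.array([[[0.95, 0.60]],   # class 0 probabilities
                  [[0.05, 0.40]]])  # class 1 probabilities
pseudo = np.array([[0, 0]])
refined = refine_pseudo_labels(probs, pseudo)  # second pixel ignored
```

The retrained network then sees only the pixels that survived refinement, which is what narrows the gap to fully supervised training.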
Affiliation(s)
- Ruoyu Guo
  - School of Computer Science and Engineering, University of New South Wales, Australia
- Kunzi Xie
  - School of Computer Science and Engineering, University of New South Wales, Australia
- Maurice Pagnucco
  - School of Computer Science and Engineering, University of New South Wales, Australia
- Yang Song
  - School of Computer Science and Engineering, University of New South Wales, Australia
38
Xiang D, Yan S, Guan Y, Cai M, Li Z, Liu H, Chen X, Tian B. Semi-Supervised Dual Stream Segmentation Network for Fundus Lesion Segmentation. IEEE Trans Med Imaging 2023; 42:713-725. [PMID: 36260572 DOI: 10.1109/tmi.2022.3215580]
Abstract
Accurate segmentation of retinal images can assist ophthalmologists in determining the degree of retinopathy and diagnosing other systemic diseases. However, the structure of the retina is complex, and different anatomical structures often interfere with the segmentation of fundus lesions. In this paper, a new segmentation strategy, a dual stream segmentation network embedded in a conditional generative adversarial network, is proposed to improve the accuracy of retinal lesion segmentation. First, a dual stream encoder is proposed to exploit the capabilities of two different networks and extract more feature information. Second, a multiple level fuse block is proposed to decode the richer and more effective features from the two parallel encoders. Third, the proposed network is further trained in a semi-supervised adversarial manner to leverage both labeled images and unlabeled images with highly confident pseudo labels, which are selected by the dual stream Bayesian segmentation network. An annotation discriminator is further proposed to mitigate the tendency of predictions to become increasingly similar to the inaccurate predictions on unlabeled images. The proposed method is cross-validated on 384 clinical fundus fluorescein angiography images and 1040 optical coherence tomography images. Compared to state-of-the-art methods, the proposed method achieves better segmentation of the retinal capillary non-perfusion region and choroidal neovascularization.
39
Billot B, Greve DN, Puonti O, Thielscher A, Van Leemput K, Fischl B, Dalca AV, Iglesias JE. SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Med Image Anal 2023; 86:102789. [PMID: 36857946 PMCID: PMC10154424 DOI: 10.1016/j.media.2023.102789]
Abstract
Despite advances in data augmentation and transfer learning, convolutional neural networks (CNNs) struggle to generalise to unseen domains. When segmenting brain scans, CNNs are highly sensitive to changes in resolution and contrast: even within the same MRI modality, performance can decrease across datasets. Here we introduce SynthSeg, the first segmentation CNN robust against changes in contrast and resolution. SynthSeg is trained with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation strategy where we fully randomise the contrast and resolution of the synthetic training data. Consequently, SynthSeg can segment real scans from a wide range of target domains without retraining or fine-tuning, which enables straightforward analysis of huge amounts of heterogeneous clinical data. Because SynthSeg requires only segmentations for training (no images), it can learn from labels obtained by automated methods on diverse populations (e.g., ageing and diseased), thus achieving robustness to a wide range of morphological variability. We demonstrate SynthSeg on 5,000 scans of six modalities (including CT) and ten resolutions, where it exhibits unparalleled generalisation compared with supervised CNNs, state-of-the-art domain adaptation, and Bayesian segmentation. Finally, we demonstrate the generalisability of SynthSeg by applying it to cardiac MRI and CT scans.
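The domain-randomisation idea (synthesising a training image from a segmentation, with fully randomised appearance) can be sketched minimally: give each label a randomly drawn intensity, add noise, then randomise contrast. The intensity and gamma ranges below are illustrative assumptions, not SynthSeg's actual generative model:

```python
import numpy as np

def synth_image(label_map, rng):
    """One synthetic training image from a segmentation: each label
    receives a randomly sampled mean intensity plus Gaussian noise,
    then a random gamma transform randomises the contrast."""
    img = np.zeros(label_map.shape, dtype=float)
    for lab in np.unique(label_map):
        mean = rng.uniform(0.1, 0.9)   # random per-label intensity
        std = rng.uniform(0.01, 0.1)
        mask = label_map == lab
        img[mask] = rng.normal(mean, std, mask.sum())
    img = np.clip(img, 0.0, 1.0)
    gamma = rng.uniform(0.5, 2.0)      # random contrast
    return img ** gamma

rng = np.random.default_rng(0)
labels = np.array([[0, 0, 1], [0, 1, 1]])
img = synth_image(labels, rng)
```

Training only ever sees (synthetic image, label map) pairs, so the network cannot overfit to any one real contrast or resolution.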
Affiliation(s)
- Benjamin Billot
  - Centre for Medical Image Computing, University College London, UK
- Douglas N Greve
  - Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Oula Puonti
  - Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Denmark
- Axel Thielscher
  - Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Denmark
  - Department of Health Technology, Technical University of Denmark
- Koen Van Leemput
  - Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
  - Department of Health Technology, Technical University of Denmark
- Bruce Fischl
  - Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
  - Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
  - Program in Health Sciences and Technology, Massachusetts Institute of Technology, USA
- Adrian V Dalca
  - Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
  - Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
- Juan Eugenio Iglesias
  - Centre for Medical Image Computing, University College London, UK
  - Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
  - Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
40
Suresh K, Cohen MS, Hartnick CJ, Bartholomew RA, Lee DJ, Crowson MG. Generation of synthetic tympanic membrane images: Development, human validation, and clinical implications of synthetic data. PLOS Digit Health 2023; 2:e0000202. [PMID: 36827244 PMCID: PMC9956018 DOI: 10.1371/journal.pdig.0000202]
Abstract
Synthetic clinical images could augment real medical image datasets, a novel approach in otolaryngology-head and neck surgery (OHNS). Our objective was to develop a generative adversarial network (GAN) for tympanic membrane images and to validate the quality of the synthetic images with human reviewers. Our model was developed using a state-of-the-art GAN architecture, StyleGAN2-ADA. The network was trained on intraoperative high-definition (HD) endoscopic images of tympanic membranes collected from pediatric patients undergoing myringotomy with possible tympanostomy tube placement. A human validation survey was administered to a cohort of OHNS and pediatrics trainees at our institution. The primary measure of model quality was the Frechet Inception Distance (FID), a metric comparing the distribution of generated images with the distribution of real images. The measures used for human reviewer validation were the sensitivity, specificity, and area under the curve (AUC) for humans' ability to discern synthetic from real images. Our dataset comprised 202 images. The best GAN was trained at 512x512 image resolution with an FID of 47.0. The progression of images through training showed stepwise "learning" of the anatomic features of a tympanic membrane. The validation survey was completed by 65 reviewers who assessed 925 images. Human reviewers demonstrated a sensitivity of 66%, specificity of 73%, and AUC of 0.69 for the detection of synthetic images. In summary, we successfully developed a GAN to produce synthetic tympanic membrane images and validated it with human reviewers. These images could be used to bolster real datasets with various pathologies and to develop more robust deep learning models such as those used for diagnostic predictions from otoscopic images. However, caution should be exercised in the use of synthetic data given issues regarding data diversity and performance validation.
Any model trained using synthetic data will require robust external validation to ensure validity and generalizability.
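The Fréchet Inception Distance reported above reduces, once Inception features have been extracted, to the Fréchet distance between two Gaussians fitted to the real and generated feature statistics. A minimal numpy sketch of that final computation (the Inception feature extraction itself is omitted, and this is a generic formulation rather than the authors' code):

```python
import numpy as np

def _psd_sqrt(a):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between Gaussians N(mu1, cov1) and N(mu2, cov2).

    tr sqrtm(C1 @ C2) is computed through the symmetric form
    sqrt(C1) @ C2 @ sqrt(C1), which shares its eigenvalues and hence its trace.
    """
    diff = np.asarray(mu1) - np.asarray(mu2)
    s1 = _psd_sqrt(cov1)
    cross = _psd_sqrt(s1 @ cov2 @ s1)
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2)
                 - 2.0 * np.trace(cross))
```

Lower values indicate that the generated feature distribution is closer to the real one; identical distributions give a distance of zero.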
Affiliation(s)
- Krish Suresh, Michael S. Cohen, Christopher J. Hartnick, Ryan A. Bartholomew, Daniel J. Lee, Matthew G. Crowson
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
|
41
|
Combined segmentation and classification-based approach to automated analysis of biomedical signals obtained from calcium imaging. PLoS One 2023; 18:e0281236. [PMID: 36745648 PMCID: PMC9901747 DOI: 10.1371/journal.pone.0281236] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2022] [Accepted: 01/18/2023] [Indexed: 02/07/2023] Open
Abstract
Automated screening systems in conjunction with machine learning-based methods are becoming an essential part of healthcare systems for assisting in disease diagnosis. Moreover, manually annotating data and hand-crafting features for training purposes are impractical and time-consuming. We propose a segmentation and classification-based approach for assembling an automated screening system for the analysis of calcium imaging. The method was developed and verified using the effects of disease IgGs (from Amyotrophic Lateral Sclerosis patients) on calcium (Ca2+) homeostasis. Of the 33 imaging videos we analyzed, 21 belonged to the disease group and 12 to the control group. The method consists of three main steps: projection, segmentation, and classification. The entire Ca2+ time-lapse image recordings (videos) were projected into a single image using different projection methods. Segmentation was performed by using a multi-level thresholding (MLT) step, and the Regions of Interest (ROIs) that encompassed cell somas were detected. A mean value of the pixels within these boundaries was collected at each time point to obtain the Ca2+ traces (time-series). Finally, a new matrix called the feature image was generated from those traces and used for assessing the classification accuracy of various classifiers (control vs. disease). The mean value of the segmentation F-score for all the data was above 0.80 throughout the tested threshold levels for all projection methods, namely maximum intensity, standard deviation, and standard deviation with linear scaling projection. Although classification accuracy reached up to 90.14%, we observed, interestingly, that better segmentation scores did not necessarily correspond to an increase in classification performance.
Our method takes advantage of multi-level thresholding and of a classification procedure based on the feature images, so it does not have to rely on hand-crafted training parameters for each event. It thus provides a semi-autonomous tool for assessing segmentation parameters that allows for the best classification accuracy.
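The projection and trace-extraction steps described above can be sketched as follows; the ROI masks are assumed given here, whereas the paper derives them by multi-level thresholding of the projection image:

```python
import numpy as np

def project_and_trace(video, masks):
    """Project a Ca2+ video and extract per-ROI mean-intensity traces.

    `video` is a (T, H, W) array of frames; `masks` is a list of boolean
    (H, W) ROI masks covering cell somas. Returns two projection images and
    an (n_roi, T) array of Ca2+ traces.
    """
    projection = video.max(axis=0)        # maximum-intensity projection
    std_projection = video.std(axis=0)    # standard-deviation projection
    # mean pixel value inside each ROI at every time point
    traces = np.stack([video[:, m].mean(axis=1) for m in masks])
    return projection, std_projection, traces
```

The feature image used for classification would then be assembled from these traces, one row per ROI.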
|
42
|
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333 DOI: 10.1016/j.media.2022.102691] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 10/22/2022] [Accepted: 11/09/2022] [Indexed: 11/16/2022]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images by computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, leading to a surge of publications in cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nucleus or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
|
43
|
A generalizable and robust deep learning algorithm for mitosis detection in multicenter breast histopathological images. Med Image Anal 2023; 84:102703. [PMID: 36481608 DOI: 10.1016/j.media.2022.102703] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Revised: 09/16/2022] [Accepted: 11/21/2022] [Indexed: 11/24/2022]
Abstract
Mitosis counting of biopsies is an important biomarker for breast cancer patients, which supports disease prognostication and treatment planning. Developing a robust mitotic cell detection model is highly challenging due to the complex growth pattern of mitotic cells and their high similarity to non-mitotic cells. Most mitosis detection algorithms have poor generalizability across image domains and lack reproducibility and validation in multicenter settings. To overcome these issues, we propose a generalizable and robust mitosis detection algorithm (called FMDet), which is independently tested on multicenter breast histopathological images. To capture more refined morphological features of cells, we convert the object detection task into a semantic segmentation problem. The pixel-level annotations for mitotic nuclei are obtained by taking the intersection of the masks generated from a well-trained nuclear segmentation model and the bounding boxes provided by the MIDOG 2021 challenge. In our segmentation framework, a robust feature extractor is developed to capture the appearance variations of mitotic cells; it is constructed by integrating a channel-wise multi-scale attention mechanism into a fully convolutional network structure. Benefiting from the fact that changes in the low-level spectrum do not affect high-level semantic perception, we employ a Fourier-based data augmentation method that reduces domain discrepancies by exchanging the low-frequency spectrum between two domains. Our FMDet algorithm was tested in the MIDOG 2021 challenge and ranked first. Further, our algorithm was externally validated on four independent datasets for mitosis detection, exhibiting state-of-the-art performance in comparison with previously published results. These results demonstrate that our algorithm has the potential to be deployed as an assistive decision support tool in clinical practice.
Our code has been released at https://github.com/Xiyue-Wang/1st-in-MICCAI-MIDOG-2021-challenge.
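The Fourier-based augmentation described above, swapping low-frequency amplitude content between domains while keeping phase, can be illustrated with a minimal single-channel sketch; the `beta` block size is illustrative and not taken from the paper:

```python
import numpy as np

def swap_low_frequency(src, ref, beta=0.1):
    """Replace the low-frequency amplitude spectrum of `src` with that of `ref`.

    The phase of `src` (carrying high-level content) is kept, while a centered
    low-frequency block of the amplitude spectrum (carrying low-level
    style/stain statistics) is taken from `ref`.
    """
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_ref = np.fft.fftshift(np.fft.fft2(ref))
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_ref = np.abs(fft_ref)

    h, w = src.shape
    b = max(1, int(min(h, w) * beta))      # half-size of the swapped block
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_ref[ch - b:ch + b, cw - b:cw + b]

    mixed = np.fft.ifft2(np.fft.ifftshift(amp_src * np.exp(1j * phase_src)))
    return np.real(mixed)
```

Applied between source- and target-domain patches during training, such mixing exposes the model to target-style low-level statistics without altering annotations.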
|
44
|
Liu K, Li B, Wu W, May C, Chang O, Knezevich S, Reisch L, Elmore J, Shapiro L. VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION 2023; 2023:1918-1927. [PMID: 36865487 PMCID: PMC9977454 DOI: 10.1109/wacv56688.2023.00196] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/09/2023]
Abstract
Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, which causes current nuclei detection methods to fail. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study to investigate the detection problem using image synthesis features between two distinct pathology stains. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.
Affiliation(s)
- Beibin Li
- University of Washington; Microsoft Research
|
45
|
Liang Y, Yin Z, Liu H, Zeng H, Wang J, Liu J, Che N. Weakly Supervised Deep Nuclei Segmentation With Sparsely Annotated Bounding Boxes for DNA Image Cytometry. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:785-795. [PMID: 34951851 DOI: 10.1109/tcbb.2021.3138189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Nuclei segmentation is an essential step in DNA ploidy analysis by image-based cytometry (DNA-ICM), which is widely used in cytopathology and allows an objective measurement of DNA content (ploidy). Routine fully supervised learning-based methods require pixel-wise labels, which are often tedious and expensive to obtain. In this paper, we propose a novel weakly supervised nuclei segmentation framework which exploits only sparsely annotated bounding boxes, without any segmentation labels. The key is to integrate traditional image segmentation and self-training into fully supervised instance segmentation. We first leverage traditional segmentation to generate coarse masks for each box-annotated nucleus to supervise the training of a teacher model, which is then responsible for both the refinement of these coarse masks and the generation of pseudo labels for unlabeled nuclei. These pseudo labels and refined masks, along with the original manually annotated bounding boxes, jointly supervise the training of the student model. The teacher and student share the same architecture, and the student is initialized from the teacher. We have extensively evaluated our method on both our DNA-ICM dataset and a public cytopathological dataset. Without bells and whistles, our method outperforms all existing weakly supervised entries on both datasets. Code and our DNA-ICM dataset are publicly available at https://github.com/CVIU-CSU/Weakly-Supervised-Nuclei-Segmentation.
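One schematic step of such self-training, keeping only confident teacher predictions as pseudo labels for the student, can be sketched as follows; the confidence threshold is illustrative and not taken from the paper:

```python
import numpy as np

def pseudo_label(teacher_probs, threshold=0.9):
    """Convert teacher per-pixel foreground probabilities into pseudo labels.

    Pixels the teacher is confident about become hard labels (1 = nucleus,
    0 = background); uncertain pixels are marked -1 and would be excluded
    from the student's loss, so the student is not trained on teacher noise.
    """
    labels = np.full(teacher_probs.shape, -1, dtype=np.int8)  # -1 = ignore
    labels[teacher_probs >= threshold] = 1
    labels[teacher_probs <= 1.0 - threshold] = 0
    return labels
```

In the full framework, these pseudo labels are combined with the refined coarse masks and the annotated boxes to supervise the student.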
|
46
|
FRE-Net: Full-region enhanced network for nuclei segmentation in histopathology images. Biocybern Biomed Eng 2023. [DOI: 10.1016/j.bbe.2023.02.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
|
47
|
Xu J, Wang A, Wang Y, Li J, Xu R, Shi H, Li X, Liang Y, Yang J, Gao TM. AICellCounter: A Machine Learning-Based Automated Cell Counting Tool Requiring Only One Image for Training. Neurosci Bull 2023; 39:83-88. [PMID: 35704210 PMCID: PMC9849527 DOI: 10.1007/s12264-022-00895-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 04/14/2022] [Indexed: 01/22/2023] Open
Affiliation(s)
- Junnan Xu, Yunfeng Wang, Jingting Li, Ruxia Xu, Hao Shi, Xiaowen Li, Yu Liang, Jianming Yang, Tian-Ming Gao
- State Key Laboratory of Organ Failure Research, Key Laboratory of Mental Health of the Ministry of Education, Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-Inspired Intelligence, Guangdong Province Key Laboratory of Psychiatric Disorders, Department of Neurobiology, School of Basic Medical Sciences, Southern Medical University, Guangzhou, 510515, China
- Andong Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, China
|
48
|
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
49
|
Efficient Staining-Invariant Nuclei Segmentation Approach Using Self-Supervised Deep Contrastive Network. Diagnostics (Basel) 2022; 12:diagnostics12123024. [PMID: 36553031 PMCID: PMC9777104 DOI: 10.3390/diagnostics12123024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Revised: 11/28/2022] [Accepted: 11/29/2022] [Indexed: 12/12/2022] Open
Abstract
Existing nuclei segmentation methods face challenges with hematoxylin and eosin (H&E) whole slide imaging (WSI) due to variations in staining methods and in nuclei shapes and sizes. Most existing approaches require a stain normalization step that may cause loss of source information and fail to handle the inter-scanner feature instability problem. To mitigate these issues, this article proposes an efficient staining-invariant nuclei segmentation method based on self-supervised contrastive learning and an effective weighted hybrid dilated convolution (WHDC) block. In particular, we propose a staining-invariant encoder (SIE) that includes convolution and transformer blocks. We also propose the WHDC block, allowing the network to learn multi-scale nuclei-relevant features to handle the variation in the sizes and shapes of nuclei. The SIE network is trained on five unlabeled WSI datasets using self-supervised contrastive learning and then used as a backbone for the downstream nuclei segmentation network. Our method outperforms existing approaches on multiple challenging WSI datasets without stain color normalization.
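The self-supervised contrastive pre-training can be illustrated with a generic SimCLR-style NT-Xent loss; this is a minimal numpy sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between embeddings of two augmented views.

    `z1` and `z2` are (n, d) embeddings of two views of the same n patches.
    Embeddings of matching views are pulled together; all other patches in
    the batch act as negatives and are pushed apart.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = z1.shape[0]
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), targets].mean())
```

For stain invariance, the two views of a patch would differ in stain/color augmentation, so the encoder learns features insensitive to staining.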
|
50
|
Zhang W, Zhang J, Yang S, Wang X, Yang W, Huang J, Wang W, Han X. Knowledge-Based Representation Learning for Nucleus Instance Classification From Histopathological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3939-3951. [PMID: 36037453 DOI: 10.1109/tmi.2022.3201981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The classification of nuclei in H&E-stained histopathological images is a fundamental step in the quantitative analysis of digital pathology. Most existing methods employ multi-class classification on the detected nucleus instances, while the annotation scale greatly limits their performance. Moreover, they often downplay the contextual information surrounding nucleus instances that is critical for classification. To explicitly provide contextual information to the classification model, we design a new structured input consisting of a content-rich image patch and a target instance mask. The image patch provides rich contextual information, while the target instance mask indicates the location of the instance to be classified and emphasizes its shape. Benefiting from our structured input format, we propose Structured Triplet for representation learning, a triplet learning framework on unlabelled nucleus instances with customized positive and negative sampling strategies. We pre-train a feature extraction model based on this framework with a large-scale unlabeled dataset, making it possible to train an effective classification model with limited annotated data. We also add two auxiliary branches, namely the attribute learning branch and the conventional self-supervised learning branch, to further improve its performance. As part of this work, we will release a new dataset of H&E-stained pathology images with nucleus instance masks, containing 20,187 patches of size 1024×1024, where each patch comes from a different whole-slide image. The model pre-trained on this dataset with our framework significantly reduces the burden of extensive labeling. We show a substantial improvement in nucleus classification accuracy compared with the state-of-the-art methods.
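The triplet objective underlying such representation learning can be illustrated generically; this margin-based sketch is the standard formulation, not the authors' customized Structured Triplet loss, and the margin value is illustrative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Margin-based triplet loss over (n, d) embedding batches.

    The anchor is pulled toward the positive instance (same class under the
    sampling strategy) and pushed away from the negative, until their squared
    distances differ by at least `margin`.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```

The customized sampling strategies in the paper determine which nucleus instances serve as positives and negatives for each anchor patch.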
|