1. Weitz P, Valkonen M, Solorzano L, Carr C, Kartasalo K, Boissin C, Koivukoski S, Kuusela A, Rasic D, Feng Y, Pouplier SS, Sharma A, Eriksson KL, Robertson S, Marzahl C, Gatenbee CD, Anderson ARA, Wodzinski M, Jurgas A, Marini N, Atzori M, Müller H, Budelmann D, Weiss N, Heldmann S, Lotz J, Wolterink JM, De Santi B, Patil A, Sethi A, Kondo S, Kasai S, Hirasawa K, Farrokh M, Kumar N, Greiner R, Latonen L, Laenkholm AV, Hartman J, Ruusuvuori P, Rantalainen M. The ACROBAT 2022 challenge: Automatic registration of breast cancer tissue. Med Image Anal 2024; 97:103257. PMID: 38981282. DOI: 10.1016/j.media.2024.103257.
Abstract
The alignment of tissue between histopathological whole-slide images (WSIs) is crucial for research and clinical applications. Advances in computing, deep learning, and the availability of large WSI datasets have revolutionised WSI analysis, yet the current state of the art in WSI registration remains unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, comprising 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue stained with routine diagnostic immunohistochemistry to their H&E-stained counterparts. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can achieve highly accurate registration, and we identify covariates that affect performance across methods. These results provide a comparison of current WSI registration methods and guide researchers in selecting and developing them.
Affiliation(s)
- Philippe Weitz
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Masi Valkonen
  - Institute of Biomedicine, University of Turku, Turku, Finland
- Leslie Solorzano
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Circe Carr
  - Institute of Biomedicine, University of Turku, Turku, Finland
- Kimmo Kartasalo
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Constance Boissin
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Sonja Koivukoski
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Aino Kuusela
  - Institute of Biomedicine, University of Turku, Turku, Finland
- Dusan Rasic
  - Department of Surgical Pathology, Zealand University Hospital, Roskilde, Denmark
- Yanbo Feng
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Abhinav Sharma
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Kajsa Ledesma Eriksson
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Stephanie Robertson
  - Department of Oncology and Pathology, Karolinska Institutet, Stockholm, Sweden
- Chandler D Gatenbee
  - Department of Integrated Mathematical Oncology, Moffitt Cancer Center, Tampa, USA
- Marek Wodzinski
  - Informatics Institute, University of Applied Sciences Western Switzerland, Switzerland; Department of Measurement and Electronics, AGH University of Kraków, Poland
- Artur Jurgas
  - Informatics Institute, University of Applied Sciences Western Switzerland, Switzerland; Department of Measurement and Electronics, AGH University of Kraków, Poland
- Niccolò Marini
  - Informatics Institute, University of Applied Sciences Western Switzerland, Switzerland; Department of Computer Science, University of Geneva, Geneva, Switzerland
- Manfredo Atzori
  - Informatics Institute, University of Applied Sciences Western Switzerland, Switzerland; Department of Neuroscience, University of Padova, Italy
- Henning Müller
  - Informatics Institute, University of Applied Sciences Western Switzerland, Switzerland; Medical Faculty, University of Geneva, Switzerland
- Daniel Budelmann
  - Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Nick Weiss
  - Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Stefan Heldmann
  - Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Johannes Lotz
  - Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Jelmer M Wolterink
  - Department of Applied Mathematics, Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Bruno De Santi
  - Multimodality Medical Imaging, Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Abhijeet Patil
  - Department of Electrical Engineering, Indian Institute of Technology, Bombay, India
- Amit Sethi
  - Department of Electrical Engineering, Indian Institute of Technology, Bombay, India
- Satoshi Kondo
  - Graduate School of Engineering, Muroran Institute of Technology, Hokkaido, Japan
- Satoshi Kasai
  - Faculty of Medical Technology, Niigata University of Health and Welfare, Niigata, Japan
- Mahtab Farrokh
  - Department of Computing Science, University of Alberta, Edmonton, Alberta
- Neeraj Kumar
  - Department of Computing Science, University of Alberta, Edmonton, Alberta
- Russell Greiner
  - Department of Computing Science, University of Alberta, Edmonton, Alberta; Alberta Machine Intelligence Institute, Edmonton, Canada
- Leena Latonen
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Johan Hartman
  - Department of Oncology and Pathology, Karolinska Institutet, Stockholm, Sweden; MedTechLabs, BioClinicum, Karolinska University Hospital, Stockholm, Sweden
- Pekka Ruusuvuori
  - Institute of Biomedicine, University of Turku, Turku, Finland; Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Mattias Rantalainen
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden; MedTechLabs, BioClinicum, Karolinska University Hospital, Stockholm, Sweden
2. Kildal W, Cyll K, Kalsnes J, Islam R, Julbø FM, Pradhan M, Ersvær E, Shepherd N, Vlatkovic L, Tekpli X, Garred Ø, Kristensen GB, Askautrud HA, Hveem TS, Danielsen HE. Deep learning for automated scoring of immunohistochemically stained tumour tissue sections - Validation across tumour types based on patient outcomes. Heliyon 2024; 10:e32529. PMID: 39040241. PMCID: PMC11261074. DOI: 10.1016/j.heliyon.2024.e32529.
Abstract
We aimed to develop deep learning (DL) models to detect protein expression in immunohistochemically (IHC) stained tissue sections, and to compare their accuracy and performance with manually scored, clinically relevant proteins in common cancer types. Five cancer patient cohorts (colon, two prostate, breast, and endometrial) were included. We developed separate DL models for scoring IHC-stained tissue sections with nuclear, cytoplasmic, and membranous staining patterns. For training, we used images with annotations of cells with positive and negative staining from the colon cohort stained for Ki-67 and PMS2 (nuclear model), and from prostate cohort 1 stained for PTEN (cytoplasmic model) and β-catenin (membranous model). The nuclear DL model was validated for MSH6 in the colon, MSH6 and PMS2 in the endometrium, Ki-67 and CyclinB1 in the prostate, and oestrogen and progesterone receptors in the breast cancer cohorts. The cytoplasmic DL model was validated for PTEN and Mapre2, and the membranous DL model for CD44 and Flotillin1, all in prostate cohorts. When comparing manual and DL scores in the validation sets, using manual scores as the ground truth, we observed an average correct classification rate of 91.5% (76.9-98.5%) for the nuclear model, 85.6% (73.3-96.6%) for the cytoplasmic model, and 78.4% (75.5-84.3%) for the membranous model. In survival analyses, manual and DL scores showed similar prognostic impact, with similar hazard ratios and p-values for all DL models. Our findings demonstrate that DL models offer a promising alternative to manual IHC scoring, providing efficiency and reproducibility across various data sources and markers.
Affiliation(s)
- Wanja Kildal
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Karolina Cyll
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Joakim Kalsnes
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Rakibul Islam
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Frida M. Julbø
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Manohar Pradhan
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Elin Ersvær
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Neil Shepherd
  - Gloucestershire Cellular Pathology Laboratory, Gloucester, GL53 7AN, UK
- Ljiljana Vlatkovic
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- OSBREAC
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
  - Gloucestershire Cellular Pathology Laboratory, Gloucester, GL53 7AN, UK
  - Department of Medical Genetics, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo and Oslo University Hospital, NO-0450, Oslo, Norway
  - Department of Pathology, Oslo University Hospital, NO-0424, Oslo, Norway
  - Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford, OX3 9DU, UK
- Xavier Tekpli
  - Department of Medical Genetics, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo and Oslo University Hospital, NO-0450, Oslo, Norway
- Øystein Garred
  - Department of Pathology, Oslo University Hospital, NO-0424, Oslo, Norway
- Gunnar B. Kristensen
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Hanne A. Askautrud
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Tarjei S. Hveem
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
- Håvard E. Danielsen
  - Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424, Oslo, Norway
  - Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford, OX3 9DU, UK
3. Subhashini R, Velswamy R, Sree Rathna Lakshmi NVS, Sivanandam C. An innovative breast cancer detection framework using multiscale dilated densenet with attention mechanism. Network (Bristol, England) 2024:1-37. PMID: 38648017. DOI: 10.1080/0954898x.2024.2343348.
Abstract
Deadly cancer-related diseases affect both developed and underdeveloped nations worldwide. Effective network learning is crucial to identify and categorize breast carcinoma more reliably in vast and unbalanced image datasets. The absence of early cancer symptoms makes early identification challenging. Therefore, from the perspectives of diagnosis, prevention, and therapy, cancer remains a healthcare concern that numerous researchers work to address. It is therefore essential to design an innovative breast cancer detection model that accounts for the complications of classical techniques. Initially, breast cancer images are gathered from online sources and subjected to segmentation, performed with an Adaptive Trans-Dense-Unet (A-TDUNet) whose parameters are tuned by the developed Modified Sheep Flock Optimization Algorithm (MSFOA). The segmented images then enter the breast cancer detection stage, where detection is performed by a Multiscale Dilated DenseNet with Attention Mechanism (MDD-AM). In the result validation, the negative predictive value (NPV) and accuracy rate of the designed approach are 96.719% and 93.494%, respectively. Hence, the implemented breast cancer detection model achieved better efficacy than baseline detection methods under diverse experimental conditions.
Affiliation(s)
- R Subhashini
  - Department of Information Technology, Sona College of Technology, Salem, Tamil Nadu, India
- Rajasekar Velswamy
  - Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- N V S Sree Rathna Lakshmi
  - Department of Electronics and Communication Engineering, Agni College of Technology, Thazhambur, Tamil Nadu, India
- Chakaravarthi Sivanandam
  - Department of Computer Science and Engineering, Panimalar Engineering College, Poonamallee, Chennai, Tamil Nadu, India
4. Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. PMID: 38104516. DOI: 10.1016/j.compbiomed.2023.107777.
Abstract
The identification of medical images is an essential task in computer-aided diagnosis and in medical image retrieval and mining. Medical image data mainly include electronic health record data and gene information data, among others. Although intelligent imaging offers a better scheme for medical image analysis than traditional methods that rely on handcrafted features, the task remains challenging due to the diversity of imaging modalities and clinical pathologies. Many medical image identification methods provide a good scheme for medical image analysis. Concepts pertinent to these methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image-processing technologies for medical images, are analyzed and summarized in this paper. We reviewed recent studies to provide a comprehensive overview of how these methods are applied in various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasize the latest progress and contributions of different methods, summarized by application scenario, including classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by application area, such as pulmonary, brain, digital pathology, skin, renal, breast, neuromyelitis, vertebral, and musculoskeletal applications. A critical discussion of open challenges and directions for future research concludes the review; in particular, leading algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
  - School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
  - School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
  - School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
  - School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong
  - School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
5. Liu Y, Zhen T, Fu Y, Wang Y, He Y, Han A, Shi H. AI-Powered Segmentation of Invasive Carcinoma Regions in Breast Cancer Immunohistochemical Whole-Slide Images. Cancers (Basel) 2023; 16:167. PMID: 38201594. PMCID: PMC10778369. DOI: 10.3390/cancers16010167.
Abstract
AIMS The automation of quantitative evaluation for breast immunohistochemistry (IHC) plays a crucial role in reducing the workload of pathologists and enhancing the objectivity of diagnoses. However, current methods face challenges in achieving fully automated immunohistochemistry quantification due to the complexity of segmenting the tumor area into distinct ductal carcinoma in situ (DCIS) and invasive carcinoma (IC) regions. Moreover, the quantitative analysis of immunohistochemistry requires a specific focus on invasive carcinoma regions. METHODS AND RESULTS In this study, we propose an innovative approach to automatically identify invasive carcinoma regions in breast cancer immunohistochemistry whole-slide images (WSIs). Our method leverages a neural network that combines multi-scale morphological features with boundary features, enabling precise segmentation of invasive carcinoma regions without the need for additional H&E and P63 staining slides. In addition, we introduce an advanced semi-supervised learning algorithm, allowing efficient training of the model using unlabeled data. To evaluate the effectiveness of our approach, we constructed a dataset consisting of 618 IHC-stained WSIs from 170 cases, including four types of staining (ER, PR, HER2, and Ki-67). Notably, the model demonstrated an intersection over union (IoU) score exceeding 80% on the test set. Furthermore, to ascertain the practical utility of our model in IHC quantitative evaluation, we constructed a fully automated Ki-67 scoring system based on the model's predictions. Comparative experiments demonstrated that our system exhibited high consistency with the scores given by experienced pathologists. CONCLUSIONS Our developed model excels at accurately distinguishing between DCIS and invasive carcinoma regions in breast cancer immunohistochemistry WSIs. This method paves the way for a clinically available, fully automated immunohistochemistry quantitative scoring system.
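For reference, the intersection-over-union score used to evaluate the segmentation above is the standard Jaccard index on binary masks. A minimal, generic sketch (not the authors' code; mask names and the empty-mask convention are illustrative choices):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union (Jaccard index) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

# Toy 2x2 example: 1 overlapping pixel out of 3 in the union -> IoU = 1/3
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
print(round(iou(pred, target), 3))  # 0.333
```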
Affiliation(s)
- Yiqing Liu
  - Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Tiantian Zhen
  - Department of Pathology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
- Yuqiu Fu
  - Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Yizhi Wang
  - Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Yonghong He
  - Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Anjia Han
  - Department of Pathology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
- Huijuan Shi
  - Department of Pathology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
6. Honkamaa J, Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P, Marttinen P. Deformation equivariant cross-modality image synthesis with paired non-aligned training data. Med Image Anal 2023; 90:102940. PMID: 37666115. DOI: 10.1016/j.media.2023.102940.
Abstract
Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods that allow training with paired but misaligned data have started to emerge. However, no robust, well-performing methods applicable to a wide range of real-world data sets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method consists of joint training of an image synthesis network together with separate registration networks, and it allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult data sets.
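The deformation-equivariance idea behind the loss functions above can be made concrete with a toy numerical check (purely illustrative; a real implementation would use learned networks and spatial deformations, not a flip, and all names here are hypothetical): a synthesis map g is equivariant under a deformation T when g(T(x)) = T(g(x)), so a loss can penalize the discrepancy between the two orders of application.

```python
import numpy as np

def equivariance_penalty(g, deform, x):
    """Mean squared difference between synthesizing after deforming and
    deforming after synthesizing: mean((g(T(x)) - T(g(x)))**2)."""
    return float(np.mean((g(deform(x)) - deform(g(x))) ** 2))

x = np.arange(16.0).reshape(4, 4)
flip = np.flipud                                       # trivial stand-in "deformation" T

g_equivariant = lambda img: 2.0 * img                  # commutes with flipping
g_biased = lambda img: img + np.arange(4.0)[:, None]   # position-dependent, does not commute

print(equivariance_penalty(g_equivariant, flip, x))    # 0.0
print(equivariance_penalty(g_biased, flip, x))         # 5.0
```

A pointwise map like scaling incurs zero penalty, while the position-dependent shift is caught, which is exactly the property the loss is meant to enforce.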
Affiliation(s)
- Joel Honkamaa
  - Department of Computer Science, Aalto University, Finland
- Umair Khan
  - Institute of Biomedicine, University of Turku, Finland
- Sonja Koivukoski
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Mira Valkonen
  - Faculty of Medicine and Health Technology, Tampere University, Finland
- Leena Latonen
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori
  - Institute of Biomedicine, University of Turku, Finland; Faculty of Medicine and Health Technology, Tampere University, Finland
7. Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. PMID: 37314068. DOI: 10.1002/gcc.23177.
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for the prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - University Health Network, Toronto, Ontario, Canada
  - Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - Vector Institute, Toronto, Ontario, Canada
  - Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
8. Weitz P, Valkonen M, Solorzano L, Carr C, Kartasalo K, Boissin C, Koivukoski S, Kuusela A, Rasic D, Feng Y, Sinius Pouplier S, Sharma A, Ledesma Eriksson K, Latonen L, Laenkholm AV, Hartman J, Ruusuvuori P, Rantalainen M. A Multi-Stain Breast Cancer Histological Whole-Slide-Image Data Set from Routine Diagnostics. Sci Data 2023; 10:562. PMID: 37620357. PMCID: PMC10449765. DOI: 10.1038/s41597-023-02422-6.
Abstract
The analysis of FFPE tissue sections stained with haematoxylin and eosin (H&E) or immunohistochemistry (IHC) is essential for the pathologic assessment of surgically resected breast cancer specimens. IHC staining has been broadly adopted into diagnostic guidelines and routine workflows to assess the status of several established biomarkers, including ER, PGR, HER2 and KI67. Biomarker assessment can also be facilitated by computational pathology image analysis methods, which have made numerous substantial advances recently, often based on publicly available whole slide image (WSI) data sets. However, the field is still considerably limited by the sparsity of public data sets. In particular, there are no large, high quality publicly available data sets with WSIs of matching IHC and H&E-stained tissue sections from the same tumour. Here, we publish the currently largest publicly available data set of WSIs of tissue sections from surgical resection specimens from female primary breast cancer patients with matched WSIs of corresponding H&E and IHC-stained tissue, consisting of 4,212 WSIs from 1,153 patients.
Affiliation(s)
- Philippe Weitz
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Masi Valkonen
  - Institute of Biomedicine, University of Turku, Turku, Finland
- Leslie Solorzano
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Circe Carr
  - Institute of Biomedicine, University of Turku, Turku, Finland
- Kimmo Kartasalo
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Constance Boissin
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Sonja Koivukoski
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Aino Kuusela
  - Institute of Biomedicine, University of Turku, Turku, Finland
- Dusan Rasic
  - Department of Surgical Pathology, Zealand University Hospital, Roskilde, Denmark
  - Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Yanbo Feng
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Sandra Sinius Pouplier
  - Department of Surgical Pathology, Zealand University Hospital, Roskilde, Denmark
  - Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Abhinav Sharma
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Kajsa Ledesma Eriksson
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Leena Latonen
  - Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
  - Foundation for the Finnish Cancer Institute, Helsinki, Finland
- Anne-Vibeke Laenkholm
  - Department of Surgical Pathology, Zealand University Hospital, Roskilde, Denmark
  - Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Johan Hartman
  - Department of Oncology and Pathology, Karolinska Institutet, Stockholm, Sweden
  - MedTechLabs, BioClinicum, Karolinska University Hospital, Stockholm, Sweden
- Pekka Ruusuvuori
  - Institute of Biomedicine, University of Turku, Turku, Finland
  - Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Mattias Rantalainen
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
  - MedTechLabs, BioClinicum, Karolinska University Hospital, Stockholm, Sweden
9. Ram S, Tang W, Bell AJ, Pal R, Spencer C, Buschhaus A, Hatt CR, diMagliano MP, Rehemtulla A, Rodríguez JJ, Galban S, Galban CJ. Lung cancer lesion detection in histopathology images using graph-based sparse PCA network. Neoplasia 2023; 42:100911. PMID: 37269818. DOI: 10.1016/j.neo.2023.100911.
Abstract
Early detection of lung cancer is critical for improving patient survival. To address the clinical need for efficacious treatments, genetically engineered mouse models (GEMMs) have become integral to identifying and evaluating the molecular underpinnings of this complex disease that may be exploited as therapeutic targets. Assessment of GEMM tumor burden on histopathological sections by manual inspection is both time consuming and prone to subjective bias. There is therefore a clear need for computer-aided diagnostic tools that can analyze these histopathology images accurately and efficiently. In this paper, we propose a simple machine learning approach called the graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained with hematoxylin and eosin (H&E). Our method comprises four steps: 1) cascaded graph-based sparse PCA, 2) PCA binary hashing, 3) block-wise histograms, and 4) support vector machine (SVM) classification. In our architecture, graph-based sparse PCA is employed to learn the filter banks of the multiple stages of a convolutional network, followed by PCA hashing and block histograms for indexing and pooling. The features extracted by the GS-PCA are then fed to an SVM classifier. We evaluate the performance of the proposed algorithm on H&E slides obtained from an inducible K-ras G12D lung cancer mouse model using precision/recall rates, the Fβ-score, the Tanimoto coefficient, and the area under the curve (AUC) of the receiver operating characteristic (ROC), and show that our algorithm is efficient and provides improved detection accuracy compared to existing algorithms.
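The four-step pipeline above can be sketched in heavily simplified form (not the authors' implementation: plain PCA filters stand in for graph-based sparse PCA, a single stage replaces the cascade, and the synthetic images, patch size, and filter count are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def pca_filters(images, k=4, patch=5):
    """Step 1 (simplified): learn k convolution filters as the top PCA
    components of mean-removed image patches."""
    patches = []
    for img in images:
        for i in range(0, img.shape[0] - patch + 1, patch):
            for j in range(0, img.shape[1] - patch + 1, patch):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())
    comps = PCA(n_components=k).fit(np.array(patches)).components_
    return comps.reshape(k, patch, patch)

def convolve_valid(img, f):
    """Valid-mode 2D correlation, written out directly for clarity."""
    h, w = f.shape
    out = np.empty((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * f)
    return out

def features(img, filters):
    """Steps 2-3 (simplified): binarize the k filter responses into a k-bit
    code per pixel, then pool the codes into a normalized histogram."""
    maps = np.stack([convolve_valid(img, f) > 0 for f in filters])  # binary hashing
    codes = np.zeros(maps.shape[1:], dtype=int)
    for b, m in enumerate(maps):
        codes += m.astype(int) << b
    n_bins = 2 ** len(filters)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))   # block histogram
    return hist / hist.sum()

# Synthetic demo: "lesion" images contain a bright blob, "normal" ones are noise.
def make_image(lesion):
    img = rng.normal(size=(20, 20))
    if lesion:
        img[5:15, 5:15] += 3.0
    return img

imgs = [make_image(i % 2 == 0) for i in range(40)]
y = np.array([i % 2 == 0 for i in range(40)], dtype=int)
filters = pca_filters(imgs)
X = np.array([features(img, filters) for img in imgs])
clf = SVC().fit(X[:30], y[:30])  # step 4: SVM on the pooled histogram features
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```

The unsupervised filter learning followed by hashing, pooling, and a shallow classifier is the PCANet-style design the abstract describes; the paper's contribution (graph regularization and sparsity in the filter-learning step) is omitted here for brevity.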
Collapse
Affiliation(s)
- Sundaresh Ram
- Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA.
- Wenfei Tang
- Department of Computer Science and Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Alexander J Bell
- Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Ravi Pal
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Cara Spencer
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109, USA
- Charles R Hatt
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA; Imbio LLC, Minneapolis, MN 55405, USA
- Marina Pasca diMagliano
- Departments of Surgery, and Cell and Developmental Biology, University of Michigan, Ann Arbor, MI 48109, USA
- Alnawaz Rehemtulla
- Departments of Radiology, and Radiation Oncology, University of Michigan, Ann Arbor, MI 48109, USA
- Jeffrey J Rodríguez
- Departments of Electrical and Computer Engineering, and Biomedical Engineering, The University of Arizona, Tucson, AZ 85721, USA
- Stefanie Galban
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Craig J Galban
- Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
10
He Q, Liu Y, Pan F, Duan H, Guan J, Liang Z, Zhong H, Wang X, He Y, Huang W, Guan T. Unsupervised domain adaptive tumor region recognition for Ki67 automated assisted quantification. Int J Comput Assist Radiol Surg 2023; 18:629-640. [PMID: 36371746] [DOI: 10.1007/s11548-022-02781-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/31/2022] [Accepted: 10/13/2022] [Indexed: 11/15/2022]
Abstract
PURPOSE Ki67 is a protein associated with tumor proliferation and metastasis in breast cancer and is an essential prognostic factor. Clinical work requires recognizing tumor regions on Ki67-stained whole-slide images (WSIs) before quantification. Deep learning has the potential to provide assistance but largely relies on massive annotations and consumes a huge amount of time and energy. Hence, a novel tumor region recognition approach is proposed for more precise Ki67 quantification. METHODS An unsupervised domain adaptation (UDA) method is proposed, which combines adversarial training and self-training. The model, trained on labeled hematoxylin and eosin (H&E) data and unlabeled Ki67 data, can recognize tumor regions in Ki67 WSIs. Based on the UDA method, a Ki67 automated assisted quantification system is developed, comprising foreground segmentation, tumor region recognition, cell counting, and WSI-level score calculation. RESULTS The proposed UDA method achieves high performance in tumor region recognition and Ki67 quantification. The AUC reached 0.9915, 0.9352, and 0.9689 on the validation set and the internal and external test sets, respectively, substantially exceeding the baseline (0.9334, 0.9167, 0.9408) and rivaling the fully supervised method (0.9950, 0.9284, 0.9652). Evaluation of automated quantification on 148 WSIs showed statistical agreement with pathological reports. CONCLUSION The model trained by the proposed method accurately recognizes Ki67 tumor regions. The proposed UDA method can be readily extended to other types of immunohistochemical staining images. The automated quantification results are accurate and interpretable, providing assistance to both junior and senior pathologists in their interpretation.
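The self-training half of the UDA scheme described above can be sketched in isolation: a model fit on the labeled source (H&E) domain pseudo-labels confident samples from the unlabeled target (Ki67) domain and is refit on the union. The sketch below is a minimal numpy illustration under stated assumptions, with a toy nearest-centroid classifier standing in for the paper's deep segmentation network; all function names are illustrative:

```python
import numpy as np

def fit_centroids(x, y):
    # Toy stand-in classifier: one centroid per class.
    return np.stack([x[y == c].mean(axis=0) for c in np.unique(y)])

def predict_proba(centroids, x):
    # Softmax over negative distances gives a confidence per class.
    d = -np.linalg.norm(x[:, None, :] - centroids[None], axis=2)
    e = np.exp(d - d.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def self_train(xl, yl, xu, threshold=0.8, rounds=3):
    # Iteratively adopt confident pseudo-labels on the unlabeled target
    # domain, then refit on labeled + pseudo-labeled data.
    x, y = xl, yl
    for _ in range(rounds):
        c = fit_centroids(x, y)
        p = predict_proba(c, xu)
        keep = p.max(axis=1) >= threshold
        if not keep.any():
            break
        x = np.vstack([xl, xu[keep]])
        y = np.concatenate([yl, p[keep].argmax(axis=1)])
    return fit_centroids(x, y)
```

The adversarial component of the paper (domain-discriminator training) is a separate loss term and is not shown here.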
Affiliation(s)
- Qiming He
- Department of Life and Health, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Yiqing Liu
- Department of Life and Health, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Feiyang Pan
- Department of Life and Health, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Hufei Duan
- Department of Life and Health, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Jian Guan
- Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhendong Liang
- Department of Life and Health, Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Hui Zhong
- Huaibei Maternal and Child Health Care Hospital, Huaibei, China
- Xing Wang
- New H3C Technologies Co., Ltd., Hangzhou, China
- Yonghong He
- New H3C Technologies Co., Ltd., Hangzhou, China
- Wenting Huang
- Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China.
- Tian Guan
- Department of Life and Health, Tsinghua Shenzhen International Graduate School, Shenzhen, China.
11
Chan RC, To CKC, Cheng KCT, Yoshikazu T, Yan LLA, Tse GM. Artificial intelligence in breast cancer histopathology. Histopathology 2023; 82:198-210. [PMID: 36482271] [DOI: 10.1111/his.14820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/01/2022] [Revised: 09/22/2022] [Accepted: 09/28/2022] [Indexed: 12/13/2022]
Abstract
This is a review of the use of artificial intelligence for digital breast pathology. A systematic search on PubMed was conducted, identifying 17,324 research papers related to breast cancer pathology. Following a semimanual screening, 664 papers were retrieved and reviewed. The papers are grouped into six major tasks performed by pathologists, namely molecular and hormonal analysis, grading, mitotic figure counting, Ki-67 indexing, tumour-infiltrating lymphocyte assessment, and lymph node metastasis identification. Under each task, open-source datasets available for building artificial intelligence (AI) tools are also listed. Many AI tools showed promise and demonstrated feasibility in the automation of routine pathology investigations. We expect continued growth of AI in this field as new algorithms mature.
Affiliation(s)
- Ronald Ck Chan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Chun Kit Curtis To
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Ka Chuen Tom Cheng
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Tada Yoshikazu
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Lai Ling Amy Yan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Gary M Tse
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
12
Parvathi S, Vaishnavi P. An efficient breast cancer detection with secured cloud storage & reliability analysis using FMEA. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-221973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/15/2022]
Abstract
Breast cancer is among the most dangerous cancers affecting women. Around 2.3 million women worldwide are affected, and outcomes are poor when the disease is not treated at an early stage. Early diagnosis is therefore essential to save the lives of millions of women. Many machine learning models for breast cancer detection have been developed in recent years. However, currently available work focuses only on improving prediction accuracy and pays little attention to providing reliable services. This work presents an efficient breast cancer detection mechanism using deep learning strategies. Factors such as breast image shape, image intensity, image regions, illumination, and contrast all influence breast cancer identification. This study offers a robust image detection process for breast cancer mammography images that considers the whole-slide image. In the preprocessing stage, noise is removed from the input image using a Gaussian filter (GF). The preprocessed image then undergoes Cauchy-distribution-based segmentation and Shearlet-based feature extraction, after which discriminative features are selected using entropy-PCA-based feature selection. Finally, the breast cancer region is accurately classified as benign or malignant using unified probability with LSTM neural network classification (UP-LSTM) on the whole-slide image (WSI). The detection results are stored in the cloud under a security mechanism for further monitoring; to provide efficient security, Bio-inspired Iterative Honey Bee (BI-IHB) encryption is employed, with decryption on user request. The reliability of the stored data is then assessed using the FMEA (failure mode and effects analysis) approach. Experimental analysis shows that the UP-LSTM classifier model offers an accuracy of 99.26%, sensitivity of 100%, and precision of 98.59%, which is better than other state-of-the-art techniques.
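The Gaussian-filter preprocessing step named in this abstract is the one component simple enough to sketch exactly. A self-contained separable implementation in numpy (helper names are illustrative; a production pipeline would typically call scipy.ndimage.gaussian_filter instead):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    # Normalized 1-D Gaussian kernel, truncated at ~3 sigma by default.
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, sigma):
    # Separable Gaussian smoothing: convolve rows and columns with the
    # same 1-D kernel, using edge padding so the output size is unchanged.
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    out = np.pad(img, ((pad, pad), (0, 0)), mode='edge')
    out = np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, out)
    out = np.pad(out, ((0, 0), (pad, pad)), mode='edge')
    return np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, out)
```

Separability makes the cost O(HWk) per axis rather than O(HWk²) for a full 2-D kernel, which matters at whole-slide resolutions.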
Affiliation(s)
- S. Parvathi
- Department of Computer Applications, UCE, Anna University, BIT Campus, Trichy, India
- P. Vaishnavi
- Department of Computer Applications, UCE, Anna University, BIT Campus, Trichy, India
13
Number of Convolution Layers and Convolution Kernel Determination and Validation for Multilayer Convolutional Neural Network: Case Study in Breast Lesion Screening of Mammographic Images. Processes (Basel) 2022. [DOI: 10.3390/pr10091867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/16/2022]
Abstract
Mammography is a low-dose X-ray imaging technique that can detect breast tumors, cysts, and calcifications, aiding early detection of potential breast cancer and reducing the mortality rate. This study employed a multilayer convolutional neural network (MCNN) to screen breast lesions in mammographic images. Within the region of interest, a specific bounding box is used to extract feature maps before automatic image segmentation and feature classification into three classes, namely normal, benign tumor, and malignant tumor. Multiconvolution processes with kernel convolution operations provide noise removal and sharpening effects superior to those of other image processing methods; they strengthen the features and contour of the desired object and increase the classifier's classification accuracy. However, excessive convolution layers and kernel convolution operations increase the computational complexity, computation time, and training time of the classifier. Thus, this study aimed to determine a suitable number of convolution layers and kernels to achieve a classifier with high learning performance and classification accuracy, with a case study in breast lesion screening of mammographic images. The Mammographic Image Analysis Society Digital Mammogram Database (United Kingdom National Breast Screening Program) was used in experimental tests to determine the number of convolution layers and kernels. The optimal classifier's performance is evaluated using accuracy (%), precision (%), recall (%), and F1 score to test and validate the most suitable MCNN model architecture.
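The trade-off this abstract studies (more layers and kernels means more computation) can be made concrete by counting feature-map sizes and multiply-accumulate operations per layer. A small sketch under the usual stride-1, no-padding convention (function names are illustrative, not from the paper):

```python
def conv_output_size(size, kernel, stride=1, pad=0):
    # Spatial size of a convolution output: floor((n + 2p - k) / s) + 1.
    return (size + 2 * pad - kernel) // stride + 1

def layer_macs(size, kernel, in_ch, out_ch, stride=1, pad=0):
    # Multiply-accumulates for one conv layer on a square input.
    out = conv_output_size(size, kernel, stride, pad)
    return out * out * kernel * kernel * in_ch * out_ch

def total_macs(size, layers):
    # Cost of a stack of conv layers, e.g. an MCNN candidate architecture.
    # `layers` is a list of (kernel_size, out_channels) pairs.
    macs, in_ch = 0, 1
    for kernel, out_ch in layers:
        macs += layer_macs(size, kernel, in_ch, out_ch)
        size = conv_output_size(size, kernel)
        in_ch = out_ch
    return macs
```

Comparing total_macs across candidate depths and kernel counts quantifies exactly the complexity growth the abstract describes, before any training is run.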
14
Automatic Breast Tumor Screening of Mammographic Images with Optimal Convolutional Neural Network. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12084079] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Indexed: 02/01/2023]
Abstract
Mammography is a first-line imaging examination used for early breast tumor screening. Computational techniques based on deep learning, such as the convolutional neural network (CNN), are routinely used as classifiers for rapid automatic breast tumor screening in mammography examination. To classify multiple feature maps on two-dimensional (2D) digital images, a multilayer CNN uses multiple convolutional-pooling layers and fully connected networks, which can increase screening accuracy and reduce the error rate. However, this multilayer architecture has some limitations, such as high computational complexity, large-scale training dataset requirements, and poor suitability for real-time clinical applications. Hence, this study designs an optimal multilayer architecture for a CNN-based classifier for automatic breast tumor screening, consisting of three convolutional layers, two pooling layers, a flattening layer, and a classification layer. In the first convolutional layer, the proposed classifier performs a fractional-order convolutional process to enhance the image and remove unwanted noise, obtaining the desired object's edges; in the second and third convolutional-pooling layers, two kernel convolution and pooling operations ensure continuous enhancement and sharpening of the feature patterns for further extraction of the desired features at different scales and levels, while also reducing the dimensions of the feature patterns. In the classification layer, a multilayer network with an adaptive moment estimation algorithm refines the classifier's network parameters for mammography classification by separating tumor-free feature patterns from tumor feature patterns. Images are selected from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and K-fold cross-validations are performed.
The experimental results indicate promising performance for automatic breast tumor screening in terms of recall (%), precision (%), accuracy (%), F1 score, and Youden’s index.
15
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051] [PMCID: PMC9007400] [DOI: 10.1186/s12880-022-00793-7] [Citation(s) in RCA: 113] [Impact Index Per Article: 56.5] [Received: 08/25/2021] [Accepted: 03/30/2022] [Indexed: 02/07/2023]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep (n = 33) and shallow (n = 24) models. Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The remaining studies applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany.
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
16
Zhu J, Liu M, Li X. Progress on deep learning in digital pathology of breast cancer: a narrative review. Gland Surg 2022; 11:751-766. [PMID: 35531111] [PMCID: PMC9068546] [DOI: 10.21037/gs-22-11] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 01/06/2022] [Accepted: 03/04/2022] [Indexed: 01/26/2024]
Abstract
BACKGROUND AND OBJECTIVE Pathology is the gold standard for breast cancer diagnosis and has important guiding value in formulating the clinical treatment plan and predicting prognosis. However, traditional microscopic examination of tissue sections is time-consuming and labor-intensive, with unavoidable subjective variation. Deep learning (DL) can evaluate and extract the most important information from images with less need for human instruction, providing a promising approach to assist in the pathological diagnosis of breast cancer. This review aims to provide an informative and up-to-date summary of DL-based diagnostic systems for breast cancer pathology image analysis and to discuss the advantages of, and challenges to, the routine clinical application of digital pathology. METHODS A PubMed search with keywords ("breast neoplasm" or "breast cancer") and ("pathology" or "histopathology") and ("artificial intelligence" or "deep learning") was conducted. Relevant publications in English published from January 2000 to October 2021 were screened manually by title, abstract, and even full text to determine their true relevance. References from the retrieved articles and other supplementary articles were also studied. KEY CONTENT AND FINDINGS DL-based computerized image analysis has achieved impressive results in breast cancer pathology diagnosis, classification, grading, staging, and prognostic prediction, providing powerful methods for faster, more reproducible, and more precise diagnoses. However, all artificial intelligence (AI)-assisted pathology diagnostic models are still in the experimental stage, and improving their economic efficiency and clinical adaptability remains the focus of further research. CONCLUSIONS Having searched PubMed and other databases and summarized the application of DL-based AI models in breast cancer pathology, we conclude that DL is undoubtedly a promising tool for assisting pathologists in routine work, but further studies are needed to realize the digitization and automation of clinical pathology.
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
- Mei Liu
- Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
17
Fulawka L, Blaszczyk J, Tabakov M, Halon A. Assessment of Ki-67 proliferation index with deep learning in DCIS (ductal carcinoma in situ). Sci Rep 2022; 12:3166. [PMID: 35210450] [PMCID: PMC8873444] [DOI: 10.1038/s41598-022-06555-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Received: 08/11/2021] [Accepted: 01/31/2022] [Indexed: 12/26/2022]
Abstract
The proliferation index (PI) is crucial in histopathologic diagnostics, particularly of tumors. It is calculated from Ki-67 protein expression assessed by immunohistochemistry. PI is routinely evaluated by a pathologist's visual assessment of the sample. However, this approach is far from ideal due to its poor intra- and interobserver reproducibility and time-consuming nature. These factors force the community to seek more precise solutions. Virtual pathology, increasingly popular in diagnostics and armed with artificial intelligence, may potentially address this issue. The proposed solution calculates the Ki-67 proliferation index by utilizing a deep learning model and fuzzy-set interpretations for hot-spot detection. The obtained region of interest is then used to segment relevant cells via classical methods of image processing. The index value is approximated by relating the total surface area occupied by immunopositive cells to the total surface area of relevant cells. The achieved results are compared to the manual calculation of the Ki-67 index by a domain expert. To increase the reliability of the results, we trained several models in a threefold manner and compared the impact of different hyperparameters. Our best proposed method estimates PI with a mean absolute error of 0.024, which gives it a significant advantage over the current state-of-the-art solution.
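The PI approximation described above (immunopositive area over total relevant-cell area) reduces to a one-line mask ratio once segmentation has produced the two binary masks. A minimal numpy sketch, with illustrative names (the paper's actual pipeline also includes the deep-learning hot-spot detection upstream):

```python
import numpy as np

def proliferation_index(immunopositive_mask, relevant_cell_mask):
    # PI ≈ surface area of Ki-67 immunopositive cells divided by the
    # surface area of all relevant cells. Masks are boolean pixel maps
    # produced by the upstream segmentation step.
    positive_area = np.logical_and(immunopositive_mask, relevant_cell_mask).sum()
    total_area = relevant_cell_mask.sum()
    return positive_area / total_area if total_area else 0.0
```

Intersecting the positive mask with the relevant-cell mask guards against counting immunopositive pixels that fall outside the segmented cells.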
Affiliation(s)
- Lukasz Fulawka
- Molecular Pathology Centre Cellgen, ul. Piwna 13, 50-353, Wroclaw, Poland.
- Jakub Blaszczyk
- Department of Computational Intelligence, Wroclaw University of Science and Technology, wybrzeże Wyspiańskiego 27, 50-370, Wrocław, Poland
- Martin Tabakov
- Department of Computational Intelligence, Wroclaw University of Science and Technology, wybrzeże Wyspiańskiego 27, 50-370, Wrocław, Poland
- Agnieszka Halon
- Department of General and Experimental Pathology, Wroclaw Medical University, ul. Borowska 213, 50-556, Wroclaw, Poland
18
Luo R, Zheng C, Song W, Tan Q, Shi Y, Han X. High-throughput and multi-phases identification of autoantibodies in diagnosing early-stage breast cancer and subtypes. Cancer Sci 2021; 113:770-783. [PMID: 34843149] [PMCID: PMC8819333] [DOI: 10.1111/cas.15227] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 07/07/2021] [Revised: 11/12/2021] [Accepted: 11/21/2021] [Indexed: 12/12/2022]
Abstract
Autoantibodies (AAbs) targeting tumor-associated antigens (TAAs) have potential for the early detection of breast cancer. Here, 574 early-stage breast cancer (ES-BC) patients spanning 4 subtypes (Luminal A, Luminal B, HER2+, TN), 126 benign breast disease (BBD) patients, and 199 normal healthy controls (NHC) were separated into three phases to discover, verify, and validate AAbs. In the discovery phase, using a high-throughput protein microarray, 37 AAbs with sensitivities of 31.25%-86.25% and specificities over 73% in ES-BC, and 40 AAbs with different positive rates between subtypes, were identified as candidates. In the verification phase, 18 AAbs were significantly increased compared with the Control group (BBD and NHC) on a focused array; 10 of these 18 AAbs exhibited a significant difference between subtypes (P < .05). In the ELISA validation phase, 5 novel AAbs (anti-KJ901215, -FAM49B, -HYI, -GARS, -CRLF3) exhibited significantly higher levels in ES-BC compared with BBD/NHC (P < .05). The sensitivities of individual AAbs and a 5-AAb panel were 20.41%-28.57% and 38.78%, whereas the specificities were over 90% and 85.94%, respectively. Four of these AAbs (all except anti-GARS) differed significantly between the TN and non-TN subtypes (P < .05). We constructed 3 random forest classifier models based on AAbs to discriminate ES-BC from Control or BBD, and to discern the TN subtype, which yielded areas under the curve of 0.870, 0.860, and 0.875, respectively. Biological interaction analysis revealed that the 4 TAAs other than KJ901215 were associated with well-known proteins of BC. This study discovered and stepwise validated 5 novel AAbs with the potential to diagnose ES-BC and discern the TN subtype, indicating the easy-to-detect and minimally invasive diagnostic value of serum AAbs ahead of biopsy for future application.
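The area under the curve reported for the classifier models above has a simple rank-based definition that can be computed without any ML framework. A pure-numpy sketch (not the authors' code; this minimal version assumes untied scores, whereas a production implementation would use midranks for ties):

```python
import numpy as np

def roc_auc(scores, labels):
    # Rank-based AUC (Mann-Whitney U statistic): the probability that a
    # randomly chosen positive sample outscores a randomly chosen negative.
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.870, as for the ES-BC vs. Control model, means a random case outscores a random control 87% of the time.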
Affiliation(s)
- Rongrong Luo
- Department of Clinical Laboratory, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Cuiling Zheng
- Department of Clinical Laboratory, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Wenya Song
- Department of Medical Oncology, Beijing Key Laboratory of Clinical Study on Anticancer Molecular Targeted Drugs, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Qiaoyun Tan
- Department of Medical Oncology, Beijing Key Laboratory of Clinical Study on Anticancer Molecular Targeted Drugs, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Yuankai Shi
- Department of Medical Oncology, Beijing Key Laboratory of Clinical Study on Anticancer Molecular Targeted Drugs, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Xiaohong Han
- Clinical Pharmacology Research Center, Peking Union Medical College Hospital, State Key Laboratory of Complex Severe and Rare Diseases, NMPA Key Laboratory for Clinical Research and Evaluation of Drug, Beijing Key Laboratory of Clinical PK & PD Investigation for Innovative Drugs, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
19
Brázdil T, Gallo M, Nenutil R, Kubanda A, Toufar M, Holub P. Automated annotations of epithelial cells and stroma in hematoxylin-eosin-stained whole-slide images using cytokeratin re-staining. J Pathol Clin Res 2021; 8:129-142. [PMID: 34716754] [PMCID: PMC8822376] [DOI: 10.1002/cjp2.249] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 03/05/2021] [Revised: 09/22/2021] [Accepted: 09/29/2021] [Indexed: 11/24/2022]
Abstract
The diagnosis of solid tumors of epithelial origin (carcinomas) represents a major part of the workload in clinical histopathology. Carcinomas consist of malignant epithelial cells arranged in more or less cohesive clusters of variable size and shape, together with stromal cells, extracellular matrix, and blood vessels. Distinguishing stroma from epithelium is a critical component of artificial intelligence (AI) methods developed to detect and analyze carcinomas. In this paper, we propose a novel automated workflow that enables large‐scale guidance of AI methods to identify the epithelial component. The workflow is based on re‐staining existing hematoxylin and eosin (H&E) formalin‐fixed paraffin‐embedded sections by immunohistochemistry for cytokeratins, cytoskeletal components specific to epithelial cells. Compared to existing methods, clinically available H&E sections are reused and no additional material, such as consecutive slides, is needed. We developed a simple and reliable method for automatic alignment to generate masks denoting cytokeratin‐rich regions, using cell nuclei positions that are visible in both the original and the re‐stained slide. The registration method has been compared to state‐of‐the‐art methods for alignment of consecutive slides and shows that, despite being simpler, it provides similar accuracy and is more robust. We also demonstrate how the automatically generated masks can be used to train modern AI image segmentation based on U‐Net, resulting in reliable detection of epithelial regions in previously unseen H&E slides. Through training on real‐world material available in clinical laboratories, this approach therefore has widespread applications toward achieving AI‐assisted tumor assessment directly from scanned H&E sections. In addition, the re‐staining method will facilitate additional automated quantitative studies of tumor cell and stromal cell phenotypes.
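The registration idea above, aligning a slide to its re-stained counterpart using nuclei positions visible in both, reduces at its core to a least-squares rigid transform between matched point sets. A hedged numpy sketch of that core via the Kabsch algorithm (the paper's full method also handles nucleus detection and matching; names here are illustrative):

```python
import numpy as np

def rigid_align(src, dst):
    # Least-squares rotation R and translation t mapping matched nuclei
    # centroids src -> dst (Kabsch algorithm): center both sets, take the
    # SVD of the cross-covariance, and rebuild the rotation.
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    r = (u @ vt).T
    if np.linalg.det(r) < 0:  # guard against reflections
        vt[-1] *= -1
        r = (u @ vt).T
    t = dc - r @ sc
    return r, t
```

With correct correspondences this is exact; robustness to mismatched nuclei is what the paper's method adds on top.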
Affiliation(s)
- Tomáš Brázdil
- Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Matej Gallo
- Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Rudolf Nenutil
- Department of Pathology, Masaryk Memorial Cancer Institute, Brno, Czech Republic
- Andrej Kubanda
- Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Martin Toufar
- Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Petr Holub
- Institute of Computer Science, Masaryk University, Brno, Czech Republic
20
Das A, Narayan Mohanty M, Kumar Mallick P, Tiwari P, Muhammad K, Zhu H. Breast cancer detection using an ensemble deep learning method. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103009] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Indexed: 11/25/2022]
21
Cyll K, Kleppe A, Kalsnes J, Vlatkovic L, Pradhan M, Kildal W, Tobin KAR, Reine TM, Wæhre H, Brennhovd B, Askautrud HA, Skaaheim Haug E, Hveem TS, Danielsen HE. PTEN and DNA Ploidy Status by Machine Learning in Prostate Cancer. Cancers (Basel) 2021; 13:cancers13174291. [PMID: 34503100] [PMCID: PMC8428363] [DOI: 10.3390/cancers13174291] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 08/03/2021] [Revised: 08/23/2021] [Accepted: 08/24/2021] [Indexed: 12/05/2022]
Abstract
Simple Summary: Molecular tissue-based prognostic biomarkers are anticipated to complement the current risk stratification systems in prostate cancer, but their manual assessment is subjective and time-consuming. Objective assessment of such biomarkers by machine learning-based methods could advance their adoption in a clinical workflow. PTEN and DNA ploidy status are well-studied biomarkers that can provide clinically relevant information in prostate cancer at low cost. Using a cohort of 253 patients who underwent radical prostatectomy, we developed a novel, fully automated PTEN scoring method for immunohistochemically stained tissue slides, which can assess PTEN status in a reliable and reproducible manner. In an independent validation cohort of 259 patients, automatically assessed PTEN status was significantly associated with time to biochemical recurrence after radical prostatectomy, and the combination of PTEN and DNA ploidy status further improved risk stratification. These results demonstrate the utility of machine learning in biomarker assessment.
Abstract: Machine learning (ML) is expected to improve biomarker assessment. Using convolutional neural networks, we developed a fully automated method for assessing PTEN protein status in immunohistochemically stained slides using a radical prostatectomy (RP) cohort (n = 253). It was validated according to a predefined protocol in an independent RP cohort (n = 259), alone and by measuring its prognostic value in combination with DNA ploidy status determined by ML-based image cytometry. In the primary analysis, automatically assessed dichotomized PTEN status was associated with time to biochemical recurrence (TTBCR) (hazard ratio (HR) = 3.32, 95% CI 2.05 to 5.38).
Patients with both non-diploid tumors and PTEN-low had an HR of 4.63 (95% CI 2.50 to 8.57), while patients with one of these characteristics had an HR of 1.94 (95% CI 1.15 to 3.30), compared to patients with diploid tumors and PTEN-high, in univariable analysis of TTBCR in the validation cohort. Automatic PTEN scoring was strongly predictive of the PTEN status assessed by human experts (area under the curve 0.987 (95% CI 0.968 to 0.994)). This suggests that PTEN status can be accurately assessed using ML, and that the combined marker of automatically assessed PTEN and DNA ploidy status may provide an objective supplement to the existing risk stratification factors in prostate cancer.
Affiliation(s)
- Karolina Cyll: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Andreas Kleppe: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway; Department of Informatics, University of Oslo, NO-0316 Oslo, Norway
- Joakim Kalsnes: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Ljiljana Vlatkovic: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Manohar Pradhan: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Wanja Kildal: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Kari Anne R. Tobin: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Trine M. Reine: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Håkon Wæhre: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Bjørn Brennhovd: Department of Urology, Oslo University Hospital, NO-0424 Oslo, Norway
- Hanne A. Askautrud: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Erik Skaaheim Haug: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway; Department of Urology, Vestfold Hospital Trust, NO-3103 Tønsberg, Norway
- Tarjei S. Hveem: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway
- Håvard E. Danielsen: Institute for Cancer Genetics and Informatics, Oslo University Hospital, NO-0424 Oslo, Norway; Department of Informatics, University of Oslo, NO-0316 Oslo, Norway; Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford OX3 9DU, UK
- Correspondence: ; Tel.: +47-22-78-23-20
22
Wharton KA, Wood D, Manesse M, Maclean KH, Leiss F, Zuraw A. Tissue Multiplex Analyte Detection in Anatomic Pathology - Pathways to Clinical Implementation. Front Mol Biosci 2021; 8:672531. [PMID: 34386519 PMCID: PMC8353449 DOI: 10.3389/fmolb.2021.672531] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 07/14/2021] [Indexed: 12/12/2022] Open
Abstract
Background: Multiplex tissue analysis has revolutionized our understanding of the tumor microenvironment (TME), with implications for biomarker development and diagnostic testing. Multiplex labeling is used in specific clinical situations, but barriers remain to expanded use in anatomic pathology practice.
Methods: We review immunohistochemistry (IHC) and related assays used to localize molecules in tissues, with reference to United States regulatory and practice landscapes. We review multiplex methods and strategies used in clinical diagnosis and in research, particularly in immuno-oncology. Within the framework of assay design and testing phases, we examine the suitability of multiplex immunofluorescence (mIF) for clinical diagnostic workflows, considering its advantages and challenges to implementation.
Results: Multiplex labeling is poised to radically transform pathologic diagnosis because it can answer questions about tissue-level biology and single-cell phenotypes that cannot be addressed with traditional IHC biomarker panels. Widespread implementation will require improved detection chemistry, illustrated by InSituPlex technology (Ultivue, Inc., Cambridge, MA), which allows coregistration of hematoxylin and eosin (H&E) and mIF images; greater standardization and interoperability of workflow and data pipelines to facilitate consistent interpretation by pathologists; and integration of multichannel images into digital pathology whole slide imaging (WSI) systems, including interpretation aided by artificial intelligence (AI). Adoption will also be facilitated by evidence that justifies incorporation into clinical practice, an ability to navigate regulatory pathways, and adequate health care budgets and reimbursement. We expand the brightfield WSI system "pixel pathway" concept to multiplex workflows, suggesting that adoption might be accelerated by data standardization centered on cell phenotypes defined by coexpression of multiple molecules.
Conclusion: Multiplex labeling has the potential to complement next-generation sequencing in cancer diagnosis by allowing pathologists to visualize and understand every cell in a tissue biopsy slide. Until mIF reagents, digital pathology systems including fluorescence scanners, and data pipelines are standardized, we propose that diagnostic labs will play a crucial role in driving adoption of multiplex tissue diagnostics by using retrospective data from tissue collections as a foundation for laboratory-developed test (LDT) implementation and use in prospective trials as companion diagnostics (CDx).
23
Valkonen M, Högnäs G, Bova GS, Ruusuvuori P. Generalized Fixation Invariant Nuclei Detection Through Domain Adaptation Based Deep Learning. IEEE J Biomed Health Inform 2021; 25:1747-1757. [PMID: 33211668 DOI: 10.1109/jbhi.2020.3039414] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Nucleus detection is a fundamental task in histological image analysis and an important tool for many follow-up analyses. Sample preparation and scanning of histological slides are known to introduce a great amount of variability into histological images, which poses challenges for automated nucleus detection. Here, we studied the effect of histopathological sample fixation on the accuracy of a deep learning-based nucleus detection model trained with hematoxylin and eosin stained images. We experimented with training data that includes three methods of fixation: PAXgene, formalin, and frozen, and studied the detection accuracy of various convolutional neural networks. Our results indicate that the variability introduced during sample preparation affects the generalization of a model and should be considered when building accurate and robust nucleus detection algorithms. Our dataset includes over 67,000 annotated nucleus locations from 16 patients and three different sample fixation types. The dataset provides an excellent basis for building an accurate and robust nucleus detection model, and combined with unsupervised domain adaptation, the workflow allows generalization to images from unseen domains, including different tissues and images from different labs.
24
van der Laak J, Litjens G, Ciompi F. Deep learning in histopathology: the path to the clinic. Nat Med 2021; 27:775-784. [PMID: 33990804 DOI: 10.1038/s41591-021-01343-4] [Citation(s) in RCA: 295] [Impact Index Per Article: 98.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 03/31/2021] [Indexed: 02/08/2023]
Abstract
Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.
Affiliation(s)
- Jeroen van der Laak: Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Geert Litjens: Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Francesco Ciompi: Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
25
Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:24365-24398. [PMID: 33841033 PMCID: PMC8023554 DOI: 10.1007/s11042-021-10707-4] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 11/28/2020] [Accepted: 02/10/2021] [Indexed: 05/05/2023]
Abstract
Medical imaging plays a significant role in clinical applications such as early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. A grasp of the principles and implementations of artificial neural networks and deep learning is essential for understanding medical image analysis in computer vision. The Deep Learning Approach (DLA) in medical image analysis is a fast-growing research field. DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA, which delivers promising medical imaging applications. Most DLA implementations concentrate on X-ray images, computerized tomography, mammography images, and digital histopathology images. It provides a systematic review of articles on the classification, detection, and segmentation of medical images based on DLA. This review guides researchers toward appropriate changes in medical image analysis based on DLA.
Affiliation(s)
- Muralikrishna Puttagunta: Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
- S. Ravi: Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
26
Stenman S, Bychkov D, Kucukel H, Linder N, Haglund C, Arola J, Lundin J. Antibody Supervised Training of a Deep Learning Based Algorithm for Leukocyte Segmentation in Papillary Thyroid Carcinoma. IEEE J Biomed Health Inform 2021; 25:422-428. [PMID: 32750899 DOI: 10.1109/jbhi.2020.2994970] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The quantity of leukocytes in papillary thyroid carcinoma (PTC) potentially has prognostic and treatment-predictive value. Here, we propose a novel method for training a convolutional neural network (CNN) algorithm to segment leukocytes in PTCs. Tissue samples from two retrospective PTC cohorts were obtained, and representative tissue slides from twelve patients were stained with hematoxylin and eosin (HE) and digitized. The HE slides were then destained, restained immunohistochemically (IHC) with antibodies against the pan-leukocyte antigen CD45, and scanned again. The two stain pairs of all representative tissue slides were registered, and image tiles of regions of interest were exported. The image tiles were processed, and the 3,3'-diaminobenzidine (DAB) stained areas representing CD45 expression were turned into binary masks. These binary masks were applied as annotations on the HE image tiles and used to train a CNN algorithm. Ten whole slide images (WSIs) were used for training with five-fold cross-validation, and the remaining two slides were used as an independent test set for the trained model. For visual evaluation, the algorithm was run on all twelve WSIs, and in total 238,144 tiles sized 500 × 500 pixels were analyzed. The trained CNN algorithm had an intersection over union of 0.82 for detection of leukocytes in the HE image tiles when comparing the prediction masks to the ground-truth CD45 masks. We conclude that generating antibody-supervised annotations with this destain-restain, IHC-guided approach resulted in highly accurate segmentation of leukocytes in HE tissue images.
27