1
Asaf MZ, Rasul H, Akram MU, Hina T, Rashid T, Shaukat A. A Modified Deep Semantic Segmentation Model for Analysis of Whole Slide Skin Images. Sci Rep 2024; 14:23489. PMID: 39379448; PMCID: PMC11461484; DOI: 10.1038/s41598-024-71080-4.
Abstract
Automated segmentation of biomedical images is recognized as an important step in computer-aided diagnosis systems for detecting abnormalities. Despite its importance, segmentation remains an open challenge due to variations in color, texture, shape, and boundaries. Semantic segmentation often requires deeper neural networks to achieve higher accuracy, making segmentation models more complex and slower. Because large numbers of biomedical images must be processed, more efficient and cheaper image processing techniques for accurate segmentation are needed. In this article, we present a modified deep semantic segmentation model that combines an EfficientNet-B3 backbone with UNet for reliable segmentation. We trained our model on a non-melanoma skin cancer histopathology segmentation dataset to segment images into 12 classes. Our method outperforms the existing literature, increasing average class accuracy from 79% to 83% and overall accuracy from 85% to 94%.
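The two headline numbers in this abstract measure different things: overall accuracy weighs every pixel equally, while average class accuracy weighs every class equally, so rare tissue classes count as much as abundant ones. A minimal sketch of the distinction (function names and the toy confusion matrix are ours, not the authors'):

```python
def overall_accuracy(cm):
    """Fraction of all pixels classified correctly (diagonal / total)."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def average_class_accuracy(cm):
    """Mean of per-class recalls; each class counts equally, so rare
    classes are not drowned out by abundant ones."""
    recalls = []
    for i, row in enumerate(cm):
        support = sum(row)
        if support:
            recalls.append(row[i] / support)
    return sum(recalls) / len(recalls)

# Toy 3-class confusion matrix (rows = true class, cols = predicted).
# One abundant class dominates the overall score but not the class average.
cm = [[900, 50, 50],   # recall 0.9
      [10, 80, 10],    # recall 0.8
      [20, 20, 60]]    # recall 0.6
print(overall_accuracy(cm))        # 1040/1200, about 0.867
print(average_class_accuracy(cm))  # (0.9 + 0.8 + 0.6) / 3, about 0.767
```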
Affiliation(s)
- Muhammad Zeeshan Asaf, Hamid Rasul, Muhammad Usman Akram, Tazeen Hina, Tayyab Rashid, Arslan Shaukat: Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, 44000, Pakistan
2
Pan N, Mi X, Li H, Ge X, Sui X, Jiang Y. WSSS-CRAM: precise segmentation of histopathological images via class region activation mapping. Front Microbiol 2024; 15:1483052. PMID: 39421560; PMCID: PMC11484024; DOI: 10.3389/fmicb.2024.1483052.
Abstract
Introduction: Fast, accurate, automatic analysis of histopathological images using digital image processing and deep learning is a necessary task. Conventional histopathological image analysis algorithms require manually designed features, while deep learning methods achieve fast prediction and accurate analysis but depend on large amounts of labeled data. Methods: In this work, we introduce WSSS-CRAM, a weakly supervised semantic segmentation method that obtains detailed pixel-level labels from image-level annotated data. Specifically, we use a discriminative activation strategy to generate category-specific image activation maps from class labels. The category-specific activation maps are then post-processed using conditional random fields to obtain reliable regions that are used directly as ground-truth labels for the segmentation branch. Critically, pseudo-label acquisition and segmentation-model training are integrated into a single end-to-end model for joint training. Results: Through quantitative evaluation and visualization, we demonstrate that the framework can predict pixel-level labels from image-level labels, and that it performs well on test images without image-level annotations. Discussion: In future work, we plan to extend the algorithm to different pathological datasets and tissue types to validate its generalization capability.
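The category-specific activation maps described here are, in essence, weighted sums of the final convolutional feature maps using one class's classifier weights. A simplified sketch of that step, omitting the CRF post-processing (function name and toy values are ours, not from the paper):

```python
def class_activation_map(feature_maps, class_weights):
    """CAM: weighted sum of the final conv feature maps, using the
    classifier weights of one class, rescaled to [0, 1]."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[sum(wk * fmap[i][j] for wk, fmap in zip(class_weights, feature_maps))
            for j in range(w)] for i in range(h)]
    lo = min(min(r) for r in cam)
    hi = max(max(r) for r in cam)
    return [[(v - lo) / (hi - lo + 1e-8) for v in row] for row in cam]

# Two 2x2 feature maps; the class weights favour the first map.
fmaps = [[[1.0, 0.0], [0.0, 0.0]],
         [[0.0, 0.0], [0.0, 1.0]]]
cam = class_activation_map(fmaps, class_weights=[0.9, 0.1])
# Crude pseudo-label: keep only high-activation pixels (the paper refines
# this with a conditional random field instead of a fixed threshold).
pseudo = [[1 if v > 0.5 else 0 for v in row] for row in cam]
print(pseudo)  # [[1, 0], [0, 0]]
```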
3
Chechekhina E, Voloshin N, Kulebyakin K, Tyurin-Kuzmin P. Code-Free Machine Learning Solutions for Microscopy Image Processing: Deep Learning. Tissue Eng Part A 2024; 30:627-639. PMID: 38556835; DOI: 10.1089/ten.tea.2024.0014.
Abstract
In recent years, the processing of microscopy images has expanded significantly thanks to the advent of machine learning techniques, which offer diverse applications for image processing. Currently, numerous methods are used for processing microscopy images in biology, ranging from conventional machine learning algorithms to sophisticated deep learning neural networks with millions of parameters. However, a comprehensive grasp of these methods usually requires proficiency in programming and advanced mathematics. In our review, we explore widely used deep learning approaches tailored to the processing of microscopy images, with an emphasis on algorithms that have gained popularity in biology and have been adapted for users lacking programming expertise. In essence, our target audience comprises biologists interested in exploring the potential of deep learning algorithms without programming skills. Throughout the review, we elucidate each algorithm's fundamental concepts and capabilities without delving into mathematical and programming complexities. Crucially, all the highlighted algorithms are accessible on open platforms without requiring code, and we provide detailed descriptions and links within our review. Addressing each specific problem demands an individualized approach, so our focus is not on comparing algorithms but on delineating the problems they are adept at solving. In practice, researchers typically select several algorithms suited to their task and experimentally determine the most effective one. Microscopy also extends beyond biology; its applications span diverse fields such as geology and materials science. Although our review predominantly centers on biomedical applications, the algorithms and principles outlined here are equally applicable to other scientific domains, and a number of the proposed solutions can be adapted to entirely distinct computer vision cases.
Affiliation(s)
- Elizaveta Chechekhina, Nikita Voloshin, Konstantin Kulebyakin, Pyotr Tyurin-Kuzmin: Department of Biochemistry and Regenerative Biomedicine, Faculty of Medicine, Lomonosov Moscow State University, Moscow, Russia
4
Saqi A, Liu Y, Politis MG, Salvatore M, Jambawalikar S. Combined expert-in-the-loop-random forest multiclass segmentation U-net based artificial intelligence model: evaluation of non-small cell lung cancer in fibrotic and non-fibrotic microenvironments. J Transl Med 2024; 22:640. PMID: 38978066; PMCID: PMC11232199; DOI: 10.1186/s12967-024-05394-2.
Abstract
BACKGROUND: The tumor microenvironment (TME) plays a key role in lung cancer initiation, proliferation, invasion, and metastasis. Artificial intelligence (AI) methods could potentially accelerate TME analysis. The aims of this study were (1) to assess the feasibility of using hematoxylin and eosin (H&E)-stained whole slide images (WSI) to develop an AI model for evaluating the TME and (2) to characterize the TME of adenocarcinoma (ADCA) and squamous cell carcinoma (SCCA) in fibrotic and non-fibrotic lung. METHODS: The cohort was derived from chest CT scans of patients presenting with lung neoplasms, with and without background fibrosis. WSI were generated from slides of all 76 available pathology cases with ADCA (n = 53) or SCCA (n = 23) in fibrotic (n = 47) or non-fibrotic (n = 29) lung. Detailed ground-truth annotations, including stroma (i.e., fibrosis, vessels, inflammation), necrosis, and background, were performed on WSI and optimized via an expert-in-the-loop (EITL) iterative procedure using a lightweight random forest (RF) classifier. A convolutional neural network (CNN)-based model was used to achieve tissue-level multiclass segmentation. The model was trained on 25 annotated WSI from 13 cases of ADCA and SCCA, with and without fibrosis, and then applied to the 76-case cohort. The TME analysis included the tumor stroma ratio (TSR), tumor fibrosis ratio (TFR), tumor inflammation ratio (TIR), tumor vessel ratio (TVR), tumor necrosis ratio (TNR), and tumor background ratio (TBR). RESULTS: The model's overall classification precision, sensitivity, and F1-score were 94%, 90%, and 91%, respectively. Statistically significant differences were noted in TSR (p = 0.041) and TFR (p = 0.001) between fibrotic and non-fibrotic ADCA. Within fibrotic lung, statistically significant differences were present in TFR (p = 0.039), TIR (p = 0.003), TVR (p = 0.041), TNR (p = 0.0003), and TBR (p = 0.020) between ADCA and SCCA. CONCLUSION: The combined EITL-RF CNN model using only H&E WSI can facilitate multiclass evaluation and quantification of the TME. There are significant differences in the TME of ADCA and SCCA with or without background fibrosis. Future studies are needed to determine the significance of the TME for prognosis and treatment.
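The six TME ratios are not defined in detail in the abstract. Assuming each relates a compartment's segmented area to the tumor area, they can be sketched from a per-pixel class mask as follows (the label names and the denominator are our assumption, not the paper's definition):

```python
from collections import Counter

def tme_ratios(label_mask):
    """Tumor-microenvironment ratios from a per-pixel class mask.
    Each ratio divides one compartment's pixel count by the tumor's
    (labels and the exact denominator are illustrative assumptions)."""
    counts = Counter(px for row in label_mask for px in row)
    tumor = counts["tumor"] or 1  # avoid division by zero on empty masks
    return {f"T{name[0].upper()}R": counts[name] / tumor
            for name in ("stroma", "fibrosis", "inflammation",
                         "vessel", "necrosis", "background")}

# Toy 3x3 mask: 4 tumor, 2 stroma, 1 fibrosis, 1 necrosis, 1 background.
mask = [["tumor", "tumor", "stroma"],
        ["tumor", "fibrosis", "stroma"],
        ["tumor", "necrosis", "background"]]
print(tme_ratios(mask))  # e.g. TSR = 2/4 = 0.5, TFR = 1/4 = 0.25
```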
Affiliation(s)
- Anjali Saqi, Michelle Garlin Politis: Department of Pathology and Cell Biology, Columbia University Irving Medical Center, 630 West 168th Street, New York, NY, VC14-215, 10032, USA
- Yucheng Liu: Department of Radiation Physics, Atlantic Health System, New Jersey, NJ, USA
- Mary Salvatore, Sachin Jambawalikar: Department of Radiology, Columbia University Irving Medical Center, New York, NY, USA
5
Chelebian E, Avenel C, Ciompi F, Wählby C. DEPICTER: Deep representation clustering for histology annotation. Comput Biol Med 2024; 170:108026. PMID: 38308865; DOI: 10.1016/j.compbiomed.2024.108026.
Abstract
Automatic segmentation of histopathology whole-slide images (WSI) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which is expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. However, these methods have mainly focused on technical advancements in algorithmic performance rather than on practical tools that pathologists or researchers could use in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at the WSI level. DEPICTER leverages self- and semi-supervised learning to let the user participate in the segmentation, producing reliable results while reducing the workload. It consists of three steps: first, a pretrained model computes embeddings from image patches; next, the user selects a number of benign and cancerous patches from the multi-resolution image; finally, guided by the deep representations, label propagation is achieved using our novel seeded iterative clustering method or by interacting directly with the embedding space via feature-space gating. We report real-time interaction results with three pathologists and evaluate performance on three public cancer classification benchmark datasets through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
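The seeded label-propagation step can be illustrated with a much-simplified nearest-centroid version: user-selected patches act as fixed seeds, and labels spread through the embedding space by alternating assignment and centroid updates. This is our sketch of the idea, not DEPICTER's actual seeded iterative clustering algorithm:

```python
def seeded_iterative_clustering(embeddings, seeds, n_iter=10):
    """Propagate user-provided seed labels to all patch embeddings by
    repeatedly assigning each patch to the nearest class centroid and
    recomputing centroids. Seeds keep their labels throughout."""
    labels = dict(seeds)
    classes = sorted(set(seeds.values()))
    dim = len(embeddings[0])
    for _ in range(n_iter):
        # Centroid of each class over currently labeled patches.
        centroids = {}
        for c in classes:
            members = [embeddings[i] for i, l in labels.items() if l == c]
            centroids[c] = [sum(e[d] for e in members) / len(members)
                            for d in range(dim)]
        # Reassign every patch to its nearest centroid (squared distance).
        labels = {i: min(classes, key=lambda c: sum(
                      (e[d] - centroids[c][d]) ** 2 for d in range(dim)))
                  for i, e in enumerate(embeddings)}
        labels.update(seeds)  # user seeds stay fixed
    return [labels[i] for i in range(len(embeddings))]

# Four toy patch embeddings; the user labeled patch 0 and patch 2.
emb = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]
print(seeded_iterative_clustering(emb, seeds={0: "benign", 2: "cancer"}))
# ['benign', 'benign', 'cancer', 'cancer']
```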
Affiliation(s)
- Eduard Chelebian, Christophe Avenel, Carolina Wählby: Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
- Francesco Ciompi: Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
6
Hermosilla P, Soto R, Vega E, Suazo C, Ponce J. Skin Cancer Detection and Classification Using Neural Network Algorithms: A Systematic Review. Diagnostics (Basel) 2024; 14:454. PMID: 38396492; PMCID: PMC10888121; DOI: 10.3390/diagnostics14040454.
Abstract
In recent years, there has been growing interest in using computer-assisted technology for early detection of skin cancer through the analysis of dermatoscopic images. However, the accuracy of state-of-the-art approaches depends on several factors, such as image quality and the interpretation of results by medical experts. This systematic review critically assesses the efficacy and challenges of this research field in order to explain its usability and limitations and to highlight potential future lines of work for the scientific and clinical community. The analysis covered 45 contemporary studies extracted from databases such as Web of Science and Scopus. Several computer vision techniques related to image and video processing for early skin cancer diagnosis were identified, with a focus on the algorithms employed, result accuracy, and validation metrics. The results show significant advances in cancer detection using deep learning and machine learning algorithms. Lastly, this review establishes a foundation for future research, highlighting potential contributions and opportunities to improve the effectiveness of skin cancer detection through machine learning.
Affiliation(s)
- Pamela Hermosilla: Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile (affiliation shared by co-authors E.V., C.S., and J.P.)
7
Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Deep learning in computational dermatopathology of melanoma: A technical systematic literature review. Comput Biol Med 2023; 163:107083. PMID: 37315382; DOI: 10.1016/j.compbiomed.2023.107083.
Abstract
Deep learning (DL) has become one of the major approaches in computational dermatopathology, evidenced by a marked increase in publications on the topic. We aim to provide a structured and comprehensive overview of peer-reviewed publications on DL applied to dermatopathology, focused on melanoma. Compared with well-published DL methods on non-medical images (e.g., classification on ImageNet), this field of application poses a specific set of challenges, such as staining artifacts, large gigapixel images, and various magnification levels. We are therefore particularly interested in the pathology-specific technical state of the art. We also aim to summarize the best performances achieved so far with respect to accuracy, along with an overview of self-reported limitations. Accordingly, we conducted a systematic literature review of peer-reviewed journal and conference articles published between 2012 and 2022 in the databases ACM Digital Library, Embase, IEEE Xplore, PubMed, and Scopus, expanded by forward and backward searches, to identify 495 potentially eligible studies. After screening for relevance and quality, a total of 54 studies were included. We qualitatively summarized and analyzed these studies from technical, problem-oriented, and task-oriented perspectives. Our findings suggest that the technical aspects of DL for histopathology in melanoma can be further improved: DL was adopted relatively late in this field, which still lacks the wider uptake of DL methods already shown to be effective in other applications. We also discuss upcoming trends toward ImageNet-based feature extraction and larger models. While DL has achieved human-competitive accuracy in routine pathological tasks, its performance on advanced tasks still falls short of, for example, wet-lab testing. Finally, we discuss the challenges impeding the translation of DL methods to clinical practice and provide insight into future research directions.
Affiliation(s)
- Daniel Sauter, Markus Kukuk: Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
- Georg Lodde, Dirk Schadendorf: Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Felix Nensa: Institute for AI in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, 45147 Essen, Germany
8
Jumutc V, Bliznuks D, Lihachev A. Multi-Loss U-Net Reformulation as an Efficient Solution to the Colony-Forming Unit Counting Problem. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. PMID: 38083582; DOI: 10.1109/embc40787.2023.10340810.
Abstract
U-Net is undoubtedly the most cited and popularized deep learning architecture in the biomedical domain. Beyond image, volume, and video segmentation in numerous practical applications such as digital pathology, new emerging areas, including Colony-Forming Unit (CFU) segmentation, require a U-Net reformulation to resolve inherent inefficiencies of a simple segmentation-tailored training loss such as the Dice Similarity Coefficient. One such area is segmentation-driven CFU counting, where, given a segmentation output map, one must count all distinct segmented regions belonging to different detected microbial colonies. This can be challenging, as a pure segmentation objective tends to produce many irrelevant artifacts or flipped pixels. Our novel multi-loss U-Net reformulation offers an efficient solution: it introduces an additional loss term at the bottom-most U-Net level that provides an auxiliary signal of where to look for distinct CFUs. Our experiments show that all probed multi-loss U-Net architectures consistently outperform their single-loss counterparts, which rely on the Dice Similarity Coefficient and cross-entropy training losses alone.
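The multi-loss idea, a segmentation loss plus an auxiliary signal about distinct CFUs, can be sketched as a weighted sum of a Dice term and a count term. The count head and the weighting below are our simplification; the paper attaches its auxiliary loss at the bottom-most U-Net level rather than to a scalar count:

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened probability and binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def multi_loss(pred_mask, true_mask, pred_count, true_count, aux_weight=0.1):
    """Segmentation loss plus an auxiliary squared-error term on the
    predicted colony count (an illustrative stand-in for the paper's
    bottleneck-level auxiliary loss)."""
    aux = (pred_count - true_count) ** 2
    return dice_loss(pred_mask, true_mask) + aux_weight * aux

loss = multi_loss(pred_mask=[0.9, 0.8, 0.1, 0.0],
                  true_mask=[1, 1, 0, 0],
                  pred_count=2.5, true_count=2)
print(loss)  # Dice term about 0.1053 plus 0.1 * 0.25 = 0.025
```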
9
Zhu C, Hu P, Wang X, Zeng X, Shi L. A real-time computer-aided diagnosis method for hydatidiform mole recognition using deep neural network. Comput Methods Programs Biomed 2023; 234:107510. PMID: 37003042; DOI: 10.1016/j.cmpb.2023.107510.
Abstract
BACKGROUND AND OBJECTIVE: Hydatidiform mole (HM) is one of the most common gestational trophoblastic diseases and has malignant potential. Histopathological examination is the primary method for diagnosing HM. However, due to the obscure and confusing pathological features of HM, significant observer variability exists among pathologists, leading to over- and misdiagnosis in clinical practice. Efficient feature extraction can significantly improve the accuracy and speed of the diagnostic process. Deep neural networks (DNN) have proven feature extraction and segmentation capabilities and are widely used in clinical practice for many other diseases. We constructed a deep learning-based CAD method to recognize HM hydrops lesions under the microscope in real time. METHODS: To address the difficulty of extracting effective features from HM slide images for lesion segmentation, we propose a hydrops lesion recognition module that employs DeepLabv3+ with a novel compound loss function and a stepwise training strategy, achieving strong performance in recognizing hydrops lesions at both the pixel and lesion levels. Meanwhile, a Fourier transform-based image mosaic module and an edge-extension module for image sequences were developed to make the recognition model applicable to moving slides in clinical practice; this also addresses poor model performance in image edge recognition. RESULTS: We evaluated widely adopted DNNs on an HM dataset and chose DeepLabv3+ with our compound loss function as the segmentation model. Comparison experiments show that the edge-extension module improves model performance by up to 3.4% in pixel-level IoU and 9.0% in lesion-level IoU. Our final method achieves a pixel-level IoU of 77.0%, a precision of 86.0%, and a lesion-level recall of 86.2%, with a response time of 82 ms per frame. Experiments show that our method can display the full microscopic view with accurately labeled HM hydrops lesions while slides move in real time. CONCLUSIONS: To the best of our knowledge, this is the first method to utilize deep neural networks for HM lesion recognition. It provides a robust and accurate solution, with powerful feature extraction and segmentation capabilities, for the auxiliary diagnosis of HM.
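The pixel-level IoU figures reported above follow the standard intersection-over-union definition, sketched here for flattened binary masks:

```python
def pixel_iou(pred, target):
    """Pixel-level intersection-over-union for binary masks:
    |pred AND target| / |pred OR target|."""
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return inter / union if union else 1.0

# Toy 6-pixel masks: intersection has 2 pixels, union has 4.
pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(pixel_iou(pred, target))  # 2 / 4 = 0.5
```

Lesion-level IoU, by contrast, would be computed per connected region rather than per pixel.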
Affiliation(s)
- Chengze Zhu, Pingge Hu, Xingtong Wang, Li Shi: Department of Automation, Tsinghua University, Beijing, 100084, China
- Xianxu Zeng: Department of Pathology, the Third Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
10
Wagner P, Springenberg M, Kröger M, Moritz RKC, Schleusener J, Meinke MC, Ma J. Semantic modeling of cell damage prediction: a machine learning approach at human-level performance in dermatology. Sci Rep 2023; 13:8336. PMID: 37221254; DOI: 10.1038/s41598-023-35370-7.
Abstract
Machine learning is transforming the field of histopathology. In classification-related tasks especially, there have already been many successful applications of deep learning. Yet in tasks that rely on regression, and in many niche applications, the domain lacks cohesive procedures adapted to the learning processes of neural networks. In this work, we investigate cell damage in whole slide images of the epidermis. A common way for pathologists to annotate a score characterizing the degree of damage in these samples is the ratio between healthy and unhealthy nuclei. This annotation procedure, however, is expensive and noisy across pathologists. We propose a new measure of damage: the total area of damage relative to the total area of the epidermis. We present results of regression and segmentation models predicting both scores on a curated, public dataset that we acquired in collaboration with medical professionals. Our study provides a comprehensive evaluation of the proposed damage metrics in the epidermis, with recommendations emphasizing practical relevance for real-world applications.
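The two damage scores compared in this paper, the conventional nucleus ratio and the proposed area ratio, can be sketched as follows (the integer mask encoding is our assumption, not the authors' data format):

```python
def area_damage_score(mask):
    """Proposed score: damaged epidermis area / total epidermis area.
    Assumed mask values: 0 = background, 1 = healthy epidermis, 2 = damaged."""
    epidermis = sum(px in (1, 2) for row in mask for px in row)
    damaged = sum(px == 2 for row in mask for px in row)
    return damaged / epidermis if epidermis else 0.0

def nucleus_ratio_score(healthy_nuclei, unhealthy_nuclei):
    """Conventional annotation: fraction of unhealthy nuclei."""
    total = healthy_nuclei + unhealthy_nuclei
    return unhealthy_nuclei / total if total else 0.0

# Toy mask: 6 epidermis pixels, 3 of them damaged.
mask = [[0, 1, 1],
        [1, 2, 2],
        [0, 0, 2]]
print(area_damage_score(mask))      # 3 / 6 = 0.5
print(nucleus_ratio_score(70, 30))  # 30 / 100 = 0.3
```

The area-based score needs only a segmentation mask, whereas the nucleus ratio requires per-nucleus counting, which is what makes the latter expensive to annotate.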
Affiliation(s)
- Patrick Wagner, Maximilian Springenberg, Jackie Ma: Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Einsteinufer 37, 10587, Berlin, Germany
- Marius Kröger, Rose K C Moritz, Johannes Schleusener, Martina C Meinke: Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, 10117, Berlin, Germany
11
Diagnostic and Prognostic Deep Learning Applications for Histological Assessment of Cutaneous Melanoma. Cancers (Basel) 2022; 14:6231. PMID: 36551716; PMCID: PMC9776963; DOI: 10.3390/cancers14246231.
Abstract
Melanoma is among the most devastating human malignancies. Accurate diagnosis and prognosis are essential to offer optimal treatment. Histopathology is the gold standard for establishing melanoma diagnosis and prognostic features. However, discrepancies often exist between pathologists, and analysis is costly and time-consuming. Deep-learning algorithms are being deployed to improve melanoma diagnosis and prognostication from histological images. In recent years, the development of these machine-learning tools has accelerated, and machine learning is poised to become a clinical aid for melanoma histology. Nevertheless, a review of these advances was lacking. We performed a comprehensive literature search to provide a complete overview of the recent advances in machine learning for the assessment of melanoma based on hematoxylin and eosin (H&E)-stained digital pathology images. We review 37 recent publications, compare the methods and performance of the reviewed studies, and highlight the variety of promising machine-learning applications in melanoma histology.
12
Ramamurthy K, Varikuti AR, Gupta B, Aswani N. A deep learning network for Gleason grading of prostate biopsies using EfficientNet. Biomed Eng/Biomed Tech 2022; 68:187-198. PMID: 36332194; DOI: 10.1515/bmt-2022-0201.
Abstract
Objectives
The most crucial part of cancer diagnosis is severity grading. The Gleason score is a widely used grading system for prostate cancer. Manual examination and grading of microscopic images is tedious and time-consuming. Hence, to automate the Gleason grading process, a novel deep learning network is proposed in this work.
Methods
In this work, a deep learning network for Gleason grading of prostate cancer is proposed based on EfficientNet architecture. It applies a compound scaling method to balance the dimensions of the underlying network. Also, an additional attention branch is added to EfficientNet-B7 for precise feature weighting.
Results
To the best of our knowledge, this is the first work that integrates an additional attention branch with EfficientNet architecture for Gleason grading. The proposed models were trained using H&E-stained samples from prostate cancer Tissue Microarrays (TMAs) in the Harvard Dataverse dataset.
Conclusions
The proposed network outperformed existing methods, achieving a Kappa score of 0.5775.
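The reported Kappa score corrects raw agreement for agreement expected by chance. A sketch of plain Cohen's kappa follows; the abstract does not state whether a weighted variant was used, and for ordinal Gleason grades a quadratically weighted kappa is common (the toy grade sequences are ours):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) /
    (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy Gleason grade-group labels: pathologist vs. model predictions.
truth = [1, 1, 2, 3, 3, 4, 5, 2]
pred  = [1, 1, 2, 3, 4, 4, 5, 3]
print(round(cohens_kappa(truth, pred), 3))  # 0.686
```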
Affiliation(s)
- Karthik Ramamurthy: Centre for Cyber Physical Systems, School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
- Abinash Reddy Varikuti, Bhavya Gupta: School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Nehal Aswani: School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
13
Zováthi BH, Mohácsi R, Szász AM, Cserey G. Breast Tumor Tissue Segmentation with Area-Based Annotation Using Convolutional Neural Network. Diagnostics (Basel) 2022; 12:2161. PMID: 36140562; PMCID: PMC9498155; DOI: 10.3390/diagnostics12092161.
Abstract
In this paper, we propose a novel approach to segment tumor and normal regions in human breast tissues. Cancer is the second most common cause of death in our society; every eighth woman will be diagnosed with breast cancer in her life. Histological diagnosis is key in the process where oncotherapy is administered. Due to the time-consuming analysis and the lack of specialists alike, obtaining a timely diagnosis is often a difficult process in healthcare institutions, so there is an urgent need for improvement in diagnostics. To reduce costs and speed up the process, an automated algorithm could aid routine diagnostics. We propose an area-based annotation approach, generalized by a new rule template, to accurately solve high-resolution biological segmentation tasks in a time-efficient way. These algorithmic and implementation rules offer pathologists an alternative that is as accurate as manual assessment. This research is based on an individual database from Semmelweis University, containing 291 high-resolution, bright-field microscopy breast tumor tissue images. A total of 70% of the 128 × 128-pixel resolution images (206,174 patches) were used to train a convolutional neural network to learn the features of normal and tumor tissue samples. The evaluation of these small regions yields high-resolution histopathological image segmentation; the optimal parameters were calculated on the validation dataset (29 images, 10%), considering both accuracy and runtime. The algorithm was tested on the test dataset (61 images, 20%), reaching a 99.10% F1 score on pixel-level evaluation within 3 min on average. Besides the quantitative analyses, the system’s accuracy was assessed qualitatively by a histopathologist, who confirmed that the algorithm was also accurate in regions not annotated before.
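The patch-based protocol above (128 × 128-pixel tiles, 70/20/10 train/validation/test split) can be sketched in a few lines of NumPy; the random image below is a synthetic stand-in for a tissue region, and the split proportions follow the abstract:

```python
import numpy as np

# Tile an image into non-overlapping 128x128 patches, then split the
# patches 70/20/10 into train/validation/test sets.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)  # stand-in

PATCH = 128
patches = [
    image[y:y + PATCH, x:x + PATCH]
    for y in range(0, image.shape[0] - PATCH + 1, PATCH)
    for x in range(0, image.shape[1] - PATCH + 1, PATCH)
]

idx = rng.permutation(len(patches))      # shuffle before splitting
n_train = int(0.7 * len(patches))
n_val = int(0.2 * len(patches))
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]
```

In practice the split is done per whole image (as in the paper) rather than per patch, so that patches from one slide never leak across sets.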
Collapse
Affiliation(s)
- Bendegúz H. Zováthi
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, 1083 Budapest, Hungary
- Correspondence:
| | - Réka Mohácsi
- Department of Internal Medicine and Oncology, Semmelweis University, 1083 Budapest, Hungary
| | | | - György Cserey
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, 1083 Budapest, Hungary
| |
Collapse
|
14
|
Jansen P, Baguer DO, Duschner N, Le’Clerc Arrastia J, Schmidt M, Wiepjes B, Schadendorf D, Hadaschik E, Maass P, Schaller J, Griewank KG. Evaluation of a Deep Learning Approach to Differentiate Bowen's Disease and Seborrheic Keratosis. Cancers (Basel) 2022; 14:cancers14143518. [PMID: 35884578 PMCID: PMC9320483 DOI: 10.3390/cancers14143518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 06/21/2022] [Accepted: 07/18/2022] [Indexed: 12/10/2022] Open
Abstract
Background: Some of the most common cutaneous neoplasms are Bowen’s disease and seborrheic keratosis, a malignant and a benign proliferation, respectively. These entities represent a significant fraction of a dermatopathologist’s workload, and in some cases, histological differentiation may be challenging. The potential of deep learning networks to distinguish these diseases is assessed. Methods: In total, 1935 whole-slide images from three institutions were scanned on two different slide scanners. A U-Net-based segmentation deep learning algorithm was trained on data from one of the centers to differentiate Bowen’s disease, seborrheic keratosis, and normal tissue, learning from annotations performed by dermatopathologists. Optimal thresholds for the class distinction of diagnoses were extracted and assessed on a test set with data from all three institutions. Results: We aimed to diagnose Bowen’s disease with the highest sensitivity. A good performance was observed across all three centers, underlining the model’s robustness. In one of the centers, the distinction between Bowen’s disease and all other diagnoses was achieved with an AUC of 0.9858 and a sensitivity of 0.9511. Seborrheic keratosis was detected with an AUC of 0.9764 and a sensitivity of 0.9394. Nevertheless, distinguishing irritated seborrheic keratosis from Bowen’s disease remained challenging. Conclusions: Bowen’s disease and seborrheic keratosis could be correctly identified by the evaluated deep learning model on test sets from three different centers, two of which were not involved in training, and AUC scores > 0.97 were obtained. The method proved robust to changes in the staining solution and scanner model. We believe this demonstrates that deep learning algorithms can aid in clinical routine; however, the results should be confirmed by qualified histopathologists.
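Extracting an operating threshold that favors sensitivity, as the abstract describes for Bowen's disease, can be done by taking the highest score cutoff that still keeps a target fraction of true positives. A sketch with synthetic scores and labels (the authors' exact threshold-selection rule is not specified here):

```python
import numpy as np

# Choose the highest decision threshold that still reaches a target
# sensitivity for the positive (malignant) class on validation data.
def threshold_for_sensitivity(scores, labels, target=0.95):
    pos = np.sort(scores[labels == 1])          # positive-class scores
    k = int(np.ceil(target * len(pos)))         # positives to keep above
    return pos[len(pos) - k]                    # k-th largest positive score

scores = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.8, 0.9, 0.95])  # synthetic
labels = np.array([0,   0,   0,   1,   1,   1,   1,   1])
thr = threshold_for_sensitivity(scores, labels, target=0.8)
sensitivity = np.mean(scores[labels == 1] >= thr)
```

Trading specificity away like this is the standard move when missing a malignant diagnosis is costlier than a false alarm.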
Collapse
Affiliation(s)
- Philipp Jansen
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany; (P.J.); (D.S.); (E.H.)
- Department of Dermatology, University Hospital Bonn, 53127 Bonn, Germany
| | - Daniel Otero Baguer
- Center for Industrial Mathematics (ZeTeM), University of Bremen, 28359 Bremen, Germany; (D.O.B.); (J.L.A.); (M.S.); (P.M.)
| | - Nicole Duschner
- Dermatopathologie Duisburg Essen GmbH, 45329 Essen, Germany; (N.D.); (B.W.); (J.S.)
| | - Jean Le’Clerc Arrastia
- Center for Industrial Mathematics (ZeTeM), University of Bremen, 28359 Bremen, Germany; (D.O.B.); (J.L.A.); (M.S.); (P.M.)
| | - Maximilian Schmidt
- Center for Industrial Mathematics (ZeTeM), University of Bremen, 28359 Bremen, Germany; (D.O.B.); (J.L.A.); (M.S.); (P.M.)
| | - Bettina Wiepjes
- Dermatopathologie Duisburg Essen GmbH, 45329 Essen, Germany; (N.D.); (B.W.); (J.S.)
| | - Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany; (P.J.); (D.S.); (E.H.)
| | - Eva Hadaschik
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany; (P.J.); (D.S.); (E.H.)
| | - Peter Maass
- Center for Industrial Mathematics (ZeTeM), University of Bremen, 28359 Bremen, Germany; (D.O.B.); (J.L.A.); (M.S.); (P.M.)
| | - Jörg Schaller
- Dermatopathologie Duisburg Essen GmbH, 45329 Essen, Germany; (N.D.); (B.W.); (J.S.)
| | - Klaus Georg Griewank
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany; (P.J.); (D.S.); (E.H.)
- Dermatopathologie bei Mainz, 55268 Nieder-Olm, Germany
- Correspondence: ; Tel.: +49-201-723-2326
| |
Collapse
|
15
|
Viswanathan VS, Toro P, Corredor G, Mukhopadhyay S, Madabhushi A. The state of the art for artificial intelligence in lung digital pathology. J Pathol 2022; 257:413-429. [PMID: 35579955 PMCID: PMC9254900 DOI: 10.1002/path.5966] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/26/2022] [Accepted: 05/15/2022] [Indexed: 12/03/2022]
Abstract
Lung diseases carry a significant burden of morbidity and mortality worldwide. The advent of digital pathology (DP) and an increase in computational power have led to the development of artificial intelligence (AI)-based tools that can assist pathologists and pulmonologists in improving clinical workflow and patient management. While previous works have explored the advances in computational approaches for breast, prostate, and head and neck cancers, there has been a growing interest in applying these technologies to lung diseases as well. The application of AI tools on radiology images for better characterization of indeterminate lung nodules, fibrotic lung disease, and lung cancer risk stratification has been well documented. In this article, we discuss methodologies used to build AI tools in lung DP, describing the various hand-crafted and deep learning-based unsupervised feature approaches. Next, we review AI tools across a wide spectrum of lung diseases including cancer, tuberculosis, idiopathic pulmonary fibrosis, and COVID-19. We discuss the utility of novel imaging biomarkers for different types of clinical problems including quantification of biomarkers like PD-L1, lung disease diagnosis, risk stratification, and prediction of response to treatments such as immune checkpoint inhibitors. We also look briefly at some emerging applications of AI tools in lung DP such as multimodal data analysis, 3D pathology, and transplant rejection. Lastly, we discuss the future of DP-based AI tools, describing the challenges with regulatory approval, developing reimbursement models, planning clinical deployment, and addressing AI biases. © 2022 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Collapse
Affiliation(s)
| | - Paula Toro
- Department of Pathology, Cleveland Clinic, Cleveland, OH, USA
| | - Germán Corredor
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland VA Medical Center, Cleveland, OH, USA
| | | | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland VA Medical Center, Cleveland, OH, USA
| |
Collapse
|
16
|
Maurya A, Stanley RJ, Lama N, Jagannathan S, Saeed D, Swinfard S, Hagerty JR, Stoecker WV. A deep learning approach to detect blood vessels in basal cell carcinoma. Skin Res Technol 2022; 28:571-576. [PMID: 35611797 PMCID: PMC9907638 DOI: 10.1111/srt.13150] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Accepted: 03/09/2022] [Indexed: 11/29/2022]
Abstract
PURPOSE Blood vessels called telangiectasia are visible in skin lesions with the aid of dermoscopy. Telangiectasia are a pivotal identifying feature of basal cell carcinoma. These vessels appear thready and serpiginous, and may also appear arborizing, that is, wide vessels branching into successively thinner vessels. Due to these intricacies, their detection is not an easy task, whether by manual annotation or by computerized techniques. In this study, we automate the segmentation of telangiectasia in dermoscopic images with a deep learning U-Net approach. METHODS We apply a combination of image processing techniques and a deep learning-based U-Net approach to detect telangiectasia in digital basal cell carcinoma skin cancer images. We compare loss functions and optimize performance by using a combination loss function to manage the class imbalance of skin versus vessel pixels. RESULTS We establish a baseline method for pixel-based telangiectasia detection in skin cancer lesion images. An analysis and comparison of human observer variability in annotation is also presented. CONCLUSION Our approach yields a Jaccard score within the variation of human observers as it addresses a new aspect of the rapidly evolving field of deep learning: automatic identification of cancer-specific structures. Further application of DL techniques to detect dermoscopic structures and handle noisy labels is warranted.
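A combination loss of the kind the abstract mentions typically blends a pixel-wise term with an overlap term so that the rare vessel class is not drowned out by skin pixels. A NumPy sketch of a BCE + Dice combination follows; the 0.5/0.5 weighting is an illustrative choice, not the authors' reported configuration:

```python
import numpy as np

# Combined binary cross-entropy + Dice loss: BCE scores every pixel,
# Dice rewards overlap with the (rare) foreground class, mitigating
# the skin-vs-vessel imbalance.
def bce_dice_loss(pred, target, eps=1e-7, w_bce=0.5, w_dice=0.5):
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    inter = np.sum(pred * target)
    dice = (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return w_bce * bce + w_dice * (1 - dice)

target = np.array([[0, 1], [1, 0]], dtype=float)      # toy vessel mask
good = bce_dice_loss(np.array([[0.05, 0.95], [0.9, 0.1]]), target)
bad = bce_dice_loss(np.array([[0.9, 0.1], [0.2, 0.8]]), target)
```

A prediction close to the mask yields a near-zero loss, while a wrong one is penalized by both terms.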
Collapse
Affiliation(s)
- A Maurya
- Missouri University of Science & Technology, Rolla, Missouri
| | - R J Stanley
- Missouri University of Science & Technology, Rolla, Missouri
| | - N Lama
- Missouri University of Science & Technology, Rolla, Missouri
| | | | - D Saeed
- St. Louis University, St. Louis, Missouri
| | - S Swinfard
- Missouri University of Science & Technology, Rolla, Missouri
| | | | | |
Collapse
|
17
|
Lai Z, Oliveira LC, Guo R, Xu W, Hu Z, Mifflin K, Decarli C, Cheung SC, Chuah CN, Dugger BN. BrainSec: Automated Brain Tissue Segmentation Pipeline for Scalable Neuropathological Analysis. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2022; 10:49064-49079. [PMID: 36157332 PMCID: PMC9503016 DOI: 10.1109/access.2022.3171927] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
As neurodegenerative disease pathological hallmarks have been reported in both grey matter (GM) and white matter (WM) with different density distributions, automating the segmentation of GM/WM would be extremely advantageous for aiding in neuropathologic deep phenotyping. Standard segmentation methods typically involve manual annotation, where a trained researcher traces the delineation of GM/WM in ultra-high-resolution Whole Slide Images (WSIs). This method can be time-consuming and subjective, preventing scalable analysis of pathology images. This paper proposes an automated segmentation pipeline (BrainSec) combining a Convolutional Neural Network (CNN) module for segmenting GM/WM regions and a post-processing module to remove tissue artifacts and residues. The final output generates XML annotations that can be visualized via Aperio ImageScope. First, we investigate two baseline models for medical image segmentation: FCN and U-Net. Then we propose a patch-based approach, BrainSec, to classify GM/WM/background regions. We demonstrate that BrainSec is robust and performs reliably by testing it on over 180 WSIs that incorporate numerous unique cases as well as distinct neuroanatomic brain regions. We also apply gradient-weighted class activation mapping (Grad-CAM) to interpret the segmentation masks and provide relevant explanations and insights. In addition, we have integrated BrainSec with an existing Amyloid-β pathology classification model into a unified framework (without incurring significant computational complexity) to identify pathologies, visualize their distributions, and quantify each type of pathology in the segmented GM/WM regions.
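The patch-based classification described above amounts to labeling each tile and stitching the labels back into a low-resolution mask over the slide. A minimal sketch, where the "classifier" is a hypothetical stand-in that thresholds mean intensity (labels: 0 = background, 1 = grey matter, 2 = white matter):

```python
import numpy as np

def classify_patch(patch):
    # Hypothetical stand-in for a trained CNN patch classifier.
    m = patch.mean()
    return 0 if m < 0.2 else (1 if m < 0.6 else 2)

def stitch_mask(image, patch=4):
    # Classify non-overlapping patches and assemble one label per patch.
    h, w = image.shape
    mask = np.zeros((h // patch, w // patch), dtype=np.int64)
    for i in range(h // patch):
        for j in range(w // patch):
            mask[i, j] = classify_patch(
                image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch])
    return mask

image = np.zeros((8, 8))
image[:, 4:] = 0.9     # right half: bright, "WM-like"
image[:4, :4] = 0.4    # top-left: mid-intensity, "GM-like"
mask = stitch_mask(image, patch=4)
```

A post-processing pass (e.g., removing small connected components) would then clean residual misclassified tiles, as the pipeline above does.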
Collapse
Affiliation(s)
- Zhengfeng Lai
- Department of Electrical and Computer Engineering, University of California Davis, Davis, CA 95616, USA
| | - Luca Cerny Oliveira
- Department of Electrical and Computer Engineering, University of California Davis, Davis, CA 95616, USA
| | - Runlin Guo
- Department of Electrical and Computer Engineering, University of California Davis, Davis, CA 95616, USA
| | - Wenda Xu
- Department of Electrical and Computer Engineering, University of California Davis, Davis, CA 95616, USA
| | - Zin Hu
- Department of Pathology and Laboratory Medicine, University of California Davis, Davis, CA 95817, USA
| | - Kelsey Mifflin
- Department of Pathology and Laboratory Medicine, University of California Davis, Davis, CA 95817, USA
| | - Charles Decarli
- Department of Pathology and Laboratory Medicine, University of California Davis, Davis, CA 95817, USA
| | - Sen-Ching Cheung
- Department of Electrical and Computer Engineering, University of Kentucky, Lexington, KY 40506, USA
| | - Chen-Nee Chuah
- Department of Electrical and Computer Engineering, University of California Davis, Davis, CA 95616, USA
| | - Brittany N Dugger
- Department of Pathology and Laboratory Medicine, University of California Davis, Davis, CA 95817, USA
| |
Collapse
|
18
|
Jumutc V, Bļizņuks D, Lihachev A. Multi-Path U-Net Architecture for Cell and Colony-Forming Unit Image Segmentation. SENSORS (BASEL, SWITZERLAND) 2022; 22:990. [PMID: 35161735 PMCID: PMC8839202 DOI: 10.3390/s22030990] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Revised: 01/17/2022] [Accepted: 01/20/2022] [Indexed: 06/14/2023]
Abstract
U-Net is the most cited and widely used deep learning model for biomedical image segmentation. In this paper, we propose a new enhanced version of the ubiquitous U-Net architecture, which improves upon the original in terms of generalization capability while addressing several inherent shortcomings, such as the constrained resolution and non-resilient receptive fields of the main pathway. Our novel multi-path architecture introduces the notion of an individual receptive-field pathway, which is merged with other pathways at the bottom-most layer by concatenation and subsequent application of Layer Normalization and Spatial Dropout, which can improve generalization performance for small datasets. In general, our experiments show that the proposed multi-path architecture outperforms other state-of-the-art approaches that build on similar ideas of pyramid structures, skip-connections, and encoder-decoder pathways. A significant improvement in the Dice similarity coefficient is attained on our proprietary colony-forming unit dataset, where a score of 0.809 was achieved for the foreground class.
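The bottleneck merge described above — concatenation of the pathways followed by Layer Normalization and Spatial Dropout (which drops whole feature channels rather than individual pixels) — can be sketched in NumPy. Shapes and the dropout rate are illustrative, not the authors' exact configuration:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each sample over its channel and spatial dimensions.
    mu = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def spatial_dropout(x, rate, rng):
    # x: (batch, channels, h, w); zero out entire channels, then rescale.
    keep = rng.random((x.shape[0], x.shape[1], 1, 1)) >= rate
    return x * keep / (1.0 - rate)

rng = np.random.default_rng(0)
path_a = rng.normal(size=(2, 8, 4, 4))   # features from pathway A
path_b = rng.normal(size=(2, 8, 4, 4))   # features from pathway B
merged = np.concatenate([path_a, path_b], axis=1)   # (2, 16, 4, 4)
out = spatial_dropout(layer_norm(merged), rate=0.5, rng=rng)
```

Dropping whole channels forces the decoder not to rely on any single pathway's features, which is the stated motivation for better generalization on small datasets.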
Collapse
Affiliation(s)
- Vilen Jumutc
- Institute of Smart Computer Technologies, Riga Technical University, LV-1658 Riga, Latvia;
| | - Dmitrijs Bļizņuks
- Institute of Smart Computer Technologies, Riga Technical University, LV-1658 Riga, Latvia;
| | - Alexey Lihachev
- Institute of Atomic Physics and Spectroscopy, University of Latvia, LV-1586 Riga, Latvia;
| |
Collapse
|
19
|
Lai Z, Wang C, Hu Z, Dugger BN, Cheung SC, Chuah CN. A Semi-supervised Learning for Segmentation of Gigapixel Histopathology Images from Brain Tissues. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:1920-1923. [PMID: 34891662 DOI: 10.1109/embc46164.2021.9629715] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Automated segmentation of grey matter (GM) and white matter (WM) in gigapixel histopathology images is advantageous for analyzing distributions of disease pathologies, further aiding in neuropathologic deep phenotyping. Although supervised deep learning methods have shown good performance, their requirement for a large amount of labeled data may not be cost-effective for large-scale projects. In the case of GM/WM segmentation, trained experts need to carefully trace the delineation in gigapixel images. To minimize manual labeling, we consider semi-supervised learning (SSL) and deploy one state-of-the-art SSL method (FixMatch) on WSIs. We then propose a two-stage scheme to further improve the performance of SSL: the first stage is a self-supervised module that trains an encoder to learn visual representations of unlabeled data; this well-trained encoder then initializes the consistency-loss-based SSL of the second stage. We test our method on Amyloid-β-stained histopathology images, and the results outperform FixMatch, improving the mean IoU score by around 2% when using 6,000 labeled tiles and by over 10% when using only 600 labeled tiles from 2 WSIs. Clinical relevance: this work minimizes the labeling effort required of trained personnel. An improved GM/WM segmentation method could further aid the study of brain diseases, such as Alzheimer's disease.
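FixMatch's consistency loss, which the second stage above builds on, keeps only confidently pseudo-labeled tiles: predictions on weakly augmented inputs provide pseudo-labels, and only tiles whose maximum probability clears a threshold contribute a cross-entropy term against the strongly augmented predictions. A sketch with synthetic probabilities:

```python
import numpy as np

# FixMatch-style unlabeled loss: pseudo-label from weak-augmentation
# predictions, masked by a confidence threshold tau, scored against
# strong-augmentation predictions.
def fixmatch_loss(p_weak, p_strong, tau=0.95, eps=1e-7):
    conf = p_weak.max(axis=1)        # confidence of each tile
    pseudo = p_weak.argmax(axis=1)   # hard pseudo-label
    mask = conf >= tau               # keep only confident tiles
    if not mask.any():
        return 0.0
    ce = -np.log(p_strong[mask, pseudo[mask]] + eps)
    return ce.mean()

p_weak = np.array([[0.97, 0.03],    # confident -> contributes
                   [0.60, 0.40]])   # unconfident -> masked out
p_strong = np.array([[0.90, 0.10],
                     [0.20, 0.80]])
loss = fixmatch_loss(p_weak, p_strong)
```

The self-supervised pre-training stage described above changes only the encoder initialization; the masked consistency objective itself stays the same.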
Collapse
|
20
|
Lai Z, Wang C, Oliveira LC, Dugger BN, Cheung SC, Chuah CN. Joint Semi-supervised and Active Learning for Segmentation of Gigapixel Pathology Images with Cost-Effective Labeling. ... IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS. IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION 2021; 2021:591-600. [PMID: 35372752 PMCID: PMC8972970 DOI: 10.1109/iccvw54120.2021.00072] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
The need for manual and detailed annotations limits the applicability of supervised deep learning algorithms in medical image analysis, specifically in the field of pathology. Semi-supervised learning (SSL) provides an effective way of leveraging unlabeled data to relieve the heavy reliance on labeled samples when training a model. Although SSL has shown good performance, the performance of recent state-of-the-art SSL methods on pathology images is still under study, and the problem of selecting the optimal data to label for SSL has not been fully explored. To tackle this challenge, we propose a semi-supervised active learning framework with a region-based selection criterion. This framework iteratively selects regions for annotation query to quickly expand the diversity and volume of the labeled set. We evaluate our framework on a grey-matter/white-matter segmentation problem using gigapixel pathology images from autopsied human brain tissues. With only 0.1% of regions labeled, our proposed algorithm can reach a competitive IoU score compared to fully supervised learning and outperform the current state-of-the-art SSL by more than 10% in IoU score and DICE coefficient.
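A region-based selection criterion of the kind described above can be sketched by ranking candidate regions by mean predictive entropy and querying the top-k for annotation. The entropy ranking is one plausible criterion, not necessarily the authors' exact rule:

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Binary predictive entropy per pixel.
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

def select_regions(prob_maps, k):
    # prob_maps: (n_regions, h, w) foreground probabilities.
    # Score each region by mean pixel entropy, query the k most uncertain.
    scores = entropy(prob_maps).mean(axis=(1, 2))
    return np.argsort(scores)[::-1][:k]

confident = np.full((8, 8), 0.99)   # model already sure -> low priority
uncertain = np.full((8, 8), 0.5)    # maximally ambiguous -> query first
middling = np.full((8, 8), 0.8)
query = select_regions(np.stack([confident, uncertain, middling]), k=2)
```

Querying whole regions rather than scattered pixels keeps the annotation workload practical for gigapixel slides, which is the point of the region-based criterion.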
Collapse
|
21
|
Jones JD, Rodriguez MR, Quinn KP. Automated Extraction of Skin Wound Healing Biomarkers From In Vivo Label-Free Multiphoton Microscopy Using Convolutional Neural Networks. Lasers Surg Med 2021; 53:1086-1095. [PMID: 33442889 PMCID: PMC8275674 DOI: 10.1002/lsm.23375] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 12/06/2020] [Accepted: 12/23/2020] [Indexed: 12/16/2022]
Abstract
BACKGROUND AND OBJECTIVES Histological analysis is a gold-standard technique for studying impaired skin wound healing. Label-free multiphoton microscopy (MPM) can provide natural image contrast similar to histological sections and quantitative metabolic information using NADH and FAD autofluorescence. However, MPM analysis requires time-intensive manual segmentation of specific wound tissue regions, limiting the practicality and usage of the technology for monitoring wounds. The goal of this study was to train a series of convolutional neural networks (CNNs) to segment MPM images of skin wounds to automate image processing and quantification of wound geometry and metabolism. STUDY DESIGN/MATERIALS AND METHODS Two CNNs with a 4-layer U-Net architecture were trained to segment unstained skin wound tissue sections and in vivo z-stacks of the wound edge. The wound section CNN used 380 distinct MPM images while the in vivo CNN used 5,848, with both image sets randomly distributed into training, validation, and test sets following a 70%, 20%, and 10% split. The accuracy of each network was evaluated on the test set of images, and the effectiveness of automated measurement of wound geometry and optical redox ratio was compared with hand-traced outputs of six unstained wound sections and 69 wound edge z-stacks from eight mice. RESULTS The MPM wound section CNN had an overall accuracy of 92.83%. Measurements of epidermal/dermal thickness, wound depth, wound width, and % re-epithelialization were within 10% error when evaluated on six full wound sections from days 3, 5, and 10 post-wounding that were not included in the training set. The in vivo wound z-stack CNN had an overall accuracy of 89.66% and was able to isolate the wound edge epithelium in z-stacks from eight mice across post-wound time points to quantify the optical redox ratio within 5% of what was recorded by manual segmentations.
CONCLUSION The CNNs trained and presented in this study can accurately segment MPM imaged wound sections and in vivo z-stacks to enable automated and rapid calculation of wound geometry and metabolism. Although MPM is a noninvasive imaging modality well suited to imaging living wound tissue, its use has been limited by time-intensive user segmentation. The use of CNNs for automated image segmentation demonstrate that it is possible for MPM to deliver near real-time quantitative readouts of tissue structure and function. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.
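Once the CNN has isolated the wound-edge epithelium, the optical redox ratio is a simple per-pixel computation over the segmented region. A sketch with synthetic intensities; the FAD / (FAD + NADH) definition used below is one common convention, and may differ from the exact formula the authors used:

```python
import numpy as np

# Mean optical redox ratio inside a segmentation mask, using the
# FAD / (FAD + NADH) convention (an assumption; conventions vary).
def redox_ratio(nadh, fad, mask, eps=1e-12):
    ratio = fad / (fad + nadh + eps)
    return ratio[mask].mean()

nadh = np.full((4, 4), 2.0)        # synthetic NADH autofluorescence
fad = np.full((4, 4), 1.0)         # synthetic FAD autofluorescence
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True              # "wound edge epithelium" region
mean_ratio = redox_ratio(nadh, fad, mask)
```

Restricting the ratio to the segmented mask is what the CNN automates: without it, a human would have to trace the epithelium in every z-stack by hand.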
Collapse
Affiliation(s)
- Jake D. Jones
- Department of Biomedical Engineering, University of Arkansas, Fayetteville, AR, USA
| | - Marcos R. Rodriguez
- Department of Biomedical Engineering, University of Arkansas, Fayetteville, AR, USA
| | - Kyle P. Quinn
- Department of Biomedical Engineering, University of Arkansas, Fayetteville, AR, USA
| |
Collapse
|
22
|
Rastghalam R, Danyali H, Helfroush MS, Celebi ME, Mokhtari M. Skin Melanoma Detection in Microscopic Images Using HMM-Based Asymmetric Analysis and Expectation Maximization. IEEE J Biomed Health Inform 2021; 25:3486-3497. [PMID: 34003756 DOI: 10.1109/jbhi.2021.3081185] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Melanoma is one of the deadliest types of skin cancer with increasing incidence. The most definitive diagnosis method is the histopathological examination of the tissue sample. In this paper, a melanoma detection algorithm is proposed based on decision-level fusion and a Hidden Markov Model (HMM), whose parameters are optimized using Expectation Maximization (EM) and asymmetric analysis. The texture heterogeneity of the samples is determined using asymmetric analysis. A fusion-based HMM classifier trained using EM is introduced. For this purpose, a novel texture feature is extracted based on two local binary patterns, namely local difference pattern (LDP) and statistical histogram features of the microscopic image. Extensive experiments demonstrate that the proposed melanoma detection algorithm yields a total error of less than 0.04%.
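The texture features above are built on local binary patterns; the classic 8-neighbour LBP encodes each pixel by sign comparisons with its neighbours, and its histogram serves as a texture descriptor. The LDP variant the authors use differs in detail; this is the standard LBP for flavour:

```python
import numpy as np

# Minimal 8-neighbour local binary pattern: each interior pixel gets an
# 8-bit code, one bit per neighbour (1 if neighbour >= centre).
def lbp(image):
    h, w = image.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = image[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

flat = np.ones((3, 3))    # uniform patch: every neighbour >= centre
codes = lbp(flat)
hist = np.bincount(codes.ravel(), minlength=256)  # texture histogram
```

Histograms of such codes, computed over image regions, are the kind of statistical texture feature that the classifier above fuses with asymmetry analysis.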
Collapse
|
23
|
Aatresh AA, Yatgiri RP, Chanchal AK, Kumar A, Ravi A, Das D, Bs R, Lal S, Kini J. Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images. Comput Med Imaging Graph 2021; 93:101975. [PMID: 34461375 DOI: 10.1016/j.compmedimag.2021.101975] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 08/05/2021] [Accepted: 08/19/2021] [Indexed: 11/30/2022]
Abstract
Image segmentation remains one of the most vital tasks in computer vision, and even more so in medical image processing. Segmentation quality is often the main metric considered, with memory and computational efficiency overlooked, which limits the practical use of power-hungry models. In this paper, we propose a novel framework (Kidney-SegNet) that combines the effectiveness of an attention-based encoder-decoder architecture and atrous spatial pyramid pooling with highly efficient dimension-wise convolutions. The segmentation results of the proposed Kidney-SegNet architecture have been shown to outperform existing state-of-the-art deep learning methods, evaluated on two publicly available kidney and TNBC breast H&E-stained histopathology image datasets. Further, our simulation experiments also reveal that the computational complexity and memory requirements of our proposed architecture are very efficient compared to existing state-of-the-art deep learning methods for nuclei segmentation of H&E-stained histopathology images. The source code of our implementation will be available at https://github.com/Aaatresh/Kidney-SegNet.
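Atrous spatial pyramid pooling, named above, is built from atrous (dilated) convolutions: the kernel taps are spread `dilation` samples apart, enlarging the receptive field without adding parameters. A 1-D "valid" sketch with an illustrative signal and kernel:

```python
import numpy as np

# Atrous (dilated) 1-D convolution, "valid" padding: taps are spaced
# `dilation` samples apart, widening the receptive field for free.
def dilated_conv1d(x, kernel, dilation=1):
    span = (len(kernel) - 1) * dilation + 1   # effective kernel extent
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i:i + span:dilation]         # every `dilation`-th sample
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(8, dtype=float)               # 0..7
k = np.array([1.0, 1.0, 1.0])
dense = dilated_conv1d(x, k, dilation=1)    # ordinary 3-tap sum
atrous = dilated_conv1d(x, k, dilation=2)   # taps 2 apart, wider context
```

An ASPP block runs several such convolutions at different dilation rates in parallel and concatenates the results, capturing context at multiple scales with the same kernel size.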
Collapse
Affiliation(s)
- Anirudh Ashok Aatresh
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Rohit Prashant Yatgiri
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Amit Kumar Chanchal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Aman Kumar
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Akansh Ravi
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Devikalyan Das
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Raghavendra Bs
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Shyam Lal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Jyoti Kini
- Department of Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India.
| |
Collapse
|
24
|
Brady L, Wang YN, Rombokas E, Ledoux WR. Comparison of texture-based classification and deep learning for plantar soft tissue histology segmentation. Comput Biol Med 2021; 134:104491. [PMID: 34090017 PMCID: PMC8263502 DOI: 10.1016/j.compbiomed.2021.104491] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 05/10/2021] [Accepted: 05/10/2021] [Indexed: 11/22/2022]
Abstract
Histomorphological measurements can be used to identify microstructural changes related to disease pathomechanics, in particular, plantar soft tissue changes with diabetes. However, these measurements are time-consuming and susceptible to sampling and human measurement error. We investigated two approaches to automate segmentation of plantar soft tissue stained with modified Hart's stain for elastin with the eventual goal of subsequent morphological analysis. The first approach used multiple texture- and color-based features with tile-wise classification. The second approach used a convolutional neural network modified from the U-Net architecture with fewer channel dimensions and additional downsampling steps. A hybrid color and texture feature, Fourier reduced histogram of uniform improved opponent color local binary patterns (f-IOCLBP), yielded the best feature-based segmentation, but still performed 3.6% worse on average than the modified U-Net. The texture-based method was sensitive to changes in illumination and stain intensity, and segmentation errors were often in large regions of single tissues or at tissue boundaries. The U-Net was able to segment small, few-pixel tissue boundaries, and errors were often trivial to clean up with post-processing. A U-Net approach outperforms hand-crafted features for segmentation of plantar soft tissue stained with modified Hart's stain for elastin.
Collapse
Affiliation(s)
- Lynda Brady
- Center for Limb Loss and MoBility (CLiMB), VA Puget Sound, Seattle, WA, 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA, 98195, USA
| | - Yak-Nam Wang
- Center for Limb Loss and MoBility (CLiMB), VA Puget Sound, Seattle, WA, 98108, USA; Center for Industrial and Medical Ultrasound, Applied Physics Laboratory, University of Washington, Seattle, WA, 98195, USA
| | - Eric Rombokas
- Center for Limb Loss and MoBility (CLiMB), VA Puget Sound, Seattle, WA, 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA, 98195, USA; Department of Electrical Engineering, University of Washington, Seattle, WA, 98195, USA
| | - William R Ledoux
- Center for Limb Loss and MoBility (CLiMB), VA Puget Sound, Seattle, WA, 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA, 98195, USA; Department of Orthopaedics and Sports Medicine, University of Washington, Seattle, WA, 98195, USA.
| |
Collapse
|
25
|
Khened M, Kori A, Rajkumar H, Krishnamurthi G, Srinivasan B. A generalized deep learning framework for whole-slide image segmentation and analysis. Sci Rep 2021; 11:11579. [PMID: 34078928 PMCID: PMC8172839 DOI: 10.1038/s41598-021-90444-8] [Citation(s) in RCA: 61] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Accepted: 05/04/2021] [Indexed: 12/22/2022] Open
Abstract
Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Whole-slide imaging (WSI), i.e., the scanning and digitization of entire histology slides, is now being adopted in pathology labs across the world. Trained histopathologists can provide an accurate diagnosis of biopsy specimens based on WSI data. Given the dimensionality of WSIs and the increase in the number of potential cancer cases, analyzing these images is a time-consuming process. Automated segmentation of tumorous tissue helps improve the precision, speed, and reproducibility of research. In the recent past, deep learning-based techniques have provided state-of-the-art results in a wide variety of image analysis tasks, including the analysis of digitized slides. However, deep learning-based solutions pose many technical challenges, including the large size of WSI data, heterogeneity in images, and complexity of features. In this study, we propose a generalized deep learning-based framework for histopathology tissue analysis to address these challenges. Our framework is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and the generalizability of the analysis. The combination of techniques we have introduced includes an ensemble segmentation model, division of the WSI into smaller overlapping patches while addressing class imbalances, efficient techniques for inference, and an efficient, patch-based uncertainty estimation framework. Our ensemble consists of DenseNet-121, Inception-ResNet-V2, and DeeplabV3Plus, where all the networks were trained end to end for every task. We demonstrate the efficacy and improved generalizability of our framework by evaluating it on a variety of histopathology tasks including breast cancer metastases (CAMELYON), colon cancer (DigestPath), and liver cancer (PAIP). Our proposed framework achieves state-of-the-art performance across all these tasks and is currently ranked within the top 5 for the challenges based on these datasets. The entire framework, along with the trained models and the related documentation, is made freely available on GitHub and PyPI. Our framework is expected to aid histopathologists in accurate and efficient initial diagnosis. Moreover, the estimated uncertainty maps will help clinicians make informed decisions and plan further treatment or analysis.
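The overlapping-patch division described above can be sketched as follows; the patch size and stride here are illustrative values, not those of the published framework.

```python
import numpy as np

def tile_wsi(image, patch=256, stride=192):
    """Split a large image into overlapping square patches.

    Using stride < patch gives overlap, so structures cut by one patch
    border appear whole in a neighbouring patch. Returns the stacked
    patches and their top-left (y, x) coordinates for stitching results
    back together at inference time.
    """
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.stack(patches), coords
```

In practice a WSI pipeline would also filter out background-only patches and oversample minority-class regions to address the class imbalance the abstract mentions.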
Collapse
Affiliation(s)
- Mahendra Khened
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
| | - Avinash Kori
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
| | - Haran Rajkumar
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
| | - Ganapathy Krishnamurthi
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India.
| | - Balaji Srinivasan
- Department of Mechanical Engineering, Indian Institute of Technology Madras, Chennai, 600036, India
| |
Collapse
|
26
|
Le’Clerc Arrastia J, Heilenkötter N, Otero Baguer D, Hauberg-Lotte L, Boskamp T, Hetzer S, Duschner N, Schaller J, Maass P. Deeply Supervised UNet for Semantic Segmentation to Assist Dermatopathological Assessment of Basal Cell Carcinoma. J Imaging 2021; 7:71. [PMID: 34460521 PMCID: PMC8321345 DOI: 10.3390/jimaging7040071] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 03/29/2021] [Accepted: 04/06/2021] [Indexed: 11/19/2022] Open
Abstract
Accurate and fast assessment of resection margins is an essential part of a dermatopathologist's clinical routine. In this work, we develop a deep learning method to assist dermatopathologists by marking critical regions that have a high probability of exhibiting pathological features in whole slide images (WSI). We focus on detecting basal cell carcinoma (BCC) through semantic segmentation using several models based on the UNet architecture. The study includes 650 WSI with 3443 tissue sections in total. Two clinical dermatopathologists annotated the data, marking the exact location of tumor tissue on 100 WSI. The rest of the data, with ground-truth section-wise labels, is used to further validate and test the models. We analyze two different encoders for the first part of the UNet network and two additional training strategies, (a) deep supervision and (b) a linear combination of decoder outputs, and derive some interpretations of what the network's decoder does in each case. The best model achieves over 96% accuracy, sensitivity, and specificity on the test set.
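The deep-supervision strategy amounts to combining per-stage losses from several decoder outputs. A minimal numpy sketch of that loss combination, assuming equal weights and decoder logits already upsampled to the target resolution (the paper's exact weighting is not reproduced here):

```python
import numpy as np

def deep_supervision_loss(decoder_logits, target, weights=None):
    """Weighted sum of per-scale cross-entropy losses.

    decoder_logits: list of (H, W, C) class-logit maps, one per decoder
    stage. target: (H, W) integer ground-truth labels. Each stage is
    supervised directly, and the stage losses are linearly combined.
    """
    if weights is None:
        weights = [1.0 / len(decoder_logits)] * len(decoder_logits)
    total = 0.0
    for w, logits in zip(weights, decoder_logits):
        # Numerically stable softmax over the class axis.
        z = logits - logits.max(axis=-1, keepdims=True)
        probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
        # Pick the predicted probability of the true class per pixel.
        picked = np.take_along_axis(probs, target[..., None], axis=-1)
        total += w * -np.log(picked + 1e-12).mean()
    return total
```

Supervising intermediate decoder stages this way forces coarse stages to produce semantically meaningful maps rather than relying on the final stage alone.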
Collapse
Affiliation(s)
- Jean Le’Clerc Arrastia
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| | - Nick Heilenkötter
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| | - Daniel Otero Baguer
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| | - Lena Hauberg-Lotte
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| | | | - Sonja Hetzer
- Dermatopathologie Duisburg Essen, 45329 Essen, Germany; (S.H.); (N.D.); (J.S.)
| | - Nicole Duschner
- Dermatopathologie Duisburg Essen, 45329 Essen, Germany; (S.H.); (N.D.); (J.S.)
| | - Jörg Schaller
- Dermatopathologie Duisburg Essen, 45329 Essen, Germany; (S.H.); (N.D.); (J.S.)
| | - Peter Maass
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| |
Collapse
|
27
|
Wang Q, Sun L, Wang Y, Zhou M, Hu M, Chen J, Wen Y, Li Q. Identification of Melanoma From Hyperspectral Pathology Image Using 3D Convolutional Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:218-227. [PMID: 32956043 DOI: 10.1109/tmi.2020.3024923] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Skin biopsy histopathological analysis is one of the primary methods pathologists use to assess the presence and progression of melanoma in clinical practice. A comprehensive and reliable pathological analysis depends on correctly segmenting melanoma and its interaction with benign tissues, which in turn supports accurate therapy. In this study, we applied deep convolutional networks to hyperspectral pathology images to perform segmentation of melanoma. To make the best use of the spectral properties of three-dimensional hyperspectral data, we proposed a 3D fully convolutional network named Hyper-net to segment melanoma from hyperspectral pathology images. To enhance the sensitivity of the model, we made a specific modification to the loss function to guard against false negatives in diagnosis. Hyper-net surpassed the 2D model, with accuracy over 92%. The false negative rate decreased by nearly 66% when using Hyper-net with the modified loss function. These findings demonstrate the ability of Hyper-net to assist pathologists in the diagnosis of melanoma based on hyperspectral pathology images.
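One common way to realize a false-negative-averse loss of the kind described is to up-weight the positive (tumour) term of the cross-entropy. The exact form and weight used in Hyper-net are not specified here, so this is only a generic sketch with an assumed weight:

```python
import numpy as np

def fn_weighted_bce(probs, labels, fn_weight=3.0):
    """Binary cross-entropy that penalises false negatives more heavily.

    probs: predicted tumour probabilities in (0, 1). labels: 1 for
    tumour pixels, 0 for benign. Missed tumour pixels (label 1 with a
    low prediction) cost fn_weight times as much as false positives;
    the value 3.0 is an illustrative assumption.
    """
    eps = 1e-12
    pos = -fn_weight * labels * np.log(probs + eps)    # up-weighted miss term
    neg = -(1.0 - labels) * np.log(1.0 - probs + eps)  # standard FP term
    return (pos + neg).mean()
```

Raising `fn_weight` trades precision for sensitivity, which matches the abstract's reported drop in false negative rate.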
Collapse
|
28
|
Deep learning in prostate cancer diagnosis and Gleason grading in histopathology images: An extensive study. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100582] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
|
29
|
Hussein S. Automatic layer segmentation in H&E images of mice skin based on colour deconvolution and fuzzy C-mean clustering. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100692] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
|
30
|
Creasy DM, Panchal ST, Garg R, Samanta P. Deep Learning-Based Spermatogenic Staging Assessment for Hematoxylin and Eosin-Stained Sections of Rat Testes. Toxicol Pathol 2020; 49:872-887. [PMID: 33252007 DOI: 10.1177/0192623320969678] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
In preclinical toxicology studies, a "stage-aware" histopathological evaluation of testes is recognized as the most sensitive method to detect effects on spermatogenesis. A stage-aware evaluation requires the pathologist to be able to identify the different stages of the spermatogenic cycle. Classically, this evaluation has been performed using periodic acid-Schiff (PAS)-stained sections to visualize the morphology of the developing spermatid acrosome, but due to the complexity of the rat spermatogenic cycle and the subtlety of the criteria used to distinguish between the 14 stages of the cycle, staging of tubules is not only time consuming but also requires specialized training and practice to become competent. Using different criteria, based largely on the shape and movement of the elongating spermatids within the tubule and pooling some of the stages, it is possible to stage tubules using routine hematoxylin and eosin (H&E)-stained sections, thereby negating the need for a special PAS stain. These criteria have been used to develop an automated method to identify the stages of the rat spermatogenic cycle in digital images of H&E-stained Wistar rat testes. The algorithm identifies the spermatogenic stage of each tubule, thereby allowing the pathologist to quickly evaluate the testis in a stage-aware manner and rapidly calculate the stage frequencies.
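Once the algorithm has assigned a stage to every tubule, the stage-frequency calculation mentioned above reduces to a normalised histogram over tubule labels. A small sketch, with integer labels 0 to 13 standing in for stages I to XIV:

```python
import numpy as np

def stage_frequencies(tubule_stages, n_stages=14):
    """Relative frequency of each spermatogenic stage across tubules.

    tubule_stages: iterable of per-tubule integer stage labels in
    [0, n_stages). Returns an array of length n_stages whose entries
    sum to 1; shifts in this profile versus control animals are what a
    stage-aware evaluation looks for.
    """
    counts = np.bincount(np.asarray(tubule_stages), minlength=n_stages)
    return counts / counts.sum()
```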
Collapse
Affiliation(s)
| | - Satish T Panchal
- Sun Pharma Advanced Research Co Ltd, Vadodara, Gujarat, India
| | | | | |
Collapse
|
31
|
Wang X, Fang Y, Yang S, Zhu D, Wang M, Zhang J, Tong KY, Han X. A hybrid network for automatic hepatocellular carcinoma segmentation in H&E-stained whole slide images. Med Image Anal 2020; 68:101914. [PMID: 33285479 DOI: 10.1016/j.media.2020.101914] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2020] [Revised: 11/09/2020] [Accepted: 11/16/2020] [Indexed: 12/17/2022]
Abstract
Hepatocellular carcinoma (HCC), as the most common type of primary malignant liver cancer, has become a leading cause of cancer deaths in recent years. Accurate segmentation of HCC lesions is critical for tumor load assessment, surgery planning, and postoperative examination. As the appearance of HCC lesions varies greatly across patients, traditional manual segmentation is a very tedious and time-consuming process, the accuracy of which is also difficult to ensure. Therefore, a fully automated and reliable HCC segmentation system is in high demand. In this work, we present a novel hybrid neural network based on multi-task learning and ensemble learning techniques for accurate HCC segmentation of hematoxylin and eosin (H&E)-stained whole slide images (WSIs). First, three task-specific branches are integrated to enlarge the feature space, based on which the network is able to learn more general features and thus reduce the risk of overfitting. Second, an ensemble learning scheme is leveraged to perform feature aggregation, in which selective kernel modules (SKMs) and spatial and channel-wise squeeze-and-excitation modules (scSEMs) are adopted for capturing the features from different spaces and scales. Our proposed method achieves state-of-the-art performance on three publicly available datasets, with segmentation accuracies of 0.797, 0.923, and 0.765 in the PAIP, CRAG, and UHCMC&CWRU datasets, respectively, which demonstrates its effectiveness in addressing the HCC segmentation problem. To the best of our knowledge, this is also the first work on the pixel-wise HCC segmentation of H&E-stained WSIs.
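The spatial and channel squeeze-and-excitation module (scSE) used in this ensemble recalibrates features along both axes. A toy numpy version follows; the weights are supplied by the caller (a trained network would learn them), and combining the two excited maps by element-wise maximum is one published variant, not necessarily the paper's choice:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scse(feature, w_channel, w_spatial):
    """Concurrent spatial and channel squeeze-and-excitation (scSE).

    feature: (H, W, C) feature map. w_channel: (C, C) weights producing
    per-channel gates from globally pooled features. w_spatial: (C,)
    weights acting as a 1x1 projection that yields per-pixel gates.
    """
    pooled = feature.mean(axis=(0, 1))        # spatial squeeze -> (C,)
    ch_gate = sigmoid(pooled @ w_channel)     # channel gates, (C,)
    sp_gate = sigmoid(feature @ w_spatial)    # spatial gates, (H, W)
    cse = feature * ch_gate                   # channel excitation
    sse = feature * sp_gate[..., None]        # spatial excitation
    return np.maximum(cse, sse)               # one way to fuse the two paths
```

The channel path emphasises informative feature maps globally, while the spatial path highlights informative pixel locations; the fusion keeps whichever recalibration is stronger at each position.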
Collapse
Affiliation(s)
- Xiyue Wang
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Yuqi Fang
- Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong; Tencent AI Lab, Shenzhen 518057, China
| | - Sen Yang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China; Tencent AI Lab, Shenzhen 518057, China
| | - Delong Zhu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
| | - Minghui Wang
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Jing Zhang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China.
| | - Kai-Yu Tong
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
| | - Xiao Han
- Tencent AI Lab, Shenzhen 518057, China.
| |
Collapse
|
32
|
Jones JD, Quinn KP. Automated Quantitative Analysis of Wound Histology Using Deep-Learning Neural Networks. J Invest Dermatol 2020; 141:1367-1370. [PMID: 33121938 DOI: 10.1016/j.jid.2020.10.010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Revised: 09/27/2020] [Accepted: 10/07/2020] [Indexed: 01/20/2023]
Affiliation(s)
- Jake D Jones
- Department of Biomedical Engineering, University of Arkansas, Fayetteville, Arkansas, USA
| | - Kyle P Quinn
- Department of Biomedical Engineering, University of Arkansas, Fayetteville, Arkansas, USA.
| |
Collapse
|