1. Klimek A, Mondal D, Block S, Sharma P, Netz RR. Data-driven classification of individual cells by their non-Markovian motion. Biophys J 2024; 123:1173-1183. [PMID: 38515300; PMCID: PMC11140416; DOI: 10.1016/j.bpj.2024.03.023]
Abstract
We present a method to differentiate organisms solely by their motion based on the generalized Langevin equation (GLE) and use it to distinguish two different swimming modes of strongly confined unicellular microalgae Chlamydomonas reinhardtii. The GLE is a general model for active or passive motion of organisms and particles that can be derived from a time-dependent general many-body Hamiltonian and in particular includes non-Markovian effects (i.e., the memory of the trajectory's past). We extract all GLE parameters from individual cell trajectories and perform an unbiased cluster analysis to group them into different classes. For the specific cell population employed in the experiments, the GLE-based assignment into the two different swimming modes works perfectly, as checked by control experiments. The classification and sorting of single cells and organisms are important in many areas; our method, which is based on motion trajectories, offers wide-ranging applications in biology and medicine.
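As an editorial illustration of the unbiased cluster-analysis step described above, the sketch below groups per-cell parameter vectors with k-means. The parameter values and their interpretation (friction, memory amplitude, memory time) are hypothetical placeholders; the sketch only assumes that a small vector of fitted GLE parameters is already available for each cell.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-cell GLE parameters: rows = cells,
# columns = fitted parameters (e.g., friction, memory amplitude, memory time).
rng = np.random.default_rng(0)
params = np.vstack([
    rng.normal([1.0, 0.5, 2.0], 0.1, size=(30, 3)),   # swimming mode A (synthetic)
    rng.normal([2.0, 1.5, 0.5], 0.1, size=(30, 3)),   # swimming mode B (synthetic)
])

# Standardize so no single parameter dominates the distance metric,
# then group the cells into two candidate swimming modes.
X = StandardScaler().fit_transform(params)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```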
Affiliation(s)
- Anton Klimek: Fachbereich Physik, Freie Universität Berlin, Berlin, Germany
- Debasmita Mondal: Department of Physics, Indian Institute of Science, Bangalore, India; James Franck Institute, University of Chicago, Chicago, Illinois
- Stephan Block: Institut für Chemie und Biochemie, Freie Universität Berlin, Berlin, Germany
- Prerna Sharma: Department of Physics, Indian Institute of Science, Bangalore, India; Department of Bioengineering, Indian Institute of Science, Bangalore, India
- Roland R Netz: Fachbereich Physik, Freie Universität Berlin, Berlin, Germany
2. Shanmugam K, Rajaguru H. Exploration and Enhancement of Classifiers in the Detection of Lung Cancer from Histopathological Images. Diagnostics (Basel) 2023; 13:3289. [PMID: 37892110; PMCID: PMC10606104; DOI: 10.3390/diagnostics13203289]
Abstract
Lung cancer is a prevalent malignancy that affects individuals of all genders and is often diagnosed late because symptoms appear late. To catch it early, researchers are developing algorithms to analyze lung cancer images. The primary objective of this work is to propose a novel approach for the detection of lung cancer using histopathological images. The histopathological images underwent preprocessing, followed by segmentation with a modified KFCM-based approach; the segmented image intensity values were then dimensionally reduced using Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO). Algorithms such as KL Divergence and Invasive Weed Optimization (IWO) were used for feature selection. Seven classifiers (SVM, KNN, Random Forest, Decision Tree, Softmax Discriminant, Multilayer Perceptron, and BLDC) were used to analyze and classify the images as benign or malignant. Results were compared using standard metrics, and kappa analysis assessed classifier agreement. The Decision Tree classifier with GWO feature extraction achieved an accuracy of 85.01% without feature selection or hyperparameter tuning. Furthermore, we present a methodology to enhance classifier accuracy by employing hyperparameter tuning algorithms based on Adam and RAdam. By combining features from GWO and IWO and using the RAdam algorithm, the Decision Tree classifier achieves an accuracy of 91.57%.
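To illustrate the classifier-evaluation step (accuracy plus kappa-based agreement), here is a minimal scikit-learn sketch. The synthetic data and the decision tree stand in for the paper's GWO/IWO-selected histopathology features and tuned models, which are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic stand-in for dimensionally reduced histopathology features.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Accuracy plus Cohen's kappa, the agreement measure used to compare classifiers.
print("accuracy:", accuracy_score(y_te, pred))
print("kappa:   ", cohen_kappa_score(y_te, pred))
```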
Affiliation(s)
- Harikumar Rajaguru: Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam 638401, India
3. Sorrentino S, Manetti F, Bresci A, Vernuccio F, Ceconello C, Ghislanzoni S, Bongarzone I, Vanna R, Cerullo G, Polli D. Deep ensemble learning and transfer learning methods for classification of senescent cells from nonlinear optical microscopy images. Front Chem 2023; 11:1213981. [PMID: 37426334; PMCID: PMC10326547; DOI: 10.3389/fchem.2023.1213981]
Abstract
The success of chemotherapy and radiotherapy anti-cancer treatments can result in tumor suppression or senescence induction. Senescence was previously considered a favorable therapeutic outcome, until recent advances in oncology research identified senescence as one of the culprits of cancer recurrence. Its detection requires multiple assays, and nonlinear optical (NLO) microscopy provides a solution for fast, non-invasive, and label-free detection of therapy-induced senescent cells. Here, we develop several deep learning architectures to perform binary classification between senescent and proliferating human cancer cells using NLO microscopy images, and we compare their performances. We demonstrate that the best-performing approach is an ensemble classifier that combines seven different pre-trained classification networks from the literature, each extended with fully connected layers on top of its architecture. This approach achieves a classification accuracy of over 90%, showing the possibility of building an automatic, unbiased senescent-cell image classifier from multimodal NLO microscopy data. Our results open the way to a deeper investigation of senescence classification via deep learning techniques, with a potential application in clinical diagnosis.
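A minimal sketch of the ensemble idea, assuming PyTorch/torchvision with ImageNet-pretrained backbones (weights are downloaded on first use): two backbones and simple soft voting stand in for the seven networks and the fully connected heads used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

def with_binary_head(backbone: nn.Module) -> nn.Module:
    # Replace the original ImageNet classifier with a fully connected head
    # for the senescent-vs-proliferating binary task.
    if hasattr(backbone, "fc"):            # ResNet-style models
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)
    else:                                  # DenseNet-style models
        backbone.classifier = nn.Linear(backbone.classifier.in_features, 2)
    return backbone.eval()

# Two pretrained members stand in for the seven networks used in the paper.
members = [
    with_binary_head(models.resnet18(weights=models.ResNet18_Weights.DEFAULT)),
    with_binary_head(models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)),
]

@torch.no_grad()
def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    # Soft voting: average the softmax probabilities of all members.
    probs = [torch.softmax(m(x), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)

dummy = torch.randn(4, 3, 224, 224)        # stand-in batch of NLO image tiles
print(ensemble_predict(dummy).shape)       # torch.Size([4, 2])
```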
Affiliation(s)
- Arianna Bresci: Department of Physics, Politecnico di Milano, Milan, Italy
- Silvia Ghislanzoni: Department of Advanced Diagnostics, Fondazione IRCCS Istituto Nazionale dei Tumori Milano, Milan, Italy
- Italia Bongarzone: Department of Advanced Diagnostics, Fondazione IRCCS Istituto Nazionale dei Tumori Milano, Milan, Italy
- Renzo Vanna: CNR-Institute for Photonics and Nanotechnologies (CNR-IFN), Milan, Italy
- Giulio Cerullo: Department of Physics, Politecnico di Milano, Milan, Italy; CNR-Institute for Photonics and Nanotechnologies (CNR-IFN), Milan, Italy
- Dario Polli: Department of Physics, Politecnico di Milano, Milan, Italy; CNR-Institute for Photonics and Nanotechnologies (CNR-IFN), Milan, Italy
4. Identifying out of distribution samples for skin cancer and malaria images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103882]
5. Deep Neural Network Models for Colon Cancer Screening. Cancers (Basel) 2022; 14:3707. [PMID: 35954370; PMCID: PMC9367621; DOI: 10.3390/cancers14153707]
Abstract
Simple Summary: Deep learning models have been shown to achieve high performance in diagnosing colon cancer compared with conventional image processing and hand-crafted machine learning methods. Hence, several studies have focused on developing hybrid, end-to-end, and transfer learning techniques to reduce the manual interaction needed to label regions of interest. However, these weak-learning techniques do not always provide a clear diagnosis. It is therefore necessary to develop explainable learning methods that can highlight the decisive factors and form the basis of clinical decisions, yet little research has employed such transparent approaches. This study discusses the aforementioned models for colon cancer diagnosis.
Abstract: Early detection of colorectal cancer can significantly facilitate clinicians' decision-making and reduce their workload. This can be achieved using automatic systems with endoscopic and histological images. Recently, the success of deep learning has motivated the development of image- and video-based polyp identification and segmentation. Currently, most diagnostic colonoscopy rooms utilize artificial intelligence methods that are considered to perform well in predicting invasive cancer. Convolutional neural network-based architectures, together with image patches and preprocessing, are widely used. Furthermore, transfer learning and end-to-end learning techniques have been adopted for detection and localization tasks, improving accuracy and reducing user dependence with limited datasets. However, explainable deep networks that provide transparency, interpretability, reliability, and fairness in clinical diagnostics are preferred. In this review, we summarize the latest advances in such models, with or without transparency, for the prediction of colorectal cancer and also address the knowledge gap in the upcoming technology.
6.
7. Ellebrecht DB, von Weihe S. Endoscopic confocal laser-microscopy for the intraoperative nerve recognition: is it feasible? Biomed Tech (Berl) 2021; 67:11-17. [PMID: 34913620; DOI: 10.1515/bmt-2021-0171]
Abstract
Surgeons lose most of their tactile tissue information during minimally invasive surgery and need an additional tool for intraoperative tissue recognition. Confocal laser microscopy (CLM) is a well-established method of tissue investigation. The objective of this study was to analyze the feasibility and diagnostic accuracy of CLM for nervous tissue recognition. Images of sympathetic ganglia, nerve fibers, and pleural tissue taken with an endoscopic CLM system were characterized ex vivo in terms of specific signal patterns. No fluorescent dye was used. Diagnostic accuracy of tissue classification was evaluated by newly trained observers (sensitivity, specificity, PPV, NPV, and interobserver variability). Although the CLM images showed low contrast, assessment of nerve tissue was feasible without any fluorescent dye. Sensitivity ranged between 0.73 and 0.90 and specificity between 0.55 and 1.0. PPVs were 0.71-1.0 and NPVs ranged between 0.58 and 0.86. The overall interobserver variability was 0.36. The eCLM enables evaluation of nervous tissue and discrimination between nerve fibers, ganglia, and pleural tissue based on backscattered light. However, the low image contrast, the heterogeneity in correct tissue diagnosis, and a fair interobserver variability indicate the limits of CLM imaging without a fluorescent dye.
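The reported accuracy measures follow directly from a 2x2 confusion matrix; the sketch below uses scikit-learn with made-up ratings in place of the observers' actual assessments (binarized here into nerve vs. non-nerve purely for illustration).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# Made-up binary ratings: 1 = nerve tissue, 0 = other (ganglion/pleura pooled).
truth    = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
observer = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(truth, observer).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(sensitivity, specificity, ppv, npv)

# Agreement between two observers is summarized with Cohen's kappa.
observer2 = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])
print("kappa:", cohen_kappa_score(observer, observer2))
```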
Affiliation(s)
- Sönke von Weihe: Department of Thoracic Surgery, LungClinic Großhansdorf, Wöhrendamm 80, 22927 Großhansdorf, Germany
8. Ellebrecht DB, Heßler N, Schlaefer A, Gessert N. Confocal Laser Microscopy for in vivo Intraoperative Application: Diagnostic Accuracy of Investigator and Machine Learning Strategies. Visc Med 2021; 37:533-541. [PMID: 35087903; PMCID: PMC8740144; DOI: 10.1159/000517146]
Abstract
BACKGROUND: Confocal laser microscopy (CLM) is one of the optical techniques that are promising for intraoperative in vivo real-time tissue examination based on tissue fluorescence. However, surgeons may struggle to interpret CLM images intraoperatively because different tissue pathologies present different characteristics in clinical reality. Deep learning techniques enable fast and consistent image analysis and might support intraoperative image interpretation. The objective of this study was to analyze the diagnostic accuracy of newly trained observers in the evaluation of normal colon and peritoneal tissue and colon cancer and metastasis, respectively, and to compare it with that of convolutional neural networks (CNNs). METHODS: Two hundred representative CLM images of normal and malignant colon and peritoneal tissue were evaluated by newly trained observers (surgeons and pathologists) and CNNs (VGG-16 and DenseNet121), respectively, based on tissue dignity. The primary endpoint was the correct detection of normal and cancer/metastasis tissue, measured by the sensitivity and specificity of both groups. Additionally, positive predictive values (PPVs) and negative predictive values (NPVs) were calculated for the newly trained observer group. The interobserver variability of dignity evaluation was calculated using the kappa statistic. The F1-score and area under the curve (AUC) were used to evaluate the image-recognition performance of the CNN training scenarios. RESULTS: Sensitivity and specificity ranged between 0.55 and 1.0 (pathologists: 0.66-0.97; surgeons: 0.55-1.0) and between 0.65 and 0.96 (pathologists: 0.68-0.93; surgeons: 0.65-0.96), respectively. PPVs were 0.75 and 0.90 in the pathologists' group and 0.73-0.96 in the surgeons' group. NPVs were 0.73 and 0.96 for the pathologists' and 0.66-1.00 for the surgeons' tissue analysis. The overall interobserver variability was 0.54. Depending on the training scenario, cancer/metastasis tissue was classified with an AUC of 0.77-0.88 by VGG-16 and 0.85-0.89 by DenseNet121. Transfer learning improved performance over training from scratch. CONCLUSIONS: Newly trained investigators are able to learn CLM image features and interpretation rapidly, regardless of their clinical experience. Heterogeneity in tissue diagnosis and a moderate interobserver variability realistically reflect clinical practice. CNNs provide diagnostic results comparable to those of clinical observers and could improve surgeons' intraoperative tissue assessment.
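A minimal sketch of a transfer-learning setup of the kind compared against training from scratch, assuming PyTorch/torchvision; freezing the DenseNet121 feature extractor and training only a new two-class head is one common variant, not necessarily the exact training scenario used in the study.

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights (transfer learning) rather than a random init.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)

# Freeze the convolutional feature extractor ...
for p in model.features.parameters():
    p.requires_grad = False

# ... and train only a new head for the normal vs. cancer/metastasis task.
model.classifier = nn.Linear(model.classifier.in_features, 2)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(len(trainable), "trainable tensors, e.g.", trainable[:2])
```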
Affiliation(s)
- David Benjamin Ellebrecht: Department of Thoracic Surgery, LungenClinic Großhansdorf, Großhansdorf, Germany; Department of Surgery, Campus Lübeck, University Medical Centre Schleswig-Holstein, Lübeck, Germany
- Nicole Heßler: Institute of Medical Biometry and Statistics, University of Lübeck, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Alexander Schlaefer: Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Nils Gessert: Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
9. Machine learning-based multiparametric traditional multislice computed tomography radiomics for improving the discrimination of parotid neoplasms. Mol Clin Oncol 2021; 15:245. [PMID: 34650812; DOI: 10.3892/mco.2021.2407]
Abstract
Characterization of parotid tumors is important for treatment planning and prognosis, and parotid tumor discrimination has recently been developed at the molecular level. The aim of the present study was to establish a machine learning (ML) predictive model based on multiparametric traditional multislice CT (MSCT) radiomic and clinical data analysis to improve the accuracy of differentiation among pleomorphic adenoma (PA), Warthin tumor (WT) and parotid carcinoma (PCa). A total of 345 patients (200 with WT, 91 with PA and 54 with PCa) with pathologically confirmed parotid tumors were retrospectively enrolled from five independent institutions between January 2010 and May 2019. A total of 273 patients recruited from institutions 1, 2 and 3 were randomly assigned to the training cohort; the independent validation set consisted of 72 patients treated at institutions 1, 4 and 5. Data were analyzed using a linear discriminant analysis-based ML classifier. Feature selection and dimension reduction were conducted using reproducibility testing and a wrapper method. The diagnostic accuracy of the predictive model was compared with histopathological findings as the reference standard. The classifier achieved satisfactory performance for the discrimination of PA, WT and PCa, with a total accuracy of 82.1% in the training cohort and 80.5% in the validation cohort. In conclusion, ML-based multiparametric traditional MSCT radiomics can improve the accuracy of differentiation among PA, WT and PCa. The findings of the present study should be validated by multicenter prospective studies using completely independent external data.
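The model-building step (a wrapper-style feature selector feeding a linear discriminant classifier) can be sketched as a scikit-learn pipeline; the synthetic three-class data and the greedy forward selector below are illustrative stand-ins for the radiomic features and the exact wrapper method used.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Synthetic stand-in for radiomic features of PA / WT / PCa cases.
X, y = make_classification(n_samples=345, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

lda = LinearDiscriminantAnalysis()
pipe = Pipeline([
    ("scale", StandardScaler()),
    # Wrapper method: greedily keep the feature subset that best serves the LDA.
    ("select", SequentialFeatureSelector(lda, n_features_to_select=10)),
    ("clf", LinearDiscriminantAnalysis()),
])
pipe.fit(X_tr, y_tr)
print("validation accuracy:", pipe.score(X_val, y_val))
```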
10. Toğaçar M. Disease type detection in lung and colon cancer images using the complement approach of inefficient sets. Comput Biol Med 2021; 137:104827. [PMID: 34560401; DOI: 10.1016/j.compbiomed.2021.104827]
Abstract
Lung and colon cancers are deadly diseases that can develop simultaneously in the respective organs and, in some special cases, severely affect human life. Although the simultaneous occurrence of these two cancer types is unlikely, there is a high probability of metastasis between the two organs if they are not diagnosed early. Traditionally, specialists must go through a lengthy and complicated process to examine histopathological images and diagnose cancer cases; with current technology, this process can now be completed much faster. In this study, an artificial intelligence-supported model and optimization methods were used to classify histopathological images of lung and colon cancers. The dataset contains five classes of histopathological images: two colon cancer classes and three lung cancer classes. In the proposed approach, the image classes were trained from scratch with the DarkNet-19 deep learning model. From the feature set extracted by the DarkNet-19 model, inefficient features were selected using the Equilibrium and Manta Ray Foraging optimization algorithms. The set of inefficient features was then separated from the remaining features, creating an efficient feature set (the complement rule in sets). The efficient features obtained with the two optimization algorithms were combined and classified with the Support Vector Machine (SVM) method. The overall accuracy obtained in the classification process was 99.69%. Based on these outcomes, using the complement method together with the optimization methods improved the classification performance on the dataset.
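The "complement of inefficient sets" idea reduces to a set difference over feature indices. In the sketch below, a trivial variance criterion stands in for the Equilibrium and Manta Ray Foraging optimizers that flag inefficient features, and scikit-learn's SVM classifies the remaining (efficient) features; it illustrates the rule, not the paper's pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for features extracted by a deep backbone (e.g., DarkNet-19).
X, y = make_classification(n_samples=500, n_features=100, n_informative=15,
                           random_state=0)

def inefficient_indices(X, quantile):
    # Placeholder for a metaheuristic optimizer: flag the lowest-variance
    # features as "inefficient" (stands in for Equilibrium / MRFO selection).
    var = X.var(axis=0)
    return set(np.where(var < np.quantile(var, quantile))[0])

ineff = inefficient_indices(X, 0.2) | inefficient_indices(X, 0.3)

# Complement rule: keep every feature NOT flagged as inefficient.
efficient = sorted(set(range(X.shape[1])) - ineff)
score = cross_val_score(SVC(kernel="rbf"), X[:, efficient], y, cv=5).mean()
print(len(efficient), "efficient features, CV accuracy:", round(score, 3))
```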
Affiliation(s)
- Mesut Toğaçar: Department of Computer Technology, Technical Sciences Vocational School, Fırat University, Elazig, Turkey
11. Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146; PMCID: PMC8273152; DOI: 10.1002/lsm.23414]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Affiliation(s)
- L. Tian: Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt: Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith: Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180
- M. Ochoa: Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180
- X. Intes: Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180
- N. J. Durr: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
12. Wang SH, Nayak DR, Guttery DS, Zhang X, Zhang YD. COVID-19 classification by CCSHNet with deep fusion using transfer learning and discriminant correlation analysis. Inf Fusion 2021; 68:131-148. [PMID: 33519321; PMCID: PMC7837204; DOI: 10.1016/j.inffus.2020.11.005]
Abstract
AIM: COVID-19 is a disease caused by a new strain of coronavirus. Up to 18th October 2020, there had been 39.6 million confirmed cases worldwide, resulting in more than 1.1 million deaths. To improve diagnosis, we aimed to design and develop a novel advanced AI system for COVID-19 classification based on chest CT (CCT) images. METHODS: Our dataset from local hospitals consisted of 284 COVID-19 images, 281 community-acquired pneumonia images, 293 secondary pulmonary tuberculosis images, and 306 healthy control images. We first used pretrained models (PTMs) to learn features and proposed a novel (L, 2) transfer feature learning algorithm to extract features, with the number of layers to be removed (NLR, symbolized as L) as a hyperparameter. Second, we proposed a pretrained-network selection algorithm for fusion to determine the best two models, characterized by their PTM and NLR. Third, deep CCT fusion by discriminant correlation analysis was proposed to fuse the features from the two models. The micro-averaged (MA) F1 score was used as the measuring indicator. The final model was named CCSHNet. RESULTS: On the test set, CCSHNet achieved sensitivities for the four classes of 95.61%, 96.25%, 98.30%, and 97.86%, respectively. The precision values of the four classes were 97.32%, 96.42%, 96.99%, and 97.38%, respectively. The F1 scores of the four classes were 96.46%, 96.33%, 97.64%, and 97.62%, respectively. The MA F1 score was 97.04%. In addition, CCSHNet outperformed 12 state-of-the-art COVID-19 detection methods. CONCLUSIONS: CCSHNet is effective in detecting COVID-19 and other lung infectious diseases using first-line clinical imaging and can therefore assist radiologists in making accurate diagnoses based on CCTs.
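The per-class figures and the micro-averaged (MA) F1 score used as the selection indicator can be computed directly with scikit-learn; the four-class labels below are synthetic placeholders for the CCT test set.

```python
from sklearn.metrics import classification_report, f1_score

# Synthetic predictions for the four classes:
# 0 = COVID-19, 1 = community-acquired pneumonia, 2 = tuberculosis, 3 = healthy.
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 0, 1, 2, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 0, 1, 2, 0]

# Per-class recall (sensitivity), precision and F1, as reported in the paper.
print(classification_report(y_true, y_pred, digits=3))

# Micro-averaged F1, the indicator used to rank candidate fusion models.
print("MA F1:", f1_score(y_true, y_pred, average="micro"))
```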
Affiliation(s)
- Shui-Hua Wang: Department of Cardiovascular Sciences, University of Leicester, LE1 7RH, UK; Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; School of Architecture, Building and Civil Engineering, Loughborough University, Loughborough, LE11 3TU, UK
- Deepak Ranjan Nayak: Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, India
- David S Guttery: Leicester Cancer Research Center, University of Leicester, Leicester, LE2 7LX, UK
- Xin Zhang: Department of Medical Imaging, The Fourth People's Hospital of Huai'an, Huai'an, Jiangsu Province, 223002, China
- Yu-Dong Zhang: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; School of Informatics, University of Leicester, Leicester, LE1 7RH, UK
13. Power L, Acevedo L, Yamashita R, Rubin D, Martin I, Barbero A. Deep learning enables the automation of grading histological tissue engineered cartilage images for quality control standardization. Osteoarthritis Cartilage 2021; 29:433-443. [PMID: 33422705; DOI: 10.1016/j.joca.2020.12.018]
Abstract
OBJECTIVE: To automate the grading of histological images of engineered cartilage tissues using deep learning. METHODS: Cartilaginous tissues were engineered from various cell sources. Safranin O and fast green stained histological images of the tissues were graded for chondrogenic quality according to the Modified Bern Score, which ranks images on a scale from zero to six according to the intensity of staining and cell morphology. The whole images were tiled, and the tiles were graded by two experts and grouped into four categories with the following grades: 0, 1-2, 3-4, and 5-6. Deep learning was used to train models to classify images into these histological score groups. Finally, the tile grades per donor were averaged. The root mean square errors (RMSEs) were calculated between each user and the model. RESULTS: Transfer learning using a pretrained DenseNet model was selected. The RMSEs of the model predictions relative to each user, with 95% confidence intervals, were 0.49 (0.37, 0.61) and 0.78 (0.57, 0.99), which is in the same range as the inter-user RMSE of 0.71 (0.51, 0.93). CONCLUSION: Using supervised deep learning, we could automate the scoring of histological images of engineered cartilage and achieve results with errors comparable to the inter-user error. Thus, the model could enable the automation and standardization of assessments currently used for experimental studies as well as of release criteria that ensure the quality of manufactured clinical grafts and compliance with regulatory requirements.
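The evaluation step (averaging tile grades per donor, then computing the RMSE between model and expert) is straightforward to reproduce; the donor names and grades below are invented for illustration.

```python
import numpy as np

# Hypothetical tile grades (Modified Bern Score groups) keyed by donor.
model_tiles  = {"donor_A": [3, 4, 4], "donor_B": [1, 2, 1], "donor_C": [5, 6, 6]}
expert_tiles = {"donor_A": [4, 4, 5], "donor_B": [1, 1, 2], "donor_C": [6, 6, 5]}

# Average the tile grades per donor, then compare graders via RMSE.
donors = sorted(model_tiles)
model_mean  = np.array([np.mean(model_tiles[d]) for d in donors])
expert_mean = np.array([np.mean(expert_tiles[d]) for d in donors])

rmse = np.sqrt(np.mean((model_mean - expert_mean) ** 2))
print("per-donor RMSE:", round(float(rmse), 3))
```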
Affiliation(s)
- L Power: Department of Biomedical Engineering, University of Basel, Switzerland; Department of Biomedicine, University Hospital Basel, University of Basel, Switzerland
- L Acevedo: Department of Biomedicine, University Hospital Basel, University of Basel, Switzerland
- R Yamashita: Department of Biomedical Data Science, Stanford University School of Medicine, USA
- D Rubin: Department of Biomedical Data Science, Stanford University School of Medicine, USA
- I Martin: Department of Biomedical Engineering, University of Basel, Switzerland; Department of Biomedicine, University Hospital Basel, University of Basel, Switzerland
- A Barbero: Department of Biomedicine, University Hospital Basel, University of Basel, Switzerland
14. Pacal I, Karaboga D, Basturk A, Akay B, Nalbantoglu U. A comprehensive review of deep learning in colon cancer. Comput Biol Med 2020; 126:104003. [DOI: 10.1016/j.compbiomed.2020.104003]
15. Ellebrecht DB, Latus S, Schlaefer A, Keck T, Gessert N. Towards an Optical Biopsy during Visceral Surgical Interventions. Visc Med 2020; 36:70-79. [PMID: 32355663; DOI: 10.1159/000505938]
Abstract
Background: Cancer will replace cardiovascular diseases as the most frequent cause of death. Therefore, the goals of cancer treatment are prevention strategies and early detection by cancer screening, followed by therapy at an ideal stage. From an oncological point of view, complete tumor resection is a significant prognostic factor. Optical coherence tomography (OCT) and confocal laser microscopy (CLM) are two techniques that have the potential to complement intraoperative frozen section analysis as in vivo, real-time optical biopsies. Summary: In this review we present both procedures and summarize the progress of their evaluation for intraoperative application in visceral surgery. There are promising studies evaluating OCT and CLM for visceral surgery; however, application during routine visceral surgical interventions is still lacking. Key Message: OCT and CLM are not competing but complementary approaches to tissue analysis alongside intraoperative frozen section analysis. Although the intraoperative application of OCT and CLM is at an early stage, they are two promising techniques for intraoperative in vivo, real-time tissue examination. Additionally, deep learning strategies provide a significant supplement for automated tissue detection.
Affiliation(s)
- David Benjamin Ellebrecht: LungenClinic Grosshansdorf, Department of Thoracic Surgery, Grosshansdorf, Germany; University Medical Center Schleswig-Holstein, Campus Lübeck, Department of Surgery, Lübeck, Germany
- Sarah Latus: Hamburg University of Technology, Institute of Medical Technology, Hamburg, Germany
- Alexander Schlaefer: Hamburg University of Technology, Institute of Medical Technology, Hamburg, Germany
- Tobias Keck: University Medical Center Schleswig-Holstein, Campus Lübeck, Department of Surgery, Lübeck, Germany
- Nils Gessert: Hamburg University of Technology, Institute of Medical Technology, Hamburg, Germany