1
Habart D, Koza A, Leontovyc I, Kosinova L, Berkova Z, Kriz J, Zacharovova K, Brinkhof B, Cornelissen DJ, Magrane N, Bittenglova K, Capek M, Valecka J, Habartova A, Saudek F. IsletSwipe, a mobile platform for expert opinion exchange on islet graft images. Islets 2023; 15:2189873. PMID: 36987915; PMCID: PMC10064927; DOI: 10.1080/19382014.2023.2189873
Abstract
We previously developed a deep learning-based web service (IsletNet) for automated counting of isolated pancreatic islets. Training of the neural network is limited by the absence of consensus on ground truth annotations. Here, we present a platform (IsletSwipe) for exchanging graphical opinions among experts to facilitate consensus formation. The platform consists of a web interface and a mobile application. In a small pilot study, we demonstrate the platform's functionalities and use case scenarios. Nine experts from three centers validated the drawing tools, tested the precision and consistency of expert contour drawing, and evaluated the user experience. Eight experts from two centers then evaluated additional images to demonstrate two use case scenarios. The Validation scenario involves automated selection of images and islets for expert scrutiny. It is scalable (more experts, images, and islets may readily be added) and can be applied to independent validation of islet contours from various sources. The Inquiry scenario serves the ground truth-generating expert in seeking assistance from peers to reach consensus on challenging cases during preparation for IsletNet training. This scenario is limited to a small number of manually selected images and islets. The experts gained an opportunity to influence IsletNet training and to compare other experts' opinions with their own. The ground truth-generating expert obtained feedback for future IsletNet training. IsletSwipe is a suitable tool for consensus finding, and experts from additional centers are welcome to participate.
Affiliation(s)
- David Habart: Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic. Correspondence: Videnska 1958/9, Prague 4, 140 21, Czech Republic
- Adam Koza: Dino School & Novy PORG, Prague, Czech Republic
- Ivan Leontovyc: Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
- Lucie Kosinova: Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
- Zuzana Berkova: Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
- Jan Kriz: Diabetes Center, Institute for Clinical and Experimental Medicine, Prague, Czech Republic
- Klara Zacharovova: Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
- Bas Brinkhof: Department of Internal Medicine, Leiden University Medical Center (LUMC), Leiden, Netherlands
- Dirk-Jan Cornelissen: Department of Internal Medicine, Leiden University Medical Center (LUMC), Leiden, Netherlands
- Nicholas Magrane: Nuffield Department of Surgical Sciences, Oxford Consortium for Islet Transplantation, Oxford, UK
- Katerina Bittenglova: Diabetes Center, Institute for Clinical and Experimental Medicine, Prague, Czech Republic
- Martin Capek: Light Microscopy Laboratory, Institute of Molecular Genetics of the Czech Academy of Sciences, Prague, Czech Republic; Laboratory of Biomathematics, Institute of Physiology of the Czech Academy of Sciences, Prague, Czech Republic
- Jan Valecka: Laboratory of Biomathematics, Institute of Physiology of the Czech Academy of Sciences, Prague, Czech Republic
- Alena Habartova: Redox Photochemistry Lab, Institute of Organic Chemistry and Biochemistry of the Czech Academy of Sciences, Prague, Czech Republic
- František Saudek: Diabetes Center, Institute for Clinical and Experimental Medicine, Prague, Czech Republic
2
Ghosh T, McCrory MA, Marden T, Higgins J, Anderson AK, Domfe CA, Jia W, Lo B, Frost G, Steiner-Asiedu M, Baranowski T, Sun M, Sazonov E. I2N: image to nutrients, a sensor guided semi-automated tool for annotation of images for nutrition analysis of eating episodes. Front Nutr 2023; 10:1191962. PMID: 37575335; PMCID: PMC10415029; DOI: 10.3389/fnut.2023.1191962
Abstract
Introduction: Dietary assessment is important for understanding nutritional status. Traditional methods of monitoring food intake through self-report, such as diet diaries, 24-hour dietary recall, and food frequency questionnaires, may be subject to errors and can be time-consuming for the user. Methods: This paper presents a semi-automatic dietary assessment tool we developed - a desktop application called Image to Nutrients (I2N) - to process sensor-detected eating events and the images captured during those events by a wearable sensor. I2N offers multiple food and nutrient databases (e.g., USDA-SR, FNDDS, USDA Global Branded Food Products Database) for annotating eating episodes and food items, and it estimates energy intake, nutritional content, and the amount consumed. The components of I2N are three-fold: 1) sensor-guided image review, 2) annotation of food images for nutritional analysis, and 3) access to multiple food databases. Two studies were used to evaluate the feasibility and usefulness of I2N: 1) a US-based study with 30 participants and a total of 60 days of data and 2) a Ghana-based study with 41 participants and a total of 41 days of data. Results: In both studies, a total of 314 eating episodes were annotated using at least three food databases. Using I2N's sensor-guided image review, the number of images that needed to be reviewed was reduced by 93% and 85% for the two studies, respectively, compared to reviewing all the images. Discussion: I2N is a unique tool that combines viewing of food images, sensor-guided image review, and access to multiple databases in one application, making nutritional analysis of food images efficient. The tool is flexible, allowing nutritional analysis of images even when sensor signals are not available.
Affiliation(s)
- Tonmoy Ghosh: Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL, United States
- Megan A. McCrory: Department of Health Sciences, Boston University, Boston, MA, United States
- Tyson Marden: Colorado Clinical and Translational Sciences Institute, University of Colorado, Denver, CO, United States
- Janine Higgins: Department of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States
- Alex Kojo Anderson: Department of Nutritional Sciences, University of Georgia, Athens, GA, United States
- Wenyan Jia: Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Benny Lo: Department of Surgery and Cancer, Imperial College, London, United Kingdom
- Gary Frost: Department of Metabolism, Digestion and Reproduction, Imperial College, London, United Kingdom
- Tom Baranowski: Children’s Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, TX, United States
- Mingui Sun: Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, United States
- Edward Sazonov: Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL, United States
3
Ciach MA, Bokota G, Manda-Handzlik A, Kuźmicka W, Demkow U, Gambin A. Trapalyzer: a computer program for quantitative analyses in fluorescent live-imaging studies of neutrophil extracellular trap formation. Front Immunol 2023; 14:1021638. PMID: 37359539; PMCID: PMC10285529; DOI: 10.3389/fimmu.2023.1021638
Abstract
Neutrophil extracellular traps (NETs), pathogen-ensnaring structures that neutrophils form by expelling their DNA into the environment, are believed to play an important role in immunity and autoimmune diseases. In recent years, growing attention has been devoted to developing software tools to quantify NETs in fluorescent microscopy images. However, current solutions require large, manually prepared training data sets, are difficult to use for users without a background in computer science, or have limited capabilities. To overcome these problems, we developed Trapalyzer, a computer program for automatic quantification of NETs. Trapalyzer analyzes fluorescent microscopy images of samples double-stained with a cell-permeable and a cell-impermeable dye, such as the popular combination of Hoechst 33342 and SYTOX™ Green. The program is designed with an emphasis on software ergonomics and is accompanied by step-by-step tutorials that make its use easy and intuitive. Installation and configuration of the software take less than half an hour for an untrained user. In addition to NETs, Trapalyzer detects, classifies and counts neutrophils at different stages of NET formation, allowing greater insight into this process. It is the first tool that makes this possible without large training data sets, while attaining a classification precision on par with state-of-the-art machine learning algorithms. As an example application, we show how to use Trapalyzer to study NET release in a neutrophil-bacteria co-culture. Here, after configuration, Trapalyzer processed 121 images and detected and classified 16,000 ROIs in approximately three minutes on a personal computer. The software and usage tutorials are available at https://github.com/Czaki/Trapalyzer.
Affiliation(s)
- Grzegorz Bokota: Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland; Centre of New Technologies, University of Warsaw, Warsaw, Poland
- Aneta Manda-Handzlik: Department of Laboratory Diagnostics and Clinical Immunology of Developmental Age, Medical University of Warsaw, Warsaw, Poland
- Weronika Kuźmicka: Department of Laboratory Diagnostics and Clinical Immunology of Developmental Age, Medical University of Warsaw, Warsaw, Poland
- Urszula Demkow: Department of Laboratory Diagnostics and Clinical Immunology of Developmental Age, Medical University of Warsaw, Warsaw, Poland
- Anna Gambin: Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland
4
Geiß M, Wagner R, Baresch M, Steiner J, Zwick M. Automatic Bounding Box Annotation with Small Training Datasets for Industrial Manufacturing. Micromachines (Basel) 2023; 14:442. PMID: 36838142; PMCID: PMC9962188; DOI: 10.3390/mi14020442
Abstract
In the past few years, object detection has attracted a lot of attention in the context of human-robot collaboration and Industry 5.0 due to enormous quality improvements in deep learning technologies. In many applications, object detection models have to be able to quickly adapt to a changing environment, i.e., to learn new objects. A crucial but challenging prerequisite for this is the automatic generation of new training data which currently still limits the broad application of object detection methods in industrial manufacturing. In this work, we discuss how to adapt state-of-the-art object detection methods for the task of automatic bounding box annotation in a use case where the background is homogeneous and the object's label is provided by a human. We compare an adapted version of Faster R-CNN and the Scaled-YOLOv4-p5 architecture and show that both can be trained to distinguish unknown objects from a complex but homogeneous background using only a small amount of training data. In contrast to most other state-of-the-art methods for bounding box labeling, our proposed method neither requires human verification, a predefined set of classes, nor a very large manually annotated dataset. Our method outperforms the state-of-the-art, transformer-based object discovery method LOST on our simple fruits dataset by large margins.
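The abstract does not state how the automatically generated boxes are scored against manual reference boxes; intersection-over-union (IoU) is the usual yardstick for that comparison. The following Python sketch is illustrative only: the box coordinates and the 0.5 acceptance threshold are assumptions, not values taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: accept an automatically proposed box if it overlaps a manual
# reference box with IoU >= 0.5 (coordinates and threshold are hypothetical).
auto_box = (48, 60, 210, 230)
manual_box = (50, 55, 200, 225)
score = iou(auto_box, manual_box)
print(f"IoU = {score:.2f}, accepted: {score >= 0.5}")
```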
Affiliation(s)
- Manuela Geiß: Software Competence Center Hagenberg GmbH, Softwarepark 32a, 4232 Hagenberg, Austria
- Raphael Wagner: Software Competence Center Hagenberg GmbH, Softwarepark 32a, 4232 Hagenberg, Austria
- Michael Zwick: Software Competence Center Hagenberg GmbH, Softwarepark 32a, 4232 Hagenberg, Austria
5
Xuan J, Ke B, Ma W, Liang Y, Hu W. Spinal disease diagnosis assistant based on MRI images using deep transfer learning methods. Front Public Health 2023; 11:1044525. PMID: 36908475; PMCID: PMC9998513; DOI: 10.3389/fpubh.2023.1044525
Abstract
Introduction: In light of the potential for missed diagnoses and misdiagnoses of spinal diseases caused by differences in experience and by fatigue, this paper investigates the use of artificial intelligence technology for the auxiliary diagnosis of spinal diseases. Methods: The LabelImg tool was used by clinically experienced doctors to label the MRIs of 604 patients. Then, to select an appropriate object detection algorithm, deep transfer learning models based on YOLOv3, YOLOv5, and PP-YOLOv2 were created and trained on the Baidu PaddlePaddle framework. The PP-YOLOv2 model achieved a 90.08% overall accuracy in diagnosing normal spines, IVD bulges, and spondylolisthesis, which was 27.5% and 3.9% higher than YOLOv3 and YOLOv5, respectively. Finally, intelligent spine assistant diagnostic software with a visual interface was built on the PP-YOLOv2 model and made available to the doctors in spine and osteopathic surgery at Guilin People's Hospital. Results and discussion: The software automatically provides an auxiliary diagnosis in 14.5 s on a standard computer, far faster than the roughly 10 min a doctor typically needs to assess a spine, and its accuracy of 98% is comparable to that of experienced doctors across the diagnostic methods compared. It significantly improves doctors' working efficiency, reduces missed diagnoses and misdiagnoses, and demonstrates the efficacy of the developed intelligent spinal auxiliary diagnosis software.
Affiliation(s)
- Junbo Xuan: Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, China; School of Artificial Intelligence, Nanning College for Vocational Technology, Nanning, China
- Baoyi Ke: Department of Spine and Osteopathy Surgery, Guilin People's Hospital, Guilin, China
- Wenyu Ma: Department of Spine and Osteopathy Surgery, Guilin People's Hospital, Guilin, China
- Yinghao Liang: School of Artificial Intelligence, Nanning College for Vocational Technology, Nanning, China
- Wei Hu: Department of Spine and Osteopathy Surgery, Guilin People's Hospital, Guilin, China
6
Li D, Pehrson LM, Tøttrup L, Fraccaro M, Bonnevie R, Thrane J, Sørensen PJ, Rykkje A, Andersen TT, Steglich-Arnholm H, Stærk DMR, Borgwardt L, Hansen KL, Darkner S, Carlsen JF, Nielsen MB. Inter- and Intra-Observer Agreement When Using a Diagnostic Labeling Scheme for Annotating Findings on Chest X-rays - An Early Step in the Development of a Deep Learning-Based Decision Support System. Diagnostics (Basel) 2022; 12:3112. PMID: 36553118; PMCID: PMC9776917; DOI: 10.3390/diagnostics12123112
Abstract
Consistent annotation of data is a prerequisite for the successful training and testing of artificial intelligence-based decision support systems in radiology. This can be achieved by standardizing terminology when annotating diagnostic images. The purpose of this study was to evaluate annotation consistency among radiologists using a novel diagnostic labeling scheme for chest X-rays. Six radiologists with experience ranging from one to sixteen years annotated a set of 100 fully anonymized chest X-rays. The blinded radiologists annotated on two separate occasions. Statistical analyses were done using Randolph's kappa and PABAK, and the proportions of specific agreements were calculated. Fair-to-excellent agreement was found for all labels among the annotators (Randolph's kappa, 0.40-0.99). The PABAK ranged from 0.12 to 1 for the two-reader inter-rater agreement and from 0.26 to 1 for the intra-rater agreement. Descriptive and broad labels achieved the highest proportion of positive agreement in both the inter- and intra-reader analyses. Annotating findings with specific, interpretive labels was found to be difficult for less experienced radiologists. Annotating images with descriptive labels may increase agreement between radiologists with different experience levels compared to annotation with interpretive labels.
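For readers unfamiliar with the two agreement statistics named above, the sketch below shows how they are computed in the simplest setting of two raters scoring one binary label; with two raters and k categories, PABAK and Randolph's free-marginal kappa coincide. The annotation vectors are invented for illustration and are not data from the study.

```python
def chance_adjusted_agreement(labels_a, labels_b, n_categories):
    """Observed agreement and uniform-chance-adjusted agreement for two raters.

    With two raters and k categories, both PABAK and Randolph's free-marginal
    kappa reduce to (k * p_o - 1) / (k - 1), where p_o is observed agreement.
    """
    assert len(labels_a) == len(labels_b)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)
    kappa = (n_categories * p_o - 1) / (n_categories - 1)
    return p_o, kappa

# Hypothetical presence/absence annotations of one label on 10 chest X-rays.
rater_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
p_o, kappa = chance_adjusted_agreement(rater_1, rater_2, n_categories=2)
print(f"observed agreement = {p_o:.2f}, chance-adjusted = {kappa:.2f}")
```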
Affiliation(s)
- Dana Li (corresponding author): Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Lea Marie Pehrson: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
- Peter Jagd Sørensen: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Alexander Rykkje: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Tobias Thostrup Andersen: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
- Henrik Steglich-Arnholm: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
- Dorte Marianne Rohde Stærk: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
- Lotte Borgwardt: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
- Kristoffer Lindskov Hansen: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Sune Darkner: Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
- Jonathan Frederik Carlsen: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Michael Bachmann Nielsen: Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
7
Abstract
Deep learning has revolutionized the automatic processing of images. While deep convolutional neural networks have demonstrated astonishing segmentation results for many biological objects acquired with microscopy, this technology's good performance relies on large training datasets. In this paper, we present a strategy to minimize the amount of time spent in manually annotating images for segmentation. It involves using an efficient and open source annotation tool, the artificial increase of the training dataset with data augmentation, the creation of an artificial dataset with a conditional generative adversarial network and the combination of semantic and instance segmentations. We evaluate the impact of each of these approaches for the segmentation of nuclei in 2D widefield images of human precancerous polyp biopsies in order to define an optimal strategy.
Affiliation(s)
- Thierry Pécot: Department of Biochemistry and Molecular Biology, Hollings Cancer Center, Medical University of South Carolina, Charleston, SC, 29407, USA; Rennes 1 University, SFR Biosit (UMS 3480 - US 018), F-35000 Rennes, France
- Alexander Alekseyenko: Departments of Public Health Sciences and Oral Health Sciences, Biomedical Informatics Center, Medical University of South Carolina, Charleston, SC, 29407, USA
- Kristin Wallace: Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, 29407, USA
8
Schröder SM, Kiko R. Assessing Representation Learning and Clustering Algorithms for Computer-Assisted Image Annotation - Simulating and Benchmarking MorphoCluster. Sensors (Basel) 2022; 22:2775. PMID: 35408389; PMCID: PMC9003521; DOI: 10.3390/s22072775
Abstract
Image annotation is a time-consuming and costly task. Previously, we published MorphoCluster as a novel image annotation tool to address problems of conventional, classifier-based image annotation approaches: their limited efficiency, training set bias and lack of novelty detection. MorphoCluster uses clustering and similarity search to enable efficient, computer-assisted image annotation. In this work, we provide a deeper analysis of this approach. We simulate the actions of a MorphoCluster user to avoid extensive manual annotation runs. This simulation is used to test supervised, unsupervised and transfer representation learning approaches. Furthermore, shrunken k-means and partially labeled k-means, two new clustering algorithms that are tailored specifically for the MorphoCluster approach, are compared to the previously used HDBSCAN*. We find that labeled training data improve the image representations, that unsupervised learning beats transfer learning and that all three clustering algorithms are viable options, depending on whether completeness, efficiency or runtime is the priority. The simulation results support our earlier finding that MorphoCluster is very efficient and precise. Within the simulation, more than five objects were annotated per simulated click at 95% precision.
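The core idea behind MorphoCluster, grouping visually similar objects so that whole clusters can be confirmed with a single action, can be illustrated with plain k-means over learned feature vectors. The sketch below uses random vectors as stand-ins for image representations and ordinary scikit-learn k-means; it does not reproduce the paper's shrunken or partially labeled k-means variants, and the feature dimension and cluster count are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for learned image representations: 1,000 objects x 64 features.
features = rng.normal(size=(1000, 64))

# Group similar objects so an annotator can confirm a whole cluster at once.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(features)

clusters = {c: np.flatnonzero(kmeans.labels_ == c) for c in range(20)}
largest = max(clusters, key=lambda c: len(clusters[c]))
print(f"cluster {largest} holds {len(clusters[largest])} objects to review together")
```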
Affiliation(s)
- Rainer Kiko: Laboratoire d’Océanographie de Villefranche, Sorbonne Université, 06230 Villefranche-sur-Mer, France
9
Mikhailov IA, Khvostikov AV, Krylov AS. [Methodical approaches to annotation and labeling of histological images in order to automatically detect the layers of the stomach wall and the depth of invasion of gastric cancer]. Arkh Patol 2022; 84:67-73. PMID: 36469721; DOI: 10.17116/patol20228406167
Abstract
OBJECTIVE: To develop original methodological approaches to the annotation and labeling of histological images for the problem of automatic segmentation of the layers of the stomach wall. MATERIAL AND METHODS: Three image collections were used in the study: NCT-CRC-HE-100K, CRC-VAL-HE-7K, and part of the PATH-DT-MSU collection. The part of the original PATH-DT-MSU collection used here contains 20 histological images obtained with a high-performance digital scanning microscope. Each image is a fragment of the stomach wall cut from gastric cancer surgical material and stained with hematoxylin and eosin. Images were acquired with a Leica Aperio AT2 scanning microscope (Leica Microsystems Inc., Germany), and annotations were made using Aperio ImageScope 12.3.3 (Leica Microsystems Inc., Germany). RESULTS: A labeling system is proposed that includes 5 classes (tissue types): areas of gastric adenocarcinoma (TUM), unchanged areas of the lamina propria (LP), unchanged areas of the muscularis mucosae (MM), a class of underlying tissues (AT) covering the submucosa, the muscularis propria of the stomach, and subserosal regions, and the image background (BG). The advantage of this labeling scheme is highly reliable recognition of the muscularis mucosae (MM), the natural "line" separating the lamina propria from all other underlying layers of the stomach wall. Its disadvantage is the small number of classes, which limits the level of detail of the automatic segmentation. CONCLUSION: An original technique for labeling and annotating images was developed, comprising 5 classes (tissue types). This technique is effective at the initial stages of training machine learning algorithms for the classification and segmentation of histological images. Developing a real diagnostic algorithm that automatically determines the depth of invasion of gastric cancer will require further refinement and extension of the presented labeling and annotation method.
Affiliation(s)
- A S Krylov: Lomonosov Moscow State University, Moscow, Russia
10
Ahn SW, Ferland B, Jonas OH. An Interactive Pipeline for Quantitative Histopathological Analysis of Spatially Defined Drug Effects in Tumors. J Pathol Inform 2021; 12:34. PMID: 34760331; PMCID: PMC8529341; DOI: 10.4103/jpi.jpi_17_21
Abstract
Background: Tumor heterogeneity is increasingly being recognized as a major source of variability in the histopathological assessment of drug responses. Quantitative analysis of immunohistochemistry (IHC) and immunofluorescence (IF) images using biomarkers that capture spatial patterns of distinct tumor biology and drug concentration in tumors is of high interest to the field. Methods: We have developed an image analysis pipeline to measure drug response using IF and IHC images along spatial gradients of local drug release from a tumor-implantable drug delivery microdevice. The pipeline utilizes a series of user-interactive python scripts and CellProfiler pipelines with custom modules to perform image and spatial analysis of regions of interest within whole-slide images. Results: Worked examples demonstrate that intratumor measurements such as apoptosis, cell proliferation, and immune cell population density can be quantitated in a spatially and drug concentration-dependent manner, establishing in vivo profiles of pharmacodynamics and pharmacokinetics in tumors. Conclusions: Spatial image analysis of tumor response along gradients of local drug release is achievable in high throughput. The major advantage of this approach is the use of spatially aware annotation tools to correlate drug gradients with drug effects in tumors in vivo.
Affiliation(s)
- Sebastian W Ahn: Department of Radiology, Laboratory for Bio-Micro Devices, Brigham and Women's Hospital, Boston, MA, USA
- Benjamin Ferland: Department of Radiology, Laboratory for Bio-Micro Devices, Brigham and Women's Hospital, Boston, MA, USA
- Oliver H Jonas: Department of Radiology, Laboratory for Bio-Micro Devices, Brigham and Women's Hospital, Boston, MA, USA
11
Younis S, Schmidt M, Weiland C, Dressler S, Seeger B, Hickler T. Detection and annotation of plant organs from digitised herbarium scans using deep learning. Biodivers Data J 2020; 8:e57090. PMID: 33343217; PMCID: PMC7746675; DOI: 10.3897/bdj.8.e57090
Abstract
As herbarium specimens are increasingly becoming digitised and accessible in online repositories, advanced computer vision techniques are being used to extract information from them. The presence of certain plant organs on herbarium sheets is useful information in various scientific contexts and automatic recognition of these organs will help mobilise such information. In our study, we use deep learning to detect plant organs on digitised herbarium specimens with Faster R-CNN. For our experiment, we manually annotated hundreds of herbarium scans with thousands of bounding boxes for six types of plant organs and used them for training and evaluating the plant organ detection model. The model worked particularly well on leaves and stems, while flowers were also present in large numbers in the sheets, but were not equally well recognised.
Affiliation(s)
- Sohaib Younis: Senckenberg Biodiversity and Climate Research Centre (SBiK-F), Frankfurt am Main, Germany; Department of Mathematics and Computer Science, Philipps-University Marburg, Marburg, Germany
- Marco Schmidt: Palmengarten der Stadt Frankfurt, Frankfurt am Main, Germany; Senckenberg Biodiversity and Climate Research Centre (SBiK-F), Frankfurt am Main, Germany
- Claus Weiland: Senckenberg Biodiversity and Climate Research Centre (SBiK-F), Frankfurt am Main, Germany
- Stefan Dressler: Senckenberg Research Institute and Natural History Museum, Frankfurt am Main, Germany
- Bernhard Seeger: Department of Mathematics and Computer Science, Philipps-University Marburg, Marburg, Germany
- Thomas Hickler: Senckenberg Biodiversity and Climate Research Centre (SBiK-F), Frankfurt am Main, Germany
12
Abstract
Quantitative measurements and qualitative description of scientific images are both important to describe the complexity of digital image data. While various software solutions for quantitative measurements in images exist, there is a lack of simple tools for the qualitative description of images in common user-oriented image analysis software. To address this issue, we developed a set of Fiji plugins that facilitate the systematic manual annotation of images or image-regions. From a list of user-defined keywords, these plugins generate an easy-to-use graphical interface with buttons or checkboxes for the assignment of single or multiple pre-defined categories to full images or individual regions of interest. In addition to qualitative annotations, any quantitative measurement from the standard Fiji options can also be automatically reported. Besides the interactive user interface, keyboard shortcuts are available to speed-up the annotation process for larger datasets. The annotations are reported in a Fiji result table that can be exported as a pre-formatted csv file, for further analysis with common spreadsheet software or custom automated pipelines. To illustrate possible use case of the annotations, and facilitate the analysis of the generated annotations, we provide examples of such pipelines, including data-visualization solutions in Fiji and KNIME, as well as a complete workflow for training and application of a deep learning model for image classification in KNIME. Ultimately, the plugins enable standardized routine sample evaluation, classification, or ground-truth category annotation of any digital image data compatible with Fiji.
Affiliation(s)
- Laurent S V Thomas: Acquifer Imaging GmbH, Heidelberg, Germany; DITABIS, Digital Biomedical Imaging Systems AG, Pforzheim, Germany; Department of Pediatrics, University Children's Hospital, Heidelberg, Germany
- Franz Schaefer: Department of Pediatrics, University Children's Hospital, Heidelberg, Germany
- Jochen Gehrig: Acquifer Imaging GmbH, Heidelberg, Germany; DITABIS, Digital Biomedical Imaging Systems AG, Pforzheim, Germany
13
Samiei S, Rasti P, Richard P, Galopin G, Rousseau D. Toward Joint Acquisition-Annotation of Images with Egocentric Devices for a Lower-Cost Machine Learning Application to Apple Detection. Sensors (Basel) 2020; 20:4173. PMID: 32727124; DOI: 10.3390/s20154173
Abstract
Since most computer vision approaches are now driven by machine learning, the current bottleneck is the annotation of images. This time-consuming task is usually performed manually after image acquisition. In this article, we assess the value of various egocentric vision approaches for performing joint acquisition and automatic image annotation rather than the conventional two-step process of acquisition followed by manual annotation. This approach is illustrated with apple detection in challenging field conditions. With eye-tracking systems, we demonstrate high performance in automatic apple segmentation (Dice 0.85), apple counting (an 88% probability of good detection and a 0.09 true-negative rate), and apple localization (a shift error of fewer than 3 pixels). This is obtained by simply applying the areas of interest captured by the egocentric devices to standard, non-supervised image segmentation. We especially stress the time savings from using such eye-tracking devices on head-mounted systems to jointly perform image acquisition and automatic annotation: a gain of more than 10-fold compared with classical image acquisition followed by manual image annotation is demonstrated.
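The Dice score quoted above measures the overlap between the automatically obtained segmentation and a manual reference mask. A minimal sketch of that computation, on toy masks rather than real apple images, is given below.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2 * |A and B| / (|A| + |B|)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 5x5 masks standing in for an automatic and a manual apple segmentation.
auto = np.zeros((5, 5), dtype=bool); auto[1:4, 1:4] = True
manual = np.zeros((5, 5), dtype=bool); manual[2:5, 1:4] = True
print(f"Dice = {dice(auto, manual):.2f}")
```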
14
Vaidyanathan P, Prud'hommeaux E, Alm CO, Pelz JB. Computational framework for fusing eye movements and spoken narratives for image annotation. J Vis 2020; 20:13. PMID: 32678878; PMCID: PMC7424957; DOI: 10.1167/jov.20.7.13
Abstract
Despite many recent advances in the field of computer vision, there remains a disconnect between how computers process images and how humans understand them. To begin to bridge this gap, we propose a framework that integrates human-elicited gaze and spoken language to label perceptually important regions in an image. Our work relies on the notion that gaze and spoken narratives can jointly model how humans inspect and analyze images. Using an unsupervised bitext alignment algorithm originally developed for machine translation, we create meaningful mappings between participants' eye movements over an image and their spoken descriptions of that image. The resulting multimodal alignments are then used to annotate image regions with linguistic labels. The accuracy of these labels exceeds that of baseline alignments obtained using purely temporal correspondence between fixations and words. We also find differences in system performances when identifying image regions using clustering methods that rely on gaze information rather than image features. The alignments produced by our framework can be used to create a database of low-level image features and high-level semantic annotations corresponding to perceptually important image regions. The framework can potentially be applied to any multimodal data stream and to any visual domain. To this end, we provide the research community with access to the computational framework.
Affiliation(s)
- Cecilia O. Alm: College of Liberal Arts, Rochester Institute of Technology, Rochester, NY, USA
- Jeff B. Pelz: Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
15
Lindskog C, Backman M, Zieba A, Asplund A, Uhlén M, Landegren U, Pontén F. Proximity Ligation Assay as a Tool for Antibody Validation in Human Tissues. J Histochem Cytochem 2020; 68:515-529. PMID: 32602410; DOI: 10.1369/0022155420936384
Abstract
Immunohistochemistry (IHC) is the accepted standard for spatial analysis of protein expression in tissues. IHC is widely used for cancer diagnostics and in basic research. The development of new antibodies to proteins with unknown expression patterns has created a demand for thorough validation. We have applied resources from the Human Protein Atlas project and the Antibody Portal at National Cancer Institute to generate protein expression data for 12 proteins across 39 cancer cell lines and 37 normal human tissue types. The outcome of IHC on consecutive sections from both cell and tissue microarrays using two independent antibodies for each protein was compared with in situ proximity ligation (isPLA), where binding by both antibodies is required to generate detection signals. Semi-quantitative scores from IHC and isPLA were compared with expression of the corresponding 12 transcripts across all cell lines and tissue types. Our results show a more consistent correlation between mRNA levels and isPLA as compared to IHC. The main benefits of isPLA include increased detection specificity and decreased unspecific staining compared to IHC. We conclude that implementing isPLA as a complement to IHC for analysis of protein expression and in antibody validation pipelines can lead to more accurate localization of proteins in tissue.
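The comparison described above, how consistently semi-quantitative staining scores track transcript levels, is typically quantified with a rank correlation computed separately for each method. The sketch below uses SciPy's Spearman correlation on invented scores and mRNA values; the numbers are not data from the study.

```python
from scipy.stats import spearmanr

# Hypothetical semi-quantitative staining scores (0-3) and mRNA levels
# across eight tissue types; all values are invented for illustration only.
mrna   = [0.5, 1.2, 3.0, 8.5, 15.0, 22.0, 40.0, 75.0]
ihc    = [1,   0,   2,   1,   3,    2,    2,    3]
is_pla = [0,   0,   1,   1,   2,    2,    3,    3]

for name, scores in [("IHC", ihc), ("isPLA", is_pla)]:
    rho, p = spearmanr(mrna, scores)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```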
Affiliation(s)
- Cecilia Lindskog: Department of Immunology, Genetics and Pathology, Rudbeck Laboratory, Uppsala University, Uppsala, Sweden
- Max Backman: Department of Immunology, Genetics and Pathology, Rudbeck Laboratory, Uppsala University, Uppsala, Sweden
- Agata Zieba: Department of Immunology, Genetics and Pathology, Rudbeck Laboratory, Uppsala University, Uppsala, Sweden
- Anna Asplund: Department of Immunology, Genetics and Pathology, Rudbeck Laboratory, Uppsala University, Uppsala, Sweden
- Mathias Uhlén: Science for Life Laboratory, Royal Institute of Technology, Stockholm, Sweden
- Ulf Landegren: Department of Immunology, Genetics and Pathology, Rudbeck Laboratory, Uppsala University, Uppsala, Sweden
- Fredrik Pontén: Department of Immunology, Genetics and Pathology, Rudbeck Laboratory, Uppsala University, Uppsala, Sweden
16
Brenskelle L, Guralnick RP, Denslow M, Stucky BJ. Maximizing human effort for analyzing scientific images: A case study using digitized herbarium sheets. Appl Plant Sci 2020; 8:e11370. PMID: 32626612; PMCID: PMC7328657; DOI: 10.1002/aps3.11370
Abstract
PREMISE Digitization and imaging of herbarium specimens provides essential historical phenotypic and phenological information about plants. However, the full use of these resources requires high-quality human annotations for downstream use. Here we provide guidance on the design and implementation of image annotation projects for botanical research. METHODS AND RESULTS We used a novel gold-standard data set to test the accuracy of human phenological annotations of herbarium specimen images in two settings: structured, in-person sessions and an online, community-science platform. We examined how different factors influenced annotation accuracy and found that botanical expertise, academic career level, and time spent on annotations had little effect on accuracy. Rather, key factors included traits and taxa being scored, the annotation setting, and the individual scorer. In-person annotations were significantly more accurate than online annotations, but both generated relatively high-quality outputs. Gathering multiple, independent annotations for each image improved overall accuracy. CONCLUSIONS Our results provide a best-practices basis for using human effort to annotate images of plants. We show that scalable community science mechanisms can produce high-quality data, but care must be taken to choose tractable taxa and phenophases and to provide informative training material.
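The recommendation to gather multiple, independent annotations per image is usually operationalized with a consensus rule such as majority voting. The sketch below illustrates that rule on invented phenophase labels; the sheet identifiers and vote counts are hypothetical.

```python
from collections import Counter

def majority_label(votes):
    """Return the most common annotation; ties fall back to the first mode seen."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical phenophase annotations from five independent scorers per sheet.
annotations = {
    "sheet_001": ["flowering", "flowering", "fruiting", "flowering", "flowering"],
    "sheet_002": ["fruiting", "fruiting", "fruiting", "flowering", "fruiting"],
}
consensus = {sheet: majority_label(votes) for sheet, votes in annotations.items()}
print(consensus)  # {'sheet_001': 'flowering', 'sheet_002': 'fruiting'}
```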
Affiliation(s)
- Laura Brenskelle: Florida Museum of Natural History, University of Florida, Gainesville, Florida, USA; Department of Biology, University of Florida, Gainesville, Florida, USA
- Rob P. Guralnick: Florida Museum of Natural History, University of Florida, Gainesville, Florida, USA
- Michael Denslow: Florida Museum of Natural History, University of Florida, Gainesville, Florida, USA
- Brian J. Stucky: Florida Museum of Natural History, University of Florida, Gainesville, Florida, USA
17
Van Eycke YR, Foucart A, Decaestecker C. Strategies to Reduce the Expert Supervision Required for Deep Learning-Based Segmentation of Histopathological Images. Front Med (Lausanne) 2019; 6:222. PMID: 31681779; PMCID: PMC6803466; DOI: 10.3389/fmed.2019.00222
Abstract
The emergence of computational pathology comes with a demand to extract more and more information from each tissue sample. Such information extraction often requires the segmentation of numerous histological objects (e.g., cell nuclei, glands, etc.) in histological slide images, a task for which deep learning algorithms have demonstrated their effectiveness. However, these algorithms require many training examples to be efficient and robust. For this purpose, pathologists must manually segment hundreds or even thousands of objects in histological images, i.e., a long, tedious and potentially biased task. The present paper aims to review strategies that could help provide the very large number of annotated images needed to automate the segmentation of histological images using deep learning. This review identifies and describes four different approaches: the use of immunohistochemical markers as labels, realistic data augmentation, Generative Adversarial Networks (GAN), and transfer learning. In addition, we describe alternative learning strategies that can use imperfect annotations. Adding real data with high-quality annotations to the training set is a safe way to improve the performance of a well configured deep neural network. However, the present review provides new perspectives through the use of artificially generated data and/or imperfect annotations, in addition to transfer learning opportunities.
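Of the four strategies reviewed, data augmentation is the simplest to illustrate in code. The sketch below applies the same random flip and rotation to an image tile and its segmentation mask, which is the essential requirement when augmenting annotated data; the review itself discusses richer, histology-specific augmentations that are not reproduced here, and the array sizes are arbitrary.

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply one random flip/rotation to an image and its segmentation mask together."""
    k = rng.integers(0, 4)                 # rotate by 0, 90, 180 or 270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                 # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask

rng = np.random.default_rng(42)
image = rng.random((256, 256, 3))          # stand-in for an H&E tile
mask = rng.random((256, 256)) > 0.95       # stand-in for a nuclei mask
aug_image, aug_mask = augment_pair(image, mask, rng)
print(aug_image.shape, int(aug_mask.sum()))
```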
Affiliation(s)
- Yves-Rémi Van Eycke: Digital Image Analysis in Pathology (DIAPath), Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles, Charleroi, Belgium; Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
- Adrien Foucart: Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
- Christine Decaestecker: Digital Image Analysis in Pathology (DIAPath), Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles, Charleroi, Belgium; Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
18
Abstract
PURPOSE: To describe and evaluate a segmentation method using a joint adversarial and segmentation convolutional neural network to achieve accurate segmentation using unannotated MR image datasets. THEORY AND METHODS: A segmentation pipeline was built using a joint adversarial and segmentation network. A convolutional neural network technique called cycle-consistent generative adversarial network (CycleGAN) was applied as the core of the method to perform unpaired image-to-image translation between different MR image datasets. A joint segmentation network was incorporated into the adversarial network to obtain additional functionality for semantic segmentation. The fully automated segmentation method, termed SUSAN, was tested for segmenting bone and cartilage on 2 clinical knee MR image datasets using images and annotated segmentation masks from an online publicly available knee MR image dataset. The segmentation results were compared, using quantitative segmentation metrics, with the results from a supervised U-Net segmentation method and 2 registration methods. The Wilcoxon signed-rank test was used to evaluate differences in quantitative metrics between methods. RESULTS: The proposed method SUSAN provided high segmentation accuracy, with results comparable to the supervised U-Net segmentation method (most quantitative metrics having P > 0.05) and significantly better than a multiatlas registration method (all quantitative metrics having P < 0.001) and a direct registration method (all quantitative metrics having P < 0.0001) for the clinical knee image datasets. SUSAN also demonstrated the applicability for segmenting knee MR images with different tissue contrasts. CONCLUSION: SUSAN performed rapid and accurate tissue segmentation for multiple MR image datasets without the need for sequence-specific segmentation annotation. The joint adversarial and segmentation network and training strategy have promising potential applications in medical image segmentation.
Affiliation(s)
- Fang Liu: Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53705-2275
19
Zhou J, Bell D, Nusrat S, Hingle M, Surdeanu M, Kobourov S. Calorie Estimation From Pictures of Food: Crowdsourcing Study. Interact J Med Res 2018; 7:e17. PMID: 30401671; PMCID: PMC6246963; DOI: 10.2196/ijmr.9359
Abstract
Background Software designed to accurately estimate food calories from still images could help users and health professionals identify dietary patterns and food choices associated with health and health risks more effectively. However, calorie estimation from images is difficult, and no publicly available software can do so accurately while minimizing the burden associated with data collection and analysis. Objective The aim of this study was to determine the accuracy of crowdsourced annotations of calorie content in food images and to identify and quantify sources of bias and noise as a function of respondent characteristics and food qualities (eg, energy density). Methods We invited adult social media users to provide calorie estimates for 20 food images (for which ground truth calorie data were known) using a custom-built webpage that administers an online quiz. The images were selected to provide a range of food types and energy density. Participants optionally provided age range, gender, and their height and weight. In addition, 5 nutrition experts provided annotations for the same data to form a basis of comparison. We examined estimated accuracy on the basis of expertise, demographic data, and food qualities using linear mixed-effects models with participant and image index as random variables. We also analyzed the advantage of aggregating nonexpert estimates. Results A total of 2028 respondents agreed to participate in the study (males: 770/2028, 37.97%, mean body mass index: 27.5 kg/m2). Average accuracy was 5 out of 20 correct guesses, where “correct” was defined as a number within 20% of the ground truth. Even a small crowd of 10 individuals achieved an accuracy of 7, exceeding the average individual and expert annotator’s accuracy of 5. Women were more accurate than men (P<.001), and younger people were more accurate than older people (P<.001). The calorie content of energy-dense foods was overestimated (P=.02). Participants performed worse when images contained reference objects, such as credit cards, for scale (P=.01). Conclusions Our findings provide new information about how calories are estimated from food images, which can inform the design of related software and analyses.
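The study's scoring rule, counting a guess as correct when it falls within 20% of the ground-truth calories, and the benefit of aggregating about ten non-expert guesses can both be reproduced on toy numbers. In the sketch below the guesses are invented, and the median is used as the aggregation rule, which is an assumption rather than the paper's stated procedure.

```python
import statistics

def is_correct(estimate, truth, tolerance=0.20):
    """A guess counts as correct if it is within 20% of the ground-truth calories."""
    return abs(estimate - truth) <= tolerance * truth

# Hypothetical calorie guesses from ten respondents for one food image.
truth = 450
guesses = [300, 520, 400, 610, 450, 380, 700, 430, 500, 350]

individual_hits = sum(is_correct(g, truth) for g in guesses)
crowd_estimate = statistics.median(guesses)
print(f"{individual_hits}/10 individuals correct; "
      f"crowd median {crowd_estimate:.0f} kcal correct: {is_correct(crowd_estimate, truth)}")
```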
Affiliation(s)
- Jun Zhou: Department of Computer Science, Columbia University, New York, NY, United States
- Dane Bell: Department of Linguistics, University of Arizona, Tucson, AZ, United States
- Sabrina Nusrat: Department of Computer Science, University of Arizona, Tucson, AZ, United States
- Melanie Hingle: Department of Nutritional Sciences, University of Arizona, Tucson, AZ, United States
- Mihai Surdeanu: Department of Computer Science, University of Arizona, Tucson, AZ, United States
- Stephen Kobourov: Department of Computer Science, University of Arizona, Tucson, AZ, United States
20
Yang T, Zhao X, Lin B, Zeng T, Ji S, Ye J. Automated gene expression pattern annotation in the mouse brain. Pac Symp Biocomput 2015; 20:144-155. PMID: 25592576; PMCID: PMC4299912
Abstract
Brain tumors are fatal central nervous system diseases that occur in around 250,000 people each year globally and are the second most common cancer in children. It has been widely acknowledged that genetic factors are significant risk factors for brain cancer. Thus, accurate descriptions of where the relevant genes are active and how these genes are expressed are critical for understanding the pathogenesis of brain tumors and for early detection. The Allen Developing Mouse Brain Atlas is a project on gene expression over the course of mouse brain development stages. Utilizing mouse models allows us to use a relatively homogeneous system to reveal the genetic risk factors of brain cancer. In the Allen atlas, about 435,000 high-resolution spatiotemporal in situ hybridization images have been generated for approximately 2,100 genes, and currently the expression patterns over specific brain regions are manually annotated by experts, which does not scale with the continuously expanding collection of images. In this paper, we present an efficient computational approach to perform automated gene expression pattern annotation on brain images. First, the gene expression information in the brain images is captured by invariant features extracted from local image patches. Next, we adopt an augmented sparse coding method, called Stochastic Coordinate Coding, to construct high-level representations. Different pooling methods are then applied to generate gene-level features. To discriminate gene expression patterns at specific brain regions, we employ supervised learning methods to build accurate models for both binary-class and multi-class cases. Random undersampling and majority voting strategies are utilized to deal with the inherently imbalanced class distribution within each annotation task in order to further improve predictive performance. In addition, we propose a novel structure-based multi-label classification approach, which makes use of a label hierarchy based on brain ontology during model learning. Extensive experiments have been conducted on the atlas, and the results show that the proposed approach produces higher annotation accuracy than several baseline methods. Our approach is shown to be robust on both binary-class and multi-class tasks, even with a relatively low training ratio. Our results also show that the use of the label hierarchy can significantly improve annotation accuracy at all brain ontology levels.
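The imbalance-handling step described above, random undersampling combined with majority voting, is a generic technique that can be sketched independently of the paper's sparse-coded image features. The example below uses random vectors and scikit-learn logistic regression purely for illustration; the feature matrix, labels, and classifier choice are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Stand-ins for gene-level image features and imbalanced expression labels.
X = rng.normal(size=(500, 32))
y = (rng.random(500) < 0.1).astype(int)      # roughly 10% positives: imbalanced

def undersample(X, y, rng):
    """Balance classes by randomly discarding majority-class samples."""
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
    return X[keep], y[keep]

# Train several classifiers on different balanced subsets, then majority-vote.
models = []
for _ in range(11):
    Xb, yb = undersample(X, y, rng)
    models.append(LogisticRegression(max_iter=1000).fit(Xb, yb))

votes = np.stack([m.predict(X) for m in models])
prediction = (votes.mean(axis=0) >= 0.5).astype(int)
print("predicted positives:", int(prediction.sum()))
```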
Affiliation(s)
- Tao Yang: Department of Computer Science and Engineering, Arizona State University, Tempe, AZ 85287, USA
21
Yun K, Peng Y, Samaras D, Zelinsky GJ, Berg TL. Exploring the role of gaze behavior and object detection in scene understanding. Front Psychol 2013; 4:917. PMID: 24367348; PMCID: PMC3854460; DOI: 10.3389/fpsyg.2013.00917
Abstract
We posit that a person's gaze behavior while freely viewing a scene contains an abundance of information, not only about their intent and what they consider to be important in the scene, but also about the scene's content. Experiments are reported, using two popular image datasets from computer vision, that explore the relationship between the fixations that people make during scene viewing, how they describe the scene, and automatic detection predictions of object categories in the scene. From these exploratory analyses, we then combine human behavior with the outputs of current visual recognition methods to build prototype human-in-the-loop applications for gaze-enabled object detection and scene annotation.
Affiliation(s)
- Kiwon Yun: Computer Science Department, Stony Brook University, Stony Brook, NY, USA
- Yifan Peng: Computer Science Department, Stony Brook University, Stony Brook, NY, USA
- Dimitris Samaras: Computer Science Department, Stony Brook University, Stony Brook, NY, USA
- Gregory J. Zelinsky: Computer Science Department, Stony Brook University, Stony Brook, NY, USA; Psychology Department, Stony Brook University, Stony Brook, NY, USA
- Tamara L. Berg: Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
22
Cheng C, Stokes TH, Hang S, Wang MD. TissueWiki Mobile: an Integrative Protein Expression Image Browser for Pathological Knowledge Sharing and Annotation on a Mobile Device. IEEE Int Conf Bioinform Biomed Workshops 2010; 2010:473-480. PMID: 27532057; PMCID: PMC4983421; DOI: 10.1109/bibmw.2010.5703848
Abstract
Doctors need fast and convenient access to medical data. This motivates the use of mobile devices for knowledge retrieval and sharing. We have developed TissueWikiMobile on the Apple iPhone and iPad to seamlessly access TissueWiki, an enormous repository of medical histology images. TissueWiki is a three-terabyte database of antibody information and histology images from the Human Protein Atlas (HPA). Using TissueWikiMobile, users can extract knowledge from protein expression data, add annotations to highlight regions of interest on images, and share their professional insight. Its intuitive human-computer interface lets users efficiently access important biomedical data without losing mobility. TissueWikiMobile gives the health community a ubiquitous way to collaborate and share expert opinions not only on the performance of various antibody stains but also on histology image annotation.
Affiliation(s)
- Chihwen Cheng: Electrical and Computer Engineering, Georgia Institute of Technology
- Sovandy Hang: Biomedical Engineering, Georgia Institute of Technology
- May D. Wang: Electrical and Computer Engineering, Georgia Institute of Technology; Biomedical Engineering, Georgia Institute of Technology