1. Rajapaksa S, Khalvati F. Relevance maps: A weakly supervised segmentation method for 3D brain tumours in MRIs. Front Radiol 2022; 2:1061402. PMID: 37492689; PMCID: PMC10365288; DOI: 10.3389/fradi.2022.1061402. Received 10/04/2022; Accepted 11/28/2022.
Abstract
With the increased reliance on medical imaging, deep convolutional neural networks (CNNs) have become an essential tool in medical imaging-based computer-aided diagnostic pipelines. However, training accurate and reliable classification models often requires large, fine-grained annotated datasets. To alleviate this, weakly supervised methods can be used to obtain local information, such as regions of interest, from global labels. This work proposes a weakly supervised pipeline to extract Relevance Maps of medical images from pre-trained 3D classification models using localized perturbations. The extracted Relevance Map describes a given region's importance to the classification model and produces the segmentation for the region. Furthermore, we propose a novel optimal-perturbation generation method that exploits 3D superpixels to find the most relevant area for a given classification using a U-Net architecture. This model is trained with a perturbation loss, which maximizes the difference between unperturbed and perturbed predictions. We validated the effectiveness of our methodology by applying it to the segmentation of glioma brain tumours in MRI scans using only classification labels for glioma type. The proposed method outperforms existing methods in both Dice similarity coefficient for segmentation and resolution of visualizations.
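The perturbation idea above can be sketched in a minimal toy form. This is not the authors' U-Net pipeline: `classifier_score` is a hypothetical stand-in for a pre-trained 3D classifier, and regular cubic blocks stand in for 3D superpixels; relevance is just the score drop when a region is zeroed out.

```python
import numpy as np

def classifier_score(vol):
    # toy stand-in for the pre-trained 3D classifier's output;
    # it simply responds to bright voxels (hypothetical)
    return float(vol.mean())

def relevance_map(vol, block=4):
    # occlusion-style relevance: zero out each cubic region and
    # record the drop in the classifier's score for that region
    base = classifier_score(vol)
    rel = np.zeros(vol.shape)
    for z in range(0, vol.shape[0], block):
        for y in range(0, vol.shape[1], block):
            for x in range(0, vol.shape[2], block):
                pert = vol.copy()
                pert[z:z + block, y:y + block, x:x + block] = 0.0
                rel[z:z + block, y:y + block, x:x + block] = base - classifier_score(pert)
    return rel

vol = np.zeros((8, 8, 8))
vol[0:4, 0:4, 0:4] = 1.0  # a bright synthetic "lesion"
rel = relevance_map(vol)
```

Regions whose removal hurts the score most receive the highest relevance; thresholding such a map is what turns a classifier into a weak segmenter.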
Affiliation(s)
- Sajith Rajapaksa
- Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Farzad Khalvati
- Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
- Department of Diagnostic Imaging, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
2. Tomczyk A, Szczepaniak PS. Ear Detection Using Convolutional Neural Network on Graphs with Filter Rotation. Sensors (Basel) 2019; 19:E5510. PMID: 31847162; DOI: 10.3390/s19245510. Received 11/12/2019; Revised 12/05/2019; Accepted 12/06/2019.
Abstract
Geometric deep learning (GDL) generalizes convolutional neural networks (CNNs) to non-Euclidean domains. In this work, a GDL technique that allows the application of CNNs on graphs is examined. It defines convolutional filters using a Gaussian mixture model (GMM). Because those filters are defined in continuous space, they can be easily rotated without any additional interpolation. This, in turn, allows constructing systems with the rotation-equivariance property. The characteristics of the proposed approach are illustrated with the problem of ear detection, which is of great importance in biometric systems enabling image-based, discrete human identification. The analyzed graphs were constructed from superpixels representing image content. This kind of representation has several advantages. On the one hand, it significantly reduces the amount of processed data, allowing simpler and more effective models to be built. On the other hand, it seems closer to the conscious process of human image understanding, as it does not operate on millions of pixels. The contributions of the paper lie both in extending the GDL application area (semantic segmentation of images) and in the novel concept of trained filter transformations. We show that even significantly reduced information about image content and a model that is relatively simple compared to a classic CNN (fewer parameters and significantly faster processing) allow detection results of quality similar to those reported in the literature on the UBEAR dataset. Moreover, we show experimentally that the proposed approach does possess the rotation-equivariance property, allowing rotated structures to be detected without labor-intensive training on all rotated and non-rotated images.
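The key property claimed above, that continuously defined filters rotate without interpolation, can be illustrated with a single Gaussian component. This is a simplified sketch, not the paper's GMM convolution: `gmm_filter_weight` and `rotate` are hypothetical helpers, and a real filter would sum several components over graph-neighbour offsets.

```python
import numpy as np

def gmm_filter_weight(offset, mu, sigma=0.5):
    # weight that one Gaussian component assigns to a neighbour
    # located at a continuous relative offset from the centre node
    d = offset - mu
    return float(np.exp(-0.5 * np.dot(d, d) / sigma ** 2))

def rotate(v, theta):
    # rotating the component mean rotates the whole filter;
    # no grid interpolation is needed because the filter lives
    # in continuous space, not on a pixel lattice
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

mu = np.array([1.0, 0.0])  # component mean of one filter
w_orig = gmm_filter_weight(np.array([1.0, 0.0]), mu)
w_rot = gmm_filter_weight(np.array([0.0, 1.0]), rotate(mu, np.pi / 2))
```

After a 90° rotation of the filter, a correspondingly rotated neighbour offset receives exactly the same weight, which is the mechanism behind the rotation-equivariance property.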
3. Sadeghi-Tehran P, Virlet N, Ampe EM, Reyns P, Hawkesford MJ. DeepCount: In-Field Automatic Quantification of Wheat Spikes Using Simple Linear Iterative Clustering and Deep Convolutional Neural Networks. Front Plant Sci 2019; 10:1176. PMID: 31616456; PMCID: PMC6775245; DOI: 10.3389/fpls.2019.01176. Received 04/18/2019; Accepted 08/28/2019.
Abstract
Crop yield is an essential measure for breeders, researchers, and farmers and may be calculated from the number of ears per square meter, grains per ear, and thousand-grain weight. Manual wheat ear counting, required in breeding programs to evaluate crop yield potential, is labor-intensive and expensive; thus, the development of a real-time wheat head counting system would be a significant advancement. In this paper, we propose a computationally efficient system called DeepCount to automatically identify and count wheat spikes in digital images taken under natural field conditions. The proposed method tackles wheat spike quantification by segmenting an image into superpixels using simple linear iterative clustering (SLIC), deriving canopy-relevant features, and then feeding a rational feature model into a deep convolutional neural network (CNN) for semantic segmentation of wheat spikes. As the method is based on a deep learning model, it replaces the hand-engineered features required by traditional machine learning methods with more efficient algorithms. The method is tested on digital images taken directly in the field at different stages of ear emergence/maturity (using visually different wheat varieties), with different canopy complexities (achieved through varying nitrogen inputs) and different heights above the canopy, under varying environmental conditions. In addition, the proposed technique is compared with a wheat ear counting method based on a previously developed edge-detection technique and morphological analysis. The proposed approach is validated with image-based ear counting and ground-based measurements. The results demonstrate that the DeepCount technique is highly robust to variables such as growth stage and weather conditions, demonstrating the feasibility of the approach in real scenarios.
The system is a step toward portable, smartphone-assisted wheat ear counting systems, reduces the labor involved, and is suitable for high-throughput analysis. It may also be adapted to work on red-green-blue (RGB) images acquired from unmanned aerial vehicles (UAVs).
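The segment-then-classify-then-count pattern behind DeepCount can be sketched in miniature. This is an assumption-laden toy, not the published system: `grid_superpixels` uses regular patches where SLIC would adapt boundaries to image content, and a brightness threshold stands in for the CNN's spike/non-spike decision.

```python
import numpy as np

def grid_superpixels(img, size=4):
    # stand-in for SLIC: partition the image into regular patches
    # (real SLIC would adapt patch boundaries to image content)
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    k = 0
    for y in range(0, h, size):
        for x in range(0, w, size):
            labels[y:y + size, x:x + size] = k
            k += 1
    return labels

def count_spike_patches(img, labels, thresh=0.6):
    # toy per-superpixel classifier: mean brightness stands in for
    # the CNN's decision (hypothetical threshold)
    return sum(1 for k in np.unique(labels) if img[labels == k].mean() > thresh)

img = np.zeros((8, 8))
img[0:4, 4:8] = 1.0  # one bright synthetic "spike" region
labels = grid_superpixels(img)
```

Counting classified superpixels rather than raw pixels is what keeps the approach fast enough for in-field, high-throughput use.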
Affiliation(s)
- Nicolas Virlet
- Plant Sciences Department, Rothamsted Research, Harpenden, United Kingdom
- Eva M. Ampe
- Phenotyping, Near Infrared and Research Automation Group, Limagrain Europe, Chappes, Netherlands
- Piet Reyns
- Phenotyping, Near Infrared and Research Automation Group, Limagrain Europe, Chappes, Netherlands
4. Zhou FY, Ruiz-Puig C, Owen RP, White MJ, Rittscher J, Lu X. Motion sensing superpixels (MOSES) is a systematic computational framework to quantify and discover cellular motion phenotypes. eLife 2019; 8:e40162. PMID: 30803483; PMCID: PMC6391079; DOI: 10.7554/elife.40162. Received 07/16/2018; Accepted 01/11/2019. Open access.
Abstract
Correct cell/cell interactions and motion dynamics are fundamental in tissue homeostasis, and defects in these cellular processes cause diseases. Therefore, there is strong interest in identifying factors, including drug candidates that affect cell/cell interactions and motion dynamics. However, existing quantitative tools for systematically interrogating complex motion phenotypes in timelapse datasets are limited. We present Motion Sensing Superpixels (MOSES), a computational framework that measures and characterises biological motion with a unique superpixel 'mesh' formulation. Using published datasets, MOSES demonstrates single-cell tracking capability and more advanced population quantification than Particle Image Velocimetry approaches. From > 190 co-culture videos, MOSES motion-mapped the interactions between human esophageal squamous epithelial and columnar cells mimicking the esophageal squamous-columnar junction, a site where Barrett's esophagus and esophageal adenocarcinoma often arise clinically. MOSES is a powerful tool that will facilitate unbiased, systematic analysis of cellular dynamics from high-content time-lapse imaging screens with little prior knowledge and few assumptions.
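One ingredient of a superpixel "mesh" formulation like the one described above is measuring how linked centroids move relative to each other across frames. The following is a minimal sketch, not MOSES itself: `mesh_strain` is a hypothetical helper, and real tracks would come from optical-flow-advected superpixels over many frames.

```python
import numpy as np

def mesh_strain(tracks, edges):
    # change in distance between linked superpixel centroids between
    # the first and last frame; positive = neighbours moving apart
    strains = []
    for i, j in edges:
        d0 = np.linalg.norm(tracks[0, i] - tracks[0, j])
        d1 = np.linalg.norm(tracks[-1, i] - tracks[-1, j])
        strains.append(d1 - d0)
    return np.array(strains)

# three centroids over two frames: 0 and 1 drift apart, 1 and 2 do not
tracks = np.array([[[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
                   [[-1.0, 0.0], [1.0, 0.0], [2.0, 0.0]]])
edges = [(0, 1), (1, 2)]
strain = mesh_strain(tracks, edges)
```

Mesh-level statistics of this kind are what let a method distinguish population behaviours (e.g. two sheets pushing against each other at a junction) from individual cell motion.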
Affiliation(s)
- Felix Y Zhou
- Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, United Kingdom
- Carlos Ruiz-Puig
- Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, United Kingdom
- Richard P Owen
- Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, United Kingdom
- Michael J White
- Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, United Kingdom
- Jens Rittscher
- Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, United Kingdom
- Institute of Biomedical Engineering, Department of Engineering, University of Oxford, Oxford, United Kingdom
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, United Kingdom
- Xin Lu
- Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, United Kingdom
5. Toro CAO, Gonzalo Martín C, García-Pedrero A, Menasalvas Ruiz E. Supervoxels-Based Histon as a New Alzheimer's Disease Imaging Biomarker. Sensors (Basel) 2018; 18:s18061752. PMID: 29844294; PMCID: PMC6022184; DOI: 10.3390/s18061752. Received 04/06/2018; Revised 05/22/2018; Accepted 05/25/2018.
Abstract
Alzheimer’s disease (AD) is the prevalent type of dementia in the elderly and is characterized by the presence of neurofibrillary tangles and amyloid plaques that eventually lead to the loss of neurons, resulting in atrophy in specific brain areas. Although the process of degeneration can be visualized through various modalities of medical imaging and has proved to be a valuable biomarker, the accurate diagnosis of Alzheimer’s disease remains a challenge, especially in its early stages. In this paper, we propose a novel classification method for Alzheimer’s disease/cognitively normal discrimination in structural magnetic resonance images (MRI), based on extending the concept of histons to volumetric images. The proposed method exploits the relationship between grey matter, white matter, and cerebrospinal fluid degeneration by means of a segmentation using supervoxels. The calculated histons are then reduced in dimensionality using principal component analysis (PCA), and the resulting vector is used to train a support vector machine (SVM) classifier. Experimental results on the OASIS-1 database show a significant improvement over a baseline classification made with the pipeline provided by the Clinica software.
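The histon idea, a histogram that only counts voxels whose neighbourhood agrees with them, can be sketched on a tiny volume. This is one simple interpretation, not the paper's exact formulation: the 6-neighbour mean test, the `tol` tolerance, and the per-volume (rather than per-supervoxel) accumulation are all assumptions made for brevity.

```python
import numpy as np

def histon(vol, bins=4, tol=0.1):
    # histon-like histogram: a voxel contributes to its intensity bin
    # only when the mean of its 6-neighbourhood is within `tol` of its
    # own value (a local-similarity test)
    h = np.zeros(bins)
    pad = np.pad(vol, 1, mode='edge')
    for z, y, x in np.ndindex(vol.shape):
        nb = (pad[z, y + 1, x + 1] + pad[z + 2, y + 1, x + 1] +
              pad[z + 1, y, x + 1] + pad[z + 1, y + 2, x + 1] +
              pad[z + 1, y + 1, x] + pad[z + 1, y + 1, x + 2]) / 6.0
        if abs(nb - vol[z, y, x]) <= tol:
            b = min(int(vol[z, y, x] * bins), bins - 1)
            h[b] += 1
    return h

vol = np.zeros((4, 4, 4))
vol[:2] = 0.9  # two homogeneous halves of different intensity
h = histon(vol)
```

In the full pipeline, one such vector per supervoxel would be concatenated, reduced with PCA, and fed to the SVM.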
Affiliation(s)
- César A Ortiz Toro
- Centro de Tecnología Biomédica, Campus de Montegancedo, Universidad Politécnica de Madrid, 28233 Pozuelo de Alarcón, Spain
- Consuelo Gonzalo Martín
- Centro de Tecnología Biomédica, Campus de Montegancedo, Universidad Politécnica de Madrid, 28233 Pozuelo de Alarcón, Spain
- Angel García-Pedrero
- Centro de Tecnología Biomédica, Campus de Montegancedo, Universidad Politécnica de Madrid, 28233 Pozuelo de Alarcón, Spain
- Ernestina Menasalvas Ruiz
- Centro de Tecnología Biomédica, Campus de Montegancedo, Universidad Politécnica de Madrid, 28233 Pozuelo de Alarcón, Spain
6. Sornapudi S, Stanley RJ, Stoecker WV, Almubarak H, Long R, Antani S, Thoma G, Zuna R, Frazier SR. Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels. J Pathol Inform 2018; 9:5. PMID: 29619277; PMCID: PMC5869967; DOI: 10.4103/jpi.jpi_74_17. Received 12/04/2017; Accepted 01/17/2018. Open access.
Abstract
Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for classifying squamous epithelium cervical intraepithelial neoplasia (CIN) into normal, CIN1, CIN2, and CIN3 grades. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated, based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis shows improved segmentation results compared to state-of-the-art methods.
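The object-based accuracy reported above counts detected nuclei rather than labelled pixels. A minimal sketch of such a metric, under assumptions (centroid matching with a hypothetical `max_dist` tolerance; the paper's exact matching criterion may differ):

```python
import numpy as np

def detection_accuracy(gt_centroids, det_centroids, max_dist=2.0):
    # object-based accuracy: a ground-truth nucleus counts as detected
    # when some detection lies within max_dist of its centroid
    hit = 0
    for g in gt_centroids:
        dists = [np.linalg.norm(np.array(g) - np.array(c)) for c in det_centroids]
        if dists and min(dists) <= max_dist:
            hit += 1
    return hit / len(gt_centroids)

gt = [(10, 10), (30, 30), (50, 50)]
det = [(11, 10), (29, 31)]  # two of the three nuclei are found
acc = detection_accuracy(gt, det)
```

Object-level scoring of this kind rewards finding each nucleus once, which pixel-wise accuracy does not.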
Affiliation(s)
- Sudhir Sornapudi
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, USA
- Ronald Joe Stanley
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, USA
- Haidar Almubarak
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, USA
- Rodney Long
- DHHS, Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- DHHS, Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- George Thoma
- DHHS, Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Rosemary Zuna
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Shelliane R Frazier
- Department of Surgical Pathology, University of Missouri Hospitals and Clinics, Columbia, USA
7. Wu W, Lin J, Wang S, Li Y, Liu M, Liu G, Cai J, Chen G, Chen R. A novel multiphoton microscopy images segmentation method based on superpixel and watershed. J Biophotonics 2017; 10:532-541. PMID: 27090206; DOI: 10.1002/jbio.201600007. Received 01/11/2016; Revised 03/24/2016; Accepted 03/28/2016.
Abstract
Multiphoton microscopy (MPM) imaging based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) shows excellent performance for biological imaging. However, the automatic segmentation of cellular architectural properties for biomedical diagnosis from MPM images remains a challenging issue. A novel MPM image segmentation method based on superpixels and watershed (MSW) is presented here to provide good segmentation results for MPM images. The proposed method uses SLIC superpixels instead of pixels to analyze MPM images for the first time. The superpixel segmentation, based on a new distance metric combining spatial, CIE Lab color space, and phase congruency features, divides the images into patches that preserve the details of the cell boundaries. The superpixels are then used to reconstruct new images by taking the average value of each superpixel as the pixel intensity level. Finally, marker-controlled watershed is applied to segment the cell boundaries from the reconstructed images. Experimental results show that MSW extracts cellular boundaries from MPM images with higher accuracy and robustness.
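The intermediate reconstruction step described above, replacing every superpixel with its mean intensity before the watershed, can be sketched directly. This toy assumes the label map is already available (here written by hand; in the paper it would come from the feature-augmented SLIC step).

```python
import numpy as np

def superpixel_reconstruct(img, labels):
    # rebuild the image by assigning every superpixel its mean
    # intensity; this smooths texture inside cells while keeping
    # superpixel boundaries sharp for the subsequent watershed
    out = np.zeros_like(img, dtype=float)
    for k in np.unique(labels):
        mask = labels == k
        out[mask] = img[mask].mean()
    return out

img = np.array([[0.0, 0.2],
                [0.8, 1.0]])
labels = np.array([[0, 0],
                   [1, 1]])  # stand-in for SLIC labels
rec = superpixel_reconstruct(img, labels)
```

Flattening intra-region variation this way is what suppresses the spurious minima that make plain watershed over-segment noisy MPM images.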
Affiliation(s)
- Weilin Wu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Department of Network and Communication Engineering, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Jinyong Lin
- Department of Radiation Oncology, Fujian Provincial Cancer Hospital, Fuzhou, Fujian, 350014, China
- Shu Wang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Department of Network and Communication Engineering, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Yan Li
- Key Laboratory of OptoElectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Department of Network and Communication Engineering, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Mingyu Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Department of Network and Communication Engineering, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Gaoqiang Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Department of Network and Communication Engineering, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Jianyong Cai
- Key Laboratory of OptoElectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Department of Network and Communication Engineering, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Guannan Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Department of Network and Communication Engineering, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Rong Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou, Fujian, 350007, China
- Department of Network and Communication Engineering, Fujian Normal University, Fuzhou, Fujian, 350007, China
8. Cong J, Wei B, Yin Y, Xi X, Zheng Y. Performance evaluation of simple linear iterative clustering algorithm on medical image processing. Biomed Mater Eng 2014; 24:3231-8. PMID: 25227032; DOI: 10.3233/bme-141145.
Abstract
The Simple Linear Iterative Clustering (SLIC) algorithm is increasingly applied to many kinds of image processing because of its excellent perceptually meaningful characteristics. To better meet the needs of medical image processing and provide a technical reference for applying SLIC to medical image segmentation, two indicators, boundary accuracy and superpixel uniformity, are introduced alongside other indicators to systematically analyze the performance of the SLIC algorithm, compared with the Normalized Cuts and Turbopixels algorithms. Extensive experimental results show that SLIC is faster and less sensitive to the image type and the chosen superpixel number than similar algorithms such as Turbopixels and Normalized Cuts. It also performs well in boundary recall, robustness to fuzzy boundaries, and the choice of superpixel size, and benefits segmentation performance on medical images.
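Boundary recall, one of the evaluation indicators mentioned above, is commonly defined as the fraction of ground-truth boundary pixels lying within a small distance of a superpixel boundary. A minimal sketch under that common definition (the paper's exact tolerance and implementation are not specified here):

```python
import numpy as np

def boundary_recall(gt_boundary, sp_boundary, r=1):
    # fraction of ground-truth boundary pixels that have a superpixel
    # boundary pixel within an r-pixel window around them
    hit, total = 0, 0
    ys, xs = np.nonzero(gt_boundary)
    for y, x in zip(ys, xs):
        total += 1
        win = sp_boundary[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        if win.any():
            hit += 1
    return hit / total if total else 1.0

gt = np.zeros((4, 4), dtype=bool)
gt[2, :] = True  # ground-truth boundary along row 2
sp = np.zeros((4, 4), dtype=bool)
sp[1, :] = True  # superpixel boundary one row off
```

With a one-pixel tolerance the off-by-one boundary still scores full recall, which is why the tolerance `r` must be reported alongside the metric.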
Affiliation(s)
- Jinyu Cong
- College of Science and Technology, Shandong University of Traditional Chinese Medicine, Jinan 250355, China
- Benzheng Wei
- College of Science and Technology, Shandong University of Traditional Chinese Medicine, Jinan 250355, China
- Yilong Yin
- School of Computer Science and Technology, Shandong University, Jinan 250100, China
- Xiaoming Xi
- School of Computer Science and Technology, Shandong University, Jinan 250100, China
- Yuanjie Zheng
- School of Medicine, University of Pennsylvania, Philadelphia 19104, USA