1. Liu M, Wu S, Chen R, Lin Z, Wang Y, Meijering E. Brain Image Segmentation for Ultrascale Neuron Reconstruction via an Adaptive Dual-Task Learning Network. IEEE Trans Med Imaging 2024; 43:2574-2586. [PMID: 38373129] [DOI: 10.1109/tmi.2024.3367384]
Abstract
Accurate morphological reconstruction of neurons in whole-brain images is critical for brain science research. However, owing to the wide imaging range, uneven staining, and optical system fluctuations, image properties differ significantly between regions of an ultrascale brain image, with dramatically varying voxel intensities and an inhomogeneous distribution of background noise, posing an enormous challenge to neuron reconstruction from whole-brain images. In this paper, we propose an adaptive dual-task learning network (ADTL-Net) to quickly and accurately extract neuronal structures from ultrascale brain images. The framework comprises an External Features Classifier (EFC) and a Parameter Adaptive Segmentation Decoder (PASD), which share a Multi-Scale Feature Encoder (MSFE). The MSFE introduces an attention module, the Channel Space Fusion Module (CSFM), to extract structural and intensity-distribution features of neurons at different scales and to address anisotropy in 3D space. The EFC then classifies the resulting feature maps by external features, such as foreground intensity distribution and image smoothness, and selects class-specific PASD parameters to decode them into accurate segmentation results. The PASD holds multiple sets of parameters, each trained on representative image blocks with different complex signal-to-noise distributions, so that varied images are handled more robustly. Experimental results show that, compared with other advanced segmentation methods for neuron reconstruction, the proposed method achieves state-of-the-art results on ultrascale brain images, with improvements of about 49% in speed and 12% in F1 score.
2. Zhang B, Wang W, Zhao W, Jiang X, Patnaik LM. An improved approach for automated cervical cell segmentation with PointRend. Sci Rep 2024; 14:14210. [PMID: 38902285] [PMCID: PMC11189924] [DOI: 10.1038/s41598-024-64583-7]
Abstract
Regular screening for cervical cancer is one of the best tools to reduce cancer incidence. Automated cell segmentation in screening is an essential task because it yields a better understanding of the characteristics of cervical cells. The main challenge of cytoplasm segmentation is that many boundaries within cell clumps are extremely difficult to identify. This paper proposes a new convolutional neural network, based on Mask R-CNN and the PointRend module, to segment overlapping cervical cells. The PointRend head concatenates fine-grained and coarse features extracted from different feature maps to refine the candidate boundary pixels of the cell cytoplasm, which are crucial for precise cell segmentation. The proposed model achieves a DSC (Dice Similarity Coefficient) of 0.97, TPRp (pixelwise true positive rate) of 0.96, FPRp (pixelwise false positive rate) of 0.007, and FNRo (object false negative rate) of 0.006 on the ISBI 2014 dataset. Notably, the proposed method outperforms the previous state of the art by about 3% in DSC, 1% in TPRp, and 1.4% in FNRo. On the ISBI 2015 dataset, its performance metrics are slightly better than the average of other approaches. These results indicate that the proposed method could be effective in cytological analysis and thus help experts correctly identify cervical cell lesions.
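The DSC, TPRp, and FPRp reported above follow the standard pixelwise definitions. As a hedged illustration (the function name and toy masks below are invented here, not taken from the paper), these metrics can be computed from binary masks with NumPy:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute DSC, pixelwise TPR, and pixelwise FPR for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # correctly predicted foreground
    fp = np.logical_and(pred, ~gt).sum()   # spurious foreground
    fn = np.logical_and(~pred, gt).sum()   # missed foreground
    tn = np.logical_and(~pred, ~gt).sum()  # correctly predicted background
    dsc = 2 * tp / (2 * tp + fp + fn)      # Dice similarity coefficient
    tpr = tp / (tp + fn)                   # pixelwise true positive rate
    fpr = fp / (fp + tn)                   # pixelwise false positive rate
    return dsc, tpr, fpr

# Tiny example: the prediction covers the ground truth plus two extra pixels
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True      # 4 positive pixels
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True  # 6 predicted pixels
dsc, tpr, fpr = segmentation_metrics(pred, gt)
```

Here `dsc` is 0.8 (4 true positives, 2 false positives, 0 false negatives), showing how a small boundary error in a clump lowers the score.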
Affiliation(s)
- Baocan Zhang: Chengyi College, Jimei University, Xiamen, 361021, Fujian, China
- Wenfeng Wang: Shanghai Institute of Technology, Shanghai, 200235, China; London Institute of Technology, International Academy of Visual Art and Engineering, London, CR2 6EQ, UK
- Wei Zhao: Chengyi College, Jimei University, Xiamen, 361021, Fujian, China
- Xiaolu Jiang: Chengyi College, Jimei University, Xiamen, 361021, Fujian, China
3. Yang T, Hu H, Li X, Meng Q, Lu H, Huang Q. An efficient Fusion-Purification Network for Cervical pap-smear image classification. Comput Methods Programs Biomed 2024; 251:108199. [PMID: 38728830] [DOI: 10.1016/j.cmpb.2024.108199]
Abstract
BACKGROUND AND OBJECTIVES: In cervical cell diagnostics, autonomous screening technology constitutes the foundation of automated diagnostic systems. Numerous deep learning-based classification techniques have been successfully applied to cervical cell images with favorable outcomes. Nevertheless, efficient discrimination of cervical cells remains challenging due to large intra-class and small inter-class variations. The key to dealing with this problem is to capture localized, informative differences from cervical cell images and to represent discriminative features efficiently. Existing methods neglect the importance of global morphological information, resulting in inadequate feature representation. METHODS: To address this limitation, we propose a novel cervical cell classification model that focuses on purified fusion information. Specifically, we first integrate detailed texture information and morphological structure features, a step named cervical pathology information fusion. Second, to enhance the discrimination of cervical cell features and counter the data redundancy and bias inherent after fusion, we design a cervical purification bottleneck module. The model strikes a balance between leveraging purified features and enabling high-efficiency discrimination. Furthermore, we introduce a new, more intricate cervical cell dataset: the Cervical Cytopathology Image Dataset (CCID). RESULTS: Extensive experiments on two real-world datasets show that the proposed model outperforms state-of-the-art cervical cell classification models. CONCLUSIONS: The results show that our method can effectively help pathologists accurately evaluate cervical smears.
Affiliation(s)
- Tianjin Yang: College of Computer and Information, Hohai University, Nanjing, 211100, PR China
- Hexuan Hu: College of Computer and Information, Hohai University, Nanjing, 211100, PR China
- Xing Li: College of Information Science and Technology & College of Artificial Intelligence, Nanjing Forestry University, Nanjing, 210037, PR China
- Qing Meng: College of Computer and Information, Hohai University, Nanjing, 211100, PR China
- Hao Lu: College of Computer and Information, Hohai University, Nanjing, 211100, PR China
- Qian Huang: College of Computer and Information, Hohai University, Nanjing, 211100, PR China
4. Han S, Phasouk K, Zhu J, Fong Y. Optimizing deep learning-based segmentation of densely packed cells using cell surface markers. BMC Med Inform Decis Mak 2024; 24:124. [PMID: 38750526] [PMCID: PMC11094866] [DOI: 10.1186/s12911-024-02502-6]
Abstract
BACKGROUND: Spatial molecular profiling depends on accurate cell segmentation. Identification and quantitation of individual cells in dense tissue, e.g. highly inflamed tissue caused by viral infection or immune reaction, remains a challenge. METHODS: We first assess the performance of 18 deep learning-based cell segmentation models, either pre-trained or trained by us on two public image sets, using a set of immunofluorescence images stained with immune cell surface markers in skin tissue obtained during human herpes simplex virus (HSV) infection. We then further train eight of these models using up to 10,000+ training instances from the current image set. Finally, we seek to improve performance by tuning the parameters of the most successful method from the previous step. RESULTS: The best model before fine-tuning achieves a mean Average Precision (mAP) of 0.516. Prediction performance improves substantially after training. The best model is the cyto model from Cellpose: after training it achieves an mAP of 0.694, and with further parameter tuning the mAP reaches 0.711. CONCLUSION: Selecting the best model among existing approaches and further training it on images of interest produces the largest gain in prediction performance. The resulting model compares favorably with human performance; its remaining imperfection can be attributed to the moderate signal-to-noise ratio of the image set.
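The mAP figures above come from instance-level matching of predicted and ground-truth masks. One common definition for cell segmentation (used, e.g., in the Cellpose benchmarks) scores average precision at a fixed IoU threshold as TP/(TP+FP+FN) after greedy matching. The sketch below illustrates that definition with hypothetical label images; it is not the authors' evaluation code:

```python
import numpy as np

def average_precision(gt_labels, pred_labels, iou_thresh=0.5):
    """AP = TP / (TP + FP + FN) after greedily matching instances by IoU."""
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]      # 0 = background
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    matched_pred = set()
    tp = 0
    for g in gt_ids:
        g_mask = gt_labels == g
        best_iou, best_p = 0.0, None
        for p in pred_ids:
            if p in matched_pred:
                continue
            p_mask = pred_labels == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_iou >= iou_thresh:
            tp += 1
            matched_pred.add(best_p)
    fp = len(pred_ids) - tp   # unmatched predictions
    fn = len(gt_ids) - tp     # missed ground-truth cells
    return tp / (tp + fp + fn)

# Two ground-truth cells; the prediction finds one of them exactly
gt = np.array([[1, 1, 0, 2], [1, 1, 0, 2]])
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
ap = average_precision(gt, pred)  # 1 TP, 0 FP, 1 FN -> AP = 0.5
```

Averaging this quantity over several IoU thresholds yields a mean AP comparable in spirit to the numbers quoted above.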
Affiliation(s)
- Sunwoo Han: Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
- Khamsone Phasouk: Department of Laboratory Medicine and Pathology, University of Washington School of Medicine, Seattle, USA
- Jia Zhu: Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA; Department of Laboratory Medicine and Pathology, University of Washington School of Medicine, Seattle, USA
- Youyi Fong: Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
5. Hörst F, Rempe M, Heine L, Seibold C, Keyl J, Baldini G, Ugurel S, Siveke J, Grünwald B, Egger J, Kleesiek J. CellViT: Vision Transformers for precise cell segmentation and classification. Med Image Anal 2024; 94:103143. [PMID: 38507894] [DOI: 10.1016/j.media.2024.103143]
Abstract
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks, crucial for a wide range of applications. The task is challenging, however, due to variance of nuclei in staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been used extensively for this task, we explore the potential of Transformer-based networks combined with large-scale pre-training in this domain. We therefore introduce CellViT, a deep learning architecture based on Vision Transformers for automated instance segmentation of cell nuclei in digitized tissue samples. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei in 19 tissue types annotated into 5 clinically important classes. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on PanNuke with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
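Panoptic quality, the headline metric above, is defined as the sum of IoUs of matched segment pairs divided by TP + FP/2 + FN/2, with pairs matched at IoU > 0.5 (a threshold that makes matches unique). A minimal single-class sketch with invented label arrays, illustrating the definition rather than the CellViT evaluation pipeline:

```python
import numpy as np

def panoptic_quality(gt_labels, pred_labels):
    """PQ = (sum of matched IoUs) / (TP + 0.5*FP + 0.5*FN), match at IoU > 0.5."""
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]      # 0 = background
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    iou_sum, tp = 0.0, 0
    matched_pred = set()
    for g in gt_ids:
        g_mask = gt_labels == g
        for p in pred_ids:
            p_mask = pred_labels == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            iou = inter / union if union else 0.0
            if iou > 0.5:              # IoU > 0.5 guarantees a unique match
                iou_sum += iou
                tp += 1
                matched_pred.add(p)
                break
    fp = len(pred_ids) - len(matched_pred)
    fn = len(gt_ids) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return iou_sum / denom if denom else 1.0

gt = np.array([[1, 1, 1, 0], [1, 1, 1, 0]])    # one nucleus, 6 pixels
pred = np.array([[2, 2, 0, 0], [2, 2, 0, 0]])  # overlaps 4 of those 6 pixels
pq = panoptic_quality(gt, pred)                # one match with IoU = 4/6
```

PQ thus factors into segmentation quality (mean matched IoU) times detection quality (F1), which is why the paper reports both PQ and an F1 detection score.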
Affiliation(s)
- Fabian Hörst: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Moritz Rempe: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Lukas Heine: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Constantin Seibold: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Clinic for Nuclear Medicine, University Hospital Essen (AöR), 45147 Essen, Germany
- Julius Keyl: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Pathology, University Hospital Essen (AöR), 45147 Essen, Germany
- Giulia Baldini: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen (AöR), 45147 Essen, Germany
- Selma Ugurel: Department of Dermatology, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany
- Jens Siveke: West German Cancer Center, partner site Essen, a partnership between German Cancer Research Center (DKFZ) and University Hospital Essen, University Hospital Essen (AöR), 45147 Essen, Germany; Bridge Institute of Experimental Tumor Therapy (BIT) and Division of Solid Tumor Translational Oncology (DKTK), West German Cancer Center Essen, University Hospital Essen (AöR), University of Duisburg-Essen, 45147 Essen, Germany
- Barbara Grünwald: Department of Urology, West German Cancer Center, University Hospital Essen (AöR), 45147 Essen, Germany; Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada
- Jan Egger: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Jens Kleesiek: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany; Department of Physics, TU Dortmund University, 44227 Dortmund, Germany
6. Rasheed A, Shirazi SH, Umar AI, Shahzad M, Yousaf W, Khan Z. Cervical cell's nucleus segmentation through an improved UNet architecture. PLoS One 2023; 18:e0283568. [PMID: 37788295] [PMCID: PMC10547184] [DOI: 10.1371/journal.pone.0283568]
Abstract
Precise segmentation of the nucleus is vital for computer-aided diagnosis (CAD) in cervical cytology. Automated delineation of the cervical nucleus faces notorious challenges due to clumped cells, color variation, noise, and fuzzy boundaries. Owing to its standout performance in medical image analysis, deep learning has gained attention over other techniques. We propose a deep learning model, C-UNet (Cervical-UNet), to segment cervical nuclei from overlapped, fuzzy, and blurred cervical cell smear images. Cross-scale feature integration based on a bi-directional feature pyramid network (BiFPN) and a wide context unit are used in the encoder of the classic UNet architecture to learn spatial and local features. The decoder of the improved network has two inter-connected decoders that mutually optimize and integrate these features to produce segmentation masks. Each component of the proposed C-UNet is extensively evaluated to judge its effectiveness on a complex cervical cell dataset, and different data augmentation techniques were employed to enhance training. Experimental results show that the proposed model outperformed existing models, i.e., CGAN (Conditional Generative Adversarial Network), DeepLabv3, Mask R-CNN (Region-Based Convolutional Neural Network), and FCN (Fully Convolutional Network), on the dataset employed in this study as well as on the ISBI-2014 and ISBI-2015 (International Symposium on Biomedical Imaging) datasets. C-UNet achieved an object-level accuracy of 93%, pixel-level accuracy of 92.56%, object-level recall of 95.32%, pixel-level recall of 92.27%, Dice coefficient of 93.12%, and F1-score of 94.96% on the complex cervical image dataset.
Affiliation(s)
- Assad Rasheed: Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Syed Hamad Shirazi: Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Arif Iqbal Umar: Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Muhammad Shahzad: Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Waqas Yousaf: Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Zakir Khan: Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
7. Wei S, Si L, Huang T, Du S, Yao Y, Dong Y, Ma H. Deep-learning-based cross-modality translation from Stokes image to bright-field contrast. J Biomed Opt 2023; 28:102911. [PMID: 37867633] [PMCID: PMC10587695] [DOI: 10.1117/1.jbo.28.10.102911]
Abstract
Significance: Mueller matrix (MM) microscopy has proven to be a powerful tool for probing microstructural characteristics of biological samples down to subwavelength scale. In clinical practice, however, doctors usually rely on bright-field microscopy images of stained tissue slides to identify characteristic features of specific diseases and make an accurate diagnosis. Cross-modality translation based on polarization imaging helps pathologists analyze sample properties across modalities more efficiently and stably. Aim: We propose a computational image translation technique based on deep learning that produces bright-field microscopy contrast from snapshot Stokes images of stained pathological tissue slides. Taking Stokes images as input instead of MM images makes the translated bright-field images insensitive to variations of the light source and samples. Approach: We adopted CycleGAN as the translation model to avoid the need for co-registered image pairs during training. The method can generate images equivalent to bright-field images with different staining styles of the same region. Results: Pathological slices of liver and breast tissue with hematoxylin and eosin staining, and lung tissue with two types of immunohistochemistry staining, i.e., thyroid transcription factor-1 and Ki-67, were used to demonstrate the effectiveness of the method, and the outputs were evaluated with four image quality assessment methods. Conclusions: Comparing cross-modality translation performance with MM images, we found that Stokes images, with the advantages of faster acquisition and independence from light intensity and image registration, can be translated well into bright-field images.
Affiliation(s)
- Shilong Wei: Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Lu Si: Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tongyu Huang: Tsinghua University, Shenzhen International Graduate School, Shenzhen, China; Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Shan Du: University of Chinese Academy of Sciences, Shenzhen Hospital, Department of Pathology, Shenzhen, China
- Yue Yao: Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Yang Dong: Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Hui Ma: Tsinghua University, Shenzhen International Graduate School, Shenzhen, China; Tsinghua University, Department of Biomedical Engineering, Beijing, China; Tsinghua University, Department of Physics, Beijing, China
8. Che VL, Zimmermann J, Zhou Y, Lu XL, van Rienen U. Contributions of deep learning to automated numerical modelling of the interaction of electric fields and cartilage tissue based on 3D images. Front Bioeng Biotechnol 2023; 11:1225495. [PMID: 37711443] [PMCID: PMC10497969] [DOI: 10.3389/fbioe.2023.1225495]
Abstract
Beyond their broad classical application range, electric fields find use in tissue engineering and in sensor applications. Accurate numerical models of electrical stimulation devices can pave the way for effective therapies in cartilage regeneration. To this end, the dielectric properties of the electrically stimulated tissue have to be known, yet knowledge of these properties is scarce. Electric field-based methods such as impedance spectroscopy enable determining the dielectric properties of tissue samples. To develop a detailed understanding of the interaction between the employed electric fields and the tissue, fine-grained numerical models based on tissue-specific 3D geometries are considered. A crucial ingredient in this approach is the automated generation of numerical models from biomedical images. In this work, we explore classical and artificial intelligence methods for volumetric image segmentation to generate model geometries. We find that deep learning, in particular the StarDist algorithm, permits fast and automatic generation of model geometries and discretisations once a sufficient amount of training data is available. Our results suggest that a small number of 3D images (23) is already sufficient to achieve 80% accuracy on the test data. The proposed method enables the creation of high-quality meshes without computer-aided design post-processing of the geometry; in particular, the computational time for geometrical model creation was halved. Uncertainty quantification, as well as a direct comparison between the deep learning and the classical approach, reveals that the numerical results mainly depend on the cell volume, a finding that motivates further research into impedance sensors for tissue characterisation. The presented approach can significantly improve the accuracy and computational speed of image-based models of electrical stimulation for tissue engineering applications.
Affiliation(s)
- Vien Lam Che: Institute of General Electrical Engineering, University of Rostock, Rostock, Germany
- Julius Zimmermann: Institute of General Electrical Engineering, University of Rostock, Rostock, Germany
- Yilu Zhou: Department of Mechanical Engineering, University of Delaware, Delaware, DE, USA
- X. Lucas Lu: Department of Mechanical Engineering, University of Delaware, Delaware, DE, USA
- Ursula van Rienen: Institute of General Electrical Engineering, University of Rostock, Rostock, Germany; Department Life, Light and Matter, University of Rostock, Rostock, Germany; Department of Ageing of Individuals and Society, Interdisciplinary Faculty, University of Rostock, Rostock, Germany
9. Zhang E, Xie R, Bian Y, Wang J, Tao P, Zhang H, Jiang S. Cervical cell nuclei segmentation based on GC-UNet. Heliyon 2023; 9:e17647. [PMID: 37456010] [PMCID: PMC10345258] [DOI: 10.1016/j.heliyon.2023.e17647]
Abstract
Cervical cancer diagnosis hinges significantly on precise nuclei segmentation at early stages, which, however, remains largely elusive due to challenges such as overlapping cells and blurred nuclei boundaries. This paper presents a novel deep neural network (DNN), the Global Context UNet (GC-UNet), designed to handle intricate environments adeptly and deliver accurate cell segmentation. At the core of GC-UNet is a DenseNet backbone, which encodes cell images while capitalizing on pre-existing knowledge. A unique context-aware pooling module, equipped with a gating model, is integrated to effectively encode ImageNet pre-trained features, ensuring that essential features at different levels are retained. Further, a decoder built on a global context attention block is employed to foster global feature interaction and refine the predicted masks.
Affiliation(s)
- Enguang Zhang: School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China; Zhuhai College of Science and Technology, Zhuhai, China
- Rixin Xie: School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Yuxin Bian: School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Jiayan Wang: School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Pengyi Tao: School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
- Heng Zhang: Faculty of Education, The University of Hong Kong, Pokfulam Road, Hong Kong, China
- Shenlu Jiang: School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
10. Ke J, Lu Y, Shen Y, Zhu J, Zhou Y, Huang J, Yao J, Liang X, Guo Y, Wei Z, Liu S, Huang Q, Jiang F, Shen D. ClusterSeg: A crowd cluster pinpointed nucleus segmentation framework with cross-modality datasets. Med Image Anal 2023; 85:102758. [PMID: 36731275] [DOI: 10.1016/j.media.2023.102758]
Abstract
The detection and segmentation of individual cells or nuclei is an indispensable prerequisite for image analysis across a variety of biological and biomedical applications. However, the ubiquitous presence of crowded clusters with morphological variations often hinders successful instance segmentation. In this paper, nuclei-cluster-focused annotation strategies and frameworks are proposed to overcome this challenging practical problem. Specifically, we design a nucleus segmentation framework, ClusterSeg, to tackle nuclei clusters; it consists of a convolutional-transformer hybrid encoder and a 2.5-path decoder for precise prediction of nuclei instance masks, contours, and clustered edges. Additionally, an annotation-efficient clustered-edge pointed strategy pinpoints the salient and error-prone boundaries, on which a partially-supervised variant, PS-ClusterSeg, is built using ClusterSeg as the segmentation backbone. The framework is evaluated with four privately curated image sets and two public sets, all featuring severely clustered nuclei, across a range of image modalities, e.g., microscope, cytopathology, and histopathology images. The proposed ClusterSeg and PS-ClusterSeg are modality-independent and generalizable, and empirically superior to current state-of-the-art approaches on multiple metrics. Our collected data, the elaborate annotations for both the public and private sets, and the source code are released publicly at https://github.com/lu-yizhou/ClusterSeg.
Affiliation(s)
- Jing Ke: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Yizhou Lu: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiqing Shen: Department of Computer Science, Johns Hopkins University, MD, USA
- Junchao Zhu: School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Yijin Zhou: School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Jinghan Huang: Department of Biomedical Engineering, National University of Singapore, Singapore
- Jieteng Yao: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaoyao Liang: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yi Guo: School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Zhonghua Wei: Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Sheng Liu: Department of Thyroid, Breast and Vascular Surgery, Shanghai Fourth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Qin Huang: Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fusong Jiang: Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
11. Kostrykin L, Rohr K. Superadditivity and Convex Optimization for Globally Optimal Cell Segmentation Using Deformable Shape Models. IEEE Trans Pattern Anal Mach Intell 2023; 45:3831-3847. [PMID: 35737620] [DOI: 10.1109/tpami.2022.3185583]
Abstract
Cell nuclei segmentation is challenging due to shape variation and closely clustered or partially overlapping objects. Most previous methods are not globally optimal, are limited to elliptical models, or are computationally expensive. In this work, we introduce a globally optimal approach to cell nuclei segmentation and cluster splitting based on deformable shape models and global energy minimization. We propose an implicit parameterization of deformable shape models and show that it leads to a convex energy. Convex energy minimization yields the global solution independently of the initialization, and is fast and robust. To jointly perform cell nuclei segmentation and cluster splitting, we developed a novel iterative global energy minimization method that leverages the inherent superadditivity of the convex energy: this property provides a lower bound on the energy of the union of the models and improves computational efficiency. Our method provably determines a solution close to global optimality. In addition, we derive a closed-form solution of the proposed global minimization, based on the superadditivity property, for non-clustered cell nuclei. We evaluated our method on fluorescence microscopy images of five different cell types comprising various challenges, and performed a quantitative comparison with previous methods. Our method achieved state-of-the-art or improved performance.
12
Geng X, Liu A, Chen Y, Meyers G. The area-reconstruction h-dome technique and its efficient Python implementation for improved particle size image analysis. Microsc Res Tech 2023; 86:614-626. [PMID: 36748122 DOI: 10.1002/jemt.24300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Revised: 12/11/2022] [Accepted: 01/15/2023] [Indexed: 02/08/2023]
Abstract
Traditional watershed segmentation methods usually suffer from over-segmentation for irregularly shaped particles. This is because the distance map of an irregularly shaped particle contains multiple local maxima, and over-segmentation would happen if these local maxima were used as seeds for watershed segmentation. In this work, several methods based on morphological reconstruction, including the h-dome transform, h-maxima, and the area-reconstruction h-dome transform, are introduced to merge or erase redundant local maxima, and the performance of these methods in avoiding over-segmentation is compared. The results show that the area-reconstruction h-dome transform is the most effective method in controlling over-segmentation among the evaluated methods. However, the area-reconstruction h-dome transform is achieved by superposition of binary reconstructions at each grayscale level, which is extremely time-consuming and impractical for batch processing. A hybrid pixel-queue algorithm is applied to accelerate the area-reconstruction h-dome transform, and the algorithm is implemented in Cython to further improve the computational efficiency. For a 2592 × 1944 pixel image, on a PC with an Intel Core i5 2.4 GHz processor and 8 GB RAM, the processing time of the accelerated area-reconstruction h-dome transform is about 549 ms, which is 249 times faster than the unaccelerated algorithm and 4 times faster than the reconstruction-by-dilation function in the Scikit-image library (an open-source image processing library for the Python programming language). The accelerated area-reconstruction h-dome transform algorithm was successfully applied to the segmentation of rubber particles in a thermoplastic polyolefin (TPO) compound.
RESEARCH HIGHLIGHTS: Techniques for segmenting particles with irregular shapes based on morphological reconstruction are reviewed. A fast algorithm for the area-reconstruction h-dome transform is introduced, based on Vincent's first approach combined with the pixel-queue algorithm and Cython acceleration. The accelerated reconstruction algorithm is 249 times faster than the unaccelerated algorithm. The fast area-reconstruction h-dome transform algorithm is successfully applied to rubber particle segmentation of a thermoplastic polyolefin.
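The seed-merging idea described in this abstract can be illustrated with standard scikit-image primitives. The sketch below uses the generic h-maxima transform to suppress shallow local maxima of the distance map before watershed seeding; it is not the paper's accelerated Cython implementation, and the synthetic shapes and the value h=5 are illustrative only:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

# Synthetic binary mask of two fused particles.
mask = np.zeros((80, 120), dtype=bool)
rr, cc = np.ogrid[:80, :120]
mask |= (rr - 40) ** 2 + (cc - 40) ** 2 < 30 ** 2
mask |= (rr - 40) ** 2 + (cc - 80) ** 2 < 30 ** 2

# Distance map: its local maxima are the candidate watershed seeds.
distance = ndi.distance_transform_edt(mask)

# Seeding on *every* local maximum over-segments irregular shapes;
# h_maxima keeps only maxima rising at least h=5 above their surroundings,
# merging spurious shallow peaks before the flood.
seeds, _ = ndi.label(h_maxima(distance, 5))
labels = watershed(-distance, seeds, mask=mask)

print(labels.max())  # -> 2: the two fused particles are separated
```

The area-reconstruction variant in the paper filters maxima by region area rather than by height h, but the overall pipeline (suppress redundant maxima, then seed the watershed) is the same.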
Affiliation(s)
- Xiang Geng
- School of Environmental and Chemical Engineering, Shanghai University, Shanghai, China; Dow Chemical (China) Invest Co., Ltd, Shanghai, China
- Andong Liu
- Dow Chemical (China) Invest Co., Ltd, Shanghai, China
- Yan'an Chen
- Kingfa Sci. & Tech. Co., Ltd, Guangzhou, China
13
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333 DOI: 10.1016/j.media.2022.102691] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 10/22/2022] [Accepted: 11/09/2022] [Indexed: 11/16/2022]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images by computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, driving a surge of publications in cytological studies. In this article, we survey more than 120 publications of DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
14
Liu G, Ding Q, Luo H, Sha M, Li X, Ju M. Cx22: A new publicly available dataset for deep learning-based segmentation of cervical cytology images. Comput Biol Med 2022; 150:106194. [PMID: 37859287 DOI: 10.1016/j.compbiomed.2022.106194] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 09/12/2022] [Accepted: 10/09/2022] [Indexed: 11/24/2022]
Abstract
The segmentation of cervical cytology images plays an important role in the automatic analysis of cervical cytology screening. Although deep learning-based segmentation methods are well developed in other image segmentation areas, their application to the segmentation of cervical cytology images is still at an early stage. The most important reason for the slow progress is the lack of publicly available, high-quality datasets, and research on deep learning-based segmentation methods may be hampered by the existing datasets, which are either artificial or plagued by false-negative objects. In this paper, we develop a new dataset of cervical cytology images named Cx22, which consists of completely annotated labels of the cellular instances based on the open-source images previously released by our institute. First, we meticulously delineate the contours of 14,946 cellular instances in 1320 images that are generated by our proposed ROI-based label cropping algorithm. Then, we propose baseline methods for the deep learning-based semantic and instance segmentation tasks based on Cx22. Finally, through experiments, we validate the task suitability of Cx22, and the results reveal the impact of false-negative objects on the performance of the baseline methods. Based on our work, Cx22 can provide a foundation for fellow researchers to develop high-performance deep learning-based methods for the segmentation of cervical cytology images. Other detailed information and step-by-step guidance on accessing the dataset are made available to fellow researchers at https://github.com/LGQ330/Cx22.
Affiliation(s)
- Guangqi Liu
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang, 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, 110169, China; University of Chinese Academy of Sciences, Beijing, 100049, China.
- Qinghai Ding
- Space Star Technology Co, Ltd., Beijing, 100086, China.
- Haibo Luo
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang, 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, 110169, China.
- Min Sha
- Archives of NEU, Northeastern University, Shenyang, 110819, China.
- Xiang Li
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang, 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, 110169, China; University of Chinese Academy of Sciences, Beijing, 100049, China.
- Moran Ju
- College of Information Science and Technology, Dalian Maritime University, Dalian, 116026, China.
15
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
16
Efficient tooth gingival margin line reconstruction via adversarial learning. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
17
Thai PL, Merry Geisa J. Classification of microscopic cervical blood cells using inception ResNet V2 with modified activation function. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-220511] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Cervical cancer is the most frequent and fatal malignancy among women worldwide. If this tumor is detected and treated early enough, the complications it causes can be minimized. Deep learning has demonstrated significant promise when applied to biomedical problems such as medical image processing and disease prognostication. Therefore, in this paper, an automatic cervical cell classification approach named IR-PapNet is developed based on Inception-ResNet, which is an optimized version of Inception. The learning model's conventional ReLU activation is replaced with the parametric rectified linear unit (PReLU) to overcome the nullification of negative values and the dying-ReLU problem. Finally, the model loss function is minimized with the SGD optimizer by modifying the attributes of the neural network. Furthermore, we present a simple but efficient noise removal technique based on the 2D discrete wavelet transform (2D-DWT) for enhancing image quality. Experimental results show that this model can achieve a top-1 average identification accuracy of 99.8% on the Herlev pap smear cervical dataset, which verifies its satisfactory performance. The restructured Inception-ResNet network model obtains significant improvements over most state-of-the-art models in 2-class classification, and it achieves a high learning rate without experiencing dead nodes.
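The activation swap this abstract describes is easy to see in isolation. A minimal numpy sketch of PReLU versus plain ReLU follows; the slope alpha=0.25 is illustrative here, whereas in the network it would be a learned parameter:

```python
import numpy as np

def prelu(x, alpha=0.25):
    """Parametric ReLU: identity for positive inputs, slope alpha for
    negatives, so negative activations are scaled rather than nullified."""
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x))          # [-0.5  -0.125  0.  1.  3.]
print(np.maximum(x, 0))  # plain ReLU: negatives collapse to 0
```

Because the negative half-plane keeps a nonzero gradient (alpha instead of 0), units cannot get permanently stuck at zero output, which is the "dead node" failure mode the abstract refers to.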
Affiliation(s)
- Pon L.T. Thai
- Department of Computer Science and Engineering, Arunachala College of Engineering for Women, Nagercoil, Tamil Nadu, India
- J. Merry Geisa
- Department of Electrical and Electronics Engineering, St. Xavier’s Catholic College of Engineering, Nagercoil, Tamil Nadu, India
18
Cervical Cell Segmentation Method Based on Global Dependency and Local Attention. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12157742] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
Abstract
The refined segmentation of nuclei and the cytoplasm is the most challenging task in the automation of cervical cell screening. The U-shaped network structure has demonstrated great superiority in the field of biomedical imaging. However, the classical U-Net cannot effectively utilize mixed-domain information and contextual information, and fails to achieve satisfactory results in this task. To address these problems, this study proposes a module based on global dependency and local attention (GDLA) for contextual information modeling and feature refinement. It consists of three components computed in parallel: a global dependency module, a spatial attention module, and a channel attention module. The global dependency module models global contextual information to capture a priori knowledge of cervical cells, such as the positional dependence of the nuclei and cytoplasm, and the closure and uniqueness of the nuclei. The spatial attention module combines contextual information to extract cell boundary information and refine target boundaries. The channel and spatial attention modules adapt to the input information, making it easy to identify subtle but dominant differences between similar objects. Comparative and ablation experiments are conducted on the Herlev dataset, and the experimental results demonstrate the effectiveness of the proposed method, which surpasses the most popular existing channel attention, hybrid attention, and context networks on the nuclei and cytoplasm segmentation metrics, achieving better segmentation performance than most previous advanced methods.
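To make the "parallel branches" idea concrete, here is a generic numpy sketch of channel and spatial attention gates computed in parallel and fused by summation. This is a simplified stand-in, not the paper's GDLA module: the global dependency branch is omitted, and the fusion-by-sum rule is an assumption for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Global average pool per channel -> (C, 1, 1) gate
    # that reweights whole feature channels.
    gate = sigmoid(x.mean(axis=(1, 2), keepdims=True))
    return x * gate

def spatial_attention(x):
    # Aggregate across channels -> (1, H, W) gate that highlights
    # spatial locations (e.g. cell boundaries).
    gate = sigmoid(x.mean(axis=0, keepdims=True))
    return x * gate

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
y = channel_attention(x) + spatial_attention(x)  # parallel branches, fused
print(y.shape)  # (8, 16, 16)
```

Real implementations learn the gate weights (small MLPs or convolutions instead of plain means), but the shapes and the gating pattern are as shown.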
19
Ilyas T, Mannan ZI, Khan A, Azam S, Kim H, De Boer F. TSFD-Net: Tissue specific feature distillation network for nuclei segmentation and classification. Neural Netw 2022; 151:1-15. [DOI: 10.1016/j.neunet.2022.02.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 12/26/2021] [Accepted: 02/23/2022] [Indexed: 10/18/2022]
20
Devaraj S, Madian N, Suresh S. Mathematical approach for segmenting chromosome clusters in metaspread images. Exp Cell Res 2022; 418:113251. [PMID: 35691379 DOI: 10.1016/j.yexcr.2022.113251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Revised: 05/15/2022] [Accepted: 06/06/2022] [Indexed: 11/04/2022]
Abstract
Karyotyping is an examination that helps in detecting chromosomal abnormalities. Chromosome analysis is a very challenging task that requires various steps to obtain a karyotype. The main challenges associated with chromosome analysis are overlapping and touching chromosomes. The input considered for chromosome analysis is the metaspread G-band chromosomes. The proposed work mainly focuses on separating overlapped and touching chromosomes, which is considered the major challenge in karyotyping. Various research contributions in chromosome analysis are in progress, including both low-level (machine learning) and high-level (deep learning) methods. This paper proposes a mathematics-based approach that is very effective in segmenting clustered chromosomes. The segmentation accuracy is robust compared with high-level approaches.
Affiliation(s)
- Nirmala Madian
- Department of BME, Dr.N.G.P Institute of Technology, Coimbatore, India.
- S Suresh
- Mediscan Systems, Chennai, India
21
Liu J, Fan H, Wang Q, Li W, Tang Y, Wang D, Zhou M, Chen L. Local Label Point Correction for Edge Detection of Overlapping Cervical Cells. Front Neuroinform 2022; 16:895290. [PMID: 35645753 PMCID: PMC9133536 DOI: 10.3389/fninf.2022.895290] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2022] [Accepted: 04/20/2022] [Indexed: 11/18/2022] Open
Abstract
Accurate labeling is essential for supervised deep learning methods. However, it is almost impossible to accurately and manually annotate thousands of images, which results in many labeling errors for most datasets. We propose a local label point correction (LLPC) method to improve annotation quality for edge detection and image segmentation tasks. Our algorithm contains three steps: gradient-guided point correction, point interpolation, and local point smoothing. We correct the labels of object contours by moving the annotated points to the pixel gradient peaks. This can improve the edge localization accuracy, but it also causes unsmooth contours due to the interference of image noise. Therefore, we design a point smoothing method based on local linear fitting to smooth the corrected edge. To verify the effectiveness of our LLPC, we construct the largest overlapping cervical cell edge detection dataset (CCEDD), with higher-precision labels corrected by our label correction method. Our LLPC needs only three parameters to be set, but yields a 30–40% average precision improvement on multiple networks. The qualitative and quantitative experimental results show that our LLPC can improve the quality of manual labels and the accuracy of overlapping cell edge detection. We hope that our study will give a strong boost to the development of label correction for edge detection and image segmentation. We will release the dataset and code at https://github.com/nachifur/LLPC.
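The first step, gradient-guided point correction, can be sketched as snapping each annotated point to the strongest intensity gradient within a small window. This is a simplified numpy illustration of the idea, not the released LLPC code; the window radius is an arbitrary choice:

```python
import numpy as np

def snap_to_gradient_peak(image, point, radius=3):
    """Move an annotated contour point to the location of maximum
    gradient magnitude within a (2*radius+1)^2 search window."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    r, c = point
    r0, r1 = max(r - radius, 0), min(r + radius + 1, image.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, image.shape[1])
    win = mag[r0:r1, c0:c1]
    dr, dc = np.unravel_index(np.argmax(win), win.shape)
    return (r0 + dr, c0 + dc)

# Step edge at column 10: a point loosely labeled at column 8 is
# pulled onto the edge (column 9-10 of the gradient ridge).
img = np.zeros((20, 20))
img[:, 10:] = 1.0
print(snap_to_gradient_peak(img, (10, 8)))
```

One caveat of this toy setup: on a perfectly uniform step edge the gradient is tied along the edge, so argmax may slide the point along it; real cell images break such ties, and the paper's subsequent interpolation and local-fitting steps smooth out exactly this kind of jitter.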
Affiliation(s)
- Jiawei Liu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Huijie Fan
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- *Correspondence: Huijie Fan
- Qiang Wang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Key Laboratory of Manufacturing Industrial Integrated, Shenyang University, Shenyang, China
- Wentao Li
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Yandong Tang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Danbo Wang
- Department of Gynecology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Mingyi Zhou
- Department of Gynecology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Li Chen
- Department of Pathology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
22
Khadka R, Jha D, Hicks S, Thambawita V, Riegler MA, Ali S, Halvorsen P. Meta-learning with implicit gradients in a few-shot setting for medical image segmentation. Comput Biol Med 2022; 143:105227. [PMID: 35124439 DOI: 10.1016/j.compbiomed.2022.105227] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 01/05/2022] [Accepted: 01/05/2022] [Indexed: 12/26/2022]
Abstract
Widely used traditional supervised deep learning methods require a large number of training samples but often fail to generalize on unseen datasets. Therefore, a more general application of any trained model is quite limited for medical imaging in clinical practice. Using separately trained models for each unique lesion category or a unique patient population would require sufficiently large curated datasets, which is not practical in a real-world clinical set-up. Few-shot learning approaches can not only minimize the need for an enormous number of reliable ground truth labels that are labour-intensive and expensive, but can also be used to model a dataset coming from a new population. To this end, we propose to exploit an optimization-based implicit model-agnostic meta-learning (iMAML) algorithm under few-shot settings for medical image segmentation. Our approach can leverage the learned weights from diverse but small training samples to perform analysis on unseen datasets with high accuracy. We show that, unlike classical few-shot learning approaches, our method improves generalization capability. To our knowledge, this is the first work that exploits iMAML for medical image segmentation and explores the strength of the model on scenarios such as meta-training on unique and mixed instances of lesion datasets. Our quantitative results on publicly available skin and polyp datasets show that the proposed method outperforms the naive supervised baseline model and two recent few-shot segmentation approaches by large margins. In addition, our iMAML approach shows an improvement of 2%-4% in Dice score compared to its counterpart MAML for most experiments.
Affiliation(s)
- Rabindra Khadka
- SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Debesh Jha
- SimulaMet, Oslo, Norway; UiT the Arctic University of Norway, Tromsø, Norway.
- Steven Hicks
- SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Michael A Riegler
- SimulaMet, Oslo, Norway; UiT the Arctic University of Norway, Tromsø, Norway
- Sharib Ali
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, UK; NIHR Oxford Biomedical Research Centre, University of Oxford, Oxford, UK.
- Pål Halvorsen
- SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
23
Zhao M, Wang S, Shi F, Jia C, Sun X, Chen S. OUP accepted manuscript. Bioinformatics 2022; 38:i53-i59. [PMID: 35758798 PMCID: PMC9235483 DOI: 10.1093/bioinformatics/btac219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Motivation: The presence of tumor cell clusters in pleural effusion may be a signal of cancer metastasis. The instance segmentation of single cells from cell clusters plays a pivotal role in cluster cell analysis. However, current cell segmentation methods perform poorly for cluster cells due to the overlapping/touching characteristics of clusters, the multiple-instance properties of cells, and the poor generalization ability of the models. Results: In this article, we propose a contour constraint instance segmentation framework (CC framework) for cluster cells based on a cluster cell combination enhancement module. The framework can accurately locate each instance in cluster cells and realize high-precision contour segmentation from only a few samples. Specifically, we propose the contour attention constraint module to alleviate over- and under-segmentation among individual cell-instance boundaries. In addition, to evaluate the framework, we construct a pleural effusion cluster cell dataset including 197 high-quality samples. The quantitative results show that the APmask is > 90%, a more than 10% increase compared with state-of-the-art semantic segmentation algorithms. From the qualitative results, we can observe that our method rarely has segmentation errors.
Affiliation(s)
- Meng Zhao
- To whom correspondence should be addressed.
- Siyu Wang
- Engineering Research Center of Learning-Based Intelligent System (Ministry of Education), The Key Laboratory of Computer Vision and System (Ministry of Education), and the School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
- Fan Shi
- Engineering Research Center of Learning-Based Intelligent System (Ministry of Education), The Key Laboratory of Computer Vision and System (Ministry of Education), and the School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
- Chen Jia
- Engineering Research Center of Learning-Based Intelligent System (Ministry of Education), The Key Laboratory of Computer Vision and System (Ministry of Education), and the School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
- Xuguo Sun
- School of Medical Laboratory, Tianjin Medical University, Tianjin 300204, China
- Shengyong Chen
- Engineering Research Center of Learning-Based Intelligent System (Ministry of Education), The Key Laboratory of Computer Vision and System (Ministry of Education), and the School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
24
Luo D, Kang H, Long J, Zhang J, Chen L, Quan T, Liu X. Dual supervised sampling networks for real-time segmentation of cervical cell nucleus. Comput Struct Biotechnol J 2022; 20:4360-4368. [PMID: 36051871 PMCID: PMC9411584 DOI: 10.1016/j.csbj.2022.08.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Revised: 08/09/2022] [Accepted: 08/09/2022] [Indexed: 12/24/2022] Open
25
Ergun H. Segmentation of wood cell in cross-section using deep convolutional neural networks. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-211386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Fiber and vessel structures located in the cross-section are anatomical features that play an important role in identifying tree species. In order to determine the microscopic anatomical structure of these cell types, each cell must be accurately segmented. In this study, a segmentation method is proposed for wood cell images based on deep convolutional neural networks. The network, developed by combining two-stage CNN structures, was trained using the Adam optimization algorithm. For evaluation, the method was compared with SegNet and U-Net architectures trained with the same dataset. The trained models were compared using IoU (Intersection over Union), accuracy, and BF-score measurements on the test data. The automatic identification of cells in wood images obtained using a microscope will provide a fast, inexpensive, and reliable tool for those working in this field.
Affiliation(s)
- Halime Ergun
- Necmettin Erbakan University, Seydişehir Ahmet Cengiz Faculty of Engineering, Computer Engineering, Konya, Turkey
26
Elameer AS, Jaber MM, Abd SK. Radiography image analysis using cat swarm optimized deep belief networks. JOURNAL OF INTELLIGENT SYSTEMS 2021; 31:40-54. [DOI: 10.1515/jisys-2021-0172] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/02/2023] Open
Abstract
Radiography images are widely utilized in the health sector to recognize the patient's health condition. Noise and irrelevant region information reduce the overall disease detection accuracy and increase computational complexity. Therefore, in this study, a statistical Kolmogorov–Smirnov test has been integrated with the wavelet transform to overcome the de-noising issues. Then a cat swarm-optimized deep belief network is applied to extract the features from the affected region. The optimized deep learning model reduces the feature training cost and time and improves the overall disease detection accuracy. The network learning process is enhanced according to the AdaDelta learning process, which replaces the learning parameter with a delta value. This process minimizes the error rate while recognizing the disease. The efficiency of the system is evaluated on an image retrieval in medical applications dataset. This process helps to detect various diseases, such as in breast, lung, and pediatric studies.
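The AdaDelta rule mentioned here (replacing a global learning rate with a running "delta" estimate) fits in a few lines. Below is a toy numpy sketch on a 1-D quadratic, unrelated to the paper's deep belief network; the values of rho, eps, and the objective are illustrative:

```python
import numpy as np

def adadelta_minimize(grad, x0, rho=0.95, eps=1e-6, steps=500):
    """AdaDelta: no hand-tuned learning rate. The step size is the ratio
    of the running RMS of past updates to the running RMS of gradients."""
    x = float(x0)
    eg2 = 0.0  # running average of squared gradients
    ed2 = 0.0  # running average of squared updates (the "delta" term)
    for _ in range(steps):
        g = grad(x)
        eg2 = rho * eg2 + (1 - rho) * g * g
        dx = -np.sqrt(ed2 + eps) / np.sqrt(eg2 + eps) * g
        ed2 = rho * ed2 + (1 - rho) * dx * dx
        x += dx
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
# Progress toward the minimum at x = 3 is gradual, since the tiny eps
# seeds very small first steps that grow as ed2 accumulates.
print(adadelta_minimize(lambda v: 2 * (v - 3), x0=0.0))
```

The key design point, as the abstract notes, is that the per-step "learning parameter" is the delta statistic ed2 itself, so the update magnitude adapts automatically to the curvature of the problem.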
Affiliation(s)
- Amer S. Elameer
- Biomedical Informatics College, University of Information Technology and Communications (UOITC), Baghdad, Iraq
- Mustafa Musa Jaber
- Department of Computer Science, Dijlah University College, Baghdad, 00964, Iraq
- Department of Computer Science, Al-Turath University College, Baghdad, Iraq
- Sura Khalil Abd
- Department of Computer Science, Dijlah University College, Baghdad, 00964, Iraq
27
Li J, Dou Q, Yang H, Liu J, Fu L, Zhang Y, Zheng L, Zhang D. Cervical cell multi-classification algorithm using global context information and attention mechanism. Tissue Cell 2021; 74:101677. [PMID: 34814053 DOI: 10.1016/j.tice.2021.101677] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Revised: 11/01/2021] [Accepted: 11/09/2021] [Indexed: 11/30/2022]
Abstract
Cervical cancer is the second deadliest cancer among women, second only to breast cancer. The cure rate of precancerous lesions found early is relatively high. Therefore, cervical cell classification has very important clinical value in the early screening of cervical cancer. This paper proposes a convolutional neural network (L-PCNN) that integrates global context information and an attention mechanism to classify cervical cells. The cell image is sent to an improved ResNet-50 backbone network to extract deep learning features. In order to better extract deep features, each convolution block introduces a convolutional block attention mechanism to guide the network to focus on the cell area. Then, the end of the backbone network adds a pyramid pooling layer and a long short-term memory (LSTM) module to aggregate image features in different regions. The low-level features and high-level features are integrated, so that the whole network can learn more regional detail features, and the problem of vanishing network gradients is alleviated. The experiments are conducted on the public SIPaKMeD dataset. The experimental results show that the accuracy of the proposed L-PCNN on cervical cells is 98.89%, the sensitivity is 99.9%, the specificity is 99.8%, and the F-measure is 99.89%, which is better than most cervical cell classification models and proves the effectiveness of the model.
Affiliation(s)
- Jun Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
- Qiyan Dou
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Haima Yang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
- Jin Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Le Fu
- Department of Radiology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
- Yu Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Lulu Zheng
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Dawei Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
28
Segmentation of Overlapping Cervical Cells with Mask Region Convolutional Neural Network. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:3890988. [PMID: 34646333 PMCID: PMC8505098 DOI: 10.1155/2021/3890988] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Accepted: 09/18/2021] [Indexed: 11/18/2022]
Abstract
The task of segmenting cytoplasm in cytology images is one of the most challenging in cervical cytological analysis due to the presence of fuzzy and highly overlapping cells. Deep learning-based diagnostic technology has proven effective in segmenting complex medical images. We present a two-stage framework based on Mask R-CNN to automatically segment overlapping cells. In stage one, candidate cytoplasm bounding boxes are proposed. In stage two, pixel-to-pixel alignment refines the boundaries and category classification is performed. The performance of the proposed method is evaluated on publicly available datasets from ISBI 2014 and 2015. The experimental results demonstrate that our method outperforms other state-of-the-art approaches with a DSC of 0.92 and an FPRp of 0.0008 at a DSC threshold of 0.8. These results indicate that our Mask R-CNN-based segmentation method could be effective in cytological analysis.
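The DSC quoted above measures the overlap between a predicted mask and the ground truth. A minimal illustrative sketch (the function is ours, not the authors' code):

```python
# Illustrative sketch (not the paper's code): the Dice similarity
# coefficient (DSC) between two binary masks, DSC = 2|A∩B| / (|A|+|B|).
def dice(pred, truth):
    """pred, truth: equal-length iterables of 0/1 pixel labels."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Two empty masks are conventionally treated as a perfect match.
    return 2 * inter / total if total else 1.0
```

Perfect overlap yields 1.0 and disjoint masks yield 0.0; the 0.8 threshold in the abstract counts a predicted cell as matched only when its DSC against the ground truth reaches 0.8.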
|
29
|
Cheng HJ, Hsu CH, Hung CL, Lin CY. A review for Cell and Particle Tracking on Microscopy Images using Algorithms and Deep Learning Technologies. Biomed J 2021; 45:465-471. [PMID: 34628059 PMCID: PMC9421944 DOI: 10.1016/j.bj.2021.10.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 09/30/2021] [Accepted: 10/01/2021] [Indexed: 01/06/2023] Open
Abstract
Time-lapse microscopy images generated by biological experiments have been widely used to observe target activities, such as motion trajectories and survival states. Based on these observations, biologists can draw experimental conclusions or propose new hypotheses for several biological applications, e.g., virus research or drug design. Many methods and tools have been proposed to observe cell and particle activities, defined as the single cell tracking and single particle tracking problems, using algorithms and deep learning technologies. This article reviews these works: it first summarizes past methods and research topics, then points out the problems they raise, and finally proposes future research directions. The contributions of this article will help researchers understand past development trends and propose innovative technologies.
Affiliation(s)
- Hui-Jun Cheng
- Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou 510095, China; Department of Computer Science and Information Engineering, Providence University, Taichung 43301, Taiwan
| | - Ching-Hsien Hsu
- Department of Computer Science and Information Engineering, Asia University, Taichung 41354, Taiwan; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, School of Mathematics and Big Data, Foshan University, Foshan 528000, China; Department of Medical Research, China Medical University Hospital, China Medical University, Taiwan
| | - Che-Lun Hung
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan; Department of Computer Science and Communication Engineering, Providence University, Taichung 43301, Taiwan
| | - Chun-Yuan Lin
- Department of Computer Science and Information Engineering, Asia University, Taichung 41354, Taiwan; Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan 33302, Taiwan.
| |
|
30
|
Li X, Xu Z, Shen X, Zhou Y, Xiao B, Li TQ. Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN. Curr Oncol 2021; 28:3585-3601. [PMID: 34590614 PMCID: PMC8482136 DOI: 10.3390/curroncol28050307] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 09/06/2021] [Accepted: 09/12/2021] [Indexed: 01/16/2023] Open
Abstract
Cervical cancer is a worldwide public health problem with high rates of illness and mortality among women. In this study, we propose a novel framework based on the Faster RCNN-FPN architecture for the detection of abnormal cervical cells in cytology images from a cancer screening test. We extend the Faster RCNN-FPN model by infusing deformable convolution layers into the feature pyramid network (FPN) to improve scalability. Furthermore, we introduce a global context aware module alongside the Region Proposal Network (RPN) to enhance the spatial correlation between the background and the foreground. Extensive experiments with the proposed deformable and global context aware (DGCA) RCNN were carried out using the cervical image dataset of the "Digital Human Body" Vision Challenge from the Alibaba Cloud TianChi platform. Performance evaluation based on the mean average precision (mAP) and receiver operating characteristic (ROC) curve demonstrated considerable advantages of the proposed framework. In particular, when combined with tagging of the negative image samples using traditional computer-vision techniques, a 6-9% increase in mAP was achieved. The proposed DGCA-RCNN model has the potential to become a clinically useful AI tool for the automated detection of cervical cancer cells in whole-slide images of Pap smears.
Affiliation(s)
- Xia Li
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Zhenhao Xu
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Xi Shen
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Yongxia Zhou
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Binggang Xiao
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Tie-Qiang Li
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, S-17177 Stockholm, Sweden
- Department of Medical Radiation and Nuclear Medicine, Karolinska University Hospital, S-14186 Stockholm, Sweden
| |
|
31
|
Shi J, Ding X, Liu X, Li Y, Liang W, Wu J. Automatic clinical target volume delineation for cervical cancer in CT images using deep learning. Med Phys 2021; 48:3968-3981. [PMID: 33905545 DOI: 10.1002/mp.14898] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Revised: 01/14/2021] [Accepted: 03/29/2021] [Indexed: 12/25/2022] Open
Abstract
PURPOSE Accurately delineating clinical target volumes (CTV) is essential for completing radiotherapy plans but is time-consuming, labor-intensive, and prone to inter-observer variation. Automating CTV delineation both speeds up the contouring process and improves the quality of contours. Recently, auto-segmentation approaches based on deep learning have achieved some improvements. However, unlike an organ, the CTV contains tissues of potential tumor spread or subclinical disease, resulting in a poorly defined margin interface and irregular shape. It is not reasonable to apply deep learning segmentation algorithms directly to CTV tasks without considering these unique characteristics of shape and margin. In this work, we propose a novel automatic CTV delineation algorithm based on deep learning that addresses both challenges. METHODS Our deep learning method, called RA-CTVNet, segments the CTV from cervical cancer CT images; the name denotes our CTV delineation algorithm with an Area-aware reweight strategy and a Recursive refinement strategy. (1) To process whole-volume CT images and delineate all CTVs in one shot, our method is built upon the popular 3D U-Net architecture, which we extend with residual learning and squeeze-and-excitation blocks for better feature representation. (2) We propose an area-aware reweight strategy that assigns different weights to different slices, its core being the adjustment of the model's attention to each slice. (3) Trading off performance gains against the limitations of GPU memory, we exploit a new recursive refinement strategy to address the margin challenge. RESULTS This retrospective study included 462 patients diagnosed with cervical cancer who received radiotherapy from June 2017 to May 2019. Extensive experiments were conducted to evaluate the performance of RA-CTVNet. First, compared to different network architectures, RA-CTVNet achieved improvements in Dice similarity coefficient (DSC). Second, an ablation study showed that, compared to the backbone, the area-aware reweight strategy increased DSC by 3.3% on average and the recursive refinement strategy further increased DSC by 1.6% on average. We then compared our method with three human experts: RA-CTVNet performed better than two experts and comparably to the third. Finally, a multicenter evaluation was conducted to verify accuracy and generalizability. CONCLUSIONS Our findings show that deep learning offers an efficient framework for automatic CTV delineation. The tailored RA-CTVNet can improve the quality of CTV contours, with great potential for reducing the burden on experts and increasing delineation accuracy. In the future, with more training data, further improvements are possible, bringing this approach closer to real clinical practice.
Affiliation(s)
- Jialin Shi
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
| | - Xiaofeng Ding
- Department of Radiation Oncology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
| | - Xien Liu
- Tsinghua-iFLYTEK Joint Lab, Hefei, 230022, China
| | - Yan Li
- The 901th Hospital of the Joint Logistics Support Force of People's Liberation Army, Hefei, 230022, China
| | - Wei Liang
- Department of Radiation Oncology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
| | - Ji Wu
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China; Institute for Precision Medicine, Tsinghua University, Beijing, 100084, China
| |
|
32
|
Victória Matias A, Atkinson Amorim JG, Buschetto Macarini LA, Cerentini A, Casimiro Onofre AS, De Miranda Onofre FB, Daltoé FP, Stemmer MR, von Wangenheim A. What is the state of the art of computer vision-assisted cytology? A Systematic Literature Review. Comput Med Imaging Graph 2021; 91:101934. [PMID: 34174544 DOI: 10.1016/j.compmedimag.2021.101934] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 04/16/2021] [Accepted: 05/04/2021] [Indexed: 11/28/2022]
Abstract
Cytology is a low-cost and non-invasive diagnostic procedure employed to support the diagnosis of a broad range of pathologies. Cells are harvested from tissues by aspiration or scraping, and the analysis is still predominantly performed manually by medical or laboratory professionals extensively trained for this purpose. It is a time-consuming and repetitive process in which many diagnostic criteria are subjective and vulnerable to human interpretation. Computer vision technologies, by automatically generating quantitative and objective descriptions of an examination's contents, can help minimize the chances of misdiagnosis and shorten the time required for analysis. To identify the state of the art of computer vision techniques currently applied to cytology, we conducted a Systematic Literature Review, searching for approaches for the segmentation, detection, quantification, and classification of cells and organelles using computer vision on cytology slides, analyzing papers published in the last 4 years. The initial search was executed in September 2020 and resulted in 431 articles. After applying the inclusion/exclusion criteria, 157 papers remained, which we analyzed to build a picture of the tendencies and problems present in this research area, highlighting the computer vision methods, staining techniques, evaluation metrics, and the availability of the datasets and computer code used. As a result, we identified that deep learning-based methods are employed in 70 of the analyzed works, while 101 works employ classic computer vision only. The most recurrent metric for classification and object detection was accuracy (33 and 5 papers, respectively), while for segmentation it was the Dice Similarity Coefficient (38 papers). Regarding staining techniques, Papanicolaou was the most employed (130 papers), followed by H&E (20 papers) and Feulgen (5 papers). Twelve of the datasets used in the papers are publicly available, with the DTU/Herlev dataset being the most used. We conclude that there is still a lack of high-quality datasets for many types of stains and that most of the works are not mature enough to be applied in a daily clinical diagnostic routine. We also identified a growing tendency towards adopting deep learning-based approaches as the methods of choice.
Affiliation(s)
- André Victória Matias
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil.
| | | | | | - Allan Cerentini
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil.
| | | | | | - Felipe Perozzo Daltoé
- Department of Pathology, Federal University of Santa Catarina, Florianópolis, Brazil.
| | - Marcelo Ricardo Stemmer
- Automation and Systems Department, Federal University of Santa Catarina, Florianópolis, Brazil.
| | - Aldo von Wangenheim
- Brazilian Institute for Digital Convergence, Federal University of Santa Catarina, Florianópolis, Brazil.
| |
|
33
|
Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:24365-24398. [PMID: 33841033 PMCID: PMC8023554 DOI: 10.1007/s11042-021-10707-4] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 11/28/2020] [Accepted: 02/10/2021] [Indexed: 05/05/2023]
Abstract
Medical imaging plays a significant role in different clinical applications, such as procedures for early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. Basics of the principles and implementations of artificial neural networks and deep learning are essential for understanding medical image analysis in computer vision. The Deep Learning Approach (DLA) to medical image analysis is a fast-growing research field, and DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA, which delivers promising medical imaging applications. Most DLA implementations concentrate on X-ray, computerized tomography, mammography, and digital histopathology images. The paper provides a systematic review of articles on the classification, detection, and segmentation of medical images based on DLA, guiding researchers toward appropriate improvements in DLA-based medical image analysis.
Affiliation(s)
- Muralikrishna Puttagunta
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
| | - S. Ravi
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
| |
|
34
|
Li S, Jiang H, Li H, Yao YD. AW-SDRLSE: Adaptive Weighting and Scalable Distance Regularized Level Set Evolution for Lymphoma Segmentation on PET Images. IEEE J Biomed Health Inform 2021; 25:1173-1184. [PMID: 32841130 DOI: 10.1109/jbhi.2020.3017546] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Accurate lymphoma segmentation on Positron Emission Tomography (PET) images is of great importance for medical diagnosis, such as distinguishing benign from malignant lesions. To this end, this paper proposes an adaptive weighting and scalable distance regularized level set evolution (AW-SDRLSE) method for delineating lymphoma boundaries on 2D PET slices. AW-SDRLSE has three important characteristics: 1) a scalable distance regularization term is proposed, in which a parameter q can control the contour's convergence rate and precision in theory; 2) a novel dynamic annular mask is proposed to calculate the mean intensities of local interior and exterior regions and thereby define the region energy term; 3) as the level set method is sensitive to parameters, we propose an adaptive weighting strategy for the length and area energy terms using local region intensity and boundary direction information. AW-SDRLSE is evaluated on 90 cases of real PET data with a mean Dice coefficient of 0.8796. Comparative results demonstrate the accuracy and robustness of AW-SDRLSE as well as its performance advantages compared with related level set methods. In addition, experimental results indicate that AW-SDRLSE can serve as a fine-segmentation method that significantly improves the lymphoma segmentation results obtained by deep learning (DL) methods.
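The distance regularization that AW-SDRLSE scales originates in the DRLSE double-well potential, which keeps the level set function close to a signed distance function (|∇φ| ≈ 1) near the contour and flat (|∇φ| ≈ 0) far from it. A minimal illustrative sketch of the standard DRLSE potential (function names are ours; this is not the paper's scalable variant):

```python
import math

# Illustrative sketch (not the paper's code): the standard DRLSE
# double-well potential p2(s) and its derivative, where s = |∇φ|.
def double_well(s):
    """p2(s): minima at s = 0 and s = 1."""
    if s <= 1.0:
        return (1.0 - math.cos(2.0 * math.pi * s)) / (2.0 * math.pi) ** 2
    return 0.5 * (s - 1.0) ** 2

def double_well_deriv(s):
    """p2'(s): zero at the minima, so gradient flow stops there."""
    if s <= 1.0:
        return math.sin(2.0 * math.pi * s) / (2.0 * math.pi)
    return s - 1.0
```

Because the potential has minima at s = 0 and s = 1, gradient flow on this term drives |∇φ| toward one of those values, removing the need for periodic re-initialization of the level set function.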
|
35
|
Sobhani F, Robinson R, Hamidinekoo A, Roxanis I, Somaiah N, Yuan Y. Artificial intelligence and digital pathology: Opportunities and implications for immuno-oncology. Biochim Biophys Acta Rev Cancer 2021; 1875:188520. [PMID: 33561505 PMCID: PMC9062980 DOI: 10.1016/j.bbcan.2021.188520] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 01/04/2021] [Accepted: 01/30/2021] [Indexed: 02/08/2023]
Abstract
The field of immuno-oncology has expanded rapidly over the past decade, but key questions remain. How does tumour-immune interaction regulate disease progression? How can we prospectively identify patients who will benefit from immunotherapy? Identifying measurable features of the tumour immune-microenvironment which have prognostic or predictive value will be key to making meaningful gains in these areas. Recent developments in deep learning enable big-data analysis of pathological samples. Digital approaches allow data to be acquired, integrated and analysed far beyond what is possible with conventional techniques, and to do so efficiently and at scale. This has the potential to reshape what can be achieved in terms of volume, precision and reliability of output, enabling data for large cohorts to be summarised and compared. This review examines applications of artificial intelligence (AI) to important questions in immuno-oncology (IO). We discuss general considerations that need to be taken into account before AI can be applied in any clinical setting. We describe AI methods that have been applied to the field of IO to date and present several examples of their use.
Affiliation(s)
- Faranak Sobhani
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| | - Ruth Robinson
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK.
| | - Azam Hamidinekoo
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| | - Ioannis Roxanis
- The Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, UK.
| | - Navita Somaiah
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK.
| | - Yinyin Yuan
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| |
|
36
|
Ke J, Shen Y, Lu Y, Deng J, Wright JD, Zhang Y, Huang Q, Wang D, Jing N, Liang X, Jiang F. Quantitative analysis of abnormalities in gynecologic cytopathology with deep learning. J Transl Med 2021; 101:513-524. [PMID: 33526806 DOI: 10.1038/s41374-021-00537-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 12/21/2020] [Accepted: 01/04/2021] [Indexed: 12/19/2022] Open
Abstract
Cervical cancer is one of the most frequent cancers in women worldwide, yet early detection and treatment of lesions via regular cervical screening have led to a drastic reduction in the mortality rate. However, routine screening as a regular health checkup is time-consuming and labor-intensive, and it lacks a characteristic phenotypic profile and quantitative analysis. In this research, through the analysis of a privately collected and manually annotated dataset of 130 cytological whole-slide images, the authors propose a deep-learning diagnostic system to localize, grade, and quantify squamous cell abnormalities. The system can distinguish abnormalities at the morphology level, namely atypical squamous cells of undetermined significance, low-grade squamous intraepithelial lesion, high-grade squamous intraepithelial lesion, and squamous cell carcinoma, as well as differential phenotypes of normal cells. The case study covered 51 positive and 79 negative digital gynecologic cytology slides collected from 2016 to 2018. Our automatic diagnostic system demonstrated a sensitivity of 100% in slide-level abnormality prediction, confirmed by three pathologists who performed the slide-level diagnosis and training sample annotation. In cellular-level classification, we achieved an accuracy of 94.5% in the binary classification between normality and abnormality, and the AUC was above 85% for each subtype of epithelial abnormality. Although final confirmation from pathologists is often a must, empirically, computer-aided methods are capable of effective extraction, interpretation, and quantification of morphological features, while also making the process more objective and reproducible.
Affiliation(s)
- Jing Ke
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China.
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia.
| | - Yiqing Shen
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Yizhou Lu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Junwei Deng
- School of Information, University of Michigan, Ann Arbor, MI, USA
| | - Jason D Wright
- Department of Obstetrics and Gynecology, Columbia University, New York, NY, USA
| | - Yan Zhang
- Department of Pathology, Shanghai Tongshu Medical Laboratory Co.Ltd, Shanghai, China
| | - Qin Huang
- Department of Endocrinology and Metabolism, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Dadong Wang
- Quantitative Imaging, Data61 CSIRO, Sydney, NSW, Australia
| | - Naifeng Jing
- Department of Micro-Nano Electronics, Shanghai Jiao Tong University, Shanghai, China
| | - Xiaoyao Liang
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Biren Research, Shanghai, China
| | - Fusong Jiang
- Department of Endocrinology and Metabolism, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Clinical Center for Diabetes, Shanghai, China
| |
|
37
|
Qamar S, Ahmad P, Shen L. Dense Encoder-Decoder–Based Architecture for Skin Lesion Segmentation. Cognit Comput 2021. [DOI: 10.1007/s12559-020-09805-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
38
|
Lin H, Chen H, Wang X, Wang Q, Wang L, Heng PA. Dual-path network with synergistic grouping loss and evidence driven risk stratification for whole slide cervical image analysis. Med Image Anal 2021; 69:101955. [PMID: 33588122 DOI: 10.1016/j.media.2021.101955] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 12/28/2020] [Accepted: 01/02/2021] [Indexed: 12/26/2022]
Abstract
Cervical cancer has been one of the most lethal cancers threatening women's health. Nevertheless, its incidence can be effectively minimized with preventive clinical management strategies, including vaccines and regular screening examinations. Screening cervical smears under the microscope by cytologists is a widely used routine in regular examination, which consumes a large amount of cytologists' time and labour. Computerized cytology analysis caters to this imperative need, alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitalized whole slide images (WSIs) remains a challenging problem, due to the extremely high image resolution, the existence of tiny lesions, noisy datasets, and intricate clinical definitions of classes with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporating a synergistic grouping loss (SGL), the network can be effectively trained on a noisy dataset with fuzzy inter-class boundaries. Inspired by the clinical diagnostic criteria of cytologists, a novel smear-level classifier, i.e., rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, aligning reasonably with the intricate cytological definition of the classes. Extensive experiments on the largest dataset, including 19,303 WSIs from multiple medical centers, validate the robustness of our method. With a high sensitivity of 0.907 and a specificity of 0.80, our method manifests the potential to reduce the workload of cytologists in routine practice.
Affiliation(s)
- Huangjing Lin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
| | - Hao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Xi Wang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Qiong Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
| | - Liansheng Wang
- Department of Computer Science, Xiamen University, Xiamen, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| |
|
39
|
Tan X, Li K, Zhang J, Wang W, Wu B, Wu J, Li X, Huang X. Automatic model for cervical cancer screening based on convolutional neural network: a retrospective, multicohort, multicenter study. Cancer Cell Int 2021; 21:35. [PMID: 33413391 PMCID: PMC7791865 DOI: 10.1186/s12935-020-01742-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 12/14/2020] [Accepted: 12/25/2020] [Indexed: 12/21/2022] Open
Abstract
Background: The incidence rates of cervical cancer in developing countries have been steeply increasing, while the medical resources for prevention, detection, and treatment remain quite limited. Computer-based deep learning methods can achieve fast, high-accuracy cancer screening, which can lead to early diagnosis, effective treatment, and hopefully successful prevention of cervical cancer. In this work, we seek to construct a robust deep convolutional neural network (DCNN) model that can assist pathologists in screening for cervical cancer. Methods: ThinPrep cytologic test (TCT) images diagnosed by pathologists from many collaborating hospitals in different regions were collected. The images were divided into a training dataset (13,775 images), a validation dataset (2301 images), and a test dataset (408,030 images from 290 scanned copies) for training and evaluation of a faster region convolutional neural network (Faster R-CNN) system. Results: The sensitivity and specificity of the proposed cervical cancer screening system were 99.4% and 34.8%, respectively, with an area under the curve (AUC) of 0.67. The model could also distinguish between negative and positive cells. The sensitivity values for atypical squamous cells of undetermined significance (ASCUS), low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesions (HSIL) were 89.3%, 71.5%, and 73.9%, respectively. The system can classify the images and generate a test report in about 3 minutes, reducing the burden on pathologists and saving them valuable time to analyze more complex cases. Conclusions: In our study, a CNN-based TCT cervical cancer screening model was established through a retrospective study of multicenter TCT images. This model shows improved speed and accuracy for cervical cancer screening and helps overcome the shortage of medical resources required for it.
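The AUC reported above can be computed directly as the probability that a randomly chosen positive sample is scored above a randomly chosen negative one (the Mann-Whitney formulation of the ROC area). A small sketch under that interpretation (the function is ours, not the authors' code):

```python
# Illustrative sketch (not the authors' code): AUC as the Mann-Whitney
# statistic, i.e. P(score of random positive > score of random negative),
# counting ties as half a win. O(n*m); fine for small illustrative inputs.
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.67, as in the abstract, therefore means a positive slide outranks a negative one about two times in three, which is consistent with the reported high sensitivity but low specificity at the chosen operating point.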
Affiliation(s)
- Xiangyu Tan
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, 430030, Wuhan, Hubei, China
| | - Kexin Li
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, 430030, Wuhan, Hubei, China
| | - Jiucheng Zhang
- College of Computer Science & Technology, Zhejiang University, 310027, Hangzhou, China
| | - Wenzhe Wang
- College of Computer Science & Technology, Zhejiang University, 310027, Hangzhou, China
| | - Bian Wu
- Data Science and AI Lab, WeDoctor Group Limited, 311200, Hangzhou, China
| | - Jian Wu
- School of Public Health, Zhejiang University, 310027, Hangzhou, China
| | - Xiaoping Li
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, 430030, Wuhan, Hubei, China.
| | - Xiaoyuan Huang
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, 430030, Wuhan, Hubei, China.
| |
|
40
|
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021; 67:101813. [PMID: 33049577 PMCID: PMC7725956 DOI: 10.1016/j.media.2020.101813] [Citation(s) in RCA: 212] [Impact Index Per Article: 70.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 05/12/2020] [Accepted: 08/09/2020] [Indexed: 12/14/2022]
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
- Ozan Ciga
- Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
41
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
42
Mota SM, Rogers RE, Haskell AW, McNeill EP, Kaunas R, Gregory CA, Giger ML, Maitland KC. Automated mesenchymal stem cell segmentation and machine learning-based phenotype classification using morphometric and textural analysis. J Med Imaging (Bellingham) 2021; 8:014503. [PMID: 33542945 PMCID: PMC7849042 DOI: 10.1117/1.jmi.8.1.014503] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Accepted: 01/11/2021] [Indexed: 01/22/2023] Open
Abstract
Purpose: Mesenchymal stem cells (MSCs) have demonstrated clinically relevant therapeutic effects for treatment of trauma and chronic diseases. The proliferative potential, immunomodulatory characteristics, and multipotentiality of MSCs in monolayer culture is reflected by their morphological phenotype. Standard techniques to evaluate culture viability are subjective, destructive, or time-consuming. We present an image analysis approach to objectively determine morphological phenotype of MSCs for prediction of culture efficacy. Approach: The algorithm was trained using phase-contrast micrographs acquired during the early and mid-logarithmic stages of MSC expansion. Cell regions are localized using edge detection, thresholding, and morphological operations, followed by cell marker identification using H-minima transform within each region to differentiate individual cells from cell clusters. Clusters are segmented using marker-controlled watershed to obtain single cells. Morphometric and textural features are extracted to classify cells based on phenotype using machine learning. Results: Algorithm performance was validated using an independent test dataset of 186 MSCs in 36 culture images. Results show 88% sensitivity and 86% precision for overall cell detection and a mean Sorensen-Dice coefficient of 0.849 ± 0.106 for segmentation per image. The algorithm exhibited an area under the curve of 0.816 (CI 95 = 0.769 to 0.886) and 0.787 (CI 95 = 0.716 to 0.851) for classifying MSCs according to their phenotype at early and mid-logarithmic expansion, respectively. Conclusions: The proposed method shows potential to segment and classify low and moderately dense MSCs based on phenotype with high accuracy and robustness. It enables quantifiable and consistent morphology-based quality assessment for various culture protocols to facilitate cytotherapy development.
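The abstract mentions classifying cells from morphometric features after watershed segmentation. One standard morphometric descriptor of this kind (used here purely as a hypothetical illustration; the paper does not list its exact feature set) is circularity, 4πA/P², which is 1.0 for a perfect disk and lower for elongated or irregular cell outlines:

```python
import math

def circularity(area, perimeter):
    """Circularity = 4*pi*area / perimeter**2.
    Equals 1.0 for a perfect disk; decreases as the outline
    becomes more elongated or irregular."""
    return 4 * math.pi * area / perimeter ** 2

# Sanity check: a disk of radius r has area pi*r^2 and perimeter 2*pi*r,
# so its circularity is exactly 1.0 regardless of r.
r = 10.0
print(circularity(math.pi * r ** 2, 2 * math.pi * r))  # 1.0
```

Such shape descriptors, together with textural features, would then feed the machine-learning classifier described above.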
Affiliation(s)
- Sakina M. Mota
- Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
- Robert E. Rogers
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Andrew W. Haskell
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Eoin P. McNeill
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Roland Kaunas
- Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Carl A. Gregory
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Maryellen L. Giger
- University of Chicago, Department of Radiology, Committee on Medical Physics, Chicago, Illinois, United States
- Kristen C. Maitland
- Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
43
Kiwitz K, Schiffer C, Spitzer H, Dickscheid T, Amunts K. Deep learning networks reflect cytoarchitectonic features used in brain mapping. Sci Rep 2020; 10:22039. [PMID: 33328511 PMCID: PMC7744572 DOI: 10.1038/s41598-020-78638-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 11/27/2020] [Indexed: 12/21/2022] Open
Abstract
The distribution of neurons in the cortex (cytoarchitecture) differs between cortical areas and constitutes the basis for structural maps of the human brain. Deep learning approaches provide a promising alternative to overcome throughput limitations of currently used cytoarchitectonic mapping methods, but typically lack insight as to what extent they follow cytoarchitectonic principles. We therefore investigated to what extent the internal structure of deep convolutional neural networks trained for cytoarchitectonic brain mapping reflects traditional cytoarchitectonic features, and compared them to features of the current grey level index (GLI) profile approach. The networks consisted of a 10-block deep convolutional architecture trained to segment the primary and secondary visual cortex. Filter activations of the networks served to analyse resemblances to traditional cytoarchitectonic features and comparisons to the GLI profile approach. Our analysis revealed resemblances to cellular, laminar, and cortical-area-related cytoarchitectonic features. The networks learned filter activations that reflect the distinct cytoarchitecture of the segmented cortical areas with special regard to their laminar organization and compared well to statistical criteria of the GLI profile approach. These results confirm an incorporation of relevant cytoarchitectonic features in the deep convolutional neural networks and mark them as a valid support for high-throughput cytoarchitectonic mapping workflows.
Affiliation(s)
- Kai Kiwitz
- Cécile and Oskar Vogt Institute of Brain Research, Univ. Hospital Düsseldorf, Heinrich-Heine University, Düsseldorf, Germany
- Max Planck School of Cognition, Stephanstrasse 1a, Leipzig, Germany
- Christian Schiffer
- Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich, Jülich, Germany
- Hannah Spitzer
- Institute of Computational Biology, Helmholtz Zentrum, München, Germany
- Timo Dickscheid
- Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich, Jülich, Germany
- Katrin Amunts
- Cécile and Oskar Vogt Institute of Brain Research, Univ. Hospital Düsseldorf, Heinrich-Heine University, Düsseldorf, Germany
- Max Planck School of Cognition, Stephanstrasse 1a, Leipzig, Germany
- Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich, Jülich, Germany
44
Zeng X, Wen L, Xu Y, Ji C. Generating diagnostic report for medical image by high-middle-level visual information incorporation on double deep learning models. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 197:105700. [PMID: 32818914 DOI: 10.1016/j.cmpb.2020.105700] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/02/2019] [Accepted: 08/05/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVES Writing diagnostic reports for medical images is heavy and tedious work. The automatic generation of medical image diagnostic reports can assist doctors in reducing their workload and improve diagnosis efficiency. It is of great significance to introduce image caption algorithms into medical image processing. Existing approaches attempt to generate medical image diagnostic reports using image caption algorithms, but without taking into account the accuracy of the pathological information in the generated reports. METHODS To solve this problem, we propose a Semantic Fusion Network (SFNet) comprising a lesion area detection model and a diagnostic report generation model. The lesion area detection model extracts visual and pathological information from the medical image, and the diagnostic report generation model learns to fuse the two kinds of information to generate reports. Thus, the pathological information in the generated diagnostic reports can be more accurate. RESULTS Experimental results verify the performance of our model (accuracy increases by 1.2% on the Ultrasound Image Dataset and by 2.4% on the Open-i X-ray Image Dataset), compared with a model that uses only visual features to generate diagnostic reports. CONCLUSIONS This work uses computer algorithms to automatically generate more accurate diagnostic reports for medical images, which expands the application of computer-aided diagnosis and promotes the implementation of deep learning in the medical image analysis field.
Affiliation(s)
- Xianhua Zeng
- Chongqing Key Laboratory of Image Cognition, College of Computer Science and Technology, Chongqing University of Posts and Telecommunication, Chongqing 400065, China
- Li Wen
- Chongqing Key Laboratory of Image Cognition, College of Computer Science and Technology, Chongqing University of Posts and Telecommunication, Chongqing 400065, China
- Yang Xu
- Chongqing Key Laboratory of Image Cognition, College of Computer Science and Technology, Chongqing University of Posts and Telecommunication, Chongqing 400065, China
- Conghui Ji
- Chongqing Key Laboratory of Image Cognition, College of Computer Science and Technology, Chongqing University of Posts and Telecommunication, Chongqing 400065, China
45
Chlis NK, Karlas A, Fasoula NA, Kallmayer M, Eckstein HH, Theis FJ, Ntziachristos V, Marr C. A sparse deep learning approach for automatic segmentation of human vasculature in multispectral optoacoustic tomography. PHOTOACOUSTICS 2020; 20:100203. [PMID: 33194545 PMCID: PMC7644749 DOI: 10.1016/j.pacs.2020.100203] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Revised: 07/20/2020] [Accepted: 07/26/2020] [Indexed: 05/02/2023]
Abstract
Multispectral Optoacoustic Tomography (MSOT) resolves oxy- (HbO2) and deoxy-hemoglobin (Hb) to perform vascular imaging. MSOT suffers from gradual signal attenuation with depth due to light-tissue interactions: an effect that hinders the precise manual segmentation of vessels. Furthermore, vascular assessment requires functional tests, which last several minutes and result in recording thousands of images. Here, we introduce a deep learning approach with a sparse-UNET (S-UNET) for automatic vascular segmentation in MSOT images to avoid the rigorous and time-consuming manual segmentation. We evaluated the S-UNET on a test-set of 33 images, achieving a median DICE score of 0.88. Apart from high segmentation performance, our method based its decision on two wavelengths with physical meaning for the task-at-hand: 850 nm (peak absorption of oxy-hemoglobin) and 810 nm (isosbestic point of oxy-and deoxy-hemoglobin). Thus, our approach achieves precise data-driven vascular segmentation for automated vascular assessment and may boost MSOT further towards its clinical translation.
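The median DICE score of 0.88 reported above is the Sørensen-Dice overlap between predicted and reference vessel masks. A minimal pure-Python sketch of the metric (operating on flat binary masks for simplicity; real evaluations compute this per image over 2D arrays):

```python
def dice_coefficient(pred, target):
    """Sorensen-Dice overlap between two binary masks given as
    flat sequences of 0/1: DICE = 2*|A & B| / (|A| + |B|).
    Returns 1.0 by convention when both masks are empty."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * intersection / total if total else 1.0

# One of two predicted foreground pixels matches the single
# reference pixel: DICE = 2*1 / (2+1) = 2/3.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))
```

DICE ranges from 0 (no overlap) to 1 (perfect agreement), so a median of 0.88 indicates close agreement with the manual vessel annotations.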
Affiliation(s)
- Nikolaos-Kosmas Chlis
- Institute of Computational Biology, Helmholtz Center Munich, Neuherberg, Germany
- Institute of Biological and Medical Imaging, Helmholtz Center Munich, Neuherberg, Germany
- Roche Pharma Research and Early Development, Large Molecule Research, Roche Innovation Center Munich, Penzberg 82377, Germany
- Angelos Karlas
- Institute of Biological and Medical Imaging, Helmholtz Center Munich, Neuherberg, Germany
- Chair of Biological Imaging and Center for Translational Cancer Research (TranslaTUM), Munich, Germany
- DZHK (German Centre for Cardiovascular Research), Partner Site Munich Heart Alliance, Munich, Germany
- Clinic for Vascular and Endovascular Surgery, Rechts Der Isar Hospital, Munich, Germany
- Nikolina-Alexia Fasoula
- Institute of Biological and Medical Imaging, Helmholtz Center Munich, Neuherberg, Germany
- Chair of Biological Imaging and Center for Translational Cancer Research (TranslaTUM), Munich, Germany
- Michael Kallmayer
- Clinic for Vascular and Endovascular Surgery, Rechts Der Isar Hospital, Munich, Germany
- Hans-Henning Eckstein
- Clinic for Vascular and Endovascular Surgery, Rechts Der Isar Hospital, Munich, Germany
- Fabian J. Theis
- Institute of Computational Biology, Helmholtz Center Munich, Neuherberg, Germany
- Department of Mathematics, Technical University of Munich, Munich, Germany
- Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Center Munich, Neuherberg, Germany
- Chair of Biological Imaging and Center for Translational Cancer Research (TranslaTUM), Munich, Germany
- DZHK (German Centre for Cardiovascular Research), Partner Site Munich Heart Alliance, Munich, Germany
- Carsten Marr
- Institute of Computational Biology, Helmholtz Center Munich, Neuherberg, Germany
46
Wang X, Tang F, Chen H, Luo L, Tang Z, Ran AR, Cheung CY, Heng PA. UD-MIL: Uncertainty-Driven Deep Multiple Instance Learning for OCT Image Classification. IEEE J Biomed Health Inform 2020; 24:3431-3442. [DOI: 10.1109/jbhi.2020.2983730] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
47
Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends. MATHEMATICS 2020. [DOI: 10.3390/math8111863] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Histopathology refers to the examination by a pathologist of biopsy samples. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types. They provide a detailed view of different types of diseases and their tissue status. These images are an essential resource with which to define biological compositions or analyze cell and tissue structures. This imaging modality is very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated. An extensive review of conventional and deep learning techniques which have been applied in histological image analyses is presented. This review summarizes many current datasets and highlights important challenges and constraints with recent deep learning techniques, alongside possible future research avenues. Despite the progress made in this research area so far, it is still a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.
48
Zhang H, Kalirai H, Acha-Sagredo A, Yang X, Zheng Y, Coupland SE. Piloting a Deep Learning Model for Predicting Nuclear BAP1 Immunohistochemical Expression of Uveal Melanoma from Hematoxylin-and-Eosin Sections. Transl Vis Sci Technol 2020; 9:50. [PMID: 32953248 PMCID: PMC7476670 DOI: 10.1167/tvst.9.2.50] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 07/28/2020] [Indexed: 12/20/2022] Open
Abstract
Background Uveal melanoma (UM) is the most common primary intraocular malignancy in adults. Monosomy 3 and BAP1 mutation are strong prognostic factors predicting metastatic risk in UM. Nuclear BAP1 (nBAP1) expression is a close immunohistochemical surrogate for both genetic alterations. Not all laboratories perform routine BAP1 immunohistochemistry or genetic testing, and rely mainly on clinical information and anatomic/morphologic analyses for UM prognostication. The purpose of our study was to pilot deep learning (DL) techniques to predict nBAP1 expression on whole slide images (WSIs) of hematoxylin and eosin (H&E) stained UM sections. Methods One hundred forty H&E-stained UMs were scanned at 40 × magnification, using commercially available WSI image scanners. The training cohort comprised 66 BAP1+ and 74 BAP1− UM, with known chromosome 3 status and clinical outcomes. Nonoverlapping areas of three different dimensions (512 × 512, 1024 × 1024, and 2048 × 2048 pixels) for comparison were extracted from tumor regions in each WSI, and were resized to 256 × 256 pixels. Deep convolutional neural networks (Resnet18 pre-trained on Imagenet) and auto-encoder-decoders (U-Net) were trained to predict nBAP1 expression of these patches. Trained models were tested on the patches cropped from a test cohort of WSIs of 16 BAP1+ and 28 BAP1− UM cases. Results The trained model with best performance achieved area under the curve values of 0.90 for patches and 0.93 for slides on the test set. Conclusions Our results show the effectiveness of DL for predicting nBAP1 expression in UM on the basis of H&E sections only. Translational Relevance Our pilot demonstrates a high capacity of artificial intelligence-related techniques for automated prediction on the basis of histomorphology, and may be translatable into routine histology laboratories.
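The nonoverlapping patch extraction described above (tiling each tumor region with fixed-size squares before resizing to 256 × 256) amounts to enumerating a grid of top-left corners. A sketch of that bookkeeping step in pure Python (the region dimensions in the example are hypothetical; the paper compares 512, 1024, and 2048 pixel tiles):

```python
def patch_grid(width, height, patch):
    """Top-left (x, y) coordinates of nonoverlapping patch x patch
    tiles that fit entirely inside a width x height region.
    Partial tiles at the right/bottom edges are dropped."""
    return [(x, y)
            for y in range(0, height - patch + 1, patch)
            for x in range(0, width - patch + 1, patch)]

# Hypothetical 1200 x 1100 tumor region tiled with 512 x 512 patches:
# two columns fit in x, two rows fit in y -> 4 tiles.
print(len(patch_grid(1200, 1100, 512)))  # 4
```

Each tile would then be cropped from the whole slide image, resized to 256 × 256, and passed to the classifier; slide-level predictions aggregate the per-patch outputs.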
Affiliation(s)
- Hongrun Zhang
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Helen Kalirai
- Liverpool Ocular Oncology Research Group, Department of Molecular and Clinical Cancer Medicine, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, UK
- Liverpool Clinical Laboratories, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Amelia Acha-Sagredo
- Liverpool Ocular Oncology Research Group, Department of Molecular and Clinical Cancer Medicine, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, UK
- Xiaoyun Yang
- Chinese Academy of Sciences (CAS) IntelliCloud Technology Co., Ltd., Shanghai, China
- Yalin Zheng
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Sarah E Coupland
- Liverpool Ocular Oncology Research Group, Department of Molecular and Clinical Cancer Medicine, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, UK
- Liverpool Clinical Laboratories, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
49
Skin lesion segmentation via generative adversarial networks with dual discriminators. Med Image Anal 2020; 64:101716. [DOI: 10.1016/j.media.2020.101716] [Citation(s) in RCA: 85] [Impact Index Per Article: 21.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2019] [Revised: 03/26/2020] [Accepted: 04/24/2020] [Indexed: 11/21/2022]
50
Deng S, Zhang X, Yan W, Chang EIC, Fan Y, Lai M, Xu Y. Deep learning in digital pathology image analysis: a survey. Front Med 2020; 14:470-487. [PMID: 32728875 DOI: 10.1007/s11684-020-0782-9] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 03/05/2020] [Indexed: 12/21/2022]
Abstract
Deep learning (DL) has achieved state-of-the-art performance in many digital pathology analysis tasks. Traditional methods usually require hand-crafted domain-specific features, and DL methods can learn representations without manually designed features. In terms of feature extraction, DL approaches are less labor intensive compared with conventional machine learning methods. In this paper, we comprehensively summarize recent DL-based image analysis studies in histopathology, including different tasks (e.g., classification, semantic segmentation, detection, and instance segmentation) and various applications (e.g., stain normalization, cell/gland/region structure analysis). DL methods can provide consistent and accurate outcomes. DL is a promising tool to assist pathologists in clinical diagnosis.
Affiliation(s)
- Shujian Deng
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Xin Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Wen Yan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Yubo Fan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou, 310007, China
- Yan Xu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Microsoft Research Asia, Beijing, 100080, China