1
Li Y, Lac L, Liu Q, Hu P. ST-CellSeg: Cell segmentation for imaging-based spatial transcriptomics using multi-scale manifold learning. PLoS Comput Biol 2024; 20:e1012254. PMID: 38935799; PMCID: PMC11236102; DOI: 10.1371/journal.pcbi.1012254. Received: 08/20/2023; Revised: 07/10/2024; Accepted: 06/16/2024. Open access.
Abstract
Spatial transcriptomics has gained popularity over the past decade due to its ability to measure transcriptome data while preserving spatial information. Cell segmentation is a crucial step in spatial transcriptomic analysis because it avoids unpredictable tissue-disentanglement steps. Although high-quality cell segmentation algorithms can aid the extraction of valuable data, traditional methods are frequently non-spatial, fail to use spatial information efficiently, and perform poorly when segmenting cells of varying shapes. In this study, we propose ST-CellSeg, an image-based machine learning method for spatial transcriptomics that uses manifold learning for cell segmentation and is novel in its consideration of multi-scale information. We first construct a fully connected graph that acts as a spatial transcriptomic manifold. Using multi-scale data, we then determine the low-dimensional spatial probability distribution representation for cell segmentation. Measured by the adjusted Rand index (ARI), normalized mutual information (NMI), and Silhouette coefficient (SC), the proposed algorithm significantly outperforms baseline models on the selected datasets and is computationally efficient.
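The three performance measures named in this abstract (ARI, NMI, SC) can be computed with scikit-learn. The sketch below is illustrative only: the coordinates and labels are synthetic stand-ins, not the authors' pipeline.

```python
# Illustrative sketch only: scoring a candidate cell segmentation with the
# three metrics used in the paper. All data below are synthetic stand-ins.
import numpy as np
from sklearn.metrics import (adjusted_rand_score,
                             normalized_mutual_info_score,
                             silhouette_score)

def evaluate_segmentation(coords, predicted, reference):
    """coords: (n, 2) transcript positions; predicted/reference: cell labels."""
    return {
        "ARI": adjusted_rand_score(reference, predicted),
        "NMI": normalized_mutual_info_score(reference, predicted),
        # The Silhouette coefficient needs only labels and coordinates.
        "SC": silhouette_score(coords, predicted),
    }

rng = np.random.default_rng(0)
# Two well-separated synthetic "cells" of 50 transcripts each.
coords = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
reference = np.repeat([0, 1], 50)
scores = evaluate_segmentation(coords, reference, reference)
```

With a perfect prediction, ARI and NMI reach 1.0, while SC depends only on how compact and separated the predicted cells are in space.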
Affiliation(s)
- Youcheng Li
- Department of Biochemistry, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Department of Computer Science, Western University, London, Ontario, Canada
- Department of Computer Science, University of Manitoba, Winnipeg, Manitoba, Canada
- Leann Lac
- Department of Computer Science, University of Manitoba, Winnipeg, Manitoba, Canada
- Department of Statistics, University of Manitoba, Winnipeg, Manitoba, Canada
- Qian Liu
- Department of Applied Computer Science, University of Winnipeg, Winnipeg, Manitoba, Canada
- Pingzhao Hu
- Department of Biochemistry, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Department of Computer Science, Western University, London, Ontario, Canada
- Department of Computer Science, University of Manitoba, Winnipeg, Manitoba, Canada
- Department of Epidemiology and Biostatistics, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Department of Oncology, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- The Children's Health Research Institute, Lawson Health Research Institute, London, Ontario, Canada
2
Rashad M, Afifi I, Abdelfatah M. RbQE: An Efficient Method for Content-Based Medical Image Retrieval Based on Query Expansion. J Digit Imaging 2023; 36:1248-1261. PMID: 36702987; PMCID: PMC10287886; DOI: 10.1007/s10278-022-00769-7. Received: 06/26/2022; Revised: 12/18/2022; Accepted: 12/19/2022. Open access.
Abstract
Systems for content-based retrieval and management of medical images are becoming more important as medical imaging technology advances and medical image databases grow. Beyond diagnosis, such systems can also deepen understanding of the causes and treatments of different diseases. Achieving these purposes requires an efficient and accurate content-based medical image retrieval (CBMIR) method. This paper proposes RbQE, an efficient method for the retrieval of computed tomography (CT) and magnetic resonance (MR) images. RbQE is based on expanding the query's features, exploiting the pre-trained models AlexNet and VGG-19 to extract compact, deep, high-level features from medical images. RbQE has two search stages: a rapid search and a final search. In the rapid search, the original query is expanded by retrieving the top-ranked images from each class; the mean of their deep features reformulates the query, producing a new query for each class. In the final search, the new query most similar to the original query is used for retrieval from the database. The performance of the proposed method has been compared to state-of-the-art methods on four publicly available standard databases: TCIA-CT, EXACT09-CT, NEMA-CT, and OASIS-MRI. Experimental results show that the proposed method exceeds the compared methods by 0.84%, 4.86%, 1.24%, and 14.34% in average retrieval precision (ARP) on the TCIA-CT, EXACT09-CT, NEMA-CT, and OASIS-MRI databases, respectively.
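The two-stage search described in this abstract can be sketched in a few lines of numpy. This is a hedged illustration, not the authors' implementation: the feature shapes, the Euclidean distance, and the function name are all assumptions.

```python
# Hedged sketch of the rapid-search / final-search idea: average the deep
# features of the top-k images per class into an expanded query, then
# retrieve with the expanded query closest to the original one.
import numpy as np

def rbqe_retrieve(query, db_feats, db_labels, k=3, n_results=5):
    expanded = {}
    for c in np.unique(db_labels):
        feats = db_feats[db_labels == c]
        d = np.linalg.norm(feats - query, axis=1)
        # Rapid search: mean of the k nearest images of this class
        # becomes the reformulated query for the class.
        expanded[c] = feats[np.argsort(d)[:k]].mean(axis=0)
    # Final search: pick the expanded query most similar to the original.
    best = min(expanded, key=lambda c: np.linalg.norm(expanded[c] - query))
    d_all = np.linalg.norm(db_feats - expanded[best], axis=1)
    return best, np.argsort(d_all)[:n_results]
```

Averaging several near neighbors smooths out noise in any single query vector, which is the usual motivation for query expansion in retrieval systems.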
Affiliation(s)
- Metwally Rashad
- Department of Computer Science, Faculty of Computers & Artificial Intelligence, Benha University, Benha, Egypt
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Gamasa, Egypt
- Ibrahem Afifi
- Department of Information System, Faculty of Computers & Artificial Intelligence, Benha University, Benha, Egypt
- Mohammed Abdelfatah
- Department of Information System, Faculty of Computers & Artificial Intelligence, Benha University, Benha, Egypt
3
Rout NK, Ahirwal MK, Atulkar M. Content-Based Medical Image Retrieval System for Skin Melanoma Diagnosis Based on Optimized Pair-Wise Comparison Approach. J Digit Imaging 2023; 36:45-58. PMID: 36253580; PMCID: PMC9984623; DOI: 10.1007/s10278-022-00710-y. Received: 07/16/2021; Revised: 08/20/2022; Accepted: 09/27/2022. Open access.
Abstract
Medical image analysis for accurate disease diagnosis has become a very challenging task; an improper diagnosis may cause required medical treatment to be skipped, and suspected lesions can be missed by the physician's eye. This problem can be better addressed by investigating similar case studies present in a healthcare database. In this context, this paper presents an assistive system that helps dermatologists accurately identify 23 different kinds of melanoma. For this, 2300 dermoscopic images were used to train the skin-melanoma similar-image search system. The proposed system extracts features by assigning dynamic weights to low-level features based on the individual characteristics of the searched image. Optimal weights are obtained by the newly proposed optimized pair-wise comparison (OPWC) approach, whose uniqueness is that it applies dynamic, per-query weights to the searched image's features instead of static weights. The approach is supported by the analytic hierarchy process (AHP) and meta-heuristic optimization algorithms such as particle swarm optimization (PSO), JAYA, the genetic algorithm (GA), and grey wolf optimization (GWO). Tested on images of 23 melanoma classes, the approach achieved significant precision and recall; it can therefore be used as an expert assistive system to help dermatologists and physicians accurately identify different types of melanoma.
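The core idea of per-query feature weighting can be illustrated as follows. This is a toy stand-in, not OPWC: a plain random search plays the role of the paper's meta-heuristics (PSO, JAYA, GA, GWO), and the fitness function is an assumption chosen for illustration.

```python
# Toy illustration of query-specific feature weighting: search for weights
# that push irrelevant images far from the query while keeping relevant
# ones close. Random search stands in for the paper's meta-heuristics.
import numpy as np

def weighted_dist(q, X, w):
    """Weighted Euclidean distance from query q to each row of X."""
    return np.sqrt(((X - q) ** 2 * w).sum(axis=1))

def fit_query_weights(q, X, relevant, n_iter=200, seed=0):
    """relevant: boolean mask of rows of X known to match the query.
    Fitness (assumed, not the paper's): mean distance of irrelevant
    images minus mean distance of relevant images."""
    rng = np.random.default_rng(seed)
    best_w, best_fit = np.ones(X.shape[1]) / X.shape[1], -np.inf
    for _ in range(n_iter):
        w = rng.random(X.shape[1])
        w /= w.sum()                      # weights on the simplex
        d = weighted_dist(q, X, w)
        fit = d[~relevant].mean() - d[relevant].mean()
        if fit > best_fit:
            best_w, best_fit = w, fit
    return best_w
```

Features that actually discriminate relevant from irrelevant images end up with larger weights, which is the effect the dynamic-weighting scheme aims for.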
Affiliation(s)
- Mitul Kumar Ahirwal
- Department of Computer Science and Engineering, MANIT, Bhopal, M.P. 462003 India
4
Shakarami A, Tarrah H. An efficient image descriptor for image classification and CBIR. OPTIK 2020; 214:164833. PMID: 32372771; PMCID: PMC7198219; DOI: 10.1016/j.ijleo.2020.164833. Received: 03/02/2020; Accepted: 04/28/2020.
Abstract
Pattern recognition and feature extraction have always been important for improving the performance of image classification and content-based image retrieval (CBIR). Machine learning and deep learning algorithms are now widely used to achieve these goals. This research proposes an efficient image-description method built from machine learning and deep learning components: a combination of an improved AlexNet convolutional neural network (CNN) with Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) descriptors, plus Principal Component Analysis (PCA) for dimensionality reduction. Experimental results on the Corel-1000, OT, and FP datasets demonstrate the superiority of the proposed method over existing methods, improving accuracy and mean average precision (mAP) while reducing computational complexity.
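The fuse-then-reduce pipeline in this abstract can be sketched with numpy alone. Random placeholder vectors stand in for the actual AlexNet activations and HOG/LBP histograms (the dimensions shown are assumptions): the per-image descriptor blocks are concatenated, then projected with PCA computed via SVD.

```python
# Minimal fusion-and-reduction sketch. The three inputs are placeholder
# vectors standing in for CNN, HOG, and LBP descriptors respectively.
import numpy as np

def fuse(cnn_feats, hog_feats, lbp_feats):
    """Concatenate the three descriptor blocks per image."""
    return np.hstack([cnn_feats, hog_feats, lbp_feats])

def pca_reduce(X, n_components):
    """PCA via SVD of the centered data matrix; keeps the top components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
n = 20
fused = fuse(rng.normal(size=(n, 64)),   # stand-in "CNN" features
             rng.normal(size=(n, 36)),   # stand-in "HOG" features
             rng.normal(size=(n, 16)))   # stand-in "LBP" features
reduced = pca_reduce(fused, 8)
```

The reduced vectors keep the directions of greatest variance, which is what makes the fused descriptor compact without discarding most of its discriminative information.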
Affiliation(s)
- Ashkan Shakarami
- Department of Computer Engineering, Afarinesh Institute of Higher Education, Boroujerd, Iran
- Hadis Tarrah
- Department of Electrical, Computer and Biomedical Engineering, Islamic Azad University, Qazvin, Iran