1
Ma Z, Li C, Du T, Zhang L, Tang D, Ma D, Huang S, Liu Y, Sun Y, Chen Z, Yuan J, Nie Q, Grzegorzek M, Sun H. AATCT-IDS: A benchmark Abdominal Adipose Tissue CT Image Dataset for image denoising, semantic segmentation, and radiomics evaluation. Comput Biol Med 2024; 177:108628. [PMID: 38810476 DOI: 10.1016/j.compbiomed.2024.108628] [Received: 01/22/2024] [Revised: 04/14/2024] [Accepted: 05/18/2024] [Indexed: 05/31/2024]
Abstract
BACKGROUND AND OBJECTIVE The metabolic syndrome induced by obesity is closely associated with cardiovascular disease, and its prevalence is increasing globally year by year. Obesity is a risk marker for detecting this disease. However, current research on computer-aided detection of adipose distribution is hampered by the lack of open-source, large abdominal adipose datasets. METHODS In this study, a benchmark Abdominal Adipose Tissue CT Image Dataset (AATCT-IDS) containing 300 subjects is prepared and published. AATCT-IDS provides 13,732 raw CT slices, and the researchers individually annotate the subcutaneous and visceral adipose tissue regions of 3213 of those slices that have the same slice distance to validate denoising methods, train semantic segmentation models, and study radiomics. For different tasks, this paper compares and analyzes the performance of various methods on AATCT-IDS by combining the visualization results and evaluation data, thereby verifying the research potential of this dataset in the above three types of tasks. RESULTS In the comparative study of image denoising, algorithms using a smoothing strategy suppress mixed noise at the expense of image details and obtain better evaluation scores. Methods such as BM3D preserve the original image structure better, although their evaluation scores are slightly lower. The results show significant differences among them. In the comparative study of semantic segmentation of abdominal adipose tissue, the segmentation results of each model show different structural characteristics. Among them, BiSeNet obtains segmentation results only slightly inferior to U-Net with the shortest training time and effectively separates small, isolated adipose tissue. In addition, the radiomics study based on AATCT-IDS reveals three adipose distributions in the subject population. CONCLUSION AATCT-IDS contains the ground truth of adipose tissue regions in abdominal CT slices.
This open-source dataset can attract researchers to explore the multi-dimensional characteristics of abdominal adipose tissue and thus help physicians and patients in clinical practice. AATCT-IDS is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/AATTCT-IDS/23807256.
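The denoising trade-off described in the results above (smoothing strategies score higher on reference metrics, while BM3D-style methods preserve structure) is typically quantified with measures such as PSNR. The following is an illustrative sketch only, not the paper's evaluation pipeline: a synthetic gradient stands in for a CT slice, and a plain box filter stands in for a smoothing-based denoiser.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def box_smooth(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple k x k box filter: a stand-in for a smoothing-strategy denoiser."""
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    out = np.zeros_like(image, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))            # synthetic "slice"
noisy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255)
denoised = box_smooth(noisy)

print(f"noisy PSNR:    {psnr(clean, noisy):.2f} dB")
print(f"denoised PSNR: {psnr(clean, denoised):.2f} dB")      # smoothing raises PSNR
```

This mirrors the abstract's observation in miniature: averaging raises the metric even though it also blurs fine detail, which a score like PSNR does not directly penalize.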
Affiliation(s)
- Zhiyu Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China.
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Le Zhang
- Department of Radiology, Qingdao Municipal Hospital, Qingdao University, Qingdao, China
- Dechao Tang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Deguo Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Shanchuan Huang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Yan Liu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Yihao Sun
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Zhihao Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Jin Yuan
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Qianqing Nie
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
- Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang 110122, China.
2
Wang L, Cheng Y, Meftaul IM, Luo F, Kabir MA, Doyle R, Lin Z, Naidu R. Advancing Soil Health: Challenges and Opportunities in Integrating Digital Imaging, Spectroscopy, and Machine Learning for Bioindicator Analysis. Anal Chem 2024; 96:8109-8123. [PMID: 38490962 DOI: 10.1021/acs.analchem.3c05311] [Indexed: 03/17/2024]
Affiliation(s)
- Liang Wang
- Global Centre for Environmental Remediation, College of Engineering, Science and Environment, University of Newcastle, Callaghan, New South Wales 2308, Australia
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- Ying Cheng
- Global Centre for Environmental Remediation, College of Engineering, Science and Environment, University of Newcastle, Callaghan, New South Wales 2308, Australia
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- Islam Md Meftaul
- Global Centre for Environmental Remediation, College of Engineering, Science and Environment, University of Newcastle, Callaghan, New South Wales 2308, Australia
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- Fang Luo
- Ministry of Education Key Laboratory for Analytical Science of Food Safety and Biology, Fujian Provincial Key Laboratory of Analysis and Detection for Food Safety, Fuzhou University, Fuzhou, Fujian 350108, China
- Muhammad Ashad Kabir
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- School of Computing, Mathematics and Engineering, Charles Sturt University, Bathurst, New South Wales 2795, Australia
- Richard Doyle
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- Tasmanian Institute of Agriculture (TIA), University of Tasmania, Launceston, Tasmania 7250, Australia
- Zhenyu Lin
- Ministry of Education Key Laboratory for Analytical Science of Food Safety and Biology, Fujian Provincial Key Laboratory of Analysis and Detection for Food Safety, Fuzhou University, Fuzhou, Fujian 350108, China
- Ravi Naidu
- Global Centre for Environmental Remediation, College of Engineering, Science and Environment, University of Newcastle, Callaghan, New South Wales 2308, Australia
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
3
Ma D, Li C, Du T, Qiao L, Tang D, Ma Z, Shi L, Lu G, Meng Q, Chen Z, Grzegorzek M, Sun H. PHE-SICH-CT-IDS: A benchmark CT image dataset for evaluation semantic segmentation, object detection and radiomic feature extraction of perihematomal edema in spontaneous intracerebral hemorrhage. Comput Biol Med 2024; 173:108342. [PMID: 38522249 DOI: 10.1016/j.compbiomed.2024.108342] [Received: 11/13/2023] [Revised: 03/05/2024] [Accepted: 03/17/2024] [Indexed: 03/26/2024]
Abstract
BACKGROUND AND OBJECTIVE Intracerebral hemorrhage is one of the diseases with the highest mortality and poorest prognosis worldwide. Spontaneous intracerebral hemorrhage (SICH) typically presents acutely, so prompt and expedited radiological examination is crucial for diagnosis, localization, and quantification of the hemorrhage. Early detection and accurate segmentation of perihematomal edema (PHE) play a critical role in guiding appropriate clinical intervention and enhancing patient prognosis. However, the progress and assessment of computer-aided diagnostic methods for PHE segmentation and detection face challenges due to the scarcity of publicly accessible brain CT image datasets. METHODS This study establishes a publicly available CT dataset named PHE-SICH-CT-IDS for perihematomal edema in spontaneous intracerebral hemorrhage. The dataset comprises 120 brain CT scans and 7,022 CT images, along with the corresponding medical information of the patients. To demonstrate its effectiveness, classical algorithms for semantic segmentation, object detection, and radiomic feature extraction are evaluated. The experimental results confirm the suitability of PHE-SICH-CT-IDS for assessing the performance of segmentation, detection and radiomic feature extraction methods. RESULTS This study conducts numerous experiments using classical machine learning and deep learning methods, demonstrating the differences among various segmentation and detection methods on PHE-SICH-CT-IDS. The highest precision achieved in semantic segmentation is 76.31%, while object detection attains a maximum precision of 97.62%. The experimental results on radiomic feature extraction and analysis prove the suitability of PHE-SICH-CT-IDS for evaluating image features and highlight the predictive value of these features for the prognosis of SICH patients.
CONCLUSION To the best of our knowledge, this is the first publicly available dataset for PHE in SICH, comprising various data formats suitable for applications across diverse medical scenarios. We believe that PHE-SICH-CT-IDS will attract researchers to explore novel algorithms, providing valuable support for clinicians and patients in the clinical setting. PHE-SICH-CT-IDS is freely published for non-commercial purposes at https://figshare.com/articles/dataset/PHE-SICH-CT-IDS/23957937.
Affiliation(s)
- Deguo Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China.
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Lin Qiao
- Shengjing Hospital, China Medical University, Shenyang, China
- Dechao Tang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Zhiyu Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Guotao Lu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Qingtao Meng
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Zhihao Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
- Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang, China.
4
Mehmood A, Ko J, Kim H, Kim J. Optimizing Image Enhancement: Feature Engineering for Improved Classification in AI-Assisted Artificial Retinas. Sensors (Basel) 2024; 24:2678. [PMID: 38732784 PMCID: PMC11085662 DOI: 10.3390/s24092678] [Received: 12/07/2023] [Revised: 04/16/2024] [Accepted: 04/18/2024] [Indexed: 05/13/2024]
Abstract
Artificial retinas have revolutionized the lives of many blind people by enabling them to perceive vision via an implanted chip. Despite significant advancements, some limitations cannot be ignored. Presenting all objects captured in a scene makes their identification difficult, and addressing this limitation is necessary because the artificial retina can utilize only a very limited number of pixels to represent vision information. In a multi-object scenario, this problem can be mitigated by enhancing images such that only the major objects are shown. Although simple techniques like edge detection are used, they fall short of representing identifiable objects in complex scenarios, suggesting the idea of integrating only primary object edges. To support this idea, the proposed classification model aims at identifying the primary objects based on a suggested set of selective features. The classification model can then be equipped in the artificial retina system for filtering multiple primary objects to enhance vision. The ability to handle multiple objects enables the system to cope with complex real-world scenarios. The proposed classification model is based on a multi-label deep neural network, specifically designed to leverage the selective feature set. Initially, the enhanced images proposed in this research are compared with ones that utilize an edge detection technique for single, dual, and multi-object images. These enhancements are also verified through an intensity profile analysis. Subsequently, the proposed classification model's performance is evaluated to show the significance of utilizing the suggested features. This includes evaluating the model's ability to correctly classify the top five, four, three, two, and one object(s), with respective accuracies of up to 84.8%, 85.2%, 86.8%, 91.8%, and 96.4%.
Several comparisons, such as training/validation loss and accuracy, precision, recall, specificity, and area under the curve, indicate reliable results. Based on the overall evaluation of this study, it is concluded that using the suggested set of selective features not only improves the classification model's performance but also aligns with the specific problem of correctly identifying objects in multi-object scenarios. Therefore, the proposed classification model designed on the basis of selective features is considered a very useful tool in supporting the idea of optimizing image enhancement.
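One plausible formalization of the top-k accuracies reported above, in a multi-label setting, is the fraction of samples whose k highest-scoring classes are all true labels. The NumPy sketch below is illustrative; the paper's exact evaluation protocol may differ.

```python
import numpy as np

def topk_all_correct(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Fraction of samples whose k highest-scoring classes are all true labels.

    scores: (n_samples, n_classes) real-valued predictions.
    labels: (n_samples, n_classes) binary ground-truth multi-label matrix.
    """
    topk = np.argsort(scores, axis=1)[:, -k:]           # indices of k best scores
    hits = np.take_along_axis(labels, topk, axis=1)     # are those classes true?
    return float(np.mean(hits.all(axis=1)))

scores = np.array([[0.9, 0.8, 0.1, 0.2],
                   [0.7, 0.2, 0.6, 0.1]])
labels = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 1]])   # sample 2's 2nd-best class (idx 2) is false
print(topk_all_correct(scores, labels, k=2))  # 0.5
```

As in the abstract, accuracy under this measure tends to rise as k shrinks, because fewer predicted objects must simultaneously be correct.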
Affiliation(s)
- Asif Mehmood
- Department of Biomedical Engineering, College of IT Convergence, Gachon University, 1342 Seongnamdaero, Sujeong-gu, Seongnam-si 13120, Republic of Korea
- Jungbeom Ko
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon 21936, Republic of Korea
- Hyunchul Kim
- School of Information, University of California, 102 South Hall 4600, Berkeley, CA 94720, USA
- Jungsuk Kim
- Department of Biomedical Engineering, College of IT Convergence, Gachon University, 1342 Seongnamdaero, Sujeong-gu, Seongnam-si 13120, Republic of Korea
- Research and Development Laboratory, Cellico Company, Seongnam-si 13449, Republic of Korea
5
Bommanapally V, Abeyrathna D, Chundi P, Subramaniam M. Super resolution-based methodology for self-supervised segmentation of microscopy images. Front Microbiol 2024; 15:1255850. [PMID: 38533330 PMCID: PMC10963421 DOI: 10.3389/fmicb.2024.1255850] [Received: 07/09/2023] [Accepted: 02/15/2024] [Indexed: 03/28/2024]
Abstract
Data-driven Artificial Intelligence (AI)/Machine Learning (ML) image analysis approaches have gained a lot of momentum in analyzing microscopy images in bioengineering, biotechnology, and medicine. The success of these approaches crucially relies on the availability of high-quality microscopy images, which is often a challenge due to the diverse experimental conditions and modes under which these images are obtained. In this study, we propose the use of recent ML-based image super-resolution (SR) techniques for improving the image quality of microscopy images, incorporate them into multiple ML-based image analysis tasks, and describe a comprehensive study investigating the impact of SR techniques on the segmentation of microscopy images. The impacts of four Generative Adversarial Network (GAN)- and transformer-based SR techniques on microscopy image quality are measured using three well-established quality metrics. These SR techniques are incorporated into multiple deep network pipelines using supervised, contrastive, and non-contrastive self-supervised methods to semantically segment microscopy images from multiple datasets. Our results show that the image quality of microscopy images has a direct influence on ML model performance and that both supervised and self-supervised network pipelines using SR images perform better by 2%-6% in comparison to baselines not using SR. Based on our experiments, we also establish that the image quality improvement threshold range [20-64] for the complemented Perception-based Image Quality Evaluator (PIQE) metric can be used as a pre-condition by domain experts to incorporate SR techniques to significantly improve segmentation performance. A plug-and-play software platform developed to integrate SR techniques with various deep networks using supervised and self-supervised learning methods is also presented.
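One reading of the PIQE pre-condition above is as a simple gate: apply SR only when the complemented-PIQE improvement falls inside the reported [20-64] window. The sketch below assumes complemented-PIQE scores are already computed elsewhere (it does not implement PIQE itself), and the gating-on-improvement interpretation is our assumption, not a statement of the paper's exact rule.

```python
def sr_recommended(cpiqe_before: float, cpiqe_after: float,
                   lo: float = 20.0, hi: float = 64.0) -> bool:
    """Gate SR preprocessing on the complemented-PIQE improvement.

    cpiqe_before / cpiqe_after: complemented PIQE (higher = better quality)
    of an image before and after super-resolution. Returns True when the
    improvement falls inside the [lo, hi] window the study associates with
    significant downstream segmentation gains.
    """
    improvement = cpiqe_after - cpiqe_before
    return lo <= improvement <= hi

# Hypothetical scores for two images (values invented for illustration):
print(sr_recommended(30.0, 75.0))  # +45 -> inside [20, 64] window -> True
print(sr_recommended(30.0, 40.0))  # +10 -> below the window       -> False
```

The design point is that SR is expensive; a cheap quality-delta check lets domain experts decide per dataset whether the SR stage is worth adding to the pipeline.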
Affiliation(s)
- Vidya Bommanapally
- Department of Computer Science, University of Nebraska, Omaha, NE, United States
6
Tang D, Li C, Du T, Jiang H, Ma D, Ma Z, Grzegorzek M, Jiang T, Sun H. ECPC-IDS: A benchmark endometrial cancer PET/CT image dataset for evaluation of semantic segmentation and detection of hypermetabolic regions. Comput Biol Med 2024; 171:108217. [PMID: 38430743 DOI: 10.1016/j.compbiomed.2024.108217] [Received: 12/07/2023] [Revised: 02/19/2024] [Accepted: 02/25/2024] [Indexed: 03/05/2024]
Abstract
BACKGROUND Endometrial cancer is one of the most common tumors in the female reproductive system and is the third most common gynecological malignancy that causes death, after ovarian and cervical cancer. Early diagnosis can significantly improve the 5-year survival rate of patients. With the development of artificial intelligence, computer-assisted diagnosis plays an increasingly important role in improving the accuracy and objectivity of diagnosis and reducing the workload of doctors. However, the absence of publicly available image datasets restricts the application of computer-assisted diagnostic techniques. METHODS In this paper, a publicly available Endometrial Cancer PET/CT Image Dataset for Evaluation of Semantic Segmentation and Detection of Hypermetabolic Regions (ECPC-IDS) is published. Specifically, the segmentation section includes PET and CT images, with 7159 images in multiple formats in total. In order to prove the effectiveness of segmentation on ECPC-IDS, six deep learning semantic segmentation methods are selected to test the image segmentation task. The object detection section also includes PET and CT images, with 3579 images and XML files with annotation information in total. Eight deep learning methods are selected for experiments on the detection task. RESULTS This study is conducted using deep learning-based semantic segmentation and object detection methods to demonstrate the distinguishability of ECPC-IDS. Considered separately, the minimum and maximum values of Dice on PET images are 0.546 and 0.743, respectively, and the minimum and maximum values of Dice on CT images are 0.012 and 0.510, respectively. The object detection section's maximum mAP values on PET and CT images are 0.993 and 0.986, respectively. CONCLUSION As far as we know, this is the first publicly available dataset of endometrial cancer with a large number of multi-modality images.
ECPC-IDS can assist researchers in exploring new algorithms to enhance computer-assisted diagnosis, benefiting both clinical doctors and patients. ECPC-IDS is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/ECPC-IDS/23808258.
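The Dice values quoted in the results above are the standard overlap measure between a predicted mask and the ground truth, 2|A∩B|/(|A|+|B|). A minimal NumPy version (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True               # 16-pixel ground-truth region
pred = np.zeros_like(truth)
pred[3:7, 3:7] = True                # shifted prediction; 9 pixels overlap
print(round(dice(pred, truth), 3))   # 2*9 / (16 + 16) = 0.562
```

A Dice of 1.0 means perfect overlap and 0.0 means none, which puts the reported PET (0.546-0.743) and CT (0.012-0.510) ranges in perspective.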
Affiliation(s)
- Dechao Tang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China.
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang, China
- Deguo Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Zhiyu Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Poland
- Tao Jiang
- Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, China
- Hongzan Sun
- Department of Radiology, Shengjing Hospital, China Medical University, Shenyang, China.
7
Nie Q, Li C, Yang J, Yao Y, Sun H, Jiang T, Grzegorzek M, Chen A, Chen H, Hu W, Li R, Zhang J, Wang D. OII-DS: A benchmark Oral Implant Image Dataset for object detection and image classification evaluation. Comput Biol Med 2023; 167:107620. [PMID: 37922604 DOI: 10.1016/j.compbiomed.2023.107620] [Received: 07/27/2023] [Revised: 10/06/2023] [Accepted: 10/23/2023] [Indexed: 11/07/2023]
Abstract
In recent years, there has been a growing reliance on image analysis methods to bolster dentistry practices, such as image classification, segmentation, and object detection. However, the availability of related benchmark datasets remains limited. Hence, we spent six years preparing and testing a benchmark Oral Implant Image Dataset (OII-DS) to support work in this research domain. OII-DS is a benchmark oral image dataset consisting of 3834 oral CT images and 15,240 oral implant images. It serves the purposes of object detection and image classification. To demonstrate the validity of OII-DS, the most representative algorithms and metrics are selected for testing and evaluating each function. For object detection, five object detection algorithms are adopted for testing, and four evaluation criteria are used to assess the detection of each of the five objects. Additionally, mean average precision serves as the evaluation metric for multi-object detection. For image classification, 13 classifiers are used for testing and evaluating each of the five categories against four evaluation criteria. Experimental results affirm the high quality of the data in OII-DS, rendering it suitable for evaluating object detection and image classification methods. Furthermore, OII-DS is openly available for non-commercial purposes at: https://doi.org/10.6084/m9.figshare.22608790.
Affiliation(s)
- Qianqing Nie
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China.
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, USA
- Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Ao Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Haoyuan Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Rui Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Jiawei Zhang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Danning Wang
- Center of Implant Dentistry, School and Hospital of Stomatology, China Medical University, Liaoning Provincial Key Laboratory of Oral Diseases, Shenyang, China.
8
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. [PMID: 37696178 DOI: 10.1016/j.compbiomed.2023.107388] [Received: 06/08/2023] [Revised: 08/06/2023] [Accepted: 08/25/2023] [Indexed: 09/13/2023]
Abstract
Colorectal Cancer (CRC) is currently one of the most common and deadly cancers. It is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Histopathological images contain rich phenotypic information and play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and efficiency of intestinal histopathology image analysis, Computer-aided Diagnosis (CAD) methods based on machine learning (ML) are widely applied in this field. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies, with background knowledge of intestinal histopathology relevant to medicine. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. Then, we provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among others, for histopathological images of the intestine. Finally, the existing methods are analyzed, and the application prospects of these methods in this field are discussed.
Affiliation(s)
- Yujie Jing
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China.
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Minghe Gao
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China.
9
Yao H, Zhang X. A comprehensive review for machine learning based human papillomavirus detection in forensic identification with multiple medical samples. Front Microbiol 2023; 14:1232295. [PMID: 37529327 PMCID: PMC10387549 DOI: 10.3389/fmicb.2023.1232295] [Received: 05/31/2023] [Accepted: 06/30/2023] [Indexed: 08/03/2023]
Abstract
Human papillomavirus (HPV) is a sexually transmitted virus. Cervical cancer is among the most common cancers, and almost all patients have concurrent HPV infection. In addition, the occurrence of several other cancers is also associated with HPV infection. HPV vaccination has gained widespread acceptance in recent years with increasing public health awareness. In this context, HPV testing must not only be sensitive and specific but also able to trace the source of HPV infection. Through machine learning and deep learning, information from medical examinations can be used more effectively. In this review, we discuss recent advances in HPV testing in combination with machine learning and deep learning.
Affiliation(s)
- Huanchun Yao
  Department of Cancer, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Xinglong Zhang
  Department of Hematology, The Fourth Affiliated Hospital of China Medical University, Shenyang, Liaoning, China
10. Kandel S, Su S, Hall RM, Tipper JL. An automated system for polymer wear debris analysis in total disc arthroplasty using convolution neural network. Front Bioeng Biotechnol 2023; 11:1108021. [PMID: 37362220] [PMCID: PMC10285289] [DOI: 10.3389/fbioe.2023.1108021]
Abstract
Introduction: Polymer wear debris is one of the major concerns in total joint replacements because wear-induced biological reactions can lead to osteolysis and joint failure. These reactions depend on the wear volume, the shape and size of the wear debris, and their volumetric concentration. The study of wear particles is crucial for analysing the failure modes of total joint replacements so that improved designs and materials can be introduced in the next generation of devices. Existing methods of wear debris analysis follow a traditional approach of computer-aided manual identification and segmentation, which suffers from significant manual effort, time consumption, low accuracy due to user errors and biases, and an overall lack of insight into the wear regime. Methods: This study proposes an automatic particle segmentation algorithm using adaptive thresholding, followed by classification with a convolutional neural network (CNN), to classify ultra-high molecular weight polyethylene wear debris generated from total disc replacements tested in a spine simulator. The CNN takes object pixels as numeric input and uses convolution operations to create feature maps, which are used to classify objects. Results: Classification accuracies of up to 96.49% were achieved for the identification of wear particles. Particle characteristics such as shape, size, and area were estimated to generate size and volumetric distribution graphs. Discussion: The use of computer algorithms and CNNs facilitates the analysis of a wider range of wear debris with complex characteristics using significantly fewer resources, yielding robust size and volume distribution graphs for estimating the osteolytic potential of devices via functional biological activity estimates.
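The pipeline pairs a classical segmentation step with a learned classifier. As a rough illustration of the first step only, here is a minimal mean-based adaptive threshold in NumPy; this is a sketch, not the authors' implementation, and the block size, offset, and the assumption of dark debris on a bright background are all illustrative:

```python
import numpy as np

def adaptive_threshold(img, block=15, c=5.0):
    """Mark a pixel as particle (1) when it is darker than its local
    neighbourhood mean by more than `c` grey levels (dark debris on a
    bright background assumed). Local means come from an integral image."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    # integral image with a leading row/column of zeros
    ii = np.zeros((h + 2 * pad + 1, w + 2 * pad + 1))
    ii[1:, 1:] = padded.cumsum(0).cumsum(1)
    ys, xs = np.arange(h), np.arange(w)
    y0, y1 = ys[:, None], ys[:, None] + block
    x0, x1 = xs[None, :], xs[None, :] + block
    # sum over each block x block window via four integral-image lookups
    local_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    local_mean = local_sum / (block * block)
    return (img < local_mean - c).astype(np.uint8)
```

The integral image makes the local mean O(1) per pixel regardless of block size, which matters when processing many high-resolution particle micrographs.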
Affiliation(s)
- Sushil Kandel
  Faculty of Engineering and IT, University of Technology, Sydney, NSW, Australia
- Steven Su
  Faculty of Engineering and IT, University of Technology, Sydney, NSW, Australia
  College of Artificial Intelligence and Big Data for Medical Science, Shandong First Medical University & Shandong Academy of Medical Sciences, Taian, China
- Richard M. Hall
  School of Mechanical Engineering, University of Leeds, Leeds, United Kingdom
- Joanne L. Tipper
  Faculty of Engineering and IT, University of Technology, Sydney, NSW, Australia
  School of Mechanical Engineering, University of Leeds, Leeds, United Kingdom
11. Zhang J, Liu Z, Jiang W, Liu Y, Zhou X, Li X. Application of deep generative networks for SAR/ISAR: a review. Artif Intell Rev 2023. [DOI: 10.1007/s10462-023-10469-5]
12. Bai J, Xue H, Jiang X, Zhou Y. Classification and recognition of milk somatic cell images based on PolyLoss and PCAM-Reset50. Math Biosci Eng 2023; 20:9423-9442. [PMID: 37161250] [DOI: 10.3934/mbe.2023414]
Abstract
Somatic cell count (SCC) is a fundamental approach for determining the quality of cattle and bovine milk. So far, various classification and recognition methods have been proposed, all with certain limitations. In this study, we introduced a new deep learning tool: an improved ResNet50 model built on the residual network and fused with a position attention module and a channel attention module to extract feature information more effectively. Macrophages, lymphocytes, epithelial cells, and neutrophils were assessed. An image dataset of milk somatic cells was constructed with preprocessing to increase sample diversity. PolyLoss was selected as the loss function to address unbalanced category samples and hard-sample mining. The Adam optimization algorithm was used to update the gradient, and learning-rate warm-up was used to alleviate the overfitting caused by small-sample datasets and improve the model's generalization ability. The experimental results showed that the classification accuracy, precision, recall, and comprehensive evaluation index F value of the proposed model reached 97%, 94.5%, 90.75%, and 92.25%, respectively, indicating that the model could effectively classify milk somatic cell images with better performance than five previous models (ResNet50, ResNet18, ResNet34, AlexNet, and MobileNetv2). The accuracies of ResNet18, ResNet34, ResNet50, AlexNet, MobileNetv2, and the new model were 95%, 93%, 93%, 56%, 37%, and 97%, respectively. In addition, the comprehensive evaluation index F1 showed the best effect, fully verifying the effectiveness of the proposed method.
The proposed method overcame the limitations of image preprocessing, manual feature extraction, and manual feature selection in traditional machine learning methods, improving classification accuracy and showing strong generalization ability.
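For readers unfamiliar with PolyLoss, its simplest member (Poly-1) adds an epsilon * (1 - p_t) term to cross-entropy, where p_t is the predicted probability of the true class, so low-confidence (hard) examples are penalized more. A minimal NumPy sketch of that formula, not the paper's training code; the epsilon value is a tunable assumption:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def poly1_cross_entropy(logits, labels, epsilon=1.0):
    """Poly-1 loss: cross-entropy plus epsilon * (1 - p_t).
    epsilon > 0 upweights hard, low-confidence examples;
    epsilon = 0 recovers plain cross-entropy."""
    p = softmax(np.asarray(logits, dtype=float))
    pt = p[np.arange(len(labels)), labels]  # probability of the true class
    ce = -np.log(pt)
    return float(np.mean(ce + epsilon * (1.0 - pt)))
```

With confident correct predictions both terms vanish, so the extra term changes the loss landscape mainly for the hard samples the abstract mentions.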
Affiliation(s)
- Jie Bai
  College of Computer and Information Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
  Inner Mongolia Autonomous Region Key Laboratory of Big Data Research and Application of Agriculture and Animal Husbandry, Hohhot 010018, China
- Heru Xue
  College of Computer and Information Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
  Inner Mongolia Autonomous Region Key Laboratory of Big Data Research and Application of Agriculture and Animal Husbandry, Hohhot 010018, China
- Xinhua Jiang
  College of Computer and Information Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
  Inner Mongolia Autonomous Region Key Laboratory of Big Data Research and Application of Agriculture and Animal Husbandry, Hohhot 010018, China
- Yanqing Zhou
  College of Computer and Information Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
  Inner Mongolia Autonomous Region Key Laboratory of Big Data Research and Application of Agriculture and Animal Husbandry, Hohhot 010018, China
13. Shi L, Li X, Hu W, Chen H, Chen J, Fan Z, Gao M, Jing Y, Lu G, Ma D, Ma Z, Meng Q, Tang D, Sun H, Grzegorzek M, Qi S, Teng Y, Li C. EBHI-Seg: A novel enteroscope biopsy histopathological hematoxylin and eosin image dataset for image segmentation tasks. Front Med (Lausanne) 2023; 10:1114673. [PMID: 36760405] [PMCID: PMC9902656] [DOI: 10.3389/fmed.2023.1114673]
Abstract
Background and purpose: Colorectal cancer is a common fatal malignancy, the fourth most common cancer in men and the third most common in women worldwide. Timely detection of cancer in its early stages is essential for treating the disease. Currently, there is a lack of datasets for histopathological image segmentation of colorectal cancer, which often hampers assessment accuracy when computer technology is used to aid diagnosis. Methods: This study provides a new publicly available Enteroscope Biopsy Histopathological Hematoxylin and Eosin Image Dataset for Image Segmentation Tasks (EBHI-Seg). To demonstrate the validity and extensiveness of EBHI-Seg, experimental results on EBHI-Seg are evaluated using classical machine learning methods and deep learning methods. Results: The experiments showed that deep learning methods achieved better image segmentation performance on EBHI-Seg. The best Dice score among the classical machine learning methods is 0.948, while that among the deep learning methods is 0.965. Conclusion: This publicly available dataset contains 4,456 images of six tumor differentiation stages and the corresponding ground truth images. The dataset can provide researchers with new segmentation algorithms for the medical diagnosis of colorectal cancer, which can be used in clinical settings to help doctors and patients. EBHI-Seg is publicly available at: https://figshare.com/articles/dataset/EBHI-SEG/21540159/1.
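The Dice scores quoted above measure overlap between a predicted mask and the ground truth. A minimal sketch of the standard definition (not code from the dataset's authors); the epsilon guard against empty masks is an implementation convenience:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 for identical masks, 0.0 for disjoint ones; `eps`
    avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Dice weights the intersection twice, so it is more forgiving of small boundary errors than IoU on thin structures, which is one reason it dominates medical segmentation benchmarks.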
Affiliation(s)
- Liyu Shi
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xiaoyan Li
  Department of Pathology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, Shenyang, China
- Weiming Hu
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Haoyuan Chen
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Jing Chen
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Zizhen Fan
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Minghe Gao
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yujie Jing
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Guotao Lu
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Deguo Ma
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Zhiyu Ma
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Qingtao Meng
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Dechao Tang
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hongzan Sun
  Shengjing Hospital, China Medical University, Shenyang, China
- Marcin Grzegorzek
  Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
  Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Shouliang Qi
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yueyang Teng
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Chen Li
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
14. Yuan H, Wang Z, Wang Z, Zhang F, Guan D, Zhao R. Trends in forensic microbiology: From classical methods to deep learning. Front Microbiol 2023; 14:1163741. [PMID: 37065115] [PMCID: PMC10098119] [DOI: 10.3389/fmicb.2023.1163741]
Abstract
Forensic microbiology has been widely used in the diagnosis of cause and manner of death, identification of individuals, detection of crime locations, and estimation of the postmortem interval. However, the traditional method, microbial culture, has low efficiency, high resource consumption, and a low degree of quantitative analysis. With the development of high-throughput sequencing technology, advanced bioinformatics, and fast-evolving artificial intelligence, numerous machine learning models, such as RF, SVM, ANN, DNN, regression, PLS, ANOSIM, and ANOVA, have been established alongside advances in microbiome and metagenomic studies. Recently, deep learning models, including the convolutional neural network (CNN) and CNN-derived models, have improved the accuracy of forensic prognosis by using object detection techniques in microorganism image analysis. This review summarizes the application and development of forensic microbiology, as well as the research progress of machine learning (ML) and deep learning (DL) based on microbial genome sequencing and microbial images, and provides a future outlook on forensic microbiology.
Affiliation(s)
- Huiya Yuan
  Department of Forensic Analytical Toxicology, China Medical University School of Forensic Medicine, Shenyang, China
  Liaoning Province Key Laboratory of Forensic Bio-Evidence Science, Shenyang, China
- Ziwei Wang
  Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
- Zhi Wang
  Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
- Fuyuan Zhang
  Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
- Dawei Guan
  Liaoning Province Key Laboratory of Forensic Bio-Evidence Science, Shenyang, China
  Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
- Rui Zhao
  Liaoning Province Key Laboratory of Forensic Bio-Evidence Science, Shenyang, China
  Department of Forensic Pathology, China Medical University School of Forensic Medicine, Shenyang, China
15. Yang H, Li C, Zhao X, Cai B, Zhang J, Ma P, Zhao P, Chen A, Jiang T, Sun H, Teng Y, Qi S, Huang X, Grzegorzek M. EMDS-7: Environmental microorganism image dataset seventh version for multiple object detection evaluation. Front Microbiol 2023; 14:1084312. [PMID: 36891388] [PMCID: PMC9986282] [DOI: 10.3389/fmicb.2023.1084312]
Abstract
Nowadays, the detection of environmental microorganism indicators is essential for assessing the degree of pollution, but traditional detection methods consume a lot of manpower and material resources. It is therefore necessary to build microbial datasets for use in artificial intelligence. The Environmental Microorganism Image Dataset Seventh Version (EMDS-7) is a microscopic image dataset for multi-object detection in artificial intelligence; this approach reduces the chemicals, manpower, and equipment used in the process of detecting microorganisms. EMDS-7 includes the original Environmental Microorganism (EM) images and the corresponding object labeling files in ".XML" format. The EMDS-7 dataset consists of 41 types of EMs, with a total of 2,65 images and 13,216 labeled objects, and mainly focuses on object detection. To demonstrate the effectiveness of EMDS-7, we selected the most commonly used deep learning methods (Faster Region-based Convolutional Neural Network (Faster-RCNN), YOLOv3, YOLOv4, SSD, and RetinaNet) and evaluation indices for testing and evaluation. EMDS-7 is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/EMDS-7_DataSet/16869571.
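Detection evaluation indices such as mAP are built on the intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch of the standard IoU computation, independent of any of the detectors named above; boxes are assumed to be (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x2 > x1 and y2 > y1."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap extents, clamped at zero when the boxes do not intersect
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A predicted box is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (0.5 is the common default in PASCAL VOC-style evaluation).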
Affiliation(s)
- Hechen Yang
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Chen Li
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xin Zhao
  School of Resources and Civil Engineering, Northeastern University, Shenyang, China
- Bencheng Cai
  School of Resources and Civil Engineering, Northeastern University, Shenyang, China
- Jiawei Zhang
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Pingli Ma
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Peng Zhao
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Ao Chen
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Tao Jiang
  School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
  International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun
  Shengjing Hospital, China Medical University, Shenyang, China
- Yueyang Teng
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shouliang Qi
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xinyu Huang
  Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
- Marcin Grzegorzek
  Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
  Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
16. Hu W, Chen H, Liu W, Li X, Sun H, Huang X, Grzegorzek M, Li C. A comparative study of gastric histopathology sub-size image classification: From linear regression to visual transformer. Front Med (Lausanne) 2022; 9:1072109. [PMID: 36569152] [PMCID: PMC9767945] [DOI: 10.3389/fmed.2022.1072109]
Abstract
Introduction: Gastric cancer is the fifth most common cancer in the world and the fourth most deadly. Early detection serves as a guide for the treatment of gastric cancer. Nowadays, computer technology has advanced rapidly to assist physicians in diagnosing pathological images of gastric cancer. Ensemble learning is a way to improve the accuracy of algorithms, and finding multiple complementary learning models is the basis of ensemble learning. This paper therefore compares the performance of multiple algorithms in anticipation of applying ensemble learning to a practical gastric cancer classification problem. Methods: This experimental platform explores the complementarity of sub-size pathology image classifiers when machine performance is insufficient. We choose seven classical machine learning classifiers and four deep learning classifiers for classification experiments on the GasHisSDB database. The classical machine learning algorithms extract five different image virtual features to match multiple classifier algorithms. For deep learning, we choose three convolutional neural network classifiers and a novel Transformer-based classifier. Results: The experimental platform, on which a large number of classical machine learning and deep learning methods are run, demonstrates that the classifiers perform differently on GasHisSDB. Among the classical machine learning models, some classifiers classify the Abnormal category very well, while others excel at classifying the Normal category. The deep learning models likewise include several that are complementary. Discussion: Suitable classifiers can be selected for ensemble learning when machine performance is insufficient.
This experimental platform demonstrates that multiple classifiers are indeed complementary and can improve the efficiency of ensemble learning. This can better assist doctors in diagnosis, improve the detection of gastric cancer, and increase the cure rate.
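One simple way to exploit such complementary classifiers (one strong on the Abnormal category, another on Normal) is majority voting over their predicted labels. A minimal sketch assuming hard label predictions; this is illustrative, not the paper's ensemble method:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier label lists by majority vote.
    `predictions` is a list of equal-length label sequences, one per
    classifier. Ties go to the label encountered first among that
    sample's votes (a simple, arbitrary tie-break)."""
    n = len(predictions[0])
    fused = []
    for i in range(n):
        votes = [p[i] for p in predictions]
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```

With an odd number of voters and two classes there are no ties, so two classifiers that err on disjoint samples plus a mediocre third can together beat any single member.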
Affiliation(s)
- Weiming Hu
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Haoyuan Chen
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wanli Liu
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xiaoyan Li
  Department of Pathology, Liaoning Cancer Hospital and Institute, Cancer Hospital, China Medical University, Shenyang, China
- Hongzan Sun
  Department of Radiology, Shengjing Hospital, China Medical University, Shenyang, China
- Xinyu Huang
  Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
- Marcin Grzegorzek
  Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
  Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Chen Li
  Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
17. An Application of Pixel Interval Down-Sampling (PID) for Dense Tiny Microorganism Counting on Environmental Microorganism Images. Appl Sci (Basel) 2022. [DOI: 10.3390/app12147314]
Abstract
This paper proposes a novel pixel interval down-sampling network (PID-Net) for dense tiny object (yeast cell) counting tasks with higher accuracy. PID-Net is an end-to-end convolutional neural network (CNN) with an encoder-decoder architecture. Pixel interval down-sampling operations are concatenated with max-pooling operations to combine sparse and dense features, which addresses the limitation of contour conglutination of dense objects during counting. The evaluation was conducted using classical segmentation metrics (Dice, Jaccard, and Hausdorff distance) as well as counting metrics. The experimental results show that the proposed PID-Net had the best performance and potential for dense tiny object counting tasks, achieving 96.97% counting accuracy on a dataset of 2,448 yeast cell images. Compared with state-of-the-art approaches such as Attention U-Net, Swin U-Net, and Trans U-Net, PID-Net can segment dense tiny objects with clearer boundaries and less incorrect debris, showing its great potential for accurate counting.
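Pixel interval down-sampling can be read as a space-to-depth rearrangement: every r-th pixel at each of the r*r phase offsets forms a sub-map, and the sub-maps are stacked along the channel axis, so spatial resolution drops without discarding any pixel values (unlike max-pooling, which keeps only the window maximum). A minimal NumPy sketch of this rearrangement, not the PID-Net code:

```python
import numpy as np

def pixel_interval_downsample(x, r=2):
    """Split an (H, W, C) feature map into r*r interleaved sub-maps by
    taking every r-th pixel at each phase offset, then stack them along
    the channel axis: (H, W, C) -> (H//r, W//r, C*r*r).
    Every input value survives in exactly one output channel."""
    h, w, c = x.shape
    assert h % r == 0 and w % r == 0, "H and W must be divisible by r"
    subs = [x[i::r, j::r, :] for i in range(r) for j in range(r)]
    return np.concatenate(subs, axis=2)
```

Because the operation is a pure permutation of values, a decoder can in principle invert it exactly, which is what makes it attractive for separating touching (conglutinated) objects.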