1
Cao W, Pomeroy MJ, Liang Z, Gao Y, Shi Y, Tan J, Han F, Wang J, Ma J, Lu H, Abbasi AF, Pickhardt PJ. Lesion Classification by Model-Based Feature Extraction: A Differential Affine Invariant Model of Soft Tissue Elasticity in CT Images. J Imaging Inform Med 2024. [PMID: 39164453] [DOI: 10.1007/s10278-024-01178-8]
Abstract
The elasticity of soft tissue has been widely considered a characteristic property for differentiating healthy tissue from lesions and has therefore motivated the development of several elasticity imaging modalities, such as ultrasound elastography, magnetic resonance elastography, and optical coherence elastography, to measure tissue elasticity directly. This paper proposes an alternative approach: modeling the elasticity for prior knowledge-based extraction of tissue elastic characteristic features for machine learning (ML) lesion classification using the computed tomography (CT) imaging modality. The model describes a dynamic non-rigid (or elastic) soft tissue deformation on a differential manifold to mimic the tissue's elasticity under wave fluctuation in vivo. Based on the model, a local deformation invariant is formulated using the first- and second-order derivatives of the lesion volumetric CT image and used to generate an elastic feature map of the lesion volume. From the feature map, tissue elastic features are extracted and fed to ML classifiers for lesion classification. Two pathologically proven image datasets of colon polyps and lung nodules were used to test the modeling strategy. The outcomes reached an area under the receiver operating characteristic curve (AUC) of 94.2% for the polyps and 87.4% for the nodules, an average gain of 5 to 20% over several existing state-of-the-art image feature-based lesion classification methods. The gain demonstrates the importance of extracting tissue characteristic features rather than image features, which can include various image artifacts and may vary across image acquisition protocols and imaging modalities.
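The derivative machinery this abstract describes can be sketched in a few lines. This is a hedged illustration only: it computes generic first-order (gradient magnitude) and second-order (Laplacian) maps with numpy finite differences, not the paper's actual differential affine invariant, and the function name is ours.

```python
import numpy as np

def derivative_feature_maps(vol):
    """Per-voxel 1st- and 2nd-order derivative maps of a CT volume.
    Illustrative only: the paper's differential affine invariant is a
    specific combination of such derivatives, not reproduced here."""
    gx, gy, gz = np.gradient(vol.astype(float))   # 1st-order (central differences)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    # diagonal of the Hessian summed -> Laplacian, a simple 2nd-order quantity
    laplacian = (np.gradient(gx, axis=0)
                 + np.gradient(gy, axis=1)
                 + np.gradient(gz, axis=2))
    return grad_mag, laplacian
```

Per-voxel maps like these would then be summarized into scalar features for the ML classifier.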
Affiliation(s)
- Weiguo Cao
  - Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Marc J Pomeroy
  - Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
  - Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Zhengrong Liang
  - Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
  - Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Yongfeng Gao
  - Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Yongyi Shi
  - Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Jiaxing Tan
  - Department of Computer Science, City University of New York, New York, NY 10314, USA
- Fangfang Han
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Jing Wang
  - Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75235, USA
- Jianhua Ma
  - Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75235, USA
- Hongbin Lu
  - Department of Biomedical Engineering, The Fourth Medical University, Xi'an, China
- Almas F Abbasi
  - Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Perry J Pickhardt
  - Department of Radiology, School of Medicine, University of Wisconsin, Madison, WI 53792, USA
2
Liang DD, Liang DD, Pomeroy MJ, Gao Y, Kuo LR, Li LC. Examining feature extraction and classification modules in machine learning for diagnosis of low-dose computed tomographic screening-detected in vivo lesions. J Med Imaging (Bellingham) 2024;11:044501. [PMID: 38993628] [PMCID: PMC11234229] [DOI: 10.1117/1.jmi.11.4.044501]
Abstract
Purpose: Medical imaging-based machine learning (ML) for computer-aided diagnosis of in vivo lesions consists of two basic modules: (i) feature extraction from non-invasively acquired medical images and (ii) feature classification for predicting the malignancy of lesions detected or localized in those images. This study investigates their individual performances for diagnosis of low-dose computed tomography (CT) screening-detected pulmonary nodules and colorectal polyps. Approach: Three feature extraction methods were investigated. The first uses the gray-level co-occurrence matrix texture descriptor to extract Haralick image texture features (HFs). The second uses a convolutional neural network (CNN) architecture to extract deep learning (DL) image abstractive features (DFs). The third uses the interactions between lesion tissues and the X-ray energy of CT to extract tissue-energy specific characteristic features (TFs). All three categories of extracted features were classified by a random forest (RF) classifier, with comparison to the DL-CNN method, which reads the images, extracts the DFs, and classifies them in an end-to-end manner. Diagnostic performance was measured by the area under the receiver operating characteristic curve (AUC). Three lesion image datasets were used, with the lesions' tissue pathology reports as the learning labels. Results: Experiments on the three datasets produced AUC values of 0.724 to 0.878 for the HFs, 0.652 to 0.965 for the DFs, and 0.985 to 0.996 for the TFs, compared with 0.694 to 0.964 for the end-to-end DL-CNN. These outcomes indicate that the RF classifier performed comparably to the DL-CNN classification module and that extraction of tissue-energy specific characteristic features dramatically improved the AUC. Conclusions: The feature extraction module is more important than the feature classification module, and extraction of tissue-energy specific characteristic features is more important than extraction of image abstractive or image characteristic features.
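The first stage of the HF pipeline, Haralick features from a gray-level co-occurrence matrix (GLCM), is compact enough to sketch. Below is a minimal version for a single pixel offset (horizontal neighbors) with two of the classic Haralick statistics; the study used the full Haralick descriptor set with a random-forest classifier on top, so this is a simplified stand-in.

```python
import numpy as np

def glcm(img, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix for
    horizontal neighbor pairs (offset (0, 1)). A minimal sketch of the
    first step of a Haralick-feature pipeline."""
    q = np.rint(img.astype(float) / img.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    m += m.T                      # make symmetric
    return m / m.sum()            # normalize to joint probabilities

def haralick_contrast_energy(p):
    """Two classic Haralick statistics computed from a GLCM."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy
```

In practice one averages such statistics over several offsets and directions before feeding them to the classifier.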
Affiliation(s)
- Daniel D Liang
  - Ward Melville High School, East Setauket, New York, United States
- David D Liang
  - University of Chicago, Department of Computer Science, Chicago, Illinois, United States
- Marc J Pomeroy
  - State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
  - State University of New York, Department of Radiology, Stony Brook, New York, United States
- Yongfeng Gao
  - State University of New York, Department of Radiology, Stony Brook, New York, United States
- Licheng R Kuo
  - State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
  - State University of New York, Department of Radiology, Stony Brook, New York, United States
- Lihong C Li
  - City University of New York/CSI, Department of Engineering and Environment Science, Staten Island, New York, United States
3
Nagasaki Y, Taki T, Nomura K, Tane K, Miyoshi T, Samejima J, Aokage K, Ohtani-Kim SJY, Kojima M, Sakashita S, Sakamoto N, Ishikawa S, Suzuki K, Tsuboi M, Ishii G. Spatial intratumor heterogeneity of programmed death-ligand 1 expression predicts poor prognosis in resected non-small cell lung cancer. J Natl Cancer Inst 2024;116:1158-1168. [PMID: 38459590] [DOI: 10.1093/jnci/djae053]
Abstract
BACKGROUND: We quantified the pathological spatial intratumor heterogeneity of programmed death-ligand 1 (PD-L1) expression and investigated its relevance to patient outcomes in surgically resected non-small cell lung carcinoma (NSCLC). METHODS: This study enrolled 239 consecutive surgically resected NSCLC specimens of pathological stage IIA-IIIB. To characterize the spatial intratumor heterogeneity of PD-L1 expression in NSCLC tissues, we developed a mathematical model based on texture image analysis and determined a spatial heterogeneity index of PD-L1 for each tumor. The correlation between this index and clinicopathological characteristics, including prognosis, was analyzed, and an independent cohort of 70 cases was analyzed for model validation. RESULTS: Clinicopathological analysis showed correlations between high spatial heterogeneity index values and histological subtype (squamous cell carcinoma; P < .001) and vascular invasion (P = .004). Survival analysis revealed that patients with high index values had significantly worse recurrence-free survival than those with low values (5-year recurrence-free survival [RFS] = 26.3% vs 47.1%, P < .005). The impact of the index on survival was verified in the independent validation cohort. High index values were also associated with tumor recurrence in squamous cell carcinoma (5-year RFS = 29.2% vs 52.8%, P < .05) and adenocarcinoma (5-year RFS = 19.6% vs 43.0%, P < .01), and a high index value was an independent risk factor for tumor recurrence. CONCLUSIONS: We presented an image analysis model to quantify the spatial intratumor heterogeneity of protein expression in tumor tissues and demonstrated that the spatial intratumor heterogeneity of PD-L1 expression in surgically resected NSCLC predicts poor patient outcomes.
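The paper's spatial heterogeneity index comes from a texture-model analysis whose details the abstract does not give. As a hedged stand-in for the general idea, one can measure how unevenly PD-L1 positivity is distributed across a tissue map via the variance of patch-level positive fractions; the function name, patch size, and statistic here are ours, not the authors'.

```python
import numpy as np

def heterogeneity_index(positivity, patch=4):
    """Hypothetical surrogate for a spatial heterogeneity index:
    variance of patch-level positive fractions over a binary
    PD-L1-positivity map. A spatially uniform map scores 0; a map
    with positive and negative regions scores higher."""
    h, w = positivity.shape
    fracs = [
        positivity[r:r + patch, c:c + patch].mean()
        for r in range(0, h - patch + 1, patch)
        for c in range(0, w - patch + 1, patch)
    ]
    return float(np.var(fracs))
```

Two maps with the same overall positive fraction can thus receive very different scores, which is the property the study exploits prognostically.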
MESH Headings
- Humans
- Carcinoma, Non-Small-Cell Lung/surgery
- Carcinoma, Non-Small-Cell Lung/pathology
- Carcinoma, Non-Small-Cell Lung/mortality
- Carcinoma, Non-Small-Cell Lung/metabolism
- B7-H1 Antigen/metabolism
- B7-H1 Antigen/analysis
- Male
- Female
- Lung Neoplasms/pathology
- Lung Neoplasms/surgery
- Lung Neoplasms/mortality
- Lung Neoplasms/metabolism
- Prognosis
- Middle Aged
- Aged
- Biomarkers, Tumor/metabolism
- Neoplasm Recurrence, Local
- Neoplasm Staging
- Adult
- Carcinoma, Squamous Cell/surgery
- Carcinoma, Squamous Cell/pathology
- Carcinoma, Squamous Cell/mortality
- Carcinoma, Squamous Cell/metabolism
Affiliation(s)
- Yusuke Nagasaki
  - Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
  - Department of Thoracic Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
  - Department of General Thoracic Surgery, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Tetsuro Taki
  - Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Kotaro Nomura
  - Department of Thoracic Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Kenta Tane
  - Department of Thoracic Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Tomohiro Miyoshi
  - Department of Thoracic Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Joji Samejima
  - Department of Thoracic Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Keiju Aokage
  - Department of Thoracic Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Seiyu Jeong-Yoo Ohtani-Kim
  - Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
  - Department of Thoracic Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Motohiro Kojima
  - Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
  - Division of Pathology, National Cancer Center, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Shingo Sakashita
  - Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
  - Division of Pathology, National Cancer Center, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Naoya Sakamoto
  - Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
  - Division of Pathology, National Cancer Center, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Shumpei Ishikawa
  - Division of Pathology, National Cancer Center, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
  - Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Kenji Suzuki
  - Department of General Thoracic Surgery, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Masahiro Tsuboi
  - Department of Thoracic Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Genichiro Ishii
  - Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
  - Division of Innovative Pathology and Laboratory Medicine, Exploratory Oncology Research and Clinical Trial Center, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
4
Ma F, Wang S, Guo Y, Dai C, Meng J. Image segmentation of mouse eye in vivo with optical coherence tomography based on Bayesian classification. Biomed Eng/Biomed Tech 2024;69:307-315. [PMID: 38178615] [DOI: 10.1515/bmt-2023-0266]
Abstract
OBJECTIVES: Optical coherence tomography (OCT) is an imaging technology that applies an optical analog of ultrasound imaging to biological tissues. Image segmentation plays an important role in quantitative analysis of medical images. METHODS: We propose a novel framework to deal with the low-intensity problem, based on labeled patches and a Bayesian classification (LPBC) model. The method comprises training and testing phases. During training, we first manually select sub-images of background and region of interest (ROI) from the training image and then extract features by patches. Finally, we train the Bayesian model with these features, and the segmentation threshold of each patch is computed from the learned model. RESULTS: In addition, we have collected a new in vivo OCT dataset of mouse eyes, named MEVOCT, available at https://17861318579.github.io/LPBC. MEVOCT consists of 20 high-resolution images, each 2048 × 2048 pixels. CONCLUSIONS: The experimental results demonstrate the effectiveness of the LPBC method on the new MEVOCT dataset. ROI segmentation is of great importance for subsequent distortion correction.
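The core of the LPBC idea, labeling pixels by comparing class posteriors learned from hand-selected background and ROI samples, can be sketched under an assumed Gaussian likelihood per class; the paper's exact model and features may differ, and the function names below are ours.

```python
import numpy as np

def fit_gaussian(samples):
    """Fit a 1D Gaussian (mean, std) to training intensities."""
    return float(np.mean(samples)), float(np.std(samples) + 1e-8)

def bayes_mask(patch, roi_stats, bg_stats, prior_roi=0.5):
    """Label each pixel ROI vs background by the larger log-posterior,
    assuming Gaussian class likelihoods (a sketch of per-patch
    Bayesian labeling in the spirit of LPBC)."""
    def loglik(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)
    x = patch.astype(float)
    score_roi = loglik(x, *roi_stats) + np.log(prior_roi)
    score_bg = loglik(x, *bg_stats) + np.log(1.0 - prior_roi)
    return score_roi > score_bg
```

Running this per patch, rather than with one global threshold, is what lets the method adapt to locally low-intensity regions.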
Affiliation(s)
- Fei Ma
  - School of Computer Science, Qufu Normal University, Rizhao, Shandong, China
- Shengbo Wang
  - School of Computer Science, Qufu Normal University, Rizhao, Shandong, China
- Yanfei Guo
  - School of Computer Science, Qufu Normal University, Rizhao, Shandong, China
- Cuixia Dai
  - Department of College Science, Shanghai Institute of Technology, Shanghai, China
- Jing Meng
  - School of Computer Science, Qufu Normal University, Rizhao, Shandong, China
5
Cao W, Howe BM, Wright DE, Ramanathan S, Rhodes NG, Korfiatis P, Amrami KK, Spinner RJ, Kline TL. Abnormal Brachial Plexus Differentiation from Routine Magnetic Resonance Imaging: An AI-based Approach. Neuroscience 2024;546:178-187. [PMID: 38518925] [DOI: 10.1016/j.neuroscience.2024.03.017]
Abstract
Automatic identification of abnormal brachial plexus (BP) on routine magnetic resonance imaging (MRI), to localize and characterize neurologic injury in clinical practice, is still a novel topic in brachial plexopathy. This study developed and evaluated an artificial intelligence (AI) approach to differentiate abnormal BP across three commonly used MRI sequences: T1, fluid-sensitive, and post-gadolinium. A BP dataset was collected by radiological experts, and a semi-supervised AI method (based on nnU-Net) was used to segment the BP. Thereafter, a radiomics method was used to extract 107 shape and texture features from the segmented regions of interest (ROIs). From various machine learning methods, we selected six widely recognized classifiers for training our BP models and assessing their efficacy. To optimize these models, we introduced a dynamic feature selection approach that discards redundant and less informative features. Our experiments demonstrated that, for identifying abnormal BP cases, shape features were more sensitive than texture features. Notably, the logistic and bagging classifiers outperformed the other methods. The model trained on fluid-sensitive sequences notably exceeded the results of both the T1 and post-gadolinium models, with both classification accuracy and AUC (area under the receiver operating characteristic curve) above 90%. This outcome serves as robust experimental validation of the substantial potential and feasibility of integrating AI into clinical practice.
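The abstract's "dynamic feature selection" is not spelled out. One plausible minimal reading, pruning features that are highly correlated with features already kept, can be sketched as follows; the threshold, greedy order, and function name are assumptions, not the authors' algorithm.

```python
import numpy as np

def drop_redundant(X, names, thresh=0.95):
    """Greedy redundancy pruning: scan features in order and drop any
    whose absolute Pearson correlation with an already-kept feature
    exceeds `thresh`. X is (n_samples, n_features)."""
    keep = []
    for j in range(X.shape[1]):
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < thresh for k in keep):
            keep.append(j)
    return [names[k] for k in keep], X[:, keep]
```

A filter like this would shrink the 107 radiomic features before the six classifiers are trained, which is the stated purpose of the selection step.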
Affiliation(s)
- Weiguo Cao
  - Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN 55905, USA
- Benjamin M Howe
  - Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN 55905, USA
- Darryl E Wright
  - Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN 55905, USA
- Sumana Ramanathan
  - Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN 55905, USA
- Nicholas G Rhodes
  - Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN 55905, USA
- Panagiotis Korfiatis
  - Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN 55905, USA
- Kimberly K Amrami
  - Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN 55905, USA
- Robert J Spinner
  - Department of Neurological Surgery, Mayo Clinic, 200 First Street SW, Gonda 8, Rochester, MN 55905, USA
- Timothy L Kline
  - Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN 55905, USA
6
Al-Gaashani MS, Samee NA, Alkanhel R, Atteia G, Abdallah HA, Ashurov A, Ali Muthanna MS. Deep transfer learning with gravitational search algorithm for enhanced plant disease classification. Heliyon 2024;10:e28967. [PMID: 38601589] [PMCID: PMC11004804] [DOI: 10.1016/j.heliyon.2024.e28967]
Abstract
Plant diseases annually damage or destroy much of the crop, which constitutes a significant challenge for farm owners, governments, and consumers alike. Identifying and classifying diseases at an early stage is therefore very important for sustaining local and global food security. In this research, we designed a new method to identify plant diseases by combining transfer learning and the Gravitational Search Algorithm (GSA). Two state-of-the-art pretrained models, MobileNetV2 and ResNet50V2, were adopted for feature extraction. Multilayer feature extraction is applied to capture representations of plant leaves at different levels of abstraction for precise classification. These features are concatenated and passed to the GSA for optimization. Finally, the optimized features are passed to multinomial logistic regression (MLR) for final classification. This integration is essential for categorizing 18 different types of infected and healthy leaf samples. The performance of our approach is strengthened by a comparative analysis that incorporates features optimized by the genetic algorithm (GA), and the MLR classifier is contrasted with k-nearest neighbors (KNN). The empirical findings indicate that our GSA-refined model achieves very high precision: 99.2% on average with MLR and 98.6% with KNN. These results significantly exceed those achieved with GA-optimized features, highlighting the superiority of the suggested strategy. An important additional result is a reduction in the number of features by more than 50%, which greatly lowers the processing requirements without sacrificing diagnostic quality. This work presents a robust and efficient approach to the early detection of plant diseases, demonstrating how sophisticated computational methods enable novel data-driven strategies for plant health management in agriculture and thereby enhance worldwide food security.
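For readers unfamiliar with GSA, its core update rule can be shown on a toy continuous problem: agents attract each other with mass proportional to fitness, so the swarm drifts toward better solutions. This is a deliberately small, hedged sketch of a Rashedi-style GSA minimizing a test function, not the paper's feature-subset optimization; all parameter values are ours.

```python
import numpy as np

def gsa_minimize(f, dim=2, n_agents=15, iters=60, g0=2.0, seed=0):
    """Toy Gravitational Search Algorithm: fitter agents get larger
    mass and gravitationally pull the others toward them, with a
    decaying gravitational constant."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_agents, dim))
    v = np.zeros_like(x)
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, x)
        best, worst = fit.min(), fit.max()
        mass = (worst - fit) / (worst - best + 1e-12)   # best agent -> mass ~1
        mass = mass / (mass.sum() + 1e-12)
        g = g0 * np.exp(-10.0 * t / iters)              # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            diff = x - x[i]                             # vectors toward every agent
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = (rng.random(n_agents) * g * mass / dist) @ diff
        v = rng.random((n_agents, 1)) * v + acc
        x = x + v
    fit = np.apply_along_axis(f, 1, x)
    return x[fit.argmin()], float(fit.min())
```

In the paper's setting the search space would be feature-selection vectors scored by classifier accuracy rather than a closed-form function.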
Affiliation(s)
- Mehdhar S.A.M. Al-Gaashani
  - School of Resources and Environment, University of Electronic Science and Technology of China, 4 1st Ring Rd East 2 Section, Chenghua District, Chengdu, 610056, Sichuan, China
- Nagwan Abdel Samee
  - Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Reem Alkanhel
  - Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Ghada Atteia
  - Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Hanaa A. Abdallah
  - Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Asadulla Ashurov
  - School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Mohammed Saleh Ali Muthanna
  - Institute of Computer Technologies and Information Security, Southern Federal University, 344006, Taganrog, Russia
7
Wu D, Ni J, Fan W, Jiang Q, Wang L, Sun L, Cai Z. Opportunities and challenges of computer aided diagnosis in new millennium: A bibliometric analysis from 2000 to 2023. Medicine (Baltimore) 2023;102:e36703. [PMID: 38134105] [PMCID: PMC10735127] [DOI: 10.1097/md.0000000000036703]
Abstract
BACKGROUND: Since the turn of the millennium, computer-aided diagnosis (CAD) has been rapidly developing as an emerging technology worldwide. Bibliometric studies in this area have not yet been reported. This study aimed to explore the hotspots and frontiers of CAD research from 2000 to 2023, as a reference for researchers in this field. METHODS: We used bibliometrics to analyze CAD-related literature in the Web of Science database between 2000 and 2023. The scientometric software tools VOSviewer and CiteSpace were used to visually analyze the countries, institutions, authors, journals, references, and keywords involved. Keyword burst analysis was used to further explore the current state and development trends of CAD research. RESULTS: A total of 13,970 publications were included, with a noticeably rising annual publication trend. China and the United States are the major contributors, with the United States in the dominant position in CAD research. American research institutions, led by the University of Chicago, are pioneers of CAD. Acharya UR, Zheng B, and Chan HP are the most prolific authors. IEEE Transactions on Medical Imaging focuses on CAD and publishes the most articles. New computer technologies related to CAD are at the forefront of attention. Currently, CAD is used extensively for breast, pulmonary, and brain diseases. CONCLUSION: Expanding the spectrum of CAD-related diseases is a possible future research trend. Overcoming the lack of large-sample datasets and establishing a universally accepted standard for evaluating CAD system performance are urgent issues for CAD development and validation. This paper provides valuable information on the current state of CAD research and future developments.
Affiliation(s)
- Di Wu
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
  - Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Jiachun Ni
  - Department of Coloproctology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Wenbin Fan
  - Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Qiong Jiang
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Ling Wang
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Li Sun
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Zengjin Cai
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
8
Wang M, Jiang H. PST-Radiomics: a PET/CT lymphoma classification method based on pseudo spatial-temporal radiomic features and structured atrous recurrent convolutional neural network. Phys Med Biol 2023;68:235014. [PMID: 37956448] [DOI: 10.1088/1361-6560/ad0c0f]
Abstract
Objective. Existing radiomic methods tend to treat each isolated tumor as an inseparable whole when extracting radiomic features. They may thereby discard critical intra-tumor metabolic heterogeneity (ITMH) information that contributes to defining tumor subtypes. To improve lymphoma classification performance, we propose a pseudo spatial-temporal radiomic method (PST-Radiomics) based on positron emission tomography/computed tomography (PET/CT). Approach. To enable exploitation of ITMH, we first present a multi-threshold gross tumor volume sequence (GTVS). Next, we extract 1D radiomic features based on PET images and each volume in the GTVS and create a pseudo spatial-temporal feature sequence (PSTFS) tightly interwoven with ITMH. We then reshape the PSTFS into 2D pseudo spatial-temporal feature maps (PSTFM), whose columns are elements of the PSTFS. Finally, to learn from the PSTFM in an end-to-end manner, we build a lightweight pseudo spatial-temporal radiomic network (PSTR-Net), in which a structured atrous recurrent convolutional neural network serves as the PET branch, better exploiting the strong local dependencies in the PSTFM, and a residual convolutional neural network serves as the CT branch, exploiting conventional radiomic features extracted from CT volumes. Main results. We validate PST-Radiomics on a PET/CT lymphoma subtype classification task. Experimental results quantitatively demonstrate the superiority of PST-Radiomics compared with existing radiomic methods. Significance. Feature map visualization shows that the method performs complex feature selection while extracting hierarchical feature maps, which qualitatively demonstrates its superiority.
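The GTVS construction is simple to illustrate: thresholding the PET volume at increasing fractions of its maximum uptake gives nested sub-volumes, each contributing one 1D feature to the pseudo temporal sequence. The fractions and the per-volume feature below (mean uptake) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gtv_sequence_features(pet, fractions=(0.3, 0.5, 0.7)):
    """Multi-threshold GTV sequence sketch: each fraction of the
    maximum uptake yields a nested binary sub-volume; one scalar
    feature per sub-volume forms the pseudo temporal sequence."""
    suv_max = float(pet.max())
    masks = [pet >= frac * suv_max for frac in fractions]
    feats = [float(pet[m].mean()) for m in masks]   # one 1D feature per volume
    return masks, feats
```

Stacking many such per-volume feature vectors column-wise is what produces the 2D PSTFM the network consumes.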
Affiliation(s)
- Meng Wang
  - Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
  - Software College, Northeastern University, Shenyang 110819, People's Republic of China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
9
Li T, Xu Y, Wu T, Charlton JR, Bennett KM, Al-Hindawi F. BlobCUT: A Contrastive Learning Method to Support Small Blob Detection in Medical Imaging. Bioengineering (Basel) 2023;10:1372. [PMID: 38135963] [PMCID: PMC10740534] [DOI: 10.3390/bioengineering10121372]
Abstract
Medical imaging-based biomarkers derived from small objects (e.g., cell nuclei) play a crucial role in medical applications. However, detecting and segmenting small objects (a.k.a. blobs) remains a challenging task. In this research, we propose a novel 3D small blob detector called BlobCUT. BlobCUT is an unpaired image-to-image (I2I) translation model that falls under the Contrastive Unpaired Translation paradigm. It employs a blob synthesis module to generate synthetic 3D blobs with corresponding masks. This is incorporated into the iterative model training as the ground truth. The I2I translation process is designed with two constraints: (1) a convexity consistency constraint that relies on Hessian analysis to preserve the geometric properties and (2) an intensity distribution consistency constraint based on Kullback-Leibler divergence to preserve the intensity distribution of blobs. BlobCUT learns the inherent noise distribution from the target noisy blob images and performs image translation from the noisy domain to the clean domain, effectively functioning as a denoising process to support blob identification. To validate the performance of BlobCUT, we evaluate it on a 3D simulated dataset of blobs and a 3D MRI dataset of mouse kidneys. We conduct a comparative analysis involving six state-of-the-art methods. Our findings reveal that BlobCUT exhibits superior performance and training efficiency, utilizing only 56.6% of the training time required by the state-of-the-art BlobDetGAN. This underscores the effectiveness of BlobCUT in accurately segmenting small blobs while achieving notable gains in training efficiency.
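The intensity-distribution consistency constraint described above is a Kullback-Leibler term between intensity distributions; on discrete histograms it reduces to a few lines. The bin count and epsilon smoothing below are our assumptions, and in actual training this would be a differentiable loss term rather than a numpy post-hoc measure.

```python
import numpy as np

def kl_intensity(a, b, bins=32, eps=1e-8):
    """KL divergence between the normalized intensity histograms of
    two images, sharing a common bin range; a sketch of the
    intensity-distribution consistency term."""
    lo = min(float(a.min()), float(b.min()))
    hi = max(float(a.max()), float(b.max()))
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps          # epsilon-smooth to avoid log(0)
    q = q / q.sum() + eps
    return float((p * np.log(p / q)).sum())
```

The divergence is zero for identical distributions and grows as the translated image's intensity statistics drift from the source's, which is exactly what the constraint penalizes.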
Affiliation(s)
- Teng Li
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA; (T.L.); (Y.X.); (F.A.-H.)
- Yanzhe Xu
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA; (T.L.); (Y.X.); (F.A.-H.)
- Teresa Wu
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA; (T.L.); (Y.X.); (F.A.-H.)
- Jennifer R. Charlton
- Division of Nephrology, Department of Pediatrics, University of Virginia, Charlottesville, VA 22903, USA
- Kevin M. Bennett
- Department of Radiology, Washington University, St. Louis, MO 63130, USA
- Firas Al-Hindawi
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA; (T.L.); (Y.X.); (F.A.-H.)
10
Qu W, Yang J, Li J, Yuan G, Li S, Chu Q, Xie Q, Zhang Q, Cheng B, Li Z. Avoid non-diagnostic EUS-FNA: a DNN model as a possible gatekeeper to distinguish pancreatic lesions prone to inconclusive biopsy. Br J Radiol 2023; 96:20221112. [PMID: 37195026 PMCID: PMC10607397 DOI: 10.1259/bjr.20221112] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 03/20/2023] [Accepted: 05/11/2023] [Indexed: 05/18/2023] Open
Abstract
OBJECTIVE This work aimed to explore the utility of CT radiomics with machine learning for distinguishing pancreatic lesions prone to non-diagnostic endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA). METHODS 498 patients who underwent pancreatic EUS-FNA were retrospectively reviewed [development cohort: 147 pancreatic ductal adenocarcinoma (PDAC); validation cohort: 37 PDAC]. Pancreatic lesions other than PDAC were also tested exploratively. Radiomics features extracted from contrast-enhanced CT were integrated with a deep neural network (DNN) after dimension reduction. Receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA) were performed for model evaluation, and the explainability of the DNN model was analyzed by integrated gradients. RESULTS The DNN model was effective in distinguishing PDAC lesions prone to non-diagnostic EUS-FNA (development cohort: AUC = 0.821, 95% CI: 0.742-0.900; validation cohort: AUC = 0.745, 95% CI: 0.534-0.956). In all cohorts, the DNN model showed better utility than the logistic model based on traditional lesion characteristics, with NRI >0 (p < 0.05). The DNN model had a net benefit of 21.6% at a risk threshold of 0.60 in the validation cohort. As for model explainability, gray-level co-occurrence matrix (GLCM) features contributed the most on average, and first-order features were the most important in the summed attribution. CONCLUSION The CT radiomics-based DNN model can be a useful auxiliary tool for distinguishing pancreatic lesions prone to non-diagnostic EUS-FNA and can alert endoscopists preoperatively to reduce unnecessary EUS-FNA. ADVANCES IN KNOWLEDGE This is the first investigation into the utility of CT radiomics-based machine learning for avoiding non-diagnostic EUS-FNA in patients with pancreatic masses, providing potential preoperative assistance for endoscopists.
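Integrated gradients, used above to explain the DNN, attribute a prediction by accumulating gradients along a straight path from a baseline to the input; for a linear model the attribution for feature i reduces exactly to (x_i - baseline_i) * w_i. A toy sketch with hypothetical weights (not the paper's fitted model):

```python
def predict(x, w):
    """Toy linear 'model': weighted sum of radiomics features."""
    return sum(xi * wi for xi, wi in zip(x, w))

def integrated_gradients(x, baseline, w, steps=50):
    """Riemann approximation of integrated gradients for the linear model.
    The gradient w.r.t. feature i is constant (= w[i]) along the whole path."""
    attrs = []
    for i in range(len(x)):
        grad_sum = sum(w[i] for _ in range(steps))  # constant gradient
        attrs.append((x[i] - baseline[i]) * grad_sum / steps)
    return attrs

w = [0.8, -0.5, 0.1]           # hypothetical feature weights
x = [1.0, 2.0, 3.0]            # input feature vector
baseline = [0.0, 0.0, 0.0]     # all-zero reference input
attrs = integrated_gradients(x, baseline, w)
# Completeness axiom: attributions sum to f(x) - f(baseline).
gap = predict(x, w) - predict(baseline, w)
```

The completeness property (attributions summing to the prediction gap) is what lets per-feature contributions, such as the GLCM attributions reported above, be compared and summed.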
Affiliation(s)
- Weinuo Qu
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Jiannan Yang
- School of Data Science, City University of Hong Kong, Kowloon, Hong Kong, China
- Jiali Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Guanjie Yuan
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shichao Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Qian Chu
- Department of Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qingguo Xie
- Biomedical Engineering Department, College of Life Sciences and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Bin Cheng
- Department of Gastroenterology and Hepatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhen Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
11
Huang X, Chen X, Zhong X, Tian T. The CNN model aided the study of the clinical value hidden in the implant images. J Appl Clin Med Phys 2023; 24:e14141. [PMID: 37656066 PMCID: PMC10562019 DOI: 10.1002/acm2.14141] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Revised: 08/14/2023] [Accepted: 08/16/2023] [Indexed: 09/02/2023] Open
Abstract
PURPOSE This article aims to construct a new method for evaluating radiographic image identification results based on artificial intelligence, which can complement the limited vision of researchers when studying the effect of various factors on clinical implantation outcomes. METHODS We constructed a convolutional neural network (CNN) model using clinical implant radiographic images and used gradient-weighted class activation mapping (Grad-CAM) to obtain thermal maps that present identification differences before performing statistical analyses. To verify whether the differences highlighted by the Grad-CAM algorithm would be of value to clinical practice, we measured the bone thickness around the identified sites. Finally, we analyzed the influence of implant type on implantation according to the measurement results. RESULTS (1) The thermal maps showed that the sites with significant differences between Straumann BL and Bicon implants, as identified by the CNN model, were mainly the thread and neck areas. (2) The heights of the mesial, distal, buccal, and lingual bone of the Bicon implant post-op were greater than those of the Straumann BL (P < 0.05). (3) Between the first and second stages of surgery, the amount of bone thickness variation at the buccal and lingual sides of the Bicon implant platform was greater than that of the Straumann BL implant (P < 0.05). CONCLUSION We found that the identified neck area of the Bicon implant was placed deeper than that of the Straumann BL implant, and there was more bone resorption on the buccal and lingual sides at the Bicon implant platform between the first and second stages of surgery. In summary, this study shows that a CNN classification model can identify differences that complement our limited vision.
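Grad-CAM, as used above, weights each convolutional feature map by its globally averaged gradient and keeps only the positive evidence. A minimal framework-free sketch on hypothetical 2x2 feature maps (toy numbers, not the study's network):

```python
def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: lists of K channels, each an HxW nested list.
    Returns ReLU of the gradient-weighted sum of feature maps."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel weight = global average pool of that channel's gradients.
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for amap, alpha in zip(feature_maps, weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * amap[i][j]
    return [[max(v, 0.0) for v in row] for row in cam]  # ReLU

maps = [[[1.0, 0.0], [0.0, 2.0]],
        [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]],      # channel 1: positive evidence
         [[-0.2, -0.2], [-0.2, -0.2]]]  # channel 2: negative evidence
cam = grad_cam(maps, grads)
```

Locations where negatively weighted channels dominate are clipped to zero, which is why the resulting thermal map highlights only regions supporting the predicted class (here, e.g., the implant thread and neck areas).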
Affiliation(s)
- Xinxu Huang
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Xingyu Chen
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Xinnan Zhong
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Taoran Tian
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
12
Wanjiku RN, Nderu L, Kimwele M. Improved transfer learning using textural features conflation and dynamically fine-tuned layers. PeerJ Comput Sci 2023; 9:e1601. [PMID: 37810335 PMCID: PMC10557498 DOI: 10.7717/peerj-cs.1601] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Accepted: 08/29/2023] [Indexed: 10/10/2023]
Abstract
Transfer learning involves reusing knowledge a model learnt on one task to address another task. However, this process works well only when the tasks are closely related. It is therefore important to select data points closely relevant to the previous task and to fine-tune suitable layers of the pre-trained model for effective transfer. This work utilises the least divergent textural features of the target datasets and the pre-trained model's layers, minimising the knowledge lost during the transfer learning process. This study extends previous works on selecting data points with good textural features and on dynamically selecting layers using divergence measures by combining them into one model pipeline. Five pre-trained models are used: ResNet50, DenseNet169, InceptionV3, VGG16 and MobileNetV2, on nine datasets: CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Stanford Dogs, Caltech 256, ISIC 2016, ChestX-ray8 and MIT Indoor Scenes. Experimental results show that data points with lower textural feature divergence and layers with more positive weights give better accuracy than other data points and layers. The data points with lower divergence give an average improvement of 3.54% to 6.75%, while the layers improve accuracy by 2.42% to 13.04% on the CIFAR-100 dataset. Combining the two methods gives an extra accuracy improvement of 1.56%. This combined approach shows that data points with lower divergence from the source dataset samples can lead to better adaptation for the target task. The results also demonstrate that selecting layers with more positive weights reduces trial and error in choosing fine-tuning layers for pre-trained models.
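Ranking target data points by how little their textural feature distributions diverge from the source can be sketched with a symmetric measure such as Jensen-Shannon divergence (the paper evaluates divergence measures generally; the histograms and image names here are hypothetical):

```python
import math

def kl(p, q, eps=1e-12):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def js_divergence(p, q):
    """Symmetric, bounded divergence between two normalized histograms."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

source = [0.1, 0.4, 0.4, 0.1]           # source-domain texture histogram
candidates = {
    "img_a": [0.1, 0.4, 0.4, 0.1],      # identical to source
    "img_b": [0.2, 0.3, 0.3, 0.2],      # mildly different
    "img_c": [0.7, 0.1, 0.1, 0.1],      # strongly different
}
# Lowest-divergence candidates first: these transfer best per the abstract.
ranked = sorted(candidates, key=lambda k: js_divergence(source, candidates[k]))
```

The same scoring idea applies to layer selection: compute a divergence (or weight-sign statistic) per layer and keep the best-scoring layers for fine-tuning rather than choosing them by trial and error.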
Affiliation(s)
- Lawrence Nderu
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
- Michael Kimwele
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
13
Zhang S, Wu J, Shi E, Yu S, Gao Y, Li LC, Kuo LR, Pomeroy MJ, Liang ZJ. MM-GLCM-CNN: A multi-scale and multi-level based GLCM-CNN for polyp classification. Comput Med Imaging Graph 2023; 108:102257. [PMID: 37301171 DOI: 10.1016/j.compmedimag.2023.102257] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 05/04/2023] [Accepted: 05/30/2023] [Indexed: 06/12/2023]
Abstract
Distinguishing malignant from benign lesions has significant clinical impact on both early detection and optimal management of those detections. The convolutional neural network (CNN) has shown great potential in medical imaging applications due to its powerful feature learning capability. However, it is very challenging to obtain pathological ground truth, in addition to the collected in vivo medical images, to construct objective training labels for feature learning, which makes lesion diagnosis difficult. This conflicts with the requirement that CNN algorithms need large datasets for training. To explore the ability to learn features from small pathologically proven datasets for differentiating malignant from benign polyps, we propose a Multi-scale and Multi-level based Gray-level Co-occurrence Matrix CNN (MM-GLCM-CNN). Specifically, instead of inputting the lesions' medical images, the GLCM, which characterizes lesion heterogeneity in terms of image texture, is fed into the MM-GLCM-CNN model for training. This aims to improve feature extraction by introducing multi-scale and multi-level analysis into the construction of lesion texture characteristic descriptors (LTCDs). To learn and fuse multiple sets of LTCDs from small datasets for lesion diagnosis, we further propose an adaptive multi-input CNN learning framework. Furthermore, an Adaptive Weight Network is used to highlight important information and suppress redundant information after the fusion of the LTCDs. We evaluated the performance of MM-GLCM-CNN by the area under the receiver operating characteristic curve (AUC) merit on small private lesion datasets of colon polyps. The AUC score reaches 93.99%, a gain of 1.49% over current state-of-the-art lesion classification methods on the same dataset. This gain indicates the importance of incorporating lesion characteristic heterogeneity for the prediction of lesion malignancy using small pathologically proven datasets.
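The gray-level co-occurrence matrix fed to the network above counts how often pairs of gray levels co-occur at a given pixel offset. A minimal sketch for a horizontal offset of one pixel (toy 4-level image; the paper's pipeline is multi-scale and multi-level, which this does not reproduce):

```python
def glcm(image, levels, dx=1, dy=0, symmetric=True):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for i in range(h):
        for j in range(w):
            ni, nj = i + dy, j + dx
            if 0 <= ni < h and 0 <= nj < w:
                m[image[i][j]][image[ni][nj]] += 1
                if symmetric:  # count the pair in both directions
                    m[image[ni][nj]][image[i][j]] += 1
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m]

# Toy 4x4 image quantized to 4 gray levels.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, levels=4)
```

Each entry p[a][b] is the probability that gray level a has neighbor b at the chosen offset; varying the offset (scale) and quantization (level) yields the multiple GLCMs the model ingests.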
Affiliation(s)
- Shu Zhang
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China
- Jinru Wu
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China
- Enze Shi
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China
- Sigang Yu
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Lihong Connie Li
- Department of Engineering & Environmental Science, City University of New York, Staten Island, NY 10314, USA
- Licheng Ryan Kuo
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Marc Jason Pomeroy
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Zhengrong Jerome Liang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
14
Wang X, Li N, Yin X, Xing L, Zheng Y. Classification of metastatic hepatic carcinoma and hepatocellular carcinoma lesions using contrast-enhanced CT based on EI-CNNet. Med Phys 2023; 50:5630-5642. [PMID: 36869656 DOI: 10.1002/mp.16340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Revised: 02/24/2023] [Accepted: 02/24/2023] [Indexed: 03/05/2023] Open
Abstract
BACKGROUND For hepatocellular carcinoma and metastatic hepatic carcinoma, imaging is one of the main diagnostic methods. In clinical practice, diagnosis has mainly relied on experienced imaging physicians, which is inefficient and cannot meet the demand for rapid and accurate diagnosis. How to efficiently and accurately classify the two types of liver cancer based on imaging is therefore an urgent problem. PURPOSE The purpose of this study was to use a deep learning classification model to help radiologists classify single metastatic hepatic carcinoma and hepatocellular carcinoma based on the enhanced features of contrast-enhanced CT (computed tomography) portal-phase images of the liver. METHODS In this retrospective study, 52 patients with metastatic hepatic carcinoma and 50 patients with hepatocellular carcinoma were among the patients who underwent preoperative enhanced CT examinations from 2017-2020. A total of 565 CT slices from these patients were used to train and validate the classification network (EI-CNNet, training/validation: 452/113). First, the EI block was used to extract edge information from CT slices to enrich fine-grained information for classification. Then, the receiver operating characteristic (ROC) curve was used to evaluate the performance, accuracy, and recall of EI-CNNet. Finally, the classification results of EI-CNNet were compared with popular classification models. RESULTS Using 80% of the data for model training and 20% for validation, the average accuracy of this experiment was 98.2% ± 0.62 (mean ± standard deviation (SD)), the recall rate was 97.23% ± 2.77, the precision rate was 98.02% ± 2.07, the network parameters totaled 11.83 MB, and the validation time was 9.83 s/sample. The classification accuracy was improved by 20.98% compared to the base CNN network, whose validation time was 10.38 s/sample. Among the other classification networks, InceptionV3 showed improved classification results, but with more parameters and a validation time of 33 s/sample; our method improved classification accuracy over it by 6.51%. CONCLUSION EI-CNNet demonstrated promising diagnostic performance, has the potential to reduce the workload of radiologists, and may help distinguish in time whether a tumor is primary or metastatic, which might otherwise be missed or misjudged.
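The EI block above enriches the input with edge information. A classical way to build such an edge channel is a Sobel gradient-magnitude map, sketched here on a toy image (the paper's EI block is a learned module, so this only illustrates the idea of an edge-information channel):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve_at(img, ker, i, j):
    """3x3 correlation of kernel `ker` centered at pixel (i, j)."""
    return sum(ker[a][b] * img[i + a - 1][j + b - 1]
               for a in range(3) for b in range(3))

def sobel_magnitude(img):
    """Edge-magnitude map; the 1-pixel border is left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = convolve_at(img, SOBEL_X, i, j)
            gy = convolve_at(img, SOBEL_Y, i, j)
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy "slice": dark left half, bright right half -> one vertical edge.
img = [[0, 0, 9, 9] for _ in range(5)]
edges = sobel_magnitude(img)
```

Concatenating such an edge map with the raw slice gives the network an explicit fine-grained boundary cue alongside intensity.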
Affiliation(s)
- Xuehu Wang
- College of Electronic and Information Engineering, Hebei University, Baoding, China
- Research Center of Machine Vision Engineering & Technology of Hebei Province, Baoding, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding, China
- Nie Li
- College of Electronic and Information Engineering, Hebei University, Baoding, China
- Research Center of Machine Vision Engineering & Technology of Hebei Province, Baoding, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding, China
- Xiaoping Yin
- Affiliated Hospital of Hebei University, Baoding, China
- Lihong Xing
- CT/MRI room, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Yongchang Zheng
- Department of Liver Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS & PUMC), Beijing, China
15
Kim MW, Huh JW, Noh YM, Seo HE, Lee DH. Exploring the Paradox of Bone Mineral Density in Type 2 Diabetes: A Comparative Study Using Opportunistic Chest CT Texture Analysis and DXA. Diagnostics (Basel) 2023; 13:2784. [PMID: 37685322 PMCID: PMC10486730 DOI: 10.3390/diagnostics13172784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 08/25/2023] [Accepted: 08/25/2023] [Indexed: 09/10/2023] Open
Abstract
BACKGROUND This study aimed to validate the application of CT texture analysis in estimating Bone Mineral Density (BMD) in patients with Type 2 Diabetes (T2D) and comparing it with the results of dual-energy X-ray absorptiometry (DXA) in a normative cohort. METHODS We analyzed a total of 510 cases (145 T2D patients and 365 normal patients) from a single institution. DXA-derived BMD and CT texture analysis-estimated BMD were compared for each participant. Additionally, we investigated the correlation among 45 different texture features within each group. RESULTS The correlation between CT texture analysis-estimated BMD and DXA-derived BMD in T2D patients was consistently high (0.94 or above), whether measured at L1 BMD, L1 BMC, total hip BMD, or total hip BMC. In contrast, the normative cohort showed a modest correlation, ranging from 0.66 to 0.75. Among the 45 texture features, significant differences were found in the Contrast V 64 and Contrast V 128 features in the normal group. CONCLUSION In essence, our study emphasizes that the clinical assessment of bone health, particularly in T2D patients, should not merely rely on traditional measures, such as DXA BMD. Rather, it may be beneficial to incorporate other diagnostic tools, such as CT texture analysis, to better comprehend the complex interplay between various factors impacting bone health.
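The agreement reported above between CT texture-estimated and DXA-derived BMD is a correlation coefficient. A minimal Pearson-correlation sketch (made-up paired measurements, not study data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length measurement lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired L1 BMD values (g/cm^2): DXA vs. CT texture estimate.
dxa = [0.85, 0.92, 1.01, 1.10, 1.18]
ct_estimate = [0.83, 0.95, 1.00, 1.12, 1.17]
r = pearson_r(dxa, ct_estimate)
```

A value near 1, like the 0.94+ reported for the T2D group, means the CT texture estimate tracks DXA almost linearly; the normative cohort's 0.66-0.75 indicates a much looser relationship.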
Affiliation(s)
- Dong Ha Lee
- Department of Orthopedic Surgery, Busan Medical Center, 62, Yangjeong-ro, Busanjin-gu, Busan 47227, Republic of Korea; (M.W.K.); (J.W.H.); (Y.M.N.); (H.E.S.)
16
Zhang J, Zhou K. Identification of Solid and Liquid Materials Using Acoustic Signals and Frequency-Graph Features. ENTROPY (BASEL, SWITZERLAND) 2023; 25:1170. [PMID: 37628200 PMCID: PMC10453644 DOI: 10.3390/e25081170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 07/24/2023] [Accepted: 08/03/2023] [Indexed: 08/27/2023]
Abstract
Material identification is playing an increasingly important role in sectors such as industry, petrochemicals, and mining, and in our daily lives. In recent years, material identification has been utilized for security checks, waste sorting, etc. However, current methods for identifying materials require direct contact with the target and specialized equipment that can be costly, bulky, and not easily portable. Past proposals for addressing this limitation relied on non-contact material identification methods, such as Wi-Fi-based and radar-based methods, which can identify materials with high accuracy without physical contact; however, they are not easily integrated into portable devices. This paper introduces a novel non-contact material identification method based on acoustic signals. Different from previous work, our design leverages the built-in microphone and speaker of smartphones as the transceiver to identify target materials. The fundamental idea of our design is that acoustic signals, when propagated through different materials, reach the receiver via multiple paths, producing distinct multipath profiles. These profiles can serve as fingerprints for material identification. We captured and extracted them using acoustic signals, calculated channel impulse response (CIR) measurements, and then extracted image features from the time-frequency domain feature graphs, including histogram of oriented gradients (HOG) and gray-level co-occurrence matrix (GLCM) image features. Furthermore, we adopted the error-correcting output code (ECOC) learning method combined with majority voting to identify target materials. We built a prototype using three mobile phones based on the Android platform. The results from three different solid and liquid materials in varied multipath environments reveal that our design can achieve average identification accuracies of 90% and 97%.
Affiliation(s)
- Jie Zhang
- School of Computer Science & Technology, Xi’an University of Posts & Telecommunications, Xi’an 710121, China
- School of Information Science and Technology, Northwest University, Xi’an 710127, China
- Kexin Zhou
- School of Computer Science & Technology, Xi’an University of Posts & Telecommunications, Xi’an 710121, China
17
Valjarevic S, Jovanovic MB, Miladinovic N, Cumic J, Dugalic S, Corridon PR, Pantic I. Gray-Level Co-occurrence Matrix Analysis of Nuclear Textural Patterns in Laryngeal Squamous Cell Carcinoma: Focus on Artificial Intelligence Methods. MICROSCOPY AND MICROANALYSIS : THE OFFICIAL JOURNAL OF MICROSCOPY SOCIETY OF AMERICA, MICROBEAM ANALYSIS SOCIETY, MICROSCOPICAL SOCIETY OF CANADA 2023; 29:1220-1227. [PMID: 37749686 DOI: 10.1093/micmic/ozad042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Revised: 03/05/2023] [Accepted: 03/10/2023] [Indexed: 09/27/2023]
Abstract
Gray-level co-occurrence matrix (GLCM) and discrete wavelet transform (DWT) analyses are two contemporary computational methods that can identify discrete changes in cell and tissue textural features. Previous research has indicated that these methods may be applicable in pathology for the identification and classification of various types of cancer. In this study, we present findings that squamous epithelial cells in laryngeal carcinoma, which appear morphologically intact during conventional pathohistological evaluation, have distinct nuclear GLCM and DWT features. The average values of nuclear GLCM indicators of these cells, such as angular second moment, inverse difference moment, and textural contrast, differ substantially from those in noncancerous tissue. In this work, we also propose machine learning models based on random forests and support vector machines that can be successfully trained to separate the cells using GLCM and DWT quantifiers as input data. We show that, based on a limited cell sample, these models have relatively good classification accuracy and discriminatory power, which makes them suitable candidates for the future development of AI-based sensors potentially applicable in laryngeal carcinoma diagnostic protocols.
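The nuclear texture indicators named above are standard GLCM statistics. A minimal sketch computing angular second moment, inverse difference moment, and contrast from a normalized co-occurrence matrix (toy 3-level matrix, not the study's data):

```python
def glcm_features(p):
    """Texture statistics of a normalized GLCM p (all entries sum to 1)."""
    n = len(p)
    # Angular second moment: high when few gray-level pairs dominate.
    asm_ = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    # Inverse difference moment: high when mass sits near the diagonal.
    idm = sum(p[i][j] / (1 + (i - j) ** 2) for i in range(n) for j in range(n))
    # Contrast: weights entries by squared gray-level difference.
    contrast = sum(((i - j) ** 2) * p[i][j] for i in range(n) for j in range(n))
    return asm_, idm, contrast

# Hypothetical 3-level normalized GLCM of a cell nucleus.
p = [[0.30, 0.10, 0.00],
     [0.10, 0.25, 0.05],
     [0.00, 0.05, 0.15]]
asm_, idm, contrast = glcm_features(p)
```

A perfectly homogeneous nucleus would concentrate all mass on the diagonal (contrast 0, IDM 1); shifts in these values between carcinoma and noncancerous tissue are what the classifiers above consume.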
Affiliation(s)
- Svetlana Valjarevic
- University of Belgrade, Faculty of Medicine, Clinical Hospital Center "Zemun", Vukova 9, RS-11080 Belgrade, Serbia
- Milan B Jovanovic
- University of Belgrade, Faculty of Medicine, Clinical Hospital Center "Zemun", Vukova 9, RS-11080 Belgrade, Serbia
- Nenad Miladinovic
- University of Belgrade, Faculty of Medicine, Clinical Hospital Center "Zemun", Vukova 9, RS-11080 Belgrade, Serbia
- Jelena Cumic
- University of Belgrade, Faculty of Medicine, University Clinical Centre of Serbia, Dr. Koste Todorovića 8, RS-11129 Belgrade, Serbia
- Stefan Dugalic
- University of Belgrade, Faculty of Medicine, University Clinical Centre of Serbia, Dr. Koste Todorovića 8, RS-11129 Belgrade, Serbia
- Peter R Corridon
- Department of Immunology and Physiology, College of Medicine and Health Sciences, Khalifa University of Science and Technology, Shakhbout Bin Sultan St - Hadbat Al Za'faranah - Zone 1 - Abu Dhabi, UAE
- Biomedical Engineering, Healthcare Engineering Innovation Center, Khalifa University of Science and Technology, Shakhbout Bin Sultan St - Hadbat Al Za'faranah - Zone 1 - Abu Dhabi, UAE
- Center for Biotechnology, Khalifa University of Science and Technology, Shakhbout Bin Sultan St - Hadbat Al Za'faranah - Zone 1 - Abu Dhabi, UAE
- Igor Pantic
- University of Belgrade, Faculty of Medicine, Department of Medical Physiology, Višegradska 26/2, RS-11129 Belgrade, Serbia
- Department of Pharmacology, College of Medicine and Health Sciences, Khalifa University of Science and Technology, Shakhbout Bin Sultan St - Hadbat Al Za'faranah - Zone 1 - Abu Dhabi, UAE
- University of Haifa, 199 Abba Hushi Blvd, Mount Carmel, Haifa IL-3498838, Israel
18
Chang S, Gao Y, Pomeroy MJ, Bai T, Zhang H, Lu S, Pickhardt PJ, Gupta A, Reiter MJ, Gould ES, Liang Z. Exploring Dual-Energy CT Spectral Information for Machine Learning-Driven Lesion Diagnosis in Pre-Log Domain. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1835-1845. [PMID: 37022248 PMCID: PMC10238622 DOI: 10.1109/tmi.2023.3240847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
In this study, we proposed a computer-aided diagnosis (CADx) framework for dual-energy spectral CT (DECT) that operates directly on the transmission data in the pre-log domain, called CADxDE, to explore spectral information for lesion diagnosis. CADxDE includes material identification and machine learning (ML)-based CADx. Benefiting from DECT's capability of performing virtual monoenergetic imaging with the identified materials, the responses of different tissue types (e.g., muscle, water, and fat) in lesions at each energy can be explored by ML for CADx. Without losing essential factors in the DECT scan, a pre-log domain model-based iterative reconstruction is adopted to obtain decomposed material images, which are then used to generate virtual monoenergetic images (VMIs) at n selected energies. While these VMIs share the same anatomy, their contrast distribution patterns contain rich information along the n energies for tissue characterization. Thus, a corresponding ML-based CADx is developed to exploit the energy-enhanced tissue features for differentiating malignant from benign lesions. Specifically, an original image-driven multi-channel three-dimensional convolutional neural network (CNN) and extracted lesion feature-based ML CADx methods are developed to show the feasibility of CADxDE. Results from three pathologically proven clinical datasets showed 4.01% to 14.25% higher AUC (area under the receiver operating characteristic curve) scores than both the conventional DECT data (high and low energy spectra separately) and the conventional CT data. The mean gain of >9.13% in AUC scores indicates that the energy-spectral-enhanced tissue features from CADxDE have great potential to improve lesion diagnosis performance.
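A virtual monoenergetic image like those above is formed per voxel as a linear combination of the decomposed basis-material images, weighted by each material's attenuation at the chosen energy. A per-voxel sketch with made-up attenuation values (illustrative physics only, not the paper's calibrated coefficients):

```python
# Hypothetical mass attenuation coefficients (cm^2/g) at two energies (keV).
MU = {
    "water": {50: 0.22, 70: 0.19},
    "bone":  {50: 0.42, 70: 0.27},
}

def vmi(material_images, energy_kev):
    """Virtual monoenergetic image: sum_m mu_m(E) * density_m per voxel."""
    h, w = len(material_images["water"]), len(material_images["water"][0])
    out = [[0.0] * w for _ in range(h)]
    for material, image in material_images.items():
        mu = MU[material][energy_kev]
        for i in range(h):
            for j in range(w):
                out[i][j] += mu * image[i][j]
    return out

# 2x2 decomposed density images (g/cm^3): soft-tissue voxels plus one bony voxel.
densities = {
    "water": [[1.0, 1.0], [1.0, 0.2]],
    "bone":  [[0.0, 0.0], [0.0, 1.5]],
}
vmi_50 = vmi(densities, 50)
vmi_70 = vmi(densities, 70)
```

Because the weights change with energy, the same anatomy yields different contrast at each of the n energies; that per-voxel trajectory across energies is the tissue signature the ML stage exploits.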
Affiliation(s)
- Shaojie Chang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Marc J. Pomeroy
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Ti Bai
- Department of Radiation Oncology, University of Texas Southwestern Medical Centre, Dallas, TX 75390, USA
- Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, NY 10065, USA
- Siming Lu
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Perry J. Pickhardt
- Department of Radiology, School of Medicine, University of Wisconsin, Madison, WI 53792, USA
- Amit Gupta
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Michael J. Reiter
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Elaine S. Gould
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Zhengrong Liang
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
19
Qu W, Zhou Z, Yuan G, Li S, Li J, Chu Q, Zhang Q, Xie Q, Li Z, Kamel IR. Is the radiomics-clinical combined model helpful in distinguishing between pancreatic cancer and mass-forming pancreatitis? Eur J Radiol 2023; 164:110857. [PMID: 37172441 DOI: 10.1016/j.ejrad.2023.110857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 03/22/2023] [Accepted: 04/28/2023] [Indexed: 05/15/2023]
Abstract
PURPOSE To develop CT-based radiomics models for distinguishing between resectable PDAC and mass-forming pancreatitis (MFP) and to provide a non-invasive tool for cases with equivocal imaging findings in which EUS-FNA would otherwise be needed. METHODS A total of 201 patients with resectable PDAC and 54 patients with MFP were included. Development cohort: patients without preoperative EUS-FNA (175 PDAC cases, 38 MFP cases); validation cohort: patients with EUS-FNA (26 PDAC cases, 16 MFP cases). Two radiomic signatures (LASSOscore, PCAscore) were developed based on the LASSO model and principal component analysis. LASSOCli and PCACli prediction models were established by combining clinical features with CT radiomic features. ROC analysis and decision curve analysis (DCA) were performed to evaluate the utility of the models versus EUS-FNA in the validation cohort. RESULTS In the validation cohort, both radiomic signatures (LASSOscore, PCAscore) were effective in distinguishing between resectable PDAC and MFP (AUCLASSO = 0.743, 95% CI: 0.590-0.896; AUCPCA = 0.788, 95% CI: 0.639-0.938) and improved the diagnostic accuracy of the baseline onlyCli model (AUConlyCli = 0.760, 95% CI: 0.614-0.960) after combination with variables including age, CA19-9, and the double-duct sign (AUCPCACli = 0.880, 95% CI: 0.776-0.983; AUCLASSOCli = 0.825, 95% CI: 0.694-0.955). The PCACli model showed performance comparable to FNA (AUCFNA = 0.810, 95% CI: 0.685-0.935). In DCA, the net benefit of the PCACli model was superior to that of EUS-FNA, avoiding biopsies in 70 per 1000 patients at a risk threshold of 35%. CONCLUSIONS The PCACli model showed performance comparable to EUS-FNA in discriminating resectable PDAC from MFP.
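At prediction time, a LASSO-derived radiomic signature such as LASSOscore above is just a sparse weighted sum of standardized features: features whose coefficients were shrunk to zero drop out. A minimal sketch with hypothetical coefficients and feature names (not the paper's fitted model):

```python
def radiomic_score(features, coefficients, intercept=0.0):
    """Sparse linear signature: only features with nonzero LASSO weights count."""
    return intercept + sum(
        coefficients.get(name, 0.0) * value for name, value in features.items()
    )

# Hypothetical LASSO coefficients: most features were shrunk exactly to zero.
coef = {"glcm_entropy": 1.4, "shape_sphericity": -0.9}
lesion = {
    "glcm_entropy": 0.6,        # standardized feature values for one lesion
    "shape_sphericity": -0.3,
    "firstorder_mean": 2.1,     # zero coefficient -> contributes nothing
}
score = radiomic_score(lesion, coef, intercept=0.1)
```

The combined models above then feed such a score into a classifier alongside clinical variables (age, CA19-9, double-duct sign), which is why adding the signature can raise AUC over the clinical-only baseline.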
Affiliation(s)
- Weinuo Qu
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China.
- Ziling Zhou
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China; Biomedical Engineering Department, College of Life Sciences and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China.
- Guanjie Yuan
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China.
- Shichao Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China.
- Jiali Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China.
- Qian Chu
- Department of Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
- Qingpeng Zhang
- Musketeers Foundation Institute of Data Science, The University of Hong Kong, Hong Kong Special Administrative Region; The Department of Pharmacology and Pharmacy, LKS Faculty of Medicine, The University of Hong Kong, Hong Kong Special Administrative Region.
- Qingguo Xie
- Biomedical Engineering Department, College of Life Sciences and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China.
- Zhen Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China.
- Ihab R Kamel
- Johns Hopkins Hospital, Russell H Morgan Department of Radiology & Radiological Science, 600 N Wolfe St, Baltimore, MD 21205, USA.
20
Wang M, Jiang H. Memory-Net: Coupling feature maps extraction and hierarchical feature maps reuse for efficient and effective PET/CT multi-modality image-based tumor segmentation. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110399]
21
Shayeste H, Asl BM. Automatic seizure detection based on Gray Level Co-occurrence Matrix of STFT imaged-EEG. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104109]
22
Jin J, Zhou H, Sun S, Tian Z, Ren H, Feng J, Jiang X. Machine learning based gray-level co-occurrence matrix early warning system enables accurate detection of colorectal cancer pelvic bone metastases on MRI. Front Oncol 2023; 13:1121594. [PMID: 37035167] [PMCID: PMC10073745] [DOI: 10.3389/fonc.2023.1121594]
Abstract
Objective Pelvic bone metastasis in colorectal cancer patients carries a high mortality, so timely diagnosis and intervention to improve prognosis are particularly important. This study therefore aimed to build a bone metastasis prediction model based on a Gray-Level Co-occurrence Matrix (GLCM)-based score to guide clinical diagnosis and treatment. Methods We retrospectively included 614 patients with colorectal cancer who underwent pelvic multiparametric magnetic resonance imaging (MRI) from January 2015 to January 2022 in the gastrointestinal surgery department of Gezhouba Central Hospital of Sinopharm. The GLCM-based score and machine learning algorithms, namely the artificial neural network model (ANNM), random forest model (RFM), decision tree model (DTM), and support vector machine model (SVMM), were used to build prediction models of bone metastasis in colorectal cancer patients. The effectiveness of each model was evaluated mainly by decision curve analysis (DCA), the area under the receiver operating characteristic (AUROC) curve, and the clinical impact curve (CIC). Results We captured fourteen categories of GLCM-based radiomics data for variable screening of the bone metastasis prediction models. Among them, Haralick_90, IV_0, IG_90, Haralick_30, CSV, Entropy, and Haralick_45 were significantly related to the risk of bone metastasis and were listed as candidate variables for the machine learning prediction models. The RFM, combining Haralick_90, Haralick_all, IV_0, IG_90, IG_0, Haralick_30, CSV, Entropy, and Haralick_45, reached [AUC: 0.926, 95% CI: 0.873-0.979] in the training set and [AUC: 0.919, 95% CI: 0.868-0.970] in the internal validation set. The prediction efficiency of the other four model types ranged between [AUC: 0.716, 95% CI: 0.663-0.769] and [AUC: 0.912, 95% CI: 0.859-0.965].
Conclusion The automatic segmentation model based on diffusion-weighted imaging (DWI) using a deep learning method can accurately segment the pelvic bone structure, and the subsequently established radiomics model can effectively detect bone metastases within the pelvis. The RFM algorithm in particular can provide a new method for automatically evaluating pelvic bone turnover in colorectal cancer patients.
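For readers unfamiliar with the GLCM underlying the score, its construction can be sketched in a few lines (distance 1, 0° direction; the toy image and quantization are assumptions, not the study's data):

```python
import numpy as np

# Minimal sketch of a gray-level co-occurrence matrix (GLCM) at distance 1,
# direction 0 degrees, for a small image quantized to values 0..levels-1.
def glcm(img, levels, dx=1, dy=0):
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1  # count the pixel pair
    return m

img = np.array([[0, 0, 1],
                [0, 1, 2],
                [2, 2, 2]])
m = glcm(img, levels=3)
print(m)  # 6 horizontal pixel pairs in a 3x3 image
```

Haralick-type features (contrast, entropy, etc.) are then scalar summaries of this matrix, usually after normalizing it to joint probabilities.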
23
A Review of Radiomics in Predicting Therapeutic Response in Colorectal Liver Metastases: From Traditional to Artificial Intelligence Techniques. Healthcare (Basel) 2022; 10:2075. [DOI: 10.3390/healthcare10102075]
Abstract
An early evaluation of colorectal cancer liver metastasis (CRCLM) is crucial in determining treatment options that ultimately affect patient survival rates and outcomes. Radiomics (quantitative imaging features) has recently gained popularity in diagnostic and therapeutic strategies. Despite this, radiomics faces many challenges and limitations. This study sheds light on these limitations by reviewing the studies that used radiomics to predict therapeutic response in CRCLM. Despite radiomics' potential to enhance clinical decision-making, it lacks standardization. According to the results of this study, the instability of radiomics quantification is caused by variation in the CT acquisition parameters, the lesion segmentation methods used for contouring liver metastases, the feature extraction methods, and the dataset sizes used for experimentation and validation. Accordingly, the study recommends combining radiomics with deep learning to improve prediction accuracy.
24
Cui R, Yang R, Liu F, Cai C. N-Net: Lesion region segmentations using the generalized hybrid dilated convolutions for polyps in colonoscopy images. Front Bioeng Biotechnol 2022; 10:963590. [DOI: 10.3389/fbioe.2022.963590]
Abstract
Colorectal cancer has the second-highest incidence rate among women and the third-highest among men. Colorectal polyps are potential prognostic indicators of colorectal cancer, and colonoscopy is the gold standard for the biopsy and removal of colorectal polyps. In this scenario, one of the main concerns is ensuring the accuracy of lesion region identification. However, the miss rate of polyps under manual observation in colonoscopy can reach 14%-30%. In this paper, we focus on identifying polyps in clinical colonoscopy images and propose a new N-shaped deep neural network (N-Net) structure to perform lesion region segmentation. The N-Net adopts an encoder-decoder framework, with DenseNet modules implemented in the encoding path. Moreover, we propose a strategy for designing generalized hybrid dilated convolutions (GHDC), which allow flexible dilation rates and convolutional kernel sizes, to facilitate the transmission of multi-scale information with expanded receptive fields. Based on this strategy, we design four GHDC blocks to connect the encoding and decoding paths. Experiments on two publicly available colonoscopy polyp segmentation datasets, Kvasir-SEG and CVC-ClinicDB, verify the rationality and superiority of the proposed GHDC blocks and N-Net. In comparative studies with state-of-the-art methods such as TransU-Net, DeepLabV3+ and CA-Net, we show that even with a small number of network parameters, N-Net outperforms them with a Dice of 94.45%, an average symmetric surface distance (ASSD) of 0.38 pix, and a mean intersection-over-union (mIoU) of 89.80% on Kvasir-SEG, and a Dice of 97.03%, an ASSD of 0.16 pix, and an mIoU of 94.35% on CVC-ClinicDB.
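The receptive-field effect of dilated convolution, which GHDC generalizes, can be illustrated with a naive NumPy implementation (an illustrative sketch under simplified assumptions, not the N-Net code; a k x k kernel with dilation rate r covers a span of k + (k-1)(r-1) pixels):

```python
import numpy as np

# Naive 2D dilated convolution: a k x k kernel with dilation rate r samples
# the input on a stride-r grid, covering a span of k + (k-1)*(r-1) pixels.
def dilated_conv2d(img, kernel, rate):
    k = kernel.shape[0]
    span = k + (k - 1) * (rate - 1)          # effective receptive field
    h, w = img.shape
    out = np.zeros((h - span + 1, w - span + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = img[y:y + span:rate, x:x + span:rate]
            out[y, x] = np.sum(patch * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0               # mean filter
out = dilated_conv2d(img, kernel, rate=2)    # 3x3 kernel, rate 2 -> span 5
print(out)                                   # single output: mean of 9 strided samples
```

With rate 2, a 3x3 kernel sees a 5x5 neighborhood while keeping only 9 parameters, which is why mixing dilation rates transmits multi-scale context cheaply.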
25
Saihood A, Karshenas H, Nilchi ARN. Deep fusion of gray level co-occurrence matrices for lung nodule classification. PLoS One 2022; 17:e0274516. [PMID: 36174073] [PMCID: PMC9521911] [DOI: 10.1371/journal.pone.0274516]
Abstract
Lung cancer is a serious threat to human health, with millions dying because of late diagnosis. Computed tomography (CT) scanning of the chest is an efficient method for early detection and classification of lung nodules. The requirement for high accuracy in analyzing CT scan images is a significant challenge in detecting and classifying lung cancer. In this paper, a new deep fusion structure based on long short-term memory (LSTM) is introduced. It is applied to the texture features computed from lung nodules through new volumetric gray-level co-occurrence matrices (GLCMs), classifying the nodules into benign, malignant, and ambiguous. In addition, an improved Otsu segmentation method combined with the water strider optimization algorithm (WSA) is proposed to detect the lung nodules; WSA-Otsu thresholding overcomes the fixed-threshold and execution-time restrictions of previous thresholding methods. Extended experiments assess this fusion structure by considering 2D-GLCMs based on 2D slices and by approximating the proposed 3D-GLCM computation with volumetric 2.5D-GLCMs. The proposed methods are trained and assessed on the LIDC-IDRI dataset. The accuracy, sensitivity, and specificity obtained for 2D-GLCM fusion are 94.4%, 91.6%, and 95.8%, respectively. For 2.5D-GLCM fusion, they are 97.33%, 96%, and 98%, respectively. For 3D-GLCM, the accuracy, sensitivity, and specificity of the proposed fusion structure reach 98.7%, 98%, and 99%, respectively, outperforming most state-of-the-art counterparts. The results also indicate that the WSA-Otsu method requires a shorter execution time and yields a more accurate thresholding process.
Affiliation(s)
- Ahmed Saihood
- Artificial Intelligence Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
- Faculty of Computer Science and Mathematics, University of Thi-Qar, Nasiriyah, Thi-Qar, Iraq
- Hossein Karshenas
- Artificial Intelligence Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
- Ahmad Reza Naghsh Nilchi
- Artificial Intelligence Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
26
A semi-supervised learning approach for COVID-19 detection from chest CT scans. Neurocomputing 2022; 503:314-324. [PMID: 35765410] [PMCID: PMC9221925] [DOI: 10.1016/j.neucom.2022.06.076]
Abstract
COVID-19 has spread rapidly all over the world, reaching more than 200 countries and regions. Early screening of suspected infected patients is essential for preventing and combating COVID-19. Computed tomography (CT) is a fast and efficient tool that can quickly provide chest scan results. To reduce the burden of CT reading on doctors, this article designs a high-precision algorithm for intelligent diagnosis of COVID-19 from chest CTs. A semi-supervised learning approach is developed for the setting where only a small amount of labelled data is available. While following the MixMatch rules to conduct sophisticated data augmentation, we introduce a model training technique to reduce the risk of over-fitting, and a new data enhancement method is proposed to modify the regularization term in MixMatch. To further enhance the generalization of the model, a convolutional neural network based on an attention mechanism is developed to extract multi-scale features from CT scans. The proposed algorithm is evaluated on an independent chest CT dataset for COVID-19 and achieves an area under the receiver operating characteristic curve (AUC) of 0.932, accuracy of 90.1%, sensitivity of 91.4%, specificity of 88.9%, and F1-score of 89.9%. The results show that the proposed algorithm can accurately diagnose whether a chest CT indicates COVID-19 and can help doctors diagnose rapidly in the early stages of an outbreak.
27
You Z, Jiang M, Shi Z, Zhao M, Shi C, Du S, Hérard AS, Souedet N, Delzescaux T. Multiscale segmentation- and error-guided iterative convolutional neural network for cerebral neuron segmentation in microscopic images. Microsc Res Tech 2022; 85:3541-3552. [PMID: 35855638] [DOI: 10.1002/jemt.24206]
Abstract
This article uses microscopy images obtained from diverse anatomical regions of the macaque brain for neuron semantic segmentation. Cell segmentation plays a critical role in extracting cerebral information, such as cell counting, cell morphometry and distribution analysis. Accurate automated neuron segmentation is challenging due to the complex structure of the brain, the large intra-class staining intensity difference within the neuron class, the small inter-class staining intensity difference between the neuron and tissue classes, and the unbalanced dataset. To address this problem, we propose a multiscale segmentation- and error-guided iterative convolutional neural network (MSEG-iCNN) to improve semantic segmentation performance in major anatomical regions of the macaque brain. After evaluating microscopic images from 17 anatomical regions, the semantic segmentation performance for neurons is improved by 10.6%, 4.0%, 1.5%, and 1.2% compared with Random Forest, FCN-8s, U-Net, and UNet++, respectively. Especially for neurons with brighter staining intensity in anatomical regions such as the lateral geniculate, globus pallidus and hypothalamus, the performance is improved by 66.1%, 23.9%, 11.2%, and 6.7%, respectively. Experiments show that the proposed method can efficiently segment neurons with a wide range of staining intensities. The semantic segmentation results are of great significance and can be further used for neuron instance segmentation, morphological analysis and disease diagnosis.
Affiliation(s)
- Zhenzhen You
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China; CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Paris, France
- Ming Jiang
- National Laboratory of Radar Signal Processing, Xidian University, Xi'an, China
- Zhenghao Shi
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Minghua Zhao
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Cheng Shi
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Shuangli Du
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Anne-Sophie Hérard
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Paris, France
- Nicolas Souedet
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Paris, France
- Thierry Delzescaux
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Paris, France
28
Khomkham B, Lipikorn R. Pulmonary Lesion Classification Framework Using the Weighted Ensemble Classification with Random Forest and CNN Models for EBUS Images. Diagnostics (Basel) 2022; 12:1552. [PMID: 35885458] [PMCID: PMC9319293] [DOI: 10.3390/diagnostics12071552]
Abstract
Lung cancer is a deadly disease with a high mortality rate. Endobronchial ultrasonography (EBUS) is one of the methods for detecting pulmonary lesions. Computer-aided diagnosis of pulmonary lesions from images can help radiologists classify lesions; however, most existing methods need a large volume of data to give good results. Thus, this paper proposes a novel pulmonary lesion classification framework for EBUS images that works well with small datasets. The proposed framework integrates the statistical results of three classification models using weighted ensemble classification. The three models are a radiomics-feature and patient-data-based model, a single-image-based model, and a multi-patch-based model. The radiomics features are combined with the patient data as input to a random forest, whereas the EBUS images are the input to the two CNN models. The performance of the proposed framework was evaluated on a set of 200 EBUS images consisting of 124 malignant lesions and 76 benign lesions. The experimental results show that the accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve are 95.00%, 100%, 86.67%, 92.59%, 100%, and 93.33%, respectively. This framework can significantly improve pulmonary lesion classification.
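The weighted-ensemble step can be sketched in a few lines (the weights and probabilities below are illustrative placeholders, not the paper's fitted values):

```python
import numpy as np

# Sketch of weighted ensemble classification: blend the malignancy
# probabilities of three models with fixed weights (all values illustrative).
def weighted_ensemble(probs, weights):
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                             # normalize the weights
    return np.tensordot(w, np.asarray(probs, dtype=float), axes=1)

# rows: radiomics + patient-data RF, single-image CNN, multi-patch CNN
p_models = [[0.80, 0.30],                    # two test lesions each
            [0.60, 0.40],
            [0.90, 0.20]]
p_final = weighted_ensemble(p_models, weights=[2, 1, 1])
print(p_final)                               # lesion 1 ~ 0.775, lesion 2 ~ 0.3
```

Thresholding the blended probability at 0.5 then yields the final benign/malignant call; the weights let a stronger model dominate without discarding the others.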
Affiliation(s)
- Rajalida Lipikorn
- Machine Intelligence and Multimedia Information Technology Laboratory (MIMIT), Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Bangkok 10330, Thailand
29
Cao W, Pomeroy MJ, Liang Z, Abbasi AF, Pickhardt PJ, Lu H. Vector textures derived from higher order derivative domains for classification of colorectal polyps. Vis Comput Ind Biomed Art 2022; 5:16. [PMID: 35699865] [PMCID: PMC9198194] [DOI: 10.1186/s42492-022-00108-1]
Abstract
Textures have become widely adopted as an essential tool for lesion detection and classification through analysis of lesion heterogeneity. In this study, higher-order derivative images are employed to combat the challenge of poor contrast across similar tissue types in certain imaging modalities. To make good use of the derivative information, a novel concept of vector texture is first introduced to construct and extract several types of polyp descriptors. Two widely used differential operators, the gradient operator and the Hessian operator, are utilized to generate the first- and second-order derivative images. These derivative volumetric images are used to produce two angle-based and two vector-based (including both angle and magnitude) textures. Next, a vector-based co-occurrence matrix is proposed to extract texture features, which are fed to a random forest classifier to perform polyp classification. To evaluate the performance of our method, experiments are conducted on a private colorectal polyp dataset obtained from computed tomographic colonography. Compared with four existing state-of-the-art methods, our method outperforms the competing methods by 4%-13% as evaluated by the area under the receiver operating characteristic curve.
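The first-order ingredient of the vector-texture idea, a per-voxel gradient expressed as magnitude and angle, can be sketched with NumPy (a toy 2D slice stands in for the CT volume; not the authors' implementation):

```python
import numpy as np

# Per-pixel gradient vectors summarized as magnitude and angle maps,
# the first-order building block of the vector textures (toy 2D slice).
img = np.array([[0., 0., 1.],
                [0., 1., 2.],
                [1., 2., 2.]])
gy, gx = np.gradient(img)            # first-order derivatives (rows, cols)
magnitude = np.hypot(gx, gy)         # vector length per pixel
angle = np.arctan2(gy, gx)           # vector orientation per pixel
print(magnitude[1, 1], angle[1, 1])  # center pixel: sqrt(2), pi/4
```

Co-occurrence statistics over quantized (angle, magnitude) pairs, rather than raw intensities, then give the vector-based co-occurrence matrix described above.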
Affiliation(s)
- Weiguo Cao
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY 11794, USA
- Marc J Pomeroy
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, State University of New York at Stony Brook, Stony Brook, NY 11794, USA
- Zhengrong Liang
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, State University of New York at Stony Brook, Stony Brook, NY 11794, USA
- Almas F Abbasi
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY 11794, USA
- Perry J Pickhardt
- Department of Radiology, University of Wisconsin Medical School, Madison, WI 53705, USA
- Hongbing Lu
- Department of Biomedical Engineering, the Fourth Medical University, Xi'an, 710032, Shaanxi, China
30
Modified Gray-Level Haralick Texture Features for Early Detection of Diabetes Mellitus and High Cholesterol with Iris Image. Int J Biomed Imaging 2022; 2022:5336373. [PMID: 35496640] [PMCID: PMC9045982] [DOI: 10.1155/2022/5336373]
Abstract
The iris has specific advantages: it can record organ conditions, body constitution, and psychological disorders. Traces related to the intensity or deviation of organs caused by disease are recorded systematically and patterned on the iris and its surroundings, and these patterns can be recognized using image processing techniques. Based on the patterns in iris images, this paper aims to provide an alternative noninvasive method for the early detection of diabetes mellitus (DM) and high cholesterol (HC). We perform detection of both diseases simultaneously by developing invariant Haralick features on images quantized to 256, 128, 64, 32, and 16 gray levels. Among the many feature extraction methods that have been introduced, we build on the gray-level co-occurrence matrix (GLCM), using its volumetric extension, 3D-GLCM. The 3D-GLCMs, formed at a distance of d = 1 and in the directions of 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°, are used to calculate Haralick features and to develop Haralick features that are invariant to the number of quantization gray levels. The test results show that the invariant features at a gray level of 256 have the best identification performance. On dataset I, the accuracy is 97.92, precision 96.88, and recall 95.83; on dataset II, the accuracy is 95.83, precision 89.69, and recall 91.67. Identification of DM and HC trained on the invariant features showed higher accuracy than the original features.
31
Pantic I, Paunovic J, Pejic S, Drakulic D, Todorovic A, Stankovic S, Vucevic D, Cumic J, Radosavljevic T. Artificial intelligence approaches to the biochemistry of oxidative stress: Current state of the art. Chem Biol Interact 2022; 358:109888. [PMID: 35296431] [DOI: 10.1016/j.cbi.2022.109888]
Abstract
Artificial intelligence (AI) and machine learning models are frequently used today for the classification and prediction of various biochemical processes and phenomena. In recent years, numerous research efforts have focused on developing such models for the assessment, categorization, and prediction of oxidative stress. Supervised machine learning can successfully automate the evaluation and quantification of oxidative damage in biological samples, as well as extract useful data from the abundance of experimental results. In this concise review, we cover the possible applications of neural networks, decision trees, and regression analysis as three common strategies in machine learning. We also review recent work on the weaknesses and limitations of artificial intelligence in biochemistry and related scientific areas. Finally, we discuss innovative future approaches by which AI can contribute to the automation of oxidative stress measurement and the diagnosis of diseases associated with oxidative damage.
Affiliation(s)
- Igor Pantic
- University of Belgrade, Faculty of Medicine, Institute of Medical Physiology, Laboratory for Cellular Physiology, Visegradska 26/II, RS-11129, Belgrade, Serbia; University of Haifa, 199 Abba Hushi Blvd, Mount Carmel, Haifa, IL, 3498838, Israel; Ben-Gurion University of the Negev, Faculty of Health Sciences, Department of Physiology and Cell Biology, 84105 Be'er Sheva, Israel.
- Jovana Paunovic
- University of Belgrade, Faculty of Medicine, Institute of Pathological Physiology, Dr Subotica 9, RS-11129, Belgrade, Serbia
- Snezana Pejic
- University of Belgrade, Vinca Institute of Nuclear Sciences, Department of Molecular Biology and Endocrinology, Mike Petrovica Alasa 12-14, RS-11351, Belgrade, Serbia
- Dunja Drakulic
- University of Belgrade, Vinca Institute of Nuclear Sciences, Department of Molecular Biology and Endocrinology, Mike Petrovica Alasa 12-14, RS-11351, Belgrade, Serbia
- Ana Todorovic
- University of Belgrade, Vinca Institute of Nuclear Sciences, Department of Molecular Biology and Endocrinology, Mike Petrovica Alasa 12-14, RS-11351, Belgrade, Serbia
- Sanja Stankovic
- University Clinical Centre of Serbia, Centre for Medical Biochemistry, Visegradska 26, RS-11000, Belgrade, Serbia; University of Kragujevac, Faculty of Medical Sciences, Svetozara Markovica 69, RS-34000, Kragujevac, Serbia
- Danijela Vucevic
- University of Belgrade, Faculty of Medicine, Institute of Pathological Physiology, Dr Subotica 9, RS-11129, Belgrade, Serbia
- Jelena Cumic
- University of Belgrade, Faculty of Medicine, University Clinical Centre of Serbia, Dr. Koste Todorovića 8, RS-11129, Belgrade, Serbia
- Tatjana Radosavljevic
- University of Belgrade, Faculty of Medicine, Institute of Pathological Physiology, Dr Subotica 9, RS-11129, Belgrade, Serbia
32
Guo L, Wang Y, Song Y, Sun T. Classification and detection using hidden Markov model-support vector machine algorithm based on optimal colour space selection for blood images. Cognitive Computation and Systems 2022. [DOI: 10.1049/ccs2.12045]
Affiliation(s)
- Lei Guo
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Yao Wang
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Yuan Song
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Tengyue Sun
- School of Modern Post, Beijing University of Posts and Telecommunications, Beijing, China
33
Davidovic LM, Cumic J, Dugalic S, Vicentic S, Sevarac Z, Petroianu G, Corridon P, Pantic I. Gray-Level Co-occurrence Matrix Analysis for the Detection of Discrete, Ethanol-Induced, Structural Changes in Cell Nuclei: An Artificial Intelligence Approach. Microsc Microanal 2022; 28:265-271. [PMID: 34937605] [DOI: 10.1017/s1431927621013878]
Abstract
Gray-level co-occurrence matrix (GLCM) analysis is a contemporary and innovative computational method for the assessment of textural patterns, applicable in almost any area of microscopy. The aim of our research was to perform the GLCM analysis of cell nuclei in Saccharomyces cerevisiae yeast cells after the induction of sublethal cell damage with ethyl alcohol, and to evaluate the performance of various machine learning (ML) models regarding their ability to separate damaged from intact cells. For each cell nucleus, five GLCM parameters were calculated: angular second moment, inverse difference moment, GLCM contrast, GLCM correlation, and textural variance. Based on the obtained GLCM data, we applied three ML approaches: neural network, random trees, and binomial logistic regression. Statistically significant differences in GLCM features were observed between treated and untreated cells. The multilayer perceptron neural network had the highest classification accuracy. The model also showed a relatively high level of sensitivity and specificity, as well as an excellent discriminatory power in the separation of treated from untreated cells. To the best of our knowledge, this is the first study to demonstrate that it is possible to create a relatively sensitive GLCM-based ML model for the detection of alcohol-induced damage in Saccharomyces cerevisiae cell nuclei.
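The five GLCM parameters listed have standard closed forms; as an illustrative sketch (not the study's software), they can be computed from a normalized, symmetric co-occurrence matrix P (the matrix below is made up):

```python
import numpy as np

# Illustrative computation of the five GLCM features named above from a
# normalized, symmetric co-occurrence matrix P (made-up values).
P = np.array([[4., 2., 0.],
              [2., 2., 1.],
              [0., 1., 4.]])
P /= P.sum()                                # normalize to joint probabilities
i, j = np.indices(P.shape)
mu = (i * P).sum()                          # mean gray level (symmetric P)
variance = (((i - mu) ** 2) * P).sum()      # textural variance
asm = (P ** 2).sum()                        # angular second moment
idm = (P / (1.0 + (i - j) ** 2)).sum()      # inverse difference moment
contrast = (((i - j) ** 2) * P).sum()       # GLCM contrast
correlation = (((i - mu) * (j - mu)) * P).sum() / variance  # GLCM correlation
print(asm, idm, contrast, correlation)
```

The correlation formula shown uses the single mean and variance that apply when P is symmetric; asymmetric matrices need separate row and column statistics.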
Affiliation(s)
- Jelena Cumic
- University of Belgrade, Faculty of Medicine, University Clinical Center of Serbia, Dr. Koste Todorovica 8, RS-11129 Belgrade, Serbia
- Stefan Dugalic
- University of Belgrade, Faculty of Medicine, University Clinical Center of Serbia, Dr. Koste Todorovica 8, RS-11129 Belgrade, Serbia
- Sreten Vicentic
- University of Belgrade, Faculty of Medicine, University Clinical Center of Serbia, Clinic of Psychiatry, Pasterova 2, RS-11000 Belgrade, Serbia
- Zoran Sevarac
- University of Belgrade, Faculty of Organizational Sciences, Jove Ilica 154, RS-11000 Belgrade, Serbia
- Georg Petroianu
- Department of Pharmacology & Therapeutics, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, UAE
- Peter Corridon
- Department of Immunology and Physiology, College of Medicine and Health Sciences; Biomedical Engineering, Healthcare Engineering Innovation Center; Center for Biotechnology; Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, UAE
- Igor Pantic
- University of Belgrade, Faculty of Medicine, Department of Medical Physiology, Laboratory for Cellular Physiology, Visegradska 26/II, RS-11129 Belgrade, Serbia
- University of Haifa, 199 Abba Hushi Blvd., Mount Carmel, Haifa IL-3498838, Israel
34
Abstract
The efficiency of lung cancer screening for reducing mortality is hindered by the high rate of false positives. Artificial intelligence applied to radiomics could help to discard benign cases early in the analysis of CT scans. The limited amount of available data, and the fact that benign cases are a minority, constitute a main challenge for the successful use of state-of-the-art methods (such as deep learning), which can be biased and over-fitted and can lack clinical reproducibility. We present a hybrid approach combining the potential of radiomic features to characterize nodules in CT scans with the generalization ability of feed-forward networks. To obtain maximal reproducibility with minimal training data, we propose an embedding of nodules based on the statistical significance of radiomic features for malignancy detection. This representation space of lesions is the input to a feed-forward network, whose architecture and hyperparameters are optimized using custom metrics of the diagnostic power of the whole system. On an independent set of patients, the best model achieves 100% sensitivity and 83% specificity (AUC = 0.94) for malignancy detection.
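The significance-based embedding described here can be approximated by ranking features with a two-sample statistic and keeping only the most discriminative ones. The sketch below is a hypothetical stand-in using a Welch t-statistic on synthetic data; the feature count, statistic, and cutoff are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def significance_embedding(X, y, k=4):
    """Keep the k features with the largest absolute Welch t-statistic
    between the two classes; a stand-in for a significance-based embedding."""
    a, b = X[y == 1], X[y == 0]
    t = (a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    keep = np.argsort(-np.abs(t))[:k]
    return X[:, keep], keep

# Synthetic "radiomic" features: only feature 3 separates the classes.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 10))
X[y == 1, 3] += 2.0
emb, keep = significance_embedding(X, y, k=2)
```

The low-dimensional embedding `emb` would then feed a small feed-forward classifier, keeping the model compact when training data are scarce.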
Collapse
|
35
|
Wesp P, Grosu S, Graser A, Maurus S, Schulz C, Knösel T, Fabritius MP, Schachtner B, Yeh BM, Cyran CC, Ricke J, Kazmierczak PM, Ingrisch M. Deep learning in CT colonography: differentiating premalignant from benign colorectal polyps. Eur Radiol 2022; 32:4749-4759. [PMID: 35083528 PMCID: PMC9213389 DOI: 10.1007/s00330-021-08532-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 12/06/2021] [Accepted: 12/20/2021] [Indexed: 11/24/2022]
Abstract
Objectives To investigate the differentiation of premalignant from benign colorectal polyps detected by CT colonography using deep learning. Methods In this retrospective analysis of an average-risk colorectal cancer screening sample, polyps of all size categories and morphologies were manually segmented on supine and prone CT colonography images and classified as premalignant (adenoma) or benign (hyperplastic polyp or regular mucosa) according to histopathology. Two deep learning models, SEG and noSEG, were trained on 3D CT colonography image subvolumes to predict polyp class; model SEG was additionally trained with polyp segmentation masks. Diagnostic performance was validated in an independent external multicentre test sample. Predictions were analysed with the visualisation technique Grad-CAM++. Results The training set consisted of 107 colorectal polyps in 63 patients (mean age: 63 ± 8 years, 40 men) comprising 169 polyp segmentations. The external test set included 77 polyps in 59 patients comprising 118 polyp segmentations. Model SEG achieved a ROC-AUC of 0.83 and 80% sensitivity at 69% specificity for differentiating premalignant from benign polyps. Model noSEG yielded a ROC-AUC of 0.75, 80% sensitivity at 44% specificity, and an average Grad-CAM++ heatmap score of ≥ 0.25 in 90% of polyp tissue. Conclusions In this proof-of-concept study, deep learning enabled the differentiation of premalignant from benign colorectal polyps detected with CT colonography and the visualisation of image regions important for predictions. The approach did not require polyp segmentation and thus has the potential to facilitate the identification of high-risk polyps as an automated second reader. Key Points • Non-invasive deep learning image analysis may differentiate premalignant from benign colorectal polyps found in CT colonography scans. • Deep learning autonomously learned to focus on polyp tissue for predictions without the need for prior polyp segmentation by experts.
• Deep learning potentially improves the diagnostic accuracy of CT colonography in colorectal cancer screening by allowing for a more precise selection of patients who would benefit from endoscopic polypectomy, especially for patients with polyps of 6–9 mm size. Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-08532-2.
Affiliation(s)
- Philipp Wesp
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Sergio Grosu
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Anno Graser
- Radiologie München, Burgstraße 7, 80331, Munich, Germany
- Stefan Maurus
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Christian Schulz
- Department of Medicine II, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Thomas Knösel
- Department of Pathology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Matthias P Fabritius
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Balthasar Schachtner
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Comprehensive Pneumology Center (CPC-M), Member of the German Center for Lung Research (DZL), Max-Lebsche-Platz 31, 81377, Munich, Germany
- Benjamin M Yeh
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA, 94117, USA
- Clemens C Cyran
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Jens Ricke
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Philipp M Kazmierczak
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
- Michael Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Germany
36
Cao W, Pomeroy MJ, Zhang S, Tan J, Liang Z, Gao Y, Abbasi AF, Pickhardt PJ. An Adaptive Learning Model for Multiscale Texture Features in Polyp Classification via Computed Tomographic Colonography. Sensors (Basel) 2022; 22:907. [PMID: 35161653 PMCID: PMC8840570 DOI: 10.3390/s22030907] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 01/14/2022] [Accepted: 01/20/2022] [Indexed: 12/10/2022]
Abstract
Objective: As an effective depiction of lesion heterogeneity, texture information extracted from computed tomography has become increasingly important in polyp classification. However, variation and redundancy among multiple texture descriptors make it challenging to integrate them into a general characterization. Considering these two problems, this work proposes an adaptive learning model to integrate multiscale texture features. Methods: To mitigate feature variation, the whole feature set is geometrically split into several independent subsets that are ranked by a learning evaluation measure after preliminary classifications. To reduce feature redundancy, a bottom-up hierarchical learning framework is proposed that ensures a monotonic increase of classification performance while integrating these ranked sets selectively. Two types of classifiers, traditional (random forest + support vector machine) and convolutional neural network (CNN)-based, are employed to perform the polyp classification under the proposed framework, with extended Haralick measures and gray-level co-occurrence matrix (GLCM) as inputs, respectively. Experimental results are based on a retrospective dataset of 63 polyp masses (defined as greater than 3 cm in largest diameter), including 32 adenocarcinomas and 31 benign adenomas, from adult patients undergoing first-time computed tomography colonography who had corresponding histopathology of the detected masses. Results: We evaluate the performance of the proposed models by the area under the receiver operating characteristic curve (AUC). The proposed models show encouraging performance, with an AUC of 0.925 for the traditional classification method and 0.902 for the CNN. The proposed adaptive learning framework outperforms nine well-established classification methods, six traditional and three deep learning, by a large margin.
Conclusions: The proposed adaptive learning model can combat the challenge of feature variation through a multiscale grouping of feature inputs, and that of feature redundancy through a hierarchical sorting of these feature groups. The improved classification performance over the comparative models demonstrates the feasibility and utility of this adaptive learning procedure for feature integration.
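The bottom-up integration idea, rank feature groups by standalone performance and then merge them only while performance keeps improving, can be sketched generically. In the sketch below, the scoring function is a toy surrogate for classification AUC, and the groups and penalty are illustrative assumptions, not the paper's feature sets:

```python
def greedy_integrate(groups, score):
    """Rank feature groups by standalone score, then merge them in order,
    keeping a group only if the combined score strictly improves
    (enforcing the monotonic increase described in the abstract)."""
    ranked = sorted(groups, key=lambda g: score([g]), reverse=True)
    chosen, best = [ranked[0]], score([ranked[0]])
    for g in ranked[1:]:
        s = score(chosen + [g])
        if s > best:
            chosen, best = chosen + [g], s
    return chosen, best

# Toy surrogate for AUC: credit for covering informative features 0-4,
# with a small penalty for each redundant (duplicated) feature.
INFORMATIVE = set(range(5))
def score(gs):
    feats = [f for g in gs for f in g]
    return len(INFORMATIVE & set(feats)) - 0.1 * (len(feats) - len(set(feats)))

groups = [[0, 1], [1, 2], [5, 6], [3, 4]]
chosen, best = greedy_integrate(groups, score)
```

With this toy score, the uninformative group `[5, 6]` is rejected because adding it cannot improve on the already-selected groups, mirroring the selective integration step.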
Affiliation(s)
- Weiguo Cao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Marc J. Pomeroy
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Shu Zhang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Jiaxing Tan
- Department of Computer Science, City University of New York, New York, NY 10314, USA
- Zhengrong Liang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Almas F. Abbasi
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Perry J. Pickhardt
- Department of Radiology, School of Medicine, University of Wisconsin, Madison, WI 53792, USA
37
Viscaino M, Torres Bustos J, Muñoz P, Auat Cheein C, Cheein FA. Artificial intelligence for the early detection of colorectal cancer: A comprehensive review of its advantages and misconceptions. World J Gastroenterol 2021; 27:6399-6414. [PMID: 34720530 PMCID: PMC8517786 DOI: 10.3748/wjg.v27.i38.6399] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 04/26/2021] [Accepted: 09/14/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer (CRC) was the second deadliest cancer worldwide in 2020, with a crude mortality rate of 12.0 per 100,000 inhabitants. It can be prevented if glandular tissue (adenomatous polyps) is detected early. Colonoscopy has been strongly recommended as a screening test for both early cancer and adenomatous polyps. However, it has limitations, including a high miss rate for small (< 10 mm) or flat polyps, which are easily overlooked during visual inspection. With the rapid advancement of technology, artificial intelligence (AI) has become a thriving area in many fields, including medicine. In gastroenterology in particular, AI software has been incorporated into computer-aided diagnosis systems to improve the accuracy of automatic polyp detection and classification as a preventive measure against CRC. This article provides an overview of recent research on AI tools and their applications in the early detection of CRC and adenomatous polyps, along with an insightful analysis of the main advantages and misconceptions in the field.
Affiliation(s)
- Michelle Viscaino
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaiso 2340000, Chile
- Javier Torres Bustos
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaiso 2340000, Chile
- Pablo Muñoz
- Hospital Clinico, University of Chile, Santiago 8380456, Chile
- Cecilia Auat Cheein
- Facultad de Medicina, Universidad Nacional de Santiago del Estero, Santiago del Estero 4200, Argentina
- Fernando Auat Cheein
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaiso 2340000, Chile
38
Liu Y, Zheng H, Liang Z, Miao Q, Brisbane WG, Marks LS, Raman SS, Reiter RE, Yang G, Sung K. Textured-Based Deep Learning in Prostate Cancer Classification with 3T Multiparametric MRI: Comparison with PI-RADS-Based Classification. Diagnostics (Basel) 2021; 11:1785. [PMID: 34679484 PMCID: PMC8535024 DOI: 10.3390/diagnostics11101785] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 09/24/2021] [Accepted: 09/24/2021] [Indexed: 12/24/2022] Open
Abstract
The current standardized scheme for interpreting MRI requires a high level of expertise and exhibits a significant degree of inter-reader and intra-reader variability. An automated prostate cancer (PCa) classification can improve the ability of MRI to assess the spectrum of PCa. The purpose of the study was to evaluate the performance of a texture-based deep learning model (Textured-DL) for differentiating between clinically significant PCa (csPCa) and non-csPCa and to compare the Textured-DL with Prostate Imaging Reporting and Data System (PI-RADS)-based classification (PI-RADS-CLA), where a threshold of PI-RADS ≥ 4, representing highly suspicious lesions for csPCa, was applied. The study cohort included 402 patients (60% (n = 239) of patients for training, 10% (n = 42) for validation, and 30% (n = 121) for testing) with 3T multiparametric MRI matched with whole-mount histopathology after radical prostatectomy. For a given suspicious prostate lesion, the volumetric patches of T2-weighted MRI and apparent diffusion coefficient images were cropped and used as the input to Textured-DL, consisting of a 3D gray-level co-occurrence matrix extractor and a CNN. PI-RADS-CLA by an expert reader served as a baseline to compare classification performance with Textured-DL in differentiating csPCa from non-csPCa. Sensitivity and specificity comparisons were performed using McNemar's test. Bootstrapping with 1000 samples was performed to estimate the 95% confidence interval (CI) for AUC. CIs of sensitivity and specificity were calculated by the Wald method. The Textured-DL model achieved an AUC of 0.85 (CI [0.79, 0.91]), which was significantly higher than the PI-RADS-CLA (AUC of 0.73 (CI [0.65, 0.80]); p < 0.05) for PCa classification, and the specificity was significantly different between Textured-DL and PI-RADS-CLA (0.70 (CI [0.59, 0.82]) vs. 0.47 (CI [0.35, 0.59]); p < 0.05).
In sub-analyses, Textured-DL demonstrated significantly higher specificities in the peripheral zone (PZ) and solitary tumor lesions compared to the PI-RADS-CLA (0.78 (CI [0.66, 0.90]) vs. 0.42 (CI [0.28, 0.57]); 0.75 (CI [0.54, 0.96]) vs. 0.38 (CI [0.14, 0.61]); all p values < 0.05). Moreover, Textured-DL demonstrated a high negative predictive value of 92% while maintaining a positive predictive value of 58% among the lesions with a PI-RADS score of 3. In conclusion, the Textured-DL model was superior to the PI-RADS-CLA in the classification of PCa. In addition, Textured-DL demonstrated superior performance in the specificities for the peripheral zone and solitary tumors compared with PI-RADS-based risk assessment.
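The percentile-bootstrap confidence interval for AUC described in this abstract (1000 case resamples, 95% interval) is a generic procedure and can be sketched with a rank-based AUC. The synthetic scores, resampling scheme, and seed below are illustrative assumptions, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """AUC as the normalised Mann-Whitney U statistic (ties get half credit)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

def bootstrap_auc_ci(scores, labels, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC: resample cases with replacement,
    recompute AUC, and take the alpha/2 and 1-alpha/2 percentiles."""
    rng = np.random.default_rng(seed)
    n, stats = len(scores), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if labels[idx].min() == labels[idx].max():  # need both classes present
            continue
        stats.append(auc(scores[idx], labels[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic classifier scores: class 1 shifted upward, so AUC is above chance.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 60)
scores = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 60)])
lo, hi = bootstrap_auc_ci(scores, labels)
```

Resampling whole cases (rather than per-class) preserves the prevalence uncertainty, which is why the degenerate all-one-class resamples must be skipped.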
Affiliation(s)
- Yongkai Liu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Haoxin Zheng
- Department of Radiological Sciences, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Zhengrong Liang
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Qi Miao
- Department of Radiological Sciences, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Wayne G. Brisbane
- Department of Urology, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Leonard S. Marks
- Department of Urology, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Steven S. Raman
- Department of Radiological Sciences, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Robert E. Reiter
- Department of Urology, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Guang Yang
- National Heart and Lung Institute, Imperial College London, South Kensington, London SW7 2AZ, UK
- Kyunghyun Sung
- Department of Radiological Sciences, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, CA 90095, USA
39
Chen Q, Huang J, Salehi HS, Zhu H, Lian L, Lai X, Wei K. Hierarchical CNN-based occlusal surface morphology analysis for classifying posterior tooth type using augmented images from 3D dental surface models. Comput Methods Programs Biomed 2021; 208:106295. [PMID: 34329895 DOI: 10.1016/j.cmpb.2021.106295] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 07/15/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE 3D digitization of dental models is growing in popularity for dental applications. Classifying tooth type from a single 3D point-cloud model, without the aid of relative position information among teeth, is still a challenging task. METHODS In this paper, 8-class posterior tooth type classification (first premolar, second premolar, first molar, and second molar, in the maxilla and mandible respectively) was investigated by convolutional neural network (CNN)-based occlusal surface morphology analysis. The 3D occlusal surface was transformed into a depth image for basic CNN-based classification. Considering the logical hierarchy of tooth categories, a hierarchical classification structure was proposed to decompose the 8-class classification task into two-stage cascaded classification subtasks. Image augmentations, including traditional geometrical transformations and deep convolutional generative adversarial networks (DCGANs), were applied to each subnetwork and to the cascaded network. RESULTS The results indicate that combining traditional and DCGAN-based augmented images to train CNN models improves classification performance. We achieve an overall accuracy of 91.35%, macro-precision of 91.49%, macro-recall of 91.29%, and macro-F1 of 0.9139 for the 8-class posterior tooth type classification, outperforming other deep learning models. Meanwhile, Grad-CAM results demonstrate that a CNN model trained on our augmented images focuses on smaller important regions for better generality, and that the anatomic landmarks of cusps, fossae, and grooves act as important regions for the cascaded classification model. CONCLUSION The reported work shows that using basic CNNs in a two-stage hierarchical structure can achieve the best classification performance for posterior tooth type from a 3D model without relative position information. The proposed method has the advantages of easy training and a strong ability to learn discriminative features from small image regions.
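The two-stage cascade can be illustrated with a skeleton that routes a sample through a coarse classifier and then a branch-specific one. The features, thresholds, and premolar/molar split below are hypothetical stand-ins for the paper's CNN subnetworks, meant only to show the cascading control flow:

```python
# Hypothetical two-stage cascade mirroring the paper's hierarchy:
# stage 1 coarsely separates premolars from molars; stage 2 resolves the
# exact tooth type within the chosen branch. Features are toy stand-ins.
def stage1(feat):
    # e.g. molars typically present more cusps than premolars
    return "molar" if feat["cusps"] >= 4 else "premolar"

def stage2_premolar(feat):
    return "first premolar" if feat["size"] < 0.5 else "second premolar"

def stage2_molar(feat):
    return "first molar" if feat["size"] >= 0.5 else "second molar"

STAGE2 = {"premolar": stage2_premolar, "molar": stage2_molar}

def classify(feat):
    """Route through the coarse stage, then the branch-specific stage."""
    return STAGE2[stage1(feat)](feat)

print(classify({"cusps": 5, "size": 0.7}))   # prints: first molar
```

The benefit of the cascade is that each stage-2 model only has to separate classes within its own branch, a much easier problem than the flat 8-way task.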
Affiliation(s)
- Qingguang Chen
- School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
- Junchao Huang
- School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
- Hassan S Salehi
- Department of Electrical and Computer Engineering, California State University, Chico, 95929, United States
- Haihua Zhu
- Hospital of Stomatology of Zhejiang University, Hangzhou, 310018, China
- Luya Lian
- Hospital of Stomatology of Zhejiang University, Hangzhou, 310018, China
- Xiaomin Lai
- School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
- Kaihua Wei
- School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
40
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549 PMCID: PMC8173384 DOI: 10.3748/wjg.v27.i21.2681] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 03/29/2021] [Accepted: 04/22/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and applied in many fields. In recent years, there has been a sharp increase in research on ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that, with continuing technical advancement, efficiency and accuracy can go hand in hand. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize their current achievements in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize ANNs' clinical potential. In view of the barriers to interdisciplinary knowledge, sophisticated concepts are discussed in plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
41
Effects of Iron Oxide Nanoparticles on Structural Organization of Hepatocyte Chromatin: Gray Level Co-occurrence Matrix Analysis. Microscopy and Microanalysis 2021; 27:889-896. [PMID: 34039461 DOI: 10.1017/s1431927621000532] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Gray level co-occurrence matrix (GLCM) analysis is a contemporary and innovative computer-based algorithm that can be used to quantify subtle changes in cellular structure. In this work, we use the method to detect discrete alterations in hepatocyte chromatin distribution after in vivo exposure to iron oxide nanoparticles (IONPs). The study was performed on 40 healthy male C57BL/6 mice divided into four groups: three experimental groups that received different doses of IONPs and one control group. We describe a dose-dependent reduction in chromatin textural uniformity, measured as the GLCM angular second moment. Similar changes were detected when uniformity was expressed as the inverse difference moment. To the best of our knowledge, this is the first study to investigate the impact of iron-based nanomaterials on hepatocyte GLCM parameters. It is also the first to apply discrete wavelet transform (DWT) analysis, as a supplementary method to GLCM, to the assessment of hepatocyte chromatin structure under these conditions. The results may provide a useful basis for future research on the application of GLCM and DWT methods in pathology and other areas of medical research.
42
Chen B, Zhang L, Chen H, Liang K, Chen X. A novel extended Kalman filter with support vector machine based method for the automatic diagnosis and segmentation of brain tumors. Comput Methods Programs Biomed 2021; 200:105797. [PMID: 33317871 DOI: 10.1016/j.cmpb.2020.105797] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 10/10/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND Brain tumors are life-threatening, and their early detection is crucial for improving survival rates. Conventionally, brain tumors are detected by radiologists based on clinical experience, but this process is inefficient. This paper proposes a machine learning-based method to 1) determine the presence of a tumor, 2) automatically segment the tumor, and 3) classify it as benign or malignant. METHODS We implemented an Extended Kalman Filter with Support Vector Machine (EKF-SVM), an image analysis platform based on an SVM for automated brain tumor detection. A development dataset of 120 patients provided by Tiantan Hospital was used for algorithm training. Our machine learning algorithm has five components. First, image standardization is applied to all images, followed by noise removal with a non-local means filter and contrast enhancement with improved dynamic histogram equalization. Second, a gray-level co-occurrence matrix is used to extract image features. Third, the extracted features are fed into an SVM for an initial classification of the MRI, and an EKF is used to classify brain tumors in the brain MRIs. Fourth, cross-validation is used to verify the accuracy of the classifier. Finally, an automatic segmentation method based on the combination of k-means clustering and region growing is used to delineate brain tumors. RESULTS With regard to diagnostic performance, the EKF-SVM had a 96.05% accuracy for automatically classifying brain tumors. Segmentation based on k-means clustering was capable of identifying tumor boundaries and extracting the whole tumor. CONCLUSION The proposed EKF-SVM based method has better classification performance for positive brain tumor images, which was mainly due to the dearth of negative examples in our dataset.
Therefore, future work should obtain more negative examples and investigate the performance of deep learning algorithms such as the convolutional neural networks for automatic diagnosis and segmentation of brain tumors.
Affiliation(s)
- Baoshi Chen
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Lingling Zhang
- Department of Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Hongyan Chen
- Department of Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Kewei Liang
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Xuzhu Chen
- Department of Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
43
Cao W, Liang Z, Gao Y, Pomeroy MJ, Han F, Abbasi A, Pickhardt PJ. A dynamic lesion model for differentiation of malignant and benign pathologies. Sci Rep 2021; 11:3485. [PMID: 33568762 PMCID: PMC7875978 DOI: 10.1038/s41598-021-83095-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Accepted: 01/20/2021] [Indexed: 11/21/2022] Open
Abstract
Malignant lesions have a high tendency to invade their surrounding environment compared to benign ones. This paper proposes a dynamic lesion model and explores the 2nd-order derivatives at each image voxel, which reflect the rate of change of image intensity, as a quantitative measure of this tendency. The 2nd-order derivatives at each voxel are usually represented by the Hessian matrix, but it is difficult to quantify a matrix field (or image) over the lesion space as a measure of the tendency. We conjecture that the three eigenvalues carry the essential information of the Hessian matrix and choose them as its surrogate representation. By treating the three eigenvalues as a vector, called the Hessian vector, which is defined in a local coordinate frame formed by the three orthogonal Hessian eigenvectors, and by adapting the gray-level co-occurrence computing method to extract vector texture descriptors (or measures) from the Hessian vector, a quantitative representation of the dynamic lesion model is completed. The vector texture descriptors were applied to differentiate malignant from benign lesions in two pathologically proven datasets: colon polyps and lung nodules. The classification results outperformed not only four state-of-the-art methods but also three radiologist experts.
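The "Hessian vector" construction, the three eigenvalues of the per-voxel Hessian, can be sketched with finite differences. The code below is an illustrative approximation on a synthetic quadratic volume, not the authors' differential-manifold formulation; unit voxel spacing is assumed:

```python
import numpy as np

def hessian_eigenvalues(vol):
    """Per-voxel eigenvalues of the 3x3 Hessian of a 3D image, estimated
    with finite differences (np.gradient applied twice)."""
    grads = np.gradient(vol)                 # first-order derivatives
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])       # second-order derivatives
        for j in range(3):
            H[..., i, j] = second[j]
    H = 0.5 * (H + np.swapaxes(H, -1, -2))   # symmetrise numerically
    return np.linalg.eigvalsh(H)             # sorted ascending, per voxel

# Sanity check: the quadratic bowl x^2 + y^2 + z^2 has constant Hessian
# diag(2, 2, 2), so every interior voxel should yield eigenvalues (2, 2, 2).
ax = np.arange(-4, 5, dtype=float)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
ev = hessian_eigenvalues(x**2 + y**2 + z**2)
print(ev[4, 4, 4])   # -> [2. 2. 2.]
```

Treating the sorted eigenvalue triple at each voxel as a vector gives the per-voxel "Hessian vector" field from which co-occurrence-style texture descriptors can then be extracted.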
Affiliation(s)
- Weiguo Cao
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA
- Zhengrong Liang
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA
- Department of Biomedical Engineering, State University of New York at Stony Brook, Stony Brook, NY, USA
- Yongfeng Gao
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA
- Marc J Pomeroy
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA
- Department of Biomedical Engineering, State University of New York at Stony Brook, Stony Brook, NY, USA
- Fangfang Han
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, People's Republic of China
- Almas Abbasi
- Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA
- Perry J Pickhardt
- Department of Radiology, School of Medicine, University of Wisconsin, Madison, WI, USA