1
Zwijnen AW, Watzema L, Ridwan Y, van der Pluijm I, Smal I, Essers J. Self-adaptive deep learning-based segmentation for universal and functional clinical and preclinical CT image analysis. Comput Biol Med 2024; 179:108853. [PMID: 39013341] [DOI: 10.1016/j.compbiomed.2024.108853]
Abstract
BACKGROUND Methods to monitor cardiac functioning non-invasively can accelerate preclinical and clinical research into novel treatment options for heart failure. However, manual image analysis of cardiac substructures is resource-intensive and error-prone. While automated methods exist for clinical CT images, translating these to preclinical μCT data is challenging. We employed deep learning to automate the extraction of quantitative data from both CT and μCT images. METHODS We collected a public dataset of cardiac CT images of human patients, and acquired μCT images of wild-type and accelerated-aging mice. The left ventricle, myocardium, and right ventricle were manually segmented in the μCT training set. After template-based heart detection, two separate segmentation neural networks were trained using the nnU-Net framework. RESULTS The mean Dice score of the CT segmentation results (0.925 ± 0.019, n = 40) was superior to those achieved by state-of-the-art algorithms. Automated and manual segmentations of the μCT training set were nearly identical. The estimated median Dice score (0.940) on the test set was comparable to that of existing methods. The automated volume metrics were similar to manual expert observations. In aging mice, ejection fractions were significantly decreased and myocardial volume was increased by age 24 weeks. CONCLUSIONS With further optimization, automated data extraction expands the application of (μ)CT imaging while reducing subjectivity and workload. The proposed method efficiently measures the left and right ventricular ejection fractions and myocardial mass. With uniform translation between image types, cardiac functioning in diastolic and systolic phases can be monitored in both animals and humans.
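The segmentation quality above is reported as a Dice score; for reference, a minimal sketch of the Dice coefficient on binary masks (array and label names are hypothetical, not from the paper):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Per-structure use on a multi-label map, e.g.:
# dice_score(pred_labels == LEFT_VENTRICLE, gt_labels == LEFT_VENTRICLE)
```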
Affiliation(s)
- Anne-Wietje Zwijnen
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands
- Yanto Ridwan
- AMIE Core Facility, Erasmus Medical Center, Rotterdam, the Netherlands
- Ingrid van der Pluijm
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Vascular Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
- Ihor Smal
- Department of Cell Biology, Erasmus University Medical Center, Rotterdam, the Netherlands
- Jeroen Essers
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Vascular Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Radiotherapy, Erasmus University Medical Center, Rotterdam, the Netherlands.
2
Zilka T, Benesova W. Radiomics of pituitary adenoma using computer vision: a review. Med Biol Eng Comput 2024. [PMID: 39012416] [DOI: 10.1007/s11517-024-03163-3]
Abstract
Pituitary adenomas (PA) represent the most common type of sellar neoplasm. Extracting relevant information from radiological images is essential for decision support in addressing various objectives related to PA. Given the critical need for an accurate assessment of the natural progression of PA, computer vision (CV) and artificial intelligence (AI) play a pivotal role in automatically extracting features from radiological images. The field of "Radiomics" involves the extraction of high-dimensional features, often referred to as "Radiomic features," from digital radiological images. This survey offers an analysis of the current state of research in PA radiomics. Our work comprises a systematic review of 34 publications focused on PA radiomics and other automated information mining pertaining to PA through the analysis of radiological data using computer vision methods. We begin with the theoretical background of radiomics, encompassing traditional approaches from computer vision and machine learning as well as the latest methodologies in deep radiomics utilizing deep learning (DL). The 34 research works under examination are comprehensively compared and evaluated. The overall results reported in the analyzed papers are high (best accuracy up to 96%, best AUC up to 0.99), which supports optimism about the successful use of radiomic features. Methods based on deep learning seem to be the most promising for the future. For these promising DL methods, several challenges remain: high-quality, sufficiently large datasets must be created for training deep neural networks; interpretability of deep radiomics remains a major open problem; and methods must be developed and verified that explain how deep radiomic features reflect physically interpretable image properties.
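As an illustration of the extraction step this review surveys, a minimal sketch using the open-source pyradiomics package; the file paths are placeholders, and the surveyed studies use a variety of toolkits and settings:

```python
from radiomics import featureextractor  # pip install pyradiomics

# Default settings extract shape, first-order and texture (GLCM, GLRLM, ...) features
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute("t1_postcontrast.nrrd", "adenoma_mask.nrrd")

for name, value in features.items():
    if name.startswith("original_"):  # skip the diagnostics metadata entries
        print(name, value)
```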
Affiliation(s)
- Tomas Zilka
- Saint Michal's Hospital, Bratislava, Slovakia
- Masaryk University, Brno, Czech Republic
- Wanda Benesova
- Slovak University of Technology in Bratislava, Bratislava, Slovakia.
3
Xia S, Li Q, Zhu HT, Zhang XY, Shi YJ, Yang D, Wu J, Guan Z, Lu Q, Li XT, Sun YS. Fully semantic segmentation for rectal cancer based on post-nCRT MRI modality and deep learning framework. BMC Cancer 2024; 24:315. [PMID: 38454349] [PMCID: PMC10919051] [DOI: 10.1186/s12885-024-11997-1]
Abstract
PURPOSE Rectal tumor segmentation on post-neoadjuvant chemoradiotherapy (nCRT) magnetic resonance imaging (MRI) has great significance for tumor measurement, radiomics analysis, treatment planning, and operative strategy. In this study, we developed and evaluated a convolutional neural network for segmentation exclusively on post-chemoradiation T2-weighted MRI, with the aim of reducing the detection workload for radiologists and clinicians. METHODS A total of 372 consecutive patients with locally advanced rectal cancer (LARC) were retrospectively enrolled from October 2015 to December 2017. The standard-of-care neoadjuvant process included 22-fraction intensity-modulated radiation therapy and oral capecitabine. Of these, 243 patients (3061 slices) were grouped into training and validation datasets with a random 80:20 split, and 41 patients (408 slices) were used as the test dataset. A symmetric eight-layer deep network was developed using the nnU-Net framework, which outputs a segmentation map of the same size as the input. The trained deep learning (DL) network was examined using five-fold cross-validation and on tumor lesions with different tumor regression grades (TRGs). RESULTS At testing, the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were applied to quantitatively evaluate generalization performance. On the test dataset (41 patients, 408 slices), the average DSC, HD95, and MSD were 0.700 (95% CI: 0.680-0.720), 17.73 mm (95% CI: 16.08-19.39), and 3.11 mm (95% CI: 2.67-3.56), respectively. Eighty-two percent of the MSD values were less than 5 mm, and fifty-five percent were less than 2 mm (median 1.62 mm, minimum 0.07 mm). CONCLUSIONS The experimental results indicated that the constructed pipeline can achieve relatively high accuracy. Future work will focus on assessing performance with multicentre external validation.
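The HD95 and MSD reported above are surface-distance metrics; a minimal sketch of how they can be computed from binary masks with SciPy (mask names and voxel spacing are hypothetical):

```python
import numpy as np
from scipy import ndimage

def surface_distances(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric surface-to-surface distances (in mm) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    # Distance from each surface voxel of one mask to the nearest surface voxel of the other
    dt_to_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    return np.concatenate([dt_to_gt[pred_surf], dt_to_pred[gt_surf]])

# d = surface_distances(pred_mask, gt_mask, spacing=(3.0, 0.7, 0.7))
# hd95 = np.percentile(d, 95)   # 95% Hausdorff distance
# msd = d.mean()                # mean surface distance
```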
Affiliation(s)
- Shaojun Xia
- Institute of Medical Technology, Peking University Health Science Center, Haidian District, No. 38 Xueyuan Road, Beijing, 100191, China
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Qingyang Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Hai-Tao Zhu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Xiao-Yan Zhang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Yan-Jie Shi
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Ding Yang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Jiaqi Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Zhen Guan
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Qiaoyuan Lu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Xiao-Ting Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Ying-Shi Sun
- Institute of Medical Technology, Peking University Health Science Center, Haidian District, No. 38 Xueyuan Road, Beijing, 100191, China.
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/ Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China.
4
Choi US, Sung YW, Ogawa S. deepPGSegNet: MRI-based pituitary gland segmentation using deep learning. Front Endocrinol (Lausanne) 2024; 15:1338743. [PMID: 38370353] [PMCID: PMC10869468] [DOI: 10.3389/fendo.2024.1338743]
Abstract
Introduction In clinical research on pituitary disorders, pituitary gland (PG) segmentation plays a pivotal role that impacts the diagnosis and treatment of conditions such as endocrine dysfunctions and visual impairments. Manual segmentation, the traditional method, is tedious and susceptible to inter-observer differences. Thus, this study introduces an automated solution, utilizing deep learning, for PG segmentation from magnetic resonance imaging (MRI). Methods A total of 153 university students were enrolled, and their MRI images were used to build a training dataset and ground truth data through manual segmentation of the PGs. A model was trained employing data augmentation and a three-dimensional U-Net architecture with five-fold cross-validation. A predefined field of view was applied to highlight the PG region and optimize memory usage. The model's performance was tested on an independent dataset, evaluating accuracy, precision, recall, and F1 score. Results and discussion The model achieved a training accuracy of 92.7%, with a precision, recall, and F1 score of 0.87, 0.91, and 0.89, respectively. Moreover, the study explored the relationship between PG morphology and age using the model. The results indicated significant associations of PG volume and midsagittal area with age. These findings suggest that precise volumetric PG analysis through automated segmentation can greatly enhance diagnostic accuracy and surveillance of pituitary disorders.
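The reported precision, recall, and F1 are standard voxel-wise overlap measures; a minimal sketch assuming binary PG masks (names are hypothetical):

```python
import numpy as np

def voxelwise_prf(pred, gt):
    """Voxel-wise precision, recall and F1 between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f1
```

For binary masks, this F1 coincides with the Dice coefficient.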
Affiliation(s)
- Uk-Su Choi
- Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation, Daegu, Republic of Korea
- Yul-Wan Sung
- Kansei Fukushi Research Institute, Tohoku Fukushi University, Sendai, Japan
- Seiji Ogawa
- Kansei Fukushi Research Institute, Tohoku Fukushi University, Sendai, Japan
5
Černý M, Kybic J, Májovský M, Sedlák V, Pirgl K, Misiorzová E, Lipina R, Netuka D. Fully automated imaging protocol independent system for pituitary adenoma segmentation: a convolutional neural network-based model on sparsely annotated MRI. Neurosurg Rev 2023; 46:116. [PMID: 37162632] [DOI: 10.1007/s10143-023-02014-3]
Abstract
This study aims to develop a fully automated, imaging-protocol-independent system for pituitary adenoma segmentation from magnetic resonance imaging (MRI) scans that works without user interaction, and to evaluate its accuracy and utility for clinical applications. We trained two independent artificial neural networks on MRI scans of 394 patients. The scans were acquired according to various imaging protocols over the course of 11 years on 1.5T and 3T MRI systems. The segmentation model assigned a class label to each input pixel (pituitary adenoma, internal carotid artery, normal pituitary gland, background). The slice selection model classified slices as clinically relevant (structures of interest in slice) or irrelevant (anterior or posterior to the sella turcica). We used MRI data of another 99 patients to evaluate the performance of the model during training. We validated the model on a prospective cohort of 28 patients; Dice coefficients of 0.910, 0.719, and 0.240 were achieved for the tumour, internal carotid artery, and normal gland labels, respectively. The slice selection model achieved 82.5% accuracy, 88.7% sensitivity, 76.7% specificity, and an AUC of 0.904. A human expert rated 71.4% of the segmentation results as accurate, 21.4% as slightly inaccurate, and 7.1% as coarsely inaccurate. Our model achieved good results comparable with recent work by other authors on the largest dataset to date and generalized well across various imaging protocols. We discuss future clinical applications and their practical considerations. Models and frameworks for clinical use have yet to be developed and evaluated.
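The slice-selection results are ordinary binary-classification metrics; a minimal sketch of computing them from per-slice relevance probabilities, using scikit-learn only for the AUC (variable names are hypothetical):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def slice_selection_metrics(y_true, y_prob, threshold=0.5):
    """Accuracy, sensitivity, specificity and AUC for a binary
    relevant-vs-irrelevant slice classifier."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / y_true.size,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "auc": roc_auc_score(y_true, y_prob),
    }
```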
Affiliation(s)
- Martin Černý
- Department of Neurosurgery and Neurooncology, 1st Faculty of Medicine, Charles University, Central Military Hospital Prague, U Vojenské nemocnice 1200, 169 02, Praha 6, Czech Republic.
- 1st Faculty of Medicine, Charles University Prague, Kateřinská 1660/32, 121 08, Praha 2, Czech Republic.
- Jan Kybic
- Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Technická 2, 166 27, Praha 6, Czech Republic
- Martin Májovský
- Department of Neurosurgery and Neurooncology, 1st Faculty of Medicine, Charles University, Central Military Hospital Prague, U Vojenské nemocnice 1200, 169 02, Praha 6, Czech Republic
- Vojtěch Sedlák
- Department of Radiodiagnostics, Central Military Hospital Prague, U Vojenské nemocnice 1200, 169 02, Praha 6, Czech Republic
- Karin Pirgl
- Department of Neurosurgery and Neurooncology, 1st Faculty of Medicine, Charles University, Central Military Hospital Prague, U Vojenské nemocnice 1200, 169 02, Praha 6, Czech Republic
- 3rd Faculty of Medicine, Charles University Prague, Ruská 87, 100 00, Praha 10, Czech Republic
- Eva Misiorzová
- Department of Neurosurgery, Faculty of Medicine, University of Ostrava, University Hospital Ostrava, 17. listopadu 1790/5, 708 52, Ostrava-Poruba, Czech Republic
- Radim Lipina
- Department of Neurosurgery, Faculty of Medicine, University of Ostrava, University Hospital Ostrava, 17. listopadu 1790/5, 708 52, Ostrava-Poruba, Czech Republic
- David Netuka
- Department of Neurosurgery and Neurooncology, 1st Faculty of Medicine, Charles University, Central Military Hospital Prague, U Vojenské nemocnice 1200, 169 02, Praha 6, Czech Republic
6
A review of deep learning-based multiple-lesion recognition from medical images: classification, detection and segmentation. Comput Biol Med 2023; 157:106726. [PMID: 36924732] [DOI: 10.1016/j.compbiomed.2023.106726]
Abstract
Deep learning-based methods have become the dominant methodology in medical image processing with the advancement of deep learning in natural image classification, detection, and segmentation. Deep learning-based approaches have proven to be quite effective in single-lesion recognition and segmentation. Multiple-lesion recognition is more difficult than single-lesion recognition because the variation between lesions can be small or the range of lesion types involved can be very wide. Several studies have recently explored deep learning-based algorithms to solve the multiple-lesion recognition challenge. This paper provides an in-depth overview and analysis of deep learning-based methods for multiple-lesion recognition developed in recent years, including multiple-lesion recognition in diverse body areas and recognition of whole-body multiple diseases. We discuss the challenges that still persist in multiple-lesion recognition tasks by critically assessing these efforts. Finally, we outline existing problems and potential future research areas, with the hope that this review will help researchers develop future approaches that drive additional advances.
7
Xu Z, Yu F, Zhang B, Zhang Q. Intelligent diagnosis of left ventricular hypertrophy using transthoracic echocardiography videos. Comput Methods Programs Biomed 2022; 226:107182. [PMID: 36257197] [DOI: 10.1016/j.cmpb.2022.107182]
Abstract
PURPOSE Left ventricular hypertrophy (LVH) is an independent risk factor for cardiovascular events and mortality. Pathological LVH can be caused by various diseases. In this study, we explored time- and frequency-domain analysis of myocardial radiomics features on transthoracic echocardiography (TTE) to differentiate hypertrophic cardiomyopathy (HCM), hypertensive heart disease (HHD), and uremic cardiomyopathy (UCM) in patients with LVH. This was the first study to explore TTE myocardial time- and frequency-domain analyses for differentiating multiple LVH etiologies. MATERIALS AND METHODS We proposed an artificial intelligence diagnosis system based on radiomics techniques for differentiating HCM, HHD, and UCM on TTE videos of the apical four-chamber view, comprising interventricular septum (IVS) segmentation, feature extraction, and classification. We used two independent cohorts: one with 150 patients (50 HHD, 50 HCM, and 50 UCM) for segmentation training and testing, and another with 149 patients (the main cohort; 50 HHD, 46 HCM, and 53 UCM) for classification training and testing after segmentation and feature extraction. First, the U-Net, Residual U-Net (ResUNet), and nnU-Net were trained and tested to segment the IVS on TTE still images in the first cohort. The trained model with the best segmentation performance was then used for IVS prediction on ordered TTE images in video sequences in the main cohort. Post-processing eliminated noisy debris by selecting the maximum connected region and smoothing the edges of the predicted IVS region. Second, static radiomics features were extracted from the IVS of ordered TTE images in each video sequence, and time- and frequency-domain features were then extracted from each time series of a static radiomics feature in the video sequence. Finally, the point-wise gated Boltzmann machine (PGBM) was used to learn and fuse the time- and frequency-domain features, and a support vector machine classified the learned features for LVH diagnosis. Classification was performed with five-fold cross-validation. RESULTS The ResUNet showed the best segmentation performance, with a Dice coefficient, sensitivity, specificity, and accuracy of 0.817, 76.3%, 99.6%, and 98.6%, respectively. With post-processing, these improved to 0.839, 77.0%, 99.8%, and 98.8%, respectively. The classification areas under the receiver operating characteristic curves (AUCs) were 0.838 ± 0.049 for HHD vs. HCM, 0.868 ± 0.042 for HCM vs. UCM, and 0.701 ± 0.140 for HHD vs. UCM. CONCLUSION We proposed an intelligent identification system for LVH etiology classification based on routine TTE video images with good diagnostic performance. This deep learning method is feasible for automatic TTE image interpretation and is expected to assist clinicians in detecting the primary cause of LVH.
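The time- and frequency-domain step summarizes each static radiomics feature tracked across the video frames; a minimal sketch of such descriptors with NumPy (the paper's exact feature set is not specified here, so these statistics are illustrative):

```python
import numpy as np

def time_frequency_features(series, fps=30.0):
    """Summaries of one radiomics feature tracked over TTE video frames:
    simple time-domain statistics plus FFT-based frequency descriptors."""
    series = np.asarray(series, dtype=float)
    # Remove the mean so the DC component does not dominate the spectrum
    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(series.size, d=1.0 / fps)
    return {
        "mean": series.mean(),
        "std": series.std(),
        "range": series.max() - series.min(),
        "dominant_freq_hz": freqs[np.argmax(power)],  # e.g. the cardiac cycle rate
        "spectral_energy": power.sum(),
    }
```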
Affiliation(s)
- Zhou Xu
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Fei Yu
- Department of Ultrasound in Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China; Department of Ultrasound in Medicine, Ningbo First Hospital, Ningbo, China
- Bo Zhang
- Department of Ultrasound in Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China.
- Qi Zhang
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China.
8
Wen B, Wang Z. Editorial for the Special Issue on Advanced Machine Learning Techniques for Sensing and Imaging Applications. Micromachines (Basel) 2022; 13:1030. [PMID: 35888847] [PMCID: PMC9319337] [DOI: 10.3390/mi13071030]
Affiliation(s)
- Bihan Wen
- School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Zhangyang Wang
- Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78705, USA