1
Wang J, He Y, Yan L, Chen S, Zhang K. Predicting Osteoporosis and Osteopenia by Fusing Deep Transfer Learning Features and Classical Radiomics Features Based on Single-Source Dual-energy CT Imaging. Acad Radiol 2024; 31:4159-4170. PMID: 38693026. DOI: 10.1016/j.acra.2024.04.022.
Abstract
RATIONALE AND OBJECTIVES To develop and validate a predictive model for osteoporosis and osteopenia by fusing deep transfer learning (DTL) features and classical radiomics features based on single-source dual-energy computed tomography (CT) virtual monochromatic imaging. METHODS A total of 606 lumbar vertebrae with dual-energy CT imaging and quantitative CT (QCT) evaluation were included in this retrospective study and randomly divided into training (n = 424) and validation (n = 182) cohorts. Radiomics features and DTL features were extracted from 70-keV monochromatic CT images; after feature selection and model construction, a radiomics model and a DTL model were established. The two selected feature sets were then integrated into a feature fusion model. A two-level classifier was developed for hierarchical pairwise classification of each vertebra: all vertebrae were first classified into osteoporosis and non-osteoporosis groups, and the non-osteoporosis group was then classified into osteopenia and normal groups. QCT served as the reference standard. The predictive performance and clinical usefulness of the three models were evaluated and compared. RESULTS The areas under the curve (AUCs) of the feature fusion, radiomics and DTL models for the classification between osteoporosis and non-osteoporosis were 0.981, 0.999 and 0.997 in the training cohort and 0.979, 0.943 and 0.848 in the validation cohort, respectively. The AUCs of the same models for the differentiation between osteopenia and normal were 0.994, 0.971 and 0.996 in the training cohort and 0.990, 0.968 and 0.908 in the validation cohort. The overall accuracies of the models for the two-level classification were 0.979, 0.955 and 0.908 in the training cohort and 0.918, 0.885 and 0.841 in the validation cohort. Decision curve analysis showed that all models had high clinical value.
CONCLUSION The feature fusion model can be used for osteoporosis and osteopenia prediction with improved predictive ability over a radiomics model or a DTL model alone.
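As an illustration of the two-level hierarchical scheme described above, the sketch below fuses the two feature sets by concatenation and chains two binary classifiers. The function names are illustrative, and the threshold classifiers in the usage note are hypothetical stand-ins for the paper's trained models.

```python
def fuse_features(radiomics_feats, dtl_feats):
    # Feature-level fusion: concatenate the selected classical radiomics
    # features with the deep transfer learning (DTL) features.
    return list(radiomics_feats) + list(dtl_feats)


def classify_vertebra(features, is_osteoporosis, is_osteopenia):
    """Two-level hierarchical pairwise classification of one vertebra.

    Level 1 separates osteoporosis from non-osteoporosis; Level 2 splits
    the non-osteoporosis group into osteopenia and normal.
    """
    if is_osteoporosis(features):
        return "osteoporosis"
    return "osteopenia" if is_osteopenia(features) else "normal"
```

With toy threshold classifiers, `classify_vertebra(fuse_features([0.2], [0.9]), lambda f: sum(f) > 1.5, lambda f: sum(f) > 0.8)` falls through Level 1 and is labelled at Level 2.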
Affiliation(s)
- Jinling Wang
- Department of Radiology, The First Hospital of Hunan University of Chinese Medicine, 95 Shaoshan Middle Road, Yuhua District, Changsha 410007, PR China
- Yewen He
- Department of Radiology, The First Hospital of Hunan University of Chinese Medicine, 95 Shaoshan Middle Road, Yuhua District, Changsha 410007, PR China
- Luyou Yan
- Department of Radiology, The First Hospital of Hunan University of Chinese Medicine, 95 Shaoshan Middle Road, Yuhua District, Changsha 410007, PR China
- Suping Chen
- GE Healthcare (Shanghai) Co., Ltd., Shanghai 201203, PR China
- Kun Zhang
- Department of Radiology, The First Hospital of Hunan University of Chinese Medicine, 95 Shaoshan Middle Road, Yuhua District, Changsha 410007, PR China; College of Integrated Traditional Chinese and Western Medicine, Hunan University of Chinese Medicine, 300 Xueshi Road, Yuelu District, Changsha 410208, PR China.
2
Guan QL, Zhang HX, Gu JP, Cao GF, Ren WX. Omics-imaging signature-based nomogram to predict the progression-free survival of patients with hepatocellular carcinoma after transcatheter arterial chemoembolization. World J Clin Cases 2024; 12:3340-3350. PMID: 38983440. PMCID: PMC11229926. DOI: 10.12998/wjcc.v12.i18.3340.
Abstract
BACKGROUND Enhanced magnetic resonance imaging (MRI) is widely used in the diagnosis, treatment and prognosis of hepatocellular carcinoma (HCC), but it cannot effectively reflect the heterogeneity within the tumor or evaluate the effect after treatment. Preoperative imaging analysis of voxel changes can reflect the internal heterogeneity of the tumor and help evaluate progression-free survival (PFS). AIM To predict the PFS of patients with HCC preoperatively by building a model from enhanced MRI images. METHODS Regions of interest (ROIs) were delineated in the arterial, portal venous and delayed phases of enhanced MRI. After extraction, the combined ROI features were fused to obtain the deep learning radiomics signature (DLR_Sig). DeLong's test was used to evaluate the diagnostic performance of the different feature types. Kaplan-Meier (K-M) analysis was applied to assess PFS in different risk groups, and the discriminative ability of the model was evaluated using the C-index. RESULTS Tumor diameter and diolame were independent factors influencing PFS. DeLong's test revealed that multi-phase combined radiomic features had significantly greater area under the curve values than those of the individual phases (P < 0.05). In both deep transfer learning (DTL) and DLR, significant differences were observed between the multi-phase and individual-phase feature sets (P < 0.05). K-M survival analysis revealed median survival times of 12.8 and 14.2 months for the high-risk and low-risk groups, respectively; the predicted survival probabilities at 6 months, 1 year and 2 years were 92%, 60% and 40% in the high-risk group and 98%, 90% and 73% in the low-risk group, respectively. The C-index was 0.764, indicating relatively good consistency between the predicted and observed results. DTL and DLR had higher predictive value for 2-year PFS in the nomogram.
CONCLUSION The nomogram constructed from the multi-phase characteristics of enhanced MRI provides a new strategy for predicting the PFS of patients with HCC treated by transcatheter arterial chemoembolization.
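The C-index reported above is conventionally Harrell's concordance index: the fraction of usable patient pairs in which the patient with the shorter survival time also carries the higher predicted risk. A minimal stdlib-only sketch; the function name and the censoring/tie handling are illustrative, not the authors' implementation.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    A pair is usable when the earlier of the two times is an observed
    event; ties in predicted risk count as half-concordant.
    """
    concordant = tied = usable = 0.0
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            if times[i] == times[j]:
                continue  # tied times: skipped in this simple sketch
            first, second = (i, j) if times[i] < times[j] else (j, i)
            if not events[first]:
                continue  # earlier subject censored: pair not usable
            usable += 1
            if risk_scores[first] > risk_scores[second]:
                concordant += 1
            elif risk_scores[first] == risk_scores[second]:
                tied += 1
    if usable == 0:
        raise ValueError("no usable pairs")
    return (concordant + 0.5 * tied) / usable
```

A value of 0.764, as reported, means roughly three out of four usable pairs are ranked in the right order.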
Affiliation(s)
- Qing-Long Guan
- Department of Interventional Radiology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi 830011, Xinjiang Uygur Autonomous region, China
- Hai-Xiao Zhang
- Department of Interventional Radiology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi 830011, Xinjiang Uygur Autonomous region, China
- Jun-Peng Gu
- Department of Interventional Radiology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi 830011, Xinjiang Uygur Autonomous region, China
- Geng-Fei Cao
- Department of Interventional Radiology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi 830011, Xinjiang Uygur Autonomous region, China
- Wei-Xin Ren
- Department of Interventional Radiology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi 830011, Xinjiang Uygur Autonomous Region, China
3
Arabi H, Zaidi H. Contrastive Learning vs. Self-Learning vs. Deformable Data Augmentation in Semantic Segmentation of Medical Images. J Imaging Inform Med 2024. PMID: 38858260. DOI: 10.1007/s10278-024-01159-x.
Abstract
To develop a robust segmentation model, encoding the underlying features/structures of the input data is essential to discriminate the target structure from the background. To enrich the extracted feature maps, contrastive learning and self-learning techniques are employed, particularly when the size of the training dataset is limited. In this work, we set out to investigate the impact of contrastive learning and self-learning on the performance of deep learning-based semantic segmentation. To this end, three different datasets were employed for brain tumor and hippocampus delineation from MR images (BraTS and Decathlon datasets, respectively) and kidney segmentation from CT images (Decathlon dataset). Since data augmentation techniques are also aimed at enhancing the performance of deep learning methods, a deformable data augmentation technique was proposed and compared with the contrastive learning and self-learning frameworks. The segmentation accuracy for the three datasets was assessed with and without applying data augmentation, contrastive learning, and self-learning to individually investigate the impact of these techniques. The self-learning and deformable data augmentation techniques exhibited comparable performance, with Dice indices of 0.913 ± 0.030 and 0.920 ± 0.022 for kidney segmentation, 0.890 ± 0.035 and 0.898 ± 0.027 for hippocampus segmentation, and 0.891 ± 0.045 and 0.897 ± 0.040 for lesion segmentation, respectively. These two approaches significantly outperformed contrastive learning and the original model, which yielded Dice indices of 0.871 ± 0.039 and 0.868 ± 0.042 for kidney segmentation, 0.872 ± 0.045 and 0.865 ± 0.048 for hippocampus segmentation, and 0.870 ± 0.049 and 0.860 ± 0.058 for lesion segmentation, respectively. The combination of self-learning with deformable data augmentation led to a robust segmentation model with no outliers in the outcomes.
This work demonstrated the beneficial impact of self-learning and deformable data augmentation on organ and lesion segmentation, where no additional training datasets are needed.
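The Dice indices quoted above follow the standard overlap definition, 2|A∩B| / (|A| + |B|); a minimal sketch for flattened binary masks (illustrative only):

```python
def dice_index(mask_a, mask_b):
    """Dice similarity coefficient between two flattened binary masks."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are taken as a perfect match.
    return 1.0 if total == 0 else 2.0 * intersection / total
```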
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark.
- University Research and Innovation Center, Óbuda University, Budapest, Hungary.
4
Gullo RL, Brunekreef J, Marcus E, Han LK, Eskreis-Winkler S, Thakur SB, Mann R, Lipman KG, Teuwen J, Pinker K. AI Applications to Breast MRI: Today and Tomorrow. J Magn Reson Imaging 2024. PMID: 38581127. PMCID: PMC11452568. DOI: 10.1002/jmri.29358.
Abstract
There is an unrelenting increase in the demand for breast imaging services, partly explained by continuously expanding imaging indications in breast diagnosis and treatment. As the human workforce providing these services is not growing at the same rate, the implementation of artificial intelligence (AI) in breast imaging has gained significant momentum to maximize workflow efficiency and increase productivity while concurrently improving diagnostic accuracy and patient outcomes. Thus far, the implementation of AI in breast imaging is at the most advanced stage with mammography and digital breast tomosynthesis, followed by ultrasound, whereas the implementation of AI in breast magnetic resonance imaging (MRI) is not moving along as rapidly due to the complexity of MRI examinations and fewer available datasets. Nevertheless, there is persisting interest in AI-enhanced breast MRI applications, even as the use of, and indications for, breast MRI continue to expand. This review presents an overview of the basic concepts of AI imaging analysis and subsequently reviews the use cases for AI-enhanced MRI interpretation, that is, breast MRI triaging and lesion detection, lesion classification, prediction of treatment response, risk assessment, and image quality. Finally, it provides an outlook on the barriers and facilitators for the adoption of AI in breast MRI. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 6.
Affiliation(s)
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Joren Brunekreef
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Eric Marcus
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Lynn K Han
- Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sunitha B Thakur
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ritse Mann
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Kevin Groot Lipman
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Jonas Teuwen
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
5
Yari Y, Næve I, Hammerdal A, Bergtun PH, Måsøy SE, Voormolen MM, Lovstakken L. Automated Measurement of Ovary Development in Atlantic Salmon Using Deep Learning. Ultrasound Med Biol 2024; 50:364-373. PMID: 38195265. DOI: 10.1016/j.ultrasmedbio.2023.11.008.
Abstract
OBJECTIVE Salmon breeding companies control the egg stripping period through environmental change, which triggers the need to identify the state of maturation. Ultrasound imaging of the salmon ovary is a proven non-invasive tool for this purpose; however, the process is laborious, and the interpretation of the ultrasound scans is subjective. Real-time ultrasound image segmentation of the Atlantic salmon ovary provides an opportunity to overcome these limitations. However, several application challenges need to be addressed to achieve this goal. These challenges include the potential for false-positive and false-negative predictions, accurate prediction of attenuated lower ovary parts and resolution of inconsistencies in predicted ovary shape. METHODS We describe an approach designed to tackle these obstacles by employing targeted pre-training of a modified U-Net capable of performing both segmentation and classification. In addition, a variational autoencoder (VAE) and generative adversarial network (GAN) were incorporated to rectify shape inconsistencies in the segmentation output. To train the proposed model, a dataset of Atlantic salmon ovaries spanning two maturation periods was recorded. RESULTS We tested our model and compared its performance with that of conventional and novel U-Nets. The method was also tested in an on-site salmon ultrasound examination setting. The results indicate that our method can efficiently segment the salmon ovary in real time, with an average Dice score of 0.885 per individual. CONCLUSION These results represent competitive performance for this specific application, enabling the design of an automated system for smart monitoring of maturation state in Atlantic salmon.
Affiliation(s)
- Yasin Yari
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
- Ingun Næve
- BDO AS, Trondheim, Norway; AquaGen AS, Trondheim, Norway
- Svein-Erik Måsøy
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Lasse Lovstakken
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
6
van den Berg K, Wolfs CJA, Verhaegen F. A 3D transfer learning approach for identifying multiple simultaneous errors during radiotherapy. Phys Med Biol 2024; 69:035002. PMID: 38091615. DOI: 10.1088/1361-6560/ad1547.
Abstract
Objective. Deep learning models, such as convolutional neural networks (CNNs), can take full dose comparison images as input and have shown promising results for error identification during treatment. Clinically, complex scenarios should be considered, with the risk of multiple anatomical and/or mechanical errors occurring simultaneously during treatment. The purpose of this study was to evaluate the capability of CNN-based error identification in this more complex scenario. Approach. For 40 lung cancer patients, clinically realistic ranges of combinations of various treatment errors within treatment plans and/or computed tomography (CT) images were simulated. Modified CT images and treatment plans were used to predict 2580 3D dose distributions, which were compared to dose distributions without errors using various gamma analysis criteria and relative dose difference as dose comparison methods. A 3D CNN capable of multilabel classification was trained to identify treatment errors at two classification levels, using dose comparison volumes as input: Level 1 (main error type, e.g. anatomical change, mechanical error) and Level 2 (error subtype, e.g. tumor regression, patient rotation). For training the CNNs, a transfer learning approach was employed. An ensemble model was also evaluated, which consisted of three separate CNNs, each taking a region of interest of the dose comparison volume as input. Model performance was evaluated by calculating sample F1-scores for training and validation sets. Main results. The model had high F1-scores for Level 1 classification, but performance for Level 2 was lower, and overfitting became more apparent. Using relative dose difference instead of gamma volumes as input improved performance for Level 2 classification, whereas using an ensemble model additionally reduced overfitting. The models obtained F1-scores of 0.86 and 0.62 on an independent test set for Level 1 and Level 2, respectively. Significance.
This study shows that it is possible to identify multiple errors occurring simultaneously in 3D dose verification data.
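The sample F1-scores used above can be computed per case by treating each prediction as a set of error labels. A minimal sketch; the label names in the usage are hypothetical examples drawn from the error taxonomy in the abstract.

```python
def sample_f1(true_labels, pred_labels):
    """F1-score for one multilabel sample (sets of error labels)."""
    true_set, pred_set = set(true_labels), set(pred_labels)
    if not true_set and not pred_set:
        return 1.0  # nothing to find, nothing predicted
    tp = len(true_set & pred_set)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(true_set)
    return 2 * precision * recall / (precision + recall)


def mean_sample_f1(y_true, y_pred):
    # Average the per-sample F1 over the whole set (sample-wise F1).
    return sum(sample_f1(t, p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, predicting `{"tumor_regression", "patient_rotation"}` when only `{"tumor_regression"}` occurred gives precision 0.5, recall 1.0, and a sample F1 of 2/3.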
Affiliation(s)
- Kars van den Berg
- Medical Image Analysis group, Department of Biomedical Engineering, Eindhoven University of Technology, The Netherlands
- Cecile J A Wolfs
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, The Netherlands
- Frank Verhaegen
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, The Netherlands
7
Cheng PC, Chiang HHK. Diagnosis of Salivary Gland Tumors Using Transfer Learning with Fine-Tuning and Gradual Unfreezing. Diagnostics (Basel) 2023; 13:3333. PMID: 37958229. PMCID: PMC10648910. DOI: 10.3390/diagnostics13213333.
Abstract
Ultrasound is the primary tool for evaluating salivary gland tumors (SGTs); however, tumor diagnosis currently relies on subjective features. This study aimed to establish an objective ultrasound diagnostic method using deep learning. We collected 446 benign and 223 malignant SGT ultrasound images in the training/validation set and 119 benign and 44 malignant SGT ultrasound images in the testing set. We trained convolutional neural network (CNN) models from scratch and employed transfer learning (TL) with fine-tuning and gradual unfreezing to classify malignant and benign SGTs. The diagnostic performances of these models were compared. By utilizing the pretrained ResNet50V2 with fine-tuning and gradual unfreezing, we achieved a 5-fold average validation accuracy of 0.920. The diagnostic performance on the testing set demonstrated an accuracy of 89.0%, a sensitivity of 81.8%, a specificity of 91.6%, a positive predictive value of 78.3%, and a negative predictive value of 93.2%. This performance surpasses that of other models in our study. The corresponding Grad-CAM visualizations were also presented to provide explanations for the diagnosis. This study presents an effective and objective ultrasound method for distinguishing between malignant and benign SGTs, which could assist in preoperative evaluation.
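Gradual unfreezing, as used above, starts with the pretrained backbone frozen and progressively makes layers trainable from the top (task-specific) end downwards, so generic early filters are adapted last. A framework-agnostic sketch; the `Layer` class and the every-two-epochs schedule are assumptions, not the authors' exact setup.

```python
class Layer:
    """Stand-in for a pretrained network layer with a trainable flag."""

    def __init__(self, name):
        self.name = name
        self.trainable = False  # backbone starts fully frozen


def gradual_unfreeze(layers, epoch, unfreeze_every=2):
    """Unfreeze one more block from the top of the network every
    `unfreeze_every` epochs; returns the names of trainable layers."""
    n_unfrozen = min(len(layers), 1 + epoch // unfreeze_every)
    for layer in layers[-n_unfrozen:]:
        layer.trainable = True
    return [layer.name for layer in layers if layer.trainable]
```

By the time the earliest blocks are unfrozen, the classifier head has already adapted, which reduces the risk of destroying the pretrained features with large early gradients.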
Affiliation(s)
- Ping-Chia Cheng
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan;
- Department of Otolaryngology Head and Neck Surgery, Far Eastern Memorial Hospital, New Taipei City 22060, Taiwan
- Department of Communication Engineering, Asia Eastern University of Science and Technology, New Taipei City 22060, Taiwan
- Hui-Hua Kenny Chiang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan;
8
Metshein M, Abdullayev A, Gautier A, Larras B, Frappe A, Cardiff B, Annus P, Land R, Märtens O. Sensor-Location-Specific Joint Acquisition of Peripheral Artery Bioimpedance and Photoplethysmogram for Wearable Applications. Sensors (Basel) 2023; 23:7111. PMID: 37631647. PMCID: PMC10457752. DOI: 10.3390/s23167111.
Abstract
BACKGROUND Cardiovascular diseases (CVDs), being the culprit for one-third of deaths globally, constitute a challenge for biomedical instrumentation development, especially for early disease detection. Pulsating arterial blood flow, providing access to cardiac-related parameters, involves the whole body. Unobtrusive and continuous acquisition of electrical bioimpedance (EBI) and photoplethysmography (PPG) signals constitutes an important approach for monitoring the peripheral arteries, requiring novel methods and clever means. METHODS In this work, five peripheral arteries were selected for EBI and PPG signal acquisition. The acquisition sites were evaluated based on signal morphological parameters. A small-data-based deep learning model, which increases the data by dividing the signals into cardiac periods, was proposed to evaluate the continuity of the signals. RESULTS The highest sensitivity of EBI was gained for the carotid artery (0.86%), three times higher than that of the next best site, the posterior tibial artery (0.27%). The excitation signal parameters affect the measured EBI, confirming the suitability of the classical 100 kHz frequency (average probability of 52.35%). The continuity evaluation of the EBI signals confirmed the advantage of the carotid artery (59.4%), while the posterior tibial artery (49.26%) surpassed the radial artery (48.17%). The PPG signal, conversely, favoured the posterior tibial artery (97.87%). CONCLUSIONS The peripheral arteries are highly suitable for non-invasive EBI and PPG signal acquisition. The posterior tibial artery is a strong candidate for the joint acquisition of EBI and PPG signals in sensor-fusion-based wearable devices, an important finding of this research.
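The percentage sensitivities above are derived from signal morphology. As a rough illustration of one plausible definition, the relative pulsatile amplitude of a signal segment can be computed as (max - min) / mean * 100; this specific formula is an assumption for illustration, not necessarily the authors' metric.

```python
def pulsatile_sensitivity(signal):
    """Relative pulsatile amplitude of one signal segment, in percent:
    peak-to-peak swing normalized by the segment mean."""
    mean = sum(signal) / len(signal)
    return (max(signal) - min(signal)) / mean * 100.0
```

On this definition, an impedance trace oscillating by 2 units around a baseline of 100 yields a sensitivity of about 2%, the same order as the carotid figure quoted above.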
Affiliation(s)
- Margus Metshein
- Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Ehitajate Tee 5, 19086 Tallinn, Estonia
- Anar Abdullayev
- Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Ehitajate Tee 5, 19086 Tallinn, Estonia
- Antoine Gautier
- University Lille, CNRS, Centrale Lille, Junia, University Polytechnique Hauts-de-France, UMR 8520-IEMN, F-59000 Lille, France
- Benoit Larras
- University Lille, CNRS, Centrale Lille, Junia, University Polytechnique Hauts-de-France, UMR 8520-IEMN, F-59000 Lille, France
- Antoine Frappe
- University Lille, CNRS, Centrale Lille, Junia, University Polytechnique Hauts-de-France, UMR 8520-IEMN, F-59000 Lille, France
- Barry Cardiff
- School of Electrical and Electronic Engineering, University College Dublin, D04V1W8 Dublin, Ireland
- Paul Annus
- Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Ehitajate Tee 5, 19086 Tallinn, Estonia
- Raul Land
- Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Ehitajate Tee 5, 19086 Tallinn, Estonia
- Olev Märtens
- Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Ehitajate Tee 5, 19086 Tallinn, Estonia
9
Bi W, Xv J, Song M, Hao X, Gao D, Qi F. Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction. Front Neurosci 2023; 17:1202143. PMID: 37409107. PMCID: PMC10318193. DOI: 10.3389/fnins.2023.1202143.
Abstract
Introduction Fine-tuning (FT) is a generally adopted transfer learning method for deep learning-based magnetic resonance imaging (MRI) reconstruction. In this approach, the reconstruction model is initialized with pre-trained weights derived from a source domain with ample data and subsequently updated with limited data from the target domain. However, the direct full-weight update strategy can pose the risk of "catastrophic forgetting" and overfitting, hindering its effectiveness. The goal of this study is to develop a zero-weight update transfer strategy to preserve pre-trained generic knowledge and reduce overfitting. Methods Based on the commonality between the source and target domains, we assume a linear transformation relationship of the optimal model weights from the source domain to the target domain. Accordingly, we propose a novel transfer strategy, linear fine-tuning (LFT), which introduces scaling and shifting (SS) factors into the pre-trained model. In contrast to FT, LFT only updates SS factors in the transfer phase, while the pre-trained weights remain fixed. Results To evaluate the proposed LFT, we designed three different transfer scenarios and conducted a comparative analysis of FT, LFT, and other methods at various sampling rates and data volumes. In the transfer scenario between different contrasts, LFT outperforms typical transfer strategies at various sampling rates and considerably reduces artifacts on reconstructed images. In transfer scenarios between different slice directions or anatomical structures, LFT surpasses the FT method, particularly when the target domain contains a decreasing number of training images, with a maximum improvement of up to 2.06 dB (5.89%) in peak signal-to-noise ratio. Discussion The LFT strategy shows great potential to address the issues of "catastrophic forgetting" and overfitting in transfer scenarios for MRI reconstruction, while reducing the reliance on the amount of data in the target domain. 
Linear fine-tuning is expected to shorten the development cycle of reconstruction models for adapting to complicated clinical scenarios, thereby enhancing the clinical applicability of deep MRI reconstruction.
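The core of LFT is that the pretrained weights w stay frozen while per-weight scaling s and shifting b factors are learned, giving effective weights s * w + b. A minimal framework-agnostic sketch; in the paper these factors live inside a deep reconstruction network, so the flat-list form here is purely illustrative.

```python
def linear_fine_tune(pretrained_weights, scales, shifts):
    """Effective weights under linear fine-tuning: s * w + b per weight.

    Only `scales` and `shifts` would receive gradient updates in the
    transfer phase; `pretrained_weights` stay fixed.
    """
    return [s * w + b for w, s, b in zip(pretrained_weights, scales, shifts)]
```

Initializing s = 1 and b = 0 reproduces the source-domain model exactly, which is what makes the strategy resistant to catastrophic forgetting.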
Affiliation(s)
- Wanqing Bi
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Jianan Xv
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Mengdie Song
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Xiaohan Hao
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Fuqing Medical Co., Ltd., Hefei, Anhui, China
- Dayong Gao
- Department of Mechanical Engineering, University of Washington, Seattle, WA, United States
- Fulang Qi
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
10
Lou YS, Lin CS, Fang WH, Lee CC, Lin C. Extensive deep learning model to enhance electrocardiogram application via latent cardiovascular feature extraction from identity identification. Comput Methods Programs Biomed 2023; 231:107359. PMID: 36738606. DOI: 10.1016/j.cmpb.2023.107359.
Abstract
BACKGROUND AND OBJECTIVE Deep learning models (DLMs) have been successfully applied in biomedicine primarily using supervised learning with large, annotated databases. However, scarce training resources limit the potential of DLMs for electrocardiogram (ECG) analysis. METHODS We have developed a novel pre-training strategy for unsupervised identity identification with an area under the receiver operating characteristic curve (AUC) >0.98. Accordingly, a DLM pre-trained with identity identification can be applied to 70 patient characteristic predictions using transfer learning (TL). These ECG-based patient characteristics were then used for cardiovascular disease (CVD) risk prediction. The DLMs were trained using 507,729 ECGs from 222,473 patients and validated using two independent validation sets (n = 27,824/31,925). RESULTS The DLMs using our method exhibited better performance than directly trained DLMs. Additionally, our DLM performed better than those of previous studies in terms of gender (AUC [internal/external] = 0.982/0.968), age (correlation = 0.886/0.892), low ejection fraction (AUC = 0.942/0.951), and critical markers not addressed previously, including high B-type natriuretic peptide (AUC = 0.921/0.899). Additionally, approximately 50% of the ECG-based characteristics provided significantly more prediction information for cardiovascular risk than real characteristics. CONCLUSIONS This is the first study to use identity identification as a pre-training task for TL in ECG analysis. An extensive exploration of the relationship between ECG and 70 patient characteristics was conducted. Our DLM-enhanced ECG interpretation system extensively advanced ECG-related patient characteristic prediction and mortality risk management for cardiovascular diseases.
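The AUCs reported above are standard ROC areas, i.e. the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal rank-based sketch (stdlib-only, illustrative; the function name is not from the paper):

```python
def roc_auc(labels, scores):
    """Empirical AUC: fraction of positive/negative pairs ranked
    correctly, with tied scores counting one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```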
Affiliation(s)
- Yu-Sheng Lou
- Graduate Institutes of Life Sciences, National Defense Medical Center, Taipei, Taiwan; School of Public Health, National Defense Medical Center, Taipei, Taiwan
| | - Chin-Sheng Lin
- Division of Cardiology, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C.; Medical Technology Education Center, School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C
| | - Wen-Hui Fang
- Department of Family and Community Medicine, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C
| | - Chia-Cheng Lee
- Department of Medical Informatics, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C.; Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, R.O.C
| | - Chin Lin
- Graduate Institutes of Life Sciences, National Defense Medical Center, Taipei, Taiwan; School of Public Health, National Defense Medical Center, Taipei, Taiwan; Medical Technology Education Center, School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C.; School of Medicine, National Defense Medical Center, Taipei, Taiwan, R.O.C..
| |
Collapse
|
11
|
Zhang J, Liu J, Liang Z, Xia L, Zhang W, Xing Y, Zhang X, Tang G. Differentiation of acute and chronic vertebral compression fractures using conventional CT based on deep transfer learning features and hand-crafted radiomics features. BMC Musculoskelet Disord 2023; 24:165. [PMID: 36879285] [PMCID: PMC9987077] [DOI: 10.1186/s12891-023-06281-5]
Abstract
BACKGROUND We evaluated the diagnostic efficacy of deep learning radiomics (DLR) and hand-crafted radiomics (HCR) features in differentiating acute and chronic vertebral compression fractures (VCFs). METHODS A total of 365 patients with VCFs were retrospectively analysed based on their computed tomography (CT) scan data. All patients completed MRI examination within 2 weeks. There were 315 acute VCFs and 205 chronic VCFs. Deep transfer learning (DTL) features and HCR features were extracted from CT images of patients with VCFs using DLR and traditional radiomics, respectively, and feature fusion was performed to establish a least absolute shrinkage and selection operator (LASSO) model. MRI depiction of vertebral bone marrow oedema was used as the gold standard for acute VCF, and model performance was evaluated using the receiver operating characteristic (ROC) curve. To separately evaluate the effectiveness of DLR, traditional radiomics, and feature fusion in the differential diagnosis of acute and chronic VCFs, we also constructed a nomogram based on the clinical baseline data to visualize the classification evaluation. The predictive power of each model was compared using the DeLong test, and the clinical value of the nomogram was evaluated using decision curve analysis (DCA). RESULTS Fifty DTL features were obtained from DLR, 41 HCR features were obtained from traditional radiomics, and 77 fused features were obtained after screening and fusing the two feature sets. The areas under the curve (AUCs) of the DLR model in the training and test cohorts were 0.992 (95% confidence interval (CI), 0.983-0.999) and 0.871 (95% CI, 0.805-0.938), respectively. The AUCs of the conventional radiomics model in the training and test cohorts were 0.973 (95% CI, 0.955-0.990) and 0.854 (95% CI, 0.773-0.934), respectively. The AUCs of the feature fusion model in the training and test cohorts were 0.997 (95% CI, 0.994-0.999) and 0.915 (95% CI, 0.855-0.974), respectively.
The AUCs of the nomogram, constructed from the fused features combined with clinical baseline data, were 0.998 (95% CI, 0.996-0.999) and 0.946 (95% CI, 0.906-0.987) in the training and test cohorts, respectively. The DeLong test showed that the differences between the feature fusion model and the nomogram were not statistically significant in either cohort (P = 0.794 and 0.668, respectively), whereas the differences among the other prediction models were statistically significant in both cohorts (P < 0.05). DCA showed that the nomogram had high clinical value. CONCLUSION The feature fusion model can be used for the differential diagnosis of acute and chronic VCFs, and its diagnostic ability is improved compared with either radiomics approach alone. The nomogram also has high predictive value for acute and chronic VCFs and can be a potential decision-making tool to assist clinicians, especially when a patient is unable to undergo spinal MRI examination.
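The LASSO screening step applied to the fused DTL + HCR features can be sketched with a tiny coordinate-descent implementation. This is a toy, not the study's pipeline; the orthogonal design matrix and penalty value below are hypothetical, chosen so the solution is exact.

```python
def soft_threshold(rho, lam):
    """Proximal operator of the L1 penalty: shrinks rho toward zero by lam."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=50):
    """Coordinate-descent LASSO for standardized features; returns sparse weights."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual (j held out).
            rho = sum(
                X[i][j] * (y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z
    return w

# Orthogonal toy design (Hadamard rows): 4 samples x 4 candidate "features".
X = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
# The outcome depends only on the first two features; LASSO should zero the rest.
y = [2.0 * r[0] + 0.8 * r[1] for r in X]
w = lasso_cd(X, y, lam=0.5)
```

The shrinkage is what makes the 77-feature fused set tractable: coefficients of uninformative features are driven exactly to zero rather than merely made small.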
Affiliation(s)
- Jun Zhang
- Department of Radiology, Clinical Medical College of Shanghai Tenth People's Hospital of Nanjing Medical University, 301 Middle Yanchang Road, Shanghai, 200072, P.R. China; Department of Radiology, Sir Run Run Hospital affiliated to Nanjing Medical University, 109 Longmian Road, Nanjing, Jiangsu, 211002, P.R. China
- Jiayi Liu
- Department of Radiology, Sir Run Run Hospital affiliated to Nanjing Medical University, 109 Longmian Road, Nanjing, Jiangsu, 211002, P.R. China
- Zhipeng Liang
- Department of Radiology, Sir Run Run Hospital affiliated to Nanjing Medical University, 109 Longmian Road, Nanjing, Jiangsu, 211002, P.R. China
- Liang Xia
- Department of Radiology, Sir Run Run Hospital affiliated to Nanjing Medical University, 109 Longmian Road, Nanjing, Jiangsu, 211002, P.R. China
- Weixiao Zhang
- Department of Radiology, Sir Run Run Hospital affiliated to Nanjing Medical University, 109 Longmian Road, Nanjing, Jiangsu, 211002, P.R. China
- Yanfen Xing
- Department of Radiology, Sir Run Run Hospital affiliated to Nanjing Medical University, 109 Longmian Road, Nanjing, Jiangsu, 211002, P.R. China
- Xueli Zhang
- Department of Radiology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, 301 Middle Yanchang Road, Shanghai, 200072, P.R. China
- Guangyu Tang
- Department of Radiology, Clinical Medical College of Shanghai Tenth People's Hospital of Nanjing Medical University, 301 Middle Yanchang Road, Shanghai, 200072, P.R. China; Department of Radiology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, 301 Middle Yanchang Road, Shanghai, 200072, P.R. China

12
Atasever S, Azginoglu N, Terzi DS, Terzi R. A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clin Imaging 2023; 94:18-41. [PMID: 36462229] [DOI: 10.1016/j.clinimag.2022.11.003]
Abstract
This survey aims to identify commonly used methods, datasets, future trends, knowledge gaps, constraints, and limitations in the field, providing an overview of current solutions used in medical image analysis in parallel with the rapid development of transfer learning (TL). Unlike previous studies, this survey grouped studies from the last five years (January 2017 to February 2021) according to different anatomical regions, detailing the modality, medical task, TL method, source data, target data, and public or private datasets used in medical imaging. It also provides readers with detailed information on technical challenges, opportunities, and future research trends. In this way, an overview of recent developments is provided to help researchers select the most effective and efficient methods and to access widely used, publicly available medical datasets, research gaps, and limitations of the available literature.
Affiliation(s)
- Sema Atasever
- Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey
- Nuh Azginoglu
- Computer Engineering Department, Kayseri University, Kayseri, Turkey
- Ramazan Terzi
- Computer Engineering Department, Amasya University, Amasya, Turkey

13
Waisberg E, Ong J, Kamran SA, Paladugu P, Zaman N, Lee AG, Tavakkoli A. Transfer learning as an AI-based solution to address limited datasets in space medicine. Life Sci Space Res 2023; 36:36-38. [PMID: 36682827] [DOI: 10.1016/j.lssr.2022.12.002]
Abstract
The advent of artificial intelligence (AI) has a promising role in future long-duration spaceflight missions. Traditional AI algorithms rely on training and testing data from the same domain. However, astronaut medical data are naturally limited to a small sample size and often difficult to collect, leading to extremely limited datasets. This significantly limits the applicability of traditional machine learning methodologies. Transfer learning is a potential solution to this dataset size limitation and can help improve the training time and performance of neural networks. We discuss the unique challenges of space medicine in producing datasets and transfer learning as an emerging technique to address these issues.
Affiliation(s)
- Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, United States
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada, United States
- Phani Paladugu
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania, United States; Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada, United States
- Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, Texas, United States; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, Texas, United States; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, Texas, United States; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, New York, United States; Department of Ophthalmology, University of Texas Medical Branch, Galveston, Texas, United States; University of Texas MD Anderson Cancer Center, Houston, Texas, United States; Texas A&M College of Medicine, Texas, United States; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, Iowa, United States
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada, United States

14
Feng B, Chen X, Chen Y, Yu T, Duan X, Liu K, Li K, Liu Z, Lin H, Li S, Chen X, Ke Y, Li Z, Cui E, Long W, Liu X. Identifying Solitary Granulomatous Nodules from Solid Lung Adenocarcinoma: Exploring Robust Image Features with Cross-Domain Transfer Learning. Cancers (Basel) 2023; 15:892. [PMID: 36765850] [PMCID: PMC9913209] [DOI: 10.3390/cancers15030892]
Abstract
PURPOSE This study aimed to find suitable source domain data in cross-domain transfer learning for extracting robust image features, and then to build a model to preoperatively distinguish lung granulomatous nodules (LGNs) from lung adenocarcinoma (LAC) in solitary pulmonary solid nodules (SPSNs). METHODS Data from 841 patients with SPSNs from five centres were collected retrospectively. First, adaptive cross-domain transfer learning was used to construct transfer learning signatures (TLS) under different source domain data and to conduct a comparative analysis. The Wasserstein distance was used to assess the similarity between the source domain and target domain data in cross-domain transfer learning. Second, a cross-domain transfer learning radiomics model (TLRM) combining the best-performing TLS, clinical factors, and subjective CT findings was constructed. Finally, the performance of the model was validated in multicentre validation cohorts. RESULTS Relative to other source domain data, the TLS based on lung whole-slide images as source domain data (TLS-LW) had the best performance in all validation cohorts (AUC range: 0.8228-0.8984). Meanwhile, the Wasserstein distance of TLS-LW was 1.7108, the smallest among the candidates. Finally, TLS-LW, age, spiculated sign, and lobulated shape were used to build the TLRM. In all validation cohorts, the AUC range was 0.9074-0.9442. Decision curve analysis and integrated discrimination improvement showed that the TLRM outperformed the other models. CONCLUSIONS The TLRM could assist physicians in preoperatively differentiating LGNs from LAC in SPSNs. Furthermore, cross-domain transfer learning extracts more robust image features when lung whole-slide images are used as the source domain data, with a better effect than other source images.
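The Wasserstein distance the authors use to score source-target similarity has a particularly simple empirical form in one dimension: for equal-size samples it is the mean absolute difference between sorted samples. The sketch below shows only that simplified 1-D form (the study's actual feature space and computation are not reproduced):

```python
def wasserstein_1d(a, b):
    """Empirical Wasserstein-1 distance between two equal-size 1-D samples:
    the average absolute difference between corresponding order statistics."""
    if len(a) != len(b):
        raise ValueError("this simplified form assumes equal sample sizes")
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

# A source sample shifted by +1 from the target sits at distance 1.0.
d = wasserstein_1d([0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0])
```

A smaller distance flags a source domain whose feature distribution is closer to the target, mirroring how TLS-LW's distance of 1.7108 was the smallest among candidate source domains.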
Affiliation(s)
- Bao Feng
- Department of Radiology, Jiangmen Central Hospital, Jiangmen 529000, China
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin 541004, China
- Xiangmeng Chen
- Department of Radiology, Jiangmen Central Hospital, Jiangmen 529000, China
- Yehang Chen
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin 541004, China
- Tianyou Yu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
- Xiaobei Duan
- Department of Nuclear Medicine, Jiangmen Central Hospital, Jiangmen 529000, China
- Kunfeng Liu
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Kunwei Li
- Department of Radiology, Fifth Affiliated Hospital Sun Yat-sen University, Zhuhai 519000, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Huan Lin
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Sheng Li
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Xiaodong Chen
- Department of Radiology, Affiliated Hospital of Guangdong Medical University, Zhanjiang 524000, China
- Yuting Ke
- Department of Radiology, Affiliated Hospital of Guangdong Medical University, Zhanjiang 524000, China
- Zhi Li
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin 541004, China
- Enming Cui
- Department of Radiology, Jiangmen Central Hospital, Jiangmen 529000, China
- Wansheng Long
- Department of Radiology, Jiangmen Central Hospital, Jiangmen 529000, China
- Xueguo Liu
- Department of Radiology, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen 518000, China

15
Liu Y, Cui E. Classification of tumor from computed tomography images: A brain-inspired multisource transfer learning under probability distribution adaptation. Front Hum Neurosci 2022; 16:1040536. [PMID: 36337851] [PMCID: PMC9632652] [DOI: 10.3389/fnhum.2022.1040536]
Abstract
Preoperative differentiation of gastric cancer from primary gastric lymphoma is challenging and clinically important. Inspired by the inductive reasoning of the human brain, transfer learning can improve the diagnostic performance of a target task by utilizing knowledge learned from other domains (source domains). However, most studies focus on single-source transfer learning, which may degrade model performance when a large domain shift exists between the single source domain and the target domain. By simulating the multi-modal information learning and transfer mechanism of the human brain, this study designed a multisource transfer learning feature extraction and classification framework that enhances the prediction performance of the target model by using multisource medical data (domains). First, this manuscript designs a feature extraction network that takes the maximum mean discrepancy based on the Wasserstein distance as an adaptive measure of probability distribution and extracts the invariant representations shared between source and target domain data. Then, because randomly generated parameters introduce uncertainty into the prediction accuracy and generalization ability of the extreme learning machine network, 1-norm regularization is used to impose sparsity on the output weight matrix and improve the robustness of the model. Finally, experiments are carried out on data from two medical centers. The experimental results show that the areas under the curve (AUCs) of the method are 0.958 and 0.929 in the two validation cohorts, respectively. The method in this manuscript can provide doctors with a better diagnostic reference, which has practical significance.
Affiliation(s)
- Yu Liu
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, China
- Enming Cui
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, Guangdong, China
- Correspondence: Enming Cui

16
Chen Y, Chen X. A brain-like classification method for computed tomography images based on adaptive feature matching dual-source domain heterogeneous transfer learning. Front Hum Neurosci 2022; 16:1019564. [PMID: 36304588] [PMCID: PMC9592699] [DOI: 10.3389/fnhum.2022.1019564]
Abstract
Transfer learning can improve the robustness of deep learning in the case of small samples. However, when the semantic difference between the source domain data and the target domain data is large, transfer learning easily introduces redundant features and leads to negative transfer. Following the mechanism by which the human brain focuses on effective features while ignoring redundant ones in recognition tasks, a brain-like classification method based on adaptive feature matching dual-source-domain heterogeneous transfer learning is proposed for the preoperative aided diagnosis of lung granuloma and lung adenocarcinoma in patients with a solitary pulmonary solid nodule in the case of small samples. The method includes two parts: (1) feature extraction and (2) feature classification. In the feature extraction part, by simulating the feature selection mechanism the human brain uses when drawing inferences about other cases from one instance, an adaptive selection-based dual-source-domain feature matching network is proposed to determine the matching weight of each pair of feature maps and of each pair of convolution layers between the two source networks and the target network, respectively. These two weights adaptively select, respectively, the source-network features that are conducive to learning the target task and the destination of feature transfer, improving the robustness of the target network. Meanwhile, a target network based on diverse branch blocks is proposed, giving the target network different receptive fields and complex paths to further improve its feature expression ability. Second, the convolution kernels of the target network are used as the feature extractor to extract features.
In the feature classification part, an ensemble classifier based on a sparse Bayesian extreme learning machine is proposed that automatically decides how to combine the outputs of the base classifiers to improve classification performance. Finally, the experimental results on data from two centers (AUCs of 0.9542 and 0.9356, respectively) show that this method can provide a better diagnostic reference for doctors.
Affiliation(s)
- Yehang Chen
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin, China
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Xiangmeng Chen
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Correspondence: Xiangmeng Chen

17
Luximon DC, Ritter T, Fields E, Neylon J, Petragallo R, Abdulkadir Y, Charters J, Low DA, Lamb JM. Development and inter-institutional validation of an automatic vertebral-body misalignment error detector for Cone-Beam CT guided radiotherapy. Med Phys 2022; 49:6410-6423. [PMID: 35962982] [DOI: 10.1002/mp.15927]
Abstract
BACKGROUND In Cone-Beam Computed Tomography (CBCT) guided radiotherapy, off-by-one vertebral-body misalignments are rare but serious errors that lead to wrong-site treatments. PURPOSE An automatic error detection algorithm was developed that uses a three-branch convolutional neural network error detection model (EDM) to detect off-by-one vertebral-body misalignments using planning computed tomography (CT) images and setup CBCT images. METHODS Algorithm training and test data consisted of planning CTs and CBCTs from 480 patients undergoing radiotherapy treatment in the thoracic and abdominal regions at two radiotherapy clinics. The clinically applied registration was used to derive true-negative (no error) data. The setup and planning images were then misaligned by one vertebral body in both the superior and inferior directions, simulating the most likely misalignment scenarios. For each of the aligned and misaligned 3D image pairs, 2D slice pairs were automatically extracted in each anatomical plane about a point within the vertebral column. The three slice pairs obtained were then input to the EDM, which returned a probability of vertebral misalignment. One model (EDM1) was trained solely on data from institution #1. EDM1 was further trained using a lower learning rate on a dataset from institution #2 to produce a fine-tuned model, EDM2. Another model, EDM3, was trained from scratch using a training dataset composed of data from both institutions. These three models were validated on a randomly selected, unseen dataset composed of images from both institutions, for a total of 303 image pairs. Model performance was quantified using receiver operating characteristic analysis. Because vertebral-body misalignments are rare in the clinic, a minimum threshold value yielding a specificity of at least 99% was selected; using this threshold, the sensitivity was calculated for each model on each institution's test set separately.
RESULTS When applied to the combined test set, EDM1, EDM2, and EDM3 achieved areas under the curve of 99.5%, 99.4%, and 99.5%, respectively. EDM1 achieved sensitivities of 96% and 88% on the institution #1 and institution #2 test sets, respectively. EDM2 obtained a sensitivity of 95% on each institution's test set. EDM3 achieved sensitivities of 95% and 88% on the institution #1 and institution #2 test sets, respectively. CONCLUSION The proposed algorithm identified off-by-one vertebral-body misalignments in CBCT-guided radiotherapy with an accuracy sufficiently high for practical implementation. Fine-tuning the model on a multi-facility dataset can further enhance the generalizability of the algorithm.
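The operating-point selection described here (pick the smallest threshold whose specificity is at least 99%, then report sensitivity at that threshold) can be sketched directly; the scores and labels below are hypothetical, not the study's data.

```python
import math

def threshold_at_specificity(scores, labels, min_specificity=0.99):
    """Return (threshold, sensitivity), where the threshold is the smallest value
    for which predicting 'error' when score > threshold achieves the required
    specificity on the negative (no-error) cases."""
    negatives = sorted(s for s, y in zip(scores, labels) if y == 0)
    positives = [s for s, y in zip(scores, labels) if y == 1]
    # At least min_specificity of negatives must NOT exceed the threshold.
    k = math.ceil(min_specificity * len(negatives))
    threshold = negatives[k - 1]
    sensitivity = sum(s > threshold for s in positives) / len(positives)
    return threshold, sensitivity

# 100 negative scores spread over [0, 0.99] and 4 positive scores near 1.
scores = [i / 100 for i in range(100)] + [0.97, 0.985, 0.99, 0.995]
labels = [0] * 100 + [1] * 4
thr, sens = threshold_at_specificity(scores, labels)
```

Pinning specificity first reflects the clinical asymmetry the abstract notes: true misalignments are rare, so false alarms must be kept near zero even at some cost in sensitivity.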
Affiliation(s)
- Dishane C Luximon
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Timothy Ritter
- Department of Medical Physics, Virginia Commonwealth University, Richmond, VA, USA
- Emma Fields
- Department of Medical Physics, Virginia Commonwealth University, Richmond, VA, USA
- John Neylon
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Rachel Petragallo
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Yasin Abdulkadir
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- John Charters
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Daniel A Low
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- James M Lamb
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA

18
Rezaeifar B, Wolfs CJA, Lieuwes NG, Biemans R, Reniers B, Dubois LJ, Verhaegen F. A deep learning and Monte Carlo based framework for bioluminescence imaging center of mass-guided glioblastoma targeting. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac79f8]
Abstract
Objective. Bioluminescence imaging (BLI) is a valuable tool for non-invasive monitoring of glioblastoma multiforme (GBM) tumor-bearing small animals without incurring an x-ray radiation burden. However, the use of this imaging modality is limited by photon scattering and lack of spatial information. Attempts at reconstructing bioluminescence tomography (BLT) using mathematical models of light propagation have shown limited progress. Approach. This paper employed a different approach, using a deep convolutional neural network (CNN) to predict the tumor's center of mass (CoM). Transfer learning with a sizeable artificial database is employed to facilitate the training process for the much smaller target database, which includes Monte Carlo (MC) simulations of real orthotopic glioblastoma models. The predicted CoM was then used to estimate a BLI-based planning target volume (bPTV) by taking the CoM as the center of a sphere encompassing the tumor, with the volume of the encompassing sphere estimated from the total number of photons reaching the skin surface. Main results. Results show sub-millimeter accuracy for CoM prediction, with a median error of 0.59 mm. The proposed method also provides promising performance for BLI-based tumor targeting, with on average 94% of the tumor inside the bPTV while keeping average healthy tissue coverage below 10%. Significance. This work introduced a framework for developing and using a CNN for targeted radiation studies of GBM based on BLI. The framework will enable biologists to use BLI as their main image-guidance tool to target GBM tumors in rat models, avoiding delivery of a high x-ray imaging dose to the animals.
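The geometric step of the bPTV construction (a sphere centred on the predicted CoM, with radius derived from an estimated volume) reduces to inverting V = 4/3·π·r³. A minimal sketch follows; the photon-count-to-volume calibration is the paper's empirical step and is not reproduced here.

```python
import math

def sphere_radius(volume):
    """Radius of the sphere with the given volume (V = 4/3 * pi * r^3)."""
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

def inside_bptv(point, com, radius):
    """True if a point lies within the sphere centred on the predicted CoM."""
    return math.dist(point, com) <= radius

# Sanity check: a sphere with the volume of a radius-2 sphere recovers r = 2.
r = sphere_radius(4.0 / 3.0 * math.pi * 2.0 ** 3)
```

Tumor and healthy-tissue coverage then reduce to counting which voxel centres satisfy `inside_bptv` for the chosen CoM and radius.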
19
van Boven MR, Henke CE, Leemhuis AG, Hoogendoorn M, van Kaam AH, Königs M, Oosterlaan J. Machine Learning Prediction Models for Neurodevelopmental Outcome After Preterm Birth: A Scoping Review and New Machine Learning Evaluation Framework. Pediatrics 2022; 150:188249. [PMID: 35670123] [DOI: 10.1542/peds.2021-056052]
Abstract
BACKGROUND AND OBJECTIVES Outcome prediction of preterm birth is important for neonatal care, yet prediction performance using conventional statistical models remains insufficient. Machine learning has a high potential for complex outcome prediction. In this scoping review, we provide an overview of the current applications of machine learning models in the prediction of neurodevelopmental outcomes in preterm infants, assess the quality of the developed models, and provide guidance for future application of machine learning models to predict neurodevelopmental outcomes of preterm infants. METHODS A systematic search was performed using PubMed. Studies were included if they reported on neurodevelopmental outcome prediction in preterm infants using predictors from the neonatal period and applying machine learning techniques. Data extraction and quality assessment were independently performed by 2 reviewers. RESULTS Fourteen studies were included, focusing mainly on very or extreme preterm infants, predicting neurodevelopmental outcome before age 3 years, and mostly assessing outcomes using the Bayley Scales of Infant Development. Predictors were most often based on MRI. The most prevalent machine learning techniques included linear regression and neural networks. None of the studies met all newly developed quality assessment criteria. Studies least prone to inflated performance showed promising results, with areas under the curve up to 0.86 for classification and R2 values up to 91% in continuous prediction. A limitation was that only 1 data source was used for the literature search. CONCLUSIONS Studies least prone to inflated prediction results are the most promising. The provided evaluation framework may contribute to improved quality of future machine learning models.
Affiliation(s)
- Menne R van Boven
- Departments of Neonatology; Pediatrics, Follow-Me Program, Emma Neuroscience Group, and Amsterdam Reproduction and Development
- Celina E Henke
- Pediatrics, Follow-Me Program, Emma Neuroscience Group, and Amsterdam Reproduction and Development; Psychosocial Department, Emma Children's Hospital, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands
- Aleid G Leemhuis
- Departments of Neonatology; Pediatrics, Follow-Me Program, Emma Neuroscience Group, and Amsterdam Reproduction and Development
- Mark Hoogendoorn
- Faculty of Science, Quantitative Data Analytics Group, Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Anton H van Kaam
- Departments of Neonatology; Pediatrics, Follow-Me Program, Emma Neuroscience Group, and Amsterdam Reproduction and Development
- Marsh Königs
- Pediatrics, Follow-Me Program, Emma Neuroscience Group, and Amsterdam Reproduction and Development
- Jaap Oosterlaan
- Pediatrics, Follow-Me Program, Emma Neuroscience Group, and Amsterdam Reproduction and Development

20
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051] [PMCID: PMC9007400] [DOI: 10.1186/s12880-022-00793-7]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge from similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS A total of 425 peer-reviewed articles published in English up to December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The remaining studies applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g., ResNet or Inception) as feature extractors, which can save computational cost and time without degrading predictive power.
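The "feature extractor" approach this review recommends freezes a pretrained backbone and trains only a small classifier head. A minimal conceptual sketch (not from the review; the frozen "backbone" here is a stand-in random ReLU projection rather than a real pretrained ResNet or Inception, and the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" backbone: its weights are never updated during training.
W_backbone = rng.normal(size=(16, 32)) / 4.0

def extract_features(x):
    """Forward pass through the frozen backbone."""
    return np.maximum(x @ W_backbone, 0.0)

# Toy binary-classification data standing in for a small medical dataset.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Train only the logistic-regression head on the frozen features.
feats = extract_features(X)
w_head, b_head, lr = np.zeros(32), 0.0, 0.1
losses = []
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    grad = p - y                       # gradient of the cross-entropy loss
    w_head -= lr * feats.T @ grad / y.size
    b_head -= lr * grad.mean()

train_accuracy = np.mean((p > 0.5) == y)
```

Because only the small head is optimized, training is cheap; this is the computational saving the review attributes to using deep models purely as feature extractors.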
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
21
Feng B, Huang L, Liu Y, Chen Y, Zhou H, Yu T, Xue H, Chen Q, Zhou T, Kuang Q, Yang Z, Chen X, Chen X, Peng Z, Long W. A Transfer Learning Radiomics Nomogram for Preoperative Prediction of Borrmann Type IV Gastric Cancer From Primary Gastric Lymphoma. Front Oncol 2022; 11:802205. [PMID: 35087761] [PMCID: PMC8789309] [DOI: 10.3389/fonc.2021.802205]
Abstract
Objective This study aims to differentiate Borrmann type IV gastric cancer (GC) from primary gastric lymphoma (PGL) preoperatively using a transfer learning radiomics nomogram (TLRN) with whole slide images of GC as source domain data. Materials and Methods This study retrospectively enrolled 438 patients with histopathologic diagnoses of Borrmann type IV GC and PGL who underwent CT examinations at three hospitals. Quantitative transfer learning features were extracted by the proposed transfer learning radiopathomic network and used to construct transfer learning radiomics signatures (TLRS). A TLRN, which integrates the TLRS, clinical factors, and CT subjective findings, was developed by multivariate logistic regression. The diagnostic performance and clinical usefulness of the TLRN were assessed in the independent validation sets. Results The TLRN was built from the TLRS and the high-enhanced serosa sign and showed good agreement on the calibration curve. Its performance was superior to that of the clinical model and the TLRS, with areas under the curve (AUC) of 0.958 (95% confidence interval [CI], 0.883–0.991), 0.867 (95% CI, 0.794–0.922), and 0.921 (95% CI, 0.860–0.960) in the internal and two external validation cohorts, respectively. Decision curve analysis (DCA) showed that the TLRN was better than any other model, and stratification analysis suggested that it has potential generalization ability. Conclusions The proposed TLRN based on gastric whole slide images may help preoperatively differentiate PGL from Borrmann type IV GC.
Affiliation(s)
- Bao Feng
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China; School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Liebin Huang
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Yu Liu
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Yehang Chen
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Haoyang Zhou
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Tianyou Yu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
- Huimin Xue
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Qinxian Chen
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Tao Zhou
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Qionglian Kuang
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Zhiqi Yang
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Xiangguang Chen
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Xiaofeng Chen
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Zhenpeng Peng
- Department of Radiology, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Wansheng Long
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
22
Qin C, Hu W, Wang X, Ma X. Application of Artificial Intelligence in Diagnosis of Craniopharyngioma. Front Neurol 2022; 12:752119. [PMID: 35069406] [PMCID: PMC8770750] [DOI: 10.3389/fneur.2021.752119]
Abstract
Craniopharyngioma is a congenital brain tumor clinically characterized by hypothalamic-pituitary dysfunction, increased intracranial pressure, and visual field disorders, among other impairments. Its clinical diagnosis mainly depends on radiological examinations (such as computed tomography and magnetic resonance imaging). However, manually assessing numerous radiological images is challenging, and the diagnostic result depends heavily on the doctor's experience. The development of artificial intelligence has brought about a great transformation in the clinical diagnosis of craniopharyngioma. This study reviewed the application of artificial intelligence technology in the clinical diagnosis of craniopharyngioma, covering differential classification, prediction of tissue invasion and gene mutation, prognosis prediction, and related tasks. Based on this review, technical routes for intelligent diagnosis using traditional machine learning and deep learning models are further proposed. Additionally, regarding the limitations and possibilities of artificial intelligence in craniopharyngioma diagnosis, this study discusses issues requiring attention in future research, including few-shot learning, imbalanced datasets, semi-supervised models, and multi-omics fusion.
Affiliation(s)
- Caijie Qin
- Institute of Information Engineering, Sanming University, Sanming, China
- Wenxing Hu
- University of New South Wales, Sydney, NSW, Australia
- Xinsheng Wang
- School of Information Science and Engineering, Harbin Institute of Technology at Weihai, Weihai, China
- Xibo Ma
- CBSR & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
23
Agrawal T, Choudhary P. Segmentation and classification on chest radiography: a systematic survey. The Visual Computer 2022; 39:875-913. [PMID: 35035008] [PMCID: PMC8741572] [DOI: 10.1007/s00371-021-02352-7]
Abstract
Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders. A trained radiologist is required to interpret the radiographs, but even experienced radiologists can sometimes misinterpret the findings, which motivates computer-aided detection and diagnosis. For decades, researchers detected pulmonary disorders automatically using traditional computer vision (CV) methods; the availability of large annotated datasets and computing hardware has since allowed deep learning to dominate the area. It is now the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical imaging analysis. This paper focuses on research that uses chest X-rays for lung segmentation and for the detection/classification of pulmonary disorders on publicly available datasets. Studies using Generative Adversarial Network (GAN) models for segmentation and classification on chest X-rays are also included, as GANs have gained the interest of the CV community for their ability to mitigate medical data scarcity. We also include research conducted before the rise of deep learning models to give a clear picture of the field. Many surveys have been published, but none is dedicated to chest X-rays. This study will help readers learn about existing techniques and approaches and their significance.
Affiliation(s)
- Tarun Agrawal
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
24
Li X, Davis RC, Xu Y, Wang Z, Souma N, Sotolongo G, Bell J, Ellis M, Howell D, Shen X, Lafata KJ, Barisoni L. Deep learning segmentation of glomeruli on kidney donor frozen sections. J Med Imaging (Bellingham) 2021; 8:067501. [PMID: 34950750] [PMCID: PMC8685284] [DOI: 10.1117/1.jmi.8.6.067501]
Abstract
Purpose: Recent advances in computational image analysis offer the opportunity to develop automatic quantification of histologic parameters as aid tools for practicing pathologists. We aim to develop deep learning (DL) models to quantify nonsclerotic and sclerotic glomeruli on frozen sections from donor kidney biopsies. Approach: A total of 258 whole slide images (WSI) from cadaveric donor kidney biopsies performed at our institution (n = 123) and at external institutions (n = 135) were used in this study. WSIs from our institution were divided at the patient level into training and validation datasets (ratio 0.8:0.2), and external WSIs were used as an independent testing dataset. Nonsclerotic (n = 22,767) and sclerotic (n = 1366) glomeruli were manually annotated by study pathologists on all WSIs. A nine-layer convolutional neural network based on the common U-Net architecture was developed and tested for the segmentation of nonsclerotic and sclerotic glomeruli. DL-derived segmentation, manual segmentation, and the reported glomerular count (standard of care) were compared. Results: The average Dice similarity coefficients on the testing dataset were 0.90 and 0.83, and the F1, recall, and precision scores were 0.93, 0.96, and 0.90 and 0.87, 0.93, and 0.81, for nonsclerotic and sclerotic glomeruli, respectively. DL-derived and manual segmentation-derived glomerular counts were comparable, but statistically different from the reported glomerular count. Conclusions: DL segmentation is a feasible and robust approach for automatic quantification of glomeruli and represents a first step toward new protocols for the evaluation of donor kidney biopsies.
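The Dice similarity coefficient reported above measures the overlap between predicted and annotated segmentation masks. A minimal sketch of its computation on hypothetical toy binary masks (not the study's code or data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks; eps guards
    against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: prediction and ground truth share 2 of 3 foreground pixels.
pred = [[1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
target = [[1, 1, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
score = dice_coefficient(pred, target)  # 2 * 2 / (3 + 3) ≈ 0.667
```

A Dice of 1 indicates perfect overlap, so the reported 0.90 and 0.83 reflect strong agreement with the pathologists' annotations.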
Affiliation(s)
- Xiang Li
- Duke University, Department of Electrical and Computer Engineering, Durham, North Carolina, United States
- Richard C. Davis
- Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Yuemei Xu
- Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Nanjing Drum Tower Hospital, Department of Pathology, Nanjing, China
- Zehan Wang
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Nao Souma
- Duke University, Department of Medicine, Division of Nephrology, Durham, North Carolina, United States
- Gina Sotolongo
- Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Jonathan Bell
- Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Matthew Ellis
- Duke University, Department of Medicine, Division of Nephrology, Durham, North Carolina, United States
- Duke University, Department of Surgery, Durham, North Carolina, United States
- David Howell
- Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Xiling Shen
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Kyle J. Lafata
- Duke University, Department of Electrical and Computer Engineering, Durham, North Carolina, United States
- Duke University, Department of Radiation Oncology, Durham, North Carolina, United States
- Duke University, Department of Radiology, Durham, North Carolina, United States
- Laura Barisoni
- Duke University, Department of Pathology, Division of AI and Computational Pathology, Durham, North Carolina, United States
- Duke University, Department of Medicine, Division of Nephrology, Durham, North Carolina, United States