1
Knull E, Smith CW, Ward AD, Fenster A, Hoover DA. Towards U-Net-based intraoperative 2D dose prediction in high dose rate prostate brachytherapy. Brachytherapy 2024:S1538-4721(24)00457-4. PMID: 39668102. DOI: 10.1016/j.brachy.2024.11.007.
Abstract
BACKGROUND Poor needle placement in prostate high-dose-rate brachytherapy (HDR-BT) results in suboptimal dosimetry, and mentally predicting these dosimetric effects during the procedure is difficult, creating a barrier to the widespread availability of high-quality prostate HDR-BT. PURPOSE To provide earlier feedback on needle implantation quality, we trained machine learning models to predict 2D dosimetry for prostate HDR-BT on axial transrectal ultrasound (TRUS) images. METHODS AND MATERIALS Clinical treatment plans from 248 prostate HDR-BT patients were retrospectively collected and randomly split 80/20 for training/testing. Fifteen U-Net models were implemented to predict the 90%, 100%, 120%, 150%, and 200% isodose levels in the prostate base, midgland, and apex. Predicted isodose lines were compared to the delivered dose using the Dice similarity coefficient (DSC), precision, recall, average symmetric surface distance, area percent difference, and 95th percentile Hausdorff distance. To benchmark performance, 10 cases were retrospectively replanned and compared against the clinical plans using the same metrics. RESULTS Models predicting the 90% and 100% isodose lines at midgland performed best, with median DSCs of 0.97 and 0.96, respectively. Performance declined as the isodose level increased, with median DSCs of 0.90, 0.79, and 0.65 for the 120%, 150%, and 200% models, respectively. In the base, median DSC was 0.94 for 90% and decreased to 0.64 for 200%. In the apex, median DSC was 0.93 for 90% and decreased to 0.63 for 200%. Median prediction time was 25 ms. CONCLUSION U-Net models predicted HDR-BT isodose lines on 2D TRUS images accurately and quickly enough for real-time use. Incorporating auto-segmentation algorithms will allow intraoperative feedback on needle implantation quality.
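Several of the agreement metrics above reduce to simple overlap arithmetic on binary masks. As a minimal sketch (not the authors' code), the Dice similarity coefficient between a predicted isodose region and the delivered-dose region on a 2D slice can be computed as follows; the mask shapes and values are illustrative only.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Illustrative 512 x 512 axial-slice-sized masks (not real plan data)
pred_mask = np.zeros((512, 512), dtype=bool); pred_mask[100:300, 120:320] = True
true_mask = np.zeros((512, 512), dtype=bool); true_mask[110:310, 130:330] = True
print(f"DSC = {dice_coefficient(pred_mask, true_mask):.3f}")
```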
Affiliation(s)
- Eric Knull
- Robarts Research Institute, Western University, London, Ontario, Canada
- Christopher W Smith
- Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; Baines Imaging Research Laboratory, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada
- Aaron D Ward
- Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; Baines Imaging Research Laboratory, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada; London Regional Cancer Program, London, Ontario, Canada
- Aaron Fenster
- Robarts Research Institute, Western University, London, Ontario, Canada; Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Douglas A Hoover
- Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada; London Regional Cancer Program, London, Ontario, Canada.
2
Andrade-Miranda G, Vega PS, Taguelmimt K, Dang HP, Visvikis D, Bert J. Exploring transformer reliability in clinically significant prostate cancer segmentation: A comprehensive in-depth investigation. Comput Med Imaging Graph 2024; 118:102459. PMID: 39566375. DOI: 10.1016/j.compmedimag.2024.102459.
Abstract
Despite the growing prominence of transformers in medical image segmentation, their application to clinically significant prostate cancer (csPCa) has been overlooked. Minimal attention has been paid to domain shift analysis and uncertainty assessment, both critical for safely implementing computer-aided diagnosis (CAD) systems. Domain shift in medical imaging refers to differences between the data used to train a model and the data evaluated later, arising from variations in imaging equipment, protocols, patient populations, and acquisition noise. While recent models improve in-domain performance, robustness and uncertainty estimation under out-of-domain distributions have received limited investigation, leaving model reliability unclear. Our study addresses csPCa at the voxel, lesion, and image levels, investigating models ranging from the traditional U-Net to cutting-edge transformers. We focus on four key points: robustness, calibration, out-of-distribution (OOD) detection, and misclassification detection (MD). Findings show that transformer-based models exhibit enhanced robustness at the image and lesion levels, both in and out of domain. However, this improvement does not fully translate to the voxel level, where convolutional neural networks (CNNs) perform better on most robustness metrics. Regarding uncertainty, hybrid transformers and transformer encoders performed better, but this trend depends on whether the task is misclassification detection or OOD detection.
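Calibration, one of the four reliability axes above, is commonly summarised with a binned reliability measure such as the expected calibration error (ECE). The abstract does not specify the estimator used, so the following is only an illustrative sketch for voxel-level positive-class probabilities, comparing the mean predicted probability with the observed positive fraction in each confidence bin.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned calibration error: weighted mean gap between the average
    predicted probability and the observed positive fraction per bin."""
    probs, labels = np.asarray(probs, float), np.asarray(labels, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            gap = abs(probs[in_bin].mean() - labels[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

rng = np.random.default_rng(0)
p = rng.uniform(size=100_000)   # synthetic voxel-wise csPCa probabilities
y = rng.binomial(1, p)          # labels drawn to be perfectly calibrated
print(f"ECE = {expected_calibration_error(p, y):.4f}")  # close to 0
```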
Affiliation(s)
- Pedro Soto Vega
- LaTIM UMR1101, INSERM, University of Brest, Brest, France; L@bISEN, Vision-AD and Auto-Rob, ISEN Yncréa Ouest, Brest, France.
- Hong-Phuong Dang
- LaTIM UMR1101, INSERM, University of Brest, Brest, France; CentraleSupélec, IETR UMR CNRS 6164, Cesson-Sévigné, France.
- Julien Bert
- LaTIM UMR1101, INSERM, University of Brest, Brest, France; University Hospital of Brest, Brest, France.
3
Liu H, Zeng Y, Li H, Wang F, Chang J, Guo H, Zhang J. DDANet: A deep dilated attention network for intracerebral haemorrhage segmentation. IET Syst Biol 2024; 18:285-297. PMID: 39582103. DOI: 10.1049/syb2.12103.
Abstract
Intracranial haemorrhage (ICH) is an urgent and potentially fatal medical condition caused by the rupture of a brain blood vessel, leading to blood accumulation in the brain tissue. Due to the pressure and damage it causes to brain tissue, ICH results in severe neurological impairment or even death. Recently, deep neural networks have been widely applied to improve the speed and precision of ICH detection, yet they are still challenged by small or subtle haemorrhages. The authors introduce DDANet, a novel haematoma segmentation model for brain CT images. Specifically, a dilated convolution pooling block is introduced in the intermediate layers of the encoder to enhance the feature extraction capability of the middle layers. Additionally, the authors incorporate a self-attention mechanism that captures the global semantic information of high-level features to guide the extraction and processing of low-level features, thereby enhancing the model's understanding of the overall structure while preserving detail. DDANet also integrates residual networks, channel attention, and spatial attention mechanisms for joint optimisation, effectively mitigating the severe class imbalance problem and aiding the training process. Experiments show that DDANet outperforms current methods, achieving a Dice coefficient, Jaccard index, sensitivity, accuracy, and specificity of 0.712, 0.601, 0.73, 0.997, and 0.998, respectively. The code is available at https://github.com/hpguo1982/DDANet.
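The published repository linked above is authoritative; the snippet below is only a hedged PyTorch sketch of the general idea behind a dilated-convolution block: parallel 3×3 convolutions with increasing dilation rates enlarge the receptive field of the encoder's middle layers without extra pooling. The class name, channel counts, and dilation rates here are illustrative and not taken from DDANet.

```python
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """Illustrative dilated-convolution block: parallel 3x3 convolutions with
    increasing dilation rates, concatenated and fused by a 1x1 convolution."""
    def __init__(self, in_channels: int, out_channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(out_channels * len(rates), out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch preserves spatial size (padding == dilation for 3x3 kernels)
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

x = torch.randn(1, 64, 128, 128)          # a batch of encoder feature maps
print(DilatedConvBlock(64, 64)(x).shape)  # torch.Size([1, 64, 128, 128])
```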
Affiliation(s)
- Haiyan Liu
- Department of Neurology, Xinyang Central Hospital, Xinyang, China
- School of Medicine, Xinyang Normal University, Xinyang, China
- Yu Zeng
- School of Computer and Information Technology, Xinyang Normal University, Xinyang, China
- Hao Li
- Department of Neurology, Xinyang Central Hospital, Xinyang, China
- School of Medicine, Xinyang Normal University, Xinyang, China
- Fuxin Wang
- Department of Neurology, Xinyang Central Hospital, Xinyang, China
- School of Medicine, Xinyang Normal University, Xinyang, China
- Jianjun Chang
- Department of Neurology, Xinyang Central Hospital, Xinyang, China
- Huaping Guo
- School of Computer and Information Technology, Xinyang Normal University, Xinyang, China
- Jian Zhang
- School of Computer and Information Technology, Xinyang Normal University, Xinyang, China
4
Yu K, Chen Y, Feng Z, Wang G, Deng Y, Li J, Ling L, Xu R, Xiao P, Yuan J. Segmentation and multiparametric evaluation of corneal whorl-like nerves for in vivo confocal microscopy images in dry eye disease. BMJ Open Ophthalmol 2024; 9:e001861. PMID: 39375151. PMCID: PMC11459327. DOI: 10.1136/bmjophth-2024-001861.
Abstract
OBJECTIVE To establish an automated corneal nerve analysis system for corneal in vivo confocal microscopy (IVCM) images covering both the whorl-like corneal nerves in the inferior whorl (IW) region and the straight ones in the central cornea, and to characterise the geometric features of corneal nerves in dry eye disease (DED). METHODS AND ANALYSIS An encoder-decoder-based semi-supervised method was proposed for corneal nerve segmentation. The model's performance was compared with the ground truth provided by experienced clinicians using the Dice similarity coefficient (DSC), mean intersection over union (mIoU), accuracy (Acc), sensitivity (Sen), and specificity (Spe). The corneal nerve total length (CNFL), tortuosity (CNTor), fractal dimension (CNDf), and number of branching points (CNBP) were used for further analysis in an independent DED dataset comprising 50 patients with DED and 30 healthy controls. RESULTS The model achieved 95.72% Acc, 97.88% Spe, 80.61% Sen, 75.26% DSC, 77.57% mIoU, and an area under the curve value of 0.98. For clinical evaluation, the CNFL, CNBP, and CNDf for both whorl-like and straight nerves showed a significant decrease in DED patients compared with healthy controls (p<0.05). Additionally, significantly elevated CNTor was detected in the IW in DED patients (p<0.05). The CNTor for straight corneal nerves, however, showed no significant alteration in DED patients (p>0.05). CONCLUSION The proposed method segments both whorl-like and straight corneal nerves in IVCM images with high accuracy and offers parameters to objectively quantify DED-induced corneal nerve injury. The IW is an effective region for detecting alterations of multiple geometric indices in DED patients.
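Of the geometric indices above, tortuosity has a particularly simple common definition: the ratio of a nerve's traced arc length to the straight-line (chord) distance between its endpoints. The exact CNTor formulation used in the study is not given in the abstract, so the following is only an illustrative sketch on synthetic centreline coordinates.

```python
import numpy as np

def nerve_tortuosity(points: np.ndarray) -> float:
    """Arc-length / chord-length tortuosity of a traced nerve centreline.
    `points` is an (N, 2) array of (x, y) coordinates along the nerve."""
    segments = np.diff(points, axis=0)
    arc_length = np.linalg.norm(segments, axis=1).sum()
    chord_length = np.linalg.norm(points[-1] - points[0])
    return arc_length / chord_length if chord_length > 0 else np.inf

t = np.linspace(0, 2 * np.pi, 200)
whorl_like = np.column_stack([t, 5 * np.sin(t)])   # wavy, whorl-like trace
straight = np.column_stack([t, np.zeros_like(t)])  # straight nerve trace
print(nerve_tortuosity(whorl_like), nerve_tortuosity(straight))  # >1 vs 1.0
```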
Affiliation(s)
- Kang Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yupei Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ziqing Feng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Gengyuan Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yuqing Deng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jiaxiong Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Lirong Ling
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ruiwen Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Peng Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jin Yuan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
5
Ostmeier S, Axelrod B, Liu Y, Yu Y, Jiang B, Yuen N, Pulli B, Verhaaren BFJ, Kaka H, Wintermark M, Michel P, Mahammedi A, Federau C, Lansberg MG, Albers GW, Moseley ME, Zaharchuk G, Heit JJ. Random expert sampling for deep learning segmentation of acute ischemic stroke on non-contrast CT. J Neurointerv Surg 2024:jnis-2023-021283. PMID: 38302420. PMCID: PMC11291713. DOI: 10.1136/jnis-2023-021283.
Abstract
BACKGROUND Outlining acutely infarcted tissue on non-contrast CT is a challenging task for which human inter-reader agreement is limited. We explored two different methods for training a supervised deep learning algorithm: one that used a segmentation defined by majority vote among experts and another that trained randomly on separate individual expert segmentations. METHODS The data set consisted of 260 non-contrast CT studies in 233 patients with acute ischemic stroke recruited from the multicenter DEFUSE 3 (Endovascular Therapy Following Imaging Evaluation for Ischemic Stroke 3) trial. Additional external validation was performed using 33 patients with matched stroke onset times from the University Hospital Lausanne. A benchmark U-Net was trained on the reference annotations of three experienced neuroradiologists to segment ischemic brain tissue using the majority vote and random expert sampling training schemes. The median volume, overlap, and distance segmentation metrics were determined for agreement in lesion segmentations between (1) the three experts, (2) the majority model and each expert, and (3) the random model and each expert. The two-sided Wilcoxon signed-rank test was used to compare performances (1) to (2) and (1) to (3). We further compared volumes with the 24-hour follow-up diffusion-weighted imaging (DWI; final infarct core) and assessed correlations with clinical outcome (modified Rankin Scale (mRS) at 90 days) using the Spearman method. RESULTS The random model outperformed both the inter-expert agreement and the majority model (Dice 0.51±0.04 vs 0.36±0.05 (P<0.0001) and 0.45±0.05 (P<0.0001)). The volume predicted by the random model correlated with clinical outcome (0.19, P<0.05), whereas the median expert volume and majority model volume did not. There was no significant difference when comparing the volume correlations of the random model, median expert volume, and majority model with the 24-hour follow-up DWI volume (P>0.05, n=51). CONCLUSION The random model for ischemic injury delineation on non-contrast CT surpassed both the inter-expert agreement and the performance of the majority model. The volumetric measures of the random model were consistent with the 24-hour follow-up DWI.
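The core difference between the two training schemes is how the target mask is chosen for each training example. A minimal sketch of the random-expert-sampling idea is shown below; the case tuples, mask placeholders, and `train_step` stub are hypothetical stand-ins for the actual U-Net update described in the study.

```python
import random

def train_with_random_expert_sampling(cases, epochs=3, seed=0):
    """For every case, draw one expert's segmentation at random as the
    training target, instead of a fixed majority-vote consensus mask."""
    rng = random.Random(seed)
    for _ in range(epochs):
        for image, expert_masks in cases:       # expert_masks: one mask per expert
            target = rng.choice(expert_masks)   # a different expert may be drawn each epoch
            train_step(image, target)

def train_step(image, target):
    """Placeholder for one gradient step of the segmentation network."""
    print(f"training on {image} with target {target}")

cases = [("ncct_001", ["expertA_mask", "expertB_mask", "expertC_mask"]),
         ("ncct_002", ["expertA_mask", "expertB_mask", "expertC_mask"])]
train_with_random_expert_sampling(cases, epochs=2)
```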
Affiliation(s)
- Sophie Ostmeier
- Department of Radiology, Stanford University, Stanford, California, USA
- Yongkai Liu
- Department of Radiology, Stanford University, Stanford, California, USA
- Yannan Yu
- Department of Radiology, University of California San Francisco, San Francisco, California, USA
- Bin Jiang
- Department of Radiology, Stanford University, Stanford, California, USA
- Nicole Yuen
- Department of Neurology, Stanford University School of Medicine, Stanford, California, USA
- Benjamin Pulli
- Department of Radiology, Stanford University, Stanford, California, USA
- Hussam Kaka
- Department of Radiology, Stanford University, Stanford, California, USA
- Max Wintermark
- Department of Radiology, University of Virginia, Charlottesville, Virginia, USA
- Patrik Michel
- Department of Neurology Service, University of Lausanne, Lausanne, Switzerland
- Maarten G Lansberg
- Department of Neurology, Stanford University School of Medicine, Stanford, California, USA
- Gregory W Albers
- Department of Neurology, Stanford University School of Medicine, Stanford, California, USA
- Michael E Moseley
- Department of Radiology, Stanford University, Stanford, California, USA
- Gregory Zaharchuk
- Department of Radiology, Stanford University, Stanford, California, USA
- Jeremy J Heit
- Department of Radiology, Neuroradiology and Neurointervention Division, Stanford University School of Medicine, Palo Alto, California, USA
6
Cepeda S, Romero R, Luque L, García-Pérez D, Blasco G, Luppino LT, Kuttner S, Esteban-Sinovas O, Arrese I, Solheim O, Eikenes L, Karlberg A, Pérez-Núñez Á, Zanier O, Serra C, Staartjes VE, Bianconi A, Rossi LF, Garbossa D, Escudero T, Hornero R, Sarabia R. Deep learning-based postoperative glioblastoma segmentation and extent of resection evaluation: Development, external validation, and model comparison. Neurooncol Adv 2024; 6:vdae199. PMID: 39659831. PMCID: PMC11631186. DOI: 10.1093/noajnl/vdae199.
Abstract
Background Automated assessment of the extent of resection (EOR) in glioblastomas is challenging, as it requires precise measurement of residual tumor volume. Many algorithms focus on preoperative scans, making them unsuitable for postoperative studies. Our objective was to develop a deep learning-based model for postoperative segmentation using magnetic resonance imaging (MRI). We also compared our model's performance with other available algorithms. Methods To develop the segmentation model, a training cohort from 3 research institutions and 3 public databases was used. Multiparametric MRI scans with ground truth labels for contrast-enhancing tumor (ET), edema, and surgical cavity served as training data. The models were trained using the MONAI and nnU-Net frameworks. Comparisons were made with currently available segmentation models using an external cohort from a research institution and a public database. Additionally, the model's ability to classify EOR was evaluated using the RANO-Resect classification system. To further validate our best-trained model, an additional independent cohort was used. Results The study included 586 scans: 395 for model training, 52 for model comparison, and 139 for independent validation. The nnU-Net framework produced the best model, with median Dice scores of 0.81 for ET, 0.77 for edema, and 0.81 for the surgical cavity. Our best-trained model classified patients into maximal and submaximal resection categories with 96% accuracy in the model comparison dataset and 84% in the independent validation cohort. Conclusions Our nnU-Net-based model outperformed other algorithms in both the segmentation and EOR classification tasks, providing a freely accessible tool with promising clinical applicability.
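Once pre- and postoperative contrast-enhancing tumor masks are available, EOR and residual volume follow from simple voxel counting. The sketch below is illustrative only: the 1 mm isotropic voxel size and the 1 ml residual cut-off used to split maximal from submaximal resection are hypothetical placeholders, not the RANO-Resect criteria or the authors' implementation.

```python
import numpy as np

def extent_of_resection(preop_mask: np.ndarray, postop_mask: np.ndarray,
                        voxel_volume_ml: float):
    """EOR (%) and residual contrast-enhancing tumor volume (ml) from
    binary pre- and postoperative segmentation masks."""
    preop_ml = preop_mask.sum() * voxel_volume_ml
    residual_ml = postop_mask.sum() * voxel_volume_ml
    eor = 100.0 * (preop_ml - residual_ml) / preop_ml if preop_ml > 0 else 0.0
    return eor, residual_ml

# Illustrative 1 mm isotropic voxels (0.001 ml each) and synthetic masks
pre = np.zeros((100, 100, 100), dtype=bool); pre[20:60, 20:60, 20:60] = True
post = np.zeros_like(pre); post[20:25, 20:25, 20:25] = True
eor, residual = extent_of_resection(pre, post, voxel_volume_ml=0.001)
# Hypothetical cut-off for illustration only
category = "maximal" if residual <= 1.0 else "submaximal"
print(f"EOR = {eor:.1f}%, residual = {residual:.2f} ml -> {category}")
```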
Affiliation(s)
- Santiago Cepeda
- Department of Neurosurgery, Río Hortega University Hospital, Valladolid, Spain
- Roberto Romero
- Center for Biomedical Research in Network of Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Valladolid, Spain
- Biomedical Engineering Group, Universidad de Valladolid, Valladolid, Spain
- Lidia Luque
- Department of Physics and Computational Radiology, Clinic for Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Computational Radiology and Artificial Intelligence (CRAI), Department of Physics and Computational Radiology, Clinic for Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Guillermo Blasco
- Department of Neurosurgery, La Princesa University Hospital, Madrid, Spain
- Luigi Tommaso Luppino
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway
- Samuel Kuttner
- The PET Imaging Center, University Hospital of North Norway, Tromsø, Norway
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway
- Ignacio Arrese
- Department of Neurosurgery, Río Hortega University Hospital, Valladolid, Spain
- Ole Solheim
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Neurosurgery, St. Olavs University Hospital, Trondheim, Norway
- Live Eikenes
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Anna Karlberg
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Ángel Pérez-Núñez
- Instituto de Investigación Sanitaria, 12 de Octubre University Hospital (i + 12), Madrid, Spain
- Department of Surgery, School of Medicine, Complutense University, Madrid, Spain
- Department of Neurosurgery, 12 de Octubre University Hospital (i + 12), Madrid, Spain
- Olivier Zanier
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Carlo Serra
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Victor E Staartjes
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Andrea Bianconi
- Neurosurgery Unit, Department of Neuroscience “Rita Levi Montalcini,” University of Turin, Turin, Italy
- Division of Neurosurgery, Ospedale Policlinico San Martino, IRCCS for Oncology and Neurosciences, Genoa, Italy
- Diego Garbossa
- Neurosurgery Unit, Department of Neuroscience “Rita Levi Montalcini,” University of Turin, Turin, Italy
- Trinidad Escudero
- Department of Radiology, Río Hortega University Hospital, Valladolid, Spain
- Roberto Hornero
- Institute for Research in Mathematics (IMUVA), University of Valladolid, Valladolid, Spain
- Center for Biomedical Research in Network of Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Valladolid, Spain
- Biomedical Engineering Group, Universidad de Valladolid, Valladolid, Spain
- Rosario Sarabia
- Department of Neurosurgery, Río Hortega University Hospital, Valladolid, Spain
7
Ostmeier S, Axelrod B, Verhaaren BFJ, Christensen S, Mahammedi A, Liu Y, Pulli B, Li LJ, Zaharchuk G, Heit JJ. Non-inferiority of deep learning ischemic stroke segmentation on non-contrast CT within 16-hours compared to expert neuroradiologists. Sci Rep 2023; 13:16153. PMID: 37752162. PMCID: PMC10522706. DOI: 10.1038/s41598-023-42961-x.
Abstract
We determined whether a convolutional neural network (CNN) deep learning model can accurately segment acute ischemic changes on non-contrast CT compared to neuroradiologists. Non-contrast CT (NCCT) examinations from 232 acute ischemic stroke patients who were enrolled in the DEFUSE 3 trial were included in this study. Three experienced neuroradiologists independently segmented the hypodensity that reflected the ischemic core on each scan. The neuroradiologist with the most experience (expert A) provided the ground truth for deep learning model training. The segmentations of two additional neuroradiologists (experts B and C) were used for testing. The 232 studies were randomly split into training and test sets. The training set was further randomly divided into 5 folds with training and validation sets. A 3-dimensional CNN architecture was trained and optimized to predict the segmentations of expert A from NCCT. The performance of the model was assessed using a set of volume, overlap, and distance metrics with non-inferiority thresholds of 20%, 3 ml, and 3 mm, respectively. The optimized model trained on expert A was compared to test experts B and C. We used a one-sided Wilcoxon signed-rank test to test for non-inferiority of the model-expert agreement compared to the inter-expert agreement. The final model reached 0.46 ± 0.09 Surface Dice at Tolerance 5 mm and 0.47 ± 0.13 Dice for the ischemic core segmentation task when trained on expert A. Compared to the two test neuroradiologists, the model-expert agreement was non-inferior to the inter-expert agreement, [Formula: see text]. Therefore, the CNN accurately delineates the hypodense ischemic core on NCCT in acute ischemic stroke patients with an accuracy comparable to that of neuroradiologists.
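The non-inferiority comparison described above can be set up as a one-sided Wilcoxon signed-rank test on paired differences shifted by the non-inferiority margin. The sketch below uses SciPy on synthetic Dice scores; the margin value and the sample data are illustrative assumptions, not the study's actual measurements.

```python
import numpy as np
from scipy.stats import wilcoxon

def noninferiority_wilcoxon(model_scores, expert_scores, margin):
    """One-sided Wilcoxon signed-rank test for non-inferiority on a
    'higher is better' metric (e.g. Dice): H1 is model > expert - margin."""
    diffs = np.asarray(model_scores) - (np.asarray(expert_scores) - margin)
    return wilcoxon(diffs, alternative="greater")

rng = np.random.default_rng(42)
expert_dice = rng.normal(0.47, 0.10, size=40).clip(0, 1)  # inter-expert agreement
model_dice = rng.normal(0.46, 0.10, size=40).clip(0, 1)   # model-expert agreement
res = noninferiority_wilcoxon(model_dice, expert_dice, margin=0.20)
print(f"p = {res.pvalue:.4f}")  # small p-value -> evidence of non-inferiority at this margin
```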
Affiliation(s)
- Brian Axelrod
- Department of Computer Science, Stanford University, Stanford, USA
- Li-Jia Li
- Stanford School of Medicine, Stanford, USA