1
Wang K, Zheng F, Cheng L, Dai HN, Dou Q, Qin J. Breast Cancer Classification from Digital Pathology Images via Connectivity-aware Graph Transformer. IEEE Trans Med Imaging 2024; PP:1-1. [PMID: 38526888 DOI: 10.1109/tmi.2024.3381239]
Abstract
Automated classification of breast cancer subtypes from digital pathology images has been an extremely challenging task due to the complicated spatial patterns of cells in the tissue micro-environment. While newly proposed graph transformers are able to capture more long-range dependencies to enhance accuracy, they largely ignore the topological connectivity between graph nodes, which is nevertheless critical for extracting more representative features to address this difficult task. In this paper, we propose a novel connectivity-aware graph transformer (CGT) for phenotyping the topological connectivity of the tissue graph constructed from digital pathology images for breast cancer classification. Our CGT seamlessly integrates connectivity embeddings into node features at every graph transformer layer by using local connectivity aggregation, in order to yield more comprehensive graph representations that distinguish different breast cancer subtypes. In light of realistic intercellular communication modes, we then encode the spatial distance between two arbitrary nodes as a connectivity bias in the self-attention calculation, thereby allowing the CGT to distinctively harness the connectivity embedding based on the distance between two nodes. We extensively evaluate the proposed CGT on a large cohort of breast carcinoma digital pathology images stained with Haematoxylin & Eosin. Experimental results demonstrate the effectiveness of our CGT, which outperforms state-of-the-art methods by a large margin. Code is released at https://github.com/wang-kang-6/CGT.
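The distance-as-attention-bias idea in this abstract can be sketched in a few lines of numpy. This is a minimal single-head illustration, not the authors' released implementation; the function name and the identity Q/K/V projections are assumptions made for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distance_biased_attention(X, D, scale=1.0):
    """Single-head self-attention over graph nodes where the pairwise
    spatial distance matrix D is subtracted from the attention logits,
    so nearer nodes receive proportionally more attention."""
    d = X.shape[-1]
    logits = (X @ X.T) / np.sqrt(d) - scale * D  # identity Q/K projections
    A = softmax(logits, axis=-1)                 # each row sums to 1
    return A @ X, A
```

With identical node features, a node attends more to a spatially closer neighbour purely because of the bias term, which is the behaviour the abstract describes.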
2
Lu Y, Chen W, Lu B, Zhou J, Chen Z, Dou Q, Liu YH. Adaptive Online Learning and Robust 3-D Shape Servoing of Continuum and Soft Robots in Unstructured Environments. Soft Robot 2024. [PMID: 38324014 DOI: 10.1089/soro.2022.0158]
Abstract
In this article, we present a novel and generic data-driven method for servo-controlling the 3-D shape of continuum and soft robots based on proprioceptive sensing feedback. The development of 3-D shape perception and control technologies is crucial for continuum and soft robots to perform tasks autonomously in surgical interventions. However, owing to the nonlinear properties of continuum robots, one main difficulty lies in modeling them, especially for soft robots with variable stiffness. To address this problem, we propose a versatile learning-based adaptive shape controller that leverages proprioception of the 3-D configuration from fiber Bragg grating (FBG) sensors, which can online-estimate the unknown model of a continuum robot against unexpected disturbances and exhibit adaptive behavior to the unmodeled system without a priori data exploration. Based on a new composite adaptation algorithm, the asymptotic convergence of the closed-loop system with learning parameters has been proven by Lyapunov theory. To validate the proposed method, we present a comprehensive experimental study using two continuum and soft robots, both integrated with multicore FBGs: a robotic-assisted colonoscope and multisection extensible soft manipulators. The results demonstrate the feasibility, adaptability, and superiority of our controller in various unstructured environments, as well as in phantom experiments.
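The core adaptive-control idea (online parameter estimation driven by a prediction error, with the controller using the current estimate) can be illustrated on a toy scalar plant. This is a textbook gradient-adaptation sketch under assumed linear dynamics, not the paper's FBG-based composite controller:

```python
def adaptive_servo(a_true=2.0, theta0=0.5, gamma=0.1, y_ref=1.0, steps=500):
    """Toy scalar plant y = a_true * u with unknown gain a_true.
    The controller picks u from the current estimate theta
    (certainty equivalence), and a gradient adaptation law driven
    by the prediction error updates theta online."""
    theta = theta0
    for _ in range(steps):
        u = y_ref / theta    # certainty-equivalence control input
        y = a_true * u       # plant response (unknown to controller)
        e = y - theta * u    # prediction error of the current model
        theta += gamma * u * e  # gradient adaptation law
    return theta
```

Under persistent excitation the estimate converges to the true gain, after which the tracking error vanishes; the paper proves the analogous closed-loop property for the full shape-servoing system via Lyapunov analysis.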
Affiliation(s)
- Yiang Lu: Department of Mechanical and Automation Engineering, T Stone Robotics Institute, The Chinese University of Hong Kong, Shatin, Hong Kong
- Wei Chen: Department of Mechanical and Automation Engineering, T Stone Robotics Institute, The Chinese University of Hong Kong, Shatin, Hong Kong
- Bo Lu: The Robotics and Microsystems Center, School of Mechanical and Electric Engineering, Soochow University, Suzhou, China
- Jianshu Zhou: Department of Mechanical and Automation Engineering, T Stone Robotics Institute, The Chinese University of Hong Kong, Shatin, Hong Kong; Hong Kong Center for Logistics Robotics, Shatin, Hong Kong
- Zhi Chen: Department of Mechanical and Automation Engineering, T Stone Robotics Institute, The Chinese University of Hong Kong, Shatin, Hong Kong
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
- Yun-Hui Liu: Department of Mechanical and Automation Engineering, T Stone Robotics Institute, The Chinese University of Hong Kong, Shatin, Hong Kong; Hong Kong Center for Logistics Robotics, Shatin, Hong Kong
3
Ip B, Pan S, Yuan Z, Hung T, Ko H, Leng X, Liu Y, Li S, Lee SY, Cheng C, Chan H, Mok V, Soo Y, Wu X, Lui LT, Chan R, Abrigo J, Dou Q, Seiffge D, Leung T. Prothrombin Complex Concentrate vs Conservative Management in ICH Associated With Direct Oral Anticoagulants. JAMA Netw Open 2024; 7:e2354916. [PMID: 38319661 PMCID: PMC10848059 DOI: 10.1001/jamanetworkopen.2023.54916]
Abstract
Importance Intracerebral hemorrhage (ICH) associated with direct oral anticoagulant (DOAC) use carries extremely high morbidity and mortality. The clinical effectiveness of hemostatic therapy is unclear. Objective To compare the clinical and radiological outcomes of DOAC-associated ICH treated with prothrombin complex concentrate (PCC) vs conservative management. Design, Setting, and Participants In this population-based, propensity score-weighted retrospective cohort study, patients who developed DOAC-associated ICH from January 1, 2016, to December 31, 2021, in Hong Kong were identified. The outcomes of patients who received 25 to 50 IU/kg of PCC were compared with those of patients who received no hemostatic agents. Data were analyzed from May 1, 2022, to June 30, 2023. Main Outcomes and Measures The primary outcome was a modified Rankin Scale score of 0 to 3 or return to baseline functional status at 3 months. Secondary outcomes were mortality at 90 days, in-hospital mortality, and hematoma expansion. Weighted logistic regression was performed to evaluate the association of PCC with study outcomes. In unweighted logistic regression models, factors associated with good neurological outcome and hematoma expansion in DOAC-associated ICH were identified. Results A total of 232 patients with DOAC-associated ICH, with a mean (SD) age of 77.2 (9.3) years and 101 (44%) female patients, were included. Among these, 116 (50%) received conservative treatment and 102 (44%) received PCC. Overall, 74 (31%) patients had good neurological recovery and 92 (39%) died within 90 days. Median (IQR) baseline hematoma volume was 21.7 mL (3.6-66.1 mL). Compared with conservative management, PCC was not associated with improved neurological recovery (adjusted odds ratio [aOR], 0.62; 95% CI, 0.33-1.16; P = .14), mortality at 90 days (aOR, 1.03; 95% CI, 0.70-1.53; P = .88), in-hospital mortality (aOR, 1.11; 95% CI, 0.69-1.79; P = .66), or reduced hematoma expansion (aOR, 0.94; 95% CI, 0.38-2.31; P = .90). Higher baseline hematoma volume, lower Glasgow Coma Scale score, and intraventricular hemorrhage were associated with lower odds of good neurological outcome but not with hematoma expansion. Conclusions and Relevance In this cohort study, Chinese patients with DOAC-associated ICH had large baseline hematoma volumes and high rates of mortality and functional disability. PCC treatment was not associated with improved functional outcome, reduced hematoma expansion, or lower mortality. Further studies on novel hemostatic agents as well as neurosurgical and adjunctive medical therapies are needed to identify the best management algorithm for DOAC-associated ICH.
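The propensity score weighting used in such cohort studies reduces to a small helper once propensity scores have been estimated (typically by logistic regression on baseline covariates). A minimal stabilized inverse-probability-of-treatment (IPTW) sketch, with the estimation step assumed to have happened upstream:

```python
import numpy as np

def stabilized_iptw(treated, propensity):
    """Stabilized inverse-probability-of-treatment weights:
    treated patients are weighted by P(treated)/ps and controls by
    (1 - P(treated))/(1 - ps). A weighted outcome regression then
    balances measured baseline covariates between the two groups."""
    t = np.asarray(treated, dtype=float)
    ps = np.asarray(propensity, dtype=float)
    p = t.mean()  # marginal probability of treatment (stabilizer)
    return np.where(t == 1, p / ps, (1 - p) / (1 - ps))
```

Patients who were unlikely to receive their observed treatment get up-weighted, which is why extreme propensity scores inflate variance and are often truncated in practice.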
Affiliation(s)
- Bonaventure Ip: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR; Li Ka Shing Institute of Health Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR
- Sangqi Pan: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Zhong Yuan: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR
- Trista Hung: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Ho Ko: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR; Li Ka Shing Institute of Health Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR
- Xinyi Leng: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Yuying Liu: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Shuang Li: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Sing Yau Lee: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Cyrus Cheng: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Howard Chan: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Vincent Mok: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR; Li Ka Shing Institute of Health Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR
- Yannie Soo: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR
- Xiaoli Wu: Department of Electrical Engineering, The City University of Hong Kong, Hong Kong SAR
- Leong Ting Lui: Department of Electrical Engineering, The City University of Hong Kong, Hong Kong SAR
- Rosa Chan: Department of Electrical Engineering, The City University of Hong Kong, Hong Kong SAR
- Jill Abrigo: Department of Diagnostic Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR
- David Seiffge: Department of Neurology, Inselspital University Hospital Bern and University of Bern, Bern, Switzerland
- Thomas Leung: Department of Medicine and Therapeutics, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR; Li Ka Shing Institute of Health Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR
4
Sudre CH, Van Wijnen K, Dubost F, Adams H, Atkinson D, Barkhof F, Birhanu MA, Bron EE, Camarasa R, Chaturvedi N, Chen Y, Chen Z, Chen S, Dou Q, Evans T, Ezhov I, Gao H, Girones Sanguesa M, Gispert JD, Gomez Anson B, Hughes AD, Ikram MA, Ingala S, Jaeger HR, Kofler F, Kuijf HJ, Kutnar D, Lee M, Li B, Lorenzini L, Menze B, Molinuevo JL, Pan Y, Puybareau E, Rehwald R, Su R, Shi P, Smith L, Tillin T, Tochon G, Urien H, van der Velden BHM, van der Velpen IF, Wiestler B, Wolters FJ, Yilmaz P, de Groot M, Vernooij MW, de Bruijne M. Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021. Med Image Anal 2024; 91:103029. [PMID: 37988921 DOI: 10.1016/j.media.2023.103029]
Abstract
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and segmentatiOn (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds, and 6 for Task 3-Lacunes). Multi-cohort data were used in both training and evaluation. Results showed large variability in performance, both across teams and across tasks, with promising results notably for Task 1-EPVS and Task 2-Microbleeds, and results not yet practically useful for Task 3-Lacunes. The challenge also highlighted performance inconsistency across cases that may deter use at the individual level, while still proving useful at the population level.
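Segmentation entries in challenges like this one are typically scored with overlap metrics such as the Dice coefficient; a minimal numpy version is shown below (the challenge also used detection-level metrics not sketched here):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).
    eps keeps the score defined (and equal to 1) when both masks
    are empty, a common case for small, sparse lesions."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```

For tiny lesions such as microbleeds and lacunes, Dice is volatile (a one-voxel miss swings the score heavily), which is one reason case-level performance in the challenge was inconsistent even when population-level averages looked reasonable.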
Affiliation(s)
- Carole H Sudre: MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom; Centre for Medical Image Computing, University College London, London, United Kingdom; School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Kimberlin Van Wijnen: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Florian Dubost: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Hieab Adams: Department of Clinical Genetics and Radiology, Erasmus MC, Rotterdam, The Netherlands
- David Atkinson: Centre for Medical Imaging, University College London, London, United Kingdom
- Frederik Barkhof: Centre for Medical Image Computing, University College London, London, United Kingdom; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- Mahlet A Birhanu: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Esther E Bron: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Robin Camarasa: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Nish Chaturvedi: MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- Yuan Chen: Department of Radiology, University of Massachusetts Medical School, Worcester, USA
- Zihao Chen: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shuai Chen: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Tavia Evans: Department of Clinical Genetics and Radiology, Erasmus MC, Rotterdam, The Netherlands
- Ivan Ezhov: Department of Informatics, Technische Universität München, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Haojun Gao: Department of Radiology, Zhejiang University, Hangzhou, China
- Juan Domingo Gispert: Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain; Centro de Investigación Biomédica en Red Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona, Spain
- Alun D Hughes: MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- M Arfan Ikram: Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Silvia Ingala: Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- H Rolf Jaeger: Institute of Neurology, University College London, London, United Kingdom
- Florian Kofler: Department of Informatics, Technische Universität München, Munich, Germany; Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Hugo J Kuijf: Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Denis Kutnar: Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Bo Li: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Luigi Lorenzini: Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- Bjoern Menze: Department of Informatics, Technische Universität München, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Jose Luis Molinuevo: Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; H. Lundbeck A/S, Copenhagen, Denmark
- Yiwei Pan: Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Rafael Rehwald: Institute of Neurology, University College London, London, United Kingdom
- Ruisheng Su: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Pengcheng Shi: Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Therese Tillin: MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- Hélène Urien: ISEP-Institut Supérieur d'Électronique de Paris, Issy-les-Moulineaux, France
- Isabelle F van der Velpen: Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Benedikt Wiestler: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany
- Frank J Wolters: Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Pinar Yilmaz: Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Marius de Groot: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; GlaxoSmithKline Research, Stevenage, United Kingdom
- Meike W Vernooij: Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Marleen de Bruijne: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
5
Rockall AG, Li X, Johnson N, Lavdas I, Santhakumaran S, Prevost AT, Punwani S, Goh V, Barwick TD, Bharwani N, Sandhu A, Sidhu H, Plumb A, Burn J, Fagan A, Wengert GJ, Koh DM, Reczko K, Dou Q, Warwick J, Liu X, Messiou C, Tunariu N, Boavida P, Soneji N, Johnston EW, Kelly-Morland C, De Paepe KN, Sokhi H, Wallitt K, Lakhani A, Russell J, Salib M, Vinnicombe S, Haq A, Aboagye EO, Taylor S, Glocker B. Development and Evaluation of Machine Learning in Whole-Body Magnetic Resonance Imaging for Detecting Metastases in Patients With Lung or Colon Cancer: A Diagnostic Test Accuracy Study. Invest Radiol 2023; 58:823-831. [PMID: 37358356 PMCID: PMC10662596 DOI: 10.1097/rli.0000000000000996]
Abstract
OBJECTIVES Whole-body magnetic resonance imaging (WB-MRI) has been demonstrated to be efficient and cost-effective for cancer staging. The study aim was to develop a machine learning (ML) algorithm to improve radiologists' sensitivity and specificity for metastasis detection and reduce reading times. MATERIALS AND METHODS A retrospective analysis of 438 prospectively collected WB-MRI scans from the multicenter Streamline studies (February 2013-September 2016) was undertaken. Disease sites were manually labeled using the Streamline reference standard. Whole-body MRI scans were randomly allocated to training and testing sets. A model for malignant lesion detection was developed based on convolutional neural networks and a 2-stage training strategy. The final algorithm generated lesion probability heat maps. Using a concurrent reader paradigm, 25 radiologists (18 experienced, 7 inexperienced in WB-MRI) were randomly allocated WB-MRI scans with or without ML support to detect malignant lesions over 2 or 3 reading rounds. Reads were undertaken in the setting of a diagnostic radiology reading room between November 2019 and March 2020. Reading times were recorded by a scribe. Prespecified analysis included sensitivity, specificity, interobserver agreement, and reading time of radiology readers to detect metastases with or without ML support. Reader performance for detection of the primary tumor was also evaluated. RESULTS Four hundred thirty-three evaluable WB-MRI scans were allocated to algorithm training (245) or radiology testing (50 patients with metastases, from primary colon [n = 117] or lung [n = 71] cancer). Among a total of 562 reads by experienced radiologists over 2 reading rounds, per-patient specificity was 86.2% (ML) and 87.7% (non-ML) (-1.5% difference; 95% confidence interval [CI], -6.4%, 3.5%; P = 0.39). Sensitivity was 66.0% (ML) and 70.0% (non-ML) (-4.0% difference; 95% CI, -13.5%, 5.5%; P = 0.344). Among 161 reads by inexperienced readers, per-patient specificity in both groups was 76.3% (0% difference; 95% CI, -15.0%, 15.0%; P = 0.613), with sensitivity of 73.3% (ML) and 60.0% (non-ML) (13.3% difference; 95% CI, -7.9%, 34.5%; P = 0.313). Per-site specificity was high (>90%) for all metastatic sites and experience levels. There was high sensitivity for the detection of primary tumors (lung cancer detection rate of 98.6% with and without ML [0.0% difference; 95% CI, -2.0%, 2.0%; P = 1.00]; colon cancer detection rate of 89.0% with and 90.6% without ML [-1.7% difference; 95% CI, -5.6%, 2.2%; P = 0.65]). When combining all reads from rounds 1 and 2, reading times fell by 6.2% (95% CI, -22.8%, 10.0%) when using ML. Round 2 reading times fell by 32% (95% CI, 20.8%, 42.8%) compared with round 1. Within round 2, there was a significant decrease in reading time when using ML support, estimated as 286 seconds (or 11%) quicker (P = 0.0281), using regression analysis to account for reader experience, read round, and tumor type. Interobserver agreement was moderate: Cohen κ = 0.64 (95% CI, 0.47, 0.81) with ML and Cohen κ = 0.66 (95% CI, 0.47, 0.81) without ML. CONCLUSIONS There was no evidence of a significant difference in per-patient sensitivity and specificity for detecting metastases or the primary tumor using concurrent ML compared with standard WB-MRI. Radiology reading times with or without ML support fell for round 2 reads compared with round 1, suggesting that readers familiarized themselves with the study reading method. During the second reading round, there was a significant reduction in reading time when using ML support.
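The per-reader accuracy figures above are simple proportions with binomial confidence intervals. A sketch using a normal-approximation (Wald) interval, which is an assumption for illustration; the study's own CIs for between-arm differences were computed with different methods:

```python
import math

def prop_with_ci(successes, total, z=1.96):
    """Point estimate and Wald 95% CI for a proportion such as
    per-patient sensitivity (TP / (TP + FN)) or specificity
    (TN / (TN + FP)), clipped to the [0, 1] range."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, (max(0.0, p - half), min(1.0, p + half))
```

For example, 33 detected out of 50 metastatic patients gives the 66% sensitivity quoted above, with an interval wide enough to explain why no significant ML effect could be shown at this sample size.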
6
Li Z, Kamnitsas K, Dou Q, Qin C, Glocker B. Joint Optimization of Class-Specific Training- and Test-Time Data Augmentation in Segmentation. IEEE Trans Med Imaging 2023; 42:3323-3335. [PMID: 37276115 DOI: 10.1109/tmi.2023.3282728]
Abstract
This paper presents an effective and general data augmentation framework for medical image segmentation. We adopt a computationally efficient and data-efficient gradient-based meta-learning scheme to explicitly align the distribution of training and validation data, where the latter is used as a proxy for unseen test data. We improve on current data augmentation strategies with two core designs. First, we learn class-specific training-time data augmentation (TRA), effectively increasing the heterogeneity within the training subsets and tackling the class imbalance common in segmentation. Second, we jointly optimize TRA and test-time data augmentation (TEA), which are closely connected, as both aim to align the training and test data distributions, but which were so far considered separately in previous works. We demonstrate the effectiveness of our method on four medical image segmentation tasks across different scenarios with two state-of-the-art segmentation models, DeepMedic and nnU-Net. Extensive experimentation shows that the proposed data augmentation framework can significantly and consistently improve segmentation performance compared with existing solutions. Code is publicly available at https://github.com/ZerojumpLine/JCSAugment.
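Test-time augmentation itself is simple to express: run the model on several augmented copies of the input, map each prediction back to the original frame, and average. A generic sketch of that mechanism (the learned, class-specific augmentation policies in the paper are considerably more involved):

```python
import numpy as np

def tta_predict(model, x, augmentations):
    """Average predictions over invertible augmentations. Each entry
    in `augmentations` is a (forward, inverse) pair; the inverse maps
    the prediction back to the original image frame before averaging,
    which is essential for spatial transforms in segmentation."""
    preds = [inverse(model(forward(x))) for forward, inverse in augmentations]
    return np.mean(preds, axis=0)
```

With a flip as the augmentation, the prediction on the flipped input must be flipped back before averaging; forgetting the inverse step is a classic TTA bug for dense prediction tasks.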
7
Cao J, Yip HC, Chen Y, Scheppach M, Luo X, Yang H, Cheng MK, Long Y, Jin Y, Chiu PWY, Yam Y, Meng HML, Dou Q. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun 2023; 14:6676. [PMID: 37865629 PMCID: PMC10590425 DOI: 10.1038/s41467-023-42451-8]
Abstract
Recent advances in artificial intelligence have achieved human-level performance; however, AI-enabled cognitive assistance for therapeutic procedures has not been fully explored nor pre-clinically validated. Here we propose AI-Endo, an intelligent surgical workflow recognition suite for endoscopic submucosal dissection (ESD). Our AI-Endo is trained on high-quality ESD cases from an expert endoscopist, spanning a decade and consisting of 201,026 labeled frames. The learned model demonstrates outstanding performance on validation data, including cases from relatively junior endoscopists with various skill levels, procedures conducted with different endoscopy systems and therapeutic skills, and cohorts from international multi-centers. Furthermore, we integrate our AI-Endo with the Olympus endoscopic system and validate the AI-enabled cognitive assistance system with animal studies in live ESD training sessions. Dedicated data analysis from surgical phase recognition results is summarized in an automatically generated report for skill assessment.
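Frame-level phase predictions from a workflow recognition model are typically post-processed to remove single-frame flicker before phase durations are reported. A sliding-window majority vote is one common heuristic for this; the sketch below is illustrative and not necessarily AI-Endo's own post-processing:

```python
from collections import Counter

def smooth_phases(frames, window=5):
    """Majority-vote smoothing of per-frame workflow-phase labels:
    each frame is relabeled with the most common label in a small
    window centred on it, suppressing spurious one-frame phase
    transitions at the cost of slightly blurred phase boundaries."""
    half = window // 2
    smoothed = []
    for i in range(len(frames)):
        lo, hi = max(0, i - half), min(len(frames), i + half + 1)
        smoothed.append(Counter(frames[lo:hi]).most_common(1)[0][0])
    return smoothed
```

Smoothed phase sequences then yield stable per-phase durations, the kind of statistic an automatically generated skill-assessment report can summarize.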
Affiliation(s)
- Jianfeng Cao: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hon-Chi Yip: Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China
- Yueyao Chen: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Markus Scheppach: Internal Medicine III-Gastroenterology, University Hospital of Augsburg, Augsburg, Germany
- Xiaobei Luo: Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Hongzheng Yang: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Ming Kit Cheng: Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yonghao Long: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yueming Jin: Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore
- Philip Wai-Yan Chiu: Multi-scale Medical Robotics Center and The Chinese University of Hong Kong, Hong Kong, China
- Yeung Yam: Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China; Multi-scale Medical Robotics Center and The Chinese University of Hong Kong, Hong Kong, China; Centre for Perceptual and Interactive Intelligence and The Chinese University of Hong Kong, Hong Kong, China
- Helen Mei-Ling Meng: Centre for Perceptual and Interactive Intelligence and The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
8
Hu J, Zheng C, Yu Q, Zhong L, Yu K, Chen Y, Wang Z, Zhang B, Dou Q, Zhang X. DeepKOA: a deep-learning model for predicting progression in knee osteoarthritis using multimodal magnetic resonance images from the osteoarthritis initiative. Quant Imaging Med Surg 2023; 13:4852-4866. [PMID: 37581080 PMCID: PMC10423358 DOI: 10.21037/qims-22-1251]
Abstract
Background No investigations have thoroughly explored the feasibility of combining magnetic resonance (MR) images and deep-learning methods to predict the progression of knee osteoarthritis (KOA). We thus aimed to develop a deep-learning model for predicting KOA progression from MR images in the clinical setting. Methods A longitudinal case-control study was performed using data from the Foundation for the National Institutes of Health (FNIH), composed of progressive cases [182 osteoarthritis (OA) knees with both radiographic and pain progression over 24-48 months] and matched controls (182 OA knees not meeting the case definition). DeepKOA was developed with a 3-dimensional (3D) DenseNet169 to predict KOA progression over 24-48 months based on sagittal intermediate-weighted turbo-spin-echo sequences with fat suppression (SAG-IW-TSE-FS), sagittal 3D dual-echo steady-state water excitation (SAG-3D-DESS-WE) and its axial and coronal multiplanar reformations, and their combined MR images with patient-level labels at baseline, 12, and 24 months, to eventually determine the probability of progression. The classification performance of DeepKOA was evaluated using 5-fold cross-validation. An X-ray-based model and traditional models using clinical variables via a multilayer perceptron were built, as were combined models that integrated clinical variables with DeepKOA. The area under the curve (AUC) was used as the evaluation metric. Results The performance of SAG-IW-TSE-FS in predicting KOA progression was similar to or higher than that of the other single and combined sequences. DeepKOA based on SAG-IW-TSE-FS achieved an AUC of 0.664 (95% CI: 0.585-0.743) at baseline, 0.739 (95% CI: 0.703-0.775) at 12 months, and 0.775 (95% CI: 0.686-0.865) at 24 months. The X-ray-based model achieved AUCs ranging from 0.573 to 0.613 across the three time points. However, adding clinical variables to DeepKOA did not improve performance (P>0.05). Initial visualizations from gradient-weighted class activation mapping (Grad-CAM) indicated that the frequency with which the patellofemoral joint was highlighted increased over time, in contrast to the trend observed in the tibiofemoral joint. The meniscus, the infrapatellar fat pad, and the muscles posterior to the knee were highlighted to varying degrees. Conclusions This study initially demonstrated the feasibility of DeepKOA for predicting KOA progression and identified potentially responsible structures, which may inform the future development of more clinically practical methods.
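As an illustrative aside (not code from the paper): an AUC like those reported above can be computed directly from raw predictions via the rank-sum (Mann-Whitney U) formulation, i.e., the probability that a randomly chosen progressive case is scored above a randomly chosen control. A minimal sketch, assuming binary labels (1 = progression) and continuous scores:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum formulation:
    fraction of positive/negative pairs where the positive is
    scored higher, counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])` gives 0.75, since three of the four positive/negative pairs are ranked correctly.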
Affiliation(s)
- Jiaping Hu
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
- Chuanyang Zheng
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qingling Yu
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
- Lijie Zhong
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
- Keyan Yu
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
- Yanjun Chen
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
- Zhao Wang
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Bin Zhang
- Department of Radiology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Xiaodong Zhang
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
9
Collins T, Dou Q, Unberath M. IJCARS-IPCAI 2023 special issue: conference information processing for computer-assisted interventions, 14th International Conference 2023-part 1. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02972-5. [PMID: 37268803 DOI: 10.1007/s11548-023-02972-5] [Accepted: 05/19/2023] [Indexed: 06/04/2023]
Affiliation(s)
- Qi Dou
- The Chinese University of Hong Kong, Hong Kong, China
10
Huaulmé A, Harada K, Nguyen QM, Park B, Hong S, Choi MK, Peven M, Li Y, Long Y, Dou Q, Kumar S, Lalithkumar S, Hongliang R, Matsuzaki H, Ishikawa Y, Harai Y, Kondo S, Mitsuishi M, Jannin P. PEg TRAnsfer Workflow recognition challenge report: Do multimodal data improve recognition? Comput Methods Programs Biomed 2023; 236:107561. [PMID: 37119774 DOI: 10.1016/j.cmpb.2023.107561] [Received: 04/19/2022] [Revised: 04/06/2023] [Accepted: 04/18/2023] [Indexed: 05/21/2023]
Abstract
BACKGROUND AND OBJECTIVE To be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition, but with the democratization of robot-assisted surgery, new modalities such as kinematics are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations describing the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric because it takes class balance into account and is more clinically relevant than a frame-by-frame score. RESULTS Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION The improvement of surgical workflow recognition methods using multiple modalities over unimodal methods was significant for all teams. However, the longer execution time required for video/kinematic-based methods (compared to kinematic-only methods) must be considered. Indeed, one must ask whether it is wise to increase computing time by 2000 to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
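The challenge's AD-Accuracy is application-dependent, but its core idea can be illustrated with plain balanced accuracy (not the challenge's exact metric): the macro-average of per-class recall, so that rare phases count as much as frequent ones. A minimal sketch:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Macro-average of per-class recall: each class contributes
    equally regardless of how many frames it covers."""
    hits, counts = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        counts[t] += 1
        hits[t] += (t == p)
    return sum(hits[c] / counts[c] for c in counts) / len(counts)
```

With labels ["a", "a", "a", "b"] and predictions ["a", "a", "b", "b"], per-class recalls are 2/3 and 1, so the balanced accuracy is 5/6, whereas the frame-by-frame accuracy would be 3/4.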
Affiliation(s)
- Arnaud Huaulmé
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
- Kanako Harada
- Department of Mechanical Engineering, the University of Tokyo, Tokyo 113-8656, Japan
- Bogyu Park
- VisionAI hutom, Seoul, Republic of Korea
- Yonghao Long
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Ren Hongliang
- National University of Singapore, Singapore, Singapore; The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Hiroki Matsuzaki
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuto Ishikawa
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuriko Harai
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Mamoru Mitsuishi
- Department of Mechanical Engineering, the University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
11
Lyu J, Li G, Wang C, Qin C, Wang S, Dou Q, Qin J. Region-focused multi-view transformer-based generative adversarial network for cardiac cine MRI reconstruction. Med Image Anal 2023; 85:102760. [PMID: 36720188 DOI: 10.1016/j.media.2023.102760] [Received: 10/13/2022] [Revised: 01/20/2023] [Accepted: 01/24/2023] [Indexed: 01/30/2023]
Abstract
Cardiac cine magnetic resonance imaging (MRI) reconstruction is challenging due to the trade-off between spatial and temporal resolution. Temporal correlation in cardiac cine MRI is informative and vital for understanding cardiac dynamic motion, and exploiting it in cine reconstruction is crucial for resolving aliasing artifacts and maintaining cardiac motion patterns. However, existing methods have the following shortcomings: (1) they compute pairwise correlations along the spatial and temporal dimensions simultaneously to establish dependencies, ignoring that learning spatial contextual information first would benefit the temporal modeling; and (2) most studies neglect to focus on reconstructing the local cardiac regions, resulting in insufficient reconstruction accuracy due to a relatively large field of view. To address these problems, we propose a region-focused multi-view transformer-based generative adversarial network for cardiac cine MRI reconstruction. The proposed transformer divides consecutive cardiac frames into multiple views for cross-view feature extraction, establishing long-distance dependencies among features and effectively learning the spatio-temporal information. We further design a cross-view attention module for spatio-temporal information fusion, ensuring the interaction of different spatio-temporal information in each view and capturing more temporal correlations of the cardiac motion. In addition, we introduce a cardiac region detection loss to improve the reconstruction quality of the cardiac region. Experimental results demonstrate that our method outperforms state-of-the-art methods; especially at an acceleration factor as high as 10×, it reconstructs images with better accuracy and perceptual quality.
Affiliation(s)
- Jun Lyu
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Guangyuan Li
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China.
- Chen Qin
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Shuo Wang
- Digital Medical Research Center, Fudan University, Shanghai, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong
12
Jiang M, Yang H, Cheng C, Dou Q. IOP-FL: Inside-Outside Personalization for Federated Medical Image Segmentation. IEEE Trans Med Imaging 2023; PP:1-1. [PMID: 37030858 DOI: 10.1109/tmi.2023.3263072] [Indexed: 06/19/2023]
Abstract
Federated learning (FL) allows multiple medical institutions to collaboratively learn a global model without centralizing client data. It is difficult, if not impossible, for such a global model to achieve optimal performance for each individual client, due to the heterogeneity of medical images from various scanners and patient demographics. This problem becomes even more significant when deploying the global model to unseen clients outside the FL, whose distributions were not present during federated training. To optimize the prediction accuracy of each individual client for medical imaging tasks, we propose a novel unified framework for both Inside and Outside model Personalization in FL (IOP-FL). Our inside personalization uses a lightweight gradient-based approach that exploits a locally adapted model for each client, accumulating both the global gradients for common knowledge and the local gradients for client-specific optimization. Moreover, and importantly, the obtained local personalized models and the global model form a diverse and informative routing space for personalizing an adapted model for clients outside the FL. Hence, we design a new test-time routing scheme using a consistency loss with a shape constraint to dynamically incorporate the models, given the distribution information conveyed by the test data. Our extensive experimental results on two medical image segmentation tasks show significant improvements over SOTA methods on both inside and outside personalization, demonstrating the potential of our IOP-FL scheme for clinical practice. Code is available at https://github.com/med-air/IOP-FL.
13
Lyu J, Li G, Wang C, Cai Q, Dou Q, Zhang D, Qin J. Multicontrast MRI Super-Resolution via Transformer-Empowered Multiscale Contextual Matching and Aggregation. IEEE Trans Neural Netw Learn Syst 2023; PP:1-11. [PMID: 37028326 DOI: 10.1109/tnnls.2023.3250491] [Indexed: 06/19/2023]
Abstract
Magnetic resonance imaging (MRI) possesses the unique versatility to acquire images under a diverse array of distinct tissue contrasts, which makes multicontrast super-resolution (SR) techniques both possible and needed. Compared with single-contrast MRI SR, multicontrast SR is expected to produce higher quality images by exploiting the complementary information embedded in different imaging contrasts. However, existing approaches still have two shortcomings: 1) most of them are convolution-based methods and, hence, weak in capturing long-range dependencies, which are essential for MR images with complicated anatomical patterns, and 2) they fail to make full use of the multicontrast features at different scales and lack effective modules to match and aggregate these features for faithful SR. To address these issues, we develop a novel multicontrast MRI SR network via transformer-empowered multiscale feature matching and aggregation, dubbed McMRSR++. First, we tame transformers to model long-range dependencies in both reference and target images at different scales. Then, a novel multiscale feature matching and aggregation method is proposed to transfer corresponding contexts from reference features at different scales to the target features and interactively aggregate them. Furthermore, a texture-preserving branch and a contrastive constraint are incorporated into our framework to enhance the textural details in the SR images. Experimental results on both public and clinical in vivo datasets show that McMRSR++ significantly outperforms state-of-the-art methods on peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root mean square error (RMSE) metrics. Visual results demonstrate the superiority of our method in restoring structures, indicating its great potential to improve scan efficiency in clinical practice.
14
Wagner M, Müller-Stich BP, Kisilenko A, Tran D, Heger P, Mündermann L, Lubotsky DM, Müller B, Davitashvili T, Capek M, Reinke A, Reid C, Yu T, Vardazaryan A, Nwoye CI, Padoy N, Liu X, Lee EJ, Disch C, Meine H, Xia T, Jia F, Kondo S, Reiter W, Jin Y, Long Y, Jiang M, Dou Q, Heng PA, Twick I, Kirtac K, Hosgor E, Bolmgren JL, Stenzel M, von Siemens B, Zhao L, Ge Z, Sun H, Xie D, Guo M, Liu D, Kenngott HG, Nickel F, Frankenberg MV, Mathis-Ullrich F, Kopp-Schneider A, Maier-Hein L, Speidel S, Bodenstedt S. Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark. Med Image Anal 2023; 86:102770. [PMID: 36889206 DOI: 10.1016/j.media.2023.102770] [Received: 09/25/2021] [Revised: 02/03/2023] [Accepted: 02/08/2023] [Indexed: 02/23/2023]
Abstract
PURPOSE Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis, in which 12 research teams trained and submitted machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS F1-scores ranged from 23.9% to 67.7% for phase recognition (n = 9 teams) and from 38.5% to 63.8% for instrument presence detection (n = 8 teams), but only from 21.8% to 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION Surgical workflow and skill analysis are promising technologies to support the surgical team, but as our comparison of machine learning algorithms shows, there is still room for improvement. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets to enable the development of artificial intelligence and cognitive robotics in surgery.
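For reference, the F1-scores compared above are the harmonic mean of precision and recall. A minimal illustrative computation from raw counts (not the challenge's evaluation code):

```python
def f1_score(tp, fp, fn):
    """F1 from raw counts of true positives, false positives,
    and false negatives: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For instance, 8 true positives with 2 false positives and 2 false negatives give precision = recall = 0.8, hence F1 = 0.8; the harmonic mean penalizes an imbalance between the two more than a simple average would.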
Affiliation(s)
- Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany.
- Beat-Peter Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Anna Kisilenko
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Duc Tran
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Patrick Heger
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Lars Mündermann
- Data Assisted Solutions, Corporate Research & Technology, KARL STORZ SE & Co. KG, Dr. Karl-Storz-Str. 34, 78332 Tuttlingen
- David M Lubotsky
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Benjamin Müller
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Tornike Davitashvili
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Manuela Capek
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Annika Reinke
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg Germany; HIP Helmholtz Imaging Platform, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Im Neuenheimer Feld 205, 69120 Heidelberg
- Carissa Reid
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 280, Heidelberg, Germany
- Tong Yu
- ICube, University of Strasbourg, CNRS, France. 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, France. 1 Place de l'hôpital, 67000 Strasbourg, France
- Armine Vardazaryan
- ICube, University of Strasbourg, CNRS, France. 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, France. 1 Place de l'hôpital, 67000 Strasbourg, France
- Chinedu Innocent Nwoye
- ICube, University of Strasbourg, CNRS, France. 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, France. 1 Place de l'hôpital, 67000 Strasbourg, France
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France. 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, France. 1 Place de l'hôpital, 67000 Strasbourg, France
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, 111 Michigan Ave NW, Washington, DC 20010, USA
- Eung-Joo Lee
- University of Maryland, College Park, 2405 A V Williams Building, College Park, MD 20742, USA
- Constantin Disch
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, 28359 Bremen, Germany
- Hans Meine
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, 28359 Bremen, Germany; University of Bremen, FB3, Medical Image Computing Group, ℅ Fraunhofer MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Tong Xia
- Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Fucang Jia
- Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Satoshi Kondo
- Konica Minolta, Inc., 1-2, Sakura-machi, Takatsuki, Osaka 569-8503, Japan
- Wolfgang Reiter
- Wintegral GmbH, Ehrenbreitsteiner Str. 36, 80993 München, Germany
- Yueming Jin
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Yonghao Long
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Meirui Jiang
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Qi Dou
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Pheng Ann Heng
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Isabell Twick
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Kadir Kirtac
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Enes Hosgor
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Long Zhao
- Hikvision Research Institute, Hangzhou, China
- Zhenxiao Ge
- Hikvision Research Institute, Hangzhou, China
- Haiming Sun
- Hikvision Research Institute, Hangzhou, China
- Di Xie
- Hikvision Research Institute, Hangzhou, China
- Mengqi Guo
- School of Computing, National University of Singapore, Computing 1, No.13 Computing Drive, 117417, Singapore
- Daochang Liu
- National Engineering Research Center of Visual Technology, School of Computer Science, Peking University, Beijing, China
- Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Felix Nickel
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Moritz von Frankenberg
- Department of Surgery, Salem Hospital of the Evangelische Stadtmission Heidelberg, Zeppelinstrasse 11-33, 69121 Heidelberg, Germany
- Franziska Mathis-Ullrich
- Health Robotics and Automation Laboratory, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Geb. 40.28, KIT Campus Süd, Engler-Bunte-Ring 8, 76131 Karlsruhe, Germany
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 280, Heidelberg, Germany
- Lena Maier-Hein
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg Germany; HIP Helmholtz Imaging Platform, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Im Neuenheimer Feld 205, 69120 Heidelberg; Medical Faculty, Heidelberg University, Im Neuenheimer Feld 672, 69120 Heidelberg
- Stefanie Speidel
- Div. Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden, 01062 Dresden, Germany
- Sebastian Bodenstedt
- Div. Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden, 01062 Dresden, Germany
15
Zhang Y, Luo L, Dou Q, Heng PA. Triplet attention and dual-pool contrastive learning for clinic-driven multi-label medical image classification. Med Image Anal 2023; 86:102772. [PMID: 36822050 DOI: 10.1016/j.media.2023.102772] [Received: 08/24/2022] [Revised: 11/21/2022] [Accepted: 02/10/2023] [Indexed: 02/18/2023]
Abstract
Multi-label classification (MLC) attaches multiple labels to a single image and has achieved promising results on medical images. However, existing MLC methods still face challenging clinical realities in practical use, such as: (1) medical risks arising from misclassification, (2) sample imbalance among different diseases, and (3) inability to classify diseases that are not pre-defined (unseen diseases). Here, we design a hybrid label to improve the flexibility of MLC methods and alleviate the sample imbalance problem. Specifically, in the labeled training set, we retain independent labels for high-frequency diseases with enough samples and use a hybrid label to merge low-frequency diseases with fewer samples. The hybrid label can also accommodate unseen diseases in practical use. In this paper, we propose Triplet Attention and Dual-pool Contrastive Learning (TA-DCL) for multi-label medical image classification based on the aforementioned label representation. The TA-DCL architecture is a triplet attention network (TAN), which combines category-attention, self-attention and cross-attention to learn high-quality label embeddings for all disease labels by mining effective information from medical images. DCL includes dual-pool contrastive training (DCT) and dual-pool contrastive inference (DCI). DCT optimizes the clustering centers of label embeddings belonging to different disease labels to improve their discrimination. DCI reduces misclassification of sick cases, lowering clinical risk, and improves the ability to detect unseen diseases by contrasting differences. TA-DCL is validated on two public medical image datasets, ODIR and NIH-ChestXray14, showing superior performance to other state-of-the-art MLC methods. Code is available at https://github.com/ZhangYH0502/TA-DCL.
Affiliation(s)
- Yuhan Zhang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Institute of Medical Intelligence and XR, The Chinese University of Hong Kong, Hong Kong, China; Shenzhen Research Institute, The Chinese University of Hong Kong, Hong Kong, China.
- Luyang Luo
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Institute of Medical Intelligence and XR, The Chinese University of Hong Kong, Hong Kong, China.
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Institute of Medical Intelligence and XR, The Chinese University of Hong Kong, Hong Kong, China.
16
Bilic P, Christ P, Li HB, Vorontsov E, Ben-Cohen A, Kaissis G, Szeskin A, Jacobs C, Mamani GEH, Chartrand G, Lohöfer F, Holch JW, Sommer W, Hofmann F, Hostettler A, Lev-Cohain N, Drozdzal M, Amitai MM, Vivanti R, Sosna J, Ezhov I, Sekuboyina A, Navarro F, Kofler F, Paetzold JC, Shit S, Hu X, Lipková J, Rempfler M, Piraud M, Kirschke J, Wiestler B, Zhang Z, Hülsemeyer C, Beetz M, Ettlinger F, Antonelli M, Bae W, Bellver M, Bi L, Chen H, Chlebus G, Dam EB, Dou Q, Fu CW, Georgescu B, Giró-I-Nieto X, Gruen F, Han X, Heng PA, Hesser J, Moltz JH, Igel C, Isensee F, Jäger P, Jia F, Kaluva KC, Khened M, Kim I, Kim JH, Kim S, Kohl S, Konopczynski T, Kori A, Krishnamurthi G, Li F, Li H, Li J, Li X, Lowengrub J, Ma J, Maier-Hein K, Maninis KK, Meine H, Merhof D, Pai A, Perslev M, Petersen J, Pont-Tuset J, Qi J, Qi X, Rippel O, Roth K, Sarasua I, Schenk A, Shen Z, Torres J, Wachinger C, Wang C, Weninger L, Wu J, Xu D, Yang X, Yu SCH, Yuan Y, Yue M, Zhang L, Cardoso J, Bakas S, Braren R, Heinemann V, Pal C, Tang A, Kadoury S, Soler L, van Ginneken B, Greenspan H, Joskowicz L, Menze B. The Liver Tumor Segmentation Benchmark (LiTS). Med Image Anal 2023; 84:102680. [PMID: 36481607 PMCID: PMC10631490 DOI: 10.1016/j.media.2022.102680] [Citation(s) in RCA: 54] [Impact Index Per Article: 54.0] [Received: 09/19/2021] [Revised: 09/27/2022] [Accepted: 10/29/2022] [Indexed: 11/18/2022]
Abstract
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes, appearances, and lesion-to-background contrast (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas for tumor segmentation the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis of liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection: the best liver tumor detection methods achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
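The Dice scores above measure overlap between predicted and reference segmentations. As a minimal sketch (not the benchmark's evaluation code), for binary masks represented as sets of voxel indices:

```python
def dice(pred, target):
    """Dice similarity coefficient: twice the intersection size
    divided by the sum of the two mask sizes (1.0 = perfect overlap)."""
    pred, target = set(pred), set(target)
    if not pred and not target:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(pred & target) / (len(pred) + len(target))
```

For example, masks {1, 2, 3} and {2, 3, 4} share two voxels out of six total, giving a Dice of 2/3; identical masks give 1.0 and disjoint masks give 0.0.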
Affiliation(s)
- Patrick Bilic
- Department of Informatics, Technical University of Munich, Germany
- Patrick Christ
- Department of Informatics, Technical University of Munich, Germany
- Hongwei Bran Li
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Avi Ben-Cohen
- Department of Biomedical Engineering, Tel-Aviv University, Israel
- Georgios Kaissis
- Institute for AI in Medicine, Technical University of Munich, Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Computing, Imperial College London, London, United Kingdom
- Adi Szeskin
- School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
- Colin Jacobs
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Gabriel Chartrand
- The University of Montréal Hospital Research Centre (CRCHUM), Montréal, Québec, Canada
- Fabian Lohöfer
- Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany
- Julian Walter Holch
- Department of Medicine III, University Hospital, LMU Munich, Munich, Germany; Comprehensive Cancer Center Munich, Munich, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Wieland Sommer
- Department of Radiology, University Hospital, LMU Munich, Germany
- Felix Hofmann
- Department of General, Visceral and Transplantation Surgery, University Hospital, LMU Munich, Germany; Department of Radiology, University Hospital, LMU Munich, Germany
- Alexandre Hostettler
- Department of Surgical Data Science, Institut de Recherche contre les Cancers de l'Appareil Digestif (IRCAD), France
- Naama Lev-Cohain
- Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel
- Jacob Sosna
- Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel
- Ivan Ezhov
- Department of Informatics, Technical University of Munich, Germany
- Anjany Sekuboyina
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Fernando Navarro
- Department of Informatics, Technical University of Munich, Germany; Department of Radiation Oncology and Radiotherapy, Klinikum rechts der Isar, Technical University of Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Florian Kofler
- Department of Informatics, Technical University of Munich, Germany; Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Johannes C Paetzold
- Department of Computing, Imperial College London, London, United Kingdom; Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany
- Suprosanna Shit
- Department of Informatics, Technical University of Munich, Germany
- Xiaobin Hu
- Department of Informatics, Technical University of Munich, Germany
- Jana Lipková
- Brigham and Women's Hospital, Harvard Medical School, USA
- Markus Rempfler
- Department of Informatics, Technical University of Munich, Germany
- Marie Piraud
- Department of Informatics, Technical University of Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany
- Jan Kirschke
- Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany
- Benedikt Wiestler
- Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany
- Zhiheng Zhang
- Department of Hepatobiliary Surgery, the Affiliated Drum Tower Hospital of Nanjing University Medical School, China
- Marcel Beetz
- Department of Informatics, Technical University of Munich, Germany
- Michela Antonelli
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Lei Bi
- School of Computer Science, the University of Sydney, Australia
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, China
- Grzegorz Chlebus
- Fraunhofer MEVIS, Bremen, Germany; Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Erik B Dam
- Department of Computer Science, University of Copenhagen, Denmark
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Chi-Wing Fu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Xavier Giró-I-Nieto
- Signal Theory and Communications Department, Universitat Politecnica de Catalunya, Catalonia, Spain
- Felix Gruen
- Institute of Control Engineering, Technische Universität Braunschweig, Germany
- Xu Han
- Department of Computer Science, UNC Chapel Hill, USA
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Jürgen Hesser
- Mannheim Institute for Intelligent Systems in Medicine, Department of Medicine Mannheim, Heidelberg University, Germany; Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany; Central Institute for Computer Engineering (ZITI), Heidelberg University, Germany
- Christian Igel
- Department of Computer Science, University of Copenhagen, Denmark
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
- Paul Jäger
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
- Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Krishna Chaitanya Kaluva
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
- Mahendra Khened
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
- Jae-Hun Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, South Korea
- Simon Kohl
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tomasz Konopczynski
- Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany
- Avinash Kori
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
- Ganapathy Krishnamurthi
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
- Fan Li
- Sensetime, Shanghai, China
- Hongchao Li
- Department of Computer Science, Guangdong University of Foreign Studies, China
- Junbo Li
- Philips Research China, Philips China Innovation Campus, Shanghai, China
- Xiaomeng Li
- Department of Electrical and Electronic Engineering, The University of Hong Kong, China
- John Lowengrub
- Departments of Mathematics, Biomedical Engineering, University of California, Irvine, USA; Center for Complex Biological Systems, University of California, Irvine, USA; Chao Family Comprehensive Cancer Center, University of California, Irvine, USA
- Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, China
- Klaus Maier-Hein
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
- Hans Meine
- Fraunhofer MEVIS, Bremen, Germany; Medical Image Computing Group, FB3, University of Bremen, Germany
- Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
- Akshay Pai
- Department of Computer Science, University of Copenhagen, Denmark
- Mathias Perslev
- Department of Computer Science, University of Copenhagen, Denmark
- Jens Petersen
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jordi Pont-Tuset
- Eidgenössische Technische Hochschule Zurich (ETHZ), Zurich, Switzerland
- Jin Qi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, China
- Xiaojuan Qi
- Department of Electrical and Electronic Engineering, The University of Hong Kong, China
- Oliver Rippel
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
- Ignacio Sarasua
- Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
- Andrea Schenk
- Fraunhofer MEVIS, Bremen, Germany; Institute for Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany
- Zengming Shen
- Beckman Institute, University of Illinois at Urbana-Champaign, USA; Siemens Healthineers, USA
- Jordi Torres
- Barcelona Supercomputing Center, Barcelona, Spain; Universitat Politecnica de Catalunya, Catalonia, Spain
- Christian Wachinger
- Department of Informatics, Technical University of Munich, Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
- Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Sweden
- Leon Weninger
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
- Jianrong Wu
- Tencent Healthcare (Shenzhen) Co., Ltd, China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, China
- Simon Chun-Ho Yu
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, NY, USA
- Miao Yue
- CGG Services (Singapore) Pte. Ltd., Singapore
- Liping Zhang
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, PA, USA
- Rickmer Braren
- German Cancer Consortium (DKTK), Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Comprehensive Cancer Center Munich, Munich, Germany
- Volker Heinemann
- Department of Hematology/Oncology & Comprehensive Cancer Center Munich, LMU Klinikum Munich, Germany
- An Tang
- Department of Radiology, Radiation Oncology and Nuclear Medicine, University of Montréal, Canada
- Luc Soler
- Department of Surgical Data Science, Institut de Recherche contre les Cancers de l'Appareil Digestif (IRCAD), France
- Bram van Ginneken
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Hayit Greenspan
- Department of Biomedical Engineering, Tel-Aviv University, Israel
- Leo Joskowicz
- School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
- Bjoern Menze
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
|
17
|
Pati S, Baid U, Edwards B, Sheller M, Wang SH, Reina GA, Foley P, Gruzdev A, Karkada D, Davatzikos C, Sako C, Ghodasara S, Bilello M, Mohan S, Vollmuth P, Brugnara G, Preetha CJ, Sahm F, Maier-Hein K, Zenk M, Bendszus M, Wick W, Calabrese E, Rudie J, Villanueva-Meyer J, Cha S, Ingalhalikar M, Jadhav M, Pandey U, Saini J, Garrett J, Larson M, Jeraj R, Currie S, Frood R, Fatania K, Huang RY, Chang K, Balaña C, Capellades J, Puig J, Trenkler J, Pichler J, Necker G, Haunschmidt A, Meckel S, Shukla G, Liem S, Alexander GS, Lombardo J, Palmer JD, Flanders AE, Dicker AP, Sair HI, Jones CK, Venkataraman A, Jiang M, So TY, Chen C, Heng PA, Dou Q, Kozubek M, Lux F, Michálek J, Matula P, Keřkovský M, Kopřivová T, Dostál M, Vybíhal V, Vogelbaum MA, Mitchell JR, Farinhas J, Maldjian JA, Yogananda CGB, Pinho MC, Reddy D, Holcomb J, Wagner BC, Ellingson BM, Cloughesy TF, Raymond C, Oughourlian T, Hagiwara A, Wang C, To MS, Bhardwaj S, Chong C, Agzarian M, Falcão AX, Martins SB, Teixeira BCA, Sprenger F, Menotti D, Lucio DR, LaMontagne P, Marcus D, Wiestler B, Kofler F, Ezhov I, Metz M, Jain R, Lee M, Lui YW, McKinley R, Slotboom J, Radojewski P, Meier R, Wiest R, Murcia D, Fu E, Haas R, Thompson J, Ormond DR, Badve C, Sloan AE, Vadmal V, Waite K, Colen RR, Pei L, Ak M, Srinivasan A, Bapuraj JR, Rao A, Wang N, Yoshiaki O, Moritani T, Turk S, Lee J, Prabhudesai S, Morón F, Mandel J, Kamnitsas K, Glocker B, Dixon LVM, Williams M, Zampakis P, Panagiotopoulos V, Tsiganos P, Alexiou S, Haliassos I, Zacharaki EI, Moustakas K, Kalogeropoulou C, Kardamakis DM, Choi YS, Lee SK, Chang JH, Ahn SS, Luo B, Poisson L, Wen N, Tiwari P, Verma R, Bareja R, Yadav I, Chen J, Kumar N, Smits M, van der Voort SR, Alafandi A, Incekara F, Wijnenga MMJ, Kapsas G, Gahrmann R, Schouten JW, Dubbink HJ, Vincent AJPE, van den Bent MJ, French PJ, Klein S, Yuan Y, Sharma S, Tseng TC, Adabi S, Niclou SP, Keunen O, Hau AC, Vallières M, Fortin D, Lepage M, Landman B, Ramadass K, Xu K, Chotai S, Chambless LB, Mistry A, Thompson RC, Gusev Y, Bhuvaneshwar K, Sayah A, Bencheqroun C, Belouali A, Madhavan S, Booth TC, Chelliah A, Modat M, Shuaib H, Dragos C, Abayazeed A, Kolodziej K, Hill M, Abbassy A, Gamal S, Mekhaimar M, Qayati M, Reyes M, Park JE, Yun J, Kim HS, Mahajan A, Muzi M, Benson S, Beets-Tan RGH, Teuwen J, Herrera-Trujillo A, Trujillo M, Escobar W, Abello A, Bernal J, Gómez J, Choi J, Baek S, Kim Y, Ismael H, Allen B, Buatti JM, Kotrotsou A, Li H, Weiss T, Weller M, Bink A, Pouymayou B, Shaykh HF, Saltz J, Prasanna P, Shrestha S, Mani KM, Payne D, Kurc T, Pelaez E, Franco-Maldonado H, Loayza F, Quevedo S, Guevara P, Torche E, Mendoza C, Vera F, Ríos E, López E, Velastin SA, Ogbole G, Soneye M, Oyekunle D, Odafe-Oyibotha O, Osobu B, Shu'aibu M, Dorcas A, Dako F, Simpson AL, Hamghalam M, Peoples JJ, Hu R, Tran A, Cutler D, Moraes FY, Boss MA, Gimpel J, Veettil DK, Schmidt K, Bialecki B, Marella S, Price C, Cimino L, Apgar C, Shah P, Menze B, Barnholtz-Sloan JS, Martin J, Bakas S. Author Correction: Federated learning enables big data for rare cancer boundary detection. Nat Commun 2023; 14:436. [PMID: 36702828 PMCID: PMC9879935 DOI: 10.1038/s41467-023-36188-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/27/2023] Open
Affiliation(s)
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Bavaria, Germany
| | - Ujjwal Baid
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | | | | | | | | | | | | | | | - Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Chiharu Sako
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Satyam Ghodasara
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Michel Bilello
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Suyash Mohan
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | | | - Felix Sahm
- Clinical Cooperation Unit Neuropathology, German Cancer Consortium (DKTK) within the German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Neuropathology, Heidelberg University Hospital, Heidelberg, Germany
| | - Klaus Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Maximilian Zenk
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| | - Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Wolfgang Wick
- Clinical Cooperation Unit Neuropathology, German Cancer Consortium (DKTK) within the German Cancer Research Center (DKFZ), Heidelberg, Germany
- Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany
| | - Evan Calabrese
- Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Jeffrey Rudie
- Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Javier Villanueva-Meyer
- Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Soonmee Cha
- Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Madhura Ingalhalikar
- Symbiosis Center for Medical Image Analysis, Symbiosis International University, Pune, Maharashtra, India
| | - Manali Jadhav
- Symbiosis Center for Medical Image Analysis, Symbiosis International University, Pune, Maharashtra, India
| | - Umang Pandey
- Symbiosis Center for Medical Image Analysis, Symbiosis International University, Pune, Maharashtra, India
| | - Jitender Saini
- Department of Neuroimaging and Interventional Radiology, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
| | - John Garrett
- Department of Radiology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
| | - Matthew Larson
- Department of Radiology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
| | - Robert Jeraj
- Department of Radiology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
| | - Stuart Currie
- Leeds Teaching Hospitals Trust, Department of Radiology, Leeds, UK
| | - Russell Frood
- Leeds Teaching Hospitals Trust, Department of Radiology, Leeds, UK
| | - Kavi Fatania
- Leeds Teaching Hospitals Trust, Department of Radiology, Leeds, UK
| | - Raymond Y Huang
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Ken Chang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
| | | | | | - Josep Puig
- Department of Radiology (IDI), Girona Biomedical Research Institute (IdIBGi), Josep Trueta University Hospital, Girona, Spain
| | - Johannes Trenkler
- Institute of Neuroradiology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
| | - Josef Pichler
- Department of Neurooncology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
| | - Georg Necker
- Institute of Neuroradiology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
| | - Andreas Haunschmidt
- Institute of Neuroradiology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
| | - Stephan Meckel
- Institute of Neuroradiology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
- Institute of Diagnostic and Interventional Neuroradiology, RKH Klinikum Ludwigsburg, Ludwigsburg, Germany
| | - Gaurav Shukla
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiation Oncology, Christiana Care Health System, Philadelphia, PA, USA
| | - Spencer Liem
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
| | - Gregory S Alexander
- Department of Radiation Oncology, University of Maryland, Baltimore, MD, USA
| | - Joseph Lombardo
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Department of Radiation Oncology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
| | - Joshua D Palmer
- Department of Radiation Oncology, The James Cancer Hospital and Solove Research Institute, The Ohio State University Comprehensive Cancer Center, Columbus, OH, USA
| | - Adam E Flanders
- Department of Radiology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
| | - Adam P Dicker
- Department of Radiation Oncology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
| | - Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Malone Center for Engineering in Healthcare, The Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Craig K Jones
- The Malone Center for Engineering in Healthcare, The Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Archana Venkataraman
- Department of Electrical and Computer Engineering, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Meirui Jiang
- The Chinese University of Hong Kong, Hong Kong, China
| | - Tiffany Y So
- The Chinese University of Hong Kong, Hong Kong, China
| | - Cheng Chen
- The Chinese University of Hong Kong, Hong Kong, China
| | | | - Qi Dou
- The Chinese University of Hong Kong, Hong Kong, China
| | - Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Filip Lux
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Jan Michálek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Petr Matula
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Miloš Keřkovský
- Department of Radiology and Nuclear Medicine, Faculty of Medicine, Masaryk University, Brno and University Hospital Brno, Brno, Czech Republic
| | - Tereza Kopřivová
- Department of Radiology and Nuclear Medicine, Faculty of Medicine, Masaryk University, Brno and University Hospital Brno, Brno, Czech Republic
| | - Marek Dostál
- Department of Radiology and Nuclear Medicine, Faculty of Medicine, Masaryk University, Brno and University Hospital Brno, Brno, Czech Republic
- Department of Biophysics, Faculty of Medicine, Masaryk University, Brno, Czech Republic
| | - Václav Vybíhal
- Department of Neurosurgery, Faculty of Medicine, Masaryk University, Brno, and University Hospital and Czech Republic, Brno, Czech Republic
| | - Michael A Vogelbaum
- Department of Neuro Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
| | - J Ross Mitchell
- University of Alberta, Edmonton, AB, Canada
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
| | - Joaquim Farinhas
- Department of Radiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
| | | | | | - Marco C Pinho
- University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Divya Reddy
- University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - James Holcomb
- University of Texas Southwestern Medical Center, Dallas, TX, USA
| | | | - Benjamin M Ellingson
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- UCLA Neuro-Oncology Program, Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CaA, USA
| | - Timothy F Cloughesy
- UCLA Neuro-Oncology Program, Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CaA, USA
| | - Catalina Raymond
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Talia Oughourlian
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Akifumi Hagiwara
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Chencai Wang
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Minh-Son To
- College of Medicine and Public Health, Flinders University, Bedford Park, SA, Australia
- Division of Surgery and Perioperative Medicine, Flinders Medical Centre, Bedford Park, SA, Australia
| | - Sargam Bhardwaj
- College of Medicine and Public Health, Flinders University, Bedford Park, SA, Australia
| | - Chee Chong
- South Australia Medical Imaging, Flinders Medical Centre, Bedford Park, SA, Australia
| | - Marc Agzarian
- South Australia Medical Imaging, Flinders Medical Centre, Bedford Park, SA, Australia
- Department of Neurology, Baylor College of Medicine, Houston, TX, USA
| | | | | | - Bernardo C A Teixeira
- Instituto de Neurologia de Curitiba, Curitiba, Paraná, Brazil
- Department of Radiology, Hospital de Clínicas da Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | - Flávia Sprenger
- Department of Radiology, Hospital de Clínicas da Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | - David Menotti
- Department of Informatics, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | - Diego R Lucio
- Department of Informatics, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | - Pamela LaMontagne
- Department of Radiology, Washington University in St. Louis, St. Louis, MO, USA
| | - Daniel Marcus
- Department of Radiology, Washington University in St. Louis, St. Louis, MO, USA
| | - Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TranslaTUM (Zentralinstitut für translationale Krebsforschung der Technischen Universität München), Klinikum rechts der Isar, Munich, Germany
| | - Florian Kofler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TranslaTUM (Zentralinstitut für translationale Krebsforschung der Technischen Universität München), Klinikum rechts der Isar, Munich, Germany
- Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany
| | - Ivan Ezhov
- Department of Informatics, Technical University of Munich, Munich, Bavaria, Germany
- TranslaTUM (Zentralinstitut für translationale Krebsforschung der Technischen Universität München), Klinikum rechts der Isar, Munich, Germany
- Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany
| | - Marie Metz
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Rajan Jain
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, NY, USA
| | - Matthew Lee
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Yvonne W Lui
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Richard McKinley
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Johannes Slotboom
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Piotr Radojewski
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Raphael Meier
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Roland Wiest
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Derrick Murcia
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - Eric Fu
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - Rourke Haas
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - John Thompson
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - David Ryan Ormond
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - Chaitra Badve
- Department of Radiology, University Hospitals Cleveland, Cleveland, OH, USA
| | - Andrew E Sloan
- Department of Neurological Surgery, University Hospitals-Seidman Cancer Center, Cleveland, OH, USA
- Case Comprehensive Cancer Center, Cleveland, OH, USA
- Department of Neurosurgery, Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Vachan Vadmal
- Department of Neurosurgery, Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Kristin Waite
- National Cancer Institute, National Institutes of Health, Division of Cancer Epidemiology and Genetics, Bethesda, MD, USA
| | - Rivka R Colen
- Department of Radiology, Neuroradiology Division, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Linmin Pei
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Murat Ak
- Department of Radiology, Neuroradiology Division, University of Pittsburgh, Pittsburgh, PA, USA
| | - Ashok Srinivasan
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - J Rajiv Bapuraj
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - Arvind Rao
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
| | - Nicholas Wang
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
| | - Ota Yoshiaki
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - Toshio Moritani
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - Sevcan Turk
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - Joonsang Lee
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
| | - Snehal Prabhudesai
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
| | - Fanny Morón
- Department of Radiology, Baylor College of Medicine, Houston, TX, USA
| | - Jacob Mandel
- Department of Neurology, Baylor College of Medicine, Houston, TX, USA
| | - Konstantinos Kamnitsas
- Department of Computing, Imperial College London, London, UK
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
| | - Ben Glocker
- Department of Computing, Imperial College London, London, UK
| | - Luke V M Dixon
- Department of Radiology, Imperial College NHS Healthcare Trust, London, UK
| | - Matthew Williams
- Computational Oncology Group, Institute for Global Health Innovation, Imperial College London, London, UK
| | - Peter Zampakis
- Department of Neuroradiology, University of Patras, Patras, Greece
| | - Panagiotis Tsiganos
- Clinical Radiology Laboratory, Department of Medicine, University of Patras, Patras, Greece
| | - Sotiris Alexiou
- Department of Electrical and Computer Engineering, University of Patras, Patras, Greece
| | - Ilias Haliassos
- Department of Neuro-Oncology, University of Patras, Patras, Greece
| | - Evangelia I Zacharaki
- Department of Electrical and Computer Engineering, University of Patras, Patras, Greece
| | - Sung Soo Ahn
- Yonsei University College of Medicine, Seoul, Korea
| | - Bing Luo
- Department of Radiation Oncology, Henry Ford Health System, Detroit, MI, USA
| | - Laila Poisson
- Public Health Sciences, Henry Ford Health System, Detroit, MI, USA
| | - Ning Wen
- Department of Radiation Oncology, Henry Ford Health System, Detroit, MI, USA
- SJTU-Ruijin-UIH Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Ruchika Verma
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
- Case Western Reserve University, Cleveland, OH, USA
| | - Rohan Bareja
- Case Western Reserve University, Cleveland, OH, USA
| | - Ipsa Yadav
- Case Western Reserve University, Cleveland, OH, USA
| | - Neeraj Kumar
- University of Alberta, Edmonton, AB, Canada
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
| | - Marion Smits
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Sebastian R van der Voort
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Ahmed Alafandi
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Fatih Incekara
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
- Department of Neurosurgery, Brain Tumor Center, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Maarten M J Wijnenga
- Department of Neurology, Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | - Georgios Kapsas
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Renske Gahrmann
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Joost W Schouten
- Department of Neurosurgery, Brain Tumor Center, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Hendrikus J Dubbink
- Department of Pathology, Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | - Arnaud J P E Vincent
- Department of Neurosurgery, Brain Tumor Center, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Martin J van den Bent
- Department of Neurology, Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | - Pim J French
- Department of Neurology, Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | - Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Sonam Sharma
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Tzu-Chi Tseng
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Saba Adabi
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Simone P Niclou
- NORLUX Neuro-Oncology Laboratory, Department of Cancer Research, Luxembourg Institute of Health, Luxembourg, Luxembourg
| | - Olivier Keunen
- Translational Radiomics, Department of Cancer Research, Luxembourg Institute of Health, Luxembourg, Luxembourg
| | - Ann-Christin Hau
- NORLUX Neuro-Oncology Laboratory, Department of Cancer Research, Luxembourg Institute of Health, Luxembourg, Luxembourg
- Luxembourg Center of Neuropathology, Laboratoire National De Santé, Luxembourg, Luxembourg
| | - Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, QC, Canada
- Centre de Recherche du Centre Hospitalière Universitaire de Sherbrooke, Sherbrooke, QC, Canada
| | - David Fortin
- Centre de Recherche du Centre Hospitalière Universitaire de Sherbrooke, Sherbrooke, QC, Canada
- Division of Neurosurgery and Neuro-Oncology, Faculty of Medicine and Health Science, Université de Sherbrooke, Sherbrooke, QC, Canada
| | - Martin Lepage
- Centre de Recherche du Centre Hospitalière Universitaire de Sherbrooke, Sherbrooke, QC, Canada
- Department of Nuclear Medicine and Radiobiology, Sherbrooke Molecular Imaging Centre, Université de Sherbrooke, Sherbrooke, QC, Canada
| | - Bennett Landman
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Karthik Ramadass
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Kaiwen Xu
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Silky Chotai
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Lola B Chambless
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Akshitkumar Mistry
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Reid C Thompson
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Yuriy Gusev
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Krithika Bhuvaneshwar
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Anousheh Sayah
- Division of Neuroradiology & Neurointerventional Radiology, Department of Radiology, MedStar Georgetown University Hospital, Washington, DC, USA
| | - Camelia Bencheqroun
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Anas Belouali
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Subha Madhavan
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Thomas C Booth
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, UK
| | - Alysha Chelliah
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Haris Shuaib
- Stoke Mandeville Hospital, Mandeville Road, Aylesbury, UK
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
| | - Carmen Dragos
- Stoke Mandeville Hospital, Mandeville Road, Aylesbury, UK
| | - Shady Gamal
- University of Cairo School of Medicine, Giza, Egypt
| | - Ji Eun Park
- Department of Radiology, Asan Medical Center, Seoul, South Korea
| | - Jihye Yun
- Department of Radiology, Asan Medical Center, Seoul, South Korea
| | - Ho Sung Kim
- Department of Radiology, Asan Medical Center, Seoul, South Korea
| | - Abhishek Mahajan
- The Clatterbridge Cancer Centre NHS Foundation Trust Pembroke Place, Liverpool, UK
| | - Mark Muzi
- Department of Radiology, University of Washington, Seattle, WA, USA
| | - Sean Benson
- Netherlands Cancer Institute, Amsterdam, Netherlands
| | - Regina G H Beets-Tan
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, Netherlands
- GROW School of Oncology and Developmental Biology, Maastricht, Netherlands
| | - Jonas Teuwen
- Netherlands Cancer Institute, Amsterdam, Netherlands
| | - William Escobar
- Clínica Imbanaco Grupo Quirón Salud, Cali, Colombia
- Universidad del Valle, Cali, Colombia
| | - Jose Bernal
- Universidad del Valle, Cali, Colombia
- The University of Edinburgh, Edinburgh, UK
| | - Joseph Choi
- Department of Industrial and Systems Engineering, University of Iowa, Iowa City, IA, USA
| | - Stephen Baek
- Department of Industrial and Systems Engineering, Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - Yusung Kim
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - Heba Ismael
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - Bryan Allen
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - John M Buatti
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - Hongwei Li
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | - Tobias Weiss
- Department of Neurology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Michael Weller
- Department of Neurology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Andrea Bink
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Bertrand Pouymayou
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
| | - Prateek Prasanna
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
| | - Sampurna Shrestha
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
| | - Kartik M Mani
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
- Department of Radiation Oncology, Stony Brook University, Stony Brook, NY, USA
| | - David Payne
- Department of Radiology, Stony Brook University, Stony Brook, NY, USA
| | - Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
- Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, TN, USA
| | - Enrique Pelaez
- Escuela Superior Politecnica del Litoral, Guayaquil, Guayas, Ecuador
| | - Francis Loayza
- Escuela Superior Politecnica del Litoral, Guayaquil, Guayas, Ecuador
| | - Franco Vera
- Universidad de Concepción, Concepción, Biobío, Chile
| | - Elvis Ríos
- Universidad de Concepción, Concepción, Biobío, Chile
| | - Eduardo López
- Universidad de Concepción, Concepción, Biobío, Chile
| | - Sergio A Velastin
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| | - Godwin Ogbole
- Department of Radiology, University College Hospital Ibadan, Oyo, Nigeria
| | - Mayowa Soneye
- Department of Radiology, University College Hospital Ibadan, Oyo, Nigeria
| | - Dotun Oyekunle
- Department of Radiology, University College Hospital Ibadan, Oyo, Nigeria
| | - Babatunde Osobu
- Department of Radiology, University College Hospital Ibadan, Oyo, Nigeria
| | - Mustapha Shu'aibu
- Department of Radiology, Muhammad Abdullahi Wase Teaching Hospital, Kano, Nigeria
| | - Adeleye Dorcas
- Department of Radiology, Obafemi Awolowo University Ile-Ife, Ile-Ife, Osun, Nigeria
| | - Farouk Dako
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Center for Global Health, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Amber L Simpson
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
- School of Computing, Queen's University, Kingston, ON, Canada
| | - Mohammad Hamghalam
- School of Computing, Queen's University, Kingston, ON, Canada
- Department of Electrical Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
| | - Jacob J Peoples
- School of Computing, Queen's University, Kingston, ON, Canada
| | - Ricky Hu
- School of Computing, Queen's University, Kingston, ON, Canada
| | - Anh Tran
- School of Computing, Queen's University, Kingston, ON, Canada
| | - Danielle Cutler
- The Faculty of Arts & Sciences, Queen's University, Kingston, ON, Canada
| | - Fabio Y Moraes
- Department of Oncology, Queen's University, Kingston, ON, Canada
| | - Michael A Boss
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - James Gimpel
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Deepak Kattil Veettil
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Kendall Schmidt
- Data Science Institute, American College of Radiology, Reston, VA, USA
| | - Brian Bialecki
- Data Science Institute, American College of Radiology, Reston, VA, USA
| | - Sailaja Marella
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Cynthia Price
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Lisa Cimino
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Charles Apgar
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Bjoern Menze
- Department of Informatics, Technical University of Munich, Munich, Bavaria, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | - Jill S Barnholtz-Sloan
- National Cancer Institute, National Institutes of Health, Division of Cancer Epidemiology and Genetics, Bethesda, MD, USA
- Center for Biomedical Informatics and Information Technology, National Cancer Institute (NCI), National Institutes of Health, Bethesda, MD, USA
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
18
|
Pati S, Baid U, Edwards B, Sheller M, Wang SH, Reina GA, Foley P, Gruzdev A, Karkada D, Davatzikos C, Sako C, Ghodasara S, Bilello M, Mohan S, Vollmuth P, Brugnara G, Preetha CJ, Sahm F, Maier-Hein K, Zenk M, Bendszus M, Wick W, Calabrese E, Rudie J, Villanueva-Meyer J, Cha S, Ingalhalikar M, Jadhav M, Pandey U, Saini J, Garrett J, Larson M, Jeraj R, Currie S, Frood R, Fatania K, Huang RY, Chang K, Balaña C, Capellades J, Puig J, Trenkler J, Pichler J, Necker G, Haunschmidt A, Meckel S, Shukla G, Liem S, Alexander GS, Lombardo J, Palmer JD, Flanders AE, Dicker AP, Sair HI, Jones CK, Venkataraman A, Jiang M, So TY, Chen C, Heng PA, Dou Q, Kozubek M, Lux F, Michálek J, Matula P, Keřkovský M, Kopřivová T, Dostál M, Vybíhal V, Vogelbaum MA, Mitchell JR, Farinhas J, Maldjian JA, Yogananda CGB, Pinho MC, Reddy D, Holcomb J, Wagner BC, Ellingson BM, Cloughesy TF, Raymond C, Oughourlian T, Hagiwara A, Wang C, To MS, Bhardwaj S, Chong C, Agzarian M, Falcão AX, Martins SB, Teixeira BCA, Sprenger F, Menotti D, Lucio DR, LaMontagne P, Marcus D, Wiestler B, Kofler F, Ezhov I, Metz M, Jain R, Lee M, Lui YW, McKinley R, Slotboom J, Radojewski P, Meier R, Wiest R, Murcia D, Fu E, Haas R, Thompson J, Ormond DR, Badve C, Sloan AE, Vadmal V, Waite K, Colen RR, Pei L, Ak M, Srinivasan A, Bapuraj JR, Rao A, Wang N, Yoshiaki O, Moritani T, Turk S, Lee J, Prabhudesai S, Morón F, Mandel J, Kamnitsas K, Glocker B, Dixon LVM, Williams M, Zampakis P, Panagiotopoulos V, Tsiganos P, Alexiou S, Haliassos I, Zacharaki EI, Moustakas K, Kalogeropoulou C, Kardamakis DM, Choi YS, Lee SK, Chang JH, Ahn SS, Luo B, Poisson L, Wen N, Tiwari P, Verma R, Bareja R, Yadav I, Chen J, Kumar N, Smits M, van der Voort SR, Alafandi A, Incekara F, Wijnenga MMJ, Kapsas G, Gahrmann R, Schouten JW, Dubbink HJ, Vincent AJPE, van den Bent MJ, French PJ, Klein S, Yuan Y, Sharma S, Tseng TC, Adabi S, Niclou SP, Keunen O, Hau AC, Vallières M, Fortin D, Lepage M, Landman B, Ramadass K, Xu K, Chotai S, Chambless LB, Mistry 
A, Thompson RC, Gusev Y, Bhuvaneshwar K, Sayah A, Bencheqroun C, Belouali A, Madhavan S, Booth TC, Chelliah A, Modat M, Shuaib H, Dragos C, Abayazeed A, Kolodziej K, Hill M, Abbassy A, Gamal S, Mekhaimar M, Qayati M, Reyes M, Park JE, Yun J, Kim HS, Mahajan A, Muzi M, Benson S, Beets-Tan RGH, Teuwen J, Herrera-Trujillo A, Trujillo M, Escobar W, Abello A, Bernal J, Gómez J, Choi J, Baek S, Kim Y, Ismael H, Allen B, Buatti JM, Kotrotsou A, Li H, Weiss T, Weller M, Bink A, Pouymayou B, Shaykh HF, Saltz J, Prasanna P, Shrestha S, Mani KM, Payne D, Kurc T, Pelaez E, Franco-Maldonado H, Loayza F, Quevedo S, Guevara P, Torche E, Mendoza C, Vera F, Ríos E, López E, Velastin SA, Ogbole G, Soneye M, Oyekunle D, Odafe-Oyibotha O, Osobu B, Shu'aibu M, Dorcas A, Dako F, Simpson AL, Hamghalam M, Peoples JJ, Hu R, Tran A, Cutler D, Moraes FY, Boss MA, Gimpel J, Veettil DK, Schmidt K, Bialecki B, Marella S, Price C, Cimino L, Apgar C, Shah P, Menze B, Barnholtz-Sloan JS, Martin J, Bakas S. Federated learning enables big data for rare cancer boundary detection. Nat Commun 2022; 13:7346. [PMID: 36470898 PMCID: PMC9722782 DOI: 10.1038/s41467-022-33407-5] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 09/16/2022] [Indexed: 12/12/2022] Open
Abstract
Although machine learning (ML) has shown promise across disciplines, out-of-sample generalizability is concerning. This is currently addressed by sharing multi-site data, but such centralization is challenging/infeasible to scale due to various limitations. Federated ML (FL) provides an alternative paradigm for accurate and generalizable ML, by only sharing numerical model updates. Here we present the largest FL study to date, involving data from 71 sites across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, reporting the largest such dataset in the literature (n = 6,314). We demonstrate a 33% delineation improvement for the surgically targetable tumor, and 23% for the complete tumor extent, over a publicly trained model. We anticipate our study to: 1) enable more healthcare studies informed by large diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further analyses for glioblastoma by releasing our consensus model, and 3) demonstrate the FL effectiveness at such scale and task-complexity as a paradigm shift for multi-site collaborations, alleviating the need for data-sharing.
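The model-update sharing the abstract describes can be illustrated with a minimal federated-averaging (FedAvg-style) sketch: each site trains locally and contributes only numerical weights, which a coordinator combines in proportion to local sample counts. This is an illustrative sketch of the general technique, not the study's actual aggregation protocol (see the paper for those details); the function name and toy data below are hypothetical.

```python
# Minimal FedAvg-style aggregation sketch: only model weights leave each
# site, never the underlying patient data. Illustrative assumption: each
# site's model is a flat list of floats of equal length.

def federated_average(site_weights, site_sizes):
    """Combine per-site model weights into one consensus model,
    weighting each site by its local sample count."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    consensus = [0.0] * n_params
    for weights, size in zip(site_weights, site_sizes):
        frac = size / total  # site's share of the pooled data
        for i, w in enumerate(weights):
            consensus[i] += w * frac
    return consensus

# Toy example: three sites with different amounts of local data.
sites = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 20, 70]
print(federated_average(sites, sizes))  # pulled toward the largest site
```

In practice one such aggregation round alternates with further local training at every site, so the consensus model improves without any raw images being centralized.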
Affiliation(s)
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Bavaria, Germany
| | - Ujjwal Baid
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Chiharu Sako
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Satyam Ghodasara
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Michel Bilello
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Suyash Mohan
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Felix Sahm
- Clinical Cooperation Unit Neuropathology, German Cancer Consortium (DKTK) within the German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Neuropathology, Heidelberg University Hospital, Heidelberg, Germany
| | - Klaus Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Maximilian Zenk
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| | - Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Wolfgang Wick
- Clinical Cooperation Unit Neuropathology, German Cancer Consortium (DKTK) within the German Cancer Research Center (DKFZ), Heidelberg, Germany
- Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany
| | - Evan Calabrese
- Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Jeffrey Rudie
- Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Javier Villanueva-Meyer
- Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Soonmee Cha
- Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Madhura Ingalhalikar
- Symbiosis Center for Medical Image Analysis, Symbiosis International University, Pune, Maharashtra, India
| | - Manali Jadhav
- Symbiosis Center for Medical Image Analysis, Symbiosis International University, Pune, Maharashtra, India
| | - Umang Pandey
- Symbiosis Center for Medical Image Analysis, Symbiosis International University, Pune, Maharashtra, India
| | - Jitender Saini
- Department of Neuroimaging and Interventional Radiology, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
| | - John Garrett
- Department of Radiology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
| | - Matthew Larson
- Department of Radiology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
| | - Robert Jeraj
- Department of Radiology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
| | - Stuart Currie
- Leeds Teaching Hospitals Trust, Department of Radiology, Leeds, UK
| | - Russell Frood
- Leeds Teaching Hospitals Trust, Department of Radiology, Leeds, UK
| | - Kavi Fatania
- Leeds Teaching Hospitals Trust, Department of Radiology, Leeds, UK
| | - Raymond Y Huang
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Ken Chang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
| | - Josep Puig
- Department of Radiology (IDI), Girona Biomedical Research Institute (IdIBGi), Josep Trueta University Hospital, Girona, Spain
| | - Johannes Trenkler
- Institute of Neuroradiology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
| | - Josef Pichler
- Department of Neurooncology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
| | - Georg Necker
- Institute of Neuroradiology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
| | - Andreas Haunschmidt
- Institute of Neuroradiology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
| | - Stephan Meckel
- Institute of Neuroradiology, Neuromed Campus (NMC), Kepler University Hospital Linz, Linz, Austria
- Institute of Diagnostic and Interventional Neuroradiology, RKH Klinikum Ludwigsburg, Ludwigsburg, Germany
| | - Gaurav Shukla
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiation Oncology, Christiana Care Health System, Philadelphia, PA, USA
| | - Spencer Liem
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
| | - Gregory S Alexander
- Department of Radiation Oncology, University of Maryland, Baltimore, MD, USA
| | - Joseph Lombardo
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Department of Radiation Oncology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
| | - Joshua D Palmer
- Department of Radiation Oncology, The James Cancer Hospital and Solove Research Institute, The Ohio State University Comprehensive Cancer Center, Columbus, OH, USA
| | - Adam E Flanders
- Department of Radiology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
| | - Adam P Dicker
- Department of Radiation Oncology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
| | - Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Malone Center for Engineering in Healthcare, The Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Craig K Jones
- The Malone Center for Engineering in Healthcare, The Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Archana Venkataraman
- Department of Electrical and Computer Engineering, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Meirui Jiang
- The Chinese University of Hong Kong, Hong Kong, China
| | - Tiffany Y So
- The Chinese University of Hong Kong, Hong Kong, China
| | - Cheng Chen
- The Chinese University of Hong Kong, Hong Kong, China
| | - Qi Dou
- The Chinese University of Hong Kong, Hong Kong, China
| | - Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Filip Lux
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Jan Michálek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Petr Matula
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Miloš Keřkovský
- Department of Radiology and Nuclear Medicine, Faculty of Medicine, Masaryk University, Brno and University Hospital Brno, Brno, Czech Republic
| | - Tereza Kopřivová
- Department of Radiology and Nuclear Medicine, Faculty of Medicine, Masaryk University, Brno and University Hospital Brno, Brno, Czech Republic
| | - Marek Dostál
- Department of Radiology and Nuclear Medicine, Faculty of Medicine, Masaryk University, Brno and University Hospital Brno, Brno, Czech Republic
- Department of Biophysics, Faculty of Medicine, Masaryk University, Brno, Czech Republic
| | - Václav Vybíhal
- Department of Neurosurgery, Faculty of Medicine, Masaryk University, Brno, and University Hospital Brno, Brno, Czech Republic
| | - Michael A Vogelbaum
- Department of Neuro Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
| | - J Ross Mitchell
- University of Alberta, Edmonton, AB, Canada
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
| | - Joaquim Farinhas
- Department of Radiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
| | - Marco C Pinho
- University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Divya Reddy
- University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - James Holcomb
- University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Benjamin M Ellingson
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- UCLA Neuro-Oncology Program, Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Timothy F Cloughesy
- UCLA Neuro-Oncology Program, Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Catalina Raymond
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Talia Oughourlian
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Akifumi Hagiwara
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Chencai Wang
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
| | - Minh-Son To
- College of Medicine and Public Health, Flinders University, Bedford Park, SA, Australia
- Division of Surgery and Perioperative Medicine, Flinders Medical Centre, Bedford Park, SA, Australia
| | - Sargam Bhardwaj
- College of Medicine and Public Health, Flinders University, Bedford Park, SA, Australia
| | - Chee Chong
- South Australia Medical Imaging, Flinders Medical Centre, Bedford Park, SA, Australia
| | - Marc Agzarian
- South Australia Medical Imaging, Flinders Medical Centre, Bedford Park, SA, Australia
- Department of Neurology, Baylor College of Medicine, Houston, TX, USA
| | - Bernardo C A Teixeira
- Instituto de Neurologia de Curitiba, Curitiba, Paraná, Brazil
- Department of Radiology, Hospital de Clínicas da Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | - Flávia Sprenger
- Department of Radiology, Hospital de Clínicas da Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | - David Menotti
- Department of Informatics, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | - Diego R Lucio
- Department of Informatics, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | - Pamela LaMontagne
- Department of Radiology, Washington University in St. Louis, St. Louis, MO, USA
| | - Daniel Marcus
- Department of Radiology, Washington University in St. Louis, St. Louis, MO, USA
| | - Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TranslaTUM (Zentralinstitut für translationale Krebsforschung der Technischen Universität München), Klinikum rechts der Isar, Munich, Germany
| | - Florian Kofler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TranslaTUM (Zentralinstitut für translationale Krebsforschung der Technischen Universität München), Klinikum rechts der Isar, Munich, Germany
- Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany
| | - Ivan Ezhov
- Department of Informatics, Technical University of Munich, Munich, Bavaria, Germany
- TranslaTUM (Zentralinstitut für translationale Krebsforschung der Technischen Universität München), Klinikum rechts der Isar, Munich, Germany
- Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany
| | - Marie Metz
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Rajan Jain
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, NY, USA
| | - Matthew Lee
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Yvonne W Lui
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Richard McKinley
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Johannes Slotboom
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Piotr Radojewski
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Raphael Meier
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Roland Wiest
- Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Bern, Switzerland
| | - Derrick Murcia
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - Eric Fu
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - Rourke Haas
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - John Thompson
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - David Ryan Ormond
- Department of Neurosurgery, Anschutz Medical Campus, University of Colorado, Aurora, CO, USA
| | - Chaitra Badve
- Department of Radiology, University Hospitals Cleveland, Cleveland, OH, USA
| | - Andrew E Sloan
- Department of Neurological Surgery, University Hospitals-Seidman Cancer Center, Cleveland, OH, USA
- Case Comprehensive Cancer Center, Cleveland, OH, USA
- Department of Neurosurgery, Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Vachan Vadmal
- Department of Neurosurgery, Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Kristin Waite
- National Cancer Institute, National Institute of Health, Division of Cancer Epidemiology and Genetics, Bethesda, MD, USA
| | - Rivka R Colen
- Department of Radiology, Neuroradiology Division, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Linmin Pei
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Murat Ak
- Department of Radiology, Neuroradiology Division, University of Pittsburgh, Pittsburgh, PA, USA
| | - Ashok Srinivasan
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - J Rajiv Bapuraj
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - Arvind Rao
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
| | - Nicholas Wang
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
| | - Ota Yoshiaki
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - Toshio Moritani
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - Sevcan Turk
- Department of Neuroradiology, University of Michigan, Ann Arbor, MI, USA
| | - Joonsang Lee
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
| | - Snehal Prabhudesai
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
| | - Fanny Morón
- Department of Radiology, Baylor College of Medicine, Houston, TX, USA
| | - Jacob Mandel
- Department of Neurology, Baylor College of Medicine, Houston, TX, USA
| | - Konstantinos Kamnitsas
- Department of Computing, Imperial College London, London, UK
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
| | - Ben Glocker
- Department of Computing, Imperial College London, London, UK
| | - Luke V M Dixon
- Department of Radiology, Imperial College NHS Healthcare Trust, London, UK
| | - Matthew Williams
- Computational Oncology Group, Institute for Global Health Innovation, Imperial College London, London, UK
| | - Peter Zampakis
- Department of NeuroRadiology, University of Patras, Patras, Greece
| | - Panagiotis Tsiganos
- Clinical Radiology Laboratory, Department of Medicine, University of Patras, Patras, Greece
| | - Sotiris Alexiou
- Department of Electrical and Computer Engineering, University of Patras, Patras, Greece
| | - Ilias Haliassos
- Department of Neuro-Oncology, University of Patras, Patras, Greece
| | - Evangelia I Zacharaki
- Department of Electrical and Computer Engineering, University of Patras, Patras, Greece
| | - Sung Soo Ahn
- Yonsei University College of Medicine, Seoul, Korea
| | - Bing Luo
- Department of Radiation Oncology, Henry Ford Health System, Detroit, MI, USA
| | - Laila Poisson
- Public Health Sciences, Henry Ford Health System, Detroit, MI, USA
| | - Ning Wen
- Department of Radiation Oncology, Henry Ford Health System, Detroit, MI, USA
- SJTU-Ruijin-UIH Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Ruchika Verma
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
- Case Western Reserve University, Cleveland, OH, USA
| | - Rohan Bareja
- Case Western Reserve University, Cleveland, OH, USA
| | - Ipsa Yadav
- Case Western Reserve University, Cleveland, OH, USA
| | - Neeraj Kumar
- University of Alberta, Edmonton, AB, Canada
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
| | - Marion Smits
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Sebastian R van der Voort
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Ahmed Alafandi
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Fatih Incekara
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
- Department of Neurosurgery, Brain Tumor Center, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Maarten M J Wijnenga
- Department of Neurology, Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | - Georgios Kapsas
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Renske Gahrmann
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Joost W Schouten
- Department of Neurosurgery, Brain Tumor Center, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Hendrikus J Dubbink
- Department of Pathology, Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | - Arnaud J P E Vincent
- Department of Neurosurgery, Brain Tumor Center, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Martin J van den Bent
- Department of Neurology, Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | - Pim J French
- Department of Neurology, Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | - Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Sonam Sharma
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Tzu-Chi Tseng
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Saba Adabi
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Simone P Niclou
- NORLUX Neuro-Oncology Laboratory, Department of Cancer Research, Luxembourg Institute of Health, Luxembourg, Luxembourg
| | - Olivier Keunen
- Translation Radiomics, Department of Cancer Research, Luxembourg Institute of Health, Luxembourg, Luxembourg
| | - Ann-Christin Hau
- NORLUX Neuro-Oncology Laboratory, Department of Cancer Research, Luxembourg Institute of Health, Luxembourg, Luxembourg
- Luxembourg Center of Neuropathology, Laboratoire National De Santé, Luxembourg, Luxembourg
| | - Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, QC, Canada
- Centre de Recherche du Centre Hospitalière Universitaire de Sherbrooke, Sherbrooke, QC, Canada
| | - David Fortin
- Centre de Recherche du Centre Hospitalière Universitaire de Sherbrooke, Sherbrooke, QC, Canada
- Division of Neurosurgery and Neuro-Oncology, Faculty of Medicine and Health Science, Université de Sherbrooke, Sherbrooke, QC, Canada
| | - Martin Lepage
- Centre de Recherche du Centre Hospitalière Universitaire de Sherbrooke, Sherbrooke, QC, Canada
- Department of Nuclear Medicine and Radiobiology, Sherbrooke Molecular Imaging Centre, Université de Sherbrooke, Sherbrooke, QC, Canada
| | - Bennett Landman
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Karthik Ramadass
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Kaiwen Xu
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Silky Chotai
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Lola B Chambless
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Akshitkumar Mistry
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Reid C Thompson
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Yuriy Gusev
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Krithika Bhuvaneshwar
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Anousheh Sayah
- Division of Neuroradiology & Neurointerventional Radiology, Department of Radiology, MedStar Georgetown University Hospital, Washington, DC, USA
| | - Camelia Bencheqroun
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Anas Belouali
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Subha Madhavan
- Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
| | - Thomas C Booth
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, UK
| | - Alysha Chelliah
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Haris Shuaib
- Stoke Mandeville Hospital, Mandeville Road, Aylesbury, UK
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
| | - Carmen Dragos
- Stoke Mandeville Hospital, Mandeville Road, Aylesbury, UK
| | - Shady Gamal
- University of Cairo School of Medicine, Giza, Egypt
| | - Ji Eun Park
- Department of Radiology, Asan Medical Center, Seoul, South Korea
| | - Jihye Yun
- Department of Radiology, Asan Medical Center, Seoul, South Korea
| | - Ho Sung Kim
- Department of Radiology, Asan Medical Center, Seoul, South Korea
| | - Abhishek Mahajan
- The Clatterbridge Cancer Centre NHS Foundation Trust Pembroke Place, Liverpool, UK
| | - Mark Muzi
- Department of Radiology, University of Washington, Seattle, WA, USA
| | - Sean Benson
- Netherlands Cancer Institute, Amsterdam, Netherlands
| | - Regina G H Beets-Tan
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, Netherlands
- GROW School of Oncology and Developmental Biology, Maastricht, Netherlands
| | - Jonas Teuwen
- Netherlands Cancer Institute, Amsterdam, Netherlands
| | - William Escobar
- Clínica Imbanaco Grupo Quirón Salud, Cali, Colombia
- Universidad del Valle, Cali, Colombia
| | - Jose Bernal
- Universidad del Valle, Cali, Colombia
- The University of Edinburgh, Edinburgh, UK
| | - Joseph Choi
- Department of Industrial and Systems Engineering, University of Iowa, Iowa City, IA, USA
| | - Stephen Baek
- Department of Industrial and Systems Engineering, Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - Yusung Kim
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - Heba Ismael
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - Bryan Allen
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - John M Buatti
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
| | - Hongwei Li
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | - Tobias Weiss
- Department of Neurology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Michael Weller
- Department of Neurology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Andrea Bink
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Bertrand Pouymayou
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
| | - Prateek Prasanna
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
| | - Sampurna Shrestha
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
| | - Kartik M Mani
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
- Department of Radiation Oncology, Stony Brook University, Stony Brook, NY, USA
| | - David Payne
- Department of Radiology, Stony Brook University, Stony Brook, NY, USA
| | - Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York, USA
- Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, TN, USA
| | - Enrique Pelaez
- Escuela Superior Politecnica del Litoral, Guayaquil, Guayas, Ecuador
| | - Francis Loayza
- Escuela Superior Politecnica del Litoral, Guayaquil, Guayas, Ecuador
| | - Franco Vera
- Universidad de Concepción, Concepción, Biobío, Chile
| | - Elvis Ríos
- Universidad de Concepción, Concepción, Biobío, Chile
| | - Eduardo López
- Universidad de Concepción, Concepción, Biobío, Chile
| | - Sergio A Velastin
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| | - Godwin Ogbole
- Department of Radiology, University College Hospital Ibadan, Oyo, Nigeria
| | - Mayowa Soneye
- Department of Radiology, University College Hospital Ibadan, Oyo, Nigeria
| | - Dotun Oyekunle
- Department of Radiology, University College Hospital Ibadan, Oyo, Nigeria
| | - Babatunde Osobu
- Department of Radiology, University College Hospital Ibadan, Oyo, Nigeria
| | - Mustapha Shu'aibu
- Department of Radiology, Muhammad Abdullahi Wase Teaching Hospital, Kano, Nigeria
| | - Adeleye Dorcas
- Department of Radiology, Obafemi Awolowo University Ile-Ife, Ile-Ife, Osun, Nigeria
| | - Farouk Dako
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Center for Global Health, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Amber L Simpson
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
- School of Computing, Queen's University, Kingston, ON, Canada
| | - Mohammad Hamghalam
- School of Computing, Queen's University, Kingston, ON, Canada
- Department of Electrical Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
| | - Jacob J Peoples
- School of Computing, Queen's University, Kingston, ON, Canada
| | - Ricky Hu
- School of Computing, Queen's University, Kingston, ON, Canada
| | - Anh Tran
- School of Computing, Queen's University, Kingston, ON, Canada
| | - Danielle Cutler
- The Faculty of Arts & Sciences, Queen's University, Kingston, ON, Canada
| | - Fabio Y Moraes
- Department of Oncology, Queen's University, Kingston, ON, Canada
| | - Michael A Boss
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - James Gimpel
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Deepak Kattil Veettil
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Kendall Schmidt
- Data Science Institute, American College of Radiology, Reston, VA, USA
| | - Brian Bialecki
- Data Science Institute, American College of Radiology, Reston, VA, USA
| | - Sailaja Marella
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Cynthia Price
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Lisa Cimino
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Charles Apgar
- Center for Research and Innovation, American College of Radiology, Philadelphia, PA, USA
| | - Bjoern Menze
- Department of Informatics, Technical University of Munich, Munich, Bavaria, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | - Jill S Barnholtz-Sloan
- National Cancer Institute, National Institute of Health, Division of Cancer Epidemiology and Genetics, Bethesda, MD, USA
- Center for Biomedical Informatics and Information Technology, National Cancer Institute (NCI), National Institute of Health, Bethesda, MD, USA
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
| |
Collapse
|
19
|
Yang H, Chen C, Jiang M, Liu Q, Cao J, Heng PA, Dou Q. DLTTA: Dynamic Learning Rate for Test-Time Adaptation on Cross-Domain Medical Images. IEEE Trans Med Imaging 2022; 41:3575-3586. [PMID: 35839185 DOI: 10.1109/tmi.2022.3191535] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Test-time adaptation (TTA) has become an increasingly important topic for efficiently tackling cross-domain distribution shift at test time for medical images from different institutions. Previous TTA methods share a common limitation of using a fixed learning rate for all test samples. Such a practice is sub-optimal for TTA, because test data may arrive sequentially, and therefore the scale of the distribution shift can change frequently. To address this problem, we propose a novel dynamic learning rate adjustment method for test-time adaptation, called DLTTA, which dynamically modulates the magnitude of the weight update for each test image to account for differences in their distribution shift. Specifically, our DLTTA is equipped with a memory-bank-based estimation scheme to effectively measure the discrepancy of a given test sample. Based on this estimated discrepancy, a dynamic learning rate adjustment strategy is then developed to achieve a suitable degree of adaptation for each test sample. The effectiveness and general applicability of our DLTTA are extensively demonstrated on three tasks: retinal optical coherence tomography (OCT) segmentation, histopathological image classification, and prostate 3D MRI segmentation. Our method achieves effective and fast test-time adaptation with consistent performance improvements over current state-of-the-art test-time adaptation methods. Code is available at https://github.com/med-air/DLTTA.
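The core idea of the abstract, scaling each test sample's learning rate by a memory-bank-based discrepancy estimate, can be sketched as follows. The abstract does not give the exact formulas, so this is only a minimal illustrative sketch: the k-nearest-neighbour distance used as the discrepancy measure and the linear scaling of the base learning rate are assumptions, not the paper's precise scheme.

```python
import numpy as np

def estimate_discrepancy(feat, memory_bank, k=5):
    """Proxy for distribution shift: mean Euclidean distance from a
    test feature to its k nearest entries in a memory bank of
    previously seen test features (assumed measure, not the
    paper's exact scheme)."""
    dists = np.linalg.norm(memory_bank - feat, axis=1)
    return float(np.sort(dists)[:k].mean())

def dynamic_lr(base_lr, discrepancy, scale=1.0):
    """Dynamic learning rate: a larger estimated shift yields a
    proportionally larger adaptation step."""
    return base_lr * (1.0 + scale * discrepancy)
```

A test sample close to the memory bank keeps roughly the base learning rate, while a strongly shifted sample receives a larger weight update, matching the behaviour the abstract describes.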
Collapse
|
20
|
Nguyen TX, Ran AR, Hu X, Yang D, Jiang M, Dou Q, Cheung CY. Federated Learning in Ocular Imaging: Current Progress and Future Direction. Diagnostics (Basel) 2022; 12:2835. [PMID: 36428895 PMCID: PMC9689273 DOI: 10.3390/diagnostics12112835] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 11/11/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Advances in artificial intelligence, particularly deep learning (DL), have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. To achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a "centralised location". However, such a data transfer process raises practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need to share confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and to reduce the risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.
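The coordination scheme the review describes is most commonly realised with federated averaging (FedAvg), in which only model weights, never patient data, leave each site and the server combines them weighted by local dataset size. The review covers FL broadly and does not prescribe this particular algorithm, so the following is just a generic sketch of one aggregation round:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: combine client model weights
    proportionally to local dataset sizes; raw data never leaves
    the clients."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # per-client mixing coefficients
    return sum(c * w for c, w in zip(coeffs, client_weights))
```

In a full system this aggregation alternates with local training at each site; differential privacy or secure aggregation can be layered on top to further reduce leakage risk.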
Collapse
Affiliation(s)
- Truong X. Nguyen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Xiaoyan Hu
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Meirui Jiang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| |
Collapse
|
21
|
Sui L, Zeng J, Zhao H, Ye L, Martin T, Sanders A, Ruge F, Jiang A, Dou Q, Hargest R, Song X, Jiang W. Death associated protein‑3 (DAP3) and DAP3 binding cell death enhancer‑1 (DELE1) in human colorectal cancer, and their impacts on clinical outcome and chemoresistance. Int J Oncol 2022; 62:7. [DOI: 10.3892/ijo.2022.5455] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 10/19/2022] [Indexed: 11/17/2022] Open
Affiliation(s)
- Laijian Sui
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Jianyuan Zeng
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Huishan Zhao
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Lin Ye
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Tracey Martin
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Andrew Sanders
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Fiona Ruge
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Aihua Jiang
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Q. Dou
- Barbara Ann Karmanos Cancer Institute, Departments of Oncology, Pharmacology and Pathology, School of Medicine, Wayne State University, Detroit, MI 48201, USA
| | - Rachel Hargest
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| | - Xicheng Song
- Yantai Yuhuangding Hospital, Yantai, Shandong 264000, P.R. China
| | - Wen Jiang
- Cardiff China Medical Research Collaborative, Division of Cancer and Genetics, Cardiff University School of Medicine, Cardiff, CF14 4XN, UK
| |
Collapse
|
22
|
Long Y, Li C, Dou Q. Robotic surgery remote mentoring via AR with 3D scene streaming and hand interaction. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2022.2145498] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Yonghao Long
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Chengkun Li
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| |
Collapse
|
23
|
Jin Y, Long Y, Gao X, Stoyanov D, Dou Q, Heng PA. Trans-SVNet: hybrid embedding aggregation Transformer for surgical workflow analysis. Int J Comput Assist Radiol Surg 2022; 17:2193-2202. [PMID: 36129573 DOI: 10.1007/s11548-022-02743-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Accepted: 08/31/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE Real-time surgical workflow analysis is a key component of computer-assisted intervention systems for improving cognitive assistance. Most existing methods rely solely on conventional temporal models and encode features in a successive spatial-temporal arrangement, so the supportive benefits of intermediate features are partially lost from both the visual and temporal aspects. In this paper, we rethink feature encoding to attend to and preserve the critical information needed for accurate workflow recognition and anticipation. METHODS We introduce the Transformer into surgical workflow analysis to reconsider the complementary effects of spatial and temporal representations. We propose a hybrid embedding aggregation Transformer, named Trans-SVNet, which effectively interacts with the designed spatial and temporal embeddings by employing the spatial embedding to query the temporal embedding sequence. The model is jointly optimized with loss objectives from both analysis tasks to leverage their high correlation. RESULTS We extensively evaluate our method on three large surgical video datasets. Our method consistently outperforms state-of-the-art methods on the workflow recognition task across all three datasets. With joint learning, recognition results gain a large improvement from the anticipation task, and our approach also achieves promising anticipation performance. Our model achieves a real-time inference speed of 0.0134 seconds per frame. CONCLUSION Experimental results demonstrate the efficacy of our hybrid embedding integration, which rediscovers crucial cues from complementary spatial-temporal embeddings. The improved performance under multi-task learning indicates that the anticipation task brings additional knowledge to the recognition task. The effectiveness and efficiency of our method also indicate its potential for use in the operating room.
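The hybrid aggregation described in this abstract, where the spatial embedding of the current frame queries a sequence of temporal embeddings, can be illustrated with a single scaled dot-product attention step. This is a minimal NumPy sketch of the idea only; names and dimensions are illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hybrid_aggregation(spatial_emb, temporal_seq):
    """Use the current frame's spatial embedding as the attention query
    over a sequence of temporal embeddings acting as keys and values."""
    # spatial_emb: (d,)   temporal_seq: (T, d)
    d = spatial_emb.shape[-1]
    scores = temporal_seq @ spatial_emb / np.sqrt(d)  # (T,) similarity per time step
    weights = softmax(scores)                         # attention weights over time
    return weights @ temporal_seq                     # (d,) aggregated embedding

rng = np.random.default_rng(0)
fused = hybrid_aggregation(rng.standard_normal(8), rng.standard_normal((16, 8)))
```

A full model would stack such layers and add learned projections; the sketch only shows the query/key-value pairing the abstract describes.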
Affiliation(s)
- Yueming Jin
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Department of Computer Science, University College London, London, UK
- Yonghao Long
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China
- Xiaojie Gao
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Department of Computer Science, University College London, London, UK
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China; Institute of Medical Intelligence and XR, The Chinese University of Hong Kong, Shatin, Hong Kong, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China; Institute of Medical Intelligence and XR, The Chinese University of Hong Kong, Shatin, Hong Kong, China
24
Xiao BH, Zhu MSY, Du EZ, Liu WH, Ma JB, Huang H, Gong JS, Diacinti D, Zhang K, Gao B, Liu H, Jiang RF, Ji ZY, Xiong XB, He LC, Wu L, Xu CJ, Du MM, Wang XR, Chen LM, Wu KY, Yang L, Xu MS, Diacinti D, Dou Q, Kwok TYC, Wáng YXJ. A software program for automated compressive vertebral fracture detection on elderly women's lateral chest radiograph: Ofeye 1.0. Quant Imaging Med Surg 2022; 12:4259-4271. [PMID: 35919046] [PMCID: PMC9338385] [DOI: 10.21037/qims-22-433]
Abstract
Background Because osteoporotic vertebral fractures (OVFs) on chest radiographs are commonly missed in radiological reports, we aimed to develop a software program that offers automated detection of compressive vertebral fracture (CVF) on lateral chest radiographs, and that emphasizes CVF detection specificity with a low false-positivity rate. Methods For model training, we retrieved 3,991 spine radiograph cases and 1,979 chest radiograph cases from 16 sources, of which 1,404 cases in total had OVF. For model testing, we retrieved 542 chest radiograph cases and 162 spine radiograph cases from four independent clinics, of which 215 cases had OVF. All cases were female subjects, and except for 31 spine trauma cases in the training data, all remaining cases were post-menopausal women. Image data included DICOM (Digital Imaging and Communications in Medicine) format, hard-film scanned PNG (Portable Network Graphics) format, DICOM-exported PNG format, and PACS (Picture Archiving and Communication System) downloaded resolution-reduced DICOM format. OVF classification included: minimal and mild grades with <20% or ≥20–25% vertebral height loss respectively, moderate grade with ≥25–40% vertebral height loss, severe grade with ≥40% to 2/3 vertebral height loss, and collapsed grade with ≥2/3 vertebral height loss. The CVF detection base model was mainly composed of convolution layers with kernels of different sizes, pooling layers, up-sampling layers, feature merging layers, and residual modules. When the model loss function could not be decreased further with additional training, the model was considered optimal and termed 'base-model 1.0'. A user-friendly interface was also developed, and the synthesized software was termed 'Ofeye 1.0'. Results Counting by case, with minimal and mild OVFs included, base-model 1.0 demonstrated a specificity of 97.1%, a sensitivity of 86%, and an accuracy of 93.9% on the 704 testing cases.
In total, 33 OVFs in 30 cases received a false-negative reading, which constituted a false-negative rate of 14.0% (30/215) when counting all OVF cases. Eighteen of the missed OVFs, in 15 cases, were of moderate grade or above, which constituted a false-negative rate of 7.0% (15/215, i.e., sensitivity 93%) when counting only cases with missed OVFs of moderate grade or above. False-positive readings were recorded in 13 vertebrae in 13 cases (one vertebra in each case), which constituted a false-positivity rate of 2.7% (13/489). These falsely labeled vertebrae could be readily differentiated from a true OVF by a human reader. The software Ofeye 1.0 allows 'batch processing'; for example, 100 radiographs can be processed in a single operation. The software can be integrated into a hospital PACS or installed on a standalone personal computer. Conclusions A user-friendly software program was developed for CVF detection on elderly women's lateral chest radiographs. It has an overall low false-positivity rate and, for moderate and severe CVFs, an acceptably low false-negativity rate. The integration of this software into radiological practice is expected to improve osteoporosis management for elderly women.
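The vertebral height-loss thresholds the abstract gives for OVF grading map directly onto a small classifier. A sketch, with the function name chosen for illustration:

```python
def ovf_grade(height_loss):
    """Map fractional vertebral height loss to the OVF grade named in the
    abstract: minimal (<20%), mild (20-25%), moderate (25-40%),
    severe (40% to 2/3), collapsed (>= 2/3)."""
    if height_loss < 0.20:
        return "minimal"
    if height_loss < 0.25:
        return "mild"
    if height_loss < 0.40:
        return "moderate"
    if height_loss < 2 / 3:
        return "severe"
    return "collapsed"
```

For example, a vertebra with 30% height loss falls in the moderate grade under these thresholds.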
Affiliation(s)
- Ben-Heng Xiao
- Department of Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Er-Zhu Du
- Department of Radiology, Dongguan Traditional Chinese Medicine Hospital, Dongguan, China
- Wei-Hong Liu
- Department of Radiology, General Hospital of China Resources & Wuhan Iron and Steel Corporation, Wuhan, China
- Jian-Bing Ma
- Department of Radiology, the First Hospital of Jiaxing, The Affiliated Hospital of Jiaxing University, Jiaxing, China
- Hua Huang
- Department of Radiology, The Third People's Hospital of Shenzhen, The Second Affiliated Hospital of Southern University of Science and Technology, National Clinical Research Center for Infectious Diseases, Shenzhen, China
- Jing-Shan Gong
- Department of Radiology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, China
- Davide Diacinti
- Department of Radiological Sciences, Oncology and Pathology, Sapienza University of Rome, Rome, Italy; Department of Diagnostic and Molecular Imaging, Radiology and Radiotherapy, University Foundation Hospital Tor Vergata, Rome, Italy
- Kun Zhang
- Department of Radiology, First Affiliated Hospital of Hunan University of Chinese Medicine, Changsha, China
- Bo Gao
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Heng Liu
- Department of Radiology, the Affiliated Hospital of Zunyi Medical University, Zunyi, China
- Ri-Feng Jiang
- Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, China
- Zhong-You Ji
- PET-CT Center, Fujian Medical University Union Hospital, Fuzhou, China
- Xiao-Bao Xiong
- Department of Radiology, Zhejiang Provincial Tongde Hospital, Hangzhou, China
- Lai-Chang He
- Department of Radiology, the First Affiliated Hospital of Nanchang University, Nanchang, China
- Lei Wu
- Department of Radiology, the First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Chuan-Jun Xu
- Department of Radiology, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, China
- Mei-Mei Du
- Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital, Wenzhou Medical University, Wenzhou, China
- Xiao-Rong Wang
- Department of Radiology, Ningbo First Hospital, Ningbo, China
- Li-Mei Chen
- Department of Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Kong-Yang Wu
- Department of Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China; College of Electrical and Information Engineering, Jinan University, Guangzhou, China
- Liu Yang
- Department of Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Mao-Sheng Xu
- Department of Radiology, the First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Daniele Diacinti
- Department of Radiological Sciences, Oncology and Pathology, Sapienza University of Rome, Rome, Italy
- Qi Dou
- Department of Computer Science and Engineering, Faculty of Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Timothy Y C Kwok
- JC Centre for Osteoporosis Care and Control, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Yì Xiáng J Wáng
- Department of Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
25
Wei R, Li B, Mo H, Lu B, Long Y, Yang B, Dou Q, Liu Y, Sun D. Stereo Dense Scene Reconstruction and Accurate Localization for Learning-Based Navigation of Laparoscope in Minimally Invasive Surgery. IEEE Trans Biomed Eng 2022; 70:488-500. [PMID: 35905063] [DOI: 10.1109/tbme.2022.3195027]
Abstract
OBJECTIVE The computation of anatomical information and laparoscope position is a fundamental building block of surgical navigation in Minimally Invasive Surgery (MIS). Recovering a dense 3D structure of the surgical scene using visual cues remains a challenge, and online laparoscope tracking primarily relies on external sensors, which increases system complexity. METHODS Here, we propose a learning-driven framework that achieves image-guided laparoscope localization together with 3D reconstruction of complex anatomical structures. To reconstruct the 3D structure of the whole surgical environment, we first fine-tune a learning-based stereoscopic depth perception method, which is robust to texture-less and variable soft tissues, for depth estimation. We then develop a dense visual reconstruction algorithm that represents the scene by surfels, estimates the laparoscope poses, and fuses the depth maps into a unified reference coordinate system for tissue reconstruction. To estimate the poses of new laparoscope views, we devise a coarse-to-fine localization method that incorporates our reconstructed 3D model. RESULTS We evaluate the reconstruction method and the localization module on three datasets, namely, the stereo correspondence and reconstruction of endoscopic data (SCARED), the ex-vivo phantom and tissue data collected with a Universal Robot (UR) and a Karl Storz laparoscope, and the in-vivo DaVinci robotic surgery dataset. The reconstructed 3D structures have rich surface-texture detail with an accuracy error under 1.71 mm, and the localization module can accurately track the laparoscope with only images as input. CONCLUSIONS Experimental results demonstrate the superior performance of the proposed method in 3D anatomy reconstruction and laparoscope localization. SIGNIFICANCE The proposed framework can potentially be extended to current surgical navigation systems.
26
Lian J, Long Y, Huang F, Ng K, Lee FMY, Lam DL, Fang BL, Dou Q, Vardhanabhuti V. Imaging-Based Deep Graph Neural Networks for Survival Analysis in Early Stage Lung Cancer Using CT: A Multicenter Study. Front Oncol 2022; 12:868186. [PMID: 35936706] [PMCID: PMC9351205] [DOI: 10.3389/fonc.2022.868186]
Abstract
Background Lung cancer is the leading cause of cancer-related mortality, and accurate prediction of patient survival can aid treatment planning and potentially improve outcomes. In this study, we propose an automated system capable of lung segmentation and survival prediction using a graph convolutional neural network (GCN) with CT data in non-small cell lung cancer (NSCLC) patients. Methods In this retrospective study, we segmented 10 parts of the lung on CT images and built individual lung graphs as inputs to train a GCN model to predict 5-year overall survival. A Cox proportional-hazards model, a set of machine learning (ML) models, a convolutional neural network based on tumor (Tumor-CNN), and the current TNM staging system were used for comparison. Findings A total of 1,705 patients (main cohort) and 125 patients (external validation cohort) with lung cancer (stages I and II) were included. The GCN model was significantly predictive of 5-year overall survival, with an AUC of 0.732 (p < 0.0001). The model stratified patients into low- and high-risk groups, which were associated with overall survival (HR = 5.41; 95% CI: 2.32–10.14; p < 0.0001). On the external validation dataset, our GCN model achieved an AUC of 0.678 (95% CI: 0.564–0.792; p < 0.0001). Interpretation The proposed GCN model outperformed all ML, Tumor-CNN, and TNM staging models. This study demonstrates the value of utilizing medical imaging graph-structured data, resulting in a robust and effective model for the prediction of survival in early-stage lung cancer.
Affiliation(s)
- Jie Lian
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Yonghao Long
- Department of Computer Science, The Chinese University of Hong Kong, Hong Kong SAR, China
- Fan Huang
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Kei Shing Ng
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Faith M. Y. Lee
- Faculty of Medicine, University College London, London, United Kingdom
- David C. L. Lam
- Department of Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Benjamin X. L. Fang
- Department of Radiology, Queen Mary Hospital, Hong Kong SAR, China
- Qi Dou
- Department of Computer Science, The Chinese University of Hong Kong, Hong Kong SAR, China
- *Correspondence: Varut Vardhanabhuti; Qi Dou
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
27
Li B, Lu B, Wang Z, Zhong F, Dou Q, Liu YH. Learning Laparoscope Actions via Video Features for Proactive Robotic Field-of-View Control. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3173442]
Affiliation(s)
- Bin Li
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Bo Lu
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Ziyi Wang
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Fangxun Zhong
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science and Engineering, and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Yun-Hui Liu
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
28
Xue C, Yu L, Chen P, Dou Q, Heng PA. Robust Medical Image Classification From Noisy Labeled Data With Global and Local Representation Guided Co-Training. IEEE Trans Med Imaging 2022; 41:1371-1382. [PMID: 34982680] [DOI: 10.1109/tmi.2021.3140140]
Abstract
Deep neural networks have achieved remarkable success in a wide variety of natural image and medical image computing tasks. However, these achievements indispensably rely on accurately annotated training data. If encountering some noisy-labeled images, the network training procedure would suffer from difficulties, leading to a sub-optimal classifier. This problem is even more severe in the medical image analysis field, as the annotation quality of medical images heavily relies on the expertise and experience of annotators. In this paper, we propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification from noisy-labeled data to combat the lack of high quality annotated medical data. Specifically, we employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples. Then, the clean samples are trained by a collaborative training strategy to eliminate the disturbance from imperfect labeled samples. Notably, we further design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples in a self-supervised manner. We evaluated our proposed robust learning strategy on four public medical image classification datasets with three types of label noise, i.e., random noise, computer-generated label noise, and inter-observer variability noise. Our method outperforms other learning from noisy label methods and we also conducted extensive experiments to analyze each component of our method.
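The clean/noisy sample selection step can be illustrated with the widely used small-loss criterion, where the samples with the smallest training loss are presumed clean. This is a generic sketch of such a filter; the paper's self-ensemble filter may differ in detail.

```python
import numpy as np

def select_clean(losses, noise_rate):
    """Keep the (1 - noise_rate) fraction of samples with the smallest
    loss as the presumed-clean subset (small-loss criterion)."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    order = np.argsort(losses)       # sample indices, ascending by loss
    return np.sort(order[:n_keep])   # indices of presumed-clean samples

clean_idx = select_clean(np.array([0.1, 2.0, 0.3, 1.5, 0.2]), noise_rate=0.4)
```

In a co-training setup, each network would typically select clean samples for its peer rather than for itself, to avoid confirmation bias.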
29
Dou Q, So TY, Jiang M, Liu Q, Vardhanabhuti V, Kaissis G, Li Z, Si W, Lee HHC, Yu K, Feng Z, Dong L, Burian E, Jungmann F, Braren R, Makowski M, Kainz B, Rueckert D, Glocker B, Yu SCH, Heng PA. Author Correction: Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study. NPJ Digit Med 2022; 5:56. [PMID: 35462562] [PMCID: PMC9035308] [DOI: 10.1038/s41746-022-00600-1]
30
Li X, Cao R, Feng Y, Chen K, Yang B, Fu CW, Li Y, Dou Q, Liu YH, Heng PA. A Sim-to-Real Object Recognition and Localization Framework for Industrial Robotic Bin Picking. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3149026]
Affiliation(s)
- Xianzhi Li
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Rui Cao
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Yidan Feng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Kai Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Biqi Yang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Chi-Wing Fu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Yichuan Li
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Yun-Hui Liu
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
31
Lau CPY, Ma W, Law KY, Lacambra MD, Wong KC, Lee CW, Lee OK, Dou Q, Kumta SM. Development of deep learning algorithms to discriminate giant cell tumors of bone from adjacent normal tissues by confocal Raman spectroscopy. Analyst 2022; 147:1425-1439. [PMID: 35253812] [DOI: 10.1039/d1an01554k]
Abstract
Raman spectroscopy is a non-destructive analysis technique that provides detailed information about the chemical structure of tumors. Raman spectra of 52 giant cell tumors of bone (GCTB) and 21 adjacent normal tissues from formalin-fixed paraffin-embedded (FFPE) and frozen specimens were obtained using a confocal Raman spectrometer and analyzed with machine learning and deep learning algorithms. We discovered characteristic Raman shifts in the GCTB specimens, which were assigned to phenylalanine and tyrosine. Based on the spectroscopic data, classification algorithms including support vector machines, k-nearest neighbors, and long short-term memory (LSTM) networks were successfully applied to discriminate GCTB from adjacent normal tissues in both the FFPE and frozen specimens, with accuracy ranging from 82.8% to 94.5%. Importantly, our LSTM algorithm showed the best performance in discriminating the frozen specimens, with a sensitivity and specificity of 93.9% and 95.1% respectively, and an AUC of 0.97. The results of our study suggest that confocal Raman spectroscopy combined with an LSTM network can non-destructively evaluate a tumor margin by its inherent biochemical specificity, which may allow intraoperative assessment of the adequacy of tumor clearance.
Affiliation(s)
- Carol P Y Lau
- Institute for Tissue Engineering and Regenerative Medicine, The Chinese University of Hong Kong, Hong Kong; School of Science and Technology, Hong Kong Metropolitan University, Hong Kong
- Wenao Ma
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Kwan Yau Law
- The Hong Kong Institute of Biotechnology Limited, Hong Kong
- Maribel D Lacambra
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Hong Kong
- Kwok Chuen Wong
- Department of Orthopaedics and Traumatology, The Chinese University of Hong Kong, Hong Kong
- Chien Wei Lee
- Institute for Tissue Engineering and Regenerative Medicine, The Chinese University of Hong Kong, Hong Kong
- Oscar K Lee
- Department of Orthopaedics and Traumatology, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Shekhar M Kumta
- Department of Orthopaedics and Traumatology, The Chinese University of Hong Kong, Hong Kong
32
Chen C, Dou Q, Jin Y, Liu Q, Heng PA. Learning With Privileged Multimodal Knowledge for Unimodal Segmentation. IEEE Trans Med Imaging 2022; 41:621-632. [PMID: 34633927] [DOI: 10.1109/tmi.2021.3119385]
Abstract
Multimodal learning usually requires a complete set of modalities during inference to maintain performance. Although training data can be well prepared with multiple high-quality modalities, in many clinical cases only one modality can be acquired, and important clinical evaluations have to be made based on the limited information of a single modality. In this work, we propose a privileged knowledge learning framework with a 'Teacher-Student' architecture, in which the complete multimodal knowledge that is only available in the training data (called privileged information) is transferred from a multimodal teacher network to a unimodal student network via both a pixel-level and an image-level distillation scheme. Specifically, for the pixel-level distillation, we introduce a regularized knowledge distillation loss which encourages the student to mimic the teacher's softened outputs in a pixel-wise manner and incorporates a regularization factor to reduce the effect of incorrect predictions from the teacher. For the image-level distillation, we propose a contrastive knowledge distillation loss which encodes image-level structured information to enrich the knowledge encoding, in combination with the pixel-level distillation. We extensively evaluate our method on two different multi-class segmentation tasks, i.e., cardiac substructure segmentation and brain tumor segmentation. Experimental results on both tasks demonstrate that our privileged knowledge learning is effective in improving unimodal segmentation and outperforms previous methods.
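The pixel-level distillation idea, matching the student to the teacher's temperature-softened outputs, can be sketched with a plain KL-divergence loss. This generic NumPy sketch omits the regularization factor for incorrect teacher predictions that the abstract describes.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    class distributions, averaged over pixels (rows of the logit array)."""
    p_t = softmax(teacher_logits / T)  # softened teacher targets
    p_s = softmax(student_logits / T)  # softened student outputs
    return float(np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)))
```

The loss is zero when the two logit maps agree and grows as the student's softened distribution drifts from the teacher's.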
33
Ou C, Li C, Qian Y, Duan CZ, Si W, Zhang X, Li X, Morgan M, Dou Q, Heng PA. Morphology-aware multi-source fusion-based intracranial aneurysms rupture prediction. Eur Radiol 2022; 32:5633-5641. [PMID: 35182202] [DOI: 10.1007/s00330-022-08608-7]
Abstract
OBJECTIVES We propose a new approach to training a deep learning model for aneurysm rupture prediction that uses only a limited amount of labeled data. METHODS Using segmented aneurysm masks as input, a backbone model was pretrained with a self-supervised method to learn deep embeddings of aneurysm morphology from 947 unlabeled cases of angiographic images. Subsequently, the backbone model was fine-tuned using 120 labeled cases with known rupture status. Clinical information was integrated with the deep embeddings to further improve prediction performance. The proposed model was compared with radiomics and conventional morphology models in prediction performance. An assistive diagnosis system was also developed based on the model and was tested with five neurosurgeons. RESULTS Our method achieved an area under the receiver operating characteristic curve (AUC) of 0.823, outperforming a deep learning model trained from scratch (0.787). By integrating clinical information, the proposed model's performance was further improved to AUC = 0.853, significantly better than the model based on radiomics (AUC = 0.805, p = 0.007) or the model based on conventional morphology parameters (AUC = 0.766, p = 0.001). Our model also achieved the highest sensitivity, PPV, NPV, and accuracy among the compared models. Neurosurgeons' prediction performance improved from AUC = 0.877 to 0.945 (p = 0.037) with the assistive diagnosis system. CONCLUSION Our proposed method can produce a competitive deep learning model for rupture prediction using only a limited amount of data, and the assistive diagnosis system could be useful for neurosurgeons in predicting rupture. KEY POINTS • A self-supervised learning method was proposed to mitigate the data-hungry issue of deep learning, enabling training of a deep neural network with a limited amount of data. • Using the proposed method, deep embeddings were extracted to represent intracranial aneurysm morphology.
• The prediction model based on deep embeddings was significantly better than the conventional morphology model and the radiomics model. • An assistive diagnosis system was developed using deep embeddings for case-based reasoning, which was shown to significantly improve neurosurgeons' performance in predicting rupture.
Affiliation(s)
- Chubin Ou
- Neurosurgery Center, Department of Cerebrovascular Surgery, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China; Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia
- Caizi Li
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yi Qian
- Neurosurgery Center, Department of Cerebrovascular Surgery, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Chuan-Zhi Duan
- Neurosurgery Center, Department of Cerebrovascular Surgery, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Weixin Si
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Zhang
- Neurosurgery Center, Department of Cerebrovascular Surgery, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Xifeng Li
- Neurosurgery Center, Department of Cerebrovascular Surgery, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Michael Morgan
- Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
| |
34
Huaulmé A, Sarikaya D, Le Mut K, Despinoy F, Long Y, Dou Q, Chng CB, Lin W, Kondo S, Bravo-Sánchez L, Arbeláez P, Reiter W, Mitsuishi M, Harada K, Jannin P. MIcro-surgical anastomose workflow recognition challenge report. Comput Methods Programs Biomed 2021; 212:106452. [PMID: 34688174 DOI: 10.1016/j.cmpb.2021.106452] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 09/28/2021] [Indexed: 05/22/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic surgical workflow recognition is an essential step in developing context-aware computer-assisted surgical systems. Video recordings of surgeries are becoming widely accessible, as the operational field view is captured during laparoscopic surgeries. Head- and ceiling-mounted cameras are also increasingly being used to record videos in open surgeries. This makes video a common choice for surgical workflow recognition. Additional modalities, such as kinematic data captured during robot-assisted surgeries, could also improve workflow recognition. This paper presents the design and results of the MIcro-Surgical Anastomose Workflow recognition on training sessions (MISAW) challenge, whose objective was to develop workflow recognition models based on kinematic data and/or videos. METHODS The MISAW challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels, composed of videos, kinematics, and workflow annotations. The latter described the sequences at three granularity levels: phase, step, and activity. Four tasks were proposed to the participants: three addressed the recognition of surgical workflow at a single granularity level, while the last addressed the recognition of all granularity levels in the same model. We used the average application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric, which takes unbalanced classes into account and is more clinically relevant than a frame-by-frame score. RESULTS Six teams participated in at least one task. All of them employed deep learning models, such as convolutional neural networks (CNN), recurrent neural networks (RNN), or a combination of both. The best models achieved accuracy above 95%, 80%, 60%, and 75% for recognition of phases, steps, activities, and multi-granularity, respectively.
The RNN-based models outperformed the CNN-based ones, and the models dedicated to a single granularity outperformed the multi-granularity model, except for activity recognition. CONCLUSION For high levels of granularity, the best models achieved recognition rates that may be sufficient for applications such as prediction of remaining surgical time. For activities, however, the recognition rate was still too low for clinical applications. The MISAW data set is publicly available at http://www.synapse.org/MISAW to encourage further research in surgical workflow recognition.
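The AD-Accuracy metric builds on balanced accuracy, i.e., recall averaged over classes so that rare phases weigh as much as frequent ones. A minimal sketch of the balanced-accuracy component; the challenge's application-dependent weighting is omitted, and the frame labels below are a toy example:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall: each class contributes equally
    regardless of how many frames it occupies."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = []
    for c in np.unique(y_true):
        mask = y_true == c
        recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# Frame-wise phase labels for a hypothetical 8-frame sequence.
truth = [0, 0, 0, 0, 0, 0, 1, 1]
pred  = [0, 0, 0, 0, 0, 0, 1, 0]
print(balanced_accuracy(truth, pred))  # 0.75: recall 1.0 for class 0, 0.5 for class 1
```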
Affiliation(s)
- Arnaud Huaulmé
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
- Duygu Sarikaya
- Gazi University, Faculty of Engineering; Department of Computer Engineering, Ankara, Turkey
- Kévin Le Mut
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
- Yonghao Long
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, China; T Stone Robotics Institute, The Chinese University of Hong Kong, China
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, China; T Stone Robotics Institute, The Chinese University of Hong Kong, China
- Chin-Boon Chng
- National University of Singapore (NUS), Singapore, Singapore; Southern University of Science and Technology (SUSTech), Shenzhen, China
- Wenjun Lin
- National University of Singapore (NUS), Singapore, Singapore; Southern University of Science and Technology (SUSTech), Shenzhen, China
- Laura Bravo-Sánchez
- Center for Research and Formation in Artificial Intelligence, Department of Biomedical Engineering, Universidad de los Andes, Bogotá, Colombia
- Pablo Arbeláez
- Center for Research and Formation in Artificial Intelligence, Department of Biomedical Engineering, Universidad de los Andes, Bogotá, Colombia
- Mamoru Mitsuishi
- Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Kanako Harada
- Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
35
Huang YJ, Dou Q, Wang ZX, Liu LZ, Jin Y, Li CF, Wang L, Chen H, Xu RH. 3-D RoI-Aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation. IEEE Trans Cybern 2021; 51:5397-5408. [PMID: 32248143 DOI: 10.1109/tcyb.2020.2980145] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Segmentation of colorectal cancerous regions from 3-D magnetic resonance (MR) images is a crucial procedure for radiotherapy. Automatic delineation from 3-D whole volumes is in urgent demand yet very challenging. The drawbacks of existing deep-learning-based methods for this task are twofold: 1) the extensive graphics processing unit (GPU) memory footprint of 3-D tensors limits the trainable volume size, shrinks the effective receptive field, and therefore degrades speed and segmentation performance; and 2) in-region segmentation methods supported by region-of-interest (RoI) detection are either blind to global contexts, compromise detail richness, or are too expensive for 3-D tasks. To tackle these drawbacks, we propose a novel encoder-decoder-based framework for 3-D whole-volume segmentation, referred to as 3-D RoI-aware U-Net (3-D RU-Net), which fully utilizes global contexts covering large effective receptive fields. Specifically, the proposed model consists of a global image encoder for RoI localization based on global understanding, and a local region decoder that operates on pyramid-shaped in-region global features. This design is GPU-memory efficient and thereby enables training and prediction with large 3-D whole volumes. To facilitate the global-to-local learning procedure and enhance contour detail richness, we designed a Dice-based multitask hybrid loss function. The efficiency of the proposed framework enables an extensive model ensemble for further performance gains at acceptable extra computational cost. On a dataset of 64 T2-weighted MR images, the experimental results of four-fold cross-validation show that our method achieved a 75.5% Dice similarity coefficient (DSC) in 0.61 s per volume on a GPU, significantly outperforming competing methods in terms of accuracy and efficiency. The code is publicly available.
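The Dice-based multitask hybrid loss mentioned above builds on the soft Dice coefficient. A minimal single-class sketch of a soft Dice loss (the paper's full multitask weighting is not reproduced here):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice coefficient between a predicted probability map
    and a binary ground-truth mask (both flattened); eps avoids
    division by zero on empty masks."""
    pred, target = np.ravel(pred), np.ravel(target)
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# Perfect overlap gives a loss of (approximately) zero.
mask = np.array([[0, 1], [1, 1]], dtype=float)
print(round(soft_dice_loss(mask, mask), 6))  # 0.0
```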
36
Dou Q, Chen Q, Rong Y, Feng X. Patch-Based DCNN Method for CBCT Image Enhancement. Int J Radiat Oncol Biol Phys 2021. [DOI: 10.1016/j.ijrobp.2021.07.471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
37
Zhao Z, Jin Y, Chen J, Lu B, Ng CF, Liu YH, Dou Q, Heng PA. Anchor-guided online meta adaptation for fast one-Shot instrument segmentation from robotic surgical videos. Med Image Anal 2021; 74:102240. [PMID: 34614476 DOI: 10.1016/j.media.2021.102240] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Revised: 09/02/2021] [Accepted: 09/13/2021] [Indexed: 11/29/2022]
Abstract
The scarcity of annotated surgical data in robot-assisted surgery (RAS) motivates prior works to borrow related domain knowledge and achieve promising segmentation results in surgical images by adaptation. For dense instrument tracking in a robotic surgical video, collecting one initial scene to specify target instruments (or parts of tools) is desirable and feasible during preoperative preparation. In this paper, we study challenging one-shot instrument segmentation for robotic surgical videos, in which only the first frame mask of each video is provided at test time, so that the pre-trained model (learned from an easily accessible source) can adapt to the target instruments. Straightforward methods transfer the domain knowledge by fine-tuning the model on each given mask; such one-shot optimization takes hundreds of iterations, making the test runtime infeasible. We present anchor-guided online meta adaptation (AOMA) for this problem. We achieve fast one-shot test-time optimization by meta-learning a good model initialization and learning rates from source videos, avoiding laborious and handcrafted fine-tuning. These two trainable components are optimized in a video-specific task space with a matching-aware loss. Furthermore, we design an anchor-guided online adaptation to tackle the performance drop throughout a robotic surgical sequence: the model is continuously adapted on motion-insensitive pseudo-masks supported by anchor matching. AOMA achieves state-of-the-art results on two practical scenarios, (1) general videos to surgical videos and (2) public surgical videos to in-house surgical videos, while substantially reducing the test runtime.
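AOMA meta-learns both the initialization and the learning rates so that one-shot adaptation converges in a few inner-loop steps. A minimal sketch of that adapted update rule on a toy least-squares objective; the objective, names, and values are illustrative stand-ins, not the authors' training code:

```python
import numpy as np

def adapt(theta, lrs, grad_fn, steps=3):
    """Inner-loop adaptation: a few gradient steps from a
    meta-learned initialization, with meta-learned per-parameter
    learning rates."""
    for _ in range(steps):
        theta = theta - lrs * grad_fn(theta)
    return theta

# Toy objective: pull theta toward a target vector (a hypothetical
# stand-in for fine-tuning on the first-frame mask).
target = np.array([1.0, -2.0])
grad_fn = lambda th: 2.0 * (th - target)  # gradient of ||th - target||^2
theta0 = np.zeros(2)                      # "meta-learned" initialization
lrs = np.array([0.25, 0.25])              # "meta-learned" learning rates
print(adapt(theta0, lrs, grad_fn))        # converges toward [1., -2.]
```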
Affiliation(s)
- Zixu Zhao
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR, China
- Yueming Jin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR, China
- Junming Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR, China
- Bo Lu
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, HKSAR, China; T-Stone Robotics Institute, The Chinese University of Hong Kong, HKSAR, China
- Chi-Fai Ng
- Department of Surgery, The Chinese University of Hong Kong, HKSAR, China
- Yun-Hui Liu
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, HKSAR, China; T-Stone Robotics Institute, The Chinese University of Hong Kong, HKSAR, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR, China; T-Stone Robotics Institute, The Chinese University of Hong Kong, HKSAR, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR, China; T-Stone Robotics Institute, The Chinese University of Hong Kong, HKSAR, China
38
Li C, Dong L, Dou Q, Lin F, Zhang K, Feng Z, Si W, Deng X, Deng Z, Heng PA. Self-Ensembling Co-Training Framework for Semi-Supervised COVID-19 CT Segmentation. IEEE J Biomed Health Inform 2021; 25:4140-4151. [PMID: 34375293 PMCID: PMC8904133 DOI: 10.1109/jbhi.2021.3103646] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
The coronavirus disease 2019 (COVID-19) has become a severe worldwide health emergency and is spreading at a rapid rate. Segmentation of COVID lesions from computed tomography (CT) scans is of great importance for monitoring disease progression and further clinical treatment. As labeling COVID-19 CT scans is labor-intensive and time-consuming, it is essential to develop a segmentation method based on limited labeled data to conduct this task. In this paper, we propose a self-ensembling co-training framework, trained with limited labeled data and large-scale unlabeled data, to automatically extract COVID lesions from CT scans. Specifically, to enrich the diversity of unsupervised information, we build a co-training framework consisting of two collaborative models, which teach each other during training by using their respective predicted pseudo-labels of unlabeled data. Moreover, to alleviate the adverse impact of noisy pseudo-labels on each model, we propose a self-ensembling strategy that performs consistency regularization on the up-to-date predictions of unlabeled data, in which the predictions are gradually ensembled via a moving average at the end of every training epoch. We evaluate our framework on a COVID-19 dataset containing 103 CT scans. Experimental results show that our proposed method achieves better performance with only 4 labeled CT scans than state-of-the-art semi-supervised segmentation networks.
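The self-ensembling strategy described here gradually averages predictions on unlabeled data across epochs via an exponential moving average (EMA). A minimal sketch of that per-epoch update; the variable names and the alpha value are ours, not the paper's:

```python
import numpy as np

def ema_update(ensembled, current, alpha=0.6):
    """Blend the running ensemble of predictions with this epoch's
    predictions; a larger alpha keeps more history."""
    return alpha * ensembled + (1.0 - alpha) * current

# Hypothetical per-voxel lesion probabilities over two epochs.
ens = np.zeros(3)
for epoch_pred in (np.ones(3), np.ones(3)):
    ens = ema_update(ens, epoch_pred, alpha=0.6)
print(ens)  # [0.64 0.64 0.64] -- approaches 1.0 as epochs accumulate
```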
39
Shi X, Jin Y, Dou Q, Heng PA. Semi-supervised learning with progressive unlabeled data excavation for label-efficient surgical workflow recognition. Med Image Anal 2021; 73:102158. [PMID: 34325149 DOI: 10.1016/j.media.2021.102158] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 06/04/2021] [Accepted: 06/29/2021] [Indexed: 11/16/2022]
Abstract
Surgical workflow recognition is a fundamental task in computer-assisted surgery and a key component of various applications in operating rooms. Existing deep learning models have achieved promising results for surgical workflow recognition, but rely heavily on a large amount of annotated videos. However, obtaining annotations is time-consuming and requires the domain knowledge of surgeons. In this paper, we propose a novel two-stage Semi-Supervised Learning method for label-efficient Surgical workflow recognition, named SurgSSL. Our proposed SurgSSL progressively leverages the inherent knowledge held in unlabeled data to a larger extent: from implicit unlabeled data excavation via motion knowledge, to explicit unlabeled data excavation via pre-knowledge pseudo-labeling. Specifically, we first propose a novel intra-sequence Visual and Temporal Dynamic Consistency (VTDC) scheme for implicit excavation. It enforces prediction consistency of the same data under perturbations in both spatial and temporal spaces, encouraging the model to capture rich motion knowledge. We further perform explicit excavation by optimizing the model towards our pre-knowledge pseudo-labels, which are naturally generated by the VTDC-regularized model with prior knowledge of the unlabeled data encoded, and demonstrate superior reliability for model supervision compared with labels generated by existing methods. We extensively evaluate our method on two public surgical datasets, Cholec80 and the M2CAI challenge dataset. Our method surpasses state-of-the-art semi-supervised methods by a large margin, e.g., improving accuracy by 10.5% under the severest annotation regime of the M2CAI dataset. Using only 50% of the labeled videos on Cholec80, our approach achieves performance competitive with the full-data training method.
Affiliation(s)
- Xueying Shi
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Yueming Jin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong; T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong; T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
40
Jin Y, Long Y, Chen C, Zhao Z, Dou Q, Heng PA. Temporal Memory Relation Network for Workflow Recognition From Surgical Video. IEEE Trans Med Imaging 2021; 40:1911-1923. [PMID: 33780335 DOI: 10.1109/tmi.2021.3069471] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Automatic surgical workflow recognition is a key component for developing context-aware computer-assisted systems in the operating theatre. Previous works either jointly modeled spatial features with short, fixed-range temporal information, or separately learned visual and long-range temporal cues. In this paper, we propose a novel end-to-end temporal memory relation network (TMRNet) that relates long-range, multi-scale temporal patterns to augment the present features. We establish a long-range memory bank serving as a memory cell that stores rich supportive information. Through our designed temporal variation layer, the supportive cues are further enhanced by multi-scale temporal-only convolutions. To effectively incorporate the two types of cues without disturbing the joint learning of spatio-temporal features, we introduce a non-local bank operator that attentively relates the past to the present. In this regard, our TMRNet enables the current feature to view long-range temporal dependencies as well as tolerate complex temporal extents. We have extensively validated our approach on two benchmark surgical video datasets, the M2CAI challenge dataset and the Cholec80 dataset. Experimental results demonstrate the outstanding performance of our method, which consistently exceeds state-of-the-art methods by a large margin (e.g., 67.0% vs. 78.9% Jaccard on the Cholec80 dataset).
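The non-local bank operator relates past memory-bank features to the present feature in the spirit of dot-product attention. A minimal single-query NumPy sketch; the shapes, feature sizes, and residual-style combination below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def nonlocal_bank(current, bank):
    """Augment the current feature (d,) with an attention-weighted
    summary of the memory bank (n, d)."""
    logits = bank @ current                  # similarity to each stored feature
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    weights /= weights.sum()
    return current + weights @ bank          # residual-style augmentation

rng = np.random.default_rng(0)
bank = rng.normal(size=(5, 4))  # 5 past frames, 4-dim features
cur = rng.normal(size=4)        # current frame feature
out = nonlocal_bank(cur, bank)
print(out.shape)  # (4,)
```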
41
Xie CY, Pang CL, Chan B, Wong EYY, Dou Q, Vardhanabhuti V. Machine Learning and Radiomics Applications in Esophageal Cancers Using Non-Invasive Imaging Methods-A Critical Review of Literature. Cancers (Basel) 2021; 13:2469. [PMID: 34069367 PMCID: PMC8158761 DOI: 10.3390/cancers13102469] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2021] [Revised: 05/12/2021] [Accepted: 05/15/2021] [Indexed: 11/16/2022] Open
Abstract
Esophageal cancer (EC) is of public health significance as one of the leading causes of cancer death worldwide. Accurate staging, treatment planning and prognostication in EC patients are of vital importance. Recent advances in machine learning (ML) techniques demonstrate their potential to provide novel quantitative imaging markers in medical imaging. Radiomics approaches that could quantify medical images into high-dimensional data have been shown to improve the imaging-based classification system in characterizing the heterogeneity of primary tumors and lymph nodes in EC patients. In this review, we aim to provide a comprehensive summary of the evidence of the most recent developments in ML application in imaging pertinent to EC patient care. According to the published results, ML models evaluating treatment response and lymph node metastasis achieve reliable predictions, ranging from acceptable to outstanding in their validation groups. Patients stratified by ML models in different risk groups have a significant or borderline significant difference in survival outcomes. Prospective large multi-center studies are suggested to improve the generalizability of ML techniques with standardized imaging protocols and harmonization between different centers.
Affiliation(s)
- Chen-Yi Xie
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Chun-Lap Pang
- Department of Radiology, The Christies' Hospital, Manchester M20 4BX, UK
- Division of Dentistry, School of Medical Sciences, University of Manchester, Manchester M15 6FH, UK
- Benjamin Chan
- Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Emily Yuen-Yuen Wong
- Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
42
Sun Y, Gao K, Wu Z, Li G, Zong X, Lei Z, Wei Y, Ma J, Yang X, Feng X, Zhao L, Le Phan T, Shin J, Zhong T, Zhang Y, Yu L, Li C, Basnet R, Ahmad MO, Swamy MNS, Ma W, Dou Q, Bui TD, Noguera CB, Landman B, Gotlib IH, Humphreys KL, Shultz S, Li L, Niu S, Lin W, Jewells V, Shen D, Li G, Wang L. Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge. IEEE Trans Med Imaging 2021; 40:1363-1376. [PMID: 33507867 PMCID: PMC8246057 DOI: 10.1109/tmi.2021.3055428] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
To better understand early brain development in health and disorder, it is critical to accurately segment infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Deep learning-based methods have achieved state-of-the-art performance; however, a major limitation is that they may suffer from the multi-site issue: models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols/scanners. To promote methodological development in the community, the iSeg-2019 challenge (http://iseg2019.web.unc.edu) provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners for the participating methods. Training/validation subjects are from UNC (MAP), and testing subjects are from UNC/UMN (BCP), Stanford University, and Emory University. By the time of writing, 30 automatic segmentation methods had participated in iSeg-2019. In this article, the 8 top-ranked methods are reviewed by detailing their pipelines/implementations, presenting experimental results, and evaluating performance across different sites in terms of whole brain, regions of interest, and gyral landmark curves. We further point out their limitations and possible directions for addressing the multi-site issue. We find that multi-site consistency is still an open issue. We hope that the multi-site dataset in iSeg-2019 and this review article will attract more researchers to address this challenging and critical issue in practice.
43
Lin H, Gao Q, Chu X, Dou Q, Deguet A, Kazanzides P, Au KWS. Learning Deep Nets for Gravitational Dynamics With Unknown Disturbance Through Physical Knowledge Distillation: Initial Feasibility Study. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3062351] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
44
Zhu Z, Cao Y, Qin C, Rao Y, Lin D, Dou Q, Ni D, Wang Y. Joint affine and deformable three-dimensional networks for brain MRI registration. Med Phys 2021; 48:1182-1196. [PMID: 33341975 DOI: 10.1002/mp.14674] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 12/11/2020] [Accepted: 12/11/2020] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Volumetric medical image registration has important clinical significance. Traditional registration methods may be time-consuming when processing large volumetric data due to their iterative optimizations. In contrast, existing deep learning-based networks can obtain the registration quickly. However, most of them require independent rigid alignment before deformable registration; these two steps are often performed separately and cannot be trained end-to-end. METHODS We propose an end-to-end joint affine and deformable network for three-dimensional (3D) medical image registration. The proposed network combines two deformation methods: the first obtains the affine alignment, and the second is a deformable subnetwork that achieves nonrigid registration. The parameters of the two subnetworks are shared. Global and local similarity measures are used as loss functions for the two subnetworks, respectively. Moreover, an anatomical similarity loss is devised to weakly supervise the training of the whole registration network. Finally, the trained network can perform deformable registration in one forward pass. RESULTS The efficacy of our network was extensively evaluated on three public brain MRI datasets: Mindboggle101, LPBA40, and IXI. Experimental results demonstrate that our network consistently outperformed several state-of-the-art methods with respect to the metrics of Dice index (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD). CONCLUSIONS The proposed network provides accurate and robust volumetric registration without any pre-alignment requirement, which facilitates end-to-end deformable registration.
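Conceptually, chaining the affine and deformable stages means each voxel coordinate x is mapped to A x + t + u(x), where (A, t) is the affine transform and u the dense displacement field. A minimal 2-D coordinate-level sketch of this composition, purely illustrative of the idea rather than the paper's architecture:

```python
import numpy as np

def compose_affine_deformable(coords, A, t, disp):
    """Map point coordinates through an affine transform (A, t), then
    add the per-point deformable displacement field."""
    return coords @ A.T + t + disp

# Identity affine and zero displacement leave a toy 2-D grid unchanged.
ys, xs = np.mgrid[0:2, 0:2]
coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # (4, 2)
A, t = np.eye(2), np.zeros(2)
disp = np.zeros_like(coords)
print(compose_affine_deformable(coords, A, t, disp))  # prints the unchanged grid
```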
Affiliation(s)
- Zhenyu Zhu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yiqin Cao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Chenchen Qin
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yi Rao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Di Lin
- The College of Intelligence and Computing, Tianjin University, Tianjin, China
- Qi Dou
- Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yi Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
45
Cao Y, Zhu Z, Rao Y, Qin C, Lin D, Dou Q, Ni D, Wang Y. Edge-Aware Pyramidal Deformable Network for Unsupervised Registration of Brain MR Images. Front Neurosci 2021; 14:620235. [PMID: 33551730 PMCID: PMC7859447 DOI: 10.3389/fnins.2020.620235] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 12/28/2020] [Indexed: 01/17/2023] Open
Abstract
Deformable image registration is essential for clinical diagnosis, treatment planning, and surgical navigation. However, most existing registration solutions require separate rigid alignment before deformable registration and may not handle large deformations well. We propose a novel edge-aware pyramidal deformable network (referred to as EPReg) for unsupervised volumetric registration. Specifically, we propose to fully exploit the useful complementary information from multi-level feature pyramids to predict multi-scale displacement fields. Such coarse-to-fine estimation facilitates progressive refinement of the predicted registration field, which enables our network to handle large deformations between volumetric data. In addition, we integrate edge information with the original images as dual inputs, which enhances the texture structure of the image content and impels the proposed network to pay extra attention to edge-aware information for structure alignment. The efficacy of our EPReg was extensively evaluated on three public brain MRI datasets: Mindboggle101, LPBA40, and IXI30. Experiments demonstrate that our EPReg consistently outperformed several cutting-edge methods with respect to the metrics of Dice index (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD). The proposed EPReg is a general solution for the problem of deformable volumetric registration.
Affiliation(s)
- Yiqin Cao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Zhenyu Zhu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yi Rao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Di Lin
- The College of Intelligence and Computing, Tianjin University, Tianjin, China
- Qi Dou
- Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yi Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
46
Liu Q, Yu L, Luo L, Dou Q, Heng PA. Semi-Supervised Medical Image Classification With Relation-Driven Self-Ensembling Model. IEEE Trans Med Imaging 2020; 39:3429-3440. [PMID: 32746096 DOI: 10.1109/tmi.2020.2995518] [Citation(s) in RCA: 61] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Training deep neural networks usually requires a large amount of labeled data to obtain good performance. However, in medical image analysis, obtaining high-quality labels for the data is laborious and expensive, as accurately annotating medical images demands the expert knowledge of clinicians. In this paper, we present a novel relation-driven semi-supervised framework for medical image classification. It is a consistency-based method that exploits unlabeled data by encouraging prediction consistency for a given input under perturbations, and leverages a self-ensembling model to produce high-quality consistency targets for the unlabeled data. Considering that human diagnosis often refers to previous analogous cases to make reliable decisions, we introduce a novel sample relation consistency (SRC) paradigm to effectively exploit unlabeled data by modeling the relationship information among different samples. Unlike existing consistency-based methods, which simply enforce consistency of individual predictions, our framework explicitly enforces the consistency of semantic relations among different samples under perturbations, encouraging the model to explore extra semantic information from unlabeled data. We have conducted extensive experiments to evaluate our method on two public benchmark medical image classification datasets, i.e., skin lesion diagnosis with the ISIC 2018 challenge and thorax disease classification with ChestX-ray14. Our method outperforms many state-of-the-art semi-supervised learning methods in both single-label and multi-label image classification scenarios.
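The sample relation consistency idea can be sketched in a few lines of NumPy: compute the pairwise similarity (Gram) matrix of a batch's features under two perturbed passes, and penalize disagreement between the two relation structures. This is a simplified illustration under assumed shapes, not the authors' implementation.

```python
import numpy as np

def relation_matrix(feats):
    # Row-normalize the batch features; the Gram matrix then holds the
    # cosine similarity between every pair of samples in the batch.
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return norm @ norm.T

def src_loss(student_feats, teacher_feats):
    # Sample relation consistency: penalize differences between the
    # inter-sample similarity structures of the two perturbed passes.
    g_s = relation_matrix(student_feats)
    g_t = relation_matrix(teacher_feats)
    return np.mean((g_s - g_t) ** 2)

a = np.array([[1.0, 0.0], [0.0, 1.0]])
print(src_loss(a, a))  # 0.0 when the relation structures agree exactly
```

In the full framework the teacher features would come from the self-ensembling (exponential-moving-average) model rather than a second forward pass of the same weights.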
47
Wang Z, Liu Q, Dou Q. Contrastive Cross-Site Learning With Redesigned Net for COVID-19 CT Classification. IEEE J Biomed Health Inform 2020; 24:2806-2813. [PMID: 32915751 PMCID: PMC8545175 DOI: 10.1109/jbhi.2020.3023246] [Citation(s) in RCA: 81] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Revised: 08/13/2020] [Accepted: 09/02/2020] [Indexed: 11/09/2022]
Abstract
The pandemic of coronavirus disease 2019 (COVID-19) has led to a global public health crisis spreading across hundreds of countries. With the continuous growth of new infections, developing automated tools for COVID-19 identification from CT images is highly desired to assist clinical diagnosis and reduce the tedious workload of image interpretation. To enlarge the datasets available for developing machine learning methods, it is helpful to aggregate cases from different medical systems to learn robust and generalizable models. This paper proposes a novel joint learning framework that performs accurate COVID-19 identification by effectively learning from heterogeneous datasets with distribution discrepancy. We build a powerful backbone by redesigning the recently proposed COVID-Net in terms of network architecture and learning strategy to improve the prediction accuracy and learning efficiency. On top of our improved backbone, we further explicitly tackle the cross-site domain shift by conducting separate feature normalization in the latent space. Moreover, we propose a contrastive training objective to enhance the domain invariance of semantic embeddings and boost the classification performance on each dataset. We develop and evaluate our method on two public large-scale COVID-19 diagnosis datasets of CT images. Extensive experiments show that our approach consistently improves the performance on both datasets, outperforming the original COVID-Net trained on each dataset by 12.16% and 14.23% in AUC, respectively, and also exceeding existing state-of-the-art multi-site learning methods.
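The "separate feature normalization in the latent space" step can be sketched as per-site standardization: each site's embeddings are normalized with that site's own statistics so the cross-site distribution shift does not leak into the shared classifier. The function below is an assumed simplification, not the paper's code.

```python
import numpy as np

def site_normalize(feats, site_ids):
    # Normalize latent features separately per site, so each site's
    # embeddings become zero-mean / unit-variance under its own statistics.
    out = np.empty_like(feats)
    for s in np.unique(site_ids):
        mask = site_ids == s
        mu = feats[mask].mean(axis=0)
        sd = feats[mask].std(axis=0) + 1e-8
        out[mask] = (feats[mask] - mu) / sd
    return out

f = np.array([[1.0], [3.0], [10.0], [30.0]])
sites = np.array([0, 0, 1, 1])
print(site_normalize(f, sites).ravel())  # [-1.  1. -1.  1.]
```

Note how the two sites end up on a common scale even though their raw feature magnitudes differ by an order of magnitude; the contrastive objective then operates on these aligned embeddings.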
Collapse
Affiliation(s)
- Zhao Wang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Quande Liu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
48
Liu Q, Dou Q, Yu L, Heng PA. MS-Net: Multi-Site Network for Improving Prostate Segmentation With Heterogeneous MRI Data. IEEE Trans Med Imaging 2020; 39:2713-2724. [PMID: 32078543 DOI: 10.1109/tmi.2020.2974574] [Citation(s) in RCA: 74] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
Automated prostate segmentation in MRI is highly demanded for computer-assisted diagnosis. Recently, a variety of deep learning methods have achieved remarkable progress in this task, usually relying on large amounts of training data. Due to the scarcity of medical images, it is important to effectively aggregate data from multiple sites for robust model training, to alleviate the insufficiency of single-site samples. However, prostate MRIs from different sites present heterogeneity due to differences in scanners and imaging protocols, raising challenges for effectively aggregating multi-site data for network training. In this paper, we propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations from multiple sources of data. To compensate for the inter-site heterogeneity of different MRI datasets, we develop Domain-Specific Batch Normalization layers in the network backbone, enabling the network to estimate statistics and perform feature normalization for each site separately. Considering the difficulty of capturing shared knowledge from multiple datasets, a novel learning paradigm, i.e., Multi-site-guided Knowledge Transfer, is proposed to enhance the kernels to extract more generic representations from multi-site data. Extensive experiments on three heterogeneous prostate MRI datasets demonstrate that our MS-Net consistently improves performance across all datasets and outperforms state-of-the-art methods for multi-site learning.
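Domain-Specific Batch Normalization can be sketched as a normalization layer that keeps one set of running statistics per site while the convolutional kernels stay shared. The class below is a minimal assumed illustration of that bookkeeping, not the MS-Net implementation (it omits the learnable scale/shift parameters).

```python
import numpy as np

class DomainSpecificBN:
    # One set of running statistics per site; everything else in the
    # network (e.g., convolutional kernels) would be shared across sites.
    def __init__(self, num_sites, num_features, momentum=0.1):
        self.mean = np.zeros((num_sites, num_features))
        self.var = np.ones((num_sites, num_features))
        self.momentum = momentum

    def __call__(self, x, site):
        mu, va = x.mean(axis=0), x.var(axis=0)
        # Update only the running statistics belonging to this site.
        self.mean[site] = (1 - self.momentum) * self.mean[site] + self.momentum * mu
        self.var[site] = (1 - self.momentum) * self.var[site] + self.momentum * va
        # Train-mode normalization uses the current batch statistics.
        return (x - mu) / np.sqrt(va + 1e-5)

bn = DomainSpecificBN(num_sites=2, num_features=1)
x = np.array([[0.0], [2.0]])
bn(x, site=0)
print(bn.mean[0], bn.mean[1])  # site 0 statistics move, site 1 untouched
```

At inference, each site's own running mean and variance would be used, which is what lets one shared backbone serve heterogeneous sites.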
49
Wang X, Chen H, Gan C, Lin H, Dou Q, Tsougenis E, Huang Q, Cai M, Heng PA. Weakly Supervised Deep Learning for Whole Slide Lung Cancer Image Analysis. IEEE Trans Cybern 2020; 50:3950-3962. [PMID: 31484154 DOI: 10.1109/tcyb.2019.2935141] [Citation(s) in RCA: 107] [Impact Index Per Article: 26.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Histopathology image analysis serves as the gold standard for cancer diagnosis. Efficient and precise diagnosis is critical for the subsequent therapeutic treatment of patients. So far, computer-aided diagnosis has not been widely applied in the pathological field, as the currently well-addressed tasks are only the tip of the iceberg. Whole slide image (WSI) classification is a quite challenging problem. First, the scarcity of annotations heavily impedes the pace of developing effective approaches. Pixelwise delineated annotations on WSIs are time-consuming and tedious, which poses difficulties in building a large-scale training dataset. In addition, the variety of heterogeneous tumor patterns present in high-magnification fields is actually the major obstacle. Furthermore, a gigapixel-scale WSI cannot be analyzed directly due to the immeasurable computational cost. How to design weakly supervised learning methods that maximize the use of available WSI-level labels, which can be readily obtained in clinical practice, is therefore quite appealing. To overcome these challenges, we present a weakly supervised approach in this article for fast and effective classification of whole slide lung cancer images. Our method first takes advantage of a patch-based fully convolutional network (FCN) to retrieve discriminative blocks and provide representative deep features with high efficiency. Then, different context-aware block selection and feature aggregation strategies are explored to generate a globally holistic WSI descriptor, which is ultimately fed into a random forest (RF) classifier for the image-level prediction. To the best of our knowledge, this is the first study to exploit the potential of image-level labels along with some coarse annotations for weakly supervised learning. A large-scale lung cancer WSI dataset is constructed in this article for evaluation, which validates the effectiveness and feasibility of the proposed method. Extensive experiments demonstrate the superior performance of our method, which surpasses state-of-the-art approaches by a significant margin with an accuracy of 97.3%. In addition, our method also achieves the best performance on the public lung cancer WSI dataset from The Cancer Genome Atlas (TCGA). We highlight that a small number of coarse annotations can contribute to further accuracy improvement. We believe that weakly supervised learning methods have great potential to assist pathologists in histology image diagnosis in the near future.
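The block-selection-and-aggregation step can be sketched in NumPy: keep the patches the patch-level FCN scores as most discriminative and average their deep features into one slide-level descriptor, which would then go to the RF classifier. This is a deliberately simplified assumption about the pipeline, not the authors' code.

```python
import numpy as np

def aggregate_wsi(patch_probs, patch_feats, k=2):
    # Simplified block selection: keep the k patches with the highest
    # FCN tumor probability, then mean-pool their deep features into a
    # single slide-level descriptor.
    top = np.argsort(patch_probs)[-k:]
    return patch_feats[top].mean(axis=0)

probs = np.array([0.1, 0.9, 0.8, 0.2])             # patch-level FCN scores
feats = np.array([[0.0, 0.0], [1.0, 0.0],
                  [0.0, 1.0], [5.0, 5.0]])          # patch deep features
desc = aggregate_wsi(probs, feats, k=2)
print(desc)  # [0.5 0.5] — mean of the two highest-scoring patches
```

The paper explores several context-aware selection and aggregation strategies; this sketch shows only the simplest top-k / mean-pool variant to make the data flow concrete.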
50
Abstract
Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, utilizing co-registered images of different modalities. We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy. In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI, and only employ modality-specific internal normalization layers which compute respective statistics. To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, by explicitly constraining the KL-divergence of our derived prediction distributions between modalities. We have extensively validated our approach on two multi-class segmentation problems: i) cardiac structure segmentation, and ii) abdominal organ segmentation. Different network settings, i.e., 2D dilated network and 3D U-net, are utilized to investigate our method's general efficacy. Experimental results on both tasks demonstrate that our novel multi-modal learning scheme consistently outperforms single-modal training and previous multi-modal approaches.
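The knowledge-distillation-inspired term that "explicitly constrains the KL-divergence of the derived prediction distributions between modalities" can be sketched as a symmetric KL penalty between the two modalities' softmax outputs. The helper names below are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL divergence per sample between two probability distributions.
    return np.sum(p * np.log(p / q), axis=-1)

def cross_modality_kd(logits_ct, logits_mr):
    # Constrain the class-probability distributions derived from the two
    # modalities to stay close, as a knowledge-distillation-style loss.
    p, q = softmax(logits_ct), softmax(logits_mr)
    return np.mean(kl(p, q) + kl(q, p))

same = np.array([[2.0, 1.0, 0.0]])
print(cross_modality_kd(same, same))  # 0.0 for identical predictions
```

The loss is zero when CT and MRI produce identical class distributions and grows as they diverge, which is what pushes the heavily shared kernels toward modality-agnostic representations.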