1.
Kalsi S, French H, Chhaya S, Madani H, Mir R, Anosova A, Dubash S. The Evolving Role of Artificial Intelligence in Radiotherapy Treatment Planning - A Literature Review. Clin Oncol (R Coll Radiol) 2024; 36:596-605. PMID: 38981781. DOI: 10.1016/j.clon.2024.06.005.
Abstract
This paper examines the integration of artificial intelligence (AI) into radiotherapy for cancer treatment. The importance of radiotherapy in cancer management and its time-intensive planning process make AI adoption appealing, especially given the escalating demand for radiotherapy. This review highlights the efficacy of AI across medical domains, where it has matched or surpassed human performance in areas such as cardiology and dermatology. Focusing on radiotherapy, the paper details AI's applications in target segmentation, dose optimization, and outcome prediction. It discusses the benefits of adaptive radiotherapy and AI's potential to improve patient outcomes through greater treatment accuracy. The paper explores ethical concerns, including data privacy and bias, stressing the need for robust guidelines. Educating healthcare professionals and patients about AI's role is crucial, as is acknowledging potential changes to job roles and concerns about patients' trust in AI. Overall, the integration of AI in radiotherapy holds transformative potential for streamlining processes, improving outcomes, and reducing healthcare costs worldwide. Successful implementation, however, hinges on addressing ethical and logistical challenges and on collaboration among healthcare professionals and across patient population data sets. Rigorous education, collaborative effort, and global data sharing will be the compass guiding its success in radiotherapy and healthcare.
Affiliation(s)
- S Kalsi: Lister Hospital, Stevenage, United Kingdom
- H French: University of Chester, United Kingdom
- S Chhaya: New Cross Hospital, Wolverhampton, United Kingdom
- H Madani: Lister Hospital, Stevenage, United Kingdom
- R Mir: Mount Vernon Cancer Centre, Northwood, United Kingdom; University of Manchester, Manchester, United Kingdom
- A Anosova: Mount Vernon Cancer Centre, East & North Hertfordshire NHS Trust, United Kingdom
- S Dubash: Mount Vernon Cancer Centre, Northwood, United Kingdom; Department of Surgery and Cancer, Imperial College, London, United Kingdom
2.
Shi Y, Tang S, Li Y, He Z, Tang S, Wang R, Zheng W, Chen Z, Zhou Y. Continual learning for seizure prediction via memory projection strategy. Comput Biol Med 2024; 181:109028. PMID: 39173485. DOI: 10.1016/j.compbiomed.2024.109028.
Abstract
Despite extensive work on epilepsy prediction via machine learning, most models are tailored for offline scenarios and cannot handle settings where data change over time. Catastrophic forgetting (CF) of learned electroencephalogram (EEG) data occurs when EEG changes dynamically in the clinical setting. This paper implements a continual learning (CL) strategy, Memory Projection (MP), for epilepsy prediction, which can be combined with other algorithms to avoid CF. The strategy enables the model to learn EEG data from each patient in dynamic subspaces with weak correlation, layer by layer, to minimize interference and promote knowledge transfer. A Regularization Loss Reconstruction Algorithm and a Matrix Dimensionality Reduction Algorithm form the core of MP. Experimental results show that MP exhibits excellent performance and low forgetting rates in sequential learning of seizure prediction. The forgetting rates for accuracy and sensitivity across multiple experiments are below 5%. When learning from multi-center datasets, the forgetting rates for accuracy and sensitivity decrease to 0.65% and 1.86%, comparable to state-of-the-art CL strategies. Ablation experiments show that MP operates with minimal storage and computational cost, which demonstrates practical potential for seizure prediction in clinical scenarios.
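The paper's Memory Projection strategy learns new patients in subspaces weakly correlated with old ones. As an illustration only (the authors' exact algorithm is not reproduced here), the core idea of projection-based CL methods can be sketched as removing the component of a new-task gradient that lies in a subspace deemed important for previous tasks:

```python
import numpy as np

def project_gradient(grad, memory_basis):
    """Project a gradient onto the subspace orthogonal to stored
    directions, so that updates for a new patient interfere minimally
    with what was learned from earlier patients.

    grad:         (d,) gradient for the current task
    memory_basis: (d, k) orthonormal basis of directions important
                  to previously learned tasks
    """
    # Component of the gradient lying inside the protected subspace
    overlap = memory_basis @ (memory_basis.T @ grad)
    return grad - overlap

# Toy example: protect the x-axis; only the y-component of the update survives
basis = np.array([[1.0], [0.0]])          # orthonormal basis, k = 1
g = np.array([3.0, 4.0])
g_proj = project_gradient(g, basis)       # -> array([0., 4.])
```

The projected gradient is, by construction, orthogonal to every stored direction, which is what bounds interference with old knowledge.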
Affiliation(s)
- Yufei Shi: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, Guangdong, China
- Shishi Tang: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, Guangdong, China
- Yuxuan Li: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, Guangdong, China
- Zhipeng He: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, Guangdong, China
- Shengsheng Tang: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, Guangdong, China
- Ruixuan Wang: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510006, Guangdong, China
- Weishi Zheng: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510006, Guangdong, China
- Ziyi Chen: The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, Guangdong, China
- Yi Zhou: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, Guangdong, China
3.
Huynh BN, Groendahl AR, Tomic O, Liland KH, Knudtsen IS, Hoebers F, van Elmpt W, Dale E, Malinen E, Futsaether CM. Deep learning with uncertainty estimation for automatic tumor segmentation in PET/CT of head and neck cancers: impact of model complexity, image processing and augmentation. Biomed Phys Eng Express 2024; 10:055038. PMID: 39127060. DOI: 10.1088/2057-1976/ad6dcd.
Abstract
Objective. Target volumes for radiotherapy are usually contoured manually, which can be time-consuming and prone to inter- and intra-observer variability. Automatic contouring by convolutional neural networks (CNNs) can be fast and consistent but may produce unrealistic contours or miss relevant structures. We evaluate approaches for increasing the quality and assessing the uncertainty of CNN-generated contours of head and neck cancers with PET/CT as input. Approach. Two patient cohorts with head and neck squamous cell carcinoma and baseline 18F-fluorodeoxyglucose positron emission tomography and computed tomography images (FDG-PET/CT) were collected retrospectively from two centers. The union of manual contours of the gross primary tumor and involved nodes was used to train CNN models for generating automatic contours. The impact of image preprocessing, image augmentation, transfer learning and CNN complexity, architecture, and dimension (2D or 3D) on model performance and generalizability across centers was evaluated. A Monte Carlo dropout technique was used to quantify and visualize the uncertainty of the automatic contours. Main results. CNN models provided contours with good overlap with the manually contoured ground truth (median Dice Similarity Coefficient: 0.75-0.77), consistent with reported inter-observer variations and previous auto-contouring studies. Image augmentation and model dimension, rather than model complexity, architecture, or advanced image preprocessing, had the largest impact on model performance and cross-center generalizability. Transfer learning on a limited number of patients from a separate center increased model generalizability without decreasing model performance on the original training cohort. High model uncertainty was associated with false-positive and false-negative voxels as well as low Dice coefficients. Significance. High-quality automatic contours can be obtained using deep learning architectures that are not overly complex. Uncertainty estimation of the predicted contours shows potential for highlighting regions of the contour requiring manual revision, or for flagging segmentations requiring manual inspection and intervention.
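Monte Carlo dropout, the uncertainty technique named above, keeps dropout active at inference and treats the spread over repeated stochastic forward passes as an uncertainty map. A minimal toy sketch (a single random linear "network", not the study's CNN):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, w, p=0.5, T=100):
    """Monte Carlo dropout: run T stochastic forward passes with dropout
    still enabled, then return the mean prediction and the per-output
    standard deviation as an uncertainty estimate."""
    preds = []
    for _ in range(T):
        mask = rng.random(w.shape) > p            # random dropout mask
        preds.append(x @ (w * mask) / (1.0 - p))  # inverted-dropout scaling
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

x = np.ones((1, 8))                 # one toy input
w = rng.normal(size=(8, 3))         # toy weights, 3 outputs
mean, unc = mc_dropout_predict(x, w)
```

In the segmentation setting the same recipe is applied voxel-wise, so high-`unc` regions mark contour areas that may need manual revision.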
Affiliation(s)
- Bao Ngoc Huynh: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Aurora Rosvoll Groendahl: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Section of Oncology, Vestre Viken Hospital Trust, Drammen, Norway
- Oliver Tomic: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Kristian Hovde Liland: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Ingerid Skjei Knudtsen: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Frank Hoebers: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Reproduction, Maastricht, Netherlands
- Wouter van Elmpt: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Reproduction, Maastricht, Netherlands
- Einar Dale: Department of Oncology, Oslo University Hospital, Oslo, Norway
- Eirik Malinen: Department of Medical Physics, Oslo University Hospital, Oslo, Norway; Department of Physics, University of Oslo, Oslo, Norway
4.
Zhu Z, Ma X, Wang W, Dong S, Wang K, Wu L, Luo G, Wang G, Li S. Boosting knowledge diversity, accuracy, and stability via tri-enhanced distillation for domain continual medical image segmentation. Med Image Anal 2024; 94:103112. PMID: 38401270. DOI: 10.1016/j.media.2024.103112.
Abstract
Domain continual medical image segmentation plays a crucial role in clinical settings. This approach enables segmentation models to continually learn from a sequential data stream across multiple domains. However, it faces the challenge of catastrophic forgetting. Existing methods based on knowledge distillation show potential to address this challenge via a three-stage process: distillation, transfer, and fusion. Yet, each stage presents its unique issues that, collectively, amplify the problem of catastrophic forgetting. To address these issues at each stage, we propose a tri-enhanced distillation framework. (1) Stochastic Knowledge Augmentation reduces redundancy in knowledge, thereby increasing both the diversity and volume of knowledge derived from the old network. (2) Adaptive Knowledge Transfer selectively captures critical information from the old knowledge, facilitating a more accurate knowledge transfer. (3) Global Uncertainty-Guided Fusion introduces a global uncertainty view of the dataset to fuse the old and new knowledge with reduced bias, promoting a more stable knowledge fusion. Our experimental results not only validate the feasibility of our approach, but also demonstrate its superior performance compared to state-of-the-art methods. We suggest that our innovative tri-enhanced distillation framework may establish a robust benchmark for domain continual medical image segmentation.
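Knowledge distillation, the mechanism underlying the framework above, penalizes divergence between the old network's (teacher's) and new network's (student's) temperature-softened output distributions. A generic sketch of this basic loss, not the paper's tri-enhanced variant:

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable temperature-scaled softmax."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the teacher's and student's softened
    distributions, scaled by T^2 as in standard distillation."""
    p = softmax(teacher_logits, T)   # old network ("teacher")
    q = softmax(student_logits, T)   # new network ("student")
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1).mean()
    return float(kl * T ** 2)
```

The loss is zero exactly when the student reproduces the teacher's outputs, which is what preserves old-domain behavior while the new domain is learned.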
Affiliation(s)
- Zhanshi Zhu: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Xinghua Ma: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Wei Wang: Faculty of Computing, Harbin Institute of Technology, Shenzhen, China
- Suyu Dong: College of Computer and Control Engineering, Northeast Forestry University, Harbin, China
- Kuanquan Wang: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Lianming Wu: Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Gongning Luo: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Guohua Wang: College of Computer and Control Engineering, Northeast Forestry University, Harbin, China
- Shuo Li: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
5.
Langs G. Artificial intelligence in medical imaging is a tool for clinical routine and scientific discovery. Semin Arthritis Rheum 2024; 64S:152321. PMID: 38007360. DOI: 10.1016/j.semarthrit.2023.152321.
Abstract
The emergence of powerful machine learning methodology together with an increasing amount of data collected during clinical routine have fostered a growing role of artificial intelligence (AI) in medicine. Algorithms have become part of clinical care enhancing image reconstruction, detecting cancer or predicting individual risk to support treatment decisions and patient management. The entry into clinical care is determined by technological feasibility, integration into effective workflows, and immediacy of benefits. At the same time, research is advancing the integration of imaging data and other modalities such as genomics, and the linking of observations made at large scale with the understanding of underlying biological processes. AI will have impact in imaging and precision medicine not only because of the successful application of techniques established in other domains, but primarily because of the effective joint development of new technology and corresponding advance of diagnosis and care.
Affiliation(s)
- Georg Langs: Computational Imaging Research Lab, Christian Doppler Laboratory for Machine Learning Driven Precision Imaging, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Spitalgasse 23, Vienna 1090, Austria
6.
Fuchs M, Gonzalez C, Frisch Y, Hahn P, Matthies P, Gruening M, Pinto Dos Santos D, Dratsch T, Kim M, Nensa F, Trenz M, Mukhopadhyay A. Closing the loop for AI-ready radiology. Rofo Fortschr Rontg 2024; 196:154-162. PMID: 37582385. DOI: 10.1055/a-2124-1958.
Abstract
BACKGROUND: In recent years, AI has made significant advances in medical diagnosis and prognosis. However, its incorporation into clinical practice remains challenging and under-appreciated. We aim to demonstrate a possible vertical integration approach to close the loop for AI-ready radiology. METHOD: This study highlights the importance of two-way communication for AI-assisted radiology. As a key part of the methodology, it demonstrates the integration of AI systems into clinical practice with structured reports and AI visualization, giving more insight into the AI system. By integrating cooperative lifelong learning into the AI system, we ensure its long-term effectiveness while keeping the radiologist in the loop. RESULTS: We demonstrate the use of lifelong learning for AI systems by incorporating AI visualization and structured reports. We evaluate a Memory Aware Synapses approach and a rehearsal approach and find that both work in practice. Furthermore, we see the advantage of lifelong learning algorithms that do not require storing or maintaining samples from previous datasets. CONCLUSION: Incorporating AI into the clinical routine of radiology requires two-way communication and seamless integration of the AI system, which we achieve with structured reports and visualization of the insights gained by the model. Closing the loop for radiology leads to successful integration, enabling lifelong learning for the AI system, which is crucial for sustainable long-term performance. KEY POINTS: AI systems are integrated into the clinical routine via structured reports and AI visualization. Two-way communication between AI and radiologists keeps the radiologist in the loop. Closing the loop enables lifelong learning, which is crucial for long-term, high-performing AI in radiology.
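Memory Aware Synapses (MAS), one of the two lifelong-learning approaches evaluated above, estimates each parameter's importance as the average gradient magnitude of the squared output norm on (unlabeled) data, then penalizes changes to important parameters. A sketch for a single linear layer, where the gradient has a closed form (this is the generic MAS recipe, not this paper's exact implementation):

```python
import numpy as np

def mas_importance(W, X):
    """MAS importance: average |d||Wx||^2 / dW| over the data X.
    For a linear layer y = W x, d||y||^2/dW = 2 y x^T."""
    omega = np.zeros_like(W)
    for x in X:
        y = W @ x
        omega += np.abs(2.0 * np.outer(y, x))
    return omega / len(X)

def mas_penalty(W_new, W_old, omega, lam=1.0):
    """Regularizer added to the new-task loss: changing a weight costs
    proportionally to its importance for previously seen data."""
    return lam * float((omega * (W_new - W_old) ** 2).sum())
```

Because importance is computed from outputs rather than labels, MAS fits the "no stored samples" property the abstract highlights: only the importance matrix and old weights travel forward, never patient data.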
Affiliation(s)
- Maximilian Gruening: Interorganisational Informationssystems, Georg-August-Universität Göttingen, Goettingen, Germany
- Daniel Pinto Dos Santos: Institute for Diagnostic and Interventional Radiology, Uniklinik Koln, Germany; Institute for Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt am Main, Germany
- Thomas Dratsch: Institute for Diagnostic and Interventional Radiology, Uniklinik Koln, Germany
- Moon Kim: Institute for Diagnostic and Interventional Radiology and Neuroradiology, Universitätsklinikum Essen, Germany; Institute for Artificial Intelligence in Medicine, Universitätsklinikum Essen, Germany
- Felix Nensa: Institute for Diagnostic and Interventional Radiology and Neuroradiology, Universitätsklinikum Essen, Germany; Institute for Artificial Intelligence in Medicine, Universitätsklinikum Essen, Germany
- Manuel Trenz: Interorganisational Informationssystems, Georg-August-Universität Göttingen, Goettingen, Germany
7.
Bo ZH, Guo Y, Lyu J, Liang H, He J, Deng S, Xu F, Lou X, Dai Q. Relay learning: a physically secure framework for clinical multi-site deep learning. NPJ Digit Med 2023; 6:204. PMID: 37925578. PMCID: PMC10625523. DOI: 10.1038/s41746-023-00934-4.
Abstract
Big data serves as the cornerstone for constructing real-world deep learning systems across various domains. In medicine and healthcare, a single clinical site lacks sufficient data, necessitating the involvement of multiple sites. Unfortunately, concerns regarding data security and privacy hinder the sharing and reuse of data across sites. Existing approaches to multi-site clinical learning depend heavily on the security of the network firewall and system implementation. To address this issue, we propose Relay Learning, a secure deep-learning framework that physically isolates clinical data from external intruders while still leveraging the benefits of multi-site big data. We demonstrate the efficacy of Relay Learning in three medical tasks spanning different diseases and anatomical structures: retinal fundus structure segmentation, mediastinal tumor diagnosis, and brain midline localization. We evaluate Relay Learning against alternative solutions through multi-site validation and external validation. Incorporating a total of 41,038 medical images from 21 medical hosts, including 7 external hosts, with non-uniform distributions, we observe significant performance improvements with Relay Learning across all three tasks: average gains of 44.4% for retinal fundus segmentation, 24.2% for mediastinal tumor diagnosis, and 36.7% for brain midline localization. Remarkably, Relay Learning even outperforms central learning on external test sets. Meanwhile, Relay Learning keeps data sovereignty local, without cross-site network connections. We anticipate that Relay Learning will revolutionize clinical multi-site collaboration and reshape the landscape of healthcare in the future.
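The abstract describes model parameters benefiting from multi-site data without any raw data leaving a site. As a rough illustration of that relay pattern (our simplified reading, not the authors' framework, which adds physical isolation and other machinery), the model can visit each site in turn and train only on that site's local data:

```python
import numpy as np

def relay_train(w, sites, lr=0.1, epochs=20):
    """Relay-style sequential training sketch: parameters w travel from
    site to site; each site updates them on its own (X, y) by gradient
    descent on a least-squares toy objective. Raw data never moves."""
    for X, y in sites:                       # each site holds its data locally
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
    return w

# Two toy "sites" sharing the same underlying relationship
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
sites = []
for _ in range(2):
    X = rng.normal(size=(50, 3))
    sites.append((X, X @ w_true))

w = relay_train(np.zeros(3), sites)          # approaches w_true
```

Each relay leg only ever sees local tensors, which is the property that removes the need for cross-site network connections during training.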
Affiliation(s)
- Zi-Hao Bo: School of Software, Tsinghua University, Beijing, China; BNRist, Tsinghua University, Beijing, China
- Yuchen Guo: BNRist, Tsinghua University, Beijing, China
- Jinhao Lyu: Department of Radiology, Chinese PLA General Hospital / Chinese PLA Medical School, Beijing, China
- Hengrui Liang: Department of Thoracic Oncology and Surgery, China State Key Laboratory of Respiratory Disease & National Clinical Research Center for Respiratory Disease, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Jianxing He: Department of Thoracic Oncology and Surgery, China State Key Laboratory of Respiratory Disease & National Clinical Research Center for Respiratory Disease, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Shijie Deng: Department of Radiology, The 921st Hospital of Chinese PLA, Changsha, China
- Feng Xu: School of Software, Tsinghua University, Beijing, China; BNRist, Tsinghua University, Beijing, China
- Xin Lou: Department of Radiology, Chinese PLA General Hospital / Chinese PLA Medical School, Beijing, China
- Qionghai Dai: BNRist, Tsinghua University, Beijing, China; Department of Automation, Tsinghua University, Beijing, China
8.
Yang S, Cai Z. Cross Domain Lifelong Learning Based on Task Similarity. IEEE Trans Pattern Anal Mach Intell 2023; 45:11612-11623. PMID: 37195848. DOI: 10.1109/tpami.2023.3276991.
Abstract
Humans gradually learn a sequence of cross-domain tasks and seldom experience catastrophic forgetting. In contrast, deep neural networks achieve good performance only in specific tasks within a single domain. To equip the network with lifelong learning capabilities, we propose a Cross-Domain Lifelong Learning (CDLL) framework that fully explores task similarities. Specifically, we employ a Dual Siamese Network (DSN) to learn the essential similarity features of tasks across different domains. To further understand similarity information across domains, we introduce a Domain-Invariant Feature Enhancement Module (DFEM) to better extract domain-invariant features. Moreover, we propose a Spatial Attention Network (SAN) that assigns different weights to various tasks based on the learned similarity features. Ultimately, to maximize the use of model parameters for learning new tasks, we propose a Structural Sparsity Loss (SSL) that can make the SAN as sparse as possible while ensuring accuracy. Experimental results show that our method effectively reduces catastrophic forgetting compared with state-of-the-art methods when continuously learning multiple tasks across different domains. It is worth noting that the proposed method scarcely forgets old knowledge while consistently enhancing the performance of learned tasks, more closely aligning with human learning.
9.
Shen M, Chen D, Hu S, Xu G. Class incremental learning of remote sensing images based on class similarity distillation. PeerJ Comput Sci 2023; 9:e1583. PMID: 37810339. PMCID: PMC10557500. DOI: 10.7717/peerj-cs.1583.
Abstract
When a well-trained model learns a new class, the data distribution differences between the new and old classes inevitably cause catastrophic forgetting as the model adapts to perform better on the new class. This behavior differs from human learning. In this article, we propose a class incremental object detection method for remote sensing images to address the catastrophic forgetting caused by distribution differences among classes. First, we introduce a class similarity distillation (CSD) loss based on the similarity between new and old class prototypes, ensuring the model's plasticity to learn new classes and its stability to detect old ones. Second, to better extract class similarity features, we propose a global similarity distillation (GSD) loss that maximizes the mutual information between the new-class feature and old-class features. Additionally, we present a region proposal network (RPN)-based method for assigning positive and negative labels that prevents mislearning. Experiments demonstrate that our method is more accurate for class incremental learning on the public DOTA and DIOR datasets and significantly improves training efficiency compared with state-of-the-art class incremental object detection methods.
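The CSD loss above is built on similarities between class prototypes, i.e. mean feature vectors per class. A minimal sketch of that ingredient (the prototype computation and a cosine-similarity matrix; the paper's full loss, which weights distillation by these similarities, is not reproduced):

```python
import numpy as np

def class_prototypes(features, labels, n_classes):
    """Prototype of each class: the mean feature vector of its samples."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def prototype_similarity(protos):
    """Pairwise cosine similarity between class prototypes; new/old class
    pairs with high similarity can be distilled more strongly."""
    normed = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return normed @ normed.T

feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
labels = np.array([0, 0, 1])
S = prototype_similarity(class_prototypes(feats, labels, 2))
```

For these toy features the two prototypes are orthogonal, so the off-diagonal similarity is 0 while each class is perfectly similar to itself.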
Affiliation(s)
- Mingge Shen: Zhejiang College of Security Technology, College of Intelligent Equipment, Wenzhou, Zhejiang, China; Zhejiang College of Security Technology, Wenzhou Key Laboratory of Stereoscopic and Intelligent Monitoring and Warning of Natural Disasters, Wenzhou, Zhejiang, China
- Dehu Chen: Wenzhou University of Technology, College of Architecture and Energy Engineering, Wenzhou, Zhejiang, China; Wenzhou University of Technology, Wenzhou Key Laboratory of Intelligent Lifeline Protection and Emergency Technology for Resilient City, Wenzhou, Zhejiang, China
- Silan Hu: Macau University of Science and Technology, Faculty of Innovation Engineering, Macau, China
- Gang Xu: Zhejiang College of Security Technology, College of Intelligent Equipment, Wenzhou, Zhejiang, China; Zhejiang College of Security Technology, Wenzhou Key Laboratory of Stereoscopic and Intelligent Monitoring and Warning of Natural Disasters, Wenzhou, Zhejiang, China
10.
González C, Ranem A, Pinto Dos Santos D, Othman A, Mukhopadhyay A. Lifelong nnU-Net: a framework for standardized medical continual learning. Sci Rep 2023; 13:9381. PMID: 37296233. PMCID: PMC10256748. DOI: 10.1038/s41598-023-34484-2.
Abstract
As the enthusiasm surrounding Deep Learning grows, both medical practitioners and regulatory bodies are exploring ways to safely introduce image segmentation into clinical practice. One frontier to overcome when translating promising research into the clinical open world is the shift from static to continual learning. Continual learning, the practice of training models throughout their lifecycle, is seeing growing interest but is still in its infancy in healthcare. We present Lifelong nnU-Net, a standardized framework that places continual segmentation in the hands of researchers and clinicians. Built on top of nnU-Net, widely regarded as the best-performing segmenter for multiple medical applications, and equipped with all necessary modules for training and testing models sequentially, the framework ensures broad applicability and lowers the barrier to evaluating new methods in a continual fashion. Our benchmark results across three medical segmentation use cases and five continual learning methods give a comprehensive outlook on the current state of the field and constitute a first reproducible benchmark.
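Frameworks that train and test models sequentially, as described above, are typically summarized with an accuracy matrix and a forgetting metric. A sketch of that standard continual-learning bookkeeping (a generic metric, not a score defined by this paper):

```python
import numpy as np

def average_forgetting(acc):
    """acc[i, j] = score on task j after training stage i.
    Forgetting of task j is the drop from its best score before the
    final stage to its score at the final stage; average over old tasks."""
    n = acc.shape[0]
    drops = [acc[:n - 1, j].max() - acc[n - 1, j] for j in range(n - 1)]
    return float(np.mean(drops))

# Three sequential tasks: rows = after stage i, columns = task j
acc = np.array([[0.80, 0.00, 0.00],
                [0.75, 0.85, 0.00],
                [0.70, 0.80, 0.90]])
f = average_forgetting(acc)   # mean of the 0.10 and 0.05 drops, i.e. 0.075
```

Reporting the full matrix rather than only final scores is what makes sequential benchmarks comparable across methods.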
Affiliation(s)
- Camila González: Technical University of Darmstadt, Karolinenpl. 5, 64289, Darmstadt, Germany
- Amin Ranem: Technical University of Darmstadt, Karolinenpl. 5, 64289, Darmstadt, Germany
- Daniel Pinto Dos Santos: University Hospital Cologne, Kerpener Str. 62, 50937, Cologne, Germany; University Hospital Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt, Germany
- Ahmed Othman: University Medical Center Mainz, Langenbeckstraße 1, 55131, Mainz, Germany
11.
Towards precision medicine based on a continuous deep learning optimization and ensemble approach. NPJ Digit Med 2023; 6:18. PMID: 36737644. PMCID: PMC9898519. DOI: 10.1038/s41746-023-00759-1.
Abstract
We developed a continuous learning system (CLS) based on a deep learning optimization and ensemble approach, and conducted a study simulating prospective use on retrospective ultrasound images of breast masses for precise diagnosis. We extracted 629 breast masses and 2235 images from 561 cases at our institution to train the model in six stages to diagnose benign and malignant tumors, pathological types, and diseases. We randomly selected 180 of 3098 cases from two external institutions. The CLS was tested with seven independent datasets and compared with 21 physicians; by training stage six, the system's diagnostic ability exceeded that of 20 physicians. The optimal integrated method we developed is expected to accurately diagnose breast masses, and it can also be extended to the intelligent diagnosis of masses in other organs. Overall, our findings have potential value in further promoting the application of AI diagnosis in precision medicine.
12.
Beuque M, Magee DR, Chatterjee A, Woodruff HC, Langley RE, Allum W, Nankivell MG, Cunningham D, Lambin P, Grabsch HI. Automated detection and delineation of lymph nodes in haematoxylin & eosin stained digitised slides. J Pathol Inform 2023; 14:100192. PMID: 36818020. PMCID: PMC9932489. DOI: 10.1016/j.jpi.2023.100192.
Abstract
Treatment of patients with oesophageal and gastric cancer (OeGC) is guided by disease stage, patient performance status and preferences. Lymph node (LN) status is one of the strongest prognostic factors for OeGC patients. However, survival varies between patients with the same disease stage and LN status. We recently showed that LN size from patients with OeGC might also have prognostic value, thus making delineations of LNs essential for size estimation and the extraction of other imaging biomarkers. We hypothesized that a machine learning workflow is able to: (1) find digital H&E stained slides containing LNs, (2) create a scoring system providing degrees of certainty for the results, and (3) delineate LNs in those images. To train and validate the pipeline, we used 1695 H&E slides from the OE02 trial. The dataset was divided into training (80%) and validation (20%). The model was tested on an external dataset of 826 H&E slides from the OE05 trial. U-Net architecture was used to generate prediction maps from which predefined features were extracted. These features were subsequently used to train an XGBoost model to determine if a region truly contained a LN. With our innovative method, the balanced accuracies of the LN detection were 0.93 on the validation dataset (0.83 on the test dataset) compared to 0.81 (0.81) on the validation (test) datasets when using the standard method of thresholding U-Net predictions to arrive at a binary mask. Our method allowed for the creation of an "uncertain" category, and partly limited false-positive predictions on the external dataset. The mean Dice score was 0.73 (0.60) per-image and 0.66 (0.48) per-LN for the validation (test) datasets. Our pipeline detects images with LNs more accurately than conventional methods, and high-throughput delineation of LNs can facilitate future LN content analyses of large datasets.
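The per-image and per-LN Dice scores reported above measure overlap between predicted and ground-truth masks. The metric itself is standard and can be computed directly:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((4, 4), dtype=bool); a[:2] = True   # top half predicted
b = np.zeros((4, 4), dtype=bool); b[2:] = True   # bottom half annotated
```

Dice is 1 for a perfect match and 0 for disjoint masks, so the study's per-image 0.73 indicates substantial but imperfect overlap with the manual delineations.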
Affiliation(s)
- Manon Beuque: Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands
- Derek R. Magee: School of Computing, University of Leeds, LS2 9JT Leeds, United Kingdom; HeteroGenius Limited, Leeds, United Kingdom
- Avishek Chatterjee: Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands
- Henry C. Woodruff: Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, GROW School for Oncology and Reproduction, Maastricht University Medical Center+, P. Debyelaan 25, 6229 HX Maastricht, The Netherlands
- Ruth E. Langley: MRC Clinical Trials Unit at University College London, 90 High Holborn, WC1V 6LJ London, United Kingdom
- William Allum: Department of Surgery, Royal Marsden Hospital, The Royal Marsden Fulham Road, SW3 6JJ London, United Kingdom
- Matthew G. Nankivell: MRC Clinical Trials Unit at University College London, 90 High Holborn, WC1V 6LJ London, United Kingdom
- David Cunningham: Department of Medicine, The Royal Marsden NHS Trust, The Royal Marsden Fulham Road, SW3 6JJ London, United Kingdom
- Philippe Lambin: Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, GROW School for Oncology and Reproduction, Maastricht University Medical Center+, P. Debyelaan 25, 6229 HX Maastricht, The Netherlands
- Heike I. Grabsch (corresponding author): Department of Pathology, GROW School for Oncology and Reproduction, Maastricht University Medical Center+, P. Debyelaan 25, 6229 HX Maastricht, The Netherlands; Pathology & Data Analytics, Leeds Institute of Medical Research at St. James’s, University of Leeds, LS2 9JT Leeds, United Kingdom
13
Mohammadi-Nejad AR, Allen RJ, Kraven LM, Leavy OC, Jenkins RG, Wain LV, Auer DP, Sotiropoulos SN. Mapping brain endophenotypes associated with idiopathic pulmonary fibrosis genetic risk. EBioMedicine 2022; 86:104356. [PMID: 36413936] [PMCID: PMC9677133] [DOI: 10.1016/j.ebiom.2022.104356]
Abstract
BACKGROUND Idiopathic pulmonary fibrosis (IPF) is a serious disease of the lung parenchyma. It has a known polygenic risk, with at least seventeen regions of the genome implicated to date. Growing evidence suggests multimorbidity of IPF with neurodegenerative or affective disorders. However, no study so far has explicitly explored links between IPF, associated genetic risk profiles, and specific brain features. METHODS We exploited imaging and genetic data from more than 32,000 participants available through the UK Biobank population-level resource to explore links between IPF genetic risk and imaging-derived brain endophenotypes. We performed a brain-wide imaging-genetics association study between the presence of 17 known IPF risk variants and 1248 multi-modal imaging-derived features, which characterise brain structure and function. FINDINGS We identified strong associations between cortical morphological features, white matter microstructure and IPF risk loci on chromosomes 17 (17q21.31) and 8 (DEPTOR). Through co-localisation analysis, we confirmed that cortical thickness in the anterior cingulate and more widespread white matter microstructure changes share a single causal variant with IPF at the chromosome 8 locus. Post-hoc preliminary analysis suggested that forced vital capacity may partially mediate the association between the DEPTOR variant and white matter microstructure, but not between the DEPTOR risk variant and cortical thickness. INTERPRETATION Our results reveal associations between IPF genetic risk and differences in brain structure, for both cortex and white matter. Differences in tissue-specific imaging signatures suggest distinct underlying mechanisms: focal cortical thinning in regions with known high DEPTOR expression, unrelated to lung function, and more widespread microstructural white matter changes consistent with hypoxia or neuroinflammation, with potential mediation by lung function.
FUNDING This study was supported by the NIHR Nottingham Biomedical Research Centre and the UK Medical Research Council.
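A brain-wide imaging-genetics association study of the kind described pairs each risk variant with each imaging-derived phenotype (IDP). A minimal sketch of one such per-feature scan, using simple least squares on genotype dosage; the study's actual confound model, covariates, and multiple-testing procedure are omitted, and all names and the simulated effect size are illustrative assumptions:

```python
import numpy as np

def association_scan(genotypes, idps):
    """For each imaging-derived phenotype (IDP), regress the feature on
    genotype dosage (0/1/2 risk alleles) by simple least squares and
    return (slope, t-statistic) pairs. Covariates are omitted here."""
    n, p = idps.shape
    g = genotypes - genotypes.mean()
    out = []
    for j in range(p):
        y = idps[:, j] - idps[:, j].mean()
        beta = (g @ y) / (g @ g)
        resid = y - beta * g
        se = np.sqrt((resid @ resid) / (n - 2) / (g @ g))
        out.append((beta, beta / se))
    return out

# Simulated cohort: feature 0 truly depends on the risk variant.
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=500).astype(float)
idps = rng.normal(size=(500, 3))
idps[:, 0] += 0.5 * geno
res = association_scan(geno, idps)
print([round(t, 1) for _, t in res])  # feature 0's t-statistic stands out
```

At UK Biobank scale (17 variants × 1248 IDPs) the same scan is repeated per variant, with a stringent significance threshold to absorb the multiple comparisons.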
Affiliation(s)
- Ali-Reza Mohammadi-Nejad: National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Queens Medical Centre, Nottingham, United Kingdom; Sir Peter Mansfield Imaging Centre & Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Richard J Allen: Department of Health Sciences, University of Leicester, Leicester, United Kingdom
- Luke M Kraven: Department of Health Sciences, University of Leicester, Leicester, United Kingdom
- Olivia C Leavy: Department of Health Sciences, University of Leicester, Leicester, United Kingdom
- R Gisli Jenkins: National Heart and Lung Institute, Imperial College London, London, United Kingdom; Department of Interstitial Lung Disease, Royal Brompton and Harefield Hospital, Guys and St Thomas' NHS Foundation Trust, London, United Kingdom
- Louise V Wain: Department of Health Sciences, University of Leicester, Leicester, United Kingdom; National Institute for Health Research (NIHR) Leicester Respiratory Biomedical Research Centre, Glenfield Hospital, Leicester, United Kingdom
- Dorothee P Auer: National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Queens Medical Centre, Nottingham, United Kingdom; Sir Peter Mansfield Imaging Centre & Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Stamatios N Sotiropoulos: National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Queens Medical Centre, Nottingham, United Kingdom; Sir Peter Mansfield Imaging Centre & Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
14
Li H, Whitney HM, Ji Y, Edwards A, Papaioannou J, Liu P, Giger ML. Impact of continuous learning on diagnostic breast MRI AI: evaluation on an independent clinical dataset. J Med Imaging (Bellingham) 2022; 9:034502. [DOI: 10.1117/1.jmi.9.3.034502]
Affiliation(s)
- Hui Li: University of Chicago, Department of Radiology, Chicago, Illinois
- Yu Ji: Tianjin Medical University, Tianjin Medical University Cancer Institute and Hospital, National Clini
- John Papaioannou: University of Chicago, Department of Radiology, Chicago, Illinois
- Peifang Liu: Tianjin Medical University, Tianjin Medical University Cancer Institute and Hospital, National Clini
15
Assessing radiomics feature stability with simulated CT acquisitions. Sci Rep 2022; 12:4732. [PMID: 35304508] [PMCID: PMC8933485] [DOI: 10.1038/s41598-022-08301-1]
Abstract
The usefulness of quantitative medical imaging features in clinical studies was once disputed. Nowadays, advancements in analysis techniques, for instance through machine learning, have made quantitative features progressively useful in diagnosis and research. Tissue characterisation is improved via "radiomics" features, whose extraction can be automated. Despite these advances, the stability of quantitative features remains an important open problem. Because features can be highly sensitive to variations in acquisition details, it is not trivial to quantify stability and efficiently select stable features. In this work, we develop and validate a Computed Tomography (CT) simulator environment based on the publicly available ASTRA toolbox (www.astra-toolbox.com). We show that the variability, stability and discriminative power of the radiomics features extracted from the virtual phantom images generated by the simulator are similar to those observed in a tandem phantom study. Additionally, we show that the variability is matched between a multi-center phantom study and simulated results. Consequently, we demonstrate that the simulator can be utilised to assess the stability and discriminative power of radiomics features.
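Stability selection of the kind this abstract describes can be sketched by comparing a feature's spread across repeated simulated acquisitions of the same phantom. A minimal illustration using the coefficient of variation; the paper's actual stability metric may differ, and the 0.1 threshold, feature names, and toy values are assumptions:

```python
import numpy as np

def stability_cv(feature_values):
    """Coefficient of variation of one radiomics feature across repeated
    (simulated) acquisitions of the same phantom: std / |mean|."""
    vals = np.asarray(feature_values, dtype=float)
    return float(vals.std(ddof=1) / abs(vals.mean()))

def select_stable(features, threshold=0.1):
    """Keep features whose CV across simulated acquisitions is below threshold."""
    return {name: v for name, v in features.items() if stability_cv(v) < threshold}

# Toy example: one stable and one unstable feature over 5 simulated scans.
feats = {
    "mean_intensity": [100.0, 101.0, 99.5, 100.5, 100.2],
    "texture_entropy": [2.0, 3.5, 1.2, 4.0, 2.8],
}
stable = select_stable(feats)
print(sorted(stable))  # ['mean_intensity']
```

The point of simulating acquisitions rather than rescanning a physical phantom is that the acquisition parameters can be varied exhaustively and cheaply before features are screened this way.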
16
Lie SO, Lysdahlgaard S. Detection of metallic objects on digital radiographs with convolutional neural networks: A MRI screening tool. Radiography (Lond) 2022; 28:466-472. [PMID: 35042664] [DOI: 10.1016/j.radi.2022.01.001]
Abstract
INTRODUCTION Screening for metallic implants and foreign bodies before magnetic resonance imaging (MRI) examinations is crucial for patient safety. Health history is supplied by the patient or a family member, or obtained by screening electronic health records or the picture archiving and communication system (PACS). PACS securely stores and transmits digital radiographs (DR) and related reports with patient information. Convolutional neural networks (CNN) can be used to detect metallic objects in DRs stored in PACS. This study evaluates the accuracy of CNNs in the detection of metallic objects on DRs as an MRI screening tool. METHODS The musculoskeletal radiographs (MURA) dataset, consisting of 14,863 upper extremity studies, was stratified into datasets with and without metal. For each anatomical region (elbow, finger, hand, humerus, forearm, shoulder and wrist) we trained and validated CNN algorithms to classify radiographs with and without metal. Algorithm performance was evaluated with the area under the receiver-operating characteristic curve (AUC), sensitivity, specificity, predictive values and accuracy, compared against a reference standard of manual labelling. RESULTS Sensitivities, specificities and AUCs for the six anatomic regions ranged from 85.33% (95% CI: 78.64%-90.57%) to 100.00% (95% CI: 98.16%-100.00%), 75.44% (95% CI: 62.24%-85.87%) to 93.57% (95% CI: 88.78%-96.75%), and 0.95 to 0.99, respectively. CONCLUSION CNN algorithms classify DRs with metallic objects for six different anatomic regions with near-perfect accuracy. The rapid and iterative capability of the algorithms allows for scalable expansion and use as a substitute MRI screening tool for metallic objects. IMPLICATIONS FOR PRACTICE The CNNs could assist in metal detection on digital radiographs prior to MRI and substantially decrease screening time.
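The performance metrics this abstract reports (AUC, sensitivity, specificity) can be computed from classifier scores without any ML framework. A minimal sketch using the rank-based (Mann-Whitney) formulation of AUC and confusion-matrix rates; the toy labels and scores are illustrative, not data from the study:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    random positive scores higher than a random negative (ties count 0.5)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(labels, preds):
    """Sensitivity and specificity from binary predictions."""
    labels = np.asarray(labels)
    preds = np.asarray(preds)
    tp = ((preds == 1) & (labels == 1)).sum()
    fn = ((preds == 0) & (labels == 1)).sum()
    tn = ((preds == 0) & (labels == 0)).sum()
    fp = ((preds == 1) & (labels == 0)).sum()
    return float(tp / (tp + fn)), float(tn / (tn + fp))

y = [1, 1, 1, 0, 0, 0]          # metal present (1) / absent (0)
s = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]  # CNN scores
print(roc_auc(y, s))                     # 8/9
print(sens_spec(y, [1, 1, 0, 1, 0, 0]))  # both rates equal 2/3
```

The confidence intervals quoted in the abstract are intervals on exactly these proportions, typically computed with an exact binomial method over the per-region test sets.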
Affiliation(s)
- S O Lie: Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
- S Lysdahlgaard: Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark; Imaging Research Initiative Southwest (IRIS), Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark