1. Fionda B, Placidi E, de Ridder M, Strigari L, Patarnello S, Tanderup K, Hannoun-Levi JM, Siebert FA, Boldrini L, Gambacorta MA, De Spirito M, Sala E, Tagliaferri L. Artificial intelligence in interventional radiotherapy (brachytherapy): Enhancing patient-centered care and addressing patients' needs. Clin Transl Radiat Oncol 2024;49:100865. PMID: 39381628; PMCID: PMC11459626; DOI: 10.1016/j.ctro.2024.100865.
Abstract
This review explores the integration of artificial intelligence (AI) in interventional radiotherapy (IRT), emphasizing its potential to streamline workflows and enhance patient care. Through a systematic analysis of 78 relevant papers spanning from 2002 to 2024, we identified significant advancements in contouring, treatment planning, outcome prediction, and quality assurance. AI-driven approaches offer promise in reducing procedural times, personalizing treatments, and improving treatment outcomes for oncological patients. However, challenges such as clinical validation and quality assurance protocols persist. Nonetheless, AI presents a transformative opportunity to optimize IRT and meet evolving patient needs.
Affiliation(s)
- Bruno Fionda
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Elisa Placidi
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Mischa de Ridder
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Lidia Strigari
- Department of Medical Physics, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
- Stefano Patarnello
- Real World Data Facility, Gemelli Generator, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Kari Tanderup
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Jean-Michel Hannoun-Levi
- Department of Radiation Oncology, Antoine Lacassagne Cancer Centre, University of Côte d’Azur, Nice, France
- Frank-André Siebert
- Clinic of Radiotherapy (Radiooncology), University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Luca Boldrini
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Maria Antonietta Gambacorta
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, Rome, Italy
- Marco De Spirito
- Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Dipartimento di Neuroscienze, Sezione di Fisica, Università Cattolica del Sacro Cuore, Rome, Italy
- Evis Sala
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, Rome, Italy
- Luca Tagliaferri
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, Rome, Italy
2. Yuan Y, Hou S, Wu X, Wang Y, Sun Y, Yang Z, Yin S, Zhang F. Application of deep-learning to the automatic segmentation and classification of lateral lymph nodes on ultrasound images of papillary thyroid carcinoma. Asian J Surg 2024;47:3892-3898. PMID: 38453612; DOI: 10.1016/j.asjsur.2024.02.140.
Abstract
PURPOSE It is crucial to preoperatively diagnose lateral cervical lymph node (LN) metastases (LNMs) in papillary thyroid carcinoma (PTC) patients. This study aims to develop deep-learning models for the automatic segmentation and classification of LNMs on original ultrasound images. METHODS This study included 1000 lateral cervical LN ultrasound images (consisting of 512 benign and 558 metastatic LNs) collected from 728 patients at Chongqing General Hospital between March 2022 and July 2023. Three instance segmentation models (Mask R-CNN, SOLO and Mask2Former) were constructed to segment and classify ultrasound images of lateral cervical LNs by recognizing each object individually and in a pixel-by-pixel manner. The segmentation and classification results of the three models were compared with those of an experienced sonographer on the test set. RESULTS Upon completion of a 200-epoch learning cycle, the loss of all three models became negligible. To evaluate the performance of the deep-learning models, the intersection over union threshold was set at 0.75. The mean average precision scores for Mask R-CNN, SOLO and Mask2Former were 88.8%, 86.7% and 89.5%, respectively. The segmentation accuracies of the Mask R-CNN, SOLO and Mask2Former models and the sonographer were 85.6%, 88.0%, 89.5% and 82.3%, respectively. The classification AUCs of the Mask R-CNN, SOLO and Mask2Former models and the sonographer on the test set were 0.886, 0.869, 0.902 and 0.852, respectively. CONCLUSIONS The deep learning models could automatically segment and classify lateral cervical LNs with an AUC of 0.92. This approach may serve as a promising tool to assist sonographers in diagnosing lateral cervical LNMs among patients with PTC.
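As a concrete illustration of the evaluation protocol above, the sketch below computes mask-level intersection over union (IoU) and applies the study's 0.75 true-positive threshold. The masks, sizes, and helper name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0

# A predicted lymph-node mask counts as a true positive only if its IoU
# with a ground-truth mask reaches the 0.75 threshold used in the study.
IOU_THRESHOLD = 0.75
pred = np.zeros((128, 128), dtype=bool)
gt = np.zeros((128, 128), dtype=bool)
pred[30:80, 30:80] = True
gt[35:85, 35:85] = True
print(mask_iou(pred, gt), mask_iou(pred, gt) >= IOU_THRESHOLD)
```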
Affiliation(s)
- Yuquan Yuan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Shaodong Hou
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China; Clinical Medical College, North Sichuan Medical College, Nanchong, Sichuan, China
- Xing Wu
- College of Computer Science, Chongqing University, Chongqing, China
- Yuteng Wang
- College of Computer Science, Chongqing University, Chongqing, China
- Yiceng Sun
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Zeyu Yang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Supeng Yin
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China; Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China
- Fan Zhang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China; Clinical Medical College, North Sichuan Medical College, Nanchong, Sichuan, China; Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China
3. Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024;34:180-196. PMID: 36376203; PMCID: PMC11156786; DOI: 10.1016/j.zemedi.2022.10.005.
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. However, in interventional radiotherapy (brachytherapy), deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and we summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. Because reproducing the results of deep learning algorithms requires both source code and training data, a second focus of this work is the availability of open source code, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing year by year, partly self-propelled but also influenced by closely related fields. Open source code, data and models are growing in number but remain scarce and unevenly distributed among research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement in reproducibility and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany.
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
4. Hui X, Rajendran P, Ling T, Dai X, Xing L, Pramanik M. Ultrasound-guided needle tracking with deep learning: A novel approach with photoacoustic ground truth. Photoacoustics 2023;34:100575. PMID: 38174105; PMCID: PMC10761306; DOI: 10.1016/j.pacs.2023.100575.
Abstract
Accurate needle guidance is crucial for safe and effective clinical diagnosis and treatment procedures. Conventional ultrasound (US)-guided needle insertion often encounters challenges in consistently and precisely visualizing the needle, necessitating the development of reliable methods to track it. As a powerful tool in image processing, deep learning has shown promise for enhancing needle visibility in US images, although its dependence on manual annotation or simulated data as ground truth can introduce bias or hinder generalization to real US images. Photoacoustic (PA) imaging has demonstrated its capability for high-contrast needle visualization. In this study, we explore the potential of PA imaging as a reliable ground truth for deep learning network training without the need for expert annotation. Our network (UIU-Net), trained on ex vivo tissue image datasets, has shown remarkable precision in localizing needles within US images. The evaluation of needle segmentation performance extends across previously unseen ex vivo data and in vivo human data (collected from an open-source data repository). Specifically, for human data, the Modified Hausdorff Distance (MHD) value is approximately 3.73 and the targeting error is around 2.03, indicating strong similarity and small orientation deviation between the predicted and actual needle locations. A key advantage of our method is its applicability beyond US images captured from specific imaging systems, extending to images from other US imaging systems.
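The Modified Hausdorff Distance reported above can be made concrete with a short sketch. Only the metric definition (the larger of the two directed mean nearest-neighbour distances, in the style of Dubuisson and Jain) is standard; the needle point sets below are invented.

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """MHD between two point sets a, b of shape (N, 2)."""
    d = cdist(a, b)                  # pairwise Euclidean distances
    forward = d.min(axis=1).mean()   # mean nearest distance, a -> b
    backward = d.min(axis=0).mean()  # mean nearest distance, b -> a
    return max(forward, backward)

pred_needle = np.array([[10.0, 10.0], [20.0, 22.0], [30.0, 34.0]])
true_needle = np.array([[11.0, 10.0], [21.0, 23.0], [31.0, 35.0]])
print(modified_hausdorff(pred_needle, true_needle))
```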
Affiliation(s)
- Xie Hui
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore 637459, Singapore
- Praveenbalaji Rajendran
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Tong Ling
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore 637459, Singapore
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Xianjin Dai
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Lei Xing
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Manojit Pramanik
- Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, United States
5. Masoumi N, Rivaz H, Hacihaliloglu I, Ahmad MO, Reinertsen I, Xiao Y. The Big Bang of Deep Learning in Ultrasound-Guided Surgery: A Review. IEEE Trans Ultrason Ferroelectr Freq Control 2023;70:909-919. PMID: 37028313; DOI: 10.1109/tuffc.2023.3255843.
Abstract
Ultrasound (US) imaging is a paramount modality in many image-guided surgeries and percutaneous interventions, thanks to its high portability, temporal resolution, and cost-efficiency. However, due to its imaging principles, US images are often noisy and difficult to interpret. Appropriate image processing can greatly enhance the applicability of the modality in clinical practice. Compared with classic iterative optimization and machine learning (ML) approaches, deep learning (DL) algorithms have shown great performance in terms of accuracy and efficiency for US processing. In this work, we conduct a comprehensive review of deep-learning algorithms in the applications of US-guided interventions, summarize the current trends, and suggest future directions on the topic.
6. Zhao JZ, Ni R, Chow R, Rink A, Weersink R, Croke J, Raman S. Artificial intelligence applications in brachytherapy: A literature review. Brachytherapy 2023;22:429-445. PMID: 37248158; DOI: 10.1016/j.brachy.2023.04.003.
Abstract
PURPOSE Artificial intelligence (AI) has the potential to simplify and optimize various steps of the brachytherapy workflow, and this literature review aims to provide an overview of the work done in this field. METHODS AND MATERIALS We conducted a literature search in June 2022 on PubMed, Embase, and Cochrane for papers that proposed AI applications in brachytherapy. RESULTS A total of 80 papers satisfied the inclusion/exclusion criteria. These papers were categorized as follows: segmentation (24), registration and image processing (6), preplanning (13), dose prediction and treatment planning (11), applicator/catheter/needle reconstruction (16), and quality assurance (10). AI techniques ranged from classical models, such as support vector machines and decision tree-based learning, to newer techniques, such as U-Net and deep reinforcement learning, and were applied to facilitate small steps of a process (e.g., optimizing applicator selection) or to automate an entire step of the workflow (e.g., end-to-end preplanning). Many of these algorithms demonstrated human-level performance and offered significant improvements in speed. CONCLUSIONS AI has the potential to augment, automate, and/or accelerate many steps of the brachytherapy workflow. We recommend that future studies adhere to standard reporting guidelines. We also stress the importance of using larger sample sizes and reporting results using clinically interpretable measures.
Affiliation(s)
- Jonathan Zl Zhao
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ruiyan Ni
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Ronald Chow
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Alexandra Rink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Robert Weersink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Jennifer Croke
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Srinivas Raman
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
7. Yan W, Ding Q, Chen J, Yan K, Tang RSY, Cheng SS. Learning-based needle tip tracking in 2D ultrasound by fusing visual tracking and motion prediction. Med Image Anal 2023;88:102847. PMID: 37307759; DOI: 10.1016/j.media.2023.102847.
Abstract
Visual trackers are the most commonly adopted approach for needle tip tracking in ultrasound (US)-based procedures. However, they often perform unsatisfactorily in biological tissues due to significant background noise and anatomical occlusion. This paper presents a learning-based needle tip tracking system that consists of not only a visual tracking module but also a motion prediction module. In the visual tracking module, two sets of masks are designed to improve the tracker's discriminability, and a template update submodule keeps up to date with the needle tip's current appearance. In the motion prediction module, a Transformer network-based prediction architecture estimates the target's current position from its historical position data to tackle the problem of the target's temporary disappearance. A data fusion module then integrates the results from the visual tracking and motion prediction modules to provide robust and accurate tracking results. Our proposed tracking system showed distinct improvement over other state-of-the-art trackers in motorized needle insertion experiments in both gelatin phantom and biological tissue environments (e.g., a tracking success rate of 78% versus <60% in the most challenging "In-plane-static" scenario of the tissue experiments). Its robustness was also verified in manual needle insertion experiments under varying needle velocities and directions, and occasional temporary needle tip disappearance, with its tracking success rate more than 18% higher than that of the second-best performing tracking system. The proposed tracking system, with its computational efficiency, tracking robustness, and tracking accuracy, will lead to safer targeting during existing clinical practice of US-guided needle operations and could potentially be integrated into a tissue biopsy robotic system.
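The data-fusion step described above can be illustrated with a toy confidence-weighted blend of the visual tracker's and motion predictor's tip estimates; the actual module in the paper is learned and more elaborate, and the weights and values below are invented for illustration.

```python
import numpy as np

def fuse_tip_estimates(visual_xy, visual_conf, predicted_xy, predicted_conf):
    """Blend the visual tracker's and motion predictor's tip positions.

    When the tip temporarily disappears (visual_conf near 0), the fused
    output falls back to the motion prediction, and vice versa.
    """
    w = visual_conf / (visual_conf + predicted_conf + 1e-9)
    return w * np.asarray(visual_xy) + (1.0 - w) * np.asarray(predicted_xy)

print(fuse_tip_estimates([120.0, 84.0], 0.9, [118.0, 86.0], 0.4))  # tracker dominates
print(fuse_tip_estimates([0.0, 0.0], 0.0, [118.0, 86.0], 0.4))     # occlusion: prediction wins
```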
Affiliation(s)
- Wanquan Yan
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Qingpeng Ding
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Jianghua Chen
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Kim Yan
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Raymond Shing-Yan Tang
- Department of Medicine and Therapeutics and Institute of Digestive Disease, The Chinese University of Hong Kong, Hong Kong
- Shing Shin Cheng
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong; Institute of Medical Intelligence and XR, Multi-scale Medical Robotics Center, and Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, Hong Kong
8. Zhang Y, Dai X, Tian Z, Lei Y, Wynne JF, Patel P, Chen Y, Liu T, Yang X. Landmark tracking in liver US images using cascade convolutional neural networks with long short-term memory. Meas Sci Technol 2023;34:054002. PMID: 36743834; PMCID: PMC9893725; DOI: 10.1088/1361-6501/acb5b3.
Abstract
Accurate tracking of anatomic landmarks is critical for motion management in liver radiation therapy. Ultrasound (US) is a safe, low-cost technology that is broadly available and offers real-time imaging capability. This study proposed a deep learning-based tracking method for US image-guided radiation therapy. The proposed cascade deep learning model is composed of an attention network, a mask region-based convolutional neural network (Mask R-CNN), and a long short-term memory (LSTM) network. The attention network learns a mapping from a US image to a suspected area of landmark motion in order to reduce the search region. The Mask R-CNN then produces multiple region-of-interest proposals in the reduced region and identifies the proposed landmark via three network heads: bounding box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship among successive image frames for bounding box regression and proposal classification. To consolidate the final proposal, a selection method is designed according to the similarities between sequential frames. The proposed method was tested on the liver US tracking datasets used in the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 challenges, where the landmarks were annotated by three experienced observers to obtain their mean positions. Five-fold cross-validation on the 24 given US sequences with ground truths shows that the mean tracking error for all landmarks is 0.65 ± 0.56 mm, and the errors of all landmarks are within 2 mm. We further tested the proposed model on 69 landmarks from the testing dataset that have image patterns similar to the training data, resulting in a mean tracking error of 0.94 ± 0.83 mm. The proposed deep-learning model was implemented on a graphics processing unit (GPU), tracking 47-81 frames per second. Our experimental results demonstrate the feasibility and accuracy of the proposed method in tracking liver anatomic landmarks using US images, providing a potential solution for real-time liver tracking for active motion management during radiation therapy.
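A minimal PyTorch sketch of the temporal component described above: an LSTM that consumes a sequence of per-frame bounding-box vectors and regresses the current frame's box. Layer sizes and the toy input are assumptions; the full model also includes the attention network and the Mask R-CNN heads.

```python
import torch
import torch.nn as nn

class BoxLSTM(nn.Module):
    """LSTM over per-frame (x, y, w, h) boxes; regresses the current box."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 4)

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (batch, frames, 4) sequence of landmark bounding boxes
        out, _ = self.lstm(boxes)
        return self.head(out[:, -1])  # box estimate for the latest frame

model = BoxLSTM()
print(model(torch.randn(2, 10, 4)).shape)  # torch.Size([2, 4])
```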
Affiliation(s)
- Yupei Zhang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xianjin Dai
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, United States of America
- Zhen Tian
- Department of Radiation & Cellular Oncology, University of Chicago, Chicago, IL 60637, United States of America
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Jacob F Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Yue Chen
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, GA 30322, United States of America
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, GA 30322, United States of America
9. A Novel Deep Learning Model for Sea State Classification Using Visual-Range Sea Images. Symmetry (Basel) 2022. DOI: 10.3390/sym14071487.
Abstract
Wind-waves exhibit variations both in shape and steepness, and their asymmetrical nature is a well-known feature. One of the important characteristics of the sea surface is the front-back asymmetry of wind-wave crests. The wind-wave conditions on the surface of the sea constitute a sea state, which is listed as an essential climate variable by the Global Climate Observing System and is considered a critical factor for structural safety and optimal operations of offshore oil and gas platforms. Methods such as statistical representations of sensor-based wave parameters observations and numerical modeling are used to classify sea states. However, for offshore structures such as oil and gas platforms, these methods induce high capital expenditures (CAPEX) and operating expenses (OPEX), along with extensive computational power and time requirements. To address this issue, in this paper, we propose a novel, low-cost deep learning-based sea state classification model using visual-range sea images. Firstly, a novel visual-range sea state image dataset was designed and developed for this purpose. The dataset consists of 100,800 images covering four sea states. The dataset was then benchmarked on state-of-the-art deep learning image classification models. The highest classification accuracy of 81.8% was yielded by NASNet-Mobile. Secondly, a novel sea state classification model was proposed. The model took design inspiration from GoogLeNet, which was identified as the optimal reference model for sea state classification. Systematic changes in GoogLeNet’s inception block were proposed, which resulted in an 8.5% overall classification accuracy improvement in comparison with NASNet-Mobile and a 7% improvement from the reference model (i.e., GoogLeNet). Additionally, the proposed model took 26% less training time, and its per-image classification time remains competitive.
10. Daoud MI, Abu-Hani AF, Shtaiyat A, Ali MZ, Alazrai R. Needle detection using ultrasound B-mode and power Doppler analyses. Med Phys 2022;49:4999-5013. PMID: 35608237; DOI: 10.1002/mp.15725.
Abstract
BACKGROUND Ultrasound is employed in needle interventions to visualize anatomical structures and track the needle. Nevertheless, needle detection in ultrasound images is a difficult task, specifically at steep insertion angles. PURPOSE A new method is presented to enable effective needle detection using ultrasound B-mode and power Doppler analyses. METHODS A small buzzer is used to excite the needle, and an ultrasound system is utilized to acquire B-mode and power Doppler images of the needle. The B-mode and power Doppler images are processed using the Radon transform and local phase analysis to initially detect the axis of the needle. The detection of the needle axis is improved by processing the power Doppler image using alpha shape analysis to define a region of interest (ROI) that contains the needle. Also, a set of feature maps is extracted from the ROI in the B-mode image. The feature maps are processed using a machine learning classifier to construct a likelihood image that visualizes the posterior needle likelihood of each pixel. The Radon transform is applied to the likelihood image to achieve an improved needle axis detection. Additionally, the region in the B-mode image surrounding the needle axis is analyzed to identify the needle tip using a custom-made probabilistic approach. Our method was utilized to detect needles inserted in ex vivo animal tissues at shallow [20°-40°), moderate [40°-60°), and steep [60°-85°] angles. RESULTS Our method detected the needles with a failure rate of 0% and mean angle, axis, and tip errors less than or equal to 0.7°, 0.6 mm, and 0.7 mm, respectively. Additionally, our method achieved favorable results compared to two recently introduced needle detection methods. CONCLUSIONS The results indicate the potential of applying our method to achieve effective needle detection in ultrasound images.
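The Radon-transform step used above for initial needle-axis detection can be sketched as follows: a bright, roughly linear needle integrates to a peak in the sinogram, whose coordinates encode the needle's orientation and offset. The synthetic image and angle grid below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from skimage.transform import radon

# Synthetic B-mode-like image with one bright, roughly linear "needle".
image = np.zeros((128, 128))
rows = np.arange(20, 100)
cols = (0.6 * rows + 10).astype(int)
image[rows, cols] = 1.0

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta, circle=False)  # axes: (offset, angle)
offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
# The sinogram peak marks the projection angle and offset aligned with the
# needle, from which its axis in the image can be recovered.
print(f"sinogram peak at angle {theta[angle_idx]:.1f} deg, offset bin {offset_idx}")
```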
Affiliation(s)
- Mohammad I Daoud
- Department of Computer Engineering, German Jordanian University, Amman, 11180, Jordan
- Ayah F Abu-Hani
- Department of Electrical and Computer Engineering, Technical University of Munich, Munich, 80333, Germany
- Ahmad Shtaiyat
- Department of Computer Engineering, German Jordanian University, Amman, 11180, Jordan
- Mostafa Z Ali
- Department of Computer Information Systems, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Rami Alazrai
- Department of Computer Engineering, German Jordanian University, Amman, 11180, Jordan
11. Eidex Z, Wang T, Lei Y, Axente M, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. MRI-based prostate and dominant lesion segmentation using cascaded scoring convolutional neural network. Med Phys 2022;49:5216-5224. PMID: 35533237; PMCID: PMC9388615; DOI: 10.1002/mp.15687.
Abstract
PURPOSE Dose escalation to dominant intraprostatic lesions (DILs) is a novel treatment strategy to improve the treatment outcome of prostate radiation therapy. Treatment planning requires accurate and fast delineation of the prostate and DILs. In this study, a 3D cascaded scoring convolutional neural network is proposed to automatically segment the prostate and DILs from MRI. METHODS AND MATERIALS The proposed cascaded scoring convolutional neural network performs end-to-end segmentation by locating a region-of-interest (ROI), identifying the object within the ROI, and defining the target. A scoring strategy, which is learned to judge the segmentation quality of the DIL, is integrated into the cascaded convolutional neural network to address the challenge of segmenting the irregular shapes of the DIL. To evaluate the proposed method, 77 patients who underwent MRI and PET/CT were retrospectively investigated. The prostate and DIL ground truth contours were delineated by experienced radiologists. The proposed method was evaluated with five-fold cross-validation and holdout testing. RESULTS The average centroid distance, volume difference, and Dice similarity coefficient (DSC) for the prostate/DIL were 4.3 ± 7.5 mm / 3.73 ± 3.78 mm, 4.5 ± 7.9 cc / 0.41 ± 0.59 cc, and 89.6 ± 8.9% / 84.3 ± 11.9%, respectively. Comparable results were obtained in the holdout test. Similar or superior segmentation outcomes were seen when comparing the results of the proposed method to those of competing segmentation approaches. CONCLUSIONS The proposed automatic segmentation method can accurately and simultaneously segment both the prostate and DILs. The intended future use for this algorithm is focal boost prostate radiation therapy.
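The three volume metrics reported above (centroid distance, volume difference, and DSC) reduce to simple array operations on binary masks. The sketch below assumes isotropic 1 mm voxels and invented masks; it is not the authors' evaluation code.

```python
import numpy as np

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def centroid_distance(a: np.ndarray, b: np.ndarray) -> float:
    ca = np.array(np.nonzero(a)).mean(axis=1)
    cb = np.array(np.nonzero(b)).mean(axis=1)
    return float(np.linalg.norm(ca - cb))

def volume_difference_cc(a: np.ndarray, b: np.ndarray, voxel_cc: float = 0.001) -> float:
    return abs(int(a.sum()) - int(b.sum())) * voxel_cc

pred = np.zeros((64, 64, 64), dtype=bool)
gt = np.zeros((64, 64, 64), dtype=bool)
pred[20:40, 20:40, 20:40] = True
gt[22:42, 20:40, 20:40] = True
print(dsc(pred, gt), centroid_distance(pred, gt), volume_difference_cc(pred, gt))
```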
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Marian Axente
- Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Jeffery D Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
12. Feng J, Jiang J. Deep Learning-Based Chest CT Image Features in Diagnosis of Lung Cancer. Comput Math Methods Med 2022;2022:4153211. PMID: 35096129; PMCID: PMC8791752; DOI: 10.1155/2022/4153211.
Abstract
This study evaluated the diagnostic value of deep learning-optimized chest CT in patients with lung cancer. Ninety patients who were diagnosed with lung cancer by surgery or puncture in hospital were selected as the research subjects. The Mask Region-based Convolutional Neural Network (Mask R-CNN), a typical end-to-end image segmentation model, was employed, and a Dual Path Network (DPN) was used for nodule detection. The results showed that the accuracy of the DPN algorithm in detecting lung lesions in lung cancer patients was 88.74%; the accuracy of CT diagnosis of lung cancer was 88.37%, the sensitivity was 82.91%, and the specificity was 87.43%. Deep learning-based CT examination combined with serum tumor marker detection, incorporating neuron-specific enolase (NSE), cytokeratin 19 fragment (CYFRA21), carcinoembryonic antigen (CEA), and squamous cell carcinoma (SCC) antigen, improved the accuracy to 97.94%, the sensitivity to 98.12%, and the specificity to 100%, all showing significant differences (P < 0.05). In conclusion, this study provides a scientific basis for improving the diagnostic efficiency of CT imaging in lung cancer and theoretical support for subsequent lung cancer diagnosis and treatment.
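The diagnostic figures quoted above are confusion-matrix arithmetic; this sketch shows the standard definitions on invented counts (not the study's raw data).

```python
def diagnostics(tp: int, fp: int, tn: int, fn: int):
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Invented counts for illustration only.
print(diagnostics(tp=73, fp=2, tn=13, fn=2))
```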
Affiliation(s)
- Jianxin Feng
- Department of Interventional Therapy, People's Hospital of Baoji, Baoji City, 721000 Shaanxi Province, China
- Jun Jiang
- Department of Interventional Therapy, People's Hospital of Baoji, Baoji City, 721000 Shaanxi Province, China
13. Lee HH, Kwon BM, Yang CK, Yeh CY, Lee J. Measurement of laryngeal elevation by automated segmentation using Mask R-CNN. Medicine (Baltimore) 2021;100:e28112. PMID: 34941054; PMCID: PMC8702111; DOI: 10.1097/md.0000000000028112.
Abstract
The existing methods of measuring laryngeal elevation during swallowing are time-consuming. We aimed to propose a quick-to-use neural network (NN) model for quantitatively measuring laryngeal elevation using anatomical structures auto-segmented by a Mask region-based convolutional NN (Mask R-CNN) in videofluoroscopic swallowing studies. Twelve videofluoroscopic swallowing study video clips were collected. One researcher drew the anatomical structures, including the thyroid cartilage and vocal fold complex (TVC), on the respective video frames. The dataset was split into 11 videos (4686 frames) for model development and one video (532 frames) for testing the derived model. The validity of the trained model was evaluated using the intersection over union. The mean intersection over union values of the C1 spinous process and TVC were 0.73 ± 0.07 [0-0.88] and 0.43 ± 0.19 [0-0.79], respectively. The recall rates for the auto-segmentation of the TVC and C1 spinous process by the Mask R-CNN were 86.8% and 99.8%, respectively. Actual displacement of the larynx was calculated using the midpoints of the auto-segmented TVC and C1 spinous process and the diagonal lengths of the C3 and C4 vertebral bodies on magnetic resonance imaging, and measured 35.1 mm. The proposed method measures laryngeal elevation using the midpoints of the TVC and C1 spinous process, which Mask R-CNN auto-segmented with considerably high accuracy. Therefore, we expect that the proposed method can quantitatively and quickly determine laryngeal elevation in clinical settings.
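A sketch of the displacement computation described above: laryngeal elevation can be taken as the change in distance between the C1 spinous process and the TVC midpoint, converted from pixels to millimetres with a known anatomic length. The coordinates and scale factor below are invented; the study's exact geometry may differ.

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Row/column centroid of a binary segmentation mask."""
    return np.array(np.nonzero(mask)).mean(axis=1)

def elevation_mm(c1_rest, tvc_rest, c1_peak, tvc_peak, mm_per_px):
    rest = np.linalg.norm(np.asarray(c1_rest) - np.asarray(tvc_rest))
    peak = np.linalg.norm(np.asarray(c1_peak) - np.asarray(tvc_peak))
    # The larynx moves toward C1 during swallowing, shortening the distance.
    return (rest - peak) * mm_per_px

print(elevation_mm([50, 100], [180, 96], [50, 100], [132, 98], mm_per_px=0.27))
```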
Affiliation(s)
- Hyun Haeng Lee
- Department of Rehabilitation Medicine, Konkuk University School of Medicine and Konkuk University Medical Center, Seoul, Korea
- Bo Mi Kwon
- Department of Rehabilitation Medicine, Konkuk University School of Medicine and Konkuk University Medical Center, Seoul, Korea
- Jongmin Lee
- Department of Rehabilitation Medicine, Konkuk University School of Medicine and Konkuk University Medical Center, Seoul, Korea
- Center for Neuroscience Research, Institute of Biomedical Science & Technology, Konkuk University, Seoul, Korea
14. Lei Y, Wang T, Roper J, Jani AB, Patel SA, Curran WJ, Patel P, Liu T, Yang X. Male pelvic multi-organ segmentation on transrectal ultrasound using anchor-free mask CNN. Med Phys 2021;48:3055-3064. PMID: 33894057; DOI: 10.1002/mp.14895.
Abstract
PURPOSE Current prostate brachytherapy uses transrectal ultrasound images for implant guidance, where contours of the prostate and organs-at-risk are necessary for treatment planning and dose evaluation. This work aims to develop a deep learning-based method for male pelvic multi-organ segmentation on transrectal ultrasound images. METHODS We developed an anchor-free mask convolutional neural network (CNN) that consists of three subnetworks: a backbone, a fully convolutional one-stage object detector (FCOS), and a mask head. The backbone extracts multi-level and multi-scale features from an ultrasound (US) image. The FCOS utilizes these features to detect and label (classify) the volumes-of-interest (VOIs) of organs. In contrast to the design of the previously investigated mask region-based CNN (Mask R-CNN), the FCOS is anchor-free, which can capture the spatial correlation of multiple organs. The mask head performs segmentation on each detected VOI, where a spatial attention strategy is integrated into the mask head to focus on informative feature elements and suppress noise. For evaluation, we retrospectively investigated 83 prostate cancer patients by fivefold cross-validation and a hold-out test. The prostate, bladder, rectum, and urethra were segmented and compared with manual contours using the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance (MSD), center of mass distance (CMD), and volume difference (VD). RESULTS The proposed method visually outperforms two competing methods, showing better agreement with manual contours and fewer misidentified speckles. In the cross-validation study, the respective DSC and HD95 results were as follows for each organ: bladder 0.75 ± 0.12, 2.58 ± 0.7 mm; prostate 0.93 ± 0.03, 2.28 ± 0.64 mm; rectum 0.90 ± 0.07, 1.65 ± 0.52 mm; and urethra 0.86 ± 0.07, 1.85 ± 1.71 mm. For the hold-out tests, the DSC and HD95 results were as follows: bladder 0.76 ± 0.13, 2.93 ± 1.29 mm; prostate 0.94 ± 0.03, 2.27 ± 0.79 mm; rectum 0.92 ± 0.03, 1.90 ± 0.28 mm; and urethra 0.85 ± 0.06, 1.81 ± 0.72 mm. Segmentation was performed in under 5 seconds. CONCLUSION The proposed method demonstrated fast and accurate multi-organ segmentation performance. It can expedite the contouring step of prostate brachytherapy and potentially enable auto-planning and auto-evaluation.
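The HD95 metric used above can be sketched on two contour point sets: it replaces the maximum of the directed surface distances with their 95th percentile, damping the influence of single outlier points. The circular contours below are illustrative, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between point sets."""
    d = cdist(a, b)
    forward = d.min(axis=1)   # each point of a to its nearest point of b
    backward = d.min(axis=0)  # each point of b to its nearest point of a
    return float(np.percentile(np.concatenate([forward, backward]), 95))

t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
contour_a = np.c_[np.cos(t), np.sin(t)] * 20.0
contour_b = np.c_[np.cos(t), np.sin(t)] * 21.0  # contour 1 mm larger
print(hd95(contour_a, contour_b))               # close to 1.0
```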
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Sagar A Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
15. Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021;85:107-122. PMID: 33992856; PMCID: PMC8217246; DOI: 10.1016/j.ejmp.2021.05.003.
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications, so it is necessary to summarize the current state of development for deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories, 'pixel-wise classification' and 'end-to-end segmentation', and divided each category into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used the thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
16. Feng L, Zhao Y, Sun Y, Zhao W, Tang J. Action Recognition Using a Spatial-Temporal Network for Wild Felines. Animals (Basel) 2021;11:485. PMID: 33673162; PMCID: PMC7917733; DOI: 10.3390/ani11020485.
Abstract
Behavior analysis of wild felines has significance for the protection of the grassland ecological environment. Compared with human action recognition, fewer researchers have focused on feline behavior analysis. This paper proposes a novel two-stream architecture that incorporates spatial and temporal networks for wild feline action recognition. The spatial portion outlines the object region extracted by a Mask region-based convolutional neural network (Mask R-CNN) and builds a Tiny Visual Geometry Group (VGG) network for static action recognition. Compared with VGG16, the Tiny VGG network reduces the number of network parameters and avoids overfitting. The temporal part presents a novel skeleton-based action recognition model based on the fluctuation amplitude of the bending angle of the knee joints over a video clip. Owing to these temporal features, the model can effectively distinguish between different upright actions, such as standing, ambling, and galloping, particularly when the felines are occluded by objects such as plants or fallen trees. The experimental results showed that the proposed two-stream network model can effectively outline wild feline targets in captured images and can significantly improve the performance of wild feline action recognition thanks to its combined spatial and temporal features.
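The temporal stream's skeleton feature can be made concrete: the bending angle at a knee joint computed from three keypoints per frame, whose fluctuation amplitude over a clip helps separate gaits. The keypoint coordinates below are invented, not the paper's data.

```python
import numpy as np

def knee_angle(hip, knee, ankle) -> float:
    """Interior angle (degrees) at the knee between thigh and shank."""
    u = np.asarray(hip, float) - np.asarray(knee, float)
    v = np.asarray(ankle, float) - np.asarray(knee, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Two frames of invented hip/knee/ankle keypoints for one leg.
frames = [([0, 0], [4, 10], [2, 20]), ([0, 0], [5, 10], [0, 21])]
angles = [knee_angle(*f) for f in frames]
print(max(angles) - min(angles))  # fluctuation amplitude over the clip
```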
Affiliation(s)
- Liqi Feng
- College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
- Yaqin Zhao
- College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
- Yichao Sun
- Kidswant Children Products Co., Ltd., Nanjing 211135, China
- Wenxuan Zhao
- College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
- Jiaxi Tang
- College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
17. Andersén C, Rydén T, Thunberg P, Lagerlöf JH. Deep learning-based digitization of prostate brachytherapy needles in ultrasound images. Med Phys 2020;47:6414-6420. PMID: 33012023; PMCID: PMC7821271; DOI: 10.1002/mp.14508.
Abstract
PURPOSE To develop, and evaluate the performance of, a deep learning-based three-dimensional (3D) convolutional neural network (CNN) artificial intelligence (AI) algorithm aimed at finding needles in ultrasound images used in prostate brachytherapy. METHODS Transrectal ultrasound (TRUS) image volumes from 1102 treatments were used to create a clinical ground truth (CGT) including 24422 individual needles that had been manually digitized by medical physicists during brachytherapy procedures. A 3D CNN U-net with 128 × 128 × 128 TRUS image volumes as input was trained using 17215 needle examples. Predictions of voxels constituting a needle were combined to yield a 3D linear function describing the localization of each needle in a TRUS volume. Manual and AI digitizations were compared in terms of the root-mean-square distance (RMSD) along each needle, expressed as median and interquartile range (IQR). The method was evaluated on a data set including 7207 needle examples. A subgroup of the evaluation data set (n = 188) was created, where the needles were digitized once more by a medical physicist (G1) trained in brachytherapy. The digitization procedure was timed. RESULTS The RMSD between the AI and CGT was 0.55 (IQR: 0.35-0.86) mm. In the smaller subset, the RMSD between AI and CGT was similar (0.52 [IQR: 0.33-0.79] mm) but significantly smaller (P < 0.001) than the difference of 0.75 (IQR: 0.49-1.20) mm between AI and G1. The difference between CGT and G1 was 0.80 (IQR: 0.48-1.18) mm, implying that the AI performed as well as the CGT in relation to G1. The mean time needed for human digitization was 10 min 11 sec, while the time needed for the AI was negligible. CONCLUSIONS A 3D CNN can be trained to identify needles in TRUS images. The performance of the network was similar to that of a medical physicist trained in brachytherapy. Incorporating a CNN for needle identification can shorten brachytherapy treatment procedures substantially.
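The final digitization step described above, reducing the voxels predicted for one needle to a 3D line, can be sketched as a principal-component line fit. The voxel coordinates below are illustrative, and the paper's exact post-processing may differ.

```python
import numpy as np

def fit_needle_line(voxels: np.ndarray):
    """Fit a 3D line to (N, 3) voxel coordinates: returns the centroid and
    the unit direction (first principal component of the point cloud)."""
    center = voxels.mean(axis=0)
    _, _, vt = np.linalg.svd(voxels - center, full_matrices=False)
    return center, vt[0]

voxels = np.array([[10, 10, 0], [11, 10, 5], [12, 11, 10], [13, 11, 15]], float)
point, direction = fit_needle_line(voxels)
print(point, direction)
```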
Affiliation(s)
- Christoffer Andersén
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Tobias Rydén
- Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg, Sweden
- Per Thunberg
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Jakob H. Lagerlöf
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Department of Medical Physics, Karlstad Central Hospital, Karlstad, Sweden