1. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, Acharya R. Deep learning techniques in PET/CT imaging: A comprehensive review from sinogram to image space. Comput Methods Programs Biomed 2024; 243:107880. [PMID: 37924769] [DOI: 10.1016/j.cmpb.2023.107880]
Abstract
Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. Its success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of either modality used in isolation for different malignancies. However, manual image interpretation requires extensive disease-specific knowledge and is a time-consuming part of physicians' daily routines. Deep learning algorithms, much like a practitioner during training, extract knowledge from images and support the diagnostic process through symptom detection and image enhancement. Existing review papers on PET/CT imaging either include additional modalities or survey a broad range of AI applications; a comprehensive investigation focused specifically on deep learning applied to PET/CT images has been lacking. This review aims to fill that gap by characterizing the approaches used in papers that employed deep learning for PET/CT imaging. Within the review, we identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images. We also identified the best-performing pre-processing algorithms and the most effective deep learning models reported for PET/CT, while highlighting current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image space. Common and modality-specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features, enhancing accuracy and efficiency in diagnosis. However, limitations arise from the scarcity of annotated datasets and from challenges in explainability and uncertainty estimation. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and prediction of patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
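The multi-modal models highlighted in this abstract typically combine PET and CT information inside a single network. As a rough illustration of the idea only, here is a minimal two-branch late-fusion classifier in PyTorch; the layer sizes, input shapes, and class count are illustrative assumptions, not a reconstruction of any model covered by the review.

```python
# Minimal sketch of late-fusion PET/CT classification; all sizes are illustrative.
import torch
import torch.nn as nn

class PETCTFusionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One small convolutional encoder per modality (PET and CT slices).
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1)
            )
        self.pet_encoder = encoder()
        self.ct_encoder = encoder()
        # Fuse the two 32-dim feature vectors and classify.
        self.head = nn.Linear(64, num_classes)

    def forward(self, pet: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        f_pet = self.pet_encoder(pet).flatten(1)
        f_ct = self.ct_encoder(ct).flatten(1)
        return self.head(torch.cat([f_pet, f_ct], dim=1))

# Example forward pass on random single-channel 128x128 slices.
model = PETCTFusionNet()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```

A late-fusion design like this keeps modality-specific encoders separate and merges features just before classification; many published architectures instead fuse earlier, for example by stacking PET and CT as input channels.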
Affiliation(s)
- Maryam Fallahpoor
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia.
- Oliver Faust
- School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Prabal Datta Barua
- School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD, Australia
2. Garcea F, Serra A, Lamberti F, Morra L. Data augmentation for medical imaging: A systematic literature review. Comput Biol Med 2023; 152:106391. [PMID: 36549032] [DOI: 10.1016/j.compbiomed.2022.106391]
Abstract
Recent advances in deep learning have largely benefited from larger and more diverse training sets. However, collecting large datasets for medical imaging remains a challenge due to privacy concerns and labeling costs. Data augmentation makes it possible to greatly expand the amount and variety of data available for training without actually collecting new samples. Data augmentation techniques range from simple yet surprisingly effective transformations, such as cropping, padding, and flipping, to complex generative models. Depending on the nature of the input and the visual task, different data augmentation strategies are likely to perform differently. For this reason, it is conceivable that medical imaging requires specific augmentation strategies that generate plausible data samples and enable effective regularization of deep neural networks. Data augmentation can also be used to boost specific classes that are underrepresented in the training set, e.g., by generating artificial lesions. The goal of this systematic literature review is to investigate which data augmentation strategies are used in the medical domain and how they affect the performance of clinical tasks such as classification, segmentation, and lesion detection. To this end, a comprehensive analysis of more than 300 articles published in recent years (2018-2022) was conducted. The results highlight the effectiveness of data augmentation across organs, modalities, tasks, and dataset sizes, and suggest potential avenues for future research.
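As a concrete illustration of the basic transformations the abstract mentions (cropping, padding, flipping), the sketch below applies them to a 2D array with NumPy alone; the parameter choices are illustrative assumptions, not recommendations from the review.

```python
# Minimal sketch of basic 2D augmentations (flip, pad, random crop) using NumPy.
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, pad: int = 8) -> np.ndarray:
    """Random horizontal/vertical flip, then zero-pad and random crop back."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    if rng.random() < 0.5:
        image = np.flipud(image)
    h, w = image.shape
    padded = np.pad(image, pad, mode="constant")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

image = rng.random((64, 64))           # stand-in for a grayscale slice
batch = np.stack([augment(image) for _ in range(16)])
print(batch.shape)                      # (16, 64, 64)
```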
Affiliation(s)
- Fabio Garcea
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Alessio Serra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Fabrizio Lamberti
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Lia Morra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy.
3. Dirks I, Keyaerts M, Neyns B, Vandemeulebroucke J. Computer-aided detection and segmentation of malignant melanoma lesions on whole-body 18F-FDG PET/CT using an interpretable deep learning approach. Comput Methods Programs Biomed 2022; 221:106902. [PMID: 35636357] [DOI: 10.1016/j.cmpb.2022.106902]
Abstract
BACKGROUND AND OBJECTIVE In oncology, 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET)/computed tomography (CT) is widely used to identify and analyse metabolically-active tumours. The combination of the high sensitivity and specificity of 18F-FDG PET with the high resolution of CT makes accurate assessment of disease status and treatment response possible. Since cancer is a systemic disease, whole-body imaging is of high interest. Moreover, whole-body metabolic tumour burden is emerging as a promising new biomarker for predicting the outcome of innovative immunotherapy in different tumour types. However, this comes with certain challenges, such as the large amount of data to read manually, the varying appearance of lesions across the body, and cumbersome reporting, hampering its use in clinical routine. Automating the reading can facilitate the process, maximise the information retrieved from the images, and support clinicians in making treatment decisions. METHODS This work proposes a fully automated system for lesion detection and segmentation on whole-body 18F-FDG PET/CT. The novelty of the method stems from adopting the same two-step approach used when manually reading the images: an intensity-based thresholding on PET, followed by a classification that specifies which regions represent normal physiological uptake and which are malignant tissue. The dataset contained 69 patients treated for malignant melanoma. Baseline and follow-up scans together provided 267 images for training and testing. RESULTS On an unseen dataset of 53 PET/CT images, a median F1-score of 0.7500 was achieved, with on average 1.566 false-positive lesions per scan. Metabolically-active tumours were segmented with a median Dice score of 0.8493 and an absolute volume difference of 0.2986 ml. CONCLUSIONS The proposed fully automated method for the detection and segmentation of metabolically-active lesions on whole-body 18F-FDG PET/CT achieved competitive results. Moreover, it was compared with a direct segmentation approach, which it outperformed on all metrics.
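The two-step reading pipeline the abstract describes (intensity thresholding on PET, then classifying each candidate region as physiological or malignant uptake) can be sketched roughly as follows; the SUV threshold, hand-crafted features, and random-forest classifier are illustrative assumptions, not the authors' actual deep learning configuration.

```python
# Rough sketch of threshold-then-classify lesion detection on a PET volume.
# Threshold value, features, and classifier are assumptions for illustration.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def candidate_regions(pet_suv: np.ndarray, suv_threshold: float = 2.5):
    """Step 1: intensity thresholding + connected components on PET."""
    labels, n = ndimage.label(pet_suv > suv_threshold)
    return labels, n

def region_features(pet_suv: np.ndarray, labels: np.ndarray, n: int) -> np.ndarray:
    """Simple per-region features: volume (voxels), max SUV, mean SUV."""
    feats = []
    for region in range(1, n + 1):
        mask = labels == region
        feats.append([mask.sum(), pet_suv[mask].max(), pet_suv[mask].mean()])
    return np.array(feats)

# Step 2: classify candidates as physiological uptake vs. malignant tissue.
rng = np.random.default_rng(0)
pet = rng.gamma(shape=1.5, scale=1.0, size=(32, 32, 32))  # toy SUV volume
labels, n = candidate_regions(pet)
X = region_features(pet, labels, n)
y = rng.integers(0, 2, size=len(X))  # stand-in annotations
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X)[:10])
```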
Affiliation(s)
- Ine Dirks
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), Brussels, Belgium; imec, Leuven, Belgium.
- Marleen Keyaerts
- Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Department of Nuclear Medicine, Brussels, Belgium
- Bart Neyns
- Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Department of Medical Oncology, Brussels, Belgium
- Jef Vandemeulebroucke
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), Brussels, Belgium; imec, Leuven, Belgium; Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Department of Radiology, Brussels, Belgium
4. Egger J, Wild D, Weber M, Bedoya CAR, Karner F, Prutsch A, Schmied M, Dionysio C, Krobath D, Jin Y, Gsaxner C, Li J, Pepe A. Studierfenster: an Open Science Cloud-Based Medical Imaging Analysis Platform. J Digit Imaging 2022; 35:340-355. [PMID: 35064372] [PMCID: PMC8782222] [DOI: 10.1007/s10278-021-00574-8]
Abstract
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples are two- and three-dimensional visualizations, image segmentation, and the registration of anatomical structures and pathologies of all types. In this context, we introduce Studierfenster (www.studierfenster.at): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing them with two ground-truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture that is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, in applying the existing functionalities or future implementations of further image processing applications. An example could be the processing of medical acquisitions like CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448–456, 68], which are becoming more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images/volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
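Of the metrics Studierfenster computes, the Dice score and Hausdorff distance can be reproduced in a few lines; this sketch uses NumPy and SciPy on binary volumes and is an independent illustration, not the platform's own implementation.

```python
# Sketch: Dice similarity coefficient and (undirected) Hausdorff distance
# between two binary segmentation volumes, using NumPy and SciPy.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks a, b."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Undirected HD = max of the two directed distances over voxel coordinates."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Two toy overlapping cubes inside a 32^3 volume.
a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros((32, 32, 32), dtype=bool); b[10:22, 10:22, 10:22] = True
print(f"Dice: {dice(a, b):.3f}, Hausdorff: {hausdorff(a, b):.2f}")
```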
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria.
- Computer Algorithms for Medicine Laboratory, Graz, Austria.
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany.
- Daniel Wild
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Maximilian Weber
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christopher A Ramirez Bedoya
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Florian Karner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Alexander Prutsch
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Michael Schmied
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Dionysio
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Dominik Krobath
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, 311121, Hangzhou, Zhejiang, China
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
5. Egger J, Pepe A, Gsaxner C, Jin Y, Li J, Kern R. Deep learning-a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact. PeerJ Comput Sci 2021; 7:e773. [PMID: 34901429] [PMCID: PMC8627237] [DOI: 10.7717/peerj-cs.773]
Abstract
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and the connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform state-of-the-art methods in different tasks and, because of this, the whole field has seen exponential growth in recent years, resulting in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already provides over 11,000 results in Q3 2020 for the search term 'deep learning', and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of a subfield. However, there are several review articles about deep learning that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact they have already had in a short period of time. The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, Zhejiang, China
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University of Graz, Graz, Austria
- Roman Kern
- Knowledge Discovery, Know-Center, Graz, Austria
- Institute of Interactive Systems and Data Science, Graz University of Technology, Graz, Austria
| |
6. Mehrotra D, Markus A. Emerging simulation technologies in global craniofacial surgical training. J Oral Biol Craniofac Res 2021; 11:486-499. [PMID: 34345584] [PMCID: PMC8319526] [DOI: 10.1016/j.jobcr.2021.06.002]
Abstract
The last few decades have seen exponential growth in the development and adoption of novel technologies in the medical and surgical training of residents globally. Simulation is an active and innovative teaching method, and can be achieved via physical or digital models. Simulation allows learners to practice repeatedly without the risk of causing any error in an actual patient, and to enhance their surgical skills and efficiency. Simulation may also allow the clinical instructor to objectively test the ability of the trainee to carry out the clinical procedure competently and independently prior to the trainee's completion of the program. This review aims to explore the role of emerging simulation technologies globally in the craniofacial training of students and residents in improving their surgical knowledge and skills. These technologies include 3D-printed biomodels, virtual and augmented reality, the use of Google Glass, HoloLens and haptic feedback, surgical boot camps, serious games and escape games, and how they can be implemented in low- and middle-income countries. Craniofacial surgical training methods will probably go through a sea change in the coming years, with the integration of these new technologies into the surgical curriculum, allowing learning in a safe environment with a virtual patient, through repeated exercise. In future, simulation may also be used as an assessment tool for performing specific procedures, without putting an actual patient at risk. Although these new technologies are being enthusiastically welcomed by young surgeons, they should only be used as an addition to the actual curriculum and not as a replacement for conventional tools, as the mentor-mentee relationship can never be replaced by any technology.
Affiliation(s)
- Divya Mehrotra
- Department of Oral and Maxillofacial Surgery KGMU, Lucknow, India
- A.F. Markus
- Emeritus Consultant Maxillofacial Surgeon, Poole Hospital University of Bournemouth, University of Duisburg-Essen, Trinity College, Dublin, Ireland
7. Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. [PMID: 34145766] [DOI: 10.1111/1754-9485.13261]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data that is used to train the model, and has been shown to improve performance when validated on a separate unseen dataset. This approach has become commonplace, so to help understand the types of data augmentation techniques used in state-of-the-art deep learning models, we conducted a systematic review of the literature where data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
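The "deformable" category referred to above usually means elastic or B-spline warping of the image grid. The following sketch shows a simple random elastic deformation using SciPy; the smoothing and magnitude parameters are chosen purely for illustration.

```python
# Sketch of a random elastic deformation (a "deformable" augmentation) in 2D.
# Smoothing sigma and displacement magnitude alpha are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image: np.ndarray, alpha: float = 10.0, sigma: float = 4.0,
                   seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Smooth random displacement fields for both axes.
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([y + dy, x + dx])
    # Resample the image at the displaced coordinates (bilinear interpolation).
    return map_coordinates(image, coords, order=1, mode="reflect")

image = np.zeros((64, 64)); image[24:40, 24:40] = 1.0  # toy square "organ"
warped = elastic_deform(image)
print(warped.shape, warped.min(), warped.max())
```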
Affiliation(s)
- Phillip Chlap
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia; Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
8.
Abstract
Positron emission tomography (PET)/computed tomography (CT) is a nuclear diagnostic imaging modality routinely deployed for cancer staging and monitoring. It holds the advantage of detecting disease-related biochemical and physiologic abnormalities in advance of anatomical changes, and is thus widely used for staging of disease progression, identification of the treatment gross tumor volume, monitoring of disease, as well as prediction of outcomes and personalization of treatment regimens. Among the arsenal of different functional imaging modalities, nuclear imaging has benefited from early adoption of quantitative image analysis, starting from simple standard uptake value normalization to more advanced extraction of complex imaging uptake patterns, thanks to the application of sophisticated image processing and machine learning algorithms. In this review, we discuss the application of image processing and machine/deep learning techniques to PET/CT imaging, with a special focus on the oncological radiotherapy domain as a case study, and draw examples from our work and that of others to highlight the current status and future potential.
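The "standard uptake value normalization" mentioned above is, in its most common body-weight form, the measured activity concentration divided by the injected dose per unit body weight. The sketch below computes it for an activity image; the decay correction of the injected dose to scan time is an assumption about the acquisition workflow, and all input values are toy numbers.

```python
# Sketch: body-weight SUV normalization of a PET activity image.
# SUV = tissue activity concentration / (decay-corrected injected dose / body weight).
import numpy as np

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18

def suv_bw(activity_kbq_per_ml: np.ndarray, injected_dose_mbq: float,
           body_weight_kg: float, minutes_since_injection: float) -> np.ndarray:
    # Decay-correct the injected dose to scan time.
    dose_kbq = (injected_dose_mbq * 1000.0
                * 2.0 ** (-minutes_since_injection / F18_HALF_LIFE_MIN))
    # Body weight in grams; assuming tissue density ~1 g/mL, units cancel.
    return activity_kbq_per_ml / (dose_kbq / (body_weight_kg * 1000.0))

activity = np.full((4, 4), 5.0)  # toy activity image, kBq/mL
print(suv_bw(activity, injected_dose_mbq=350.0, body_weight_kg=75.0,
             minutes_since_injection=60.0)[0, 0])
```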
Affiliation(s)
- Lise Wei
- Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, MI
- Issam El Naqa
- Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, MI.
9. Zukotynski K, Gaudet V, Uribe CF, Mathotaarachchi S, Smith KC, Rosa-Neto P, Bénard F, Black SE. Machine Learning in Nuclear Medicine: Part 2-Neural Networks and Clinical Aspects. J Nucl Med 2020; 62:22-29. [PMID: 32978286] [DOI: 10.2967/jnumed.119.231837]
Abstract
This article is the second part in our machine learning series. Part 1 provided a general overview of machine learning in nuclear medicine. Part 2 focuses on neural networks. We start with an example illustrating how neural networks work and a discussion of potential applications. Recognizing that there is a spectrum of applications, we focus on recent publications in the areas of image reconstruction, low-dose PET, disease detection, and models used for diagnosis and outcome prediction. Finally, since the way machine learning algorithms are reported in the literature is extremely variable, we conclude with a call to arms regarding the need for standardized reporting of design and outcome metrics and we propose a basic checklist our community might follow going forward.
Affiliation(s)
- Katherine Zukotynski
- Departments of Medicine and Radiology, McMaster University, Hamilton, Ontario, Canada
- Vincent Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
- Carlos F Uribe
- PET Functional Imaging, BC Cancer, Vancouver, British Columbia, Canada
- Kenneth C Smith
- Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
- Pedro Rosa-Neto
- Translational Neuroimaging Lab, McGill University, Montreal, Quebec, Canada
- François Bénard
- PET Functional Imaging, BC Cancer, Vancouver, British Columbia, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
- Sandra E Black
- Department of Medicine (Neurology), Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Medicine (Neurology), Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
10.
Abstract
CLINICAL ISSUE Hybrid imaging enables the precise visualization of cellular metabolism by combining anatomical and metabolic information. Advances in artificial intelligence (AI) offer new methods for processing and evaluating these data. METHODOLOGICAL INNOVATIONS This review summarizes current developments and applications of AI methods in hybrid imaging. Applications in image processing as well as methods for disease-related evaluation are presented and discussed. MATERIALS AND METHODS This article is based on a selective literature search with the search engines PubMed and arXiv. ASSESSMENT Currently, there are only a few AI applications using hybrid imaging data, and none are yet established in clinical routine. Although the first promising approaches are emerging, they still need to be evaluated prospectively. In the future, AI applications will support radiologists and nuclear medicine physicians in diagnosis and therapy.
Affiliation(s)
- Christian Strack
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University, Heidelberg, Germany
- Robert Seifert
- Department of Nuclear Medicine, Medical Faculty, University Hospital Essen, Essen, Germany
- Jens Kleesiek
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- German Cancer Consortium (DKTK), Heidelberg, Germany.
11. Weber M, Wild D, Wallner J, Egger J. A Client/Server based Online Environment for the Calculation of Medical Segmentation Scores. Annu Int Conf IEEE Eng Med Biol Soc 2019:3463-3467. [PMID: 31946624] [DOI: 10.1109/embc.2019.8856481]
Abstract
Image segmentation plays a major role in medical imaging. Especially in radiology, the detection of tumors and other diseases, and the monitoring of their development, can be supported by image segmentation applications. Tools that provide image segmentation and the calculation of segmentation scores are not available on every device at any time, owing to their size and the scope of functionality they offer. Such tools require large periodic updates and often do not work properly on older or less powerful systems. However, medical use-cases often require fast and accurate results, and complex, slow software can lead to additional stress and thus unnecessary errors. The aim of this contribution is the development of a cross-platform tool for medical segmentation use-cases: a device-independent, always-available option for medical imaging, including manual segmentation and metric calculation. The result is Studierfenster (studierfenster.at), a web tool for manual segmentation and segmentation metric calculation. In this contribution, the focus lies on the segmentation metric calculation part of the tool. It provides the functionality to calculate directed and undirected Hausdorff Distance (HD) and Dice Similarity Coefficient (DSC) scores for two uploaded volumes, to filter for specific values, to search for specific values in the calculated metrics, and to export filtered metric lists in different file formats.
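Entry 4 above already sketches the Dice and Hausdorff computations themselves, so the sketch below illustrates the surrounding workflow this abstract describes: computing both directed Hausdorff distances and their undirected maximum for a batch of cases, filtering for specific values, and exporting the filtered metric list. The point-cloud inputs, threshold, and CSV format are illustrative assumptions, not Studierfenster's actual behaviour.

```python
# Sketch: directed vs. undirected Hausdorff distances for a batch of cases,
# plus filtering and CSV export of the metric list. Threshold and file name
# are illustrative assumptions.
import csv
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
rows = []
for case_id in range(5):
    # Stand-in point clouds for two uploaded segmentation surfaces.
    u = rng.random((200, 3)) * 50
    v = rng.random((200, 3)) * 50 + rng.normal(0, 1)
    d_uv = directed_hausdorff(u, v)[0]    # directed HD, u -> v
    d_vu = directed_hausdorff(v, u)[0]    # directed HD, v -> u
    rows.append({"case": case_id, "hd_uv": d_uv, "hd_vu": d_vu,
                 "hd": max(d_uv, d_vu)})  # undirected HD

filtered = [r for r in rows if r["hd"] < 10.0]  # filter for specific values
with open("metrics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["case", "hd_uv", "hd_vu", "hd"])
    writer.writeheader()
    writer.writerows(filtered)
print(f"kept {len(filtered)} of {len(rows)} cases")
```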