101. Duclos V, Iep A, Gomez L, Goldfarb L, Besson FL. PET Molecular Imaging: A Holistic Review of Current Practice and Emerging Perspectives for Diagnosis, Therapeutic Evaluation and Prognosis in Clinical Oncology. Int J Mol Sci 2021;22:4159. [PMID: 33923839] [PMCID: PMC8073681] [DOI: 10.3390/ijms22084159]
Abstract
PET/CT molecular imaging has become established in clinical oncological practice over the past 20 years, driven by its two well-grounded foundations: quantification and radiolabeled molecular probe vectorization. From basic visual interpretation to more sophisticated full kinetic modeling, PET technology provides a unique opportunity to characterize various biological processes at different levels of analysis. In clinical practice, many efforts have been made over the last two decades to standardize image analyses at the international level, but advanced metrics remain underused in routine practice. In parallel, the integration of PET imaging with radionuclide therapy, also known as radiolabeled theranostics, has paved the way towards highly sensitive radionuclide-based precision medicine, with major breakthroughs emerging in neuroendocrine tumors and prostate cancer. PET imaging of tumor immunity and beyond is also emerging, emphasizing the unique capability of PET molecular imaging to adapt continually to emerging oncological challenges. However, these new horizons face the growing complexity of multidimensional data. In the era of precision medicine, statistical and computer sciences are currently revolutionizing image-based decision making, paving the way for more holistic cancer molecular imaging analyses at the whole-body level.
Affiliation(s)
- Valentin Duclos
- Department of Biophysics and Nuclear Medicine-Molecular Imaging, Hôpitaux Universitaires Paris Saclay, Assistance Publique-Hôpitaux de Paris, CHU Bicêtre, 94270 Le Kremlin-Bicêtre, France
- Alex Iep
- Department of Biophysics and Nuclear Medicine-Molecular Imaging, Hôpitaux Universitaires Paris Saclay, Assistance Publique-Hôpitaux de Paris, CHU Bicêtre, 94270 Le Kremlin-Bicêtre, France
- Léa Gomez
- Department of Biophysics and Nuclear Medicine-Molecular Imaging, Hôpitaux Universitaires Paris Saclay, Assistance Publique-Hôpitaux de Paris, CHU Bicêtre, 94270 Le Kremlin-Bicêtre, France
- Lucas Goldfarb
- Service Hospitalier Frédéric Joliot-CEA, 91401 Orsay, France
- Florent L. Besson
- Department of Biophysics and Nuclear Medicine-Molecular Imaging, Hôpitaux Universitaires Paris Saclay, Assistance Publique-Hôpitaux de Paris, CHU Bicêtre, 94270 Le Kremlin-Bicêtre, France
- Université Paris Saclay, CEA, CNRS, Inserm, BioMaps, 91401 Orsay, France
- School of Medicine, Université Paris Saclay, 94720 Le Kremlin-Bicêtre, France
102. An FP, Liu JE, Wang JR. Medical image segmentation algorithm based on positive scaling invariant-self encoding CCA. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102395]
103. Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data 2021;8:53. [PMID: 33816053] [PMCID: PMC8010506] [DOI: 10.1186/s40537-021-00444-8]
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been used extensively to successfully address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them tackled only one aspect of it, leading to an overall lack of knowledge about the field. Therefore, this contribution proposes a more holistic approach, providing a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, the paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, challenges and suggested solutions are presented to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad 10001, Iraq
- Ayad Al-Dujaili
- Electrical Engineering Technical College, Middle Technical University, Baghdad 10001, Iraq
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq
- Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Laith Farhan
- School of Engineering, Manchester Metropolitan University, Manchester, M1 5GD, UK
104. Iantsen A, Ferreira M, Lucia F, Jaouen V, Reinhold C, Bonaffini P, Alfieri J, Rovira R, Masson I, Robin P, Mervoyer A, Rousseau C, Kridelka F, Decuypere M, Lovinfosse P, Pradier O, Hustinx R, Schick U, Visvikis D, Hatt M. Convolutional neural networks for PET functional volume fully automatic segmentation: development and validation in a multi-center setting. Eur J Nucl Med Mol Imaging 2021;48:3444-3456. [PMID: 33772335] [PMCID: PMC8440243] [DOI: 10.1007/s00259-021-05244-z]
Abstract
Purpose: In this work, we addressed fully automatic determination of tumor functional uptake from positron emission tomography (PET) images, without relying on other image modalities or additional prior constraints, in the context of multicenter images with heterogeneous characteristics. Methods: In cervical cancer, an additional challenge is the location of the tumor uptake near, or even abutting, the bladder. PET datasets of 232 patients from five institutions were exploited. To avoid unreliable manual delineations, the ground truth was generated with a semi-automated approach: a volume containing the tumor and excluding the bladder was first manually determined, then a well-validated, semi-automated approach relying on the Fuzzy Locally Adaptive Bayesian (FLAB) algorithm was applied to generate the ground truth. Our model, built on the U-Net architecture, incorporates residual blocks with concurrent spatial squeeze-and-excitation modules, as well as learnable non-linear downsampling and upsampling blocks. Experiments relied on cross-validation (four institutions for training and validation, and the fifth for testing). Results: The model achieved a good Dice similarity coefficient (DSC) with little variability across institutions (0.80 ± 0.03), with higher recall (0.90 ± 0.05) than precision (0.75 ± 0.05), and improved results over the standard U-Net (DSC 0.77 ± 0.05, recall 0.87 ± 0.02, precision 0.74 ± 0.08). Both vastly outperformed a fixed threshold at 40% of SUVmax (DSC 0.33 ± 0.15, recall 0.52 ± 0.17, precision 0.30 ± 0.16). In all cases, the model could determine the tumor uptake without including the bladder. Neither shape priors nor anatomical information was required to achieve efficient training. Conclusion: The proposed method could facilitate the deployment of a fully automated radiomics pipeline in such a challenging multicenter context.
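For readers less familiar with the overlap metrics reported above, the following minimal numpy sketch (an illustration, not the authors' code) shows how the Dice similarity coefficient, recall, and precision are computed from binary masks, together with the fixed 40%-of-SUVmax threshold baseline the paper compares against.

```python
import numpy as np

def dice_recall_precision(pred, gt):
    """Overlap metrics between a predicted and a ground-truth binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # true-positive voxels
    dsc = 2.0 * tp / (pred.sum() + gt.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)              # fraction of the truth recovered
    precision = tp / (pred.sum() + 1e-8)         # fraction of the prediction that is correct
    return dsc, recall, precision

def threshold_40_suvmax(suv):
    """Fixed-threshold baseline: keep voxels above 40% of SUVmax."""
    return suv >= 0.4 * suv.max()

# toy example on a synthetic uptake volume
suv = np.random.rand(32, 32, 32)
print(dice_recall_precision(threshold_40_suvmax(suv), suv > 0.7))
```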
Affiliation(s)
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Marta Ferreira
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
- Francois Lucia
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Vincent Jaouen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Caroline Reinhold
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
- Pietro Bonaffini
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
- Joanne Alfieri
- Department of Radiation Oncology, McGill University Health Centre (MUHC), Montreal, Canada
- Ramon Rovira
- Gynecology Oncology and Laparoscopy Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Ingrid Masson
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
- Philippe Robin
- Nuclear Medicine Department, University Hospital, Brest, France
- Augustin Mervoyer
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
- Caroline Rousseau
- Nuclear Medicine Department, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
- Frédéric Kridelka
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
- Marjolein Decuypere
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
- Pierre Lovinfosse
- Division of Nuclear Medicine and Oncological Imaging, University Hospital of Liège, Liège, Belgium
- Roland Hustinx
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
- Ulrike Schick
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
105. da Costa-Luis CO, Reader AJ. Micro-Networks for Robust MR-Guided Low Count PET Imaging. IEEE Trans Radiat Plasma Med Sci 2021;5:202-212. [PMID: 33681546] [PMCID: PMC7931458] [DOI: 10.1109/trpms.2020.2986414]
Abstract
Noise suppression is particularly important in low count positron emission tomography (PET) imaging. Post-smoothing (PS) and regularization methods, which aim to reduce noise, also tend to reduce resolution and introduce bias. Alternatively, anatomical information from another modality, such as magnetic resonance (MR) imaging, can be used to improve image quality. Convolutional neural networks (CNNs) are particularly well suited to such joint image processing, but usually require large amounts of training data and have mostly been applied outside the field of medical imaging or focused on classification and segmentation, leaving PET image quality improvement relatively understudied. This article proposes the use of a relatively low-complexity CNN (micro-net) as a post-reconstruction MR-guided image processing step to reduce noise and reconstruction artefacts while also improving resolution in low count PET scans. The CNN is designed to be fully 3-D, robust to very limited amounts of training data, and to accept multiple inputs (including competitive denoising methods). Application of the proposed CNN to simulated low (30 M) count data (trained to produce standard (300 M) count reconstructions) results in a 36% lower normalized root mean squared error (NRMSE, calculated over ten realizations against the ground truth) compared to maximum-likelihood expectation maximization (MLEM) as used in clinical practice. In contrast, a decrease of only 25% in NRMSE is obtained when an optimized (using knowledge of the ground truth) PS is performed. A 26% NRMSE decrease is obtained with both resolution modelling (RM) and optimized PS. Similar improvements are also observed for low count real patient datasets. Overfitting to training data is demonstrated to occur as the network size is increased. In an extreme case, a U-net (which produces better predictions for training data) is shown to completely fail on test data due to overfitting in this case of very limited training data. Meanwhile, the resultant images from the proposed CNN (which has low training data requirements) have lower noise, reduced ringing, and fewer partial volume effects, as well as sharper edges and improved resolution compared to conventional MLEM.
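The NRMSE figure quoted above can be made concrete with a short sketch (a generic illustration, not the authors' code; one common definition of NRMSE is assumed, normalised by the reference root mean square and averaged over noise realizations as in the reported experiments).

```python
import numpy as np

def nrmse(recon, reference):
    """Root mean squared error normalised by the reference RMS."""
    return np.sqrt(np.mean((recon - reference) ** 2)) / np.sqrt(np.mean(reference ** 2))

reference = np.random.rand(64, 64, 64)  # stand-in for the ground-truth image
realizations = [reference + 0.1 * np.random.randn(*reference.shape) for _ in range(10)]
print(np.mean([nrmse(r, reference) for r in realizations]))  # mean over 10 realizations
```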
Affiliation(s)
- Casper O. da Costa-Luis
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, St Thomas' Hospital, King's College London, London SE1 7EH, UK
- Andrew J. Reader
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, St Thomas' Hospital, King's College London, London SE1 7EH, UK
106. Chang S, Chen X, Duan J, Mou X. A CNN-Based Hybrid Ring Artifact Reduction Algorithm for CT Images. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.2983391]
107. Shi C, Zhou Z, Lin H, Gao J. Imaging Beyond Seeing: Early Prognosis of Cancer Treatment. Small Methods 2021;5:e2001025. [PMID: 34927817] [DOI: 10.1002/smtd.202001025]
Abstract
Assessing cancer response to therapeutic interventions has been recognized as an important means of early predicting curative efficacy and treatment outcomes, owing to tumor heterogeneity. Compared with the traditional invasive tissue biopsy method, molecular imaging techniques have fundamentally revolutionized the ability to evaluate cancer response in a spatiotemporal manner. The past few years have witnessed a paradigm shift from manufacturing functional molecular imaging probes for seeing a tumor to the vantage stage of interpreting the tumor response during different treatments. This review surveys the current development of advanced imaging technologies aiming to predict treatment response in cancer therapy. Special interest is placed on systems able to provide rapid and noninvasive assessment of pharmacokinetic drug fates (e.g., drug distribution, release, and activation) and tumor microenvironment heterogeneity (e.g., tumor cells, macrophages, dendritic cells (DCs), T cells, and inflammatory cells). The current status, practical significance, and future challenges of the emerging artificial intelligence (AI) technology and machine learning in medical imaging applications are overviewed. Ultimately, the authors hope this review is timely in spurring research interest in molecular imaging and precision medicine.
Affiliation(s)
- Changrong Shi
- State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics and Center for Molecular Imaging and Translational Medicine, School of Public Health, Xiamen University, Xiamen 361102, China
- Zijian Zhou
- State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics and Center for Molecular Imaging and Translational Medicine, School of Public Health, Xiamen University, Xiamen 361102, China
- Hongyu Lin
- State Key Laboratory of Physical Chemistry of Solid Surfaces, The Key Laboratory for Chemical Biology of Fujian Province and Department of Chemical Biology, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
- Jinhao Gao
- State Key Laboratory of Physical Chemistry of Solid Surfaces, The Key Laboratory for Chemical Biology of Fujian Province and Department of Chemical Biology, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
108. Gong Y, Shan H, Teng Y, Tu N, Li M, Liang G, Wang G, Wang S. Parameter-Transferred Wasserstein Generative Adversarial Network (PT-WGAN) for Low-Dose PET Image Denoising. IEEE Trans Radiat Plasma Med Sci 2021;5:213-223. [PMID: 35402757] [PMCID: PMC8993163] [DOI: 10.1109/trpms.2020.3025071]
Abstract
Due to the widespread use of positron emission tomography (PET) in clinical practice, the potential risk of PET-associated radiation dose to patients needs to be minimized. However, with the reduction in the radiation dose, the resultant images may suffer from noise and artifacts that compromise diagnostic performance. In this paper, we propose a parameter-transferred Wasserstein generative adversarial network (PT-WGAN) for low-dose PET image denoising. The contributions of this paper are twofold: i) a PT-WGAN framework is designed to denoise low-dose PET images without compromising structural details, and ii) a task-specific initialization based on transfer learning is developed to train PT-WGAN using trainable parameters transferred from a pretrained model, which significantly improves the training efficiency of PT-WGAN. The experimental results on clinical data show that the proposed network can suppress image noise more effectively while preserving better image fidelity than recently published state-of-the-art methods. We make our code available at https://github.com/90n9-yu/PT-WGAN.
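The task-specific initialization described in contribution (ii) amounts to copying trainable parameters from a pretrained generator before fine-tuning. The PyTorch sketch below illustrates only that idea; the toy network, checkpoint name, and layer-freezing policy are assumptions, not the authors' architecture, which is available at the repository linked above.

```python
import torch
import torch.nn as nn

def make_generator():
    # toy stand-in for the denoising generator, not the PT-WGAN architecture
    return nn.Sequential(
        nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv3d(32, 1, 3, padding=1),
    )

pretrained = make_generator()
# pretrained.load_state_dict(torch.load("pretrained_generator.pt"))  # hypothetical checkpoint

generator = make_generator()
generator.load_state_dict(pretrained.state_dict())  # transfer the trained parameters

for p in generator[0].parameters():                 # optionally freeze the earliest layer
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in generator.parameters() if p.requires_grad), lr=1e-4)
```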
Affiliation(s)
- Yu Gong
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, and Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China, and the Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China
- Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, and the Key Laboratory of Intelligent Computing in Medical Images, Ministry of Education, Shenyang 110169, China
- Ning Tu
- PET-CT/MRI Center and Molecular Imaging Center, Wuhan University Renmin Hospital, Wuhan 430060, China
- Ming Li
- Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
- Guodong Liang
- Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
109. Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE Trans Radiat Plasma Med Sci 2021;5:137-159. [PMID: 34017931] [PMCID: PMC8132932] [DOI: 10.1109/trpms.2020.3030611]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, their implementation offers an advantage over common ML methods, which often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems such as disease classification and tumor segmentation, for which it is difficult or impossible to determine which image features are relevant. Therefore, considering the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of deep neural networks and an overview of their application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
Affiliation(s)
- Maribel Torres-Velázquez
- Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Wei-Jie Chen
- Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Xue Li
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
- Alan B McMillan
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA, and also with the Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705, USA
110. Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021;83:242-256. [PMID: 33979715] [PMCID: PMC8184621] [DOI: 10.1016/j.ejmp.2021.04.016]
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology, or oncology, have seized the opportunity, and considerable efforts in research and development have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to a safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow, and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Umair Javaid
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
- Paul Desbordes
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Benoit Macq
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Siri Willems
- ESAT/PSI, KU Leuven & MIRC, UZ Leuven, Belgium
- Steven Michiels
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
- John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
111. Dong X, Li H, Jiang Z, Grünleitner T, Güler İ, Dong J, Wang K, Köhler MH, Jakobi M, Menze BH, Yetisen AK, Sharp ID, Stier AV, Finley JJ, Koch AW. 3D Deep Learning Enables Accurate Layer Mapping of 2D Materials. ACS Nano 2021;15:3139-3151. [PMID: 33464815] [DOI: 10.1021/acsnano.0c09685]
Abstract
Layered, two-dimensional (2D) materials are promising for next-generation photonics devices. Typically, the thickness of mechanically cleaved flakes and chemical vapor deposited thin films is distributed randomly over a large area, where accurate identification of atomic layer numbers is time-consuming. Hyperspectral imaging microscopy yields spectral information that can be used to distinguish the spectral differences of varying thickness specimens. However, its spatial resolution is relatively low due to the spectral imaging nature. In this work, we present a 3D deep learning solution called DALM (deep-learning-enabled atomic layer mapping) to merge hyperspectral reflection images (high spectral resolution) and RGB images (high spatial resolution) for the identification and segmentation of MoS2 flakes with mono-, bi-, tri-, and multilayer thicknesses. DALM is trained on a small set of labeled images, automatically predicts layer distributions and segments individual layers with high accuracy, and shows robustness to illumination and contrast variations. Further, we show its advantageous performance over the state-of-the-art model that is solely based on RGB microscope images. This AI-supported technique with high speed, spatial resolution, and accuracy allows for reliable computer-aided identification of atomically thin materials.
Affiliation(s)
- Xingchen Dong
- Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
- Hongwei Li
- Department of Computer Science, Technical University of Munich, 85748 Garching, Germany
- Zhutong Jiang
- Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
- Theresa Grünleitner
- Walter Schottky Institut and Physik Department, Technische Universität München, Am Coulombwall 4, 85748 Garching, Germany
- İnci Güler
- Walter Schottky Institut and Physik Department, Technische Universität München, Am Coulombwall 4, 85748 Garching, Germany
- Jie Dong
- Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
- Kun Wang
- Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
- Michael H Köhler
- Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
- Martin Jakobi
- Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
- Bjoern H Menze
- Department of Computer Science, Technical University of Munich, 85748 Garching, Germany
- Ali K Yetisen
- Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
- Ian D Sharp
- Walter Schottky Institut and Physik Department, Technische Universität München, Am Coulombwall 4, 85748 Garching, Germany
- Andreas V Stier
- Walter Schottky Institut and Physik Department, Technische Universität München, Am Coulombwall 4, 85748 Garching, Germany
- Jonathan J Finley
- Walter Schottky Institut and Physik Department, Technische Universität München, Am Coulombwall 4, 85748 Garching, Germany
- Alexander W Koch
- Institute for Measurement Systems and Sensor Technology, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
112. Bhattacharya S, Reddy Maddikunta PK, Pham QV, Gadekallu TR, Krishnan S SR, Chowdhary CL, Alazab M, Jalil Piran M. Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey. Sustain Cities Soc 2021;65:102589. [PMID: 33169099] [PMCID: PMC7642729] [DOI: 10.1016/j.scs.2020.102589]
Abstract
Since December 2019, the coronavirus disease (COVID-19) outbreak has caused many deaths and affected all sectors of human life. With the gradual progression of time, COVID-19 was declared by the World Health Organization (WHO) as an outbreak, which has imposed a heavy burden on almost all countries, especially ones with weaker health systems and slower responses. In the field of healthcare, deep learning has been implemented in many applications, e.g., diabetic retinopathy detection, lung nodule classification, fetal localization, and thyroid diagnosis. Numerous sources of medical images (e.g., X-ray, CT, and MRI) make deep learning a great technique to combat the COVID-19 outbreak. Motivated by this fact, a large number of research works were proposed and developed in the initial months of 2020. In this paper, we first focus on summarizing state-of-the-art research works related to deep learning applications for COVID-19 medical image processing. Then, we provide an overview of deep learning and its applications to healthcare over the last decade. Next, three use cases in China, Korea, and Canada are presented to show deep learning applications for COVID-19 medical image processing. Finally, we discuss several challenges and issues related to deep learning implementations for COVID-19 medical image processing, which are expected to drive further studies in controlling the outbreak and managing the crisis, contributing to smart, healthy cities.
Affiliation(s)
- Sweta Bhattacharya
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Quoc-Viet Pham
- Research Institute of Computer, Information and Communication, Pusan National University, Busan 46241, Republic of Korea
- Thippa Reddy Gadekallu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Siva Rama Krishnan S
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Chiranji Lal Chowdhary
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Mamoun Alazab
- College of Engineering, IT & Environment, Charles Darwin University, Australia
- Md Jalil Piran
- Department of Computer Science and Engineering, Sejong University, 05006, Seoul, Republic of Korea
113. Naser MA, van Dijk LV, He R, Wahid KA, Fuller CD. Tumor Segmentation in Patients with Head and Neck Cancers Using Deep Learning Based on Multi-modality PET/CT Images. In: Head and Neck Tumor Segmentation: First Challenge, HECKTOR 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings. 2021;12603:85-98. [PMID: 33724743] [PMCID: PMC7929493] [DOI: 10.1007/978-3-030-67194-5_10]
Abstract
Segmentation of head and neck cancer (HNC) primary tumors on medical images is an essential, yet labor-intensive, aspect of radiotherapy. PET/CT imaging offers a unique ability to capture metabolic and anatomic information, which is invaluable for tumor detection and border definition. An automatic segmentation tool that could leverage the dual streams of information from PET and CT imaging simultaneously could substantially propel HNC radiotherapy workflows forward. Herein, we leverage a multi-institutional PET/CT dataset of 201 HNC patients, as part of the MICCAI segmentation challenge, to develop novel deep learning architectures for primary tumor auto-segmentation in HNC patients. We preprocess PET/CT images by normalizing intensities and applying data augmentation to mitigate overfitting. Both 2D and 3D convolutional neural networks based on the U-net architecture, optimized with a model loss function based on a combination of dice similarity coefficient (DSC) and binary cross entropy, were implemented. The median and mean DSC values comparing the predicted tumor segmentation with the ground truth, achieved by the models through 5-fold cross-validation, are 0.79 and 0.69 for the 3D model, respectively, and 0.79 and 0.67 for the 2D model, respectively. These promising results show potential to provide an automatic, accurate, and efficient approach for primary tumor auto-segmentation to improve the clinical practice of HNC treatment.
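The loss the authors describe combines a soft Dice term with binary cross entropy; a minimal PyTorch sketch of such a combined loss is given below (a common formulation, with an assumed equal weighting, since the paper's exact weights are not stated in the abstract).

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(logits, target, smooth=1.0, bce_weight=0.5):
    """Soft-Dice + binary cross-entropy segmentation loss
    (equal weighting assumed for illustration)."""
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    dice = (2.0 * intersection + smooth) / (probs.sum() + target.sum() + smooth)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice)

# toy usage on a random 3D patch
logits = torch.randn(1, 1, 16, 64, 64)
target = (torch.rand(1, 1, 16, 64, 64) > 0.5).float()
print(dice_bce_loss(logits, target).item())
```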
Affiliation(s)
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Lisanne V van Dijk
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
114. Angulakshmi M, Deepa M. A Review on Deep Learning Architecture and Methods for MRI Brain Tumour Segmentation. Curr Med Imaging 2021;17:695-706. [PMID: 33423651] [DOI: 10.2174/1573405616666210108122048]
Abstract
BACKGROUND: The automatic segmentation of brain tumours from MRI medical images is the main topic covered in this review. Recently, deep learning-based approaches have provided state-of-the-art performance in image classification, segmentation, object detection, and tracking tasks. INTRODUCTION: The core feature of the deep learning approach is the hierarchical representation of features from images, thus avoiding domain-specific handcrafted features. METHODS: In this review paper, we present deep learning architectures and methods for MRI brain tumour segmentation. First, we discuss the basic architectures and approaches of deep learning methods. Second, we survey the literature on MRI brain tumour segmentation using deep learning methods and its multimodality fusion. Then, the advantages and disadvantages of each method are analyzed, and the paper concludes with a discussion of the merits and challenges of deep learning techniques. RESULTS: A review of brain tumour identification using deep learning is presented. CONCLUSION: The reviewed techniques may help researchers to better focus their future work.
Affiliation(s)
- M Angulakshmi
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- M Deepa
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
115. Kandarpa VSS, Bousse A, Benoit D, Visvikis D. DUG-RECON: A Framework for Direct Image Reconstruction Using Convolutional Generative Networks. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3033172]
116. DPIR-Net: Direct PET Image Reconstruction Based on the Wasserstein Generative Adversarial Network. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.2995717]
117. Jiang Y, Gu X, Wu D, Hang W, Xue J, Qiu S, Lin CT. A Novel Negative-Transfer-Resistant Fuzzy Clustering Model With a Shared Cross-Domain Transfer Latent Space and its Application to Brain CT Image Segmentation. IEEE/ACM Trans Comput Biol Bioinform 2021;18:40-52. [PMID: 31905144] [DOI: 10.1109/tcbb.2019.2963873]
Abstract
Traditional clustering algorithms for medical image segmentation can only achieve satisfactory performance under relatively ideal conditions, in which there are adequate data from the same distribution and the data are rarely disturbed by noise or outliers. However, a sufficient amount of medical images with representative manual labels is often not available, because medical images are frequently acquired with different scanners (or different scan protocols) or polluted by various noises. Transfer learning improves learning in the target domain by leveraging knowledge from related domains. Given some target data, the performance of transfer learning is determined by the degree of relevance between the source and target domains. To achieve positive transfer and avoid negative transfer, a negative-transfer-resistant mechanism is proposed that computes the weight of transferred knowledge. A negative-transfer-resistant fuzzy clustering model with a shared cross-domain transfer latent space (called NTR-FC-SCT) is proposed by integrating negative-transfer resistance and maximum mean discrepancy (MMD) into the framework of fuzzy c-means clustering. Experimental results show that the proposed NTR-FC-SCT model outperformed several traditional non-transfer and related transfer clustering algorithms.
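MMD, the distribution-matching term named above, measures the distance between source- and target-domain feature distributions. The numpy sketch below shows a standard RBF-kernel estimator, given only to make that term concrete; it is not the NTR-FC-SCT implementation, and the kernel bandwidth is an illustrative assumption.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy between samples X (n, d) and Y (m, d)
    using an RBF kernel; a standard biased estimator."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

source = np.random.randn(100, 8)
target = np.random.randn(80, 8) + 0.5  # shifted domain
print(rbf_mmd2(source, target))
```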
118. Qian X, Fu H, Shi W, Chen T, Fu Y, Shan F, Xue X. M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia Screening From CT Imaging. IEEE J Biomed Health Inform 2020;24:3539-3550. [PMID: 33048773] [PMCID: PMC8545176] [DOI: 10.1109/jbhi.2020.3030853]
Abstract
To counter the outbreak of COVID-19, accurate diagnosis of suspected cases plays a crucial role in timely quarantine, medical treatment, and prevention of the spread of the pandemic. Considering the limited training cases and resources (e.g., time and budget), we propose a Multi-task Multi-slice Deep Learning System (M3Lung-Sys) for multi-class lung pneumonia screening from CT imaging, which consists of only two 2D CNN networks, i.e., slice- and patient-level classification networks. The former seeks feature representations from abundant CT slices instead of limited CT volumes; for overall pneumonia screening, the latter recovers the temporal information by feature refinement and aggregation between different slices. In addition to distinguishing COVID-19 from healthy, H1N1, and CAP cases, our M3Lung-Sys is also able to locate the areas of relevant lesions, without any pixel-level annotation. To further demonstrate the effectiveness of our model, we conduct extensive experiments on a chest CT imaging dataset with a total of 734 patients (251 healthy people, 245 COVID-19 patients, 105 H1N1 patients, and 133 CAP patients). The quantitative results with a wide range of metrics indicate the superiority of our proposed model on both slice- and patient-level classification tasks. More importantly, the generated lesion location maps make our system interpretable and more valuable to clinicians.
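The two-stage design (slice-level then patient-level) can be made concrete with a toy sketch in which per-slice class probabilities are pooled into a patient-level decision. The real system refines and aggregates learned slice features rather than using plain mean pooling, so the snippet below is only an assumption-laden illustration of the idea.

```python
import numpy as np

def patient_prediction(slice_probs):
    """Aggregate per-slice class probabilities (n_slices, n_classes)
    into a patient-level prediction via mean pooling (illustrative only)."""
    patient_probs = slice_probs.mean(axis=0)
    classes = ["Healthy", "COVID-19", "H1N1", "CAP"]
    return classes[int(patient_probs.argmax())], patient_probs

slice_probs = np.random.dirichlet(np.ones(4), size=60)  # 60 CT slices, 4 classes
print(patient_prediction(slice_probs))
```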
Affiliation(s)
- Xuelin Qian
- Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
- Weiya Shi
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201508, China
- Tao Chen
- School of Information Science and Technology, Fudan University, Shanghai 200433, China
- Yanwei Fu
- School of Data Science, MOE Frontiers Center for Brain Science, Shanghai Key Lab of Intelligent Information Processing, Fudan University, Shanghai 200433, China
- Fei Shan
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201508, China
- Xiangyang Xue
- Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
119. Muzammil SR, Maqsood S, Haider S, Damaševičius R. CSID: A Novel Multimodal Image Fusion Algorithm for Enhanced Clinical Diagnosis. Diagnostics (Basel) 2020;10:E904. [PMID: 33167376] [PMCID: PMC7694345] [DOI: 10.3390/diagnostics10110904]
Abstract
Technology-assisted clinical diagnosis has gained tremendous importance in modern day healthcare systems. To this end, multimodal medical image fusion has gained great attention from the research community. There are several fusion algorithms that merge Computed Tomography (CT) and Magnetic Resonance Images (MRI) to extract detailed information, which is used to enhance clinical diagnosis. However, these algorithms exhibit several limitations, such as blurred edges during decomposition, excessive information loss that gives rise to false structural artifacts, and high spatial distortion due to inadequate contrast. To resolve these issues, this paper proposes a novel algorithm, namely Convolutional Sparse Image Decomposition (CSID), that fuses CT and MR images. CSID uses contrast stretching and the spatial gradient method to identify edges in source images and employs cartoon-texture decomposition, which creates an overcomplete dictionary. Moreover, this work proposes a modified convolutional sparse coding method and employs improved decision maps and the fusion rule to obtain the final fused image. Simulation results using six datasets of multimodal images demonstrate that CSID achieves superior performance, in terms of visual quality and enriched information extraction, in comparison with eminent image fusion algorithms.
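Two of the preprocessing steps named above, contrast stretching and spatial-gradient edge detection, are standard operations; a generic numpy sketch follows (not the CSID implementation, and the percentile parameters are illustrative assumptions).

```python
import numpy as np

def contrast_stretch(img, low_pct=1, high_pct=99):
    """Linear contrast stretching between two intensity percentiles."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo + 1e-8), 0.0, 1.0)

def gradient_edge_map(img):
    """Spatial-gradient magnitude as a simple edge indicator."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

ct = np.random.rand(256, 256)  # stand-in for a CT slice
edges = gradient_edge_map(contrast_stretch(ct))
```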
Affiliation(s)
- Shah Rukh Muzammil
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
- Sarmad Maqsood
- Department of Software Engineering, Kaunas University of Technology, Kaunas 51368, Lithuania
- Shahab Haider
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
- Robertas Damaševičius
- Department of Software Engineering, Kaunas University of Technology, Kaunas 51368, Lithuania
120. Hatt M, Cheze Le Rest C, Antonorsi N, Tixier F, Tankyevych O, Jaouen V, Lucia F, Bourbonne V, Schick U, Badic B, Visvikis D. Radiomics in PET/CT: Current Status and Future AI-Based Evolutions. Semin Nucl Med 2020;51:126-133. [PMID: 33509369] [DOI: 10.1053/j.semnuclmed.2020.09.002]
Abstract
This short review aims at providing readers with an update on the current status, as well as future perspectives, in the quickly evolving field of radiomics applied to PET/CT imaging. Numerous pitfalls have been identified by the radiomics community in study design, data acquisition, segmentation, feature calculation, and modeling; these are often the same issues across all image modalities and clinical applications, but some are specific to PET/CT (and SPECT/CT) imaging, and the present paper therefore focuses on those. In most cases, recommendations and potential methodological solutions do exist and should be followed to improve the overall quality and reproducibility of published studies. In terms of future evolutions, techniques from the larger field of artificial intelligence (AI), including those relying on deep neural networks (also known as deep learning), have already shown impressive potential to provide solutions, especially in terms of automation, but also to perhaps fully replace the tools the radiomics community has been using until now to build the usual radiomics workflow. Some important challenges remain to be addressed before the full impact of AI is realized, but the field has made striking advances over the last few years, and advances are expected to continue at a rapid pace.
Affiliation(s)
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France
- Catherine Cheze Le Rest
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France; Nuclear Medicine Department, CHU Milétrie, Poitiers, France
- Nils Antonorsi
- Nuclear Medicine Department, CHU Milétrie, Poitiers, France
- Florent Tixier
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Vincent Jaouen
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France; IMT-Atlantique, Plouzané, France
- Francois Lucia
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France
- Ulrike Schick
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France
- Bogdan Badic
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France
121. Zhang L, Ji L, Jiang H, Yang F, Wang X. Multi-modal Image Fusion Algorithm based on Variable Parameter Fractional Difference Enhancement. J Imaging Sci Technol 2020. [DOI: 10.2352/j.imagingsci.technol.2020.64.6.060402]
122. Liu R, Jia Y, He X, Li Z, Cai J, Li H, Yang X. Ensemble Learning with Multiclassifiers on Pediatric Hand Radiograph Segmentation for Bone Age Assessment. Int J Biomed Imaging 2020;2020:8866700. [PMID: 33178255] [PMCID: PMC7609149] [DOI: 10.1155/2020/8866700]
Abstract
In pediatric automatic bone age assessment (BAA) in clinical practice, extraction of the object area in hand radiographs is an important step that directly affects the prediction accuracy of the BAA, yet no perfect segmentation solution has been found. This work develops an automatic hand radiograph segmentation method with high precision and efficiency. We considered the hand segmentation task as a classification problem, with the optimal segmentation threshold for each image as the prediction target. We utilized the normalized histogram, mean value, and variance of each image as input features to train the classification model, based on ensemble learning with multiple classifiers. 600 left-hand radiographs with bone ages ranging from 1 to 18 years were included in the dataset. Compared with traditional segmentation methods and the state-of-the-art U-Net network, the proposed method performed better, with higher precision and a smaller computational load, achieving an average PSNR of 52.43 dB, SSIM of 0.97, DSC of 0.97, and JSI of 0.91, which makes it more suitable for clinical application. Furthermore, the experimental results also verified that hand radiograph segmentation improves BAA performance by an average of at least 13%.
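A minimal sketch of the described pipeline, extracting the normalized histogram, mean, and variance as features and feeding them to a soft-voting ensemble, is shown below (scikit-learn based; the bin count, choice of classifiers, and quantized threshold labels are assumptions for illustration, not the authors' configuration).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def radiograph_features(img, bins=32):
    """Normalized intensity histogram plus mean and variance, as in the abstract."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    return np.concatenate([hist, [img.mean(), img.var()]])

# toy data: images paired with (quantized) optimal-threshold labels
images = [np.random.rand(64, 64) for _ in range(100)]
X = np.stack([radiograph_features(im) for im in images])
y = np.random.randint(0, 5, size=100)  # stand-in threshold classes

ensemble = VotingClassifier([
    ("rf", RandomForestClassifier()),
    ("svm", SVC(probability=True)),
    ("knn", KNeighborsClassifier()),
], voting="soft")
ensemble.fit(X, y)
```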
Affiliation(s)
- Rui Liu
- Department of Medical Informatics, Chongqing Medical University, Chongqing 401331, China
- Chengdu Second People's Hospital, Chengdu 610017, China
- Yuanyuan Jia
- Department of Medical Informatics, Chongqing Medical University, Chongqing 401331, China
- Xiangqian He
- Department of Medical Informatics, Chongqing Medical University, Chongqing 401331, China
- Zhe Li
- Department of Medical Informatics, Chongqing Medical University, Chongqing 401331, China
- Jinhua Cai
- Department of Radiology, Children's Hospital Affiliated to Chongqing Medical University, Chongqing 400014, China
- Hao Li
- Department of Radiology, Children's Hospital Affiliated to Chongqing Medical University, Chongqing 400014, China
- Xiao Yang
- Department of Mechanical and Electrical Engineering, University of Electronic Science and Technology, Chengdu 611731, China
123. Yuan X, Zheng X, Ji B, Li M, Li B. [Joint optic disc and cup segmentation based on residual multi-scale fully convolutional neural network]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2020;37:875-884. [PMID: 33140612] [PMCID: PMC10320543] [DOI: 10.7507/1001-5515.201909006]
Abstract
Glaucoma is the leading cause of irreversible blindness, but its early symptoms are not obvious and are easily overlooked, so early screening for glaucoma is particularly important. The cup-to-disc ratio is an important indicator for clinical glaucoma screening, and accurate segmentation of the optic cup and disc is the key to calculating this ratio. In this paper, a fully convolutional neural network with a residual multi-scale convolution module was proposed for optic cup and disc segmentation. First, the fundus image was contrast enhanced and a polar transformation was introduced. Subsequently, W-Net was used as the backbone network, with the standard convolution unit replaced by the residual multi-scale fully convolutional module, an image pyramid added at the input to construct a multi-scale input, and side output layers used as early classifiers to generate local prediction outputs. Finally, a new multi-label loss function was proposed to guide network segmentation. The mean intersection over union of the optic cup and disc segmentation on the REFUGE dataset was 0.9040 and 0.9553, respectively, and the overlapping error was 0.1780 and 0.0665, respectively. The results show that this method not only realizes joint segmentation of the cup and disc but also effectively improves segmentation accuracy, which could be helpful for the promotion of large-scale early glaucoma screening.
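Once cup and disc masks are available, the cup-to-disc ratio driving the screening decision is straightforward to compute. The numpy sketch below is a generic illustration under the assumption that the vertical diameters of the two masks are compared (the vertical variant is common in glaucoma screening; the paper's exact definition is not stated in the abstract).

```python
import numpy as np

def vertical_cup_to_disc_ratio(cup_mask, disc_mask):
    """Ratio of the vertical extents of the cup and disc binary masks."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]   # rows containing the structure
        return rows.max() - rows.min() + 1 if rows.size else 0
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc else 0.0

disc = np.zeros((128, 128), bool); disc[30:100, 40:90] = True
cup = np.zeros((128, 128), bool);  cup[45:80, 50:80] = True
print(vertical_cup_to_disc_ratio(cup, disc))  # 0.5
```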
Affiliation(s)
- Yuan Xin
- Department of Automation, College of Electrical Engineering, Sichuan University, Chengdu 610065, P.R. China
- Zheng Xiujuan
- Department of Automation, College of Electrical Engineering, Sichuan University, Chengdu 610065, P.R. China
- Ji Bin
- Department of Automation, College of Electrical Engineering, Sichuan University, Chengdu 610065, P.R. China
- China Mobile (Chengdu) Industrial Research Institute, Chengdu 610041, P.R. China
- Li Miao
- Department of Automation, College of Electrical Engineering, Sichuan University, Chengdu 610065, P.R. China
- Li Bin
- Department of Automation, College of Electrical Engineering, Sichuan University, Chengdu 610065, P.R. China
124. Li L, Lu W, Tan S. Variational PET/CT Tumor Co-segmentation Integrated with PET Restoration. IEEE Trans Radiat Plasma Med Sci 2020;4:37-49. [PMID: 32939423] [DOI: 10.1109/trpms.2019.2911597]
Abstract
PET and CT are widely used imaging modalities in radiation oncology. PET imaging has high contrast but blurry tumor edges due to its limited spatial resolution, while CT imaging has high resolution but low contrast between tumor and soft normal tissues. Tumor segmentation from either a single PET or CT image alone is difficult. It is known that co-segmentation methods utilizing the complementary information between PET and CT can improve segmentation accuracy. This information can be either consistent or inconsistent at the image level. How to correctly localize tumor edges given inconsistent information is a major challenge for co-segmentation methods. In this study, we proposed a novel variational method for tumor co-segmentation in PET/CT, with a fusion strategy specifically designed to handle the information inconsistency between PET and CT in an adaptive way: the method can automatically decide which modality should be trusted more when PET and CT disagree on localizing the tumor boundary. The proposed method was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model. A PET restoration process was integrated into the co-segmentation process, further eliminating the uncertainty in tumor segmentation introduced by the blurring of tumor edges in PET. The performance of the proposed method was validated on a test dataset of fifty non-small cell lung cancer patients. Experimental results demonstrated that the proposed method has high accuracy for PET/CT co-segmentation and PET restoration, and can accurately estimate the blur kernel of the PET scanner as well. For complex images in which the tumors exhibit fluorodeoxyglucose (FDG) uptake inhomogeneity or even invade adjacent soft normal tissues, the proposed method can still accurately segment the tumors. It achieved an average Dice similarity index (DSI) of 0.85 ± 0.06, volume error (VE) of 0.09 ± 0.08, and classification error (CE) of 0.31 ± 0.13.
Affiliation(s)
- Laquan Li
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York 10065, USA
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
125
Liu S, Yin L, Miao S, Ma J, Cong S, Hu S. Multimodal Medical Image Fusion using Rolling Guidance Filter with CNN and Nuclear Norm Minimization. Curr Med Imaging 2020; 16:1243-1258. [PMID: 32807062 DOI: 10.2174/1573405616999200817103920] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2020] [Revised: 06/27/2020] [Accepted: 07/01/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND Medical image fusion is very important for the diagnosis and treatment of diseases. In recent years, a number of multi-modal medical image fusion algorithms have been proposed that present the context relevant to disease diagnosis more clearly and conveniently. Recently, nuclear norm minimization and deep learning have been used effectively in image processing. METHODS A multi-modality medical image fusion method using a rolling guidance filter (RGF) with convolutional neural network (CNN)-based feature mapping and nuclear norm minimization (NNM) is proposed. First, we decompose the medical images into base layer components and detail layer components using the RGF. Next, we obtain the basic fused image through a pre-trained CNN model, which is used to extract the significant characteristics of the base layer components; the activity level measurement is computed from the regional energy of the CNN-based fusion maps. Then, a detail fused image is obtained by NNM; that is, NNM is used to fuse the detail layer components. Finally, the basic and detail fused images are integrated into the fused result. RESULTS Compared with state-of-the-art fusion algorithms, the experimental results indicate that this fusion algorithm performs best in both visual evaluation and objective metrics. CONCLUSION The fusion algorithm using RGF and CNN-based feature mapping, combined with NNM, can improve fusion quality and suppress artifacts and blocking effects in the fused results.
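As a rough illustration of the two-scale pipeline described above, the sketch below substitutes a Gaussian filter for the RGF, plain averaging for the CNN weight maps, and a max-absolute rule plus singular value soft-thresholding (the proximal operator of the nuclear norm) for the NNM step. It is a simplification under those stated assumptions, not the authors' implementation:

```python
# Drastically simplified two-scale fusion sketch (assumptions noted above).
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=3.0):
    """Split an image into a smooth base layer and a detail residual."""
    img = np.asarray(img, dtype=float)
    base = gaussian_filter(img, sigma)   # Gaussian stands in for the RGF
    return base, img - base

def svt(matrix, tau):
    """Singular value soft-thresholding: prox operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

def fuse(img_a, img_b, tau=0.5):
    base_a, det_a = decompose(img_a)
    base_b, det_b = decompose(img_b)
    base = 0.5 * (base_a + base_b)       # averaging replaces CNN weighting
    # Max-absolute detail selection, then low-rank regularization via SVT
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + svt(detail, tau)
```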
Affiliation(s)
- Shuaiqi Liu
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Lu Yin
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Siyu Miao
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Jian Ma
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Shuai Cong
- Industrial and Commercial College, Hebei University, Baoding, Hebei, China
- Shaohai Hu
- College of Computer and Information, Beijing Jiaotong University, Beijing, China
126
Conti A, Duggento A, Indovina I, Guerrisi M, Toschi N. Radiomics in breast cancer classification and prediction. Semin Cancer Biol 2020; 72:238-250. [PMID: 32371013 DOI: 10.1016/j.semcancer.2020.04.002] [Citation(s) in RCA: 191] [Impact Index Per Article: 38.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2019] [Revised: 03/30/2020] [Accepted: 04/01/2020] [Indexed: 12/15/2022]
Abstract
Breast cancer (BC) is the most common form of cancer in women. Its diagnosis and screening are usually performed through different imaging modalities such as mammography, magnetic resonance imaging (MRI) and ultrasound. However, mammography and ultrasound have limited sensitivity and specificity both in identifying lesions and in differentiating malignant from benign lesions, especially in the presence of dense breast parenchyma. Owing to its higher resolution, MRI is the method with the highest specificity and sensitivity among the available tools, for both lesion identification and diagnosis. However, especially for diagnosis, even MRI has limitations that are only partially resolved when combined with mammography. Unfortunately, because of the limits of all these imaging tools, patients often undergo painful and costly biopsy procedures to obtain a definite diagnosis. In this context, several computational approaches have been developed to increase sensitivity, while maintaining specificity, in BC diagnosis and screening. Among these, radiomics has been gaining ground in oncology to improve cancer diagnosis, prognosis and treatment. Radiomics derives multiple quantitative features from single or multiple medical imaging modalities, highlighting image traits that are not visible to the naked eye and thereby significantly augmenting the discriminatory and predictive potential of medical imaging. This review article aims to summarize the state of the art in radiomics-based BC research. The dominant evidence in the literature points towards a high potential of radiomics in disentangling malignant from benign breast lesions, classifying BC types and grades, and predicting treatment response and recurrence risk. In the era of personalized medicine, radiomics has the potential to improve diagnosis, prognosis, prediction, monitoring, image-based intervention, and assessment of therapeutic response in BC.
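As a concrete, if toy, illustration of what "quantitative features" means here, the sketch below computes a few first-order radiomic features from a lesion region of interest; production pipelines (e.g., pyradiomics) compute hundreds of standardized features, so this is only a flavor of the idea:

```python
# Toy first-order radiomic features from a lesion ROI (illustrative only).
import numpy as np
from scipy import stats

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Summary statistics of intensities inside a binary lesion mask."""
    vals = image[mask.astype(bool)].astype(np.float64)
    hist, _ = np.histogram(vals, bins=32)
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "skewness": float(stats.skew(vals)),
        "kurtosis": float(stats.kurtosis(vals)),
        "entropy": float(stats.entropy(hist + 1e-9)),  # histogram entropy
        "energy": float(np.sum(vals ** 2)),
    }
```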
Affiliation(s)
- Allegra Conti
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome, Italy; Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy
- Andrea Duggento
- Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy
- Iole Indovina
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome, Italy; Department of Medicine and Surgery, Saint Camillus International University of Health and Medical Sciences, Rome, Italy
- Maria Guerrisi
- Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy
- Nicola Toschi
- Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy; Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, USA
127
Sbei A, ElBedoui K, Barhoumi W, Maktouf C. Gradient-based generation of intermediate images for heterogeneous tumor segmentation within hybrid PET/MRI scans. Comput Biol Med 2020; 119:103669. [PMID: 32339115 DOI: 10.1016/j.compbiomed.2020.103669] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Revised: 02/17/2020] [Accepted: 02/17/2020] [Indexed: 10/25/2022]
Abstract
Segmentation of tumors from hybrid PET/MRI scans plays an essential role in accurate diagnosis and treatment planning. However, when delineating tumors, several challenges have to be considered, notably heterogeneity and the problem of leakage into surrounding tissues with similarly high uptake. To address these issues, we propose an automated method for accurate delineation of tumors in hybrid PET/MRI scans. The method is mainly based on creating intermediate images. First, an automatic detection technique determines a preliminary Interesting Uptake Region (IUR). To overcome the leakage problem, a separation technique is adopted to generate the final IUR. Then, smart seeds are provided to the Graph Cut (GC) technique to obtain the tumor map. To create intermediate images that tend to reduce the heterogeneity encountered in the original images, the gradient of the tumor map is combined with the gradient image. Lastly, segmentation based on the GCsummax technique is applied to the generated images. The proposed method has been validated on PET phantoms as well as on real-world PET/MRI scans of prostate, liver and pancreatic tumors. Experimental comparison revealed the superiority of the proposed method over state-of-the-art methods. This confirms the crucial role of automatically created intermediate images in addressing the problem of wrongly estimated arc weights for heterogeneous targets.
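The "intermediate image" idea can be illustrated with a small sketch that blends the gradient of a preliminary tumor map with the image gradient before graph-cut segmentation; the blending weight alpha and the function names are hypothetical, not taken from the paper:

```python
# Loose sketch of gradient-based intermediate-image generation: flattening
# heterogeneous tumor interiors so graph-cut arc weights are better behaved.
import numpy as np
from scipy import ndimage

def gradient_magnitude(img: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude of a 2D image."""
    img = np.asarray(img, dtype=float)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    return np.hypot(gx, gy)

def intermediate_image(image: np.ndarray, tumor_map: np.ndarray,
                       alpha: float = 0.5) -> np.ndarray:
    """Blend image gradient with the preliminary tumor-map gradient."""
    g_img = gradient_magnitude(image)
    g_map = gradient_magnitude(tumor_map)
    combined = (1.0 - alpha) * g_img + alpha * g_map
    return combined / (combined.max() + 1e-9)  # normalized edge map for GC weights
```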
Affiliation(s)
- Arafet Sbei
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia
- Khaoula ElBedoui
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia
- Walid Barhoumi
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia
- Chokri Maktouf
- Nuclear Medicine Department, Pasteur Institute of Tunis, Tunis, Tunisia
128
Tixier F, Cheze-le-Rest C, Schick U, Simon B, Dufour X, Key S, Pradier O, Aubry M, Hatt M, Corcos L, Visvikis D. Transcriptomics in cancer revealed by Positron Emission Tomography radiomics. Sci Rep 2020; 10:5660. [PMID: 32221360 PMCID: PMC7101432 DOI: 10.1038/s41598-020-62414-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Accepted: 03/13/2020] [Indexed: 11/09/2022] Open
Abstract
Metabolic images from Positron Emission Tomography (PET) are used routinely for diagnosis, follow-up and treatment planning in cancer patients. In this study we aimed to determine whether radiomic features extracted from 18F-fluorodeoxyglucose (FDG) PET images could mirror tumor transcriptomics. We analyzed 45 patients with locally advanced head and neck (H&N) cancer who underwent FDG-PET scans at the time of diagnosis together with transcriptome analysis, using RNAs from both cancer and healthy tissues on microarrays. Associations between PET radiomics and transcriptomics were assessed with the Genomica software, and a functional annotation was used to relate PET radiomics, gene expression and altered biological pathways. We identified relationships between PET radiomics and genes involved in cell-cycle, disease, DNA repair, extracellular matrix organization, immune system, metabolism and signal transduction pathways, according to the Reactome classification. Our results suggest that these FDG-PET radiomic features could be used to infer tissue gene expression and cellular pathway activity in H&N cancers. These observations strengthen the value of radiomics as a promising approach to personalizing treatments through targeting tumor-specific molecular processes.
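A minimal sketch of this kind of radiogenomic association screen (not the Genomica workflow the authors used) could pair Spearman correlation with Benjamini-Hochberg FDR control; the array shapes and variable names below are assumptions for illustration:

```python
# Hedged sketch: screening feature-gene associations with FDR control.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def radiogenomic_screen(features: np.ndarray, expression: np.ndarray,
                        alpha: float = 0.05) -> np.ndarray:
    """features: (n_patients, n_features); expression: (n_patients, n_genes).
    Returns a boolean (n_features, n_genes) matrix of FDR-surviving pairs."""
    n_feat, n_gene = features.shape[1], expression.shape[1]
    pvals = np.empty((n_feat, n_gene))
    for i in range(n_feat):
        for j in range(n_gene):
            _, pvals[i, j] = spearmanr(features[:, i], expression[:, j])
    reject, _, _, _ = multipletests(pvals.ravel(), alpha=alpha, method="fdr_bh")
    return reject.reshape(n_feat, n_gene)
```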
Affiliation(s)
- Florent Tixier
- Department of Nuclear Medicine, Poitiers University Hospital, Poitiers, France
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Catherine Cheze-le-Rest
- Department of Nuclear Medicine, Poitiers University Hospital, Poitiers, France
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Ulrike Schick
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Radiation Oncology Department, University Hospital, Brest, France
- Brigitte Simon
- INSERM, UMR 1078, Université de Brest, Génétique Génomique Fonctionnelle et Biotechnologies, Etablissement Français du Sang, Brest, France
- Xavier Dufour
- Head and Neck Department, Poitiers University Hospital, Poitiers, France
- Stéphane Key
- Radiation Oncology Department, University Hospital, Brest, France
- Olivier Pradier
- Radiation Oncology Department, University Hospital, Brest, France
- Marc Aubry
- CNRS, UMR 6290, IGDR, Université de Rennes 1, Rennes, France
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Laurent Corcos
- INSERM, UMR 1078, Université de Brest, Génétique Génomique Fonctionnelle et Biotechnologies, Etablissement Français du Sang, Brest, France
129
Chaki J, Dey N. Data Tagging in Medical Images: A Survey of the State-of-Art. Curr Med Imaging 2020; 16:1214-1228. [PMID: 32108002 DOI: 10.2174/1573405616666200218130043] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 12/02/2019] [Accepted: 12/16/2019] [Indexed: 11/22/2022]
Abstract
A huge amount of medical data is generated every second, and a significant percentage of it consists of images that need to be analyzed and processed. One of the key challenges in this regard is the retrieval of medical image data. Medical image retrieval should be performed automatically by computers, by identifying object concepts and assigning homologous tags to them. Discovering the hidden concepts in medical images requires bridging low-level characteristics to high-level concepts, which is a challenging task; in any specific case, it requires human involvement to determine the significance of the image. To allow machine-based reasoning on the medical evidence collected, the data must be accompanied by additional interpretive semantics: a shift from a purely data-intensive methodology to a model of evidence rich in semantics. This state-of-the-art survey covers data tagging methods for medical images, an important aspect of recognizing huge numbers of medical images. Different types of tags for medical images, prerequisites of medical data tagging, techniques and algorithms for developing medical image tags, and tools used to create the tags are discussed in this paper. The aim of this state-of-the-art paper is to produce a summary and a set of guidelines for using tags to identify medical images, and to identify the challenges and future research directions of tagging medical images.
Affiliation(s)
- Jyotismita Chaki
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Nilanjan Dey
- Department of Information Technology, Techno India College of Technology, West Bengal, India
130
Hatt M, Tixier F, Desseroit MC, Badic B, Laurent B, Visvikis D, Rest CCL. Revisiting the identification of tumor sub-volumes predictive of residual uptake after (chemo)radiotherapy: influence of segmentation methods on 18F-FDG PET/CT images. Sci Rep 2019; 9:14925. [PMID: 31624321 PMCID: PMC6797734 DOI: 10.1038/s41598-019-51096-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Accepted: 09/19/2019] [Indexed: 12/19/2022] Open
Abstract
Our aim was to evaluate the impact of the accuracy of image segmentation techniques on establishing an overlap between pre-treatment and post-treatment functional tumour volumes in 18FDG-PET/CT imaging. Simulated images and a clinical cohort were considered. Three different configurations (large, small or non-existent overlap) of a single simulated example were used to elucidate the behaviour of each approach. Fifty-four oesophageal and head and neck (H&N) cancer patients treated with radiochemotherapy, with both pre- and post-treatment PET/CT scans, were retrospectively analysed. Images were registered and volumes were determined using combinations of thresholds and the fuzzy locally adaptive Bayesian (FLAB) algorithm. Four overlap metrics were calculated. The simulations showed that thresholds lead to biased overlap estimation and that accurate metrics can be obtained despite spatially inaccurate volumes. In the clinical dataset, only 17 patients exhibited residual uptake smaller than the pre-treatment volume. Overlaps obtained with FLAB were consistently moderate for oesophageal and low for H&N cases across all metrics, whereas overlaps obtained using threshold combinations varied greatly depending on the thresholds and metrics. In both cases, overlaps were variable across patients. Our findings do not support optimisation of radiotherapy planning based on high-uptake sub-volumes defined on pre-treatment 18FDG-PET/CT images. Combinations of thresholds may have led to overestimated overlaps in previous studies.
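For illustration, a fixed-threshold volume definition of the kind compared in this study, together with one simple overlap metric, can be sketched as follows; the 42% SUVmax cutoff is a commonly used value in the PET literature, not necessarily one of the paper's settings:

```python
# Sketch: threshold-based functional volume definition and an overlap fraction
# between pre- and post-treatment binary volumes.
import numpy as np

def threshold_volume(suv: np.ndarray, fraction: float = 0.42) -> np.ndarray:
    """Binary volume of voxels above a fixed fraction of SUVmax."""
    return suv >= fraction * suv.max()

def overlap_fraction(pre: np.ndarray, post: np.ndarray) -> float:
    """Fraction of the post-treatment volume contained in the pre-treatment one."""
    pre, post = pre.astype(bool), post.astype(bool)
    post_n = post.sum()
    return np.logical_and(pre, post).sum() / post_n if post_n else float("nan")
```

As the abstract notes, such metrics can look accurate even when the underlying volumes are spatially inaccurate, which is why the choice of segmentation method matters.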
Affiliation(s)
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Florent Tixier
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Nuclear Medicine Department, CHU Milétrie, Poitiers, France
- Marie-Charlotte Desseroit
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Nuclear Medicine Department, CHU Milétrie, Poitiers, France
- Bogdan Badic
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Catherine Cheze Le Rest
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Nuclear Medicine Department, CHU Milétrie, Poitiers, France
131
Guo Z, Guo N, Gong K, Zhong S, Li Q. Gross tumor volume segmentation for head and neck cancer radiotherapy using deep dense multi-modality network. Phys Med Biol 2019; 64:205015. [PMID: 31514173 PMCID: PMC7186044 DOI: 10.1088/1361-6560/ab440d] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
In radiation therapy, accurate delineation of the gross tumor volume (GTV) is crucial for treatment planning. However, it is challenging for head and neck cancer (HNC) due to the morphological complexity of the various organs in the head, low target-to-background contrast and potential artifacts on conventional planning CT images. Manual delineation of the GTV on anatomical images is therefore extremely time consuming and suffers from inter-observer variability, which leads to planning uncertainty. With the wide use of PET/CT imaging in oncology, complementary functional and anatomical information can be utilized for tumor contouring, bringing a significant advantage to radiation therapy planning. In this study, taking advantage of multi-modality PET and CT images, we propose an automatic deep learning-based GTV segmentation framework for HNC. The backbone of this framework is based on 3D convolutions with dense connections, which enable better information propagation and take full advantage of the features extracted from the multi-modality input images. We evaluate the proposed framework on a dataset of 250 HNC patients, each of whom received both planning CT and PET/CT imaging before radiation therapy (RT). GTV contours manually delineated by radiation oncologists are used as the ground truth. To further investigate the advantage of our proposed Dense-Net framework, we also compare it with a framework using 3D U-Net, the state of the art in segmentation tasks. Meanwhile, for each framework, performance with single-modality input (PET or CT alone) is compared against multi-modality input (both PET and CT). The Dice coefficient, mean surface distance (MSD), 95th-percentile Hausdorff distance (HD95) and displacement of mass centroid (DMC) are calculated for quantitative evaluation. The dataset is split into training (140 patients), validation (35 patients) and test (75 patients) groups to optimize the network. On the independent test group, our proposed multi-modality Dense-Net (Dice 0.73) outperforms the compared network (Dice 0.71). Furthermore, the proposed Dense-Net structure has fewer trainable parameters than the 3D U-Net, which reduces prediction variability. In conclusion, our proposed multi-modality Dense-Net enables satisfactory GTV segmentation for HNC using multi-modality images and yields superior performance compared with conventional methods. It provides an automatic, fast and consistent solution for GTV segmentation and shows potential for general application in radiation therapy planning for a variety of cancers (e.g. lung, sarcoma and liver).
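A minimal PyTorch sketch of the core architectural idea, 3D convolutions with dense connections over a two-channel PET/CT input, is given below; the layer counts and channel sizes are illustrative and far smaller than the paper's network:

```python
# Illustrative 3D dense block for two-channel (PET + CT) volumes.
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    def __init__(self, in_ch: int = 2, growth: int = 8, layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, kernel_size=3, padding=1),
                nn.BatchNorm3d(growth),
                nn.ReLU(inplace=True)))
            ch += growth                     # dense connectivity grows the input
        self.head = nn.Conv3d(ch, 1, kernel_size=1)  # per-voxel GTV logit

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # Each layer sees the concatenation of all earlier feature maps
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.head(torch.cat(feats, dim=1))

# pet_ct: batch of co-registered volumes, shape (N, 2, D, H, W)
pet_ct = torch.randn(1, 2, 32, 64, 64)
logits = DenseBlock3D()(pet_ct)              # (1, 1, 32, 64, 64)
```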
Affiliation(s)
- Zhe Guo
- School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Ning Guo
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Kuang Gong
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Shun'an Zhong
- School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
132
Wei L, Osman S, Hatt M, El Naqa I. Machine learning for radiomics-based multimodality and multiparametric modeling. Q J Nucl Med Mol Imaging 2019; 63:323-338. [PMID: 31527580 DOI: 10.23736/s1824-4785.19.03213-8] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Owing to recent developments in both hardware and software technologies, multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Previously, the application of multimodality imaging in oncology mainly involved combining anatomical and functional imaging to improve diagnostic specificity and/or target definition, as in positron emission tomography/computed tomography (PET/CT) and single-photon emission CT (SPECT)/CT. More recently, the fusion of various images, such as multiparametric magnetic resonance imaging (MRI) sequences, images from different PET tracers, and PET/MRI, has become more prevalent, enabling more comprehensive characterization of the tumor phenotype. To take advantage of these valuable multimodal data for clinical decision making using radiomics, we present two ways to implement multimodal image analysis, namely radiomics-based (handcrafted features) and deep learning-based (machine-learned features) methods. Applying advanced machine (deep) learning algorithms across multimodality images has shown better results than single-modality modeling for the prognosis and/or prediction of clinical outcomes. This holds great potential for providing more personalized treatment for patients and achieving better outcomes.
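The radiomics-based (handcrafted-feature) route described here can be sketched as feature-level fusion followed by a cross-validated classifier; the synthetic feature matrices below are placeholders standing in for real PET and MRI radiomics, so the numbers are meaningless:

```python
# Hedged sketch of multimodality radiomics modeling via early feature fusion.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
pet_feats = rng.normal(size=(60, 40))   # stand-in PET radiomics (60 patients)
mri_feats = rng.normal(size=(60, 40))   # stand-in multiparametric MRI radiomics
labels = rng.integers(0, 2, size=60)    # e.g., treatment response (binary)

X = np.hstack([pet_feats, mri_feats])   # early (feature-level) fusion
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=200, random_state=0))
auc = cross_val_score(model, X, labels, cv=5, scoring="roc_auc")
print(f"multimodality model AUC: {auc.mean():.2f} ± {auc.std():.2f}")
```

Deep learning-based (machine-learned feature) fusion would instead feed the co-registered images themselves to a multi-channel network, as in the Dense-Net sketch shown for entry 131 above.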
Affiliation(s)
- Lise Wei
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Sarah Osman
- Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, UK
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- Issam El Naqa
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
133
Hatt M, Le Rest CC, Tixier F, Badic B, Schick U, Visvikis D. Radiomics: Data Are Also Images. J Nucl Med 2019; 60:38S-44S. [DOI: 10.2967/jnumed.118.220582] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Accepted: 03/28/2019] [Indexed: 12/14/2022] Open
134
Hatt M, Lucia F, Schick U, Visvikis D. Multicentric validation of radiomics findings: challenges and opportunities. EBioMedicine 2019; 47:20-21. [PMID: 31474549 PMCID: PMC6796519 DOI: 10.1016/j.ebiom.2019.08.054] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2019] [Accepted: 08/23/2019] [Indexed: 12/24/2022] Open
Affiliation(s)
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- François Lucia
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France; Radiotherapy Department, CHRU Brest, Brest, France
- Ulrike Schick
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France; Radiotherapy Department, CHRU Brest, Brest, France
135
Zhou T, Ruan S, Canu S. A review: Deep learning for medical image segmentation using multi-modality fusion. Array 2019. [DOI: 10.1016/j.array.2019.100004] [Citation(s) in RCA: 198] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
136
Visvikis D, Cheze Le Rest C, Jaouen V, Hatt M. Artificial intelligence, machine (deep) learning and radio(geno)mics: definitions and nuclear medicine imaging applications. Eur J Nucl Med Mol Imaging 2019; 46:2630-2637. [DOI: 10.1007/s00259-019-04373-w] [Citation(s) in RCA: 62] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2019] [Accepted: 05/23/2019] [Indexed: 12/14/2022]