1
Lien CY, Deng RJ, Fuh JL, Ting YN, Yang AC. Enhancing facial feature de-identification in multiframe brain images: A generative adversarial network approach. Prog Brain Res 2024; 290:141-156. [PMID: 39448110 DOI: 10.1016/bs.pbr.2024.07.003] [Received: 04/29/2024] [Revised: 07/17/2024] [Accepted: 07/24/2024] [Indexed: 10/26/2024]
Abstract
The collection of head images for public datasets in brain science has grown remarkably in recent years, underscoring the need for robust de-identification methods that comply with privacy regulations. This paper presents a novel deep learning-based approach to de-identifying facial features in brain images, using a generative adversarial network to synthesize new facial features and contours. We employed a three-dimensional U-Net model to detect specific features such as the ears, nose, mouth, and eyes. Our method diverges from prior studies by operating on partial regions of the head image rather than comprehensive full-head images. We trained and tested our model on a dataset comprising 490 cases from a publicly available head computed tomography image dataset and an additional 70 cases with head MR images; integrating the two datasets proved advantageous, with promising results. Nose, mouth, and eye detection achieved 100% accuracy, while ear detection reached 85.03% in the training dataset. In the testing dataset, ear detection accuracy was 65.98%, and in the validation dataset it attained 100%. Analysis of pixel value histograms demonstrated varying degrees of similarity, as measured by the Structural Similarity Index (SSIM), between raw and generated features across the different facial features. The proposed methodology, tailored for partial head image processing, is well suited to real-world imaging examination scenarios. It holds potential for future clinical applications, advancing research in de-identification technologies and thereby fortifying privacy safeguards.
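As a concrete illustration of the SSIM comparison this abstract refers to, a minimal single-window SSIM over whole images can be sketched as follows. The study's exact implementation is not given; the windowed variant and the constants below (the common Wang et al. defaults) are assumptions.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Simplified, single-window SSIM computed over whole images.

    Published work typically uses a sliding-window SSIM; this global
    form only illustrates the formula. c1/c2 use the common defaults
    (0.01*L)^2 and (0.03*L)^2, which is an assumption here.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()          # luminance terms
    vx, vy = x.var(), y.var()            # contrast terms
    cov = ((x - mx) * (y - my)).mean()   # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical raw and generated patches give a value of 1.0, and the score drops toward 0 as the synthesized features diverge from the originals.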
Affiliation(s)
- Chung-Yueh Lien
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Rui-Jun Deng
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Jong-Ling Fuh
- Department of Neurology, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yun-Ni Ting
- Department of Neurology, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Albert C Yang
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan; Digital Medicine and Smart Healthcare Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan

2
Ryu DW, Lee C, Lee HJ, Shim YS, Hong YJ, Cho JH, Kim S, Lee JM, Yang DW. Assessing the Impact of Defacing Algorithms on Brain Volumetry Accuracy in MRI Analyses. Dement Neurocogn Disord 2024; 23:127-135. [PMID: 39113754 PMCID: PMC11300685 DOI: 10.12779/dnd.2024.23.3.127] [Received: 04/15/2024] [Revised: 04/26/2024] [Accepted: 04/26/2024] [Indexed: 08/10/2024]
Abstract
Background and Purpose To ensure data privacy, the development of defacing processes, which anonymize brain images by obscuring facial features, is crucial. However, the impact of these defacing methods on brain imaging analysis poses a significant concern. This study aimed to evaluate the reliability of three different defacing methods in automated brain volumetry. Methods Magnetic resonance imaging with three-dimensional T1 sequences was performed on ten patients diagnosed with subjective cognitive decline. Defacing was executed using mri_deface, BioImage Suite Web-based defacing, and Defacer. Brain volumes were measured with the QBraVo program and FreeSurfer, assessing the intraclass correlation coefficient (ICC) and the mean differences in brain volume measurements between the original and defaced images. Results The mean age of the patients was 71.10±6.17 years, and 4 (40.0%) were male. The total intracranial volume, total brain volume, and ventricle volume exhibited high ICCs across the three defacing methods and both volumetry analyses. All regional brain volumes showed high ICCs with all three defacing methods. Despite variations among some brain regions, no significant mean differences in regional brain volume were observed between the original and defaced images across all regions. Conclusions The three defacing algorithms evaluated did not significantly affect the results of image analysis for the entire brain or specific cerebral regions. These findings suggest that these algorithms can serve as robust methods for defacing in neuroimaging analysis, supporting data anonymization without compromising the integrity of brain volume measurements.
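The agreement measure in this abstract, the intraclass correlation coefficient, can be sketched with the generic two-way random, single-measure ICC(2,1) from Shrout and Fleiss. The study does not state which ICC variant it used, so this form is an assumption; rows are subjects and columns are conditions (e.g. original vs. each defaced volumetry result).

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random-effects, single-measure ICC(2,1).

    `ratings` is an (n_subjects, k_conditions) array, e.g. one column
    of brain-volume measurements per defacing condition. Textbook
    Shrout & Fleiss formula, not the study's exact variant.
    """
    r = np.asarray(ratings, dtype=np.float64)
    n, k = r.shape
    grand = r.mean()
    # mean squares for rows (subjects), columns (conditions), residual
    msr = k * ((r.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msc = n * ((r.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    sse = ((r - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly agreeing conditions yield 1.0; values near 1 correspond to the "high ICCs" reported above.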
Affiliation(s)
- Dong-Woo Ryu
- Department of Neurology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- ChungHwee Lee
- Department of Neurology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Hyuk-je Lee
- Department of Neurology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Yong S Shim
- Department of Neurology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Yun Jeong Hong
- Department of Neurology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Jung Hee Cho
- Department of Neurology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Seonggyu Kim
- Department of Electronic Engineering, Hanyang University, Seoul, Korea
- Jong-Min Lee
- Department of Biomedical Engineering, Hanyang University, Seoul, Korea
- Dong Won Yang
- Department of Neurology, College of Medicine, The Catholic University of Korea, Seoul, Korea

3
Filippi CG, Stein JM, Wang Z, Bakas S, Liu Y, Chang PD, Lui Y, Hess C, Barboriak DP, Flanders AE, Wintermark M, Zaharchuk G, Wu O. Ethical Considerations and Fairness in the Use of Artificial Intelligence for Neuroradiology. AJNR Am J Neuroradiol 2023; 44:1242-1248. [PMID: 37652578 PMCID: PMC10631523 DOI: 10.3174/ajnr.a7963] [Received: 01/30/2023] [Accepted: 07/07/2023] [Indexed: 09/02/2023]
Abstract
In this review, concepts of algorithmic bias and fairness are defined qualitatively and mathematically. Illustrative examples are given of what can go wrong when unintended bias or unfairness occurs in algorithmic development. The importance of explainability, accountability, and transparency with respect to artificial intelligence algorithm development and clinical deployment is discussed, grounded in the concept of "primum non nocere" (first, do no harm). Steps to mitigate unfairness and bias in task definition, data collection, model definition, training, testing, deployment, and feedback are provided. The implementation of fairness criteria that maximize benefit and minimize unfairness and harm to neuroradiology patients is discussed, including suggestions for neuroradiologists to consider as artificial intelligence algorithms gain acceptance in neuroradiology practice and become incorporated into routine clinical workflow.
Affiliation(s)
- C G Filippi
- From the Department of Radiology (C.G.F.), Tufts University School of Medicine, Boston, Massachusetts
- J M Stein
- Department of Radiology (J.M.S., S.B.), University of Pennsylvania, Philadelphia, Pennsylvania
- Z Wang
- Athinoula A. Martinos Center for Biomedical Imaging (Z.W., Y. Liu, O.W.), Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- S Bakas
- Department of Radiology (J.M.S., S.B.), University of Pennsylvania, Philadelphia, Pennsylvania
- Y Liu
- Athinoula A. Martinos Center for Biomedical Imaging (Z.W., Y. Liu, O.W.), Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- P D Chang
- Department of Radiological Sciences (P.D.C.), University of California, Irvine, California
- Y Lui
- Department of Neuroradiology (Y. Lui), NYU Langone Health, New York, New York
- C Hess
- Department of Radiology and Biomedical Imaging (C.H.), University of California, San Francisco, San Francisco, California
- D P Barboriak
- Department of Radiology (D.P.B.), Duke University School of Medicine, Durham, North Carolina
- A E Flanders
- Department of Neuroradiology/Otolaryngology (ENT) Radiology (A.E.F.), Thomas Jefferson University, Philadelphia, Pennsylvania
- M Wintermark
- Department of Neuroradiology (M.W.), Division of Diagnostic Imaging, MD Anderson Cancer Center, Houston, Texas
- G Zaharchuk
- Department of Radiology (G.Z.), Stanford University, Stanford, California
- O Wu
- Athinoula A. Martinos Center for Biomedical Imaging (Z.W., Y. Liu, O.W.), Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts

4
Gao C, Landman BA, Prince JL, Carass A. Reproducibility evaluation of the effects of MRI defacing on brain segmentation. J Med Imaging (Bellingham) 2023; 10:064001. [PMID: 38074632 PMCID: PMC10704191 DOI: 10.1117/1.jmi.10.6.064001] [Received: 05/18/2023] [Revised: 09/22/2023] [Accepted: 10/24/2023] [Indexed: 12/20/2023]
Abstract
Purpose Recent advances in magnetic resonance (MR) scanner quality and the rapidly improving nature of facial recognition software have necessitated the introduction of MR defacing algorithms to protect patient privacy. As a result, a number of MR defacing algorithms are available to the neuroimaging community, with several appearing in just the last 5 years. While some qualities of these defacing algorithms, such as patient identifiability, have been explored in previous works, the potential impact of defacing on neuroimage processing has yet to be explored. Approach We qualitatively evaluate eight MR defacing algorithms on 179 subjects from the OASIS-3 cohort and 21 subjects from the Kirby-21 dataset. We also evaluate the effects of defacing on two neuroimaging pipelines, SLANT and FreeSurfer, by comparing the segmentation consistency between the original and defaced images. Results Defacing can alter brain segmentation and even lead to catastrophic failures, which are more frequent with some algorithms, such as Quickshear, MRI_Deface, and FSL_deface. Compared to FreeSurfer, SLANT is less affected by defacing. On outputs that pass the quality check, the effects of defacing are less pronounced than those of rescanning, as measured by the Dice similarity coefficient. Conclusions The effects of defacing are noticeable and should not be disregarded. Particular attention should be paid to the possibility of catastrophic failures. It is crucial to adopt a robust defacing algorithm and perform a thorough quality check before releasing defaced datasets. To improve the reliability of analysis in scenarios involving defaced MRIs, including multiple brain segmentation pipelines is encouraged.
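The segmentation-consistency measure used here, the Dice similarity coefficient, has a standard definition that can be sketched directly; the convention of returning 1.0 when both masks are empty is common but assumed rather than taken from the paper.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary label masks,
    e.g. the same brain structure segmented on an original and a
    defaced image. DSC = 2|A ∩ B| / (|A| + |B|), in [0, 1].
    """
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        # both masks empty: treat as perfect agreement (a convention)
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / total
```

Comparing original-vs-defaced DSC against original-vs-rescan DSC, as the abstract does, puts the defacing effect on the same scale as ordinary scan-rescan variability.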
Affiliation(s)
- Chenyu Gao
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Bennett A. Landman
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Jerry L. Prince
- The Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Aaron Carass
- The Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States

5
Patel R, Provenzano D, Loew M. Anonymization and validation of three-dimensional volumetric renderings of computed tomography data using commercially available T1-weighted magnetic resonance imaging-based algorithms. J Med Imaging (Bellingham) 2023; 10:066501. [PMID: 38074629 PMCID: PMC10704182 DOI: 10.1117/1.jmi.10.6.066501] [Received: 03/14/2023] [Revised: 11/03/2023] [Accepted: 11/07/2023] [Indexed: 02/12/2024]
Abstract
Purpose Previous studies have demonstrated that three-dimensional (3D) volumetric renderings of magnetic resonance imaging (MRI) brain data can be used to identify patients via facial recognition. We have shown that facial features can be identified on simulation computed tomography (CT) images for radiation oncology and mapped to face images from a database. We aim to determine whether CT images can be anonymized using anonymization software designed for T1-weighted MRI data. Approach Our study examines (1) the ability of off-the-shelf anonymization algorithms to anonymize CT data and (2) the ability of facial recognition algorithms to match the anonymized renderings to faces in a database of facial images. We generated 3D renderings from 57 head CT scans from The Cancer Imaging Archive database. Data were anonymized using AFNI (deface, reface, and 3Dskullstrip) and FSL's BET. Anonymized data were compared to the original renderings and passed through facial recognition algorithms (VGG-Face, FaceNet, DLib, and SFace) using a facial database (Labeled Faces in the Wild) to determine what matches could be found. Results All modules were able to process CT data, and AFNI's 3Dskullstrip and FSL's BET consistently showed lower reidentification rates than the original renderings. Conclusions These results highlight the potential of anonymization algorithms as a clinical standard for de-identifying brain CT data. Our study demonstrates the importance of continued vigilance for patient privacy in publicly shared datasets and of continued evaluation of anonymization methods for CT data.
Affiliation(s)
- Rahil Patel
- George Washington University School of Engineering and Applied Science, Department of Biomedical Engineering, Washington, District of Columbia, United States
- Destie Provenzano
- George Washington University School of Engineering and Applied Science, Department of Biomedical Engineering, Washington, District of Columbia, United States
- Murray Loew
- George Washington University School of Engineering and Applied Science, Department of Biomedical Engineering, Washington, District of Columbia, United States

6
Gao C, Landman BA, Prince JL, Carass A. A reproducibility evaluation of the effects of MRI defacing on brain segmentation. medRxiv 2023:2023.05.15.23289995. [PMID: 37293070 PMCID: PMC10246049 DOI: 10.1101/2023.05.15.23289995] [Indexed: 06/10/2023]
Abstract
Purpose Recent advances in magnetic resonance (MR) scanner quality and the rapidly improving nature of facial recognition software have necessitated the introduction of MR defacing algorithms to protect patient privacy. As a result, a number of MR defacing algorithms are available to the neuroimaging community, with several appearing in just the last five years. While some qualities of these defacing algorithms, such as patient identifiability, have been explored in previous works, the potential impact of defacing on neuroimage processing has yet to be explored. Approach We qualitatively evaluate eight MR defacing algorithms on 179 subjects from the OASIS-3 cohort and 21 subjects from the Kirby-21 dataset. We also evaluate the effects of defacing on two neuroimaging pipelines, SLANT and FreeSurfer, by comparing the segmentation consistency between the original and defaced images. Results Defacing can alter brain segmentation and even lead to catastrophic failures, which are more frequent with some algorithms, such as Quickshear, MRI_Deface, and FSL_deface. Compared to FreeSurfer, SLANT is less affected by defacing. On outputs that pass the quality check, the effects of defacing are less pronounced than those of rescanning, as measured by the Dice similarity coefficient. Conclusions The effects of defacing are noticeable and should not be disregarded. Particular attention should be paid to the possibility of catastrophic failures. It is crucial to adopt a robust defacing algorithm and perform a thorough quality check before releasing defaced datasets. To improve the reliability of analysis in scenarios involving defaced MRIs, including multiple brain segmentation pipelines is encouraged.
Affiliation(s)
- Chenyu Gao
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, 37235
- Bennett A. Landman
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, 37235
- Jerry L. Prince
- The Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, 21218
- Aaron Carass
- The Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, 21218

7
Neri E, Aghakhanyan G, Zerunian M, Gandolfo N, Grassi R, Miele V, Giovagnoni A, Laghi A. Explainable AI in radiology: a white paper of the Italian Society of Medical and Interventional Radiology. Radiol Med 2023. [PMID: 37155000 DOI: 10.1007/s11547-023-01634-5] [Received: 02/17/2023] [Accepted: 04/19/2023] [Indexed: 05/10/2023]
Abstract
The term Explainable Artificial Intelligence (xAI) groups together the body of scientific knowledge developed in the search for methods to explain the inner logic behind an AI algorithm and the model inference based on knowledge-based interpretability. xAI is now generally recognized as a core area of AI. A variety of xAI methods are currently available to researchers; nonetheless, a comprehensive classification of xAI methods is still lacking. In addition, there is no consensus among researchers as to what an explanation exactly is and which salient properties must be considered to make it understandable for every end user. The Italian Society of Medical and Interventional Radiology (SIRM) introduces this xAI white paper to help radiologists, medical practitioners, and scientists understand the emerging field of xAI: the black-box problem behind the success of AI, the xAI methods that can turn a black box into a glass box, and the role and responsibilities of radiologists in the appropriate use of AI technology. Because AI is changing and evolving so rapidly, a definitive conclusion or solution is still far off. However, one of our greatest responsibilities is to keep up with this change in a critical manner. Ignoring and discrediting the advent of AI a priori will not curb its use but could result in its application without awareness. Learning about and deepening our knowledge of this very important technological change will therefore allow us to put AI at our service and at the service of patients in a conscious way, pushing this paradigm shift as far as it will benefit us.
Affiliation(s)
- Emanuele Neri
- Academic Radiology, Department of Translational Research and of New Surgical and Medical Technology, University of Pisa, Pisa, Italy
- Gayane Aghakhanyan
- Academic Radiology, Department of Translational Research and of New Surgical and Medical Technology, University of Pisa, Pisa, Italy
- Marta Zerunian
- Medical-Surgical Sciences and Translational Medicine, Sapienza University of Rome, Sant'Andrea Hospital, Rome, Italy
- Nicoletta Gandolfo
- Diagnostic Imaging Department, VillaScassi Hospital-ASL 3, Corso Scassi 1, Genoa, Italy
- Roberto Grassi
- Radiology Unit, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
- Vittorio Miele
- Department of Radiology, Careggi University Hospital, Florence, Italy
- Andrea Giovagnoni
- Department of Radiological Sciences, Radiology Clinic, Azienda Ospedaliera Universitaria, Ospedali Riuniti Di Ancona, Ancona, Italy
- Andrea Laghi
- Medical-Surgical Sciences and Translational Medicine, Sapienza University of Rome, Sant'Andrea Hospital, Rome, Italy

8
Sahlsten J, Wahid KA, Glerean E, Jaskari J, Naser MA, He R, Kann BH, Mäkitie A, Fuller CD, Kaski K. Segmentation stability of human head and neck cancer medical images for radiotherapy applications under de-identification conditions: Benchmarking data sharing and artificial intelligence use-cases. Front Oncol 2023; 13:1120392. [PMID: 36925936 PMCID: PMC10011442 DOI: 10.3389/fonc.2023.1120392] [Received: 12/09/2022] [Accepted: 02/13/2023] [Indexed: 03/08/2023]
Abstract
Background Demand for head and neck cancer (HNC) radiotherapy data in algorithmic development has prompted increased image dataset sharing. Medical images must comply with data protection requirements so that re-use is enabled without disclosing patient identifiers. Defacing, i.e., the removal of facial features from images, is often considered a reasonable compromise between data protection and re-usability for neuroimaging data. While defacing tools have been developed by the neuroimaging community, their acceptability for radiotherapy applications has not been explored. Therefore, this study systematically investigated the impact of available defacing algorithms on HNC organs at risk (OARs). Methods A publicly available dataset of magnetic resonance imaging scans for 55 HNC patients with eight segmented OARs (bilateral submandibular glands, parotid glands, level II neck lymph nodes, and level III neck lymph nodes) was utilized. Eight publicly available defacing algorithms were investigated: afni_refacer, DeepDefacer, defacer, fsl_deface, mask_face, mri_deface, pydeface, and quickshear. Using a subset of scans where defacing succeeded (N=29), a 5-fold cross-validation 3D U-Net based OAR auto-segmentation model was utilized to perform two main experiments: (1) comparing original and defaced data for training when evaluated on original data; (2) using original data for training and comparing the model evaluation on original and defaced data. Models were primarily assessed using the Dice similarity coefficient (DSC). Results Most defacing methods were unable to produce any usable images for evaluation, while mask_face, fsl_deface, and pydeface failed to remove the face for 29%, 18%, and 24% of subjects, respectively. When using the original data for evaluation, the composite OAR DSC was statistically higher (p ≤ 0.05) for the model trained with the original data (DSC of 0.760) than for the mask_face, fsl_deface, and pydeface models (DSCs of 0.742, 0.736, and 0.449, respectively). Moreover, the model trained with original data showed decreased performance (p ≤ 0.05) when evaluated on the defaced data, with DSCs of 0.673, 0.693, and 0.406 for mask_face, fsl_deface, and pydeface, respectively. Conclusion Defacing algorithms may have a significant impact on HNC OAR auto-segmentation model training and testing. This work highlights the need for further development of HNC-specific image anonymization methods.
Affiliation(s)
- Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Kareem A. Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Mohamed A. Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Benjamin H. Kann
- Artificial Intelligence in Medicine Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Antti Mäkitie
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland

9
Wahid KA, Glerean E, Sahlsten J, Jaskari J, Kaski K, Naser MA, He R, Mohamed ASR, Fuller CD. Artificial Intelligence for Radiation Oncology Applications Using Public Datasets. Semin Radiat Oncol 2022; 32:400-414. [PMID: 36202442 PMCID: PMC9587532 DOI: 10.1016/j.semradonc.2022.06.009] [Indexed: 11/05/2022]
Abstract
Artificial intelligence (AI) has exceptional potential to positively impact the field of radiation oncology. However, large curated datasets, often involving imaging data and corresponding annotations, are required to develop radiation oncology AI models. Importantly, the recent establishment of the Findable, Accessible, Interoperable, Reusable (FAIR) principles for scientific data management has enabled an increasing number of radiation oncology-related datasets to be disseminated through data repositories, which thereby act as a rich source of data for AI model building. This manuscript reviews the current and future state of radiation oncology data dissemination, with a particular emphasis on published imaging datasets, AI data challenges, and associated infrastructure. Moreover, we provide historical context for FAIR data dissemination protocols, discuss difficulties in the current distribution of radiation oncology data, and offer recommendations regarding data dissemination for eventual utilization in AI models. Through FAIR principles and standardized approaches to data dissemination, radiation oncology AI research has nothing to lose and everything to gain.
Affiliation(s)
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA

10
Wang L, Zhao Q. Deformation Analysis and Research of Building Envelope by Deep Learning Technology under the Reinforcement of the Diaphragm Wall. Comput Intell Neurosci 2022; 2022:9489445. [PMID: 36156955 PMCID: PMC9492380 DOI: 10.1155/2022/9489445] [Received: 07/07/2022] [Revised: 08/17/2022] [Accepted: 08/25/2022] [Indexed: 11/17/2022]
Abstract
The safety analysis of underground buildings is a crucial problem in the construction industry. This work aims to optimize the safety analysis of the underground building envelope and comprehensively improve the safety of underground buildings. Long short-term memory (LSTM) networks can make both long-term and short-term predictions, reducing the model's prediction error; applying them to deformation analysis and data prediction for the underground building envelope can improve the accuracy of envelope deformation prediction. This work discusses deep learning technology and the principle of the LSTM model in depth. Based on the safety analysis concept of the underground building envelope, an LSTM model for predicting deformation of the underground building envelope is established and comprehensively evaluated. The results show that in predicting the horizontal displacement of foundation pit piles of the diaphragm wall, the mean relative error (MRE) of the model's predictions ranges from 10% to 18%, and the calculation time ranges from 15 to 36 s. In settlement displacement prediction, the model's MRE is within 5%-7%, and the calculation time is within 17-40 s. As the number of training iterations increases, the prediction accuracy of the model improves and the calculation time becomes relatively stable. Compared with other models, the relative error of the prediction results is about 5.4% at most and 1.8% at least. This work provides technical support for improving the accuracy of safety predictions for the underground building envelope and offers a reference for the comprehensive development of the underground construction industry.
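The error metric this abstract reports, the mean relative error, can be sketched in its common absolute-relative form; the paper does not give its exact formula, so this definition is an assumption.

```python
import numpy as np

def mean_relative_error(predicted, observed):
    """Mean relative error (MRE) between predicted and measured
    values (e.g. displacements), as a fraction; multiply by 100
    for the percentage form reported in the abstract.
    """
    p = np.asarray(predicted, dtype=np.float64)
    o = np.asarray(observed, dtype=np.float64)
    return float(np.mean(np.abs(p - o) / np.abs(o)))
```

For example, predictions of 110 and 190 against measurements of 100 and 200 give relative errors of 10% and 5%, i.e. an MRE of 7.5%, within the 5%-18% bands the abstract describes.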
Affiliation(s)
- Lijuan Wang
- State Key Laboratory of GeoHazard Prevention and GeoEnvironment Protection, Chengdu University of Technology, Chengdu 610059, China
- Qihua Zhao
- State Key Laboratory of GeoHazard Prevention and GeoEnvironment Protection, Chengdu University of Technology, Chengdu 610059, China

11
Li G, Wu X, Ma X. Artificial intelligence in radiotherapy. Semin Cancer Biol 2022; 86:160-171. [PMID: 35998809 DOI: 10.1016/j.semcancer.2022.08.005] [Received: 07/15/2022] [Accepted: 08/18/2022] [Indexed: 11/19/2022]
Abstract
Radiotherapy is a discipline closely integrated with computer science. Artificial intelligence (AI) has developed rapidly over the past few years. With the explosive growth of medical big data, AI promises to revolutionize the field of radiotherapy through highly automated workflows, enhanced quality assurance, an improved regional balance of expertise, and individualized treatment guided by multi-omics. In addition to independent researchers, a growing number of large databases, biobanks, and open challenges have significantly facilitated AI studies in radiation oncology. This article reviews the latest research, clinical applications, and challenges of AI in each part of radiotherapy, including image processing, contouring, planning, quality assurance, motion management, and outcome prediction. By summarizing cutting-edge findings and challenges, we aim to inspire researchers to explore future possibilities and accelerate the arrival of AI-driven radiotherapy.
Affiliation(s)
- Guangqi Li
- Division of Biotherapy, Cancer Center, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
- Xin Wu
- Head & Neck Oncology ward, Division of Radiotherapy Oncology, Cancer Center, West China Hospital, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
- Xuelei Ma
- Division of Biotherapy, Cancer Center, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China.
12
Yang HC, Rahmanti AR, Huang CW, Li YCJ. How Can Research on Artificial Empathy Be Enhanced by Applying Deepfakes? J Med Internet Res 2022; 24:e29506. [PMID: 35254278 PMCID: PMC8933806 DOI: 10.2196/29506] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Revised: 12/06/2021] [Accepted: 12/28/2021] [Indexed: 11/13/2022] Open
Abstract
We propose the idea of using an open data set of doctor-patient interactions to develop artificial empathy based on facial emotion recognition. Facial emotion recognition allows a doctor to analyze patients' emotions and reach out to them through empathic care. However, face recognition data sets are often difficult to acquire, and many researchers must work with small samples. Further, sharing medical images or videos has not been possible, as doing so may violate patient privacy. Deepfake technology is a promising approach to de-identifying video recordings of patients' clinical encounters. Such technology can revolutionize the implementation of facial emotion recognition by replacing a patient's face in an image or video with an unrecognizable face whose expression is similar to that of the original. This technology will further enhance the potential use of artificial empathy in helping doctors provide empathic care to build good doctor-patient therapeutic relationships, which may result in better patient satisfaction and adherence to treatment.
Affiliation(s)
- Hsuan-Chia Yang
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- International Center for Health Information Technology, Taipei Medical University, Taipei, Taiwan
- Research Center of Big Data and Meta-analysis, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Clinical Big Data Research Center, Taipei Medical University Hospital, Taipei, Taiwan
- Annisa Ristya Rahmanti
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- International Center for Health Information Technology, Taipei Medical University, Taipei, Taiwan
- Department of Health Policy Management, Faculty of Medicine, Public Health, and Nursing, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Chih-Wei Huang
- International Center for Health Information Technology, Taipei Medical University, Taipei, Taiwan
- Yu-Chuan Jack Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- International Center for Health Information Technology, Taipei Medical University, Taipei, Taiwan
- Research Center of Big Data and Meta-analysis, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Department of Dermatology, Wanfang Hospital, Taipei, Taiwan
13
Han W, Han X, Zhou S, Zhu Q. The Development History and Research Tendency of Medical Informatics: Topic Evolution Analysis. JMIR Med Inform 2022; 10:e31918. [PMID: 35084351 PMCID: PMC8832275 DOI: 10.2196/31918] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 11/15/2021] [Accepted: 12/19/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Medical informatics has attracted the attention of researchers worldwide. It is necessary to understand the development of its research hot spots as well as directions for future research. OBJECTIVE The aim of this study is to explore the evolution of medical informatics research topics by analyzing research articles published between 1964 and 2020. METHODS A total of 56,466 publications were collected from 27 representative medical informatics journals indexed by the Web of Science Core Collection. We identified the research stages based on the literature growth curve, extracted research topics using the latent Dirichlet allocation model, and analyzed topic evolution patterns by calculating the cosine similarity between topics from adjacent stages. RESULTS The following three research stages were identified: early birth, early development, and rapid development. Medical informatics has entered the rapid development stage, with the literature growing exponentially. Research topics in medical informatics can be classified into two categories: data-centered studies and people-centered studies. Medical data analysis has been a research hot spot across all 3 stages, and the integration of emerging technologies into data analysis might be a future hot spot. Researchers have focused more on user needs in the last 2 stages. Another potential hot spot might be how to meet user needs and improve the usability of health tools. CONCLUSIONS Our study provides a comprehensive understanding of research hot spots in medical informatics and the evolution patterns among them, which can help researchers grasp research trends and design their studies.
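The topic-evolution step described in the METHODS links topics from adjacent stages by cosine similarity; a minimal sketch, with hypothetical topic-word weight vectors standing in for LDA output:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical topic-word distributions over a shared vocabulary
# (in practice these would come from an LDA model fitted per stage)
topic_stage1 = [0.40, 0.30, 0.20, 0.10]  # e.g. a "medical data analysis" topic
topic_stage2 = [0.35, 0.35, 0.20, 0.10]  # candidate successor topic
print(round(cosine_similarity(topic_stage1, topic_stage2), 3))
```

A similarity near 1 suggests the later-stage topic is a continuation of the earlier one; values near 0 suggest an unrelated, emerging topic.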
Affiliation(s)
- Wenting Han
- School of Management & Engineering, Nanjing University, Nanjing, China
- Xi Han
- School of Business Administration, Guangdong University of Finance & Economics, Guangzhou, China
- Sijia Zhou
- Department of Information Systems, City University of Hong Kong, Hong Kong, Hong Kong
- Qinghua Zhu
- School of Information Management, Nanjing University, Nanjing, China
14
Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. [PMID: 34145766 DOI: 10.1111/1754-9485.13261] [Citation(s) in RCA: 174] [Impact Index Per Article: 58.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Accepted: 05/23/2021] [Indexed: 12/21/2022]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data for training the model and has been shown to improve performance when validated on a separate, unseen dataset. Because this approach has become commonplace, we conducted a systematic review of the literature in which data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model, to help understand the types of data augmentation techniques used in state-of-the-art deep learning models. Articles were categorised into basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
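The "basic" augmentation category covers simple geometric transforms such as flips and rotations; a minimal illustrative sketch on a toy 2-D array standing in for a CT/MRI slice (not the review's code):

```python
def flip_horizontal(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rotate_90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Yield simple geometric variants of one image."""
    yield img
    yield flip_horizontal(img)
    yield rotate_90(img)
    yield flip_horizontal(rotate_90(img))

slice_2d = [[1, 2],
            [3, 4]]
for variant in augment(slice_2d):
    print(variant)
```

Each variant would be paired with the (identically transformed) label or contour, multiplying the effective training-set size without acquiring new scans.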
Affiliation(s)
- Phillip Chlap
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia