1
Boitsios G, Simoni P. Photographs coupled with radiographs in children: perspectives and ethical challenges. Pediatr Radiol 2023;53:1967-1968. [PMID: 37097480] [DOI: 10.1007/s00247-023-05671-0]
Affiliation(s)
- Grammatina Boitsios
- Paediatric Imaging Department, Queen Fabiola Children's Hospital (HUDERF) - Brussels, Avenue Jean Joseph Crocq 15, 1020, Brussels, Belgium.
- Paolo Simoni
- Paediatric Imaging Department, Queen Fabiola Children's Hospital (HUDERF) - Brussels, Avenue Jean Joseph Crocq 15, 1020, Brussels, Belgium.
2
Selfridge AR, Spencer BA, Abdelhafez YG, Nakagawa K, Tupin JD, Badawi RD. Facial Anonymization and Privacy Concerns in Total-Body PET/CT. J Nucl Med 2023;64:1304-1309. [PMID: 37268426] [PMCID: PMC10394314] [DOI: 10.2967/jnumed.122.265280]
Abstract
Total-body PET/CT images can be rendered to produce images of a subject's face and body. In response to privacy and identifiability concerns when sharing data, we have developed and validated a workflow that obscures (defaces) a subject's face in 3-dimensional volumetric data. Methods: To validate our method, we measured facial identifiability before and after defacing images from 30 healthy subjects who were imaged with both [18F]FDG PET and CT at either 3 or 6 time points. Briefly, facial embeddings were calculated using Google's FaceNet, and an analysis of clustering was used to estimate identifiability. Results: Faces rendered from CT images were correctly matched to CT scans at other time points at a rate of 93%, which decreased to 6% after defacing. Faces rendered from PET images were correctly matched to PET images at other time points at a maximum rate of 64% and to CT images at a maximum rate of 50%, both of which decreased to 7% after defacing. We further demonstrated that defaced CT images can be used for attenuation correction during PET reconstruction, introducing a maximum bias of -3.3% in regions of the cerebral cortex nearest the face. Conclusion: We believe that the proposed method provides a baseline of anonymity and discretion when sharing image data online or between institutions and will help to facilitate collaboration and future regulatory compliance.
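The matching experiment behind these identifiability rates can be sketched in a few lines. This is a toy version only: it assumes facial embeddings (the paper uses Google's FaceNet) have already been computed per rendered face, and the `match_rate` helper and random data are illustrative, not the authors' code.

```python
import numpy as np

def match_rate(probe_emb, gallery_emb, true_idx):
    """Fraction of probes whose nearest gallery embedding
    (by cosine similarity) belongs to the correct subject."""
    # Normalize rows so the dot product equals cosine similarity.
    p = probe_emb / np.linalg.norm(probe_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    best = np.argmax(p @ g.T, axis=1)  # nearest gallery entry per probe
    return float(np.mean(best == np.asarray(true_idx)))

# Toy data: 3 subjects with 4-dim embeddings; probes are slightly
# perturbed copies of the gallery vectors, as if rendered from a
# scan at a second time point.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 4))
probes = gallery + 0.01 * rng.normal(size=(3, 4))
print(match_rate(probes, gallery, [0, 1, 2]))  # high before defacing
```

After defacing, the rendered faces no longer resemble the subject, the embeddings move, and the same metric drops toward chance level.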
Affiliation(s)
- Aaron R Selfridge
- Department of Biomedical Engineering, University of California-Davis, Davis, California
- Benjamin A Spencer
- Department of Biomedical Engineering, University of California-Davis, Davis, California
- Department of Radiology, University of California-Davis, Davis, California
- Yasser G Abdelhafez
- Department of Radiology, University of California-Davis, Davis, California
- Radiotherapy and Nuclear Medicine Department, South Egypt Cancer Institute, Assiut University, Assiut, Egypt
- Keisuke Nakagawa
- Cloud Innovation Center, University of California-Davis, Davis, California
- John D Tupin
- IRB Administration, University of California-Davis, Davis, California
- Ramsey D Badawi
- Department of Biomedical Engineering, University of California-Davis, Davis, California
- Department of Radiology, University of California-Davis, Davis, California
3
Uchida T, Kin T, Saito T, Shono N, Kiyofuji S, Koike T, Sato K, Niwa R, Takashima I, Oyama H, Saito N. De-Identification Technique with Facial Deformation in Head CT Images. Neuroinformatics 2023;21:575-587. [PMID: 37226013] [PMCID: PMC10406725] [DOI: 10.1007/s12021-023-09631-9]
Abstract
Head CT, which includes the facial region, can visualize faces using 3D reconstruction, raising concern that individuals may be identified. We developed a new de-identification technique that distorts the faces in head CT images. The head CT images to be distorted were labeled "original images" and the others "reference images." Reconstructed face models of both were created, with 400 control points on the facial surfaces. All voxel positions in the original image were moved and deformed according to the deformation vectors required to move to the corresponding control points on the reference image. Three face detection and identification programs were used to determine face detection rates and match confidence scores. Intracranial volume equivalence tests were performed before and after deformation, and correlation coefficients between intracranial pixel value histograms were calculated. The output accuracy of a deep learning model for intracranial segmentation was determined using the Dice Similarity Coefficient before and after deformation. The face detection rate was 100%, and match confidence scores were < 90. Equivalence testing of the intracranial volume revealed statistical equivalence before and after deformation. The median correlation coefficient between intracranial pixel value histograms before and after deformation was 0.9965, indicating high similarity. Dice Similarity Coefficient values of the original and deformed images were statistically equivalent. We developed a technique to de-identify head CT images while maintaining the accuracy of deep learning models. The technique deforms images to prevent face identification, with minimal changes to the original information.
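The Dice Similarity Coefficient used above to verify that intracranial segmentations survive the deformation is straightforward to compute. A minimal numpy sketch, with toy binary masks rather than the paper's data:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy intracranial masks on a 4x4 slice, differing in one voxel.
a = np.zeros((4, 4), dtype=bool)
a[1:3, 1:3] = True          # 4 voxels
b = a.copy()
b[3, 3] = True              # 5 voxels, one spurious
print(round(dice(a, b), 3))  # → 0.889
```

A DSC near 1.0 between segmentations of the original and deformed images indicates the deformation left the intracranial content usable for downstream models.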
Affiliation(s)
- Tatsuya Uchida
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Taichi Kin
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Department of Medical Information Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Toki Saito
- Department of Medical Information Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoyuki Shono
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Satoshi Kiyofuji
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Tsukasa Koike
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Katsuya Sato
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Ryoko Niwa
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Ikumi Takashima
- Data Science Office, Clinical Research Promotion Center, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Hiroshi Oyama
- Department of Clinical Information Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Nobuhito Saito
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
4
Khosravi P, Schweitzer M. Artificial intelligence in neuroradiology: a scoping review of some ethical challenges. Front Radiol 2023;3:1149461. [PMID: 37492387] [PMCID: PMC10365008] [DOI: 10.3389/fradi.2023.1149461]
Abstract
Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, developing models to determine treatment decisions, and improving current prognostication as well as diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, as well as responsibility and liability that might potentially arise. In this manuscript, we will first provide a brief overview of AI methods used in neuroradiology and segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain in alignment with ethics in research and healthcare in the future.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, NY, United States
- Mark Schweitzer
- Office of the Vice President for Health Affairs, Wayne State University, Detroit, MI, United States
5
Schwarz CG, Kremers WK, Lowe VJ, Savvides M, Gunter JL, Senjem ML, Vemuri P, Kantarci K, Knopman DS, Petersen RC, Jack CR. Face recognition from research brain PET: An unexpected PET problem. Neuroimage 2022;258:119357. [PMID: 35660089] [PMCID: PMC9358410] [DOI: 10.1016/j.neuroimage.2022.119357]
Abstract
It is well known that de-identified research brain images from MRI and CT can potentially be re-identified using face recognition; however, this has not been examined for PET images. We generated face reconstruction images of 182 volunteers using amyloid, tau, and FDG PET scans, and we measured how accurately commercial face recognition software (Microsoft Azure's Face API) automatically matched them with the individual participants' face photographs. We then compared this accuracy with the same experiments using participants' CT and MRI. Face reconstructions from PET images from PET/CT scanners were correctly matched at rates of 42% (FDG), 35% (tau), and 32% (amyloid), while CT images were matched at 78% and MRI at 97-98%. We propose that these recognition rates are high enough that research studies should consider using face de-identification ("de-facing") software on PET images, in addition to CT and structural MRI, before data sharing. We also updated our mri_reface de-identification software with extended functionality to replace face imagery in PET and CT images. Rates of face recognition on de-faced images were reduced to 0-4% for PET, 5% for CT, and 8% for MRI. We measured the effects of de-facing on regional amyloid PET measurements from two different measurement pipelines (PETSurfer/FreeSurfer 6.0 and an in-house method based on SPM12 and ANTs), and these effects were small: ICC values between de-faced and original images were >0.98, biases were <2%, and median relative errors were <2%. Effects on global amyloid PET SUVR measurements were even smaller: ICC values were 1.00, biases were <0.5%, and median relative errors were also <0.5%.
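The bias and median-relative-error summaries reported above reduce to simple per-region statistics on paired measurements. A hedged numpy sketch, with toy SUVR values standing in for the pipelines' outputs (the `defacing_effect` helper is illustrative, not the authors' code):

```python
import numpy as np

def defacing_effect(orig, defaced):
    """Summarize the effect of de-facing on regional measurements:
    mean signed percent bias and median absolute percent error."""
    orig = np.asarray(orig, dtype=float)
    defaced = np.asarray(defaced, dtype=float)
    rel = (defaced - orig) / orig * 100.0  # signed % change per region
    return {"bias_pct": float(np.mean(rel)),
            "median_rel_err_pct": float(np.median(np.abs(rel)))}

# Toy regional SUVR values before/after de-facing.
orig = np.array([1.20, 1.45, 0.98, 1.10])
defaced = np.array([1.21, 1.44, 0.98, 1.11])
print(defacing_effect(orig, defaced))
```

Values well under the ~2% thresholds cited in the abstract would indicate, as the authors found, that de-facing barely perturbs the quantitative measurements.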
Affiliation(s)
- Walter K Kremers
- Department of Health Sciences Research, Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, MN, USA
- Val J Lowe
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Marios Savvides
- CyLab Biometrics Center and Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Matthew L Senjem
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Department of Information Technology, Mayo Clinic, Rochester, MN, USA
- Kejal Kantarci
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
6
Aiello M, Esposito G, Pagliari G, Borrelli P, Brancato V, Salvatore M. How does DICOM support big data management? Investigating its use in medical imaging community. Insights Imaging 2021;12:164. [PMID: 34748101] [PMCID: PMC8574146] [DOI: 10.1186/s13244-021-01081-8]
Abstract
The diagnostic imaging field is experiencing considerable growth, accompanied by the production of massive amounts of data. The lack of standardization and privacy concerns are considered the main barriers to capitalizing on big data. This work aims to verify whether the advanced features of the DICOM standard, beyond imaging data storage, are effectively used in research practice. We investigated publicly shared medical imaging databases and assessed how fully the most common medical imaging software tools support DICOM's advanced features. To this end, 100 public databases and ten medical imaging software tools were selected and examined using a systematic approach. In particular, the DICOM fields related to privacy, segmentation and reporting were assessed in the selected databases; the software tools were evaluated for reading and writing the same DICOM fields. From our analysis, fewer than a third of the databases examined use the DICOM format to record meaningful information for managing the images. Regarding software, the vast majority of tools do not support reading and writing some or all of these DICOM fields. Surprisingly, of the 12 chest computed tomography datasets shared to address the COVID-19 emergency, only two were released in DICOM format. Our work shows that DICOM can potentially fully support big data management; however, further efforts are still needed from the scientific and technological community to promote use of the existing standard, encouraging data sharing and interoperability for the concrete development of big data analytics.
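The database survey described above reduces to checking, per dataset, whether a handful of DICOM attributes tied to privacy, segmentation and reporting are populated. A simplified sketch using plain dictionaries in place of parsed DICOM headers: attribute names such as `PatientIdentityRemoved` and `SegmentSequence` are standard DICOM, but the theme grouping and `survey` helper are our illustration, not the paper's code.

```python
# Attributes checked per theme (standard DICOM attribute names).
THEMES = {
    "privacy":      ["PatientIdentityRemoved", "DeidentificationMethod"],
    "segmentation": ["SegmentSequence"],
    "reporting":    ["ContentSequence"],  # SR content items
}

def survey(datasets):
    """For each theme, count datasets that populate at least one of
    its attributes (datasets given as attribute -> value dicts)."""
    counts = {theme: 0 for theme in THEMES}
    for ds in datasets:
        for theme, attrs in THEMES.items():
            if any(ds.get(a) not in (None, "", []) for a in attrs):
                counts[theme] += 1
    return counts

# Toy survey of three public datasets' headers.
datasets = [
    {"PatientIdentityRemoved": "YES", "DeidentificationMethod": "CTP"},
    {"SegmentSequence": ["liver", "spleen"]},
    {},  # shares pixel data only, no meaningful metadata
]
print(survey(datasets))
```

In practice the headers would be read with a DICOM toolkit such as pydicom rather than built by hand, and the counts aggregated over the 100 databases.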
Affiliation(s)
- Marco Aiello
- IRCCS SDN, Via Emanuele Gianturco 113, 80143, Naples, Italy.
7
Rethinking Patient Consent in the Era of Artificial Intelligence and Big Data. J Am Coll Radiol 2021;18:180-184. [PMID: 33413897] [DOI: 10.1016/j.jacr.2020.09.022]
8
Artificial intelligence and medical imaging 2018: French Radiology Community white paper. Diagn Interv Imaging 2018;99:727-742. [PMID: 30470627] [DOI: 10.1016/j.diii.2018.10.003]
9
Parks CL, Monson KL. Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications. J Digit Imaging 2017;30:204-214. [PMID: 28025730] [PMCID: PMC5359214] [DOI: 10.1007/s10278-016-9932-7]
Abstract
The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans can be objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61% of cases a correct life photo match for a CT-derived facial image was the top-ranked image in the generated candidate lists, even when blind searching in excess of 100,000 images. In 31-91% of cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. Results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may therefore need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.
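The rank-based evaluation reported above (top-ranked vs. within-top-50 candidate lists) can be sketched with numpy. The similarity matrix here is a toy stand-in for the commercial software's match scores, and `topk_rate` is our illustration, not the study's code:

```python
import numpy as np

def topk_rate(scores, true_idx, k):
    """Fraction of probes whose correct gallery identity appears
    among the k highest-scoring candidates."""
    scores = np.asarray(scores, dtype=float)
    # argsort on negated scores: candidate list from best to worst.
    ranked = np.argsort(-scores, axis=1)[:, :k]
    hits = [true_idx[i] in ranked[i] for i in range(scores.shape[0])]
    return float(np.mean(hits))

# Toy similarity of 3 CT-derived faces against a 4-photo gallery.
scores = np.array([
    [0.9, 0.1, 0.2, 0.3],  # correct identity 0 ranked first
    [0.4, 0.5, 0.6, 0.1],  # correct identity 1 ranked third
    [0.2, 0.3, 0.1, 0.8],  # correct identity 3 ranked first
])
true_idx = [0, 1, 3]
print(topk_rate(scores, true_idx, 1), topk_rate(scores, true_idx, 3))
```

The study's 6-61% (top-1) and 31-91% (top-50) figures correspond to this metric evaluated at k = 1 and k = 50 over galleries of up to 100,000+ photographs.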
Affiliation(s)
- Connie L Parks
- Counterterrorism and Forensic Science Research Unit, Visiting Scientist Program, FBI Laboratory Division, 2501 Investigation Parkway, Quantico, VA, 22135, USA
- Keith L Monson
- Counterterrorism and Forensic Science Research Unit, FBI Laboratory Division, 2501 Investigation Parkway, Quantico, VA, 22135, USA