1
Ueda Y, Ogawa D, Ishida T. Patient Re-Identification Based on Deep Metric Learning in Trunk Computed Tomography Images Acquired from Devices from Different Vendors. Journal of Imaging Informatics in Medicine 2024; 37:1124-1136. [PMID: 38366292; PMCID: PMC11169436; DOI: 10.1007/s10278-024-01017-w]
Abstract
During radiologic interpretation, radiologists read patient identifiers from the metadata of medical images to recognize the patient being examined. However, it is challenging for radiologists to notice incorrect metadata and patient identification errors. We propose a method that uses patient re-identification to link correct metadata to a set of trunk computed tomography images whose metadata have been lost or wrongly assigned. The method is based on feature-vector matching with a deep feature extractor that adapts to the cross-vendor domain shift in a scout computed tomography image dataset. To identify incorrect metadata, we calculated the highest similarity score between a follow-up image and a stored baseline image linked to the correct metadata. Re-identification performance tests whether the image with the highest similarity score belongs to the same patient, i.e., whether the metadata attached to the image are correct. Similarity scores between follow-up and baseline images of the same (correct) patient were generally greater than those for incorrect patients. The proposed feature extractor was sufficiently robust to extract individually distinguishable features without additional training, even for unseen scout computed tomography images. Furthermore, the proposed augmentation technique further improved re-identification performance on the cross-vendor subset by incorporating the changes in width magnification caused by differences in patient table height between examinations. We believe that metadata checking with the proposed method would help detect metadata assigned an incorrect patient identifier due to unavoidable problems such as human error.
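The flagging logic this abstract describes, scoring a follow-up image's feature vector against stored baseline vectors and treating the metadata as suspect when the claimed patient is not the top match, can be sketched as follows. This is an illustrative NumPy sketch under assumed names; the paper's actual deep feature extractor and score definition are not reproduced here.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_metadata(followup_vec, baseline_gallery, claimed_id):
    """Score the follow-up embedding against every stored baseline
    embedding. The metadata are treated as consistent when the
    claimed patient ID is the highest-scoring gallery entry."""
    scores = {pid: cosine_similarity(followup_vec, vec)
              for pid, vec in baseline_gallery.items()}
    best_id = max(scores, key=scores.get)
    return best_id, scores[best_id], best_id == claimed_id
```

In practice the gallery vectors would come from the trained extractor applied to baseline scout images; here they are placeholders.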
Affiliation(s)
- Yasuyuki Ueda
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan.
- Daiki Ogawa
- School of Allied Health Sciences, Faculty of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Takayuki Ishida
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
2
Ueda Y, Morishita J. Patient Identification Based on Deep Metric Learning for Preventing Human Errors in Follow-up X-Ray Examinations. J Digit Imaging 2023; 36:1941-1953. [PMID: 37308675; PMCID: PMC10501972; DOI: 10.1007/s10278-023-00850-9]
Abstract
Biological fingerprints extracted from clinical images can be used for patient identity verification to determine misfiled clinical images in picture archiving and communication systems. However, such methods have not been incorporated into clinical use, and their performance can degrade with variability in the clinical images. Deep learning can be used to improve the performance of these methods. A novel method is proposed to automatically identify individuals among examined patients using posteroanterior (PA) and anteroposterior (AP) chest X-ray images. The proposed method uses deep metric learning based on a deep convolutional neural network (DCNN) to overcome the extreme classification requirements for patient validation and identification. It was trained on the NIH chest X-ray dataset (ChestX-ray8) in three steps: preprocessing, DCNN feature extraction with an EfficientNetV2-S backbone, and classification with deep metric learning. The proposed method was evaluated using two public datasets and two clinical chest X-ray image datasets containing data from patients undergoing screening and hospital care. A 1280-dimensional feature extractor pretrained for 300 epochs performed the best with an area under the receiver operating characteristic curve of 0.9894, an equal error rate of 0.0269, and a top-1 accuracy of 0.839 on the PadChest dataset containing both PA and AP view positions. The findings of this study provide considerable insights into the development of automated patient identification to reduce the possibility of medical malpractice due to human errors.
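The equal error rate (EER) reported in this abstract is the operating point where the false acceptance rate and the false rejection rate coincide. A minimal sketch of computing it from genuine (same-patient) and impostor (different-patient) similarity scores follows; the threshold scan is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Scan candidate thresholds and return the average of the false
    acceptance rate (impostor scores at or above the threshold) and the
    false rejection rate (genuine scores below it) at the point where
    the two rates are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_far, best_frr, best_gap = 1.0, 1.0, np.inf
    for t in thresholds:
        far = float(np.mean(impostor >= t))  # impostors accepted
        frr = float(np.mean(genuine < t))    # genuine pairs rejected
        if abs(far - frr) < best_gap:
            best_far, best_frr, best_gap = far, frr, abs(far - frr)
    return (best_far + best_frr) / 2
```

A low EER, such as the 0.0269 reported here, indicates that the genuine and impostor score distributions barely overlap.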
Affiliation(s)
- Yasuyuki Ueda
- Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Graduate School of Medicine, Division of Health Sciences, Osaka University, Osaka, Japan.
- Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
3
Ueda Y, Morishita J, Kudomi S. Biological fingerprint for patient verification using trunk scout views at various scan ranges in computed tomography. Radiol Phys Technol 2022; 15:398-408. [PMID: 36155890; DOI: 10.1007/s12194-022-00682-2]
Abstract
Immediate verification that the patient being examined is the correct one is desirable, even when scan ranges differ between examinations of the same patient. This study proposes an advanced biological-fingerprint technique for the rapid and reliable verification of computed tomography (CT) scans of the torso of the same patient across various scan ranges. The method comprises the following steps: geometric correction between scans, local feature extraction, mismatch elimination, and similarity evaluation. In the first two steps, geometric magnification was corrected by aligning the scanner table height, and local maxima were extracted as local features. In the third step, local features from the follow-up scout image were matched to those in the corresponding baseline scout image via template matching, with outliers eliminated by a robust estimator. The ratio of inliers between the baseline and follow-up scout images was used as the similarity score. The clinical dataset comprised 600 patients (372 men; mean age 68 ± 12 years) who underwent two routine torso CT examinations covering chest, abdomen-pelvis, and chest-abdomen-pelvis scan ranges. The highest area under the receiver operating characteristic curve (AUC) was 0.996, which is sufficient for patient verification. Moreover, the verification results were comparable to those of the conventional method, which uses scout images of the same scan range. Patient identity verification was thus achieved before the main scan, even in follow-up torso CT with different scan ranges.
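The inlier-ratio similarity score described in this abstract can be sketched with a RANSAC-style robust estimator: hypothesize a translation from a randomly sampled point correspondence, count how many matched features agree with it, and report the best inlier fraction. This is a simplified illustration (translation-only model, assumed tolerance and iteration count), not the paper's implementation.

```python
import numpy as np

def inlier_ratio(baseline_pts, followup_pts, tol=2.0, n_iter=100, seed=0):
    """RANSAC-style sketch. `baseline_pts` and `followup_pts` are (N, 2)
    arrays of already-matched feature locations; each iteration picks one
    correspondence, hypothesizes its translation, and counts matches whose
    residual falls within `tol` pixels. Returns the best inlier fraction."""
    rng = np.random.default_rng(seed)
    n = len(baseline_pts)
    best = 0
    for _ in range(n_iter):
        i = rng.integers(n)
        shift = followup_pts[i] - baseline_pts[i]
        residual = np.linalg.norm(followup_pts - (baseline_pts + shift), axis=1)
        best = max(best, int(np.sum(residual < tol)))
    return best / n
```

A high ratio indicates that most local features agree on a single rigid shift, as expected for two scouts of the same patient.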
Affiliation(s)
- Yasuyuki Ueda
- Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan.
- Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, Fukuoka, 812-8582, Japan
- Shohei Kudomi
- Department of Radiological Technology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi, 755-8505, Japan
4
Ueda Y, Morishita J, Hongyo T. Biological fingerprint using scout computed tomographic images for positive patient identification. Med Phys 2019; 46:4600-4609. [PMID: 31442297; DOI: 10.1002/mp.13779]
Abstract
PURPOSE: Management of patient identification is an important issue that must be addressed to ensure patient safety in modern healthcare systems. Patient identification errors can be attributed mainly to human errors or system problems. An error-tolerant system, such as a biometric system, should be able to prevent or mitigate potential misidentification. Herein, we propose the use of scout computed tomography (CT) images for biometric patient identity verification and present the quantitative accuracy of this technique in a clinical setting.
METHODS: Scout CT images acquired during routine examinations of the chest, abdomen, and pelvis were used as biological fingerprints. We evaluated the resemblance of the follow-up image to the baseline image by comparing estimates of the image characteristics using local feature extraction and matching algorithms. Verification performance was evaluated using receiver operating characteristic (ROC) curves, the area under the ROC curve (AUC), and the equal error rate (EER). Closed-set identification performance was evaluated using cumulative match characteristic curves and rank-one identification rates (R1).
RESULTS: A total of 619 patients (383 males, 236 females; age range 21-92 years) who underwent baseline and follow-up chest-abdomen-pelvis CT scans on the same CT system were analyzed for verification and closed-set identification. The highest AUC, EER, and R1 were 0.998, 1.22%, and 99.7%, respectively, in the considered evaluation range. Furthermore, to determine whether performance decreased in the presence of metal artifacts, the patients were divided into two groups, scout images with (255 patients) and without (364 patients) metal artifacts, and the two ROC curves were compared using the unpaired DeLong test. No significant difference was found between the ROC performances in the presence and absence of metal artifacts when a sufficient number of local features was used. The performance was comparable to that of conventional biometric methods when chest, abdomen, and pelvis scout CT images were used. Thus, the method has the potential to uncover inadequate patient information from an available scout CT image; moreover, it can be applied widely to routine adult CT scans in which illness or aging causes no significant changes in body structure.
CONCLUSIONS: Our proposed method can provide accurate patient information at the point of care and help healthcare providers verify a patient's identity. We believe this method is a key solution to patient misidentification problems.
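The rank-one identification rate (R1) used in this abstract's closed-set evaluation has a simple definition: the fraction of probe images whose highest-scoring gallery entry is the true identity. A minimal sketch follows, with the score matrix and identifier lists as assumed inputs.

```python
import numpy as np

def rank_one_rate(scores, probe_ids, gallery_ids):
    """Closed-set identification sketch. `scores` is a (probes x gallery)
    similarity matrix; each probe is assigned the gallery identity with
    the highest score, and R1 is the fraction of correct assignments."""
    hits = [gallery_ids[int(np.argmax(row))] == pid
            for row, pid in zip(scores, probe_ids)]
    return float(np.mean(hits))
```

An R1 of 99.7%, as reported here, means almost every follow-up scout retrieved its own baseline as the top match.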
Affiliation(s)
- Yasuyuki Ueda
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Tadashi Hongyo
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
5
Ueda Y, Morishita J, Kudomi S, Ueda K. Usefulness of biological fingerprint in magnetic resonance imaging for patient verification. Med Biol Eng Comput 2015; 54:1341-51. [PMID: 26341617; DOI: 10.1007/s11517-015-1380-x]
Abstract
The purpose of our study was to investigate the feasibility of automated patient verification using multi-planar reconstruction (MPR) images generated from three-dimensional magnetic resonance (MR) imaging of the brain. Several anatomy-related MPR images generated from the three-dimensional fast scout scan of each MR examination were used as biological fingerprint images. The database consisted of 730 temporal pairs of brain MR examinations. We calculated the correlation value between current and prior biological fingerprint images of the same patient, and also for all combinations of images from different patients, to evaluate the effectiveness of our method for patient verification. The best performance of our system was as follows: a half-total error rate of 1.59% with a false acceptance rate of 0.023% and a false rejection rate of 3.15%, an equal error rate of 1.37%, and a rank-one identification rate of 98.6%. Our method makes it possible to verify a patient's identity using only existing medical images, without additional equipment. It can also contribute to managing patient misidentification errors caused by human error.
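The correlation value between two fingerprint images that this abstract relies on is commonly computed as a zero-mean normalized cross-correlation; the sketch below shows that standard formulation, which may differ from the paper's exact definition.

```python
import numpy as np

def ncc(img1, img2):
    """Zero-mean normalized cross-correlation between two same-size
    images: 1.0 for identical images, -1.0 for inverted images, and
    values near 0 for unrelated content."""
    a = img1 - img1.mean()
    b = img2 - img2.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

Same-patient image pairs should cluster near 1.0, and a threshold on this score separates genuine from impostor pairs.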
Affiliation(s)
- Yasuyuki Ueda
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Japan.
- Department of Radiological Technology, Yamaguchi University Hospital, 1-1-1, Minamikogushi, Ube, Yamaguchi, Japan.
- Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Japan
- Shohei Kudomi
- Department of Radiological Technology, Yamaguchi University Hospital, 1-1-1, Minamikogushi, Ube, Yamaguchi, Japan
- Katsuhiko Ueda
- Department of Radiological Technology, Yamaguchi University Hospital, 1-1-1, Minamikogushi, Ube, Yamaguchi, Japan
6
Derrick SM, Raxter MH, Hipp JA, Goel P, Chan EF, Love JC, Wiersema JM, Akella NS. Development of a computer-assisted forensic radiographic identification method using the lateral cervical and lumbar spine. J Forensic Sci 2014; 60:5-12. [PMID: 24961154; DOI: 10.1111/1556-4029.12531]
Abstract
Medical examiners and coroners (ME/C) in the United States hold statutory responsibility to identify deceased individuals who fall under their jurisdiction. The computer-assisted decedent identification (CADI) project was designed to modify software used in diagnosis and treatment of spinal injuries into a mathematically validated tool for ME/C identification of fleshed decedents. CADI software analyzes the shapes of targeted vertebral bodies imaged in an array of standard radiographs and quantifies the likelihood that any two of the radiographs contain matching vertebral bodies. Six validation tests measured the repeatability, reliability, and sensitivity of the method, and the effects of age, sex, and number of radiographs in array composition. CADI returned a 92-100% success rate in identifying the true matching pair of vertebrae within arrays of five to 30 radiographs. Further development of CADI is expected to produce a novel identification method for use in ME/C offices that is reliable, timely, and cost-effective.
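The vertebral shape comparison at the heart of CADI can be illustrated with a toy similarity measure: normalize two corresponding landmark sets for translation and scale, then take the residual distance. This is a generic shape-comparison sketch for intuition only; CADI's actual likelihood model is not described in the abstract and is not reproduced here.

```python
import numpy as np

def shape_distance(contour_a, contour_b):
    """Center each vertebral-outline landmark set, scale it to unit
    Frobenius norm, and return the residual distance between the two.
    Lower values suggest the radiographs show the same vertebra.
    Assumes the landmarks are already in corresponding order."""
    def normalize(c):
        c = c - c.mean(axis=0)
        return c / np.linalg.norm(c)
    return float(np.linalg.norm(normalize(contour_a) - normalize(contour_b)))
```

Ranking all radiograph pairs in an array by such a distance and taking the minimum mirrors the matching task the validation tests measured.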
Affiliation(s)
- Sharon M Derrick
- Harris County Institute of Forensic Sciences, 1885 Old Spanish Trail, Houston, TX, 77054
7
Shamir L, Yerby C, Simpson R, von Benda-Beckmann AM, Tyack P, Samarra F, Miller P, Wallin J. Classification of large acoustic datasets using machine learning and crowdsourcing: application to whale calls. The Journal of the Acoustical Society of America 2014; 135:953-962. [PMID: 25234903; DOI: 10.1121/1.4861348]
Abstract
Vocal communication is a primary communication method of killer and pilot whales and is used to transmit a broad range of messages and information over short and long distances. The large variation in the call types of these species makes them challenging to categorize. In this study, sounds recorded by audio sensors carried by ten killer whales and eight pilot whales close to the coasts of Norway, Iceland, and the Bahamas were analyzed using computer methods and by citizen scientists as part of the Whale FM project. The results show that the computer analysis automatically separated the killer whales into Icelandic and Norwegian whales, and the pilot whales into Norwegian long-finned and Bahamas short-finned pilot whales, demonstrating that at least some whales from these locations have different acoustic repertoires that the computer analysis can detect. The citizen-science analysis was also able to separate the whales by location based on their sounds, although the separation was somewhat less accurate than that of the computer method.
MESH Headings
- Acoustics
- Animals
- Artificial Intelligence
- Crowdsourcing
- Data Mining/methods
- Databases, Factual/classification
- Ecosystem
- Motion
- Pattern Recognition, Automated
- Signal Processing, Computer-Assisted
- Sound
- Sound Spectrography
- Species Specificity
- Time Factors
- Vocalization, Animal
- Whale, Killer/classification
- Whale, Killer/physiology
- Whale, Killer/psychology
- Whales, Pilot/classification
- Whales, Pilot/physiology
- Whales, Pilot/psychology
Affiliation(s)
- Lior Shamir
- Lawrence Technological University, 21000 Ten Mile Road, Southfield, Michigan 48075
- Carol Yerby
- Lawrence Technological University, 21000 Ten Mile Road, Southfield, Michigan 48075
- Robert Simpson
- University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH, United Kingdom
- Alexander M von Benda-Beckmann
- The Netherlands Organization for Applied Scientific Research, P.O. Box 96864, The Hague, Zuid Holland, 2509 JG, The Netherlands
- Peter Tyack
- University of St. Andrews, St. Andrews, Fife, KY16 9ST, Scotland, United Kingdom
- Filipa Samarra
- University of St. Andrews, St. Andrews, Fife, KY16 9ST, Scotland, United Kingdom
- Patrick Miller
- University of St. Andrews, St. Andrews, Fife, KY16 9ST, Scotland, United Kingdom
- John Wallin
- Middle Tennessee State University, 1301 East Main Street, Murfreesboro, Tennessee 37130
8
Orlov NV, Eckley DM, Shamir L, Goldberg IG. Improving class separability using extended pixel planes: a comparative study. Machine Vision and Applications 2012; 23:1047-1058. [PMID: 23074356; PMCID: PMC3470430; DOI: 10.1007/s00138-011-0349-5]
Abstract
In this work we explored class separability in feature spaces built on extended representations of pixel planes (EPP) produced using scale pyramid, subband pyramid, and image transforms. The image transforms included Chebyshev, Fourier, wavelets, gradient and Laplacian; we also utilized transform combinations, including Fourier, Chebyshev and wavelets of the gradient transform, as well as Fourier of the Laplacian transform. We demonstrate that all three types of EPP promote class separation. We also explored the effect of EPP on suboptimal feature libraries, using only textural features in one case and only Haralick features in another. The effect of EPP was especially clear for these suboptimal libraries, where the transform-based representations were found to increase separability to a greater extent than scale or subband pyramids. EPP can be particularly useful in new applications where optimal features have not yet been developed.
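The extended-pixel-plane idea in this abstract, computing the same feature bank on the raw pixels and on several transform planes and fusing the results, can be sketched minimally as follows. The feature bank here is a toy set of summary statistics, and only two of the paper's transforms (Fourier and gradient) are shown; names and choices are illustrative assumptions.

```python
import numpy as np

def global_features(plane):
    """A toy global feature bank: a few summary statistics of one plane."""
    p = np.abs(plane).astype(float)
    return np.array([p.mean(), p.std(), p.max(), np.median(p)])

def epp_feature_vector(img):
    """Compute the same feature bank on the raw pixels and on two
    transform planes (Fourier magnitude, gradient magnitude), then
    fuse all features into a single vector."""
    planes = [
        img.astype(float),
        np.abs(np.fft.fft2(img)),
        np.hypot(*np.gradient(img.astype(float))),
    ]
    return np.concatenate([global_features(p) for p in planes])
```

The point of the scheme is that a feature weak on raw pixels may become discriminative on a transform plane, which is why the transform-based representations helped the suboptimal feature libraries most.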
Affiliation(s)
- Nikita V Orlov
- National Institute on Aging / National Institutes of Health, 251 Bayview Blvd, Bayview Research Center Bldg, Suite 100, Baltimore, MD 21224, USA
9
Shamir L. A computer analysis method for correlating knee X-rays with continuous indicators. Int J Comput Assist Radiol Surg 2011; 6:699-704. [PMID: 21373920; DOI: 10.1007/s11548-011-0550-z]
Abstract
PURPOSE: To develop an image analysis method that can automatically find correlations between a set of plain radiographs and continuous clinical or physiological indicators.
METHODS: Knee X-rays from the Baltimore Longitudinal Study of Aging were used. The computer analysis method is based on the WND-CHARM image feature set, filtered by the Pearson correlation of each feature with the continuous variable; the estimated value is determined by weighted nearest-neighbor interpolation.
RESULTS: Experimental results using 300 radiographs show that the proposed method can correlate knee X-rays with physiological indicators such as sex, age, height, weight, and BMI. For instance, the Pearson correlations between the X-ray images and height and weight were 0.59 and 0.62, respectively.
CONCLUSIONS: Using computer analysis, X-ray images can be correlated with continuous physiological variables that might not have a direct and straightforward link to the visual content of the radiograph. This approach to radiology image analysis can be used in population studies to detect biomarkers and in genome-wide association studies to examine the link between genes and anatomy.
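The two-step pipeline this abstract describes, Pearson-based feature filtering followed by weighted nearest-neighbor estimation, can be sketched generically as below. The WND-CHARM features themselves are not computed here; the feature matrix, weighting scheme, and parameter names are illustrative assumptions.

```python
import numpy as np

def pearson_filter(X, y, k):
    """Rank columns of the feature matrix X by absolute Pearson
    correlation with the continuous target y and return the indices
    of the top k features."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(-np.abs(r))[:k]

def weighted_nn_estimate(X_train, y_train, x, k=3, eps=1e-9):
    """Estimate the target for sample x as the inverse-distance-weighted
    mean of its k nearest training samples in feature space."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    return float(np.sum(w * y_train[idx]) / np.sum(w))
```

Filtering before interpolation keeps only the features that actually track the continuous indicator, which is what makes the correlations with height and weight recoverable from the radiographs.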
Affiliation(s)
- Lior Shamir
- Lawrence Technological University, 21000 W Ten Mile Rd., Southfield, MI 48075, USA.
10
Orlov NV, Chen WW, Eckley DM, Macura TJ, Shamir L, Jaffe ES, Goldberg IG. Automatic classification of lymphoma images with transform-based global features. IEEE Trans Inf Technol Biomed 2010; 14:1003-13. [PMID: 20659835; DOI: 10.1109/titb.2010.2050695]
Abstract
We report on the automatic classification of three common types of malignant lymphoma: chronic lymphocytic leukemia, follicular lymphoma, and mantle cell lymphoma. The goal was to find patterns indicative of these malignancies that allow classifying them by type. We used a computer vision approach for the quantitative characterization of image content, employing a unique two-stage approach. At the outer level, raw pixels were transformed into spectral planes with a set of transforms: simple transforms (Fourier, Chebyshev, and wavelets) and compound transforms (Chebyshev of Fourier and wavelets of Fourier). Raw pixels and spectral planes were then routed to the second stage (the inner level), where a set of multipurpose global features was computed on each spectral plane by the same feature bank. All computed features were fused into a single feature vector. The specimens were stained with hematoxylin (H) and eosin (E). Several color spaces were used, RGB, gray, CIE-L*a*b*, and the stain-specific H&E space, and image classification experiments were carried out for each. The best signal (98%-99% on previously unseen images) was found for the HE, H, and E channels of the H&E data set.
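The H&E color space mentioned in this abstract is commonly obtained by converting RGB pixels to optical density and projecting onto stain vectors; the sketch below uses the widely published Ruifrok-Johnston hematoxylin and eosin vectors, which are standard reference values rather than this paper's calibration.

```python
import numpy as np

def rgb_to_he(rgb, eps=1e-6):
    """Convert an (H, W, 3) uint8 RGB image to per-pixel hematoxylin and
    eosin intensities: take the optical density (Beer-Lambert) of each
    channel, then project onto unit-normalized nominal stain vectors.
    Returns an (H*W, 2) array of H and E channel values."""
    od = -np.log((rgb.astype(float) + eps) / 255.0)
    stains = np.array([[0.65, 0.70, 0.29],   # hematoxylin (nominal)
                       [0.07, 0.99, 0.11]])  # eosin (nominal)
    stains = stains / np.linalg.norm(stains, axis=1, keepdims=True)
    return od.reshape(-1, 3) @ stains.T
```

Working in stain space rather than RGB separates nuclear (H) from cytoplasmic (E) content, which is consistent with the best classification signal being found in the H&E channels.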
Affiliation(s)
- Nikita V Orlov
- National Institute on Aging, NIH, Baltimore, MD 21224, USA.