1. Kim Y, Yoon Y, Matsunobu Y, Usumoto Y, Eto N, Morishita J. Gray-Scale Extraction of Bone Features from Chest Radiographs Based on Deep Learning Technique for Personal Identification and Classification in Forensic Medicine. Diagnostics (Basel). 2024;14:1778. PMID: 39202266; PMCID: PMC11353895; DOI: 10.3390/diagnostics14161778.
Abstract
Post-mortem (PM) imaging has potential for identifying individuals by comparing ante-mortem (AM) and PM images. Radiographic images of bones contain significant information for personal identification. However, PM images are affected by soft tissue decomposition; therefore, it is desirable to extract only images of bones that change little over time. This study evaluated the effectiveness of U-Net for bone image extraction from two-dimensional (2D) X-ray images. Two types of pseudo 2D X-ray images were created from the PM computed tomography (CT) volumetric data using ray-summation processing for training U-Net. One was a projection of all body tissues, and the other was a projection of only bones. The performance of the U-Net for bone extraction was evaluated using Intersection over Union, Dice coefficient, and the area under the receiver operating characteristic curve. Additionally, AM chest radiographs were used to evaluate its performance with real 2D images. Our results indicated that bones could be extracted visually and accurately from both AM and PM images using U-Net. The extracted bone images could provide useful information for personal identification in forensic pathology.
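The overlap metrics named in this abstract are simple to state concretely. The following is an illustrative NumPy sketch (not the authors' code; the toy masks and function name are invented for the example) of Intersection over Union and the Dice coefficient for binary segmentation masks:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Compute Intersection over Union and the Dice coefficient
    for two binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return float(iou), float(dice)

# Toy 4x4 masks: the prediction recovers 2 of the 3 foreground pixels.
pred = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
iou, dice = iou_and_dice(pred, truth)
print(round(iou, 3), round(dice, 3))  # intersection=2, union=3 -> 0.667 0.8
```

The two measures are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why they typically rise and fall together in segmentation evaluations.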
Affiliation(s)
- Yeji Kim: Department of Multidisciplinary Radiological Sciences, Graduate School of Dongseo University, 47 Jurye-ro, Sasang-gu, Busan 47011, Republic of Korea
- Yongsu Yoon: Department of Multidisciplinary Radiological Sciences, Graduate School of Dongseo University, 47 Jurye-ro, Sasang-gu, Busan 47011, Republic of Korea
- Yusuke Matsunobu: Department of Radiological Sciences, Fukuoka International University of Health and Welfare, 3-6-40 Momochihama, Sawara-ku, Fukuoka 814-0001, Japan
- Yosuke Usumoto: Department of Forensic Pathology and Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Nozomi Eto: Department of Forensic Pathology and Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Junji Morishita: Department of Radiological Sciences, Fukuoka International University of Health and Welfare, 3-6-40 Momochihama, Sawara-ku, Fukuoka 814-0001, Japan
2. Ueda Y, Ogawa D, Ishida T. Patient Re-Identification Based on Deep Metric Learning in Trunk Computed Tomography Images Acquired from Devices from Different Vendors. Journal of Imaging Informatics in Medicine. 2024;37:1124-1136. PMID: 38366292; PMCID: PMC11169436; DOI: 10.1007/s10278-024-01017-w.
Abstract
During radiologic interpretation, radiologists read patient identifiers from the metadata of medical images to recognize the patient being examined. However, it is challenging for radiologists to identify "incorrect" metadata and patient identification errors. We propose a method that uses a patient re-identification technique to link correct metadata to an image set of computed tomography images of a trunk with lost or wrongly assigned metadata. This method is based on a feature vector matching technique that uses a deep feature extractor to adapt to the cross-vendor domain contained in the scout computed tomography image dataset. To identify "incorrect" metadata, we calculated the highest similarity score between a follow-up image and a stored baseline image linked to the correct metadata. The re-identification performance tests whether the image with the highest similarity score belongs to the same patient, i.e., whether the metadata attached to the image are correct. The similarity scores between the follow-up and baseline images for the same "correct" patients were generally greater than those for "incorrect" patients. The proposed feature extractor was sufficiently robust to extract individual distinguishable features without additional training, even for unknown scout computed tomography images. Furthermore, the proposed augmentation technique further improved the re-identification performance of the subset for different vendors by incorporating changes in width magnification due to changes in patient table height during each examination. We believe that metadata checking using the proposed method would help detect the metadata with an "incorrect" patient identifier assigned due to unavoidable errors such as human error.
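The matching step described here, linking a follow-up image to the stored baseline whose feature vector scores highest, can be sketched as a nearest-neighbor search under cosine similarity. This is a hedged illustration with invented feature vectors and patient IDs, not the paper's trained extractor:

```python
import numpy as np

def reidentify(query: np.ndarray, baseline: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the baseline patient ID whose stored feature vector has the
    highest cosine similarity to the query vector, plus that score."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {pid: cos(query, vec) for pid, vec in baseline.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Stored baseline features, one vector per patient (toy 3-D features).
baseline = {
    "patient_A": np.array([1.0, 0.0, 0.2]),
    "patient_B": np.array([0.1, 1.0, 0.9]),
}
query = np.array([0.9, 0.1, 0.3])   # follow-up image features, closest to A
pid, score = reidentify(query, baseline)
print(pid)  # patient_A
```

In the paper's setting, a low best score would flag the metadata as possibly "incorrect" rather than force a match.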
Affiliation(s)
- Yasuyuki Ueda: Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan
- Daiki Ogawa: School of Allied Health Sciences, Faculty of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan
- Takayuki Ishida: Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan
3. Ueda Y, Morishita J. Patient Identification Based on Deep Metric Learning for Preventing Human Errors in Follow-up X-Ray Examinations. J Digit Imaging. 2023;36:1941-1953. PMID: 37308675; PMCID: PMC10501972; DOI: 10.1007/s10278-023-00850-9.
Abstract
Biological fingerprints extracted from clinical images can be used for patient identity verification to determine misfiled clinical images in picture archiving and communication systems. However, such methods have not been incorporated into clinical use, and their performance can degrade with variability in the clinical images. Deep learning can be used to improve the performance of these methods. A novel method is proposed to automatically identify individuals among examined patients using posteroanterior (PA) and anteroposterior (AP) chest X-ray images. The proposed method uses deep metric learning based on a deep convolutional neural network (DCNN) to overcome the extreme classification requirements for patient validation and identification. It was trained on the NIH chest X-ray dataset (ChestX-ray8) in three steps: preprocessing, DCNN feature extraction with an EfficientNetV2-S backbone, and classification with deep metric learning. The proposed method was evaluated using two public datasets and two clinical chest X-ray image datasets containing data from patients undergoing screening and hospital care. A 1280-dimensional feature extractor pretrained for 300 epochs performed the best with an area under the receiver operating characteristic curve of 0.9894, an equal error rate of 0.0269, and a top-1 accuracy of 0.839 on the PadChest dataset containing both PA and AP view positions. The findings of this study provide considerable insights into the development of automated patient identification to reduce the possibility of medical malpractice due to human errors.
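The equal error rate reported here is the operating point where the false accept rate and false reject rate coincide. A minimal sketch, assuming genuine (same-patient) and impostor (different-patient) similarity scores are already available; the score values are invented:

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Sweep a decision threshold over all observed scores and return the
    point where the false accept rate ~= false reject rate."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine pairs wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return float(eer)

genuine = np.array([0.9, 0.8, 0.85, 0.95, 0.6])   # same-patient scores
impostor = np.array([0.2, 0.3, 0.1, 0.65, 0.25])  # different-patient scores
eer = equal_error_rate(genuine, impostor)
print(eer)  # 0.2: one genuine pair and one impostor pair overlap at t=0.65
```

A lower EER means the genuine and impostor score distributions are better separated, which is what the 1280-dimensional feature extractor in the abstract achieves at 0.0269.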
Affiliation(s)
- Yasuyuki Ueda: Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Graduate School of Medicine, Division of Health Sciences, Osaka University, Osaka, Japan
- Junji Morishita: Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
4. Ueda Y, Morishita J, Kudomi S. Biological fingerprint for patient verification using trunk scout views at various scan ranges in computed tomography. Radiol Phys Technol. 2022;15:398-408. PMID: 36155890; DOI: 10.1007/s12194-022-00682-2.
Abstract
Immediate verification that the patient being examined is the correct one is desirable, even when the scan range changes between examinations of the same patient. This study proposes an advanced biological fingerprint technique for rapid and reliable patient verification across various scan ranges in computed tomography (CT) of the torso. The method comprises four steps: geometric correction between scans, local feature extraction, mismatch elimination, and similarity evaluation. In the first two steps, geometric magnification is corrected by aligning the scanner table height, and local maxima are calculated as the local features. In the third step, local features from the follow-up scout image are matched to those in the corresponding baseline scout image via template matching, with outliers eliminated by a robust estimator. The ratio of inliers between the baseline and follow-up scout images was then assessed as the similarity score. The clinical dataset, comprising chest, abdomen-pelvis, and chest-abdomen-pelvis scans, included 600 patients (372 men; 68 ± 12 years) who underwent two routine torso CT examinations. The highest area under the receiver operating characteristic curve (AUC) was 0.996, which was sufficient for patient verification. Moreover, the verification results were comparable to those of the conventional method, which uses scout images with the same scan range. Patient identity verification was thus achieved before the main scan, even in follow-up torso CT under different scan ranges.
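The inlier-ratio similarity described in the abstract can be illustrated with a deliberately crude robust estimator: try the translation implied by each candidate match and keep the one that explains the most matches. The point coordinates, tolerance, and function name below are invented for the example:

```python
import numpy as np

def inlier_ratio(base_pts: np.ndarray, follow_pts: np.ndarray, tol: float = 2.0) -> float:
    """Crude robust-estimation step: test the translation implied by each
    candidate match, keep the one consistent with the most matches, and
    return the fraction of matches (inliers) it explains."""
    best = 0
    for shift in follow_pts - base_pts:                # candidate translations
        residuals = np.linalg.norm((base_pts + shift) - follow_pts, axis=1)
        best = max(best, int((residuals <= tol).sum()))
    return best / len(base_pts)

base = np.array([[10, 10], [40, 15], [25, 60], [70, 80]], dtype=float)
follow = base + np.array([3.0, -2.0])                  # true global shift
follow[3] = [0.0, 0.0]                                 # one mismatched feature
ratio = inlier_ratio(base, follow)
print(ratio)  # 3 of 4 matches agree with the true shift -> 0.75
```

A high ratio indicates the two scout images share a consistent geometry and therefore likely the same patient; a production method would use a full RANSAC-style estimator rather than this exhaustive toy.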
Affiliation(s)
- Yasuyuki Ueda: Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan
- Junji Morishita: Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Shohei Kudomi: Department of Radiological Technology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505, Japan
5. Sato M, Kondo Y, Okamoto M. Development of a computer-aided quality assurance support system for identifying hand X-ray image direction using deep convolutional neural network. Radiol Phys Technol. 2022;15:358-366. PMID: 36001273; DOI: 10.1007/s12194-022-00675-1.
Abstract
The convenience of imaging has improved with digitization; however, methods for preventing human error have not advanced correspondingly, so radiographic incidents and accidents are not being prevented. In Japan, image interpretation is conducted for incident prevention; nevertheless, some incidents are still overlooked, so assistance from a computer-aided quality assurance support system is important. This study developed a method to identify hand image direction, an elementary technology for such a system. In total, 14,236 hand X-ray images were used to classify the hand directions (upward, downward, rightward, and leftward) commonly evaluated in clinical settings. We evaluated the classification accuracy of three approaches: the conventional method using the original images, a method using histogram-equalized images, and a novel method using binarized images with the background removed via U-Net segmentation. The classification accuracy was 89.20% with the original images, 99.10% with the histogram-equalized images, and 99.70% with the binarized, background-removed images. Our computer-aided quality assurance support system can thus identify hand direction with high accuracy.
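As a point of reference for the preprocessing compared here, histogram equalization can be written in a few lines of NumPy. This is a generic textbook implementation, not the study's code; the low-contrast toy patch is invented:

```python
import numpy as np

def equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization: map each gray level through the normalized
    cumulative histogram so the intensities spread over the full range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()                  # first occupied gray level
    scale = (cdf - cdf_min) / (cdf[-1] - cdf_min)
    return np.round(scale[img] * (levels - 1)).astype(np.uint8)

img = np.array([[50, 50, 51],
                [51, 52, 52]], dtype=np.uint8)    # low-contrast patch
out = equalize(img)
print(out.min(), out.max())                       # stretched to 0 255
```

Equalization normalizes exposure differences between images, which plausibly explains the jump from 89.20% to 99.10% accuracy reported above; removing the background entirely goes one step further.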
Affiliation(s)
- Mitsuru Sato: Department of Radiological Technology, School of Health Sciences, Niigata University, 2-746 Asahimachi-dori, Chuo-ku, Niigata 951-8518, Japan
- Yohan Kondo: Department of Radiological Technology, School of Health Sciences, Niigata University, 2-746 Asahimachi-dori, Chuo-ku, Niigata 951-8518, Japan
- Masashi Okamoto: Department of Radiological Technology, School of Health Sciences, Niigata University, 2-746 Asahimachi-dori, Chuo-ku, Niigata 951-8518, Japan
6. Mitsutake H, Watanabe H, Sakaguchi A, Uchiyama K, Lee Y, Hayashi N, Shimosegawa M, Ogura T. [Evaluation of Radiograph Accuracy in Skull X-ray Images Using Deep Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi. 2022;78:23-32. PMID: 35046219; DOI: 10.6009/jjrt.780104.
Abstract
PURPOSE Accurate positioning is essential in radiography, and it is especially important for maintaining image reproducibility in follow-up examinations. The decision to retake a radiograph is entrusted to the individual radiological technologist; the evaluation is visual and qualitative, and acceptance criteria vary between individuals. In this study, we propose an image evaluation method for skull X-ray images using a deep convolutional neural network (DCNN). METHOD Radiographs obtained from 5 skull phantoms were classified by a simple network and by VGG16. The discrimination ability of the DCNN was verified by recognizing the X-ray projection angle and whether a radiograph required retaking. The DCNN architectures were tested with different input image sizes and evaluated by 5-fold cross-validation and leave-one-out cross-validation. RESULT With 5-fold cross-validation, classification accuracy at the small input image size was 99.75% for the simple network and 80.00% for VGG16; at a general input image size, the simple network and VGG16 achieved 79.58% and 80.00%, respectively. CONCLUSION The experimental results showed that the combination of a small input image size and a shallow DCNN architecture was suitable for the four-category classification of X-ray projection angles, with classification accuracy up to 99.75%. The proposed method has the potential to automatically recognize slight projection-angle deviations and images that should be retaken according to the acceptance criteria. We consider that it can contribute to feedback on retakes and to reducing the radiation dose that arises from individual subjectivity.
Affiliation(s)
- Haruyuki Watanabe: School of Radiological Technology, Gunma Prefectural College of Health Sciences
- Aya Sakaguchi: School of Radiological Technology, Gunma Prefectural College of Health Sciences (current address: Department of Radiological Technology, Seikei-kai Chiba Medical Center)
- Kiyoshi Uchiyama: Department of Radiological Technology, Teikyo University Hospital
- Yongbum Lee: School of Health Sciences, Faculty of Medicine, Niigata University
- Norio Hayashi: School of Radiological Technology, Gunma Prefectural College of Health Sciences
- Toshihiro Ogura: School of Radiological Technology, Gunma Prefectural College of Health Sciences
7. Morishita J, Ueda Y. New solutions for automated image recognition and identification: challenges to radiologic technology and forensic pathology. Radiol Phys Technol. 2021;14:123-133. PMID: 33710498; DOI: 10.1007/s12194-021-00611-9.
Abstract
This paper outlines the history of biometrics for personal identification, the current status of the initial biological fingerprint techniques for digital chest radiography, and patient verification during medical imaging, such as computed tomography and magnetic resonance imaging. Automated image recognition and identification developed for clinical images without metadata could also be applied to the identification of victims in mass disasters or other unidentified individuals. The development of methods that are adaptive to a wide range of recent imaging modalities in the fields of radiologic technology, patient safety, forensic pathology, and forensic odontology is still in its early stages. However, its importance in practice will continue to increase in the future.
Affiliation(s)
- Junji Morishita: Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Yasuyuki Ueda: Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan
8. Ueda Y, Morishita J, Hongyo T. Biological fingerprint using scout computed tomographic images for positive patient identification. Med Phys. 2019;46:4600-4609. PMID: 31442297; DOI: 10.1002/mp.13779.
Abstract
PURPOSE Management of patient identification is an important issue that should be addressed to ensure patient safety in modern healthcare systems. Patient identification errors can be attributed mainly to human error or system problems. An error-tolerant system, such as a biometric system, should be able to prevent or mitigate potential misidentification. Herein, we propose the use of scout computed tomography (CT) images for biometric patient identity verification and present the quantitative accuracy of this technique in a clinical setting. METHODS Scout CT images acquired during routine examinations of the chest, abdomen, and pelvis were used as biological fingerprints. We evaluated the resemblance of the follow-up image to the baseline image by comparing estimates of the image characteristics using local feature extraction and matching algorithms. Verification performance was evaluated using receiver operating characteristic (ROC) curves, the area under the ROC curve (AUC), and the equal error rate (EER). Closed-set identification performance was evaluated using cumulative match characteristic curves and rank-one identification rates (R1). RESULTS A total of 619 patients (383 males, 236 females; age range 21-92 years) who underwent baseline and follow-up chest-abdomen-pelvis CT scans on the same CT system were analyzed for verification and closed-set identification. The highest AUC, EER, and R1 were 0.998, 1.22%, and 99.7%, respectively, in the considered evaluation range. Furthermore, to determine whether performance decreased in the presence of metal artifacts, the patients were divided into two groups, scout images with (255 patients) and without (364 patients) metal artifacts, and the two ROC curves were compared using the unpaired DeLong test. No significant differences were found between the ROC performances in the presence and absence of metal artifacts when a sufficient number of local features was used. The performance was comparable to that of conventional biometric methods when using chest, abdomen, and pelvis scout CT images. Thus, this method has the potential to uncover inadequate patient information from an available chest, abdomen, and pelvis scout CT image; moreover, it can be applied widely to routine adult CT scans in which illness or aging has not significantly altered body structure. CONCLUSIONS Our proposed method can provide accurate patient information at the point of care and help healthcare providers verify that a patient's identity is matched accurately. We believe this method to be a key solution to patient misidentification problems.
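The AUC used throughout these verification studies has a convenient score-based form: it equals the probability that a randomly chosen genuine-pair score exceeds a randomly chosen impostor-pair score (the Mann-Whitney statistic). A sketch with invented scores:

```python
import numpy as np

def auc_from_scores(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """AUC as the Mann-Whitney statistic: the fraction of
    (genuine, impostor) score pairs ranked correctly, ties counted half."""
    g = genuine[:, None]                 # shape (n_genuine, 1)
    i = impostor[None, :]                # shape (1, n_impostor)
    wins = (g > i).sum() + 0.5 * (g == i).sum()
    return float(wins / (genuine.size * impostor.size))

genuine = np.array([0.9, 0.8, 0.7, 0.6])   # same-patient similarity scores
impostor = np.array([0.5, 0.65, 0.4])      # different-patient scores
auc = auc_from_scores(genuine, impostor)
print(auc)  # 11 of 12 pairs ranked correctly -> 0.9166...
```

An AUC of 0.998, as reported above, means nearly every genuine pair outranks every impostor pair.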
Affiliation(s)
- Yasuyuki Ueda: Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan
- Junji Morishita: Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Tadashi Hongyo: Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan
9. Sakai Y, Takahashi K, Shimizu Y, Ishibashi E, Kato T, Morishita J. Clinical application of biological fingerprints extracted from averaged chest radiographs and template-matching technique for preventing left-right flipping mistakes in chest radiography. Radiol Phys Technol. 2019;12:216-223. PMID: 30784015; DOI: 10.1007/s12194-019-00504-y.
Abstract
We aimed to evaluate the identification performance achieved using biological fingerprints extracted from averaged chest radiographs and template-matching techniques for the prevention of left-right flipping mistakes. We produced an averaged chest radiograph for each sex by averaging 100 posteroanterior chest radiographs. Further, 400 and 566 chest radiographs were used in consistency and validation tests, respectively, and each was flipped horizontally to simulate a left-right flipping mistake. The correlation values obtained with the original chest radiographs and with the flipped chest radiographs were calculated. Using correlation indices calculated from the correlation values of four biological fingerprints (excluding the lung apex), 96.5% (386/400) and 95.8% (542/566) of the left or right sides were identified correctly in the consistency and validation tests, respectively. These results indicate that our proposed method is promising for the prevention of left-right flipping mistakes.
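The flip check described here, comparing an image against a sex-matched averaged template and against its mirror, can be sketched with zero-mean normalized cross-correlation. The tiny arrays stand in for real radiographs and are invented for the example:

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two same-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_flipped(image: np.ndarray, template: np.ndarray) -> bool:
    """Compare the image with an averaged template and its horizontal
    mirror; report a flip when the mirrored template fits better."""
    return ncc(image, np.fliplr(template)) > ncc(image, template)

template = np.array([[5.0, 1.0],
                     [4.0, 0.0]])       # stand-in for an averaged radiograph
ok_img = template + 0.1                 # correctly oriented image
flipped_img = np.fliplr(template)       # simulated left-right mistake
print(is_flipped(ok_img, template), is_flipped(flipped_img, template))
```

The study's correlation index aggregates such values over several anatomical regions, which makes the decision robust to any single region matching poorly.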
Affiliation(s)
- Yuki Sakai: Division of Radiology, Department of Medical Technology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Keita Takahashi: Division of Radiology, Department of Medical Technology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Yoichiro Shimizu: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Emi Ishibashi: Division of Radiology, Department of Medical Technology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Toyoyuki Kato: Division of Radiology, Department of Medical Technology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Junji Morishita: Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
10. Kawazoe Y, Morishita J, Matsunobu Y, Okumura M, Shin S, Usumoto Y, Ikeda N. A simple method for semi-automatic readjustment for positioning in post-mortem head computed tomography imaging. Journal of Forensic Radiology and Imaging. 2019. DOI: 10.1016/j.jofri.2019.01.004.
11. Sakai Y, Takahashi K, Iwase K, Shimizu Y, Hattori A, Kato T. [Usefulness of Biological Fingerprints and Template Matching Techniques in Bedside Chest Radiography for Patient Identification and Preventing Filing Mistakes]. Nihon Hoshasen Gijutsu Gakkai Zasshi. 2018;74:1154-1162. PMID: 30344212; DOI: 10.6009/jjrt.2018_jsrt_74.10.1154.
Abstract
The purpose of this study was to investigate whether patients can be identified using biological fingerprints extracted from bedside chest radiographs and template-matching techniques, to prevent filing mistakes on a picture archiving and communication system (PACS) server. A total of 400 bedside chest radiographs from 100 male and 100 female patients, each with current and previous images, were used to evaluate patient identification performance. Five biological fingerprints were extracted from the 200 previous images using averaged bedside chest radiographs produced for each sex and detector size. The correlation values for 200 same-patient pairs and 39,800 different-patient pairs were calculated as a similarity index and used for receiver operating characteristic (ROC) analysis. Patient identification performance was examined using a correlation index calculated as the summation of the correlation values obtained from the five biological fingerprints, and the sensitivity at 90.0% specificity was calculated from this index. The correlation index for same patients was higher than that for different patients, and the area under the ROC curve was 0.974. The patient identification performance was 76.0% (152/200), and the sensitivity at 90.0% specificity was 93.4% (37168/39800). Our results suggest that the proposed method may be useful for preventing filing mistakes with bedside chest radiographs on a PACS server.
Affiliation(s)
- Yuki Sakai: Division of Radiology, Department of Medical Technology, Kyushu University Hospital
- Keita Takahashi: Division of Radiology, Department of Medical Technology, Kyushu University Hospital
- Kensuke Iwase: Department of Radiology, University Hospital of Occupational and Environmental Health
- Yoichiro Shimizu: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
- Akiko Hattori: Division of Radiology, Department of Medical Technology, Kyushu University Hospital
- Toyoyuki Kato: Division of Radiology, Department of Medical Technology, Kyushu University Hospital
12. Shimizu Y, Morishita J. Development of a method of automated extraction of biological fingerprints from chest radiographs as preprocessing of patient recognition and identification. Radiol Phys Technol. 2017;10:376-381. DOI: 10.1007/s12194-017-0400-y.