1. Morita D, Kawarazaki A, Soufi M, Otake Y, Sato Y, Numajiri T. Automatic detection of midfacial fractures in facial bone CT images using deep learning-based object detection models. J Stomatol Oral Maxillofac Surg 2024;125:101914. PMID: 38750725. DOI: 10.1016/j.jormas.2024.101914.
Abstract
BACKGROUND: Midfacial fractures are among the most frequent facial fractures. Surgery is recommended within 2 weeks of injury, but this window is often missed because the fracture goes undetected on diagnostic imaging in the busy emergency medicine setting. Using deep learning technology, which has progressed markedly in various fields, we attempted to develop a system for the automatic detection of midfacial fractures. The purpose of this study was to use this system to diagnose fractures accurately and rapidly, benefiting both patients and emergency room physicians.
METHODS: One hundred computed tomography (CT) scans that included midfacial fractures (e.g., maxillary, zygomatic, nasal, and orbital fractures) were prepared. In each axial image, the fracture area was enclosed in a rectangular region to create the annotation data. Eighty scans were randomly assigned to the training dataset (3736 slices) and 20 to the validation dataset (883 slices). Training and validation were performed using two object detection algorithms: the Single Shot MultiBox Detector (SSD) and version 8 of You Only Look Once (YOLOv8).
RESULTS: The performance indicators for SSD and YOLOv8 were, respectively: precision, 0.872 and 0.871; recall, 0.823 and 0.775; F1 score, 0.846 and 0.820; average precision, 0.899 and 0.769.
CONCLUSIONS: Deep learning techniques enabled the automatic detection of midfacial fractures with good accuracy and high speed. The system developed in this study is promising for automated detection of midfacial fractures and may provide a quick and accurate solution for emergency medical care and other settings.
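The reported F1 scores follow from the stated precision and recall as their harmonic mean; a minimal check (the values are from the abstract, the function name is ours):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# SSD: f1_score(0.872, 0.823) gives ~0.847 (0.846 in the abstract)
# YOLOv8: f1_score(0.871, 0.775) gives ~0.820
```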
Affiliation(s)
- Daiki Morita: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan; Department of Plastic and Reconstructive Surgery, Tokai University School of Medicine, Kanagawa, Japan
- Ayako Kawarazaki: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Mazen Soufi: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshito Otake: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshinobu Sato: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Toshiaki Numajiri: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
2. Shetty S, Mubarak AS, R David L, Al Jouhari MO, Talaat W, Al-Rawi N, AlKawas S, Shetty S, Uzun Ozsahin D. The Application of Mask Region-Based Convolutional Neural Networks in the Detection of Nasal Septal Deviation Using Cone Beam Computed Tomography Images: Proof-of-Concept Study. JMIR Form Res 2024;8:e57335. PMID: 39226096. PMCID: PMC11408888. DOI: 10.2196/57335.
Abstract
BACKGROUND: Artificial intelligence (AI) models are being increasingly studied for the detection of variations and pathologies in different imaging modalities. Nasal septal deviation (NSD) is an important anatomical variation with clinical implications, but AI-based radiographic detection of NSD has not yet been studied.
OBJECTIVE: This research aimed to develop and evaluate a real-time model that can detect probable NSD using cone beam computed tomography (CBCT) images.
METHODS: Coronal section images were obtained from 204 full-volume CBCT scans, which were classified as normal or deviated by 2 maxillofacial radiologists. The images were then used to train and test the AI model. Mask region-based convolutional neural networks (Mask R-CNNs) with 3 different backbones (ResNet50, ResNet101, and MobileNet) were used to detect a deviated nasal septum in the 204 CBCT images. To further improve detection, an image preprocessing step (contrast enhancement [CEH]) was added.
RESULTS: The best-performing model, CEH-ResNet101, achieved a mean average precision of 0.911, with an area under the curve of 0.921.
CONCLUSIONS: These results show that the model is capable of detecting nasal septal deviation. Future research in this field should focus on additional preprocessing of images and on detection of NSD across multiple planes using 3D images.
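The CEH step is described only as contrast enhancement; one common realization is histogram equalization, sketched below (this specific method and the function name are our assumption, not taken from the study):

```python
import numpy as np

def equalize_histogram(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Classic histogram equalization: map each pixel through the
    image's normalized cumulative histogram to spread grey levels."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied grey level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return np.clip(lut, 0, levels - 1).astype(np.uint8)[img]
```

After equalization the occupied grey levels are stretched across the full 0-255 range, which tends to accentuate edges before detection.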
Affiliation(s)
- Shishir Shetty: Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Auwalu Saleh Mubarak: Operational Research Center in Healthcare, Near East University, Nicosia, Turkey
- Leena R David: Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Mhd Omar Al Jouhari: Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Wael Talaat: Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Natheer Al-Rawi: Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Sausan AlKawas: Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Sunaina Shetty: Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Dilber Uzun Ozsahin: Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
3. Lu CY, Wang YH, Chen HL, Goh YX, Chiu IM, Hou YY, Kuo KH, Lin WC. Artificial Intelligence Application in Skull Bone Fracture with Segmentation Approach. J Imaging Inform Med 2024. PMID: 38954293. DOI: 10.1007/s10278-024-01156-0.
Abstract
This study aims to evaluate an AI model designed to automatically classify skull fractures and visualize their segmentation on emergent CT scans. The model's goal is to boost diagnostic accuracy, alleviate radiologists' workload, and hasten diagnosis, thereby enhancing patient outcomes. Unlike much prior work, pediatric and post-operative patients were not excluded, and diagnostic durations were analyzed. Our testing dataset for the observer studies involved 671 patients, with a mean age of 58.88 years and fairly balanced gender representation. Model 1 of our AI algorithm, trained with 1499 fracture-positive cases, showed a sensitivity of 0.94 and specificity of 0.87, with a Dice score of 0.65. Implementing post-processing rules (specifically Rule B) improved the model's performance, resulting in a sensitivity of 0.94, specificity of 0.99, and a Dice score of 0.63. AI-assisted diagnosis significantly enhanced performance for all participants, with sensitivity almost doubling for junior radiology residents and other specialists. Additionally, diagnostic durations were significantly reduced (p < 0.01) with AI assistance across all participant categories. Our skull fracture detection model, employing a segmentation approach, demonstrated high performance, enhancing diagnostic accuracy and efficiency for radiologists and clinical physicians. This underlines the potential of AI integration in medical imaging analysis to improve patient care.
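The Dice score used to grade the segmentation output is the standard overlap measure between predicted and ground-truth masks; a minimal sketch (not the authors' code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|)
    for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```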
Affiliation(s)
- Chia-Yin Lu: Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Yu-Hsin Wang: Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Hsiu-Ling Chen: Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Yu-Xin Goh: Department of Neurology, Shuang Ho Hospital, Ministry of Health and Welfare, Taipei Medical University, New Taipei City, Taiwan
- I-Min Chiu: Department of Emergency Medicine, Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Ya-Yuan Hou: Department of Neurology, Kaohsiung Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Kuei-Hong Kuo: Division of Medical Image, Far Eastern Memorial Hospital, No. 21, Sec. 2, Nan Ya South Road, Banqiao District, New Taipei City, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Che Lin: Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Kaohsiung, Taiwan; Department of Radiology, Jen Ai Chang Gung Health Dali Branch, Taichung, Taiwan
4. Pham TD, Holmes SB, Coulthard P. A review on artificial intelligence for the diagnosis of fractures in facial trauma imaging. Front Artif Intell 2024;6:1278529. PMID: 38249794. PMCID: PMC10797131. DOI: 10.3389/frai.2023.1278529.
Abstract
Patients with facial trauma may suffer from injuries such as broken bones, bleeding, swelling, bruising, lacerations, burns, and facial deformity. Common causes of facial-bone fractures include road accidents, violence, and sports injuries. Surgery is needed when radiological findings indicate that the patient would otherwise lose normal function or be left with facial deformity. Although image reading by radiologists is useful for evaluating suspected facial fractures, human-based diagnostics faces certain challenges. Artificial intelligence (AI) is making a quantum leap in radiology, producing significant improvements in reports and workflows. Here, an updated literature review is presented on the impact of AI in facial trauma, with special reference to fracture detection in radiology. The purpose is to gain insights into current developments and the demand for future research in facial trauma. This review also discusses limitations to be overcome and important open issues that must be investigated to make AI applications to facial trauma more effective and realistic in practical settings. The publications selected for review were chosen based on their clinical significance, journal metrics, and journal indexing.
Affiliation(s)
- Tuan D. Pham: Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
5. Jeong Y, Jeong C, Sung KY, Moon G, Lim J. Development of AI-Based Diagnostic Algorithm for Nasal Bone Fracture Using Deep Learning. J Craniofac Surg 2024;35:29-32. PMID: 38294297. DOI: 10.1097/scs.0000000000009856.
Abstract
Facial bone fractures are relatively common, and the nasal bone is the most frequently fractured facial bone. Computed tomography is the gold standard for diagnosing such fractures. Most nasal bone fractures can be treated with closed reduction, but delayed diagnosis may cause nasal deformity or other complications that are difficult and expensive to treat. In this study, the authors developed an algorithm for diagnosing nasal fractures by training a deep learning model on computed tomography images of the facial bones. The algorithm achieved significant concordance with physicians' readings, with 100% sensitivity and 77% specificity. Herein, the authors report the results of a pilot study on the first stage of developing an algorithm for analyzing fractures across the facial bones.
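The reported sensitivity and specificity follow from the standard confusion-matrix definitions; a minimal sketch (the counts in the comment are illustrative, not taken from the study):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 30 fractures all caught (FN = 0) and 77 of 100 intact noses
# correctly cleared would give sensitivity 1.00 and specificity 0.77.
```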
Affiliation(s)
- Yeonjin Jeong: Department of Plastic and Reconstructive Surgery, National Medical Center, Seoul, Korea
- Chanho Jeong: Department of Plastic and Reconstructive Surgery, Kangwon National University Hospital, Kangwon-do, Korea
- Kun-Yong Sung: Department of Plastic and Reconstructive Surgery, Kangwon National University Hospital, Kangwon-do, Korea
- Gwiseong Moon: Department of Computer Science and Engineering, Kangwon National University, Kangwon-do, Korea
- Jinsoo Lim: Department of Plastic and Reconstructive Surgery, College of Medicine, The Catholic University of Korea, St. Vincent's Hospital, Gyeonggi-do, Korea
6. Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH. A Systematic Literature Review of 3D Deep Learning Techniques in Computed Tomography Reconstruction. Tomography 2023;9:2158-2189. PMID: 38133073. PMCID: PMC10748093. DOI: 10.3390/tomography9060169.
Abstract
Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and subject to artifacts and noise, which compromise image quality and accuracy. Deep learning developments have the potential to address these challenges and improve CT image reconstruction. Our research aim was therefore to determine which 3D deep learning techniques are used in CT reconstruction and to identify the accessible training and validation datasets. The search was performed on five databases. After a careful assessment of each record against the objective and scope of the study, we selected 60 research articles for this review. The systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and efficient, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
Affiliation(s)
- Hameedur Rahman: Department of Computer Games Development, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Abdur Rehman Khan: Department of Creative Technologies, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Touseef Sadiq: Centre for Artificial Intelligence Research, Department of Information and Communication Technology, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway
- Ashfaq Hussain Farooqi: Department of Computer Science, Faculty of Computing & AI, Air University, Islamabad 44000, Pakistan
- Inam Ullah Khan: Department of Electronic Engineering, School of Engineering & Applied Sciences (SEAS), Isra University, Islamabad Campus, Islamabad 44000, Pakistan
- Wei Hong Lim: Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
7. Sarmah M, Neelima A, Singh HR. Survey of methods and principles in three-dimensional reconstruction from two-dimensional medical images. Vis Comput Ind Biomed Art 2023;6:15. PMID: 37495817. PMCID: PMC10371974. DOI: 10.1186/s42492-023-00142-7.
Abstract
Three-dimensional (3D) reconstruction of human organs has gained attention in recent years due to advances in the Internet and graphics processing units. In the coming years, most patient care will shift toward this new paradigm. However, development of fast and accurate 3D models from medical images or a set of medical scans remains a daunting task due to the number of pre-processing steps involved, most of which are dependent on human expertise. In this review, a survey of pre-processing steps was conducted, and reconstruction techniques for several organs in medical diagnosis were studied. Various methods and principles related to 3D reconstruction were highlighted. The usefulness of 3D reconstruction of organs in medical diagnosis was also highlighted.
Affiliation(s)
- Mriganka Sarmah: Department of Computer Science and Engineering, National Institute of Technology, Nagaland, 797103, India
- Arambam Neelima: Department of Computer Science and Engineering, National Institute of Technology, Nagaland, 797103, India
- Heisnam Rohen Singh: Department of Information Technology, Nagaland University, Nagaland, 797112, India
8. Maxillofacial fracture detection and classification in computed tomography images using convolutional neural network-based models. Sci Rep 2023;13:3434. PMID: 36859660. PMCID: PMC9978019. DOI: 10.1038/s41598-023-30640-w.
Abstract
The purpose of this study was to evaluate the performance of convolutional neural network-based models for the detection and classification of maxillofacial fractures in computed tomography (CT) maxillofacial bone window images. A total of 3407 CT images, 2407 of which contained maxillofacial fractures, were retrospectively obtained from the regional trauma center from 2016 to 2020. Multiclass image classification models were created by using DenseNet-169 and ResNet-152. Multiclass object detection models were created by using Faster R-CNN and YOLOv5. DenseNet-169 and ResNet-152 were trained to classify maxillofacial fractures into frontal, midface, mandibular and no fracture classes. Faster R-CNN and YOLOv5 were trained to automate the placement of bounding boxes to specifically detect fracture lines in each fracture class. The performance of each model was evaluated on an independent test dataset. The overall accuracy of the best multiclass classification model, DenseNet-169, was 0.70. The mean average precision of the best multiclass detection model, Faster R-CNN, was 0.78. In conclusion, DenseNet-169 and Faster R-CNN have potential for the detection and classification of maxillofacial fractures in CT images.
9. Nam Y, Choi Y, Kang J, Seo M, Heo SJ, Lee MK. Diagnosis of nasal bone fractures on plain radiographs via convolutional neural networks. Sci Rep 2022;12:21510. PMID: 36513751. PMCID: PMC9747951. DOI: 10.1038/s41598-022-26161-7.
Abstract
This study aimed to assess the performance of deep learning (DL) algorithms in the diagnosis of nasal bone fractures on radiographs and compare it with that of experienced radiologists. In this retrospective study, 6713 patients whose nasal radiographs were examined for suspected nasal bone fractures between January 2009 and October 2020 were assessed. Our dataset was randomly split into training (n = 4325), validation (n = 481), and internal test (n = 1250) sets; a separate external dataset (n = 102) was used. The area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of the DL algorithm and the two radiologists were compared. The AUCs of the DL algorithm for the internal and external test sets were 0.85 (95% CI, 0.83-0.86) and 0.86 (95% CI, 0.78-0.93), respectively, and those of the two radiologists for the external test set were 0.80 (95% CI, 0.73-0.87) and 0.75 (95% CI, 0.68-0.82). The DL algorithm therefore significantly outperformed radiologist 2 (P = 0.021) but did not significantly differ from radiologist 1 (P = 0.142). The sensitivity and specificity of the DL algorithm were 83.1% (95% CI, 71.2-93.2%) and 83.7% (95% CI, 69.8-93.0%), respectively. Our DL algorithm performs comparably to experienced radiologists in diagnosing nasal bone fractures on radiographs.
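The AUC compared here can be computed without curve fitting as the probability that a randomly chosen fracture case receives a higher score than a randomly chosen normal case (the Mann-Whitney formulation); a minimal sketch, not the authors' code:

```python
def auc_from_scores(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ranked
    correctly; tied scores count as half a correct ranking."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```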
Affiliation(s)
- Yoonho Nam: Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin-si, Gyeonggi-do, Republic of Korea
- Yangsean Choi: Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Junghwa Kang: Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin-si, Gyeonggi-do, Republic of Korea
- Minkook Seo: Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Soo Jin Heo: Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Min Kyoung Lee: Department of Radiology, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
10. Zech JR, Santomartino SM, Yi PH. Artificial Intelligence (AI) for Fracture Diagnosis: An Overview of Current Products and Considerations for Clinical Adoption, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022;219:869-878. PMID: 35731103. DOI: 10.2214/ajr.22.27873.
Abstract
Fractures are common injuries that can be difficult to diagnose, with missed fractures accounting for most misdiagnoses in the emergency department. Artificial intelligence (AI) and, specifically, deep learning have shown a strong ability to accurately detect fractures and augment the performance of radiologists in proof-of-concept research settings. Although the number of real-world AI products available for clinical use continues to increase, guidance for practicing radiologists in the adoption of this new technology is limited. This review describes how AI and deep learning algorithms can help radiologists to better diagnose fractures. The article also provides an overview of commercially available U.S. FDA-cleared AI tools for fracture detection as well as considerations for the clinical adoption of these tools by radiology practices.
Affiliation(s)
- John R Zech: Department of Radiology, Columbia University Irving Medical Center/New York-Presbyterian Hospital, New York, NY
- Samantha M Santomartino: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, 670 W Baltimore St, First Fl, Rm 1172, Baltimore, MD 21201
- Paul H Yi: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, 670 W Baltimore St, First Fl, Rm 1172, Baltimore, MD 21201
11. Yang C, Yang L, Gao GD, Zong HQ, Gao D. Assessment of artificial intelligence-aided reading in the detection of nasal bone fractures. Technol Health Care 2022;31:1017-1025. PMID: 36442167. DOI: 10.3233/thc-220501.
Abstract
BACKGROUND: Artificial intelligence (AI) technology is a promising diagnostic adjunct in fracture detection. However, few studies describe the improvement in clinicians' diagnostic accuracy for nasal bone fractures with the aid of AI technology.
OBJECTIVE: This study aims to determine the value of an AI model in improving diagnostic accuracy for nasal bone fractures compared with manual reading.
METHODS: A total of 252 consecutive patients who had undergone facial computed tomography (CT) between January 2020 and January 2021 were enrolled. The presence or absence of a nasal bone fracture was determined by two experienced radiologists. An AI algorithm based on deep learning was engineered, trained and validated to detect fractures on CT images. Twenty readers with varying experience were invited to read the CT images with and without AI, and each reader's accuracy, sensitivity and specificity in both settings were calculated.
RESULTS: The deep-learning AI model had 84.78% sensitivity, 86.67% specificity, 0.857 area under the curve (AUC) and a 0.714 Youden index in identifying nasal bone fractures. For all readers, regardless of experience, AI-aided reading had higher sensitivity ([94.00 ± 3.17]% vs [83.52 ± 10.16]%, P < 0.001), specificity ([89.75 ± 6.15]% vs [77.55 ± 11.38]%, P < 0.001) and AUC (0.92 ± 0.04 vs 0.81 ± 0.10, P < 0.001) than reading without AI. With the aid of AI, sensitivity, specificity and AUC improved significantly in readers with 1–5 or 6–10 years of experience (all P < 0.05, Table 4). For readers with 11–15 years of experience, no evidence suggested that AI improved sensitivity or AUC (P = 0.124 and 0.152, respectively).
CONCLUSION: The AI model may aid less experienced physicians and radiologists in improving their diagnostic performance for the localisation of nasal bone fractures on CT images.
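The Youden index reported for the model follows directly from its sensitivity and specificity; a quick check (values from the abstract):

```python
def youden_index(sensitivity: float, specificity: float) -> float:
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

# 0.8478 + 0.8667 - 1 = 0.7145, consistent with the reported 0.714.
```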
Affiliation(s)
- Cun Yang: Department of Medical Equipment, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Lei Yang: Department of Medical Imaging, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Guo-Dong Gao: Department of Medical Imaging, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Hui-Qian Zong: Department of Medical Equipment, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Duo Gao: Department of Medical Imaging, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
12. Huang Y, Si Y, Hu B, Zhang Y, Wu S, Wu D, Wang Q. Transformer-based factorized encoder for classification of pneumoconiosis on 3D CT images. Comput Biol Med 2022;150:106137. PMID: 36191395. DOI: 10.1016/j.compbiomed.2022.106137.
Abstract
In the past decade, deep learning methods have been applied to medical imaging and have achieved good performance, including recent success in diagnostic evaluation of lung images. Although chest radiography (CR) is the standard modality for diagnosing pneumoconiosis, computed tomography (CT) typically provides more detail of the lesions in the lung. Thus, a transformer-based factorized encoder (TBFE) is proposed and, for the first time, applied to the classification of pneumoconiosis depicted on 3D CT images. The factorized encoder consists of two transformer encoders: the first encodes feature maps from the same CT slice, capturing intra-slice interactions, while the second encodes feature maps from different slices, exploring inter-slice interactions. Because CT lacks accepted grading standards for labeling pneumoconiosis lesions, an acknowledged CR-based grading system was applied to mark the corresponding pneumoconiosis stage on CT. We then pre-trained a 3D convolutional autoencoder on the public LIDC-IDRI dataset and fixed the parameters of the last convolutional layer of the encoder to extract CT feature maps with underlying spatial structural information from our 3D CT dataset. Experimental results demonstrated the superiority of the TBFE over other 3D-CNN networks, achieving an accuracy of 97.06%, a recall of 89.33%, a precision of 90%, and an F1-score of 93.33% under 10-fold cross-validation.
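The factorization idea (attend within each slice first, then across slices) can be illustrated with a deliberately stripped-down sketch; the single head, identity Q/K/V projections, and per-slice mean pooling are our simplifications for exposition, not the paper's architecture:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention with identity projections."""
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[-1])
    return softmax(scores) @ tokens

def factorized_encode(volume: np.ndarray) -> np.ndarray:
    """volume: (n_slices, n_tokens, dim) slice feature maps.
    Stage 1 mixes tokens within each slice (intra-slice);
    stage 2 mixes the pooled slice embeddings (inter-slice)."""
    intra = np.stack([self_attention(s) for s in volume])
    slice_emb = intra.mean(axis=1)      # one embedding per slice
    return self_attention(slice_emb)    # (n_slices, dim)
```

Factorizing attention this way keeps the quadratic cost per stage small (tokens per slice, then slices per volume) instead of attending over every token in the full 3D volume at once.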
Affiliation(s)
- Yingying Huang: Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, Shaanxi, China; University of Chinese Academy of Sciences, Beijing 100049, China; Key Laboratory of Biomedical Spectroscopy, Xi'an 710119, Shaanxi, China
- Yang Si: Sichuan Academy of Medical Science and Sichuan Provincial People's Hospital, Department of Neurology, Chengdu, Sichuan, China; University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Bingliang Hu: Key Laboratory of Biomedical Spectroscopy, Xi'an 710119, Shaanxi, China
- Yan Zhang: Department of Radiology, West China School of Public Health and West China Fourth Hospital, Sichuan University, Chengdu, Sichuan, China
- Shuang Wu: Department of Radiology, West China School of Public Health and West China Fourth Hospital, Sichuan University, Chengdu, Sichuan, China
- Dongsheng Wu: Department of Radiology, West China School of Public Health and West China Fourth Hospital, Sichuan University, Chengdu, Sichuan, China; Research Center of Artificial Intelligence in Medicine, West China-PUMC C.C. Chen Institute of Health, Sichuan University, Chengdu, Sichuan, China
- Quan Wang: Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, Shaanxi, China; Key Laboratory of Biomedical Spectroscopy, Xi'an 710119, Shaanxi, China
13. Generalizability assessment of COVID-19 3D CT data for deep learning-based disease detection. Comput Biol Med 2022;145:105464. PMID: 35390746. PMCID: PMC8971071. DOI: 10.1016/j.compbiomed.2022.105464.
Abstract
BACKGROUND: Artificial intelligence approaches to the classification/detection of COVID-19-positive cases suffer from limited generalizability. Moreover, accessing and preparing another large dataset is not always feasible and is time-consuming. Several studies have combined smaller COVID-19 CT datasets into "supersets" to maximize the number of training samples. This study assesses generalizability by splitting datasets into different portions, based on 3D CT images, using deep learning.
METHOD: Two large datasets, together comprising 1110 3D CT images, were each split into five segments of 20%. The first 20% segment of each dataset was set aside as a holdout test set, and 3D-CNN training was performed with the remaining 80% from each dataset. Two small external datasets were also used to independently evaluate the trained models.
RESULTS: The model trained on the combined 80% portions of both datasets achieved an accuracy of 91% on the Iranmehr holdout test set and 83% on the Moscow holdout test set. The results indicated that 80% of a primary dataset is adequate for fully training a model, and that additional fine-tuning on 40% of a secondary dataset helps the model generalize to a third, unseen dataset. The highest accuracy achieved through transfer learning was 85% on the LDCT dataset and 83% on the Iranmehr holdout test set when retrained on 80% of the Iranmehr dataset.
CONCLUSION: While the combination of both datasets produced the best results, other combinations and transfer learning still produced generalizable results. Adopting the proposed methodology may help obtain satisfactory results when external datasets are limited.
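The 20% segmentation scheme described above can be sketched as follows (the function name and seed are illustrative, not taken from the study):

```python
import random

def split_into_segments(items, n_segments: int = 5, seed: int = 0):
    """Shuffle a dataset and cut it into equal 20% segments,
    reserving the first segment as a holdout test set."""
    items = list(items)
    random.Random(seed).shuffle(items)
    k = len(items) // n_segments
    segments = [items[i * k:(i + 1) * k] for i in range(n_segments)]
    return segments[0], segments[1:]   # holdout, training segments
```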