1. Moharrami M, Farmer J, Singhal S, Watson E, Glogauer M, Johnson AEW, Schwendicke F, Quinonez C. Detecting dental caries on oral photographs using artificial intelligence: A systematic review. Oral Dis 2024; 30:1765-1783. [PMID: 37392423] [DOI: 10.1111/odi.14659] [Received: 03/21/2023] [Revised: 05/19/2023] [Accepted: 06/15/2023]
Abstract
OBJECTIVES This systematic review aimed to evaluate the performance of artificial intelligence (AI) models in detecting dental caries on oral photographs. METHODS Methodological characteristics and performance metrics of clinical studies reporting on deep learning and other machine learning algorithms were assessed. The risk of bias was evaluated using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. A systematic search was conducted in EMBASE, Medline, and Scopus. RESULTS Out of 3410 identified records, 19 studies were included; six had a low risk of bias and seven had low applicability concerns across all domains. Metrics varied widely and were assessed on multiple levels. F1-scores for classification and detection tasks were 68.3%-94.3% and 42.8%-95.4%, respectively. Irrespective of the task, F1-scores were 68.3%-95.4% for professional cameras, 78.8%-87.6% for intraoral cameras, and 42.8%-80% for smartphone cameras. Few studies allowed assessment of AI performance across lesions of different severity. CONCLUSION Automatic detection of dental caries using AI may provide objective verification of clinicians' diagnoses and facilitate patient-clinician communication and teledentistry. Future studies should consider more robust study designs, employ comparable and standardized metrics, and focus on the severity of caries lesions.
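The F1-score ranges reported in this review are the harmonic mean of precision and recall; a minimal sketch of the relationship (the example precision/recall values are illustrative, not taken from any included study):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a caries classifier with precision 0.90 and recall 0.80
# yields an F1 of about 0.847, i.e. 84.7% on the scale used above.
print(round(f1_score(0.90, 0.80), 3))
```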
Affiliation(s)
- Mohammad Moharrami
  - Faculty of Dentistry, University of Toronto, Toronto, Ontario, Canada
  - Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Geneva, Switzerland
- Julie Farmer
  - Faculty of Dentistry, University of Toronto, Toronto, Ontario, Canada
- Sonica Singhal
  - Faculty of Dentistry, University of Toronto, Toronto, Ontario, Canada
  - Health Promotion, Chronic Disease and Injury Prevention Department, Public Health Ontario, Toronto, Canada
- Erin Watson
  - Faculty of Dentistry, University of Toronto, Toronto, Ontario, Canada
  - Department of Dental Oncology, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Michael Glogauer
  - Faculty of Dentistry, University of Toronto, Toronto, Ontario, Canada
  - Department of Dental Oncology, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Dentistry, Centre for Advanced Dental Research and Care, Mount Sinai Hospital, Toronto, Ontario, Canada
- Alistair E W Johnson
  - Program in Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
- Falk Schwendicke
  - Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Geneva, Switzerland
  - Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Carlos Quinonez
  - Faculty of Dentistry, University of Toronto, Toronto, Ontario, Canada
  - Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
2. Ndiaye AD, Gasqui MA, Millioz F, Perard M, Leye Benoist F, Grosgogeat B. Exploring the Methodological Approaches of Studies on Radiographic Databases Used in Cariology to Feed Artificial Intelligence: A Systematic Review. Caries Res 2024; 58:117-140. [PMID: 38342096] [DOI: 10.1159/000536277] [Received: 05/18/2023] [Accepted: 01/04/2024]
Abstract
INTRODUCTION A growing number of studies on diagnostic imaging show superior efficiency and accuracy of computer-aided diagnostic systems compared to those of certified dentists. This methodological systematic review aimed to evaluate the different methodological approaches used by studies on machine learning and deep learning that have used radiographic databases to classify, detect, and segment dental caries. METHODS The protocol was registered in PROSPERO before data collection (CRD42022348097). A literature search was performed in MEDLINE, Embase, IEEE Xplore, and Web of Science up to December 2022, without language restrictions. Studies and surveys using a dental radiographic database for the classification, detection, or segmentation of carious lesions were sought. Records deemed eligible were retrieved and further assessed for inclusion by two reviewers, who resolved any discrepancies through consensus; a third reviewer was consulted when disagreements persisted. After data extraction, the same reviewers assessed methodological quality using the CLAIM and QUADAS-AI checklists. RESULTS After screening 325 articles, 35 studies were eligible and included. Bitewing was the most commonly used radiograph type (n = 17), and detection (n = 15) was the most explored computer vision task. Sample sizes ranged from 95 to 38,437, while augmented training sets ranged from 300 to 315,786 images. The convolutional neural network was the most commonly used model. The mean completeness of CLAIM items was 49% (SD ± 34%). The applicability of the CLAIM checklist items revealed several weaknesses in the methodology of the selected studies: most studies were monocentric, and only 9% used an external test set when evaluating model performance. The QUADAS-AI tool revealed that only 43% of the included studies were at low risk of bias for the reference standard domain. CONCLUSION This review demonstrates that the overall scientific quality of studies conducted to feed artificial intelligence algorithms is low. The design and validation of studies could be improved through a standardized guideline supporting the reproducibility and generalizability of results and, thus, their clinical application.
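The "mean completeness of CLAIM items (49%)" reported above is, per study, the fraction of applicable checklist items that the study reports. A minimal sketch of that calculation; the item names and values below are hypothetical, not from any included study:

```python
def claim_completeness(items: dict[str, bool]) -> float:
    """Fraction of applicable CLAIM checklist items a study reports, in [0, 1]."""
    if not items:
        return 0.0
    return sum(items.values()) / len(items)

# Hypothetical study reporting 3 of 4 sampled checklist items:
study = {"external_test_set": False, "data_source": True,
         "ground_truth_protocol": True, "model_description": True}
print(f"{claim_completeness(study):.0%}")  # → 75%
```

Averaging this fraction over all included studies yields the review's headline completeness figure.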
Affiliation(s)
- Amadou Diaw Ndiaye
  - Service d'Odontologie Conservatrice-Endodontie, Université Cheikh Anta Diop, Dakar, Senegal
- Marie Agnès Gasqui
  - Laboratoire des Multimatériaux et Interfaces (LMI), UMR CNRS, Université Claude Bernard Lyon 1, Lyon, France
  - Service d'Odontologie, Hospices Civils de Lyon, Lyon, France
- Fabien Millioz
  - CREATIS (Centre de Recherche en Acquisition et Traitement de l'Image pour la Santé), CNRS UMR, INSERM U1294, Université Claude Bernard Lyon 1, INSA Lyon, Université Jean Monnet Saint-Etienne, Saint-Etienne, France
- Matthieu Perard
  - University Rennes, INSERM, Rennes, France
  - CHU Rennes, Rennes, France
- Fatou Leye Benoist
  - Service d'Odontologie Conservatrice-Endodontie, Université Cheikh Anta Diop, Dakar, Senegal
- Brigitte Grosgogeat
  - Laboratoire des Multimatériaux et Interfaces (LMI), UMR CNRS, Université Claude Bernard Lyon 1, Lyon, France
  - Service d'Odontologie, Hospices Civils de Lyon, Lyon, France
3. Ma L, Li G, Feng X, Fan Q, Liu L. TiCNet: Transformer in Convolutional Neural Network for Pulmonary Nodule Detection on CT Images. J Imaging Inform Med 2024; 37:196-208. [PMID: 38343213] [DOI: 10.1007/s10278-023-00904-y] [Received: 04/22/2023] [Revised: 07/19/2023] [Accepted: 08/10/2023]
Abstract
Lung cancer is the leading cause of cancer death. Since lung cancer appears as nodules in the early stage, detecting pulmonary nodules early could enhance treatment efficiency and improve patient survival rates. The development of computer-aided analysis technology has made it possible to automatically detect lung nodules in Computed Tomography (CT) screening. In this paper, we propose a novel detection network, TiCNet, which embeds a transformer module in a 3D Convolutional Neural Network (CNN) for pulmonary nodule detection on CT images. First, we integrate the transformer and CNN in an end-to-end structure to capture both short- and long-range dependencies, providing rich information on nodule characteristics. Second, we design an attention block and multi-scale skip pathways to improve the detection of small nodules. Last, we develop a two-head detector to guarantee high sensitivity and specificity. Experimental results on the LUNA16 and PN9 datasets showed that TiCNet achieved superior performance compared with existing lung nodule detection methods, and the contribution of each module was verified. The proposed TiCNet model is an effective tool for pulmonary nodule detection, and its performance suggests potential usefulness in supporting lung cancer screening.
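The core idea, letting flattened CNN feature-map positions attend to one another so that distant voxels can exchange information, can be sketched as follows. This is a simplified single-head attention in NumPy; the tensor sizes and the absence of learned Q/K/V projections are illustrative assumptions, not TiCNet's actual implementation:

```python
import numpy as np

def self_attention(tokens: np.ndarray) -> np.ndarray:
    """Single-head self-attention without learned projections.
    tokens: (N, C) sequence of flattened feature-map positions."""
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])    # (N, N) similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)            # row-wise softmax
    return weights @ tokens                                  # (N, C) mixed tokens

# A toy 3D CNN feature map (C=8 channels, D=H=W=4) flattened to 64 tokens:
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4, 4))
tokens = fmap.reshape(8, -1).T                               # (64, 8)
out = self_attention(tokens)
print(out.shape)                                             # (64, 8)
```

After attention, the (N, C) sequence is reshaped back to (C, D, H, W) and handed to the next convolutional stage.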
Affiliation(s)
- Ling Ma
  - College of Software, Nankai University, Tianjin, China
- Gen Li
  - College of Software, Nankai University, Tianjin, China
- Xingyu Feng
  - College of Software, Nankai University, Tianjin, China
- Qiliang Fan
  - College of Software, Nankai University, Tianjin, China
- Lizhi Liu
  - Department of Radiology, Sun Yat-Sen University Cancer Center, Guangdong, China
4. Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, Jose A, Roy R, Merhof D. Advances in medical image analysis with vision Transformers: A comprehensive review. Med Image Anal 2024; 91:103000. [PMID: 37883822] [DOI: 10.1016/j.media.2023.103000] [Received: 02/09/2023] [Revised: 09/30/2023] [Accepted: 10/11/2023]
Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in computer vision. Among other merits, Transformers have proven capable of learning long-range dependencies and spatial correlations, a clear advantage over convolutional neural networks (CNNs), which have so far been the de facto standard in computer vision. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic overview of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths, and weaknesses of the proposed strategies and develop taxonomies highlighting key properties and contributions. Further, where applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss future research directions. In addition, we provide the cited papers with their corresponding implementations at https://github.com/mindflow-institue/Awesome-Transformer.
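The vision Transformers surveyed here all begin by tokenizing an image into fixed-size patches. A minimal NumPy sketch of that step, using the standard 224-pixel input and 16-pixel patches as illustrative sizes and omitting the learned linear projection:

```python
import numpy as np

def image_to_patches(img: np.ndarray, p: int) -> np.ndarray:
    """Split an (H, W, C) image into a sequence of flattened p x p patches,
    the token sequence a Vision Transformer consumes."""
    h, w, c = img.shape
    assert h % p == 0 and w % p == 0, "image must tile evenly into patches"
    # Group pixels into (rows of patches, p, cols of patches, p, C), then
    # bring the two patch-grid axes to the front before flattening.
    patches = img.reshape(h // p, p, w // p, p, c).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * c)    # (num_patches, patch_dim)

img = np.zeros((224, 224, 3))
tokens = image_to_patches(img, 16)
print(tokens.shape)   # (196, 768): a 14x14 patch grid, each patch 16*16*3 values
```

Each row is then linearly projected to the model width and fed, with positional embeddings, into the Transformer encoder.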
Affiliation(s)
- Reza Azad
  - Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Amirhossein Kazerouni
  - School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Moein Heidari
  - School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Amirali Molaei
  - School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Yiwei Jia
  - Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Abin Jose
  - Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Rijo Roy
  - Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Dorit Merhof
  - Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany
  - Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
5. Radha RC, Raghavendra BS, Subhash BV, Rajan J, Narasimhadhan AV. Machine learning techniques for periodontitis and dental caries detection: A narrative review. Int J Med Inform 2023; 178:105170. [PMID: 37595373] [DOI: 10.1016/j.ijmedinf.2023.105170] [Received: 03/20/2023] [Revised: 07/07/2023] [Accepted: 07/31/2023]
Abstract
OBJECTIVES In recent years, periodontitis and dental caries have become common in humans and need to be diagnosed at an early stage to prevent severe complications and tooth loss. These dental issues are diagnosed through visual inspection, pocket probing depth measurements, and radiographic findings by experienced dentists. Although many machine learning (ML) algorithms have been proposed for the automated detection of periodontitis and dental caries, which ML techniques are suitable for clinical practice remains under debate. This review aims to identify the research challenges by analyzing the limitations of current methods and how to address them to obtain robust systems suitable for clinical use or point-of-care testing. METHODS An extensive search of the English-language literature published from 2015 to 2022 on the subject of study was conducted in the electronic databases PubMed, Institute of Electrical and Electronics Engineers (IEEE) Xplore, and ScienceDirect. RESULTS The initial electronic search yielded 1743 titles, and 55 studies were eventually included based on the selection criteria adopted in this review. The selected studies covered ML applications for the automatic detection of periodontitis, dental caries, and related dental issues: apical lesions, periodontal bone loss, and vertical root fracture. CONCLUSION While most ML-based studies use radiographic images for the detection of periodontitis and dental caries, a few studies revealed that good diagnostic accuracy could be achieved by training the ML model even with mobile photographs of dental issues. Smartphones are now used in every sector for different applications; training ML models with large numbers of smartphone-captured images of dental issues can achieve good accuracy, reduce the cost of clinical diagnosis, and enable user interaction.
Affiliation(s)
- R C Radha
  - Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- B S Raghavendra
  - Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- B V Subhash
  - Department of Oral Medicine and Radiology, DAPM R V Dental College, Bengaluru, India
- Jeny Rajan
  - Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
- A V Narasimhadhan
  - Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
6. Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650] [PMCID: PMC10010286] [DOI: 10.1016/j.media.2023.102762] [Received: 06/02/2022] [Revised: 01/18/2023] [Accepted: 01/27/2023]
Abstract
The Transformer, one of the latest technological advances of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask: can Transformer models transform medical imaging? In this paper, we attempt to answer that question. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of the key properties that define Transformers, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and present current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and related areas. What distinguishes our review is its organization based on the Transformer's key defining properties, mostly derived from comparing the Transformer with the CNN, and on the type of architecture, which specifies the manner in which Transformer and CNN are combined; both help readers understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
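One key property contrasted throughout such Transformer-versus-CNN comparisons, the local receptive field of a convolution against the global one of self-attention, can be demonstrated with a toy NumPy experiment (illustrative only; the 1D signal, kernel, and scalar-token attention are assumptions for the demo, not from the paper):

```python
import numpy as np

def conv1d(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """'Same'-padded 1D convolution: each output sees only len(k) neighbours."""
    pad = len(k) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(k)] @ k for i in range(len(x))])

def attend1d(x: np.ndarray) -> np.ndarray:
    """Toy self-attention on scalar tokens: every output mixes all inputs."""
    scores = np.outer(x, x)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

x = np.zeros(16)
y = x.copy()
y[-1] = 5.0                  # perturb only the far end of the signal
k = np.ones(3)
print(np.allclose(conv1d(x, k)[0], conv1d(y, k)[0]))   # True: conv output 0 unaffected
print(np.allclose(attend1d(x)[0], attend1d(y)[0]))     # False: attention output 0 changes
```

A distant perturbation never reaches a single convolution's output, but propagates to every attention output in one layer, which is the long-range-dependency argument the review builds on.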
Affiliation(s)
- Jun Li
  - Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen
  - Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang
  - Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang
  - Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman
  - Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou
  - Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
  - School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China
7. Tareq A, Faisal MI, Islam MS, Rafa NS, Chowdhury T, Ahmed S, Farook TH, Mohammed N, Dudley J. Visual Diagnostics of Dental Caries through Deep Learning of Non-Standardised Photographs Using a Hybrid YOLO Ensemble and Transfer Learning Model. Int J Environ Res Public Health 2023; 20:5351. [PMID: 37047966] [PMCID: PMC10094335] [DOI: 10.3390/ijerph20075351] [Received: 02/09/2023] [Revised: 03/16/2023] [Accepted: 03/29/2023]
Abstract
BACKGROUND Access to oral healthcare is not uniform globally, particularly in rural areas with limited resources, which limits the potential of automated diagnostics and advanced tele-dentistry applications. Digital caries detection and progression monitoring through photographic communication is influenced by multiple variables that are difficult to standardize in such settings. The objective of this study was to develop a novel and cost-effective computer vision AI system to predict dental cavitations from non-standardised photographs with reasonable clinical accuracy. METHODS A set of 1703 augmented images was obtained from 233 de-identified teeth specimens. Images were acquired using a consumer smartphone without any standardised apparatus. The study utilised state-of-the-art ensemble modelling, test-time augmentation, and transfer learning. The "you only look once" (YOLO) derivatives v5s, v5m, v5l, and v5x were independently evaluated, and an ensemble of the best results was augmented and transfer-learned with ResNet50, ResNet101, VGG16, AlexNet, and DenseNet. Outcomes were evaluated using precision, recall, and mean average precision (mAP). RESULTS The YOLO model ensemble achieved an mAP of 0.732, an accuracy of 0.789, and a recall of 0.701. When transferred to VGG16, the final model demonstrated a diagnostic accuracy of 86.96%, precision of 0.89, and recall of 0.88, surpassing all other base methods of object detection from free-hand non-standardised smartphone photographs. CONCLUSION A computer vision AI system blending a model ensemble, test-time augmentation, and transferred deep learning was developed to predict dental cavitations from non-standardised photographs with reasonable clinical accuracy. This model can improve access to oral healthcare in rural areas with limited resources and has the potential to aid automated diagnostics and advanced tele-dentistry applications.
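The precision, recall, and mAP figures reported for detectors like this YOLO ensemble rest on matching predicted boxes to ground truth by intersection-over-union (IoU). A minimal sketch; the box coordinates and the 0.5 threshold below are illustrative conventions, not values from the study:

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area, 0 if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted caries box vs. ground truth; a common convention counts
# IoU >= 0.5 as a true positive when computing precision/recall/mAP.
pred, truth = (10, 10, 50, 50), (20, 20, 60, 60)
print(round(iou(pred, truth), 3))
```

Sweeping the detector's confidence threshold under this matching rule traces the precision-recall curve whose area is the per-class average precision, averaged into mAP.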
Affiliation(s)
- Abu Tareq
  - Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Mohammad Imtiaz Faisal
  - Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Md. Shahidul Islam
  - Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Nafisa Shamim Rafa
  - Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Tashin Chowdhury
  - Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Saif Ahmed
  - Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Taseef Hasan Farook
  - Adelaide Dental School, The University of Adelaide, Adelaide, SA 5005, Australia
- Nabeel Mohammed
  - Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- James Dudley
  - Adelaide Dental School, The University of Adelaide, Adelaide, SA 5005, Australia