1. Karimullah S, Khan M, Shaik F, Alabduallah B, Almjally A. An integrated method for detecting lung cancer via CT scanning via optimization, deep learning, and IoT data transmission. Front Oncol 2024;14:1435041. PMID: 39435294; PMCID: PMC11491319; DOI: 10.3389/fonc.2024.1435041
Abstract
With its increasing global prevalence, lung cancer remains a critical health concern. Despite the advancement of screening programs, patient selection and risk stratification pose significant challenges. This study addresses the pressing need for early detection through a novel diagnostic approach that leverages innovative image processing techniques. The urgency of early lung cancer detection is underscored by its alarming growth worldwide. While computed tomography (CT) surpasses traditional X-ray methods, a comprehensive diagnosis requires a combination of imaging techniques. This research introduces an advanced diagnostic tool implemented through image processing methodologies. The methodology commences with histogram equalization, a crucial step in artifact removal from CT images sourced from a medical database. Accurate lung CT image segmentation, which is vital for cancer diagnosis, follows. The Otsu thresholding method, optimized with Colliding Bodies Optimization (CBO), enhances the precision of the segmentation process. A local binary pattern (LBP) is deployed for feature extraction, enabling the identification of nodule sizes and precise locations. The resulting image is then classified using the densely connected CNN (DenseNet) deep learning algorithm, which effectively distinguishes between benign and malignant tumors. The proposed CBO+DenseNet CNN exhibits remarkable performance improvements over traditional methods. Notable enhancements in accuracy (98.17%), specificity (97.32%), precision (97.46%), and recall (97.89%) are observed, as evidenced by the results from the fractional randomized voting model (FRVM). These findings highlight the potential of the proposed model as an advanced diagnostic tool. Its improved metrics promise heightened accuracy in tumor classification and localization.
The proposed model uniquely combines Colliding Bodies Optimization (CBO) with DenseNet CNN, enhancing segmentation and classification accuracy for lung cancer detection, setting it apart from traditional methods with superior performance metrics.
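The Otsu thresholding step at the core of the segmentation stage can be sketched in plain numpy (a generic illustration on a synthetic image; the paper's CBO-optimized variant, LBP features, and DenseNet classifier are not reproduced here):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the gray level that maximizes between-class variance."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()                    # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to t
    mu = np.cumsum(p * np.arange(bins))      # cumulative mean
    mu_t = mu[-1]                            # global mean
    # Between-class variance sigma_b^2(t); guard against empty classes.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

# Synthetic "CT slice": dark background (~40) with a bright blob (~200).
rng = np.random.default_rng(0)
img = rng.normal(40, 5, (64, 64))
img[20:40, 20:40] = rng.normal(200, 5, (20, 20))
img = np.clip(img, 0, 255)

t = otsu_threshold(img)
mask = img > t          # crude nodule segmentation mask
```

The threshold lands in the gap between the two intensity modes, so the bright blob separates cleanly from the background.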
Affiliation(s)
- Shaik Karimullah: Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences (Autonomous), Rajampet, Andhra Pradesh, India
- Mudassir Khan: Department of Computer Science, College of Science & Arts, Tanumah, King Khalid University, Abha, Saudi Arabia
- Fahimuddin Shaik: Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences (Autonomous), Rajampet, Andhra Pradesh, India
- Bayan Alabduallah: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Abrar Almjally: Department of Information Technology, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
2. Zhicheng H, Yipeng W, Xiao L. Deep Learning-Based Detection of Impacted Teeth on Panoramic Radiographs. Biomed Eng Comput Biol 2024;15:11795972241288319. PMID: 39372969; PMCID: PMC11456186; DOI: 10.1177/11795972241288319
Abstract
Objective The aim is to detect impacted teeth in panoramic radiographs by refining the pretrained MedSAM model. Study design Impacted teeth are dental issues that can cause complications and are diagnosed via radiographs. We modified the SAM model for individual tooth segmentation using 1016 X-ray images. The dataset was split into training, validation, and testing sets with a ratio of 16:3:1. We enhanced the SAM model to automatically detect impacted teeth by focusing on the tooth's center for more accurate results. Results The model was trained on randomly sampled images for 200 epochs with a batch size of 1 and a learning rate of 0.001. On the test set, the SAM-based models achieved up to 86.73% accuracy, an F1-score of 0.5350, and an IoU of 0.3652. Conclusion This study fine-tunes MedSAM for impacted tooth segmentation in X-ray images, aiding dental diagnoses. Further improvements in model accuracy and selection are essential for enhancing dental practitioners' diagnostic capabilities.
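The 16:3:1 dataset split described above can be sketched as follows (a generic illustration; with 1016 images this rounding yields 813/152/51):

```python
import random

def split_16_3_1(items, seed=42):
    """Shuffle and split into train/val/test sets with a 16:3:1 ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed for reproducibility
    n = len(items)
    n_train = round(n * 16 / 20)         # 16 parts of 20
    n_val = round(n * 3 / 20)            # 3 parts of 20
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])     # remaining 1 part

train, val, test = split_16_3_1(range(1016))
# 1016 images -> 813 train / 152 val / 51 test
```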
Affiliation(s)
- He Zhicheng: School of Computer and Information Technology, Beijing Jiaotong University, Beijing, PR China
- Wang Yipeng: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, PR China
- Li Xiao: School of Computer and Information Technology, Beijing Jiaotong University, Beijing, PR China
3. Morant R, Gräwingholt A, Subelack J, Kuklinski D, Vogel J, Blum M, Eichenberger A, Geissler A. [The possible benefit of artificial intelligence in an organized population-related screening program: Initial results and perspective]. Radiologie (Heidelberg) 2024;64:773-778. PMID: 39017722; PMCID: PMC11422457; DOI: 10.1007/s00117-024-01345-6
Abstract
BACKGROUND Mammography screening programs (MSP) have shown that breast cancer can be detected at an earlier stage, enabling less invasive treatment and leading to better survival rates. The considerable number of interval breast cancers (IBC) and the additional examinations required, the majority of which turn out not to be cancer, are critically assessed. OBJECTIVE In recent years, companies and universities have used machine learning (ML) to develop powerful algorithms that demonstrate astonishing abilities to read mammograms. Can such algorithms be used to improve the quality of MSP? METHOD The original screening mammograms of 251 cases with IBC were retrospectively analyzed using the software ProFound AI® (iCAD), and the results (case score, risk score) were compared with a control group. The relevant current literature was also reviewed. RESULTS The distributions of the case scores and risk scores were markedly shifted toward higher risk compared to the control group, comparable to the results of other studies. CONCLUSION Retrospective studies as well as our own data show that artificial intelligence (AI) could change the approach to MSP in the future in the direction of personalized screening and could enable a significant reduction in radiologists' workload, fewer additional examinations, and a reduced number of IBCs; however, the results of prospective studies are needed before implementation.
Affiliation(s)
- R Morant: Krebsliga Ostschweiz, Flurhofstrasse 7, 9000 St. Gallen, Switzerland
- A Gräwingholt: Radiologie am Theater, 33098 Paderborn, Germany
- J Subelack: School of Medicine, Chair of Health Economics, Policy and Management, University of St. Gallen, 9000 St. Gallen, Switzerland
- D Kuklinski: School of Medicine, Chair of Health Economics, Policy and Management, University of St. Gallen, 9000 St. Gallen, Switzerland
- J Vogel: School of Medicine, Chair of Health Economics, Policy and Management, University of St. Gallen, 9000 St. Gallen, Switzerland
- M Blum: Krebsliga Ostschweiz, Flurhofstrasse 7, 9000 St. Gallen, Switzerland
- A Eichenberger: Krebsliga Ostschweiz, Flurhofstrasse 7, 9000 St. Gallen, Switzerland
- A Geissler: School of Medicine, Chair of Health Economics, Policy and Management, University of St. Gallen, 9000 St. Gallen, Switzerland
4. Zubair M, Owais M, Mahmood T, Iqbal S, Usman SM, Hussain I. Enhanced gastric cancer classification and quantification interpretable framework using digital histopathology images. Sci Rep 2024;14:22533. PMID: 39342030; PMCID: PMC11439054; DOI: 10.1038/s41598-024-73823-9
Abstract
Recent developments have highlighted the critical role that computer-aided diagnosis (CAD) systems play in analyzing whole-slide digital histopathology images for detecting gastric cancer (GC). We present a novel framework for gastric histology classification and segmentation (GHCS) that offers modest yet meaningful improvements over existing CAD models for GC classification and segmentation. Our methodology achieves marginal improvements over conventional deep learning (DL) and machine learning (ML) models by adaptively focusing on pertinent characteristics of images. A notable aspect of our study is that the proposed model, which performs well on normalized images, is robust in certain respects, particularly in handling variability and generalizing to different datasets; we anticipate that this robustness will lead to better results across various datasets. An expectation-maximizing Naïve Bayes classifier that uses an updated Gaussian Mixture Model is at the heart of the suggested GHCS framework. The effectiveness of our classifier is demonstrated by experimental validation on two publicly available datasets, which produced classification accuracies of 98.87% and 97.28% on validation sets and 98.47% and 97.31% on test sets. Our framework shows a slight but consistent improvement over existing techniques in gastric histopathology image classification tasks, as demonstrated by comparative analysis; this may be attributed to its ability to better capture critical features of gastric histopathology images. Furthermore, using an improved Fuzzy c-means method, our study produces good results in GC histopathology image segmentation, outperforming state-of-the-art segmentation models with a Dice coefficient of 65.21% and a Jaccard index of 60.24%.
The model's interpretability is complemented by Grad-CAM visualizations, which help understand the decision-making process and increase the model's trustworthiness for end-users, especially clinicians.
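The Dice coefficient and Jaccard index reported above are standard overlap metrics between a predicted mask and the ground truth; a minimal numpy sketch:

```python
import numpy as np

def dice_and_jaccard(pred, gt):
    """Overlap metrics between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())        # 2|A∩B| / (|A|+|B|)
    jaccard = inter / np.logical_or(pred, gt).sum()   # |A∩B| / |A∪B|
    return dice, jaccard

# Two partially overlapping 6x6 squares on a 10x10 grid (16 px overlap).
pred = np.zeros((10, 10), int); pred[2:8, 2:8] = 1
gt = np.zeros((10, 10), int);   gt[4:10, 4:10] = 1
d, j = dice_and_jaccard(pred, gt)   # d = 32/72, j = 16/56
```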
Affiliation(s)
- Muhammad Zubair: Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab, Pakistan
- Muhammad Owais: Khalifa University Center for Autonomous Robotic Systems (KUCARS) and Department of Mechanical & Nuclear Engineering, Khalifa University, Abu Dhabi, United Arab Emirates
- Tahir Mahmood: Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Korea
- Saeed Iqbal: Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab, Pakistan
- Syed Muhammad Usman: Department of Computer Science, School of Engineering and Applied Sciences, Bahria University, Islamabad, Pakistan
- Irfan Hussain: Khalifa University Center for Autonomous Robotic Systems (KUCARS) and Department of Mechanical & Nuclear Engineering, Khalifa University, Abu Dhabi, United Arab Emirates
5. Rogalla P, Fratesi J, Kandel S, Patsios D, Khalvati F, Carey S. Development and Evaluation of an Automated Protocol Recommendation System for Chest CT Using Natural Language Processing With CLEVER Terminology Word Replacement. Can Assoc Radiol J 2024:8465371241280219. PMID: 39315514; DOI: 10.1177/08465371241280219
Abstract
Purpose: To evaluate the clinical performance of a Protocol Recommendation System (PRS) for automatic protocolling of chest CT imaging requests. Materials and Methods: 322,387 consecutive historical imaging requests for chest CT between 2017 and 2022 were extracted from a radiology information system (RIS) database containing 16 associated patient information values. Records with missing fields and protocols with <100 occurrences were removed, leaving 18 protocols for training. After free-text pre-processing and applying CLEVER terminology word replacements, the features of a bag-of-words model were used to train a multinomial logistic regression classifier. Four readers protocolled 300 clinically executed protocols (CEP) based on all clinically available information. After their selections were made, the PRS and CEP were unblinded, and the readers were asked to score their agreement (1 = severe error, 2 = moderate error, 3 = disagreement but acceptable, 4 = agreement). The ground truth was established by the readers' majority selection, with a judge breaking ties. For the PRS and CEP, accuracy and clinical acceptability (scores 3 and 4) were calculated. The readers' protocolling reliability was measured using Fleiss' kappa. Results: Four readers agreed on 203/300 protocols, 3 agreed on 82/300 cases, and in 15 cases a judge was needed. PRS errors were found by the 4 readers in 1%, 2.7%, 1%, and 0.7% of the cases, respectively. The accuracy/clinical acceptability of the PRS and CEP were 84.3%/98.6% and 83.0%/99.3%, respectively. The Fleiss' kappa for all readers and all protocols was 0.805. Conclusion: The PRS achieved accuracy similar to human performance and may help radiologists master the ever-increasing workload.
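The preprocessing pipeline (terminology word replacement followed by bag-of-words features) can be sketched as below. The replacement dictionary here is hypothetical, as the actual CLEVER terminology map is not reproduced; the resulting count matrix is what would feed a multinomial classifier.

```python
import re
from collections import Counter

# Hypothetical terminology map in the spirit of CLEVER word replacement.
REPLACEMENTS = {"ca": "cancer", "mets": "metastasis", "r/o": "rule_out"}

def preprocess(request):
    """Lowercase, tokenize, and normalize free-text terminology."""
    tokens = re.findall(r"[a-z0-9/]+", request.lower())
    return [REPLACEMENTS.get(t, t) for t in tokens]

def bag_of_words(requests):
    """Token-count feature rows plus the shared vocabulary."""
    docs = [Counter(preprocess(r)) for r in requests]
    vocab = sorted(set().union(*docs))
    return [[d[t] for t in vocab] for d in docs], vocab

X, vocab = bag_of_words(["R/O lung ca", "known mets, follow-up"])
```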
Affiliation(s)
- Patrik Rogalla: Joint Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Jennifer Fratesi: Joint Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Sonja Kandel: Joint Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Demetris Patsios: Joint Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Farzad Khalvati: Departments of Medical Imaging and Computer Science, University of Toronto, Toronto, ON, Canada
- Sean Carey: Joint Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
6. Abel L, Wasserthal J, Meyer MT, Vosshenrich J, Yang S, Donners R, Obmann M, Boll D, Merkle E, Breit HC, Segeroth M. Intra-Individual Reproducibility of Automated Abdominal Organ Segmentation: Performance of TotalSegmentator Compared to Human Readers and an Independent nnU-Net Model. J Imaging Inform Med 2024. PMID: 39294417; DOI: 10.1007/s10278-024-01265-w
Abstract
The purpose of this study is to assess the segmentation reproducibility of the artificial intelligence-based algorithm TotalSegmentator across 34 anatomical structures using multiphasic abdominal CT scans, comparing unenhanced, arterial, and portal venous phases in the same patients. A total of 1252 multiphasic abdominal CT scans acquired at our institution between January 1, 2012, and December 31, 2022, were retrospectively included. TotalSegmentator was used to derive volumetric measurements of 34 abdominal organs and structures from a total of 3756 CT series. Reproducibility was evaluated across three contrast phases per CT and compared to two human readers and an independent nnU-Net trained on the BTCV dataset. Relative deviations in segmented volumes and absolute volume deviations (AVD) were reported. Volume deviation within 5% was considered reproducible; thus, non-inferiority testing was conducted using a 5% margin. Twenty-nine out of 34 structures had volume deviations within 5% and were considered reproducible. Volume deviations for the adrenal glands, gallbladder, spleen, and duodenum were above 5%. The highest reproducibility was observed for bones (-0.58% [95% CI: -0.58, -0.57]) and muscles (-0.33% [-0.35, -0.32]). Among abdominal organs, the volume deviation was 1.67% (1.60, 1.74). TotalSegmentator outperformed the reproducibility of the nnU-Net trained on the BTCV dataset with an AVD of 6.50% (6.41, 6.59) vs. 10.03% (9.86, 10.20; p < 0.0001), most notably in cases with pathologic findings. Similarly, TotalSegmentator's AVD between different contrast phases was superior to the interreader AVD for the same contrast phase (p = 0.036). TotalSegmentator demonstrated high intra-individual reproducibility for most abdominal structures in multiphasic abdominal CT scans. Although reproducibility was lower in pathologic cases, it outperforms both human readers and an nnU-Net trained on the BTCV dataset.
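The reproducibility criterion (relative volume deviation within a 5% margin across contrast phases) can be sketched as follows, using hypothetical per-phase volumes; the study's per-structure statistics are not reproduced here:

```python
import numpy as np

def relative_deviation(volumes):
    """Per-phase deviation from the mean volume across contrast phases, in %."""
    volumes = np.asarray(volumes, float)   # e.g. [unenhanced, arterial, venous]
    mean = volumes.mean()
    return 100 * (volumes - mean) / mean

# Hypothetical liver volumes (mL) for one patient across three phases.
dev = relative_deviation([1510.0, 1496.0, 1503.0])
reproducible = np.all(np.abs(dev) < 5.0)   # within the 5% margin
```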
Affiliation(s)
- Lorraine Abel: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Jakob Wasserthal: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Manfred T Meyer: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Jan Vosshenrich: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Shan Yang: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Ricardo Donners: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Markus Obmann: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Daniel Boll: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Elmar Merkle: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Hanns-Christian Breit: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Martin Segeroth: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
7. Ravipati A, Elman SA. The state of artificial intelligence for systemic dermatoses: Background and applications for psoriasis, systemic sclerosis, and much more. Clin Dermatol 2024;42:487-491. PMID: 38909858; DOI: 10.1016/j.clindermatol.2024.06.019
Abstract
Artificial intelligence (AI) has been steadily integrated into dermatology, with AI platforms already attempting to identify skin cancers and diagnose benign versus malignant lesions. Although not as widely known, AI programs have also been utilized as diagnostic and prognostic tools for dermatologic conditions with systemic or extracutaneous involvement, especially for diseases with autoimmune etiologies. We have provided a primer on commonly used AI platforms and the practical applicability of these algorithms in dealing with psoriasis, systemic sclerosis, and dermatomyositis as a microcosm for future directions in the field. With a rapidly changing landscape in dermatology and medicine as a whole, AI could be a versatile tool to support clinicians and enhance access to care.
Affiliation(s)
- Advaitaa Ravipati: Dr. Phillip Frost Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, Miami, Florida, USA
- Scott A Elman: Dr. Phillip Frost Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, Miami, Florida, USA
8. Rahman MF, Tseng TL, Pokojovy M, McCaffrey P, Walser E, Moen S, Vo A, Ho JC. Machine-Learning-Enabled Diagnostics with Improved Visualization of Disease Lesions in Chest X-ray Images. Diagnostics (Basel) 2024;14:1699. PMID: 39202188; PMCID: PMC11353848; DOI: 10.3390/diagnostics14161699
Abstract
The class activation map (CAM) represents the neural-network-derived region of interest, which can help clarify the mechanism of a convolutional neural network's determination of any class of interest. In medical imaging, it can help medical practitioners diagnose diseases like COVID-19 or pneumonia by highlighting the suspicious regions in computed tomography (CT) or chest X-ray (CXR) images. Many contemporary deep learning techniques only focus on COVID-19 classification tasks using CXRs, while few attempt to make the results explainable with a saliency map. To fill this research gap, we first propose a VGG-16-architecture-based deep learning approach in combination with image enhancement, segmentation-based region of interest (ROI) cropping, and data augmentation steps to enhance classification accuracy. Then, a multi-layer Gradient CAM (ML-Grad-CAM) algorithm is integrated to generate a class-specific saliency map for improved visualization in CXR images. We also define and calculate a Severity Assessment Index (SAI) from the saliency map to quantitatively measure infection severity. The trained model achieved an accuracy score of 96.44% for the three-class CXR classification task, i.e., COVID-19, pneumonia, and normal (healthy patients), outperforming many existing techniques in the literature. The saliency maps generated by the proposed ML-Grad-CAM algorithm are compared with those of the original Grad-CAM algorithm.
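The saliency-map step can be illustrated with the classic single-layer Grad-CAM weighting (a generic numpy sketch on random tensors; the paper's multi-layer ML-Grad-CAM variant, the VGG-16 backbone, and the SAI computation are not reproduced here):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Classic Grad-CAM: weight feature maps by pooled gradients, then ReLU.

    activations: (K, H, W) feature maps of the last conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    """
    weights = gradients.mean(axis=(1, 2))                  # alpha_k via GAP
    cam = np.maximum((weights[:, None, None] * activations).sum(0), 0)
    return cam / cam.max() if cam.max() > 0 else cam       # scale to [0, 1]

# Stand-in tensors; in practice these come from a backward pass.
rng = np.random.default_rng(1)
cam = grad_cam(rng.random((8, 7, 7)), rng.random((8, 7, 7)))
```

The resulting low-resolution map is upsampled to the CXR size and overlaid as a heatmap.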
Affiliation(s)
- Md Fashiar Rahman: Department of Industrial, Manufacturing and Systems Engineering, The University of Texas, El Paso, TX 79968, USA
- Tzu-Liang (Bill) Tseng: Department of Industrial, Manufacturing and Systems Engineering, The University of Texas, El Paso, TX 79968, USA
- Michael Pokojovy: Department of Mathematics and Statistics, Old Dominion University, Norfolk, VA 23529, USA
- Peter McCaffrey: Department of Radiology, The University of Texas Medical Branch, Galveston, TX 77550, USA
- Eric Walser: Department of Radiology, The University of Texas Medical Branch, Galveston, TX 77550, USA
- Scott Moen: Department of Radiology, The University of Texas Medical Branch, Galveston, TX 77550, USA
- Alex Vo: Department of Radiology, The University of Texas Medical Branch, Galveston, TX 77550, USA
- Johnny C. Ho: Department of Management and Marketing, Turner College of Business, Columbus State University, Columbus, GA 31907, USA
9. Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. J Am Coll Radiol 2024;21:1292-1310. PMID: 38276923; DOI: 10.1016/j.jacr.2023.12.005
Abstract
Artificial intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for their utility and to differentiate safe product offerings from potentially harmful or fundamentally unhelpful ones. This multi-society paper, presenting the views of radiology societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, Alabama; American College of Radiology Data Science Institute, Reston, Virginia
- Jaron Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler: Radiology Partners, El Segundo, California; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California
- John Mongan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, California
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos: Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang: Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald: Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts; Tufts University Medical School, Boston, Massachusetts; Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia
- John Slavotinek: South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia; College of Medicine and Public Health, Flinders University, Adelaide, Australia
10. Li H, Han J, Zhang H, Zhang X, Si Y, Zhang Y, Liu Y, Yang H. Clinical knowledge-based ECG abnormalities detection using dual-view CNN-Transformer and external attention mechanism. Comput Biol Med 2024;178:108751. PMID: 38936078; DOI: 10.1016/j.compbiomed.2024.108751
Abstract
BACKGROUND Automatic abnormality detection based on the electrocardiogram (ECG) contributes greatly to early prevention, computer-aided diagnosis, and dynamic analysis of cardiovascular diseases. To achieve cardiologist-level performance, deep neural networks have been widely utilized to extract abstract feature representations. However, the mechanical stacking of numerous computationally intensive operations makes traditional deep neural networks suffer from inadequate learning, poor interpretability, and high complexity. METHOD To address these limitations, a clinical knowledge-based ECG abnormality detection model using a dual-view CNN-Transformer and an external attention mechanism is proposed, mimicking the diagnostic process of clinicians. Reflecting the clinical knowledge that both the detailed waveform changes within a single heartbeat and the global changes throughout the entire recording play complementary roles in abnormality detection, we present a dual-view CNN-Transformer to extract and fuse spatial-temporal features from different views. In addition, the locations in the ECG where abnormalities occur provide more information than other areas. Therefore, two external attention mechanisms are designed and added to the corresponding views to help the network learn efficiently. RESULTS Experiment results on the 9-class dataset show that the proposed model achieves an average F1-score of 0.854±0.01 with higher interpretability and lower complexity, outperforming the state-of-the-art model. CONCLUSIONS Combining all these features, this study provides a credible solution for automatic ECG abnormality detection.
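An external attention layer can be sketched loosely in numpy: two small shared memory matrices replace the query-key-value interaction of self-attention. This is a simplified, generic illustration; the paper's dual-view, ECG-specific design is not reproduced, and the memory sizes here are arbitrary.

```python
import numpy as np

def external_attention(x, m_k, m_v):
    """Simplified external attention over a feature sequence.

    x:   (T, d) sequence features
    m_k: (S, d) external key memory, m_v: (S, d) external value memory
    """
    attn = x @ m_k.T                         # (T, S) affinity with memory slots
    attn = np.exp(attn - attn.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)      # softmax over memory slots
    attn /= attn.sum(0, keepdims=True)       # double normalization across tokens
    return attn @ m_v                        # (T, d) re-expressed features

rng = np.random.default_rng(0)
out = external_attention(rng.standard_normal((50, 16)),
                         rng.standard_normal((8, 16)),
                         rng.standard_normal((8, 16)))
```

Because the memories are shared across all samples, the layer is linear in sequence length, which suits long ECG recordings.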
Affiliation(s)
- Hui Li: School of Life Sciences, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China; Engineering Research Center of Chinese Ministry of Education for Biological Diagnosis, Treatment and Protection Technology, Xi'an, Shaanxi 710072, China
- Jiyang Han: School of Life Sciences, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China; Engineering Research Center of Chinese Ministry of Education for Biological Diagnosis, Treatment and Protection Technology, Xi'an, Shaanxi 710072, China
- Honghao Zhang: School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
- Xi Zhang: School of Life Sciences, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China; Engineering Research Center of Chinese Ministry of Education for Biological Diagnosis, Treatment and Protection Technology, Xi'an, Shaanxi 710072, China
- Yingjun Si: School of Life Sciences, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China; Engineering Research Center of Chinese Ministry of Education for Biological Diagnosis, Treatment and Protection Technology, Xi'an, Shaanxi 710072, China
- Yu Zhang: School of Computer Science, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
- Yu Liu: Department of Cardiology, Nanjing University Medical School Affiliated Nanjing Drum Tower Hospital, Nanjing 210008, China
- Hui Yang: School of Life Sciences, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China; Engineering Research Center of Chinese Ministry of Education for Biological Diagnosis, Treatment and Protection Technology, Xi'an, Shaanxi 710072, China
11. Bardoni C, Spaggiari L, Bertolaccini L. Artificial intelligence in lung cancer. Ann Transl Med 2024;12:79. PMID: 39118944; PMCID: PMC11304431; DOI: 10.21037/atm-22-2918
Affiliation(s)
- Claudia Bardoni: Department of Thoracic Surgery, IEO, European Institute of Oncology IRCCS, Milan, Italy
- Lorenzo Spaggiari: Department of Thoracic Surgery, IEO, European Institute of Oncology IRCCS, Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Luca Bertolaccini: Department of Thoracic Surgery, IEO, European Institute of Oncology IRCCS, Milan, Italy
Collapse
12
Martinez-Murcia FJ, Arco JE, Jimenez-Mesa C, Segovia F, Illan IA, Ramirez J, Gorriz JM. Bridging Imaging and Clinical Scores in Parkinson's Progression via Multimodal Self-Supervised Deep Learning. Int J Neural Syst 2024; 34:2450043. [PMID: 38770651] [DOI: 10.1142/s0129065724500436]
Abstract
Neurodegenerative diseases pose a formidable challenge to medical research, demanding a nuanced understanding of their progressive nature. In this regard, latent generative models can be used effectively for data-driven modeling of different dimensions of neurodegeneration, framed within the context of the manifold hypothesis. This paper proposes a joint framework for a multi-modal, common latent generative model to address the need for a more comprehensive understanding of the neurodegenerative landscape in the context of Parkinson's disease (PD). The proposed architecture uses coupled variational autoencoders (VAEs) to jointly model a common latent space for both neuroimaging and clinical data from the Parkinson's Progression Markers Initiative (PPMI). Alternative loss functions, different normalization procedures, and the interpretability and explainability of latent generative models are addressed, leading to a model that was able to predict clinical symptomatology in the test set, as measured by the Unified Parkinson's Disease Rating Scale (UPDRS), with R² up to 0.86 for same-modality prediction and 0.441 for cross-modality prediction (using solely neuroimaging). The findings provide a foundation for further advancements in clinical research and practice, with potential applications in decision-making processes for PD. The study also highlights the limitations and capabilities of the proposed model, emphasizing its direct interpretability and its potential impact on understanding and interpreting neuroimaging patterns associated with PD symptomatology.
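As context for the R² values this abstract reports, a minimal generic sketch of the coefficient of determination between observed and predicted scores (illustrative only, not the authors' code; the example values are hypothetical):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# A perfect prediction yields R^2 = 1.0; always predicting the mean yields 0.0.
print(r2_score([10.0, 20.0, 30.0], [12.0, 19.0, 29.0]))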
Affiliation(s)
- Francisco J Martinez-Murcia: Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain; Center for Advanced Studies, Ludwig-Maximilians-Universität München, München, Germany; Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Juan Eloy Arco: Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain; Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Carmen Jimenez-Mesa: Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain; Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Fermin Segovia: Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain; Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Ignacio A Illan: Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain; Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Javier Ramirez: Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain; Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Juan Manuel Gorriz: Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain; Center for Advanced Studies, Ludwig-Maximilians-Universität München, München, Germany; Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
13
Hong S, Wu J, Zhu L, Chen W. Brain tumor classification in VIT-B/16 based on relative position encoding and residual MLP. PLoS One 2024; 19:e0298102. [PMID: 38954731] [PMCID: PMC11218980] [DOI: 10.1371/journal.pone.0298102]
Abstract
Brain tumors pose a significant threat to health, and their early detection and classification are crucial. Currently, the diagnosis heavily relies on pathologists conducting time-consuming morphological examinations of brain images, leading to subjective outcomes and potential misdiagnoses. In response to these challenges, this study proposes an improved Vision Transformer-based algorithm for human brain tumor classification. To overcome the limitations of small existing datasets, Homomorphic Filtering, Channels Contrast Limited Adaptive Histogram Equalization, and Unsharp Masking techniques are applied to enrich dataset images, enhancing information and improving model generalization. Addressing the limitation of the Vision Transformer's self-attention structure in capturing input token sequences, a novel relative position encoding method is employed to enhance the overall predictive capabilities of the model. Furthermore, the introduction of residual structures in the Multi-Layer Perceptron tackles convergence degradation during training, leading to faster convergence and enhanced algorithm accuracy. Finally, this study comprehensively analyzes the network model's performance on validation sets in terms of accuracy, precision, and recall. Experimental results demonstrate that the proposed model achieves a classification accuracy of 91.36% on an augmented open-source brain tumor dataset, surpassing the original VIT-B/16 accuracy by 5.54%. This validates the effectiveness of the proposed approach in brain tumor classification, offering potential reference for clinical diagnoses by medical practitioners.
Affiliation(s)
- Shuang Hong: School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Jin Wu: School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Lei Zhu: School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Weijie Chen: School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, Hubei, China
14
Dominguez-Morales JP, Duran-Lopez L, Marini N, Vicente-Diaz S, Linares-Barranco A, Atzori M, Müller H. A systematic comparison of deep learning methods for Gleason grading and scoring. Med Image Anal 2024; 95:103191. [PMID: 38728903] [DOI: 10.1016/j.media.2024.103191]
Abstract
Prostate cancer is the second most frequent cancer in men worldwide after lung cancer. Its diagnosis is based on the Gleason score, which evaluates the abnormality of cells in glands through analysis of the different Gleason patterns within tissue samples. Recent advancements in computational pathology, a domain aiming to develop algorithms that automatically analyze digitized histopathology images, have led to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of the data and labels. This paper provides a systematic comparison on nine datasets of state-of-the-art training approaches for deep neural networks (including fully supervised learning, weakly supervised learning, semi-supervised learning, Additive-MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL, and CLAM) applied to Gleason grading and scoring tasks. The nine datasets are collected from pathology institutes and openly accessible repositories. The results show that the best methods for the Gleason grading and Gleason scoring tasks are fully supervised learning and CLAM, respectively, guiding researchers to the best practice to adopt depending on the task to solve and the labels that are available.
Affiliation(s)
- Juan P Dominguez-Morales: Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Lourdes Duran-Lopez: Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Niccolò Marini: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Centre Universitaire d'Informatique, University of Geneva, Carouge 1227, Switzerland
- Saturnino Vicente-Diaz: Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Alejandro Linares-Barranco: Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Manfredo Atzori: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Department of Neuroscience, University of Padua, Via Giustiniani 2, Padua, 35128, Italy
- Henning Müller: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Medical Faculty, University of Geneva, Geneva 1211, Switzerland
15
Wu R, Lu X, Yao Z, Ma Y. MFMSNet: A Multi-frequency and Multi-scale Interactive CNN-Transformer Hybrid Network for breast ultrasound image segmentation. Comput Biol Med 2024; 177:108616. [PMID: 38795419] [DOI: 10.1016/j.compbiomed.2024.108616]
Abstract
Breast tumor segmentation in ultrasound images is fundamental for quantitative analysis and plays a crucial role in the diagnosis and treatment of breast cancer. Existing methods have mainly focused on spatial-domain implementations, with less attention to the frequency domain. In this paper, we propose a Multi-frequency and Multi-scale Interactive CNN-Transformer Hybrid Network (MFMSNet). Specifically, we utilize Octave convolutions instead of conventional convolutions to effectively separate high-frequency and low-frequency components while reducing computational complexity. The Multi-frequency Transformer block (MF-Trans) enables efficient interaction between high-frequency and low-frequency information, thereby capturing long-range dependencies. Additionally, we incorporate a Multi-scale Interactive Fusion module (MSIF) to merge high-frequency feature maps of different sizes, enhancing the emphasis on tumor edges by integrating local contextual information. Experimental results demonstrate the superiority of MFMSNet over seven state-of-the-art methods on two publicly available breast ultrasound datasets and one thyroid ultrasound dataset. MFMSNet was evaluated on the BUSI, BUI, and DDTI datasets, whose test sets comprise 130, 47, and 128 images, respectively. Using five-fold cross-validation, the obtained Dice coefficients are 83.42% (BUSI), 90.79% (BUI), and 79.96% (DDTI). The code is available at https://github.com/wrc990616/MFMSNet.
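For reference, the Dice coefficient used in segmentation evaluations like the one above can be computed for binary masks as follows (a generic sketch, not code from the linked repository; the example masks are hypothetical):

```python
def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks (flat 0/1 lists)."""
    intersection = sum(1 for p, t in zip(pred, target) if p and t)
    denom = sum(pred) + sum(target)
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / denom

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
print(dice_coefficient(pred, target))  # 2*2 / (3+3) ≈ 0.667
```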
Affiliation(s)
- Ruichao Wu: School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Xiangyu Lu: School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Zihuan Yao: School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Yide Ma: School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
16
Yilmaz S, Tasyurek M, Amuk M, Celik M, Canger EM. Developing deep learning methods for classification of teeth in dental panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:118-127. [PMID: 37316425] [DOI: 10.1016/j.oooo.2023.02.021]
Abstract
OBJECTIVES We aimed to develop an artificial intelligence-based clinical dental decision-support system using deep-learning methods to reduce diagnostic interpretation error and time and to increase the effectiveness of dental treatment and classification. STUDY DESIGN We compared the performance of 2 deep-learning methods, You Only Look Once V4 (YOLO-V4) and Faster Regions with Convolutional Neural Networks (Faster R-CNN), for tooth classification in dental panoramic radiography to determine which is more successful in terms of accuracy, time, and detection ability. Using a method based on deep-learning models trained on a semantic segmentation task, we analyzed 1200 panoramic radiographs selected retrospectively. In the classification process, our model identified 36 classes, comprising 32 teeth and 4 impacted teeth. RESULTS The YOLO-V4 method achieved a mean 99.90% precision, 99.18% recall, and 99.54% F1 score. The Faster R-CNN method achieved a mean 93.67% precision, 90.79% recall, and 92.21% F1 score. Experimental evaluations showed that the YOLO-V4 method outperformed the Faster R-CNN method in terms of accuracy of predicted teeth, speed of tooth classification, and ability to detect impacted and erupted third molars. CONCLUSIONS The YOLO-V4 method outperforms the Faster R-CNN method in accuracy of tooth prediction, speed of detection, and ability to detect impacted and erupted third molars. The proposed deep-learning-based methods can assist dentists in clinical decision-making, save time, and reduce the negative effects of stress and fatigue in daily practice.
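The precision, recall, and F1 figures reported in this abstract relate as follows; a generic sketch using hypothetical detection counts, not the study's data:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Compute precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts: 9 correct detections, 1 false alarm, 3 missed teeth.
p, r, f1 = precision_recall_f1(tp=9, fp=1, fn=3)
print(round(p, 4), round(r, 4), round(f1, 4))  # 0.9 0.75 0.8182
```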
Affiliation(s)
- Serkan Yilmaz: Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
- Murat Tasyurek: Department of Computer Engineering, Kayseri University, Kayseri, Turkey
- Mehmet Amuk: Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
- Mete Celik: Department of Computer Engineering, Erciyes University, Kayseri, Turkey
- Emin Murat Canger: Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
17
Kasuga I, Yokoe Y, Gamo S, Sugiyama T, Tokura M, Noguchi M, Okayama M, Nagakura R, Ohmori N, Tsuchiya T, Sofuni A, Itoi T, Ohtsubo O. Which is a real valuable screening tool for lung cancer and measure thoracic diseases, chest radiography or low-dose computed tomography?: A review on the current status of Japan and other countries. Medicine (Baltimore) 2024; 103:e38161. [PMID: 38728453] [PMCID: PMC11081589] [DOI: 10.1097/md.0000000000038161]
Abstract
Chest radiography (CR) has been used as a screening tool for lung cancer, and the use of low-dose computed tomography (LDCT) is not recommended in Japan. We need to reconsider whether CR really contributes to the early detection of lung cancer. In addition, despite their high frequency, the detection of other major thoracic diseases by CR and LDCT has not been discussed as thoroughly as lung cancer. We review the usefulness of CR and LDCT as screening tools for lung cancer and other thoracic diseases. For lung cancer, many studies have shown that LDCT enables earlier detection and improves outcomes compared with CR; a recent large randomized trial also supports these results. For chronic obstructive pulmonary disease (COPD), LDCT contributes to early detection and leads to the implementation of smoking-cessation treatments. For pulmonary infections, LDCT can reveal tiny inflammatory changes that are not observed on CR, though many of these cases improve spontaneously; LDCT screening for pulmonary infections may therefore be less useful, and CR is more suitable for their detection. For cardiovascular disease (CVD), CR may be the better screening tool for detecting cardiomegaly, whereas LDCT may be more useful for detecting vascular changes. In summary, LDCT may be the better screening tool for detecting lung cancer, COPD, and vascular changes, while CR may be the more suitable screening tool for pulmonary infections and cardiomegaly.
Affiliation(s)
- Ikuma Kasuga: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan; Department of Internal Medicine, Faculty of Medicine, Tokyo Medical University, Tokyo, Japan; Department of Nursing, Faculty of Human Care, Tohto University, Saitama, Japan
- Yoshimi Yokoe: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan
- Sanae Gamo: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan
- Tomoko Sugiyama: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan
- Michiyo Tokura: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan
- Maiko Noguchi: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan
- Mayumi Okayama: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan
- Rei Nagakura: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan
- Nariko Ohmori: Department of Medicine, Healthcare Center, Shinjuku Oiwake Clinic and Ladies Branch, Seikokai, Tokyo, Japan
- Takayoshi Tsuchiya: Department of Gastroenterology and Hepatology, Tokyo Medical University, Tokyo, Japan
- Atsushi Sofuni: Department of Gastroenterology and Hepatology, Tokyo Medical University, Tokyo, Japan; Department of Clinical Oncology, Tokyo Medical University, Tokyo, Japan
- Takao Itoi: Department of Gastroenterology and Hepatology, Tokyo Medical University, Tokyo, Japan
- Osamu Ohtsubo: Department of Nursing, Faculty of Human Care, Tohto University, Saitama, Japan; Department of Medicine, Kenkoigaku Association, Tokyo, Japan
18
Aasem M, Javed Iqbal M. Toward explainable AI in radiology: Ensemble-CAM for effective thoracic disease localization in chest X-ray images using weak supervised learning. Front Big Data 2024; 7:1366415. [PMID: 38756502] [PMCID: PMC11096460] [DOI: 10.3389/fdata.2024.1366415]
Abstract
Chest X-ray (CXR) imaging is widely employed by radiologists to diagnose thoracic diseases. Recently, many deep learning techniques have been proposed as computer-aided diagnostic (CAD) tools to assist radiologists in minimizing the risk of incorrect diagnosis. From an application perspective, these models have exhibited two major challenges: (1) They require large volumes of annotated data at the training stage and (2) They lack explainable factors to justify their outcomes at the prediction stage. In the present study, we developed a class activation mapping (CAM)-based ensemble model, called Ensemble-CAM, to address both of these challenges via weakly supervised learning by employing explainable AI (XAI) functions. Ensemble-CAM utilizes class labels to predict the location of disease in association with interpretable features. The proposed work leverages ensemble and transfer learning with class activation functions to achieve three objectives: (1) minimizing the dependency on strongly annotated data when locating thoracic diseases, (2) enhancing confidence in predicted outcomes by visualizing their interpretable features, and (3) optimizing cumulative performance via fusion functions. Ensemble-CAM was trained on three CXR image datasets and evaluated through qualitative and quantitative measures via heatmaps and Jaccard indices. The results reflect the enhanced performance and reliability in comparison to existing standalone and ensembled models.
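The Jaccard index used in the quantitative evaluation above measures overlap between a predicted localization and the ground-truth region. A minimal sketch over sets of pixel coordinates (an illustrative assumption, not the authors' implementation):

```python
def jaccard_index(pred_pixels, true_pixels):
    """Jaccard (IoU) = |A ∩ B| / |A ∪ B| for two sets of pixel coordinates."""
    a, b = set(pred_pixels), set(true_pixels)
    union = a | b
    if not union:  # both regions empty: treat as perfect agreement
        return 1.0
    return len(a & b) / len(union)

pred = {(0, 0), (0, 1), (1, 1)}
true = {(0, 1), (1, 1), (1, 2)}
print(jaccard_index(pred, true))  # |∩| = 2, |∪| = 4 → 0.5
```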
Affiliation(s)
- Muhammad Aasem: Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
19
Tyndall DA, Price JB, Gaalaas L, Spin-Neto R. Surveying the landscape of diagnostic imaging in dentistry's future: Four emerging technologies with promise. J Am Dent Assoc 2024; 155:364-378. [PMID: 38520421] [DOI: 10.1016/j.adaj.2024.01.005]
Abstract
BACKGROUND Advances in digital radiography for both intraoral and panoramic imaging and cone-beam computed tomography have led the way to an increase in diagnostic capabilities for the dental care profession. In this article, the authors provide information on 4 emerging technologies with promise. TYPES OF STUDIES REVIEWED The authors feature the following: artificial intelligence in the form of deep learning using convolutional neural networks, dental magnetic resonance imaging, stationary intraoral tomosynthesis, and second-generation cone-beam computed tomography sources based on carbon nanotube technology and multispectral imaging. The authors review and summarize articles featuring these technologies. RESULTS The history and background of these emerging technologies are previewed along with their development and potential impact on the practice of dental diagnostic imaging. The authors conclude that these emerging technologies have the potential to have a substantial influence on the practice of dentistry as these systems mature. The degree of influence most likely will vary, with artificial intelligence being the most influential of the 4. CONCLUSIONS AND PRACTICAL IMPLICATIONS The readers are informed about these emerging technologies and the potential effects on their practice going forward, giving them information on which to base decisions on adopting 1 or more of these technologies. The 4 technologies reviewed in this article have the potential to improve imaging diagnostics in dentistry thereby leading to better patient care and heightened professional satisfaction.
20
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. Can Assoc Radiol J 2024; 75:226-244. [PMID: 38251882] [DOI: 10.1177/08465371231222229]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, AL, USA; Data Science Institute, American College of Radiology, Reston, VA, USA
- Jaron Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler: Radiology Partners, El Segundo, CA, USA; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia
- Daniel Pinto Dos Santos: Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang: Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Christoph Wald: Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA; Tufts University Medical School, Boston, MA, USA; American College of Radiology, Reston, VA, USA
- John Slavotinek: South Australia Medical Imaging, Flinders Medical Centre, Adelaide, SA, Australia; College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
21
Wang AQ, Karaman BK, Kim H, Rosenthal J, Saluja R, Young SI, Sabuncu MR. A Framework for Interpretability in Machine Learning for Medical Imaging. IEEE Access 2024; 12:53277-53292. [PMID: 39421804] [PMCID: PMC11486155] [DOI: 10.1109/access.2024.3387702]
Abstract
Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness in what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common in both medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations in order to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and suggest future directions of interpretability research.
Affiliation(s)
- Alan Q Wang: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Batuhan K Karaman: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Heejong Kim: Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Jacob Rosenthal: Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA; Weill Cornell/Rockefeller/Sloan Kettering Tri-Institutional M.D.-Ph.D. Program, New York City, NY 10065, USA
- Rachit Saluja: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Sean I Young: Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA
- Mert R Sabuncu: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
22
Veiga-Canuto D, Cerdá Alberich L, Fernández-Patón M, Jiménez Pastor A, Lozano-Montoya J, Miguel Blanco A, Martínez de Las Heras B, Sangüesa Nebot C, Martí-Bonmatí L. Imaging biomarkers and radiomics in pediatric oncology: a view from the PRIMAGE (PRedictive In silico Multiscale Analytics to support cancer personalized diaGnosis and prognosis, Empowered by imaging biomarkers) project. Pediatr Radiol 2024; 54:562-570. [PMID: 37747582] [DOI: 10.1007/s00247-023-05770-y]
Abstract
This review paper presents the practical development of imaging biomarkers within the scope of the PRIMAGE (PRedictive In silico Multiscale Analytics to support cancer personalized diaGnosis and prognosis, Empowered by imaging biomarkers) project, as a noninvasive and reliable way to improve diagnosis and prognosis in pediatric oncology. The PRIMAGE project is a European multi-center research initiative focused on developing medical imaging-derived artificial intelligence (AI) solutions designed to enhance overall management and decision-making for two types of pediatric cancer: neuroblastoma and diffuse intrinsic pontine glioma. To this end, the PRIMAGE project has created an open-cloud platform that combines imaging, clinical, and molecular data with AI models developed from these data, creating a comprehensive decision-support environment for clinicians managing patients with these two cancers. A standardized data processing and analysis workflow was implemented to generate robust and reliable predictions for different clinical endpoints. Magnetic resonance (MR) image harmonization and registration were performed as part of the workflow. Subsequently, an automated tool for the detection and segmentation of tumors was trained and internally validated. The Dice similarity coefficient obtained for the independent validation dataset was 0.997, indicating compatibility with manual segmentation variability. Following this, radiomics and deep features were extracted and correlated with clinical endpoints. Finally, reproducible and relevant quantitative imaging features were integrated with clinical and molecular data to enrich both the predictive models and a set of visual analytics tools, making the PRIMAGE platform a complete clinical decision-aid system.
To advance research in this field and foster engagement with the wider research community, the PRIMAGE data repository and platform are currently being integrated into the European Federation for Cancer Images (EUCAIM), the largest European cancer imaging research infrastructure created to date.
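For context, the Dice similarity coefficient used above to validate the automated segmentation tool is a standard overlap measure between two binary masks. A minimal sketch, with illustrative names not taken from the PRIMAGE codebase:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) between two
    binary segmentation masks of identical shape."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 1.0 indicates perfect overlap; a score close to the inter-observer Dice, as reported above, suggests the automated tool operates within manual segmentation variability.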
Affiliation(s)
- Diana Veiga-Canuto
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain.
- Área Clínica de Imagen Médica, Hospital Universitari i Politècnic La Fe, Avinguda Fernando Abril Martorell, 106 Torre E planta 0, 46026, València, Spain.
- Leonor Cerdá Alberich
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain
- Matías Fernández-Patón
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain
- Ana Miguel Blanco
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain
- Blanca Martínez de Las Heras
- Pediatric Oncology Department, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre G planta 2, 46026, Valencia, Spain
- Cinta Sangüesa Nebot
- Área Clínica de Imagen Médica, Hospital Universitari i Politècnic La Fe, Avinguda Fernando Abril Martorell, 106 Torre E planta 0, 46026, València, Spain
- Luis Martí-Bonmatí
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain
- Área Clínica de Imagen Médica, Hospital Universitari i Politècnic La Fe, Avinguda Fernando Abril Martorell, 106 Torre E planta 0, 46026, València, Spain
23
Sindhu A, Jadhav U, Ghewade B, Bhanushali J, Yadav P. Revolutionizing Pulmonary Diagnostics: A Narrative Review of Artificial Intelligence Applications in Lung Imaging. Cureus 2024; 16:e57657. [PMID: 38707160 PMCID: PMC11070215 DOI: 10.7759/cureus.57657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2024] [Accepted: 04/04/2024] [Indexed: 05/07/2024] Open
Abstract
Artificial intelligence (AI) has emerged as a transformative force in healthcare, particularly in pulmonary diagnostics. This comprehensive review explores the impact of AI on revolutionizing lung imaging, focusing on its applications in detecting abnormalities, diagnosing pulmonary conditions, and predicting disease prognosis. We provide an overview of traditional pulmonary diagnostic methods and highlight the importance of accurate and efficient lung imaging for early intervention and improved patient outcomes. Through the lens of AI, we examine machine learning algorithms, deep learning techniques, and natural language processing for analyzing radiology reports. Case studies and examples showcase the successful implementation of AI in pulmonary diagnostics, alongside challenges faced and lessons learned. Finally, we discuss future directions, including integrating AI into clinical workflows, ethical considerations, and the need for further research and collaboration in this rapidly evolving field. This review underscores the transformative potential of AI in enhancing the accuracy, efficiency, and accessibility of pulmonary healthcare.
Affiliation(s)
- Arman Sindhu
- Respiratory Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Ulhas Jadhav
- Respiratory Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Babaji Ghewade
- Respiratory Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Jay Bhanushali
- Respiratory Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Pallavi Yadav
- Obstetrics and Gynecology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
24
Mărginean L, Ştefan PA, Filep RC, Csutak C, Lebovici A, Gherman D, Lupean RA, Suciu BA. Radiomics in the CT diagnosis of ovarian cystic malignancies - a pilot study. Med Pharm Rep 2024; 97:169-177. [PMID: 38746030 PMCID: PMC11090276 DOI: 10.15386/mpr-2594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 02/07/2023] [Accepted: 03/06/2023] [Indexed: 05/16/2024] Open
Abstract
Background and aims The conventional computed tomography (CT) appearance of ovarian cystic masses is often insufficient to adequately differentiate between benign and malignant entities. This study aims to investigate whether texture analysis of the fluid component can augment the CT diagnosis of ovarian cystic tumors. Methods Eighty-four patients with adnexal cystic lesions who underwent CT examinations were retrospectively included. All patients had a final diagnosis, established by histological analysis in forty-four cases. The texture features of the lesions' content were extracted using dedicated software and further used for comparing benign and malignant lesions, primary tumors and metastases, malignant and borderline lesions, and benign and borderline lesions. The discriminatory ability of the texture features was evaluated through univariate and receiver operating characteristic (ROC) analysis, and also by use of the k-nearest-neighbor classifier. Results The univariate analysis showed statistically significant results when comparing benign and malignant lesions (the Difference Variance parameter, p=0.0074) and malignant and borderline tumors (the Correlation parameter, p=0.488). The highest accuracy (83.33%) was achieved by the classifier when discriminating primary tumors from ovarian metastases. Conclusion Texture parameters were able to successfully discriminate between different types of ovarian cystic lesions based on their content, but it is not entirely clear whether these differences are a result of the physical properties of the fluids or their belonging to a particular histopathological group. If further validated, radiomics can offer a rapid and non-invasive alternative in the diagnosis of ovarian cystic tumors.
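The "Correlation" and "Difference Variance" parameters named in this abstract are Haralick-style features derived from a gray-level co-occurrence matrix (GLCM). As a rough illustration of the kind of computation the dedicated software performs (the function names, single pixel offset, and quantization level are simplifications, not the study's actual pipeline):

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized, symmetric gray-level co-occurrence matrix for a single
    pixel offset (dx, dy), for an integer image quantized to `levels`."""
    img = np.asarray(image)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    m = m + m.T                 # make symmetric
    return m / m.sum()          # joint probabilities

def glcm_correlation(p):
    """Haralick 'Correlation' feature of a normalized GLCM `p`."""
    i = np.arange(p.shape[0], dtype=float)
    pi, pj = p.sum(axis=1), p.sum(axis=0)          # marginal distributions
    mi, mj = (i * pi).sum(), (i * pj).sum()        # marginal means
    si = np.sqrt((((i - mi) ** 2) * pi).sum())     # marginal std devs
    sj = np.sqrt((((i - mj) ** 2) * pj).sum())
    cov = (np.outer(i - mi, i - mj) * p).sum()
    return cov / (si * sj)
```

In practice, radiomics toolkits average such features over multiple offsets and directions; `skimage.feature.graycomatrix` offers a tested implementation.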
Affiliation(s)
- Lucian Mărginean
- Radiology and Medical Imaging, Clinical Sciences Department, "George Emil Palade" University of Medicine, Pharmacy, Science, and Technology, Târgu Mureş, Romania
- Interventional Radiology Department, Târgu Mureş County Emergency Clinical Hospital, Târgu Mureş, Romania
- Paul-Andrei Ştefan
- Interventional Radiology Department, Târgu Mureş County Emergency Clinical Hospital, Târgu Mureş, Romania
- Department of Biomedical Imaging and Image-Guided Therapy, General Hospital of Vienna (AKH), Medical University of Vienna, Austria
- Department of Anatomy and Embriology, Morphological Sciences, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- Department of Radiology and Imaging, Cluj County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Rareş Cristian Filep
- Radiology and Medical Imaging, Clinical Sciences Department, "George Emil Palade" University of Medicine, Pharmacy, Science, and Technology, Târgu Mureş, Romania
- Interventional Radiology Department, Târgu Mureş County Emergency Clinical Hospital, Târgu Mureş, Romania
- Csaba Csutak
- Department of Radiology and Imaging, Cluj County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Department of Radiology and Imaging, Surgical Specialties, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- Andrei Lebovici
- Department of Radiology and Imaging, Cluj County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Department of Radiology and Imaging, Surgical Specialties, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- Diana Gherman
- Department of Radiology and Imaging, Cluj County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Department of Radiology and Imaging, Surgical Specialties, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- Roxana-Adelina Lupean
- Department of Histology, Morphological Sciences, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- "Dominic Stanca" Obstetrics and Gynecology Clinic, Cluj County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Bogdan Andrei Suciu
- First Surgical Clinic, Târgu Mureş County Emergency Clinical Hospital, Târgu Mureş, Romania
- Department of Anatomy, Morphological Sciences, "George Emil Palade" University of Medicine, Pharmacy, Science, and Technology, Târgu Mureş, Romania
25
Viar-Hernández D, Rodriguez-Vila B, Gil-Correa M, Malpica N, Torrado-Carvajal Á. A case study of medical image software evolution and its impact in the medical imaging community. Heliyon 2024; 10:e26408. [PMID: 38434256 PMCID: PMC10907511 DOI: 10.1016/j.heliyon.2024.e26408] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 02/12/2024] [Accepted: 02/13/2024] [Indexed: 03/05/2024] Open
Abstract
Objective: We present the evolution of medical imaging software and its impact on the medical imaging community through the study of four open-source image analysis software platforms: 3D Slicer, FreeSurfer, FSL, and SPM. Materials and methods: We have studied the impact of these software tools over time, measured by the number of scientific citations. Additionally, we have studied the evolution of the source code by measuring the lines of code and the tarball size of the stable releases, as well as the changes in programming languages. Results and discussion: The rising number of related scientific publications confirms the popularity of these software tools in the research community, although some differences in popularity can be observed among the tools. Moreover, we show that the source code has evolved toward modernization and optimization, at least partially thanks to collaboration and code sharing with the user community. Furthermore, this evolution reveals an increased use of higher-level programming languages and meta-languages. Conclusions: The study of four open-source packages has revealed certain patterns in the evolution of medical imaging software and its impact on the medical imaging community. Further analyses and complementary metrics are suggested.
Affiliation(s)
- David Viar-Hernández
- Medical Image Analysis and Biometry Laboratory, Universidad Rey Juan Carlos, Tulipán s/n, 28933, Madrid, Spain
- Borja Rodriguez-Vila
- Medical Image Analysis and Biometry Laboratory, Universidad Rey Juan Carlos, Tulipán s/n, 28933, Madrid, Spain
- Mario Gil-Correa
- Medical Image Analysis and Biometry Laboratory, Universidad Rey Juan Carlos, Tulipán s/n, 28933, Madrid, Spain
- Norberto Malpica
- Medical Image Analysis and Biometry Laboratory, Universidad Rey Juan Carlos, Tulipán s/n, 28933, Madrid, Spain
- Ángel Torrado-Carvajal
- Medical Image Analysis and Biometry Laboratory, Universidad Rey Juan Carlos, Tulipán s/n, 28933, Madrid, Spain
26
Chaibub Neto E, Yadav V, Sieberts SK, Omberg L. A novel estimator for the two-way partial AUC. BMC Med Inform Decis Mak 2024; 24:57. [PMID: 38378636 PMCID: PMC10877829 DOI: 10.1186/s12911-023-02382-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Accepted: 11/27/2023] [Indexed: 02/22/2024] Open
Abstract
BACKGROUND The two-way partial AUC has recently been proposed as a way to directly quantify the partial area under the ROC curve with simultaneous restrictions on the sensitivity and specificity ranges of diagnostic tests or classifiers. The metric, as originally implemented in the tpAUC R package, is estimated using a nonparametric estimator based on a trimmed Mann-Whitney U-statistic, which becomes computationally expensive at large sample sizes (its computational complexity is of order [Formula: see text], where [Formula: see text] and [Formula: see text] represent the number of positive and negative cases, respectively). This is problematic since the statistical methodology for comparing estimates generated from alternative diagnostic tests/classifiers relies on bootstrap resampling and requires repeated computation of the estimator on a large number of bootstrap samples. METHODS By leveraging the graphical and probabilistic representations of the AUC, partial AUCs, and two-way partial AUC, we derive a novel estimator for the two-way partial AUC, which can be directly computed from the output of any software able to compute AUC and partial AUCs. We implemented our estimator using the computationally efficient pROC R package, which leverages a nonparametric approach based on the trapezoidal rule for the computation of AUC and partial AUC scores (its computational complexity is of order [Formula: see text], where [Formula: see text]). We compare the empirical bias and computation time of the proposed estimator against the original estimator provided in the tpAUC package in a series of simulation studies and on two real datasets. RESULTS Our estimator tended to be less biased than the original estimator based on the trimmed Mann-Whitney U-statistic across all experiments (and showed considerably less bias in the experiments based on small sample sizes). Most importantly, because the computational complexity of the proposed estimator is of order [Formula: see text] rather than [Formula: see text], it is much faster to compute when sample sizes are large. CONCLUSIONS The proposed estimator improves the computation of the two-way partial AUC, and allows the comparison of diagnostic tests/machine learning classifiers in large datasets where repeated computation of the original estimator on bootstrap samples becomes too expensive.
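For readers unfamiliar with the metric, the two-way partial AUC is the area of the ROC region satisfying both a sensitivity floor and a false-positive-rate ceiling. A rough numpy-only sketch of a nonparametric estimate is given below; this is an illustrative interpolation-based approximation, not the paper's proposed estimator or the tpAUC/pROC implementations:

```python
import numpy as np

def empirical_roc(y_true, y_score):
    """Empirical ROC curve: (FPR, TPR) pairs over all score thresholds."""
    order = np.argsort(-np.asarray(y_score, dtype=float))
    y = np.asarray(y_true)[order]
    tpr = np.concatenate(([0.0], np.cumsum(y == 1) / np.sum(y == 1)))
    fpr = np.concatenate(([0.0], np.cumsum(y == 0) / np.sum(y == 0)))
    return fpr, tpr

def two_way_partial_auc(y_true, y_score, min_tpr=0.8, max_fpr=0.2, grid=10001):
    """Area of the ROC region with TPR >= min_tpr and FPR <= max_fpr,
    via trapezoidal integration of the linearly interpolated ROC curve."""
    fpr, tpr = empirical_roc(y_true, y_score)
    xs = np.linspace(0.0, max_fpr, grid)       # dense FPR grid on [0, max_fpr]
    ys = np.interp(xs, fpr, tpr)               # ROC height at each grid point
    h = np.clip(ys - min_tpr, 0.0, None)       # keep only the part above min_tpr
    return float(np.sum((h[1:] + h[:-1]) / 2.0 * np.diff(xs)))
```

With a perfectly separating classifier this returns approximately (1 − min_tpr) · max_fpr, the full area of the allowed rectangle above the sensitivity floor.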
Affiliation(s)
- Vijay Yadav
- Sage Bionetworks, 2901 Third Avenue, 98121, Seattle, USA
- Larsson Omberg
- Sage Bionetworks, 2901 Third Avenue, 98121, Seattle, USA
27
Alam MK, Alftaikhah SAA, Issrani R, Ronsivalle V, Lo Giudice A, Cicciù M, Minervini G. Applications of artificial intelligence in the utilisation of imaging modalities in dentistry: A systematic review and meta-analysis of in-vitro studies. Heliyon 2024; 10:e24221. [PMID: 38317889 PMCID: PMC10838702 DOI: 10.1016/j.heliyon.2024.e24221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2023] [Revised: 01/02/2024] [Accepted: 01/04/2024] [Indexed: 02/07/2024] Open
Abstract
Background In the past, dentistry heavily relied on manual image analysis and diagnostic procedures, which could be time-consuming and prone to human error. The advent of artificial intelligence (AI) has brought transformative potential to the field, promising enhanced accuracy and efficiency in various dental imaging tasks. This systematic review and meta-analysis aimed to comprehensively evaluate the applications of AI in dental imaging modalities, focusing on in-vitro studies. Methods A systematic literature search was conducted in accordance with the PRISMA guidelines. The following databases were systematically searched: PubMed/MEDLINE, Embase, Web of Science, Scopus, IEEE Xplore, Cochrane Library, CINAHL (Cumulative Index to Nursing and Allied Health Literature), and Google Scholar. The meta-analysis employed fixed-effects models to assess AI accuracy, calculating odds ratios (OR) for true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), and negative predictive value (NPV) with 95% confidence intervals (CI). Heterogeneity and overall effect tests were applied to ensure the reliability of the findings. Results Nine studies were selected, encompassing various objectives such as tooth segmentation and classification, caries detection, maxillofacial bone segmentation, and 3D surface model creation. AI techniques included convolutional neural networks (CNNs), deep learning algorithms, and AI-driven tools. Imaging parameters assessed in these studies were specific to the respective dental tasks. The analysis of combined ORs indicated higher odds of accurate dental image assessments, highlighting the potential for AI to improve TPR, TNR, PPV, and NPV. The studies collectively revealed a statistically significant overall effect in favor of AI in dental imaging applications. Conclusion In summary, this systematic review and meta-analysis underscore the transformative impact of AI on dental imaging.
AI has the potential to revolutionize the field by enhancing accuracy, efficiency, and time savings in various dental tasks. While further research in clinical settings is needed to validate these findings and address study limitations, the future implications of integrating AI into dental practice hold great promise for advancing patient care and the field of dentistry.
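As background on the fixed-effects pooling this abstract describes, a fixed-effect meta-analysis commonly combines study-level log odds ratios by inverse-variance weighting. A hypothetical minimal sketch (the review may use a different weighting scheme, such as Mantel-Haenszel; the 2x2 layout and function name are illustrative):

```python
import numpy as np

def pooled_odds_ratio(tables, z=1.96):
    """Inverse-variance fixed-effect pooled odds ratio with a ~95% CI.

    `tables` is a sequence of 2x2 counts (a, b, c, d):
    a/b = events/non-events in the exposed group, c/d in the control group.
    """
    a, b, c, d = np.asarray(tables, dtype=float).T
    log_or = np.log(a * d / (b * c))           # per-study log odds ratio
    w = 1.0 / (1/a + 1/b + 1/c + 1/d)          # inverse of Woolf variance
    pooled = (w * log_or).sum() / w.sum()
    se = 1.0 / np.sqrt(w.sum())
    return np.exp(pooled), np.exp(pooled - z * se), np.exp(pooled + z * se)
```

With a single study this reduces to that study's OR; with several, each study is weighted by the precision of its log OR estimate.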
Affiliation(s)
- Mohammad Khursheed Alam
- Preventive Dentistry Department, College of Dentistry, Jouf University, Sakaka, 72345, Saudi Arabia
- Department of Dental Research Cell, Saveetha Institute of Medical and Technical Sciences, Saveetha Dental College and Hospitals, Chennai, 600077, India
- Department of Public Health, Faculty of Allied Health Sciences, Daffodil International University, Dhaka, 1207, Bangladesh
- Rakhi Issrani
- Preventive Dentistry Department, College of Dentistry, Jouf University, Sakaka, 72345, Saudi Arabia
- Vincenzo Ronsivalle
- Department of Biomedical and Surgical and Biomedical Sciences, Catania University, 95123, Catania, Italy
- Antonino Lo Giudice
- Department of Biomedical and Surgical and Biomedical Sciences, Catania University, 95123, Catania, Italy
- Marco Cicciù
- Department of Biomedical and Surgical and Biomedical Sciences, Catania University, 95123, Catania, Italy
- Giuseppe Minervini
- Multidisciplinary Department of Medical-Surgical and Odontostomatological Specialties, University of Campania “Luigi Vanvitelli”, 80121, Naples, Italy
- Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Science (SIMATS), Saveetha University, Chennai, Tamil Nadu, India
28
Hassan J, Saeed SM, Deka L, Uddin MJ, Das DB. Applications of Machine Learning (ML) and Mathematical Modeling (MM) in Healthcare with Special Focus on Cancer Prognosis and Anticancer Therapy: Current Status and Challenges. Pharmaceutics 2024; 16:260. [PMID: 38399314 PMCID: PMC10892549 DOI: 10.3390/pharmaceutics16020260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2023] [Revised: 01/29/2024] [Accepted: 02/07/2024] [Indexed: 02/25/2024] Open
Abstract
The use of data-driven high-throughput analytical techniques, which has given rise to computational oncology, is undisputed. Machine learning (ML)- and mathematical modeling (MM)-based techniques are now widely used; these two approaches have fueled the advancement of cancer research and eventually led to the uptake of telemedicine in cancer care. For diagnostic, prognostic, and treatment purposes across different types of cancer research, vast databases of varied information with manifold dimensions are required, and this information can realistically only be managed by automated systems developed using ML and MM. In addition, MM is being used to probe the relationship between the pharmacokinetics and pharmacodynamics (PK/PD interactions) of anti-cancer substances to improve cancer treatment, and to refine the quality of existing treatment models by being incorporated at all steps of research and development related to cancer and in routine patient care. This review consolidates the advancements and benefits of ML and MM techniques, with a special focus on cancer prognosis and anticancer therapy, and identifies challenges (data quantity, ethical considerations, and data privacy) that current studies have yet to fully address.
Affiliation(s)
- Jasmin Hassan
- Drug Delivery & Therapeutics Lab, Dhaka 1212, Bangladesh; (J.H.); (S.M.S.)
- Lipika Deka
- Faculty of Computing, Engineering and Media, De Montfort University, Leicester LE1 9BH, UK;
- Md Jasim Uddin
- Department of Pharmaceutical Technology, Faculty of Pharmacy, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Diganta B. Das
- Department of Chemical Engineering, Loughborough University, Loughborough LE11 3TU, UK
29
Jiang L, Ma LY, Zeng TY, Ying SH. UFPS: A unified framework for partially annotated federated segmentation in heterogeneous data distribution. Patterns (N Y) 2024; 5:100917. [PMID: 38370123 PMCID: PMC10873159 DOI: 10.1016/j.patter.2024.100917] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 08/14/2023] [Accepted: 01/03/2024] [Indexed: 02/20/2024]
Abstract
Partially supervised segmentation is a label-saving approach that trains on datasets in which only a fraction of the classes are annotated and the labeled class subsets overlap across datasets. Its practical application in real-world medical scenarios is, however, hindered by privacy concerns and data heterogeneity. To address these issues without compromising privacy, federated partially supervised segmentation (FPSS) is formulated in this work. The primary challenges for FPSS are class heterogeneity and client drift. We propose a unified federated partially labeled segmentation (UFPS) framework to segment pixels within all classes for partially annotated datasets by training a comprehensive global model that avoids class collision. Our framework includes unified label learning (ULL) and sparse unified sharpness aware minimization (sUSAM) for class and feature space unification, respectively. Through empirical studies, we find that traditional methods in partially supervised segmentation and federated learning often struggle with class collision when combined. Our extensive experiments on real medical datasets demonstrate the better deconflicting and generalization capabilities of UFPS.
Affiliation(s)
- Le Jiang
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Li Yan Ma
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Tie Yong Zeng
- Department of Mathematics, Chinese University of Hong Kong, Hong Kong, China
- Shi Hui Ying
- Department of Mathematics, Shanghai University, Shanghai, China
30
Lombi L, Rossero E. How artificial intelligence is reshaping the autonomy and boundary work of radiologists. A qualitative study. Sociology of Health & Illness 2024; 46:200-218. [PMID: 37573551 DOI: 10.1111/1467-9566.13702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Accepted: 07/19/2023] [Indexed: 08/15/2023]
Abstract
The application of artificial intelligence (AI) in medical practice is spreading, especially in technologically dense fields such as radiology, which could consequently undergo profound transformations in the near future. This article aims to qualitatively explore the potential influence of AI technologies on the professional identity of radiologists. Drawing on 12 in-depth interviews with a subgroup of radiologists who participated in a larger study, this article investigated (1) whether radiologists perceived AI as a threat to their decision-making autonomy; and (2) how radiologists perceived the future of their profession compared to other health-care professions. The findings revealed that while AI did not generally affect radiologists' decision-making autonomy, it threatened their professional and epistemic authority. Two discursive strategies were identified to explain these findings. The first strategy emphasised radiologists' specific expertise and knowledge that extends beyond interpreting images, a task performed with high accuracy by AI machines. The second strategy underscored the fostering of radiologists' professional prestige through developing expertise in using AI technologies, a skill that would distinguish them from other clinicians who did not possess this knowledge. This study identifies AI machines as status objects and useful tools in performing boundary work in and around the radiological profession.
Affiliation(s)
- Linda Lombi
- Department of Sociology, Università Cattolica del Sacro Cuore, Milan, Italy
- Eleonora Rossero
- Fundamental Rights Laboratory, Collegio Carlo Alberto, Turin, Italy
31
Rabie AH, Saleh AI. Diseases diagnosis based on artificial intelligence and ensemble classification. Artif Intell Med 2024; 148:102753. [PMID: 38325931 DOI: 10.1016/j.artmed.2023.102753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 12/11/2023] [Accepted: 12/22/2023] [Indexed: 02/09/2024]
Abstract
BACKGROUND In recent years, Computer Aided Diagnosis (CAD) has become an important research area that attracted a lot of researchers. In medical diagnostic systems, several attempts have been made to build and enhance CAD applications to avoid errors that can cause dangerously misleading medical treatments. The most exciting opportunity for promoting the performance of CAD system can be accomplished by integrating Artificial Intelligence (AI) in medicine. This allows the effective automation of traditional manual workflow, which is slow, inaccurate and affected by human errors. AIMS This paper aims to provide a complete Computer Aided Disease Diagnosis (CAD2) strategy based on Machine Learning (ML) techniques that can help clinicians to make better medical decisions. METHODS The proposed CAD2 consists of three main sequential phases, namely; (i) Outlier Rejection Phase (ORP), (ii) Feature Selection Phase (FSP), and (iii) Classification Phase (CP). ORP is implemented to reject outliers using new Outlier Rejection Technique (ORT) that contains two sequential stages called Fast Outlier Rejection (FOR) and Accurate Outlier Rejection (AOR). The most informative features are selected through FSP using Hybrid Selection Technique (HST). HST includes two main stages called Quick Selection Stage (QS2) using fisher score as a filter method and Precise Selection Stage (PS2) using a Hybrid Bio-inspired Optimization (HBO) technique as a wrapper method. Finally, actual diagnose takes place through CP, which relies on Ensemble Classification Technique (ECT). RESULTS The proposed CAD2 has been tested experimentally against recent disease diagnostic strategies using two different datasets in which the first contains several diseases, while the second includes data for Covid-19 patients only. Experimental results have proven the high efficiency of the proposed CAD2 in terms of accuracy, error, precision, and recall compared with other competitors. 
Additionally, the CAD2 strategy achieved the best Wilcoxon signed-rank test and Friedman test results against the other strategies on both datasets. CONCLUSION The CAD2 strategy based on ORP, FSP, and CP gave a more accurate diagnosis than the other strategies, achieving the highest accuracy with the lowest error and implementation time.
Affiliation(s)
- Asmaa H Rabie
- Computer Engineering and Systems Dept., Faculty of Engineering, Mansoura University, Mansoura, Egypt.
- Ahmed I Saleh
- Computer Engineering and Systems Dept., Faculty of Engineering, Mansoura University, Mansoura, Egypt
32
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J Med Imaging Radiat Oncol 2024; 68:7-26. [PMID: 38259140 DOI: 10.1111/1754-9485.13612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2023] [Accepted: 11/23/2023] [Indexed: 01/24/2024]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, Alabama, USA
- American College of Radiology Data Science Institute, Reston, Virginia, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, California, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
| | - Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
| | - An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montreal, Quebec, Canada
| | - Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts, USA
- Tufts University Medical School, Boston, Massachusetts, USA
- Commision On Informatics, and Member, Board of Chancellors, American College of Radiology, Reston, Virginia, USA
| | - John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, South Australia, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia
| |
Collapse
|
33
|
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. Insights Imaging 2024; 15:16. [PMID: 38246898 PMCID: PMC10800328 DOI: 10.1186/s13244-023-01541-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2024] Open
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Key points:
• The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety.
• Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance.
• AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated.
Affiliation(s)
- Bibb Allen
  - Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
  - American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong
  - Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
  - Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
  - Radiology Partners, El Segundo, CA, USA
  - Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan
  - Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner
  - Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos
  - Department of Radiology, University Hospital of Cologne, Cologne, Germany
  - Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
  - Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
  - Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
  - Tufts University Medical School, Boston, MA, USA
  - Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek
  - South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia
  - College of Medicine and Public Health, Flinders University, Adelaide, Australia

34
Zhang L, Xiao X, Wen J, Li H. MDKLoss: Medicine domain knowledge loss for skin lesion recognition. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:2671-2690. [PMID: 38454701 DOI: 10.3934/mbe.2024118] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/09/2024]
Abstract
Methods based on deep learning have shown good advantages in skin lesion recognition. However, the diversity of lesion shapes and the influence of noise disturbances such as hair, bubbles, and markers lead to large intra-class differences and small inter-class similarities, which existing methods have not yet effectively resolved. In addition, most existing methods enhance the performance of skin lesion recognition by improving deep learning models without considering the guidance of medical knowledge of skin lesions. In this paper, we innovatively construct feature associations between different lesions using medical knowledge, and design a medical domain knowledge loss function (MDKLoss) based on these associations. By expanding the gap between samples of various lesion categories, MDKLoss enhances the capacity of deep learning models to differentiate between different lesions and consequently boosts classification performance. Extensive experiments on the ISIC2018 and ISIC2019 datasets show that the proposed method achieves maximum accuracies of 91.6% and 87.6%, respectively. Furthermore, compared with existing state-of-the-art loss functions, the proposed method demonstrates its effectiveness, universality, and superiority.
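The exact MDKLoss formulation is given in the paper; purely as an illustrative sketch of the general idea (a cross-entropy term plus a similarity-weighted margin penalty that pushes apart classes flagged as related by domain knowledge), one might write something like the following. The function name, the similarity matrix, and the hyperparameters are all hypothetical, not taken from the paper.

```python
import math

def knowledge_margin_loss(logits, target, similarity, margin=1.0, lam=0.1):
    """Cross-entropy plus a penalty whenever a class that domain knowledge marks
    as similar to the target scores within `margin` of it.
    Hypothetical sketch, not the MDKLoss published in the paper."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]   # numerically stable softmax
    total = sum(exps)
    ce = -math.log(exps[target] / total)       # cross-entropy for the target class
    # similarity[i][j] in [0, 1]: how medically related class j is to class i
    penalty = sum(
        similarity[target][j] * max(0.0, margin - (logits[target] - logits[j]))
        for j in range(len(logits)) if j != target
    )
    return ce + lam * penalty
```

With a two-class similarity matrix, logits that separate the classes by more than the margin incur only the cross-entropy term, while near-ties on similar classes add the extra penalty.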
Affiliation(s)
- Li Zhang
  - The Second School of Clinical Medicine, Southern Medical University, Guangzhou 510515, China
  - Department of Dermatology, Guangdong Second Provincial General Hospital, Guangzhou 510317, China
  - Department of Dermatology, Ningbo No. 6 Hospital, Ningbo 315040, China
- Xiangling Xiao
  - School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Ju Wen
  - The Second School of Clinical Medicine, Southern Medical University, Guangzhou 510515, China
  - Department of Dermatology, Guangdong Second Provincial General Hospital, Guangzhou 510317, China
- Huihui Li
  - School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou 510665, China

35
Yen TY, Ho CS, Chen YP, Pei YC. Diagnostic Accuracy of Deep Learning for the Prediction of Osteoporosis Using Plain X-rays: A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2024; 14:207. [PMID: 38248083 PMCID: PMC10814351 DOI: 10.3390/diagnostics14020207] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2023] [Revised: 01/04/2024] [Accepted: 01/16/2024] [Indexed: 01/23/2024] Open
Abstract
(1) Background: This meta-analysis assessed the diagnostic accuracy of deep learning model-based osteoporosis prediction using plain X-ray images. (2) Methods: We searched PubMed, Web of Science, SCOPUS, and Google Scholar from inception to 28 February 2023 for eligible studies that applied deep learning methods for diagnosing osteoporosis using X-ray images. The quality of studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 criteria. The area under the receiver operating characteristic curve (AUROC) was used to quantify the predictive performance. Subgroup, meta-regression, and sensitivity analyses were performed to identify the potential sources of study heterogeneity. (3) Results: Six studies were included; the pooled AUROC, sensitivity, and specificity were 0.88 (95% confidence interval [CI] 0.85-0.91), 0.81 (95% CI 0.78-0.84), and 0.87 (95% CI 0.81-0.92), respectively, indicating good performance. Moderate heterogeneity was observed. Meta-regression and subgroup analyses were not performed due to the limited number of studies included. (4) Conclusion: Deep learning methods effectively extract bone density information from plain radiographs, highlighting their potential for opportunistic screening. Nevertheless, additional prospective multicenter studies involving diverse patient populations are required to confirm the applicability of this novel technique.
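For context, the study-level inputs to such a meta-analysis (the AUROC, sensitivity, and specificity of one model on one dataset) can be computed from raw scores as in the sketch below; the review itself pools these study-level estimates rather than raw predictions. The function names and toy data are illustrative.

```python
def roc_auc(scores, labels):
    """Rank-based AUROC: probability a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity and specificity when score >= threshold counts as positive."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

A meta-analysis then combines these per-study values (typically with random-effects models) into the pooled estimates reported above.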
Affiliation(s)
- Tzu-Yun Yen
  - Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Linkou No. 5, Fuxing Street, Guishan District, Taoyuan City 333, Taiwan; (T.-Y.Y.); (C.-S.H.)
  - School of Medicine, Chang Gung University, No. 259, Wenhua 1st Road, Guishan District, Taoyuan City 333, Taiwan
- Chan-Shien Ho
  - Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Linkou No. 5, Fuxing Street, Guishan District, Taoyuan City 333, Taiwan; (T.-Y.Y.); (C.-S.H.)
  - School of Medicine, Chang Gung University, No. 259, Wenhua 1st Road, Guishan District, Taoyuan City 333, Taiwan
- Yueh-Peng Chen
  - Center for Artificial Intelligence in Medicine, Chang Gung Memorial Hospital, Linkou No. 5, Fuxing Street, Guishan District, Taoyuan City 333, Taiwan
  - Master of Science Degree Program in Innovation for Smart Medicine, Chang Gung University, No. 259, Wenhua 1st Road, Guishan District, Taoyuan City 333, Taiwan
- Yu-Cheng Pei
  - Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Linkou No. 5, Fuxing Street, Guishan District, Taoyuan City 333, Taiwan; (T.-Y.Y.); (C.-S.H.)
  - School of Medicine, Chang Gung University, No. 259, Wenhua 1st Road, Guishan District, Taoyuan City 333, Taiwan
  - Master of Science Degree Program in Innovation for Smart Medicine, Chang Gung University, No. 259, Wenhua 1st Road, Guishan District, Taoyuan City 333, Taiwan
  - Center of Vascularized Tissue Allograft, Chang Gung Memorial Hospital, Linkou No. 5, Fuxing Street, Guishan District, Taoyuan City 333, Taiwan

36
Shankarnarayan SA, Charlebois DA. Machine learning to identify clinically relevant Candida yeast species. Med Mycol 2024; 62:myad134. [PMID: 38130236 DOI: 10.1093/mmy/myad134] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Revised: 12/06/2023] [Accepted: 12/19/2023] [Indexed: 12/23/2023] Open
Abstract
Fungal infections, especially those due to Candida species, are on the rise. Multi-drug-resistant organisms such as Candida auris are difficult and time-consuming to identify accurately. Machine learning is increasingly being used in health care, especially in medical imaging. In this study, we evaluated the effectiveness of six convolutional neural networks (CNNs) at identifying four clinically important Candida species. Wet-mounted images were captured using bright-field live-cell microscopy, followed by separating single-cell, budding-cell, and cell-group images, which were then subjected to different machine learning algorithms (custom CNN, VGG16, ResNet50, InceptionV3, EfficientNetB0, and EfficientNetB7) to learn and predict Candida species. Among the six algorithms tested, the InceptionV3 model performed best in predicting Candida species from microscopy images. All models performed poorly on raw images obtained directly from the microscope. The performance of all models increased when trained on single- and budding-cell images. The InceptionV3 model identified budding cells of C. albicans, C. auris, C. glabrata (Nakaseomyces glabrata), and C. haemulonii in 97.0%, 74.0%, 68.0%, and 66.0% of cases, respectively. For single cells of C. albicans, C. auris, C. glabrata, and C. haemulonii, InceptionV3 identified 97.0%, 73.0%, 69.0%, and 73.0% of cases, respectively. The sensitivity and specificity of InceptionV3 were 77.1% and 92.4%, respectively. Overall, this study provides proof of concept that microscopy images from wet-mounted slides can be used to identify Candida yeast species quickly and accurately using machine learning.
Affiliation(s)
- Daniel A Charlebois
  - Department of Physics, University of Alberta, Edmonton, Alberta, T6G-2E1, Canada
  - Department of Physics, Department of Biological Sciences, University of Alberta, Edmonton, Alberta, T6G-2E9, Canada

37
Zhong S, Yin X, Li X, Feng C, Gao Z, Liao X, Yang S, He S. Artificial intelligence applications in bone fractures: A bibliometric and science mapping analysis. Digit Health 2024; 10:20552076241279238. [PMID: 39257873 PMCID: PMC11384526 DOI: 10.1177/20552076241279238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2023] [Accepted: 08/13/2024] [Indexed: 09/12/2024] Open
Abstract
Background: Bone fractures are a common medical issue worldwide, causing a serious economic burden on society. In recent years, the application of artificial intelligence (AI) in the field of fracture has developed rapidly, especially in fracture diagnosis, where AI has shown significant capabilities comparable to those of professional orthopedic surgeons. This study aimed to review the development process and applications of AI in the field of fracture using bibliometric analysis, while analyzing the research hotspots and future trends in the field. Materials and methods: Studies on AI and fracture were retrieved from the Web of Science Core Collection since 1990; a retrospective bibliometric and visualization study of the filtered data was conducted with CiteSpace and the Bibliometrix R package. Results: A total of 1063 publications were included in the analysis, with the annual number of publications growing rapidly since 2017. China had the most publications, and the United States had the most citations. The Technical University of Munich, Germany, had the most publications among institutions. Doornberg JN was the most productive author. Most research in this field was published in Scientific Reports. Doi K's 2007 review in Computerized Medical Imaging and Graphics was the most influential paper. Conclusion: AI applications in fracture have achieved outstanding results and will continue to progress. In this study, we used a bibliometric analysis to help researchers understand the basic knowledge structure, research hotspots, and future trends in this field, to further promote the development of AI applications in fracture.
Affiliation(s)
- Sen Zhong
  - Department of Orthopedic, Spinal Pain Research Institute, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- Xiaobing Yin
  - Nursing Department, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- Xiaolan Li
  - Fuzhou Medical College of Nanchang University, School of Stomatology, Fuzhou, China
- Chaobo Feng
  - National Key Clinical Pain Medicine of China, Huazhong University of Science and Technology Union Shenzhen Hospital, Shenzhen, China
- Zhiqiang Gao
  - Department of Joint Surgery, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Xiang Liao
  - National Key Clinical Pain Medicine of China, Huazhong University of Science and Technology Union Shenzhen Hospital, Shenzhen, China
- Sheng Yang
  - Department of Orthopedic, Spinal Pain Research Institute, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- Shisheng He
  - Department of Orthopedic, Spinal Pain Research Institute, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China

38
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells split uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNN) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The utmost goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
  - School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India.
- Deepak Ranjan Nayak
  - Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India.
- Bunil Kumar Balabantaray
  - Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India.
- M Tanveer
  - Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India.
- Rajashree Nayak
  - School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India.

39
Tai DT, Nhu NT, Tuan PA, Sulieman A, Omer H, Alirezaei Z, Bradley D, Chow JCL. A user-friendly deep learning application for accurate lung cancer diagnosis. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2024; 32:611-622. [PMID: 38607727 DOI: 10.3233/xst-230255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/14/2024]
Abstract
BACKGROUND: Accurate diagnosis and subsequent treatment planning require the experience clinicians build over their case loads. Applying deep learning to image processing is useful for creating tools that promise faster, high-quality diagnoses, but the accuracy and precision of 3-D image processing from 2-D data may be limited by factors such as superposition of organs, distortion and magnification, and detection of new pathologies. The purpose of this research is to use radiomics and deep learning to develop a tool for lung cancer diagnosis. METHODS: This study applies radiomics and deep learning to the diagnosis of lung cancer to help clinicians accurately analyze images and thereby provide appropriate treatment planning. 86 patients were recruited from Bach Mai Hospital, and 1012 patients were collected from an open-source database. First, deep learning was applied in the segmentation process via U-NET and in cancer classification via the DenseNet model. Second, radiomics was applied to measure and calculate diameter, surface area, and volume. Finally, the hardware was designed by connecting an Arduino Nano to an MFRC522 module for reading data from tags. In addition, the display interface was created on a web platform using Python through Streamlit. RESULTS: The applied segmentation model yielded a validation loss of 0.498, a training loss of 0.27, a cancer classification validation loss of 0.78, and a training accuracy of 0.98. The outcomes of the diagnostic capabilities for lung cancer (recognition and classification of lung cancer from chest CT scans) were quite successful. CONCLUSIONS: The model provides a means of storing and updating patients' data directly in the interface, making the results readily available to health care providers. The developed system will improve clinical communication and information exchange. Moreover, it can manage efforts by generating correlated and coherent summaries of cancer diagnoses.
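The radiomic measurements mentioned (diameter, surface area, and volume) can be sketched for a binary nodule mask by simple voxel counting, as below. This is a generic illustration with an assumed mask representation and voxel spacing, not the paper's actual pipeline.

```python
import math

def nodule_metrics(mask, spacing=(1.0, 1.0, 1.0)):
    """Volume (mm^3), voxel-face surface area (mm^2), and equivalent-sphere
    diameter (mm) of a binary 3-D mask given as nested lists of 0/1.
    spacing = (dz, dy, dx) in mm. Illustrative sketch only."""
    dz, dy, dx = spacing
    voxel_vol = dz * dy * dx
    Z, Y, X = len(mask), len(mask[0]), len(mask[0][0])

    def at(z, y, x):
        # Voxels outside the array count as background.
        return mask[z][y][x] if 0 <= z < Z and 0 <= y < Y and 0 <= x < X else 0

    # Face areas for the six axis-aligned neighbours.
    face = {(1, 0, 0): dy * dx, (-1, 0, 0): dy * dx,
            (0, 1, 0): dz * dx, (0, -1, 0): dz * dx,
            (0, 0, 1): dz * dy, (0, 0, -1): dz * dy}
    volume, surface = 0.0, 0.0
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                if mask[z][y][x]:
                    volume += voxel_vol
                    for (oz, oy, ox), a in face.items():
                        if not at(z + oz, y + oy, x + ox):
                            surface += a  # face exposed to background
    diameter = (6.0 * volume / math.pi) ** (1.0 / 3.0)  # equivalent sphere
    return volume, surface, diameter
```

For a 2×2×2 block of ones at 1 mm spacing this yields a volume of 8 mm³ and a surface area of 24 mm².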
Affiliation(s)
- Duong Thanh Tai
  - Department of Medical Physics, Faculty of Medicine, Nguyen Tat Thanh University, Ho Chi Minh City, Vietnam
- Nguyen Tan Nhu
  - School of Biomedical Engineering, Ho Chi Minh City International University (VNU-HCM), Ho Chi Minh City, Vietnam
  - Vietnam National University Ho Chi Minh City, Vietnam
- Pham Anh Tuan
  - Nuclear Medicine and Oncology Centre, Bach Mai Hospital, Ha Noi, Vietnam
- Abdelmoneim Sulieman
  - Radiology and Medical Imaging Department Prince Sattam Bin Abdulaziz University College of Applied Medical Sciences, Al-Kharj, Saudi Arabia
  - Radiological Science Department, College of Applied Medical Sciences, Al Ahsa, Saudi Arabia, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Hiba Omer
  - Department of Basic Sciences, Deanship of Preparatory Year and Supporting Studies, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Zahra Alirezaei
  - Radiology Department, Paramedical School, Bushehr University of Medical Sciences, Bushehr, Iran
- David Bradley
  - Applied Physics and Radiation Technologies Group, CCDCU, Sunway University, Subang Jaya, PJ, Malaysia
  - School of Mathematics and Physics, University of Surrey, Guildford, UK
- James C L Chow
  - Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
  - Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada

40
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA. Radiol Artif Intell 2024; 6:e230513. [PMID: 38251899 PMCID: PMC10831521 DOI: 10.1148/ryai.230513] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2024]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning. Published under a CC BY 4.0 license. ©The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.
Affiliation(s)
- Bibb Allen
  - Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
  - American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong
  - Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
  - Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
  - Radiology Partners, El Segundo, CA, USA
  - Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan
  - Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner
  - Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto dos Santos
  - Department of Radiology, University Hospital of Cologne, Cologne, Germany
  - Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
  - Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
  - Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
  - Tufts University Medical School, Boston, MA, USA
  - Commission On Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek
  - South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia
  - College of Medicine and Public Health, Flinders University, Adelaide, Australia

41
Wu D, Ni J, Fan W, Jiang Q, Wang L, Sun L, Cai Z. Opportunities and challenges of computer aided diagnosis in new millennium: A bibliometric analysis from 2000 to 2023. Medicine (Baltimore) 2023; 102:e36703. [PMID: 38134105 PMCID: PMC10735127 DOI: 10.1097/md.0000000000036703] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 11/27/2023] [Indexed: 12/24/2023] Open
Abstract
BACKGROUND: After entering the new millennium, computer-aided diagnosis (CAD) has been developing rapidly as an emerging technology worldwide. Expanding the spectrum of CAD-related diseases is a possible future research trend. Nevertheless, bibliometric studies in this area have not yet been reported. This study aimed to explore the hotspots and frontiers of research on CAD from 2000 to 2023, which may provide a reference for researchers in this field. METHODS: In this paper, we use bibliometrics to analyze CAD-related literature in the Web of Science database between 2000 and 2023. The scientometric software tools VOSviewer and CiteSpace were used to visually analyze the countries, institutions, authors, journals, references, and keywords involved in the literature. Keyword burst analysis was utilized to further explore the current state and development trends of research on CAD. RESULTS: A total of 13,970 publications were included in this study, with a noticeably rising annual publication trend. China and the United States are major contributors to the publications, with the United States holding the dominant position in CAD research. American research institutions, led by the University of Chicago, are pioneers of CAD. Acharya UR, Zheng B, and Chan HP are the most prolific authors. Institute of Electrical and Electronics Engineers Transactions on Medical Imaging focuses on CAD and publishes the most articles. New computer technologies related to CAD are at the forefront of attention. Currently, CAD is used extensively for breast, pulmonary, and brain diseases. CONCLUSION: Expanding the spectrum of CAD-related diseases is a possible future research trend. How to overcome the lack of large-sample datasets and establish a universally accepted standard for evaluating CAD system performance are urgent issues for CAD development and validation. In conclusion, this paper provides valuable information on the current state of CAD research and future developments.
Affiliation(s)
- Di Wu
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
  - Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Jiachun Ni
  - Department of Coloproctology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Wenbin Fan
  - Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Qiong Jiang
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Ling Wang
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Li Sun
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Zengjin Cai
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China

42
Khan Z, Tahir MA. Real time anatomical landmarks and abnormalities detection in gastrointestinal tract. PeerJ Comput Sci 2023; 9:e1685. [PMID: 38192480 PMCID: PMC10773696 DOI: 10.7717/peerj-cs.1685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Accepted: 10/16/2023] [Indexed: 01/10/2024]
Abstract
Gastrointestinal (GI) endoscopy is an active research field due to the lethal cancer diseases of the GI tract. Cancer treatment outcomes are better when the disease is diagnosed early, which increases the chances of survival. There is a high miss rate in the detection of abnormalities in the GI tract during endoscopy or colonoscopy due to lack of attentiveness, tiring procedures, or lack of required training. The detection procedure can be automated to reduce these risks by identifying and flagging suspicious frames. A suspicious frame may contain an abnormality or information about an anatomical landmark. The frame can then be analysed for anatomical landmarks and abnormalities for the detection of disease. In this research, a real-time endoscopic abnormality detection system is presented that detects both abnormalities and landmarks. The proposed system is based on a combination of handcrafted and deep features. Deep features are extracted from the lightweight MobileNet convolutional neural network (CNN) architecture. Some classes have small inter-class differences and high intra-class variability; for such classes, a single shared detection threshold cannot distinguish them. The thresholds of such classes are learned from the training data using a genetic algorithm. The system is evaluated on various benchmark datasets and achieves an accuracy of 0.99 with an F1-score of 0.91 and a Matthews correlation coefficient (MCC) of 0.91 on the Kvasir datasets, and an F1-score of 0.93 on the DowPK dataset. The system detects abnormalities in real time at a speed of 41 frames per second.
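The idea of learning a per-class detection threshold from training data can be sketched as follows, with a plain grid search standing in for the paper's genetic algorithm; the class names, the F1 objective, and the threshold grid are all illustrative assumptions.

```python
def learn_class_thresholds(scores, labels, classes, grid=None):
    """Pick, per class, the score cut-off that maximizes F1 on training data.
    A simple grid search stands in here for the genetic algorithm used in the paper.
    scores[c]: list of detector scores for class c; labels[c]: matching 0/1 labels."""
    grid = grid or [i / 100 for i in range(5, 100, 5)]
    thresholds = {}
    for c in classes:
        best_t, best_f1 = 0.5, -1.0
        for t in grid:
            tp = sum(1 for s, y in zip(scores[c], labels[c]) if y and s >= t)
            fp = sum(1 for s, y in zip(scores[c], labels[c]) if not y and s >= t)
            fn = sum(1 for s, y in zip(scores[c], labels[c]) if y and s < t)
            denom = 2 * tp + fp + fn
            f1 = 2 * tp / denom if denom else 0.0
            if f1 > best_f1:
                best_t, best_f1 = t, f1
        thresholds[c] = best_t
    return thresholds
```

Classes whose score distributions overlap get their own cut-offs instead of a single shared threshold, which is the motivation described in the abstract.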
Affiliation(s)
- Zeshan Khan
- FAST School of Computing, National University of Computer and Emerging Sciences, Islamabad, Karachi, Sindh, Pakistan
- Muhammad Atif Tahir
- FAST School of Computing, National University of Computer and Emerging Sciences, Islamabad, Karachi, Sindh, Pakistan
43
Bhat S, Mansoor A, Georgescu B, Panambur AB, Ghesu FC, Islam S, Packhäuser K, Rodríguez-Salas D, Grbic S, Maier A. AUCReshaping: improved sensitivity at high-specificity. Sci Rep 2023; 13:21097. [PMID: 38036602 PMCID: PMC10689839 DOI: 10.1038/s41598-023-48482-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2023] [Accepted: 11/27/2023] [Indexed: 12/02/2023] Open
Abstract
The evaluation of deep-learning (DL) systems typically relies on the area under the receiver operating characteristic curve (AU-ROC) as a performance metric. However, AU-ROC, in its holistic form, does not sufficiently consider performance within specific ranges of sensitivity and specificity, which are critical for the intended operational context of the system. Consequently, two systems with identical AU-ROC values can exhibit significantly divergent real-world performance. This issue is particularly pronounced in anomaly detection tasks, a common application of DL systems across research domains including medical imaging, industrial automation, manufacturing, cyber security, fraud detection, and drug research. The challenge arises from the heavy class imbalance in training datasets, with the abnormality class often incurring a considerably higher misclassification cost than the normal class. Traditional DL systems address this by adjusting the weighting of the cost function or optimizing for specific points along the ROC curve. While these approaches yield reasonable results in many cases, they do not actively seek to maximize performance for the desired operating point. In this study, we introduce a technique called AUCReshaping, designed to reshape the ROC curve exclusively within the specified sensitivity and specificity range by optimizing sensitivity at a predetermined specificity level. This reshaping is achieved through an adaptive and iterative boosting mechanism that allows the network to focus on pertinent samples during the learning process. We primarily investigated the impact of AUCReshaping on abnormality detection tasks, specifically chest X-ray (CXR) analysis, followed by breast mammogram and credit card fraud detection tasks. The results reveal a substantial improvement, ranging from 2% to 40%, in sensitivity at high-specificity levels for binary classification tasks.
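The operating-point idea behind this abstract, measuring sensitivity at a fixed specificity and upweighting the abnormal samples the model still misses, can be illustrated with a minimal sketch. This is not the paper's AUCReshaping loss; `sensitivity_at_specificity` and `boost_weights` are hypothetical helper names, and the multiplicative step factor is an assumption.

```python
import numpy as np

def sensitivity_at_specificity(y_true, scores, target_specificity=0.95):
    """Sweep candidate thresholds and report the best achievable
    sensitivity while keeping specificity at or above the target."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    neg, pos = scores[y_true == 0], scores[y_true == 1]
    best_sens = 0.0
    for t in np.unique(scores):
        spec = float((neg < t).mean())
        if spec >= target_specificity:
            best_sens = max(best_sens, float((pos >= t).mean()))
    return best_sens

def boost_weights(y_true, scores, weights, threshold, step=1.5):
    """One adaptive-boosting-style update: upweight abnormal (positive)
    samples still missed at the chosen operating threshold."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float).copy()
    missed_pos = (y_true == 1) & (scores < threshold)
    weights[missed_pos] *= step
    return weights / weights.sum()  # renormalise to a distribution
```

Iterating such updates concentrates the training signal on the samples that matter at the high-specificity operating point, which is the qualitative mechanism the abstract describes.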
Affiliation(s)
- Sheethal Bhat
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany.
- Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany.
- Awais Mansoor
- Digital Technology and Innovation, Siemens Medical Solutions, Princeton, NJ, 08540, USA
- Bogdan Georgescu
- Digital Technology and Innovation, Siemens Medical Solutions, Princeton, NJ, 08540, USA
- Adarsh B Panambur
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany
- Florin C Ghesu
- Digital Technology and Innovation, Siemens Medical Solutions, Princeton, NJ, 08540, USA
- Saahil Islam
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany
- Kai Packhäuser
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Dalia Rodríguez-Salas
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Sasa Grbic
- Digital Technology and Innovation, Siemens Medical Solutions, Princeton, NJ, 08540, USA
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
44
Yoon J, Han J, Ko J, Choi S, Park JI, Hwang JS, Han JM, Hwang DDJ. Developing and Evaluating an AI-Based Computer-Aided Diagnosis System for Retinal Disease: Diagnostic Study for Central Serous Chorioretinopathy. J Med Internet Res 2023; 25:e48142. [PMID: 38019564 PMCID: PMC10719821 DOI: 10.2196/48142] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Revised: 10/29/2023] [Accepted: 11/05/2023] [Indexed: 11/30/2023] Open
Abstract
BACKGROUND Although previous research has made substantial progress in developing high-performance artificial intelligence (AI)-based computer-aided diagnosis (AI-CAD) systems in various medical domains, little attention has been paid to developing and evaluating AI-CAD systems in ophthalmology, particularly for diagnosing retinal diseases using optical coherence tomography (OCT) images. OBJECTIVE This diagnostic study aimed to determine the usefulness of a proposed AI-CAD system in assisting ophthalmologists with the diagnosis of central serous chorioretinopathy (CSC), which is known to be difficult to diagnose, using OCT images. METHODS For the training and evaluation of the proposed deep learning model, 1693 OCT images were collected and annotated. The data set included 929 and 764 cases of acute and chronic CSC, respectively. In total, 66 ophthalmologists (2 groups: 36 retina and 30 nonretina specialists) participated in the observer performance test. To evaluate the deep learning algorithm used in the proposed AI-CAD system, the training, validation, and test sets were split in an 8:1:1 ratio. Further, 100 randomly sampled OCT images from the test set were used for the observer performance test, and the participants were instructed to select a CSC subtype for each of these images. Each image was provided under different conditions: (1) without AI assistance, (2) with AI assistance with a probability score, and (3) with AI assistance with a probability score and visual evidence heatmap. The sensitivity, specificity, and area under the receiver operating characteristic curve were used to measure the diagnostic performance of the model and ophthalmologists. RESULTS The proposed system achieved a high detection performance for CSC (area under the curve of 99%), outperforming the 66 ophthalmologists who participated in the observer performance test. In both groups, ophthalmologists achieved their highest mean diagnostic performance with AI assistance that included both a probability score and a visual evidence heatmap, compared with the other conditions (without AI assistance, or with AI assistance with a probability score only). Nonretina specialists achieved expert-level diagnostic performance with the support of the proposed AI-CAD system. CONCLUSIONS Our proposed AI-CAD system improved the diagnosis of CSC by ophthalmologists, which may support decision-making regarding retinal disease detection and alleviate the workload of ophthalmologists.
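The 8:1:1 train/validation/test split described in the methods can be sketched as a small helper; the function name and the seeded shuffle are assumptions, not the study's code.

```python
import random

def split_8_1_1(items, seed=0):
    """Shuffle a dataset reproducibly, then split it into
    train/validation/test partitions in an 8:1:1 ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

With the study's 1693 images this yields partitions of 1354, 169, and 170, with every image assigned to exactly one partition.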
Affiliation(s)
- Jeewoo Yoon
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Raondata, Seoul, Republic of Korea
- Jinyoung Han
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul, Republic of Korea
- Junseo Ko
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Raondata, Seoul, Republic of Korea
- Seong Choi
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Raondata, Seoul, Republic of Korea
- Ji In Park
- Department of Medicine, Kangwon National University School of Medicine, Kangwon National University Hospital, Chuncheon, Republic of Korea
- Jeong Mo Han
- Seoul Bombit Eye Clinic, Sejong, Republic of Korea
- Daniel Duck-Jin Hwang
- Department of Ophthalmology, Hangil Eye Hospital, Incheon, Republic of Korea
- Lux Mind, Incheon, Republic of Korea
45
Ong W, Liu RW, Makmur A, Low XZ, Sng WJ, Tan JH, Kumar N, Hallinan JTPD. Artificial Intelligence Applications for Osteoporosis Classification Using Computed Tomography. Bioengineering (Basel) 2023; 10:1364. [PMID: 38135954 PMCID: PMC10741220 DOI: 10.3390/bioengineering10121364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 11/21/2023] [Accepted: 11/23/2023] [Indexed: 12/24/2023] Open
Abstract
Osteoporosis, marked by low bone mineral density (BMD) and a high fracture risk, is a major health issue. Recent progress in medical imaging, especially CT scans, offers new ways of diagnosing and assessing osteoporosis. This review examines the use of AI analysis of CT scans to stratify BMD and diagnose osteoporosis. By summarizing the relevant studies, we aimed to assess the effectiveness, constraints, and potential impact of AI-based osteoporosis classification (severity) via CT. A systematic search of electronic databases (PubMed, MEDLINE, Web of Science, ClinicalTrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 39 articles were retrieved from the databases, and the key findings were compiled and summarized, including the regions analyzed, the type of CT imaging, and their efficacy in predicting BMD compared with conventional DXA studies. Important considerations and limitations are also discussed. The overall reported accuracy, sensitivity, and specificity of AI in classifying osteoporosis using CT images ranged from 61.8% to 99.4%, 41.0% to 100.0%, and 31.0% to 100.0%, respectively, with areas under the curve (AUCs) ranging from 0.582 to 0.994. While additional research is necessary to validate the clinical efficacy and reproducibility of these AI tools before incorporating them into routine clinical practice, these studies demonstrate the promising potential of using CT to opportunistically predict and classify osteoporosis without the need for DXA.
Affiliation(s)
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Ren Wei Liu
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Weizhong Jonathan Sng
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore; (J.H.T.); (N.K.)
- Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore; (J.H.T.); (N.K.)
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
46
Dehghan Rouzi M, Moshiri B, Khoshnevisan M, Akhaee MA, Jaryani F, Salehi Nasab S, Lee M. Breast Cancer Detection with an Ensemble of Deep Learning Networks Using a Consensus-Adaptive Weighting Method. J Imaging 2023; 9:247. [PMID: 37998094 PMCID: PMC10671922 DOI: 10.3390/jimaging9110247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2023] [Revised: 10/20/2023] [Accepted: 10/24/2023] [Indexed: 11/25/2023] Open
Abstract
Breast cancer's high mortality rate is often linked to late diagnosis, with mammograms serving as key but sometimes limited tools for early detection. To enhance diagnostic accuracy and speed, this study introduces a novel computer-aided detection (CAD) ensemble system. This system incorporates advanced deep learning networks (EfficientNet, Xception, MobileNetV2, InceptionV3, and ResNet50), integrated via our innovative consensus-adaptive weighting (CAW) method. This method permits the dynamic adjustment of multiple deep networks, bolstering the system's detection capabilities. Our approach also addresses a major challenge in pixel-level data annotation for Faster R-CNNs, highlighted in a prominent previous study. Evaluations on various datasets, including the cropped DDSM (Digital Database for Screening Mammography), DDSM, and INbreast, demonstrated the system's superior performance. In particular, our CAD system showed marked improvement on the cropped DDSM dataset, enhancing detection rates by approximately 1.59% and achieving an accuracy of 95.48%. This innovative system represents a significant advancement in early breast cancer detection, offering the potential for more precise and timely diagnosis, ultimately fostering improved patient outcomes.
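A minimal sketch of consensus-weighted fusion of several networks' softmax outputs, assuming one plausible reading of CAW in which each model's weight grows with its agreement (cosine similarity) with the plain-average consensus; the paper's actual CAW update may differ, and the function name is an assumption.

```python
import numpy as np

def consensus_weighted_ensemble(prob_stack):
    """Fuse per-network class probabilities (n_models x n_classes)
    with weights proportional to each model's cosine similarity to
    the plain-average consensus distribution."""
    p = np.asarray(prob_stack, dtype=float)
    consensus = p.mean(axis=0)
    sims = (p @ consensus) / (
        np.linalg.norm(p, axis=1) * np.linalg.norm(consensus) + 1e-12
    )
    w = np.clip(sims, 0.0, None)  # never assign a negative weight
    w = w / w.sum()
    return w @ p  # weighted average of the class probabilities
```

An outlier network that disagrees with the consensus is down-weighted, so the fused prediction leans toward the agreeing majority rather than a plain average.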
Affiliation(s)
- Mohammad Dehghan Rouzi
- School of Electrical and computer Engineering, College of Engineering, University of Tehran, Tehran 14174-66191, Iran; (M.D.R.); (B.M.); (M.A.A.)
- Behzad Moshiri
- School of Electrical and computer Engineering, College of Engineering, University of Tehran, Tehran 14174-66191, Iran; (M.D.R.); (B.M.); (M.A.A.)
- Department of Electrical and Computer Engineering, University of Waterloo, Ontario, ON N2L 3G1, Canada
- Mohammad Ali Akhaee
- School of Electrical and computer Engineering, College of Engineering, University of Tehran, Tehran 14174-66191, Iran; (M.D.R.); (B.M.); (M.A.A.)
- Farhang Jaryani
- Human Genome Sequencing Center, Baylor College of Medicine, Houston, TX 77030, USA;
- Samaneh Salehi Nasab
- Department of Computer Engineering, Lorestan University, Khorramabad 68151-44316, Iran;
- Myeounggon Lee
- College of Health Sciences, Dong-A University, Saha-gu, Busan 49315, Republic of Korea
47
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. WILEY INTERDISCIPLINARY REVIEWS. DATA MINING AND KNOWLEDGE DISCOVERY 2023; 13:e1510. [PMID: 38249785 PMCID: PMC10796150 DOI: 10.1002/widm.1510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Accepted: 06/21/2023] [Indexed: 01/23/2024]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
48
Bousson V, Attané G, Benoist N, Perronne L, Diallo A, Hadid-Beurrier L, Martin E, Hamzi L, Depil Duval A, Revue E, Vicaut E, Salvat C. Artificial Intelligence for Detecting Acute Fractures in Patients Admitted to an Emergency Department: Real-Life Performance of Three Commercial Algorithms. Acad Radiol 2023; 30:2118-2139. [PMID: 37468377 DOI: 10.1016/j.acra.2023.06.016] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 06/08/2023] [Accepted: 06/20/2023] [Indexed: 07/21/2023]
Abstract
RATIONALE AND OBJECTIVES Interpreting radiographs in emergency settings is stressful and a burden for radiologists. The main objective was to assess the performance of three commercially available artificial intelligence (AI) algorithms for detecting acute peripheral fractures on radiographs in daily emergency practice. MATERIALS AND METHODS Radiographs were collected from consecutive patients admitted for skeletal trauma at our emergency department over a period of 2 months. Three AI algorithms-SmartUrgence, Rayvolve, and BoneView-were used to analyze 13 body regions. Four musculoskeletal radiologists determined the ground truth from radiographs. The diagnostic performance of the three AI algorithms was calculated at the level of the radiography set. Accuracies, sensitivities, and specificities for each algorithm and two-by-two comparisons between algorithms were obtained. Analyses were performed for the whole population and for subgroups of interest (sex, age, body region). RESULTS A total of 1210 patients were included (mean age 41.3 ± 18.5 years; 742 [61.3%] men), corresponding to 1500 radiography sets. The fracture prevalence among the radiography sets was 23.7% (356/1500). Accuracy was 90.1%, 71.0%, and 88.8% for SmartUrgence, Rayvolve, and BoneView, respectively; sensitivity 90.2%, 92.6%, and 91.3%, with specificity 92.5%, 70.4%, and 90.5%. Accuracy and specificity were significantly higher for SmartUrgence and BoneView than Rayvolve for the whole population (P < .0001) and for subgroups. The three algorithms did not differ in sensitivity (P = .27). For SmartUrgence, subgroups did not significantly differ in accuracy, specificity, or sensitivity. For Rayvolve, accuracy and specificity were significantly higher with age 27-36 than ≥53 years (P = .0029 and P = .0019). Specificity was higher for the subgroup knee than foot (P = .0149). 
For BoneView, accuracy was significantly higher for the subgroups knee than foot (P = .0006) and knee than wrist/hand (P = .0228). Specificity was significantly higher for the subgroups knee than foot (P = .0003) and ankle than foot (P = .0195). CONCLUSION The performance of AI detection of acute peripheral fractures in daily radiological practice in an emergency department was good to high and was related to the AI algorithm, patient age, and body region examined.
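The set-level accuracy, sensitivity, and specificity figures above follow from the standard confusion-count definitions for a binary fracture/no-fracture reading; the sketch below uses illustrative counts, not the study's data, and the function name is an assumption.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from the confusion
    counts of a binary fracture/no-fracture reading."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,        # all correct / all cases
        "sensitivity": tp / (tp + fn),        # fractures caught
        "specificity": tn / (tn + fp),        # normals correctly cleared
    }
```

Comparing algorithms at the radiography-set level, as the study does, amounts to computing these three ratios from each algorithm's confusion counts over the 1500 sets.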
Affiliation(s)
- Valérie Bousson
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.).
- Grégoire Attané
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.)
- Nicolas Benoist
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.)
- Laetitia Perronne
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.)
- Abdourahmane Diallo
- Clinical Research Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (A.D., E.V.)
- Lama Hadid-Beurrier
- Medical Physics Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (L.H.-B., C.S.)
- Emmanuel Martin
- Information Technology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (E.M.)
- Lounis Hamzi
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.)
- Arnaud Depil Duval
- Emergency Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (A.D.D., E.R.); Emergency Department, Saint-Joseph's Hospital, Paris, France (A.D.D.)
- Eric Revue
- Emergency Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (A.D.D., E.R.)
- Eric Vicaut
- Clinical Research Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (A.D., E.V.)
- Cécile Salvat
- Medical Physics Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (L.H.-B., C.S.)
49
Luo N, Zhong X, Su L, Cheng Z, Ma W, Hao P. Artificial intelligence-assisted dermatology diagnosis: From unimodal to multimodal. Comput Biol Med 2023; 165:107413. [PMID: 37703714 DOI: 10.1016/j.compbiomed.2023.107413] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 08/02/2023] [Accepted: 08/28/2023] [Indexed: 09/15/2023]
Abstract
Artificial intelligence (AI) is progressively permeating medicine, notably in the realm of assisted diagnosis. However, traditional unimodal AI models, reliant on large volumes of accurately labeled data and a single data type, prove insufficient for assisting dermatological diagnosis. Augmenting these models with text data from patient narratives and laboratory reports, and with image data from skin lesions, dermoscopy, and pathology, could significantly enhance their diagnostic capacity. Large-scale pre-trained multimodal models offer a promising solution, exploiting the burgeoning reservoir of clinical data and amalgamating various data types. This paper delves into the methodologies, applications, and shortcomings of unimodal models while exploring how multimodal models can enhance accuracy and reliability. Furthermore, integrating cutting-edge technologies such as federated learning and multi-party privacy computing with AI can substantially mitigate patient privacy concerns in dermatological datasets and foster a move towards high-precision self-diagnosis. Diagnostic systems underpinned by large-scale pre-trained multimodal models can help dermatologists formulate effective diagnostic and treatment strategies and herald a transformative era in healthcare.
Affiliation(s)
- Nan Luo
- Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China.
- Xiaojing Zhong
- Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China.
- Luxin Su
- Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China.
- Zilin Cheng
- Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China.
- Wenyi Ma
- Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China.
- Pingsheng Hao
- Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China.
50
Ostrowski DA, Logan JR, Antony M, Broms R, Weiss DA, Van Batavia J, Long CJ, Smith AL, Zderic SA, Edwins RC, Pominville RJ, Hannick JH, Woo LL, Fan Y, Tasian GE, Weaver JK. Automated Society of Fetal Urology (SFU) grading of hydronephrosis on ultrasound imaging using a convolutional neural network. J Pediatr Urol 2023; 19:566.e1-566.e8. [PMID: 37286464 DOI: 10.1016/j.jpurol.2023.05.014] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 03/14/2023] [Accepted: 05/23/2023] [Indexed: 06/09/2023]
Abstract
INTRODUCTION Grading of hydronephrosis severity on postnatal renal ultrasound guides management decisions in antenatal hydronephrosis (ANH). Multiple systems exist to help standardize hydronephrosis grading, yet poor inter-observer reliability persists. Machine learning methods may provide tools to improve the efficiency and accuracy of hydronephrosis grading. OBJECTIVE To develop an automated convolutional neural network (CNN) model to classify hydronephrosis on renal ultrasound imaging according to the Society of Fetal Urology (SFU) system as a potential clinical adjunct. STUDY DESIGN A cross-sectional, single-institution cohort of postnatal renal ultrasounds with radiologist SFU grading from pediatric patients with and without hydronephrosis of stable severity was obtained. Imaging labels were used to automatically select sagittal and transverse grey-scale renal images from all available studies from each patient. A VGG16 pre-trained ImageNet CNN model analyzed these preprocessed images. Three-fold stratified cross-validation was used to build and evaluate the model, which was used to classify renal ultrasounds on a per-patient basis into five classes based on the SFU system (normal, SFU I, SFU II, SFU III, or SFU IV). These predictions were compared to radiologist grading. Confusion matrices evaluated model performance. Gradient class activation mapping demonstrated imaging features driving model predictions. RESULTS We identified 710 patients with 4659 postnatal renal ultrasound series. Per radiologist grading, 183 were normal, 157 were SFU I, 132 were SFU II, 100 were SFU III, and 138 were SFU IV. The machine learning model predicted hydronephrosis grade with 82.0% (95% CI: 75-83%) overall accuracy and classified 97.6% (95% CI: 95-98%) of the patients correctly or within one grade of the radiologist grade.
The model classified 92.3% (95% CI: 86-95%) normal, 73.2% (95% CI: 69-76%) SFU I, 73.5% (95% CI: 67-75%) SFU II, 79.0% (95% CI: 73-82%) SFU III, and 88.4% (95% CI: 85-92%) SFU IV patients accurately. Gradient class activation mapping demonstrated that the ultrasound appearance of the renal collecting system drove the model's predictions. DISCUSSION The CNN-based model classified hydronephrosis on renal ultrasounds automatically and accurately based on the expected imaging features in the SFU system. Compared to prior studies, the model functioned more automatically with greater accuracy. Limitations include the retrospective, relatively small cohort, and averaging across multiple imaging studies per patient. CONCLUSIONS An automated CNN-based system classified hydronephrosis on renal ultrasounds according to the SFU system with promising accuracy based on appropriate imaging features. These findings suggest a possible adjunctive role for machine learning systems in the grading of ANH.
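The "correct or within one grade" figure reported above can be computed with a small helper over integer-encoded SFU grades; the encoding (0 = normal through 4 = SFU IV) and the function name are assumptions for illustration.

```python
def within_one_grade_accuracy(predicted, reference):
    """Fraction of patients whose predicted SFU grade matches the
    radiologist grade exactly or is off by at most one grade.
    Grades are integers: 0 = normal, 1-4 = SFU I-IV."""
    assert len(predicted) == len(reference) and predicted
    hits = sum(abs(p - r) <= 1 for p, r in zip(predicted, reference))
    return hits / len(predicted)
```

Exact accuracy is the special case where the tolerance is zero; the study reports both (82.0% exact, 97.6% within one grade).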
Affiliation(s)
- David A Ostrowski
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Division of Urology, Department of Surgery, University of Pennsylvania Health System, Philadelphia, PA, USA
- Joseph R Logan
- Division of Urology, Department of Surgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA; Translational Research Informatics Group, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Maria Antony
- Division of Urology, Department of Surgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Reilly Broms
- Division of Urology, Department of Surgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Dana A Weiss
- Division of Urology, Department of Surgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Jason Van Batavia
- Division of Urology, Department of Surgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Christopher J Long
- Division of Urology, Department of Surgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Ariana L Smith
- Division of Urology, Department of Surgery, University of Pennsylvania Health System, Philadelphia, PA, USA
- Stephen A Zderic
- Division of Urology, Department of Surgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Rebecca C Edwins
- Urology Institute, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Raymond J Pominville
- Urology Institute, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Jessica H Hannick
- Division of Pediatric Urology, University Hospitals Rainbow Babies and Children's Hospital, Cleveland, OH, USA
- Lynn L Woo
- Division of Pediatric Urology, University Hospitals Rainbow Babies and Children's Hospital, Cleveland, OH, USA
- Yong Fan
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Gregory E Tasian
- Division of Urology, Department of Surgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- John K Weaver
- Division of Pediatric Urology, University Hospitals Rainbow Babies and Children's Hospital, Cleveland, OH, USA.