1
Balel Y. ScholarGPT's performance in oral and maxillofacial surgery. Journal of Stomatology, Oral and Maxillofacial Surgery 2024:102114. [PMID: 39389541] [DOI: 10.1016/j.jormas.2024.102114]
Abstract
OBJECTIVE The purpose of this study is to evaluate the performance of Scholar GPT in answering technical questions in the field of oral and maxillofacial surgery and to conduct a comparative analysis with the results of a previous study that assessed the performance of ChatGPT. MATERIALS AND METHODS Scholar GPT was accessed via ChatGPT (www.chatgpt.com) on March 20, 2024. A total of 60 technical questions (15 each on impacted teeth, dental implants, temporomandibular joint disorders, and orthognathic surgery) from our previous study were used. Scholar GPT's responses were evaluated using a modified Global Quality Scale (GQS). The questions were randomized before scoring using an online randomizer (www.randomizer.org). A single researcher performed the evaluations at three different times, three weeks apart, with each evaluation preceded by a new randomization. In cases of score discrepancies, a fourth evaluation was conducted to determine the final score. RESULTS Scholar GPT performed well across all technical questions, with an average GQS score of 4.48 (SD = 0.93). Comparatively, ChatGPT's average GQS score in the previous study was 3.1 (SD = 1.492). The Wilcoxon signed-rank test indicated a statistically significant increase in average score for Scholar GPT compared with ChatGPT (mean difference = 2.00, SE = 0.163, p < 0.001). The Kruskal-Wallis test showed no statistically significant differences among the topic groups (χ² = 0.799, df = 3, p = 0.850, ε² = 0.0135). CONCLUSION Scholar GPT demonstrated generally high performance on technical questions within oral and maxillofacial surgery and produced more consistent, higher-quality responses than ChatGPT. The findings suggest that GPT models grounded in academic databases can provide more accurate and reliable information. Additionally, developing a specialized GPT model for oral and maxillofacial surgery could ensure higher quality and consistency in artificial intelligence-generated information.
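The two statistical tests named in the abstract can be sketched as follows; the per-question GQS scores, group sizes, and any resulting p-values below are invented for illustration and are not the study's data.

```python
from scipy.stats import wilcoxon, kruskal

# Hypothetical per-question GQS scores (1-5) for the two models.
scholar = [5, 4, 5, 3, 5, 4, 5, 5, 4, 3, 5, 4]
chatgpt = [3, 3, 4, 2, 3, 4, 2, 3, 3, 2, 4, 3]

# Paired comparison of the two models on the same questions.
w_stat, w_p = wilcoxon(scholar, chatgpt)

# Scholar GPT scores split into four topic groups (3 questions each here;
# the study used 15 per topic).
groups = [scholar[i:i + 3] for i in range(0, 12, 3)]
k_stat, k_p = kruskal(*groups)

print(f"Wilcoxon p = {w_p:.4f}, Kruskal-Wallis p = {k_p:.4f}")
```

With this toy data every nonzero difference favors the first model, so the paired test comes out significant while the topic-group test need not.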
Affiliation(s)
- Yunus Balel
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Sivas Cumhuriyet University, Sivas 58000, Turkiye.
2
Li X, Li Z, Hu T, Long M, Ma X, Huang J, Liu Y, Yalikun Y, Liu S, Wang D, Wu J, Mei L, Lei C. MSGM: An Advanced Deep Multi-Size Guiding Matching Network for Whole Slide Histopathology Images Addressing Staining Variation and Low Visibility Challenges. IEEE J Biomed Health Inform 2024; 28:6019-6030. [PMID: 38913517] [DOI: 10.1109/jbhi.2024.3417937]
Abstract
Matching whole slide histopathology images to provide comprehensive information on homologous tissues is beneficial for cancer diagnosis. However, giga-pixel whole slide images (WSIs) pose a challenge for high-accuracy matching. Learning-based methods generalize poorly to large WSIs, necessitating the integration of traditional matching methods to maintain accuracy as image size increases. In this paper, we propose a multi-size guiding matching method suited to high-accuracy requirements. Specifically, we learn multiscale texture to train deep descriptors, called TDescNet, whose 64 × 64 × 256 and 256 × 256 × 128 convolution layers serve as C64 and C256 descriptors, to overcome staining variation and low visibility. Furthermore, we develop a 3D-ring descriptor using sparse keypoints to support the description of large WSIs. Finally, we employ the C64, C256, and 3D-ring descriptors to progressively guide refined local matching, using geometric consistency to identify correct matches. Experiments show that when matching WSIs of size 4096 × 4096 pixels, our average matching error is 123.48 μm and the success rate is 93.02% across 43 cases. Notably, our method improves matching accuracy by an average of 65.52 μm over recent state-of-the-art methods, with gains ranging from 36.27 μm to 131.66 μm. We therefore achieve high-fidelity whole-slide image matching that overcomes staining variation and low visibility, enabling assistance in comprehensive cancer diagnosis through matched WSIs.
3
Yan P, Li M, Zhang J, Li G, Jiang Y, Luo H. Cold SegDiffusion: A novel diffusion model for medical image segmentation. Knowl Based Syst 2024; 301:112350. [DOI: 10.1016/j.knosys.2024.112350]
4
Sarangi PK, Datta S, Swarup MS, Panda S, Nayak DSK, Malik A, Datta A, Mondal H. Radiologic Decision-Making for Imaging in Pulmonary Embolism: Accuracy and Reliability of Large Language Models-Bing, Claude, ChatGPT, and Perplexity. Indian J Radiol Imaging 2024; 34:653-660. [PMID: 39318561] [PMCID: PMC11419749] [DOI: 10.1055/s-0044-1787974]
Abstract
Background Artificial intelligence chatbots have demonstrated potential to enhance clinical decision-making and streamline health care workflows, potentially alleviating administrative burdens. However, their contribution to radiologic decision-making in clinical scenarios remains insufficiently explored. This study evaluates the accuracy and reliability of four prominent large language models (LLMs) - Microsoft Bing, Claude, ChatGPT 3.5, and Perplexity - in offering clinical decision support for initial imaging in suspected pulmonary embolism (PE). Methods Open-ended (OE) and select-all-that-apply (SATA) questions were crafted covering four variants of PE case scenarios, in line with the American College of Radiology Appropriateness Criteria. These questions were presented to the LLMs by three radiologists from diverse geographical regions and practice setups. The responses were evaluated against established scoring criteria, with a maximum achievable score of 2 points for OE responses and 1 point for each correct answer in SATA questions. To enable comparative analysis, scores were normalized (score divided by the maximum achievable score). Results In OE questions, Perplexity achieved the highest accuracy (0.83), while Claude had the lowest (0.58); Bing and ChatGPT each scored 0.75. For SATA questions, Bing led with an accuracy of 0.96, Perplexity was lowest at 0.56, and both Claude and ChatGPT scored 0.6. Overall, OE questions yielded higher scores (0.73) than SATA questions (0.68). Agreement among radiologists' scores was poor for OE questions (intraclass correlation coefficient [ICC] = -0.067, p = 0.54) but strong for SATA questions (ICC = 0.875, p < 0.001). Conclusion The study revealed variations in accuracy across LLMs for both OE and SATA questions: Perplexity performed best on OE questions, Bing excelled on SATA questions, and OE queries yielded better overall results. The current inconsistencies in LLM accuracy highlight the importance of further refinement before these tools can be reliably integrated into clinical practice, including additional fine-tuning and judicious selection by radiologists, to achieve consistent and reliable decision support.
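The score normalization described in the methods (score divided by the maximum achievable score) can be sketched in a few lines; the example values are hypothetical, not taken from the study.

```python
# Minimal sketch of score normalization so open-ended (max 2 points) and
# select-all-that-apply (1 point per correct option) results are comparable.
def normalized_score(raw: float, max_achievable: float) -> float:
    if max_achievable <= 0:
        raise ValueError("max_achievable must be positive")
    return raw / max_achievable

print(normalized_score(1.5, 2.0))  # an OE response scored 1.5 of 2 -> 0.75
print(normalized_score(3.0, 4.0))  # a SATA item with 3 of 4 correct -> 0.75
```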
Affiliation(s)
- Pradosh Kumar Sarangi
- Department of Radiodiagnosis, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
- Suvrankar Datta
- Department of Radiodiagnosis, All India Institute of Medical Sciences New Delhi, New Delhi, India
- M. Sarthak Swarup
- Department of Radiodiagnosis, Vardhman Mahavir Medical College and Safdarjung Hospital New Delhi, New Delhi, India
- Swaha Panda
- Department of Otorhinolaryngology and Head and Neck Surgery, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
- Debasish Swapnesh Kumar Nayak
- Department of Computer Science and Engineering, SOET, Centurion University of Technology and Management, Bhubaneswar, Odisha, India
- Archana Malik
- Department of Pulmonary Medicine, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
- Ananda Datta
- Department of Pulmonary Medicine, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
- Himel Mondal
- Department of Physiology, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
5
Zhu Z. Advancements in automated classification of chronic obstructive pulmonary disease based on computed tomography imaging features through deep learning approaches. Respir Med 2024; 234:107809. [PMID: 39299523] [DOI: 10.1016/j.rmed.2024.107809]
Abstract
Chronic obstructive pulmonary disease (COPD) is a global public health issue that significantly impairs patients' quality of life and overall health. As a leading chronic respiratory disease and a major cause of global mortality, COPD requires effective diagnosis and classification for clinical management. Pulmonary function tests (PFTs) are the standard for diagnosing COPD, yet their accuracy is influenced by patient compliance and other factors, and they struggle to detect early disease pathologies. Furthermore, the complexity of COPD's pathological changes poses additional challenges for clinical diagnosis. Recently, deep learning (DL) technologies have demonstrated significant potential in medical image analysis, particularly for the diagnosis and classification of COPD. By analyzing key radiological features such as airway alterations, emphysema, and vascular characteristics in computed tomography (CT) images, DL enhances diagnostic accuracy and efficiency, supporting more precise treatment plans for COPD patients. This article reviews the latest advances in DL methods for COPD classification based on its principal radiological features and discusses the advantages, challenges, and future research directions of DL in this field, aiming to provide new perspectives for the personalized management and treatment of COPD.
Affiliation(s)
- Zirui Zhu
- School of Medicine, Xiamen University, Xiamen 361102, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, 361102, China.
6
Fathi M, Eshraghi R, Behzad S, Tavasol A, Bahrami A, Tafazolimoghadam A, Bhatt V, Ghadimi D, Gholamrezanezhad A. Potential strength and weakness of artificial intelligence integration in emergency radiology: a review of diagnostic utilizations and applications in patient care optimization. Emerg Radiol 2024. [PMID: 39190230] [DOI: 10.1007/s10140-024-02278-2]
Abstract
Artificial intelligence (AI) and its increasing integration into healthcare have created both new opportunities and challenges in the practice of radiology and medical imaging. Recent advancements in AI technology have enabled greater workplace efficiency, higher diagnostic accuracy, and overall improvements in patient care. However, limitations of AI, such as data imbalances, the opaque nature of AI algorithms, and challenges in detecting certain diseases, hinder its widespread adoption. This review presents cases involving the use of AI models to diagnose intracranial hemorrhage, spinal fractures, and rib fractures, and discusses how factors such as type, location, size, presence of artifacts, calcification, and post-surgical changes affect AI model performance and accuracy. While artificial intelligence has the potential to improve the practice of emergency radiology, addressing its limitations is essential to maximize its advantages while ensuring patient safety.
Affiliation(s)
- Mobina Fathi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences, Tehran, Iran
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Reza Eshraghi
- Student Research Committee, Kashan University of Medical Science, Kashan, Iran
- Arian Tavasol
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ashkan Bahrami
- Student Research Committee, Kashan University of Medical Science, Kashan, Iran
- Vivek Bhatt
- School of Medicine, University of California, Riverside, CA, USA
- Delaram Ghadimi
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ali Gholamrezanezhad
- Keck School of Medicine of University of Southern California, Los Angeles, CA, USA.
- Department of Radiology, Division of Emergency Radiology, Keck School of Medicine, Cedars Sinai Hospital, University of Southern California, 1500 San Pablo Street, Los Angeles, CA, 90033, USA.
7
Quang-Huy T, Sharma B, Theu LT, Tran DT, Chowdhury S, Karthik C, Gurusamy S. Frequency-hopping along with resolution-turning for fast and enhanced reconstruction in ultrasound tomography. Sci Rep 2024; 14:15483. [PMID: 38969737] [PMCID: PMC11226711] [DOI: 10.1038/s41598-024-66138-2]
Abstract
The distorted Born iterative (DBI) method is used to obtain images with high contrast and resolution. Besides satisfying the Born approximation condition, the frequency-hopping (FH) technique is needed to update the sound contrast gradually, starting from the first iteration and approaching the true sound contrast of the imaged object in subsequent iterations. The higher the frequency, the higher the attainable resolution; since low frequencies support only low-resolution imaging, demanding a high-resolution image from the first iteration at low frequency is inefficient. For effective reconstruction, the object resolution should therefore be small at low frequencies and larger at high frequencies. In this paper, the FH and resolution-turning (RT) techniques are combined to obtain object images with high contrast and resolution. Rapid convergence in the initial iterations is achieved by using a low frequency in the frequency-hopping technique and a low image resolution in the resolution-turning technique, which ensures accurate object reconstruction in subsequent iterations; the desired spatial resolution is then attained with a high frequency and a large image resolution. The resolution-turning distorted Born iterative (RT-DBI) and frequency-hopping distorted Born iterative (FH-DBI) solutions are investigated thoroughly to exploit their best performance: if the number of iterations at frequency f1 in FH-DBI, or at resolution N1 × N1 in RT-DBI, is chosen poorly, these solutions perform even worse than the traditional DBI. The RT-FH-DBI integration was then investigated in two sub-solutions, and using the lower frequency f1 both before and after the resolution turning gave the best performance. Compared with traditional DBI, the normalized error and the total reconstruction runtime decreased dramatically, by 83.6% and 18.6%, respectively. Besides fast, high-quality imaging, the proposed RT-FH-DBI solution promises high-contrast, high-resolution object images, aimed at reconstruction of biological tissue. 3D imaging and experimental verification will be studied further.
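The "normalized error" reported above can be illustrated with a common ratio-of-norms definition; both that definition and the toy contrast arrays are assumptions for illustration, not taken from the paper.

```python
# Toy normalized reconstruction error: ||x_rec - x_true|| / ||x_true||.
import numpy as np

x_true = np.array([0.00, 0.10, 0.20, 0.10, 0.00])  # "true" sound contrast
x_rec = np.array([0.00, 0.09, 0.22, 0.10, 0.01])   # reconstructed contrast

err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"normalized error = {err:.3f}")
```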
Affiliation(s)
- Tran Quang-Huy
- Faculty of Physics, Hanoi Pedagogical University 2, Xuan Hoa Ward, Phuc Yen City, Vinh Phuc Province, Vietnam
- Bhisham Sharma
- Centre of Research Impact and Outcome, Chitkara University, Rajpura, Punjab, 140401, India
- Duc-Tan Tran
- Faculty of Electrical and Electronic Engineering, Phenikaa University, Hanoi, 12116, Vietnam
- Subrata Chowdhury
- Department of Computer Science and Engineering, Sreenivasa Institute of Technology and Management Studies (SITAMS), Bangalore, India
- Chandran Karthik
- Robotics and Automation, Jyothi Engineering College, Thrissur, India
- Saravanakumar Gurusamy
- Department of Electrical and Electronics Technology, FDRE Technical and Vocational Training Institute, Addis Ababa, Ethiopia.
8
VanDecker WA. The Integrative Sport of Cardiac Imaging and Clinical Cardiology: Machine Augmentation and an Evolving Odyssey. JACC Cardiovasc Imaging 2024; 17:792-794. [PMID: 38613557] [DOI: 10.1016/j.jcmg.2024.02.012]
Affiliation(s)
- William A VanDecker
- Lewis Katz School of Medicine at Temple University, Philadelphia, Pennsylvania, USA.
9
Glaudemans AW. Heliyon medical imaging: Shaping the future of health. Heliyon 2024; 10:e32395. [PMID: 39183843] [PMCID: PMC11341280] [DOI: 10.1016/j.heliyon.2024.e32395]
Affiliation(s)
- Andor W.J.M. Glaudemans
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
10
Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024; 35:399-419. [PMID: 38291768] [DOI: 10.1515/revneuro-2023-0115]
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Affiliation(s)
- Omar S Al-Kadi
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Roa'a Al-Emaryeen
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Sara Al-Nahhas
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Isra'a Almallahi
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Ruba Braik
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Waleed Mahafza
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
11
Balasubramanian AA, Al-Heejawi SMA, Singh A, Breggia A, Ahmad B, Christman R, Ryan ST, Amal S. Ensemble Deep Learning-Based Image Classification for Breast Cancer Subtype and Invasiveness Diagnosis from Whole Slide Image Histopathology. Cancers (Basel) 2024; 16:2222. [PMID: 38927927] [PMCID: PMC11201924] [DOI: 10.3390/cancers16122222]
Abstract
Cancer diagnosis and classification are pivotal for effective patient management and treatment planning. In this study, a comprehensive approach is presented utilizing ensemble deep learning techniques to analyze breast cancer histopathology images. Our experiments were based on two widely employed datasets from different centers, covering two different tasks: BACH and BreakHis. Within the BACH dataset, the proposed ensemble strategy incorporated VGG16 and ResNet50 architectures to achieve precise classification of breast cancer histopathology images. A novel image patching technique for preprocessing high-resolution images enabled focused analysis of localized regions of interest. The annotated BACH dataset encompassed 400 WSIs across four distinct classes: Normal, Benign, In Situ Carcinoma, and Invasive Carcinoma. In addition, the proposed ensemble was applied to the BreakHis dataset, utilizing VGG16, ResNet34, and ResNet50 models to classify microscopic images into eight distinct categories (four benign and four malignant). For both datasets, a five-fold cross-validation approach was employed for rigorous training and testing. Preliminary experimental results indicated a patch classification accuracy of 95.31% (for the BACH dataset) and a whole-slide image classification accuracy of 98.43% (for BreakHis). This research significantly contributes to ongoing endeavors in harnessing artificial intelligence to advance breast cancer diagnosis, potentially fostering improved patient outcomes and alleviating healthcare burdens.
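The image-patching step described above can be sketched as tiling a large image into fixed-size patches for localized classification; the 256-pixel patch size and non-overlapping stride are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def extract_patches(image: np.ndarray, size: int, stride: int) -> list:
    """Return all size x size patches sampled with the given stride."""
    h, w = image.shape[:2]
    return [
        image[y:y + size, x:x + size]
        for y in range(0, h - size + 1, stride)
        for x in range(0, w - size + 1, stride)
    ]

img = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for one image region
patches = extract_patches(img, size=256, stride=256)
print(len(patches))  # 4 non-overlapping 256 x 256 patches
```

Each patch can then be classified independently, and patch-level predictions aggregated to an image-level decision.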
Affiliation(s)
- Akarsh Singh
- College of Engineering, Northeastern University, Boston, MA 02115, USA
- Anne Breggia
- MaineHealth Institute for Research, Scarborough, ME 04074, USA
- Bilal Ahmad
- Maine Medical Center, Portland, ME 04102, USA
- Robert Christman
- Maine Medical Center, Portland, ME 04102, USA
- Stephen T. Ryan
- Maine Medical Center, Portland, ME 04102, USA
- Saeed Amal
- The Roux Institute, Department of Bioengineering, College of Engineering, Northeastern University, Boston, MA 02115, USA
12
Alsaleh AM, Albalawi E, Algosaibi A, Albakheet SS, Khan SB. Few-Shot Learning for Medical Image Segmentation Using 3D U-Net and Model-Agnostic Meta-Learning (MAML). Diagnostics (Basel) 2024; 14:1213. [PMID: 38928629] [PMCID: PMC11202447] [DOI: 10.3390/diagnostics14121213]
Abstract
Deep learning has attained state-of-the-art results in general image segmentation problems; however, it requires a substantial number of annotated images to achieve the desired outcomes. In the medical field, the availability of annotated images is often limited. To address this challenge, few-shot learning techniques have been successfully adapted to generalize rapidly to new tasks with only a few samples by leveraging prior knowledge. In this paper, we employ a gradient-based method known as Model-Agnostic Meta-Learning (MAML) for medical image segmentation. MAML is a meta-learning algorithm that quickly adapts to new tasks by updating a model's parameters based on a limited set of training samples. Additionally, we use an enhanced 3D U-Net, a convolutional neural network specifically designed for medical image segmentation, as the foundational network for our models. We evaluate our approach on the TotalSegmentator dataset, considering a few annotated images for four tasks: liver, spleen, right kidney, and left kidney. The results demonstrate that our approach facilitates rapid adaptation to new tasks using only a few annotated images. In 10-shot settings, our approach achieved mean Dice coefficients of 93.70%, 85.98%, 81.20%, and 89.58% for liver, spleen, right kidney, and left kidney segmentation, respectively. In five-shot settings, it attained mean Dice coefficients of 90.27%, 83.89%, 77.53%, and 87.01% for the same four tasks. Finally, we assess the effectiveness of our proposed approach on a dataset collected from a local hospital. Employing five-shot settings, we achieve mean Dice coefficients of 90.62%, 79.86%, 79.87%, and 78.21% for liver, spleen, right kidney, and left kidney segmentation, respectively.
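The Dice coefficient used to report the segmentation results above can be sketched on binary masks; the toy arrays below are illustrative, not TotalSegmentator data.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2 |A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # 0.667
```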
Affiliation(s)
- Aqilah M. Alsaleh
- College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia
- Department of Information Technology, AlAhsa Health Cluster, Al Hofuf 3158-36421, AlAhsa, Saudi Arabia
- Eid Albalawi
- College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia
- Abdulelah Algosaibi
- College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia
- Salman S. Albakheet
- Department of Radiology, King Faisal General Hospital, Al Hofuf 36361, AlAhsa, Saudi Arabia
- Surbhi Bhatia Khan
- Department of Data Science, School of Science Engineering and Environment, University of Salford, Manchester M5 4WT, UK
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
13
Zhang X, Chen S, Zhang P, Wang C, Wang Q, Zhou X. Staging of Liver Fibrosis Based on Energy Valley Optimization Multiple Stacking (EVO-MS) Model. Bioengineering (Basel) 2024; 11:485. [PMID: 38790352] [PMCID: PMC11117710] [DOI: 10.3390/bioengineering11050485]
Abstract
Currently, staging the degree of liver fibrosis relies predominantly on liver biopsy, a method fraught with potential risks such as bleeding and infection. With the rapid development of medical imaging devices, quantification of liver fibrosis through image processing has become feasible. Stacking is an effective ensemble technique, but manually tuning it to find the optimal configuration is challenging. Therefore, this paper proposes a novel EVO-MS model, a multiple stacking ensemble learning model optimized by the energy valley optimization (EVO) algorithm, to select the most informative features for fibrosis quantification. Liver contours are profiled from 415 biopsy-proven CT cases, from which 10 shape features are calculated and input into a Support Vector Machine (SVM) classifier to generate predictions; the EVO algorithm is then applied to find the optimal parameter combination for fusing six base models: K-Nearest Neighbors (KNN), Decision Tree (DT), Naive Bayes (NB), Extreme Gradient Boosting (XGB), Gradient Boosting Decision Tree (GBDT), and Random Forest (RF), creating a well-performing ensemble model. Experimental results indicate that selecting 3-5 feature parameters yields satisfactory classification results, with features such as the contour roundness non-uniformity (Rmax), maximum peak height of the contour (Rp), and maximum valley depth of the contour (Rm) significantly influencing classification accuracy. The improved EVO algorithm, combined with the multiple stacking model, achieves an accuracy of 0.864, a precision of 0.813, a sensitivity of 0.912, a specificity of 0.824, and an F1-score of 0.860, demonstrating the effectiveness of the EVO-MS model in staging the degree of liver fibrosis.
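A stripped-down stacking ensemble in the spirit of the pipeline above can be sketched with scikit-learn: several base classifiers fused by a meta-learner. The EVO parameter search, the XGBoost/GBDT members, and the real contour-shape features are omitted here; the data is synthetic and the accuracy is not the paper's result.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 10 contour-shape features per case.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    final_estimator=SVC(),  # SVM as the fusing classifier
    cv=5,
)
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.2f}")
```

In the actual EVO-MS design, the fusion weights over the base models are searched by the EVO metaheuristic rather than fit by a fixed meta-learner.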
Affiliation(s)
- Xuejun Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, China
- Shengxiang Chen
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Pengfei Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Chun Wang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Qibo Wang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Xiangrong Zhou
- Department of Electrical, Electronic and Computer Engineering, Gifu University, Gifu 501-1193, Japan
14
Chen WW, Kuo L, Lin YX, Yu WC, Tseng CC, Lin YJ, Huang CC, Chang SL, Wu JCH, Chen CK, Weng CY, Chan S, Lin WW, Hsieh YC, Lin MC, Fu YC, Chen T, Chen SA, Lu HHS. A Deep Learning Approach to Classify Fabry Cardiomyopathy from Hypertrophic Cardiomyopathy Using Cine Imaging on Cardiac Magnetic Resonance. Int J Biomed Imaging 2024; 2024:6114826. [PMID: 38706878] [PMCID: PMC11068448] [DOI: 10.1155/2024/6114826]
Abstract
A challenge in accurately identifying and classifying left ventricular hypertrophy (LVH) is distinguishing it from hypertrophic cardiomyopathy (HCM) and Fabry disease. The reliance on imaging techniques often requires the expertise of multiple specialists, including cardiologists, radiologists, and geneticists, and this variability in interpretation and classification leads to inconsistent diagnoses. LVH, HCM, and Fabry cardiomyopathy can be differentiated using T1 mapping on cardiac magnetic resonance imaging (MRI); however, differentiating HCM from Fabry cardiomyopathy using echocardiography or MRI cine images remains challenging for cardiologists. Our proposed system, the MRI short-axis view left ventricular hypertrophy classifier (MSLVHC), is a high-accuracy standardized imaging classification model developed using AI and trained on MRI short-axis (SAX) cine images to distinguish between HCM and Fabry disease. The model achieved an F1-score of 0.846, an accuracy of 0.909, and an AUC of 0.914 when tested on the Taipei Veterans General Hospital (TVGH) dataset. A single-blinded study and external testing using data from the Taichung Veterans General Hospital (TCVGH) further demonstrated the model's reliability and effectiveness, achieving an F1-score of 0.727, an accuracy of 0.806, and an AUC of 0.918. This AI model holds promise as a valuable tool for assisting specialists in diagnosing LVH diseases.
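The F1-score reported above is the harmonic mean of precision and recall; a toy recomputation can be sketched as follows, with the input precision and recall values invented for illustration.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9, 0.8), 3))  # 0.847
```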
Affiliation(s)
- Wei-Wen Chen: Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
- Ling Kuo: Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
- Yi-Xun Lin: Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Wen-Chung Yu: Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Chien-Chao Tseng: Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
- Yenn-Jiang Lin: Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Ching-Chun Huang: Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
- Shih-Lin Chang: Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Jacky Chung-Hao Wu: Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Chun-Ku Chen: Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Ching-Yao Weng: Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Siwa Chan: Department of Radiology, Taichung Veterans General Hospital, Taichung, Taiwan; Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan
- Wei-Wen Lin: Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan
- Yu-Cheng Hsieh: Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan
- Ming-Chih Lin: Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan; Department of Pediatric Cardiology, Taichung Veterans General Hospital, Taichung, Taiwan; Children's Medical Center, Taichung Veterans General Hospital, Taichung, Taiwan
- Yun-Ching Fu: Department of Pediatric Cardiology, Taichung Veterans General Hospital, Taichung, Taiwan; Children's Medical Center, Taichung Veterans General Hospital, Taichung, Taiwan; Department of Pediatrics, School of Medicine, National Chung-Hsing University, Taichung, Taiwan
- Tsung Chen: Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Shih-Ann Chen: Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan; College of Medicine, National Chung Hsing University, Taichung, Taiwan
- Henry Horng-Shing Lu: Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan; Department of Statistics and Data Science, Cornell University, Ithaca, New York, USA

15
Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH. US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation. Comput Biol Med 2024; 172:108282. [PMID: 38503085 DOI: 10.1016/j.compbiomed.2024.108282]
Abstract
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
Affiliation(s)
- Gang Wang: School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China; Department of Bioengineering, Imperial College London, London, UK
- Mingliang Zhou: School of Computer Science, Chongqing University, Chongqing, China
- Xin Ning: Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Prayag Tiwari: School of Information Technology, Halmstad University, Halmstad, Sweden
- Guang Yang: Department of Bioengineering, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Choon Hwai Yap: Department of Bioengineering, Imperial College London, London, UK

16
Zangrossi P, Martini M, Guerrini F, DE Bonis P, Spena G. Large language model, AI and scientific research: why ChatGPT is only the beginning. J Neurosurg Sci 2024; 68:216-224. [PMID: 38261307 DOI: 10.23736/s0390-5616.23.06171-4]
Abstract
ChatGPT, a conversational artificial intelligence model based on the generative pre-trained transformer (GPT) architecture, has garnered widespread attention due to its user-friendly nature and diverse capabilities. This technology enables users of all backgrounds to effortlessly engage in human-like conversations and receive coherent, intelligible responses. Beyond casual interactions, ChatGPT offers compelling prospects for scientific research, facilitating tasks like literature review and content summarization and ultimately expediting and enhancing the academic writing process. In medicine and surgery, it has already shown broad potential across many tasks: enhancing decision-making processes, aiding in surgical planning and simulation, providing real-time assistance during surgery, improving postoperative care and rehabilitation, and contributing to training, education, research, and development. However, it is crucial to acknowledge the model's limitations, encompassing knowledge constraints and the potential for erroneous responses, as well as ethical and legal considerations. This paper explores the potential benefits and pitfalls of these innovative technologies in scientific research, shedding light on their transformative impact while addressing concerns surrounding their use.
Affiliation(s)
- Pietro Zangrossi: Department of Neurosurgery, Sant'Anna University Hospital, Ferrara, Italy; Department of Translational Medicine, University of Ferrara, Ferrara, Italy
- Massimo Martini: R&D Department, Gate-away.com, Grottammare, Ascoli Piceno, Italy
- Francesco Guerrini: Department of Neurosurgery, San Matteo Polyclinic IRCCS Foundation, Pavia, Italy
- Pasquale DE Bonis: Department of Neurosurgery, Sant'Anna University Hospital, Ferrara, Italy; Department of Translational Medicine, University of Ferrara, Ferrara, Italy; Unit of Minimally Invasive Neurosurgery, Ferrara University Hospital, Ferrara, Italy
- Giannantonio Spena: Department of Neurosurgery, San Matteo Polyclinic IRCCS Foundation, Pavia, Italy

17
Agharia S, Szatkowski J, Fraval A, Stevens J, Zhou Y. The ability of artificial intelligence tools to formulate orthopaedic clinical decisions in comparison to human clinicians: An analysis of ChatGPT 3.5, ChatGPT 4, and Bard. J Orthop 2024; 50:1-7. [PMID: 38148925 PMCID: PMC10749221 DOI: 10.1016/j.jor.2023.11.063]
Abstract
Background Recent advancements in artificial intelligence (AI) have sparked interest in its integration into clinical medicine and education. This study evaluates the performance of three AI tools compared to human clinicians in addressing complex orthopaedic decisions in real-world clinical cases. Questions/purposes To evaluate the ability of commonly used AI tools to formulate orthopaedic clinical decisions in comparison to human clinicians. Patients and methods The study used OrthoBullets Cases, a publicly available clinical case collaboration platform where surgeons from around the world choose treatment options based on peer-reviewed standardised treatment polls. The clinical cases cover various orthopaedic categories. Three AI tools (ChatGPT 3.5, ChatGPT 4, and Bard) were evaluated. Uniform prompts were used to input case information, including questions relating to the case, and the AI tools' responses were analysed for alignment with the most popular human response, and for agreement within 10% and within 20% of it. Results In total, 8 clinical categories comprising 97 questions were analysed. ChatGPT 4 demonstrated the highest proportion of most popular responses (ChatGPT 4 68.0%, ChatGPT 3.5 40.2%, Bard 45.4%; P < 0.001), outperforming the other AI tools. The AI tools performed worse on questions considered controversial (where disagreement occurred in human responses). Inter-tool agreement, evaluated using Cohen's kappa coefficient, ranged from 0.201 (ChatGPT 4 vs. Bard) to 0.634 (ChatGPT 3.5 vs. Bard). Moreover, AI tool responses varied widely, reflecting a need for consistency in real-world clinical applications. Conclusions While the AI tools demonstrated potential for use in educational contexts, their integration into clinical decision-making requires caution due to inconsistent responses and deviations from peer consensus. Future research should focus on specialised clinical AI tool development to maximise utility in clinical decision-making. Level of evidence IV.
Affiliation(s)
- Suzen Agharia: Department of Orthopaedic Surgery, St. Vincent's Hospital, Melbourne, Victoria, Australia
- Jan Szatkowski: Department of Orthopaedic Surgery, Indiana University Health Methodist Hospital, Indianapolis, IN, USA
- Andrew Fraval: Department of Orthopaedic Surgery, St. Vincent's Hospital, Melbourne, Victoria, Australia
- Jarrad Stevens: Department of Orthopaedic Surgery, St. Vincent's Hospital, Melbourne, Victoria, Australia
- Yushy Zhou: Department of Orthopaedic Surgery, St. Vincent's Hospital, Melbourne, Victoria, Australia

18
Ghosh Moulic A, Deshmukh P, Gaurkar SS. A Comprehensive Review on Biofilms in Otorhinolaryngology: Understanding the Pathogenesis, Diagnosis, and Treatment Strategies. Cureus 2024; 16:e57634. [PMID: 38707023 PMCID: PMC11070220 DOI: 10.7759/cureus.57634]
Abstract
Biofilms, structured communities of microorganisms encased in a self-produced matrix, pose significant challenges in otorhinolaryngology due to their role in chronic and recurrent infections affecting the ear, nose, and throat (ENT) region. This review provides an overview of biofilms, emphasizing their formation, pathogenesis, diagnosis, and treatment strategies in otorhinolaryngological disorders. Biofilms are pivotal in chronic rhinosinusitis (CRS), otitis media, laryngopharyngeal reflux (LPR), and tonsillitis, contributing to treatment resistance and disease recurrence. Current diagnostic techniques, including imaging modalities, microbiological cultures, and molecular techniques, are discussed, alongside emerging technologies. Treatment strategies, ranging from conventional antibiotics to alternative therapies, such as biofilm disruptors, phage therapy, and immunomodulation, are evaluated in terms of their efficacy and potential clinical applications. The review underscores the significance of understanding biofilms in otorhinolaryngology and highlights the need for tailored approaches to diagnosis and management to improve patient outcomes.
Affiliation(s)
- Ayushi Ghosh Moulic: Otorhinolaryngology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND
- Prasad Deshmukh: Otorhinolaryngology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND
- Sagar S Gaurkar: Otorhinolaryngology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND

19
Maleki Varnosfaderani S, Forouzanfar M. The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century. Bioengineering (Basel) 2024; 11:337. [PMID: 38671759 PMCID: PMC11047988 DOI: 10.3390/bioengineering11040337]
Abstract
As healthcare systems around the world face challenges such as escalating costs, limited access, and growing demand for personalized care, artificial intelligence (AI) is emerging as a key force for transformation. This review is motivated by the urgent need to harness AI's potential to mitigate these issues and aims to critically assess AI's integration in different healthcare domains. We explore how AI empowers clinical decision-making, optimizes hospital operation and management, refines medical image analysis, and revolutionizes patient care and monitoring through AI-powered wearables. Through several case studies, we review how AI has transformed specific healthcare domains and discuss the remaining challenges and possible solutions. Additionally, we discuss methodologies for assessing AI healthcare solutions, ethical challenges of AI deployment, and the importance of data privacy and bias mitigation for responsible technology use. By presenting a critical assessment of AI's transformative potential, this review equips researchers with a deeper understanding of AI's current and future impact on healthcare. It encourages an interdisciplinary dialogue between researchers, clinicians, and technologists to navigate the complexities of AI implementation, fostering the development of AI-driven solutions that prioritize ethical standards, equity, and a patient-centered approach.
Affiliation(s)
- Mohamad Forouzanfar: Département de Génie des Systèmes, École de Technologie Supérieure (ÉTS), Université du Québec, Montréal, QC H3C 1K3, Canada; Centre de Recherche de L’institut Universitaire de Gériatrie de Montréal (CRIUGM), Montréal, QC H3W 1W5, Canada

20
Unger M, Kather JN. Deep learning in cancer genomics and histopathology. Genome Med 2024; 16:44. [PMID: 38539231 PMCID: PMC10976780 DOI: 10.1186/s13073-024-01315-6]
Abstract
Histopathology and genomic profiling are cornerstones of precision oncology and are routinely obtained for patients with cancer. Traditionally, histopathology slides are manually reviewed by highly trained pathologists. Genomic data, on the other hand, are evaluated by engineered computational pipelines. In both applications, the advent of modern artificial intelligence methods, specifically machine learning (ML) and deep learning (DL), has opened up a fundamentally new way of extracting actionable insights from raw data, which could augment and potentially replace some aspects of traditional evaluation workflows. In this review, we summarize current and emerging applications of DL in histopathology and genomics, including basic diagnostic as well as advanced prognostic tasks. Based on a growing body of evidence, we suggest that DL could be the groundwork for a new kind of workflow in oncology and cancer research. However, we also point out that DL models can have biases and other flaws that users in healthcare and research need to know about, and we propose ways to address them.
Affiliation(s)
- Michaela Unger: Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jakob Nikolas Kather: Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Department of Medicine I, University Hospital Dresden, Dresden, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany

21
Li X, Xiang S, Li G. Application of artificial intelligence in brain arteriovenous malformations: Angioarchitectures, clinical symptoms and prognosis prediction. Interv Neuroradiol 2024:15910199241238798. [PMID: 38515371 DOI: 10.1177/15910199241238798]
Abstract
BACKGROUND Artificial intelligence (AI) has rapidly advanced in the medical field, leveraging its intelligence and automation for the management of various diseases. Brain arteriovenous malformations (AVMs) are particularly noteworthy, with applications developing rapidly in recent years and yielding remarkable results. This paper aims to summarize the applications of AI in the management of AVMs. METHODS Literature published in PubMed between 1999 and 2022 discussing AI applications in AVM management was reviewed. RESULTS AI algorithms, particularly machine learning and deep learning models, have been applied to various aspects of AVM management. Automatic lesion segmentation or delineation is a promising application that can be further developed and validated. Prognosis prediction using machine learning algorithms with radiomics-based analysis is another meaningful application. CONCLUSIONS AI has been widely used in AVM management. This article summarizes the current research progress, limitations, and future research directions.
Affiliation(s)
- Xiangyu Li: Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Sishi Xiang: Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Guilin Li: Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China

22
García-Jaramillo M, Luque C, León-Vargas F. Machine Learning and Deep Learning Techniques Applied to Diabetes Research: A Bibliometric Analysis. J Diabetes Sci Technol 2024; 18:287-301. [PMID: 38047451 DOI: 10.1177/19322968231215350]
Abstract
BACKGROUND The use of machine learning and deep learning techniques in the research on diabetes has garnered attention in recent times. Nonetheless, few studies offer a thorough picture of the knowledge generation landscape in this field. To address this, a bibliometric analysis of scientific articles published from 2000 to 2022 was conducted to discover global research trends and networks and to emphasize the most prominent countries, institutions, journals, articles, and key topics in this domain. METHODS The Scopus database was used to identify and retrieve high-quality scientific documents. The results were classified into categories of detection (covering diagnosis, screening, identification, segmentation, among others), prediction (prognosis, forecasting, estimation), and management (treatment, control, monitoring, education, telemedicine integration). Biblioshiny and RStudio were used to analyze the data. RESULTS A total of 1773 articles were collected and analyzed. The number of publications and citations increased substantially since 2012, with a notable increase in the last 3 years. Of the 3 categories considered, detection was the most dominant, followed by prediction and management. Around 53.2% of the total journals started disseminating articles on this subject in 2020. China, India, and the United States were the most productive countries. Although no evidence of outstanding leadership by specific authors was found, the University of California emerged as the most influential institution for the development of scientific production. CONCLUSION This is an evolving field that has experienced a rapid increase in productivity, especially over the last years with exponential growth. This trend is expected to continue in the coming years.
Affiliation(s)
- Carolina Luque: Faculty of Engineering, Universidad EAN, Bogotá, Colombia
- Fabian León-Vargas: Faculty of Mechanical, Electronic and Biomedical Engineering, Universidad Antonio Nariño, Bogotá, Colombia

23
Butt SR, Soulat A, Lal PM, Fakhor H, Patel SK, Ali MB, Arwani S, Mohan A, Majumder K, Kumar V, Tejwaney U, Kumar S. Impact of artificial intelligence on the diagnosis, treatment and prognosis of endometrial cancer. Ann Med Surg (Lond) 2024; 86:1531-1539. [PMID: 38463097 PMCID: PMC10923372 DOI: 10.1097/ms9.0000000000001733]
Abstract
Endometrial cancer is one of the most prevalent tumours in females, with an 83% survival rate within 5 years of diagnosis. Hypoestrogenism is a major risk factor for the development of endometrial carcinoma (EC); two major types are therefore distinguished, type 1 being oestrogen-dependent and type 2 oestrogen-independent. Surgery, chemotherapeutic drugs, and radiation therapy are only a few of the treatment options for EC. Treatment of gynaecologic malignancies depends greatly on diagnosis and prognostic prediction. Diagnostic imaging data and clinical course prediction are the two core pillars of artificial intelligence (AI) applications. MRI is one of the most popular imaging techniques for detecting endometrial cancer preoperatively, although it can only produce qualitative data. When used to classify patients, AI improves the effectiveness of visual feature extraction. In general, AI has the potential to enhance the precision and effectiveness of endometrial cancer diagnosis and therapy. This review aims to highlight the current status of AI applications in endometrial cancer and to provide a comprehensive understanding of how recent advancements in AI have assisted clinicians in making better diagnoses and improving the prognosis of endometrial cancer. Still, additional study is required to fully comprehend its strengths and limits.
Affiliation(s)
- Anmol Mohan: Karachi Medical and Dental College, Karachi, Pakistan

24
Lokaj B, Pugliese MT, Kinkel K, Lovis C, Schmid J. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34:2096-2109. [PMID: 37658895 PMCID: PMC10873444 DOI: 10.1007/s00330-023-10181-6]
Abstract
OBJECTIVE Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. METHOD A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and ArXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in breast clinical imaging. We excluded research focusing only on performance, or with data not acquired in a clinical radiology setup and not involving real patients. RESULTS A total of 107 articles were included. We identified six major barriers related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). CONCLUSION This scoping review highlighted the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders, to yield improvement in healthcare. CLINICAL RELEVANCE STATEMENT The identification of barriers and facilitators, with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. KEY POINTS • Six major identified barriers were related to data; black box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education. • Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education. • Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Affiliation(s)
- Belinda Lokaj: Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland; Faculty of Medicine, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Marie-Thérèse Pugliese: Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Karen Kinkel: Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
- Christian Lovis: Faculty of Medicine, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Jérôme Schmid: Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland

25
Ying Y, Huang X, Song G, Zhao Y, Zhao X, Shi L, Gao Z, Li A, Gao T, Lu H, Fan G. 3D-CAM: a novel context-aware feature extraction framework for neurological disease classification. Front Neurosci 2024; 18:1364338. [PMID: 38486967 PMCID: PMC10938914 DOI: 10.3389/fnins.2024.1364338]
Abstract
In clinical practice and research, the classification and diagnosis of neurological diseases such as Parkinson's Disease (PD) and Multiple System Atrophy (MSA) have long posed a significant challenge. Currently, deep learning, as a cutting-edge technology, has demonstrated immense potential in computer-aided diagnosis of PD and MSA. However, existing methods rely heavily on manually selecting key feature slices and segmenting regions of interest, which not only increases subjectivity and complexity in the classification process but also limits the model's comprehensive analysis of global data features. To address this issue, this paper proposes a novel 3D context-aware modeling framework, named 3D-CAM, which considers 3D contextual information based on an attention mechanism. The framework, utilizing a 2D slicing-based strategy, innovatively integrates a Contextual Information Module and a Location Filtering Module. The Contextual Information Module can be applied to feature maps at any layer, effectively combining features from adjacent slices and utilizing an attention mechanism to focus on crucial features. The Location Filtering Module, on the other hand, is employed in the post-processing phase to filter significant slice segments of classification features. By employing this method in the fully automated classification of PD and MSA, an accuracy of 85.71%, a recall of 86.36%, and a precision of 90.48% were achieved. These results not only demonstrate potential for clinical applications but also provide a novel perspective for medical image diagnosis, offering robust support for the accurate diagnosis of neurological diseases.
Affiliation(s)
- Yuhan Ying: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China; University of Chinese Academy of Sciences, Beijing, China
- Xin Huang: Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Guoli Song: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Yiwen Zhao: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- XinGang Zhao: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Lin Shi: Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Ziqi Gao: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China; University of Chinese Academy of Sciences, Beijing, China
- Andi Li: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China; University of Chinese Academy of Sciences, Beijing, China
- Tian Gao: Shenyang Ligong University, Shenyang, China
- Hua Lu: Department of Neurosurgery, Affiliated Hospital of Jiangnan University, Wuxi, China
- Guoguang Fan: Department of Radiology, The First Hospital of China Medical University, Shenyang, China

26
Boverhof BJ, Redekop WK, Bos D, Starmans MPA, Birch J, Rockall A, Visser JJ. Radiology AI Deployment and Assessment Rubric (RADAR) to bring value-based AI into radiological practice. Insights Imaging 2024; 15:34. [PMID: 38315288 PMCID: PMC10844175 DOI: 10.1186/s13244-023-01599-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 11/14/2023] [Indexed: 02/07/2024] Open
Abstract
OBJECTIVE To provide a comprehensive framework for value assessment of artificial intelligence (AI) in radiology. METHODS This paper presents the RADAR framework, adapted from Fryback and Thornbury's imaging efficacy framework to facilitate the valuation of radiology AI from conception to local implementation. Local efficacy has been newly introduced to underscore the importance of appraising an AI technology within its local environment. Furthermore, the RADAR framework is illustrated through a range of study designs that help assess value. RESULTS RADAR presents a seven-level hierarchy, providing radiologists, researchers, and policymakers with a structured approach to the comprehensive assessment of value in radiology AI. RADAR is designed to be dynamic and to meet the different valuation needs throughout the AI's lifecycle. Initial phases such as technical and diagnostic efficacy (RADAR-1 and RADAR-2) are assessed before clinical deployment via in silico clinical trials and cross-sectional studies. Subsequent stages, spanning from diagnostic thinking to patient outcome efficacy (RADAR-3 to RADAR-5), require clinical integration and are explored via randomized controlled trials and cohort studies. Cost-effectiveness efficacy (RADAR-6) takes a societal perspective on financial feasibility, addressed via health-economic evaluations. The final level, RADAR-7, determines how prior valuations translate locally, evaluated through budget impact analysis, multi-criteria decision analyses, and prospective monitoring. CONCLUSION The RADAR framework offers a comprehensive approach to valuing radiology AI. Its layered, hierarchical structure, combined with a focus on local relevance, aligns RADAR with the principles of value-based radiology. CRITICAL RELEVANCE STATEMENT The RADAR framework advances artificial intelligence in radiology by delineating a much-needed framework for comprehensive valuation.
KEYPOINTS • Radiology artificial intelligence lacks a comprehensive approach to value assessment. • The RADAR framework provides a dynamic, hierarchical method for thorough valuation of radiology AI. • RADAR advances clinical radiology by bridging the artificial intelligence implementation gap.
Affiliation(s)
- Bart-Jan Boverhof
  - Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- W Ken Redekop
  - Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Daniel Bos
  - Department of Epidemiology, Erasmus University Medical Centre, Rotterdam, The Netherlands
  - Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Martijn P A Starmans
  - Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Andrea Rockall
  - Department of Surgery & Cancer, Imperial College London, London, UK
- Jacob J Visser
  - Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
|
27
|
Zhang Y, Dong J. MAEF-Net: MLP Attention for Feature Enhancement in U-Net based Medical Image Segmentation Networks. IEEE J Biomed Health Inform 2024; 28:846-857. [PMID: 37976191 DOI: 10.1109/jbhi.2023.3332908] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2023]
Abstract
Medical image segmentation plays an important role in diagnosis. Since the introduction of U-Net, numerous advancements have been implemented to enhance its performance and expand its applicability. The advent of Transformers in computer vision has led to the integration of self-attention mechanisms into U-Net, resulting in significant breakthroughs. However, the inherent complexity of Transformers renders these networks computationally demanding and parameter-heavy. Recent studies have demonstrated that multilayer perceptrons (MLPs), with their simpler architecture, can achieve comparable performance to Transformers in natural language processing and computer vision tasks. Building upon these findings, we have enhanced the previously proposed "Enhanced-Feature-Four-Fold-Net" (EF³-Net) by introducing an MLP-attention block to learn long-range dependencies and expand the receptive field. This enhanced network is termed "MLP-Attention Enhanced-Feature-four-fold-Net", abbreviated as "MAEF-Net". To further enhance accuracy while reducing computational complexity, the proposed network incorporates additional efficient design elements. MAEF-Net was evaluated against several general and specialized medical image segmentation networks using four challenging medical image datasets. The results demonstrate that the proposed network exhibits high computational efficiency and comparable or superior performance to EF³-Net and several state-of-the-art methods, particularly in segmenting blurry objects.
|
28
|
Yang WT, Ma BY, Chen Y. A narrative review of deep learning in thyroid imaging: current progress and future prospects. Quant Imaging Med Surg 2024; 14:2069-2088. [PMID: 38415152 PMCID: PMC10895129 DOI: 10.21037/qims-23-908] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Accepted: 12/01/2023] [Indexed: 02/29/2024]
Abstract
Background and Objective Deep learning (DL) has contributed substantially to the evolution of image analysis by unlocking increased data and computational power. DL algorithms have further facilitated the growing trend of implementing precision medicine, particularly in diagnosis and therapy. Thyroid disease is a global health problem involving structural and functional changes, and thyroid imaging, as a routine means of screening large populations for thyroid diseases, is a massive data source for the development of DL models. The objective of this study was to evaluate the general rules and future directions of DL networks in thyroid medical image analysis through a review of original articles published between 2018 and 2023. Methods We searched for English-language articles published between April 2018 and September 2023 in the PubMed, Web of Science, and Google Scholar databases. The keywords used in the search included artificial intelligence or DL, thyroid diseases, and thyroid nodule or thyroid carcinoma. Key Content and Findings The computer vision tasks of DL in thyroid imaging include classification, segmentation, and detection. The current applications of DL in the clinical workflow mainly include management of thyroid nodules/carcinoma, risk evaluation of thyroid cancer metastasis, and discrimination of functional thyroid diseases. Conclusions DL is expected to enhance the quality of thyroid images and provide greater precision in their assessment. Specifically, DL can increase the diagnostic accuracy of thyroid diseases and better inform clinical decision-making.
Affiliation(s)
- Wan-Ting Yang
  - Department of Medical Ultrasound, West China Hospital, Sichuan University, Chengdu, China
- Bu-Yun Ma
  - Department of Medical Ultrasound, West China Hospital, Sichuan University, Chengdu, China
- Yang Chen
  - Department of Medical Ultrasound, West China Hospital, Sichuan University, Chengdu, China
|
29
|
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues, so detecting cancer at an early stage is highly essential. Medical images currently play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The utmost goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
  - School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak
  - Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
  - Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
  - Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
  - School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
|
30
|
Liu L. Implemented classification techniques for osteoporosis using deep learning from the perspective of healthcare analytics. Technol Health Care 2024; 32:1947-1965. [PMID: 38393861 DOI: 10.3233/thc-231517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2024]
Abstract
BACKGROUND Osteoporosis is a medical disorder that causes bone tissue to deteriorate and lose density, increasing the risk of fractures. Osteoporosis classification using Deep Learning (DL) algorithms applies Neural Networks (NN) to analyze medical imaging data and detect the presence or severity of osteoporosis in patients. DL algorithms can extract relevant information from bone images and discover intricate patterns that could indicate osteoporosis. OBJECTIVE DCNN biases must be initialized carefully, much like their weights, because incorrectly initialized biases can affect the network's learning dynamics and hinder the model's ability to converge to an ideal solution. In this research, Deep Convolutional Neural Networks (DCNNs) are used, which have several benefits over conventional ML techniques for image processing. METHOD One of the key benefits of DCNNs is the ability to automatically perform Feature Extraction (FE) from raw data: whereas feature learning is a time-consuming procedure in conventional ML algorithms, during the training phase of DCNNs the network learns to recognize relevant characteristics directly from the data. The Squirrel Search Algorithm (SSA) makes use of a combination of Local Search (LS) and Random Search (RS) techniques inspired by the foraging habits of squirrels. RESULTS The method made it possible to efficiently explore the search space for prospective values while exploiting promising areas to refine and improve the solutions; effectively recognizing optimal or near-optimal solutions depends on balancing exploration and exploitation. The weights of the DCNN are optimized with the help of SSA, which enhances classification performance. CONCLUSION A comparative analysis with state-of-the-art techniques shows that the proposed SSA-based DCNN is highly accurate, with 96.57% accuracy.
|
31
|
Zhang W, Lu F, Su H, Hu Y. Dual-branch multi-information aggregation network with transformer and convolution for polyp segmentation. Comput Biol Med 2024; 168:107760. [PMID: 38064849 DOI: 10.1016/j.compbiomed.2023.107760] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 10/21/2023] [Accepted: 11/21/2023] [Indexed: 01/10/2024]
Abstract
Computer-Aided Diagnosis (CAD) for polyp detection offers one of the most notable showcases of deep learning in medicine: with deep learning technologies, the accuracy of polyp segmentation is surpassing that of human experts. In such a CAD process, a critical step is segmenting colorectal polyps from colonoscopy images. Despite remarkable successes attained by recent deep learning related works, much improvement is still anticipated to tackle challenging cases. For instance, the effects of motion blur and light reflection can introduce significant noise into the image, and polyps of the same type have a diversity of size, color, and texture. To address such challenges, this paper proposes a novel dual-branch multi-information aggregation network (DBMIA-Net) for polyp segmentation, which is able to accurately and reliably segment a variety of colorectal polyps with efficiency. Specifically, a dual-branch encoder with transformer and convolutional neural networks (CNN) is employed to extract polyp features, and two multi-information aggregation modules are applied in the decoder to fuse multi-scale features adaptively: a global information aggregation (GIA) module and an edge information aggregation (EIA) module. In addition, to enhance the representation learning capability of the potential channel feature association, this paper also proposes a novel adaptive channel graph convolution (ACGC). To validate the effectiveness and advantages of the proposed network, we compare it with several state-of-the-art (SOTA) methods on five public datasets. Experimental results consistently demonstrate that the proposed DBMIA-Net obtains significantly superior segmentation performance across six popularly used evaluation metrics. In particular, we achieve 94.12% mean Dice on the CVC-ClinicDB dataset, a 4.22% improvement over the previous state-of-the-art method PraNet. Compared with SOTA algorithms, DBMIA-Net has a better fitting ability and stronger generalization ability.
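The mean Dice figure quoted above can be made concrete with a minimal sketch (not the authors' code) of the Dice similarity coefficient on binary segmentation masks, here flattened to lists of 0/1:

```python
# Minimal sketch of the Dice similarity coefficient used to score
# segmentation quality; the masks below are toy examples.
def dice(pred, target):
    """Dice coefficient for two equal-length binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, target))  # overlapping foreground pixels
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0      # both empty -> perfect match

pred   = [0, 1, 1, 1, 0, 0, 1, 0]
target = [0, 1, 1, 0, 0, 1, 1, 0]
print(f"Dice = {dice(pred, target):.3f}")
# prints Dice = 0.750
```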
Affiliation(s)
- Wenyu Zhang
  - School of Information Science and Engineering, Lanzhou University, China
- Fuxiang Lu
  - School of Information Science and Engineering, Lanzhou University, China
- Hongjing Su
  - School of Information Science and Engineering, Lanzhou University, China
- Yawen Hu
  - School of Information Science and Engineering, Lanzhou University, China
|
32
|
S Alshuhri M, Al-Musawi SG, Al-Alwany AA, Uinarni H, Rasulova I, Rodrigues P, Alkhafaji AT, Alshanberi AM, Alawadi AH, Abbas AH. Artificial intelligence in cancer diagnosis: Opportunities and challenges. Pathol Res Pract 2024; 253:154996. [PMID: 38118214 DOI: 10.1016/j.prp.2023.154996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 11/20/2023] [Accepted: 11/27/2023] [Indexed: 12/22/2023]
Abstract
Since cancer is one of the world's top causes of death, early diagnosis is critical to improving patient outcomes. Artificial intelligence (AI) has become a viable technique for cancer diagnosis, using machine learning algorithms to examine large volumes of data for accurate and efficient diagnosis. AI has the potential to fundamentally alter the way cancer is detected, but it also has several disadvantages, such as requiring large amounts of data, technological limitations, and ethical concerns. This overview examines the possibilities and restrictions of AI in cancer detection, as well as current applications and possible future developments. By examining AI's potential for cancer detection, we can better understand how to use it to improve patient outcomes and reduce cancer mortality rates.
Affiliation(s)
- Mohammed S Alshuhri
  - Radiology and Medical Imaging Department, College of Applied Medical Sciences, Prince Sattam bin Abdulaziz University, Kharj, Saudi Arabia
- Herlina Uinarni
  - Department of Anatomy, School of Medicine and Health Sciences Atma Jaya Catholic University of Indonesia, Indonesia
  - Radiology Department of Pantai Indah Kapuk Hospital Jakarta, Jakarta, Indonesia
- Irodakhon Rasulova
  - School of Humanities, Natural & Social Sciences, New Uzbekistan University, 54 Mustaqillik Ave., Tashkent 100007, Uzbekistan
  - Department of Public Health, Samarkand State Medical University, Amir Temur Street 18, Samarkand, Uzbekistan
- Paul Rodrigues
  - Department of Computer Engineering, College of Computer Science, King Khalid University, Al-Faraa, Abha, Asir, Kingdom of Saudi Arabia
- Asim Muhammed Alshanberi
  - Department of Community Medicine & Pilgrim Healthcare, Umm Alqura University, Makkah 24382, Saudi Arabia
  - General Medicine Practice Program, Batterjee Medical College, Jeddah 21442, Saudi Arabia
- Ahmed Hussien Alawadi
  - College of Technical Engineering, the Islamic University, Najaf, Iraq
  - College of Technical Engineering, the Islamic University of Al Diwaniyah, Iraq
  - College of Technical Engineering, the Islamic University of Babylon, Iraq
- Ali Hashim Abbas
  - College of Technical Engineering, Imam Ja'afar Al-Sadiq University, Al-Muthanna 66002, Iraq
|
33
|
Alsadhan A, Al-Anezi F, Almohanna A, Alnaim N, Alzahrani H, Shinawi R, AboAlsamh H, Bakhshwain A, Alenazy M, Arif W, Alyousef S, Alhamidi S, Alghamdi A, AlShrayfi N, Rubaian NB, Alanzi T, AlSahli A, Alturki R, Herzallah N. The opportunities and challenges of adopting ChatGPT in medical research. Front Med (Lausanne) 2023; 10:1259640. [PMID: 38188345 PMCID: PMC10766839 DOI: 10.3389/fmed.2023.1259640] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Accepted: 12/07/2023] [Indexed: 01/09/2024] Open
Abstract
Purpose This study aims to investigate the opportunities and challenges of adopting ChatGPT in medical research. Methods A qualitative approach with focus groups was adopted. A total of 62 participants, including academic researchers from different streams in medicine and eHealth, took part in this study. Results Five themes with 16 sub-themes related to the opportunities, and five themes with 12 sub-themes related to the challenges, were identified. The major opportunities include improved data collection and analysis, improved communication and accessibility, and support for researchers across multiple streams of medical research. The major challenges identified were limitations of training data leading to bias, ethical issues, technical limitations, and limitations in data collection and analysis. Conclusion Although ChatGPT can be used as a potential tool in medical research, further evidence is needed to generalize its impact on different research activities.
Affiliation(s)
- Abeer Alsadhan
  - Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Fahad Al-Anezi
  - Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Asmaa Almohanna
  - Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Norah Alnaim
  - Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Hoda AboAlsamh
  - Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Maha Alenazy
  - King Saud University, Riyadh, Saudi Arabia
- Wejdan Arif
  - King Saud University, Riyadh, Saudi Arabia
- Nour AlShrayfi
  - Public Authority for Applied Education and Training, Kuwait City, Kuwait
- Turki Alanzi
  - Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Alaa AlSahli
  - King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Rasha Alturki
  - Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
|
34
|
Liang J, Feng J, Lin Z, Wei J, Luo X, Wang QM, He B, Chen H, Ye Y. Research on prognostic risk assessment model for acute ischemic stroke based on imaging and multidimensional data. Front Neurol 2023; 14:1294723. [PMID: 38192576 PMCID: PMC10773779 DOI: 10.3389/fneur.2023.1294723] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Accepted: 11/30/2023] [Indexed: 01/10/2024] Open
Abstract
Accurately assessing the prognostic outcomes of patients with acute ischemic stroke, and adjusting treatment plans in a timely manner for those with poor prognosis, is crucial for intervening in modifiable risk factors. However, there is still controversy regarding imaging-based prediction of complications in acute ischemic stroke. To address this, we developed a cross-modal attention module for integrating multidimensional data, including clinical information, imaging features, treatment plans, prognosis, and complications, to achieve complementary advantages. The fused features preserve magnetic resonance imaging (MRI) characteristics while supplementing clinically relevant information, providing a more comprehensive and informative basis for clinical diagnosis and treatment. The proposed multidimensional-data framework for activity of daily living (ADL) scoring in patients with acute ischemic stroke demonstrates higher accuracy than other state-of-the-art network models, and ablation experiments confirm the effectiveness of each module in the framework.
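The cross-modal attention idea can be illustrated with a minimal sketch (assumed, not the authors' implementation): a query derived from one modality, here a hypothetical clinical-feature embedding, attends over keys/values from another modality, here hypothetical imaging features, via scaled dot-product attention:

```python
import math

# Illustrative sketch of one cross-attention step; all embeddings are
# hypothetical stand-ins, not values from the paper.
def cross_attention(query, keys, values):
    """Scaled dot-product attention for a single query over keys/values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    shift = max(scores)                               # numerical stability
    exps = [math.exp(s - shift) for s in scores]
    weights = [e / sum(exps) for e in exps]           # softmax over keys
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]          # weighted sum of values
    return fused, weights

clinical_q = [1.0, 0.0]                # hypothetical clinical embedding
imaging_k = [[1.0, 0.0], [0.0, 1.0]]   # hypothetical imaging feature keys
imaging_v = [[0.9, 0.1], [0.2, 0.8]]   # hypothetical imaging feature values
fused, w = cross_attention(clinical_q, imaging_k, imaging_v)
print([round(x, 3) for x in fused])
```

The query most aligned with a key receives the largest softmax weight, so the fused vector leans toward that key's value, which is the "complementary advantage" the abstract describes.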
Affiliation(s)
- Jiabin Liang
  - Postgraduate Cultivation Base of Guangzhou University of Chinese Medicine, Panyu Central Hospital, Guangzhou, China
  - Graduate School, Guangzhou University of Chinese Medicine, Guangzhou, China
  - Medical Imaging Institute of Panyu, Guangzhou, China
- Jie Feng
  - Radiology Department of Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Zhijie Lin
  - Laboratory for Intelligent Information Processing, Guangdong University of Technology, Guangzhou, China
- Jinbo Wei
  - Postgraduate Cultivation Base of Guangzhou University of Chinese Medicine, Panyu Central Hospital, Guangzhou, China
- Xun Luo
  - Kerry Rehabilitation Medicine Research Institute, Shenzhen, China
- Qing Mei Wang
  - Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, Teaching Affiliate of Harvard Medical School, Charlestown, MA, United States
- Bingjie He
  - Panyu Health Management Center, Guangzhou, China
- Hanwei Chen
  - Postgraduate Cultivation Base of Guangzhou University of Chinese Medicine, Panyu Central Hospital, Guangzhou, China
  - Medical Imaging Institute of Panyu, Guangzhou, China
  - Panyu Health Management Center, Guangzhou, China
- Yufeng Ye
  - Postgraduate Cultivation Base of Guangzhou University of Chinese Medicine, Panyu Central Hospital, Guangzhou, China
  - Medical Imaging Institute of Panyu, Guangzhou, China
|
35
|
Vollmer A, Saravi B, Breitenbuecher N, Mueller-Richter U, Straub A, Šimić L, Kübler A, Vollmer M, Gubik S, Volland J, Hartmann S, Brands RC. Realizing in-house algorithm-driven free fibula flap set up within 24 hours: a pilot study evaluating accuracy with open-source tools. Front Surg 2023; 10:1321217. [PMID: 38162091 PMCID: PMC10755006 DOI: 10.3389/fsurg.2023.1321217] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2023] [Accepted: 12/04/2023] [Indexed: 01/03/2024] Open
Abstract
Objective This study aims to critically evaluate the effectiveness and accuracy of a time-saving and cost-efficient open-source algorithm for in-house planning of mandibular reconstructions using the free osteocutaneous fibula graft. The evaluation focuses on quantifying anatomical accuracy and assessing the impact on ischemia time. Methods A pilot study was conducted, including patients who underwent in-house planned computer-aided design and manufacturing (CAD/CAM) of free fibula flaps between 2021 and 2023. Of these cases, all with postoperative 3D imaging were included in the study. The study utilized open-source software tools for the planning step and three-dimensional (3D) printing techniques. The Hausdorff distance and Dice coefficient metrics were used to evaluate the accuracy of the planning procedure. Results The study assessed eight patients (five males and three females, mean age 61.75 ± 3.69 years) with different diagnoses such as osteoradionecrosis and oral squamous cell carcinoma. The average ischemia time was 68.38 ± 27.95 min. Comparing preoperative planning with the postoperative outcome, the mean Hausdorff distance was 1.22 ± 0.40 and the mean Dice coefficient was 0.77 ± 0.07, suggesting satisfactory concordance between the planned and postoperative states. The Dice coefficient and Hausdorff distance revealed significant correlations with ischemia time (Spearman's rho = -0.810, p = 0.015 and Spearman's rho = 0.762, p = 0.028, respectively). Linear regression models adjusting for disease type further substantiated these findings. Conclusions The in-house planning algorithm not only achieved high anatomical accuracy, as reflected by the Dice coefficient and Hausdorff distance metrics, but this accuracy also exhibited a significant correlation with reduced ischemia time, underlining the critical role of meticulous planning in surgical outcomes. Additionally, the algorithm's open-source nature renders it cost-efficient, easy to learn, and broadly applicable, offering promising avenues for enhancing both healthcare affordability and accessibility.
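The Hausdorff distance used above to compare planned and postoperative anatomy can be sketched on point clouds. This is an illustrative brute-force version (assumed, not the study's pipeline), with tiny hypothetical coordinates standing in for mandible surface points:

```python
import math

# Illustrative sketch: symmetric Hausdorff distance between two 3D point
# sets. The point clouds below are hypothetical toy data.
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two lists of (x, y, z) points."""
    def directed(src, dst):
        # Worst-case nearest-neighbour distance from src to dst.
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

planned = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
actual  = [(0, 0, 0.2), (1, 0, 0), (0, 1.5, 0)]
print(f"Hausdorff = {hausdorff(planned, actual):.2f}")
# prints Hausdorff = 0.50
```

A small Hausdorff distance together with a high Dice coefficient, as in the study's results, indicates that the worst local deviation and the overall volumetric overlap both agree between plan and outcome.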
Affiliation(s)
- Andreas Vollmer
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
- Babak Saravi
  - Department of Orthopedics and Trauma Surgery, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, Freiburg, Germany
  - Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Niko Breitenbuecher
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
- Urs Mueller-Richter
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
- Anton Straub
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
- Luka Šimić
  - Faculty of Electrical Engineering, Computer Science and Information Technology Osijek, Josip Juraj Strossmayer University of Osijek, Osijek, Croatia
- Alexander Kübler
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
- Michael Vollmer
  - Department of Oral and Maxillofacial Surgery, Tuebingen University Hospital, Tuebingen, Germany
- Sebastian Gubik
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
- Julian Volland
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
- Stefan Hartmann
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
- Roman C. Brands
  - Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, Würzburg, Germany
|
36
|
Imboden S, Liu X, Payne MC, Hsieh CJ, Lin NY. Trustworthy in silico cell labeling via ensemble-based image translation. BIOPHYSICAL REPORTS 2023; 3:100133. [PMID: 38026685 PMCID: PMC10663640 DOI: 10.1016/j.bpr.2023.100133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Accepted: 10/16/2023] [Indexed: 12/01/2023]
Abstract
Artificial intelligence (AI) image translation has been a valuable tool for processing image data in biological and medical research. To apply such a tool in mission-critical applications, including drug screening, toxicity study, and clinical diagnostics, it is essential to ensure that the AI prediction is trustworthy. Here, we demonstrate that an ensemble learning method can quantify the uncertainty of AI image translation. We tested the uncertainty evaluation using experimentally acquired images of mesenchymal stromal cells. We find that the ensemble method reports a prediction standard deviation that correlates with the prediction error, estimating the prediction uncertainty. We show that this uncertainty is in agreement with the prediction error and Pearson correlation coefficient. We further show that the ensemble method can detect out-of-distribution input images by reporting increased uncertainty. Altogether, these results suggest that the ensemble-estimated uncertainty can be a useful indicator for identifying erroneous AI image translations.
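The core ensemble idea can be reduced to a minimal sketch (assumed, not the paper's model): several independently trained models predict the same target, and the spread of their predictions serves as the uncertainty estimate. The lambdas below are hypothetical stand-ins for trained image-translation models:

```python
from statistics import mean, stdev

# Minimal sketch of ensemble-based uncertainty: report the mean prediction
# and the standard deviation across ensemble members.
def ensemble_predict(models, x):
    """Return (mean prediction, std across members) for one input."""
    preds = [m(x) for m in models]
    return mean(preds), stdev(preds)

# Hypothetical stand-ins for independently trained models.
models = [lambda x: 0.80 * x, lambda x: 0.82 * x, lambda x: 0.78 * x]
mu, sigma = ensemble_predict(models, 10.0)
print(f"prediction={mu:.2f} uncertainty={sigma:.2f}")
# prints prediction=8.00 uncertainty=0.20
```

On an out-of-distribution input the members tend to disagree more, so the reported standard deviation rises, which is exactly the detection behaviour the abstract describes.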
Affiliation(s)
- Sara Imboden
  - Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Xuanqing Liu
  - Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Marie C. Payne
  - Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Cho-Jui Hsieh
  - Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Neil Y.C. Lin
  - Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
  - Department of Bioengineering, University of California, Los Angeles, Los Angeles, California
  - Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, Los Angeles, California
  - California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California
  - Jonsson Comprehensive Cancer Center, University of California, Los Angeles, Los Angeles, California
  - Broad Stem Cell Center, University of California, Los Angeles, Los Angeles, California
|
37
|
Li X, Long M, Huang J, Wu J, Shen H, Zhou F, Hou J, Xu Y, Wang D, Mei L, Liu Y, Hu T, Lei C. An orientation-free ring feature descriptor with stain-variability normalization for pathology image matching. Comput Biol Med 2023; 167:107675. [PMID: 37976825 DOI: 10.1016/j.compbiomed.2023.107675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 10/08/2023] [Accepted: 11/06/2023] [Indexed: 11/19/2023]
Abstract
Comprehensively analyzing the corresponding regions in the images of serial slices stained using different methods is a common but important operation in pathological diagnosis. To help increase the efficiency of the analysis, various image registration methods have been proposed to match the corresponding regions in different images, but their performance is highly influenced by the rotations, deformations, and variations of staining between the serial pathology images. In this work, we propose an orientation-free ring feature descriptor with stain-variability normalization for pathology image matching. Specifically, we normalize image staining to similar levels to minimize the impact of staining differences on pathology image matching. To overcome the rotation and deformation issues, we propose a rotation-invariant, orientation-free ring feature descriptor that generates novel adaptive bins from ring features to build feature vectors. We measure the Euclidean distance of the feature vectors to evaluate keypoint similarity and thereby achieve pathology image matching. A total of 46 pairs of clinical pathology images with hematoxylin-eosin and immunohistochemistry staining were used to verify the performance of our method. Experimental results indicate that our method meets the accuracy requirements of pathology image matching (error < 300 μm) and is especially competent for the large-angle rotation cases common in clinical practice.
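The matching stage described in this abstract, comparing descriptor vectors by Euclidean distance, can be sketched as a mutual-nearest-neighbour search. The ring-feature descriptors themselves are the paper's contribution and are not reproduced here; the 2-D descriptors and the `max_dist` threshold below are made-up illustrations:

```python
import numpy as np

def match_keypoints(desc_a, desc_b, max_dist=0.5):
    """Match keypoints between two images by Euclidean distance of their
    descriptor vectors, keeping only mutual nearest neighbours.

    desc_a: (n_a, d) descriptor vectors for image A
    desc_b: (n_b, d) descriptor vectors for image B
    Returns a list of (i, j) index pairs.
    """
    # Pairwise Euclidean distances, shape (n_a, n_b).
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    matches = []
    for i in range(d.shape[0]):
        j = int(np.argmin(d[i]))
        # Mutual check: i must also be j's nearest neighbour in A.
        if int(np.argmin(d[:, j])) == i and d[i, j] <= max_dist:
            matches.append((i, j))
    return matches

a = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
b = np.array([[1.1, 1.0], [0.1, 0.0]])
print(match_keypoints(a, b))  # [(0, 1), (1, 0)]
```

The mutual check and distance cutoff are standard ways to reject ambiguous correspondences before estimating the image transform.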
Affiliation(s)
- Xiaoxiao Li
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Mengping Long
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China; Department of Pathology, Peking University Cancer Hospital, Beijing 100142, China
- Jin Huang
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Jianghua Wu
- Department of Pathology, Peking University Cancer Hospital, Beijing 100142, China
- Hui Shen
- Department of Hematology, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Fuling Zhou
- Department of Hematology, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Jinxuan Hou
- Department of Thyroid and Breast Surgery, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Yu Xu
- Department of Radiation and Medical Oncology, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Du Wang
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Liye Mei
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China; School of Computer Science, Hubei University of Technology, Wuhan, 430068, China
- Yiqiang Liu
- Department of Pathology, Peking University Cancer Hospital, Beijing 100142, China
- Taobo Hu
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China; Department of Breast Surgery, Peking University People's Hospital, Beijing, 100044, China
- Cheng Lei
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China; Suzhou Institute of Wuhan University, Suzhou, 215000, China; Shenzhen Institute of Wuhan University, Shenzhen, 518057, China

38
Khan RF, Lee BD, Lee MS. Transformers in medical image segmentation: a narrative review. Quant Imaging Med Surg 2023; 13:8747-8767. [PMID: 38106306 PMCID: PMC10722011 DOI: 10.21037/qims-23-542] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Accepted: 09/14/2023] [Indexed: 12/19/2023]
Abstract
Background and Objective Transformers, which have been widely recognized as state-of-the-art tools in natural language processing (NLP), have also come to be recognized for their value in computer vision tasks. With this increasing popularity, they have also been extensively researched in the more complex medical imaging domain. The associated developments have resulted in transformers being on par with sought-after convolutional neural networks, particularly for medical image segmentation. Methods combining both types of networks have proven to be especially successful in capturing local and global contexts, thereby significantly boosting their performance in various segmentation problems. Motivated by this success, we have attempted to survey the consequential research focused on innovative transformer networks, specifically those designed to cater to medical image segmentation in an efficient manner. Methods Databases such as Google Scholar, arXiv, ResearchGate, Microsoft Academic, and Semantic Scholar were utilized to find recent developments in this field. Specifically, research in the English language from 2021 to 2023 was considered. Key Content and Findings In this survey, we look into the different types of architectures and attention mechanisms that uniquely improve performance and the structures that are in place to handle complex medical data. Through this survey, we summarize the popular and unconventional transformer-based research as seen through different key angles and analyze quantitatively the strategies that have proven more advanced. Conclusions We have also attempted to discern existing gaps and challenges within current research, notably highlighting the deficiency of annotated medical data for precise deep learning model training.
Furthermore, potential future directions for enhancing transformers' utility in healthcare are outlined, encompassing strategies such as transfer learning and exploiting foundation models for specialized medical image segmentation.
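The "global context" that the surveyed transformer blocks capture comes from scaled dot-product attention, in which every query token mixes information from all key positions. A generic textbook sketch in NumPy (not any specific surveyed architecture):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Core attention operation of transformer blocks: each query attends to
    all keys, so every output token aggregates information from the whole
    input sequence (or the whole set of image patches)."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v, weights

q = np.eye(3)  # 3 tokens with 3-dim embeddings
out, w = scaled_dot_product_attention(q, q, np.arange(9.0).reshape(3, 3))
print(np.allclose(w.sum(axis=1), 1.0))  # each row of weights is a distribution
```

In vision transformers the "tokens" are embedded image patches, which is why attention gives segmentation networks the long-range context that plain convolutions lack.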
Affiliation(s)
- Rabeea Fatma Khan
- Department of Computer Science, Graduate School, Kyonggi University, Suwon, Republic of Korea
- Byoung-Dai Lee
- Department of Computer Science, Graduate School, Kyonggi University, Suwon, Republic of Korea
- Mu Sook Lee
- Department of Radiology, Keimyung University Dongsan Hospital, Daegu, Republic of Korea

39
Guillen-Grima F, Guillen-Aguinaga S, Guillen-Aguinaga L, Alas-Brun R, Onambele L, Ortega W, Montejo R, Aguinaga-Ontoso E, Barach P, Aguinaga-Ontoso I. Evaluating the Efficacy of ChatGPT in Navigating the Spanish Medical Residency Entrance Examination (MIR): Promising Horizons for AI in Clinical Medicine. Clin Pract 2023; 13:1460-1487. [PMID: 37987431 PMCID: PMC10660543 DOI: 10.3390/clinpract13060130] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Revised: 11/15/2023] [Accepted: 11/16/2023] [Indexed: 11/22/2023] Open
Abstract
The rapid progress in artificial intelligence, machine learning, and natural language processing has led to increasingly sophisticated large language models (LLMs) for use in healthcare. This study assesses the performance of two LLMs, GPT-3.5 and GPT-4, in passing the MIR medical examination for access to medical specialist training in Spain. Our objectives included gauging the models' overall performance, analyzing discrepancies across different medical specialties, discerning between theoretical and practical questions, estimating error proportions, and assessing the hypothetical severity of errors committed by a physician. MATERIAL AND METHODS We studied the 2022 Spanish MIR examination results after excluding those questions requiring image evaluations or having acknowledged errors. The remaining 182 questions were presented to GPT-4 and GPT-3.5 in Spanish and English. Logistic regression models analyzed the relationships between question length, sequence, and performance. We also analyzed the 23 questions with images, using GPT-4's new image analysis capability. RESULTS GPT-4 outperformed GPT-3.5, scoring 86.81% in Spanish (p < 0.001). Performance was slightly better on the English translations. GPT-4 answered 26.1% of the image-based questions correctly in English. The results were worse when the questions were in Spanish (13.0%), although the differences were not statistically significant (p = 0.250). Among medical specialties, GPT-4 achieved a 100% correct response rate in several areas, while the Pharmacology, Critical Care, and Infectious Diseases specialties showed lower performance. The error analysis revealed that while a 13.2% error rate existed, the gravest categories, such as "error requiring intervention to sustain life" and "error resulting in death", had a 0% rate. CONCLUSIONS GPT-4 performs robustly on the Spanish MIR examination, with varying capabilities to discriminate knowledge across specialties.
While the model's high success rate is commendable, understanding the error severity is critical, especially when considering AI's potential role in real-world medical practice and its implications for patient safety.
Affiliation(s)
- Francisco Guillen-Grima
- Department of Health Sciences, Public University of Navarra, 31008 Pamplona, Spain
- Healthcare Research Institute of Navarra (IdiSNA), 31008 Pamplona, Spain
- Department of Preventive Medicine, Clinica Universidad de Navarra, 31008 Pamplona, Spain
- CIBER in Epidemiology and Public Health (CIBERESP), Institute of Health Carlos III, 46980 Madrid, Spain
- Sara Guillen-Aguinaga
- Department of Health Sciences, Public University of Navarra, 31008 Pamplona, Spain
- Laura Guillen-Aguinaga
- Department of Health Sciences, Public University of Navarra, 31008 Pamplona, Spain
- Department of Nursing, Kystad Helse-og Velferdssenter, 7026 Trondheim, Norway
- Rosa Alas-Brun
- Department of Health Sciences, Public University of Navarra, 31008 Pamplona, Spain
- Luc Onambele
- School of Health Sciences, Catholic University of Central Africa, Yaoundé 1100, Cameroon
- Wilfrido Ortega
- Department of Surgery, Medical and Social Sciences, University of Alcala de Henares, 28871 Alcalá de Henares, Spain
- Rocio Montejo
- Department of Obstetrics and Gynecology, Institute of Clinical Sciences, University of Gothenburg, 413 46 Gothenburg, Sweden
- Department of Obstetrics and Gynecology, Sahlgrenska University Hospital, 413 46 Gothenburg, Sweden
- Paul Barach
- Jefferson College of Population Health, Philadelphia, PA 19107, USA
- School of Medicine, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Interdisciplinary Research Institute for Health Law and Science, Sigmund Freud University, 1020 Vienna, Austria
- Department of Surgery, Imperial College, London SW7 2AZ, UK
- Ines Aguinaga-Ontoso
- Department of Health Sciences, Public University of Navarra, 31008 Pamplona, Spain
- Healthcare Research Institute of Navarra (IdiSNA), 31008 Pamplona, Spain

40
|
Stoichita A, Ghita M, Mahler B, Vlasceanu S, Ghinet A, Mosteanu M, Cioacata A, Udrea A, Marcu A, Mitra GD, Ionescu CM, Iliesiu A. Imagistic Findings Using Artificial Intelligence in Vaccinated versus Unvaccinated SARS-CoV-2-Positive Patients Receiving In-Care Treatment at a Tertiary Lung Hospital. J Clin Med 2023; 12:7115. [PMID: 38002725 PMCID: PMC10672398 DOI: 10.3390/jcm12227115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Revised: 10/27/2023] [Accepted: 11/04/2023] [Indexed: 11/26/2023] Open
Abstract
BACKGROUND In December 2019, the World Health Organization announced that the widespread severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection had become a global pandemic. The organ most affected by the novel virus is the lung, and imaging exploration of the thorax using computed tomography (CT) scanning and X-ray has had an important impact. MATERIALS AND METHODS We assessed the prevalence of lung lesions in vaccinated versus unvaccinated SARS-CoV-2 patients using an artificial intelligence (AI) platform provided by Medicai. The software analyzes the CT scans, performing the lung and lesion segmentation using a variant of the U-net convolutional network. RESULTS We conducted a cohort study at a tertiary lung hospital in which we included 186 patients: 107 (57.52%) male and 59 (42.47%) female, of whom 157 (84.40%) were not vaccinated for SARS-CoV-2. Over five times more unvaccinated patients than vaccinated ones were admitted to the hospital and required imaging investigations. More than twice as many unvaccinated patients had more than 75% of the lungs affected. Patients in the age group 30-39 had the most extensive lung lesions, with almost 69% of both lungs affected. Compared to vaccinated patients with comorbidities, unvaccinated patients with comorbidities had 5% more lung lesions. CONCLUSION The study revealed a higher percentage of lung lesions among unvaccinated SARS-CoV-2-positive patients admitted to The National Institute of Pulmonology "Marius Nasta" in Bucharest, Romania, underlining the importance of vaccination and also the usefulness of artificial intelligence in CT interpretation.
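Once the lung and lesion masks are produced by the segmentation network, the "percentage of lungs affected" statistics reported above reduce to a simple voxel ratio. A minimal sketch of that step (the mask shapes and values are toy data, not the Medicai platform's output format):

```python
import numpy as np

def percent_lung_affected(lung_mask, lesion_mask):
    """Percentage of segmented lung volume covered by lesions.

    lung_mask, lesion_mask: boolean arrays of the same shape, e.g. produced
    by a U-Net-style segmentation of a CT volume.
    """
    lung_mask = np.asarray(lung_mask, dtype=bool)
    # Count only lesion voxels that lie inside the lung mask.
    lesion_mask = np.asarray(lesion_mask, dtype=bool) & lung_mask
    lung_voxels = lung_mask.sum()
    if lung_voxels == 0:
        return 0.0
    return 100.0 * lesion_mask.sum() / lung_voxels

lung = np.zeros((4, 4), dtype=bool); lung[1:3, 1:3] = True    # 4 lung voxels
lesion = np.zeros((4, 4), dtype=bool); lesion[1, 1:3] = True  # 2 lesion voxels
print(percent_lung_affected(lung, lesion))  # 50.0
```

The same ratio, thresholded per patient (e.g. at 75%), yields the severity groupings compared between vaccinated and unvaccinated cohorts.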
Affiliation(s)
- Alexandru Stoichita
- Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
- “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Maria Ghita
- Research Group of Dynamical Systems and Control, Ghent University, 9052 Ghent, Belgium
- Faculty of Medicine and Health Sciences, Antwerp University, 2610 Wilrijk, Belgium
- Beatrice Mahler
- Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
- “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Silviu Vlasceanu
- Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
- “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Andreea Ghinet
- “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Madalina Mosteanu
- “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Faculty of Medicine, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Andreea Cioacata
- “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Andreea Udrea
- Medicai, 020961 Bucharest, Romania
- Alina Marcu
- Medicai, 020961 Bucharest, Romania
- Clara Mihaela Ionescu
- Research Group of Dynamical Systems and Control, Ghent University, 9052 Ghent, Belgium
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Adriana Iliesiu
- Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
- Clinical Hospital “Prof. Dr. Th. Burghele”, 061344 Bucharest, Romania

41
Weidener L, Fischer M. Teaching AI Ethics in Medical Education: A Scoping Review of Current Literature and Practices. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:399-410. [PMID: 37868075 PMCID: PMC10588522 DOI: 10.5334/pme.954] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Accepted: 10/03/2023] [Indexed: 10/24/2023]
Abstract
Introduction The increasing use of Artificial Intelligence (AI) in medicine has raised ethical concerns, such as patient autonomy, bias, and transparency. Recent studies suggest a need for teaching AI ethics as part of medical curricula. This scoping review aimed to represent and synthesize the literature on teaching AI ethics as part of medical education. Methods The PRISMA-ScR guidelines and JBI methodology guided a literature search in four databases (PubMed, Embase, Scopus, and Web of Science) for the past 22 years (2000-2022). To account for the release of AI-based chat applications, such as ChatGPT, the literature search was updated to include publications until the end of June 2023. Results 1384 publications were originally identified and, after screening titles and abstracts, the full text of 87 publications was assessed. Following the assessment of the full text, 10 publications were included for further analysis. The updated literature search identified two additional relevant publications from 2023, which were included in the analysis. All 12 publications recommended teaching AI ethics in medical curricula due to the potential implications of AI in medicine. Anticipated ethical challenges such as bias were identified as the recommended basis for teaching content, in addition to basic principles of medical ethics. Case-based teaching using real-world examples in interactive seminars and small groups was recommended as a teaching modality. Conclusion This scoping review reveals a scarcity of literature on teaching AI ethics in medical education, with most of the available literature being recent and theoretical. These findings emphasize the importance of more empirical studies and foundational definitions of AI ethics to guide the development of teaching content and modalities.
Recognizing AI's significant impact on medicine, additional research on the teaching of AI ethics in medical education is needed to best prepare medical students for future ethical challenges.
Affiliation(s)
- Lukas Weidener
- UMIT TIROL – Private University for Health Sciences and Health Technology, Eduard-Wallnöfer-Zentrum 1, 6060 Hall in Tirol, Austria
- Michael Fischer
- Head of the Research Unit for Quality and Ethics in Health Care, UMIT TIROL – Private University for Health Sciences and Health Technology, Austria

42
Sundaresan V, Lehman JF, Maffei C, Haber SN, Yendiki A. Self-supervised segmentation and characterization of fiber bundles in anatomic tracing data. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.30.560310. [PMID: 37873366 PMCID: PMC10592842 DOI: 10.1101/2023.09.30.560310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2023]
Abstract
Anatomic tracing is the gold standard tool for delineating brain connections and for validating more recently developed imaging approaches such as diffusion MRI tractography. A key step in the analysis of data from tracer experiments is the careful, manual charting of fiber trajectories on histological sections. This is a very time-consuming process, which limits the amount of annotated tracer data that are available for validation studies. Thus, there is a need to accelerate this process by developing a method for computer-assisted segmentation. Such a method must be robust to the common artifacts in tracer data, including variations in the intensity of stained axons and background, as well as spatial distortions introduced by sectioning and mounting the tissue. The method should also achieve satisfactory performance using limited manually charted data for training. Here we propose the first deep learning method, with a self-supervised loss function, for segmentation of fiber bundles on histological sections from macaque brains that have received tracer injections. We address the limited availability of manual labels with a semi-supervised training technique that takes advantage of unlabeled data to improve performance. We also introduce anatomic and across-section continuity constraints to improve accuracy. We show that our method can be trained on manually charted sections from a single case and segment unseen sections from different cases, with a true positive rate of ~0.80. We further demonstrate the utility of our method by quantifying the density of fiber bundles as they travel through different white-matter pathways. We show that fiber bundles originating in the same injection site have different levels of density when they travel through different pathways, a finding that can have implications for microstructure-informed tractography methods. The code for our method is available at https://github.com/v-sundaresan/fiberbundle_seg_tracing.
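The reported true positive rate of ~0.80 is a standard pixel-wise sensitivity computed against the manually charted ground truth. A minimal sketch of that metric with toy masks (not the authors' evaluation code):

```python
import numpy as np

def true_positive_rate(pred_mask, gt_mask):
    """Pixel-wise true positive rate (sensitivity) of a predicted
    fiber-bundle segmentation against a manually charted ground truth."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.logical_and(pred, gt).sum()   # charted pixels the model found
    fn = np.logical_and(~pred, gt).sum()  # charted pixels the model missed
    return tp / (tp + fn) if (tp + fn) else 0.0

gt = np.array([1, 1, 1, 1, 0], dtype=bool)
pred = np.array([1, 1, 1, 0, 1], dtype=bool)
print(true_positive_rate(pred, gt))  # 0.75
```

Sensitivity is the natural headline metric here because the cost of missing a charted bundle (a false negative) dominates in a validation-oriented workflow.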
Affiliation(s)
- Vaanathi Sundaresan
- Department of Computational and Data Sciences, Indian Institute of Science, Bengaluru, Karnataka 560012, India
- Julia F. Lehman
- Department of Pharmacology and Physiology, University of Rochester School of Medicine, Rochester, NY, United States
- Chiara Maffei
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States
- Suzanne N. Haber
- Department of Pharmacology and Physiology, University of Rochester School of Medicine, Rochester, NY, United States
- McLean Hospital, Belmont, MA, United States
- Anastasia Yendiki
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States

43
Zhang J, Cui Z, Shi Z, Jiang Y, Zhang Z, Dai X, Yang Z, Gu Y, Zhou L, Han C, Huang X, Ke C, Li S, Xu Z, Gao F, Zhou L, Wang R, Liu J, Zhang J, Ding Z, Sun K, Li Z, Liu Z, Shen D. A robust and efficient AI assistant for breast tumor segmentation from DCE-MRI via a spatial-temporal framework. PATTERNS (NEW YORK, N.Y.) 2023; 4:100826. [PMID: 37720328 PMCID: PMC10499873 DOI: 10.1016/j.patter.2023.100826] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 04/25/2023] [Accepted: 07/21/2023] [Indexed: 09/19/2023]
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) allows screening, follow-up, and diagnosis of breast tumors with high sensitivity. Accurate tumor segmentation from DCE-MRI can provide crucial information on tumor location and shape, which significantly influences downstream clinical decisions. In this paper, we aim to develop an artificial intelligence (AI) assistant to automatically segment breast tumors by capturing dynamic changes in multi-phase DCE-MRI with a spatial-temporal framework. The main advantages of our AI assistant include (1) robustness, i.e., our model can handle MR data with different phase numbers and imaging intervals, as demonstrated on a large-scale dataset from seven medical centers, and (2) efficiency, i.e., our AI assistant significantly reduces the time required for manual annotation by a factor of 20, while maintaining accuracy comparable to that of physicians. More importantly, as the fundamental step to build an AI-assisted breast cancer diagnosis system, our AI assistant will promote the application of AI in more clinical diagnostic practices regarding breast cancer.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Zhiming Cui
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Zhenwei Shi
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Yingjia Jiang
- Department of Radiology, The Second Xiangya Hospital, Central South University, Hunan 410011, China
- Zhiliang Zhang
- School of Medical Imaging, Hangzhou Medical College, Zhejiang 310059, China
- Xiaoting Dai
- Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200080, China
- Zhenlu Yang
- Department of Radiology, Guizhou Provincial People’s Hospital, Guizhou 550002, China
- Yuning Gu
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Lei Zhou
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Chu Han
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Xiaomei Huang
- Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Chenglu Ke
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Suyun Li
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Zeyan Xu
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Fei Gao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Luping Zhou
- School of Electrical and Information Engineering, The University of Sydney, Sydney, NSW 2006, Australia
- Rongpin Wang
- Department of Radiology, Guizhou Provincial People’s Hospital, Guizhou 550002, China
- Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, Hunan 410011, China
- Jiayin Zhang
- Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200080, China
- Zhongxiang Ding
- Department of Radiology, Key Laboratory of Clinical Cancer Pharmacology and Toxicology Research of Zhejiang Province, Hangzhou 310003, China
- Kun Sun
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Zhenhui Li
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Kunming 650118, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200230, China
- Shanghai Clinical Research and Trial Center, Shanghai 200052, China

44
Machado TM, Berssaneti FT. Literature review of digital twin in healthcare. Heliyon 2023; 9:e19390. [PMID: 37809792 PMCID: PMC10558347 DOI: 10.1016/j.heliyon.2023.e19390] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 05/26/2023] [Accepted: 08/21/2023] [Indexed: 10/10/2023] Open
Abstract
This article presents a bibliometric literature review, using systematic scientific mapping and content analysis, of digital twins in healthcare, covering the evolution, domain, keywords, content type, and the kind and purpose of digital twin implementations in healthcare, so that existing knowledge can be consolidated and improved and gaps for new studies can be identified. The increase in publications on digital twins in healthcare is quite recent, and the literature is still concentrated in technology-domain sources. The subject is mostly concentrated in the patient's digital twin group and in the precision medicine and aspects, issues, and/or policies subgroups, although the publication keywords mirror this only at the group level. Digital twins in healthcare are probably stepping out of the infancy phase. On the other hand, the digital twins in hospital group and the device and facilities management subgroups are more mature, drawing on knowledge gathered from the manufacturing sector. Some publication types are absent in general and in the device and care subgroup, and no whole-body or whole-hospital digital twin has been reported. Based on these findings, guidelines for future research are presented: advance the creation of general frameworks, advance in subgroups not yet much explored, and advance in groups and subgroups already explored but still short of the main goals of a whole-human or whole-hospital digital twin with the main issues resolved.
Affiliation(s)
- Tatiana Mallet Machado
- Production Engineering Department, Polytechnic School University of São Paulo, Av. Prof. Almeida Prado, Brazil
- Fernando Tobal Berssaneti
- Production Engineering Department, Polytechnic School University of São Paulo, Av. Prof. Almeida Prado, Brazil

45
Zhang M, Wen G, Zhong J, Chen D, Wang C, Huang X, Zhang S. MLP-Like Model With Convolution Complex Transformation for Auxiliary Diagnosis Through Medical Images. IEEE J Biomed Health Inform 2023; 27:4385-4396. [PMID: 37467088 DOI: 10.1109/jbhi.2023.3292312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/21/2023]
Abstract
Medical images such as facial and tongue images have been widely used for intelligence-assisted diagnosis, which can be regarded as the multi-label classification task for disease location (DL) and disease nature (DN) of biomedical images. Compared with complicated convolutional neural networks and Transformers for this task, recent MLP-like architectures are not only simple and less computationally expensive, but also have stronger generalization capabilities. However, MLP-like models require better input features from the image. Thus, this study proposes a novel convolution complex transformation MLP-like (CCT-MLP) model for the multi-label DL and DN recognition task for facial and tongue images. Notably, the convolutional Tokenizer and multiple convolutional layers are first used to extract the better shallow features from input biomedical images to make up for the loss of spatial information obtained by the simple MLP structure. Subsequently, the Channel-MLP architecture with complex transformations is used to extract deep-level contextual features. In this way, multi-channel features are extracted and mixed to perform the multi-label classification of the input biomedical images. Experimental results on our constructed multi-label facial and tongue image datasets demonstrate that our method outperforms existing methods in terms of both accuracy (Acc) and mean average precision (mAP).
46
Chen IDS, Yang CM, Chen MJ, Chen MC, Weng RM, Yeh CH. Deep Learning-Based Recognition of Periodontitis and Dental Caries in Dental X-ray Images. Bioengineering (Basel) 2023; 10:911. [PMID: 37627796 PMCID: PMC10451544 DOI: 10.3390/bioengineering10080911] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 07/21/2023] [Accepted: 07/22/2023] [Indexed: 08/27/2023] Open
Abstract
Dental X-ray images are important and useful for dentists to diagnose dental diseases. Utilizing deep learning in dental X-ray images can help dentists quickly and accurately identify common dental diseases such as periodontitis and dental caries. This paper applies image processing and deep learning technologies to dental X-ray images to propose a simultaneous recognition method for periodontitis and dental caries. The single-tooth X-ray image is detected by the YOLOv7 object detection technique and cropped from the periapical X-ray image. Then, it is processed through contrast-limited adaptive histogram equalization to enhance the local contrast, and bilateral filtering to eliminate noise while preserving the edge. The deep learning architecture for classification comprises a pre-trained EfficientNet-B0 and fully connected layers that output two labels by the sigmoid activation function for the classification task. The average precision of tooth detection using YOLOv7 is 97.1%. For the recognition of periodontitis, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve is 98.67%, and the AUC of the precision-recall (PR) curve is 98.38%. For the recognition of dental caries, the AUC of the ROC curve is 98.31%, and the AUC of the PR curve is 97.55%. Different from the conventional deep learning-based methods for a single disease such as periodontitis or dental caries, the proposed approach can provide the recognition of both periodontitis and dental caries simultaneously. This recognition method presents good performance in the identification of periodontitis and dental caries, thus facilitating dental diagnosis.
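The classification head described above outputs two sigmoid probabilities that are thresholded independently, which is what makes the task multi-label rather than mutually exclusive (a tooth can have both conditions). A sketch of that decision step; the logit values and the label ordering are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def multilabel_decision(logits, threshold=0.5):
    """Turn the two-logit output of the classification head into independent
    periodontitis / dental-caries decisions via the sigmoid, as in a
    multi-label (rather than softmax, mutually exclusive) setup."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    labels = ["periodontitis", "dental caries"]  # assumed ordering
    return {name: (float(p), bool(p >= threshold)) for name, p in zip(labels, probs)}

res = multilabel_decision([2.0, -1.0])
print(res["periodontitis"][1], res["dental caries"][1])  # True False
```

With a softmax head the two probabilities would be forced to compete; independent sigmoids let the network flag both diseases on the same tooth, matching the "simultaneous recognition" goal.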
Affiliation(s)
- Chieh-Ming Yang
- Department of Electrical Engineering, National Dong Hwa University, Hualien 97401, Taiwan
- Mei-Juan Chen
- Department of Electrical Engineering, National Dong Hwa University, Hualien 97401, Taiwan
- Ming-Chin Chen
- Department of Electrical Engineering, National Dong Hwa University, Hualien 97401, Taiwan
- Ro-Min Weng
- Department of Electrical Engineering, National Dong Hwa University, Hualien 97401, Taiwan
- Chia-Hung Yeh
- Department of Electrical Engineering, National Taiwan Normal University, Taipei 10610, Taiwan
- Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan

47
Mu J, Lin Y, Meng X, Fan J, Ai D, Chen D, Qiu H, Yang J, Gu Y. M-CSAFN: Multi-Color Space Adaptive Fusion Network for Automated Port-Wine Stains Segmentation. IEEE J Biomed Health Inform 2023; 27:3924-3935. [PMID: 37027679 DOI: 10.1109/jbhi.2023.3247479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
Abstract
Automatic segmentation of port-wine stains (PWS) from clinical images is critical for accurate diagnosis and objective assessment of PWS. However, this is a challenging task due to the color heterogeneity, low contrast, and indistinguishable appearance of PWS lesions. To address these challenges, we propose a novel multi-color space adaptive fusion network (M-CSAFN) for PWS segmentation. First, a multi-branch detection model is constructed based on six typical color spaces, which utilizes rich color texture information to highlight the differences between lesions and surrounding tissues. Second, an adaptive fusion strategy is used to fuse complementary predictions, which addresses the significant variation within lesions caused by color heterogeneity. Third, a structural similarity loss with color information is proposed to measure the detail error between predicted and ground-truth lesions. Additionally, a PWS clinical dataset consisting of 1413 image pairs was established for the development and evaluation of PWS segmentation algorithms. To verify the effectiveness and superiority of the proposed method, we compared it with state-of-the-art methods on our collected dataset and on four publicly available skin lesion datasets (ISIC 2016, ISIC 2017, ISIC 2018, and PH2). The experimental results show that our method achieves remarkable performance on our collected dataset, reaching 92.29% and 86.14% on the Dice and Jaccard metrics, respectively. Comparative experiments on the other datasets also confirmed the reliability and potential of M-CSAFN in skin lesion segmentation.
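The core "multi-color space" idea can be illustrated with a minimal sketch: express the same pixel in several color spaces so that parallel branches see complementary views of a lesion. The particular spaces below (RGB, HSV, HLS, YIQ via Python's standard `colorsys` module) and the per-pixel scope are assumptions for illustration only; the paper uses six color spaces, whole images, and CNN branches with learned fusion.

```python
import colorsys

def multi_space_features(r: float, g: float, b: float) -> dict:
    """Represent one RGB pixel (components in [0, 1]) in several
    color spaces, mimicking the multi-branch input of M-CSAFN."""
    return {
        "rgb": (r, g, b),
        "hsv": colorsys.rgb_to_hsv(r, g, b),  # hue separates reddish lesion tones
        "hls": colorsys.rgb_to_hls(r, g, b),  # lightness decouples shading
        "yiq": colorsys.rgb_to_yiq(r, g, b),  # luma/chroma split
    }

# A reddish, PWS-like tone viewed through four color spaces at once.
feats = multi_space_features(0.8, 0.2, 0.3)
```

Each branch would then process one of these representations, and the network fuses the branch predictions adaptively.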
48
Balma M, Laudicella R, Gallio E, Gusella S, Lorenzon L, Peano S, Costa RP, Rampado O, Farsad M, Evangelista L, Deandreis D, Papaleo A, Liberini V. Applications of Artificial Intelligence and Radiomics in Molecular Hybrid Imaging and Theragnostics for Neuro-Endocrine Neoplasms (NENs). Life (Basel) 2023; 13:1647. [PMID: 37629503 PMCID: PMC10455722 DOI: 10.3390/life13081647] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 07/12/2023] [Accepted: 07/25/2023] [Indexed: 08/27/2023] Open
Abstract
Nuclear medicine has acquired a crucial role in the management of patients with neuroendocrine neoplasms (NENs) by improving the accuracy of diagnosis and staging as well as their risk stratification and personalized therapies, including radioligand therapies (RLT). Artificial intelligence (AI) and radiomics can enable physicians to further improve the overall efficiency and accuracy of the use of these tools in both diagnostic and therapeutic settings by improving the prediction of the tumor grade, differential diagnosis from other malignancies, assessment of tumor behavior and aggressiveness, and prediction of treatment response. This systematic review aims to describe the state-of-the-art AI and radiomics applications in the molecular imaging of NENs.
Affiliation(s)
- Michele Balma
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy
- Riccardo Laudicella
- Unit of Nuclear Medicine, Biomedical Department of Internal and Specialist Medicine, University of Palermo, 90133 Palermo, Italy
- Elena Gallio
- Medical Physics Unit, A.O.U. Città Della Salute E Della Scienza Di Torino, Corso Bramante 88/90, 10126 Torino, Italy
- Sara Gusella
- Nuclear Medicine, Central Hospital Bolzano, 39100 Bolzano, Italy
- Leda Lorenzon
- Medical Physics Department, Central Bolzano Hospital, 39100 Bolzano, Italy
- Simona Peano
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy
- Renato P. Costa
- Unit of Nuclear Medicine, Biomedical Department of Internal and Specialist Medicine, University of Palermo, 90133 Palermo, Italy
- Osvaldo Rampado
- Medical Physics Unit, A.O.U. Città Della Salute E Della Scienza Di Torino, Corso Bramante 88/90, 10126 Torino, Italy
- Mohsen Farsad
- Nuclear Medicine, Central Hospital Bolzano, 39100 Bolzano, Italy
- Laura Evangelista
- Department of Biomedical Sciences, Humanitas University, 20089 Milan, Italy
- Desiree Deandreis
- Department of Nuclear Medicine and Endocrine Oncology, Gustave Roussy and Université Paris Saclay, 94805 Villejuif, France
- Alberto Papaleo
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy
- Virginia Liberini
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy

49
Walsh G, Stogiannos N, van de Venter R, Rainey C, Tam W, McFadden S, McNulty JP, Mekis N, Lewis S, O'Regan T, Kumar A, Huisman M, Bisdas S, Kotter E, Pinto dos Santos D, Sá dos Reis C, van Ooijen P, Brady AP, Malamateniou C. Responsible AI practice and AI education are central to AI implementation: a rapid review for all medical imaging professionals in Europe. BJR Open 2023; 5:20230033. [PMID: 37953871 PMCID: PMC10636340 DOI: 10.1259/bjro.20230033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Revised: 05/27/2023] [Accepted: 05/30/2023] [Indexed: 11/14/2023] Open
Abstract
Artificial intelligence (AI) has transitioned from the lab to the bedside, and it is increasingly being used in healthcare. Radiology and radiography are on the frontline of AI implementation because of the use of big data for medical imaging and diagnosis across different patient groups. Safe and effective AI implementation requires that responsible and ethical practices are upheld by all key stakeholders, that there is harmonious collaboration between different professional groups, and that customised educational provision is available for all involved. This paper outlines key principles of ethical and responsible AI, highlights recent educational initiatives for clinical practitioners, and discusses the synergies among all medical imaging professionals as they prepare for the digital future in Europe. Responsible and ethical AI is vital to enhance a culture of safety and trust for healthcare professionals and patients alike. Education and training on AI for medical imaging professionals are central to the understanding of basic AI principles and applications, and many offerings currently exist in Europe. Education can facilitate the transparency of AI tools, but more formalised, university-led training is needed to ensure academic scrutiny, appropriate pedagogy, multidisciplinarity, and customisation to learners' unique needs. As radiographers and radiologists work together and with other professionals to understand and harness the benefits of AI in medical imaging, it becomes clear that they face the same challenges and have the same needs. The digital future belongs to multidisciplinary teams that work seamlessly together, learn together, manage risk collectively, and collaborate for the benefit of the patients they serve.
Affiliation(s)
- Gemma Walsh
- Division of Midwifery & Radiography, City University of London, London, United Kingdom
- Clare Rainey
- School of Health Sciences, Ulster University, Derry~Londonderry, Northern Ireland
- Winnie Tam
- Division of Midwifery & Radiography, City University of London, London, United Kingdom
- Sonyia McFadden
- School of Health Sciences, Ulster University, Coleraine, United Kingdom
- Nejc Mekis
- Medical Imaging and Radiotherapy Department, University of Ljubljana, Faculty of Health Sciences, Ljubljana, Slovenia
- Sarah Lewis
- Discipline of Medical Imaging Science, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Tracy O'Regan
- The Society and College of Radiographers, London, United Kingdom
- Amrita Kumar
- Frimley Health NHS Foundation Trust, Frimley, United Kingdom
- Merel Huisman
- Department of Radiology, University Medical Center Utrecht, Utrecht, Netherlands
- Cláudia Sá dos Reis
- School of Health Sciences (HESAV), University of Applied Sciences and Arts Western Switzerland (HES-SO), Lausanne, Switzerland

50
Yang T, Zhu G, Cai L, Yeo JH, Mao Y, Yang J. A benchmark study of convolutional neural networks in fully automatic segmentation of aortic root. Front Bioeng Biotechnol 2023; 11:1171868. [PMID: 37397959 PMCID: PMC10311214 DOI: 10.3389/fbioe.2023.1171868] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2023] [Accepted: 06/06/2023] [Indexed: 07/04/2023] Open
Abstract
Recent clinical studies have suggested that introducing 3D patient-specific aortic root models into the pre-operative assessment procedure of transcatheter aortic valve replacement (TAVR) would reduce the rate of peri-operative complications. Traditional manual segmentation is labor-intensive and inefficient, and cannot meet the clinical demand of processing large data volumes. Recent developments in machine learning provide a viable way to automatically generate accurate and efficient medical image segmentations for 3D patient-specific models. This study quantitatively evaluated the segmentation quality and efficiency of four popular segmentation-dedicated three-dimensional (3D) convolutional neural network (CNN) architectures: 3D UNet, VNet, 3D Res-UNet, and SegResNet. All the CNNs were implemented on the PyTorch platform, and low-dose CTA image sets of 98 anonymized patients were retrospectively selected from the database for training and testing. The results showed that although all four 3D CNNs achieved similar recall, Dice similarity coefficient (DSC), and Jaccard index on aortic root segmentation, the Hausdorff distance (HD) of the segmentation results from 3D Res-UNet is 8.56 ± 2.28, only 9.8% higher than that of VNet but 25.5% and 86.4% lower than that of 3D UNet and SegResNet, respectively. In addition, 3D Res-UNet and VNet also performed better in the 3D deviation location-of-interest analysis focusing on the aortic valve and the bottom of the aortic root. Although 3D Res-UNet and VNet are evenly matched on classical segmentation quality metrics and the 3D deviation location-of-interest analysis, 3D Res-UNet is the most efficient architecture, with an average segmentation time of 0.10 ± 0.04 s, which is 91.2%, 95.3%, and 64.3% faster than 3D UNet, VNet, and SegResNet, respectively.
These results suggest that 3D Res-UNet is a suitable candidate for accurate and fast automatic aortic root segmentation in the pre-operative assessment of TAVR.
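The evaluation metrics named in this benchmark (Dice similarity coefficient, Jaccard index, and Hausdorff distance) can be sketched on toy binary masks represented as sets of voxel coordinates. This illustrates the metric definitions only; it is not the benchmark code, which operated on full 3D CTA segmentations.

```python
import math

def dice(a: set, b: set) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a: set, b: set) -> float:
    """Jaccard index: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

def hausdorff(a: set, b: set) -> float:
    """Symmetric Hausdorff distance: the worst-case distance from a
    point in one mask to its nearest point in the other mask."""
    def directed(x, y):
        return max(min(math.dist(p, q) for q in y) for p in x)
    return max(directed(a, b), directed(b, a))

# Two toy 2D masks that overlap in two of three pixels each.
A = {(0, 0), (0, 1), (1, 0)}
B = {(0, 1), (1, 0), (1, 1)}
```

Note the asymmetry in sensitivity: Dice and Jaccard are overlap ratios that tolerate small boundary errors, while the Hausdorff distance is dominated by the single worst outlier, which is why the study reports them separately.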
Affiliation(s)
- Tingting Yang
- School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an, China
- Guangyu Zhu
- School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an, China
- Li Cai
- School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an, China
- Joon Hock Yeo
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
- Yu Mao
- Department of Cardiac Surgery, Xijing Hospital, The Fourth Military Medical University, Xi’an, China
- Jian Yang
- Department of Cardiac Surgery, Xijing Hospital, The Fourth Military Medical University, Xi’an, China