51
Wen X, Zhao C, Zhao B, Yuan M, Chang J, Liu W, Meng J, Shi L, Yang S, Zeng J, Yang Y. Application of deep learning in radiation therapy for cancer. Cancer Radiother 2024; 28:208-217. [PMID: 38519291] [DOI: 10.1016/j.canrad.2023.07.015]
Abstract
In recent years, with the development of artificial intelligence, deep learning has gradually been applied to clinical treatment and research, including radiotherapy, a crucial method of cancer treatment. This study summarizes commonly used and recent deep learning algorithms (including transformers and diffusion models), introduces the workflows of different radiotherapy modalities, illustrates how different algorithms are applied in the various modules of radiotherapy, and discusses the shortcomings and challenges of deep learning in this field, so as to support the development of automated radiotherapy for cancer.
Affiliation(s)
- X Wen
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- C Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Minhang District, Shanghai, China
- B Zhao
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- M Yuan
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- J Chang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- W Liu
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Meng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- L Shi
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- S Yang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Zeng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- Y Yang
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China.
52
Wang ZY, Gong Y, Liu F, Chen D, Zheng JW, Shen JF. Influence of intraoral scanning coverage on the accuracy of digital implant impressions - An in vitro study. J Dent 2024; 143:104929. [PMID: 38458380] [DOI: 10.1016/j.jdent.2024.104929]
Abstract
OBJECTIVES To evaluate the influence of intraoral scanning coverage (IOSC) on digital implant impression accuracy in various partially edentulous situations and predict the optimal IOSC. METHODS Five types of resin models were fabricated, each simulating single or multiple tooth loss scenarios with inserted implants and scan bodies. IOSC was subgrouped to cover two, four, six, eight, ten, and twelve teeth, as well as the full arch. Each group underwent ten scans. A desktop scanner served as the reference. Accuracy was evaluated by measuring the root mean square error (RMSE) values of the scan bodies. A convolutional neural network (CNN) was trained to predict the optimal IOSC for different edentulous situations. Statistical analysis was performed using one-way ANOVA and Tukey's test. RESULTS For single-tooth-missing situations, in anterior sites, significantly better accuracy was observed in groups with IOSC ranging from four teeth to the full arch (p < 0.05). In premolar sites, IOSC spanning four to six teeth was more accurate (p < 0.05), while in molar sites, groups with IOSC encompassing two to eight teeth exhibited better accuracy (p < 0.05). For multiple-teeth-missing situations, IOSC covering four, six, and eight teeth, as well as the full arch, showed better accuracy in anterior gaps (p < 0.05). In posterior gaps, IOSC of two, four, six, or eight teeth was more accurate (p < 0.05). The CNN predicted distinct optimal IOSC for different edentulous scenarios. CONCLUSIONS Implant impression accuracy can be significantly affected by IOSC in different partially edentulous situations. The selection of IOSC should be customized to the specific dentition defect. CLINICAL SIGNIFICANCE The number of teeth scanned can significantly affect digital implant impression accuracy. For a missing single or four anterior teeth, scanning at least four or six neighboring teeth is acceptable. In lateral cases, two neighboring teeth may suffice, but extending coverage beyond ten teeth, including the contralateral side, might deteriorate the scan.
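Trueness metrics like the RMSE values reported in this abstract are computed over corresponding points on the test and reference scans. A minimal sketch of that calculation, assuming the meshes have already been aligned and resampled into paired point arrays (the function and variable names here are illustrative, not from the study):

```python
import numpy as np

def rmse(test_points: np.ndarray, reference_points: np.ndarray) -> float:
    """Root mean square error between corresponding 3D points (N x 3 arrays)."""
    deviations = np.linalg.norm(test_points - reference_points, axis=1)
    return float(np.sqrt(np.mean(deviations ** 2)))

# Hypothetical example: a test scan uniformly offset 0.05 mm from the reference.
reference = np.zeros((100, 3))
test = reference + np.array([0.05, 0.0, 0.0])
print(rmse(test, reference))  # uniform 0.05 mm offset -> RMSE of 0.05
```

In practice the alignment step (e.g. best-fit registration of the scan bodies) dominates the workflow; the RMSE itself is only this final reduction.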
Affiliation(s)
- Zhen-Yu Wang
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Chengdu, Sichuan Province, China; West China School of Stomatology, Sichuan University, Chengdu, Sichuan Province, China
- Yu Gong
- College of Computer Science, Sichuan University, Chengdu, Sichuan Province, China
- Fei Liu
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Chengdu, Sichuan Province, China; West China School of Stomatology, Sichuan University, Chengdu, Sichuan Province, China; West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan Province, China
- Du Chen
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Chengdu, Sichuan Province, China; West China School of Stomatology, Sichuan University, Chengdu, Sichuan Province, China
- Jia-Wen Zheng
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Chengdu, Sichuan Province, China; West China School of Stomatology, Sichuan University, Chengdu, Sichuan Province, China
- Jie-Fei Shen
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Chengdu, Sichuan Province, China; West China School of Stomatology, Sichuan University, Chengdu, Sichuan Province, China; West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan Province, China.
53
Binh LN, Nhu NT, Vy VPT, Son DLH, Hung TNK, Bach N, Huy HQ, Tuan LV, Le NQK, Kang JH. Multi-Class Deep Learning Model for Detecting Pediatric Distal Forearm Fractures Based on the AO/OTA Classification. J Imaging Inform Med 2024; 37:725-733. [PMID: 38308069] [PMCID: PMC11031555] [DOI: 10.1007/s10278-024-00968-4]
Abstract
Common pediatric distal forearm fractures necessitate precise detection. To support prompt treatment planning by clinicians, our study aimed to create a multi-class convolutional neural network (CNN) model for pediatric distal forearm fractures, guided by the AO Foundation/Orthopaedic Trauma Association (AO/OTA) classification system for pediatric fractures. The GRAZPEDWRI-DX dataset (2008-2018) of wrist X-ray images was used. We labeled images into four fracture classes (FRM, FUM, FRE, and FUE, with F, fracture; R, radius; U, ulna; M, metaphysis; and E, epiphysis) based on the pediatric AO/OTA classification. We performed multi-class classification by training a YOLOv4-based CNN object detection model with 7006 images from 1809 patients (80% for training and 20% for validation). An 88-image test set from 34 patients was used to evaluate the model performance, which was then compared to the diagnostic performance of two readers, an orthopedist and a radiologist. The overall mean average precision levels on the validation set for the four classes were 0.97, 0.92, 0.95, and 0.94, respectively. On the test set, the model's performance included sensitivities of 0.86, 0.71, 0.88, and 0.89; specificities of 0.88, 0.94, 0.97, and 0.98; and area under the curve (AUC) values of 0.87, 0.83, 0.93, and 0.94, respectively. The best performance among the three readers belonged to the radiologist, with a mean AUC of 0.922, followed by our model (0.892) and the orthopedist (0.830). Therefore, using the AO/OTA concept, our multi-class fracture detection model excelled in identifying pediatric distal forearm fractures.
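Per-class sensitivity and specificity figures like those quoted above follow from treating each fracture class one-versus-rest against a multi-class confusion matrix. A short sketch of that computation (the 3x3 matrix below is made up for illustration, not taken from the study):

```python
import numpy as np

def per_class_sensitivity_specificity(confusion: np.ndarray):
    """One-vs-rest sensitivity and specificity per class for a multi-class
    confusion matrix (rows = true class, columns = predicted class)."""
    total = confusion.sum()
    tp = np.diag(confusion)                  # correct predictions per class
    fn = confusion.sum(axis=1) - tp          # missed cases of each class
    fp = confusion.sum(axis=0) - tp          # other classes predicted as it
    tn = total - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 3-class confusion matrix
cm = np.array([[8, 1, 1],
               [2, 7, 1],
               [0, 1, 9]])
sens, spec = per_class_sensitivity_specificity(cm)
```

The same counts feed the ROC/AUC analysis once prediction scores, rather than hard labels, are thresholded.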
Affiliation(s)
- Le Nguyen Binh
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan
- Nguyen Thanh Nhu
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho 94117, Can Tho, Vietnam
- Vu Pham Thao Vy
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Do Le Hoang Son
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- Nguyen Bach
- Department of Orthopedics, University Medical Center Ho Chi Minh City, 201 Nguyen Chi Thanh Street, District 5, Ho Chi Minh City, Vietnam
- Hoang Quoc Huy
- Department of Orthopedics, University Medical Center Ho Chi Minh City, 201 Nguyen Chi Thanh Street, District 5, Ho Chi Minh City, Vietnam
- Le Van Tuan
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- Nguyen Quoc Khanh Le
- AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan.
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan.
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, 11031, Taiwan.
- Jiunn-Horng Kang
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan.
- Department of Physical Medicine and Rehabilitation, School of Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan.
- Department of Physical Medicine and Rehabilitation, Taipei Medical University Hospital, Taipei, 11031, Taiwan.
- Graduate Institute of Nanomedicine and Medical Engineering, College of Biomedical Engineering, Taipei Medical University, Xinyi District, No.250, Wuxing Street, Taipei, 11031, Taiwan.
54
Naseri S, Shukla S, Hiwale KM, Jagtap MM, Gadkari P, Gupta K, Deshmukh M, Sagar S. From Pixels to Prognosis: A Narrative Review on Artificial Intelligence's Pioneering Role in Colorectal Carcinoma Histopathology. Cureus 2024; 16:e59171. [PMID: 38807833] [PMCID: PMC11129955] [DOI: 10.7759/cureus.59171]
Abstract
Colorectal carcinoma, a prevalent and deadly malignancy, necessitates precise histopathological assessment for effective diagnosis and prognosis. Artificial intelligence (AI) emerges as a transformative force in this realm, offering innovative solutions to enhance traditional histopathological methods. This narrative review explores AI's pioneering role in colorectal carcinoma histopathology, encompassing its evolution, techniques, and advancements. AI algorithms, notably machine learning and deep learning, have revolutionized image analysis, facilitating accurate diagnosis and prognosis prediction. Furthermore, AI-driven histopathological analysis unveils potential biomarkers and therapeutic targets, heralding personalized treatment approaches. Despite its promise, challenges persist, including data quality, interpretability, and integration. Collaborative efforts among researchers, clinicians, and AI developers are imperative to surmount these hurdles and realize AI's full potential in colorectal carcinoma care. This review underscores AI's transformative impact and implications for future oncology research, clinical practice, and interdisciplinary collaboration.
Affiliation(s)
- Suhit Naseri
- Pathology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND
- Samarth Shukla
- Pathology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND
- K M Hiwale
- Pathology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND
- Miheer M Jagtap
- Pathology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND
- Pravin Gadkari
- Pathology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND
- Kartik Gupta
- Radiation Oncology, Delhi State Cancer Institute, Delhi, IND
- Mamta Deshmukh
- Pathology, Indian Institute of Medical Sciences and Research, Jalna, IND
- Shakti Sagar
- Pathology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, IND
55
Raimondo D, Raffone A, Salucci P, Raimondo I, Capobianco G, Galatolo FA, Cimino MGCA, Travaglino A, Maletta M, Ferla S, Virgilio A, Neola D, Casadio P, Seracchioli R. Detection and Classification of Hysteroscopic Images Using Deep Learning. Cancers (Basel) 2024; 16:1315. [PMID: 38610993] [PMCID: PMC11011142] [DOI: 10.3390/cancers16071315]
Abstract
BACKGROUND Although hysteroscopy with endometrial biopsy is the gold standard in the diagnosis of endometrial pathology, the gynecologist experience is crucial for a correct diagnosis. Deep learning (DL), as an artificial intelligence method, might help to overcome this limitation. Unfortunately, only preliminary findings are available, with the absence of studies evaluating the performance of DL models in identifying intrauterine lesions and the possible aid related to the inclusion of clinical factors in the model. AIM To develop a DL model as an automated tool for detecting and classifying endometrial pathologies from hysteroscopic images. METHODS A monocentric observational retrospective cohort study was performed by reviewing clinical records, electronic databases, and stored videos of hysteroscopies from consecutive patients with pathologically confirmed intrauterine lesions at our Center from January 2021 to May 2021. Retrieved hysteroscopic images were used to build a DL model for the classification and identification of intracavitary uterine lesions with or without the aid of clinical factors. Study outcomes were DL model diagnostic metrics in the classification and identification of intracavitary uterine lesions with and without the aid of clinical factors. RESULTS We reviewed 1500 images from 266 patients: 186 patients had benign focal lesions, 25 benign diffuse lesions, and 55 preneoplastic/neoplastic lesions. For both the classification and identification tasks, the best performance was achieved with the aid of clinical factors, with an overall precision of 80.11%, recall of 80.11%, specificity of 90.06%, F1 score of 80.11%, and accuracy of 86.74% for the classification task, and overall detection of 85.82%, precision of 93.12%, recall of 91.63%, and an F1 score of 92.37% for the identification task.
CONCLUSION Our DL model achieved good diagnostic performance in the detection and classification of intracavitary uterine lesions from hysteroscopic images. Although the best diagnostic performance was obtained with the aid of clinical data, the improvement was slight.
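The precision, recall, and F1 values reported above are related by a fixed formula: F1 is the harmonic mean of precision and recall. A minimal sketch, with hypothetical counts rather than the study's data:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 score from raw true/false positive and
    false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts for illustration
p, r, f = precision_recall_f1(tp=90, fp=10, fn=30)
```

Note that when precision and recall are equal (as in the 80.11% classification figures above), F1 equals that shared value.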
Affiliation(s)
- Diego Raimondo
- Division of Gynaecology and Human Reproduction Physiopathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy; (D.R.); (P.C.); (R.S.)
- Antonio Raffone
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40127 Bologna, Italy; (M.M.); (S.F.)
- Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, 80131 Naples, Italy
- Paolo Salucci
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40127 Bologna, Italy; (M.M.); (S.F.)
- Ivano Raimondo
- Department of Biomedical Sciences, University of Sassari, 07100 Sassari, Italy
- Gynecology and Breast Care Center, Mater Olbia Hospital, 07026 Olbia, Italy
- Giampiero Capobianco
- Gynecologic and Obstetric Unit, Department of Medical, Surgical and Experimental Sciences, University of Sassari, 07100 Sassari, Italy
- Federico Andrea Galatolo
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy; (F.A.G.); (M.G.C.A.C.)
- Antonio Travaglino
- Unit of Pathology, Department of Medicine and Technological Innovation, University of Insubria, 21100 Varese, Italy
- Manuela Maletta
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40127 Bologna, Italy; (M.M.); (S.F.)
- Stefano Ferla
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40127 Bologna, Italy; (M.M.); (S.F.)
- Agnese Virgilio
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40127 Bologna, Italy; (M.M.); (S.F.)
- Daniele Neola
- Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, 80131 Naples, Italy
- Paolo Casadio
- Division of Gynaecology and Human Reproduction Physiopathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy; (D.R.); (P.C.); (R.S.)
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40127 Bologna, Italy; (M.M.); (S.F.)
- Renato Seracchioli
- Division of Gynaecology and Human Reproduction Physiopathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy; (D.R.); (P.C.); (R.S.)
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40127 Bologna, Italy; (M.M.); (S.F.)
56
Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024; 10:81. [PMID: 38667979] [PMCID: PMC11050909] [DOI: 10.3390/jimaging10040081]
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or sequences of images to recognize content, has been used extensively across industries in recent years. However, in the healthcare industry, its applications are limited by factors such as privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiencies while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review developments of CV in hospital, outpatient, and community settings. The recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of workload in the hospital, and monitoring for patient events outside the hospital are highlighted. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline processes in algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for its expanded use in healthcare.
Affiliation(s)
- Heidi Lindroth
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Keivan Nalaie
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA; (D.A.D.); (B.W.P.); (V.H.)
- Roshini Raghu
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- Ivan N. Ayala
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- Charles Busch
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Pablo Moreno Franco
- Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Daniel A. Diedrich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA; (D.A.D.); (B.W.P.); (V.H.)
- Brian W. Pickering
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA; (D.A.D.); (B.W.P.); (V.H.)
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA; (D.A.D.); (B.W.P.); (V.H.)
57
Kumar N, Srivastava R. Deep learning in structural bioinformatics: current applications and future perspectives. Brief Bioinform 2024; 25:bbae042. [PMID: 38701422] [PMCID: PMC11066934] [DOI: 10.1093/bib/bbae042]
Abstract
In this review article, we explore the transformative impact of deep learning (DL) on structural bioinformatics, emphasizing its pivotal role in a scientific revolution driven by extensive data, accessible toolkits and robust computing resources. As big data continue to advance, DL is poised to become an integral component in healthcare and biology, revolutionizing analytical processes. Our comprehensive review provides detailed insights into DL, featuring specific demonstrations of its notable applications in bioinformatics. We address challenges tailored for DL, spotlight recent successes in structural bioinformatics and present a clear exposition of DL, from basic shallow neural networks to advanced models such as convolutional, recurrent, and transformer neural networks. This paper discusses the emerging use of DL for understanding biomolecular structures, anticipating ongoing developments and applications in the realm of structural bioinformatics.
Affiliation(s)
- Niranjan Kumar
- School of Computational and Integrative Sciences, Jawaharlal Nehru University, New Delhi, India
- Rakesh Srivastava
- Center for Computational Natural Sciences and Bioinformatics, International Institute of Information Technology, Hyderabad, India
58
Kryszan K, Wylęgała A, Kijonka M, Potrawa P, Walasz M, Wylęgała E, Orzechowska-Wylęgała B. Artificial-Intelligence-Enhanced Analysis of In Vivo Confocal Microscopy in Corneal Diseases: A Review. Diagnostics (Basel) 2024; 14:694. [PMID: 38611606] [PMCID: PMC11011861] [DOI: 10.3390/diagnostics14070694]
Abstract
Artificial intelligence (AI) has seen significant progress in medical diagnostics, particularly in image and video analysis. This review focuses on the application of AI in analyzing in vivo confocal microscopy (IVCM) images for corneal diseases. The cornea, as an exposed and delicate part of the body, necessitates the precise diagnoses of various conditions. Convolutional neural networks (CNNs), a key component of deep learning, are a powerful tool for image data analysis. This review highlights AI applications in diagnosing keratitis, dry eye disease, and diabetic corneal neuropathy. It discusses the potential of AI in detecting infectious agents, analyzing corneal nerve morphology, and identifying the subtle changes in nerve fiber characteristics in diabetic corneal neuropathy. However, challenges remain, including limited datasets, overfitting, low-quality images, and unrepresentative training data, and this review explores augmentation techniques and the importance of feature engineering to address them. Further obstacles include the "black-box" nature of AI models and the need for explainable AI (XAI). Expanding datasets, fostering collaborative efforts, and developing user-friendly AI tools are crucial for enhancing the acceptance and integration of AI into clinical practice.
Affiliation(s)
- Katarzyna Kryszan
- Chair and Clinical Department of Ophthalmology, School of Medicine in Zabrze, Medical University of Silesia in Katowice, District Railway Hospital, 40-760 Katowice, Poland; (A.W.); (M.K.); (E.W.)
- Department of Ophthalmology, District Railway Hospital in Katowice, 40-760 Katowice, Poland; (P.P.); (M.W.)
- Adam Wylęgała
- Chair and Clinical Department of Ophthalmology, School of Medicine in Zabrze, Medical University of Silesia in Katowice, District Railway Hospital, 40-760 Katowice, Poland; (A.W.); (M.K.); (E.W.)
- Health Promotion and Obesity Management, Pathophysiology Department, Medical University of Silesia in Katowice, 40-752 Katowice, Poland
- Magdalena Kijonka
- Chair and Clinical Department of Ophthalmology, School of Medicine in Zabrze, Medical University of Silesia in Katowice, District Railway Hospital, 40-760 Katowice, Poland; (A.W.); (M.K.); (E.W.)
- Department of Ophthalmology, District Railway Hospital in Katowice, 40-760 Katowice, Poland; (P.P.); (M.W.)
- Patrycja Potrawa
- Department of Ophthalmology, District Railway Hospital in Katowice, 40-760 Katowice, Poland; (P.P.); (M.W.)
- Mateusz Walasz
- Department of Ophthalmology, District Railway Hospital in Katowice, 40-760 Katowice, Poland; (P.P.); (M.W.)
- Edward Wylęgała
- Chair and Clinical Department of Ophthalmology, School of Medicine in Zabrze, Medical University of Silesia in Katowice, District Railway Hospital, 40-760 Katowice, Poland; (A.W.); (M.K.); (E.W.)
- Department of Ophthalmology, District Railway Hospital in Katowice, 40-760 Katowice, Poland; (P.P.); (M.W.)
- Bogusława Orzechowska-Wylęgała
- Department of Pediatric Otolaryngology, Head and Neck Surgery, Chair of Pediatric Surgery, Medical University of Silesia, 40-760 Katowice, Poland
59
Zolfaghari S, Yousefi Rezaii T, Meshgini S. Applying Common Spatial Pattern and Convolutional Neural Network to Classify Movements via EEG Signals. Clin EEG Neurosci 2024:15500594241234836. [PMID: 38523306] [DOI: 10.1177/15500594241234836]
Abstract
Developing an electroencephalography (EEG)-based brain-computer interface (BCI) system is crucial to enhancing the control of external prostheses by accurately distinguishing various movements from brain signals, which can benefit people with movement disabilities. This study combined two of the most promising methods used in BCI systems, one-versus-rest common spatial pattern (OVR-CSP) and convolutional neural networks (CNNs), to automatically extract features and classify eight different movements of the shoulder, wrist, and elbow via EEG signals. Ten subjects participated in the experiment, and their EEG signals were recorded while performing movements at fast and slow speeds. We applied preprocessing techniques before transforming the EEG signals into another space by OVR-CSP, followed by feeding the signals into a CNN architecture consisting of four convolutional layers. Moreover, we extracted feature vectors after applying OVR-CSP and used them as inputs to KNN, SVM, and MLP classifiers, whose performance was then compared with the CNN method. The results demonstrated that classification of the eight movements using the proposed CNN architecture achieved an average accuracy of 97.65% for slow movements and 96.25% for fast movements in the subject-independent model. This method outperformed the other classifiers by a substantial margin and can therefore be useful in improving BCI systems for better control of prostheses.
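CSP, the feature-extraction step named in this abstract, finds spatial filters that maximize the variance of one class of EEG trials while minimizing it for the other; the one-versus-rest variant applies this per movement class. A numpy-only sketch of the two-class building block via whitening (all data shapes and names below are illustrative, not from the study):

```python
import numpy as np

def csp_filters(class_a, class_b, n_pairs=2):
    """Common spatial pattern filters for two classes of EEG trials,
    each shaped (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # normalized covariances
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(class_a), mean_cov(class_b)
    # Whiten the composite covariance, then diagonalize class a in that space.
    d, u = np.linalg.eigh(ca + cb)
    p = np.diag(d ** -0.5) @ u.T
    vals, w = np.linalg.eigh(p @ ca @ p.T)        # ascending eigenvalues
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])  # extreme filters
    return w[:, picks].T @ p                      # (2*n_pairs, n_channels)

def csp_features(trial, filters):
    """Log-variance features of a spatially filtered trial."""
    z = filters @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())

# Hypothetical trials: 20 per class, 8 channels, 256 samples
rng = np.random.default_rng(0)
a = rng.normal(size=(20, 8, 256))
b = rng.normal(size=(20, 8, 256))
f = csp_filters(a, b)
feat = csp_features(a[0], f)  # 4 log-variance features per trial
```

These low-dimensional log-variance features are what would then feed a downstream classifier (KNN, SVM, MLP, or a CNN, as compared in the study).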
Affiliation(s)
- Sepideh Zolfaghari
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Tohid Yousefi Rezaii
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Saeed Meshgini
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
60
Campion JR, O'Connor DB, Lahiff C. Human-artificial intelligence interaction in gastrointestinal endoscopy. World J Gastrointest Endosc 2024; 16:126-135. [PMID: 38577646] [PMCID: PMC10989254] [DOI: 10.4253/wjge.v16.i3.126]
Abstract
The number and variety of applications of artificial intelligence (AI) in gastrointestinal (GI) endoscopy is growing rapidly. New technologies based on machine learning (ML) and convolutional neural networks (CNNs) are at various stages of development and deployment to assist patients and endoscopists in preparing for endoscopic procedures, in detection, diagnosis and classification of pathology during endoscopy and in confirmation of key performance indicators. Platforms based on ML and CNNs require regulatory approval as medical devices. Interactions between humans and the technologies we use are complex and are influenced by design, behavioural and psychological elements. Due to the substantial differences between AI and prior technologies, important differences may be expected in how we interact with advice from AI technologies. Human–AI interaction (HAII) may be optimised by developing AI algorithms to minimise false positives and designing platform interfaces to maximise usability. Human factors influencing HAII may include automation bias, alarm fatigue, algorithm aversion, learning effect and deskilling. Each of these areas merits further study in the specific setting of AI applications in GI endoscopy and professional societies should engage to ensure that sufficient emphasis is placed on human-centred design in development of new AI technologies.
Affiliation(s)
- John R Campion
- Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland
- School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
- Donal B O'Connor
- Department of Surgery, Trinity College Dublin, Dublin D02 R590, Ireland
- Conor Lahiff
- Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland
- School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
61
Ku WL, Min H. Evaluating Machine Learning Stability in Predicting Depression and Anxiety Amidst Subjective Response Errors. Healthcare (Basel) 2024; 12:625. [PMID: 38540589] [PMCID: PMC11154473] [DOI: 10.3390/healthcare12060625] [Received: 01/04/2024] [Revised: 02/25/2024] [Accepted: 03/04/2024]
Abstract
Major Depressive Disorder (MDD) and Generalized Anxiety Disorder (GAD) pose significant burdens on individuals and society, necessitating accurate prediction methods. Machine learning (ML) algorithms utilizing electronic health records and survey data offer promising tools for forecasting these conditions. However, potential bias and inaccuracies inherent in subjective survey responses can undermine the precision of such predictions. This research investigates the reliability of five prominent ML algorithms (a Convolutional Neural Network (CNN), Random Forest, XGBoost, Logistic Regression, and Naive Bayes) in predicting MDD and GAD. A dataset rich in biomedical, demographic, and self-reported survey information is used to assess the algorithms' performance under different levels of subjective response inaccuracies. These inaccuracies simulate scenarios with potential memory recall bias and subjective interpretations. While all algorithms demonstrate commendable accuracy with high-quality survey data, their performance diverges significantly when encountering erroneous or biased responses. Notably, the CNN exhibits superior resilience in this context, maintaining performance and even achieving enhanced accuracy, Cohen's kappa score, and positive precision for both MDD and GAD. This highlights the CNN's superior ability to handle data unreliability, making it a potentially advantageous choice for predicting mental health conditions based on self-reported data. These findings underscore the critical importance of algorithmic resilience in mental health prediction, particularly when relying on subjective data. They emphasize the need for careful algorithm selection in such contexts, with the CNN emerging as a promising candidate due to its robustness and improved performance under data uncertainties.
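The paper's simulation protocol and classifiers are not reproduced here, but the core idea, injecting response errors into held-out survey data and measuring the resulting accuracy drop, can be sketched with toy data. The synthetic binary survey, the flip-rate noise model, and the nearest-centroid stand-in classifier below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_survey_data(n=400, d=12):
    """Synthetic binary survey responses: cases answer '1' more often."""
    y = rng.integers(0, 2, n)
    p = np.where(y[:, None] == 1, 0.7, 0.3)
    x = (rng.random((n, d)) < p).astype(float)
    return x, y

def corrupt(x, rate):
    """Simulate recall bias / misinterpretation by flipping answers."""
    flips = rng.random(x.shape) < rate
    return np.where(flips, 1.0 - x, x)

def nearest_centroid_accuracy(x_tr, y_tr, x_te, y_te):
    c0 = x_tr[y_tr == 0].mean(axis=0)
    c1 = x_tr[y_tr == 1].mean(axis=0)
    d0 = ((x_te - c0) ** 2).sum(axis=1)
    d1 = ((x_te - c1) ** 2).sum(axis=1)
    return ((d1 < d0).astype(int) == y_te).mean()

x, y = make_survey_data()
x_tr, y_tr, x_te, y_te = x[:300], y[:300], x[300:], y[300:]
# Accuracy under increasing levels of simulated response error.
accs = {rate: nearest_centroid_accuracy(x_tr, y_tr, corrupt(x_te, rate), y_te)
        for rate in (0.0, 0.2, 0.4)}
```

Plotting `accs` against the flip rate, per classifier, gives exactly the kind of stability curve the study compares across models.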
Affiliation(s)
- Wai Lim Ku
- Systems Biology Center, National Heart, Lung and Blood Institute, NIH, Bethesda, MD 20892, USA;
- Hua Min
- Department of Health Administration and Policy, College of Public Health, George Mason University, Fairfax, VA 22030, USA
62
Imagawa K, Shiomoto K. Evaluation of Effectiveness of Self-Supervised Learning in Chest X-Ray Imaging to Reduce Annotated Images. Journal of Imaging Informatics in Medicine 2024. [PMID: 38459399] [DOI: 10.1007/s10278-024-00975-5] [Received: 09/07/2023] [Revised: 11/17/2023] [Accepted: 11/17/2023]
Abstract
A significant challenge in machine learning-based medical image analysis is the scarcity of medical images. Obtaining a large number of labeled medical images is difficult because annotating medical images is a time-consuming process that requires specialized knowledge. In addition, inappropriate annotation processes can increase model bias. Self-supervised learning (SSL) is a type of unsupervised learning method that extracts image representations. Thus, SSL can be an effective method to reduce the number of labeled images. In this study, we investigated the feasibility of reducing the number of labeled images by pretraining on a limited set of unlabeled medical images. The unlabeled chest X-ray (CXR) images were pretrained using the SimCLR framework, and then the representations were fine-tuned with supervised learning for the target task. A total of 2000 task-specific CXR images were used to perform binary classification of coronavirus disease 2019 (COVID-19) and normal cases. The results demonstrate that the performance of pretraining on task-specific unlabeled CXR images can be maintained when the number of labeled CXR images is reduced by approximately 40%. In addition, the performance was significantly better than that obtained without pretraining. In contrast, when only a small number of labeled CXR images is available, a large number of unlabeled pretraining images is required to maintain performance regardless of task specificity. In summary, to reduce the number of labeled images using SimCLR, both the number of unlabeled images and the task-specific characteristics of the target images must be considered.
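SimCLR's core objective is the NT-Xent contrastive loss: two augmented views of the same image are pulled together in embedding space while all other images in the batch act as negatives. A compact NumPy version is sketched below for illustration only (batch-level loss on given embeddings; the augmentation pipeline and projection head are omitted).

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Normalized-temperature cross-entropy (NT-Xent) loss of SimCLR.
    z1[i] and z2[i] are embeddings of two augmented views of image i."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # a view is never its own negative
    # Row i's positive is the other view of the same image.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    pos = sim[np.arange(2 * n), targets]
    return float(np.mean(logsumexp - pos))
```

Minimizing this loss over unlabeled images produces the representations that are then fine-tuned with the small labeled set.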
Affiliation(s)
- Kuniki Imagawa
- Faculty of Information Technology, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo, 158-8557, Japan.
- Kohei Shiomoto
- Faculty of Information Technology, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo, 158-8557, Japan
63
Bhushan V, Nita-Lazar A. Recent Advancements in Subcellular Proteomics: Growing Impact of Organellar Protein Niches on the Understanding of Cell Biology. J Proteome Res 2024. [PMID: 38451675] [DOI: 10.1021/acs.jproteome.3c00839]
Abstract
The mammalian cell is a complex entity, with membrane-bound and membrane-less organelles playing vital roles in regulating cellular homeostasis. Organellar protein niches drive discrete biological processes and cell functions, thus maintaining cell equilibrium. Cellular processes such as signaling, growth, proliferation, motility, and programmed cell death require dynamic protein movements between cell compartments. Aberrant protein localization is associated with a wide range of diseases. Therefore, analyzing the subcellular proteome can provide a comprehensive overview of cellular biology. With recent advancements in mass spectrometry, imaging technology, computational tools, and deep learning algorithms, studies of subcellular protein localization and its dynamic distribution are gaining momentum. These studies reveal interaction networks that change because of "moonlighting proteins" and serve as a discovery tool for disease network mechanisms. Consequently, this review aims to provide a comprehensive repository of recent advancements in subcellular proteomics, covering methods, challenges, and future perspectives for method developers. In summary, subcellular proteomics is crucial to understanding fundamental cellular mechanisms and the associated diseases.
Affiliation(s)
- Vanya Bhushan
- Functional Cellular Networks Section, Laboratory of Immune System Biology, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, Maryland 20892, United States
- Aleksandra Nita-Lazar
- Functional Cellular Networks Section, Laboratory of Immune System Biology, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, Maryland 20892, United States
64
Saluja S, Trivedi MC, Saha A. Deep CNNs for glioma grading on conventional MRIs: Performance analysis, challenges, and future directions. Math Biosci Eng 2024; 21:5250-5282. [PMID: 38872535] [DOI: 10.3934/mbe.2024232]
Abstract
The increasing global incidence of glioma tumors has raised significant healthcare concerns due to their high mortality rates. Traditionally, tumor diagnosis relies on visual analysis of medical imaging and invasive biopsies for precise grading. As an alternative, computer-assisted methods, particularly deep convolutional neural networks (DCNNs), have gained traction. This research paper explores the recent advancements in DCNNs for glioma grading using brain magnetic resonance images (MRIs) from 2015 to 2023. The study evaluated various DCNN architectures and their performance, revealing remarkable results with models such as hybrid and ensemble-based DCNNs achieving accuracy levels of up to 98.91%. However, challenges persisted in the form of limited datasets, lack of external validation, and variations in grading formulations across diverse literature sources. Addressing these challenges through expanding datasets, conducting external validation, and standardizing grading formulations can enhance the performance and reliability of DCNNs in glioma grading, thereby advancing brain tumor classification and extending its applications to other neurological disorders.
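The review singles out hybrid and ensemble-based DCNNs as the top performers. The ensembling step itself is often simple probability averaging (soft voting) over the member networks' outputs; a hedged sketch with hypothetical per-model logits follows (the member architectures themselves are not reproduced).

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over grade logits."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ensemble_predict(member_logits):
    """Soft-voting ensemble: average each model's softmax probabilities,
    then take the argmax over glioma grades."""
    probs = np.mean([softmax(l) for l in member_logits], axis=0)
    return probs.argmax(axis=1)
```

Two confident models can thus outvote one weakly confident dissenter, which is part of why ensembles tend to be more robust than any single DCNN.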
Affiliation(s)
- Sonam Saluja
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Munesh Chandra Trivedi
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Ashim Saha
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
65
Osiecka-Drewniak N, Deptuch A, Urbańska M, Juszyńska-Gałązka E. A Siamese neural network framework for glass transition recognition. Soft Matter 2024; 20:2400-2406. [PMID: 38380675] [DOI: 10.1039/d3sm01593a]
Abstract
A Siamese neural network, a deep learning technique, was applied to investigate phase transitions based on polarising microscopic textures of liquid crystal phases: the antiferroelectric smectic CA* phase and its glass, the smectic I phase and its glass, and the smectic G phase and its glass. The glass transition is an example of a subtle transition without significant structural changes, where textures above and below the glass transition temperature are similar. The Siamese neural network could nevertheless distinguish textures of the chosen liquid crystal phases from the glass of that phase. This publication provides details of the Siamese neural network, and its implementation based on three different convolutional neural networks has been tested.
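The defining property of a Siamese network is that both inputs pass through the same embedding branch (shared weights) and are then compared by a distance in embedding space. This can be sketched independently of the CNN backbones used in the paper; below, a toy tanh-linear map stands in for the convolutional branch, and the threshold is an illustrative assumption.

```python
import numpy as np

def embed(x, w):
    """Shared embedding branch: both inputs go through the *same*
    weights w, which is what makes the network Siamese. A tanh-linear
    map stands in for the CNN backbone here."""
    return np.tanh(x @ w)

def same_phase(x1, x2, w, threshold=1.0):
    """Declare two texture feature vectors the same phase if their
    embeddings are closer than the decision threshold."""
    d = np.linalg.norm(embed(x1, w) - embed(x2, w))
    return d < threshold
```

Training (not shown) would adjust `w` with a contrastive or triplet loss so that textures of the same phase land close together and phase/glass pairs land far apart.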
Affiliation(s)
- Aleksandra Deptuch
- Institute of Nuclear Physics Polish Academy of Sciences, PL-31342 Krakow, Poland.
- Magdalena Urbańska
- Institute of Chemistry, Military University of Technology, PL-00908 Warsaw, Poland
- Ewa Juszyńska-Gałązka
- Institute of Nuclear Physics Polish Academy of Sciences, PL-31342 Krakow, Poland.
- Research Centre for Thermal and Entropic Science, Graduate School of Science, Osaka University, Osaka 565-0871, Japan
66
Davidson SJ, Saggese T, Krajňáková J. Deep learning for automated segmentation and counting of hypocotyl and cotyledon regions in mature Pinus radiata D. Don. somatic embryo images. Front Plant Sci 2024; 15:1322920. [PMID: 38495377] [PMCID: PMC10940415] [DOI: 10.3389/fpls.2024.1322920] [Received: 10/17/2023] [Accepted: 02/12/2024]
Abstract
In commercial forestry and large-scale plant propagation, the utilization of artificial intelligence techniques for automated somatic embryo analysis has emerged as a highly valuable tool. Notably, image segmentation plays a key role in the automated assessment of mature somatic embryos. However, to date, the application of Convolutional Neural Networks (CNNs) for segmentation of mature somatic embryos remains unexplored. In this study, we present a novel application of CNNs for delineating mature somatic conifer embryos from background and residual proliferating embryogenic tissue and for differentiating various morphological regions within the embryos. A semantic segmentation CNN was trained to assign pixels to cotyledon, hypocotyl, and background regions, while an instance segmentation network was trained to detect individual cotyledons for automated counting. The main dataset comprised 275 high-resolution microscopic images of mature Pinus radiata somatic embryos, with 42 images reserved for the testing and validation sets. The evaluation of different segmentation methods revealed that semantic segmentation achieved the highest performance averaged across classes, with F1 scores of 0.929 and 0.932 and IoU scores of 0.867 and 0.872 for the cotyledon and hypocotyl regions, respectively. The instance segmentation approach accurately detected and counted cotyledons, as indicated by a mean squared error (MSE) of 0.79 and a mean absolute error (MAE) of 0.60. The findings highlight the efficacy of neural network-based methods in accurately segmenting somatic embryos and delineating individual morphological parts, providing additional information compared to previous segmentation techniques. This opens avenues for further analysis, including quantification of morphological characteristics in each region, enabling the identification of features of desirable embryos in large-scale production systems. These advancements contribute to the improvement of automated somatic embryogenesis systems, facilitating efficient and reliable plant propagation for commercial forestry applications.
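The evaluation metrics reported above (IoU and F1 per region, MSE and MAE on cotyledon counts) are standard and reduce to a few lines of NumPy. The binary toy masks and counts below are illustrative, not data from the study.

```python
import numpy as np

def iou_and_f1(pred, target):
    """Per-class overlap metrics for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union if union else 1.0
    denom = pred.sum() + target.sum()
    f1 = 2 * inter / denom if denom else 1.0   # identical to the Dice coefficient
    return iou, f1

def count_errors(pred_counts, true_counts):
    """MSE and MAE between predicted and true cotyledon counts."""
    err = np.asarray(pred_counts, float) - np.asarray(true_counts, float)
    return (err ** 2).mean(), np.abs(err).mean()
```

These functions would be applied per image and averaged to reproduce the kind of summary figures quoted in the abstract.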
Affiliation(s)
- Sam J. Davidson
- Data and Geospatial Intelligence, New Zealand Forest Research Institute (Scion), Christchurch, New Zealand
- Taryn Saggese
- Forest Genetics and Biotechnology, New Zealand Forest Research Institute (Scion), Rotorua, New Zealand
- Jana Krajňáková
- Forest Genetics and Biotechnology, New Zealand Forest Research Institute (Scion), Rotorua, New Zealand
67
Hartmann T, Passauer J, Hartmann J, Schmidberger L, Kneilling M, Volc S. Basic principles of artificial intelligence in dermatology explained using melanoma. J Dtsch Dermatol Ges 2024; 22:339-347. [PMID: 38361141] [DOI: 10.1111/ddg.15322] [Received: 03/27/2023] [Accepted: 11/04/2023]
Abstract
The use of artificial intelligence (AI) continues to establish itself in the most diverse areas of medicine at an increasingly fast pace. Nevertheless, many healthcare professionals lack the basic technical understanding of how this technology works, which severely limits its application in clinical settings and research. Thus, we would like to discuss the functioning and classification of AI using melanoma as an example in this review to build an understanding of the technology behind AI. For this purpose, elaborate illustrations are used that quickly reveal the technology involved. Previous reviews tend to focus on the potential applications of AI, thereby missing the opportunity to develop a deeper understanding of the subject matter that is so important for clinical application. Malignant melanoma has become a significant burden for healthcare systems. If discovered early, a better prognosis can be expected, which is why skin cancer screening has become increasingly popular and is supported by health insurance. The number of experts remains finite, reducing their availability and leading to longer waiting times. Therefore, innovative ideas need to be implemented to provide the necessary care. Thus, machine learning offers the ability to recognize melanomas from images at a level comparable to experienced dermatologists under optimized conditions.
Affiliation(s)
- Tim Hartmann
- Department of Dermatology, University Hospital Tübingen, Tübingen, Germany
- Johannes Passauer
- Department of Dermatology, University Hospital Tübingen, Tübingen, Germany
- Laura Schmidberger
- Department of Dermatology, University Hospital Tübingen, Tübingen, Germany
- Manfred Kneilling
- Department of Dermatology, University Hospital Tübingen, Tübingen, Germany
- Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Eberhard Karls University, Tübingen, Germany
- Cluster of Excellence iFIT (EXC 2180) "Image-Guided and Functionally Instructed Tumor Therapies", Eberhard Karls University, Tübingen, Germany
- Sebastian Volc
- Department of Dermatology, University Hospital Tübingen, Tübingen, Germany
68
Hartmann T, Passauer J, Hartmann J, Schmidberger L, Kneilling M, Volc S. Grundprinzipien der künstlichen Intelligenz in der Dermatologie erklärt am Beispiel des Melanoms. J Dtsch Dermatol Ges 2024; 22:339-349. [PMID: 38450927] [DOI: 10.1111/ddg.15322_g] [Received: 03/27/2023] [Accepted: 11/04/2023]
Abstract
The use of artificial intelligence (AI) is becoming established ever more rapidly across the most diverse areas of medicine. Nevertheless, many medical colleagues lack the basic technical understanding of how this technology works, which severely limits its application in clinical practice and research. In this review, we therefore discuss the functioning and classification of AI using melanoma as an example, in order to build an understanding of the technology behind AI. For this purpose, detailed illustrations are used that explain the technology quickly. Previous reviews tend to focus on the potential applications of AI and miss the opportunity to develop the deeper understanding of the subject matter that is so important for clinical application. Malignant melanoma has become a considerable burden for healthcare systems. If discovered early, a better prognosis can be expected, which is why skin cancer screening is becoming increasingly popular and is supported by health insurance funds. However, the number of specialists is limited, which restricts their availability and leads to longer waiting times. Innovative ideas must therefore be implemented to ensure the necessary care. Machine learning offers the ability to recognize melanomas in images at a level comparable, under optimized conditions, to that of experienced dermatologists.
Affiliation(s)
- Tim Hartmann
- Hautklinik, Universitätsklinik, Eberhard Karls Universität, Tübingen
- Johannes Passauer
- Hautklinik, Universitätsklinik, Eberhard Karls Universität, Tübingen
- Manfred Kneilling
- Hautklinik, Universitätsklinik, Eberhard Karls Universität, Tübingen
- Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Eberhard Karls University, Tübingen
- Cluster of Excellence iFIT (EXC 2180) "Image-Guided and Functionally Instructed Tumor Therapies", Eberhard Karls Universität, Tübingen
- Sebastian Volc
- Hautklinik, Universitätsklinik, Eberhard Karls Universität, Tübingen
69
Vega F, Addeh A, Ganesh A, Smith EE, MacDonald ME. Image Translation for Estimating Two-Dimensional Axial Amyloid-Beta PET From Structural MRI. J Magn Reson Imaging 2024; 59:1021-1031. [PMID: 37921361] [DOI: 10.1002/jmri.29070] [Received: 03/01/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023]
Abstract
BACKGROUND: Amyloid-beta and brain atrophy are hallmarks of Alzheimer's disease that can be targeted with positron emission tomography (PET) and MRI, respectively. MRI is cheaper, less invasive, and more available than PET. There is a known relationship between amyloid-beta and brain atrophy, meaning PET images could be inferred from MRI.
PURPOSE: To build an image translation model using a conditional generative adversarial network able to synthesize amyloid-beta PET images from structural MRI.
STUDY TYPE: Retrospective.
POPULATION: Eight hundred eighty-two adults (348 males/534 females) with different stages of cognitive decline (control, mild cognitive impairment, moderate cognitive impairment, and severe cognitive impairment); 552 subjects for model training and 331 for testing (80%:20%).
FIELD STRENGTH/SEQUENCE: 3 T, T1-weighted structural (T1w).
ASSESSMENT: The testing cohort was used to evaluate the performance of the model using the Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR), comparing the likeness of the synthetic PET images created from structural MRI with the true PET images. SSIM was computed over the whole image to include the luminance, contrast, and structural similarity components. Experienced observers reviewed the images for quality and performance and tried to determine whether they could tell the difference between real and synthetic images.
STATISTICAL TESTS: Pixel-wise Pearson correlation was significant, with an R2 greater than 0.96 in example images. From blinded readings, a Pearson chi-squared test showed no significant difference between the real and synthetic images for the observers (P = 0.68).
RESULTS: There was a high degree of likeness across the evaluation set, with a mean SSIM = 0.905 and PSNR = 2.685. The two observers were not able to determine the difference between the real and synthetic images, with accuracies of 54% and 46%, respectively.
CONCLUSION: Amyloid-beta PET images can be synthesized from structural MRI with a high degree of similarity to the real PET images.
EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 1.
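The two similarity metrics used to score the synthetic PET images reduce to a few lines of NumPy. Note the simplification: this is a single-window (global) SSIM, whereas implementations used in practice average SSIM over local windows; the formulas themselves are the standard ones.

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((x - y) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over the whole image: combines the luminance,
    contrast, and structure terms with the usual stabilizers."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score SSIM = 1 and infinite PSNR; increasing noise lowers both, which is the behavior exploited when ranking synthetic against true PET slices.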
Affiliation(s)
- Fernando Vega
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Abdoljalil Addeh
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Aravind Ganesh
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Eric E Smith
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- M Ethan MacDonald
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
70
Chhillar I, Singh A. A feature engineering-based machine learning technique to detect and classify lung and colon cancer from histopathological images. Med Biol Eng Comput 2024; 62:913-924. [PMID: 38091162] [DOI: 10.1007/s11517-023-02984-y] [Received: 07/25/2023] [Accepted: 11/29/2023]
Abstract
Globally, lung and colon cancers are among the most prevalent and lethal tumors. Early cancer identification is essential to increase the likelihood of survival. Histopathological images are considered an appropriate tool for diagnosing cancer, which is tedious and error-prone if done manually. Recently, machine learning methods based on feature engineering have gained prominence in automatic histopathological image classification. Furthermore, these methods are more interpretable than deep learning, which operates in a "black box" manner; in the medical profession, the interpretability of a technique is critical to gaining the trust of end users. In view of the above, this work aims to create an accurate and interpretable machine learning technique for the automated classification of lung and colon cancers from histopathology images. In the proposed approach, following the preprocessing steps, texture and color features are retrieved using the Haralick and color histogram feature extraction algorithms, respectively, and the obtained features are concatenated to form a single combined feature set. The three feature sets (texture, color, and combined features) are passed to the Light Gradient Boosting Machine (LightGBM) classifier for classification, and performance is evaluated on the LC25000 dataset using hold-out and stratified 10-fold cross-validation (stratified 10-FCV) techniques. On the hold-out test set, LightGBM with texture, color, and combined features classifies the lung and colon cancer images with 97.72%, 99.92%, and 100% accuracy, respectively. Stratified 10-fold cross-validation likewise showed that LightGBM's combined and color features performed well, with an excellent mean auc_mu score and a low mean multi_logloss value. Thus, the proposed technique can help histologists detect and classify lung and colon histopathology images more efficiently, effectively, and economically, resulting in greater productivity.
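As an illustration of the feature-engineering side, here is a minimal NumPy sketch: a per-channel color histogram plus one Haralick-style texture statistic (the contrast of a gray-level co-occurrence matrix), concatenated into a single feature vector. The full 13-feature Haralick set and the LightGBM classifier from the paper are not reproduced; the bin and level counts below are illustrative choices.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel intensity histogram, each channel normalized to sum to 1."""
    feats = []
    for c in range(img.shape[-1]):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)

def glcm_contrast(gray, levels=8):
    """One Haralick-style feature: contrast of the horizontal
    gray-level co-occurrence matrix (GLCM)."""
    q = (gray.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                       # count horizontal neighbor pairs
    glcm /= glcm.sum()
    idx = np.arange(levels)
    return float((glcm * (idx[:, None] - idx[None, :]) ** 2).sum())

def combined_features(img):
    """Concatenate color and texture features into one vector."""
    gray = img.mean(axis=-1)
    return np.concatenate([color_histogram(img), [glcm_contrast(gray)]])
```

The resulting vectors would be stacked into a design matrix and fed to a gradient-boosting classifier such as LightGBM, as described in the paper.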
Affiliation(s)
- Indu Chhillar
- Department of Computer Science and Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Haryana, India.
- Ajmer Singh
- Department of Computer Science and Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Haryana, India
71
Tandon R, Agrawal S, Rathore NPS, Mishra AK, Jain SK. A systematic review on deep learning-based automated cancer diagnosis models. J Cell Mol Med 2024; 28:e18144. [PMID: 38426930] [PMCID: PMC10906380] [DOI: 10.1111/jcmm.18144] [Received: 06/28/2023] [Revised: 12/08/2023] [Accepted: 01/16/2024]
Abstract
Deep learning is gaining importance due to its wide range of applications. Many researchers have utilized deep learning (DL) models for the automated diagnosis of cancer patients, and this paper provides a systematic review of such models. Initially, various DL models for cancer diagnosis are presented. Five major categories of cancer (breast, lung, liver, brain, and cervical cancer) are considered, as these categories have a very high incidence and a high mortality rate. A comparative analysis of different types of DL models for the diagnosis of cancer at early stages is drawn, considering research articles from 2016 to 2022. After this comprehensive comparative analysis, it is found that most researchers achieved appreciable accuracy with convolutional neural network models, typically utilizing pretrained models for the automated diagnosis of cancer patients. Various shortcomings of the existing DL-based automated cancer diagnosis models are also presented. Finally, future directions are discussed to facilitate further research on the automated diagnosis of cancer patients.
Affiliation(s)
- Abhinava K. Mishra
- Molecular, Cellular and Developmental Biology Department, University of California Santa Barbara, Santa Barbara, California, USA
72
Bharadwaj UU, Chin CT, Majumdar S. Practical Applications of Artificial Intelligence in Spine Imaging: A Review. Radiol Clin North Am 2024; 62:355-370. [PMID: 38272627] [DOI: 10.1016/j.rcl.2023.10.005]
Abstract
Artificial intelligence (AI), a transformative technology with unprecedented potential in medical imaging, can be applied to various spinal pathologies. AI-based approaches may improve imaging efficiency, diagnostic accuracy, and interpretation, which is essential for positive patient outcomes. This review explores AI algorithms, techniques, and applications in spine imaging, highlighting diagnostic impact and challenges, with future directions for integrating AI into the spine imaging workflow.
Affiliation(s)
- Upasana Upadhyay Bharadwaj
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
- Cynthia T Chin
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Avenue, Box 0628, San Francisco, CA 94143, USA.
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
73
Waldner S, Wendelspiess E, Detampel P, Schlepütz CM, Huwyler J, Puchkov M. Advanced analysis of disintegrating pharmaceutical compacts using deep learning-based segmentation of time-resolved micro-tomography images. Heliyon 2024; 10:e26025. [PMID: 38384517] [PMCID: PMC10878950] [DOI: 10.1016/j.heliyon.2024.e26025]
Abstract
The mechanism governing pharmaceutical tablet disintegration is far from fully understood. Despite the importance of controlling a formulation's disintegration process to maximize the active pharmaceutical ingredient's bioavailability and ensure predictable and consistent release profiles, the current understanding of the process is based on indirect or superficial measurements. Formulation science could, therefore, additionally deepen the understanding of the fundamental physical principles governing disintegration based on direct observations of the process. We aim to help bridge the gap by generating a series of time-resolved X-ray micro-computed tomography (μCT) images capturing volumetric images of a broad range of mini-tablet formulations undergoing disintegration. Automated image segmentation was a prerequisite to overcoming the challenges of analyzing multiple time series of heterogeneous tomographic images at high magnification. We devised and trained a convolutional neural network (CNN) based on the U-Net architecture for autonomous, rapid, and consistent image segmentation. We created our own μCT data reconstruction pipeline and parameterized it to deliver image quality optimal for our CNN-based segmentation. Our approach enabled us to visualize the internal microstructures of the tablets during disintegration and to extract parameters of disintegration kinetics from the time-resolved data. We determine by factor analysis the influence of the different formulation components on the disintegration process in terms of both qualitative and quantitative experimental responses. We relate our findings to known formulation component properties and established experimental results. Our direct imaging approach, enabled by deep learning-based image processing, delivers new insights into the disintegration mechanism of pharmaceutical tablets.
Affiliation(s)
- Samuel Waldner
- Department of Pharmaceutical Sciences, Division of Pharmaceutical Technology, University of Basel, Klingelberstrasse 50, 4056, Basel, Switzerland
- Erwin Wendelspiess
- Department of Pharmaceutical Sciences, Division of Pharmaceutical Technology, University of Basel, Klingelberstrasse 50, 4056, Basel, Switzerland
- Pascal Detampel
- Department of Pharmaceutical Sciences, Division of Pharmaceutical Technology, University of Basel, Klingelberstrasse 50, 4056, Basel, Switzerland
- Jörg Huwyler
- Department of Pharmaceutical Sciences, Division of Pharmaceutical Technology, University of Basel, Klingelberstrasse 50, 4056, Basel, Switzerland
- Maxim Puchkov
- Department of Pharmaceutical Sciences, Division of Pharmaceutical Technology, University of Basel, Klingelberstrasse 50, 4056, Basel, Switzerland
74
Abhisheka B, Biswas SK, Purkayastha B. HBMD-Net: Feature Fusion Based Breast Cancer Classification with Class Imbalance Resolution. J Imaging Inform Med 2024. [PMID: 38409609] [DOI: 10.1007/s10278-024-01046-5]
Abstract
Breast cancer, a widespread global disease, represents a significant threat to women's health and lives, ranking among the most dangerous malignant tumors they face. Many researchers have proposed computer-aided diagnosis systems for classifying breast cancer. The majority of these approaches rely primarily on deep learning (DL) methods, which are not entirely reliable on their own: they overlook the necessity of incorporating both local and global information for precise tumor detection, even though such subtle nuances are crucial for accurate breast cancer classification. In addition, only a limited number of breast cancer datasets are publicly available, and those that exist tend to be imbalanced. Therefore, this paper presents the hybrid breast mass detection network (HBMD-Net) to address two critical challenges: class imbalance, and the fact that relying solely on either global or local features falls short of precise tumor classification. To overcome class imbalance, HBMD-Net incorporates the borderline synthetic minority over-sampling technique (BSMOTE). Simultaneously, it employs a feature fusion approach: ResNet50 extracts deep features that provide global information, while handcrafted features derived with the histogram of oriented gradients (HOG) provide local information. ROI segmentation is also applied to avoid misclassifications. This integrated strategy substantially enhances breast cancer classification performance. Moreover, the proposed method integrates the block-matching and 3D (BM3D) denoising filter to effectively eliminate multiplicative noise, which further improves system performance. The proposed HBMD-Net is evaluated on two breast ultrasound (BUS) datasets, BUSI and UDIAT, where it achieves satisfactory accuracies of 99.14% and 94.49%, respectively.
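The class-imbalance side of this approach can be illustrated with a minimal borderline-style oversampling sketch in NumPy. This is a simplified toy version under assumed function and parameter names, operating directly on a fused feature matrix; it is not the authors' implementation (a production pipeline would use a library such as imbalanced-learn):

```python
import numpy as np

def borderline_smote(X, y, minority=1, k=5, n_new=None, rng=None):
    """Minimal Borderline-SMOTE-style sketch: oversample only 'danger'
    minority points, i.e. those whose k-neighborhood is majority-dominated
    but not entirely majority."""
    rng = np.random.default_rng(rng)
    X_min = X[y == minority]
    n_new = n_new or int(np.sum(y != minority) - len(X_min))
    # pairwise distances from each minority point to every sample
    d = np.linalg.norm(X_min[:, None, :] - X[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]          # k nearest (column 0 is self)
    maj_frac = (y[nn] != minority).mean(axis=1)
    danger = X_min[(maj_frac >= 0.5) & (maj_frac < 1.0)]
    if len(danger) == 0:                             # fall back to all minority
        danger = X_min
    # synthesize points by interpolating danger points toward minority samples
    a = danger[rng.integers(0, len(danger), n_new)]
    b = X_min[rng.integers(0, len(X_min), n_new)]
    X_syn = a + rng.random((n_new, 1)) * (b - a)
    return np.vstack([X, X_syn]), np.concatenate([y, np.full(n_new, minority)])
```

After oversampling, the balanced feature matrix would be fed to the downstream classifier exactly as the original samples are.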
Affiliation(s)
- Barsha Abhisheka
- Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India.
- Saroj Kr Biswas
- Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India
75
Wang X, Zhang S, Zhang T. Crop insect pest detection based on dilated multi-scale attention U-Net. Plant Methods 2024; 20:34. [PMID: 38409023] [PMCID: PMC10898010] [DOI: 10.1186/s13007-024-01163-w]
Abstract
BACKGROUND Crop pests seriously affect the yield and quality of crops, so accurately and rapidly detecting and segmenting insect pests in crop leaves is a prerequisite for effective pest control. METHODS To address the detection of irregular, multi-scale insect pests in the field, a dilated multi-scale attention U-Net (DMSAU-Net) model is constructed for crop insect pest detection. In its encoder, dilated Inception modules replace the convolution layers of U-Net to extract multi-scale features from insect pest images. An attention module is added to its decoder to focus on the edges of the insect pests. RESULTS Experiments on the IP102 crop insect pest image dataset achieved a detection accuracy of 92.16% and an IoU of 91.2%, which are 3.3% and 1.5% higher, respectively, than those of MSR-RCNN. CONCLUSION The results indicate that the proposed method is effective as a new insect pest detection approach: the dilated Inception modules improve the accuracy of the model, and the attention module reduces the noise introduced by upsampling and accelerates model convergence. The proposed method can therefore be applied in practical crop insect pest monitoring systems.
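The dilated multi-scale idea behind the encoder can be sketched in plain NumPy: parallel same-padded convolutions with increasing dilation rates see increasingly large receptive fields at no extra parameter cost, and their outputs are stacked channel-wise. This single-channel toy with random illustrative kernels is an assumption-laden sketch, not the published DMSAU-Net code:

```python
import numpy as np

def dilated_conv2d(x, w, dilation=1):
    """'Same'-padded 2D convolution of a single-channel image x with a
    dilated k x k kernel w (loop form, for clarity rather than speed)."""
    k = w.shape[0]
    eff = dilation * (k - 1) + 1            # effective receptive field size
    pad = eff // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            out += w[i, j] * xp[i * dilation : i * dilation + x.shape[0],
                                j * dilation : j * dilation + x.shape[1]]
    return out

def dilated_inception(x, rates=(1, 2, 4), rng=None):
    """Multi-scale block: parallel dilated 3x3 convs, channel-stacked."""
    rng = np.random.default_rng(rng)
    return np.stack([dilated_conv2d(x, rng.normal(size=(3, 3)), r)
                     for r in rates])
```

Increasing the dilation rate widens the receptive field without downsampling, which is what lets the block capture pests at several scales simultaneously.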
Affiliation(s)
- Xuqi Wang
- School of Electronic Information, Xijing University, Xi'an, 710123, China
- Shanwen Zhang
- School of Electronic Information, Xijing University, Xi'an, 710123, China.
- Ting Zhang
- School of Electronic Information, Xijing University, Xi'an, 710123, China
76
Zhao Y, Coppola A, Karamchandani U, Amiras D, Gupte CM. Artificial intelligence applied to magnetic resonance imaging reliably detects the presence, but not the location, of meniscus tears: a systematic review and meta-analysis. Eur Radiol 2024. [PMID: 38386028] [DOI: 10.1007/s00330-024-10625-7]
Abstract
OBJECTIVES To review and compare the accuracy of convolutional neural networks (CNN) for the diagnosis of meniscal tears in the current literature and analyze the decision-making processes utilized by these CNN algorithms. MATERIALS AND METHODS PubMed, MEDLINE, EMBASE, and Cochrane databases up to December 2022 were searched in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) statement. Risk-of-bias analysis was performed for all identified articles. Predictive performance values, including sensitivity and specificity, were extracted for quantitative analysis. The meta-analysis was divided between AI prediction models identifying the presence of meniscus tears and those locating the tears. RESULTS Eleven articles were included in the final review, with a total of 13,467 patients and 57,551 images. Heterogeneity was statistically significant and large for the sensitivity of the tear-identification analysis (I2 = 79%). A higher level of accuracy was observed in identifying the presence of a meniscal tear than in locating tears in specific regions of the meniscus (AUC, 0.939 vs 0.905). Pooled sensitivity and specificity were 0.87 (95% confidence interval (CI) 0.80-0.91) and 0.89 (95% CI 0.83-0.93) for meniscus tear identification and 0.88 (95% CI 0.82-0.91) and 0.84 (95% CI 0.81-0.85) for locating the tears. CONCLUSIONS AI prediction models achieved favorable performance in the diagnosis, but not the location, of meniscus tears. Further studies on the clinical utility of deep learning should include standardized reporting, external validation, and full reports of the predictive performance of these models, with a view to localizing tears more accurately. CLINICAL RELEVANCE STATEMENT Meniscus tears are hard to diagnose on knee magnetic resonance images. AI prediction models may play an important role in improving the diagnostic accuracy of clinicians and radiologists.
KEY POINTS • Artificial intelligence (AI) provides great potential in improving the diagnosis of meniscus tears. • The pooled diagnostic performance for artificial intelligence (AI) in identifying meniscus tears was better (sensitivity 87%, specificity 89%) than locating the tears (sensitivity 88%, specificity 84%). • AI is good at confirming the diagnosis of meniscus tears, but future work is required to guide the management of the disease.
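The kind of pooled sensitivity and heterogeneity (I2) figures reported above can be illustrated with a fixed-effect logit-scale sketch. The review itself may have used a different model (e.g. a bivariate or random-effects approach); the function name, the continuity correction, and the per-study inputs below are assumptions for illustration only:

```python
import numpy as np

def pool_logit(tp, fn):
    """Inverse-variance pooled sensitivity on the logit scale, plus I^2.
    tp/fn: per-study true-positive and false-negative counts."""
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    sens = (tp + 0.5) / (tp + fn + 1.0)          # continuity-corrected sensitivity
    theta = np.log(sens / (1.0 - sens))          # per-study logit
    var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)    # approx. logit-scale variance
    w = 1.0 / var
    pooled = np.sum(w * theta) / np.sum(w)       # fixed-effect pooled logit
    Q = np.sum(w * (theta - pooled) ** 2)        # Cochran's Q statistic
    df = len(tp) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    return 1.0 / (1.0 + np.exp(-pooled)), I2
```

The pooled logit is back-transformed to a proportion at the end, which is why the result always lies inside the range of the per-study sensitivities.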
Affiliation(s)
- Yi Zhao
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK.
- Andrew Coppola
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Dimitri Amiras
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Imperial College London NHS Trust, London, UK
- Chinmay M Gupte
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Imperial College London NHS Trust, London, UK
77
Xian PF, Po LM, Xiong JJ, Zhao YZ, Yu WY, Cheung KW. Mask-Pyramid Network: A Novel Panoptic Segmentation Method. Sensors (Basel) 2024; 24:1411. [PMID: 38474944] [DOI: 10.3390/s24051411]
Abstract
In this paper, we introduce a novel panoptic segmentation method called the Mask-Pyramid Network. Existing Mask RCNN-based methods first generate a large number of box proposals and then filter them at each feature level, which requires substantial computational resources, even though most of the box proposals are suppressed and discarded during Non-Maximum Suppression. Additionally, for panoptic segmentation, properly fusing the semantic segmentation results with the Mask RCNN-produced instance segmentation results is itself a challenge. To address these issues, we propose a new mask pyramid mechanism that distinguishes objects and generates far fewer proposals by referring to already-segmented masks, thereby reducing computational cost. The Mask-Pyramid Network generates object proposals and predicts masks from larger to smaller sizes. It records the pixel area occupied by the larger object masks and then generates proposals only in the unoccupied areas. Each object mask is represented as an H × W × 1 logit map, which matches the format of the semantic segmentation logits. By applying softmax to the concatenated semantic and instance segmentation logits, the two sets of results can be fused easily and naturally. We empirically demonstrate that the proposed Mask-Pyramid Network achieves competitive accuracy on the Cityscapes and COCO datasets, and we further demonstrate its computational efficiency.
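The logit-concatenation fusion step described above can be sketched in a few lines of NumPy: stack the semantic and per-instance logit maps along the channel axis, take a per-pixel softmax, and let the winning channel decide between a semantic class and an instance. This is a toy illustration with assumed array shapes, not the authors' code:

```python
import numpy as np

def fuse_panoptic(sem_logits, inst_logits):
    """Concatenate semantic (C_s, H, W) and instance (C_i, H, W) logit maps
    and take a joint per-pixel softmax; the argmax channel is the fused label."""
    logits = np.concatenate([sem_logits, inst_logits], axis=0)
    z = logits - logits.max(axis=0, keepdims=True)   # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    return p.argmax(axis=0), p                        # winning channel per pixel
```

Channels 0..C_s-1 of the result correspond to semantic classes and the remaining channels to individual instances, so one argmax yields a panoptic labeling directly.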
Affiliation(s)
- Peng-Fei Xian
- Department of Electronic Engineering, City University of Hong Kong, Hong Kong
- Lai-Man Po
- Department of Electronic Engineering, City University of Hong Kong, Hong Kong
- Jing-Jing Xiong
- Department of Electronic Engineering, City University of Hong Kong, Hong Kong
- Yu-Zhi Zhao
- Department of Electronic Engineering, City University of Hong Kong, Hong Kong
- Wing-Yin Yu
- Department of Electronic Engineering, City University of Hong Kong, Hong Kong
- Kwok-Wai Cheung
- School of Communication, The Hang Seng University of Hong Kong, Hong Kong
78
Caznok Silveira AC, Antunes ASLM, Athié MCP, da Silva BF, Ribeiro dos Santos JV, Canateli C, Fontoura MA, Pinto A, Pimentel-Silva LR, Avansini SH, de Carvalho M. Between neurons and networks: investigating mesoscale brain connectivity in neurological and psychiatric disorders. Front Neurosci 2024; 18:1340345. [PMID: 38445254] [PMCID: PMC10912403] [DOI: 10.3389/fnins.2024.1340345]
Abstract
The study of brain connectivity has been a cornerstone in understanding the complexities of neurological and psychiatric disorders. It has provided invaluable insights into the functional architecture of the brain and how it is perturbed in disorders. However, a persistent challenge has been achieving the proper spatial resolution, and developing computational algorithms to address biological questions at the multi-cellular level, a scale often referred to as the mesoscale. Historically, neuroimaging studies of brain connectivity have predominantly focused on the macroscale, providing insights into inter-regional brain connections but often falling short of resolving the intricacies of neural circuitry at the cellular or mesoscale level. This limitation has hindered our ability to fully comprehend the underlying mechanisms of neurological and psychiatric disorders and to develop targeted interventions. In light of this issue, our review manuscript seeks to bridge this critical gap by delving into the domain of mesoscale neuroimaging. We aim to provide a comprehensive overview of conditions affected by aberrant neural connections, image acquisition techniques, feature extraction, and data analysis methods that are specifically tailored to the mesoscale. We further delineate the potential of brain connectivity research to elucidate complex biological questions, with a particular focus on schizophrenia and epilepsy. This review encompasses topics such as dendritic spine quantification, single neuron morphology, and brain region connectivity. We aim to showcase the applicability and significance of mesoscale neuroimaging techniques in the field of neuroscience, highlighting their potential for gaining insights into the complexities of neurological and psychiatric disorders.
Affiliation(s)
- Ana Clara Caznok Silveira
- National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
- Maria Carolina Pedro Athié
- National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Bárbara Filomena da Silva
- National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Camila Canateli
- National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Marina Alves Fontoura
- National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Allan Pinto
- Brazilian Synchrotron Light Laboratory, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Simoni Helena Avansini
- National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Murilo de Carvalho
- National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Brazilian Synchrotron Light Laboratory, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
79
Wang Z, Zheng C, Han X, Chen W, Lu L. An Innovative and Efficient Diagnostic Prediction Flow for Head and Neck Cancer: A Deep Learning Approach for Multi-Modal Survival Analysis Prediction Based on Text and Multi-Center PET/CT Images. Diagnostics (Basel) 2024; 14:448. [PMID: 38396486] [PMCID: PMC10888043] [DOI: 10.3390/diagnostics14040448]
Abstract
Objective: To comprehensively capture intra-tumor heterogeneity in head and neck cancer (HNC) and maximize the use of valid information collected in the clinical field, we propose a novel multi-modal image-text fusion strategy aimed at improving prognosis. Method: We have developed a tailored diagnostic algorithm for HNC, leveraging a deep learning-based model that integrates both image and clinical text information. For the image fusion part, we used the cross-attention mechanism to fuse the image information between PET and CT, and for the fusion of text and image, we used the Q-former architecture to fuse the text and image information. We also improved the traditional prognostic model by introducing time as a variable in the construction of the model, and finally obtained the corresponding prognostic results. Result: We assessed the efficacy of our methodology through the compilation of a multicenter dataset, achieving commendable outcomes in multicenter validations. Notably, our results for metastasis-free survival (MFS), recurrence-free survival (RFS), overall survival (OS), and progression-free survival (PFS) were as follows: 0.796, 0.626, 0.641, and 0.691. Our results demonstrate a notable superiority over the utilization of CT and PET independently, and exceed the result derived without the clinical textual information. Conclusions: Our model not only validates the effectiveness of multi-modal fusion in aiding diagnosis, but also provides insights for optimizing survival analysis. The study underscores the potential of our approach in enhancing prognosis and contributing to the advancement of personalized medicine in HNC.
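The PET-to-CT cross-attention step can be sketched as single-head scaled dot-product attention in NumPy, where one modality's tokens act as queries and the other's as keys and values. Learned projection matrices and multi-head structure are omitted; this is a simplified illustration with assumed shapes, not the published model:

```python
import numpy as np

def cross_attention(q_feat, kv_feat):
    """Single-head cross-attention: query tokens (e.g. PET features, (Nq, d))
    attend over key/value tokens (e.g. CT features, (Nk, d))."""
    d_k = q_feat.shape[1]
    scores = q_feat @ kv_feat.T / np.sqrt(d_k)       # (Nq, Nk) similarities
    scores -= scores.max(axis=1, keepdims=True)      # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # rows sum to 1
    return attn @ kv_feat                            # (Nq, d) fused features
```

Because each output row is a convex combination of the key/value rows, the fused PET features are re-expressed in terms of the most similar CT features, which is the intuition behind fusing the two modalities this way.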
Affiliation(s)
- Zhaonian Wang
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Chundan Zheng
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Pazhou Lab, Guangzhou 510330, China
- Xu Han
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Pazhou Lab, Guangzhou 510330, China
- Wufan Chen
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Lijun Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Pazhou Lab, Guangzhou 510330, China
80
Li YL, Leu HB, Ting CH, Lim SS, Tsai TY, Wu CH, Chung IF, Liang KH. Predicting long-term time to cardiovascular incidents using myocardial perfusion imaging and deep convolutional neural networks. Sci Rep 2024; 14:3802. [PMID: 38360974] [PMCID: PMC10869727] [DOI: 10.1038/s41598-024-54139-0]
Abstract
Myocardial perfusion imaging (MPI) is a clinical tool which can assess the heart's perfusion status, thereby revealing impairments in patients' cardiac function. Within the MPI modality, the acquired three-dimensional signals are typically represented as a sequence of two-dimensional grayscale tomographic images. Here, we proposed an end-to-end survival training approach for processing grayscale MPI tomograms to generate a risk score which reflects subsequent time to cardiovascular incidents, including cardiovascular death, non-fatal myocardial infarction, and non-fatal ischemic stroke (collectively known as major adverse cardiovascular events, MACE) as well as congestive heart failure (CHF). We recruited a total of 1928 patients who had undergone MPI followed by coronary interventions. Among them, 80% (n = 1540) were randomly reserved for the training and fivefold cross-validation stage, while 20% (n = 388) were set aside for the testing stage. The end-to-end survival training converged well, generating effective AI models via fivefold cross-validation on the 1540 training patients. When a candidate model was evaluated on the independent images, it stratified patients into below-median-risk (n = 194) and above-median-risk (n = 194) groups whose survival curves differed significantly (P < 0.0001). We further stratified the above-median-risk group into quartile 3 and quartile 4 groups (n = 97 each); the three resulting patient strata, referred to as the low-, intermediate-, and high-risk groups, showed statistically significant differences. Notably, the 5-year cardiovascular incident rate was less than 5% in the low-risk group (accounting for 50% of all patients), while the rate was nearly 40% in the high-risk group (accounting for 25% of all patients).
Evaluation of patient subgroups revealed a stronger effect size in patients with three blocked arteries (hazard ratio [HR]: 18.377, 95% CI 3.719-90.801, p < 0.001), followed by those with two blocked vessels (HR 7.484, 95% CI 1.858-30.150; p = 0.005). Regarding stent placement, patients with a single stent displayed an HR of 4.410 (95% CI 1.399-13.904; p = 0.011) and patients with two stents an HR of 10.699 (95% CI 2.262-50.601; p = 0.003), escalating notably to an HR of 57.446 (95% CI 1.922-1717.207; p = 0.019) for patients with three or more stents, indicating a substantial relationship between disease severity and the AI's ability to predict subsequent cardiovascular incidents. The success of the MPI AI model in stratifying patients into subgroups with distinct times to cardiovascular incidents demonstrates the feasibility of the proposed end-to-end survival training approach.
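The median-split stratification and survival-curve comparison used above can be illustrated with a minimal Kaplan-Meier sketch in NumPy. Censored patients (event = 0) simply drop out of the at-risk count; function names and the two-arm setup are assumptions for illustration, not the authors' code:

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier survival estimate. time: follow-up times; event: 1 if an
    incident occurred, 0 if censored. Returns [(t, S(t)), ...]."""
    t, e = np.asarray(time, float), np.asarray(event, int)
    surv, curve = 1.0, []
    for ti in np.unique(t):
        d = np.sum((t == ti) & (e == 1))   # events at time ti
        n = np.sum(t >= ti)                # patients still at risk at ti
        surv *= 1.0 - d / n
        curve.append((ti, surv))
    return curve

def stratify_by_median(risk, time, event):
    """Split patients at the median AI risk score; return one KM curve per arm."""
    risk, time, event = map(np.asarray, (risk, time, event))
    lo = risk <= np.median(risk)
    return km_curve(time[lo], event[lo]), km_curve(time[~lo], event[~lo])
```

In practice the separation between the two arms would then be tested formally (e.g. with a log-rank test) rather than read off the curves.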
Affiliation(s)
- Yi-Lian Li
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Hsin-Bang Leu
- Department of Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- Chien-Hsin Ting
- Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- Su-Shen Lim
- Department of Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- Tsung-Ying Tsai
- Department of Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- Cheng-Hsueh Wu
- Department of Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
- I-Fang Chung
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei City, Taiwan.
- Kung-Hao Liang
- Department of Medical Research, Taipei Veterans General Hospital, Taipei City, Taiwan.
81
Abu-Nowar H, Sait A, Al-Hadhrami T, Al-Sarem M, Noman Qasem S. SENSES-ASD: a social-emotional nurturing and skill enhancement system for autism spectrum disorder. PeerJ Comput Sci 2024; 10:e1792. [PMID: 38435572] [PMCID: PMC10909167] [DOI: 10.7717/peerj-cs.1792]
Abstract
This article introduces the Social-Emotional Nurturing and Skill Enhancement System (SENSES-ASD) as an innovative method for assisting individuals with autism spectrum disorder (ASD). Leveraging deep learning technologies, specifically convolutional neural networks (CNN), our approach promotes facial emotion recognition, enhancing social interactions and communication. The methodology involves the use of the Xception CNN model trained on the FER-2013 dataset. The designed system accepts a variety of media inputs, successfully classifying and predicting seven primary emotional states. Results show that our system achieved a peak accuracy rate of 71% on the training dataset and 66% on the validation dataset. The novelty of our work lies in the intricate combination of deep learning methods specifically tailored for high-functioning autistic adults and the development of a user interface that caters to their unique cognitive and sensory sensitivities. This offers a novel perspective on utilising technological advances for ASD intervention, especially in the domain of emotion recognition.
Affiliation(s)
- Haya Abu-Nowar
- Computer Science Department, School of Science and Technology, Nottingham Trent University, Nottingham, United Kingdom
- Adeeb Sait
- Computer Science Department, School of Science and Technology, Nottingham Trent University, Nottingham, United Kingdom
- Tawfik Al-Hadhrami
- Computer Science Department, School of Science and Technology, Nottingham Trent University, Nottingham, United Kingdom
- Mohammed Al-Sarem
- College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- Sultan Noman Qasem
- Computer Science Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
82
Deebansok S, Deng J, Le Calvez E, Zhu Y, Crosnier O, Brousse T, Fontaine O. Capacitive tendency concept alongside supervised machine-learning toward classifying electrochemical behavior of battery and pseudocapacitor materials. Nat Commun 2024; 15:1133. [PMID: 38326356] [PMCID: PMC10850137] [DOI: 10.1038/s41467-024-45394-w]
Abstract
In recent decades, more than 100,000 scientific articles have been devoted to the development of electrode materials for supercapacitors and batteries. However, there is still intense debate surrounding the criteria for determining the electrochemical behavior involved in Faradaic reactions, as the issue is often complicated by the electrochemical signals produced by various electrode materials and their different physicochemical properties. The difficulty lies in the inability to determine, via simple binary classification, which electrode type (battery vs. pseudocapacitor) these materials belong to. To overcome this difficulty, we apply supervised machine learning for image classification to electrochemical shape analysis (over 5500 cyclic voltammetry curves and 2900 galvanostatic charge-discharge curves), with the predicted confidence percentage reflecting the shape trend of the curve; we define this descriptor as the "capacitive tendency". This predictor not only transcends the limitations of human-based classification but also provides statistical trends regarding electrochemical behavior. Of note, and of particular importance to the electrochemical energy storage community, which publishes over a hundred articles per week, we have created an online tool that makes it easy to categorize such data.
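As a classical, non-ML counterpart to the curve-shape classifier described here, the widely used b-value analysis fits the power law i = a·v^b between peak current and scan rate: b near 1.0 is a capacitive signature, b near 0.5 a diffusion-limited (battery-like) one. The sketch below is illustrative only, with an assumed decision threshold; it is not the paper's machine-learning method:

```python
import numpy as np

def b_value(scan_rates, peak_currents):
    """Fit i = a * v^b by least squares in log-log space; return exponent b."""
    b, _log_a = np.polyfit(np.log(scan_rates), np.log(peak_currents), 1)
    return b

def classify_behavior(b, threshold=0.75):
    """Heuristic cut between diffusion-limited (b ~ 0.5) and capacitive
    (b ~ 1.0) responses; the 0.75 threshold is an illustrative assumption."""
    return "capacitive" if b >= threshold else "battery-like"
```

Real materials often fall between the two limits, which is precisely the ambiguity the paper's "capacitive tendency" score is designed to quantify on a continuous scale.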
Affiliation(s)
- Siraprapha Deebansok
- Molecular Electrochemistry for Energy laboratory, VISTEC, Institute of Science and Technology, Rayong, 21210, Thailand
- Jie Deng
- Institute for Advanced Study & College of Food and Biological Engineering, Chengdu University, Chengdu, 610106, China
- Etienne Le Calvez
- Nantes Université, CNRS, Institut des Matériaux de Nantes Jean Rouxel, IMN, 44000, Nantes, France
- Réseau sur le Stockage Électrochimique de l'Énergie (RS2E), CNRS FR 3459, 33 rue Saint Leu, 80039, Amiens, France
- Yachao Zhu
- ICGM, Université de Montpellier, CNRS, 34293, Montpellier, France
- Olivier Crosnier
- Nantes Université, CNRS, Institut des Matériaux de Nantes Jean Rouxel, IMN, 44000, Nantes, France
- Réseau sur le Stockage Électrochimique de l'Énergie (RS2E), CNRS FR 3459, 33 rue Saint Leu, 80039, Amiens, France
| | - Thierry Brousse
- Nantes Université, CNRS, Institut des Matériaux de Nantes Jean Rouxel, IMN, 44000, Nantes, France
- Réseau sur le Stockage Électrochimique de l'Énergie (RS2E), CNRS FR 3459, 33 rue Saint Leu, 80039, Amiens, France
| | - Olivier Fontaine
- Molecular Electrochemistry for Energy laboratory, VISTEC, Institute of Science and Technology, Rayong, 21210, Thailand.
- Institut Universitaire de France, 75005, Paris, France.
| |
Collapse
|
83
|
Paul P. The Rise of Artificial Intelligence: Implications in Orthopedic Surgery. J Orthop Case Rep 2024; 14:1-4. [PMID: 38420225 PMCID: PMC10898706 DOI: 10.13107/jocr.2024.v14.i02.4194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Revised: 12/23/2023] [Indexed: 03/02/2024] Open
Abstract
Artificial intelligence (AI) is slowly making its way into all domains, and medicine is no exception. AI is already proving to be a promising tool in the health-care field. With respect to orthopedics, AI is already in use in diagnostics, such as fracture and tumor detection; in predictive algorithms that estimate mortality risk, duration of hospital stay, or complications such as implant loosening; and in real-time assessment of post-operative rehabilitation. AI could also be of use in surgical training, utilizing technologies such as virtual reality and augmented reality. However, clinicians should also be aware of the limitations of AI, as validation is necessary to avoid errors. This article aims to provide a description of AI and its subfields, its current applications in orthopedics, its limitations, and its future prospects.
Collapse
Affiliation(s)
- Prannoy Paul
- Institute of Advanced Orthopedics, M.O.S.C Medical College Hospital, Kolenchery, Ernakulam, Kerala, India
| |
Collapse
|
84
|
Qayyum SN. A comprehensive review of applications of artificial intelligence in echocardiography. Curr Probl Cardiol 2024; 49:102250. [PMID: 38043879 DOI: 10.1016/j.cpcardiol.2023.102250] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 11/28/2023] [Indexed: 12/05/2023]
Abstract
Echocardiography plays a crucial role in the diagnosis of cardiovascular diseases. Artificial intelligence (AI) has emerged as a high-precision tool to automate echocardiographic analysis. This review discusses AI algorithms that have been utilized at various steps of echocardiographic analysis, such as image acquisition, standard view classification, cardiac chamber segmentation, quantification of cardiac structure and function, and diagnostic support. The AI models under discussion demonstrated high accuracy, comparable to that of experts, in view classification, measurement of cardiac structure and function, and diagnosis of conditions such as cardiomyopathies. This review also discusses the potential benefits and value of AI in revolutionizing healthcare. It also explores limitations, such as the lack of large annotated datasets to train AI models and potential algorithmic biases, that make it challenging to translate the benefits of AI into wider clinical practice.
Collapse
Affiliation(s)
- Sardar Noman Qayyum
- Department of Cardiology, Bacha Khan Medical College, Mardan, KPK 23200, Pakistan.
| |
Collapse
|
85
|
Wang X, Chai Z, Li S, Liu Y, Li C, Jiang Y, Liu Q. CTISL: a dynamic stacking multi-class classification approach for identifying cell types from single-cell RNA-seq data. Bioinformatics 2024; 40:btae063. [PMID: 38317054 PMCID: PMC10873586 DOI: 10.1093/bioinformatics/btae063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2024] [Revised: 02/15/2024] [Accepted: 02/15/2024] [Indexed: 02/07/2024] Open
Abstract
MOTIVATION Effective identification of cell types is of critical importance in single-cell RNA-sequencing (scRNA-seq) data analysis. To date, many supervised machine learning-based predictors have been implemented to identify cell types from scRNA-seq datasets. Despite the technical advances of these state-of-the-art tools, most existing predictors are single classifiers, whose performance can still be significantly improved. It is therefore highly desirable to employ an ensemble learning strategy to develop more accurate computational models for robust and comprehensive identification of cell types in scRNA-seq datasets. RESULTS We propose a two-layer stacking model, termed CTISL (Cell Type Identification by Stacking ensemble Learning), which integrates multiple classifiers to identify cell types. In the first layer, given a reference scRNA-seq dataset with known cell types, CTISL dynamically combines multiple cell-type-specific classifiers (i.e. support-vector machine and logistic regression) as the base learners, whose outcomes serve as the input of a meta-classifier in the second layer. We conducted a total of 24 benchmarking experiments on 17 human and mouse scRNA-seq datasets to evaluate and compare the prediction performance of CTISL and other state-of-the-art predictors. The experimental results demonstrate that CTISL achieves superior or competitive performance compared to these state-of-the-art approaches. We anticipate that CTISL can serve as a useful and reliable tool for cost-effective identification of cell types from scRNA-seq datasets. AVAILABILITY AND IMPLEMENTATION The webserver and source code are freely available at http://bigdata.biocie.cn/CTISLweb/home and https://zenodo.org/records/10568906, respectively.
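The two-layer stacking scheme described above can be sketched in miniature. The toy single-feature "stump" base learners and accuracy-weighted meta-combination below are stand-ins for CTISL's SVM and logistic-regression base learners and its trained meta-classifier; they only illustrate the layering idea, not the published method:

```python
import math

class Stump:
    """Toy base learner: thresholds one feature and emits a pseudo-probability."""
    def __init__(self, feature):
        self.f = feature

    def fit(self, X, y):
        # Place the threshold midway between the per-class means of this feature.
        pos = [x[self.f] for x, t in zip(X, y) if t == 1]
        neg = [x[self.f] for x, t in zip(X, y) if t == 0]
        self.thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        return self

    def prob(self, x):
        # Squash signed distance from the threshold into (0, 1).
        return 1 / (1 + math.exp(-(x[self.f] - self.thr)))

class Stacker:
    """Layer 1: one stump per feature; layer 2: combination of their outputs."""
    def fit(self, X, y):
        self.base = [Stump(f).fit(X, y) for f in range(len(X[0]))]
        # Toy meta-learner: weight each base model by its training accuracy,
        # so uninformative or misoriented base learners are down-weighted.
        self.w = [sum((b.prob(x) > 0.5) == (t == 1) for x, t in zip(X, y)) / len(y)
                  for b in self.base]
        return self

    def predict(self, x):
        meta_in = [b.prob(x) for b in self.base]  # layer-1 outputs feed layer 2
        score = sum(w * p for w, p in zip(self.w, meta_in)) / sum(self.w)
        return 1 if score > 0.5 else 0

# Tiny training set: feature 0 is informative, feature 1 is anti-correlated,
# so its misoriented stump earns weight 0 and is effectively ignored.
X = [[0.0, 5.0], [1.0, 4.0], [5.0, 0.0], [6.0, 1.0]]
y = [0, 0, 1, 1]
model = Stacker().fit(X, y)
```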
Collapse
Affiliation(s)
- Xiao Wang
- Department of Software Engineering, College of Information Engineering, Northwest A&F University, Yangling 712100, China
| | - Ziyi Chai
- Department of Software Engineering, College of Information Engineering, Northwest A&F University, Yangling 712100, China
| | - Shaohua Li
- Department of Software Engineering, College of Information Engineering, Northwest A&F University, Yangling 712100, China
| | - Yan Liu
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
| | - Chen Li
- Department of Biochemistry and Molecular Biology, Monash University, Melbourne, VIC 3800, Australia
| | - Yu Jiang
- Department of Animal Genetics, Breeding and Reproduction, College of Animal Science and Technology, Northwest A&F University, Yangling 712100, China
| | - Quanzhong Liu
- Department of Software Engineering, College of Information Engineering, Northwest A&F University, Yangling 712100, China
- Shaanxi Engineering Research Center of Agricultural Information Intelligent Perception and Analysis, Northwest A&F University, Yangling 712100, China
| |
Collapse
|
86
|
Mun C, Ha H, Lee O, Cheon M. Enhancing AI-CDSS with U-AnoGAN: Tackling data imbalance. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 244:107954. [PMID: 38041995 DOI: 10.1016/j.cmpb.2023.107954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Revised: 11/12/2023] [Accepted: 11/25/2023] [Indexed: 12/04/2023]
Abstract
BACKGROUND AND OBJECTIVE Clinical Decision Support Systems (CDSS) have substantially evolved, aiding healthcare professionals in informed patient care decision-making. The integration of AI, encompassing machine learning and natural language processing, has notably enhanced the capabilities of CDSS. However, a significant challenge remains in addressing data imbalance and the black box nature of AI algorithms, particularly for rare diseases or underrepresented demographic groups. This study aims to propose a model, U-AnoGAN, designed to overcome these hurdles and augment the diagnostic accuracy of AI-integrated CDSS. METHODS The U-AnoGAN was trained using masks derived from normal data, focusing on the Covid-19 and pneumonia datasets. Anomaly scores were calculated to assess the model's performance compared to existing AnoGAN-related algorithms. The study also evaluated the model's interpretability through the visualization of abnormal regions. RESULTS The results indicated that U-AnoGAN surpassed its counterparts in performance and interpretability. It effectively addressed the data imbalance problem by necessitating only normal data and showcased enhanced diagnostic accuracy. Precision, sensitivity, and specificity values reflected U-AnoGAN's superior capability in accurate disease prediction, diagnosis, treatment recommendations, and adverse event detection. CONCLUSIONS U-AnoGAN significantly bolsters the predictive power of AI-integrated CDSS, enabling more precise and timely diagnoses while providing better visualization to potentially overcome the black box problem. This model presents tremendous potential in elevating patient care with advanced AI tools and fostering more accurate and effective decision-making in healthcare environments. As the healthcare sector grapples with escalating data complexity and volume, the importance of models like U-AnoGAN in enhancing CDSS cannot be overstated.
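U-AnoGAN's core trick — training only on normal data so that anomalous inputs reconstruct poorly — reduces, at scoring time, to measuring reconstruction residuals. A minimal sketch of that scoring step follows; the flat lists stand in for image pixel vectors and the "reconstructions" are assumed by hand, not produced by a GAN:

```python
def anomaly_score(image, reconstruction):
    """AnoGAN-style score: mean absolute residual between an input and its
    reconstruction by a model trained only on normal data. Anomalies
    reconstruct poorly, so larger scores flag likely abnormal inputs."""
    return sum(abs(a - b) for a, b in zip(image, reconstruction)) / len(image)

def residual_map(image, reconstruction):
    # Per-pixel residuals; visualising these localises the abnormal region,
    # which is what gives the approach its interpretability.
    return [abs(a - b) for a, b in zip(image, reconstruction)]

# Toy 4-pixel "images": the model faithfully reconstructs the normal input
# but "repairs" the bright lesion in the abnormal one, leaving a residual.
normal, recon_normal = [0.2, 0.2, 0.2, 0.2], [0.21, 0.19, 0.2, 0.2]
abnormal, recon_abnormal = [0.2, 0.9, 0.95, 0.2], [0.2, 0.25, 0.3, 0.2]
```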
Collapse
Affiliation(s)
- Changbae Mun
- Korea Institute of Science and Technology (KIST), 5, Hwarang-ro 14-gil Seongbuk-gu Seoul, 02792, Republic of Korea
| | - Hyodong Ha
- Hanyang Women's University, 200, Salgoji-gil, Seongdong-gu, Seoul 04763, Republic of Korea
| | - Ook Lee
- Hanyang University, 222, Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea
| | - Minjong Cheon
- Hanyang Cyber University, 220, Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea.
| |
Collapse
|
87
|
Wang MX, Kim JK, Kim CR, Chang MC. Deep Learning Algorithm Trained on Oblique Cervical Radiographs to Predict Outcomes of Transforaminal Epidural Steroid Injection for Pain from Cervical Foraminal Stenosis. Pain Ther 2024; 13:173-183. [PMID: 38190074 PMCID: PMC10796863 DOI: 10.1007/s40122-023-00573-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Accepted: 12/15/2023] [Indexed: 01/09/2024] Open
Abstract
INTRODUCTION We developed a convolutional neural network (CNN) model to predict treatment outcomes of transforaminal epidural steroid injection (TFESI) for controlling cervical radicular pain due to cervical foraminal stenosis. METHODS We retrospectively recruited 293 patients who underwent cervical TFESI for cervical radicular pain caused by cervical foraminal stenosis. We obtained a single oblique cervical radiograph from each patient. Each oblique cervical radiograph was cropped to a square region that included the foramen targeted for TFESI, the intervertebral disc, the facet joint at the level of the targeted foramen, and the pedicles of the vertebral bodies just above and below the targeted foramen. Images including the targeted foramen and the surrounding structures were therefore used as input data. A favorable outcome was defined as a ≥ 50% reduction in the numeric rating scale (NRS) score at 2 months post-TFESI compared to the pretreatment NRS score; a poor outcome was defined as a < 50% reduction. RESULTS The area under the curve of our model for predicting the treatment outcome of cervical TFESI in patients with cervical foraminal stenosis was 0.823. CONCLUSION A CNN model trained on oblique cervical radiographs can be helpful in predicting treatment outcomes after cervical TFESI in patients with cervical foraminal stenosis. If the predictive accuracy is increased, we believe that a deep learning model using cervical radiographs as input data could be easily and widely used in clinics and hospitals.
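The outcome labeling rule stated in the methods (a ≥ 50% NRS reduction at 2 months counts as favorable) is simple enough to express directly:

```python
def tfesi_outcome(nrs_pre, nrs_post):
    """Label the response using the study's definition: 'favorable' if the
    numeric rating scale (NRS) score fell by at least 50% at 2 months
    post-TFESI, otherwise 'poor'."""
    if nrs_pre <= 0:
        raise ValueError("pretreatment NRS must be positive")
    reduction = (nrs_pre - nrs_post) / nrs_pre
    return "favorable" if reduction >= 0.5 else "poor"
```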
Collapse
Affiliation(s)
- Ming Xing Wang
- Department of Business Administration, School of Business, Yeungnam University, Gyeongsan-si, Republic of Korea
| | - Jeoung Kun Kim
- Department of Business Administration, School of Business, Yeungnam University, Gyeongsan-si, Republic of Korea
| | - Chung Reen Kim
- Department of Physical Medicine and Rehabilitation, Ulsan University Hospital, University of Ulsan College of Medicine, Ulsan, Republic of Korea
| | - Min Cheol Chang
- Department of Rehabilitation Medicine, College of Medicine, Yeungnam University, 317-1, Daemyungdong, Namku, Daegu, 705-717, Republic of Korea.
| |
Collapse
|
88
|
Sharma N, Sharma M, Tailor J, Chaudhari A, Joshi D, Acharya UR. Automated detection of depression using wavelet scattering networks. Med Eng Phys 2024; 124:104107. [PMID: 38418014 DOI: 10.1016/j.medengphy.2024.104107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Revised: 12/16/2023] [Accepted: 01/09/2024] [Indexed: 03/01/2024]
Abstract
Today, depression is a common problem that affects many people all over the world. It can impact a person's mood and quality of life unless it is identified and treated promptly. Owing to the hectic and stressful nature of modern life, depression has become a leading cause of mental illness. Electroencephalogram (EEG) signals are frequently used to detect depression, but manual analysis of EEG data is difficult, time-consuming, and demands a high level of skill. Hence, this study proposes an automated depression detection system using EEG signals. The study uses a clinical dataset provided by the Department of Psychiatry at the Government Medical College (GMC) in Kozhikode, Kerala, India, consisting of 15 depressed patients and 15 healthy subjects, and the publicly available Multi-modal Open Dataset for Mental-disorder Analysis (MODMA), hosted at the UK Data Service ReShare, consisting of 24 depressed patients and 29 healthy subjects. We developed a novel Deep Wavelet Scattering Network (DWSN) for the automated detection of depression from EEG signals. The extracted features are fed into several machine-learning algorithms, and the best-performing classifier is selected. For the GMC dataset, a Medium Neural Network (MNN) achieved the highest accuracy of 99.95% with a Kappa value of 0.999; precision, recall, and F1-score were all 1.00. For the MODMA dataset, a Wide Neural Network (WNN) achieved the highest accuracy of 99.3% with a Kappa value of 0.987; precision, recall, and F1-score were all 0.99. The proposed approach outperforms all current methodologies and can be used to automatically diagnose depression both at home and in clinical settings.
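The Kappa values reported above correct raw accuracy for chance agreement. A small, stdlib-only implementation of Cohen's kappa (labels passed as plain lists):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement,
    kappa = (p_o - p_e) / (1 - p_e), where p_e is the agreement expected
    from the two raters' marginal label frequencies."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
```

A kappa near 1 (as reported for both datasets) means the classifier's agreement with the ground truth is far beyond what chance label frequencies would produce.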
Collapse
Affiliation(s)
- Nishant Sharma
- Department of Electrical Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad, India.
| | - Manish Sharma
- Department of Electrical Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad, India.
| | - Jimit Tailor
- Department of Electrical Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad, India.
| | - Arth Chaudhari
- Department of Electrical Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad, India.
| | - Deepak Joshi
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi (IITD), Delhi, India.
| | - U Rajendra Acharya
- School of Mathematics, Physics, and Computing, University of Southern Queensland, Toowoomba 4350, Queensland, Australia.
| |
Collapse
|
89
|
Bedrikovetski S, Zhang J, Seow W, Traeger L, Moore JW, Verjans J, Carneiro G, Sammour T. Deep learning to predict lymph node status on pre-operative staging CT in patients with colon cancer. J Med Imaging Radiat Oncol 2024; 68:33-40. [PMID: 37724420 DOI: 10.1111/1754-9485.13584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 09/03/2023] [Indexed: 09/20/2023]
Abstract
INTRODUCTION Lymph node (LN) metastases are an important determinant of survival in patients with colon cancer, but remain difficult to accurately diagnose on preoperative imaging. This study aimed to develop and evaluate a deep learning model to predict LN status on preoperative staging CT. METHODS In this ambispective diagnostic study, a deep learning model using a ResNet-50 framework was developed to predict LN status based on preoperative staging CT. Patients with a preoperative staging abdominopelvic CT who underwent surgical resection for colon cancer were enrolled. Data were retrospectively collected from February 2007 to October 2019 and randomly separated into training, validation, and testing cohort 1. To prospectively test the deep learning model, data for testing cohort 2 was collected from October 2019 to July 2021. Diagnostic performance measures were assessed by the AUROC. RESULTS A total of 1,201 patients (median [range] age, 72 [28-98 years]; 653 [54.4%] male) fulfilled the eligibility criteria and were included in the training (n = 401), validation (n = 100), testing cohort 1 (n = 500) and testing cohort 2 (n = 200). The deep learning model achieved an AUROC of 0.619 (95% CI 0.507-0.731) in the validation cohort. In testing cohort 1 and testing cohort 2, the AUROC was 0.542 (95% CI 0.489-0.595) and 0.486 (95% CI 0.403-0.568), respectively. CONCLUSION A deep learning model based on a ResNet-50 framework does not predict LN status on preoperative staging CT in patients with colon cancer.
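The AUROC figures above have a useful probabilistic reading: the chance that a randomly chosen node-positive patient receives a higher model score than a randomly chosen node-negative one (so 0.542 and 0.486 are essentially chance level). A minimal rank-based (Mann-Whitney) implementation:

```python
def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a random
    positive case is scored above a random negative case (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```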
Collapse
Affiliation(s)
- Sergei Bedrikovetski
- Discipline of Surgery, Faculty of Health and Medical Sciences, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
| | - Jianpeng Zhang
- Australian Institute for Machine Learning, School of Computer Science, University of Adelaide, Adelaide, South Australia, Australia
| | - Warren Seow
- Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
| | - Luke Traeger
- Discipline of Surgery, Faculty of Health and Medical Sciences, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
| | - James W Moore
- Discipline of Surgery, Faculty of Health and Medical Sciences, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
| | - Johan Verjans
- Australian Institute for Machine Learning, School of Computer Science, University of Adelaide, Adelaide, South Australia, Australia
| | - Gustavo Carneiro
- Australian Institute for Machine Learning, School of Computer Science, University of Adelaide, Adelaide, South Australia, Australia
| | - Tarik Sammour
- Discipline of Surgery, Faculty of Health and Medical Sciences, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
| |
Collapse
|
90
|
Uddin MG, Nash S, Rahman A, Dabrowski T, Olbert AI. Data-driven modelling for assessing trophic status in marine ecosystems using machine learning approaches. ENVIRONMENTAL RESEARCH 2024; 242:117755. [PMID: 38008200 DOI: 10.1016/j.envres.2023.117755] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 10/05/2023] [Accepted: 11/20/2023] [Indexed: 11/28/2023]
Abstract
Assessing eutrophication in coastal and transitional waters is of utmost importance, yet existing Trophic Status Index (TSI) models face challenges like multicollinearity, data redundancy, inappropriate aggregation methods, and complex classification schemes. To tackle these issues, we developed a novel tool that harnesses machine learning (ML) and artificial intelligence (AI), enhancing the reliability and accuracy of trophic status assessments. Our research introduces an improved data-driven methodology specifically tailored for transitional and coastal (TrC) waters, with a focus on Cork Harbour, Ireland, as a case study. Our innovative approach, named the Assessment Trophic Status Index (ATSI) model, comprises three main components: the selection of pertinent water quality indicators, the computation of ATSI scores, and the implementation of a new classification scheme. To optimize input data and minimize redundancy, we employed ML techniques, including advanced deep learning methods. Specifically, we developed a CHL prediction model utilizing ten algorithms, among which XGBoost demonstrated exceptional performance, showcasing minimal errors during both training (RMSE = 0.0, MSE = 0.0, MAE = 0.01) and testing (RMSE = 0.0, MSE = 0.0, MAE = 0.01) phases. Utilizing a novel linear rescaling interpolation function, we calculated ATSI scores and evaluated the model's sensitivity and efficiency across diverse application domains, employing metrics such as R2, the Nash-Sutcliffe efficiency (NSE), and the model efficiency factor (MEF). The results consistently revealed heightened sensitivity and efficiency across all application domains. Additionally, we introduced a new classification scheme for ranking the trophic status of transitional and coastal waters.
To assess spatial sensitivity, we applied the ATSI model to four distinct waterbodies in Ireland, comparing trophic assessment outcomes with the Assessment of Trophic Status of Estuaries and Bays in Ireland (ATSEBI) System. Remarkably, significant disparities between the ATSI and ATSEBI System were evident in all domains, except for Mulroy Bay. Overall, our research significantly enhances the accuracy of trophic status assessments in marine ecosystems. The ATSI model, combined with cutting-edge ML techniques and our new classification scheme, represents a promising avenue for evaluating and monitoring trophic conditions in TrC waters. The study also demonstrated the effectiveness of ATSI in assessing trophic status across various waterbodies, including lakes, rivers, and more. These findings make substantial contributions to the field of marine ecosystem management and conservation.
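The abstract does not give the exact form of the "novel linear rescaling interpolation function" used to compute ATSI scores, so the snippet below shows only a generic, assumed min-max rescaling onto a 0-100 score as an illustration of the idea:

```python
def rescale(value, lo, hi, new_lo=0.0, new_hi=100.0):
    """Linearly map `value` from [lo, hi] onto [new_lo, new_hi], clamping
    out-of-range inputs to the boundaries. An assumed illustration only;
    the paper's actual interpolation function is not given in the abstract."""
    if hi == lo:
        raise ValueError("degenerate input range")
    t = (value - lo) / (hi - lo)
    t = min(max(t, 0.0), 1.0)
    return new_lo + t * (new_hi - new_lo)
```

Rescaling each water-quality indicator onto a common score range before aggregation is what allows indicators with very different units and magnitudes to be combined into a single index.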
Collapse
Affiliation(s)
- Md Galal Uddin
- School of Engineering, University of Galway, Ireland; Ryan Institute, University of Galway, Ireland; MaREI Research Centre, University of Galway, Ireland; Eco-HydroInformatics Research Group (EHIRG), Civil Engineering, University of Galway, Ireland.
| | - Stephen Nash
- School of Engineering, University of Galway, Ireland; Ryan Institute, University of Galway, Ireland; MaREI Research Centre, University of Galway, Ireland
| | - Azizur Rahman
- School of Computing, Mathematics and Engineering, Charles Sturt University, Wagga Wagga, Australia; The Gulbali Institute of Agriculture, Water and Environment, Charles Sturt University, Wagga Wagga, Australia
| | | | - Agnieszka I Olbert
- School of Engineering, University of Galway, Ireland; Ryan Institute, University of Galway, Ireland; MaREI Research Centre, University of Galway, Ireland; Eco-HydroInformatics Research Group (EHIRG), Civil Engineering, University of Galway, Ireland
| |
Collapse
|
91
|
Yoon D, Yoo M, Kim BS, Kim YG, Lee JH, Lee E, Min GH, Hwang DY, Baek C, Cho M, Suh YS, Kim S. Automated deep learning model for estimating intraoperative blood loss using gauze images. Sci Rep 2024; 14:2597. [PMID: 38297011 PMCID: PMC10830489 DOI: 10.1038/s41598-024-52524-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Accepted: 01/19/2024] [Indexed: 02/02/2024] Open
Abstract
The intraoperative estimated blood loss (EBL), an essential parameter for perioperative management, has conventionally been evaluated by manually weighing blood in gauze and suction bottles, a process that is both time-consuming and labor-intensive. As a novel EBL prediction platform, we developed an automated deep learning EBL prediction model utilizing the patch-wise crumpled state (P-W CS) of gauze images with texture analysis. The proposed algorithm was developed using animal data obtained from a porcine experiment and validated on human intraoperative data prospectively collected from 102 laparoscopic gastric cancer surgeries. The EBL prediction model involves gauze area detection and subsequent EBL regression based on the detected areas, with each stage optimized through comparative model performance evaluations. The selected gauze detection model demonstrated a sensitivity of 96.5% and a specificity of 98.0%. Based on this detection model, the performance of the EBL regression stage models was compared. Comparative evaluations revealed that our P-W CS-based model outperforms others, including one reliant on convolutional neural networks and another analyzing the gauze's overall crumpled state. The P-W CS-based model achieved a mean absolute error (MAE) of 0.25 g and a mean absolute percentage error (MAPE) of 7.26% in EBL regression. Additionally, per-patient assessment yielded an MAE of 0.58 g, indicating errors < 1 g/patient. In conclusion, our algorithm provides an objective standard and streamlined approach for EBL estimation during surgery without the need for perioperative approximation and additional tasks by humans. The robust performance of the model across varied surgical conditions emphasizes its clinical potential for real-world application.
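The MAE and MAPE figures used to evaluate the regression stage are straightforward to compute; a stdlib-only sketch:

```python
def mae(y_true, y_pred):
    """Mean absolute error, in the units of the target (here, grams of blood)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error; true values must be nonzero."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Reporting both is informative: MAE states the error on the original scale, while MAPE normalises it so that small and large blood-loss cases contribute comparably.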
Collapse
Affiliation(s)
- Dan Yoon
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
| | - Mira Yoo
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea
| | - Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
| | - Young Gyun Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
| | - Jong Hyeon Lee
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
| | - Eunju Lee
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea
- Department of Surgery, Chung-Ang University Gwangmyeong Hospital, Gwangmyeong, 14353, Korea
| | - Guan Hong Min
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea
| | - Du-Yeong Hwang
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea
| | - Changhoon Baek
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, 03080, Korea
| | - Minwoo Cho
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, 03080, Korea
| | - Yun-Suhk Suh
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea.
- Department of Surgery, Seoul National University College of Medicine, Seoul, 03080, Korea.
| | - Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, Korea.
- Institute of Bioengineering, Seoul National University, Seoul, 08826, Korea.
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea.
| |
Collapse
|
92
|
Abdulnazar A, Kugic A, Schulz S, Stadlbauer V, Kreuzthaler M. O2 supplementation disambiguation in clinical narratives to support retrospective COVID-19 studies. BMC Med Inform Decis Mak 2024; 24:29. [PMID: 38297364 PMCID: PMC10829265 DOI: 10.1186/s12911-024-02425-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Accepted: 01/15/2024] [Indexed: 02/02/2024] Open
Abstract
BACKGROUND Oxygen saturation, a key indicator of COVID-19 severity, poses challenges, especially in cases of silent hypoxemia. Electronic health records (EHRs) often contain supplemental oxygen information within clinical narratives. Streamlining patient identification based on oxygen levels is crucial for COVID-19 research, underscoring the need for automated classifiers in discharge summaries to ease the manual review burden on physicians. METHOD We analysed text lines extracted from anonymised COVID-19 patient discharge summaries in German to perform a binary classification task, differentiating patients who received oxygen supplementation from those who did not. Various machine learning (ML) algorithms, ranging from classical ML to deep learning (DL) models, were compared. Classifier decisions were explained using Local Interpretable Model-agnostic Explanations (LIME), which visualise the model decisions. RESULT Classical ML and DL models achieved comparable classification performance, with F-measures between 0.942 and 0.955, although the classical ML approaches were faster. Visualisation of the embedding representations of the input data reveals notable variations in the encoding patterns of classical and DL encoders. Furthermore, LIME explanations provide insights into the most relevant features at the token level that contribute to these observed differences. CONCLUSION Despite a general tendency towards deep learning, these use cases show that classical approaches yield comparable results at lower computational cost. Model prediction explanations using LIME in textual and visual layouts provided qualitative insight into model performance.
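The F-measure used to compare the classifiers above is the harmonic mean of precision and recall; a minimal binary implementation:

```python
def f_measure(y_true, y_pred):
    """F1 score: harmonic mean of precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```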
Collapse
Affiliation(s)
- Akhila Abdulnazar
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Graz, Austria
- CBmed GmbH - Center for Biomarker Research in Medicine, Graz, Austria
| | - Amila Kugic
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Graz, Austria
| | - Stefan Schulz
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Graz, Austria
| | - Vanessa Stadlbauer
- CBmed GmbH - Center for Biomarker Research in Medicine, Graz, Austria
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Medical University of Graz, Graz, Austria
| | - Markus Kreuzthaler
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Graz, Austria.
| |
Collapse
|
93
|
Pannipulath Venugopal V, Babu Saheer L, Maktabdar Oghaz M. COVID-19 lateral flow test image classification using deep CNN and StyleGAN2. Front Artif Intell 2024; 6:1235204. [PMID: 38348096 PMCID: PMC10860423 DOI: 10.3389/frai.2023.1235204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Accepted: 12/28/2023] [Indexed: 02/15/2024] Open
Abstract
Introduction Artificial intelligence (AI) in healthcare can enhance clinical workflows and diagnoses, particularly in large-scale operations like COVID-19 mass testing. This study presents a deep Convolutional Neural Network (CNN) model for automated classification of COVID-19 rapid antigen test device (RATD) images. Methods To address the absence of a RATD image dataset, we crowdsourced 900 real-world images focusing on positive and negative cases. Rigorous data augmentation and StyleGAN2-ADA-generated simulated images were used to overcome dataset limitations and class imbalance. Results The best CNN model achieved a 93% validation accuracy. Test accuracies were 88% for simulated datasets and 82% for real datasets. Augmenting simulated images during training did not significantly improve real-world test image performance but enhanced simulated test image performance. Discussion The findings of this study highlight the potential of the developed model in expediting COVID-19 testing processes and facilitating large-scale testing and tracking systems. The study also underscores the challenges in designing and developing such models, emphasizing the importance of addressing dataset limitations and class imbalances. Conclusion This research contributes to the deployment of large-scale testing and tracking systems, offering insights into the potential applications of AI in mitigating outbreaks similar to COVID-19. Future work could focus on refining the model and exploring its adaptability to other healthcare scenarios.
Affiliation(s)
- Lakshmi Babu Saheer
- School of Computing and Information Science, Anglia Ruskin University, Cambridge, United Kingdom

94
Mostafa F, Chen M. Computational models for predicting liver toxicity in the deep learning era. Front Toxicol 2024; 5:1340860. [PMID: 38312894 PMCID: PMC10834666 DOI: 10.3389/ftox.2023.1340860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2023] [Accepted: 12/22/2023] [Indexed: 02/06/2024] Open
Abstract
Drug-induced liver injury (DILI) is a severe adverse drug reaction that may result in acute liver failure and even death. Many efforts have centered on mitigating the risks of potential DILI in humans. Among these, quantitative structure-activity relationship (QSAR) modelling has proven a valuable tool for early-stage hepatotoxicity screening: it requires no physical substance and delivers results rapidly. Deep learning (DL) has advanced rapidly in recent years and has been used to develop QSAR models. This review discusses the use of DL in predicting DILI, focusing on QSAR models built from extensive chemical-structure datasets and their corresponding DILI outcomes. We comprehensively evaluate various DL methods, compare them with traditional machine learning (ML) approaches, and explore the strengths and limitations of DL techniques with respect to interpretability, scalability, and generalization. Overall, our review underscores the potential of DL methodologies to enhance DILI prediction and provides insights into future avenues for developing predictive models to mitigate DILI risk in humans.
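QSAR models of the kind reviewed here start from fixed-length numeric descriptors of chemical structure. A toy hashed-substring fingerprint of a SMILES string illustrates the idea; it is only a stand-in for real descriptors such as ECFP, and is not taken from the review:

```python
from zlib import crc32

def smiles_fingerprint(smiles, n_bits=64, max_len=2):
    """Toy binary fingerprint: hash every 1- and 2-character SMILES substring
    into a fixed-length bit vector (deterministic via crc32)."""
    bits = [0] * n_bits
    for k in range(1, max_len + 1):
        for i in range(len(smiles) - k + 1):
            bits[crc32(smiles[i:i + k].encode()) % n_bits] = 1
    return bits

fp = smiles_fingerprint("CC(=O)Nc1ccc(O)cc1")  # paracetamol SMILES
print(sum(fp), "of", len(fp), "bits set")
```

Vectors like `fp` are what a downstream DL or ML classifier would consume alongside binary DILI labels.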
Affiliation(s)
- Fahad Mostafa
- Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX, United States
- Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, United States
- Minjun Chen
- Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, United States

95
Raut P, Baldini G, Schöneck M, Caldeira L. Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors. Front Radiol 2024; 3:1336902. [PMID: 38304344 PMCID: PMC10830800 DOI: 10.3389/fradi.2023.1336902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Accepted: 12/28/2023] [Indexed: 02/03/2024]
Abstract
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automated using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained on heterogeneous volumetric imaging data such as MRI, CT, and PET. However, DL-based methods usually require a fixed set of inputs; if one required input is absent, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients from the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR sequences with corresponding multi-label segmentations, were used to train the Pix2PixNIfTI model to generate synthetic MRI images of all the image contrasts. The segmentation model, DeepMedic, was trained for brain tumor segmentation with five-fold cross-validation and tested using the original inputs as the gold standard. The trained segmentation models were then applied to synthetic images substituted for a missing input, in combination with the remaining original images, to assess the efficacy of the generated images for multi-class segmentation. With synthetic data or fewer inputs, Dice scores were significantly reduced but remained in a similar range for the whole tumor compared with segmentation of the original images (e.g., mean Dice for synthetic T2w prediction: NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). Standard paired t-tests with multiple-comparison correction were performed to assess differences between all regions (p < 0.05). The study concludes that Pix2PixNIfTI allows brain tumors to be segmented when one input image is missing.
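The Dice similarity coefficient reported above measures the overlap between a predicted and a reference mask. A minimal sketch over flat binary masks; the 5-voxel masks are illustrative only:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0  # two empty masks agree fully

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```

A score of 1.0 means perfect overlap; the study's whole-tumor values around 0.90 indicate close but imperfect agreement.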
Affiliation(s)
- P. Raut
- Department of Pediatric Pulmonology, Erasmus Medical Center, Rotterdam, Netherlands
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Rotterdam, Netherlands
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
- G. Baldini
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- M. Schöneck
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
- L. Caldeira
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany

96
Qattous H, Azzeh M, Ibrahim R, Abed Al-Ghafer I, Al Sorkhy M, Alkhateeb A. PaCMAP-embedded convolutional neural network for multi-omics data integration. Heliyon 2024; 10:e23195. [PMID: 38163104 PMCID: PMC10756978 DOI: 10.1016/j.heliyon.2023.e23195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2023] [Revised: 11/22/2023] [Accepted: 11/29/2023] [Indexed: 01/03/2024] Open
Abstract
Aims Multi-omics data integration has emerged as a prominent avenue in healthcare, with substantial potential for enhancing predictive models. This study is motivated by the need to advance prognostic methodologies in cancer diagnosis, where precision is pivotal for effective clinical decision-making. In this context, we introduce a methodology that integrates copy number alteration (CNA), DNA methylation, and gene expression data. Methods The three omics were merged into a two-dimensional (2D) map using the PaCMAP dimensionality reduction technique. Using an RGB coloring scheme, a visual representation of the integration was produced from the values of the three omics for each sample. The colored 2D maps were then fed into a convolutional neural network (CNN) to predict the Gleason score. Results Our proposed model outperforms the state-of-the-art i-SOM-GSN model, achieving an accuracy of 98.89% and an AUC of 0.9996. Conclusion This study demonstrates the effectiveness of multi-omics data integration in predicting health outcomes. The proposed methodology, combining PaCMAP for dimensionality reduction, RGB coloring for visualization, and a CNN for prediction, offers a comprehensive framework for integrating heterogeneous omics data and improving predictive accuracy. These findings contribute to the advancement of personalized medicine and may aid clinical decision-making for prostate cancer patients.
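The RGB encoding step can be sketched as min-max scaling each omics layer to [0, 1] and stacking the three scaled values per feature into one pixel. A simplified illustration with hypothetical values; the function and variable names are assumptions, not the paper's code:

```python
def to_rgb(cna, meth, expr):
    """Min-max scale each omics vector, then zip the three scaled values
    into one (R, G, B) tuple per feature."""
    def scale(v):
        lo, hi = min(v), max(v)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in v]
    return list(zip(scale(cna), scale(meth), scale(expr)))

# Three hypothetical genes: copy number, methylation beta, expression.
pixels = to_rgb([-1, 0, 2], [0.1, 0.5, 0.9], [3, 6, 9])
print(pixels[0])  # (0.0, 0.0, 0.0): the lowest-valued gene in every channel
```

Each sample then becomes a colored 2D image that a standard image CNN can consume.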
Affiliation(s)
- Hazem Qattous
- Software Engineering Department, Princess Sumaya University for Technology, Amman P.O. Box 1438, Jordan
- Mohammad Azzeh
- Data Science Department, Princess Sumaya University for Technology, Amman P.O. Box 1438, Jordan
- Rahmeh Ibrahim
- Computer Science Department, Princess Sumaya University for Technology, Amman P.O. Box 1438, Jordan
- Ibrahim Abed Al-Ghafer
- Data Science Department, Princess Sumaya University for Technology, Amman P.O. Box 1438, Jordan
- Mohammad Al Sorkhy
- Heritage College of Osteopathic Medicine, Ohio University, Cleveland, OH 44122, USA
- Abedalrhman Alkhateeb
- Computer Science Department, Lakehead University, 955 Oliver Rd, Thunder Bay, ON P7B 5E1, Ontario, Canada

97
Mohammed A, Corzo G. Spatiotemporal convolutional long short-term memory for regional streamflow predictions. JOURNAL OF ENVIRONMENTAL MANAGEMENT 2024; 350:119585. [PMID: 38016234 DOI: 10.1016/j.jenvman.2023.119585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 10/05/2023] [Accepted: 11/06/2023] [Indexed: 11/30/2023]
Abstract
Rainfall-runoff (RR) modelling is a challenging task in hydrology, especially at the regional scale. This work presents an approach to simultaneously predict daily streamflow in 86 catchments across the US using a sequential CNN-LSTM deep learning architecture. The model incorporates both spatial and temporal information, leveraging the CNN to encode spatial patterns and the LSTM to learn their temporal relations. For training, a year-long spatially distributed input with precipitation, maximum temperature, and minimum temperature for each day was used to predict one-day streamflow. The trained CNN-LSTM model was further fine-tuned for three local sub-clusters of the 86 stations to assess the contribution of fine-tuning to model performance. After fine-tuning, the CNN-LSTM model exhibited strong predictive capability, with a median Nash-Sutcliffe efficiency (NSE) of 0.62 over the test period; remarkably, 65% of the 86 stations achieved NSE values greater than 0.6. The model was also compared with different deep learning models trained in a similar setup (CNN, LSTM, ANN), as well as with LSTM models trained individually for each station using local data. The CNN-LSTM model outperformed all regionally trained models and achieved performance comparable to the locally trained LSTM models. Fine-tuning improved the performance of all models during the test period. The results highlight the potential of the CNN-LSTM approach for regional RR modelling by effectively capturing the complex spatiotemporal patterns inherent in the RR process.
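The Nash-Sutcliffe efficiency used for evaluation compares the model's squared error against the variance of the observed flows (NSE = 1 means a perfect fit, 0 means no better than predicting the mean). A minimal sketch with made-up flow values, not study data:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean) ** 2 for o in obs)
    return 1 - sse / var

obs = [1.0, 2.0, 3.0, 4.0]   # hypothetical observed daily streamflow
sim = [1.1, 1.9, 3.2, 3.8]   # hypothetical simulated streamflow
print(round(nse(obs, sim), 3))  # 0.98
```

A median NSE of 0.62, as reported, means the typical station's simulation explains well over half of the observed flow variance.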
Affiliation(s)
- Abdalla Mohammed
- Hydroinformatics Department, IHE Delft Institute for Water Education, Westvest 7, 2611 AX, Delft, Netherlands; School of Geography and the Environment, University of Oxford, Oxford, UK
- Gerald Corzo
- Hydroinformatics Department, IHE Delft Institute for Water Education, Westvest 7, 2611 AX, Delft, Netherlands

98
Gross M, Huber S, Arora S, Ze'evi T, Haider SP, Kucukkaya AS, Iseke S, Kuhn TN, Gebauer B, Michallek F, Dewey M, Vilgrain V, Sartoris R, Ronot M, Jaffe A, Strazzabosco M, Chapiro J, Onofrey JA. Automated MRI liver segmentation for anatomical segmentation, liver volumetry, and the extraction of radiomics. Eur Radiol 2024:10.1007/s00330-023-10495-5. [PMID: 38217704 DOI: 10.1007/s00330-023-10495-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Revised: 09/20/2023] [Accepted: 10/29/2023] [Indexed: 01/15/2024]
Abstract
OBJECTIVES To develop and evaluate a deep convolutional neural network (DCNN) for automated liver segmentation, volumetry, and radiomic feature extraction on contrast-enhanced portal venous phase magnetic resonance imaging (MRI). MATERIALS AND METHODS This retrospective study included hepatocellular carcinoma patients from an institutional database with portal venous MRI. After manual segmentation, the data was randomly split into independent training, validation, and internal testing sets. From a collaborating institution, de-identified scans were used for external testing. The public LiverHccSeg dataset was used for further external validation. A 3D DCNN was trained to automatically segment the liver. Segmentation accuracy was quantified by the Dice similarity coefficient (DSC) with respect to manual segmentation. A Mann-Whitney U test was used to compare the internal and external test sets. Agreement of volumetry and radiomic features was assessed using the intraclass correlation coefficient (ICC). RESULTS In total, 470 patients met the inclusion criteria (63.9±8.2 years; 376 males) and 20 patients were used for external validation (41±12 years; 13 males). DSC segmentation accuracy of the DCNN was similarly high between the internal (0.97±0.01) and external (0.96±0.03) test sets (p=0.28) and demonstrated robust segmentation performance on public testing (0.93±0.03). Agreement of liver volumetry was satisfactory in the internal (ICC, 0.99), external (ICC, 0.97), and public (ICC, 0.85) test sets. Radiomic features demonstrated excellent agreement in the internal (mean ICC, 0.98±0.04), external (mean ICC, 0.94±0.10), and public (mean ICC, 0.91±0.09) datasets. CONCLUSION Automated liver segmentation yields robust and generalizable segmentation performance on MRI data and can be used for volumetry and radiomic feature extraction. 
CLINICAL RELEVANCE STATEMENT Liver volumetry, anatomic localization, and extraction of quantitative imaging biomarkers require accurate segmentation, but manual segmentation is time-consuming. A deep convolutional neural network demonstrates fast and accurate segmentation performance on T1-weighted portal venous MRI. KEY POINTS • This deep convolutional neural network yields robust and generalizable liver segmentation performance on internal, external, and public testing data. • Automated liver volumetry demonstrated excellent agreement with manual volumetry. • Automated liver segmentations can be used for robust and reproducible radiomic feature extraction.
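The volumetry step the abstract describes reduces to counting foreground voxels in the binary segmentation and multiplying by the voxel volume. A minimal sketch; the voxel spacing and count are hypothetical, not from the study:

```python
def liver_volume_ml(mask, voxel_mm=(1.5, 1.5, 3.0)):
    """Volume of a binary segmentation: foreground-voxel count times the
    per-voxel volume, converted from mm^3 to mL."""
    voxel_mm3 = voxel_mm[0] * voxel_mm[1] * voxel_mm[2]
    return sum(mask) * voxel_mm3 / 1000.0

# Hypothetical flattened mask with 200,000 foreground voxels
# at 1.5 x 1.5 x 3.0 mm spacing.
print(liver_volume_ml([1] * 200_000))  # 1350.0 mL
```

Because this computation is fully determined by the mask, the ICC agreement between automated and manual volumetry tracks the segmentation quality directly.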
Affiliation(s)
- Moritz Gross
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Steffen Huber
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Sandeep Arora
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Tal Ze'evi
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Stefan P Haider
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Department of Otorhinolaryngology, University Hospital of Ludwig Maximilians Universität München, Munich, Germany
- Ahmet S Kucukkaya
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Simon Iseke
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Department of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, Rostock University Medical Center, Rostock, Germany
- Tom Niklas Kuhn
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Department of Diagnostic and Interventional Radiology, University Duesseldorf, Duesseldorf, Germany
- Bernhard Gebauer
- Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Florian Michallek
- Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Marc Dewey
- Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Valérie Vilgrain
- Université Paris Cité, Île-de-France, Paris, France
- Department of Radiology, Hôpital Beaujon, AP-HP.Nord, Clichy, France
- Riccardo Sartoris
- Université Paris Cité, Île-de-France, Paris, France
- Department of Radiology, Hôpital Beaujon, AP-HP.Nord, Clichy, France
- Maxime Ronot
- Université Paris Cité, Île-de-France, Paris, France
- Department of Radiology, Hôpital Beaujon, AP-HP.Nord, Clichy, France
- Ariel Jaffe
- Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, USA
- Mario Strazzabosco
- Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, USA
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- John A Onofrey
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Department of Urology, Yale University School of Medicine, New Haven, CT, USA

99
Park JA, Kim D, Yang S, Kang JH, Kim JE, Huh KH, Lee SS, Yi WJ, Heo MS. Automatic detection of posterior superior alveolar artery in dental cone-beam CT images using a deeply supervised multi-scale 3D network. Dentomaxillofac Radiol 2024; 53:22-31. [PMID: 38214942 PMCID: PMC11003607 DOI: 10.1093/dmfr/twad002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 09/15/2023] [Accepted: 10/18/2023] [Indexed: 01/13/2024] Open
Abstract
OBJECTIVES This study aimed to develop a robust and accurate deep learning network for detecting the posterior superior alveolar artery (PSAA) in dental cone-beam CT (CBCT) images, focusing on precise localization of the centre pixel as a critical centreline pixel. METHODS PSAA locations were manually labelled on dental CBCT data from 150 subjects. The left maxillary sinus images were horizontally flipped, yielding 300 datasets in total. Six deep learning networks were trained: 3D U-Net, deeply supervised 3D U-Net (3D U-Net DS), multi-scale deeply supervised 3D U-Net (3D U-Net MSDS), 3D Attention U-Net, 3D V-Net, and 3D Dense U-Net. Performance in predicting the centre pixel of the PSAA was assessed using mean absolute error (MAE), mean radial error (MRE), and successful detection rate (SDR). RESULTS The 3D U-Net MSDS achieved the best prediction performance among the tested networks, with an MAE of 0.696 ± 1.552 mm and an MRE of 1.101 ± 2.270 mm; in comparison, the 3D U-Net showed the lowest performance. The 3D U-Net MSDS demonstrated an SDR of 95% within a 2 mm MAE, significantly higher than the other networks, which achieved detection rates above 80%. CONCLUSIONS This study presents a robust deep learning network for accurate PSAA detection in dental CBCT images, emphasizing precise centre-pixel localization. The method achieves high accuracy in locating small vessels such as the PSAA and has the potential to enhance detection accuracy and efficiency, thereby supporting planning and decision-making in oral and maxillofacial surgery.
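The radial-error and SDR metrics reported here can be sketched as Euclidean distances between predicted and labelled centre points, with the SDR counting the fraction of cases inside a tolerance. The coordinates below are toy values, not study data:

```python
import math

def radial_errors(pred, truth):
    """Euclidean distance (mm) between each predicted and labelled centre point."""
    return [math.dist(p, t) for p, t in zip(pred, truth)]

def sdr(errors, tol_mm=2.0):
    """Successful detection rate: fraction of cases within the tolerance."""
    return sum(e <= tol_mm for e in errors) / len(errors)

pred  = [(10.0, 4.0, 7.0), (3.0, 3.0, 3.0)]
truth = [(10.0, 4.0, 8.0), (6.0, 7.0, 3.0)]
errs = radial_errors(pred, truth)   # [1.0, 5.0]
print(sdr(errs))                    # 0.5: one of two cases within 2 mm
```

The study's 95% SDR within 2 mm corresponds to 95% of test cases having a centre-point error of at most 2 mm.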
Affiliation(s)
- Jae-An Park
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- DaEl Kim
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
- Ju-Hee Kang
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Won-Jin Yi
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea

100
Sadr S, Rokhshad R, Daghighi Y, Golkar M, Tolooie Kheybari F, Gorjinejad F, Mataji Kojori A, Rahimirad P, Shobeiri P, Mahdian M, Mohammad-Rahimi H. Deep learning for tooth identification and numbering on dental radiography: a systematic review and meta-analysis. Dentomaxillofac Radiol 2024; 53:5-21. [PMID: 38183164 PMCID: PMC11003608 DOI: 10.1093/dmfr/twad001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Revised: 10/03/2023] [Accepted: 10/05/2023] [Indexed: 01/07/2024] Open
Abstract
OBJECTIVES Tools based on deep learning can be used to accurately number and identify teeth. This study reviews the use of deep learning in tooth numbering and identification. METHODS An electronic search was performed through October 2023 on PubMed, Scopus, Cochrane, Google Scholar, IEEE, arXiv, and medRxiv. Studies that used deep learning models with segmentation, object detection, or classification tasks for teeth identification and numbering on human dental radiographs were included. Included studies were critically appraised for risk of bias using the quality assessment of diagnostic accuracy studies (QUADAS-2) tool. MetaDiSc and STATA 17 (StataCorp LP, College Station, TX, USA) were used to generate plots for the meta-analysis, and pooled diagnostic odds ratios (DORs) were calculated. RESULTS The initial search yielded 1618 studies, of which 29 were eligible under the inclusion criteria. Five studies had low bias across all QUADAS-2 domains. Deep learning has been reported to achieve accuracy of 81.8%-99% in tooth identification and numbering, precision of 84.5%-99.94%, sensitivity of 75.5%-98%, specificity of 79.9%-99%, and F1-scores of 87%-98%. Only 6 studies found the deep learning model to be less than 90% accurate. For the pooled data set, the average DOR was 1612, sensitivity was 89%, specificity was 99%, and the area under the curve was 96%. CONCLUSION Deep learning models can successfully detect, identify, and number teeth on dental radiographs. Deep learning-powered tooth numbering systems can enhance complex automated processes, such as accurately reporting which teeth have caries, thus aiding clinicians in making informed decisions during clinical practice.
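The pooled diagnostic odds ratio combines sensitivity and specificity into a single measure: from a 2x2 confusion matrix, DOR = (TP x TN) / (FP x FN), equivalently the odds of a positive test in diseased versus non-diseased cases. A sketch with a hypothetical confusion matrix, not data from the review:

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN) = (TP * TN) / (FP * FN)."""
    return (tp * tn) / (fp * fn)

# Hypothetical per-tooth confusion matrix for a detection model.
print(diagnostic_odds_ratio(tp=89, fp=1, fn=11, tn=99))  # 801.0
```

Higher is better; a pooled DOR of 1612, as reported, indicates extremely strong discrimination between correct and incorrect tooth assignments.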
Affiliation(s)
- Soroush Sadr
- Department of Endodontics, School of Dentistry, Hamadan University of Medical Sciences, Hamadan 6517838636, Iran
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin 10117, Germany
- Section of Endocrinology, Nutrition, and Diabetes, Department of Medicine, Boston University Medical Center, Boston, MA 02118, United States
- Yasaman Daghighi
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran 1983963113, Iran
- Mohsen Golkar
- Department of Oral and Maxillofacial Surgery, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran 4188794755, Iran
- Fateme Tolooie Kheybari
- Faculty of Dentistry, Tabriz Medical Sciences, Islamic Azad University, Tabriz 5166/15731, Iran
- Fatemeh Gorjinejad
- Faculty of Dentistry, Dental School of Islamic Azad University of Medical Sciences, Tehran 19395/1495, Iran
- Atousa Mataji Kojori
- Faculty of Dentistry, Dental School of Islamic Azad University of Medical Sciences, Tehran 19395/1495, Iran
- Parisa Rahimirad
- Student Research Committee, School of Dentistry, Guilan University of Medical Sciences, Rasht 4188794755, Iran
- Parnian Shobeiri
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Mina Mahdian
- Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, New York, NY 11794, United States
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin 10117, Germany