1. De Jesus Encarnacion Ramirez M, Chmutin G, Nurmukhametov R, Soto GR, Kannan S, Piavchenko G, Nikolenko V, Efe IE, Romero AR, Mukengeshay JN, Simfukwe K, Mpoyi Cherubin T, Nicolosi F, Sharif S, Roa JC, Montemurro N. Integrating Augmented Reality in Spine Surgery: Redefining Precision with New Technologies. Brain Sci 2024; 14:645. PMID: 39061386; PMCID: PMC11274952; DOI: 10.3390/brainsci14070645.
Abstract
INTRODUCTION The integration of augmented reality (AR) in spine surgery marks a significant advancement, enhancing surgical precision and patient outcomes. AR provides immersive, three-dimensional visualizations of anatomical structures, facilitating meticulous planning and execution of spine surgeries. This technology not only improves spatial understanding and real-time navigation during procedures but also aims to reduce surgical invasiveness and operative times. Despite its potential, challenges such as model accuracy, user interface design, and the learning curve for new technology must be addressed. AR's application extends beyond the operating room, offering valuable tools for medical education and improving patient communication and satisfaction. MATERIALS AND METHODS A literature review was conducted by searching the PubMed and Scopus databases using keywords related to augmented reality in spine surgery, covering publications from January 2020 to January 2024. RESULTS In total, 319 articles were identified through the initial database search. After screening titles and abstracts, 11 articles were included in the qualitative synthesis. CONCLUSION Augmented reality (AR) is becoming a transformative force in spine surgery, enhancing precision, education, and outcomes despite hurdles like technical limitations and integration challenges. AR's immersive visualizations and educational innovations, coupled with its potential synergy with AI and machine learning, indicate a bright future for surgical care. Despite the existing obstacles, AR's impact on improving surgical accuracy and safety marks a significant leap forward in patient treatment and care.
Affiliation(s)
- Gennady Chmutin
- Department of Neurosurgery, Russian People’s Friendship University, 117198 Moscow, Russia
- Renat Nurmukhametov
- Department of Neurosurgery, Russian People’s Friendship University, 117198 Moscow, Russia
- Gervith Reyes Soto
- Department of Head and Neck, Unidad de Neurociencias, Instituto Nacional de Cancerología, Mexico City 14080, Mexico
- Siddarth Kannan
- School of Medicine, University of Central Lancashire, Preston PR0 2AA, UK
- Gennadi Piavchenko
- Department of Human Anatomy and Histology, Sechenov University, 119911 Moscow, Russia
- Vladmir Nikolenko
- Department of Neurosurgery, I.M. Sechenov First Moscow State Medical University (Sechenov University), 119991 Moscow, Russia
- Ibrahim E. Efe
- Department of Neurosurgery, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, 10178 Berlin, Germany
- Keith Simfukwe
- Department of Neurosurgery, Russian People’s Friendship University, 117198 Moscow, Russia
- Federico Nicolosi
- Department of Medicine and Surgery, Neurosurgery, University of Milano-Bicocca, 20126 Milan, Italy
- Salman Sharif
- Department of Neurosurgery, Liaquat National Hospital and Medical College, Karachi 05444, Pakistan
- Juan Carlos Roa
- Department of Pathology, School of Medicine, Pontificia Universidad Católica de Chile, Santiago 8330024, Chile
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliero Universitaria Pisana (AOUP), 56100 Pisa, Italy
2. Sastry RA, Setty A, Liu DD, Zheng B, Ali R, Weil RJ, Roye GD, Doberstein CE, Oyelese AA, Niu T, Gokaslan ZL, Telfeian AE. Natural language processing augments comorbidity documentation in neurosurgical inpatient admissions. PLoS One 2024; 19:e0303519. PMID: 38723044; PMCID: PMC11081267; DOI: 10.1371/journal.pone.0303519.
Abstract
OBJECTIVE To establish whether or not a natural language processing technique could identify two common inpatient neurosurgical comorbidities using only text reports of inpatient head imaging. MATERIALS AND METHODS A training and testing dataset of reports of 979 CT or MRI scans of the brain for patients admitted to the neurosurgery service of a single hospital in June 2021 or to the Emergency Department between July 1-8, 2021, was identified. A variety of machine learning and deep learning algorithms utilizing natural language processing were trained on the training set (84% of the total cohort) and tested on the remaining reports. A subset comparison cohort (n = 76) was then assessed to compare the output of the best algorithm against real-life inpatient documentation. RESULTS For "brain compression", a random forest classifier outperformed other candidate algorithms with an accuracy of 0.81 and area under the curve (AUC) of 0.90 in the testing dataset. For "brain edema", a random forest classifier again outperformed other candidate algorithms with an accuracy of 0.92 and AUC of 0.94 in the testing dataset. In the provider comparison dataset, for "brain compression", the random forest algorithm demonstrated better accuracy (0.76 vs 0.70) and sensitivity (0.73 vs 0.43) than provider documentation. For "brain edema", the algorithm again demonstrated better accuracy (0.92 vs 0.84) and sensitivity (0.45 vs 0.09) than provider documentation. DISCUSSION A natural language processing-based machine learning algorithm can reliably and reproducibly identify selected common neurosurgical comorbidities from radiology reports. CONCLUSION This result may justify the use of machine learning-based decision support to augment provider documentation.
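As a rough illustration of the first stage of such a pipeline, the sketch below maps a free-text radiology report onto a fixed-vocabulary term-count vector of the kind a random forest classifier could consume. The keyword vocabulary, function name, and sample report are invented for illustration and are not taken from the study.

```python
# Minimal report-to-features sketch; VOCAB and the sample report are illustrative.
import re
from collections import Counter

VOCAB = ["compression", "edema", "herniation", "midline", "shift", "effacement"]

def featurize(report: str) -> list[int]:
    """Map a free-text radiology report to term counts over a fixed vocabulary."""
    tokens = re.findall(r"[a-z]+", report.lower())
    counts = Counter(tokens)
    return [counts[term] for term in VOCAB]

report = "Diffuse cerebral edema with effacement of the sulci; no midline shift."
vec = featurize(report)
# vec -> [0, 1, 0, 1, 1, 1]
```

In the study itself these features feed a trained random forest; here the point is only the shape of the text-to-vector step.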
Affiliation(s)
- Rahul A. Sastry
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Aayush Setty
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Department of Computer Science, Brown University, Providence, RI, United States of America
- David D. Liu
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Bryan Zheng
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Rohaid Ali
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Robert J. Weil
- Department of Neurosurgery, Brain & Spine, Southcoast Health, Dartmouth, MA, United States of America
- G. Dean Roye
- Department of Surgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Curtis E. Doberstein
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Adetokunbo A. Oyelese
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Tianyi Niu
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Ziya L. Gokaslan
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
- Albert E. Telfeian
- Department of Neurosurgery, Warren Alpert Medical School, Rhode Island Hospital, Brown University, Providence, RI, United States of America
3. Deol ES, Tollefson MK, Antolin A, Zohar M, Bar O, Ben-Ayoun D, Mynderse LA, Lomas DJ, Avant RA, Miller AR, Elliott DS, Boorjian SA, Wolf T, Asselmann D, Khanna A. Automated surgical step recognition in transurethral bladder tumor resection using artificial intelligence: transfer learning across surgical modalities. Front Artif Intell 2024; 7:1375482. PMID: 38525302; PMCID: PMC10958784; DOI: 10.3389/frai.2024.1375482.
Abstract
Objective Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements. Materials and methods Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then used to train a novel AI computer vision algorithm to annotate TURBT surgical video automatically, using a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison to human annotations as the reference standard. Results A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%). Conclusion We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models to surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
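The transfer-learning recipe the abstract describes (pre-train on laparoscopy, then adapt to TURBT) can be caricatured as freezing a pretrained backbone and training only a new step-classification head. Everything below (layer names, parameter counts, the three-step head) is an invented stand-in for illustration, not the authors' implementation.

```python
# Toy model registry: each "layer" records a parameter count and whether it
# updates during fine-tuning. Sizes are invented for illustration.

def build_pretrained_backbone():
    # Stand-in for a video CNN pretrained on laparoscopic procedures.
    return {"conv_stack": {"params": 1_000_000, "trainable": False},
            "temporal_pool": {"params": 50_000, "trainable": False}}

def attach_new_head(model, n_steps):
    # New classification head for the TURBT steps (evaluation, resection,
    # coagulation); only this part is trained from scratch at first.
    model["step_head"] = {"params": 3_000 * n_steps, "trainable": True}
    return model

model = attach_new_head(build_pretrained_backbone(), n_steps=3)
trainable = sum(l["params"] for l in model.values() if l["trainable"])
# Only the 9,000 head parameters update during the first fine-tuning phase.
```

The payoff of this design is the dataset-size reduction the abstract highlights: the frozen backbone already encodes generic surgical-video features, so far fewer annotated TURBT videos are needed.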
Affiliation(s)
- Ekamjit S. Deol
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Maya Zohar
- theator.io, Palo Alto, CA, United States
- Omri Bar
- theator.io, Palo Alto, CA, United States
- Derek J. Lomas
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Ross A. Avant
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Adam R. Miller
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Tamir Wolf
- theator.io, Palo Alto, CA, United States
- Abhinav Khanna
- Department of Urology, Mayo Clinic, Rochester, MN, United States
4. Alongi P, Arnone A, Vultaggio V, Fraternali A, Versari A, Casali C, Arnone G, DiMeco F, Vetrano IG. Artificial Intelligence Analysis Using MRI and PET Imaging in Gliomas: A Narrative Review. Cancers (Basel) 2024; 16:407. PMID: 38254896; PMCID: PMC10814838; DOI: 10.3390/cancers16020407.
Abstract
The lack of early detection and a high rate of recurrence or progression after surgery are the most common causes of the very poor prognosis of gliomas. Quantification systems, with particular regard to artificial intelligence (AI) applied to medical images (CT, MRI, PET), are under evaluation in clinical and research contexts for several applications, providing information on image reconstruction, segmentation of the acquired tissues, feature selection, and data analysis. Different AI approaches have been proposed, such as machine and deep learning, which utilize artificial neural networks inspired by neuronal architectures. In addition, new systems using AI techniques have been developed to offer suggestions or make decisions in medical diagnosis, emulating the judgment of expert radiologists. The potential clinical role of AI focuses on predicting progression to more aggressive forms of glioma, differential diagnosis (pseudoprogression vs. true progression), and the follow-up of aggressive gliomas. This narrative review focuses on the available applications of AI in brain tumor diagnosis, mainly in malignant gliomas, with particular attention to the postoperative application of MRI and PET imaging, considering the current state of the technical approach and evaluation after treatment (including surgery, radiotherapy/chemotherapy, and prognostic stratification).
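The "selection of features" step such reviews describe typically begins with simple first-order statistics computed over a tumour region of interest, before any learning takes place. A minimal stdlib sketch, using a tiny synthetic intensity grid (the values and feature names are invented for illustration):

```python
# First-order feature extraction over a toy 2D region of interest.
from statistics import mean, pvariance

roi = [[10, 12, 11],
       [13, 50, 12],
       [11, 12, 10]]          # one bright voxel inside the ROI

voxels = [v for row in roi for v in row]
features = {"mean": mean(voxels),
            "variance": pvariance(voxels),
            "max": max(voxels)}
```

Real radiomics pipelines add texture and shape descriptors and operate on 3D volumes, but the pattern is the same: reduce an image region to a feature vector that downstream AI models can analyse.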
Affiliation(s)
- Pierpaolo Alongi
- Nuclear Medicine Unit, ARNAS Ospedali Civico, Di Cristina e Benfratelli, 90127 Palermo, Italy
- Annachiara Arnone
- Nuclear Medicine Unit, Azienda Unità Sanitaria Locale IRCCS, 42122 Reggio Emilia, Italy
- Viola Vultaggio
- Nuclear Medicine Unit, ARNAS Ospedali Civico, Di Cristina e Benfratelli, 90127 Palermo, Italy
- Alessandro Fraternali
- Nuclear Medicine Unit, Azienda Unità Sanitaria Locale IRCCS, 42122 Reggio Emilia, Italy
- Annibale Versari
- Nuclear Medicine Unit, Azienda Unità Sanitaria Locale IRCCS, 42122 Reggio Emilia, Italy
- Cecilia Casali
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, 20133 Milan, Italy
- Gaspare Arnone
- Nuclear Medicine Unit, ARNAS Ospedali Civico, Di Cristina e Benfratelli, 90127 Palermo, Italy
- Francesco DiMeco
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, 20133 Milan, Italy
- Department of Oncology and Onco-Hematology, Università di Milano, 20122 Milan, Italy
- Department of Neurological Surgery, Johns Hopkins Medical School, Baltimore, MD 21218, USA
- Ignazio Gaspare Vetrano
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, 20133 Milan, Italy
- Department of Biomedical Sciences for Health, Università di Milano, 20122 Milan, Italy
5. Hameed MS, Laplante S, Masino C, Khalid MU, Zhang H, Protserov S, Hunter J, Mashouri P, Fecso AB, Brudno M, Madani A. What is the educational value and clinical utility of artificial intelligence for intraoperative and postoperative video analysis? A survey of surgeons and trainees. Surg Endosc 2023; 37:9453-9460. PMID: 37697116; DOI: 10.1007/s00464-023-10377-3.
Abstract
INTRODUCTION Surgical complications often occur due to lapses in judgment and decision-making. Advances in artificial intelligence (AI) have made it possible to train algorithms that identify anatomy and interpret the surgical field. These algorithms can potentially be used for intraoperative decision support and for postoperative video analysis and feedback. Despite the very early success of proof-of-concept algorithms, it remains unknown whether this innovation meets the needs of end-users or how best to deploy it. This study explores users' opinions on the value, usability, and design of AI platforms for the operating room. METHODS A device-agnostic, web-accessible software platform was developed to provide AI inference either (1) intraoperatively on a live video stream (synchronous mode) or (2) postoperatively on an uploaded video or image file (asynchronous mode) for feedback. A validated AI model (GoNoGoNet), which identifies safe and dangerous zones of dissection during laparoscopic cholecystectomy, served as the use case. Surgeons and trainees performing laparoscopic cholecystectomy interacted with the AI platform and completed a 5-point Likert scale survey to evaluate the educational value, usability, and design of the platform. RESULTS Twenty participants (11 surgeons and 9 trainees) evaluated the platform intraoperatively (n = 10) and postoperatively (n = 11). The majority agreed or strongly agreed that AI is an effective adjunct to surgical training (81%; neutral = 10%), effective for providing real-time feedback (70%; neutral = 20%) and postoperative feedback (73%; neutral = 27%), and capable of improving surgeon confidence (67%; neutral = 29%). Only 40% (neutral = 50%) and 57% (neutral = 43%) believed that the tool is effective in improving intraoperative decisions and performance, or beneficial for patient care, respectively. Overall, 38% (neutral = 43%) reported they would use the platform consistently if available. The majority agreed or strongly agreed that the platform was easy to use (81%; neutral = 14%) and has acceptable resolution (62%; neutral = 24%), while 30% (neutral = 20%) reported that it disrupted the OR workflow and 20% (neutral = 0%) reported significant time lag. All respondents reported that such a system should be available "on-demand" to turn on/off at their discretion. CONCLUSIONS Most found AI to be a useful tool for providing support and feedback to surgeons, despite several implementation obstacles. These findings will inform the future design and usability of the technology in order to optimize its clinical impact and adoption by end-users.
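The "% agreed or strongly agreed" figures reported above come from collapsing 5-point Likert responses into a top-two-box rate. A minimal sketch with invented response data (not the study's raw responses):

```python
# Collapse 5-point Likert responses into a top-two-box agreement rate.
from collections import Counter

responses = ["strongly agree", "agree", "neutral", "agree", "disagree",
             "agree", "strongly agree", "neutral", "agree", "agree"]

tally = Counter(responses)
agree_pct = 100 * (tally["agree"] + tally["strongly agree"]) / len(responses)
# -> 70.0 (% agreed or strongly agreed)
```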
Affiliation(s)
- M Saif Hameed
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada.
- Simon Laplante
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Caterina Masino
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Muhammad Uzair Khalid
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Haochi Zhang
- DATA Team, University Health Network, Toronto, ON, Canada
- Jaryd Hunter
- DATA Team, University Health Network, Toronto, ON, Canada
- Andras B Fecso
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Michael Brudno
- DATA Team, University Health Network, Toronto, ON, Canada
- Amin Madani
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
6. Park JJ, Doiphode N, Zhang X, Pan L, Blue R, Shi J, Buch VP. Developing the surgeon-machine interface: using a novel instance-segmentation framework for intraoperative landmark labelling. Front Surg 2023; 10:1259756. PMID: 37936949; PMCID: PMC10626480; DOI: 10.3389/fsurg.2023.1259756.
Abstract
Introduction The utilisation of artificial intelligence (AI) augments intraoperative safety, surgical training, and patient outcomes. We introduce the term Surgeon-Machine Interface (SMI) to describe this innovative intersection between surgeons and machine inference. A custom deep computer vision (CV) architecture within a sparse labelling paradigm was developed, specifically tailored to conceptualise the SMI. This platform demonstrates the ability to perform instance segmentation on anatomical landmarks and tools from a single open spinal dural arteriovenous fistula (dAVF) surgery video dataset. Methods Our custom deep convolutional neural network was based on the SOLOv2 architecture for precise, instance-level segmentation of surgical video data. The test video consisted of 8520 frames, with sparse labelling of only 133 frames annotated for training. Accuracy and inference time, assessed using F1-score and mean Average Precision (mAP), were compared against current state-of-the-art architectures on a separate test set of 85 additionally annotated frames. Results Our SMI demonstrated superior accuracy and computing speed compared to these frameworks. The F1-score and mAP achieved by our platform were 17% and 15.2% respectively, surpassing MaskRCNN (15.2%, 13.9%), YOLOv3 (5.4%, 11.9%), and SOLOv2 (3.1%, 10.4%). Considering detections that exceeded the Intersection over Union threshold of 50%, our platform achieved an F1-score of 44.2% and mAP of 46.3%, outperforming MaskRCNN (41.3%, 43.5%), YOLOv3 (15%, 34.1%), and SOLOv2 (9%, 32.3%). Our platform also demonstrated the fastest inference time (88 ms), compared to MaskRCNN (90 ms), SOLOv2 (100 ms), and YOLOv3 (106 ms). Finally, the minimal training set yielded good generalisation performance: our architecture successfully identified objects in frames that were not included in the training or validation sets, indicating its ability to handle out-of-domain scenarios. Discussion We present our development of an innovative intraoperative SMI to demonstrate the future promise of advanced CV in the surgical domain. Through successful implementation in a microscopic dAVF surgery, our framework demonstrates superior performance over current state-of-the-art segmentation architectures in intraoperative landmark guidance with high sample efficiency, representing the most advanced AI-enabled surgical inference platform to date. Our future goals include transfer-learning paradigms for scaling to additional surgery types, addressing clinical and technical limitations for real-time decoding, and ultimately enabling a real-time neurosurgical guidance platform.
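The "Intersection over Union threshold of 50%" gate used in the results above works by matching each predicted instance to ground truth only when their overlap ratio clears 0.5. A minimal sketch using axis-aligned boxes as stand-ins for instance masks; the coordinates are purely illustrative:

```python
# IoU between two (x1, y1, x2, y2) boxes, and the 50% detection gate.

def iou(a, b):
    """Intersection over Union of two axis-aligned boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

pred, gt = (0, 0, 10, 10), (5, 0, 15, 10)
score = iou(pred, gt)                 # overlap 50, union 150 -> 1/3
counts_as_detection = score >= 0.5    # False: below the 50% gate
```

F1 and mAP at the 50% threshold are then computed over these matched/unmatched detections; for real masks the intersection and union are taken pixel-wise rather than over boxes.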
Affiliation(s)
- Jay J. Park
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
- Centre for Global Health, Usher Institute, Edinburgh Medical School, The University of Edinburgh, Edinburgh, United Kingdom
- Nehal Doiphode
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Xiao Zhang
- Department of Computer Science, University of Chicago, Chicago, IL, United States
- Lishuo Pan
- Department of Computer Science, Brown University, Providence, RI, United States
- Rachel Blue
- Department of Neurosurgery, Perelman School of Medicine at The University of Pennsylvania, Philadelphia, PA, United States
- Jianbo Shi
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Vivek P. Buch
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
7. Gupta R, Kumari S, Senapati A, Ambasta RK, Kumar P. New era of artificial intelligence and machine learning-based detection, diagnosis, and therapeutics in Parkinson's disease. Ageing Res Rev 2023; 90:102013. PMID: 37429545; DOI: 10.1016/j.arr.2023.102013.
Abstract
Parkinson's disease (PD) is characterized by the loss of neuronal cells, which leads to synaptic dysfunction and cognitive deficits. Despite advancements in treatment strategies, the management of PD remains challenging. Early prediction and diagnosis of PD are of utmost importance for its effective management, yet distinguishing patients with PD from healthy individuals remains difficult early in the disease. To address these challenges, artificial intelligence (AI) and machine learning (ML) models have been applied to the diagnosis, prediction, and treatment of PD, and more recently to its classification based on neuroimaging methods, speech recordings, gait abnormalities, and other measures. Herein, we briefly discuss the role of AI and ML in the diagnosis and treatment of PD and in the identification of novel biomarkers of its progression, and we highlight their role in PD management through altered lipidomics and the gut-brain axis. We explain early PD detection with AI and ML algorithms based on speech recordings, handwriting patterns, gait abnormalities, and neuroimaging techniques. Further, the review discusses the potential roles of the metaverse, the Internet of Things, and electronic health records in effective PD management to improve quality of life. Lastly, we focus on the implementation of AI and ML algorithms in neurosurgical processes and drug discovery.
Affiliation(s)
- Rohan Gupta
- Molecular Neuroscience and Functional Genomics Laboratory, Department of Biotechnology, Delhi Technological University, Delhi, India
- Smita Kumari
- Molecular Neuroscience and Functional Genomics Laboratory, Department of Biotechnology, Delhi Technological University, Delhi, India
- Rashmi K Ambasta
- Molecular Neuroscience and Functional Genomics Laboratory, Department of Biotechnology, Delhi Technological University, Delhi, India
- Pravir Kumar
- Molecular Neuroscience and Functional Genomics Laboratory, Department of Biotechnology, Delhi Technological University, Delhi, India
8. Morris MX, Rajesh A, Asaad M, Hassan A, Saadoun R, Butler CE. Deep Learning Applications in Surgery: Current Uses and Future Directions. Am Surg 2023; 89:36-42. PMID: 35567312; DOI: 10.1177/00031348221101490.
Abstract
Deep learning (DL) is a subset of machine learning that is rapidly gaining traction in surgical fields. Its tremendous capacity for powerful data-driven problem-solving has generated computational breakthroughs in many realms, with the fields of medicine and surgery becoming increasingly prominent avenues. Through its multi-layer architecture of interconnected neural networks, DL enables feature extraction and pattern recognition of highly complex and large-volume data. Across various surgical specialties, DL is being applied to optimize both preoperative planning and intraoperative performance in new and innovative ways. Surgeons are now able to integrate deep learning tools into their practice to improve patient safety and outcomes. Through this review, we explore the applications of deep learning in surgery and related subspecialties with an aim to shed light on the practical utilization of this technology in the present and near future.
Affiliation(s)
- Miranda X Morris
- Duke University School of Medicine, Durham, NC, USA
- Duke Pratt School of Engineering, Durham, NC, USA
- Aashish Rajesh
- Department of Surgery, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Malke Asaad
- Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Abbas Hassan
- Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Rakan Saadoun
- Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Charles E Butler
- Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
9. Fang L, Mou L, Gu Y, Hu Y, Chen B, Chen X, Wang Y, Liu J, Zhao Y. Global-local multi-stage temporal convolutional network for cataract surgery phase recognition. Biomed Eng Online 2022; 21:82. PMID: 36451164; PMCID: PMC9710114; DOI: 10.1186/s12938-022-01048-w.
Abstract
BACKGROUND Surgical video phase recognition is an essential technique in computer-assisted surgical systems for monitoring surgical procedures, which can assist surgeons in standardizing procedures and enhancing postsurgical assessment and indexing. However, the high similarity between the phases and temporal variations of cataract videos still poses the greatest challenge for video phase recognition. METHODS In this paper, we introduce a global-local multi-stage temporal convolutional network (GL-MSTCN) to explore the subtle differences between high similarity surgical phases and mitigate the temporal variations of surgical videos. The presented work consists of a triple-stream network (i.e., pupil stream, instrument stream, and video frame stream) and a multi-stage temporal convolutional network. The triple-stream network first detects the pupil and surgical instruments regions in the frame separately and then obtains the fine-grained semantic features of the video frames. The proposed multi-stage temporal convolutional network improves the surgical phase recognition performance by capturing longer time series features through dilated convolutional layers with varying receptive fields. RESULTS Our method is thoroughly validated on the CSVideo dataset with 32 cataract surgery videos and the public Cataract101 dataset with 101 cataract surgery videos, outperforming state-of-the-art approaches with 95.8% and 96.5% accuracy, respectively. CONCLUSIONS The experimental results show that the use of global and local feature information can effectively enhance the model to explore fine-grained features and mitigate temporal and spatial variations, thus improving the surgical phase recognition performance of the proposed GL-MSTCN.
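The "dilated convolutional layers with varying receptive fields" that the abstract credits for capturing longer time series can be understood with simple receptive-field arithmetic: stacking kernel-3 layers with dilations 1, 2, 4, ... grows the visible temporal window exponentially with depth. The layer counts below are illustrative, not the paper's configuration:

```python
# Receptive field of a stack of dilated 1D convolutions (kernel size 3,
# dilation doubling per layer), as used in TCN-style architectures.

def receptive_field(num_layers, kernel=3):
    rf = 1
    for layer in range(num_layers):
        dilation = 2 ** layer
        rf += (kernel - 1) * dilation
    return rf

# Each added layer roughly doubles the temporal context:
fields = [receptive_field(n) for n in (1, 2, 4, 8)]
# -> [3, 7, 31, 511] frames of context
```

This is why a multi-stage TCN can relate a frame to events hundreds of frames earlier without the cost of a recurrent network, which is the property the GL-MSTCN exploits for long surgical phases.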
Affiliation(s)
- Lixin Fang
- grid.469325.f0000 0004 1761 325XCollege of Mechanical Engineering, Zhejiang University of Technology, Hangzhou, 310014 China ,grid.9227.e0000000119573309Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Lei Mou
- grid.9227.e0000000119573309Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Yuanyuan Gu
- grid.9227.e0000000119573309Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China ,grid.9227.e0000000119573309Zhejiang Engineering Research Center for Biomedical Materials, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315300 China
| | - Yan Hu
- grid.263817.90000 0004 1773 1790Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
| | - Bang Chen
- grid.9227.e0000000119573309Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Xu Chen
- Department of Ophthalmology, Shanghai Aier Eye Hospital, Shanghai, China ,Department of Ophthalmology, Shanghai Aier Qingliang Eye Hospital, Shanghai, China ,grid.258164.c0000 0004 1790 3548Aier Eye Hospital, Jinan University, No. 601, Huangpu Road West, Guangzhou, China ,grid.216417.70000 0001 0379 7164Aier School of Ophthalmology, Central South University Changsha, Changsha, Hunan China
| | - Yang Wang
- grid.9227.e0000000119573309Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
| | - Jiang Liu
- grid.263817.90000 0004 1773 1790Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
| | - Yitian Zhao
- grid.9227.e0000000119573309Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China ,grid.9227.e0000000119573309Zhejiang Engineering Research Center for Biomedical Materials, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315300 China
| |