1
Swain BP, Nag DS, Anand R, Kumar H, Ganguly PK, Singh N. Current evidence on artificial intelligence in regional anesthesia. World J Clin Cases 2024; 12:6613-6619. [DOI: 10.12998/wjcc.v12.i33.6613] [Received: 06/18/2024] [Revised: 09/11/2024] [Accepted: 09/19/2024] [Indexed: 09/27/2024]
Abstract
The recent advancement in regional anesthesia (RA) has been largely attributed to ultrasound technology. However, the safety and efficiency of ultrasound-guided nerve blocks depend upon the skill and experience of the performer. Even with adequate training, experience, and knowledge, human limitations such as fatigue, failure to recognize the correct anatomical structure, and unintentional needle or probe movement can hinder the overall effectiveness of RA. The integration of artificial intelligence (AI) into RA practice promises to overcome these human limitations. Machine learning, an integral part of AI, can improve its performance through continuous learning and experience, much like the human brain. It enables computers to recognize images and patterns, which is specifically useful for identifying anatomical structures during the performance of RA. AI can provide real-time guidance to clinicians by highlighting important anatomical structures on ultrasound images, and it can also assist in needle tracking and accurate deposition of local anesthetics. The future of RA with AI integration appears promising, yet obstacles such as device malfunction, data privacy, regulatory barriers, and cost concerns may deter its clinical implementation. This mini-review discusses the current applications, future directions, and barriers to the application of AI in RA practice.
Affiliation(s)
- Bhanu Pratap Swain
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
- Department of Anesthesiology, Manipal Tata Medical College, Jamshedpur 831017, India
- Deb Sanjay Nag
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
- Rishi Anand
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
- Department of Anesthesiology, Manipal Tata Medical College, Jamshedpur 831017, India
- Himanshu Kumar
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
- Department of Anesthesiology, Manipal Tata Medical College, Jamshedpur 831017, India
- Niharika Singh
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
2
Kowa CY, Morecroft M, Macfarlane AJR, Burckett-St Laurent D, Pawa A, West S, Margetts S, Haslam N, Ashken T, Sebastian MP, Thottungal A, Womack J, Noble JA, Higham H, Bowness JS. Prospective randomized evaluation of the sustained impact of assistive artificial intelligence on anesthetists' ultrasound scanning for regional anesthesia. BMJ Surgery, Interventions, & Health Technologies 2024; 6:e000264. [PMID: 39430867] [PMCID: PMC11487881] [DOI: 10.1136/bmjsit-2024-000264] [Received: 01/30/2024] [Accepted: 09/10/2024] [Indexed: 10/22/2024]
Abstract
Objectives Ultrasound-guided regional anesthesia (UGRA) relies on acquiring and interpreting an appropriate view of sonoanatomy. Artificial intelligence (AI) has the potential to aid this by applying a color overlay to key sonoanatomical structures. The primary aim was to determine whether an AI-generated color overlay was associated with a difference in participants' ability to identify an appropriate block view over a 2-month period after a standardized teaching session (as judged by a blinded assessor). Secondary outcomes included the ability to identify an appropriate block view (unblinded assessor), global rating score, and participant confidence scores. Design Randomized, partially blinded, prospective cross-over study. Setting Simulation scans on healthy volunteers. Initial assessments on 29-30 November 2022, with follow-up on 25-27 January 2023. Participants 57 junior anesthetists undertook initial assessments and 51 (89.47%) returned at 2 months. Intervention Participants performed ultrasound scans for six peripheral nerve blocks, with AI assistance randomized to half of the blocks. Cross-over assignment was employed at 2 months. Main outcome measures Blinded experts assessed whether the block view acquired was acceptable (yes/no). Unblinded experts also assessed this parameter and provided a global performance rating (0-100). Participants reported scan confidence (0-100). Results AI assistance was associated with a higher rate of appropriate block view acquisition in both blinded and unblinded assessments (p=0.02 and p<0.01, respectively). Participant confidence and expert rating scores were superior throughout (all p<0.01). Conclusions Assistive AI was associated with superior ultrasound scanning performance 2 months after formal teaching. It may aid the application of sonoanatomical knowledge and skills gained in teaching, supporting delivery of UGRA beyond the immediate post-teaching period.
Trial registration number NCT05583032 (www.clinicaltrials.gov).
Affiliation(s)
- Chao-Ying Kowa
- Department of Anaesthesia, The Royal London Hospital, London, UK
- Megan Morecroft
- Faculty of Medicine, Health & Life Sciences, University of Swansea, Swansea, UK
- Alan J R Macfarlane
- Department of Anaesthesia, Glasgow Royal Infirmary, Glasgow, UK
- School of Medicine, Dentistry & Nursing, University of Glasgow, Glasgow, UK
- Amit Pawa
- Department of Medicine and Perioperative Medicine, Guy’s and St Thomas’ NHS Foundation Trust, London, UK
- Faculty of Life Sciences and Medicine, King’s College London, London, UK
- Simeon West
- Department of Anaesthesia, University College London Hospitals NHS Foundation Trust, London, UK
- Nat Haslam
- Department of Anaesthesia, South Tyneside and Sunderland NHS Foundation Trust, South Shields, UK
- Toby Ashken
- Department of Anaesthesia, University College London Hospitals NHS Foundation Trust, London, UK
- Maria Paz Sebastian
- Department of Anaesthetics, Royal National Orthopaedic Hospital NHS Trust, Stanmore, UK
- Athmaja Thottungal
- Department of Anaesthesia and Pain Management, East Kent Hospitals University NHS Foundation Trust, Canterbury, UK
- Jono Womack
- Department of Anaesthesia, Royal Victoria Infirmary, Newcastle upon Tyne, UK
- Helen Higham
- Nuffield Department of Clinical Anaesthesia, University of Oxford, Oxford, UK
- Department of Anaesthesia, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- James S Bowness
- Department of Anaesthesia, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Targeted Intervention, University College London, London, UK
3
Zhang C, He J, Liang X, Shi Q, Peng L, Wang S, He J, Xu J. Deep learning models for the prediction of acute postoperative pain in PACU for video-assisted thoracoscopic surgery. BMC Med Res Methodol 2024; 24:232. [PMID: 39375589] [PMCID: PMC11457357] [DOI: 10.1186/s12874-024-02357-5] [Received: 12/02/2023] [Accepted: 09/27/2024] [Indexed: 10/09/2024]
Abstract
BACKGROUND Postoperative pain is a prevalent symptom experienced by patients undergoing surgical procedures. This study aims to develop deep learning algorithms for predicting acute postoperative pain using both essential patient details and real-time vital sign data during surgery. METHODS Through a retrospective observational approach, we utilized Graph Attention Network (GAT) and Graph Transformer Network (GTN) deep learning algorithms to construct the DoseFormer model while incorporating an attention mechanism. This model employed patient information and intraoperative vital signs obtained during video-assisted thoracoscopic surgery (VATS) to anticipate postoperative pain. By categorizing the static and dynamic data, the DoseFormer model performed binary classification to predict the likelihood of acute postoperative pain. RESULTS A total of 1758 patients were initially included, leaving 1552 patients after data cleaning. These patients were then divided into a training set (n = 931) and a testing set (n = 621). In the testing set, the DoseFormer model exhibited a significantly higher AUROC (0.98) compared with classical machine learning algorithms. Furthermore, the DoseFormer model displayed a significantly higher F1 value (0.85) in comparison to other classical machine learning algorithms. Notably, the anesthesiologists' F1 values (attending: 0.49, fellow: 0.43, resident: 0.16) were significantly lower than those of the DoseFormer model in predicting acute postoperative pain. CONCLUSIONS A deep learning model can predict acute postoperative pain events based on patients' basic information and intraoperative vital signs.
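The F1 comparison in this abstract (model 0.85 vs. clinicians 0.16-0.49) follows the standard binary-classification definition of F1; a minimal sketch in Python, with invented labels rather than the study's data:

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class (1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives -> precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical labels: 1 = acute postoperative pain occurred
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
print(f1_score(y_true, y_pred))  # tp=3, fp=1, fn=1 -> 0.75
```

F1 is often reported instead of raw accuracy when the positive class (a pain event) is the minority outcome, since a trivial "no pain" predictor can score high accuracy while having F1 of zero.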
Affiliation(s)
- Cao Zhang
- Department of Anesthesiology, The Fourth Affiliated Hospital, Zhejiang University School of Medicine, Yiwu, Zhejiang, China.
- Zhejiang University School of Medicine, Hangzhou, China.
- Jiangqin He
- Department of Nursing, The Fourth Affiliated Hospital, Zhejiang University School of Medicine, Yiwu, Zhejiang, China
- Xingyuan Liang
- School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu, China
- Qinye Shi
- Department of Anesthesiology, The Fourth Affiliated Hospital, Zhejiang University School of Medicine, Yiwu, Zhejiang, China
- Lijia Peng
- Department of Anesthesiology, The Fourth Affiliated Hospital, Zhejiang University School of Medicine, Yiwu, Zhejiang, China
- Shuai Wang
- School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu, China
- Jiannan He
- Department of Anesthesiology, The Fourth Affiliated Hospital, Zhejiang University School of Medicine, Yiwu, Zhejiang, China
- Jianhong Xu
- Department of Anesthesiology, The Fourth Affiliated Hospital, Zhejiang University School of Medicine, Yiwu, Zhejiang, China
4
Marino M, Hagh R, Hamrin Senorski E, Longo UG, Oeding JF, Nellgard B, Szell A, Samuelsson K. Artificial intelligence-assisted ultrasound-guided regional anaesthesia: An explorative scoping review. J Exp Orthop 2024; 11:e12104. [PMID: 39144578] [PMCID: PMC11322584] [DOI: 10.1002/jeo2.12104] [Received: 03/13/2024] [Revised: 05/17/2024] [Accepted: 05/20/2024] [Indexed: 08/16/2024]
Abstract
Purpose The present study reviews the available scientific literature on artificial intelligence (AI)-assisted ultrasound-guided regional anaesthesia (UGRA) and evaluates the reported intraprocedural parameters and postprocedural outcomes. Methods A literature search was performed on 19 September 2023, using the Medline, EMBASE, CINAHL, Cochrane Library and Google Scholar databases by experts in electronic searching. All study designs were considered, with no restrictions regarding patient characteristics or cohort size. Outcomes assessed included the accuracy of AI-model tracking, success at the first attempt, differences in outcomes between AI-assisted and unassisted UGRA, operator feedback and case-report data. Results A joint adaptive median binary pattern (JAMBP) has been applied to improve the tracking procedure, while a particle filter (PF) is involved in feature extraction. JAMBP combined with PF was most accurate on all images for landmark identification, with accuracy scores of 0.83, 0.93 and 0.93 on original, preprocessed and filtered images, respectively. Evaluation of spinal needle insertion revealed first-attempt success in most patients. When comparing AI application versus UGRA alone, a statistically significant difference (p < 0.05) was found for correct block view, correct structure identification, and decreases in mean injection time, needle track adjustments and bone encounters, in favour of AI assistance. Assessment of operator feedback revealed that expert and nonexpert feedback was overall positive. Conclusion AI appears promising to enhance UGRA and to positively influence operator training. AI application in UGRA may improve the identification of anatomical structures and provide guidance for needle placement, reducing the risk of complications and improving patient outcomes. Level of Evidence Level IV.
Affiliation(s)
- Martina Marino
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, Roma, Italy
- Rebecca Hagh
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Eric Hamrin Senorski
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Health and Rehabilitation, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Umile Giuseppe Longo
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, Roma, Italy
- Jacob F. Oeding
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- School of Medicine, Mayo Clinic Alix School of Medicine, Rochester, Minnesota, USA
- Bengt Nellgard
- Department of Anesthesiology and Intensive Care, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Anita Szell
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Anesthesiology and Intensive Care, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Kristian Samuelsson
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
5
Sugino T, Onogi S, Oishi R, Hanayama C, Inoue S, Ishida S, Yao Y, Ogasawara N, Murakawa M, Nakajima Y. Investigation of Appropriate Scaling of Networks and Images for Convolutional Neural Network-Based Nerve Detection in Ultrasound-Guided Nerve Blocks. Sensors (Basel) 2024; 24:3696. [PMID: 38894486] [PMCID: PMC11175212] [DOI: 10.3390/s24113696] [Received: 04/28/2024] [Revised: 05/31/2024] [Accepted: 06/04/2024] [Indexed: 06/21/2024]
Abstract
Ultrasound imaging is an essential tool in anesthesiology, particularly for ultrasound-guided peripheral nerve blocks (US-PNBs). However, challenges such as speckle noise, acoustic shadows, and variability in nerve appearance complicate the accurate localization of nerve tissues. To address these issues, this study introduces a deep convolutional neural network (DCNN), specifically Scaled-YOLOv4, and investigates an appropriate network model and input image scaling for nerve detection in ultrasound images. Utilizing two datasets, one public and one original, we evaluated the effects of model scale and input image size on detection performance. Our findings reveal that smaller input images and larger model scales significantly improve detection accuracy. The optimal configuration of model size and input image size not only achieved high detection accuracy but also demonstrated real-time processing capability.
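Detection accuracy for a one-stage detector such as Scaled-YOLOv4 is conventionally judged by the intersection over union (IoU) between predicted and ground-truth bounding boxes; a minimal sketch of the computation (the box coordinates below are invented for illustration, not taken from the study):

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle, clamped to zero width/height if boxes are disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

ground_truth = (10, 10, 50, 50)  # hypothetical nerve bounding box
prediction = (20, 10, 60, 50)    # hypothetical detector output
print(box_iou(ground_truth, prediction))  # 1200/2000 = 0.6
```

A detection is typically counted as correct when its IoU with the ground truth exceeds a threshold (0.5 is a common choice), which is how per-image detection accuracy figures are usually derived.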
Affiliation(s)
- Takaaki Sugino
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
- Shinya Onogi
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
- Rieko Oishi
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Chie Hanayama
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Satoki Inoue
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Shinjiro Ishida
- TCC Media Lab Co., Ltd., Tokyo 192-0152, Japan
- Yuhang Yao
- IOT SOFT Co., Ltd., Tokyo 103-0023, Japan
- Masahiro Murakawa
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Yoshikazu Nakajima
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
6
Gairola S, Solanki SL, Patkar S, Goel M. Artificial Intelligence in Perioperative Planning and Management of Liver Resection. Indian J Surg Oncol 2024; 15:186-195. [PMID: 38818006] [PMCID: PMC11133260] [DOI: 10.1007/s13193-024-01883-4] [Received: 09/18/2023] [Accepted: 01/16/2024] [Indexed: 06/01/2024]
Abstract
Artificial intelligence (AI) is a specialty within computer science concerned with creating systems that can replicate the intelligence and problem-solving abilities of the human mind. AI includes a diverse array of techniques and approaches such as machine learning, neural networks, natural language processing, robotics, and expert systems. An electronic literature search was conducted using the "PubMed" and "Google Scholar" databases. The period for the search was from 2000 to June 2023. The search terms included "artificial intelligence", "machine learning", "liver cancers", "liver tumors", "hepatectomy", "perioperative" and their synonyms in various combinations, as well as all MeSH terms. The extracted articles were reviewed in a step-wise manner to identify relevant studies. A total of 148 articles were identified after the initial literature search. The initial review included screening article titles for relevance and identifying duplicates. Finally, 65 articles were included in this review. The future of AI in liver cancer planning and management holds immense promise. AI-driven advancements will increasingly enable precise tumour detection, localisation, and characterisation through enhanced image analysis. ML algorithms will predict patient-specific treatment responses and complications, allowing for tailored therapies. Surgical robots and AI-guided procedures will enhance the precision of liver resections, reducing risks and improving outcomes. AI will also streamline patient monitoring and enable better hemodynamic management, with early detection of recurrence or complications. Moreover, AI will facilitate data-driven research, accelerating the development of novel treatments and therapies. Ultimately, AI's integration will revolutionise liver cancer care, offering personalised, efficient and effective solutions that improve patients' quality of life and survival rates.
Affiliation(s)
- Shruti Gairola
- Department of Anaesthesiology, Critical Care and Pain, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- Sohan Lal Solanki
- Department of Anaesthesiology, Critical Care and Pain, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- Shraddha Patkar
- Division of Hepatobiliary Surgical Oncology, Department of Surgical Oncology, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- Mahesh Goel
- Division of Hepatobiliary Surgical Oncology, Department of Surgical Oncology, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
7
Liu Z, Yang B, Shen Y, Ni X, Tsaftaris SA, Zhou H. Long-short diffeomorphism memory network for weakly-supervised ultrasound landmark tracking. Med Image Anal 2024; 94:103138. [PMID: 38479152] [DOI: 10.1016/j.media.2024.103138] [Received: 07/03/2023] [Revised: 01/26/2024] [Accepted: 03/05/2024] [Indexed: 04/16/2024]
Abstract
Ultrasound is a promising medical imaging modality benefiting from low cost and real-time acquisition. Accurate tracking of an anatomical landmark is of high interest for various clinical workflows such as minimally invasive surgery and ultrasound-guided radiation therapy. However, tracking an anatomical landmark accurately in ultrasound video is very challenging, owing to landmark deformation, visual ambiguity, and partial observation. In this paper, we propose a long-short diffeomorphism memory network (LSDM), a multi-task framework with an auxiliary learnable deformation prior to support accurate landmark tracking. Specifically, we design a novel diffeomorphic representation, which contains both long and short temporal information stored in separate memory banks for delineating motion margins and reducing cumulative errors. We further propose an expectation maximization memory alignment (EMMA) algorithm to iteratively optimize both the long and short deformation memory, updating the memory queue to mitigate local anatomical ambiguity. The proposed multi-task system can be trained in a weakly-supervised manner, requiring only a few landmark annotations for tracking and zero annotation for deformation learning. We conduct extensive experiments on both public and private ultrasound landmark tracking datasets. Experimental results show that LSDM achieves better or competitive landmark tracking performance with strong generalization capability across different scanner types and ultrasound modalities, compared with other state-of-the-art methods.
Affiliation(s)
- Zhihua Liu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
- Bin Yang
- Department of Cardiovascular Sciences, University Hospitals of Leicester NHS Trust, Leicester, LE1 9HN, UK; Nantong-Leicester Joint Institute of Kidney Science, Department of Nephrology, Affiliated Hospital of Nantong University, Nantong, 226001, China
- Yan Shen
- Department of Emergency Medicine, Affiliated Hospital of Nantong University, Nantong, 226001, China
- Xuejun Ni
- Department of Emergency Medicine, Affiliated Hospital of Nantong University, Nantong, 226001, China
- Sotirios A Tsaftaris
- School of Engineering, The University of Edinburgh, Edinburgh EH9 3FG, UK; The Alan Turing Institute, London NW1 2DB, UK
- Huiyu Zhou
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
8
Bowness JS, Metcalfe D, El-Boghdadly K, Thurley N, Morecroft M, Hartley T, Krawczyk J, Noble JA, Higham H. Artificial intelligence for ultrasound scanning in regional anaesthesia: a scoping review of the evidence from multiple disciplines. Br J Anaesth 2024; 132:1049-1062. [PMID: 38448269] [PMCID: PMC11103083] [DOI: 10.1016/j.bja.2024.01.036] [Received: 11/28/2023] [Revised: 01/09/2024] [Accepted: 01/24/2024] [Indexed: 03/08/2024]
Abstract
BACKGROUND Artificial intelligence (AI) for ultrasound scanning in regional anaesthesia is a rapidly developing interdisciplinary field. There is a risk that work could be undertaken in parallel by different elements of the community but with a lack of knowledge transfer between disciplines, leading to repetition and diverging methodologies. This scoping review aimed to identify and map the available literature on the accuracy and utility of AI systems for ultrasound scanning in regional anaesthesia. METHODS A literature search was conducted using Medline, Embase, CINAHL, IEEE Xplore, and ACM Digital Library. Clinical trial registries, a registry of doctoral theses, regulatory authority databases, and websites of learned societies in the field were searched. Online commercial sources were also reviewed. RESULTS In total, 13,014 sources were identified; 116 were included for full-text review. A marked change in AI techniques was noted in 2016-17, from which point on the predominant technique used was deep learning. Methods of evaluating accuracy are variable, meaning it is impossible to compare the performance of one model with another. Evaluations of utility are more comparable, but predominantly gained from the simulation setting with limited clinical data on efficacy or safety. Study methodology and reporting lack standardisation. CONCLUSIONS There is a lack of structure to the evaluation of accuracy and utility of AI for ultrasound scanning in regional anaesthesia, which hinders rigorous appraisal and clinical uptake. A framework for consistent evaluation is needed to inform model evaluation, allow comparison between approaches/models, and facilitate appropriate clinical adoption.
Affiliation(s)
- James S Bowness
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- David Metcalfe
- Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford, Oxford, UK; Emergency Medicine Research in Oxford (EMROx), Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Kariem El-Boghdadly
- Department of Anaesthesia and Peri-operative Medicine, Guy's & St Thomas's NHS Foundation Trust, London, UK; Centre for Human and Applied Physiological Sciences, King's College London, London, UK
- Neal Thurley
- Bodleian Health Care Libraries, University of Oxford, Oxford, UK
- Megan Morecroft
- Faculty of Medicine, Health & Life Sciences, University of Swansea, Swansea, UK
- Thomas Hartley
- Intelligent Ultrasound, Cardiff, UK
- Joanna Krawczyk
- Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Helen Higham
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Anaesthesia, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
9
Lopes S, Rocha G, Guimarães-Pereira L. Artificial intelligence and its clinical application in Anesthesiology: a systematic review. J Clin Monit Comput 2024; 38:247-259. [PMID: 37864754] [PMCID: PMC10995017] [DOI: 10.1007/s10877-023-01088-0] [Received: 06/11/2023] [Accepted: 10/04/2023] [Indexed: 10/23/2023]
Abstract
PURPOSE The application of artificial intelligence (AI) in medicine is quickly expanding. Despite the amount of evidence and promising results, a thorough overview of the current state of AI in the clinical practice of anesthesiology is needed. Therefore, our study aims to systematically review the application of AI in this context. METHODS A systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched Medline and Web of Science for articles published up to November 2022 using terms related to AI and the clinical practice of anesthesiology. Articles that involved animals, editorials, reviews, and studies with sample sizes lower than 10 patients were excluded. Characteristics and accuracy measures from each study were extracted. RESULTS A total of 46 articles were included in this review. We grouped them into 4 categories with regard to their clinical applicability: (1) depth of anesthesia monitoring; (2) image-guided techniques related to anesthesia; (3) prediction of events/risks related to anesthesia; (4) drug administration control. Each group was analyzed and the main findings summarized. Across all fields, the majority of AI methods tested showed performance superior to that of traditional methods. CONCLUSION AI systems are being integrated into anesthesiology clinical practice, enhancing medical professionals' decision-making, diagnostic accuracy, and therapeutic response.
Affiliation(s)
- Sara Lopes
- Department of Anesthesiology, Centro Hospitalar Universitário São João, Porto, Portugal.
- Gonçalo Rocha
- Surgery and Physiology Department, Faculty of Medicine, University of Porto, Porto, Portugal
- Luís Guimarães-Pereira
- Department of Anesthesiology, Centro Hospitalar Universitário São João, Porto, Portugal
- Surgery and Physiology Department, Faculty of Medicine, University of Porto, Porto, Portugal
10
Bowness JS, Morse R, Lewis O, Lloyd J, Burckett-St Laurent D, Bellew B, Macfarlane AJR, Pawa A, Taylor A, Noble JA, Higham H. Variability between human experts and artificial intelligence in identification of anatomical structures by ultrasound in regional anaesthesia: a framework for evaluation of assistive artificial intelligence. Br J Anaesth 2023; 132:S0007-0912(23)00542-1. [PMID: 39492288] [PMCID: PMC11103080] [DOI: 10.1016/j.bja.2023.09.023] [Received: 07/06/2023] [Revised: 08/25/2023] [Accepted: 09/19/2023] [Indexed: 06/15/2024]
Abstract
BACKGROUND ScanNav™ Anatomy Peripheral Nerve Block (ScanNav™) is an artificial intelligence (AI)-based device that produces a colour overlay on real-time B-mode ultrasound to highlight key anatomical structures for regional anaesthesia. This study compares the consistency of identification of sono-anatomical structures between expert ultrasonographers and ScanNav™. METHODS Nineteen experts in ultrasound-guided regional anaesthesia (UGRA) annotated 100 structures in 30 ultrasound videos across six anatomical regions. These annotations were compared with each other to produce a quantitative assessment of the level of agreement amongst human experts. The AI colour overlay was then compared with all expert annotations. Differences in human-human and human-AI agreement are presented for each structure class (artery, muscle, nerve, fascia/serosal plane) and structure. Clinical context is provided through subjective assessment data from UGRA experts. RESULTS For human-human and human-AI annotations, agreement was highest for arteries (mean Dice score 0.88/0.86), then muscles (0.80/0.77), and lowest for nerves (0.48/0.41). Wide discrepancy exists in consistency for different structures, in both human-human and human-AI comparisons; agreement was highest for the sartorius muscle (0.91/0.92) and lowest for the radial nerve (0.21/0.27). CONCLUSIONS Human experts and the AI system showed the same pattern of agreement in sono-anatomical structure identification. The clinical significance of the differences presented must be explored; however, the perception that human expert opinion is uniform must be challenged. Elements of this assessment framework could be used for other devices to allow consistent evaluations that inform clinical training and practice. Anaesthetists should be actively engaged in the development and adoption of new AI technology.
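The Dice scores used in this abstract to quantify human-human and human-AI agreement measure pixel-level overlap between two annotations of the same structure; a minimal dependency-free sketch, where each annotation is a set of (row, col) pixel coordinates (the toy masks are invented for illustration):

```python
def dice_score(mask_a: set, mask_b: set) -> float:
    """Dice similarity coefficient between two pixel sets:
    2*|A ∩ B| / (|A| + |B|). Defined as 1.0 when both masks are empty."""
    if not mask_a and not mask_b:
        return 1.0
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Two hypothetical expert annotations of the same nerve
expert_1 = {(r, c) for r in range(1, 3) for c in range(1, 3)}  # 4 pixels
expert_2 = {(r, c) for r in range(1, 3) for c in range(1, 4)}  # 6 pixels
print(dice_score(expert_1, expert_2))  # overlap of 4 pixels -> 2*4/(4+6) = 0.8
```

Because the metric is symmetric, the same function serves for both human-human and human-AI comparisons; low scores for small, ill-defined structures such as nerves partly reflect how sensitive the ratio is to a few disputed boundary pixels.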
Affiliation(s)
- James S Bowness
- Nuffield Department of Clinical Anaesthesia, University of Oxford, Oxford, UK; Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK.
- Owen Lewis
- Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- James Lloyd
- Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- Boyne Bellew
- Department of Surgery & Cancer, Imperial College London, London, UK; Department of Anaesthesia, Imperial College Healthcare NHS Trust, London, UK
- Alan J R Macfarlane
- Department of Anaesthesia, NHS Greater Glasgow & Clyde, Glasgow, UK; School of Medicine, Dentistry & Nursing, University of Glasgow, Glasgow, UK
- Amit Pawa
- Department of Anaesthesia, Guy's & St Thomas' NHS Foundation Trust, London, UK; Faculty of Life Sciences and Medicine, King's College London, London, UK
- J Alison Noble
- Institute for Biomedical Engineering, University of Oxford, Oxford, UK
- Helen Higham
- Nuffield Department of Clinical Anaesthesia, University of Oxford, Oxford, UK; Department of Anaesthesia, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
11
Zhao Y, Zheng S, Cai N, Zhang Q, Zhong H, Zhou Y, Zhang B, Wang G. Utility of Artificial Intelligence for Real-Time Anatomical Landmark Identification in Ultrasound-Guided Thoracic Paravertebral Block. J Digit Imaging 2023; 36:2051-2059. [PMID: 37291383 PMCID: PMC10501964 DOI: 10.1007/s10278-023-00851-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 05/03/2023] [Accepted: 05/08/2023] [Indexed: 06/10/2023] Open
Abstract
Thoracic paravertebral block (TPVB) is a common method of inducing perioperative analgesia in thoracic and abdominal surgery. Identifying anatomical structures in ultrasound images is very important especially for inexperienced anesthesiologists who are unfamiliar with the anatomy. Therefore, our aim was to develop an artificial neural network (ANN) to automatically identify (in real-time) anatomical structures in ultrasound images of TPVB. This study is a retrospective study using ultrasound scans (both video and standard still images) that we acquired. We marked the contours of the paravertebral space (PVS), lung, and bone in the TPVB ultrasound image. Based on the labeled ultrasound images, we used the U-net framework to train and create an ANN that enabled real-time identification of important anatomical structures in ultrasound images. A total of 742 ultrasound images were acquired and labeled in this study. In this ANN, the Intersection over Union (IoU) and Dice similarity coefficient (DSC or Dice coefficient) of the paravertebral space (PVS) were 0.75 and 0.86, respectively, the IoU and DSC of the lung were 0.85 and 0.92, respectively, and the IoU and DSC of the bone were 0.69 and 0.83, respectively. The accuracies of the PVS, lung, and bone were 91.7%, 95.4%, and 74.3%, respectively. For tenfold cross validation, the median interquartile range for PVS IoU and DSC was 0.773 and 0.87, respectively. There was no significant difference in the scores for the PVS, lung, and bone between the two anesthesiologists. We developed an ANN for the real-time automatic identification of thoracic paravertebral anatomy. The performance of the ANN was highly satisfactory. We conclude that AI has good prospects for use in TPVB. Clinical registration number: ChiCTR2200058470 (URL: http://www.chictr.org.cn/showproj.aspx?proj=152839 ; registration date: 2022-04-09).
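The two overlap metrics reported (IoU and DSC) are deterministically related, which gives a quick consistency check on such results. An illustrative sketch (an editor's addition, not the study's code):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else np.logical_and(a, b).sum() / union

def dsc_from_iou(j):
    """Dice and IoU are monotonically related: DSC = 2*IoU / (1 + IoU)."""
    return 2 * j / (1 + j)

# The paravertebral-space scores above are internally consistent:
# an IoU of 0.75 maps to a DSC of 2*0.75/1.75, i.e. about 0.86
pvs_dsc = dsc_from_iou(0.75)
```

The same check holds for the lung (IoU 0.85 gives DSC 0.92) and the bone (IoU 0.69 gives DSC 0.82, close to the reported 0.83).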
Affiliation(s)
- Yaoping Zhao
- Department of Anesthesiology, Beijing Jishuitan Hospital, No. 31 of Xinjiekou East Street, Xicheng District, Beijing, 100035, China
- Shaoqiang Zheng
- Department of Anesthesiology, Beijing Jishuitan Hospital, No. 31 of Xinjiekou East Street, Xicheng District, Beijing, 100035, China
- Nan Cai
- Department of Anesthesiology, Beijing Jishuitan Hospital, No. 31 of Xinjiekou East Street, Xicheng District, Beijing, 100035, China
- Qiang Zhang
- Department of Thoracic Surgery, Beijing Jishuitan Hospital, Beijing, 100035, China
- Hao Zhong
- Department of Anesthesiology, Beijing Jishuitan Hospital, No. 31 of Xinjiekou East Street, Xicheng District, Beijing, 100035, China
- Yan Zhou
- Department of Anesthesiology, Beijing Jishuitan Hospital, No. 31 of Xinjiekou East Street, Xicheng District, Beijing, 100035, China
- Bo Zhang
- AMIT Co., Ltd., Wuxi , Jiangsu, 214000, China
- Geng Wang
- Department of Anesthesiology, Beijing Jishuitan Hospital, No. 31 of Xinjiekou East Street, Xicheng District, Beijing, 100035, China.
12
Zhang TT, Shu H, Tang ZR, Lam KY, Chow CY, Chen XJ, Li A, Zheng YY. Weakly supervised real-time instance segmentation for ultrasound images of median nerves. Comput Biol Med 2023; 162:107057. [PMID: 37271112 DOI: 10.1016/j.compbiomed.2023.107057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 05/06/2023] [Accepted: 05/20/2023] [Indexed: 06/06/2023]
Abstract
Medical ultrasound technology has garnered significant attention in recent years, with ultrasound-guided regional anesthesia (UGRA) and carpal tunnel syndrome (CTS) diagnosis being two notable examples. Instance segmentation based on deep learning approaches is a promising choice to support the analysis of ultrasound data. However, many instance segmentation models cannot meet the requirements of ultrasound applications, e.g. real-time operation. Moreover, fully supervised instance segmentation models require large numbers of images and corresponding mask annotations for training, which can be time-consuming and labor-intensive in the case of medical ultrasound data. This paper proposes a novel weakly supervised framework, CoarseInst, to achieve real-time instance segmentation of ultrasound images with only box annotations. CoarseInst not only improves the network structure but also proposes a two-stage "coarse-to-fine" training strategy. Specifically, median nerves are used as the target application for UGRA and CTS. CoarseInst consists of two stages, with pseudo mask labels generated in the coarse mask generation stage for self-training. An object enhancement block is incorporated to mitigate the performance loss caused by parameter reduction in this stage. Additionally, we introduce a pair of loss functions, the amplification loss and the deflation loss, that work together to generate the masks. A center area mask searching algorithm is also proposed to generate labels for the deflation loss. In the self-training stage, a novel self-feature similarity loss is designed to generate more precise masks. Experimental results on a practical ultrasound dataset demonstrate that CoarseInst achieves better performance than several state-of-the-art fully supervised methods.
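CoarseInst's coarse mask generation is considerably more involved, but the core idea of weak supervision — deriving a pseudo mask from a box label and refining it later by self-training — can be sketched as follows (function name and shrink factor are hypothetical, not from the paper):

```python
import numpy as np

def box_to_pseudo_mask(shape, box, shrink=0.2):
    """Coarse pseudo label from a box annotation: mark the (slightly shrunken)
    box interior as foreground; a self-training stage would then refine it."""
    x0, y0, x1, y1 = box
    dx = (x1 - x0) * shrink / 2
    dy = (y1 - y0) * shrink / 2
    mask = np.zeros(shape, dtype=bool)
    mask[int(y0 + dy):int(y1 - dy), int(x0 + dx):int(x1 - dx)] = True
    return mask

# A 10x10 image with one box-annotated nerve
pseudo = box_to_pseudo_mask((10, 10), (1, 1, 9, 9))
```

Shrinking the box slightly reduces the number of background pixels mislabeled as foreground, which is the usual motivation for not using the raw box as the mask.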
Affiliation(s)
- Tian-Tian Zhang
- Department of Computer Science, City University of Hong Kong, Hong Kong Special Administrative Region.
- Hua Shu
- Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China.
- Zhi-Ri Tang
- Department of Computer Science, City University of Hong Kong, Hong Kong Special Administrative Region.
- Kam-Yiu Lam
- Department of Computer Science, City University of Hong Kong, Hong Kong Special Administrative Region.
- Chi-Yin Chow
- Social Mind Analytics (Research and Technology) Limited, Hong Kong Special Administrative Region.
- Xiao-Jun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China.
- Ao Li
- Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China.
- Yuan-Yi Zheng
- Department of Ultrasound, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai 200233, China.
13
Viderman D, Dossov M, Seitenov S, Lee MH. Artificial intelligence in ultrasound-guided regional anesthesia: A scoping review. Front Med (Lausanne) 2022; 9:994805. [PMID: 36388935 PMCID: PMC9640918 DOI: 10.3389/fmed.2022.994805] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 09/22/2022] [Indexed: 01/06/2024] Open
Abstract
BACKGROUND Regional anesthesia is increasingly used in acute postoperative pain management. Ultrasound has been used to facilitate the performance of the regional block, increase the percentage of successfully performed procedures, and reduce the complication rate. Artificial intelligence (AI) has been studied in many medical disciplines, achieving notable success, especially in radiology. The purpose of this review was to examine the evidence on the application of artificial intelligence for optimization and interpretation of the sonographic image, and visualization of needle advancement and injection of local anesthetic. METHODS To conduct this scoping review, we followed the PRISMA-S guidelines. We included studies if they met the following criteria: (1) application of artificial intelligence in ultrasound-guided regional anesthesia; (2) any human subject (of any age), object (manikin), or animal; (3) study design: prospective, retrospective, RCTs; (4) any method of regional anesthesia (epidural, spinal anesthesia, peripheral nerves); (5) any anatomical localization of regional anesthesia (any nerve or plexus); (6) any methods of artificial intelligence; (7) settings: any healthcare setting (medical centers, hospitals, clinics, laboratories). RESULTS The systematic searches identified 78 citations. After the removal of duplicates, 19 full-text articles were assessed, and 15 studies were eligible for inclusion in the review. CONCLUSIONS AI solutions might be useful in anatomical landmark identification, reducing or even avoiding possible complications. AI-guided solutions can improve the optimization and interpretation of the sonographic image, visualization of needle advancement, and injection of local anesthetic. AI-guided solutions might improve the training process in UGRA. Although significant progress has been made in the application of AI-guided UGRA, randomized controlled trials are still missing.
Affiliation(s)
- Dmitriy Viderman
- Department of Biomedical Sciences, Nazarbayev University School of Medicine, Nur-Sultan, Kazakhstan
- Mukhit Dossov
- Department of Anesthesiology and Critical Care, Presidential Hospital, Nur-Sultan, Kazakhstan
- Serik Seitenov
- Department of Anesthesiology and Critical Care, Presidential Hospital, Nur-Sultan, Kazakhstan
- Min-Ho Lee
- Department of Computer Sciences, Nazarbayev University School of Engineering and Digital Sciences, Nur-Sultan, Kazakhstan
14
Lockwood H, McLeod GA. A paired comparison of nerve dimensions using B-Mode ultrasound and shear wave elastography during regional anaesthesia. ULTRASOUND 2022; 30:346-354. [PMID: 36969534 PMCID: PMC10034658 DOI: 10.1177/1742271x221091726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Accepted: 02/26/2022] [Indexed: 11/15/2022]
Abstract
Introduction: Shear wave elastography (SWE) presents nerves in colour, but the dimensions of its colour maps have not been validated with paired B-Mode nerve images. Our primary objective was to define the bias and limits of agreement of SWE with B-Mode nerve diameter. Our secondary objectives were to compare nerve area and shape, and provide a clinical standard for future application of new colour imaging technologies such as artificial intelligence. Materials and Methods: Eleven combined ultrasound-guided regional nerve blocks were conducted using a dual-mode transducer. Two raters outlined nerve margins on 110 paired B-Mode and SWE images every second for 20 s before and during injection. Bias and limits of agreement were plotted on Bland-Altman plots. We hypothesized that the bias of nerve diameter would be <2.5% and that the percent limits of agreement would lie ±0.67% (2 SD) of the bias. Results: There was no difference in the bias (95% confidence interval (CI) limits of agreement) of nerve diameter measurement, 0.01 (−0.14 to 0.16) cm, P = 0.85, equivalent to a 1.4% (−56.6% to 59.5) % difference. The bias and limits of agreement were 0.03 (−0.08 to 0.15) cm2, P = 0.54 for cross-sectional nerve area; and 0.02 (−0.03 to 0.07), P = 0.45 for shape. Reliability (ICC) between raters was 0.96 (0.94–0.98) for B-Mode nerve area and 0.91 (0.83–0.95) for SWE nerve area. Conclusions: Nerve diameter measurement from B-Mode and SWE images fell within a priori measures of bias and limits of agreement.
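The Bland-Altman analysis at the centre of this study reduces to a mean difference (bias) and 95% limits of agreement at ±1.96 standard deviations. A minimal sketch (an editor's addition; the paired measurement values below are invented, not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy paired nerve-diameter measurements (cm), B-mode vs SWE
bmode = [0.52, 0.61, 0.48, 0.55, 0.59]
swe   = [0.50, 0.63, 0.47, 0.56, 0.57]
bias, lo, hi = bland_altman(bmode, swe)
```

A bias near zero with limits of agreement spanning it, as in the study's 0.01 (−0.14 to 0.16) cm result, indicates no systematic difference between the two imaging modes.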
Affiliation(s)
- Graeme A McLeod
- Institute of Academic Anesthesia, School of Medicine, University of Dundee, Dundee, UK
15
Artificial Intelligence: Innovation to Assist in the Identification of Sono-anatomy for Ultrasound-Guided Regional Anaesthesia. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2022; 1356:117-140. [PMID: 35146620 DOI: 10.1007/978-3-030-87779-8_6] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/09/2022]
Abstract
Ultrasound-guided regional anaesthesia (UGRA) involves the targeted deposition of local anaesthesia to inhibit the function of peripheral nerves. Ultrasound allows the visualisation of nerves and the surrounding structures, to guide needle insertion to a perineural or fascial plane end point for injection. However, it is challenging to develop the necessary skills to acquire and interpret optimal ultrasound images. Sound anatomical knowledge is required and human image analysis is fallible, limited by heuristic behaviours and fatigue, while its subjectivity leads to varied interpretation even amongst experts. Therefore, to maximise the potential benefit of ultrasound guidance, innovation in sono-anatomical identification is required. Artificial intelligence (AI) is rapidly infiltrating many aspects of everyday life. Advances related to medicine have been slower, in part because of the regulatory approval process needing to thoroughly evaluate the risk-benefit ratio of new devices. One area of AI to show significant promise is computer vision (a branch of AI dealing with how computers interpret the visual world), which is particularly relevant to medical image interpretation. AI includes the subfields of machine learning and deep learning, techniques used to interpret or label images. Deep learning systems may hold potential to support ultrasound image interpretation in UGRA but must be trained and validated on data prior to clinical use. Review of the current UGRA literature compares the success and generalisability of deep learning and non-deep learning approaches to image segmentation and explains how computers are able to track structures such as nerves through image frames. We conclude this review with a case study from industry (ScanNav Anatomy Peripheral Nerve Block; Intelligent Ultrasound Limited). This includes a more detailed discussion of the AI approach involved in this system and reviews current evidence of the system performance. The authors discuss how this technology may be best used to assist anaesthetists and what effects this may have on the future of learning and practice of UGRA. Finally, we discuss possible avenues for AI within UGRA and the associated implications.
16
Paris A, Hafiane A. Shape constraint function for artery tracking in ultrasound images. Comput Med Imaging Graph 2021; 93:101970. [PMID: 34428649 DOI: 10.1016/j.compmedimag.2021.101970] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 05/26/2021] [Accepted: 08/06/2021] [Indexed: 11/17/2022]
Abstract
Ultrasound guided regional anesthesia (UGRA) has emerged as a powerful technique for pain management in the operating theatre. It uses ultrasound imaging to visualize anatomical structures, the needle insertion and the delivery of the anesthetic around the targeted nerve block. Detection of the nerves is a difficult task, however, due to the poor quality of the ultrasound images. Recent developments in pattern recognition and machine learning have heightened the need for computer aided systems in many applications. This type of system can improve UGRA practice. In many imaging situations nerves are not salient in images. Generally, practitioners rely on the arteries as key anatomical structures to confirm the positions of the nerves, making artery tracking an important aspect for UGRA procedure. However, artery tracking in a noisy environment is a challenging problem, due to the instability of the features. This paper proposes a new method for real-time artery tracking in ultrasound images. It is based on shape information to correct tracker location errors. A new objective function is proposed, which defines an artery as an elliptical shape, enabling its robust fitting in a noisy environment. This approach is incorporated in two well-known tracking algorithms, and shows a systematic improvement over the original trackers. Evaluations were performed on 71 videos of different axillary nerve blocks. The results obtained demonstrated the validity of the proposed method.
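The paper's objective function defines the artery as an elliptical shape; as an illustrative sketch of that idea, the following fits a centred, axis-aligned ellipse by linear least squares (a deliberate simplification, not the paper's full objective function, which must also handle rotation and translation in a noisy tracker loop):

```python
import numpy as np

def fit_axis_aligned_ellipse(pts):
    """Least-squares fit of A*x^2 + B*y^2 = 1, a centred axis-aligned ellipse;
    a crude stand-in for an elliptical shape prior constraining a tracker."""
    design = np.column_stack([pts[:, 0] ** 2, pts[:, 1] ** 2])
    (A, B), *_ = np.linalg.lstsq(design, np.ones(len(pts)), rcond=None)
    return 1 / np.sqrt(A), 1 / np.sqrt(B)  # semi-axes

# Noise-free samples of an artery cross-section with semi-axes 3 and 2
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.column_stack([3 * np.cos(t), 2 * np.sin(t)])
a, b = fit_axis_aligned_ellipse(pts)
```

Fitting a parametric shape to candidate boundary points lets the tracker reject detections that deviate from the expected elliptical cross-section, which is the correction mechanism the abstract describes.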
Affiliation(s)
- Arnaud Paris
- INSA Centre Val de Loire, University of Orléans, Laboratory PRISME EA 4229, 88 boulevard Lahitolle, F-18020 Bourges, France.
- Adel Hafiane
- INSA Centre Val de Loire, University of Orléans, Laboratory PRISME EA 4229, 88 boulevard Lahitolle, F-18020 Bourges, France
17
Bowness J, Varsou O, Turbitt L, Burkett-St Laurent D. Identifying anatomical structures on ultrasound: assistive artificial intelligence in ultrasound-guided regional anesthesia. Clin Anat 2021; 34:802-809. [PMID: 33904628 DOI: 10.1002/ca.23742] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/01/2021] [Accepted: 03/04/2021] [Indexed: 12/29/2022]
Abstract
Ultrasound-guided regional anesthesia involves visualizing sono-anatomy to guide needle insertion and the perineural injection of local anesthetic. Anatomical knowledge and recognition of anatomical structures on ultrasound are known to be imperfect amongst anesthesiologists. This investigation evaluates the performance of an assistive artificial intelligence (AI) system in aiding the identification of anatomical structures on ultrasound. Three independent experts in regional anesthesia reviewed 40 ultrasound scans of seven body regions. Unmodified ultrasound videos were presented side-by-side with AI-highlighted ultrasound videos. Experts rated the overall system performance, ascertained whether highlighting helped identify specific anatomical structures, and provided opinion on whether it would help confirm the correct ultrasound view to a less experienced practitioner. Two hundred and seventy-five assessments were performed (five videos contained inadequate views); mean highlighting scores ranged from 7.87 to 8.69 (out of 10). The Kruskal-Wallis H-test showed a statistically significant difference in the overall performance rating (χ2 [6] = 36.719, asymptotic p < 0.001); regions containing a prominent vascular landmark ranked most highly. AI-highlighting was helpful in identifying specific anatomical structures in 1330/1334 cases (99.7%) and for confirming the correct ultrasound view in 273/275 scans (99.3%). These data demonstrate the clinical utility of an assistive AI system in aiding the identification of anatomical structures on ultrasound during ultrasound-guided regional anesthesia. Whilst further evaluation must follow, such technology may present an opportunity to enhance clinical practice and energize the important field of clinical anatomy amongst clinicians.
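The Kruskal-Wallis H-test used above to compare performance ratings across body regions is rank-based; a minimal sketch of the statistic (an editor's addition with only midrank tie handling, not the study's analysis code):

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie-variance correction): rank all
    ratings jointly, then compare rank sums across groups."""
    data = np.concatenate([np.asarray(g, float) for g in groups])
    n = data.size
    order = np.argsort(data, kind="stable")
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    for v in np.unique(data):          # midranks for tied values
        tied = data == v
        ranks[tied] = ranks[tied].mean()
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)
```

With seven body regions there are 6 degrees of freedom, matching the χ²[6] reported; the large H of 36.719 is why the regional differences reach significance.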
Affiliation(s)
- James Bowness
- Oxford Simulation, Teaching and Research Centre, University of Oxford, Oxford, UK; Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- Ourania Varsou
- Anatomy Facility, School of Life Sciences, University of Glasgow, Glasgow, UK
- Lloyd Turbitt
- Department of Anaesthesia, Belfast Health and Social Care Trust, Belfast, UK
18
Chel H, Bora PK, Ramchiary KK. A fast technique for hyper-echoic region separation from brain ultrasound images using patch based thresholding and cubic B-spline based contour smoothing. ULTRASONICS 2021; 111:106304. [PMID: 33360770 DOI: 10.1016/j.ultras.2020.106304] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/05/2020] [Revised: 11/14/2020] [Accepted: 11/14/2020] [Indexed: 06/12/2023]
Abstract
Ultrasound image guided brain surgery (UGBS) requires an automatic and fast image segmentation method. The level-set and active contour based algorithms have been found to be useful for obtaining topology-independent boundaries between different image regions. But slow convergence limits their use in online US image segmentation. The performance of these algorithms deteriorates on US images because of the intensity inhomogeneity. This paper proposes an effective region-driven method for the segmentation of hyper-echoic (HE) regions suppressing the hypo-echoic and anechoic regions in brain US images. An automatic threshold estimation scheme is developed with a modified Niblack's approach. The separation of the hyper-echoic and non-hyper-echoic (NHE) regions is performed by successively applying patch based intensity thresholding and boundary smoothing. First, a patch based segmentation is performed, which separates roughly the two regions. The patch based approach in this process reduces the effect of intensity heterogeneity within an HE region. An iterative boundary correction step with reducing patch size improves further the regional topology and refines the boundary regions. For avoiding the slope and curvature discontinuities and obtaining distinct boundaries between HE and NHE regions, a cubic B-spline model of curve smoothing is applied. The proposed method is 50-100 times faster than the other level-set based image segmentation algorithms. The segmentation performance and the convergence speed of the proposed method are compared with four other competing level-set based algorithms. The computational results show that the proposed segmentation approach outperforms other level-set based techniques both subjectively and objectively.
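The threshold-estimation scheme is described as a modified Niblack approach; the classic Niblack rule it builds on sets a local threshold from each patch's mean and standard deviation. A minimal sketch (an editor's addition; the patch values and k are illustrative, and the paper's modification is not reproduced):

```python
import numpy as np

def niblack_threshold(patch, k=-0.2):
    """Classic Niblack local threshold for one patch: T = mean + k * std."""
    return patch.mean() + k * patch.std()

# A tiny patch mixing dark (hypo-echoic) and bright (hyper-echoic) pixels
patch = np.array([[10.0, 200.0], [12.0, 198.0]])
T = niblack_threshold(patch, k=0.0)  # with k=0 this is just the patch mean
hyperechoic = patch > T              # candidate hyper-echoic pixels
```

Computing the threshold per patch, rather than globally, is what makes the scheme robust to the intensity inhomogeneity the abstract highlights.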
Affiliation(s)
- Haradhan Chel
- Department of Electronics and Communication, Central Institute of Technology Kokrajhar, Assam 783370, India; City Clinic and Research Centre, Kokrajhar, Assam, India.
- P K Bora
- Department of EEE, Indian Institute of Technology Guwahati, Assam, India.
- K K Ramchiary
- City Clinic and Research Centre, Kokrajhar, Assam, India.
19
McKendrick M, Yang S, McLeod GA. The use of artificial intelligence and robotics in regional anaesthesia. Anaesthesia 2021; 76 Suppl 1:171-181. [PMID: 33426667 DOI: 10.1111/anae.15274] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Accepted: 09/11/2020] [Indexed: 12/15/2022]
Abstract
The current fourth industrial revolution is a distinct technological era characterised by the blurring of physics, computing and biology. The driver of change is data, powered by artificial intelligence. The UK National Health Service Topol Report embraced this digital revolution and emphasised the importance of artificial intelligence to the health service. Application of artificial intelligence within regional anaesthesia, however, remains limited. An example of the use of a convolutional neural network applied to visual detection of nerves on ultrasound images is described. New technologies that may impact on regional anaesthesia include robotics and artificial sensing. Robotics in anaesthesia falls into three categories. The first, used commonly, is pharmaceutical, typified by target-controlled anaesthesia using electroencephalography within a feedback loop. Other types include mechanical robots that provide precision and dexterity better than humans, and cognitive robots that act as decision support systems. It is likely that the latter technology will expand considerably over the next decades and provide an autopilot for anaesthesia. Technical robotics will focus on the development of accurate sensors for training that incorporate visual and motion metrics. These will be incorporated into augmented reality and virtual reality environments that will provide training at home or the office on life-like simulators. Real-time feedback will be offered that stimulates and rewards performance. In discussing the scope, applications, limitations and barriers to adoption of these technologies, we aimed to stimulate discussion towards a framework for the optimal application of current and emerging technologies in regional anaesthesia.
Affiliation(s)
- M McKendrick
- Department of Psychology, School of Social Sciences, Heriot-Watt University, Edinburgh, UK; Optomize Ltd, Glasgow, UK
- S Yang
- James Watt School of Engineering, University of Glasgow, Glasgow, UK
- G A McLeod
- Department of Anaesthesia, Ninewells Hospital, Dundee, UK; University of Dundee, UK