1. Swain BP, Nag DS, Anand R, Kumar H, Ganguly PK, Singh N. Current evidence on artificial intelligence in regional anesthesia. World J Clin Cases 2024; 12:6613-6619. DOI: 10.12998/wjcc.v12.i33.6613.
Abstract
Recent advances in regional anesthesia (RA) have been driven largely by ultrasound technology. However, the safety and efficiency of ultrasound-guided nerve blocks depend on the skill and experience of the operator. Even with adequate training, experience, and knowledge, human limitations such as fatigue, failure to recognize the correct anatomical structure, and unintentional needle or probe movement can reduce the overall effectiveness of RA. The integration of artificial intelligence (AI) into RA practice promises to mitigate these human limitations. Machine learning, an integral part of AI, can improve its performance through continuous learning and experience, much like the human brain. It enables computers to recognize images and patterns, which is particularly useful for identifying anatomical structures during the performance of RA. AI can provide real-time guidance to clinicians by highlighting important anatomical structures on ultrasound images, and it can also assist in needle tracking and the accurate deposition of local anesthetic. The future of RA with AI integration appears promising, yet obstacles such as device malfunction, data privacy, regulatory barriers, and cost concerns may deter clinical implementation. This mini-review discusses the current applications of AI in RA practice, future directions, and barriers to adoption.
Affiliation(s)
- Bhanu Pratap Swain
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
- Department of Anesthesiology, Manipal Tata Medical College, Jamshedpur 831017, India
- Deb Sanjay Nag
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
- Rishi Anand
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
- Department of Anesthesiology, Manipal Tata Medical College, Jamshedpur 831017, India
- Himanshu Kumar
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
- Department of Anesthesiology, Manipal Tata Medical College, Jamshedpur 831017, India
- Niharika Singh
- Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
2. Bowness JS, Metcalfe D, El-Boghdadly K, Thurley N, Morecroft M, Hartley T, Krawczyk J, Noble JA, Higham H. Artificial intelligence for ultrasound scanning in regional anaesthesia: a scoping review of the evidence from multiple disciplines. Br J Anaesth 2024; 132:1049-1062. PMID: 38448269. PMCID: PMC11103083. DOI: 10.1016/j.bja.2024.01.036.
Abstract
BACKGROUND Artificial intelligence (AI) for ultrasound scanning in regional anaesthesia is a rapidly developing interdisciplinary field. There is a risk that work could be undertaken in parallel by different elements of the community but with a lack of knowledge transfer between disciplines, leading to repetition and diverging methodologies. This scoping review aimed to identify and map the available literature on the accuracy and utility of AI systems for ultrasound scanning in regional anaesthesia. METHODS A literature search was conducted using Medline, Embase, CINAHL, IEEE Xplore, and ACM Digital Library. Clinical trial registries, a registry of doctoral theses, regulatory authority databases, and websites of learned societies in the field were searched. Online commercial sources were also reviewed. RESULTS In total, 13,014 sources were identified; 116 were included for full-text review. A marked change in AI techniques was noted in 2016-17, from which point on the predominant technique used was deep learning. Methods of evaluating accuracy are variable, meaning it is impossible to compare the performance of one model with another. Evaluations of utility are more comparable, but predominantly gained from the simulation setting with limited clinical data on efficacy or safety. Study methodology and reporting lack standardisation. CONCLUSIONS There is a lack of structure to the evaluation of accuracy and utility of AI for ultrasound scanning in regional anaesthesia, which hinders rigorous appraisal and clinical uptake. A framework for consistent evaluation is needed to inform model evaluation, allow comparison between approaches/models, and facilitate appropriate clinical adoption.
Affiliation(s)
- James S Bowness
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- David Metcalfe
- Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford, Oxford, UK; Emergency Medicine Research in Oxford (EMROx), Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Kariem El-Boghdadly
- Department of Anaesthesia and Peri-operative Medicine, Guy's & St Thomas's NHS Foundation Trust, London, UK; Centre for Human and Applied Physiological Sciences, King's College London, London, UK
- Neal Thurley
- Bodleian Health Care Libraries, University of Oxford, Oxford, UK
- Megan Morecroft
- Faculty of Medicine, Health & Life Sciences, University of Swansea, Swansea, UK
- Thomas Hartley
- Intelligent Ultrasound, Cardiff, UK
- Joanna Krawczyk
- Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Helen Higham
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Anaesthesia, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
3. Kovacheva VP, Nagle B. Opportunities of AI-powered applications in anesthesiology to enhance patient safety. Int Anesthesiol Clin 2024; 62:26-33. PMID: 38348838. PMCID: PMC11185868. DOI: 10.1097/aia.0000000000000437.
Affiliation(s)
- Vesela P. Kovacheva
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Baily Nagle
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
4. Lonsdale H, Gray GM, Ahumada LM, Matava CT. Machine Vision and Image Analysis in Anesthesia: Narrative Review and Future Prospects. Anesth Analg 2023; 137:830-840. PMID: 37712476. PMCID: PMC11495405. DOI: 10.1213/ane.0000000000006679.
Abstract
Machine vision describes the use of artificial intelligence to interpret, analyze, and derive predictions from image or video data. Machine vision-based techniques are already in clinical use in radiology, ophthalmology, and dermatology, where some applications currently equal or exceed the performance of specialty physicians in areas of image interpretation. While machine vision in anesthesia has many potential applications, its development remains in its infancy in our specialty. Early research on machine vision in anesthesia has focused on automated recognition of anatomical structures during ultrasound-guided regional anesthesia or line insertion; recognition of the glottic opening and vocal cords during video laryngoscopy; prediction of the difficult airway using facial images; and clinical alerts for endobronchial intubation detected on chest radiographs. Current machine vision applications measuring the distance between the endotracheal tube tip and the carina have demonstrated noninferior performance compared with board-certified physicians. The performance and potential uses of machine vision in anesthesia will only grow as the underlying techniques developed outside of medicine, such as convolutional neural networks and transfer learning, continue to advance. This article summarizes recently published works of interest, provides a brief overview of techniques used to create machine vision applications, explains frequently used terms, and discusses challenges the specialty will encounter as we embrace the advantages that this technology may bring to future clinical practice and patient care. As machine vision emerges onto the clinical stage, it is critically important that anesthesiologists are prepared to confidently assess which of these devices are safe, appropriate, and bring added value to patient care.
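To make the transfer-learning technique named above concrete, the sketch below fine-tunes an ImageNet-pretrained convolutional neural network for a two-class imaging task. This is a minimal, hypothetical illustration, not the method of any study cited here; the ResNet-18 backbone, the frozen-feature strategy, and the dummy data are all assumptions.

```python
# Minimal transfer-learning sketch (hypothetical): fine-tune an ImageNet-pretrained
# CNN for a binary medical-imaging task. Data and backbone choice are placeholders.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int = 2) -> nn.Module:
    # Start from an ImageNet-pretrained ResNet-18 backbone (transfer learning).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    # Freeze the pretrained feature extractor; only the new head will train.
    for p in model.parameters():
        p.requires_grad = False
    # Replace the final fully connected layer with a task-specific head.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 8 three-channel images
# (a real pipeline would iterate over a DataLoader of labelled studies).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In practice, later backbone layers are often unfrozen once the new head has converged.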
Affiliation(s)
- Hannah Lonsdale
- Department of Anesthesiology, Division of Pediatric Anesthesiology, Vanderbilt University Medical Center, Nashville, TN, USA
- Geoffrey M. Gray
- Center for Pediatric Data Science and Analytics Methodology, Johns Hopkins All Children’s Hospital, St. Petersburg, Florida, USA
- Luis M. Ahumada
- Center for Pediatric Data Science and Analytics Methodology, Johns Hopkins All Children’s Hospital, St. Petersburg, Florida, USA
- Clyde T. Matava
- Department of Anesthesia and Pain Medicine, The Hospital for Sick Children, Toronto, ON, Canada
- Department of Anesthesiology and Pain Medicine, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
5. Ferraz S, Coimbra M, Pedrosa J. Assisted probe guidance in cardiac ultrasound: A review. Front Cardiovasc Med 2023; 10:1056055. PMID: 36865885. PMCID: PMC9971589. DOI: 10.3389/fcvm.2023.1056055.
Abstract
Echocardiography is the most frequently used imaging modality in cardiology. However, its acquisition is affected by inter-observer variability and is largely dependent on the operator's experience. In this context, artificial intelligence techniques could reduce these variabilities and provide a user-independent system. In recent years, machine learning (ML) algorithms have been used in echocardiography to automate echocardiographic acquisition. This review focuses on state-of-the-art studies that use ML to automate tasks regarding the acquisition of echocardiograms, including quality assessment (QA), recognition of cardiac views, and assisted probe guidance during the scanning process. The results indicate that the performance of automated acquisition was generally good, but most studies lack variability in their datasets. From our comprehensive review, we believe automated acquisition has the potential not only to improve the accuracy of diagnosis, but also to help novice operators build expertise and to facilitate point-of-care healthcare in medically underserved areas.
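As a concrete reading of the quality-assessment (QA) task mentioned above, the sketch below regresses a per-frame quality score with a tiny CNN. It is a hypothetical illustration under assumed inputs, not the pipeline of any study in this review; the architecture and threshold are arbitrary.

```python
# Hypothetical image quality-assessment (QA) sketch: a small CNN that regresses
# a per-frame quality score in [0, 1]. Architecture and data are illustrative only.
import torch
import torch.nn as nn

class FrameQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))  # (N, 1) quality scores in [0, 1]

qa = FrameQA()
frames = torch.randn(4, 1, 128, 128)   # dummy batch of single-channel frames
scores = qa(frames)                    # e.g. keep frames scoring above 0.7
```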
Affiliation(s)
- Sofia Ferraz
- Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal
- Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
- Miguel Coimbra
- Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal
- Faculty of Sciences of the University of Porto (FCUP), Porto, Portugal
- João Pedrosa
- Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal
- Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
6. Artificial intelligence using deep neural network learning for automatic location of the interscalene brachial plexus in ultrasound images. Eur J Anaesthesiol 2022; 39:758-765. PMID: 35919026. DOI: 10.1097/eja.0000000000001720.
Abstract
BACKGROUND Identifying the interscalene brachial plexus can be challenging during ultrasound-guided interscalene block. OBJECTIVE We hypothesised that an algorithm based on deep learning could locate the interscalene brachial plexus in ultrasound images better than a nonexpert anaesthesiologist, thus possessing the potential to aid anaesthesiologists. DESIGN Observational study. SETTING A tertiary hospital in Shanghai, China. PATIENTS Patients undergoing elective surgery. INTERVENTIONS Ultrasound images at the interscalene level were collected from patients. Two independent image datasets were prepared to train and evaluate the deep learning model. Three senior anaesthesiologists who were experts in regional anaesthesia annotated the images. A deep convolutional neural network was developed, trained and optimised to locate the interscalene brachial plexus in the ultrasound images. Expert annotations on the datasets were regarded as an accurate baseline (ground truth). The test dataset was also annotated by five nonexpert anaesthesiologists. MAIN OUTCOME MEASURES The primary outcome was the distance between the lateral midpoints of the nerve sheath contours of the model predictions and the ground truth. RESULTS Data were obtained from 1126 patients. The training dataset comprised 11 392 images from 1076 patients. The test dataset comprised 100 images from 50 patients. In the test dataset, the median [IQR] distance between the lateral midpoints of the nerve sheath contours of the model predictions and the ground truth was 0.8 [0.4 to 2.9] mm: this was significantly shorter than the corresponding distance for nonexpert predictions (3.4 [2.1 to 4.5] mm; P < 0.001). CONCLUSION The proposed model was able to locate the interscalene brachial plexus in ultrasound images more accurately than nonexperts. TRIAL REGISTRATION ClinicalTrials.gov (https://clinicaltrials.gov) identifier: NCT04183972.
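The primary outcome above, the distance between the lateral midpoints of predicted and ground-truth nerve sheath contours, can be computed along the following lines. This is a hypothetical sketch: the abstract does not specify how the lateral midpoint is derived from a contour, so the definition used here (midpoint of the lateral-most contour points) and the pixel spacing are assumptions.

```python
# Hypothetical localization metric: Euclidean distance (in mm) between the
# lateral midpoint of a predicted nerve sheath contour and that of the expert
# ground truth. The midpoint definition and pixel spacing are assumptions.
import numpy as np

def lateral_midpoint(contour_xy: np.ndarray) -> np.ndarray:
    """contour_xy: (N, 2) array of (x, y) points outlining the nerve sheath."""
    x_max = contour_xy[:, 0].max()
    # Midpoint of the lateral-most (maximum-x) points of the contour.
    lateral_pts = contour_xy[np.isclose(contour_xy[:, 0], x_max, atol=1.0)]
    return lateral_pts.mean(axis=0)

def midpoint_distance_mm(pred: np.ndarray, truth: np.ndarray,
                         mm_per_pixel: float) -> float:
    d_px = np.linalg.norm(lateral_midpoint(pred) - lateral_midpoint(truth))
    return float(d_px * mm_per_pixel)

# Dummy contours (pixel coordinates) and an assumed ultrasound pixel spacing.
pred = np.array([[10, 20], [30, 18], [32, 25], [12, 28]], dtype=float)
truth = np.array([[11, 21], [31, 19], [33, 26], [13, 29]], dtype=float)
print(midpoint_distance_mm(pred, truth, mm_per_pixel=0.15))
```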
7. Huang A, Jiang L, Zhang J, Wang Q. Attention-VGG16-UNet: a novel deep learning approach for automatic segmentation of the median nerve in ultrasound images. Quant Imaging Med Surg 2022; 12:3138-3150. PMID: 35655843. PMCID: PMC9131343. DOI: 10.21037/qims-21-1074.
Abstract
BACKGROUND Ultrasonography, an imaging technique that can show nerves and the surrounding tissues in cross-section, is one of the most effective imaging methods for diagnosing nerve diseases. However, segmenting the median nerve in two-dimensional (2D) ultrasound images is challenging due to the small and inconspicuous size of the nerve, the low contrast of the images, and imaging noise. This study aimed to apply deep learning approaches to improve the accuracy of automatic segmentation of the median nerve in ultrasound images. METHODS We proposed an improved network called VGG16-UNet, which incorporates a contracting path and an expanding path. The contracting path is the VGG16 model with the 3 fully connected layers removed. The architecture of the expanding path resembles the upsampling path of U-Net. Moreover, attention mechanisms and/or residual modules were added to the U-Net and VGG16-UNet, yielding Attention-UNet (A-UNet), Summation-UNet (S-UNet), Attention-Summation-UNet (AS-UNet), Attention-VGG16-UNet (A-VGG16-UNet), Summation-VGG16-UNet (S-VGG16-UNet), and Attention-Summation-VGG16-UNet (AS-VGG16-UNet). Each model was trained on a dataset of 910 median nerve images from 19 participants and tested on 207 frames from a new image sequence. Model performance was evaluated using the Dice similarity coefficient (Dice), Jaccard similarity coefficient (Jaccard), Precision, and Recall. Based on the best segmentation results, we reconstructed a 3D median nerve image using the volume rendering method in the Visualization Toolkit (VTK) to assist in clinical nerve diagnosis. RESULTS Paired t-tests showed significant differences (P<0.01) in the metric values of the different models. AS-UNet ranked first among the U-Net models. The VGG16-UNet and its variants performed better than the corresponding U-Net models. Furthermore, models with the attention mechanism were superior to those with the residual module, whether based on U-Net or VGG16-UNet. The A-VGG16-UNet achieved the best performance (Dice = 0.904 ± 0.035, Jaccard = 0.826 ± 0.057, Precision = 0.905 ± 0.061, and Recall = 0.909 ± 0.061). Finally, we applied the trained A-VGG16-UNet to segment the median nerve in the image sequence, then reconstructed and visualized the 3D image of the median nerve. CONCLUSIONS This study demonstrates that the attention mechanism and residual module improve deep learning models for segmenting ultrasound images. The proposed VGG16-UNet-based models performed better than the U-Net-based models. With segmentation, a 3D median nerve image can be reconstructed and can provide a visual reference for nerve diagnosis.
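The attention mechanism referred to above is typically an additive attention gate on the skip connections, in the style of Attention U-Net. The sketch below shows one such gate; whether the paper uses exactly this formulation, and the channel sizes shown, are assumptions for illustration.

```python
# Additive attention-gate sketch in the style of Attention U-Net (Oktay et al.).
# Whether the cited paper uses exactly this formulation is an assumption;
# channel sizes are illustrative.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch_skip: int, ch_gate: int, ch_inter: int):
        super().__init__()
        self.w_skip = nn.Conv2d(ch_skip, ch_inter, kernel_size=1)  # skip branch
        self.w_gate = nn.Conv2d(ch_gate, ch_inter, kernel_size=1)  # gating branch
        self.psi = nn.Conv2d(ch_inter, 1, kernel_size=1)           # attention map
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # skip: encoder feature map; gate: decoder feature map (same H, W here;
        # a full implementation upsamples the coarser gating signal first).
        a = self.relu(self.w_skip(skip) + self.w_gate(gate))
        alpha = self.sigmoid(self.psi(a))   # (N, 1, H, W) weights in [0, 1]
        return skip * alpha                 # suppress irrelevant skip features

skip = torch.randn(2, 64, 32, 32)
gate = torch.randn(2, 128, 32, 32)
out = AttentionGate(64, 128, 32)(skip, gate)   # -> (2, 64, 32, 32)
```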
Affiliation(s)
- Aiyue Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Li Jiang
- Department of Rehabilitation, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Jiangshan Zhang
- Department of Rehabilitation, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Qing Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
8. Artificial Intelligence: Innovation to Assist in the Identification of Sono-anatomy for Ultrasound-Guided Regional Anaesthesia. Adv Exp Med Biol 2022; 1356:117-140. PMID: 35146620. DOI: 10.1007/978-3-030-87779-8_6.
Abstract
Ultrasound-guided regional anaesthesia (UGRA) involves the targeted deposition of local anaesthetic to inhibit the function of peripheral nerves. Ultrasound allows the visualisation of nerves and the surrounding structures, to guide needle insertion to a perineural or fascial plane end point for injection. However, it is challenging to develop the necessary skills to acquire and interpret optimal ultrasound images. Sound anatomical knowledge is required, and human image analysis is fallible, limited by heuristic behaviours and fatigue, while its subjectivity leads to varied interpretation even amongst experts. Therefore, to maximise the potential benefit of ultrasound guidance, innovation in sono-anatomical identification is required.

Artificial intelligence (AI) is rapidly infiltrating many aspects of everyday life. Advances related to medicine have been slower, in part because the regulatory approval process must thoroughly evaluate the risk-benefit ratio of new devices. One area of AI to show significant promise is computer vision (a branch of AI dealing with how computers interpret the visual world), which is particularly relevant to medical image interpretation. AI includes the subfields of machine learning and deep learning, techniques used to interpret or label images. Deep learning systems may hold potential to support ultrasound image interpretation in UGRA but must be trained and validated on data prior to clinical use.

This review of the current UGRA literature compares the success and generalisability of deep learning and non-deep learning approaches to image segmentation and explains how computers are able to track structures, such as nerves, through image frames. We conclude with a case study from industry (ScanNav Anatomy Peripheral Nerve Block; Intelligent Ultrasound Limited), including a more detailed discussion of the AI approach involved in this system and a review of the current evidence of its performance.

The authors discuss how this technology may best be used to assist anaesthetists and what effects this may have on the future of learning and practice of UGRA. Finally, we discuss possible avenues for AI within UGRA and the associated implications.
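One simple way a computer can track a structure such as a nerve through image frames, as discussed above, is to link each frame's candidate segmentation to the previous mask by overlap. The greedy IoU matcher below is a hypothetical sketch; commercial systems such as the one described in the case study likely use more sophisticated temporal models.

```python
# Hypothetical frame-to-frame structure tracking: greedily link each new
# frame's candidate masks to the tracked mask with the highest overlap (IoU).
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

def track(prev_mask, candidates, min_iou=0.3):
    """Return the candidate mask that best continues prev_mask, or None if lost."""
    scores = [iou(prev_mask, c) for c in candidates]
    best = int(np.argmax(scores)) if scores else -1
    return candidates[best] if best >= 0 and scores[best] >= min_iou else None

# Dummy frames: the structure drifts one pixel between frames.
prev = np.zeros((8, 8), bool); prev[2:5, 2:5] = True
cand = np.zeros((8, 8), bool); cand[3:6, 2:5] = True
print(track(prev, [cand]) is not None)   # True: overlap is sufficient
```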
9. Drukker L, Sharma H, Droste R, Alsharid M, Chatelain P, Noble JA, Papageorghiou AT. Transforming obstetric ultrasound into data science using eye tracking, voice recording, transducer motion and ultrasound video. Sci Rep 2021; 11:14109. PMID: 34238950. PMCID: PMC8266837. DOI: 10.1038/s41598-021-92829-1.
Abstract
Ultrasound is the primary modality for obstetric imaging and is highly sonographer dependent. A long training period, insufficient recruitment, and poor retention of sonographers are among the global challenges to expanding ultrasound use. For the past several decades, technical advances in clinical obstetric ultrasound scanning have largely concerned improving image quality and processing speed. By contrast, sonographers have been acquiring ultrasound images in much the same fashion for several decades. The PULSE (Perception Ultrasound by Learning Sonographer Experience) project is an interdisciplinary, multi-modal imaging study aiming to offer insights into clinical sonography and to transform the process of obstetric ultrasound acquisition and image analysis by applying deep learning to large-scale multi-modal clinical data. A key novelty of the study is that we record full-length ultrasound video with concurrent tracking of the sonographer's eyes, voice, and the transducer while routine obstetric scans are performed on pregnant women. We provide a detailed description of the novel acquisition system and illustrate how our data can be used to describe clinical ultrasound. Being able to measure different sonographer actions or model tasks will lead to a better understanding of several topics, including how to effectively train new sonographers, monitor learning progress, and enhance the scanning workflow of experts.
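A practical step in any such multi-modal study is aligning streams recorded at different rates (video, gaze, transducer pose) onto a common clock. The nearest-timestamp matcher below is a hypothetical sketch; the stream names and sampling rates are assumptions, not details of the PULSE acquisition system.

```python
# Hypothetical multi-modal alignment: for each video frame, find the nearest
# sample from faster/slower streams by timestamp. Rates are illustrative.
import numpy as np

def nearest(sample_ts: np.ndarray, ref_ts: np.ndarray) -> np.ndarray:
    """For each reference timestamp, index of the nearest sample timestamp."""
    idx = np.searchsorted(sample_ts, ref_ts)
    idx = np.clip(idx, 1, len(sample_ts) - 1)
    left, right = sample_ts[idx - 1], sample_ts[idx]
    return np.where(ref_ts - left < right - ref_ts, idx - 1, idx)

video_ts = np.arange(0.0, 2.0, 1 / 30)   # 30 Hz ultrasound video
gaze_ts = np.arange(0.0, 2.0, 1 / 90)    # 90 Hz eye tracker
pose_ts = np.arange(0.0, 2.0, 1 / 60)    # 60 Hz transducer motion

gaze_for_frame = nearest(gaze_ts, video_ts)  # gaze sample per video frame
pose_for_frame = nearest(pose_ts, video_ts)  # pose sample per video frame
```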
Affiliation(s)
- Lior Drukker
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Harshita Sharma
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Richard Droste
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Mohammad Alsharid
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Pierre Chatelain
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
10. Gungor I, Gunaydin B, Oktar SO, Buyukgebiz BM, Bagcaz S, Ozdemir MG, Inan G. A real-time anatomy identification via tool based on artificial intelligence for ultrasound-guided peripheral nerve block procedures: an accuracy study. J Anesth 2021; 35:591-594. PMID: 34008072. PMCID: PMC8131172. DOI: 10.1007/s00540-021-02947-3.
Abstract
We aimed to assess the accuracy of artificial intelligence (AI)-based real-time anatomy identification software specifically developed to ease image interpretation during ultrasound-guided peripheral nerve block (UGPNB). Forty healthy participants (20 women, 20 men) were enrolled, and anesthesiology trainees performed interscalene, supraclavicular, infraclavicular, and transversus abdominis plane (TAP) block scans on them under ultrasound guidance using the AI software. Once the software indicated 100% scan success for the anatomical landmarks associated with each block, both raw and labeled ultrasound images were saved, assessed, and validated on a 5-point scale by expert validators, and the validators' accuracy scores were recorded. Correlation analysis was used to examine whether relationships (r) existed with demographics (gender, age, and body mass index (BMI)) and block type. The BMI (kg/m2) and age (years) of participants were 22.2 ± 3 and 32.2 ± 5.25, respectively. Validators' assessment scores for all blocks were similar in men and women. Mean assessment scores did not differ significantly by age or BMI, except for the TAP block, for which scores were inversely correlated with age and BMI (p = 0.01). AI technology can successfully interpret anatomical structures in real-time sonography while assisting young anesthesiologists during UGPNB practice.
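The correlation analysis described above can be reproduced in outline as follows. The abstract does not state which correlation coefficient was used, so Spearman's rank correlation and the simulated scores below are assumptions for illustration.

```python
# Hypothetical sketch of the correlation analysis: relating validators'
# assessment scores to participant age and BMI. The choice of Spearman's rank
# correlation and the simulated data are assumptions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
age = rng.normal(32.2, 5.25, size=40)    # dummy participant ages (years)
bmi = rng.normal(22.2, 3.0, size=40)     # dummy BMI values (kg/m2)
# Simulated TAP-block scores that decline slightly with age, plus noise.
tap_score = 5 - 0.05 * (age - age.mean()) + rng.normal(0, 0.3, size=40)

for name, x in [("age", age), ("BMI", bmi)]:
    r, p = spearmanr(x, tap_score)
    print(f"TAP score vs {name}: r = {r:.2f}, p = {p:.3f}")
```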
Affiliation(s)
- Irfan Gungor
- Department of Anesthesiology and Reanimation, Gazi University Faculty of Medicine, Besevler, 06500, Ankara, Turkey
- Berrin Gunaydin
- Department of Anesthesiology and Reanimation, Gazi University Faculty of Medicine, Besevler, 06500, Ankara, Turkey
- Suna O Oktar
- Department of Radiology, Gazi University Faculty of Medicine, Ankara, Turkey
- Beyza M Buyukgebiz
- Department of Anesthesiology and Reanimation, Gazi University Faculty of Medicine, Besevler, 06500, Ankara, Turkey
- Selin Bagcaz
- Department of Anesthesiology and Reanimation, Gazi University Faculty of Medicine, Besevler, 06500, Ankara, Turkey
- Miray Gozde Ozdemir
- Department of Anesthesiology and Reanimation, Gazi University Faculty of Medicine, Besevler, 06500, Ankara, Turkey
- Gozde Inan
- Department of Anesthesiology and Reanimation, Gazi University Faculty of Medicine, Besevler, 06500, Ankara, Turkey
11. Smistad E, Johansen KF, Iversen DH, Reinertsen I. Highlighting nerves and blood vessels for ultrasound-guided axillary nerve block procedures using neural networks. J Med Imaging (Bellingham) 2018; 5:044004. PMID: 30840734. PMCID: PMC6228309. DOI: 10.1117/1.jmi.5.4.044004.
Abstract
Ultrasound images acquired during axillary nerve block procedures can be difficult to interpret. Highlighting the important structures, such as nerves and blood vessels, may be useful for the training of inexperienced users. A deep convolutional neural network is used to identify the musculocutaneous, median, ulnar, and radial nerves, as well as the blood vessels, in ultrasound images. A dataset of 49 subjects is collected and used for training and evaluation of the neural network. Several image augmentations, such as rotation, elastic deformation, shadows, and horizontal flipping, are tested. The neural network is evaluated using cross validation. The results showed that the blood vessels were the easiest to detect, with precision and recall above 0.8. Among the nerves, the median and ulnar nerves were the easiest to detect, with F-scores of 0.73 and 0.62, respectively. The radial nerve was the hardest to detect, with an F-score of 0.39. Image augmentations proved effective, increasing the F-score by as much as 0.13. A Wilcoxon signed-rank test showed that the improvements from the rotation, shadow, and elastic deformation augmentations were significant, and the combination of all augmentations gave the best result. The results are promising; however, there is more work to be done, as the precision and recall are still too low. A larger dataset is most likely needed to improve accuracy, in combination with anatomical and temporal models.
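The four augmentations tested above (rotation, elastic deformation, shadows, horizontal flipping) can each be expressed in a few lines of array code. The sketch below is a hypothetical illustration; the parameter ranges and the shadow model are assumptions, not the study's settings.

```python
# Hypothetical sketch of the image augmentations evaluated above: rotation,
# horizontal flip, shadow, and elastic deformation. Parameters are illustrative.
import numpy as np
from scipy.ndimage import rotate, gaussian_filter, map_coordinates

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Random rotation within +/- 15 degrees.
    img = rotate(img, angle=rng.uniform(-15, 15), reshape=False, mode="nearest")
    # Random horizontal flip.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # Simulated shadow: darken a random vertical band (assumed shadow model).
    w = img.shape[1]
    x0 = int(rng.integers(0, w // 2))
    img = img.copy()
    img[:, x0:x0 + w // 4] *= 0.5
    # Elastic deformation: warp by a smooth random displacement field.
    dx = gaussian_filter(rng.standard_normal(img.shape), sigma=8) * 4
    dy = gaussian_filter(rng.standard_normal(img.shape), sigma=8) * 4
    ys, xs = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]),
                         indexing="ij")
    return map_coordinates(img, [ys + dy, xs + dx], order=1, mode="nearest")

img = np.random.rand(128, 128).astype(np.float32)   # dummy ultrasound frame
out = augment(img, np.random.default_rng(0))
```

The per-augmentation F-score comparison reported above would then come from training with and without each transform and comparing folds with scipy.stats.wilcoxon.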
Affiliation(s)
- Erik Smistad
- SINTEF Medical Technology, Trondheim, Norway
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- Daniel Høyer Iversen
- SINTEF Medical Technology, Trondheim, Norway
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
12. Scholten HJ, Pourtaherian A, Mihajlovic N, Korsten HHM, Bouwman RA. Improving needle tip identification during ultrasound-guided procedures in anaesthetic practice. Anaesthesia 2017; 72:889-904. DOI: 10.1111/anae.13921.
Affiliation(s)
- H. J. Scholten
- Department of Anaesthesiology, Intensive Care and Pain Medicine, Catharina Hospital, Eindhoven, the Netherlands
- A. Pourtaherian
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- H. H. M. Korsten
- Department of Anaesthesiology, Intensive Care and Pain Medicine, Catharina Hospital, Eindhoven, the Netherlands
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- R. A. Bouwman
- Department of Anaesthesiology, Intensive Care and Pain Medicine, Catharina Hospital, Eindhoven, the Netherlands
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands