1
Sugino T, Onogi S, Oishi R, Hanayama C, Inoue S, Ishida S, Yao Y, Ogasawara N, Murakawa M, Nakajima Y. Investigation of Appropriate Scaling of Networks and Images for Convolutional Neural Network-Based Nerve Detection in Ultrasound-Guided Nerve Blocks. Sensors (Basel) 2024; 24:3696. [PMID: 38894486] [PMCID: PMC11175212] [DOI: 10.3390/s24113696]
Abstract
Ultrasound imaging is an essential tool in anesthesiology, particularly for ultrasound-guided peripheral nerve blocks (US-PNBs). However, challenges such as speckle noise, acoustic shadows, and variability in nerve appearance complicate the accurate localization of nerve tissues. To address these challenges, this study introduces a deep convolutional neural network (DCNN), specifically Scaled-YOLOv4, and investigates appropriate network model and input image scaling for nerve detection on ultrasound images. Using two datasets, a public dataset and an original dataset, we evaluated the effects of model scale and input image size on detection performance. Our findings reveal that smaller input images and larger model scales significantly improve detection accuracy. The optimal configuration of model size and input image size not only achieved high detection accuracy but also demonstrated real-time processing capabilities.
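An aside on the input-image scaling studied above: the abstract does not specify the resizing procedure, but YOLO-family detectors such as Scaled-YOLOv4 conventionally use an aspect-ratio-preserving "letterbox" resize to the square network input. A minimal NumPy sketch of that preprocessing step (an illustration, not the authors' code; nearest-neighbour interpolation and the grey padding value 114 are assumptions):

```python
import numpy as np

def letterbox(img: np.ndarray, size: int) -> np.ndarray:
    """Resize an image to a square network input while preserving aspect
    ratio, padding the remainder with grey (114, a common YOLO default).
    Nearest-neighbour resize keeps the sketch dependency-free."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour index maps into the source image
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.full((size, size) + img.shape[2:], 114, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

us_frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
inp = letterbox(us_frame, 416)
print(inp.shape)  # (416, 416)
```

Varying `size` here (e.g. 320 vs 640) is the kind of input-scaling sweep the study evaluates against model scale.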
Affiliation(s)
- Takaaki Sugino
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
- Shinya Onogi
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
- Rieko Oishi
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Chie Hanayama
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Satoki Inoue
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Shinjiro Ishida
- TCC Media Lab Co., Ltd., Tokyo 192-0152, Japan
- Yuhang Yao
- IOT SOFT Co., Ltd., Tokyo 103-0023, Japan
- Masahiro Murakawa
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Yoshikazu Nakajima
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
2
Bowness JS, Metcalfe D, El-Boghdadly K, Thurley N, Morecroft M, Hartley T, Krawczyk J, Noble JA, Higham H. Artificial intelligence for ultrasound scanning in regional anaesthesia: a scoping review of the evidence from multiple disciplines. Br J Anaesth 2024; 132:1049-1062. [PMID: 38448269] [PMCID: PMC11103083] [DOI: 10.1016/j.bja.2024.01.036]
Abstract
BACKGROUND Artificial intelligence (AI) for ultrasound scanning in regional anaesthesia is a rapidly developing interdisciplinary field. There is a risk that work could be undertaken in parallel by different elements of the community but with a lack of knowledge transfer between disciplines, leading to repetition and diverging methodologies. This scoping review aimed to identify and map the available literature on the accuracy and utility of AI systems for ultrasound scanning in regional anaesthesia. METHODS A literature search was conducted using Medline, Embase, CINAHL, IEEE Xplore, and ACM Digital Library. Clinical trial registries, a registry of doctoral theses, regulatory authority databases, and websites of learned societies in the field were searched. Online commercial sources were also reviewed. RESULTS In total, 13,014 sources were identified; 116 were included for full-text review. A marked change in AI techniques was noted in 2016-17, from which point on the predominant technique used was deep learning. Methods of evaluating accuracy are variable, meaning it is impossible to compare the performance of one model with another. Evaluations of utility are more comparable, but predominantly gained from the simulation setting with limited clinical data on efficacy or safety. Study methodology and reporting lack standardisation. CONCLUSIONS There is a lack of structure to the evaluation of accuracy and utility of AI for ultrasound scanning in regional anaesthesia, which hinders rigorous appraisal and clinical uptake. A framework for consistent evaluation is needed to inform model evaluation, allow comparison between approaches/models, and facilitate appropriate clinical adoption.
Affiliation(s)
- James S Bowness
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- David Metcalfe
- Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford, Oxford, UK; Emergency Medicine Research in Oxford (EMROx), Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Kariem El-Boghdadly
- Department of Anaesthesia and Peri-operative Medicine, Guy's & St Thomas's NHS Foundation Trust, London, UK; Centre for Human and Applied Physiological Sciences, King's College London, London, UK
- Neal Thurley
- Bodleian Health Care Libraries, University of Oxford, Oxford, UK
- Megan Morecroft
- Faculty of Medicine, Health & Life Sciences, University of Swansea, Swansea, UK
- Thomas Hartley
- Intelligent Ultrasound, Cardiff, UK
- Joanna Krawczyk
- Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Helen Higham
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Anaesthesia, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
3
Guo Y, Chen M, Yang L, Yin H, Yang H, Zhou Y. A neural network with a human learning paradigm for breast fibroadenoma segmentation in sonography. Biomed Eng Online 2024; 23:5. [PMID: 38221632] [PMCID: PMC10787993] [DOI: 10.1186/s12938-024-01198-z]
Abstract
BACKGROUND Breast fibroadenoma poses a significant health concern, particularly for young women. Computer-aided diagnosis has emerged as an effective and efficient method for the early and accurate detection of various solid tumors. Automatic segmentation of breast fibroadenoma is important, as it can reduce unnecessary biopsies, but it is challenging due to the low image quality and the presence of various artifacts in sonography. METHODS Human learning involves modularizing complete information and then integrating it through dense contextual connections in an intuitive and efficient way. Here, a human learning paradigm was introduced to guide the neural network through two consecutive phases: a feature fragmentation stage and an information aggregation stage. To optimize this paradigm, three fragmentation attention mechanisms and information aggregation mechanisms were adapted to the characteristics of sonography. The evaluation was conducted on a local dataset comprising 600 breast ultrasound images from 30 patients at Suining Central Hospital in China. Additionally, a public dataset consisting of 246 breast ultrasound images from Dataset_BUSI and DatasetB was used to further validate the robustness of the proposed network. Segmentation performance and inference speed were assessed by the Dice similarity coefficient (DSC), Hausdorff distance (HD), and training time, and then compared with those of the baseline model (TransUNet) and other state-of-the-art methods. RESULTS Most models guided by the human learning paradigm demonstrated improved segmentation on the local dataset, with the best one (incorporating C3ECA and LogSparse Attention modules) outperforming the baseline model by 0.76% in DSC and 3.14 mm in HD and reducing the training time by 31.25%. Its robustness and efficiency on the public dataset were also confirmed, surpassing TransUNet by 0.42% in DSC and 5.13 mm in HD.
CONCLUSIONS The proposed human learning paradigm demonstrated superior and efficient ultrasound breast fibroadenoma segmentation across both public and local datasets. This intuitive and efficient learning paradigm, as the core of a neural network, holds considerable potential in medical image processing.
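A brief aside on the metrics reported above: the Dice similarity coefficient and Hausdorff distance are standard segmentation measures. For binary masks they can be sketched as follows (a minimal brute-force illustration, not the paper's implementation; assumes both masks are non-empty for the HD):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance (HD), in pixels, between the
    foreground point sets of two non-empty binary masks (brute force)."""
    a = np.argwhere(pred.astype(bool))
    b = np.argwhere(gt.astype(bool))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairs
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

gt = np.zeros((8, 8), int); gt[2:6, 2:6] = 1      # 4x4 ground-truth square
pred = np.zeros((8, 8), int); pred[2:6, 2:5] = 1  # prediction misses a column
print(round(dice(pred, gt), 3))  # 0.857
print(hausdorff(pred, gt))       # 1.0
```

Higher DSC and lower HD are better, which is why the results above report gains in DSC alongside reductions in HD (in mm once pixel spacing is applied).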
Affiliation(s)
- Yongxin Guo
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, 1 Medical College Road, Chongqing, 400016, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Maoshan Chen
- Department of Breast and Thyroid Surgery, Suining Central Hospital, Suining, 629000, China
- Lei Yang
- Department of Breast and Thyroid Surgery, Suining Central Hospital, Suining, 629000, China
- Heng Yin
- Department of Breast and Thyroid Surgery, Suining Central Hospital, Suining, 629000, China
- Hongwei Yang
- Department of Breast and Thyroid Surgery, Suining Central Hospital, Suining, 629000, China
- Yufeng Zhou
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, 1 Medical College Road, Chongqing, 400016, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- National Medical Products Administration (NMPA) Key Laboratory for Quality Evaluation of Ultrasonic Surgical Equipment, 507 Gaoxin Ave., Donghu New Technology Development Zone, Wuhan, 430075, Hubei, China
4
Guo Y, Zhou Y. MS-CFNet: a multi-scale clinical studying-based and feature extraction-guided network for breast fibroadenoma segmentation in ultrasonography. Biomed Eng Lett 2024; 14:173-184. [PMID: 38186950] [PMCID: PMC10769961] [DOI: 10.1007/s13534-023-00330-7]
Abstract
Segmenting breast tumors in ultrasonography is challenging due to the low image quality and the presence of artifacts. Radiologists' study and diagnostic skills are integrated with artificial intelligence to establish a clinical learning-based deep learning network that robustly extracts and delineates features of breast fibroadenoma. The spatial local feature contrast (SLFC) module captures overall tumor contours, while the channel recursive gated attention (CRGA) module enhances edge perception through high-dimensional information interaction. Additionally, full-scale feature fusion and enhanced deep supervision are applied to improve model stability and performance. To achieve smoother boundaries, we introduce a new loss function (cosh-smooth) that penalizes and finely tunes tumor edges. Our dataset comprises 1016 clinical ultrasound images of breast fibroadenoma with labeled masks, alongside a publicly available dataset of 246 images. Segmentation performance is evaluated using the Dice similarity coefficient (DSC) and mean intersection over union (MIOU). Extensive experiments demonstrate that our proposed MS-CFNet outperforms state-of-the-art methods. Compared to TransUNet as a baseline model, MS-CFNet improves by 1.47% in DSC and 2.56% in MIOU. The promising result of MS-CFNet is attributed to the integration of radiologists' clinical diagnostic procedure and a bionic mindset, enhancing the network's ability to recognize and segment breast fibroadenomas effectively.
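The "cosh-smooth" loss named above is not defined in this abstract. As a hypothetical illustration of the general idea — smoothing a segmentation loss with a hyperbolic-cosine term — the related, previously published log-cosh Dice loss can be sketched as follows (an assumption for illustration, not the paper's loss):

```python
import numpy as np

def soft_dice_loss(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss on probability maps in [0, 1]; 0 means perfect overlap."""
    inter = float((pred * gt).sum())
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum()) + float(gt.sum()) + eps)

def log_cosh_dice_loss(pred: np.ndarray, gt: np.ndarray) -> float:
    """Wrapping the loss in log(cosh(.)) flattens it near zero, giving
    gentler gradients around boundaries that are already well fit."""
    return float(np.log(np.cosh(soft_dice_loss(pred, gt))))
```

Since log(cosh(x)) ≈ x²/2 near zero and ≈ |x| for large x, the wrapped loss penalizes small boundary errors softly while keeping larger errors strongly penalized.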
Affiliation(s)
- Yongxin Guo
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Yufeng Zhou
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- National Medical Products Administration (NMPA) Key Laboratory for Quality Evaluation of Ultrasonic Surgical Equipment, 507 Gaoxin Ave., Donghu New Technology Development Zone, Wuhan, 430075, Hubei, China
5
Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. [PMID: 38139718] [PMCID: PMC10748263] [DOI: 10.3390/s23249872]
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could enhance their performance in IGS severalfold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with a new perspective and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
6
Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Arch Comput Methods Eng 2023; 30:3173-3233. [PMID: 37260910] [PMCID: PMC10071480] [DOI: 10.1007/s11831-023-09899-9]
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, including object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multilingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages, aided by data augmentation. Recently, ideas from deep learning (DL) such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions have improved the performance and operation of CNNs. Innovations in the internal architecture of CNNs and in their representational styles have further improved performance. This survey focuses on the internal taxonomy of deep learning and on different convolutional neural network models, especially model depth and width, as well as CNN components, applications, and the current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing, 100124, China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh, 11586, Kingdom of Saudi Arabia
7
Ansari MY, Yang Y, Meher PK, Dakua SP. Dense-PSP-UNet: A neural network for fast inference liver ultrasound segmentation. Comput Biol Med 2023; 153:106478. [PMID: 36603437] [DOI: 10.1016/j.compbiomed.2022.106478]
Abstract
Liver ultrasound (US), or sonography, is widely used because of its real-time output, low cost, ease of use, portability, and non-invasive nature. Segmentation of real-time liver US is essential for diagnosing and analyzing liver conditions (e.g., hepatocellular carcinoma (HCC)) and for assisting surgeons/radiologists in therapeutic procedures. In this paper, we propose a method using a modified Pyramid Scene Parsing (PSP) module in tuned neural network backbones to achieve real-time segmentation without compromising segmentation accuracy. Considering the widespread noise in US data and its impact on outcomes, we study the effect of pre-processing and the influence of loss functions on segmentation performance. We tested our method after annotating a publicly available US dataset containing 2400 images of 8 healthy volunteers (a link to the annotated dataset is provided); the results show that the Dense-PSP-UNet model achieves a high Dice coefficient of 0.913±0.024 while delivering real-time performance of 37 frames per second (FPS).
Affiliation(s)
- Yin Yang
- Hamad Bin Khalifa University, Doha, Qatar
8
Li F, Li W, Gao X, Liu R, Xiao B. DCNet: Diversity convolutional network for ventricle segmentation on short-axis cardiac magnetic resonance images. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.110033]