1. Paccini M, Paschina G, De Beni S, Stefanov A, Kolev V, Patanè G. US & MR/CT Image Fusion with Markerless Skin Registration: A Proof of Concept. Journal of Imaging Informatics in Medicine 2025;38:615-628. PMID: 39020154; PMCID: PMC11810866; DOI: 10.1007/s10278-024-01176-w.
Abstract
This paper presents an innovative automatic fusion imaging system that combines 3D CT/MR images with real-time ultrasound acquisition. The system eliminates the need for external physical markers and complex training, making image fusion feasible for physicians of all experience levels. The integrated system comprises a portable 3D camera for patient-specific surface acquisition, an electromagnetic tracking system, and US components. The fusion algorithm consists of two main parts, skin segmentation and rigid co-registration, both integrated into the US machine. The co-registration aligns the surface extracted from the CT/MR images with the 3D surface acquired by the camera, enabling rapid and effective fusion. Experimental tests in different settings validate the system's accuracy, computational efficiency, noise robustness, and operator independence.
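The rigid co-registration described above reduces, at its core, to aligning two 3-D point sets. As an illustrative sketch only (not the authors' implementation), a least-squares rigid transform between pre-matched surface point sets can be computed in closed form with the Kabsch/SVD method; the point correspondences are assumed to be given here:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of pre-matched 3-D surface points. A real
    surface-registration pipeline would first establish correspondences
    (e.g., via ICP); here they are assumed given.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known rotation + translation from clean data.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R, t = rigid_align(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noisy, partially overlapping surfaces, this closed-form step is typically wrapped in an ICP-style loop that re-estimates correspondences at each iteration.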
Affiliation(s)
- Velizar Kolev
- MedCom GmbH, Dolivostr. 11, Darmstadt, 64293, Germany
2. Balagalla UB, Jayasooriya J, de Alwis C, Subasinghe A. Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images – a survey. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2023. DOI: 10.1080/21681163.2023.2179343.
Affiliation(s)
- U. B. Balagalla
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- J.V.D. Jayasooriya
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- C. de Alwis
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- A. Subasinghe
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
3. Chen F, Xu P, Xie Y, Zhang D, Liao H, Zhao Z. Annotation-guided encoder-decoder network for bone extraction in ultrasound-assisted orthopedic surgery. Comput Biol Med 2022;148:105813. PMID: 35849949; DOI: 10.1016/j.compbiomed.2022.105813.
Abstract
Patients and surgeons are usually exposed to substantial ionizing radiation during fluoroscopy-based navigated orthopedic surgery. By comparison, ultrasound-assisted orthopedic surgery not only reduces the radiation risk but also provides rich navigation information. However, due to the artifacts in ultrasound images, the extraction of bone structure from ultrasound sequences can be a particularly difficult task, which poses major challenges for ultrasound-assisted orthopedic navigation. In this paper, we propose an annotation-guided encoder-decoder network (AGN) to extract bone structure from radiation-free ultrasound sequences. Specifically, the variability of the ultrasound probe's pose changes the ultrasound frame during the acquisition of ultrasound sequences; therefore, a feature alignment module deployed in the AGN model is used to achieve reliable matching across ultrasound frames. Moreover, inspired by interactive ultrasound analysis, where user-annotated foreground information can help target extraction, the AGN model incorporates annotation information obtained by Siamese networks. Experimental results validated that the AGN model not only produced better bone surface extraction than state-of-the-art methods (IoU: 0.92 vs. 0.88) but also achieved almost real-time extraction at about 15 frames per second. In addition, the acquired bone surface provides a radiation-free 3D intraoperative bone structure for intuitive navigation of orthopedic surgery.
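For reference, the IoU figure quoted above (0.92 vs. 0.88) is the intersection-over-union of a predicted mask and its ground truth; a minimal sketch:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union (Jaccard index) of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 pixels, 4 shared
print(round(iou(a, b), 3))  # 0.667  (= 4 / 6)
```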
Affiliation(s)
- Fang Chen
- Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, China
- Peng Xu
- Children's Hospital of Nanjing Medical University, Nanjing, 21106, China
- Yanting Xie
- Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, China
- Daoqiang Zhang
- Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, China
- Zhe Zhao
- Department of Orthopaedics, Beijing Tsinghua Changgung Hospital, Tsinghua University, China
4. Artificial intelligence in gastrointestinal and hepatic imaging: past, present and future scopes. Clin Imaging 2022;87:43-53. DOI: 10.1016/j.clinimag.2022.04.007.
5. Allahverdy A, Zare-Sadeghi A, Kalantari R, Moqadam R, Loghmani N, Shiran M. Brain tumor segmentation using hierarchical combination of fuzzy logic and cellular automata. Journal of Medical Signals & Sensors 2022;12:263-268. PMID: 36120403; PMCID: PMC9480508; DOI: 10.4103/jmss.jmss_128_21.
Abstract
Background: Magnetic resonance (MR) imaging is one of the most important diagnostic tools for brain tumor detection. Segmentation of the glioma tumor region in brain MR images is a challenging medical image processing problem, and precise, reliable segmentation algorithms can be significantly helpful in diagnosis and treatment planning. Methods: In this article, a novel brain tumor segmentation method is introduced as a post-segmentation module: it takes the output of a primary segmentation method as input and improves the segmentation performance. The approach combines fuzzy logic with cellular automata (CA). Results: The BraTS online dataset was used to evaluate the proposed method. In the first step, the intensity of each pixel is fed to a fuzzy system to label the pixel; in the second step, the label of each pixel is fed to a fuzzy CA to improve the segmentation. This step is repeated until the performance saturates. The accuracy of the first step was 85.8%, while the accuracy after applying the fuzzy CA reached 99.8%. Conclusion: The practical results show that the proposed method improves brain tumor segmentation in MR images significantly in comparison with other approaches.
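As a toy illustration of the cellular-automaton idea (a crisp 3×3 majority-vote rule; the paper's CA is fuzzy and is iterated until performance saturates):

```python
import numpy as np

def ca_step(labels):
    """One cellular-automaton pass over a binary label map: each pixel
    takes the majority label of its 3x3 neighbourhood (zero-padded
    borders). A crisp stand-in for the paper's fuzzy CA rule."""
    h, w = labels.shape
    padded = np.pad(labels, 1)
    votes = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (votes >= 5).astype(labels.dtype)   # majority of the 9 cells

grid = np.zeros((7, 7), dtype=int)
grid[1:4, 1:4] = 1   # small object
grid[2, 2] = 0       # hole inside the object
grid[5, 5] = 1       # isolated noise pixel
out = ca_step(grid)
print(out[2, 2], out[5, 5])  # 1 0  (hole filled, speck removed)
```

In the paper this update would be applied repeatedly, stopping once the labelling no longer changes.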
6. Xi J, Chen J, Wang Z, Ta D, Lu B, Deng X, Li X, Huang Q. Simultaneous Segmentation of Fetal Hearts and Lungs for Medical Ultrasound Images via an Efficient Multi-scale Model Integrated With Attention Mechanism. Ultrasonic Imaging 2021;43:308-319. PMID: 34470531; DOI: 10.1177/01617346211042526.
Abstract
Large-scale early scanning of fetuses via ultrasound imaging is widely used to alleviate the morbidity and mortality caused by congenital anomalies of the fetal heart and lungs. To reduce the intensive cost of manual recognition of organ regions, many automatic segmentation methods have been proposed. However, existing methods still face a multi-scale problem caused by the wide range of organ sizes and receptive fields, limited resolution of segmentation masks, and interference from task-irrelevant features, all of which hinder accurate segmentation. To achieve semantic segmentation that (1) extracts multi-scale features from images, (2) compensates for high-resolution information, and (3) eliminates task-irrelevant features, we propose a multi-scale model that integrates a skip-connection framework with an attention mechanism. The multi-scale feature extraction modules are combined with additive attention gate units for irrelevant-feature elimination, within a U-Net framework whose skip connections provide information compensation. The performance on fetal heart and lung segmentation indicates the superiority of our method over existing deep learning based approaches. Our method also shows competitive performance stability across semantic segmentation tasks, suggesting a promising contribution to ultrasound-based prognosis of congenital anomalies in early intervention and to alleviating their negative effects.
Affiliation(s)
- Jianing Xi
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China
- Jiangang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, Shanghai, China
- Zhao Wang
- School of Computer Science, Northwestern Polytechnical University, Xi'an, China
- Dean Ta
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Bing Lu
- Center for Medical Ultrasound, Nanjing Medical University Affiliated Suzhou Hospital, Suzhou, China
- Xuedong Deng
- Center for Medical Ultrasound, Nanjing Medical University Affiliated Suzhou Hospital, Suzhou, China
- Xuelong Li
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China
- Qinghua Huang
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China
7. Wang K, Liang S, Zhong S, Feng Q, Ning Z, Zhang Y. Breast ultrasound image segmentation: A coarse-to-fine fusion convolutional neural network. Med Phys 2021;48:4262-4278. PMID: 34053092; DOI: 10.1002/mp.15006.
Abstract
PURPOSE Breast ultrasound (BUS) image segmentation plays a crucial role in computer-aided diagnosis systems for BUS examination, which help improve the accuracy of breast cancer diagnosis. However, segmentation remains a challenging task owing to poor image quality and large variations in the sizes, shapes, and locations of breast lesions. In this paper, we propose a new convolutional neural network with coarse-to-fine feature fusion to address these challenges. METHODS The proposed fusion network consists of an encoder path, a decoder path, and a core fusion stream path (FSP). The encoder path captures context information, and the decoder path is used for localization prediction. The FSP is designed to generate beneficial aggregate feature representations (i.e., various-sized lesion features, aggregated coarse-to-fine information, and high-resolution edge characteristics) from the encoder and decoder paths, which are eventually used for accurate breast lesion segmentation. To better retain boundary information and alleviate the effect of image noise, we input the superpixel image along with the original image to the fusion network. Furthermore, a weighted-balanced loss function was designed to address the problem of lesion regions having different sizes. We then conducted exhaustive experiments on three public BUS datasets to evaluate the proposed network. RESULTS The proposed method outperformed state-of-the-art (SOTA) segmentation methods on the three public BUS datasets, with average Dice similarity coefficients of 84.71 (±1.07), 83.76 (±0.83), and 86.52 (±1.52), average intersection-over-union values of 76.34 (±1.50), 75.70 (±0.98), and 77.86 (±2.07), average sensitivities of 86.66 (±1.82), 85.21 (±1.98), and 87.21 (±2.51), average specificities of 97.92 (±0.46), 98.57 (±0.19), and 99.42 (±0.21), and average accuracies of 95.89 (±0.57), 97.17 (±0.3), and 98.51 (±0.3).
CONCLUSIONS The proposed fusion network can effectively segment lesions from BUS images, presenting a new feature fusion strategy for this challenging segmentation task while outperforming the SOTA segmentation methods. The code is publicly available at https://github.com/mniwk/CF2-NET.
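The weighted-balanced loss mentioned above targets the imbalance between small lesion regions and the much larger background. One common form, used here purely as an assumed illustration rather than the paper's exact formula, scales each class term of the binary cross-entropy by its inverse frequency:

```python
import numpy as np

def balanced_bce(pred, target, eps=1e-7):
    """Class-balanced binary cross-entropy: the foreground and background
    terms are weighted by inverse class frequency, so small lesions are
    not swamped by background pixels. Illustrative form only; the paper's
    exact weighting may differ."""
    pred = np.clip(pred, eps, 1 - eps)
    n_fg = max(target.sum(), 1)
    n_bg = max(target.size - target.sum(), 1)
    w_fg = target.size / (2.0 * n_fg)
    w_bg = target.size / (2.0 * n_bg)
    loss = -(w_fg * target * np.log(pred)
             + w_bg * (1 - target) * np.log(1 - pred))
    return loss.mean()

# A tiny mask with 4 lesion pixels in a 10x10 image (4% foreground).
target = np.zeros((10, 10))
target[4:6, 4:6] = 1
good = np.where(target == 1, 0.9, 0.1)   # confident, correct prediction
bad = np.where(target == 1, 0.1, 0.9)    # confident, wrong prediction
print(balanced_bce(good, target) < balanced_bce(bad, target))  # True
```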
Affiliation(s)
- Ke Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Shujun Liang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Shengzhou Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Zhenyuan Ning
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, China
8. Qu X, Shi Y, Hou Y, Jiang J. An attention-supervised full-resolution residual network for the segmentation of breast ultrasound images. Med Phys 2020;47:5702-5714. PMID: 32964449; DOI: 10.1002/mp.14470.
Abstract
PURPOSE Breast cancer is the most common cancer among women worldwide. Medical ultrasound imaging is one of the most widely applied breast imaging methods for breast tumors. Automatic breast ultrasound (BUS) image segmentation can measure the size of tumors objectively. However, various ultrasound artifacts hinder segmentation. We propose an attention-supervised full-resolution residual network (ASFRRN) to segment tumors from BUS images. METHODS In the proposed method, Global Attention Upsample (GAU) and deep supervision were introduced into a full-resolution residual network (FRRN), where GAU learns to merge features at different levels with attention for deep supervision. Two datasets were employed for evaluation. One (Dataset A) consisted of 163 BUS images with tumors (53 malignant and 110 benign) from UDIAT Centre Diagnostic, and the other (Dataset B) included 980 BUS images with tumors (595 malignant and 385 benign) from the Sun Yat-sen University Cancer Center. The tumors in both datasets were manually segmented by medical doctors. For evaluation, the Dice coefficient (Dice), Jaccard similarity coefficient (JSC), and F1 score were calculated. RESULTS For Dataset A, the proposed method achieved higher Dice (84.3 ± 10.0%), JSC (75.2 ± 10.7%), and F1 score (84.3 ± 10.0%) than the previous best method, FRRN. For Dataset B, the proposed method also achieved higher Dice (90.7 ± 13.0%), JSC (83.7 ± 14.8%), and F1 score (90.7 ± 13.0%) than the previous best methods, DeepLabv3 and the dual attention network (DANet). For Dataset A + B, the proposed method achieved higher Dice (90.5 ± 13.1%), JSC (83.3 ± 14.8%), and F1 score (90.5 ± 13.1%) than the previous best method, DeepLabv3. Additionally, ASFRRN has only 10.6 M parameters, fewer than DANet (71.4 M) and DeepLabv3 (41.3 M). CONCLUSIONS We proposed ASFRRN, which combines an FRRN with an attention mechanism and deep supervision to segment tumors from BUS images. It achieved high segmentation accuracy with a reduced number of parameters.
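The identical Dice and F1 values reported above are expected rather than coincidental: for binary masks, the Dice coefficient and the pixelwise F1 score are the same quantity, 2TP / (2TP + FP + FN), and the Jaccard coefficient relates to Dice by J = D/(2 − D). A quick numerical check:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient = 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def f1(pred, gt):
    """F1 = 2PR / (P + R) from pixelwise precision and recall."""
    tp = np.logical_and(pred, gt).sum()
    prec, rec = tp / pred.sum(), tp / gt.sum()
    return 2 * prec * rec / (prec + rec)

def jaccard(pred, gt):
    """Jaccard (IoU) = |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

rng = np.random.default_rng(1)
pred = rng.random((64, 64)) > 0.5
gt = rng.random((64, 64)) > 0.5
d, j = dice(pred, gt), jaccard(pred, gt)
print(np.isclose(d, f1(pred, gt)), np.isclose(j, d / (2 - d)))  # True True
```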
Affiliation(s)
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China
- Yao Shi
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, 100191, China
- Yaxin Hou
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Jue Jiang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
9. Segmentation of breast ultrasound image with semantic classification of superpixels. Med Image Anal 2020;61:101657. PMID: 32032899; DOI: 10.1016/j.media.2020.101657.
Abstract
Breast cancer is a great threat to women's health. Ultrasound imaging has been applied extensively in the diagnosis of breast cancer. Due to poor image quality, segmentation of breast ultrasound (BUS) images remains a very challenging task, yet it is a crucial step for further analysis. In this paper, we propose a novel method to segment the breast tumor via semantic classification and patch merging. The method first selects two diagonal points to crop a region of interest (ROI) on the original image. Then, histogram equalization, a bilateral filter, and a pyramid mean-shift filter are applied to enhance the image. The cropped image is divided into many superpixels using simple linear iterative clustering (SLIC). Features are then extracted from the superpixels and a bag-of-words model is created. An initial classification is obtained by a back-propagation neural network (BPNN). To refine the preliminary result, k-nearest neighbor (KNN) reclassification is applied to produce the final segmentation. To verify the proposed method, we collected a BUS dataset containing 320 cases. The segmentation results of our method were compared with those obtained by five existing approaches. The experimental results show that our method achieved competitive results in terms of TP and FP and produced good approximations to the hand-labelled tumor contours with comprehensive consideration of all metrics (F1-score = 89.87% ± 4.05%, average radial error = 9.95% ± 4.42%).
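A toy sketch of the final refinement step: samples with preliminary labels are reclassified by a majority vote of their k nearest neighbours in feature space. The feature vectors and labels below are synthetic stand-ins, not the paper's superpixel features:

```python
import numpy as np

def knn_relabel(features, labels, k=2):
    """Refine binary labels: each sample is reassigned the majority label
    of its k nearest neighbours in feature space (self excluded); a tie
    keeps the current label. Synthetic stand-in for the KNN step above."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)               # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]         # indices of k nearest
    maj = 2 * labels[nn].sum(axis=1)          # 2 * positive votes
    return np.where(maj > k, 1, np.where(maj < k, 0, labels))

# Two well-separated clusters; the third sample starts mislabelled.
feats = np.array([[0.0, 0], [0.1, 0], [0.2, 0],
                  [5.0, 5], [5.1, 5], [5.2, 5]])
labels = np.array([0, 0, 1, 1, 1, 1])
print(knn_relabel(feats, labels))  # [0 0 0 1 1 1]
```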
10. Remote control of a robotic prosthesis arm with six-degree-of-freedom for ultrasonic scanning and three-dimensional imaging. Biomed Signal Process Control 2019. DOI: 10.1016/j.bspc.2019.101606.
11. Huang Q, Zeng Z, Li X. 2.5-D Extended Field-of-View Ultrasound. IEEE Transactions on Medical Imaging 2018;37:851-859. PMID: 29610066; DOI: 10.1109/tmi.2017.2776971.
Abstract
Recently, the growing emphasis on medical ultrasound (US) has led to rapid development of US extended field-of-view (EFOV) techniques. US EFOV techniques can be classified into three categories: 2-D US EFOV, 3-D US, and 3-D US EFOV. In this paper, we propose a novel EFOV method, called 2.5-D US EFOV, that combines the advantages of 2-D US EFOV and 3-D US by generating a panorama on a curved image plane guided by a curved scanning trajectory of the US probe. In 2.5-D US EFOV, the real-time position and orientation of the US image plane are recorded via an electromagnetic spatial sensor attached to the probe. The scanning direction need not be straight and can curve according to the region of interest (ROI). To form the curved panorama, an image cutting method is proposed. Finally, the curved panorama is rendered in 3-D space using surface rendering based on texture mapping, which allows 3-D measurements of lines and angles. Phantom experiments demonstrated that 2.5-D US EFOV images can show the anatomical structures of the ROI accurately and rapidly. The overall average errors for distance and angle measurements were -0.097 ± 0.128 cm (-1% ± 1.2%) and 1.50° ± 1.60° (1.9% ± 2%), respectively. A typical extended US image can be reconstructed from 321 B-scan images within 3 s. Satisfactory quantitative results on the spinal tissues of a scoliosis subject demonstrate that the system has potential applications in the assessment of musculoskeletal issues.
12. Huang Q, Wu B, Lan J, Li X. Fully Automatic Three-Dimensional Ultrasound Imaging Based on Conventional B-Scan. IEEE Transactions on Biomedical Circuits and Systems 2018;12:426-436. PMID: 29570068; DOI: 10.1109/tbcas.2017.2782815.
Abstract
Robotic ultrasound systems have come into clinical use over the past few decades, increasing the precision and quality of medical operations. In this paper, we propose a fully automatic scanning system for three-dimensional (3-D) ultrasound imaging. A depth camera is first used to obtain depth and color data of the tissue surface. Based on the depth image, the 3-D contour of the tissue is rendered and the scan path of the ultrasound probe is planned automatically. Following the scan path, a 3-D translating device moves the probe over the tissue surface while the B-scans and their positional information are recorded for subsequent volume reconstruction. To stop the scanning process when the pressure on the skin exceeds a preset threshold, two force sensors are attached to the front side of the probe for force measurement. In vitro and in vivo experiments were conducted to assess the performance of the proposed system. Quantitative results show that the volume measurement error was less than 1%, indicating that the system is capable of automatic ultrasound scanning and 3-D imaging. The proposed system is expected to serve well in clinical practice.
13. Ilunga-Mbuyamba E, Avina-Cervantes JG, Lindner D, Arlt F, Ituna-Yudonago JF, Chalopin C. Patient-specific model-based segmentation of brain tumors in 3D intraoperative ultrasound images. Int J Comput Assist Radiol Surg 2018;13:331-342. PMID: 29330658; DOI: 10.1007/s11548-018-1703-0.
Abstract
PURPOSE Intraoperative ultrasound (iUS) imaging is commonly used to support brain tumor operations. Tumor segmentation in iUS images is a difficult task, still under improvement because of the low signal-to-noise ratio, and the success of automatic methods is also limited by their high sensitivity to noise. Therefore, an alternative brain tumor segmentation method for 3D-iUS data is presented, which uses a tumor model obtained from magnetic resonance (MR) data for local MR-iUS registration. The aim is to enhance the visualization of brain tumor contours in iUS. METHODS A multistep approach is proposed. First, a region of interest (ROI) based on the patient-specific tumor model is defined. Second, hyperechogenic structures, mainly tumor tissues, are extracted from the ROI of both modalities using automatic thresholding techniques. Third, registration is performed over the extracted binary sub-volumes using a gradient-based similarity measure with rigid and affine transformations. Finally, the tumor model is aligned with the 3D-iUS data and its contours are displayed. RESULTS Experiments were successfully conducted on a dataset of 33 patients. The method was evaluated by comparing the tumor segmentation with expert manual delineations using two binary metrics: contour mean distance and Dice index. The proposed segmentation method using local binary registration was compared with two grayscale-based approaches and achieved better results in terms of both computational time and accuracy. CONCLUSION The proposed approach requires limited interaction and reduced computation time, making it relevant for intraoperative use. Experimental results and evaluations were performed offline. The developed tool could help neurosurgeons improve tumor border visualization in iUS volumes during brain tumor resection.
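The automatic thresholding step is not specified in detail above; Otsu's method is a standard choice for this kind of bimodal extraction and picks the threshold maximising the between-class variance. A sketch, offered as an assumption rather than the authors' exact technique:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method for 8-bit images: choose the threshold t that
    maximises the between-class variance of the pixels <= t versus > t."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0
    mu = np.cumsum(p * np.arange(256))      # cumulative mean of class 0
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)      # empty classes -> variance 0
    return int(np.argmax(sigma_b2))

# Synthetic bimodal "image": a dark mode in [20, 60), a bright one in [180, 220).
rng = np.random.default_rng(2)
img = np.concatenate([rng.integers(20, 60, 500),
                      rng.integers(180, 220, 500)]).astype(np.uint8)
img = img.reshape(40, 25)
t = otsu_threshold(img)
print(np.array_equal(img > t, img >= 180))  # True: threshold splits the modes
```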
Affiliation(s)
- Elisee Ilunga-Mbuyamba
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
- Juan Gabriel Avina-Cervantes
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Dirk Lindner
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Felix Arlt
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Jean Fulbert Ituna-Yudonago
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Claire Chalopin
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
14. Kuo JW, Mamou J, Wang Y, Saegusa-Beecroft E, Machi J, Feleppa EJ. Segmentation of 3-D High-Frequency Ultrasound Images of Human Lymph Nodes Using Graph Cut With Energy Functional Adapted to Local Intensity Distribution. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2017;64:1514-1525. PMID: 28796617; PMCID: PMC5913754; DOI: 10.1109/tuffc.2017.2737948.
Abstract
Previous studies by our group have shown that 3-D high-frequency quantitative ultrasound (QUS) methods have the potential to differentiate metastatic lymph nodes (LNs) from cancer-free LNs dissected from human cancer patients. To successfully perform these methods inside the LN parenchyma (LNP), an automatic segmentation method is highly desired to exclude the surrounding thin layer of fat from QUS processing and accurately correct for ultrasound attenuation. In high-frequency ultrasound images of LNs, the intensity distribution of LNP and fat varies spatially because of acoustic attenuation and focusing effects. Thus, the intensity contrast between two object regions (e.g., LNP and fat) is also spatially varying. In our previous work, nested graph cut (GC) demonstrated its ability to simultaneously segment LNP, fat, and the outer phosphate-buffered saline bath even when some boundaries are lost because of acoustic attenuation and focusing effects. This paper describes a novel approach called GC with locally adaptive energy to further deal with spatially varying distributions of LNP and fat caused by inhomogeneous acoustic attenuation. The proposed method achieved Dice similarity coefficients of 0.937±0.035 when compared with expert manual segmentation on a representative data set consisting of 115 3-D LN images obtained from colorectal cancer patients.
15. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images. BioMed Research International 2017;2017:9157341. PMID: 28536703; PMCID: PMC5426079; DOI: 10.1155/2017/9157341.
Abstract
Ultrasound imaging has become one of the most popular medical imaging modalities, with numerous diagnostic applications. However, ultrasound (US) image segmentation, an essential process for further analysis, is a challenging task due to poor image quality. In this paper, we propose a new segmentation scheme that combines both region- and edge-based information within the robust graph-based (RGB) segmentation method. The only interaction required is selecting two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. The enhanced image is filtered by pyramid mean shift to improve homogeneity. Optimized by the particle swarm optimization (PSO) algorithm, the RGB segmentation method is then applied to the filtered image. The segmentation results of our method were compared with the corresponding results of three existing approaches, using four metrics to measure segmentation performance. The experimental results show that the method achieves the best overall performance, with the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%).
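Of the enhancement steps listed, histogram equalization is the most self-contained to sketch: the normalised cumulative histogram becomes the grey-level mapping function.

```python
import numpy as np

def equalize(img):
    """Histogram equalization of an 8-bit image: each grey level is mapped
    through the normalised cumulative histogram, spreading the occupied
    intensity range over the full 0-255 scale. (Assumes the image is not
    a single constant value.)"""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first occupied bin
    lut = (cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast image: grey levels squeezed into [100, 109].
img = np.repeat(np.arange(100, 110, dtype=np.uint8), 10).reshape(10, 10)
out = equalize(img)
print(out.min(), out.max())  # 0 255
```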
16. A Review on Real-Time 3D Ultrasound Imaging Technology. BioMed Research International 2017;2017:6027029. PMID: 28459067; PMCID: PMC5385255; DOI: 10.1155/2017/6027029.
Abstract
Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it gives clinicians interactive feedback for acquiring high-quality images, together with timely spatial information about the scanned area, making it valuable in intraoperative examinations. Many publications have addressed real-time or near-real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. However, a review of how to design an interactive system with appropriate processing algorithms has been missing, leaving the relevant technology without a systematic treatment. In this article, previous and recent work on designing real-time or near-real-time 3D ultrasound imaging systems is reviewed. Specifically, data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented, and the advantages and disadvantages of state-of-the-art approaches are discussed in detail.
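One reconstruction family this review covers is pixel-based binning of tracked 2D frames into a voxel grid (often called pixel-nearest-neighbour, PNN). A minimal sketch of the bin step, under the assumption of 4x4 homogeneous tracking transforms and omitting the hole-filling pass; all names here are illustrative:

```python
import numpy as np

def reconstruct_volume(frames, poses, vol_shape, voxel_size):
    """PNN bin step: scatter each tracked 2D frame's pixels into a voxel grid,
    averaging intensities where several pixels land in the same voxel."""
    acc = np.zeros(vol_shape)            # intensity accumulator
    cnt = np.zeros(vol_shape)            # hit counter per voxel
    h, w = frames[0].shape
    # Homogeneous coordinates of every pixel in the frame plane (frame z = 0).
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(),
                    np.zeros(h * w), np.ones(h * w)])          # 4 x N
    for frame, pose in zip(frames, poses):
        world = pose @ pix                                     # frame -> world
        idx = np.round(world[:3] / voxel_size).astype(int)     # voxel indices
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)[:, None]), axis=0)
        i, j, k = idx[:, ok]
        np.add.at(acc, (i, j, k), frame.ravel()[ok])           # unbuffered scatter-add
        np.add.at(cnt, (i, j, k), 1)
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Two parallel frames of constant intensity, one voxel apart along z.
frame = np.full((8, 8), 100.0)
pose_a = np.eye(4)
pose_b = np.eye(4); pose_b[2, 3] = 1.0   # translate one unit along z
vol = reconstruct_volume([frame, frame], [pose_a, pose_b],
                         vol_shape=(8, 8, 4), voxel_size=1.0)
```

With these poses the two frames fill the z = 0 and z = 1 slices of the volume and leave the rest empty, which is why PNN pipelines are usually followed by a hole-filling step for sparse sweeps.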
17. Breast ultrasound image segmentation: a survey. Int J Comput Assist Radiol Surg 2017; 12:493-507. [DOI: 10.1007/s11548-016-1513-1] [Citation(s) in RCA: 69]
18. Feng C, Zhao D, Huang M. Image segmentation and bias correction using local inhomogeneous iNtensity clustering (LINC): A region-based level set method. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.09.008] [Citation(s) in RCA: 48]
19. Systematic Evaluation on Speckle Suppression Methods in Examination of Ultrasound Breast Images. Appl Sci (Basel) 2016. [DOI: 10.3390/app7010037] [Citation(s) in RCA: 23]
20. Li Z, Zhang Y, Gong H, Li W, Tang X. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies. Comput Med Imaging Graph 2016; 54:55-66. [DOI: 10.1016/j.compmedimag.2016.08.006] [Citation(s) in RCA: 2]
21.
22. Jiang WW, Li C, Li AH, Zheng YP. Clinical Evaluation of a 3-D Automatic Annotation Method for Breast Ultrasound Imaging. Ultrasound Med Biol 2016; 42:870-881. [PMID: 26725169] [DOI: 10.1016/j.ultrasmedbio.2015.11.028] [Citation(s) in RCA: 2]
Abstract
The routine clinical method for annotating breast ultrasound images is limited by the time it consumes and by inconsistent, inaccurate, and incomplete notation. A novel 3-D automatic annotation method for breast ultrasound imaging has been developed that uses a spatial sensor to track and record conventional B-mode scanning, providing more objective annotation. The aim of the study described here was to test the feasibility of the automatic annotation method in clinical breast ultrasound scanning. An ultrasound scanning procedure using the new method was established, and the new and conventional manual annotation methods were compared in 46 breast cancer patients (49 ± 12 y). The time needed to scan a patient was recorded for both methods, intra-observer and inter-observer experiments were performed, and intra-class correlation coefficients (ICCs) were calculated to assess reproducibility. The new annotation method reduced average scanning time by 36 s (42.9%) relative to the conventional method, and the results of the two methods were highly correlated (r = 0.933, p < 0.0001 for distance; r = 0.995, p < 0.0001 for radial angle). Intra-observer and inter-observer reproducibility was excellent, with all ICCs > 0.92. These results indicate that the 3-D automatic annotation method is reliable for clinical breast ultrasound scanning and can greatly reduce scanning time. Although large-scale clinical studies are still needed, this work shows that the new annotation method has the potential to be a valuable tool in breast ultrasound examination.
Affiliation(s)
- Wei-Wei Jiang: Interdisciplinary Division of Biomedical Engineering, Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Cheng Li: Department of Ultrasound, State Key Laboratory of Oncology in Southern China, Sun Yat-Sen University Cancer Center, Guangzhou, China; Department of Ultrasound, Hospital of Traditional Chinese Medicine of Zhongshan, Zhongshan, China
- An-Hua Li: Department of Ultrasound, State Key Laboratory of Oncology in Southern China, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yong-Ping Zheng: Interdisciplinary Division of Biomedical Engineering, Hong Kong Polytechnic University, Kowloon, Hong Kong, China
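The reproducibility analysis in the entry above rests on intra-class correlation coefficients. A minimal sketch of the two-way random-effects, absolute-agreement, single-rater form ICC(2,1) from the Shrout-Fleiss conventions; this is a generic computation, not the study's code, and the sample measurements below are invented for illustration:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n subjects x k raters) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)      # per-subject means
    col_means = ratings.mean(axis=0)      # per-rater means
    # Mean squares from the two-way ANOVA decomposition.
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two observers measuring the same six lesion distances (mm), small disagreement.
obs = np.array([[10.0, 10.2], [15.1, 15.0], [20.3, 20.1],
                [25.0, 25.4], [30.2, 30.0], [35.1, 35.3]])
icc = icc_2_1(obs)
```

Large between-subject spread with small between-observer disagreement drives the ICC toward 1, which is the regime the study reports (all ICCs > 0.92).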
23. Yang S, Yi Z, He X, Li X. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering. IEEE Trans Image Process 2015; 24:5302-5314. [PMID: 26186793] [DOI: 10.1109/tip.2015.2457033] [Citation(s) in RCA: 4]
Abstract
Multiplicative update algorithms are important tools in information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which increases the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on this cost function, a class of novel graph-regularized NMF algorithms is developed, yielding a class of extended multiplicative update algorithms with manifold-structure regularization. Analysis shows that, during learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. The theoretical results are confirmed by simulations: for different initializations and data sets, curves of the cost functions and of the decomposition illustrate the convergence of the proposed update rules, and basis images, reconstructed images, and clustering results demonstrate their efficiency. Finally, the clustering accuracies of different algorithms are compared, showing that the proposed algorithms achieve state-of-the-art performance in image clustering applications.
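The class of algorithms described above extends graph-regularized NMF multiplicative updates. A minimal sketch of the standard GNMF form (the baseline this paper improves on, not the paper's own cost function), minimizing ||X - WH||^2 + lam * tr(H L H^T) with graph Laplacian L = D - A over the samples; the toy data and function name are illustrative:

```python
import numpy as np

def gnmf(X, A, k, lam=0.1, n_iter=200, seed=0):
    """Multiplicative updates for graph-regularized NMF.
    X: (m x n) nonnegative data, columns are samples.
    A: (n x n) nonnegative sample-affinity matrix; D is its degree matrix."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    D = np.diag(A.sum(axis=1))
    eps = 1e-10                                  # guards against division by zero
    for _ in range(n_iter):
        # W update is the plain Lee-Seung rule (graph term does not involve W).
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # H update carries the graph term: +lam*H@A in the numerator,
        # +lam*H@D in the denominator, from the gradient of tr(H L H^T).
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

# Toy data: two clusters of nonnegative 5-D samples; affinity links within clusters.
rng = np.random.default_rng(1)
X = np.hstack([rng.random((5, 10)) + np.array([[2, 0, 0, 0, 0]]).T,
               rng.random((5, 10)) + np.array([[0, 0, 0, 0, 2]]).T])
A = np.zeros((20, 20))
A[:10, :10] = 1
A[10:, 10:] = 1
np.fill_diagonal(A, 0)
W, H = gnmf(X, A, k=2)
err = np.linalg.norm(X - W @ H)
```

The multiplicative form keeps W and H nonnegative by construction, and the lam-weighted graph terms pull the representations H of linked samples together, which is the manifold-regularization idea the paper builds on.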