1
China D, Feng Z, Hooshangnejad H, Sforza D, Vagdargi P, Bell MAL, Uneri A, Sisniega A, Ding K. FLEX: FLexible Transducer With External Tracking for Ultrasound Imaging With Patient-Specific Geometry Estimation. IEEE Trans Biomed Eng 2024; 71:1298-1307. [PMID: 38048239] [PMCID: PMC10998498] [DOI: 10.1109/tbme.2023.3333216]
Abstract
Flexible array transducers can adapt to patient-specific geometries during real-time ultrasound (US) image-guided therapy monitoring, making the system radiation-free and less user-dependent. Precise estimation of the flexible transducer's geometry is crucial for the delay-and-sum (DAS) beamforming algorithm to reconstruct B-mode US images. The primary innovation of this work is a system named FLexible transducer with EXternal tracking (FLEX) that estimates the position of each element of the flexible transducer and reconstructs precise US images. FLEX uses customized optical markers and a tracker to monitor the probe's geometry, employing a polygon-fitting algorithm to estimate the position and azimuth angle of each transducer element. The traditional DAS algorithm then derives its delay estimates from the tracked element positions and reconstructs US images from radio-frequency (RF) channel data. The proposed method was evaluated on phantoms and cadaveric specimens, demonstrating its clinical feasibility. Deviations of the tracked probe geometry from ground truth were minimal: 0.50 ± 0.29 mm for the CIRS phantom, 0.54 ± 0.35 mm for the deformable phantom, and 0.36 ± 0.24 mm on the cadaveric specimen. Reconstructing the US image with the tracked probe geometry significantly outperformed the untracked geometry, with a Dice score of 95.1 ± 3.3% versus 62.3 ± 9.2% on the CIRS phantom. The method achieved high accuracy (<0.5 mm error) in tracking element positions for various random curvatures applicable to clinical deployment. The evaluation results show that this radiation-free method can effectively reconstruct US images and assist in monitoring image-guided therapy with minimal user dependency.
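The core of DAS beamforming from tracked element positions can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `das_beamform`, the array shapes, and the defaults for speed of sound `c` and sampling rate `fs` are assumptions for the example.

```python
import numpy as np

def das_beamform(rf, elem_pos, pixels, c=1540.0, fs=40e6):
    """Delay-and-sum over tracked element positions.

    rf       : (n_elements, n_samples) RF channel data
    elem_pos : (n_elements, 2) tracked (x, z) element positions in metres
    pixels   : (n_pixels, 2) image-point positions in metres
    c        : assumed speed of sound [m/s]; fs : sampling rate [Hz]
    """
    n_elem = rf.shape[0]
    image = np.zeros(len(pixels))
    for i, p in enumerate(pixels):
        # distance from each tracked element to the image point
        dist = np.linalg.norm(elem_pos - p, axis=1)
        # round-trip time of flight converted to RF sample indices
        idx = np.round(2.0 * dist / c * fs).astype(int)
        valid = idx < rf.shape[1]
        # coherent summation across the valid channels
        image[i] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return image
```

In the FLEX setting, `elem_pos` would come from the optically tracked polygon fit rather than a nominal probe curvature, which is exactly what the Dice comparison above measures.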
2
Yadav N, Dass R, Virmani J. Assessment of encoder-decoder-based segmentation models for thyroid ultrasound images. Med Biol Eng Comput 2023. [PMID: 37353695] [DOI: 10.1007/s11517-023-02849-4]
Abstract
Encoder-decoder-based semantic segmentation models classify each image pixel into its corresponding class, such as the region of interest (ROI) or background. In the present study, simple, dilated-convolution, series, and directed-acyclic-graph (DAG)-based encoder-decoder semantic segmentation models were implemented, i.e., SegNet (VGG16), SegNet (VGG19), U-Net, MobileNetv2, ResNet18, ResNet50, Xception, and Inception networks, to segment thyroid tumor ultrasound (TTUS) images. Transfer learning was used to train these segmentation networks on original and despeckled TTUS images, and performance was evaluated with the mIoU and mDC metrics. Based on exhaustive experiments, the ResNet50-based segmentation model obtained the best results both objectively, with an mIoU of 0.87 and an mDC of 0.94, and subjectively, according to radiologist assessment of the shape, margin, and echogenicity of the segmented lesions. Since the ResNet50 model provides the best segmentation under both objective and subjective assessment, it may be used in healthcare systems to identify thyroid nodules accurately in real time.
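The two metrics above reduce, per image and class, to the Dice coefficient and intersection-over-union on binary masks; mDC and mIoU are these scores averaged over the test set. A minimal sketch (the helper `dice_iou` is hypothetical, not from the paper, and the averaging step is left out):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU for a pair of binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # convention: two empty masks count as a perfect match
    dice = 2.0 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    return float(dice), float(iou)
```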
Affiliation(s)
- Niranjan Yadav
- Department of Electronics and Communication Engineering, Deenbandhu Chhotu Ram University of Science and Technology Murthal, Sonepat, 131039, India.
- Rajeshwar Dass
- Department of Electronics and Communication Engineering, Deenbandhu Chhotu Ram University of Science and Technology Murthal, Sonepat, 131039, India
- Jitendra Virmani
- Central Scientific Instruments Organization, Council of Scientific and Industrial Research, Chandigarh, 160030, India
3
Xia M, Yang H, Huang Y, Qu Y, Zhou G, Zhang F, Wang Y, Guo Y. 3D pyramidal densely connected network with cross-frame uncertainty guidance for intravascular ultrasound sequence segmentation. Phys Med Biol 2023; 68. [PMID: 36745930] [DOI: 10.1088/1361-6560/acb988]
Abstract
Objective. Automatic extraction of the external elastic membrane border (EEM) and the lumen-intima border (LIB) in intravascular ultrasound (IVUS) sequences aids atherosclerosis diagnosis. Existing IVUS segmentation networks ignore longitudinal relations among sequential images and neglect that IVUS images of different vascular conditions vary largely in intricacy and informativeness; as a result, their performance degrades in the complicated parts of IVUS sequences. Approach. In this paper, we develop a 3D Pyramidal Densely-connected Network (PDN) with Adaptive learning and post-Correction guided by a novel cross-frame uncertainty (CFU); the proposed method is named PDN-AC. Specifically, the PDN enables longitudinal information exploitation and effective perception of size-varied vessel regions in IVUS samples by pyramidally connecting multi-scale 3D dilated convolutions. Additionally, the CFU enhances the robustness of the method to complicated pathology at the frame level (f-CFU) and pixel level (p-CFU) by exploiting cross-frame knowledge in IVUS sequences. The f-CFU weighs the complexity of IVUS frames and steers adaptive sampling during PDN training. The p-CFU visualizes uncertain pixels probably misclassified by the PDN and guides an active-contour-based post-correction. Main results. Human and animal experiments were conducted on IVUS datasets acquired from atherosclerosis patients and pigs. The f-CFU-weighted adaptive sampling reduced the Hausdorff distance (HD) by 10.53%/7.69% in EEM/LIB detection, and the p-CFU-guided post-correction achieved further improvements of 2.94%/5.56%. Significance. The PDN-AC attained mean Jaccard values of 0.90/0.87 and HD values of 0.33/0.34 mm in EEM/LIB detection, preferable to state-of-the-art IVUS segmentation methods.
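The HD values reported here are, in their simplest form, the symmetric Hausdorff distance between predicted and reference contour point sets. The sketch below assumes contours given as (n, 2) coordinate arrays; it is an illustration of the metric, not the authors' evaluation code.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n, 2) and b (m, 2)."""
    # pairwise Euclidean distances between the two contours
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # farthest nearest-neighbour distance, taken in both directions
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```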
Affiliation(s)
- Menghua Xia
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Hongbo Yang
- Department of Cardiology, Zhongshan Hospital, Fudan University, Shanghai Institute of Cardiovascular Diseases, Shanghai 200032, People's Republic of China
- Yi Huang
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Yanan Qu
- Department of Cardiology, Zhongshan Hospital, Fudan University, Shanghai Institute of Cardiovascular Diseases, Shanghai 200032, People's Republic of China
- Guohui Zhou
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Fudan University, Shanghai 200032, People's Republic of China
- Feng Zhang
- Department of Cardiology, Zhongshan Hospital, Fudan University, Shanghai Institute of Cardiovascular Diseases, Shanghai 200032, People's Republic of China
- Yuanyuan Wang
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Fudan University, Shanghai 200032, People's Republic of China
- Yi Guo
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Fudan University, Shanghai 200032, People's Republic of China
4
Benabdallah FZ, Djerou L. Active Contour Extension Basing on Haralick Texture Features, Multi-gene Genetic Programming, and Block Matching to Segment Thyroid in 3D Ultrasound Images. Arabian Journal for Science and Engineering 2022. [DOI: 10.1007/s13369-022-07286-3]
5
Krönke M, Eilers C, Dimova D, Köhler M, Buschner G, Schweiger L, Konstantinidou L, Makowski M, Nagarajah J, Navab N, Weber W, Wendler T. Tracked 3D ultrasound and deep neural network-based thyroid segmentation reduce interobserver variability in thyroid volumetry. PLoS One 2022; 17:e0268550. [PMID: 35905038] [PMCID: PMC9337648] [DOI: 10.1371/journal.pone.0268550]
Abstract
Thyroid volumetry is crucial in the diagnosis, treatment, and monitoring of thyroid diseases, but conventional thyroid volumetry with 2D ultrasound is highly operator-dependent. This study compares 2D ultrasound and tracked 3D ultrasound with automatic thyroid segmentation based on a deep neural network regarding inter- and intraobserver variability, time, and accuracy, with MRI as the volume reference. 28 healthy volunteers (aged 24-50 years) were scanned with 2D and 3D ultrasound (and by MRI) by three physicians (MD 1, 2, 3) with different experience levels (6, 4, and 1 years). In the 2D scans, the thyroid lobe volumes were calculated with the ellipsoid formula. A convolutional deep neural network (CNN) automatically segmented the 3D thyroid lobes; 26, 6, and 6 random lobe scans were used for training, validation, and testing, respectively. On MRI (T1 VIBE sequence), the thyroid was manually segmented by an experienced MD. MRI thyroid volumes ranged from 2.8 to 16.7 ml (mean 7.4, SD 3.05). The CNN was trained to an average Dice score of 0.94. Interobserver variability between pairs of MDs showed mean differences for 2D and 3D, respectively, of 0.58 vs. 0.52 ml (MD1 vs. 2), -1.33 vs. -0.17 ml (MD1 vs. 3), and -1.89 vs. -0.70 ml (MD2 vs. 3). Paired-samples t-tests showed significant differences for 2D (p = .140, p = .002, and p = .002) and none for 3D (p = .176, p = .722, and p = .057). Intraobserver variability was similar for 2D and 3D ultrasound. Comparison of ultrasound and MRI volumes showed a significant difference for the 2D volumetry of all MDs (p = .002, p = .009, p < .001) and no significant difference for 3D ultrasound (p = .292, p = .686, p = .091). Acquisition time was significantly shorter for 3D ultrasound. Tracked 3D ultrasound combined with CNN segmentation significantly reduces interobserver variability in thyroid volumetry and increases measurement accuracy with shorter acquisition times.
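The ellipsoid formula used for the 2D lobe volumes is conventionally V = k · length · width · depth. The abstract does not state the exact correction factor used in the study, so the classical k = π/6 (≈ 0.524) below is an assumption, as is the helper name `lobe_volume_ml`.

```python
import math

def lobe_volume_ml(length_cm, width_cm, depth_cm, k=math.pi / 6):
    """Ellipsoid approximation of a thyroid lobe volume.

    Dimensions in cm give a volume in ml (cm^3). k = pi/6 is the
    classical ellipsoid factor; the study's exact factor is not given
    in the abstract, so this default is an assumption.
    """
    return k * length_cm * width_cm * depth_cm
```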
Affiliation(s)
- Markus Krönke
- Department of Radiology and Nuclear Medicine, German Heart Center, Technical University of Munich, Munich, Germany
- Department of Nuclear Medicine, School of Medicine, Technical University of Munich, Munich, Germany
- Christine Eilers
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Computer Science, Technical University of Munich, Garching Near Munich, Germany
- Desislava Dimova
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Computer Science, Technical University of Munich, Garching Near Munich, Germany
- Melanie Köhler
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Computer Science, Technical University of Munich, Garching Near Munich, Germany
- Medical Faculty, Technical University of Munich, Munich, Germany
- Gabriel Buschner
- Department of Nuclear Medicine, School of Medicine, Technical University of Munich, Munich, Germany
- Lilit Schweiger
- Department of Nuclear Medicine, School of Medicine, Technical University of Munich, Munich, Germany
- Lemonia Konstantinidou
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Computer Science, Technical University of Munich, Garching Near Munich, Germany
- Marcus Makowski
- Department of Radiology, School of Medicine, Technical University of Munich, Munich, Germany
- James Nagarajah
- Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Computer Science, Technical University of Munich, Garching Near Munich, Germany
- Chair for Computer Aided Medical Procedures, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- Wolfgang Weber
- Department of Nuclear Medicine, School of Medicine, Technical University of Munich, Munich, Germany
- Thomas Wendler
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Computer Science, Technical University of Munich, Garching Near Munich, Germany
6
Trimpl MJ, Primakov S, Lambin P, Stride EPJ, Vallis KA, Gooding MJ. Beyond automatic medical image segmentation: the spectrum between fully manual and fully automatic delineation. Phys Med Biol 2022; 67. [PMID: 35523158] [DOI: 10.1088/1361-6560/ac6d9c]
Abstract
Semi-automatic and fully automatic contouring tools have emerged as alternatives to fully manual segmentation, reducing the time spent contouring and increasing contour quality and consistency. Fully automatic segmentation in particular has seen exceptional improvements through the use of deep learning in recent years. These fully automatic methods may require no user interaction, but the resulting contours are often not suitable for clinical use without review by a clinician, and they need large amounts of labelled data for training. This review presents alternatives to manual or fully automatic segmentation methods along a spectrum of variable user interactivity and data availability. The challenge lies in determining how much user interaction is necessary and how that interaction can be used most effectively. While deep learning is already widely used for fully automatic tools, interactive methods are only beginning to be transformed by it. Interaction between clinician and machine, via artificial intelligence, can go both ways, and this review presents the avenues being pursued to improve medical image segmentation.
Affiliation(s)
- Michael J Trimpl
- Mirada Medical Ltd, Oxford, United Kingdom
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
- Sergey Primakov
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University, Maastricht, The Netherlands
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University, Maastricht, The Netherlands
- Eleanor P J Stride
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Katherine A Vallis
- Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
7
Xia M, Yang H, Huang Y, Qu Y, Guo Y, Zhou G, Zhang F, Wang Y. AwCPM-Net: A Collaborative Constraint GAN for 3D Coronary Artery Reconstruction in Intravascular Ultrasound Sequences. IEEE J Biomed Health Inform 2022; 26:3047-3058. [PMID: 35104236] [DOI: 10.1109/jbhi.2022.3147888]
Abstract
3D coronary artery reconstruction (3D-CAR) in intravascular ultrasound (IVUS) sequences allows quantitative analysis of vessel properties. Existing methods treat the two main tasks of 3D-CAR, cardiac phase retrieval (CPR) and membrane border extraction (MBE), separately, ignoring the CPR-MBE connection that could yield mutual improvements in both tasks. In this paper, we pioneer one-step 3D-CAR via a collaborative-constraint generative adversarial network (GAN) named the AwCPM-Net. The AwCPM-Net consists of a dual-task collaborative generator and a dual-task constraint discriminator. The generator combines a self-supervised CPR branch with a semi-supervised MBE branch via a warming-up connection, and the discriminator promotes the predictions of both branches simultaneously. The CPR branch requires no annotations and outputs inter-frame deformation fields used to identify cardiac phases; these deformation fields are additionally constrained by the MBE branch and the discriminator. The MBE branch predicts membrane boundaries for each frame. Two aspects assist the semi-supervised segmentation: annotation augmentation by the deformation fields of the CPR branch, and information exploitation on unlabeled images enabled by the GAN design. Trained and tested on an IVUS dataset acquired from atherosclerosis patients, the AwCPM-Net is effective in both the CPR and MBE tasks and superior to state-of-the-art IVUS CPR or MBE methods. Hence, the AwCPM-Net reconstructs reliable 3D artery anatomy in the IVUS modality.
8
Illanes A, Esmaeili N, Poudel P, Balakrishnan S, Friebe M. Parametrical modelling for texture characterization: a novel approach applied to ultrasound thyroid segmentation. PLoS One 2019; 14:e0211215. [PMID: 30695052] [PMCID: PMC6350984] [DOI: 10.1371/journal.pone.0211215]
Abstract
Texture analysis is an important topic in ultrasound (US) image analysis for structure segmentation and tissue classification. In this work, a novel approach for US image texture feature extraction is presented. It is based on parametrical modelling of a signal version of the US image so that it can be processed as data resulting from a dynamical process. Because of the predictive characteristics of such a model representation, good estimates of texture features can be obtained with less data than generally used methods require, allowing higher robustness to low signal-to-noise ratio and a more localized US image analysis. The usability of the proposed approach was demonstrated by extracting texture features for segmenting the thyroid in US images. The results showed that features corresponding to energy ratios between different modelled texture frequency bands clearly distinguished thyroid from non-thyroid texture. A simple k-means clustering algorithm was used to separate US image patches into thyroid and non-thyroid classes. Thyroid segmentation was performed on two different datasets, obtaining Dice coefficients over 85%.
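The final patch-separation step can be sketched as a plain k-means over per-patch feature vectors (in the study, the modelled energy-ratio features). The minimal NumPy version below is an illustration, not the authors' code; the function name `kmeans_labels` and the iteration/seed defaults are assumptions.

```python
import numpy as np

def kmeans_labels(X, k=2, iters=50, seed=0):
    """Plain k-means on a (n_patches, n_features) feature matrix."""
    rng = np.random.default_rng(seed)
    # initialise centres on k distinct random patches
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each patch to its nearest centre (squared Euclidean)
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # recompute centres from the current assignment
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

With k=2 the two resulting clusters correspond to thyroid and non-thyroid patches; which cluster is which must still be decided, e.g. from the feature values themselves.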
Affiliation(s)
- Alfredo Illanes
- INKA, Institute of Medical Technology, Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany
- Nazila Esmaeili
- INKA, Institute of Medical Technology, Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany
- Prabal Poudel
- INKA, Institute of Medical Technology, Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany
- Sathish Balakrishnan
- INKA, Institute of Medical Technology, Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany
- Michael Friebe
- INKA, Institute of Medical Technology, Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany