1
Liu J, Zeng B, Chen X. Heart and great vessels segmentation in congenital heart disease via CNN and conditioned energy function postprocessing. Int J Comput Assist Radiol Surg 2024. PMID: 38814529. DOI: 10.1007/s11548-024-03182-3.
Abstract
PURPOSE The segmentation of the heart and great vessels in CT images of congenital heart disease (CHD) is critical for the clinical assessment of cardiac anomalies and the diagnosis of CHD. However, the diverse types and abnormalities inherent in CHD present significant challenges to comprehensive heart segmentation. METHODS We propose a novel two-stage segmentation approach that integrates a convolutional neural network (CNN) with a conditioned-energy-function postprocessing method for the pulmonary artery and aorta. The first stage employs a CNN enhanced by a gated self-attention mechanism to segment five primary heart structures and two major vessels. The second stage applies a conditioned energy function specifically tailored to refine the segmentation of the pulmonary artery and aorta, ensuring vascular continuity. RESULTS Our method was evaluated on a public dataset of 110 3D CT volumes encompassing 16 CHD variants. Compared with prevailing segmentation techniques (U-Net, V-Net, UNETR, DynUNet), our approach demonstrated improvements of 1.02%, 1.04%, and 1.41% in Dice coefficient (DSC), intersection over union (IoU), and 95th-percentile Hausdorff distance (HD95), respectively, for heart structure segmentation. For the two great vessels, the improvements were 1.05%, 1.07%, and 1.42% in these metrics. CONCLUSION The results on the public dataset affirm the efficacy of the proposed segmentation method. Precise segmentation of the entire heart and great vessels can significantly aid the diagnosis and treatment of CHD, underscoring the clinical relevance of our findings.
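The DSC and IoU metrics reported in this entry (and in several others below) have standard definitions on binary masks. A minimal NumPy sketch, as a generic illustration rather than the authors' evaluation code:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def iou(pred, target):
    """Intersection over Union (Jaccard index) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, target).sum() / union

# Toy 2D masks standing in for a 3D CT segmentation
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 0]])
print(dice_coefficient(pred, gt))  # 2*2/(3+2) = 0.8
print(iou(pred, gt))               # 2/3
```

Note that the two metrics are monotonically related (DSC = 2·IoU/(1 + IoU)), so rankings under DSC and IoU usually agree.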
Affiliation(s)
- Jiaxuan Liu
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Bolun Zeng
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China.
2
Wang B, Yang J, Zhou Y, Yang Y, Tian X, Zhang G, Zhang X. LEACS: a learnable and efficient active contour model with space-frequency pooling for medical image segmentation. Phys Med Biol 2024; 69:015026. PMID: 38048633. DOI: 10.1088/1361-6560/ad1212.
Abstract
Diseases can be diagnosed and monitored by extracting regions of interest (ROIs) from medical images. However, accurate and efficient delineation and segmentation of ROIs in medical images remain challenging due to unrefined boundaries, inhomogeneous intensity, and limited image acquisition. To overcome these problems, we propose an end-to-end learnable and efficient active contour segmentation model, which integrates a global convex segmentation (GCS) module into a lightweight encoder-decoder convolutional segmentation network with a multiscale attention module (ED-MSA). The GCS automatically obtains the initialization and corresponding parameters of the curve deformation according to the prediction map generated by the ED-MSA, while providing the refined object boundary prediction for ED-MSA optimization. To provide a precise and reliable initial contour for the GCS, we design space-frequency pooling layers in the encoder stage of ED-MSA, which effectively reduce the number of GCS iterations. Besides, we construct ED-MSA using depth-wise separable convolutional residual modules to mitigate overfitting. The effectiveness of our method is validated on four challenging medical image datasets. Code is available at https://github.com/Yang-fashion/ED-MSA_GCS.
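Global convex segmentation modules like the GCS descend from convex relaxations of region-based active contours. As an illustration of the underlying piecewise-constant idea only (the paper's learnable model is far more elaborate), a minimal two-phase mean-separation iteration:

```python
import numpy as np

def two_phase_segment(image, n_iter=20):
    """Minimal piecewise-constant two-phase segmentation: alternately
    estimate the mean intensity of each region, then reassign each pixel
    to the region whose mean is closer (in squared distance)."""
    mask = image > image.mean()  # crude initialization
    for _ in range(n_iter):
        c1 = image[mask].mean() if mask.any() else 0.0      # foreground mean
        c2 = image[~mask].mean() if (~mask).any() else 0.0  # background mean
        new_mask = (image - c1) ** 2 < (image - c2) ** 2
        if np.array_equal(new_mask, mask):
            break  # converged
        mask = new_mask
    return mask

noisy = np.array([[0.10, 0.20, 0.90],
                  [0.15, 0.85, 0.95]])
print(two_phase_segment(noisy).astype(int))  # [[0 0 1] [0 1 1]]
```

Real active contour models add a boundary-length regularizer to this data term; the sketch omits it for brevity.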
Affiliation(s)
- Bing Wang
- College of Mathematics and Information Science, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Hebei Key Laboratory of machine Learning and Computational Intelligence, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Jie Yang
- College of Mathematics and Information Science, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Yunlai Zhou
- College of Mathematics and Information Science, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Ying Yang
- Hebei University Affiliated Hospital, Baoding, 071000, Hebei, People's Republic of China
- Xuedong Tian
- College of Cyber Security and Computer, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Guochun Zhang
- Hebei Key Laboratory of machine Learning and Computational Intelligence, Hebei University, Baoding, 071000, Hebei, People's Republic of China
- Xin Zhang
- College of Electronic Information Engineering, Hebei University, Baoding, 071000, Hebei, People's Republic of China
3
Rahman A, Ali H, Badshah N, Zakarya M, Hussain H, Rahman IU, Ahmed A, Haleem M. Power mean based image segmentation in the presence of noise. Sci Rep 2022; 12:21177. PMID: 36477447. PMCID: PMC9729210. DOI: 10.1038/s41598-022-25250-x.
Abstract
In image segmentation, and in image processing generally, noise and outliers distort the information contained in an image, posing a great challenge to accurate segmentation. To ensure correct segmentation in the presence of noise and outliers, it is necessary either to identify and isolate the outliers in a denoising pre-processing step or to impose suitable constraints within the segmentation framework. In this paper, we impose suitable outlier-removal constraints, supported by a well-designed theory, in a variational framework for accurate image segmentation. We investigate a novel approach based on the power mean function, equipped with a well-established theoretical base. The power mean function can distinguish between true image pixels and outliers and is therefore robust against outliers. To deploy the novel image data term and to guarantee unique segmentation results, a fuzzy membership function is employed in the proposed energy functional. Extensive qualitative and quantitative analysis on various standard datasets shows that the proposed model outperforms the latest state-of-the-art models on images containing multiple objects with high noise and on images with intensity inhomogeneity.
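The robustness property this abstract attributes to the power mean can be seen directly from the generalized mean M_p(x) = (mean(x^p))^(1/p): a negative exponent damps the influence of large outliers. The sketch below illustrates the function itself, not the paper's actual data term:

```python
import numpy as np

def power_mean(x, p):
    """Generalized (power) mean M_p(x) = (mean(x**p))**(1/p), for p != 0.
    p = 1 gives the arithmetic mean, p = -1 the harmonic mean."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

# Four pixel intensities, one gross outlier at 200
pixels = np.array([10.0, 11.0, 9.0, 200.0])
print(np.mean(pixels))         # arithmetic mean dragged up to 57.5
print(power_mean(pixels, -1))  # harmonic mean stays near the true level (~13)
```

Because x**p shrinks large values when p < 0, a single outlier barely moves the harmonic mean, whereas it dominates the arithmetic mean.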
Affiliation(s)
- Afzal Rahman
- Department of Mathematics, University of Peshawar, Peshawar, Pakistan
- Haider Ali
- Department of Mathematics, University of Peshawar, Peshawar, Pakistan
- Noor Badshah
- Department of Basic Sciences, University of Engineering and Technology Peshawar, Peshawar, Pakistan
- Muhammad Zakarya
- Department of Computer Science, Abdul Wali Khan University, Mardan, Pakistan
- Hameed Hussain
- Department of Computer Science, University of Buner, Buner, Pakistan
- Izaz Ur Rahman
- Department of Computer Science, Abdul Wali Khan University, Mardan, Pakistan
- Aftab Ahmed
- Department of Computer Science, Abdul Wali Khan University, Mardan, Pakistan
- Muhammad Haleem
- Department of Computer Science, Kardan University, Kabul, Afghanistan
4
Wang J, Chen G, Chen S, Joseph Raj AN, Zhuang Z, Xie L, Ma S. Ultrasonic breast tumor extraction based on adversarial mechanism and active contour. Comput Methods Programs Biomed 2022; 225:107052. PMID: 35985149. DOI: 10.1016/j.cmpb.2022.107052.
Abstract
BACKGROUND AND OBJECTIVE Breast cancer is a gynecological disease with a high incidence; breast ultrasound screening can effectively reduce breast cancer mortality. In breast ultrasound images, localization and segmentation of tumor lesions are important steps in lesion extraction, helping clinicians evaluate breast lesions quantitatively and supporting better clinical diagnosis. However, segmentation of breast lesions is difficult because some lesions have blurred and uneven edges. In this paper, we propose a segmentation framework combining an active contour module with a deep-learning adversarial mechanism and apply it to the segmentation of breast tumor lesions. METHOD We use a conditional adversarial network as the main framework. The generator is a segmentation network consisting of a Deformed U-Net and an active contour module. The Deformed U-Net performs pixel-level segmentation of breast ultrasound images. The active contour module refines the tumor lesion edges, and the refined result provides loss information for the Deformed U-Net, which can therefore better classify edge pixels. The discriminator is a Markov discriminator that provides loss feedback for the segmentation network. We cross-train the discriminator and the segmentation network to implement the adversarial mechanism and obtain a more optimized segmentation network. RESULTS Adding a Markov discriminator to provide discriminant-loss training improves the segmentation performance on breast ultrasound images. The proposed method for segmenting tumor lesions in breast ultrasound images obtains a Dice coefficient of 89.7%, accuracy of 98.1%, precision of 86.3%, mean intersection over union of 82.2%, recall of 94.7%, specificity of 98.5%, and F1 score of 89.7%. CONCLUSION Compared with traditional methods, the proposed method gives better performance. The experimental results show that the proposed method can effectively segment the lesions in breast ultrasound images, thereby assisting doctors in diagnosing breast lesions.
Affiliation(s)
- Jinhong Wang
- Department of Ultrasound, The First Affiliated Hospital of Shantou University Medical College, 57 Changping Road, Longhu District, Shantou, Guangdong, China
- Guiqing Chen
- Department of Electronic Engineering, Shantou University, No.243, Daxue Road, Tuo Jiang Street, Jinping District, Shantou City, Guangdong, China
- Shiqiang Chen
- Department of Electronic Engineering, Shantou University, No.243, Daxue Road, Tuo Jiang Street, Jinping District, Shantou City, Guangdong, China
- Alex Noel Joseph Raj
- Department of Electronic Engineering, Shantou University, No.243, Daxue Road, Tuo Jiang Street, Jinping District, Shantou City, Guangdong, China
- Zhemin Zhuang
- Department of Electronic Engineering, Shantou University, No.243, Daxue Road, Tuo Jiang Street, Jinping District, Shantou City, Guangdong, China
- Lei Xie
- Department of Radiology, The First Affiliated Hospital of Shantou University Medical College, 57 Changping Road, Longhu District, Shantou, Guangdong, China
- Shuhua Ma
- Department of Radiology, The First Affiliated Hospital of Shantou University Medical College, 57 Changping Road, Longhu District, Shantou, Guangdong, China
5
Mabood L, Badshah N, Ali H, Zakarya M, Ahmed A, Khan AA, Rada L, Haleem M. Multi-scale-average-filter-assisted level set segmentation model with local region restoration achievements. Sci Rep 2022; 12:15949. PMID: 36153339. PMCID: PMC9509349. DOI: 10.1038/s41598-022-19893-z.
Abstract
Segmentation of noisy images with bright light in the background is a challenging task for existing segmentation approaches. In this paper, we propose a novel variational method for joint restoration and segmentation of noisy images exhibiting intensity inhomogeneity in the presence of high-contrast light in the background. The proposed model combines the statistical local region information of circular regions centered at each pixel with a multi-phase segmentation technique, enabling inhomogeneous image restoration. The model is formulated in the fuzzy-set framework and solved via an alternating direction method of multipliers. Through experiments, we test the performance of the proposed approach on diverse synthetic and real images with intensity inhomogeneity, and evaluate both its precision and its robustness. The outcomes are then compared with other state-of-the-art models, including two-phase and multi-phase approaches, showing that our method is superior for images with noise and inhomogeneity. Our empirical evaluation on real images assesses the efficiency of the proposed model against several of its closest rivals. We observed that the proposed model can precisely segment images with brightness, diffuse edges, high-contrast light in the background, and inhomogeneity.
6
Minaee S, Boykov Y, Porikli F, Plaza A, Kehtarnavaz N, Terzopoulos D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans Pattern Anal Mach Intell 2022; 44:3523-3542. PMID: 33596172. DOI: 10.1109/tpami.2021.3059968.
Abstract
Image segmentation is a key task in computer vision and image processing with important applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among others, and numerous segmentation algorithms are found in the literature. Against this backdrop, the broad success of deep learning (DL) has prompted the development of new image segmentation approaches leveraging DL models. We provide a comprehensive review of this recent literature, covering the spectrum of pioneering efforts in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multiscale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the relationships, strengths, and challenges of these DL-based segmentation models, examine the widely used datasets, compare performances, and discuss promising research directions.
7
Ding J, Zhang Y, Amjad A, Xu J, Thill D, Li XA. Automatic contour refinement for deep learning auto-segmentation of complex organs in MRI-guided adaptive radiotherapy. Adv Radiat Oncol 2022; 7:100968. PMID: 35847549. PMCID: PMC9280040. DOI: 10.1016/j.adro.2022.100968.
Abstract
Purpose Fast and accurate auto-segmentation on daily images is essential for magnetic resonance imaging (MRI)–guided adaptive radiation therapy (ART). However, the state-of-the-art auto-segmentation based on deep learning still has limited success, particularly for complex structures in the abdomen. This study aimed to develop an automatic contour refinement (ACR) process to quickly correct for unacceptable auto-segmented contours. Methods and Materials An improved level set–based active contour model (ACM) was implemented for the ACR process and was tested on the deep learning–based auto-segmentation of 80 abdominal MRI sets along with their ground truth contours. The performance of the ACR process was evaluated using 4 contour accuracy metrics: the Dice similarity coefficient (DSC), mean distance to agreement (MDA), surface DSC, and added path length (APL) on the auto-segmented contours of the small bowel, large bowel, combined bowels, pancreas, duodenum, and stomach. Results A portion (3%-39%) of the corrected contours became practically acceptable per the American Association of Physicists in Medicine Task Group 132 (TG-132) recommendation (DSC >0.8 and MDA <3 mm). The best correction performance was seen in the combined bowels, where for the contours with major errors (initial DSC <0.5 or MDA >8 mm), the mean DSC increased from 0.34 to 0.59, the mean MDA decreased from 7.02 mm to 5.23 mm, and the APL reduced by almost 20 mm, whereas for the contours with minor errors, the mean DSC increased from 0.72 to 0.79, the mean MDA decreased from 3.35 mm to 3.29 mm, and more than one-third (39%) of the ACR contours became clinically acceptable. The execution time for the ACR process on one subregion was less than 2 seconds using an NVIDIA GTX 1060 GPU. Conclusions The ACR process implemented based on the ACM was able to quickly correct for some inaccurate contours produced from MRI-based deep learning auto-segmentation of complex abdominal anatomy. The ACR method may be integrated into the auto-segmentation step to accelerate the process of MRI-guided ART.
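Surface-distance metrics such as MDA (and the related HD95 used in other entries above) can be computed from sampled contour points. A minimal NumPy sketch, assuming contours are given as point sets, and not reflecting this study's implementation:

```python
import numpy as np

def surface_distances(a, b):
    """For each point of contour a (shape (N, 2)), the distance to the
    nearest point of contour b (shape (M, 2))."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1)

def mean_distance_to_agreement(a, b):
    """Symmetric mean surface distance (MDA) between two contours."""
    return (surface_distances(a, b).mean() + surface_distances(b, a).mean()) / 2

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance (HD95)."""
    d = np.concatenate([surface_distances(a, b), surface_distances(b, a)])
    return np.percentile(d, 95)

# Two straight toy contours, offset by 1 mm
c1 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
c2 = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(mean_distance_to_agreement(c1, c2))  # 1.0
print(hd95(c1, c2))                        # 1.0
```

The brute-force pairwise distance matrix is O(N·M); real implementations typically use a KD-tree or a distance transform on the mask instead.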
8
Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. PMID: 33877878. PMCID: PMC9153705. DOI: 10.1259/bjr.20201107.
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 479 studies were initially identified from the literature search with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
9
Liu J, Shen C, Aguilera N, Cukras C, Hufnagel RB, Zein WM, Liu T, Tam J. Active Cell Appearance Model Induced Generative Adversarial Networks for Annotation-Efficient Cell Segmentation and Identification on Adaptive Optics Retinal Images. IEEE Trans Med Imaging 2021; 40:2820-2831. PMID: 33507868. PMCID: PMC8548993. DOI: 10.1109/tmi.2021.3055483.
Abstract
Data annotation is a fundamental precursor for establishing large training sets to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high quality annotations is an expensive process that usually requires manual grading by experts. This work introduces an approach to efficiently generate annotated images, called "A-GANs", created by combining an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and is used to ensure that the image statistics of generated cells are guided by real data. C-GANs utilize cell contours generated by ACAM to produce cells that match input contours. By pairing ACAM-generated contours with A-GANs-based generated images, high quality annotated images can be efficiently generated. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic, artificial images whose cell distributions are exquisitely specified by ACAM. The cell segmentation performance using as few as 64 manually-annotated real AO images combined with 248 artificially-generated images from A-GANs was similar to the case of using 248 manually-annotated real images alone (Dice coefficients of 88% for both). Finally, application to rare diseases in which images exhibit never-seen characteristics demonstrated improvements in cell segmentation without the need for incorporating manual annotations from these new retinal images. Overall, A-GANs introduce a methodology for generating high quality annotated data that statistically captures the characteristics of any desired dataset and can be used to more efficiently train deep-learning-based medical image analysis applications.
10
Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021; 66. PMID: 33906186. DOI: 10.1088/1361-6560/abfbf4.
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. However, despite these advances, there are still problems for which DL-based segmentation fails. Recently, some DL approaches have achieved breakthroughs by using anatomical information, which is a crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation, systematically summarizing the categories of anatomical information and their representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodological overview, drawn from over 70 papers, of using anatomical information with DL. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.
Affiliation(s)
- Lu Liu
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
11
Yang B, Yan M, Yan Z, Zhu C, Xu D, Dong F. Segmentation and classification of thyroid follicular neoplasm using cascaded convolutional neural network. Phys Med Biol 2020; 65:245040. PMID: 33137800. DOI: 10.1088/1361-6560/abc6f2.
Abstract
In this paper, we present a segmentation and classification method for thyroid follicular neoplasms based on a combination of the prior-based level set method and deep convolutional neural network. The proposed method aims to discriminate thyroid follicular adenoma (TFA) and follicular thyroid carcinoma (FTC) in ultrasound images. In their appearance, these two kinds of tumours have similar shapes, sizes and contrasts. Therefore, it is difficult for even ultrasound specialists to distinguish them. Because of the complex background in thyroid ultrasound images, before distinguishing TFA and FTC, we need to segment the lesions from the whole image for each patient. The main challenge of segmentation is that the images often have weak edges and heterogeneous regions. The main issue of classification is that the accuracy depends on the features extracted from the segmentation results. To solve these problems, we conduct the two tasks, i.e. segmentation and classification, by a cascaded learning architecture. For segmentation, to obtain more accurate results, we exploit the Res-U-net framework and the prior-based level set method to enhance their respective abilities. Then, the classification network is trained by sharing shallow layers of the segmentation network. Testing the proposed method on real patient data shows that it is able to segment the lesion areas in thyroid ultrasound images with a Dice score of 92.65% and to distinguish TFA and FTC with a classification accuracy of 96.00%.
Affiliation(s)
- Bailin Yang
- School of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou 310018, People's Republic of China. Bailin Yang and Meiying Yan are co-first authors
12
Arriola-Rios VE, Guler P, Ficuciello F, Kragic D, Siciliano B, Wyatt JL. Modeling of Deformable Objects for Robotic Manipulation: A Tutorial and Review. Front Robot AI 2020; 7:82. PMID: 33501249. PMCID: PMC7805872. DOI: 10.3389/frobt.2020.00082.
Abstract
Manipulation of deformable objects has given rise to an important set of open problems in the field of robotics. Application areas include robotic surgery, household robotics, manufacturing, logistics, and agriculture, to name a few. Related research problems span modeling and estimation of an object's shape, estimation of an object's material properties, such as elasticity and plasticity, object tracking and state estimation during manipulation, and manipulation planning and control. In this survey article, we start by providing a tutorial on foundational aspects of models of shape and shape dynamics. We then use this as the basis for a review of existing work on learning and estimation of these models and on motion planning and control to achieve desired deformations. We also discuss potential future lines of work.
Affiliation(s)
- Veronica E Arriola-Rios
- Department of Mathematics, Faculty of Science, UNAM Universidad Nacional Autonoma de Mexico, Ciudad de México, Mexico
- Puren Guler
- Autonomous Mobile Manipulation Laboratory, Centre for Applied Autonomous Sensor Systems, Orebro University, Orebro, Sweden
- Fanny Ficuciello
- PRISMA Laboratory, Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
- Danica Kragic
- Robotics, Learning and Perception Laboratory, Centre for Autonomous Systems, EECS, KTH Royal Institute of Technology, Stockholm, Sweden
- Bruno Siciliano
- PRISMA Laboratory, Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
- Jeremy L Wyatt
- School of Computer Science, University of Birmingham, Birmingham, United Kingdom