1
Mikhailov I, Chauveau B, Bourdel N, Bartoli A. A deep learning-based interactive medical image segmentation framework with sequential memory. Comput Methods Programs Biomed 2024; 245:108038. [PMID: 38271792 DOI: 10.1016/j.cmpb.2024.108038]
Abstract
BACKGROUND AND OBJECTIVE Image segmentation is an essential component in medical image analysis. The case of 3D images such as MRI is particularly challenging and time-consuming. Interactive or semi-automatic methods are thus highly desirable. However, existing methods do not exploit the typical sequentiality of real user interactions, because the interaction memory used in these systems discards ordering. In contrast, we argue that the order of the user corrections should be used for training and leads to performance improvements. METHODS We contribute to solving this problem by proposing a general multi-class deep learning-based interactive framework for image segmentation, which embeds a base network in a user interaction loop with a user feedback memory. We propose to model the memory explicitly as a sequence of consecutive system states, from which features can be learned from the segmentation refinement process as a whole. Training is a major difficulty owing to the network's input being dependent on the previous output. We adapt the network to this loop by introducing a virtual user in the training process, modelled by dynamically simulating the iterative user feedback. RESULTS We evaluated our framework against existing methods on the complex task of multi-class semantic instance female pelvis MRI segmentation with 5 classes, including up to 27 tumour instances, using a segmentation dataset collected in our hospital, and on liver and pancreas CT segmentation, using public datasets. We conducted a user evaluation involving both senior and junior medical personnel in matching and adjacent areas of expertise. We observed a reduction in annotation time, with 5'56" for our framework against 25' on average for classical tools. We systematically evaluated the influence of the number of clicks on the segmentation accuracy. With a single interaction round, our framework outperforms existing automatic systems with a comparable setup. We provide an ablation study and show that our framework outperforms existing interactive systems. CONCLUSIONS Our framework largely outperforms existing systems in accuracy, with the largest impact on the smallest, most difficult classes, and drastically reduces the average user segmentation time with fast inference at 47.2±6.2 ms per image.
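The virtual-user loop described above (predict, receive a simulated corrective click, append it to an ordered memory, predict again) can be sketched as follows. The click-placement policy (snap to the error-mask centroid) and the toy `predict` interface are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def next_click(gt, pred):
    """Simulated user feedback: place the next corrective click near the
    centroid of the current error mask, labelled with the ground truth
    there. This is a simple stand-in policy; the paper's virtual user may
    place clicks differently."""
    error = gt != pred
    if not error.any():
        return None
    ys, xs = np.nonzero(error)
    cy, cx = ys.mean(), xs.mean()
    # Snap to the nearest actual error pixel (the centroid can fall outside).
    i = int(np.argmin((ys - cy) ** 2 + (xs - cx) ** 2))
    y, x = int(ys[i]), int(xs[i])
    return (y, x), int(gt[y, x])

def interaction_rounds(gt, predict, rounds=3):
    """Dynamic simulation: each round the network sees the *ordered* click
    memory, re-predicts, and the virtual user adds one more click."""
    memory = []  # sequence of (position, label): the sequential memory
    pred = np.zeros_like(gt)
    for _ in range(rounds):
        click = next_click(gt, pred)
        if click is None:
            break  # segmentation already matches the ground truth
        memory.append(click)
        pred = predict(memory)
    return memory, pred
```

During training, `predict` would be the base network consuming the ordered memory; here any callable with that signature works.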
Affiliation(s)
- Ivan Mikhailov
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France
- Benoit Chauveau
- SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
- Nicolas Bourdel
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
- Adrien Bartoli
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
2
Zhuang M, Chen Z, Yang Y, Kettunen L, Wang H. Annotation-efficient training of medical image segmentation network based on scribble guidance in difficult areas. Int J Comput Assist Radiol Surg 2024; 19:87-96. [PMID: 37233894 DOI: 10.1007/s11548-023-02931-0]
Abstract
PURPOSE The training of deep medical image segmentation networks usually requires a large amount of human-annotated data. To alleviate the burden of human labor, many semi- or non-supervised methods have been developed. However, due to the complexity of clinical scenarios, insufficient training labels still cause inaccurate segmentation in some difficult local areas, such as heterogeneous tumors and fuzzy boundaries. METHODS We propose an annotation-efficient training approach that only requires scribble guidance in the difficult areas. A segmentation network is initially trained with a small amount of fully annotated data and then used to produce pseudo labels for more training data. Human supervisors draw scribbles in the areas of incorrect pseudo labels (i.e., difficult areas), and the scribbles are converted into pseudo label maps using a probability-modulated geodesic transform. To reduce the influence of potential errors in the pseudo labels, a confidence map of the pseudo labels is generated by jointly considering the pixel-to-scribble geodesic distance and the network output probability. The pseudo labels and confidence maps are iteratively optimized with the update of the network, and the network training is in turn promoted by the pseudo labels and the confidence maps. RESULTS Cross-validation on two datasets (brain tumor MRI and liver tumor CT) showed that our method significantly reduces the annotation time while maintaining the segmentation accuracy of difficult areas (e.g., tumors). Using 90 scribble-annotated training images (annotation time: ~9 h), our method achieved the same performance as using 45 fully annotated images (annotation time: >100 h). CONCLUSION Compared to conventional full annotation approaches, the proposed method significantly reduces annotation effort by focusing human supervision on the most difficult regions. It provides an annotation-efficient way to train medical image segmentation networks in complex clinical scenarios.
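The paper's probability-modulated geodesic transform is not reproduced here, but its core idea can be sketched: scribble labels spread along paths that are cheap within homogeneous regions and expensive across intensity boundaries, with confidence decaying with distance. The step-cost function and the exponential confidence combination below are assumptions for illustration only:

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds):
    """Grid Dijkstra where the step cost grows with the intensity jump, so
    distance stays small inside homogeneous regions and large across
    boundaries. The cost is an illustrative choice; the paper additionally
    modulates it by the network's output probability."""
    H, W = image.shape
    dist = np.full((H, W), np.inf)
    for y, x in seeds:
        dist[y, x] = 0.0
    pq = [(0.0, y, x) for (y, x) in seeds]
    heapq.heapify(pq)
    while pq:
        d, y, x = heapq.heappop(pq)
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                nd = d + 1.0 + abs(float(image[ny, nx]) - float(image[y, x]))
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(pq, (nd, ny, nx))
    return dist

def confidence(dist, prob, tau=5.0):
    """Confidence decays with geodesic distance from the scribble and rises
    with the network probability (a plausible combination, not the paper's
    exact formula)."""
    return np.exp(-dist / tau) * prob
```

On an image with a sharp intensity boundary, the distance from a scribble seed stays low on the seed's side and jumps across the boundary, which is what keeps the pseudo labels from leaking into neighbouring structures.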
Affiliation(s)
- Mingrui Zhuang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, 116024, China
- Zhonghua Chen
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, 116024, China
- Faculty of Information Technology, University of Jyväskylä, 40100, Jyvaskyla, Finland
- Yuxin Yang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, 116024, China
- Lauri Kettunen
- Faculty of Information Technology, University of Jyväskylä, 40100, Jyvaskyla, Finland
- Hongkai Wang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, 116024, China
- Liaoning Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian, China
3
Bi L, Buehner U, Fu X, Williamson T, Choong P, Kim J. Hybrid CNN-transformer network for interactive learning of challenging musculoskeletal images. Comput Methods Programs Biomed 2024; 243:107875. [PMID: 37871450 DOI: 10.1016/j.cmpb.2023.107875]
Abstract
BACKGROUND AND OBJECTIVES Segmentation of regions of interest (ROIs) such as tumors and bones plays an essential role in the analysis of musculoskeletal (MSK) images. Segmentation results can help orthopaedic surgeons with surgical outcome assessment and patient gait cycle simulation. Deep learning-based automatic segmentation methods, particularly those using fully convolutional networks (FCNs), are considered the state of the art. However, in scenarios where the training data are insufficient to account for all the variations in ROIs, these methods struggle to segment challenging ROIs with less common image characteristics, such as low contrast to the background, inhomogeneous textures, and fuzzy boundaries. METHODS We propose a hybrid convolutional neural network - transformer network (HCTN) for semi-automatic segmentation to overcome the limitations of segmenting challenging MSK images. Specifically, we fuse user inputs (manual, e.g., mouse clicks) with high-level semantic image features derived from the neural network (automatic), where the user inputs are used in interactive training for uncommon image characteristics. In addition, we leverage a transformer network (TN), a deep learning model designed for handling sequence data, together with features derived from FCNs for segmentation; this addresses the limitation of FCNs, which operate on small kernels and therefore tend to dismiss global context and focus only on local patterns. RESULTS We purposely selected three MSK imaging datasets covering a variety of structures to evaluate the generalizability of the proposed method. Our semi-automatic HCTN method achieved a Dice similarity coefficient (DSC) of 88.46 ± 9.41 for segmenting soft-tissue sarcoma tumors from magnetic resonance (MR) images, 73.32 ± 11.97 for segmenting osteosarcoma tumors from MR images, and 93.93 ± 1.84 for segmenting clavicle bones from chest radiographs. Compared to the current state-of-the-art automatic segmentation method, our HCTN method is 11.7%, 19.11% and 7.36% higher in DSC on the three datasets, respectively. CONCLUSION Our experimental results demonstrate that HCTN achieves more generalizable results than current methods, especially on challenging MSK studies.
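Fusing manual clicks with automatic features typically starts by encoding the clicks as guidance channels. A minimal sketch is below; the Gaussian encoding and `sigma` are illustrative assumptions, and HCTN fuses the guidance with high-level network features rather than only with raw pixels:

```python
import numpy as np

def click_guidance(image, pos_clicks, neg_clicks, sigma=4.0):
    """Encode positive/negative clicks as Gaussian heatmap channels stacked
    with the image, a common way to feed user interactions to a network."""
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]

    def heatmap(clicks):
        h = np.zeros((H, W), dtype=np.float32)
        for cy, cx in clicks:
            g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
            h = np.maximum(h, g.astype(np.float32))  # union of click bumps
        return h

    # (3, H, W): image + positive-click channel + negative-click channel
    return np.stack([image.astype(np.float32),
                     heatmap(pos_clicks), heatmap(neg_clicks)])
```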
Affiliation(s)
- Lei Bi
- Institute of Translational Medicine, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science, University of Sydney, NSW, Australia
- Xiaohang Fu
- School of Computer Science, University of Sydney, NSW, Australia
- Tom Williamson
- Stryker Corporation, Kalamazoo, Michigan, USA; Centre for Additive Manufacturing, School of Engineering, RMIT University, VIC, Australia
- Peter Choong
- Department of Surgery, University of Melbourne, VIC, Australia
- Jinman Kim
- School of Computer Science, University of Sydney, NSW, Australia
4
Gong X, Wang L, Miao L, Chen N, Li J. PIMedSeg: Progressive interactive medical image segmentation. Comput Methods Programs Biomed 2023; 241:107776. [PMID: 37651820 DOI: 10.1016/j.cmpb.2023.107776]
Abstract
BACKGROUND AND OBJECTIVE Accurate object segmentation in medical images is a crucial step in medical diagnosis and other applications. Despite years of research on automatic segmentation approaches, achieving clinically acceptable segmentation quality remains challenging. Interactive segmentation is seen as a promising alternative; thus, we propose a new interactive segmentation framework based on a progressive workflow to reduce user effort and provide high-quality results. METHODS First, our approach encodes user-provided region clicks and edge scribbles using our proposed disk and curve transforms. The result is then refined by a transformer-based module that extracts effective features from the outputs of the convolutional neural network (CNN) and the extra input maps. RESULTS Extensive experiments conducted on various medical images, including ultrasound (US), computerized tomography (CT), and magnetic resonance imaging (MRI), have demonstrated the effectiveness of our new approach over state-of-the-art alternatives. CONCLUSION The proposed framework can achieve high-quality segmentation using minimal interactions, without the substantial cost of manual segmentation.
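The disk transform mentioned above is one of the standard click encodings: each click is rendered as a filled disk in a guidance map. A minimal sketch, with an illustrative radius (the paper pairs this with a curve transform that rasterises edge scribbles analogously):

```python
import numpy as np

def disk_transform(shape, clicks, radius=3):
    """Render each user click as a filled binary disk in a guidance map of
    the given (H, W) shape."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    out = np.zeros(shape, dtype=np.float32)
    for cy, cx in clicks:
        # Mark every pixel within `radius` of the click (Euclidean).
        out[(ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2] = 1.0
    return out
```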
Affiliation(s)
- Xun Gong
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, PR China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, PR China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, PR China
- Li Wang
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, PR China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, PR China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, PR China
- Longlong Miao
- Tangshan Research Institute, Southwest Jiaotong University, Tangshan 063002, PR China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, PR China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, PR China
- Nuo Chen
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, PR China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, PR China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, PR China
- Jiao Li
- Department of Gastroenterology, The Third People's Hospital of Chendu, Affiliated Hospital of Southwest Jiaotong University, Chengdu 610031, PR China
5
Li X, Xia M, Jiao J, Zhou S, Chang C, Wang Y, Guo Y. HAL-IA: A Hybrid Active Learning framework using Interactive Annotation for medical image segmentation. Med Image Anal 2023; 88:102862. [PMID: 37295312 DOI: 10.1016/j.media.2023.102862]
Abstract
High performance of deep learning models on medical image segmentation relies greatly on large amounts of pixel-wise annotated data, yet annotations are costly to collect. How to obtain highly accurate segmentation labels of medical images at limited cost (e.g. time) has become an urgent problem. Active learning can reduce the annotation cost of image segmentation, but it faces three challenges: the cold-start problem, the need for an effective sample selection strategy for the segmentation task, and the burden of manual annotation. In this work, we propose a Hybrid Active Learning framework using Interactive Annotation (HAL-IA) for medical image segmentation, which reduces annotation cost both by decreasing the number of annotated images and by simplifying the annotation process. Specifically, we propose a novel hybrid sample selection strategy to select the most valuable samples for improving segmentation model performance. This strategy combines pixel entropy, regional consistency and image diversity to ensure that the selected samples have high uncertainty and diversity. In addition, we propose a warm-start initialization strategy to build the initial annotated dataset and avoid the cold-start problem. To simplify the manual annotation process, we propose an interactive annotation module with suggested superpixels to obtain pixel-wise labels with a few clicks. We validate the proposed framework with extensive segmentation experiments on four medical image datasets. Experimental results show that the proposed framework achieves highly accurate pixel-wise annotations and models with less labeled data and fewer interactions, outperforming other state-of-the-art methods. Our method can help physicians efficiently obtain accurate medical image segmentation results for clinical analysis and diagnosis.
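A hybrid uncertainty-plus-diversity selection can be sketched as a greedy loop; the equal weighting `alpha` is an assumption, and the paper's regional-consistency term is omitted for brevity:

```python
import numpy as np

def pixel_entropy(prob):
    """Mean per-pixel predictive entropy of a softmax map of shape (C, H, W)."""
    p = np.clip(prob, 1e-8, 1.0)
    return float((-p * np.log(p)).sum(axis=0).mean())

def select_samples(probs, feats, k=2, alpha=0.5):
    """Greedy hybrid selection: uncertainty (entropy) plus diversity
    (feature-space distance to already-selected samples)."""
    scores = np.array([pixel_entropy(p) for p in probs])
    chosen = [int(np.argmax(scores))]  # most uncertain sample first
    while len(chosen) < k:
        # Diversity: distance to the closest already-chosen sample.
        div = np.array([min(np.linalg.norm(f - feats[c]) for c in chosen)
                        for f in feats])
        total = alpha * scores + (1 - alpha) * div / (div.max() + 1e-8)
        total[chosen] = -np.inf  # never re-select
        chosen.append(int(np.argmax(total)))
    return chosen
```

With two equally uncertain candidates, the greedy step prefers the one farther in feature space from what is already selected, which is the intended behaviour of the diversity term.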
Affiliation(s)
- Xiaokang Li
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Menghua Xia
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Jing Jiao
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Shichong Zhou
- Fudan University Shanghai Cancer Center, Shanghai, China
- Cai Chang
- Fudan University Shanghai Cancer Center, Shanghai, China
- Yuanyuan Wang
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Yi Guo
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
6
Zhuang M, Chen Z, Wang H, Tang H, He J, Qin B, Yang Y, Jin X, Yu M, Jin B, Li T, Kettunen L. Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images. Int J Comput Assist Radiol Surg 2023; 18:379-94. [PMID: 36048319 DOI: 10.1007/s11548-022-02730-z]
Abstract
PURPOSE Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and to accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden. METHODS We develop a contour-based AID algorithm which uses boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve boundary detection accuracy. We also develop a contour-based human-intervention method to facilitate easy adjustment of organ boundaries. By combining the contour-based segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading. RESULTS For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two compared methods, i.e. a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional neural network (CNN) method based on voxel label representation. Compared to these methods, our approach considerably saved annotation time and reduced inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set. CONCLUSION Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate and less prone to inter-operator variability than SOTA AID methods for organ segmentation from volumetric medical images. Its good shape learning ability and flexible boundary adjustment function make it suitable for fast annotation of organ structures with regular shapes.
7
Pace DF, Dalca AV, Brosch T, Geva T, Powell AJ, Weese J, Moghari MH, Golland P. Learned iterative segmentation of highly variable anatomy from limited data: Applications to whole heart segmentation for congenital heart disease. Med Image Anal 2022; 80:102469. [PMID: 35640385 PMCID: PMC9617683 DOI: 10.1016/j.media.2022.102469]
Abstract
Training deep learning models that segment an image in one step typically requires a large collection of manually annotated images that captures the anatomical variability in a cohort. This poses challenges when anatomical variability is extreme but training data is limited, as when segmenting cardiac structures in patients with congenital heart disease (CHD). In this paper, we propose an iterative segmentation model and show that it can be accurately learned from a small dataset. Implemented as a recurrent neural network, the model evolves a segmentation over multiple steps, from a single user click until reaching an automatically determined stopping point. We develop a novel loss function that evaluates the entire sequence of output segmentations, and use it to learn model parameters. Segmentations evolve predictably according to growth dynamics encapsulated by training data, which consists of images, partially completed segmentations, and the recommended next step. The user can easily refine the final segmentation by examining those that are earlier or later in the output sequence. Using a dataset of 3D cardiac MR scans from patients with a wide range of CHD types, we show that our iterative model offers better generalization to patients with the most severe heart malformations.
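The sequence-level objective (evaluating every intermediate segmentation rather than only the final one) can be sketched with a per-step Dice term. The uniform averaging below is a simplified assumption, not the paper's exact loss:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Soft Dice overlap between two binary masks."""
    inter = (pred * gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def sequence_loss(step_preds, step_targets):
    """Score the whole evolution: average (1 - Dice) between each
    intermediate segmentation and its per-step target (the partially
    completed segmentation recommended as the next step)."""
    return float(np.mean([1.0 - dice(p, t)
                          for p, t in zip(step_preds, step_targets)]))
```

Because every step is scored, a model that reaches the right answer through implausible intermediate shapes is penalised, which is what encourages the predictable growth dynamics described above.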
Affiliation(s)
- Danielle F Pace
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Adrian V Dalca
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Tom Brosch
- Philips Research Laboratories, Hamburg, Germany
- Tal Geva
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Andrew J Powell
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Mehdi H Moghari
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
8
Jiang D, Wang Y, Zhou F, Ma H, Zhang W, Fang W, Zhao P, Tong Z. Residual refinement for interactive skin lesion segmentation. J Biomed Semantics 2021; 12:22. [PMID: 34922629 PMCID: PMC8684232 DOI: 10.1186/s13326-021-00255-z]
Abstract
BACKGROUND Image segmentation is a difficult and classic problem with a wide range of applications, one of which is skin lesion segmentation. Numerous researchers have made great efforts to tackle the problem, yet there is still no universal method across application domains. RESULTS We propose a novel approach that combines a deep convolutional neural network with GrabCut-like user interaction to tackle the interactive skin lesion segmentation problem. Slightly deviating from GrabCut's user interaction, our method uses boxes and clicks. In addition, contrary to existing interactive segmentation algorithms that combine the initial segmentation task with the following refinement task, we explicitly separate these tasks by designing individual sub-networks: SBox-Net and Click-Net. SBox-Net is a full-fledged segmentation network built upon a pre-trained, state-of-the-art segmentation model, while Click-Net is a simple yet powerful network that combines feature maps extracted from SBox-Net with user clicks to residually refine the mistakes made by SBox-Net. Extensive experiments on two public datasets, PH2 and ISIC, confirm the effectiveness of our approach. CONCLUSIONS We present an interactive two-stage pipeline method for skin lesion segmentation, which was demonstrated to be effective in comprehensive experiments.
Affiliation(s)
- Dalei Jiang
- Echocardiography and Vascular Ultrasound Center, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yin Wang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Feng Zhou
- Department of Urology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Hongtao Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Wenting Zhang
- Echocardiography and Vascular Ultrasound Center, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weijia Fang
- Department of Medical Oncology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Peng Zhao
- Department of Medical Oncology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhou Tong
- Department of Medical Oncology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
9
Yuan Y, Chen YW, Dong C, Yu H, Zhu Z. Hybrid method combining superpixel, random walk and active contour model for fast and accurate liver segmentation. Comput Med Imaging Graph 2018; 70:119-134. [PMID: 30359946 DOI: 10.1016/j.compmedimag.2018.08.012]
Abstract
Organ segmentation is an important pre-processing step in surgery planning and computer-aided diagnosis. In this paper, we propose a fast and accurate liver segmentation framework. Our proposed method combines a knowledge-based slice-by-slice Random Walk (RW) segmentation algorithm (proposed in our previous work) with a superpixel algorithm called Contrast-enhanced Compact Watershed (CCWS) to reduce computing time and memory costs. Compared to the commonly used Simple Linear Iterative Clustering (SLIC), we demonstrate that CCWS is more appropriate for liver segmentation. To improve the method's accuracy, we use a modified narrow-band active contour model to refine the initial segmentation. The experiments showed that the superpixel-based slice-by-slice RW could segment the entire liver with improved speed, and that the modified active contour model is more precise than the original Chan-Vese model. As a result, the proposed framework is able to quickly and accurately segment the entire liver.
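The random-walker component can be illustrated at its mathematical core: on a (super)pixel affinity graph, seeded segmentation amounts to solving a combinatorial Dirichlet problem with the graph Laplacian (Grady-style). The tiny solver below is a generic binary sketch, not the authors' knowledge-based slice-by-slice variant:

```python
import numpy as np

def random_walk_labels(W, seeds):
    """Binary random walker on a (super)pixel affinity graph: solve the
    combinatorial Dirichlet problem L_uu x_u = -L_us x_s for the unseeded
    nodes. W is a symmetric affinity matrix; seeds maps node index -> {0, 1}."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian
    s = sorted(seeds)                   # seeded node indices
    u = [i for i in range(n) if i not in seeds]
    xs = np.array([float(seeds[i]) for i in s])
    # Probability that a walker from each unseeded node first reaches a 1-seed.
    xu = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, s)] @ xs)
    probs = np.empty(n)
    probs[s] = xs
    probs[u] = xu
    return (probs > 0.5).astype(int), probs
```

Running the walker on superpixels instead of raw pixels, as the paper does, shrinks `W` from millions of nodes to a few thousand, which is where the speed and memory savings come from.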
Affiliation(s)
- Ye Yuan
- Software College of Northeastern University, No. 195 Chuangxin Road, Shenyang, China
- Yen-Wei Chen
- Graduate School of Information Science and Engineering, Ritsumeikan University, Noji-higashi 1-1-1, Kusatsu, Japan
- Chunhua Dong
- Department of Mathematics and Computer Science, Fort Valley State University, 1005 State University Drive, Fort Valley, United States
- Hai Yu
- Software College of Northeastern University, No. 195 Chuangxin Road, Shenyang, China
- Zhiliang Zhu
- Software College of Northeastern University, No. 195 Chuangxin Road, Shenyang, China
10
Zhong Z, Kim Y, Buatti J, Wu X. 3D Alpha Matting Based Co-segmentation of Tumors on PET-CT Images. Mol Imaging Reconstr Anal Mov Body Organs Stroke Imaging Treat (2017) 2017; 10555:31-42. [PMID: 31799515 DOI: 10.1007/978-3-319-67564-0_4]
Abstract
Positron emission tomography - computed tomography (PET-CT) has been widely used in modern cancer imaging. Accurate tumor delineation from PET and CT plays an important role in radiation therapy. The PET-CT co-segmentation technique, which makes use of advantages of both modalities, has achieved impressive performance for tumor delineation. In this work, we propose a novel 3D image matting based semi-automated co-segmentation method for tumor delineation on dual PET-CT scans. The "matte" values generated by 3D image matting are employed to compute the region costs for the graph based co-segmentation. Compared to previous PET-CT co-segmentation methods, our method is completely data-driven in the design of cost functions, thus using much less hyper-parameters in our segmentation model. Comparative experiments on 54 PET-CT scans of lung cancer patients demonstrated the effectiveness of our method.
11
Luengo I, Darrow MC, Spink MC, Sun Y, Dai W, He CY, Chiu W, Pridmore T, Ashton AW, Duke EMH, Basham M, French AP. SuRVoS: Super-Region Volume Segmentation workbench. J Struct Biol 2017; 198:43-53. [PMID: 28246039 PMCID: PMC5405849 DOI: 10.1016/j.jsb.2017.02.007]
Abstract
Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, makes the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using Super-Regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets.
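The "learn from sparse input, extend over super-regions" idea can be sketched with a deliberately crude stand-in: square grid blocks instead of SLIC-style supervoxels, and a nearest-centroid rule instead of a trained classifier. Everything here is an illustrative assumption; SuRVoS uses hierarchical super-regions and richer features:

```python
import numpy as np

def grid_super_regions(image, block=4):
    """Partition the image into square blocks as stand-in super-regions.
    H and W must be divisible by `block`."""
    H, W = image.shape
    labels = np.arange((H // block) * (W // block)).reshape(H // block, W // block)
    return np.kron(labels, np.ones((block, block), dtype=int))

def extend_annotations(image, regions, annotations):
    """Give every unlabelled super-region the class of the annotated
    super-region with the closest mean intensity: a toy nearest-centroid
    classifier showing how sparse user input spreads to the whole volume."""
    means = {r: image[regions == r].mean() for r in np.unique(regions)}
    labelled = {int(regions[y, x]): c for (y, x), c in annotations.items()}
    out = np.zeros_like(regions)
    for r, m in means.items():
        nearest = min(labelled, key=lambda sr: abs(means[sr] - m))
        out[regions == r] = labelled[nearest]
    return out
```

Classifying a few thousand super-regions instead of every voxel is what makes the interactive loop fast enough for the noisy, low-dose volumes mentioned above.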
Affiliation(s)
- Imanol Luengo
- School of Computer Science, University of Nottingham, Jubilee Campus, Nottingham NG8 1BB, United Kingdom; Diamond Light Source, Harwell Science & Innovation Campus, Didcot OX11 0DE, United Kingdom.
- Michele C Darrow
- Diamond Light Source, Harwell Science & Innovation Campus, Didcot OX11 0DE, United Kingdom.
- Matthew C Spink
- Diamond Light Source, Harwell Science & Innovation Campus, Didcot OX11 0DE, United Kingdom.
- Ying Sun
- Department of Biological Sciences, National University of Singapore, Singapore 117563, Singapore; National Center for Macromolecular Imaging, Department of Biochemistry and Molecular Biology, Baylor College of Medicine, Houston, TX 77030, USA.
- Wei Dai
- Department of Cell Biology and Neuroscience, and Center for Integrative Proteomics Research, Rutgers University, NJ 08901, USA.
- Cynthia Y He
- Department of Biological Sciences, National University of Singapore, Singapore 117563, Singapore.
- Wah Chiu
- National Center for Macromolecular Imaging, Department of Biochemistry and Molecular Biology, Baylor College of Medicine, Houston, TX 77030, USA.
- Tony Pridmore
- School of Computer Science, University of Nottingham, Jubilee Campus, Nottingham NG8 1BB, United Kingdom.
- Alun W Ashton
- Diamond Light Source, Harwell Science & Innovation Campus, Didcot OX11 0DE, United Kingdom.
- Elizabeth M H Duke
- Diamond Light Source, Harwell Science & Innovation Campus, Didcot OX11 0DE, United Kingdom.
- Mark Basham
- Diamond Light Source, Harwell Science & Innovation Campus, Didcot OX11 0DE, United Kingdom.
- Andrew P French
- School of Computer Science, University of Nottingham, Jubilee Campus, Nottingham NG8 1BB, United Kingdom.
12
Abstract
Controlling relative daughter cell size is key during cytokinesis. Uncontrolled size asymmetries can lead to aneuploidy and division failure, while precisely regulated size asymmetries are of crucial importance in many divisions during embryonic development. Being able to monitor daughter cell size is therefore important in cytokinesis studies. However, freely available tools that can effectively measure the size of daughter cells in three dimensions during cytokinesis are missing. Here, we describe an open-access plugin for ImageJ or Fiji based on an active contour surface representation of the cells. Our method provides a user-friendly and accurate way to monitor the size of the two daughter cells throughout cytokinesis.
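The plugin itself is an ImageJ/Fiji tool built on active contour surfaces; as a rough stand-in for the measurement it automates, the sketch below labels two synthetic daughter cells in a 3D binary stack and compares their voxel volumes (all shapes and sizes are invented for the example).

```python
import numpy as np
from scipy import ndimage

# Synthetic 3D stack with two separated spherical "daughter cells".
stack = np.zeros((32, 64, 64), dtype=bool)
zz, yy, xx = np.mgrid[0:32, 0:64, 0:64]
stack |= (zz - 16) ** 2 + (yy - 32) ** 2 + (xx - 20) ** 2 < 10 ** 2  # cell A
stack |= (zz - 16) ** 2 + (yy - 32) ** 2 + (xx - 46) ** 2 < 8 ** 2   # cell B

# Label connected components and count voxels per cell.
labels, n = ndimage.label(stack)
volumes = ndimage.sum_labels(stack, labels, index=range(1, n + 1))

# Size asymmetry of the pair: 1.0 would mean perfectly equal daughters.
ratio = min(volumes) / max(volumes)
print(n, volumes, round(ratio, 2))
```

Tracking this ratio frame by frame through a cytokinesis time-lapse is the kind of readout the plugin provides, with the active contour surface giving a sub-voxel boundary rather than a raw voxel count.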
Affiliation(s)
- M B Smith
- MRC Laboratory for Molecular Cell Biology, University College London, London, United Kingdom
- A Chaigne
- MRC Laboratory for Molecular Cell Biology, University College London, London, United Kingdom
- E K Paluch
- MRC Laboratory for Molecular Cell Biology, University College London, London, United Kingdom
13
Gan HS, Tan TS, Wong LX, Tham WK, Sayuti KA, Abdul Karim AH, bin Abdul Kadir MR. Interactive knee cartilage extraction using efficient segmentation software: data from the osteoarthritis initiative. Biomed Mater Eng 2015; 24:3145-57. PMID: 25227024; DOI: 10.3233/bme-141137.
Abstract
In medical image segmentation, manual segmentation is both labor- and time-intensive, while automated segmentation often fails to segment anatomically intricate structures accurately. Interactive segmentation can tackle the shortcomings of both approaches through user intervention. To better reflect user intention, the development of suitable editing functions is critical. In this paper, we propose interactive knee cartilage extraction software with three important features: intuitiveness, speed, and convenience. The segmentation is performed using a multi-label random walks algorithm. Our segmentation software is simple to use, intuitive for both normal and osteoarthritic image segmentation, and efficient, requiring only two thirds of the time of manual segmentation. Future work will extend this software to three-dimensional segmentation and quantitative analysis.
Affiliation(s)
- Hong-Seng Gan
- Department of Biotechnology and Medical Engineering, Faculty of Biosciences and Medical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
- Tian-Swee Tan
- Department of Biotechnology and Medical Engineering, Faculty of Biosciences and Medical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
- Liang-Xuan Wong
- Department of Control Engineering and Mechatronic Engineering, Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
- Weng-Kit Tham
- Department of Control Engineering and Mechatronic Engineering, Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
- Khairil Amir Sayuti
- Department of Radiology, School of Medical Sciences, Universiti Sains Malaysia, 16150 Kubang Kerian, Kelantan, Malaysia
- Ahmad Helmy Abdul Karim
- Department of Radiology, School of Medical Sciences, Universiti Sains Malaysia, 16150 Kubang Kerian, Kelantan, Malaysia
- Mohammed Rafiq bin Abdul Kadir
- Department of Clinical Sciences, Faculty of Biosciences and Medical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
14
Park SH, Lee S, Yun ID, Lee SU. Structured patch model for a unified automatic and interactive segmentation framework. Med Image Anal 2015; 24:297-312. PMID: 25682219; DOI: 10.1016/j.media.2015.01.003.
Abstract
We present a novel interactive segmentation framework incorporating a priori knowledge learned from training data. The knowledge is learned as a structured patch model (StPM) comprising sets of corresponding local patch priors and their pairwise spatial distribution statistics, which represent, respectively, the local shape and appearance along the object boundary and the global shape structure. When successive user annotations are given, the StPM is appropriately adjusted in the target image and used together with the annotations to guide the segmentation. The StPM reduces the dependency on the placement and quantity of user annotations with little increase in complexity, since the time-consuming StPM construction is performed offline. Furthermore, a seamless learning system can be established by directly adding the patch priors and pairwise statistics of segmentation results to the StPM. The proposed method was evaluated on three datasets of 2D chest CT, 3D knee MR, and 3D brain MR, respectively. The experimental results demonstrate that within an equal amount of time, the proposed interactive segmentation framework outperforms recent state-of-the-art methods in terms of accuracy, while requiring significantly less computing and editing time to obtain results of comparable accuracy.
Affiliation(s)
- Sang Hyun Park
- Department of Electrical Engineering, ASRI, INMC, Seoul National University, Seoul, Republic of Korea.
- Soochahn Lee
- Department of Electronic Engineering, Soonchunhyang University, Asan-si, Republic of Korea.
- Il Dong Yun
- Department of Digital Information Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea.
- Sang Uk Lee
- Department of Electrical Engineering, ASRI, INMC, Seoul National University, Seoul, Republic of Korea.
15
Sayed A, Layne G, Abraham J, Mukdadi OM. 3-D visualization and non-linear tissue classification of breast tumors using ultrasound elastography in vivo. Ultrasound Med Biol 2014; 40:1490-1502. PMID: 24768484; DOI: 10.1016/j.ultrasmedbio.2014.02.002.
Abstract
The goal of the study described here was to introduce new methods for the classification and visualization of human breast tumors using 3-D ultrasound elastography. A tumor's type, shape and size are key features that can help the physician decide the sort and extent of necessary treatment. In this work, tumor type, being either benign or malignant, was classified non-invasively for nine volunteer patients. The classification was based on estimating four parameters that reflect the tumor's non-linear biomechanical behavior under multiple compression levels. Tumor prognosis using non-linear elastography was confirmed with biopsy as a gold standard. Three tissue classification parameters were found to be statistically significant with a p-value < 0.05, whereas the fourth non-linear parameter was highly significant, with a p-value < 0.001. Furthermore, each breast tumor's shape and size were estimated in vivo using 3-D elastography and enhanced using interactive segmentation. Segmentation with level sets was used to isolate the stiff tumor from the surrounding soft tissue, and also provided a reliable means to estimate tumor volumes. Four volumetric strains were investigated: the traditional normal axial strain, the first principal strain, von Mises strain and maximum shear strain. These strains can provide varying degrees of boundary enhancement to the stiff tumor in the constructed elastograms, and the enhanced boundary improved the performance of the segmentation process. In summary, the proposed methods can be employed as a 3-D non-invasive tool for characterization of breast tumors, and may provide early prognosis with minimal pain, as well as diminish the risk of late-stage breast cancer.
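The level-set step used to isolate the stiff tumor can be sketched generically with scikit-image's morphological Chan-Vese implementation. This is a 2D toy on synthetic data, not the authors' 3D elastography pipeline, and every parameter is an assumption for the example.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic "elastogram" slice: a stiff disc-shaped inclusion on a
# softer background, plus noise.
yy, xx = np.mgrid[0:100, 0:100]
image = np.where((yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2, 1.0, 0.2)
image += np.random.default_rng(2).normal(0.0, 0.05, image.shape)

# Level-set evolution: 35 iterations from the default checkerboard init.
mask = morphological_chan_vese(image, 35)

# Chan-Vese assigns region labels arbitrarily; make the bright disc
# (the "tumor") the foreground, i.e. the value 1.
if mask[5, 5]:
    mask = 1 - mask

print(mask.sum(), mask[50, 50], mask[5, 5])
```

The segmented mask doubles as a volume (here, area) estimate by voxel counting, which mirrors how the paper derives tumor volumes from the 3-D segmentation.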
Affiliation(s)
- Ahmed Sayed
- Biomedical Engineering Department, Misr University for Science & Technology, 6th of October City, Egypt
- Ginger Layne
- Department of Radiology, West Virginia University Health Sciences Center, Morgantown, West Virginia, USA
- Jame Abraham
- Taussig Cancer Institute, Cleveland Clinic, Cleveland, Ohio, USA
- Osama M Mukdadi
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, West Virginia, USA.
16
Barbosa D, Heyde B, Cikes M, Dietenbeck T, Claus P, Friboulet D, Bernard O, D'hooge J. Real-time 3D interactive segmentation of echocardiographic data through user-based deformation of B-spline explicit active surfaces. Comput Med Imaging Graph 2014; 38:57-67. PMID: 24332441; DOI: 10.1016/j.compmedimag.2013.10.002.
Abstract
Image segmentation is a ubiquitous task in medical image analysis, required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains to date a supervised process in daily clinical practice. Indeed, challenging data often requires user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to efficiently interact with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application where user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, thus contributing to higher segmentation accuracy while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, where the user input is mapped to a non-Cartesian space and used to drive the boundary towards the position provided by the user. Additionally, we propose a shape regularization term which improves the interaction with the segmented surface, thereby making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance both in terms of segmentation accuracy and total analysis time. This contributes to a more efficient use of the existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire-based algorithm.
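The interaction paradigm, i.e. user input pulling the boundary towards a supplied position while a regularizer keeps the surrounding shape smooth, can be caricatured in a few lines. This is not the authors' B-spline explicit active surface (BEAS) formulation: the sketch uses a plain point-sampled 2D contour, a Gaussian influence window, and Laplacian smoothing, all invented for illustration.

```python
import numpy as np

# Closed contour: a circle of radius 20 around (50, 50), sampled at
# 100 points. Point 0 starts at (70, 50).
n = 100
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
contour = np.stack([50 + 20 * np.cos(theta),
                    50 + 20 * np.sin(theta)], axis=1)

user_point = np.array([80.0, 50.0])   # where the user dragged the edge

for _ in range(200):
    # Attraction: only points near the click feel the user's correction.
    d = np.linalg.norm(contour - user_point, axis=1)
    w = np.exp(-(d / 10.0) ** 2)[:, None]   # local influence window
    pull = w * (user_point - contour)
    # Regularization: discrete Laplacian smoothing of the closed curve,
    # playing the role of the paper's shape regularization term.
    lap = (np.roll(contour, 1, axis=0) + np.roll(contour, -1, axis=0)
           - 2 * contour)
    contour += 0.1 * pull + 0.2 * lap

print(contour[0])   # the point nearest the click, moved towards it
```

The localized influence window is what makes the correction blend smoothly into the neighbouring boundary instead of producing a spike at the clicked point.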