1. Wang J, Fang Z, Yao S, Yang F. Ellipse guided multi-task network for fetal head circumference measurement. Biomed Signal Process Control 2023. doi:10.1016/j.bspc.2022.104535
2. Ye X, Guo D, Ge J, Yan S, Xin Y, Song Y, Yan Y, Huang BS, Hung TM, Zhu Z, Peng L, Ren Y, Liu R, Zhang G, Mao M, Chen X, Lu Z, Li W, Chen Y, Huang L, Xiao J, Harrison AP, Lu L, Lin CY, Jin D, Ho TY. Comprehensive and clinically accurate head and neck cancer organs-at-risk delineation on a multi-institutional study. Nat Commun 2022;13:6137. PMID: 36253346; PMCID: PMC9576793; doi:10.1038/s41467-022-33178-z
Abstract
Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OARs. We train SOARS using 176 patients from an internal institution and independently evaluate it on 1327 external patients across six different institutions. It consistently outperforms other state-of-the-art methods by at least 3-5% in Dice score for each institutional evaluation (up to 36% relative distance error reduction). Crucially, multi-user studies demonstrate that 98% of SOARS predictions need only minor or no revisions to achieve clinical acceptance (reducing workloads by 90%). Moreover, segmentation and dosimetric accuracy are within or smaller than the inter-user variation.
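The Dice score reported in this evaluation is the standard overlap metric for segmentation. As a concrete illustration (not the authors' code), a minimal NumPy sketch is:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, truth), 3))  # 2*2/(3+3) = 0.667
```

A "3-5% Dice" improvement, as claimed above, refers to absolute differences in this quantity.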
Affiliation(s)
- Xianghua Ye
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Dazhou Guo
- DAMO Academy, Alibaba Group, New York, NY, USA
- Jia Ge
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Senxiang Yan
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yi Xin
- Ping An Technology, Shenzhen, China
- Yuchen Song
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yongheng Yan
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Bing-shen Huang
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Tsung-Min Hung
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Zhuotun Zhu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Ling Peng
- Department of Respiratory Disease, Zhejiang Provincial People's Hospital, Hangzhou, Zhejiang, China
- Yanping Ren
- Department of Radiation Oncology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Rui Liu
- Department of Radiation Oncology, The First Affiliated Hospital, Xi'an Jiaotong University, Xi'an, China
- Gong Zhang
- Department of Radiation Oncology, People's Hospital of Shanxi Province, Shanxi, China
- Mengyuan Mao
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Xiaohua Chen
- Department of Radiation Oncology, The First Hospital of Lanzhou University, Lanzhou, Gansu, China
- Zhongjie Lu
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Wenxiang Li
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yuzhen Chen
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Le Lu
- DAMO Academy, Alibaba Group, New York, NY, USA
- Chien-Yu Lin
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC; Particle Physics and Beam Delivery Core Laboratory, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan, ROC
- Dakai Jin
- DAMO Academy, Alibaba Group, New York, NY, USA
- Tsung-Ying Ho
- Department of Nuclear Medicine, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
3. Lei T, Wang R, Zhang Y, Wan Y, Liu C, Nandi AK. DefED-Net: Deformable Encoder-Decoder Network for Liver and Liver Tumor Segmentation. IEEE Trans Radiat Plasma Med Sci 2022. doi:10.1109/trpms.2021.3059780
4. Huang M, Huang C, Yuan J, Kong D. A Semiautomated Deep Learning Approach for Pancreas Segmentation. J Healthc Eng 2021;2021:3284493. PMID: 34306587; PMCID: PMC8272661; doi:10.1155/2021/3284493
Abstract
Accurate pancreas segmentation from 3D CT volumes is important for the treatment of pancreatic diseases. Accurately delineating the pancreas is challenging because of its poor intensity contrast and large intrinsic variations in volume, shape, and location. In this paper, we propose a semiautomated deformable U-Net, i.e., DUNet, for pancreas segmentation. The key innovation of the proposed method is a deformable convolution module, which adaptively adds learned offsets to each sampling position of the 2D convolutional kernel to enhance feature representation. Combining the deformable convolution module with U-Net enables DUNet to flexibly capture pancreatic features and improves the geometric modeling capability of U-Net. Moreover, a nonlinear Dice-based loss function is designed to tackle the class-imbalance problem in pancreas segmentation. Experimental results show that the proposed method outperforms all comparison methods on the same NIH dataset.
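The exact form of the paper's "nonlinear Dice-based loss" is not given here; a generic soft-Dice loss, of the kind commonly used against class imbalance in segmentation, is one plausible shape (an assumption, shown in plain NumPy rather than a deep-learning framework):

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """1 - soft Dice. probs: predicted foreground probabilities in [0, 1];
    target: binary ground-truth mask. eps avoids division by zero."""
    intersection = (probs * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)

target = np.array([0.0, 1.0, 1.0, 0.0])
perfect = soft_dice_loss(target, target)            # ~0.0 for a perfect prediction
unsure = soft_dice_loss(np.full(4, 0.5), target)    # ~0.5 for a uniform 0.5 prediction
```

Unlike per-pixel cross-entropy, this loss is computed over the whole mask, so a small foreground cannot be ignored by predicting all background.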
Affiliation(s)
- Meixiang Huang
- The School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Chongfei Huang
- The School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Jing Yuan
- The School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- The School of Mathematics and Statistics, Xidian University, Xi'an 710069, China
- Dexing Kong
- The School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
5. Barat M, Chassagnon G, Dohan A, Gaujoux S, Coriat R, Hoeffel C, Cassinotto C, Soyer P. Artificial intelligence: a critical review of current applications in pancreatic imaging. Jpn J Radiol 2021;39:514-523. PMID: 33550513; doi:10.1007/s11604-021-01098-5
Abstract
The applications of artificial intelligence (AI), including machine learning and deep learning, in pancreatic disease imaging are rapidly expanding. AI can be used not only for the detection of pancreatic ductal adenocarcinoma and other pancreatic tumors but also for pancreatic lesion characterization. In this review, the basics of radiomics, recent developments, and current results of AI in the field of pancreatic tumors are presented. Limitations and future perspectives of AI are discussed.
Affiliation(s)
- Maxime Barat
- Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Guillaume Chassagnon
- Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Anthony Dohan
- Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Sébastien Gaujoux
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Department of Abdominal Surgery, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Romain Coriat
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Department of Gastroenterology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Christine Hoeffel
- Department of Radiology, Robert Debré Hospital, 51092, Reims, France
- Christophe Cassinotto
- Department of Radiology, CHU Montpellier, University of Montpellier, Saint-Éloi Hospital, 34000, Montpellier, France
- Philippe Soyer
- Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
6. Petit O, Thome N, Soler L. Iterative confidence relabeling with deep ConvNets for organ segmentation with partial labels. Comput Med Imaging Graph 2021;91:101938. PMID: 34153879; doi:10.1016/j.compmedimag.2021.101938
Abstract
Training deep ConvNets requires large labeled datasets. However, collecting pixel-level labels for medical image segmentation is very expensive and requires a high level of expertise. In addition, most existing segmentation masks provided by clinical experts focus on specific anatomical structures. In this paper, we propose a method dedicated to handling such partially labeled medical image datasets. We propose a strategy to identify pixels for which labels are correct and to train fully convolutional neural networks with a multi-label loss adapted to this context. In addition, we introduce an iterative confidence self-training approach, inspired by curriculum learning, to relabel missing pixel labels; it selects the most confident predictions using a specifically designed confidence network that learns an uncertainty measure, which is leveraged in the relabeling process. Our approach, INERRANT (Iterative coNfidencE Relabeling of paRtial ANnoTations), is thoroughly evaluated on two public datasets (TCAI and LITS) and one internal dataset with seven abdominal organ classes. We show that INERRANT robustly deals with partial labels, performing similarly to a model trained on all labels even for large missing-label proportions. We also highlight the importance of our iterative learning scheme and the proposed confidence measure for optimal performance. Finally, we show a practical use case in which a limited number of completely labeled datasets are enriched by publicly available but partially labeled data.
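The core idea behind training on partially labeled data, computing the loss only where a label is actually known, can be sketched as follows. This is a deliberate simplification of INERRANT's multi-label loss, not the paper's implementation; all names are illustrative:

```python
import numpy as np

def masked_bce(probs, labels, known, eps=1e-7):
    """Pixel-wise binary cross-entropy averaged only over pixels whose
    ground-truth label is known (known == 1); unlabeled pixels are ignored."""
    probs = np.clip(probs, eps, 1.0 - eps)  # numerical safety for log
    bce = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    return (bce * known).sum() / known.sum()

probs  = np.array([0.9, 0.9, 0.1])
labels = np.array([1.0, 0.0, 0.0])   # the middle label contradicts the prediction...
known  = np.array([1.0, 0.0, 1.0])   # ...but is marked unknown, so it contributes nothing
loss = masked_bce(probs, labels, known)
```

The relabeling step in the paper then gradually turns `known == 0` pixels into supervised ones as the confidence network deems predictions trustworthy.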
Affiliation(s)
- Olivier Petit
- CEDRIC, Conservatoire National des Arts et Metiers, 292 rue Saint-Martin, Paris, 75003, France; Visible Patient, 8 rue Gustave Adolphe Hirn, Strasbourg, 67000, France
- Nicolas Thome
- CEDRIC, Conservatoire National des Arts et Metiers, 292 rue Saint-Martin, Paris, 75003, France
- Luc Soler
- Visible Patient, 8 rue Gustave Adolphe Hirn, Strasbourg, 67000, France
7. Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Robust and Accurate Mandible Segmentation on Dental CBCT Scans Affected by Metal Artifacts Using a Prior Shape Model. J Pers Med 2021;11:364. PMID: 34062762; PMCID: PMC8147374; doi:10.3390/jpm11050364
Abstract
Accurate mandible segmentation is important in maxillofacial surgery to guide clinical diagnosis and treatment and to develop appropriate surgical plans. In particular, cone-beam computed tomography (CBCT) images containing metal parts, such as those used in oral and maxillofacial surgery (OMFS), are often degraded by metal artifacts, with weak and blurred boundaries caused by high-attenuation materials and the low radiation dose used in image acquisition. To overcome this problem, this paper proposes a novel deep learning-based approach (SASeg) for automated mandible segmentation that incorporates overall mandible anatomical knowledge. SASeg utilizes a prior shape feature extractor (PSFE) module based on a mean mandible shape, and recurrent connections maintain the continuity of the mandible structure. The effectiveness of the proposed network is substantiated on a dental CBCT dataset from orthodontic treatment containing 59 patients. The experiments show that the proposed SASeg can be easily used to improve prediction accuracy on a dental CBCT dataset corrupted by metal artifacts. In addition, the experimental results on the PDDCA dataset demonstrate that, compared with state-of-the-art mandible segmentation models, our proposed SASeg achieves better segmentation performance.
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
8. Li W, Qin S, Li F, Wang L. MAD-UNet: A deep U-shaped network combined with an attention mechanism for pancreas segmentation in CT images. Med Phys 2020;48:329-341. PMID: 33222222; doi:10.1002/mp.14617
Abstract
PURPOSE: Pancreas segmentation is a difficult task because of the high intrapatient variability in the shape, size, and location of the organ, as well as the low contrast and small footprint of the pancreas in CT scans. At present, the U-Net model is prone to intraclass inconsistency and interclass indistinction in pancreas segmentation. To solve these problems, we improved the contextual and semantic feature extraction of the U-Net biomedical image segmentation model and propose an improved segmentation model called the multiscale attention dense residual U-shaped network (MAD-UNet).
METHODS: Two aspects are considered in this method. First, we adopted dense residual blocks and a weighted binary cross-entropy loss to enhance the semantic features and learn the details of the pancreas, reducing the effects of intraclass inconsistency. Second, we used an attention mechanism and multiscale convolution to enrich the contextual information and suppress learning in unrelated areas, making the model more sensitive to pancreatic boundary information and reducing the impact of interclass indistinction.
RESULTS: We evaluated our model using fourfold cross-validation on 82 abdominal contrast-enhanced three-dimensional (3D) CT scans from the National Institutes of Health (NIH-82) and 281 3D CT scans from the 2018 MICCAI segmentation decathlon challenge (MSD). The experimental results showed that our method achieved state-of-the-art performance on the two pancreatic datasets, with mean Dice coefficients of 86.10% ± 3.52% and 88.50% ± 3.70%.
CONCLUSIONS: Our model effectively addresses intraclass inconsistency and interclass indistinction in pancreas segmentation and has value in clinical application. Code is available at https://github.com/Mrqins/pancreas-segmentation.
Affiliation(s)
- Weisheng Li
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Sheng Qin
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Feiyan Li
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Linhong Wang
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
9. The integration of artificial intelligence models to augment imaging modalities in pancreatic cancer. J Pancreatol 2020. doi:10.1097/jp9.0000000000000056
10. Zhang Y, Wu J, Liu Y, Chen Y, Chen W, Wu EX, Li C, Tang X. A deep learning framework for pancreas segmentation with multi-atlas registration and 3D level-set. Med Image Anal 2020;68:101884. PMID: 33246228; doi:10.1016/j.media.2020.101884
Abstract
In this paper, we propose and validate a deep learning framework that incorporates both multi-atlas registration and level-set methods for segmenting the pancreas from CT volume images. The proposed segmentation pipeline consists of three stages: coarse, fine, and refinement. First, a coarse segmentation is obtained through multi-atlas-based 3D diffeomorphic registration and fusion. Then, to learn the connection features, a 3D patch-based convolutional neural network (CNN) and three 2D slice-based CNNs are jointly used to predict a fine segmentation based on a bounding box determined from the coarse segmentation. Finally, a 3D level-set method, with the fine segmentation as one of its constraints, integrates information from the original image and the CNN-derived probability map to achieve a refined segmentation. In other words, the proposed framework jointly utilizes global 3D location information (registration), contextual information (patch-based 3D CNN), shape information (slice-based 2.5D CNN), and edge information (3D level-set). These components form our cascaded coarse-fine-refine segmentation framework. We test the proposed framework on three datasets with varying intensity ranges obtained from different sources, containing 36, 82, and 281 CT volume images, respectively. On each dataset, we achieve an average Dice score above 82%, superior or comparable to other existing state-of-the-art pancreas segmentation algorithms.
Affiliation(s)
- Yue Zhang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Jiong Wu
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China; School of Computer and Electrical Engineering, Hunan University of Arts and Science, Hunan, China
- Yilong Liu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Yifan Chen
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Wei Chen
- Department of Radiology, Third Military Medical University Southwest Hospital, Chongqing, China
- Ed X Wu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Chunming Li
- Department of Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Xiaoying Tang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
12. Kumar H, DeSouza SV, Petrov MS. Automated pancreas segmentation from computed tomography and magnetic resonance images: A systematic review. Comput Methods Programs Biomed 2019;178:319-328. PMID: 31416559; doi:10.1016/j.cmpb.2019.07.002
Abstract
The pancreas is a highly variable organ; its size, shape, and position are affected by age, sex, adiposity, the presence of diseases affecting the pancreas (e.g., diabetes, pancreatic cancer, pancreatitis), and other factors. Accurate automated segmentation of the pancreas has the potential to facilitate timely diagnosis and management of diseases of the endocrine and exocrine pancreas. The aim was to systematically review studies reporting on automated pancreas segmentation algorithms derived from computed tomography (CT) or magnetic resonance (MR) images. The MEDLINE database and three patent databases were searched. Data on the performance of algorithms were meta-analysed when possible. The algorithms were classified into one of four groups: multiorgan atlas-based, landmark-based, shape model-based, and neural network-based. A total of 13 cohorts suitable for meta-analysis were pooled to determine the overall performance of pancreas segmentation algorithms using the Dice coefficient. These cohorts, comprising 1110 individuals, yielded a weighted mean Dice coefficient of 74.4%. Eight cohorts suitable for meta-analysis were pooled to determine the overall performance using the Jaccard index. These cohorts, comprising 636 individuals, yielded a weighted mean Jaccard index of 63.7%. Multiorgan atlas-based algorithms had a weighted mean Dice coefficient of 70.1% and a weighted mean Jaccard index of 59.8%. Neural network-based algorithms had a weighted mean Dice coefficient of 82.3% and a weighted mean Jaccard index of 70.1%. Studies using the other two types of algorithms were not meta-analysable. These findings indicate that automating pancreas segmentation remains a considerable challenge, as the performance of current automated algorithms is suboptimal. Adopting standardised reporting on the performance of pancreas segmentation algorithms and encouraging the use of benchmark pancreas segmentation datasets will allow future algorithms to be tested and compared more easily and fairly.
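The pooling described above is a cohort-size-weighted mean: each cohort's Dice (or Jaccard) value is weighted by its number of individuals. With made-up cohort values (not the review's data) it works like this:

```python
def weighted_mean(values, weights):
    """Weight each cohort's metric by its number of individuals."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# hypothetical cohorts: per-cohort mean Dice (%) and cohort size
dice = [70.0, 80.0, 90.0]
sizes = [100, 50, 50]
pooled = weighted_mean(dice, sizes)  # (7000 + 4000 + 4500) / 200 = 77.5
```

Larger cohorts therefore pull the pooled estimate toward their own mean, which is why the review reports cohort sizes alongside each pooled figure.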
Affiliation(s)
- Haribalan Kumar
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Steve V DeSouza
- School of Medicine, University of Auckland, Auckland, New Zealand
- Maxim S Petrov
- School of Medicine, University of Auckland, Auckland, New Zealand
13. Asaturyan H, Gligorievski A, Villarini B. Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation. Comput Med Imaging Graph 2019;75:1-13. doi:10.1016/j.compmedimag.2019.04.004
14. Wu X, Udupa JK, Tong Y, Odhner D, Pednekar GV, Simone CB, McLaughlin D, Apinorasethkul C, Apinorasethkul O, Lukens J, Mihailidis D, Shammo G, James P, Tiwari A, Wojtowicz L, Camaratta J, Torigian DA. AAR-RT - A system for auto-contouring organs at risk on CT images for radiation therapy planning: Principles, design, and large-scale evaluation on head-and-neck and thoracic cancer cases. Med Image Anal 2019;54:45-62. PMID: 30831357; PMCID: PMC6499546; doi:10.1016/j.media.2019.01.008
Abstract
Contouring (segmentation) of Organs at Risk (OARs) in medical images is required for accurate radiation therapy (RT) planning. In current clinical practice, OAR contouring is performed with low levels of automation. Although several approaches have been proposed in the literature for improving automation, it is difficult to gain an understanding of how well these methods would perform in a realistic clinical setting. This is chiefly due to three key factors - small number of patient studies used for evaluation, lack of performance evaluation as a function of input image quality, and lack of precise anatomic definitions of OARs. In this paper, extending our previous body-wide Automatic Anatomy Recognition (AAR) framework to RT planning of OARs in the head and neck (H&N) and thoracic body regions, we present a methodology called AAR-RT to overcome some of these hurdles. AAR-RT follows AAR's 3-stage paradigm of model-building, object-recognition, and object-delineation. Model-building: Three key advances were made over AAR. (i) AAR-RT (like AAR) starts off with a computationally precise definition of the two body regions and all of their OARs. Ground truth delineations of OARs are then generated following these definitions strictly. We retrospectively gathered patient data sets and the associated contour data sets that have been created previously in routine clinical RT planning from our Radiation Oncology department and mended the contours to conform to these definitions. We then derived an Object Quality Score (OQS) for each OAR sample and an Image Quality Score (IQS) for each study, both on a 1-to-10 scale, based on quality grades assigned to each OAR sample following 9 key quality criteria. Only studies with high IQS and high OQS for all of their OARs were selected for model building. IQS and OQS were employed for evaluating AAR-RT's performance as a function of image/object quality. 
(ii) In place of the previous hand-crafted hierarchy for organizing OARs in AAR, we devised a method to find an optimal hierarchy for each body region. Optimality was based on minimizing object recognition error. (iii) In addition to the parent-to-child relationship encoded in the hierarchy in previous AAR, we developed a directed probability graph technique to further improve recognition accuracy by learning and encoding in the model "steady" relationships that may exist among OAR boundaries in the three orthogonal planes. Object-recognition: The two key improvements over the previous approach are (i) use of the optimal hierarchy for actual recognition of OARs in a given image, and (ii) refined recognition by making use of the trained probability graph. Object-delineation: We use a kNN classifier confined to the fuzzy object mask localized by the recognition step and then fit optimally the fuzzy mask to the kNN-derived voxel cluster to bring back shape constraint on the object. We evaluated AAR-RT on 205 thoracic and 298 H&N (total 503) studies, involving both planning and re-planning scans and a total of 21 organs (9 - thorax, 12 - H&N). The studies were gathered from two patient age groups for each gender - 40-59 years and 60-79 years. The number of 3D OAR samples analyzed from the two body regions was 4301. IQS and OQS tended to cluster at the two ends of the score scale. Accordingly, we considered two quality groups for each gender - good and poor. Good quality data sets typically had OQS ≥ 6 and had distortions, artifacts, pathology etc. in not more than 3 slices through the object. The number of model-worthy data sets used for training were 38 for thorax and 36 for H&N, and the remaining 479 studies were used for testing AAR-RT. Accordingly, we created 4 anatomy models, one each for: Thorax male (20 model-worthy data sets), Thorax female (18 model-worthy data sets), H&N male (20 model-worthy data sets), and H&N female (16 model-worthy data sets). 
On "good" cases, AAR-RT's recognition accuracy was within 2 voxels and delineation boundary distance was within ∼1 voxel. This was similar to the variability observed between two dosimetrists in manually contouring 5-6 OARs in each of 169 studies. On "poor" cases, AAR-RT's errors hovered around 5 voxels for recognition and 2 voxels for boundary distance. The performance was similar on planning and replanning cases, and there was no gender difference in performance. AAR-RT's recognition operation is much more robust than delineation. Understanding object and image quality and how they influence performance is crucial for devising effective object recognition and delineation algorithms. OQS seems to be more important than IQS in determining accuracy. Streak artifacts arising from dental implants and fillings and beam hardening from bone pose the greatest challenge to auto-contouring methods.
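The delineation step this abstract describes (a kNN classifier confined to the fuzzy object mask, followed by a shape-constrained fit) can be illustrated with a minimal sketch. This is not the authors' implementation: the 2-D array, the single intensity feature, and the support threshold `tau` are all illustrative assumptions.

```python
import numpy as np

def knn_delineate(intensities, fuzzy_mask, train_feats, train_labels, k=5, tau=0.1):
    """Classify voxels inside the fuzzy-mask support with a brute-force kNN
    vote, mimicking the recognition-confined delineation idea (toy version:
    2-D image, one intensity feature per voxel)."""
    out = np.zeros(fuzzy_mask.shape, dtype=bool)
    for i, j in np.argwhere(fuzzy_mask > tau):       # restrict to localized region
        d = np.abs(train_feats - intensities[i, j])  # distance in feature space
        nn = np.argsort(d)[:k]                       # k nearest training samples
        out[i, j] = train_labels[nn].mean() >= 0.5   # majority vote
    return out
```

A real system would use richer features and follow this with the optimal fuzzy-mask fit; the sketch shows only the mask-confined classification.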
Affiliation(s)
- Xingyu Wu
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, 6th Floor, Rm 602W, Philadelphia, PA 19104, United States
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, 6th Floor, Rm 602W, Philadelphia, PA 19104, United States
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, 6th Floor, Rm 602W, Philadelphia, PA 19104, United States
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, 6th Floor, Rm 602W, Philadelphia, PA 19104, United States
- Gargi V Pednekar
- Quantitative Radiology Solutions, 3624 Market Street, Suite 5E, Philadelphia, PA 19104, United States
- Charles B Simone
- Department of Radiation Oncology, Maryland Proton Treatment Center, School of Medicine, University of Maryland 850W, Baltimore, MD 21201, United States
- David McLaughlin
- Quantitative Radiology Solutions, 3624 Market Street, Suite 5E, Philadelphia, PA 19104, United States
- Chavanon Apinorasethkul
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Ontida Apinorasethkul
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
- John Lukens
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Dimitris Mihailidis
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Geraldine Shammo
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Paul James
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Akhil Tiwari
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Lisa Wojtowicz
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Joseph Camaratta
- Quantitative Radiology Solutions, 3624 Market Street, Suite 5E, Philadelphia, PA 19104, United States
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, 6th Floor, Rm 602W, Philadelphia, PA 19104, United States
|
15
|
Schlemper J, Oktay O, Schaap M, Heinrich M, Kainz B, Glocker B, Rueckert D. Attention gated networks: Learning to leverage salient regions in medical images. Med Image Anal 2019; 53:197-207. [PMID: 30802813 PMCID: PMC7610718 DOI: 10.1016/j.media.2019.01.012] [Citation(s) in RCA: 575] [Impact Index Per Article: 115.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2018] [Revised: 01/15/2019] [Accepted: 01/18/2019] [Indexed: 02/07/2023]
Abstract
We propose a novel attention gate (AG) model for medical image analysis that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules when using convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN models such as VGG or U-Net architectures with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed AG models are evaluated on a variety of tasks, including medical image classification and segmentation. For classification, we demonstrate the use case of AGs in scan plane detection for fetal ultrasound screening. We show that the proposed attention mechanism can provide efficient object localisation while improving the overall prediction performance by reducing false positives. For segmentation, the proposed architecture is evaluated on two large 3D CT abdominal datasets with manual annotations for multiple organs. Experimental results show that AG models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency. Moreover, AGs guide the model activations to be focused around salient regions, which provides better insights into how model predictions are made. The source code for the proposed AG models is publicly available.
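The additive attention gate described above can be written down compactly. The following is a minimal numpy sketch of the gating computation, not the paper's code: the weight matrices `Wx`, `Wg`, `psi` and their shapes are illustrative, and real AGs operate on convolutional feature maps rather than single feature vectors.

```python
import numpy as np

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate on feature vectors: a gating signal g from a
    coarser scale yields a coefficient alpha in (0, 1) that rescales the
    skip-connection features x."""
    q = np.maximum(Wx @ x + Wg @ g, 0.0)      # ReLU of summed linear projections
    alpha = 1.0 / (1.0 + np.exp(-(psi @ q)))  # sigmoid attention coefficient
    return alpha * x, alpha
```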
Affiliation(s)
- Jo Schlemper
- BioMedIA, Imperial College London, London, SW7 2AZ, UK
- Ozan Oktay
- BioMedIA, Imperial College London, London, SW7 2AZ, UK; HeartFlow, Redwood City, CA 94063, USA
- Ben Glocker
- BioMedIA, Imperial College London, London, SW7 2AZ, UK
|
16
|
Bieth M, Peter L, Nekolla SG, Eiber M, Langs G, Schwaiger M, Menze B. Segmentation of Skeleton and Organs in Whole-Body CT Images via Iterative Trilateration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:2276-2286. [PMID: 28678702 DOI: 10.1109/tmi.2017.2720261] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Whole body oncological screening using CT images requires a good anatomical localisation of organs and the skeleton. While a number of algorithms for multi-organ localisation have been presented, developing algorithms for a dense anatomical annotation of the whole skeleton, however, has not been addressed until now. Only methods for specialised applications, e.g., in spine imaging, have been previously described. In this work, we propose an approach for localising and annotating different parts of the human skeleton in CT images. We introduce novel anatomical trilateration features and employ them within iterative scale-adaptive random forests in a hierarchical fashion to annotate the whole skeleton. The anatomical trilateration features provide high-level long-range context information that complements the classical local context-based features used in most image segmentation approaches. They rely on anatomical landmarks derived from the previous element of the cascade to express positions relative to reference points. Following a hierarchical approach, large anatomical structures are segmented first, before identifying substructures. We develop this method for bone annotation but also illustrate its performance, although not specifically optimised for it, for multi-organ annotation. Our method achieves average dice scores of 77.4 to 85.6 for bone annotation on three different data sets. It can also segment different organs with sufficient performance for oncological applications, e.g., for PET/CT analysis, and its computation time allows for its use in clinical practice.
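The anatomical trilateration features described above amount to encoding a voxel by its distances to previously located landmarks. A minimal sketch of that idea (the landmark coordinates here are invented for illustration):

```python
import numpy as np

def trilateration_features(voxel, landmarks):
    """Long-range context feature: Euclidean distance from a voxel to each
    anatomical reference point derived from the previous cascade stage."""
    landmarks = np.asarray(landmarks, dtype=float)
    return np.linalg.norm(landmarks - np.asarray(voxel, dtype=float), axis=1)
```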
|
17
|
An efficient Riemannian statistical shape model using differential coordinates: With application to the classification of data from the Osteoarthritis Initiative. Med Image Anal 2017; 43:1-9. [PMID: 28961450 DOI: 10.1016/j.media.2017.09.004] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2017] [Revised: 09/12/2017] [Accepted: 09/12/2017] [Indexed: 11/22/2022]
Abstract
We propose a novel Riemannian framework for statistical analysis of shapes that is able to account for the nonlinearity in shape variation. By adopting a physical perspective, we introduce a differential representation that puts the local geometric variability into focus. We model these differential coordinates as elements of a Lie group thereby endowing our shape space with a non-Euclidean structure. A key advantage of our framework is that statistics in a manifold shape space becomes numerically tractable improving performance by several orders of magnitude over state-of-the-art. We show that our Riemannian model is well suited for the identification of intra-population variability as well as inter-population differences. In particular, we demonstrate the superiority of the proposed model in experiments on specificity and generalization ability. We further derive a statistical shape descriptor that outperforms the standard Euclidean approach in terms of shape-based classification of morphological disorders.
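The key point above is that log/exp maps make statistics on a non-Euclidean shape space numerically tractable. As a toy stand-in (SO(2) rather than the paper's differential-coordinate Lie group, and valid only when the rotations lie within a half-circle of each other), a Fréchet-style mean via the log and exp maps:

```python
import numpy as np

def so2_log(R):
    """Rotation angle of a 2x2 rotation matrix (the Lie-algebra coordinate)."""
    return np.arctan2(R[1, 0], R[0, 0])

def so2_exp(t):
    """Rotation matrix for angle t (exponential map back to the group)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def intrinsic_mean(rotations):
    """Average in the tangent space, then map back onto the manifold."""
    return so2_exp(np.mean([so2_log(R) for R in rotations]))
```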
|
18
|
|
19
|
Multi-atlas pancreas segmentation: Atlas selection based on vessel structure. Med Image Anal 2017; 39:18-28. [DOI: 10.1016/j.media.2017.03.006] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2016] [Revised: 11/03/2016] [Accepted: 03/22/2017] [Indexed: 11/24/2022]
|
20
|
ShapeCut: Bayesian surface estimation using shape-driven graph. Med Image Anal 2017; 40:11-29. [PMID: 28582702 DOI: 10.1016/j.media.2017.04.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2016] [Revised: 04/12/2017] [Accepted: 04/22/2017] [Indexed: 11/21/2022]
Abstract
A variety of medical image segmentation problems present significant technical challenges, including heterogeneous pixel intensities, noisy/ill-defined boundaries and irregular shapes with high variability. The strategy of estimating optimal segmentations within a statistical framework that combines image data with priors on anatomical structures promises to address some of these technical challenges. However, methods that rely on local optimization techniques and/or local shape penalties (e.g., smoothness) have been proven to be inadequate for many difficult segmentation problems. These challenging segmentation problems can benefit from the inclusion of global shape priors within a maximum-a-posteriori estimation framework, which biases solutions toward an object class of interest. In this paper, we propose a maximum-a-posteriori formulation that relies on a generative image model by incorporating both local and global shape priors. The proposed method relies on graph cuts as well as a new shape parameters estimation that provides a global updates-based optimization strategy. We demonstrate our approach on synthetic datasets as well as on the left atrial wall segmentation from late-gadolinium enhancement MRI, which has been shown to be effective for identifying myocardial fibrosis in the diagnosis of atrial fibrillation. Experimental results prove the effectiveness of the proposed approach in terms of the average surface distance between extracted surfaces and the corresponding ground-truth, as well as the clinical efficacy of the method in the identification of fibrosis and scars in the atrial wall.
|
21
|
Fast approximation for joint optimization of segmentation, shape, and location priors, and its application in gallbladder segmentation. Int J Comput Assist Radiol Surg 2017; 12:743-756. [PMID: 28349505 DOI: 10.1007/s11548-017-1571-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2016] [Accepted: 03/16/2017] [Indexed: 10/19/2022]
Abstract
PURPOSE This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. METHODS After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. RESULTS Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. CONCLUSIONS Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.
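The branch-and-bound ingredient above relies on cheap-to-evaluate lower bounds to discard hypotheses early. A generic sketch of that pruning pattern (the candidate and cost structure here is invented; the paper's bound is over shape and location priors in eigenshape space):

```python
import numpy as np

def branch_and_bound(candidates, cost, lower_bound):
    """Scan candidates in order of their lower bound; once a bound is no
    better than the best exact cost found so far, all remaining candidates
    can be discarded without evaluating their exact cost."""
    best, best_cost = None, np.inf
    for cand in sorted(candidates, key=lower_bound):
        if lower_bound(cand) >= best_cost:
            break                      # everything after this is pruned
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```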
|
22
|
Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. Int J Comput Assist Radiol Surg 2016; 12:399-411. [PMID: 27885540 DOI: 10.1007/s11548-016-1501-5] [Citation(s) in RCA: 83] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2016] [Accepted: 11/03/2016] [Indexed: 10/20/2022]
Abstract
PURPOSE Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. METHODS The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. RESULTS Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. CONCLUSION A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
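The Dice overlap ratio used above to score each organ (and reported by several other papers on this page) is simple to state; a sketch over boolean masks:

```python
import numpy as np

def dice(a, b):
    """Dice overlap ratio: 2|A∩B| / (|A| + |B|); 1.0 for identical masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```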
|
23
|
Automated liver segmentation from a postmortem CT scan based on a statistical shape model. Int J Comput Assist Radiol Surg 2016; 12:205-221. [PMID: 27659283 DOI: 10.1007/s11548-016-1481-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Accepted: 08/31/2016] [Indexed: 10/21/2022]
Abstract
PURPOSE Automated liver segmentation from a postmortem computed tomography (PMCT) volume is a challenging problem owing to the large deformation and intensity changes caused by severe pathology and/or postmortem changes. This paper addresses this problem by a novel segmentation algorithm using a statistical shape model (SSM) for a postmortem liver. METHODS The location and shape parameters of a liver are directly estimated from a given volume by the proposed SSM-guided expectation-maximization (EM) algorithm without any spatial standardization that might fail owing to the large deformation and intensity changes. The estimated location and shape parameters are then used as a constraint of the subsequent fine segmentation process based on graph cuts. Algorithms with eight different SSMs were trained using 144 in vivo and 32 postmortem livers, and the segmentation algorithm was tested on 32 postmortem livers in a twofold cross validation manner. The segmentation performance is measured by the Jaccard index (JI) between the segmentation result and the true liver label. RESULTS The average JI of the segmentation result with the best SSM was 0.8501, which was better compared with the results obtained using conventional SSMs and the results of the previous postmortem liver segmentation with statistically significant difference. CONCLUSIONS We proposed an algorithm for automated liver segmentation from a PMCT volume, in which an SSM-guided EM algorithm estimated the location and shape parameters of a liver in a given volume accurately. We demonstrated the effectiveness of the proposed algorithm using actual postmortem CT volumes.
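The Jaccard index (JI) used above is the intersection-over-union between the segmentation result and the true liver label; a one-function sketch:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index (IoU): |A∩B| / |A∪B| over boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```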
|