1
Ma Z, Li C, Du T, Zhang L, Tang D, Ma D, Huang S, Liu Y, Sun Y, Chen Z, Yuan J, Nie Q, Grzegorzek M, Sun H. AATCT-IDS: A benchmark Abdominal Adipose Tissue CT Image Dataset for image denoising, semantic segmentation, and radiomics evaluation. Comput Biol Med 2024; 177:108628. [PMID: 38810476] [DOI: 10.1016/j.compbiomed.2024.108628]
Abstract
BACKGROUND AND OBJECTIVE The metabolic syndrome induced by obesity is closely associated with cardiovascular disease, and its prevalence is increasing globally year by year. Obesity is a risk marker for detecting this disease. However, current research on computer-aided detection of adipose distribution is hampered by the lack of large open-source abdominal adipose datasets. METHODS In this study, a benchmark Abdominal Adipose Tissue CT Image Dataset (AATCT-IDS) containing 300 subjects is prepared and published. AATCT-IDS makes public 13,732 raw CT slices, and the researchers individually annotated the subcutaneous and visceral adipose tissue regions of 3213 of those slices that have the same slice distance, in order to validate denoising methods, train semantic segmentation models, and study radiomics. For each task, this paper compares and analyzes the performance of various methods on AATCT-IDS by combining visualization results and evaluation data, thereby verifying the research potential of this dataset in the above three types of tasks. RESULTS In the comparative study of image denoising, algorithms using a smoothing strategy suppress mixed noise at the expense of image details and obtain better evaluation scores. Methods such as BM3D preserve the original image structure better, although their evaluation scores are slightly lower. The results show significant differences among the methods. In the comparative study of semantic segmentation of abdominal adipose tissue, the segmentation results produced by each model show different structural characteristics. Among them, BiSeNet obtains segmentation results only slightly inferior to U-Net, with the shortest training time, and effectively separates small and isolated adipose tissue. In addition, the radiomics study based on AATCT-IDS reveals three adipose distributions in the subject population. CONCLUSION AATCT-IDS contains the ground truth of adipose tissue regions in abdominal CT slices.
This open-source dataset can attract researchers to explore the multi-dimensional characteristics of abdominal adipose tissue and thus help physicians and patients in clinical practice. AATCT-IDS is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/AATTCT-IDS/23807256.
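The denoising comparison above weighs evaluation scores against detail preservation. As an illustrative aside, one common full-reference score of that kind, peak signal-to-noise ratio (PSNR), can be computed as below; the 8-bit dynamic range and the toy images are assumptions for the sketch, not values taken from the dataset.

```python
import numpy as np

def psnr(reference: np.ndarray, denoised: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference slice and a denoised slice."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant "slice" versus a copy offset by one gray level (MSE = 1).
ref = np.full((64, 64), 100.0)
den = ref + 1.0
score = psnr(ref, den)  # 10 * log10(255^2) ≈ 48.13 dB
```

A higher PSNR rewards smoothed output even when fine structure is lost, which is exactly the trade-off the abstract notes for BM3D versus smoothing-based methods.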
Affiliation(s)
- Zhiyu Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Le Zhang
- Department of Radiology, Qingdao Municipal Hospital, Qingdao University, Qingdao, China
- Dechao Tang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Deguo Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Shanchuan Huang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Yan Liu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Yihao Sun
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Zhihao Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Jin Yuan
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Qianqing Nie
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
- Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang 110122, China

2
Wang Z, Song J, Lin K, Hong W, Mao S, Wu X, Zhang J. Automated detection of otosclerosis with interpretable deep learning using temporal bone computed tomography images. Heliyon 2024; 10:e29670. [PMID: 38655358] [PMCID: PMC11036044] [DOI: 10.1016/j.heliyon.2024.e29670]
Abstract
Objective This study aimed to develop an automated detection schema for otosclerosis with interpretable deep learning using temporal bone computed tomography images. Methods With approval from the institutional review board, we retrospectively analyzed high-resolution computed tomography scans of the temporal bone of 182 participants with otosclerosis (67 male and 115 female; average age, 36.42 years) and 157 participants without otosclerosis (52 male and 102 female; average age, 30.61 years) using deep learning. Transfer learning with the pretrained VGG19, Mask RCNN, and EfficientNet models was used. In addition, 3 clinical experts read the same computed tomography images for a subset of 35 unseen subjects to benchmark the system's performance. The area under the receiver operating characteristic curve and a saliency map were used to further evaluate diagnostic performance. Results On prospective unseen test data, the diagnostic performance of the automatically interpretable otosclerosis detection system at the optimal threshold was 0.97 sensitivity and 0.98 specificity. At the P < 0.05 level, the proposed system did not differ significantly from the clinical acumen of the otolaryngologists. Moreover, the area under the receiver operating characteristic curve for the proposed system was 0.99, indicating satisfactory diagnostic accuracy. Conclusion Our research develops and evaluates a deep learning system that detects otosclerosis at a level comparable with clinical otolaryngologists. Our system is an effective schema for the differential diagnosis of otosclerosis in computed tomography examinations.
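Sensitivity, specificity at a chosen threshold, and the rank-based AUC of the kind reported above can all be computed directly from labels and scores. A self-contained sketch follows; the toy labels, scores, and threshold are illustrative, not taken from the study.

```python
import numpy as np

def sensitivity_specificity(y_true, y_score, threshold):
    """Sensitivity and specificity when scoring >= threshold counts as positive."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, y_score):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    y_true = np.asarray(y_true)
    pos = np.asarray(y_score)[y_true == 1]
    neg = np.asarray(y_score)[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
sens, spec = sensitivity_specificity(y_true, y_score, 0.45)  # 2/3 and 2/3
area = auc(y_true, y_score)                                  # 8/9
```

The "optimal threshold" mentioned in the abstract is typically selected by sweeping thresholds over such scores (e.g., maximizing Youden's index, sensitivity + specificity - 1).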
Affiliation(s)
- Zheng Wang
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Key Laboratory of Informatization Technology for Basic Education in Hunan Province, Changsha, 410205, China
- Jian Song
- Department of Otorhinolaryngology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Province Key Laboratory of Otolaryngology Critical Diseases, Changsha, Hunan, China
- Kaibin Lin
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Key Laboratory of Informatization Technology for Basic Education in Hunan Province, Changsha, 410205, China
- Wei Hong
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Key Laboratory of Informatization Technology for Basic Education in Hunan Province, Changsha, 410205, China
- Shuang Mao
- Department of Otorhinolaryngology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Province Key Laboratory of Otolaryngology Critical Diseases, Changsha, Hunan, China
- Xuewen Wu
- Department of Otorhinolaryngology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Province Key Laboratory of Otolaryngology Critical Diseases, Changsha, Hunan, China
- Jianglin Zhang
- Department of Dermatology, Shenzhen People's Hospital, The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen, 518020, Guangdong, China
- Candidate Branch of National Clinical Research Center for Skin Diseases, Shenzhen, 518020, Guangdong, China
- Department of Geriatrics, Shenzhen People's Hospital, The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen, 518020, Guangdong, China

3
Wang Y, Luo Z, Zhou Z, Zhong Y, Zhang R, Shen X, Huang L, He W, Lin J, Fang J, Huang Q, Wang H, Zhang Z, Mao R, Feng ST, Li X, Huang B, Li Z, Zhang J, Chen Z. CT-based radiomics signature of visceral adipose tissue and bowel lesions for identifying patients with Crohn's disease resistant to infliximab. Insights Imaging 2024; 15:28. [PMID: 38289416] [PMCID: PMC10828370] [DOI: 10.1186/s13244-023-01581-9]
Abstract
PURPOSE To develop a CT-based radiomics model combining visceral adipose tissue (VAT) and bowel features to improve the prediction of infliximab (IFX) therapy response over a bowel-only model. METHODS This retrospective study included 231 Crohn's disease (CD) patients (training cohort, n = 112; internal validation cohort, n = 48; external validation cohort, n = 71) from two tertiary centers. A machine-learning VAT model and a bowel model were developed separately to identify CD patients with primary nonresponse to IFX. A comprehensive model incorporating both VAT and bowel radiomics features was then established to verify whether CT features extracted from VAT improve the predictive efficacy of the bowel model. Area under the curve (AUC) and decision curve analysis were used to compare prediction performance. Clinical utility was assessed by the integrated discrimination improvement (IDI). RESULTS The VAT model and the bowel model exhibited comparable performance for identifying patients with primary nonresponse in both the internal validation cohort (AUC: VAT model vs. bowel model, 0.737 (95% CI, 0.590-0.854) vs. 0.832 (95% CI, 0.750-0.896)) and the external validation cohort (AUC: VAT model vs. bowel model, 0.714 (95% CI, 0.595-0.815) vs. 0.799 (95% CI, 0.687-0.885)), exhibiting a relatively good net benefit. The comprehensive model incorporating VAT into the bowel model yielded satisfactory predictive efficacy in both the internal (AUC, 0.840 (95% CI, 0.706-0.930)) and external validation cohorts (AUC, 0.833 (95% CI, 0.726-0.911)), significantly better than the bowel model alone (IDI = 4.2% and 3.7% in the internal and external validation cohorts, respectively; both p < 0.05). CONCLUSION VAT affects IFX treatment response, and adding selected VAT features to the radiomics model improves the identification of CD patients at high risk of primary nonresponse to IFX therapy.
CRITICAL RELEVANCE STATEMENT Our radiomics model for VAT-bowel analysis captured the pathophysiological changes occurring in VAT and the whole bowel lesion, which could help to identify CD patients who would not respond to infliximab at the beginning of therapy. KEY POINTS • Radiomics signatures with VAT and bowel features, alone or in combination, predict infliximab efficacy. • VAT features contribute to the prediction of IFX treatment efficacy. • The comprehensive model improved performance compared with the bowel model alone.
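The IDI reported above quantifies how much further apart the mean predicted risks of nonresponders and responders sit under the new (VAT + bowel) model than under the old (bowel-only) model. A minimal sketch, with illustrative probabilities standing in for the two models' outputs:

```python
import numpy as np

def idi(p_new, p_old, y):
    """Integrated discrimination improvement: gain in mean predicted-risk
    separation (events vs. non-events) of the new model over the old one."""
    p_new, p_old, y = map(np.asarray, (p_new, p_old, y))
    disc_new = p_new[y == 1].mean() - p_new[y == 0].mean()
    disc_old = p_old[y == 1].mean() - p_old[y == 0].mean()
    return disc_new - disc_old

y = np.array([1, 1, 0, 0])                    # 1 = primary nonresponse (event)
p_old = np.array([0.70, 0.60, 0.40, 0.30])    # bowel-only model (illustrative)
p_new = np.array([0.80, 0.65, 0.35, 0.25])    # VAT + bowel model (illustrative)
gain = idi(p_new, p_old, y)                   # 0.425 - 0.300 = 0.125
```

A positive IDI, as in the study's 4.2% and 3.7%, indicates that the comprehensive model pushes event and non-event risk estimates further apart.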
Affiliation(s)
- Yangdi Wang
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Zixin Luo
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, People's Republic of China
- Zhengran Zhou
- Zhongshan School of Medicine, Sun Yat-Sen University, 74 Zhongshan II Road, Guangzhou, Guangdong, People's Republic of China
- Yingkui Zhong
- Department of Gastroenterology, The Sixth Affiliated Hospital, Sun Yat-Sen University, Yuancun Er Heng Road, No. 26, Guangzhou, Guangdong, People's Republic of China
- Ruonan Zhang
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Xiaodi Shen
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Lili Huang
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Weitao He
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Jinjiang Lin
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Jiayu Fang
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Qiapeng Huang
- Department of Gastrointestinal Surgery, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, People's Republic of China
- Haipeng Wang
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, People's Republic of China
- Zhuya Zhang
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, People's Republic of China
- Ren Mao
- Department of Gastroenterology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, People's Republic of China
- Shi-Ting Feng
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Xuehua Li
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, People's Republic of China
- Zhoulei Li
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, 510080, People's Republic of China
- Jian Zhang
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, Guangdong, People's Republic of China
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, Guangdong, People's Republic of China
- Zhihui Chen
- Department of Gastrointestinal Surgery, The First Affiliated Hospital, Sun Yat-Sen University, 58 Zhongshan II Road, Guangzhou, Guangdong, People's Republic of China
- Guangxi Hospital Division of The First Affiliated Hospital, Sun Yat-sen University, Nanning, Guangxi, People's Republic of China

4
Xue Y, Yang S, Sun W, Tan H, Lin K, Peng L, Wang Z, Zhang J. Approaching expert-level accuracy for differentiating ACL tear types on MRI with deep learning. Sci Rep 2024; 14:938. [PMID: 38195977] [PMCID: PMC10776725] [DOI: 10.1038/s41598-024-51666-8]
Abstract
Treatment for anterior cruciate ligament (ACL) tears depends on the condition of the ligament. We aimed to identify different tear statuses from preoperative MRI using deep learning-based radiomics together with sex and age. We reviewed 862 patients with preoperative MRI scans reflecting ACL status from Hunan Provincial People's Hospital. Based on sagittal proton density-weighted images, a fully automated approach was developed that consisted of a deep learning model for segmenting ACL tissue (ACL-DNet) and a deep learning-based recognizer for ligament status classification (ACL-SNet). The efficacy of the proposed approach was evaluated using sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC), and compared with that of a group of three orthopedists on the holdout test set. The ACL-DNet model yielded a Dice coefficient of 98% ± 6% on the MRI datasets. Our proposed classification model yielded a sensitivity of 97% and a specificity of 97%. In comparison, the sensitivity of alternative models ranged from 84% to 90%, while the specificity was between 86% and 92%. The AUC of the ACL-SNet model was 99%, demonstrating high overall diagnostic accuracy. The diagnostic performance of the three clinical experts, as reflected in the AUC, was 96%, 92%, and 88%, respectively. The fully automated model shows potential as a highly reliable and reproducible tool that allows orthopedists to noninvasively identify ACL status and may aid in optimizing different techniques, such as ACL remnant preservation, for ACL reconstruction.
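The Dice coefficient used to evaluate ACL-DNet measures the overlap between a predicted and a ground-truth segmentation mask. A short sketch with toy binary masks (the shapes and shift are illustrative only):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 1.0 if total == 0 else 2.0 * inter / total

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True        # 16-pixel "ligament" mask
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True      # same mask shifted down one row
overlap = dice(pred, gt)   # 2*12 / (16+16) = 0.75
```

Scores near 1 (such as the reported 98%) indicate near-perfect overlap between automatic and manual delineations.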
Affiliation(s)
- Yang Xue
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Hunan Provincial Key Laboratory of Information Technology for Basic Education, Changsha, 410205, China
- Shu Yang
- Department of Orthopaedics, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha, 410002, China
- Wenjie Sun
- Department of Radiology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha, 410002, China
- Hui Tan
- Department of Radiology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha, 410002, China
- Kaibin Lin
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Hunan Provincial Key Laboratory of Information Technology for Basic Education, Changsha, 410205, China
- Li Peng
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Hunan Provincial Key Laboratory of Information Technology for Basic Education, Changsha, 410205, China
- Zheng Wang
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Hunan Provincial Key Laboratory of Information Technology for Basic Education, Changsha, 410205, China
- Jianglin Zhang
- Department of Dermatology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, 518020, Guangdong, China
- Candidate Branch of National Clinical Research Center for Skin Diseases, Shenzhen, 518020, Guangdong, China
- Department of Geriatrics, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, 518020, Guangdong, China

5
Cao C, Song J, Su R, Wu X, Wang Z, Hou M. Structure-constrained deep feature fusion for chronic otitis media and cholesteatoma identification. Multimed Tools Appl 2023:1-21. [PMID: 37362730] [PMCID: PMC10157598] [DOI: 10.1007/s11042-023-15425-7]
Abstract
Chronic suppurative otitis media (CSOM) and middle ear cholesteatoma (MEC) are the two most common chronic middle ear diseases (MED) in clinical practice. Accurate differential diagnosis between these two diseases is of high clinical importance given the differences in their etiologies, lesion manifestations, and treatments. High-resolution computed tomography (CT) scanning of the temporal bone presents a better view of the auditory structures and is currently regarded as the first-line diagnostic imaging modality for MED. In this paper, we first used a region-of-interest (ROI) network to locate the middle ear in the entire temporal bone CT image and crop it to a patch of 100 × 100 pixels. Then, we used a structure-constrained deep feature fusion algorithm to classify the middle ear patches into three groups: CSOM, MEC, and normal. To fuse structure information, we introduced a graph isomorphism network that aggregates feature vectors from vertex neighbourhoods together with the coordinate distances between vertices. Finally, we constructed a classifier named the "otitis media, cholesteatoma and normal identification classifier" (OMCNIC). The graph isomorphism network achieved a 96.36% accuracy across all CSOM and MEC classifications. The experimental results indicate that our structure-constrained deep feature fusion algorithm can quickly and effectively classify CSOM and MEC. It will help otologists select the most appropriate treatment, and complications may also be reduced.
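The first step above, extracting a fixed 100 × 100 patch around the detected middle ear, can be sketched as a centered crop with border clipping; the clipping behavior and example coordinates are assumptions for illustration, not the paper's ROI network.

```python
import numpy as np

def crop_roi(image: np.ndarray, center: tuple, size: int = 100) -> np.ndarray:
    """Crop a size x size patch centered on (row, col), shifting the window
    inward when it would overrun the image border."""
    r, c = center
    half = size // 2
    r0 = max(0, min(r - half, image.shape[0] - size))
    c0 = max(0, min(c - half, image.shape[1] - size))
    return image[r0:r0 + size, c0:c0 + size]

ct_slice = np.zeros((512, 512), dtype=np.int16)  # placeholder temporal-bone slice
patch = crop_roi(ct_slice, center=(260, 240))    # always (100, 100), even near edges
```

Keeping every patch the same size lets the downstream classifier operate on a fixed input shape.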
Affiliation(s)
- Cong Cao
- School of Mathematics and Statistics, Central South University, Changsha, 410083, China
- Jian Song
- Department of Otorhinolaryngology of Xiangya Hospital, Central South University, Changsha, 410008, China
- Key Laboratory of Otolaryngology Major Disease Research of Hunan Province, Changsha, 410008, China
- National Clinical Research Centre for Geriatric Disorders, Department of Geriatrics, Xiangya Hospital, Central South University, Changsha, 410008, China
- Ri Su
- School of Mathematics and Statistics, Central South University, Changsha, 410083, China
- Xuewen Wu
- Department of Otorhinolaryngology of Xiangya Hospital, Central South University, Changsha, 410008, China
- Key Laboratory of Otolaryngology Major Disease Research of Hunan Province, Changsha, 410008, China
- National Clinical Research Centre for Geriatric Disorders, Department of Geriatrics, Xiangya Hospital, Central South University, Changsha, 410008, China
- Zheng Wang
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Muzhou Hou
- School of Mathematics and Statistics, Central South University, Changsha, 410083, China

6
A skeleton context-aware 3D fully convolutional network for abdominal artery segmentation. Int J Comput Assist Radiol Surg 2023; 18:461-472. [PMID: 36273078] [DOI: 10.1007/s11548-022-02767-0]
Abstract
PURPOSE This paper proposes a deep learning-based method for abdominal artery segmentation. Blood vessel structure information is essential to diagnosis and treatment, and accurate blood vessel segmentation is critical to preoperative planning. Although deep learning-based methods perform well on large organs, segmenting small organs such as blood vessels is challenging due to their complicated branching structures and positions. We propose a 3D deep learning network designed from a skeleton context-aware perspective to improve segmentation accuracy. In addition, we propose a novel 3D patch generation method that strengthens the structural diversity of the training data set. METHOD The proposed method segments abdominal arteries from an abdominal computed tomography (CT) volume using a 3D fully convolutional network (FCN). We add two auxiliary tasks to the network to extract the skeleton context of the abdominal arteries. In addition, our skeleton-based patch generation (SBPG) method further enables the FCN to segment small arteries. SBPG generates a 3D patch from a CT volume by leveraging artery skeleton information. These methods improve the segmentation accuracy for small arteries. RESULTS We used 20 abdominal CT volumes to evaluate the proposed method. The experimental results showed that our method outperformed previous methods in segmentation accuracy. The averaged precision rate, recall rate, and F-measure were 95.5%, 91.0%, and 93.2%, respectively. Compared to a baseline method, our method improved the averaged recall rate by 1.5% and the averaged F-measure by 0.7%. CONCLUSIONS We present a skeleton context-aware 3D FCN that segments abdominal arteries from an abdominal CT volume, together with a 3D patch generation method. Our fully automated method segmented most of the abdominal artery regions and produced competitive segmentation performance compared to previous methods.
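The precision, recall, and F-measure reported above are voxel-wise counts over the predicted and ground-truth artery masks. A minimal sketch with toy 1-D masks (the values are illustrative, not study data):

```python
import numpy as np

def precision_recall_f1(pred: np.ndarray, gt: np.ndarray):
    """Voxel-wise precision, recall, and F-measure for a binary segmentation."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / pred.sum()   # fraction of predicted voxels that are correct
    recall = tp / gt.sum()        # fraction of true artery voxels recovered
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gt = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)    # true artery voxels
pred = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)  # prediction with one miss, one false alarm
p, r, f = precision_recall_f1(pred, gt)                # 0.75, 0.75, 0.75
```

For thin, branching structures such as small arteries, recall is the metric most sensitive to missed branches, which is why the paper highlights its recall gain over the baseline.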
7
Li Y, Song S, Sun Y, Bao N, Yang B, Xu L. Segmentation and volume quantification of epicardial adipose tissue in computed tomography images. Med Phys 2022; 49:6477-6490. [PMID: 36047382] [DOI: 10.1002/mp.15965]
Abstract
BACKGROUND Many cardiovascular diseases are closely related to the composition of epicardial adipose tissue (EAT). Accurate segmentation of EAT can provide a reliable reference for doctors to diagnose disease. The distribution and composition of EAT often show significant individual differences, and traditional segmentation methods are not effective. In recent years, deep learning methods have gradually been introduced into the EAT segmentation task. PURPOSE Existing deep learning-based EAT segmentation methods require a large amount of computation, and their segmentation accuracy needs to be improved. The purpose of this paper is therefore to develop a lightweight EAT segmentation network that can obtain higher segmentation accuracy with less computation and further alleviate the problem of false positive segmentation. METHODS First, the obtained computed tomography (CT) images were preprocessed: the threshold range of EAT was determined to be (-190, -30) HU according to prior knowledge, and non-adipose pixels were excluded by threshold segmentation to reduce the difficulty of training. Second, the thresholded images were fed into the lightweight RDU-Net network for training, validation, and testing. RDU-Net uses a residual multi-scale dilated convolution block to extract a wider range of information without changing the current resolution. At the same time, residual connections are adopted to avoid the vanishing or exploding gradient problems caused by a network that is too deep, which also makes learning easier. To optimize the training process, this paper proposes PNDiceLoss, which takes both positive and negative pixels as learning targets, fully considers the class imbalance problem, and appropriately highlights the status of positive pixels.
RESULTS In this paper, 50 coronary CT angiography (CCTA) images were randomly selected from the hospital, and the commonly used Dice similarity coefficient (DSC), Jaccard similarity (JS), accuracy (ACC), specificity (SP), precision (PC), and Pearson correlation coefficient were used as evaluation metrics. Bland-Altman analysis shows that the extracted EAT volume is consistent with the actual volume. Compared with existing methods, the proposed method achieves better performance on these metrics, reaching a DSC of 0.9262, and the number of false positive pixels was reduced by more than half. The Pearson correlation coefficient reached 0.992, and the linear regression coefficient reached 0.977 when measuring the volume of the extracted EAT. To verify the effectiveness of the proposed method, experiments were also carried out on the cardiac fat database of VisualLab; there, the proposed method likewise achieved good results, with a DSC of 0.927 using only 878 slices. CONCLUSIONS A new method to segment and quantify EAT is proposed. Comprehensive experiments show that compared with some classical segmentation algorithms, the proposed method has the advantages of shorter runtime, lower memory requirements, and higher segmentation accuracy. The code is available at https://github.com/lvanlee/EAT_Seg/tree/main/EAT_seg.
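The preprocessing step above keeps only voxels whose attenuation falls inside the adipose HU window before the network sees them. A minimal sketch; whether the (-190, -30) HU bounds are inclusive is not stated in the abstract, so the strict inequalities here are an assumption.

```python
import numpy as np

def adipose_candidates(ct_hu: np.ndarray, low: int = -190, high: int = -30) -> np.ndarray:
    """Binary mask of voxels whose attenuation lies in the adipose HU window."""
    return (ct_hu > low) & (ct_hu < high)

# Hypothetical 3x3 grid of Hounsfield units (air, fat, soft tissue, boundary values).
ct_hu = np.array([[-500, -100,  -50],
                  [  40, -190,  -30],
                  [-120,    0, -185]])
mask = adipose_candidates(ct_hu)  # keeps -100, -50, -120, -185 → 4 voxels
```

Masking out non-adipose voxels this way shrinks the effective search space, which is how the paper reduces training difficulty before segmentation proper.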
Affiliation(s)
- Yifan Li
- School of Science, Northeastern University, Shenyang, 110819, China
- Shuni Song
- Guangdong Peizheng College, Guangzhou, 510830, China
- Yu Sun
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, 110016, China
- Key Laboratory of Cardiovascular Imaging and Research of Liaoning Province, Shenyang, 110169, China
- Nan Bao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, Liaoning, 110169, China
- Benqiang Yang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, 110016, China
- Key Laboratory of Cardiovascular Imaging and Research of Liaoning Province, Shenyang, 110169, China
- Lisheng Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, Liaoning, 110169, China
- Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., Shenyang, Liaoning, 110169, China

8
Chen X, Liu X, Wang Y, Ma R, Zhu S, Li S, Li S, Dong X, Li H, Wang G, Wu Y, Zhang Y, Qiu G, Qian W. Development and Validation of an Artificial Intelligence Preoperative Planning System for Total Hip Arthroplasty. Front Med (Lausanne) 2022; 9:841202. [PMID: 35391886] [PMCID: PMC8981237] [DOI: 10.3389/fmed.2022.841202]
Abstract
Background Accurate preoperative planning is essential for successful total hip arthroplasty (THA). However, the time, manpower, and complex workflow required for accurate planning have limited its application. This study aims to develop a comprehensive artificial intelligence preoperative planning system for THA (AIHIP) and validate its accuracy in clinical performance. Methods Over 1.2 million CT images from 3,000 patients were included to develop the AIHIP system. Deep learning algorithms were developed to facilitate automatic image segmentation, image correction, recognition of preoperative deformities, and postoperative simulations. A prospective study including 120 patients was conducted to validate the accuracy and the clinical and radiographic outcomes. Results The comprehensive workflow was integrated into the AIHIP software. Deep learning algorithms achieved an optimal Dice similarity coefficient (DSC) of 0.973 and a loss of 0.012 at an average time of 1.86 ± 0.12 min per case, compared with 185.40 ± 21.76 min for the manual workflow. In clinical validation, AIHIP was significantly more accurate than X-ray-based planning in predicting component size, with more high-offset stems used. Conclusion The use of AIHIP significantly reduced the time and manpower required to produce detailed preoperative plans while being more accurate than the traditional planning method. It is easily accessible and has potential to assist surgeons, especially beginners, in meeting the fast-growing need for total hip arthroplasty.
Affiliation(s)
- Xi Chen
  - Department of Orthopedic Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Xingyu Liu
  - School of Life Sciences, Tsinghua University, Beijing, China
  - Institute of Biomedical and Health Engineering (iBHE), Tsinghua Shenzhen International Graduate School, Shenzhen, China
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
  - Longwood Valley Medical Technology Co. Ltd., Beijing, China
- Yiou Wang
  - Department of Orthopedic Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Ruichen Ma
  - School of Medicine, Tsinghua University, Beijing, China
- Shibai Zhu
  - Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Shanni Li
  - Department of Orthopedic Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Songlin Li
  - Department of Orthopedic Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Xiying Dong
  - Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Hairui Li
  - Department of Plastic Surgery, Sichuan University West China Hospital, Chengdu, China
- Guangzhi Wang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Yaojiong Wu
  - Institute of Biomedical and Health Engineering (iBHE), Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Yiling Zhang (correspondence)
  - Longwood Valley Medical Technology Co. Ltd., Beijing, China
- Guixing Qiu (correspondence)
  - Department of Orthopedic Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Wenwei Qian (correspondence)
  - Department of Orthopedic Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
9
Wang Z, Hounye AH, Zhang J, Hou M, Qi M. Deep learning for abdominal adipose tissue segmentation with few labelled samples. Int J Comput Assist Radiol Surg 2021; 17:579-587. [PMID: 34845590 DOI: 10.1007/s11548-021-02533-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Accepted: 11/04/2021] [Indexed: 11/30/2022]
Abstract
PURPOSE Fully automated abdominal adipose tissue segmentation from computed tomography (CT) scans plays an important role in biomedical diagnosis and prognosis. However, the traditional routine used in clinical practice to identify and segment subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the abdominal region is unattractive, expensive, time-consuming, and prone to false segmentation. To address this challenge, this paper introduces an effective global-anatomy-level convolutional neural network (ConvNet), termed EFNet, for automated segmentation of abdominal adipose tissue from CT scans; it accommodates multistage semantic segmentation and the highly similar intensity characteristics of the two classes (VAT and SAT) in the abdominal region. METHODS EFNet consists of three pathways: (1) max unpooling, used to reduce computational consumption; (2) concatenation, applied to recover the shape of the segmentation results; and (3) anatomy pyramid pooling, adopted to obtain fine-grained features. The usable anatomical information was encoded in the output of EFNet and allowed control of the density of the fine-grained features. RESULTS We formulated the learning process of EFNet in an end-to-end manner, in which the representation features can be jointly learned through a mixed feature fusion layer. We extensively evaluated our model on different datasets and compared it with existing deep learning networks. EFNet outperformed other state-of-the-art models on the segmentation results and demonstrated strong performance for abdominal adipose tissue segmentation. CONCLUSION EFNet is extremely fast, with remarkable performance for fully automated segmentation of VAT and SAT in abdominal CT scans. The proposed method demonstrates a strong ability for automated detection and segmentation of abdominal adipose tissue in clinical practice.
Affiliation(s)
- Zheng Wang
  - School of Mathematics and Statistics, Central South University, Changsha, 410083, China
  - Science and Engineering School, Hunan First Normal University, Changsha, 410205, China
- Jianglin Zhang
  - Department of Dermatology, The Second Clinical Medical College, Shenzhen People's Hospital, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen, 518020, Guangdong, China
- Muzhou Hou
  - School of Mathematics and Statistics, Central South University, Changsha, 410083, China
- Min Qi
  - Department of Plastic Surgery, Xiangya Hospital, Central South University, Changsha, 410008, China
10
Quantitative Imaging of Body Fat Distribution in the Era of Deep Learning. Acad Radiol 2021; 28:1488-1490. [PMID: 34023197 DOI: 10.1016/j.acra.2021.04.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2021] [Accepted: 04/14/2021] [Indexed: 11/20/2022]
11
Greco F, Mallio CA. Artificial intelligence and abdominal adipose tissue analysis: a literature review. Quant Imaging Med Surg 2021; 11:4461-4474. [PMID: 34603998 DOI: 10.21037/qims-21-370] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2021] [Accepted: 06/01/2021] [Indexed: 12/12/2022]
Abstract
Body composition imaging relies on the assessment of tissue composition and distribution. Quantitative data provided by body composition imaging analysis have been linked to the pathogenesis, risk, and clinical outcomes of a wide spectrum of diseases, including cardiovascular and oncologic diseases. Manual segmentation of imaging data makes it possible to obtain information on abdominal adipose tissue; however, this procedure can be cumbersome and time-consuming. On the other hand, quantitative imaging analysis based on artificial intelligence (AI) has been proposed as a fast and reliable automatic technique for segmentation of abdominal adipose tissue compartments, possibly improving the current standard of care. AI holds the potential to extract quantitative data from computed tomography (CT) and magnetic resonance (MR) images, which in most cases are acquired for other purposes. This information is of great importance to physicians for the assessment of risk, pathogenesis, clinical outcomes, response to treatment, and complications across a wide spectrum of diseases. In this review we summarize the available evidence on AI algorithms aimed at the segmentation of visceral and subcutaneous adipose tissue compartments on CT and MR images.
Affiliation(s)
- Federico Greco
  - U.O.C. Diagnostica per Immagini Territoriale Aziendale, Cittadella della Salute Azienda Sanitaria Locale di Lecce, Piazza Filippo Bottazzi, Lecce, Italy
12
Li Y, Pan J, Zhou N, Fu D, Lian G, Yi J, Peng Y, Liu X. A random forest model predicts responses to infliximab in Crohn's disease based on clinical and serological parameters. Scand J Gastroenterol 2021; 56:1030-1039. [PMID: 34304688 DOI: 10.1080/00365521.2021.1939411] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
BACKGROUND Infliximab (IFX) has revolutionised the treatment of Crohn's disease (CD), although a subset of patients shows no response to it at the end of the induction period. We developed a random forest-based prediction tool to predict the response to IFX in CD patients. METHODS This observational study retrospectively enrolled patients diagnosed with active CD who received IFX treatment at the Gastroenterology Department of Xiangya Hospital, Central South University, between January 2017 and December 2019. Baseline data were recorded at enrolment and used as predictor variables to construct models forecasting the response to IFX. RESULTS The final cohort comprised 174 patients, with a response rate of 29.3% (51/174). The area under the receiver operating characteristic curve (AUC) for the random forest model was 0.90 (95% CI: 0.82-0.98), compared with 0.68 (95% CI: 0.52-0.85) for the logistic regression model. The optimal cut-off value of the random forest model was 0.34, with a specificity of 0.94, a sensitivity of 0.81, and an accuracy of 0.85. We demonstrated a strong association of IFX response with the levels of complement C3 (C3), high-density lipoprotein, serum albumin, the Controlling Nutritional Status (CONUT) score, and the visceral fat area/subcutaneous fat area ratio (VSR). CONCLUSION A novel random forest model using clinical and serological parameters from baseline data was established to identify which CD patients with baseline inflammation will achieve a response to IFX. This model could be valuable for physicians, patients, and insurers, as it allows individualised therapy.
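As a point of reference for the AUC figures quoted above, the area under the ROC curve can be computed directly via its Mann-Whitney interpretation. This is an illustrative sketch with made-up toy scores, not the study's model or data:

```python
# Illustrative AUC computation (not the authors' code): AUC equals the
# probability that a randomly chosen positive case is scored higher than
# a randomly chosen negative case, counting ties as 1/2.

def auc(labels, scores):
    """Mann-Whitney AUC for binary labels (1 = responder, 0 = non-responder)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predictions from a hypothetical classifier:
y_true = [1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
```

For these toy values `auc(y_true, y_score)` is 8/9, since eight of the nine positive-negative pairs are ranked correctly.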
Affiliation(s)
- Yong Li
  - Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, China
- Jianfeng Pan
  - Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, China
- Nan Zhou
  - Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, China
- Dongni Fu
  - Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, China
- Guanghui Lian
  - Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, China
- Jun Yi
  - Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, China
- Yu Peng
  - Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, China
- Xiaowei Liu
  - Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, China
  - Hunan International Scientific and Technological Cooperation Base of Artificial Intelligence Computer Aided Diagnosis and Treatment for Digestive Disease, Xiangya Hospital, Central South University, Changsha, China
13
Wang Z, Xiao Y, Weng F, Li X, Zhu D, Lu F, Liu X, Hou M, Meng Y. R-JaunLab: Automatic Multi-Class Recognition of Jaundice on Photos of Subjects with Region Annotation Networks. J Digit Imaging 2021; 34:337-350. [PMID: 33634415 DOI: 10.1007/s10278-021-00432-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2020] [Revised: 07/01/2020] [Accepted: 02/09/2021] [Indexed: 12/21/2022] Open
Abstract
Jaundice occurs as a symptom of various diseases, such as hepatitis and cancers of the liver, gallbladder, or pancreas. Clinical measurement with special equipment is therefore the common method used to determine the total serum bilirubin level in patients. Fully automated multi-class recognition of jaundice involves two key difficulties: (1) multi-class recognition of jaundice is substantially harder than the binary case, and (2) high-resolution photos of subjects show extensive individual variability, strong similarity between healthy controls and occult jaundice, and broadly inhomogeneous color distribution. We introduce a novel approach for multi-class recognition of jaundice that distinguishes occult jaundice, obvious jaundice, and healthy controls. First, a region annotation network is developed and trained to propose eye candidates. Subsequently, an efficient jaundice recognizer is proposed to learn similarity, context, localization, and global characteristics from photos of subjects. Finally, both networks are unified through a shared convolutional layer. Evaluation of the structured model in a comparative study showed a significant performance boost (mean categorical accuracy 91.38%) over an independent human observer. Our model exceeded a state-of-the-art convolutional neural network (96.85% and 90.06% on the training and validation subsets, respectively) and achieved a remarkable mean categorical accuracy of 95.33% on the testing subset. The proposed network performs better than physicians. This work demonstrates the strength of our proposal in helping to bring an efficient tool for multi-class recognition of jaundice into clinical practice.
Affiliation(s)
- Zheng Wang
  - School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
  - Science and Engineering School, Hunan First Normal University, Changsha, 410205, China
- Ying Xiao
  - Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
- Futian Weng
  - School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
- Xiaojun Li
  - Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
- Danhua Zhu
  - Department of Gastroenterology, Hunan Provincial People's Hospital, Changsha, 410002, China
- Fanggen Lu
  - The Second Xiangya Hospital, Central South University, Changsha, 410083, China
- Xiaowei Liu
  - Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
- Muzhou Hou
  - School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
- Yu Meng
  - Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen, 518055, China
14
Wang Z, Weng F, Liu J, Cao K, Hou M, Wang J. Numerical solution for high-dimensional partial differential equations based on deep learning with residual learning and data-driven learning. INT J MACH LEARN CYB 2021. [DOI: 10.1007/s13042-021-01277-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
15
Abstract
Accurate and efficient dose calculation is an important prerequisite for successful radiation therapy. However, all of the dose calculation algorithms commonly used in current clinical practice must compromise between calculation accuracy and efficiency, which may result in unsatisfactory dose accuracy or highly intensive computation times in many clinical situations. The purpose of this work is to develop a novel dose calculation algorithm for radiation therapy based on deep learning. In this study we investigated the feasibility of implementing fast and accurate dose calculation with a deep learning technique. A two-dimensional (2D) fluence map was first converted into a three-dimensional (3D) volume using a ray traversal algorithm. A 3D U-Net-like deep residual network was then established to learn a mapping between this converted 3D volume, the CT, and the 3D dose distribution. An indirect relationship was thereby built between a fluence map and its corresponding 3D dose distribution without using significantly more complex neural networks. Two hundred patients, including nasopharyngeal, lung, rectum, and breast cancer cases, were collected and used to train the proposed network. An additional 47 patients were randomly selected to evaluate the accuracy of the proposed method by comparing dose distributions, dose-volume histograms, and clinical indices with the results from a treatment planning system (TPS), which served as the ground truth in this study. The proposed deep learning-based dose calculation algorithm achieved good predictive performance. For the 47 tested patients, the average per-voxel bias and standard deviation of the deep learning-calculated values (normalized to the prescription), relative to the TPS calculation, was 0.17% ± 2.28%. The average deep learning-calculated values and standard deviations for relevant clinical indices were compared with the TPS-calculated results, and the t-test p-values demonstrated consistency between them. In this study we developed a new deep learning-based dose calculation method and evaluated it on clinical cases from different sites. Our results demonstrate its feasibility and reliability and indicate its great potential for improving the efficiency and accuracy of radiation dose calculation for different treatment modalities.
Affiliation(s)
- Jiawei Fan
  - Department of Radiation Oncology, Stanford University, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, United States of America
  - On leave from: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Lei Xing
  - Department of Radiation Oncology, Stanford University, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, United States of America
- Peng Dong
  - Department of Radiation Oncology, Stanford University, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, United States of America
- Jiazhou Wang
  - Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Weigang Hu
  - Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Yong Yang
  - Department of Radiation Oncology, Stanford University, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, United States of America
16
Li W, Qin S, Li F, Wang L. MAD-UNet: A deep U-shaped network combined with an attention mechanism for pancreas segmentation in CT images. Med Phys 2020; 48:329-341. [PMID: 33222222 DOI: 10.1002/mp.14617] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 11/11/2020] [Accepted: 11/13/2020] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Pancreas segmentation is a difficult task because of the high intrapatient variability in the shape, size, and location of the organ, as well as the low contrast and small footprint of the pancreas in a CT scan. At present, the U-Net model is prone to intraclass inconsistency and interclass indistinction in pancreas segmentation. To solve this problem, we improved the contextual and semantic feature extraction of the convolutional U-Net biomedical image segmentation model and propose an improved model called the multiscale attention dense residual U-shaped network (MAD-UNet). METHODS Two aspects are considered in this method. First, we adopted dense residual blocks and a weighted binary cross-entropy loss to enhance the semantic features and learn the details of the pancreas, reducing the effects of intraclass inconsistency. Second, we used an attention mechanism and multiscale convolution to enrich the contextual information and suppress learning in unrelated areas, making the model more sensitive to pancreatic margin information and reducing the impact of interclass indistinction. RESULTS We evaluated our model using fourfold cross-validation on 82 abdominal contrast-enhanced three-dimensional (3D) CT scans from the National Institutes of Health (NIH-82) and 281 3D CT scans from the 2018 MICCAI segmentation decathlon challenge (MSD). The experimental results show that our method achieved state-of-the-art performance on the two pancreatic datasets, with mean Dice coefficients of 86.10% ± 3.52% and 88.50% ± 3.70%. CONCLUSIONS Our model effectively addresses intraclass inconsistency and interclass indistinction in pancreas segmentation, and it has value in clinical application. Code is available at https://github.com/Mrqins/pancreas-segmentation.
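The weighted binary cross-entropy mentioned in the METHODS can be sketched in NumPy as follows. The `pos_weight` parameter and the toy arrays are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of a weighted binary cross-entropy loss: the positive
# (pancreas) class gets an extra weight so that the small organ is not
# drowned out by the abundant background pixels.
import numpy as np

def weighted_bce(y_true, y_pred, pos_weight=1.0, eps=1e-7):
    """Per-pixel BCE with an extra weight on the positive class, averaged
    over all pixels. y_true is binary; y_pred holds probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    loss = -(pos_weight * y_true * np.log(y_pred)
             + (1.0 - y_true) * np.log(1.0 - y_pred))
    return float(loss.mean())

# Toy flattened masks/predictions:
y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.6, 0.2, 0.1])
```

With `pos_weight=1.0` this reduces to the standard BCE; increasing `pos_weight` raises the penalty for missed foreground pixels.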
Affiliation(s)
- Weisheng Li
  - Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Sheng Qin
  - Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Feiyan Li
  - Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Linhong Wang
  - Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
17
Esaki T, Furukawa R. [Volume Measurements of Post-transplanted Liver of Pediatric Recipients Using Workstations and Deep Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:1133-1142. [PMID: 33229843 DOI: 10.6009/jjrt.2020_jsrt_76.11.1133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE The purpose of this study was to propose a method for segmentation and volume measurement of the graft liver and spleen of pediatric transplant recipients on digital imaging and communications in medicine (DICOM)-format images using U-Net and three-dimensional (3-D) workstations (3DWS). METHOD For segmentation accuracy assessment, Dice coefficients were calculated for the graft liver and spleen. After verifying that the created DICOM-format images could be imported into the existing 3DWS, accuracy rates between the ground truth and segmentation images were calculated via mask processing. RESULT Dice coefficients for the test data were as follows: graft liver, 0.758; spleen, 0.577. All created DICOM-format images were importable into the 3DWS, with accuracy rates of 87.10 ± 4.70% and 80.27 ± 11.29% for the graft liver and spleen, respectively. CONCLUSION U-Net could be used for graft liver and spleen segmentation, and volume measurement using a 3DWS was simplified by this method.
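The Dice coefficient reported here is a simple overlap measure between two binary masks; a minimal NumPy sketch (toy masks, not the study's data):

```python
# Illustrative Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks,
# the overlap metric quoted for the graft liver and spleen segmentations.
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice similarity of two binary masks (1.0 = perfect overlap)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + eps))

# Toy 2x3 ground-truth and predicted masks:
gt   = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[1, 0, 0], [0, 1, 1]])
```

For these toy masks the intersection has 2 pixels and each mask has 3, so `dice(gt, pred)` is 2·2/(3+3) ≈ 0.667.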
Affiliation(s)
- Toru Esaki
  - Department of Radiologic Technology, Jichi Medical University Hospital
- Rieko Furukawa
  - Department of Pediatric Medical Imaging, Jichi Children's Medical Center Tochigi
18
Langner T, Strand R, Ahlström H, Kullberg J. Large-scale biometry with interpretable neural network regression on UK Biobank body MRI. Sci Rep 2020; 10:17752. [PMID: 33082454 PMCID: PMC7576214 DOI: 10.1038/s41598-020-74633-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2020] [Accepted: 10/05/2020] [Indexed: 11/14/2022] Open
Abstract
In a large-scale medical examination, the UK Biobank study has successfully imaged more than 32,000 volunteer participants with magnetic resonance imaging (MRI). Each scan is linked to extensive metadata, providing a comprehensive medical survey of imaged anatomy and related health states. Despite its potential for research, this vast amount of data presents a challenge to established methods of evaluation, which often rely on manual input. To date, the range of reference values for cardiovascular and metabolic risk factors is therefore incomplete. In this work, neural networks were trained for image-based regression to infer various biological metrics from the neck-to-knee body MRI automatically. The approach requires no manual intervention or direct access to reference segmentations for training. The examined fields span 64 variables derived from anthropometric measurements, dual-energy X-ray absorptiometry (DXA), atlas-based segmentations, and dedicated liver scans. With the ResNet50, the standardized framework achieves a close fit to the target values (median R² > 0.97) in cross-validation. Interpretation of aggregated saliency maps suggests that the network correctly targets specific body regions and limbs, and learned to emulate different modalities. On several body composition metrics, the quality of the predictions is within the range of variability observed between established gold standard techniques.
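The median R² quoted for this regression framework is the ordinary coefficient of determination; a minimal NumPy sketch with toy numbers (not the study's predictions):

```python
# Illustrative coefficient of determination: R² = 1 - SS_res / SS_tot,
# the fit statistic used to summarize image-based regression accuracy.
import numpy as np

def r2_score(y_true, y_pred):
    """R² of predictions against targets (1.0 = perfect fit)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return float(1.0 - ss_res / ss_tot)

# Toy targets and predictions:
y = [1.0, 2.0, 3.0, 4.0]
yhat = [1.1, 1.9, 3.2, 3.8]
```

Here SS_res = 0.10 and SS_tot = 5.0, so `r2_score(y, yhat)` is 0.98; values above 0.97, as reported in the abstract, indicate a very close fit.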
Affiliation(s)
- Taro Langner
  - Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Robin Strand
  - Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
  - Department of Information Technology, Uppsala University, 751 85, Uppsala, Sweden
- Håkan Ahlström
  - Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
  - Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Joel Kullberg
  - Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
  - Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
19
Solution of Ruin Probability for Continuous Time Model Based on Block Trigonometric Exponential Neural Network. Symmetry (Basel) 2020. [DOI: 10.3390/sym12060876] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The ruin probability is used to determine the overall operating risk of an insurance company. Modeling risks from the characteristics of an insurance business's historical data, such as premium income, dividends, and reinvestments, usually produces an integro-differential equation satisfied by the ruin probability. However, the distribution function of the claim inter-arrival times is more complicated, which makes it difficult to find an analytical solution for the ruin probability. Therefore, based on the principles of artificial intelligence and machine learning, we propose a novel numerical method for solving the ruin probability equation. The initial asset u is used as the input vector and the ruin probability as the only output. A trigonometric exponential function is proposed as the projection mapping in the hidden layer, and a block trigonometric exponential neural network (BTENN) model with a symmetrical structure is established. A trial solution is set to meet the initial value condition; simultaneously, the connection weights are optimized by solving a linear system using the extreme learning machine (ELM) algorithm. Three numerical experiments were carried out in Python. The results show that the BTENN model can obtain an approximate solution of the ruin probability under the classical risk model and the Erlang(2) risk model at any time point. Compared with existing methods such as Legendre neural networks (LNN) and trigonometric neural networks (TNN), the proposed BTENN model has higher stability and lower deviation, which proves that using a BTENN model to estimate the ruin probability is feasible and superior.
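The ELM step described above, fixing random hidden-layer weights and solving only the output weights as a linear least-squares system, can be sketched as follows. The target curve, activation, and layer size are illustrative assumptions, not the BTENN model itself:

```python
# Minimal extreme-learning-machine sketch: hidden weights stay random and
# fixed; only the output weights are obtained by solving a linear system,
# as in the ELM training described for the BTENN.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)[:, None]   # input grid (e.g. initial asset u)
y = np.exp(-2.0 * x[:, 0])                # stand-in smooth target curve

n_hidden = 50
W = rng.normal(size=(1, n_hidden))        # fixed random input weights
b = rng.normal(size=n_hidden)             # fixed random biases
H = np.tanh(x @ W * 4.0 + b)              # hidden-layer feature matrix

# Output weights from a single least-squares solve (no iterative training):
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ beta
```

Because training reduces to one `lstsq` call, fitting is essentially instantaneous; the paper's contribution lies in the block trigonometric exponential hidden mapping and the trial solution enforcing the initial condition, neither of which is reproduced here.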