51
Castiglione J, Somasundaram E, Gilligan LA, Trout AT, Brady S. Automated Segmentation of Abdominal Skeletal Muscle on Pediatric CT Scans Using Deep Learning. Radiol Artif Intell 2021;3:e200130. [PMID: 33937859; PMCID: PMC8043356; DOI: 10.1148/ryai.2021200130]
Abstract
PURPOSE To automate skeletal muscle segmentation in a pediatric population using convolutional neural networks that identify and segment the L3 level at CT. MATERIALS AND METHODS In this retrospective study, two sets of U-Net-based models were developed to identify the L3 level in the sagittal plane and segment the skeletal muscle from the corresponding axial image. For model development, 370 patients (sampled uniformly across age group from 0 to 18 years and including both sexes) were selected between January 2009 and January 2019, and ground truth L3 location and skeletal muscle segmentation were manually defined. Twenty percent (74 of 370) of the examinations were reserved for testing the L3 locator and muscle segmentation, while the remaining were used for training. For the L3 locator models, maximum intensity projections (MIPs) from a fixed number of central sections of sagittal reformats (either 12 or 18 sections) were used as input with or without transfer learning using an L3 localizer trained on an external dataset (four models total). For the skeletal muscle segmentation models, two loss functions (weighted Dice similarity coefficient [DSC] and binary cross-entropy) were used on models trained with or without data augmentation (four models total). Outputs from each model were compared with ground truth, and the mean relative error and DSC from each of the models were compared with one another. RESULTS L3 section detection trained with an 18-section MIP model with transfer learning had a mean error of 3.23 mm ± 2.61 standard deviation, which was within the reconstructed image thickness (3 or 5 mm). Skeletal muscle segmentation trained with the weighted DSC loss model without data augmentation had a mean DSC of 0.93 ± 0.03 and mean relative error of 0.04 ± 0.04. 
CONCLUSION Convolutional neural network models accurately identified the L3 level and segmented the skeletal muscle on pediatric CT scans. Supplemental material is available for this article. See also the commentary by Cadrin-Chênevert in this issue. © RSNA, 2021.
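The L3-locator input described in this abstract, a maximum intensity projection (MIP) over a fixed number of central sagittal sections, can be sketched in a few lines. The volume shape and values below are illustrative placeholders, not the authors' actual preprocessing code:

```python
import numpy as np

# Hypothetical sagittal reformat stack: 40 sections of 64 x 64 pixels (HU-like values).
rng = np.random.default_rng(0)
volume = rng.integers(-1000, 1000, size=(40, 64, 64))

def central_mip(vol, n_sections):
    """Maximum intensity projection over the n_sections central sagittal slices."""
    mid = vol.shape[0] // 2
    half = n_sections // 2
    return vol[mid - half : mid + half].max(axis=0)

mip18 = central_mip(volume, 18)  # analogous to the 18-section MIP model input
```

Each output pixel is the brightest value along the selected slab, which is what makes the spine stand out in the projection.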
Affiliation(s)
- James Castiglione, Elanchezhian Somasundaram, Leah A. Gilligan, Andrew T. Trout, Samuel Brady
- Department of Radiology, Cincinnati Children’s Hospital Medical Center, 3333 Burnet Ave, MLC 5031, Cincinnati, OH 45229-3026 (J.C., E.S., L.A.G., A.T.T., S.B.); Departments of Radiology (E.S., A.T.T., S.B.) and Pediatrics (A.T.T.), University of Cincinnati College of Medicine, Cincinnati, Ohio
52
53
Zhao H, Ke Z, Yang F, Li K, Chen N, Song L, Zheng C, Liang D, Liu C. Deep Learning Enables Superior Photoacoustic Imaging at Ultralow Laser Dosages. Adv Sci (Weinh) 2021;8:2003097. [PMID: 33552869; PMCID: PMC7856900; DOI: 10.1002/advs.202003097]
Abstract
Optical-resolution photoacoustic microscopy (OR-PAM) is an excellent modality for in vivo biomedical imaging, as it noninvasively provides high-resolution morphologic and functional information without the need for exogenous contrast agents. However, the high excitation laser dosage, limited imaging speed, and imperfect image quality still hinder the use of OR-PAM in clinical applications. Laser dosage, imaging speed, and image quality are mutually constrained, and thus far no method has been proposed to resolve this trade-off. Here, a deep learning method called the multitask residual dense network is proposed to overcome this challenge. This method utilizes an innovative strategy of integrating multisupervised learning, dual-channel sample collection, and a reasonable weight distribution. The proposed deep learning method is combined with an application-targeted modified OR-PAM system. Superior images under ultralow laser dosage (32-fold reduced dosage) are obtained for the first time in this study. Using this new technique, a high-quality, high-speed OR-PAM system that meets clinical requirements is now conceivable.
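The "reasonable weight distribution" across supervision tasks mentioned here amounts to a weighted sum of per-task losses. A minimal sketch follows; the tasks, weights, and mean-squared-error losses are illustrative assumptions, not the paper's actual network code:

```python
import numpy as np

def mse(pred, target):
    """Per-task mean-squared-error loss (a stand-in for the paper's losses)."""
    return float(np.mean((pred - target) ** 2))

def multitask_loss(preds, targets, weights):
    """Weighted sum of per-task losses, as in multisupervised training."""
    return sum(w * mse(p, t) for w, p, t in zip(weights, preds, targets))

# Two hypothetical supervision targets sharing a backbone.
denoised = np.full((4, 4), 0.5)
enhanced = np.full((4, 4), 0.25)
truth = np.zeros((4, 4))
total = multitask_loss([denoised, enhanced], [truth, truth], weights=[0.7, 0.3])
```

Tuning the weights shifts gradient emphasis between tasks; the 0.7/0.3 split above is purely for demonstration.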
Affiliation(s)
- Huangxuan Zhao
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Ziwen Ke
- Research Center for Medical AI, CAS Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Fan Yang, Chuansheng Zheng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Ke Li, Ningbo Chen, Liang Song, Chengbo Liu
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dong Liang
- Research Center for Medical AI, CAS Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
54
Large-scale analysis of iliopsoas muscle volumes in the UK Biobank. Sci Rep 2020;10:20215. [PMID: 33214629; PMCID: PMC7677387; DOI: 10.1038/s41598-020-77351-0]
Abstract
Psoas muscle measurements are frequently used as markers of sarcopenia and predictors of health. Manually measured cross-sectional areas are most commonly used, but there is a lack of consistency regarding the position of the measurement, and manual annotations are not practical for large population studies. We have developed a fully automated method to measure iliopsoas muscle volume (comprising the psoas and iliacus muscles) using a convolutional neural network. Magnetic resonance images were obtained from the UK Biobank for 5000 participants, balanced for age, gender and BMI. Ninety manual annotations were available for model training and validation. The model showed excellent performance against out-of-sample data (average Dice score coefficient of 0.9046 ± 0.0058 for six-fold cross-validation). Iliopsoas muscle volumes were successfully measured in all 5000 participants. Iliopsoas volume was greater in male compared with female subjects. There was a small but significant asymmetry between left and right iliopsoas muscle volumes. We also found that iliopsoas volume was significantly related to height, BMI and age, and that there was an acceleration in muscle volume decrease in men with age. Our method provides a robust technique for measuring iliopsoas muscle volume that can be applied to large cohorts.
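The Dice coefficient reported in several of these abstracts is straightforward to compute for a pair of binary masks; a minimal sketch (the toy masks are illustrative):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((4, 4), int); a[1:3, 1:3] = 1  # 4-pixel mask
b = np.zeros((4, 4), int); b[1:3, 1:4] = 1  # 6-pixel mask, 4 pixels overlap
score = dice(a, b)  # 2*4 / (4+6) = 0.8
```

A score of 1.0 means perfect overlap; values above roughly 0.9, as reported here, indicate near-complete agreement with the manual annotation.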
55
Wang L, Liang D, Yin X, Qiu J, Yang Z, Xing J, Dong J, Ma Z. Coronary artery segmentation in angiographic videos utilizing spatial-temporal information. BMC Med Imaging 2020;20:110. [PMID: 32972374; PMCID: PMC7513273; DOI: 10.1186/s12880-020-00509-9]
Abstract
Background Coronary artery angiography is an indispensable assistive technique for cardiac interventional surgery. Segmentation and extraction of blood vessels from coronary angiographic images or videos are essential prerequisites for physicians to locate, assess, and diagnose plaques and stenosis in blood vessels. Methods This article proposes a novel coronary artery segmentation framework that combines a three-dimensional (3D) convolutional input layer and a two-dimensional (2D) convolutional network. Instead of the single input image used in previous medical image segmentation applications, our framework accepts a sequence of coronary angiographic images as input and outputs the clearest mask of the segmentation result. The 3D input layer leverages the temporal information in the image sequence and fuses the multiple images into more comprehensive 2D feature maps. The 2D convolutional network implements down-sampling encoders, up-sampling decoders, bottleneck modules, and skip connections to accomplish the segmentation task. Results The spatial-temporal model of this article obtains good segmentation results despite the poor quality of coronary angiographic video sequences, and outperforms state-of-the-art techniques. Conclusions The results demonstrate that making full use of the spatial and temporal information in image sequences promotes the analysis and understanding of the images in videos.
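The role of the 3D input layer described here, collapsing a temporal stack of angiographic frames into 2D feature maps, can be illustrated with the degenerate case of a 1 x 1 spatial kernel, where the fusion reduces to a weighted sum over time. The frame sizes and weights below are illustrative, not taken from the paper:

```python
import numpy as np

# Toy angiographic sequence: 5 frames of 32 x 32 pixels.
rng = np.random.default_rng(1)
frames = rng.random((5, 32, 32))  # (T, H, W)

# A 3D kernel spanning the full temporal extent with 1x1 spatial support
# reduces to per-frame scalar weights; learning the weights lets the
# network emphasize the frames where contrast filling is clearest.
w = rng.random(5)
w /= w.sum()  # normalized temporal weights

fused = np.tensordot(w, frames, axes=(0, 0))  # single 2D feature map (H, W)
```

A real 3D convolution would also mix spatial neighborhoods and produce multiple output channels; this sketch only shows why the temporal axis can be collapsed without discarding its information.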
Affiliation(s)
- Lu Wang, Xiaolei Yin
- The Future Laboratory, Tsinghua University, Beijing 100084, China; Department of Information Art and Design, Academy of Arts and Design, Tsinghua University, Beijing 100084, China
- Dongxue Liang, Jing Qiu, Zhaoyuan Ma
- The Future Laboratory, Tsinghua University, Beijing 100084, China
- Zhiyun Yang
- Center for Cardiology, Beijing Anzhen Hospital, Capital Medical University, Beijing 100029, China
- Junhui Xing
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Jianzeng Dong
- Center for Cardiology, Beijing Anzhen Hospital, Capital Medical University, Beijing 100029, China; The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
56
Galbusera F, Cina A, Panico M, Albano D, Messina C. Image-based biomechanical models of the musculoskeletal system. Eur Radiol Exp 2020;4:49. [PMID: 32789547; PMCID: PMC7423821; DOI: 10.1186/s41747-020-00172-3]
Abstract
Finite element modeling is a precious tool for the investigation of the biomechanics of the musculoskeletal system. A key element for the development of anatomically accurate, state-of-the art finite element models is medical imaging. Indeed, the workflow for the generation of a finite element model includes steps which require the availability of medical images of the subject of interest: segmentation, which is the assignment of each voxel of the images to a specific material such as bone and cartilage, allowing for a three-dimensional reconstruction of the anatomy; meshing, which is the creation of the computational mesh necessary for the approximation of the equations describing the physics of the problem; assignment of the material properties to the various parts of the model, which can be estimated for example from quantitative computed tomography for the bone tissue and with other techniques (elastography, T1rho, and T2 mapping from magnetic resonance imaging) for soft tissues. This paper presents a brief overview of the techniques used for image segmentation, meshing, and assessing the mechanical properties of biological tissues, with focus on finite element models of the musculoskeletal system. Both consolidated methods and recent advances such as those based on artificial intelligence are described.
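For the material-assignment step this review describes, a common approach (not specific to this paper) maps CT Hounsfield units to an apparent bone density and then to an elastic modulus via a power law. The calibration coefficients below are placeholders; real values are scanner- and study-specific and must come from a phantom calibration:

```python
import numpy as np

def hu_to_density(hu, slope=0.001, intercept=1.0):
    """Linear HU-to-apparent-density calibration (g/cm^3).
    Slope and intercept are illustrative placeholders."""
    return slope * hu + intercept

def density_to_modulus(rho, c=6850.0, gamma=1.49):
    """Power-law density-to-modulus relation E = c * rho**gamma (MPa);
    the constants are representative of published fits for bone."""
    return c * rho ** gamma

hu = np.array([0.0, 500.0, 1000.0])        # water, trabecular-like, dense bone
E = density_to_modulus(hu_to_density(hu))  # element-wise elastic moduli (MPa)
```

Each finite element then receives the modulus computed from the mean HU of the voxels it covers, which is how image intensity ends up driving the mechanical model.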
Affiliation(s)
- Andrea Cina
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Matteo Panico
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy; Department of Chemistry, Materials and Chemical Engineering "Giulio Natta", Politecnico di Milano, Milan, Italy
- Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy; Department of Biomedicine, Neuroscience and Advanced Diagnostics, Università degli Studi di Palermo, Palermo, Italy
- Carmelo Messina
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy; Department of Biomedical Sciences for Health, Università degli Studi di Milano, Milan, Italy
57
Song L, Zou H, Ji Z, Xie X, Li W. A novel iterative matching scheme based on homography method for X-ray image. J Mech Med Biol 2020. [DOI: 10.1142/s0219519420500384]
Abstract
Purpose: Anterior cervical decompression and fusion is a common surgical procedure. Traditionally, experienced doctors assess postoperative conditions by observing the tiny movements between the limited vertebral bodies on the X-ray films from patients' regular examinations, but this is not accurate. It may lead to diagnostic errors, serious deterioration of the condition, and secondary injury to the patient, and will also place a greater financial burden on them. Doctors need a quantitative standard for determining small motion from limited vertebral landmarks after surgery, and computer vision techniques are needed to match the over-extension and over-flexion cervical vertebral bodies and provide objective measurement data for further quantification of intervertebral activity. Building on a conventional scheme, the point mean square error is used as the evaluation criterion of the matching effect, and an iterative matching scheme is proposed to improve the stability of the original scheme. Cervical X-ray films of patients from the China-Japan Friendship Hospital were collected as samples to verify the reliability of the scheme. Compared with existing image matching schemes based on feature points, our scheme is superior in matching effect, matching speed, and stability. This scheme can provide a solid foundation for further assisting doctors in the study of rehabilitation after anterior cervical fusion.
Affiliation(s)
- Lujie Song
- Department of Orthopedics, China-Japan Friendship Hospital, Beijing 100029, P. R. China; College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, P. R. China
- Haibo Zou
- Department of Orthopedics, China-Japan Friendship Hospital, Beijing 100029, P. R. China
- Zhenyu Ji, Xiaoming Xie
- College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, P. R. China
- Wei Li
- School of Information and Electronics, Beijing Institute of Technology, Haidian District 100811, P. R. China
58
Surface Muscle Segmentation Using 3D U-Net Based on Selective Voxel Patch Generation in Whole-Body CT Images. Appl Sci (Basel) 2020. [DOI: 10.3390/app10134477]
Abstract
This study aimed to develop and validate an automated segmentation method for surface muscles using a three-dimensional (3D) U-Net based on selective voxel patches from whole-body computed tomography (CT) images. Our method defined a voxel patch (VP) as the input images, consisting of 56 slices selected at equal intervals from the whole stack. In training, one VP was used for each case. In testing, multiple VPs were created according to the number of slices in the test case; segmentation was performed for each VP and the results of all VPs were then merged. The proposed method achieved a mean Dice coefficient of 0.900 across 8 test cases. Although challenges remain in muscles adjacent to visceral organs and in small muscle areas, the VP approach is useful for surface muscle segmentation on whole-body CT images with limited annotation data. A limitation of our study is that it is restricted to cases of muscular disease with atrophy. Future studies should address whether the proposed method is effective for other modalities or for data with different imaging ranges.
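The voxel-patch construction described here, 56 slices selected at equal intervals from the whole stack, reduces to simple index selection. The stack size below is illustrative:

```python
import numpy as np

def select_slices(n_total, n_patch=56):
    """Indices of n_patch slices spaced (approximately) evenly
    across a stack of n_total slices, including both endpoints."""
    return np.linspace(0, n_total - 1, n_patch).round().astype(int)

idx = select_slices(300)  # e.g., a hypothetical 300-slice whole-body CT
```

At test time the same selection can be repeated with shifted starting offsets to cover every slice, after which the per-patch predictions are merged.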
59
Automatic Pancreas Segmentation Using Coarse-Scaled 2D Model of Deep Learning: Usefulness of Data Augmentation and Deep U-Net. Appl Sci (Basel) 2020. [DOI: 10.3390/app10103360]
Abstract
Combinations of data augmentation methods and deep learning architectures for automatic pancreas segmentation on CT images are proposed and evaluated. Images from a public CT dataset of pancreas segmentation were used to evaluate the models. Baseline U-Net and deep U-Net were chosen as the deep learning models for pancreas segmentation. Methods of data augmentation included conventional methods, mixup, and random image cropping and patching (RICAP). Ten combinations of the deep learning models and the data augmentation methods were evaluated. Four-fold cross-validation was performed to train and evaluate these models with the data augmentation methods. The Dice similarity coefficient (DSC) was calculated between automatic segmentation results and manually annotated labels, and the results were visually assessed by two radiologists. The performance of the deep U-Net was better than that of the baseline U-Net, with mean DSC of 0.703–0.789 and 0.686–0.748, respectively. In both the baseline U-Net and the deep U-Net, the methods with data augmentation performed better than those without, and mixup and RICAP were more useful than the conventional method. The best mean DSC was obtained using a combination of deep U-Net, mixup, and RICAP, and the two radiologists scored the results from this model as good or perfect in 76 and 74 of the 82 cases.
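Of the augmentation methods compared, mixup is the simplest to sketch: each training pair is a convex combination of two samples with a Beta-distributed weight, and for segmentation the labels are blended the same way as the images. The array shapes and alpha below are illustrative, not the paper's settings:

```python
import numpy as np

def mixup(img1, lbl1, img2, lbl2, alpha=0.2, rng=None):
    """Blend two image/label pairs with a single Beta(alpha, alpha) weight."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * img1 + (1 - lam) * img2, lam * lbl1 + (1 - lam) * lbl2

rng = np.random.default_rng(0)
x1, y1 = np.ones((8, 8)), np.ones((8, 8))    # toy image/label pair A
x2, y2 = np.zeros((8, 8)), np.zeros((8, 8))  # toy image/label pair B
xm, ym = mixup(x1, y1, x2, y2, rng=rng)      # constant arrays equal to lam
```

A small alpha pushes lam toward 0 or 1, so most mixed samples stay close to one of the originals while still interpolating the decision boundary between them.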