51. Godoy IRB, Silva RP, Rodrigues TC, Skaf AY, de Castro Pochini A, Yamada AF. Automatic MRI segmentation of pectoralis major muscle using deep learning. Sci Rep 2022;12:5300. [PMID: 35351924; PMCID: PMC8964724; DOI: 10.1038/s41598-022-09280-z]
Abstract
To develop and validate a deep convolutional neural network (CNN) method capable of selecting the axial slice with the greatest pectoralis major cross-sectional area (PMM-CSA) and automatically segmenting the pectoralis major muscle (PMM) on axial magnetic resonance imaging (MRI). We hypothesized that a CNN could perform both tasks accurately compared with manual reference standards. Our method comprises two steps: (A) a segmentation model and (B) PMM-CSA selection. In step A, we manually segmented the PMM on 134 axial T1-weighted PM MRIs. The segmentation model was trained from scratch (MONAI/PyTorch SegResNet; mini-batch size 4; 1000 epochs; dropout 0.20; Adam optimizer, learning rate 0.0005 with cosine annealing; softmax output). Segmentation performance was scored by mean Dice on 8 internal axial T1-weighted PM MRIs. In step B, we used the OpenCV framework (version 4.5.1, https://opencv.org) to calculate the PMM-CSA of the model predictions and the ground truth, then selected the three slices with the largest cross-sectional area and compared them with the ground truth. If any selected slice was among the ground-truth top 3, the selection was considered a success. This method was evaluated by top-3 accuracy on 8 internal axial T1-weighted PM MRI test cases. The segmentation model (step A) produced an accurate pectoralis muscle segmentation with a mean Dice score of 0.94 ± 0.01. Step B achieved top-3 accuracy > 98% in selecting an appropriate axial image with the greatest PMM-CSA. Our results show accurate selection of the PMM-CSA slice and automated PM muscle segmentation using a combination of deep CNN algorithms.
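Step B of the abstract (per-slice cross-sectional area, then top-3 selection and the success criterion) can be sketched in a few lines. This is an illustration using NumPy rather than the authors' OpenCV pipeline; the mask layout and pixel spacing below are assumptions, not values from the paper.

```python
import numpy as np

def top3_largest_csa(masks, spacing_mm=(0.5, 0.5)):
    """Indices of the 3 axial slices with the largest segmented
    cross-sectional area. `masks` is a (n_slices, H, W) binary
    array; `spacing_mm` is a hypothetical in-plane pixel spacing."""
    pixel_area = spacing_mm[0] * spacing_mm[1]  # mm^2 per pixel
    areas = masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area
    return np.argsort(areas)[::-1][:3]

def top3_success(pred_masks, gt_masks):
    """Abstract's criterion: success if any predicted top-3 slice
    also appears in the ground-truth top-3."""
    pred = set(top3_largest_csa(pred_masks).tolist())
    gt = set(top3_largest_csa(gt_masks).tolist())
    return len(pred & gt) > 0
```

Top-3 accuracy is then simply the fraction of test cases for which `top3_success` returns true.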
Affiliation(s)
- Ivan Rodrigues Barros Godoy
- Department of Radiology, Hospital do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil; Department of Diagnostic Imaging, Universidade Federal de São Paulo (UNIFESP), Rua Napoleão de Barros, 800, São Paulo, SP, 04024-002, Brazil
- Abdalla Youssef Skaf
- Department of Radiology, Hospital do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil; ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
- Alberto de Castro Pochini
- Department of Orthopedics and Traumatology, Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil
- André Fukunishi Yamada
- Department of Radiology, Hospital do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil; Department of Diagnostic Imaging, Universidade Federal de São Paulo (UNIFESP), Rua Napoleão de Barros, 800, São Paulo, SP, 04024-002, Brazil; ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
52. Teoh YX, Lai KW, Usman J, Goh SL, Mohafez H, Hasikin K, Qian P, Jiang Y, Zhang Y, Dhanalakshmi S. Discovering Knee Osteoarthritis Imaging Features for Diagnosis and Prognosis: Review of Manual Imaging Grading and Machine Learning Approaches. J Healthc Eng 2022;2022:4138666. [PMID: 35222885; PMCID: PMC8881170; DOI: 10.1155/2022/4138666]
Abstract
Knee osteoarthritis (OA) is a debilitating joint disorder characterized by cartilage loss that can be captured by imaging modalities and translated into imaging features. Observing imaging features is a well-established objective assessment for knee OA, yet the variety of imaging features is rarely discussed. This study reviews knee OA imaging features with respect to different imaging modalities for traditional OA diagnosis and updates recent image-based machine learning approaches for knee OA diagnosis and prognosis. Although most studies recognize X-ray as the standard imaging option for knee OA diagnosis, its imaging features are limited to bony changes and are less sensitive to short-term OA changes. Researchers have recommended MRI to study hidden OA-related radiomic features in soft tissues and bony structures. Furthermore, ultrasound imaging features should be explored to make point-of-care diagnosis more feasible. Traditional knee OA diagnosis relies mainly on manual interpretation of medical images based on the Kellgren-Lawrence (KL) grading scheme, but this approach is constrained by human resources and time and is less effective for OA prevention. Recent studies revealed the capability of machine learning to automate knee OA diagnosis and prognosis through three major tasks: knee joint localization (detection and segmentation), classification of OA severity, and prediction of disease progression. AI-aided diagnostic models significantly improved the quality of knee OA diagnosis in terms of time taken, reproducibility, and accuracy. Prognostic ability was demonstrated by several prediction models estimating possible OA onset, OA deterioration, progressive pain, progressive structural change, progressive structural change with pain, and time to total knee replacement (TKR). Despite research gaps, machine learning techniques still show huge potential for demanding tasks such as early knee OA detection and estimation of future disease events, as well as fundamental tasks such as discovering new imaging features and establishing novel OA status measures. Continuous machine learning model enhancement may favour the discovery of new OA treatments in future.
Affiliation(s)
- Yun Xin Teoh
- Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Juliana Usman
- Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Siew Li Goh
- Faculty of Medicine, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Hamidreza Mohafez
- Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Khairunnisa Hasikin
- Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Pengjiang Qian
- School of Artificial Intelligence and Computer Sciences, Jiangnan University, Wuxi 214122, China
- Yizhang Jiang
- School of Artificial Intelligence and Computer Sciences, Jiangnan University, Wuxi 214122, China
- Yuanpeng Zhang
- Department of Medical Informatics of Medical (Nursing) School, Nantong University, Nantong 226001, China
- Samiappan Dhanalakshmi
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur 603203, India
53. Awan MJ, Rahim MSM, Salim N, Rehman A, Garcia-Zapirain B. Automated Knee MR Images Segmentation of Anterior Cruciate Ligament Tears. Sensors 2022;22:1552. [PMID: 35214451; PMCID: PMC8876207; DOI: 10.3390/s22041552]
Abstract
The anterior cruciate ligament (ACL) is one of the main stabilizers of the knee, and ACL injury increases the risk of osteoarthritis. ACL rupture is common in the young athletic population. Accurate segmentation at an early stage can improve the analysis and classification of ACL tears. This study automatically segmented ACL tears from magnetic resonance imaging through deep learning. A knee mask was generated on the original magnetic resonance (MR) images to apply a semantic segmentation technique with the convolutional neural network architecture U-Net. The proposed segmentation method achieved accuracy, intersection over union (IoU), Dice similarity coefficient (DSC), precision, recall, and F1-score of 98.4%, 99.0%, 99.4%, 99.6%, 99.6%, and 99.6% on 11,451 training images, and 97.7%, 93.8%, 96.8%, 96.5%, 97.3%, and 96.9%, respectively, on 3,817 validation images. The Dice loss on the training and test datasets remained at 0.005 and 0.031, respectively. The experimental results show that ACL segmentation on JPEG MR images with U-Net achieves accuracy that outperforms human segmentation. The strategy has promising applications in medical image analytics for the segmentation of knee ACL tears on MR images.
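The overlap metrics reported above reduce to simple pixel counts on binary masks. A minimal NumPy sketch (function name and mask shapes are illustrative, not from the paper):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice similarity coefficient and intersection-over-union
    between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2.0 * inter / total if total else 1.0  # DSC = 2|A∩B| / (|A|+|B|)
    iou = inter / union if union else 1.0         # IoU = |A∩B| / |A∪B|
    return float(dice), float(iou)
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why the two values reported in such studies track each other closely.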
Affiliation(s)
- Mazhar Javed Awan
- Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
- Correspondence: M.J.A.; B.G.-Z.
- Mohd Shafry Mohd Rahim
- Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
- Naomie Salim
- Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
- Amjad Rehman
- Artificial Intelligence and Data Analytics Laboratory, College of Computer and Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Saudi Arabia
54. AI musculoskeletal clinical applications: how can AI increase my day-to-day efficiency? Skeletal Radiol 2022;51:293-304. [PMID: 34341865; DOI: 10.1007/s00256-021-03876-8]
Abstract
Artificial intelligence (AI) is expected to bring greater efficiency to radiology by performing tasks that would otherwise require human intelligence, often at a much faster rate than human performance. In recent years, milestone deep learning models with unprecedentedly low error rates and high computational efficiency have shown remarkable performance in lesion detection, classification, and segmentation tasks. However, the growing field of AI has significant implications for radiology that are not limited to visual tasks; these include essential applications for optimizing imaging workflows and improving noninterpretive tasks. This article offers an overview of the recent literature on AI across the musculoskeletal imaging chain, including initial patient scheduling, optimized protocoling, magnetic resonance imaging reconstruction, image enhancement, medical image-to-image translation, and AI-aided image interpretation. The substantial development of advanced algorithms, the emergence of massive quantities of medical data, and the interest of researchers and clinicians reveal the potential for growing applications of AI to augment the day-to-day efficiency of musculoskeletal radiologists.
55. Nozawa M, Ito H, Ariji Y, Fukuda M, Igarashi C, Nishiyama M, Ogi N, Katsumata A, Kobayashi K, Ariji E. Automatic segmentation of the temporomandibular joint disc on magnetic resonance images using a deep learning technique. Dentomaxillofac Radiol 2022;51:20210185. [PMID: 34347537; PMCID: PMC8693319; DOI: 10.1259/dmfr.20210185]
Abstract
OBJECTIVES: To construct a deep learning model for automatic segmentation of the temporomandibular joint (TMJ) disc on magnetic resonance (MR) images, and to evaluate its performance on internal and external test data. METHODS: In total, 1200 MR images of closed and open mouth positions in patients with temporomandibular disorder (TMD) were collected from two hospitals (Hospitals A and B). The training and validation data comprised 1000 images from Hospital A, which were used to create the segmentation model. Performance was evaluated using 200 images from Hospital A (internal validity test) and 200 images from Hospital B (external validity test). RESULTS: Although recall (sensitivity) was lower on the Hospital B data than on the Hospital A data, both performances were above 80%. Precision (positive predictive value) on the Hospital A test data was lower for the position of anterior disc displacement. According to the intra-articular TMD classification, the proportion of accurately assigned TMJs was higher with images from Hospital A than with images from Hospital B. CONCLUSION: The segmentation deep learning model created in this study may be useful for identifying disc positions on MR images.
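The two metrics compared between the internal and external tests, recall (sensitivity) and precision (positive predictive value), can be computed pixelwise from a predicted and a reference disc mask. A minimal illustrative sketch (names and shapes are assumptions, not the study's pipeline):

```python
import numpy as np

def precision_recall(pred, gt):
    """Pixelwise precision (positive predictive value) and recall
    (sensitivity) between a predicted and a reference binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # true-positive pixels
    precision = tp / pred.sum() if pred.any() else 0.0
    recall = tp / gt.sum() if gt.any() else 0.0
    return float(precision), float(recall)
```

A drop in recall on external data with preserved precision, as reported here, typically means the model under-segments unfamiliar acquisitions rather than hallucinating false disc tissue.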
Affiliation(s)
- Hirokazu Ito
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
- Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Chinami Igarashi
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
- Masako Nishiyama
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Nobumi Ogi
- Department of Oral and Maxillofacial Surgery, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Akitoshi Katsumata
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
- Kaoru Kobayashi
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
56. Flannery SW, Kiapour AM, Edgar DJ, Murray MM, Beveridge JE, Fleming BC. A transfer learning approach for automatic segmentation of the surgically treated anterior cruciate ligament. J Orthop Res 2022;40:277-284. [PMID: 33458865; PMCID: PMC8285460; DOI: 10.1002/jor.24984]
Abstract
Quantitative magnetic resonance imaging enables quantitative assessment of the healing anterior cruciate ligament (ACL) or graft post-surgery, but its use is constrained by the need for time-consuming manual image segmentation. The goal of this study was to validate a deep learning model for automatic segmentation of repaired and reconstructed anterior cruciate ligaments. We hypothesized that (1) a deep learning model would segment repaired ligaments and grafts with anatomical similarity comparable to intact ligaments, and (2) automatically derived quantitative features (i.e., signal intensity and volume) would not differ significantly from those obtained by manual segmentation. Constructive Interference in Steady State sequences were acquired for ACL repairs (n = 238) and grafts (n = 120). A previously validated model for intact ACLs was retrained on both surgical groups using transfer learning. Anatomical performance was measured with the Dice coefficient, sensitivity, and precision. Quantitative features were compared to ground-truth manual segmentation. Automatic segmentation of both surgical groups showed decreased anatomical performance compared to automatic segmentation of intact ACLs (repairs/grafts: Dice coefficient = .80/.78, precision = .79/.78, sensitivity = .82/.80), but neither decrease was statistically significant (Kruskal-Wallis: Dice coefficient p = .02, precision p = .09, sensitivity p = .17; Dunn post hoc test for Dice coefficient: repairs/grafts p = .054/.051). There were no significant differences in quantitative features between ground-truth and automatic segmentation of repairs/grafts (0.82%/2.7% signal intensity difference, p = .57/.26; 1.7%/2.7% volume difference, p = .68/.72). The anatomical similarity performance and the statistical similarity of quantitative features support the use of this automated segmentation model in quantitative magnetic resonance imaging pipelines, which will accelerate research and provide a step toward clinical applicability.
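The quantitative features the study compares between manual and automatic segmentation (signal intensity and volume) can be derived directly from an image volume and its binary mask. A sketch with a hypothetical voxel volume; the function name and default are illustrative, not the study's actual pipeline:

```python
import numpy as np

def acl_quantitative_features(image, mask, voxel_vol_mm3=0.3):
    """Mean signal intensity inside the segmented ACL and the
    segmented volume. `voxel_vol_mm3` (product of voxel spacings)
    is a hypothetical value, not the study's acquisition parameter."""
    roi = mask.astype(bool)
    voxels = image[roi]                     # intensities inside the mask
    return {
        "signal_intensity": float(voxels.mean()),
        "volume_mm3": float(roi.sum() * voxel_vol_mm3),
    }
```

Because these features are averages and sums over the whole mask, they are fairly robust to small boundary errors, which is consistent with the study finding no significant feature differences despite imperfect Dice overlap.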
Affiliation(s)
- Sean W. Flannery
- Department of Orthopaedics, Warren Alpert Medical School of Brown University/Rhode Island Hospital, Providence, RI, USA
- Ata M. Kiapour
- Division of Sports Medicine, Department of Orthopaedic Surgery, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- David J. Edgar
- Department of Orthopaedics, Warren Alpert Medical School of Brown University/Rhode Island Hospital, Providence, RI, USA
- Martha M. Murray
- Division of Sports Medicine, Department of Orthopaedic Surgery, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Jillian E. Beveridge
- Department of Orthopaedics, Warren Alpert Medical School of Brown University/Rhode Island Hospital, Providence, RI, USA; Department of Biomedical Engineering, Cleveland Clinic, Cleveland, OH, USA
- Braden C. Fleming
- Department of Orthopaedics, Warren Alpert Medical School of Brown University/Rhode Island Hospital, Providence, RI, USA
57. Perslev M, Pai A, Runhaar J, Igel C, Dam EB. Cross-Cohort Automatic Knee MRI Segmentation With Multi-Planar U-Nets. J Magn Reson Imaging 2021;55:1650-1663. [PMID: 34918423; PMCID: PMC9106804; DOI: 10.1002/jmri.27978]
Abstract
Background: Segmentation of medical image volumes is a time-consuming manual task. Automatic tools are often tailored toward specific patient cohorts, and it is unclear how they behave in other clinical settings. Purpose: To evaluate the performance of the open-source Multi-Planar U-Net (MPUnet), the validated Knee Imaging Quantification (KIQ) framework, and a state-of-the-art two-dimensional (2D) U-Net architecture on three clinical cohorts without extensive adaptation of the algorithms. Study Type: Retrospective cohort study. Subjects: A total of 253 subjects (146 females, 107 males, ages 57 ± 12 years) from three knee osteoarthritis (OA) studies (Center for Clinical and Basic Research [CCBR], Osteoarthritis Initiative [OAI], and Prevention of OA in Overweight Females [PROOF]) with varying demographics and OA severity (64/37/24/53/2 scans of Kellgren and Lawrence [KL] grades 0-4). Field Strength/Sequence: 0.18 T, 1.0 T/1.5 T, and 3 T sagittal three-dimensional fast-spin-echo T1w and dual-echo steady-state sequences. Assessment: All models were fit without tuning to knee magnetic resonance imaging (MRI) scans with manual segmentations from the three clinical cohorts, and all models were evaluated across KL grades. Statistical Tests: Segmentation performance differences as measured by Dice coefficients were tested with paired, two-sided Wilcoxon signed-rank statistics with significance threshold α = 0.05. Results: The MPUnet performed superior or equal to KIQ and 2D U-Net on all compartments across the three cohorts. Mean Dice overlap was significantly higher for MPUnet than for KIQ and U-Net on CCBR (0.83 ± 0.04 vs. 0.81 ± 0.06 and 0.82 ± 0.05), significantly higher than KIQ and U-Net on OAI (0.86 ± 0.03 vs. 0.84 ± 0.04 and 0.85 ± 0.03), and not significantly different from KIQ while significantly higher than 2D U-Net on PROOF (0.78 ± 0.07 vs. 0.77 ± 0.07, P = 0.10, and 0.73 ± 0.07). The MPUnet also performed significantly better on N = 22 KL grade 3 CCBR scans, with 0.78 ± 0.06 vs. 0.75 ± 0.08 for KIQ and 0.76 ± 0.06 for 2D U-Net. Data Conclusion: The MPUnet matched or exceeded the performance of state-of-the-art knee MRI segmentation models across cohorts with variable sequences and patient demographics. The MPUnet required no manual tuning, making it both accurate and easy to use. Level of Evidence: 3. Technical Efficacy: Stage 2.
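The core multi-planar idea can be sketched as fusing per-plane softmax probability volumes, resampled to a common grid, by a weighted average before taking the argmax. This is a simplification of the published MPUnet fusion (which learns the per-class fusion weights); the names, shapes, and equal default weights below are assumptions:

```python
import numpy as np

def fuse_multiplanar(prob_maps, weights=None):
    """Fuse per-view class-probability volumes into one label map.

    prob_maps: list of (C, D, H, W) softmax volumes, one per viewing
    plane, already resampled to a common grid. Returns (D, H, W) labels."""
    prob_maps = np.stack(prob_maps)                  # (views, C, D, H, W)
    if weights is None:                              # equal weights by default
        weights = np.ones(len(prob_maps)) / len(prob_maps)
    fused = np.tensordot(weights, prob_maps, axes=1)  # (C, D, H, W)
    return fused.argmax(axis=0)
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident view outweigh an uncertain one, which is one reason multi-planar ensembles transfer well across cohorts.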
Affiliation(s)
- Mathias Perslev
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Akshay Pai
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark; Cerebriu A/S, Copenhagen, Denmark
- Christian Igel
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Erik B Dam
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark; Cerebriu A/S, Copenhagen, Denmark
58. Latif MHA, Faye I. Automated tibiofemoral joint segmentation based on deeply supervised 2D-3D ensemble U-Net: Data from the Osteoarthritis Initiative. Artif Intell Med 2021;122:102213. [PMID: 34823835; DOI: 10.1016/j.artmed.2021.102213]
Abstract
Improving longevity is one of humanity's greatest achievements. As a consequence, the population is growing older and the prevalence of knee osteoarthritis (OA) is on the rise. Nonetheless, the understanding and investigation of potential precursors of knee OA have been impeded by time-consuming, laborious manual delineation processes that are prone to poor reproducibility. A method for automatic segmentation of the tibiofemoral joint using magnetic resonance imaging (MRI) is presented in this work. The proposed method utilizes a deeply supervised 2D-3D ensemble U-Net, which combines foreground class oversampling, deep supervision loss branches, and Gaussian-weighted softmax score aggregation. It was designed, optimized, and tested on 507 3D double-echo steady-state (DESS) MR volumes using two-fold cross-validation. State-of-the-art segmentation accuracy, measured as the Dice similarity coefficient (DSC), was achieved for the femur bone (98.6 ± 0.27%), tibia bone (98.8 ± 0.31%), femoral cartilage (90.3 ± 2.89%), and tibial cartilage (86.7 ± 4.07%). Notably, the proposed method yields sub-voxel accuracy, with an average symmetric surface distance (ASD) of less than 0.36 mm. Model performance is not affected by the severity of radiographic osteoarthritis (rOA) grade or the presence of pathophysiological changes. The proposed method offers accurate segmentation with high time efficiency (~62 s per 3D volume), well suited for efficient processing and analysis of the large prospective cohorts of the Osteoarthritis Initiative (OAI).
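"Gaussian-weighted softmax score aggregation" generally refers to blending overlapping patch predictions with a centre-peaked weight map, so that voxels near a patch border (where context is weakest) contribute less to the final score. A generic sketch of such a weight map; the `sigma_frac` choice is an assumption, not the paper's value:

```python
import numpy as np

def gaussian_patch_weight(shape, sigma_frac=0.125):
    """Centre-peaked weight map for blending overlapping patch
    predictions: the patch centre gets weight 1, borders get
    exponentially less."""
    grids = np.meshgrid(*[np.arange(s) - (s - 1) / 2 for s in shape],
                        indexing="ij")
    d2 = sum(g ** 2 for g in grids)          # squared distance from centre
    sigma2 = (sigma_frac * max(shape)) ** 2  # illustrative bandwidth
    return np.exp(-d2 / (2 * sigma2))
```

Aggregation then accumulates weight × softmax score per voxel across all overlapping patches and divides by the accumulated weight, giving a seam-free probability volume.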
Affiliation(s)
- Muhamad Hafiz Abd Latif
- Centre for Intelligent Signal and Imaging Research, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia; Electrical & Electronic Engineering Department, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
- Ibrahima Faye
- Centre for Intelligent Signal and Imaging Research, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia; Fundamental & Applied Sciences Department, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
59. Emergence of Deep Learning in Knee Osteoarthritis Diagnosis. Comput Intell Neurosci 2021;2021:4931437. [PMID: 34804143; PMCID: PMC8598325; DOI: 10.1155/2021/4931437]
Abstract
Osteoarthritis (OA), especially knee OA, is the most common form of arthritis, causing significant disability in patients worldwide. Manual diagnosis, segmentation, and annotation of knee joints remain the most common methods of diagnosing OA in clinical practice, although they are tedious and highly subject to user variation. Therefore, to overcome these limitations, numerous deep learning approaches, especially convolutional neural networks (CNNs), have been developed to improve clinical workflow efficiency. Medical imaging processes, especially those producing 3-dimensional (3D) images such as MRI, can reveal hidden structures in a volumetric view. Acknowledging that change in a knee joint is a 3D complexity, 3D CNNs have been employed in recent years to analyse the joint for more accurate diagnosis. In this review, we provide a broad overview of current 2D and 3D CNN approaches in the OA research field. We reviewed 74 studies related to classification and segmentation of knee osteoarthritis from the Web of Science database and discuss the various state-of-the-art deep learning approaches proposed. We highlight the potential of 3D CNNs in the knee osteoarthritis field and conclude by discussing the challenges faced as well as potential advancements in adopting 3D CNNs in this field.
60. Chalian M, Li X, Guermazi A, Obuchowski NA, Carrino JA, Oei EH, Link TM. The QIBA Profile for MRI-based Compositional Imaging of Knee Cartilage. Radiology 2021;301:423-432. [PMID: 34491127; PMCID: PMC8574057; DOI: 10.1148/radiol.2021204587]
Abstract
MRI-based cartilage compositional analysis shows biochemical and microstructural changes at early stages of osteoarthritis, before changes become visible with structural MRI sequences and arthroscopy. This could help with early diagnosis, risk assessment, and treatment monitoring of osteoarthritis. The spin-lattice relaxation time constant in the rotating frame (T1ρ) and T2 mapping are the MRI techniques best established for assessing cartilage composition. Only T2 mapping is currently commercially available; it is sensitive to water, collagen content, and the orientation of collagen fibers, whereas T1ρ is more sensitive to proteoglycan content. Clinical application of cartilage compositional imaging is limited by high variability and suboptimal reproducibility of the biomarkers, which motivated the creation of the Quantitative Imaging Biomarkers Alliance (QIBA) Profile for cartilage compositional imaging by the QIBA Musculoskeletal Biomarkers Committee. The profile aims to provide recommendations that improve reproducibility and standardize cartilage compositional imaging. The QIBA Profile makes two complementary claims (summary statements of the technical performance of the quantitative imaging biomarkers being profiled) regarding the reproducibility of the biomarkers. First, cartilage T1ρ and T2 values are measurable at 3.0-T MRI with a within-subject coefficient of variation of 4%-5%. Second, a measured increase or decrease in T1ρ or T2 of 14% or more indicates a minimum detectable change with 95% confidence. If only an increase in T1ρ and T2 values is expected (progressive cartilage degeneration), then an increase of 12% represents a minimum detectable change over time. The QIBA Profile provides recommendations for clinical researchers, clinicians, and industry scientists pertaining to image data acquisition, analysis, interpretation, assessment procedures, and test-retest conformance for T1ρ and T2 cartilage imaging. This special report aims to provide the rationale for the proposed claims, explain the content of the QIBA Profile, and highlight future needs and developments for MRI-based cartilage compositional imaging for risk prediction, early diagnosis, and treatment monitoring of osteoarthritis.
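The two claims are linked by the standard repeatability formula MDC = z · √2 · wCV: with wCV = 5%, the two-sided threshold (z = 1.96) gives ~13.9%, matching the stated 14%, and the one-sided threshold (z = 1.645) gives ~11.6%, matching the stated 12%. A minimal sketch of this arithmetic (the function name is illustrative):

```python
import math

def minimum_detectable_change(wcv, two_sided=True):
    """Smallest test-retest change distinguishable from measurement
    noise at 95% confidence, for a biomarker with within-subject
    coefficient of variation `wcv`: MDC = z * sqrt(2) * wCV."""
    z = 1.96 if two_sided else 1.645  # 95% two- vs. one-sided
    return z * math.sqrt(2) * wcv
```

The √2 factor appears because a change score is the difference of two noisy measurements, so its variance is twice the single-measurement variance.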
Collapse
Affiliation(s)
- Majid Chalian
- From the Department of Radiology, Division of Musculoskeletal Imaging
and Intervention, University of Washington, UW Radiology–Roosevelt
Clinic, 4245 Roosevelt Way NE, Box 354755, Seattle, WA 98105 (M.C.); Department
of Biomedical Engineering, Program of Advanced Musculoskeletal Imaging (PAMI)
(X.L.), and Department of Biostatistics (N.A.O.), Cleveland Clinic, Cleveland,
Ohio; Department of Radiology, Boston University School of Medicine, Boston,
Mass (A.G.); Department of Radiology and Imaging, Hospital for Special Surgery,
New York, NY (J.A.C.); Department of Radiology & Nuclear Medicine,
Erasmus MC University Medical Center, Rotterdam, the Netherlands (E.H.O.);
European Imaging Biomarkers Alliance (E.H.O.); and Department of Radiology and
Biomedical Imaging, University of California, San Francisco, Calif
(T.M.L.)
| | - Xiaojuan Li
- From the Department of Radiology, Division of Musculoskeletal Imaging
and Intervention, University of Washington, UW Radiology–Roosevelt
Clinic, 4245 Roosevelt Way NE, Box 354755, Seattle, WA 98105 (M.C.); Department
of Biomedical Engineering, Program of Advanced Musculoskeletal Imaging (PAMI)
(X.L.), and Department of Biostatistics (N.A.O.), Cleveland Clinic, Cleveland,
Ohio; Department of Radiology, Boston University School of Medicine, Boston,
Mass (A.G.); Department of Radiology and Imaging, Hospital for Special Surgery,
New York, NY (J.A.C.); Department of Radiology & Nuclear Medicine,
Erasmus MC University Medical Center, Rotterdam, the Netherlands (E.H.O.);
European Imaging Biomarkers Alliance (E.H.O.); and Department of Radiology and
Biomedical Imaging, University of California, San Francisco, Calif
(T.M.L.)
| | - Ali Guermazi
| | - Nancy A. Obuchowski
| | - John A. Carrino
| | - Edwin H. Oei
| | - Thomas M. Link
| | - for the RSNA QIBA MSK Biomarker Committee
| |
61
More S, Singla J. Discrete-MultiResUNet: Segmentation and feature extraction model for knee MR images. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-211459]
Abstract
Deep learning has shown outstanding efficiency in medical image segmentation. Segmentation of knee tissues, together with the selection of discriminative features, is an important task for early diagnosis of rheumatoid arthritis (RA). Automated segmentation and feature extraction of knee tissues are desirable for faster, more reliable analysis of large datasets and for further diagnosis. This paper applies a novel architecture, Discrete-MultiResUNet, which combines the discrete wavelet transform (DWT) with the MultiResUNet architecture for feature extraction and segmentation, respectively. This hybrid architecture efficiently captures prominent features from knee magnetic resonance images while segmenting the vital knee tissues. Evaluated on a knee MR dataset, the hybrid model outperforms baseline models, achieving a segmentation accuracy of 96.77% and a Dice coefficient of 98%.
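The segmentation metrics quoted above (accuracy, Dice coefficient) are standard overlap measures. A minimal sketch of the Dice similarity coefficient for binary masks, assuming NumPy; this is illustrative only, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks: predicted mask has 4 foreground pixels, truth has 3,
# and they overlap on 3 pixels -> Dice = 2*3 / (4+3).
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))
```

A Dice of 0.98, as reported above, corresponds to near-perfect overlap between automated and manual masks.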
Affiliation(s)
- Sujeet More
- School of Computer Science and Engineering, Lovely Professional University, Jalandhar, India
| | - Jimmy Singla
- School of Computer Science and Engineering, Lovely Professional University, Jalandhar, India
62
Jeon U, Kim H, Hong H, Wang J. Automatic Meniscus Segmentation Using Adversarial Learning-Based Segmentation Network with Object-Aware Map in Knee MR Images. Diagnostics (Basel) 2021; 11:1612. [PMID: 34573953 PMCID: PMC8472118 DOI: 10.3390/diagnostics11091612]
Abstract
Meniscus segmentation from knee MR images is an essential step when analyzing the length, width, height, cross-sectional area, and surface area of the meniscus for allograft transplantation using a 3D reconstruction model based on the patient's normal meniscus. In this paper, we propose a two-stage DCNN that combines a 2D U-Net-based meniscus localization network with a conditional generative adversarial network-based segmentation network using an object-aware map. First, the 2D U-Net segments whole knee MR images at a resolution of 512 × 512 into six classes, including bone and cartilage, to localize the medial and lateral meniscus. Second, adversarial learning, with a generator based on the 2D U-Net and a discriminator based on a 2D DCNN using an object-aware map, segments the meniscus within localized regions of interest at a resolution of 64 × 64. The average Dice similarity coefficient was 85.18% at the medial meniscus and 84.33% at the lateral meniscus; these values were 10.79 and 7.78 percentage points higher, respectively, than those of the segmentation method without adversarial learning, and 1.14 and 1.12 percentage points higher, respectively, than those of the method without the object-aware map. The proposed multi-class meniscus localization prevents the class-imbalance problem by focusing on local regions. The proposed adversarial learning with an object-aware map prevents under-segmentation, by repeatedly judging and improving the segmentation results, and over-segmentation, by considering information only from the meniscus regions. Our method can be used to identify and analyze the shape of the meniscus for allograft transplantation using a 3D reconstruction model of the patient's unruptured meniscus.
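The two-stage pipeline above first localizes the menisci with a coarse multi-class segmentation, then segments small regions of interest. A simplified sketch of the localization-to-ROI cropping step (the helper name and centroid-based cropping are assumptions for illustration, not the authors' code):

```python
import numpy as np

def crop_roi(image, mask, size=64):
    """Crop a size x size ROI centered on the centroid of a coarse
    localization mask, clamped so the window stays inside the image."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    half = size // 2
    cy = min(max(cy, half), image.shape[0] - half)
    cx = min(max(cx, half), image.shape[1] - half)
    return image[cy - half:cy + half, cx - half:cx + half]

# Toy 512x512 slice with a coarse meniscus blob near (300, 200).
img = np.zeros((512, 512), dtype=np.float32)
coarse = np.zeros((512, 512), dtype=np.uint8)
coarse[295:305, 195:205] = 1
roi = crop_roi(img, coarse, size=64)
print(roi.shape)  # (64, 64)
```

Restricting the fine segmentation network to such 64 × 64 crops is what lets it focus on local regions and avoid the class-imbalance problem mentioned above.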
Affiliation(s)
- Uju Jeon
- Department of Software Convergence, Seoul Women’s University, Seoul 01797, Korea; (U.J.); (H.K.)
| | - Hyeonjin Kim
- Department of Software Convergence, Seoul Women’s University, Seoul 01797, Korea; (U.J.); (H.K.)
| | - Helen Hong
- Department of Software Convergence, Seoul Women’s University, Seoul 01797, Korea; (U.J.); (H.K.)
- Correspondence:
| | - Joonho Wang
- Department of Orthopedic Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea;
63
Nobashi T, Zacharias C, Ellis JK, Ferri V, Koran ME, Franc BL, Iagaru A, Davidzon GA. Performance Comparison of Individual and Ensemble CNN Models for the Classification of Brain 18F-FDG-PET Scans. J Digit Imaging 2021; 33:447-455. [PMID: 31659587 DOI: 10.1007/s10278-019-00289-x]
Abstract
The high background glucose metabolism of normal gray matter on [18F]-fluoro-2-D-deoxyglucose (FDG) positron emission tomography (PET) of the brain results in a low signal-to-background ratio, potentially increasing the possibility of missing important findings in patients with intracranial malignancies. To explore the strategy of using a deep learning classifier to aid in distinguishing normal from abnormal findings on PET brain images, this study evaluated the performance of a two-dimensional convolutional neural network (2D-CNN) in classifying FDG PET brain scans as normal (N) or abnormal (A). METHODS Two hundred eighty-nine brain FDG-PET scans (N, n = 150; A, n = 139), comprising a total of 68,260 images, were included. Nine individual 2D-CNN models with three different window settings for the axial, coronal, and sagittal axes were trained and validated. The performance of these individual and ensemble models was evaluated and compared using a test dataset. Odds ratio, Akaike's information criterion (AIC), area under the receiver operating characteristic curve (AUC), accuracy, and standard deviation (SD) were calculated. RESULTS The optimal window setting for classifying normal and abnormal scans differed for each axis of the individual models. An ensemble model using different axes with an optimized window setting (window-triad) performed better than ensemble models using the same axis with different window settings (axis-triad). Increases in odds ratio and decreases in SD were observed in both axis-triad and window-triad models compared with individual models, whereas improvements in AUC and AIC were seen in window-triad models. An overall model averaging the probabilities of all individual models showed the best accuracy of 82.0%. CONCLUSIONS Data ensembling using different window settings and axes was effective in improving 2D-CNN performance for the classification of brain FDG-PET scans. If prospectively validated with a larger cohort of patients, similar models could provide decision support in a clinical setting.
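The overall model described above averages the probabilities of the individual CNNs. A minimal sketch of such probability-averaging ensembling (the probabilities and threshold are hypothetical, not the study's values):

```python
import numpy as np

def ensemble_classify(probabilities, threshold=0.5):
    """Average per-model abnormality probabilities and threshold the
    mean to obtain a final normal/abnormal label."""
    mean_p = float(np.mean(probabilities))
    label = "abnormal" if mean_p >= threshold else "normal"
    return label, mean_p

# Hypothetical outputs of nine single-axis/window CNNs for one scan.
model_probs = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.73, 0.51, 0.64]
label, p = ensemble_classify(model_probs)
print(label, round(p, 3))
```

Averaging smooths out disagreements among the per-axis, per-window models, which is consistent with the reduced SD the abstract reports for the ensemble variants.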
Affiliation(s)
- Tomomi Nobashi
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Drive, Office H2228, Stanford, CA, 94305, USA
| | - Claudia Zacharias
- Clinic for Nuclear Medicine, University Hospital Essen, Essen, Germany
| | - Jason K Ellis
- DimensionalMechanics Inc., 2821 Northup Way, Suite #200, Bellevue, WA, USA
| | - Valentina Ferri
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Drive, Office H2228, Stanford, CA, 94305, USA
| | - Mary Ellen Koran
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Drive, Office H2228, Stanford, CA, 94305, USA
| | - Benjamin L Franc
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Drive, Office H2228, Stanford, CA, 94305, USA
| | - Andrei Iagaru
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Drive, Office H2228, Stanford, CA, 94305, USA
| | - Guido A Davidzon
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Drive, Office H2228, Stanford, CA, 94305, USA.
64
Zijlstra F, Seevinck PR. Multiple-echo steady-state (MESS): Extending DESS for joint T2 mapping and chemical-shift corrected water-fat separation. Magn Reson Med 2021; 86:3156-3165. [PMID: 34270127 PMCID: PMC8596862 DOI: 10.1002/mrm.28921]
Abstract
Purpose To extend the double echo steady‐state (DESS) sequence to enable chemical‐shift corrected water‐fat separation. Methods This study proposes multiple‐echo steady‐state (MESS), a sequence that modifies the readouts of the DESS sequence to acquire two echoes each, using bipolar readout gradients with a higher readout bandwidth. This enables water‐fat separation and eliminates the need for the water‐selective excitation that is often used in combination with DESS, without increasing scan time. An iterative fitting approach was used to perform joint chemical‐shift corrected water‐fat separation and T2 estimation on all four MESS echoes simultaneously. MESS and water‐selective DESS images were acquired for five volunteers and compared qualitatively as well as quantitatively on cartilage T2 and thickness measurements. Signal‐to‐noise ratio (SNR) and T2 quantification were evaluated numerically using pseudo‐replications of the acquisition. Results The water‐fat separation provided by MESS was robust, with quality comparable to water‐selective DESS. MESS T2 estimation was similar to DESS, albeit with slightly higher variability. Noise analysis showed that SNR in MESS was comparable to DESS on average, but exhibited local variations caused by uncertainty in the water‐fat separation. Conclusion In the same acquisition time as DESS, MESS provides water‐fat separation with comparable SNR in the reconstructed water and fat images. By providing additional image contrasts beyond the water‐selective DESS images, MESS offers a promising alternative to DESS.
Affiliation(s)
- Frank Zijlstra
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Radiology and Nuclear Medicine, St Olav's University Hospital, Trondheim, Norway
| | - Peter R Seevinck
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands; MRIGuidance BV, Utrecht, The Netherlands
65
Automatic knee cartilage and bone segmentation using multi-stage convolutional neural networks: data from the osteoarthritis initiative. Magnetic Resonance Materials in Physics, Biology and Medicine 2021; 34:859-875. [PMID: 34101071 DOI: 10.1007/s10334-021-00934-z]
Abstract
OBJECTIVES Accurate and efficient knee cartilage and bone segmentation are necessary for basic science, clinical trial, and clinical applications. This work tested a multi-stage convolutional neural network framework for MRI image segmentation. MATERIALS AND METHODS Stage 1 of the framework coarsely segments images, outputting the probability of each voxel belonging to one of the classes of interest: four cartilage tissues, three bones, and background. Stage 2 segments overlapping sub-volumes that include the Stage 1 probability maps concatenated to the raw image data. Using sixfold cross-validation, this framework was tested on two datasets comprising 176 images [88 individuals in the Osteoarthritis Initiative (OAI)] and 60 images (15 healthy young men), respectively. RESULTS On the OAI segmentation dataset, the framework produces cartilage segmentation accuracies (Dice similarity coefficient) of 0.907 (femoral), 0.876 (medial tibial), 0.913 (lateral tibial), and 0.840 (patellar). Healthy cartilage accuracies are excellent (femoral = 0.938, medial tibial = 0.911, lateral tibial = 0.930, patellar = 0.955). Average surface distances are less than the in-plane resolution. Segmentation takes 91 ± 11 s per knee. DISCUSSION The framework learns to automatically segment knee cartilage tissues and bones from MR images acquired with two sequences, producing efficient, accurate quantifications at varying disease severities.
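Stage 2 above consumes sub-volumes in which the Stage 1 class-probability maps are concatenated to the raw image. A minimal sketch of that input construction (shapes follow the abstract's 8 classes; the helper and array sizes are assumptions, not the authors' code):

```python
import numpy as np

def build_stage2_input(image_subvol, stage1_probs):
    """Concatenate Stage-1 class-probability maps to the raw image
    along the channel axis to form the Stage-2 network input."""
    # image_subvol: (D, H, W); stage1_probs: (C, D, H, W) for C classes.
    image = image_subvol[np.newaxis, ...]          # -> (1, D, H, W)
    return np.concatenate([image, stage1_probs], axis=0)

# Toy sub-volume and 8-class probability map
# (4 cartilage tissues + 3 bones + background).
sub = np.zeros((16, 32, 32), dtype=np.float32)
probs = np.full((8, 16, 32, 32), 1 / 8, dtype=np.float32)
x = build_stage2_input(sub, probs)
print(x.shape)  # (9, 16, 32, 32)
```

Feeding the coarse probabilities alongside the raw intensities gives the second stage spatial priors about where each tissue is likely to be.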
66
Leonardi R, Lo Giudice A, Farronato M, Ronsivalle V, Allegrini S, Musumeci G, Spampinato C. Fully automatic segmentation of sinonasal cavity and pharyngeal airway based on convolutional neural networks. Am J Orthod Dentofacial Orthop 2021; 159:824-835.e1. [DOI: 10.1016/j.ajodo.2020.05.017]
Abstract
INTRODUCTION This study aimed to test the accuracy of a new automatic deep learning-based approach, based on convolutional neural networks (CNN), for fully automatic segmentation of the sinonasal cavity and the pharyngeal airway from cone-beam computed tomography (CBCT) scans. METHODS Forty CBCT scans from healthy patients (20 women and 20 men; mean age, 23.37 ± 3.34 years) were collected, and manual segmentation of the sinonasal cavity and pharyngeal subregions was carried out using Mimics software (version 20.0; Materialise, Leuven, Belgium). Twenty CBCT scans from the total sample were randomly selected and used for training the artificial intelligence model. The remaining 20 CBCT segmentation masks were used to test the accuracy of the fully automatic CNN method by comparing the segmentation volumes of the 3-dimensional models obtained with automatic and manual segmentation. The accuracy of the CNN-based method was also assessed using the Dice score coefficient and the surface-to-surface matching technique. The intraclass correlation coefficient and Dahlberg's formula were used to test intraobserver reliability and method error, respectively. The independent Student t test was used for between-groups volumetric comparison. RESULTS Measurements were highly correlated, with an intraclass correlation coefficient of 0.921, whereas the method error was 0.31 mm3. A mean difference of 1.93 ± 0.73 cm3 was found between the methodologies, but it was not statistically significant (P > 0.05). The mean matching percentage was 85.35 ± 2.59 (tolerance 0.5 mm) and 93.44 ± 2.54 (tolerance 1.0 mm). The differences between the two methods, measured as the Dice score coefficient, were 3.3% and 5.8%, respectively. CONCLUSIONS The new deep learning-based method for automated segmentation of the sinonasal cavity and the pharyngeal airway in CBCT scans is accurate and performs as well as an experienced image reader.
Affiliation(s)
- Rosalia Leonardi
- Department of Orthodontics, School of Dentistry, University of Catania, Catania, Italy.
| | - Antonino Lo Giudice
- Department of Orthodontics, School of Dentistry, University of Catania, Catania, Italy
| | - Marco Farronato
- Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Department of Biomedical, Surgical and Dental Sciences, School of Dentistry, University of Milan, Milan, Italy
| | - Vincenzo Ronsivalle
- Department of Orthodontics, School of Dentistry, University of Catania, Catania, Italy
| | | | - Giuseppe Musumeci
- Department of Biomedical and Biotechnological Sciences, Human Anatomy and Histology Section, School of Medicine, University of Catania, Catania, Italy
| | - Concetto Spampinato
- Department of Computer and Telecommunications Engineering, University of Catania, Catania, Italy
67
Wirth W, Eckstein F, Kemnitz J, Baumgartner CF, Konukoglu E, Fuerst D, Chaudhari AS. Accuracy and longitudinal reproducibility of quantitative femorotibial cartilage measures derived from automated U-Net-based segmentation of two different MRI contrasts: data from the osteoarthritis initiative healthy reference cohort. Magnetic Resonance Materials in Physics, Biology and Medicine 2021; 34:337-354. [PMID: 33025284 PMCID: PMC8154803 DOI: 10.1007/s10334-020-00889-7]
Abstract
OBJECTIVE To evaluate the agreement, accuracy, and longitudinal reproducibility of quantitative cartilage morphometry from 2D U-Net-based automated segmentations for 3T coronal fast low angle shot (corFLASH) and sagittal double echo at steady-state (sagDESS) MRI. METHODS 2D U-Nets were trained using manual, quality-controlled femorotibial cartilage segmentations available for 92 Osteoarthritis Initiative healthy reference cohort participants from both corFLASH and sagDESS (n = 50/21/21 training/validation/test-set). Cartilage morphometry was computed from automated and manual segmentations for knees from the test-set. Agreement and accuracy were evaluated from baseline visits (dice similarity coefficient: DSC, correlation analysis, systematic offset). The longitudinal reproducibility was assessed from year-1 and -2 follow-up visits (root-mean-squared coefficient of variation, RMSCV%). RESULTS Automated segmentations showed high agreement (DSC 0.89-0.92) and high correlations (r ≥ 0.92) with manual ground truth for both corFLASH and sagDESS and only small systematic offsets (≤ 10.1%). The automated measurements showed a similar test-retest reproducibility over 1 year (RMSCV% 1.0-4.5%) as manual measurements (RMSCV% 0.5-2.5%). DISCUSSION The 2D U-Net-based automated segmentation method yielded high agreement compared with manual segmentation and also demonstrated high accuracy and longitudinal test-retest reproducibility for morphometric analysis of articular cartilage derived from it, using both corFLASH and sagDESS.
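Longitudinal reproducibility above is summarized as the root-mean-square coefficient of variation (RMSCV%). A minimal sketch under the usual definition (per-subject coefficient of variation from repeated measures, then the root mean square across subjects; the sample data are hypothetical, not the study's):

```python
import math

def rmscv_percent(repeated_measures):
    """Root-mean-square coefficient of variation (%) across subjects,
    each with a list of repeated measurements (e.g. year-1 and year-2)."""
    cv_sq = []
    for values in repeated_measures:
        mean = sum(values) / len(values)
        # Sample variance (n - 1 denominator) of the repeated measures.
        var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
        cv_sq.append(var / mean ** 2)
    return 100.0 * math.sqrt(sum(cv_sq) / len(cv_sq))

# Hypothetical cartilage-thickness test-retest pairs (mm) for three knees.
pairs = [[2.10, 2.14], [1.95, 1.93], [2.40, 2.45]]
print(round(rmscv_percent(pairs), 2))
```

Values in the 1-4.5% range, as reported above, mean the year-to-year scatter of a measurement is a small fraction of its magnitude.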
Affiliation(s)
- Wolfgang Wirth
- Department of Imaging and Functional Musculoskeletal Research, Institute of Anatomy and Cell Biology, Paracelsus Medical University Salzburg and Nuremberg, Strubergasse 21, 5020, Salzburg, Austria.
- Ludwig Boltzmann Institute for Arthritis and Rehabilitation, Paracelsus Medical University, Salzburg, Austria.
- Chondrometrics GmbH, Ainring, Germany.
| | - Felix Eckstein
- Department of Imaging and Functional Musculoskeletal Research, Institute of Anatomy and Cell Biology, Paracelsus Medical University Salzburg and Nuremberg, Strubergasse 21, 5020, Salzburg, Austria
- Ludwig Boltzmann Institute for Arthritis and Rehabilitation, Paracelsus Medical University, Salzburg, Austria
- Chondrometrics GmbH, Ainring, Germany
| | - Jana Kemnitz
- Department of Imaging and Functional Musculoskeletal Research, Institute of Anatomy and Cell Biology, Paracelsus Medical University Salzburg and Nuremberg, Strubergasse 21, 5020, Salzburg, Austria
| | | | | | - David Fuerst
- Department of Imaging and Functional Musculoskeletal Research, Institute of Anatomy and Cell Biology, Paracelsus Medical University Salzburg and Nuremberg, Strubergasse 21, 5020, Salzburg, Austria
- Ludwig Boltzmann Institute for Arthritis and Rehabilitation, Paracelsus Medical University, Salzburg, Austria
- Chondrometrics GmbH, Ainring, Germany
68
Fayad LM, Parekh VS, Luna RDC, Ko CC, Tank D, Fritz J, Ahlawat S, Jacobs MA. A Deep Learning System for Synthetic Knee Magnetic Resonance Imaging: Is Artificial Intelligence-Based Fat-Suppressed Imaging Feasible? Invest Radiol 2021; 56:357-368. [PMID: 33350717 PMCID: PMC8087629 DOI: 10.1097/rli.0000000000000751]
Abstract
MATERIALS AND METHODS This single-center study was approved by the institutional review board. Artificial intelligence-based FS MRI scans were created from non-FS images using a deep learning system with a modified convolutional neural network-based U-Net that used a training set of 25,920 images and a validation set of 16,416 images. Three musculoskeletal radiologists reviewed 88 knee MR studies in 2 sessions, the original (proton density [PD] + FSPD) and the synthetic (PD + AFSMRI). Readers recorded AFSMRI quality (diagnostic/nondiagnostic) and the presence or absence of meniscal, ligament, and tendon tears; cartilage defects; and bone marrow abnormalities. Contrast-to-noise ratio measurements were made among subcutaneous fat, fluid, bone marrow, cartilage, and muscle. The original MRI sequences were used as the reference standard to determine the diagnostic performance of AFSMRI (combined with the original PD sequence). This fully balanced study design, in which all readers read all images the same number of times, allowed determination of the interchangeability of the original and synthetic protocols. Descriptive statistics, intermethod agreement, interobserver concordance, and interchangeability tests were applied. A P value less than 0.01 was considered statistically significant for the likelihood ratio testing, and a P value less than 0.05 for all other statistical analyses. RESULTS Artificial intelligence-based FS MRI quality was rated as diagnostic (98.9% [87/88] to 100% [88/88], all readers). Diagnostic performance (sensitivity/specificity) of the synthetic protocol was high for tears of the menisci (91% [71/78], 86% [84/98]), cruciate ligaments (92% [12/13], 98% [160/163]), collateral ligaments (80% [16/20], 100% [156/156]), and tendons (90% [9/10], 100% [166/166]). For cartilage defects and bone marrow abnormalities, the synthetic protocol offered an overall sensitivity/specificity of 77% (170/221)/93% (287/307) and 76% (95/125)/90% (443/491), respectively. Intermethod agreement ranged from moderate to substantial for almost all evaluated structures (menisci, cruciate ligaments, collateral ligaments, and bone marrow abnormalities). No significant difference was observed between methods for all structural abnormalities by all readers (P > 0.05), except for cartilage assessment. Interobserver agreement ranged from moderate to substantial for almost all evaluated structures. Original and synthetic protocols were interchangeable for the diagnosis of all evaluated structures. There was no significant difference in the common exact match proportions for all combinations (P > 0.01). The conspicuity of all tissues assessed through contrast-to-noise ratio was higher on AFSMRI than on original FSPD images (P < 0.05). CONCLUSIONS Artificial intelligence-based FS MRI (3D AFSMRI) is feasible and offers a method for fast imaging, with similar detection rates for structural abnormalities of the knee compared with original 3D MR sequences.
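The conspicuity comparison above relies on contrast-to-noise measurements between tissue pairs. A sketch using one common CNR definition (absolute mean signal difference over pooled noise); the ROI samples are hypothetical, and the exact definition used in the study may differ:

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two tissue regions of interest,
    defined here as |mean_a - mean_b| / sqrt(var_a + var_b)."""
    a = np.asarray(roi_a, dtype=float)
    b = np.asarray(roi_b, dtype=float)
    return abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var())

# Hypothetical signal samples from fluid and cartilage ROIs.
fluid = [220.0, 230.0, 225.0, 235.0]
cartilage = [120.0, 130.0, 125.0, 115.0]
print(round(cnr(fluid, cartilage), 2))
```

A higher CNR between, say, fluid and cartilage means the two tissues are easier to tell apart, which is what the abstract reports in favor of AFSMRI.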
Affiliation(s)
- Laura M. Fayad
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions MD, USA
| | - Vishwa S. Parekh
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions MD, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
| | - Rodrigo de Castro Luna
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions MD, USA
| | - Charles C. Ko
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions MD, USA
| | - Dharmesh Tank
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions MD, USA
| | - Jan Fritz
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions MD, USA
- Department of Radiology, New York University Grossman School of Medicine, NYU Langone Health, New York, NY, USA
| | - Shivani Ahlawat
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions MD, USA
| | - Michael A. Jacobs
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions MD, USA
- Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
69
Chang GH, Park LK, Le NA, Jhun RS, Surendran T, Lai J, Seo H, Promchotichai N, Yoon G, Scalera J, Capellini TD, Felson DT, Kolachalama VB. Subchondral bone length in knee osteoarthritis: A deep learning derived imaging measure and its association with radiographic and clinical outcomes. Arthritis Rheumatol 2021; 73:2240-2248. [PMID: 33973737 DOI: 10.1002/art.41808]
Abstract
OBJECTIVE To develop a bone shape measure that reflects the extent of cartilage loss and bone flattening in knee osteoarthritis (OA) and to test it against estimates of disease severity. METHODS A fast region-based convolutional neural network was trained to crop the knee joints in sagittal dual-echo steady state MRI sequences obtained from the Osteoarthritis Initiative (OAI). Publicly available annotations of the cartilage and menisci were used as references to annotate the tibia and the femur in 61 knees. Another deep neural network (U-Net) was developed to learn these annotations. Model predictions were compared with radiologist-driven annotations on an independent test set (27 knees). The U-Net was then applied to automatically extract the knee joint structures on the larger OAI dataset (9,434 knees). We defined subchondral bone length (SBL), a novel shape measure characterizing the extent of overlying cartilage and bone flattening, and examined its relationship with radiographic joint space narrowing (JSN), concurrent WOMAC pain and disability, and subsequent partial or total knee replacement (KR). Odds ratios for each outcome were estimated by dividing relative changes in SBL on the OAI dataset into quartiles. RESULTS Mean SBL values for knees with JSN were consistently different from those for knees without JSN. Greater changes in SBL from baseline were associated with greater pain and disability. For knees with medial or lateral JSN, the odds ratios for future KR between the lowest and highest quartiles of SBL change were 5.68 (95% CI: [3.90, 8.27]) and 7.19 (95% CI: [3.71, 13.95]), respectively. CONCLUSION SBL quantified OA status based on JSN severity. It shows promise as an imaging marker for predicting clinical and structural OA outcomes.
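The association with knee replacement above is reported as odds ratios between quartiles of SBL change. A minimal sketch of a quartile odds ratio from 2 × 2 counts (the counts below are hypothetical, not the study's data, and no confidence interval is computed):

```python
def odds_ratio(events_hi, total_hi, events_lo, total_lo):
    """Odds ratio of an outcome (e.g. knee replacement) between the
    highest and lowest quartiles of an exposure."""
    odds_hi = events_hi / (total_hi - events_hi)
    odds_lo = events_lo / (total_lo - events_lo)
    return odds_hi / odds_lo

# Hypothetical counts: 40/200 KRs in the top quartile of SBL change
# versus 10/200 in the bottom quartile.
print(round(odds_ratio(40, 200, 10, 200), 2))
```

An odds ratio well above 1, like the 5.68 and 7.19 reported above, indicates that knees in the highest quartile of SBL change were far more likely to progress to replacement.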
Affiliation(s)
- Gary H Chang, Lisa K Park, Nina A Le, Ray S Jhun, Tejus Surendran, Joseph Lai, Hojoon Seo, Nuwapa Promchotichai, Grace Yoon
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, Boston, MA, USA, 02118
- Jonathan Scalera
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA, 02118
- Terence D Capellini
- Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, USA, 02138; Broad Institute of MIT and Harvard, Cambridge, MA, USA, 02142
- David T Felson
- Section of Rheumatology, Department of Medicine, Boston University School of Medicine, Boston, MA, USA, 02118; Centre for Epidemiology, University of Manchester and the NIHR Manchester BRC, Manchester University NHS Trust, Manchester, UK
- Vijaya B Kolachalama
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, Boston, MA, USA, 02118; Department of Computer Science, Faculty of Computing & Data Sciences, Boston University, Boston, MA, USA, 02215
70
Deep Convolutional Neural Network-Based Diagnosis of Anterior Cruciate Ligament Tears: Performance Comparison of Homogenous Versus Heterogeneous Knee MRI Cohorts With Different Pulse Sequence Protocols and 1.5-T and 3-T Magnetic Field Strengths. Invest Radiol 2021; 55:499-506. [PMID: 32168039 PMCID: PMC7343178 DOI: 10.1097/rli.0000000000000664]
Abstract
Objectives The aim of this study was to clinically validate a deep convolutional neural network (DCNN) for the detection of surgically proven anterior cruciate ligament (ACL) tears in a large patient cohort and to analyze the effect of magnetic resonance examinations from different institutions, varying protocols, and field strengths. Materials and Methods After ethics committee approval, this retrospective analysis of prospectively collected data was performed on 512 consecutive subjects, who underwent knee magnetic resonance imaging (MRI) in a total of 59 different institutions followed by arthroscopic knee surgery at our institution. The DCNN and 3 fellowship-trained full-time academic musculoskeletal radiologists independently evaluated the MRI examinations for full-thickness ACL tears. Surgical reports served as the reference standard. Statistics included diagnostic performance metrics, including sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and kappa statistics. P values less than 0.05 were considered to represent statistical significance. Results Anterior cruciate ligament tears were present in 45.7% (234/512) and absent in 54.3% (278/512) of the subjects. The DCNN had a sensitivity of 96.1%, which was not significantly different from the readers (97.5%-97.9%; all P ≥ 0.118), but a significantly lower specificity of 93.1% (readers, 99.6%-100%; all P < 0.001) and AUC of 0.935 (readers, 0.989-0.991; all P < 0.001) for the entire cohort. Subgroup analysis showed significantly lower sensitivity, specificity, and AUC of the DCNN for outside MRI (92.5%, 87.1%, and 0.898, respectively) than for in-house MRI (99.0%, 94.4%, and 0.967, respectively) examinations (P = 0.026, P = 0.043, and P < 0.05, respectively).
Conclusions DCNN performance for ACL tear diagnosis can approach that of fellowship-trained full-time academic musculoskeletal radiologists at 1.5 T and 3 T; however, performance may decrease with increasing MRI examination heterogeneity.
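The sensitivity, specificity, and AUC reported in such studies can be reproduced from raw predictions with a short sketch. This is illustrative code, not the study's implementation: the function name, the fixed 0.5 threshold, and the rank-based (Mann-Whitney) AUC estimator are our choices.

```python
def diagnostic_metrics(labels, scores, threshold=0.5):
    """Sensitivity, specificity, and AUC from binary labels (1 = proven tear)
    and model scores.  The rank-based estimate below equals the area under
    the empirical ROC curve."""
    preds = [1 if s >= threshold else 0 for s in scores]
    pos = sum(labels)
    neg = len(labels) - pos
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    sensitivity, specificity = tp / pos, tn / neg
    # AUC = probability that a random positive outscores a random negative,
    # counting ties as half a win (Mann-Whitney statistic).
    pos_scores = [s for y, s in zip(labels, scores) if y == 1]
    neg_scores = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos_scores for q in neg_scores)
    auc = wins / (len(pos_scores) * len(neg_scores))
    return sensitivity, specificity, auc
```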
71
Shin Y, Yang J, Lee YH. Deep Generative Adversarial Networks: Applications in Musculoskeletal Imaging. Radiol Artif Intell 2021; 3:e200157. [PMID: 34136816 PMCID: PMC8204145 DOI: 10.1148/ryai.2021200157]
Abstract
In recent years, deep learning techniques have been applied in musculoskeletal radiology to increase the diagnostic potential of acquired images. Generative adversarial networks (GANs), which are deep neural networks that can generate or transform images, have the potential to aid in faster imaging by generating images with a high level of realism across multiple contrasts and modalities from existing imaging protocols. This review introduces the key architectures of GANs as well as their technical background and challenges. Key research trends are highlighted, including: (a) reconstruction of high-resolution MRI; (b) image synthesis with different modalities and contrasts; (c) image enhancement that efficiently preserves high-frequency information suitable for human interpretation; (d) pixel-level segmentation with annotation sharing between domains; and (e) applications to different musculoskeletal anatomies. In addition, an overview is provided of the key issues wherein clinical applicability is challenging to capture with conventional performance metrics and expert evaluation. When clinically validated, GANs have the potential to improve musculoskeletal imaging. Keywords: Adults and Pediatrics, Computer Aided Diagnosis (CAD), Computer Applications-General (Informatics), Informatics, Skeletal-Appendicular, Skeletal-Axial, Soft Tissues/Skin © RSNA, 2021.
Affiliation(s)
- YiRang Shin, Jaemoon Yang, Young Han Lee
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
72
Automatic Discoid Lateral Meniscus Diagnosis from Radiographs Based on Image Processing Tools and Machine Learning. J Healthc Eng 2021; 2021:6662664. [PMID: 33968355 PMCID: PMC8081628 DOI: 10.1155/2021/6662664]
Abstract
The aim of the present study was to build a software implementation of a previous study and to diagnose discoid lateral menisci on knee joint radiograph images. A total of 160 images from normal individuals and from patients diagnosed with discoid lateral menisci were included. Our software implementation includes two parts: preprocessing and measurement. In the first phase, the whole radiograph image was analyzed to obtain basic information about the patient. Machine learning was used to segment the knee joint from the original radiograph image. Image enhancement and denoising tools were used to strengthen the image and remove noise. In the second phase, edge detection was used to quantify important features in the image. A specific algorithm was designed to build a model of the knee joint and measure its parameters. Of the test images, 99.65% were segmented correctly; for 97.5%, the parameters were also measured successfully. There was no significant difference between manual and automatic measurements in the discoid (P=0.28) and control groups (P=0.15). The mean and standard deviation of the ratio of lateral joint space distance to the height of the lateral tibial spine were compared with the results of manual measurement. The software performed well on raw radiographs, showing a satisfying success rate and robustness. Thus, it is possible to diagnose discoid lateral menisci on radiographs with the help of radiograph-image-analyzing software (e.g., BM3D) and artificial intelligence-related tools (YOLOv3). The results of this study can help build a joint database containing patient data and thus play a role in the diagnosis of discoid lateral menisci and other knee joint diseases in the future.
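The edge-detection step in the measurement phase can be illustrated with a plain Sobel gradient. This is a stand-in sketch under our assumptions; the paper's actual filters and its OpenCV/YOLOv3 pipeline are not reproduced here.

```python
import numpy as np

def sobel_edges(img):
    """Plain Sobel gradient magnitude over a 2D image, as a minimal stand-in
    for the edge-detection step.  Output is (h-2, w-2): only positions where
    the full 3x3 neighborhood fits are evaluated."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                      # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out
```

On a vertical intensity step the response peaks along the step and vanishes in flat regions, which is what a landmark-measurement algorithm would then threshold and trace.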
73
Deep learning method for segmentation of rotator cuff muscles on MR images. Skeletal Radiol 2021; 50:683-692. [PMID: 32939590 DOI: 10.1007/s00256-020-03599-2]
Abstract
OBJECTIVE To develop and validate a deep convolutional neural network (CNN) method capable of (1) selecting a specific shoulder sagittal MR image (Y-view) and (2) automatically segmenting rotator cuff (RC) muscles on a Y-view. We hypothesized a CNN approach can accurately perform both tasks compared with manual reference standards. MATERIAL AND METHODS We created 2 models: model A for Y-view selection and model B for muscle segmentation. For model A, we manually selected shoulder sagittal T1 Y-views from 258 cases as ground truth to train a classification CNN (Keras/Tensorflow, Inception v3, 16 batch, 100 epochs, dropout 0.2, learning rate 0.001, RMSprop). A top-3 success rate evaluated model A on 100 internal and 50 external test cases. For model B, we manually segmented subscapularis, supraspinatus, and infraspinatus/teres minor on 1048 sagittal T1 Y-views. After histogram equalization and data augmentation, the model was trained from scratch (U-Net, 8 batch, 50 epochs, dropout 0.25, learning rate 0.0001, softmax). Dice (F1) score determined segmentation accuracy on 105 internal and 50 external test images. RESULTS Model A showed top-3 accuracy > 98% to select an appropriate Y-view. Model B produced accurate RC muscle segmentations with mean Dice scores > 0.93. Individual muscle Dice scores on internal/external datasets were as follows: subscapularis 0.96/0.93, supraspinatus 0.97/0.96, and infraspinatus/teres minor 0.97/0.95. CONCLUSIONS Our results show overall accurate Y-view selection and automated RC muscle segmentation using a combination of deep CNN algorithms.
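The top-3 success criterion used to evaluate model A (and the analogous slice-selection check in the pectoralis study cited earlier in this list) amounts to a set intersection. A minimal sketch with our own variable names, not the authors' code:

```python
def top3_success(slice_scores, truth_top3):
    """True if any of the model's three highest-scoring slices appears among
    the ground-truth top-3 slices (names are ours, not the authors')."""
    ranked = sorted(slice_scores, key=slice_scores.get, reverse=True)
    return bool(set(ranked[:3]) & set(truth_top3))

def top3_accuracy(cases):
    """`cases`: list of (slice_scores dict, ground-truth top-3 list) pairs."""
    return sum(top3_success(s, t) for s, t in cases) / len(cases)
```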
74
Automated cartilage segmentation and quantification using 3D ultrashort echo time (UTE) cones MR imaging with deep convolutional neural networks. Eur Radiol 2021; 31:7653-7663. [PMID: 33783571 DOI: 10.1007/s00330-021-07853-6]
Abstract
OBJECTIVE To develop a fully automated full-thickness cartilage segmentation and mapping of T1, T1ρ, and T2*, as well as macromolecular fraction (MMF), by combining a series of quantitative 3D ultrashort echo time (UTE) cones MR imaging with a transfer learning-based U-Net convolutional neural network (CNN) model. METHODS Sixty-five participants (20 normal, 29 doubtful-minimal osteoarthritis (OA), and 16 moderate-severe OA) were scanned using 3D UTE cones T1 (Cones-T1), adiabatic T1ρ (Cones-AdiabT1ρ), T2* (Cones-T2*), and magnetization transfer (Cones-MT) sequences at 3 T. Manual segmentation was performed by two experienced radiologists, and automatic segmentation was completed using the proposed U-Net CNN model. The accuracy of cartilage segmentation was evaluated using the Dice score and volumetric overlap error (VOE). Pearson correlation coefficients and intraclass correlation coefficients (ICC) were calculated to evaluate the consistency of quantitative MR parameters extracted from automatic and manual segmentations. UTE biomarkers were compared among the subject groups using one-way ANOVA. RESULTS The U-Net CNN model provided reliable cartilage segmentation with a mean Dice score of 0.82 and a mean VOE of 29.86%. The consistency of Cones-T1, Cones-AdiabT1ρ, Cones-T2*, and MMF calculated from automatic and manual segmentations ranged from 0.91 to 0.99 for Pearson correlation coefficients and from 0.91 to 0.96 for ICCs. Significant increases in Cones-T1, Cones-AdiabT1ρ, and Cones-T2* (p < 0.05) and a decrease in MMF (p < 0.001) were observed in doubtful-minimal OA and/or moderate-severe OA relative to normal controls. CONCLUSION Quantitative 3D UTE cones MR imaging combined with the proposed U-Net CNN model allows a fully automated comprehensive assessment of articular cartilage. KEY POINTS • 3D UTE cones imaging combined with the U-Net CNN model was able to provide fully automated cartilage segmentation.
• UTE parameters obtained from automatic segmentation were able to reliably provide a quantitative assessment of cartilage.
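The two overlap metrics reported in this abstract can be computed from binary masks as below. A NumPy sketch with our own function name; VOE is the complement of the Jaccard index, expressed here as a percentage.

```python
import numpy as np

def dice_and_voe(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|); VOE = (1 - |A∩B| / |A∪B|) * 100.
    Inputs are binary segmentation masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    voe = 100.0 * (1.0 - inter / union)
    return dice, voe
```

Note that Dice and VOE carry the same information (each is a monotone function of the Jaccard index), which is why papers often report both without further analysis.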
75
Ling J, Li G, Shao H, Wang H, Yin H, Zhou H, Song Y, Chen G. Helix Matrix Transformation Combined With Convolutional Neural Network Algorithm for Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry-Based Bacterial Identification. Front Microbiol 2020; 11:565434. [PMID: 33304324 PMCID: PMC7693542 DOI: 10.3389/fmicb.2020.565434]
Abstract
Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) analysis is a rapid and reliable method for bacterial identification. Classification algorithms, a critical part of the MALDI-TOF MS analysis approach, have been developed using both traditional and machine learning algorithms. In this study, a method that combined helix matrix transformation with a convolutional neural network (CNN) algorithm is presented for bacterial identification. A total of 14 bacterial species comprising 58 strains were selected to create an in-house MALDI-TOF MS spectrum dataset. The 1D array-type MALDI-TOF MS spectrum data were transformed through a helix matrix transformation into matrix-type data, which were fitted during the CNN training. Through parameter optimization, the threshold for binarization was set to 16 and the final size of the matrix-type data was set to 25 × 25 to obtain a clean dataset of small size. A CNN model with three convolutional layers was trained on the dataset to predict bacterial species. The filter sizes for the three convolutional layers were 4, 8, and 16, the kernel size was three, and the activation function was the rectified linear unit (ReLU). A back propagation neural network (BPNN) model was created without the helix matrix transformation and convolutional layer to test whether the helix matrix transformation combined with the CNN algorithm works better. The areas under the receiver operating characteristic (ROC) curve of the CNN and BPNN models were 0.98 and 0.87, respectively. The accuracies of the CNN and BPNN models were 97.78 ± 0.08 and 86.50 ± 0.01, respectively, a statistically significant difference (p < 0.001). The results suggest that helix matrix transformation combined with the CNN algorithm enables feature extraction from the bacterial MALDI-TOF MS spectrum and may offer a solution for identifying bacterial species.
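A helix (spiral) matrix transformation of the kind described can be sketched as below: the 1D spectrum is wound into a square matrix so that neighboring m/z bins stay adjacent in 2D for the CNN. The clockwise-from-top-left winding order and zero-padding are our assumptions; the paper's exact scheme may differ.

```python
def helix_transform(spectrum, size):
    """Wind a 1D sequence into a size x size matrix along an inward clockwise
    spiral.  Missing trailing values are padded with zeros."""
    m = [[0] * size for _ in range(size)]
    it = iter(spectrum)
    nxt = lambda: next(it, 0)                # zero-pad short spectra
    top, bottom, left, right = 0, size - 1, 0, size - 1
    while top <= bottom and left <= right:
        for j in range(left, right + 1):      # top edge, left -> right
            m[top][j] = nxt()
        for i in range(top + 1, bottom + 1):  # right edge, top -> bottom
            m[i][right] = nxt()
        if top < bottom:
            for j in range(right - 1, left - 1, -1):  # bottom edge
                m[bottom][j] = nxt()
        if left < right:
            for i in range(bottom - 1, top, -1):      # left edge
                m[i][left] = nxt()
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return m
```

For the paper's 25 × 25 matrices, `helix_transform(binned_spectrum, 25)` would be applied after binarization of the binned intensities.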
Affiliation(s)
- Jin Ling, Gaomin Li, Hong Shao, Hong Wang, Hongrui Yin
- NMPA Key Laboratory for Quality Control of Therapeutic Monoclonal Antibodies, Shanghai Institute for Food and Drug Control, Shanghai, China; Department of Biochemical Drugs and Biological Products, Shanghai Institute for Food and Drug Control, Shanghai, China
- Hu Zhou
- Department of Analytical Chemistry, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, Shanghai, China
- Yufei Song
- Department of Gastroenterology, Lihuili Hospital of Ningbo Medical Center, Ningbo, China
- Gang Chen
- NMPA Key Laboratory for Quality Control of Therapeutic Monoclonal Antibodies, Shanghai Institute for Food and Drug Control, Shanghai, China; Department of Biochemical Drugs and Biological Products, Shanghai Institute for Food and Drug Control, Shanghai, China
76
77
Kijowski R, Liu F, Caliva F, Pedoia V. Deep Learning for Lesion Detection, Progression, and Prediction of Musculoskeletal Disease. J Magn Reson Imaging 2020; 52:1607-1619. [PMID: 31763739 PMCID: PMC7251925 DOI: 10.1002/jmri.27001]
Abstract
Deep learning is one of the most exciting new areas in medical imaging. This review article provides a summary of the current clinical applications of deep learning for lesion detection, progression, and prediction of musculoskeletal disease on radiographs, computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine. Deep-learning methods have shown success for estimating pediatric bone age, detecting fractures, and assessing the severity of osteoarthritis on radiographs. In particular, the high diagnostic performance of deep-learning approaches for estimating pediatric bone age and detecting fractures suggests that the new technology may soon become available for use in clinical practice. Recent studies have also documented the feasibility of using deep-learning methods for identifying a wide variety of pathologic abnormalities on CT and MRI including internal derangement, metastatic disease, infection, fractures, and joint degeneration. However, the detection of musculoskeletal disease on CT and especially MRI is challenging, as it often requires analyzing complex abnormalities on multiple slices of image datasets with different tissue contrasts. Thus, additional technical development is needed to create deep-learning methods for reliable and repeatable interpretation of musculoskeletal CT and MRI examinations. Furthermore, the diagnostic performance of all deep-learning methods for detecting and characterizing musculoskeletal disease must be evaluated in prospective studies using large image datasets acquired at different institutions with different imaging parameters and different imaging hardware before they can be implemented in clinical practice. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 2 J. MAGN. RESON. IMAGING 2020;52:1607-1619.
Affiliation(s)
- Richard Kijowski, Fang Liu, Francesco Caliva
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Valentina Pedoia
- Department of Radiology, University of California at San Francisco School of Medicine, San Francisco, California, USA
78
Chaudhari AS, Kogan F, Pedoia V, Majumdar S, Gold GE, Hargreaves BA. Rapid Knee MRI Acquisition and Analysis Techniques for Imaging Osteoarthritis. J Magn Reson Imaging 2020; 52:1321-1339. [PMID: 31755191 PMCID: PMC7925938 DOI: 10.1002/jmri.26991]
Abstract
Osteoarthritis (OA) of the knee is a major source of disability that has no known treatment or cure. Morphological and compositional MRI is commonly used for assessing the bone and soft tissues in the knee to enhance the understanding of OA pathophysiology. However, it is challenging to extend these imaging methods and their subsequent analysis techniques to study large population cohorts due to slow and inefficient imaging acquisition and postprocessing tools. This can create a bottleneck in assessing early OA changes and evaluating the responses of novel therapeutics. The purpose of this review article is to highlight recent developments in tools for enhancing the efficiency of knee MRI methods useful to study OA. Advances in efficient MRI data acquisition and reconstruction tools for morphological and compositional imaging, efficient automated image analysis tools, and hardware improvements to further drive efficient imaging are discussed in this review. For each topic, we discuss the current challenges as well as potential future opportunities to alleviate these challenges. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 3.
Affiliation(s)
- Feliks Kogan
- Department of Radiology, Stanford University, Stanford, California, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Center of Digital Health Innovation (CDHI), University of California San Francisco, San Francisco, California, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Center of Digital Health Innovation (CDHI), University of California San Francisco, San Francisco, California, USA
- Garry E. Gold
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Orthopaedic Surgery, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
- Brian A. Hargreaves
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
79
From classical to deep learning: review on cartilage and bone segmentation techniques in knee osteoarthritis research. Artif Intell Rev 2020. [DOI: 10.1007/s10462-020-09924-4]
80
The optimisation of deep neural networks for segmenting multiple knee joint tissues from MRIs. Comput Med Imaging Graph 2020; 86:101793. [PMID: 33075675 PMCID: PMC7721597 DOI: 10.1016/j.compmedimag.2020.101793]
Abstract
Automated semantic segmentation of multiple knee joint tissues is desirable to allow faster and more reliable analysis of large datasets and to enable further downstream processing, e.g. automated diagnosis. In this work, we evaluate the use of conditional Generative Adversarial Networks (cGANs) as a robust and potentially improved method for semantic segmentation compared to other extensively used convolutional neural networks, such as the U-Net. As cGANs have not yet been widely explored for semantic medical image segmentation, we analysed the effect of training with different objective functions and discriminator receptive field sizes on the segmentation performance of the cGAN. Additionally, we evaluated the possibility of using transfer learning to improve the segmentation accuracy. The networks were trained on i) the SKI10 dataset, which comes from the MICCAI grand challenge "Segmentation of Knee Images 2010", ii) the OAI ZIB dataset, containing femoral and tibial bone and cartilage segmentations of the Osteoarthritis Initiative cohort, and iii) a small locally acquired dataset (Advanced MRI of Osteoarthritis (AMROA) study) consisting of 3D fat-saturated spoiled gradient recalled-echo knee MRIs with manual segmentations of the femoral, tibial and patellar bone and cartilage, as well as the cruciate ligaments and selected peri-articular muscles. The Sørensen-Dice Similarity Coefficient (DSC), volumetric overlap error (VOE) and average surface distance (ASD) were calculated for segmentation performance evaluation. DSC ≥ 0.95 was achieved for all segmented bone structures, DSC ≥ 0.83 for cartilage and muscle tissues, and DSC of ≈0.66 for cruciate ligament segmentations with both the cGAN and U-Net on the in-house AMROA dataset. Reducing the receptive field size of the cGAN discriminator network improved segmentation performance and resulted in segmentation accuracies equivalent to those of the U-Net.
Pretraining not only increased segmentation accuracy for several knee joint tissues in the fine-tuned dataset but also increased the network's capacity to preserve segmentation capabilities for the pretraining dataset. cGAN machine learning can generate automated semantic maps of multiple tissues within the knee joint, which could increase the accuracy and efficiency of evaluating joint health.
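The discriminator receptive-field size varied in this study follows a standard recurrence for convolutional stacks: r_in = stride * (r_out - 1) + kernel, applied from the output back to the input. A sketch with our own helper; the five-layer example below reproduces the widely used 70 × 70 PatchGAN configuration, which may differ from the networks trained here.

```python
def receptive_field(layers):
    """Receptive-field size of a conv stack, e.g. a PatchGAN-style cGAN
    discriminator.  `layers` is a list of (kernel, stride) pairs ordered from
    input to output; the recurrence r_in = stride * (r_out - 1) + kernel is
    applied from the last layer back to the first."""
    r = 1
    for kernel, stride in reversed(layers):
        r = stride * (r - 1) + kernel
    return r
```

For example, three stride-2 4x4 convolutions followed by two stride-1 4x4 convolutions give `receptive_field([(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]) == 70`; dropping layers shrinks the patch each discriminator output judges.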
81
Cantarelli Rodrigues T, Deniz CM, Alaia EF, Gorelik N, Babb JS, Dublin J, Gyftopoulos S. Three-dimensional MRI Bone Models of the Glenohumeral Joint Using Deep Learning: Evaluation of Normal Anatomy and Glenoid Bone Loss. Radiol Artif Intell 2020; 2:e190116. [PMID: 33033803 DOI: 10.1148/ryai.2020190116]
Abstract
Purpose To use convolutional neural networks (CNNs) for fully automated MRI segmentation of the glenohumeral joint and evaluate the accuracy of three-dimensional (3D) MRI models created with this method. Materials and Methods Shoulder MR images of 100 patients (average age, 44 years; range, 14-80 years; 60 men) were retrospectively collected from September 2013 to August 2018. CNNs were used to develop a fully automated segmentation model for proton density-weighted images. Shoulder MR images from an additional 50 patients (mean age, 33 years; range, 16-65 years; 35 men) were retrospectively collected from May 2014 to April 2019 to create 3D MRI glenohumeral models by transfer learning using Dixon-based sequences. Two musculoskeletal radiologists performed measurements on fully and semiautomated segmented 3D MRI models to assess glenohumeral anatomy, glenoid bone loss (GBL), and their impact on treatment selection. Performance of the CNNs was evaluated using the Dice similarity coefficient (DSC), sensitivity, precision, and surface-based distance measurements. Measurements were compared using the matched-pairs Wilcoxon signed rank test. Results The two-dimensional CNN model achieved, for the humerus and glenoid respectively, a DSC of 0.95 and 0.86, a precision of 95.5% and 87.5%, an average precision of 98.6% and 92.3%, and a sensitivity of 94.8% and 86.1%. The 3D CNN model achieved, for the humerus and glenoid respectively, a DSC of 0.95 and 0.86, a precision of 95.1% and 87.1%, an average precision of 98.7% and 91.9%, and a sensitivity of 94.9% and 85.6%. There was no difference between fully and semiautomated 3D model measurements of glenoid and humeral head width (P value range, .097-.99). Conclusion CNNs could potentially be used in clinical practice to provide rapid and accurate 3D MRI glenohumeral bone models and GBL measurements. Supplemental material is available for this article. © RSNA, 2020.
Affiliation(s)
- Tatiane Cantarelli Rodrigues, Cem M Deniz, Erin F Alaia, Natalia Gorelik, James S Babb, Jared Dublin, Soterios Gyftopoulos
- Department of Radiology, Hospital do Coração (HCOR) and Teleimagem, Rua Desembargador Eliseu Guilherme 53, 7th Floor, São Paulo, SP, Brazil 04004-030 (T.C.R.); Department of Radiology, NYU Langone Medical Center, New York, NY (C.M.D., E.F.A., S.G.); Department of Radiology, McGill University Health Centre, Montreal, Canada (N.G.); and Department of Radiology, New York University School of Medicine, New York, NY (J.S.B., J.D.)
Collapse
|
82
|
Neubert A, Bourgeat P, Wood J, Engstrom C, Chandra SS, Crozier S, Fripp J. Simultaneous super-resolution and contrast synthesis of routine clinical magnetic resonance images of the knee for improving automatic segmentation of joint cartilage: data from the Osteoarthritis Initiative. Med Phys 2020; 47:4939-4948. [PMID: 32745260 DOI: 10.1002/mp.14421] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 07/07/2020] [Accepted: 07/24/2020] [Indexed: 12/21/2022] Open
Abstract
PURPOSE High resolution three-dimensional (3D) magnetic resonance (MR) images are well suited for automated cartilage segmentation in the human knee joint. However, volumetric scans such as 3D Double-Echo Steady-State (DESS) images are not routinely acquired in clinical practice, which limits opportunities for reliable cartilage segmentation using (fully) automated algorithms. In this work, a method for generating synthetic 3D MR (syn3D-DESS) images with better contrast and higher spatial resolution from routine, low resolution, two-dimensional (2D) Turbo-Spin Echo (TSE) clinical knee scans is proposed. METHODS A UNet convolutional neural network is employed for synthesizing enhanced artificial MR images suitable for automated knee cartilage segmentation. Training of the model was performed on a large, publicly available dataset from the OAI, consisting of 578 MR examinations of knee joints from 102 healthy individuals and patients with knee osteoarthritis. RESULTS The generated synthetic images have higher spatial resolution and better tissue contrast than the original 2D TSE, which allow high quality automated 3D segmentations of the cartilage. The proposed approach was evaluated on a separate set of MR images from 88 subjects with manual cartilage segmentations. It provided a significant improvement in automated segmentation of knee cartilages when using the syn3D-DESS images compared to the original 2D TSE images. CONCLUSION The proposed method can successfully synthesize 3D DESS images from 2D TSE images to provide images suitable for automated cartilage segmentation.
Affiliation(s)
- Aleš Neubert
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Herston, Australia
- Pierrick Bourgeat
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Herston, Australia
- Jason Wood
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Herston, Australia
- Craig Engstrom
- School of Information Technology and Electrical Engineering, The University of Queensland, St. Lucia, Australia
- Shekhar S Chandra
- School of Information Technology and Electrical Engineering, The University of Queensland, St. Lucia, Australia
- Stuart Crozier
- School of Information Technology and Electrical Engineering, The University of Queensland, St. Lucia, Australia
- Jurgen Fripp
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Herston, Australia
|
83
|
Byra M, Jarosik P, Szubert A, Galperin M, Ojeda-Fournier H, Olson L, O’Boyle M, Comstock C, Andre M. Breast mass segmentation in ultrasound with selective kernel U-Net convolutional neural network. Biomed Signal Process Control 2020; 61:102027. [PMID: 34703489 PMCID: PMC8545275 DOI: 10.1016/j.bspc.2020.102027] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
In this work, we propose a deep learning method for breast mass segmentation in ultrasound (US). Variations in breast mass size and image characteristics make automatic segmentation difficult. To address this issue, we developed a selective kernel (SK) U-Net convolutional neural network. The aim of the SKs was to adjust the network's receptive fields via an attention mechanism, and fuse feature maps extracted with dilated and conventional convolutions. The proposed method was developed and evaluated using US images collected from 882 breast masses. Moreover, we used three datasets of US images collected at different medical centers for testing (893 US images). On our test set of 150 US images, the SK-U-Net achieved a mean Dice score of 0.826, outperforming the regular U-Net (Dice score of 0.778). When evaluated on three separate datasets, the proposed method yielded mean Dice scores ranging from 0.646 to 0.780. Additional fine-tuning of our better-performing model with data collected at different centers improved mean Dice scores by ~6%. SK-U-Net utilized both dilated and regular convolutions to process US images. We found a strong correlation (Spearman's rank coefficient of 0.7) between the utilization of dilated convolutions and breast mass size in the network's expansion path. Our study shows the usefulness of deep learning methods for breast mass segmentation. SK-U-Net implementation and pre-trained weights can be found at github.com/mbyr/bus_seg.
Affiliation(s)
- Michal Byra
- Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Department of Radiology, University of California, San Diego, USA
- Piotr Jarosik
- Department of Information and Computational Science, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Aleksandra Szubert
- Maria Sklodowska-Curie Memorial Cancer Centre and Institute of Oncology, Warsaw, Poland
- Linda Olson
- Department of Radiology, University of California, San Diego, USA
- Mary O’Boyle
- Department of Radiology, University of California, San Diego, USA
- Michael Andre
- Department of Radiology, University of California, San Diego, USA
|
84
|
Abstract
Deep learning methods have shown promising results for accelerating quantitative musculoskeletal (MSK) magnetic resonance imaging (MRI) for T2 and T1ρ relaxometry. These methods improve musculoskeletal tissue segmentation on parametric maps, allowing efficient and accurate T2 and T1ρ relaxometry analysis for monitoring and predicting MSK diseases. Deep learning methods have also shown promising results for disease detection on quantitative MRI, with diagnostic performance superior to that of conventional machine-learning methods for identifying knee osteoarthritis.
Affiliation(s)
- Fang Liu
- Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
|
85
|
Brui E, Efimtcev AY, Fokin VA, Fernandez R, Levchuk AG, Ogier AC, Samsonov AA, Mattei JP, Melchakova IV, Bendahan D, Andreychenko A. Deep learning-based fully automatic segmentation of wrist cartilage in MR images. NMR IN BIOMEDICINE 2020; 33:e4320. [PMID: 32394453 PMCID: PMC7784718 DOI: 10.1002/nbm.4320] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/25/2018] [Revised: 04/10/2020] [Accepted: 04/14/2020] [Indexed: 05/10/2023]
Abstract
The study objective was to investigate the performance of a dedicated convolutional neural network (CNN) optimized for wrist cartilage segmentation from 2D MR images. The CNN utilized a planar architecture and a patch-based (PB) training approach that ensured optimal performance in the presence of a limited amount of training data. The CNN was trained and validated on 20 multi-slice MRI datasets acquired with two different coils in 11 subjects (healthy volunteers and patients). The validation included a comparison with alternative state-of-the-art CNN methods for the segmentation of joints from MR images and with the ground-truth manual segmentation. When trained on the limited training data, the CNN significantly outperformed image-based and PB U-Net networks. Our PB-CNN also demonstrated good agreement with manual segmentation (Sørensen-Dice similarity coefficient [DSC] = 0.81) in the representative (central coronal) slices with a large amount of cartilage tissue. Reduced performance of the network for slices with a very limited amount of cartilage tissue suggests the need for fully 3D convolutional networks to provide uniform performance across the joint. The study also assessed inter- and intra-observer variability of the manual wrist cartilage segmentation (DSC = 0.78-0.88 and 0.9, respectively). The proposed deep learning-based segmentation of the wrist cartilage from MRI could facilitate research into novel imaging markers of wrist osteoarthritis to characterize its progression and response to therapy.
Affiliation(s)
- Ekaterina Brui
- University of Information Technology Mechanics and Optics, International Research Center Nanophotonics and Metamaterials, 199034 S.-Petersburg, Russia
- Aleksandr Y. Efimtcev
- University of Information Technology Mechanics and Optics, International Research Center Nanophotonics and Metamaterials, 199034 S.-Petersburg, Russia
- Federal Almazov North-West Medical Research Center, 197341 S.-Petersburg, Russia
- Vladimir A. Fokin
- University of Information Technology Mechanics and Optics, International Research Center Nanophotonics and Metamaterials, 199034 S.-Petersburg, Russia
- Federal Almazov North-West Medical Research Center, 197341 S.-Petersburg, Russia
- Remi Fernandez
- APHM, Service de Radiologie, Hôpital de la Conception, Marseille, France
- Anatoliy G. Levchuk
- Federal Almazov North-West Medical Research Center, 197341 S.-Petersburg, Russia
- Augustin C. Ogier
- Aix-Marseille Universite, CNRS, Centre de Résonance Magnétique Biologique et Médicale, UMR 7339, Marseille, France
- Alexey A. Samsonov
- University of Wisconsin-Madison, Department of Radiology, Madison, WI 53705-2275 USA
- Jean P. Mattei
- Aix-Marseille Universite, CNRS, Centre de Résonance Magnétique Biologique et Médicale, UMR 7339, Marseille, France
- Assistance Publique Hôpitaux de Marseille, Institut de l’appareil locomoteur, Service de Rhumatologie, Hôpital Sainte Marguerite, Marseille, France
- Irina V. Melchakova
- University of Information Technology Mechanics and Optics, International Research Center Nanophotonics and Metamaterials, 199034 S.-Petersburg, Russia
- David Bendahan
- Aix-Marseille Universite, CNRS, Centre de Résonance Magnétique Biologique et Médicale, UMR 7339, Marseille, France
- Anna Andreychenko
- University of Information Technology Mechanics and Optics, International Research Center Nanophotonics and Metamaterials, 199034 S.-Petersburg, Russia
- Research and Practical Clinical Center of Diagnostics and Telemedicine Technologies, Department of Health Care of Moscow, Moscow, Russia
|
86
|
Machine learning methods to support personalized neuromusculoskeletal modelling. Biomech Model Mechanobiol 2020; 19:1169-1185. [DOI: 10.1007/s10237-020-01367-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Accepted: 07/08/2020] [Indexed: 12/19/2022]
|
87
|
Roemer FW, Demehri S, Omoumi P, Link TM, Kijowski R, Saarakkala S, Crema MD, Guermazi A. State of the Art: Imaging of Osteoarthritis—Revisited 2020. Radiology 2020; 296:5-21. [DOI: 10.1148/radiol.2020192498] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
88
|
Zheng Q, Shellikeri S, Huang H, Hwang M, Sze RW. Deep Learning Measurement of Leg Length Discrepancy in Children Based on Radiographs. Radiology 2020; 296:152-158. [PMID: 32315267 DOI: 10.1148/radiol.2020192003] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Background Radiographic measurement of leg length discrepancy (LLD) is time consuming yet cognitively simple for pediatric radiologists. Purpose To compare deep learning (DL) measurements of LLD in pediatric patients to measurements performed by radiologists. Materials and Methods For this HIPAA-compliant retrospective study, radiographs obtained to evaluate LLD in children between January and August 2018 were identified. LLD was automatically measured by means of image segmentation followed by leg length calculation. On training data, a DL model was trained to segment femurs and tibias on radiographs. The validation set was used to select the optimized model. On testing data, leg lengths were calculated from segmentation masks and compared with measurements from the radiology report. Statistical analysis was performed by using a paired Wilcoxon signed-rank test to compare DL calculations and radiology reports. In addition, the measurement time was manually assessed by a pediatric radiologist and automatically assessed by the DL model on a randomly chosen group of 26 cases; the values were compared with the paired Wilcoxon signed-rank test. Results Radiographs obtained to evaluate LLD in 179 children (mean age ± standard deviation, 12 years ± 3; age range, 5-19 years; 89 boys and 90 girls) were evaluated. Radiographs were randomly divided into training, validation, and testing sets and consisted of studies from 70, 32, and 77 patients, respectively. In the training and validation sets, the DL model showed a high spatial overlap between manual and automatic segmentation masks of pediatric legs (Dice similarity coefficient, 0.94). For the testing set, the correlation between radiology reports and DL-calculated lengths of separated femurs and tibias (r = 0.99; mean absolute error [MAE], 0.45 cm), full pediatric leg lengths (r = 0.99; MAE, 0.45 cm), and full LLD (r = 0.92; MAE, 0.51 cm) was high (P < .001 for all correlations). Calculation time for the DL method per radiograph was faster than the mean time for radiologist manual calculation (1 second vs 96 seconds ± 7, respectively; P < .001). Conclusion A deep learning algorithm measured pediatric leg lengths with high spatial overlap compared with manual measurement at a rate 96 times faster than that of subspecialty-trained pediatric radiologists. © RSNA, 2020 See also the editorial by van Rijn and De Luca in this issue.
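The "leg lengths calculated from segmentation masks" step can be pictured with a simple geometric proxy. The abstract does not give the authors' exact formula, so the sketch below is hypothetical: it assumes bone length is taken as the cranio-caudal extent of a binary mask scaled by the row pixel spacing.

```python
import numpy as np

def bone_length_cm(mask, row_spacing_mm):
    """Approximate bone length as the vertical (cranio-caudal) extent of a
    binary segmentation mask, scaled by the row pixel spacing (mm -> cm).
    Hypothetical illustration, not the published measurement pipeline."""
    rows = np.flatnonzero(mask.any(axis=1))  # image rows containing any bone pixel
    if rows.size == 0:
        return 0.0
    extent_px = rows[-1] - rows[0] + 1
    return extent_px * row_spacing_mm / 10.0

# a mask spanning rows 10..409 at 1 mm row spacing -> 400 px -> 40.0 cm
mask = np.zeros((500, 100), dtype=np.uint8)
mask[10:410, 40:60] = 1
length = bone_length_cm(mask, 1.0)
```

The LLD itself would then be the difference between the left and right per-leg sums of femur and tibia lengths.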
Affiliation(s)
- Qiang Zheng
- Sphoorti Shellikeri
- Hao Huang
- Misun Hwang
- Raymond W Sze
- From the School of Computer and Control Engineering, Yantai University, Yantai, China (Q.Z.); Department of Radiology, Children's Hospital of Philadelphia, 3401 Civic Center Blvd, Philadelphia, PA 19104 (S.S., H.H., M.H., R.W.S.); and Department of Radiology, University of Pennsylvania, Philadelphia, Pa (H.H., M.H., R.W.S.)
|
89
|
Artificial intelligence in orthopedic surgery: current state and future perspective. Chin Med J (Engl) 2020; 132:2521-2523. [PMID: 31658155 PMCID: PMC6846263 DOI: 10.1097/cm9.0000000000000479] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
|
90
|
Automated multi-atlas segmentation of gluteus maximus from Dixon and T1-weighted magnetic resonance images. MAGNETIC RESONANCE MATERIALS IN PHYSICS BIOLOGY AND MEDICINE 2020; 33:677-688. [PMID: 32152794 DOI: 10.1007/s10334-020-00839-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 02/02/2020] [Accepted: 02/18/2020] [Indexed: 01/10/2023]
Abstract
OBJECTIVE To design, develop and evaluate an automated multi-atlas method for segmentation and volume quantification of gluteus maximus from Dixon and T1-weighted images. MATERIALS AND METHODS The multi-atlas segmentation method uses an atlas library constructed from 15 Dixon MRI scans of healthy subjects. A non-rigid registration between each atlas and the target, followed by majority voting label fusion, is used in the segmentation. We propose a region of interest (ROI) to standardize the measurement of muscle bulk. The method was evaluated using the Dice similarity coefficient (DSC) and the relative volume difference (RVD) as metrics, for Dixon and T1-weighted target images. RESULTS The mean (±SD) DSC was 0.94 ± 0.01 for Dixon images and 0.93 ± 0.02 for T1-weighted images. The RVD between the automated and manual segmentations had a mean (±SD) value of 1.5 ± 4.3% for Dixon and 1.5 ± 4.8% for T1-weighted images. In the muscle bulk ROI, the DSC was 0.95 ± 0.01 and the RVD was 0.6 ± 3.8%. CONCLUSION The method allows an accurate fully automated segmentation of gluteus maximus for Dixon and T1-weighted images and provides a relatively accurate volume measurement in shorter times (~20 min) than the current gold-standard manual segmentations (2 h). Visual inspection of the segmentation would be required when higher accuracy is needed.
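The majority-voting label fusion used above combines the label maps propagated from each registered atlas by taking a per-voxel vote. A generic sketch of that fusion step (an illustration of the technique, not the authors' implementation):

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse propagated binary atlas segmentations by per-voxel majority vote."""
    stack = np.stack([np.asarray(a, dtype=np.uint8) for a in atlas_labels])
    votes = stack.sum(axis=0)
    # a voxel is foreground if more than half of the atlases label it foreground
    return (votes > stack.shape[0] / 2).astype(np.uint8)

# three toy 2x2 "atlas" labels; only voxels labeled by >= 2 atlases survive
fused = majority_vote([
    np.array([[1, 0], [1, 1]]),
    np.array([[1, 0], [0, 1]]),
    np.array([[1, 1], [0, 1]]),
])
```

With an even atlas count this strict-majority rule drops tied voxels; other fusion schemes (e.g. weighted voting) break ties differently.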
|
91
|
Krogue JD, Cheng KV, Hwang KM, Toogood P, Meinberg EG, Geiger EJ, Zaid M, McGill KC, Patel R, Sohn JH, Wright A, Darger BF, Padrez KA, Ozhinsky E, Majumdar S, Pedoia V. Automatic Hip Fracture Identification and Functional Subclassification with Deep Learning. Radiol Artif Intell 2020; 2:e190023. [PMID: 33937815 PMCID: PMC8017394 DOI: 10.1148/ryai.2020190023] [Citation(s) in RCA: 68] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2019] [Revised: 11/06/2019] [Accepted: 12/19/2019] [Indexed: 05/01/2023]
Abstract
PURPOSE To investigate the feasibility of automatic identification and classification of hip fractures using deep learning, which may improve outcomes by reducing diagnostic errors and decreasing time to operation. MATERIALS AND METHODS Hip and pelvic radiographs from 1118 studies were reviewed, and 3026 hips were labeled via bounding boxes and classified as normal, displaced femoral neck fracture, nondisplaced femoral neck fracture, intertrochanteric fracture, previous open reduction and internal fixation, or previous arthroplasty. A deep learning-based object detection model was trained to automate the placement of the bounding boxes. A Densely Connected Convolutional Neural Network (DenseNet) was trained on a subset of the bounding box images, and its performance was evaluated on a held-out test set and, on a 100-image subset, by comparison with two groups of human observers: fellowship-trained radiologists and orthopedists, and senior residents in emergency medicine, radiology, and orthopedics. RESULTS The binary accuracy of this model for detecting a fracture was 93.7% (95% confidence interval [CI]: 90.8%, 96.5%), with a sensitivity of 93.2% (95% CI: 88.9%, 97.1%) and a specificity of 94.2% (95% CI: 89.7%, 98.4%). Multiclass classification accuracy was 90.8% (95% CI: 87.5%, 94.2%). The model achieved at least expert-level classification accuracy under all conditions. Additionally, when the model was used as an aid, human performance improved, with aided resident performance approximating unaided fellowship-trained expert performance in the multiclass classification. CONCLUSION A deep learning model identified and classified hip fractures with at least expert-level performance and, when used as an aid, improved human performance, with aided resident performance approximating that of unaided fellowship-trained attending physicians. Supplemental material is available for this article. © RSNA, 2020.
Affiliation(s)
- Kevin M. Hwang
- Paul Toogood
- Eric G. Meinberg
- Erik J. Geiger
- Musa Zaid
- Kevin C. McGill
- Rina Patel
- Jae Ho Sohn
- Alexandra Wright
- Bryan F. Darger
- Kevin A. Padrez
- Eugene Ozhinsky
- Sharmila Majumdar
- Valentina Pedoia
- From the Departments of Orthopaedic Surgery (J.D.K., K.M.H., P.T., E.G.M., E.J.G., M.Z.), Emergency Medicine (B.F.D., K.A.P.), and Radiology and Biomedical Imaging (K.C.M., R.P., J.H.S., A.W., E.O., S.M., V.P.), University of California, San Francisco, 6945 Geary Blvd, San Francisco, CA 94121; and Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, Calif (K.V.C.)
|
92
|
Kijowski R, Demehri S, Roemer F, Guermazi A. Osteoarthritis year in review 2019: imaging. Osteoarthritis Cartilage 2020; 28:285-295. [PMID: 31877380 DOI: 10.1016/j.joca.2019.11.009] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Revised: 10/17/2019] [Accepted: 11/15/2019] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To provide a narrative review of original articles on osteoarthritis (OA) imaging published between April 1, 2018 and March 30, 2019. METHODS All original research articles on OA imaging published in English between April 1, 2018 and March 30, 2019 were identified using a PubMed database search. The search terms of "Osteoarthritis" or "OA" were combined with the search terms "Radiography", "X-Rays", "Magnetic Resonance Imaging", "MRI", "Ultrasound", "US", "Computed Tomography", "Dual Energy X-Ray Absorptiometry", "DXA", "DEXA", "CT", "Nuclear Medicine", "Scintigraphy", "Single-Photon Emission Computed Tomography", "SPECT", "Positron Emission Tomography", "PET", "PET-CT", or "PET-MRI". Articles were reviewed to determine relevance based upon the following criteria: 1) study involved human subjects with OA or risk factors for OA and 2) study involved imaging to evaluate OA disease status or OA treatment response. Relevant articles were ranked according to scientific merit, with the best publications selected for inclusion in the narrative report. RESULTS The PubMed search revealed a total of 1257 articles, of which 256 (20.4%) were considered relevant to OA imaging. Two-hundred twenty-six (87.1%) articles involved the knee joint, while 195 (76.2%) articles involved the use of magnetic resonance imaging (MRI). The proportion of published studies involving the use of MRI was higher than previous years. An increasing number of articles were also published on imaging of subjects with joint injury and on deep learning application in OA imaging. CONCLUSION MRI and other imaging modalities continue to play an important role in research studies designed to better understand the pathogenesis, progression, and treatment of OA.
Affiliation(s)
- R Kijowski
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, USA.
- S Demehri
- Department of Radiology, Johns Hopkins University, Baltimore, MD, USA.
- F Roemer
- Department of Radiology, Boston University, Boston, MA, USA; Department of Radiology, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Universitätsklinikum Erlangen, Erlangen, Germany.
- A Guermazi
- Department of Radiology, Boston University, Boston, MA, USA.
|
93
|
Byra M, Wu M, Zhang X, Jang H, Ma YJ, Chang EY, Shah S, Du J. Knee menisci segmentation and relaxometry of 3D ultrashort echo time cones MR imaging using attention U-Net with transfer learning. Magn Reson Med 2020; 83:1109-1122. [PMID: 31535731] [PMCID: PMC6879791] [DOI: 10.1002/mrm.27969]
Abstract
PURPOSE To develop a deep learning-based method for knee menisci segmentation in 3D ultrashort echo time (UTE) cones MR imaging, and to automatically determine MR relaxation times, namely the T1, T1ρ, and T2* parameters, which can be used to assess knee osteoarthritis (OA). METHODS Whole knee joint imaging was performed using 3D UTE cones sequences to collect data from 61 human subjects. Regions of interest (ROIs) were outlined by 2 experienced radiologists based on subtracted T1ρ-weighted MR images. Transfer learning was applied to develop 2D attention U-Net convolutional neural networks for the menisci segmentation based on each radiologist's ROIs separately. Dice scores were calculated to assess segmentation performance. Next, the T1, T1ρ, and T2* relaxations and ROI areas were determined for the manual and automatic segmentations, then compared. RESULTS The models developed using ROIs provided by 2 radiologists achieved high Dice scores of 0.860 and 0.833, while the radiologists' manual segmentations achieved a Dice score of 0.820. Linear correlation coefficients for the T1, T1ρ, and T2* relaxations calculated using the automatic and manual segmentations ranged between 0.90 and 0.97, and there were no associated differences between the estimated average meniscal relaxation parameters. The deep learning models achieved segmentation performance equivalent to the inter-observer variability of 2 radiologists. CONCLUSION The proposed deep learning-based approach can be used to efficiently generate automatic segmentations and determine meniscal relaxation times. The method has the potential to help radiologists with the assessment of meniscal diseases, such as OA.
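The relaxometry step summarized above (computing T1, T1ρ, and T2* from the segmented ROIs) reduces, in its simplest form, to a voxel-wise exponential fit. As a minimal illustrative sketch, not code from the paper, T2* can be estimated from noiseless multi-echo magnitudes by log-linear least squares; the echo times and signal values below are invented for the example:

```python
import numpy as np

def fit_t2star(te_ms, signal):
    """Estimate T2* (ms) assuming mono-exponential decay S(TE) = S0 * exp(-TE / T2*).
    Taking logs gives a line in TE whose slope is -1/T2*, fit by least squares."""
    slope, _ = np.polyfit(np.asarray(te_ms, float), np.log(np.asarray(signal, float)), 1)
    return -1.0 / slope

# Synthetic 5-echo decay with a ground-truth T2* of 25 ms
te = np.array([0.032, 4.0, 8.0, 12.0, 16.0])  # echo times in ms; UTE first echo ~32 µs
sig = 100.0 * np.exp(-te / 25.0)
print(fit_t2star(te, sig))  # recovers ≈ 25 ms on this noiseless data
```

In practice such models are fit per voxel, with noise handling and more elaborate signal models; this sketch only shows the arithmetic.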
Affiliation(s)
- Michal Byra: Department of Radiology, University of California, San Diego, CA, USA; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Mei Wu: Department of Radiology, University of California, San Diego, CA, USA
- Xiaodong Zhang: Department of Radiology, University of California, San Diego, CA, USA
- Hyungseok Jang: Department of Radiology, University of California, San Diego, CA, USA
- Ya-Jun Ma: Department of Radiology, University of California, San Diego, CA, USA
- Eric Y Chang: Department of Radiology, University of California, San Diego, CA, USA; Radiology Service, VA San Diego Healthcare System, San Diego, USA
- Sameer Shah: Department of Orthopedic Surgery and Bioengineering, University of California, San Diego, CA, USA
- Jiang Du: Department of Radiology, University of California, San Diego, CA, USA

94
Chea P, Mandell JC. Current applications and future directions of deep learning in musculoskeletal radiology. Skeletal Radiol 2020; 49:183-197. [PMID: 31377836] [DOI: 10.1007/s00256-019-03284-z]
Abstract
Deep learning with convolutional neural networks (CNN) is a rapidly advancing subset of artificial intelligence that is ideally suited to solving image-based problems. There are an increasing number of musculoskeletal applications of deep learning, which can be conceptually divided into the categories of lesion detection, classification, segmentation, and non-interpretive tasks. Numerous examples of deep learning achieving expert-level performance in specific tasks in all four categories have been demonstrated in the past few years, although comprehensive interpretation of imaging examinations has not yet been achieved. It is important for the practicing musculoskeletal radiologist to understand the current scope of deep learning as it relates to musculoskeletal radiology. Interest in deep learning from researchers, radiology leadership, and industry continues to increase, and it is likely that these developments will impact the daily practice of musculoskeletal radiology in the near future.
Affiliation(s)
- Pauley Chea: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
- Jacob C Mandell: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.

95
Bonaretti S, Gold GE, Beaupre GS. pyKNEEr: An image analysis workflow for open and reproducible research on femoral knee cartilage. PLoS One 2020; 15:e0226501. [PMID: 31978052] [PMCID: PMC6980400] [DOI: 10.1371/journal.pone.0226501]
Abstract
Transparent research in musculoskeletal imaging is fundamental to reliably investigate diseases such as knee osteoarthritis (OA), a chronic disease impairing femoral knee cartilage. To study cartilage degeneration, researchers have developed algorithms to segment femoral knee cartilage from magnetic resonance (MR) images and to measure cartilage morphology and relaxometry. The majority of these algorithms are not publicly available or require advanced programming skills to be compiled and run. However, to accelerate discoveries and findings, it is crucial to have open and reproducible workflows. We present pyKNEEr, a framework for open and reproducible research on femoral knee cartilage from MR images. pyKNEEr is written in python, uses Jupyter notebook as a user interface, and is available on GitHub with a GNU GPLv3 license. It is composed of three modules: 1) image preprocessing to standardize spatial and intensity characteristics; 2) femoral knee cartilage segmentation for intersubject, multimodal, and longitudinal acquisitions; and 3) analysis of cartilage morphology and relaxometry. Each module contains one or more Jupyter notebooks with narrative, code, visualizations, and dependencies to reproduce computational environments. pyKNEEr facilitates transparent image-based research of femoral knee cartilage because of its ease of installation and use, and its versatility for publication and sharing among researchers. Finally, due to its modular structure, pyKNEEr favors code extension and algorithm comparison. We tested our reproducible workflows with experiments that also constitute an example of transparent research with pyKNEEr, and we compared pyKNEEr performances to existing algorithms in literature review visualizations. We provide links to executed notebooks and executable environments for immediate reproducibility of our findings.
Affiliation(s)
- Serena Bonaretti: Department of Radiology, Stanford University, Stanford, CA, United States of America; Musculoskeletal Research Laboratory, VA Palo Alto Health Care System, Palo Alto, CA, United States of America
- Garry E. Gold: Department of Radiology, Stanford University, Stanford, CA, United States of America
- Gary S. Beaupre: Musculoskeletal Research Laboratory, VA Palo Alto Health Care System, Palo Alto, CA, United States of America; Department of Bioengineering, Stanford University, Stanford, CA, United States of America

96
Deep convolutional neural network-based detection of meniscus tears: comparison with radiologists and surgery as standard of reference. Skeletal Radiol 2020; 49:1207-1217. [PMID: 32170334] [PMCID: PMC7299917] [DOI: 10.1007/s00256-020-03410-2]
Abstract
OBJECTIVE To clinically validate a fully automated deep convolutional neural network (DCNN) for detection of surgically proven meniscus tears. MATERIALS AND METHODS One hundred consecutive patients were retrospectively included, who underwent knee MRI and knee arthroscopy in our institution. All MRI were evaluated for medial and lateral meniscus tears by two musculoskeletal radiologists independently and by DCNN. Included patients were not part of the training set of the DCNN. Surgical reports served as the standard of reference. Statistics included sensitivity, specificity, accuracy, ROC curve analysis, and kappa statistics. RESULTS Fifty-seven percent (57/100) of patients had a tear of the medial and 24% (24/100) of the lateral meniscus, including 12% (12/100) with a tear of both menisci. For medial meniscus tear detection, sensitivity, specificity, and accuracy were for reader 1: 93%, 91%, and 92%, for reader 2: 96%, 86%, and 92%, and for the DCNN: 84%, 88%, and 86%. For lateral meniscus tear detection, sensitivity, specificity, and accuracy were for reader 1: 71%, 95%, and 89%, for reader 2: 67%, 99%, and 91%, and for the DCNN: 58%, 92%, and 84%. Sensitivity for medial meniscus tears was significantly different between reader 2 and the DCNN (p = 0.039), and no significant differences existed for all other comparisons (all p ≥ 0.092). The AUC-ROC of the DCNN was 0.882, 0.781, and 0.961 for detection of medial, lateral, and overall meniscus tear. Inter-reader agreement was very good for the medial (kappa = 0.876) and good for the lateral meniscus (kappa = 0.741). CONCLUSION DCNN-based meniscus tear detection can be performed in a fully automated manner with a similar specificity but a lower sensitivity in comparison with musculoskeletal radiologists.
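The sensitivity, specificity, and accuracy figures reported above are simple functions of true/false positive and negative counts against the surgical reference standard. A minimal sketch of that bookkeeping (not from the study; the toy cohort below is invented for illustration):

```python
def diagnostic_metrics(pred, truth):
    """Sensitivity, specificity, and accuracy for binary tear/no-tear calls,
    computed from true/false positive and negative counts against the reference."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # tears correctly found
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)  # intact correctly cleared
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # false alarms
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)    # missed tears
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(truth)

# Toy cohort: 10 torn and 10 intact menisci; the "reader" finds 8 of the
# tears and flags 1 intact meniscus as torn.
truth = [1] * 10 + [0] * 10
pred = [1] * 8 + [0] * 2 + [1] * 1 + [0] * 9
sens, spec, acc = diagnostic_metrics(pred, truth)
print(sens, spec, acc)  # 0.8 0.9 0.85
```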
97
Pizzolato C, Saxby DJ, Palipana D, Diamond LE, Barrett RS, Teng YD, Lloyd DG. Neuromusculoskeletal Modeling-Based Prostheses for Recovery After Spinal Cord Injury. Front Neurorobot 2019; 13:97. [PMID: 31849634] [PMCID: PMC6900959] [DOI: 10.3389/fnbot.2019.00097]
Abstract
Concurrent stimulation and reinforcement of motor and sensory pathways has been proposed as an effective approach to restoring function after developmental or acquired neurotrauma. This can be achieved by applying multimodal rehabilitation regimens, such as thought-controlled exoskeletons or epidural electrical stimulation, to recover motor pattern generation in individuals with spinal cord injury (SCI). However, the human neuromusculoskeletal (NMS) system has often been oversimplified in designing rehabilitative and assistive devices. As a result, the neuromechanics of the muscles is seldom considered when modeling the relationship between electrical stimulation, mechanical assistance from exoskeletons, and final joint movement. A powerful way to enhance current neurorehabilitation is to develop next-generation prostheses incorporating personalized NMS models of patients. This strategy would enable an individual to voluntarily interface with multiple electromechanical rehabilitation devices targeting key afferent and efferent systems for functional improvement. This narrative review discusses how real-time NMS models can be integrated with finite element (FE) models of musculoskeletal tissues and can interface multiple assistive and robotic devices with individuals with SCI to promote neural restoration. In particular, the utility of NMS models for optimizing muscle stimulation patterns, tracking functional improvement, monitoring safety, and providing augmented feedback during exercise-based rehabilitation is discussed.
Affiliation(s)
- Claudio Pizzolato: School of Allied Health Sciences, Griffith University, Gold Coast, QLD, Australia; Griffith Centre for Biomedical and Rehabilitation Engineering, Menzies Health Institute Queensland, Griffith University, Gold Coast, QLD, Australia
- David J Saxby: School of Allied Health Sciences, Griffith University, Gold Coast, QLD, Australia; Griffith Centre for Biomedical and Rehabilitation Engineering, Menzies Health Institute Queensland, Griffith University, Gold Coast, QLD, Australia
- Dinesh Palipana: Griffith Centre for Biomedical and Rehabilitation Engineering, Menzies Health Institute Queensland, Griffith University, Gold Coast, QLD, Australia; The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Gold Coast, QLD, Australia; Gold Coast Hospital and Health Service, Gold Coast, QLD, Australia; School of Medicine, Griffith University, Gold Coast, QLD, Australia
- Laura E Diamond: School of Allied Health Sciences, Griffith University, Gold Coast, QLD, Australia; Griffith Centre for Biomedical and Rehabilitation Engineering, Menzies Health Institute Queensland, Griffith University, Gold Coast, QLD, Australia
- Rod S Barrett: School of Allied Health Sciences, Griffith University, Gold Coast, QLD, Australia; Griffith Centre for Biomedical and Rehabilitation Engineering, Menzies Health Institute Queensland, Griffith University, Gold Coast, QLD, Australia
- Yang D Teng: Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Harvard Medical School, Charlestown, MA, United States; Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- David G Lloyd: School of Allied Health Sciences, Griffith University, Gold Coast, QLD, Australia; Griffith Centre for Biomedical and Rehabilitation Engineering, Menzies Health Institute Queensland, Griffith University, Gold Coast, QLD, Australia

98
Gaj S, Yang M, Nakamura K, Li X. Automated cartilage and meniscus segmentation of knee MRI with conditional generative adversarial networks. Magn Reson Med 2019; 84:437-449. [PMID: 31793071] [DOI: 10.1002/mrm.28111]
Abstract
PURPOSE Fully automatic tissue segmentation is an essential step in translating quantitative MRI techniques to the clinical setting. The goal of this study was to develop a novel approach based on generative adversarial networks for fully automatic segmentation of knee cartilage and meniscus. THEORY AND METHODS Defining a proper loss function for semantic segmentation that enforces the learning of multiscale spatial constraints in an end-to-end training process is an open problem. In this work, conditional generative adversarial networks were used to improve the segmentation performance of a convolutional neural network, such as UNet alone, by overcoming the problems caused by pixel-wise mapping-based objective functions, and to capture cartilage features during the training of the network. Furthermore, Dice coefficient and cross-entropy losses were incorporated into the loss function to improve model performance. The model was trained and tested on 176 3D DESS (double-echo steady-state) knee images from the Osteoarthritis Initiative data set. RESULTS The proposed model provided excellent segmentation performance for cartilage, with Dice coefficients ranging from 0.84 in patellar cartilage to 0.91 in lateral tibial cartilage and an average Dice coefficient of 0.88. For meniscus segmentation, the model achieved Dice coefficients of 0.89 for the lateral meniscus and 0.87 for the medial meniscus. These results are superior to previously published automatic cartilage and meniscus segmentation methods based on deep learning models such as convolutional neural networks. CONCLUSION The proposed UNet-conditional generative adversarial network model demonstrated a fully automated segmentation method with high accuracy for knee cartilage and meniscus.
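The Dice coefficients reported in this and the neighboring studies measure overlap between a predicted mask and its ground truth: twice the intersection divided by the sum of the two mask areas. A minimal sketch with toy masks, independent of any of the cited implementations:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks (1 = tissue, 0 = background)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0  # two empty masks agree

# Toy 4x4 masks: 3 predicted pixels, 3 true pixels, 2 overlapping
pred  = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice_score(pred, truth), 3))  # 2*2/(3+3) ≈ 0.667
```

The same quantity, softened to accept probabilities, is also what the Dice loss mentioned in the abstract optimizes during training.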
Affiliation(s)
- Sibaji Gaj: Program of Advanced Musculoskeletal Imaging (PAMI), Department of Biomedical Engineering, Cleveland Clinic, Cleveland, Ohio
- Mingrui Yang: Program of Advanced Musculoskeletal Imaging (PAMI), Department of Biomedical Engineering, Cleveland Clinic, Cleveland, Ohio
- Kunio Nakamura: Program of Advanced Musculoskeletal Imaging (PAMI), Department of Biomedical Engineering, Cleveland Clinic, Cleveland, Ohio
- Xiaojuan Li: Program of Advanced Musculoskeletal Imaging (PAMI), Department of Biomedical Engineering, Cleveland Clinic, Cleveland, Ohio

99
Accurate segmentation of overlapping cells in cervical cytology with deep convolutional neural networks. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.06.086]
100
Liu F, Samsonov A, Chen L, Kijowski R, Feng L. SANTIS: Sampling-Augmented Neural neTwork with Incoherent Structure for MR image reconstruction. Magn Reson Med 2019; 82:1890-1904. [PMID: 31166049] [PMCID: PMC6660404] [DOI: 10.1002/mrm.27827]
Abstract
PURPOSE To develop and evaluate a novel deep learning-based reconstruction framework called SANTIS (Sampling-Augmented Neural neTwork with Incoherent Structure) for efficient MR image reconstruction with improved robustness against sampling pattern discrepancy. METHODS Combining a data cycle-consistent adversarial network, end-to-end convolutional neural network mapping, and data fidelity enforcement for reconstructing undersampled MR data, SANTIS additionally uses a sampling-augmented training strategy that extensively varies the undersampling pattern during training, so that the network learns various aliasing structures and thereby removes undersampling artifacts more effectively and robustly. The performance of SANTIS was demonstrated for accelerated knee imaging and liver imaging using a Cartesian trajectory and a golden-angle radial trajectory, respectively. Quantitative metrics were used to assess its performance against different references. The feasibility of SANTIS in reconstructing dynamic contrast-enhanced images was also demonstrated using transfer learning. RESULTS Compared to conventional reconstruction that exploits image sparsity, SANTIS achieved consistently improved reconstruction performance (lower errors and greater image sharpness). Compared to standard learning-based methods without sampling augmentation (e.g., training with a fixed undersampling pattern), SANTIS provided comparable reconstruction performance but significantly improved robustness against sampling pattern discrepancy. SANTIS also achieved encouraging results for reconstructing liver images acquired at different contrast phases. CONCLUSION By extensively varying undersampling patterns, the sampling-augmented training strategy in SANTIS removes undersampling artifacts more robustly. The novel concept behind SANTIS can be particularly useful for improving the robustness of deep learning-based image reconstruction against discrepancy between training and inference, an important but currently less explored topic.
Affiliation(s)
- Fang Liu: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Alexey Samsonov: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Lihua Chen: Department of Radiology, Southwest Hospital, Chongqing, China
- Richard Kijowski: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Li Feng: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA