1
Yu B, Whitmarsh T, Riede P, McDonald S, Kaggie JD, Cox TM, Poole KES, Deegan P. Deep learning-based quantification of osteonecrosis using magnetic resonance images in Gaucher disease. Bone 2024; 186:117142. [PMID: 38834102 DOI: 10.1016/j.bone.2024.117142]
Abstract
Gaucher disease is one of the most common lysosomal storage disorders. Osteonecrosis is a principal clinical manifestation of Gaucher disease and often leads to joint collapse and fractures. The T1-weighted (T1w) MRI modality is widely used to monitor bone involvement in Gaucher disease and to diagnose osteonecrosis. However, objective and quantitative methods for characterizing osteonecrosis remain limited. In this work, we present a deep learning-based quantification approach for the segmentation of osteonecrosis and the extraction of characteristic parameters. We first constructed two independent U-net models to segment osteonecrosis and bone marrow unaffected by osteonecrosis (UBM) in the spine and femur, respectively, based on T1w images from patients in the UK national Gaucherite study database. We manually delineated parcellation maps including osteonecrosis and UBM from 364 T1w images (176 for spine, 188 for femur) as the training datasets, and the trained models were subsequently applied to all 917 T1w images in the database. To quantify the segmentation, we calculated morphological parameters including the volume of osteonecrosis, the volume of UBM, and the fraction of total marrow occupied by osteonecrosis. We then examined the correlation between the calculated features and the bone marrow burden score for marrow infiltration of the corresponding image, and found no strong correlation. In addition, we analyzed the influence of splenectomy and of the interval between the age at first symptom and the age at onset of treatment on the quantitative measurements of osteonecrosis. Consistent with previous studies, the results show that prior splenectomy is closely associated with the fractional volume of osteonecrosis, and that there is a positive relationship between the duration of untreated disease and the quantified extent of osteonecrosis.
We propose this technique as an efficient and reliable tool for assessing the extent of osteonecrosis in MR images of patients and improving prediction of clinically important adverse events.
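The morphological parameters described above reduce to simple arithmetic on binary segmentation masks. A minimal sketch in Python (the function name, toy masks, and voxel volume are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def marrow_fractions(on_mask, ubm_mask, voxel_vol_mm3=1.0):
    """Volume of osteonecrosis (ON), volume of unaffected bone marrow (UBM),
    and the fraction of total marrow occupied by ON, from binary masks."""
    on_vol = on_mask.sum() * voxel_vol_mm3
    ubm_vol = ubm_mask.sum() * voxel_vol_mm3
    total = on_vol + ubm_vol
    frac = on_vol / total if total > 0 else 0.0
    return on_vol, ubm_vol, frac

# toy example: 2 ON voxels and 6 UBM voxels, 2 mm^3 per voxel
on = np.array([[1, 1, 0, 0], [0, 0, 0, 0]])
ubm = np.array([[0, 0, 1, 1], [1, 1, 1, 1]])
on_vol, ubm_vol, frac = marrow_fractions(on, ubm, voxel_vol_mm3=2.0)
```

In practice the masks would come from the two U-net models and the voxel volume from the image header.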
Affiliation(s)
- Boliang Yu
- Department of Medicine, University of Cambridge, Cambridge, UK
- Philipp Riede
- Department of Radiology, University of Cambridge, Cambridge, UK
- Scott McDonald
- Department of Radiology, University of Cambridge, Cambridge, UK
- Joshua D Kaggie
- Department of Radiology, University of Cambridge, Cambridge, UK
- Timothy M Cox
- Department of Medicine, University of Cambridge, Cambridge, UK
- Patrick Deegan
- Department of Medicine, University of Cambridge, Cambridge, UK
2
Woo JJ, Vidhani FR, Zhang YB, Olsen RJ, Nawabi DH, Fitz W, Chen AF, Iorio R, Ramkumar PN. Who Are the Anatomic Outliers Undergoing Total Knee Arthroplasty? A Computed Tomography-Based Analysis of the Hip-Knee-Ankle Axis Across 1,352 Preoperative Computed Tomographies Using a Deep Learning and Computer Vision-Based Pipeline. J Arthroplasty 2024; 39:S188-S199. [PMID: 38548237 DOI: 10.1016/j.arth.2024.03.053]
Abstract
BACKGROUND Dissatisfaction after total knee arthroplasty (TKA) ranges from 15% to 30%. While patient selection may be partially responsible, morphological and reconstructive challenges may be determinants. Preoperative computed tomography (CT) scans for TKA planning allow us to evaluate the hip-knee-ankle axis and establish a baseline phenotypic distribution across anatomic parameters. The purpose of this cross-sectional analysis was to establish the distributions of 27 parameters in a pre-TKA cohort and perform threshold analysis to identify anatomic outliers. METHODS A total of 1,352 pre-TKA CTs were processed. A 2-step deep learning pipeline of classification and segmentation models identified landmark images and then generated contour representations. We used an open-source computer vision library to compute measurements for 27 anatomic metrics along the hip-knee axis. Normative distribution plots were established, and thresholds at the 15th percentile from both extremes were calculated. Metrics falling outside the central 70% were considered outlier indices. A threshold analysis of outlier indices against the proportion of the cohort was performed. RESULTS Significant variation exists in pre-TKA anatomy across 27 normally distributed metrics. Threshold analysis revealed a sigmoid function with a critical point at 9 outlier indices, representing 31.2% of subjects as anatomic outliers. Metrics with the greatest variation related to deformity (tibiofemoral angle, medial proximal tibial angle, lateral distal femoral angle), bony size (tibial width, anteroposterior femoral size, femoral head size, medial femoral condyle size), intraoperative landmarks (posterior tibial slope, transepicondylar and posterior condylar axes), and neglected rotational considerations (acetabular and femoral version, femoral torsion).
CONCLUSIONS In the largest non-industry database of pre-TKA CTs using a fully automated 3-stage deep learning and computer vision-based pipeline, marked anatomic variation exists. In the pursuit of understanding the dissatisfaction rate after TKA, acknowledging that 31% of patients represent anatomic outliers may help us better achieve anatomically personalized TKA, with or without adjunctive technology.
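The percentile-threshold analysis described above can be sketched in a few lines; the synthetic normally distributed data and the critical point of 9 outlier indices mirror the abstract, while the array names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
metrics = rng.normal(size=(1000, 27))  # subjects x anatomic metrics (synthetic)

lo = np.percentile(metrics, 15, axis=0)  # lower 15th-percentile cut-off
hi = np.percentile(metrics, 85, axis=0)  # upper cut-off (central 70% retained)

# per-subject count of metrics falling outside the central band
outlier_idx = ((metrics < lo) | (metrics > hi)).sum(axis=1)

critical = 9  # critical point from the threshold analysis
frac_outliers = (outlier_idx >= critical).mean()
```

With real data, `metrics` would hold the 1,352 subjects' measured values rather than draws from a standard normal.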
Affiliation(s)
- Joshua J Woo
- Brown University/The Warren Alpert School of Brown University, Providence, Rhode Island
- Faizaan R Vidhani
- Brown University/The Warren Alpert School of Brown University, Providence, Rhode Island
- Yibin B Zhang
- Harvard Medical School/Brigham and Women's, Boston, Massachusetts
- Reena J Olsen
- Sports Medicine Institute, Hospital for Special Surgery, New York, New York
- Danyal H Nawabi
- Sports Medicine Institute, Hospital for Special Surgery, New York, New York
- Wolfgang Fitz
- Harvard Medical School/Brigham and Women's, Boston, Massachusetts
- Antonia F Chen
- Harvard Medical School/Brigham and Women's, Boston, Massachusetts
- Richard Iorio
- Harvard Medical School/Brigham and Women's, Boston, Massachusetts
3
Chen X, Liu Q, Deng HH, Kuang T, Lin HHY, Xiao D, Gateno J, Xia JJ, Yap PT. Improving Image Segmentation with Contextual and Structural Similarity. Pattern Recognit 2024; 152:110489. [PMID: 38645435 PMCID: PMC11027435 DOI: 10.1016/j.patcog.2024.110489]
Abstract
Deep learning models for medical image segmentation are usually trained with voxel-wise losses, e.g., cross-entropy loss, focusing on unary supervision without considering inter-voxel relationships. This oversight potentially leads to semantically inconsistent predictions. Here, we propose a contextual similarity loss (CSL) and a structural similarity loss (SSL) to explicitly and efficiently incorporate inter-voxel relationships for improved performance. The CSL promotes consistency in predicted object categories for each image sub-region compared to ground truth. The SSL enforces compatibility between the predictions of voxel pairs by computing pair-wise distances between them, ensuring that voxels of the same class are close together whereas those from different classes are separated by a wide margin in the distribution space. The effectiveness of the CSL and SSL is evaluated using a clinical cone-beam computed tomography (CBCT) dataset of patients with various craniomaxillofacial (CMF) deformities and a public pancreas dataset. Experimental results show that the CSL and SSL outperform state-of-the-art regional loss functions in preserving segmentation semantics.
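A toy illustration of the pair-wise idea behind the SSL (a simplified sketch, not the authors' implementation): same-class voxel representations are pulled together while different-class pairs are pushed apart by at least a margin.

```python
import numpy as np

def pairwise_margin_loss(pred, labels, margin=1.0):
    """Average pair-wise loss over voxel representation vectors: squared
    distance for same-class pairs, squared margin violation otherwise."""
    n = len(labels)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pred[i] - pred[j])
            if labels[i] == labels[j]:
                loss += d ** 2                     # pull same-class pairs together
            else:
                loss += max(0.0, margin - d) ** 2  # push different classes apart
            pairs += 1
    return loss / pairs

# well-separated classes incur no loss; an overlapping pair violates the margin
separated = pairwise_margin_loss(np.array([[0.0, 0.0], [0.0, 0.0], [2.0, 0.0]]), [0, 0, 1])
overlapping = pairwise_margin_loss(np.array([[0.0, 0.0], [0.5, 0.0]]), [0, 1])
```

In a real training loop this term would be computed on (sub-sampled) network outputs and combined with a voxel-wise loss such as cross-entropy.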
Affiliation(s)
- Xiaoyang Chen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, 27599, NC, USA
- Qin Liu
- Department of Computer Science, University of North Carolina, Chapel Hill, 27599, NC, USA
- Hannah H. Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Henry Hung-Ying Lin
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, 27599, NC, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, 10065, NY, USA
- James J. Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, 10065, NY, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, 27599, NC, USA
4
Yu B, Kaku A, Liu K, Parnandi A, Fokas E, Venkatesan A, Pandit N, Ranganath R, Schambra H, Fernandez-Granda C. Quantifying impairment and disease severity using AI models trained on healthy subjects. NPJ Digit Med 2024; 7:180. [PMID: 38969786 PMCID: PMC11226623 DOI: 10.1038/s41746-024-01173-x]
Abstract
Automatic assessment of impairment and disease severity is a key challenge in data-driven medicine. We propose a framework to address this challenge, which leverages AI models trained exclusively on healthy individuals. The COnfidence-Based chaRacterization of Anomalies (COBRA) score exploits the decrease in confidence of these models when presented with impaired or diseased patients to quantify their deviation from the healthy population. We applied the COBRA score to address a key limitation of current clinical evaluation of upper-body impairment in stroke patients. The gold-standard Fugl-Meyer Assessment (FMA) requires in-person administration by a trained assessor for 30-45 minutes, which restricts monitoring frequency and precludes physicians from adapting rehabilitation protocols to the progress of each patient. The COBRA score, computed automatically in under one minute, is shown to be strongly correlated with the FMA on an independent test cohort for two different data modalities: wearable sensors (ρ = 0.814, 95% CI [0.700, 0.888]) and video (ρ = 0.736, 95% CI [0.584, 0.838]). To demonstrate the generalizability of the approach to other conditions, the COBRA score was also applied to quantify severity of knee osteoarthritis from magnetic resonance imaging scans, again achieving significant correlation with an independent clinical assessment (ρ = 0.644, 95% CI [0.585, 0.696]).
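The core intuition behind a confidence-based anomaly score can be sketched in a few lines, assuming a model trained only on healthy subjects that emits a per-sample confidence in [0, 1]; the simple averaging shown here is an illustrative stand-in for the paper's aggregation:

```python
import numpy as np

def cobra_like_score(confidences):
    """Aggregate the drop in confidence of a healthy-trained model as a
    proxy for deviation from the healthy population (toy sketch)."""
    c = np.asarray(confidences, dtype=float)
    return float(np.mean(1.0 - c))

# the model stays confident on near-healthy movement, less so on impaired movement
healthy_like = cobra_like_score([0.95, 0.90, 0.92])
impaired = cobra_like_score([0.40, 0.35, 0.50])
```

A higher score indicates a larger deviation from the healthy training distribution, which is the quantity correlated against the FMA above.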
Affiliation(s)
- Boyang Yu
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Aakash Kaku
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Kangning Liu
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Avinash Parnandi
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Emily Fokas
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Anita Venkatesan
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Natasha Pandit
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Rajesh Ranganath
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY, 10012, USA
- Heidi Schambra
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Carlos Fernandez-Granda
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY, 10012, USA
5
Chen S, Zhang Z. A Semi-Automatic Magnetic Resonance Imaging Annotation Algorithm Based on Semi-Weakly Supervised Learning. Sensors (Basel) 2024; 24:3893. [PMID: 38931677 PMCID: PMC11207229 DOI: 10.3390/s24123893]
Abstract
The annotation of magnetic resonance imaging (MRI) images plays an important role in deep learning-based MRI segmentation tasks. Semi-automatic annotation algorithms help improve the efficiency and reduce the difficulty of MRI image annotation. However, existing deep learning-based semi-automatic annotation algorithms show poor pre-annotation performance when segmentation labels are insufficient. In this paper, we propose a semi-automatic MRI annotation algorithm based on semi-weakly supervised learning. To achieve better pre-annotation performance with insufficient segmentation labels, we introduce semi-supervised and weakly supervised learning and propose a semi-weakly supervised segmentation algorithm based on sparse labels. In addition, to increase the contribution of each individual segmentation label to the performance of the pre-annotation model, we design an iterative annotation strategy based on active learning. Experimental results on public MRI datasets show that the proposed algorithm achieved pre-annotation performance equivalent to that of a fully supervised learning algorithm while using far fewer segmentation labels, demonstrating its effectiveness.
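An uncertainty-driven selection step of the kind used in active-learning annotation loops can be sketched as follows; ranking unlabeled images by mean binary entropy is an illustrative choice, not necessarily the authors' criterion:

```python
import numpy as np

def most_uncertain(prob_maps):
    """Return the index of the unlabeled image whose mean per-voxel binary
    entropy is highest, i.e. the next candidate for manual annotation."""
    eps = 1e-12  # avoid log(0)
    entropies = []
    for p in prob_maps:  # p: predicted foreground probability per voxel
        h = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
        entropies.append(h.mean())
    return int(np.argmax(entropies))

confident = np.array([0.01, 0.99, 0.02])  # model nearly certain everywhere
uncertain = np.array([0.50, 0.45, 0.55])  # predictions close to 0.5
pick = most_uncertain([confident, uncertain])
```

Each selected image is annotated and added to the training set, so that every new segmentation label targets the model's weakest predictions.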
Affiliation(s)
- Shaolong Chen
- School of Sino-German Intelligent Manufacturing, Shenzhen City Polytechnic, Shenzhen 518000, China
- School of Electronics and Communication Engineering, Sun Yat-Sen University, Shenzhen 518000, China
- Zhiyong Zhang
- School of Electronics and Communication Engineering, Sun Yat-Sen University, Shenzhen 518000, China
6
Vidhani FR, Woo JJ, Zhang YB, Olsen RJ, Ramkumar PN. Automating Linear and Angular Measurements for the Hip and Knee After Computed Tomography: Validation of a Three-Stage Deep Learning and Computer Vision-Based Pipeline for Pathoanatomic Assessment. Arthroplast Today 2024; 27:101394. [PMID: 39071819 PMCID: PMC11282415 DOI: 10.1016/j.artd.2024.101394]
Abstract
Background Variability in the bony morphology of pathologic hips and knees is a challenge in automating preoperative computed tomography (CT) scan measurements. With the increasing prevalence of CT for advanced preoperative planning, processing these data represents a critical bottleneck in presurgical planning, research, and development. The purpose of this study was to demonstrate a reproducible and scalable methodology for analyzing CT-based hip and knee anatomy for perioperative planning and execution. Methods One hundred patients with preoperative CT scans undergoing total knee arthroplasty for osteoarthritis were processed. A two-step deep learning pipeline of classification and segmentation models was developed that identifies landmark images and then generates contour representations. We utilized an open-source computer vision library to compute measurements. Classification models were assessed by accuracy, precision, and recall. Segmentation models were evaluated using Dice and mean Intersection over Union (IoU) metrics. Contour measurements were compared against manual measurements to validate posterior condylar axis angle, sulcus angle, trochlear groove-tibial tuberosity distance, acetabular anteversion, and femoral version. Results Classifiers identified landmark images with accuracies of 0.91 and 0.88 for the hip and knee models, respectively. Segmentation models demonstrated mean IoU scores above 0.95, with the highest Dice coefficient of 0.957 [0.954-0.961] (UNet3+) and the highest mean IoU of 0.965 [0.961-0.969] (Attention U-Net). There were no statistically significant differences between measurements taken automatically vs manually (P > 0.05). The average time for the pipeline to preprocess (48.65 ± 4.41 s), classify/retrieve landmark images (8.36 ± 3.40 s), segment images (<1 s), and obtain measurements was 2.58 (± 1.92) minutes.
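The Dice coefficient and IoU used to evaluate the segmentation models are standard overlap measures between a predicted and a ground-truth binary mask; a minimal sketch with toy masks:

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice coefficient and Intersection over Union between binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou

# toy 2x3 masks: 2 voxels agree, 1 false positive, 1 false negative
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 1, 0], [0, 0, 1]])
dice, iou = dice_and_iou(pred, gt)
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the reported Dice and mean IoU values differ slightly.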
Conclusions A fully automated three-stage deep learning and computer vision-based pipeline of classification and segmentation models accurately localized, segmented, and measured landmark hip and knee images for patients undergoing total knee arthroplasty. Incorporation of clinical parameters, like patient-reported outcome measures and instability risk, will be important considerations alongside anatomic parameters.
Affiliation(s)
- Faizaan R. Vidhani
- Brown University/The Warren Alpert School of Brown University, Providence, RI, USA
- Joshua J. Woo
- Brown University/The Warren Alpert School of Brown University, Providence, RI, USA
- Yibin B. Zhang
- Harvard Medical School/Brigham and Women’s, Boston, MA, USA
- Reena J. Olsen
- Sports Medicine Institute, Hospital for Special Surgery, New York, NY, USA
7
Luo P, Lu L, Xu R, Jiang L, Li G. Gaining Insight into Updated MR Imaging for Quantitative Assessment of Cartilage Injury in Knee Osteoarthritis. Curr Rheumatol Rep 2024. [PMID: 38809506 DOI: 10.1007/s11926-024-01152-x]
Abstract
PURPOSE OF THE REVIEW Knee osteoarthritis (KOA) entails progressive cartilage degradation. This review surveys updated MRI techniques for quantitative assessment of cartilage morphology, biochemical composition, and microstructural alterations, discussing their clinical advantages, limitations, and research applicability. RECENT FINDINGS Compositional MRI techniques, such as T2/T2* mapping, T1rho mapping, gagCEST, dGEMRIC, sodium imaging, diffusion-weighted imaging, and diffusion-tensor imaging, provide insights into cartilage injury in KOA. These methods quantitatively measure collagen, glycosaminoglycans, and water content, revealing important information about biochemical compositional and microstructural alterations. Innovative techniques like hybrid multi-dimensional MRI and diffusion-relaxation correlation spectrum imaging show potential in depicting initial cartilage changes at a sub-voxel level. Integration of automated image analysis tools addresses limitations in manual cartilage segmentation, ensuring robust and reproducible assessments of KOA cartilage. Compositional MRI techniques reveal microstructural changes in cartilage. Multi-dimensional MR imaging assesses biochemical alterations in KOA-afflicted cartilage, aiding early identification of degeneration. Integrating artificial intelligence enhances cartilage analysis, supporting optimal diagnostic accuracy for early KOA detection and monitoring.
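As a concrete example of the quantitative mapping techniques discussed, per-voxel T2 values are commonly estimated from a mono-exponential signal decay across echo times, S(TE) = S0 · exp(-TE/T2); a sketch assuming noiseless synthetic data and a log-linear least-squares fit:

```python
import numpy as np

def fit_t2(te_ms, signal):
    """Mono-exponential T2 fit via log-linearization:
    log S = log S0 - TE / T2, solved by least squares."""
    slope, intercept = np.polyfit(te_ms, np.log(signal), 1)
    return -1.0 / slope, np.exp(intercept)  # (T2 in ms, S0)

# synthetic cartilage-like decay with a known T2 of 35 ms
te = np.array([10.0, 20.0, 40.0, 60.0])
s = 1000.0 * np.exp(-te / 35.0)
t2, s0 = fit_t2(te, s)
```

With real multi-echo data this fit is run voxel-wise to produce the T2 map; noise and multi-compartment effects make nonlinear fitting preferable in practice.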
Affiliation(s)
- Peng Luo
- Department of Radiology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, 110 Ganhe Rd, Shanghai, 200437, China
- Li Lu
- Department of Radiology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, 110 Ganhe Rd, Shanghai, 200437, China
- Run Xu
- Department of Radiology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, 110 Ganhe Rd, Shanghai, 200437, China
- Lei Jiang
- Department of Radiology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, 110 Ganhe Rd, Shanghai, 200437, China
- Guanwu Li
- Department of Radiology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, 110 Ganhe Rd, Shanghai, 200437, China
8
Li S, Cao P, Li J, Chen T, Luo P, Ruan G, Zhang Y, Wang X, Han W, Zhu Z, Dang Q, Wang Q, Zhang M, Bai Q, Chai Z, Yang H, Chen H, Tang M, Akbar A, Tack A, Hunter DJ, Ding C. Integrating Radiomics and Neural Networks for Knee Osteoarthritis Incidence Prediction. Arthritis Rheumatol 2024. [PMID: 38751101 DOI: 10.1002/art.42915]
Abstract
OBJECTIVE Accurately predicting knee osteoarthritis (KOA) is essential for early detection and personalized treatment. We aimed to develop and test a magnetic resonance imaging (MRI)-based joint space (JS) radiomic model (RM) to predict radiographic KOA incidence through neural networks by integrating meniscus and femorotibial cartilage radiomic features. METHODS In the Osteoarthritis Initiative cohort, participants with knees without radiographic KOA at baseline but at high risk for radiographic KOA were included. Case knees developed radiographic KOA over four years, whereas control knees did not. We randomly split the participants into development and test cohorts (8:2) and extracted features from baseline three-dimensional double-echo steady-state sequence MRI. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity in both cohorts. Nine resident surgeons performed the reader experiment without and with the JS-RM aid. RESULTS Our study included 549 knees in the development cohort (275 knees of patients with KOA vs 274 knees of controls) and 137 knees in the test cohort (68 knees of patients with KOA vs 69 knees of controls). In the test cohort, JS-RM had favorable accuracy for predicting radiographic KOA incidence, with an AUC of 0.931 (95% confidence interval [CI] 0.876-0.963), a sensitivity of 84.4% (95% CI 83.9%-84.9%), and a specificity of 85.6% (95% CI 85.2%-86.0%). The mean specificity and sensitivity of resident surgeons reading MRI to predict radiographic KOA incidence increased from 0.474 (95% CI 0.333-0.614) and 0.586 (95% CI 0.429-0.743) without JS-RM assistance to 0.874 (95% CI 0.847-0.901) and 0.812 (95% CI 0.742-0.881) with JS-RM assistance, respectively (P < 0.001). CONCLUSION JS-RM integrating the features of the meniscus and cartilage showed improved predictive value for radiographic KOA incidence.
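The sensitivity and specificity reported above come from the binary confusion counts of incidence predictions; a minimal sketch with toy labels (not the study's data):

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity of binary incidence predictions
    (1 = developed radiographic KOA, 0 = control)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed cases
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# toy cohort: 3 case knees, 3 control knees
sens, spec = sens_spec([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

The AUC additionally sweeps the decision threshold over the model's continuous output rather than using a single binarization.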
Affiliation(s)
- Shengfa Li
- Zhujiang Hospital of Southern Medical University, Guangzhou, The Third People's Hospital of Chengdu, Affiliated Hospital of Southwest Jiaotong University, The Second Affiliated Chengdu Hospital of Chongqing Medical University, Chengdu, China
- Peihua Cao
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Jia Li
- Nanfang Hospital, Southern Medical University, Guangzhou, China
- Tianyu Chen
- The Third Affiliated Hospital of Southern Medical University, Guangzhou, China
- Ping Luo
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Guangfeng Ruan
- Guangzhou First People's Hospital, South China University of Technology, Guangzhou, China
- Yan Zhang
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Xiaoshuai Wang
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Weiyu Han
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Zhaohua Zhu
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Qin Dang
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Qianyi Wang
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Mengdi Zhang
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Qiushun Bai
- Southern Medical University, Guangzhou, China
- Zhiyi Chai
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Hao Yang
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Haowei Chen
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Mingze Tang
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Arafat Akbar
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- David J Hunter
- Zhujiang Hospital of Southern Medical University, Guangzhou, China, and Royal North Shore Hospital and University of Sydney, Sydney, New South Wales, Australia
- Changhai Ding
- Zhujiang Hospital of Southern Medical University; Guangzhou First People's Hospital, South China University of Technology, Guangzhou, China; and University of Tasmania, Hobart, Tasmania, Australia
9
Gatti AA, Blankemeier L, Van Veen D, Hargreaves B, Delp SL, Gold GE, Kogan F, Chaudhari AS. ShapeMed-Knee: A Dataset and Neural Shape Model Benchmark for Modeling 3D Femurs. medRxiv [Preprint] 2024:2024.05.06.24306965. [PMID: 38766040 PMCID: PMC11100941 DOI: 10.1101/2024.05.06.24306965]
Abstract
Analyzing anatomic shapes of tissues and organs is pivotal for accurate disease diagnostics and clinical decision-making. One prominent disease that depends on anatomic shape analysis is osteoarthritis, which affects 30 million Americans. To advance osteoarthritis diagnostics and prognostics, we introduce ShapeMed-Knee, a 3D shape dataset with 9,376 high-resolution, medical-imaging-based 3D shapes of both femur bone and cartilage. In addition to the data, ShapeMed-Knee includes two benchmarks for assessing reconstruction accuracy and five clinical prediction tasks that assess the utility of learned shape representations. Leveraging ShapeMed-Knee, we develop and evaluate a novel hybrid explicit-implicit neural shape model which achieves up to 40% better reconstruction accuracy than a statistical shape model and an implicit neural shape model. Our hybrid models achieve state-of-the-art performance for preserving cartilage biomarkers; they are also the first models to successfully predict localized structural features of osteoarthritis, outperforming shape models and convolutional neural networks applied to raw magnetic resonance images and segmentations. The ShapeMed-Knee dataset enables medical evaluations of how well models reconstruct multiple anatomic surfaces and embed meaningful disease-specific information. ShapeMed-Knee reduces barriers to applying 3D modeling in medicine, and our benchmarks highlight that advances in 3D modeling can enhance diagnosis and risk stratification for complex diseases. The dataset, code, and benchmarks will be made freely accessible.
Affiliation(s)
- Anthony A Gatti
- Department of Radiology at Stanford University, Stanford, CA, 94305, USA
- Louis Blankemeier
- Department of Electrical Engineering at Stanford University, Stanford, CA, 94305, USA
- Dave Van Veen
- Department of Electrical Engineering at Stanford University, Stanford, CA, 94305, USA
- Brian Hargreaves
- Department of Radiology at Stanford University, Stanford, CA, 94305, USA
- Scott L Delp
- Department of Bioengineering at Stanford University, Stanford, CA, 94305, USA
- Garry E Gold
- Department of Radiology at Stanford University, Stanford, CA, 94305, USA
- Feliks Kogan
- Department of Radiology at Stanford University, Stanford, CA, 94305, USA
- Akshay S Chaudhari
- Department of Radiology at Stanford University, Stanford, CA, 94305, USA
10
Amiranashvili T, Lüdke D, Li HB, Zachow S, Menze BH. Learning continuous shape priors from sparse data with neural implicit functions. Med Image Anal 2024; 94:103099. [PMID: 38395009 DOI: 10.1016/j.media.2024.103099]
Abstract
Statistical shape models are an essential tool for various tasks in medical image analysis, including shape generation, reconstruction and classification. Shape models are learned from a population of example shapes, which are typically obtained through segmentation of volumetric medical images. In clinical practice, highly anisotropic volumetric scans with large slice distances are prevalent, e.g., to reduce radiation exposure in CT or image acquisition time in MR imaging. For existing shape modeling approaches, the resolution of the emerging model is limited to the resolution of the training shapes. Therefore, any missing information between slices prohibits existing methods from learning a high-resolution shape prior. We propose a novel shape modeling approach that can be trained on sparse, binary segmentation masks with large slice distances. This is achieved through employing continuous shape representations based on neural implicit functions. After training, our model can reconstruct shapes from various sparse inputs at high target resolutions beyond the resolution of individual training examples. We successfully reconstruct high-resolution shapes from as few as three orthogonal slices. Furthermore, our shape model allows us to embed various sparse segmentation masks into a common, low-dimensional latent space, independent of the acquisition direction, resolution, spacing, and field of view. We show that the emerging latent representation discriminates between healthy and pathological shapes, even when provided with sparse segmentation masks. Lastly, we qualitatively demonstrate that the emerging latent space is smooth and captures characteristic modes of shape variation. We evaluate our shape model on two anatomical structures: the lumbar vertebra and the distal femur, both from publicly available datasets.
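The resolution independence of a continuous implicit representation can be illustrated with an analytic stand-in for a trained network: the same coordinate-to-occupancy function is queried on grids of arbitrary resolution (the sphere and grid sizes are illustrative, not from the paper):

```python
import numpy as np

def occupancy(points, radius=1.0):
    """Stand-in for a trained neural implicit function: maps continuous
    3D coordinates to occupancy (an analytic sphere, for illustration)."""
    return (np.linalg.norm(points, axis=-1) <= radius).astype(float)

def reconstruct(resolution):
    """Sample the continuous representation on a grid of any resolution."""
    ax = np.linspace(-1.5, 1.5, resolution)
    grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)
    occ = occupancy(grid.reshape(-1, 3))
    return occ.reshape(resolution, resolution, resolution)

coarse = reconstruct(8)  # training-like, low-resolution sampling
fine = reconstruct(32)   # queried far beyond the training resolution
```

Because the shape lives in the function, not in any fixed voxel grid, the fine reconstruction approximates the true volume fraction more closely than the coarse one; a learned model is queried the same way.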
Affiliation(s)
- Tamaz Amiranashvili, Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; Department of Computer Science, Technical University of Munich, Munich, Germany
- David Lüdke, Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany; Department of Computer Science, Technical University of Munich, Munich, Germany
- Hongwei Bran Li, Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; Department of Computer Science, Technical University of Munich, Munich, Germany
- Stefan Zachow, Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- Bjoern H Menze, Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; Department of Computer Science, Technical University of Munich, Munich, Germany
11
Zhao J, Jiang T, Lin Y, Chan LC, Chan PK, Wen C, Chen H. Adaptive Fusion of Deep Learning With Statistical Anatomical Knowledge for Robust Patella Segmentation From CT Images. IEEE J Biomed Health Inform 2024; 28:2842-2853. PMID: 38446653. DOI: 10.1109/jbhi.2024.3372576.
Abstract
Knee osteoarthritis (KOA), a leading joint disease, can be assessed by examining the shape of the patella to spot potential abnormal variations. To assist doctors in the diagnosis of KOA, a robust automatic patella segmentation method is in high demand in clinical practice. Deep learning methods, especially convolutional neural networks (CNNs), have been widely applied to medical image segmentation in recent years. Nevertheless, poor image quality and limited data still pose challenges to segmentation via CNNs. On the other hand, statistical shape models (SSMs) can generate shape priors that yield anatomically reliable segmentations across varying instances. Thus, in this work, we propose an adaptive fusion framework that explicitly combines deep neural networks with anatomical knowledge from SSMs for robust patella segmentation. The framework adjusts the weight of each segmentation candidate in the fusion according to its segmentation performance. We also propose a voxel-wise refinement strategy to make the CNN segmentation more anatomically correct. Extensive experiments and thorough assessment on various mainstream CNN backbones for patella segmentation in low-data regimes demonstrate that our framework can be flexibly attached to a CNN model, significantly improving its performance when labeled training data are limited and input images are of poor quality.
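The adaptive fusion idea (weighting segmentation candidates, such as a CNN output and an SSM fit, by their estimated performance) can be sketched as a performance-weighted average of probability maps. The normalization scheme below is a simple stand-in, not the paper's actual weighting mechanism, and the score values are illustrative:

```python
import numpy as np

def adaptive_fusion(candidates, scores):
    """Fuse probability maps from segmentation candidates (e.g., a CNN and an
    SSM-based fit), weighting each map by an estimated performance score."""
    scores = np.asarray(scores, float)
    weights = scores / scores.sum()                # normalize weights to sum to 1
    stacked = np.stack([np.asarray(c, float) for c in candidates])
    return np.tensordot(weights, stacked, axes=1)  # weighted average map

cnn_prob = np.array([[0.9, 0.2], [0.8, 0.1]])  # hypothetical CNN probabilities
ssm_prob = np.array([[0.7, 0.4], [0.6, 0.3]])  # hypothetical SSM probabilities
fused = adaptive_fusion([cnn_prob, ssm_prob], scores=[0.75, 0.25])
# fused[0, 0] = 0.75 * 0.9 + 0.25 * 0.7 = 0.85
```

A final binarization (e.g., thresholding the fused map at 0.5) would give the combined segmentation.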
12
Kofler A, Wald C, Kolbitsch C, V Tycowicz C, Ambellan F. Joint reconstruction and segmentation in undersampled 3D knee MRI combining shape knowledge and deep learning. Phys Med Biol 2024; 69:095022. PMID: 38527376. DOI: 10.1088/1361-6560/ad3797.
Abstract
Objective. Task-adapted image reconstruction methods using end-to-end trainable neural networks (NNs) have been proposed to optimize reconstruction for subsequent processing tasks, such as segmentation. However, their training typically requires considerable hardware resources and thus, only relatively simple building blocks, e.g. U-Nets, are typically used, which, albeit powerful, do not integrate model-specific knowledge. Approach. In this work, we extend an end-to-end trainable task-adapted image reconstruction method for a clinically realistic reconstruction and segmentation problem of bone and cartilage in 3D knee MRI by incorporating statistical shape models (SSMs). The SSMs model the prior information and help to regularize the segmentation maps as a final post-processing step. We compare the proposed method to a simultaneous multitask learning approach for image reconstruction and segmentation (MTL) and to a complex SSMs-informed segmentation pipeline (SIS). Main results. Our experiments show that the combination of joint end-to-end training and SSMs to further regularize the segmentation maps obtained by MTL highly improves the results, especially in terms of mean and maximal surface errors. In particular, we achieve the segmentation quality of SIS and, at the same time, a substantial model reduction that yields a five-fold decimation in model parameters and a computational speedup of an order of magnitude. Significance. Remarkably, even for undersampling factors of up to R = 8, the obtained segmentation maps are of comparable quality to those obtained by SIS from ground-truth images.
Affiliation(s)
- A Kofler, Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany
- C Wald, Department of Mathematics, Technical University of Berlin, Berlin, Germany
- C Kolbitsch, Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany
- C V Tycowicz, Department of Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- F Ambellan, Department of Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
13
Sun C, Gao H, Wu S, Lu Q, Wang Y, Cai X. Evaluation of the consistency of the MRI-based AI segmentation cartilage model using the natural tibial plateau cartilage. J Orthop Surg Res 2024; 19:247. PMID: 38632625. PMCID: PMC11025227. DOI: 10.1186/s13018-024-04680-5.
Abstract
OBJECTIVE The study aims to evaluate the accuracy of an MRI-based artificial intelligence (AI) cartilage segmentation model by comparing it to the natural tibial plateau cartilage. METHODS This study included 33 patients (41 knees) with severe knee osteoarthritis scheduled to undergo total knee arthroplasty (TKA). All patients had thin-section MRI before TKA. The study comprised two parts: (i) To evaluate the 2D accuracy of the MRI-based AI segmentation cartilage model, the natural tibial plateau was used as the gold standard. The AI segmentation cartilage model and the natural tibial plateau were rendered as binary (black and white) simulated photographic images using simulation photography, and the two sets of images were compared to compute 2D Dice similarity coefficients (DSC). (ii) To evaluate the model's 3D accuracy, a hand-crafted cartilage model based on knee CT was established and used as the gold standard for assessing the 2D and 3D consistency between the MRI-based AI segmentation cartilage model and the hand-crafted CT-based cartilage model; 3D registration was applied to both models. Correlations between the MRI-based AI knee cartilage model and the CT-based knee cartilage model were also assessed with the Pearson correlation coefficient. RESULTS The AI segmentation cartilage model produced reasonably high two-dimensional DSC. The average 2D DSC between the MRI-based AI cartilage model and the tibial plateau cartilage was 0.83, and the average 2D DSC between the AI segmentation cartilage model and the CT-based cartilage model was 0.82. As for 3D consistency, the average 3D DSC between the MRI-based AI cartilage model and the CT-based cartilage model was 0.52. However, quantification of cartilage segmentation with the AI and CT-based models showed excellent correlation (r = 0.725; P < 0.05).
CONCLUSION Our study demonstrated that the MRI-based AI cartilage model can reliably extract morphologic features such as cartilage shape and defect location in the tibial plateau cartilage. This approach could potentially benefit clinical practice, such as diagnosing osteoarthritis. However, in terms of cartilage thickness and three-dimensional accuracy, the MRI-based AI cartilage model underestimates the actual cartilage volume. Previous AI verification methods may not be completely accurate and should be verified against natural cartilage images. Combining multiple verification methods will improve the accuracy of the AI model.
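The 2D and 3D Dice similarity coefficients reported above follow the standard overlap definition, DSC = 2|A ∩ B| / (|A| + |B|), which applies identically to 2D pixel masks and 3D voxel masks. A minimal sketch:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks (2D or 3D)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Two empty masks are defined here as a perfect match.
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 2D example: a 4-pixel prediction against a 6-pixel ground truth.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True  # 4 pixels
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True      # 6 pixels
# overlap = 4 pixels, so DSC = 2 * 4 / (4 + 6) = 0.8
print(dice_coefficient(pred, gt))
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which is why the drop from 0.82-0.83 (2D) to 0.52 (3D) above signals a substantial volumetric disagreement despite good per-slice agreement.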
Affiliation(s)
- Changjiao Sun, Joint Diseases Center, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, No. 168 Litang Road, Dongxiaokou Town, Changping District, Beijing, 102218, China
- Hong Gao, Beijing MEDERA Medical Group, Beijing, 102200, China
- Sha Wu, Joint Diseases Center, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, No. 168 Litang Road, Dongxiaokou Town, Changping District, Beijing, 102218, China; Beijing MEDERA Medical Group, Beijing, 102200, China
- Qian Lu, Nuctech Company Limited, Beijing, 100083, China
- Yakui Wang, Radiology Department, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Xu Cai, Joint Diseases Center, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, No. 168 Litang Road, Dongxiaokou Town, Changping District, Beijing, 102218, China; Beijing MEDERA Medical Group, Beijing, 102200, China
14
Woo B, Engstrom C, Baresic W, Fripp J, Crozier S, Chandra SS. Automated anomaly-aware 3D segmentation of bones and cartilages in knee MR images from the Osteoarthritis Initiative. Med Image Anal 2024; 93:103089. PMID: 38246088. DOI: 10.1016/j.media.2024.103089.
Abstract
In medical image analysis, automated segmentation of multi-component anatomical entities, with the possible presence of variable anomalies or pathologies, is a challenging task. In this work, we develop a multi-step approach using U-Net-based models to initially detect anomalies (bone marrow lesions, bone cysts) in the distal femur, proximal tibia and patella from 3D magnetic resonance (MR) images in individuals with varying grades of knee osteoarthritis. Subsequently, the extracted data are used for downstream tasks involving semantic segmentation of individual bone and cartilage volumes as well as bone anomalies. For anomaly detection, U-Net-based models were developed to reconstruct bone volume profiles of the femur and tibia in images via inpainting so anomalous bone regions could be replaced with close to normal appearances. The reconstruction error was used to detect bone anomalies. An anomaly-aware segmentation network, which was compared to anomaly-naïve segmentation networks, was used to provide a final automated segmentation of the individual femoral, tibial and patellar bone and cartilage volumes from the knee MR images which contain a spectrum of bone anomalies. The anomaly-aware segmentation approach provided up to 58% reduction in Hausdorff distances for bone segmentations compared to the results from anomaly-naïve segmentation networks. In addition, the anomaly-aware networks were able to detect bone anomalies in the MR images with greater sensitivity and specificity (area under the receiver operating characteristic curve [AUC] up to 0.896) compared to anomaly-naïve segmentation networks (AUC up to 0.874).
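The inpainting-based detection step described above amounts to comparing the observed image with a reconstructed "normal-appearance" version and flagging voxels where they diverge. A minimal sketch of that comparison (the inpainting network itself is omitted, and the threshold value is illustrative, not the paper's):

```python
import numpy as np

def anomaly_map(image, reconstruction, threshold=0.2):
    """Flag voxels where an inpainting model's normal-appearance
    reconstruction deviates strongly from the observed image.

    Returns a boolean anomaly mask and the mean absolute error,
    which can serve as an image-level anomaly score."""
    error = np.abs(np.asarray(image, float) - np.asarray(reconstruction, float))
    return error > threshold, float(error.mean())

# Toy example: a uniform 'bone' volume with one lesion-like voxel that the
# reconstructor replaces with normal intensity.
observed = np.full((4, 4, 4), 0.5)
observed[2, 2, 2] = 1.0                  # anomalous voxel
reconstructed = np.full((4, 4, 4), 0.5)  # inpainted volume, lesion removed
mask, mean_err = anomaly_map(observed, reconstructed)
# exactly one voxel exceeds the threshold
```

Sweeping the threshold over held-out data and computing sensitivity/specificity at each value is what produces the AUC figures quoted above.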
Affiliation(s)
- Boyeong Woo, School of Electrical Engineering and Computer Science, The University of Queensland, Australia
- Craig Engstrom, School of Human Movement and Nutrition Sciences, The University of Queensland, Australia
- William Baresic, School of Human Movement and Nutrition Sciences, The University of Queensland, Australia
- Jurgen Fripp, School of Electrical Engineering and Computer Science, The University of Queensland, Australia; Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organization, Australia
- Stuart Crozier, School of Electrical Engineering and Computer Science, The University of Queensland, Australia
- Shekhar S Chandra, School of Electrical Engineering and Computer Science, The University of Queensland, Australia
15
Chadoulos C, Tsaopoulos D, Symeonidis A, Moustakidis S, Theocharis J. Dense Multi-Scale Graph Convolutional Network for Knee Joint Cartilage Segmentation. Bioengineering (Basel) 2024; 11:278. PMID: 38534552. DOI: 10.3390/bioengineering11030278.
Abstract
In this paper, we propose a dense multi-scale adaptive graph convolutional network (DMA-GCN) method for automatic segmentation of the knee joint cartilage from MR images. Under the multi-atlas setting, the suggested approach exhibits several novelties. First, our models integrate local-level and global-level learning simultaneously: the local learning task aggregates spatial contextual information from aligned spatial neighborhoods of nodes at multiple scales, while global learning explores pairwise affinities between nodes located globally at different positions in the image. We propose two different model structures, in which the local and global convolutional units are combined in either an alternating or a sequential manner. Second, building on these models, we develop the DMA-GCN network, utilizing a densely connected architecture with residual skip connections. This is a deeper GCN structure, expanded over different block layers, and thus capable of providing more expressive node feature representations. Third, all units of the overall network are equipped with their own adaptive graph learning mechanism, which allows the graph structures to be learned automatically during training. The proposed cartilage segmentation method is evaluated on the entire publicly available Osteoarthritis Initiative (OAI) cohort. To this end, we devised a thorough experimental setup to investigate the effect of several factors of our approach on the classification rates. Furthermore, we present exhaustive comparative results considering traditional existing methods, six deep learning segmentation methods, and seven graph-based convolution methods, including the currently most representative models in this field.
The obtained results demonstrate that DMA-GCN outperforms all competing methods across all evaluation measures, achieving DSC = 95.71% for femoral and DSC = 94.02% for tibial cartilage segmentation.
Affiliation(s)
- Christos Chadoulos, Department of Electrical & Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Dimitrios Tsaopoulos, Institute for Bio-Economy and Agri-Technology, Centre for Research and Technology-Hellas, 38333 Volos, Greece
- Andreas Symeonidis, Department of Electrical & Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Serafeim Moustakidis, Department of Electrical & Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- John Theocharis, Department of Electrical & Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
16
Stoel BC, Staring M, Reijnierse M, van der Helm-van Mil AHM. Deep learning in rheumatological image interpretation. Nat Rev Rheumatol 2024; 20:182-195. PMID: 38332242. DOI: 10.1038/s41584-023-01074-5.
Abstract
Artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. Likewise, initial applications have been explored in rheumatology. Deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. With images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. As with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. This adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. To facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice.
Affiliation(s)
- Berend C Stoel, Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Marius Staring, Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Monique Reijnierse, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
17
Daneshmand M, Panfilov E, Bayramoglu N, Korhonen RK, Saarakkala S. Deep learning based detection of osteophytes in radiographs and magnetic resonance imagings of the knee using 2D and 3D morphology. J Orthop Res 2024. PMID: 38323840. DOI: 10.1002/jor.25800.
Abstract
In this study, we investigated the discriminative capacity of knee morphology in automatic detection of osteophytes defined by the Osteoarthritis Research Society International (OARSI) atlas, using X-ray and magnetic resonance imaging (MRI) data. For the X-ray analysis, we developed a deep learning (DL) based model to segment the femur and tibia. For MRI, we utilized previously validated segmentations of the femur, tibia, corresponding cartilage tissues, and menisci. Osteophyte detection was performed using DL models in four compartments: medial femur (FM), lateral femur (FL), medial tibia (TM), and lateral tibia (TL). To analyze the confounding effects of soft tissues, we investigated their morphology in combination with bones, including bones+cartilage, bones+menisci, and all the tissues. From X-ray-based 2D morphology, the models yielded balanced accuracies of 0.73, 0.69, 0.74, and 0.74 for FM, FL, TM, and TL, respectively. Using 3D bone morphology from MRI, balanced accuracy was 0.80, 0.77, 0.71, and 0.76, respectively. The performance was higher than in 2D for all compartments except TM, with significant improvements observed for the femoral compartments. Adding menisci or cartilage morphology consistently improved balanced accuracy in TM, with the greatest improvement seen for small osteophytes. Otherwise, the models performed similarly to the bones-only configuration. Our experiments demonstrated that MRI-based models have a higher detection capability than X-ray-based models for identifying knee osteophytes. This study highlighted the feasibility of automated osteophyte detection from X-ray and MRI data and suggested a further need to develop osteophyte assessment criteria beyond OARSI, particularly for early osteophytic changes.
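Balanced accuracy, the metric reported above, is the mean of sensitivity and specificity, which makes it robust when one osteophyte class is much rarer than the other. A minimal sketch with illustrative labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity for a binary label
    (e.g., osteophyte present = 1, absent = 0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # recall on the positive class
    specificity = tn / (tn + fp)  # recall on the negative class
    return 0.5 * (sensitivity + specificity)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
# sensitivity = 3/4, specificity = 2/4, so balanced accuracy = 0.625
print(balanced_accuracy(y_true, y_pred))
```

Unlike plain accuracy, a trivial "always negative" classifier scores only 0.5 here regardless of class imbalance.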
Affiliation(s)
- Egor Panfilov, Faculty of Medicine, University of Oulu, Oulu, Finland
- Simo Saarakkala, University of Oulu and Oulu University Hospital, Oulu, Finland
18
Kakavand R, Palizi M, Tahghighi P, Ahmadi R, Gianchandani N, Adeeb S, Souza R, Edwards WB, Komeili A. Integration of Swin UNETR and statistical shape modeling for a semi-automated segmentation of the knee and biomechanical modeling of articular cartilage. Sci Rep 2024; 14:2748. PMID: 38302524. PMCID: PMC10834430. DOI: 10.1038/s41598-024-52548-9.
Abstract
Simulation studies, such as finite element (FE) modeling, provide insight into knee joint mechanics without patient involvement. Generic FE models mimic the biomechanical behavior of the tissue but overlook variations in geometry, loading, and material properties across a population. Conversely, subject-specific models include these factors, resulting in enhanced predictive precision, but are laborious and time intensive. The present study aimed to enhance subject-specific knee joint FE modeling by incorporating a semi-automated segmentation algorithm using a 3D Swin UNETR for an initial segmentation of the femur and tibia, followed by a statistical shape model (SSM) adjustment to improve surface roughness and continuity. For comparison, a manual FE model was developed through manual segmentation (i.e., the de facto standard approach). Both FE models were subjected to gait loading and the predicted mechanical responses were compared. The semi-automated segmentation achieved a Dice similarity coefficient (DSC) of over 98% for both the femur and tibia, and the Hausdorff distance between the semi-automated and manual segmentations was 1.4 mm. The mechanical results (max principal stress and strain, fluid pressure, fibril strain, and contact area) showed no significant differences between the manual and semi-automated FE models, indicating the effectiveness of the proposed semi-automated segmentation in creating accurate knee joint FE models. We have made our semi-automated models publicly accessible to support and facilitate biomechanical modeling and medical image segmentation efforts ( https://data.mendeley.com/datasets/k5hdc9cz7w/1 ).
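The Hausdorff distance quoted above is the symmetric maximum of directed nearest-neighbour distances between two surfaces: it reports the worst local disagreement, complementing the overlap-oriented DSC. A brute-force sketch over point sets (the step of extracting surface points from the segmentation masks is omitted, and the coordinates are illustrative):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a (N, 3) and b (M, 3)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    # Directed distances: farthest point of one set from the other set.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

manual = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
auto   = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.4, 0.0]])
# The outlier point (0, 2.4, 0) is 1.4 away from its nearest manual point,
# so the symmetric Hausdorff distance is 1.4.
print(hausdorff_distance(manual, auto))
```

The O(N·M) pairwise matrix is fine for small sets; for dense surface meshes a KD-tree nearest-neighbour query is the usual optimization.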
Affiliation(s)
- Reza Kakavand, Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, CCIT 216, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
- Mehrdad Palizi, Civil and Environmental Engineering Department, Faculty of Engineering, University of Alberta, Edmonton, Canada
- Peyman Tahghighi, Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, CCIT 216, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
- Reza Ahmadi, Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, CCIT 216, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
- Neha Gianchandani, Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, CCIT 216, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
- Samer Adeeb, Civil and Environmental Engineering Department, Faculty of Engineering, University of Alberta, Edmonton, Canada
- Roberto Souza, Department of Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, Canada; Cumming School of Medicine, Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
- W Brent Edwards, Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, CCIT 216, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada; McCaig Institute for Bone and Joint Health, University of Calgary, Calgary, Canada; Human Performance Laboratory, Faculty of Kinesiology, University of Calgary, Calgary, Canada
- Amin Komeili, Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, CCIT 216, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada; McCaig Institute for Bone and Joint Health, University of Calgary, Calgary, Canada; Human Performance Laboratory, Faculty of Kinesiology, University of Calgary, Calgary, Canada
19
He D, Guo Y, Zhang X, Wang C, Zhao Z, Chen W, Zhang K, Ji B. Dual output feature fusion networks for femoral segmentation and quantitative analysis of the knee joint. Med Phys 2024; 51:1145-1162. PMID: 37633838. DOI: 10.1002/mp.16665.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) is the preferred imaging modality for diagnosing knee disease. Segmentation of knee MRI images is essential for subsequent quantification of clinical parameters and treatment planning for knee prosthesis replacement. However, segmentation remains difficult due to individual differences in anatomy, the difficulty of obtaining accurate edges at lower resolutions, and the presence of speckle noise and artifacts in the images. In addition, radiologists must manually measure the knee's parameters, which is a laborious and time-consuming process. PURPOSE Automatic quantification of femoral morphological parameters can be of fundamental help in the design of prosthetic implants for the repair of the knee and the femur. Knowledge of knee femoral parameters can provide a basis for femoral repair of the knee, the design of fixation materials for femoral prostheses, and the replacement of prostheses. METHODS This paper proposes a new deep network architecture to comprehensively address these challenges. A dual-output model structure is proposed, with a high- and low-layer fusion feature-extraction module designed to extract rich features through a cross-fusion mechanism. A multi-scale edge-information extraction spatial feature module is also developed to address the boundary-blurring problem. RESULTS Based on the precise automated segmentation results, 10 key clinical parameters were automatically measured for a knee femoral prosthesis replacement program. The correlation coefficients between the automatically quantified parameters and the manual results all reached at least 0.92. The proposed method was extensively evaluated on knee MRIs of 78 patients and consistently outperformed the other segmentation methods tested. CONCLUSIONS The automated quantization process produced measurements comparable to those manually obtained by radiologists.
This paper demonstrates the viability of automatic knee MRI image segmentation and quantitative analysis with the proposed method. This provides data to support the accuracy of assessing the progression and biomechanical changes of osteoarthritis of the knee using an automated process, thus saving valuable time for the radiologists and surgeons.
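The agreement figure above (correlation coefficients of at least 0.92 between automated and manual measurements) refers to the Pearson correlation; a self-contained sketch with illustrative measurement values:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired measurements,
    e.g., automated vs. manual values of one clinical parameter."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measurements (mm) for one femoral parameter.
automated = [10.1, 12.0, 9.8, 14.2, 11.5]
manual = [10.0, 12.3, 9.5, 14.0, 11.8]
r = pearson_r(automated, manual)
print(round(r, 3))  # close to 1: strong agreement
```

Note that a high r only shows the two measurement series move together; assessing systematic bias between raters additionally requires methods such as Bland-Altman analysis.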
Affiliation(s)
- Dongdong He, College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Yuan Guo, College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Xushu Zhang, College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Changjiang Wang, Department of Engineering and Design, University of Sussex, Sussex House, Brighton, UK
- Zihui Zhao, College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Weiyi Chen, College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Kai Zhang, Shanxi Hua Jin Orthopaedic Hospital, Taiyuan, Shanxi, China
- Binping Ji, Shanxi Hua Jin Orthopaedic Hospital, Taiyuan, Shanxi, China
20
Roemer FW, Wirth W, Demehri S, Kijowski R, Jarraya M, Hayashi D, Eckstein F, Guermazi A. Imaging Biomarkers of Osteoarthritis. Semin Musculoskelet Radiol 2024; 28:14-25. PMID: 38330967. DOI: 10.1055/s-0043-1776432.
Abstract
Currently no disease-modifying osteoarthritis drug has been approved for the treatment of osteoarthritis (OA) that can reverse, hold, or slow the progression of structural damage of OA-affected joints. The reasons for failure are manifold and include the heterogeneity of structural disease of the OA joint at trial inclusion, and the sensitivity of biomarkers used to measure a potential treatment effect. This article discusses the role and potential of different imaging biomarkers in OA research. We review the current role of radiography, as well as advances in quantitative three-dimensional morphological cartilage assessment and semiquantitative whole-organ assessment of OA. Although magnetic resonance imaging has evolved as the leading imaging method in OA research, recent developments in computed tomography are also discussed briefly. Finally, we address the experience from the Foundation for the National Institutes of Health Biomarker Consortium biomarker qualification study and the future role of artificial intelligence.
Affiliation(s)
- Frank W Roemer, Department of Radiology, Chobanian & Avedisian Boston University School of Medicine, Boston, Massachusetts; Department of Radiology, Universitätsklinikum Erlangen & Friedrich Alexander Universität (FAU) Erlangen-Nürnberg, Erlangen, Germany
- Wolfgang Wirth, Center of Anatomy, and Ludwig Boltzmann Institute for Arthritis and Rehabilitation (LBIAR), Paracelsus Medical University, Salzburg, Austria; Chondrometrics, GmbH, Freilassing, Germany
- Shadpour Demehri, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Richard Kijowski, Department of Radiology, New York University Grossmann School of Medicine, New York, New York
- Mohamed Jarraya, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Daichi Hayashi, Department of Radiology, Tufts Medical Center, Tufts University School of Medicine, Boston, Massachusetts; Harvard T.H. Chan School of Public Health, Harvard University, Boston, Massachusetts
- Felix Eckstein, Center of Anatomy, and Ludwig Boltzmann Institute for Arthritis and Rehabilitation (LBIAR), Paracelsus Medical University, Salzburg, Austria; Chondrometrics, GmbH, Freilassing, Germany
- Ali Guermazi, Department of Radiology, Chobanian & Avedisian Boston University School of Medicine, Boston, Massachusetts; Department of Radiology, Boston VA Healthcare System, West Roxbury, Massachusetts
21
Orava H, Paakkari P, Jäntti J, Honkanen MKM, Honkanen JTJ, Virén T, Joenathan A, Tanska P, Korhonen RK, Grinstaff MW, Töyräs J, Mäkelä JTA. Triple contrast computed tomography reveals site-specific biomechanical differences in the human knee joint: A proof of concept study. J Orthop Res 2024; 42:415-424. PMID: 37593815. DOI: 10.1002/jor.25683.
Abstract
Cartilage and synovial fluid are challenging to observe separately in native computed tomography (CT). We report the use of a triple contrast agent (bismuth nanoparticles [BiNPs], CA4+, and gadoteridol) to image and segment cartilage in cadaveric knee joints with a clinical CT scanner. We hypothesized that BiNPs would remain in the synovial fluid while CA4+ and gadoteridol would diffuse into cartilage, allowing (1) segmentation of cartilage, and (2) evaluation of cartilage biomechanical properties based on contrast agent concentrations. To investigate these hypotheses, the triple contrast agent was injected into both knee joints of a cadaver (N = 1), which were imaged with a clinical CT scanner at multiple timepoints during contrast agent diffusion. The knee joints were then extracted and imaged with micro-CT (µCT), and the biomechanical properties of the cartilage surface were determined by stress-relaxation mapping. Cartilage was segmented, and contrast agent concentrations (CA4+ and gadoteridol) were compared with the biomechanical properties at multiple locations (n = 185). Spearman's correlation between cartilage thickness from clinical CT and reference µCT images verified successful and reliable segmentation. CA4+ concentration was significantly higher in femoral than in tibial cartilage at 60 min and later timepoints, corresponding to the higher Young's modulus observed in femoral cartilage. In this pilot study, we show that (1) large BiNPs do not diffuse into cartilage, facilitating straightforward segmentation of human knee joint cartilage in a clinical setting, and (2) CA4+ concentration in cartilage reflects the biomechanical differences between femoral and tibial cartilage. Thus, triple contrast agent CT shows potential for estimating cartilage morphology and condition in clinical CT.
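The segmentation check in this abstract rests on Spearman's rank correlation between paired thickness measurements. A minimal sketch of that computation is below; the thickness values are illustrative placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired cartilage thickness measurements (mm) at matched surface
# locations; values are made up for demonstration, not taken from the study.
thickness_clinical_ct = np.array([1.8, 2.1, 2.4, 2.9, 3.3, 3.7])
thickness_micro_ct = np.array([1.7, 2.3, 2.2, 3.0, 3.4, 3.9])

# Spearman's rho compares ranks, so it captures any monotonic relationship
# between the two modalities regardless of calibration differences.
rho, p_value = spearmanr(thickness_clinical_ct, thickness_micro_ct)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.4f})")
```

Because only one pair of ranks is swapped here, rho stays close to 1, which is the kind of agreement the abstract interprets as reliable segmentation.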
Affiliation(s)
- Heta Orava
- Department of Technical Physics, University of Eastern Finland, Kuopio, Finland
- Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
| | - Petri Paakkari
- Department of Technical Physics, University of Eastern Finland, Kuopio, Finland
- Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
| | - Jiri Jäntti
- Department of Technical Physics, University of Eastern Finland, Kuopio, Finland
- Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
| | - Miitu K M Honkanen
- Department of Technical Physics, University of Eastern Finland, Kuopio, Finland
- Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
| | | | - Tuomas Virén
- Center of Oncology, Kuopio University Hospital, Kuopio, Finland
| | - Anisha Joenathan
- Departments of Biomedical Engineering, Chemistry, and Medicine, Boston University, Boston, Massachusetts, USA
| | - Petri Tanska
- Department of Technical Physics, University of Eastern Finland, Kuopio, Finland
| | - Rami K Korhonen
- Department of Technical Physics, University of Eastern Finland, Kuopio, Finland
| | - Mark W Grinstaff
- Departments of Biomedical Engineering, Chemistry, and Medicine, Boston University, Boston, Massachusetts, USA
| | - Juha Töyräs
- Department of Technical Physics, University of Eastern Finland, Kuopio, Finland
- Science Service Center, Kuopio University Hospital, Kuopio, Finland
- School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, Australia
| | - Janne T A Mäkelä
- Department of Technical Physics, University of Eastern Finland, Kuopio, Finland
- Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
22
Mahendrakar P, Kumar D, Patil U. A Comprehensive Review on MRI-based Knee Joint Segmentation and Analysis Techniques. Curr Med Imaging 2024; 20:e150523216894. [PMID: 37189281 DOI: 10.2174/1573405620666230515090557]
Abstract
Using magnetic resonance imaging (MRI) in osteoarthritis pathogenesis research has proven extremely beneficial. However, it remains challenging for both clinicians and researchers to detect morphological changes in knee joints from magnetic resonance (MR) images, since the surrounding tissues produce near-identical signals that are difficult to distinguish. Segmenting the knee bone, articular cartilage, and menisci from MR images allows the complete volume of each structure to be examined and certain characteristics to be assessed quantitatively. However, segmentation is a laborious and time-consuming operation that requires sufficient training to complete correctly. With the advancement of MRI technology and computational methods, researchers have developed several algorithms over the last two decades to automate the segmentation of individual knee bones, articular cartilage, and menisci. This systematic review presents the available fully and semi-automatic segmentation methods for knee bone, cartilage, and meniscus published in the scientific literature. It provides clinicians and researchers with a vivid description of the scientific advancements in this field of image analysis and segmentation, supporting the development of novel automated methods for clinical applications. The review also covers recently developed fully automated deep learning-based segmentation methods, which not only provide better results than conventional techniques but also open a new field of research in medical imaging.
Affiliation(s)
- Pavan Mahendrakar
- BLDEA's V. P. Dr. P. G. Halakatti College of Engineering and Technology, Vijayapur, Karnataka, India
| | | | - Uttam Patil
- Jain College of Engineering, T.S Nagar, Hunchanhatti Road, Machhe, Belagavi, Karnataka, India
23
Yao Y, Zhong J, Zhang L, Khan S, Chen W. CartiMorph: A framework for automated knee articular cartilage morphometrics. Med Image Anal 2024; 91:103035. [PMID: 37992496 DOI: 10.1016/j.media.2023.103035]
Abstract
We introduce CartiMorph, a framework for automated knee articular cartilage morphometrics. It takes an image as input and generates quantitative metrics for cartilage subregions, including the percentage of full-thickness cartilage loss (FCL), mean thickness, surface area, and volume. CartiMorph leverages deep learning models for hierarchical image feature representation. Deep learning models were trained and validated for tissue segmentation, template construction, and template-to-image registration. We established methods for surface-normal-based cartilage thickness mapping, FCL estimation, and rule-based cartilage parcellation. Our cartilage thickness map showed less error in thin and peripheral regions. We evaluated the effectiveness of the adopted segmentation model by comparing the quantitative metrics obtained from model segmentation with those from manual segmentation. The root-mean-squared deviation of the FCL measurements was less than 8%, and strong correlations were observed for the mean thickness (Pearson's correlation coefficient ρ ∈ [0.82, 0.97]), surface area (ρ ∈ [0.82, 0.98]), and volume (ρ ∈ [0.89, 0.98]) measurements. We compared our FCL measurements with those from a previous study and found that ours deviated less from the ground truth. We observed superior performance of the proposed rule-based cartilage parcellation method compared with the atlas-based approach. CartiMorph has the potential to promote imaging biomarker discovery for knee osteoarthritis.
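The agreement statistics in this abstract (Pearson's ρ and root-mean-squared deviation between model-based and manual morphometrics) can be sketched as follows; the paired measurements are invented for illustration only.

```python
import numpy as np

# Hypothetical paired morphometric measurements (e.g., mean cartilage thickness
# in mm) from model-based vs. manual segmentation; values are illustrative.
model_based = np.array([2.10, 1.85, 2.40, 2.95, 1.60])
manual = np.array([2.05, 1.90, 2.35, 3.00, 1.55])

def pearson_rho(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two paired measurement arrays."""
    return float(np.corrcoef(a, b)[0, 1])

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-squared deviation between paired measurements."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(f"rho = {pearson_rho(model_based, manual):.3f}, "
      f"RMSD = {rmsd(model_based, manual):.3f} mm")
```

High ρ with low RMSD is the pattern the abstract reports as validation of the automated pipeline against manual segmentation.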
Affiliation(s)
- Yongcheng Yao
- CU Lab of AI in Radiology (CLAIR), Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China.
| | - Junru Zhong
- CU Lab of AI in Radiology (CLAIR), Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
| | - Liping Zhang
- CU Lab of AI in Radiology (CLAIR), Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
| | - Sheheryar Khan
- School of Professional Education and Executive Development, The Hong Kong Polytechnic University, Hong Kong, China
| | - Weitian Chen
- CU Lab of AI in Radiology (CLAIR), Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China.
24
Bi L, Buehner U, Fu X, Williamson T, Choong P, Kim J. Hybrid CNN-transformer network for interactive learning of challenging musculoskeletal images. Comput Methods Programs Biomed 2024; 243:107875. [PMID: 37871450 DOI: 10.1016/j.cmpb.2023.107875]
Abstract
BACKGROUND AND OBJECTIVES Segmentation of regions of interest (ROIs) such as tumors and bones plays an essential role in the analysis of musculoskeletal (MSK) images. Segmentation results can help orthopaedic surgeons with surgical outcome assessment and patient gait cycle simulation. Deep learning-based automatic segmentation methods, particularly those using fully convolutional networks (FCNs), are considered the state of the art. However, in scenarios where the training data are insufficient to account for all the variations in ROIs, these methods struggle to segment challenging ROIs with less common image characteristics, such as low contrast to the background, inhomogeneous textures, and fuzzy boundaries. METHODS We propose a hybrid convolutional neural network-transformer network (HCTN) for semi-automatic segmentation to overcome the limitations of segmenting challenging MSK images. Specifically, we fuse user inputs (manual, e.g., mouse clicks) with high-level semantic image features derived from the neural network (automatic), where the user inputs drive interactive training for uncommon image characteristics. In addition, we leverage a transformer network (TN), a deep learning model designed for handling sequence data, together with features derived from FCNs; this addresses the limitation that FCNs operate on small kernels and therefore tend to dismiss global context in favor of local patterns. RESULTS We purposely selected three MSK imaging datasets covering a variety of structures to evaluate the generalizability of the proposed method. Our semi-automatic HCTN method achieved a Dice similarity coefficient (DSC) of 88.46 ± 9.41 for segmenting soft-tissue sarcoma tumors from magnetic resonance (MR) images, 73.32 ± 11.97 for segmenting osteosarcoma tumors from MR images, and 93.93 ± 1.84 for segmenting the clavicle bones from chest radiographs. Compared to the current state-of-the-art automatic segmentation method, HCTN is 11.7%, 19.11%, and 7.36% higher in DSC on the three datasets, respectively. CONCLUSION Our experimental results demonstrate that HCTN achieved more generalizable results than current methods, especially on challenging MSK studies.
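The Dice similarity coefficient used as the evaluation metric above has a standard definition, 2|A∩B| / (|A| + |B|), which can be implemented in a few lines (the toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2-D masks: the prediction overlaps the ground truth in 2 of 3 voxels each
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 1]])
print(dice_coefficient(pred, truth))  # → 0.6666666666666666
```

A DSC of 1 means perfect overlap, 0 means none; the scores in the abstract (e.g., 93.93) are this quantity expressed as a percentage.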
Affiliation(s)
- Lei Bi
- Institute of Translational Medicine, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science, University of Sydney, NSW, Australia
| | | | - Xiaohang Fu
- School of Computer Science, University of Sydney, NSW, Australia
| | - Tom Williamson
- Stryker Corporation, Kalamazoo, Michigan, USA; Centre for Additive Manufacturing, School of Engineering, RMIT University, VIC, Australia
| | - Peter Choong
- Department of Surgery, University of Melbourne, VIC, Australia
| | - Jinman Kim
- School of Computer Science, University of Sydney, NSW, Australia.
25
Moglia A, Marsilio L, Rossi M, Pinelli M, Lettieri E, Mainardi L, Manzotti A, Cerveri P. Mixed Reality and Artificial Intelligence: A Holistic Approach to Multimodal Visualization and Extended Interaction in Knee Osteotomy. IEEE J Transl Eng Health Med 2023; 12:279-290. [PMID: 38410183 PMCID: PMC10896423 DOI: 10.1109/jtehm.2023.3335608]
Abstract
OBJECTIVE Recent advancements in augmented reality have led to planning and navigation systems for orthopedic surgery, but little is known about mixed reality (MR) in orthopedics. Furthermore, artificial intelligence (AI) has the potential to boost the capabilities of MR by enabling automation and personalization. The purpose of this work is to assess the Holoknee prototype, based on AI and MR for multimodal data visualization and surgical planning in knee osteotomy, developed to run on the HoloLens 2 headset. METHODS Two preclinical test sessions were performed with 11 participants (eight surgeons, two residents, and one medical student), each executing six tasks three times, corresponding to a number of holographic data interactions and preoperative planning steps. At the end of each session, participants answered a questionnaire on user perception and usability. RESULTS During the second trial, the participants were faster in all tasks than in the first, while in the third the execution time decreased only for two tasks ("Patient selection" and "Scrolling through radiograph") with respect to the second attempt, without a statistically significant difference (p = 0.14 and p = 0.13, respectively). All subjects strongly agreed that MR can be used effectively for surgical training, whereas 10 (90.9%) strongly agreed that it can be used effectively for preoperative planning. Six (54.5%) agreed, and two (18.2%) strongly agreed, that it can be used effectively for intraoperative guidance. DISCUSSION/CONCLUSION In this work, we presented Holoknee, the first holistic application of AI and MR for surgical planning in knee osteotomy. The results are promising for its translation to surgical training, preoperative planning, and surgical guidance. Clinical and Translational Impact Statement: Holoknee can help support surgeons in the preoperative planning of knee osteotomy. It has the potential to positively impact the training of future generations of residents and to aid surgeons in the intraoperative stage.
Affiliation(s)
- Andrea Moglia
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
| | - Luca Marsilio
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
| | - Matteo Rossi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
- Istituto Auxologico Italiano IRCCS, 20149 Milan, Italy
| | - Maria Pinelli
- Department of Management, Economics and Industrial Engineering, Politecnico di Milano, 20133 Milan, Italy
| | - Emanuele Lettieri
- Department of Management, Economics and Industrial Engineering, Politecnico di Milano, 20133 Milan, Italy
| | - Luca Mainardi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
| | | | - Pietro Cerveri
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
- Istituto Auxologico Italiano IRCCS, 20149 Milan, Italy
26
Kijowski R, Fritz J, Deniz CM. Deep learning applications in osteoarthritis imaging. Skeletal Radiol 2023; 52:2225-2238. [PMID: 36759367 PMCID: PMC10409879 DOI: 10.1007/s00256-023-04296-6]
Abstract
Deep learning (DL) is one of the most exciting new areas in medical imaging. This article will provide a review of current applications of DL in osteoarthritis (OA) imaging, including methods used for cartilage lesion detection, OA diagnosis, cartilage segmentation, and OA risk assessment. DL techniques have been shown to have similar diagnostic performance as human readers for detecting and grading cartilage lesions within the knee on MRI. A variety of DL methods have been developed for detecting and grading the severity of knee OA and various features of knee OA on X-rays using standardized classification systems with diagnostic performance similar to human readers. Multiple DL approaches have been described for fully automated segmentation of cartilage and other knee tissues and have achieved higher segmentation accuracy than currently used methods with substantial reductions in segmentation times. Various DL models analyzing baseline X-rays and MRI have been developed for OA risk assessment. These models have shown high diagnostic performance for predicting a wide variety of OA outcomes, including the incidence and progression of radiographic knee OA, the presence and progression of knee pain, and future total knee replacement. The preliminary results of DL applications in OA imaging have been encouraging. However, many DL techniques require further technical refinement to maximize diagnostic performance. Furthermore, the generalizability of DL approaches needs to be further investigated in prospective studies using large image datasets acquired at different institutions with different imaging hardware before they can be implemented in clinical practice and research studies.
Affiliation(s)
- Richard Kijowski
- Department of Radiology, New York University Grossman School of Medicine, 660 First Avenue, 3Rd Floor, New York, NY, 10016, USA.
| | - Jan Fritz
- Department of Radiology, New York University Grossman School of Medicine, 660 First Avenue, 3Rd Floor, New York, NY, 10016, USA
| | - Cem M Deniz
- Department of Radiology, New York University Grossman School of Medicine, 660 First Avenue, 3Rd Floor, New York, NY, 10016, USA
27
Wirth W, Ladel C, Maschek S, Wisser A, Eckstein F, Roemer F. Quantitative measurement of cartilage morphology in osteoarthritis: current knowledge and future directions. Skeletal Radiol 2023; 52:2107-2122. [PMID: 36380243 PMCID: PMC10509082 DOI: 10.1007/s00256-022-04228-w]
Abstract
Quantitative measures of cartilage morphology ("cartilage morphometry") extracted from high resolution 3D magnetic resonance imaging (MRI) sequences have been shown to be sensitive to osteoarthritis (OA)-related change and also to treatment interventions. Cartilage morphometry is therefore nowadays widely used as outcome measure for observational studies and randomized interventional clinical trials. The objective of this narrative review is to summarize the current status of cartilage morphometry in OA research, to provide insights into aspects relevant for the design of future studies and clinical trials, and to give an outlook on future developments. It covers the aspects related to the acquisition of MRIs suitable for cartilage morphometry, the analysis techniques needed for deriving quantitative measures from the MRIs, the quality assurance required for providing reliable cartilage measures, and the appropriate participant recruitment criteria for the enrichment of study cohorts with knees likely to show structural progression. Finally, it provides an overview over recent clinical trials that relied on cartilage morphometry as a structural outcome measure for evaluating the efficacy of disease-modifying OA drugs (DMOAD).
Affiliation(s)
- Wolfgang Wirth
- Department of Imaging & Functional Musculoskeletal Research, Institute of Anatomy & Cell Biology, Paracelsus Medical University Salzburg & Nuremberg, Strubergasse 21, 5020 Salzburg, Austria
- Ludwig Boltzmann Inst. for Arthritis and Rehabilitation, Paracelsus Medical University Salzburg & Nuremberg, Salzburg, Austria
- Chondrometrics GmbH, Freilassing, Germany
| | | | - Susanne Maschek
- Department of Imaging & Functional Musculoskeletal Research, Institute of Anatomy & Cell Biology, Paracelsus Medical University Salzburg & Nuremberg, Strubergasse 21, 5020 Salzburg, Austria
- Chondrometrics GmbH, Freilassing, Germany
| | - Anna Wisser
- Department of Imaging & Functional Musculoskeletal Research, Institute of Anatomy & Cell Biology, Paracelsus Medical University Salzburg & Nuremberg, Strubergasse 21, 5020 Salzburg, Austria
- Ludwig Boltzmann Inst. for Arthritis and Rehabilitation, Paracelsus Medical University Salzburg & Nuremberg, Salzburg, Austria
- Chondrometrics GmbH, Freilassing, Germany
| | - Felix Eckstein
- Department of Imaging & Functional Musculoskeletal Research, Institute of Anatomy & Cell Biology, Paracelsus Medical University Salzburg & Nuremberg, Strubergasse 21, 5020 Salzburg, Austria
- Ludwig Boltzmann Inst. for Arthritis and Rehabilitation, Paracelsus Medical University Salzburg & Nuremberg, Salzburg, Austria
- Chondrometrics GmbH, Freilassing, Germany
| | - Frank Roemer
- Quantitative Imaging Center, Department of Radiology, Boston University School of Medicine, Boston, MA, USA
- Department of Radiology, Universitätsklinikum Erlangen and Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Erlangen, Germany
28
Gao KT, Xie E, Chen V, Iriondo C, Calivà F, Souza RB, Majumdar S, Pedoia V. Large-Scale Analysis of Meniscus Morphology as Risk Factor for Knee Osteoarthritis. Arthritis Rheumatol 2023; 75:1958-1968. [PMID: 37262347 DOI: 10.1002/art.42623]
Abstract
OBJECTIVE Although it is established that structural damage of the meniscus is linked to knee osteoarthritis (OA) progression, whether geometric variations in meniscal shape predispose to future development of OA is plausible but unexplored. This study aims to identify common variations in meniscal shape and determine their relationships to tissue morphology, OA onset, and longitudinal changes in cartilage thickness. METHODS A total of 4,790 participants from the Osteoarthritis Initiative dataset were studied. A statistical shape model was developed for the meniscus, and shape scores were compared between a control group and an OA incidence group. Shape features were then associated with cartilage thickness changes over 8 years to localize the relationship between meniscus shape and cartilage degeneration. RESULTS Seven shape features of the medial and lateral menisci were identified to differ between knees that remain normal and those that develop OA. These include length-width ratios, horn lengths, root attachment angles, and concavity. These "at-risk" shapes were linked to distinctive cartilage thickness changes that suggest a relationship between meniscus geometry, decreased tibial coverage, and rotational imbalances. Additionally, strong associations were found between meniscal shape and demographic subpopulations, future tibial extrusion, and meniscal and ligamentous tears. CONCLUSION This automatic method expanded upon known meniscus characteristics associated with the onset of OA and discovered novel shape features that have yet to be investigated in the context of OA risk.
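The statistical shape model and per-subject "shape scores" mentioned in this abstract are conventionally built by PCA over aligned landmark vectors. The sketch below shows that construction in its simplest linear form; it assumes pre-aligned shapes and random data, and is not the paper's actual pipeline.

```python
import numpy as np

def fit_shape_model(shapes: np.ndarray, n_modes: int):
    """Fit a linear statistical shape model via PCA (SVD of centered data).

    shapes: (n_subjects, n_points * dim) pre-aligned landmark vectors.
    Returns the mean shape, the principal shape modes, and per-subject scores.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]         # (n_modes, n_points * dim), orthonormal rows
    scores = centered @ modes.T  # project each subject onto the modes
    return mean, modes, scores

def reconstruct(mean, modes, score):
    """Rebuild a shape from its low-dimensional score vector."""
    return mean + score @ modes

rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 12))  # 20 subjects, six 2-D landmarks each
mean, modes, scores = fit_shape_model(shapes, n_modes=3)
approx = reconstruct(mean, modes, scores[0])  # low-rank approximation
```

Group differences in such scores (e.g., control vs. OA-incidence knees) are what allow features like length-width ratio or horn length to be flagged as "at-risk" shapes.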
Affiliation(s)
- Kenneth T Gao
- University of California, San Francisco and University of California Berkeley-University of California San Francisco Graduate Program in Bioengineering, San Francisco, United States
| | - Emily Xie
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, United States
| | - Vincent Chen
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, United States
| | - Claudia Iriondo
- University of California, San Francisco and University of California Berkeley-University of California San Francisco Graduate Program in Bioengineering, San Francisco, United States
| | - Francesco Calivà
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, United States
| | - Richard B Souza
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco and Department of Physical Therapy and Rehabilitation Science, University of California, San Francisco, United States
| | - Sharmila Majumdar
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, United States
| | - Valentina Pedoia
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, United States
29
Shetty K, Birkhold A, Jaganathan S, Strobel N, Egger B, Kowarschik M, Maier A. BOSS: Bones, organs and skin shape model. Comput Biol Med 2023; 165:107383. [PMID: 37657357 DOI: 10.1016/j.compbiomed.2023.107383]
Abstract
A virtual anatomical model of a patient can be a valuable tool for enhancing clinical tasks such as workflow automation, patient-specific X-ray dose optimization, markerless tracking, positioning, and navigation assistance in image-guided interventions. For these tasks, it is highly desirable that the patient's surface and internal organs are modeled with high quality for any pose and shape estimate. At present, the majority of statistical shape models (SSMs) are restricted to a small number of organs or bones or do not adequately represent the general population. To address this, we propose a deformable human shape and pose model that combines skin, internal organs, and bones, learned from CT images. By modeling the statistical variations in a pose-normalized space using probabilistic PCA while also preserving joint kinematics, our approach offers a holistic representation of the body that can be beneficial for automation in various medical applications. In an interventional setup, our model could, for example, facilitate automatic system/patient positioning, organ-specific iso-centering, automated collimation, or collision prediction. We assessed our model's performance on a registered dataset, utilizing the unified shape space, and noted an average error of 3.6 mm for bones and 8.8 mm for organs. Using solely skin surface data or patient metadata such as height and weight, the overall combined bone-organ error is 8.68 mm and 8.11 mm, respectively. To further verify our findings, we conducted additional tests on publicly available datasets with multi-part segmentations, which confirmed the effectiveness of our model. On the diverse TotalSegmentator dataset, the errors for bones and organs are 5.10 mm and 8.72 mm, respectively. Our work shows that anatomically parameterized statistical shape models can be created accurately and in a computationally efficient manner. The proposed approach enables the construction of shape models that can be directly integrated into various medical applications.
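Millimeter errors like those quoted above are commonly computed as the mean distance from each predicted surface point to its nearest reference surface point. A hedged sketch of that metric, using a KD-tree over toy point clouds (not the paper's meshes or evaluation code):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_error(pred_points: np.ndarray, ref_points: np.ndarray) -> float:
    """Mean distance (in the point units, e.g. mm) from each predicted surface
    point to its nearest reference surface point."""
    tree = cKDTree(ref_points)        # nearest-neighbor index over the reference
    distances, _ = tree.query(pred_points)
    return float(distances.mean())

# Toy example: a flat grid of reference points spaced 10 mm apart, with the
# prediction offset uniformly by 1 mm perpendicular to the grid plane.
gx, gy = np.meshgrid(np.arange(0, 50, 10), np.arange(0, 50, 10))
ref = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
pred = ref + np.array([0.0, 0.0, 1.0])
print(mean_surface_error(pred, ref))  # → 1.0
```

In practice the metric is often symmetrized (averaged over both directions) so that neither surface can hide missing regions.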
Affiliation(s)
- Karthik Shetty
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, 91058, Germany; Siemens Healthcare GmbH, Forchheim, 91301, Germany.
| | | | - Srikrishna Jaganathan
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, 91058, Germany; Siemens Healthcare GmbH, Forchheim, 91301, Germany
| | - Norbert Strobel
- Siemens Healthcare GmbH, Forchheim, 91301, Germany; Institute of Medical Engineering Schweinfurt, Technical University of Applied Sciences Würzburg-Schweinfurt, Schweinfurt, 97421, Germany
| | - Bernhard Egger
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
| | | | - Andreas Maier
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
30
Fan X, Li Z, Li Z, Wang X, Liu R, Luo Z, Huang H. Automated Learning for Deformable Medical Image Registration by Jointly Optimizing Network Architectures and Objective Functions. IEEE Trans Image Process 2023; 32:4880-4892. [PMID: 37624710 DOI: 10.1109/tip.2023.3307215]
Abstract
Deformable image registration plays a critical role in many tasks of medical image analysis. A successful registration algorithm, whether derived from conventional energy optimization or from deep networks, requires tremendous effort from computer experts to design the registration energy well or to carefully tune network architectures for the medical data available in a given registration task/scenario. This paper proposes an automated learning registration algorithm (AutoReg) that cooperatively optimizes both architectures and their corresponding training objectives, enabling non-computer experts to conveniently find off-the-shelf registration algorithms for various registration scenarios. Specifically, we establish a triple-level framework that embraces the search for both network architectures and objectives with cooperating optimization. Extensive experiments on multiple volumetric datasets and various registration scenarios demonstrate that AutoReg can automatically learn an optimal deep registration network for given volumes and achieve state-of-the-art performance. The automatically learned network also improves computational efficiency over the mainstream UNet architecture, reducing the runtime for a volume pair from 0.558 to 0.270 seconds on the same configuration.
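Whatever network or energy produces it, the output of deformable registration is a dense displacement field that resamples the moving image. A minimal 2-D sketch of that resampling step (not AutoReg itself; a generic illustration using linear interpolation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Warp a 2-D moving image by a dense displacement field.

    displacement: (2, H, W) field in pixel units; the warped image samples the
    moving image at x + u(x), with linear interpolation at the warped positions.
    """
    h, w = moving.shape
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([gy + displacement[0], gx + displacement[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

moving = np.arange(16, dtype=float).reshape(4, 4)
shift = np.zeros((2, 4, 4))
shift[1] = 1.0                      # sample one pixel to the right everywhere
warped = warp_image(moving, shift)  # image content shifts left by one pixel
```

A registration objective then scores the similarity between `warped` and the fixed image, typically plus a smoothness penalty on the displacement field; AutoReg's contribution is searching over both the network producing the field and this objective.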
31
Liu Z, Lv Q, Yang Z, Li Y, Lee CH, Shen L. Recent progress in transformer-based medical image analysis. Comput Biol Med 2023; 164:107268. [PMID: 37494821 DOI: 10.1016/j.compbiomed.2023.107268]
Abstract
The transformer was developed primarily in the field of natural language processing. Recently, it has been adopted in, and shows promise for, the computer vision (CV) field. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structure of the transformer. We then survey the recent progress of transformers in MIA, organizing the applications by task: classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. The large number of experiments studied in this review illustrates that transformer-based methods outperform existing methods on multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review, with its up-to-date content, detailed information, and comprehensive comparisons, may greatly benefit the broad MIA community.
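The attention mechanism this review recaps is scaled dot-product attention, softmax(QK^T / √d_k)V. A minimal numpy sketch (toy shapes, no batching or multi-head machinery):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    weights = softmax(q @ k.swapaxes(-2, -1) / np.sqrt(d_k))
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))  # 4 query tokens, dimension 8
k = rng.normal(size=(6, 8))  # 6 key tokens
v = rng.normal(size=(6, 8))  # 6 value tokens
out, weights = scaled_dot_product_attention(q, k, v)
# out holds one attended vector per query; each row of weights sums to 1
```

For images, the tokens are typically flattened patches, so every output position can aggregate information from the whole image, which is the global-context property contrasted with small convolutional kernels throughout this listing.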
Affiliation(s)
- Zhaoshan Liu
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
- Qiujie Lv
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China.
- Ziduo Yang
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China.
- Yifan Li
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
- Chau Hung Lee
- Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore.
- Lei Shen
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
|
32
|
Winter P, Rother S, Orth P, Fritsch E. [Innovative image-based planning in musculoskeletal surgery]. ORTHOPADIE (HEIDELBERG, GERMANY) 2023:10.1007/s00132-023-04393-3. [PMID: 37286621 DOI: 10.1007/s00132-023-04393-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 05/03/2023] [Indexed: 06/09/2023]
Abstract
BACKGROUND For the preparation of surgical procedures in orthopedics and trauma surgery, precise knowledge of imaging and the surgeon's three-dimensional imagination are of outstanding importance. Image-based, preoperative two-dimensional planning is the gold standard in arthroplasty today. In complex cases, further imaging such as computed tomography (CT) or magnetic resonance imaging is also performed, generating a three-dimensional model of the body region and helping the surgeon plan the surgical treatment. Four-dimensional, dynamic CT studies have also been reported and are available as a complementary tool. DIGITAL AIDS Digital aids are furthermore intended to generate an improved representation of the pathology to be treated and to optimize the surgeon's imagination. The finite element method allows patient-specific and implant-specific parameters to be taken into account in preoperative surgical planning. Intraoperatively, relevant information can be provided by augmented reality without significantly disrupting the surgical workflow.
Affiliation(s)
- Philipp Winter
- Klinik für Orthopädie und Orthopädische Chirurgie, Universität des Saarlandes, Kirrberger Str. 100, 66421, Homburg, Deutschland.
- Stephan Rother
- Klinik für Orthopädie und Orthopädische Chirurgie, Universität des Saarlandes, Kirrberger Str. 100, 66421, Homburg, Deutschland.
- Patrick Orth
- Klinik für Orthopädie und Orthopädische Chirurgie, Universität des Saarlandes, Kirrberger Str. 100, 66421, Homburg, Deutschland.
- Ekkehard Fritsch
- Klinik für Orthopädie und Orthopädische Chirurgie, Universität des Saarlandes, Kirrberger Str. 100, 66421, Homburg, Deutschland.
|
33
|
Zheng JQ, Lim NH, Papież BW. Accurate volume alignment of arbitrarily oriented tibiae based on a mutual attention network for osteoarthritis analysis. Comput Med Imaging Graph 2023; 106:102204. [PMID: 36863214 DOI: 10.1016/j.compmedimag.2023.102204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 02/14/2023] [Accepted: 02/14/2023] [Indexed: 02/26/2023]
Abstract
Damage to cartilage is an important indicator of osteoarthritis progression, but manual extraction of cartilage morphology is time-consuming and prone to error. To address this, we hypothesize that automatic labeling of cartilage can be achieved through the comparison of contrasted and non-contrasted computed tomography (CT). However, this is non-trivial because the pre-clinical volumes are acquired at arbitrary starting poses, owing to the lack of standardized acquisition protocols. We therefore propose an annotation-free deep learning method, D-Net, for accurate and automatic alignment of pre- and post-contrasted cartilage CT volumes. D-Net is based on a novel mutual attention network structure that captures large-range translation and full-range rotation without requiring a prior pose template. CT volumes of mouse tibiae are used for validation: the network is trained with synthetic transformations and tested on real pre- and post-contrasted CT volumes. Analysis of variance (ANOVA) was used to compare the different network structures. Our proposed method, D-Net, achieves a Dice coefficient of 0.87 and, when cascaded as a multi-stage network, significantly outperforms other state-of-the-art deep learning models in the real-world alignment of 50 pairs of pre- and post-contrasted CT volumes.
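The Dice coefficient reported for D-Net (0.87) is a standard overlap measure between a predicted and a reference binary segmentation mask. A minimal sketch, using a toy 1-D example in place of real CT masks:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity 2|A∩B| / (|A| + |B|): 1.0 is perfect overlap, 0.0 disjoint."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:               # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy masks: 4 voxels overlap out of 5 + 5 labelled
pred = np.array([1, 1, 1, 1, 1, 0, 0, 0])
ref  = np.array([0, 1, 1, 1, 1, 1, 0, 0])
print(dice_coefficient(pred, ref))  # 0.8
```

For volumetric data the same formula applies voxel-wise over the 3-D arrays; many of the segmentation papers in this listing report exactly this quantity.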
Affiliation(s)
- Jian-Qing Zheng
- The Kennedy Institute of Rheumatology, University of Oxford, UK.
- Ngee Han Lim
- The Kennedy Institute of Rheumatology, University of Oxford, UK.
|
34
|
Ileșan RR, Beyer M, Kunz C, Thieringer FM. Comparison of Artificial Intelligence-Based Applications for Mandible Segmentation: From Established Platforms to In-House-Developed Software. Bioengineering (Basel) 2023; 10:604. [PMID: 37237673 PMCID: PMC10215609 DOI: 10.3390/bioengineering10050604] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Accepted: 05/16/2023] [Indexed: 05/28/2023] Open
Abstract
Medical image segmentation, whether semi-automatic or manual, is labor-intensive, subjective, and requires specialized personnel. Fully automated segmentation has recently gained importance thanks to better-designed and better-understood CNNs. With this in mind, we decided to develop our own in-house segmentation software and compare it against the systems of established companies, an inexperienced user, and an expert serving as ground truth. The companies included in the study offer cloud-based options that perform accurately in clinical routine (Dice similarity coefficients of 0.912 to 0.949), with average segmentation times ranging from 3'54″ to 85'54″. Our in-house model achieved an accuracy of 94.24% compared with the best-performing software and had the shortest mean segmentation time of 2'03″. During the study, developing in-house segmentation software gave us a glimpse into the strenuous work companies face when offering clinically relevant solutions. All the problems encountered were discussed with the companies and solved, so both parties benefited from the experience. In doing so, we demonstrated that fully automated segmentation needs further research and collaboration between academia and the private sector to achieve full acceptance in clinical routine.
Affiliation(s)
- Robert R. Ileșan
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Michel Beyer
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Christoph Kunz
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Florian M. Thieringer
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
|
35
|
Martel-Pelletier J, Paiement P, Pelletier JP. Magnetic resonance imaging assessments for knee segmentation and their use in combination with machine/deep learning as predictors of early osteoarthritis diagnosis and prognosis. Ther Adv Musculoskelet Dis 2023; 15:1759720X231165560. [PMID: 37151912 PMCID: PMC10155034 DOI: 10.1177/1759720x231165560] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Accepted: 03/23/2023] [Indexed: 05/09/2023] Open
Abstract
Knee osteoarthritis (OA) is a prevalent and disabling disease that can develop over decades. The disease is heterogeneous and involves structural changes in the whole joint, encompassing multiple tissue types. Detecting OA before the onset of irreversible changes is crucial for early management, and this could be achieved by visualizing knee tissues and quantifying their changes over time. Although several imaging modalities are available for knee structure assessment, magnetic resonance imaging (MRI) is preferred. This narrative review examines the existing literature, first on MRI-based approaches for evaluating knee articular tissues, and second on prediction using machine/deep-learning-based methodologies with MRI as input or outcome for early OA diagnosis and prognosis. A substantial number of MRI methodologies have been developed to assess several knee tissues in semi-quantitative and quantitative fashion using manual, semi-automated and fully automated systems. This dynamic field has grown substantially since the advent of machine/deep learning. Another active area is predictive modelling using machine/deep-learning methodologies that enable robust early OA diagnosis/prognosis. Moreover, incorporating MRI markers as input/outcome in such predictive models is important for more accurate OA structural diagnosis/prognosis. Their main limitation is the difficulty of translating them into rheumatology practice. In conclusion, determination and quantification of knee tissues on MRI provide early indicators for individuals at high risk of developing the disease and for patient prognosis. Such assessment of knee tissues, combined with models/tools developed through machine/deep learning that use MRI markers among other parameters for early diagnosis/prognosis, will maximize opportunities for individualized risk assessment in clinical practice, permitting precision medicine. Future efforts should be made to make such prediction models openly accessible, allowing early disease management to prevent or delay OA outcomes.
Affiliation(s)
- Johanne Martel-Pelletier
- Osteoarthritis Research Unit, University of Montreal Hospital Research Centre (CRCHUM), 900 Saint-Denis, R11.412B, Montreal, QC H2X 0A9, Canada
- Patrice Paiement
- Osteoarthritis Research Unit, University of Montreal Hospital Research Centre (CRCHUM), Montreal, QC, Canada
- Jean-Pierre Pelletier
- Osteoarthritis Research Unit, University of Montreal Hospital Research Centre (CRCHUM), Montreal, QC, Canada
|
36
|
Schmidt AM, Desai AD, Watkins LE, Crowder HA, Black MS, Mazzoli V, Rubin EB, Lu Q, MacKay JW, Boutin RD, Kogan F, Gold GE, Hargreaves BA, Chaudhari AS. Generalizability of Deep Learning Segmentation Algorithms for Automated Assessment of Cartilage Morphology and MRI Relaxometry. J Magn Reson Imaging 2023; 57:1029-1039. [PMID: 35852498 PMCID: PMC9849481 DOI: 10.1002/jmri.28365] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 07/06/2022] [Accepted: 07/07/2022] [Indexed: 01/21/2023] Open
Abstract
BACKGROUND Deep learning (DL)-based automatic segmentation models can expedite manual segmentation yet require resource-intensive fine-tuning before deployment on new datasets. The generalizability of DL methods to new datasets without fine-tuning is not well characterized. PURPOSE Evaluate the generalizability of DL-based models by deploying pretrained models on independent datasets varying by MR scanner, acquisition parameters, and subject population. STUDY TYPE Retrospective based on prospectively acquired data. POPULATION Overall test dataset: 59 subjects (26 females); Study 1: 5 healthy subjects (zero females), Study 2: 8 healthy subjects (eight females), Study 3: 10 subjects with osteoarthritis (eight females), Study 4: 36 subjects with various knee pathology (10 females). FIELD STRENGTH/SEQUENCE A 3-T, quantitative double-echo steady state (qDESS). ASSESSMENT Four annotators manually segmented knee cartilage. Each reader segmented one of four qDESS datasets in the test dataset. Two DL models, one trained on qDESS data and another on Osteoarthritis Initiative (OAI)-DESS data, were assessed. Manual and automatic segmentations were compared by quantifying variations in segmentation accuracy, volume, and T2 relaxation times for superficial and deep cartilage. STATISTICAL TESTS Dice similarity coefficient (DSC) for segmentation accuracy. Lin's concordance correlation coefficient (CCC), Wilcoxon rank-sum tests, root-mean-squared error-coefficient-of-variation to quantify manual vs. automatic T2 and volume variations. Bland-Altman plots for manual vs. automatic T2 agreement. A P value < 0.05 was considered statistically significant. RESULTS DSCs for the qDESS-trained model, 0.79-0.93, were higher than those for the OAI-DESS-trained model, 0.59-0.79. T2 and volume CCCs for the qDESS-trained model, 0.75-0.98 and 0.47-0.95, were higher than respective CCCs for the OAI-DESS-trained model, 0.35-0.90 and 0.13-0.84. 
Bland-Altman 95% limits of agreement for superficial and deep cartilage T2 were lower for the qDESS-trained model, ±2.4 msec and ±4.0 msec, than the OAI-DESS-trained model, ±4.4 msec and ±5.2 msec. DATA CONCLUSION The qDESS-trained model may generalize well to independent qDESS datasets regardless of MR scanner, acquisition parameters, and subject population. EVIDENCE LEVEL 1 TECHNICAL EFFICACY: Stage 1.
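Lin's concordance correlation coefficient (CCC), used above to compare manual and automatic T2 and volume measurements, rewards both high correlation and the absence of systematic bias between paired readings. A sketch with made-up paired values (illustrative numbers, not data from the study):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's CCC: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Equals 1.0 only for perfect agreement; drops with bias or scatter."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population (biased) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# illustrative paired T2 readings (msec): manual vs. automatic
manual    = np.array([30.0, 32.5, 35.0, 40.0, 42.0])
automatic = np.array([30.5, 32.0, 35.5, 39.0, 42.5])
print(round(lins_ccc(manual, automatic), 3))
```

Unlike Pearson's r, the mean-difference term in the denominator penalizes a constant offset between methods, which is why CCC is preferred for method-agreement studies such as this one.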
Affiliation(s)
- Andrew M Schmidt
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Arjun D Desai
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Electrical Engineering, Stanford University, Palo Alto, California, USA
- Lauren E Watkins
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Bioengineering, Stanford University, Palo Alto, California, USA
- Hollis A Crowder
- Mechanical Engineering, Stanford University, Palo Alto, California, USA
- Marianne S Black
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Mechanical Engineering, Stanford University, Palo Alto, California, USA
- Valentina Mazzoli
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Elka B Rubin
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Quin Lu
- Philips Healthcare North America, Gainesville, Florida, USA
- James W MacKay
- Department of Radiology, University of Cambridge, Cambridge, UK
- Norwich Medical School, University of East Anglia, Norwich, UK
- Robert D Boutin
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Feliks Kogan
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Garry E Gold
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Bioengineering, Stanford University, Palo Alto, California, USA
- Brian A Hargreaves
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Electrical Engineering, Stanford University, Palo Alto, California, USA
- Bioengineering, Stanford University, Palo Alto, California, USA
- Akshay S Chaudhari
- Department of Radiology, Stanford University, Palo Alto, California, USA
- Biomedical Data Science, Stanford University, Palo Alto, California, USA
|
37
|
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650 PMCID: PMC10010286 DOI: 10.1016/j.media.2023.102762] [Citation(s) in RCA: 27] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 01/18/2023] [Accepted: 01/27/2023] [Indexed: 02/01/2023]
Abstract
The Transformer, one of the latest technological advances in deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask: can Transformer models transform medical imaging? In this paper, we attempt to answer this question. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and highlighting the key defining properties that characterize them, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and exhibit current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and beyond. In particular, what distinguishes our review is its organization based on the Transformer's key defining properties, mostly derived from comparing the Transformer and the CNN, and on the type of architecture, which specifies the manner in which the Transformer and CNN are combined, all helping readers best understand the rationale behind the reviewed approaches. We conclude with a discussion of future perspectives.
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China.
|
38
|
Mukherjee S, Bandyopadhyay O, Biswas A, Bhattacharya BB. Tracking patellar osteophytes to detect osteoarthritis. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2023. [DOI: 10.1080/21681163.2023.2194453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/29/2023]
|
39
|
Kim-Wang SY, Bradley PX, Cutcliffe HC, Collins AT, Crook BS, Paranjape CS, Spritzer CE, DeFrate LE. Auto-segmentation of the tibia and femur from knee MR images via deep learning and its application to cartilage strain and recovery. J Biomech 2023; 149:111473. [PMID: 36791514 PMCID: PMC10281551 DOI: 10.1016/j.jbiomech.2023.111473] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 12/21/2022] [Accepted: 01/24/2023] [Indexed: 01/27/2023]
Abstract
The ability to efficiently and reproducibly generate subject-specific 3D models of bone and soft tissue is important to many areas of musculoskeletal research. However, methodologies requiring such models have largely been limited by lengthy manual segmentation times. Recently, machine learning, and more specifically convolutional neural networks, have shown potential to alleviate this bottleneck in research throughput. The purpose of this work was therefore to develop a modified version of the convolutional neural network architecture U-Net to automate segmentation of the tibia and femur from double-echo steady-state knee magnetic resonance (MR) images. Our model was trained on a dataset of over 4,000 MR images from 34 subjects, segmented by three experienced researchers and reviewed by a musculoskeletal radiologist. For our validation and testing sets, we achieved Dice coefficients of 0.985 and 0.984, respectively. As further testing, we applied our trained model to a prior study of tibial cartilage strain and recovery. In this analysis, across all subjects, there were no statistically significant differences in cartilage strain between the machine learning and ground-truth bone models, with a mean difference of 0.2 ± 0.7% (mean ± 95% confidence interval). This difference is within the measurement resolution of previous cartilage strain studies from our lab using manual segmentation. In summary, we successfully trained, validated, and tested a machine learning model capable of segmenting MR images of the knee, achieving results comparable to trained human segmenters.
Affiliation(s)
- Sophia Y Kim-Wang
- Duke University School of Medicine, United States; Department of Biomedical Engineering, Duke University, United States
- Patrick X Bradley
- Department of Mechanical Engineering and Materials Science, Duke University, United States
- Amber T Collins
- Department of Orthopaedic Surgery, Duke University School of Medicine, United States
- Bryan S Crook
- Department of Orthopaedic Surgery, Duke University School of Medicine, United States
- Chinmay S Paranjape
- Department of Orthopaedic Surgery, Duke University School of Medicine, United States
- Charles E Spritzer
- Department of Radiology, Duke University School of Medicine, United States
- Louis E DeFrate
- Department of Biomedical Engineering, Duke University, United States; Department of Mechanical Engineering and Materials Science, Duke University, United States; Department of Orthopaedic Surgery, Duke University School of Medicine, United States.
|
40
|
Zhang L, Ning G, Zhou L, Liao H. Symmetric pyramid network for medical image inverse consistent diffeomorphic registration. Comput Med Imaging Graph 2023; 104:102184. [PMID: 36657212 DOI: 10.1016/j.compmedimag.2023.102184] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 12/31/2022] [Accepted: 01/03/2023] [Indexed: 01/15/2023]
Abstract
Over the past few years, deep learning-based image registration methods have achieved remarkable performance in medical image analysis. However, many existing methods struggle to ensure accurate registration while preserving the desired diffeomorphic properties and inverse consistency of the final deformation field. To address this problem, this paper presents a novel symmetric pyramid network for inverse-consistent diffeomorphic medical image registration. Specifically, we first encode the multi-scale images into feature pyramids via a shared-weight encoder network and then progressively conduct feature-level diffeomorphic registration. The feature-level registration is implemented symmetrically to ensure inverse consistency. We independently carry out the forward and backward feature-level registration and average the estimated bidirectional velocity fields for a more robust estimate. Finally, we employ a symmetric multi-scale similarity loss to train the network. Experimental results on three public datasets, including Mindboggle101, CANDI, and OAI, show that our method significantly outperforms others, demonstrating that the proposed network achieves accurate alignment and generates deformation fields with the expected properties. Our code will be available at https://github.com/zhangliutong/SPnet.
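Inverse consistency, the property the symmetric design above enforces, can be checked numerically: composing the forward displacement with the backward field sampled at the warped positions should bring every point back to itself. A 1-D sketch of that check (the paper works with 3-D velocity fields; this toy version only illustrates the property):

```python
import numpy as np

def inverse_consistency_error(u_fwd, u_bwd):
    """Mean inverse-consistency error for a pair of 1-D displacement fields:
    composing x -> x + u_fwd(x) with the backward field sampled at the warped
    positions should return each grid point to itself (zero residual)."""
    n = len(u_fwd)
    x = np.arange(n, dtype=float)
    warped = x + u_fwd
    # sample the backward field at the warped positions (linear interpolation)
    u_bwd_at_warped = np.interp(warped, x, u_bwd)
    residual = u_fwd + u_bwd_at_warped   # zero for perfectly consistent fields
    return np.abs(residual).mean()

# a constant shift and its exact negation are perfectly inverse-consistent
u = np.full(10, 2.0)
print(inverse_consistency_error(u, -u))  # 0.0
```

Registration networks that estimate the forward and backward fields independently typically report a nonzero residual here; symmetric formulations like the one above drive it toward zero by construction.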
Affiliation(s)
- Liutong Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Guochen Ning
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Lei Zhou
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China.
|
41
|
Zhuang Z, Si L, Wang S, Xuan K, Ouyang X, Zhan Y, Xue Z, Zhang L, Shen D, Yao W, Wang Q. Knee Cartilage Defect Assessment by Graph Representation and Surface Convolution. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:368-379. [PMID: 36094985 DOI: 10.1109/tmi.2022.3206042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Knee osteoarthritis (OA) is the most common form of osteoarthritis and a leading cause of disability. Cartilage defects are regarded as major manifestations of knee OA and are visible by magnetic resonance imaging (MRI). Early detection and assessment of knee cartilage defects are therefore important for protecting patients from knee OA. Accordingly, many attempts have been made at knee cartilage defect assessment by applying convolutional neural networks (CNNs) to knee MRI. However, the physiologic characteristics of the cartilage may hinder such efforts: the cartilage is a thin curved layer, meaning that only a small portion of voxels in knee MRI can contribute to the cartilage defect assessment; heterogeneous scanning protocols further challenge the feasibility of CNNs in clinical practice; and CNN-based knee cartilage evaluation results lack interpretability. To address these challenges, we model the cartilage's structure and appearance from knee MRI as a graph representation, which is capable of handling highly diverse clinical data. Then, guided by the cartilage graph representation, we design a non-Euclidean deep learning network with a self-attention mechanism to extract local and global cartilage features and to derive the final assessment together with a visualized result. Our comprehensive experiments show that the proposed method yields superior performance in knee cartilage defect assessment, along with convenient 3D visualization for interpretability.
|
42
|
Mahdi H, Hardisty M, Fullerton K, Vachhani K, Nam D, Whyne C. Open-source pipeline for automatic segmentation and microstructural analysis of murine knee subchondral bone. Bone 2023; 167:116616. [PMID: 36402366 DOI: 10.1016/j.bone.2022.116616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Revised: 11/11/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022]
Abstract
BACKGROUND μCT images are commonly analysed to assess changes in bone density and microstructure in preclinical murine models. Several platforms provide automated analysis of bone microstructural parameters from volumetric regions of interest (ROIs). However, segmentation of the subchondral bone regions to create the volumetric ROIs remains a manual and time-consuming task. This study aimed to develop an automated end-to-end pipeline, combining segmentation and microstructural analysis, to evaluate subchondral bone in the mouse proximal knee. METHODS A segmented dataset of μCT scans from 62 knees (healthy and arthritic) from 10-week-old male C57BL/6 mice was used to train a U-Net-type architecture to automate segmentation of the subchondral trabecular bone. These segmentations were used in tandem with the original scans as input for microstructural analysis along with thresholded trabecular bone. Manually and U-Net-segmented ROIs were fed into two available pipelines for microstructural analysis: the ITKBoneMorphometry library and CTAn (SkyScan). Outcome parameters were compared between pipelines, including bone volume (BV), total volume (TV), BV/TV, trabecular number (TbN), trabecular thickness (TbTh), trabecular separation (TbSp), and bone surface density (BSBV). RESULTS There was good agreement for all bone measures comparing the manual and U-Net pipelines using ITK (R = 0.88-0.98) and CTAn (R = 0.91-0.98). ITK and CTAn showed good agreement for BV, TV, BV/TV, TbTh and BSBV (R = 0.9-0.98). However, limited agreement was seen for TbN (R = 0.73) and TbSp (R = 0.59) because of methodological differences in how spacing is evaluated. Microstructural parameters generated from manual and automatic segmentations showed high correlation across all measures. The CTAn pipeline yielded strong R2 values (0.83-0.96) and very strong agreement based on ICC (0.90-0.98). The ITK pipeline yielded similarly high R2 values (0.91-0.96, except for TbN at 0.77) and ICC values (0.88-0.98). The automated segmentations yield lower average values for BV, TV and BV/TV (ranging from 14 % to 6.3 %), but the differences were not found to be influenced by the mean ROI values. CONCLUSIONS This integrated pipeline seamlessly automates both segmentation and quantification of the proximal tibial subchondral bone microstructure. It allows the analysis of large volumes of data, and its open-source nature may enable the standardization of microstructural analysis of trabecular bone across different research groups.
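Of the outcome parameters above, the bone volume fraction BV/TV is the simplest to compute once an ROI is segmented: it is the share of bone-labelled voxels within the region. A toy sketch on a synthetic binary volume (no real μCT data involved):

```python
import numpy as np

def bone_volume_fraction(mask):
    """BV/TV from a binary trabecular-bone mask inside a volumetric ROI:
    bone voxels (BV) over total ROI voxels (TV)."""
    bv = int(mask.sum())
    tv = int(mask.size)
    return bv, tv, bv / tv

# toy 4x4x4 ROI with one slab of "bone" voxels -> BV/TV = 0.25
roi = np.zeros((4, 4, 4), dtype=np.uint8)
roi[:, :, 0] = 1
bv, tv, bvtv = bone_volume_fraction(roi)
print(bv, tv, bvtv)  # 16 64 0.25
```

Parameters such as TbTh and TbSp require distance-transform or sphere-fitting methods (which is exactly where the ITK and CTAn pipelines diverge for TbN and TbSp in the study above), but BV/TV is a straightforward voxel count.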
Affiliation(s)
- Hamza Mahdi
- Sunnybrook Research Institute, Holland Musculoskeletal Research Program, Canada
- Michael Hardisty
- Sunnybrook Research Institute, Holland Musculoskeletal Research Program, Canada
- Kelly Fullerton
- Sunnybrook Research Institute, Holland Musculoskeletal Research Program, Canada
- Kathak Vachhani
- Sunnybrook Research Institute, Holland Musculoskeletal Research Program, Canada
- Diane Nam
- Sunnybrook Research Institute, Holland Musculoskeletal Research Program, Canada
- Cari Whyne
- Sunnybrook Research Institute, Holland Musculoskeletal Research Program, Canada.
|
43
|
Chen C, Qi S, Zhou K, Lu T, Ning H, Xiao R. Pairwise attention-enhanced adversarial model for automatic bone segmentation in CT images. Phys Med Biol 2023; 68. [PMID: 36634367 DOI: 10.1088/1361-6560/acb2ab] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Accepted: 01/12/2023] [Indexed: 01/14/2023]
Abstract
Objective. Bone segmentation is a critical step in screw placement navigation. Although deep learning methods have driven rapid progress in bone segmentation, separating individual bones locally remains challenging due to irregular shapes and similar representational features. Approach. In this paper, we propose the pairwise attention-enhanced adversarial model (Pair-SegAM) for automatic bone segmentation in computed tomography images, comprising two parts: a segmentation model and a discriminator. Considering that the distribution of predictions from the segmentation model contains complicated semantics, we improve the discriminator to strengthen its awareness of the target region, improving the parsing of semantic features. Pair-SegAM has a pairwise structure that uses two calculation mechanisms to build pairwise attention maps; we then apply semantic fusion to filter unstable regions. The improved discriminator therefore provides more refined information to capture the bone outline, effectively enhancing the segmentation model. Main results. We selected two bone datasets to assess Pair-SegAM and evaluated our method against several bone segmentation models and recent adversarial models on both datasets. The experimental results show that our method not only exhibits superior bone segmentation performance but also generalizes effectively. Significance. Our method provides more efficient segmentation of specific bones and has the potential to be extended to other semantic segmentation domains.
Affiliation(s)
- Cheng Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Siyu Qi
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Kangneng Zhou
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Tong Lu
- Visual 3D Medical Science and Technology Development Co. Ltd, Beijing 100082, People's Republic of China
- Huansheng Ning
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Ruoxiu Xiao
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Shunde Innovation School, University of Science and Technology Beijing, Foshan 100024, People's Republic of China
44
Bonaldi L, Pretto A, Pirri C, Uccheddu F, Fontanella CG, Stecco C. Deep Learning-Based Medical Images Segmentation of Musculoskeletal Anatomical Structures: A Survey of Bottlenecks and Strategies. Bioengineering (Basel) 2023; 10:137. [PMID: 36829631] [PMCID: PMC9952222] [DOI: 10.3390/bioengineering10020137]
Abstract
By leveraging recent developments in artificial intelligence algorithms, several medical sectors have benefited from automatic tools that segment anatomical structures in bioimages. Segmentation of the musculoskeletal system is key for studying alterations in anatomical tissue and supporting medical interventions. The clinical use of such tools requires an understanding of the proper methods for interpreting data and evaluating their performance. This systematic review presents the common bottlenecks in the analysis of musculoskeletal structures (e.g., small sample size, data inhomogeneity) and the strategies different authors use to address them. A search was performed in the PubMed database with the following keywords: deep learning, musculoskeletal system, segmentation. A total of 140 articles published up until February 2022 were obtained and analyzed according to the PRISMA framework in terms of anatomical structures, bioimaging techniques, pre-/post-processing operations, training/validation/testing subset creation, network architecture, loss functions, performance indicators, and so on. Several common trends emerged from this survey; however, the different methods need to be compared and discussed based on each specific case study (anatomical region, medical imaging acquisition setting, study population, etc.). These findings can be used to guide clinicians (as end users) to better understand the potential benefits and limitations of these tools.
Affiliation(s)
- Lorenza Bonaldi
- Department of Civil, Environmental and Architectural Engineering, University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Andrea Pretto
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Carmelo Pirri
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Francesca Uccheddu
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Chiara Giulia Fontanella
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Correspondence: Tel.: +39-049-8276754
- Carla Stecco
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
45
Kulseng CPS, Nainamalai V, Grøvik E, Geitung JT, Årøen A, Gjesdal KI. Automatic segmentation of human knee anatomy by a convolutional neural network applying a 3D MRI protocol. BMC Musculoskelet Disord 2023; 24:41. [PMID: 36650496] [PMCID: PMC9847207] [DOI: 10.1186/s12891-023-06153-y]
Abstract
BACKGROUND To study deep learning segmentation of knee anatomy with 13 anatomical classes using a magnetic resonance (MR) protocol of four three-dimensional (3D) pulse sequences, and to evaluate possible clinical usefulness. METHODS The sample comprised 40 healthy right-knee volumes from adult participants. In addition, a recently injured left knee with a previously known ACL reconstruction was included as a test subject. The MR protocol consisted of the following 3D pulse sequences: T1 TSE, PD TSE, PD FS TSE, and Angio GE. The DenseVNet neural network was used for these experiments. Five input combinations of sequences, (i) T1, (ii) T1 and FS, (iii) PD and FS, (iv) T1, PD, and FS, and (v) T1, PD, FS, and Angio, were trained with the deep learning algorithm. The Dice similarity coefficient (DSC), Jaccard index, and Hausdorff distance were used to compare the performance of the networks. RESULTS Combining all four sequences performed significantly better than the alternatives. The following DSCs (± standard deviation) were obtained for the test dataset: bone medulla 0.997 (±0.002), PCL 0.973 (±0.015), ACL 0.964 (±0.022), muscle 0.998 (±0.001), cartilage 0.966 (±0.018), bone cortex 0.980 (±0.010), arteries 0.943 (±0.038), collateral ligaments 0.919 (±0.069), tendons 0.982 (±0.005), meniscus 0.955 (±0.032), adipose tissue 0.998 (±0.001), veins 0.980 (±0.010), and nerves 0.921 (±0.071). The deep learning network correctly identified the anterior cruciate ligament (ACL) tear of the left knee, indicating potential as a future aid in orthopaedics. CONCLUSIONS The convolutional neural network proved highly capable of correctly labeling all anatomical structures of the knee joint when applied to 3D MR sequences. We have demonstrated that this deep learning model can perform automated segmentation, yielding 3D models and revealing pathology, both useful for preoperative evaluation.
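The three evaluation metrics named in this abstract are standard overlap and boundary measures for binary masks. A minimal sketch assuming NumPy and SciPy are available (the toy 2D masks are illustrative; in practice the inputs would be 3D label volumes):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B| (intersection over union)."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground coordinates
    of two masks: the max of the two directed distances."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# toy example: two 4x4 squares with a 3x3 overlap
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(round(dice(a, b), 4))     # 0.5625
print(round(jaccard(a, b), 4))  # 0.3913
```

Note that Dice and Jaccard reward volume overlap, while the Hausdorff distance penalizes the single worst boundary disagreement, which is why segmentation studies commonly report both kinds of metric.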
Affiliation(s)
- Varatharajan Nainamalai
- Norwegian University of Science and Technology, Larsgaardvegen 2, Ålesund, 6025 Norway
- Endre Grøvik
- Norwegian University of Science and Technology, Høgskoleringen 5, Trondheim, 7491 Norway
- Møre og Romsdal Hospital Trust, Postboks 1600, Ålesund, 6025 Norway
- Jonn-Terje Geitung
- Sunnmøre MR-klinikk, Langelandsvegen 15, Ålesund, 6010 Norway
- Faculty of Medicine, University of Oslo, Klaus Torgårds vei 3, Oslo, 0372 Norway
- Department of Radiology, Akershus University Hospital, Postboks 1000, Lørenskog, 1478 Norway
- Asbjørn Årøen
- Department of Orthopedic Surgery, Institute of Clinical Medicine, Akershus University Hospital, Problemveien 7, Oslo, 0315 Norway
- Oslo Sports Trauma Research Center, Norwegian School of Sport Sciences, Postboks 4014 Ullevål Stadion, Oslo, 0806 Norway
- Kjell-Inge Gjesdal
- Sunnmøre MR-klinikk, Langelandsvegen 15, Ålesund, 6010 Norway
- Norwegian University of Science and Technology, Larsgaardvegen 2, Ålesund, 6025 Norway
- Department of Radiology, Akershus University Hospital, Postboks 1000, Lørenskog, 1478 Norway
46
Fan X, Zhu Q, Tu P, Joskowicz L, Chen X. A review of advances in image-guided orthopedic surgery. Phys Med Biol 2023; 68. [PMID: 36595258] [DOI: 10.1088/1361-6560/acaae9]
Abstract
Orthopedic surgery remains technically demanding due to complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased surgical risk and improved operative results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR) and robotics to image-guided spine surgery, joint arthroplasty, fracture reduction and bone tumor resection. For the pre-operative stage, key technologies of AI- and DL-based medical image segmentation, 3D visualization and surgical planning are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration and real-time navigation techniques is reviewed. Furthermore, the combination of surgical navigation systems with AR and robotic technology is also discussed. Finally, the current issues and prospects of IGOS systems are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers and researchers involved in research and development in this area.
Affiliation(s)
- Xingqi Fan
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Qiyang Zhu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
47
Chen S, Zhong L, Qiu C, Zhang Z, Zhang X. Transformer-based multilevel region and edge aggregation network for magnetic resonance image segmentation. Comput Biol Med 2023; 152:106427. [PMID: 36543009] [DOI: 10.1016/j.compbiomed.2022.106427]
Abstract
To improve the quality of magnetic resonance (MR) image edge segmentation, some researchers have trained networks with additional edge labels to extract edge information and aggregate it with region information, making significant progress. However, due to the intrinsic locality of convolution operations, convolutional-neural-network-based region and edge aggregation is limited in modeling long-range information. To solve this problem, we proposed a novel transformer-based multilevel region and edge aggregation network for MR image segmentation. To the best of our knowledge, this is the first work on transformer-based region and edge aggregation. We first extract multilevel region and edge features using a dual-branch module. Then, the region and edge features at different levels are inferred and aggregated through multiple transformer-based inference modules to form multilevel complementary features. Finally, an attention feature selection module aggregates these complementary features with the corresponding level's region and edge features to decode them. We evaluated our method on a public MR dataset, the Medical Image Computing and Computer-Assisted Intervention atrial segmentation challenge (ASC), and on a private MR dataset of the infrapatellar fat pad (IPFP). Our method achieved a Dice score of 93.2% for ASC and 91.9% for IPFP. Compared with other 2D segmentation methods, our method improved the Dice score by 0.6% for ASC and 3.0% for IPFP.
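Edge labels of the kind mentioned above are commonly derived from the region masks themselves via a morphological gradient. A minimal sketch of that generic preprocessing step, assuming SciPy (this illustrates the idea of edge supervision, not this paper's exact pipeline):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def edge_label(mask, thickness=1):
    """Derive an edge label from a binary region mask: the boundary is the
    set of foreground pixels removed by `thickness` erosion steps."""
    eroded = binary_erosion(mask, iterations=thickness)
    return mask & ~eroded

# toy example: a filled 5x5 square yields its one-pixel-wide outline
region = np.zeros((7, 7), bool)
region[1:6, 1:6] = True          # 25 foreground pixels
edge = edge_label(region)
print(edge.sum())                # 16 boundary pixels
```

The derived edge map lies entirely inside the region mask, so the region and edge targets stay mutually consistent, which is what lets a network train against both simultaneously.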
Affiliation(s)
- Shaolong Chen
- School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, 518107, China
- Lijie Zhong
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics·Guangdong Province), Guangzhou, 510630, China
- Changzhen Qiu
- School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, 518107, China
- Zhiyong Zhang
- School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, 518107, China
- Xiaodong Zhang
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics·Guangdong Province), Guangzhou, 510630, China
48
Rodriguez-Vila B, Gonzalez-Hospital V, Puertas E, Beunza JJ, Pierce DM. Democratization of deep learning for segmenting cartilage from MRIs of human knees: Application to data from the osteoarthritis initiative. J Orthop Res 2022. [PMID: 36573479] [DOI: 10.1002/jor.25509]
Abstract
In this study, we aimed to democratize access to convolutional neural networks (CNNs) for segmenting cartilage volumes, generating state-of-the-art results for specialized, real-world applications in hospitals and research. Segmentation of cross-sectional and/or longitudinal magnetic resonance (MR) images of articular cartilage facilitates both clinical management of joint damage/disease and fundamental research. Manual delineation of such images is a time-consuming task susceptible to high intra- and interoperator variability and prone to errors. Thus, enabling reliable and efficient analyses of MRIs of cartilage requires automated segmentation of cartilage volumes. Two main limitations arise in the development of hospital- or population-specific deep learning (DL) models for image segmentation: specialized knowledge and specialized hardware. We present a relatively easy and accessible implementation of a DL model to automatically segment MRIs of human knees with state-of-the-art accuracy. In representative examples, we trained CNN models in 6-8 h and obtained results quantitatively comparable to the state of the art for every anatomical structure. We established and evaluated our methods using two publicly available MRI datasets originating from the Osteoarthritis Initiative, Stryker Imorphics, and Zuse Institute Berlin (ZIB) as representative test cases. We used Google Colab to edit and adapt the Python code and to select a runtime environment leveraging high-performance graphics processing units. We designed our solution so that novice users can apply it to any dataset with relatively few adaptations, requiring only basic programming skills. To facilitate adoption, we provide a complete guideline for using our methods and software, as well as the software tools themselves.
Clinical significance: We establish and detail methods that clinical personnel can apply to create their own DL models, without specialized knowledge of DL or specialized hardware/infrastructure, and obtain results comparable with the state of the art, facilitating both clinical management of joint damage/disease and fundamental research.
Affiliation(s)
- Borja Rodriguez-Vila
- Department of Electronics, Universidad Rey Juan Carlos, Madrid, Spain
- Medical Image Analysis and Biometry Laboratory, Universidad Rey Juan Carlos, Madrid, Spain
- IAsalud, School for Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
- Vera Gonzalez-Hospital
- IAsalud, School for Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
- Enrique Puertas
- IAsalud, School for Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
- Department of Computer Science and Technology, School of Architecture, Engineering and Design, Universidad Europea de Madrid, Madrid, Spain
- Juan-Jose Beunza
- IAsalud, School for Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
- Department of Medicine, School of Biomedical and Health Sciences, Universidad Europea de Madrid, Madrid, Spain
- David M Pierce
- Department of Mechanical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
49
Gibbons KD, Malbouby V, Alvarez O, Fitzpatrick CK. Robust automatic hexahedral cartilage meshing framework enables population-based computational studies of the knee. Front Bioeng Biotechnol 2022; 10:1059003. [PMID: 36568304] [PMCID: PMC9780478] [DOI: 10.3389/fbioe.2022.1059003]
Abstract
Osteoarthritis of the knee is increasingly prevalent as our population ages, representing an increasing financial burden and severely impacting quality of life. The invasiveness of in vivo procedures and the high cost of cadaveric studies have left computational tools uniquely suited to study knee biomechanics. Developments in deep learning have great potential for efficiently generating large-scale datasets to enable researchers to perform population-sized investigations, but the time and effort associated with producing robust hexahedral meshes has been a limiting factor in expanding finite element studies to encompass a population. Here we developed a fully automated pipeline capable of taking magnetic resonance knee images and producing a working finite element simulation. We trained an encoder-decoder convolutional neural network to perform semantic image segmentation on the Imorphics dataset provided through the Osteoarthritis Initiative. The Imorphics dataset contains 176 image sequences with varying levels of cartilage degradation. Starting from an open-source swept-extrusion meshing algorithm, we developed it further until it could produce high-quality meshes for every sequence, and we applied a template-mapping procedure to automatically place soft-tissue attachment points. The meshing algorithm produced simulation-ready meshes for all 176 sequences, regardless of whether provided (manually reconstructed) or predicted (automatically generated) segmentation labels were used. The average time to mesh all bones and cartilage tissues was under 2 min per knee on an AMD Ryzen 5600X processor, using a parallel pool of three workers for bone meshing followed by a pool of four workers for the four cartilage tissues. Of the 176 sequences with provided segmentation labels, 86% of the resulting meshes completed a simulated flexion-extension activity.
We used a reserved testing dataset of 28 sequences unseen during network training to produce simulations derived from predicted labels. We compared tibiofemoral contact mechanics between manual and automated reconstructions for the 24 pairs of successful finite element simulations from this set, obtaining mean root-mean-squared differences under 20% of their respective min-max norms. In combination with further advancements in deep learning, this framework represents a feasible pipeline for producing population-sized finite element studies of the natural knee from subject-specific models.
50
Chadoulos CG, Tsaopoulos DE, Moustakidis S, Tsakiridis NL, Theocharis JB. A novel multi-atlas segmentation approach under the semi-supervised learning framework: Application to knee cartilage segmentation. Comput Methods Programs Biomed 2022; 227:107208. [PMID: 36384059] [DOI: 10.1016/j.cmpb.2022.107208]
Abstract
BACKGROUND AND OBJECTIVE Multi-atlas segmentation techniques, which rely on an atlas library comprised of training images labeled by an expert, have proven their effectiveness in multiple automatic segmentation applications. However, the use of exhaustive patch libraries combined with voxel-wise labeling incurs a large computational cost in terms of memory requirements and execution times. METHODS To address this shortcoming, we propose a novel two-stage multi-atlas approach designed under the Semi-Supervised Learning (SSL) framework. The main properties of our method are as follows. First, instead of voxel-wise labeling, target voxels are labeled by exploiting the spectral content of globally sampled datasets from the target image, along with their spatially correspondent data collected from the atlases. Following SSL, voxel classification is boosted by incorporating unlabeled data from the target image in addition to the labeled data from the atlas library. Our scheme constructively integrates several fruitful concepts, including sparse reconstructions of voxels from linear neighborhoods, HOG feature descriptors of patches/regions, and label propagation via sparse graph constructions. Segmentation of the target image is carried out in two stages: stage 1 focuses on the sampling and labeling of global data, while stage 2 undertakes the same tasks for the out-of-sample data. Finally, we propose different graph-based methods for labeling the global data and extend them to deal with the out-of-sample voxels. RESULTS A thorough experimental investigation was conducted on 76 subjects provided by the publicly accessible Osteoarthritis Initiative (OAI) repository.
Comparative results and statistical analysis demonstrate that the suggested methodology exhibits superior segmentation performance compared to existing patch-based methods across all evaluation metrics (DSC: 88.89%, Precision: 89.86%, Recall: 88.12%), while requiring a considerably reduced computational load (>70% reduction in average execution time with respect to other patch-based methods). In addition, our approach compares favorably against non-patch-based and deep learning methods in terms of accuracy (on the 3-class problem). A final experiment on a 5-class setting of the problem demonstrates that our approach achieves performance comparable to existing state-of-the-art knee cartilage segmentation methods (DSC: 88.22% and 85.84% for femoral and tibial cartilage, respectively).
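The label-propagation ingredient mentioned in this abstract can be sketched in its generic form: unlabeled graph nodes iteratively absorb the affinity-weighted label scores of their neighbors while labeled nodes stay clamped to their known classes. A minimal dense-matrix illustration (the tiny chain graph and variable names are assumptions for exposition, not the paper's sparse graph construction):

```python
import numpy as np

def propagate_labels(W, Y, labeled, n_iter=100):
    """Iterative label propagation on a graph with affinity matrix W.
    Each step replaces every node's label scores F with the
    affinity-weighted average of its neighbors' scores, then re-clamps
    the labeled nodes to their known one-hot rows of Y."""
    P = W / W.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = Y[labeled]            # clamp known labels
    return F.argmax(axis=1)

# toy chain graph 0-1-2-3-4: node 0 labeled class 0, node 4 labeled class 1
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
Y = np.zeros((5, 2)); Y[0, 0] = 1; Y[4, 1] = 1
labeled = np.array([True, False, False, False, True])
print(propagate_labels(W, Y, labeled))   # nodes near 0 get class 0, near 4 class 1
```

In a voxel-labeling setting, each node would be a sampled voxel and W a sparse affinity built from feature similarity, which is what makes the approach scale beyond toy graphs like this one.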
Affiliation(s)
- Christos G Chadoulos
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece
- Dimitrios E Tsaopoulos
- Institute for Bio-Economy and Agri-Technology, Centre for Research and Technology Hellas, Volos, 38333, Greece
- Nikolaos L Tsakiridis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece
- John B Theocharis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece