1
Ditmer S, Dwenger N, Jensen LN, Kim H, Boel RV, Ghaffari A, Rahbek O. Fully automatic system to detect and segment the proximal femur in pelvic radiographic images for Legg-Calvé-Perthes disease. J Orthop Res 2024; 42:1074-1085. [PMID: 38053300 DOI: 10.1002/jor.25761]
Abstract
This study aimed to develop a method using computer vision techniques to accurately detect and delineate the proximal femur in radiographs of Legg-Calvé-Perthes disease (LCPD) patients. Currently, evaluating femoral head deformity, a crucial predictor of LCPD outcomes, relies on unreliable categorical and qualitative classifications. To address this limitation, we employed the pretrained object detection model YOLOv5 to detect the proximal femur on over 2000 radiographs, including images of shoulders and chests, to enhance robustness and generalizability. Subsequently, we utilized the U-Net convolutional neural network architecture for image segmentation of the proximal femur in more than 800 manually annotated images of stage IV LCPD. The results demonstrate outstanding performance, with the object detection model achieving high accuracy (mean average precision of 0.99) and the segmentation model attaining an accuracy score of 91%, a Dice coefficient of 0.75, and a binary IoU score of 0.85 on the held-out test set. The proposed fully automatic proximal femur detection and segmentation system offers a promising approach to accurately detect and delineate the proximal femoral bone contour in radiographic images, which is essential for further image analysis in LCPD patients. Clinical significance: This study highlights the potential of computer vision techniques for enhancing the reliability of Legg-Calvé-Perthes disease staging and outcome prediction.
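The overlap metrics quoted above, the Dice coefficient and binary IoU, are simple set comparisons between a predicted mask and a ground-truth mask. A minimal NumPy sketch of both (an illustration on invented toy masks, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def binary_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy masks: two 4-pixel squares overlapping in 2 pixels.
truth = np.zeros((4, 4), dtype=np.uint8)
truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 2:4] = 1
print(round(dice_coefficient(pred, truth), 3))  # 0.5
print(round(binary_iou(pred, truth), 3))        # 0.333
```

Note that Dice weights the intersection twice, so for the same pair of masks it is always at least as large as IoU, which is consistent with the 0.75 Dice versus 0.85 binary IoU above being computed over different pixel populations (the latter typically including background agreement).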
Affiliation(s)
- Sofie Ditmer
- School of Communication and Culture, University of Aarhus, Aarhus, Denmark
- Nicole Dwenger
- School of Communication and Culture, University of Aarhus, Aarhus, Denmark
- Louise N Jensen
- School of Communication and Culture, University of Aarhus, Aarhus, Denmark
- Harry Kim
- Scottish Rite for Children, Dallas, Texas, USA
- Rikke V Boel
- Department of Interdisciplinary Orthopedics, Aalborg University Hospital, Aalborg, Denmark
- Arash Ghaffari
- Department of Interdisciplinary Orthopedics, Aalborg University Hospital, Aalborg, Denmark
- Ole Rahbek
- Department of Interdisciplinary Orthopedics, Aalborg University Hospital, Aalborg, Denmark
2
Gartland C, Curran E, Healy J, Lynham RS, Nowlan NC, Green C, Redmond SJ. Automatic Segmentation of the Paediatric Femoral Head. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083019 DOI: 10.1109/embc40787.2023.10340016]
Abstract
Developmental dysplasia of the hip (DDH) is a developmental deformity occurring in 0.1-3.4% of infants. Timely surgical intervention can ameliorate the condition in stable hips and reduce future cases of osteoarthritis and total hip replacement. However, current definitions of DDH are subjective and would therefore benefit from a more objective and reliable assessment metric. Since the shape of the femoral head and its congruence with the acetabulum are disrupted by DDH, analysis of the femoral head could play a role in the development of a novel objective morphological metric for stable DDH. Therefore, this paper aimed to segment the paediatric femoral head in stable hips from radiographs, which has not been attempted before in the chosen focus age group (1-16 years), in which the pelvis and hip joint undergo significant development. Two techniques were compared against a baseline U-Net: data augmentation and region-of-interest (ROI) networks. Four models were developed with neither, just one, or both techniques. Evaluated using tenfold cross-validation, the U-Net trained with both techniques achieved the best results, with a Dice Similarity Coefficient (DSC) of 0.951 ± 0.037 (mean ± standard deviation, calculated over 720 images). Future work will use this segmentation algorithm to accurately characterise hip joint morphology and estimate the benefit of early surgical intervention in DDH.
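The tenfold cross-validation used in the evaluation above partitions the dataset into ten folds and rotates which fold serves as the validation set. A minimal sketch of the fold logic (illustrative standard practice, not the authors' exact pipeline):

```python
import numpy as np

def kfold_indices(n_samples: int, k: int = 10, seed: int = 0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once, then slice
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# 720 images, as in the evaluation above: ten folds of 72 validation images.
splits = list(kfold_indices(720, k=10))
print(len(splits), len(splits[0][1]))  # 10 72
```

Each image appears in exactly one validation fold, so the reported mean ± standard deviation aggregates predictions over all 720 images.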
3
Soydan Z, Saglam Y, Key S, Kati YA, Taskiran M, Kiymet S, Salturk T, Aydin AS, Bilgili F, Sen C. An AI based classifier model for lateral pillar classification of Legg-Calve-Perthes. Sci Rep 2023; 13:6870. [PMID: 37106026 PMCID: PMC10140055 DOI: 10.1038/s41598-023-34176-x]
Abstract
We compared doctors against a convolutional neural network (CNN) that we had trained using our own method for the Lateral Pillar Classification (LPC) of Legg-Calve-Perthes Disease (LCPD). Artificial intelligence (AI) applications in medicine frequently require thousands of training examples. Since we did not have enough real patient radiographs to train a CNN, we devised a novel method to obtain them: we trained the CNN model on data created by modifying normal hip radiographs, so no real patient radiographs were used during the training phase. We tested the CNN model on 81 hips with LCPD. First, we assessed the interobserver reliability of the whole system and then the reliability of the CNN alone. Second, a consensus list was used to compare the results of 11 doctors and the CNN model. Percentage agreement and interobserver analysis showed that the CNN had good reliability (ICC = 0.868). The CNN achieved 76.54% classification accuracy and outperformed 9 of the 11 doctors. The CNN trained with this method can already provide better results than most of the doctors, and as training data evolve and improve, we anticipate that AI will perform significantly better than physicians.
Affiliation(s)
- Zafer Soydan
- Orthopedics and Traumatology, Bhtclinic İstanbul Tema Hastanesi, Nisantası University, Atakent Mh 4. Cadde No 36 PC, 34307, Kucukcekmece, Istanbul, Turkey
- Yavuz Saglam
- Orthopedics and Traumatology, Istanbul University Istanbul Faculty of Medicine, Istanbul, Turkey
- Sefa Key
- Orthopedics and Traumatology, Bingol State Hospital, Bingol Merkez, Turkey
- Yusuf Alper Kati
- Orthopedics and Traumatology, Antalya Egitim ve Arastirma Hastanesi, Antalya, Turkey
- Murat Taskiran
- Department of Electronics and Communication Engineering, Yildiz Technical University, Istanbul, Turkey
- Seyfullah Kiymet
- Department of Electronics and Communication Engineering, Yildiz Technical University, Istanbul, Turkey
- Tuba Salturk
- Department of Informatics, Yildiz Technical University, Istanbul, Turkey
- Ahmet Serhat Aydin
- Orthopedics and Traumatology, Istanbul University Istanbul Faculty of Medicine, Istanbul, Turkey
- Fuat Bilgili
- Orthopedics and Traumatology, Istanbul University Istanbul Faculty of Medicine, Istanbul, Turkey
- Cengiz Sen
- Orthopedics and Traumatology, Istanbul University Istanbul Faculty of Medicine, Istanbul, Turkey
4
Fan X, Zhu Q, Tu P, Joskowicz L, Chen X. A review of advances in image-guided orthopedic surgery. Phys Med Biol 2023; 68. [PMID: 36595258 DOI: 10.1088/1361-6560/acaae9]
Abstract
Orthopedic surgery remains technically demanding due to the complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased the surgical risk and improved the operation results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR) and robotics in image-guided spine surgery, joint arthroplasty, fracture reduction and bone tumor resection. For the pre-operative stage, key technologies of AI and DL based medical image segmentation, 3D visualization and surgical planning procedures are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration and real-time navigation are reviewed. Furthermore, the combination of the surgical navigation system with AR and robotic technology is also discussed. Finally, the current issues and prospects of the IGOS system are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers, and researchers involved in the research and development of this area.
Affiliation(s)
- Xingqi Fan
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Qiyang Zhu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
5
Prijs J, Liao Z, To MS, Verjans J, Jutte PC, Stirler V, Olczak J, Gordon M, Guss D, DiGiovanni CW, Jaarsma RL, IJpma FFA, Doornberg JN. Development and external validation of automated detection, classification, and localization of ankle fractures: inside the black box of a convolutional neural network (CNN). Eur J Trauma Emerg Surg 2022; 49:1057-1069. [PMID: 36374292 PMCID: PMC10175446 DOI: 10.1007/s00068-022-02136-1]
Abstract
Purpose
Convolutional neural networks (CNNs) are increasingly being developed for automated fracture detection in orthopaedic trauma surgery. Studies to date, however, are limited to providing classification based on the entire image—and only produce heatmaps for approximate fracture localization instead of delineating exact fracture morphology. Therefore, we aimed to answer (1) what is the performance of a CNN that detects, classifies, localizes, and segments an ankle fracture, and (2) would this be externally valid?
Methods
The training set included 326 isolated fibula fractures and 423 non-fracture radiographs. The Detectron2 implementation of the Mask R-CNN was trained with labelled and annotated radiographs. The internal validation (or ‘test set’) and external validation sets consisted of 300 and 334 radiographs, respectively. Consensus agreement between three experienced fellowship-trained trauma surgeons was defined as the ground truth label. Diagnostic accuracy and area under the receiver operator characteristic curve (AUC) were used to assess classification performance. The Intersection over Union (IoU) was used to quantify accuracy of the segmentation predictions by the CNN, where a value of 0.5 is generally considered an adequate segmentation.
Results
The final CNN was able to classify fibula fractures according to four classes (Danis-Weber A, B, C, and No Fracture) with AUC values ranging from 0.93 to 0.99. Diagnostic accuracy was 89% on the test set, with an average sensitivity of 89% and specificity of 96%. External validity was 89-90% accuracy on a set of radiographs from a different hospital. Accuracy/AUC values were 100%/0.99 for the 'No Fracture' class, 92%/0.99 for 'Weber B', 88%/0.93 for 'Weber C', and 76%/0.97 for 'Weber A'. For the fracture bounding-box predictions by the CNN, a mean IoU of 0.65 (SD ± 0.16) was observed. The fracture segmentation predictions by the CNN resulted in a mean IoU of 0.47 (SD ± 0.17).
Conclusions
This study presents a look into the ‘black box’ of CNNs and represents the first automated delineation (segmentation) of fracture lines on (ankle) radiographs. The AUC values presented in this paper indicate good discriminatory capability of the CNN and substantiate further study of CNNs in detecting and classifying ankle fractures.
Level of evidence
II, Diagnostic imaging study.
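The mean bounding-box IoU of 0.65 reported above is computed per predicted/ground-truth box pair. A minimal sketch of axis-aligned box IoU (illustrative only, not the Detectron2/Mask R-CNN implementation; the boxes are invented):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes sharing a 5x10 strip: 50 / (100 + 100 - 50).
print(round(box_iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333
```

The same formula applied pixelwise to masks rather than boxes underlies the segmentation IoU of 0.47, just below the 0.5 threshold the authors cite as generally adequate.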
Affiliation(s)
- Jasper Prijs
- Department of Orthopaedic Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Department of Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Department of Orthopaedic & Trauma Surgery, Flinders Medical Centre, Flinders University, Adelaide, Australia
- Zhibin Liao
- Australian Institute for Machine Learning, Adelaide, Australia
- Minh-Son To
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
- Department of Neurosurgery, Flinders Medical Center, Adelaide, Australia
- Johan Verjans
- Australian Institute for Machine Learning, Adelaide, Australia
- Paul C Jutte
- Department of Orthopaedic Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Vincent Stirler
- Department of Orthopaedic Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Jakub Olczak
- Institute of Clinical Sciences, Danderyd University Hospital, Karolinska Institute, Solna, Sweden
- Max Gordon
- Institute of Clinical Sciences, Danderyd University Hospital, Karolinska Institute, Solna, Sweden
- Daniel Guss
- Massachusetts General Hospital, Boston, USA
- Harvard Medical School, Boston, USA
- Ruurd L Jaarsma
- Department of Orthopaedic & Trauma Surgery, Flinders Medical Centre, Flinders University, Adelaide, Australia
- Frank F A IJpma
- Department of Orthopaedic Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Job N Doornberg
- Department of Orthopaedic Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Department of Orthopaedic & Trauma Surgery, Flinders Medical Centre, Flinders University, Adelaide, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
6
Automated 3D Analysis of Clinical Magnetic Resonance Images Demonstrates Significant Reductions in Cam Morphology Following Arthroscopic Intervention in Contrast to Physiotherapy. Arthrosc Sports Med Rehabil 2022; 4:e1353-e1362. [PMID: 36033193 PMCID: PMC9402425 DOI: 10.1016/j.asmr.2022.04.020]
Abstract
Purpose
To obtain automated measurements of cam volume, surface area, and height from baseline (preintervention) and 12-month magnetic resonance (MR) images acquired from male and female patients allocated to physiotherapy (PT) or arthroscopic surgery (AS) management for femoroacetabular impingement (FAI) in the Australian FASHIoN trial.
Methods
An automated segmentation pipeline (CamMorph) was used to obtain cam morphology data from three-dimensional (3D) MR hip examinations in FAI patients classified with mild, moderate, or major cam volumes. Pairwise comparisons between baseline and 12-month cam volume, surface area, and height data were performed within the PT and AS patient groups using paired t-tests or Wilcoxon signed-rank tests.
Results
A total of 43 patients were included, with 15 PT patients (9 males, 6 females) and 28 AS patients (18 males, 10 females) for premanagement and postmanagement cam morphology assessments. Within the PT male and female patient groups, there were no significant differences between baseline and 12-month mean cam volume (male: 1269 vs 1288 mm3, t(16) = −0.39; female: 545 vs 550 mm3, t(10) = −0.78), surface area (male: 1525 vs 1491 mm2, t(16) = 0.92; female: 885 vs 925 mm2, t(10) = −0.78), maximum height (male: 4.36 vs 4.32 mm, t(16) = 0.34; female: 3.05 vs 2.96 mm, t(10) = 1.05), and average height (male: 2.18 vs 2.18 mm, t(16) = 0.22; female: 1.4 vs 1.43 mm, t(10) = −0.38). In contrast, within the AS male and female patient groups, there were significant differences between baseline and 12-month cam volume (male: 1343 vs 718 mm3, W = 0.0; female: 499 vs 240 mm3, t(18) = 2.89), surface area (male: 1520 vs 1031 mm2, t(34) = 6.48; female: 782 vs 483 mm2, t(18) = 3.02), maximum height (male: 4.3 vs 3.42 mm, W = 13.5; female: 2.85 vs 2.24 mm, t(18) = 3.04), and average height (male: 2.17 vs 1.52 mm, W = 3.0; female: 1.4 vs 0.94 mm, W = 3.0). In AS patients, 3D bone models provided good visualization of cam bone mass removal post-ostectomy.
Conclusions
Automated measurement of cam morphology from baseline (preintervention) and 12-month MR images demonstrated that cam volume, surface area, maximum height, and average height were significantly smaller in AS patients following ostectomy, whereas there were no significant differences in these cam measures in PT patients from the Australian FASHIoN study.
Level of evidence
Level II, cohort study.
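The paired tests above compare each patient against themselves at the two time points. A minimal sketch of the paired t statistic on synthetic (invented) volume data, not the trial's data:

```python
import numpy as np

def paired_t(before, after):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), with d = before - after."""
    d = np.asarray(before, float) - np.asarray(after, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size)), d.size - 1

rng = np.random.default_rng(0)
baseline = rng.normal(1300, 150, size=18)            # synthetic cam volumes (mm3)
followup = baseline - rng.normal(600, 100, size=18)  # marked post-surgical reduction

t, df = paired_t(baseline, followup)
print(f"t({df}) = {t:.2f}")  # far above 2.11, the two-sided 5% cutoff for df = 17
```

The Wilcoxon signed-rank test reported alongside is the nonparametric analogue, ranking the absolute paired differences instead of assuming they are normally distributed.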
7
Machine learning-based identification of craniosynostosis in newborns. Mach Learn Appl 2022. [DOI: 10.1016/j.mlwa.2022.100292]
8
Memiş A, Varlı S, Bilgili F. Fast and Accurate Registration of the Proximal Femurs in Bilateral Hip Joint Images by Using the Random Sub-Sample Points. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2021.04.001]
9
Valizadeh A, Shariatee M. The Progress of Medical Image Semantic Segmentation Methods for Application in COVID-19 Detection. Comput Intell Neurosci 2021; 2021:7265644. [PMID: 34840563 PMCID: PMC8611358 DOI: 10.1155/2021/7265644]
Abstract
Medical image semantic segmentation has been employed in various areas, including medical imaging, computer vision, and intelligent transportation. In this study, semantic segmentation methods are split into two groups: deep neural network methods and earlier traditional methods. The traditional methods and the published datasets for segmentation are reviewed first. Current deep neural network methods are then thoroughly explored, including fully convolutional networks, sampling methods, FCN connectors with CRF methods, dilated convolutional neural network methods, improvements in network structure, pyramid methods, multistage and multifeature methods, and supervised, semi-supervised, and unsupervised methods. Finally, a general conclusion on the use of deep-neural-network-based advances in semantic segmentation is presented.
Affiliation(s)
- Amin Valizadeh
- Department of Mechanical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
- Morteza Shariatee
- Department of Mechanical Engineering, Iowa State University, Ames, IA, USA
10
Cha JY, Yoon HI, Yeo IS, Huh KH, Han JS. Panoptic Segmentation on Panoramic Radiographs: Deep Learning-Based Segmentation of Various Structures Including Maxillary Sinus and Mandibular Canal. J Clin Med 2021; 10:2577. [PMID: 34208024 PMCID: PMC8230590 DOI: 10.3390/jcm10122577]
Abstract
Panoramic radiographs, also known as orthopantomograms, are routinely used in most dental clinics. However, it has been difficult to develop an automated method that detects the various structures present in these radiographs. One of the main reasons for this is that structures of various sizes and shapes are collectively shown in the image. In order to solve this problem, the recently proposed concept of panoptic segmentation, which integrates instance segmentation and semantic segmentation, was applied to panoramic radiographs. A state-of-the-art deep neural network model designed for panoptic segmentation was trained to segment the maxillary sinus, maxilla, mandible, mandibular canal, normal teeth, treated teeth, and dental implants on panoramic radiographs. Unlike conventional semantic segmentation, each object in the tooth and implant classes was individually classified. For evaluation, the panoptic quality, segmentation quality, recognition quality, intersection over union (IoU), and instance-level IoU were calculated. The evaluation and visualization results showed that the deep learning-based artificial intelligence model can perform panoptic segmentation of images, including those of the maxillary sinus and mandibular canal, on panoramic radiographs. This automatic machine learning method might assist dental practitioners to set up treatment plans and diagnose oral and maxillofacial diseases.
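The panoptic quality (PQ) metric evaluated above factors into segmentation quality (SQ) and recognition quality (RQ); in the standard definition (Kirillov et al.), predicted and ground-truth segments are matched when their IoU exceeds 0.5. A minimal sketch with invented IoU values, not the paper's results:

```python
def panoptic_quality(match_ious, n_fp, n_fn):
    """PQ = SQ * RQ. match_ious holds the IoU values (each > 0.5) of matched
    predicted/ground-truth segment pairs; SQ is their mean (segmentation
    quality) and RQ = TP / (TP + 0.5*FP + 0.5*FN) (recognition quality)."""
    tp = len(match_ious)
    if tp == 0:
        return 0.0, 0.0, 0.0
    sq = sum(match_ious) / tp
    rq = tp / (tp + 0.5 * n_fp + 0.5 * n_fn)
    return sq * rq, sq, rq

# Three matched segments, one spurious prediction, one missed ground truth.
pq, sq, rq = panoptic_quality([0.9, 0.8, 0.7], n_fp=1, n_fn=1)
print(round(pq, 2), round(sq, 2), round(rq, 2))  # 0.6 0.8 0.75
```

Splitting PQ this way separates how well matched segments are delineated (SQ) from how many segments are found at all (RQ), which is useful when structures as different as individual teeth and the mandible share one image.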
Affiliation(s)
- Jun-Young Cha
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Hyung-In Yoon
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- In-Sung Yeo
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Jung-Suk Han
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
11
Abstract
Deep learning is transforming most areas of science and technology, including electron microscopy. This review paper offers a practical perspective aimed at developers with limited familiarity. For context, we review popular applications of deep learning in electron microscopy. Next, we discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.
12
Memiş A, Varlı S, Bilgili F. A novel approach for computerized quantitative image analysis of proximal femur bone shape deformities based on the hip joint symmetry. Artif Intell Med 2021; 115:102057. [PMID: 34001317 DOI: 10.1016/j.artmed.2021.102057]
Abstract
As a result of most of the bone disorders seen in hip joints, shape deformities occur in the structural form of the hip joint components. Image-based quantitative analysis and assessment of these deformities in bone shapes are very important for the evaluation, treatment, and prognosis of the various hip joint bone disorders. In this article, a novel approach for the image-based computerized quantitative analysis of proximal femur shape deformities is presented. In the proposed approach, shape deformities of the pathological proximal femurs were quantified over the contralateral healthy proximal femur shape structure of the same patient in 2D by taking the hip joint symmetry property of human anatomy into consideration. It is based on the idea that if the right and left proximal femurs in bilateral hip joints are highly symmetrical and also if one of the proximal femurs is healthy and the contralateral one is pathological, the non-overlapping bone shape regions can represent the deformities in pathological proximal femurs when both proximal femurs are registered to overlap each other. In the methodological process of the proposed study, a set of image preprocessing operations was primarily performed on the raw magnetic resonance imaging (MRI) data. Then, the segmented proximal femurs in bilateral hip joint images were automatically aligned with the Iterative Closest Point (ICP) rigid registration method. Following the registration, a set of image postprocessing operations was performed on the images of proximal femurs aligned. In the quantification phase, the bone shape deformities in pathological proximal femurs were quantified simply in terms of the mismatching area in 2D by measuring a shape variation index representing the total bone shape deformity ratio. To evaluate the proposed quantitative shape analysis approach, bilateral hip joints in a total of 13 coronal MRI sections of 13 patients with Legg-Calve-Perthes disease (LCPD) were used. 
Experimental studies have shown that the proposed approach has quite promising results in the quantitative representation of the pathological proximal femur shape deformities. Furthermore, consistent results have been observed for the Waldenström classification stages of the disease. The shape deformity ratios in pathological proximal femurs were quantified as 9.44% (±1.40), 18.38% (±6.30), 24.73% (±12.42), and 27.66% (±10.41), respectively for the Initial, Fragmentation, Reossification, and Remodelling stages of LCPD with the quantification error rates of 0.29% (±0.16), 0.58% (±0.71), 1.12% (±0.82), and 0.80% (±0.98). Additionally, a mean error rate of 0.65% (±0.68) was observed for the quantified shape deformity ratios of all samples.
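The mismatching-area idea above can be illustrated as a symmetric-difference ratio between two already-registered binary masks. This is one plausible formulation sketched for illustration, not the authors' exact shape variation index, and the masks are invented:

```python
import numpy as np

def shape_deformity_ratio(healthy: np.ndarray, pathological: np.ndarray) -> float:
    """Symmetric-difference (non-overlapping) area of two registered binary
    masks, expressed as a fraction of the healthy reference area."""
    h, p = healthy.astype(bool), pathological.astype(bool)
    return np.logical_xor(h, p).sum() / h.sum()

# Invented 6x6-pixel reference silhouette; the "pathological" side has lost
# a 2x6 superior band, mimicking femoral head flattening.
healthy = np.zeros((10, 10), dtype=np.uint8)
healthy[2:8, 2:8] = 1
pathological = healthy.copy()
pathological[2:4, 2:8] = 0
print(round(float(shape_deformity_ratio(healthy, pathological)), 3))  # 0.333
```

In the study's pipeline this ratio is only meaningful after the ICP registration step, since any residual misalignment between the mirrored contralateral femur and the pathological one would inflate the non-overlap area.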
Affiliation(s)
- Abbas Memiş
- Department of Computer Engineering, Faculty of Electrical and Electronics Engineering, Yıldız Technical University, İstanbul, Turkey
- Songül Varlı
- Department of Computer Engineering, Faculty of Electrical and Electronics Engineering, Yıldız Technical University, İstanbul, Turkey
- Fuat Bilgili
- Department of Orthopaedics and Traumatology, İstanbul Faculty of Medicine, İstanbul University, İstanbul, Turkey