1
Tabata N, Ijichi T, Itai H, Tateishi M, Kita K, Obata A, Kawahara Y, Sonoda L, Katou S, Inoue T, Ideguchi T. [New Method of Paired Comparison for Improved Observer Shortage Using Deep Learning Models]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2024;80:605-615. PMID: 38763757. DOI: 10.6009/jjrt.2024-1446.
Abstract
PURPOSE The aim of this study was to validate the potential of substituting one observer in a paired comparison with a deep-learning observer. METHODS Phantom images were obtained using computed tomography. The standard setting was 120 kVp and 200 mA, with additional tube currents of 160, 120, 80, 40, and 20 mA, yielding six imaging conditions. Fourteen radiologic technologists with >10 years of experience conducted pairwise comparisons using Ura's method. After training, VGG16 and VGG19 models were combined into deep learning models, which were evaluated for accuracy, recall, precision, specificity, and F1 value. The human validation results were used as the standard, and the average degree of preference and the significance tests between images were compared with this standard when the deep-learning results were incorporated. RESULTS The average accuracy of the deep learning model was 82%. Relative to the standard, the average degree of preference differed by at most 0.13, at least 0, and on average 0.05. Significant differences in the test results appeared when replacing human observers with AI counterparts for the image pairs 160 mA vs. 120 mA and 200 mA vs. 160 mA. CONCLUSION In paired comparisons with a limited phantom (7-point noise evaluation), deep learning showed potential to serve as one of the observers.
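The five evaluation metrics named in this abstract follow directly from binary confusion-matrix counts. A minimal sketch (standard metric definitions; the function name and example counts are ours, not from the paper):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, recall, precision, specificity, and F1 from
    confusion-matrix counts of a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)          # sensitivity
    precision = tp / (tp + fp)       # positive predictive value
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall, "precision": precision,
            "specificity": specificity, "f1": f1}
```

For example, 8 true positives, 1 false positive, 9 true negatives, and 2 false negatives give an accuracy of 0.85 and a recall of 0.80.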
Affiliation(s)
- Nariaki Tabata: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University; Department of Radiology, Fukuoka University Chikushi Hospital
- Tetsuya Ijichi: Department of Radiology, Fukuoka University Chikushi Hospital
- Hirotaka Itai: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University; Department of Radiology, National Hospital Organization Kyushu Medical Center
- Masaru Tateishi: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
- Kento Kita: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
- Asami Obata: Department of Radiology, Fukuoka University Chikushi Hospital
- Yuna Kawahara: Department of Radiology, Fukuoka University Chikushi Hospital
- Lisa Sonoda: Department of Radiology, Fukuoka University Chikushi Hospital
- Shinichi Katou: Department of Radiology, Fukuoka University Chikushi Hospital
- Toshirou Inoue: Department of Radiology, Fukuoka University Chikushi Hospital
- Tadamitsu Ideguchi: Department of Health Sciences, Faculty of Medical Sciences, Kyushu University
2
Sultana J, Naznin M, Faisal TR. SSDL-an automated semi-supervised deep learning approach for patient-specific 3D reconstruction of proximal femur from QCT images. Med Biol Eng Comput 2024;62:1409-1425. PMID: 38217823. DOI: 10.1007/s11517-023-03013-8.
Abstract
Deep learning (DL) techniques have recently been used in medical image segmentation and in reconstructing 3D anatomies of the human body. In this work, we propose a semi-supervised DL (SSDL) approach utilizing a CNN-based 3D U-Net model for femur segmentation from sparsely annotated quantitative computed tomography (QCT) slices. Specifically, QCT slices at the proximal end of the femur, where it forms the ball-and-socket joint with the acetabulum, were annotated, and a binary segmentation mask was generated by the 3D U-Net model to segment the femur accurately. A total of 5474 QCT slices were considered for training, of which 2316 were annotated. 3D femurs were then reconstructed from the segmented slices using polynomial spline interpolation. Both the qualitative and the quantitative performance of segmentation and 3D reconstruction were satisfactory, with more than 90% accuracy achieved for all standard performance metrics considered. The Dice similarity coefficient, a spatial overlap and reproducibility metric for segmentation, was 91.8% for unseen patients and 99.2% for validated patients. Average relative errors of 12.02% and 10.75% were computed for the volume and surface area, respectively, of the 3D reconstructed femurs. The proposed approach demonstrates its effectiveness in accurately segmenting and reconstructing 3D femurs from QCT slices.
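The Dice similarity coefficient reported here is the standard overlap measure between a predicted and a reference binary mask. A minimal sketch with masks represented as sets of voxel coordinates (representation and names are ours):

```python
def dice(mask_a: set, mask_b: set) -> float:
    """Dice similarity coefficient between two binary masks,
    each given as a set of voxel coordinates: 2|A∩B| / (|A|+|B|)."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))
```

Two 3-voxel masks sharing 2 voxels score 2/3; identical masks score 1.0.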
Affiliation(s)
- Jamalia Sultana: Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
- Mahmuda Naznin: Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
- Tanvir R Faisal: Department of Mechanical Engineering, University of Louisiana at Lafayette, Lafayette, LA 70503, USA
3
Aldieri A, Biondi R, La Mattina AA, Szyszko JA, Polizzi S, Dall'Olio D, Curti N, Castellani G, Viceconti M. Development and validation of a semi-automated and unsupervised method for femur segmentation from CT. Sci Rep 2024;14:7403. PMID: 38548805. PMCID: PMC10978861. DOI: 10.1038/s41598-024-57618-6.
Abstract
Quantitative computed tomography (QCT)-based in silico models have demonstrated improved accuracy in predicting hip fractures with respect to the current gold standard, areal bone mineral density. These models require that the femur bone is segmented as a first step. This task can be challenging; in fact, it is often almost fully manual, which is time-consuming, operator-dependent, and hard to reproduce. This work proposes a semi-automated procedure for femur bone segmentation from CT images, based on the bone and joint enhancement filter and graph-cut algorithms. The performance of the semi-automated procedure was assessed on 10 subjects through comparison with standard manual segmentation, considering metrics based on the femur geometries and on the fracture risk assessed in silico from the two segmentation procedures. The average Hausdorff distance (0.03 ± 0.01 mm) and the difference union ratio (0.06 ± 0.02) computed between the manual and semi-automated segmentations were significantly higher than those computed within the manual segmentations (0.01 ± 0.01 mm and 0.03 ± 0.02). Besides, a blind qualitative evaluation revealed that the semi-automated procedure was significantly superior (p < 0.001) to the manual one in terms of fidelity to the CT. As for the hip fracture risk assessed in silico starting from both segmentations, no significant difference emerged between the two (R2 = 0.99). The proposed semi-automated segmentation procedure outperforms the manual one, shortening the segmentation time and providing a better segmentation. The method could be employed within CT-based in silico methodologies and to segment large volumes of images to train and test fully automated and supervised segmentation methods.
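The average Hausdorff distance used to compare the two segmentations is the symmetric mean nearest-neighbour distance between two surfaces. A minimal pure-Python sketch on point clouds (the paper's exact implementation is not shown; names are ours):

```python
import math

def avg_hausdorff(points_a, points_b):
    """Average symmetric Hausdorff distance between two surfaces,
    each a list of (x, y, z) points: the mean nearest-neighbour
    distance from A to B and from B to A, averaged."""
    def mean_nn(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (mean_nn(points_a, points_b) + mean_nn(points_b, points_a))
```

Identical surfaces give 0.0; two parallel planes one millimetre apart give 1.0. Brute-force nearest-neighbour search is O(|A|·|B|); real surface meshes would use a k-d tree instead.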
Affiliation(s)
- Alessandra Aldieri: PolitoBIOMedLab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy; Medical Technology Lab, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy
- Riccardo Biondi: IRCCS Bologna - Istituto delle Scienze Neurologiche di Bologna, Bologna, Italy; Department of Medical and Surgical Sciences, Alma Mater Studiorum - University of Bologna, Bologna, Italy
- Antonino A La Mattina: Medical Technology Lab, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy; Department of Industrial Engineering, Alma Mater Studiorum - University of Bologna, Bologna, Italy
- Julia A Szyszko: Medical Technology Lab, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy; Department of Industrial Engineering, Alma Mater Studiorum - University of Bologna, Bologna, Italy
- Stefano Polizzi: Department of Medical and Surgical Sciences, Alma Mater Studiorum - University of Bologna, Bologna, Italy
- Daniele Dall'Olio: IRCCS Bologna - Istituto delle Scienze Neurologiche di Bologna, Bologna, Italy
- Nico Curti: IRCCS Bologna - Istituto delle Scienze Neurologiche di Bologna, Bologna, Italy; Department of Physics and Astronomy, Alma Mater Studiorum - University of Bologna, Bologna, Italy
- Gastone Castellani: Department of Medical and Surgical Sciences, Alma Mater Studiorum - University of Bologna, Bologna, Italy
- Marco Viceconti: Medical Technology Lab, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy; Department of Industrial Engineering, Alma Mater Studiorum - University of Bologna, Bologna, Italy
4
Edelmers E, Kazoka D, Bolocko K, Sudars K, Pilmane M. Automatization of CT Annotation: Combining AI Efficiency with Expert Precision. Diagnostics (Basel) 2024;14:185. PMID: 38248062. PMCID: PMC10814874. DOI: 10.3390/diagnostics14020185.
Abstract
The integration of artificial intelligence (AI), particularly through machine learning (ML) and deep learning (DL) algorithms, marks a transformative progression in medical imaging diagnostics. This technical note elucidates a novel methodology for semantic segmentation of the vertebral column in CT scans, exemplified by a dataset of 250 patients from Riga East Clinical University Hospital. Our approach centers on the accurate identification and labeling of individual vertebrae, ranging from C1 to the sacrum-coccyx complex. Patient selection was meticulously conducted, ensuring demographic balance in age and sex, and excluding scans with significant vertebral abnormalities to reduce confounding variables. This strategic selection bolstered the representativeness of our sample, thereby enhancing the external validity of our findings. Our workflow streamlined the segmentation process by eliminating the need for volume stitching, aligning seamlessly with the methodology we present. By leveraging AI, we have introduced a semi-automated annotation system that enables initial data labeling even by individuals without medical expertise. This phase is complemented by thorough manual validation against established anatomical standards, significantly reducing the time traditionally required for segmentation. This dual approach not only conserves resources but also expedites project timelines. While this method significantly advances radiological data annotation, it is not devoid of challenges, such as the necessity for manual validation by anatomically skilled personnel and reliance on specialized GPU hardware. Nonetheless, our methodology represents a substantial leap forward in medical data semantic segmentation, highlighting the potential of AI-driven approaches to revolutionize clinical and research practices in radiology.
Affiliation(s)
- Edgars Edelmers: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
- Dzintra Kazoka: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
- Katrina Bolocko: Department of Computer Graphics and Computer Vision, Riga Technical University, LV-1048 Riga, Latvia
- Kaspars Sudars: Institute of Electronics and Computer Science, LV-1006 Riga, Latvia
- Mara Pilmane: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
5
Huang TL, Lu NH, Huang YH, Twan WH, Yeh LR, Liu KY, Chen TB. Transfer learning with CNNs for efficient prostate cancer and BPH detection in transrectal ultrasound images. Sci Rep 2023;13:21849. PMID: 38071254. PMCID: PMC10710441. DOI: 10.1038/s41598-023-49159-1.
Abstract
Early detection of prostate cancer (PCa) and benign prostatic hyperplasia (BPH) is crucial for maintaining the health and well-being of aging male populations. This study aims to evaluate the performance of transfer learning with convolutional neural networks (CNNs) for efficient classification of PCa and BPH in transrectal ultrasound (TRUS) images. A retrospective experimental design was employed, with 1380 TRUS images for PCa and 1530 for BPH. Seven state-of-the-art deep learning (DL) methods were employed as classifiers, with transfer learning applied to popular CNN architectures. Performance indices, including sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), Kappa value, and H-index (Youden's index), were used to assess the feasibility and efficacy of the CNN methods. The CNN methods with transfer learning demonstrated a high classification performance for TRUS images, with all accuracy, specificity, sensitivity, PPV, NPV, Kappa, and H-index values surpassing 0.9400. The optimal accuracy, sensitivity, and specificity reached 0.9987, 0.9980, and 0.9980, respectively, as evaluated using twofold cross-validation. The investigated CNN methods with transfer learning showcased their efficiency and ability for the classification of PCa and BPH in TRUS images. Notably, EfficientNetV2 with transfer learning displayed a high degree of effectiveness in distinguishing between PCa and BPH, making it a promising tool for future diagnostic applications.
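Two of the less common indices in this abstract, Cohen's Kappa and Youden's index (sensitivity + specificity - 1), also follow from confusion-matrix counts. A minimal sketch using the standard definitions (names and example counts are ours):

```python
def kappa_and_youden(tp: int, fp: int, tn: int, fn: int):
    """Cohen's Kappa (chance-corrected agreement) and Youden's J
    index from binary confusion-matrix counts."""
    n = tp + fp + tn + fn
    p_observed = (tp + tn) / n
    # Chance agreement: product of marginal "positive" rates plus
    # product of marginal "negative" rates.
    p_expected = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n ** 2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    youden_j = tp / (tp + fn) + tn / (tn + fp) - 1
    return kappa, youden_j
```

A balanced classifier with 90% accuracy (45/5/45/5 counts) yields Kappa = 0.8 and J = 0.8.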
Affiliation(s)
- Te-Li Huang: Department of Radiology, Kaohsiung Veterans General Hospital, No. 386, Dazhong 1st Rd., Zuoying Dist., Kaohsiung 81362, Taiwan
- Nan-Han Lu: Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan; Department of Pharmacy, Tajen University, No. 20, Weixin Rd., Yanpu Township, Pingtung 90741, Taiwan; Department of Radiology, E-DA Hospital, I-Shou University, No. 1, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
- Yung-Hui Huang: Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
- Wen-Hung Twan: Department of Life Sciences, National Taitung University, No. 369, Sec. 2, University Rd., Taitung 95092, Taiwan
- Li-Ren Yeh: Department of Anesthesiology, E-DA Cancer Hospital, I-Shou University, No. 1, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
- Kuo-Ying Liu: Department of Radiology, E-DA Hospital, I-Shou University, No. 1, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
- Tai-Been Chen: Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan; Institute of Statistics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 30010, Taiwan
6
Zhai H, Huang J, Li L, Tao H, Wang J, Li K, Shao M, Cheng X, Wang J, Wu X, Wu C, Zhang X, Wang H, Xiong Y. Deep learning-based workflow for hip joint morphometric parameter measurement from CT images. Phys Med Biol 2023;68:225003. PMID: 37852280. DOI: 10.1088/1361-6560/ad04aa.
Abstract
Objective. Precise hip joint morphometry measurement from CT images is crucial for successful preoperative arthroplasty planning and biomechanical simulations. Although deep learning approaches have been applied to clinical bone surgery planning, there is still a lack of research on quantifying hip joint morphometric parameters from CT images. Approach. This paper proposes a deep learning workflow for CT-based hip morphometry measurement. In the first step, a coarse-to-fine deep learning model is designed for accurate reconstruction of the hip geometry (3D bone models and key landmark points). Based on the geometric models, a robust measurement method is developed to calculate a full set of morphometric parameters, including the acetabular anteversion and inclination, the femoral neck shaft angle and inclination, etc. Our methods were validated on two datasets with different imaging protocol parameters and further compared with the conventional 2D x-ray-based measurement method. Main results. The proposed method yields high bone segmentation accuracy (Dice coefficients of 98.18% and 97.85%, respectively) and low landmark prediction errors (1.55 mm and 1.65 mm) on both datasets. The automated measurements agree well with the radiologists' manual measurements (Pearson correlation coefficients between 0.47 and 0.99 and intraclass correlation coefficients between 0.46 and 0.98). The method provides more accurate measurements than the conventional 2D x-ray-based method, reducing the error of the acetabular cup size from over 2 mm to less than 1 mm. Moreover, the morphometry measurement is robust against errors in the preceding bone segmentation step: when we tested different deep learning methods for the prerequisite segmentation, the final measurement results remained consistent, with only a 0.37 mm maximum inter-method difference in cup size. Significance. This study proposes a deep learning approach with improved robustness and accuracy for pelvic arthroplasty planning.
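The agreement between automated and manual measurements is summarised above by Pearson correlation coefficients. A minimal stdlib sketch of that statistic on paired measurement lists (names and example values are ours):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements,
    e.g. automated vs. manual morphometric values per patient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly proportional measurement pairs give r = 1; perfectly inverted ones give r = -1. Note that Pearson r captures linear association only; the intraclass correlation coefficients also reported in the abstract additionally penalise systematic offsets between the two raters.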
Affiliation(s)
- Haoyu Zhai: School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian 116024, People's Republic of China
- Jin Huang: Department of Orthopaedics, Daping Hospital, Army Medical University, Chongqing, People's Republic of China
- Lei Li: Department of Vascular Surgery, The Second Affiliated Hospital of Dalian Medical University, Dalian 116024, People's Republic of China
- Hairong Tao: Shanghai Key Laboratory of Orthopaedic Implants, Shanghai 200011, People's Republic of China; Department of Orthopaedic Surgery, Shanghai Ninth People's Hospital, Shanghai 200011, People's Republic of China; Shanghai Jiao Tong University School of Medicine, Shanghai 200011, People's Republic of China
- Jinwu Wang: Department of Orthopaedic Surgery, Shanghai Ninth People's Hospital, Shanghai 200011, People's Republic of China; Department of Orthopaedics & Bone and Joint Research Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, People's Republic of China
- Kang Li: West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, People's Republic of China
- Moyu Shao: Jiangsu Yunqianbai Digital Technology Co., Ltd., Xuzhou 221000, People's Republic of China
- Xiaomin Cheng: Jiangsu Yunqianbai Digital Technology Co., Ltd., Xuzhou 221000, People's Republic of China
- Jing Wang: School of Chemical Engineering and Technology, Xi'an Jiaotong University, Xi'an 710049, People's Republic of China
- Xiang Wu: School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou 221000, People's Republic of China
- Chuan Wu: School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou 221000, People's Republic of China
- Xiao Zhang: School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou 221000, People's Republic of China
- Hongkai Wang: School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian 116024, People's Republic of China; Liaoning Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian 116024, People's Republic of China
- Yan Xiong: Department of Orthopaedics, Daping Hospital, Army Medical University, Chongqing, People's Republic of China
7
Schnider E, Wolleb J, Huck A, Toranelli M, Rauter G, Müller-Gerbl M, Cattin PC. Improved distinct bone segmentation in upper-body CT through multi-resolution networks. Int J Comput Assist Radiol Surg 2023;18:2091-2099. PMID: 37338664. PMCID: PMC10589171. DOI: 10.1007/s11548-023-02957-4.
Abstract
PURPOSE Automated distinct bone segmentation from CT scans is widely used in planning and navigation workflows. U-Net variants are known to provide excellent results in supervised semantic segmentation. However, in distinct bone segmentation from upper-body CTs a large field of view and a computationally taxing 3D architecture are required. This leads to low-resolution results lacking detail or localisation errors due to missing spatial context when using high-resolution inputs. METHODS We propose to solve this problem by using end-to-end trainable segmentation networks that combine several 3D U-Nets working at different resolutions. Our approach, which extends and generalizes HookNet and MRN, captures spatial information at a lower resolution and skips the encoded information to the target network, which operates on smaller high-resolution inputs. We evaluated our proposed architecture against single-resolution networks and performed an ablation study on information concatenation and the number of context networks. RESULTS Our proposed best network achieves a median DSC of 0.86 taken over all 125 segmented bone classes and reduces the confusion among similar-looking bones in different locations. These results outperform our previously published 3D U-Net baseline results on the task and distinct bone segmentation results reported by other groups. CONCLUSION The presented multi-resolution 3D U-Nets address current shortcomings in bone segmentation from upper-body CT scans by allowing for capturing a larger field of view while avoiding the cubic growth of the input pixels and intermediate computations that quickly outgrow the computational capacities in 3D. The approach thus improves the accuracy and efficiency of distinct bone segmentation from upper-body CT.
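The headline result, a median DSC over 125 bone classes, generalises the binary Dice score to a multi-class label map: compute Dice per class, then take the median. A minimal sketch on flat label sequences (representation and names are ours):

```python
from statistics import median

def per_class_dice(pred, truth, labels):
    """Per-class Dice between two label maps given as flat sequences
    of class ids, plus the median over classes (as used to summarise
    performance across many distinct bone classes)."""
    scores = {}
    for c in labels:
        p = {i for i, v in enumerate(pred) if v == c}
        t = {i for i, v in enumerate(truth) if v == c}
        inter = len(p & t)
        scores[c] = 2 * inter / (len(p) + len(t)) if (p or t) else 1.0
    return scores, median(scores.values())
```

On a toy 5-voxel map where class 1 matches exactly and class 2 is over-segmented by one voxel, the per-class scores are 1.0 and 2/3, with median 5/6.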
Affiliation(s)
- Eva Schnider: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
- Julia Wolleb: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
- Antal Huck: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
- Mireille Toranelli: Department of Biomedicine, Musculoskeletal Research, University of Basel, Basel, Switzerland
- Georg Rauter: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
- Magdalena Müller-Gerbl: Department of Biomedicine, Musculoskeletal Research, University of Basel, Basel, Switzerland
- Philippe C Cattin: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
8
Sánchez-Bonaste A, Merchante LFS, Gónzalez-Bravo C, Carnicero A. Systematic measuring cortical thickness in tibiae for bio-mechanical analysis. Comput Biol Med 2023;163:107123. PMID: 37343467. DOI: 10.1016/j.compbiomed.2023.107123.
Abstract
BACKGROUND AND OBJECTIVE Measuring the thickness of cortical bone tissue helps diagnose bone diseases and monitor the progress of different treatments. This type of measurement can be performed visually on CT images by a radiologist or by semi-automatic algorithms from Hounsfield values. This article proposes a mechanism capable of measuring thickness over the entire bone surface, aligning and orienting all the images in the same direction so that references are comparable, and reducing human intervention to a minimum. The objective is to batch-process CT images of large numbers of patients, obtaining thickness profiles of their cortical tissue for use in many applications. METHODS Classical morphological and deep learning segmentation are used to extract the area of interest, filtering and interpolation to clean the bones, and contour detection and signed distance functions to measure cortical thickness. The set of bones is aligned by detecting their longitudinal direction, and oriented by computing the principal component of the center-of-mass slice. RESULTS The method processed 67% of the patients unattended in the first run and 100% in the second run. The difference between the thickness values provided by the algorithm and the measurements made by a radiologist was, on average, 0.25 mm, with a standard deviation of 0.2 mm. CONCLUSION Measuring the cortical thickness of a bone makes it possible to plan accurate traumatological surgeries or study its structural properties. Obtaining thickness profiles for an extensive set of patients opens the way for studies relating bone thickness to patients' medical, social, or demographic variables.
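Cortical thickness from a pair of contours can be approximated as the distance from each outer-contour point to the nearest inner-contour point; this nearest-point formulation is a simplified stand-in for the signed distance functions the authors use, and all names here are ours:

```python
import math

def cortical_thickness(outer, inner):
    """Per-point cortical thickness on one CT slice: for each point of
    the outer (periosteal) contour, the distance to the nearest point
    of the inner (endosteal) contour. Contours are lists of (x, y)."""
    return [min(math.dist(p, q) for q in inner) for p in outer]
```

For two concentric circular contours of radii 10 mm and 8 mm sampled densely at matching angles, every thickness value comes out as 2 mm, as expected.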
Affiliation(s)
- Alberto Sánchez-Bonaste: ICAI School of Engineering, Comillas Pontifical University, Alberto Aguilera 25, 28015 Madrid, Spain
- Luis F S Merchante: MOBIOS Lab, Institute for Research in Technology, Comillas Pontifical University, Sta Cruz de Marcenado 26, 28015 Madrid, Spain
- Carlos Gónzalez-Bravo: ICAI School of Engineering, Comillas Pontifical University, Alberto Aguilera 25, 28015 Madrid, Spain
- Alberto Carnicero: MOBIOS Lab, Institute for Research in Technology, Comillas Pontifical University, Sta Cruz de Marcenado 26, 28015 Madrid, Spain
9
Jones BC, Wehrli FW, Kamona N, Deshpande RS, Vu BTD, Song HK, Lee H, Grewal RK, Chan TJ, Witschey WR, MacLean MT, Josselyn NJ, Iyer SK, Al Mukaddam M, Snyder PJ, Rajapakse CS. Automated, calibration-free quantification of cortical bone porosity and geometry in postmenopausal osteoporosis from ultrashort echo time MRI and deep learning. Bone 2023;171:116743. PMID: 36958542. PMCID: PMC10121925. DOI: 10.1016/j.bone.2023.116743.
Abstract
BACKGROUND Assessment of cortical bone porosity and geometry by imaging in vivo can provide useful information about bone quality that is independent of bone mineral density (BMD). Ultrashort echo time (UTE) MRI techniques of measuring cortical bone porosity and geometry have been extensively validated in preclinical studies and have recently been shown to detect impaired bone quality in vivo in patients with osteoporosis. However, these techniques rely on laborious image segmentation, which is clinically impractical. Additionally, UTE MRI porosity techniques typically require long scan times or external calibration samples and elaborate physics processing, which limit their translatability. To this end, the UTE MRI-derived Suppression Ratio has been proposed as a simple-to-calculate, reference-free biomarker of porosity which can be acquired in clinically feasible acquisition times. PURPOSE To explore whether a deep learning method can automate cortical bone segmentation and the corresponding analysis of cortical bone imaging biomarkers, and to investigate the Suppression Ratio as a fast, simple, and reference-free biomarker of cortical bone porosity. METHODS In this retrospective study, a deep learning 2D U-Net was trained to segment the tibial cortex from 48 individual image sets comprised of 46 slices each, corresponding to 2208 training slices. Network performance was validated through an external test dataset comprised of 28 scans from 3 groups: (1) 10 healthy, young participants, (2) 9 postmenopausal, non-osteoporotic women, and (3) 9 postmenopausal, osteoporotic women. The accuracy of automated porosity and geometry quantifications was assessed with the coefficient of determination and the intraclass correlation coefficient (ICC). Furthermore, automated MRI biomarkers were compared between groups and to dual energy X-ray absorptiometry (DXA)- and peripheral quantitative CT (pQCT)-derived BMD. Additionally, the Suppression Ratio was compared to UTE porosity techniques based on calibration samples. RESULTS The deep learning model provided accurate labeling (Dice score 0.93, intersection-over-union 0.88) and similar results to manual segmentation in quantifying cortical porosity (R2 ≥ 0.97, ICC ≥ 0.98) and geometry (R2 ≥ 0.82, ICC ≥ 0.75) parameters in vivo. Furthermore, the Suppression Ratio was validated against established porosity protocols (R2 ≥ 0.78). Automated parameters detected age- and osteoporosis-related impairments in cortical bone porosity (P ≤ .002) and geometry (P values ranging from <0.001 to 0.08). Finally, automated porosity markers showed strong, inverse Pearson's correlations with BMD measured by pQCT (|R| ≥ 0.88) and DXA (|R| ≥ 0.76) in postmenopausal women, confirming that lower mineral density corresponds to greater porosity. CONCLUSION This study demonstrated the feasibility of a simple, automated, and ionizing-radiation-free protocol for quantifying cortical bone porosity and geometry in vivo from UTE MRI and deep learning.
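The ICC used above to compare automated with manual quantification can, for the two-rater case, be computed as a two-way consistency ICC(3,1). A simplified stdlib sketch of that standard formula, not the paper's own statistics code (names and example data are ours):

```python
def icc_consistency(ratings_a, ratings_b):
    """Single-measure consistency ICC(3,1) for two raters, e.g.
    manual vs automated measurements over the same n subjects."""
    n, k = len(ratings_a), 2
    rows = list(zip(ratings_a, ratings_b))          # one row per subject
    grand = sum(map(sum, rows)) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(ratings_a) / n, sum(ratings_b) / n]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Identical ratings give ICC = 1; small rater disagreements relative to the between-subject spread keep the ICC close to 1, matching the ICC ≥ 0.98 reported for porosity.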
Affiliation(s)
- Brandon C Jones
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America; Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, 210 South 33rd St, Philadelphia, PA 19104, United States of America.
- Felix W Wehrli
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America.
- Nada Kamona
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America; Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, 210 South 33rd St, Philadelphia, PA 19104, United States of America.
- Rajiv S Deshpande
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America; Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, 210 South 33rd St, Philadelphia, PA 19104, United States of America.
- Brian-Tinh Duc Vu
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America; Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, 210 South 33rd St, Philadelphia, PA 19104, United States of America.
- Hee Kwon Song
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America.
- Hyunyeol Lee
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America; School of Electronics Engineering, Kyungpook National University, 80 Daehakro, Bukgu, Daegu 41566, Republic of Korea.
- Rasleen Kaur Grewal
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America.
- Trevor Jackson Chan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America; Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, 210 South 33rd St, Philadelphia, PA 19104, United States of America.
- Walter R Witschey
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America.
- Matthew T MacLean
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America.
- Nicholas J Josselyn
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America; Department of Data Science, Worcester Polytechnic Institute, 100 Institute Road, Worcester, MA 01609, United States of America.
- Srikant Kamesh Iyer
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America.
- Mona Al Mukaddam
- Department of Medicine, Division of Endocrinology, Perelman School of Medicine, University of Pennsylvania, Perelman Center for Advanced Medicine, 3400 Civic Center Boulevard, Philadelphia, PA 19104, United States of America.
- Peter J Snyder
- Department of Medicine, Division of Endocrinology, Perelman School of Medicine, University of Pennsylvania, Perelman Center for Advanced Medicine, 3400 Civic Center Boulevard, Philadelphia, PA 19104, United States of America.
- Chamith S Rajapakse
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 1 Founders Building, 3400 Spruce St, Philadelphia, PA 19104, United States of America.
10
León-Muñoz VJ, Moya-Angeler J, López-López M, Lisón-Almagro AJ, Martínez-Martínez F, Santonja-Medina F. Integration of Square Fiducial Markers in Patient-Specific Instrumentation and Their Applicability in Knee Surgery. J Pers Med 2023; 13:jpm13050727. [PMID: 37240897] [DOI: 10.3390/jpm13050727] [Received: 03/16/2023] [Revised: 04/23/2023] [Accepted: 04/23/2023] [Indexed: 05/28/2023]
Abstract
Computer technologies play a crucial role in orthopaedic surgery and are essential for personalising different treatments. Recent advances allow the use of augmented reality (AR) for many orthopaedic procedures, including different types of knee surgery. AR mediates the interaction between virtual environments and the physical world, allowing both to intermingle by superimposing information on real objects in real time through an optical device, and allows different processes to be personalised for each patient. This article aims to describe the integration of fiducial markers in planning knee surgeries and to provide a narrative review of the latest publications on AR applications in knee surgery. Augmented reality-assisted knee surgery is an emerging set of techniques that can increase accuracy, efficiency, and safety relative to conventional methods, and can decrease radiation exposure in some surgical procedures, such as osteotomies. Initial clinical experience with AR projection based on ArUco-type artificial marker sensors has shown promising results and received positive operator feedback. Now that initial clinical safety and efficacy have been demonstrated, continued experience should be studied to validate this technology and drive further innovation in this rapidly evolving field.
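The core of marker-based AR overlay is projecting planning data, expressed in the fiducial marker's coordinate frame, into camera pixels using the marker's estimated pose. A minimal numpy sketch of this pinhole projection step (the intrinsics, pose, and marker size below are illustrative assumptions; in practice the pose would come from a marker detector such as OpenCV's ArUco module):

```python
import numpy as np

def project_points(pts_marker, R, t, K):
    """Project 3-D points given in the marker's frame into image pixels."""
    cam = (R @ pts_marker.T).T + t        # marker frame -> camera frame
    uvw = (K @ cam.T).T                   # camera frame -> homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide

# Illustrative camera and pose:
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])           # pinhole intrinsics
R = np.eye(3)                             # marker parallel to the image plane
t = np.array([0.0, 0.0, 0.5])             # half a metre in front of the lens
corners = np.array([[-0.02, -0.02, 0.0],  # corners of a 4 cm square marker
                    [ 0.02, -0.02, 0.0],
                    [ 0.02,  0.02, 0.0],
                    [-0.02,  0.02, 0.0]])
px = project_points(corners, R, t, K)     # pixel positions of the overlay
```

Once the marker pose is tracked frame by frame, the same projection maps any patient-specific planning geometry onto the live camera image.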
Affiliation(s)
- Vicente J León-Muñoz
- Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain
- Instituto de Cirugía Avanzada de la Rodilla (ICAR), 30005 Murcia, Spain
- Joaquín Moya-Angeler
- Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain
- Instituto de Cirugía Avanzada de la Rodilla (ICAR), 30005 Murcia, Spain
- Mirian López-López
- Subdirección General de Tecnologías de la Información, Servicio Murciano de Salud, 30100 Murcia, Spain
- Alonso J Lisón-Almagro
- Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain
- Francisco Martínez-Martínez
- Department of Orthopaedic Surgery and Traumatology, Hospital Clínico Universitario Virgen de la Arrixaca, 30120 Murcia, Spain
- Fernando Santonja-Medina
- Department of Orthopaedic Surgery and Traumatology, Hospital Clínico Universitario Virgen de la Arrixaca, 30120 Murcia, Spain
- Department of Surgery, Pediatrics and Obstetrics & Gynecology, Faculty of Medicine, University of Murcia, 30120 Murcia, Spain
11
Moolenaar JZ, Tümer N, Checa S. Computer-assisted preoperative planning of bone fracture fixation surgery: A state-of-the-art review. Front Bioeng Biotechnol 2022; 10:1037048. [PMID: 36312550] [PMCID: PMC9613932] [DOI: 10.3389/fbioe.2022.1037048] [Received: 09/05/2022] [Accepted: 09/30/2022] [Indexed: 11/13/2022]
Abstract
Background: Bone fracture fixation surgery is one of the most commonly performed surgical procedures in the orthopedic field. However, fracture healing complications occur frequently, and choosing the optimal surgical approach often remains challenging. In recent years, computational tools have been developed to assist preoperative planning of bone fracture fixation surgery. Objectives: The aims of this review are 1) to provide a comprehensive overview of the state-of-the-art in computer-assisted preoperative planning of bone fracture fixation surgery, 2) to assess the clinical feasibility of the existing virtual planning approaches, and 3) to assess their clinical efficacy in terms of clinical outcomes as compared to conventional planning methods. Methods: A literature search was performed in the MEDLINE-PubMed, Ovid-EMBASE, Ovid-EMCARE, Web of Science, and Cochrane libraries to identify articles reporting on the clinical use of computer-assisted preoperative planning of bone fracture fixation. Results: 79 articles were included to provide an overview of the state-of-the-art in virtual planning. While patient-specific geometrical model construction, virtual bone fracture reduction, and virtual fixation planning are routinely applied in virtual planning, biomechanical analysis is rarely included in the planning framework. 21 of the included studies were used to assess the feasibility and efficacy of computer-assisted planning methods. The reported total mean planning duration ranged from 22 to 258 min across studies.
Computer-assisted planning resulted in reduced operation time (Standardized Mean Difference (SMD): -2.19; 95% Confidence Interval (CI): -2.87, -1.50), less blood loss (SMD: -1.99; 95% CI: -2.75, -1.24), decreased frequency of fluoroscopy (SMD: -2.18; 95% CI: -2.74, -1.61), shortened fracture healing times (SMD: -0.51; 95% CI: -0.97, -0.05), and fewer postoperative complications (Risk Ratio (RR): 0.64; 95% CI: 0.46, 0.90). No significant differences were found in hospitalization duration. Some studies reported improvements in reduction quality and functional outcomes, but these results were not pooled for meta-analysis because the reported outcome measures were too heterogeneous. Conclusion: Current computer-assisted planning approaches are feasible for use in clinical practice and have been shown to improve clinical outcomes. Incorporating biomechanical analysis into the framework has the potential to further improve clinical outcomes.
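Pooled effects like those above follow the usual inverse-variance logic: each study's standardized mean difference is weighted by the reciprocal of its squared standard error. A minimal fixed-effect sketch (the per-study numbers are made up for illustration, not the review's data, and the review's own pooling model is not specified here):

```python
import numpy as np

def pool_fixed_effect(smds, ses):
    """Fixed-effect inverse-variance pooling of standardized mean differences."""
    smds = np.asarray(smds, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # weight = 1 / variance
    pooled = np.sum(w * smds) / np.sum(w)         # weighted mean effect
    se = np.sqrt(1.0 / np.sum(w))                 # standard error of the pooled SMD
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative per-study SMDs for, say, operation time:
est, (lo, hi) = pool_fixed_effect([-2.5, -1.8, -2.2], [0.4, 0.3, 0.5])
```

A confidence interval that excludes zero (as in all the pooled effects quoted above except hospitalization duration) indicates a statistically significant difference between planning approaches.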
Affiliation(s)
- Jet Zoë Moolenaar
- Berlin Institute of Health at Charité, Universitätsmedizin Berlin, Julius Wolff Institute, Berlin, Germany
- Department of Biomechanical Engineering, Delft University of Technology (TU Delft), Delft, Netherlands
- Nazli Tümer
- Department of Biomechanical Engineering, Delft University of Technology (TU Delft), Delft, Netherlands
- Correspondence: Nazli Tümer; Sara Checa
- Sara Checa
- Berlin Institute of Health at Charité, Universitätsmedizin Berlin, Julius Wolff Institute, Berlin, Germany
- Correspondence: Nazli Tümer; Sara Checa
12
Image reconstruction method for limited-angle CT based on total variation minimization using guided image filtering. Med Biol Eng Comput 2022; 60:2109-2118. [DOI: 10.1007/s11517-022-02579-z] [Received: 02/27/2022] [Accepted: 04/22/2022] [Indexed: 10/18/2022]
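The guided image filter that this reconstruction method builds on has a compact closed form: per-window linear coefficients a = cov(I, p) / (var(I) + eps) and b = mean(p) - a * mean(I), averaged over windows and recombined with the guide. A self-contained numpy sketch of the generic filter (window radius and regularization values are illustrative; this is not the paper's full TV reconstruction pipeline):

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded, via an integral image."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))              # zero row/col for clean differences
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=2, eps=1e-2):
    """Guided image filter: smooth p while following edges of the guide I."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov = box_mean(I * p, r) - mI * mp           # local covariance of guide and input
    var = box_mean(I * I, r) - mI * mI           # local variance of the guide
    a = cov / (var + eps)                        # local linear coefficient
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)   # averaged coefficients, recombined

# Demo: self-guided filtering of noise acts as edge-preserving smoothing.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
smooth = guided_filter(img, img, r=2, eps=0.5)
```

Larger `eps` pushes the filter toward plain box smoothing; `eps = 0` with the image as its own guide returns the input unchanged, which is the edge-preserving property that makes the filter useful as a regularizer alongside total variation minimization.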
13
Application of Methods for a Morphological Analysis of the Femoral Diaphysis Based on Clinical CT Images to Prehistoric Human Bone: Comparison of Modern Japanese and Jomon Populations from Hegi Cave, Oita, Japan. Biomed Res Int 2022; 2022:2069063. [PMID: 35711519] [PMCID: PMC9197615] [DOI: 10.1155/2022/2069063] [Received: 11/22/2021] [Revised: 04/26/2022] [Accepted: 05/04/2022] [Indexed: 11/18/2022]
Abstract
A morphological analysis of ancient human bones is essential for understanding life history, medical history, and genetic characteristics. In addition to external measurements, a three-dimensional structural analysis using CT provides more detailed information. The present study examined adult male human skeletons excavated from Hegi cave, Nakatsu city, Oita Prefecture. CT images were taken of the femurs of adult males (Initial/Early Jomon Period, n = 10; Late Jomon Period, n = 5). Cross-sectional images of the diaphysis from below the lesser trochanter to above the adductor tubercle were obtained using the method established by Imamura et al. (2019) and Imamura et al. (2021). Using Excel formulas and macros, the area of cortical bone, its thickness, and the degree of curvature were quantitatively analyzed. The results were compared with data on modern Japanese individuals. The maximum cortical bone thickness in the diaphysis and the degree of anterior curvature were significantly greater in Late Jomon humans than in the other groups. In contrast to modern humans, the majority of Jomon femurs showed an S-shaped curvature, with the medial side positioned above and the lateral side below. The present results demonstrate that Late Jomon humans had a wider range of activity than the other groups and also provide insights into diseases of the hip and knee joints in Jomon humans.
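Cross-sectional cortical metrics of the kind computed here with Excel formulas can be sketched compactly: from a binary cross-section mask, cortical area is the pixel count scaled by pixel spacing, and an annulus model gives a simple mean-thickness estimate. An illustrative Python sketch (the pixel spacing, the annulus model, and all names are assumptions for demonstration, not the study's actual procedure):

```python
import numpy as np

def cortical_metrics(mask, pixel_mm=0.5):
    """Cortical area (mm^2) and a mean-thickness estimate (mm) from a binary
    cross-section mask, assuming a roughly annular cortex."""
    area = mask.sum() * pixel_mm ** 2            # pixel count times pixel area
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                # centroid of the cortical ring
    r = np.hypot(ys - cy, xs - cx) * pixel_mm    # radius of each cortical pixel
    # Annulus model: area ~= 2 * pi * r_mean * thickness.
    thickness = area / (2.0 * np.pi * r.mean())
    return area, thickness

# Synthetic cross-section: ring with outer radius 10 mm, inner radius 7 mm.
yy, xx = np.mgrid[:64, :64]
rr = np.hypot(yy - 32, xx - 32)
mask = (rr <= 20) & (rr > 14)                    # 20 px / 14 px at 0.5 mm per pixel
area, thick = cortical_metrics(mask)
```

For the synthetic ring the recovered thickness is close to the true 3 mm wall, illustrating how such metrics can be compared across populations once slice locations are standardized.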