1
Li Y, Jin D, Zhang Y, Li W, Jiang C, Ni M, Liao N, Yuan H. Utilizing artificial intelligence to determine bone mineral density using spectral CT. Bone 2025; 192:117321. [PMID: 39515509] [DOI: 10.1016/j.bone.2024.117321]
Abstract
BACKGROUND Dual-energy computed tomography (DECT) has demonstrated the feasibility of using hydroxyapatite (HAP)-water images to reflect BMD changes without requiring dedicated software or calibration. Artificial intelligence (AI) has been utilized for diagnosing osteoporosis in routine CT scans but has rarely been used in DECT. This study investigated the diagnostic performance of an AI system for osteoporosis screening using DECT images, with quantitative CT (QCT) as the reference. METHODS This prospective study included 120 patients who underwent DECT and QCT scans from August to December 2023. Two convolutional neural networks, 3D RetinaNet and U-Net, were employed for automated vertebral body segmentation. The accuracy of the bone mineral density (BMD) measurement was assessed with the relative measurement error (RME%). Linear regression and Bland-Altman analyses were performed to compare the BMD values of the AI and manual systems with those of QCT. The diagnostic performance of the AI and manual systems for osteoporosis and low BMD was evaluated using receiver operating characteristic curve analysis. RESULTS The overall mean RME% values for the AI and manual systems were -15.93 ± 12.05% and -25.47 ± 14.83%, respectively. BMD measurements using the AI system achieved greater agreement with the QCT results than those using the manual system (R2 = 0.973 vs. 0.948, p < 0.001; mean errors, 23.27 vs. 35.71 mg/cm3; 95% LoA, -9.72 to 56.26 vs. -11.45 to 82.87 mg/cm3). The areas under the curve for the AI and manual systems were 0.979 and 0.933 for detecting osteoporosis and 0.980 and 0.991 for low BMD. CONCLUSION This AI system could achieve relatively high accuracy for automated BMD measurement on DECT scans and shows great potential for the follow-up of BMD in osteoporosis screening.
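The accuracy and agreement statistics reported above follow standard definitions; below is a minimal sketch (not the authors' code) of how the relative measurement error and Bland-Altman 95% limits of agreement could be computed for paired BMD values, assuming hypothetical NumPy arrays of AI-derived and reference QCT measurements.

```python
import numpy as np

def rme_percent(measured, reference):
    """Relative measurement error (%) of each measurement vs. the reference."""
    measured, reference = np.asarray(measured, float), np.asarray(reference, float)
    return 100.0 * (measured - reference) / reference

def bland_altman(measured, reference):
    """Mean error and 95% limits of agreement (mean +/- 1.96 SD of differences)."""
    diff = np.asarray(measured, float) - np.asarray(reference, float)
    mean_err, sd = diff.mean(), diff.std(ddof=1)
    return mean_err, (mean_err - 1.96 * sd, mean_err + 1.96 * sd)

# Hypothetical BMD values in mg/cm^3 from the AI system and reference QCT
ai_bmd = np.array([95.0, 132.4, 78.1, 110.3])
qct_bmd = np.array([120.5, 150.2, 101.8, 128.9])
print(rme_percent(ai_bmd, qct_bmd).mean())   # mean RME%
print(bland_altman(ai_bmd, qct_bmd))         # mean error and 95% LoA
```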
Affiliation(s)
- Yali Li, Dan Jin, Yan Zhang, Chenyu Jiang, Ming Ni, Huishu Yuan: Department of Radiology, Peking University Third Hospital, 49 Huayuan N Rd, Haidian District, Beijing, China
- Wenhuan Li: CT Research Center, GE Healthcare China, 1 South Tongji Road, Beijing, China
- Nianxi Liao: Yizhun Medical AI Co., Ltd, No. 7 Zhichun Road, Haidian District, Beijing, China
2
Chen J, Liu S, Li Y, Zhang Z, Liao N, Shi H, Hu W, Lin Y, Chen Y, Gao B, Huang D, Liang A, Gao W. Deep learning model for automated detection of fresh and old vertebral fractures on thoracolumbar CT. Eur Spine J 2025; 34:1177-1186. [PMID: 39708132] [DOI: 10.1007/s00586-024-08623-w]
Abstract
PURPOSE To develop a deep learning system for automatic segmentation of compression-fractured vertebral bodies on thoracolumbar CT and for differentiation between fresh and old fractures. METHODS We included patients with thoracolumbar fractures treated at our hospital's South Campus from January 2020 to December 2023, with prospective validation from January to June 2024, and used data from the North Campus from January to December 2023 for external validation. Fresh fractures were defined as back pain lasting less than 4 weeks, with MRI showing bone marrow edema (BME). We utilized a 3D V-Net for image segmentation and several ResNet and DenseNet models for classification, evaluating performance with ROC curves, accuracy, sensitivity, specificity, precision, F1 score, and AUC. The optimal model was selected to construct the deep learning system, and its diagnostic efficacy was compared with that of two clinicians. RESULTS The training dataset included 238 vertebrae (men/women: 55/183; age: 72.11 ± 11.55 years), with 59 vertebrae in internal validation (men/women: 13/46; age: 74.76 ± 8.96 years), 34 in external validation, and 48 in prospective validation. The 3D V-Net model achieved a DSC of 0.90 on the validation dataset. ResNet18 performed best among the classification models, with an AUC of 0.96 in internal validation, 0.89 in the external dataset, and 0.87 in prospective validation, surpassing the two clinicians in both the external and prospective validations. CONCLUSION The deep learning model can automatically and accurately segment vertebral bodies with compression fractures and classify them as fresh or old fractures, thereby assisting clinicians in making clinical decisions.
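For context, the Dice similarity coefficient (DSC) reported for the 3D V-Net is a standard overlap measure between binary masks; a minimal sketch (not the authors' implementation) of computing it on 3D NumPy arrays is shown below.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary 3D masks."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical example with two random vertebral masks from a CT volume
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
target = rng.random((64, 64, 64)) > 0.5
print(f"DSC = {dice_coefficient(pred, target):.3f}")
```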
Affiliation(s)
- Jianan Chen, Song Liu, Zaoqiang Zhang, Nianchun Liao, Huihong Shi, Wenjun Hu, Youxi Lin, Yanbo Chen, Bo Gao, Dongsheng Huang, Anjing Liang, Wenjie Gao: Sun Yat-Sen Memorial Hospital Department of Orthopedics, Guangzhou, China
- Yong Li: Sun Yat-Sen Memorial Hospital Department of Radiology, Guangzhou, China
3
Chen W, Han Y, Awais Ashraf M, Liu J, Zhang M, Su F, Huang Z, Wong KK. A patch-based deep learning MRI segmentation model for improving efficiency and clinical examination of the spinal tumor. J Bone Oncol 2024; 49:100649. [PMID: 39659517] [PMCID: PMC11629321] [DOI: 10.1016/j.jbo.2024.100649]
Abstract
Background and objective Magnetic resonance imaging (MRI) plays a vital role in diagnosing spinal diseases, including different types of spinal tumors. However, conventional segmentation techniques are often labor-intensive and susceptible to variability. This study proposes a fully automatic segmentation method for spine MRI images, utilizing a convolutional-deconvolutional neural network and patch-based deep learning. The objective is to improve segmentation efficiency, meeting clinical needs for accurate diagnoses and treatment planning. Methods The methodology involved the utilization of a convolutional neural network to automatically extract deep learning features from spine data, allowing for effective representation of anatomical structures. The network was trained to learn the discriminative features necessary for accurate segmentation of spine MRI data. Furthermore, a patch extraction (PE)-based deep neural network was developed using a convolutional neural network to restore the feature maps to their original image size. To improve training efficiency, a combination of pre-training and an enhanced stochastic gradient descent method was utilized. Results The experimental results highlight the effectiveness of the proposed method for spine image segmentation using gadolinium-enhanced T1 MRI. This approach not only delivers high accuracy but also offers real-time performance. The proposed model attained 90.6% precision, 91.1% recall, 93.2% accuracy, 91.3% F1-score, 83.8% Intersection over Union (IoU), and 91.1% Dice coefficient (DC). These results indicate that the proposed method can accurately segment spinal tumors in MRI images, addressing the limitations of traditional segmentation algorithms. Conclusion In conclusion, this study introduces a fully automated segmentation method for spine MRI images utilizing a convolutional neural network enhanced by the PE module. By utilizing patch extraction-based neural network (PENN) deep learning techniques, the proposed method effectively addresses the deficiencies of traditional algorithms and achieves accurate, real-time spine MRI image segmentation.
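As a rough illustration of the patch-based strategy described here (overlapping patches are predicted and the full-size output is then restored), the sketch below tiles a 2D slice into overlapping patches and averages overlapping predictions back to the original image size. The patch size, stride, and `predict_patch` function are placeholders, not the authors' PE module.

```python
import numpy as np

def predict_patch(patch):
    # Placeholder for the trained network's per-patch prediction
    return (patch > patch.mean()).astype(float)

def sliding_window_predict(image, patch=64, stride=32):
    """Tile the image with overlapping patches and average the predictions."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros_like(image, dtype=float)
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            pred = predict_patch(image[y:y + patch, x:x + patch])
            out[y:y + patch, x:x + patch] += pred
            weight[y:y + patch, x:x + patch] += 1.0
    return out / np.maximum(weight, 1.0)

slice_2d = np.random.rand(256, 256)          # hypothetical MRI slice
segmentation = sliding_window_predict(slice_2d)
```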
Affiliation(s)
- Weimin Chen, Kelvin K.L. Wong: School of Information and Electronics, Hunan City University, Yiyang, Hunan 413000, China
- Yong Han, Junhan Liu: School of Design, Quanzhou University of Information Engineering, Quanzhou, Fujian 362000, China
- Muhammad Awais Ashraf: Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Mu Zhang, Feng Su, Zhiguo Huang: Department of Emergency, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
4
Haouchine N, Hackney DB, Pieper SD, Wells WM, Sanhinova M, Balboni TA, Spektor A, Huynh MA, Kozono DE, Doyle P, Alkalay RN. An open annotated dataset and baseline machine learning model for segmentation of vertebrae with metastatic bone lesions from CT. medRxiv [Preprint] 2024:2024.10.14.24314447. [PMID: 39484265] [PMCID: PMC11527073] [DOI: 10.1101/2024.10.14.24314447]
Abstract
Automatic analysis of pathologic vertebrae from computed tomography (CT) scans could significantly improve the diagnostic management of patients with metastatic spine disease. We provide the first publicly available annotated imaging dataset of cancerous CT spines to help develop artificial intelligence frameworks for automatic vertebrae segmentation and classification. This collection contains a dataset of 55 CT scans collected from patients with various types of primary cancers at two different institutions. In addition to raw images, the data include manual segmentations and contours, vertebral level labeling, vertebral lesion-type classifications, and patient demographic details. Our automated segmentation model uses nnU-Net, a freely available open-source framework for deep learning in healthcare imaging, and is made publicly available. These data will facilitate the development and validation of models for predicting the mechanical response to loading and the resulting risk of fractures and spinal deformity.
Affiliation(s)
- Nazim Haouchine: Brigham & Women's Hospital, Boston, MA; Harvard Medical School, Boston, MA
- David B Hackney: Beth Israel Deaconess Medical Center, Boston, MA; Harvard Medical School, Boston, MA
- William M Wells: Brigham & Women's Hospital, Boston, MA; Harvard Medical School, Boston, MA
- Malika Sanhinova: Beth Israel Deaconess Medical Center, Boston, MA; Harvard Medical School, Boston, MA
- Ron N Alkalay: Beth Israel Deaconess Medical Center, Boston, MA; Harvard Medical School, Boston, MA
5
Moeller AR, Garrett JW, Summers RM, Pickhardt PJ. Adjusting for the effect of IV contrast on automated CT body composition measures during the portal venous phase. Abdom Radiol (NY) 2024; 49:2543-2551. [PMID: 38744704] [DOI: 10.1007/s00261-024-04376-8]
Abstract
OBJECTIVE Fully-automated CT-based algorithms for quantifying numerous biomarkers have been validated for unenhanced abdominal scans. There is great interest in optimizing the documentation and reporting of biophysical measures present on all CT scans for the purposes of opportunistic screening and risk profiling. The purpose of this study was to determine and adjust for the effect of intravenous (IV) contrast on these automated body composition measures at routine portal venous phase post-contrast imaging. METHODS The final study cohort consisted of 1,612 older adults (mean age, 68.0 years; 594 women), all imaged utilizing a uniform CT urothelial protocol consisting of pre-contrast, portal venous, and delayed excretory phases. Fully-automated CT-based algorithms for quantifying numerous biomarkers, including muscle and fat area and density, bone mineral density, and solid organ volume, were applied to the pre-contrast and portal venous phases. The effect of IV contrast upon these body composition measures was analyzed. Regression analyses, including the square of the Pearson correlation coefficient (r2), were performed for each comparison. RESULTS We found that simple, linear relationships can be derived to determine non-contrast equivalent values from the post-contrast CT biomeasures. Excellent positive linear correlation (r2 = 0.91-0.99) between pre- and post-contrast values was observed for all automated soft tissue measures, whereas moderate positive linear correlation was observed for bone attenuation (r2 = 0.58-0.76). In general, the area- and volume-based measurements require less adjustment than the attenuation-based measures, as expected. CONCLUSION Fully-automated quantitative CT biomarker measures at portal venous phase abdominal CT can be adjusted to a non-contrast equivalent using simple, linear relationships.
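The "simple, linear relationships" referred to above amount to fitting a first-order regression between paired pre- and post-contrast values and then inverting it. A hedged sketch with hypothetical data is given below; the numbers are placeholders, not the published conversion coefficients.

```python
import numpy as np

# Hypothetical paired measurements (e.g., L1 trabecular attenuation in HU)
pre_contrast  = np.array([110.0, 95.0, 140.0, 120.0, 88.0])
post_contrast = np.array([145.0, 128.0, 178.0, 156.0, 119.0])

# Fit post = a * pre + b, then invert to estimate a non-contrast equivalent
a, b = np.polyfit(pre_contrast, post_contrast, deg=1)

def to_noncontrast_equivalent(post_value):
    """Map a portal-venous-phase value back to its estimated unenhanced value."""
    return (post_value - b) / a

r2 = np.corrcoef(pre_contrast, post_contrast)[0, 1] ** 2
print(f"fit: post = {a:.2f} * pre + {b:.2f}, r^2 = {r2:.3f}")
print(to_noncontrast_equivalent(150.0))
```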
Affiliation(s)
- Alexander R Moeller, John W Garrett, Perry J Pickhardt: Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center 600 Highland Ave., Madison, WI, 53792-3252, USA
- Ronald M Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
6
Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in Multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024; 250:108200. [PMID: 38677080] [DOI: 10.1016/j.cmpb.2024.108200]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes appearances to enable reliable AI analysis of multi-source medical imaging. METHODS A literature search using PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but the adoption of machine and deep learning has risen recently. Color imaging modalities like digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all the modalities covered by this review, image harmonization improved AI performance, with increases of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
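As one concrete example of the grayscale-normalization techniques surveyed here, the sketch below z-score normalizes images per acquisition site so that intensities become comparable before pooling. This is a generic illustration with synthetic data, not a method endorsed by the review.

```python
import numpy as np

def zscore_normalize(image, mask=None):
    """Harmonize intensities by z-scoring each image (optionally within a mask)."""
    image = np.asarray(image, float)
    values = image[mask] if mask is not None else image
    return (image - values.mean()) / (values.std() + 1e-8)

# Hypothetical images from two centers with very different intensity scales
center_a = 1000.0 + 200.0 * np.random.rand(128, 128)
center_b = 40.0 + 5.0 * np.random.rand(128, 128)
pooled = [zscore_normalize(img) for img in (center_a, center_b)]
print([f"{img.mean():.2f} / {img.std():.2f}" for img in pooled])  # ~0 / ~1 each
```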
Affiliation(s)
- Silvia Seoni, Alen Shahini, Kristen M Meiburger, Francesco Marzola, Giulia Rotunno, Filippo Molinari, Massimo Salvi: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
7
Rasouligandomani M, Del Arco A, Chemorion FK, Bisotti MA, Galbusera F, Noailly J, González Ballester MA. Dataset of Finite Element Models of Normal and Deformed Thoracolumbar Spine. Sci Data 2024; 11:549. [PMID: 38811573] [PMCID: PMC11137096] [DOI: 10.1038/s41597-024-03351-8]
Abstract
Adult spine deformity (ASD) is prevalent and leads to a sagittal misalignment in the vertebral column. Computational methods, including Finite Element (FE) models, have emerged as valuable tools for investigating the causes and treatment of ASD through biomechanical simulations. However, the process of generating personalised FE models is often complex and time-consuming. To address this challenge, we present a dataset of FE models with diverse spine morphologies that statistically represent real geometries from a cohort of patients. These models are generated using EOS images, which are utilized to reconstruct 3D surface spine models. Subsequently, a Statistical Shape Model (SSM) is constructed, enabling the adaptation of an FE hexahedral mesh template for both the bone and soft tissues of the spine through mesh morphing. The SSM deformation fields facilitate the personalization of the mean hexahedral FE model based on sagittal balance measurements. Ultimately, this new hexahedral SSM tool offers a means to generate a virtual cohort of 16,807 thoracolumbar FE spine models, which are openly shared in a public repository.
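A statistical shape model of this kind is typically a mean shape plus a linear combination of principal modes. The sketch below shows, under that generic assumption (not the authors' released tool), how new shape instances could be sampled from PCA modes of a set of corresponded landmark configurations; the data here are random placeholders.

```python
import numpy as np

# Hypothetical training shapes: n_subjects x (n_points * 3) corresponded landmarks
rng = np.random.default_rng(1)
shapes = rng.normal(size=(40, 300))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# Principal modes via SVD of the centered data matrix
_, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
std_per_mode = singular_values / np.sqrt(len(shapes) - 1)

def sample_shape(n_modes=5, spread=2.0):
    """Draw a new shape instance within +/- spread standard deviations per mode."""
    coeffs = rng.uniform(-spread, spread, size=n_modes) * std_per_mode[:n_modes]
    return mean_shape + coeffs @ modes[:n_modes]

virtual_cohort = np.stack([sample_shape() for _ in range(10)])
print(virtual_cohort.shape)
```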
Affiliation(s)
- Francis Kiptengwer Chemorion: BCN MedTech, Department of Engineering, Universitat Pompeu Fabra, Barcelona, 08018, Spain; InSilicoTrials Technologies Company, Trieste, 34123, Italy; Barcelona Supercomputing Center, Barcelona, 08034, Spain
- Fabio Galbusera: IRCCS Istituto Ortopedico Galeazzi, Milan, 20161, Italy; Schulthess Klinik, Zürich, 8008, Switzerland
- Jérôme Noailly: BCN MedTech, Department of Engineering, Universitat Pompeu Fabra, Barcelona, 08018, Spain
- Miguel A González Ballester: BCN MedTech, Department of Engineering, Universitat Pompeu Fabra, Barcelona, 08018, Spain; ICREA, Barcelona, 08010, Spain
8
Xiong X, Graves SA, Gross BA, Buatti JM, Beichel RR. Lumbar and Thoracic Vertebrae Segmentation in CT Scans Using a 3D Multi-Object Localization and Segmentation CNN. Tomography 2024; 10:738-760. [PMID: 38787017] [PMCID: PMC11125921] [DOI: 10.3390/tomography10050057]
Abstract
Radiation treatment of cancers like prostate or cervix cancer requires considering nearby bone structures like the vertebrae. In this work, we present and validate a novel automated method for the 3D segmentation of individual lumbar and thoracic vertebrae in computed tomography (CT) scans. It is based on a single, low-complexity convolutional neural network (CNN) architecture that works well even if little application-specific training data are available. The method uses volume patch-based processing, enabling the handling of arbitrary scan sizes. For each patch, it performs segmentation and an estimation of up to three vertebral center locations in one step, which enables an advanced post-processing scheme to achieve the high segmentation accuracy required for clinical use. Overall, 1763 vertebrae were used for the performance assessment. On 26 CT scans acquired for standard radiation treatment planning, a Dice coefficient of 0.921 ± 0.047 (mean ± standard deviation) and a signed distance error of 0.271 ± 0.748 mm were achieved. On the large publicly available VerSe2020 dataset with 129 CT scans depicting lumbar and thoracic vertebrae, the overall Dice coefficient was 0.940 ± 0.065 and the signed distance error was 0.109 ± 0.301 mm. A comparison to other methods that have been validated on VerSe data showed that our approach achieved better overall segmentation performance.
Affiliation(s)
- Xiaofan Xiong: Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Stephen A. Graves: Department of Radiology, The University of Iowa, Iowa City, IA 52242, USA
- Brandie A. Gross, John M. Buatti: Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Reinhard R. Beichel: Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
9
Liu D, Garrett JW, Perez AA, Zea R, Binkley NC, Summers RM, Pickhardt PJ. Fully automated CT imaging biomarkers for opportunistic prediction of future hip fractures. Br J Radiol 2024; 97:770-778. [PMID: 38379423] [PMCID: PMC11027263] [DOI: 10.1093/bjr/tqae041]
Abstract
OBJECTIVE Assess automated CT imaging biomarkers in patients who went on to hip fracture, compared with controls. METHODS In this retrospective case-control study, 6926 total patients underwent initial abdominal CT over a 20-year interval at one institution. A total of 1308 patients (mean age at initial CT, 70.5 ± 12.0 years; 64.4% female) went on to hip fracture (mean time to fracture, 5.2 years); 5618 were controls (mean age 70.3 ± 12.0 years; 61.2% female; mean follow-up interval 7.6 years). Validated fully automated quantitative CT algorithms for trabecular bone attenuation (at L1), skeletal muscle attenuation (at L3), and subcutaneous adipose tissue area (SAT) (at L3) were applied to all scans. Hazard ratios (HRs) comparing highest to lowest risk quartiles and receiver operating characteristic (ROC) curve analysis including area under the curve (AUC) were derived. RESULTS Hip fracture HRs (95% CI) were 3.18 (2.69-3.76) for low trabecular bone HU, 1.50 (1.28-1.75) for low muscle HU, and 2.18 (1.86-2.56) for low SAT. 10-year ROC AUC values for predicting hip fracture were 0.702, 0.603, and 0.603 for these CT-based biomarkers, respectively. Multivariate combinations of these biomarkers further improved predictive value; the 10-year ROC AUC combining bone/muscle/SAT was 0.733, while combining muscle/SAT was 0.686. CONCLUSION Opportunistic use of automated CT bone, muscle, and fat measures can identify patients at higher risk for future hip fracture, regardless of the indication for CT imaging. ADVANCES IN KNOWLEDGE CT data can be leveraged opportunistically for further patient evaluation, with early intervention as needed. These novel AI tools analyse CT data to determine a patient's future hip fracture risk.
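The ROC analysis described above (single biomarkers vs. a multivariate combination) can be reproduced in outline with scikit-learn's roc_auc_score and LogisticRegression. The snippet below is a generic sketch on synthetic data, not the study's cohort or fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
bone_hu = rng.normal(150, 50, n)      # hypothetical L1 trabecular attenuation (HU)
muscle_hu = rng.normal(35, 12, n)     # hypothetical L3 muscle attenuation (HU)
# Synthetic outcome: lower bone attenuation raises fracture probability
fracture = (rng.random(n) < 1 / (1 + np.exp(0.03 * (bone_hu - 120)))).astype(int)

# Single-biomarker AUC: low bone HU predicts fracture, so flip the sign of the score
auc_bone = roc_auc_score(fracture, -bone_hu)

# Multivariate combination via logistic regression
X = np.column_stack([bone_hu, muscle_hu])
model = LogisticRegression().fit(X, fracture)
auc_combined = roc_auc_score(fracture, model.predict_proba(X)[:, 1])
print(f"bone-only AUC = {auc_bone:.3f}, combined AUC = {auc_combined:.3f}")
```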
Affiliation(s)
- Daniel Liu, John W Garrett, Alberto A Perez, Ryan Zea, Neil C Binkley, Perry J Pickhardt: Department of Radiology, University of Wisconsin School of Medicine & Public Health, Madison, WI, 53792, United States
- Ronald M Summers: National Institutes of Health Clinical Center, Potomac, MD, 20892, United States
10
Zhang W, Yue Y, Pan H, Chen Z, Wang C, Pfister H, Wang W. Marching Windows: Scalable Mesh Generation for Volumetric Data With Multiple Materials. IEEE Trans Vis Comput Graph 2024; 30:1728-1742. [PMID: 36455093] [PMCID: PMC10980537] [DOI: 10.1109/tvcg.2022.3225526]
Abstract
Volumetric data abounds in medical imaging and other fields. With improved imaging quality and increased resolution, volumetric datasets are becoming so large that existing tools have become inadequate for processing and analyzing the data. Here we consider the problem of computing tetrahedral meshes to represent large volumetric datasets with labeled multiple materials, which are often encountered in medical imaging or microscopy optical slice tomography. Such tetrahedral meshes are a more compact and expressive geometric representation and so are in demand for efficient visualization and simulation of the data, which would be impossible if the original large volumetric data were used directly due to the large memory requirement. Existing methods for meshing volumetric data are not scalable to large datasets because they either demand excessively large run-time memory or fail to produce a tet-mesh that preserves the multi-material structure of the original volumetric data. In this article, we propose a novel approach, called Marching Windows, that uses a moving window and a disk-swap strategy to reduce the run-time memory footprint, devise a new scheme that guarantees to preserve the topological structure of the original dataset, and adopt an error-guided optimization technique to improve both the geometric approximation error and the mesh quality. Extensive experiments show that our method is capable of processing very large volumetric datasets beyond the capability of existing methods and of producing tetrahedral meshes of high quality.
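The moving-window idea (process the volume one window at a time so the whole dataset never has to sit in RAM) can be illustrated with a memory-mapped array. The sketch below is a generic block-processing loop over a placeholder on-disk volume, not the Marching Windows meshing algorithm itself.

```python
import numpy as np

# Hypothetical large labeled volume stored on disk as a memory-mapped array
shape = (256, 256, 256)
volume = np.memmap("labels.dat", dtype=np.uint8, mode="w+", shape=shape)

def process_window(block):
    # Placeholder for per-window work (e.g., local mesh extraction)
    return int(block.max())

window = 64
results = []
for z in range(0, shape[0], window):
    for y in range(0, shape[1], window):
        for x in range(0, shape[2], window):
            block = np.asarray(volume[z:z + window, y:y + window, x:x + window])
            results.append(process_window(block))  # only one window in RAM at a time
print(len(results))
```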
11
You X, Gu Y, Liu Y, Lu S, Tang X, Yang J. VerteFormer: A single-staged Transformer network for vertebrae segmentation from CT images with arbitrary field of views. Med Phys 2023; 50:6296-6318. [PMID: 37211910] [DOI: 10.1002/mp.16467]
Abstract
BACKGROUND Spinal diseases burden an increasing number of patients, and fully automatic vertebrae segmentation for CT images with arbitrary fields of view (FOVs) has been a fundamental research topic for computer-assisted spinal disease diagnosis and surgical intervention. Researchers have therefore aimed to solve this challenging task in recent years. PURPOSE The task suffers from challenges including intra-vertebra inconsistency of segmentation and poor identification of the biterminal vertebrae in CT scans. Existing models also have limitations: they may be difficult to apply to spinal cases with arbitrary FOVs, or they employ multi-stage networks with too much computational cost. In this paper, we propose a single-staged model called VerteFormer that can effectively deal with the challenges and limitations mentioned above. METHODS The proposed VerteFormer utilizes the advantage of the Vision Transformer (ViT), which does well in mining global relations for input data. The Transformer- and UNet-based structure effectively fuses global and local features of vertebrae. Besides, we propose an Edge Detection (ED) block based on convolution and self-attention to divide neighboring vertebrae with clear boundary lines; it simultaneously promotes the network to achieve more consistent segmentation masks of vertebrae. To better identify the labels of vertebrae in the spine, particularly the biterminal vertebrae, we further introduce global information generated from a Global Information Extraction (GIE) block. RESULTS We evaluate the proposed model on two public datasets: MICCAI Challenge VerSe 2019 and 2020. VerteFormer achieves dice scores of 86.39% and 86.54% on the public and hidden test datasets of VerSe 2019, and 84.53% and 86.86% on VerSe 2020, outperforming other Transformer-based models and single-staged methods specifically designed for the VerSe Challenge. Additional ablation experiments validate the effectiveness of the ViT block, the ED block, and the GIE block. CONCLUSIONS We propose a single-staged Transformer-based model for fully automatic vertebrae segmentation from CT images with arbitrary FOVs. ViT demonstrates its effectiveness in modeling long-term relations, and the ED and GIE blocks improve the segmentation performance of vertebrae. The proposed model can assist physicians in the diagnosis of spinal diseases and in surgical intervention, and it is also promising for generalization and transfer to other medical imaging applications.
Affiliation(s)
- Xin You, Yun Gu, Jie Yang: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Yingying Liu, Xin Tang: Research, Technology and Clinical, Medtronic Technology Center, Shanghai, China
- Steve Lu: Visualization and Robotics, Medtronic Technology Center, Shanghai, China
12
Constant C, Aubin CE, Kremers HM, Garcia DVV, Wyles CC, Rouzrokh P, Larson AN. The use of deep learning in medical imaging to improve spine care: A scoping review of current literature and clinical applications. N Am Spine Soc J 2023; 15:100236. [PMID: 37599816] [PMCID: PMC10432249] [DOI: 10.1016/j.xnsj.2023.100236]
Abstract
Background Artificial intelligence is a revolutionary technology that promises to assist clinicians in improving patient care. In radiology, deep learning (DL) is widely used in clinical decision aids due to its ability to analyze complex patterns and images. It allows for rapid, enhanced data and imaging analysis, from diagnosis to outcome prediction. The purpose of this study was to evaluate the current literature and clinical utilization of DL in spine imaging. Methods This study is a scoping review and utilized the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to review the scientific literature from 2012 to 2021. A search of the PubMed, Web of Science, Embase, and IEEE Xplore databases with syntax specific to DL and medical imaging in spine care applications was conducted to collect all original publications on the subject. Specific data were extracted from the available literature, including algorithm application, algorithms tested, database type and size, algorithm training method, and outcome of interest. Results A total of 365 studies (total sample of 232,394 patients) were included and grouped into 4 general applications: diagnostic tools, clinical decision support tools, automated clinical/instrumentation assessment, and clinical outcome prediction. Notable disparities exist in the selected algorithms and the training across multiple disparate databases. The most frequently used algorithms were U-Net and ResNet. A DL model was developed and validated in 92% of the included studies, while a pre-existing DL model was investigated in 8%. Of all developed models, only 15% have been externally validated. Conclusions Based on this scoping review, DL in spine imaging is used in a broad range of clinical applications, particularly for diagnosing spinal conditions. There is a wide variety of DL algorithms, database characteristics, and training methods. Future studies should focus on external validation of existing models before bringing them into clinical use.
Affiliation(s)
- Caroline Constant: Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States; Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada; AO Research Institute Davos, Clavadelerstrasse 8, CH 7270, Davos, Switzerland
- Carl-Eric Aubin: Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
- Hilal Maradit Kremers, Diana V. Vera Garcia: Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Cody C. Wyles: Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States; Department of Orthopedic Surgery, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States
- Pouria Rouzrokh: Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States; Radiology Informatics Laboratory, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States
- Annalise Noelle Larson: Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States; Department of Orthopedic Surgery, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States
13
Bott KN, Matheson BE, Smith ACJ, Tse JJ, Boyd SK, Manske SL. Addressing Challenges of Opportunistic Computed Tomography Bone Mineral Density Analysis. Diagnostics (Basel) 2023; 13:2572. [PMID: 37568935] [PMCID: PMC10416827] [DOI: 10.3390/diagnostics13152572]
Abstract
Computed tomography (CT) offers advanced biomedical imaging of the body and is broadly utilized for clinical diagnosis. Traditionally, clinical CT scans have not been used for volumetric bone mineral density (vBMD) assessment; however, computational advances can now leverage clinically obtained CT data for the secondary analysis of bone, known as opportunistic CT analysis. Initial applications focused on using clinically acquired CT scans for secondary osteoporosis screening, but opportunistic CT analysis can also be applied to answer research questions related to vBMD changes in response to various disease states. There are several considerations for opportunistic CT analysis, including scan acquisition, contrast enhancement, the internal calibration technique, and bone segmentation, but there remains no consensus on applying these methods. These factors may influence vBMD measures and therefore the robustness of the opportunistic CT analysis. Further research and standardization efforts are needed to establish a consensus and optimize the application of opportunistic CT analysis for accurate and reliable assessment of vBMD in clinical and research settings. This review summarizes the current state of opportunistic CT analysis, highlighting its potential and addressing the associated challenges.
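Internal (phantomless) calibration, one of the considerations listed above, is commonly implemented as a linear mapping from CT attenuation to an equivalent density anchored on internal reference tissues. The sketch below illustrates that idea only; the reference values and densities are arbitrary placeholders, not a validated calibration.

```python
import numpy as np

# Hypothetical internal reference tissues: measured attenuation (HU) paired with
# assumed equivalent densities (mg/cm^3). These numbers are placeholders only;
# real internal calibrations are scanner-, protocol-, and method-specific.
reference_hu      = np.array([-1000.0, -90.0, 50.0])   # e.g., air, fat, blood
reference_density = np.array([0.0, 10.0, 60.0])        # placeholder equivalent densities

slope, intercept = np.polyfit(reference_hu, reference_density, deg=1)

def hu_to_vbmd(mean_hu):
    """Map a vertebral ROI's mean attenuation to a calibrated vBMD estimate."""
    return slope * mean_hu + intercept

print(f"estimated vBMD = {hu_to_vbmd(180.0):.1f} mg/cm^3")  # hypothetical trabecular ROI
```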
Affiliation(s)
- Kirsten N. Bott, Justin J. Tse, Steven K. Boyd, Sarah L. Manske: Department of Radiology, University of Calgary, Calgary, AB T2N 1N4, Canada; McCaig Institute for Bone and Joint Health, University of Calgary, Calgary, AB T2N 4Z6, Canada
- Bryn E. Matheson: McCaig Institute for Bone and Joint Health, University of Calgary, Calgary, AB T2N 4Z6, Canada; Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
- Ainsley C. J. Smith: Department of Radiology, University of Calgary, Calgary, AB T2N 1N4, Canada; McCaig Institute for Bone and Joint Health, University of Calgary, Calgary, AB T2N 4Z6, Canada; Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
14
Liu D, Binkley NC, Perez A, Garrett JW, Zea R, Summers RM, Pickhardt PJ. CT image-based biomarkers acquired by AI-based algorithms for the opportunistic prediction of falls. BJR Open 2023; 5:20230014. [PMID: 37953870] [PMCID: PMC10636337] [DOI: 10.1259/bjro.20230014]
Abstract
Objective Evaluate whether biomarkers measured by automated artificial intelligence (AI)-based algorithms are suggestive of future fall risk. Methods In this retrospective age- and sex-matched case-control study, 9029 total patients underwent initial abdominal CT for a variety of indications over a 20-year interval at one institution. 3535 case patients (mean age at initial CT, 66.5 ± 9.6 years; 63.4% female) who went on to fall (mean interval to fall, 6.5 years) and 5494 controls (mean age at initial CT, 66.7 ± 9.8 years; 63.4% females; mean follow-up interval, 6.6 years) were included. Falls were identified by electronic health record review. Validated and fully automated quantitative CT algorithms for skeletal muscle, adipose tissue, and trabecular bone attenuation at the level of L1 were applied to all scans. Uni- and multivariate assessment included hazard ratios (HRs) and area under the receiver operating characteristic (AUROC) curve. Results Fall HRs (with 95% CI) for low muscle Hounsfield unit, high total adipose area, and low bone Hounsfield unit were 1.82 (1.65-2.00), 1.31 (1.19-1.44) and 1.91 (1.74-2.11), respectively, and the 10-year AUROC values for predicting falls were 0.619, 0.556, and 0.639, respectively. Combining all these CT biomarkers further improved the predictive value, including 10-year AUROC of 0.657. Conclusion Automated abdominal CT-based opportunistic measures of muscle, fat, and bone offer a novel approach to risk stratification for future falls, potentially by identifying patients with osteosarcopenic obesity. Advances in knowledge There are few well-established clinical tools to predict falls. We use novel AI-based body composition algorithms to leverage incidental CT data to help determine a patient's future fall risk.
Affiliation(s)
- Daniel Liu, Alberto Perez, John W Garrett, Ryan Zea, Perry J Pickhardt: Department of Radiology, University of Wisconsin School of Medicine & Public Health, Madison, WI, USA
- Neil C Binkley: Osteoporosis Clinical Research Program, University of Wisconsin School of Medicine & Public Health, Madison, WI, USA
- Ronald M Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
15
CT-Based Automatic Spine Segmentation Using Patch-Based Deep Learning. Int J Intell Syst 2023. [DOI: 10.1155/2023/2345835]
Abstract
CT vertebral segmentation plays an essential role in various clinical applications, such as computer-assisted surgical interventions, assessment of spinal abnormalities, and detection of vertebral compression fractures. Automatic CT vertebral segmentation is challenging due to the overlapping shadows of thoracoabdominal structures such as the lungs, bony structures such as the ribs, and other issues such as ambiguous object borders, complicated spine architecture, patient variability, and fluctuations in image contrast. Deep learning is an emerging technique for disease diagnosis in the medical field. This study proposes a patch-based deep learning approach to extract discriminative features from unlabeled data using a stacked sparse autoencoder (SSAE). 2D slices from a CT volume are divided into overlapping patches that are fed into the model for training. A random undersampling (RUS) module is applied to balance the training data by selecting a subset of the majority class. The SSAE uses pixel intensities alone to learn high-level features that distinguish image patches. Each image is subjected to a sliding-window operation to express image patches using the autoencoder's high-level features, which are then fed into a sigmoid layer to classify whether each patch is a vertebra or not. We validate our approach on three diverse publicly available datasets: VerSe, CSI-Seg, and the Lumbar CT dataset. After configuration optimization, our proposed method outperformed other models, achieving 89.9% precision, 90.2% recall, 98.9% accuracy, 90.4% F-score, 82.6% intersection over union (IoU), and 90.2% Dice coefficient (DC). The results of this study demonstrate that our model performs consistently across a variety of validation strategies and is flexible, fast, and generalizable, making it suited for clinical application.
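The random undersampling step described above simply discards a random subset of the majority class before training. A minimal sketch on hypothetical patch labels is shown below; the SSAE itself is not reproduced here.

```python
import numpy as np

def random_undersample(patches, labels, seed=0):
    """Balance a binary patch dataset by subsampling the majority class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    idx_pos = np.flatnonzero(labels == 1)
    idx_neg = np.flatnonzero(labels == 0)
    if len(idx_neg) > len(idx_pos):
        idx_neg = rng.choice(idx_neg, size=len(idx_pos), replace=False)
    else:
        idx_pos = rng.choice(idx_pos, size=len(idx_neg), replace=False)
    keep = np.concatenate([idx_pos, idx_neg])
    rng.shuffle(keep)
    return patches[keep], labels[keep]

# Hypothetical 32x32 patches: 200 vertebra patches vs. 1800 background patches
patches = np.random.rand(2000, 32, 32)
labels = np.concatenate([np.ones(200, int), np.zeros(1800, int)])
bal_patches, bal_labels = random_undersample(patches, labels)
print(bal_labels.mean())   # ~0.5 after balancing
```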
16
Xie L, Wisse LEM, Wang J, Ravikumar S, Khandelwal P, Glenn T, Luther A, Lim S, Wolk DA, Yushkevich PA. Deep label fusion: A generalizable hybrid multi-atlas and deep convolutional neural network for medical image segmentation. Med Image Anal 2023; 83:102683. [PMID: 36379194] [PMCID: PMC10009820] [DOI: 10.1016/j.media.2022.102683]
Abstract
Deep convolutional neural networks (DCNN) achieve very high accuracy in segmenting various anatomical structures in medical images but often suffer from relatively poor generalizability. Multi-atlas segmentation (MAS), while less accurate than DCNN in many applications, tends to generalize well to unseen datasets with characteristics different from the training dataset. Several groups have attempted to integrate the power of DCNN to learn complex data representations with the robustness of MAS to changes in image characteristics. However, these studies primarily focused on replacing individual components of MAS with DCNN models and reported marginal improvements in accuracy. In this study we describe and evaluate a 3D end-to-end hybrid MAS and DCNN segmentation pipeline, called Deep Label Fusion (DLF). The DLF pipeline consists of two main components with learnable weights: a weighted voting subnet that mimics the MAS algorithm and a fine-tuning subnet that corrects residual segmentation errors to improve final segmentation accuracy. We evaluate DLF on five datasets that represent a diversity of anatomical structures (medial temporal lobe subregions and lumbar vertebrae) and imaging modalities (multi-modality, multi-field-strength MRI and computed tomography). These experiments show that DLF achieves segmentation accuracy comparable to nnU-Net (Isensee et al., 2020), the state-of-the-art DCNN pipeline, when evaluated on a dataset with characteristics similar to the training datasets, while outperforming nnU-Net on tasks that involve generalization to datasets with different characteristics (different MRI field strength or different patient population). DLF is also shown to consistently improve upon conventional MAS methods. In addition, a modality augmentation strategy tailored for multimodal imaging is proposed and demonstrated to be beneficial in improving the segmentation accuracy of learning-based methods, including DLF and DCNN, in missing-data scenarios at test time, as well as in increasing the interpretability of the contribution of each individual modality.
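The weighted-voting component of multi-atlas segmentation that DLF's first subnet mimics can be written compactly as a per-voxel weighted sum of candidate atlas labels. The sketch below is a generic illustration of that fusion step on random placeholder data, not the DLF network.

```python
import numpy as np

def weighted_label_fusion(atlas_labels, atlas_weights):
    """
    Fuse candidate segmentations from registered atlases.
    atlas_labels:  (n_atlases, X, Y, Z) integer label maps
    atlas_weights: (n_atlases, X, Y, Z) non-negative per-voxel weights
    """
    labels = np.unique(atlas_labels)
    votes = np.stack([(atlas_labels == lab) * atlas_weights for lab in labels])
    # votes: (n_labels, n_atlases, X, Y, Z) -> sum over atlases, argmax over labels
    return labels[np.argmax(votes.sum(axis=1), axis=0)]

rng = np.random.default_rng(0)
atlas_labels = rng.integers(0, 3, size=(4, 16, 16, 16))   # hypothetical atlases
atlas_weights = rng.random(size=(4, 16, 16, 16))          # e.g., similarity-based weights
fused = weighted_label_fusion(atlas_labels, atlas_weights)
print(fused.shape)
```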
Affiliation(s)
- Long Xie, Jiancong Wang, Sadhana Ravikumar, Pulkit Khandelwal, Trevor Glenn, Sydney Lim, Paul A Yushkevich: Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Laura E M Wisse, Anica Luther: Department of Diagnostic Radiology, Lund University, Lund, Sweden
- David A Wolk: Penn Memory Center, University of Pennsylvania, Philadelphia, USA; Department of Neurology, University of Pennsylvania, Philadelphia, USA
17
Khan N, Peterson AC, Aubert B, Morris A, Atkins PR, Lenz AL, Anderson AE, Elhabian SY. Statistical multi-level shape models for scalable modeling of multi-organ anatomies. Front Bioeng Biotechnol 2023; 11:1089113. [PMID: 36873362] [PMCID: PMC9978224] [DOI: 10.3389/fbioe.2023.1089113]
Abstract
Statistical shape modeling is an indispensable tool in the quantitative analysis of anatomies. Particle-based shape modeling (PSM) is a state-of-the-art approach that enables the learning of population-level shape representations from medical imaging data (e.g., CT, MRI) and the associated 3D models of anatomy generated from them. PSM optimizes the placement of a dense set of landmarks (i.e., correspondence points) on a given shape cohort. PSM supports multi-organ modeling as a particular case of the conventional single-organ framework via a global statistical model, where the multi-structure anatomy is considered as a single structure. However, global multi-organ models are not scalable to many organs, induce anatomical inconsistencies, and result in entangled shape statistics where modes of shape variation reflect both within- and between-organ variations. Hence, there is a need for an efficient modeling approach that can capture the inter-organ relations (i.e., pose variations) of the complex anatomy while simultaneously optimizing the morphological changes of each organ and capturing the population-level statistics. This paper builds on the PSM approach and proposes a new method for correspondence-point optimization of multiple organs that overcomes these limitations. The central idea of multilevel component analysis is that the shape statistics consist of two mutually orthogonal subspaces: the within-organ subspace and the between-organ subspace. We formulate the correspondence optimization objective using this generative model. We evaluate the proposed method using synthetic shape data and clinical data for articulated joint structures of the spine, foot and ankle, and hip joint.
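The two-subspace idea can be illustrated on corresponded landmarks: a between-organ component captures how each organ's mean position varies across subjects, and a within-organ component is the residual local shape change. The sketch below is a toy decomposition under that generic reading, with random placeholder data; it is not the published optimization.

```python
import numpy as np

# Hypothetical correspondences: (n_subjects, n_organs, n_points, 3)
rng = np.random.default_rng(0)
points = rng.normal(size=(20, 2, 128, 3))

# Per-subject, per-organ pose summarized by the organ centroid
organ_centroids = points.mean(axis=2, keepdims=True)

# Between-organ (pose) variation: how centroids deviate from the population mean
between = organ_centroids - organ_centroids.mean(axis=0)

# Within-organ (shape) variation: centered local residuals around each centroid
local = points - organ_centroids
within = local - local.mean(axis=0)

print(between.shape, within.shape)   # complementary descriptions of the same data
```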
Affiliation(s)
- Nawazish Khan, Shireen Y. Elhabian: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, United States; School of Computing, University of Utah, Salt Lake City, UT, United States
- Andrew C. Peterson, Amy L. Lenz: Department of Orthopaedics, School of Medicine, University of Utah, Salt Lake City, UT, United States
- Alan Morris: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, United States
- Penny R. Atkins, Andrew E. Anderson: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, United States; Department of Orthopaedics, School of Medicine, University of Utah, Salt Lake City, UT, United States
18
Zhang D, Aoude A, Driscoll M. Development and model form assessment of an automatic subject-specific vertebra reconstruction method. Comput Biol Med 2022; 150:106158. [PMID: 37859278] [DOI: 10.1016/j.compbiomed.2022.106158]
Abstract
BACKGROUND Current spine models for analog bench models, surgical navigation, and training platforms are conventionally based on 3D models from anatomical human body polygon databases or on time-consuming manually labelled data. This work proposed a workflow for quick and accurate subject-specific vertebra reconstruction and quantified the reconstructed model accuracy and model form errors. METHODS Four different neural networks were customized for vertebra segmentation. To validate the workflow in clinical applications, an excised human lumbar vertebra was scanned via CT and reconstructed into 3D CAD models using the four refined networks. A reverse engineering solution was proposed to obtain the high-precision geometry of the excised vertebra as the gold standard. 3D model evaluation metrics and a finite element analysis (FEA) method were designed to reflect the model accuracy and model form errors. RESULTS The automatic segmentation networks achieved a best Dice score of 94.20% on the validation datasets. The accuracy of the reconstructed models was quantified with a best 3D Dice index of 92.80%, 3D IoU of 86.56%, and Hausdorff distance of 1.60 mm, and heatmaps and histograms were used for error visualization. The FEA results showed the impact of the different geometries and reflected the partial surface accuracy of the reconstructed vertebra under biomechanical loads, with the closest percentage error of 4.2710% compared to the gold-standard model. CONCLUSIONS In this work, a workflow for automatic subject-specific vertebra reconstruction was proposed, and the errors in geometry and FEA were quantified. Such errors should be considered when leveraging subject-specific modelling towards the development and improvement of treatments.
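The 3D evaluation metrics quoted above are standard; as an illustration (not the authors' evaluation script), a symmetric Hausdorff distance between two surface point clouds can be computed with SciPy as shown below on hypothetical vertex samples.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, 3) surface point sets."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

# Hypothetical vertex samples from a reconstructed and a gold-standard surface (mm)
rng = np.random.default_rng(0)
recon = rng.normal(size=(1000, 3))
gold = recon + rng.normal(scale=0.5, size=(1000, 3))
print(f"Hausdorff distance = {symmetric_hausdorff(recon, gold):.2f} mm")
```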
Collapse
Affiliation(s)
- Dingzhong Zhang
- Musculoskeletal Biomechanics Research Lab, Department of Mechanical Engineering, McGill University, 845 Sherbrooke St. W, Montréal, Quebec, H3A 0G4, Canada.
| | - Ahmed Aoude
- Orthopaedic Research Laboratory, Research Institute of McGill University Health Centre, Montreal General Hospital, 1650 Cedar Avenue, Montréal, Québec, H3G 1A4, Canada.
| | - Mark Driscoll
- Musculoskeletal Biomechanics Research Lab, Department of Mechanical Engineering, McGill University, 845 Sherbrooke St. W, Montréal, Quebec, H3A 0G4, Canada; Orthopaedic Research Laboratory, Research Institute of McGill University Health Centre, Montreal General Hospital, 1650 Cedar Avenue, Montréal, Québec, H3G 1A4, Canada.
| |
Collapse
|
19
|
Cheng P, Cao X, Yang Y, Zhang G, He Y. Automatically recognize and segment morphological features of the 3D vertebra based on topological data analysis. Comput Biol Med 2022; 149:106031. [DOI: 10.1016/j.compbiomed.2022.106031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Revised: 08/02/2022] [Accepted: 08/20/2022] [Indexed: 11/26/2022]
|
20
|
|
21
|
Alukaev D, Kiselev S, Mustafaev T, Ainur A, Ibragimov B, Vrtovec T. A deep learning framework for vertebral morphometry and Cobb angle measurement with external validation. EUROPEAN SPINE JOURNAL : OFFICIAL PUBLICATION OF THE EUROPEAN SPINE SOCIETY, THE EUROPEAN SPINAL DEFORMITY SOCIETY, AND THE EUROPEAN SECTION OF THE CERVICAL SPINE RESEARCH SOCIETY 2022; 31:2115-2124. [PMID: 35596800 DOI: 10.1007/s00586-022-07245-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 04/11/2022] [Accepted: 04/21/2022] [Indexed: 01/20/2023]
Abstract
PURPOSE To propose a fully automated deep learning (DL) framework for the vertebral morphometry and Cobb angle measurement from three-dimensional (3D) computed tomography (CT) images of the spine, and validate the proposed framework on an external database. METHODS The vertebrae were first localized and segmented in each 3D CT image using a DL architecture based on an ensemble of U-Nets, and then automated vertebral morphometry in the form of vertebral body (VB) and intervertebral disk (IVD) heights, and spinal curvature measurements in the form of coronal and sagittal Cobb angles (thoracic kyphosis and lumbar lordosis) were performed using dedicated machine learning techniques. The framework was trained on 1725 vertebrae from 160 CT images and validated on an external database of 157 vertebrae from 15 CT images. RESULTS The resulting mean absolute errors (± standard deviation) between the obtained DL and corresponding manual measurements were 1.17 ± 0.40 mm for VB heights, 0.54 ± 0.21 mm for IVD heights, and 3.42 ± 1.36° for coronal and sagittal Cobb angles, with respective maximal absolute errors of 2.51 mm, 1.64 mm, and 5.52°. Linear regression revealed excellent agreement, with Pearson's correlation coefficient of 0.943, 0.928, and 0.996, respectively. CONCLUSION The obtained results are within the range of values obtained by existing DL approaches without external validation. The results therefore confirm the scalability of the proposed DL framework from the perspective of application to external data, and the time and computational resource consumption required for framework training.
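The agreement statistics reported above (mean absolute error ± standard deviation, maximal absolute error and Pearson's r between automated and manual measurements) can be reproduced for any pair of measurement series with a few lines of Python; the sketch below uses invented vertebral body heights purely for illustration and is not tied to the study's pipeline.

import numpy as np
from scipy.stats import pearsonr

def agreement(auto, manual):
    """MAE +/- SD, maximal absolute error and Pearson's r for paired measurements."""
    auto, manual = np.asarray(auto, float), np.asarray(manual, float)
    err = np.abs(auto - manual)
    r, _ = pearsonr(auto, manual)
    return err.mean(), err.std(), err.max(), r

# illustrative vertebral body heights in mm (not study data)
manual_vb = np.array([22.1, 24.3, 25.0, 26.8, 27.5, 28.9])
auto_vb = np.array([21.4, 25.1, 24.2, 27.9, 26.6, 29.8])
mae, sd, max_err, r = agreement(auto_vb, manual_vb)
print(f"MAE {mae:.2f} ± {sd:.2f} mm, max {max_err:.2f} mm, r = {r:.3f}")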
Collapse
Affiliation(s)
- Danis Alukaev
- AI Lab, Innopolis University, Universitetskaya St 1, 420500, Innopolis, Republic of Tatarstan, Russian Federation
| | - Semen Kiselev
- AI Lab, Innopolis University, Universitetskaya St 1, 420500, Innopolis, Republic of Tatarstan, Russian Federation
| | - Tamerlan Mustafaev
- AI Lab, Innopolis University, Universitetskaya St 1, 420500, Innopolis, Republic of Tatarstan, Russian Federation; Kazan Public Hospital, Chekhova 1A, 42000, Kazan, Republic of Tatarstan, Russian Federation
| | - Ahatov Ainur
- Barsmed Diagnostic Center, Daurskaya 12, 42000, Kazan, Republic of Tatarstan, Russian Federation
| | - Bulat Ibragimov
- Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100, Copenhagen, Denmark; Laboratory of Imaging Technologies, Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, 1000, Ljubljana, Slovenia
| | - Tomaž Vrtovec
- Laboratory of Imaging Technologies, Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, 1000, Ljubljana, Slovenia.
| |
Collapse
|
22
|
Automated segmentation of the fractured vertebrae on CT and its applicability in a radiomics model to predict fracture malignancy. Sci Rep 2022; 12:6735. [PMID: 35468985 PMCID: PMC9038736 DOI: 10.1038/s41598-022-10807-7] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Accepted: 04/13/2022] [Indexed: 11/08/2022] Open
Abstract
Although CT radiomics has shown promising results in the evaluation of vertebral fractures, the need for manual segmentation of fractured vertebrae limited the routine clinical implementation of radiomics. Therefore, automated segmentation of fractured vertebrae is needed for successful clinical use of radiomics. In this study, we aimed to develop and validate an automated algorithm for segmentation of fractured vertebral bodies on CT, and to evaluate the applicability of the algorithm in a radiomics prediction model to differentiate benign and malignant fractures. A convolutional neural network was trained to perform automated segmentation of fractured vertebral bodies using 341 vertebrae with benign or malignant fractures from 158 patients, and was validated on independent test sets (internal test, 86 vertebrae [59 patients]; external test, 102 vertebrae [59 patients]). Then, a radiomics model predicting fracture malignancy on CT was constructed, and the prediction performance was compared between automated and human expert segmentations. The algorithm achieved good agreement with human expert segmentation at testing (Dice similarity coefficient, 0.93-0.94; cross-sectional area error, 2.66-2.97%; average surface distance, 0.40-0.54 mm). The radiomics model demonstrated good performance in the training set (AUC, 0.93). In the test sets, automated and human expert segmentations showed comparable prediction performances (AUC, internal test, 0.80 vs 0.87, p = 0.044; external test, 0.83 vs 0.80, p = 0.37). In summary, we developed and validated an automated segmentation algorithm that showed comparable performance to human expert segmentation in a CT radiomics model to predict fracture malignancy, which may enable more practical clinical utilization of radiomics.
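For readers unfamiliar with how a radiomics model is compared across segmentation sources, the sketch below shows the general pattern under stated assumptions: placeholder feature matrices stand in for radiomics features extracted from the automated and expert masks (the actual feature extraction is not shown), and a logistic regression classifier is scored by AUC on each feature set. It is a schematic illustration, not the study's model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_train, n_test, n_feat = 341, 86, 30   # cohort sizes loosely follow the study; feature count is assumed

# placeholder radiomics feature matrices; in practice these would be extracted
# from CT using either the automated or the expert segmentation masks
X_train = rng.normal(size=(n_train, n_feat))
y_train = rng.integers(0, 2, n_train)             # 0 = benign, 1 = malignant
X_test_auto = rng.normal(size=(n_test, n_feat))   # features from automated masks
X_test_expert = X_test_auto + rng.normal(scale=0.1, size=(n_test, n_feat))
y_test = rng.integers(0, 2, n_test)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
auc_auto = roc_auc_score(y_test, model.predict_proba(X_test_auto)[:, 1])
auc_expert = roc_auc_score(y_test, model.predict_proba(X_test_expert)[:, 1])
print(f"AUC automated {auc_auto:.2f} vs expert {auc_expert:.2f}")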
Collapse
|
23
|
SVseg: Stacked Sparse Autoencoder-Based Patch Classification Modeling for Vertebrae Segmentation. MATHEMATICS 2022. [DOI: 10.3390/math10050796] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Precise vertebrae segmentation is essential for the image-related analysis of spine pathologies such as vertebral compression fractures and other abnormalities, as well as for clinical diagnostic treatment and surgical planning. An automatic and objective system for vertebra segmentation is required, but its development is likely to run into difficulties such as low segmentation accuracy and the requirement of prior knowledge or human intervention. Recently, vertebral segmentation methods have focused on deep learning-based techniques. To mitigate the challenges involved, we propose deep learning primitives and stacked Sparse autoencoder-based patch classification modeling for Vertebrae segmentation (SVseg) from Computed Tomography (CT) images. After data preprocessing, we extract overlapping patches from CT images as input to train the model. The stacked sparse autoencoder learns high-level features from unlabeled image patches in an unsupervised way. Furthermore, we employ supervised learning to refine the feature representation to improve the discriminability of learned features. These high-level features are fed into a logistic regression classifier to fine-tune the model. A sigmoid classifier is added to the network to discriminate the vertebrae patches from non-vertebrae patches by selecting the class with the highest probabilities. We validated our proposed SVseg model on the publicly available MICCAI Computational Spine Imaging (CSI) dataset. After configuration optimization, our proposed SVseg model achieved impressive performance, with 87.39% in Dice Similarity Coefficient (DSC), 77.60% in Jaccard Similarity Coefficient (JSC), 91.53% in precision (PRE), and 90.88% in sensitivity (SEN). The experimental results demonstrated the method’s efficiency and significant potential for diagnosing and treating clinical spinal diseases.
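The following PyTorch sketch illustrates the general recipe the abstract describes, namely layer-wise unsupervised pretraining of sparse autoencoders on image patches followed by supervised fine-tuning with a sigmoid output. All layer sizes, hyperparameters and toy tensors are assumptions for illustration; this is not the authors' SVseg implementation.

import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """One sparse autoencoder layer: encoder + decoder, L1 penalty on the code."""
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.dec = nn.Linear(code_dim, in_dim)
    def forward(self, x):
        code = self.enc(x)
        return code, self.dec(code)

def pretrain(ae, patches, epochs=5, sparsity=1e-3, lr=1e-3):
    """Unsupervised reconstruction training with a sparsity penalty on the code."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        code, recon = ae(patches)
        loss = nn.functional.mse_loss(recon, patches) + sparsity * code.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()

# toy data: flattened 16x16 CT patches, label 1 = vertebra, 0 = background
patches = torch.rand(256, 256)
labels = torch.randint(0, 2, (256, 1)).float()

ae1, ae2 = SparseAE(256, 128), SparseAE(128, 64)
pretrain(ae1, patches)                         # layer-wise unsupervised pretraining
pretrain(ae2, ae1.enc(patches).detach())

# stack the encoders, add a sigmoid classifier, and fine-tune with supervision
model = nn.Sequential(ae1.enc, ae2.enc, nn.Linear(64, 1), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    loss = nn.functional.binary_cross_entropy(model(patches), labels)
    opt.zero_grad(); loss.backward(); opt.step()
print("patch vertebra probability:", model(patches[:1]).item())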
Collapse
|
24
|
Eltes PE, Turbucz M, Fayad J, Bereczki F, Szőke G, Terebessy T, Lacroix D, Varga PP, Lazary A. A Novel Three-Dimensional Computational Method to Assess Rod Contour Deformation and to Map Bony Fusion in a Lumbopelvic Reconstruction After En-Bloc Sacrectomy. Front Surg 2022; 8:698179. [PMID: 35071306 PMCID: PMC8766313 DOI: 10.3389/fsurg.2021.698179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 11/24/2021] [Indexed: 11/13/2022] Open
Abstract
Introduction: En-bloc resection of a primary malignant sacral tumor with wide oncological margins impacts the biomechanics of the spinopelvic complex, deteriorating postoperative function. The closed-loop technique (CLT) for spinopelvic fixation (SPF) uses a single U-shaped rod to restore the spinopelvic biomechanical integrity. The CLT method was designed to provide non-rigid fixation; however, this hypothesis has not been previously tested. Here, we establish a computational method to measure the deformation of the implant and characterize the bony fusion process based on the 6-year follow-up (FU) data. Materials and Methods: Post-operative CT scans were collected from a male patient who underwent total sacrectomy at the age of 42 due to a chordoma. CLT was used to reconstruct the spinopelvic junction. We defined the 3D geometry of the implant construct. Using rigid registration algorithms, a common coordinate system was created for the CLT to measure and visualize the deformation of the construct during the FU. In order to demonstrate the cyclical loading of the construct, the patient underwent gait analysis at the 6th-year FU. First, a region of interest (ROI) was selected at the proximal level of the construct, then the deformation was determined during the follow-up period. In order to investigate the fusion process, a single axial slice-based voxel finite element (FE) mesh was created. The Hounsfield unit (HU) values were determined; then, using an empirical linear equation, bone mineral density (BMD) values were assigned to every mesh element in one of 10 color-coded categories (1st category = 0 g/cm3, 10th category = 1.12 g/cm3). Results: A significant correlation was found between the number of days postoperatively and deformation in the sagittal plane, resulting in a forward bending tendency of the construct. Volume distributions were determined and visualized over time for the different BMD categories, and it was found that the total volume of the elements in the highest BMD category was 0.04 cm3 in the first postoperative CT, 0.98 cm3 at the 2nd-year FU, and 2.30 cm3 after 6 years. Conclusion: The CLT provides non-rigid fixation. The quantification of implant deformation and bony fusion may help understand the complex lumbopelvic biomechanics after sacrectomy.
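The HU-to-BMD mapping and the per-category volume bookkeeping described above can be sketched as follows. The slope and intercept of the empirical linear equation are not given in the abstract, so placeholder coefficients are used; only the binning into 10 BMD categories up to 1.12 g/cm3 and the summation of element volumes mirror the described analysis.

import numpy as np

def hu_to_bmd(hu, slope=0.0008, intercept=0.0):
    """Empirical linear HU -> BMD (g/cm^3) mapping; coefficients are placeholders."""
    return slope * np.asarray(hu, float) + intercept

def bmd_volume_distribution(hu_values, voxel_volume_cm3, n_bins=10, bmd_max=1.12):
    """Total element volume falling into each of n_bins BMD categories."""
    bmd = np.clip(hu_to_bmd(hu_values), 0.0, bmd_max)
    edges = np.linspace(0.0, bmd_max, n_bins + 1)
    idx = np.clip(np.digitize(bmd, edges) - 1, 0, n_bins - 1)
    return np.array([(idx == k).sum() * voxel_volume_cm3 for k in range(n_bins)])

# toy HU values from an axial slice of the fusion mass (illustrative only)
hu = np.random.default_rng(1).integers(-100, 1400, size=5000)
vols = bmd_volume_distribution(hu, voxel_volume_cm3=0.001)
print("volume per BMD category (cm^3):", np.round(vols, 3))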
Collapse
Affiliation(s)
- Peter Endre Eltes
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
- Department of Spine Surgery, Semmelweis University, Budapest, Hungary
- *Correspondence: Peter Endre Eltes
| | - Mate Turbucz
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
- School of PhD Studies, Semmelweis University, Budapest, Hungary
| | - Jennifer Fayad
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
- Department of Industrial Engineering, Alma Mater Studiorum, Universita di Bologna, Bologna, Italy
| | - Ferenc Bereczki
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
- School of PhD Studies, Semmelweis University, Budapest, Hungary
| | - György Szőke
- Department of Orthopaedics, Semmelweis University, Budapest, Hungary
| | - Tamás Terebessy
- Department of Orthopaedics, Semmelweis University, Budapest, Hungary
| | - Damien Lacroix
- INSIGNEO Institute for In Silico Medicine, Department of Mechanical Engineering, The University of Sheffield, Sheffield, United Kingdom
| | - Peter Pal Varga
- National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
| | - Aron Lazary
- Department of Spine Surgery, Semmelweis University, Budapest, Hungary
- National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
| |
Collapse
|
25
|
Salari A, Djavadifar A, Liu XR, Najjaran H. Object recognition datasets and challenges: A review. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.022] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
|
26
|
Arends SR, Savenije MH, Eppinga WS, van der Velden JM, van den Berg CA, Verhoeff JJ. Clinical utility of convolutional neural networks for treatment planning in radiotherapy for spinal metastases. Phys Imaging Radiat Oncol 2022; 21:42-47. [PMID: 35243030 PMCID: PMC8857663 DOI: 10.1016/j.phro.2022.02.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 01/08/2022] [Accepted: 02/11/2022] [Indexed: 01/22/2023] Open
Abstract
We presented a CNN workflow for segmentation and labeling of vertebrae on CT. This approach proved to be robust in a majority of cases with spinal metastases. The presented workflow can save time in a clinical radiotherapy setting. The approach also allows for more advanced quantitative image analysis of vertebrae.
Background and purpose: Spine delineation is essential for high-quality radiotherapy treatment planning of spinal metastases. However, manual delineation is time-consuming and prone to interobserver variability. Automatic spine delineation, especially using deep learning, has shown promising results in healthy subjects. We aimed to evaluate the clinical utility of deep learning-based vertebral body delineations for radiotherapy planning purposes. Materials and methods: A multi-scale convolutional neural network (CNN) was used for automatic segmentation and labeling. Two approaches were tested: the combined approach, using one CNN for both segmentation and labeling, and the sequential approach, using separate CNNs for these tasks. Training and internal validation data included 580 vertebrae; external validation data included 202 vertebrae. For quantitative assessment, the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used. Axial slices from external images were presented to radiation oncologists for subjective evaluation. Results: Both approaches performed comparably during the internal validation (DSC: 96.7%, HD: 3.6 mm), but the sequential approach proved more robust during the external validation (DSC: 94.5% vs 94.4%, p < 0.001; HD: 4.5 vs 7.1 mm, p < 0.001). Subsequently, subjective evaluation of this sequential approach showed that experienced radiation oncologists could distinguish automatic from human-made contours in 63% of cases. They rated automatic contours clinically acceptable in 77% of cases, compared to 88% of human-made contours. Conclusion: We present a feasible approach for automatic vertebral body delineation using two variants of a multi-scale CNN. This approach generates high-quality automatic delineations, which can save time in a clinical radiotherapy workflow.
Collapse
|
27
|
Li B, Liu C, Wu S, Li G. Verte-Box: A Novel Convolutional Neural Network for Fully Automatic Segmentation of Vertebrae in CT Image. Tomography 2022; 8:45-58. [PMID: 35076631 PMCID: PMC8788501 DOI: 10.3390/tomography8010005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Revised: 12/14/2021] [Accepted: 12/17/2021] [Indexed: 12/19/2022] Open
Abstract
Due to the complex shape of the vertebrae and a background containing a large amount of interfering information, it is difficult to accurately segment the vertebrae from a computed tomography (CT) volume by manual segmentation. This paper proposes a convolutional neural network for vertebrae segmentation, named Verte-Box. Firstly, in order to enhance feature representation and suppress interfering information, a robust attention mechanism is placed in the central processing unit of the network, including a channel attention module and a dual attention module. The channel attention module is used to explore and emphasize the interdependence between channel graphs of low-level features. The dual attention module is used to enhance features along the location and channel dimensions. Secondly, we design a multi-scale convolution block for the network, which can make full use of different combinations of receptive field sizes and significantly improve the network's perception of the shape and size of the vertebrae. In addition, we connect the rough segmentation prediction maps generated by each feature in the feature box to generate the final fine prediction result, so that the deeply supervised network can effectively capture vertebra information. We evaluated our method on the publicly available dataset of the CSI 2014 Vertebral Segmentation Challenge and achieved a mean Dice similarity coefficient of 92.18 ± 0.45%, an intersection over union of 87.29 ± 0.58%, and a 95% Hausdorff distance of 7.7107 ± 0.5958, outperforming other algorithms.
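As a minimal illustration of channel attention in general (not the exact Verte-Box block), the PyTorch sketch below implements a squeeze-and-excitation style module that reweights feature channels using globally pooled statistics; the channel count, reduction ratio and toy feature map are assumptions.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative only)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight the channels

feat = torch.rand(2, 64, 32, 32)                      # a toy CT feature map
print(ChannelAttention(64)(feat).shape)               # torch.Size([2, 64, 32, 32])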
Collapse
Affiliation(s)
- Bing Li
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China; (C.L.); (S.W.); (G.L.)
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, China
- Correspondence:
| | - Chuang Liu
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China; (C.L.); (S.W.); (G.L.)
| | - Shaoyong Wu
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China; (C.L.); (S.W.); (G.L.)
| | - Guangqing Li
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China; (C.L.); (S.W.); (G.L.)
| |
Collapse
|
28
|
Karpiel I, Ziębiński A, Kluszczyński M, Feige D. A Survey of Methods and Technologies Used for Diagnosis of Scoliosis. SENSORS (BASEL, SWITZERLAND) 2021; 21:8410. [PMID: 34960509 PMCID: PMC8707023 DOI: 10.3390/s21248410] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 12/04/2021] [Accepted: 12/09/2021] [Indexed: 02/07/2023]
Abstract
The purpose of this article is to present diagnostic methods used in the diagnosis of scoliosis in the form of a brief review. This article aims to point out the advantages of select methods. This article focuses on general issues without elaborating on problems strictly related to physiotherapy and treatment methods, which may be the subject of further discussions. By outlining and categorizing each method, we summarize relevant publications that may not only help introduce other researchers to the field but also be a valuable source for studying existing methods, developing new ones or choosing evaluation strategies.
Collapse
Affiliation(s)
- Ilona Karpiel
- Łukasiewicz Research Network—Institute of Medical Technology and Equipment, 118 Roosevelt, 41-800 Zabrze, Poland;
| | - Adam Ziębiński
- Department of Distributed Systems and Informatic Devices, Silesian University of Technology, 16 Akademicka, 44-100 Gliwice, Poland;
| | - Marek Kluszczyński
- Department of Health Sciences, Jan Dlugosz University, 4/8 Waszyngtona, 42-200 Częstochowa, Poland;
| | - Daniel Feige
- Łukasiewicz Research Network—Institute of Medical Technology and Equipment, 118 Roosevelt, 41-800 Zabrze, Poland;
- Department of Distributed Systems and Informatic Devices, Silesian University of Technology, 16 Akademicka, 44-100 Gliwice, Poland;
- PhD School, Silesian University of Technology, 2A Akademicka, 44-100 Gliwice, Poland
| |
Collapse
|
29
|
Bereczki F, Turbucz M, Kiss R, Eltes PE, Lazary A. Stability Evaluation of Different Oblique Lumbar Interbody Fusion Constructs in Normal and Osteoporotic Condition - A Finite Element Based Study. Front Bioeng Biotechnol 2021; 9:749914. [PMID: 34805108 PMCID: PMC8602101 DOI: 10.3389/fbioe.2021.749914] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 10/11/2021] [Indexed: 12/20/2022] Open
Abstract
Introduction: In developed countries, the age structure of the population is currently undergoing an upward shift, resulting in a decrease in general bone quality and surgical durability. Over the past decade, oblique lumbar interbody fusion (OLIF) has been globally accepted as a minimally invasive surgical technique. There are several stabilization options available for OLIF cage fixation, such as self-anchored stand-alone (SSA), lateral plate-screw (LPS), and bilateral pedicle screw (BPS) systems. The constructs' stability is crucial for the immediate and long-term success of the surgery. The aim of this study is to investigate the biomechanical effect of the aforementioned constructs, using finite element analysis with different bone qualities (osteoporotic and normal). Method: A bi-segmental (L2–L4) finite element (FE) model was created, using a CT scan of a 24-year-old healthy male. After the FE model validation, CAD geometries of the implants were inserted into the L3–L4 motion segment during a virtual surgery. For the simulations, a 150 N follower load was applied to the models, and then 10 Nm of torque was applied in six general directions (flexion, extension, right/left bending, and right/left rotation), with different bone material properties. Results: The smallest segmental (L3–L4) ROM (range of motion) was observed in the BPS system, except for right bending. Osteoporosis increased ROMs in all constructs, especially in the LPS system (right bending increase: 140.26%). Osteoporosis also increased the caudal displacement of the implanted cage in all models (healthy bone: 0.06 ± 0.03 mm, osteoporosis: 0.106 ± 0.07 mm), particularly with right bending, where the displacement doubled in the SSA and LPS constructs. The displacement of the screws inside the L4 vertebra increased by 59% on average (59.33 ± 21.53%) due to osteoporosis (100% in LPS, rotation). BPS-L4 screw displacements were the least affected by osteoporosis. Conclusions: The investigated constructs provide different levels of stability to the spine depending on the quality of the bone, which can affect the outcome of the surgery. In our model, the BPS system was found to be the most stable construct in osteoporosis. The presented model, after further development, has the potential to help the surgeon in planning a particular spinal surgery by adjusting the stabilization type to the patient's bone quality.
Collapse
Affiliation(s)
- Ferenc Bereczki
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Budapest, Hungary; School of PhD Studies, Semmelweis University, Budapest, Hungary
| | - Mate Turbucz
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Budapest, Hungary; School of PhD Studies, Semmelweis University, Budapest, Hungary
| | - Rita Kiss
- Department of Mechatronics, Optics and Mechanical Engineering Informatics, Budapest University of Technology and Economics, Budapest, Hungary
| | - Peter Endre Eltes
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Budapest, Hungary; Department of Spine Surgery, Semmelweis University, Budapest, Hungary
| | - Aron Lazary
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Budapest, Hungary; Department of Spine Surgery, Semmelweis University, Budapest, Hungary
| |
Collapse
|
30
|
Automatic vertebrae localization and segmentation in CT with a two-stage Dense-U-Net. Sci Rep 2021; 11:22156. [PMID: 34772972 PMCID: PMC8589948 DOI: 10.1038/s41598-021-01296-1] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Accepted: 10/26/2021] [Indexed: 11/09/2022] Open
Abstract
Automatic vertebrae localization and segmentation in computed tomography (CT) are fundamental for spinal image analysis and spine surgery with computer-assisted surgery systems. But they remain challenging due to high variation in spinal anatomy among patients. In this paper, we proposed a deep-learning approach for automatic CT vertebrae localization and segmentation with a two-stage Dense-U-Net. The first stage used a 2D-Dense-U-Net to localize vertebrae by detecting the vertebrae centroids with dense labels and 2D slices. The second stage segmented the specific vertebra within a region-of-interest identified based on the centroid using 3D-Dense-U-Net. Finally, each segmented vertebra was merged into a complete spine and resampled to original resolution. We evaluated our method on the dataset from the CSI 2014 Workshop with 6 metrics: location error (1.69 ± 0.78 mm), detection rate (100%) for vertebrae localization; the dice coefficient (0.953 ± 0.014), intersection over union (0.911 ± 0.025), Hausdorff distance (4.013 ± 2.128 mm), pixel accuracy (0.998 ± 0.001) for vertebrae segmentation. The experimental results demonstrated the efficiency of the proposed method. Furthermore, evaluation on the dataset from the xVertSeg challenge with location error (4.12 ± 2.31), detection rate (100%), dice coefficient (0.877 ± 0.035) shows the generalizability of our method. In summary, our solution localized the vertebrae successfully by detecting the centroids of vertebrae and implemented instance segmentation of vertebrae in the whole spine.
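The glue between the two stages described above is the extraction of a fixed-size region of interest around each detected centroid before the 3D segmentation network is applied. The sketch below shows only that cropping step on a toy CT volume; the 96-voxel box size, the padding value and the example centroid are assumptions, and the Dense-U-Net models themselves are not shown.

import numpy as np

def crop_roi(volume, centroid, size=(96, 96, 96)):
    """Crop a fixed-size region of interest around a detected vertebra centroid,
    padding with the minimum intensity when the box exceeds the volume bounds."""
    centroid = np.round(centroid).astype(int)
    half = np.array(size) // 2
    lo, hi = centroid - half, centroid - half + np.array(size)
    pad_lo = np.maximum(-lo, 0)
    pad_hi = np.maximum(hi - np.array(volume.shape), 0)
    roi = volume[max(lo[0], 0):hi[0], max(lo[1], 0):hi[1], max(lo[2], 0):hi[2]]
    return np.pad(roi, list(zip(pad_lo, pad_hi)), constant_values=volume.min())

ct = np.random.randint(-1000, 1500, size=(200, 160, 160)).astype(np.int16)
roi = crop_roi(ct, centroid=(30, 80, 75))        # e.g. a detected vertebra centroid
print(roi.shape)                                  # (96, 96, 96)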
Collapse
|
31
|
D’Antoni F, Russo F, Ambrosio L, Vollero L, Vadalà G, Merone M, Papalia R, Denaro V. Artificial Intelligence and Computer Vision in Low Back Pain: A Systematic Review. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph182010909. [PMID: 34682647 PMCID: PMC8535895 DOI: 10.3390/ijerph182010909] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 10/04/2021] [Accepted: 10/09/2021] [Indexed: 12/16/2022]
Abstract
Chronic Low Back Pain (LBP) is a symptom that may be caused by several diseases, and it is currently the leading cause of disability worldwide. The increasing amount of digital imaging in orthopaedics has led to the development of methods related to artificial intelligence, and to computer vision in particular, which aim to improve the diagnosis and treatment of LBP. In this manuscript, we have systematically reviewed the available literature on the use of computer vision in the diagnosis and treatment of LBP. A systematic search of the PubMed electronic database was performed. The search strategy was set as combinations of the following keywords: "Artificial Intelligence", "Feature Extraction", "Segmentation", "Computer Vision", "Machine Learning", "Deep Learning", "Neural Network", "Low Back Pain", "Lumbar". Results: The search returned a total of 558 articles. After careful evaluation of the abstracts, 358 were excluded, whereas 124 papers were excluded after full-text examination, taking the number of eligible articles to 76. The main applications of computer vision in LBP include feature extraction and segmentation, which are usually followed by further tasks. Most recent methods use deep learning models rather than digital image processing techniques. The best performing methods for segmentation of vertebrae, intervertebral discs, spinal canal and lumbar muscles achieve Sørensen-Dice scores greater than 90%, whereas studies focusing on localization and identification of structures collectively showed an accuracy greater than 80%. Future advances in artificial intelligence are expected to increase systems' autonomy and reliability, thus providing even more effective tools for the diagnosis and treatment of LBP.
Collapse
Affiliation(s)
- Federico D’Antoni
- Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy; (F.D.); (L.V.)
| | - Fabrizio Russo
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy; (L.A.); (G.V.); (R.P.); (V.D.)
- Correspondence: (F.R.); (M.M.)
| | - Luca Ambrosio
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy; (L.A.); (G.V.); (R.P.); (V.D.)
| | - Luca Vollero
- Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy; (F.D.); (L.V.)
| | - Gianluca Vadalà
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy; (L.A.); (G.V.); (R.P.); (V.D.)
| | - Mario Merone
- Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy; (F.D.); (L.V.)
- Correspondence: (F.R.); (M.M.)
| | - Rocco Papalia
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy; (L.A.); (G.V.); (R.P.); (V.D.)
| | - Vincenzo Denaro
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy; (L.A.); (G.V.); (R.P.); (V.D.)
| |
Collapse
|
32
|
Tao R, Liu W, Zheng G. Spine-transformers: Vertebra labeling and segmentation in arbitrary field-of-view spine CTs via 3D transformers. Med Image Anal 2021; 75:102258. [PMID: 34670147 DOI: 10.1016/j.media.2021.102258] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Revised: 08/10/2021] [Accepted: 09/28/2021] [Indexed: 11/26/2022]
Abstract
In this paper, we address the problem of fully automatic labeling and segmentation of 3D vertebrae in arbitrary Field-Of-View (FOV) CT images. We propose a deep learning-based two-stage solution to tackle these two problems. More specifically, in the first stage, the challenging vertebra labeling problem is solved via a novel transformer-based 3D object detector that views automatic detection of vertebrae in arbitrary FOV CT scans as a one-to-one set prediction problem. The main components of the new method, called Spine-Transformers, are a one-to-one set-based global loss that enforces unique predictions and a lightweight 3D transformer architecture equipped with a skip connection and learnable positional embeddings for the encoder and decoder, respectively. We additionally propose an inscribed sphere-based object detector to replace the regular box-based object detector for better handling of volume orientation variations. Our method reasons about the relationships of different levels of vertebrae and the global volume context to directly infer all vertebrae in parallel. In the second stage, the segmentation of the identified vertebrae and the refinement of the detected centers are performed by training a single multi-task encoder-decoder network for all vertebrae, as the network does not need to identify which vertebra it is working on. The two tasks share a common encoder path but have different decoder paths. Comprehensive experiments were conducted on two public datasets and one in-house dataset. The experimental results demonstrate the efficacy of the proposed approach.
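The one-to-one set prediction idea can be illustrated with the generic bipartite matching used by set-based detectors: predicted vertebra centres are assigned to ground-truth centres by minimising a pairwise cost with the Hungarian algorithm. The sketch below uses a plain Euclidean-distance cost and invented coordinates; it is not the paper's full loss.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions(pred_centers, gt_centers):
    """One-to-one assignment of predicted to ground-truth vertebra centres
    by minimising the total Euclidean distance (Hungarian algorithm)."""
    cost = np.linalg.norm(pred_centers[:, None, :] - gt_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].sum()

gt = np.array([[40.0, 50.0, 30.0], [40.0, 52.0, 62.0], [41.0, 55.0, 95.0]])    # mm
pred = gt[::-1] + np.random.default_rng(2).normal(scale=2.0, size=gt.shape)    # shuffled, noisy
pairs, total = match_predictions(pred, gt)
print(pairs, round(total, 1))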
Collapse
Affiliation(s)
- Rong Tao
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No.800 Dongchuan Road, Shanghai 200240, China
| | - Wenyong Liu
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University) of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China.
| | - Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No.800 Dongchuan Road, Shanghai 200240, China.
| |
Collapse
|
33
|
Li Q, Du Z, Yu H. Precise laminae segmentation based on neural network for robot-assisted decompressive laminectomy. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 209:106333. [PMID: 34391999 DOI: 10.1016/j.cmpb.2021.106333] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 07/29/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Decompressive laminectomy is one of the most common operations to treat lumbar spinal stenosis by removing the laminae above the spinal nerve. Recently, an increasing number of robots have been deployed during the surgical process to reduce the burden on surgeons and to reduce complications. However, for robot-assisted decompressive laminectomy, an accurate 3D model of the laminae from a CT image is highly desirable. The purpose of this paper is to precisely segment the laminae with less computation. METHODS We propose a two-stage neural network, SegRe-Net. In the first stage, the entire intraoperative CT image is input to obtain a coarse, low-resolution segmentation of the vertebrae and a probability map of the laminar centers. The second stage is trained to refine the segmentation of the laminae. RESULTS Three publicly available datasets were used to train and validate the models. The experimental results demonstrated the effectiveness of the proposed network for laminar segmentation, with an average Dice coefficient of 96.38% and an average symmetric surface distance of 0.097 mm. CONCLUSION The proposed two-stage network can achieve better results than the baseline models in the laminae segmentation task, with less computation and fewer learnable parameters. Our method improves the accuracy of laminar models and reduces the image processing time. It can be used to provide a more precise planning trajectory and may promote the clinical application of robot-assisted decompressive laminectomy surgery.
Collapse
Affiliation(s)
- Qian Li
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
| | - Zhijiang Du
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China.
| | - Hongjian Yu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China.
| |
Collapse
|
34
|
A Region-Based Deep Level Set Formulation for Vertebral Bone Segmentation of Osteoporotic Fractures. J Digit Imaging 2021; 33:191-203. [PMID: 31011954 DOI: 10.1007/s10278-019-00216-0] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023] Open
Abstract
Accurate segmentation of the vertebrae from medical images plays an important role in computer-aided diagnosis (CAD). It provides an initial and early diagnosis of various vertebral abnormalities to doctors and radiologists. Vertebrae segmentation is a very important but difficult task in medical imaging due to low-contrast imaging and noise. It becomes more challenging when dealing with fractured (osteoporotic) cases. This work is dedicated to addressing the challenging problem of vertebra segmentation. In the past, various vertebra segmentation techniques have been proposed. Recently, deep learning techniques have been introduced in biomedical image processing for segmentation and characterization of several abnormalities. These techniques are becoming popular for segmentation purposes due to their robustness and accuracy. In this paper, we present a novel combination of a traditional region-based level set with a deep learning framework in order to predict the shape of vertebral bones accurately, and thus to handle fractured cases efficiently. We term this novel framework "FU-Net"; it is a powerful and practical framework for handling fractured vertebrae segmentation efficiently. The proposed method was successfully evaluated on two different challenging datasets: (1) 20 CT scans, 15 healthy cases and 5 fractured cases, provided at the spine segmentation challenge CSI 2014; (2) 25 CT image datasets (both healthy and fractured cases) provided at the spine segmentation challenge CSI 2016, or xVertSeg.v1 challenge. We achieved promising results with the proposed technique, especially on fractured cases. The Dice score was found to be 96.4 ± 0.8% without fractured cases and 92.8 ± 1.9% with fractured cases in the CSI 2014 dataset (lumbar and thoracic). Similarly, the Dice score was 95.2 ± 1.9% on the 15-CT dataset (with given ground truths) and 95.4 ± 2.1% on the total 25-CT dataset for CSI 2016 (with 10 annotated CT datasets). The proposed technique outperformed other state-of-the-art techniques and, for the first time, handled the fractured cases efficiently.
Collapse
|
35
|
Sekuboyina A, Husseini ME, Bayat A, Löffler M, Liebl H, Li H, Tetteh G, Kukačka J, Payer C, Štern D, Urschler M, Chen M, Cheng D, Lessmann N, Hu Y, Wang T, Yang D, Xu D, Ambellan F, Amiranashvili T, Ehlke M, Lamecker H, Lehnert S, Lirio M, Olaguer NPD, Ramm H, Sahu M, Tack A, Zachow S, Jiang T, Ma X, Angerman C, Wang X, Brown K, Kirszenberg A, Puybareau É, Chen D, Bai Y, Rapazzo BH, Yeah T, Zhang A, Xu S, Hou F, He Z, Zeng C, Xiangshang Z, Liming X, Netherton TJ, Mumme RP, Court LE, Huang Z, He C, Wang LW, Ling SH, Huỳnh LD, Boutry N, Jakubicek R, Chmelik J, Mulay S, Sivaprakasam M, Paetzold JC, Shit S, Ezhov I, Wiestler B, Glocker B, Valentinitsch A, Rempfler M, Menze BH, Kirschke JS. VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images. Med Image Anal 2021; 73:102166. [PMID: 34340104 DOI: 10.1016/j.media.2021.102166] [Citation(s) in RCA: 113] [Impact Index Per Article: 28.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Revised: 06/25/2021] [Accepted: 07/06/2021] [Indexed: 11/25/2022]
Abstract
Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared and 4505 vertebrae have individually been annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.
Collapse
Affiliation(s)
- Anjany Sekuboyina
- Department of Informatics, Technical University of Munich, Germany; Munich School of BioEngineering, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany.
| | - Malek E Husseini
- Department of Informatics, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany
| | - Amirhossein Bayat
- Department of Informatics, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany
| | | | - Hans Liebl
- Department of Neuroradiology, Klinikum Rechts der Isar, Germany
| | - Hongwei Li
- Department of Informatics, Technical University of Munich, Germany
| | - Giles Tetteh
- Department of Informatics, Technical University of Munich, Germany
| | - Jan Kukačka
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Germany
| | - Christian Payer
- Institute of Computer Graphics and Vision, Graz University of Technology, Austria
| | - Darko Štern
- Gottfried Schatz Research Center: Biophysics, Medical University of Graz, Austria
| | - Martin Urschler
- School of Computer Science, The University of Auckland, New Zealand
| | - Maodong Chen
- Computer Vision Group, iFLYTEK Research South China, China
| | - Dalong Cheng
- Computer Vision Group, iFLYTEK Research South China, China
| | - Nikolas Lessmann
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center Nijmegen, The Netherlands
| | - Yujin Hu
- Shenzhen Research Institute of Big Data, China
| | - Tianfu Wang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, China
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | - Xin Wang
- Department of Electronic Engineering, Fudan University, China; Department of Radiology, University of North Carolina at Chapel Hill, USA
| | | | | | | | | | | | | | | | | | | | - Feng Hou
- Institute of Computing Technology, Chinese Academy of Sciences, China
| | | | | | - Zheng Xiangshang
- College of Computer Science and Technology, Zhejiang University, China; Real Doctor AI Research Centre, Zhejiang University, China
| | - Xu Liming
- College of Computer Science and Technology, Zhejiang University, China
| | | | | | | | - Zixun Huang
- Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, China
| | - Chenhang He
- Department of Computing, The Hong Kong Polytechnic University, China
| | - Li-Wen Wang
- Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, China
| | - Sai Ho Ling
- The School of Biomedical Engineering, University of Technology Sydney, Australia
| | - Lê Duy Huỳnh
- EPITA Research and Development Laboratory (LRDE), France
| | - Nicolas Boutry
- EPITA Research and Development Laboratory (LRDE), France
| | - Roman Jakubicek
- Department of Biomedical Engineering, Brno University of Technology, Czech Republic
| | - Jiri Chmelik
- Department of Biomedical Engineering, Brno University of Technology, Czech Republic
| | - Supriti Mulay
- Indian Institute of Technology Madras, India; Healthcare Technology Innovation Centre, India
| | | | | | - Suprosanna Shit
- Department of Informatics, Technical University of Munich, Germany
| | - Ivan Ezhov
- Department of Informatics, Technical University of Munich, Germany
| | | | - Ben Glocker
- Department of Computing, Imperial College London, UK
| | | | - Markus Rempfler
- Friedrich Miescher Institute for Biomedical Engineering, Switzerland
| | - Björn H Menze
- Department of Informatics, Technical University of Munich, Germany; Department for Quantitative Biomedicine, University of Zurich, Switzerland
| | - Jan S Kirschke
- Department of Neuroradiology, Klinikum Rechts der Isar, Germany
| |
Collapse
|
36
|
Peng W, Li L, Liang L, Ding H, Zang L, Yuan S, Wang G. A convenient and stable vertebrae instance segmentation method for transforaminal endoscopic surgery planning. Int J Comput Assist Radiol Surg 2021; 16:1263-1276. [PMID: 34117989 DOI: 10.1007/s11548-021-02429-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 06/01/2021] [Indexed: 11/25/2022]
Abstract
PURPOSE Transforaminal endoscopic surgery (TES) is effective for the treatment of intervertebral disc-related diseases. To avoid injury to critical structures, preoperative planning is required to find a safe working channel. Therefore, accurate patient-specific vertebral segmentation is important. The purpose of this work is to develop a convenient, stable and feasible lumbar vertebrae segmentation method for TES planning. METHODS Based on the chain structure of the spine, an interactive dual-output vertebrae instance segmentation network was designed to segment the specific vertebrae in CT images. First, an initialization locator module was set up to provide an initial locating box. Then the dual-output network was designed to segment two adjacent vertebrae inside the locating box. Finally, iteration was performed until all the expected vertebrae were segmented. RESULTS Verification on a reconstructed public dataset showed that the vertebral segmentation Dice coefficient was 96.8 ± 1.2% and the average surface distance (ASD) was 0.25 ± 0.10 mm. For the intervertebral foramen (IVF) region, the Dice coefficient was 96.1 ± 1.5% and the ASD was 0.29 ± 0.10 mm. For the IVF-forming region, the Dice coefficient was 93.4 ± 3.1% and the ASD was 0.28 ± 0.13 mm. The evaluation on a private dataset showed that more than 90% of the segmentations were suitable for TES planning. For the IVF region, the Dice coefficient was 94.4 ± 1.8% and the ASD was 0.71 ± 0.49 mm. CONCLUSION This work provides a convenient, stable and feasible segmentation method for the lumbar vertebrae, the IVF region, and the IVF-forming region. The segmentation can meet the requirements for TES planning.
Collapse
Affiliation(s)
- Wuke Peng
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Room C249, Beijing, 100084, People's Republic of China
| | - Liang Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Room C249, Beijing, 100084, People's Republic of China
| | - Libin Liang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Room C249, Beijing, 100084, People's Republic of China
| | - Hui Ding
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Room C249, Beijing, 100084, People's Republic of China
| | - Lei Zang
- Department of Orthopedics, Beijing Chaoyang Hospital, Capital Medical University, Beijing, 100043, People's Republic of China
| | - Shuo Yuan
- Department of Orthopedics, Beijing Chaoyang Hospital, Capital Medical University, Beijing, 100043, People's Republic of China
| | - Guangzhi Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Room C249, Beijing, 100084, People's Republic of China.
| |
Collapse
|
37
|
Segmentation and Identification of Vertebrae in CT Scans Using CNN, k-Means Clustering and k-NN. INFORMATICS 2021. [DOI: 10.3390/informatics8020040] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
The accurate segmentation and identification of vertebrae provide the foundation for spine analysis, including the assessment of fractures, malfunctions and other visual insights. The large-scale vertebrae segmentation challenge (VerSe), organized as a competition at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference, is aimed at vertebrae segmentation and labeling. In this paper, we propose a framework that addresses the tasks of vertebrae segmentation and identification by exploiting both deep learning and classical machine learning methodologies. The proposed solution comprises two phases: a binary, fully automated segmentation of the whole spine, which exploits a 3D convolutional neural network, and a semi-automated procedure that locates vertebra centroids using traditional machine learning algorithms. Unlike other approaches, the proposed method comes with the added advantage of requiring no single-vertebra-level annotations for training. A dataset of 214 CT scans was extracted from the VerSe'20 challenge data for training, validating and testing the proposed approach. In addition, to evaluate the robustness of the segmentation and labeling algorithms, 12 CT scans from subjects affected by severe, moderate and mild scoliosis were collected from a local medical clinic. On the designated test set from the VerSe'20 data, the binary spine segmentation stage yielded a binary Dice coefficient of 89.17%, while the vertebra identification stage reached an average multi-class Dice coefficient of 90.09%. In order to ensure the reproducibility of the algorithms developed here, the code has been made publicly available.
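The semi-automated centroid-locating phase combines classical algorithms such as k-means and k-NN; the sketch below shows only the k-means idea, clustering foreground voxel coordinates of a toy binary spine mask into one centroid per vertebra. The blob geometry and the assumption that the vertebra count is known are illustrative simplifications.

import numpy as np
from sklearn.cluster import KMeans

def vertebra_centroids(spine_mask, n_vertebrae):
    """Estimate vertebra centroids by k-means clustering of foreground voxel
    coordinates, returned in cranio-caudal order along the first axis."""
    coords = np.argwhere(spine_mask)
    km = KMeans(n_clusters=n_vertebrae, n_init=10, random_state=0).fit(coords)
    centers = km.cluster_centers_
    return centers[np.argsort(centers[:, 0])]

# toy binary "spine": five blobs stacked along the z axis
mask = np.zeros((100, 40, 40), bool)
for k in range(5):
    mask[10 + 18 * k:22 + 18 * k, 14:26, 14:26] = True
print(np.round(vertebra_centroids(mask, 5), 1))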
Collapse
|
38
|
Pijpker PAJ, Oosterhuis TS, Witjes MJH, Faber C, van Ooijen PMA, Kosinka J, Kuijlen JMA, Groen RJM, Kraeima J. A semi-automatic seed point-based method for separation of individual vertebrae in 3D surface meshes: a proof of principle study. Int J Comput Assist Radiol Surg 2021; 16:1447-1457. [PMID: 34043144 PMCID: PMC8354998 DOI: 10.1007/s11548-021-02407-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Accepted: 05/11/2021] [Indexed: 11/25/2022]
Abstract
PURPOSE The purpose of this paper is to present and validate a new semi-automated 3D surface mesh segmentation approach that streamlines the laborious separation of individual vertebrae in the spinal virtual surgical planning workflow, and to make a direct comparison of accuracy and segmentation time with the current standard segmentation method. METHODS The proposed semi-automatic method uses the 3D bone surface derived from CT image data for seed point-based 3D mesh partitioning. The accuracy of the proposed method was evaluated on a representative patient dataset. In addition, the influence of the number of seed points used was studied. The investigators analyzed whether there was a reduction in segmentation time when compared to manual segmentation. Surface-to-surface accuracy measurements were applied to assess the concordance with the manual segmentation. RESULTS The results demonstrated a statistically significant reduction in segmentation time, while maintaining high accuracy compared to the manual segmentation. A considerably smaller error was found when increasing the number of seed points. Anatomical regions that include articulating areas tend to show the highest errors, while the posterior laminar surface yielded an almost negligible error. CONCLUSION A novel seed point-initiated, surface-based segmentation method for the laborious separation of individual vertebrae was presented. This proof-of-principle study demonstrated the accuracy of the proposed method on a clinical CT image dataset and its feasibility for spinal virtual surgical planning applications.
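Surface-to-surface accuracy of the kind used above is typically computed as nearest-neighbour distances between the vertex sets of two meshes. The sketch below does this with a k-d tree on invented point clouds; the specific tooling used in the study is not stated in the abstract, so this is only a generic illustration.

import numpy as np
from scipy.spatial import cKDTree

def surface_to_surface(points_a, points_b):
    """Mean and 95th-percentile nearest-neighbour distance from surface A to surface B."""
    d, _ = cKDTree(points_b).query(points_a)
    return d.mean(), np.percentile(d, 95)

# toy vertex clouds standing in for two surface meshes of the same vertebra (mm)
rng = np.random.default_rng(3)
manual = rng.uniform(0, 40, size=(2000, 3))
semi_auto = manual + rng.normal(scale=0.3, size=manual.shape)
print([round(v, 2) for v in surface_to_surface(semi_auto, manual)])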
Collapse
Affiliation(s)
- Peter A J Pijpker
- 3D-Lab and Department of Neurosurgery, University of Groningen, University Medical Center Groningen, Hanzeplein 1, 9713, GZ, Groningen, The Netherlands.
| | - Tim S Oosterhuis
- 3D-Lab and Bernoulli Institute, University of Groningen, University Medical Center Groningen, Hanzeplein 1, 9713, GZ, Groningen, The Netherlands
| | - Max J H Witjes
- 3D-Lab and Department of Oral and Maxillofacial Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Chris Faber
- Department of Orthopedic Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Peter M A van Ooijen
- Department of Radiation Oncology and Data Science Center in Health, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Jiří Kosinka
- Bernoulli Institute, University of Groningen, Groningen, The Netherlands
| | - Jos M A Kuijlen
- Department of Neurosurgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Rob J M Groen
- Department of Neurosurgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Joep Kraeima
- 3D-Lab and Department of Oral and Maxillofacial Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| |
Collapse
|
39
|
Gong H, Liu J, Li S, Chen B. Axial-SpineGAN: simultaneous segmentation and diagnosis of multiple spinal structures on axial magnetic resonance imaging images. Phys Med Biol 2021; 66. [PMID: 33887718 DOI: 10.1088/1361-6560/abfad9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Accepted: 04/22/2021] [Indexed: 11/12/2022]
Abstract
Providing a simultaneous segmentation and diagnosis of the spinal structures on axial magnetic resonance imaging (MRI) images has significant value for subsequent pathological analyses and clinical treatments. However, this task remains challenging, owing to the significant structural diversity, subtle differences between normal and abnormal structures, implicit borders, and insufficient training data. In this study, we propose an innovative network framework called 'Axial-SpineGAN' comprising a generator, a discriminator, and a diagnostor, aiming to address the above challenges and to achieve simultaneous segmentation and disease diagnosis for discs, neural foramina, thecal sacs, and posterior arches on axial MRI images. The generator employs an enhancing feature fusion module to generate discriminative features, i.e. to address the challenges regarding the significant structural diversity and subtle differences between normal and abnormal structures. An enhancing border alignment module is employed to obtain an accurate pixel classification of the implicit borders. The discriminator employs an adversarial learning module to effectively strengthen the higher-order spatial consistency, and to avoid overfitting owing to insufficient training data. The diagnostor employs an automated diagnosis module to provide automated recognition of spinal diseases. Extensive experiments demonstrate that these modules have positive effects on improving the segmentation and diagnosis accuracies. Additionally, the results indicate that Axial-SpineGAN has the highest Dice similarity coefficient (94.9% ± 1.8%) in terms of segmentation accuracy and the highest accuracy rate (93.9% ± 2.6%) in terms of diagnosis accuracy, thereby outperforming existing state-of-the-art methods. Therefore, our proposed Axial-SpineGAN is effective and shows potential as a clinical tool for providing automated segmentation and disease diagnosis for multiple spinal structures on MRI images.
Collapse
Affiliation(s)
- Hao Gong
- Beijing Institute of Technology, School of Mechanical Engineering, 5 South Zhongguancun Street, Haidian District, Beijing, 100081, People's Republic of China
| | - Jianhua Liu
- Beijing Institute of Technology, School of Mechanical Engineering, 5 South Zhongguancun Street, Haidian District, Beijing, 100081, People's Republic of China
| | - Shuo Li
- Western University, Department of Medical Imaging and Medical Biophysics, London, ON, N6A 5W9, Canada
| | - Bo Chen
- Western University, School of Health Science, London, ON, N6A 4V2, Canada
| |
Collapse
|
40
|
Eltes PE, Kiss L, Bereczki F, Szoverfi Z, Techens C, Jakab G, Hajnal B, Varga PP, Lazary A. A novel three-dimensional volumetric method to measure indirect decompression after percutaneous cement discoplasty. J Orthop Translat 2021; 28:131-139. [PMID: 33898249 PMCID: PMC8050383 DOI: 10.1016/j.jot.2021.02.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Revised: 01/07/2021] [Accepted: 02/10/2021] [Indexed: 11/24/2022] Open
Abstract
PURPOSE Percutaneous cement discoplasty (PCD) is a minimally invasive surgical option to treat patients who suffer from the consequences of advanced disc degeneration. As current two-dimensional methods can inadequately capture the change in the complex 3D anatomy of the spinal segment, our aim was to develop and apply a volumetric method to measure the geometrical change in the surgically treated segments. METHODS Prospective clinical and radiological data of 10 patients who underwent single- or multilevel PCD were collected. Pre- and postoperative CT scan-based 3D reconstructions were performed. The injected PMMA (polymethylmethacrylate) induced lifting of the cranial vertebra, and the resulting volumetric change was measured by subtracting the geometry of the spinal canal from a cylinder predefined on the pre- and postoperative scans. The associations of the PMMA geometry and the volumetric change of the spinal canal with clinical outcome were determined. RESULTS The change in spinal canal volume (ΔV) due to the surgery proved to be significant (mean ΔV = 2266.5 ± 1172.2 mm3, n = 16; p = 0.0004). A significant, positive correlation was found between ΔV and the volume and surface area of the injected PMMA. A strong, significant association between pain intensity (low back and leg pain) and the magnitude of the volumetric increase of the spinal canal was shown (ρ = 0.772, p = 0.009 for LBP and ρ = 0.693, p = 0.026 for LP). CONCLUSION The developed method is accurate, reproducible, and applicable to the analysis of any other spinal surgical method. The volume and surface area of the injected PMMA have predictive power for the extent of the indirect spinal canal decompression: the larger the ΔV, the greater the clinical benefit achieved with the PCD procedure. THE TRANSLATIONAL POTENTIAL OF THIS ARTICLE The developed method has the potential to be integrated into clinical software to evaluate the efficacy of different surgical procedures based on an indirect decompression effect, such as PCD, anterior lumbar interbody fusion (ALIF), lateral lumbar interbody fusion (LLIF), oblique lumbar interbody fusion (OLIF), and extreme lateral interbody fusion (XLIF). Intraoperative use of the method will allow the surgeon to respond if the decompression does not reach the desired level.
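A minimal sketch of the volumetric idea, assuming binary pre- and postoperative spinal-canal masks and a predefined reference cylinder, is shown below; the masks, voxel size, and clinical scores are synthetic stand-ins, not the study data:

```python
# Count spinal-canal voxels inside a fixed reference cylinder before and after surgery,
# convert to mm^3, and correlate the change (delta V) with clinical improvement.
import numpy as np
from scipy.stats import spearmanr

def canal_volume_mm3(canal_mask, cylinder_mask, voxel_mm=(0.5, 0.5, 0.5)):
    """Volume of the canal segmentation restricted to the reference cylinder."""
    voxels = np.logical_and(canal_mask, cylinder_mask).sum()
    return voxels * np.prod(voxel_mm)

rng = np.random.default_rng(0)
cyl = np.ones((64, 64, 64), bool)                              # toy reference cylinder
pre = rng.random((64, 64, 64)) > 0.6                           # toy preoperative canal mask
post = np.logical_or(pre, rng.random((64, 64, 64)) > 0.9)      # slightly larger postop canal

delta_v = canal_volume_mm3(post, cyl) - canal_volume_mm3(pre, cyl)
print(f"Delta V = {delta_v:.1f} mm^3")

# Association of volumetric gain with pain improvement (synthetic example values)
delta_vs = [1200, 1800, 2400, 3100, 900, 2700, 1500, 3300, 2100, 2600]
pain_improvement = [2, 3, 4, 5, 1, 4, 2, 6, 3, 5]
rho, p = spearmanr(delta_vs, pain_improvement)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```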
Collapse
Affiliation(s)
- Peter Endre Eltes
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
- Department of Spine Surgery, Semmelweis University, Budapest, Hungary
| | - Laszlo Kiss
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
- School of PhD Studies, Semmelweis University, Budapest, Hungary
| | - Ferenc Bereczki
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
- School of PhD Studies, Semmelweis University, Budapest, Hungary
| | - Zsolt Szoverfi
- National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
| | - Chloé Techens
- Biomechanics Lab, Department of Industrial Engineering, Alma Mater Studiorum, Universita di Bologna, Italy
| | - Gabor Jakab
- National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
| | - Benjamin Hajnal
- In Silico Biomechanics Laboratory, National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
| | - Peter Pal Varga
- National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
| | - Aron Lazary
- Department of Spine Surgery, Semmelweis University, Budapest, Hungary
- National Center for Spinal Disorders, Buda Health Center, Budapest, Hungary
| |
Collapse
|
41
|
Rajasenbagam T, Jeyanthi S, Pandian JA. Detection of pneumonia infection in lungs from chest X-ray images using deep convolutional neural network and content-based image retrieval techniques. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2021:1-8. [PMID: 33777251 PMCID: PMC7985744 DOI: 10.1007/s12652-021-03075-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Accepted: 03/02/2021] [Indexed: 06/12/2023]
Abstract
In this research, a deep convolutional neural network (CNN) was proposed to detect pneumonia infection in the lungs from chest X-ray images. The proposed deep CNN models were trained on a pneumonia chest X-ray dataset containing 12,000 chest X-ray images of infected and non-infected lungs. The dataset was preprocessed and developed from the ChestX-ray8 dataset. A content-based image retrieval technique was used to annotate the images in the dataset using metadata and image content. Data augmentation techniques were used to increase the number of images in each class; basic manipulation techniques and a deep convolutional generative adversarial network (DCGAN) were used to create the augmented images. The VGG19 network was used to develop the proposed deep CNN model, which achieved a classification accuracy of 99.34 percent on unseen chest X-ray images. The performance of the proposed deep CNN was compared with state-of-the-art transfer learning techniques such as AlexNet, VGG16Net, and InceptionNet, and the comparison shows that the classification performance of the proposed deep CNN model exceeded that of the other techniques.
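A minimal sketch of the kind of VGG19-based transfer-learning classifier described above is shown below; the weights, head size, and input handling are illustrative assumptions rather than the paper's exact configuration:

```python
# VGG19 backbone with a two-class head for pneumonia vs. normal chest X-rays.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights=None)            # pretrained weights could be loaded instead
model.classifier[6] = nn.Linear(4096, 2)      # replace the final layer with a 2-class head

x = torch.randn(1, 3, 224, 224)               # one X-ray resized to 224x224, 3-channel
logits = model(x)
print(logits.shape)                           # torch.Size([1, 2])
```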
Collapse
Affiliation(s)
- T. Rajasenbagam
- Department of CSE, Government College of Technology, Coimbatore, India
| | - S. Jeyanthi
- Department of CSE, PSNA College of Engineering and Technology, Dindigul, India
| | - J. Arun Pandian
- Department of CSE, Vel Tech Rangarajan Dr.Sagunthala R&D Institute of Science and Technology, Avadi, India
| |
Collapse
|
42
|
Zhang L, Wang H. A novel segmentation method for cervical vertebrae based on PointNet++ and converge segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105798. [PMID: 33545639 DOI: 10.1016/j.cmpb.2020.105798] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 10/10/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND Cervical spine instability is the key pathogenic factor for cervical spondylosis, which may easily cause compression of the cervical spinal cord and nerves, numbness, weakness, and even paralysis of the limbs. Reconstruction with internal fixation of the cervical spine is of great therapeutic significance, but it is a high-risk and difficult procedure that requires precise planning. The high similarity between vertebrae may interfere with automatic operation planning; therefore, the segmentation of individual vertebrae is of great significance. METHODS Our segmentation algorithm has three parts. First, an adaptive threshold filter segments the cervical vertebral tissue structure from CT images. Second, single-vertebra segmentation based on PointNet++ is introduced to segment the cervical spine. Finally, converge segmentation based on edge information is used to clearly distinguish the edges of adjacent vertebrae and improve segmentation accuracy. RESULTS Our approach improved the accuracy of the system to 96.15% and achieved the highest reported average score on this dataset. We compared the results of the CNN and PointNet methods on a separate dataset of 240 CT scans with 18 classes and achieved significantly higher performance for any given vertebra. Our experiments illustrate the promise and robustness of PointNet++-based segmentation of medical images. CONCLUSION The proposed method has better classification performance for cervical spine segmentation, segmenting a three-dimensional vertebral body directly and effectively. Furthermore, the precise segmentation of a single vertebral body can be used in automatic biomechanical analysis, computer-aided diagnosis, and other applications, improving the level of automation in the treatment of cervical spondylosis.
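A rough sketch of the first two stages, assuming an intensity threshold as a stand-in for the adaptive filter and a simplified shared per-point MLP in place of full PointNet++, might look as follows:

```python
# Threshold the CT volume into a bone point cloud, then assign each point a vertebra
# label with a shared per-point MLP. Thresholds and sizes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def volume_to_points(ct, hu_threshold=250):
    """Thresholding stand-in: keep voxel coordinates above a bone HU threshold."""
    idx = np.argwhere(ct > hu_threshold)
    return torch.tensor(idx, dtype=torch.float32)        # (N, 3) xyz coordinates

class PerPointClassifier(nn.Module):
    def __init__(self, n_vertebrae=7):                   # C1-C7
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_vertebrae),
        )
    def forward(self, pts):
        return self.mlp(pts)                              # (N, n_vertebrae) logits

ct = np.random.normal(0, 400, size=(64, 64, 64))          # toy CT volume in HU
points = volume_to_points(ct)
labels = PerPointClassifier()(points).argmax(dim=1)
print(points.shape, labels.shape)
```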
Collapse
Affiliation(s)
- Lei Zhang
- Spine Surgery Unit, Shengjing Hospital of China Medical University, Shenyang, 110004 P.R.China
| | - Huan Wang
- Spine Surgery Unit, Shengjing Hospital of China Medical University; Address: No.36 Sanhao Street, Heping District, Shenyang, 110004, Liaoning Province, P.R.China.
| |
Collapse
|
43
|
Perez AA, Pickhardt PJ, Elton DC, Sandfort V, Summers RM. Fully automated CT imaging biomarkers of bone, muscle, and fat: correcting for the effect of intravenous contrast. Abdom Radiol (NY) 2021; 46:1229-1235. [PMID: 32948910 DOI: 10.1007/s00261-020-02755-5] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Revised: 08/31/2020] [Accepted: 09/03/2020] [Indexed: 01/28/2023]
Abstract
PURPOSE Fully automated CT-based algorithms for quantifying bone, muscle, and fat have been validated for unenhanced abdominal scans. The purpose of this study was to determine and correct for the effect of intravenous (IV) contrast on these automated body composition measures. MATERIALS AND METHODS Initial study cohort consisted of 1211 healthy adults (mean age, 45.2 years; 733 women) undergoing abdominal CT for potential renal donation. Multiphasic CT protocol consisted of pre-contrast, arterial, and parenchymal phases. Fully automated CT-based algorithms for quantifying bone mineral density (BMD, L1 trabecular HU), muscle area and density (L3-level MA and M-HU), and fat (visceral/subcutaneous (V/S) fat ratio) were applied to pre-contrast and parenchymal phases. Effect of IV contrast upon these body composition measures was analyzed. Square of the Pearson correlation coefficient (r2) was generated for each comparison. RESULTS Mean changes (± SD) in L1 BMD, L3-level MA and M-HU, and V/S fat ratio were 26.7 ± 27.2 HU, 2.9 ± 10.2 cm2, 18.8 ± 6.0 HU, - 0.1 ± 0.2, respectively. Good linear correlation between pre- and post-contrast values was observed for all automated measures: BMD (pre = 0.87 × post; r2 = 0.72), MA (pre = 0.98 × post; r2 = 0.92), M-HU (pre = 0.75 × post + 5.7; r2 = 0.75), and V/S (pre = 1.11 × post; r2 = 0.94); p < 0.001 for all r2 values. There were no significant trends according to patient age or gender that required further correction. CONCLUSION Fully automated quantitative tissue measures of bone, muscle, and fat at contrast-enhanced abdominal CT can be correlated with non-contrast equivalents using simple, linear relationships. These findings will facilitate evaluation of mixed CT cohorts involving larger patient populations and could greatly expand the potential for opportunistic screening.
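The reported conversions can be applied directly as simple linear corrections; a small worked example using the coefficients quoted above (with made-up input values) is shown below:

```python
# Map contrast-enhanced (parenchymal-phase) body composition measures back to
# pre-contrast equivalents using the linear relationships reported in the abstract.
def to_precontrast(bmd_hu, muscle_area_cm2, muscle_hu, vs_ratio):
    return {
        "L1_BMD_HU":      0.87 * bmd_hu,
        "L3_muscle_area": 0.98 * muscle_area_cm2,
        "L3_muscle_HU":   0.75 * muscle_hu + 5.7,
        "V_S_fat_ratio":  1.11 * vs_ratio,
    }

# Example with invented post-contrast measurements
print(to_precontrast(bmd_hu=180.0, muscle_area_cm2=140.0, muscle_hu=45.0, vs_ratio=0.8))
```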
Collapse
Affiliation(s)
- Alberto A Perez
- The University of Wisconsin School of Medicine & Public Health, Madison, WI, USA
| | - Perry J Pickhardt
- The University of Wisconsin School of Medicine & Public Health, Madison, WI, USA.
- Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center, 600 Highland Ave., Madison, WI, 53792-3252, USA.
| | - Daniel C Elton
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
| | - Veit Sandfort
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
| | - Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
| |
Collapse
|
44
|
Kim KC, Cho HC, Jang TJ, Choi JM, Seo JK. Automatic detection and segmentation of lumbar vertebrae from X-ray images for compression fracture evaluation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105833. [PMID: 33250283 DOI: 10.1016/j.cmpb.2020.105833] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/28/2019] [Accepted: 11/04/2020] [Indexed: 06/12/2023]
Abstract
For compression fracture detection and evaluation, an automatic X-ray image segmentation technique that combines deep-learning and level-set methods is proposed. Automatic segmentation is much more difficult for X-ray images than for CT or MRI images because they contain overlapping shadows of thoracoabdominal structures, including the lungs, bowel gas, and other bony structures such as the ribs. Additional difficulties include unclear object boundaries, the complex shape of the vertebra, inter-patient variability, and variations in image contrast. Accordingly, a structured hierarchical segmentation method is presented that combines the advantages of two deep-learning methods. Pose-driven learning is used to selectively identify the five lumbar vertebrae in an accurate and robust manner. With knowledge of the vertebral positions, M-net is employed to segment each individual vertebra. Finally, fine-tuning segmentation is applied by combining the level-set method with the previously obtained segmentation results. The performance of the proposed method was validated on 160 lumbar X-ray images, resulting in a mean Dice similarity metric of 91.60±2.22%. The results show that the proposed method achieves accurate and robust identification of each lumbar vertebra and fine segmentation of each individual vertebra.
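For reference, the Dice similarity metric used above can be computed from binary masks as in the short sketch below (toy masks only):

```python
# Dice similarity coefficient between a predicted mask and a ground-truth mask.
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((128, 128), int); a[30:80, 40:90] = 1     # toy ground-truth vertebra
b = np.zeros((128, 128), int); b[32:82, 42:92] = 1     # toy prediction, slightly shifted
print(f"Dice = {dice(b, a):.3f}")
```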
Collapse
Affiliation(s)
- Kang Cheol Kim
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
| | - Hyun Cheol Cho
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
| | - Tae Jun Jang
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
| | | | - Jin Keun Seo
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
| |
Collapse
|
45
|
Li Q, Du Z, Yu H. Grinding trajectory generator in robot-assisted laminectomy surgery. Int J Comput Assist Radiol Surg 2021; 16:485-494. [PMID: 33507483 DOI: 10.1007/s11548-021-02316-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Accepted: 01/15/2021] [Indexed: 10/22/2022]
Abstract
PURPOSE Grinding trajectory planning for robot-assisted laminectomy is a complicated and cumbersome task. The purpose of this research is to automatically obtain the surgical target area from the CT image and, based on this, formulate a reasonable robotic grinding trajectory. METHODS We propose a deep neural network for laminae positioning, a trajectory generation strategy, and a grinding speed adjustment strategy. These algorithms obtain surgical information from CT images and automatically complete grinding trajectory planning. RESULTS The proposed laminae positioning network reaches a recognition accuracy of 95.7%, and the positioning error is only 1.12 mm in the desired direction. Simulated surgical planning on the public dataset achieved the expected results. In a set of comparative robotic grinding experiments, the trials using the speed adjustment algorithm obtained a smoother grinding force. CONCLUSION Our method can precisely and automatically extract laminar centers from CT images to formulate a reasonable surgical trajectory plan. It simplifies the surgical planning process and reduces the time needed for surgeons to perform such a cumbersome operation manually.
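A purely hypothetical illustration of a force-based speed-adjustment rule in the spirit of the strategy described above is shown below; the thresholds and gains are invented for the sketch and are not the authors' controller:

```python
# Reduce the feed rate when the measured grinding force rises above a threshold and
# restore it when the force falls back, producing a smoother grinding force profile.
def adjust_feed_rate(force_n, feed_mm_s, f_high=8.0, f_low=4.0,
                     slow_factor=0.5, max_feed=2.0):
    if force_n > f_high:
        return max(feed_mm_s * slow_factor, 0.1)        # back off when resistance spikes
    if force_n < f_low:
        return min(feed_mm_s / slow_factor, max_feed)   # speed up when cutting is easy
    return feed_mm_s

feed = 1.0
for f in [3.2, 5.1, 9.4, 10.2, 6.0, 3.5]:               # simulated force readings (N)
    feed = adjust_feed_rate(f, feed)
    print(f"force {f:4.1f} N -> feed {feed:.2f} mm/s")
```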
Collapse
Affiliation(s)
- Qian Li
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
| | - Zhijiang Du
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
| | - Hongjian Yu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China.
| |
Collapse
|
46
|
Kitahama Y, Shizuka H, Kimura R, Suzuki T, Ohara Y, Miyake H, Sakai K. Fluid Lubrication and Cooling Effects in Diamond Grinding of Human Iliac Bone. Medicina (Kaunas) 2021; 57:71. [PMID: 33466923 PMCID: PMC7830225 DOI: 10.3390/medicina57010071] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Revised: 01/03/2021] [Accepted: 01/07/2021] [Indexed: 11/16/2022]
Abstract
Background and Objectives: Although there has been research on bone cutting, there has been little research on bone grinding. This study reports the measurement results of an experimental system that simulated partial laminectomy in microscopic spine surgery. The purpose of this study was to examine fluid lubrication and cooling in bone grinding, the histological characteristics of the workpieces, and differences in grinding between manual operation and a milling machine. Materials and Methods: Thiel-fixed human iliac bones were used as workpieces. A neurosurgical microdrill was used as the drill system. The workpieces were fixed to a 4-component piezoelectric dynamometer and fixtures, which were used to measure the triaxial force during bone grinding. Grinding tasks were performed manually and with a small milling machine, with or without water. Results: In bone grinding with 4-mm diameter diamond burs and water, a reduction in the number of sudden increases in grinding resistance and a cooling effect of over 100 °C were confirmed. Conclusion: Manual grinding may enable control of the grinding speed and cutting depth while giving top priority to applying uniform tool torque to the workpiece. Observing the drill tip using a triaxial dynamometer in the quantification of surgery may provide useful data for the development of safety mechanisms to prevent a sudden deviation of the drill tip.
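As a minimal sketch of the measurement idea, assuming three synthetic force channels, the resultant grinding force and sudden increases in grinding resistance could be derived as follows:

```python
# Compute the resultant grinding force from the three dynamometer axes and flag
# sudden increases in grinding resistance as jumps between consecutive samples.
# The data and the jump threshold are synthetic, for illustration only.
import numpy as np

fx = np.array([1.0, 1.2, 1.1, 1.3, 4.8, 1.4])
fy = np.array([0.5, 0.6, 0.5, 0.7, 3.9, 0.6])
fz = np.array([2.0, 2.1, 2.0, 2.2, 6.5, 2.1])

resultant = np.sqrt(fx**2 + fy**2 + fz**2)
jumps = np.flatnonzero(np.diff(resultant) > 2.0) + 1    # samples with a sudden rise
print(resultant.round(2), "sudden increases at samples:", jumps)
```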
Collapse
Affiliation(s)
- Yoshihiro Kitahama
- Spine Center, Omaezaki Municipal Hospital, Shizuoka 437-1696, Japan;
- Medical Photonics Research Center, Hamamatsu University School of Medicine, Hamamatsu 431-3192, Japan;
- Correspondence:
| | - Hiroo Shizuka
- Department of Mechanical Engineering, Faculty of Engineering, Shizuoka University, Hamamatsu 422-8529, Japan; (H.S.); (R.K.); (K.S.)
| | - Ritsu Kimura
- Department of Mechanical Engineering, Faculty of Engineering, Shizuoka University, Hamamatsu 422-8529, Japan; (H.S.); (R.K.); (K.S.)
| | - Tomo Suzuki
- Spine Center, Omaezaki Municipal Hospital, Shizuoka 437-1696, Japan;
| | - Yukoh Ohara
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo 113-8421, Japan;
| | - Hideaki Miyake
- Medical Photonics Research Center, Hamamatsu University School of Medicine, Hamamatsu 431-3192, Japan;
| | - Katsuhiko Sakai
- Department of Mechanical Engineering, Faculty of Engineering, Shizuoka University, Hamamatsu 422-8529, Japan; (H.S.); (R.K.); (K.S.)
| |
Collapse
|
47
|
Han Z, Wei B, Xi X, Chen B, Yin Y, Li S. Unifying neural learning and symbolic reasoning for spinal medical report generation. Med Image Anal 2020; 67:101872. [PMID: 33142134 DOI: 10.1016/j.media.2020.101872] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 10/03/2020] [Accepted: 10/05/2020] [Indexed: 11/28/2022]
Abstract
Automated medical report generation in spine radiology, i.e., taking spinal medical images and directly creating radiologist-level diagnosis reports to support clinical decision making, is a novel yet fundamental problem in the domain of artificial intelligence in healthcare. It is extremely challenging because it is a complicated task involving both visual perception and high-level reasoning processes. In this paper, we propose the neural-symbolic learning (NSL) framework, which performs human-like learning by unifying deep neural learning and symbolic logical reasoning for spinal medical report generation. The NSL framework first employs deep neural learning to imitate human visual perception for detecting abnormalities of target spinal structures. Concretely, we design an adversarial graph network that interpolates a symbolic graph reasoning module into a generative adversarial network through embedding prior domain knowledge, achieving semantic segmentation of spinal structures with high complexity and variability. NSL then conducts human-like symbolic logical reasoning that realizes unsupervised causal effect analysis of the detected abnormalities through meta-interpretive learning. NSL finally fills these findings of target diseases into a unified template, achieving comprehensive medical report generation. When employed on a real-world clinical dataset, a series of empirical studies demonstrate its capacity for spinal medical report generation and show that our algorithm remarkably exceeds existing methods in the detection of spinal structures. These results indicate its potential as a clinical tool that contributes to computer-aided diagnosis.
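A toy sketch of the final template-filling step, with hard-coded stand-ins for the neural and symbolic stages and invented sentence templates, is shown below:

```python
# Slot detected abnormalities into report sentences. Finding names and templates are
# assumptions for illustration, not the paper's actual vocabulary or templates.
findings = {
    "L4-L5 disc": "herniation",
    "L5-S1 neural foramen": "stenosis",
    "thecal sac": None,          # None = no abnormality detected
}

templates = {
    "herniation": "There is evidence of disc herniation at the {site}.",
    "stenosis": "Narrowing consistent with stenosis is seen at the {site}.",
}

report = [templates[label].format(site=site)
          for site, label in findings.items() if label]
report.append("No other abnormality is identified in the evaluated structures.")
print("\n".join(report))
```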
Collapse
Affiliation(s)
- Zhongyi Han
- School of Software, Shandong University, Jinan SD, China
| | - Benzheng Wei
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao SD, China.
| | - Xiaoming Xi
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan SD, China
| | - Bo Chen
- School of Health Science, Western University, London ON, Canada
| | - Yilong Yin
- School of Software, Shandong University, Jinan SD, China.
| | - Shuo Li
- Department of Medical Imaging, Western University, London ON, Canada
| |
Collapse
|
48
|
Holistic multitask regression network for multiapplication shape regression segmentation. Med Image Anal 2020; 65:101783. [DOI: 10.1016/j.media.2020.101783] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Revised: 05/31/2020] [Accepted: 07/09/2020] [Indexed: 11/23/2022]
|
49
|
Pickhardt PJ, Graffy PM, Zea R, Lee SJ, Liu J, Sandfort V, Summers RM. Automated Abdominal CT Imaging Biomarkers for Opportunistic Prediction of Future Major Osteoporotic Fractures in Asymptomatic Adults. Radiology 2020; 297:64-72. [PMID: 32780005 PMCID: PMC7526945 DOI: 10.1148/radiol.2020200466] [Citation(s) in RCA: 80] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Revised: 06/05/2020] [Accepted: 06/10/2020] [Indexed: 12/13/2022]
Abstract
Background Body composition data from abdominal CT scans have the potential to opportunistically identify those at risk for future fracture. Purpose To apply automated bone, muscle, and fat tools to noncontrast CT to assess performance for predicting major osteoporotic fractures and to compare with the Fracture Risk Assessment Tool (FRAX) reference standard. Materials and Methods Fully automated bone attenuation (L1-level attenuation), muscle attenuation (L3-level attenuation), and fat (L1-level visceral-to-subcutaneous [V/S] ratio) measures were derived from noncontrast low-dose abdominal CT scans in a generally healthy asymptomatic adult outpatient cohort from 2004 to 2016. The FRAX score was calculated from data derived from an algorithmic electronic health record search. The cohort was assessed for subsequent future fragility fractures. Subset analysis was performed for patients evaluated with dual x-ray absorptiometry (n = 2106). Hazard ratios (HRs) and receiver operating characteristic curve analyses were performed. Results A total of 9223 adults were evaluated (mean age, 57 years ± 8 [standard deviation]; 5152 women) at CT and were followed over a median time of 8.8 years (interquartile range, 5.1-11.6 years), with documented subsequent major osteoporotic fractures in 7.4% (n = 686), including hip fractures in 2.4% (n = 219). Comparing the highest-risk quartile with the other three quartiles, HRs for bone attenuation, muscle attenuation, V/S fat ratio, and FRAX were 2.1, 1.9, 0.98, and 2.5 for any fragility fracture and 2.0, 2.5, 1.1, and 2.5 for femoral fractures, respectively (P < .001 for all except V/S ratio, which was P ≥ .51). Area under the receiver operating characteristic curve (AUC) values for fragility fracture were 0.71, 0.65, 0.51, and 0.72 at 2 years and 0.63, 0.62, 0.52, and 0.65 at 10 years, respectively. For hip fractures, 2-year AUC for muscle attenuation alone was 0.75 compared with 0.73 for FRAX (P = .43). Multivariable 2-year AUC combining bone and muscle attenuation was 0.73 for any fragility fracture and 0.76 for hip fractures, respectively (P ≥ .73 compared with FRAX). For the subset with dual x-ray absorptiometry T-scores, 2-year AUC was 0.74 for bone attenuation and 0.65 for FRAX (P = .11). Conclusion Automated bone and muscle imaging biomarkers derived from CT scans provided comparable performance to Fracture Risk Assessment Tool score for presymptomatic prediction of future osteoporotic fractures. Muscle attenuation alone provided effective hip fracture prediction. © RSNA, 2020 See also the editorial by Smith in this issue.
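A minimal sketch of this style of evaluation, using synthetic data and assuming that lower bone attenuation implies higher risk, is shown below:

```python
# Split a CT biomarker at its highest-risk quartile and compute an ROC AUC against
# observed fractures. The cohort below is simulated, not the study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
bone_hu = rng.normal(160, 40, size=500)                  # L1 trabecular attenuation (HU)
fracture = (rng.random(500) < 1 / (1 + np.exp((bone_hu - 120) / 15))).astype(int)

risk_score = -bone_hu                                    # lower attenuation = higher risk
auc = roc_auc_score(fracture, risk_score)

q25 = np.quantile(bone_hu, 0.25)                         # highest-risk quartile cut-off
high_risk = bone_hu <= q25
print(f"AUC = {auc:.2f}; fracture rate in highest-risk quartile = "
      f"{fracture[high_risk].mean():.2%} vs others = {fracture[~high_risk].mean():.2%}")
```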
Collapse
Affiliation(s)
- Perry J. Pickhardt
- From the Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center, 600 Highland Ave, Madison, WI 53792-3252 (P.J.P., P.M.G., R.Z., S.J.L.); and Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (J.L., V.S., R.M.S.)
| | - Peter M. Graffy
- From the Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center, 600 Highland Ave, Madison, WI 53792-3252 (P.J.P., P.M.G., R.Z., S.J.L.); and Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (J.L., V.S., R.M.S.)
| | - Ryan Zea
- From the Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center, 600 Highland Ave, Madison, WI 53792-3252 (P.J.P., P.M.G., R.Z., S.J.L.); and Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (J.L., V.S., R.M.S.)
| | - Scott J. Lee
- From the Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center, 600 Highland Ave, Madison, WI 53792-3252 (P.J.P., P.M.G., R.Z., S.J.L.); and Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (J.L., V.S., R.M.S.)
| | - Jiamin Liu
- From the Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center, 600 Highland Ave, Madison, WI 53792-3252 (P.J.P., P.M.G., R.Z., S.J.L.); and Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (J.L., V.S., R.M.S.)
| | - Veit Sandfort
- From the Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center, 600 Highland Ave, Madison, WI 53792-3252 (P.J.P., P.M.G., R.Z., S.J.L.); and Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (J.L., V.S., R.M.S.)
| | - Ronald M. Summers
- From the Department of Radiology, University of Wisconsin School of Medicine & Public Health, E3/311 Clinical Science Center, 600 Highland Ave, Madison, WI 53792-3252 (P.J.P., P.M.G., R.Z., S.J.L.); and Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (J.L., V.S., R.M.S.)
| |
Collapse
|
50
|
Löffler MT, Sekuboyina A, Jacob A, Grau AL, Scharr A, El Husseini M, Kallweit M, Zimmer C, Baum T, Kirschke JS. A Vertebral Segmentation Dataset with Fracture Grading. Radiol Artif Intell 2020; 2:e190138. [PMID: 33937831 PMCID: PMC8082364 DOI: 10.1148/ryai.2020190138] [Citation(s) in RCA: 70] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2019] [Revised: 02/24/2020] [Accepted: 03/04/2020] [Indexed: 04/21/2023]
Abstract
Published under a CC BY 4.0 license. Supplemental material is available for this article.
Collapse
Affiliation(s)
- Maximilian T. Löffler
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Anjany Sekuboyina
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Alina Jacob
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Anna-Lena Grau
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Andreas Scharr
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Malek El Husseini
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Mareike Kallweit
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Claus Zimmer
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Thomas Baum
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| | - Jan S. Kirschke
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (M.T.L., A. Sekuboyina, A.J., A.L.G., A. Scharr, M.E.H., M.K., C.Z., T.B., J.S.K.); and Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)
| |
Collapse
|