1
Ruitenbeek HC, Oei EHG, Visser JJ, Kijowski R. Artificial intelligence in musculoskeletal imaging: realistic clinical applications in the next decade. Skeletal Radiol 2024; 53:1849-1868. [PMID: 38902420] [DOI: 10.1007/s00256-024-04684-6]
Abstract
This article will provide a perspective review of the most extensively investigated deep learning (DL) applications for musculoskeletal disease detection that have the best potential to translate into routine clinical practice over the next decade. Deep learning methods for detecting fractures, estimating pediatric bone age, calculating bone measurements such as lower extremity alignment and Cobb angle, and grading osteoarthritis on radiographs have been shown to have high diagnostic performance with many of these applications now commercially available for use in clinical practice. Many studies have also documented the feasibility of using DL methods for detecting joint pathology and characterizing bone tumors on magnetic resonance imaging (MRI). However, musculoskeletal disease detection on MRI is difficult as it requires multi-task, multi-class detection of complex abnormalities on multiple image slices with different tissue contrasts. The generalizability of DL methods for musculoskeletal disease detection on MRI is also challenging due to fluctuations in image quality caused by the wide variety of scanners and pulse sequences used in routine MRI protocols. The diagnostic performance of current DL methods for musculoskeletal disease detection must be further evaluated in well-designed prospective studies using large image datasets acquired at different institutions with different imaging parameters and imaging hardware before they can be fully implemented in clinical practice. Future studies must also investigate the true clinical benefits of current DL methods and determine whether they could enhance quality, reduce error rates, improve workflow, and decrease radiologist fatigue and burnout with all of this weighed against the costs.
Affiliation(s)
- Huibert C Ruitenbeek
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
- Edwin H G Oei
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
- Jacob J Visser
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
- Richard Kijowski
- Department of Radiology, New York University Grossman School of Medicine, 660 First Avenue, 3rd Floor, New York, NY, 10016, USA.
2
Felfeliyan B, Forkert ND, Hareendranathan A, Cornel D, Zhou Y, Kuntze G, Jaremko JL, Ronsky JL. Self-supervised-RCNN for medical image segmentation with limited data annotation. Comput Med Imaging Graph 2023; 109:102297. [PMID: 37729826] [DOI: 10.1016/j.compmedimag.2023.102297]
Abstract
Many successful methods developed for medical image analysis based on machine learning use supervised learning approaches, which often require large datasets annotated by experts to achieve high accuracy. However, medical data annotation is time-consuming and expensive, especially for segmentation tasks. To overcome the problem of learning with limited labeled medical image data, an alternative deep learning training strategy based on self-supervised pretraining on unlabeled imaging data is proposed in this work. For the pretraining, different distortions are arbitrarily applied to random areas of unlabeled images. Next, a Mask-RCNN architecture is trained to localize the distortion location and recover the original image pixels. This pretrained model is assumed to gain knowledge of the relevant texture in the images from the self-supervised pretraining on unlabeled imaging data. This provides a good basis for fine-tuning the model to segment the structure of interest using a limited amount of labeled training data. The effectiveness of the proposed method in different pretraining and fine-tuning scenarios was evaluated based on the Osteoarthritis Initiative dataset with the aim of segmenting effusions in MRI datasets of the knee. Applying the proposed self-supervised pretraining method improved the Dice score by up to 18% compared to training the models using only the limited annotated data. The proposed self-supervised learning approach can be applied to many other medical image analysis tasks including anomaly detection, segmentation, and classification.
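The distortion-and-restore pretext task described in this abstract can be illustrated with a minimal NumPy sketch: corrupt a random patch of an unlabeled image and keep the patch mask as the self-supervised target. This is an illustration only, not the authors' Mask-RCNN pipeline; the patch size and the three distortion types are assumptions.

```python
import numpy as np

def distort_random_patch(image, rng, patch=8):
    """Corrupt a random square patch of an unlabeled 2-D image and return the
    corrupted image plus a binary mask of the patch -- the localization and
    restoration targets used for self-supervised pretraining."""
    img = image.copy()
    h, w = img.shape
    y = int(rng.integers(0, h - patch))
    x = int(rng.integers(0, w - patch))
    region = img[y:y + patch, x:x + patch]
    kind = int(rng.integers(0, 3))
    if kind == 0:        # additive Gaussian noise
        region = region + rng.normal(0.0, 0.2, region.shape)
    elif kind == 1:      # flatten the patch to its local mean value
        region = np.full_like(region, region.mean())
    else:                # shuffle pixels within the patch
        region = rng.permutation(region.ravel()).reshape(region.shape)
    img[y:y + patch, x:x + patch] = region
    mask = np.zeros(image.shape)
    mask[y:y + patch, x:x + patch] = 1.0
    return img, mask
```

A segmentation network pretrained to predict `mask` and reconstruct the original pixels can then be fine-tuned on the small labeled set.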
Affiliation(s)
- Banafshe Felfeliyan
- Department of Biomedical Engineering, University of Calgary, Calgary, AB, Canada; McCaig Institute for Bone & Joint Health, University of Calgary, Calgary, AB, Canada.
- Nils D Forkert
- Department of Biomedical Engineering, University of Calgary, Calgary, AB, Canada
- David Cornel
- Department of Radiology & Diagnostic Imaging, University of Alberta, Edmonton, AB, Canada
- Yuyue Zhou
- Department of Radiology & Diagnostic Imaging, University of Alberta, Edmonton, AB, Canada
- Gregor Kuntze
- McCaig Institute for Bone & Joint Health, University of Calgary, Calgary, AB, Canada
- Jacob L Jaremko
- Department of Radiology & Diagnostic Imaging, University of Alberta, Edmonton, AB, Canada
- Janet L Ronsky
- Department of Biomedical Engineering, University of Calgary, Calgary, AB, Canada; McCaig Institute for Bone & Joint Health, University of Calgary, Calgary, AB, Canada; Mechanical & Manufacturing Engineering, University of Calgary, Calgary, AB, Canada
3
Shi X, Mai Y, Fang X, Wang Z, Xue S, Chen H, Dang Q, Wang X, Tang S, Ding C, Zhu Z. Bone marrow lesions in osteoarthritis: From basic science to clinical implications. Bone Rep 2023; 18:101667. [PMID: 36909666] [PMCID: PMC9996250] [DOI: 10.1016/j.bonr.2023.101667]
Abstract
Osteoarthritis (OA) is the most prevalent musculoskeletal disease, characterized by damage to multiple joint structures, including articular cartilage, subchondral bone and synovium, resulting in disability and economic burden. Bone marrow lesions (BMLs) are common and important magnetic resonance imaging (MRI) features in OA patients. Basic and clinical research on subchondral BMLs in the pathogenesis of OA has been a hotspot. New evidence shows that subchondral bone degeneration, including BML and angiogenesis, occurs not only at or after cartilage degeneration, but even earlier than cartilage degeneration. Although BMLs are recognized as important biomarkers for OA, their exact roles in the pathogenesis of OA are still unclear, and disputes about the clinical impact and treatment of BMLs remain. This review summarizes the current basic and clinical research progress on BMLs. We particularly focus on molecular pathways, cellular abnormalities and microenvironmental changes of subchondral bone that contribute to the formation of BMLs, and emphasize the crosstalk between subchondral bone and cartilage in OA development. Finally, potential therapeutic strategies targeting BMLs in OA are discussed, providing novel strategies for OA treatment.
Affiliation(s)
- Xiaorui Shi
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yiying Mai
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Xiaofeng Fang
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zhiqiang Wang
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Song Xue
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Haowei Chen
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Qin Dang
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xiaoshuai Wang
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Su'an Tang
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Changhai Ding
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Department of Rheumatology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, China; Department of Orthopedics, Affiliated Hospital of Youjiang Medical University for Nationalities, Baise, China
- Zhaohua Zhu
- Clinical Research Centre, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Department of Orthopedics, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
4
Shah R, Astuto Arouche Nunes B, Gleason T, Fletcher W, Banaga J, Sweetwood K, Ye A, Patel R, McGill K, Link T, Crane J, Pedoia V, Majumdar S. Utilizing a Digital Swarm Intelligence Platform to Improve Consensus Among Radiologists and Exploring Its Applications. J Digit Imaging 2023; 36:401-413. [PMID: 36414832] [PMCID: PMC10039189] [DOI: 10.1007/s10278-022-00662-3]
Abstract
Radiologists today play a central role in making diagnostic decisions and labeling images for training and benchmarking artificial intelligence (AI) algorithms. A key concern is low inter-reader reliability (IRR) seen between experts when interpreting challenging cases. While team-based decisions are known to outperform individual decisions, inter-personal biases often creep up in group interactions which limit nondominant participants from expressing true opinions. To overcome the dual problems of low consensus and interpersonal bias, we explored a solution modeled on bee swarms. Two separate cohorts, three board-certified radiologists, (cohort 1), and five radiology residents (cohort 2) collaborated on a digital swarm platform in real time and in a blinded fashion, grading meniscal lesions on knee MR exams. These consensus votes were benchmarked against clinical (arthroscopy) and radiological (senior-most radiologist) standards of reference using Cohen's kappa. The IRR of the consensus votes was then compared to the IRR of the majority and most confident votes of the two cohorts. IRR was also calculated for predictions from a meniscal lesion detecting AI algorithm. The attending cohort saw an improvement of 23% in IRR of swarm votes (k = 0.34) over majority vote (k = 0.11). Similar improvement of 23% in IRR (k = 0.25) in 3-resident swarm votes over majority vote (k = 0.02) was observed. The 5-resident swarm had an even higher improvement of 30% in IRR (k = 0.37) over majority vote (k = 0.07). The swarm consensus votes outperformed individual and majority vote decision in both the radiologists and resident cohorts. The attending and resident swarms also outperformed predictions from a state-of-the-art AI algorithm.
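Cohen's kappa, the agreement statistic benchmarked throughout the study above, corrects raw observed agreement for the agreement expected by chance given each rater's label frequencies. A minimal sketch (illustrative only, not the authors' code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement p_o corrected by the agreement p_e
    expected by chance from each rater's marginal label frequencies.
    Undefined (division by zero) when both raters always give one label."""
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Kappa of 1 means perfect agreement; values near 0, as in the majority votes reported above, mean agreement is barely better than chance.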
Affiliation(s)
- Rutwik Shah
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA.
- Center for Intelligent Imaging, University of California San Francisco, San Francisco, CA, USA.
- Bruno Astuto Arouche Nunes
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Center for Intelligent Imaging, University of California San Francisco, San Francisco, CA, USA
- Tyler Gleason
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Will Fletcher
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Justin Banaga
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Kevin Sweetwood
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Allen Ye
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Rina Patel
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Kevin McGill
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Thomas Link
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Jason Crane
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Center for Intelligent Imaging, University of California San Francisco, San Francisco, CA, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Center for Intelligent Imaging, University of California San Francisco, San Francisco, CA, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Center for Intelligent Imaging, University of California San Francisco, San Francisco, CA, USA
5
Guan H, Liu M. DomainATM: Domain adaptation toolbox for medical data analysis. Neuroimage 2023; 268:119863. [PMID: 36610676] [PMCID: PMC9908850] [DOI: 10.1016/j.neuroimage.2023.119863]
Abstract
Domain adaptation (DA) is an important technique for modern machine learning-based medical data analysis, which aims at reducing distribution differences between different medical datasets. A proper domain adaptation method can significantly enhance the statistical power by pooling data acquired from multiple sites/centers. To this end, we have developed the Domain Adaptation Toolbox for Medical data analysis (DomainATM) - an open-source software package designed for fast facilitation and easy customization of domain adaptation methods for medical data analysis. The DomainATM is implemented in MATLAB with a user-friendly graphical interface, and it consists of a collection of popular data adaptation algorithms that have been extensively applied to medical image analysis and computer vision. With DomainATM, researchers are able to facilitate fast feature-level and image-level adaptation, visualization and performance evaluation of different adaptation methods for medical data analysis. More importantly, the DomainATM enables the users to develop and test their own adaptation methods through scripting, greatly enhancing its utility and extensibility. An overview characteristic and usage of DomainATM is presented and illustrated with three example experiments, demonstrating its effectiveness, simplicity, and flexibility. The software, source code, and manual are available online.
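Feature-level adaptation of the kind such a toolbox collects can be illustrated with a CORAL-style covariance alignment, a common baseline that re-colors source features to match the target domain's second-order statistics. This NumPy sketch is an independent illustration, not code from DomainATM (which is MATLAB-based):

```python
import numpy as np

def coral_align(source, target, eps=1e-5):
    """Re-color source features (n_samples x n_features) so their covariance
    and mean match the target domain -- CORAL-style feature-level adaptation."""
    def cov_power(x, power):
        # regularized covariance raised to a fractional power via eigh
        c = np.cov(x, rowvar=False) + eps * np.eye(x.shape[1])
        vals, vecs = np.linalg.eigh(c)
        return vecs @ np.diag(vals ** power) @ vecs.T
    xs = source - source.mean(axis=0)
    xt = target - target.mean(axis=0)
    # whiten with the source covariance, then re-color with the target's
    aligned = xs @ cov_power(xs, -0.5) @ cov_power(xt, 0.5)
    return aligned + target.mean(axis=0)
```

After alignment the pooled source and target samples share first- and second-order statistics, which is often enough to stabilize a classifier trained across sites.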
Affiliation(s)
- Mingxia Liu
- The Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
6
A More Posterior Tibial Tubercle (Decreased Sagittal Tibial Tubercle-Trochlear Groove Distance) Is Significantly Associated With Patellofemoral Joint Degenerative Cartilage Change: A Deep Learning Analysis. Arthroscopy 2022; 39:1493-1501.e2. [PMID: 36581003] [DOI: 10.1016/j.arthro.2022.11.040]
Abstract
PURPOSE To perform patellofemoral joint (PFJ) geometric measurements on knee magnetic resonance imaging scans and determine their relations with chondral lesions in a multicenter cohort using deep learning. METHODS The sagittal tibial tubercle-trochlear groove (sTTTG) distance, tibial tubercle-trochlear groove distance, trochlear sulcus angle, trochlear depth, Caton-Deschamps Index (CDI), and flexion angle were measured by use of deep learning-generated segmentations on a subset of the Osteoarthritis Initiative study with radiologist-graded PFJ cartilage grades (n = 2,461). Kruskal-Wallis H tests were performed to compare differences in PFJ morphology between subjects without PFJ osteoarthritis (OA) and those with PFJ OA. PFJ morphology was correlated with secondary outcomes of mean patellar cartilage thickness and mean patellar cartilage T2 relaxation time using linear regression models controlling for age, sex, and body mass index. RESULTS A total of 1,626 knees did not have PFJ OA, whereas 835 knees had PFJ OA. Knees without PFJ OA had an increased (anterior) sTTTG distance (mean ± standard deviation, 11.1 ± 12.8 mm) compared with knees with PFJ OA (8.4 ± 12.7 mm) (P < .001), indicating a more posterior tibial tubercle in subjects with PFJ OA. Knees without PFJ OA had a decreased sulcus angle (127.4° ± 7.1° vs 128.0° ± 8.4°, P = .01) and increased trochlear depth (9.1 ± 1.7 mm vs 9.0 ± 2.0 mm, P = .03) compared with knees with PFJ OA. Decreased patellar cartilage thickness was associated with decreased trochlear depth (β = 0.12, P = .002) and increased CDI (β = -0.07, P < .001). Increased patellar cartilage T2 relaxation time was correlated with decreased sTTTG distance (β = -0.08, P = .01), decreased sulcus angle (β = -0.12, P = .04), and decreased CDI (β = -0.12, P < .001). CONCLUSIONS PFJ OA, patellar cartilage thickness, and patellar cartilage T2 relaxation time were shown to be associated with the underlying geometries within the PFJ. This large longitudinal study highlights that a decreased sTTTG distance (i.e., a more posterior tibial tubercle) is significantly associated with PFJ degenerative cartilage change. LEVEL OF EVIDENCE Level III, retrospective comparative prognostic trial.
7
Calivà F, Namiri NK, Dubreuil M, Pedoia V, Ozhinsky E, Majumdar S. Studying osteoarthritis with artificial intelligence applied to magnetic resonance imaging. Nat Rev Rheumatol 2022; 18:112-121. [PMID: 34848883] [DOI: 10.1038/s41584-021-00719-7]
Abstract
The 3D nature and soft-tissue contrast of MRI makes it an invaluable tool for osteoarthritis research, by facilitating the elucidation of disease pathogenesis and progression. The recent increasing employment of MRI has certainly been stimulated by major advances that are due to considerable investment in research, particularly related to artificial intelligence (AI). These AI-related advances are revolutionizing the use of MRI in clinical research by augmenting activities ranging from image acquisition to post-processing. Automation is key to reducing the long acquisition times of MRI, conducting large-scale longitudinal studies and quantitatively defining morphometric and other important clinical features of both soft and hard tissues in various anatomical joints. Deep learning methods have been used recently for multiple applications in the musculoskeletal field to improve understanding of osteoarthritis. Compared with labour-intensive human efforts, AI-based methods have advantages and potential in all stages of imaging, as well as post-processing steps, including aiding diagnosis and prognosis. However, AI-based methods also have limitations, including the arguably limited interpretability of AI models. Given that the AI community is highly invested in uncovering uncertainties associated with model predictions and improving their interpretability, we envision future clinical translation and progressive increase in the use of AI algorithms to support clinicians in optimizing patient care.
Affiliation(s)
- Francesco Calivà
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Nikan K Namiri
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Maureen Dubreuil
- Section of Rheumatology, Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Eugene Ozhinsky
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA.
8
Oei EHG, Hirvasniemi J, van Zadelhoff TA, van der Heijden RA. Osteoarthritis year in review 2021: imaging. Osteoarthritis Cartilage 2022; 30:226-236. [PMID: 34838670] [DOI: 10.1016/j.joca.2021.11.012]
Abstract
PURPOSE To provide a narrative review of original articles on imaging of osteoarthritis (OA) published between January 1, 2020 and March 31, 2021, with a special focus on imaging of inflammation, imaging of bone, cartilage and bone-cartilage interactions, imaging of peri-articular tissues, imaging scoring methods for OA, and artificial intelligence (AI) applied to OA imaging. METHODS The Embase, Pubmed, Medline, Cochrane databases were searched for original research articles in the English language on human, in vivo, imaging of OA published between January 1, 2020 and March 31, 2021. Search terms related to osteoarthritis combined with all imaging modalities and artificial intelligence were applied. A selection of articles reporting on one of the focus topics was discussed further. RESULTS The search resulted in 651 articles, of which 214 were deemed relevant to human OA imaging. Among the articles included, the knee joint (69%) and magnetic resonance imaging (MRI) (52%) were the predominant anatomical area and imaging modality studied. There were also a substantial number of papers (n = 46) reporting on AI applications in the field of OA imaging. CONCLUSION Imaging continues to play an important role in the assessment of OA. Recent advances in OA imaging include quantitative, non-contrast, and hybrid imaging techniques for improved characterization of multiple tissue processes in OA. In addition, an increasing effort in AI techniques is undertaken to enhance OA imaging acquisition and analysis.
Affiliation(s)
- E H G Oei
- Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Dr. Molewaterplein 40, 3015 GD Rotterdam, the Netherlands.
- J Hirvasniemi
- Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Dr. Molewaterplein 40, 3015 GD Rotterdam, the Netherlands.
- T A van Zadelhoff
- Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Dr. Molewaterplein 40, 3015 GD Rotterdam, the Netherlands.
- R A van der Heijden
- Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Dr. Molewaterplein 40, 3015 GD Rotterdam, the Netherlands.
9
Fu R, Leader JK, Pradeep T, Shi J, Meng X, Zhang Y, Pu J. Automated delineation of orbital abscess depicted on CT scan using deep learning. Med Phys 2021; 48:3721-3729. [PMID: 33906264] [PMCID: PMC8600964] [DOI: 10.1002/mp.14907]
Abstract
OBJECTIVES To develop and validate a deep learning algorithm to automatically detect and segment an orbital abscess depicted on computed tomography (CT). METHODS We retrospectively collected orbital CT scans acquired on 67 pediatric subjects with a confirmed orbital abscess in the setting of infectious orbital cellulitis. A context-aware convolutional neural network (CA-CNN) was developed and trained to automatically segment orbital abscess. To reduce the requirement for a large dataset, transfer learning was used by leveraging a pre-trained model for CT-based lung segmentation. An ophthalmologist manually delineated orbital abscesses depicted on the CT images. The classical U-Net and the CA-CNN models with and without transfer learning were trained and tested on the collected dataset using the 10-fold cross-validation method. Dice coefficient, Jaccard index, and Hausdorff distance were used as performance metrics to assess the agreement between the computerized and manual segmentations. RESULTS The context-aware U-Net with transfer learning achieved an average Dice coefficient and Jaccard index of 0.78 ± 0.12 and 0.65 ± 0.13, which were consistently higher than the classical U-Net or the context-aware U-Net without transfer learning (P < 0.01). The average differences of the abscess between the computerized results and the experts in terms of volume and Hausdorff distance were 0.10 ± 0.11 mL and 1.94 ± 1.21 mm, respectively. The context-aware U-Net detected all orbital abscess without false positives. CONCLUSIONS The deep learning solution demonstrated promising performance in detecting and segmenting orbital abscesses on CT images in strong agreement with a human observer.
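The two overlap metrics reported above are simple set ratios over binary masks; a minimal sketch (illustrative, not the study's evaluation code):

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice = 2|A∩B| / (|A|+|B|) and Jaccard = |A∩B| / |A∪B| for binary
    segmentation masks; the two are related by J = D / (2 - D)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard
```

Because Jaccard is a monotone function of Dice, the two rank segmentations identically; Dice is simply more forgiving of partial overlap, which is why both are often reported together.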
Affiliation(s)
- Roxana Fu
- Department of Ophthalmology University of Pittsburgh, Pittsburgh, PA 15213, USA
- Joseph K. Leader
- Departments of Radiology and Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Tejus Pradeep
- Johns Hopkins University, School of Medicine, Baltimore, MD, USA
- Junli Shi
- Departments of Radiology and Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Xin Meng
- Departments of Radiology and Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Yanchun Zhang
- Departments of Radiology and Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Jiantao Pu
- Departments of Radiology and Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
10
Namiri NK, Lee J, Astuto B, Liu F, Shah R, Majumdar S, Pedoia V. Deep learning for large scale MRI-based morphological phenotyping of osteoarthritis. Sci Rep 2021; 11:10915. [PMID: 34035386] [PMCID: PMC8149826] [DOI: 10.1038/s41598-021-90292-6]
Abstract
Osteoarthritis (OA) develops through heterogenous pathophysiologic pathways. As a result, no regulatory agency approved disease modifying OA drugs are available to date. Stratifying knees into MRI-based morphological phenotypes may provide insight into predicting future OA incidence, leading to improved inclusion criteria and efficacy of therapeutics. We trained convolutional neural networks to classify bone, meniscus/cartilage, inflammatory, and hypertrophy phenotypes in knee MRIs from participants in the Osteoarthritis Initiative (n = 4791). We investigated cross-sectional association between baseline morphological phenotypes and baseline structural OA (Kellgren Lawrence grade > 1) and symptomatic OA. Among participants without baseline OA, we evaluated association of baseline phenotypes with 48-month incidence of structural OA and symptomatic OA. The area under the curve of bone, meniscus/cartilage, inflammatory, and hypertrophy phenotype neural network classifiers was 0.89 ± 0.01, 0.93 ± 0.03, 0.96 ± 0.02, and 0.93 ± 0.02, respectively (mean ± standard deviation). Among those with no baseline OA, bone phenotype (OR: 2.99 (95%CI: 1.59-5.62)) and hypertrophy phenotype (OR: 5.80 (95%CI: 1.82-18.5)) each respectively increased odds of developing incident structural OA and symptomatic OA at 48 months. All phenotypes except meniscus/cartilage increased odds of undergoing total knee replacement within 96 months. Artificial intelligence can rapidly stratify knees into structural phenotypes associated with incident OA and total knee replacement, which may aid in stratifying patients for clinical trials of targeted therapeutics.
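The odds ratios with 95% confidence intervals reported above follow the standard 2x2-table computation with a Wald interval on the log scale. A minimal sketch (the counts in the usage below are made up for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (a: exposed with event, b: exposed without,
    c: unexposed with event, d: unexposed without) and a Wald 95% CI computed
    on log(OR), whose standard error is sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

For example, `odds_ratio_ci(10, 20, 5, 40)` gives an odds ratio of 4.0 with a CI that excludes 1, i.e. a statistically significant association at the 5% level.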
Affiliation(s)
- Nikan K Namiri
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, 1700 Fourth St, Suite 201, QB3 Building, San Francisco, CA, 94107, USA
- Jinhee Lee
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, 1700 Fourth St, Suite 201, QB3 Building, San Francisco, CA, 94107, USA
- Bruno Astuto
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, 1700 Fourth St, Suite 201, QB3 Building, San Francisco, CA, 94107, USA
- Felix Liu
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, 1700 Fourth St, Suite 201, QB3 Building, San Francisco, CA, 94107, USA
- Rutwik Shah
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, 1700 Fourth St, Suite 201, QB3 Building, San Francisco, CA, 94107, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, 1700 Fourth St, Suite 201, QB3 Building, San Francisco, CA, 94107, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, 1700 Fourth St, Suite 201, QB3 Building, San Francisco, CA, 94107, USA.
11
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030] [DOI: 10.1093/bib/bbaa310]
Abstract
In order to reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities such as demographic, clinical, imaging, genetics and environmental data have been studied to improve their understanding. Deep learning, a subpart of machine learning, provides complex algorithms that can learn from such various data. It has become state of the art in numerous fields, including computer vision and natural language processing, and is also growingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the concerned disorders and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
12
Zhang Z. Editorial for "Computer-Aided Detection AI Reduces Interreader Variability in Grading Hip Abnormalities With MRI". J Magn Reson Imaging 2020; 52:1173-1174. [DOI: 10.1002/jmri.27170]
Affiliation(s)
- Zhongwei Zhang
- Mallinckrodt Institute of Radiology Washington University School of Medicine St. Louis MO USA