1
Yiu C, Griffith JF, Xiao F, Shi L, Zhou B, Wu S, Tam LS. Automated quantification of wrist bone marrow oedema, pre- and post-treatment, in early rheumatoid arthritis. Rheumatol Adv Pract 2024; 8:rkae073. [PMID: 38915843] [PMCID: PMC11194532] [DOI: 10.1093/rap/rkae073]
Abstract
Objective Bone inflammation (osteitis) in early RA (ERA) manifests as bone marrow oedema (BME) and precedes the development of bone erosion. In this prospective, single-centre study, we developed an automated post-processing pipeline for quantifying the severity of wrist BME on T2-weighted fat-suppressed MRI.

Methods A total of 80 ERA patients [mean age 54 years (s.d. 12), 62 female] were enrolled at baseline and 49 (40 female) after 1 year of treatment. For automated bone segmentation, a convolutional neural network framework (nnU-Net) was trained and validated (5-fold cross-validation) on 15 wrist bone areas at baseline in 60 ERA patients. For BME quantification, BME was identified by Gaussian mixture model clustering and thresholding. The BME proportion (%) and relative BME intensity within each bone area were compared with visual semi-quantitative assessment using the RA MRI score (RAMRIS).

Results For automated wrist bone area segmentation, the overall Sørensen-Dice similarity coefficient was 0.91 (s.d. 0.02) against ground-truth manual segmentation. Visual RAMRIS BME scores and the automated BME proportion correlated highly (Pearson r = 0.928, P < 0.001). The automated BME proportion decreased after treatment, correlating highly (r = 0.852, P < 0.001) with the reduction in RAMRIS BME score.

Conclusion The automated model showed excellent segmentation performance and reliable quantification of both the proportion and relative intensity of wrist BME in ERA patients, providing a more objective and efficient alternative to RAMRIS BME scoring.
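The clustering-and-thresholding step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the two-component mixture, the EM initialisation, and the per-bone-area voxel handling are all our assumptions.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """Two-component 1D Gaussian mixture fitted by plain EM (numpy only)."""
    mu = np.percentile(x, [10, 90]).astype(float)       # rough low/high init
    sig = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel
        p = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return mu, sig, pi

def bme_proportion(intensities, bone_mask):
    """Percentage of bone-area voxels assigned to the bright (oedema-like) component."""
    vox = intensities[bone_mask]
    mu, sig, pi = fit_gmm_1d(vox)
    bright = int(np.argmax(mu))
    p = pi * np.exp(-0.5 * ((vox[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    labels = p.argmax(axis=1)                           # hard assignment by posterior
    return 100.0 * (labels == bright).mean()
```

On synthetic intensities with a dim normal-marrow component and a bright oedema-like component, the returned proportion tracks the fraction of bright voxels.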
Affiliation(s)
- Chungwun Yiu
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- James Francis Griffith
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Fan Xiao
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Lin Shi
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Bingjing Zhou
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Su Wu
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Lai-Shan Tam
- Rheumatology Division, Faculty of Medicine, Chinese University of Hong Kong, Hong Kong, China
2
Stoel BC, Staring M, Reijnierse M, van der Helm-van Mil AHM. Deep learning in rheumatological image interpretation. Nat Rev Rheumatol 2024; 20:182-195. [PMID: 38332242] [DOI: 10.1038/s41584-023-01074-5]
Abstract
Artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. Likewise, initial applications have been explored in rheumatology. Deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. With images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. As with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. This adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. To facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice.
Affiliation(s)
- Berend C Stoel
- Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Marius Staring
- Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Monique Reijnierse
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
3
Loisel F, Durand S, Goubier JN, Bonnet X, Rouch P, Skalli W. Three-dimensional reconstruction of the hand from biplanar X-rays: Assessment of accuracy and reliability. Orthop Traumatol Surg Res 2023; 109:103403. [PMID: 36108817] [DOI: 10.1016/j.otsr.2022.103403]
Abstract
BACKGROUND Functional disorders of the hand are generally investigated first with conventional radiography. However, X-rays are two-dimensional (2D), provide limited information, and may be degraded by overlapping bones and projection bias. This work presents a three-dimensional (3D) hand reconstruction method based on biplanar X-rays.

METHOD The approach deforms a generic hand model onto biplanar X-rays through manual and automatic processes. With manual CT segmentation (0.3 mm slice thickness) as the reference examination, the accuracy of the method was evaluated by comparing reconstructions from biplanar X-rays with the corresponding CT-based reconstructions. To assess reproducibility, 6 healthy hands (6 subjects, 3 left, 3 men) were considered; two operators each repeated every reconstruction from biplanar X-rays three times to study inter- and intra-operator variability. Three anatomical parameters that could be calculated automatically from the bone surfaces were considered: the length of the scaphoid, the depth of the distal end of the radius, and the height of the trapezium.

RESULTS Twice the root-mean-square (2RMS) point-to-surface difference between biplanar X-ray and CT reconstructions ranged from 0.46 mm for the distal phalanges to 1.55 mm for the distal carpal bones. Inter- and intra-observer variability showed a 95% confidence interval of less than 1.32 mm for the anatomical parameters and 2.12 mm for the bone centroids.

DISCUSSION The method yields an accurate 3D reconstruction of the hand and wrist compared with traditional CT segmentation. With improved automation, objective information about the position of the bones in space could be obtained quickly. The value of this method lies in the early diagnosis of certain ligament pathologies (carpal instability), and it also has implications for surgical planning and personalized finite element modeling.

LEVEL OF PROOF Basic sciences.
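The 2RMS accuracy metric reported above can be sketched as follows. The assumed form — twice the root-mean-square of per-point surface distances — and the brute-force nearest-neighbour pairing are our simplifications, not the paper's exact distance computation.

```python
import numpy as np

def nearest_distances(pts_a, pts_b):
    """For each vertex of reconstruction A, Euclidean distance to the nearest
    vertex of reference reconstruction B (brute force; fine for small meshes)."""
    diff = pts_a[:, None, :] - pts_b[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)

def two_rms(distances):
    """2RMS accuracy: twice the root-mean-square of the point distances (mm)."""
    d = np.asarray(distances, dtype=float)
    return 2.0 * np.sqrt((d ** 2).mean())
```

For example, a uniform 1 mm offset between two surfaces gives a 2RMS of exactly 2 mm.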
Affiliation(s)
- François Loisel
- Orthopaedics, traumatology, plastic & reconstructive surgery unit, Hand surgery Unit, University Hospital J. Minjoz, Besançon, France; Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
- Stan Durand
- Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
- Jean-Noël Goubier
- Institute of Brachial Plexus and Nerve Surgery, 92, boulevard de Courcelles, 75017 Paris, France
- Xavier Bonnet
- Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
- Philippe Rouch
- Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
- Wafa Skalli
- Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
4
Fritz B, Fritz J. Artificial intelligence for MRI diagnosis of joints: a scoping review of the current state-of-the-art of deep learning-based approaches. Skeletal Radiol 2022; 51:315-329. [PMID: 34467424] [PMCID: PMC8692303] [DOI: 10.1007/s00256-021-03830-8]
Abstract
Deep learning-based MRI diagnosis of internal joint derangement is an emerging field of artificial intelligence, which offers many exciting possibilities for musculoskeletal radiology. A variety of investigational deep learning algorithms have been developed to detect anterior cruciate ligament tears, meniscus tears, and rotator cuff disorders. Additional deep learning-based MRI algorithms have been investigated to detect Achilles tendon tears, recurrence prediction of musculoskeletal neoplasms, and complex segmentation of nerves, bones, and muscles. Proof-of-concept studies suggest that deep learning algorithms may achieve similar diagnostic performances when compared to human readers in meta-analyses; however, musculoskeletal radiologists outperformed most deep learning algorithms in studies including a direct comparison. Earlier investigations and developments of deep learning algorithms focused on the binary classification of the presence or absence of an abnormality, whereas more advanced deep learning algorithms start to include features for characterization and severity grading. While many studies have focused on comparing deep learning algorithms against human readers, there is a paucity of data on the performance differences of radiologists interpreting musculoskeletal MRI studies without and with artificial intelligence support. Similarly, studies demonstrating the generalizability and clinical applicability of deep learning algorithms using realistic clinical settings with workflow-integrated deep learning algorithms are sparse. Contingent upon future studies showing the clinical utility of deep learning algorithms, artificial intelligence may eventually translate into clinical practice to assist detection and characterization of various conditions on musculoskeletal MRI exams.
Affiliation(s)
- Benjamin Fritz
- Department of Radiology, Balgrist University Hospital, Forchstrasse 340, CH-8008 Zurich, Switzerland; Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Jan Fritz
- New York University Grossman School of Medicine, New York University, New York, NY 10016, USA
5
Impact of transfer learning for human sperm segmentation using deep learning. Comput Biol Med 2021; 136:104687. [PMID: 34364259] [DOI: 10.1016/j.compbiomed.2021.104687]
Abstract
BACKGROUND AND OBJECTIVE Infertility affects approximately one in ten couples, and almost half of cases are due to the male factor. To diagnose infertility and guide treatment, a semen analysis is performed. One of its steps is the evaluation of sperm morphology, in which the shape and size of sperm parts are examined. Laboratories dedicated to this task rely on traditional methods that are susceptible to error. An alternative to error-prone visual assessment of sperm size and shape is computer-assisted analysis of sperm morphology. However, since automatic sperm classification rates do not yet reach a precision acceptable for clinical use, a promising approach is to improve the precision of sperm segmentation, extracting the sperm contour before classification. This work assesses the utility of two deep learning image segmentation models for segmenting the human sperm head, acrosome, and nucleus.

METHODS We evaluate two well-known deep learning architectures (U-Net and Mask R-CNN) for segmenting parts of human sperm cells, using data augmentation, cross-validation, hyperparameter tuning, and transfer learning. Experiments use SCIAN-SpermSegGS, a public dataset with more than two hundred manually segmented sperm cells that is widely used to validate segmentation methods for human sperm parts.

RESULTS U-Net with transfer learning achieves up to 95% overlap with hand-segmented masks, with Dice coefficients of 0.96 for the sperm head, 0.94 for the acrosome, and 0.95 for the nucleus. These results outperform state-of-the-art methods for sperm part segmentation.

CONCLUSIONS The impact of transfer learning is substantial, significantly improving on state-of-the-art methods with a higher Dice coefficient, less dispersion, and fewer cases in which the model failed to segment sperm parts. These results represent a promising advance towards the ultimate goal of computer-assisted morphological sperm analysis.
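The Sørensen-Dice coefficient used as the evaluation metric in this entry (and in entry 1) is straightforward to compute for binary masks; a minimal version:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Sørensen-Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```

Identical masks score 1.0, disjoint masks score ~0, and a prediction sharing half its pixels with the ground truth scores 0.5.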
6
Park HJ, Chang SH, Lee JW, Lee SM. Clinical utility of F-18 sodium fluoride PET/CT for estimating disease activity in patients with rheumatoid arthritis. Quant Imaging Med Surg 2021; 11:1156-1169. [PMID: 33816157] [DOI: 10.21037/qims-20-788]
Abstract
Background The present study investigated the clinical utility of F-18 sodium fluoride (NaF) positron emission tomography/computed tomography (PET/CT) for assessing disease activity in rheumatoid arthritis.

Methods Seventeen patients with rheumatoid arthritis according to the 2010 American College of Rheumatology/European League Against Rheumatism classification criteria were prospectively enrolled. All patients underwent F-18 NaF PET/CT along with physical examination, blood tests, and ultrasonography. On PET/CT images, two quantitative parameters, the F-18 NaF uptake of the joint (joint SUV) and the joint-to-bone uptake ratio, were measured for each of the 28 joints included in the disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The relationship between PET/CT parameters and clinical factors, and the predictive value of PET/CT parameters for joints with synovitis and for high disease activity, were evaluated.

Results Tender joints (joint SUV, 13.6±8.4; joint-to-bone uptake ratio, 1.70±1.02) and joints both tender and swollen (joint SUV, 13.9±5.4; joint-to-bone uptake ratio, 1.81±0.76) had significantly higher values than joints without synovitis (joint SUV, 6.0±2.4; joint-to-bone uptake ratio, 0.74±0.31; P<0.001). On correlation analysis, the summed joint SUV (P=0.002, correlation coefficient=0.705) and summed joint-to-bone uptake ratio (P<0.001, correlation coefficient=0.861) of the 28 joints showed strong positive correlations with DAS28-ESR after adjustment for age and body mass index. The summed joint SUV also correlated positively with ultrasonography findings (grey-scale ultrasonography: P=0.047, correlation coefficient=0.468; power Doppler ultrasonography: P=0.045, correlation coefficient=0.507). On receiver operating characteristic curve analysis, the sensitivity and specificity for predicting synovitis were 83.2% and 92.7% for joint SUV, and 81.5% and 90.7% for the joint-to-bone uptake ratio. Moreover, the summation of both PET/CT parameters over the 28 joints showed a diagnostic accuracy of 100.0% for predicting high disease activity.

Conclusions Summed joint uptake on F-18 NaF PET/CT correlated strongly with DAS28-ESR and accurately predicted high disease activity. F-18 NaF PET/CT parameters might serve as an imaging biomarker of disease activity in rheumatoid arthritis.

Trial registration This study was registered at the Clinical Research Information Service of Korea (CRIS, http://cris.nih.go.kr/cris/en; registry number KCT0002597; registered November 2017).
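The two quantitative parameters can be sketched as below. The per-joint SUV arrays and reference-bone values are hypothetical placeholders; the paper's ROI definitions and SUV measurement details are not reproduced here.

```python
import numpy as np

def joint_to_bone_ratio(joint_suv, bone_suv):
    """Joint SUV normalised by the adjacent normal-bone SUV, one value per joint."""
    return np.asarray(joint_suv, dtype=float) / np.asarray(bone_suv, dtype=float)

def summed_parameters(joint_suv, bone_suv):
    """Summed joint SUV and summed joint-to-bone ratio over the assessed joints
    (the study sums over the 28 DAS28 joints)."""
    joint_suv = np.asarray(joint_suv, dtype=float)
    ratio = joint_to_bone_ratio(joint_suv, bone_suv)
    return joint_suv.sum(), ratio.sum()
```

These two sums are the values the study correlates with DAS28-ESR.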
Affiliation(s)
- Hee Jin Park
- Division of Rheumatology, Department of Internal Medicine, Catholic Kwandong University College of Medicine, International St. Mary's Hospital, 25, Simgok-ro 100-gil, Seo-gu, Incheon, Korea
- Sung Hae Chang
- Division of Rheumatology, Department of Internal Medicine, Soonchunhyang University Cheonan Hospital, 31 Suncheonhyang 6-gil, Dongnam-gu, Cheonan, Chungcheongnam-do, Korea
- Jeong Won Lee
- Department of Nuclear Medicine, Catholic Kwandong University College of Medicine, International St. Mary's Hospital, 25, Simgok-ro 100-gil, Seo-gu, Incheon, Korea
- Sang Mi Lee
- Department of Nuclear Medicine, Soonchunhyang University Cheonan Hospital, 31 Suncheonhyang 6-gil, Dongnam-gu, Cheonan, Chungcheongnam-do, Korea
7
Liu B, Zhang X, Yang L, Zhang J. Three-dimensional organ extraction method for color volume image based on the closed-form solution strategy. Quant Imaging Med Surg 2020; 10:862-870. [PMID: 32355650] [DOI: 10.21037/qims.2020.03.21]
Abstract
With the rapid development of computer technology, surgical training and the digitalized teaching of human body morphology are gaining prominence in medical education. Accurate, realistic organ models are essential digital material for these computer-assisted systems. However, no method currently exists for directly acquiring three-dimensional (3D) realistic organ models, so directly extracting models of the organs of interest from the existing Visible Human Project (VHP) image sets is urgently needed. This paper proposes a volume matting method based on a closed-form solution. Using a small number of foreground and background scribbles, target 3D regions can be extracted by computing the closed-form solution. An upper-triangular storage strategy and the preconditioned conjugate-gradient (PCG) method further promote robustness. Four image data sets (2 male and 2 female) from the United States National Library of Medicine, including slices of the brain, eye, lung, heart, liver, kidney, spine, arm, vastus, and foot, were used to extract 3D volume organ models. The experimental results show that the extracted 3D organ models are satisfactory. This method may provide technical support for medical education and other scientific research fields.
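The core numerical step — solving a sparse linear system built from a matting Laplacian, with scribbles as soft constraints — can be illustrated on a toy 1D "volume". The Laplacian construction and the λ constraint weight below are simplified assumptions; the paper additionally uses preconditioning and upper-triangular sparse storage, which this sketch omits.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Plain conjugate-gradient solve of A x = b for a symmetric positive
    definite A (the un-preconditioned core of the PCG step)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def matting_solve(laplacian, scribble_mask, scribble_vals, lam=100.0):
    """Solve (L + lam * D_s) alpha = lam * b_s: scribbled voxels are softly
    clamped to their labels and the Laplacian propagates them elsewhere."""
    A = laplacian + lam * np.diag(scribble_mask)
    b = lam * scribble_mask * scribble_vals
    return conjugate_gradient(A, b)
```

On a 5-node chain Laplacian with a background scribble (0) at one end and a foreground scribble (1) at the other, the solution is a monotone alpha ramp between the two labels.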
Affiliation(s)
- Bin Liu
- International School of Information Science & Engineering (DUT-RUISE), Dalian University of Technology, Dalian 116024, China; Key Lab of Ubiquitous Network and Service Software of Liaoning Province, Dalian University of Technology, Dalian 116024, China
- Xiaohui Zhang
- International School of Information Science & Engineering (DUT-RUISE), Dalian University of Technology, Dalian 116024, China
- Liang Yang
- The Second Hospital of Dalian Medical University, Dalian Medical University, Dalian 116044, China
- Jianxin Zhang
- Key Lab of Advanced Design and Intelligent Computing, Ministry of Education, Dalian University, Dalian 116622, China; School of Computer Science and Engineering, Dalian Minzu University, Dalian 116600, China
8
Ren H, Zhou L, Liu G, Peng X, Shi W, Xu H, Shan F, Liu L. An unsupervised semi-automated pulmonary nodule segmentation method based on enhanced region growing. Quant Imaging Med Surg 2020; 10:233-242. [PMID: 31956545] [DOI: 10.21037/qims.2019.12.02]
Abstract
Background Computer-aided diagnosis is becoming increasingly popular in clinical practice, especially for medical images. Reliable, accurate segmentation makes physicians' diagnosis of lung nodules more efficient.

Methods A region-growing-based semi-automated pulmonary nodule segmentation algorithm (ReGANS) was developed with three improvements: an automatic threshold calculation method, a lesion-area pre-projection method, and an optimized region-growing method. Starting from a single manually placed point, the algorithm can quickly and accurately segment a whole lung nodule in a set of computed tomography (CT) images.

Results The average time for ReGANS to segment one pulmonary nodule was 0.83 s. Compared with the radiologist's two manual results, the probabilistic Rand index (PRI), global consistency error (GCE), and variation of information (VoI) were 0.93, 0.06, and 0.30 for the boundary range (BR), and 0.86, 0.06, and 0.30 for the precise range (PR). The number of images covered by one pulmonary nodule in a CT image set was also compared between the algorithm and the radiologist's results, with an error rate of 15%. The results were further verified on multiple data sets to validate robustness.

Conclusions Compared with other algorithms, ReGANS segments the lung nodule region more quickly and more precisely. The experimental results show that ReGANS can assist medical imaging diagnosis and has good clinical application value. It also provides a faster and more convenient method for preparing data for intelligent algorithms.
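A minimal 2D sketch of the seed-based region-growing idea follows. The automatic threshold here — mean ± 3 s.d. of the seed's 3×3 neighbourhood — is our stand-in, not ReGANS's actual threshold calculation, and the 4-connectivity and 2D grid are likewise simplifications of the 3D CT setting.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=None):
    """Segment the connected region around `seed` by breadth-first region growing.
    If tol is None, derive an intensity window automatically from the seed's
    3x3 neighbourhood (mean ± 3*std); otherwise use seed value ± tol."""
    img = np.asarray(img, dtype=float)
    r, c = seed
    if tol is None:
        nb = img[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        lo = nb.mean() - 3.0 * nb.std() - 1e-6
        hi = nb.mean() + 3.0 * nb.std() + 1e-6
    else:
        lo, hi = img[r, c] - tol, img[r, c] + tol
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:                                   # breadth-first flood fill
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and lo <= img[ny, nx] <= hi):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Seeded inside a bright 4×4 "nodule" on a dark background, the grown mask covers exactly the nodule pixels.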
Affiliation(s)
- He Ren
- Shanghai Public Health Clinical Center & Institutes of Biomedical Sciences, School of Basic Medical Sciences, School of Data Science, Fudan University, Shanghai 200032, China; Shanghai University of Medicine & Health Sciences, Shanghai 201318, China
- Lingxiao Zhou
- Shanghai Public Health Clinical Center & Institutes of Biomedical Sciences, School of Basic Medical Sciences, School of Data Science, Fudan University, Shanghai 200032, China
- Gang Liu
- Shanghai Public Health Clinical Center & Institutes of Biomedical Sciences, School of Basic Medical Sciences, School of Data Science, Fudan University, Shanghai 200032, China
- Xueqing Peng
- Shanghai Public Health Clinical Center & Institutes of Biomedical Sciences, School of Basic Medical Sciences, School of Data Science, Fudan University, Shanghai 200032, China
- Weiya Shi
- Shanghai Public Health Clinical Center & Institutes of Biomedical Sciences, School of Basic Medical Sciences, School of Data Science, Fudan University, Shanghai 200032, China
- Huilin Xu
- Shanghai Public Health Clinical Center & Institutes of Biomedical Sciences, School of Basic Medical Sciences, School of Data Science, Fudan University, Shanghai 200032, China
- Fei Shan
- Shanghai Public Health Clinical Center & Institutes of Biomedical Sciences, School of Basic Medical Sciences, School of Data Science, Fudan University, Shanghai 200032, China
- Lei Liu
- Shanghai Public Health Clinical Center & Institutes of Biomedical Sciences, School of Basic Medical Sciences, School of Data Science, Fudan University, Shanghai 200032, China