1
Al-Sharify NT, Nser HY, Ghaeb NH, Al-Sharify ZT, See OH, Weng LY, Ahmed SM. Influence of different parameters on the corneal asphericity (Q value) assessed with progress in biomedical optics and imaging - A review. Heliyon 2024;10:e35924. PMID: 39224364; PMCID: PMC11367468; DOI: 10.1016/j.heliyon.2024.e35924.
Abstract
Corneal diseases such as keratoconus weaken the cornea and can change its shape. The condition affects between 1 in 3,000 and 1 in 10,000 people. The main cause of such conditions is unknown, and their impact can be significant. Over the last decade, with advances in computerized corneal topography assessment, researchers have shown increasing interest in corneal topography for both research and clinical work. Several aspheric numerical models have so far been developed and proposed to describe the complex shape of the cornea. The asphericity of an eye is commonly characterized by the Q value, an indicator of the aspherical degree of the cornea. It is a critical parameter in mathematical models of the cornea, as it represents the cornea's shape and the eye's optical characteristics. Given the importance of the corneal Q value, several studies have explored this parameter and its distribution, primarily in terms of its influence on the optical properties of the human eye. The corneal Q value must be determined before treating refractive errors, since corneal degenerations are diseases that can compromise the structure of the cornea. This study aims to highlight the need to understand the corneal Q value, as it can assist in personalising corneal refractive surgery and intraocular lens implantation. The relevance of the corneal Q value must therefore be studied across different patient groups, especially those diagnosed with cataracts, brain tumours, or even COVID-19. To address this issue, this paper first reviews the optics of the cornea and the relevance of the corneal Q value in ophthalmic practice, and examines corneal degenerations and their causes.
Thereafter, a detailed review of several noteworthy research studies examining the corneal Q value is performed. To do so, an elaborate database is created that lists the research works examined in this study and the key evidence derived from them, including the age, gender, and ethnicity of the eyes assessed, the control variables, the technology used, and more, together with the important findings and conclusions of each study. Next, the paper analyses and discusses the magnitude of the corneal Q value in various scenarios and the influence of different parameters on it. Future studies could use the database and the work presented here as references for designing visual optical products, for making informed decisions in clinical practice, and for deepening the understanding of the optical properties of the eye.
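As background to the review, the corneal meridian is commonly modelled as a conic section whose sag (height) z at radial distance r from the apex is determined by the apical radius of curvature R and the asphericity Q. A minimal sketch of this standard model, with illustrative values not taken from any of the studies reviewed:

```python
import math

def conic_sag(r, R, Q):
    """Sag z of a conic corneal meridian at radial distance r, given the
    apical radius of curvature R and the asphericity Q:
        z = r^2 / (R + sqrt(R^2 - (1 + Q) * r^2))
    Q < 0: prolate (typical cornea), Q = 0: sphere, Q > 0: oblate."""
    return r**2 / (R + math.sqrt(R**2 - (1 + Q) * r**2))

# Illustrative numbers: apical radius 7.8 mm, sag evaluated 3 mm from the apex
R, r = 7.8, 3.0
for Q in (-0.26, 0.0, 0.26):
    shape = "prolate" if Q < 0 else ("sphere" if Q == 0 else "oblate")
    print(f"Q = {Q:+.2f} ({shape}): z({r} mm) = {conic_sag(r, R, Q):.4f} mm")
```

A prolate cornea (Q < 0) flattens towards the periphery, so its sag at a given radius is smaller than that of a sphere with the same apical radius; this peripheral flattening is why the Q value matters for refractive surgery planning and intraocular lens selection.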
Affiliation(s)
- Noor T. Al-Sharify: Department of Electrical & Electronic Engineering, College of Engineering, Universiti Tenaga Nasional, Malaysia; Medical Instrumentation Engineering Department, Al-Esraa University College, Baghdad, Iraq
- Husam Yahya Nser: Department of Electrical & Electronic Engineering, College of Engineering, Universiti Tenaga Nasional, Malaysia
- Nebras H. Ghaeb: Biomedical Engineering Department, Al Khawarezmi Engineering College, University of Baghdad, Iraq
- Zainab T. Al-Sharify: Department of Pharmacy, Al Hikma University College, Baghdad, Iraq; School of Chemical Engineering, University of Birmingham, Edgbaston, B15 2TT, Birmingham, United Kingdom
- Ong Hang See: Department of Electrical & Electronic Engineering, College of Engineering, Universiti Tenaga Nasional, Malaysia
- Leong Yeng Weng: Department of Electrical & Electronic Engineering, College of Engineering, Universiti Tenaga Nasional, Malaysia
- Sura M. Ahmed: Department of Electrical & Electronic Engineering, College of Engineering, Universiti Tenaga Nasional, Malaysia
2
Goodman D, Zhu AY. Utility of artificial intelligence in the diagnosis and management of keratoconus: a systematic review. Frontiers in Ophthalmology 2024;4:1380701. PMID: 38984114; PMCID: PMC11182163; DOI: 10.3389/fopht.2024.1380701.
Abstract
Introduction The application of artificial intelligence (AI) systems in ophthalmology is rapidly expanding. Early detection and management of keratoconus is important for preventing disease progression and the need for corneal transplant. We review studies regarding the utility of AI in the diagnosis and management of keratoconus and other corneal ectasias. Methods We conducted a systematic search for relevant original, English-language research studies in the PubMed, Web of Science, Embase, and Cochrane databases from inception to October 31, 2023, using a combination of the following keywords: artificial intelligence, deep learning, machine learning, keratoconus, and corneal ectasia. Case reports, literature reviews, conference proceedings, and editorials were excluded. We extracted the following data from each eligible study: type of AI, input used for training, output, ground truth or reference, dataset size, availability of algorithm/model, availability of dataset, and major study findings. Results Ninety-three original research studies were included in this review, with dates of publication ranging from 1994 to 2023. The majority of studies concerned the use of AI in detecting keratoconus or subclinical keratoconus (n=61). Among studies regarding keratoconus diagnosis, the most common inputs were corneal topography, Scheimpflug-based corneal tomography, and anterior segment optical coherence tomography. This review also summarized 16 original research studies regarding AI-based assessment of severity and clinical features, 7 studies regarding the prediction of disease progression, and 6 studies regarding the characterization of treatment response. There were only three studies regarding the use of AI in identifying susceptibility genes involved in the etiology and pathogenesis of keratoconus.
Discussion Algorithms trained on Scheimpflug-based tomography appear to be promising tools for the early diagnosis of keratoconus, particularly in low-resource communities. Future studies could investigate the application of AI models trained on multimodal patient information for staging keratoconus severity and tracking disease progression.
3
Delsoz M, Madadi Y, Raja H, Munir WM, Tamm B, Mehravaran S, Soleimani M, Djalilian A, Yousefi S. Performance of ChatGPT in Diagnosis of Corneal Eye Diseases. Cornea 2024;43:664-670. PMID: 38391243; DOI: 10.1097/ico.0000000000003492.
Abstract
PURPOSE The aim of this study was to assess the capabilities of ChatGPT-4.0 and ChatGPT-3.5 for diagnosing corneal eye diseases based on case reports and to compare them with human experts. METHODS We randomly selected 20 cases of corneal diseases, including corneal infections, dystrophies, and degenerations, from a publicly accessible online database from the University of Iowa. We then input the text of each case description into ChatGPT-4.0 and ChatGPT-3.5 and asked for a provisional diagnosis. We finally evaluated the responses based on the correct diagnoses, compared them with the diagnoses made by 3 corneal specialists (human experts), and evaluated interobserver agreements. RESULTS The provisional diagnosis accuracy based on ChatGPT-4.0 was 85% (17 correct of 20 cases), whereas the accuracy of ChatGPT-3.5 was 60% (12 correct cases of 20). The accuracy of the 3 corneal specialists compared with ChatGPT-4.0 and ChatGPT-3.5 was 100% (20 cases, P = 0.23, P = 0.0033), 90% (18 cases, P = 0.99, P = 0.6), and 90% (18 cases, P = 0.99, P = 0.6), respectively. The interobserver agreement between ChatGPT-4.0 and ChatGPT-3.5 was 65% (13 cases), whereas the interobserver agreement between ChatGPT-4.0 and the 3 corneal specialists was 85% (17 cases), 80% (16 cases), and 75% (15 cases), respectively. However, the interobserver agreement between ChatGPT-3.5 and each of the 3 corneal specialists was 60% (12 cases). CONCLUSIONS The accuracy of ChatGPT-4.0 in diagnosing patients with various corneal conditions was markedly better than that of ChatGPT-3.5 and is promising for potential clinical integration. A balanced approach that combines artificial intelligence-generated insights with clinical expertise will be key to unveiling its full potential in eye care.
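The interobserver agreement percentages above are simple fractions of matching diagnoses; Cohen's kappa, which corrects such agreement for chance, is the related statistic often reported alongside them. A minimal sketch with hypothetical labels (not the study's data):

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of cases on which two raters give the same diagnosis."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each rater's
    marginal label frequencies."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)  # Counter returns 0 for missing labels
    p_e = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical provisional diagnoses for five cases from two raters
model  = ["keratoconus", "fuchs", "keratitis", "fuchs", "keratoconus"]
expert = ["keratoconus", "fuchs", "fuchs",     "fuchs", "keratoconus"]
print(percent_agreement(model, expert))       # 0.8
print(round(cohens_kappa(model, expert), 3))  # 0.667
```

Kappa is lower than raw agreement here (0.667 vs. 0.8) because two raters using the same few labels will agree on some cases by chance alone.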
Affiliation(s)
- Mohammad Delsoz: Department of Ophthalmology, Hamilton Eye Institute, University of Tennessee Health Science Center, Memphis, TN
- Yeganeh Madadi: Department of Ophthalmology, Hamilton Eye Institute, University of Tennessee Health Science Center, Memphis, TN
- Hina Raja: Department of Ophthalmology, Hamilton Eye Institute, University of Tennessee Health Science Center, Memphis, TN
- Wuqaas M Munir: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD
- Brendan Tamm: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD
- Shiva Mehravaran: Department of Biology, School of Computer, Mathematical, and Natural Sciences, Morgan State University, Baltimore, MD
- Mohammad Soleimani: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL; Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Ali Djalilian: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL
- Siamak Yousefi: Department of Ophthalmology, Hamilton Eye Institute, University of Tennessee Health Science Center, Memphis, TN; Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN
4
Yaraghi S, Khatibi T. Keratoconus disease classification with multimodel fusion and vision transformer: a pretrained model approach. BMJ Open Ophthalmol 2024;9:e001589. PMID: 38653536; PMCID: PMC11043764; DOI: 10.1136/bmjophth-2023-001589.
Abstract
OBJECTIVE Our objective is to develop a novel keratoconus image classification system that leverages multiple pretrained models and a transformer architecture to achieve state-of-the-art performance in detecting keratoconus. METHODS AND ANALYSIS Three pretrained models were used to extract features from the input images. These models have been trained on large datasets and have demonstrated strong performance in various computer vision tasks. The features extracted by the three pretrained models were fused using a feature fusion technique. This fusion aimed to combine the strengths of each model and capture a more comprehensive representation of the input images. The fused features were then used as input to a vision transformer, a powerful architecture that has shown excellent performance in image classification tasks. The vision transformer learnt to classify the input images as either indicative of keratoconus or not. The proposed method was applied to the Shahroud Cohort Eye collection and keratoconus detection dataset. The performance of the model was evaluated using standard evaluation metrics such as accuracy, precision, recall and F1 score. RESULTS The results demonstrated that the proposed model achieved higher accuracy than each model used individually. CONCLUSION The findings of this study suggest that the proposed approach can significantly improve the accuracy of image classification models for keratoconus detection. This approach can serve as an effective decision support system alongside physicians, aiding in the diagnosis of keratoconus and potentially reducing the need for invasive procedures such as corneal transplantation in severe cases.
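The fusion step described above amounts to concatenating the per-model feature vectors into a single representation before classification. A schematic sketch with stand-in feature vectors and a toy linear head (the actual pretrained backbones and vision-transformer classifier are not reproduced here):

```python
import math

def fuse(feature_vectors):
    """Late feature-level fusion: concatenate per-model feature vectors."""
    fused = []
    for v in feature_vectors:
        fused.extend(v)
    return fused

def logistic_score(x, w, b):
    """Toy stand-in for a classification head: sigmoid of a linear map."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Stand-ins for features extracted from one image by three pretrained models
f1 = [0.12, 0.80]        # backbone A embedding (dimension 2 for illustration)
f2 = [0.45, 0.31, 0.77]  # backbone B
f3 = [0.05]              # backbone C
x = fuse([f1, f2, f3])
p = logistic_score(x, w=[0.5] * len(x), b=-1.0)
print(len(x))  # 6
```

The fused vector preserves every component of each backbone's embedding, so a downstream classifier (here a toy logistic unit; in the paper, a vision transformer) can weigh evidence from all three models at once.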
Affiliation(s)
- Shokufeh Yaraghi: Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
- Toktam Khatibi: Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
5
Alzubaidi L, Salhi A, Fadhel MA, Bai J, Hollman F, Italia K, Pareyon R, Albahri AS, Ouyang C, Santamaría J, Cutbush K, Gupta A, Abbosh A, Gu Y. Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images. PLoS One 2024;19:e0299545. PMID: 38466693; PMCID: PMC10927121; DOI: 10.1371/journal.pone.0299545.
Abstract
Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications. However, previous methods for detecting shoulder abnormalities on X-ray images performed poorly and lacked transparency, owing to a shortage of training data and inadequate feature representation. This often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate the ImageNet domain mismatch, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves training pre-trained models on a large number of labelled X-ray images from various body parts and fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several ML classifiers. The proposed framework achieved an excellent accuracy rate of 99.2%, an F1 score of 99.2%, and a Cohen's kappa of 98.5%. Furthermore, the accuracy of the results was validated using three visualisation tools: gradient-weighted class activation mapping (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods as well as three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The proposed framework has proven effective and robust, improving generalisation and increasing trust in the final results.
Affiliation(s)
- Laith Alzubaidi: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia; Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia; Centre for Data Science, Queensland University of Technology, Brisbane, QLD, Australia; Akunah Medical Technology Pty Ltd, Brisbane, QLD, Australia
- Asma Salhi: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia; Akunah Medical Technology Pty Ltd, Brisbane, QLD, Australia
- Jinshuai Bai: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia; Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Freek Hollman: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Kristine Italia: Akunah Medical Technology Pty Ltd, Brisbane, QLD, Australia
- Roberto Pareyon: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- A. S. Albahri: Technical College, Imam Ja’afar Al-Sadiq University, Baghdad, Iraq
- Chun Ouyang: School of Information Systems, Queensland University of Technology, Brisbane, QLD, Australia
- Jose Santamaría: Department of Computer Science, University of Jaén, Jaén, Spain
- Kenneth Cutbush: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia; School of Medicine, The University of Queensland, Brisbane, QLD, Australia
- Ashish Gupta: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia; Akunah Medical Technology Pty Ltd, Brisbane, QLD, Australia; Greenslopes Private Hospital, Brisbane, QLD, Australia
- Amin Abbosh: School of Information Technology and Electrical Engineering, Brisbane, QLD, Australia
- Yuantong Gu: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia; Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
6
Delsoz M, Madadi Y, Munir WM, Tamm B, Mehravaran S, Soleimani M, Djalilian A, Yousefi S. Performance of ChatGPT in Diagnosis of Corneal Eye Diseases. medRxiv [Preprint] 2023:2023.08.25.23294635. PMID: 37720035; PMCID: PMC10500623; DOI: 10.1101/2023.08.25.23294635.
Abstract
Introduction To assess the capabilities of ChatGPT-4.0 and ChatGPT-3.5 for diagnosing corneal eye diseases based on case reports and compare them with human experts. Methods We randomly selected 20 cases of corneal diseases, including corneal infections, dystrophies, degenerations, and injuries, from a publicly accessible online database from the University of Iowa. We then input the text of each case description into ChatGPT-4.0 and ChatGPT-3.5 and asked for a provisional diagnosis. We finally evaluated the responses based on the correct diagnoses, compared them with the diagnoses of three cornea specialists (human experts), and evaluated interobserver agreements. Results The provisional diagnosis accuracy based on ChatGPT-4.0 was 85% (17 correct out of 20 cases), while the accuracy of ChatGPT-3.5 was 60% (12 correct cases out of 20). The accuracy of the three cornea specialists was 100% (20 cases), 90% (18 cases), and 90% (18 cases), respectively. The interobserver agreement between ChatGPT-4.0 and ChatGPT-3.5 was 65% (13 cases), while the interobserver agreement between ChatGPT-4.0 and the three cornea specialists was 85% (17 cases), 80% (16 cases), and 75% (15 cases), respectively. However, the interobserver agreement between ChatGPT-3.5 and each of the three cornea specialists was 60% (12 cases). Conclusions The accuracy of ChatGPT-4.0 in diagnosing patients with various corneal conditions was markedly better than that of ChatGPT-3.5 and promising for potential clinical integration.
Affiliation(s)
- Mohammad Delsoz: Hamilton Eye Institute, Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Yeganeh Madadi: Hamilton Eye Institute, Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Wuqaas M Munir: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA
- Brendan Tamm: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA
- Shiva Mehravaran: School of Computer, Mathematical, and Natural Sciences, Morgan State University, Baltimore, MD, USA
- Mohammad Soleimani: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois, USA; Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Ali Djalilian: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois, USA
- Siamak Yousefi: Hamilton Eye Institute, Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA; Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
7
Alammar Z, Alzubaidi L, Zhang J, Li Y, Lafta W, Gu Y. Deep Transfer Learning with Enhanced Feature Fusion for Detection of Abnormalities in X-ray Images. Cancers (Basel) 2023;15:4007. PMID: 37568821; PMCID: PMC10417687; DOI: 10.3390/cancers15154007.
Abstract
Medical image classification poses significant challenges in real-world scenarios. One major obstacle is the scarcity of labelled training data, which hampers the performance and generalisation of image-classification algorithms. Deep learning (DL) has shown remarkable performance, but it typically requires a large amount of labelled data to achieve optimal results, and gathering sufficient labelled data is often difficult and time-consuming in the medical domain. Transfer learning (TL) has played a pivotal role in reducing the time, cost, and need for a large number of labelled images. This paper presents a novel TL approach that aims to overcome the limitations and disadvantages of TL based on the ImageNet dataset, which belongs to a different domain. Our proposed TL approach involves training DL models on numerous medical images that are similar to the target dataset. These models were then fine-tuned using a small set of annotated medical images to leverage the knowledge gained from the pre-training phase. We specifically focused on medical X-ray imaging scenarios involving the humerus and wrist from the musculoskeletal radiographs (MURA) dataset, both of which pose significant challenges for accurate classification. The models trained with the proposed TL were used to extract features, which were subsequently fused to train several machine learning (ML) classifiers. We combined these diverse features to represent various relevant characteristics in a comprehensive way. Through extensive evaluation, our proposed TL and feature-fusion approach using ML classifiers achieved remarkable results. For the classification of the humerus, we achieved an accuracy of 87.85%, an F1-score of 87.63%, and a Cohen's kappa coefficient of 75.69%. For wrist classification, our approach achieved an accuracy of 85.58%, an F1-score of 82.70%, and a Cohen's kappa coefficient of 70.46%.
The results demonstrated that the models trained using our proposed TL approach outperformed those trained with ImageNet TL. We employed visualisation techniques to further validate these findings, including gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME). These visualisation tools provided additional evidence supporting the superior accuracy of models trained with our proposed TL approach compared to those trained with ImageNet TL. Furthermore, our proposed TL approach exhibited greater robustness across various experiments than ImageNet TL. Importantly, the proposed TL approach and the feature-fusion technique are not limited to specific tasks; they can be applied to various medical image applications, extending their utility and potential impact. To demonstrate reusability, a computed tomography (CT) case was adopted, and the results obtained from the proposed method showed improvements.
Affiliation(s)
- Zaenab Alammar: School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia; Centre for Data Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Laith Alzubaidi: Centre for Data Science, Queensland University of Technology, Brisbane, QLD 4000, Australia; School of Mechanical, Medical and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Jinglan Zhang: School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia; Centre for Data Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Yuefeng Li: School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Yuantong Gu: School of Mechanical, Medical and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia