1. Saavedra JP, Droppelmann G, Jorquera C, Feijoo F. Automated segmentation and classification of supraspinatus fatty infiltration in shoulder magnetic resonance image using a convolutional neural network. Front Med (Lausanne) 2024; 11:1416169. [PMID: 39290391] [PMCID: PMC11405335] [DOI: 10.3389/fmed.2024.1416169]
Abstract
Background: Goutallier fatty infiltration of the supraspinatus muscle is a critical finding in degenerative shoulder disorders. Deep learning research primarily relies on manual segmentation and labeling to detect this condition. Employing unsupervised training within a hybrid segmentation-and-classification framework could offer an efficient solution. Aim: To develop and assess a two-step deep learning model for detecting the region of interest and categorizing supraspinatus muscle fatty infiltration on magnetic resonance imaging (MRI) according to the Goutallier scale. Materials and methods: A retrospective study was performed from January 1, 2019 to September 20, 2020, using 900 T2-weighted MRI images with supraspinatus muscle fatty infiltration diagnoses. A model with two sequential neural networks was implemented and trained. The first sub-model automatically detects the region of interest using a U-Net model. The second sub-model performs a binary classification using the VGG-19 architecture. The model's performance was computed as the average of five-fold cross-validation. Loss, accuracy, Dice coefficient, AUROC, sensitivity, and specificity (95% CI) were reported. Results: Six hundred and six shoulder MRIs were analyzed. The Goutallier grades were distributed as follows: 0 (66.50%); 1 (18.81%); 2 (8.42%); 3 (3.96%); 4 (2.31%). The segmentation model achieved high accuracy (0.9977 ± 0.0002) and Dice score (0.9441 ± 0.0031), while the classification model also achieved high accuracy (0.9731 ± 0.0230), sensitivity (0.9000 ± 0.0980), specificity (0.9788 ± 0.0257), and AUROC (0.9903 ± 0.0092). Conclusion: The proposed two-step deep learning model demonstrated strong performance in both segmentation and classification tasks.
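The Dice coefficient reported for the segmentation step measures overlap between a predicted and a reference mask. A minimal illustrative sketch (the function name and flat 0/1 mask representation are our own, not from the paper):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) over flat binary masks.

    pred and target are equal-length sequences of 0/1 pixel labels;
    eps guards against division by zero when both masks are empty.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)
```

Identical masks score 1.0, disjoint masks score 0.0; the paper's reported 0.9441 thus indicates near-complete overlap with the manual reference.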
Collapse
Affiliation(s)
- Juan Pablo Saavedra
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
- Guillermo Droppelmann
- Clínica MEDS, Santiago, Chile
- Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Carlos Jorquera
- Facultad de Ciencias, Escuela de Nutrición y Dietética, Universidad Mayor, Santiago, Chile
- Felipe Feijoo
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
2. Alzubaidi L, Al-Dulaimi K, Salhi A, Alammar Z, Fadhel MA, Albahri AS, Alamoodi AH, Albahri OS, Hasan AF, Bai J, Gilliland L, Peng J, Branni M, Shuker T, Cutbush K, Santamaría J, Moreira C, Ouyang C, Duan Y, Manoufali M, Jomaa M, Gupta A, Abbosh A, Gu Y. Comprehensive review of deep learning in orthopaedics: Applications, challenges, trustworthiness, and fusion. Artif Intell Med 2024; 155:102935. [PMID: 39079201] [DOI: 10.1016/j.artmed.2024.102935]
Abstract
Deep learning (DL) in orthopaedics has gained significant attention in recent years. Previous studies have shown that DL can be applied to a wide variety of orthopaedic tasks, including fracture detection, bone tumour diagnosis, implant recognition, and evaluation of osteoarthritis severity. The utilisation of DL is expected to increase, owing to its ability to present accurate diagnoses more efficiently than traditional methods in many scenarios. This reduces the time and cost of diagnosis for patients and orthopaedic surgeons. To our knowledge, no study has comprehensively reviewed all aspects of DL currently used in orthopaedic practice. This review addresses this knowledge gap using articles from ScienceDirect, Scopus, IEEE Xplore, and Web of Science published between 2017 and 2023. The authors begin with the motivation for using DL in orthopaedics, including its ability to enhance diagnosis and treatment planning. The review then covers various applications of DL in orthopaedics, including fracture detection, detection of supraspinatus tears using MRI, osteoarthritis assessment, prediction of arthroplasty implant types, bone age assessment, and detection of joint-specific soft tissue disease. We also examine the challenges of implementing DL in orthopaedics, including the scarcity of training data and the lack of interpretability, as well as possible solutions to these common pitfalls. Our work highlights the requirements for achieving trustworthy outcomes with DL, including accuracy, explainability, and fairness in DL models. We pay particular attention to fusion techniques as one way to increase trustworthiness; these have also been used to address the common multimodality of orthopaedic data. Finally, we review the approval requirements set forth by the US Food and Drug Administration for DL applications. As such, we aim for this review to function as a guide for researchers developing a reliable DL application for orthopaedic tasks from scratch for use in the market.
Affiliation(s)
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia.
- Khamael Al-Dulaimi
- Computer Science Department, College of Science, Al-Nahrain University, Baghdad 10011, Iraq; School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Asma Salhi
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Zaenab Alammar
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Mohammed A Fadhel
- Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- A S Albahri
- Technical College, Imam Ja'afar Al-Sadiq University, Baghdad, Iraq
- A H Alamoodi
- Institute of Informatics and Computing in Energy, Universiti Tenaga Nasional, Kajang 43000, Malaysia
- O S Albahri
- Australian Technical and Management College, Melbourne, Australia
- Amjad F Hasan
- Faculty of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Jinshuai Bai
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Luke Gilliland
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Jing Peng
- Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Marco Branni
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Tristan Shuker
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Kenneth Cutbush
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén 23071, Spain
- Catarina Moreira
- Data Science Institute, University of Technology Sydney, Australia
- Chun Ouyang
- School of Information Systems, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Ye Duan
- School of Computing, Clemson University, Clemson, SC 29631, USA
- Mohamed Manoufali
- CSIRO, Kensington, WA 6151, Australia; School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Mohammad Jomaa
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Ashish Gupta
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Amin Abbosh
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Yuantong Gu
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
3. Kim SH, Yoo HJ, Yoon SH, Kim YT, Park SJ, Chai JW, Oh J, Chae HD. Development of a deep learning-based fully automated segmentation of rotator cuff muscles from clinical MR scans. Acta Radiol 2024; 65:1126-1132. [PMID: 39043149] [DOI: 10.1177/02841851241262325]
Abstract
BACKGROUND: Fatty infiltration and atrophy of the muscle after a rotator cuff (RC) tear are important in surgical decision-making and are linked to poor clinical outcomes after RC repair. An accurate and reliable quantitative method is needed to assess the entire RC musculature. PURPOSE: To develop a fully automated approach based on a deep neural network to segment RC muscles from clinical magnetic resonance imaging (MRI) scans. MATERIAL AND METHODS: In total, 94 shoulder MRI scans (mean age = 62.3 years) were utilized for the training and internal validation datasets, while an additional 20 MRI scans (mean age = 62.6 years) were collected from another institution for external validation. An orthopedic surgeon and a radiologist manually segmented muscles and bones as reference masks. Segmentation performance was evaluated using the Dice score, sensitivity, precision, and percent difference in muscle volume. In addition, performance was assessed by sex, age, and the presence of an RC tendon tear. RESULTS: In external validation, the average Dice score, sensitivity, precision, and percent difference in muscle volume were 0.920, 0.933, 0.912, and 4.58%, respectively. Prediction quality was consistent across the shoulder muscles except for the teres minor, where significant prediction errors were observed (0.831, 0.854, 0.835, and 10.88%, respectively). The segmentation performance of the algorithm was generally unaffected by age, sex, and the presence of RC tears. CONCLUSION: We developed a fully automated deep neural network for RC muscle and bone segmentation from clinical MRI scans with excellent performance.
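The per-muscle metrics above (sensitivity, precision, percent volume difference) all derive from voxel-level confusion counts between predicted and reference masks. A hedged sketch of that bookkeeping, with all names illustrative rather than from the paper:

```python
def voxel_metrics(pred, target):
    """Sensitivity, precision, and percent muscle-volume difference
    for equal-length flat binary masks (1 = muscle voxel)."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    vol_pred, vol_ref = sum(pred), sum(target)
    vol_diff_pct = abs(vol_pred - vol_ref) / vol_ref * 100 if vol_ref else 0.0
    return sensitivity, precision, vol_diff_pct
```

Note that volume difference can be near zero even when overlap is imperfect (false positives and false negatives cancel), which is why the study reports it alongside Dice rather than instead of it.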
Affiliation(s)
- Sae Hoon Kim
- Department of Orthopaedic Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Hye Jin Yoo
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- MEDICALIP Co. Ltd., Seoul, Republic of Korea
- Yong Tae Kim
- Department of Orthopaedic Surgery, Hallym University Dongtan Sacred Heart Hospital, Gyeonggi, Republic of Korea
- Sang Joon Park
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- MEDICALIP Co. Ltd., Seoul, Republic of Korea
- Jee Won Chai
- Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Jiseon Oh
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Hee Dong Chae
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
4. Iftikhar M, Saqib M, Zareen M, Mumtaz H. Artificial intelligence: revolutionizing robotic surgery: review. Ann Med Surg (Lond) 2024; 86:5401-5409. [PMID: 39238994] [PMCID: PMC11374272] [DOI: 10.1097/ms9.0000000000002426]
Abstract
Robotic surgery, known for its minimally invasive techniques and computer-controlled robotic arms, has revolutionized modern medicine by providing improved dexterity, visualization, and tremor reduction compared to traditional methods. The integration of artificial intelligence (AI) into robotic surgery has further advanced surgical precision, efficiency, and accessibility. This paper examines the current landscape of AI-driven robotic surgical systems, detailing their benefits, limitations, and future prospects. Initially, AI applications in robotic surgery focused on automating tasks like suturing and tissue dissection to enhance consistency and reduce surgeon workload. Present AI-driven systems incorporate functionalities such as image recognition, motion control, and haptic feedback, allowing real-time analysis of surgical field images and optimizing instrument movements for surgeons. The advantages of AI integration include enhanced precision, reduced surgeon fatigue, and improved safety. However, challenges such as high development costs, reliance on data quality, and ethical concerns about autonomy and liability hinder widespread adoption. Regulatory hurdles and workflow integration also present obstacles. Future directions for AI integration in robotic surgery include enhancing autonomy, personalizing surgical approaches, and refining surgical training through AI-powered simulations and virtual reality. Overall, AI integration holds promise for advancing surgical care, with potential benefits including improved patient outcomes and increased access to specialized expertise. Addressing challenges and promoting responsible adoption are essential for realizing the full potential of AI-driven robotic surgery.
5. Cheng C, Liang X, Guo D, Xie D. Application of Artificial Intelligence in Shoulder Pathology. Diagnostics (Basel) 2024; 14:1091. [PMID: 38893618] [PMCID: PMC11171621] [DOI: 10.3390/diagnostics14111091]
Abstract
Artificial intelligence (AI) refers to the science and engineering of creating intelligent machines that imitate and extend human intelligence. Given the ongoing multidisciplinary integration trend in modern medicine, numerous studies have investigated the power of AI to address orthopedic-specific problems. One particular area of investigation is shoulder pathology, which encompasses a range of disorders or abnormalities of the shoulder joint causing pain, inflammation, stiffness, weakness, and reduced range of motion. There has not yet been a comprehensive review of recent advancements in this field. Therefore, the purpose of this review is to evaluate current AI applications in shoulder pathology. It summarizes several crucial stages of clinical practice, including predictive models and prognosis, diagnosis, treatment, and physical therapy, and also discusses the challenges and future development of AI technology.
Affiliation(s)
- Cong Cheng
- Department of Orthopaedics, People’s Hospital of Longhua, Shenzhen 518000, China
- Department of Joint Surgery and Sports Medicine, Center for Orthopedic Surgery, Orthopedic Hospital of Guangdong Province, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China
- Xinzhi Liang
- Department of Joint Surgery and Sports Medicine, Center for Orthopedic Surgery, Orthopedic Hospital of Guangdong Province, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China
- Dong Guo
- Department of Joint Surgery and Sports Medicine, Center for Orthopedic Surgery, Orthopedic Hospital of Guangdong Province, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China
- Denghui Xie
- Department of Joint Surgery and Sports Medicine, Center for Orthopedic Surgery, Orthopedic Hospital of Guangdong Province, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China
- Guangdong Provincial Key Laboratory of Bone and Joint Degeneration Diseases, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China
6. Chen W, Lim LJR, Lim RQR, Yi Z, Huang J, He J, Yang G, Liu B. Artificial intelligence powered advancements in upper extremity joint MRI: A review. Heliyon 2024; 10:e28731. [PMID: 38596104] [PMCID: PMC11002577] [DOI: 10.1016/j.heliyon.2024.e28731]
Abstract
Magnetic resonance imaging (MRI) is an indispensable medical imaging technique in musculoskeletal medicine. Modern MRI techniques achieve superior high-quality multiplanar imaging of soft tissue and skeletal pathologies without the harmful effects of ionizing radiation. Current limitations of MRI include long acquisition times, artifacts, and noise. In addition, it is often challenging to distinguish abutting or closely applied soft tissue structures with similar signal characteristics. In the past decade, artificial intelligence (AI) has been widely employed in musculoskeletal MRI to help reduce image acquisition time and improve image quality. Apart from reducing medical costs, AI can assist clinicians in diagnosing diseases more accurately, helping to formulate appropriate treatment plans and ultimately improving patient care. This review summarizes current research on and applications of AI in musculoskeletal MRI, particularly the advancement of deep learning in identifying the structure and lesions of upper extremity joints in MRI images.
Affiliation(s)
- Wei Chen
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
- Lincoln Jian Rong Lim
- Department of Medical Imaging, Western Health, Footscray Hospital, Victoria, Australia
- Department of Surgery, The University of Melbourne, Victoria, Australia
- Rebecca Qian Ru Lim
- Department of Hand & Reconstructive Microsurgery, Singapore General Hospital, Singapore
- Zhe Yi
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
- Jiaxing Huang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jia He
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Ge Yang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Bo Liu
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
7. Alipour E, Chalian M, Pooyan A, Azhideh A, Shomal Zadeh F, Jahanian H. Automatic MRI-based rotator cuff muscle segmentation using U-Nets. Skeletal Radiol 2024; 53:537-545. [PMID: 37698626] [DOI: 10.1007/s00256-023-04447-9]
Abstract
BACKGROUND: The rotator cuff (RC) is a crucial anatomical element of the shoulder joint, facilitating an extensive range of motion while maintaining joint stability. Comprising the subscapularis, infraspinatus, supraspinatus, and teres minor muscles, the RC plays an integral role in shoulder functionality. RC injuries are prevalent, incapacitating conditions that affect approximately 8% of the adult population in the USA. Segmentation of these muscles provides valuable anatomical information for evaluating muscle quality and allows for better treatment planning. MATERIALS AND METHODS: We developed a model based on a residual deep convolutional encoder-decoder U-Net to segment RC muscles on oblique sagittal T1-weighted MRI images. Our data consisted of shoulder MRIs from a cohort of 157 individuals: individuals without an RC tendon tear (N=79) and patients with a partial RC tendon tear (N=78). We evaluated different modeling approaches; model performance was measured by the Dice coefficient on the held-out test set. RESULTS: The best-performing model's median Dice coefficient was 89% (Q1: 85%, Q3: 96%) for the supraspinatus, 86% (Q1: 82%, Q3: 88%) for the subscapularis, 86% (Q1: 82%, Q3: 90%) for the infraspinatus, and 78% (Q1: 70%, Q3: 81%) for the teres minor, indicating a satisfactory level of accuracy. CONCLUSION: Our computational models delineated RC muscles with a level of precision akin to that of experienced radiologists. As hypothesized, the algorithm performed best when segmenting muscles with well-defined boundaries, namely the supraspinatus, subscapularis, and infraspinatus.
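This study summarizes per-subject Dice scores as a median with Q1/Q3 quartiles rather than mean ± SD, which is more robust to skewed score distributions. With Python's standard library that summary might look like the following (function name and sample data are illustrative):

```python
import statistics

def median_iqr(scores):
    """Median and (Q1, Q3) of a list of per-subject Dice scores,
    using statistics.quantiles' default 'exclusive' method."""
    q1, med, q3 = statistics.quantiles(scores, n=4)
    return med, (q1, q3)
```

Reporting quartiles, as done above for each muscle, conveys the spread of per-subject performance that a single mean would hide.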
Affiliation(s)
- Ehsan Alipour
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA
- Majid Chalian
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
- Atefe Pooyan
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
- Arash Azhideh
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
- Firoozeh Shomal Zadeh
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
8. Velasquez Garcia A, Hsu KL, Marinakis K. Advancements in the diagnosis and management of rotator cuff tears. The role of artificial intelligence. J Orthop 2024; 47:87-93. [PMID: 38059047] [PMCID: PMC10696306] [DOI: 10.1016/j.jor.2023.11.011]
Abstract
Background: This review examined the role of artificial intelligence (AI) in the diagnosis and management of rotator cuff tears (RCTs). Methods: A literature search was conducted in October 2023 using the PubMed (MEDLINE), SCOPUS, and EMBASE databases, including only peer-reviewed studies; relevant articles on AI technology in RCTs underwent critical analysis. Results: AI is transforming RCT management through faster and more precise identification and assessment, using algorithms that facilitate segmentation, quantification, and classification of RCTs across various imaging modalities. Algorithms focusing on preoperative factors have been developed to assess RCT reparability for personalized treatment planning and outcome prediction. AI also aids in exercise classification and promotes patient adherence during at-home physiotherapy. Despite promising advancements, challenges in data quality and symptom integration persist. Future research should refine AI algorithms, expand their integration into various imaging techniques, and explore their roles in postoperative care and surgical decision-making. Conclusions: AI-driven solutions improve diagnostic accuracy and have the potential to influence treatment planning and postoperative outcomes through automated RCT analysis of medical imaging. Integrating high-quality datasets and clinical symptoms into AI models can enhance their reliability, and current algorithms can be further refined, integrated into other imaging techniques, and explored in surgical decision-making and postoperative care.
Affiliation(s)
- Ausberto Velasquez Garcia
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Clínica Universidad de los Andes, Department of Orthopedic Surgery, Santiago, Chile
- Kai-Lan Hsu
- Department of Orthopaedic Surgery, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Department of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
9. Lee KS, Jung SH, Kim DH, Chung SW, Yoon JP. Artificial intelligence- and computer-assisted navigation for shoulder surgery. J Orthop Surg (Hong Kong) 2024; 32:10225536241243166. [PMID: 38546214] [DOI: 10.1177/10225536241243166]
Abstract
Background: Over the last few decades, shoulder surgery has undergone rapid advancement, with ongoing exploration and development of innovative technological approaches. In the coming years, technologies such as robot-assisted surgery, virtual reality, artificial intelligence, patient-specific instrumentation, and innovative perioperative and preoperative planning tools will continue to fuel a revolution in the medical field, pushing it toward new frontiers; shoulder surgery, in particular, stands to experience significant breakthroughs. Main body: Recent advancements and technological innovations in the field were comprehensively analyzed, with the aim of providing a detailed overview of the current landscape and emphasizing the roles of these technologies. Computer-assisted surgery utilizing robotic- or image-guided technologies is widely adopted across orthopedic specialties. Its most advanced components are navigation and robotic systems, whose functions and applications are continuously expanding. Surgical navigation requires a visual system that presents real-time positional data on surgical instruments or implants in relation to the target bone, displayed on a computer monitor. Surgical planning with navigation systems falls into three primary categories: the first uses volumetric images, such as ultrasound echograms, computed tomography, and magnetic resonance images; the second is based on intraoperative fluoroscopic images; and the third incorporates kinetic information about joints or morphometric data about the target bones acquired intraoperatively. Conclusion: The rapid integration of artificial intelligence and deep learning into the medical domain is having a significant and transformative influence, and numerous studies of deep learning-based diagnostics in orthopedics have demonstrated remarkable performance.
Affiliation(s)
- Kang-San Lee
- Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University, Daegu, Korea
- Seung Ho Jung
- Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University, Daegu, Korea
- Dong-Hyun Kim
- Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University, Daegu, Korea
- Seok Won Chung
- Department of Orthopaedic Surgery, School of Medicine, Konkuk University Medical Center, Seoul, Korea
- Jong Pil Yoon
- Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University, Daegu, Korea
10. Lee KC, Cho Y, Ahn KS, Park HJ, Kang YS, Lee S, Kim D, Kang CH. Deep-Learning-Based Automated Rotator Cuff Tear Screening in Three Planes of Shoulder MRI. Diagnostics (Basel) 2023; 13:3254. [PMID: 37892075] [PMCID: PMC10606560] [DOI: 10.3390/diagnostics13203254]
Abstract
This study aimed to develop a screening model for rotator cuff tear detection in all three planes of routine shoulder MRI using a deep neural network. A total of 794 shoulder MRI scans (374 men and 420 women; aged 59 ± 11 years) were utilized. Three musculoskeletal radiologists labeled the rotator cuff tears. A YOLO v8 rotator cuff tear detection model was then trained; training was performed with all imaging planes simultaneously and with axial, coronal, and sagittal images separately. The performance of the models was evaluated and compared using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The AUC was highest when using all imaging planes (0.94; p < 0.05). Among single imaging planes, the axial plane showed the best performance (AUC: 0.71), followed by the sagittal (AUC: 0.70) and coronal (AUC: 0.68) planes. Sensitivity and accuracy were also highest in the model trained on all planes (0.98 and 0.96, respectively). Thus, deep-learning-based automatic rotator cuff tear detection can be useful for detecting torn areas in various regions of the rotator cuff in all three imaging planes.
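The AUC figures above have a probabilistic reading: the chance that a randomly chosen tear-positive scan receives a higher model score than a randomly chosen tear-negative one. A small illustrative sketch of that pairwise form (O(n²), fine for small samples; names are ours, not the paper's):

```python
def auc_pairwise(scores, labels):
    """Area under the ROC curve via pairwise comparisons:
    P(score_pos > score_neg), with ties counted as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Under this reading, the all-plane model's 0.94 means a positive scan outranks a negative one in roughly 94% of such pairs, versus about 71% for the best single plane.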
Affiliation(s)
- Kyu-Chong Lee
- Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea (C.H.K.)
- Yongwon Cho
- Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea (C.H.K.)
- Advanced Medical Imaging Institute, Korea University College of Medicine, Seoul 02841, Republic of Korea
- AI Center, Korea University Anam Hospital, Seoul 02841, Republic of Korea
- Kyung-Sik Ahn
- Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea (C.H.K.)
- Advanced Medical Imaging Institute, Korea University College of Medicine, Seoul 02841, Republic of Korea
- AI Center, Korea University Anam Hospital, Seoul 02841, Republic of Korea
- Hyun-Joon Park
- Institute for Healthcare Service Innovation, College of Medicine, Korea University, Seoul 02841, Republic of Korea; (H.-J.P.); (Y.-S.K.)
- Young-Shin Kang
- Institute for Healthcare Service Innovation, College of Medicine, Korea University, Seoul 02841, Republic of Korea; (H.-J.P.); (Y.-S.K.)
- Sungshin Lee
- Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea (C.H.K.)
- Chang Ho Kang
- Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea (C.H.K.)
- Advanced Medical Imaging Institute, Korea University College of Medicine, Seoul 02841, Republic of Korea
11
Rodriguez HC, Rust B, Hansen PY, Maffulli N, Gupta M, Potty AG, Gupta A. Artificial Intelligence and Machine Learning in Rotator Cuff Tears. Sports Med Arthrosc Rev 2023; 31:67-72. [PMID: 37976127] [DOI: 10.1097/jsa.0000000000000371]
Abstract
Rotator cuff tears (RCTs) negatively impact patient well-being. Artificial intelligence (AI) is emerging as a promising tool in medical decision-making. Within AI, deep learning makes it possible to solve complex tasks autonomously. This review assesses the current and potential applications of AI in the management of RCTs, focusing on diagnostic utility, challenges, and future perspectives. AI demonstrates promise in RCT diagnosis, aiding clinicians in interpreting complex imaging data. Deep learning frameworks, particularly convolutional neural network architectures, exhibit remarkable diagnostic accuracy in detecting RCTs on magnetic resonance imaging. Advanced segmentation algorithms improve anatomic visualization and surgical planning. AI-assisted radiograph interpretation proves effective in ruling out full-thickness tears. Machine learning models predict RCT diagnosis and postoperative outcomes, enhancing personalized patient care. Challenges include small data sets and classification complexities, especially for partial-thickness tears. Current applications of AI in RCT management are promising yet experimental. The potential of AI to revolutionize personalized, efficient, and accurate care for RCT patients is evident. The integration of AI with clinical expertise holds potential to redefine treatment strategies and optimize patient outcomes. Further research, larger data sets, and collaborative efforts are essential to unlock the transformative impact of AI in orthopedic surgery and RCT management.
Affiliation(s)
- Hugo C Rodriguez
- Department of Orthopaedic Surgery, Larkin Community Hospital, South Miami
- Department of Orthopaedic Surgery, Hospital for Special Surgery Florida, West Palm Beach
- Brandon Rust
- Nova Southeastern University, Dr. Kiran Patel College of Osteopathic Medicine, Fort Lauderdale
- Payton Yerke Hansen
- Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton, FL
- Nicola Maffulli
- Department of Musculoskeletal Disorders, School of Medicine and Surgery, University of Salerno, Fisciano
- San Giovanni di Dio e Ruggi D'Aragona Hospital "Clinica Ortopedica" Department, Hospital of Salerno, Salerno, Italy
- Barts and the London School of Medicine and Dentistry, Centre for Sports and Exercise Medicine, Queen Mary University of London, London
- School of Pharmacy and Bioengineering, Keele University School of Medicine, Stoke on Trent, UK
- Manu Gupta
- Polar Aesthetics Dental & Cosmetic Centre, Noida, Uttar Pradesh
- Anish G Potty
- South Texas Orthopaedic Research Institute (STORI Inc.), Laredo, TX
- Ashim Gupta
- Regenerative Orthopaedics, Noida, India
- South Texas Orthopaedic Research Institute (STORI Inc.), Laredo, TX
- Future Biologics
- BioIntegrate, Lawrenceville, GA
12
Guo D, Liu X, Wang D, Tang X, Qin Y. Development and clinical validation of deep learning for auto-diagnosis of supraspinatus tears. J Orthop Surg Res 2023; 18:426. [PMID: 37308995] [DOI: 10.1186/s13018-023-03909-z]
Abstract
BACKGROUND Accurately diagnosing supraspinatus tears based on magnetic resonance imaging (MRI) is challenging and time-consuming owing to the variability in experience level among musculoskeletal radiologists and orthopedic surgeons. We developed a deep learning-based model for automatically diagnosing supraspinatus tears (STs) using shoulder MRI and validated its feasibility in clinical practice. MATERIALS AND METHODS A total of 701 shoulder MRI data (2804 images) were retrospectively collected for model training and internal testing. An additional 69 shoulder MRIs (276 images) were collected from patients who underwent shoulder arthroscopy and constituted the surgery test set for clinical validation. Two advanced convolutional neural networks (CNN) based on Xception were trained and optimized to detect STs. The diagnostic performance of the CNN was evaluated according to its sensitivity, specificity, precision, accuracy, and F1 score. Subgroup analyses were performed to verify its robustness, and we also compared the CNN's performance with that of 4 radiologists and 4 orthopedic surgeons on the surgery and internal test sets. RESULTS Optimal diagnostic performance was achieved with the 2D model, which yielded F1-scores of 0.824 and 0.75, and areas under the ROC curve of 0.921 (95% confidence interval, 0.841-1.000) and 0.882 (0.817-0.947), on the surgery and internal test sets, respectively. In the subgroup analysis, the 2D CNN model demonstrated a sensitivity of 0.33-1.000 and 0.625-1.000 for different degrees of tears on the surgery and internal test sets, and there was no significant performance difference between 1.5 and 3.0 T data. Compared with the eight clinicians, the 2D CNN model exhibited better diagnostic performance than the junior clinicians and was equivalent to that of the senior clinicians.
CONCLUSIONS The proposed 2D CNN model achieved adequate and efficient automatic diagnosis of STs, with performance comparable to that of junior musculoskeletal radiologists and orthopedic surgeons. It might be conducive to assisting less-experienced radiologists, especially in community settings lacking consulting experts.
Affiliation(s)
- Deming Guo
- Orthopaedic Medical Center, The Second Hospital of Jilin University, Changchun, 130041, People's Republic of China
- Jilin Provincial Key Laboratory of Orthopaedics, Changchun, People's Republic of China
- Xiaoning Liu
- Orthopaedic Medical Center, The Second Hospital of Jilin University, Changchun, 130041, People's Republic of China
- Dawei Wang
- Beijing Infervision Technology Co Ltd, Beijing, People's Republic of China
- Xiongfeng Tang
- Orthopaedic Medical Center, The Second Hospital of Jilin University, Changchun, 130041, People's Republic of China
- Jilin Provincial Key Laboratory of Orthopaedics, Changchun, People's Republic of China
- Yanguo Qin
- Orthopaedic Medical Center, The Second Hospital of Jilin University, Changchun, 130041, People's Republic of China
- Jilin Provincial Key Laboratory of Orthopaedics, Changchun, People's Republic of China
13
Saavedra JP, Droppelmann G, García N, Jorquera C, Feijoo F. High-accuracy detection of supraspinatus fatty infiltration in shoulder MRI using convolutional neural network algorithms. Front Med (Lausanne) 2023; 10:1070499. [PMID: 37305126] [PMCID: PMC10248442] [DOI: 10.3389/fmed.2023.1070499]
Abstract
Background Supraspinatus muscle fatty infiltration (SMFI) is a crucial MRI shoulder finding for determining the patient's prognosis. Clinicians have used the Goutallier classification to diagnose it. Deep learning algorithms have been demonstrated to have higher accuracy than traditional methods. Aim To train convolutional neural network models to categorize the SMFI as a binary diagnosis based on Goutallier's classification using shoulder MRIs. Methods A retrospective study was performed. MRI and medical records from patients with SMFI diagnosis from January 1st, 2019, to September 20th, 2020, were selected. 900 T2-weighted, Y-view shoulder MRIs were evaluated. The supraspinatus fossa was automatically cropped using segmentation masks. A balancing technique was implemented. The five Goutallier grades were grouped into two classes under five binary scenarios as follows, A: 0, 1 vs. 3, 4; B: 0, 1 vs. 2, 3, 4; C: 0, 1 vs. 2; D: 0, 1, 2 vs. 3, 4; E: 2 vs. 3, 4. The VGG-19, ResNet-50, and Inception-v3 architectures were trained as backbone classifiers. An average of three 10-fold cross-validation processes was used to evaluate model performance. AU-ROC, sensitivity, and specificity with 95% confidence intervals were used. Results Overall, 606 shoulder MRIs were analyzed. The Goutallier distribution was as follows: 0 = 403; 1 = 114; 2 = 51; 3 = 24; 4 = 14. For case A, the VGG-19 model demonstrated an AU-ROC of 0.991 ± 0.003 (accuracy, 0.973 ± 0.006; sensitivity, 0.947 ± 0.039; specificity, 0.975 ± 0.006). B, VGG-19, 0.961 ± 0.013 (0.925 ± 0.010; 0.847 ± 0.041; 0.939 ± 0.011). C, VGG-19, 0.935 ± 0.022 (0.900 ± 0.015; 0.750 ± 0.078; 0.914 ± 0.014). D, VGG-19, 0.977 ± 0.007 (0.942 ± 0.012; 0.925 ± 0.056; 0.942 ± 0.013). E, VGG-19, 0.861 ± 0.050 (0.779 ± 0.054; 0.706 ± 0.088; 0.831 ± 0.061). Conclusion Convolutional neural network models demonstrated high accuracy in diagnosing SMFI on MRI.
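The binary metrics reported throughout these entries (AU-ROC, sensitivity, specificity) can be reproduced from raw model scores with a few lines of standard-library code; the sketch below is a generic illustration, not code from the cited study.

```python
def auc_roc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold=0.5):
    """Sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP) at a fixed threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

In practice such metrics are averaged over cross-validation folds, as done in the study above.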
Affiliation(s)
- Juan Pablo Saavedra
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
- Guillermo Droppelmann
- Research Center on Medicine, Exercise, Sport and Health, MEDS Clinic, Santiago, Chile
- Health Sciences PhD Program, Universidad Católica de Murcia UCAM, Murcia, Spain
- Principles and Practice of Clinical Research (PPCR), Harvard T. H. Chan School of Public Health, Boston, MA, United States
- Nicolás García
- Research Center on Medicine, Exercise, Sport and Health, MEDS Clinic, Santiago, Chile
- Carlos Jorquera
- Facultad de Ciencias, Escuela de Nutrición y Dietética, Universidad Mayor, Santiago, Chile
- Felipe Feijoo
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
14
Gupta P, Haeberle HS, Zimmer ZR, Levine WN, Williams RJ, Ramkumar PN. Artificial intelligence-based applications in shoulder surgery leaves much to be desired: a systematic review. JSES Rev Rep Tech 2023; 3:189-200. [PMID: 37588443] [PMCID: PMC10426484] [DOI: 10.1016/j.xrrt.2022.12.006]
Abstract
Background Artificial intelligence (AI) aims to simulate human intelligence using automated computer algorithms. There has been a rapid increase in research applying AI to various subspecialties of orthopedic surgery, including shoulder surgery. The purpose of this review is to assess the scope and validity of current clinical AI applications in the shoulder surgery literature. Methods A systematic literature review was conducted using PubMed for all articles published between January 1, 2010 and June 10, 2022. The search query used the following terms: (artificial intelligence OR machine learning OR deep learning) AND (shoulder OR shoulder surgery OR rotator cuff). All studies that examined AI application models in shoulder surgery were included and evaluated for model performance and validation (internal, external, or both). Results A total of 45 studies were included in the final analysis. Eighteen studies involved shoulder arthroplasty, 13 rotator cuff, and 14 other areas. Studies applying AI to shoulder surgery primarily involved (1) automated imaging analysis, including identifying rotator cuff tears and shoulder implants, and (2) risk prediction analyses, including perioperative complications, functional outcomes, and patient satisfaction. The highest model performance (area under the curve) ranged from 0.681 (poor) to 1.00 (perfect). Only 2 studies reported external validation. Conclusion Applications of AI in the field of shoulder surgery are expanding rapidly and offer patient-specific risk stratification for shared decision-making and process automation for resource preservation. However, model performance is modest and external validation remains to be demonstrated, suggesting increased scientific rigor is warranted prior to deploying AI-based clinical applications.
Affiliation(s)
- Puneet Gupta
- Department of Orthopaedic Surgery, George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Department of Orthopaedic Surgery, Columbia University Irving Medical Center, New York, NY, USA
- Heather S. Haeberle
- Department of Orthopaedic Surgery, Columbia University Irving Medical Center, New York, NY, USA
- Zachary R. Zimmer
- Department of Orthopaedic Surgery, George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- William N. Levine
- Department of Orthopaedic Surgery, Columbia University Irving Medical Center, New York, NY, USA
- Riley J. Williams
- Institute for Cartilage Repair, Hospital for Special Surgery, New York, NY, USA
- Prem N. Ramkumar
- Institute for Cartilage Repair, Hospital for Special Surgery, New York, NY, USA
- Long Beach Orthopaedic Institute, Long Beach, CA, USA
15
Lee SH, Lee J, Oh KS, Yoon JP, Seo A, Jeong Y, Chung SW. Automated 3-dimensional MRI segmentation for the posterosuperior rotator cuff tear lesion using deep learning algorithm. PLoS One 2023; 18:e0284111. [PMID: 37200275] [DOI: 10.1371/journal.pone.0284111]
Abstract
INTRODUCTION Rotator cuff tear (RCT) is a challenging and common musculoskeletal disease. Magnetic resonance imaging (MRI) is a commonly used diagnostic modality for RCT, but the interpretation of the results is tedious and has some reliability issues. In this study, we aimed to evaluate the accuracy and efficacy of 3-dimensional (3D) MRI segmentation for RCT using a deep learning algorithm. METHODS A 3D U-Net convolutional neural network (CNN) was developed to detect, segment, and visualize RCT lesions in 3D, using MRI data from 303 patients with RCTs. The RCT lesions were labeled by two shoulder specialists in the entire MR image using in-house developed software. The MRI-based 3D U-Net CNN was trained after augmentation of the training dataset and tested using randomly selected test data (training:validation:test data ratio was 6:2:2). The segmented RCT lesion was visualized in a three-dimensional reconstructed image, and the performance of the 3D U-Net CNN was evaluated using the Dice coefficient, sensitivity, specificity, precision, F1-score, and Youden index. RESULTS The deep learning algorithm using a 3D U-Net CNN successfully detected, segmented, and visualized the area of RCT in 3D. The model's performance reached a Dice coefficient of 94.3%, sensitivity of 97.1%, specificity of 95.0%, precision of 84.9%, F1-score of 90.5%, and a Youden index of 91.8%. CONCLUSION The proposed model for 3D segmentation of RCT lesions using MRI data showed overall high accuracy and successful 3D visualization. Further studies are necessary to determine the feasibility of its clinical application and whether its use could improve care and outcomes.
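The Dice coefficient used to score segmentation in this and several other entries has a very compact definition; a generic sketch (not the authors' code), assuming binary masks flattened to 0/1 sequences:

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks.
    Returns 1.0 when both masks are empty (a common convention)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0
```

Unlike plain pixel accuracy, Dice ignores the (usually dominant) background class, which is why segmentation papers report it alongside sensitivity and specificity.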
Affiliation(s)
- Su Hyun Lee
- Department of Orthopaedic Surgery, Seoul Red Cross Hospital, Seoul, Korea
- JiHwan Lee
- Department of Orthopedic Surgery, Myongji Hospital, Goyang-si, Korea
- Kyung-Soo Oh
- Department of Orthopaedic Surgery, Konkuk University School of Medicine, Seoul, Korea
- Jong Pil Yoon
- Department of Orthopaedic Surgery, Kyungpook National University College of Medicine, Daegu, Korea
- Anna Seo
- SEEANN Solution, Yeonsu-gu, Incheon, Korea
- Seok Won Chung
- Department of Orthopaedic Surgery, Konkuk University School of Medicine, Seoul, Korea
16
Familiari F, Galasso O, Massazza F, Mercurio M, Fox H, Srikumaran U, Gasparini G. Artificial Intelligence in the Management of Rotator Cuff Tears. Int J Environ Res Public Health 2022; 19:16779. [PMID: 36554660] [PMCID: PMC9779744] [DOI: 10.3390/ijerph192416779]
Abstract
Technological innovation is a key component of orthopedic surgery. Artificial intelligence (AI), which describes the ability of computers to process massive data and "learn" from it to produce outputs that mirror human cognition and problem solving, may become an important tool for orthopedic surgeons in the future. AI may be able to improve decision making, both clinically and surgically, by integrating additional data-driven problem solving into practice. The aim of this article is to review the current applications of AI in the management of rotator cuff tears. The article discusses various stages of the clinical course: predictive models and prognosis, diagnosis, intraoperative applications, and postoperative care and rehabilitation. Throughout the article, which is a review in terms of study design, we introduce the concept of AI in rotator cuff tears and provide examples of how these tools can impact clinical practice and patient care. Though many advancements in AI have been made regarding the evaluation of rotator cuff tears, particularly in the realm of diagnostic imaging, further advancements are required before these tools become a regular facet of daily clinical practice.
Affiliation(s)
- Filippo Familiari
- Department of Orthopaedic and Trauma Surgery, “Mater Domini” University Hospital, “Magna Græcia” University, 88100 Catanzaro, Italy
- Olimpio Galasso
- Department of Orthopaedic and Trauma Surgery, “Mater Domini” University Hospital, “Magna Græcia” University, 88100 Catanzaro, Italy
- Federica Massazza
- Department of Orthopaedic and Trauma Surgery, “Mater Domini” University Hospital, “Magna Græcia” University, 88100 Catanzaro, Italy
- Michele Mercurio
- Department of Orthopaedic and Trauma Surgery, “Mater Domini” University Hospital, “Magna Græcia” University, 88100 Catanzaro, Italy
- Henry Fox
- Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Uma Srikumaran
- Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Giorgio Gasparini
- Department of Orthopaedic and Trauma Surgery, “Mater Domini” University Hospital, “Magna Græcia” University, 88100 Catanzaro, Italy
17
Liu X, Zhang X, Jing K, Yang Y, Li Y, Niu J, Guo S. Novel Role of Biomedical Sensors and CT/MRI Scanning Image Segmentation Algorithms in Orthopedic Diseases. J Biomed Nanotechnol 2022. [DOI: 10.1166/jbn.2022.3480]
Abstract
We aimed to evaluate the efficacy of medical image segmentation algorithms in conjunction with biomedical sensors for the diagnosis and treatment of orthopedic diseases. Two-dimensional image data of orthopedic patients were obtained by CT/MRI scanning along with biomedical sensors. Patients were divided into a control group (n = 140) and an experimental group (n = 106). The control group received the traditional orthopedic surgical analysis method, while the experimental group was managed with medical image segmentation, biomedical sensors, and MRI scanning for treatment/surgery. Performance differed markedly between the two groups (P < 0.05). The analgesic and sedative effects in the experimental group were observed at 2 h, 6 h, and after 12 h, and the experimental group exhibited better results with statistical significance (P < 0.05). The experimental group had better outcomes for fracture, fracture nonunion, osteoporosis, and femoral head necrosis, and a substantial difference across disease classifications was observed between the two groups (P < 0.05). There was a considerable gap between the two groups in the rate of subsequent operations, which was much higher in the experimental group than in the control group (P < 0.05). The proposed non-invasive medical treatment methods can enhance the accuracy of orthopedic surgeries.
18
Droppelmann G, Tello M, García N, Greene C, Jorquera C, Feijoo F. Lateral elbow tendinopathy and artificial intelligence: Binary and multilabel findings detection using machine learning algorithms. Front Med (Lausanne) 2022; 9:945698. [PMID: 36213676] [PMCID: PMC9537568] [DOI: 10.3389/fmed.2022.945698]
Abstract
Background Ultrasound (US) is a valuable technique to detect degenerative findings and intrasubstance tears in lateral elbow tendinopathy (LET). Machine learning methods can support this radiological diagnosis. Aim To assess multilabel classification models using machine learning to detect degenerative findings and intrasubstance tears in US images with LET diagnosis. Materials and methods A retrospective study was performed. US images and medical records from patients with LET diagnosis from January 1st, 2017, to December 30th, 2018, were selected. Datasets were built for training and testing the models. For image analysis, feature extraction, texture characteristics, intensity distribution, pixel-pixel co-occurrence patterns, and scale granularity were implemented. Six different supervised learning models were implemented for binary and multilabel classification. All models were trained to classify four tendon findings (hypoechogenicity, neovascularity, enthesopathy, and intrasubstance tear). Accuracy indicators and their confidence intervals (CI) were obtained for all models following a K-fold repeated cross-validation method. To measure multilabel prediction, multilabel accuracy, sensitivity, specificity, and the receiver operating characteristic (ROC) with 95% CI were used. Results A total of 30,007 US images (4,324 exams, 2,917 patients) were included in the analysis. The random forest (RF) model presented the highest mean values of area under the curve (AUC), sensitivity, and specificity for each degenerative finding in the binary classification. The AUC and sensitivity showed the best performance for intrasubstance tear, with 0.991 [95% CI, 0.99, 0.99] and 0.775 [95% CI, 0.77, 0.77], respectively, whereas specificity was highest for hypoechogenicity, with 0.821 [95% CI, 0.82, 0.82]. In the multilabel classifier, RF also presented the highest performance: an accuracy of 0.772 [95% CI, 0.771, 0.773], a macro-averaged AUC of 0.948 [95% CI, 0.94, 0.94], and a micro-averaged AUC of 0.962 [95% CI, 0.96, 0.96]. Diagnostic accuracy, sensitivity, and specificity with 95% CI were calculated. Conclusion Machine learning algorithms based on US images with LET presented high diagnostic accuracy. The random forest model showed the best performance in the binary and multilabel classifiers, particularly for intrasubstance tears.
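The macro- and micro-averaged AUCs reported for multilabel classifiers differ only in when the averaging happens: macro averages per-finding AUCs, while micro pools every (label, score) pair before computing a single AUC. A minimal sketch of the distinction (illustrative only, not the study's code):

```python
def auc(labels, scores):
    """Pairwise (Mann-Whitney) AUC for one binary label."""
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_micro_auc(label_rows, score_rows):
    """label_rows/score_rows: one row per exam, one column per finding.
    Macro: mean of per-finding AUCs. Micro: AUC over all pooled pairs."""
    per_label = [auc(list(ys), list(ss))
                 for ys, ss in zip(zip(*label_rows), zip(*score_rows))]
    flat_y = [y for row in label_rows for y in row]
    flat_s = [s for row in score_rows for s in row]
    return sum(per_label) / len(per_label), auc(flat_y, flat_s)
```

Macro weighting treats rare findings (such as intrasubstance tears) equally with common ones, whereas micro weighting is dominated by the frequent labels, which is why the two figures above differ.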
Affiliation(s)
- Guillermo Droppelmann
- Research Center on Medicine, Exercise, Sport and Health, MEDS Clinic, Santiago, RM, Chile
- Health Sciences Ph.D. Program, Universidad Católica de Murcia UCAM, Murcia, Spain
- Principles and Practice of Clinical Research (PPCR), Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Correspondence: Guillermo Droppelmann
- Manuel Tello
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
- Nicolás García
- MSK Diagnostic and Interventional Radiology Department, MEDS Clinic, Santiago, RM, Chile
- Cristóbal Greene
- Hand and Elbow Unit, Department of Orthopaedic Surgery, MEDS Clinic, Santiago, RM, Chile
- Carlos Jorquera
- Facultad de Ciencias, Escuela de Nutrición y Dietética, Universidad Mayor, Santiago, RM, Chile
- Felipe Feijoo
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
19
Yao J, Chepelev L, Nisha Y, Sathiadoss P, Rybicki FJ, Sheikh AM. Evaluation of a deep learning method for the automated detection of supraspinatus tears on MRI. Skeletal Radiol 2022; 51:1765-1775. [PMID: 35190850] [DOI: 10.1007/s00256-022-04008-6]
Abstract
OBJECTIVE To evaluate whether deep learning is a feasible approach for automated detection of supraspinatus tears on MRI. MATERIALS AND METHODS A total of 200 shoulder MRI studies performed between 2015 and 2019 were retrospectively obtained from our institutional database using balanced random sampling of studies containing a full-thickness tear, partial-thickness tear, or intact supraspinatus tendon. A 3-stage pipeline was developed, comprising a slice selection network based on a pre-trained residual neural network (ResNet); a segmentation network based on an encoder-decoder network (U-Net); and a custom multi-input convolutional neural network (CNN) classifier. Binary reference labels were created following review of radiologist reports and images by a radiology fellow and consensus validation by two musculoskeletal radiologists. Twenty percent of the data was reserved as a holdout test set, with the remaining 80% used for training and optimization under a fivefold cross-validation strategy. Classification and segmentation accuracy were evaluated using the area under the receiver operating characteristic curve (AUROC) and the Dice similarity coefficient, respectively. Baseline characteristics in correctly versus incorrectly classified cases were compared using independent-sample t-tests and chi-squared tests. RESULTS Test sensitivity and specificity of the classifier at the optimal Youden's index were 85.0% (95% CI: 62.1-96.8%) and 85.0% (95% CI: 62.1-96.8%), respectively. AUROC was 0.943 (95% CI: 0.820-0.991). Dice segmentation accuracy was 0.814 (95% CI: 0.805-0.826). There was no significant difference in AUROC between 1.5 T and 3.0 T studies. Sub-analysis showed superior sensitivity in the full-thickness (100%) versus partial-thickness (72.5%) subgroup. CONCLUSION Deep learning is a feasible approach to detect supraspinatus tears on MRI.
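The "optimal Youden's index" operating point mentioned above is simply the score threshold that maximizes sensitivity + specificity - 1; a generic sketch of that threshold scan (illustrative, not the study's pipeline):

```python
def best_youden_threshold(labels, scores):
    """Scan candidate thresholds and return (threshold, J) maximizing
    Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < t)
        tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

Choosing the threshold this way weights sensitivity and specificity equally, which is why the study reports identical values (85.0%) for both at that operating point.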
Affiliation(s)
- Jason Yao
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Leonid Chepelev
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Yashmin Nisha
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Paul Sathiadoss
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Frank J Rybicki
- Department of Radiology, University of Cincinnati College of Medicine, 234 Goodman Street, Box 670761, Cincinnati, OH, 45267-0761, USA
- Adnan M Sheikh
- Department of Radiology, The University of British Columbia Faculty of Medicine, 2775 Laurel Street, Vancouver, BC, V5Z 1M9, Canada
20
Godoy IRB, Silva RP, Rodrigues TC, Skaf AY, de Castro Pochini A, Yamada AF. Automatic MRI segmentation of pectoralis major muscle using deep learning. Sci Rep 2022; 12:5300. [PMID: 35351924] [PMCID: PMC8964724] [DOI: 10.1038/s41598-022-09280-z]
Abstract
To develop and validate a deep convolutional neural network (CNN) method capable of selecting the greatest pectoralis major cross-sectional area (PMM-CSA) and automatically segmenting the PMM on axial magnetic resonance imaging (MRI). We hypothesized that a CNN technique can accurately perform both tasks compared with manual reference standards. Our method is based on two steps: (A) a segmentation model and (B) PMM-CSA selection. In step A, we manually segmented the PMM on 134 axial T1-weighted PM MRIs. The segmentation model was trained from scratch (MONAI/PyTorch SegResNet, mini-batch of 4, 1000 epochs, dropout 0.20, Adam optimizer, learning rate 0.0005, cosine annealing, softmax). The mean Dice score determined the segmentation performance on 8 internal axial T1-weighted PM MRIs. In step B, we used the OpenCV2 framework (version 4.5.1, https://opencv.org) to calculate the PMM-CSA of the model predictions and the ground truth. We then selected the top-3 slices with the largest cross-sectional area and compared them with the ground truth; if one of the selected slices was in the top-3 of the ground truth, we considered it a success. This method was evaluated by top-3 accuracy on 8 internal axial T1-weighted PM MRI test cases. The segmentation model (step A) produced an accurate pectoralis muscle segmentation with a mean Dice score of 0.94 ± 0.01. The results of step B showed a top-3 accuracy > 98% in selecting an appropriate axial image with the greatest PMM-CSA. Our results show an overall accurate selection of the PMM-CSA and automated PM muscle segmentation using a combination of deep CNN algorithms.
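The top-3 selection criterion described above reduces to counting foreground pixels per slice and intersecting the two top-3 index sets; a minimal sketch of that logic (the mask representation and helper names are illustrative, not from the paper):

```python
def top3_slices(masks):
    """Indices of the three slices with the largest segmented area,
    where each mask is a 2D list of 0/1 pixels."""
    areas = [sum(sum(row) for row in mask) for mask in masks]
    return sorted(range(len(areas)), key=lambda i: areas[i], reverse=True)[:3]

def top3_success(pred_masks, truth_masks):
    """The study's success criterion: any predicted top-3 slice also
    appears in the ground-truth top-3."""
    return bool(set(top3_slices(pred_masks)) & set(top3_slices(truth_masks)))
```

Requiring only an intersection of the top-3 sets, rather than an exact match of the single largest slice, makes the criterion robust to near-ties in cross-sectional area between adjacent slices.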
Affiliation(s)
- Ivan Rodrigues Barros Godoy
- Department of Radiology, Hospital Do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil
- Department of Diagnostic Imaging, Universidade Federal de São Paulo - UNIFESP, Rua Napoleão de Barros, 800, São Paulo, SP, 04024-002, Brazil
- Abdalla Youssef Skaf
- Department of Radiology, Hospital Do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil
- ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
- Alberto de Castro Pochini
- Department of Orthopedics and Traumatology, Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil
- André Fukunishi Yamada
- Department of Radiology, Hospital Do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil
- Department of Diagnostic Imaging, Universidade Federal de São Paulo - UNIFESP, Rua Napoleão de Barros, 800, São Paulo, SP, 04024-002, Brazil
- ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
21
Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:25877-25911. [PMID: 35350630 PMCID: PMC8948453 DOI: 10.1007/s11042-022-12100-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 08/04/2021] [Accepted: 01/03/2022] [Indexed: 05/07/2023]
Abstract
Medical imaging refers to several different technologies that are used to view the human body to diagnose, monitor, or treat medical conditions. It requires significant expertise to efficiently and correctly interpret the images generated by each of these technologies, which among others include radiography, ultrasound, and magnetic resonance imaging. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of graphical user interfaces (GUI) and supporting instruments. The main contribution of this study is to provide an intensive review of the popular annotation tools and show their successful usage in annotating medical imaging datasets to guide researchers in this area.
Affiliation(s)
- Manar Aljabri
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, University of Miami Miller School of Medicine, Florida, FL USA
22
Deep Learning for Orthopedic Disease Based on Medical Image Analysis: Present and Future. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12020681] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Since its development, deep learning has been quickly incorporated into the field of medicine and has had a profound impact. Since 2017, many studies applying deep learning-based diagnostics in the field of orthopedics have demonstrated outstanding performance. However, most published papers have focused on disease detection or classification, leaving areas such as segmentation and prediction comparatively underexplored. This review introduces research published in the field of orthopedics, classified by disease from the perspective of orthopedic surgeons, and discusses areas for future research. The paper offers orthopedic surgeons an overall understanding of artificial intelligence-based image analysis, stresses that medical data should be handled with minimal bias, and provides developers and researchers with insight into the real-world context in which clinicians are adopting medical artificial intelligence.
23
Fritz B, Fritz J. Artificial intelligence for MRI diagnosis of joints: a scoping review of the current state-of-the-art of deep learning-based approaches. Skeletal Radiol 2022; 51:315-329. [PMID: 34467424 PMCID: PMC8692303 DOI: 10.1007/s00256-021-03830-8] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Revised: 05/17/2021] [Accepted: 05/23/2021] [Indexed: 02/02/2023]
Abstract
Deep learning-based MRI diagnosis of internal joint derangement is an emerging field of artificial intelligence, which offers many exciting possibilities for musculoskeletal radiology. A variety of investigational deep learning algorithms have been developed to detect anterior cruciate ligament tears, meniscus tears, and rotator cuff disorders. Additional deep learning-based MRI algorithms have been investigated to detect Achilles tendon tears, recurrence prediction of musculoskeletal neoplasms, and complex segmentation of nerves, bones, and muscles. Proof-of-concept studies suggest that deep learning algorithms may achieve similar diagnostic performances when compared to human readers in meta-analyses; however, musculoskeletal radiologists outperformed most deep learning algorithms in studies including a direct comparison. Earlier investigations and developments of deep learning algorithms focused on the binary classification of the presence or absence of an abnormality, whereas more advanced deep learning algorithms start to include features for characterization and severity grading. While many studies have focused on comparing deep learning algorithms against human readers, there is a paucity of data on the performance differences of radiologists interpreting musculoskeletal MRI studies without and with artificial intelligence support. Similarly, studies demonstrating the generalizability and clinical applicability of deep learning algorithms using realistic clinical settings with workflow-integrated deep learning algorithms are sparse. Contingent upon future studies showing the clinical utility of deep learning algorithms, artificial intelligence may eventually translate into clinical practice to assist detection and characterization of various conditions on musculoskeletal MRI exams.
Affiliation(s)
- Benjamin Fritz
- Department of Radiology, Balgrist University Hospital, Forchstrasse 340, CH-8008 Zurich, Switzerland; Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Jan Fritz
- New York University Grossman School of Medicine, New York University, New York, NY 10016 USA
24
Choi KJ, Choi JE, Roh HC, Eun JS, Kim JM, Shin YK, Kang MC, Chung JK, Lee C, Lee D, Kang SW, Cho BH, Kim SJ. Deep learning models for screening of high myopia using optical coherence tomography. Sci Rep 2021; 11:21663. [PMID: 34737335 PMCID: PMC8568935 DOI: 10.1038/s41598-021-00622-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 10/13/2021] [Indexed: 12/02/2022] Open
Abstract
This study aimed to validate and evaluate deep learning (DL) models for screening of high myopia using spectral-domain optical coherence tomography (OCT). This retrospective cross-sectional study included 690 eyes in 492 patients with OCT images and axial length measurement. Eyes were divided into three groups based on axial length: a “normal group,” a “high myopia group,” and an “other retinal disease” group. The researchers trained and validated three DL models to classify the three groups based on horizontal and vertical OCT images of 600 eyes. For evaluation, OCT images of 90 eyes were used. Diagnostic agreements of human doctors and DL models were analyzed. The area under the receiver operating characteristic curve of the three DL models was evaluated. Absolute agreement of retina specialists was 99.11% (range: 97.78–100%). Absolute agreement of the DL models with the multiple-column model was 100.0% (ResNet 50), 90.0% (Inception V3), and 72.22% (VGG 16). Areas under the receiver operating characteristic curves of the DL models with the multiple-column model were 0.99 (ResNet 50), 0.97 (Inception V3), and 0.86 (VGG 16). The DL model based on ResNet 50 showed diagnostic performance comparable with that of retinal specialists. The DL model using OCT images demonstrated reliable diagnostic performance for identifying high myopia.
Affiliation(s)
- Kyung Jun Choi
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jung Eun Choi
- Medical AI Research Center, Samsung Medical Center, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Hyeon Cheol Roh
- Department of Ophthalmology, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Changwon, Republic of Korea
- Jun Soo Eun
- Department of Ophthalmology, Gil Medical Center, Gachon University, Incheon, Republic of Korea
- Yong Kyun Shin
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Min Chae Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Joon Kyo Chung
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Chaeyeon Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Dongyoung Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Se Woong Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Baek Hwan Cho
- Medical AI Research Center, Samsung Medical Center, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea; Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, 06351, Republic of Korea.
- Sang Jin Kim
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea.
25
Ro K, Kim JY, Park H, Cho BH, Kim IY, Shim SB, Choi IY, Yoo JC. Deep-learning framework and computer assisted fatty infiltration analysis for the supraspinatus muscle in MRI. Sci Rep 2021; 11:15065. [PMID: 34301978 PMCID: PMC8302634 DOI: 10.1038/s41598-021-93026-w] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Accepted: 06/02/2021] [Indexed: 02/07/2023] Open
Abstract
Occupation ratio and fatty infiltration are important parameters for evaluating patients with rotator cuff tears. We analyzed the occupation ratio using a deep-learning framework and measured the fatty infiltration of the supraspinatus muscle using an automated region-based Otsu thresholding technique. This study included 240 randomly selected patients who underwent shoulder magnetic resonance imaging (MRI) from January 2015 to December 2016. We used a fully convolutional deep-learning algorithm to quantitatively detect the fossa and muscle regions and to measure the occupation ratio of the supraspinatus muscle; fatty infiltration was objectively evaluated using the Otsu thresholding method. The mean Dice similarity coefficient, accuracy, sensitivity, specificity, and relative area difference for the segmented lesion, measuring the agreement between clinician assessment and the deep neural network, were 0.97, 99.84, 96.89, 99.92, and 0.07, respectively, for the supraspinatus fossa and 0.94, 99.89, 93.34, 99.95, and 2.03, respectively, for the supraspinatus muscle. The fatty infiltration measure obtained with Otsu thresholding differed significantly among the Goutallier grades (Grade 0: 0.06; Grade 1: 4.68; Grade 2: 20.10; Grade 3: 42.86; Grade 4: 55.79; p < 0.0001). The occupation ratio and fatty infiltration measured by Otsu thresholding showed a moderate negative correlation (ρ = -0.75, p < 0.0001). The proposed convolutional neural network exhibited fast and accurate segmentation of the supraspinatus muscle and fossa from shoulder MRI, allowing automatic calculation of the occupation ratio, and quantitative evaluation using a modified Otsu thresholding method can be used to calculate the proportion of fatty infiltration in the supraspinatus muscle. We expect that this will improve the efficiency and objectivity of diagnoses by quantifying the indices used for shoulder MRI.
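The Otsu-based fatty-infiltration measurement can be sketched with a NumPy re-implementation of the classic Otsu threshold. This is an illustrative sketch, not the authors' code: it assumes fat appears hyperintense within the segmented muscle region, and all names are hypothetical.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu: choose the threshold that maximizes the between-class
    variance of the intensity histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # probability of the "below" class
    w1 = 1.0 - w0                        # probability of the "above" class
    m = np.cumsum(p * centers)           # cumulative intensity mean
    mt = m[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]

def fat_fraction(image, muscle_mask):
    """Proportion of muscle-mask pixels brighter than the Otsu threshold
    (assuming fat is hyperintense on the sequence used)."""
    vals = image[muscle_mask > 0]
    t = otsu_threshold(vals)
    return float((vals > t).mean())
```

Applied to the segmented supraspinatus region, `fat_fraction` yields a percentage-style infiltration measure comparable across Goutallier grades.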
Affiliation(s)
- Kyunghan Ro
- Gangnambon Research Institute, Gangnambon Orthopaedic Clinic, Seoul, Republic of Korea
- Joo Young Kim
- Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea
- Heeseol Park
- Department of Orthopaedic Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Baek Hwan Cho
- Medical AI Research Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
- Department of Medical Device Management and Research, SAIHST, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
- In Young Kim
- Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea
- Seung Bo Shim
- Department of Orthopaedic Surgery, Yonsei Thebaro Hospital, Seoul, Republic of Korea
- In Young Choi
- Department of Radiology, Korea University Ansan Hospital, Korea University, Ansan-si, Gyeonggi-do, Republic of Korea
- Jae Chul Yoo
- Department of Orthopaedic Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
26
Nam B, Kim JY, Kim IY, Cho BH. Selective Prediction LSTM for Time Series Health Datasets using Unit-wise Batch Standardization: Algorithm Development and Validation (Preprint). JMIR Med Inform 2021; 10:e30587. [PMID: 35289753 PMCID: PMC8965672 DOI: 10.2196/30587] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Revised: 11/16/2021] [Accepted: 01/02/2022] [Indexed: 11/13/2022] Open
Abstract
Background In any health care system, both the classification of data and the confidence level of such classifications are important. Therefore, a selective prediction model is required to classify time series health data according to confidence levels of prediction. Objective This study aims to develop a method using long short-term memory (LSTM) models with a reject option for time series health data classification. Methods An existing selective prediction method was adopted to implement an option for rejecting a classification output in LSTM models. However, a conventional selection-function approach to LSTM does not achieve acceptable performance during learning stages. To tackle this problem, we proposed a unit-wise batch standardization that normalizes each hidden unit in the LSTM, reflecting the structural characteristics of LSTM models that affect the selection function. Results The ability of our method to approximate the target confidence level was compared by coverage violations for 2 time series health data sets consisting of human activity and arrhythmia data. For both data sets, our approach yielded lower average coverage violations (0.98% and 1.79%, respectively) than the conventional approach. In addition, the classification performance when using the reject option was compared with that of other normalization methods. Our method demonstrated superior performance for selective risk (12.63% and 17.82%), false-positive rates (2.09% and 5.8%), and false-negative rates (10.58% and 17.24%) on the two data sets. Conclusions Our normalization approach can help make selective predictions for time series health data. We expect this technique to enhance users' confidence in classification systems and improve collaboration between humans and artificial intelligence in the medical field through classification that accounts for confidence.
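The reject-option idea behind selective prediction can be sketched with a simple confidence threshold on softmax outputs. This is only the generic mechanism, not the paper's learned selection function or unit-wise batch standardization; the threshold value and function names are illustrative.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_reject(logits, threshold=0.8):
    """Return (predictions, accepted): the class index per sample, and a
    boolean mask that is False where the maximum softmax confidence falls
    below the threshold (i.e., the classifier abstains)."""
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=-1)
    return probs.argmax(axis=-1), conf >= threshold

def coverage(accepted):
    """Fraction of samples on which the model commits to a prediction."""
    return float(np.mean(accepted))
```

Coverage violation is then the gap between this empirical coverage and the target coverage the selection mechanism was asked to achieve.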
Affiliation(s)
- Borum Nam
- Department of Electronic Engineering, Hanyang University, Seoul, Republic of Korea
- Joo Young Kim
- Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea
- In Young Kim
- Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea
- Baek Hwan Cho
- Medical AI Research Center, Samsung Medical Center, Seoul, Republic of Korea
27
Deep learning method for segmentation of rotator cuff muscles on MR images. Skeletal Radiol 2021; 50:683-692. [PMID: 32939590 DOI: 10.1007/s00256-020-03599-2] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Revised: 08/27/2020] [Accepted: 09/03/2020] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To develop and validate a deep convolutional neural network (CNN) method capable of (1) selecting a specific shoulder sagittal MR image (Y-view) and (2) automatically segmenting rotator cuff (RC) muscles on a Y-view. We hypothesized that a CNN approach could accurately perform both tasks compared with manual reference standards. MATERIAL AND METHODS We created 2 models: model A for Y-view selection and model B for muscle segmentation. For model A, we manually selected shoulder sagittal T1 Y-views from 258 cases as ground truth to train a classification CNN (Keras/TensorFlow, Inception v3, batch size 16, 100 epochs, dropout 0.2, learning rate 0.001, RMSprop). A top-3 success rate evaluated model A on 100 internal and 50 external test cases. For model B, we manually segmented subscapularis, supraspinatus, and infraspinatus/teres minor on 1048 sagittal T1 Y-views. After histogram equalization and data augmentation, the model was trained from scratch (U-Net, batch size 8, 50 epochs, dropout 0.25, learning rate 0.0001, softmax). Dice (F1) score determined segmentation accuracy on 105 internal and 50 external test images. RESULTS Model A showed top-3 accuracy > 98% to select an appropriate Y-view. Model B produced accurate RC muscle segmentations with mean Dice scores > 0.93. Individual muscle Dice scores on internal/external datasets were as follows: subscapularis 0.96/0.93, supraspinatus 0.97/0.96, and infraspinatus/teres minor 0.97/0.95. CONCLUSIONS Our results show overall accurate Y-view selection and automated RC muscle segmentation using a combination of deep CNN algorithms.
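The Dice (F1) score used to grade the segmentations above is a simple overlap measure between a predicted and a reference binary mask; a minimal NumPy version (the `eps` smoothing term is a common convention, not something specified by the paper):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """Dice/F1 overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```

A score of 1.0 means perfect overlap; the per-muscle scores of 0.93-0.97 reported here indicate near-complete agreement with the manual reference.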
28
Moon JH, Lee DY, Cha WC, Chung MJ, Lee KS, Cho BH, Choi JH. Automatic stenosis recognition from coronary angiography using convolutional neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 198:105819. [PMID: 33213972 DOI: 10.1016/j.cmpb.2020.105819] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 10/26/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Coronary artery disease, which is mostly caused by atherosclerotic narrowing of the coronary artery lumen, is a leading cause of death. Coronary angiography is the standard method to estimate the severity of coronary artery stenosis, but it is frequently limited by intra- and inter-observer variations. We propose a deep-learning algorithm that automatically recognizes stenosis in coronary angiographic images. METHODS The proposed method consists of key frame detection, deep learning model training for classification of stenosis on each key frame, and visualization of the possible location of the stenosis. First, we propose an algorithm that automatically extracts key frames essential for diagnosis from 452 right coronary artery angiography movie clips. Our deep learning model is then trained with image-level annotations to classify areas narrowed by over 50%. To make the model focus on the salient features, we apply a self-attention mechanism. The stenotic locations are visualized using the activated areas of feature maps with gradient-weighted class activation mapping. RESULTS The automatically detected key frame was very close to the manually selected key frame (average distance 1.70 ± 0.12 frames per clip). The model was trained with key frames on internal datasets and validated with internal and external datasets. Our training method achieved a high frame-wise area-under-the-curve of 0.971, frame-wise accuracy of 0.934, and clip-wise accuracy of 0.965, averaged over cross-validation evaluations. The external validation showed high performance, with mean frame-wise area-under-the-curve values of 0.925 and 0.956 for the single and ensemble models, respectively. Heat-map visualization shows the location of different types of stenosis in both internal and external datasets. With the self-attention mechanism, the stenosis could be precisely localized, which helps to accurately classify the stenosis by type.
CONCLUSIONS Our automated classification algorithm could recognize and localize coronary artery stenosis highly accurately. Our approach might provide the basis for a screening and assistant tool for the interpretation of coronary angiography.
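The gradient-weighted class activation mapping (Grad-CAM) step can be sketched in NumPy, assuming the convolutional feature maps and the gradients of the class score with respect to them have already been extracted from the network; the function name and shapes are illustrative.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation map from one conv layer.

    feature_maps, gradients: (channels, H, W) activations and the gradients
    of the target class score w.r.t. those activations (both precomputed).
    """
    weights = gradients.mean(axis=(1, 2))              # global-average-pool the grads
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence
    return cam / cam.max() if cam.max() > 0 else cam
```

Upsampled to the frame size and overlaid as a heat map, this highlights the image regions that drove the stenosis classification.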
Affiliation(s)
- Jong Hak Moon
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul 06351, South Korea.
- Da Young Lee
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul 06351, South Korea.
- Won Chul Cha
- Department of Emergency Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, South Korea.
- Myung Jin Chung
- Medical AI Research Center, Samsung Medical Center, Seoul 06351, South Korea; Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, South Korea.
- Kyu-Sung Lee
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul 06351, South Korea; Department of Urology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, South Korea.
- Baek Hwan Cho
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul 06351, South Korea; Medical AI Research Center, Samsung Medical Center, Seoul 06351, South Korea.
- Jin Ho Choi
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul 06351, South Korea; Department of Emergency Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, South Korea.
29
You S, Cho BH, Yook S, Kim JY, Shon YM, Seo DW, Kim IY. Unsupervised automatic seizure detection for focal-onset seizures recorded with behind-the-ear EEG using an anomaly-detecting generative adversarial network. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 193:105472. [PMID: 32344271 DOI: 10.1016/j.cmpb.2020.105472] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Revised: 03/06/2020] [Accepted: 03/19/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Epilepsy is a neurological disorder of the brain, which involves recurrent seizures. An electroencephalogram (EEG) is a gold-standard method for the detection and analysis of epileptic seizures. However, the standard EEG recording system is too obtrusive to be used in daily life. Behind-the-ear EEG is an alternative approach to record EEG conveniently. Previous researchers applied machine learning to automatically detect seizures with EEG, but the epileptic EEG waveform contains subtle changes that are difficult to identify. Furthermore, the extremely small proportion of ictal events in long-term monitoring may cause a class-imbalance problem and, consequently, poor prediction performance in supervised learning approaches. In this study, we present an automatic seizure detection algorithm based on a generative adversarial network (GAN) trained by unsupervised learning and evaluate it with behind-the-ear EEG. METHODS We recorded behind-the-ear EEGs from 12 patients with various types of epilepsy. Data were reviewed separately by two epileptologists, who determined the onsets and ends of seizures. First, we conducted unsupervised learning with the normal records for the GAN to learn the representation of normal states. Second, we performed automatic seizure detection with the trained GAN as an anomaly detector. Last, we combined the Gram matrix with other anomaly losses to improve detection performance. RESULTS The proposed approach achieved detection performance with an area under the receiver operating curve of 0.939 and sensitivity of 96.3% with a false alarm rate of 0.14 per hour in the test dataset. In addition, we confirmed distinguishability with the distribution of the anomaly scores in terms of EEG frequency bands. CONCLUSIONS It is expected that the proposed anomaly detection via GAN with behind-the-ear EEG can be effectively used for long-term seizure monitoring in daily life.
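The Gram-matrix component of the anomaly score can be sketched in NumPy. This is a toy illustration of the idea only: the paper combines several learned losses from the GAN, whereas here the reconstruction error and the Gram-matrix discrepancy are combined with a single illustrative weight `lam`, and all names are hypothetical.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, length) feature map: channel-wise
    inner products, capturing correlations between feature channels."""
    f = np.asarray(features, dtype=float)
    return f @ f.T / f.shape[1]

def anomaly_score(x, x_rec, feat_x, feat_rec, lam=1.0):
    """Toy anomaly score: GAN reconstruction error of the input plus a
    Gram-matrix discrepancy between the feature maps of the real and
    reconstructed signals. Normal segments reconstruct well (low score);
    ictal segments do not (high score)."""
    rec = np.mean(np.abs(np.asarray(x) - np.asarray(x_rec)))
    gram = np.mean(np.abs(gram_matrix(feat_x) - gram_matrix(feat_rec)))
    return rec + lam * gram
```

Thresholding this score over time yields seizure detections, with the threshold traded off against the false-alarm rate.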
Affiliation(s)
- Sungmin You
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
- Baek Hwan Cho
- Medical AI Research Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea; Department of Medical Device Management and Research, Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
- Soonhyun Yook
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
- Joo Young Kim
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
- Young-Min Shon
- Department of Neurology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
- Dae-Won Seo
- Department of Neurology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea.
- In Young Kim
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea.
30
Cho YS, Cho K, Park CJ, Chung MJ, Kim JH, Kim K, Kim YK, Kim HJ, Ko JW, Cho BH, Chung WH. Automated measurement of hydrops ratio from MRI in patients with Ménière's disease using CNN-based segmentation. Sci Rep 2020; 10:7003. [PMID: 32332804 PMCID: PMC7181627 DOI: 10.1038/s41598-020-63887-8] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Accepted: 04/08/2020] [Indexed: 11/09/2022] Open
Abstract
Ménière's Disease (MD) is difficult to diagnose and evaluate objectively over the course of treatment. Recently, several studies have reported MD diagnoses by MRI-based endolymphatic hydrops (EH) analysis. However, this method is time-consuming and complicated. Therefore, a fast, objective, and accurate evaluation tool is necessary. The purpose of this study was to develop an algorithm that can accurately analyze EH on intravenous (IV) gadolinium (Gd)-enhanced inner-ear MRI using artificial intelligence (AI) with deep learning. In this study, we developed a convolutional neural network (CNN)-based deep-learning model named INHEARIT (INner ear Hydrops Estimation via ARtificial InTelligence) for the automatic segmentation of the cochlea and vestibule, and calculation of the EH ratio in the segmented region. Measurement of the EH ratio was performed manually by a neuro-otologist and neuro-radiologist and by estimation with the INHEARIT model and were highly consistent (intraclass correlation coefficient = 0.971). This is the first study to demonstrate that automated EH ratio measurements are possible, which is important in the current clinical context where the usefulness of IV-Gd inner-ear MRI for MD diagnosis is increasing.
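Once the cochlea or vestibule and the endolymph region have been segmented, the EH ratio itself reduces to an area ratio between the two masks; a minimal NumPy sketch (the function name is illustrative, not from the INHEARIT code):

```python
import numpy as np

def hydrops_ratio(hydrops_mask, structure_mask):
    """Endolymphatic hydrops ratio: fraction of the segmented structure
    (cochlea or vestibule) occupied by the endolymph region. Pixels of the
    hydrops mask falling outside the structure are ignored."""
    structure = np.asarray(structure_mask, dtype=bool)
    hydrops = np.logical_and(np.asarray(hydrops_mask, dtype=bool), structure)
    total = structure.sum()
    return float(hydrops.sum() / total) if total else 0.0
```

Automating the two segmentations is the hard part the CNN solves; the ratio computation afterwards is this one-liner per structure.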
Affiliation(s)
- Young Sang Cho
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Kyeongwon Cho
- Medical AI Research Center, Samsung Medical Center, Seoul, Korea
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Korea
- Chae Jung Park
- Medical AI Research Center, Samsung Medical Center, Seoul, Korea
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
- Myung Jin Chung
- Medical AI Research Center, Samsung Medical Center, Seoul, Korea
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Jong Hyuk Kim
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
- Kyunga Kim
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
- Statistics & Data Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Korea
- Yi-Kyung Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Hyung-Jin Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Jae-Wook Ko
- Department of Clinical Pharmacology and Therapeutics, Samsung Medical Center, Seoul, Korea
- Baek Hwan Cho
- Medical AI Research Center, Samsung Medical Center, Seoul, Korea.
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Korea.
- Won-Ho Chung
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea.