1
Ni M, Zhao Y, Zhang L, Chen W, Wang Q, Tian C, Yuan H. MRI-based automated multitask deep learning system to evaluate supraspinatus tendon injuries. Eur Radiol 2024; 34:3538-3551. [PMID: 37964049] [DOI: 10.1007/s00330-023-10392-x]
Abstract
OBJECTIVE: To establish an automated, multitask, MRI-based deep learning system for the detailed evaluation of supraspinatus tendon (SST) injuries.
METHODS: According to arthroscopy findings, 3087 patients were divided into normal, degenerative, and tear groups (groups 0-2). Group 2 was further divided into bursal-side, articular-side, intratendinous, and full-thickness tear groups (groups 2.1-2.4), and external validation was performed with 573 patients. Visual geometry group network 16 (VGG16) was used for preliminary image screening. Then, the rotator cuff multitask learning (RC-MTL) model performed multitask classification (classifiers 1-4), and a multistage decision model produced the final output. Model performance was evaluated by receiver operating characteristic (ROC) curve analysis and calculation of related parameters. McNemar's test was used to compare the diagnostic performance of the radiologists and the model, and the intraclass correlation coefficient (ICC) was used to assess the radiologists' reliability; p < 0.05 indicated statistical significance.
RESULTS: In the in-group dataset, the area under the ROC curve (AUC) of VGG16 was 0.92, and the average AUCs of RC-MTL classifiers 1-4 were 0.99, 0.98, 0.97, and 0.97, respectively. The average AUC of the automated multitask deep learning system for groups 0-2.4 was 0.98 in the in-group dataset and 0.97 in the out-group dataset. The ICCs of the radiologists were 0.97-0.99. The automated multitask deep learning system outperformed the radiologists in classifying groups 0-2.4 in both the in-group and out-group datasets (p < 0.001).
CONCLUSION: The MRI-based automated multitask deep learning system performed well in diagnosing SST injuries and is comparable to experienced radiologists.
CLINICAL RELEVANCE STATEMENT: Our study established an automated multitask deep learning system to evaluate supraspinatus tendon (SST) injuries and further determine the location of SST tears. The model can potentially improve radiologists' diagnostic efficiency, reduce diagnostic variability, and accurately assess SST injuries.
KEY POINTS:
• A detailed classification of supraspinatus tendon tears can help clinical decision-making.
• Deep learning enables the detailed classification of supraspinatus tendon injuries.
• The proposed automated multitask deep learning system is comparable to radiologists.
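The headline metric in this abstract, and in most entries below, is the area under the ROC curve (AUC). As a hedged illustration only (not the authors' code, and with made-up scores and labels), AUC for a binary classifier can be computed directly from predicted scores via the rank-based Mann-Whitney formulation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one.

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive case receives the
    higher score (ties count as one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated (hypothetical) scores give AUC = 1.0
print(roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

This pairwise definition is equivalent to the area under the empirical ROC curve, which is why it is a convenient sanity check against library implementations.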
Affiliation(s)
- Ming Ni, Yuqing Zhao, Lihua Zhang, Wen Chen, Qizheng Wang, Chunyan Tian, Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Haidian District, Beijing, People's Republic of China
2
Fang W, Mao Y, Wang H, Sugimori H, Kiuch S, Sutherland K, Kamishima T. Fully automatic quantification for hand synovitis in rheumatoid arthritis using pixel-classification-based segmentation network in DCE-MRI. Jpn J Radiol 2024. [PMID: 38789911] [DOI: 10.1007/s11604-024-01592-6]
Abstract
PURPOSE: To quantify synovium in rheumatoid arthritis (RA) patients with a classification-based segmentation method, using a deep learning (DL) model based on time-intensity curve (TIC) analysis in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).
MATERIALS AND METHODS: This retrospective study analyzed a hand MR dataset of 28 RA patients (six males; mean age, 53.7 years). A researcher, under expert guidance, used in-house software to delineate regions of interest (ROIs) for hand muscles, bones, and synovitis, generating a dataset of 27,255 pixels with corresponding TICs (muscle: 11,413; bone: 8502; synovitis: 7340). An experienced musculoskeletal radiologist performed ground-truth segmentation of enhanced pannus in the joint bounding box on the 10th DCE phase, about 5 min after contrast injection. Preprocessing included median filtering for noise reduction, a phase-only correlation algorithm for motion correction, and contrast-limited adaptive histogram equalization for improved image contrast and noise suppression. TIC intensity values were normalized using zero-mean normalization. A DL model with dilated causal convolutions and the SELU activation function was developed for enhanced-pannus segmentation and tested using leave-one-out cross-validation.
RESULTS: In total, 407 joint bounding boxes were manually segmented, yielding 129 synovitis masks. At the pixel level, the DL model achieved a sensitivity of 85%, specificity of 98%, accuracy of 99%, and precision of 84% for enhanced-pannus segmentation, with a mean Dice score of 0.73. The false-positive rate for cases without synovitis was 0.8%. DL-measured enhanced-pannus volume correlated strongly with the ground truth at both the pixel-based (r = 0.87, p < 0.001) and patient-based levels (r = 0.84, p < 0.001). In Bland-Altman analysis, the mean differences at the pixel-based and patient-based levels were -9.46 mm3 and -50.87 mm3, respectively.
CONCLUSION: Our DL-based TIC shape analysis of DCE-MRI has the potential for automatic segmentation and quantification of enhanced synovium in the hands of RA patients.
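The TIC preprocessing step named above, zero-mean normalization of each time-intensity curve, can be sketched as follows. This is a minimal illustration assuming the common z-score variant (subtract the mean, divide by the standard deviation), which may differ in detail from the authors' implementation; the intensity values are made up.

```python
def zero_mean_normalize(tic):
    """Zero-mean (z-score) normalization of one time-intensity curve:
    subtract the mean, then divide by the standard deviation so the
    curve shape, not its absolute intensity, drives classification."""
    n = len(tic)
    mean = sum(tic) / n
    var = sum((x - mean) ** 2 for x in tic) / n
    sd = var ** 0.5
    if sd == 0:  # flat curve: only centre it
        return [x - mean for x in tic]
    return [(x - mean) / sd for x in tic]

# Hypothetical pixel TIC across 5 DCE phases
curve = zero_mean_normalize([100, 120, 180, 200, 190])
print(sum(curve))  # ≈ 0 (zero mean after normalization)
```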
Affiliation(s)
- Wanxuan Fang, Yijun Mao, Haolin Wang
- Graduate School of Health Sciences, Hokkaido University, North-12 West-5, Kita-Ku, Sapporo, 060-0812, Japan
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, North-12 West-5, Kita-Ku, Sapporo, 060-0812, Japan
- Shinji Kiuch
- AIC Yaesu Clinic, C-Road Bldg., 2-1-18, Nihonbashi, Chuo-Ku, Tokyo, Japan
- Kenneth Sutherland
- Global Center for Biomedical Science and Engineering, Hokkaido University, North-15 West-7, Kita-Ku, Sapporo, 060-8638, Japan
- Tamotsu Kamishima
- Faculty of Health Sciences, Hokkaido University, North-12 West-5, Kita-Ku, Sapporo, 060-0812, Japan
3
Ahmed F, Abbas S, Athar A, Shahzad T, Khan WA, Alharbi M, Khan MA, Ahmed A. Identification of kidney stones in KUB X-ray images using VGG16 empowered with explainable artificial intelligence. Sci Rep 2024; 14:6173. [PMID: 38486010] [PMCID: PMC10940612] [DOI: 10.1038/s41598-024-56478-4]
Abstract
A kidney stone is a solid formation that can lead to kidney failure, severe pain, and reduced quality of life from urinary system blockages. Although medical experts can interpret kidney-ureter-bladder (KUB) X-ray images, some images are challenging for human detection and require considerable analysis time, so an automated system for accurately classifying KUB X-ray images is desirable. This article applies a transfer learning (TL) model with a pre-trained VGG16, empowered with explainable artificial intelligence (XAI), to build a system that takes KUB X-ray images and accurately categorizes them as kidney stone or normal cases. The model achieves a testing accuracy of 97.41% on the dataset used. Although the VGG16 model delivers highly accurate predictions, its decision-making process lacks transparency. To address this, the study incorporates Layer-Wise Relevance Propagation (LRP), an XAI technique that increases the model's fairness and transparency and makes its predictions comprehensible to humans. XAI can therefore play an important role in assisting doctors with the accurate identification of kidney stones, facilitating effective treatment strategies.
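LRP redistributes a model's output score backward through the network so that each input pixel receives a share of the "relevance" proportional to its contribution. A minimal single-unit sketch of the LRP-0/epsilon rule is shown below; it is not the study's VGG16 implementation, and the inputs and weights are made up. The key property, conservation, is visible: the input relevances sum to the relevance of the output.

```python
def lrp_linear(x, w, relevance_out, eps=1e-9):
    """LRP-0/epsilon rule for one linear unit z = sum_i x_i * w_i:
    each input receives relevance in proportion to its contribution
    x_i * w_i to the activation; eps stabilises near-zero activations."""
    contributions = [xi * wi for xi, wi in zip(x, w)]
    z = sum(contributions) + eps
    return [c / z * relevance_out for c in contributions]

x, w = [1.0, 2.0, 0.0], [0.5, 0.25, 3.0]
r = lrp_linear(x, w, relevance_out=1.0)
print(r)  # ≈ [0.5, 0.5, 0.0]: the zero-valued input gets no relevance
```

Stacking this rule layer by layer from the output back to the pixels yields the relevance heatmaps typically shown for LRP-based explanations.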
Affiliation(s)
- Fahad Ahmed
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
- Sagheer Abbas
- Department of Computer Sciences, Bahria University, Lahore Campus, Lahore, 54000, Pakistan
- Atifa Athar
- Department of Computer Science, COMSATS University Islamabad, Lahore Campus, Lahore, 54000, Pakistan
- Tariq Shahzad
- Department of Computer Sciences, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, 57000, Pakistan
- Wasim Ahmad Khan
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
- Meshal Alharbi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, 11942, Alkharj, Saudi Arabia
- Muhammad Adnan Khan
- School of Computing, Skyline University College, University City Sharjah, 1797, Sharjah, UAE
- Department of Software, Faculty of Artificial Intelligence and Software, Gachon University, Seongnam-si, 13120, Republic of Korea
- Riphah School of Computing and Innovation, Faculty of Computing, Riphah International University, Lahore Campus, Lahore, 54000, Pakistan
- Arfan Ahmed
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
4
Park J, Cho H, Ji Y, Lee K, Yoon H. Detection of spondylosis deformans in thoracolumbar and lumbar lateral X-ray images of dogs using a deep learning network. Front Vet Sci 2024; 11:1334438. [PMID: 38425836] [PMCID: PMC10902442] [DOI: 10.3389/fvets.2024.1334438]
Abstract
Introduction: Spondylosis deformans is a non-inflammatory osteophytic reaction that develops to re-establish the stability of weakened joints between intervertebral discs. However, assessing these changes using radiography is subjective and difficult. In human medicine, attempts have been made to use artificial intelligence to accurately diagnose difficult and ambiguous diseases in medical imaging. Deep learning, the form of artificial intelligence most commonly used in medical imaging data analysis, uses neural networks to self-learn and extract features from data to diagnose diseases. However, no deep learning model has been developed to detect vertebral diseases in canine thoracolumbar and lumbar lateral X-ray images. Therefore, this study aimed to establish a segmentation model that automatically recognizes the vertebral body and spondylosis deformans in thoracolumbar and lumbar lateral radiographs of dogs.
Methods: A total of 265 thoracolumbar and lumbar lateral radiographic images from 162 dogs were used to develop and evaluate a deep learning model, based on the attention U-Net algorithm, that segments the vertebral body and detects spondylosis deformans.
Results: When the ability of the deep learning model to recognize spondylosis deformans in the test dataset was compared with that of veterinary clinicians, the kappa value was 0.839, indicating almost perfect agreement.
Conclusions: The deep learning model developed in this study is expected to automatically detect spondylosis deformans on thoracolumbar and lumbar lateral radiographs of dogs, helping to quickly and accurately identify unstable intervertebral disc spaces. The segmentation model is also expected to be useful for developing models that automatically recognize other vertebral and disc diseases.
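The agreement statistic reported above (kappa = 0.839) is Cohen's kappa, which corrects raw rater agreement for the agreement expected by chance. A minimal sketch with made-up ratings (not the study's data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two raters over the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical model vs. clinician labels: 1 = lesion present, 0 = absent
model     = [1, 1, 0, 0, 1, 0, 1, 0]
clinician = [1, 1, 0, 0, 1, 0, 0, 0]
print(cohens_kappa(model, clinician))  # 0.75
```

By the usual Landis-Koch convention, values above 0.8 (such as the 0.839 reported here) are read as almost perfect agreement.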
Affiliation(s)
- Junseol Park
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Biosafety Research Institute and College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Hyunwoo Cho
- Department of Electronic Engineering, Sogang University, Seoul, Republic of Korea
- Yewon Ji
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Kichang Lee
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Hakyoung Yoon
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Biosafety Research Institute and College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
5
Xie Y, Li X, Chen F, Wen R, Jing Y, Liu C, Wang J. Artificial intelligence diagnostic model for multi-site fracture X-ray images of extremities based on deep convolutional neural networks. Quant Imaging Med Surg 2024; 14:1930-1943. [PMID: 38415122] [PMCID: PMC10895109] [DOI: 10.21037/qims-23-878]
Abstract
Background: The rapid and accurate diagnosis of fractures is crucial for timely treatment of trauma patients. Deep learning, one of the most widely used forms of artificial intelligence (AI), is now commonly employed in medical imaging for fracture detection. This study aimed to construct a deep learning model, trained on a large dataset, to recognize X-ray images of multiple fracture sites in the extremity bones.
Methods: Radiographic imaging data of extremities were retrospectively collected from five hospitals between January 2017 and September 2020, comprising 25,635 patients and 26,098 images. After the lesions were labeled, the data were randomly split: 90% served as the training set to develop the fracture detection model and the remaining 10% as the validation set to verify it. The faster region-based convolutional neural network (Faster R-CNN) algorithm was adopted to construct the detection models. The Dice coefficient was used to evaluate image segmentation accuracy, and the detection models were evaluated with sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC).
Results: The free-response receiver operating characteristic (FROC) curve values were 0.886 and 0.843 for the detection of single and multiple fractures, respectively. The AUC for every body part exceeded 0.920; notably, the AUC for wrist fractures reached 0.952. The average accuracy in detecting fracture regions in the extremities was 0.865. At the patient level, sensitivity was 0.957 for patients with multiple lesions and 0.852 for those with single lesions. In the segmentation task, the Dice coefficient reached 0.996 on the training set and 0.975 on the validation set.
Conclusions: The Faster R-CNN model exhibits excellent performance in simultaneously identifying fractures of the hands, feet, wrists, ankles, radius and ulna, and tibia and fibula on X-ray images, with high accuracy, low false-negative rates, and controllable false-positive rates. It can serve as a valuable screening tool.
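The Dice coefficient used above to score segmentation overlap is defined as 2·|A∩B| / (|A| + |B|) for two binary masks. A minimal sketch with made-up masks (not the study's data or code):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two flattened binary masks (lists of
    0/1): twice the overlap divided by the total foreground size."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 0, 1]
print(dice(pred, truth))  # 0.8
```

Dice rewards overlap relative to region size, which is why it is preferred over plain pixel accuracy for small structures such as fracture regions.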
Affiliation(s)
- Yanling Xie, Xiaoming Li, Fengxi Chen, Ru Wen
- Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Yang Jing
- Huiying Medical Technology Co., Ltd., Beijing, China
- Chen Liu, Jian Wang
- Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, China
6
Sabati M, Yang M, Chauhan A. Editorial for "Collaborative Learning for Annotation-Efficient Volumetric MR Image Segmentation". J Magn Reson Imaging 2024. [PMID: 38258419] [DOI: 10.1002/jmri.29212]
Affiliation(s)
- Mohammad Sabati
- Hoglund Biomedical Imaging Center, University of Kansas Medical Center, Kansas City, Kansas, USA
- Bioengineering Program, School of Engineering, University of Kansas, Lawrence, Kansas, USA
- Mingrui Yang
- Department of Biomedical Engineering, Program of Advanced Musculoskeletal Imaging, Cleveland Clinic, Cleveland, Ohio, USA
- Anil Chauhan
- Department of Radiology, University of Kansas Medical Center, Kansas City, Kansas, USA
7
Gao Z, Pan X, Shao J, Jiang X, Su Z, Jin K, Ye J. Automatic interpretation and clinical evaluation for fundus fluorescein angiography images of diabetic retinopathy patients by deep learning. Br J Ophthalmol 2023; 107:1852-1858. [PMID: 36171054] [DOI: 10.1136/bjo-2022-321472]
Abstract
BACKGROUND/AIMS: Fundus fluorescein angiography (FFA) is an important technique for evaluating diabetic retinopathy (DR) and other retinal diseases. The interpretation of FFA images is complex and time-consuming, and diagnostic ability varies among ophthalmologists. The aim of this study was to develop a clinically usable multilevel classification deep learning model for FFA images, covering both prediagnosis assessment and lesion classification.
METHODS: A total of 15,599 FFA images of 1558 eyes from 845 patients diagnosed with DR were collected and annotated. Three convolutional neural network (CNN) models were trained to generate labels for image quality, location, laterality of eye, phase, and five lesion types. Model performance was evaluated by accuracy, F1 score, area under the curve, and human-machine comparison, and the images with false-positive and false-negative results were analysed in detail.
RESULTS: Compared with LeNet-5 and VGG16, ResNet18 achieved the best results: an accuracy of 80.79%-93.34% for prediagnosis assessment and 63.67%-88.88% for lesion detection. The human-machine comparison showed that the CNN had accuracy similar to that of junior ophthalmologists. The false-positive and false-negative analysis indicated directions for improvement.
CONCLUSION: This is the first study to perform automated standardised labelling of FFA images. Our model can be applied in clinical practice and should contribute to the development of intelligent diagnosis of FFA images.
Affiliation(s)
- Zhiyuan Gao, Xiangji Pan, Ji Shao
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Xiaoyu Jiang
- College of Control Science and Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Zhaoan Su, Kai Jin, Juan Ye
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
8
Bousson V, Attané G, Benoist N, Perronne L, Diallo A, Hadid-Beurrier L, Martin E, Hamzi L, Depil Duval A, Revue E, Vicaut E, Salvat C. Artificial Intelligence for Detecting Acute Fractures in Patients Admitted to an Emergency Department: Real-Life Performance of Three Commercial Algorithms. Acad Radiol 2023; 30:2118-2139. [PMID: 37468377] [DOI: 10.1016/j.acra.2023.06.016]
Abstract
RATIONALE AND OBJECTIVES: Interpreting radiographs in emergency settings is stressful and a burden for radiologists. The main objective was to assess the performance of three commercially available artificial intelligence (AI) algorithms for detecting acute peripheral fractures on radiographs in daily emergency practice.
MATERIALS AND METHODS: Radiographs were collected from consecutive patients admitted for skeletal trauma at our emergency department over a period of 2 months. Three AI algorithms (SmartUrgence, Rayvolve, and BoneView) were used to analyze 13 body regions, and four musculoskeletal radiologists determined the ground truth from the radiographs. The diagnostic performance of the three algorithms was calculated at the level of the radiography set. Accuracies, sensitivities, and specificities for each algorithm and two-by-two comparisons between algorithms were obtained, for the whole population and for subgroups of interest (sex, age, body region).
RESULTS: A total of 1210 patients were included (mean age, 41.3 ± 18.5 years; 742 [61.3%] men), corresponding to 1500 radiography sets; fracture prevalence among the sets was 23.7% (356/1500). Accuracy was 90.1%, 71.0%, and 88.8% for SmartUrgence, Rayvolve, and BoneView, respectively; sensitivity was 90.2%, 92.6%, and 91.3%, and specificity 92.5%, 70.4%, and 90.5%. Accuracy and specificity were significantly higher for SmartUrgence and BoneView than for Rayvolve, both for the whole population (P < .0001) and for subgroups, whereas the three algorithms did not differ in sensitivity (P = .27). For SmartUrgence, subgroups did not differ significantly in accuracy, specificity, or sensitivity. For Rayvolve, accuracy and specificity were significantly higher for ages 27-36 than ≥53 years (P = .0029 and P = .0019), and specificity was higher for the knee than the foot (P = .0149). For BoneView, accuracy was significantly higher for the knee than the foot (P = .0006) and for the knee than the wrist/hand (P = .0228), and specificity was significantly higher for the knee than the foot (P = .0003) and for the ankle than the foot (P = .0195).
CONCLUSION: The performance of AI detection of acute peripheral fractures in daily radiological practice in an emergency department was good to high and depended on the AI algorithm, patient age, and body region examined.
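The per-set accuracies, sensitivities, and specificities above all derive from standard confusion-matrix counts against the radiologists' ground truth. A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix
    counts (true/false positives and negatives) at the exam level."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of fractures detected
        "specificity": tn / (tn + fp),   # fraction of normals cleared
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for one algorithm on 200 radiography sets
m = diagnostic_metrics(tp=90, fp=10, tn=85, fn=15)
print(m)  # sensitivity ≈ 0.857, specificity ≈ 0.895, accuracy = 0.875
```

Comparing algorithms on the same sets, as done here with McNemar-style paired tests, requires these counts to come from identical cases rather than pooled rates.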
Affiliation(s)
- Valérie Bousson, Grégoire Attané, Nicolas Benoist, Laetitia Perronne
- Radiology Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France
- Abdourahmane Diallo
- Clinical Research Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, Paris, France
- Lama Hadid-Beurrier
- Medical Physics Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, Paris, France
- Emmanuel Martin
- Information Technology Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, Paris, France
- Lounis Hamzi
- Radiology Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France
- Arnaud Depil Duval
- Emergency Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, Paris, France; Emergency Department, Saint-Joseph Hospital, Paris, France
- Eric Revue
- Emergency Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, Paris, France
- Eric Vicaut
- Clinical Research Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, Paris, France
- Cécile Salvat
- Medical Physics Department, Lariboisière Hospital, AP-HP.Nord-Université de Paris, Paris, France
9
Isaieva K, Leclère J, Felblinger J, Gillet R, Dubernard X, Vuissoz PA. Methodology for quantitative evaluation of mandibular condyles motion symmetricity from real-time MRI in the axial plane. Magn Reson Imaging 2023; 102:115-125. [PMID: 37187265] [DOI: 10.1016/j.mri.2023.05.006]
Abstract
Diagnosis of temporomandibular disorders is currently based on clinical examination and static MRI. Real-time MRI enables tracking of condylar motion and, thus, evaluation of motion symmetricity, asymmetry of which could be associated with temporomandibular joint disorders. The purpose of this work was to propose an acquisition protocol, an image-processing approach, and a set of parameters enabling objective assessment of motion asymmetry; to check the reliability and limitations of the approach; and to verify whether the automatically calculated parameters are associated with motion symmetricity. A rapid radial FLASH sequence was used to acquire a dynamic set of axial images in 10 subjects; one additional subject was included to estimate the dependence of the motion parameters on slice placement. The images were segmented with a semi-automatic approach based on the U-Net convolutional neural network, and the condyles' mass centers were projected onto the mid-sagittal axis. The resulting projection curves were used to extract various motion parameters, including latency, velocity peak delay, and maximal displacement difference between the right and left condyles, and these automatically calculated parameters were compared with the physicians' scores. The proposed segmentation approach allowed reliable center-of-mass tracking. Latency and velocity peak delay were invariant to slice position, whereas the maximal displacement difference varied considerably. The automatically calculated parameters demonstrated a significant correlation with the experts' scores. The proposed acquisition and data-processing protocol thus enables automatable extraction of quantitative parameters characterizing the symmetricity of condylar motion.
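Projecting each condyle's mass center onto the mid-sagittal axis, as described above, amounts to a scalar projection of a 2-D point onto a line. The sketch below is a hedged illustration of that geometric step only (the function, axis, and coordinates are invented for the example, not taken from the paper):

```python
def project_onto_axis(point, axis_origin, axis_direction):
    """Scalar projection of a 2-D point onto an axis given by an
    origin and a (not necessarily unit) direction vector: the signed
    distance of the point along the axis."""
    dx, dy = point[0] - axis_origin[0], point[1] - axis_origin[1]
    ax, ay = axis_direction
    norm = (ax * ax + ay * ay) ** 0.5
    return (dx * ax + dy * ay) / norm

# Hypothetical left/right condyle centres (mm) against a vertical
# mid-sagittal axis through the origin
axis_o, axis_d = (0.0, 0.0), (0.0, 1.0)
left, right = (-30.0, 12.0), (31.0, 10.0)
print(project_onto_axis(left, axis_o, axis_d) -
      project_onto_axis(right, axis_o, axis_d))  # 2.0
```

Repeating this per frame for each condyle yields the two projection curves from which latency and velocity-peak parameters can then be read off.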
Affiliation(s)
- Karyna Isaieva
- IADI, University of Lorraine, INSERM U1254, Nancy, France
- Justine Leclère
- IADI, University of Lorraine, INSERM U1254, Nancy, France; Oral Medicine Department, University Hospital of Reims, Reims, France
- Jacques Felblinger
- IADI, University of Lorraine, INSERM U1254, Nancy, France; CIC-IT 1433, INSERM, CHRU de Nancy, Nancy, France
- Romain Gillet
- IADI, University of Lorraine, INSERM U1254, Nancy, France; Guilloz Imaging Department, CHRU of Nancy, Nancy, France
10
Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Arch Comput Methods Eng 2023; 30:3173-3233. [PMID: 37260910] [PMCID: PMC10071480] [DOI: 10.1007/s11831-023-09899-9]
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, including object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multilingual translation, local language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages during augmentation of the data. Recently, various ideas from deep learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance and execution of CNNs, and innovations in internal architecture and representational style have further boosted performance. This survey covers the internal taxonomy of deep learning and different convolutional neural network models, with emphasis on model depth and width, as well as CNN components, applications, and current challenges of deep learning.
Collapse
Affiliation(s)
- Saeed Iqbal: Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan; Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi: Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood: Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia
11.
Gerami MH, Khorram R, Rasoolzadegan S, Mardpour S, Nakhaei P, Hashemi S, Al-Naqeeb BZT, Aminian A, Samimi S. Emerging role of mesenchymal stem/stromal cells (MSCs) and MSCs-derived exosomes in bone- and joint-associated musculoskeletal disorders: a new frontier. Eur J Med Res 2023; 28:86. [PMID: 36803566] [PMCID: PMC9939872] [DOI: 10.1186/s40001-023-01034-5]
Abstract
Exosomes are membranous vesicles 30 to 150 nm in diameter secreted by mesenchymal stem/stromal cells (MSCs) and other cells, such as immune cells and cancer cells. Exosomes convey proteins, bioactive lipids, and genetic components such as microRNAs (miRNAs) to recipient cells. Consequently, they have been implicated as mediators of intercellular communication under physiological and pathological circumstances. Exosome therapy, as a cell-free approach, bypasses many concerns regarding the therapeutic application of stem/stromal cells, including undesirable proliferation, heterogeneity, and immunogenic effects. Indeed, exosomes have become a promising strategy to treat human diseases, particularly bone- and joint-associated musculoskeletal disorders, because of characteristics such as enhanced stability in circulation, biocompatibility, low immunogenicity, and low toxicity. In this light, a diversity of studies have indicated that inhibiting inflammation, inducing angiogenesis, provoking osteoblast and chondrocyte proliferation and migration, and negatively regulating matrix-degrading enzymes result in bone and cartilage recovery upon administration of MSC-derived exosomes. Notwithstanding, the insufficient quantity of isolated exosomes, the lack of a reliable potency test, and exosome heterogeneity hinder their clinical application. Herein, we outline the advantages of MSC-derived exosome-based therapy in common bone- and joint-associated musculoskeletal disorders and review the mechanisms underlying its therapeutic merits in these conditions.
Affiliation(s)
- Mohammad Hadi Gerami: Bone and Joint Diseases Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Roya Khorram: Bone and Joint Diseases Research Center, Department of Orthopedic Surgery, Shiraz University of Medical Sciences, Shiraz, Iran
- Soheil Rasoolzadegan: Department of Surgery, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Saeid Mardpour: Department of Radiology, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Pooria Nakhaei: Endocrinology and Metabolism Research Center (EMRC), Vali-Asr Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Soheyla Hashemi: Obstetrics, Gynaecology & Infertility Department, Isfahan University of Medical Sciences, Isfahan, Iran
- Amir Aminian: Bone and Joint Reconstruction Research Center, Shafa Orthopedic Hospital, Iran University of Medical Sciences, Tehran, Iran
- Sahar Samimi: Tehran University of Medical Sciences, Tehran, Iran
12.
Benzakour A, Altsitzioglou P, Lemée JM, Ahmad A, Mavrogenis AF, Benzakour T. Artificial intelligence in spine surgery. Int Orthop 2023; 47:457-465. [PMID: 35902390] [DOI: 10.1007/s00264-022-05517-8]
Abstract
The continuous progress of research and clinical trials has offered a wide variety of information concerning the spine and the treatment of the different spinal pathologies that may occur. Planning the best therapy for each patient can be a difficult and challenging task, as it often requires thorough processing of the patient's history and individual characteristics by the clinician. Clinicians and researchers also face problems with data availability due to patients' personal information protection policies. Artificial intelligence refers to the reproduction of human intelligence via special programs and computers that are trained in a way that simulates human cognitive functions. Implementations of artificial intelligence in daily clinical practice are considered a novel breakthrough in modern medicine: surgical robots that facilitate spine surgery and reduce the radiation dose to medical staff, algorithms that predict the possible outcomes of conservative versus surgical treatment in patients with low back pain and disk herniations, and systems that create artificial populations with great resemblance and similar characteristics to real patients. To enhance the body of the related literature and inform readers on the clinical applications of artificial intelligence, we performed this review to discuss the contribution of artificial intelligence to spine surgery and pathology.
Affiliation(s)
- Ahmed Benzakour: Centre Orléanais du Dos - Pôle Santé Oréliance, Saran, France
- Pavlos Altsitzioglou: First Department of Orthopaedics, National and Kapodistrian University of Athens, School of Medicine, Athens, Greece
- Jean Michel Lemée: Department of Neurosurgery, University Hospital of Angers, Angers, France
- Andreas F Mavrogenis: First Department of Orthopaedics, National and Kapodistrian University of Athens, School of Medicine, Athens, Greece
13.
Oeding JF, Williams RJ, Nwachukwu BU, Martin RK, Kelly BT, Karlsson J, Camp CL, Pearle AD, Ranawat AS, Pareek A. A practical guide to the development and deployment of deep learning models for the Orthopedic surgeon: part I. Knee Surg Sports Traumatol Arthrosc 2023; 31:382-389. [PMID: 36427077] [DOI: 10.1007/s00167-022-07239-1]
Abstract
Deep learning has a profound impact on daily life. As Orthopedics makes use of this rapid escalation in technology, Orthopedic surgeons will need to take leadership roles on deep learning projects. Moreover, surgeons must possess an understanding of what is necessary to design and implement deep learning-based project pipelines. This review provides a practical guide for the Orthopedic surgeon to understand the steps needed to design, develop, and deploy a deep learning pipeline for clinical applications. A detailed description of the processes involved in defining the problem, building the team, acquiring and curating the data, labeling the data, establishing the ground truth, pre-processing and augmenting the data, and selecting the required hardware is provided. In addition, an overview of unique considerations involved in the training and evaluation of deep learning models is provided. This review strives to provide surgeons with the groundwork needed to identify gaps in the clinical landscape that deep learning models may be able to fill and equips them with the knowledge needed to lead an interdisciplinary team through the process of creating novel deep-learning-based solutions to fill those gaps.
Affiliation(s)
- Jacob F Oeding: Mayo Clinic Alix School of Medicine, Rochester, MN, USA
- Riley J Williams: Sports Medicine and Shoulder Service, Hospital for Special Surgery, 535 East 70th Street, New York, NY 10021, USA
- Benedict U Nwachukwu: Sports Medicine and Shoulder Service, Hospital for Special Surgery, 535 East 70th Street, New York, NY 10021, USA
- R Kyle Martin: Department of Orthopedic Surgery, University of Minnesota, Minneapolis, MN, USA
- Bryan T Kelly: Sports Medicine and Shoulder Service, Hospital for Special Surgery, 535 East 70th Street, New York, NY 10021, USA
- Jón Karlsson: Department of Orthopaedics, Sahlgrenska University Hospital, Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
- Christopher L Camp: Department of Orthopedic Surgery and Sports Medicine, Rochester, MN, USA
- Andrew D Pearle: Sports Medicine and Shoulder Service, Hospital for Special Surgery, 535 East 70th Street, New York, NY 10021, USA
- Anil S Ranawat: Sports Medicine and Shoulder Service, Hospital for Special Surgery, 535 East 70th Street, New York, NY 10021, USA
- Ayoosh Pareek: Sports Medicine and Shoulder Service, Hospital for Special Surgery, 535 East 70th Street, New York, NY 10021, USA
14.
Lin CT, Ghosh S, Hinkley LB, Dale CL, Souza ACS, Sabes JH, Hess CP, Adams ME, Cheung SW, Nagarajan SS. Multi-tasking deep network for tinnitus classification and severity prediction from multimodal structural MR images. J Neural Eng 2023; 20. [PMID: 36595270] [DOI: 10.1088/1741-2552/acab33]
Abstract
Objective: Subjective tinnitus is an auditory phantom perceptual disorder without an objective biomarker. Fast and efficient diagnostic tools will advance clinical practice by detecting or confirming the condition, tracking changes in severity, and monitoring treatment response. Motivated by evidence of subtle anatomical, morphological, or functional information in magnetic resonance images of the brain, we examine data-driven machine learning methods for joint tinnitus classification (tinnitus or no tinnitus) and tinnitus severity prediction. Approach: We propose a deep multi-task multimodal framework for tinnitus classification and severity prediction using structural MRI (sMRI) data. To leverage complementary information in multimodal neuroimaging data, we integrate two modalities of three-dimensional sMRI: T1-weighted (T1w) and T2-weighted (T2w) images. To explore the key components in the MR images that drive task performance, we segment both T1w and T2w images into three components, cerebrospinal fluid, grey matter, and white matter, and evaluate the performance of each segmented image. Main results: Results demonstrate that our multimodal framework capitalizes on the information across both modalities (T1w and T2w) for the joint task of tinnitus classification and severity prediction. Significance: Our model outperforms existing learning-based and conventional methods in terms of accuracy, sensitivity, specificity, and negative predictive value.
Affiliation(s)
- Chieh-Te Lin: Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, USA
- Sanjay Ghosh: Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, USA
- Leighton B Hinkley: Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, USA
- Corby L Dale: Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, USA
- Ana C S Souza: Department of Telecommunication and Mechatronics Engineering, Federal University of Sao Joao del-Rei, Praca Frei Orlando, 170, Sao Joao del Rei 36307, MG, Brazil
- Jennifer H Sabes: Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 2380 Sutter St., San Francisco, CA 94115, USA
- Christopher P Hess: Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, USA
- Meredith E Adams: Department of Otolaryngology-Head and Neck Surgery, University of Minnesota, Phillips Wangensteen Building, 516 Delaware St., Minneapolis, MN 55455, USA
- Steven W Cheung: Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 2380 Sutter St., San Francisco, CA 94115, USA; Surgical Services, Veterans Affairs, 4150 Clement St., San Francisco, CA 94121, USA
- Srikantan S Nagarajan: Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, USA; Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 2380 Sutter St., San Francisco, CA 94115, USA; Surgical Services, Veterans Affairs, 4150 Clement St., San Francisco, CA 94121, USA
15.
Ni M, Zhao Y, Wen X, Lang N, Wang Q, Chen W, Zeng X, Yuan H. Deep learning-assisted classification of calcaneofibular ligament injuries in the ankle joint. Quant Imaging Med Surg 2023; 13:80-93. [PMID: 36620152] [PMCID: PMC9816759] [DOI: 10.21037/qims-22-470]
Abstract
Background The classification of calcaneofibular ligament (CFL) injuries on magnetic resonance imaging (MRI) is time-consuming and subject to substantial interreader variability. This study explores the feasibility of classifying CFL injuries using deep learning methods by comparing them with the classifications of musculoskeletal (MSK) radiologists, and further examines image cropping, screening, and calibration methods. Methods The imaging data of 1,074 patients who underwent ankle arthroscopy and MRI examinations in our hospital were retrospectively analyzed. According to the arthroscopic findings, patients were divided into normal (class 0, n=475); degeneration, strain, and partial tear (class 1, n=217); and complete tear (class 2, n=382) groups. All patients were divided into training, validation, and test sets at a ratio of 8:1:1. After preprocessing, the images were cropped using a Mask region-based convolutional neural network (R-CNN), followed by the application of an attention algorithm for image screening and calibration and the implementation of LeNet-5 for CFL injury classification. The diagnostic effects of the axial, coronal, and combined models were compared, and the best method was selected for outgroup validation. The diagnostic results of the models in the intragroup and outgroup test sets were compared with those of 4 MSK radiologists of different seniorities. Results The mean average precision (mAP) of the Mask R-CNN using the attention algorithm for the left and right image cropping of axial and coronal sequences was 0.90-0.96. The accuracy of LeNet-5 in classifying classes 0-2 was 0.92, 0.93, and 0.92, respectively, for the axial sequences and 0.89, 0.92, and 0.90, respectively, for the coronal sequences. After sequence combination, the classification accuracy for classes 0-2 was 0.95, 0.97, and 0.96, respectively. The mean accuracies of the 4 MSK radiologists in classifying the intragroup test set as classes 0-2 were 0.94, 0.91, 0.86, and 0.85, all of which differed significantly from those of the model. The mean accuracies of the MSK radiologists in classifying the outgroup test set as classes 0-2 were 0.92, 0.91, 0.87, and 0.85, with the 2 senior MSK radiologists demonstrating diagnostic performance similar to that of the model and the junior MSK radiologists demonstrating worse accuracy. Conclusions Deep learning can be used to classify CFL injuries at levels similar to those of MSK radiologists. Adding an attention algorithm after cropping is helpful for accurately cropping CFL images.
Affiliation(s)
- Ming Ni: Department of Radiology, Peking University Third Hospital, Beijing, China
- Yuqing Zhao: Department of Radiology, Peking University Third Hospital, Beijing, China
- Xiaoyi Wen: Institute of Statistics and Big Data, Renmin University of China, Beijing, China
- Ning Lang: Department of Radiology, Peking University Third Hospital, Beijing, China
- Qizheng Wang: Department of Radiology, Peking University Third Hospital, Beijing, China
- Wen Chen: Department of Radiology, Peking University Third Hospital, Beijing, China
- Xiangzhu Zeng: Department of Radiology, Peking University Third Hospital, Beijing, China
- Huishu Yuan: Department of Radiology, Peking University Third Hospital, Beijing, China
16.
Lee SH, Lee J, Oh KS, Yoon JP, Seo A, Jeong Y, Chung SW. Automated 3-dimensional MRI segmentation for the posterosuperior rotator cuff tear lesion using deep learning algorithm. PLoS One 2023; 18:e0284111. [PMID: 37200275] [DOI: 10.1371/journal.pone.0284111]
Abstract
INTRODUCTION Rotator cuff tear (RCT) is a challenging and common musculoskeletal disease. Magnetic resonance imaging (MRI) is a commonly used diagnostic modality for RCT, but interpretation of the results is tedious and has some reliability issues. In this study, we aimed to evaluate the accuracy and efficacy of 3-dimensional (3D) MRI segmentation for RCT using a deep learning algorithm. METHODS A 3D U-Net convolutional neural network (CNN) was developed to detect, segment, and visualize RCT lesions in 3D, using MRI data from 303 patients with RCTs. The RCT lesions were labeled in the entire MR image by two shoulder specialists using in-house software. The MRI-based 3D U-Net CNN was trained after augmentation of the training dataset and tested using randomly selected test data (training:validation:test ratio of 6:2:2). The segmented RCT lesion was visualized in a 3D reconstructed image, and the performance of the 3D U-Net CNN was evaluated using the Dice coefficient, sensitivity, specificity, precision, F1-score, and Youden index. RESULTS The deep learning algorithm using a 3D U-Net CNN successfully detected, segmented, and visualized the area of RCT in 3D. The model's performance reached a Dice coefficient of 94.3%, sensitivity of 97.1%, specificity of 95.0%, precision of 84.9%, F1-score of 90.5%, and Youden index of 91.8%. CONCLUSION The proposed model for 3D segmentation of RCT lesions using MRI data showed overall high accuracy and successful 3D visualization. Further studies are necessary to determine the feasibility of its clinical application and whether its use could improve care and outcomes.
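All of the segmentation metrics quoted above derive from voxel-wise true/false positive and negative counts between the predicted and reference masks. As a minimal illustrative sketch (not the study's code), they can be computed from two flat binary masks like so:

```python
def segmentation_metrics(pred, truth):
    """Voxel-wise overlap metrics for two flat binary masks (illustrative sketch)."""
    tp = sum(p and t for p, t in zip(pred, truth))          # lesion voxels found
    tn = sum(not p and not t for p, t in zip(pred, truth))  # background kept clean
    fp = sum(p and not t for p, t in zip(pred, truth))      # false alarms
    fn = sum(not p and t for p, t in zip(pred, truth))      # missed lesion voxels
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)  # for binary masks, Dice equals the F1-score
    return {"dice": dice, "sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": dice,
            "youden": sensitivity + specificity - 1}

# Toy example: the prediction recovers 3 of 4 true lesion voxels with 1 false alarm.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 1, 0, 0, 0]
m = segmentation_metrics(pred, truth)
```

Note that the Dice coefficient and the F1-score coincide for binary masks, which is why papers that report both (as above) always report consistent pairs of values.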
Affiliation(s)
- Su Hyun Lee: Department of Orthopaedic Surgery, Seoul Red Cross Hospital, Seoul, Korea
- JiHwan Lee: Department of Orthopedic Surgery, Myongji Hospital, Goyang-si, Korea
- Kyung-Soo Oh: Department of Orthopaedic Surgery, Konkuk University School of Medicine, Seoul, Korea
- Jong Pil Yoon: Department of Orthopaedic Surgery, Kyungpook National University College of Medicine, Daegu, Korea
- Anna Seo: SEEANN Solution, Yeonsu-gu, Incheon, Korea
- Seok Won Chung: Department of Orthopaedic Surgery, Konkuk University School of Medicine, Seoul, Korea
17.
Radiomics and Deep Learning for Disease Detection in Musculoskeletal Radiology: An Overview of Novel MRI- and CT-Based Approaches. Invest Radiol 2023; 58:3-13. [PMID: 36070548] [DOI: 10.1097/rli.0000000000000907]
Abstract
Radiomics and machine learning-based methods offer exciting opportunities for improving diagnostic performance and efficiency in musculoskeletal radiology for various tasks, including acute injuries, chronic conditions, spinal abnormalities, and neoplasms. While early radiomics-based methods were often limited to a smaller number of higher-order image feature extractions, applying machine learning-based analytic models, multifactorial correlations, and classifiers now permits big data processing and testing thousands of features to identify relevant markers. A growing number of novel deep learning-based methods describe magnetic resonance imaging- and computed tomography-based algorithms for diagnosing anterior cruciate ligament tears, meniscus tears, articular cartilage defects, rotator cuff tears, fractures, metastatic skeletal disease, and soft tissue tumors. Initial radiomics and deep learning techniques have focused on binary detection tasks, such as determining the presence or absence of a single abnormality and differentiating benign from malignant lesions. Newer-generation algorithms aim to include practically relevant multiclass characterization of detected abnormalities, such as typing and malignancy grading of neoplasms. So-called delta-radiomics assesses tumor features before and after treatment, with temporal changes in radiomics features serving as surrogate markers for tumor responses to treatment. New approaches also predict treatment success rates, surgical resection completeness, and recurrence risk. Practice-relevant goals for the next generation of algorithms include diagnostic whole-organ and advanced classification capabilities. Important research objectives to fill current knowledge gaps include well-designed research studies to understand how diagnostic performances and suggested efficiency gains of isolated research settings translate into routine daily clinical practice. 
This article summarizes current radiomics- and machine learning-based magnetic resonance imaging and computed tomography approaches for musculoskeletal disease detection and offers a perspective on future goals and objectives.
18.
Performance of a deep convolutional neural network for MRI-based vertebral body measurements and insufficiency fracture detection. Eur Radiol 2022; 33:3188-3199. [PMID: 36576545] [PMCID: PMC10121505] [DOI: 10.1007/s00330-022-09354-6]
Abstract
OBJECTIVES The aim is to validate the performance of a deep convolutional neural network (DCNN) for vertebral body measurements and insufficiency fracture detection on lumbar spine MRI. METHODS This retrospective analysis included 1000 vertebral bodies in 200 patients (age 75.2 ± 9.8 years) who underwent lumbar spine MRI at multiple institutions. 160/200 patients had ≥ one vertebral body insufficiency fracture, 40/200 had no fracture. The performance of the DCNN and that of two fellowship-trained musculoskeletal radiologists in vertebral body measurements (anterior/posterior height, extent of endplate concavity, vertebral angle) and evaluation for insufficiency fractures were compared. Statistics included (a) interobserver reliability metrics using intraclass correlation coefficient (ICC), kappa statistics, and Bland-Altman analysis, and (b) diagnostic performance metrics (sensitivity, specificity, accuracy). A statistically significant difference was accepted if the 95% confidence intervals did not overlap. RESULTS The inter-reader agreement between radiologists and the DCNN was excellent for vertebral body measurements, with ICC values of > 0.94 for anterior and posterior vertebral height and vertebral angle, and good to excellent for superior and inferior endplate concavity with ICC values of 0.79-0.85. The performance of the DCNN in fracture detection yielded a sensitivity of 0.941 (0.903-0.968), specificity of 0.969 (0.954-0.980), and accuracy of 0.962 (0.948-0.973). The diagnostic performance of the DCNN was independent of the radiological institution (accuracy 0.964 vs. 0.960), type of MRI scanner (accuracy 0.957 vs. 0.964), and magnetic field strength (accuracy 0.966 vs. 0.957). CONCLUSIONS A DCNN can achieve high diagnostic performance in vertebral body measurements and insufficiency fracture detection on heterogeneous lumbar spine MRI. 
KEY POINTS • A DCNN has the potential for high diagnostic performance in measuring vertebral bodies and detecting insufficiency fractures of the lumbar spine.
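The interobserver agreement above is quantified with the intraclass correlation coefficient. A compact illustrative implementation, not the study's code, of ICC(2,1) (two-way random effects, absolute agreement, single rater) is:

```python
def icc_2_1(ratings):
    """ICC(2,1) for a table with one row per subject and one column per rater."""
    n = len(ratings)     # number of subjects
    k = len(ratings[0])  # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)  # between-subject variation
    ssc = n * sum((m - grand) ** 2 for m in col_means)  # between-rater variation
    sst = sum((x - grand) ** 2 for row in ratings for x in row)
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a rater with a constant offset from the others (a systematic bias) lowers the coefficient even when the rank ordering of subjects is identical.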
19.
Coarse X-ray Lumbar Vertebrae Pose Localization and Registration Using Triangulation Correspondence. Processes (Basel) 2022. [DOI: 10.3390/pr11010061]
Abstract
Plain film X-ray scanners are indispensable for medical diagnostics and clinical procedures. Such a device typically produces two radiographic images of the human spine: the anteroposterior and lateral views. However, these two views are acquired from distinct perspectives. The proposed procedure consists of three fundamental steps. For automated cropping, the grayscale lumbar input image is first projected vertically using its vertical pattern. Then, Delaunay triangulation is performed with the SURF features serving as the triangle nodes. The posture area of the vertebrae is calculated by utilizing the edge density of each node. The proposed method provides an automated estimate of the position of the human lumbar vertebrae, thereby decreasing the radiologist's workload, computing time, and complexity in a variety of bone-clinical applications. Numerous applications can be supported by the results of the proposed method, including segmentation of lumbar vertebrae pose, bone mineral density examination, and vertebral pose deformation. The proposed method can estimate the vertebral position with an accuracy of 80.32%, a recall of 85.37%, a precision of 82.36%, and a false-negative rate of 15.42%.
20.
Nazari-Farsani S, Yu Y, Duarte Armindo R, Lansberg M, Liebeskind DS, Albers G, Christensen S, Levin CS, Zaharchuk G. Predicting final ischemic stroke lesions from initial diffusion-weighted images using a deep neural network. Neuroimage Clin 2022; 37:103278. [PMID: 36481696] [PMCID: PMC9727698] [DOI: 10.1016/j.nicl.2022.103278]
Abstract
BACKGROUND For prognosis of stroke, measurement of the diffusion-perfusion mismatch is a common practice for estimating tissue at risk of infarction in the absence of timely reperfusion. However, perfusion-weighted imaging (PWI) adds time and expense to the acute stroke imaging workup. We explored whether a deep convolutional neural network (DCNN) model trained with diffusion-weighted imaging obtained at admission could predict final infarct volume and location in acute stroke patients. METHODS In 445 patients, we trained and validated an attention-gated (AG) DCNN to predict final infarcts as delineated on follow-up studies obtained 3 to 7 days after stroke. The input channels consisted of MR diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) maps, and thresholded ADC maps with values less than 620 × 10⁻⁶ mm²/s, while the output was a voxel-by-voxel probability map of tissue infarction. We evaluated the performance of the model using the area under the receiver-operating characteristic curve (AUC), the Dice similarity coefficient (DSC), absolute lesion volume error, and the concordance correlation coefficient (ρc) of the predicted and true infarct volumes. RESULTS The model obtained a median AUC of 0.91 (IQR: 0.84-0.96). After thresholding at an infarction probability of 0.5, the median sensitivity and specificity were 0.60 (IQR: 0.16-0.84) and 0.97 (IQR: 0.93-0.99), respectively, while the median DSC and absolute volume error were 0.50 (IQR: 0.17-0.66) and 27 ml (IQR: 7-60 ml), respectively. The model's predicted lesion volumes showed high correlation with ground-truth volumes (ρc = 0.73, p < 0.01). CONCLUSION An AG-DCNN using diffusion information alone upon admission was able to predict infarct volumes 3-7 days after stroke onset with accuracy comparable to models that consider both DWI and PWI. This may enable treatment decisions to be made with shorter stroke imaging protocols.
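The concordance correlation coefficient ρc used above to compare predicted and true infarct volumes penalizes both poor correlation and systematic bias, unlike Pearson's r. A small illustrative implementation of Lin's ρc (a sketch, not the study's code) is:

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # population covariance
    sx2 = sum((a - mx) ** 2 for a in x) / n                   # population variances
    sy2 = sum((b - my) ** 2 for b in y) / n
    # The (mx - my)^2 term in the denominator penalizes systematic bias.
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

For example, predictions that track the truth perfectly but overestimate every volume by a constant amount have Pearson r = 1 yet ρc < 1, which is exactly why ρc is preferred for agreement of volume estimates.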
Affiliation(s)
- Yannan Yu: Department of Radiology, Stanford University, CA, USA; Internal Medicine Department, University of Massachusetts Memorial Medical Center, University of Massachusetts, Boston, USA
- Rui Duarte Armindo: Department of Radiology, Stanford University, CA, USA; Department of Neuroradiology, Hospital Beatriz Ângelo, Loures, Lisbon, Portugal
- David S Liebeskind: Department of Neurology, University of California Los Angeles, Los Angeles, CA, USA
- Craig S Levin: Department of Radiology, Stanford University, CA, USA
21.
Ma Y, Jang H, Jerban S, Chang EY, Chung CB, Bydder GM, Du J. Making the invisible visible-ultrashort echo time magnetic resonance imaging: Technical developments and applications. Appl Phys Rev 2022; 9:041303. [PMID: 36467869] [PMCID: PMC9677812] [DOI: 10.1063/5.0086459]
Abstract
Magnetic resonance imaging (MRI) uses a large magnetic field and radio waves to generate images of tissues in the body. Conventional MRI techniques have been developed to image and quantify tissues and fluids with long transverse relaxation times (T2s), such as muscle, cartilage, liver, white matter, gray matter, spinal cord, and cerebrospinal fluid. However, the body also contains many tissues and tissue components such as the osteochondral junction, menisci, ligaments, tendons, bone, lung parenchyma, and myelin, which have short or ultrashort T2s. After radio frequency excitation, their transverse magnetizations typically decay to zero or near zero before the receiving mode is enabled for spatial encoding with conventional MR imaging. As a result, these tissues appear dark, and their MR properties are inaccessible. However, when ultrashort echo times (UTEs) are used, signals can be detected from these tissues before they decay to zero. This review summarizes recent technical developments in UTE MRI of tissues with short and ultrashort T2 relaxation times. A series of UTE MRI techniques for high-resolution morphological and quantitative imaging of these short-T2 tissues are discussed. Applications of UTE imaging in the musculoskeletal, nervous, respiratory, gastrointestinal, and cardiovascular systems of the body are included.
Affiliation(s)
- Yajun Ma
- Department of Radiology, University of California, San Diego, California 92037, USA
- Hyungseok Jang
- Department of Radiology, University of California, San Diego, California 92037, USA
- Saeed Jerban
- Department of Radiology, University of California, San Diego, California 92037, USA
- Graeme M Bydder
- Department of Radiology, University of California, San Diego, California 92037, USA
- Jiang Du
- Author to whom correspondence should be addressed. Tel.: (858) 246-2248; Fax: (858) 246-2221
22
D'Angelo T, Caudo D, Blandino A, Albrecht MH, Vogl TJ, Gruenewald LD, Gaeta M, Yel I, Koch V, Martin SS, Lenga L, Muscogiuri G, Sironi S, Mazziotti S, Booz C. Artificial intelligence, machine learning and deep learning in musculoskeletal imaging: Current applications. J Clin Ultrasound 2022; 50:1414-1431. [PMID: 36069404] [DOI: 10.1002/jcu.23321]
Abstract
Artificial intelligence is rapidly expanding in all technological fields, and the medical field, especially diagnostic imaging, shows some of the greatest developmental potential. Artificial intelligence aims to simulate human intelligence in order to manage complex problems. This review describes the technical background of artificial intelligence, machine learning, and deep learning. The first section illustrates the general potential of artificial intelligence applications in the context of request management, data acquisition, image reconstruction, archiving, and communication systems. The second section discusses the prospects of dedicated tools for segmentation, lesion detection, automatic diagnosis, and classification of musculoskeletal disorders.
Affiliation(s)
- Tommaso D'Angelo
- Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy
- Department of Radiology and Nuclear Medicine, Rotterdam, Netherlands
- Danilo Caudo
- Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy
- Department of Radiology, IRCCS Centro Neurolesi "Bonino Pulejo", Messina, Italy
- Alfredo Blandino
- Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy
- Moritz H Albrecht
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Thomas J Vogl
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Leon D Gruenewald
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Michele Gaeta
- Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy
- Ibrahim Yel
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Vitali Koch
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Simon S Martin
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Lukas Lenga
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Giuseppe Muscogiuri
- School of Medicine and Surgery, University of Milano-Bicocca, Milan, Italy
- Department of Radiology, IRCCS Istituto Auxologico Italiano, San Luca Hospital, Milan, Italy
- Sandro Sironi
- School of Medicine and Surgery, University of Milano-Bicocca, Milan, Italy
- Department of Radiology, ASST Papa Giovanni XXIII Hospital, Bergamo, Italy
- Silvio Mazziotti
- Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy
- Christian Booz
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
23
Kim S, Jeong WK, Choi JH, Kim JH, Chun M. Development of deep learning-assisted overscan decision algorithm in low-dose chest CT: Application to lung cancer screening in Korean National CT accreditation program. PLoS One 2022; 17:e0275531. [PMID: 36174098] [PMCID: PMC9522252] [DOI: 10.1371/journal.pone.0275531]
Abstract
We propose a deep learning-assisted overscan decision algorithm for chest low-dose computed tomography (LDCT) applicable to lung cancer screening. The algorithm reflects radiologists’ subjective evaluation criteria according to the Korea Institute for Accreditation of Medical Imaging (KIAMI) guidelines, judging whether the scan range extends beyond landmark-based criteria. The algorithm consists of three stages: deep learning-based landmark segmentation, rule-based logical operations, and overscan determination. A total of 210 cases from a single institution (internal data) and 50 cases from 47 institutions (external data) were utilized for performance evaluation. Area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and Cohen’s kappa were used as evaluation metrics. Fisher’s exact test was performed to assess the statistical significance of overscan detectability, and univariate logistic regression analyses were performed for validation. Furthermore, the excessive effective dose was estimated from the amount of overscan and the absorbed-dose-to-effective-dose conversion factor. The algorithm yielded AUROC values of 0.976 (95% confidence interval [CI]: 0.925–0.987) and 0.997 (95% CI: 0.800–0.999) for the internal and external datasets, respectively. All metrics showed average performance scores greater than 90% in each evaluation dataset. Agreement between the AI-assisted overscan decision and the radiologists’ manual evaluation was statistically significant (p < 0.001, Fisher’s exact test). In the logistic regression analysis, demographics (age and sex), data source, CT vendor, and slice thickness had no statistically significant effect on the algorithm (each p > 0.05). Furthermore, the estimated excessive effective doses were 0.02 ± 0.01 mSv and 0.03 ± 0.05 mSv for the two datasets, reflecting only slight deviations from an acceptable scan range and thus of little concern.
We hope that the proposed overscan decision algorithm will enable retrospective scan-range monitoring in LDCT lung cancer screening programs, in keeping with the as low as reasonably achievable (ALARA) principle.
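The excess-dose estimate described above is simple arithmetic: the overscan length times CTDIvol gives an excess dose-length product (DLP), which a region-specific conversion factor turns into effective dose. A hedged sketch (the function name and the 0.014 mSv/(mGy·cm) adult-chest k-factor are assumptions for illustration, not values taken from the paper):

```python
def excess_effective_dose_msv(overscan_cm: float, ctdi_vol_mgy: float,
                              k_factor_msv_per_mgy_cm: float = 0.014) -> float:
    """Estimate the excess effective dose attributable to overscan.

    Excess DLP (mGy*cm) = CTDIvol * overscan length; effective dose = DLP * k,
    where k is a region-specific conversion factor (0.014 mSv/(mGy*cm) is a
    commonly quoted adult-chest value, used here as an assumption).
    """
    excess_dlp = ctdi_vol_mgy * overscan_cm
    return excess_dlp * k_factor_msv_per_mgy_cm

# e.g., 1 cm of overscan in a low-dose chest CT with a CTDIvol of 1.5 mGy
print(round(excess_effective_dose_msv(overscan_cm=1.0, ctdi_vol_mgy=1.5), 3))  # 0.021
```

Values of this order of magnitude are consistent with the small 0.02-0.03 mSv excess doses the study reports.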
Affiliation(s)
- Sihwan Kim
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
- ClariPi Research, Seoul, Republic of Korea
- Woo Kyoung Jeong
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jin Hwa Choi
- Department of Radiation Oncology, Chung-Ang University College of Medicine, Seoul, Republic of Korea
- Jong Hyo Kim
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
- ClariPi Research, Seoul, Republic of Korea
- Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Suwon, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Minsoo Chun
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Department of Radiation Oncology, Chung-Ang University Gwang Myeong Hospital, Gyeonggi-do, Republic of Korea
- * E-mail:
24
Yao J, Chepelev L, Nisha Y, Sathiadoss P, Rybicki FJ, Sheikh AM. Evaluation of a deep learning method for the automated detection of supraspinatus tears on MRI. Skeletal Radiol 2022; 51:1765-1775. [PMID: 35190850] [DOI: 10.1007/s00256-022-04008-6]
Abstract
OBJECTIVE To evaluate whether deep learning is a feasible approach for automated detection of supraspinatus tears on MRI. MATERIALS AND METHODS A total of 200 shoulder MRI studies performed between 2015 and 2019 were retrospectively obtained from our institutional database using balanced random sampling of studies containing a full-thickness tear, a partial-thickness tear, or an intact supraspinatus tendon. A 3-stage pipeline was developed, comprising a slice selection network based on a pre-trained residual neural network (ResNet); a segmentation network based on an encoder-decoder network (U-Net); and a custom multi-input convolutional neural network (CNN) classifier. Binary reference labels were created following review of radiologist reports and images by a radiology fellow and consensus validation by two musculoskeletal radiologists. Twenty percent of the data was reserved as a holdout test set, with the remaining 80% used for training and optimization under a fivefold cross-validation strategy. Classification and segmentation accuracy were evaluated using the area under the receiver operating characteristic curve (AUROC) and the Dice similarity coefficient, respectively. Baseline characteristics in correctly versus incorrectly classified cases were compared using independent-samples t-tests and chi-squared tests. RESULTS Test sensitivity and specificity of the classifier at the optimal Youden's index were both 85.0% (95% CI: 62.1-96.8%). AUROC was 0.943 (95% CI: 0.820-0.991). Dice segmentation accuracy was 0.814 (95% CI: 0.805-0.826). There was no significant difference in AUROC between 1.5 T and 3.0 T studies. Sub-analysis showed superior sensitivity in the full-thickness (100%) versus partial-thickness (72.5%) subgroup. CONCLUSION Deep learning is a feasible approach to detect supraspinatus tears on MRI.
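The classifier's operating point "at the optimal Youden's index" is the threshold maximizing J = sensitivity + specificity - 1 along the ROC curve. A minimal sketch with toy operating points (the threshold and sensitivity/specificity values are assumed for illustration, not the study's data):

```python
def optimal_youden(thresholds, sensitivities, specificities):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1."""
    best = max(zip(thresholds, sensitivities, specificities),
               key=lambda t: t[1] + t[2] - 1.0)
    thr, sens, spec = best
    return thr, sens + spec - 1.0

# Toy ROC operating points (assumed values)
thr, j = optimal_youden(
    thresholds=[0.3, 0.5, 0.7],
    sensitivities=[0.95, 0.85, 0.60],
    specificities=[0.55, 0.85, 0.95],
)
print(thr, round(j, 2))  # 0.5 0.7
```

Picking the threshold this way balances sensitivity against specificity, which is why the reported test sensitivity and specificity coincide at 85.0%.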
Affiliation(s)
- Jason Yao
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Leonid Chepelev
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Yashmin Nisha
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Paul Sathiadoss
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Frank J Rybicki
- Department of Radiology, University of Cincinnati College of Medicine, 234 Goodman Street, Box 670761, Cincinnati, OH, 45267-0761, USA
- Adnan M Sheikh
- Department of Radiology, The University of British Columbia Faculty of Medicine, 2775 Laurel Street, Vancouver, BC, V5Z 1M9, Canada
25
Improving Image Quality and Reducing Scan Time for Synthetic MRI of Breast by Using Deep Learning Reconstruction. Biomed Res Int 2022; 2022:3125426. [PMID: 36060133] [PMCID: PMC9439918] [DOI: 10.1155/2022/3125426]
Abstract
Objectives. To investigate a deep learning reconstruction algorithm to reduce the scan time of synthetic MRI (SynMRI) of the breast and improve image quality. Materials and Methods. A total of 192 healthy female volunteers (mean age: 48.1 years) underwent breast MR examination at 3.0 T from September 2020 to June 2021. Standard SynMRI and fast SynMRI scans were acquired in the same session for each volunteer. A deep learning model based on a generative adversarial network (GAN) was trained end to end to generate high-quality images from the fast SynMRI acquisitions. Peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity index measure (SSIM) were used to assess the quality of the generated images. Results. The fast SynMRI acquisition time is half that of the standard SynMRI scan, and for the images generated by the GAN model, PSNR and SSIM improved while MSE decreased. Conclusion. Applying a GAN-based deep learning algorithm to breast MAGiC MRI improves image quality and reduces scan time.
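Of the three metrics above, PSNR is computed directly from MSE against the reference image. A small sketch (the 8-bit peak value of 255 is an assumption; the MSE values are toy numbers):

```python
import math

def psnr_db(mse: float, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A lower MSE between generated and reference images yields a higher PSNR.
print(round(psnr_db(100.0), 2))  # 28.13
print(round(psnr_db(25.0), 2))   # 34.15
```

This inverse relationship is why the abstract can report improved PSNR and reduced MSE as two views of the same improvement.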
26
Feng C, Zhou X, Wang H, He Y, Li Z, Tu C. Research hotspots and emerging trends of deep learning applications in orthopedics: A bibliometric and visualized study. Front Public Health 2022; 10:949366. [PMID: 35928480] [PMCID: PMC9343683] [DOI: 10.3389/fpubh.2022.949366]
Abstract
Background As a research hotspot, deep learning is continually being combined with various fields of medicine, and the number of deep learning-based studies in orthopedics is growing. This bibliometric analysis aimed to identify the hotspots of deep learning applications in orthopedics in recent years and to infer future research trends. Methods We screened global publications on deep learning applications in orthopedics by accessing the Web of Science Core Collection. Articles and reviews were collected without language or time restrictions. CiteSpace was used to conduct the bibliometric analysis of the publications. Results A total of 822 articles and reviews were retrieved. Based on the annual publication counts, the application of deep learning in orthopedics shows great prospects for development. The most prolific country is the USA, followed by China. The University of California San Francisco and Skeletal Radiology are the most prolific institution and journal, respectively. LeCun Y is the most frequently cited author, and Nature has the highest impact factor among the cited journals. The current hot keywords are convolutional neural network, classification, segmentation, diagnosis, image, fracture, and osteoarthritis. The burst keywords are risk factor, identification, localization, and surgery. The timeline view showed two recent research directions: bone tumors and osteoporosis. Conclusion Publications on deep learning applications in orthopedics have increased in recent years, with the USA being the most prolific country. Current research mainly focuses on classification, diagnosis, and risk prediction in osteoarthritis and fractures from medical images. Future research may emphasize reducing intraoperative risk, predicting postoperative complications, screening for osteoporosis, and identifying and classifying bone tumors from conventional imaging.
Affiliation(s)
- Chengyao Feng
- The Department of Orthopaedics, The Second Xiangya Hospital of Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital of Central South University, Changsha, China
- Xiaowen Zhou
- Xiangya School of Medicine, Central South University, Changsha, China
- Hua Wang
- Xiangya School of Medicine, Central South University, Changsha, China
- Yu He
- The Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China
- Zhihong Li
- The Department of Orthopaedics, The Second Xiangya Hospital of Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital of Central South University, Changsha, China
- Chao Tu
- The Department of Orthopaedics, The Second Xiangya Hospital of Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital of Central South University, Changsha, China
- *Correspondence: Chao Tu
27
Yang M, Colak C, Chundru KK, Gaj S, Nanavati A, Jones MH, Winalski CS, Subhas N, Li X. Automated knee cartilage segmentation for heterogeneous clinical MRI using generative adversarial networks with transfer learning. Quant Imaging Med Surg 2022; 12:2620-2633. [PMID: 35502381] [PMCID: PMC9014147] [DOI: 10.21037/qims-21-459]
Abstract
BACKGROUND This study aimed to build a deep learning model to automatically segment heterogeneous clinical MRI scans by optimizing, with transfer learning, a model pretrained on a homogeneous research dataset. METHODS A conditional generative adversarial network pretrained on Osteoarthritis Initiative MR images was transferred to 30 sets of heterogeneous MR images collected from clinical routine. Two trained radiologists manually segmented the 30 sets of clinical MR images for model training, validation, and testing. Model performance was compared to that of models trained from scratch on different datasets, as well as to the two radiologists. A 5-fold cross-validation was performed. RESULTS The transfer learning model obtained an overall mean Dice coefficient of 0.819, a mean 95th-percentile Hausdorff distance of 1.463 mm, and a mean average symmetric surface distance of 0.350 mm on the 5 random holdout test sets. The 5-fold cross-validation yielded a mean Dice coefficient of 0.801, a mean 95th-percentile Hausdorff distance of 1.746 mm, and a mean average symmetric surface distance of 0.364 mm. The model outperformed the other models and performed similarly to the radiologists. CONCLUSIONS A transfer learning model automatically segmented knee cartilage in heterogeneous clinical MR images using a small training dataset, with performance comparable to that of humans. The model also proved robust when tested through cross-validation and on images from a different vendor. Fully automated cartilage segmentation of clinical knee MR images is therefore feasible and would facilitate the clinical application of quantitative MRI techniques and other prediction models for improved patient treatment planning.
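The Dice coefficient used above to score segmentation overlap is defined as twice the intersection of the two masks divided by the sum of their sizes. A minimal sketch for binary masks (the toy 1-D masks below are assumed data, not from the study):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient for two binary masks (flat sequences of 0/1):
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 1-D "masks": 3 overlapping voxels, 4 labeled in each mask
pred = [1, 1, 1, 1, 0, 0]
ref  = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(pred, ref))  # 0.75
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so the reported 0.819 indicates substantial voxel-wise agreement with the radiologists' manual segmentations.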
Affiliation(s)
- Mingrui Yang
- Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, USA
- Ceylan Colak
- Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Kishore K. Chundru
- Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Sibaji Gaj
- Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, USA
- Andreas Nanavati
- Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, USA
- Morgan H. Jones
- Department of Orthopaedic Surgery, Brigham and Women’s Hospital, Boston, MA, USA
- Carl S. Winalski
- Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, USA
- Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Naveen Subhas
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, USA
- Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Xiaojuan Li
- Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, USA
- Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
28
Feng L, Ma D, Liu F. Rapid MR relaxometry using deep learning: An overview of current techniques and emerging trends. NMR Biomed 2022; 35:e4416. [PMID: 33063400] [PMCID: PMC8046845] [DOI: 10.1002/nbm.4416]
Abstract
Quantitative mapping of MR tissue parameters such as the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2), and the spin-lattice relaxation time in the rotating frame (T1ρ), referred to collectively as MR relaxometry, has demonstrated improved assessment in a wide range of clinical applications. Compared with conventional contrast-weighted (e.g. T1-, T2-, or T1ρ-weighted) MRI, MR relaxometry provides increased sensitivity to pathologies and delivers important information that can be more specific to tissue composition and microenvironment. The rise of deep learning in the past several years has been revolutionizing many aspects of MRI research, including image reconstruction, image analysis, and disease diagnosis and prognosis. Although deep learning has also shown great potential for MR relaxometry and quantitative MRI in general, this research direction has been much less explored to date. The goal of this paper is to discuss the applications of deep learning for rapid MR relaxometry and to review emerging deep-learning-based techniques that can be applied to improve MR relaxometry in terms of imaging speed, image quality, and quantification robustness. The paper comprises an introduction and four further sections. Section 2 summarizes the imaging models of quantitative MR relaxometry. Section 3 reviews existing "classical" methods for accelerating MR relaxometry, including state-of-the-art spatiotemporal acceleration techniques, model-based reconstruction methods, and efficient parameter generation approaches. Section 4 then presents how deep learning can be used to improve MR relaxometry and how it is linked to conventional techniques. The final section concludes the review by discussing the promise and existing challenges of deep learning for rapid MR relaxometry and potential solutions to address these challenges.
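As an example of the classical parameter-generation step that such reviews contrast with deep learning, T2 can be estimated from a mono-exponential decay S(TE) = S0 * exp(-TE/T2) by a log-linear least-squares fit (the slope of ln S versus TE is -1/T2). The sketch below uses synthetic noise-free data with an assumed T2 of 40 ms:

```python
import math

def fit_t2_loglinear(tes_ms, signals):
    """Estimate T2 from mono-exponential decay S(TE) = S0 * exp(-TE/T2)
    by ordinary least squares on ln(S) versus TE (slope = -1/T2)."""
    xs = list(tes_ms)
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope

# Synthetic noise-free decay with T2 = 40 ms (assumed value)
tes = [10.0, 20.0, 40.0, 80.0]
sig = [math.exp(-te / 40.0) for te in tes]
print(round(fit_t2_loglinear(tes, sig), 1))  # 40.0
```

Voxel-wise fits of this kind across many echo images are exactly the slow, noise-sensitive step that the deep-learning-based relaxometry methods reviewed here aim to accelerate or replace.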
Affiliation(s)
- Li Feng
- Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York
- Dan Ma
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Fang Liu
- Department of Radiology, Massachusetts General Hospital, Harvard University, Boston, Massachusetts
29
Artificial intelligence in musculoskeletal imaging: a perspective on value propositions, clinical use, and obstacles. Skeletal Radiol 2022; 51:239-243. [PMID: 33983500] [DOI: 10.1007/s00256-021-03802-y]
Abstract
Artificial intelligence and deep learning (DL) offer musculoskeletal radiology exciting possibilities in multiple areas, including image reconstruction and transformation, tissue segmentation, workflow support, and disease detection. Novel DL-based image reconstruction algorithms correcting aliasing artifacts, signal loss, and noise amplification with previously unobtainable effectiveness are prime examples of how DL algorithms deliver promised value propositions in musculoskeletal radiology. The speed of DL-based tissue segmentation promises great efficiency gains that may permit the inclusion of tissue compositional-based information routinely into radiology reports. Similarly, DL algorithms give rise to a myriad of opportunities for workflow improvements, including intelligent and adaptive hanging protocols, speech recognition, report generation, scheduling, precertification, and billing. The value propositions of disease-detecting DL algorithms include reduced error rates and increased productivity. However, more studies using authentic clinical workflow settings are necessary to fully understand the value of DL algorithms for disease detection in clinical practice. Successful workflow integration and management of multiple algorithms are critical for translating the value propositions of DL algorithms into clinical practice but represent a major roadblock for which solutions are critically needed. While there is no consensus about the most sustainable business model, radiology departments will need to carefully weigh the benefits and disadvantages of each commercially available DL algorithm. Although more studies are needed to understand the value and impact of DL algorithms on clinical practice, DL technology will likely play an important role in the future of musculoskeletal imaging.
30
Li MD, Ahmed SR, Choy E, Lozano-Calderon SA, Kalpathy-Cramer J, Chang CY. Artificial intelligence applied to musculoskeletal oncology: a systematic review. Skeletal Radiol 2022; 51:245-256. [PMID: 34013447] [DOI: 10.1007/s00256-021-03820-w]
Abstract
Developments in artificial intelligence have the potential to improve the care of patients with musculoskeletal tumors. We performed a systematic review of the published scientific literature to identify the current state of the art of artificial intelligence applied to musculoskeletal oncology, including both primary and metastatic tumors, and across the radiology, nuclear medicine, pathology, clinical research, and molecular biology literature. Through this search, we identified 252 primary research articles, of which 58 used deep learning and 194 used other machine learning techniques. Articles involving deep learning have mostly involved bone scintigraphy, histopathology, and radiologic imaging. Articles involving other machine learning techniques have mostly involved transcriptomic analyses, radiomics, and clinical outcome prediction models using medical records. These articles predominantly present proof-of-concept work, other than the automated bone scan index for bone metastasis quantification, which has translated to clinical workflows in some regions. We systematically review and discuss this literature, highlight opportunities for multidisciplinary collaboration, and identify potentially clinically useful topics with a relative paucity of research attention. Musculoskeletal oncology is an inherently multidisciplinary field, and future research will need to integrate and synthesize noisy siloed data from across clinical, imaging, and molecular datasets. Building the data infrastructure for collaboration will help to accelerate progress towards making artificial intelligence truly useful in musculoskeletal oncology.
Affiliation(s)
- Matthew D Li
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Syed Rakin Ahmed
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Harvard Medical School, Harvard Graduate Program in Biophysics, Harvard University, Cambridge, MA, USA
- Geisel School of Medicine at Dartmouth, Dartmouth College, Hanover, NH, USA
- Edwin Choy
- Division of Hematology Oncology, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Santiago A Lozano-Calderon
- Department of Orthopedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Connie Y Chang
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
31
Afsahi AM, Ma Y, Jang H, Jerban S, Chung CB, Chang EY, Du J. Ultrashort Echo Time Magnetic Resonance Imaging Techniques: Met and Unmet Needs in Musculoskeletal Imaging. J Magn Reson Imaging 2021; 55:1597-1612. [PMID: 34962335] [DOI: 10.1002/jmri.28032]
Abstract
This review article summarizes recent technical developments in ultrashort echo time (UTE) magnetic resonance imaging of musculoskeletal (MSK) tissues with short T2 relaxation times. A series of contrast mechanisms are discussed for high-contrast morphological imaging of short-T2 MSK tissues including the osteochondral junction, menisci, ligaments, tendons, and bone. Quantitative UTE mapping of T1, T2*, T1ρ, adiabatic T1ρ, magnetization transfer (MT) ratio, MT modeling of the macromolecular proton fraction, quantitative susceptibility mapping, and water content is also introduced. Met and unmet needs in MSK imaging are discussed. EVIDENCE LEVEL: 1. TECHNICAL EFFICACY: Stage 3.
Affiliation(s)
- Amir Masoud Afsahi
- Department of Radiology, University of California, San Diego, California, USA
- Yajun Ma
- Department of Radiology, University of California, San Diego, California, USA
- Hyungseok Jang
- Department of Radiology, University of California, San Diego, California, USA
- Saeed Jerban
- Department of Radiology, University of California, San Diego, California, USA
- Christine B Chung
- Department of Radiology, University of California, San Diego, California, USA
- Research Service, Veterans Affairs San Diego Healthcare System, San Diego, California, USA
- Eric Y Chang
- Department of Radiology, University of California, San Diego, California, USA
- Research Service, Veterans Affairs San Diego Healthcare System, San Diego, California, USA
- Jiang Du
- Department of Radiology, University of California, San Diego, California, USA
- Research Service, Veterans Affairs San Diego Healthcare System, San Diego, California, USA
32
|
Hasani N, Farhadi F, Morris MA, Nikpanah M, Rahmim A, Xu Y, Pariser A, Collins MT, Summers RM, Jones E, Siegel E, Saboury B. Artificial Intelligence in Medical Imaging and its Impact on the Rare Disease Community: Threats, Challenges and Opportunities. PET Clin 2021; 17:13-29. [PMID: 34809862 DOI: 10.1016/j.cpet.2021.09.009]
Abstract
Nearly 1 in 10 individuals suffers from one of the many rare diseases (RDs), and the average time to diagnosis for an RD patient is as long as 7 years. Artificial intelligence (AI)-based positron emission tomography (PET), if implemented appropriately, has tremendous potential to advance the diagnosis of RDs. Patient advocacy groups must be active stakeholders in the AI ecosystem if we are to avoid potential issues related to the implementation of AI in health care. AI medical devices must not only be RD-aware at each stage of their conceptualization and life cycle but also be trained on diverse, augmented datasets representative of the end-user population, including RDs. Failure to do so risks harm and unsustainable deployment of AI-based medical devices (AIMDs) into clinical practice.
Affiliation(s)
- Navid Hasani
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; University of Queensland Faculty of Medicine, Ochsner Clinical School, New Orleans, LA 70121, USA
- Faraz Farhadi
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Michael A Morris
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland-Baltimore County, Baltimore, MD, USA
- Moozhan Nikpanah
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Arman Rahmim
- Department of Radiology, BC Cancer Research Institute, University of British Columbia, 675 West 10th Avenue, Vancouver, British Columbia, V5Z 1L3, Canada; Department of Physics, BC Cancer Research Institute, University of British Columbia, Vancouver, British Columbia, Canada
- Yanji Xu
- Office of Rare Diseases Research, National Center for Advancing Translational Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, USA
- Anne Pariser
- Office of Rare Diseases Research, National Center for Advancing Translational Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, USA
- Michael T Collins
- Skeletal Disorders and Mineral Homeostasis Section, National Institute of Dental and Craniofacial Research, National Institutes of Health (NIH), Bethesda, MD, USA
- Ronald M Summers
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Elizabeth Jones
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Eliot Siegel
- Department of Radiology and Nuclear Medicine, University of Maryland Medical Center, 655 W. Baltimore Street, Baltimore, MD 21201, USA
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland-Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
33
Tsai KJ, Chang CC, Lo LC, Chiang JY, Chang CS, Huang YJ. Automatic segmentation of paravertebral muscles in abdominal CT scan by U-Net: The application of data augmentation technique to increase the Jaccard ratio of deep learning. Medicine (Baltimore) 2021; 100:e27649. [PMID: 34871238 PMCID: PMC8568419 DOI: 10.1097/md.0000000000027649]
Abstract
Sarcopenia, characterized by a decline in skeletal muscle mass, has emerged as an important prognostic factor for cancer patients. Trunk computed tomography (CT) is a commonly used modality for assessing cancer disease extent and treatment outcome. CT images can also be used to analyze skeletal muscle mass filtered by the appropriate range of the Hounsfield scale. However, manual depiction of skeletal muscle in CT scan images for assessing skeletal muscle mass is labor-intensive and unrealistic in clinical practice. In this paper, we propose a novel U-Net based segmentation system for CT scans of the paravertebral muscles at the third and fourth lumbar spine levels. Because the number of training samples is limited (only 1024 CT images), the performance of deep learning approaches is known to be restricted by overfitting. A data augmentation strategy is therefore employed to enlarge the diversity of the training set and further boost performance. We also discuss how the number of features in our U-Net affects the performance of the semantic segmentation. The efficacy of the proposed methodology with and without data augmentation, and with different numbers of feature maps, is compared in the experiments. We show that the Jaccard score is approximately 95.0% using the proposed data augmentation method with only 16 feature maps in U-Net. The stability and efficiency of the proposed U-Net are verified in the experiments in a cross-validation manner.
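The two quantities at the heart of the abstract above, the Jaccard (IoU) score and paired image/mask augmentation, can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the flip/rotate augmentation shown is one common choice, and the function names are our own.

```python
import numpy as np

def jaccard(pred, target):
    """Jaccard index (IoU) between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

def augment(image, mask, rng):
    """Random flip/rotation applied identically to image and mask,
    so the ground-truth labels stay aligned with the pixels."""
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    k = int(rng.integers(0, 4))  # rotate by k * 90 degrees
    return np.rot90(image, k).copy(), np.rot90(mask, k).copy()
```

Because the transform is applied jointly to image and mask, each training epoch effectively sees a different version of the same limited set of slices, which is how augmentation counters overfitting.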
Affiliation(s)
- Kuen-Jang Tsai
- Department of Surgery, E-Da Cancer Hospital, Taiwan
- College of Medicine, I-Shou University, Kaohsiung, Taiwan
- Chih-Chun Chang
- Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
- Lun-Chien Lo
- School of Chinese Medicine, China Medical University, Taichung, Taiwan
- Department of Chinese Medicine, China Medical University Hospital, Taichung, Taiwan
- John Y. Chiang
- Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
- Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Chao-Sung Chang
- Department of Hematology/Oncology, E-Da Cancer Hospital, School of Medicine for International Students, I-Shou University, Kaohsiung, Taiwan
- Yu-Jung Huang
- Department of Electronic Engineering, I-Shou University, Kaohsiung, Taiwan
34
Chalian M, Li X, Guermazi A, Obuchowski NA, Carrino JA, Oei EH, Link TM. The QIBA Profile for MRI-based Compositional Imaging of Knee Cartilage. Radiology 2021; 301:423-432. [PMID: 34491127 PMCID: PMC8574057 DOI: 10.1148/radiol.2021204587]
Abstract
MRI-based cartilage compositional analysis shows biochemical and microstructural changes at early stages of osteoarthritis, before changes become visible with structural MRI sequences and arthroscopy. This could help with early diagnosis, risk assessment, and treatment monitoring of osteoarthritis. The spin-lattice relaxation time constant in the rotating frame (T1ρ) and T2 mapping are the MRI techniques best established for assessing cartilage composition. Of these, only T2 mapping is currently commercially available; T2 is sensitive to water and collagen content and to the orientation of collagen fibers, whereas T1ρ is more sensitive to proteoglycan content. Clinical application of cartilage compositional imaging is limited by high variability and suboptimal reproducibility of the biomarkers, which motivated the creation of the Quantitative Imaging Biomarkers Alliance (QIBA) Profile for cartilage compositional imaging by the Musculoskeletal Biomarkers Committee of the QIBA. The profile aims to provide recommendations to improve reproducibility and to standardize cartilage compositional imaging. The QIBA Profile provides two complementary claims (summary statements of the technical performance of the quantitative imaging biomarkers being profiled) regarding the reproducibility of the biomarkers. First, cartilage T1ρ and T2 values are measurable at 3.0-T MRI with a within-subject coefficient of variation of 4%-5%. Second, a measured increase or decrease in T1ρ and T2 of 14% or more indicates a minimum detectable change with 95% confidence. If only an increase in T1ρ and T2 values is expected (progressive cartilage degeneration), then an increase of 12% represents a minimum detectable change over time. The QIBA Profile provides recommendations for clinical researchers, clinicians, and industry scientists pertaining to image data acquisition, analysis, interpretation, and assessment procedures for T1ρ and T2 cartilage imaging and test-retest conformance.
This special report aims to provide the rationale for the proposed claims, explain the content of the QIBA Profile, and highlight the future needs and developments for MRI-based cartilage compositional imaging for risk prediction, early diagnosis, and treatment monitoring of osteoarthritis.
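The 14% and 12% thresholds quoted above follow from the standard minimum-detectable-change formula MDC = z·√2·wCV, where the √2 reflects measurement error in both the baseline and the follow-up scan. A quick numerical check, offered as a sketch rather than the Profile's own computation:

```python
import math

def minimum_detectable_change(wcv_percent, z):
    """Minimum detectable change (in %) for a test-retest biomarker
    with within-subject coefficient of variation wcv_percent."""
    return z * math.sqrt(2) * wcv_percent

# Two-sided 95% confidence (z = 1.96): change in either direction
print(round(minimum_detectable_change(5.0, 1.96), 1))   # 13.9 -> "14% or more"
# One-sided 95% confidence (z = 1.645): an expected increase only
print(round(minimum_detectable_change(5.0, 1.645), 1))  # 11.6 -> "12%"
```

With the claimed wCV of 4%-5%, the two-sided MDC is 11.1%-13.9%, consistent with the Profile's rounded 14% threshold, and the one-sided MDC is 9.3%-11.6%, consistent with the 12% figure.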
Affiliation(s)
- Majid Chalian, Xiaojuan Li, Ali Guermazi, Nancy A. Obuchowski, John A. Carrino, Edwin H. Oei, Thomas M. Link; for the RSNA QIBA MSK Biomarker Committee
- From the Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology–Roosevelt Clinic, 4245 Roosevelt Way NE, Box 354755, Seattle, WA 98105 (M.C.); Department of Biomedical Engineering, Program of Advanced Musculoskeletal Imaging (PAMI) (X.L.), and Department of Biostatistics (N.A.O.), Cleveland Clinic, Cleveland, Ohio; Department of Radiology, Boston University School of Medicine, Boston, Mass (A.G.); Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY (J.A.C.); Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands (E.H.O.); European Imaging Biomarkers Alliance (E.H.O.); and Department of Radiology and Biomedical Imaging, University of California, San Francisco, Calif (T.M.L.)
35
Tack A, Ambellan F, Zachow S. Towards novel osteoarthritis biomarkers: Multi-criteria evaluation of 46,996 segmented knee MRI data from the Osteoarthritis Initiative. PLoS One 2021; 16:e0258855. [PMID: 34673842 PMCID: PMC8530341 DOI: 10.1371/journal.pone.0258855]
Abstract
Convolutional neural networks (CNNs) are the state-of-the-art for automated assessment of knee osteoarthritis (KOA) from medical image data. However, these methods lack interpretability, mainly focus on image texture, and cannot completely grasp the analyzed anatomies' shapes. In this study, we assess the informative value of quantitative features derived from segmentations to determine their potential as an alternative or extension to CNN-based approaches regarding multiple aspects of KOA. Six anatomical structures around the knee (femoral and tibial bones, femoral and tibial cartilages, and both menisci) are segmented in 46,996 MRI scans. Based on these segmentations, quantitative features are computed, i.e., measurements such as cartilage volume, meniscal extrusion and tibial coverage, as well as geometric features based on a statistical shape encoding of the anatomies. The feature quality is assessed by investigating their association with the Kellgren-Lawrence grade (KLG), joint space narrowing (JSN), incident KOA, and total knee replacement (TKR). Using gold standard labels from the Osteoarthritis Initiative database, the balanced accuracy (BA), the area under the Receiver Operating Characteristic curve (AUC), and weighted kappa statistics are evaluated. Features based on shape encodings of the femur, tibia, and menisci, plus the performed measurements, showed the most potential as KOA biomarkers. Differentiation between non-arthritic and severely arthritic knees yielded BAs of up to 99%, while 84% was achieved for the diagnosis of early KOA. Weighted kappa values of 0.73, 0.72, and 0.78 were achieved for classification of the grade of medial JSN, lateral JSN, and KLG, respectively. The AUC was 0.61 and 0.76 for prediction of incident KOA and TKR within one year, respectively. Quantitative features from automated segmentations provide novel biomarkers for KLG and JSN classification and show potential for incident KOA and TKR prediction.
The validity of these features should be further evaluated, especially as extensions of CNN-based approaches. To foster such developments we make all segmentations publicly available together with this publication.
Affiliation(s)
- Stefan Zachow
- Zuse Institute Berlin, Berlin, Germany
- Charité – Universitätsmedizin Berlin, Berlin, Germany
36
Yabu A, Hoshino M, Tabuchi H, Takahashi S, Masumoto H, Akada M, Morita S, Maeno T, Iwamae M, Inose H, Kato T, Yoshii T, Tsujio T, Terai H, Toyoda H, Suzuki A, Tamai K, Ohyama S, Hori Y, Okawa A, Nakamura H. Using artificial intelligence to diagnose fresh osteoporotic vertebral fractures on magnetic resonance images. Spine J 2021; 21:1652-1658. [PMID: 33722728 DOI: 10.1016/j.spinee.2021.03.006]
Abstract
BACKGROUND CONTEXT: Accurate diagnosis of osteoporotic vertebral fracture (OVF) is important for improving treatment outcomes; however, a gold standard has not yet been established. A deep-learning approach based on convolutional neural networks (CNNs) has attracted attention in the medical imaging field. PURPOSE: To construct a CNN to detect fresh OVFs on magnetic resonance (MR) images. STUDY DESIGN/SETTING: Retrospective analysis of MR images. PATIENT SAMPLE: This retrospective study included 814 patients with fresh OVF. For CNN training and validation, 1624 slices of T1-weighted MR images were obtained and used. OUTCOME MEASURES: We plotted the receiver operating characteristic (ROC) curve and calculated the area under the curve (AUC) to evaluate the performance of the CNN. The sensitivity, specificity, and accuracy of the diagnosis by the CNN were then compared with those of two spine surgeons. METHODS: We constructed an optimal model using an ensemble method combining nine types of CNNs to detect fresh OVFs. Two spine surgeons independently evaluated 100 vertebrae randomly extracted from the test data. RESULTS: The ensemble of VGG16, VGG19, DenseNet201, and ResNet50 was the combination with the highest AUC, at 0.949. The evaluation metrics of the diagnosis (CNN/surgeon 1/surgeon 2) for the 100 vertebrae were as follows: sensitivity, 88.1%/88.1%/100%; specificity, 87.9%/86.2%/65.5%; accuracy, 88.0%/87.0%/80.0%. CONCLUSIONS: In detecting fresh OVFs on MR images, the performance of the CNN was comparable to that of the two spine surgeons.
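The ensemble step described above amounts to combining per-vertebra probabilities from several trained CNNs and thresholding the result. The sketch below shows the idea with simple probability averaging; it is illustrative only, and the study's actual combination rule for its trained VGG16, VGG19, DenseNet201, and ResNet50 models is not reproduced here.

```python
import numpy as np

def ensemble_probability(model_probs):
    """Average per-vertebra fracture probabilities from several CNNs."""
    return np.mean(np.stack(model_probs, axis=0), axis=0)

def sensitivity_specificity_accuracy(pred, label):
    """Binary diagnostic metrics from boolean predictions and labels."""
    pred, label = np.asarray(pred, bool), np.asarray(label, bool)
    tp = np.sum(pred & label)    # fractures correctly flagged
    tn = np.sum(~pred & ~label)  # intact vertebrae correctly cleared
    fp = np.sum(pred & ~label)
    fn = np.sum(~pred & label)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / label.size
```

With the averaged probabilities thresholded at 0.5, the same three metrics reported for the CNN and the surgeons fall out directly.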
Affiliation(s)
- Akito Yabu
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Masatoshi Hoshino
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Hitoshi Tabuchi
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji, Hyogo 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima 734-8551, Japan
- Shinji Takahashi
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Hiroki Masumoto
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji, Hyogo 671-1227, Japan
- Masahiro Akada
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji, Hyogo 671-1227, Japan
- Shoji Morita
- Graduate School of Engineering, University of Hyogo, 2167, Shosha, Himeji, Hyogo 671-2280, Japan
- Takafumi Maeno
- Department of Orthopaedic Surgery, Ishikiriseiki Hospital, 18-28, Yayoi-machi, Higashiosaka, Osaka 579-8026, Japan
- Masayoshi Iwamae
- Department of Orthopaedic Surgery, Ishikiriseiki Hospital, 18-28, Yayoi-machi, Higashiosaka, Osaka 579-8026, Japan
- Hiroyuki Inose
- Department of Orthopaedic Surgery, Tokyo Medical and Dental University, Graduate School, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Tsuyoshi Kato
- Department of Orthopaedic Surgery, Ome Municipal General Hospital, 4-16-5, Higashiome, Ome, Tokyo 198-0042, Japan
- Toshitaka Yoshii
- Department of Orthopaedic Surgery, Tokyo Medical and Dental University, Graduate School, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Tadao Tsujio
- Department of Orthopaedic Surgery, Shiraniwa Hospital, 6-10-1, Shiraniwadai, Ikoma, Nara 630-0136, Japan
- Hidetomi Terai
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Hiromitsu Toyoda
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Akinobu Suzuki
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Koji Tamai
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Shoichiro Ohyama
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Yusuke Hori
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
- Atsushi Okawa
- Department of Orthopaedic Surgery, Tokyo Medical and Dental University, Graduate School, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Hiroaki Nakamura
- Department of Orthopaedic Surgery, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
37
Li S, Zheng J, Li D. Precise segmentation of non-enhanced computed tomography in patients with ischemic stroke based on multi-scale U-Net deep network model. Comput Methods Programs Biomed 2021; 208:106278. [PMID: 34274610 DOI: 10.1016/j.cmpb.2021.106278]
Abstract
BACKGROUND AND OBJECTIVE: Acute ischemic stroke requires timely diagnosis and thrombolytic therapy, but it is difficult to locate and quantify the lesion site manually. The purpose of this study was to explore a more rapid and effective method for automatic image segmentation of acute ischemic stroke. METHODS: Lesions in 30 stroke patients were segmented from non-enhanced computed tomography (CT) images using a multi-scale U-Net deep network model. The model was trained with the Dice loss function to counter the class-imbalance problem in the data. The difference between manual and automatic segmentation was compared. RESULTS: The Dice similarity coefficient for multi-scale U-Net segmentation was 0.86±0.04, higher than that for the classic U-Net (0.81±0.07, P=0.001). The lesion contour from automatic segmentation based on multi-scale U-Net was very close to that of manual segmentation. The error in lesion area was 1.28±0.59 mm2, with a Pearson correlation coefficient of r=0.986 (P<0.01). The run time of automatic segmentation was less than 20 ms. CONCLUSIONS: The multi-scale U-Net deep network model can effectively segment ischemic stroke lesions in non-enhanced CT and meets real-time clinical requirements.
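The Dice loss used above to handle class imbalance has a simple closed form; a minimal NumPy sketch follows (illustrative only; in the paper it is applied to the network's output during training, not to static arrays):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask.

    Ranges from ~0 (perfect overlap) to ~1 (no overlap). Because the
    overlap term is normalized by total region size, a small lesion is
    not swamped by the much larger background class.
    """
    pred = np.asarray(pred, dtype=np.float64).ravel()
    target = np.asarray(target, dtype=np.float64).ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

One minus this loss is the Dice similarity coefficient reported in the results (0.86 for the multi-scale model versus 0.81 for the classic U-Net).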
Affiliation(s)
- Shaoquan Li
- Department of Neurosurgery, Cangzhou Central Hospital, Hebei 061000, China
- Jianye Zheng
- Department of Neurosurgery, Cangzhou Central Hospital, Hebei 061000, China
- Dongjiao Li
- Department of Neurosurgery, Cangzhou Central Hospital, Hebei 061000, China
38
Yang J, Yang J, Zhao F, Zhang W. An unsupervised multi-scale framework with attention-based network (MANet) for lung 4D-CT registration. Phys Med Biol 2021; 66. [PMID: 34126608 DOI: 10.1088/1361-6560/ac0afc]
Abstract
Deformable image registration (DIR) of 4D-CT is very important in many radiotherapeutic applications, including tumor target definition, image fusion, dose accumulation, and response evaluation. Performing accurate and fast DIR of lung 4D-CT images is challenging because of their large and complicated deformations. In this study, we propose an unsupervised multi-scale DIR framework with an attention-based network (MANet). Three cascaded models that align CT images at different resolution levels are trained by minimizing loss functions defined as the combination of the dissimilarity between the fixed and deformed images and a deformation vector field (DVF) regularization term. In addition, attention gates are incorporated into the three models to distinguish moving structures from non-moving or minimally moving structures during registration. The three models are first trained sequentially and separately to minimize the loss at each scale and initialize the MANet, and then trained jointly to minimize a total loss that incorporates an additional dissimilarity between the fixed and deformed images. An adversarial network is also integrated into MANet to enforce DVF regularization by penalizing unrealistic deformed images. The proposed MANet was evaluated on the public dir-lab dataset, and its target registration errors (TREs) were compared with conventional iterative optimization-based methods and three recently published deep learning-based methods. Initial results showed that MANet, with an average TRE of 1.53 ± 1.02 mm, outperformed the other registration methods; its execution time was about 1 s for DVF estimation with no manual parameter tuning, demonstrating that the proposed method performs superior registration for 4D-CT.
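Target registration error, the metric used to rank the methods above, is simply the Euclidean distance between landmarks propagated by the estimated displacement field and their true positions in the fixed image. A sketch, assuming the DVF has already been sampled at the landmark locations (the sampling/interpolation step itself is omitted):

```python
import numpy as np

def target_registration_error(moving_pts, fixed_pts, dvf_at_pts):
    """Per-landmark TRE in mm.

    moving_pts, fixed_pts : (N, 3) landmark coordinates in mm.
    dvf_at_pts            : (N, 3) displacements sampled at moving_pts.
    """
    warped = moving_pts + dvf_at_pts  # propagate landmarks through the DVF
    return np.linalg.norm(warped - fixed_pts, axis=1)
```

The dir-lab benchmark reports the mean and standard deviation of this quantity over expert-annotated landmark pairs per case, which is how figures such as 1.53 ± 1.02 mm arise.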
Affiliation(s)
- Juan Yang
- School of Physics and Electronics, Shandong Normal University, Jinan 250358, People's Republic of China
- Jinhui Yang
- School of Physics and Electronics, Shandong Normal University, Jinan 250358, People's Republic of China
- Fen Zhao
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Jinan 250117, People's Republic of China
- Wenjun Zhang
- Department of Human Resources, Shandong Provincial Third Hospital, Jinan 250031, People's Republic of China
39
Deep learning model for automated kidney stone detection using coronal CT images. Comput Biol Med 2021; 135:104569. [PMID: 34157470 DOI: 10.1016/j.compbiomed.2021.104569]
Abstract
Kidney stones are a common complaint worldwide, causing many people to present to emergency rooms with severe pain. Various imaging techniques are used for the diagnosis of kidney stone disease, and specialists are needed for the interpretation and full diagnosis of these images. Computer-aided diagnosis systems are practical approaches that can serve as auxiliary tools to assist clinicians in their diagnosis. In this study, automated detection of kidney stones (stone present vs. absent) on coronal computed tomography (CT) images is proposed using deep learning (DL), which has recently made significant progress in the field of artificial intelligence. A total of 1799 images were used, with multiple cross-sectional CT images taken for each person. Our automated model showed an accuracy of 96.82% in detecting kidney stones on CT images, and we observed that it accurately detects even small stones. The developed DL model yielded superior results on a larger dataset of 433 subjects and is ready for clinical application. This study shows that recently popular DL methods can be employed to address other challenging problems in urology.
40
Chang KP, Lin SH, Chu YW. Artificial intelligence in gastrointestinal radiology: A review with special focus on recent development of magnetic resonance and computed tomography. Artif Intell Gastroenterol 2021; 2:27-41. [DOI: 10.35712/aig.v2.i2.27] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/21/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI), particularly deep learning technology, has proven influential in radiology over the recent decade. Its ability in image classification, segmentation, detection, and reconstruction tasks has substantially assisted diagnostic radiology, and it has even been viewed as having the potential to perform better than radiologists in some tasks. Gastrointestinal radiology, an important subspecialty dealing with complex anatomy and various modalities including endoscopy, has especially attracted the attention of AI researchers and engineers worldwide. Consequently, many tools have recently been developed for lesion detection and image construction in gastrointestinal radiology, particularly in fields for which public databases are available, such as diagnostic abdominal magnetic resonance imaging (MRI) and computed tomography (CT). This review provides a framework for understanding recent advances of AI in gastrointestinal radiology, with a special focus on hepatic and pancreatobiliary diagnostic radiology with MRI and CT. For fields where AI is less developed, this review also explains the difficulties in AI model training and possible strategies to overcome these technical issues. The authors' insights into possible future developments are addressed in the last section.
Affiliation(s)
- Kai-Po Chang
- PhD Program in Medical Biotechnology, National Chung Hsing University, Taichung 40227, Taiwan
- Department of Pathology, China Medical University Hospital, Taichung 40447, Taiwan
- Shih-Huan Lin
- PhD Program in Medical Biotechnology, National Chung Hsing University, Taichung 40227, Taiwan
- Yen-Wei Chu
- PhD Program in Medical Biotechnology, National Chung Hsing University, Taichung 40227, Taiwan
- Institute of Genomics and Bioinformatics, National Chung Hsing University, Taichung 40227, Taiwan
- Institute of Molecular Biology, National Chung Hsing University, Taichung 40227, Taiwan
- Agricultural Biotechnology Center, National Chung Hsing University, Taichung 40227, Taiwan
- Biotechnology Center, National Chung Hsing University, Taichung 40227, Taiwan
- PhD Program in Translational Medicine, National Chung Hsing University, Taichung 40227, Taiwan
- Rong Hsing Research Center for Translational Medicine, Taichung 40227, Taiwan
41
Schock J, Truhn D, Abrar DB, Merhof D, Conrad S, Post M, Mittelstrass F, Kuhl C, Nebelung S. Automated Analysis of Alignment in Long-Leg Radiographs by Using a Fully Automated Support System Based on Artificial Intelligence. Radiol Artif Intell 2020; 3:e200198. [PMID: 33937861 DOI: 10.1148/ryai.2020200198] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 11/20/2020] [Accepted: 12/10/2020] [Indexed: 11/11/2022]
Abstract
Purpose To develop and validate a deep learning-based method for automatic quantitative analysis of lower-extremity alignment. Materials and Methods In this retrospective study, bilateral long-leg radiographs (LLRs) from 255 patients obtained between January and September 2018 were included. On the training data (n = 109), a U-Net convolutional neural network was trained to segment the femur and tibia, with manual segmentations as the reference. On the validation data (n = 40), model parameters were optimized. Following identification of anatomic landmarks, anatomic and mechanical axes were identified and used to quantify alignment through the hip-knee-ankle angle (HKAA) and the femoral anatomic-mechanical angle (AMA). On the testing data (n = 106), algorithm-based angle measurements were compared with reference measurements by two radiologists. Angles and measurement times for 30 random radiographs were compared by using repeated-measures analysis of variance and one-way analysis of variance, and correlations were quantified by using Pearson r and intraclass correlation coefficients. Results Bilateral LLRs of 255 patients (mean age, 26 years ± 23 [standard deviation]; range, 0-88 years; 157 male patients) were included. Mean Sørensen-Dice coefficients for segmentation were 0.97 ± 0.09 for the femur and 0.96 ± 0.11 for the tibia. Mean HKAAs and AMAs as measured by the readers and the algorithm ranged from 0.05° to 0.11° (P = .5) and from 4.82° to 5.43° (P < .001), respectively. Interreader correlation coefficients ranged from 0.918 to 0.995 (P < .001), and agreement was almost perfect (intraclass correlation coefficient range, 0.87-0.99). Automatic analysis was faster than the two radiologists' manual measurements (3 vs 36 vs 35 seconds, P < .001).
Conclusion Fully automated analysis of LLRs yielded accurate results across a wide range of clinical and pathologic indications and is fast enough to enhance and accelerate clinical workflows. Supplemental material is available for this article. © RSNA, 2020. See also the commentary by Andreisek in this issue.
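The landmark-to-angle step described in this abstract reduces to plain vector geometry once hip, knee, and ankle centers are located. A minimal sketch of computing the hip-knee-ankle deviation from a straight mechanical axis — the function names and coordinates are hypothetical, and varus/valgus direction is not distinguished:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def hka_deviation(hip, knee, ankle):
    """Hip-knee-ankle deviation from a neutral mechanical axis.

    The femoral mechanical axis runs hip -> knee and the tibial
    mechanical axis knee -> ankle; 0 deg means perfect alignment.
    """
    femoral = (knee[0] - hip[0], knee[1] - hip[1])
    tibial = (ankle[0] - knee[0], ankle[1] - knee[1])
    return angle_between(femoral, tibial)

# Perfectly collinear landmarks -> 0 deg deviation
print(round(hka_deviation((0, 0), (0, 40), (0, 80)), 3))  # -> 0.0
```

With landmark coordinates in hand, the computation itself is trivial; the hard part the paper automates is the segmentation and landmark detection that produce those coordinates.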
Affiliation(s)
- Justus Schock
- Department of Diagnostic and Interventional Radiology, University Hospital Düsseldorf, Düsseldorf, Germany (J.S., D.B.A., S.N.); Institute of Computer Vision and Imaging, RWTH University Aachen, Pauwelsstrasse 30, 52072 Aachen, Germany (J.S., D.M.); Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany (D.T., M.P., F.M., C.K., S.N.); and Faculty of Mathematics and Natural Sciences, Institute of Informatics, Heinrich Heine University Düsseldorf, Düsseldorf, Germany (S.C.)
- Daniel Truhn
- Daniel B Abrar
- Dorit Merhof
- Stefan Conrad
- Manuel Post
- Felix Mittelstrass
- Christiane Kuhl
- Sven Nebelung
42
Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890 PMCID: PMC7494605 DOI: 10.1016/j.csbj.2020.08.003] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/26/2020] [Accepted: 08/01/2020] [Indexed: 02/07/2023] Open
Abstract
Deep learning of artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. In biology and medicine, too, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view of the past, present, and future developments of deep learning, moving from science at large to biomedical imaging and, in particular, bioimage analysis.
Affiliation(s)
- Erik Meijering
- School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
43
Abstract
Deep learning methods have shown promising results for accelerating quantitative musculoskeletal (MSK) magnetic resonance imaging (MRI) for T2 and T1ρ relaxometry and for improving MSK tissue segmentation on parametric maps, allowing efficient and accurate T2 and T1ρ relaxometry analysis for monitoring and predicting MSK diseases. Deep learning methods have also shown promising results for disease detection on quantitative MRI, with diagnostic performance superior to conventional machine-learning methods for identifying knee osteoarthritis.
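The T2 relaxometry referenced above fits a mono-exponential decay S(TE) = S0·exp(-TE/T2) to multi-echo signal intensities. A minimal log-linear least-squares sketch of that fit — the function name and the synthetic data are illustrative assumptions, not from any cited study:

```python
import math

def fit_t2(echo_times, signals):
    """Estimate T2 (ms) by linear least squares on the log-signal:
    ln S = ln S0 - TE / T2, so the slope of ln S vs TE is -1/T2."""
    n = len(echo_times)
    logs = [math.log(s) for s in signals]
    mean_te = sum(echo_times) / n
    mean_log = sum(logs) / n
    cov = sum((te - mean_te) * (ls - mean_log) for te, ls in zip(echo_times, logs))
    var = sum((te - mean_te) ** 2 for te in echo_times)
    return -var / cov  # slope = cov/var = -1/T2

# Synthetic noiseless decay with S0 = 1000, T2 = 45 ms
tes = [10, 20, 30, 40, 50, 60]
sig = [1000 * math.exp(-te / 45.0) for te in tes]
print(round(fit_t2(tes, sig), 1))  # -> 45.0
```

Applying such a fit voxel-by-voxel yields the parametric T2 map; the deep learning methods discussed here aim to produce accurate maps from fewer echoes or undersampled acquisitions.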
Affiliation(s)
- Fang Liu
- Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA