1. Hausmann D, Lerch A, Hitziger S, Farkas M, Weiland E, Lemke A, Grimm M, Kubik-Huch RA. AI-Supported Autonomous Uterus Reconstructions: First Application in MRI Using 3D SPACE with Iterative Denoising. Acad Radiol 2024; 31:1400-1409. PMID: 37925344. DOI: 10.1016/j.acra.2023.09.035.
Abstract
RATIONALE AND OBJECTIVES T2-weighted imaging in at least two orthogonal planes is recommended for assessment of the uterus. The aim of this study was to determine whether a convolutional neural network-based algorithm could be used for reconstructions of the uterine axes derived from a 3D SPACE sequence with iterative denoising.
MATERIALS AND METHODS 50 patients aged 18-81 (mean: 42) years who underwent an MRI examination of the uterus participated voluntarily in this prospective study after informed consent. In addition to a standard MRI pelvis protocol, a 3D SPACE research application sequence was acquired in sagittal orientation. Reconstructions of both the cervix and the cavum in the short and long axes were performed by a research trainee (T), an experienced radiologist (E), and the prototype software (P). The reconstructions were then evaluated anonymously by two experienced readers on 5-point Likert scales. In addition, the length of the cervical canal, the length of the cavum, and the distance between the tube angles were measured on all reconstructions. Interobserver agreement was assessed for all ratings.
RESULTS For all axes, significant differences were found between the scores of the reconstructions by T, E, and P. P received higher scores and was preferred significantly more often, with the exception of the comparison with E's short-axis cervix reconstructions (Cervix short: P vs. T: p = 0.02; P vs. E: p = 0.26; Cervix long: P vs. T: p = 0.01; P vs. E: p < 0.01; Cavum short: P vs. T: p = 0.01; P vs. E: p = 0.02; Cavum long: P vs. T: p < 0.01; P vs. E: p < 0.01). Regarding the measured diameters (length of cervical canal/cavum/distance between tube angles), significantly larger diameters were recorded for P compared to E and T (Cervix long (mm): T: 25.43; E: 25.65; P: 26.65; Cavum short (mm): T: 26.24; E: 25.04; P: 27.33; Cavum long (mm): T: 31.98; E: 32.91; P: 34.41; P vs. T: p < 0.01; P vs. E: p = 0.04). Moderate to substantial agreement was found between Reader 1 and Reader 2 (range: 0.39-0.67).
CONCLUSION P was able to reconstruct the axes at least as well as or better than E and T, and could thereby facilitate workflow and enable more efficient reporting of uterine MRI.
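An interobserver agreement range of 0.39-0.67 described as "moderate to substantial" is the kind of statistic typically computed as Cohen's kappa. As a minimal, stdlib-only sketch with made-up Likert ratings (the study's actual agreement statistic and data are not reproduced here):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two raters' categorical labels."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance agreement: probability both raters pick the same category
    expected = sum(counts1[k] * counts2[k]
                   for k in set(rater1) | set(rater2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 5-point Likert scores from two readers
reader1 = [5, 4, 4, 3, 5, 2, 4, 5]
reader2 = [5, 4, 3, 3, 5, 2, 5, 5]
print(round(cohen_kappa(reader1, reader2), 2))  # "moderate" territory
```

Values near 0.4-0.6 are conventionally read as moderate agreement and 0.6-0.8 as substantial; a weighted variant is often preferred for ordinal Likert data.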
Affiliation(s)
- Daniel Hausmann
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.); Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany (D.H.).
- Aline Lerch
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.); Institute for Translational Medicine, ETH Zurich, Zurich, Switzerland (A.L.); Department of Health Sciences and Technology, ETH Zurich (A.L.)
- Monika Farkas
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.)
- Elisabeth Weiland
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany (E.W.)
- Maximilian Grimm
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.)
- Rahel A Kubik-Huch
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.)
2. Ozen M, Patel R, Hoffman M, Raissi D. Update on Endovascular Therapy for Fibroids and Adenomyosis. Semin Intervent Radiol 2023; 40:327-334. PMID: 37575341. PMCID: PMC10415060. DOI: 10.1055/s-0043-1770713.
Abstract
Uterine fibroids and adenomyosis are prevalent benign neoplasms that can lead to serious health effects, including life-threatening anemia, prolonged menses, and pelvic pain; however, up to 40% of affected women remain undiagnosed. Traditional treatment options such as myomectomy or hysterectomy can effectively manage symptoms but may entail longer hospital stays and hinder future fertility. Uterine artery embolization (UAE) is a minimally invasive endovascular procedure that has emerged as a well-validated alternative to surgery while preserving the uterus and offering shorter hospital stays. Careful patient selection and appropriate technique are crucial to achieving optimal outcomes. Recent advancements in pre- and postprocedural care aim to enhance results and alleviate discomfort before, during, and after UAE. Furthermore, success and reintervention rates may also depend on the size and location of the fibroids. This article reviews the current state of endovascular treatments for uterine fibroids and adenomyosis.
Affiliation(s)
- Merve Ozen
- Department of Radiology, University of Kentucky College of Medicine, Lexington, Kentucky
- Ronak Patel
- University of Kentucky College of Medicine, William R. Willard Medical Education Building, Lexington, Kentucky
- Mark Hoffman
- Department of Obstetrics and Gynecology, University of Kentucky College of Medicine, Lexington, Kentucky
- Driss Raissi
- Department of Radiology, University of Kentucky College of Medicine, Lexington, Kentucky
3. Shahzad A, Mushtaq A, Sabeeh AQ, Ghadi YY, Mushtaq Z, Arif S, Ur Rehman MZ, Qureshi MF, Jamil F. Automated Uterine Fibroids Detection in Ultrasound Images Using Deep Convolutional Neural Networks. Healthcare (Basel) 2023; 11:1493. PMID: 37239779. DOI: 10.3390/healthcare11101493.
Abstract
Uterine fibroids (UF) are common benign tumors affecting women of childbearing age, and they can be treated effectively with early identification and diagnosis. Automated diagnosis from medical images is an area where deep learning (DL)-based algorithms have demonstrated promising results. In this study, we evaluated the state-of-the-art DL architectures VGG16, ResNet50, and InceptionV3, along with our proposed dual-path deep convolutional neural network (DPCNN) architecture, on the UF detection task. An ultrasound image dataset from Kaggle was prepared using preprocessing methods including scaling, normalization, and data augmentation. After the images were used to train and validate the DL models, model performance was evaluated using different measures. Our proposed DPCNN architecture achieved the highest accuracy, 99.8%. The findings also show that the performance of pre-trained DL models for UF diagnosis from medical images may improve significantly with fine-tuning: the InceptionV3 model achieved 90% accuracy and the ResNet50 model 89%, while the VGG16 model achieved a lower accuracy of 85%. These results indicate that DL-based methods can be utilized effectively for automated UF detection from medical images. This work lays the foundation for future studies on computer-aided diagnosis systems and has the potential to enhance the precision with which UF is detected.
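The preprocessing steps named above (scaling, normalization, augmentation) can be sketched in a few lines; the tiny 2x2 "image", the [0, 1] scaling, per-image standardization, and flip-only augmentation below are illustrative assumptions, not the paper's actual pipeline or parameters:

```python
import random

def scale_to_unit(image):
    """Scale 8-bit pixel values into [0, 1]."""
    return [[px / 255.0 for px in row] for row in image]

def standardize(image):
    """Zero-mean, unit-variance normalization of a single image."""
    pixels = [px for row in image for px in row]
    mean = sum(pixels) / len(pixels)
    std = (sum((px - mean) ** 2 for px in pixels) / len(pixels)) ** 0.5 or 1.0
    return [[(px - mean) / std for px in row] for row in image]

def augment_flip(image, rng=random):
    """Toy augmentation: horizontal flip with probability 0.5."""
    return [row[::-1] for row in image] if rng.random() < 0.5 else image

img = [[0, 128], [255, 64]]            # tiny stand-in "ultrasound" patch
prepared = standardize(scale_to_unit(img))
```

Real pipelines apply the same ideas with array libraries and dataset-level statistics, but the order (scale, normalize, augment at train time only) is the part that matters.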
Affiliation(s)
- Ahsan Shahzad
- Rural Health Centre, Farooka, Sahiwal, Sargodha 40100, Pakistan
- Abid Mushtaq
- Rural Health Centre, Farooka, Sahiwal, Sargodha 40100, Pakistan
- Yazeed Yasin Ghadi
- Department of Computer Science, Al Ain University, Abu Dhabi P.O. Box 112612, United Arab Emirates
- Zohaib Mushtaq
- Department of Electrical Engineering, College of Engineering and Technology, University of Sargodha, Sargodha 40100, Pakistan
- Saad Arif
- Department of Mechanical Engineering, HITEC University, Taxila 47080, Pakistan
- Muhammad Zia Ur Rehman
- Department of Biomedical Engineering, Riphah International University, Islamabad 44000, Pakistan
- Muhammad Farrukh Qureshi
- Department of Electrical Engineering, Riphah International University, Islamabad 44000, Pakistan
- Faisal Jamil
- Department of ICT and Natural Sciences, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, 6009 Alesund, Norway
4. Zhao A, Du X, Yuan S, Shen W, Zhu X, Wang W. Automated Detection of Endometrial Polyps from Hysteroscopic Videos Using Deep Learning. Diagnostics (Basel) 2023; 13:1409. PMID: 37189510. DOI: 10.3390/diagnostics13081409.
Abstract
Endometrial polyps are common gynecological lesions. The standard treatment for this condition is hysteroscopic polypectomy. However, this procedure may be accompanied by misdetection of endometrial polyps. To improve the diagnostic accuracy and reduce the risk of misdetection, a deep learning model based on YOLOX is proposed to detect endometrial polyps in real time. Group normalization is employed to improve its performance with large hysteroscopic images. In addition, we propose a video adjacent-frame association algorithm to address the problem of unstable polyp detection. Our proposed model was trained on a dataset of 11,839 images from 323 cases provided by a hospital and was tested on two datasets of 431 cases from two hospitals. The results show that the lesion-based sensitivity of the model reached 100% and 92.0% for the two test sets, compared with 95.83% and 77.33%, respectively, for the original YOLOX model. This demonstrates that the improved model may be used effectively as a diagnostic tool during clinical hysteroscopic procedures to reduce the risk of missing endometrial polyps.
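The adjacent-frame association idea above, stabilizing a per-frame detector by requiring agreement across neighboring video frames, can be sketched as a sliding-window vote. The window size and threshold here are illustrative assumptions, not the paper's algorithm:

```python
from collections import deque

def stabilize(frame_detections, window=5, min_hits=3):
    """Report a polyp for a frame only when at least `min_hits` of the
    last `window` per-frame detector outputs were positive."""
    recent = deque(maxlen=window)
    out = []
    for detected in frame_detections:
        recent.append(detected)
        out.append(sum(recent) >= min_hits)
    return out

# An isolated false positive is suppressed; a sustained detection passes.
noisy = [False, True, False, False, True, True, True, True, False]
print(stabilize(noisy))
```

The same vote can equally be applied per tracked bounding box rather than per frame; the point is that temporal consistency trades a little latency for far fewer flickering detections.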
Affiliation(s)
- Aihua Zhao
- Graduate School of Computer Science and Engineering, The University of Aizu, Aizu-Wakamatsu 965-8580, Japan
- Xin Du
- Department of Gynecology, Maternal and Child Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430070, China
- Suzhen Yuan
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Wenfeng Shen
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Xin Zhu
- Graduate School of Computer Science and Engineering, The University of Aizu, Aizu-Wakamatsu 965-8580, Japan
- Wenwen Wang
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
5. Mongan J, Kohli MD, Houshyar R, Chang PD, Glavis-Bloom J, Taylor AG. Automated detection of IVC filters on radiographs with deep convolutional neural networks. Abdom Radiol (NY) 2023; 48:758-764. PMID: 36371471. PMCID: PMC9902407. DOI: 10.1007/s00261-022-03734-8.
Abstract
PURPOSE To create an algorithm able to accurately detect IVC filters on radiographs without human assistance, capable of being used to screen radiographs to identify patients needing IVC filter retrieval.
METHODS A primary dataset of 5225 images, 30% of which included IVC filters, was assembled and annotated. 85% of the data was used to train a Cascade R-CNN (Region-Based Convolutional Neural Network) object detection network incorporating a pre-trained ResNet-50 backbone. The remaining 15% of the data, independently annotated by three radiologists, was used as a test set to assess performance. The algorithm was also assessed on an independently constructed 1424-image dataset drawn from a different institution than the primary dataset.
RESULTS On the primary test set, the algorithm achieved a sensitivity of 96.2% (95% CI 92.7-98.1%) and a specificity of 98.9% (95% CI 97.4-99.5%). Results were similar on the external test set: sensitivity 97.9% (95% CI 96.2-98.9%), specificity 99.6% (95% CI 98.9-99.9%).
CONCLUSION Fully automated detection of IVC filters on radiographs, with the high sensitivity and excellent specificity required for an automated screening system, can be achieved using object detection neural networks. Further work will develop a system for identifying patients for IVC filter retrieval based on this algorithm.
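Metrics of this kind reduce to a confusion matrix plus an interval estimate. The sketch below computes sensitivity, specificity, and Wilson 95% intervals; the counts are hypothetical, and the study's own CI method is not specified here:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical confusion-matrix counts (not the study's data)
tp, fn, tn, fp = 230, 9, 520, 6
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity {sensitivity:.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {specificity:.1%}, 95% CI {wilson_ci(tn, tn + fp)}")
```

The Wilson interval is a common choice for proportions near 1, where the naive normal approximation can exceed 100%.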
Affiliation(s)
- John Mongan
- Department of Radiology and Biomedical Imaging, Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Avenue, San Francisco, CA, 94143-0628, USA.
- Marc D. Kohli
- Department of Radiology and Biomedical Imaging, Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Avenue, San Francisco, CA 94143-0628 USA
- Roozbeh Houshyar
- Department of Radiological Sciences, Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Irvine, USA
- Peter D. Chang
- Department of Radiological Sciences, Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Irvine, USA
- Justin Glavis-Bloom
- Department of Radiological Sciences, Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Irvine, USA
- Andrew G. Taylor
- Department of Radiology and Biomedical Imaging, Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Avenue, San Francisco, CA 94143-0628 USA
6. Santomartino SM, Yi PH. Systematic Review of Radiologist and Medical Student Attitudes on the Role and Impact of AI in Radiology. Acad Radiol 2022; 29:1748-1756. PMID: 35105524. DOI: 10.1016/j.acra.2021.12.032.
Abstract
RATIONALE AND OBJECTIVES The introduction of AI in radiology has prompted both excitement and hesitation within the field. We performed a systematic review of original studies evaluating the attitudes of radiologists, radiology trainees, and medical students towards AI in radiology.
MATERIALS AND METHODS We searched PubMed for original studies, published as of August 24, 2021, evaluating the attitudes of radiologists (attendings and trainees) and medical students towards AI in radiology. We summarized the baseline article characteristics and performed a thematic analysis of the questions asked in each study.
RESULTS Nineteen studies were included, evaluating attitudes across different levels of training (medical students, radiology trainees, and radiology attendings) with representation from nearly every continent. Medical students and radiologists alike favored increased educational initiatives and displayed interest in learning about and implementing AI solutions themselves, despite reporting a current gap in formal AI training. There was general optimism about the role of AI in radiology, although radiologists and trainees showed greater consensus than medical students.
CONCLUSION Although there is interest in incorporating AI into medical education and optimism among radiologists towards AI, medical students are more divided in their views. We propose that outreach to, and AI education for, medical students may help improve their attitudes towards the potentially transformative technology of AI in radiology.
Affiliation(s)
- Samantha M Santomartino
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, Baltimore, Maryland
- Paul H Yi
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, Baltimore, Maryland; Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland.
7. Delanerolle G, Yang X, Shetty S, Raymont V, Shetty A, Phiri P, Hapangama DK, Tempest N, Majumder K, Shi JQ. Artificial intelligence: A rapid case for advancement in the personalization of Gynaecology/Obstetric and Mental Health care. Womens Health (Lond) 2021; 17:17455065211018111. PMID: 33990172. PMCID: PMC8127586. DOI: 10.1177/17455065211018111.
Abstract
To evaluate and holistically treat the mental health sequelae and potential psychiatric comorbidities associated with obstetric and gynaecological conditions, it is important to optimize patient care, ensure efficient use of limited resources, and improve health-economic models. Artificial intelligence applications could assist in achieving the above. The World Health Organization and global healthcare systems have already recognized the use of artificial intelligence technologies to address 'system gaps' and automate some of the more cumbersome tasks in order to optimize clinical services and reduce health inequalities. Currently, mental health services and obstetric and gynaecological services each use artificial intelligence applications, but these solutions are developed within the two clinical practices independently of one another. However, for patients whose mental health and obstetric and gynaecological illnesses have frequently overlapping sequelae, 'holistically' developed artificial intelligence applications could be useful. We therefore present a rapid review of the currently available artificial intelligence applications and of research into multi-morbid conditions, including clinical-trial-based validations. Most artificial intelligence applications are intrinsically data-driven tools, and their validation in healthcare can be challenging because it requires large-scale clinical trials. Furthermore, most artificial intelligence applications use rate-limiting mock data sets, which restrict their applicability to a clinical population. From a statistical perspective, some researchers may also fail to recognize the randomness of the data-generating processes in clinical care, so that a data set offers only a minimal representation of the population, limiting its applicability within a real-world setting. However, novel, innovative trial designs could pave the way to generating better data sets that are generalizable to the entire global population. To achieve this, a collaboration between artificial intelligence and statistical models could be developed and deployed with algorithmic and domain interpretability. In addition, acquiring big data sets is vital to ensure these artificial intelligence applications provide the highest accuracy within a real-world setting, especially when used as part of a clinical diagnosis or treatment.
Affiliation(s)
- Xuzhi Yang
- Southern University of Science and Technology, Shenzhen, China
- Ashish Shetty
- University College London, London, UK; University College London NHS Foundation Trust, London, UK
- Peter Phiri
- Southern Health NHS Foundation Trust, Southampton, UK; Primary Care, Population Sciences and Medical Education, University of Southampton, Southampton, UK
- Kingshuk Majumder
- University of Manchester Hospitals NHS Foundation Trust, Manchester, UK
- Jian Qing Shi
- Southern University of Science and Technology, Shenzhen, China; The Alan Turing Institute, London, UK
8. Stewart JK. Uterine Artery Embolization for Uterine Fibroids: A Closer Look at Misperceptions and Challenges. Tech Vasc Interv Radiol 2021; 24:100725. PMID: 34147198. DOI: 10.1016/j.tvir.2021.100725.
Abstract
Uterine artery embolization (UAE) has been shown to be a safe and effective treatment for symptomatic uterine fibroids, with over 25 years of supporting data. Although UAE is a well-established treatment option, several misperceptions exist that may limit the number of patients who are considered candidates for UAE. There are also challenges that may affect the ability of interventional radiologists to effectively treat some patients and offer the best possible experience. This article will discuss these misperceptions and challenges, which represent opportunities for further growth and innovation that will allow interventional radiologists to better serve this patient population.
Affiliation(s)
- Jessica K Stewart
- Division of Interventional Radiology, Department of Radiologic Sciences, David Geffen School of Medicine at UCLA, Los Angeles, CA.
9. He Y, Pan I, Bao B, Halsey K, Chang M, Liu H, Peng S, Sebro RA, Guan J, Yi T, Delworth AT, Eweje F, States LJ, Zhang PJ, Zhang Z, Wu J, Peng X, Bai HX. Deep learning-based classification of primary bone tumors on radiographs: A preliminary study. EBioMedicine 2020; 62:103121. PMID: 33232868. PMCID: PMC7689511. DOI: 10.1016/j.ebiom.2020.103121.
Abstract
BACKGROUND To develop a deep learning model to classify primary bone tumors from preoperative radiographs and compare its performance with radiologists.
METHODS A total of 1356 patients (2899 images) with histologically confirmed primary bone tumors and preoperative radiographs were identified from five institutions' pathology databases. Manual cropping was performed by radiologists to label the lesions. Binary discriminatory capacity (benign versus not-benign and malignant versus not-malignant) and three-way classification (benign versus intermediate versus malignant) performance of our model were evaluated. The generalizability of our model was investigated on data from an external test set. Final model performance was compared with the interpretations of five radiologists of varying levels of experience using permutation tests.
FINDINGS For benign vs. not benign, the model achieved an area under the curve (AUC) of 0.894 and 0.877 on cross-validation and external testing, respectively. For malignant vs. not malignant, the model achieved an AUC of 0.907 and 0.916 on cross-validation and external testing, respectively. For three-way classification, the model achieved 72.1% accuracy vs. 74.6% and 72.1% for the two subspecialists on cross-validation (p = 0.03 and p = 0.52, respectively). On external testing, the model achieved 73.4% accuracy vs. 69.3%, 73.4%, 73.1%, 67.9%, and 63.4% for the two subspecialists and three junior radiologists (p = 0.14, p = 0.89, p = 0.93, p = 0.02, and p < 0.01 for radiologists 1-5, respectively).
INTERPRETATION Deep learning can classify primary bone tumors from conventional radiographs in a multi-institutional dataset with accuracy similar to that of subspecialists and better than that of junior radiologists.
FUNDING The project described was supported by the RSNA Research & Education Foundation, through grant number RSCH2004 to Harrison X. Bai.
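AUC values like those reported above can be computed directly from raw classifier scores via the rank-sum (Mann-Whitney) formulation, without constructing an explicit ROC curve. A stdlib-only sketch with made-up scores (not the study's data):

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count as half a win)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores: malignant (positive) vs. benign cases
print(roc_auc([0.9, 0.8, 0.7], [0.6, 0.8, 0.2]))
```

The quadratic pairwise loop is fine for a sketch; production code sorts once and uses ranks for O(n log n) behavior.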
Affiliation(s)
- Yu He
- Department of Radiology, The Second Xiangya Hospital of Central South University, No.139 Middle Renmin Road, Changsha, Hunan 410011, PR China
- Ian Pan
- Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence 02912, USA
- Bingting Bao
- Department of Radiology, The Second Xiangya Hospital of Central South University, No.139 Middle Renmin Road, Changsha, Hunan 410011, PR China
- Kasey Halsey
- Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence 02912, USA
- Hui Liu
- Department of Radiology, The Second Xiangya Hospital of Central South University, No.139 Middle Renmin Road, Changsha, Hunan 410011, PR China
- Shuping Peng
- Department of Radiology, The Second Xiangya Hospital of Central South University, No.139 Middle Renmin Road, Changsha, Hunan 410011, PR China
- Ronnie A Sebro
- Musculoskeletal Imaging, Department of Radiology, University of Pennsylvania, Philadelphia 19104, USA
- Jing Guan
- Department of Radiology, The Second Xiangya Hospital of Central South University, No.139 Middle Renmin Road, Changsha, Hunan 410011, PR China
- Thomas Yi
- Warren Alpert Medical School of Brown University, Providence 02903, USA
- Feyisope Eweje
- Perelman School of Medicine at the University of Pennsylvania, Philadelphia 19104, USA
- Lisa J States
- Department of Radiology, Children's Hospital of Philadelphia, 19104, USA
- Paul J Zhang
- Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, Philadelphia 19104, USA
- Zishu Zhang
- Department of Radiology, The Second Xiangya Hospital of Central South University, No.139 Middle Renmin Road, Changsha, Hunan 410011, PR China
- Jing Wu
- Department of Radiology, The Second Xiangya Hospital of Central South University, No.139 Middle Renmin Road, Changsha, Hunan 410011, PR China.
- Xianjing Peng
- Department of Radiology, Xiangya Hospital, Central South University, No.87 Xiangya Road, Changsha, Hunan 410008, PR China.
- Harrison X Bai
- Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence 02912, USA.