1
Tucci F, Laurinavicius A, Kather JN, Eloy C. The digital revolution in pathology: Towards a smarter approach to research and treatment. Tumori Journal 2024;110:241-251. PMID: 38606831. DOI: 10.1177/03008916241231035.
Abstract
Artificial intelligence (AI) applications in oncology are at the forefront of transforming healthcare during the Fourth Industrial Revolution, driven by the digital data explosion. This review provides an accessible introduction to the field of AI, presenting a concise yet structured overview of the foundations of AI, including expert systems, classical machine learning, and deep learning, along with their contextual application in clinical research and healthcare. We delve into the current applications of AI in oncology, with a particular focus on diagnostic imaging and pathology. Numerous AI tools have already received regulatory approval, and more are under active development, bringing clear benefits but not without challenges. We discuss the importance of data security, the need for transparent and interpretable models, and the ethical considerations that must guide AI development in healthcare. By providing a perspective on the opportunities and challenges, this review aims to inform and guide researchers, clinicians, and policymakers in the adoption of AI in oncology.
Affiliation(s)
- Francesco Tucci
- School of Pathology, University of Milan, Milan, Italy
- European Institute of Oncology (IEO) IRCCS, Milan, Italy
- Arvydas Laurinavicius
- Department of Pathology, Forensic Medicine and Pharmacology, Faculty of Medicine, Institute of Biomedical Sciences, Vilnius University, Vilnius, Lithuania
- National Centre of Pathology, Affiliate of Vilnius University Hospital Santaros Clinics, Vilnius, Lithuania
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Catarina Eloy
- Ipatimup - Institute of Molecular Pathology and Immunology of University of Porto, Porto, Portugal
- Medical Faculty, University of Porto, Porto, Portugal
- i3S-Instituto de Investigação e Inovação em Saúde, Porto, Portugal
2
Bhatnagar A, Kekatpure AL, Velagala VR, Kekatpure A. A Review on the Use of Artificial Intelligence in Fracture Detection. Cureus 2024;16:e58364. PMID: 38756254. PMCID: PMC11097122. DOI: 10.7759/cureus.58364.
Abstract
Artificial intelligence (AI) simulates intelligent behavior using computers with minimal human intervention. Recent advances in AI, especially deep learning, have made significant progress in perceptual operations, enabling computers to convey and comprehend complicated input more accurately. Fractures affect people of all ages in all regions of the world. Fractures overlooked on radiographs taken in the emergency room, with reported miss rates ranging from 2% to 9%, are among the most prevalent causes of inaccurate diagnosis and medical lawsuits. The workforce will soon be under a great deal of strain due to the growing demand for fracture detection on multiple imaging modalities. A shortage of radiologists, driven by delays in hiring and a significant percentage of radiologists nearing retirement, worsens this rise in demand. Additionally, the process of interpreting diagnostic images can be challenging and tedious. Integrating orthopedic radio-diagnosis with AI presents a promising solution to these problems. There has recently been a noticeable rise in the application of deep learning techniques, namely convolutional neural networks (CNNs), in medical imaging. In the field of orthopedic trauma, CNNs have been documented to operate at the proficiency of expert orthopedic surgeons and radiologists in the identification and categorization of fractures. CNNs can analyze vast amounts of data at a rate that surpasses that of human observation. In this review, we discuss the use of deep learning methods in fracture detection and classification, the integration of AI with various imaging modalities, and the benefits and disadvantages of integrating AI with radio-diagnostics.
Affiliation(s)
- Aayushi Bhatnagar
- Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aditya L Kekatpure
- Orthopedic Surgery, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Vivek R Velagala
- Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aashay Kekatpure
- Orthopedic Surgery, Narendra Kumar Prasadrao Salve Institute of Medical Sciences and Research, Nagpur, IND
3
Li X, Zhang Y, Jin J, Sun F, Li N, Liang S. A model of integrating convolution and BiGRU dual-channel mechanism for Chinese medical text classifications. PLoS One 2023;18:e0282824. PMID: 36928266. PMCID: PMC10019650. DOI: 10.1371/journal.pone.0282824.
Abstract
Recently, many Chinese patients have consulted about treatment plans through social networking platforms, and Chinese medical text contains rich information, including a large number of medical nomenclatures and symptom descriptions. Building an intelligent model that automatically classifies the text submitted by patients and recommends the correct department is therefore very important. To address insufficient feature extraction from Chinese medical text and low accuracy, this paper proposes a dual-channel Chinese medical text classification model. The model extracts features of Chinese medical text at different granularities, comprehensively and accurately obtains effective feature information, and finally recommends departments for patients according to the text classification. One channel focuses on medical nomenclature, symptoms, and other words related to hospital departments, assigns them different weights, and computes feature vectors with convolution kernels of different sizes, yielding a local text representation. The other channel uses a BiGRU network and an attention mechanism to obtain a text representation that highlights the important information of the whole sentence, that is, a global text representation. Finally, the model uses a fully connected layer to combine the representation vectors of the two channels and a Softmax classifier for classification. The experimental results show that the accuracy, recall, and F1-score of the model improve on the baseline models by 10.65%, 8.94%, and 11.62% on average, demonstrating better performance and robustness.
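The dual-channel idea in this entry can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the convolution channel max-pools filters of two widths over token embeddings (local features), while simple attention pooling stands in for the paper's BiGRU channel (global features); the two vectors are concatenated and passed through a dense softmax layer. All dimensions, weights, and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_channel(emb, widths=(2, 3), n_filters=4):
    """Local channel: max-pooled 1-D convolutions of several widths."""
    feats = []
    for w in widths:
        filt = rng.standard_normal((n_filters, w * emb.shape[1]))
        # slide a window of w tokens, flatten, project, ReLU, max-pool over time
        windows = np.stack([emb[i:i + w].ravel() for i in range(len(emb) - w + 1)])
        feats.append(np.maximum(windows @ filt.T, 0).max(axis=0))
    return np.concatenate(feats)          # shape: (len(widths) * n_filters,)

def attention_channel(emb, d_att=4):
    """Global channel: attention pooling (toy stand-in for BiGRU + attention)."""
    W, v = rng.standard_normal((d_att, emb.shape[1])), rng.standard_normal(d_att)
    scores = np.tanh(emb @ W.T) @ v       # one relevance score per token
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ emb                    # weighted average of token embeddings

def classify(emb, n_departments=3):
    """Concatenate both channels, then dense layer + softmax over departments."""
    h = np.concatenate([conv_channel(emb), attention_channel(emb)])
    logits = rng.standard_normal((n_departments, h.size)) @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()

# a toy "sentence" of 6 tokens with 8-dimensional embeddings
probs = classify(rng.standard_normal((6, 8)))
print(probs.shape)  # (3,)
```

In the real model the convolution filters and attention weights are learned jointly; here they are random, so only the shapes and data flow are meaningful.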
Affiliation(s)
- Xiaoli Li
- School of Software, Henan University, Kaifeng, China
- Yuying Zhang
- School of Software, Henan University, Kaifeng, China
- Jiangyong Jin
- School of Software, Henan University, Kaifeng, China
- Fuqi Sun
- School of Software, Henan University, Kaifeng, China
- Na Li
- School of Digital Arts and Communication, Shandong University of Art & Design, Jinan, China
- Shengbin Liang
- School of Software, Henan University, Kaifeng, China
- Institute for Data Engineering and Science, University of Saint Joseph, Macao, China
4
Pan H, Bakalov V, Cox L, Engle ML, Erickson SW, Feolo M, Guo Y, Huggins W, Hwang S, Kimura M, Krzyzanowski M, Levy J, Phillips M, Qin Y, Williams D, Ramos EM, Hamilton CM. Identifying Datasets for Cross-Study Analysis in dbGaP using PhenX. Sci Data 2022;9:532. PMID: 36050327. PMCID: PMC9434066. DOI: 10.1038/s41597-022-01660-4.
Abstract
Identifying relevant studies and harmonizing datasets are major hurdles for data reuse. Common Data Elements (CDEs) can help identify comparable study datasets and reduce the burden of retrospective data harmonization, but they have not been required, historically. The collaborative team at PhenX and dbGaP developed an approach to use PhenX variables as a set of CDEs to link phenotypic data and identify comparable studies in dbGaP. Variables were identified as either comparable or related, based on the data collection mode used to harmonize data across mapped datasets. We further added a CDE data field in the dbGaP data submission packet to indicate use of PhenX and annotate linkages in the future. Some 13,653 dbGaP variables from 521 studies were linked through PhenX variable mapping. These variable linkages have been made accessible for browsing and searching in the repository through dbGaP CDE-faceted search filter and the PhenX variable search tool. New features in dbGaP and PhenX enable investigators to identify variable linkages among dbGaP studies and reveal opportunities for cross-study analysis.
Affiliation(s)
- Huaqin Pan
- RTI International, Research Triangle Park, NC, USA
- Lisa Cox
- RTI International, Research Triangle Park, NC, USA
- Michael Feolo
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Yuelong Guo
- GeneCentric Therapeutics Inc., Durham, NC, USA
- Masato Kimura
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Josh Levy
- Levy Informatics, Chapel Hill, NC, USA
- Ying Qin
- RTI International, Research Triangle Park, NC, USA
- Erin M Ramos
- National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA
5
Walsh J, Dwumfour C, Cave J, Griffiths F. Spontaneously generated online patient experience data - how and why is it being used in health research: an umbrella scoping review. BMC Med Res Methodol 2022;22:139. PMID: 35562661. PMCID: PMC9106384. DOI: 10.1186/s12874-022-01610-z.
Abstract
PURPOSE Social media has led to fundamental changes in the way that people look for and share health-related information. There is increasing interest in using this spontaneously generated online patient experience (SGOPE) data as a source for health research. The aim was to summarise the state of the art regarding how and why SGOPE data has been used in health research. We determined the sites and platforms used as data sources, the purposes of the studies, the tools and methods being used, and any identified research gaps. METHODS A scoping umbrella review was conducted of review papers from 2015 to January 2021 that studied the use of SGOPE data for health research. Using keyword searches we identified 1759 papers, from which we included 58 relevant studies in our review. RESULTS Data was drawn from many individual general or health-specific platforms, although Twitter was the most widely used source. The most frequent purposes were surveillance based: tracking infectious disease, identifying adverse events, and triaging mental health. Despite developments in machine learning, the reviews included many small qualitative studies. Most NLP work used supervised methods for sentiment analysis and classification. The field is at an early stage: methods need development, are often not fully explained, and disciplinary differences persist between work focused on improving accuracy and work focused on application. There is little evidence of work that either compares both kinds of methods on the same data set or brings the two together. CONCLUSION Tools, methods, and techniques are still at an early stage of development, but strong consensus exists that this data source will become very important to patient-centred health research.
Affiliation(s)
- Julia Walsh
- Warwick Medical School, University of Warwick, Coventry, UK
- Jonathan Cave
- Department of Economics, University of Warwick, Coventry, UK
- Frances Griffiths
- Warwick Medical School, University of Warwick, Coventry, UK
- Centre for Health Policy, University of the Witwatersrand, Johannesburg, South Africa
6
Steinkamp J, Cook TS. Basic Artificial Intelligence Techniques: Natural Language Processing of Radiology Reports. Radiol Clin North Am 2021;59:919-931. PMID: 34689877. DOI: 10.1016/j.rcl.2021.06.003.
Abstract
Natural language processing (NLP) is a subfield of computer science and linguistics that can be applied to extract meaningful information from radiology reports. Symbolic NLP is rule based and well suited to problems that can be explicitly defined by a set of rules. Statistical NLP is better suited to problems that cannot be well defined and requires annotated or labeled examples from which machine learning algorithms can infer the rules. Both symbolic and statistical NLP have found success in a variety of radiology use cases. More recently, deep learning approaches, including transformers, have gained traction and demonstrated good performance.
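The symbolic, rule-based flavor of NLP described above can be illustrated with a small sketch: a regex rule fires on fracture mentions, sentence by sentence, with a simple negation check. The finding terms, negation cues, and reports below are invented for the example and are far simpler than a production rule set.

```python
import re

# Rule set for one finding; patterns and negation cues are illustrative only.
FINDING = re.compile(r"\b(fracture|fractures)\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(no|without|negative for)\b[^.]*?\b(fracture|fractures)\b",
                      re.IGNORECASE)

def report_positive(report: str) -> bool:
    """Symbolic NLP: apply the rule per sentence, skipping negated mentions."""
    for sentence in report.split("."):
        if FINDING.search(sentence) and not NEGATION.search(sentence):
            return True
    return False

reports = [
    "Acute transverse fracture of the distal radius. No dislocation.",
    "No acute fracture. Degenerative changes of the lumbar spine.",
]
print([report_positive(r) for r in reports])  # [True, False]
```

A statistical system would instead learn such patterns from labeled reports, which is exactly the trade-off the abstract describes.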
Affiliation(s)
- Jackson Steinkamp
- Department of Medicine, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Tessa S Cook
- Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 1 Silverstein Radiology, Philadelphia, PA 19104, USA
7
Kolanu N, Brown AS, Beech A, Center JR, White CP. Natural language processing of radiology reports for the identification of patients with fracture. Arch Osteoporos 2021;16:6. PMID: 33403479. DOI: 10.1007/s11657-020-00859-5.
Abstract
UNLABELLED Text-search software can be used to identify people at risk of re-fracture. The software studied identified a threefold higher number of people with fractures compared with conventional case finding. Automated software could assist fracture liaison services to identify more people at risk than traditional case finding. PURPOSE Fracture liaison services address the post-fracture treatment gap in osteoporosis (OP). Natural language processing (NLP) is able to identify previously unrecognized patients by screening large volumes of radiology reports. The aim of this study was to compare an NLP software tool, XRAIT (X-Ray Artificial Intelligence Tool), with a traditional fracture liaison service at its development site (Prince of Wales Hospital [POWH], Sydney) and externally validate it in an adjudicated cohort from the Dubbo Osteoporosis Epidemiology Study (DOES). METHODS XRAIT searches radiology reports for fracture-related terms. At the development site (POWH), XRAIT and a blinded fracture liaison clinician (FLC) reviewed 5,089 reports and 224 presentations, respectively, of people 50 years or over during a simultaneous 3-month period. In the external cohort of DOES, XRAIT was used without modification to analyse digitally readable radiology reports (n = 327) to calculate its sensitivity and specificity. RESULTS XRAIT flagged 433 fractures after searching 5,089 reports (421 true fractures, positive predictive value of 97%). It identified more than a threefold higher number of fractures (421 fractures/339 individuals) compared with manual case finding (98 individuals). Unadjusted for the local reporting style in an external cohort (DOES), XRAIT had a sensitivity of 70% and specificity of 92%. CONCLUSION XRAIT identifies significantly more clinically significant fractures than manual case finding. High specificity in an untrained cohort suggests that it could be used at other sites. Automated methods of fracture identification may assist fracture liaison services so that limited resources can be spent on treatment rather than case finding.
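The evaluation in this entry, flag reports by fracture-related terms and score the flags against adjudicated labels, can be sketched as follows. The term list, toy reports, and labels are hypothetical and do not reflect XRAIT's actual rules or data.

```python
def flag(report: str, terms=("fracture", "fractured", "#NOF")) -> bool:
    """Flag a report if any fracture-related term appears (toy rule, not XRAIT's)."""
    text = report.lower()
    return any(t.lower() in text for t in terms)

def screen(reports, truth):
    """Compare rule flags with adjudicated labels; return sensitivity/specificity/PPV."""
    tp = sum(flag(r) and t for r, t in zip(reports, truth))
    fp = sum(flag(r) and not t for r, t in zip(reports, truth))
    fn = sum(not flag(r) and t for r, t in zip(reports, truth))
    tn = sum(not flag(r) and not t for r, t in zip(reports, truth))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp)}

reports = ["Comminuted fracture of left femur",
           "Fractured neck of right humerus",
           "Normal chest radiograph",
           "Old healed fracture, no acute injury"]  # flagged, but not an acute case
truth = [True, True, False, False]                  # adjudicated labels (invented)
print(screen(reports, truth))
```

The last report shows why plain term matching over-flags (hurting specificity and PPV): it mentions a fracture that is old and healed, the kind of case negation and temporality rules exist to filter out.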
Affiliation(s)
- Nithin Kolanu
- Clinical Epidemiology/Healthy Ageing Division, Garvan Institute of Medical Research, Sydney, NSW, 2010, Australia
- Prince of Wales Hospital, Randwick, Sydney, NSW, Australia
- A Shane Brown
- Royal Hospital for Women, Randwick, Sydney, NSW, Australia
- Amanda Beech
- Prince of Wales Hospital, Randwick, Sydney, NSW, Australia
- Royal Hospital for Women, Randwick, Sydney, NSW, Australia
- University of New South Wales, Sydney, NSW, Australia
- Jacqueline R Center
- Clinical Epidemiology/Healthy Ageing Division, Garvan Institute of Medical Research, Sydney, NSW, 2010, Australia
- University of New South Wales, Sydney, NSW, Australia
- St Vincent's Hospital Clinical School, Darlinghurst, NSW, Australia
- Christopher P White
- Clinical Epidemiology/Healthy Ageing Division, Garvan Institute of Medical Research, Sydney, NSW, 2010, Australia
- Prince of Wales Hospital, Randwick, Sydney, NSW, Australia
- Royal Hospital for Women, Randwick, Sydney, NSW, Australia
- University of New South Wales, Sydney, NSW, Australia
8
Wang J, Deng H, Liu B, Hu A, Liang J, Fan L, Zheng X, Wang T, Lei J. Systematic Evaluation of Research Progress on Natural Language Processing in Medicine Over the Past 20 Years: Bibliometric Study on PubMed. J Med Internet Res 2020;22:e16816. PMID: 32012074. PMCID: PMC7005695. DOI: 10.2196/16816.
Abstract
BACKGROUND Natural language processing (NLP) is an important traditional field in computer science, but its application in medical research has faced many challenges. With the extensive digitalization of medical information globally and the increasing importance of understanding and mining big data in the medical field, NLP is becoming more crucial. OBJECTIVE The goal of the research was to perform a systematic review of the use of NLP in medical research with the aim of understanding the global progress in NLP research outcomes, content, methods, and study groups involved. METHODS A systematic review was conducted using the PubMed database as a search platform. All published studies on the application of NLP in medicine (except biomedicine) during the 20 years between 1999 and 2018 were retrieved. The data obtained from these published studies were cleaned and structured. Excel (Microsoft Corp) and VOSviewer (Nees Jan van Eck and Ludo Waltman) were used to perform bibliometric analysis of publication trends, author orders, countries, institutions, collaboration relationships, research hot spots, diseases studied, and research methods. RESULTS A total of 3498 articles were obtained during initial screening, and 2336 articles were found to meet the study criteria after manual screening. The number of publications increased every year, with significant growth after 2012 (the number of publications ranged from 148 to a maximum of 302 annually). The United States has occupied the leading position since the inception of the field, contributing 63.01% (1472/2336) of all publications, followed by France (5.44%, 127/2336) and the United Kingdom (3.51%, 82/2336). The author with the largest number of articles published was Hongfang Liu (70), while Stéphane Meystre (17) and Hua Xu (33) published the largest numbers of articles as first and corresponding author, respectively. Among first authors' institutions, Columbia University published the largest number of articles, accounting for 4.54% (106/2336) of the total. Approximately one-fifth (17.68%, 413/2336) of the articles involved research on specific diseases, with subject areas primarily focused on mental illness (16.46%, 68/413), breast cancer (5.81%, 24/413), and pneumonia (4.12%, 17/413). CONCLUSIONS NLP is in a period of robust development in the medical field, with an average of approximately 100 publications annually. Electronic medical records were the most used research materials, but social media such as Twitter have become important research materials since 2015. Cancer (24.94%, 103/413) was the most common subject area in NLP-assisted medical research on diseases, with breast cancers (23.30%, 24/103) and lung cancers (14.56%, 15/103) accounting for the highest proportions of studies. Columbia University and the talents trained there were the most active and prolific research forces on NLP in the medical field.
Affiliation(s)
- Jing Wang
- School of Medical Informatics and Engineering, Southwest Medical University, Luzhou, China
- Huan Deng
- School of Medical Informatics and Engineering, Southwest Medical University, Luzhou, China
- Bangtao Liu
- School of Medical Informatics and Engineering, Southwest Medical University, Luzhou, China
- Anbin Hu
- School of Medical Informatics and Engineering, Southwest Medical University, Luzhou, China
- Jun Liang
- IT Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Lingye Fan
- Affiliated Hospital, Southwest Medical University, Luzhou, China
- Xu Zheng
- Center for Medical Informatics, Peking University, Beijing, China
- Tong Wang
- School of Public Health, Jilin University, Jilin, China
- Jianbo Lei
- School of Medical Informatics and Engineering, Southwest Medical University, Luzhou, China
- Center for Medical Informatics, Peking University, Beijing, China
- Institute of Medical Technology, Health Science Center, Peking University, Beijing, China
9
Wang Y, Sohn S, Liu S, Shen F, Wang L, Atkinson EJ, Amin S, Liu H. A clinical text classification paradigm using weak supervision and deep representation. BMC Med Inform Decis Mak 2019;19:1. PMID: 30616584. PMCID: PMC6322223. DOI: 10.1186/s12911-018-0723-6.
Abstract
BACKGROUND Automatic clinical text classification is a natural language processing (NLP) technology that unlocks information embedded in clinical narratives. Machine learning approaches have been shown to be effective for clinical text classification tasks. However, a successful machine learning model usually requires extensive human effort to create labeled training data and conduct feature engineering. In this study, we propose a clinical text classification paradigm using weak supervision and deep representation to reduce these human efforts. METHODS We develop a rule-based NLP algorithm to automatically generate labels for the training data, and then use pre-trained word embeddings as deep representation features for training machine learning models. Since the machine learning models are trained on labels generated by the automatic NLP algorithm, this training process is called weak supervision. We evaluate the paradigm's effectiveness on two institutional case studies at Mayo Clinic: smoking status classification and proximal femur (hip) fracture classification, and one case study using a public dataset: the i2b2 2006 smoking status classification shared task. We test four widely used machine learning models, namely, Support Vector Machine (SVM), Random Forest (RF), Multilayer Perceptron Neural Networks (MLPNN), and Convolutional Neural Networks (CNN), using this paradigm. Precision, recall, and F1 score are used as metrics to evaluate performance. RESULTS CNN achieves the best performance in both institutional tasks (F1 score: 0.92 for Mayo Clinic smoking status classification and 0.97 for fracture classification). We show that word embeddings significantly outperform tf-idf and topic modeling features in the paradigm, and that CNN captures additional patterns from the weak supervision compared to the rule-based NLP algorithms. We also observe two drawbacks of the proposed paradigm: CNN is more sensitive to the size of the training data, and the paradigm might not be effective for complex multiclass classification tasks. CONCLUSION The proposed clinical text classification paradigm could reduce the human effort of labeled training data creation and feature engineering for applying machine learning to clinical text classification, by leveraging weak supervision and deep representation. Experiments on two institutional tasks and one shared clinical text classification task validated the effectiveness of the paradigm.
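The weak-supervision paradigm, where a rule-based labeler replaces human annotation and a learner is then trained on its output, can be sketched as below. For brevity this sketch swaps the paper's word embeddings and CNN for a bag-of-words perceptron; the smoking rules and clinical notes are invented for the example.

```python
from collections import Counter

def weak_label(note: str) -> int:
    """Rule-based NLP labeler standing in for human annotation (toy rules)."""
    text = note.lower()
    return 1 if "smok" in text and "never smok" not in text else 0

def featurize(note: str) -> Counter:
    """Bag-of-words features (the paper uses word embeddings instead)."""
    return Counter(note.lower().split())

def train_perceptron(notes, labels, epochs=10):
    """Train a perceptron on the weak (rule-generated) labels."""
    w = Counter()
    for _ in range(epochs):
        for note, y in zip(notes, labels):
            x = featurize(note)
            pred = 1 if sum(w[t] * c for t, c in x.items()) > 0 else 0
            if pred != y:                     # perceptron update on mistakes
                for t, c in x.items():
                    w[t] += (y - pred) * c
    return w

notes = ["patient smokes one pack per day",
         "never smoker, no alcohol use",
         "current smoker with chronic cough",
         "denies alcohol, exercises daily"]
weak = [weak_label(n) for n in notes]         # no hand labeling needed
model = train_perceptron(notes, weak)
pred = 1 if sum(model[t] * c
                for t, c in featurize("heavy smoker").items()) > 0 else 0
print(weak, pred)  # [1, 0, 1, 0] 1
```

The point of the paradigm is visible even at this scale: the learner can generalize beyond the literal rules (here, scoring the unseen phrase "heavy smoker" as positive) because it learns from the distribution of words the rules select, not from the rules themselves.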
Affiliation(s)
- Yanshan Wang
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Sunghwan Sohn
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Sijia Liu
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Feichen Shen
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Liwei Wang
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Elizabeth J. Atkinson
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Shreyasee Amin
- Division of Rheumatology, Department of Medicine, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Division of Epidemiology, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Hongfang Liu
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA