1
Venus K, Kwan JL, Frost DW. Rising to the Challenge of Rare Diagnoses. J Gen Intern Med 2024. doi: 10.1007/s11606-024-09086-x. PMID: 39485586.
Abstract
Patients with rare conditions often experience substantial delays between presentation and diagnosis, and some remain undiagnosed. In this Perspective, we outline the many challenges in diagnosing rare conditions in the modern clinical context. We review relevant concepts of diagnostic reasoning as they relate to rare conditions. We present solutions currently available for clinicians to mitigate some of these problems, including facilitating deliberate reflection, utilizing a diagnostic management team, and optimizing diagnostic calibration. Finally, we speculate how technology, such as chatbots and decision support tools enhanced by artificial intelligence, may augment a clinician's ability to diagnose rare conditions in a timely and accurate manner without excessive resource use.
Affiliation(s)
- Kevin Venus: Division of General Internal Medicine and Geriatrics, University Health Network and Sinai Health System, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada; Toronto Western Hospital, 399 Bathurst St., Toronto, ON, Canada
- Janice L Kwan: Division of General Internal Medicine and Geriatrics, University Health Network and Sinai Health System, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada; Sinai Health, Toronto, ON, Canada
- David W Frost: Division of General Internal Medicine and Geriatrics, University Health Network and Sinai Health System, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada; Toronto Western Hospital, 399 Bathurst St., Toronto, ON, Canada
2
Centola D, Becker J, Zhang J, Aysola J, Guilbeault D, Khoong E. Experimental evidence for structured information-sharing networks reducing medical errors. Proc Natl Acad Sci U S A 2023;120:e2108290120. doi: 10.1073/pnas.2108290120. PMID: 37487106; PMCID: PMC10401006.
Abstract
Errors in clinical decision-making are disturbingly common. Recent studies have found that 10 to 15% of all clinical decisions regarding diagnoses and treatment are inaccurate. Here, we experimentally study the ability of structured information-sharing networks among clinicians to improve clinicians' diagnostic accuracy and treatment decisions. We use a pool of 2,941 practicing clinicians recruited from around the United States to conduct 84 independent group-level trials, ranging across seven different clinical vignettes for topics known to exhibit high rates of diagnostic or treatment error (e.g., acute cardiac events, geriatric care, low back pain, and diabetes-related cardiovascular illness prevention). We compare collective performance in structured information-sharing networks to collective performance in independent control groups, and find that networks significantly reduce clinical errors, and improve treatment recommendations, as compared to control groups of independent clinicians engaged in isolated reflection. Our results show that these improvements are not a result of simple regression to the group mean. Instead, we find that within structured information-sharing networks, the worst clinicians improved significantly while the best clinicians did not decrease in quality. These findings offer implications for the use of social network technologies to reduce errors among clinicians.
Affiliation(s)
- Damon Centola: Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104; School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA 19104; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA 19104; Network Dynamics Group, University of Pennsylvania, Philadelphia, PA 19104
- Joshua Becker: School of Management, University College London, London E14 5AA, United Kingdom
- Jingwen Zhang: Network Dynamics Group, University of Pennsylvania, Philadelphia, PA 19104; Department of Communication, University of California, Davis, CA 95616
- Jaya Aysola: Penn Medicine Center for Health Equity Advancement, University of Pennsylvania Health System and Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104
- Douglas Guilbeault: Network Dynamics Group, University of Pennsylvania, Philadelphia, PA 19104; Haas School of Management, University of California, Berkeley, CA 94720
- Elaine Khoong: Network Dynamics Group, University of Pennsylvania, Philadelphia, PA 19104; Center for Vulnerable Populations at San Francisco General Hospital, University of California, San Francisco, CA 94110; Division of General Internal Medicine at San Francisco General Hospital, University of California, San Francisco, CA 94110
3
Vukicevic A, Vukicevic M, Radovanovic S, Delibasic B. BargCrEx: A System for Bargaining Based Aggregation of Crowd and Expert Opinions in Crowdsourcing. Group Decision and Negotiation 2022;31:789-818. doi: 10.1007/s10726-022-09783-0. PMID: 35615756; PMCID: PMC9123878.
Abstract
Crowdsourcing and crowd voting systems are increasingly used for societal, industry, and academic problems (labeling, recommendations, social choice, etc.) because they can exploit the "wisdom of the crowd" to obtain good-quality solutions and/or high voter satisfaction in a cost-efficient way. However, decisions based on crowd vote aggregation do not guarantee high-quality results, because crowd voter data quality varies. In addition, such decisions often fail to satisfy the majority of voters when the data are heterogeneous (multimodal or uniform vote distributions) or contain outliers, which cause traditional aggregation procedures (e.g., central tendency measures) to propose decisions with low voter satisfaction. In this research, we propose a system for integrating crowd and expert knowledge in a crowdsourcing setting with limited resources. The system addresses the problem of sparse voting data by using machine learning models (matrix factorization and regression) to estimate crowd and expert votes/grades. The problem of vote aggregation under multimodal or uniform vote distributions is addressed by including expert votes and aggregating crowd and expert votes with optimization and bargaining models (Kalai-Smorodinsky and Nash) commonly used in game theory. Experimental evaluation on real-world and artificial problems showed that bargaining-based aggregation outperforms traditional methods in terms of the cumulative satisfaction of experts and the crowd. In addition, the machine learning models showed satisfactory predictive performance and enabled cost reduction in the vote collection process.
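As a minimal illustration of the bargaining-based aggregation idea described above, the sketch below selects the grade that maximizes the Nash product of crowd and expert gains over a disagreement point. The votes, the squared-distance utility, and the function name are hypothetical; this is not the BargCrEx implementation.

```python
import numpy as np

def nash_bargaining_grade(crowd_votes, expert_votes):
    """Pick the grade maximizing the Nash product of crowd and expert gains.

    Utility of a candidate grade for a group is defined here (hypothetically)
    as the negative mean squared distance to that group's votes; the
    disagreement point is the worst utility either group could be forced
    to accept among the candidates.
    """
    crowd = np.asarray(crowd_votes, dtype=float)
    expert = np.asarray(expert_votes, dtype=float)
    candidates = np.unique(np.concatenate([crowd, expert]))

    crowd_u = np.array([-np.mean((g - crowd) ** 2) for g in candidates])
    expert_u = np.array([-np.mean((g - expert) ** 2) for g in candidates])

    d_crowd, d_expert = crowd_u.min(), expert_u.min()   # disagreement point
    nash_product = (crowd_u - d_crowd) * (expert_u - d_expert)
    return candidates[int(np.argmax(nash_product))]

# Hypothetical 1-5 grades: a split crowd and two experts
print(nash_bargaining_grade([1, 1, 2, 5, 5], [4, 4]))  # -> 4.0
```

Replacing the Nash product with the Kalai-Smorodinsky criterion would instead pick the candidate that best equalizes each side's gain relative to its maximum achievable gain.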
Affiliation(s)
- Ana Vukicevic: Faculty of Organizational Sciences, University of Belgrade, Jove Ilica 154, 11000 Beograd, Serbia; Saga New Frontier Group Ltd, Belgrade, Serbia
- Milan Vukicevic: Faculty of Organizational Sciences, University of Belgrade, Jove Ilica 154, 11000 Beograd, Serbia
- Sandro Radovanovic: Faculty of Organizational Sciences, University of Belgrade, Jove Ilica 154, 11000 Beograd, Serbia
- Boris Delibasic: Faculty of Organizational Sciences, University of Belgrade, Jove Ilica 154, 11000 Beograd, Serbia
4
Abstract
Only a correct diagnosis enables effective treatment of rheumatic diseases. Digitalization has already significantly accelerated and simplified everyday life. A growing number of digital options are available to patients and medical personnel in rheumatology to accelerate and improve diagnosis. This work gives an overview of current developments and tools for digital diagnostic support in rheumatology, for both patients and rheumatologists.
5
LeBlanc K, Glanton E, Nagy A, Bater J, Berro T, McGuinness MA, Studwell C, Might M. Rare disease patient matchmaking: development and outcomes of an internet case-finding strategy in the Undiagnosed Diseases Network. Orphanet J Rare Dis 2021;16:210. doi: 10.1186/s13023-021-01825-1. PMID: 33971915; PMCID: PMC8108446.
Abstract
BACKGROUND Although clinician, researcher, and patient resources for matchmaking exist, finding similar patients remains an obstacle for rare disease diagnosis. The goals of this study were to develop and test the effectiveness of an Internet case-finding strategy and identify factors associated with increased matching within a rare disease population. METHODS Public web pages were created for consented participants. Matches made, time to each inquiry and match, and outcomes were recorded and analyzed using descriptive statistics. A Poisson regression model was run to identify characteristics associated with matches. RESULTS 385 participants were referred to the project and 158 had pages posted. 579 inquiries were received; 89.0% were from the general public and 24.7% resulted in a match. 81.6% of pages received at least one inquiry and 15.0% had at least one patient match. Primary symptom category of neurology, diagnosis, gene page, and photo were associated with increased matches (p ≤ 0.05). CONCLUSIONS This Internet case-finding strategy was of interest to patients, families, and clinicians, and similar patients were identified using this approach. Extending matchmaking efforts to the general public resulted in matches and suggests including this population in matchmaking activities can improve identification of similar patients.
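As an illustration of the Poisson regression mentioned in the Methods, the sketch below regresses per-page match counts on page characteristics. The predictors, column names, and data are hypothetical, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical page-level data: match counts and page characteristics
pages = pd.DataFrame({
    "matches":   [0, 1, 0, 3, 2, 0, 1, 4],
    "neurology": [0, 1, 0, 1, 1, 0, 0, 1],   # primary symptom category
    "has_photo": [0, 1, 1, 1, 0, 0, 1, 1],
    "gene_page": [0, 0, 1, 1, 1, 0, 0, 1],
})

X = sm.add_constant(pages[["neurology", "has_photo", "gene_page"]])
fit = sm.GLM(pages["matches"], X, family=sm.families.Poisson()).fit()

print(fit.summary())        # coefficients are log rate ratios
print(np.exp(fit.params))   # rate ratios per page characteristic
```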
Affiliation(s)
- Kimberly LeBlanc: Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Emily Glanton: Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Anna Nagy: Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Jorick Bater: Department of Nutrition, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Tala Berro: Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Molly A McGuinness: Bass Center for Childhood Cancer and Blood Diseases, Stanford Children's Health, Palo Alto, CA, USA
- Courtney Studwell: Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Matthew Might: Hugh Kaul Precision Medicine Institute, University of Alabama at Birmingham, Birmingham, AL, USA
6
Ronzio L, Campagner A, Cabitza F, Gensini GF. Unity Is Intelligence: A Collective Intelligence Experiment on ECG Reading to Improve Diagnostic Performance in Cardiology. J Intell 2021;9(2):17. doi: 10.3390/jintelligence9020017. PMID: 33915991; PMCID: PMC8167709.
Abstract
Medical errors have a huge impact on clinical practice in terms of economic and human costs. As a result, technology-based solutions, such as those grounded in artificial intelligence (AI) or collective intelligence (CI), have attracted increasing interest as a means of reducing error rates and their impacts. Previous studies have shown that a combination of individual opinions based on rules, weighting mechanisms, or other CI solutions could improve diagnostic accuracy with respect to individual doctors. We conducted a study to investigate the potential of this approach in cardiology and, more precisely, in electrocardiogram (ECG) reading. To achieve this aim, we designed and conducted an experiment involving medical students, recent graduates, and residents, who were asked to annotate a collection of 10 ECGs of various complexity and difficulty. For each ECG, we considered groups of increasing size (from three to 30 members) and applied three different CI protocols. In all cases, the results showed a statistically significant improvement (ranging from 9% to 88%) in terms of diagnostic accuracy when compared to the performance of individual readers; this difference held for not only large groups, but also smaller ones. In light of these results, we conclude that CI approaches can support the tasks mentioned above, and possibly other similar ones as well. We discuss the implications of applying CI solutions to clinical settings, such as cases of augmented ‘second opinions’ and decision-making.
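As a minimal illustration of one simple collective-intelligence protocol of the kind compared above, the sketch below aggregates independent ECG labels by plurality vote. The labels and function name are hypothetical, and the study's three CI protocols are not reproduced here.

```python
import random
from collections import Counter

def plurality_diagnosis(labels):
    """Aggregate independent ECG reads by plurality vote; ties broken at random."""
    counts = Counter(labels)
    top = max(counts.values())
    return random.choice([label for label, n in counts.items() if n == top])

# Hypothetical labels from a small reader group for one tracing
group_reads = ["atrial fibrillation", "atrial fibrillation",
               "sinus tachycardia", "atrial fibrillation", "atrial flutter"]
print(plurality_diagnosis(group_reads))  # -> "atrial fibrillation"
```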
Affiliation(s)
- Luca Ronzio: Dipartimento di Informatica, Sistemistica e Comunicazione, University of Milano-Bicocca, Viale Sarca 336, 20126 Milan, Italy
- Andrea Campagner: Dipartimento di Informatica, Sistemistica e Comunicazione, University of Milano-Bicocca, Viale Sarca 336, 20126 Milan, Italy
- Federico Cabitza: Dipartimento di Informatica, Sistemistica e Comunicazione, University of Milano-Bicocca, Viale Sarca 336, 20126 Milan, Italy
7
Kühnle L, Mücke U, Lechner WM, Klawonn F, Grigull L. Development of a Social Network for People Without a Diagnosis (RarePairs): Evaluation Study. J Med Internet Res 2020;22:e21849. doi: 10.2196/21849. PMID: 32990634; PMCID: PMC7556379.
Abstract
Background Diagnostic delay in rare disease (RD) is common, sometimes lasting more than 20 years. In attempting to reduce it, diagnostic support tools have been studied extensively. However, social platforms have not yet been used for systematic diagnostic support. This paper illustrates the development and prototypic application of a social network that uses scientifically developed questions to match individuals without a diagnosis. Objective The study aimed to outline, create, and evaluate a prototype tool (a social network platform named RarePairs) that helps patients with undiagnosed RDs find individuals with similar symptoms. The prototype includes a matching algorithm that brings together individuals with a similar disease burden in the lead-up to diagnosis. Methods We divided our project into 4 phases. In phase 1, we used known data and findings in the literature to understand and specify the context of use. In phase 2, we specified the user requirements. In phase 3, we designed a prototype based on the results of phases 1 and 2, incorporating a state-of-the-art questionnaire with 53 items for recognizing an RD. Lastly, we evaluated this prototype with a data set of 973 questionnaires from individuals with different RDs, using 24 distance-calculation methods. Results Based on a step-by-step construction process, the digital patient platform prototype, RarePairs, was developed. To match individuals with similar experiences, it uses answer patterns generated by a specifically designed questionnaire (Q53). A total of 973 questionnaires answered by patients with RDs were used to construct and test an artificial intelligence (AI) algorithm such as k-nearest neighbor search. With this, we found matches for every one of the 973 records. Cross-validation of those matches showed that the algorithm significantly outperforms random matching. For every data set, the algorithm found at least one other record (match) with the same diagnosis. Conclusions Diagnostic delay is torturous for patients without a diagnosis, and shortening it is important for both doctors and patients. The prototype of the social media platform RarePairs might serve as a low-threshold patient platform and proved suitable for matching and connecting individuals with comparable symptoms. The exchange promoted through RarePairs might be used to speed up the diagnostic process. Further steps include its evaluation in a prospective setting and implementation of RarePairs as a mobile phone app.
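As a minimal illustration of the answer-pattern matching described above, the sketch below finds each respondent's k nearest neighbors by pairwise distance over questionnaire answers. The answer vectors, the Hamming/Euclidean choice, and the function name are hypothetical; this is not the Q53 data or the 24 distance methods evaluated in the study.

```python
import numpy as np

def nearest_matches(answer_patterns, k=2, metric="hamming"):
    """Return, for each respondent, the indices of the k most similar others."""
    X = np.asarray(answer_patterns, dtype=float)
    if metric == "hamming":
        dist = (X[:, None, :] != X[None, :, :]).mean(axis=2)
    else:  # Euclidean
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)          # never match a record with itself
    return np.argsort(dist, axis=1)[:, :k]  # indices of the k nearest neighbors

# Hypothetical yes/no answer patterns to a short symptom questionnaire
patterns = [[1, 0, 1, 1, 0],
            [1, 0, 1, 0, 0],
            [0, 1, 0, 0, 1],
            [1, 0, 1, 1, 1]]
print(nearest_matches(patterns, k=2))
```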
Affiliation(s)
- Urs Mücke: Hannover Medical School, Hannover, Germany
- Frank Klawonn: Helmholtz Centre for Infection Research, Braunschweig, Germany; Ostfalia University, Wolfenbüttel, Germany
8
Nobles AL, Leas EC, Dredze M, Ayers JW. Examining Peer-to-Peer and Patient-Provider Interactions on a Social Media Community Facilitating Ask the Doctor Services. Proceedings of the International AAAI Conference on Weblogs and Social Media 2020;14:464-475. PMID: 32724726; PMCID: PMC7386284.
Abstract
Ask the Doctor (AtD) services provide patients the opportunity to seek medical advice using online platforms. While these services represent a new mode of healthcare delivery, study of these online health communities and how they are used is limited. In particular, it is unknown if these platforms replicate existing barriers and biases in traditional healthcare delivery across demographic groups. We present an analysis of AskDocs, a subreddit that functions as a public AtD platform on social media. We examine the demographics of users, the health topics discussed, if biases present in offline healthcare settings exist on this platform, and how empathy is expressed in interactions between users and physicians. Our findings suggest a number of implications to enhance and support peer-to-peer and patient-provider interactions on online platforms.
Affiliation(s)
- Eric C Leas: Department of Family Medicine and Public Health, University of California San Diego
- Mark Dredze: Department of Computer Science, Johns Hopkins University
- John W Ayers: Department of Medicine, University of California San Diego
9
Chang WH, Mashouri P, Lozano AX, Johnstone B, Husić M, Olry A, Maiella S, Balci TB, Sawyer SL, Robinson PN, Rath A, Brudno M. Phenotate: crowdsourcing phenotype annotations as exercises in undergraduate classes. Genet Med 2020;22:1391-1400. doi: 10.1038/s41436-020-0812-7. PMID: 32366968.
Abstract
PURPOSE Computational documentation of genetic disorders is highly reliant on structured data for differential diagnosis, pathogenic variant identification, and patient matchmaking. However, most information on rare diseases (RDs) exists in freeform text, such as academic literature. To increase availability of structured RD data, we developed a crowdsourcing approach for collecting phenotype information using student assignments. METHODS We developed Phenotate, a web application for crowdsourcing disease phenotype annotations through assignments for undergraduate genetics students. Using student-collected data, we generated composite annotations for each disease through a machine learning approach. These annotations were compared with those from clinical practitioners and gold standard curated data. RESULTS Deploying Phenotate in five undergraduate genetics courses, we collected annotations for 22 diseases. Student-sourced annotations showed strong similarity to gold standards, with F-measures ranging from 0.584 to 0.868. Furthermore, clinicians used Phenotate annotations to identify diseases with comparable accuracy to other annotation sources and gold standards. For six disorders, no gold standards were available, allowing us to create some of the first structured annotations for them, while students demonstrated ability to research RDs. CONCLUSION Phenotate enables crowdsourcing RD phenotypic annotations through educational assignments. Presented as an intuitive web-based tool, it offers pedagogical benefits and augments the computable RD knowledgebase.
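For reference, the F-measure reported above is the harmonic mean of precision and recall; the sketch below computes it for exact matching between two phenotype-term sets. The term sets are hypothetical, and the study may have scored annotations with a weighted or semantic-similarity variant rather than strict exact matching.

```python
def f_measure(candidate_terms, gold_terms):
    """Harmonic mean of precision and recall between two phenotype-term sets."""
    s, g = set(candidate_terms), set(gold_terms)
    tp = len(s & g)                 # terms present in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(s)
    recall = tp / len(g)
    return 2 * precision * recall / (precision + recall)

# Hypothetical HPO-style term sets for one disease
student = {"HP:0001250", "HP:0001263", "HP:0000252"}
gold = {"HP:0001250", "HP:0000252", "HP:0004322"}
print(round(f_measure(student, gold), 3))  # -> 0.667
```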
Affiliation(s)
- Willie H Chang: Centre for Computational Medicine, The Hospital for Sick Children, Toronto, ON, Canada; Department of Computer Science, Princeton University, Princeton, NJ, USA
- Pouria Mashouri: Centre for Computational Medicine, The Hospital for Sick Children, Toronto, ON, Canada
- Alexander X Lozano: Centre for Computational Medicine, The Hospital for Sick Children, Toronto, ON, Canada; Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Department of Materials Science & Engineering, Stanford University, Stanford, CA, USA
- Brittney Johnstone: Centre for Computational Medicine, The Hospital for Sick Children, Toronto, ON, Canada; Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Mia Husić: Centre for Computational Medicine, The Hospital for Sick Children, Toronto, ON, Canada
- Annie Olry: Orphanet, Institut national de la santé et de la recherche médicale, Paris, France
- Sylvie Maiella: Orphanet, Institut national de la santé et de la recherche médicale, Paris, France
- Tugce B Balci: Medical Genetics Program of Southwestern Ontario, London Health Sciences Centre, London, ON, Canada
- Sarah L Sawyer: Department of Genetics, Children's Hospital of Eastern Ontario and Children's Hospital of Eastern Ontario Research Institute, University of Ottawa, Ottawa, ON, Canada
- Peter N Robinson: The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Institute for Systems Genomics, University of Connecticut, Farmington, CT, USA
- Ana Rath: Orphanet, Institut national de la santé et de la recherche médicale, Paris, France
- Michael Brudno: Centre for Computational Medicine, The Hospital for Sick Children, Toronto, ON, Canada; Department of Computer Science, University of Toronto, Toronto, ON, Canada; Genetics and Genome Biology Program, The Hospital for Sick Children, Toronto, ON, Canada; University Health Network, Toronto, ON, Canada
10
Hulsen T. Sharing Is Caring-Data Sharing Initiatives in Healthcare. Int J Environ Res Public Health 2020;17(9):3046. doi: 10.3390/ijerph17093046. PMID: 32349396; PMCID: PMC7246891.
Abstract
In recent years, more and more health data are being generated. These data come not only from professional health systems, but also from wearable devices. All these 'big data' put together can be utilized to optimize treatments for each unique patient ('precision medicine'). For this to be possible, it is necessary that hospitals, academia and industry work together to bridge the 'valley of death' of translational medicine. However, hospitals and academia often are reluctant to share their data with other parties, even though the patient is actually the owner of his/her own health data. Academic hospitals usually invest a lot of time in setting up clinical trials and collecting data, and want to be the first ones to publish papers on this data. There are some publicly available datasets, but these are usually only shared after study (and publication) completion, which means a severe delay of months or even years before others can analyse the data. One solution is to incentivize the hospitals to share their data with (other) academic institutes and the industry. Here, we show an analysis of the current literature around data sharing, and we discuss five aspects of data sharing in the medical domain: publisher requirements, data ownership, growing support for data sharing, data sharing initiatives and how the use of federated data might be a solution. We also discuss some potential future developments around data sharing, such as medical crowdsourcing and data generalists.
Affiliation(s)
- Tim Hulsen: Department of Professional Health Solutions & Services, Philips Research, 5656 AE Eindhoven, The Netherlands
11
Schwitzguebel AJP, Jeckelmann C, Gavinio R, Levallois C, Benaïm C, Spechbach H. Differential Diagnosis Assessment in Ambulatory Care With an Automated Medical History-Taking Device: Pilot Randomized Controlled Trial. JMIR Med Inform 2019;7:e14044. doi: 10.2196/14044. PMID: 31682590; PMCID: PMC6913752.
Abstract
Background Automated medical history-taking devices (AMHTDs) are emerging tools with the potential to increase the quality of medical consultations by providing physicians with an exhaustive, high-quality, standardized anamnesis and differential diagnosis. Objective This study aimed to assess the effectiveness of an AMHTD in obtaining an accurate differential diagnosis in an outpatient service. Methods We conducted a pilot randomized controlled trial involving 59 patients presenting to an emergency outpatient unit and suffering from various conditions affecting the limbs, the back, and the chest wall. Resident physicians were randomized into 2 groups, one assisted by the AMHTD and one without access to the device. For each patient, physicians were asked to establish an exhaustive differential diagnosis based on the anamnesis and clinical examination. In the intervention group, residents read the AMHTD report before performing the anamnesis. In both groups, a senior physician had to establish a differential diagnosis, considered the gold standard, independent of the resident's opinion and the AMHTD report. Results A total of 29 patients were included in the intervention group and 30 in the control group. Differential diagnosis accuracy was higher in the intervention group (mean 75%, SD 26%) than in the control group (mean 59%, SD 31%; P=.01). Subgroup analysis showed a between-group difference of 3% (83% [17/21] vs 80% [14/17]) for low-complexity cases (1-2 differential diagnoses possible) in favor of the AMHTD (P=.76), 31% (87% [13/15] vs 56% [18/33]) for intermediate complexity (3 differential diagnoses; P=.02), and 24% (63% [34/54] vs 39% [14/35]) for high complexity (4-5 differential diagnoses; P=.08). Physicians in the intervention group had fewer years of clinical practice (mean 4.3, SD 2) than those in the control group (mean 5.5, SD 2; P=.03). Differential diagnosis accuracy was negatively correlated with case complexity (r=-0.41; P=.001) but not with the residents' years of practice (r=0.04; P=.72). The AMHTD was able to determine 73% (SD 30%) of correct differential diagnoses. Patient satisfaction was good (4.3/5), and 26 of 29 patients (90%) considered that they were able to accurately describe their symptomatology. In 8 of 29 cases (28%), residents considered that the AMHTD helped to establish the differential diagnosis. Conclusions The AMHTD allowed physicians to make more accurate differential diagnoses, particularly in complex cases. This could be explained not only by the ability of the AMHTD to make the right diagnoses, but also by the exhaustive anamnesis it provided.
Affiliation(s)
- Adrien Jean-Pierre Schwitzguebel: Division of Physical Medicine and Rehabilitation, Department of Rheumatology, Lausanne University Hospital, Lausanne, Switzerland
- Roberto Gavinio: Ambulatory Emergency Care Unit, Department of Primary Care Medicine, Geneva University Hospitals, Geneva, Switzerland
- Cécile Levallois: Ambulatory Emergency Care Unit, Department of Primary Care Medicine, Geneva University Hospitals, Geneva, Switzerland
- Charles Benaïm: Division of Physical Medicine and Rehabilitation, Department of Rheumatology, Lausanne University Hospital, Lausanne, Switzerland
- Hervé Spechbach: Ambulatory Emergency Care Unit, Department of Primary Care Medicine, Geneva University Hospitals, Geneva, Switzerland
12
Abstract
Establishing a correct diagnosis is by far the most challenging task in a physician's daily routine. Rare diseases place especially high demands on differential diagnosis because of the large number of entities (around 8000) and their clinical variability. No clinician can be aware of all of them, and memorizing them all is impossible and inefficient. Specific diagnostic decision-support systems provide better results than standard search engines in this context. The systems FindZebra, Phenomizer, Orphanet, and Isabel are presented here concisely, with their advantages and limitations. An outlook is given on the use of social media and big data technologies. Given the high number of initial misdiagnoses and the long time until a confirmed diagnosis is reached, these tools appear promising for improving the diagnosis of rare diseases in practice.
Affiliation(s)
- T Müller: Zentrum für unerkannte und seltene Erkrankungen (ZusE), Universitätsklinikum Gießen und Marburg (UKGM), Baldingerstr. 1, 35043 Marburg, Germany
- A Jerrentrup: Zentrum für unerkannte und seltene Erkrankungen (ZusE), Universitätsklinikum Gießen und Marburg (UKGM), Baldingerstr. 1, 35043 Marburg, Germany
- J R Schäfer: Zentrum für unerkannte und seltene Erkrankungen (ZusE), Universitätsklinikum Gießen und Marburg (UKGM), Baldingerstr. 1, 35043 Marburg, Germany
13
Berenson R, Singh H. Payment Innovations To Improve Diagnostic Accuracy And Reduce Diagnostic Error. Health Aff (Millwood) 2018;37:1828-1835. doi: 10.1377/hlthaff.2018.0714. PMID: 30395510.
Abstract
Diagnostic accuracy is essential for treatment decisions but is largely unaccounted for by payers, including in fee-for-service Medicare and proposed Alternative Payment Models (APMs). We discuss three payment-related approaches to reducing diagnostic error. First, coding changes in the Medicare Physician Fee Schedule could facilitate the more effective use of teamwork and information technology in the diagnostic process and better support the cognitive work and time commitment that physicians make in the quest for diagnostic accuracy, especially in difficult or uncertain cases. Second, new APMs could be developed to focus on improving diagnostic accuracy in challenging cases and make available support resources for diagnosis, including condition-specific centers of diagnostic expertise or general diagnostic centers of excellence that provide second (or even third) opinions. Performing quality improvement activities that promote safer diagnosis should be a part of the accountability of APM recipients. Third, the accuracy of diagnoses that trigger APM payments and establish payment amounts should be confirmed by APM recipients. Implementation of these multipronged approaches can make current payment models more accountable for addressing diagnostic error and position diagnostic performance as a critical component of quality-based payment.
Affiliation(s)
- Robert Berenson: institute fellow at the Urban Institute, Washington, D.C.
- Hardeep Singh: chief of the Health Policy, Quality, and Informatics Program, Center for Innovations in Quality, Effectiveness, and Safety, Michael E. DeBakey Veterans Affairs Medical Center, and professor of medicine at the Baylor College of Medicine, Houston, Texas
14
Washington P, Kalantarian H, Tariq Q, Schwartz J, Dunlap K, Chrisman B, Varma M, Ning M, Kline A, Stockham N, Paskov K, Voss C, Haber N, Wall DP. Validity of Online Screening for Autism: Crowdsourcing Study Comparing Paid and Unpaid Diagnostic Tasks. J Med Internet Res 2019;21:e13668. doi: 10.2196/13668. PMID: 31124463; PMCID: PMC6552453.
Abstract
BACKGROUND Obtaining a diagnosis of neuropsychiatric disorders such as autism requires long waiting times that can exceed a year and can be prohibitively expensive. Crowdsourcing approaches may provide a scalable alternative that can accelerate general access to care and permit underserved populations to obtain an accurate diagnosis. OBJECTIVE We aimed to perform a series of studies to explore whether paid crowd workers on Amazon Mechanical Turk (AMT) and citizen crowd workers on a public website shared on social media can provide accurate online detection of autism, conducted via crowdsourced ratings of short home video clips. METHODS Three online studies were performed: (1) a paid crowdsourcing task on AMT (N=54) where crowd workers were asked to classify 10 short video clips of children as "Autism" or "Not autism," (2) a more complex paid crowdsourcing task (N=27) with only those raters who correctly rated ≥8 of the 10 videos during the first study, and (3) a public unpaid study (N=115) identical to the first study. RESULTS For Study 1, the mean score of the participants who completed all questions was 7.50/10 (SD 1.46). When only analyzing the workers who scored ≥8/10 (n=27/54), there was a weak negative correlation between the time spent rating the videos and the sensitivity (ρ=-0.44, P=.02). For Study 2, the mean score of the participants rating new videos was 6.76/10 (SD 0.59). The average deviation between the crowdsourced answers and gold standard ratings provided by two expert clinical research coordinators was 0.56, with an SD of 0.51 (maximum possible SD is 3). All paid crowd workers who scored 8/10 in Study 1 either expressed enjoyment in performing the task in Study 2 or provided no negative comments. For Study 3, the mean score of the participants who completed all questions was 6.67/10 (SD 1.61). There were weak correlations between age and score (r=0.22, P=.014), age and sensitivity (r=-0.19, P=.04), number of family members with autism and sensitivity (r=-0.195, P=.04), and number of family members with autism and precision (r=-0.203, P=.03). A two-tailed t test between the scores of the paid workers in Study 1 and the unpaid workers in Study 3 showed a significant difference (P<.001). CONCLUSIONS Many paid crowd workers on AMT enjoyed answering screening questions from videos, suggesting higher intrinsic motivation to make quality assessments. Paid crowdsourcing provides promising screening assessments of pediatric autism with an average deviation <20% from professional gold standard raters, which is potentially a clinically informative estimate for parents. Parents of children with autism likely overfit their intuition to their own affected child. This work provides preliminary demographic data on raters who may have higher ability to recognize and measure features of autism across its wide range of phenotypic manifestations.
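As a minimal illustration of the statistics reported above (a two-tailed independent-samples t test between paid and unpaid rater scores, plus a Pearson correlation), the sketch below uses simulated scores; the arrays and sample sizes are hypothetical, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-rater scores out of 10 (not the study's data)
paid_scores = rng.normal(7.5, 1.5, size=54).clip(0, 10)
unpaid_scores = rng.normal(6.7, 1.6, size=115).clip(0, 10)

# Two-tailed independent-samples t test (paid vs unpaid raters)
t_stat, p_value = stats.ttest_ind(paid_scores, unpaid_scores)
print(f"t = {t_stat:.2f}, two-tailed P = {p_value:.4f}")

# Pearson correlation, e.g., between rater age and score
ages = rng.integers(18, 70, size=115)
r, p_r = stats.pearsonr(ages, unpaid_scores)
print(f"r = {r:.2f}, P = {p_r:.3f}")
```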
Affiliation(s)
- Peter Washington: Department of Bioengineering, Stanford University, Stanford, CA, United States
- Haik Kalantarian: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Qandeel Tariq: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Jessey Schwartz: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Kaitlyn Dunlap: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Brianna Chrisman: Department of Bioengineering, Stanford University, Stanford, CA, United States
- Maya Varma: Department of Computer Science, Stanford University, Stanford, CA, United States
- Michael Ning: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Aaron Kline: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Nathaniel Stockham: Department of Neuroscience, Stanford University, Stanford, CA, United States
- Kelley Paskov: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Catalin Voss: Department of Computer Science, Stanford University, Stanford, CA, United States
- Nick Haber: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States; Department of Pediatrics, Stanford University, Stanford, CA, United States; Department of Psychology, Stanford University, Stanford, CA, United States; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Dennis Paul Wall: Department of Pediatrics, Stanford University, Stanford, CA, United States; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States; Division of Systems Medicine, Department of Biomedical Data Science, Stanford University, Palo Alto, CA, United States
15
A systematic review of natural language processing and text mining of symptoms from electronic patient-authored text data. Int J Med Inform 2019;125:37-46. doi: 10.1016/j.ijmedinf.2019.02.008.
Abstract
OBJECTIVE In this systematic review, we aim to synthesize the literature on the use of natural language processing (NLP) and text mining as they apply to symptom extraction and processing in electronic patient-authored text (ePAT). MATERIALS AND METHODS A comprehensive literature search of 1964 articles from PubMed and EMBASE was narrowed to 21 eligible articles. Data related to purpose, text source, number of users and/or posts, evaluation metrics, and quality indicators were recorded. RESULTS Pain (n = 18) and fatigue and sleep disturbance (n = 18) were the most frequently evaluated symptom clinical content categories. Studies accessed ePAT from sources such as Twitter and online community forums or patient portals focused on diseases, including diabetes, cancer, and depression. Fifteen studies used NLP as a primary methodology. Studies reported evaluation metrics including the precision, recall, and F-measure for symptom-specific research questions. DISCUSSION NLP and text mining have been used to extract and analyze patient-authored symptom data in a wide variety of online communities. Though there are computational challenges with accessing ePAT, the depth of information provided directly from patients offers new horizons for precision medicine, characterization of sub-clinical symptoms, and the creation of personal health libraries as outlined by the National Library of Medicine. CONCLUSION Future research should consider the needs of patients expressed through ePAT and its relevance to symptom science. Understanding the role that ePAT plays in health communication and real-time assessment of symptoms, through the use of NLP and text mining, is critical to a patient-centered health system.
16
Millenson ML, Baldwin JL, Zipperer L, Singh H. Beyond Dr. Google: the evidence on consumer-facing digital tools for diagnosis. Diagnosis (Berl) 2018;5:95-105. doi: 10.1515/dx-2018-0009. PMID: 30032130.
Abstract
Over a third of adults go online to diagnose their health condition. Direct-to-consumer (DTC), interactive, diagnostic apps with information personalization capabilities beyond those of static search engines are rapidly proliferating. While these apps promise faster, more convenient and more accurate information to improve diagnosis, little is known about the state of the evidence on their performance or the methods used to evaluate them. We conducted a scoping review of the peer-reviewed and gray literature for the period January 1, 2014–June 30, 2017. We found that the largest category of evaluations involved symptom checkers that applied algorithms to user-answered questions, followed by sensor-driven apps that applied algorithms to smartphone photos, with a handful of evaluations examining crowdsourcing. The most common clinical areas evaluated were dermatology and general diagnostic and triage advice for a range of conditions. Evaluations were highly variable in methodology and conclusions, with about half describing app characteristics and half examining actual performance. Apps were found to vary widely in functionality, accuracy, safety and effectiveness, although the usefulness of this evidence was limited by a frequent failure to provide results by named individual app. Overall, the current evidence base on DTC, interactive diagnostic apps is sparse in scope, uneven in the information provided and inconclusive with respect to safety and effectiveness, with no studies of clinical risks and benefits involving real-world consumer use. Given that DTC diagnostic apps are rapidly evolving, rigorous and standardized evaluations are essential to inform decisions by clinicians, patients, policymakers and other stakeholders.
Affiliation(s)
- Michael L Millenson: Health Quality Advisors LLC, Highland Park, IL 60035, USA; Northwestern University Feinberg School of Medicine, Department of General Internal Medicine and Geriatrics, Chicago, IL, USA
- Jessica L Baldwin: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center, Houston, TX, USA; Department of Medicine, Baylor College of Medicine, Houston, TX, USA
- Hardeep Singh: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center, Houston, TX, USA; Department of Medicine, Baylor College of Medicine, Houston, TX, USA
17
Muse ED, Godino JG, Netting JF, Alexander JF, Moran HJ, Topol EJ. From second to hundredth opinion in medicine: A global consultation platform for physicians. NPJ Digit Med 2018;1:55. doi: 10.1038/s41746-018-0064-y. PMID: 31304334; PMCID: PMC6550165.
Abstract
Serious medical diagnostic errors lead to adverse patient outcomes and increased healthcare costs. The use of virtual online consultation platforms may lead to better-informed physicians and reduce the incidence of diagnostic errors. Our aim was to assess the usage characteristics of an online, physician-to-physician, no-cost, medical consultation platform, Medscape Consult, from November 2015 through October 2017. Physicians creating original content were noted as “presenters” and those following up as “responders”. During the study period, 37,706 physician users generated a combined 117,346 presentations and responses. The physicians had an average age of 56 years and were from 171 countries on every continent. Over 90% of all presentations received responses with the median time to first response of 1.5 h. Overall, computer- and device-based medical consultation has the capacity to rapidly reach a global medical community and may play a role in the reduction of diagnostic errors.
Affiliation(s)
- Evan D Muse: Scripps Research Translational Institute, The Scripps Research Institute, La Jolla, CA, USA; Division of Cardiovascular Disease, Scripps Clinic-Scripps Health, La Jolla, CA, USA
- Job G Godino: Scripps Research Translational Institute, The Scripps Research Institute, La Jolla, CA, USA; University of California San Diego, La Jolla, CA, USA
- Eric J Topol: Scripps Research Translational Institute, The Scripps Research Institute, La Jolla, CA, USA; Division of Cardiovascular Disease, Scripps Clinic-Scripps Health, La Jolla, CA, USA
18
Sims MH, Hodges Shaw M, Gilbertson S, Storch J, Halterman MW. Legal and ethical issues surrounding the use of crowdsourcing among healthcare providers. Health Informatics J 2018;25:1618-1630. doi: 10.1177/1460458218796599. PMID: 30192688.
Abstract
As the pace of medical discovery widens the knowledge-to-practice gap, technologies that enable peer-to-peer crowdsourcing have become increasingly common. Crowdsourcing has the potential to help medical providers collaborate to solve patient-specific problems in real time. We recently conducted the first trial of a mobile, medical crowdsourcing application among healthcare providers in a university hospital setting. In addition to acknowledging the benefits, our participants also raised concerns regarding the potential negative consequences of this emerging technology. In this commentary, we consider the legal and ethical implications of the major findings identified in our previous trial including compliance with the Health Insurance Portability and Accountability Act, patient protections, healthcare provider liability, data collection, data retention, distracted doctoring, and multi-directional anonymous posting. We believe the commentary and recommendations raised here will provide a frame of reference for individual providers, provider groups, and institutions to explore the salient legal and ethical issues before they implement these systems into their workflow.
Affiliation(s)
- Seth Gilbertson: University at Buffalo, The State University of New York, USA
19
Househ M, Grainger R, Petersen C, Bamidis P, Merolli M. Balancing Between Privacy and Patient Needs for Health Information in the Age of Participatory Health and Social Media: A Scoping Review. Yearb Med Inform 2018;27:29-36. doi: 10.1055/s-0038-1641197. PMID: 29681040; PMCID: PMC6115243.
Abstract
OBJECTIVES With the increased use of participatory health enabling technologies, such as social media, balancing the need for health information with patient privacy and confidentiality has become a more complex and immediate concern. The purpose of this paper produced by the members of the IMIA Participatory Health and Social Media (PHSM) working group is to investigate patient needs for health information using participatory health enabling technologies, while balancing their needs for privacy and confidentiality. METHODS Six domain areas including media sharing platforms, patient portals, web-based platforms, crowdsourcing websites, medical avatars, and other mobile health technologies were identified by five members of the IMIA PHSM working group as relevant to participatory health and the balance between data sharing and patient needs for privacy and confidentiality. After identifying the relevant domain areas, our scoping review began by searching several databases such as PubMed, MEDLINE, Scopus, and Google Scholar using a variety of key search terms. RESULTS A total of 1,973 studies were identified, of which 68 studies met our inclusion criteria and were included in the analysis. Results showed that challenges for balancing patient needs for information and privacy and confidentiality concerns included: cross-cultural understanding, clinician and patient awareness, de-identification of data, and commercialization of patient data. Some opportunities identified were patient empowerment, connecting participatory health enabling technologies with clinical records, open data sharing agreement, and e-consent. CONCLUSION Balancing between privacy and patient needs for health information in the age of participatory health and social media offers several opportunities and challenges. More people are engaging in actively managing health through participatory health enabling technologies. Such activity often includes sharing health information and with this comes a perennial tension between balancing individual needs and the desire to uphold privacy and confidentiality. We recommend that guidelines for both patients and clinicians, in terms of their use of participatory health-enabling technologies, are developed to ensure that patient privacy and confidentiality are protected, and a maximum benefit can be realized.
Affiliation(s)
- Mowafa Househ: Department of Health Informatics, College of Public Health and Health Informatics, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Rebecca Grainger: Rehabilitation Teaching and Research Unit (RTRU), University of Otago, Wellington, New Zealand
- Carolyn Petersen: Global Business Solutions, Mayo Clinic, Rochester, Minnesota, United States
- Panagiotis Bamidis: Lab of Medical Physics, Medical School, Aristotle University, Thessaloniki, Greece; Leeds Institute of Medical Education, University of Leeds, Leeds, United Kingdom
- Mark Merolli: School of Health Science, Swinburne University of Technology, Melbourne, Australia
20
Créquit P, Mansouri G, Benchoufi M, Vivot A, Ravaud P. Mapping of Crowdsourcing in Health: Systematic Review. J Med Internet Res 2018;20:e187. doi: 10.2196/jmir.9330. PMID: 29764795; PMCID: PMC5974463.
Abstract
Background Crowdsourcing involves obtaining ideas, needed services, or content by soliciting Web-based contributions from a crowd. The 4 types of crowdsourced tasks (problem solving, data processing, surveillance or monitoring, and surveying) can be applied in the 3 categories of health (promotion, research, and care). Objective This study aimed to map the different applications of crowdsourcing in health to assess the fields of health that are using crowdsourcing and the crowdsourced tasks used. We also describe the logistics of crowdsourcing and the characteristics of crowd workers. Methods MEDLINE, EMBASE, and ClinicalTrials.gov were searched for available reports from inception to March 30, 2016, with no restriction on language or publication status. Results We identified 202 relevant studies that used crowdsourcing, including 9 randomized controlled trials, of which only one had posted results at ClinicalTrials.gov. Crowdsourcing was used in health promotion (91/202, 45.0%), research (73/202, 36.1%), and care (38/202, 18.8%). The 4 most frequent areas of application were public health (67/202, 33.2%), psychiatry (32/202, 15.8%), surgery (22/202, 10.9%), and oncology (14/202, 6.9%). Half of the reports (99/202, 49.0%) referred to data processing, 34.6% (70/202) referred to surveying, 10.4% (21/202) referred to surveillance or monitoring, and 5.9% (12/202) referred to problem-solving. Labor market platforms (eg, Amazon Mechanical Turk) were used in most studies (190/202, 94%). The crowd workers’ characteristics were poorly reported, and crowdsourcing logistics were missing from two-thirds of the reports. When reported, the median size of the crowd was 424 (first and third quartiles: 167-802); crowd workers’ median age was 34 years (32-36). Crowd workers were mainly recruited nationally, particularly in the United States. For many studies (58.9%, 119/202), previous experience in crowdsourcing was required, and passing a qualification test or training was seldom needed (11.9% of studies; 24/202). For half of the studies, monetary incentives were mentioned, with mainly less than US $1 to perform the task. The time needed to perform the task was mostly less than 10 min (58.9% of studies; 119/202). Data quality validation was used in 54/202 studies (26.7%), mainly by attention check questions or by replicating the task with several crowd workers. Conclusions The use of crowdsourcing, which allows access to a large pool of participants as well as saving time in data collection, lowering costs, and speeding up innovations, is increasing in health promotion, research, and care. However, the description of crowdsourcing logistics and crowd workers’ characteristics is frequently missing in study reports and needs to be precisely reported to better interpret the study findings and replicate them.
Affiliation(s)
- Perrine Créquit: INSERM UMR1153, Methods Team, Epidemiology and Statistics Sorbonne Paris Cité Research Center, Paris Descartes University, Paris, France; Centre d'Epidémiologie Clinique, Hôpital Hôtel Dieu, Assistance Publique des Hôpitaux de Paris, Paris, France; Cochrane France, Paris, France
- Ghizlène Mansouri: INSERM UMR1153, Methods Team, Epidemiology and Statistics Sorbonne Paris Cité Research Center, Paris Descartes University, Paris, France
- Mehdi Benchoufi: Centre d'Epidémiologie Clinique, Hôpital Hôtel Dieu, Assistance Publique des Hôpitaux de Paris, Paris, France
- Alexandre Vivot: INSERM UMR1153, Methods Team, Epidemiology and Statistics Sorbonne Paris Cité Research Center, Paris Descartes University, Paris, France; Centre d'Epidémiologie Clinique, Hôpital Hôtel Dieu, Assistance Publique des Hôpitaux de Paris, Paris, France
- Philippe Ravaud: INSERM UMR1153, Methods Team, Epidemiology and Statistics Sorbonne Paris Cité Research Center, Paris Descartes University, Paris, France; Centre d'Epidémiologie Clinique, Hôpital Hôtel Dieu, Assistance Publique des Hôpitaux de Paris, Paris, France; Cochrane France, Paris, France; Department of Epidemiology, Columbia University Mailman School of Public Health, New York, NY, United States
21
Reynolds TL, Ali N, McGregor E, O'Brien T, Longhurst C, Rosenberg AL, Rudkin SE, Zheng K. Understanding Patient Questions about their Medical Records in an Online Health Forum: Opportunity for Patient Portal Design. AMIA Annu Symp Proc 2018;2017:1468-1477. PMID: 29854216; PMCID: PMC5977702.
Abstract
There are many benefits of online patient access to their medical records through technologies such as patient portals. However, patients often have difficulties understanding the clinical data presented in portals. In response, increasingly, patients go online to make sense of this data. One commonly used online resource is health forums. In this pilot study, we focus on one type of clinical data, laboratory results, and one popular forum, MedHelp. We examined patient question posts that contain laboratory results to gain insights into the nature of these questions and of the answers. Our analyses revealed a typology of confusion (i.e., topics of their questions) and potential gaps in traditional healthcare supports (i.e., patients' requests and situational factors), as well as the supports patients may gain through the forum (i.e., what the community provides). These results offer preliminary evidence of opportunities to redesign patient portals, and will inform our future work.
Collapse
Affiliation(s)
| | - Nida Ali
- University of Michigan, Ann Arbor, MI
| | - Kai Zheng
- University of California, Irvine, CA
| |
Collapse
|
22
|
Dainty KN, Vaid H, Brooks SC. North American Public Opinion Survey on the Acceptability of Crowdsourcing Basic Life Support for Out-of-Hospital Cardiac Arrest With the PulsePoint Mobile Phone App. JMIR Mhealth Uhealth 2017; 5:e63. [PMID: 28526668 PMCID: PMC5451638 DOI: 10.2196/mhealth.6926] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2016] [Revised: 01/27/2017] [Accepted: 02/22/2017] [Indexed: 11/13/2022] Open
Abstract
Background: The PulsePoint Respond app is a novel system that can be implemented in emergency dispatch centers to crowdsource basic life support (BLS) for patients with cardiac arrest and to facilitate bystander cardiopulmonary resuscitation (CPR) and automated external defibrillator use while first responders are en route.
Objective: The aim of this study was to conduct a North American survey evaluating public perception of this strategy, including its acceptability and willingness to respond to alerts.
Methods: We designed a Web-based survey administered by IPSOS Reid, an established external polling vendor. Sampling was designed to ensure broad representation using recent census statistics.
Results: A total of 2415 survey responses were analyzed (1106 from Canada and 1309 from the United States). Overall, 98.37% (1088/1106) of Canadians and 96.18% (1259/1309) of Americans had no objections to PulsePoint being implemented in their community, and 84.27% (932/1106) of Canadians and 55.61% (728/1309) of Americans said they would download the app to become a potential responder to cardiac arrest. Among Canadians, those who said they were likely to download PulsePoint were also more likely to have ever had CPR training (OR 1.7, 95% CI 1.2-2.4; P=.002); this was not true of American respondents (OR 1.0, 95% CI 0.79-1.3; P=.88). When asked to imagine themselves as a cardiac arrest victim, 95.39% (1055/1106) of Canadians and 92.44% (1210/1309) of Americans had no objections to receiving crowdsourced help in a public setting, and 88.79% (982/1106) of Canadians and 84.87% (1111/1309) of Americans had no objections to receiving help in a private setting. The most common concern about PulsePoint implementation was a responder's lack of ability, training, or access to proper equipment in a public setting.
Conclusions: The North American public finds the concept of crowdsourcing BLS for out-of-hospital cardiac arrest acceptable. Respondents were willing to respond to PulsePoint CPR notifications and to accept help from others alerted by the app if they themselves suffered a cardiac arrest.
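The odds ratios reported above (eg, OR 1.7, 95% CI 1.2-2.4 for Canadians with prior CPR training) are, in principle, computable from a 2x2 table of counts. The sketch below shows that calculation with a Wald confidence interval; the counts are invented for illustration, not the study's data, and the study may have used a different estimation method:

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed with outcome       b = exposed without outcome
        c = unexposed with outcome     d = unexposed without outcome
    """
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts: CPR-trained vs untrained respondents,
# cross-tabulated by whether they said they would download the app.
or_, lo, hi = odds_ratio_with_ci(a=520, b=240, c=300, d=240)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```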
Collapse
Affiliation(s)
- Katie N Dainty
- Rescu, Li Ka Shing Knowledge Institute, St Michael's Hospital, Toronto, ON, Canada
- Institute of Health Policy Management and Evaluation, University of Toronto, Toronto, ON, Canada
| | - Haris Vaid
- School of Medicine, Queen's University, Kingston, ON, Canada
| | - Steven C Brooks
- Department of Emergency Medicine, Queen's University, Kingston, ON, Canada
| |
Collapse
|
23
|
Cocos A, Qian T, Callison-Burch C, Masino AJ. Crowd control: Effectively utilizing unscreened crowd workers for biomedical data annotation. J Biomed Inform 2017; 69:86-92. [DOI: 10.1016/j.jbi.2017.04.003] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2016] [Revised: 03/27/2017] [Accepted: 04/02/2017] [Indexed: 01/17/2023]
|
24
|
Liebeskind DS. Crowdsourcing Precision Cerebrovascular Health: Imaging and Cloud Seeding A Million Brains Initiative™. Front Med (Lausanne) 2016; 3:62. [PMID: 27921034 PMCID: PMC5118427 DOI: 10.3389/fmed.2016.00062] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2016] [Accepted: 11/10/2016] [Indexed: 11/13/2022] Open
Abstract
Crowdsourcing, an unorthodox approach in medicine, creates an unusual paradigm for studying precision cerebrovascular health, overcoming the relative isolation and lack of standardization of current imaging data infrastructure while shifting emphasis to the capacity of big data in the cloud. This perspective envisions using imaging data of the brain and vessels to orient and seed A Million Brains Initiative™, which may leapfrog incremental advances in stroke and rapidly provide useful data to the sizable population around the globe prone to the devastating effects of stroke and the vascular substrates of dementia. Despite variability in the types of data available and other limitations, the data hierarchy logically starts with imaging and can be enriched with almost endless types and amounts of other clinical and biological data. Crowdsourcing allows individuals to contribute to aggregated data on a population while preserving their right to specific information about their own brain health. The cloud now offers vast storage, computing power, and neuroimaging applications for postprocessing that are searchable and scalable. Collective expertise is a windfall of the crowd in the cloud and is particularly valuable in an area such as cerebrovascular health. The rise of precision medicine, the rapidly evolving capabilities of cloud computing, and the global imperative to limit the public health impact of cerebrovascular disease converge in the imaging of A Million Brains Initiative™. Crowdsourcing secure data on brain health may provide ultimate generalizability, enable focused analyses, facilitate clinical practice, and accelerate research efforts.
Collapse
Affiliation(s)
- David S Liebeskind
- Department of Neurology, Neurovascular Imaging Research Core and UCLA Stroke Center, University of California Los Angeles, Los Angeles, CA, USA
| |
Collapse
|
25
|
Juusola JL, Quisel TR, Foschini L, Ladapo JA. The Impact of an Online Crowdsourcing Diagnostic Tool on Health Care Utilization: A Case Study Using a Novel Approach to Retrospective Claims Analysis. J Med Internet Res 2016; 18:e127. [PMID: 27251384 PMCID: PMC4909973 DOI: 10.2196/jmir.5644] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2016] [Revised: 04/07/2016] [Accepted: 04/24/2016] [Indexed: 11/13/2022] Open
Abstract
Background: Patients with difficult medical cases often remain undiagnosed despite visiting multiple physicians. A new online platform, CrowdMed, uses crowdsourcing to quickly and efficiently reach an accurate diagnosis for these patients.
Objective: This study sought to evaluate whether CrowdMed decreased health care utilization for patients who have used the service.
Methods: Novel, electronic methods of patient recruitment and data collection were used. Patients who completed cases on CrowdMed's platform between July 2014 and April 2015 were recruited for the study via email and screened via an online survey. After providing eConsent, participants provided identifying information used to access their medical claims data, which were retrieved through a third-party web application programming interface (API). Utilization metrics, including the frequency of provider visits and medical charges, were compared pre- and post-case resolution to assess the impact of resolving a case on CrowdMed.
Results: Of 45 CrowdMed users who completed the study survey, comprehensive claims data were available via the API for 13 participants, who made up the final enrolled sample. A total of 221 health care provider visits were collected for the study participants, with service dates ranging from September 2013 to July 2015. The frequency of provider visits was significantly lower after resolution of a case on CrowdMed (mean of 1.07 visits per month pre-resolution vs. 0.65 visits per month post-resolution, P=.01). Medical charges were also significantly lower after case resolution (mean of US $719.70 per month pre-resolution vs. US $516.79 per month post-resolution, P=.03). There was no significant relationship between study results and disease onset date, and there was no evidence that regression to the mean influenced the results.
Conclusions: This study employed technology-enabled methods to demonstrate that patients who used CrowdMed had lower health care utilization after case resolution. However, because the final sample size was limited, the results should be interpreted as a case study. Despite this limitation, the statistically significant results suggest that online crowdsourcing shows promise as an efficient method of solving difficult medical cases.
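The pre/post comparison above (eg, 1.07 vs. 0.65 visits per month, P=.01) is, in essence, a paired comparison of each participant's monthly utilization before and after case resolution. The abstract does not state which statistical test was used; the sketch below assumes a paired t-test on per-participant monthly visit rates, with invented values rather than the study's data:

```python
from scipy import stats

# Hypothetical per-participant monthly provider-visit rates (n = 13),
# before and after case resolution; illustrative values only.
pre = [1.3, 0.9, 1.5, 1.1, 0.8, 1.2, 1.0, 1.4, 0.7, 1.1, 0.9, 1.3, 1.0]
post = [0.8, 0.5, 0.9, 0.7, 0.6, 0.7, 0.5, 0.9, 0.4, 0.6, 0.5, 0.8, 0.6]

# Paired t-test compares each participant with themselves across periods.
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"mean pre = {sum(pre) / len(pre):.2f} visits/month, "
      f"mean post = {sum(post) / len(post):.2f} visits/month, "
      f"paired t = {t_stat:.2f}, P = {p_value:.3f}")
```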
Collapse
|