1. Whitehorn A, Lockwood C, Hu Y, Xing W, Zhu Z, Porritt K. Methodological components, structure and quality assessment tools for evidence summaries: a scoping review. JBI Evid Synth 2024:02174543-990000000-00344. PMID: 39192814. DOI: 10.11124/jbies-23-00557.
Abstract
OBJECTIVE The objective of this review was to identify and map the available information related to the definition, structure, and core methodological components of evidence summaries, as well as to identify any indicators of quality. INTRODUCTION Evidence summaries offer a practical solution to overcoming some of the barriers present in evidence-based health care, such as lack of access to evidence at the point of care, and lack of the knowledge and expertise to evaluate the quality of evidence and translate it into clinical decision-making. However, lack of transparency in reporting and inconsistencies in the methodology of evidence summary development have previously been cited and pose problems for end-users (eg, clinicians, policymakers). INCLUSION CRITERIA Any English-language resource that described the methodological development or appraisal of an evidence summary was included. METHODS PubMed, Embase, and CINAHL (EBSCOhost) were systematically searched in November 2019, with no limits on the search. The search was updated in June 2021 and January 2023. Gray literature searches and pearling of references of included sources were conducted at the same time as the database searches. All resources (ie, articles, papers, books, dissertations, reports, and websites) were eligible for inclusion if they evaluated or described the development or appraisal of an evidence summary methodology within a point-of-care context and were published in English. Literature reviews (eg, systematic reviews, rapid reviews), including summaries of evidence on interventions or health care activities that measure effects or a phenomenon of interest, or where the objective was the development, description, or evaluation of methods without a clear point-of-care target, were excluded. RESULTS A total of 76 resources (n=56 articles from databases and n=20 reports from gray literature sources) were included in the review.
The most common types/names were critically appraised topic (n=18) and evidence summary (n=17). A total of 25 resources provided a definition of an evidence summary; commonalities included a clinical question; a structured, systematic literature search; a description of literature selection; and appraisal of evidence. Of these 25, 16 included descriptors such as brief, concise, rapid, short, succinct, and snapshot. The reported methodological components closely reflected the definition results, the most frequently reported being a systematic, multi-database search and critical appraisal. Evidence summary examples were mostly presented as narrative summaries and usually included a reference list, background or clinical context, and recommendations or implications for practice or policy. Four quality assessment tools and a systematic review of tools were included. CONCLUSIONS The findings highlight the wide variability in the definition, language, methodological components, and structure used for point-of-care resources that met our definition of an evidence summary. This scoping review is one of the first steps toward improving the credibility and transparency of evidence summaries in evidence-based health care; further research is required to standardize the definitions and methodologies associated with point-of-care resources and to establish accepted tools for quality assessment. SUPPLEMENTAL DIGITAL CONTENT A Chinese-language version of the abstract of this review is available at http://links.lww.com/SRX/A59; studies ineligible following full-text review: http://links.lww.com/SRX/A60.
Affiliation(s)
- Ashley Whitehorn
- JBI, School of Public Health, Faculty of Health Sciences, University of Adelaide, Adelaide, SA, Australia
- Craig Lockwood
- JBI, School of Public Health, Faculty of Health Sciences, University of Adelaide, Adelaide, SA, Australia
- Yan Hu
- Fudan University Centre for Evidence-based Nursing: A JBI Centre of Excellence, Shanghai, China
- School of Nursing, Fudan University, Shanghai, China
- Weijie Xing
- Fudan University Centre for Evidence-based Nursing: A JBI Centre of Excellence, Shanghai, China
- School of Nursing, Fudan University, Shanghai, China
- Zheng Zhu
- Fudan University Centre for Evidence-based Nursing: A JBI Centre of Excellence, Shanghai, China
- School of Nursing, Fudan University, Shanghai, China
- Kylie Porritt
- JBI, School of Public Health, Faculty of Health Sciences, University of Adelaide, Adelaide, SA, Australia
2. Wasserman RL, Seger DL, Amato MG, Co Z, Mugal A, Rui A, Garabedian PM, Marceau M, Syrowatka A, Volk LA, Bates DW. Straight to the point: evaluation of a Point of Care Information (POCI) resource in answering disease-related questions. J Med Libr Assoc 2024; 112:13-21. PMID: 38911524. PMCID: PMC11189136. DOI: 10.5195/jmla.2024.1770.
Abstract
Objective To evaluate the ability of DynaMedex, an evidence-based drug and disease Point of Care Information (POCI) resource, to answer clinical queries using keyword searches. Methods Real-world disease-related questions compiled from clinicians at an academic medical center, DynaMedex search query data, and medical board review resources were categorized into five clinical categories (complications & prognosis, diagnosis & clinical presentation, epidemiology, prevention & screening/monitoring, and treatment) and six specialties (cardiology, endocrinology, hematology-oncology, infectious disease, internal medicine, and neurology). A total of 265 disease-related questions were evaluated by pharmacist reviewers based on whether an answer was found (yes, no), whether the answer was relevant (yes, no), difficulty in finding the answer (easy, not easy), whether the best available evidence was cited (yes, no), whether clinical practice guidelines were included (yes, no), and the level of detail provided (detailed, limited details). Results An answer was found for 259/265 questions (98%). Both reviewers found an answer for 241 questions (91%), neither found an answer for 6 questions (2%), and only one reviewer found an answer for 18 questions (7%). When an answer was found, both reviewers found it relevant 97% of the time. Of all relevant answers found, 68% were easy to find, 97% cited the best quality of evidence available, 72% included clinical guidelines, and 95% were detailed. Recommendations for areas of resource improvement were identified. Conclusions The resource enabled reviewers to answer most questions easily with the best quality of evidence available, providing detailed answers and clinical guidelines, with a high level of replication of results across users.
Affiliation(s)
- Rachel L Wasserman
- Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA and Massachusetts College of Pharmacy and Health Sciences (MCPHS), Boston, MA
- Diane L Seger
- Clinical and Quality Analysis, Mass General Brigham, Somerville, MA
- Mary G Amato
- Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA
- Zoe Co
- Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA
- Aqsa Mugal
- Clinical and Quality Analysis, Mass General Brigham, Somerville, MA
- Angela Rui
- Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA
- Marlika Marceau
- Clinical and Quality Analysis, Mass General Brigham, Somerville, MA
- Ania Syrowatka
- Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA and Harvard Medical School, Boston, MA
- Lynn A Volk
- Clinical and Quality Analysis, Mass General Brigham, Somerville, MA
- David W Bates
- Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA, Clinical and Quality Analysis, Mass General Brigham, Somerville, MA and Harvard Medical School, Boston, MA
3. Bayani A, Ayotte A, Nikiema JN. Automated Credibility Assessment of Web-Based Health Information Considering Health on the Net Foundation Code of Conduct (HONcode): Model Development and Validation Study. JMIR Form Res 2023; 7:e52995. PMID: 38133919. PMCID: PMC10770789. DOI: 10.2196/52995.
Abstract
BACKGROUND An increasing number of users are turning to web-based sources for health care guidance. Thus, trustworthy sources of information should be automatically identifiable using objective criteria. OBJECTIVE The purpose of this study was to automate the assessment of the Health on the Net Foundation Code of Conduct (HONcode) criteria, enhancing our ability to pinpoint trustworthy health information sources. METHODS A data set of 538 web pages displaying health content was collected from 43 health-related websites. HONcode criteria were considered at both the website and web page levels. For the website-level criteria (confidentiality, transparency, financial disclosure, and advertising policy), a bag of keywords was identified and used to assess the criteria with a rule-based model. For the web page-level criteria (authority, complementarity, justifiability, and attribution), several machine learning (ML) approaches were used. In total, 200 web pages were manually annotated until a balanced representation in terms of frequency was achieved. Three ML models were trained on the initial annotated data: random forest, support vector machines (SVM), and Bidirectional Encoder Representations from Transformers (BERT). A second training step was implemented for the complementarity criterion, using the BERT model for multiclass classification of the complementarity sentences obtained by annotation and data augmentation (positive, negative, and noncommittal sentences). Finally, the remaining web pages were classified using the selected model, and 100 sentences were randomly selected for manual review. RESULTS For the web page-level criteria, the random forest model performed well for the attribution criterion but displayed subpar performance on the others. BERT and SVM had stable performance across all the criteria.
BERT had the better area under the curve (AUC): 0.96, 0.98, and 1.00 for neutral sentences, justifiability, and attribution, respectively. SVM had the best overall performance for the classification of complementarity, with an AUC of 0.98. Finally, SVM and BERT had an equal AUC of 0.98 for the authority criterion. For the website-level criteria, the rule-based model retrieved web pages with an accuracy of 0.97 for confidentiality, 0.82 for transparency, and 0.51 for both financial disclosure and advertising policy. The final evaluation of the sentences yielded a precision of 0.88, and the reviewers' agreement level was computed at 0.82. CONCLUSIONS Our results show the potential of automating HONcode criteria assessment using ML approaches. This approach could be used with different types of pretrained models to accelerate text annotation and classification and to improve performance in low-resource cases. Further work is needed to determine how to assign different weights to the criteria, as well as to identify additional characteristics that should be considered for consolidating these criteria into a comprehensive reliability score.
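The website-level, rule-based step described in this abstract lends itself to a compact sketch. The keyword lists, function name, and any-match decision rule below are illustrative assumptions, not the study's actual implementation:

```python
# Hypothetical bag-of-keywords, rule-based check for website-level HONcode
# criteria. The keyword lists and the any-match rule are illustrative only;
# the study's actual keyword sets are not reproduced here.
HONCODE_KEYWORDS = {
    "confidentiality": ["privacy policy", "confidentiality", "personal data"],
    "transparency": ["contact us", "about us", "webmaster"],
    "financial_disclosure": ["funding", "sponsor", "financial support"],
    "advertising_policy": ["advertising policy", "advertisement"],
}

def assess_website(page_text: str) -> dict:
    """Flag each criterion as met if any of its keywords occurs in the text."""
    lowered = page_text.lower()
    return {
        criterion: any(keyword in lowered for keyword in keywords)
        for criterion, keywords in HONCODE_KEYWORDS.items()
    }

page = "Read our Privacy Policy and Advertising Policy. Contact us for details."
flags = assess_website(page)
```

A real assessor would crawl multiple pages per site and tune the keyword bags against annotated data, as the study did with 538 pages from 43 sites.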
Affiliation(s)
- Azadeh Bayani
- Centre de recherche en santé publique, Université de Montréal et Centre intégré universitaire de santé et de services sociaux du Centre-Sud-de-l'Île-de-Montréal, Montréal, QC, Canada
- Laboratoire Transformation Numérique en Santé, Montreal, QC, Canada
- Alexandre Ayotte
- Centre de recherche en santé publique, Université de Montréal et Centre intégré universitaire de santé et de services sociaux du Centre-Sud-de-l'Île-de-Montréal, Montréal, QC, Canada
- Laboratoire Transformation Numérique en Santé, Montreal, QC, Canada
- Jean Noel Nikiema
- Centre de recherche en santé publique, Université de Montréal et Centre intégré universitaire de santé et de services sociaux du Centre-Sud-de-l'Île-de-Montréal, Montréal, QC, Canada
- Laboratoire Transformation Numérique en Santé, Montreal, QC, Canada
- Department of Management, Evaluation and Health Policy, School of Public Health, Université de Montréal, Montréal, QC, Canada
4. Lee M, Lin X, Chai JZQ, Lee ES, Smith H, Tudor Car L. Smartphone apps for point-of-care information summaries: systematic assessment of the quality and content. BMJ Evid Based Med 2023; 28:320-327. PMID: 36922021. DOI: 10.1136/bmjebm-2022-112146.
Abstract
BACKGROUND Clinicians need easy access to evidence-based information to inform their clinical practice. Point-of-care information summaries are increasingly available in the form of smartphone apps. However, the quality of information from these apps is questionable, as there is currently no regulation of the content of medical apps. OBJECTIVES This study aimed to systematically assess the quality and content of medical apps providing point-of-care information summaries that were available in two major app stores. We evaluated apps designed specifically for healthcare professionals and assessed their content development, editorial policy, coverage of medical conditions, and trustworthiness. METHODS We conducted a systematic assessment of medical apps providing point-of-care information summaries available in the Google Play and Apple app stores. Apps launched or updated since January 2020 were identified through a systematic search using 42matters. Apps meeting the inclusion criteria were downloaded and assessed. Data extraction and app assessment were done in parallel and independently by at least two reviewers. Apps were evaluated against the following adapted criteria: (1) general characteristics, (2) content presentation of the summaries, (3) editorial quality, (4) evidence-based methodology, (5) coverage (volume) of medical conditions, (6) usability of the apps, and (7) trustworthiness of the apps based on the HONcode principles, which are guidelines used to inform users about the credibility and reliability of health information online. The results were reported as a narrative review. RESULTS Eight medical apps met the inclusion criteria and were systematically appraised. Based on our evaluation criteria, UpToDate supported 16 languages; all the other apps were available in English only.
Bullet points and brief paragraphs were used in all apps, and only DynaMed and Micromedex and Pathway-medical knowledge provided a formal grading system for the strength of recommendations for all the medical conditions in their apps; the other apps either lacked a formal grading system altogether or offered one for only some medical conditions. About 30% of the editorial quality items and 47.5% of the evidence-based methodology items were unclear or missing. UpToDate contained the most point-of-care evidence-based documents, with >10 500 documents. All apps except 5-Minute Clinical Consult and DynaMed and Micromedex were available for offline access. Only Medscape complied with the HONcode principles. CONCLUSIONS Future apps should report a more detailed evidence-based methodology, be accessible for offline use, and support search in more than one language. Clearer information should also be provided regarding declaration of authorship and conflicts of interest.
Affiliation(s)
- Mauricette Lee
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Xiaowen Lin
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Joanne Zhi Qi Chai
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Eng Sing Lee
- Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Helen Smith
- Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Lorainne Tudor Car
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, UK
5. Meade MJ, Dreyer CW. A Content Analysis of Orthodontic Treatment Information Contained within the Websites of General Dental Practices. Journal of Consumer Health on the Internet 2022. DOI: 10.1080/15398285.2022.2124494.
Affiliation(s)
- Maurice J. Meade
- Orthodontic Unit, Adelaide Dental School, The University of Adelaide, Adelaide, Australia
- Craig W. Dreyer
- Orthodontic Unit, Adelaide Dental School, The University of Adelaide, Adelaide, Australia
6. Lenaerts G, Bekkering GE, Goossens M, De Coninck L, Delvaux N, Cordyn S, Adriaenssens J, Aertgeerts B, Vankrunkelsven P. A Tool to Assess the Trustworthiness of Evidence-Based Point-of-Care Information for Health Care Professionals (CAPOCI): Design and Validation Study. J Med Internet Res 2021; 23:e27174. PMID: 34609314. PMCID: PMC8527381. DOI: 10.2196/27174.
Abstract
BACKGROUND User-friendly information at the point of care for health care professionals should be well structured, rapidly accessible, comprehensive, and trustworthy. The reliability of the information and the associated methodological process must be clear. There is no standard tool to evaluate the trustworthiness of such point-of-care (POC) information. OBJECTIVE We aimed to develop and validate a new tool for assessing the trustworthiness of evidence-based POC resources, to enhance the quality of POC resources and facilitate evidence-based practice. METHODS We designed the Critical Appraisal of Point-of-Care Information (CAPOCI) tool based on the criteria important for assessing the trustworthiness of POC information, as reported in a previously published review. A group of health care professionals and methodologists (the authors of this paper) defined criteria for the CAPOCI tool in an iterative process of discussion and pilot testing until consensus was reached. Next, all criteria were subjected to content validation with a Delphi study. We invited an international panel of 10 experts to rate their agreement with the relevance and wording of the criteria and to give feedback. Consensus was reached when 70% of the experts agreed. When no consensus was reached, we reformulated the criteria based on the experts' comments for the next round of the Delphi study, and this process was repeated until consensus was reached for each criterion. In a final step, the interrater reliability of the CAPOCI tool was calculated with a 2-tailed Kendall tau correlation coefficient to quantify the agreement between 2 users who piloted the tool on 5 POC resources. Two scoring systems were tested: a 3-point ordinal scale and a 7-point Likert scale. RESULTS After validation, the CAPOCI tool comprised 11 criteria focused on methodological quality and author-related information.
The criteria assess authorship, literature search, use of preappraised evidence, critical appraisal of evidence, expert opinions, peer review, timeliness and updating, conflict of interest, and commercial support. Interrater analysis showed substantial agreement between the 2 users for scoring with the 3-point ordinal scale (τ=.621, P<.01) and with the 7-point Likert scale (τ=.677, P<.01). CONCLUSIONS The CAPOCI tool may support validation teams in assessing the trustworthiness of POC resources. It may also provide guidance for producers of POC resources.
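The interrater statistic above is a Kendall tau correlation between two raters' ordinal scores. A minimal pure-Python sketch using the tau-a variant with hypothetical scores (statistical packages typically report the tie-corrected tau-b for ordinal data, so this is an illustration of the idea rather than the study's exact computation):

```python
# Kendall tau-a between two raters' ordinal scores (hypothetical data).
# Tau-a divides by all pairs and ignores tie correction; tie-corrected
# tau-b is what most statistical packages report for ordinal scales.
from itertools import combinations

def kendall_tau_a(x: list, y: list) -> float:
    assert len(x) == len(y), "raters must score the same items"
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        product = (x[i] - x[j]) * (y[i] - y[j])
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
        # pairs tied in either rater contribute nothing to tau-a
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

rater1 = [3, 2, 1, 3, 2]  # hypothetical 3-point ordinal scores
rater2 = [3, 1, 1, 2, 2]
tau = kendall_tau_a(rater1, rater2)  # 0.6 for these scores
```

With real pilot data one would also report a P value, as the study does; that requires the sampling distribution of tau and is omitted from this sketch.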
Affiliation(s)
- Geertruida E Bekkering
- Belgian Centre for Evidence-Based Medicine, Leuven, Belgium
- Cochrane Belgium, Leuven, Belgium
- Leen De Coninck
- Belgian Centre for Evidence-Based Medicine, Leuven, Belgium
- Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
- Nicolas Delvaux
- Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
- Sam Cordyn
- Federation of the White and Yellow Cross of Flanders, Brussels, Belgium
- Bert Aertgeerts
- Belgian Centre for Evidence-Based Medicine, Leuven, Belgium
- Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
- Patrik Vankrunkelsven
- Belgian Centre for Evidence-Based Medicine, Leuven, Belgium
- Cochrane Belgium, Leuven, Belgium
- Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
7. Rambaud C, Fauquert B, Charbonnel P, Falcoff H, Letrilliart L. Evaluation of a guidelines website capitalizing on Finnish content and Belgium interface: A pilot study in French general practice. Health Informatics J 2021; 27:14604582211024702. PMID: 34159842. DOI: 10.1177/14604582211024702.
Abstract
No reference point-of-care, web-based medical compendium is available for general practice in France. We therefore conducted a pilot evaluation of EBMPracticeNet, a Belgian website of guidelines translated and adapted from the Finnish EBM Guidelines. We collected data from three sources: (i) the website logbook; (ii) a search-specific assessment questionnaire; (iii) a global assessment questionnaire. A cumulative total of 262 physicians (62.8%) performed at least one search on the website, clicking on average 5.9 times per month. Physicians generally obtained an accurate answer (74.2%). They found the information provided by the website reliable (92.2%) and useful for practice (78.6%), and perceived the website's ergonomics as good. The main reported barriers were the time and effort required to find an accurate answer and the uneven relevance of the information retrieved. Improvements should focus on guideline indexing and adaptation to the French context, and on training physicians to search medical databases.
Affiliation(s)
- Claire Rambaud
- Université Claude Bernard Lyon 1, France
- Collège de la Médecine Générale, France
8. Vaghela U, Rabinowicz S, Bratsos P, Martin G, Fritzilas E, Markar S, Purkayastha S, Stringer K, Singh H, Llewellyn C, Dutta D, Clarke JM, Howard M, Serban O, Kinross J. Using a Secure, Continually Updating, Web Source Processing Pipeline to Support the Real-Time Data Synthesis and Analysis of Scientific Literature: Development and Validation Study. J Med Internet Res 2021; 23:e25714. PMID: 33835932. PMCID: PMC8104004. DOI: 10.2196/25714.
Abstract
BACKGROUND The scale and quality of the global scientific response to the COVID-19 pandemic have unquestionably saved lives. However, the pandemic has also triggered an unprecedented "infodemic": the velocity and volume of data production have overwhelmed many key stakeholders, such as clinicians and policy makers, who have been unable to process structured and unstructured data for evidence-based decision making. Existing solutions to this data synthesis challenge cannot capture heterogeneous web data in real time to produce concomitant answers, and are not based on the high-quality information returned in response to a free-text query. OBJECTIVE The main objective of this project was to build a generic, real-time, continuously updating curation platform that can support the data synthesis and analysis of a scientific literature framework. Our secondary objective was to validate this platform and the curation methodology for COVID-19-related medical literature by expanding the COVID-19 Open Research Dataset with new, unstructured data. METHODS To create an infrastructure that addresses these objectives, the PanSurg Collaborative at Imperial College London developed a data pipeline based on a web crawler extraction methodology. This pipeline uses a novel curation methodology that adopts a human-in-the-loop approach for characterizing quality, relevance, and key evidence across a range of scientific literature sources. RESULTS REDASA (Realtime Data Synthesis and Analysis) is now one of the world's largest and most up-to-date sources of COVID-19-related evidence, consisting of 104,000 documents. By capturing curators' critical appraisal methodologies through the discrete labeling and rating of information, REDASA rapidly developed a foundational, pooled data set of over 1400 articles in under 2 weeks.
These articles provide COVID-19-related information and represent around 10% of all papers about COVID-19. CONCLUSIONS This data set can act as ground truth for the future implementation of a live, automated systematic review. REDASA's design has three benefits: (1) it adopts a human-in-the-loop methodology by embedding an efficient, user-friendly curation platform into a natural language processing search engine; (2) it provides a curated data set in JavaScript Object Notation (JSON) format capturing experienced academic reviewers' critical appraisal choices and decision-making methodologies; and (3) due to the wide scope and depth of its web crawling method, REDASA has already captured one of the world's largest COVID-19-related data corpora for searches and curation.
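The curated JSON data set described above can be pictured as simple per-document label records. The field names and helper below are invented for illustration and are not REDASA's actual schema:

```python
# Sketch of serializing a curator's labels and ratings for one document as
# JSON, in the spirit of the curated data set described above. All field
# names are hypothetical, not REDASA's real schema.
import json

def curation_record(doc_id: str, url: str, relevance: str,
                    quality: int, key_evidence: str) -> dict:
    return {
        "doc_id": doc_id,
        "source_url": url,
        "labels": {
            "relevance": relevance,        # e.g. "high" / "low"
            "quality": quality,            # e.g. a 1-5 rating
            "key_evidence": key_evidence,  # snippet highlighted by the curator
        },
    }

record = curation_record(
    doc_id="doc-0001",
    url="https://example.org/paper",
    relevance="high",
    quality=4,
    key_evidence="RCT, n=120, reduced mortality",
)
serialized = json.dumps(record, sort_keys=True)  # discrete labels as JSON
restored = json.loads(serialized)                # round-trips losslessly
```

Storing each appraisal as a discrete, machine-readable record is what makes downstream uses such as a live systematic review or a "ground truth" training set possible.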
Affiliation(s)
- Uddhav Vaghela
- PanSurg Collaborative, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Simon Rabinowicz
- PanSurg Collaborative, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Paris Bratsos
- PanSurg Collaborative, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Guy Martin
- PanSurg Collaborative, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Sheraz Markar
- PanSurg Collaborative, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Sanjay Purkayastha
- PanSurg Collaborative, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Jonathan M Clarke
- PanSurg Collaborative, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Ovidiu Serban
- Data Science Institute, Imperial College London, London, United Kingdom
- James Kinross
- PanSurg Collaborative, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
9. Information-seeking behaviors and barriers to the incorporation of scientific evidence into clinical practice: A survey with Brazilian dentists. PLoS One 2021; 16:e0249260. PMID: 33765079. PMCID: PMC7993878. DOI: 10.1371/journal.pone.0249260.
Abstract
The aim of this online cross-sectional study was to identify the sources of scientific information used by Brazilian dentists in clinical decision-making and the barriers that they perceive as important to the incorporation of scientific evidence into clinical practice. A pretested questionnaire created in Google Forms, made available to participants through links sent by e-mail or shared on Facebook® and Instagram®, was used to collect the data between October 2018 and May 2019. Only dentists involved in the direct or indirect care of patients (i.e. clinicians who performed dental procedures or dental educators who participated in the clinical training of graduate or postgraduate dental students) were asked to complete the questionnaire. The sample comprised 528 dentists (the response rate from the alumni database was 6.9%); their mean age was 45.2 years (±12.5) and 30.9% held an academic position. The majority were women (68.0%) and lived in Southern or Southeastern Brazil (96.0%). The sources of scientific information most frequently used in clinical decision-making were clinical guidelines (65.1%; 95% CI: 60.9, 69.2), scientific articles (56.8%; 95% CI: 52.5, 61.1), and bibliographic databases (48.3%; 95% CI: 43.9, 52.6). The least frequently used information resource was social media. The most important barriers to the clinical use of scientific evidence were difficulty in determining whether scientific content found on the internet is reliable (41.8%; 95% CI: 37.6, 46.2), the high cost of access to scientific papers (37.7%; 95% CI: 33.5, 41.9), and lack of time for reading scientific articles (32.4%; 95% CI: 28.4, 36.6). Although Brazilian dentists show a positive attitude towards obtaining scientific evidence from reliable sources, important barriers to the translation of evidence into practice remain. This can have significant implications for quality of care and should be further investigated.
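The 95% confidence intervals quoted above are standard intervals for a proportion. A minimal normal-approximation (Wald) sketch, which approximately reproduces the first reported interval for n=528 (the paper's exact interval method is not stated in the abstract and may differ):

```python
# Wald (normal-approximation) 95% CI for a proportion. This common textbook
# method only approximately matches the intervals reported above; the
# survey's exact interval method is an assumption here.
import math

def wald_ci_95(p: float, n: int) -> tuple:
    se = math.sqrt(p * (1 - p) / n)   # standard error of the proportion
    margin = 1.96 * se                # z-multiplier for 95% coverage
    return (p - margin, p + margin)

# 65.1% of the 528 respondents reported using clinical guidelines
lo, hi = wald_ci_95(0.651, 528)  # roughly (0.610, 0.692)
```

For proportions near 0 or 1 or for small samples, a Wilson or exact interval is preferred over the Wald approximation.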
10. Battineni G, Baldoni S, Chintalapudi N, Sagaro GG, Pallotta G, Nittari G, Amenta F. Factors affecting the quality and reliability of online health information. Digit Health 2020; 6:2055207620948996. PMID: 32944269. PMCID: PMC7466903. DOI: 10.1177/2055207620948996.
Abstract
Background The internet represents a relevant source of information, but the reliability of data obtained from the web is still an unsolved issue. Unreliable online information may be especially consequential in decisions related to health problems, and uncertainty about the quality of online health data may have a negative impact on citizens' health-related choices. Objective This work consisted of a cross-sectional literature review of published papers on online health information. The two main research objectives were to analyze trends in the use of health websites and to assess the quality and reliability of medical websites. Methods The literature search was conducted using four digital reference databases, namely PubMed, British Medical Journal, Biomed, and CINAHL. Search terms used were “trustworthy of medical information online,” “survey to evaluate medical information online,” “medical information online,” and “habits of web-based health information users”. The analysis included only papers published in English. The Newcastle-Ottawa Scale was used to conduct quality checks of the selected works. Results Literature analysis using the above terms yielded 212 studies, from which 24 articles in line with the study objectives and user characteristics were selected. The people most prone to use the internet for obtaining health information were women, younger people, scholars, and employees. The reliability of online health sites is a concern for the majority of people using the internet for health information, and physician assistance could help people navigate safer health websites. Conclusions Limited health information and/or web literacy can cause misunderstandings when evaluating medical data found on the web. An appropriate education plan and evaluation tools could enhance user skills and lead to a more cautious analysis of health information found on the web.
Affiliation(s)
- Gopi Battineni
- Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University of Camerino, Camerino, Italy
- Simone Baldoni
- Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University of Camerino, Camerino, Italy
- Nalini Chintalapudi
- Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University of Camerino, Camerino, Italy
- Getu Gamo Sagaro
- Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University of Camerino, Camerino, Italy
- Graziano Pallotta
- Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University of Camerino, Camerino, Italy
- Giulio Nittari
- Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University of Camerino, Camerino, Italy
- Francesco Amenta
- Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University of Camerino, Camerino, Italy
- Research Department, International Radio Medical Centre (C.I.R.M.), Rome, Italy