1. Moura-Coelho N, Papa-Vettorazzi R, Reyes A, Cunha JP, Güell JL. Ultrathin DSAEK versus DMEK - Review of systematic reviews. Eur J Ophthalmol 2024;34:913-923. PMID: 37964555. DOI: 10.1177/11206721231214605.
Abstract
The efficacy and safety of Descemet's membrane endothelial keratoplasty (DMEK) and ultrathin Descemet stripping automated endothelial keratoplasty (UT-DSAEK) have recently been compared in several systematic reviews (SRs). The aim of this study was to assess the evidence quality of these SRs in order to obtain a scientifically rigorous comparison between the two techniques. We performed a systematic review of SRs and meta-analyses comparing the efficacy and safety of UT-DSAEK and DMEK, published up to 24 March 2023, using three electronic databases (PubMed, Cochrane Library, Google Scholar) plus a manual reference search. Specific outcomes analyzed included best-corrected visual acuity (BCVA), endothelial cell density (ECD), rebubbling rate, and other postoperative complications. Of 90 titles/abstracts screened, four SRs met the inclusion criteria. All SRs adequately analyzed potential bias of the included studies. One SR raised concern for potential literature search bias, and two SRs showed heterogeneity in some of the outcomes analyzed. All SRs found higher BCVA after DMEK, but one SR reported significant heterogeneity. All SRs found significant heterogeneity in the ECD analysis, with one SR providing an inconsistent analysis of this outcome. Three SRs analyzed rebubbling rates, which favored UT-DSAEK over DMEK. Three SRs concluded that the overall complication rate was higher after DMEK, although rebubbling may be a confounding factor. This systematic review clarifies the strengths and weaknesses of the published SRs and reinforces the conclusion that DMEK leads to superior visual outcomes compared with UT-DSAEK, at the cost of higher rebubbling rates and possibly other postoperative complications. Studies with longer follow-up are needed to ascertain these differences between procedures.
Affiliation(s)
- Nuno Moura-Coelho
- Cornea and Refractive Surgery Unit, Instituto de Microcirugía Ocular (IMO) Barcelona Grupo Miranza, Barcelona, Spain
- NOVA Medical School (NMS) - Universidade Nova de Lisboa, Lisbon, Portugal
- European School for Advanced Studies in Ophthalmology (ESASO), Lugano, Switzerland
- Renato Papa-Vettorazzi
- Cornea and Refractive Surgery Unit, Instituto de Microcirugía Ocular (IMO) Barcelona Grupo Miranza, Barcelona, Spain
- Anterior Segment Unit, Clínica Visualiza Guatemala, Guatemala, Guatemala
- Alonso Reyes
- Cornea and Refractive Surgery Unit, Instituto de Microcirugía Ocular (IMO) Barcelona Grupo Miranza, Barcelona, Spain
- João Paulo Cunha
- Ophthalmology, Hospital CUF Cascais, Lisbon, Portugal
- Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), Lisbon, Portugal
- José Luis Güell
- Cornea and Refractive Surgery Unit, Instituto de Microcirugía Ocular (IMO) Barcelona Grupo Miranza, Barcelona, Spain
- European School for Advanced Studies in Ophthalmology (ESASO), Lugano, Switzerland
- Ophthalmology, Universidad Autónoma de Barcelona, Barcelona, Spain

2. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Br J Pharmacol 2024;181:180-210. PMID: 37282770. DOI: 10.1111/bph.16100.
Abstract
Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.
Affiliation(s)
- Kat Kolaski
- Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Lynne Romeiser Logan
- Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, New York, USA
- John P A Ioannidis
- Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, California, USA

3. McCann P, Kruoch Z, Lopez S, Malli S, Qureshi R, Li T. Interventions for Dry Eye: An Overview of Systematic Reviews. JAMA Ophthalmol 2024;142:58-74. PMID: 38127364. DOI: 10.1001/jamaophthalmol.2023.5751.
Abstract
Importance: Dry eye is a common ocular disease that can have substantial morbidity. Systematic reviews provide evidence for dry eye interventions and can be useful for patients, clinicians, and clinical guideline developers. Overviews of reviews use explicit and systematic methods to synthesize findings from multiple systematic reviews, but currently, there are no overviews of systematic reviews investigating interventions for dry eye.
Objective: To summarize the results of reliable systematic reviews of dry eye interventions and to highlight the evidence gaps identified.
Evidence Review: We searched the Cochrane Eyes and Vision US satellite database and included reliable systematic reviews evaluating dry eye interventions published from 2016 to 2022. We reported the proportion of systematic reviews that were reliable with reasons for unreliability. Critical and important outcomes from reliable systematic reviews were extracted and verified. Critical outcomes included dry eye-related patient-reported outcome measures. Results were synthesized from reliable systematic reviews to provide summaries of evidence for each intervention. Evidence for each intervention was defined as conclusive or inconclusive depending on whether high-certainty evidence across systematic reviews was available according to Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) criteria and whether findings reached statistical or clinical significance. Recommendations were made for further research.
Findings: Within the Cochrane Eyes and Vision US satellite database, 138 potentially relevant systematic reviews were identified, 71 were considered eligible, and 26 (37%) were assessed as reliable. Among reliable systematic reviews, no conclusive evidence was identified for any dry eye intervention. Inconclusive evidence suggested that environmental modifications, dietary modifications, artificial tears and lubricants, punctal occlusion, intense pulsed light therapy, vectored thermal pulsation therapy (Lipiflow), topical corticosteroids, topical cyclosporine A, topical secretagogues, and autologous serum may be effective. Only unreliable systematic reviews evaluated lifitegrast, oral antibiotics, and moisture chamber devices.
Conclusions and Relevance: This overview of systematic reviews found some evidence that dry eye interventions may be effective, but no conclusive evidence was available. The conduct and reporting of most systematic reviews for dry eye interventions warrant improvement, and reliable systematic reviews are needed to evaluate lifitegrast, oral antibiotics, and moisture chamber devices.
Affiliation(s)
- Paul McCann
- Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora
- Zanna Kruoch
- College of Optometry, Rocky Mountain University of Health Professions, Provo, Utah
- Sarah Lopez
- Francis I. Proctor Foundation, University of California, San Francisco
- Shreya Malli
- Department of Ophthalmology, University of California, San Francisco
- Riaz Qureshi
- Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora
- Department of Epidemiology, Colorado School of Public Health, Denver
- Tianjing Li
- Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora
- Department of Epidemiology, Colorado School of Public Health, Denver

4. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Acta Anaesthesiol Scand 2023;67:1148-1177. PMID: 37288997. DOI: 10.1111/aas.14295.

6. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Syst Rev 2023;12:96. PMID: 37291658. DOI: 10.1186/s13643-023-02255-9.

7. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. BMC Infect Dis 2023;23:383. PMID: 37286949. DOI: 10.1186/s12879-023-08304-x.

8. Kolaski K, Logan LR, Ioannidis JPA. Guidance to Best Tools and Practices for Systematic Reviews. JBJS Rev 2023;11:01874474-202306000-00009. PMID: 37285444. DOI: 10.2106/jbjs.rvw.23.00077.

9. Downie LE, Britten-Jones AC, Hogg RE, Jalbert I, Li T, Lingham G, Liu SH, Qureshi R, Saldanha IJ, Singh S, Craig JP. TFOS Lifestyle - Evidence quality report: Advancing the evaluation and synthesis of research evidence. Ocul Surf 2023;28:200-212. PMID: 37054912. PMCID: PMC11246749. DOI: 10.1016/j.jtos.2023.04.009.
Abstract
Evidence-based practice is a dominant paradigm in healthcare that emphasizes the importance of ensuring the translation of the best available, relevant research evidence into practice. An Evidence Quality Subcommittee was established to provide specialized methodological support and expertise to promote rigorous and evidence-based approaches for the Tear Film and Ocular Surface Society (TFOS) Lifestyle Epidemic reports. The present report describes the purpose, scope, and activity of the Evidence Quality Subcommittee in the undertaking of high-quality narrative-style literature reviews, and leading prospectively registered, reliable systematic reviews of high priority research questions, using standardized methods for each topic area report. Identification of predominantly low or very low certainty evidence across the eight systematic reviews highlights a need for further research to define the efficacy and/or safety of specific lifestyle interventions on the ocular surface, and to clarify relationships between certain lifestyle factors and ocular surface disease. To support the citation of reliable systematic review evidence in the narrative review sections of each report, the Evidence Quality Subcommittee curated topic-specific systematic review databases and relevant systematic reviews underwent standardized reliability assessment. Inconsistent methodological rigor was noted in the published systematic review literature, emphasizing the importance of internal validity assessment. Based on the experience of implementing the Evidence Quality Subcommittee, this report makes suggestions for incorporation of such initiatives in future international taskforces and working groups. Content areas broadly relevant to the activity of the Evidence Quality Subcommittee, including the critical appraisal of research, clinical evidence hierarchies (levels of evidence), and risk of bias assessment, are also outlined.
Affiliation(s)
- Laura E Downie
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, Victoria, Australia.
- Ruth E Hogg
- Centre for Public Health, School of Medicine, Dentistry and Biomedical Sciences, Belfast, United Kingdom
- Tianjing Li
- Department of Ophthalmology and Epidemiology, University of Colorado Anschutz Medical Campus, Aurora, CO, United States
- Gareth Lingham
- Centre for Eye Research Ireland, Technological University Dublin, Dublin, Ireland
- Su-Hsun Liu
- Department of Ophthalmology and Epidemiology, University of Colorado Anschutz Medical Campus, Aurora, CO, United States
- Riaz Qureshi
- Department of Ophthalmology and Epidemiology, University of Colorado Anschutz Medical Campus, Aurora, CO, United States
- Ian J Saldanha
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States
- Sumeer Singh
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Jennifer P Craig
- Department of Ophthalmology, New Zealand National Eye Centre, The University of Auckland, Auckland, New Zealand

10. Kolaski K, Romeiser Logan L, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. J Pediatr Rehabil Med 2023;16:241-273. PMID: 37302044. DOI: 10.3233/prm-230019.

11. Song M, Song YM. Randomized Controlled Trials of Digital Mental Health Interventions on Patients with Schizophrenia Spectrum Disorder: A Systematic Review. Telemed J E Health 2022. PMID: 36264184. DOI: 10.1089/tmj.2022.0135.
Abstract
Background: This systematic review aimed to examine the study protocols of digital mental health interventions (DMHIs) and to review the effects of DMHIs among patients with schizophrenia spectrum disorder (SSD).
Methods: This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. A systematic literature search was performed using the PubMed, CINAHL, Embase, and PsycINFO electronic databases to identify randomized clinical trials, with no limit on publication year. Overall, 18 studies were selected and assessed using the Cochrane Collaboration's Risk of Bias 2 tool; four studies judged to be at overall high risk of bias were excluded, leaving 14 studies in the final selection.
Results: No DMHIs targeted acute schizophrenia-related symptoms, although some studies addressed schizophrenia-related symptoms (26.4%). Several studies aimed to improve cognitive function (42.9%), and significant effects were reported when interventions already proven effective in face-to-face delivery were provided through various online devices and sensory stimuli. More than half of the studies reported intervention frequency and duration (57.1%); those with unclear reporting relied on a mobile app or telemedicine and were designed so that participants could self-pace the frequency and speed of the intervention.
Conclusion: These findings help characterize DMHIs, delivered without physical contact, for patients with SSD, providing a basis for digital mental health services.
Affiliation(s)
- MoonJu Song
- Division of Admission Management and Policy Development, National Center for Mental Health, Seoul, Republic of Korea
- College of Nursing, Korea University, Seoul, Republic of Korea
- Yul-Mai Song
- Department of Nursing, Honam University, Gwangju, Republic of Korea

12. McCann P, Kruoch Z, Qureshi R, Li T. Effectiveness of interventions for dry eye: a protocol for an overview of systematic reviews. BMJ Open 2022;12:e058708. PMID: 35672062. PMCID: PMC9174758. DOI: 10.1136/bmjopen-2021-058708.
Abstract
INTRODUCTION: Dry eye is a leading cause of ocular morbidity and of economic and societal burden for patients and healthcare systems. There are several treatment options available for dry eye, and high-quality systematic reviews synthesise the evidence for their effectiveness and potential harms.
METHODS AND ANALYSIS: We will search the Cochrane Eyes and Vision US satellite (CEV@US) database of eyes and vision systematic reviews for systematic reviews on interventions for dry eye. CEV@US conducted an initial search of PubMed and Embase to populate the database in 2007, which was most recently updated in August 2021. We will search the database for systematic reviews published since 1 January 2016, because systematic reviews more than 5 years old are unlikely to be up to date. We will consider Cochrane and non-Cochrane systematic reviews eligible for inclusion. Two authors will independently screen articles. We will include studies that evaluate interventions for dry eye and/or meibomian gland dysfunction, with no restriction on types of participants or review language. We will select reliable systematic reviews (ie, those meeting pre-established methodological criteria) for inclusion, assessed by one investigator and verified by a second investigator. We will extract ratings of the certainty of evidence from within each review. We will report the degree of overlap for systematic reviews that answer similar questions and include overlapping primary studies. We will present the results of the overview in alignment with guidelines in the Cochrane Handbook for Systematic Reviews of Interventions (Online Chapter 5: Overviews of Reviews), the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement, and an overview of reviews quality and transparency checklist. The anticipated start and completion dates for this overview are 1 May 2021 and 30 April 2022, respectively.
ETHICS AND DISSEMINATION: This overview will not require the approval of an ethics committee because it will use published studies. We will publish results in a peer-reviewed journal.
PROSPERO REGISTRATION NUMBER: CRD42021279880.
Affiliation(s)
- Paul McCann
- Department of Ophthalmology, University of Colorado - Anschutz Medical Campus, Aurora, Colorado, USA
- Zanna Kruoch
- Cedar Springs Eye Clinic, College of Optometry, University of Houston, Houston, Texas, USA
- Riaz Qureshi
- Department of Ophthalmology, University of Colorado - Anschutz Medical Campus, Aurora, Colorado, USA
- Tianjing Li
- Department of Ophthalmology, University of Colorado - Anschutz Medical Campus, Aurora, Colorado, USA

13. Qureshi R, Mayo-Wilson E, Rittiphairoj T, McAdams-DeMarco M, Guallar E, Li T. Harms in Systematic Reviews Paper 2: Methods used to assess harms are neglected in systematic reviews of gabapentin. J Clin Epidemiol 2022;143:212-223. PMID: 34742789. PMCID: PMC9875742. DOI: 10.1016/j.jclinepi.2021.10.024.
Abstract
OBJECTIVE: We compared methods used with current recommendations for synthesizing harms in systematic reviews and meta-analyses (SRMAs) of gabapentin.
STUDY DESIGN & SETTING: We followed recommended systematic review practices. We selected reliable SRMAs of gabapentin (i.e., those meeting a pre-defined list of methodological criteria) that assessed at least one harm. We extracted and compared methods in four areas: pre-specification, searching, analysis, and reporting. Whereas our focus in this paper is on the methods used, Part 2 examines the results for harms across reviews.
RESULTS: We screened 4320 records and identified 157 SRMAs of gabapentin, 70 of which were reliable. Most reliable reviews (51/70; 73%) reported following a general guideline for SRMA conduct or reporting, but none reported following recommendations specifically for synthesizing harms. Across all domains assessed, review methods were designed to address questions of benefit and rarely included the additional methods that are recommended for evaluating harms.
CONCLUSION: The approaches to assessing harms in the SRMAs we examined are tokenistic and unlikely to produce valid summaries of harms to guide decisions. A paradigm shift is needed. At a minimum, reviewers should describe any limitations to their assessment of harms and provide clearer descriptions of methods for synthesizing harms.
Affiliation(s)
- Riaz Qureshi
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Evan Mayo-Wilson
- Department of Epidemiology and Biostatistics, Indiana University School of Public Health, Bloomington, IN, USA
- Thanitsara Rittiphairoj
- Cochrane Eyes and Vision United States, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Mara McAdams-DeMarco
- Department of Surgery, Department of Epidemiology, Johns Hopkins School of Medicine and Bloomberg School of Public Health, Baltimore, MD, USA
- Eliseo Guallar
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Tianjing Li
- Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, CO, USA.

14. Kolaski K, Romeiser Logan L, Goss KD, Butler C. Quality appraisal of systematic reviews of interventions for children with cerebral palsy reveals critically low confidence. Dev Med Child Neurol 2021;63:1316-1326. PMID: 34091900. DOI: 10.1111/dmcn.14949.
Abstract
AIM: To evaluate the methodological quality of recent systematic reviews of interventions for children with cerebral palsy in order to determine the level of confidence in the reviews' conclusions.
METHOD: A comprehensive search of 22 databases identified eligible systematic reviews with and without meta-analysis published worldwide from 2015 to 2019. We independently extracted data and used A Measurement Tool to Assess Systematic Reviews-2 (AMSTAR-2) to appraise methodological quality.
RESULTS: Eighty-three systematic reviews met strict eligibility criteria. Most were from Europe and Latin America and reported on rehabilitative interventions. AMSTAR-2 appraisal found critically low confidence in 88% (n=73) because of multiple and varied deficiencies. Only 7% (n=6) had no AMSTAR-2 critical domain deficiency. The number of systematic reviews increased fivefold from 2015 to 2019; however, quality did not improve over time.
INTERPRETATION: Most of these systematic reviews are considered unreliable according to AMSTAR-2. Current recommendations for treating children with CP based on these flawed systematic reviews need re-evaluation. Findings are comparable to reports from other areas of medicine, despite the general perception that systematic reviews are high-level evidence. The required use of current widely accepted guidance for conducting and reporting systematic reviews by authors, peer reviewers, and editors is critical to ensure reliable, unbiased, and transparent systematic reviews.
What this paper adds:
- Confidence was critically low in the conclusions of 88% of systematic reviews about interventions for children with cerebral palsy (CP).
- Quality issues in the sample were not limited to systematic reviews of non-randomized trials, or to those about certain populations of CP or interventions.
- The inclusion of meta-analysis did not improve the level of confidence in these systematic reviews.
- Numbers of systematic reviews on this topic increased over the 5 search years but their methodological quality did not improve.
Affiliation(s)
- Kat Kolaski
- Department of Orthopedics, Wake Forest University, Winston-Salem, NC, USA
- Department of Pediatrics, Wake Forest University, Winston-Salem, NC, USA
- Lynne Romeiser Logan
- Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY, USA
- Katherine D Goss
- Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY, USA

15. Qureshi R, Azuara-Blanco A, Michelessi M, Virgili G, Barbosa Breda J, Cutolo CA, Pazos M, Katsanos A, Garhöfer G, Kolko M, Prokosch-Willing V, Al Rajhi AA, Lum F, Musch D, Gedde S, Li T. What Do We Really Know about the Effectiveness of Glaucoma Interventions? An Overview of Systematic Reviews. Ophthalmol Glaucoma 2021;4:454-462. PMID: 33571689. PMCID: PMC8349936. DOI: 10.1016/j.ogla.2021.01.007.
Abstract
PURPOSE: To identify systematic reviews of interventions for glaucoma conditions and to assess their reliability, thereby generating a list of potentially reliable reviews for updating glaucoma practice guidelines.
DESIGN: Cross-sectional study.
PARTICIPANTS: Systematic reviews of interventions for glaucoma conditions.
METHODS: We used a database of systematic reviews and meta-analyses in vision research and eye care maintained by the Cochrane Eyes and Vision United States Satellite. We examined all Cochrane systematic reviews of interventions for glaucoma conditions published before August 7, 2019, and all non-Cochrane systematic reviews of interventions for glaucoma conditions published between January 1, 2014, and August 7, 2019.
MAIN OUTCOME MEASURES: We assessed eligible reviews for reliability, extracted characteristics, and summarized key findings from reviews classified as reliable.
RESULTS: Of the 4451 systematic reviews in eyes and vision identified, 129 met our eligibility criteria and were assessed for reliability. Of these, we classified 49 (38%) as reliable. We found open-angle glaucoma (22/49) to be the condition with the most reviews and medical management (17/49) and intraocular pressure (IOP; 43/49) to be the most common interventions and outcomes studied. Most reviews found a high degree of uncertainty in the evidence, which hinders the possibility of making strong recommendations in guidelines. These reviews found high-certainty evidence about a few topics: reducing IOP helps to prevent glaucoma and its progression, prostaglandin analogs are the most effective medical treatment for lowering IOP, laser trabeculoplasty is as effective as medical treatment as a first-line therapy in controlling IOP, the use of IOP-lowering medications in the perioperative or postoperative periods to accompany laser (e.g., trabeculoplasty) reduces the risk of postoperative IOP spikes, conventional surgery (i.e., trabeculectomy) is more effective than medications in reducing IOP, and antimetabolites and β-radiation improve IOP control after trabeculectomy. The evidence is weak regarding the effectiveness of minimally invasive glaucoma surgeries.
CONCLUSIONS: Most systematic reviews evaluating interventions for glaucoma are of poor reliability. Even among those that may be considered reliable, important limitations exist in the value of information because of the uncertainty of the evidence as well as small and sometimes unimportant clinical differences between interventions.
Affiliation(s)
- Riaz Qureshi
- Department of Epidemiology, Johns Hopkins University, Baltimore, Maryland
- Augusto Azuara-Blanco
- School of Medicine, Dentistry and Biomedical Sciences, Centre for Public Health, Queen's University Belfast, Belfast, United Kingdom
- Gianni Virgili
- Department of Neurosciences, Psychology Drug Research and Child Health (NEUROFARBA), University of Florence, Florence, Italy
- João Barbosa Breda
- Cardiovascular R&D Center, Faculty of Medicine, University of Porto, Porto, Portugal; and Research Group Ophthalmology, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Carlo Alberto Cutolo
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics and Maternal and Child Sciences, University of Genoa and IRCCS San Martino Policlinic Hospital, Genova, Italy
- Marta Pazos
- Department of Ophthalmology, Hospital Clínic of Barcelona, Barcelona, Spain
- Andreas Katsanos
- Department of Ophthalmology, University of Ioannina, Ioannina, Greece
- Gerhard Garhöfer
- Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria
- Miriam Kolko
- Department of Ophthalmology, Copenhagen University Hospital, Rigshospitalet-Glostrup, Glostrup, and Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Flora Lum
- American Academy of Ophthalmology, San Francisco, California
- David Musch
- Departments of Ophthalmology and Visual Sciences and of Epidemiology, University of Michigan, Ann Arbor, Michigan
- Tianjing Li
- Department of Ophthalmology, School of Medicine, University of Colorado Denver, Aurora, Colorado.

16. Le JT, Qureshi R, Twose C, Rosman L, Han G, Fapohunda K, Saldanha IJ, Scherer RW, Lum F, Al-Rajhi A, Musch DC, Hawkins BS, Dickersin K, Li T. Evaluation of Systematic Reviews of Interventions for Retina and Vitreous Conditions. JAMA Ophthalmol 2021;137:1399-1405. PMID: 31600387. DOI: 10.1001/jamaophthalmol.2019.4016.
Abstract
Importance: Patient care and clinical practice guidelines should be informed by evidence from reliable systematic reviews. The reliability of systematic reviews related to forthcoming guidelines for retina and vitreous conditions is unknown.
Objectives: To summarize the reliability of systematic reviews on interventions for 7 retina and vitreous conditions, describe characteristics of reliable and unreliable systematic reviews, and examine the primary area in which they appeared to be lacking.
Design, Setting, and Participants: A cross-sectional study of systematic reviews was conducted. Systematic reviews of interventions for retina- and vitreous-related conditions in a database maintained by the Cochrane Eyes and Vision United States Satellite were identified. Databases that the reviewers searched, whether any date or language restrictions were applied, and bibliographic information, such as year and journal of publication, were documented. The initial search was conducted in March 2007, and the final update was performed in July 2018. The conditions of interest were age-related macular degeneration; diabetic retinopathy; idiopathic epiretinal membrane and vitreomacular traction; idiopathic macular hole; posterior vitreous detachment, retinal breaks, and lattice degeneration; retinal and ophthalmic artery occlusions; and retinal vein occlusions. The reliability of each review was evaluated using prespecified criteria. Data were extracted by 2 research assistants working independently, with disagreements resolved through discussion or by 1 research assistant with verification by a senior team member.
Main Outcomes and Measures: Proportion of reviews that meet all of the following criteria: (1) defined eligibility criteria for study selection, (2) described conducting a comprehensive literature search, (3) reported assessing risk of bias in included studies, (4) described using appropriate methods for any meta-analysis performed, and (5) provided conclusions consistent with review findings.
Results: A total of 327 systematic reviews that addressed retina and vitreous conditions were identified; of these, 131 reviews (40.1%) were classified as reliable and 196 reviews (59.9%) were classified as not reliable. At least 1 reliable review was found for each of the 7 retina and vitreous conditions. The most common reason that a review was classified as not reliable was lack of evidence that a comprehensive literature search for relevant studies had been conducted (149 of 196 reviews [76.0%]).
Conclusion and Relevance: The findings of this study suggest that most systematic reviews that addressed interventions for retina and vitreous conditions were not reliable. Systematic review teams and guideline developers should work with information professionals who can help navigate sophisticated and varied syntaxes required to search different resources.
Affiliation(s)
- Jimmy T Le
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Riaz Qureshi
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Claire Twose
- Welch Medical Library, Johns Hopkins University, Baltimore, Maryland
- Lori Rosman
- Welch Medical Library, Johns Hopkins University, Baltimore, Maryland
- Genie Han
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Kolade Fapohunda
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Ian J Saldanha
- Department of Health Services, Policy, and Practice (Primary), Brown University School of Public Health, Providence, Rhode Island
- Department of Epidemiology (Joint), Brown University School of Public Health, Providence, Rhode Island
- Roberta W Scherer
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Flora Lum
- American Academy of Ophthalmology, San Francisco, California
- Ali Al-Rajhi
- American Academy of Ophthalmology, San Francisco, California
- David C Musch
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor
- Department of Epidemiology, University of Michigan, Ann Arbor
- Barbara S Hawkins
- Wilmer Eye Institute, Johns Hopkins School of Medicine, Baltimore, Maryland
- Kay Dickersin
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Tianjing Li
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland

17. Michelessi M, Li T, Miele A, Azuara-Blanco A, Qureshi R, Virgili G. Accuracy of optical coherence tomography for diagnosing glaucoma: an overview of systematic reviews. Br J Ophthalmol 2021;105:490-495. PMID: 32493760. PMCID: PMC7876780. DOI: 10.1136/bjophthalmol-2020-316152.
Abstract
AIMS: To assess the diagnostic test accuracy (DTA) of optical coherence tomography (OCT) for detecting glaucoma by systematically searching and appraising systematic reviews (SRs) on this issue.
METHODS: We searched a database of SRs in eyes and vision maintained by the Cochrane Eyes and Vision United States on the DTA of OCT for detecting glaucoma. Two authors working independently screened the records, abstracted data and assessed the risk of bias using the Risk of Bias in Systematic Reviews checklist. We extracted quantitative DTA estimates as well as qualitative statements on their relevance to practice.
RESULTS: We included four SRs published between 2015 and 2018. These SRs included between 17 and 113 studies on OCT for glaucoma diagnosis. Two reviews were at low risk of bias, and the other two had two to four domains at high or unclear risk of bias, with concerns on applicability. The two reliable SRs reported the accuracy of average retinal nerve fibre layer (RNFL) thickness and found a sensitivity of 0.69 (0.63 to 0.73) and 0.78 (0.74 to 0.83) and a specificity of 0.94 (0.93 to 0.95) and 0.93 (0.92 to 0.95) in 57 and 50 studies, respectively. Only one review included a clear specification of the clinical pathway. Both reviews highlighted the limitations of primary DTA studies on this topic.
CONCLUSIONS: The quality of published DTA reviews on OCT for diagnosing glaucoma was mixed. Two reliable SRs found moderate sensitivity at high specificity for average RNFL thickness in diagnosing manifest glaucoma. Our overview suggests that the methodological quality of both primary and secondary DTA research on glaucoma is in need of improvement.
Affiliation(s)
- Tianjing Li: Department of Ophthalmology, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Alba Miele: Eye Clinic, Department of NEUROFARBA, University of Florence, Florence, Italy
- Riaz Qureshi: Department of Epidemiology, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Gianni Virgili: Eye Clinic, Department of NEUROFARBA, University of Florence, Florence, Italy
18. Colder Carras M, Shi J, Hard G, Saldanha IJ. Evaluating the quality of evidence for gaming disorder: A summary of systematic reviews of associations between gaming disorder and depression or anxiety. PLoS One 2020; 15:e0240032. PMID: 33104730; PMCID: PMC7588081; DOI: 10.1371/journal.pone.0240032.
Abstract
Gaming disorder has been described as an urgent public health problem and has garnered many systematic reviews of its associations with other health conditions. However, review methodology can contribute to bias in the conclusions, leading to research, policy, and patient care that are not truly evidence-based. This study followed a pre-registered protocol (PROSPERO 2018 CRD42018090651) with the objective of identifying reliable and methodologically rigorous systematic reviews that examine the associations between gaming disorder and depression or anxiety in any population. We searched PubMed and PsycInfo for published systematic reviews and the gray literature for unpublished systematic reviews as of June 24, 2020. Reviews were classified as reliable according to several quality criteria, such as whether they conducted a risk of bias assessment of studies and whether they clearly described how outcomes from each study were selected. We assessed possible selective outcome reporting among the reviews. Seven reviews that included a total of 196 studies met inclusion criteria. The overall number of participants was not calculable because not all reviews reported these data. All reviews specified eligibility criteria for studies, but not for outcomes within studies. Only one review assessed risk of bias. Evidence of selective outcome reporting was found in all reviews; only one review incorporated any of the null findings from the studies it included. Thus, none were classified as reliable according to prespecified quality criteria. Systematic reviews related to gaming disorder do not meet methodological standards. As clinical and policy decisions are heavily reliant on reliable, accurate, and unbiased evidence synthesis, researchers, clinicians, and policymakers should consider the implications of selective outcome reporting. Limitations of the current summary include using counts of associations and restricting to systematic reviews published in English. Systematic reviewers should follow established guidelines for review conduct and transparent reporting to ensure evidence about technology use disorders is reliable.
Affiliation(s)
- Jing Shi: Institute for Mental Health Policy Research, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; School of Rehabilitation Science, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Gregory Hard: MGH Institute of Health Professions, Mass General Brigham, Boston, Massachusetts, United States of America
- Ian J. Saldanha: Center for Evidence Synthesis in Health, Department of Health Services, Policy, and Practice, and Department of Epidemiology, Brown University School of Public Health, Providence, Rhode Island, United States of America
19. Qureshi R, Han G, Fapohunda K, Abariga S, Wilson R, Li T. Authorship diversity among systematic reviews in eyes and vision. Syst Rev 2020; 9:192. PMID: 32854764; PMCID: PMC7450569; DOI: 10.1186/s13643-020-01451-1.
Abstract
IMPORTANCE: The inclusion of authors from diverse backgrounds and with different lived experiences is critical to ensuring that the questions addressed in systematic reviews (SRs), as well as the subsequent conclusions and recommendations made, are representative of the global community.
OBJECTIVE: To assess the gender and geographic diversity of authors among all Cochrane SRs in eyes and vision as compared with a random sample of non-Cochrane SRs of interventions in the field of eyes and vision.
DESIGN: The Cochrane Eyes and Vision US Satellite maintains a database of SRs in the field of eyes and vision. We selected all (n = 313) eyes and vision intervention SRs published in The Cochrane Library and a random sample of 313 eyes and vision intervention SRs published elsewhere for this study. We determined the gender of the first and corresponding authors ("woman," "man," or "unknown") using a previously developed algorithm, and their location based on institution country and World Health Organization region.
RESULTS: From the 626 reviews included in our sample, we identified 751 unique authors who occupied 887 author positions (i.e., first and/or corresponding author). We were able to ascertain the gender of 647/751 (86%) authors: 276 women and 371 men. Among Cochrane eyes and vision SRs, the proportions of women in first and/or corresponding author positions were consistent and approximately equal to those of men. Among non-Cochrane eyes and vision SRs, the representation of women was markedly lower among corresponding authors than in other positions. Most authors of Cochrane eyes and vision SRs were from the UK (31%) and USA (26%), whereas most authors of non-Cochrane SRs were from China (34%).
CONCLUSIONS AND RELEVANCE: Compared with authors of non-Cochrane SRs in eyes and vision, authors of Cochrane SRs appear to have approximately equal representation of women and men in perceived important author positions and to be located in European and North American countries, possibly due to the locations of the Cochrane editorial teams. Cochrane Eyes and Vision should continue to recruit authors from around the world in locations that reflect the global burden of eye disease.
Affiliation(s)
- Riaz Qureshi: Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Genie Han: Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Kolade Fapohunda: Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Samuel Abariga: Department of Ophthalmology, School of Medicine, University of Colorado Anschutz Medical Campus, 1675 Aurora Ct. F731, Aurora, CO 80045, USA
- Renee Wilson: Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Tianjing Li: Department of Ophthalmology, School of Medicine, University of Colorado Anschutz Medical Campus, 1675 Aurora Ct. F731, Aurora, CO 80045, USA
20. Keel S, Evans JR, Block S, Bourne R, Calonge M, Cheng CY, Friedman DS, Furtado JM, Khanna RC, Mathenge W, Mariotti S, Matoto E, Müller A, Rabiu MM, Rasengane T, Zhao J, Wormald R, Cieza A. Strengthening the integration of eye care into the health system: methodology for the development of the WHO package of eye care interventions. BMJ Open Ophthalmol 2020; 5:e000533. PMID: 32821853; PMCID: PMC7418692; DOI: 10.1136/bmjophth-2020-000533.
Abstract
Objective: To describe the rationale for, and the methods that will be employed to develop, the WHO package of eye care interventions (PECI).
Methods and analysis: The development of the package will be conducted in four steps: (1) selection of eye conditions (for which interventions will be included in the package) based on epidemiological data on the causes of vision impairment and blindness, prevalence estimates of eye conditions, and health facility data; (2) identification of interventions and related evidence for the selected eye conditions from clinical practice guidelines and high-quality systematic reviews by a technical working group; (3) expert agreement on the inclusion of eye care interventions in the package and the description of resources required for the provision of the selected interventions; and (4) peer review. The project will be led by the WHO Vision Programme in collaboration with Cochrane Eyes and Vision. A Technical Advisory Group, composed of public health and clinical experts in the field, will provide technical input throughout all stages of development.
Results: After considering the feedback of Technical Advisory Group members and reviewing related evidence, a final list of eye conditions for which interventions will be included in the package has been collated.
Conclusion: The PECI will support Ministries of Health in prioritising, planning, budgeting and integrating eye care interventions into health systems. It is anticipated that the PECI will be available for use in 2021.
Affiliation(s)
- Stuart Keel: Department of Noncommunicable Diseases, World Health Organization, Geneva, Switzerland
- Jennifer R Evans: International Centre for Eye Health, London School of Hygiene and Tropical Medicine, London, United Kingdom
- Sandra Block: Illinois College of Optometry, Chicago, United States
- Rupert Bourne: Cambridge University Hospitals, Cambridge, United Kingdom; Vision & Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom
- Margarita Calonge: Institute of Applied OphthalmoBiology, University of Valladolid and CIBER-BBN (Biomedical Research Networking Center Bioengineering, Biomaterials and Nanomedicine), Carlos III National Institute of Health, Valladolid, Spain
- Ching-Yu Cheng: Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- David S Friedman: Massachusetts Eye and Ear, Harvard University, Boston, United States
- João M Furtado: Division of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Rohit C Khanna: Allen Foster Community Eye Health Research Centre, Gullapalli Pratibha Rao International Centre for Advancement of Rural Eye care, L V Prasad Eye Institute, Hyderabad, India
- Silvio Mariotti: Department of Noncommunicable Diseases, World Health Organization, Geneva, Switzerland
- Andreas Müller: Department of Noncommunicable Diseases, World Health Organization, Geneva, Switzerland
- M Mansur Rabiu: Noor Dubai Foundation, Dubai Health Authority, Dubai, United Arab Emirates
- Tuwani Rasengane: Department of Optometry, University of the Free State and Universitas Hospital, Bloemfontein, South Africa
- Jialang Zhao: Department of Ophthalmology, Peking Union Medical College Hospital, Eye Research Center Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Richard Wormald: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; UCL Institute of Ophthalmology, London, United Kingdom
- Alarcos Cieza: Department of Noncommunicable Diseases, World Health Organization, Geneva, Switzerland
21. Saldanha IJ, Lindsley KB, Lum F, Dickersin K, Li T. Reliability of the Evidence Addressing Treatment of Corneal Diseases: A Summary of Systematic Reviews. JAMA Ophthalmol 2020; 137:775-785. PMID: 31070698; DOI: 10.1001/jamaophthalmol.2019.1063.
Abstract
Importance: Patient care should be informed by clinical practice guidelines, which in turn should be informed by evidence from reliable systematic reviews. The American Academy of Ophthalmology is updating its Preferred Practice Patterns (PPPs) for the management of the following 6 corneal diseases: bacterial keratitis, blepharitis, conjunctivitis, corneal ectasia, corneal edema and opacification, and dry eye syndrome.
Objective: To summarize the reliability of the existing systematic reviews addressing interventions for corneal diseases.
Data Source: The Cochrane Eyes and Vision US Satellite database.
Study Selection: In this study of published systematic reviews from 1997 to 2017 (median, 2014), the Cochrane Eyes and Vision US Satellite database was searched for systematic reviews evaluating interventions for the management of any corneal disease, combining eyes and vision keywords and controlled vocabulary terms with a validated search filter.
Data Extraction and Synthesis: The study classified systematic reviews as reliable when each of the following 5 criteria was met: the systematic review specified eligibility criteria for inclusion of studies, conducted a comprehensive literature search for studies, assessed risk of bias of the individual included studies, used appropriate methods for quantitative synthesis (meta-analysis; assessed only if a meta-analysis was performed), and had conclusions that were supported by the results of the systematic review. Systematic reviews were classified as unreliable if at least 1 criterion was not met.
Main Outcomes and Measures: The proportion of systematic reviews that were reliable and the reasons for unreliability.
Results: This study identified 98 systematic reviews that addressed interventions for 15 corneal diseases. Thirty-three of 98 systematic reviews (34%) were classified as unreliable. The most frequent reasons for unreliability were that the systematic review did not conduct a comprehensive literature search for studies (22 of 33 [67%]), did not assess risk of bias of the individual included studies (13 of 33 [39%]), and did not use appropriate methods for quantitative synthesis (12 of the 17 systematic reviews that conducted a quantitative synthesis [71%]). Sixty-five of 98 systematic reviews (66%) were classified as reliable. Forty-two of the 65 reliable systematic reviews (65%) addressed corneal diseases relevant to the 2018 American Academy of Ophthalmology PPPs; 33 of these 42 systematic reviews (79%) are cited in the 2018 PPPs.
Conclusions and Relevance: One in 3 systematic reviews addressing interventions for corneal diseases is unreliable and thus was not used to inform PPP recommendations. Careful adherence by systematic reviewers and journal editors to well-established best practices regarding systematic review conduct and reporting might help make future systematic reviews in eyes and vision more reliable.
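Editorial note: to make the five-criterion reliability rule described above concrete, here is a minimal sketch of the classification logic as stated in the abstract (reliable only if every applicable criterion is met). The field names and the example record are hypothetical illustrations, not data or code from the study.

# Sketch of the reliability rule summarized in the abstract above.
# Field names and the example record are hypothetical.

def classify_review(review: dict) -> str:
    criteria = [
        review["specified_eligibility_criteria"],
        review["comprehensive_literature_search"],
        review["assessed_risk_of_bias"],
        # Appropriateness of meta-analysis methods is assessed only when a
        # quantitative synthesis (meta-analysis) was actually performed.
        review["appropriate_meta_analysis_methods"] if review["conducted_meta_analysis"] else True,
        review["conclusions_supported_by_results"],
    ]
    # Reliable only if all applicable criteria are met; otherwise unreliable.
    return "reliable" if all(criteria) else "unreliable"

# Hypothetical example: no comprehensive literature search, so classified unreliable.
example = {
    "specified_eligibility_criteria": True,
    "comprehensive_literature_search": False,
    "assessed_risk_of_bias": True,
    "conducted_meta_analysis": False,
    "appropriate_meta_analysis_methods": None,
    "conclusions_supported_by_results": True,
}
print(classify_review(example))  # -> "unreliable"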
Affiliation(s)
- Ian J Saldanha: Center for Evidence Synthesis in Health, Department of Health Services, Policy, and Practice, Brown University School of Public Health, Providence, Rhode Island
- Kristina B Lindsley: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Flora Lum: American Academy of Ophthalmology, San Francisco, California
- Kay Dickersin: Center for Clinical Trials and Evidence Synthesis, Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Tianjing Li: Center for Clinical Trials and Evidence Synthesis, Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
22. Wormald R, Virgili G, Azuara-Blanco A. Systematic reviews and randomised controlled trials on open angle glaucoma. Eye (Lond) 2020; 34:161-167. PMID: 31796882; PMCID: PMC7002425; DOI: 10.1038/s41433-019-0687-5.
Abstract
Evidence for the effectiveness of interventions to prevent the progression of optic nerve damage in open angle glaucoma has evolved over the last 25 years. We describe that evolution through the systematic reviews on various aspects of the topic and how those reviews have highlighted the need for new trials. Though we can be confident that lowering pressure does indeed reduce the risk of progression, we still lack good evidence on the comparative effectiveness of different treatments, not so much for lowering pressure as for preventing progression of the disease. This is true for different medicines, types of laser, and especially for different surgical interventions. As always there is a need for more research, but it needs to be focussed on key uncertainties, using core outcome sets that avoid research waste. Ultimately, our guidelines can be based on sound and comprehensive evidence of effectiveness.
Affiliation(s)
- Richard Wormald: NIHR Biomedical Research Centre at Moorfields Eye Hospital and UCL Institute of Ophthalmology, London, UK; London School of Hygiene and Tropical Medicine, London, UK
23. Evans J, Li T, Virgili G, Wormald R. Cochrane Eyes and Vision: a perspective introducing Cochrane Corner in Eye. Eye (Lond) 2019; 33:882-886. PMID: 30783261; DOI: 10.1038/s41433-019-0357-7.
Abstract
In 1972, Archie Cochrane wrote "It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials". The Cochrane Collaboration arose in response to Archie Cochrane's challenge. Cochrane Eyes and Vision aims to prepare and promote access to systematic reviews of interventions for preventing or treating eye conditions and/or visual impairment, and helping people adjust to visual impairment or blindness. To identify all relevant randomised controlled trials, Cochrane Eyes and Vision has a team of information specialists who develop search strategies to identify studies for inclusion in Cochrane reviews. Since 1997 we have published 266 protocols, 193 new reviews and 158 updated reviews. The majority of these are reviews of intervention effectiveness; three reviews are diagnostic test accuracy reviews. Overall 18% of reviews contain no trials, highlighting a potential evidence gap. We provide training, education and guidance to systematic review authors and work with clinical and patient partners to prioritise and disseminate reviews. In addition, Cochrane Eyes and Vision US satellite carries out critical methodologic research addressing topics relevant to producing high-quality reviews. We are partnering with the journal Eye to publish commentaries on selected Cochrane systematic review findings. This partnership will allow us to make high-quality evidence available to ophthalmologists and other practitioners, researchers, policy makers and patients.
Affiliation(s)
- Jennifer Evans: International Centre for Eye Health, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK
- Tianjing Li: Department of Epidemiology, Johns Hopkins University Bloomberg School of Public Health, 615 North Wolfe Street, Baltimore, MD, 21205, USA
- Gianni Virgili: Department of Translational Surgery and Medicine, Eye Clinic, Via le Morgagni 85, University of Florence, 50134, Florence, Italy
- Richard Wormald: Moorfields Eye Hospital NHS Foundation Trust, City Road, London, EC1V 2PD, UK