1
Ho L, Cheung YMK, Choi CCC, Wu IX, Mao C, Chung VCH. Methodological quality of systematic reviews on atopic dermatitis treatments: a cross-sectional study. J Dermatol Treat 2024; 35:2343072. [PMID: 38626923 DOI: 10.1080/09546634.2024.2343072] [Received: 02/12/2024] [Accepted: 03/26/2024]
Abstract
BACKGROUND Systematic reviews (SRs) can offer the best evidence for supporting interventions, but methodological flaws limit their trustworthiness in decision-making. This cross-sectional study appraised the methodological quality of SRs on atopic dermatitis (AD) treatments. METHODS We searched MEDLINE, EMBASE, PsycINFO, and the Cochrane Database for SRs on AD treatments published in 2019-2022. We extracted the SRs' bibliographical data and appraised their methodological quality with AMSTAR (A MeaSurement Tool to Assess systematic Reviews) 2. We explored associations between methodological quality and bibliographical characteristics. RESULTS Among the 52 appraised SRs, only one (1.9%) had high methodological quality, while 45 (86.5%) were of critically low quality. For critical domains, only five (9.6%) employed a comprehensive search strategy, seven (13.5%) provided a list of excluded studies, 17 (32.7%) considered risk of bias in primary studies, 21 (40.4%) had a registered protocol, and 24 (46.2%) investigated publication bias. Cochrane reviews, SR updates, SRs with European corresponding authors, and SRs funded by European institutions had better overall quality. Impact factor and author number were positively associated with overall quality. CONCLUSIONS The methodological quality of SRs on AD treatments is unsatisfactory. Future reviewers should improve the above critical methodological aspects. Resources should be devoted to upscaling evidence synthesis infrastructure and improving the critical appraisal skills of evidence users.
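The high/moderate/low/critically low ratings used throughout these studies follow AMSTAR 2's published scheme for deriving an overall confidence rating from flaws in critical versus non-critical domains. A minimal sketch of that decision rule (paraphrased from the AMSTAR 2 guidance; the function name and integer inputs are illustrative, not part of the tool itself):

```python
def amstar2_overall_rating(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """Overall confidence per the AMSTAR 2 scheme:
    more than one critical flaw -> critically low; exactly one -> low;
    otherwise more than one non-critical weakness -> moderate, else high."""
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    if noncritical_weaknesses > 1:
        return "moderate"
    return "high"

# e.g. an SR with no critical flaws and a single minor weakness still rates "high",
# which is why so few reviews in these samples reach the top rating
```

The asymmetry of the rule, where a single critical flaw (such as no registered protocol or no comprehensive search) caps the rating at "low", explains how 86.5% of the appraised SRs could land in the lowest category.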
Affiliation(s)
- Leonard Ho: Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
- Yolenda Man Kei Cheung: Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
- Cyrus Chung Ching Choi: Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
- Irene Xinyin Wu: Xiangya School of Public Health, Central South University, Changsha, Hunan, China; Hunan Provincial Key Laboratory of Clinical Epidemiology, Changsha, Hunan, China
- Chen Mao: Department of Epidemiology, School of Public Health, Southern Medical University, Guangzhou, Guangdong, China
- Vincent Chi Ho Chung: Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong; School of Chinese Medicine, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
2
Ho L, Chen X, Kwok YL, Wu IXY, Mao C, Chung VCH. Methodological quality of systematic reviews on sepsis treatments: a cross-sectional study. Am J Emerg Med 2024; 77:21-28. [PMID: 38096636 DOI: 10.1016/j.ajem.2023.12.001] [Received: 09/12/2023] [Revised: 11/22/2023] [Accepted: 12/04/2023]
Abstract
OBJECTIVE Systematic reviews (SRs) offer updated evidence to support decision-making on sepsis treatments. However, the rigour of SRs may vary, and methodological flaws may limit their validity in guiding clinical practice. This cross-sectional study appraised the methodological quality of SRs on sepsis treatments. METHODS We searched MEDLINE, EMBASE, and the Cochrane Database for eligible SRs of randomised controlled trials on sepsis treatments with at least one meta-analysis, published between 2018 and 2023. We extracted the SRs' bibliographical characteristics with a pre-designed form and appraised their methodological quality using AMSTAR (A MeaSurement Tool to Assess systematic Reviews) 2. We applied logistic regressions to explore associations between bibliographical characteristics and methodological quality ratings. RESULTS Among the 102 SRs, two (2.0%) had high overall quality, while four (3.9%), seven (6.9%), and 89 (87.3%) were of moderate, low, and critically low quality, respectively. Performance in several critical methodological domains was poor: only 32 (31.4%) considered the risk of bias in primary studies when interpreting results, 22 (21.6%) explained the exclusion of primary studies, and 16 (15.7%) applied comprehensive search strategies. Publication in journals with higher impact factors was associated with higher methodological quality (adjusted odds ratio: 1.19; 95% confidence interval: 1.05 to 1.36). CONCLUSIONS The methodological quality of recent SRs on sepsis treatments is unsatisfactory. Future reviewers should address the above critical methodological aspects. More resources should also be allocated to support continuous training in critical appraisal among healthcare professionals and other evidence users.
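To make an adjusted odds ratio of 1.19 per impact-factor point concrete: the odds of a higher quality rating multiply by 1.19 for each additional point, so a five-point difference multiplies the odds by 1.19^5. A small illustrative calculation (not from the paper; the baseline probability below is hypothetical):

```python
def shifted_probability(p_baseline: float, odds_ratio: float, delta_units: float) -> float:
    """Probability after scaling the baseline odds by odds_ratio**delta_units,
    i.e. how a per-unit odds ratio propagates across a multi-unit difference."""
    odds = p_baseline / (1.0 - p_baseline) * odds_ratio ** delta_units
    return odds / (1.0 + odds)

# hypothetical 10% baseline chance of a better-than-critically-low rating,
# compared across journals differing by 5 impact-factor points
p_low_if = 0.10
p_high_if = shifted_probability(p_low_if, 1.19, 5.0)
```

Note that the odds ratio acts multiplicatively on odds, not on probabilities, so the implied probability shift depends on the baseline chosen.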
Affiliation(s)
- Leonard Ho: Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
- Xi Chen: Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
- Yan Ling Kwok: Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
- Irene X Y Wu: Xiangya School of Public Health, Central South University, Changsha, Hunan, China; Hunan Provincial Key Laboratory of Clinical Epidemiology, Changsha, Hunan, China
- Chen Mao: Department of Epidemiology, School of Public Health, Southern Medical University, Guangzhou, Guangdong, China
- Vincent Chi Ho Chung: Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong; School of Chinese Medicine, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
3
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Br J Pharmacol 2024; 181:180-210. [PMID: 37282770 DOI: 10.1111/bph.16100] [Received: 04/26/2023] [Accepted: 04/26/2023]
Abstract
Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. 
We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these resources is encouraged, but we caution against their superficial application and emphasize that their endorsement does not substitute for in-depth methodological training. By highlighting best practices together with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.
Affiliation(s)
- Kat Kolaski: Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Lynne Romeiser Logan: Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, New York, USA
- John P A Ioannidis: Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, California, USA
4
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Acta Anaesthesiol Scand 2023; 67:1148-1177. [PMID: 37288997 DOI: 10.1111/aas.14295] [Received: 04/26/2023] [Accepted: 04/26/2023]
6
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. BMC Infect Dis 2023; 23:383. [PMID: 37286949 DOI: 10.1186/s12879-023-08304-x] [Received: 04/21/2023] [Accepted: 05/03/2023]
7
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Syst Rev 2023; 12:96. [PMID: 37291658 DOI: 10.1186/s13643-023-02255-9] [Received: 10/03/2022] [Accepted: 02/19/2023]
8
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. JBJS Rev 2023; 11:01874474-202306000-00009. [PMID: 37285444 DOI: 10.2106/jbjs.rvw.23.00077]
9
Kolaski K, Romeiser Logan L, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. J Pediatr Rehabil Med 2023; 16:241-273. [PMID: 37302044 DOI: 10.3233/prm-230019]
10
Zhong CCW, Zhao J, Wong CHL, Wu IXY, Mao C, Yeung JWF, Chung VCH. Methodological quality of systematic reviews on treatments for Alzheimer's disease: a cross-sectional study. Alzheimers Res Ther 2022; 14:159. [PMID: 36309725 PMCID: PMC9617345 DOI: 10.1186/s13195-022-01100-w] [Received: 12/13/2021] [Accepted: 09/30/2022]
Abstract
BACKGROUND Carefully conducted systematic reviews (SRs) can provide reliable evidence on the effectiveness of treatment strategies for Alzheimer's disease (AD). Nevertheless, the reliability of SR results can be limited by methodological flaws. This cross-sectional study aimed to examine the methodological quality of SRs on AD treatments, along with potentially relevant factors. METHODS Four databases, including the Cochrane Database of Systematic Reviews, MEDLINE, EMBASE, and PsycINFO, were searched to identify eligible SRs on AD treatments. AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews 2) was used for quality appraisal of the SRs. Multivariable regression analyses were used to examine factors related to methodological quality. RESULTS A total of 102 SRs were appraised. Four (3.9%) SRs were rated high quality; 14 (13.7%), 48 (47.1%), and 36 (35.3%) were rated moderate, low, and critically low quality, respectively. The following significant methodological limitations were identified: only 22.5% of SRs registered protocols a priori, 6.9% discussed the rationale for the chosen study designs, 21.6% gave a list of excluded studies with reasons, and 23.5% documented funding sources of primary studies. Cochrane SRs (adjusted odds ratio (AOR): 31.9, 95% confidence interval (CI): 3.81-266.9) and SRs of pharmacological treatments (AOR: 3.96, 95% CI: 1.27-12.3) were associated with higher overall methodological quality. CONCLUSION The methodological quality of SRs on AD treatments is unsatisfactory, especially among non-Cochrane SRs and SRs of non-pharmacological interventions. Improvement in the following methodological domains requires particular attention due to poor performance: registering and publishing protocols a priori, justifying study design selection, providing a list of excluded studies, and reporting funding sources of primary studies.
Affiliation(s)
- Claire C. W. Zhong: Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong
- Jinglun Zhao: Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong
- Charlene H. L. Wong: Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong
- Irene X. Y. Wu: Xiangya School of Public Health, Central South University, 5/F, No. 238, Shang ma Yuan ling Alley, Kaifu District, Changsha, Hunan, China
- Chen Mao: Department of Epidemiology, School of Public Health, Southern Medical University, Guangzhou, China
- Jerry W. F. Yeung: School of Nursing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Vincent C. H. Chung: Jockey Club School of Public Health and Primary Care, and School of Chinese Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong
11
Cheung AKL, Wong CHL, Ho L, Wu IXY, Ke FYT, Chung VCH. Methodological quality of systematic reviews on Chinese herbal medicine: a methodological survey. BMC Complement Med Ther 2022; 22:48. [PMID: 35197038] [PMCID: PMC8867833] [DOI: 10.1186/s12906-022-03529-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
BACKGROUND Systematic reviews (SRs) synthesise the best evidence on the effectiveness and safety of Chinese herbal medicine (CHM). Decision-making should be supported by high-quality evidence from prudently conducted SRs, but the trustworthiness of conclusions may be limited by poor methodological rigour. METHODS This survey aimed to examine the methodological quality of a representative sample of SRs on CHM published from January 2018 to March 2020. We searched the Cochrane Database of Systematic Reviews, MEDLINE via Ovid, and EMBASE via Ovid. Eligible SRs had to be in Chinese or English and include at least one meta-analysis on the treatment effect of any CHM documented in the 2015 Chinese Pharmacopoeia. Two reviewers extracted the bibliographical characteristics of SRs and appraised their methodological quality using AMSTAR 2 (Assessing the Methodological Quality of Systematic Reviews 2). Associations between bibliographical characteristics and methodological quality were investigated using Kruskal-Wallis tests and Spearman's rank correlation coefficients. RESULTS We sampled and appraised 148 SRs. Overall, one (0.7%) was of high methodological quality; zero (0%), four (2.7%), and 143 (96.6%) were of moderate, low, and critically low quality, respectively. Only 13 SRs (8.8%) provided a pre-defined protocol; none (0%) provided justifications for including particular primary study designs; six (4.1%) conducted a comprehensive literature search; two (1.4%) provided a list of excluded studies; nine (6.1%) undertook meta-analysis with appropriate methods; and seven (4.7%) reported funding sources of included primary studies. Cochrane reviews had higher overall quality than non-Cochrane reviews (P < 0.001). SRs with European funding support were less likely to be of critically low quality than their counterparts (P = 0.020). SRs conducted by more authors (rs = 0.23; P = 0.006) and those published in journals with higher impact factors (rs = 0.20; P = 0.044) had higher methodological quality. CONCLUSIONS Our results indicate that the methodological quality of SRs on CHM is low. Future authors should enhance methodological quality by registering a priori protocols, justifying the selection of study designs, conducting comprehensive literature searches, providing a list of excluded studies with rationales, using appropriate methods for meta-analyses, and reporting funding sources of primary studies.
Affiliation(s)
- Andy K L Cheung: Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong
- Charlene H L Wong: Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong
- Leonard Ho: School of Chinese Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong
- Irene X Y Wu: Xiangya School of Public Health, Central South University, 5/F, 238 Shang-Ma-Yuan-Ling Alley, Kai-Fu District, Changsha, Hunan, China
- Fiona Y T Ke: Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong
- Vincent C H Chung: Jockey Club School of Public Health and Primary Care, and School of Chinese Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong
12
Ho L, Ke FYT, Wong CHL, Wu IXY, Cheung AKL, Mao C, Chung VCH. Low methodological quality of systematic reviews on acupuncture: a cross-sectional study. BMC Med Res Methodol 2021; 21:237. [PMID: 34717563] [PMCID: PMC8557536] [DOI: 10.1186/s12874-021-01437-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0]
Abstract
BACKGROUND While well-conducted systematic reviews (SRs) can provide the best evidence on the potential effectiveness of acupuncture, limitations in the methodological rigour of SRs may affect the trustworthiness of their conclusions. This cross-sectional study aimed to evaluate the methodological quality of a representative sample of SRs on acupuncture effectiveness. METHODS The Cochrane Database of Systematic Reviews, MEDLINE, and EMBASE were searched for SRs on the treatment effect of manual acupuncture or electro-acupuncture published between January 2018 and March 2020. Eligible SRs had to contain at least one meta-analysis and be published in English. Two independent reviewers extracted the bibliographical characteristics of the included SRs with a pre-designed questionnaire and appraised their methodological quality with the validated AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews 2). Associations between bibliographical characteristics and methodological quality ratings were explored using Kruskal-Wallis rank tests and Spearman's rank correlation coefficients. RESULTS A total of 106 SRs were appraised. Only one (0.9%) SR was of high overall methodological quality; zero (0%) were of moderate quality, and six (5.7%) and 99 (93.4%) were of low and critically low quality, respectively. Among the appraised SRs, only ten (9.4%) provided an a priori protocol, four (3.8%) conducted a comprehensive literature search, five (4.7%) provided a list of excluded studies, and six (5.7%) performed meta-analysis appropriately. Cochrane SRs, updated SRs, and SRs that did not search non-English databases had relatively higher overall quality. CONCLUSIONS The methodological quality of SRs on acupuncture is unsatisfactory. Future reviewers should improve the critical methodological aspects of publishing protocols, performing comprehensive searches, providing a list of excluded studies with justifications for exclusion, and conducting appropriate meta-analyses. These recommendations can be implemented by enhancing reviewers' technical competency in SR methodology through established education approaches, as well as through quality gatekeeping by journal editors and peer reviewers. Finally, for evidence users, skills in critical appraisal of SRs remain essential, as relevant evidence may not be available in pre-appraised formats.
Affiliation(s)
- Leonard Ho: School of Chinese Medicine, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong
- Fiona Y T Ke: The Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong
- Charlene H L Wong: The Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong
- Irene X Y Wu: Xiangya School of Public Health, Central South University, 5/F, 238 Shang-Ma-Yuan-Ling Alley, Kai-Fu District, Changsha, Hunan, China
- Andy K L Cheung: The Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong
- Chen Mao: Department of Epidemiology, School of Public Health, Southern Medical University, Guangzhou, China
- Vincent C H Chung: School of Chinese Medicine, and The Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong
13
Ruszkowski J, Majkutewicz K, Rybka E, Kutek M, Dębska-Ślizień A, Witkowski JM. The methodological quality and clinical applicability of meta-analyses on probiotics in 2020: a cross-sectional study. Biomed Pharmacother 2021; 142:112044. [PMID: 34399202] [DOI: 10.1016/j.biopha.2021.112044] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
Systematic reviews with meta-analyses (SR/MA) are frequently conducted to investigate the clinical efficacy of probiotics. However, only rigorously prepared analyses can serve as the highest level of evidence for a specified research question. We aimed to determine (1) the methodological quality of recent SR/MA assessing the efficacy of probiotics; (2) whether the results of these SR/MA have clinical applicability; and (3) the factors associated with better quality and applicability of SR/MA. We systematically searched four databases for SR/MA on probiotic efficacy published in 2020 (PROSPERO CRD42020222716). The AMSTAR 2 tool and pre-defined authors' criteria were used to evaluate methodological quality and clinical applicability, respectively. A total of 114 SR/MA were appraised. For 88 papers (77%), the overall confidence in the results was rated as "critically low". The most prevalent flaws were the lack of a list of excluded studies with justification (79.8%), the lack of a study protocol (60.5%), and problems with appropriate combination of results (54.4%). A declared focus on probiotic efficacy could have been misleading in the case of 18 SR/MA that also included synbiotic, paraprobiotic, and prebiotic trials in their analyses. Only 14 SR/MA provided results that can be applied in clinical practice. A higher journal impact factor and European affiliation of the first and corresponding authors were most consistently associated with higher odds of fulfilling AMSTAR 2 items. Based on our findings, SR/MA of probiotic trials cannot be treated as the highest level of evidence without a careful evaluation of their methodological validity.
Affiliation(s)
- Jakub Ruszkowski: Department of Pathophysiology, Faculty of Medicine, Medical University of Gdańsk, Dębinki 7, 80-211 Gdańsk, Poland; Department of Nephrology, Transplantology and Internal Medicine, Faculty of Medicine, Medical University of Gdańsk, Poland
- Katarzyna Majkutewicz: Pathophysiology and Experimental Rheumatology Student Interest Club, Departments of Pathophysiology and Experimental Rheumatology, Faculty of Medicine, Medical University of Gdańsk, Poland
- Marcin Kutek: Department of Pathophysiology, Faculty of Medicine, Medical University of Gdańsk, Dębinki 7, 80-211 Gdańsk, Poland
- Alicja Dębska-Ślizień: Department of Nephrology, Transplantology and Internal Medicine, Faculty of Medicine, Medical University of Gdańsk, Poland
- Jacek M Witkowski: Department of Pathophysiology, Faculty of Medicine, Medical University of Gdańsk, Dębinki 7, 80-211 Gdańsk, Poland
14
Wang H, Chen Y, Lin Y, Abesig J, Wu IX, Tam W. The methodological quality of individual participant data meta-analysis on intervention effects: systematic review. BMJ 2021; 373:n736. [PMID: 33875446] [PMCID: PMC8054226] [DOI: 10.1136/bmj.n736] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3]
Abstract
OBJECTIVE To assess the methodological quality of individual participant data (IPD) meta-analyses and to identify areas for improvement. DESIGN Systematic review. DATA SOURCES Medline, Embase, and Cochrane Database of Systematic Reviews. ELIGIBILITY CRITERIA FOR SELECTING STUDIES Systematic reviews with IPD meta-analyses of randomised controlled trials on intervention effects published in English. RESULTS 323 IPD meta-analyses covering 21 clinical areas and published between 1991 and 2019 were included: 270 (84%) were non-Cochrane reviews and 269 (84%) were published in journals with a high impact factor (top quarter). The IPD meta-analyses showed low compliance in using a satisfactory technique to assess the risk of bias of the included randomised controlled trials (43%, 95% confidence interval 38% to 48%), accounting for risk of bias when interpreting results (40%, 34% to 45%), providing a list of excluded studies with justifications (32%, 27% to 37%), establishing an a priori protocol (31%, 26% to 36%), prespecifying methods for assessing both the overall effects (44%, 39% to 50%) and the participant-intervention interactions (31%, 26% to 36%), assessing and considering the potential for publication bias (31%, 26% to 36%), and conducting a comprehensive literature search (19%, 15% to 23%). Up to 126 (39%) IPD meta-analyses failed to obtain IPD from 90% or more of the eligible participants or trials, among which only 60 (48%) provided reasons and 21 (17%) undertook strategies to account for the unavailable IPD. CONCLUSIONS The methodological quality of IPD meta-analyses is unsatisfactory. Future IPD meta-analyses should establish an a priori protocol with a prespecified data synthesis plan, comprehensively search the literature, critically appraise included randomised controlled trials with appropriate techniques, account for risk of bias during data analyses and interpretation, and account for unavailable IPD.
Affiliation(s)
- Huan Wang: Xiangya School of Public Health, Central South University, 5/F, No. 238, Shang ma Yuan ling Alley, Kaifu District, Changsha, Hunan, China
- Yancong Chen: Xiangya School of Public Health, Central South University, Changsha, Hunan, China
- Yali Lin: Xiangya School of Public Health, Central South University, Changsha, Hunan, China
- Julius Abesig: Xiangya School of Public Health, Central South University, Changsha, Hunan, China
- Irene X Y Wu: Xiangya School of Public Health, Central South University, Changsha, Hunan, China
- Wilson Tam: Alice Lee Centre for Nursing Studies, National University of Singapore, Singapore