1
Kezirian EJ. In reference to Orofacial Myofunctional Therapy for Obstructive Sleep Apnea: A Systematic Review and Meta-Analysis. Laryngoscope 2024; 134:E10. PMID: 37905776. DOI: 10.1002/lary.31132.
Affiliation(s)
- Eric J Kezirian
- Department of Head and Neck Surgery, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
2
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Br J Pharmacol 2024; 181:180-210. PMID: 37282770. DOI: 10.1111/bph.16100.
Abstract
Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. 
We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.
Affiliation(s)
- Kat Kolaski
- Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Lynne Romeiser Logan
- Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, New York, USA
- John P A Ioannidis
- Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, California, USA
3
Lisik D, Pires GN, Zou D. Perspective: Systematic review and meta-analysis in obstructive sleep apnea - What is lacking? Sleep Med 2023; 111:54-61. PMID: 37717377. DOI: 10.1016/j.sleep.2023.09.006.
Abstract
Obstructive sleep apnea (OSA) affects nearly one billion adults worldwide. It is associated with substantial burden in terms of quality of life, cognitive function, and cardiovascular health. Positive airway pressure (PAP) therapy, commonly considered the first-line treatment, is limited by low compliance and a lack of efficacy on long-term cardiovascular outcomes. A substantial body of research has been produced investigating (novel) non-PAP treatments. With increased understanding of OSA pathogenesis, promising therapeutic approaches are emerging. There is an imperative need for high-quality synthesis of evidence; however, current systematic reviews and meta-analyses (SR/MA) on the topic demonstrate important methodological limitations and are seldom based on research questions that fully reflect the complex intricacies of OSA management. Here, we discuss the current challenges in the management of OSA, the need for treatable-traits-based OSA treatment, the methodological limitations of existing SR/MA in the field, potential remedies, and future perspectives.
Affiliation(s)
- Daniil Lisik
- Krefting Research Centre, Department of Internal Medicine and Clinical Nutrition, Institute of Medicine, University of Gothenburg, Gothenburg, Sweden.
- Gabriel Natan Pires
- Departamento de Psicobiologia, Universidade Federal de São Paulo, Rua Napoleão de Barros, São Paulo, Brazil
- Ding Zou
- Center for Sleep and Vigilance Disorders, Department of Internal Medicine and Clinical Nutrition, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
4
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Acta Anaesthesiol Scand 2023; 67:1148-1177. PMID: 37288997. DOI: 10.1111/aas.14295.
6
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Syst Rev 2023; 12:96. PMID: 37291658. DOI: 10.1186/s13643-023-02255-9.
7
Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. BMC Infect Dis 2023; 23:383. PMID: 37286949. DOI: 10.1186/s12879-023-08304-x.
8
Kolaski K, Logan LR, Ioannidis JPA. Guidance to Best Tools and Practices for Systematic Reviews. JBJS Rev 2023; 11:01874474-202306000-00009. PMID: 37285444. DOI: 10.2106/jbjs.rvw.23.00077.
9
Kolaski K, Romeiser Logan L, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. J Pediatr Rehabil Med 2023; 16:241-273. PMID: 37302044. DOI: 10.3233/prm-230019.
10
McGregor RH, Warner FM, Linde LD, Cragg JJ, Osborn JA, Varshney VP, Schwarz SKW, Kramer JLK. Quality of meta-analyses of non-opioid, pharmacological, perioperative interventions for chronic postsurgical pain: a systematic review. Reg Anesth Pain Med 2022; 47:263-269. DOI: 10.1136/rapm-2021-102981.
Abstract
Background: In an attempt to aggregate observations from clinical trials, several meta-analyses have been published examining the effectiveness of systemic, non-opioid, pharmacological interventions to reduce the incidence of chronic postsurgical pain.
Objective: To inform the design and reporting of future studies, the purpose of our study was to examine the quality of these meta-analyses.
Evidence review: We conducted an electronic literature search in Embase, MEDLINE, and the Cochrane Database of Systematic Reviews. Published meta-analyses, from the years 2010 to 2020, examining the effect of perioperative, systemic, non-opioid pharmacological treatments on the incidence of chronic postsurgical pain in adult patients were identified. Data extraction focused on methodological details. Meta-analysis quality was assessed using the A Measurement Tool to Assess Systematic Reviews 2 (AMSTAR 2) critical appraisal tool.
Findings: Our search yielded 17 published studies conducting 58 meta-analyses for gabapentinoids (gabapentin and pregabalin), ketamine, lidocaine, non-steroidal anti-inflammatory drugs, and mexiletine. According to AMSTAR 2, 88.2% of studies (15/17) were low or critically low in quality. The most common critical element missing was an analysis of publication bias. Trends indicated an improvement in quality over time and an association with journal impact factor.
Conclusions: With few individual trials adequately powered to detect treatment effects, meta-analyses play a crucial role in informing the perioperative management of chronic postsurgical pain. In light of this inherent value, and despite a number of attempts, high-quality meta-analyses are still needed.
PROSPERO registration number: CRD42021230941.
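The headline figure in this abstract is a simple proportion. As an illustrative sanity check only (the counts 15 and 17 are taken directly from the abstract; the variable names are our own):

```python
# Share of meta-analyses rated low or critically low by AMSTAR 2,
# using the counts reported in the abstract (15 of 17 studies).
low_or_critically_low = 15
total_studies = 17

share_pct = round(100 * low_or_critically_low / total_studies, 1)
print(f"{share_pct}% of studies rated low or critically low")  # 88.2%
```

This reproduces the 88.2% reported in the Findings section.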
11
Dhillon J, Khan T, Siddiqui B, Torgerson T, Ottwell R, Johnson AL, Skinner M, Buchanan P, Hartwell M, Vassar M. Analysis of Systematic Reviews in Clinical Practice Guidelines for Head and Neck Cancer. Laryngoscope 2022; 132:1976-1983. DOI: 10.1002/lary.30051.
Affiliation(s)
- Jaydeep Dhillon
- Rocky Vista University College of Osteopathic Medicine, Parker, Colorado, USA
- Taimoor Khan
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Bilal Siddiqui
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Trevor Torgerson
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Ryan Ottwell
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Department of Internal Medicine, University of Oklahoma School of Community Medicine, Tulsa, Oklahoma, USA
- Austin L. Johnson
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Mason Skinner
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Department of Otolaryngology—Head and Neck Surgery, Oklahoma State University Medical Center, Tulsa, Oklahoma, USA
- Patrick Buchanan
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Ascension Medical Group St. John, ENT and Head and Neck Surgery, Ascension St. John, Tulsa, Oklahoma, USA
- Micah Hartwell
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Matt Vassar
- Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
12
Kezirian EJ. High-quality research is needed much more than commonly published (low quality) meta-analyses. J Clin Sleep Med 2021; 17:1961-1962. PMID: 33960293. DOI: 10.5664/jcsm.9366.
Affiliation(s)
- Eric James Kezirian
- USC Caruso Department of Otolaryngology - Head & Neck Surgery, Keck School of Medicine of USC, Los Angeles, California, USA