1. Heitmar R, Kirchhoff P, Blann A, Kotliar K. Retinal vascular dynamics: A window for observing an irregular heartbeat. A case report. Microcirculation 2024;31:e12844. [PMID: 38241091] [DOI: 10.1111/micc.12844]
Abstract
OBJECTIVE We aimed to characterize several aspects of retinal vascular dynamics in a patient with arrhythmia in order to elicit additional diagnostic information on microvascular dysfunction. METHODS A 68-year-old male patient with arrhythmia and an age- and gender-matched control subject underwent ocular examination including dynamic retinal vessel assessment with flicker light provocation. Retinal vessel diameters were measured continuously following a standard protocol (IMEDOS Systems, Jena, Germany). The data were evaluated using methods of signal analysis. RESULTS The retinal vessel response to flicker provocation, as well as the local structural and functional behavior of retinal vessels, was comparable between the two individuals. The arrhythmia case demonstrated irregular arterial and venous heart rate (HR) pulsation with an average frequency of 1 Hz. Moreover, the case showed a higher magnitude and longer periods of low-frequency retinal vessel oscillations, as well as lower periodicity of both HR pulsations and low-frequency vasomotions. CONCLUSIONS Beyond numerical examination of irregular HR pulsations in arrhythmia, direct noninvasive assessment of retinal vessel dynamics yields more detailed information on microvascular function, including the whole spectrum of retinal arterial and venous pulsations and vasomotions. This may have implications for health screening not limited to atrial fibrillation.
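The signal-analysis step summarized above (an irregular ~1 Hz pulsation plus low-frequency vasomotions in a continuously measured diameter trace) can be illustrated with a simple spectral decomposition. The following is a minimal sketch, not the authors' code: the 25 Hz sampling rate, the frequency bands and the synthetic diameter signal are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): estimate the dominant cardiac
# pulsation frequency and the low-frequency vasomotion peak in a retinal vessel
# diameter trace using Welch's power spectral density. Sampling rate, band
# limits and the synthetic signal are assumptions for demonstration.
import numpy as np
from scipy.signal import welch

fs = 25.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)               # two minutes of recording
rng = np.random.default_rng(0)
diameter = (
    120.0                                   # baseline diameter (arbitrary units)
    + 2.0 * np.sin(2 * np.pi * 1.0 * t)     # ~1 Hz heart-rate pulsation
    + 4.0 * np.sin(2 * np.pi * 0.08 * t)    # low-frequency vasomotion
    + rng.normal(0, 0.5, t.size)            # measurement noise
)

f, psd = welch(diameter - diameter.mean(), fs=fs, nperseg=2048)

def dominant_freq(lo, hi):
    """Frequency with the largest spectral power inside [lo, hi] Hz."""
    band = (f >= lo) & (f <= hi)
    return f[band][np.argmax(psd[band])]

print(f"cardiac pulsation peak ~ {dominant_freq(0.5, 2.0):.2f} Hz")
print(f"vasomotion peak        ~ {dominant_freq(0.02, 0.2):.2f} Hz")
```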
Affiliation(s)
- Rebekka Heitmar
- Centre for Vision Across the Lifespan, School of Applied Sciences, University of Huddersfield, Huddersfield, UK
- Paulus Kirchhoff
- University of Birmingham Centre for Cardiovascular Sciences, City Hospital, Birmingham, UK
- Andrew Blann
- University of Birmingham Centre for Cardiovascular Sciences, City Hospital, Birmingham, UK
- Konstantin Kotliar
- Department of Medical Engineering and Technomathematics, Aachen University of Applied Sciences, Juelich, Germany
2. Hasan SU, Siddiqui MAR. Diagnostic accuracy of smartphone-based artificial intelligence systems for detecting diabetic retinopathy: A systematic review and meta-analysis. Diabetes Res Clin Pract 2023;205:110943. [PMID: 37805002] [DOI: 10.1016/j.diabres.2023.110943]
Abstract
AIMS Diabetic retinopathy (DR) is a major cause of blindness globally, and early detection is critical to prevent vision loss. Traditional screening that relies on human experts is, however, costly and time-consuming. The purpose of this systematic review was to assess the diagnostic accuracy of smartphone-based artificial intelligence (AI) systems for DR detection. METHODS A literature review was conducted on MEDLINE, Embase, Scopus, CINAHL Plus, and Cochrane from inception to December 2022. We included diagnostic test accuracy studies evaluating the use of smartphone-based AI algorithms for DR screening in patients with diabetes, with an expert human grader as the reference standard. A random-effects model was used to pool sensitivity and specificity. Any DR (ADR) and referable DR (RDR) were analyzed separately. RESULTS Of 968 identified articles, six diagnostic test accuracy studies met our inclusion criteria, comprising 3,931 patients. Four of these studies used the Medios AI algorithm. The pooled sensitivity and specificity were 88% and 91.5%, respectively, for diagnosis of ADR, and 98.2% and 81.2%, respectively, for diagnosis of RDR. The overall risk of bias across the studies was low. CONCLUSIONS Smartphone-based AI algorithms show high diagnostic accuracy for detecting DR. However, more high-quality comparative studies are needed to evaluate their effectiveness in real-world clinical settings.
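For readers unfamiliar with the underlying index-test calculation, the sketch below shows how sensitivity and specificity (with Wilson 95% confidence intervals) are derived from a single 2x2 table of AI grading versus an expert human grader. The counts are invented, and the review itself pooled such estimates across studies with a random-effects model; this is only the per-study building block.

```python
# Hedged sketch: sensitivity and specificity with Wilson 95% confidence
# intervals from one hypothetical 2x2 table (smartphone AI grading vs. an
# expert human grader as reference). The counts are invented, not study data.
import math

tp, fn, tn, fp = 172, 23, 611, 58           # invented counts

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return centre - half, centre + half

sens = tp / (tp + fn)
spec = tn / (tn + fp)
lo_s, hi_s = wilson_ci(tp, tp + fn)
lo_p, hi_p = wilson_ci(tn, tn + fp)
print(f"sensitivity {sens:.3f} (95% CI {lo_s:.3f}-{hi_s:.3f})")
print(f"specificity {spec:.3f} (95% CI {lo_p:.3f}-{hi_p:.3f})")
```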
Affiliation(s)
- S Umar Hasan
- Department of Ophthalmology and Visual Sciences, Aga Khan University Hospital, National Stadium Road, Karachi, Pakistan
- M A Rehman Siddiqui
- Department of Ophthalmology and Visual Sciences, Aga Khan University Hospital, National Stadium Road, Karachi, Pakistan
3. Stahl AC, Tietz AS, Kendziora B, Dewey M. Has the STARD statement improved the quality of reporting of diagnostic accuracy studies published in European Radiology? Eur Radiol 2023;33:97-105. [PMID: 35907025] [PMCID: PMC9362582] [DOI: 10.1007/s00330-022-09008-7]
Abstract
OBJECTIVES To investigate whether encouraging authors to follow the Standards for Reporting Diagnostic Accuracy (STARD) guidelines improves the quality of reporting of diagnostic accuracy studies. METHODS In mid-2017, European Radiology started encouraging its authors to follow the STARD guidelines. Our MEDLINE search identified 114 diagnostic accuracy studies published in European Radiology in 2015 and 2019. The quality of reporting was evaluated by two independent reviewers using the revised STARD statement. Item 11 was excluded because a meaningful decision about adherence was not possible. Student's t test for independent samples was used to analyze differences in the mean number of reported STARD items between studies published in 2015 and in 2019. In addition, we calculated differences related to study design, data collection, and citation rate. RESULTS The mean total number of reported STARD items for all 114 diagnostic accuracy studies analyzed was 15.9 ± 2.6 (54.8%) of 29 items (range 9.5-22.5). The quality of reporting of diagnostic accuracy studies was significantly better in 2019 (mean ± standard deviation (SD), 16.3 ± 2.7) than in 2015 (mean ± SD, 15.1 ± 2.3; p = 0.016). No significant differences in the reported STARD items were identified in relation to study design (p = 0.13), data collection (p = 0.87), or citation rate (p = 0.09). CONCLUSION The quality of reporting of diagnostic accuracy studies according to the STARD statement was moderate, with a slight improvement since European Radiology started recommending that its authors follow the STARD guidelines. KEY POINTS • The quality of reporting of diagnostic accuracy studies was moderate, with a mean total number of reported STARD items of 15.9 ± 2.6. • Adherence to STARD was significantly better in 2019 than in 2015 (16.3 ± 2.7 vs. 15.1 ± 2.3; p = 0.016). • No significant differences in the reported STARD items were identified in relation to study design (p = 0.13), data collection (p = 0.87), or citation rate (p = 0.09).
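The 2015-versus-2019 comparison reported above is a standard independent-samples t-test on per-article STARD totals. A hedged sketch follows; the simulated scores and the assumed group sizes (40 articles in 2015, 74 in 2019) are illustrative, not the study data.

```python
# Minimal sketch of the comparison described above: an independent-samples
# t-test on the number of STARD items reported per article in 2015 vs. 2019.
# The scores are simulated and the group sizes (40 and 74) are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
items_2015 = rng.normal(15.1, 2.3, 40)      # simulated per-article STARD totals
items_2019 = rng.normal(16.3, 2.7, 74)

t_stat, p_value = stats.ttest_ind(items_2019, items_2015, equal_var=True)
print(f"mean 2015 = {items_2015.mean():.1f}, mean 2019 = {items_2019.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```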
Affiliation(s)
- Ann-Christine Stahl
- Department of Radiology, Charité - Universitätsmedizin Berlin, joint Medical Faculty of Humboldt-Universität zu Berlin and Freie Universität Berlin, Berlin, Germany
- Anne-Sophie Tietz
- Department of Radiology, Charité - Universitätsmedizin Berlin, joint Medical Faculty of Humboldt-Universität zu Berlin and Freie Universität Berlin, Berlin, Germany
- Benjamin Kendziora
- Department of Dermatology and Allergy, University Hospital, Ludwig Maximilian University, Munich, Germany
- Marc Dewey
- Department of Radiology, Charité - Universitätsmedizin Berlin, joint Medical Faculty of Humboldt-Universität zu Berlin and Freie Universität Berlin, Berlin, Germany
4. Adequate Reporting of Dental Diagnostic Accuracy Studies is Lacking: An Assessment of Reporting in Relation to the Standards for Reporting of Diagnostic Accuracy Studies Statement. J Evid Based Dent Pract 2019;19:283-294. [DOI: 10.1016/j.jebdp.2019.02.002]
5. Jin Y, Sanger N, Shams I, Luo C, Shahid H, Li G, Bhatt M, Zielinski L, Bantoto B, Wang M, Abbade LP, Nwosu I, Leenus A, Mbuagbaw L, Maaz M, Chang Y, Sun G, Levine MA, Adachi JD, Thabane L, Samaan Z. Does the medical literature remain inadequately described despite having reporting guidelines for 21 years? - A systematic review of reviews: an update. J Multidiscip Healthc 2018;11:495-510. [PMID: 30310289] [PMCID: PMC6166749] [DOI: 10.2147/jmdh.s155103]
Abstract
PURPOSE Reporting guidelines (eg, the Consolidated Standards of Reporting Trials [CONSORT] statement) are intended to improve reporting standards and enhance the transparency and reproducibility of research findings. Despite the accessibility of such guidelines, researchers are not required to adhere to them. Our goal was to determine the current status of reporting quality in the medical literature and examine whether adherence to reporting guidelines has improved since their inception. MATERIALS AND METHODS Eight reporting guidelines were examined: CONSORT, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), STrengthening the Reporting of OBservational studies in Epidemiology (STROBE), Quality of Reporting of Meta-analyses (QUOROM), STAndards for Reporting of Diagnostic accuracy (STARD), Animal Research: Reporting In Vivo Experiments (ARRIVE), Consolidated Health Economic Evaluation Reporting Standards (CHEERS), and Meta-analysis of Observational Studies in Epidemiology (MOOSE). We included English-language reviews published between January 1996 and September 2016 that investigated adherence to reporting guidelines in the literature on clinical trials, systematic reviews, observational studies, meta-analyses, diagnostic accuracy studies, economic evaluations, and preclinical animal studies. All reviews were found on Web of Science, Excerpta Medica Database (EMBASE), MEDLINE, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL). RESULTS Of the 26,819 records retrieved by the search strategy, 124 studies were included after screening. We found that 87.9% of the included studies reported suboptimal adherence to reporting guidelines. Factors associated with poor adherence included non-pharmacological interventions, year of publication, and trials concluding with significant results. Improved adherence was associated with better study design features such as allocation concealment and random sequence generation, large sample sizes, adequately powered studies, multiple authorship, and publication in journals endorsing guidelines. CONCLUSION We conclude that the level of adherence to reporting guidelines remains suboptimal. Endorsement of reporting guidelines by journals is important and recommended.
Affiliation(s)
- Yanling Jin
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Nitika Sanger
- Department of Medical Science, Medical Sciences Graduate Program, McMaster University, Hamilton, ON, Canada
- Ieta Shams
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Candice Luo
- Faculty of Health Sciences, Bachelors of Health Sciences, McMaster University, Hamilton, ON, Canada
- Hamnah Shahid
- Department of Arts and Science, McMaster University, Hamilton, ON, Canada
- Guowei Li
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Meha Bhatt
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Laura Zielinski
- Department of Neuroscience, McMaster Integrative Neuroscience Discovery and Study, McMaster University, Hamilton, ON, Canada
- Bianca Bantoto
- Department of Science, Honours Integrated Sciences Program, McMaster University, Hamilton, ON, Canada
- Mei Wang
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Luciana Pf Abbade
- Department of Dermatology and Radiotherapy, Botucatu Medical School, Universidade Estadual Paulista, UNESP, São Paulo, Brazil
- Ikunna Nwosu
- Faculty of Health Sciences, Bachelors of Health Sciences, McMaster University, Hamilton, ON, Canada
- Alvin Leenus
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Lawrence Mbuagbaw
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Muhammad Maaz
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Yaping Chang
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Guangwen Sun
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Mitchell Ah Levine
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- St. Joseph's Healthcare Hamilton, Hamilton, ON, Canada
- Jonathan D Adachi
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- St. Joseph's Healthcare Hamilton, Hamilton, ON, Canada
- Lehana Thabane
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- St. Joseph's Healthcare Hamilton, Hamilton, ON, Canada
- Zainab Samaan
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
6. Grob ATM, van der Vaart LR, Withagen MIJ, van der Vaart CH. Quality of reporting of diagnostic accuracy studies on pelvic floor three-dimensional transperineal ultrasound: a systematic review. Ultrasound Obstet Gynecol 2017;50:451-457. [PMID: 28000958] [DOI: 10.1002/uog.17390]
Abstract
OBJECTIVE In recent years, a large number of studies have been published on the clinical relevance of pelvic floor three-dimensional (3D) transperineal ultrasound. Several studies compare sonography with other imaging modalities or with clinical examination. The quality of reporting in these studies is not known. The objective of this systematic review was to determine the compliance of diagnostic accuracy studies investigating pelvic floor 3D ultrasound with the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. METHODS Published articles on pelvic floor 3D ultrasound were identified by a systematic literature search of the MEDLINE, Web of Science and Scopus databases. Prospective and retrospective studies that compared pelvic floor 3D ultrasound with other clinical and imaging diagnostics were included in the analysis. STARD compliance was assessed and quantified by two independent investigators, using 22 of the original 25 STARD checklist items. Items with the qualifier 'if done' (Items 13, 23 and 24) were excluded because they were not applicable to all papers. Each item was scored as reported (score = 1) or not reported (score = 0). Observer variability, the total number of reported STARD items per article and summary scores for each item were calculated. The difference in total score between STARD-adopting and non-adopting journals was tested statistically, as was the effect of year of publication. RESULTS Forty studies published in 13 scientific journals were included in the analysis. The mean ± SD STARD checklist score of the included articles was 16.0 ± 2.5 out of a maximum of 22 points. The lowest scores (< 50%) were found for reporting of the handling of indeterminate results or missing responses, adverse events and the time interval between tests. Interobserver agreement for rating the STARD items was excellent (intraclass correlation coefficient, 0.77). An independent t-test showed no significant difference in mean ± SD total STARD checklist score between STARD-adopting and non-adopting journals (16.4 ± 2.2 vs 15.9 ± 2.6, respectively). The mean ± SD STARD checklist score for articles published in 2003-2009 was lower than, but not statistically different from, that for articles published in 2010-2015 (15.2 ± 2.5 vs 16.6 ± 2.4, respectively). CONCLUSION The overall compliance with reporting guidelines of diagnostic accuracy studies on pelvic floor 3D transperineal ultrasound is relatively good compared with other fields of medicine. However, specific checklist items require more attention in reporting.
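The interobserver agreement reported above (intraclass correlation coefficient of 0.77 between two investigators) corresponds to an ICC on per-article checklist totals. Below is a minimal sketch of a two-way, single-measure ICC (Shrout-Fleiss ICC(2,1)) on simulated ratings; the paper does not publish its raw scores, so all numbers are assumptions.

```python
# Hedged sketch of a two-way, single-measure intraclass correlation,
# Shrout-Fleiss ICC(2,1), between two raters' total STARD scores per article.
# The ratings are simulated; the paper's raw scores are not published.
import numpy as np

rng = np.random.default_rng(2)
true_score = rng.normal(16, 2.5, 40)                       # 40 hypothetical articles
ratings = np.column_stack([
    true_score + rng.normal(0, 1.2, 40),                   # rater 1
    true_score + rng.normal(0, 1.2, 40),                   # rater 2
])

n, k = ratings.shape
grand = ratings.mean()
ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)  # between articles
ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)  # between raters
ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))
icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) ~ {icc21:.2f}")
```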
Affiliation(s)
- A T M Grob
- Department of Reproductive Medicine and Gynecology, University Medical Center, Utrecht, The Netherlands
- MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, The Netherlands
- M I J Withagen
- Department of Reproductive Medicine and Gynecology, University Medical Center, Utrecht, The Netherlands
- C H van der Vaart
- Department of Reproductive Medicine and Gynecology, University Medical Center, Utrecht, The Netherlands
7. Dilauro M, McInnes MDF, Korevaar DA, van der Pol CB, Petrcich W, Walther S, Quon J, Kurowecki D, Bossuyt PMM. Is There an Association between STARD Statement Adherence and Citation Rate? Radiology 2016;280:62-7. [DOI: 10.1148/radiol.2016151384]
8. Fidalgo BMR, Crabb DP, Lawrenson JG. Methodology and reporting of diagnostic accuracy studies of automated perimetry in glaucoma: evaluation using a standardised approach. Ophthalmic Physiol Opt 2016;35:315-23. [PMID: 25913874] [DOI: 10.1111/opo.12208]
Abstract
PURPOSE To evaluate the methodological and reporting quality of diagnostic accuracy studies of perimetry in glaucoma and to determine whether there had been any improvement since the publication of the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. METHODS A systematic review of English-language articles published between 1993 and 2013 reporting the diagnostic accuracy of perimetry in glaucoma. Articles were appraised for methodological quality using the 14-item Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool and evaluated for quality of reporting by applying the STARD checklist. RESULTS Fifty-eight articles were appraised. The overall methodological quality of these studies was moderate, with a median of nine QUADAS items (out of a maximum of 14) rated as 'yes' (IQR 7-10). The studies were often poorly reported; the median number of STARD items fully reported was 11 out of 25 (IQR 10-14). A comparison of the studies published in the 10-year periods before and after the publication of the STARD checklist in 2003 found that quality of reporting had not substantially improved. CONCLUSIONS The methodological and reporting quality of diagnostic accuracy studies of perimetry is sub-optimal and appears not to have improved substantially following the development of the STARD reporting guidance. This observation is consistent with previous studies in ophthalmology and in other medical specialities.
Affiliation(s)
- Bruno M R Fidalgo
- Division of Optometry and Visual Science, City University London, London, UK
9. de Boer MW, LeBlanc SJ, Dubuc J, Meier S, Heuwieser W, Arlt S, Gilbert RO, McDougall S. Invited review: Systematic review of diagnostic tests for reproductive-tract infection and inflammation in dairy cows. J Dairy Sci 2014;97:3983-99. [PMID: 24835959] [DOI: 10.3168/jds.2013-7450]
Abstract
The objective of this study was to conduct a systematic and critical appraisal of the quality of previous publications and to describe the diagnostic methods, diagnostic criteria and definitions, repeatability, and agreement among methods for diagnosis of vaginitis, cervicitis, endometritis, salpingitis, and oophoritis in dairy cows. Publications (n=1,600) that included the words "dairy," "cows," and at least one disease of interest were located with online search engines. In total, 51 papers were selected for comprehensive review by pairs of the authors. Only 61% (n=31) of the 51 reviewed papers provided a definition or citation for the disease or diagnostic methods studied, and only 49% (n=25) provided data or a citation to support the test cut point used for diagnosing disease. Furthermore, a large proportion of the papers did not provide sufficient detail to allow critical assessment of the quality of design or reporting. Of 11 described diagnostic methods, only one complete methodology, i.e., vaginoscopy, was assessed for both within- and between-operator repeatability (κ=0.55-0.60 and 0.44, respectively). In the absence of a gold standard, comparisons between different tests have been undertaken. Agreement between the various diagnostic methods is low. These discrepancies may indicate that the methods assess different aspects of reproductive health and underline the importance of tying diagnostic criteria to objective measures of reproductive performance. Studies that used a reproductive outcome to select cut points and tests have the greatest clinical utility. This approach has demonstrated, for example, that the presence of (muco)purulent discharge in the vagina and an increased proportion of leukocytes in cytological preparations following uterine lavage or cytobrush sampling are associated with poorer reproductive outcomes. The lack of validated, consistent definitions and outcome variables makes comparison of the different tests difficult. The quality of design and reporting in future publications could be improved by using checklists as a guideline. Further high-quality research based on published standards to improve study design and reporting should improve cow-side diagnostic tests. Specifically, more data on intra- and interobserver agreement are needed to evaluate test variability, and more studies are necessary to determine optimal cut points and the optimal time postpartum for examination.
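The κ values quoted above are Cohen's kappa, the chance-corrected agreement between two operators applying the same test. The sketch below shows the computation on an invented set of paired vaginoscopy calls; it is illustrative only and not derived from the review's data.

```python
# Illustrative sketch of Cohen's kappa, the chance-corrected agreement
# statistic quoted above for vaginoscopy repeatability. The paired calls
# (1 = abnormal discharge, 0 = normal) are invented for demonstration.
import numpy as np

op1 = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0])
op2 = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0])

p_obs = np.mean(op1 == op2)                         # observed agreement
p1, p2 = op1.mean(), op2.mean()
p_exp = p1 * p2 + (1 - p1) * (1 - p2)               # agreement expected by chance
kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"observed agreement = {p_obs:.2f}, kappa = {kappa:.2f}")
```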
Affiliation(s)
- M W de Boer
- Cognosco, Anexa Animal Health, Morrinsville 3300, New Zealand; Epicentre, Institute of Veterinary, Animal and Biomedical Sciences, Massey University, Palmerston North 4442, New Zealand.
- S J LeBlanc
- Department of Population Medicine, Ontario Veterinary College, University of Guelph, Guelph, Ontario N1G 2W1, Canada
- J Dubuc
- Département de Sciences Cliniques, Faculté de Médecine Vétérinaire, Université de Montréal, Saint-Hyacinthe, Québec J2S 7C6, Canada
- S Meier
- DairyNZ Limited, Hamilton 3240, New Zealand
- W Heuwieser
- Clinic for Animal Reproduction, Faculty of Veterinary Medicine, Freie Universität Berlin, 14163 Berlin, Germany
- S Arlt
- Clinic for Animal Reproduction, Faculty of Veterinary Medicine, Freie Universität Berlin, 14163 Berlin, Germany
- R O Gilbert
- Department of Clinical Sciences, College of Veterinary Medicine, Cornell University, Ithaca, NY 14853
- S McDougall
- Cognosco, Anexa Animal Health, Morrinsville 3300, New Zealand
10. Walther S, Schueler S, Tackmann R, Schuetz GM, Schlattmann P, Dewey M. Compliance with STARD Checklist among Studies of Coronary CT Angiography: Systematic Review. Radiology 2014;271:74-86. [DOI: 10.1148/radiol.13121720]
11. Korevaar DA, van Enst WA, Spijker R, Bossuyt PMM, Hooft L. Reporting quality of diagnostic accuracy studies: a systematic review and meta-analysis of investigations on adherence to STARD. Evid Based Med 2013;19:47-54. [PMID: 24368333] [DOI: 10.1136/eb-2013-101637]
Abstract
BACKGROUND Poor reporting of diagnostic accuracy studies impedes an objective appraisal of the clinical performance of diagnostic tests. The Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement, first published in 2003, aims to improve the reporting quality of such studies. OBJECTIVE To investigate to what extent published diagnostic accuracy studies adhere to the 25-item STARD checklist, whether the reporting quality has improved since STARD's launch, and whether any factors are associated with adherence. STUDY SELECTION We performed a systematic review and searched MEDLINE, EMBASE and the Methodology Register of the Cochrane Library for studies that primarily aimed to examine the reporting quality of articles on diagnostic accuracy studies in humans by evaluating adherence to STARD. Study selection was performed in duplicate; data were extracted by one author and verified by the second author. FINDINGS We included 16 studies, analysing 1496 articles in total. Three studies investigated adherence in a general sample of diagnostic accuracy studies; the others did so in a specific field of research. The overall mean number of items reported varied from 9.1 to 14.3 across the 13 evaluations that assessed all 25 STARD items. Six studies quantitatively compared post-STARD with pre-STARD articles. Combining these results in a random-effects meta-analysis revealed a modest but significant increase in adherence after STARD's introduction (mean difference 1.41 items, 95% CI 0.65 to 2.18). CONCLUSIONS The reporting quality of diagnostic accuracy studies was consistently moderate, at least through the mid-2000s. Our results suggest a small improvement in the years after the introduction of STARD. Adherence to STARD should be further promoted among researchers, editors and peer reviewers.
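The pooled estimate above (mean difference 1.41 items, 95% CI 0.65 to 2.18) comes from a random-effects meta-analysis of study-level pre/post-STARD differences. A minimal DerSimonian-Laird sketch follows; the six mean differences and standard errors are invented, not the review's extracted data.

```python
# Minimal sketch (not the review's code): DerSimonian-Laird random-effects
# pooling of study-level mean differences in the number of STARD items
# reported post- vs. pre-STARD, with a 95% CI for the pooled difference.
# The six mean differences and standard errors are invented.
import numpy as np

md = np.array([0.8, 1.2, 2.0, 1.5, 0.5, 2.4])       # mean differences (items)
se = np.array([0.6, 0.5, 0.9, 0.7, 0.8, 1.0])       # their standard errors
var = se ** 2

w = 1 / var                                         # fixed-effect weights
fixed = np.sum(w * md) / w.sum()
q = np.sum(w * (md - fixed) ** 2)                   # Cochran's Q
tau2 = max(0.0, (q - (len(md) - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))

w_re = 1 / (var + tau2)                             # random-effects weights
pooled = np.sum(w_re * md) / w_re.sum()
se_pooled = np.sqrt(1 / w_re.sum())
print(f"pooled difference = {pooled:.2f} items "
      f"(95% CI {pooled - 1.96 * se_pooled:.2f} to {pooled + 1.96 * se_pooled:.2f})")
```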
Affiliation(s)
- Daniël A Korevaar
- Department of Clinical Epidemiology, Biostatistics and Bioinformatics (KEBB), Academic Medical Centre (AMC), University of Amsterdam (UvA), Amsterdam, The Netherlands
12. Zintzaras E, Papathanasiou AA, Ziogas DC, Voulgarelis M. The reporting quality of studies investigating the diagnostic accuracy of anti-CCP antibody in rheumatoid arthritis and its impact on diagnostic estimates. BMC Musculoskelet Disord 2012;13:113. [PMID: 22730931] [PMCID: PMC3488511] [DOI: 10.1186/1471-2474-13-113]
Abstract
BACKGROUND Anti-CCP testing has recently become popular in the diagnosis of rheumatoid arthritis (RA). However, inadequate reporting of the relevant diagnostic studies may bias and overestimate the results, leading scientists to make false decisions. The aim of the present study was to evaluate the reporting quality of studies that used anti-CCP2 for the diagnosis of RA and to explore the impact of reporting quality on pooled estimates of diagnostic measures. METHODS PubMed was searched for clinical studies investigating the diagnostic accuracy of anti-CCP. The studies were evaluated for their reporting quality according to the STARD statement. The overall reporting quality and the differences between high- and low-quality studies were explored. The effect of reporting quality on pooled estimates of diagnostic accuracy was also examined. RESULTS The overall reporting quality was relatively good, but some essential methodological aspects of the studies are seldom reported, making the assessment of study validity difficult. Comparing the quality of reporting in high- versus low-quality articles, significant differences were seen in a relatively large number of methodological items. Overall, the STARD score (high/low) had no effect on the pooled sensitivities and specificities. However, the reporting of specific STARD items (e.g., sufficient reporting of the methods used in calculating the measures of diagnostic accuracy and reporting of demographic and clinical characteristics/features of the study population) had an effect on sensitivity and specificity. CONCLUSIONS The reporting quality of diagnostic studies needs further improvement, since study quality may bias the estimates of diagnostic accuracy.
Affiliation(s)
- Elias Zintzaras
- Department of Biomathematics, University of Thessaly School of Medicine, 2 Panepistimiou Str, Larissa, 41110, Greece
- The Institute for Clinical Research and Health Policy Studies, Tufts-New England Medical Center, Tufts University School of Medicine, Boston, USA
- Afroditi A Papathanasiou
- Department of Biomathematics, University of Thessaly School of Medicine, 2 Panepistimiou Str, Larissa, 41110, Greece
- Dimitrios C Ziogas
- Department of Gastroenterology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Michael Voulgarelis
- Department of Pathophysiology, National University of Athens School of Medicine, Athens, Greece
13. Selman TJ, Morris RK, Zamora J, Khan KS. The quality of reporting of primary test accuracy studies in obstetrics and gynaecology: application of the STARD criteria. BMC Womens Health 2011;11:8. [PMID: 21429185] [PMCID: PMC3072919] [DOI: 10.1186/1472-6874-11-8]
Abstract
Background In obstetrics and gynaecology there has been rapid growth in the development of new tests and in primary studies of their accuracy. It is imperative that such studies are reported with transparency, allowing the detection of any potential bias that may invalidate the results. The objective of this study was to determine the quality of reporting in diagnostic test accuracy studies in obstetrics and gynaecology using the Standards for Reporting of Diagnostic Accuracy (STARD) checklist. Methods The included studies of ten systematic reviews were assessed for compliance with each of the reporting criteria. Using appropriate statistical tests, we investigated whether there had been an improvement in reporting quality since the introduction of the STARD checklist and whether a correlation existed between study sample size, country of origin and reporting quality. Results A total of 300 studies were included (195 for obstetrics, 105 for gynaecology). The overall reporting quality of the included studies against the STARD criteria was poor. Obstetric studies reported adequately more than 50% of the time for 62.1% (18/29) of the items, while gynaecologic studies did so for 51.7% (15/29). Mean compliance with the STARD criteria was greater in the obstetric studies than in the gynaecological studies (p < 0.0001). There was a positive correlation, in both obstetrics (p < 0.0001) and gynaecology (p = 0.0123), between study sample size and reporting quality. No correlation between geographical area of publication and compliance with the reporting criteria could be demonstrated. Conclusions The reporting quality of papers in obstetrics and gynaecology is improving. This may be due to initiatives such as the STARD checklist as well as historical progress in awareness among authors of the need to accurately report studies. There is, however, considerable scope for further improvement.
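The sample-size correlation reported above is the kind of rank-based association a Spearman test captures. The following sketch runs such a test on simulated data (a hypothetical set of 195 obstetric studies with a weak positive trend built in); none of it reproduces the study's dataset.

```python
# Hedged illustration of the correlation analysis described above: a Spearman
# rank correlation between study sample size and STARD reporting score.
# Both variables are simulated with a weak positive trend built in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample_size = rng.integers(30, 2000, 195)           # hypothetical obstetric studies
stard_score = np.clip(
    10 + 3 * np.log10(sample_size) + rng.normal(0, 3, 195), 0, 29
).round()

rho, p = stats.spearmanr(sample_size, stard_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.4g}")
```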
Affiliation(s)
- Tara J Selman
- School of Clinical and Experimental Medicine (Reproduction, Genes and Development), University of Birmingham, Birmingham Women's Hospital, Birmingham, B15 2TG, UK