Reporting Weaknesses in Conference Abstracts of Diagnostic Accuracy Studies in Ophthalmology.
ABSTRACT:
IMPORTANCE: Conference abstracts present information that helps clinicians and researchers to decide whether to attend a presentation. They also provide a source of unpublished research that could potentially be included in systematic reviews. We systematically assessed whether conference abstracts of studies that evaluated the accuracy of a diagnostic test were sufficiently informative.
OBSERVATIONS: We identified all abstracts describing work presented at the 2010 Annual Meeting of the Association for Research in Vision and Ophthalmology. Abstracts were eligible if they included a measure of diagnostic accuracy, such as sensitivity, specificity, or likelihood ratios. Two independent reviewers evaluated each abstract using a list of 21 items, selected from published guidance for adequate reporting. A total of 126 of 6310 abstracts presented were eligible. Only a minority reported inclusion criteria (5%), clinical setting (24%), patient sampling (10%), reference standard (48%), whether test readers were masked (7%), 2 × 2 tables (16%), and confidence intervals around accuracy estimates (16%). The mean number of items reported was 8.9 of 21 (SD, 2.1; range, 4-17).
CONCLUSIONS AND RELEVANCE: Crucial information about study methods and results is often missing in abstracts of diagnostic studies presented at the Association for Research in Vision and Ophthalmology Annual Meeting, making it difficult to assess risk of bias and applicability to specific clinical settings.
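For context, the accuracy measures the review looked for (sensitivity, specificity, likelihood ratios, and their confidence intervals) all derive from the 2 × 2 table of index test results against the reference standard. The sketch below is illustrative only: the counts are hypothetical, not taken from the study, and the Wilson score interval is one common choice for the confidence intervals around sensitivity and specificity.

import math

def wilson_ci(successes, total, z=1.96):
    """Wilson score confidence interval for a proportion (illustrative choice)."""
    if total == 0:
        return (float("nan"), float("nan"))
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (centre - half, centre + half)

def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2 x 2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": (sens, wilson_ci(tp, tp + fn)),
        "specificity": (spec, wilson_ci(tn, tn + fp)),
        "LR+": sens / (1 - spec) if spec < 1 else float("inf"),
        "LR-": (1 - sens) / spec if spec > 0 else float("inf"),
    }

# Hypothetical counts, for illustration only
print(diagnostic_accuracy(tp=80, fp=10, fn=20, tn=90))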
SUBMITTER: Korevaar DA
PROVIDER: S-EPMC5031079 | biostudies-literature | 2015 Dec
REPOSITORIES: biostudies-literature