Reported estimates of diagnostic accuracy in ophthalmology conference abstracts were not associated with full-text publication.
ABSTRACT: To assess whether conference abstracts that report higher estimates of diagnostic accuracy are more likely to reach full-text publication in a peer-reviewed journal. We identified abstracts describing diagnostic accuracy studies presented between 2007 and 2010 at the Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting. We extracted reported estimates of sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and diagnostic odds ratio (DOR). Between May and July 2015, we searched MEDLINE and EMBASE to identify corresponding full-text publications; if needed, we contacted abstract authors. Cox regression was performed to estimate associations with full-text publication, with sensitivity, specificity, and AUC logit transformed and DOR log transformed. A full-text publication was found for 226/399 (57%) included abstracts. There was no association between reported estimates of sensitivity and full-text publication (hazard ratio [HR] 1.09 [95% confidence interval {CI} 0.98, 1.22]). The same applied to specificity (HR 1.00 [95% CI 0.88, 1.14]), AUC (HR 0.91 [95% CI 0.75, 1.09]), and DOR (HR 1.01 [95% CI 0.94, 1.09]). Almost half of the ARVO conference abstracts describing diagnostic accuracy studies did not reach full-text publication. Studies in abstracts that mentioned higher accuracy estimates were not more likely to be reported in a full-text publication.
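To make the modeling step concrete, here is a minimal sketch of a Cox proportional hazards regression with a logit-transformed sensitivity estimate as the predictor. This is illustrative only, not the authors' code: the column names (`sensitivity`, `months_to_publication`, `published`) and data are hypothetical, and unpublished abstracts are assumed to be censored at the end of follow-up.

```python
# Minimal sketch of the reported analysis (assumed column names and data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

# Hypothetical data: one row per abstract.
df = pd.DataFrame({
    "sensitivity": [0.85, 0.92, 0.70, 0.99, 0.60, 0.88, 0.75, 0.95],  # estimate in the abstract
    "months_to_publication": [14, 60, 35, 9, 60, 22, 48, 12],         # time to publication or censoring
    "published": [1, 0, 1, 1, 0, 1, 0, 1],                            # 1 = full text found, 0 = censored
})

# Logit transform the proportion-type estimate, as described in the abstract.
df["logit_sens"] = np.log(df["sensitivity"] / (1 - df["sensitivity"]))

cph = CoxPHFitter()
cph.fit(df[["logit_sens", "months_to_publication", "published"]],
        duration_col="months_to_publication", event_col="published")
cph.print_summary()  # exp(coef) is the hazard ratio per unit increase in logit(sensitivity)
```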
Project description: IMPORTANCE: Conference abstracts present information that helps clinicians and researchers to decide whether to attend a presentation. They also provide a source of unpublished research that could potentially be included in systematic reviews. We systematically assessed whether conference abstracts of studies that evaluated the accuracy of a diagnostic test were sufficiently informative. OBSERVATIONS: We identified all abstracts describing work presented at the 2010 Annual Meeting of the Association for Research in Vision and Ophthalmology. Abstracts were eligible if they included a measure of diagnostic accuracy, such as sensitivity, specificity, or likelihood ratios. Two independent reviewers evaluated each abstract using a list of 21 items selected from published guidance for adequate reporting. A total of 126 of 6310 abstracts presented were eligible. Only a minority reported inclusion criteria (5%), clinical setting (24%), patient sampling (10%), reference standard (48%), whether test readers were masked (7%), 2 × 2 tables (16%), and confidence intervals around accuracy estimates (16%). The mean number of items reported was 8.9 of 21 (SD, 2.1; range, 4-17). CONCLUSIONS AND RELEVANCE: Crucial information about study methods and results is often missing from abstracts of diagnostic studies presented at the Association for Research in Vision and Ophthalmology Annual Meeting, making it difficult to assess risk of bias and applicability to specific clinical settings.
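A minimal sketch of how such checklist scores can be tallied, assuming a hypothetical 0/1 scoring matrix rather than the study's actual ratings:

```python
# Tallying checklist compliance (hypothetical scores, not the study's data).
import numpy as np

# Rows = abstracts, columns = checklist items, 1 = item adequately reported.
# The real study scored 126 abstracts on 21 items; we simulate that shape here.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(126, 21))

per_item_rate = scores.mean(axis=0) * 100   # % of abstracts reporting each item
items_per_abstract = scores.sum(axis=1)     # number of items reported per abstract

print(f"mean items reported: {items_per_abstract.mean():.1f} of 21 "
      f"(SD {items_per_abstract.std(ddof=1):.1f}, "
      f"range {items_per_abstract.min()}-{items_per_abstract.max()})")
```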
Project description: Abstracts submitted to meetings are subject to less rigorous peer review than full-text manuscripts. This study aimed to explore the publication outcome of abstracts presented at the American Academy of Ophthalmology (AAO) annual meeting. Abstracts presented at the 2008 AAO meeting were analyzed. Each presented abstract was sought via PubMed to identify if it had been published as a full-text manuscript. The publication outcome, journal impact factor (IF), and time to publication were recorded. A total of 690 abstracts were reviewed, of which 39.1% were subsequently published. They were published in journals with a median IF of 2.9 (range 0-7.2) and a median publication time of 426 days (range 0-2,133 days). A quarter were published in the journal Ophthalmology, with a shorter time to publication (median 282 vs. 534 days, p=0.003). Oral presentations were more likely to be published than poster presentations (57.8% vs. 35.9%, p<0.001) and in journals with higher IFs (3.2 vs. 2.8, p=0.02). Abstracts describing rare diseases had higher publication rates (49.4% vs. 38.0%, p=0.04) and were published in higher-IF journals (3.7 vs. 2.9, p=0.03), within a shorter period of time (358 vs. 428 days, p=0.03). In multivariate analysis, affiliation with an institute located in the United States (p=0.002), abstracts describing rare diseases (p=0.03), and funded studies (p=0.03) were associated with publication in higher-IF journals. Almost 40% of abstracts were published. Factors that correlated with publication in journals with higher IF were a focus on rare diseases, affiliation with a US institute, and funding.
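As an illustration, the oral-versus-poster comparison above can be reproduced with a chi-square test. The cell counts below are reconstructed from the reported percentages and total (690 abstracts, 39.1% published) and are assumptions, not figures taken from the paper.

```python
# Chi-square test on reconstructed (assumed) counts matching the reported rates.
from scipy.stats import chi2_contingency

#            published  not published
oral_row   = [59,        43]     # 59/102  = 57.8% (hypothetical n)
poster_row = [211,       377]    # 211/588 = 35.9% (hypothetical n)

chi2, p, dof, expected = chi2_contingency([oral_row, poster_row])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p < 0.001, as reported
```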
Project description: Background: Scientists communicate progress and exchange information via publication and presentation at scientific meetings. We previously showed that text similarity analysis applied to Medline can identify and quantify plagiarism and duplicate publications in peer-reviewed biomedical journals. In the present study, we applied the same analysis to a large sample of conference abstracts. Methods: We downloaded 144,149 abstracts from 207 national and international meetings of 63 biomedical conferences. Pairwise comparisons were made using eTBLAST, a text similarity engine. A domain expert then reviewed random samples of highly similar abstracts (1500 total) to estimate the extent of text overlap and possible plagiarism. Results: Our main findings indicate that the vast majority of textual overlap occurred within the same meeting (2%) and between meetings of the same conference (3%), both of which were significantly higher than instances of plagiarism, which occurred in less than 0.5% of abstracts. Conclusions: This analysis indicates that textual overlap in abstracts of papers presented at scientific meetings is one-tenth that of peer-reviewed publications, yet the plagiarism rate is approximately the same as previously measured in peer-reviewed publications. This latter finding underscores a need for monitoring scientific meeting submissions, as is now done when submitting manuscripts to peer-reviewed journals, to improve the integrity of scientific communications.
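eTBLAST is a dedicated similarity engine; as a generic stand-in, the sketch below screens abstract pairs with TF-IDF cosine similarity and flags high-scoring pairs for manual review. It mirrors the workflow described above without claiming to reproduce eTBLAST itself; the threshold and example texts are assumptions.

```python
# Generic pairwise text-similarity screen (illustrative stand-in for eTBLAST).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Retinal imaging improves detection of early glaucoma.",
    "Early glaucoma detection is improved by retinal imaging.",
    "Corneal thickness varies with age in a healthy cohort.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
sim = cosine_similarity(tfidf)

# Flag highly similar pairs for expert review, as the study's domain expert did.
THRESHOLD = 0.6  # assumed cutoff
for i in range(len(abstracts)):
    for j in range(i + 1, len(abstracts)):
        if sim[i, j] >= THRESHOLD:
            print(f"abstracts {i} and {j}: similarity {sim[i, j]:.2f}")
```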
Project description: Background: Including results from unpublished randomized controlled trials (RCTs) in a systematic review may ameliorate the effect of publication bias on systematic review results. Unpublished RCTs are sometimes described in abstracts presented at conferences, included in trials registers, or both. Trial results may not be available in a trials register, and abstracts describing RCT results often lack study design information. Complementary information from a trials register record may be sufficient to allow reliable inclusion in a systematic review of an unpublished RCT available only as an abstract. Methods: We identified 496 abstracts describing RCTs presented at the 2007 to 2009 Association for Research in Vision and Ophthalmology (ARVO) meetings; 154 RCTs were registered in ClinicalTrials.gov. Two persons extracted verbatim the primary and non-primary outcomes reported in the abstract and the ClinicalTrials.gov record. We compared each abstract outcome with all ClinicalTrials.gov outcomes and coded matches as complete, partial, or no match. Results: We identified 800 outcomes in 152 abstracts (95 primary outcomes in 51 abstracts and 705 non-primary outcomes in 141 abstracts); no outcomes were reported in 2 abstracts. Of the 95 primary outcomes, 17 (18%) agreed completely, 53 (56%) agreed partially, and 25 (26%) had no match with a ClinicalTrials.gov primary or non-primary outcome. Of the 705 non-primary outcomes, 56 (8%) agreed completely, 205 (29%) agreed partially, and 444 (63%) had no match. Among the 258 partially agreeing outcomes, information on when the outcome was measured was available more often in ClinicalTrials.gov than in the abstract (141/258 [55%] versus 55/258 [21%]). We found no association between the presence of non-matching "new" outcomes and year of registration, time to registry update, industry sponsorship, or multi-center status. Conclusion: Conference abstracts may be a valuable source of information about results for outcomes of unpublished RCTs that have been registered in ClinicalTrials.gov, and complementary descriptive information may be present for outcomes reported in both sources. However, ARVO abstract authors also present outcomes not reported in ClinicalTrials.gov, and these may represent analyses not originally planned.
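The complete/partial/no-match coding was done by two human reviewers; purely to illustrate the three categories, here is a toy token-overlap heuristic (an assumption of ours, not the study's method):

```python
# Toy illustration of the complete / partial / no-match categories.
def match_level(abstract_outcome: str, registry_outcome: str) -> str:
    a = set(abstract_outcome.lower().split())
    b = set(registry_outcome.lower().split())
    if a == b:
        return "complete"  # identical wording
    if a & b:
        return "partial"   # some shared terms, e.g. extra timing detail on one side
    return "none"

print(match_level("best corrected visual acuity at 12 months",
                  "best corrected visual acuity"))  # -> partial
```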
Project description: Objective: To determine the publication rate of abstracts presented at the Japan Primary Care Association Annual Meetings and the factors associated with publication. Design: A retrospective observational study. Participants: All abstracts presented at the Japan Primary Care Association Annual Meetings (2010-2012). Main outcome measures: Publication rates were determined by searching the MEDLINE database for full-text articles published by September 2017. Data on presentation format (oral vs poster), affiliation of the first author, number of authors, number of involved institutions, journal of publication, and publication date were abstracted. Results: Of the 1003 abstracts evaluated, 38 (3.8%, 95% CI 2.6% to 5.0%) were subsequently published in peer-reviewed journals indexed in the MEDLINE database. The median time to publication was 15.5 months (IQR 9.3-29.3 months). More than 95% of published abstracts were published within 4 years. The publications appeared in 23 different journals (21 English-language journals and two Japanese-language journals). In univariate analysis using binary logistic regression, publication was more frequent for oral presentations (7.3% vs 2.0% for poster presentations; OR 3.91, 95% CI 1.98 to 7.75) and for first authors affiliated with university-associated institutions (6.4% vs 2.4% for first authors affiliated with non-university-associated institutions; OR 2.75, 95% CI 1.42 to 5.30). In multivariate analysis, oral presentation and first author affiliation with a university-associated institution remained the only independent predictors of publication (adjusted OR 3.50 [95% CI 1.72 to 7.12] and adjusted OR 2.35 [95% CI 1.19 to 4.63], respectively). Even among the 151 abstracts presented orally by first authors affiliated with a university-associated institution, only 18 (11.9%) were subsequently published in peer-reviewed journals. Conclusions: The publication rate of abstracts presented at the Japan Primary Care Association Annual Meetings was extremely low. Further studies are warranted to investigate the barriers to publication among investigators who participate in conferences where the publication rate is extremely low.
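A minimal sketch of the univariate logistic regression behind ORs like those above, using illustrative counts chosen to approximate the reported 7.3% vs 2.0% publication rates (the exact group sizes are not given in the abstract, so these numbers are assumptions):

```python
# Univariate logistic regression yielding an odds ratio (illustrative counts).
import numpy as np
import statsmodels.api as sm

# Predictor: 1 = oral presentation, 0 = poster. Outcome: 1 = published.
oral      = np.array([1] * 301 + [0] * 702)
published = np.array([1] * 22 + [0] * 279 +   # 22/301  ~ 7.3% of oral published
                     [1] * 14 + [0] * 688)    # 14/702  ~ 2.0% of poster published

X = sm.add_constant(oral)
fit = sm.Logit(published, X).fit(disp=0)
or_est = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"OR = {or_est:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```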
Project description: Background: The preliminary results of a study are usually presented as an abstract at conference meetings. The reporting quality of those abstracts and the relationship between their study designs and the full-paper publication rate is unknown. We hypothesized that randomized controlled trials are more likely to be published as full papers than observational studies. Methods: 154 oral abstracts presented at the World Congress of Sports Injury Prevention 2005 Oslo and the corresponding full-paper publications were identified and analysed. The main outcome measures were frequency of publication, time to publication, impact factor, CONSORT (Consolidated Standards of Reporting Trials) score, STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) score, and minor and major inconsistencies between the abstract and the full-paper publication. Results: Overall, 76 of the 154 (49%) presented abstracts were published as full papers in a peer-reviewed journal, with a mean impact factor of 1.946 ± 0.812. No significant difference existed between the impact factor for randomized controlled trials (2.122 ± 1.015) and observational studies (1.913 ± 0.765, p = 0.469). The full papers for the randomized controlled trials were published after an average (SD) of 17 (± 13) months; for observational studies, the average (SD) was 12 (± 14) months (p = 0.323). A trend was observed toward a higher percentage of randomized controlled trial abstracts than observational study abstracts being published as full papers (71% vs. 47%, p = 0.078). Among abstracts published as full papers, reporting quality significantly increased from conference abstract to full paper both in randomized controlled studies (CONSORT: 5.7 ± 0.7 to 7.2 ± 1.3; p = 0.018, CI -2.7 to -0.32) and in observational studies (STROBE: 8.2 ± 1.3 to 8.6 ± 1.4; p = 0.007, CI -0.63 to -0.10). All of the published abstracts had at least one minor inconsistency (title, authors, research center, outcome presentation, conclusion), while 65% had at least one major inconsistency (study objective, hypothesis, study design, primary outcome measures, sample size, statistical analysis, results, SD/CI). Comparing the results in the conference abstract with those in the full paper, results changed in 90% vs. 68% (randomized controlled studies versus observational studies), data were added (the full paper reported more result data) in 60% vs. 30%, and data were deleted (the full paper reported fewer result data) in 40% vs. 30%. Conclusions: No significant differences with respect to type of study (randomized controlled versus observational), impact factor, or time to publication existed for the likelihood that a World Congress of Sports Injury Prevention conference abstract would be published as a full paper.
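The abstract-versus-full-paper score comparison is a paired design; below is a minimal sketch with hypothetical CONSORT scores. The test used by the authors is not stated in the abstract, so the paired t-test here is an assumption.

```python
# Paired comparison of reporting scores (hypothetical data; assumed paired t-test).
import numpy as np
from scipy.stats import ttest_rel

abstract_scores   = np.array([5, 6, 6, 5, 7, 6, 5, 6])  # CONSORT items met in abstract
full_paper_scores = np.array([7, 7, 8, 6, 8, 7, 7, 8])  # CONSORT items met in full paper

t, p = ttest_rel(abstract_scores, full_paper_scores)
diff = abstract_scores - full_paper_scores
print(f"mean change = {diff.mean():+.2f}, t = {t:.2f}, p = {p:.4f}")
```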
Project description: Background: The problem of access to medical information, particularly in low-income countries, has been under discussion for many years. Although a number of developments have occurred in the last decade (e.g., the open access (OA) movement and the website Sci-Hub), everyone agrees that these difficulties still persist very widely, mainly because paywalls still limit access to approximately 75% of scholarly documents. In this study, we compare the accessibility of recent full-text articles in the field of ophthalmology in 27 established institutions located worldwide. Methods: A total of 200 article references were retrieved using the PubMed database. Each article was individually checked for OA. Full texts of non-OA articles (i.e., "paywalled articles") were examined to determine whether they were available using institutional and Hinari access in each institution studied, using "alternative ways" (i.e., PubMed Central, ResearchGate, Google Scholar, and Online Reprint Request), and using the website Sci-Hub. Results: The proportion of full texts of "paywalled articles" available using institutional and Hinari access was strongly heterogeneous, ranging from 0% to 94.8% (mean = 46.8%; SD = 31.5; median = 51.3%). We found that complementary use of "alternative ways" and Sci-Hub provides access to 95.5% of full-text "paywalled articles" and reduces by a factor of 14 the average extra cost needed to obtain all full texts on publishers' websites using pay-per-view. Conclusions: The scant availability of full-text "paywalled articles" in most institutions studied encourages researchers in the field of ophthalmology to use Sci-Hub to search for scientific information. The scientific community and decision-makers must unite and strengthen their efforts to find solutions to improve access to scientific literature worldwide and avoid an implosion of the scientific publishing model. This study is not an endorsement of using Sci-Hub; the authors, their institutions, and the publishers accept no responsibility on behalf of readers.
Project description: Across academia and industry, text mining has become a popular strategy for keeping up with the rapid growth of the scientific literature. Text mining of the scientific literature has mostly been carried out on collections of abstracts, due to their availability. Here we present an analysis of 15 million English scientific full-text articles published during the period 1823-2016. We describe the development in article length and publication sub-topics during these nearly 200 years. We showcase the potential of text mining by extracting published protein-protein, disease-gene, and protein subcellular associations using a named entity recognition system, and quantitatively report on their accuracy using gold standard benchmark data sets. We subsequently compare the findings to corresponding results obtained on 16.5 million abstracts included in MEDLINE and show that text mining of full-text articles consistently outperforms using abstracts only.
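Benchmarking extracted associations against a gold standard reduces to set arithmetic over entity pairs; below is a minimal sketch with hypothetical pairs (not the study's benchmark data).

```python
# Scoring extracted associations against a gold standard (hypothetical pairs).
extracted = {("BRCA1", "breast cancer"), ("TP53", "lung cancer"), ("EGFR", "glioma")}
gold      = {("BRCA1", "breast cancer"), ("EGFR", "glioma"), ("APOE", "Alzheimer disease")}

tp = len(extracted & gold)            # true positives: found and in the gold standard
precision = tp / len(extracted)       # fraction of extracted pairs that are correct
recall = tp / len(gold)               # fraction of gold pairs that were found
f1 = 2 * precision * recall / (precision + recall)
print(f"precision {precision:.2f}, recall {recall:.2f}, F1 {f1:.2f}")
```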