Project description: Many publications on COVID-19 were released on preprint servers such as medRxiv and bioRxiv. It is unknown how reliable these preprints are and which ones will eventually be published in scientific journals. In this study, we use crowdsourced human forecasts to predict publication outcomes and future citation counts for a sample of 400 preprints with high Altmetric scores. Most of these preprints (70%) were published within 1 year of upload to a preprint server, with a considerable fraction (45%) appearing in a high-impact journal with a journal impact factor of at least 10. On average, the preprints received 162 citations within the first year. We found that forecasters can predict whether preprints will be published after 1 year and whether the publishing journal has high impact. Forecasts are also informative with respect to Google Scholar citations within 1 year of upload to a preprint server. For both types of assessment, we found statistically significant positive correlations between forecasts and observed outcomes. While forecasts can provide a preliminary assessment of preprints at a faster pace than traditional peer review, it remains to be investigated whether such an assessment is suited to identifying methodological problems in preprints.
Project description: Introduction: Ocular complaints are considered non-classical presentations of COVID-19 infection; an initial diagnosis of keratoconjunctivitis is even rarer. This puts treating clinicians at risk of infection, especially when patients present without the classic respiratory symptoms. Case: Here we report a case of COVID-19 that initially presented as keratoconjunctivitis, with respiratory symptoms appearing four days later. The patient improved within four days of successful treatment of both the COVID-19 pneumonia and the ocular disease. Discussion: Only a few reported cases describe initial ocular symptoms occurring together with, or even before, systemic symptoms, and only two cases reported a diagnosis of keratoconjunctivitis in COVID-19. These two cases differ in the proposed mechanism of the disorder: one attributes it to direct viral invasion, the other to cytokine-induced epithelial injury. Our case did not show positivity for SARS-CoV-2 in the eye secretions, which aligns with the latter proposed mechanism. Conclusion: It is crucial to report such cases to increase awareness of atypical presentations of COVID-19 infection. This is important for two reasons: first, to diagnose the disease itself, and second, to take infection control precautions when treating cases with an unexpected initial presentation.
Project description: Background: Preprints are increasingly used to disseminate research results, providing multiple sources of information for the same study. We assessed the consistency in effect estimates between preprints and subsequent journal articles of COVID-19 randomized controlled trials. Methods: The study utilized data from the COVID-NMA living systematic review of pharmacological treatments for COVID-19 (covid-nma.com) up to July 20, 2022. We identified randomized controlled trials (RCTs) evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that were originally posted as preprints and subsequently published as journal articles. Trials that did not report the same analysis in both documents were excluded. Data were extracted independently by pairs of researchers, with consensus to resolve disagreements. Effect estimates extracted from the first preprint were compared to effect estimates from the journal article. Results: The search identified 135 RCTs originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not meet the eligibility criteria, of which 13 reported an interim analysis in the preprint and a final analysis in the journal article. Overall, 109 preprint-article RCTs were included in the analysis. The median (interquartile range) delay between preprint and journal article was 121 (73-187) days, the median sample size was 150 (71-464) participants, 76% of RCTs had been prospectively registered, 60% received industry or mixed funding, and 72% were multicentric trials. The overall risk of bias was rated as 'some concerns' for 80% of RCTs. We found that 81 preprint-article pairs of RCTs were consistent for all outcomes reported. Nine RCTs had at least one outcome with a discrepancy in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the estimate of the effect.
Furthermore, six RCTs had at least one outcome missing from the journal article, and 14 RCTs had at least one outcome added in the journal article compared with the preprint. There was a change in the direction of effect in one RCT. No changes in statistical significance or conclusions were found. Conclusions: Effect estimates were generally consistent between COVID-19 preprints and subsequent journal articles. The main results and interpretation did not change in any trial. Nevertheless, some outcomes were added or deleted in some journal articles.
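The preprint-vs-article consistency check described above can be sketched in a few lines. This is a minimal illustration, not the COVID-NMA extraction pipeline: the function name and the (point, CI-low, CI-high) tuple format are assumptions, with estimates taken to be on the risk-ratio scale, where 1.0 means no effect.

```python
def compare_estimates(preprint, article):
    """Classify the change between a preprint effect estimate and its
    journal-article counterpart.  Each estimate is a (point, ci_low, ci_high)
    tuple on the risk-ratio scale, where 1.0 means no effect."""
    p_pt, p_lo, p_hi = preprint
    a_pt, a_lo, a_hi = article
    # The direction changes if the point estimates sit on opposite sides of 1.0.
    direction_changed = (p_pt - 1.0) * (a_pt - 1.0) < 0
    # An estimate is "significant" if its confidence interval excludes 1.0.
    p_sig = p_lo > 1.0 or p_hi < 1.0
    a_sig = a_lo > 1.0 or a_hi < 1.0
    return {"direction_changed": direction_changed,
            "significance_changed": p_sig != a_sig}

# A consistent pair: same direction of effect, both CIs exclude 1.0.
result = compare_estimates((0.80, 0.60, 0.95), (0.82, 0.65, 0.98))
```

Applied to each extracted outcome pair, a check like this flags the rare discrepancies (here, one direction change across 109 trials) for manual review.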
Project description: Introduction: Preprints have been widely cited during the COVID-19 pandemic, even in the major medical journals. However, since the subsequent publication of a preprint is not always mentioned in preprint repositories, some may be inappropriately cited or quoted. Our objectives were to assess the reliability of preprint citations in articles on COVID-19 and the rate of publication of the preprints cited in these articles, and to compare, where relevant, the content of the preprints to their published versions. Methods: Articles published on COVID-19 in 2020 in the BMJ, The Lancet, JAMA, and the NEJM were manually screened to identify all articles citing at least one preprint from medRxiv. We searched PubMed, Google, and Google Scholar to assess whether, and when, each preprint had been published in a peer-reviewed journal. Published articles were screened to assess whether the title, data, or conclusions were identical to the preprint version. Results: Among the 205 research articles on COVID-19 published by the four major medical journals in 2020, 60 (29.3%) cited at least one medRxiv preprint. Among the 182 preprints cited, 124 were published in a peer-reviewed journal, 51 (41.1%) before the citing article was published online and 73 (58.9%) later. There were differences in the title, the data, or the conclusion between the preprint cited and the published version for nearly half of them. MedRxiv did not mention the publication for 53 (42.7%) of the published preprints. Conclusions: More than a quarter of preprint citations were inappropriate, since the preprints were in fact already published at the time of publication of the citing article, often with different content. Authors and editors should check the accuracy of the citations and quotations of preprints before publishing manuscripts that cite them.
Project description: Background: Preprints are preliminary reports that have not been peer-reviewed. In December 2019, a novel coronavirus appeared in China, and since then, scientific production, including preprints, has drastically increased. In this study, we evaluated how often preprints about COVID-19 were published in scholarly journals and how often they were cited. Methods: We searched the iSearch COVID-19 portfolio to identify all preprints related to COVID-19 posted on bioRxiv, medRxiv, and Research Square from January 1, 2020, to May 31, 2020. We used a custom-designed program to obtain metadata via the Crossref public API. We then determined the publication rate and made comparisons based on citation counts using non-parametric methods. We also compared the publication rate, citation counts, and time interval from posting on a preprint server to publication in a scholarly journal among the three preprint servers. Results: Our sample included 5,061 preprints, of which 288 were published in scholarly journals and 4,773 remained unpublished (publication rate of 5.7%). Articles published in scholarly journals had a significantly higher total citation count than unpublished preprints in our sample (p < 0.001), and preprints that were eventually published had a higher citation count as preprints than unpublished preprints (p < 0.001). Published preprints also had a significantly higher citation count after publication in a scholarly journal than as a preprint (p < 0.001). Our results also show that medRxiv had the highest publication rate, while bioRxiv had the highest citation count and the shortest time interval from posting on a preprint server to publication in a scholarly journal. Conclusions: We found a remarkably low publication rate for preprints in our sample, despite accelerated time to publication by multiple scholarly journals.
These findings could be partially attributed to the unprecedented surge in scientific production during the COVID-19 pandemic, which may have saturated the reviewing and editing processes of scholarly journals. However, our findings also show that preprints had a significantly lower scientific impact, which might suggest that some preprints are of lower quality and will not survive peer review to be published in a peer-reviewed journal.
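The Crossref metadata lookup mentioned in the Methods above can be sketched as follows. This is a minimal illustration of the approach, not the authors' custom program: the helper names are assumptions, though the endpoint (`https://api.crossref.org/works/{doi}`) and the `container-title` and `is-referenced-by-count` fields are part of the public Crossref REST API. The sample response is constructed locally, so no network call is made.

```python
from urllib.parse import quote

CROSSREF_API = "https://api.crossref.org/works/"

def crossref_url(doi: str) -> str:
    """Build the Crossref REST API lookup URL for a DOI
    (the DOI is percent-encoded so its slash survives in the path)."""
    return CROSSREF_API + quote(doi, safe="")

def extract_metadata(response: dict) -> dict:
    """Pull the fields relevant to publication tracking out of a
    Crossref /works response (the payload's 'message' object)."""
    msg = response["message"]
    return {
        "doi": msg.get("DOI"),
        "journal": (msg.get("container-title") or [None])[0],
        "citations": msg.get("is-referenced-by-count", 0),
    }

# Sample response shaped like a real Crossref payload (values are made up).
sample = {
    "message": {
        "DOI": "10.1101/2020.01.30.927871",
        "container-title": ["Example Journal"],
        "is-referenced-by-count": 42,
    }
}
meta = extract_metadata(sample)
```

In a real pipeline, the URL from `crossref_url` would be fetched with an HTTP client and the JSON body passed to `extract_metadata`; an empty `container-title` list indicates the record is not linked to a journal publication.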
Project description: Objective: Preprints have had a prominent role in the swift scientific response to COVID-19. Two years into the pandemic, we investigated how much preprints had contributed to timely data sharing by analyzing the lag time from preprint posting to journal publication. Results: To estimate the median number of days between the date a manuscript was posted as a preprint and the date of its publication in a scientific journal, we analyzed preprints posted from January 1, 2020, to December 31, 2021 in the NIH iSearch COVID-19 Portfolio database and performed a Kaplan-Meier (KM) survival analysis using a non-mixture parametric cure model. Of the 39,243 preprints in our analysis, 7,712 (20%) were published in a journal, after a median lag of 178 days (95% CI: 175-181). Most of the published preprints had been posted on the bioRxiv (29%) or medRxiv (65%) servers, which allow authors to choose a subject category when posting. Of the 20,698 preprints posted on these two servers, 7,358 (36%) were published; approximately half of those categorized as biochemistry, biophysics, or genomics became published articles within the study interval, compared with 29% of those categorized as epidemiology and 26% as bioinformatics.
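The median-lag estimate above comes from a survival analysis in which preprints still unpublished at the end of follow-up are right-censored. A bare-bones Kaplan-Meier product-limit estimator of the median can be written as follows; this is a sketch of the basic idea, not the non-mixture parametric cure model the authors used, and the function name and toy data are assumptions.

```python
def km_median(times, published):
    """Kaplan-Meier estimate of the median time from preprint posting to
    journal publication.  `times` are days of follow-up; `published` flags
    whether publication (the event) was observed (1) or the preprint was
    still unpublished at the end of follow-up (0, right-censored)."""
    data = sorted(zip(times, published))   # order subjects by follow-up time
    n_at_risk = len(data)
    survival = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        events = 0
        n_here = 0
        # Group all subjects sharing this follow-up time.
        while i < len(data) and data[i][0] == t:
            events += data[i][1]
            n_here += 1
            i += 1
        survival *= 1 - events / n_at_risk  # product-limit step
        n_at_risk -= n_here
        if survival <= 0.5:
            return t   # first time the survival curve reaches 0.5
    return None        # median not reached (too much censoring)
```

With heavy censoring, as in this cohort where 80% of preprints remained unpublished, the naive median of observed lags would be badly biased; the KM estimator corrects for censored preprints by keeping them in the risk set until they drop out.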
Project description: Background: The use of social media assists in the distribution of information about COVID-19 to the general public and health professionals. Alternative-level metrics (i.e., Altmetrics) are an alternative to traditional bibliometrics that assess the extent of dissemination of a scientific article on social media platforms. Objective: Our study objective was to characterize and compare traditional bibliometrics (citation count) with newer metrics (Altmetric Attention Score [AAS]) for the top 100 Altmetric-scored articles on COVID-19. Methods: The top 100 articles with the highest AAS were identified using the Altmetric explorer in May 2020. The AAS, journal name, and mentions from various social media platforms (Twitter, Facebook, Wikipedia, Reddit, Mendeley, and Dimensions) were collected for each article. Citation counts were collected from the Scopus database. Results: The median AAS and citation count were 4,922.50 and 24.00, respectively. The New England Journal of Medicine published the most articles (18/100, 18%). Twitter was the most frequently used social media platform, with 985,429 of 1,022,975 (96.3%) mentions. A positive correlation was observed between AAS and citation count (r²=0.0973; P=.002). Conclusions: Our research characterized the top 100 COVID-19-related articles by AAS in the Altmetric database. Altmetrics could complement traditional citation counts when assessing the dissemination of an article regarding COVID-19. International Registered Report Identifier (IRRID): RR2-10.2196/21408.
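The AAS-citation relationship above is a standard correlation analysis (the reported r² is the square of the correlation coefficient r). A minimal, dependency-free sketch of the Pearson computation follows; the function name and toy data are assumptions, not the study's actual code or numbers.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: hypothetical AAS and citation counts for five articles.
aas = [12000, 8000, 5000, 3000, 1000]
citations = [90, 40, 55, 10, 5]
r = pearson_r(aas, citations)
r_squared = r ** 2   # the quantity reported as r² in the abstract
```

An r² of 0.0973 corresponds to r ≈ 0.31, i.e. a weak-to-moderate positive association, consistent with the abstract's conclusion that Altmetrics complement rather than duplicate citation counts.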
Project description: Background: Preprint usage is growing rapidly in the life sciences; however, questions remain about the relative quality of preprints compared with published articles. An objective, readily measurable dimension of quality is completeness of reporting, as transparency can improve the reader's ability to independently interpret data and reproduce findings. Methods: In this observational study, we initially compared independent samples of articles published on bioRxiv and in PubMed-indexed journals in 2016 using a quality-of-reporting questionnaire. We then performed paired comparisons between bioRxiv preprints and their own peer-reviewed versions in journals. Results: Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent-samples and paired-sample comparisons, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprint to peer-reviewed version did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication. Conclusions: Our results suggest that, on average, publication in a peer-reviewed journal is associated with improvement in quality of reporting. They also show that quality of reporting in life-science preprints is within a similar range to that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.
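The paired preprint-vs-journal comparison above reduces to a mean paired difference in reporting scores with a confidence interval. The sketch below uses a large-sample normal approximation; the study's actual interval method is not specified here, and the function name and toy scores are assumptions.

```python
import math

def paired_diff_ci(pre, post, z=1.96):
    """Mean paired difference (post - pre) with a large-sample 95% CI,
    e.g. reporting scores for preprints vs. their journal versions."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return mean, (mean - half, mean + half)

# Toy data: percentage of reporting items satisfied by four paired articles.
preprint_scores = [50.0, 60.0, 70.0, 80.0]
journal_scores = [55.0, 65.0, 75.0, 85.0]
mean_diff, ci = paired_diff_ci(preprint_scores, journal_scores)
```

Pairing each preprint with its own journal version removes between-study variability, which is why the paired CI in the abstract (2.4 to 7.0) is tighter than the independent-samples one (1.4 to 8.6) despite a similar point estimate.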