ABSTRACT: Objective
We investigated whether journals' responses to enquiries about possible scientific misconduct would differ in speed, usefulness, and tone depending on whether the enquirer was a journalist or an academic. Twelve journals that had published 23 clinical trials about which concerns had previously been raised were randomly assigned to enquiries from a journalist or from academics. Emails were sent to the journal editor every 3 weeks. We recorded the time taken for the journal to respond, and two investigators independently assessed the usefulness and tone of the journal responses.

Results
10/12 journals responded: 3 after one email, 5 after two emails, and 2 after three emails (median time from first email to response: 21 days; no difference in response times between journalist and academic enquiries, P = 0.25). Of the 10 responses, 8 indicated the journal was investigating, 5 had a positive tone, 4 a neutral tone, and 1 a negative tone. Five of the enquiries by the academics produced information of limited use and 1 produced no useful information, whereas none of the 6 journalist enquiries produced useful information (P = 0.015). None of the 10 responses was considered very useful. In conclusion, journal responses to a journalist were less useful than those to academics for understanding the status or outcomes of journal investigations.
SUBMITTER: Bolland MJ
PROVIDER: S-EPMC6065063 | biostudies-literature | 2018 Jul
REPOSITORIES: biostudies-literature
BMC Research Notes, 2018 Jul 30