Project description:Predatory publishing represents a major challenge to scholarly communication. This paper maps the infiltration of journals suspected of predatory practices into the citation database Scopus and examines cross-country differences in the propensity of scholars to publish in such journals. Using the names of "potential, possible, or probable" predatory journals and publishers on Beall's lists, we derived the ISSNs of 3,293 journals from Ulrichsweb and searched Scopus with them. We identified 324 journals that appear in both Beall's lists and Scopus, with 164 thousand articles published over 2015-2017. Analysis of data for 172 countries in 4 fields of research indicates remarkable heterogeneity. In the most affected countries, including Kazakhstan and Indonesia, around 17% of articles fall into the predatory category, while some other countries have no predatory articles whatsoever. Countries with large research sectors at the medium level of economic development, especially in Asia and North Africa, tend to be most susceptible to predatory publishing. Arab, oil-rich and/or eastern countries also appear to be particularly vulnerable. Policymakers and stakeholders in these and other developing countries need to pay more attention to the quality of research evaluation. Supplementary information: The online version contains supplementary material available at 10.1007/s11192-020-03852-4.
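The journal-matching step above depends on clean ISSNs. An ISSN's final character is a mod-11 check digit, so malformed identifiers can be filtered out before querying a database. The helper below is an illustrative sketch of that validation, not part of the study's actual pipeline:

```python
def issn_check_digit(issn: str) -> bool:
    """Validate an ISSN of the form 'NNNN-NNNC' via its mod-11 check digit.

    The first seven digits are weighted 8 down to 2, summed, and the check
    character is (11 - sum mod 11) mod 11, with 10 written as 'X'.
    """
    digits = issn.replace("-", "").upper()
    if len(digits) != 8 or not digits[:7].isdigit():
        return False
    total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
    check = (11 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return digits[7] == expected
```

For example, `issn_check_digit("0378-5955")` returns `True` (a well-known valid ISSN), while a corrupted final digit fails the check.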
Project description:The COVID-19 pandemic catalyzed the rapid dissemination of papers and preprints investigating the disease and its associated virus, SARS-CoV-2. The multifaceted nature of COVID-19 demands a multidisciplinary approach, but the urgency of the crisis combined with the need for social distancing measures presents unique challenges to collaborative science. We applied a massive online open publishing approach to this problem using Manubot. Through GitHub, collaborators summarized and critiqued COVID-19 literature, creating a review manuscript. Manubot automatically compiled citation information for referenced preprints, journal publications, websites, and clinical trials. Continuous integration workflows retrieved up-to-date data from online sources nightly, regenerating some of the manuscript's figures and statistics. Manubot rendered the manuscript into PDF, HTML, LaTeX, and DOCX outputs, immediately updating the version available online upon the integration of new content. Through this effort, we organized over 50 scientists from a range of backgrounds who evaluated over 1,500 sources and developed seven literature reviews. While many efforts from the computational community have focused on mining COVID-19 literature, our project illustrates the power of open publishing to organize both technical and non-technical scientists to aggregate and disseminate information in response to an evolving crisis.
Project description:OBJECTIVES:To develop effective interventions to prevent publishing in presumed predatory journals (ie, journals that display deceptive characteristics, markers or data that cannot be verified), it is helpful to understand the motivations and experiences of those who have published in these journals. DESIGN:An online survey delivered to two sets of corresponding authors containing demographic information, and questions about researchers' perceptions of publishing in the presumed predatory journal, the type of article processing fees paid and the quality of peer review received. The survey also asked six open-ended items about researchers' motivations and experiences. PARTICIPANTS:Using Beall's lists, we identified two groups of individuals who had published empirical articles in biomedical journals that were presumed to be predatory. RESULTS:Eighty-two authors partially responded to our survey (~14% response rate overall; 11.4% [44/386] from the initial sample and 19.3% [38/197] from the second sample). The top three countries represented were India (n=21, 25.9%), the USA (n=17, 21.0%) and Ethiopia (n=5, 6.2%). Three participants (3.9%) thought the journal they published in was predatory at the time of article submission. The majority of participants first encountered the journal via an email invitation to submit an article (n=32, 41.0%) or through an online search to find a journal with relevant scope (n=22, 28.2%). Most participants indicated their study received peer review (n=65, 83.3%) and that this was helpful and substantive (n=51, 79.7%). Almost half (n=32, 45.1%) indicated they did not pay fees to publish. CONCLUSIONS:This work provides some evidence to inform policy to prevent future research from being published in predatory journals. Our research suggests that common views about predatory journals (eg, no peer review) may not always be true, and that a grey zone between legitimate and presumed predatory journals exists.
These results are based on self-reports and may be biased, thus limiting their interpretation.
Project description:Objective: Academics are under great pressure to publish their research, the rewards for which are well known (tenure, promotion, grant funding, professional prestige). As open access publishing gains acceptance as a publishing option, researchers may choose a "predatory publisher." The purpose of this study is to investigate the motivations and rationale of pharmacy and nursing academics in the United States to publish in open access journals that may be considered "predatory." Methods: A 26-item questionnaire was programmed in Qualtrics and distributed electronically to approximately 4,500 academic pharmacists and nurses, 347 of whom completed questionnaires (~8%). Pairwise correlations were performed, followed by a logistic regression to evaluate statistical associations between participant characteristics and whether participants had ever paid an article processing fee (APF). Results: Participants who had published more articles, were more familiar with predatory publishing, and who were more concerned about research metrics and tenure were more likely to have published in open access journals. Moderate to high institutional research intensity also had an impact on the likelihood of publishing open access. The majority of participants who acknowledged they had published in a predatory journal took no action after realizing the journal was predatory and reported no negative impact on their career for having done so. Conclusion: The results of this study provide data and insight into publication decisions made by pharmacy and nursing academics. Gaining a better understanding of who publishes in predatory journals and why can help address the problems associated with predatory publishing at the root.
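The logistic-regression step described above models a binary outcome (ever paid an APF) against participant characteristics. A minimal, self-contained sketch of that idea follows; the single predictor, the toy data, and the gradient-descent fit are invented for illustration and do not reflect the study's actual variables or estimates:

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Single-feature logistic regression fit by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # gradient of the log-loss
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Invented toy data: articles published (x) vs. ever paid an APF (y).
papers = [1, 2, 3, 10, 12, 15]
paid_apf = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(papers, paid_apf)
```

After fitting, `sigmoid(w * x + b)` gives the modeled probability of having paid an APF for a researcher with `x` publications; the study itself would have used standard statistical software rather than a hand-rolled fit.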
Project description:OBJECTIVE:To investigate the scope of academic spam emails (ASEs) among career development grant awardees and the factors associated with the amount of time spent addressing them. DESIGN:A cross-sectional survey of career development grant investigators was conducted via an anonymous online questionnaire. In addition to demographic and professional information, we asked investigators to report the number of ASEs received each day, how they determined whether these emails were spam and the time they spent per day addressing them. We used bivariate analysis to assess factors associated with the amount of time spent on ASEs. SETTING:An online survey sent via email on three separate occasions between November and December 2016. PARTICIPANTS:All National Institutes of Health career development awardees funded in the 2015 fiscal year. MAIN OUTCOME MEASURES:Factors associated with the amount of time spent addressing ASEs. RESULTS:A total of 3492 surveys were emailed, of which 206 (5.9%) were returned as undeliverable and 96 (2.7%) generated an out-of-office message; our overall response rate was 22.3% (n=733). All respondents reported receiving ASEs, with the majority (54.4%) receiving between 1 and 10 per day and spending between 1 and 10 min each day evaluating them. The amount of time respondents reported spending on ASEs was associated with the number of peer-reviewed journal articles authored (p<0.001), a history of publishing in open access format (p<0.01), the total number of ASEs received (p<0.001) and a feeling of having missed opportunities due to ignoring these emails (p=0.04). CONCLUSIONS:ASEs are a common distraction for career development grantees that may impact faculty productivity. There is an urgent need to mitigate this growing problem.
Project description:We aimed to develop an in-depth understanding of quality criteria for scholarly journals by analyzing journals and publishers indexed in blacklists of predatory journals and whitelists of legitimate journals and the lists' inclusion criteria. To quantify content overlaps between blacklists and whitelists, we employed the Jaro-Winkler string metric. To identify topics addressed by the lists' inclusion criteria and to derive their concepts, we conducted qualitative coding. We included two blacklists (Beall's and Cabells Scholarly Analytics') and two whitelists (the Directory of Open Access Journals' and Cabells Scholarly Analytics'). The number of journals per list ranged from 1,404 to 12,357, and the number of publishers ranged from 473 to 5,638. Seventy-two journals and 42 publishers were included in both a blacklist and a whitelist. Seven themes were identified in the inclusion criteria: (i) peer review; (ii) editorial services; (iii) policy; (iv) business practices; (v) publishing, archiving, and access; (vi) website; and (vii) indexing and metrics. Business practices accounted for almost half of the blacklists' criteria, whereas whitelists gave more emphasis to criteria related to policy. Criteria could be allocated to four concepts: (i) transparency, (ii) ethics, (iii) professional standards, and (iv) peer review and other services. Whitelists gave most weight to transparency. Blacklists focused on ethics and professional standards. Whitelist criteria were easier to verify than those used in blacklists. Both types gave little emphasis to quality of peer review. Overall, the results show that there is overlap of journals and publishers between blacklists and whitelists. Lists differ in their criteria for quality and the weight given to different dimensions of quality. 
Aspects that are central but difficult to verify receive little attention. IMPORTANCE: Predatory journals are spurious scientific outlets that charge fees for editorial and publishing services that they do not provide. Their lack of quality assurance of published articles increases the risk that unreliable research is published and thus jeopardizes the integrity and credibility of research as a whole. There is increasing awareness of the risks associated with predatory publishing, but efforts to address this situation are hampered by the lack of a clear definition of predatory outlets. Blacklists of predatory journals and whitelists of legitimate journals have been developed but not comprehensively examined. By systematically analyzing these lists, this study provides insights into their utility and delineates the different notions of quality and legitimacy used in scholarly publishing. This study contributes to a better understanding of the relevant concepts and provides a starting point for the development of a robust definition of predatory journals.
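The Jaro-Winkler metric used above to quantify overlap between lists scores two strings on shared characters within a sliding window, transpositions among them, and a bonus for a common prefix. A minimal implementation, shown only to illustrate how the metric works (the study presumably relied on an existing library):

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: shared characters and transpositions."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(len1, len2) // 2 - 1  # max distance for a "match"
    match1, match2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions among matched characters, in order.
    t, k = 0, 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro similarity plus a bonus for a common prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

On the textbook example pair "MARTHA"/"MARHTA", `jaro_winkler` returns about 0.961; near-identical journal titles score close to 1.0, which is what makes the metric suitable for detecting overlapping entries across lists.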
Project description:Background: Manuscript preparation and the (re)submission of articles can create a significant workload in academic jobs. In this exploratory analysis, we estimate the time and costs needed to meet the diverse formatting requirements for manuscript submissions in biomedical publishing. Methods: We reviewed 302 leading biomedical journals' submission guidelines and extracted information on the components that tend to vary the most among submission guidelines (the length of the title, the running title, the abstract, and the manuscript; the structure of the abstract and the manuscript; the number of items and references allowed; and whether the journal has a template). We estimated annual research funding lost due to manuscript formatting by calculating hourly academic salaries, the time lost to reformatting articles, and quantifying the total number of resubmissions per year. We interviewed several researchers, senior journal editors and editors-in-chief to contextualize our findings and develop guidelines that could help both biomedical journals and researchers work more efficiently. Results: Among the analyzed journals, we found a huge diversity in submission requirements. By calculating average researcher salaries in the European Union and the USA, and the time spent on reformatting articles, we estimated that ~230 million USD were lost in 2021 alone due to reformatting articles. Should the current practice remain unchanged within this decade, we estimate ~2.5 billion USD could be lost between 2022 and 2030, solely due to reformatting articles after a first editorial desk rejection.
In our interviews, we found alignment between researchers and editors: researchers would like the submission process to be as straightforward and simple as possible, and editors want to easily identify strong, suitable articles and not waste researchers' time. Conclusions: Based on the findings from our quantitative analysis, contextualized by the qualitative interviews, we conclude that free-format submission guidelines would benefit both researchers and editors. However, a minimum set of requirements is necessary to avoid manuscript submissions that lack structure. We developed our guidelines to improve the status quo, and we urge the publishers and the editorial-advisory boards of biomedical journals to adopt them. This may also require support from publishers and major international organizations that govern the work of editors.
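The headline cost estimate above rests on simple arithmetic: hourly salary, times hours spent reformatting per resubmission, times the number of resubmissions per year. The function below sketches that calculation; the specific input values are illustrative assumptions chosen only to reproduce the abstract's order of magnitude, not the study's reported parameters:

```python
def annual_reformatting_cost(hourly_salary_usd: float,
                             hours_per_reformat: float,
                             resubmissions_per_year: int) -> float:
    """Estimated research funding lost to manuscript reformatting per year."""
    return hourly_salary_usd * hours_per_reformat * resubmissions_per_year

# Hypothetical inputs for illustration: 50 USD/h, 8 h per reformat,
# 575,000 resubmissions/year.
cost = annual_reformatting_cost(50, 8, 575_000)  # 230,000,000 USD
```

Scaling any one factor, such as the number of desk rejections avoided by free-format submission, scales the estimate linearly, which is why the abstract's projection over 2022-2030 reaches billions.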
Project description:Pesticides have been reported in treated wastewater effluent at concentrations that exceed aquatic toxicity thresholds, indicating that treatment may be insufficient to adequately address potential pesticide impacts on aquatic life. Gaining a better understanding of the relative contribution from specific use patterns, transport pathways, and flow characteristics is an essential first step to informing source control measures. The results of this study are the first of their kind, reporting pesticide concentrations at sub-sewershed sites within a single sewer catchment to provide information on the relative contribution from various urban sources. Seven monitoring events were conducted at influent, effluent, and seven sub-sewershed sites to capture seasonal variability. In addition, samples were collected from sites with the potential for relatively large mass fluxes of pesticides (pet grooming operations, pest control operators, and laundromats). Fipronil and imidacloprid were detected in most samples (>70%). Pyrethroids were detected in >50% of all influent and lateral samples. Pyrethroids were significantly removed from the aqueous process stream within the facility, to below reporting limits. Imidacloprid and fiproles were the only pesticides detected above reporting limits in effluent, highlighting the importance of source identification and control for the more hydrophilic compounds. Single-source monitoring revealed large contributions of fipronil, imidacloprid, and permethrin originating from a pet groomer, with elevated levels of cypermethrin at a commercial laundry location. The results provide important information needed to prioritize future monitoring efforts, calibrate down-the-drain models, and identify potential mitigation strategies at the site of pesticide use to prevent introduction to sewersheds.