Project description:We conducted a cross-sectional study to assess how the three highest-circulation newspapers in each of 25 countries compare and present COVID-19 epidemiological data to their readers. Of the 75 newspapers evaluated, 51 (68%) presented on their websites at least one comparison of cases and/or deaths between regions of their country and/or between countries. Quality assessment of the comparisons showed that only a minority of newspapers adjusted the data for population size in case comparisons between regions (37.2%) and between countries (25.6%), and the same was true for death comparisons between regions (27.3%) and between countries (27%). Of those making comparisons, only 13.7% explained the difference in the interpretation of cases and deaths. Of the 17 that presented a logarithmic curve, only 29.4% explained its meaning. Although the press plays a key role in conveying correct medical information to the general public, we identified inconsistencies in the reporting of COVID-19 epidemiological data.
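To make the population-adjustment criterion concrete: an adjusted comparison reports cases per 100,000 inhabitants rather than raw counts. A minimal Python sketch, using invented numbers rather than data from the study:

# Illustrative only: hypothetical case counts and populations, not figures from the study.
regions = {
    "Region A": {"cases": 12000, "population": 2_000_000},
    "Region B": {"cases": 9000, "population": 600_000},
}

for name, d in regions.items():
    rate = d["cases"] / d["population"] * 100_000  # cases per 100,000 inhabitants
    print(f"{name}: {d['cases']} raw cases, {rate:.0f} per 100,000")

# Raw counts suggest Region A is worse off; the population-adjusted rates show that
# Region B has the higher incidence (1,500 vs 600 cases per 100,000).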
Project description:Listening habits are strongly influenced by two opposing aspects: the desire for variety and the demand for uniformity in music. In this work we quantify these two notions in terms of the instrumentation and production technologies that are typically involved in crafting popular music. We assign an 'instrumentational complexity value' to each music style. Styles of low instrumentational complexity tend to have generic instrumentations that can also be found in many other styles. Styles of high complexity, on the other hand, are characterized by a large variety of instruments that can only be found in a small number of other styles. To model these results we propose a simple stochastic model that explicitly takes the capabilities of artists into account. We find empirical evidence that individual styles show dramatic changes in their instrumentational complexity over the last fifty years. 'New wave' or 'disco' quickly climbed towards higher complexity in the 1970s and fell back to low complexity levels shortly afterwards, whereas styles like 'folk rock' remained at constantly high instrumentational complexity levels. We show that changes in the instrumentational complexity of a style are related to its number of sales and to the number of artists contributing to that style. As a style attracts a growing number of artists, its instrumentational variety usually increases. At the same time, the instrumentational uniformity of a style decreases, i.e. a unique stylistic and increasingly complex expression pattern emerges. In contrast, album sales of a given style typically increase with decreasing instrumentational complexity. This can be interpreted as music becoming increasingly formulaic in terms of instrumentation once commercial or mainstream success sets in.
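One way to make the notion of instrumentational complexity concrete (an illustrative, IDF-like operationalization, not necessarily the exact measure used in this work) is to score each style by the average rarity of its instruments across all styles:

import math

# Hypothetical style -> instrument sets; neither the paper's data nor its exact formula.
styles = {
    "disco":     {"drums", "bass", "strings", "synthesizer", "horns"},
    "folk rock": {"acoustic guitar", "fiddle", "mandolin", "drums", "bass"},
    "punk":      {"electric guitar", "bass", "drums"},
}

def instrumentational_complexity(style, styles):
    """Average rarity of a style's instruments across all styles.

    Generic instruments shared by many styles contribute little; instruments
    found in only a few other styles push the score up."""
    n_styles = len(styles)
    scores = []
    for instrument in styles[style]:
        n_with = sum(instrument in insts for insts in styles.values())
        scores.append(math.log(n_styles / n_with))
    return sum(scores) / len(scores)

for s in styles:
    print(s, round(instrumentational_complexity(s, styles), 2))

Under this toy scoring, 'punk' comes out less complex than 'disco' or 'folk rock' because most of its instruments are shared with the other styles.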
Project description:The use of machine learning for predicting ecotoxicological outcomes is promising, but underutilized. The curation of data with informative features requires expertise in machine learning as well as a strong biological and ecotoxicological background, which we consider a barrier to entry for this kind of research. Additionally, model performances can only be compared across studies when the same dataset, cleaning, and splittings are used. Therefore, we provide ADORE, an extensive and well-described dataset on acute aquatic toxicity in three relevant taxonomic groups (fish, crustaceans, and algae). The core dataset describes ecotoxicological experiments and is expanded with phylogenetic and species-specific data on the species as well as chemical properties and molecular representations. Apart from challenging other researchers to achieve the best model performance across the whole dataset, we propose specific relevant challenges on subsets of the data and include datasets and splittings corresponding to each of these challenges, as well as an in-depth characterization and discussion of train-test splitting approaches.
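As an example of the splitting concerns discussed above, a chemical-grouped train-test split keeps all records of a compound on one side of the split to avoid information leakage. A minimal scikit-learn sketch, assuming a CSV export with a 'smiles' column (the file and column names are hypothetical, not necessarily those used in ADORE):

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical file and column name ("adore.csv", "smiles"); adapt to the actual layout.
df = pd.read_csv("adore.csv")

# Group by chemical so that no compound appears in both train and test,
# which would otherwise leak information across the split.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["smiles"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]

print(len(train), len(test))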
Project description:Different methods are used in ecotoxicology to estimate thresholds in survival data. This paper uses Monte Carlo simulations to evaluate the accuracy of three methods (maximum likelihood (MLE) and Markov chain Monte Carlo (Bayesian) estimates of the no-effect concentration (NEC) model, and piecewise regression) in estimating true and apparent thresholds in survival experiments, using datasets with different slopes, background mortalities, and experimental designs. Datasets were generated with models that include a threshold parameter (NEC) or not (log-logistic). Accuracy was estimated using root-mean-square errors (RMSEs), and RMSE ratios were used to estimate the relative improvement in accuracy from each design and method. All methods performed poorly on shallow and intermediate curves, and accuracy increased with the slope of the curve. The EC5 was generally the most accurate estimate of true and apparent thresholds, except for steep curves with a true threshold; in that case, the EC5 underestimated the threshold, and the MLE and Bayesian estimates were more accurate. In most cases, information criteria weights did not provide strong evidence in support of the true model, suggesting that identifying the true model is a difficult task. Piecewise regression was the only method for which the information criteria weights strongly supported the threshold model; however, the rate of spurious threshold model selection was also high. Even though thresholds are an attractive concept from a regulatory and practical point of view, threshold estimates obtained under the experimental conditions evaluated in this work should be used with caution in survival analysis, even when there are biological reasons to support the existence of a threshold.
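To illustrate one strand of such a simulation study (not the authors' exact models, designs, or parameter values), the sketch below generates binomial survival data from a two-parameter log-logistic curve, estimates the EC5 as an apparent-threshold proxy, and scores the estimates against the known truth with RMSE:

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def loglogistic(conc, lc50, slope):
    """Survival probability under a two-parameter log-logistic model (no background mortality)."""
    return 1.0 / (1.0 + (conc / lc50) ** slope)

# Hypothetical design and parameters, purely illustrative.
conc = np.array([0.5, 1, 2, 4, 8, 16, 32])
n_per_conc, true_lc50, true_slope = 20, 8.0, 3.0
true_ec5 = true_lc50 * (0.05 / 0.95) ** (1 / true_slope)   # concentration causing a 5% effect

estimates = []
for _ in range(500):                                        # Monte Carlo replicates
    surv = rng.binomial(n_per_conc, loglogistic(conc, true_lc50, true_slope)) / n_per_conc
    (lc50_hat, slope_hat), _ = curve_fit(loglogistic, conc, surv, p0=[5.0, 2.0],
                                         bounds=([0.01, 0.1], [100.0, 20.0]))
    estimates.append(lc50_hat * (0.05 / 0.95) ** (1 / slope_hat))

rmse = np.sqrt(np.mean((np.array(estimates) - true_ec5) ** 2))
print(f"true EC5 = {true_ec5:.2f}, RMSE of EC5 estimates = {rmse:.2f}")

The NEC, Bayesian, and piecewise-regression fits would be evaluated inside the same loop, and RMSE ratios between methods would then quantify their relative accuracy.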
Project description:Modern automotive press shops are reaching their process limits due to increasing demands on car body shapes. At the same time, the transmission of information and the readjustment in the event of quality losses caused by process errors are still largely handled manually. The survey presented here deals with better-connected processes as well as data acquisition and track-and-trace applications in press shops. The survey was directed at experts from the automotive industry and is intended to determine how automated and connected the processes in press shops already are. It was conducted from March to April 2020. With a total of 24 questions, it attempts to give a comprehensive picture of the current status and the existing potential regarding smart press shops. In addition to questions on the marking and tracking of pressed parts, the objective is to find out which process data is already being recorded today and what conclusions can be drawn from it regarding the expected part quality. The evaluation of the survey is intended to build the basis for research activities on smart, connected press shops.
Project description:This paper presents evidence that when an analyst makes an out-of-consensus forecast of a company's quarterly earnings that turns out to be incorrect, she escalates her commitment to maintaining an out-of-consensus view on the company. Relative to an analyst who was close to the consensus, the out-of-consensus analyst adjusts her forecasts for the current fiscal year's earnings less in the direction of the quarterly earnings surprise. On average, this type of updating behavior reduces forecasting accuracy, so it does not seem to reflect superior private information. Further empirical results suggest that analysts do not have financial incentives to stand by extreme stock calls in the face of contradictory evidence. Managerial and financial market implications are discussed.
Project description:Lately, behavioral ecotoxicology has flourished because of the increasing standardization of analyses of endpoints such as movement. However, research tends to focus on a few model species, which limits the possibilities of extrapolating and predicting toxicological effects and adverse outcomes at the population and ecosystem levels. In this regard, it is recommended to assess critical species-specific behavioral responses in taxa playing key roles in trophic food webs, such as cephalopods. The latter, known as masters of camouflage, display rapid physiological color changes to conceal themselves and adapt to their surrounding environments. The efficiency of this process depends on visual abilities and acuity, information processing, and the control of chromatophore dynamics through nervous and hormonal regulation, with which many contaminants can interfere. Therefore, the quantitative measurement of color change in cephalopod species could be developed as a powerful endpoint for toxicological risk assessment. Based on a wide body of research assessing the effects of various environmental stressors (pharmaceutical residues, metals, carbon dioxide, anti-fouling agents) on the camouflage abilities of juvenile common cuttlefish, we discuss the relevance of this species as a toxicological model and address the challenge of color change quantification and standardization through a comparative review of the available measurement techniques.
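As a minimal illustration of pixel-based color-change quantification (a generic approach, not one of the specific techniques reviewed here), two images of the same animal taken before and after a background change can be compared on mean luminance and contrast; the file names are hypothetical, and in practice the animal would first be segmented from the background:

import numpy as np
from PIL import Image

# Hypothetical file names: images of the same animal before and after a background change.
before = np.asarray(Image.open("cuttlefish_before.png").convert("L"), dtype=float)
after = np.asarray(Image.open("cuttlefish_after.png").convert("L"), dtype=float)

# Two simple pixel-based descriptors: the change in mean body luminance and the change
# in contrast (standard deviation of luminance), a crude proxy for body patterning.
delta_luminance = after.mean() - before.mean()
delta_contrast = after.std() - before.std()

print(f"change in mean luminance: {delta_luminance:.1f}")
print(f"change in contrast: {delta_contrast:.1f}")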
Project description:It is crucial to understand the effects of experimental parameters such as temperature, light, and food type on lab- and field-based ecotoxicology experiments, as these variables, and combinations thereof, can affect results. The type of substrate used in exposure experiments, however, is generally assumed to have no effect. This may not always be correct. We investigated the metabolic changes in the freshwater crustacean Austrochiltonia subtenuis exposed to copper on three common substrates: gauze, toilet paper, and cellulose. Substrate alone did not affect survival, but each substrate elicited a different metabolic response, and adult and juvenile amphipods had different substrate preferences. Several classes of metabolites changed in response to the different substrates and the toxicant, including disaccharides, monosaccharides, fatty acids, and tricarboxylic acid cycle intermediates. The results illustrate that metabolomic responses can differ in response to experimental factors that were previously thought not to be significant. In fact, our data indicate that substrate should be viewed as an experimental factor as important to control for as better-known confounders such as temperature or food, thus challenging the current paradigm. Assuming that substrate type has no effect on the experiment could potentially lead to errors in contaminant toxicity assessments. We propose that good practice would be to evaluate all experimental factors for their potential influence on metabolomic profiles before contaminant response experiments are undertaken.