Project description:In this paper, we suggest that cortical anatomy recapitulates the temporal hierarchy that is inherent in the dynamics of environmental states. Many aspects of brain function can be understood in terms of a hierarchy of temporal scales at which representations of the environment evolve. The lowest level of this hierarchy corresponds to fast fluctuations associated with sensory processing, whereas the highest levels encode slow contextual changes in the environment, under which faster representations unfold. First, we describe a mathematical model that exploits the temporal structure of fast sensory input to track the slower trajectories of its underlying causes. This model of sensory encoding or perceptual inference establishes a proof of concept that slowly changing neuronal states can encode the paths or trajectories of faster sensory states. We then review empirical evidence that suggests that a temporal hierarchy is recapitulated in the macroscopic organization of the cortex. This anatomic-temporal hierarchy provides a comprehensive framework for understanding cortical function: the specific time-scale that engages a cortical area can be inferred from its location along a rostro-caudal gradient, which reflects the anatomical distance from primary sensory areas. This is most evident in the prefrontal cortex, where complex functions can be explained as operations on representations of the environment that change slowly. The framework provides predictions about, and principled constraints on, cortical structure-function relationships, which can be tested by manipulating the time-scales of sensory input.
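As a rough, non-authoritative sketch of the core idea (not the paper's actual generative model or inversion scheme), the snippet below couples a fast state to a slowly drifting hidden cause and recovers the slow cause by temporally smoothing the fast trajectory; all variable names, noise levels, and time constants are illustrative assumptions.

    import numpy as np

    # Two-level toy hierarchy: a slowly drifting hidden cause v(t) sets the
    # operating point of a fast sensory-level state x(t). The order-of-magnitude
    # separation of time constants is the point: the slow level encodes the
    # context under which fast fluctuations unfold.
    dt = 0.01
    T = int(50 / dt)
    tau_fast, tau_slow = 0.1, 5.0

    rng = np.random.default_rng(0)
    x = np.zeros(T)          # fast sensory-level state
    v = np.zeros(T)          # slow contextual cause
    for t in range(T - 1):
        # slow level: Ornstein-Uhlenbeck-like drift, unaffected by the fast level
        v[t + 1] = v[t] - dt / tau_slow * v[t] + np.sqrt(dt) * 0.2 * rng.normal()
        # fast level: relaxes quickly toward the set point dictated by the slow cause
        x[t + 1] = x[t] + dt / tau_fast * (v[t] - x[t]) + np.sqrt(dt) * 0.05 * rng.normal()

    # Recovering v from x alone amounts to smoothing the fast trajectory over a
    # long window -- the sense in which slowly changing states can encode the
    # paths of faster sensory states.
    win = int(tau_slow / dt)
    v_hat = np.convolve(x, np.ones(win) / win, mode="same")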
Project description:Speech perception presumably arises from internal models of how specific sensory features are associated with speech sounds. These features change constantly (e.g. different speakers, articulation modes, etc.), and listeners need to recalibrate their internal models by appropriately weighing new versus old evidence. Models of speech recalibration classically ignore this volatility. The effect of volatility in tasks where sensory cues were associated with arbitrary, experimenter-defined categories was well described by models that continuously adapt the learning rate while keeping a single representation of the category. Using neurocomputational modelling, we show that recalibration of natural speech sound categories is better described by representing the latter at different time scales. We illustrate our proposal by modeling fast recalibration of speech sounds after experiencing the McGurk effect. We propose that working representations of speech categories are driven both by their current environment and by their long-term memory representations.
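A minimal sketch of the multi-timescale idea, assuming a one-dimensional category representation with a fast "working" copy anchored to a slow long-term copy; the learning rates and the exposure sequence are placeholders, not the fitted model from the study.

    def update(mu_work, mu_long, evidence, eta_fast=0.3, eta_slow=0.01):
        """One exposure to a recalibrating token; returns updated representations."""
        mu_work += eta_fast * (evidence - mu_work)   # pulled toward the new evidence
        mu_work += eta_slow * (mu_long - mu_work)    # anchored to long-term memory
        mu_long += eta_slow * (evidence - mu_long)   # long-term copy drifts slowly
        return mu_work, mu_long

    # e.g. a McGurk-like exposure phase (shifted tokens), then neutral tokens:
    # the working representation shifts quickly, then relaxes back toward the
    # long-term representation once the shifted evidence stops.
    mu_work = mu_long = 0.0
    for token in [1.0] * 20 + [0.0] * 20:
        mu_work, mu_long = update(mu_work, mu_long, token)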
Project description:Gravity is a major abiotic cue for plant growth. However, little is known about the responses of plants to various patterns of gravi-stimulation, with apparent contradictions being observed between the dose-like responses recorded under transient stimuli in microgravity environments and the responses under steady-state inclinations recorded on Earth. Of particular importance is how the gravitropic response of an organ is affected by the temporal dynamics of downstream processes in the signalling pathway, such as statolith motion in statocytes or the redistribution of auxin transporters. Here, we used a combination of experiments on the whole-plant scale and live-cell imaging techniques on wheat coleoptiles in centrifuge devices to investigate both the kinematics of shoot bending induced by transient inclination and the motion of the statoliths in response to cell inclination. Unlike previous observations in microgravity, the response of shoots to transient inclinations appears to be independent of the level of gravity, with a response time much longer than the duration of statolith sedimentation. This reveals the existence of a memory process in the gravitropic signalling pathway, independent of statolith dynamics. By combining this memory process with statolith motion, we built a mathematical model that unifies the different laws found in the literature and predicts the early bending response of shoots to arbitrary gravi-stimulations.
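As an illustration only (not the authors' published equations), the sketch below chains two first-order processes, statolith repositioning and a slower downstream memory, and integrates the memory signal into curvature; all time constants and the gain are assumed values.

    import numpy as np

    dt = 1.0                  # s
    t_end = 3600.0            # simulate one hour
    tau_stat = 60.0           # assumed statolith repositioning time (s)
    tau_mem = 900.0           # assumed memory/integration time (s)
    gain = 1e-4               # assumed bending gain

    def simulate(inclination):
        """inclination: imposed tilt angle (rad) at each time step."""
        n = len(inclination)
        s = np.zeros(n)       # statolith-position signal (fast)
        m = np.zeros(n)       # downstream memory of that signal (slow)
        curvature = np.zeros(n)
        for k in range(n - 1):
            s[k + 1] = s[k] + dt / tau_stat * (inclination[k] - s[k])
            m[k + 1] = m[k] + dt / tau_mem * (s[k] - m[k])
            curvature[k + 1] = curvature[k] + dt * gain * m[k]
        return curvature

    # transient stimulation: tilted 45 degrees for 10 min, then returned upright;
    # bending keeps developing after the stimulus ends because of the memory term
    time = np.arange(0.0, t_end, dt)
    tilt = np.where(time < 600.0, np.radians(45.0), 0.0)
    bend = simulate(tilt)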
Project description:Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain typically consider listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance's identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
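A minimal sketch of the contextual-integration step described above, assuming a hypothetical set of two questions and four answers and a hand-built answer-given-question table; the ECoG classifiers that produce the likelihoods are not shown.

    import numpy as np

    # rows: questions, cols: answers -- which answers are plausible for which question
    p_answer_given_question = np.array([
        [0.5, 0.5, 0.0, 0.0],   # hypothetical question 0 admits only answers 0-1
        [0.0, 0.0, 0.5, 0.5],   # hypothetical question 1 admits only answers 2-3
    ])

    def decode_answer(question_likelihood, answer_likelihood):
        """Combine decoded question and answer likelihoods into an answer posterior."""
        # context-dependent prior over answers, marginalising over decoded questions
        prior = question_likelihood @ p_answer_given_question
        posterior = prior * answer_likelihood
        return posterior / posterior.sum()

    # the neural data only weakly favour an answer, but the decoded question
    # strongly favours question 0, so the posterior concentrates on answers 0-1
    post = decode_answer(np.array([0.9, 0.1]), np.array([0.30, 0.40, 0.20, 0.10]))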
Project description:Speech production is one of the most fundamental activities of humans. A core cognitive operation involved in this skill is the retrieval of words from long-term memory, that is, from the mental lexicon. In this article, we establish the time course of lexical access by recording the brain electrical activity of participants while they named pictures aloud. By manipulating the ordinal position of pictures belonging to the same semantic category (the cumulative semantic interference effect), we were able to measure the exact time at which lexical access takes place. We found significant correlations between naming latencies, ordinal position of pictures, and event-related potential mean amplitudes starting 200 ms after picture presentation and lasting for 180 ms. The study reveals that the brain engages extremely rapidly in the retrieval of the words one wishes to utter and offers a clear time frame for how long it takes to resolve the competitive process of activating and selecting words in the course of speech.
Project description:Purpose: To determine the mechanisms of speech intelligibility impairment due to neurologic impairments, intelligibility decline was modeled as a function of co-occurring changes in the articulatory, resonatory, phonatory, and respiratory subsystems. Method: Sixty-six individuals diagnosed with amyotrophic lateral sclerosis (ALS) were studied longitudinally. The disease-related changes in articulatory, resonatory, phonatory, and respiratory subsystems were quantified using multiple instrumental measures, which were subjected to a principal component analysis and mixed effects models to derive a set of speech subsystem predictors. A stepwise approach was used to select the best set of subsystem predictors to model the overall decline in intelligibility. Results: Intelligibility was modeled as a function of five predictors that corresponded to velocities of lip and jaw movements (articulatory), number of syllable repetitions in the alternating motion rate task (articulatory), nasal airflow (resonatory), maximum fundamental frequency (phonatory), and speech pauses (respiratory). The model accounted for 95.6% of the variance in intelligibility, among which the articulatory predictors showed the most substantial independent contribution (57.7%). Conclusion: Articulatory impairments characterized by reduced velocities of lip and jaw movements and resonatory impairments characterized by increased nasal airflow served as the subsystem predictors of the longitudinal decline of speech intelligibility in ALS. Declines in maximum performance tasks such as the alternating motion rate preceded declines in intelligibility, thus serving as early predictors of bulbar dysfunction. Following the rapid decline in speech intelligibility, a precipitous decline in maximum performance tasks subsequently occurred.
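A rough sketch of the analysis pipeline on simulated data: principal component analysis of the instrumental subsystem measures followed by a linear model of intelligibility. The mixed-effects and stepwise-selection steps of the actual study are omitted, and all data here are synthetic.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_sessions, n_measures = 200, 12
    X = rng.normal(size=(n_sessions, n_measures))              # simulated subsystem measures
    intelligibility = 5 * X[:, 0] + 2 * X[:, 1] + rng.normal(size=n_sessions)

    components = PCA(n_components=5).fit_transform(X)          # derived subsystem predictors
    model = LinearRegression().fit(components, intelligibility)
    r2 = model.score(components, intelligibility)              # variance accounted for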
Project description:During binocular rivalry, physical stimulation is dissociated from conscious visual awareness. Human brain imaging reveals a tight linkage between the neural events in human primary visual cortex (V1) and the dynamics of perceptual waves during transitions in dominance. Here, we report results from experiments in which observers' attention was diverted from the rival stimuli. The results imply that competition between the two rival stimuli involves neural circuits in V1, and that attention is crucial for the consequences of this neural competition to advance to higher visual areas and promote perceptual waves.
Project description:The asynchronous time-based neuromorphic image sensor (ATIS) is an array of autonomously operating pixels able to encode luminance information with an exceptionally high dynamic range (>143 dB). This paper introduces an event-based methodology for displaying data from this type of event-based imager, taking into account the large dynamic range and high temporal accuracy that go beyond available mainstream display technologies. We introduce an event-based tone mapping methodology for asynchronously acquired, time-encoded gray-level data. A global and a local tone mapping operator are proposed. Both are designed to operate on a stream of incoming events rather than on time-frame windows. Experimental results on real outdoor scenes are presented to evaluate the performance of the tone mapping operators in terms of quality, temporal stability, adaptation capability, and computational time.
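A minimal sketch of a global, event-driven tone mapping operator, assuming each exposure-measurement event carries a pixel address and an exposure duration whose inverse encodes intensity; the resolution, constants, and running-range update are illustrative and do not reproduce the published operators.

    import numpy as np

    class GlobalToneMapper:
        def __init__(self, height, width):
            self.display = np.zeros((height, width), dtype=np.uint8)
            self.log_min, self.log_max = np.inf, -np.inf   # running dynamic-range estimate

        def on_event(self, x, y, dt):
            """Update the display map for one exposure-measurement event."""
            log_i = -np.log(dt)                            # log intensity (up to a constant)
            self.log_min = min(self.log_min, log_i)
            self.log_max = max(self.log_max, log_i)
            span = max(self.log_max - self.log_min, 1e-9)
            self.display[y, x] = np.uint8(255 * (log_i - self.log_min) / span)

    # events arrive asynchronously as (x, y, exposure_duration) tuples
    mapper = GlobalToneMapper(240, 304)
    for x, y, dt in [(10, 20, 1e-3), (11, 20, 5e-4), (12, 20, 2e-2)]:
        mapper.on_event(x, y, dt)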
Project description:Global silicate weathering drives long-time-scale fluctuations in atmospheric CO2. While tectonics, climate, and rock type influence silicate weathering, it is unclear how these factors combine to drive global rates. Here, we explore whether local erosion rates, GCM-derived dust fluxes, temperature, and water balance can capture global variation in silicate weathering. Our spatially explicit approach predicts 1.9-4.6 × 10^13 mol of Si weathered globally per year, within a factor of 4-10 of estimates of global silicate fluxes derived from riverine measurements. Similarly, our watershed-based estimates are within a factor of 4-18 (mean of 5.3) of the silica fluxes measured in the world's ten largest rivers. Eighty percent of total global silicate weathering product traveling as dissolved load occurs within a narrow range (0.01-0.5 mm/year) of erosion rates. Assuming each mol of Mg or Ca reacts with 1 mol of CO2, 1.5-3.3 × 10^8 tons/year of CO2 is consumed by silicate weathering, consistent with previously published estimates. Approximately 50% of this drawdown occurs in the world's active mountain belts, emphasizing the importance of tectonic regulation of global climate over geologic timescales.
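A quick unit-conversion check of the reported CO2 drawdown figures (a sanity check only; the spatially explicit weathering model and the underlying Mg/Ca fluxes are not reproduced here):

    # convert the reported CO2 consumption from metric tons/yr to mol/yr
    CO2_MOLAR_MASS = 44.01                        # g/mol
    tons_per_year = (1.5e8, 3.3e8)                # reported CO2 consumption range
    mol_per_year = [t * 1e6 / CO2_MOLAR_MASS for t in tons_per_year]
    # ~3.4e12 to 7.5e12 mol CO2/yr, the same order of magnitude as (and below)
    # the 1.9e13-4.6e13 mol/yr of Si predicted to weather globally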