Project description:Background: Neural circuit function is highly sensitive to energetic limitations. Much like in mammals, brain activity in American bullfrogs quickly fails in hypoxia. However, after emergence from overwintering, circuits transform to function for approximately 30-fold longer without oxygen, using only anaerobic glycolysis for fuel, a unique trait among vertebrates given the high cost of network activity. Here, we assessed the neuronal functions that normally limit network output and identified components that undergo energetic plasticity to increase robustness in hypoxia. Results: In control animals, oxygen deprivation depressed excitatory synaptic drive within native circuits, which decreased postsynaptic firing and caused network failure within minutes. Assessments of evoked and spontaneous synaptic transmission showed that hypoxia impairs synaptic communication at both pre- and postsynaptic loci. However, control neurons maintained membrane potentials and a capacity for firing during hypoxia, indicating that these processes do not limit network activity. After overwintering, synaptic transmission persisted in hypoxia to sustain motor function for at least 2 h. Conclusions: Alterations that allow anaerobic metabolism to fuel synapses are critical for transforming a circuit to function without oxygen. Data from many vertebrate species indicate that anaerobic glycolysis cannot fuel active synapses due to the low ATP yield of this pathway. Thus, our results point to a unique strategy whereby synapses switch from oxidative to exclusively anaerobic glycolytic metabolism to preserve circuit function during prolonged energy limitations.
Project description:We present a new supervised image classification method applicable to a broad class of image deformation models. The method makes use of the previously described Radon Cumulative Distribution Transform (R-CDT) for image data, whose mathematical properties are exploited to express the image data in a form more suitable for machine learning. While certain operations such as translation, scaling, and higher-order transformations are challenging to model in native image space, we show that the R-CDT can capture some of these variations and thus render the associated image classification problems easier to solve. The method, which uses a nearest-subspace algorithm in the R-CDT space, is simple to implement, non-iterative, has no hyperparameters to tune, is computationally and label efficient, and provides accuracies competitive with state-of-the-art neural networks for many types of classification problems. Beyond test accuracy, we show improvements (with respect to neural network-based methods) in computational efficiency (it can be implemented without GPUs), in the number of training samples needed, and in out-of-distribution generalization. The Python code for reproducing our results is available at [1].
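As a rough illustration of the classification step only, a nearest-subspace classifier can be sketched as follows. This is a minimal sketch assuming feature vectors have already been mapped into the transform (e.g. R-CDT) space; the function names and the `rank` parameter are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_subspaces(X_by_class, rank):
    """For each class, compute an orthonormal basis of (an approximation to)
    the span of its training features via truncated SVD. Features are
    assumed to already live in the transform (e.g. R-CDT) space."""
    bases = {}
    for c, X in X_by_class.items():        # X: (n_samples, n_features)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        bases[c] = Vt[:rank]               # (rank, n_features) row-orthonormal
    return bases

def predict(x, bases):
    """Assign x to the class whose subspace reconstructs it best,
    i.e. smallest residual after orthogonal projection onto the basis."""
    def residual(B):
        return np.linalg.norm(x - B.T @ (B @ x))
    return min(bases, key=lambda c: residual(bases[c]))
```

The classifier is non-iterative and has no tuned hyperparameters beyond the subspace rank, which matches the simplicity claims in the description.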
Project description:We propose and study a reconstruction method for photoacoustic tomography (PAT) based on total generalized variation (TGV) regularization for the inversion of the slice-wise 2D Radon transform in 3D. The latter problem occurs in recently developed PAT imaging techniques with parallelized integrating ultrasound detection, where projection data from various directions are acquired sequentially. As the imaging speed is presently limited to 20 seconds per 3D image, reconstructing temporally resolved 3D sequences of, e.g., one heartbeat or breathing cycle is very challenging, and motion artifacts in the reconstructions currently obstruct their applicability for biomedical research. To push these techniques towards real time, it becomes necessary to reconstruct from less measured data, such as few-projection data, and consequently to employ sophisticated reconstruction methods that avoid the typical artifacts. The proposed TGV-regularized Radon inversion is a variational method that is shown to be capable of such artifact-free inversion. It is validated by numerical simulations, compared to filtered back projection (FBP), and performance-tested on real data from phantom as well as in-vivo mouse experiments. The results indicate that a speed-up factor of four is possible without compromising reconstruction quality.
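For orientation, a TGV-regularized inversion of this kind can be written in the standard second-order TGV form below; the weights α₀, α₁ and the slice-wise Radon operator R stand in for the paper's exact setup and discretization.

```latex
\min_{u}\ \tfrac{1}{2}\,\|\mathcal{R}u - f\|_2^2 \;+\; \mathrm{TGV}_\alpha^2(u),
\qquad
\mathrm{TGV}_\alpha^2(u) \;=\; \min_{w}\ \alpha_1 \,\|\nabla u - w\|_{\mathcal{M}} \;+\; \alpha_0 \,\|\mathcal{E}w\|_{\mathcal{M}},
```

where f is the measured projection data and E denotes the symmetrized derivative; the inner minimization over the vector field w is what lets TGV favor piecewise smooth rather than merely piecewise constant reconstructions.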
Project description:Let (M, g) be an analytic, compact, Riemannian manifold with boundary, of dimension n ≥ 2. We study a class of generalized Radon transforms, integrating over a family of hypersurfaces embedded in M, satisfying the Bolker condition (in: Quinto, Proceedings of conference "Seventy-five Years of Radon Transforms", Hong Kong, 1994). Using analytic microlocal analysis, we prove a microlocal regularity theorem for generalized Radon transforms on analytic manifolds defined on an analytic family of hypersurfaces. We then show injectivity and stability for an open, dense subset of smooth generalized Radon transforms satisfying the Bolker condition, including the analytic ones.
Project description:Neural activity leads to hemodynamic changes that can be detected by functional magnetic resonance imaging (fMRI). Determining blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel produces streaks in space-time images, where the streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and processes data quickly. However, the precision of the velocity measurements depends on the number of Radon transforms performed, which creates a trade-off between processing speed and measurement precision. In addition, factors such as image contrast, imaging depth, image acquisition speed, and movement artifacts, especially in large mammals, can lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and applying the Radon transform iteratively address these issues and provide more accurate blood velocity measurements. The improved signal quality resulting from Sobel filtering increases accuracy, and the iterative Radon transform offers both increased precision and an order-of-magnitude faster implementation of velocity measurements. The algorithm does not use a priori knowledge of angle information and is therefore sensitive to sudden changes in blood flow. It can be applied to any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature.
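The underlying idea, finding the streak angle as the projection direction that maximizes variance, can be sketched with a shear-and-sum projection over candidate slopes. This is an illustrative toy, not the authors' algorithm, which additionally applies Sobel filtering and iterates over refined angle ranges; function names and the synthetic wraparound streaks are assumptions.

```python
import numpy as np

def projection_variance(img, slope):
    """Shear each time-row of a (time x space) image by slope*t (with
    wraparound) and sum down the columns. Streaks matching `slope`
    pile up into a few columns, maximizing the variance of the sum;
    this is the quantity a Radon transform maximizes over angle."""
    n_t, n_x = img.shape
    acc = np.zeros(n_x)
    for t in range(n_t):
        acc += np.roll(img[t], -int(round(slope * t)))
    return acc.var()

def estimate_streak_slope(img, candidate_slopes):
    """Return the candidate slope (pixels per scan line) whose sheared
    projection has maximal variance; velocity follows as slope * dx/dt
    for the pixel size dx and line period dt."""
    return max(candidate_slopes, key=lambda s: projection_variance(img, s))
```

Iterating this search on a coarse-to-fine grid of slopes is what trades a fixed dense angle sweep for both speed and precision.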
Project description:A compact Green's function for general dispersive anisotropic poroelastic media in the full-frequency regime is presented for the first time. First, starting in the frequency domain, the anisotropic dispersion is exactly incorporated into the constitutive relationship, thus avoiding fractional derivatives in the time domain. Then, based on the Radon transform, the original three-dimensional differential equation is effectively reduced to a one-dimensional system in space. Furthermore, inspired by the strategy adopted in the characteristic analysis of hyperbolic equations, the eigenvector diagonalization method is applied to decouple the one-dimensional vector problem into several independent scalar equations, from which the fundamental solutions are easily obtained. A further derivation shows that the Green's function can be decomposed into circumferential and spherical integrals, corresponding to static and transient responses, respectively. The procedures shown in this study are also compatible with other pertinent multi-physics coupling problems, such as piezoelectric, magneto-electro-elastic, and thermo-elastic materials. Finally, verification and validation against existing analytical solutions and numerical solvers corroborate the correctness of the proposed Green's function.
Project description:Linear mixed-effect models (LMMs) are increasingly widely used in psychology to analyse multi-level research designs. Because they do not average across individual responses, LMMs can address some of the problems identified by Speelman and McGann (2013) concerning the use of mean data. However, recent guidelines for using LMMs to analyse the skewed reaction time (RT) data collected in many cognitive psychology studies recommend applying non-linear transformations to satisfy assumptions of normality. Uncritical adoption of this recommendation has important theoretical implications and can yield misleading conclusions. For example, Balota et al. (2013) showed that analyses of raw RT produced additive effects of word frequency and stimulus quality on word identification, which conflicted with the interactive effects observed in analyses of transformed RT. Generalized linear mixed-effect models (GLMMs) provide a solution to this problem by satisfying normality assumptions without the need for transformation. This allows differences between individuals to be properly assessed, using the metric most appropriate to the researcher's theoretical context. We outline the major theoretical decisions involved in specifying a GLMM and illustrate them by reanalysing Balota et al.'s datasets. We then consider the broader benefits of using GLMMs to investigate individual differences.
Project description:Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude.
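To make the summation concrete, here is a direct O(N³) sketch of summing pixel values along one quadrant of digital lines, complete in slope and intercept. The periodic wraparound and this particular line digitization are simplifying assumptions; the fast DRT computes sums of this kind recursively in O(N² log N) with a different (exactly invertible) line family.

```python
import numpy as np

def direct_drt(img):
    """Sum pixel values of a square image along the digital lines
    y = h + round(s * x / (N - 1)) for every slope index s and intercept h
    (one quadrant of slopes, rows taken modulo N). out[s, h] is one
    line sum; each pixel lies on exactly one line per slope."""
    n = img.shape[0]
    xs = np.arange(n)
    out = np.zeros((n, n))
    for s in range(n):
        rows = np.rint(s * xs / (n - 1)).astype(int)
        for h in range(n):
            out[s, h] = img[(h + rows) % n, xs].sum()
    return out
```

Note that only additions and index lookups are involved, mirroring the DRT's avoidance of multiplication and interpolation; swapping `sum` for `median` here gives the flavor of the generalized median-line transform mentioned above.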
Project description:Globally, the number of dengue cases has been on the increase since 1990, and this trend has also been found in Brazil and its most populated city, São Paulo. Surveillance systems based on predictions allow for timely decision-making processes and, in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models, and artificial neural networks, using a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance: it predicted epidemics of unprecedented magnitude, and its performance was 3.16 times higher than the benchmark and 1.47 times higher than the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning were straightforward and required seconds or at most a few minutes, desirable characteristics for providing timely results to decision makers. However, it should be noted that predictions are made just one month ahead, a limitation that future studies could try to reduce.
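The "lags of the number of cases" predictors can be illustrated with a small helper. The names and the last-observation naïve benchmark are illustrative assumptions; the study's models additionally include meteorological covariates and seasonal structure.

```python
import numpy as np

def make_lagged(series, n_lags):
    """Build an autoregressive design matrix: row i holds the n_lags
    values preceding y[i], so a model can regress this month's case
    count on the counts of the previous months."""
    series = np.asarray(series, dtype=float)
    y = series[n_lags:]
    X = np.column_stack([series[n_lags - 1 - k : len(series) - 1 - k]
                         for k in range(n_lags)])
    return X, y

def naive_forecast(series):
    """Benchmark: predict each value by the previous observation."""
    return np.asarray(series, dtype=float)[:-1]
```

Any of the compared model families (SARIMA, GAM, neural network) can then be fit to (X, y) and scored against the naïve forecast on held-out months.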
Project description:We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds on dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. We also find the precise limit at which exclusively selecting high-degree nodes becomes inefficient for network domination. We validate our results on several real-world networks and provide highly accurate analytical estimates for our methods.
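One way a degree-dependent probabilistic selection of this kind can be sketched is below. The selection rule and the repair pass are illustrative assumptions, not the paper's exact strategies; `prob_of_degree` is the hypothetical degree-dependent inclusion probability.

```python
import random

def probabilistic_dominating_set(adj, prob_of_degree, seed=0):
    """Each node joins the candidate set independently with probability
    prob_of_degree(degree of node); a repair pass then covers any node
    that is neither selected nor adjacent to a selected node, yielding
    a valid (not necessarily minimum) dominating set."""
    rng = random.Random(seed)
    D = {v for v in adj if rng.random() < prob_of_degree(len(adj[v]))}
    for v in adj:
        if v not in D and not any(u in D for u in adj[v]):
            # add the highest-degree neighbour (or v itself if isolated)
            D.add(max(adj[v], key=lambda u: len(adj[u])) if adj[v] else v)
    return D
```

Taking `prob_of_degree` to be a step function of degree recovers a deterministic degree-threshold method as a limiting case, which is the regime where the comparison with degree-ranked selection becomes meaningful.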