Project description:Many bioimage analysis projects produce quantitative descriptors of regions of interest in images. Associating these descriptors with visual characteristics of the objects they describe is a key step in understanding the data at hand. However, as bioimage data and their analysis workflows increasingly move to the cloud, supporting interactive data exploration in remote environments has become a pressing issue. To address it, we developed the Image Data Explorer (IDE) as a web application that integrates interactive linked visualization of images and derived data points with exploratory data analysis methods, annotation, classification and feature selection functionalities. The IDE is written in R using the shiny framework. It can be easily deployed on a remote server or on a local computer. The IDE is available at https://git.embl.de/heriche/image-data-explorer and a cloud deployment is accessible at https://shiny-portal.embl.de/shinyapps/app/01_image-data-explorer.
Project description:In underwater environments, object recognition is an important basis for implementing underwater unmanned vessels. For this purpose, abundant experimental data are required to train deep learning models. However, such data are very difficult to obtain because underwater experiments are severely limited in preparation time and resources. In this study, the image transformation model Pix2Pix is used to generate data similar to the experimental data obtained by our ROV, SPARUS, in a pool and a reservoir. These generated data are used to train another deep learning model, an FCN, for pixel-wise segmentation of images. Training the segmentation model requires an original sonar image and its mask image for every training sample, which would take considerable effort if all training data had to be real sonar images. Fortunately, this burden is relieved here, because pairs of mask images and synthesized sonar images are already produced in the image transformation step. The validity of the proposed procedure is verified by the performance of the resulting image segmentation. In this study, when only real sonar images are used for training, the mean accuracy is 0.7525 and the mean IoU is 0.7275. When both synthetic and real data are used for training, the mean accuracy is 0.81 and the mean IoU is 0.7225. Comparing the results, the mean accuracy increases by about 6 percentage points, while the mean IoU remains at a similar value.
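The mean accuracy and mean IoU quoted above can be computed from predicted and ground-truth label masks along the following lines; this is a generic sketch of the metrics, and the function and array names are illustrative, not taken from the study's code:

```python
import numpy as np

def mean_accuracy_and_iou(pred, gt, num_classes):
    """Per-class pixel accuracy and IoU, averaged over the classes present."""
    accs, ious = [], []
    for c in range(num_classes):
        p = (pred == c)
        g = (gt == c)
        if g.sum() == 0 and p.sum() == 0:
            continue  # class absent from both masks; skip it
        tp = np.logical_and(p, g).sum()
        accs.append(tp / max(g.sum(), 1))            # per-class pixel accuracy
        ious.append(tp / np.logical_or(p, g).sum())  # intersection over union
    return float(np.mean(accs)), float(np.mean(ious))
```

Averaging these per-class scores over a test set would yield numbers comparable to the 0.75/0.81 accuracy and ~0.72 IoU figures reported above.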
Project description:Data classification is one of the most commonly used applications of machine learning. There are many well-developed algorithms that perform this task with excellence across various environments and data distributions. Classification algorithms, just like other machine learning algorithms, have one thing in common: in order to operate on data, they must see the data. In the present world, where concerns about privacy, the GDPR (General Data Protection Regulation), business confidentiality and security are growing ever larger, this requirement to work directly on the original data can, in some situations, become a burden. In this paper, an approach is presented to the classification of images that cannot be directly accessed during training. It is shown that one can train a deep neural network to create a representation of the original data such that (i) without additional information, the original data cannot be restored, and (ii) this representation, called a masked form, can still be used for classification purposes. Moreover, it is shown that classification of the masked data can be done using both classical and neural network-based classifiers.
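A toy sketch of the masked-classification idea follows. It is not the paper's method: a fixed nonlinear random projection stands in for the trained masking network, and a nearest-centroid rule stands in for a "classical" classifier; it only illustrates that classification can operate on the masked vectors rather than the original images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained masking network: a fixed nonlinear projection
# mapping 64-pixel images to 16-dimensional "masked" vectors.
W = rng.normal(size=(64, 16))

def mask(images):
    return np.tanh(images @ W)  # the nonlinearity hinders exact inversion

# Classification is then done on the masked vectors only.
def fit_centroids(masked, labels):
    return {c: masked[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(masked, centroids):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(masked - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]
```

In the paper the masking transform is learned by a deep network so that invertibility is prevented by construction; the fixed projection here only mimics the data flow.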
Project description:We report an algorithm for reconstructing images when the average number of photons recorded per pixel is of order unity, i.e. photon-sparse data. The image optimisation algorithm minimises a cost function incorporating both a Poissonian log-likelihood term, based on the deviation of the reconstructed image from the measured data, and a regularization term based on the sum of the moduli of the second spatial derivatives of the reconstructed image pixel intensities. The balance between these two terms is set by a bootstrapping technique in which the target value of the log-likelihood term is deduced from a smoothed version of the original data. When compared to the original data, the processed images exhibit lower residuals with respect to the true object. We use photon-sparse data from two different experimental systems, one based on a single-photon avalanche photo-diode array and the other on a time-gated, intensified camera. However, the same processing technique could most likely be applied to any low-photon-number image irrespective of how the data are collected.
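The cost function described above can be sketched as follows. The weight `lam` is an illustrative stand-in for the balance that the paper sets via its bootstrapping technique, and the discretisation of the second derivatives is a generic finite-difference choice, not necessarily the authors':

```python
import numpy as np

def cost(img, data, lam):
    """Poissonian negative log-likelihood plus a second-derivative penalty."""
    img = np.clip(img, 1e-12, None)         # keep the logarithm well-defined
    nll = np.sum(img - data * np.log(img))  # Poisson NLL, up to a data-only constant
    # Sum of moduli of discrete second spatial derivatives (x and y).
    d2x = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    d2y = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    reg = np.abs(d2x).sum() + np.abs(d2y).sum()
    return nll + lam * reg
```

Minimising this over `img` for fixed photon-count `data` would reproduce the trade-off the abstract describes: fidelity to the measured counts versus smoothness of the reconstruction.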
Project description:Despite the ongoing interest in the fusion of multi-band images for surveillance applications and a steady stream of publications in this area, there is only a very small number of static registered multi-band test images (and a total lack of dynamic image sequences) publicly available for the development and evaluation of image fusion algorithms. To fill this gap, the TNO Multiband Image Collection provides intensified visual (390-700 nm), near-infrared (700-1000 nm), and longwave infrared (8-12 µm) nighttime imagery of different military and surveillance scenarios, showing different objects and targets (e.g., people, vehicles) in a range of different (e.g., rural, urban) backgrounds. The dataset will be useful for the development of static and dynamic image fusion algorithms, color fusion algorithms, multispectral target detection and recognition algorithms, and dim target detection algorithms.
Project description:Purpose: Image-based data mining (IBDM) is a novel voxel-based method for analyzing radiation dose responses that has been successfully applied in adult data. Because anatomic variability and side effects of interest differ for children compared to adults, we investigated the feasibility of IBDM for pediatric analyses. Methods: We tested IBDM with CT images and dose distributions collected from 167 children (aged 10 months to 20 years) who received proton radiotherapy for primary brain tumors. We used data from four reference patients to assess IBDM sensitivity to reference selection. We quantified spatial-normalization accuracy via contour distances and deviations of the centers-of-mass of brain substructures. We performed dose comparisons with simplified and modified clinical dose distributions with a simulated effect, assessing their accuracy via sensitivity, positive predictive value (PPV) and Dice similarity coefficient (DSC). Results: Spatial normalizations and dose comparisons were insensitive to reference selection. Normalization discrepancies were small (average contour distance < 2.5 mm, average center-of-mass deviation < 6 mm). Dose comparisons identified differences (p < 0.01) in 81% of simplified and all modified clinical dose distributions. The DSCs for simplified doses were high (peak frequency magnitudes of 0.9-1.0). However, the PPVs and DSCs were low (maximum 0.3 and 0.4, respectively) in the modified clinical tests. Conclusions: IBDM is feasible for childhood late-effects research. Our findings may inform cohort selection in future studies of pediatric radiotherapy dose responses and facilitate treatment planning to reduce treatment-related toxicities and improve quality of life among childhood cancer survivors.
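The sensitivity, PPV and Dice similarity coefficient used in the dose comparisons above are standard overlap metrics between a detected region and the simulated-effect region; a generic sketch for binary masks (illustrative, not the study's implementation) is:

```python
import numpy as np

def overlap_metrics(detected, true_region):
    """Sensitivity, PPV and Dice coefficient between two binary masks."""
    tp = np.logical_and(detected, true_region).sum()  # true-positive voxels
    sens = tp / true_region.sum()                     # fraction of true region found
    ppv = tp / detected.sum()                         # fraction of detections correct
    dsc = 2 * tp / (detected.sum() + true_region.sum())
    return sens, ppv, dsc
```

A high DSC with low PPV, as in the modified clinical tests above, indicates that detections overlap the true region but also spill well beyond it.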
Project description:The study of phenomes, or phenomics, has been a central part of biology. The field of automatic image-based phenotype acquisition technologies has seen important advances in recent years. As with other high-throughput technologies, it faces a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We then give an in-depth analysis of image processing, its major issues, and the algorithms that are being used or emerging as useful for extracting data from images in an automatic fashion.
Project description:Cellular Ca2+ signals are often constrained to cytosolic micro- or nano-domains where stochastic openings of Ca2+ channels cause large fluctuations in local Ca2+ concentration (Ca2+ 'noise'). With the advent of TIRF microscopy to image the fluorescence of Ca2+-sensitive probes from attoliter volumes it has become possible to directly monitor these signals, which closely track the gating of plasmalemmal and ER Ca2+-permeable channels. Nevertheless, it is likely that many physiologically important Ca2+ signals are too small to resolve as discrete events in fluorescence recordings. By analogy with noise analysis of electrophysiological data, we explore here the use of statistical approaches to detect and analyze such Ca2+ noise in images obtained using Ca2+-sensitive indicator dyes. We describe two techniques - power spectrum analysis and spatio-temporal correlation - and demonstrate that both effectively identify discrete, spatially localized calcium release events (Ca2+ puffs). Moreover, we show they are able to detect localized noise fluctuations in a case where discrete events cannot directly be resolved.
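As a minimal illustration of the power spectrum approach mentioned above, the fluorescence time series of a single pixel can be analysed with a periodogram; this is a generic sketch, not the authors' code, and in practice channel-driven Ca2+ fluctuations would appear as excess low-frequency power above the flat shot-noise floor:

```python
import numpy as np

def pixel_power_spectrum(trace, dt):
    """One-sided power spectrum of a single-pixel fluorescence time series."""
    trace = trace - trace.mean()  # remove the DC (mean fluorescence) component
    spec = np.abs(np.fft.rfft(trace)) ** 2 / len(trace)
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    return freqs, spec
```

Mapping a summary of each pixel's spectrum (e.g. its low-frequency power) across the image would highlight sites of localized Ca2+ noise, in the spirit of the statistical approaches the abstract describes.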
Project description:Access to primary research data is vital for the advancement of science. To extend the data types supported by community repositories, we built a prototype Image Data Resource (IDR) that collects and integrates imaging data acquired across many different imaging modalities. IDR links data from several imaging modalities, including high-content screening, super-resolution and time-lapse microscopy, digital pathology, public genetic or chemical databases, and cell and tissue phenotypes expressed using controlled ontologies. Using this integration, IDR facilitates the analysis of gene networks and reveals functional interactions that are inaccessible to individual studies. To enable re-analysis, we also established a computational resource based on Jupyter notebooks that allows remote access to the entire IDR. IDR is also an open source platform that others can use to publish their own image data. Thus IDR provides both a novel on-line resource and a software infrastructure that promotes and extends publication and re-analysis of scientific image data.