Project description: Background: The localization of objects of interest is a key initial step in most image-analysis workflows. For biomedical image data, classical image-segmentation methods such as thresholding or edge detection are typically used. While these methods perform well for labelled objects, they reach their limits when samples are poorly contrasted against the background, or when only parts of larger structures should be detected. Furthermore, the development of such pipelines requires substantial engineering of analysis workflows and often results in case-specific solutions. We therefore propose a new, straightforward and generic approach to object localization by template matching that uses multiple template images to improve detection capacity. Results: We provide a new implementation of template matching that offers higher detection capacity than the single-template approach by supporting multiple template images. To provide an easy-to-use method for the automatic localization of objects of interest in microscopy images, we implemented multi-template matching as a Fiji plugin, a KNIME workflow and a Python package. We demonstrate its application to the localization of entire, partial and multiple biological objects in zebrafish and medaka high-content screening datasets. The Fiji plugin can be installed by activating the Multi-Template-Matching and IJ-OpenCV update sites. The KNIME workflow is available on NodePit and KNIME Hub. Source code and documentation are available on GitHub (https://github.com/multi-template-matching). Conclusion: Multi-template matching is a simple yet powerful object-localization algorithm that requires no data pre-processing or annotation. Our implementation can be used out of the box by non-expert users on any type of 2D image.
It is compatible with a wide variety of applications including, for instance, the analysis of large-scale datasets originating from automated microscopy, the detection and tracking of objects in time-lapse assays, or as a general image-analysis step in custom processing pipelines. Using different templates corresponding to distinct object categories, the tool can also be used to classify the detected regions.
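The published tool wraps OpenCV's template matching; purely as an illustration of the underlying idea, the following NumPy sketch (function names and the toy image are our own, not the plugin's API) scores every image position against each template with normalized cross-correlation and pools the above-threshold hits:

```python
# Minimal multi-template-matching sketch: hypothetical helper names, plain NumPy.
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation score at every valid template position."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * tnorm
            out[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

def multi_template_match(image, templates, score_thresh=0.9):
    """Pool above-threshold hits from all templates, best score first."""
    hits = []
    for idx, tmpl in enumerate(templates):
        scores = ncc_map(image, tmpl)
        for y, x in zip(*np.where(scores >= score_thresh)):
            hits.append((scores[y, x], idx, int(y), int(x)))
    return sorted(hits, reverse=True)

# Toy demo: one patterned object at row 5, col 6; two candidate templates.
img = np.zeros((20, 20))
patch = np.arange(16, dtype=float).reshape(4, 4)
img[5:9, 6:10] = patch
templates = [patch.copy(), patch[::-1].copy()]  # exact view + flipped view
hits = multi_template_match(img, templates)
```

The top entry of `hits` is the exact-template hit at (5, 6) with score 1.0; in a real pipeline a non-maxima-suppression step would follow to merge overlapping detections from different templates.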
Project description: Quantitative analysis of bioimaging data is often skewed by both shading in space and background variation in time. We introduce BaSiC, an image-correction method based on low-rank and sparse decomposition that solves both issues. In comparison to existing shading-correction tools, BaSiC achieves high accuracy with significantly fewer input images, works for diverse imaging conditions and is robust against artefacts. Moreover, it can correct temporal drift in time-lapse microscopy data and thus improve continuous single-cell quantification. BaSiC requires no manual parameter setting and is available as a Fiji/ImageJ plugin.
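For orientation, this is not BaSiC's low-rank-plus-sparse decomposition but the kind of naive flat-field baseline such tools are compared against: estimate the shading as the per-pixel median across frames and divide it out (synthetic data and function names are our own):

```python
# Naive retrospective flat-field correction, assuming the shading profile is
# identical in every frame of a NumPy stack (frames, height, width).
import numpy as np

def estimate_flatfield(stack):
    """Per-pixel median across frames, normalized to mean 1."""
    flat = np.median(stack, axis=0)
    return flat / flat.mean()

def correct(stack):
    """Divide each frame by the estimated flat-field."""
    return stack / estimate_flatfield(stack)

# Synthetic demo: a smooth left-to-right shading multiplied onto flat frames.
y, x = np.mgrid[0:32, 0:32]
shading = 1.0 + 0.5 * (x / 31.0)              # brighter toward the right
frames = np.stack([shading * 100.0 for _ in range(10)])
corrected = correct(frames)
```

On this toy stack the correction recovers a perfectly uniform intensity of 125 (the true signal scaled by the mean shading); BaSiC's decomposition is needed precisely when frames also contain sparse foreground structure and temporal background drift that break this simple median assumption.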
Project description: Background: Light microscopy is of central importance in cell biology. The recent introduction of automated high-content screening has expanded this technology towards the automation of experiments and the execution of large-scale perturbation assays. Nevertheless, the evaluation of microscopy data continues to be a bottleneck in many projects. Among open-source software, CellProfiler and its extension Analyst are currently widely used in automated image processing. Although they have revolutionized image analysis in biology, some routine and many advanced tasks are either not supported or require programming skills of the researcher. This represents a significant obstacle in many biology laboratories. Results: We have developed a tool, Enhanced CellClassifier, which circumvents this obstacle. Enhanced CellClassifier starts from images analyzed by CellProfiler and allows multi-class classification using a support vector machine algorithm. Objects can be trained by clicking directly on the microscopy image in several intuitive training modes. Many routine tasks such as out-of-focus exclusion and well summaries are also supported. Classification results can be integrated with other object measurements, including inter-object relationships. This makes a detailed interpretation of the image possible, allowing the differentiation of many complex phenotypes. For the generation of the output, image, well and plate data are dynamically extracted and summarized. The output can be generated as graphs, Excel files and images with projections of the final analysis, and exported as variables. Conclusion: Here we describe Enhanced CellClassifier, which allows multi-class classification and the elucidation of complex phenotypes. Our tool is designed for biologists who want both simple and flexible analysis of images without requiring programming skills. This should facilitate the implementation of automated high-content screening.
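The train-on-clicked-objects, predict-on-the-rest flow described above can be sketched in a few lines. The tool itself uses a support vector machine; to stay dependency-free this sketch swaps in a nearest-centroid classifier on the same kind of per-object feature vectors (feature names, class labels and values are all hypothetical):

```python
# Nearest-centroid stand-in for the tool's SVM step: learn one centroid per
# phenotype class from hand-labelled objects, then label the remaining objects.
import numpy as np

def fit_centroids(features, labels):
    """One mean feature vector per class, keyed by class name."""
    classes = sorted(set(labels))
    mask = np.array(labels)
    return {c: features[mask == c].mean(axis=0) for c in classes}

def predict(centroids, features):
    """Assign each object to the class with the nearest centroid."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(features - centroids[c], axis=1)
                      for c in classes])
    return [classes[i] for i in dists.argmin(axis=0)]

# Toy per-object measurements: [area, mean_intensity], labelled by clicking.
train_x = np.array([[50.0, 0.2], [55.0, 0.25], [200.0, 0.8], [210.0, 0.75]])
train_y = ["normal", "normal", "mitotic", "mitotic"]
model = fit_centroids(train_x, train_y)
preds = predict(model, np.array([[60.0, 0.3], [190.0, 0.7]]))
```

Here `preds` comes out as `["normal", "mitotic"]`; an SVM replaces the centroid distance with a learned maximum-margin boundary, which matters once classes are not linearly separable in the raw feature space.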
Project description: Since its inception, scanning probe microscopy (SPM) has established itself as the tool of choice for probing surfaces and functionalities at the nanoscale. Although recent developments in instrumentation have greatly improved the metrological aspects of SPM, it is still plagued by the drifts and nonlinearities of the piezoelectric actuators underlying the precise nanoscale motion. In this work, we present an innovative computer-vision-based distortion-correction algorithm for offline processing of functional SPM measurements, allowing two images to be directly overlaid with minimal error, thus correlating position with time evolution and local functionality. To demonstrate its versatility, the algorithm is applied to two very different systems. First, we show the tracking of polarisation switching in an epitaxial Pb(Zr0.2Ti0.8)O3 thin film during high-speed continuous scanning under applied tip bias. Thanks to the precise time-location-polarisation correlation, we can extract the regions of domain nucleation and track the motion of domain walls until they merge in avalanche-like events. Secondly, the morphology of surface folds and wrinkles in graphene deposited on a PET substrate is probed as a function of applied strain, allowing the relaxation of individual wrinkles to be tracked.
Project description: Most essential cellular functions are performed by proteins assembled into larger complexes. Fluorescence Polarization Microscopy (FPM) is a powerful technique that goes beyond traditional imaging methods by allowing researchers to measure not only the localization of proteins within cells, but also their orientation or alignment within complexes or cellular structures. FPM can be easily integrated into standard widefield microscopes with the addition of a polarization modulator. However, the extensive image processing and analysis required to interpret the data have limited its widespread adoption. To overcome these challenges and enhance accessibility, we introduce OOPS (Object-Oriented Polarization Software), a MATLAB package for object-based analysis of FPM data. By combining flexible image segmentation and novel object-based analyses with a high-throughput FPM processing pipeline, OOPS empowers researchers to simultaneously study molecular order and orientation in individual biological structures; conduct population assessments based on morphological features, intensity statistics, and FPM measurements; and create publication-quality visualizations, all within a user-friendly graphical interface. Here, we demonstrate the power and versatility of our approach by applying OOPS to punctate and filamentous structures.
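The per-pixel analysis behind polarization-modulation data (independent of OOPS, which is a MATLAB package) is commonly a fit of the modulated intensity I(θ) = a + b·cos(2(θ − φ)) across polarizer angles θ; a linear least-squares fit on the cos/sin components recovers the orientation φ and a modulation depth b/a. A hedged NumPy sketch with synthetic data (the generic model, not OOPS's actual pipeline):

```python
# Fit I = a + b*cos(2*(theta - phi)) by expanding into a linear model
# I = a + c*cos(2*theta) + s*sin(2*theta), where c = b*cos(2*phi), s = b*sin(2*phi).
import numpy as np

def fit_polarization(theta, intensity):
    """Return (orientation phi in radians, modulation depth b/a)."""
    A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
    a, c, s = np.linalg.lstsq(A, intensity, rcond=None)[0]
    b = np.hypot(c, s)
    phi = 0.5 * np.arctan2(s, c)   # orientation, defined modulo 180 degrees
    return phi, b / a

# Synthetic pixel: true orientation 40 degrees, offset 2.0, amplitude 1.0.
theta = np.deg2rad(np.arange(0, 180, 30, dtype=float))
intensity = 2.0 + 1.0 * np.cos(2 * (theta - np.deg2rad(40.0)))
phi, depth = fit_polarization(theta, intensity)
```

With noise-free synthetic data the fit returns the ground truth exactly (φ = 40 degrees, depth = 0.5); on real FPM data the residual of this fit is itself a useful quality metric.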
Project description: Motivation: Fluorescence localization microscopy is extensively used to study the details of the spatial architecture of subcellular compartments. This modality relies on determining the spatial positions of fluorophores labeling an extended biological structure with precision exceeding the diffraction limit. Several established models describe the influence of pixel size, signal-to-noise ratio and optical resolution on localization precision. Labeling density has also been recognized as an important factor affecting the reconstruction fidelity of the imaged biological structure. However, quantitative data on the combined influence of sampling and localization errors on reconstruction fidelity are scarce. It should be noted that processing localization microscopy data is similar to reconstructing a continuous (extended) non-periodic signal from non-uniform, noisy point samples. In two dimensions the problem may be formulated within the framework of matrix completion. However, no systematic approach has been adopted in microscopy, where images are typically rendered by representing localized molecules with Gaussian distributions (widths determined by the localization precision). Results: We analyze the process of two-dimensional reconstruction of extended biological structures as a function of the density of registered emitters, the localization precision and the area occupied by the rendered localized molecule. We quantify overall reconstruction fidelity with several established image-similarity measures. Furthermore, we analyze the recovered similarity measures in frequency space for different reconstruction protocols, and compare the cut-off frequency to the limiting sampling frequency as determined by the labeling density. Availability and implementation: The source code used in the simulations, along with test images, is available at https://github.com/blazi13/qbioimages. Contact: bruszczy@nencki.gov.pl or t.bernas@nencki.gov.pl.
Supplementary information: Supplementary data are available at Bioinformatics online.
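The Gaussian rendering convention described above is easy to make concrete: each localized molecule contributes a 2D Gaussian whose width models the localization precision. A minimal NumPy sketch with made-up coordinates (not the paper's simulation code):

```python
# Render a localization-microscopy image from molecule coordinates: one
# 2-D Gaussian of width sigma (the localization precision) per molecule.
import numpy as np

def render(coords, shape, sigma):
    """Sum an isotropic Gaussian kernel at each (row, col) coordinate."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for (y, x) in coords:
        img += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma ** 2))
    return img

# Two hypothetical localizations on a 32x32 canvas.
coords = [(10.0, 12.0), (20.0, 5.0)]
img = render(coords, (32, 32), sigma=1.5)
```

A larger `sigma` (or a larger rendered molecule area, in the paper's terms) smooths over sampling gaps at the cost of resolution, which is exactly the trade-off the study quantifies.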
Project description: The presence of systematic noise in images from high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence-intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination-correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate its performance at two levels: (a) the Z'-factor, to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination-correction methods, we have made our complete test data sets as well as open-source image-analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments.
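The Z'-factor used as readout (a) is a standard assay-quality score computed from positive- and negative-control wells: Z' = 1 − 3(σ_p + σ_n)/|μ_p − μ_n|, with values approaching 1 indicating well-separated controls. A short sketch with made-up control readouts (not the paper's data):

```python
# Standard Z'-factor for assay quality from positive/negative control readouts.
import numpy as np

def z_prime(pos, neg):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg| (sample std)."""
    pos = np.asarray(pos, dtype=float)
    neg = np.asarray(neg, dtype=float)
    spread = 3.0 * (pos.std(ddof=1) + neg.std(ddof=1))
    return 1.0 - spread / abs(pos.mean() - neg.mean())

# Hypothetical well intensities for the two control conditions.
pos_controls = [100.0, 102.0, 98.0, 101.0]
neg_controls = [10.0, 12.0, 9.0, 11.0]
score = z_prime(pos_controls, neg_controls)
```

For these toy values the score is about 0.90, i.e. an excellent assay window; uneven illumination inflates the control variances and drags Z' down, which is why the paper uses it to measure the benefit of correction.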
Project description: Optimization of sample, imaging and data-processing parameters is an essential task in localization-based super-resolution microscopy, where the final image quality strongly depends on the imaging of single, isolated fluorescent molecules. A computational solution that uses simulator software for the generation of test data stacks was proposed, developed and tested. The implemented advanced physical models, such as scalar and vector point-spread functions, polarization-sensitive detection, drift, spectral crosstalk and structured background, made the simulation results more realistic and helped us interpret the final super-resolved images and distinguish real structures from imaging artefacts.
Project description: With the advent of in vivo laser-scanning fluorescence microscopy techniques, time series and three-dimensional volumes of living tissue and vessels can be acquired at micron scales to analyze vessel architecture and blood flow. Manually analyzing a large number of image stacks to extract architecture and track blood flow is cumbersome and prone to observer bias; an automated framework to accomplish these analytical tasks is therefore imperative. The first step toward such a framework is to compensate for the motion artifacts present in these microscopy images, which are caused by respiratory motion, heartbeats and other movements of the specimen. Consequently, the amount of motion in these images can be large and hinders their further analysis. In this article, an algorithmic framework for the correction of time-series images is presented. The automated algorithm comprises a rigid and a nonrigid registration step based on shape contexts. The framework performs well on time-series image sequences of the islets of Langerhans and provides the pivotal motion-correction step for the further automatic analysis of microscopy images.
Project description: Single-molecule-localization-based super-resolution microscopy requires accurate sample-drift correction to achieve good results. Common approaches to drift compensation include using fiducial markers and direct drift estimation by image correlation. The former increases experimental complexity, and the latter estimates drift at a reduced temporal resolution. Here we present, to our knowledge, a new approach to drift correction based on a Bayesian statistical framework. The technique has the advantage of calculating the drift for every image frame of the data set directly from the single-molecule coordinates. We present the theoretical foundation of the algorithm and an implementation that achieves significantly higher accuracy than image-correlation-based estimation.
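The image-correlation baseline that this abstract contrasts against can be sketched briefly: cross-correlate two rendered frames via the FFT and take the peak position as the integer-pixel drift (toy frames and the function name are our own; the Bayesian method itself is not reproduced here):

```python
# FFT-based circular cross-correlation between a reference frame and a drifted
# frame; the correlation peak gives the integer-pixel drift (dy, dx).
import numpy as np

def estimate_drift(ref, moved):
    """Return the (dy, dx) shift that maps ref onto moved."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (circular wrap).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Toy frames: a bright square drifted by (+3, -2) pixels between exposures.
ref = np.zeros((32, 32))
ref[8:12, 8:12] = 1.0
moved = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
```

`estimate_drift(ref, moved)` recovers (3, -2) here. The limitation the abstract points at is visible in this setup: correlation needs whole rendered frames, so drift can only be estimated per block of accumulated frames, whereas the Bayesian method works frame-by-frame from the raw coordinates.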