Project description:Summary: Visualization in 3D space is a standard but critical process for examining the complex structure of high-dimensional data. Stereoscopic imaging technology can be adopted to enhance the 3D representation of many complex datasets, especially those consisting of points and lines. We illustrate the simple steps involved and strongly recommend that others implement the approach when designing visualization software. To facilitate its application, we created new software that converts a regular 3D scatterplot or network figure into a pair of stereo images. Availability and implementation: Stereo3D is freely available as an open-source R package released under an MIT license at https://github.com/bioinfoDZ/Stereo3D. Others can integrate the code and implement the method in academic software. Contact: deyou.zheng@einsteinmed.org. Supplementary information: Supplementary data are available at Bioinformatics online.
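Stereo3D is an R package; the following Python sketch is illustrative only and is not the package's code. It shows the general idea behind a side-by-side stereo pair: render the same 3D point cloud from two viewpoints separated by a small horizontal rotation. The random point cloud, the roughly 5-degree angular offset, and the output filename are assumptions for illustration.

```python
# Minimal sketch (not Stereo3D itself): build a side-by-side stereo pair by
# rendering one 3D scatterplot from two slightly rotated viewpoints.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
points = rng.normal(size=(500, 3))            # placeholder 3D data

fig = plt.figure(figsize=(8, 4))
for i, azim_offset in enumerate((-2.5, 2.5)): # ~5 degrees of separation (assumed)
    ax = fig.add_subplot(1, 2, i + 1, projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=5)
    ax.view_init(elev=20, azim=-60 + azim_offset)  # left-eye / right-eye view
    ax.set_axis_off()
plt.tight_layout()
plt.savefig("stereo_pair.png", dpi=300)       # view cross-eyed or with a stereoscope
```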
Project description:The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, the most commonly used technique, and semi-automatic approaches can be time consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the import of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated that can be printed in 3D or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique in that it covers the whole process from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There are many studies on the application of augmented and virtual reality to 3D medical image visualization; however, they do not describe automated platforms. Although other anatomical structures can also be studied, we focused on one case: a lung study. Analyzing the application of the platform to more than 1000 DICOM images and studying the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
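The step from a segmentation mask to a printable or AR/VR-ready model is typically a surface extraction. The sketch below shows one common way to do this (marching cubes via scikit-image); it is illustrative only, is not Nextmed's implementation, and the synthetic cube mask stands in for a real segmented DICOM volume.

```python
# Illustrative sketch only: turn a binary segmentation volume into a triangle
# mesh, the usual intermediate between automatic segmentation and 3D printing
# or AR/VR visualization.
import numpy as np
from skimage import measure

# Placeholder mask; in practice this would come from the segmentation
# algorithm applied to the patient's DICOM series.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[16:48, 16:48, 16:48] = 1

# Marching cubes extracts the isosurface of the labeled region.
verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5, spacing=(1.0, 1.0, 1.0))
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```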
Project description:Compared to traditional vat photopolymerization 3D printing methods, the pixel blending technique provides greater freedom in terms of user-defined light sources. Based on this technology, scientists have conducted research on 3D printing of elastic materials, biologically inert materials, and materials with high transparency, making significant contributions to the fields of portable healthcare and specialty material processing. However, there has been no universal and simple algorithm to facilitate low-cost printing experiments for researchers outside the 3D printing industry. Here, we propose a mathematical approach based on morphology to simulate the light dose distribution and to virtually visualize parts produced using grayscale mask vat photopolymerization 3D printing. Based on this simulation, we develop an auto-correction method inspired by circle packing that modifies the grayscale values of projection images, thereby improving the dimensional accuracy of printed devices. This method can significantly improve printing accuracy with just a single parameter adjustment. We experimentally validated this method on a vat photopolymerization printer using common commercial resins, demonstrating its feasibility for printing high-precision structures. The parameters used in this method are simpler to acquire than those required by conventional techniques for obtaining optical parameters. For researchers outside the vat photopolymerization 3D printing industry, it is relatively user-friendly.
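As a rough illustration of what simulating the light dose of a grayscale projection mask involves, the sketch below blurs the mask with a Gaussian point-spread function and thresholds it at an assumed curing dose. This is a simplified stand-in, not the paper's morphology-based algorithm or its circle-packing correction, and the blur width and dose threshold are assumed values.

```python
# Simplified stand-in for a grayscale-mask light-dose simulation: optical blur
# spreads each pixel's dose, and voxels above a curing threshold solidify.
import numpy as np
from scipy.ndimage import gaussian_filter

mask = np.zeros((256, 256), dtype=float)
mask[96:160, 96:160] = 0.8        # grayscale value of the projected pixel block

psf_sigma_px = 2.0                # assumed optical blur, in pixels
cure_threshold = 0.5              # assumed normalized curing dose

dose = gaussian_filter(mask, sigma=psf_sigma_px)
cured = dose >= cure_threshold    # predicted solidified region for this layer
print("cured area (pixels):", int(cured.sum()))
```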
Project description:Stereopsis is the rich impression of three-dimensionality based on binocular disparity, the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo-blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task, a 'bug squashing' game, in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training, most participants showed greater reliance on stereoscopic cues, reduced suppression, and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'.
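Cue-conflict designs of this kind are commonly analyzed with a linear cue-combination model; the study's exact formulation is not given here, so the following should be read as the standard form rather than the authors' specification:

$$\hat{s} = w\, s_{\text{stereo}} + (1 - w)\, s_{\text{texture}}, \qquad 0 \le w \le 1,$$

where $\hat{s}$ is the perceived slant, $s_{\text{stereo}}$ and $s_{\text{texture}}$ are the slants specified by the stereoscopic and texture cues, and $w$ is the relative stereo weight, which in this study increased for most participants over training.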
Project description:Background: Virtual reality (VR) enables data visualization in an immersive and engaging manner, and it can be used to create new ways of exploring scientific data. Here, we use VR for visualization of 3D histology data, creating a novel interface for digital pathology to aid cancer research. Methods: Our contribution includes 3D modeling of a whole organ and embedded objects of interest, fusing the models with associated quantitative features and full-resolution serial section patches, and implementing the virtual reality application. Our VR application is multi-scale in nature, covering two object levels representing different ranges of detail, namely the organ level and the sub-organ level. In addition, the application includes several data layers, including the measured histology image layer and multiple representations of quantitative features computed from the histology. Results: In our interactive VR application, the user can set visualization properties, select different samples and features, and interact with various objects, which is not possible in the traditional 2D image view used in digital pathology. In this work, we used whole mouse prostates (organ level) with prostate cancer tumors (sub-organ objects of interest) as example cases and included quantitative histological features relevant for tumor biology in the VR model. Conclusions: Our application enables a novel way to explore high-resolution, multidimensional data for biomedical research purposes and can also be used in teaching and researcher training. Due to automated processing of the histology data, our application can easily be adapted to visualize other organs and pathologies from various origins.
Project description:To glean an appreciation of the holistic genetic activity in the gastrulating mouse embryo, we performed a genome-wide spatial transcriptome analysis (Stereo-seq), using a low-cell-number sequencing protocol on laser-microdissected samples of epiblast cells with a retained positional address. The 3D transcriptome reveals that (i) the epiblast is partitioned into transcription domains corresponding to regions of the epiblast where cells are endowed specifically with ectoderm and mesendoderm potency, (ii) novel lineage markers are identified as genes expressed in epiblast domains populated by cells displaying different lineage fates, (iii) functionally related gene regulatory circuitry and signaling pathways act in concert in the transcriptional domains, and (iv) the spatial information provides reference zipcodes for mapping the prospective address of cell samples from different embryos and stem cell lines. The quantified expression data can also be visualized as a '3D digitized whole mount in situ hybridization' of all the expressed transcripts in the epiblast. (i) Using laser microdissection, we carried out transcriptome profiling on embryo sections at a high resolution of ~20 cells per sample with the spatial information preserved. We then constructed a comprehensive spatial transcriptome map of the mid-gastrulation embryo that is visualized in a 3D embryonic model based on the sequencing data. Embryo position (A/L/P/R) and section (1-11) descriptors: A stands for laser capture microdissected samples from the anterior epiblast of the embryo; P for the posterior; L for the left lateral epiblast of the embryo; R for the right lateral. Sections were collected from distal to proximal, and sections 1 to 11 follow the cryosection order, covering the whole embryonic part of a late mid-streak embryo. Section 1 is the most distal section and section 11 is the most proximal. (ii) Additional samples are RNA-seq data of 70 single cells from an E7.0 mouse embryo. These 70 samples were randomly picked from the anterior or posterior embryonic half.
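As a small aid for working with these descriptors, the sketch below parses a combined label such as 'A3' into its positional address. The 'position letter plus section number' label format is an assumption for illustration; the identifiers in the deposited data may be structured differently.

```python
# Minimal sketch, assuming sample labels combine the epiblast position letter
# (A/P/L/R) with the cryosection number (1-11), e.g. "A3".
POSITIONS = {"A": "anterior", "P": "posterior", "L": "left lateral", "R": "right lateral"}

def parse_sample(label: str) -> dict:
    """Split a label such as 'A3' into its positional address (assumed format)."""
    position, section = label[0].upper(), int(label[1:])
    if position not in POSITIONS or not 1 <= section <= 11:
        raise ValueError(f"unrecognized sample label: {label}")
    return {
        "position": POSITIONS[position],  # epiblast region of the microdissected sample
        "section": section,               # cryosection order: 1 = most distal, 11 = most proximal
    }

print(parse_sample("A3"))
```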
Project description:Extracellular vesicles (EVs) have shown therapeutic properties in several applications, many in regenerative medicine. A clear example is the treatment of osteoarthritis (OA), where adipose-derived mesenchymal stem cell (ASC)-EVs were able to promote regeneration and reduce inflammation in both synovia and cartilage. A still unresolved issue is whether EVs are effectively internalized by target cells rather than simply bound to the extracellular matrix (ECM) or plasma membrane, since current detection and imaging technologies cannot fully resolve this due to technical limitations. In the present study, human articular chondrocytes (ACHs) and fibroblast-like synoviocytes (FLSs) isolated from the same OA patients were cocultured in 2D as well as in 3D conditions with fluorescently labeled ASC-EVs and analyzed by flow cytometry or confocal microscopy, respectively. In contrast with conventional 2D culture, in 3D cultures confocal microscopy allowed clear detection of the three-dimensional morphology of the cells and thus accurate discrimination of EV interaction with the external and/or internal cell environment. In both 2D and 3D conditions, FLSs were more efficient at interacting with ASC-EVs, and 3D imaging demonstrated a faster uptake process. The removal of the hyaluronic acid component from the ECM of both cell types reduced their interaction with ASC-EVs only in the 2D system, showing that 2D and 3D conditions can yield different outcomes when investigating events where the ECM plays a key role. These results indicate that studying EV binding and uptake in both 2D and 3D guarantees a more precise and complementary characterization of the molecular mechanisms involved in the process. The implementation of this strategy can become a valuable tool not only for basic research but also for release assays and potency prediction for clinical EV batches.
Project description:Significance: Optical imaging in the second near-infrared (NIR-II, 1000 to 1700 nm) region is capable of deep tumor vascular imaging due to low light scattering and low autofluorescence. Non-invasive real-time NIR-II fluorescence imaging is instrumental in monitoring tumor status. Aim: Our aim is to develop an NIR-II fluorescence rotational stereo imaging system for 360-deg three-dimensional (3D) imaging of whole-body blood vessels, tumor vessels, and the 3D contour of mice. Approach: Our study combined an NIR-II camera with a 360-deg rotational stereovision technique for tumor vascular imaging and 3D surface contouring of mice. Moreover, self-made NIR-II fluorescent polymer dots were applied for high-contrast NIR-II vascular imaging, along with a 3D blood vessel enhancement algorithm for acquiring high-resolution 3D blood vessel images. The system was validated with a custom-made 3D-printed phantom and in vivo experiments on 4T1 tumor-bearing mice. Results: The results showed that the NIR-II 3D 360-deg tumor blood vessels and mouse contour could be reconstructed with 0.15 mm spatial resolution, 0.3 mm depth resolution, and 5 mm imaging depth in an ex vivo experiment. Conclusions: This NIR-II 3D 360-deg rotational stereo imaging system was first applied to small-animal tumor blood vessel imaging and 3D surface contour imaging, demonstrating its capability to reconstruct tumor blood vessels and the mouse contour. Therefore, the 3D imaging system can be instrumental in monitoring tumor therapy effects.
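The abstract does not specify the 3D blood vessel enhancement algorithm; a Hessian-based vesselness filter (Frangi) is one common choice for this kind of step and is sketched below on a synthetic volume, as an illustration only, with the scale range chosen arbitrarily.

```python
# Illustrative only: enhance tubular (vessel-like) structures in a 3D volume
# with a Frangi vesselness filter, a common choice for vessel enhancement.
import numpy as np
from skimage.filters import frangi

volume = np.zeros((64, 64, 64), dtype=float)
volume[:, 32, 32] = 1.0                          # a single bright "vessel" along one axis

vesselness = frangi(volume, sigmas=range(1, 4),  # assumed vessel radii, in voxels
                    black_ridges=False)          # vessels are bright on a dark background
print("max vesselness response:", float(vesselness.max()))
```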
Project description:Photoacoustic (PA) imaging (or optoacoustic imaging) is a novel biomedical imaging method in biological and medical research. This modality performs morphological, functional, and molecular imaging with and without labels in both microscopic and deep tissue imaging domains. A variety of innovations have enhanced 3D PA imaging performance and thus have opened new opportunities in preclinical and clinical imaging. However, 3D visualization tools for PA images remain a challenge. There are several commercially available software packages for visualizing the generated 3D PA images, but they are generally expensive and their features are not optimized for 3D visualization of PA images. Here, we demonstrate a specialized 3D visualization software package, namely 3D Photoacoustic Visualization Studio (3D PHOVIS), specifically targeting photoacoustic data, image, and visualization processes. To support a research environment for visualization and fast processing, we built 3D PHOVIS on MATLAB with a graphical user interface and developed multi-core graphics processing unit modules for fast processing. 3D PHOVIS includes the following modules: (1) a mosaic volume generator, (2) a scan converter for optical-scanning photoacoustic microscopy, (3) a skin profile estimator and depth encoder, (4) a multiplanar viewer with a navigation map, and (5) a volume renderer with a movie maker. This paper discusses the algorithms present in the software package and demonstrates their functions. In addition, the applicability of this software to ultrasound imaging and optical coherence tomography is also investigated. User manuals and application files for 3D PHOVIS are available for free on the website (www.boa-lab.com). Core functions of 3D PHOVIS were developed as a result of a summer class at POSTECH, "High-Performance Algorithm in CPU/GPU/DSP, and Computer Architecture." We believe 3D PHOVIS provides a unique tool to PA imaging researchers, expedites the growth of the field, and attracts broad interest in a wide range of studies.
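To illustrate the depth-encoding idea behind module (3), the sketch below colors each lateral pixel by the depth of its strongest signal and scales brightness by the signal amplitude. It is written in Python rather than MATLAB, uses a synthetic volume, and is not 3D PHOVIS code; the colormap choice and array layout are assumptions.

```python
# Illustrative depth-encoded maximum amplitude projection (not 3D PHOVIS code):
# hue encodes the depth of the strongest signal, brightness encodes its amplitude.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

rng = np.random.default_rng(1)
volume = rng.random((128, 128, 200))               # placeholder PA volume (x, y, depth)

amplitude = volume.max(axis=2)                     # maximum amplitude projection
depth = volume.argmax(axis=2) / volume.shape[2]    # normalized depth of the maximum

rgb = cm.jet(depth)[..., :3] * amplitude[..., None]  # color by depth, modulate by amplitude
plt.imshow(rgb)
plt.axis("off")
plt.savefig("depth_encoded_map.png", dpi=300)
```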
Project description:Due to technical roadblocks, it is unclear how visual circuits represent multiple features or how behaviorally relevant representations are selected for long-term memory. Here we developed Moculus, a head-mounted virtual reality platform for mice that covers the entire visual field and allows binocular depth perception and full visual immersion. This controllable environment, with three-dimensional (3D) corridors and 3D objects, in combination with 3D acousto-optical imaging, affords rapid visual learning and the uncovering of circuit substrates in one measurement session. Both the control and the reinforcement-associated visual cue-coding neuronal assemblies are transiently expanded by reinforcement feedback to near-saturation levels. This increases computational capability and allows competition among assemblies that encode behaviorally relevant information. The coding assemblies form partially orthogonal and overlapping clusters centered around hub cells with higher and earlier ramp-like responses, as well as locally increased functional connectivity.