Project description:The three-dimensional (3D) vectorial nature of electromagnetic waves of light has not only played a fundamental role in science but also driven disruptive applications in optical display, microscopy, and manipulation. However, conventional optical holography can address only the amplitude and phase information of an optical beam, leaving the 3D vectorial feature of light completely inaccessible. We demonstrate 3D vectorial holography in which an arbitrary 3D vectorial field distribution on a wavefront can be precisely reconstructed using machine-learning inverse design based on multilayer perceptron artificial neural networks. This 3D vectorial holography allows the lensless reconstruction of a 3D vectorial holographic image with an ultrawide viewing angle of 94° and a high diffraction efficiency of 78%, as required for floating displays. The results provide an artificial-intelligence-enabled holographic paradigm for harnessing the vectorial nature of light, enabling new machine-learning strategies for holographic 3D vectorial field multiplexing in display and encryption.
Project description:Chemical short-range order (CSRO) refers to atoms of specific elements self-organising within a disordered crystalline matrix to form particular atomic neighbourhoods. CSRO is typically characterised indirectly, using volume-averaged or projection-based microscopy techniques that fail to capture its three-dimensional atomistic architecture. Here, we present a machine-learning-enhanced approach that breaks the inherent resolution limits of atom probe tomography, enabling three-dimensional imaging of multiple CSROs. We showcase our approach by addressing a long-standing question in body-centred-cubic Fe-Al alloys, which exhibit anomalous property changes upon heat treatment, and use it to evidence non-statistical B2-CSRO instead of the generally expected D03-CSRO. We establish quantitative correlations among annealing temperature, CSRO, nano-hardness, and electrical resistivity. Our approach is further validated on modified D03-CSRO detected in Fe-Ga. The proposed strategy can be generally employed to investigate short-, medium-, and long-range ordering phenomena in different materials and help design future high-performance materials.
Project description:Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. A growing body of evidence suggests that incorporating a motor learning task through visual feedback of the movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns with a three-dimensional motion capture system and displaying them in real time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate such a system and the cost of procuring it. In this paper, we describe a low-cost, two-dimensional, real-time motion-tracking approach that uses a simple webcam and an image-processing algorithm built in LabVIEW Vision Assistant. We evaluated the accuracy of this approach using a high-precision robotic device (Lokomat) across various walking speeds, and we assessed the reliability and feasibility of real-time motion tracking in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could use the real-time kinematic feedback generated by the system to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion-tracking approach is a feasible, low-cost solution for real-time movement analysis and training.
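A minimal sketch of the webcam-based 2D tracking idea described above. The original pipeline was built in LabVIEW Vision Assistant; the OpenCV implementation, the assumption of a brightly coloured marker on the limb, and the HSV thresholds below are all illustrative assumptions, not the authors' code.

```python
# Sketch: track a coloured marker in webcam frames and overlay its centroid in real time.
import cv2
import numpy as np

LOWER_HSV = np.array([40, 80, 80])    # hypothetical lower bound for a green marker
UPPER_HSV = np.array([80, 255, 255])  # hypothetical upper bound

def track_marker(frame):
    """Return the (x, y) pixel centroid of the colour-thresholded marker, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)            # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        centroid = track_marker(frame)
        if centroid is not None:
            # draw the tracked point so the participant sees real-time feedback
            cv2.circle(frame, (int(centroid[0]), int(centroid[1])), 6, (0, 0, 255), -1)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```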
Project description:To quantitatively assess pathological gait, we developed a novel smartphone application for markerless, full-body human motion tracking in real time from video images captured with a smartphone monocular camera and processed with deep learning. As training data for deep learning, an original three-dimensional (3D) dataset comprising more than 1 million images captured from the 3D motion of 90 humanoid characters, together with the two-dimensional COCO 2017 dataset, was prepared. The 3D heatmap and offset data, consisting of 28 × 28 × 28 blocks with three red-green-blue channels at each of the 24 key points of whole-body motion, were learned using a convolutional neural network, a modified ResNet34. At each key point, the offset of the hottest spot from the center of its cell was learned using the tanh function. Our new iOS application can detect the relative tri-axial coordinates of the 24 whole-body key points, centered on the navel, in real time without any motion-capture markers. From these relative coordinates, the 3D angles of the neck, lumbar, bilateral hip, knee, and ankle joints are estimated. Any human motion can thus be quantitatively and easily assessed using the new smartphone application, named Three-Dimensional Pose Tracker for Gait Test (TDPT-GT), without body markers or multi-camera setups.
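A minimal sketch of how 3D key-point coordinates can be decoded from a 28 × 28 × 28 heatmap plus per-cell offsets, in the spirit of the representation described above. The array names, shapes, and the half-cell offset scaling are assumptions for illustration; the network that produces these tensors (the modified ResNet34) is not reproduced here.

```python
# Sketch: decode 24 key points from per-key-point 3D heatmaps and tanh-scaled cell offsets.
import numpy as np

GRID = 28          # blocks per axis
NUM_KEYPOINTS = 24 # whole-body key points

def decode_keypoints(heatmaps, offsets):
    """
    heatmaps: (NUM_KEYPOINTS, GRID, GRID, GRID) confidence per cell.
    offsets:  (NUM_KEYPOINTS, 3, GRID, GRID, GRID) tanh outputs in [-1, 1] giving the
              displacement of the hottest spot from the centre of each cell.
    Returns (NUM_KEYPOINTS, 3) coordinates normalised to [0, 1].
    """
    coords = np.zeros((NUM_KEYPOINTS, 3))
    for k in range(NUM_KEYPOINTS):
        flat = np.argmax(heatmaps[k])
        z, y, x = np.unravel_index(flat, (GRID, GRID, GRID))
        cell = np.array([x, y, z], dtype=float)
        # the offset shifts the estimate away from the cell centre by up to half a cell
        delta = np.array([offsets[k, 0, z, y, x],
                          offsets[k, 1, z, y, x],
                          offsets[k, 2, z, y, x]]) * 0.5
        coords[k] = (cell + 0.5 + delta) / GRID
    return coords

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hm = rng.random((NUM_KEYPOINTS, GRID, GRID, GRID))
    off = np.tanh(rng.normal(size=(NUM_KEYPOINTS, 3, GRID, GRID, GRID)))
    print(decode_keypoints(hm, off).shape)  # (24, 3)
```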
Project description:Biochemical oxygen demand (BOD) is an important indicator of the degree of organic pollution in water bodies. Traditional methods for BOD5 determination, although widely used, are complicated and depend on accurate chemical measurements of dissolved oxygen (DO). The aim of this study was to propose a facile method for predicting BOD from fluorescence signals, using three-dimensional fluorescence spectroscopy and parallel factor analysis in combination with a machine-learning algorithm. Water samples were incubated for five days according to the national standard method, during which the DO content and three-dimensional fluorescence spectra were measured at eight-hour intervals. The maximum fluorescence intensities of three fluorescence components were extracted by parallel factor analysis, and the relationship between these intensities and the BOD5 values was established using a random forest model. The results showed a good correlation between the fluorescence components and the BOD5 values, which the random forest model predicted effectively with a high goodness of fit (R² = 0.878) and a low mean squared error (MSE = 0.28). Although this method does not shorten the incubation time, BOD5 was successfully predicted from the non-contact measurement of fluorescence signals. This avoids the complicated operation of DO determination, improves detection efficiency, and provides a convenient solution for analyzing large numbers of water samples and for facile water-quality monitoring.
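A minimal sketch of the PARAFAC-plus-random-forest pipeline described above, assuming an excitation-emission matrix (EEM) tensor of shape (samples, emission, excitation) and measured BOD5 values. The use of tensorly for PARAFAC and scikit-learn for the random forest is an assumption; the original analysis software is not specified, and the data below are synthetic stand-ins.

```python
# Sketch: decompose EEM data with PARAFAC, then regress BOD5 on the component intensities.
import numpy as np
from tensorly.decomposition import parafac
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

def fit_bod_model(eem_tensor, bod5, n_components=3, seed=0):
    # Sample-mode loadings from PARAFAC act as per-sample component intensities (features).
    _, factors = parafac(eem_tensor, rank=n_components, random_state=seed)
    features = factors[0]                       # shape (n_samples, n_components)
    X_tr, X_te, y_tr, y_te = train_test_split(features, bod5,
                                              test_size=0.3, random_state=seed)
    model = RandomForestRegressor(n_estimators=500, random_state=seed)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    return model, r2_score(y_te, pred), mean_squared_error(y_te, pred)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eem = np.abs(rng.normal(size=(60, 50, 40)))  # synthetic stand-in for real EEM scans
    bod = rng.uniform(1, 10, size=60)            # synthetic BOD5 values (mg/L)
    _, r2, mse = fit_bod_model(eem, bod)
    print(f"R2={r2:.3f}, MSE={mse:.3f}")
```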
Project description:Purpose: To develop a machine-learning image-processing model for three-dimensional (3D) reconstruction of vitreous anatomy visualized with swept-source optical coherence tomography (SS-OCT). Methods: Healthy subjects were imaged with SS-OCT. Scans of sufficient quality were transferred into the Fiji (Fiji Is Just ImageJ) image-processing toolkit, and the proportions of the resulting stacks were adjusted to form cubic voxels. Image averaging and Trainable Weka Segmentation, using Sobel and variance edge-detection and directional membrane-projection filters, were used to enhance and interpret the signals from vitreous gel, liquid spaces within the vitreous, and the interfaces between them. Two classes were defined: "Septa" and "Other." Pixels were selected and added to each class to train the classifier. Results were generated as a probability map, and thresholding was performed to remove pixels classified with low confidence. Volume rendering was performed with TomViz. Results: Forty-seven eyes of 34 healthy subjects were imaged with SS-OCT. Thirty-four cube scans from 25 subjects were of sufficient quality for volume rendering. Clinically relevant vitreous features, including the premacular bursa, area of Martegiani, and prevascular vitreous fissures and cisterns, as well as varying degrees of vitreous degeneration, were visualized in 3D. Conclusions: A machine-learning model for 3D vitreous reconstruction of SS-OCT cube scans was developed. The resulting high-resolution 3D movies illustrate vitreous anatomy in a manner similar to triamcinolone-assisted vitrectomy or postmortem dye injection. Translational relevance: This machine-learning model allows comprehensive 3D examination of vitreous structure beyond the vitreoretinal interface, with potential applications in common disease states such as the vitreomacular traction/macular hole spectrum of diseases and proliferative diabetic retinopathy.
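A minimal sketch of the pixel-classification idea described above (train on sparse annotations, produce a probability map, threshold out low-confidence voxels). It uses scikit-image filters and a scikit-learn random forest as a stand-in for the Fiji/Trainable Weka Segmentation toolchain; the feature set, class labels, and threshold are illustrative assumptions.

```python
# Sketch: voxel-wise "Septa" vs "Other" classification with simple edge/texture features.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume):
    """Stack per-voxel features: raw intensity, Sobel magnitude, local variance."""
    sobel = filters.sobel(volume)
    local_mean = ndi.uniform_filter(volume, size=3)
    local_var = ndi.uniform_filter(volume ** 2, size=3) - local_mean ** 2
    return np.stack([volume, sobel, local_var], axis=-1).reshape(-1, 3)

def train_and_probability_map(volume, labels, threshold=0.8, seed=0):
    """labels: same shape as volume; 1 = 'Septa', 2 = 'Other', 0 = unlabelled."""
    X = voxel_features(volume)
    y = labels.ravel()
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(X[y > 0], y[y > 0])            # train only on annotated voxels
    prob_septa = clf.predict_proba(X)[:, list(clf.classes_).index(1)]
    prob_map = prob_septa.reshape(volume.shape)
    return prob_map, prob_map >= threshold  # keep only confidently classified voxels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.random((32, 64, 64))          # synthetic stand-in for an SS-OCT cube scan
    lab = np.zeros(vol.shape, dtype=int)
    lab[vol > 0.9] = 1                       # sparse fake 'Septa' annotations
    lab[vol < 0.1] = 2                       # sparse fake 'Other' annotations
    prob, mask = train_and_probability_map(vol, lab)
    print(prob.shape, int(mask.sum()))
```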
Project description:Purpose: To determine whether patient survival and mechanisms of right ventricular failure in pulmonary hypertension can be predicted by using supervised machine learning of three-dimensional patterns of systolic cardiac motion. Materials and Methods: The study was approved by a research ethics committee, and participants gave written informed consent. Two hundred fifty-six patients (143 women; mean age ± standard deviation, 63 years ± 17) with newly diagnosed pulmonary hypertension underwent cardiac magnetic resonance (MR) imaging, right-sided heart catheterization, and 6-minute walk testing, with a median follow-up of 4.0 years. Semiautomated segmentation of short-axis cine images was used to create a three-dimensional model of right ventricular motion. Supervised principal components analysis was used to identify the patterns of systolic motion most strongly predictive of survival. Survival prediction was assessed by using the difference in median survival time and the area under the curve from time-dependent receiver operating characteristic analysis of 1-year survival. Results: At the end of follow-up, 36% of patients (93 of 256) had died, and one had undergone lung transplantation. Poor outcome was predicted by a loss of effective contraction in the septum and free wall, coupled with reduced basal longitudinal motion. When added to conventional imaging and hemodynamic, functional, and clinical markers, three-dimensional cardiac motion improved survival prediction (area under the receiver operating characteristic curve, 0.73 vs 0.60; P < .001) and provided greater differentiation in median survival time between high- and low-risk groups (13.8 vs 10.7 years; P < .001). Conclusion: A machine-learning survival model that uses three-dimensional cardiac motion predicts outcome independent of conventional risk factors in patients with newly diagnosed pulmonary hypertension.
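A minimal sketch of supervised principal components analysis for survival, in the spirit of the approach described above: screen motion features by their univariate association with survival, run PCA on the retained features, and fit a Cox model on the leading components. The use of lifelines and scikit-learn, the screening score, and the synthetic data are all assumptions; this is not the study's implementation.

```python
# Sketch: supervised PCA survival model on synthetic stand-ins for 3D motion features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.decomposition import PCA

def supervised_pca_survival(X, time, event, n_keep=50, n_components=3):
    # 1) univariate screening: score each feature with a single-covariate Cox fit
    scores = []
    for j in range(X.shape[1]):
        df = pd.DataFrame({"x": X[:, j], "T": time, "E": event})
        cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
        scores.append(abs(cph.summary.loc["x", "z"]))
    keep = np.argsort(scores)[-n_keep:]
    # 2) PCA on the screened features
    pca = PCA(n_components=n_components).fit(X[:, keep])
    comps = pca.transform(X[:, keep])
    # 3) multivariable Cox model on the leading components
    df = pd.DataFrame(comps, columns=[f"pc{i + 1}" for i in range(n_components)])
    df["T"], df["E"] = time, event
    final = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    return keep, pca, final

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 200))            # synthetic motion features
    time = rng.exponential(4.0, size=256)      # synthetic follow-up times (years)
    event = rng.integers(0, 2, size=256)       # 1 = death observed, 0 = censored
    _, _, model = supervised_pca_survival(X, time, event)
    model.print_summary()
```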
Project description:Many physics problems involve integrals over multi-dimensional spaces for which no analytic solution is available. Such integrals can be evaluated with numerical integration methods, but in some cases this incurs a large computational cost, so efficient algorithms play an important role in solving these problems. We propose a novel numerical multi-dimensional integration algorithm using machine learning (ML). An ML regression model is first trained to mimic the target integrand and is then used to evaluate an approximation of the integral. The difference between the regression model and the true integrand is then evaluated to correct the bias in the approximate integral induced by ML prediction errors. Because of the bias correction, the final estimate of the integral is unbiased and has a statistically correct error estimate. Three ML models are investigated: a multi-layer perceptron, gradient-boosted decision trees, and Gaussian process regression. The performance of the proposed algorithm is demonstrated on six families of integrands that typically appear in physics problems, at various dimensionalities and integrand difficulties. The results show that, for the same total number of integrand evaluations, the new algorithm provides integral estimates with uncertainties more than an order of magnitude smaller than those of the VEGAS algorithm in most of the test cases.
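A minimal sketch of the idea described above: fit a regression surrogate to the integrand from sampled points, integrate the surrogate, and correct the result with an unbiased Monte Carlo estimate of the residual (integrand minus surrogate). The choice of gradient boosting, the sample sizes, and the unit-hypercube domain are illustrative assumptions, and the sketch ignores the (small) error from integrating the surrogate itself.

```python
# Sketch: ML-assisted Monte Carlo integration with bias correction over [0, 1]^dim.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def ml_integrate(f, dim, n_train=2000, n_correct=2000, n_surrogate=200_000, seed=0):
    """Estimate the integral of f over the unit hypercube [0, 1]^dim."""
    rng = np.random.default_rng(seed)

    # 1) train a surrogate g ~ f on randomly sampled points
    x_train = rng.random((n_train, dim))
    model = GradientBoostingRegressor(random_state=seed)
    model.fit(x_train, f(x_train))

    # 2) integrate the surrogate (cheap to evaluate, so many points can be used)
    x_s = rng.random((n_surrogate, dim))
    integral_g = model.predict(x_s).mean()

    # 3) bias correction: Monte Carlo estimate of the residual integral of (f - g);
    #    its standard error gives the statistical uncertainty of the final estimate
    x_c = rng.random((n_correct, dim))
    residual = f(x_c) - model.predict(x_c)
    correction = residual.mean()
    error = residual.std(ddof=1) / np.sqrt(n_correct)

    return integral_g + correction, error

if __name__ == "__main__":
    # Example: the integral of prod(x_i) over [0, 1]^4 is (1/2)^4 = 0.0625
    f = lambda x: np.prod(x, axis=1)
    est, err = ml_integrate(f, dim=4)
    print(f"estimate = {est:.5f} +/- {err:.5f} (exact 0.0625)")
```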
Project description:Markerless three-dimensional (3D) pose estimation has become an indispensable tool for kinematic studies of laboratory animals. Most current methods recover 3D poses by multi-view triangulation of deep network-based two-dimensional (2D) pose estimates. However, triangulation requires multiple synchronized cameras and elaborate calibration protocols that hinder its widespread adoption in laboratory studies. Here we describe LiftPose3D, a deep network-based method that overcomes these barriers by reconstructing 3D poses from a single 2D camera view. We illustrate LiftPose3D's versatility by applying it to multiple experimental systems using flies, mice, rats and macaques, and in circumstances where 3D triangulation is impractical or impossible. Our framework achieves accurate lifting for stereotypical and nonstereotypical behaviors from different camera angles. Thus, LiftPose3D permits high-quality 3D pose estimation in the absence of complex camera arrays and tedious calibration procedures and despite occluded body parts in freely behaving animals.
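A minimal sketch of the 2D-to-3D "lifting" idea described above: a small multilayer-perceptron regressor that maps a flattened set of 2D key points from a single view to the corresponding 3D key points. This is a toy analogue using scikit-learn and synthetic poses, not the LiftPose3D architecture or training protocol; the number of key points is an assumption.

```python
# Sketch: train a pose "lifter" that regresses 3D key points from 2D key points.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

N_KEYPOINTS = 20   # assumed number of tracked body key points

def train_lifter(poses_2d, poses_3d, seed=0):
    """poses_2d: (n, N_KEYPOINTS, 2); poses_3d: (n, N_KEYPOINTS, 3)."""
    X = poses_2d.reshape(len(poses_2d), -1)
    y = poses_3d.reshape(len(poses_3d), -1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    lifter = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500, random_state=seed)
    lifter.fit(X_tr, y_tr)
    err = np.abs(lifter.predict(X_te) - y_te).mean()   # mean absolute lifting error
    return lifter, err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts3d = rng.normal(size=(1000, N_KEYPOINTS, 3))                    # synthetic 3D poses
    pts2d = pts3d[:, :, :2] + 0.01 * rng.normal(size=(1000, N_KEYPOINTS, 2))  # noisy projection
    lifter, mae = train_lifter(pts2d, pts3d)
    print(f"mean absolute lifting error: {mae:.3f}")
```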
Project description:Various material compositions have been used successfully in 3D printing, with promising applications as scaffolds in tissue engineering. However, identifying suitable printing conditions for new materials requires extensive experimentation in a time- and resource-demanding process. This study investigates the use of machine learning (ML) for distinguishing printing configurations that are likely to result in low-quality prints from more promising configurations, as a first step toward a recommendation system for identifying suitable printing conditions. The ML-based framework takes as input the printing conditions, namely the material composition and the printing parameters, and predicts the quality of the resulting print as either "low" or "high." We investigate two ML-based approaches: a direct, classification-based approach that trains a classifier to distinguish between low- and high-quality prints, and an indirect approach that uses a regression ML model to approximate the value of a printing-quality metric. Both models are built on Random Forests. We trained and evaluated the models on a dataset generated in a previous study that investigated the fabrication of porous polymer scaffolds by extrusion-based 3D printing with a full-factorial design. Our results show that both models correctly labeled the majority of the tested configurations, whereas a simpler linear ML model was not effective. Additionally, our analysis showed that, in the context of ML, a full-factorial design for data collection can lead to redundant data, and we propose a more efficient data-collection strategy.
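A minimal sketch of the two approaches described above: a random-forest classifier that labels a printing configuration as "low" or "high" quality, and a random-forest regressor that approximates a continuous quality metric, which is thresholded here into the same labels (the threshold, feature names, and synthetic data are illustrative assumptions).

```python
# Sketch: direct classification vs regression-then-threshold for print quality.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

QUALITY_THRESHOLD = 0.7   # hypothetical cut-off on the print-quality metric

def build_models(configs, quality_metric, seed=0):
    """configs: DataFrame of material composition + printing parameters."""
    labels = (quality_metric >= QUALITY_THRESHOLD).astype(int)   # 1 = "high", 0 = "low"
    X_tr, X_te, q_tr, q_te, y_tr, y_te = train_test_split(
        configs, quality_metric, labels, test_size=0.25, random_state=seed)

    # Direct approach: classify low vs high quality
    clf = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X_tr, y_tr)
    clf_acc = (clf.predict(X_te) == y_te).mean()

    # Indirect approach: regress the quality metric, then threshold the prediction
    reg = RandomForestRegressor(n_estimators=300, random_state=seed).fit(X_tr, q_tr)
    reg_acc = ((reg.predict(X_te) >= QUALITY_THRESHOLD).astype(int) == y_te).mean()
    return clf, reg, clf_acc, reg_acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "polymer_fraction": rng.uniform(0.5, 1.0, 400),   # hypothetical composition feature
        "nozzle_temp_C": rng.uniform(180, 230, 400),      # hypothetical printing parameters
        "print_speed_mm_s": rng.uniform(5, 40, 400),
        "layer_height_mm": rng.uniform(0.1, 0.4, 400),
    })
    metric = rng.random(400)                               # synthetic quality metric
    _, _, acc_c, acc_r = build_models(df, metric)
    print(f"classifier accuracy: {acc_c:.2f}, regression-then-threshold accuracy: {acc_r:.2f}")
```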