Project description:Ab initio modeling methods have proven to be powerful means of interpreting solution scattering data. In the absence of atomic models, or complementary to them, ab initio modeling approaches can be used to generate low-resolution particle envelopes from solution scattering profiles alone. Recently, a new ab initio reconstruction algorithm called DENSS was introduced. DENSS is unique among ab initio modeling algorithms in that it solves the inverse scattering problem, i.e., the 1D scattering intensities are used directly to determine the 3D particle density. Reconstructing the particle density has several advantages over conventional uniform-density modeling approaches, including the ability to reconstruct a much wider range of particle types and the ability to visualize low-resolution density fluctuations inside the particle envelope. In this chapter we discuss the theory behind this new approach, how to use DENSS, and how to interpret the results. Several examples with experimental and simulated data are provided.
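The core cycle described above, alternating between a reciprocal-space constraint derived from the 1D intensity profile and real-space constraints such as support and positivity, can be sketched in numpy. This is an illustrative toy, not the actual DENSS implementation: the function name, the shell-averaging scheme, and the grid handling are simplifications made up for this example.

```python
import numpy as np

def denss_like_iteration(rho, target_intensity_shell, support, n_iter=50):
    """Illustrative density reconstruction cycle: scale 3D Fourier
    amplitudes so the shell-averaged intensity matches a 1D profile,
    then enforce support and non-negativity in real space."""
    n = rho.shape[0]
    # radial shell index of each voxel in reciprocal space
    freqs = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kr = np.sqrt(kx**2 + ky**2 + kz**2).astype(int)
    kr = np.clip(kr, 0, len(target_intensity_shell) - 1)
    for _ in range(n_iter):
        F = np.fft.fftn(rho)
        I = np.abs(F)**2
        # shell-average the current model intensity
        shell_sum = np.bincount(kr.ravel(), I.ravel(),
                                minlength=len(target_intensity_shell))
        shell_cnt = np.bincount(kr.ravel(),
                                minlength=len(target_intensity_shell))
        shell_avg = shell_sum / np.maximum(shell_cnt, 1)
        # rescale amplitudes so each shell matches the 1D data
        scale = np.sqrt(target_intensity_shell /
                        np.maximum(shell_avg, 1e-30))[kr]
        F = F * scale
        rho = np.real(np.fft.ifftn(F))
        # real-space constraints: support and non-negativity
        rho = rho * support
        rho = np.clip(rho, 0, None)
    return rho
```

The real algorithm adds many refinements (shrink-wrap support updates, resolution-dependent weighting, averaging of multiple reconstructions), but the two-space constraint cycle is the essential idea.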
Project description:Solution scattering techniques, such as small- and wide-angle X-ray scattering (SWAXS), provide valuable insights into the structure and dynamics of biological macromolecules in solution. In this study, we present an approach to accurately predict solution X-ray scattering profiles at wide angles from atomic models by generating high-resolution electron density maps. Our method accounts for the excluded volume of bulk solvent by calculating unique adjusted atomic volumes directly from the atomic coordinates. This approach eliminates the need for a free fitting parameter commonly used in existing algorithms, resulting in improved accuracy of the calculated SWAXS profile. An implicit model of the hydration shell is generated using the form factor of water. Two parameters, namely the bulk solvent density and the mean hydration shell contrast, are adjusted to best fit the data. Results using eight publicly available SWAXS profiles show high-quality fits to the data. In each case, the optimized parameter values show only small adjustments, demonstrating that the default values are close to the true solution. Even with parameter optimization disabled, the calculated scattering profiles show a significant improvement over those from the leading software. The algorithm is computationally efficient, showing a more than tenfold reduction in execution time compared to the leading software. The algorithm is encoded in a command line script called denss.pdb2mrc.py and is available open source as part of the DENSS v1.7.0 software package (https://github.com/tdgrant1/denss). In addition to improving the ability to compare atomic models to experimental SWAXS data, these developments pave the way for increasing the accuracy of modeling algorithms utilizing SWAXS data while decreasing the risk of overfitting.
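The two-parameter adjustment described above can be illustrated with a simple grid search around default values. Everything here is a hypothetical stand-in, not the denss.pdb2mrc.py internals: the amplitude terms `A_vac`, `A_excl`, and `A_shell` (in-vacuo, excluded-solvent, and hydration-shell contributions), the default values, and the search ranges are all made up for the sketch.

```python
import numpy as np

def fit_solvent_params(q, I_exp, A_vac, A_excl, A_shell,
                       rho0=0.334, drho0=0.03):
    """Grid-search the bulk solvent density rho_s and mean hydration-shell
    contrast drho around hypothetical default values to best fit an
    experimental profile I_exp(q).  The intensity model is a simplified
    real combination of spherically averaged amplitude terms."""
    best = (None, None, np.inf)
    for rho_s in np.linspace(0.95 * rho0, 1.05 * rho0, 21):
        for drho in np.linspace(0.5 * drho0, 1.5 * drho0, 21):
            A = A_vac - rho_s * A_excl + drho * A_shell
            I_calc = A**2
            # overall scale factor by linear least squares
            c = np.dot(I_calc, I_exp) / np.dot(I_calc, I_calc)
            resid = np.sum((I_exp - c * I_calc)**2)
            if resid < best[2]:
                best = (rho_s, drho, resid)
    return best[0], best[1]
```

Note the narrow search ranges: as in the paper's findings, only small adjustments around the defaults should be needed when the model is accurate.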
Project description:The multitiered iterative phasing (MTIP) algorithm is used to determine the structures of biological macromolecules from fluctuation scattering data. It is an iterative algorithm that reconstructs the electron density of the sample by matching the computed fluctuation X-ray scattering data to the external observations, while simultaneously enforcing constraints in real and Fourier space. This paper presents the first efforts to accelerate the MTIP algorithm on contemporary graphics processing units (GPUs). The Compute Unified Device Architecture (CUDA) programming model is used to accelerate the MTIP algorithm on NVIDIA GPUs. The computational performance of the CUDA-based MTIP implementation outperforms the CPU-based version by an order of magnitude. Furthermore, the Heterogeneous-Compute Interface for Portability (HIP) runtime APIs are used to demonstrate portability by running the accelerated MTIP algorithm on both NVIDIA and AMD GPUs.
Project description:The recent advent of tensor tomography techniques has enabled tomographic investigations of the 3D nanostructure organization of biological and material science samples. These techniques extended the concept of conventional X-ray tomography by reconstructing not only a scalar value such as the attenuation coefficient per voxel, but also a set of parameters that capture the local anisotropy of nanostructures within every voxel of the sample. Tensor tomography data sets are intrinsically large as each pixel of a conventional X-ray projection is substituted by a scattering pattern, and projections have to be recorded at different sample angular orientations with several tilts of the rotation axis with respect to the X-ray propagation direction. Currently available reconstruction approaches for such large data sets are computationally expensive. Here, a novel, fast reconstruction algorithm, named iterative reconstruction tensor tomography (IRTT), is presented to simplify and accelerate tensor tomography reconstructions. IRTT is based on a second-rank tensor model to describe the anisotropy of the nanostructure in every voxel and on an iterative error backpropagation reconstruction algorithm to achieve high convergence speed. The feasibility and accuracy of IRTT are demonstrated by reconstructing the nanostructure anisotropy of three samples: a carbon fiber knot, a human bone trabecula specimen and a fixed mouse brain. Results and reconstruction speed were compared with those obtained by the small-angle scattering tensor tomography (SASTT) reconstruction method introduced by Liebi et al. [Nature (2015), 527, 349-352]. The principal orientation of the nanostructure within each voxel revealed a high level of agreement between the two methods. Yet, for identical data sets and computer hardware used, IRTT was shown to be more than an order of magnitude faster. 
IRTT was found to yield robust results; it does not require prior knowledge of the sample to initialize parameters, and it can be used in cases where simple anisotropy metrics are sufficient, i.e. where the tensor approximation adequately captures the level of anisotropy and the dominant orientation within a voxel. In addition, by greatly accelerating the reconstruction, IRTT is particularly suitable for handling large tomographic data sets of samples with internal structure, or as a real-time analysis tool providing online feedback during data acquisition. Alternatively, the IRTT results may be used as an initial guess for models capturing a higher complexity of structural anisotropy, such as the spherical-harmonics-based SASTT of Liebi et al. (2015), improving both the overall convergence speed and the robustness of the reconstruction.
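The two ingredients named above, a second-rank tensor per voxel and an error-backpropagation update, can be illustrated in miniature for a single voxel fit to azimuthal scattering intensities. This is a made-up simplification of IRTT: the real algorithm propagates residuals along projection rays through the full volume, whereas here the "ray" is just one voxel and the update rule is a plain residual-weighted outer product.

```python
import numpy as np

def irtt_update(T, chi, I_meas, lr=0.1, n_iter=300):
    """Fit a voxel's symmetric second-rank tensor T (3x3) to measured
    azimuthal intensities by iterative error backpropagation: the model
    intensity along azimuth chi is u^T T u with u = (cos chi, sin chi, 0),
    and each residual is propagated back as an outer-product update."""
    T = T.copy()
    for _ in range(n_iter):
        for c, I in zip(chi, I_meas):
            u = np.array([np.cos(c), np.sin(c), 0.0])
            model = u @ T @ u
            # backpropagate the residual along the probing direction
            T += lr * (I - model) * np.outer(u, u)
    return T
```

After convergence, the eigenvector of T with the largest eigenvalue gives the dominant in-plane nanostructure orientation, and the eigenvalue spread gives a simple anisotropy metric, matching the level of description the tensor approximation provides.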
Project description:We describe a general approach for refining protein structure models on the basis of cryo-electron microscopy maps with near-atomic resolution. The method integrates Monte Carlo sampling with local density-guided optimization, Rosetta all-atom refinement and real-space B-factor fitting. In tests on experimental maps of three different systems with 4.5-Å resolution or better, the method consistently produced models with atomic-level accuracy largely independently of starting-model quality, and it outperformed the molecular dynamics-based MDFF method. Cross-validated model quality statistics correlated with model accuracy over the three test systems.
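The Monte Carlo sampling component of such refinement can be sketched as a generic Metropolis loop. This is not Rosetta's sampler: the `score` callable, step size, and temperature are hypothetical stand-ins for an all-atom energy combined with a density-agreement term, and the model is reduced to a bare parameter vector.

```python
import numpy as np

def metropolis_refine(score, x0, step=0.2, n_iter=2000, T=0.1, seed=0):
    """Minimal Metropolis sketch of sampling-based refinement: propose
    random perturbations of the model parameters x and accept or reject
    each by the change in a combined score (lower is better)."""
    rng = np.random.default_rng(seed)
    x, s = x0.copy(), score(x0)
    for _ in range(n_iter):
        x_new = x + step * rng.standard_normal(x.shape)
        s_new = score(x_new)
        # accept downhill moves always, uphill moves with Boltzmann prob.
        if s_new < s or rng.random() < np.exp((s - s_new) / T):
            x, s = x_new, s_new
    return x, s
```

In the method summarized above, moves of this kind are interleaved with local density-guided optimization and all-atom minimization rather than used alone.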
Project description:Grating-based phase-contrast computed tomography (PCCT) is a promising imaging tool on the horizon for pre-clinical and clinical applications. Until now, PCCT has been plagued by strong artifacts when dense materials such as bones are present. In this paper, we present a new statistical iterative reconstruction algorithm which overcomes this limitation. It makes use of the fact that an X-ray interferometer provides a conventional absorption signal as well as a dark-field signal in addition to the phase-contrast signal. The method is based on a statistical iterative reconstruction algorithm utilizing maximum-a-posteriori principles, integrating the statistical properties of the raw data as well as information on dense objects gained from the absorption signal. Reconstruction of a pre-clinical mouse scan illustrates that artifacts caused by bones are significantly reduced and image quality is improved when employing our approach. In particular, small structures, which are usually lost because of streaks, are recovered in our results. In comparison with current state-of-the-art algorithms, our approach provides significantly improved image quality with respect to quantitative and qualitative results. In summary, we expect our new statistical iterative reconstruction method to increase the general usability of PCCT imaging for medical diagnosis beyond applications focused solely on soft-tissue visualization.
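The maximum-a-posteriori principle underlying such reconstructions can be illustrated on a toy linear system. This sketch is not the published algorithm: the forward matrix, per-measurement statistical weights, and quadratic smoothness prior are hypothetical stand-ins (the actual method uses the raw-data statistics of all three interferometer signals and absorption-derived information about dense objects).

```python
import numpy as np

def map_reconstruct(A, y, weights, beta=1.0, n_iter=500, lr=None):
    """Maximum-a-posteriori sketch: minimize the statistically weighted
    data term (y - Ax)^T W (y - Ax) plus a quadratic smoothness prior
    beta * ||Dx||^2 by gradient descent on the normal equations."""
    n = A.shape[1]
    # periodic finite-difference operator as a simple smoothness prior
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
    H = A.T @ (weights[:, None] * A) + beta * D.T @ D
    g = A.T @ (weights * y)
    x = np.zeros(n)
    if lr is None:
        lr = 1.0 / np.linalg.norm(H, 2)  # stable step size
    for _ in range(n_iter):
        x -= lr * (H @ x - g)
    return x
```

Down-weighting rays that pass through dense objects (small entries in `weights`) while the prior fills in the missing information is, in spirit, how statistical weighting suppresses bone-induced streaks.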
Project description:Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. Equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
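The iteration between real and reciprocal space at the core of GENFIRE can be sketched as the simplest possible constraint cycle: enforce the measured Fourier components on the known grid points, then enforce positivity in real space. This toy omits the parts that make GENFIRE distinctive, namely the assembly of the oversampled Fourier grid from tilted projection slices and the angular refinement; the function name is made up.

```python
import numpy as np

def genfire_like(F_meas, known, n_iter=100):
    """Minimal real/reciprocal-space constraint cycle: F_meas holds
    measured Fourier components, known is a boolean mask of grid points
    where data exist, and positivity is the real-space constraint."""
    rho = np.zeros(F_meas.shape)
    for _ in range(n_iter):
        F = np.fft.fftn(rho)
        F[known] = F_meas[known]       # data constraint in reciprocal space
        rho = np.real(np.fft.ifftn(F))
        rho = np.clip(rho, 0, None)    # physical constraint in real space
    return rho
```

The search for a global solution consistent with both the measured data and the physical constraints proceeds by repeating this cycle; oversampling the Fourier grid is what gives the real-space constraints enough leverage to fill in the unmeasured regions.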
Project description:Purpose: To compare the image quality between a deep learning-based image reconstruction algorithm (DLIR) and an adaptive statistical iterative reconstruction algorithm (ASiR-V) in noncontrast trauma head CT. Methods: Head CT scans from 94 consecutive trauma patients were included. Images were reconstructed with ASiR-V 50% and the DLIR strengths: low (DLIR-L), medium (DLIR-M), and high (DLIR-H). The image quality was assessed quantitatively and qualitatively and compared between the different reconstruction algorithms. Inter-reader agreement was assessed by weighted kappa. Results: DLIR-M and DLIR-H demonstrated lower image noise (p < 0.001 for all pairwise comparisons), higher SNR of up to 82.9% (p < 0.001), and higher CNR of up to 53.3% (p < 0.001) compared to ASiR-V. DLIR-H outperformed other DLIR strengths (p ranging from < 0.001 to 0.016). DLIR-M outperformed DLIR-L (p < 0.001) and ASiR-V (p < 0.001). The distribution of reader scores for DLIR-M and DLIR-H shifted towards higher scores compared to DLIR-L and ASiR-V. There was a tendency towards higher scores with increasing DLIR strengths. There were fewer non-diagnostic CT series for DLIR-M and DLIR-H compared to ASiR-V and DLIR-L. No images were graded as non-diagnostic for DLIR-H regarding intracranial hemorrhage. The inter-reader agreement was fair-good between the second most and the less experienced reader, poor-moderate between the most and the less experienced reader, and poor-fair between the most and the second most experienced reader. Conclusion: The image quality of trauma head CT series reconstructed with DLIR outperformed those reconstructed with ASiR-V. In particular, DLIR-M and DLIR-H demonstrated significantly improved image quality and fewer non-diagnostic images. The improvement in qualitative image quality was greater for the second most and the less experienced readers compared to the most experienced reader.
Project description:In this study, we investigate the feasibility of improving the imaging quality of low-dose multislice helical computed tomography (CT) via iterative reconstruction with tensor framelet (TF) regularization. The TF-based algorithm is a high-order generalization of isotropic total variation regularization. It is implemented on a GPU platform for fast parallel X-ray forward and backward projections, taking the flying focal spot into account. The solution algorithm for image reconstruction is based on the alternating direction method of multipliers, also known as the split Bregman method. The proposed method is validated using experimental data from a Siemens SOMATOM Definition 64-slice helical CT scanner, in comparison with the FDK, Katsevich, and total variation (TV) algorithms. To test the algorithm performance with low-dose data, ACR and Rando phantoms were scanned at different dosages and the data were uniformly undersampled by various factors. The proposed method is robust for low-dose data with a 25% undersampling factor. Quantitative metrics demonstrate that the proposed algorithm achieves superior results over the other existing methods.
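The split Bregman (ADMM) strategy named above can be illustrated on the simplest related problem: 2D TV denoising with periodic boundaries. This is a generic textbook sketch, not the GPU helical-CT implementation; the function name and parameter values are illustrative, and the tensor framelet prior is replaced here by plain anisotropic TV.

```python
import numpy as np

def tv_denoise_split_bregman(f, mu=10.0, lam=1.0, n_iter=50):
    """Anisotropic TV denoising, min_u (mu/2)||u - f||^2 + |grad u|,
    via split Bregman: auxiliary variable d = grad u, Bregman variable b,
    with the u-subproblem solved exactly in Fourier space (periodic BCs)."""
    ny, nx = f.shape
    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    # Fourier multipliers of the periodic Laplacian for the u-subproblem
    wx = 2 * np.cos(2 * np.pi * np.fft.fftfreq(nx)) - 2
    wy = 2 * np.cos(2 * np.pi * np.fft.fftfreq(ny)) - 2
    denom = mu - lam * (wy[:, None] + wx[None, :])
    grad = lambda v, ax: np.roll(v, -1, axis=ax) - v       # forward diff
    div = lambda vx, vy: (vx - np.roll(vx, 1, axis=1)) + (vy - np.roll(vy, 1, axis=0))
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)
    for _ in range(n_iter):
        # u-subproblem: (mu - lam*Laplacian) u = mu f - lam div(d - b)
        rhs = mu * f - lam * div(dx - bx, dy - by)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # d-subproblem: soft-thresholding (shrinkage)
        gx, gy = grad(u, 1), grad(u, 0)
        dx = shrink(gx + bx, 1 / lam)
        dy = shrink(gy + by, 1 / lam)
        # Bregman update
        bx += gx - dx
        by += gy - dy
    return u
```

In the CT setting the data term contains the forward projector rather than the identity, so the u-subproblem is itself solved iteratively on the GPU, but the splitting and shrinkage structure is the same.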