Project description:Facial asymmetries exist in all individuals. Because of these asymmetries, a standardized approach that locates the occlusal plane parallel to the ala-tragus and interpupillary lines may result in less than ideal esthetics in the final restoration. The challenge for the prosthodontist is to determine an acceptable occlusal plane with an individualized approach that can serve as a guide for alignment of the maxillary anterior teeth in cases that require their replacement or extensive restoration. The present study uses an inexpensive, standardized digital photographic technique along with computer-assisted analysis to measure the asymmetries of the human face. Statistical analysis used: Karl Pearson's correlation coefficient was computed; the coefficient was then subjected to a 't' test, and the 'p' value was used to determine the level of statistical significance. The left side of the face was found to be at a higher level than the right side.
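The statistical step described above can be sketched in a few lines. The landmark heights below are invented for illustration (the study's measurements are not reproduced here); the t statistic shown is the standard significance test for a Pearson correlation coefficient:

```python
import math

def pearson_r(xs, ys):
    # Karl Pearson's product-moment correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def t_statistic(r, n):
    # 't' test on r with n - 2 degrees of freedom; the 'p' value is then
    # read from a t distribution with the same degrees of freedom
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Hypothetical landmark heights (mm) for the left and right sides of the face
left = [41.2, 39.8, 42.5, 40.1, 41.9, 40.7]
right = [40.6, 39.1, 41.8, 39.7, 41.2, 40.0]

r = pearson_r(left, right)
t = t_statistic(r, len(left))
```

In practice, `scipy.stats.pearsonr` returns both r and the p-value directly; the hand-rolled version is shown only to make the computation explicit.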
Project description:BACKGROUND:Collectively, an estimated 5% of the population have a genetic disease. Many of them feature characteristics that can be detected by facial phenotyping. Face2Gene CLINIC is an online app for facial phenotyping of patients with genetic syndromes. DeepGestalt, the neural network driving Face2Gene, automatically prioritizes syndrome suggestions based on ordinary patient photographs, potentially improving the diagnostic process. Hitherto, studies on DeepGestalt's quality highlighted its sensitivity in syndromic patients. However, determining the accuracy of a diagnostic methodology also requires testing of negative controls. OBJECTIVE:The aim of this study was to evaluate DeepGestalt's accuracy with photos of individuals with and without a genetic syndrome. Moreover, we aimed to propose a machine learning-based framework for the automated differentiation of DeepGestalt's output on such images. METHODS:Frontal facial images of individuals with a diagnosis of a genetic syndrome (established clinically or molecularly) from a convenience sample were reanalyzed. Each photo was matched by age, sex, and ethnicity to a picture featuring an individual without a genetic syndrome. Absence of a facial gestalt suggestive of a genetic syndrome was determined by physicians working in medical genetics. Photos were selected from online reports or were taken by us for the purpose of this study. Facial phenotype was analyzed by DeepGestalt version 19.1.7, accessed via Face2Gene CLINIC. Furthermore, we designed linear support vector machines (SVMs) using Python 3.7 to automatically differentiate between the 2 classes of photographs based on DeepGestalt's result lists. RESULTS:We included photos of 323 patients diagnosed with 17 different genetic syndromes and matched those with an equal number of facial images without a genetic syndrome, analyzing a total of 646 pictures. We confirm DeepGestalt's high sensitivity (top 10 sensitivity: 295/323, 91%). 
DeepGestalt's syndrome suggestions in individuals without a craniofacially dysmorphic syndrome followed a nonrandom distribution. A total of 17 syndromes appeared in the top 30 suggestions of more than 50% of nondysmorphic images. DeepGestalt's top scores differed between the syndromic and control images (area under the receiver operating characteristic [AUROC] curve 0.72, 95% CI 0.68-0.76; P<.001). A linear SVM running on DeepGestalt's result vectors showed stronger differences (AUROC 0.89, 95% CI 0.87-0.92; P<.001). CONCLUSIONS:DeepGestalt separates images of individuals with and without a genetic syndrome fairly well. This separation can be significantly improved by SVMs running on top of DeepGestalt, thus supporting the diagnostic process of patients with a genetic syndrome. Our findings facilitate the critical interpretation of DeepGestalt's results and may help enhance it and similar computer-aided facial phenotyping tools.
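The top-score comparison reported above can be illustrated with a minimal AUROC computation. The scores below are invented for illustration and are not taken from the study:

```python
def auroc(pos_scores, neg_scores):
    # Probability that a randomly chosen syndromic image scores higher than
    # a randomly chosen control image (ties count half) -- equivalent to
    # the area under the ROC curve for a single-score classifier.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented DeepGestalt-style top scores: syndromic images tend to score higher
syndromic = [0.92, 0.81, 0.74, 0.66, 0.58]
control = [0.70, 0.55, 0.49, 0.41, 0.33]

score = auroc(syndromic, control)
```

The SVM improvement described in the abstract amounts to replacing this single top score with the full result vector as classifier input (e.g., `sklearn.svm.LinearSVC`).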
Project description:In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The bone plates used intra-operatively have to be passively adapted to the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intra-operative material readjustments, especially in complex surgical cases. Our approach fits a selection of common implant models to the surgeon's desired position in a 3D computer model. The fitting takes the surrounding anatomical structures into account and always allows adjusting both the direction and the position of the osteosynthesis material used. Using the proposed software, surgeons can pre-plan the form and morphology of the resulting implant with the aid of a computer-visualized model within a few minutes. Furthermore, the resulting model can be stored in the STL file format, the format commonly used for 3D printing. Using this technology, surgeons can print the virtually generated implant or create an individually designed bending tool. This method yields osteosynthesis materials adapted to the surrounding anatomy while requiring a minimum of money and time.
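Since the planned implant is exported as STL, the export step can be sketched as below. The function names and the triangle data are hypothetical; a real planning tool would export the full triangulated implant mesh rather than a single facet:

```python
def facet_normal(a, b, c):
    # Right-hand-rule unit normal of triangle (a, b, c)
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = sum(x * x for x in n) ** 0.5 or 1.0
    return tuple(x / length for x in n)

def write_ascii_stl(path, triangles, name="implant"):
    # triangles: list of (v1, v2, v3) vertex triples, each vertex an (x, y, z)
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            nx, ny, nz = facet_normal(a, b, c)
            f.write(f"  facet normal {nx} {ny} {nz}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```

Binary STL is more compact for large meshes; the ASCII variant shown here is easier to inspect by hand.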
Project description:An advantage of using eye tracking for diagnosis is that it is non-invasive and can be performed in individuals with different functional levels and ages. Computer-aided diagnosis using eye tracking data is commonly based on eye fixation points in regions of interest (ROIs) in an image. However, besides the need to demarcate every ROI in each image or video frame used in the experiment, the diversity of visual features contained in each ROI may compromise the characterization of visual attention in each group (case or control) and, consequently, diagnostic accuracy. Although some approaches use eye tracking signals to aid diagnosis, it is still a challenge to identify frames of interest when videos are used as stimuli and to select relevant characteristics extracted from the videos. This is mainly observed in applications for autism spectrum disorder (ASD) diagnosis. To address these issues, the present paper proposes: (1) a computational method that integrates concepts from visual attention modeling, image processing, and artificial intelligence to learn a model for each group (case and control) from eye tracking data, and (2) a supervised classifier that uses the learned models to perform the diagnosis. Although this approach is not disorder-specific, it was tested in the context of ASD diagnosis, obtaining an average precision, recall, and specificity of 90%, 69%, and 93%, respectively.
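The reported precision, recall, and specificity all derive from the confusion matrix, as sketched below. The labels are invented for illustration, with 1 = case (ASD) and 0 = control as assumed encodings:

```python
def classification_metrics(y_true, y_pred, positive=1):
    # Counts of true/false positives and negatives; assumes at least one
    # instance in each denominator (real code should guard against zero)
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    precision = tp / (tp + fp)    # of predicted cases, how many are real
    recall = tp / (tp + fn)       # of real cases, how many were found
    specificity = tn / (tn + fp)  # of controls, how many were kept out
    return precision, recall, specificity

# Invented labels: 1 = case (ASD), 0 = control
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

precision, recall, specificity = classification_metrics(y_true, y_pred)
```

Note that the abstract's figures (high precision and specificity, lower recall) indicate a classifier that rarely mislabels controls but misses some cases.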
Project description:Multimaterial deposition, a distinct advantage of bioprinting, overcomes material limitations in hydrogel-based bioprinting. Multiple materials are deposited in a build/support configuration to improve the structural integrity of the three-dimensional bioprinted construct. A combination of rapidly cross-linking hydrogels was chosen for the build/support setup. The bioprinted construct was further chemically cross-linked to ensure a stable construct after printing. This paper also proposes a file segmentation and preparation technique for bioprinting freeform structures.
Project description:PURPOSE:The interpretation of genetic variants after genome-wide analysis is complex in heterogeneous disorders such as intellectual disability (ID). We investigate whether algorithms can be used to detect if a facial gestalt is present for three novel ID syndromes and if these techniques can help interpret variants of uncertain significance. METHODS:Facial features were extracted from photos of ID patients harboring a pathogenic variant in three novel ID genes (PACS1, PPM1D, and PHIP) using algorithms that model human facial dysmorphism, and facial recognition. The resulting features were combined into a hybrid model to compare the three cohorts against a background ID population. RESULTS:We validated our model using images from 71 individuals with Koolen-de Vries syndrome, and then show that facial gestalts are present for individuals with a pathogenic variant in PACS1 (p = 8 × 10⁻⁴), PPM1D (p = 4.65 × 10⁻²), and PHIP (p = 6.3 × 10⁻³). Moreover, two individuals with a de novo missense variant of uncertain significance in PHIP have significant similarity to the expected facial phenotype of PHIP patients (p < 1.52 × 10⁻²). CONCLUSION:Our results show that analysis of facial photos can be used to detect previously unknown facial gestalts for novel ID syndromes, which will facilitate both clinical and molecular diagnosis of rare and novel syndromes.
Project description:Recent clinical trials using antibodies with low toxicity and high efficiency have raised expectations for the development of next-generation protein therapeutics. However, the process of obtaining therapeutic antibodies remains time consuming and empirical. This review summarizes recent progresses in the field of computer-aided antibody development mainly focusing on antibody modeling, which is divided essentially into two parts: (i) modeling the antigen-binding site, also called the complementarity determining regions (CDRs), and (ii) predicting the relative orientations of the variable heavy (V(H)) and light (V(L)) chains. Among the six CDR loops, the greatest challenge is predicting the conformation of CDR-H3, which is the most important in antigen recognition. Further computational methods could be used in drug development based on crystal structures or homology models, including antibody-antigen dockings and energy calculations with approximate potential functions. These methods should guide experimental studies to improve the affinities and physicochemical properties of antibodies. Finally, several successful examples of in silico structure-based antibody designs are reviewed. We also briefly review structure-based antigen or immunogen design, with application to rational vaccine development.
Project description:A full-term female baby, a product of non-consanguineous marriage, was born at 37 weeks of gestation with a birth weight of 2.08 kg. Antenatal scan at 31 weeks revealed complex congenital heart disease with a hypoplastic right ventricle, pulmonary atresia and an intact septum. Immediately after birth, the infant was shifted to the nursery and was started on intravenous fluids and a prostaglandin E1 (alprostadil) infusion. On examination, she had microcephaly, periorbital puffiness, a long philtrum, a broad nasal bridge and retrognathia, upslanting palpebral fissures, widely spaced nipples, a sacral dimple and right upper limb postaxial polydactyly. Postnatal echocardiography confirmed a large ostium secundum atrial septal defect with left to right shunt, right ventricle hypoplasia, pulmonary atresia with an intact septum and a large vertical patent ductus arteriosus. Ophthalmological examination showed a bilateral chorioretinal coloboma sparing the disc and fovea. Karyotyping showed an extra small marker chromosome suggestive of cat eye syndrome.
Project description:Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides a background for a new database of basic emotional expressions. The goal in creating this set was to provide high quality photographs of genuine facial expressions. Thus, after proper training, participants were inclined to express "felt" emotions. The novel approach taken in this study was also used to establish whether a given expression was perceived as intended by untrained judges. The judgment task for perceivers was designed to be sensitive to subtle changes in meaning caused by the way an emotional display was evoked and expressed. Consequently, this allowed us to measure the purity and intensity of emotional displays, which are parameters that validation methods used by other researchers do not capture. The final set comprises those pictures that received the highest recognition marks (e.g., accuracy with intended display) from independent judges, totaling 210 high quality photographs of 30 individuals. Descriptions of the accuracy, intensity, and purity of the displayed emotion, as well as FACS AU codes, are provided for each picture. Given the unique methodology applied to gathering and validating this set of pictures, it may be a useful tool for research using face stimuli. The Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) is freely accessible to the scientific community for non-commercial use by request at http://www.emotional-face.org.
Project description:Rodent models of retinal angiogenesis play a pivotal role in angiogenesis research. These models are a window to developmental angiogenesis, to pathological retinopathy, and are also in vivo tools for anti-angiogenic drug screening in cancer and ophthalmic research. The mouse model of oxygen-induced retinopathy (OIR) has emerged as one of the leading in vivo models for these purposes. Many of the animal studies that laid the foundation for the recent breakthrough of anti-angiogenic treatments into clinical practice were performed in the OIR model. However, readouts from the OIR model have been time-consuming and can vary depending on user experience. Here, we present a computer-aided quantification method that is characterized by (i) significantly improved efficiency, (ii) high correlation with the established hand-measurement protocols, and (iii) high intra- and inter-individual reproducibility of results. This method greatly facilitates quantification of retinal angiogenesis while at the same time increasing lab-to-lab reproducibility of one of the most widely used in vivo models in angiogenesis research.