Project description: Facial asymmetries exist in all individuals. Because of these asymmetries, a standardized approach to locating the occlusal plane parallel to the ala-tragus and interpupillary lines may result in less than ideal esthetics in the final restoration. The challenge for the prosthodontist is to determine an acceptable occlusal plane with an individualized approach that can guide alignment of the maxillary anterior teeth in cases requiring their replacement or extensive restoration. The present study uses an inexpensive, standardized digital photographic technique along with computer-assisted analysis to measure the asymmetries of the human face. Statistical analysis used: Karl Pearson's correlation coefficient was calculated and then subjected to a t test, and the p value was used to determine the level of statistical significance. The left side of the face was found to lie at a higher level than the right side.
Project description: Background: Collectively, an estimated 5% of the population have a genetic disease. Many of them feature characteristics that can be detected by facial phenotyping. Face2Gene CLINIC is an online app for facial phenotyping of patients with genetic syndromes. DeepGestalt, the neural network driving Face2Gene, automatically prioritizes syndrome suggestions based on ordinary patient photographs, potentially improving the diagnostic process. Hitherto, studies on DeepGestalt's quality highlighted its sensitivity in syndromic patients. However, determining the accuracy of a diagnostic methodology also requires testing of negative controls. Objective: The aim of this study was to evaluate DeepGestalt's accuracy with photos of individuals with and without a genetic syndrome. Moreover, we aimed to propose a machine learning-based framework for the automated differentiation of DeepGestalt's output on such images. Methods: Frontal facial images of individuals with a diagnosis of a genetic syndrome (established clinically or molecularly) from a convenience sample were reanalyzed. Each photo was matched by age, sex, and ethnicity to a picture featuring an individual without a genetic syndrome. Absence of a facial gestalt suggestive of a genetic syndrome was determined by physicians working in medical genetics. Photos were selected from online reports or were taken by us for the purpose of this study. Facial phenotype was analyzed by DeepGestalt version 19.1.7, accessed via Face2Gene CLINIC. Furthermore, we designed linear support vector machines (SVMs) using Python 3.7 to automatically differentiate between the 2 classes of photographs based on DeepGestalt's result lists. Results: We included photos of 323 patients diagnosed with 17 different genetic syndromes and matched those with an equal number of facial images without a genetic syndrome, analyzing a total of 646 pictures. We confirm DeepGestalt's high sensitivity (top 10 sensitivity: 295/323, 91%).
DeepGestalt's syndrome suggestions in individuals without a craniofacially dysmorphic syndrome followed a nonrandom distribution. A total of 17 syndromes appeared in the top 30 suggestions of more than 50% of nondysmorphic images. DeepGestalt's top scores differed between the syndromic and control images (area under the receiver operating characteristic [AUROC] curve 0.72, 95% CI 0.68-0.76; P<.001). A linear SVM running on DeepGestalt's result vectors showed stronger differences (AUROC 0.89, 95% CI 0.87-0.92; P<.001). Conclusions: DeepGestalt fairly separates images of individuals with and without a genetic syndrome. This separation can be significantly improved by SVMs running on top of DeepGestalt, thus supporting the diagnostic process of patients with a genetic syndrome. Our findings facilitate the critical interpretation of DeepGestalt's results and may help enhance it and similar computer-aided facial phenotyping tools.
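The second-stage classifier described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the vector length, the synthetic score distributions (one dominant score for syndromic photos, a few recurring "attractor" syndromes for controls, mirroring the nonrandom distribution reported above), and the train/test split are all assumptions.

```python
# Hedged sketch: a linear SVM separating DeepGestalt-style result vectors.
# All data here are synthetic stand-ins, not real DeepGestalt output.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
N_SYNDROMES = 30  # assumed length of a result vector

def control_vector() -> np.ndarray:
    scores = rng.random(N_SYNDROMES)
    scores[:3] += 1.5  # a few syndromes recur in nondysmorphic faces (assumption)
    return scores / scores.sum()

def syndromic_vector() -> np.ndarray:
    scores = rng.random(N_SYNDROMES)
    scores[rng.integers(3, N_SYNDROMES)] += 3.0  # one strong gestalt match
    return scores / scores.sum()

X = np.array([syndromic_vector() for _ in range(100)] +
             [control_vector() for _ in range(100)])
y = np.array([1] * 100 + [0] * 100)  # 1 = syndromic, 0 = control

idx = rng.permutation(len(y))  # shuffle before a simple holdout split
X, y = X[idx], y[idx]

clf = LinearSVC(C=1.0)
clf.fit(X[:160], y[:160])
accuracy = clf.score(X[160:], y[160:])
```

Because the classifier sees only the score vector, not the photo, it can run downstream of any phenotyping tool that returns ranked syndrome scores.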
Project description: In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The bone plates used intraoperatively have to be passively adapted to the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intraoperative material readjustments, especially in complex surgical cases. Our approach fits a selection of common implant models to the surgeon's desired position in a 3D computer model, with respect to the surrounding anatomical structures and with the option of adjusting both the direction and the position of the osteosynthesis material at any time. Using the proposed software, surgeons can pre-plan the form and morphology of the resulting implant with the aid of a computer-visualized model within a few minutes. Furthermore, the resulting model can be stored in the STL file format, the format commonly used for 3D printing. With this technology, surgeons can print the virtually generated implant or create an individually designed bending tool. The method yields osteosynthesis materials adapted to the surrounding anatomy and requires minimal cost and time.
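The STL export step mentioned above can be illustrated with the ASCII variant of the format, which stores one normal and three vertices per triangular facet. This is a generic sketch of the file layout, not the planning software's exporter; the single-triangle "miniplate" mesh is a toy example.

```python
# Hedged sketch: writing a triangle mesh as ASCII STL, the format used
# to hand planned implant models to a 3D printer. Toy data only.
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Facet = Tuple[Vec3, Vec3, Vec3, Vec3]  # (normal, v1, v2, v3)

def write_ascii_stl(name: str, facets: List[Facet]) -> str:
    lines = [f"solid {name}"]
    for normal, *verts in facets:
        lines.append("  facet normal {:g} {:g} {:g}".format(*normal))
        lines.append("    outer loop")
        for v in verts:  # STL facets always have exactly three vertices
            lines.append("      vertex {:g} {:g} {:g}".format(*v))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl_text = write_ascii_stl("miniplate", [
    ((0.0, 0.0, 1.0),               # facet normal
     (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),  # vertices
])
```

In practice the binary STL variant is preferred for large meshes, but the ASCII form makes the structure easy to inspect.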
Project description: An advantage of using eye tracking for diagnosis is that it is non-invasive and can be performed in individuals of different functional levels and ages. Computer-aided diagnosis using eye tracking data is commonly based on eye fixation points in regions of interest (ROIs) in an image. However, besides the need to demarcate every ROI in each image or video frame used in the experiment, the diversity of visual features contained in each ROI may compromise the characterization of visual attention in each group (case or control) and, consequently, diagnostic accuracy. Although some approaches use eye tracking signals to aid diagnosis, it remains a challenge to identify frames of interest when videos are used as stimuli and to select relevant characteristics extracted from the videos. This is mainly observed in applications for autism spectrum disorder (ASD) diagnosis. To address these issues, the present paper proposes: (1) a computational method, integrating concepts from visual attention models, image processing, and artificial intelligence, for learning a model for each group (case and control) using eye tracking data, and (2) a supervised classifier that, using the learned models, performs the diagnosis. Although this approach is not disorder-specific, it was tested in the context of ASD diagnosis, obtaining an average precision, recall, and specificity of 90%, 69%, and 93%, respectively.
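For contrast with the ROI-free method proposed above, a common baseline for gaze-based classification can be sketched: raw gaze samples are binned into a spatial histogram (no hand-drawn ROIs) and classified with a nearest-class-mean rule. This is explicitly NOT the paper's visual-attention method; the grid size, the simulated fixation centers, and the classifier are all illustrative assumptions.

```python
# Hedged sketch: gaze samples -> spatial histogram features -> nearest-mean
# classification. Synthetic gaze data; not the authors' method.
import numpy as np

rng = np.random.default_rng(1)
GRID = 4  # 4x4 spatial bins over the stimulus (assumption)

def gaze_histogram(points: np.ndarray) -> np.ndarray:
    # points: (n, 2) gaze coordinates normalized to [0, 1)
    h, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=GRID, range=[[0, 1], [0, 1]])
    return (h / h.sum()).ravel()

def simulate(center, n=200):
    # Simulated fixations clustered around one region of the stimulus
    return np.clip(rng.normal(center, 0.12, size=(n, 2)), 0.0, 0.999)

case_feats = np.array([gaze_histogram(simulate([0.3, 0.3])) for _ in range(40)])
ctrl_feats = np.array([gaze_histogram(simulate([0.6, 0.5])) for _ in range(40)])

# Fit on the first half of each group, evaluate on the second half
case_mean, ctrl_mean = case_feats[:20].mean(0), ctrl_feats[:20].mean(0)

def predict(f: np.ndarray) -> str:
    closer_to_case = np.linalg.norm(f - case_mean) < np.linalg.norm(f - ctrl_mean)
    return "case" if closer_to_case else "control"

correct = (sum(predict(f) == "case" for f in case_feats[20:]) +
           sum(predict(f) == "control" for f in ctrl_feats[20:]))
accuracy = correct / 40
```

The weakness the paper targets is visible here: the grid plays the role of fixed ROIs, so performance depends on how well the bins happen to align with diagnostically relevant regions.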
Project description: Multimaterial deposition, a distinct advantage of bioprinting, overcomes material limitations in hydrogel-based bioprinting. Multiple materials are deposited in a build/support configuration to improve the structural integrity of the three-dimensional bioprinted construct. A combination of rapidly cross-linking hydrogels was chosen for the build/support setup. The bioprinted construct was further chemically cross-linked to ensure a stable construct after printing. This paper also proposes a file segmentation and preparation technique for printing freeform structures in bioprinting.
Project description: PURPOSE: The interpretation of genetic variants after genome-wide analysis is complex in heterogeneous disorders such as intellectual disability (ID). We investigate whether algorithms can be used to detect if a facial gestalt is present for three novel ID syndromes and if these techniques can help interpret variants of uncertain significance. METHODS: Facial features were extracted from photos of ID patients harboring a pathogenic variant in three novel ID genes (PACS1, PPM1D, and PHIP) using algorithms that model human facial dysmorphism, and facial recognition. The resulting features were combined into a hybrid model to compare the three cohorts against a background ID population. RESULTS: We validated our model using images from 71 individuals with Koolen-de Vries syndrome, and then show that facial gestalts are present for individuals with a pathogenic variant in PACS1 (p = 8 × 10⁻⁴), PPM1D (p = 4.65 × 10⁻²), and PHIP (p = 6.3 × 10⁻³). Moreover, two individuals with a de novo missense variant of uncertain significance in PHIP have significant similarity to the expected facial phenotype of PHIP patients (p < 1.52 × 10⁻²). CONCLUSION: Our results show that analysis of facial photos can be used to detect previously unknown facial gestalts for novel ID syndromes, which will facilitate both clinical and molecular diagnosis of rare and novel syndromes.
Project description: A full-term female baby, the product of a non-consanguineous marriage, was born at 37 weeks of gestation with a birth weight of 2.08 kg. An antenatal scan at 31 weeks revealed complex congenital heart disease with a hypoplastic right ventricle, pulmonary atresia and an intact septum. Immediately after birth, the infant was shifted to the nursery and was started on intravenous fluids and a prostaglandin E1 infusion (Alprostadil). On examination, she had microcephaly, periorbital puffiness, a long philtrum, a broad nasal bridge, retrognathia, upslanting palpebral fissures, widely spaced nipples, a sacral dimple and right upper limb postaxial polydactyly. Postnatal echocardiography confirmed a large ostium secundum atrial septal defect with left-to-right shunt, right ventricle hypoplasia, pulmonary atresia with an intact septum and a large vertical patent ductus arteriosus. Ophthalmological examination showed a bilateral chorioretinal coloboma sparing the disc and fovea. Karyotyping showed an extra small marker chromosome suggestive of cat eye syndrome.
Project description: Recent clinical trials using antibodies with low toxicity and high efficacy have raised expectations for the development of next-generation protein therapeutics. However, the process of obtaining therapeutic antibodies remains time consuming and empirical. This review summarizes recent progress in the field of computer-aided antibody development, mainly focusing on antibody modeling, which is divided essentially into two parts: (i) modeling the antigen-binding site, also called the complementarity-determining regions (CDRs), and (ii) predicting the relative orientations of the variable heavy (V(H)) and light (V(L)) chains. Among the six CDR loops, the greatest challenge is predicting the conformation of CDR-H3, which is the most important in antigen recognition. Further computational methods could be used in drug development based on crystal structures or homology models, including antibody-antigen docking and energy calculations with approximate potential functions. These methods should guide experimental studies to improve the affinities and physicochemical properties of antibodies. Finally, several successful examples of in silico structure-based antibody design are reviewed. We also briefly review structure-based antigen or immunogen design, with application to rational vaccine development.
Project description: Neurodevelopmental disorders can result in facial dysmorphisms. Therefore, the analysis of facial images using image processing and machine learning techniques can help construct systems for diagnosing genetic syndromes and neurodevelopmental disorders. Such systems offer faster and more cost-effective alternatives to genotyping tests, particularly in large-scale applications. However, there are still challenges to overcome to ensure the accuracy and reliability of computer-aided diagnosis systems. This article presents a systematic review of such initiatives, covering 55 articles. The main aspects used to develop these diagnostic systems are discussed, namely datasets (availability, type of image, size, ethnicities and syndromes), types of facial features, techniques used for normalization, dimensionality reduction and classification, and deep learning, as well as a discussion of the main gaps, challenges and opportunities.
Project description: Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs. The automated landmarking workflow involved two successive DiffusionNet models. The dataset was randomly divided into a training and test dataset. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and compared to the intra-observer and inter-observer variability of manual annotation and a semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 ± 1.15 mm was comparable to the inter-observer variability (1.31 ± 0.91 mm) of manual annotation. Automated landmark annotation on 3D photographs was thus achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning.
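The precision metric used above, the Euclidean distance between automated and manual landmark positions, can be computed as follows. This is a generic sketch with toy coordinates in millimetres, not the study's evaluation code; the simulated 1 mm annotation error is an assumption for illustration.

```python
# Hedged sketch: per-landmark precision as the Euclidean distance between
# automated and manual 3D annotations. Toy coordinates, not study data.
import numpy as np

rng = np.random.default_rng(3)
N_LANDMARKS = 10  # ten cephalometric landmarks, as in the study

manual = rng.uniform(0, 100, size=(N_LANDMARKS, 3))  # (x, y, z) in mm
# Simulate an automated result with ~1 mm isotropic annotation error:
automated = manual + rng.normal(0.0, 1.0, size=(N_LANDMARKS, 3))

errors = np.linalg.norm(automated - manual, axis=1)  # one distance per landmark
mean_precision = errors.mean()
std_precision = errors.std()
```

Reporting the per-landmark distances (rather than only the mean) also allows direct comparison with intra-observer and inter-observer variability, as done in the study.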