Project description:Hyoid bone movement is an important physiological event during swallowing that contributes to normal swallowing function. To determine whether hyoid bone movement is adequate, clinicians conduct an X-ray videofluoroscopic swallowing study, which, although it is the gold-standard technique, has limitations such as radiation exposure and cost. Here, we demonstrate the ability to track hyoid bone movement using a non-invasive accelerometry sensor attached to the surface of the human neck. Specifically, deep neural networks were used to mathematically describe the relationship between hyoid bone movement and sensor signals. Training and validation of the system were conducted on a dataset of 400 swallows from 114 patients. Our experiments indicated that the computer-aided hyoid bone movement prediction performs promisingly when compared with human experts' judgments, revealing that the general pattern of hyoid bone movement can be learned by a highly nonlinear algorithm. Such a sensor-supported strategy offers an alternative and widely available method for online hyoid bone movement tracking without any radiation exposure, and it provides a flexible approach for identifying dysphagia and other swallowing disorders.
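The description maps windowed accelerometry signals to hyoid kinematics with a deep network. Below is a minimal sketch of that idea, assuming a fixed-length window of tri-axial acceleration as input and a short (x, y) hyoid trajectory as the regression target; the layer sizes, window length, and output parameterization are illustrative assumptions, since the architecture is not specified here.

```python
# Hypothetical regressor from a neck-accelerometry window to hyoid coordinates.
# All hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class HyoidRegressor(nn.Module):
    def __init__(self, n_frames=30):
        super().__init__()
        self.n_frames = n_frames
        self.encoder = nn.Sequential(                  # temporal feature extractor
            nn.Conv1d(3, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.head = nn.Linear(64 * 16, n_frames * 2)   # (x, y) per output frame

    def forward(self, accel):                          # accel: (batch, 3, window_len)
        z = self.encoder(accel).flatten(1)
        return self.head(z).view(-1, self.n_frames, 2) # predicted hyoid trajectory

model = HyoidRegressor()
trajectory = model(torch.randn(8, 3, 128))             # batch of 8 signal windows
```

Training would minimize, for example, the mean squared error between predicted trajectories and hyoid coordinates annotated on videofluoroscopy.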
Project description:We are developing a system for long-term Semi-Automated Rehabilitation At the Home (SARAH) that relies on low-cost and unobtrusive video-based sensing. We present a cyber-human methodology used by the SARAH system for automated assessment of upper extremity stroke rehabilitation at the home. We propose a hierarchical model for automatically segmenting stroke survivors' movements and generating training task performance assessment scores during rehabilitation. The hierarchical model fuses expert therapist knowledge-based approaches with data-driven techniques. The expert knowledge is more observable in the higher layers of the hierarchy (task and segment) and therefore more accessible to algorithms incorporating high-level constraints relating to activity structure (i.e., type and order of segments per task). We utilize an HMM and a Decision Tree model to connect these high-level priors to data-driven analysis. The lower layers (RGB images and raw kinematics) need to be addressed primarily through data-driven techniques. We use a transformer-based architecture operating on low-level action features (tracking of individual body joints and objects) and a Multi-Stage Temporal Convolutional Network (MS-TCN) operating on raw RGB images. We combine these complementary algorithms in a sequence that encodes the information from the different layers of the movement hierarchy. Through this combination, we produce robust segmentation and task assessment results on the noisy, variable, and limited data that are characteristic of low-cost video capture of rehabilitation at the home. Our proposed approach achieves 85% accuracy in per-frame labeling, 99% accuracy in segment classification, and 93% accuracy in task completion assessment. Although the methodology proposed in this paper applies to upper extremity rehabilitation using the SARAH system, it can potentially be used, with minor alterations, to assist automation in many other movement rehabilitation contexts (e.g., lower-extremity training after neurological injury).
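One way to picture how the high-level activity-structure priors constrain per-frame, data-driven scores is a left-to-right HMM decoded with Viterbi, where the transition matrix only allows segments to occur in their expected order within a task. The sketch below is illustrative, assuming per-frame log-probabilities from some upstream model and a simple self-loop/advance transition structure; it is not the paper's exact formulation.

```python
# Viterbi decoding with a left-to-right segment-order prior (illustrative).
import numpy as np

def viterbi_ordered(frame_log_probs, self_loop=0.95):
    """frame_log_probs: (T, S) per-frame log-scores for S ordered segments."""
    T, S = frame_log_probs.shape
    stay, move = np.log(self_loop), np.log(1.0 - self_loop)
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = frame_log_probs[0, 0]            # every task starts at segment 0
    for t in range(1, T):
        for s in range(S):
            cands = [(dp[t - 1, s] + stay, s)]  # remain in the same segment
            if s > 0:
                cands.append((dp[t - 1, s - 1] + move, s - 1))  # advance
            best, back[t, s] = max(cands)
            dp[t, s] = best + frame_log_probs[t, s]
    path = [S - 1]                              # task must end at the last segment
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]                           # most likely segment label per frame
```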
Project description:Breast imaging techniques are used to assess the tumor response to neoadjuvant treatment (NAT), which is increasingly preferred as a therapeutic option and increases the rate of breast conservation for breast cancer. Herein, we report a case in which a woman was diagnosed with invasive ductal carcinoma in the left breast and received NAT before surgery. Automated breast ultrasound (AB US) was performed regularly before and during NAT to evaluate the tumor response by measuring diameter changes and volume reductions of the tumor. Images showed that the tumor shrank markedly and, apart from a macrocalcification, had disappeared after 7 cycles of NAT. Postoperative histopathological examination confirmed that there were no residual tumor cells. We found that AB US overcame the limitations of handheld US, such as operator dependence, poor reproducibility, and limited field of view, and can be an alternative modality for assessing the tumor response to NAT in the absence of magnetic resonance imaging (MRI) instruments.
Project description:To understand the functional significance of skeletal muscle anatomy, a method of quantifying local shape changes in different tissue structures during dynamic tasks is required. Taking advantage of the good spatial and temporal resolution of B-mode ultrasound imaging, we describe a method of automatically segmenting images into fascicle and aponeurosis regions and tracking movement of features, independently, in localized portions of each tissue. Ultrasound images (25 Hz) of the medial gastrocnemius muscle were collected from eight participants during ankle joint rotation (2° and 20°), isometric contractions (1, 5, and 50 Nm), and deep knee bends. A Kanade-Lucas-Tomasi feature tracker was used to identify and track any distinctive and persistent features within the image sequences. A velocity field representation of local movement was then found and subdivided between fascicle and aponeurosis regions using segmentations from a multiresolution active shape model (ASM). Movement in each region was quantified by interpolating the effect of the fields on a set of probes. ASM segmentation results were compared with hand-labeled data, while aponeurosis and fascicle movement were compared with results from a previously documented cross-correlation approach. ASM provided good image segmentations (<1 mm average error), with fully automatic initialization possible in sequences from seven participants. Feature tracking provided similar length change results to the cross-correlation approach for small movements, while outperforming it in larger movements. The proposed method offers the potential to distinguish between active and passive changes in muscle shape, to model strain distributions during different movements and conditions, and to quantify nonhomogeneous strain along aponeuroses.
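The feature-identification and tracking step described above maps naturally onto standard computer-vision tooling. Below is a minimal sketch using OpenCV's Shi-Tomasi corner detector with a pyramidal Lucas-Kanade (KLT) tracker on consecutive grayscale ultrasound frames; the parameter values are illustrative assumptions, not those used in the study.

```python
# KLT feature tracking across an ultrasound image sequence (illustrative).
import cv2
import numpy as np

def track_features(frames):
    """frames: list of grayscale uint8 images; returns per-frame point arrays."""
    pts = cv2.goodFeaturesToTrack(frames[0], maxCorners=200,
                                  qualityLevel=0.01, minDistance=5)
    tracks = [pts]
    for prev, curr in zip(frames, frames[1:]):
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(
            prev, curr, tracks[-1], None, winSize=(21, 21), maxLevel=3)
        keep = status.ravel() == 1                 # drop features that were lost
        tracks.append(nxt[keep].reshape(-1, 1, 2))
    return tracks
```

Per-frame displacements of the surviving features can then be interpolated into the velocity field that is subdivided between fascicle and aponeurosis regions.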
Project description:There has been little attention given to the relationship between variations in normal craniofacial morphology and swallowing physiology. This preliminary investigation evaluated the relationship between the Frankfort-mandibular plane angle (FMA) and hyoid displacement during swallowing. Hyoid movement was evaluated during 12-ml and 24-ml swallows of liquid barium in 12 healthy subjects (age = 20-29 years, median = 23 years). Lateral-projection videofluorography was used. Positions of the hyoid at maximum forward displacement, maximum upward displacement, starting position, and ending position were determined using image analysis software. The mean FMA was 28.92° ± 4.08° (mean ± SD; range = 20°-34°). A Pearson correlation (α ≤ 0.05) demonstrated that hyoid forward displacement was significantly inversely correlated with the FMA [R = -0.68, p = 0.015 (12 ml) and R = -0.72, p = 0.009 (24 ml)]; thus, the greater the FMA, the smaller the hyoid forward displacement. Upward displacement of the hyoid was not significantly correlated with FMA for 12-ml (R = -0.41, p = 0.55) or 24-ml swallows (R = 0.21, p = 0.512). In addition, neither the hyoid starting position nor the ending position was significantly correlated with FMA. In conclusion, the results of this preliminary study suggest that normal variations in morphology, as measured by the FMA, may influence hyoid movement and therefore affect swallowing physiology.
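The statistical test reported above is a standard Pearson correlation at α ≤ 0.05. As a worked sketch, the snippet below computes R and p for FMA versus forward displacement; the data arrays are made-up placeholders, not the study's measurements.

```python
# Pearson correlation between FMA and hyoid forward displacement (placeholder data).
import numpy as np
from scipy import stats

fma = np.array([20, 22, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34])          # degrees
forward_disp = np.array([18, 17, 16, 17, 15, 14, 14, 13, 12, 12, 11, 10]) # mm

r, p = stats.pearsonr(fma, forward_disp)
print(f"R = {r:.2f}, p = {p:.3f}")   # inverse correlation is significant if p <= 0.05
```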
Project description:Purpose:Because it shows the movement of different parts of the tongue in real time, ultrasound biofeedback therapy is a promising technology for speech research and remediation. One limitation is the difficulty of interpreting real-time ultrasound images of tongue motion. Our image processing system, TonguePART, tracks the tongue surface and allows for the acquisition of quantitative tongue part trajectories. Method:TonguePART automatically identifies the tongue contour based on ultrasound image brightness and tracks motion of the tongue root, dorsum, and blade in real time. We present tongue part trajectory data from 2 children with residual sound errors on /r/ and 2 children with typical speech, focusing on /r/ (International Phonetic Alphabet ɹ) in the phonetic context /ɑr/. We compared the tongue trajectories to magnetic resonance images of the sustained vowel /ɑ/ and /r/. Results:Measured trajectories show larger overall displacement and greater differentiation of tongue part movements for children with typical speech during the production of /ɑr/, compared to children with residual speech sound disorders. Conclusion:TonguePART is a fast, reliable method of tracking articulatory movement of tongue parts for syllables such as /ɑr/. It is extensible to other sounds and phonetic contexts. By tracking tongue parts, clinical researchers can investigate lingual coordination. TonguePART is suitable for real-time data collection and biofeedback. Ultrasound biofeedback therapy users may make more progress using simplified biofeedback of tongue movement.
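The brightness-based contour identification described here can be pictured as scanning along rays fanning out from the transducer origin and taking the strongest echo as the tongue surface. The sketch below is a hedged illustration of that idea, assuming a known probe origin and fan geometry; TonguePART's actual algorithm may differ in detail.

```python
# Brightness-based tongue-surface detection along radial scanlines (illustrative).
import numpy as np

def tongue_contour(frame, origin, angles, r_min=20, r_max=300):
    """frame: 2-D grayscale array; returns one (x, y) surface point per angle."""
    h, w = frame.shape
    contour = []
    for theta in angles:
        rs = np.arange(r_min, r_max)                  # skip near-field noise
        xs = (origin[0] + rs * np.cos(theta)).astype(int)
        ys = (origin[1] + rs * np.sin(theta)).astype(int)
        ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
        if not ok.any():
            continue
        vals = frame[ys[ok], xs[ok]]
        i = np.argmax(vals)                           # brightest echo = surface
        contour.append((xs[ok][i], ys[ok][i]))
    return np.array(contour)
```

Tracking the root, dorsum, and blade then amounts to following fixed angular sectors of this contour over time.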
Project description:Digital breast tomosynthesis (DBT) offers poor image quality along the depth direction. This paper presents a new method that considerably improves the image quality of DBT by using a priori information from automated ultrasound (AUS) images. DBT and AUS images of a complex breast-mimicking phantom are acquired by a DBT/AUS dual-modality system. The AUS images are taken in the same geometry as the DBT images, and the gradient information of the in-slice AUS images is incorporated into a new loss functional during the DBT reconstruction process. The additional data lead to new iterative update equations, derived by solving the optimization problem with the gradient descent method. Both visual comparison and quantitative analysis are employed to evaluate the improvement in the DBT images. Normalized line profiles of lesions are obtained to compare the edges of the DBT and AUS-corrected DBT images. Additionally, image quality metrics such as signal difference to noise ratio (SDNR) and artifact spread function (ASF) are calculated to quantify the effectiveness of the proposed method. In traditional DBT image reconstructions, serious artifacts can be found along the depth direction (Z direction), resulting in the blurring of lesion edges in the off-focus planes parallel to the detector. However, by applying the proposed method, the quality of the reconstructed DBT images is greatly improved. Visually, the AUS-corrected DBT images have much clearer borders in both in-focus and off-focus planes, fewer Z direction artifacts, and a reduced overlapping effect compared to the conventional DBT images. Quantitatively, the corrected DBT images have a better ASF, indicating a great reduction in Z direction artifacts as well as better Z resolution. The sharper line profiles along the Y direction show enhancement of the edges. In addition, noise is reduced, as evidenced by the clearly improved SDNR values. The proposed method provides a great improvement in the quality of DBT images. This improvement makes it easier to locate and distinguish a lesion, which may help improve the accuracy of diagnosis using DBT imaging.
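Schematically, the reconstruction minimizes a data-fidelity term on the DBT projections plus a regularizer that pulls the in-slice gradients of the reconstruction toward those of the co-registered AUS volume. The sketch below illustrates that structure with plain gradient descent; the forward/back-projection operators A and At, the weighting beta, and the boundary handling are schematic placeholders, not the paper's exact equations.

```python
# Gradient-descent reconstruction with an AUS gradient prior (schematic).
import numpy as np

def grad2d(v):                       # in-slice forward differences (replicate edge)
    gx = np.diff(v, axis=-1, append=v[..., -1:])
    gy = np.diff(v, axis=-2, append=v[..., -1:, :])
    return gx, gy

def reconstruct(A, At, proj, aus, beta=0.1, step=1e-3, iters=100):
    """Minimize 0.5*||A(x) - proj||^2 + 0.5*beta*||grad(x) - grad(aus)||^2."""
    x = np.zeros_like(aus)
    agx, agy = grad2d(aus)           # a priori edge information from AUS
    for _ in range(iters):
        g_data = At(A(x) - proj)     # gradient of the data-fidelity term
        gx, gy = grad2d(x)
        rx, ry = gx - agx, gy - agy  # gradient-mismatch residuals
        # adjoint of the forward difference: negative backward difference
        g_prior = -(np.diff(rx, axis=-1, prepend=rx[..., :1]) +
                    np.diff(ry, axis=-2, prepend=ry[..., :1, :]))
        x -= step * (g_data + beta * g_prior)
    return x
```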
Project description:(1) Background: Ultrasound provides a radiation-free and portable method for assessing swallowing. Hyoid bone locations and displacements are often used as important indicators for the evaluation of swallowing disorders. However, this requires clinicians to spend a great deal of time reviewing the ultrasound images. (2) Methods: In this study, we applied tracking algorithms based on deep learning and correlation filters to detect hyoid locations in ultrasound videos collected during swallowing. Fifty videos were collected from 10 young, healthy subjects for training, evaluation, and testing of the trackers. (3) Results: The best-performing deep learning algorithm, the Fully-Convolutional Siamese Network (SiamFC), reliably produced accurate hyoid bone locations for each frame of the swallowing ultrasound videos. While running at a real-time frame rate (175 fps) on an RTX 2060, SiamFC achieved a precision of 98.9% at a threshold of 10 pixels (3.25 mm) and 80.5% at a threshold of 5 pixels (1.63 mm). The tracker's root-mean-square error and average error were 3.9 pixels (1.27 mm) and 3.3 pixels (1.07 mm), respectively. (4) Conclusions: Our results pave the way for real-time automatic tracking of the hyoid bone in ultrasound videos for swallowing assessment.
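The precision and error figures above follow standard tracking-benchmark definitions: precision at threshold d is the fraction of frames whose predicted center lies within d pixels of ground truth. A small sketch of those metrics, with array contents as placeholders:

```python
# Tracking metrics: precision at pixel thresholds, RMSE, and mean error.
import numpy as np

def tracking_metrics(pred, gt, thresholds=(5, 10)):
    """pred, gt: (n_frames, 2) arrays of (x, y) hyoid centers in pixels."""
    err = np.linalg.norm(pred - gt, axis=1)            # per-frame center error
    return {
        "precision": {d: float((err <= d).mean()) for d in thresholds},
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "mean_error": float(err.mean()),
    }
```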
Project description:Background: Ultrasound can be used to assess diaphragm movement. Existing methods focus on movement at a single point on the hemidiaphragm and may not account for its anatomic and functional complexity. We aimed to develop an ultrasound method, the Area method, to assess movement of the entire hemidiaphragm dome and to compare it with existing methods in terms of accuracy, inter-rater agreement, and feasibility. Methods: Movement of the diaphragm was evaluated by ultrasonography in 19 healthy subjects and correlated with simultaneously performed spirometry. Two existing methods, M-mode excursion at the posterior part of the diaphragm and B-mode measurement at the top of the diaphragm, were compared with the Area method. Two independent raters reviewed film clips to analyze inter-rater agreement. Feasibility was tested by novice ultrasound operators. Results: Correlation with expired lung volume was higher with the Area method, 0.88 (95% CI 0.81-0.95), p < 0.001, and with the M-mode measurement, 0.84 (95% CI 0.75-0.92), p < 0.001, than with the B-mode measurement, 0.71 (95% CI 0.59-0.83), p < 0.001. Inter-rater agreement was highest with the Area method, 0.9, p < 0.001, and the M-mode measurement, 0.9, p < 0.001, and lower with the B-mode measurement, 0.8, p < 0.001. The M-mode measurement could be performed on the left side in only 20% of participants. The Area method could be performed in all participants at both hemidiaphragms, and novice operators found it easy to perform. Conclusion: A new method to evaluate diaphragm movement is introduced. Accuracy and inter-rater agreement are high. The Area method is equally feasible at both hemidiaphragms, in contrast to existing methods. However, additional studies should include more participants and different types of pulmonary diseases, and should investigate the role of patient position, to validate the Area method fully.
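An "area" measurement of dome movement reduces, computationally, to the area enclosed between the diaphragm contour at end-expiration and end-inspiration. The sketch below estimates that with the shoelace formula; the contour format and the pairing of the two traces are assumptions for illustration, since the paper's exact computation is not given here.

```python
# Area enclosed between two traced hemidiaphragm contours (illustrative).
import numpy as np

def polygon_area(points):
    """points: (n, 2) array of (x, y) vertices of a closed polygon, e.g., in cm."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def swept_area(contour_expiration, contour_inspiration):
    # Join one contour with the other reversed to form a single closed ring
    # bounding the region the dome sweeps through during the breath.
    ring = np.vstack([contour_expiration, contour_inspiration[::-1]])
    return polygon_area(ring)
```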
Project description:Background: Studies aiming to objectively quantify movement disorders during upper limb tasks using wearable sensors have recently increased, but there is wide variety in the described measurement and analysis methods, hampering standardization of methods in research and clinics. Therefore, the primary objective of this review was to provide an overview of sensor set-up and type, included tasks, sensor features, and methods used to quantify movement disorders during upper limb tasks in multiple pathological populations. The secondary objective was to identify the most sensitive sensor features for the detection and quantification of movement disorders and to describe the clinical application of the proposed methods. Methods: A literature search using Scopus, Web of Science, and PubMed was performed. Articles needed to meet the following criteria: 1) participants were adults/children with a neurological disease; 2) (at least) one sensor was placed on the upper limb for evaluation of movement disorders during upper limb tasks; 3) comparisons were made between groups with/without movement disorders, sensor features before/after intervention, or sensor features and a clinical scale for assessment of the movement disorder; and 4) outcome measures included sensor features from acceleration/angular velocity signals. Results: A total of 101 articles were included, of which 56 researched Parkinson's Disease. Wrist(s), hand(s), and index finger(s) were the most popular sensor locations. The most frequent tasks were finger tapping, wrist pro/supination, keeping the arms extended in front of the body, and finger-to-nose. The most frequently calculated sensor features were the mean, standard deviation, root-mean-square, range, skewness, kurtosis, and entropy of acceleration and/or angular velocity, in combination with dominant frequencies/power of acceleration signals. Examples of clinical applications were the automatization of a clinical scale and discrimination between a patient/control group or different patient groups. Conclusion: The current overview can support clinicians and researchers in selecting the most sensitive pathology-dependent sensor features and methodologies for the detection and quantification of upper limb movement disorders and for objective evaluations of treatment effects. Insights from Parkinson's Disease studies can accelerate the development of wearable sensor protocols for the remaining pathologies, provided that there is sufficient attention to the standardization of protocols, tasks, feasibility, and data analysis methods.
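The feature set reported as most common in the reviewed studies is straightforward to compute from a raw sensor stream. Below is an illustrative sketch that extracts those time-domain statistics plus the dominant frequency and its power from one acceleration channel; the sampling rate and the periodogram choice are assumptions, not prescriptions from the review.

```python
# Common wearable-sensor features from an acceleration signal (illustrative).
import numpy as np
from scipy import signal, stats

def upper_limb_features(acc, fs=100.0):
    """acc: 1-D acceleration signal (one axis or magnitude); fs: sampling rate in Hz."""
    feats = {
        "mean": acc.mean(),
        "std": acc.std(),
        "rms": np.sqrt(np.mean(acc ** 2)),
        "range": acc.max() - acc.min(),
        "skewness": stats.skew(acc),
        "kurtosis": stats.kurtosis(acc),
    }
    freqs, psd = signal.periodogram(acc, fs=fs)
    i = np.argmax(psd[1:]) + 1                      # dominant bin, skipping DC
    feats["dominant_freq_hz"] = freqs[i]
    feats["dominant_power"] = psd[i]
    feats["spectral_entropy"] = stats.entropy(psd / psd.sum())
    return feats
```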