Project description: Some patients have residual non-specific symptoms after therapy for Lyme disease, referred to as post-treatment Lyme disease symptoms or syndrome, depending on whether there is functional impairment. A standardized test battery was used to characterize a diverse group of Lyme disease patients with and without residual symptoms. There was a strong correlation between sleep disturbance and certain other symptoms such as fatigue, pain, anxiety, and cognitive complaints. Results were fitted to a Logistic Regression model using the Neuro-QoL Fatigue t-score together with the Short Form-36 Physical Functioning scale and Mental Health component scores, and to a Decision Tree model using only the Neuro-QoL Fatigue t-score. Compared with clinical categorization, the Logistic Regression model had an accuracy of 97% and the Decision Tree model an accuracy of 93%. The Logistic Regression and Decision Tree models were then applied to a separate cohort. Both models performed with high sensitivity (90%) but moderate specificity (62%); the overall accuracy was 74%. Agreement between 2 time points, separated by a mean of 4 months, was 89% using the Decision Tree model and 87% with the Logistic Regression model. These models are simple and can help to quantitate the level of symptom severity in post-treatment Lyme disease symptoms. More research is needed to increase the specificity of the models, exploring additional approaches that could potentially strengthen an operational definition for post-treatment Lyme disease symptoms. Evaluating how sleep disturbance, fatigue, pain, and cognitive complaints interrelate can potentially lead to new interventions that will improve the overall health of these patients.
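A Decision Tree built on a single feature (here, the Neuro-QoL Fatigue t-score) reduces in practice to a cutoff on that score. The sketch below illustrates the idea with a one-feature decision stump and the sensitivity/specificity computation used to evaluate such models. The scores, labels, and grid-searched threshold are all made up for illustration; none of them come from the study.

```python
import numpy as np

def fit_stump(scores, labels):
    """Single-feature decision stump: grid-search a cutoff on a symptom
    t-score that best separates symptomatic (1) from recovered (0) patients.
    Toy stand-in for a one-feature decision tree; data are illustrative."""
    order = np.sort(np.unique(np.asarray(scores, float)))
    cuts = (order[:-1] + order[1:]) / 2          # candidate midpoints
    best_cut, best_acc = None, -1.0
    for c in cuts:
        acc = np.mean((np.asarray(scores) > c) == np.asarray(labels))
        if acc > best_acc:
            best_cut, best_acc = c, acc
    return best_cut, best_acc

def sens_spec(pred, labels):
    """Sensitivity and specificity of binary predictions."""
    pred, labels = np.asarray(pred, bool), np.asarray(labels, bool)
    sens = (pred & labels).sum() / labels.sum()
    spec = (~pred & ~labels).sum() / (~labels).sum()
    return sens, spec
```

With hypothetical fatigue t-scores of [40, 42, 45, 55, 60, 65] and labels [0, 0, 0, 1, 1, 1], the stump learns a cutoff of 50 with perfect in-sample accuracy; the abstract's point is that such a simple rule generalizes with high sensitivity but only moderate specificity.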
Project description: Consumer wearables and sensors are a rich source of data about patients' daily disease and symptom burden, particularly in the case of movement disorders like Parkinson's disease (PD). However, interpreting these complex data into so-called digital biomarkers requires sophisticated analytical approaches, and validating these biomarkers requires sufficient data and unbiased evaluation methods. Here we describe the use of crowdsourcing to evaluate and benchmark features derived from accelerometer and gyroscope data in two different datasets, predicting the presence of PD and the severity of three PD symptoms: tremor, dyskinesia, and bradykinesia. Forty teams from around the world submitted features, achieving drastically improved predictive performance for PD status (best AUROC = 0.87), as well as for tremor (best AUPR = 0.75), dyskinesia (best AUPR = 0.48), and bradykinesia severity (best AUPR = 0.95).
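The AUROC and AUPR metrics quoted above can be computed without any ML framework. A minimal NumPy-only sketch, using the pairwise (Mann-Whitney) formulation of AUROC and a basic average-precision estimate of AUPR; this is not the challenge's actual scoring code:

```python
import numpy as np

def auroc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a random positive outranks a random negative."""
    y = np.asarray(y_true, bool)
    s = np.asarray(scores, float)
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def average_precision(y_true, scores):
    """Average precision, a common estimator of the area under the
    precision-recall curve (AUPR)."""
    y = np.asarray(y_true, float)
    order = np.argsort(-np.asarray(scores, float))
    y = y[order]
    precision = np.cumsum(y) / np.arange(1, len(y) + 1)
    return (precision * y).sum() / y.sum()
```

A perfect ranking yields AUROC = AUPR = 1.0, and a fully inverted ranking yields AUROC = 0.0, which is why the dyskinesia result (AUPR = 0.48) is harder to interpret without the class prevalence: the AUPR baseline for a random classifier equals the positive fraction, not 0.5.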
Project description: Background: A 3-step clinical prediction tool including falling in the previous year, freezing of gait in the past month, and self-selected gait speed <1.1 m/s has shown high accuracy in predicting falls in people with Parkinson's disease (PD). The accuracy of this tool when including only self-report measures is yet to be determined. Objectives: To validate the 3-step prediction tool using only self-report measures (3-step self-reported prediction tool), and to externally validate the 3-step clinical prediction tool. Methods: The clinical tool was used with 137 individuals with PD. Participants also answered a question about self-reported gait speed, enabling scoring of the self-reported tool, and were followed up for 6 months. An intraclass correlation coefficient (ICC(2,1)) was calculated to evaluate test-retest reliability of the 3-step self-reported prediction tool. Multivariate logistic regression models were used to evaluate the performance of both tools, and their discriminative ability was determined using the area under the curve (AUC). Results: Forty-two participants (31%) reported ≥1 fall during follow-up. The 3-step self-reported tool had an ICC(2,1) of 0.991 (95% CI 0.971-0.997; P < 0.001) and AUC = 0.68 (95% CI 0.59-0.77), while the 3-step clinical tool had an AUC = 0.69 (95% CI 0.60-0.78). Conclusions: The 3-step self-reported prediction tool showed excellent test-retest reliability and was validated with acceptable accuracy in predicting falls in the next 6 months. The 3-step clinical prediction tool was externally validated with similar accuracy. The 3-step self-reported prediction tool may be useful to identify people with PD at risk of falls in e/tele-health settings.
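The test-retest statistic used here, ICC(2,1), is the two-way random-effects, absolute-agreement, single-measures coefficient, which can be computed directly from the subjects-by-raters rating matrix. A NumPy sketch of the standard Shrout-Fleiss formula (not the study's analysis code; the example ratings are invented):

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    ratings: (n_subjects, k_raters) matrix, e.g. two administrations of a tool."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                     # per-subject means
    col_means = x.mean(axis=0)                     # per-rater/occasion means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                        # between-subjects mean square
    msc = ss_cols / (k - 1)                        # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))             # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Identical ratings on both occasions give ICC(2,1) = 1; a constant offset between occasions lowers the coefficient even when the ranking is preserved, because the absolute-agreement form penalizes systematic bias.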
Project description: Scientific research is increasingly reliant on "microwork" or "crowdsourcing" provided by digital platforms to collect new data. Digital platforms connect clients and workers, charging a fee for an algorithmically managed workflow based on Terms of Service agreements. Although these platforms offer a way to make a living or complement other sources of income, microworkers lack fundamental labor rights and basic safe working conditions, especially in the Global South. We ask how researchers and research institutions address the ethical issues involved in considering microworkers as "human participants." We argue that current scientific research fails to treat microworkers in the same way as in-person human participants, producing de facto a double morality: one applied to people with rights acknowledged by states and international bodies (e.g., the Helsinki Declaration), the other to guest workers of digital autocracies who have almost no rights at all. We illustrate our argument by drawing on 57 interviews conducted with microworkers in Spanish-speaking countries.
Project description: One of the promising opportunities of digital health is its potential to lead to more holistic understandings of diseases by interacting with the daily life of patients and through the collection of large amounts of real-world data. Validating and benchmarking indicators of disease severity in the home setting is difficult, however, given the large number of confounders present in the real world and the challenges of collecting ground-truth data in the home. Here we leverage two datasets collected from patients with Parkinson's disease, which couple continuous wrist-worn accelerometer data with frequent symptom reports in the home setting, to develop digital biomarkers of symptom severity. Using these data, we performed a public benchmarking challenge in which participants were asked to build measures of severity across 3 symptoms (on/off medication, dyskinesia, and tremor). 42 teams participated, and performance improved over baseline models for each subchallenge. Additional ensemble modeling across submissions further improved performance, and the top models were validated in a subset of patients whose symptoms were observed and rated by trained clinicians.
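Challenge submissions of this kind typically start by turning windows of raw wrist-accelerometer samples into summary features. A toy sketch of such a step, assuming a tri-axial signal and a nominal 50 Hz sampling rate; the 3-7 Hz tremor band and the feature names are common conventions in the literature, not the challenge's specification:

```python
import numpy as np

def tremor_features(acc, fs=50.0):
    """Toy feature extraction from one window of tri-axial accelerometer data.
    acc: (n_samples, 3) array; fs: sampling rate in Hz (assumed, not from the study)."""
    mag = np.linalg.norm(acc, axis=1)
    mag = mag - mag.mean()                        # remove gravity/DC offset
    rms = np.sqrt((mag ** 2).mean())              # overall movement intensity
    spec = np.abs(np.fft.rfft(mag)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(mag.shape[0], d=1.0 / fs)
    band = (freqs >= 3.0) & (freqs <= 7.0)        # typical parkinsonian tremor band
    band_power = spec[band].sum() / spec[1:].sum()  # fraction of non-DC power in band
    dom_freq = freqs[1:][np.argmax(spec[1:])]     # dominant non-DC frequency
    return {"rms": rms, "band_power": band_power, "dom_freq": dom_freq}
```

Feeding in a synthetic window with a 5 Hz oscillation recovers a dominant frequency of 5 Hz with nearly all power inside the tremor band; features like these would then be scored against the in-home symptom reports described above.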
Project description: Objective: This study examines the impact of digital mobile devices on different aspects of family time in the United Kingdom. Background: Recent years have witnessed increasing concerns surrounding the consequences of the widespread diffusion of Internet-enabled mobile devices such as smartphones for family well-being. However, research examining the extent to which mobile devices have influenced family time remains limited. Method: Using nationally representative time-diary data spanning a period of unprecedented technological change (U.K. 2000 and 2015 Time Use Surveys), the authors construct a set of novel family time measures that capture varying degrees of family togetherness and examine changes in these measures over time. Novel diary data are also analyzed to explore the occurrence of mobile device use during different aspects of family time in 2015. Results: Children and parents spent more time at the same location in 2015, and there was no change in the time they spent doing activities together. However, there was a marked increase in alone-together time, when children were at the same location as their parents but did not report that they were copresent with them. The results show that children and parents used mobile devices during all aspects of family time in 2015, but device use was notably concentrated during alone-together time. Conclusion: This study provides an empirical basis for documenting the impact of mobile device use on family time.
Project description: Sensor data from digital health technologies (DHTs) used in clinical trials are a valuable source of information, because datasets from different studies can be combined, linked with other data types, and reused multiple times for various purposes. To date, no standards exist for capturing or storing DHT biosensor data that apply across modalities and disease areas and that can also capture the clinical trial- and environment-specific aspects, the so-called metadata. In this perspectives paper, we propose a metadata framework that divides DHT metadata into metadata that is independent of the therapeutic area or clinical trial design (concept of interest and context of use), and metadata that is dependent on these factors. We demonstrate how this framework can be applied to data collected with different types of DHTs deployed in the WATCH-PD clinical study of Parkinson's disease. This framework provides a means to pre-specify, and therefore standardize, aspects of the use of DHTs, promoting comparability of DHTs across future studies.
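The proposed split, trial-independent metadata (concept of interest, context of use) versus trial-dependent metadata, can be pictured as a simple record type. A minimal sketch; the field names and example values are illustrative, not the paper's actual schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DHTMetadata:
    """Sketch of the proposed metadata split for DHT sensor data.
    The first three fields are trial-independent; trial_specific holds
    everything that depends on the therapeutic area or trial design.
    All names here are hypothetical, not from the paper."""
    concept_of_interest: str            # e.g. the clinical concept being measured
    context_of_use: str                 # e.g. at-home vs in-clinic deployment
    device: str                         # sensor modality
    trial_specific: dict = field(default_factory=dict)  # protocol, visits, etc.

# Example record for an accelerometer deployed in a PD study
meta = DHTMetadata(
    concept_of_interest="tremor severity",
    context_of_use="at-home passive monitoring",
    device="wrist-worn accelerometer",
    trial_specific={"study": "WATCH-PD", "visit_interval_days": 30},
)
```

Keeping the trial-independent fields in a fixed, typed structure is what makes records from different studies directly comparable, which is the standardization benefit the abstract describes.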
Project description: BACKGROUND: Healthcare services are being increasingly digitalised in European countries. However, in studies evaluating digital health technology, some people are less likely to participate than others, e.g. those who are older, those with a lower level of education and those with poorer digital skills. Such non-participation in research - deriving from the processes of non-recruitment of targeted individuals and self-selection - can be a driver of old-age exclusion from new digital health technologies. We aim to introduce, discuss and test an instrument to measure non-participation in digital health studies, in particular, the process of self-selection. METHODS: Based on a review of the relevant literature, we designed an instrument - the NPART survey questionnaire - for the analysis of self-selection, covering five thematic areas: socioeconomic factors, self-rated health and subjective overall quality of life, social participation, time resources, and digital skills and use of technology. The instrument was piloted on 70 older study persons in Sweden, approached during the recruitment process for a trial study. RESULTS: Results indicated that participants, as compared to decliners, were on average slightly younger and more educated, and reported better memory, higher social participation, and higher familiarity with and greater use of digital technologies. Overall, the survey questionnaire was able to discriminate between participants and decliners on the key aspects investigated, along the lines of the relevant literature. CONCLUSIONS: The NPART survey questionnaire can be applied to characterise non-participation in digital health research, in particular, the process of self-selection. It helps to identify underrepresented groups and their needs. Data generated from such an investigation, combined with hospital registry data on non-recruitment, allows for the implementation of improved sampling strategies, e.g. focused recruitment of underrepresented groups, and for the post hoc adjustment of results generated from biased samples, e.g. weighting procedures.