ABSTRACT: OBJECTIVE: To systematically review the published literature and identify consistency and variation in the aims, measures, and methods of studies using electronic health record (EHR) audit logs to observe clinical activities. MATERIALS AND METHODS: In July 2019, we searched PubMed for articles using EHR audit logs to study clinical activities. We coded and clustered the aims, measures, and methods of each article into recurring categories. We likewise extracted and summarized the methods used to validate measures derived from audit logs and the limitations of using audit logs for research discussed by the authors. RESULTS: Eighty-five articles met the inclusion criteria. Study aims included examining EHR use, care team dynamics, and clinical workflows. Studies employed 6 key audit log measures: counts of actions captured by audit logs (eg, problem list viewed), counts of higher-level activities imputed by researchers (eg, chart review), activity durations, activity sequences, activity clusters, and EHR user networks. Methods used to preprocess audit logs varied, including how authors filtered extraneous actions, mapped actions to higher-level activities, and interpreted repeated actions or gaps in activity. Nineteen studies (22%) validated their results, but only 9 (11%) did so through direct observation, and these demonstrated varying levels of measure accuracy. DISCUSSION: Although originally designed to aid access control, EHR audit logs have been used to observe diverse clinical activities. However, most studies lack sufficient discussion of measure definition, calculation, and validation to support replication, comparison, and cross-study synthesis. CONCLUSION: EHR audit logs have the potential to scale observational research, but the complexity of audit log measures necessitates greater methodological transparency and validated standards.
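To make the duration and gap-handling measures named above concrete, the sketch below illustrates one way activity durations might be derived from raw audit log actions. It is an illustrative assumption, not the method of any reviewed study: the field layout, action names, and the 5-minute inactivity threshold for splitting actions into sessions are all hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical audit log rows (timestamp, action). The field layout, action
# names, and 5-minute threshold are illustrative assumptions, not drawn from
# any specific study in the review.
raw_log = [
    ("2019-07-01 08:00:12", "CHART_OPEN"),
    ("2019-07-01 08:00:40", "PROBLEM_LIST_VIEW"),
    ("2019-07-01 08:02:05", "NOTE_VIEW"),
    ("2019-07-01 08:31:10", "ORDER_ENTRY"),
    ("2019-07-01 08:32:30", "NOTE_EDIT"),
]

GAP = timedelta(minutes=5)  # gaps longer than this split activity into sessions


def session_durations(rows, gap=GAP):
    """Split one user's actions into sessions at inactivity gaps; return durations."""
    times = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t, _ in rows)
    durations, start, last = [], times[0], times[0]
    for t in times[1:]:
        if t - last > gap:            # a long gap ends the current session
            durations.append(last - start)
            start = t
        last = t
    durations.append(last - start)    # close the final session
    return durations


if __name__ == "__main__":
    for i, d in enumerate(session_durations(raw_log), 1):
        print(f"session {i}: {d}")
```

Different threshold choices (or per-action time allowances) change the resulting durations, which is one reason the review emphasizes reporting how gaps and repeated actions were interpreted.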