Hoeffding's inequality for general Markov chains and its applications to statistical learning.
ABSTRACT: This paper establishes Hoeffding's lemma and inequality for bounded functions of general-state-space and not necessarily reversible Markov chains. The sharpness of these results is characterized by the optimality of the ratio between the variance proxies in the Markov-dependent and independent settings. The boundedness of the functions is shown to be necessary for such results to hold in general. To showcase the usefulness of the new results, we apply them to non-asymptotic analyses of MCMC estimation, respondent-driven sampling, and high-dimensional covariance matrix estimation for time series data with a Markovian nature. Beyond statistical problems, we also apply them to study time-discounted rewards in econometric models and the multi-armed bandit problem with Markovian rewards arising from the field of machine learning.
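The abstract does not reproduce the inequality itself. As background, the classical Hoeffding bound for i.i.d. bounded random variables, which the paper generalizes to Markov-dependent samples with a modified variance proxy, can be checked numerically. The sketch below (illustrative only; the function names and parameter choices are assumptions, not from the paper) compares the two-sided Hoeffding tail bound against a Monte Carlo estimate of the true tail probability for uniform draws:

```python
import math
import random

def hoeffding_bound(n, eps, a=0.0, b=1.0):
    """Classical two-sided Hoeffding tail bound for the mean of n i.i.d.
    random variables bounded in [a, b]:
    P(|mean - E[mean]| >= eps) <= 2 * exp(-2 * n * eps^2 / (b - a)^2)."""
    return 2.0 * math.exp(-2.0 * n * eps ** 2 / (b - a) ** 2)

def empirical_tail(n, eps, trials=20000, seed=0):
    """Monte Carlo estimate of P(|sample mean - 0.5| >= eps) for
    Uniform(0, 1) draws (mean 0.5, bounded in [0, 1])."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        if abs(mean - 0.5) >= eps:
            hits += 1
    return hits / trials

n, eps = 50, 0.15
bound = hoeffding_bound(n, eps)   # valid upper bound on the tail probability
freq = empirical_tail(n, eps)     # observed tail frequency, always <= bound
```

For Markov chains, the paper's contribution is (per the abstract) that an analogous exponential bound holds with a variance proxy inflated relative to the independent case, with the ratio between the two proxies shown to be optimal.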
SUBMITTER: Fan J
PROVIDER: S-EPMC8457514 | biostudies-literature
REPOSITORIES: biostudies-literature