Domain Adaptation with Conditional Transferable Components.
ABSTRACT: Domain adaptation arises in supervised learning when the training (source domain) and test (target domain) data have different distributions. Let X and Y denote the features and target, respectively. Previous work on domain adaptation mainly considers the covariate shift situation, where the distribution of the features P(X) changes across domains while the conditional distribution P(Y|X) stays the same. To reduce domain discrepancy, recent methods try to find invariant components T(X) that have similar P(T(X)) on different domains by explicitly minimizing a distribution discrepancy measure. However, it is not clear if P(Y|T(X)) in different domains is also similar when P(Y|X) changes. Furthermore, transferable components do not necessarily have to be invariant. If the change in some components is identifiable, we can make use of such components for prediction in the target domain. In this paper, we focus on the case where P(X|Y) and P(Y) both change in a causal system in which Y is the cause for X. Under appropriate assumptions, we aim to extract conditional transferable components whose conditional distribution P(T(X)|Y) is invariant after proper location-scale (LS) transformations, and to identify how P(Y) changes between domains simultaneously. We provide theoretical analysis and empirical evaluation on both synthetic and real-world data to show the effectiveness of our method.
SUBMITTER: Gong M
PROVIDER: S-EPMC5321138 | biostudies-literature | 2016 Jun
REPOSITORIES: biostudies-literature