On the Interventional Kullback-Leibler Divergence

Abstract

Modern machine learning approaches excel in static settings where a large amount of i.i.d. training data is available for a given task. In a dynamic environment, though, an intelligent agent needs to be able to transfer knowledge and re-use learned components across domains. It has been argued that causal models may enable this, by mirroring the modularity of the real world in terms of independent causal mechanisms. However, the true causal structure underlying a given set of data is generally not identifiable, so it is desirable to have means of quantifying differences between models (e.g., between the ground truth and an estimate), on both the observational and the interventional level. In the present work, we introduce the Interventional Kullback-Leibler (IKL) divergence to quantify both structural and distributional differences between models based on a finite set of multi-environment distributions generated by interventions from the ground truth. Since we generally cannot quantify all differences between causal models for every finite set of interventional distributions, we propose a sufficient condition on the intervention targets to identify subsets of observed variables on which the models provably agree or disagree.
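For orientation, one plausible form of such a divergence (an illustrative assumption, not necessarily the paper's exact definition) aggregates per-intervention KL terms over a finite intervention set $\mathcal{I}$:

$$
D_{\mathrm{IKL}}\left(P \,\middle\|\, Q;\, \mathcal{I}\right) \;=\; \frac{1}{|\mathcal{I}|} \sum_{I \in \mathcal{I}} D_{\mathrm{KL}}\!\left(P^{\mathrm{do}(I)} \,\middle\|\, Q^{\mathrm{do}(I)}\right),
$$

where $P^{\mathrm{do}(I)}$ and $Q^{\mathrm{do}(I)}$ denote the interventional distributions induced by intervention $I$ under the two models, so the quantity vanishes exactly when the models agree on every distribution in the given intervention set.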

Publication
In 2nd Conference on Causal Learning and Reasoning (CLeaR), 2023
Siyuan Guo