Abstract. Causal inference from observational data is one of the most fundamental problems in science. In general, the task is to tell whether it is more likely that \(X\) caused \(Y\), or vice versa, given only data over their joint distribution. In this paper we propose a general inference framework based on Kolmogorov complexity, as well as a practical and computable instantiation based on the Minimum Description Length (MDL) principle.
Simply put, we propose causal inference by compression. That is, we infer that \(X\) is a likely cause of \(Y\) if we can better compress the data by first encoding \(X\), and then encoding \(Y\) given \(X\), than in the other direction. To show this works in practice, we propose Origo, an efficient method for inferring the causal direction from binary data. Origo employs the lossless Pack compressor (Tatti & Vreeken, 2008) and searches for the set of decision trees that encodes the data most succinctly. Importantly, it works directly on the data and requires assumptions about neither the distributions nor the type of causal relation.
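The core decision rule above can be illustrated with a toy sketch. This is not Origo or the Pack compressor; it merely uses zlib-compressed size as a crude, hypothetical proxy for description length, and approximates the conditional length \(L(Y \mid X)\) as \(L(XY) - L(X)\):

```python
import zlib

def clen(b: bytes) -> int:
    # crude proxy for description length: zlib-compressed size in bytes
    return len(zlib.compress(b, 9))

def cond_len(target: bytes, given: bytes) -> int:
    # approximate L(target | given) as L(given + target) - L(given)
    return clen(given + target) - clen(given)

def infer_direction(x: bytes, y: bytes) -> str:
    # total description length in each causal direction
    x_to_y = clen(x) + cond_len(y, x)  # encode X, then Y given X
    y_to_x = clen(y) + cond_len(x, y)  # encode Y, then X given Y
    if x_to_y < y_to_x:
        return "X -> Y"
    if y_to_x < x_to_y:
        return "Y -> X"
    return "undecided"
```

By the symmetry of the rule, identical inputs yield no preferred direction; on real data, Origo replaces the generic compressor with an MDL score over decision-tree models.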
To evaluate Origo in practice, we provide extensive experiments on synthetic, benchmark, and real-world data, including three case studies. Altogether, the experiments show that Origo reliably infers the correct causal direction across a wide range of settings.