Explainable Data Decompositions

Abstract.
Our goal is to discover the components of a dataset, characterize *why* we deem these parts components, explain *how* these components differ from each other, and identify which properties they *share*. As is usual, we consider regions in the data to be components if they show significantly different distributions. What is not usual, however, is that we parameterize these distributions with patterns that are informative for one or more components. We do so because these patterns allow us to characterize what is going on in our data as well as to explain our decomposition.

We define the problem in terms of a regularized maximum likelihood, in which we use the Maximum Entropy principle to model each data component with a set of patterns. As the search space is large and unstructured, we propose the deterministic Disc algorithm to efficiently discover high-quality decompositions via an alternating optimization approach. Empirical evaluation on synthetic and real-world data shows that Disc efficiently discovers meaningful components and accurately characterizes these in easily understandable terms.
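To make the alternating optimization concrete, the following is a minimal sketch of the general scheme, not the actual Disc algorithm: where Disc models each component with a Maximum Entropy distribution over discovered patterns, this sketch simplifies each component to independent per-item Bernoulli frequencies over binary data. All function names are illustrative.

```python
# Illustrative sketch of alternating optimization for data decomposition.
# NOTE: this is NOT Disc itself -- the per-component model here is a
# simple independence model, not a pattern-based MaxEnt distribution.
import math
import random

def fit_component(rows):
    """Laplace-smoothed per-item frequencies for one component."""
    n, d = len(rows), len(rows[0])
    return [(sum(r[j] for r in rows) + 1) / (n + 2) for j in range(d)]

def log_likelihood(row, theta):
    """Log-likelihood of a binary row under the independence model."""
    return sum(math.log(t) if x else math.log(1 - t)
               for x, t in zip(row, theta))

def decompose(data, k, iters=20, seed=0):
    """Alternate between (1) refitting each component's distribution
    and (2) reassigning every row to its best-fitting component."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in data]
    for _ in range(iters):
        models = []
        for c in range(k):
            rows = [r for r, l in zip(data, labels) if l == c]
            models.append(fit_component(rows) if rows else None)
        new = [max((log_likelihood(r, m), c)
                   for c, m in enumerate(models) if m is not None)[1]
               for r in data]
        if new == labels:  # converged: assignments are stable
            break
        labels = new
    return labels
```

After the first assignment step the labeling is a deterministic function of each row, so identical rows always end up in the same component; the real objective additionally regularizes model complexity so that the number and description of components stays interpretable.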

The C++, Python and R source code by Sebastian Dalleiger.

The datasets used in the paper.

Explainable Data Decompositions. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI'20), AAAI, 2020. (oral presentation, 4.5% acceptance rate; overall 20.6%)