We consider non-parametric causal inference. That is, given two variables that we know to be correlated, Ergo can efficiently and reliably infer their causal direction – without having to assume a distribution. More information here.

Causal Inference by Direction of Information. In: Proceedings of the SIAM International Conference on Data Mining (SDM), pp 909-917, SIAM, 2015. |

We propose the Origo algorithm for identifying the most likely direction of causation between two univariate or multivariate discrete nominal or binary variables. More information here.

Causal Inference by Compression. In: Proceedings of the IEEE International Conference on Data Mining (ICDM'16), IEEE, 2016. |

Suppose we are given a set of databases, such as sales records over different branches. How can we characterise the differences and the norm between these datasets? What are the patterns that characterise the overall distribution, and what are those that are important for the individual datasets? That is exactly what the DiffNorm algorithm reveals. More information here.

The Difference and the Norm – Characterising Similarities and Differences between Databases. In: Proceedings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), pp 206-223, Springer, 2015. |

In subgroup discovery, discovering high-quality one-dimensional subgroups as well as high-quality refinements is a crucial task. For nominal attributes this is easy, but for numerical attributes it is more challenging. We propose to use optimal binning to find high-quality binary features for numeric and ordinal attributes. More information here.
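For a single numeric attribute, the flavour of the task can be illustrated with an exhaustive one-threshold search that maximises WRAcc, a standard subgroup quality measure. This is only a toy sketch; the optimal binning the paper proposes handles multiple bins and is considerably more general.

```python
def best_split(values, labels):
    """Pick the threshold t on a numeric attribute that maximises the
    weighted relative accuracy (WRAcc) of the subgroup `value <= t`.
    A one-bin toy version of optimal binning for subgroup discovery."""
    n = len(values)
    base_rate = sum(labels) / n
    pairs = sorted(zip(values, labels))
    best_t, best_q, pos = None, float('-inf'), 0
    for i, (v, y) in enumerate(pairs, start=1):
        pos += y
        if i < n and pairs[i][0] == v:   # only split between distinct values
            continue
        coverage = i / n                 # fraction of records in the subgroup
        wracc = coverage * (pos / i - base_rate)
        if wracc > best_q:
            best_q, best_t = wracc, v
    return best_t, best_q
```

On perfectly separable data, e.g. targets 1,1,1,0,0,0 over values 1..6, the search recovers the threshold 3 with the maximal WRAcc of 0.25.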

Flexibly Mining Better Subgroups. In: Proceedings of the SIAM International Conference on Data Mining (SDM), pp 585-593, SIAM, 2016. |

mtv is a well-founded approach for summarizing data with itemsets: using a probabilistic maximum entropy model, we iteratively find the itemset that provides the most new information, and update our model accordingly. We can either mine the top-k patterns, or identify the best summarisation by MDL or BIC. More information here.

Summarizing Data Succinctly with the Most Informative Itemsets. Transactions on Knowledge Discovery from Data vol.6(4), pp 1-44, ACM, 2012. |

Slim mines high-quality Krimp code tables directly from data, as opposed to filtering a candidate collection. By doing so, Slim obtains smaller code tables that provide better compression ratios, while also improving classification accuracy and runtime, and reducing memory complexity by orders of magnitude. More information here.

Slim: Directly Mining Descriptive Patterns. In: Proceedings of the SIAM International Conference on Data Mining (SDM), pp 236-247, SIAM, 2012. |

We aim at finding itemsets that characterise the data well. To this end, we construct decision trees by which we can pack the data succinctly, and from which we can subsequently identify the most important itemsets. The Pack algorithm can either filter a candidate collection or mine its models directly from the data. More information here.

Finding Good Itemsets by Packing Data. In: Proceedings of the IEEE International Conference on Data Mining (ICDM), pp 588-597, IEEE, 2008. |

The Krimp algorithm mines sets of itemsets by the MDL principle, defining the best set of patterns as the set that compresses the data best. The resulting *code tables* are orders of magnitude smaller than the number of (closed) frequent itemsets. They are highly characteristic for the data, and obtain high accuracy on many data mining tasks. More information here.
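The MDL scoring at the heart of this approach can be sketched in a few lines. The Python below is a toy illustration, not Krimp's actual cover algorithm: a code table maps itemsets to usage counts, a transaction is covered greedily by the longest applicable patterns, and each use of a pattern costs -log2 of its relative usage.

```python
import math

def encoded_length(database, code_table):
    """Total encoded size of the data, in bits, given a code table.

    code_table maps itemsets (frozensets) to usage counts; a pattern's
    code length is -log2(usage / total_usage), so frequently used
    patterns get short codes.  cover() greedily covers a transaction
    with the longest applicable patterns (a toy stand-in for Krimp's
    cover order)."""
    total_usage = sum(code_table.values())
    order = sorted(code_table, key=len, reverse=True)  # longest patterns first

    def cover(transaction):
        remaining, used = set(transaction), []
        for pattern in order:
            if pattern <= remaining:
                used.append(pattern)
                remaining -= pattern
        return used

    bits = 0.0
    for t in database:
        for pattern in cover(t):
            bits += -math.log2(code_table[pattern] / total_usage)
    return bits
```

Minimising this encoded length over candidate code tables is what drives the search for the best pattern set.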

Krimp: Mining Itemsets that Compress. Data Mining and Knowledge Discovery vol.23(1), pp 169-214, Springer, 2011. |

Self-sufficient itemsets are an effective way to summarise the key associations in data. However, their computation is demanding: assessing whether an itemset is self-sufficient requires considering all pairwise partitions of the itemset, as well as all of its supersets. We propose a branch-and-bound algorithm that employs two powerful pruning techniques to extract them efficiently. More information here.

Efficient Discovery of the Most Interesting Associations. Transactions on Knowledge Discovery from Data vol.8(3), pp 1-31, ACM, 2014. |

Measuring the difference between data mining results is an important open problem in exploratory data mining. We discuss an information theoretic approach for measuring how much information is shared between results, and give a proof of concept for binary data. More information here.

Comparing Apples and Oranges – Measuring Differences between Exploratory Data Mining Results. Data Mining and Knowledge Discovery vol.25(2), pp 173-207, Springer, 2012. |

We consider mining informative serial episodes — subsequences allowing for gaps — from event sequence data. We formalize the problem by the Minimum Description Length principle, and give algorithms for selecting good pattern sets from candidate collections as well as for parameter free mining of such models directly from data. More information here.

Efficiently Summarising Event Sequences with Rich Interleaving Patterns. In: Proceedings of the SIAM Conference on Data Mining (SDM), SIAM, 2017. |

The Long and the Short of It: Summarising Event Sequences with Serial Episodes. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp 462-470, ACM, 2012. |

We study how to obtain concise descriptions of discrete multivariate sequential data in terms of rich multivariate sequential patterns. We introduce Ditto, and show it discovers succinct pattern sets that capture highly interesting associations within and between sequences. More information here.

Keeping it Short and Simple: Summarising Complex Event Sequences with Multivariate Patterns. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'16), pp 735-744, ACM, 2016. |

Detecting whether important statistics of a time series have changed is a key aspect of time series analysis. With Light, we tackle the problem of efficiently and effectively detecting non-linear changes in very high dimensional time series. More information here.

Linear-time Detection of Non-Linear Changes in Massively High Dimensional Time Series. In: Proceedings of the SIAM International Conference on Data Mining (SDM), pp 828-836, SIAM, 2016. |

CompreX discovers anomalies in data using pattern-based compression. Informally, it finds a collection of dictionaries that describe the *norm* of a database succinctly, and subsequently flags points dissimilar to the norm – those with high compression cost – as anomalies. More information here.
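The principle can be mimicked with an off-the-shelf compressor. CompreX itself learns code tables over categorical features; the zlib-based sketch below only illustrates the core idea that records which fit the norm compress almost for free, while deviating records cost extra bits.

```python
import zlib

def anomaly_score(norm: bytes, record: bytes) -> int:
    """Extra compressed bytes a record costs on top of the 'norm'.
    A record that resembles the norm is cheap to encode (the compressor
    reuses what it already saw); a dissimilar record costs more, and is
    flagged as anomalous.  Toy stand-in for CompreX's learned code tables."""
    base = len(zlib.compress(norm, 9))
    return len(zlib.compress(norm + record, 9)) - base
```

For a database of repetitive shopping-basket records, a typical basket scores lower than an unusual one, which is exactly the ranking an anomaly detector needs.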

Fast and Reliable Anomaly Detection in Categoric Data. In: Proceedings of ACM Conference on Information and Knowledge Management (CIKM), pp 415-424, ACM, 2012. |

Anomalies are often characterised as the absence of patterns. We observe that the *co-occurrence* of patterns can also be anomalous – many people prefer Coca Cola, while others prefer Pepsi Cola, and hence anyone who buys *both* stands out. We formally introduce this new class of anomalies, and propose UpC, an efficient algorithm to discover them in transaction data. More information here.
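A minimal way to quantify this intuition is to compare the expected joint frequency of two patterns under independence against their observed joint frequency. The score below is a toy illustration, not UpC's actual formulation.

```python
import math

def cooccurrence_surprise(support_a, support_b, support_ab, n):
    """Bits of surprise at seeing patterns A and B together: the log-ratio
    of the joint frequency expected under independence to the observed
    one.  High values mean the combination is much rarer than chance
    predicts, so transactions containing both A and B stand out."""
    expected = (support_a / n) * (support_b / n)
    observed = support_ab / n
    return math.log2(expected / observed)
```

With Coca Cola in 50% and Pepsi in 45% of baskets, independence predicts 22.5% joint support; if only 1% of baskets contain both, a basket with both is about 4.5 bits more surprising than expected.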

Efficiently Discovering Unexpected Pattern-Co-Occurrences. In: Proceedings of the SIAM International Conference on Data Mining (SDM), SIAM, 2017. |

Stijl mines descriptions of ordered binary data. We model the data hierarchically with noisy tiles: rectangles whose density differs significantly from that of their parent tile. To identify good trees, we employ the Minimum Description Length principle, and give an algorithm for mining *optimal* sub-tiles in just O(*nm* min(*n,m*)) time. More information here.

Discovering Descriptive Tile Trees by Fast Mining of Optimal Geometric Subtiles. In: Proceedings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), pp 9-24, Springer, 2012. |

Boolean Matrix Factorization has many desirable properties, such as high interpretability and natural sparsity. However, no method for selecting the correct model order has been available. We propose to use the Minimum Description Length principle, and show that besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, and, as experiments show, is highly accurate. More information here.
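The "model cost plus error cost" trade-off behind MDL-based model-order selection can be sketched as follows. The encodings here are deliberately naive (one bit per factor cell, one cell index per error); the paper's actual encodings are considerably more refined.

```python
import numpy as np

def bmf_description_length(D, B, C):
    """Toy two-part MDL score for a Boolean factorization D ~ B o C:
    bits to transmit the factor matrices, plus bits to point out every
    cell where the Boolean product disagrees with D.  Comparing this
    score across different ranks k selects the model order."""
    n, m = D.shape
    k = B.shape[1]
    product = (B.astype(int) @ C.astype(int)) > 0   # Boolean matrix product
    errors = int(np.sum(product != D.astype(bool)))
    model_bits = k * (n + m)                        # one bit per factor cell
    error_bits = errors * np.log2(n * m) if errors else 0.0
    return model_bits + error_bits
```

An exact rank-1 factorization of a 4x4 rank-1 matrix costs only the 8 factor bits; adding superfluous factors or tolerating errors both inflate the score.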

mdl4bmf: Minimal Description Length for Boolean Matrix Factorization. Transactions on Knowledge Discovery from Data vol.8(4), pp 1-30, ACM, 2014. |

Visualizing large graphs often results in excessive edge crossings and overlapping nodes. We propose Facets, a new scalable approach for adaptively exploring large million-node graphs from a *local* perspective, guiding users to focus on the nodes and neighborhoods that are subjectively most interesting. More information here.

AdaptiveNav: Adaptive Discovery of Interesting and Surprising Nodes in Large Graphs. In: Proceedings of the IEEE Conference on Visualization (VIS), IEEE, 2015. |

With CulT, we propose a method to reconstruct an epidemic over time or, more generally, the propagation of an activity through a network. More information here.

Reconstructing an Epidemic over Time. In: Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pp 1835-1844, ACM, 2016. |

VoG: Summarizing and Understanding Large Graphs. In: Proceedings of the SIAM International Conference on Data Mining (SDM), pp 91-99, SIAM, 2014. |

Suppose we are given a large graph in which, by some external process, a handful of nodes are marked. What can we say about these nodes? Are they close together in the graph? Or, if segregated, how many groups do they form? We approach this problem by trying to find *simple connection pathways* between sets of marked nodes, using MDL to identify the optimal result. We propose the efficient dot2dot algorithm for approximating this goal. More information here.

Mining Connection Pathways for Marked Nodes in Large Graphs. In: Proceedings of the SIAM International Conference on Data Mining (SDM), pp 37-45, SIAM, 2013. |

Given a snapshot of a large graph, in which an infection has been spreading for some time, can we identify those nodes from which the infection started to spread? In other words, can we reliably tell who the culprits are? With NetSleuth, we answer this question affirmatively for the Susceptible-Infected virus propagation model. More information here.
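The forward direction of this setting, i.e. how a snapshot arises in the first place, is easy to simulate. The sketch below implements a basic Susceptible-Infected process; NetSleuth solves the much harder inverse problem of recovering the seed nodes from one such snapshot.

```python
import random

def simulate_si(adjacency, seeds, beta, steps, rng=random.Random(0)):
    """Susceptible-Infected propagation: in every step, each infected
    node infects each susceptible neighbour independently with
    probability beta.  Returns the set of infected nodes after the
    given number of steps (the kind of snapshot NetSleuth starts from)."""
    infected = set(seeds)
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adjacency.get(u, ()):
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
    return infected
```

With beta = 1 on a path graph the process is deterministic and the infection front advances one hop per step, which makes the dynamics easy to verify by hand.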

Spotting Culprits in Epidemics: How many and Which ones?. In: Proceedings of the IEEE International Conference on Data Mining (ICDM), pp 11-20, IEEE, 2012. |

We provide a general theory of correlation that does not make any distributional assumptions. Simply put, we propose correlation by compression. To this end, we propose two correlation measures based on solid information theoretic foundations, i.e. Kolmogorov complexity, and show how to instantiate these for discrete data. More information here.
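Kolmogorov complexity itself is uncomputable, but it is commonly approximated with a real compressor. The classic Normalized Compression Distance below conveys the flavour of compression-based correlation; it is a generic sketch, not the measures proposed in the paper.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, approximating the (uncomputable)
    Kolmogorov-complexity based distance with zlib.  Values near 0 mean
    the inputs share most of their structure; values near 1 mean one
    input tells us almost nothing about the other."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Comparing a string against itself yields a score close to 0, while comparing it against unrelated data pushes the score towards 1.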

Correlation by Compression. In: Proceedings of the SIAM Conference on Data Mining (SDM), SIAM, 2017. |

Discovering whether a *subspace* is correlated is a core task in data mining. For practical use, such a measure should be *universal*, in the sense that it captures correlation in subspaces of any dimensionality and allows us to meaningfully compare correlation scores across different subspaces. To this end we propose UdS, an efficient non-parametric multivariate non-linear dependency measure. More information here.

Universal Dependency Analysis. In: Proceedings of the SIAM International Conference on Data Mining (SDM), pp 792-800, SIAM, 2016. |

Correlation analysis is one of the key elements of data mining, but many methods are limited to either linear or pairwise correlations.
We propose mac, a non-parametric, *multivariate* and non-linear correlation measure based on maximal correlation analysis. More information here.

Multivariate Maximal Correlation Analysis. In: Proceedings of the International Conference on Machine Learning (ICML), pp 775-783, JMLR: W&CP vol.32, 2014. |

How can we quantify the difference between two datasets, without having to assume a distribution? We formalise the well-known Jensen-Shannon divergence using cumulative distribution functions. cjs allows us to calculate divergences directly and efficiently from data, without the need for estimation. More information here.
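The CDF-based idea can be illustrated numerically: apply the Jensen-Shannon formula to empirical CDFs evaluated on a shared grid. This is a simplified sketch for intuition, not the paper's exact cjs estimator.

```python
import numpy as np

def cum_js(x, y, grid_points=200):
    """Cumulative Jensen-Shannon-style divergence between two 1-d samples:
    the JS formula applied to empirical CDFs on a shared grid, so no
    density estimation is needed.  Returns 0 for identical samples and
    grows with the separation between them."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    grid = np.linspace(min(x[0], y[0]), max(x[-1], y[-1]), grid_points)
    P = np.searchsorted(x, grid, side='right') / len(x)   # empirical CDF of x
    Q = np.searchsorted(y, grid, side='right') / len(y)   # empirical CDF of y
    M = 0.5 * (P + Q)
    # guard zeros so 0 * log(0) evaluates to 0 without warnings
    safeP = np.where(P > 0, P, 1.0)
    safeQ = np.where(Q > 0, Q, 1.0)
    safeM = np.where(M > 0, M, 1.0)
    kl_pm = P * np.log2(safeP / safeM)
    kl_qm = Q * np.log2(safeQ / safeM)
    dx = grid[1] - grid[0]
    return float(np.sum(kl_pm + kl_qm) * dx)
```

Because the measure works directly on the empirical CDFs, identical samples score exactly zero, while shifted samples score strictly positive.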

Non-Parametric Jensen-Shannon Divergence. In: Proceedings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), pp 173-189, Springer, 2015. |

Discretization is the transformation of continuous data into discrete bins. The key question is how to discretize data such that multivariate associations and patterns are preserved. That is exactly what ipd does. More information here.
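As a point of reference, the standard univariate baseline is equal-frequency binning via quantiles. ipd goes much further, choosing cut points that preserve multivariate interactions; the sketch below is only the starting point it improves upon.

```python
import numpy as np

def equal_frequency_bins(column, k):
    """Baseline discretization: assign each value of a numeric column to
    one of k equal-frequency bins, with cut points at the column's
    quantiles.  Purely univariate, so multivariate associations between
    columns are ignored (which is exactly the problem ipd addresses)."""
    quantiles = np.quantile(column, np.linspace(0, 1, k + 1)[1:-1])
    return np.searchsorted(quantiles, column, side='right')
```

On a uniformly spread column the bins come out perfectly balanced, e.g. four bins of 25 values each for the integers 0 to 99.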

Unsupervised Interaction-Preserving Discretization of Multivariate Data. Data Mining and Knowledge Discovery vol.28(5), pp 1366-1397, Springer, 2014. |

A good first impression of a dataset is paramount to how we proceed with our analysis. We discuss mining high-quality high-level descriptive summaries for binary and categorical data. Our approach builds summaries by clustering attributes that strongly correlate, and uses the Minimum Description Length principle to identify the best clustering. More information here.
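The kind of correlation signal used to decide which attributes belong together can be computed directly from the data, for instance as empirical mutual information between two columns. This is only the pairwise building block; selecting the clustering itself is what the paper does by MDL.

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Empirical mutual information, in bits, between two discrete
    columns of equal length: 0 for independent columns, and up to the
    columns' entropy when one determines the other.  Strongly dependent
    attribute pairs are natural candidates to cluster together."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())
```

A column compared with itself yields its full entropy (1 bit for a balanced binary column), while two independent columns yield 0.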

Summarizing Categorical Data by Clustering Attributes. Data Mining and Knowledge Discovery vol.26(1), pp 130-173, Springer, 2013. |