9:30 Dominik Peters: Robust Fair Division
Joint work with Ariel D. Procaccia and David Zhu.
Abstract: In fair rent division, the problem is to assign rooms to roommates and fairly split the rent based on roommates’ reported valuations for the rooms. Envy-free rent division is the most popular application on the fair division website Spliddit. The standard model assumes that agents can correctly report their valuations for each room. In practice, agents may be unsure about their valuations, for example because they have had only limited time to inspect the rooms. Our goal is to find a robust rent division that remains fair even if agent valuations are slightly different from the reported ones. We introduce the lexislack solution, which selects a rent division that remains envy-free for valuations within as large a radius as possible of the reported valuations. We also consider robustness notions for valuations that come from a probability distribution, and use results from learning theory to show how we can find rent divisions that (almost) maximize the probability of being envy-free, or that minimize the expected envy. We show that an almost optimal allocation can be identified based on polynomially many samples from the valuation distribution. Finding the best allocation given these samples is NP-hard, but in practice such an allocation can be found using integer linear programming.
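As a concrete illustration of the quantities involved (not the authors' implementation), the sketch below computes each agent's envy slack, the margin by which they prefer their own room at the given prices; the lexislack solution pushes these slacks to be lexicographically large, since larger slacks tolerate larger perturbations of the reported valuations. The function names and the brute-force search are hypothetical and only suitable for small instances.

```python
from itertools import permutations

def envy_slacks(valuations, assignment, prices):
    """For each agent i, return the minimum of
    u_i(own room) - u_i(other room) over all other rooms,
    where u_i(r) = valuations[i][r] - prices[r].
    The rent division is envy-free iff every slack is >= 0."""
    n = len(valuations)
    slacks = []
    for i in range(n):
        own = valuations[i][assignment[i]] - prices[assignment[i]]
        others = [valuations[i][r] - prices[r]
                  for r in range(n) if r != assignment[i]]
        slacks.append(min(own - u for u in others))
    return slacks

def best_assignment(valuations, prices):
    """Brute-force search (fine for small n) for the room
    assignment whose worst envy slack is largest."""
    n = len(valuations)
    return max(permutations(range(n)),
               key=lambda a: min(envy_slacks(valuations, a, prices)))
```

For example, with valuations [[60, 40], [50, 50]] and prices [55, 45] (total rent 100), assigning room 0 to agent 0 and room 1 to agent 1 gives both agents a slack of 10, so the division stays envy-free under any valuation perturbation smaller than that margin.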
10:00 Hugo Gilbert: Revisiting Fairness for Rent Division with Budgets
Joint work with Stéphane Airiau, Umberto Grandi, Jérôme Lang, and Anaëlle Wilczynski.
Abstract: Rent division consists in simultaneously computing an assignment of rooms to agents and a payment, based on each agent’s individual valuation of each room. When agents must respect budget constraints, an envy-free solution does not necessarily exist. In this talk, we propose two ways around this problem. First, we relax the envy-freeness criterion to account for budget disparities. Second, we allow fractional allocations, in which agents may switch rooms during the rental period.
10:45 Nawal Benabbou: Diversity constraints and group-fairness in the house allocation problem
Joint work with Aurélie Beynier, Mithun Chakraborty, Edith Elkind, Nathanaël Gross–Humbert, Nicolas Maudet and Yair Zick.
Abstract: We address important extensions to the house allocation problem: The agents are partitioned into disjoint groups (based on ethnicities or socio-economic status) and we want the allocation to respect some notion of diversity and/or fairness with respect to these groups. We study two specific incarnations of this general problem. First, we address a constrained optimization problem, inspired by diversity quotas in some real-world allocation problems, where the items are also partitioned into blocks and there is an upper bound on the number of items from each block that can be assigned to agents in each group. We theoretically analyze the price of diversity and the price of stability, and report experiments based on real-world data sets. Next, instead of imposing hard constraints, we cast the problem as a variant of fair allocation of indivisible goods – we treat each group of agents as a single entity receiving a bundle of items whose valuation is the maximum total utility of matching agents in that group to items in that bundle; we present algorithms that achieve a standard relaxation of envy-freeness in conjunction with specific efficiency criteria, and we propose a new fairness notion better suited to house allocation problems with disjoint groups.
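To make the hard-constraint variant concrete, here is a minimal sketch (hypothetical names, not the authors' code) of the feasibility check it implies: given a partition of agents into groups and of items into blocks, an allocation respects the diversity quotas if no group receives more items from any block than its cap allows.

```python
from collections import Counter

def respects_quotas(group_of, block_of, quota, allocation):
    """Check that an allocation (agent -> item) assigns at most
    quota[(g, b)] items from block b to agents in group g.
    Pairs absent from `quota` are treated as capped at 0."""
    counts = Counter((group_of[agent], block_of[item])
                     for agent, item in allocation.items())
    return all(counts[gb] <= quota.get(gb, 0) for gb in counts)
```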
11:15 Theo Delemazure: Measuring a Priori Voting Power – Taking Delegations Seriously
Joint work with Rachael Colley and Hugo Gilbert.
Abstract: We introduce new power indices to measure the a priori voting power of voters in liquid democracy elections where an underlying network restricts delegations. We argue that our power indices are natural extensions of the standard Penrose-Banzhaf index in simple voting games.
We show that computing the criticality of a voter is #P-hard even when voting weights are polynomially bounded in the size of the instance. However, for specific settings, such as when the underlying network is a bipartite or complete graph, recursive formulas can compute these indices for weighted voting games in pseudo-polynomial time. We highlight their theoretical properties and provide numerical results to illustrate how restricting the possible delegations can alter voters’ voting power.
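For reference, the standard Penrose-Banzhaf index that the talk's indices extend can be computed by brute force in a weighted voting game: a voter's index is the fraction of coalitions of the other voters for which that voter is critical. This sketch (exponential in n, so only for small games) does not model delegations or the underlying network.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Penrose-Banzhaf index of each voter in a weighted voting
    game: the fraction of coalitions of the remaining voters that
    voter i turns from losing to winning by joining."""
    n = len(weights)
    indices = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        swings = 0
        for k in range(n):
            for coalition in combinations(others, k):
                w = sum(weights[j] for j in coalition)
                if w < quota <= w + weights[i]:
                    swings += 1
        indices.append(swings / 2 ** (n - 1))
    return indices
```

For instance, with weights (4, 2, 1) and quota 4 the first voter is a dictator (index 1) and the others are dummies (index 0).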
11:50 Michel Grabisch: Minimal balanced collections: generation, applications and generalization
Joint work with Dylan Laplace Mermoud and Peter Sudhölter.
Abstract: Minimal balanced collections are a generalization of partitions of a finite set of n elements and have important applications in cooperative game theory and discrete mathematics. However, their number is not known beyond n = 4. In this paper we investigate the problem of generating minimal balanced collections and implement the Peleg algorithm, which generates all minimal balanced collections up to n = 7. Secondly, we provide practical algorithms, based on minimal balanced collections, to check many properties of coalitions and games faster than linear-programming-based methods. In particular, we construct an algorithm to check whether the core of a cooperative game is a stable set in the sense of von Neumann and Morgenstern. The algorithm implements a theorem according to which the core is a stable set if and only if a certain nested balancedness condition holds. The second level of this condition requires generalizing the notion of balanced collection to balanced sets.
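To fix the central definition, here is a minimal sketch (hypothetical helper, not from the talk): a collection of coalitions over players {1, ..., n} is balanced when it admits positive weights such that, for every player, the weights of the coalitions containing that player sum to 1. Partitions are the special case where all weights equal 1.

```python
from math import isclose

def is_balanced(n, collection, weights):
    """Check that the given positive weights witness balancedness
    of `collection` (a list of sets of players in {1, ..., n}):
    for every player i, the weights of coalitions containing i
    must sum to 1."""
    if any(w <= 0 for w in weights):
        return False
    return all(
        isclose(sum(w for S, w in zip(collection, weights) if i in S), 1.0)
        for i in range(1, n + 1))
```

The classic non-partition example for n = 3 is the collection {1,2}, {1,3}, {2,3} with weight 1/2 on each coalition.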
14:00 Margot Herin: Learning Preference Models with Sparse Interactions of Criteria
Joint work with Patrice Perny and Nataliya Sokolovska.
Abstract: Multicriteria decision making requires aggregating conflicting and possibly interacting criteria. Allowing criteria interactions in a decision model increases the complexity of the preference learning task due to the combinatorial nature of the possible interactions. In this paper, we propose an approach to learn a decision model in which the interaction pattern is revealed from preference data and kept as simple as possible. We consider weighted aggregation functions like multilinear utilities or Choquet integrals, admitting representations that include non-linear terms measuring the joint benefit or penalty attached to some combinations of criteria. The weighting coefficients, known as Möbius masses, model positive or negative synergies among criteria. We propose an approach to learn the Möbius masses based on iteratively reweighted least squares for sparse recovery, with dualization to improve scalability. This approach is applied to learn sparse representations of the multilinear utility model and conjunctive/disjunctive forms of the discrete Choquet integral from preference examples, in aggregation problems possibly involving more than 20 criteria.
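As background for the Möbius representation (a standard identity, not the learning method of the talk), the discrete Choquet integral of a criteria vector x can be written as a sum over coalitions A of the Möbius mass m(A) times the minimum of x over A; a mass on a non-singleton coalition encodes a synergy. A minimal sketch, with a hypothetical dictionary encoding:

```python
def choquet_from_mobius(mobius, x):
    """Discrete Choquet integral via Möbius masses:
    C(x) = sum over coalitions A of m(A) * min_{i in A} x_i.
    `mobius` maps frozensets of criterion indices to masses."""
    return sum(m * min(x[i] for i in A) for A, m in mobius.items() if A)
```

With masses 0.3 on {0}, 0.3 on {1} and 0.4 on {0, 1} (a positive synergy: full credit only when both criteria are satisfied together), the vector (0.5, 1.0) scores 0.3·0.5 + 0.3·1.0 + 0.4·0.5 = 0.65.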
14:30 Frederic Koriche: Approximation Algorithms for Probabilistic Explanations in Classification
Abstract: Clarifying the predictions made by classifiers in accurate and intelligible terms is a crucial challenge of eXplainable Artificial Intelligence (XAI). Of the various formal explanations proposed in the literature for tackling this challenge, “probabilistic explanations” offer a natural trade-off between accuracy and conciseness. Conceptually, a probabilistic explanation is a set of attributes that together determine, with high probability, the prediction made by a classifier on a given instance. Unfortunately, the problem of finding a maximally probable explanation with few attributes is computationally very expensive (NP^PP-complete). During this talk, we shall demonstrate that this problem is approximable for various families of classifiers, by exploiting the “stratified supermodular” property of probabilistic explanations. We shall present two simple greedy algorithms for inferring explanations with approximation guarantees, together with a comparative experimental analysis with respect to exact, constraint-based solvers.
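To illustrate the flavor of a greedy approach (a toy sketch over binary attributes with uniform completions, not one of the talk's algorithms with guarantees), one can repeatedly add the attribute that most increases the probability that fixing the selected attributes preserves the prediction, stopping at a target threshold. The exhaustive enumeration of completions below is only tractable for tiny instances.

```python
from itertools import product

def precision(f, x, S):
    """Probability, under uniform completions of the attributes
    outside S, that the classifier agrees with f(x) when the
    attributes in S are fixed to their values in x."""
    free = [i for i in range(len(x)) if i not in S]
    total = agree = 0
    for bits in product([0, 1], repeat=len(free)):
        z = list(x)
        for i, b in zip(free, bits):
            z[i] = b
        total += 1
        agree += (f(z) == f(x))
    return agree / total

def greedy_explanation(f, x, threshold):
    """Greedily grow an attribute set until the explanation
    reaches the required probability."""
    S = set()
    while precision(f, x, S) < threshold:
        S.add(max((i for i in range(len(x)) if i not in S),
                  key=lambda i: precision(f, x, S | {i})))
    return S
```

For the classifier f(z) = z0 AND z1 and the instance (1, 1, 0), the greedy procedure selects {0, 1}, which determines the prediction with probability 1.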