Envy-free divisions of multi-layered cakes

Dividing a multi-layered cake under non-overlapping constraints captures several scenarios, e.g., allocating multiple facilities over time where each agent can use at most one facility at a time.

We establish the existence of an envy-free multi-division that is non-overlapping and contiguous within each layer when the number of agents is a prime power, partially solving an open question of Hosseini et al. (2020). Our approach follows an idea proposed by Jojić et al. (2020) for envy-free divisions, relying on a general fixed-point theorem. We further design a fully polynomial-time approximation scheme for the two-layer three-agent case with monotone preferences. All results are in fact established for divisions among groups of almost the same size. In the one-layer three-group case, our algorithm can handle any predetermined group sizes, still under monotone preferences. For three groups, this provides an algorithmic version of a recent theorem of Segal-Halevi and Suksompong (2021). Joint work with Ayumi Igarashi.

Dense random graphs: the example of cographs

Cographs are a natural class of graphs, interesting for their algorithmic and combinatorial properties. The aim of this talk is to describe what a large cograph drawn uniformly at random looks like. The asymptotic behavior is well described in the framework of graphons (the continuous analogue of dense graphs). The theory of graphons emerged in the 2000s, initiated notably by Lovász. It was later extended to dense random graphs (under the impetus of Diaconis and Janson), but there are very few examples where the limit is itself random. It turns out that cographs admit as a limit a random fractal "Brownian" graphon.

(Based on joint work with Frédérique Bassino, Mathilde Bouvel, Valentin Féray, Mickaël Maazoun, and Adeline Pierrot.)

On the Length of Strongly Monotone Descending Chains over ℕᵈ

I will present the main results of this joint work with Lia Schütze, available online on arXiv. Starting from Rackoff's 1978 upper bound for the coverability problem in vector addition systems, I will then cover the 2023 breakthrough by Künnemann, Mazowiecki, Schütze, Sinclair-Banks, and Wegrzycki improving this upper bound, and finally present the "ideal" viewpoint on these results and its consequences for various extensions of vector addition systems.

BIM and Digital Twin: Planning the Future of Digital Construction

First part: Registration of Heterogeneous Data for Urban Modeling

Indoor/outdoor modeling of buildings is an important issue in the field of building life cycle management. It is seen as a joint process in which the two aspects collaborate to take advantage of their semantic and geometric complementarity. This global approach allows a more complete, correct, precise, and coherent reconstruction of buildings. Addressing this hard problem requires combining:
• Terrestrial (indoor and outdoor) and aerial acquisitions, in order to capture the inside, the outside, and the roof of the building.
• LiDAR and images, in order to combine the geometric precision of LiDAR with the rich texture information from images.
The first issue in such modeling is thus to precisely register these heterogeneous data. The work carried out confirmed that the environment and the type of data drive the choice of registration algorithm. Since in a building environment most objects are composed of geometric primitives (planar polygons, straight lines, and openings), we chose to introduce registration algorithms based on these primitives. The basic idea of these algorithms is to define a global energy between the primitives extracted from the data sets to be registered, and to optimize this energy with a robust method based on the RANSAC paradigm. The proposed solutions overcome the limitations of existing algorithms and have proven effective on the challenging problems encountered in this kind of modeling, such as indoor (static mode)/outdoor (dynamic mode) registration, image/LiDAR registration, and aerial/terrestrial registration.

Second Part: From BIM To Digital Twin: Towards New Challenges In Construction
For this part of the presentation, we will focus on the BIM2TWIN project and our main contributions as partners of the project. BIM2TWIN aims to build a Digital Building Twin (DBT) platform that provides full situational awareness for construction management by implementing lean principles to reduce operational waste of all kinds, shorten schedules, reduce carbon footprint and costs, and enhance quality and safety. It supports a closed-loop Plan-Do-Check-Act mode of construction and an extensive set of construction management applications. Our work focuses on progress monitoring and quality control using volumetric building data. To this end, we first need to detect the constructed objects and then check whether they are constructed up to norms (quality control) and on time (progress monitoring). Detecting objects during the construction cycle is difficult due to several challenges: noise, missing data, and local shift. In this talk, we will introduce a potential solution for object detection that can deal with most of the challenges encountered on construction sites.

Bio
Rahima Djahel obtained a master's degree in automatic image and language processing from the University of Caen Normandy in 2016. During her master's training, she did two internships:
The first at the GREYC laboratory of ENSICAEN, where she worked on 3D point cloud processing and analysis.
The second at the MIPS laboratory of the University of Haute-Alsace, where she worked on image registration from several views of the same object in 3D microscopy.
After that, she worked as a study engineer on the 3D reconstruction of microscopic objects as part of the Interreg France-Switzerland CITHaDel project at the FEMTO-ST Institute. In January 2019 she started a PhD thesis at the Ecole des Ponts ParisTech, where she worked under the supervision of Prof. Pascal Monasse and Dr. Bruno Vallet on the registration of heterogeneous data for urban modeling. She defended her thesis in June 2022. During her PhD, she proposed potential solutions for challenging problems in building modeling: indoor/outdoor registration, image/LiDAR registration, and aerial/terrestrial registration. She received two awards: the ISPRS Best Young Author award (for her first published paper, 2021) and the ISPRS Best Poster award (for her second published paper, 2022).
Currently, she is a postdoctoral researcher at INRIA Université Côte d'Azur. She works as part of the European BIM2TWIN project (Optimal Construction Management & Production Control). Her work focuses on progress monitoring and quality control using volumetric building data. For this work she concentrates on object detection on construction sites, where she proposed, in collaboration with Kacper Pluta and Pierre Alliez, a potential solution that can deal with the most commonly encountered challenges on construction sites. This algorithm will be submitted soon.
She is interested in digital twins, robotics, and point cloud processing.

Algorithms for n-manifold homeomorphism

When is there an algorithm to decide whether two mathematical objects are equivalent? We will discuss this question for manifolds, which are central objects in mathematics and physics. We will focus on the combinatorial tools used to resolve the problem in low dimensions. Minimal mathematical knowledge will be assumed.

Algorithms for subgame-perfect equilibria in graph games

Games are a powerful tool to model reactive systems, especially systems that include several agents which can be assumed to behave rationally with regard to some objectives. Several concepts of rationality are then available; while Nash equilibrium is the most classical of them, subgame-perfect equilibrium (SPE) may be preferred in many situations. We therefore propose algorithmic tools to decide the existence of SPEs satisfying constraints on the payoffs, in perfect-information sequential games played on graphs, for various classes of objectives.

Embedding graphs into 2-dimensional simplicial complexes

The graph planarity testing problem asks whether we can draw an input graph in the plane in such a way that it is actually crossing-free (or embedded). In a seminal paper from 1974, Hopcroft and Tarjan gave a linear-time algorithm for this problem, initiating a long series of variations and generalizations. One of them is to make the target space more general, namely, to consider embeddings not into the plane, but into a more general surface. It is known that the problem of deciding the embeddability of a graph into a surface is NP-hard, but Mohar (1999) and Kawarabayashi, Mohar, and Reed (2008) gave algorithms that are linear in the size of the graph (and exponential in the genus of the surface).
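As a toy illustration of why some graphs admit no crossing-free drawing, here is a short Python sketch (function name is illustrative, not from the talk) of the classical edge-count bound derived from Euler's formula; it is a necessary condition only, and unrelated to the linear-time algorithms mentioned above:

```python
def euler_bound_ok(num_vertices, num_edges):
    """Necessary (not sufficient) condition for planarity derived from
    Euler's formula: a simple planar graph with v >= 3 vertices has at
    most 3v - 6 edges."""
    if num_vertices < 3:
        return True
    return num_edges <= 3 * num_vertices - 6

# K5 has 5 vertices and 10 edges; 10 > 3*5 - 6 = 9, so K5 is not planar.
assert not euler_bound_ok(5, 10)
# K4 has 4 vertices and 6 edges; it passes the bound (and is indeed planar).
assert euler_bound_ok(4, 6)
```

The bound already rules out K5; detecting non-planarity in general, of course, requires the full algorithms discussed in the talk.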

In this talk, we consider an even more general case, in which the target space is a 2-dimensional simplicial complex (a topological space obtained from a graph by attaching a solid triangle to some of its 3-cycles). It turns out that this generalization encompasses some other well-known problems, such as the crossing number, the splitting number, and the planarity number (and their generalizations to surfaces). We give an algorithm that is quadratic in the size of the input graph (and exponential in the size of the input complex), independent of the previous algorithms for embedding graphs into surfaces. The techniques mix graph theory (branchwidth, the excluded grid theorem, dynamic programming) with topology (combinatorial maps of graphs on surfaces).

Unit multiple interval graphs

Multiple interval graphs are a well-known generalization of interval graphs introduced in the 1970s to deal with situations arising naturally in scheduling and allocation. A (disjoint) d-interval is the union of d (disjoint) intervals on the real line, and a graph G is a (disjoint) d-interval graph if it is the intersection graph of (disjoint) d-intervals. The numerous concrete applications of this class of graphs have led to the study of natural restrictions, such as unit multiple interval graphs (where every interval has unit length).
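To fix the definition, here is a minimal Python sketch (helper names are ours, chosen for illustration) that builds the intersection graph of a family of d-intervals:

```python
from itertools import combinations

def intervals_intersect(i1, i2):
    """Closed intervals (a, b) intersect iff neither ends before the other starts."""
    a1, b1 = i1
    a2, b2 = i2
    return a1 <= b2 and a2 <= b1

def d_interval_graph(d_intervals):
    """Intersection graph of d-intervals: each vertex is a union of d
    intervals, and two vertices are adjacent iff some interval of one
    overlaps some interval of the other.  Returns an edge set over
    vertex indices."""
    edges = set()
    for (u, iu), (v, iv) in combinations(enumerate(d_intervals), 2):
        if any(intervals_intersect(x, y) for x in iu for y in iv):
            edges.add((u, v))
    return edges

# Three unit 2-intervals: vertices 0 and 1 overlap via [0,1] and [0.5,1.5];
# vertex 2 is disjoint from both, so the only edge is (0, 1).
g = d_interval_graph([
    [(0, 1), (3, 4)],
    [(0.5, 1.5), (6, 7)],
    [(9, 10), (12, 13)],
])
assert g == {(0, 1)}
```

The recognition problem discussed in the talk goes in the opposite, hard direction: given only the graph, decide whether such a family of (unit, disjoint) d-intervals exists.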

Whereas it has long been known that recognizing 2-interval graphs and other related classes is NP-complete, the complexity of recognizing unit 2-interval graphs remained open for a long time. Here, we show that recognizing disjoint unit d-interval graphs is also NP-complete. On the positive side, we give a characterization of unit d-interval graphs which are also interval graphs, and a partial characterization of disjoint unit d-interval graphs which are also interval graphs. This last result implies that the class of disjoint unit d-interval graphs is strictly included in the class of unit d-interval graphs.

Analysing the Peeling Algorithm – A Tutorial

In this tutorial we will review an influential phenomenon concerning random structures that can be stated in balls-into-bins terminology.

Assume there are n balls and m bins and we wish to place each ball into a bin chosen at random. By the birthday paradox we have to expect a collision of two balls in the same bin as soon as n = Ω(√m). If, however, we generate 3 candidate bins for each ball, then as long as n < 0.81m it is with high probability possible to place each ball into a candidate bin without any collisions. Such a placement can even be computed by the so-called peeling algorithm that keeps greedily looking for a bin that is a candidate for only one unplaced ball.
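As a concrete illustration, here is a minimal Python sketch of the greedy peeling strategy described above (function name and parameters are ours, not from the tutorial): it repeatedly finds a bin that is a candidate for exactly one unplaced ball and places that ball there.

```python
import random

def peel(n, m, k=3, seed=0):
    """Greedy peeling: try to place each of n balls into one of its k
    random candidate bins so that no bin receives two balls.
    Returns a dict ball -> bin, or None if peeling gets stuck."""
    rng = random.Random(seed)
    candidates = [rng.sample(range(m), k) for _ in range(n)]
    # bin_to_balls[b] = set of unplaced balls having bin b as a candidate
    bin_to_balls = {b: set() for b in range(m)}
    for ball, cs in enumerate(candidates):
        for b in cs:
            bin_to_balls[b].add(ball)
    placement = {}
    # Bins that are currently a candidate for exactly one unplaced ball.
    queue = [b for b in range(m) if len(bin_to_balls[b]) == 1]
    while queue:
        b = queue.pop()
        if len(bin_to_balls[b]) != 1:
            continue  # stale entry: the count changed since enqueueing
        (ball,) = bin_to_balls[b]
        placement[ball] = b
        # Remove the placed ball from all of its candidate bins.
        for c in candidates[ball]:
            bin_to_balls[c].discard(ball)
            if len(bin_to_balls[c]) == 1:
                queue.append(c)
    return placement if len(placement) == n else None
```

Since candidate-set sizes only ever shrink, each bin places at most one ball, so any returned placement is collision-free; the tutorial's claim is that for k = 3 and n < 0.81m this succeeds with high probability.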

This result and its generalisations can be stated in several languages and have wide applications. In graph terminology, it is related to the emergence of cores in Erdős–Rényi graphs (and related graphs); in coding theory, to fast decoding of LDPC codes; and in data structure theory, to cuckoo hash tables as well as data structures for retrieval and approximate membership. We will see a (more or less) complete proof, which will take us to a few interesting places along the way. For instance, we will study the properties of certain random infinite trees, which are, perhaps surprisingly, much easier to understand than random finite graphs.

State complexity: Reversing a language reduces the complexity of the root operation

Deterministic finite automata (DFAs) are state machines that accept or reject words. The set of words recognized by an automaton is its language. Regular languages are exactly the languages recognizable by finite automata. Here, we focus on a particular measure: state complexity. For a regular language, it is defined as the size of the smallest (deterministic) automaton recognizing the language. For an operation, it is defined through the action of the operation on the minimal automata of regular languages.

The root and reversal operations are well known. Their state complexities are n^n − n(n−1)/2 and 2^n, respectively. One might therefore expect a blow-up in the number of states when composing them.
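To make the 2^n figure for reversal concrete, here is a small Python sketch (the function and the witness automaton are ours, chosen for illustration) that reverses a DFA and determinizes the result via the subset construction; the number of reachable subsets witnesses the state complexity of the reversed language:

```python
def reverse_and_determinize(states, alphabet, delta, finals):
    """Reverse a complete DFA (yielding an NFA) and determinize it by
    the subset construction.  Returns the set of reachable subsets."""
    # Reversed transitions: rdelta[(q, a)] = {p : delta[(p, a)] == q}.
    rdelta = {(q, a): set() for q in states for a in alphabet}
    for (p, a), q in delta.items():
        rdelta[(q, a)].add(p)
    init = frozenset(finals)  # the DFA's final states start the reversal
    reached, stack = {init}, [init]
    while stack:
        s = stack.pop()
        for a in alphabet:
            t = frozenset(p for q in s for p in rdelta[(q, a)])
            if t not in reached:
                reached.add(t)
                stack.append(t)
    return reached

# A 3-state witness: "a" cycles the states, "b" merges states 0 and 1.
states = [0, 1, 2]
delta = {(p, "a"): (p + 1) % 3 for p in states}
delta.update({(0, "b"): 0, (1, "b"): 0, (2, "b"): 2})
# Reversing and determinizing reaches all 2^3 = 8 subsets.
assert len(reverse_and_determinize(states, "ab", delta, {2})) == 8
```

The talk's point is that composing the root with this reversal avoids, rather than compounds, such a blow-up.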

The goal of this seminar is to lay out all the background needed to understand the problem, and then to show that not only does the root-reversal composition produce no combinatorial blow-up, but it in fact yields a smaller automaton than the root alone in the worst cases.