Local search is one of the most fundamental and successful paradigms for designing heuristics for hard optimization problems. The main idea is to perform a search in the space of feasible solutions. During this search, a feasible solution is replaced by a better feasible solution in a suitably defined local neighborhood until a locally optimal solution is reached. A major goal in the design of local search algorithms is to avoid bad local optima. In this talk, we discuss how parameterized algorithmics can contribute to this goal via the approach of parameterized local search. Here, the idea is to avoid bad local optima by increasing the neighborhood size at the cost of increased running times. We present theoretical and experimental results to demonstrate that parameterized local search can be a valuable addition to the toolbox for the design of state-of-the-art heuristics.
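As a toy illustration of the parameterized local search idea (not taken from the talk; the Max-Cut objective and all names are illustrative assumptions), a k-flip neighborhood search trades neighborhood size, and hence the quality of reachable local optima, against the cost of scanning the neighborhood:

```python
from itertools import combinations

def k_flip_local_search(edges, n, k):
    """Hill climbing for Max-Cut on n vertices with the k-flip neighborhood:
    repeatedly look for an improving move that flips at most k vertices.
    Larger k escapes more local optima, at the cost of a larger search."""
    def cut(side):
        return sum(1 for u, v in edges if side[u] != side[v])

    side = [0] * n  # start with all vertices on one side
    improved = True
    while improved:
        improved = False
        for r in range(1, k + 1):
            for subset in combinations(range(n), r):
                cand = side[:]
                for v in subset:
                    cand[v] ^= 1  # flip vertex v to the other side
                if cut(cand) > cut(side):
                    side = cand
                    improved = True
                    break
            if improved:
                break
    return side, cut(side)
```

Scanning the k-flip neighborhood takes O(n^k) candidate moves per step, which is exactly the running-time cost that parameterized local search aims to analyze and tame.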
In an embedded real-time system (ERTS), real-time tasks (software) are typically executed on a multicore shared-memory platform (hardware). The number of cores is usually small, contrasted with a larger number of complex tasks that share data to collaborate. Since most ERTSs are safety-critical, it is crucial to rigorously verify their software against various real-time requirements under the actual hardware constraints (concurrent access to data, number of cores). Both the real-time systems and the formal methods communities provide elegant techniques to realize such verification, which nevertheless face major challenges. For instance, model checking (formal methods) suffers from the state-space explosion problem, whereas schedulability analysis (real-time systems) is pessimistic and restricted to simple task models and schedulability properties. In this paper, we propose a scalable and generic approach to formally verify ERTSs. The core contribution is enabling, by joining the forces of both communities, compositional verification to tame the state-space size. To that end, we formalize a realistic ERTS model in which tasks are complex, with an arbitrary number of jobs and job segments, and then show that compositional verification of such a model is possible, using a hybrid approach (from both communities), under the state-of-the-art partitioned fixed-priority (P-FP) with limited preemption scheduling algorithm. The approach consists of the following steps, given the above ERTS model and scheduling algorithm. First, we compute fine-grained data-sharing overheads for each job segment that reads or writes some data from the shared memory. Second, we generalize an algorithm that, aware of the data-sharing overheads, computes an affinity (task-core allocation) guaranteeing the schedulability of hard real-time (HRT) tasks.
Third, we devise a timed automata (TA) model of the ERTS that takes into account the affinity, the data-sharing overheads, and the scheduling algorithm, and on which we demonstrate that various properties can be verified compositionally, i.e., on a subset of cores instead of the whole ERTS, therefore reducing the state-space size. In particular, we enable the scalable computation of tight worst-case response times (WCRTs) and other tight bounds separating events on different cores, thus overcoming the pessimism of schedulability analysis techniques. We fully automate our approach and show its benefits on three real-world complex ERTSs, namely two autonomous robots and an automotive case study from the WATERS 2017 industrial challenge.
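For background, the pessimism mentioned above stems from classic worst-case response-time analysis. A minimal sketch of that textbook fixed-point iteration for preemptive fixed-priority scheduling on one core (this is the standard technique, not the paper's compositional method; the task parameters below are made up):

```python
import math

def response_time(tasks, i):
    """Classic response-time analysis: tasks is a list of (C, T) pairs
    (worst-case execution time, period) sorted by decreasing priority.
    Returns the WCRT of task i, found as the least fixed point of
    R = C_i + sum over higher-priority j of ceil(R / T_j) * C_j,
    or None if R exceeds the period (task unschedulable)."""
    C, T = tasks[i]
    r = C
    while True:
        nxt = C + sum(math.ceil(r / Tj) * Cj for Cj, Tj in tasks[:i])
        if nxt == r:
            return r      # fixed point reached: this is the WCRT
        if nxt > T:
            return None   # deadline (= period) missed
        r = nxt
```

The analysis is safe but pessimistic: it assumes every higher-priority task interferes maximally, which is one reason tighter, model-checking-based WCRT bounds are attractive.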
We introduce the notion of delineation. A graph class C is said to be delineated by twin-width (or simply, delineated) if for every hereditary closure D of a subclass of C, it holds that D has bounded twin-width if and only if D is monadically dependent. An effective strengthening of delineation for a class C implies that tractable FO model checking on C is perfectly understood: on hereditary closures of subclasses D of C, FO model checking on D is fixed-parameter tractable (FPT) exactly when D has bounded twin-width. Ordered graphs [BGOdMSTT, STOC ’22] and permutation graphs [BKTW, JACM ’22] are effectively delineated, while subcubic graphs are not. On the one hand, we prove that interval graphs, and even rooted directed path graphs, are delineated. On the other hand, we observe or show that segment graphs, directed path graphs (with arbitrarily many roots), and visibility graphs of simple polygons are not delineated.
In an effort to draw the delineation frontier between interval graphs (that are delineated) and axis-parallel two-lengthed segment graphs (that are not), we investigate the twin-width of restricted segment intersection classes. It was known that (triangle-free) pure axis-parallel unit segment graphs have unbounded twin-width [BGKTW, SODA ’21]. We show that K_{t,t}-free segment graphs, and axis-parallel H_t-free unit segment graphs, have bounded twin-width, where H_t is the half-graph or ladder of height t. In contrast, axis-parallel H_4-free two-lengthed segment graphs have unbounded twin-width. We leave as an open question whether unit segment graphs are delineated.
More broadly, we explore which structures (large bicliques, half-graphs, or independent sets) are responsible for making the twin-width large on the main classes of intersection and visibility graphs. Our new results, combined with the FPT algorithm for first-order model checking on graphs given with O(1)-sequences [BKTW, JACM ’22], give rise to a variety of algorithmic win-win arguments. They all fall in the same framework: if p is an FO-definable graph parameter that effectively functionally upper-bounds twin-width on a class C, then p(G) ⩾ k can be decided in FPT time f(k) · |V(G)|^{O(1)}. For instance, we readily derive FPT algorithms for k-Ladder on visibility graphs of 1.5D terrains, and k-Independent Set on visibility graphs of simple polygons. This showcases that the theory of twin-width can serve outside of classes of bounded twin-width.
Joint work with Édouard Bonnet, Dibyayan Chakraborty, Eun Jung Kim, Raul Lopes and Stéphan Thomassé.
In Participatory Budgeting, cities allow their residents to influence the city’s budget through a vote. The standard voting method simply sorts the project proposals by their number of votes, and then greedily selects projects until the budget limit is reached. We have developed a new voting rule, the Method of Equal Shares, that leads to fairer outcomes by providing proportional representation. Our method was used for the first time in 2023 in Wieliczka, Poland, and will be used in Aarau, Switzerland, later this year. In this talk, I will explain how the method works and the formal proportionality guarantees it satisfies. I will also show data from simulations: because we have full election data from several cities, we can simulate what would have happened if those cities had used the Method of Equal Shares. Thus, we can compare how well the voters’ preferences are represented by our method compared to the standard method. I will also discuss algorithmic aspects and open problems.
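The standard greedy rule described above can be sketched in a few lines (the project data is made up; one common variant, assumed here, skips an unaffordable project and continues down the list):

```python
def greedy_budgeting(projects, budget):
    """Standard Participatory Budgeting rule: sort proposals by vote
    count, then greedily take each affordable project until the budget
    is exhausted.  `projects` is a list of (name, cost, votes) triples."""
    selected, remaining = [], budget
    for name, cost, votes in sorted(projects, key=lambda p: -p[2]):
        if cost <= remaining:
            selected.append(name)
            remaining -= cost
    return selected
```

The rule's weakness, which motivates the Method of Equal Shares, is that a cohesive majority can win every selected project, leaving minorities with no representation at all.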
9:00-9:45 Keynote: “Balanced words and dynamical systems” – Valérie Berthé, DR CNRS, IRIF
Abstract: The chairman assignment problem is stated as follows: k states are assumed to form a union, and each year a union chairman must be selected so that, at any time, the cumulative number of chairmen from each state is proportional to its weight. The richness of this problem lies in the fact that it can be reformulated both as a scheduling problem in operations research and as a symbolic discrepancy problem in terms of combinatorics on words (discrepancy measures the difference between the number of occurrences of a letter in a prefix of an infinite word and the value expected from the frequency of occurrence of this letter). We will see in this talk how to construct infinite words over a finite alphabet having the smallest possible discrepancy, by revisiting a construction due to R. Tijdeman in terms of dynamical systems. This is joint work with O. Carton, N. Chevallier, W. Steiner and R. Yassawi.
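To make the problem statement concrete, here is a hedged sketch of a simple greedy largest-deficit rule (not Tijdeman's optimal construction): at each step, select the state whose cumulative count lags furthest behind its proportional share.

```python
def chairman_sequence(weights, steps):
    """Greedy largest-deficit rule for the chairman assignment problem:
    at step n, pick the state i maximizing w_i * n / total - count_i,
    i.e. the state furthest behind its proportional share."""
    total = sum(weights)
    counts = [0] * len(weights)
    seq = []
    for n in range(1, steps + 1):
        deficits = [w * n / total - c for w, c in zip(weights, counts)]
        i = max(range(len(weights)), key=deficits.__getitem__)
        counts[i] += 1
        seq.append(i)
    return seq, counts
```

This greedy variant merely illustrates the setting; Tijdeman's refined rule achieves the optimal worst-case discrepancy bound (known to be 1 − 1/(2(k−1)) for k states).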
9:45-10:30 Keynote: “Artificial Intelligence in Digital Histopathology” – Maria Vakalopoulou, Assistant Professor, CentraleSupélec, Université Paris Saclay
Abstract: Artificial intelligence is a game changer for a variety of domains, including healthcare. Indeed, recent data-driven methods are gathering a lot of attention from the medical community, since they boost the performance of computational methods for a variety of tasks. Among the different medical modalities, digital pathology is the gold standard for cancer screening and diagnosis. Novel and robust computational methods for digital pathology could have a huge impact on the clinical routine, facilitating the clinical process. This talk presents an overview of recent techniques for digital pathology. First, the main challenges as well as the current state of the art in digital pathology will be discussed. Then, recent deep-learning-based methods for classification and segmentation problems are presented, shedding light on the use of novel techniques for processing these medical modalities.
10:30 – 10:50 Coffee break
10:50 – 12:20 Roundtable discussion on diversity – equality – parity in research laboratories
Special guests: Valérie Berthé, Philippe Gambette, Corinne Palescandolo, Zephyr Salvy and Maria Vakalopoulou
Moderator: Anna-Livia Morand
12:20-14:00 Buffet lunch
Normality was introduced by É. Borel more than one hundred years ago. A real number is normal to an integer base if, in its infinite expansion in that base, all blocks of digits of the same length occur with the same frequency. Normality can also be characterized as incompressibility by finite-state machines. In this talk we will show that pushdown machines, even deterministic ones, can compress some normal sequences. This positively settles a question left open by V. Becher, P. A. Heiber and O. Carton. Joint work with O. Carton.
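To illustrate the definition (a toy sketch, unrelated to the compression result): normality requires every block of each length to occur with the same limiting frequency, which already fails for eventually periodic expansions.

```python
from collections import Counter

def block_frequencies(digits, length):
    """Empirical frequency of each length-`length` block among the
    overlapping windows of a digit string."""
    n = len(digits) - length + 1
    counts = Counter(digits[i:i + length] for i in range(n))
    return {block: c / n for block, c in counts.items()}
```

For the periodic word 0101..., the single digits 0 and 1 each have frequency 1/2, but the blocks 00 and 11 never occur, so the word is far from normal (for base 2, normality would demand frequency 1/4 for every block of length 2).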
I will talk about various results concerning the crossing number of graphs drawn on surfaces.
I will begin with a general introduction to the crossing number; in particular, I will recall the celebrated crossing lemma and its applications.
Then I will present recent results on two variants of this concept. In the classical setting, every intersection between every pair of edges counts as a crossing. In one variant, the degenerate crossing number, several edges may cross at a single point. Mohar conjectured that the degenerate crossing number in the plane equals the non-orientable genus of the graph. Negami formulated a similar conjecture for another parameter, namely the joint crossing number of two graphs G and H embedded in a surface S.
In her thesis (which I co-supervise with Arnaud de Mesmay), Niloufar Fuladi obtained positive and negative results related to these conjectures, building on results from computational biology. More precisely, she adapted a simple case of the Hannenhalli-Pevzner genome rearrangement algorithm to the setting of graph crossing numbers.
Given a group G, a set of colors A and a set of forbidden patterns F, the subshift X_F is the set of colorings of G by A that avoid the patterns in F. I recently obtained a simple condition on the set of forbidden patterns F that implies that X_F is nonempty and even provides a lower bound on its growth rate ("the size of X_F"). It seems to be more powerful, and is easier to use, than previous results by Aubrun et al., Bernshteyn, Miller and Pavlov. Interestingly, the proof of the main theorem relies on a really simple counting argument. This counting argument can be applied in other settings, and has been applied to settings where the Lovász local lemma, Bernshteyn's Local Cut Lemma and related techniques would usually be applied.
After an introduction, I will present the main result, give the simple proof of the main lemma behind it, and then provide some applications to strongly aperiodic subshifts, nonrepetitive subshifts and the Kolmogorov complexity of subshifts. No knowledge of group theory is required: for this result, the argument over Z^2 (the infinite grid) is identical to that for the general case, and I will focus on this case.
In the 1960s, Tutte was the first to enumerate maps, i.e., graphs embedded in surfaces. He developed a method based on a combinatorial decomposition and an associated functional equation, through which he brought to light enumerative formulas of remarkable simplicity. After many developments in the combinatorics of maps in genus 0, and in fixed nonzero genus, the behavior as a function of the genus is still poorly understood. Despite the functional equations and bijections skillfully devised and handled by expert hands, it is another technique, developed by Goulden and Jackson in 2008, that has yielded the most efficient known recurrences for the number of maps as a function of size and genus. This technique relies on a mysterious series of equations called an integrable hierarchy. In joint work with G. Chapuy (IRIF) and M. Dolega (Polish Academy of Sciences), we extended the integrable hierarchy and the recurrence formulas to maps on non-orientable surfaces. In this talk, I will explain the origin of the integrable hierarchy, how to use it to obtain recurrences, and why we became interested in the non-orientable case.
Lua is a well-known scripting language, popular among many programmers, most notably in the gaming industry. The only data-structuring mechanism in Lua is an associative array called a table. Lua 5.0 introduced an implementation with hybrid tables that combine a hash table with a dynamically growing array. All this is hidden from the user, who sees just a simple interface for the tables. In this talk we will examine the implementation and performance of Lua's tables from a theoretical point of view. We consider various worst-case and probabilistic scenarios. Interestingly, we uncover some problematic situations under a simple, yet unusual, probabilistic model: the cost of performing T table operations is proved to be at least of order Ω(T log T) with high probability, instead of O(T).
Joint work with Conrado Martínez and Cyril Nicaud.
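A minimal model of the hybrid layout (a Python sketch of the idea only; Lua's actual implementation uses different rehashing and sizing heuristics):

```python
class HybridTable:
    """Toy hybrid table: dense integer keys 1..n live in a list
    (the 'array part'); all other keys go to a dict (the 'hash part')."""

    def __init__(self):
        self.array = []  # holds values for keys 1..len(self.array)
        self.hash = {}

    def __setitem__(self, key, value):
        if isinstance(key, int) and key == len(self.array) + 1:
            self.array.append(value)
            # absorb hash-part keys that now extend the dense prefix
            while len(self.array) + 1 in self.hash:
                self.array.append(self.hash.pop(len(self.array) + 1))
        elif isinstance(key, int) and 1 <= key <= len(self.array):
            self.array[key - 1] = value
        else:
            self.hash[key] = value

    def __getitem__(self, key):
        if isinstance(key, int) and 1 <= key <= len(self.array):
            return self.array[key - 1]
        return self.hash[key]
```

The interesting behavior, and the source of the worst cases studied in the talk, lies in when keys migrate between the two parts and how the parts are resized.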
