Büchi automata are the natural extension of finite automata to a model of computation that accepts inputs of infinite length. A subset X of the reals is called r-automatic if there is a Büchi automaton that accepts (one of) the base-r representations of every element of X and rejects the base-r representations of every element of its complement. One can analogously define r-automatic subsets of higher arities, and these sets often exhibit fractal-like behaviour: for example, the Cantor set is 3-automatic. There are interesting connections between fractal geometry and Büchi automata, and we consider them from the point of view of model theory. In this talk, we present a characterization of the cases in which the various notions of fractal dimension agree, and of those in which they disagree, for r-automatic sets.
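As a small illustration of the Cantor-set example (my own sketch, not from the talk): the middle-thirds Cantor set consists exactly of the reals in [0, 1] admitting a base-3 expansion using only the digits 0 and 2, so a one-state Büchi automaton over the alphabet {0, 1, 2} recognises it. Büchi acceptance runs on infinite digit streams, so the code below can only check finite prefixes.

```python
def cantor_prefix_ok(digits):
    """Accept iff the base-3 digit prefix avoids the digit 1,
    mimicking the automaton's single accepting loop."""
    return all(d in (0, 2) for d in digits)

def base3_digits(x, n):
    """First n base-3 digits of x in [0, 1)."""
    out = []
    for _ in range(n):
        x *= 3
        d = int(x)
        out.append(d)
        x -= d
    return out

# 1/4 = 0.020202... in base 3, so it lies in the Cantor set;
# 1/2 = 0.111... does not.
print(cantor_prefix_ok(base3_digits(0.25, 10)))  # True
print(cantor_prefix_ok(base3_digits(0.5, 10)))   # False
```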
Tag: French
We present a method to identify superfluous edge sets in the de Bruijn and Kautz graphs (two classical families of directed graphs) of cardinality at least C_ε (Δ − ε)^D, where D is the graph's diameter and Δ its maximum out-degree.
Additionally, we extend this approach to the underlying graph of the line digraph of any digraph G, replacing Δ – ε with the Perron-Frobenius eigenvalue of G.
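For readers unfamiliar with these graphs, here is a minimal construction of the de Bruijn digraph (my own illustration; names and parameters are not from the paper): B(Δ, D) has Δ^D vertices, the words of length D over a Δ-letter alphabet, with an arc u → v whenever v drops the first letter of u and appends a new one. Every vertex then has out-degree Δ and the diameter is exactly D.

```python
from itertools import product
from collections import deque

def de_bruijn(delta, D):
    """Successor map of the de Bruijn digraph B(delta, D)."""
    verts = list(product(range(delta), repeat=D))
    return {u: [u[1:] + (a,) for a in range(delta)] for u in verts}

def diameter(succ):
    """Directed diameter via BFS from every vertex."""
    best = 0
    for s in succ:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in succ[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

g = de_bruijn(2, 3)  # B(2, 3): 8 vertices, out-degree 2
print(diameter(g))   # 3
```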
On 2024-06-20 at 1:30 PM, there will be two talks in Room 1201 at ESIEE.
First talk:
Quantile Based Approaches for Robust Classification
A. Challa
Abstract: Robustness to distortions is a key challenge affecting the reliability of machine learning models. This talk explores using quantile-based approaches to address this issue. We focus on two main questions: (1) How can we make any given classifier return well-calibrated probabilities without compromising its properties? (2) How can we build networks that incorporate robustness within the training process itself? We find that quantiles offer a simple and elegant solution to both problems. Quantiles have an intrinsic property of characterising the underlying distribution and hence can be used for efficient control of ML models. In short, we address (1) by forcing the network to learn the output quantiles, and (2) by using quantiles to normalize training across distributions.
Brief Bio: Dr. Aditya Challa obtained his bachelor's degree from the Indian Statistical Institute and his master's degree from the University of Warwick. He completed his PhD at the Indian Statistical Institute under the guidance of Prof. B.S. Daya Sagar and Prof. Laurent Najman (as co-supervisor). He is currently working as an assistant professor at the Birla Institute of Technology and Sciences, Goa Campus. He is primarily interested in mathematical morphology, machine learning, and its statistical underpinnings.
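The basic ingredient behind quantile-based training is the standard pinball (quantile) loss; the sketch below (my own, not the speaker's exact calibration procedure) shows that minimising it over a constant predictor recovers the empirical τ-quantile of the data:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average pinball loss at quantile level tau in (0, 1)."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1) * diff)))

# Minimising over constants recovers the empirical tau-quantile:
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 601)
losses = [pinball_loss(y, c, 0.9) for c in grid]
best = grid[int(np.argmin(losses))]
print(round(best, 2))  # close to the 0.9-quantile of N(0, 1), about 1.28
```

In a network, replacing the usual point-prediction loss by pinball losses at several levels τ makes the model output quantiles of the conditional distribution rather than a single value.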
Second talk:
Watershed as Greedy 1NN: Regularising 1NN classifier for Improved Generalization
S. Danda
Abstract: In this talk, I will present a novel approach to deep learning classification that challenges the dominance of linear classifiers. Traditionally, a deep learning classifier is represented as a composition of two parameterized functions g ∘ f, where f learns the embedding and g classifies the embeddings. Despite the success of linear classifiers, non-parametric methods like Nearest Neighbour classifiers have not performed as well. Our proposed method introduces a non-parametric approach that matches the performance of linear classifiers. We demonstrate that the hyperparameter ‘K’ in KNN acts as a regularization mechanism, but is not optimal. We then propose a new regularization technique, the watershed classifier, which employs a greedy approach. By introducing a loss function tailored to the watershed classifier, we achieve superior performance compared to Neighbourhood Component Analysis (NCA). Our results show that the non-parametric watershed classifier performs on par with, or better than, traditional linear classifiers trained with cross-entropy.
Brief Bio: Dr. Sravan Danda obtained his bachelor's degree in Mathematics and master's degree in Statistics, both from the Indian Statistical Institute. He completed his PhD at the Indian Statistical Institute under the joint guidance of Prof. B.S. Daya Sagar and Prof. Laurent Najman. He is currently working as an assistant professor at the Birla Institute of Technology and Sciences, Goa Campus. His current research interests include applications of machine learning for agriculture, mathematical morphology, and robust machine learning.
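A toy sketch of the greedy-1NN idea on which the watershed classifier builds (my own simplification, not the speaker's loss or training procedure): starting from a few labelled seeds, repeatedly give the closest still-unlabelled point the label of its nearest labelled neighbour. Labels thus propagate outwards in order of distance, as in a watershed flooding, instead of each point being labelled independently by its nearest seed.

```python
import numpy as np

def greedy_1nn(X, labels):
    """labels: -1 marks unlabelled points; returns a full labelling."""
    labels = labels.copy()
    while (labels == -1).any():
        lab = np.where(labels != -1)[0]
        unlab = np.where(labels == -1)[0]
        # pairwise distances between unlabelled and labelled points
        d = np.linalg.norm(X[unlab][:, None] - X[lab][None], axis=-1)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        labels[unlab[i]] = labels[lab[j]]  # closest pair propagates
    return labels

X = np.array([[0.0], [1.0], [2.0], [3.0], [10.0], [11.0]])
seeds = np.array([0, -1, -1, -1, -1, 1])
print(greedy_1nn(X, seeds))  # the left chain gets label 0, the right pair 1
```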
The Freeze-Tag Problem, introduced by Arkin et al. (SODA’02), consists of waking up a swarm of n robots, starting from a single active robot. In the basic geometric version, every robot is given coordinates in the plane. As soon as a robot is awakened, it can move towards inactive robots to wake them up. The goal is to minimize the wake time of the last robot, the makespan.
Despite significant progress on the computational complexity of this problem and on approximation algorithms, the characterization of exact bounds on the makespan remains one of the main open questions. In this paper, we settle this question for the L1-norm, showing that a makespan of at most 5r can always be achieved, where r is the maximum distance between the initial active robot and any sleeping robot. Moreover, a schedule achieving a makespan of at most 5r can be computed in optimal time O(n). Both bounds, on the time and on the makespan, are optimal. For the L2-norm, this implies a new upper bound of 5√2·r ≈ 7.07r on the makespan, improving the best bound known so far, (5 + 2√2 + √5)r ≈ 10.06r.
Along the way, we introduce new linear-time wake-up strategies that apply to any norm and show that an optimal bound on the makespan can always be achieved by a schedule computable in linear time.
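To make the problem concrete, here is a naive greedy wake-up schedule under the L1 norm (an illustration only; this greedy strategy is mine and does not achieve the paper's 5r guarantee). Every awake robot repeatedly travels to the nearest unclaimed sleeping robot; the makespan is the time the last robot wakes up.

```python
import heapq

def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def greedy_makespan(source, sleeping):
    asleep = list(sleeping)
    free = [(0.0, source)]  # (time robot becomes available, position)
    makespan = 0.0
    while asleep:
        t, pos = heapq.heappop(free)
        target = min(asleep, key=lambda s: l1(pos, s))
        asleep.remove(target)
        t2 = t + l1(pos, target)
        makespan = max(makespan, t2)
        # both the traveller and the newly woken robot are now available
        heapq.heappush(free, (t2, target))
        heapq.heappush(free, (t2, target))
    return makespan

pts = [(1, 0), (0, 2), (-3, 0), (0, -1)]
print(greedy_makespan((0, 0), pts))  # 7.0, well within the 5r = 15 bound
```

Here r = 3 (the L1 distance to (-3, 0)), so any schedule meeting the paper's bound would finish by time 15; the greedy schedule happens to finish at 7.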
In this talk I will present a recent result on the non-existence of polynomial kernels for the Knapsack problem under two natural parameters: the number of different item profits, and the number of different item weights. The talk will not assume any prior knowledge of these concepts. This is joint work with Klaus Heeger, Matthias Mnich, and Dvir Shabtay, and has been accepted to the next ICALP.
Minimizing the crossing number of a graph, i.e., the minimum number of pairwise edge crossings over all drawings in the plane, is a notoriously computationally hard problem.
It remains hard even under very restrictive settings of the input. We survey some of the many known hardness results in this area and, in response to a long-standing open question, present a new hardness reduction proving that the crossing number problem remains NP-complete for graphs of constant tree-width and path-width.
The latter result is a joint work with Liana Khazaliya from TU Wien.
Consider a graph drawn on a surface (for example, the plane minus a finite set of obstacle points), possibly with crossings. We provide a polynomial-time algorithm to decide whether such a drawing can be untangled, namely, whether one can slide the vertices and edges of the graph on the surface (avoiding the obstacles) to remove all crossings; in other words, whether the drawing is homotopic to an embedding. While the problem boils down to planarity testing when the surface is the sphere or the disk (or equivalently the plane without any obstacle), the other cases have never been studied before, except when the input graph is a cycle, a case addressed in an abundant literature in topology and more recently by Despré and Lazarus [SoCG 2017, J. ACM 2019].
CONGEST is a model in which computers distributed over a network graph must solve a problem by efficiently exchanging messages of bounded size. One of its most natural problems is the detection of subgraphs in the network.
This talk will give an overview of the problem of cycle detection in the CONGEST model, focusing on our latest state-of-the-art algorithm for even-length cycles. Time permitting, the quantum approach to quadratically speeding up detection will also be addressed.
In various domains, statistical algorithms trained on personal data take pivotal decisions which influence our lives on a daily basis. Recent studies show that a naive use of these algorithms in sensitive domains may lead to unfair and discriminatory decisions, often inheriting or even amplifying biases present in the data. In the first part of the talk, I will introduce and discuss the question of fairness in machine learning through concrete examples of biases coming from the data and/or from the algorithms. In the second part, I will demonstrate how statistical learning theory can help us better understand and overcome some of those biases. In particular, I will present a selection of recent results from two of my papers on the Demographic Parity constraint, including an interesting link between fairness and optimal transport.
References:
– A minimax framework for quantifying risk-fairness trade-off in regression (with E. Chzhen), Ann. Statist. 50(4): 2416-2442 (Aug. 2022). DOI: 10.1214/22-AOS2198;
– Fair learning with Wasserstein barycenters for non-decomposable performance measures (with S. Gaucher and E. Chzhen), AISTATS 2023.
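The Demographic Parity constraint can be made concrete with a small sketch (my own illustration, not from the papers above): a binary predictor satisfies it when its acceptance rate is the same across the groups defined by a sensitive attribute, and the gap below measures the violation.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in acceptance rates between sensitive groups;
    0 means the predictor satisfies Demographic Parity exactly."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return float(max(rates) - min(rates))

y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # binary decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # sensitive attribute
print(demographic_parity_gap(y_pred, group))  # |2/4 - 1/4| = 0.25
```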
Sparsest cut is a fundamental graph problem, which models a general notion of balanced separator of a graph, and has uses in graph theory and divide-and-conquer approaches. In this problem, we are given an edge-capacitated graph, together with demands over pairs of vertices, and we want to find a cut that minimizes the ratio between the capacities and demands across the cut. In other words, we aim to separate as much demand as possible using little cut capacity. For general graphs, the best known approximation factor is Õ(sqrt(log n)), and the problem is known to have no constant-approximation under the unique games conjecture.
In this talk, we look at the problem from the point of view of parameterized complexity, by studying the approximability of the problem in the simpler setting of bounded-treewidth graphs (a common structural constraint on the input graph). Previous algorithms in this setting either have large approximation factors (exponential in the treewidth) or are inefficient. We propose a new algorithm framework which unifies all known algorithms for sparsest cut in bounded treewidth, and then show how to obtain improved approximations to the problem. As a result, we give the first efficient (FPT) constant-factor approximation algorithm.
Joint work with P. Chalermsook, M. Kaul, M. Mnich, J. Spoerhase and S. Uniyal, titled “Approximating Sparsest Cut in Low-treewidth Graphs via Combinatorial Diameter”, published in ACM Transactions on Algorithms (2024).
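For intuition on the objective, here is a brute-force computation of the uniform sparsest cut (unit capacities, all-pairs unit demands) on a tiny instance; this is my own exponential-time illustration, not the paper's algorithm, which targets efficient approximation on bounded-treewidth graphs.

```python
from itertools import combinations

def uniform_sparsest_cut(n, edges):
    """Minimise cut(S) / (|S| * |V \\ S|) over all cuts, by enumeration."""
    best, best_S = float("inf"), None
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            ratio = cut / (len(S) * (n - len(S)))
            if ratio < best:
                best, best_S = ratio, S
    return best, best_S

# Two triangles joined by a single bridge edge: cutting the bridge
# separates half the demand with a single edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
ratio, S = uniform_sparsest_cut(6, edges)
print(ratio, S)  # 1/9, achieved by one of the triangles
```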
