## Contributed Videos

The Sunflower Problem

Coauthors: Suchakree Chueluecha (Lehigh University), Lutz Warnke (Georgia Institute of Technology)

Abstract. A sunflower with $p$ petals consists of $p$ sets whose pairwise intersections are all the same set. The goal of the sunflower problem is to find the smallest $r = r(p,k)$ such that every family of at least $r^k$ $k$-element sets must contain a sunflower with $p$ petals. Major breakthroughs within the last year by Alweiss-Lovett-Wu-Zhang and others show that $r = O(p \log(pk))$ suffices. In this talk, after reviewing the history and significance of the Sunflower Problem, I will present our improvement to $r = O(p \log k)$. This is based on joint work with Suchakree Chueluecha (Lehigh University) and Lutz Warnke (Georgia Tech), see https://arxiv.org/abs/2009.09327.
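The defining condition (all pairwise intersections equal to one common core) is easy to check directly. As a small illustration of the definition (not part of the talk), a minimal Python sketch:

```python
from itertools import combinations

def is_sunflower(sets):
    """Check whether a family of sets forms a sunflower:
    every pairwise intersection equals one common core."""
    sets = [frozenset(s) for s in sets]
    if len(sets) < 2:
        return True
    core = sets[0] & sets[1]
    return all(a & b == core for a, b in combinations(sets, 2))

# Three 3-element sets with common core {1, 2} and disjoint petals
print(is_sunflower([{1, 2, 3}, {1, 2, 4}, {1, 2, 5}]))  # True
print(is_sunflower([{1, 2, 3}, {2, 3, 4}, {1, 2, 5}]))  # False
```

Note that the core may be empty, so pairwise disjoint sets also form a sunflower.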

Partitioning using pairwise queries

Coauthors: Quentin Lutz (Nokia Bell Labs), Maya Stein (Universidad de Chile), Alexander Scott (University of Oxford)

Abstract. We call 'partitioning' the problem of recovering a set partition using only pairwise queries, each of which reveals whether or not two elements belong to the same block. We prove that the partitioning algorithms achieving the minimal average number of queries are exactly those that keep the graph of negative answers chordal. Furthermore, these optimal algorithms all share the same distribution of the number of queries. We characterize this distribution and prove a Gaussian limit law.

Packing nearly optimal Ramsey R(3,t) graphs

Coauthors: Lutz Warnke (Georgia Institute of Technology)

Abstract. In 1995 Kim famously proved the Ramsey bound $R(3,t) \ge c t^2/\log t$ by constructing an $n$-vertex graph that is triangle-free and has independence number at most $C \sqrt{n \log n}$. We extend this celebrated result, which is best possible up to the value of the constants, by approximately decomposing the complete graph $K_n$ into a packing of such nearly optimal Ramsey $R(3,t)$ graphs. More precisely, for any $\epsilon>0$ we find an edge-disjoint collection $(G_i)_i$ of $n$-vertex graphs $G_i \subseteq K_n$ such that (a) each $G_i$ is triangle-free and has independence number at most $C_\epsilon \sqrt{n \log n}$, and (b) the union of all the $G_i$ contains at least $(1-\epsilon)\binom{n}{2}$ edges. Our algorithmic proof proceeds by sequentially choosing the graphs $G_i$ via a semi-random (i.e., Rodl nibble type) variation of the triangle-free process. As an application, we prove a conjecture in Ramsey theory by Fox, Grinshpun, Liebenau, Person, and Szabo (concerning a Ramsey-type parameter introduced by Burr, Erdos, Lovasz in 1976). Namely, denoting by $s_r(H)$ the smallest minimum degree of $r$-Ramsey minimal graphs for $H$, we close the existing logarithmic gap for $H=K_3$ and establish that $s_r(K_3) = \Theta(r^2 \log r)$.

Pattern Matching and Covid
Philippe Jacquet (Inria)

Abstract. I will present some applications of pattern matching and Joint Complexity to the SARS-CoV-2 genome, and show how a purely algorithmic tool can provide insights into a biological topic.

Faster and Accurate Similarity Evaluations via Sampling

Coauthors: Jun Wang (Universitat Politècnica de Catalunya)

Abstract. There are many contexts in which we need to make many similarity evaluations between complex objects, for example in query-by-content searches of images, sounds, documents, or genome/protein sequences, or in classification and clustering tasks. We can speed up similarity evaluations (and thus the overall computation) using approximations via sampling; we discuss how to efficiently obtain accurate, unbiased estimates for several well-known similarity measures.
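As an illustration of the sampling idea for one well-known measure, Jaccard similarity (this particular estimator is an assumption for illustration, not necessarily the one from the talk): sampling elements uniformly from the union and counting hits in the intersection gives an unbiased estimate of $|A \cap B|/|A \cup B|$.

```python
import random

def jaccard_exact(a, b):
    """Exact Jaccard similarity |A ∩ B| / |A ∪ B| of two sets."""
    return len(a & b) / len(a | b)

def jaccard_sampled(a, b, k, rng=random):
    """Unbiased sampling estimate of the Jaccard similarity:
    draw k uniform elements from the union; the fraction that lie
    in the intersection estimates |A ∩ B| / |A ∪ B|."""
    union = list(a | b)
    hits = 0
    for _ in range(k):
        x = rng.choice(union)
        if x in a and x in b:
            hits += 1
    return hits / k

a = set(range(1500))
b = set(range(500, 2000))
print(jaccard_exact(a, b))            # 0.5
print(jaccard_sampled(a, b, 20000))   # close to 0.5
```

For large objects only the k sampled membership tests are paid for, instead of a full pass over both sets.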

Prague Dimension of Random Graphs
Kalen Patton (Georgia Institute of Technology)

Coauthors: He Guo (Georgia Institute of Technology), Lutz Warnke (Georgia Institute of Technology)

Abstract. The Prague dimension of graphs was introduced by Nesetril, Pultr and Rodl in the 1970s. Proving a conjecture of Furedi and Kantor, we show that the Prague dimension of the binomial random graph is typically of order $n/\log n$ for constant edge-probabilities. The main new proof ingredient is a Pippenger-Spencer type edge-coloring result for random hypergraphs with large uniformities, i.e., edges of size $O(\log n)$.

Boltzmann sampling in linear time: a promise from two years ago

Abstract. I was a speaker at AofA'19. The title of my talk was 'The challenge of linear-time Boltzmann sampling'. I presented some results on the topic, and many good wishes for the future. Here I present one of these wishes that recently came true: Boltzmann sampling of arbitrary irreducible context-free structures in linear time. The main tools are: (1) the Drmota-Lalley-Woods theorem; (2) an algorithmic strategy that I call the 'rejection paradigm for decomposed measures'; (3) replacing the critical measure with a combination of slightly sub-critical and super-critical measures, which allows us to pump up the average acceptance rate. Work based on https://arxiv.org/abs/2105.12881.
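As background only (an assumption-laden sketch, not the linear-time sampler of the talk), the basic size-rejection idea underlying Boltzmann sampling can be illustrated on binary trees: draw from a tuned branching-process measure and reject samples whose size misses the target window.

```python
import random

def branching_size(p, rng=random, cap=10_000):
    """Number of internal nodes of a Galton-Watson binary tree in which
    each node gets two children with probability p (else none).
    Returns None if the tree exceeds `cap` nodes (abandoned sample)."""
    size, pending = 0, 1
    while pending:
        pending -= 1
        if rng.random() < p:
            size += 1
            pending += 2
            if size > cap:
                return None
    return size

def sample_size_in_window(n, eps, p, rng=random):
    """Rejection loop: redraw until the size lands in [(1-eps)n, (1+eps)n].
    p < 1/2 is sub-critical, so each trial terminates quickly."""
    lo, hi = (1 - eps) * n, (1 + eps) * n
    while True:
        s = branching_size(p, rng)
        if s is not None and lo <= s <= hi:
            return s

print(sample_size_in_window(50, 0.1, 0.48))  # a size between 45 and 55
```

At a fixed sub-critical p the acceptance rate decays exponentially in n, which is precisely why achieving linear time requires the more careful combination of measures described in the abstract.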

Irregular $\mathbf{d_n}$-Process is Distinguishable from Uniform Random $\mathbf{d_n}$-Graph

Coauthors: Mike Molloy (University of Toronto), Lutz Warnke (Georgia Institute of Technology)

Abstract. The random $d$-process corresponds to a natural algorithmic model for generating $d$-regular graphs: starting with an empty graph on $n$ vertices, it evolves by sequentially adding new random edges so that the maximum degree remains at most $d$. In 1999 Wormald conjectured that the final graph of the random $d$-process is 'similar' to a uniform random $d$-regular graph. We show that this conjecture does not extend to a natural generalization of this process with mixed degree bounds, i.e., where each vertex has its own degree restriction (as long as these restrictions are not 'close' to being regular). Our proof uses the method of switchings, which is usually applied only to uniform random graph models (rather than to stochastic processes). Based on joint work with Mike Molloy and Lutz Warnke.
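The process itself is easy to simulate. A direct (quadratic-time, purely illustrative) Python sketch of the $d$-process as described above; replacing the single bound $d$ by a per-vertex bound $d_v$ gives the mixed-degree generalization from the talk:

```python
import random

def d_process(n, d, rng=random):
    """Simulate the random d-process: starting from the empty graph on n
    vertices, repeatedly add an edge chosen uniformly at random among all
    non-edges whose two endpoints both still have degree below d."""
    degree = [0] * n
    edges = set()
    while True:
        candidates = [(u, v) for u in range(n) for v in range(u + 1, n)
                      if (u, v) not in edges
                      and degree[u] < d and degree[v] < d]
        if not candidates:   # no addable edge remains: process terminates
            break
        u, v = rng.choice(candidates)
        edges.add((u, v))
        degree[u] += 1
        degree[v] += 1
    return edges, degree

edges, degree = d_process(10, 3)
print(max(degree))  # the degree bound d = 3 is never exceeded
```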

The Density of Costas Arrays Decays Exponentially

Coauthors: Bill Correll, Jr (Maxar Technologies), Christopher Swanson (Ashland University)

Abstract. Costas arrays are useful in radar and sonar engineering, and in many other settings in which optimal 2-D autocorrelation is needed: formally, they are simply permutation matrices with the extra property that the vectors joining pairs of ones are all distinct. Using ideas from random graph theory, we prove that the density of Costas arrays among permutation matrices decays exponentially, solving a core theoretical problem for Costas arrays. Many intriguing questions remain open in this interdisciplinary topic at the intersection of combinatorics, probability and enumeration. Based on joint work with Bill Correll, Jr and Christopher Swanson.
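The 'distinct difference vectors' condition is straightforward to check, which makes the density easy to measure by brute force for tiny orders (an illustration of the definition, not of the proof technique):

```python
from itertools import combinations, permutations

def is_costas(perm):
    """A permutation p, viewed as dots at positions (i, p[i]), is a
    Costas array iff the difference vectors (j - i, p[j] - p[i])
    over all pairs i < j are pairwise distinct."""
    vectors = [(j - i, perm[j] - perm[i])
               for i, j in combinations(range(len(perm)), 2)]
    return len(vectors) == len(set(vectors))

print(is_costas([2, 1, 3, 0]))                            # True
print(sum(is_costas(p) for p in permutations(range(4))))  # 12 of the 24
```

The exponential decay proved in the talk says that this fraction (12/24 at order 4) tends to zero exponentially fast as the order grows.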