Invited talks:

Dion Gijswijt (TU Delft) - A solution to the cap set problem

A subset of GF(3)^n, the n-dimensional vector space over the field of 3 elements, is called a cap set if it does not contain an affine line. In dimension 4, this relates to the famous card game SET: there, a cap set corresponds to a collection of cards without a SET. The cap set problem is concerned with upper bounds on the size of cap sets. A question raised by Frankl, Graham and Rödl is: do cap sets have exponentially small density? A positive answer implies the Erdős-Szemerédi sunflower conjecture. On the other hand, several approaches to fast matrix multiplication rely on a negative answer. In this talk we give a positive answer by showing that the maximum size of a cap set is O(c^n), where c ≈ 2.756. The proof is surprisingly simple and uses the polynomial method, building upon work of Croot, Lev and Pach. This is joint work with Jordan Ellenberg.
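As a small illustration (not part of the talk): three distinct points of GF(3)^n lie on a common affine line exactly when their coordinatewise sum is 0 mod 3, so the cap set condition can be checked by brute force in tiny dimensions. A sketch in Python:

```python
from itertools import combinations, product

def is_cap_set(points):
    # Three distinct points of GF(3)^n form an affine line exactly when
    # their coordinatewise sum is 0 mod 3; a cap set has no such triple.
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

def largest_cap(n):
    # Exhaustive search over subsets; feasible only for very small n,
    # since GF(3)^n has 3^n points.
    space = list(product(range(3), repeat=n))
    for size in range(len(space), 0, -1):
        for subset in combinations(space, size):
            if is_cap_set(subset):
                return list(subset)
    return []
```

For n = 2 this search recovers the classical answer 4; the talk's bound O(c^n) concerns the asymptotics as n grows, where brute force is hopeless.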

Jörg Jahnel (U Siegen) - Moduli spaces and the inverse Galois problem for cubic surfaces

I will report on our project, joint with A.-S. Elsenhans, to construct cubic surfaces over Q with prescribed Galois operation on the 27 lines. This involves the application of classical algebro-geometric constructions such as J.J. Sylvester's pentahedral form for a cubic surface, A. Clebsch's 5 fundamental invariants of a cubic surface, and A. Coble's 40 irrational invariants for a marked cubic surface.

Quentin Louveaux (Université de Liège) - The k-Frobenius number: bounds and algorithms

The Frobenius number is a well-known problem in number theory and integer optimization. Given a set of positive integers, the problem is to find the largest integer b that cannot be represented as a nonnegative integral combination of the given integers. The k-Frobenius number is a generalization in which we ask for the largest integer b that cannot be represented by at least k different nonnegative integral combinations of the given integers. In this talk we survey different algorithms to find these numbers. We also discuss their known properties, in particular the known lower and upper bounds.
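For concreteness (this is a generic illustration, not one of the talk's algorithms), the classical k = 1 case can be computed via the residue-class view: for m the smallest given integer, the least representable number in each residue class mod m is a shortest-path distance on Z/mZ, and the Frobenius number is the largest of these values minus m. A sketch in Python:

```python
import heapq
from functools import reduce
from math import gcd

def frobenius(numbers):
    # Residue-class (Apery set) method: for m = min(numbers), the smallest
    # representable integer in each class mod m is a shortest-path distance
    # on Z/mZ; the Frobenius number is the largest such distance minus m.
    # (The k-Frobenius variant would track the k smallest values per class.)
    assert reduce(gcd, numbers) == 1, "the given integers must be coprime"
    m = min(numbers)
    dist = [0] + [float("inf")] * (m - 1)
    heap = [(0, 0)]
    while heap:
        d, r = heapq.heappop(heap)
        if d > dist[r]:
            continue  # stale heap entry
        for a in numbers:
            if a != m:
                nd, nr = d + a, (r + a) % m
                if nd < dist[nr]:
                    dist[nr] = nd
                    heapq.heappush(heap, (nd, nr))
    return max(dist) - m

print(frobenius([3, 5]))      # 7 = 3*5 - 3 - 5
print(frobenius([6, 9, 20]))  # 43 (the classical "McNugget" example)
```

For two coprime integers a, b this recovers the closed form ab - a - b; no such formula is known for three or more integers, which is where the algorithmic questions of the talk begin.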

Oleg Pikhurko (U Warwick) - Measurable equidecomposition via augmenting paths

Two subsets A,B of R^n are called equidecomposable if one can partition A into finitely many parts and move them using isometries to form a partition of B. One example is the famous Banach-Tarski Paradox that a unit ball can be doubled if the dimension n is at least 3. We present sufficient criteria for the existence of equidecompositions with Lebesgue measurable parts, by running an analytic version of the augmenting path algorithm. This in particular gives measurable versions of Tarski's circle squaring and Hilbert's third problem. Joint work with András Máthé and Łukasz Grabowski.

Reem Yassawi (Trent U, Canada) - Linear cellular automata and p-automatic sequences

Let p be a prime number. A p-automatic sequence is one that is generated by a deterministic finite-state machine which is fed base-p expansions of natural numbers. Cobham's theorem characterizes these sequences as the letter-to-letter projections of fixed points of length-p substitutions: these are generated by a morphism on some finite alphabet A, which replaces letters with words. These sequences are a rich source of combinatorial and symbolic dynamical systems, whose dynamical, spectral and measurable properties have been studied extensively.

If one further assumes that A has a field structure, then the beautiful theorems of Christol and Furstenberg give other characterizations of p-automatic sequences. We describe these characterizations and, using them, show that a sequence over a finite field F of characteristic p is p-automatic if and only if it occurs as a column of the spacetime diagram, with eventually periodic initial conditions, of a linear cellular automaton with `memory' over F_q. We compare this phenomenon to that of spacetime diagrams whose initial condition is m-random for certain measures m. Our proofs are constructive. As a corollary, we show that the dynamical system generated by a length-p substitution can be realized as a topological factor of a linear cellular automaton. This is joint work with Eric Rowland (Hofstra University).
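A toy example of the automaton and substitution descriptions above (an illustration added here, not an example from the talk): the Thue-Morse sequence is 2-automatic, produced by a two-state machine reading base-2 digits, and is equally the fixed point of the length-2 substitution 0 -> 01, 1 -> 10.

```python
def thue_morse_automaton(n):
    # 2-automatic description: a 2-state DFA reads the base-2 digits of n;
    # the state tracks the parity of the 1-bits seen so far.
    state = 0
    for bit in bin(n)[2:]:
        state ^= int(bit)
    return state

def thue_morse_substitution(length):
    # Cobham-style description: iterate the length-2 substitution
    # 0 -> 01, 1 -> 10 starting from 0, and read off a prefix.
    rule = {0: (0, 1), 1: (1, 0)}
    word = [0]
    while len(word) < length:
        word = [c for letter in word for c in rule[letter]]
    return word[:length]

# Both descriptions generate the same sequence: 0 1 1 0 1 0 0 1 ...
```

That the two generating mechanisms agree on every prefix is exactly the content of Cobham's characterization in this smallest case.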

Contributed talks:

Richard Griffon (UL) - An analogue of the Brauer-Siegel theorem for Fermat surfaces over finite fields

The classical Brauer-Siegel theorem gives upper and lower bounds on the product of the class number times the regulator of units of a number field, in terms of its discriminant. These bounds can be seen as a measure of the "arithmetic complexity" of number fields. In this talk, I'd like to report on a work in progress in a more geometric setting. Consider a projective smooth surface S defined over a finite field k: assuming that its Brauer group is finite, one can form the product of the order of this group by the determinant of a basis of the Néron-Severi group of S over k with respect to the intersection form. We are interested in bounding this quantity in terms of the geometric genus of S, in the case when S is a Fermat surface. When the genus grows to infinity, the bounds we prove provide a complete analogue of the Brauer-Siegel theorem for the Fermat surfaces. The proof is mostly "analytic", in that it boils down to estimates for the zeta functions of these surfaces.

Peter Koymans (UL) - An application of Vinogradov's method to class groups

We show that the density of prime numbers p for which the class group of the imaginary quadratic field Q(\sqrt{-p}) has an element of order 16 is 1/16, as predicted by the Cohen-Lenstra heuristics. Our proof relies on a method due to Vinogradov, which we will explain during the talk. Joint work with Djordjo Milovic.

Bart de Keijzer - The Ground-Set-Cost Budgeted Maximum Coverage Problem

The weighted budgeted max-coverage problem is a well-known optimization problem where we are given a budget and a collection of subsets of a finite ground set (the elements of which we refer to as the vertices); each subset has a certain cost, and each vertex has a certain profit. The goal is to select a subcollection of the subsets such that the total cost does not exceed the budget and the total profit of the covered elements is maximized. We study a natural and strictly harder variant of this problem where the costs are on the vertices instead of on the subsets, so that each vertex has both a profit and a cost. We show that this problem is APX-hard, and prove that for the following special cases of the problem there exist constant-factor approximation algorithms that run in polynomial time. When the size of the sets is bounded by a constant, there exists a (1−1/sqrt(e))/2-approximation algorithm. When there is a bound d on the degree of each vertex (when the set system is viewed as a hypergraph), there exists an O(d^2)-approximation algorithm. When the incidence graph of the set system is a forest, the problem can be solved to optimality in polynomial time. Besides being a natural generalization of the well-studied max-coverage problem, this problem arises naturally in the context of bid optimization in sponsored search auctions, such as Google's AdWords system.
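For orientation, here is a minimal sketch of the standard greedy heuristic for the classical problem (costs on the sets); it is not one of the algorithms from the talk, whose variant instead puts costs on the vertices.

```python
def greedy_budgeted_coverage(sets, cost, profit, budget):
    # Classical budgeted max-coverage greedy: repeatedly pick the affordable
    # set maximizing marginal profit per unit cost. Illustrative only; the
    # (1 - 1/e)-type guarantee additionally compares the greedy solution
    # against the best single affordable set. Zero-cost sets are skipped
    # here for simplicity.
    covered, chosen, remaining = set(), [], budget
    while True:
        best, best_ratio = None, 0.0
        for i, s in enumerate(sets):
            if i in chosen or cost[i] > remaining or cost[i] == 0:
                continue
            gain = sum(profit[v] for v in s - covered)
            if gain / cost[i] > best_ratio:
                best, best_ratio = i, gain / cost[i]
        if best is None:
            break
        chosen.append(best)
        covered |= sets[best]
        remaining -= cost[best]
    return chosen, sum(profit[v] for v in covered)
```

In the vertex-cost variant studied in the talk, the cost of a candidate set depends on which of its vertices are already covered, which is what makes the problem strictly harder.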

News:

Full professor position in Discrete Mathematics in Delft
1-11-2018

The Delft Institute of Applied Mathematics, Delft University of Technology, seeks a full Professor in the field of Discrete Mathematics. A description of the position can be found here. The deadline for applications is January 15, 2019.

ERC Starting grant for Daniel Dadush
26-9-2018

Daniel Dadush (CWI) has been awarded an ERC Starting Grant of 1.5 million euro for his proposal ‘Towards a Quantitative Theory of Integer Programming’. With this grant, Dadush aims to revolutionize the understanding of integer programming (IP), the most popular method used today for finding optimal solutions to real-world optimization problems.

VICI grant for Nikhil Bansal
26-9-2018

Nikhil Bansal has been awarded a Vici grant of 1.5 million euro. He is one of 35 academics to receive this grant from NWO in 2018. Bansal aims to use his grant to develop new algorithmic methods to make discrete decisions in a continuous way. He expects this to lead to applications in the fields of logistics, bio-informatics, chip design and machine learning.