Stanford Theory Seminar


Stanford Theory Seminar (formerly known as Stanford Algorithms Seminar/AFLB) is usually held in Gates 463A.
Faculty Contacts: Aviad Rubinstein, Li-Yang Tan
Student Contact: Bruce Spang

Wednesday, October 9

Muli Safra (Tel Aviv University)

2-to-2 Games is NP-Hard

The PCP theorem characterizes the computational class NP so as to facilitate proofs that almost all approximation problems are NP-hard to approximate within a ratio only slightly better than the one known to be efficiently achievable. It can be viewed as a significant strengthening of the Cook-Levin theorem, which states that the problem of deciding the satisfiability of a given CNF formula is NP-hard. The PCP characterization asserts that even coming close to satisfying a given formula is NP-hard.

A fundamental open question in PCP theory was whether a special type of PCP, namely 2-to-2 Games, is still NP-hard. The conjecture stating that it is NP-hard is a variant of Khot's infamous Unique-Games Conjecture.
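
As background, a standard formulation of the conjecture (paraphrased from the literature rather than quoted from the abstract): for every constant $\varepsilon > 0$, given a 2-to-2 game $G$, it is NP-hard to distinguish $\mathrm{val}(G) \geq 1 - \varepsilon$ from $\mathrm{val}(G) \leq \varepsilon$, where $\mathrm{val}(G)$ is the largest fraction of constraints satisfiable by a single assignment. The qualifier "imperfect completeness" below refers to settling for $1 - \varepsilon$, rather than $1$, in the first case.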

A recent line of study pursued a new attack on the 2-to-2 Games Conjecture. The approach is based on a mathematical object called the Grassmann Graph, and relies on the study of edge-expansion in this graph (which requires Analysis of Boolean functions on steroids): more specifically, the study of sets of vertices in the Grassmann Graph that contain even a tiny fraction of the edges incident to them.
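
(For orientation, a definition paraphrased from the literature rather than from the abstract: the vertices of the Grassmann Graph are the $\ell$-dimensional subspaces of $\mathbb{F}_2^n$, with two subspaces adjacent when their intersection has dimension $\ell - 1$; the edge-expansion of a vertex set is the fraction of its incident edges that leave the set.)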

A characterization of such sets was recently accomplished, completing a proof of the 2-to-2 Games Conjecture (albeit with imperfect completeness).

The talk discusses the 2-to-2 Games Conjecture, its implications, and this line of study.


Wednesday, October 2

Dean Doron (Stanford)

Nearly Optimal Pseudorandomness From Hardness

Existing proofs that deduce P=BPP from circuit lower bounds convert randomized algorithms to deterministic ones with a large polynomial slowdown in running time. In this talk, we will see that if we assume exponential lower bounds against nondeterministic circuits, we can convert any randomized algorithm running in time T to a deterministic one running in time T^{2+𝛼} for an arbitrarily small constant 𝛼. Under complexity-theoretic assumptions, such a slowdown is nearly optimal. Our result follows from a new pseudorandom generator whose construction uses, among other ideas, a new connection between pseudoentropy generators and locally list-recoverable codes.
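
To illustrate the generic template that such hardness-to-randomness results instantiate (a minimal sketch, with a hypothetical prg standing in for the generator constructed in the paper): the deterministic simulation runs the randomized algorithm on the generator's output for every possible seed and takes a majority vote, so the slowdown is governed by the seed length.

    from collections import Counter
    from itertools import product

    def derandomize(randomized_alg, x, prg, seed_length):
        # Deterministically simulate randomized_alg on input x by enumerating
        # every seed of a pseudorandom generator and taking a majority vote.
        # prg is hypothetical here: it expands a short seed into a string of
        # pseudorandom coins long enough for one run of randomized_alg.
        votes = Counter()
        for seed in product((0, 1), repeat=seed_length):
            coins = prg(seed)                      # expand seed to coin string
            votes[randomized_alg(x, coins)] += 1   # one run per seed
        return votes.most_common(1)[0][0]          # majority answer

A seed of length roughly (1+𝛼) log T gives T^{1+𝛼} seeds, each simulated in time about T, which is how a bound of the form T^{2+𝛼} arises.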

Joint work with Dana Moshkovitz, Justin Oh and David Zuckerman.


Friday, September 27

Omri Ben-Eliezer (Tel Aviv University)

Finding monotone patterns in sublinear time

We consider efficient algorithms, operating in the property testing regime, for finding monotone subsequences in arrays. Namely, given an array of n real numbers that is far from free of monotone increasing subsequences of length k, the goal is to find such an increasing subsequence with good probability using as few queries as possible. This is a generalization of monotonicity testing (which corresponds to the case k=2) and is closely related to the classical longest increasing subsequence problem.
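
To make the query model concrete, here is a toy non-adaptive sampler (an illustrative sketch only, not the algorithm from the talk): all query positions are fixed before any value is read, and the sampled values are then scanned for a length-k increasing subsequence.

    import random

    def naive_nonadaptive_search(query, n, k, num_queries):
        # Toy non-adaptive search for a length-k increasing subsequence:
        # choose all positions up front (the non-adaptive model), read them
        # in one batch, then run the classical quadratic longest-increasing-
        # subsequence DP on the sample. Assumes num_queries <= n.
        positions = sorted(random.sample(range(n), num_queries))
        values = [query(i) for i in positions]
        best = [1] * num_queries        # best[j]: LIS length ending at j
        prev = [-1] * num_queries       # prev[j]: predecessor for reconstruction
        for j in range(num_queries):
            for i in range(j):
                if values[i] < values[j] and best[i] + 1 > best[j]:
                    best[j], prev[j] = best[i] + 1, i
            if best[j] >= k:            # found a witness; walk it back
                seq, t = [], j
                while t != -1:
                    seq.append(positions[t])
                    t = prev[t]
                return list(reversed(seq))   # exactly k witness indices
        return None                     # no witness in this sample

For instance, naive_nonadaptive_search(arr.__getitem__, len(arr), 3, 200) looks for a 3-increasing pattern using 200 fixed queries; the point of the talk is that far fewer, carefully placed, queries suffice.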

In this talk, we shall settle the non-adaptive query complexity of the above problem for any fixed k: the number of non-adaptive queries required to find such an increasing subsequence is $\Theta((\log n)^{\lfloor \log_2 (k) \rfloor})$, where the underlying constant depends only on k and the distance from freeness.

The main ingredient of the proof is an "abundance versus regularity" structural dichotomy that we establish for arrays that are far from free of increasing subsequences. We define a notion of a hierarchical gap profile for subsequences of the array, and show that any such array either contains many long and easy-to-find increasing subsequences, or it contains a very large collection of disjoint increasing subsequences with the exact same gap profile.

Joint work with C. Canonne, S. Letzter, E. Waingarten, to appear in FOCS'19.


Wednesday, September 25

Gates 463A, 3:00 PM

David Wajc (CMU)

Online Matching with General Arrivals

The online matching problem was introduced by Karp, Vazirani and Vazirani nearly three decades ago. In that seminal work, they studied this problem in bipartite graphs with vertices arriving only on one side, and presented optimal deterministic and randomized algorithms for this setting. In comparison, more general arrival models, such as edge arrivals and general vertex arrivals, have proven more challenging, and positive results are known only for various relaxations of the problem. In particular, even the basic question of whether randomization allows one to beat the trivially optimal deterministic competitive ratio of 1/2 for either of these models was open. In this work, we resolve this question for both these natural arrival models, and show the following.

  1. For edge arrivals, randomization does not help — no randomized algorithm is better than 1/2 competitive.
  2. For general vertex arrivals, randomization helps — there exists a randomized (1/2+Ω(1))-competitive online matching algorithm.
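
For reference, the deterministic 1/2-competitive baseline under edge arrivals is the folklore greedy algorithm, sketched below (this is the benchmark being compared against, not the paper's randomized algorithm): accept an arriving edge exactly when both endpoints are still unmatched. Every edge of an optimal matching is then either taken or blocked by an adjacent greedy edge, which yields the 1/2 guarantee.

    def greedy_online_matching(edge_stream):
        # Folklore greedy matching under edge arrivals: accept an edge iff
        # both endpoints are currently unmatched. Decisions are irrevocable,
        # as the online model requires. Deterministic and 1/2-competitive.
        matched = set()
        matching = []
        for u, v in edge_stream:
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching

For example, on the stream (2, 3), (1, 2), (3, 4) greedy accepts only (2, 3), while the optimum matches both (1, 2) and (3, 4); this shows the ratio 1/2 is tight for greedy.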

Based on joint work with Buddhima Gamlath, Michael Kapralov, Andreas Maggiori and Ola Svensson at EPFL, to appear in FOCS 2019.

A preprint can be found here: http://www.cs.cmu.edu/~dwajc/pdfs/gamlath19.pdf