Stanford Theory Seminar


Stanford Theory Seminar (formerly known as Stanford Algorithms Seminar/AFLB) is usually held in Gates 498 or the Theory Lounge (Gates 463A).
Faculty Contact: Ryan Williams
Student Contact: Huacheng Yu

September 20, 2013

Gates 463A, 4:15 PM

Jelani Nelson (Harvard)

OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings

Abstract: An "oblivious subspace embedding" (OSE) is a distribution over matrices S such that for any low-dimensional subspace V, with high probability over the choice of S, ||Sx||_2 approximately equals ||x||_2 (up to 1+eps multiplicative error) for all x in V simultaneously. Sarlos in 2006 pioneered the use of OSE's for speeding up algorithms for several numerical linear algebra problems. Problems that benefit from OSE's include: approximate least squares regression, low-rank approximation, l_p regression, approximating leverage scores, and constructing good preconditioners.

We give a class of OSE distributions we call "oblivious sparse norm-approximating projections" (OSNAP) that yield matrices S with few rows that are also extremely sparse, yielding improvements over recent work in this area of Clarkson and Woodruff (STOC '13). In particular, we show S can have O(d^2) rows and 1 non-zero entry per column, or even O(d^{1+gamma}) rows and poly(1/gamma) non-zero entries per column for any desired constant gamma>0. When applying the latter bound for example to the approximate least squares regression problem of finding x to minimize ||Ax - b||_2 up to a constant factor, where A is n x d for n >> d, we obtain an algorithm with running time O(nnz(A) + d^{omega + gamma}). Here nnz(A) is the number of non-zero entries in A, and omega is the exponent of square matrix multiplication.
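To make the sketch-and-solve pattern concrete, here is a minimal illustration (my own sketch, not the authors' code) of the 1-non-zero-per-column regime: each column of S gets a single random +/-1 entry, S has O(d^2) rows, and the least squares problem is solved on the much smaller sketched pair (SA, Sb). The problem sizes, the constant 4 in front of d^2, and the noise level are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 30                               # a tall-and-skinny demo instance
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Sparse embedding with one nonzero per column of S: column i of S has a random
# sign in a random row.  Applying S costs O(nnz(A)): each row of A is scattered
# (with its sign) into one of the m sketch rows, so S is never formed explicitly.
m = 4 * d * d                                  # O(d^2) rows, matching the 1-nonzero-per-column regime
rows = rng.integers(0, m, size=n)              # which sketch row each input row hashes to
signs = rng.choice([-1.0, 1.0], size=n)

SA = np.zeros((m, d))
Sb = np.zeros(m)
np.add.at(SA, rows, A * signs[:, None])
np.add.at(Sb, rows, b * signs)

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
x_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]   # solve the small sketched problem instead

print("exact residual   :", np.linalg.norm(A @ x_exact - b))
print("sketched residual:", np.linalg.norm(A @ x_sketch - b))  # within a small constant factor w.h.p.
```

The scatter-add is exactly the "apply S in nnz(A) time" step; the sketched system has only O(d^2) rows, so the final solve is cheap compared to the original n x d problem.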

Our main technical result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: i.e. we show that for any U in R^{n x d} with orthonormal columns and random sparse S with appropriately chosen entries and sufficiently many rows, all singular values of SU lie in the interval [1-eps, 1+eps] with good probability.

Joint work with Huy Lê Nguyễn (Princeton).


October 10, 2013

Gates 463A, 4:15 PM

Nikhil Srivastava (MSR India)

Interlacing Families, Mixed Characteristic Polynomials, and the Kadison-Singer Problem

Abstract: The Kadison-Singer problem is a question in operator theory which arose while trying to make certain assertions in Dirac's formulation of quantum mechanics mathematically rigorous. Over several decades, this question was shown to be equivalent to several discrepancy-type conjectures about finite matrices, with applications in signal processing, harmonic analysis, and computer science. We prove a strong variant of the conjecture due to Nik Weaver, which says that every set of vectors satisfying some mild conditions can be divided into two sets each of which approximates the whole set spectrally. The proof is based on two significant ingredients: a new existence argument, which reduces the problem to bounding the roots of the expected characteristic polynomial of a certain random matrix derived from the vectors, and a systematic method to prove sharp bounds on the roots of such polynomials. The techniques are mostly elementary and based on tools from the theory of real stable polynomials.

Joint work with Adam Marcus and Dan Spielman.


October 24, 2013

Gates 463A, 4:15 PM

Omri Weinstein (Princeton)

Information complexity and applications

Over the past three decades, communication complexity has found applications in nearly every area of computer science, and constitutes one of the few known techniques for proving unconditional lower bounds. Developing tools in communication complexity is therefore a promising approach for making progress in other computational models such as circuit complexity, streaming, data structures, and privacy, to mention a few.

One striking example of such a tool is information theory, introduced by Shannon in the late 1940s in the context of the one-way data transmission problem. Shannon's work revealed the intimate connection between information and communication, namely, that the amortized transmission cost of a random message is equal to the amount of information it contains. This compression theory, however, does not readily carry over to interactive setups, where two (or more) parties must engage in a multi-round conversation to accomplish a task.
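As a concrete (and standard) reminder of what that one-way statement says, not something from the talk itself: an optimal prefix code for a source is within one bit of its entropy, and coding blocks of symbols amortizes the slack away, so the per-symbol transmission cost approaches the information content. The small two-symbol source below is an arbitrary example.

```python
import heapq
from itertools import product
from math import log2

def entropy(p):
    return -sum(q * log2(q) for q in p.values() if q > 0)

def huffman_expected_length(p):
    """Expected codeword length (in bits) of a Huffman code for distribution p."""
    # Each heap entry: (probability, tiebreak, {symbol: depth so far}).
    heap = [(q, i, {sym: 0}) for i, (sym, q) in enumerate(p.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        q1, _, d1 = heapq.heappop(heap)
        q2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (q1 + q2, counter, merged))
        counter += 1
    lengths = heap[0][2]
    return sum(p[s] * lengths[s] for s in p)

p = {'a': 0.9, 'b': 0.1}                                        # a skewed two-symbol source
print("entropy H(X)            :", round(entropy(p), 4))        # ~0.469 bits
print("Huffman, single symbols :", huffman_expected_length(p))  # 1 bit: far from H(X)

# Encode blocks of k symbols at a time: per-symbol cost approaches the entropy.
for k in (2, 4, 8):
    block = {}
    for syms in product(p, repeat=k):
        q = 1.0
        for s in syms:
            q *= p[s]
        block[syms] = q
    print(f"Huffman, blocks of {k}   :", round(huffman_expected_length(block) / k, 4))
```

The point of the talk is precisely that this clean amortization picture does not transfer to multi-round conversations, which is what information complexity is designed to handle.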

The goal of our ongoing research is to extend this theory, develop the right tools, and understand how information behaves in interactive setups, such as the communication complexity model.

In this introductory talk, I will give an overview of Information Complexity, an interactive analogue of Shannon's theory. I will describe some of the main open problems in this emerging field, and some of the interesting applications we found, including an exact bound on the communication complexity of the Set Disjointness function (~0.48n), and how information helped us understand the limits of parallel computation.


October 30, 2013

Gates 463A, 10:00AM

Aleksander Madry (Ecole Polytechnique Federale de Lausanne)

Navigating Central Path with Electrical Flows: from Flows to Matchings, and Back

We describe a new way of employing electrical flow computations to solve the maximum flow and minimum s-t cut problems. This approach draws on ideas underlying path-following interior-point methods (IPMs), a powerful tool in convex optimization, and exploits a certain interplay between maximum flows and bipartite matchings.

The resulting algorithm provides improvements over some long-standing running time bounds for the maximum flow and minimum s-t cut problems, as well as the closely related bipartite matching problem. Additionally, we establish a connection between the primal-dual structure of electrical flows and the convergence behavior of IPMs when applied to flow problems. This connection enables us to overcome the notorious Omega(sqrt(m))-iterations convergence barrier that all known interior-point methods suffer from.
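The primitive behind this line of work, an electrical flow, takes a single Laplacian system solve. The toy computation below (my illustration, not the algorithm from the talk) sends one unit of current from s to t on a small unit-resistance graph and reads the edge currents off the potential differences.

```python
import numpy as np

# Edges of a small undirected graph with unit resistances (arbitrary demo instance).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n = 4
s, t = 0, 3

# Graph Laplacian L = B^T B, where B is the edge-vertex incidence matrix.
B = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):
    B[i, u], B[i, v] = 1.0, -1.0
L = B.T @ B

# Electrical flow of one unit from s to t: solve L * phi = chi for the demand vector chi.
chi = np.zeros(n)
chi[s], chi[t] = 1.0, -1.0
phi = np.linalg.pinv(L) @ chi        # potentials (pinv, since L is singular on the all-ones vector)
flow = B @ phi                       # current on each edge = potential difference

for (u, v), f in zip(edges, flow):
    print(f"edge ({u},{v}): flow {f:+.3f}")
net_out_of_s = sum(f for (u, v), f in zip(edges, flow) if u == s) - \
               sum(f for (u, v), f in zip(edges, flow) if v == s)
print("net current out of s:", round(net_out_of_s, 6))   # ~1.0
```

Fast max-flow algorithms replace the dense pseudoinverse with a near-linear-time Laplacian solver and reweight the edge resistances from one IPM iteration to the next; the sketch shows only the single-solve primitive.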


November 14, 2013

Gates 463A, 4:15PM

Prasad Raghavendra (Berkeley)

Approximate constraint satisfaction requires large LP relaxations

We prove super-polynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems. In particular, we show that no polynomial-sized LP can achieve better than a 1/2-approximation for MAX-CUT, a 7/8-approximation for MAX-3SAT, or a 3/4-approximation for MAX-2SAT.

There has been a recent sequence of exciting lower bounds on the size of extended formulations for various polytopes that arise in combinatorial optimization. Unfortunately, these prior results are essentially based on one particular family of "high-complexity" matrices coming from the set disjointness problem. We show that for max-CSP problems, general polynomial-sized LPs are exactly as powerful as LPs arising from a constant number of rounds of the Sherali-Adams hierarchy. This gives us access to a wealth of non-trivial constructions.

Joint work with Siu On Chan, James Lee, and David Steurer.


November 21, 2013

Gates 463A, 4:15PM

Matt Weinberg (MIT)

Algorithms for Strategic Agents

In traditional algorithm design, no incentives come into play: the input is given, and your algorithm must produce a correct output. How much harder is it to solve the same problem when the input is not given directly, but instead reported by strategic agents (who may choose to lie) whose incentives may not align with your own?

This talk will present recent results on black-box reductions from mechanism to algorithm design. That is, we show how to solve optimization problems while properly incentivizing strategic agents using only black-box access to an algorithm for (a modified version of) the same optimization problem when the input is directly given. We also apply our reduction to the problem of job scheduling on unrelated machines, obtaining a truthful mechanism whose makespan is within a factor of 10.5 of optimal. This talk will not assume any knowledge of economics or game theory.

Based on joint works with Yang Cai and Costis Daskalakis.


January 23, 2014

Gates 463A, 4:15PM

Shiri Chechik (MSR Silicon Valley)

Compact Routing Schemes with Improved Stretch

Routing is one of the most fundamental problems in the area of distributed networking.

The goal in this problem is to construct a distributed mechanism that allows any node in the network to send packets of data to any other node efficiently.

As in all distributed algorithms, a routing scheme runs locally on every node of the network allowing it to forward arriving data while utilizing local information that is stored at the node itself. This local information is commonly referred to as the routing table of the node. We say that a routing scheme is compact if the size of the routing tables is sub-linear in the number of nodes.

There are usually two key concerns in designing routing schemes. The first concern is to allow routing on paths that are as close as possible to the shortest paths, and the second concern is to minimize the size of the routing tables.

Much of the work on designing compact routing schemes focuses on the tradeoff between these two concerns.
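To see the tradeoff numerically, here is a deliberately naive landmark-based scheme (my own toy, far cruder than the Thorup-Zwick scheme the talk improves on): each node stores distances to roughly sqrt(n) random landmarks plus a small ball of nearby nodes, and a packet addressed to anyone outside the ball is sent via the destination's nearest landmark. The code only reports table sizes and empirical stretch; the test graph, the landmark count, and the ball rule are illustrative choices.

```python
import math
import random
import networkx as nx

def toy_compact_routing_stats(G, seed=0):
    """Table sizes and empirical stretch for a naive landmark routing scheme."""
    rng = random.Random(seed)
    n = G.number_of_nodes()
    landmarks = rng.sample(list(G.nodes), max(1, round(math.sqrt(n))))
    dist = dict(nx.all_pairs_shortest_path_length(G))

    nearest = {v: min(landmarks, key=lambda l: dist[v][l]) for v in G.nodes}
    # A node's "ball": nodes strictly closer to it than its nearest landmark.
    ball = {v: {w for w in G.nodes if dist[v][w] < dist[v][nearest[v]]} for v in G.nodes}
    table_sizes = [len(landmarks) + len(ball[v]) for v in G.nodes]

    stretches = []
    for u in G.nodes:
        for w in G.nodes:
            if u == w:
                continue
            if w in ball[u]:
                routed = dist[u][w]                                 # route directly inside the ball
            else:
                routed = dist[u][nearest[w]] + dist[nearest[w]][w]  # route via w's nearest landmark
            stretches.append(routed / dist[u][w])
    return max(table_sizes), max(stretches)

G = nx.connected_watts_strogatz_graph(200, 4, 0.3, seed=1)   # arbitrary connected test graph
tables, stretch = toy_compact_routing_stats(G)
print(f"n = {G.number_of_nodes()}, max table size = {tables}, worst-case stretch = {stretch:.2f}")
```

Even this crude scheme keeps tables well below n entries while paying only a small constant factor in path length, which is the shape of the table-size versus stretch tradeoff the talk is about.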

In this talk I am going to discuss the first improvement to the compact routing scheme of Thorup and Zwick [SPAA'01].


February 6, 2014

Gates 463A, 4:15PM

Uri Zwick (Tel Aviv University)

Stochastic and non-stochastic games - a survey

Stochastic games, introduced by Shapley in 1953, form an intriguing class of games. The talk will focus mainly on 2-player, zero-sum, turn-based, stochastic games (2TBSGs). (Turn-based here means that the players do not move simultaneously, making the game a perfect information game.) Such games are closely related to the Simple Stochastic Games considered by Condon in 1992. The 1-player versions of these games are commonly known as Markov Decision Processes (MDPs).

The decision problem corresponding to 2TBSGs is perhaps the most natural combinatorial problem that lies in NP intersect co-NP and is not known to be in P. The fastest algorithm for solving such games is a randomized subexponential-time algorithm obtained by casting the problem as an LP-type problem and using the randomized subexponential algorithms of Kalai and of Matoušek, Sharir, and Welzl. The question whether (non-discounted) 2TBSGs can be solved in polynomial time is an intriguing open problem.
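As a concrete anchor for the definitions (not from the talk): in the discounted variant, value iteration converges because the one-step operator, a max over actions at the first player's states and a min at the second player's, is a gamma-contraction, and for a fixed discount it also yields optimal positional strategies. The tiny instance below is made up; the open problem discussed in the talk concerns exact polynomial-time solvability, especially in the non-discounted case, where such iterative arguments fall short.

```python
import numpy as np

# A tiny discounted 2-player turn-based stochastic game (made-up instance).
# States 0,1 belong to the MAX player, states 2,3 to the MIN player.
# actions[s] is a list of (reward, transition-probability-vector) pairs.
gamma = 0.9
owner = ['max', 'max', 'min', 'min']
actions = [
    [(1.0, [0.0, 0.5, 0.5, 0.0]), (0.0, [0.0, 0.0, 0.0, 1.0])],   # state 0
    [(2.0, [0.0, 0.0, 1.0, 0.0]), (0.5, [1.0, 0.0, 0.0, 0.0])],   # state 1
    [(0.0, [0.5, 0.5, 0.0, 0.0]), (-1.0, [0.0, 0.0, 1.0, 0.0])],  # state 2
    [(0.0, [0.0, 0.0, 0.5, 0.5]), (1.0, [1.0, 0.0, 0.0, 0.0])],   # state 3
]

v = np.zeros(len(owner))
for _ in range(1000):          # value iteration: a gamma-contraction, so it converges
    q = [[r + gamma * np.dot(p, v) for r, p in actions[s]] for s in range(len(owner))]
    v_new = np.array([max(q[s]) if owner[s] == 'max' else min(q[s]) for s in range(len(owner))])
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

print("game values:", np.round(v, 3))
print("positional strategies:",
      [int(np.argmax(q[s])) if owner[s] == 'max' else int(np.argmin(q[s])) for s in range(len(owner))])
```

Setting all states' owner to 'max' turns the same loop into plain value iteration for an MDP, the 1-player case mentioned above.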

In the talk I will survey several recent and not-so-recent results related to these types of games and mention many open problems.


February 20, 2014

Gates 463A, 4:15PM

Dana Moshkovitz (MIT)

An Approach To The Sliding Scale Conjecture Via Parallel Repetition For Low Degree Testing

The Sliding Scale Conjecture in PCP states that there are PCP verifiers with a constant number of queries and soundness error that is exponentially small in the randomness of the verifier and the length of the prover's answers. The Sliding Scale Conjecture is one of the oldest open problems in PCP, and it implies hardness of approximation up to polynomial factors for problems like Max-CSP (with polynomial-sized alphabet), Directed-Sparsest-Cut and Directed-Multi-Cut.

In this work we prove: 1. The Sliding Scale Conjecture follows from a construction of a low degree test whose soundness error is exponential in the randomness of the verifier. 2. A parallel repetition theorem for low degree testing that decreases the error probability of a low degree test exponentially. 3. Applying the parallel repetition theorem to an existing low degree test, we get the low degree test with the smallest error known to date.

The missing piece for proving the Sliding Scale Conjecture is a derandomization of the parallel repetition theorem. This seems plausible given the algebraic structure of the low degree testing problem, which was utilized for derandomization in the past.


February 27, 2014

Gates 463A, 4:15PM

Sam Buss (UCSD)

Towards (Non-)Separations in Propositional Proof Complexity

This talk will discuss recent progress in understanding the complexity of propositional proof systems. The first part will discuss fragments of resolution used by SAT solvers based on CDCL or DPLL with clause learning. These SAT solvers are essentially searching for resolution proofs, and it is known that, with restarts, CDCL can polynomially simulate resolution. Without restarts, this is an open question. Several candidates have been proposed in recent years for separating CDCL and resolution, including Stone tautologies and guarded graph principles. However, it has recently been shown that these candidates all have short CDCL refutations, and do not provide a superpolynomial separation of CDCL from resolution.

The second part of the talk will concern Frege and extended Frege proofs. These are “textbook” propositional proof systems based on modus ponens, reasoning about either Boolean formulas or Boolean circuits, respectively. It has long been conjectured that Frege systems do not polynomially simulate extended Frege systems. Several conjectured separation principles had remained open since the mid-1990’s. However, recent work of Hrubes and Tzameret, and of the speaker and his collaborators, has now shown that these principles, including Frankl’s theorem, do not provide exponential separations. Thus, unlike in the computational complexity setting, we have no separation conjectures for propositional proof systems based on Boolean formulas versus Boolean circuits other than consistency statements.

The talk will be largely expository, and will cover the needed proof complexity background and connections to computational complexity.


March 13, 2014

Gates 463A, 4:15PM

Justin Thaler (Simons Institute)

Stream Computation and Arthur-Merlin Communication

We continue the study of streaming interactive proofs (SIPs). In this setting, a client (verifier) needs to process a massive stream of data, arriving online, but is unable to store even a small fraction of the data. It outsources the processing to a commercial cloud computing service (prover), but is unwilling to blindly trust answers returned by this service. Thus, the service must both supply the final answer and convince the verifier of its correctness.

Our primary objects of study are "barely interactive" SIPs. Specifically, we show that constant-round SIPs are surprisingly powerful, by giving polylogarithmic cost two- and three-round protocols for several "query" problems, including Index, Nearest Neighbor Search, and Range Counting. This was thought to be impossible based on previous work.

On the other hand, in order to study the *limitations* of constant-round SIPs, we introduce a new hierarchy of communication models that we call "online Interactive Proofs" (OIPs). We give upper and lower bounds that (1) characterize every finite level of the OIP hierarchy in terms of previously-studied communication complexity classes, and (2) separate the first four levels of the hierarchy. Our study of OIPs reveals marked contrasts and some parallels with the classic Turing Machine theory of interactive proofs.


March 27, 2014

Gates 463A, 4:15PM

Grigory Yaroslavtsev (Brown University)

Parallel Algorithms for Geometric Graph Problems

I will describe algorithms for geometric graph problems in the modern parallel models inspired by MapReduce. For example, for the Minimum Spanning Tree (MST) problem over a set of points in two-dimensional space, our algorithm computes an approximate MST. Our algorithms work in a constant number of rounds of communication, while using total space and communication proportional to the size of the data (linear space and near-linear time algorithms).

I will also show how the main ideas from the MST algorithm can be captured within a general “Solve-and-Sketch” algorithmic framework that we develop. Besides MST it also applies to the approximate Earth-Mover Distance (EMD) and the transportation cost problem. Algorithms designed in the “Solve-and-Sketch” framework have implications which go beyond parallel models. In particular, our work implies new near-linear time algorithms for EMD cost and transportation cost in the plane. Other implications include algorithms in the streaming with sorting model.
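A single-machine caricature of that partition-then-merge structure (my own sketch, not the paper's algorithm, which sketches each cell's partial solution rather than keeping representative edges): split the unit square into grid cells, solve an MST inside each cell, keep those edges plus edges between cell representatives, and take one more MST over the sparse union. Point counts, the grid size, and the representative rule are arbitrary demo choices.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
pts = rng.random((2000, 2))                      # random points in the unit square (demo data)

def mst_weight_and_edges(idx, pts):
    """Exact MST on pts[idx]; returns total weight and an edge list in global indices."""
    d = squareform(pdist(pts[idx]))
    t = minimum_spanning_tree(d).tocoo()
    return t.sum(), [(idx[i], idx[j]) for i, j in zip(t.row, t.col)]

# Exact answer, for reference.
exact_w, _ = mst_weight_and_edges(np.arange(len(pts)), pts)

# Phase 1 ("solve"): an MST inside each grid cell, keeping only those edges.
k = 4                                            # a k x k grid of cells
cell = np.floor(pts * k).clip(0, k - 1).astype(int)
cell_id = cell[:, 0] * k + cell[:, 1]
kept_edges, reps = [], []
for c in np.unique(cell_id):
    idx = np.flatnonzero(cell_id == c)
    if len(idx) > 1:
        kept_edges += mst_weight_and_edges(idx, pts)[1]
    reps.append(idx[0])                          # one representative point per cell

# Phase 2 ("merge"): allow all representative-to-representative edges, then take an MST
# of the sparse union of Phase-1 edges and representative edges.
kept_edges += [(reps[i], reps[j]) for i in range(len(reps)) for j in range(i + 1, len(reps))]
weights = [np.linalg.norm(pts[u] - pts[v]) for u, v in kept_edges]
rows, cols = zip(*kept_edges)
graph = coo_matrix((weights, (rows, cols)), shape=(len(pts), len(pts)))
approx_w = minimum_spanning_tree(graph).sum()

print(f"exact MST weight : {exact_w:.3f}")
print(f"two-phase weight : {approx_w:.3f}  (ratio {approx_w / exact_w:.3f})")
```

The two phases mimic one "solve" round and one "merge" round; the actual Solve-and-Sketch framework replaces the representative edges with small sketches of each cell's solution so that the per-machine space stays sublinear.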

The talk will be self-contained, including a formal introduction of the modern theoretical computational models used to capture computations in massively parallel “MapReduce”-like systems. It will also include a sample of major open problems in the area.

Joint work with Alexandr Andoni, Krzysztof Onak and Aleksandar Nikolov.


April 10, 2014

Gates 463A, 4:15PM

Raghu Meka (IAS)

Pseudorandomness from Shrinkage

One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we only know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can indeed construct PRGs which are essentially best possible without in turn improving the lower bounds.

More specifically, say that a circuit family has shrinkage exponent Gamma if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Gamma}. Our PRG uses a seed of length roughly s^{1/(Gamma + 1)} to fool circuits in the family of size s. By instantiating this generic construction, we get PRGs for the following classes: 1) de Morgan formulas of size s, seed length s^{1/3}. 2) Formulas over an arbitrary basis, seed length s^{1/2}. 3) Branching programs of size s, seed length s^{1/2}.
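A toy experiment (mine, not from the paper) that makes the shrinkage phenomenon visible: build a random de Morgan formula, apply a random restriction leaving a p fraction of the variables unset, simplify away the constants, and compare the surviving size to p*s and p^2*s. The naive syntactic simplification below does not achieve the full Gamma = 2 analysis, and the measurement depends on the random-formula model, but the surviving size already drops well below p times the original.

```python
import random

# Formulas over AND/OR/NOT as nested tuples: ('var', i), ('not', f), ('and'|'or', f, g).

def size(f):
    """Leaf count (formula size)."""
    if f[0] == 'var':
        return 1
    if f[0] == 'not':
        return size(f[1])
    return size(f[1]) + size(f[2])

def random_formula(n, leaves, rng):
    """A random de Morgan formula on n variables with the given number of leaves."""
    if leaves <= 1:
        leaf = ('var', rng.randrange(n))
        return ('not', leaf) if rng.random() < 0.5 else leaf
    left = rng.randint(1, leaves - 1)
    return (rng.choice(['and', 'or']),
            random_formula(n, left, rng),
            random_formula(n, leaves - left, rng))

def restrict(f, rho):
    """Apply a restriction (dict var -> 0/1; unset vars absent), simplifying constants away."""
    if f[0] == 'var':
        return rho.get(f[1], f)
    if f[0] == 'not':
        g = restrict(f[1], rho)
        return 1 - g if isinstance(g, int) else ('not', g)
    a, b = restrict(f[1], rho), restrict(f[2], rho)
    absorbing, neutral = (0, 1) if f[0] == 'and' else (1, 0)
    if a == absorbing or b == absorbing:
        return absorbing
    if a == neutral:
        return b
    if b == neutral:
        return a
    return (f[0], a, b)

rng = random.Random(0)
n, leaves, p, trials = 200, 4000, 0.1, 20
total = 0
for _ in range(trials):
    f = random_formula(n, leaves, rng)
    rho = {i: rng.randint(0, 1) for i in range(n) if rng.random() > p}   # leave ~p fraction unset
    g = restrict(f, rho)
    total += size(g) if isinstance(g, tuple) else 0
print(f"original size ~{leaves}, average size after restriction: {total / trials:.1f}")
print(f"for reference: p*s = {p * leaves:.0f},  p^2*s = {p * p * leaves:.0f}")
```

The PRG construction exploits exactly this gap: after a restriction, the residual formula is so small that fooling it needs far fewer truly random bits.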

The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only when the size s=O(n).

Joint work with Russell Impagliazzo and David Zuckerman.


April 17, 2014

Gates 463A, 4:15PM

Nike Sun (Stanford)

The satisfiability transitions in random regular NAE-SAT and MAX-IND-SET

Satisfaction and optimization problems subject to random constraints are a well-studied area in the theory of computation. These problems also arise naturally from investigating the combinatorics of random graphs. For a large class of such problems, the "1-RSB formalism" of statistical physics predicts the exact location of the satisfiability transition, but has not been previously verified. For two problems within this class, random regular k-NAE-SAT and random regular graph MAX-IND-SET, we establish an exact satisfiability transition which validates the 1-RSB prediction. We further show that the MAX-IND-SET value on random regular graphs is tightly concentrated with only constant-order fluctuations.

Joint work with Jian Ding and Allan Sly.


April 24, 2014

Gates 463A, 4:15PM

Bernhard Haeupler (Microsoft Research Silicon Valley)

How to Have a Conversation Despite Noise (Error Correction for Interactive Communications)

Error correcting codes have transformed how we transfer information, e.g., by providing efficient ways to protect one-way communications against noise. Recently, coding schemes have been developed which do the same for interactive two-way communications, that is, which protect any interactive conversation against (worst-case) noise.

This talk explores how such noise-tolerant conversations are even possible and how much noise can be (efficiently) tolerated. In particular, we show that 2/7 - eps is a fundamental limit on the tolerable noise level and that adaptivity is crucial in obtaining such a tolerance.

Lastly, we provide the first computationally efficient coding schemes with optimal noise tolerance for a variety of settings and explain how list decoding plays an important role in achieving these results.

Bio:

Dr. Bernhard Haeupler is a researcher at Microsoft Research. He will join the CS faculty of Carnegie Mellon University in August 2014. Dr. Haeupler obtained his PhD and MSc in Computer Science from MIT and a Diploma, MSc, and BSc in Mathematics from the Technical University of Munich. His research interests lie in the intersection of classical combinatorial algorithm design, distributed computing, and information theory. He has co-authored over 40 publications and his work on distributed information dissemination via network coding and gossip won best student paper awards at STOC'11 and SODA'13 as well as a George Sprowls dissertation award.


May 2, 2014

Gates 463A, 4:00PM

Aravindan Vijayaraghavan (CMU)

Approximation Algorithms for Average-Case Graph Partitioning

Graph Partitioning problems like Balanced Cut form a central topic of study in computer science. However, constant factor approximations for many of these problems have been elusive. Since real-world instances are unlikely to behave like worst-case instances, a compelling question is: can we design better algorithms for realistic average-case instances?

I will introduce new average-case models for graph partitioning that are more general than previous models and that, we believe, capture many properties of real-world instances. I will then present a new algorithmic framework based on semidefinite programming that gives constant factor approximation algorithms in these average-case models.

Based on joint works with Konstantin Makarychev and Yury Makarychev.

Bio: Dr. Aravindan Vijayaraghavan is a Simons Postdoctoral Fellow in the Department of Computer Science at Carnegie Mellon University. He received his PhD (in 2012) from Princeton University under the supervision of Prof. Moses Charikar. His research interests are in designing algorithms with provable guarantees for problems in combinatorial optimization and machine learning.


May 9, 2014

Gates 463A, 4:00PM

Seung Woo Shin (Berkeley)

How "Quantum" is the D-Wave Machine?

Recently there has been intense public interest in research surrounding the D-Wave "quantum computer". While claims about speedups over classical computers have been largely refuted, studies also claim that D-Wave machines exhibit signatures of "quantum behavior." In this talk, I will outline a very simple classical model which explains the published large scale input-output behavior of the D-Wave One machine. While raising serious questions about "how quantum" the D-Wave machine is, the new model also provides insights into the native problem solved by the D-Wave machine.

Joint work with Graeme Smith, John Smolin, and Umesh Vazirani.


May 16, 2014

Gates 463A, 4:00PM

Moritz Hardt (IBM)

Preventing False Discovery in Interactive Data Analysis is Hard

We show that, under a standard hardness assumption, there is no computationally efficient algorithm that, given n samples from an unknown distribution, can give valid answers to n^{3+o(1)} adaptively chosen statistical queries. A statistical query asks for the expectation of a predicate over the underlying distribution, and an answer to a statistical query is valid if it is "close" to the correct expectation over the distribution. Our result stands in stark contrast to the well-known fact that exponentially many statistical queries can be answered validly and efficiently if the queries are chosen non-adaptively (no query may depend on the answers to previous queries). Moreover, recent work shows how to accurately answer exponentially many adaptively chosen statistical queries via a computationally inefficient algorithm. There is also an efficient algorithm that can answer O(n^2) adaptively chosen queries, which shows our result is almost quantitatively tight.
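The danger of adaptivity is easy to reproduce. Below, each of d statistical queries asks for the empirical agreement of one feature with the label, and one final adaptively chosen query combines the features whose agreement came out above one half. On the true distribution (features independent of the label) every query, including the final one, has expectation 1/2, yet the empirical answer to the adaptive query drifts far from it. This is only my illustration of the phenomenon, with arbitrary sizes, not the construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 2000            # n samples, d features; features are independent of the label

X = rng.integers(0, 2, size=(n, d))
y = rng.integers(0, 2, size=n)

# Non-adaptive statistical queries: empirical agreement of each feature with the label.
# Every true expectation is 1/2, and each empirical answer is within ~1/sqrt(n) of it.
agree = (X == y[:, None]).mean(axis=0)

# One adaptively chosen follow-up query: "does the majority vote of the features that
# *looked* positively correlated agree with the label?"  Its true expectation is still 1/2.
S = np.flatnonzero(agree > 0.5)
votes = X[:, S].mean(axis=1) > 0.5
empirical = (votes == y).mean()

print(f"largest error among the {d} non-adaptive queries: {np.abs(agree - 0.5).max():.3f}")
print(f"empirical answer to the adaptive query: {empirical:.3f}   (true expectation: 0.5)")
```

With these sizes the adaptive query's empirical answer lands far above 1/2, even though no single non-adaptive query was answered inaccurately; the hardness result says that, computationally, this kind of drift cannot be prevented once the number of adaptive queries grows beyond roughly n^3.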

Conceptually, our result demonstrates that achieving statistical validity alone can be a source of computational intractability in adaptive settings. For example, in the modern large collaborative research environment, data analysts typically choose a particular approach based on previous findings. False discovery occurs if a research finding is supported by the data but not by the underlying distribution. While the study of preventing false discovery is decades old in Statistics, to the best of our knowledge our result is the first to demonstrate a computational barrier. In particular, our result suggests that the perceived difficulty of preventing false discovery in today's collaborative research environment may be inherent.

Joint work with Jonathan Ullman.


May 22, 2014

Gates 463A, 4:15PM

Vangelis Markakis (Athens University of Economics and Business)

On the Inefficiency of Standard Multi-unit Auction Formats

We study two standard multi-unit auction formats for allocating multiple units of a single good to multi-demand bidders. The first one is the Discriminatory Price Auction, which charges every winner his winning bids. The second is the Uniform Price Auction, which determines a uniform price to be paid per unit. Variants of both formats find applications ranging from the allocation of bonds to investors, to online sales over the internet, facilitated by popular online brokers.
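For concreteness, a minimal sketch (my own, with made-up bids) of how the two payment rules differ on the same bid profile: k identical units go to the k highest per-unit bids; discriminatory pricing charges each winning bid, while uniform pricing charges every winner the same per-unit price, here taken to be the highest losing bid (one common convention).

```python
from typing import List, Tuple

def allocate(bids: List[Tuple[str, float]], k: int):
    """Allocate k identical units to the k highest per-unit bids.

    `bids` holds (bidder, per-unit bid) pairs; a multi-demand bidder simply
    appears once per unit it wants (the "standard bidding" interface)."""
    ranked = sorted(bids, key=lambda b: -b[1])
    return ranked[:k], ranked[k:]

def discriminatory_payments(winners, losers):
    """Pay-as-bid: every winner pays each of its winning bids."""
    return {b: sum(x for w, x in winners if w == b) for b, _ in winners}

def uniform_payments(winners, losers):
    """One price per unit for everyone: the highest losing bid (one common convention)."""
    price = losers[0][1] if losers else 0.0
    return {b: price * sum(1 for w, _ in winners if w == b) for b, _ in winners}

# Made-up bid profile: three bidders competing for k = 3 units.
bids = [("alice", 10.0), ("alice", 7.0), ("bob", 9.0), ("bob", 4.0), ("carol", 6.0)]
winners, losers = allocate(bids, k=3)
print("winning bids      :", winners)                              # alice@10, bob@9, alice@7
print("discriminatory pay:", discriminatory_payments(winners, losers))
print("uniform pay       :", uniform_payments(winners, losers))    # 3 units at price 6
```

The Price of Anarchy results in the talk quantify how much welfare such equilibrium outcomes can lose relative to the optimal allocation, under both the standard and the uniform bidding interfaces.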

For these formats, we consider two bidding interfaces: (i) standard bidding, which is most prevalent in the scientific literature, and (ii) uniform bidding, which is the most widely used interface in practice. We evaluate the economic inefficiency of the two formats for both bidding interfaces, by means of upper and lower bounds on the Price of Anarchy for pure equilibria and mixed Bayes-Nash equilibria. Our results for bidders with submodular valuations improve upon bounds that have been obtained in the recent literature. Moreover, we also consider bidders with subadditive valuation functions and obtain constant upper bounds there as well.

This is joint work with Bart de Keijzer, Guido Schaefer, and Orestis Telelis.


Aug 29, 2014

Gates 463A, 11:00AM

Omri Weinstein (Princeton University)

Information-Theoretic Secure Communication: An Interactive Information Odometer and Applications

We introduce a novel technique which enables two players to maintain an estimate of the internal information cost of their conversation in an online fashion without revealing much extra information. We use this construction to obtain new results about information-theoretically secure computation and interactive compression.

More specifically, we show that the information odometer can be used to achieve information-theoretic secure communication between two untrusted parties: If the players' goal is to compute a function f(x,y), and f admits a protocol with information cost I and communication cost C, then our odometer can be used to produce a "robust" protocol which: (i) Assuming both players are honest, computes f with high probability, and (ii) Even if one party is malicious/adversarial, then for any k, the probability that the honest player reveals more than O(k(I + log C)) bits of information to the other player is at most 2^{-Omega(k)}.

We also outline a potential approach which uses our odometer as a proxy for breaking state-of-the-art interactive compression results: Any progress on interactive compression in the regime where I = O(log C) will lead to new *general* compression results in all regimes, and hence to new direct sum theorems in communication complexity.

Joint work with Mark Braverman.