Stanford Theory Seminar


Stanford Theory Seminar (formerly known as the Stanford Algorithms Seminar/AFLB) is usually held in Gates 498 or the Theory Lounge (Gates 463A).
Faculty Contacts: Tim Roughgarden, Luca Trevisan
Student Contacts: Kshipra Bhawalkar, Qiqi Yan

Contents:

  1. 6 October 2011 : C. Seshadhri (Sandia National Laboratories).
    Deterministic blackbox polynomial identity testing for depth 3 circuits: the constant fanin case falls at last.
  2. 13 October 2011 : Vijay V. Vazirani (Georgia Tech).
    Extending General Equilibrium Theory to the Digital Economy: Equilibrium Pricing of Semantically Substitutable Digital Goods.
  3. 20 October 2011 : Udi Wieder (Microsoft Research, SVC).
    Balls and Bins: Smaller Hash Families and Faster Evaluation.
  4. 27 October 2011 : Or Meir (Stanford).
    IP = PSPACE vs. error correcting codes
  5. 1 December 2011 : Anindya De (Berkeley).
    Nearly optimal solutions for the Chow Parameters Problem and low-weight approximation of halfspaces.
  6. 8 December 2011 : Alantha Newman (DIMACS).
    A Counterexample to Beck's Three Permutation Conjecture.
  7. 13 January 2012 : Virginia Vassilevska Williams (Stanford/Berkeley).
    Multiplying matrices faster than Coppersmith-Winograd.
  8. 2 February 2012 : Vijay Vazirani (Georgia Tech).
    A Complementary Pivot Algorithm for Market Equilibrium under Separable, Piecewise-Linear Concave Utilities.
  9. 1 March 2012 : Michael Kapralov (Stanford).
    On the communication and streaming complexity of maximum bipartite matching.
  10. 15 March 2012 : Ryan Williams (Stanford).
    Algorithms for Circuits and Circuits for Algorithms.
  11. 5 April 2012 : Vijay Vazirani (Georgia Tech).
    Dispelling an Old Myth about an Ancient Algorithm.
  12. 10 April 2012 : Robert Sedgewick (Princeton).
    From Analysis of Algorithms to Analytic Combinatorics.
  13. 12 April 2012 : Edith Cohen (AT&T Labs-Research).
    How to Get the Most Out of Your Sampled Data.
  14. 26 April 2012 : Jan Vondrak (IBM Almaden Research Center).
    Hardness of Randomized Truthful Mechanisms for Combinatorial Auctions.

6 October 2011

Gates 463A, 4:15 PM

C. Seshadhri (Sandia National Laboratories)

Deterministic blackbox polynomial identity testing for depth 3 circuits: the constant fanin case falls at last

Abstract: The problem of polynomial identity testing (PIT) is: given an arithmetic circuit C that computes a polynomial (over some base field F), can we decide in deterministic polynomial time whether C computes the zero polynomial? The Schwartz-Zippel lemma gives a simple randomized test, and PIT is arguably one of the most important derandomization questions. Its importance is heightened by the connection to lower bounds, and it is known that deterministic algorithms for PIT will result in circuit lower bounds (Impagliazzo-Kabanets 04, Agrawal 05). A deterministic blackbox algorithm for PIT constructs a polynomial sized hitting set: this is a set of poly(s,n) inputs such that every non-zero n-variate circuit of size s has a non-zero value for some input.
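
As a point of reference for the derandomization question, here is a minimal sketch of the randomized Schwartz-Zippel test in Python. The circuit is modeled simply as a Python callable giving blackbox evaluation access; this representation is an illustrative assumption, not anything from the talk.

    import random

    def randomized_pit(poly, n_vars, degree_bound, trials=20):
        """Randomized identity test via the Schwartz-Zippel lemma.

        `poly` is any callable on n_vars field elements (a stand-in for blackbox
        access to an arithmetic circuit). If poly is a nonzero polynomial of total
        degree <= degree_bound, one evaluation at a uniformly random point of
        S^n_vars is zero with probability at most degree_bound / |S|.
        """
        S = 100 * degree_bound + 1  # evaluation set {0, ..., S-1}, |S| >> degree
        for _ in range(trials):
            point = [random.randrange(S) for _ in range(n_vars)]
            if poly(*point) != 0:
                return False  # certainly not the zero polynomial
        return True  # zero polynomial, except with tiny error probability

    # (x + y)^2 - x^2 - 2xy - y^2 is identically zero; xy - 1 is not.
    print(randomized_pit(lambda x, y: (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2, 2, 2))
    print(randomized_pit(lambda x, y: x * y - 1, 2, 2))

The deterministic blackbox question described in the abstract asks for a fixed hitting set that can replace the random evaluation points above.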

Let C be a depth-3 circuit with n variables, degree d and top-fanin k over base field F. Klivans and Spielman noticed (way back in 2001) that PIT is open for this class even when k is a constant. This has been the subject of intense (or maybe not so intense) study over the past decade. We finally resolve this question with a deterministic construction of a hitting set of size poly(n,d^k), for any field.

This is based on a lot of past work by many different people, and I will try to highlight the "path" to our result. The aim of this talk is to give an introduction to PIT for depth 3 circuits, the basic tools developed for this problem, and finally, discuss our contribution.

Joint work with Nitin Saxena


13 October 2011

Gates 463A, 4:15 PM

Vijay V. Vazirani (Georgia Tech)

Extending General Equilibrium Theory to the Digital Economy: Equilibrium Pricing of Semantically Substitutable Digital Goods

General Equilibrium Theory, the undisputed crown jewel of Mathematical Economics for over a century, gave a very satisfactory solution to the problem of arriving at a principled method of pricing goods and services. The solution was based on Adam Smith's principle of maintaining parity between supply and demand, Walras' notion of equilibrium, and the Arrow-Debreu Theorem, which proved the existence of equilibrium in a very general model of the economy.

However, this solution, designed for conventional goods, is not applicable for digital goods -- once produced, a digital good can be reproduced at (essentially) zero cost, thus making its supply infinite. Considering the current size of the digital economy and its huge growth potential, it is imperative that we obtain an equally convincing theory for pricing of digital goods.

For a broad class of digital goods, coming primarily from the entertainment industry, which we call semantically substitutable goods, we show how a suitable adaptation of the Arrow-Debreu model yields a simple equilibrium-based pricing structure. We also give examples of (multi-billion dollar) markets that use this pricing structure.

The far-reaching significance of a competitive equilibrium is made explicit in the Fundamental Theorems of Welfare Economics. We give an appropriate notion of efficiency under which both Welfare Theorems hold in our mixed economy consisting of conventional and digital goods. Finally, we outline a multitude of issues that still need to be addressed to obtain a theory for the digital economy that is on par with the original theory.

Based on the following joint paper with Kamal Jain: http://arxiv.org/abs/1007.4586


20 October 2011

Gates 463A, 4:15 PM

Udi Wieder (Microsoft Research, SVC)

Balls and Bins: Smaller Hash Families and Faster Evaluation

A fundamental fact in the analysis of randomized algorithms is that when $n$ balls are hashed into $n$ bins independently and uniformly at random, with high probability each bin contains at most $O(\log n / \log \log n)$ balls. In various applications, however, the assumption that a truly random hash function is available is not always valid, and explicit functions are required.
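
For intuition, here is a quick simulation of the truly-random baseline (not of the hash families constructed in the talk); a hedged sketch in Python:

    import math
    import random
    from collections import Counter

    def max_load(n, rng=random):
        """Throw n balls into n bins independently and uniformly; return the heaviest bin."""
        return max(Counter(rng.randrange(n) for _ in range(n)).values())

    n = 100_000
    observed = sorted(max_load(n) for _ in range(20))
    print("observed max loads over 20 trials:", observed)
    print("log n / log log n ~= %.2f" % (math.log(n) / math.log(math.log(n))))

The families constructed in the paper aim to match this $O(\log n / \log \log n)$ guarantee while using far less randomness than a truly random function.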

In this work we study the size of families (or, equivalently, the description length of their functions) that guarantee a maximal load of $O(\log n / \log \log n)$ with high probability, as well as the evaluation time of their functions.

We construct two families that guarantee a maximal load of $O(\log n / \log \log n)$ with high probability. Our constructions are based on two different approaches, and exhibit different trade-offs between the description length and the evaluation time. The first construction shows that $O(\log n / \log \log n)$-wise independence can in fact be replaced by ``gradually increasing independence'', resulting in functions that are described using $O(\log n \log \log n)$ bits and evaluated in time $O(\log n \log \log n)$. The second construction is based on derandomization techniques for space-bounded computations combined with a tailored construction of a pseudorandom generator.

Joint work with Laura Elisa Celis, Omer Reingold and Gil Segev


27 October 2011

Gates 463A, 4:15 PM

Or Meir (Stanford)

IP = PSPACE vs. error correcting codes

The IP theorem, which asserts that IP = PSPACE (Lund et al. and Shamir, in J. ACM 39(4)), is one of the major achievements of complexity theory. The known proofs of the theorem are based on the arithmetization technique, which transforms a quantified Boolean formula into a related polynomial. The intuition that underlies the use of polynomials is commonly explained by the fact that polynomials constitute good error correcting codes. However, the known proofs seem tailored to the use of polynomials, and do not generalize to arbitrary error correcting codes.
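
To make the arithmetization step concrete, here is a tiny Python illustration of the standard encoding of Boolean connectives as polynomials (textbook background, not the content of this work):

    # Identify True with 1 and False with 0, and encode connectives as polynomials:
    #   NOT x   ->  1 - x
    #   x AND y ->  x * y
    #   x OR y  ->  1 - (1 - x) * (1 - y)
    def NOT(x): return 1 - x
    def AND(x, y): return x * y
    def OR(x, y): return 1 - (1 - x) * (1 - y)

    # On 0/1 inputs the polynomial agrees with the Boolean formula; the point of
    # arithmetization is that it also makes sense at arbitrary field elements,
    # which is what the interactive protocols behind the IP theorem exploit.
    for x in (0, 1):
        for y in (0, 1):
            assert OR(AND(x, NOT(y)), y) == (1 if ((x and not y) or y) else 0)
    print("arithmetized formula agrees with the Boolean one on {0,1}^2")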

In this work, we show that the IP theorem can be proved by using general error correcting codes. We believe that this establishes a rigorous basis for the aforementioned intuition, and sheds further light on the IP theorem.


1 December 2011

Gates 463A, 4:15 PM

Anindya De (Berkeley)

Nearly optimal solutions for the Chow Parameters Problem and low-weight approximation of halfspaces

The Chow parameters of a Boolean function f: {-1,1}^n -> {-1,1} are its n+1 degree-0 and degree-1 Fourier coefficients. It has been known since the 1960s (Chow, FOCS 1961) that the (exact values of the) Chow parameters of any linear threshold function f uniquely specify f within the space of all Boolean functions, but until recently (O'Donnell and Servedio, STOC 2008) nothing was known about efficient algorithms for reconstructing f (exactly or approximately) from exact or approximate values of its Chow parameters. We refer to this reconstruction problem as the Chow Parameters Problem. Our main result is a new algorithm for the Chow Parameters Problem which, given (sufficiently accurate approximations to) the Chow parameters of any linear threshold function f, runs in time poly(n,(1/epsilon)^{log^2(1/epsilon)}) and outputs another linear threshold function f' which is epsilon-close to f. The best previous algorithm had running time worse than doubly exponential in 1/epsilon.
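
For small n, the Chow parameters can simply be computed by brute force, which may help fix the definitions; a sketch follows (the efficient reconstruction algorithm of the talk is, of course, something else entirely):

    from itertools import product

    def chow_parameters(f, n):
        """Return the n+1 Chow parameters of f: {-1,1}^n -> {-1,1}, i.e. the
        degree-0 coefficient E[f(x)] and the degree-1 coefficients E[f(x) x_i],
        with expectations over the uniform distribution on {-1,1}^n."""
        totals = [0.0] * (n + 1)
        for x in product((-1, 1), repeat=n):
            fx = f(x)
            totals[0] += fx
            for i in range(n):
                totals[i + 1] += fx * x[i]
        return [t / 2 ** n for t in totals]

    # A linear threshold function: majority of 3 bits.
    majority3 = lambda x: 1 if sum(x) > 0 else -1
    print(chow_parameters(majority3, 3))  # [0.0, 0.5, 0.5, 0.5]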

As a byproduct of our approach, we show that for any linear threshold function f over {-1,1}^n, there is a linear threshold function f' which is epsilon-close to f and whose weights are all integers of magnitude at most sqrt(n) * (1/epsilon)^{O(log^2(1/epsilon))}. This improves on results of Servedio (CCC 2007) and Diakonikolas and Servedio (CCC 2009), both of which had an exponential dependence on epsilon. Our results also have interesting consequences for problems in learning theory.

Joint work with Ilias Diakonikolas, Vitaly Feldman and Rocco Servedio.


8 December 2011

Gates 463A, 4:15 PM

Alantha Newman (DIMACS)

A Counterexample to Beck's Three Permutations Conjecture

Given three permutations on the integers 1 through n, consider the set system consisting of each interval in each of the three permutations. In 1982, Jozsef Beck conjectured that the discrepancy of this set system is O(1). In other words, Beck conjectured that for every three permutations, each integer from 1 through n can be colored either red or blue so that the numbers of red and blue integers in each interval of each permutation differ by at most a constant. (The discrepancy of a set system based on two permutations is two.)
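
For very small n, this discrepancy can be computed by exhaustive search; the sketch below is only a toy for experimenting with the definition, not the recursive counterexample construction (which needs n = 3^k):

    from itertools import product

    def three_permutation_discrepancy(perms, n):
        """Minimum over all red/blue colorings of {1..n} of the largest imbalance
        |#red - #blue| over all intervals of the given permutations (exponential
        in n, so only usable for tiny instances)."""
        best = n
        for colors in product((-1, 1), repeat=n):  # colors[v-1] = +1 red, -1 blue
            worst = 0
            for perm in perms:
                prefix, sums = 0, [0]
                for v in perm:
                    prefix += colors[v - 1]
                    sums.append(prefix)
                # an interval's imbalance is a difference of two prefix sums
                worst = max(worst, max(sums) - min(sums))
            best = min(best, worst)
        return best

    perms = [(1, 2, 3, 4, 5, 6), (2, 4, 6, 1, 3, 5), (3, 6, 2, 5, 1, 4)]
    print(three_permutation_discrepancy(perms, 6))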

We will present a counterexample to this conjecture: for any positive integer n = 3^k, we construct three permutations whose corresponding set system has discrepancy Omega(log(n)). Our counterexample is based on a simple recursive construction, and our proof of the discrepancy lower bound is by induction.

Our work was inspired by an insightful and intriguing paper from SODA 2011 by Fritz Eisenbrand, Domotor Palvolgyi and Thomas Rothvoss, who show that Beck's Conjecture implies a constant additive integrality gap for the Gilmore-Gomory LP relaxation of the well-studied special case of Bin Packing in which each item has weight between 1/4 and 1/2, also known as the Three Partition problem.

In a more recent extended version of their paper, they show an interesting consequence of our construction, which was also independently observed by Ofer Neiman: There are instances of the Three Partition problem and corresponding optimal LP solutions, such that any bin packing that only uses "patterns" from the non-zero support of these optimal LP solutions requires at least OPT + Omega(log(n)) bins.

Time permitting, we will discuss this and other observations about the structure of the three permutations in our counterexample.

(Joint work with Aleksandar Nikolov.)


13 January 2012

CIS Auditorium, 2:00 PM

Virginia Vassilevska Williams (Stanford/Berkeley)

Multiplying matrices faster than Coppersmith-Winograd

In 1987 Coppersmith and Winograd presented an algorithm to multiply two n by n matrices using O(n^{2.3755}) arithmetic operations. This algorithm has remained the theoretically fastest approach for matrix multiplication for 24 years. We have recently been able to design an algorithm that multiplies n by n matrices and uses at most O(n^{2.3727}) arithmetic operations, thus improving the Coppersmith-Winograd running time.
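
The Coppersmith-Winograd construction itself is far beyond a short example, but the underlying principle (a base-case identity that uses fewer multiplications than the obvious one, applied recursively) is already visible in Strassen's classical algorithm. The Python sketch below is offered only as a stand-in for that principle, not as anything from the new algorithm: seven multiplications on 2x2 blocks give exponent log2(7), about 2.807, whereas the constructions discussed in the talk reach 2.3727.

    import numpy as np

    def strassen(A, B):
        """Strassen multiplication for square matrices of power-of-two size.
        Seven recursive multiplications per level yield O(n^{log2 7}) = O(n^2.81);
        Coppersmith-Winograd-style constructions push the exponent much lower."""
        n = A.shape[0]
        if n == 1:
            return A * B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22)
        M2 = strassen(A21 + A22, B11)
        M3 = strassen(A11, B12 - B22)
        M4 = strassen(A22, B21 - B11)
        M5 = strassen(A11 + A12, B22)
        M6 = strassen(A21 - A11, B11 + B12)
        M7 = strassen(A12 - A22, B21 + B22)
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4, M1 - M2 + M3 + M6]])

    A = np.random.randint(-5, 5, (8, 8))
    B = np.random.randint(-5, 5, (8, 8))
    assert np.array_equal(strassen(A, B), A @ B)
    print("exponent from 7 multiplications on 2x2 blocks:", np.log2(7))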

The improvement is based on a recursive application of the original Coppersmith-Winograd construction, together with a general theorem that reduces the analysis of the algorithm running time to solving a nonlinear constraint program. The final analysis is then done by numerically solving this program. To fully optimize the running time we utilize an idea from independent work by Stothers who claimed an O(n^{2.3737}) runtime in his Ph.D. thesis.

The aim of the talk will be to give some intuition and to highlight the main new ideas needed to obtain the improvement.


2 February 2012

Gates 463A, 4:15 PM

Vijay Vazirani (Georgia Tech)

A Complementary Pivot Algorithm for Market Equilibrium under Separable, Piecewise-Linear Concave Utilities

Using the powerful machinery of the linear complementarity problem and Lemke's algorithm, we give a practical algorithm for computing an equilibrium for Fisher and Arrow-Debreu markets under separable, piecewise-linear concave (SPLC) utilities, despite the PPAD-completeness of this case.

In 1975, Eaves gave such an algorithm for the case of linear utilities and asked for an extension to piecewise-linear, concave utilities. Our result settles the relevant subcase of his problem, as well as the problem of Vazirani and Yannakakis of obtaining a path-following algorithm for SPLC markets, thereby giving a direct proof of membership of this case in PPAD.

We also prove that SPLC markets have an odd number of equilibria (up to scaling), hence matching the classical result of Shapley (1974), which was based on the Lemke-Howson algorithm and shows a similar fact about Nash equilibria of a 2-person bimatrix game.

For the linear case, Eaves had asked for a combinatorial interpretation of his algorithm. We provide this and use it to get a particularly simple proof of the fact that the set of equilibrium prices is convex.

(This is joint work with Jugal Garg, Ruta Mehta and Milind Sohoni.)


1 March 2012

Gates 463A, 4:15 PM

Michael Kapralov (Stanford)

On the communication and streaming complexity of maximum bipartite matching

Consider the following communication problem. Alice holds a graph $G_A=(P,Q,E_A)$ and Bob holds a graph $G_B=(P,Q,E_B)$, where $|P|=|Q|=n$. Alice is allowed to send Bob a message that depends only on the graph $G_A$. Bob must then output a matching $M\subseteq E_A\cup E_B$. What is the minimum size of the message that Alice sends to Bob that allows Bob to recover a matching of size at least $(1-\epsilon)$ times the maximum matching in $G_A\cup G_B$? The minimum message length is the one-round communication complexity of approximating bipartite matching, which also gives a lower bound on the space needed by a one-pass streaming algorithm to compute a $(1-\epsilon)$-approximate matching. In this paper we address the question of how well one can approximate maximum bipartite matchings with linear communication and space. Prior to our work, only a $1/2$-approximation was known for both these problems.
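
The $1/2$-approximation baseline mentioned here is the one-pass greedy algorithm that keeps an edge whenever both endpoints are still free; a short illustrative sketch:

    def greedy_matching(edge_stream):
        """One-pass greedy maximal matching in O(n) space: keep an edge iff both
        endpoints are currently unmatched. A maximal matching is always at least
        half the size of a maximum matching, hence the 1/2-approximation."""
        matched, matching = set(), []
        for u, v in edge_stream:
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching

    # On this stream greedy finds 1 edge while the maximum matching has 2.
    print(greedy_matching([("p1", "q2"), ("p1", "q1"), ("p2", "q2")]))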

In order to study these questions, we introduce the concept of an $\epsilon$-matching cover of a bipartite graph $G$, which is a sparse subgraph of the original graph that preserves the size of the maximum matching between large subsets of vertices. We give a polynomial-time construction of a $1/2$-matching cover of size $O(n)$ with some crucial additional properties, thereby showing that Alice and Bob can achieve a $2/3$-approximation with a message of size $O(n)$. Further, we establish a connection between $\epsilon$-matching covers and so-called $\epsilon$-Ruzsa-Szemerédi graphs, using it to show that any better approximation requires communication complexity at least $n^{1+\Omega(1/\log \log n)}$.

Finally, we build on our techniques to design a one-pass streaming algorithm for the case of vertex arrivals, obtaining the first deterministic one-pass streaming $(1-1/e)$-approximation using $O(n)$ space for this setting (a randomized $(1-1/e)$-approximation can be obtained using the celebrated Karp-Vazirani-Vazirani (KVV) algorithm for the online problem). We also show that this is tight -- any better approximation in a single pass requires $n^{1+\Omega(1/\log\log n)}$ space.
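
For comparison, here is a minimal sketch of the randomized KVV Ranking algorithm referenced above, in the vertex-arrival setting (one side is known in advance; the other side arrives together with its edges):

    import random

    def kvv_ranking(offline, arrivals, seed=0):
        """Karp-Vazirani-Vazirani Ranking: fix a uniformly random ranking of the
        offline vertices, then match each arriving vertex to its highest-ranked
        free neighbor. Achieves a (1 - 1/e) competitive ratio in expectation."""
        rng = random.Random(seed)
        order = list(offline)
        rng.shuffle(order)
        rank = {u: r for r, u in enumerate(order)}
        free, matching = set(offline), []
        for v, neighbors in arrivals:  # v's edges are revealed when v arrives
            candidates = [u for u in neighbors if u in free]
            if candidates:
                u = min(candidates, key=rank.__getitem__)
                free.remove(u)
                matching.append((u, v))
        return matching

    arrivals = [("q1", ["p1", "p2"]), ("q2", ["p1"]), ("q3", ["p2", "p3"])]
    print(kvv_ranking(["p1", "p2", "p3"], arrivals))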

(Based on joint work with Ashish Goel and Sanjeev Khanna)


15 March 2012

Gates 463A, 4:15 PM

Ryan Williams (Stanford)

Algorithms for Circuits and Circuits for Algorithms

Over the years, interesting connections have been developed between the existence of non-trivial circuit-analysis algorithms and proofs of circuit size lower bounds. Algorithms that can check whether a given circuit from some class satisfies a simple property (such as computing the all-zeroes function) can be used to prove limitations on that circuit class, provided the algorithms have good performance. For example, a non-trivial SAT algorithm for a circuit class can be used to find functions computable in nondeterministic exponential time that cannot be computed by that circuit class. Informally, this connection can be interpreted as saying "good algorithms for circuits imply there are no good circuits for some algorithms." This talk will explore the ideas behind this basic theme, and review some of the connections that have been found so far.
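
To fix what "non-trivial" means here, the trivial baseline is exhaustive search over all inputs; a circuit-SAT algorithm that beats this 2^n baseline even slightly, for a rich enough circuit class, already has lower-bound consequences. A toy sketch follows, with a plain Python predicate standing in for a circuit:

    from itertools import product

    def brute_force_circuit_sat(circuit, n):
        """Exhaustive satisfiability check: evaluate the circuit on all 2^n inputs.
        This is the trivial 2^n * poly(size) algorithm that 'non-trivial'
        circuit-analysis algorithms must improve upon."""
        for x in product((0, 1), repeat=n):
            if circuit(x):
                return x  # a satisfying assignment
        return None  # unsatisfiable; in particular, the circuit computes all-zeroes

    print(brute_force_circuit_sat(lambda x: x[0] and not x[1] and x[2], 3))
    print(brute_force_circuit_sat(lambda x: x[0] and not x[0], 1))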


5 April 2012

Gates 463A, 4:15 PM

Vijay Vazirani (Georgia Tech)

Dispelling an Old Myth about an Ancient Algorithm

Myth -- and grapevine -- has it that the Micali-Vazirani maximum matching algorithm is "too complicated". The purpose of this talk is to show the stunningly beautiful structure of minimum length alternating paths and to convince the audience that our algorithm exploits this structure in such a simple and natural manner that the entire algorithm can fit into one's mind as a single thought.

For all practical purposes, the Micali-Vazirani algorithm, discovered in 1980, is still the most efficient known algorithm (for very dense graphs, slight asymptotic improvement can be obtained using fast matrix multiplication). Yet, no more than a handful of people have understood this algorithm; we hope to change this.

Based on the following two papers:
http://www.cc.gatech.edu/~vazirani/MV.pdf
http://www.cc.gatech.edu/~vazirani/Theory-of-matching.pdf


10 April 2012

Gates 498, 4:15 PM

Robert Sedgewick (Princeton)

From Analysis of Algorithms to Analytic Combinatorics

Analytic Combinatorics aims to enable precise quantitative predictions of the properties of large combinatorial structures. Primarily due to the efforts of Philippe Flajolet and his many research collaborators, the theory has emerged over recent decades as essential both for the scientific analysis of algorithms in computer science and for the study of scientific models in other disciplines, including probability theory, statistical physics, computational biology and information theory. This talk surveys thirty years of joint work with Flajolet that was inspired by learning the analysis of algorithms from Knuth and that culminated in the publication of two books: "An Introduction to the Analysis of Algorithms" and "Analytic Combinatorics".
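
As a flavor of the kind of precise quantitative prediction involved (a standard textbook example, not taken from the talk): singularity analysis of the Catalan generating function C(z) = (1 - sqrt(1 - 4z)) / (2z) predicts that the number of binary trees with n internal nodes grows like 4^n / (sqrt(pi) n^{3/2}).

    import math

    # Compare the exact Catalan numbers C_n = binom(2n, n)/(n+1) with the
    # first-order asymptotic estimate 4^n / (sqrt(pi) * n^{3/2}).
    for n in (10, 100, 500):
        exact = math.comb(2 * n, n) // (n + 1)
        predicted = 4 ** n / (math.sqrt(math.pi) * n ** 1.5)
        print(n, predicted / exact)  # ratio tends to 1 as n grows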

12 April 2012

Gates 463A, 4:15 PM

Edith Cohen (AT&T Labs Research)

How to Get the Most Out of Your Sampled Data

Random sampling is an important tool for retaining the ability to query data under resource limitations. It is used to summarize data that is too large to store or manipulate and to meet resource constraints on bandwidth or battery power. Estimators applied to the sample provide fast approximate answers to queries posed over the original data, and the value of the sample hinges on the quality of these estimators.

We are interested in queries, such as maximum or range, that span multiple data points. Sum aggregates of these queries correspond to distinct counts and difference norms and are used for planning or change/anomaly detection over traffic logs and measurement data. Unbiased estimators are particularly effective: while the estimate of each basic query inevitably has high variance, the relative error decreases with aggregation.

The sample may provide no information, the exact value, or partial information about the queried value. The Horvitz-Thompson estimator, known to minimize variance for sampling with ``all or nothing'' outcomes (which reveal either the exact value or no information about the queried value), is suboptimal for, or altogether inapplicable to, such queries.
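
As background, here is a hedged sketch of the Horvitz-Thompson estimator for a simple sum query under Poisson sampling with known inclusion probabilities (the baseline setting the talk moves beyond); the population and probabilities below are made up for illustration:

    import random

    def horvitz_thompson_sum(sample):
        """Horvitz-Thompson estimate of a population sum: each sampled item,
        included independently with known probability p, contributes value / p.
        Unbiased whenever every item has a positive inclusion probability."""
        return sum(value / p for value, p in sample)

    random.seed(1)
    population = [random.randint(1, 100) for _ in range(1000)]
    true_sum = sum(population)
    budget = 100  # expected sample size
    probs = [min(1.0, budget * v / true_sum) for v in population]  # size-proportional

    estimates = []
    for _ in range(200):
        sample = [(v, p) for v, p in zip(population, probs) if random.random() < p]
        estimates.append(horvitz_thompson_sum(sample))
    print("true sum:", true_sum, " mean HT estimate: %.1f" % (sum(estimates) / len(estimates)))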

We present a general principled methodology for the derivation of (Pareto) optimal nonnegative unbiased estimators over sampled data and aim to understand its potential. We demonstrate significant improvement in estimation accuracy.

This work is joint with Haim Kaplan (Tel Aviv University).


26 April 2012

Gates 463A, 4:15 PM

Jan Vondrak (IBM Almaden Research Center)

Hardness of randomized truthful mechanisms for combinatorial auctions

The problem of combinatorial auctions is one of the basic questions in algorithmic mechanism design: how can we allocate m items to n agents with private valuations of different combinations of items, so that the agents are motivated to reveal their true valuations and the outcome is (approximately) optimal in terms of social welfare? While approximation algorithms exist for several non-trivial classes of valuations, they typically do not motivate agents to report truthfully. The classical VCG mechanism, although truthful, is not computationally efficient. Thus the main question is whether the requirements of truthfulness and computational efficiency can be combined, or whether they are incompatible.
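
For concreteness, here is a brute-force VCG sketch on a toy instance: it enumerates every assignment of items to agents, so it is truthful but exponential in the number of items, which is exactly the computational obstacle the abstract refers to. The agents and valuations below are hypothetical examples.

    from itertools import product

    def vcg(items, valuations):
        """Brute-force VCG: pick the welfare-maximizing assignment of items to
        agents, and charge each agent the externality it imposes on the others
        (welfare of the others without the agent minus with the agent)."""
        agents = list(valuations)

        def optimal_welfare(active):
            best_w, best_alloc = -1, None
            for assign in product(active, repeat=len(items)):
                bundles = {a: frozenset(it for it, owner in zip(items, assign) if owner == a)
                           for a in active}
                w = sum(valuations[a](bundles[a]) for a in active)
                if w > best_w:
                    best_w, best_alloc = w, bundles
            return best_w, best_alloc

        w_all, alloc = optimal_welfare(agents)
        payments = {}
        for a in agents:
            w_without, _ = optimal_welfare([b for b in agents if b != a])
            payments[a] = w_without - (w_all - valuations[a](alloc[a]))
        return alloc, payments

    # Two items, two agents with additive valuations (a toy special case).
    vals = {"alice": lambda S: 3 * ("x" in S) + 1 * ("y" in S),
            "bob":   lambda S: 2 * ("x" in S) + 2 * ("y" in S)}
    print(vcg(["x", "y"], vals))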

We identify a class of explicit (succinctly represented) submodular valuations, for which it is known that combinatorial auctions without the requirement of truthfulness admit a (1-1/e)-approximation; however, we prove that unless NP \subset P/poly, there is no truthful (even truthful-in-expectation) mechanism for this class providing approximation better than polynomial in the number of agents. (Previous work by Dobzinski already ruled out deterministic and universally truthful mechanisms for submodular valuations given by a value oracle.)

Joint work with Shaddin Dughmi and Shahar Dobzinski.