Stanford Theory Seminar


Stanford Theory Seminar (formerly known as Stanford Algorithms Seminar/AFLB) is usually held in Gates 498 or the Theory Lounge (Gates 463A).
Faculty Contact: Tim Roughgarden, Luca Trevisan
Student Contact: Kshipra Bhawalkar

Contents:

  1. 23 September 2010: Madhu Sudan (Microsoft/MIT). The Method of Multiplicities.
  2. 18 October 2010: Ran Raz (Weizmann Institute). Parallel Repetition of Two Prover Games: A Survey.
  3. 21 October 2010: Robert Krauthgamer (Weizmann Institute). Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity.
  4. 28 October 2010: Elad Verbin (University of Aarhus). The Coin Problem, and Pseudorandomness for Branching Programs.
  5. 4 November 2010: Jelani Nelson (MIT). Optimal Moment Estimation in Data Streams.
  6. 11 November 2010: Silvio Micali (MIT). Conservative Rationalizability and the Second Knowledge Mechanism.
  7. 18 November 2010: Amit Sahai (UCLA). Resolving the Asymptotic Computational Complexity of Universal Hashing.
  8. 1 December 2010: Ricky Rosen (Tel Aviv University). A Strong Parallel Repetition Theorem for Projection Games on Expanders.
  9. 2 December 2010: James Lee (University of Washington). Discrete Differentiation and Local Rigidity of Cuts.
  10. 20 January 2011 : Mohammad Hossein Bateni (Princeton). Prize collecting clustering and its applications.
  11. 27 January 2011 : Debmalya Panigrahi (MIT). A General Framework for Graph Sparsification.
  12. 3 February 2011 : Ilya Mironov (Microsoft Research). The Limits of Two-Party Differential Privacy: Differential Privacy, Deterministic Extractors, and Communication Complexity.
  13. 17 February 2011 : Andrew Goldberg (Microsoft Research). Highway Dimension: From Practice to Theory and Back.
  14. 1 March 2011 : Facundo Memoli (Department of Mathematics, Stanford). The Gromov-Hausdorff distance and variants: lower bounds and applications.
  15. 3 March 2011 : Moses Charikar (Princeton University) Motwani Distinguished Lecture, Topic: Dimension Reduction in L_1.
  16. 21 April 2011 : Daniele Micciancio (University of California, San Diego) Solving All Lattice Problems in Deterministic Single Exponential Time.
  17. 28 April 2011 : Ilias Diakonikolas (University of California, Berkeley) A Regularity Lemma for Polynomial Threshold Functions.
  18. 5 May 2011 : David Karger (MIT) Motwani Distinguished Lecture, Topic: Random Sampling and Cuts in Graphs: Three Combinatorial Proofs.
  19. 12 May 2011 : Miklos Ajtai (IBM Almaden) Secure Computation with Information Leaking to an Adversary.
  20. 19 May 2011 : Luca Trevisan (Stanford) TBA

23 September 2010

Gates 463A, 4:15 PM

Madhu Sudan (Microsoft/MIT)

The Method of Multiplicities

In 2008, Zeev Dvir achieved a breakthrough in combinatorial geometry by giving a stunningly simple, and sharp, bound on the size of "Kakeya sets" in F_q^n, the n-dimensional vector space over the finite field with q elements. (A Kakeya set in any vector space is a set that contains a line in every direction.) Dvir proved this bound by setting up an n-variate low-degree non-zero polynomial that vanished on every point in the set, and then used the algebraic nature of a Kakeya set to argue that this polynomial was zero too often if the set was too small. In addition to resolving a long-studied problem in combinatorial geometry, this method also led to new constructions of "randomness extractors".
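
The existence step of this polynomial method is a pure dimension count; as a standard sketch (folklore, not specific to this talk): a degree-$d$ polynomial in $n$ variables has $\binom{n+d}{n}$ coefficients, and requiring it to vanish on $K$ imposes only $|K|$ homogeneous linear conditions, so

\[
\binom{n+d}{n} > |K| \;\Longrightarrow\; \exists\, P \in \mathbb{F}_q[x_1,\dots,x_n] \setminus \{0\}, \quad \deg P \le d, \quad P(a) = 0 \ \text{for all } a \in K.
\]

The algebraic step then uses the lines in every direction to force the top-degree part of such a $P$ to vanish identically, a contradiction when $K$ is too small.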

In this talk I will describe algebraic methods to improve the analysis of Dvir, by using polynomials that vanish with "high multiplicity" on every point of the given set. This method, based on prior work with Guruswami (1998), ends up yielding extremely tight (to within a factor of 2) bounds on the size of Kakeya sets; and, in combination with a host of other techniques, state-of-the-art "extractors" (algorithms that purify randomness).

In this talk I will describe the (simple) idea behind the method of multiplicities and some of the applications.

Based on joint works with Shubhangi Saraf (Analysis & PDE, 2009); and with Zeev Dvir, Swastik Kopparty, and Shubhangi Saraf (FOCS 2009).


18 October 2010

Gates 463A, 4:15 PM

Ran Raz (Weizmann Institute)

Parallel Repetition of Two Prover Games: A Survey

The parallel repetition theorem states that for any two-prover game with value smaller than 1, parallel repetition reduces the value of the game at an exponential rate.

I will give a short introduction to the problem of parallel repetition of two-prover games and some of its applications in theoretical computer science, mathematics and physics. I will concentrate mainly on recent results.

In a two-prover (alternatively, two-player) game, a referee chooses questions $(x,y)$ according to a (publicly known) distribution, and sends $x$ to the first player and $y$ to the second player. The first player responds by $a=a(x)$ and the second by $b=b(y)$ (without communicating with each other). The players jointly win if a (publicly known) predicate $V(x,y,a,b)$ holds. The value of the game is the maximal probability of success that the players can achieve, where the maximum is taken over all protocols $a=a(x),b=b(y)$.
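
Since the value is attained by deterministic strategies, it can be computed by brute force on toy instances. A minimal illustration of the definition above (an exponential-time sketch for intuition, not anything from the talk):

from itertools import product

def game_value(xs, ys, dist, V, answers):
    """Exact value of a one-round two-prover game by brute force:
    maximize the winning probability over all deterministic strategies
    a: X -> answers and b: Y -> answers.  Exponential; toy sizes only."""
    best = 0.0
    for a in product(answers, repeat=len(xs)):      # prover 1's strategy
        for b in product(answers, repeat=len(ys)):  # prover 2's strategy
            val = sum(p for (i, j), p in dist.items()
                      if V(xs[i], ys[j], a[i], b[j]))
            best = max(best, val)
    return best

# Example: the CHSH-style XOR game; its classical value is 3/4.
xs, ys, answers = [0, 1], [0, 1], [0, 1]
dist = {(i, j): 0.25 for i in range(2) for j in range(2)}
V = lambda x, y, a, b: (a ^ b) == (x & y)
print(game_value(xs, ys, dist, V, answers))  # 0.75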

A parallel repetition of a two-prover game is a game where the players try to win $n$ copies of the original game simultaneously. More precisely, the referee generates questions $x=(x_1,...,x_n), y=(y_1,...,y_n)$, where each pair $(x_i,y_i)$ is chosen independently according to the original distribution. The players respond by $a=(a_1,...,a_n)$ and $b=(b_1,...,b_n)$. The players win if they win simultaneously on all the coordinates, that is, if for every $i$, $V(x_i,y_i,a_i,b_i)$ holds.

The parallel repetition theorem states that for any two-prover game with value $1-\epsilon$ (for, say, $\epsilon < 1/2$), the value of the game repeated in parallel $n$ times is at most $(1-\epsilon^3)^{\Omega(n/s)}$, where $s$ is the answers' length (of the original game).


21 October 2010

Gates 498, 4:15 PM

Robert Krauthgamer (Weizmann Institute)

Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity

We present a near-linear time algorithm that approximates the edit distance between two strings within a significantly better factor than previously known. This result emerges in our investigation of edit distance from a new perspective, namely a model of asymmetric queries, for which we present near-tight bounds.
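
For context, the exact quadratic-time baseline that approximation algorithms in this line of work aim to beat is the textbook dynamic program (a standard sketch, not the talk's algorithm):

def edit_distance(s, t):
    """Classic O(|s|*|t|) dynamic program (Levenshtein distance): the
    minimum number of insertions, deletions, and substitutions turning
    s into t.  The talk approximates this quantity in near-linear time."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))  # distances from the empty prefix of s
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # delete s[i-1]
                         cur[j - 1] + 1,                        # insert t[j-1]
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3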

Another consequence of this new model is the first rigorous separation between edit distance and Ulam distance, by tracing the hardness of edit distance to phenomena that were not used by previous analyses.

[Joint work with Alexandr Andoni and Krzysztof Onak.]


28 October 2010

Gates 498, 4:15 PM

Elad Verbin (University of Aarhus)

The Coin Problem, and Pseudorandomness for Branching Programs

The "Coin Problem" is the following problem: a biased coin is given. The coin lands on heads with probability either 1/2+beta or 1/2-beta. We are given the outcome of n independent tosses of this coin, and the goal is to guess which way the coin is biased, and to answer correctly with probability >= 2/3. When our computational model is unrestricted, the majority function is optimal, and succeeds when beta >= c / sqrt(n) for a large enough constant c. The coin problem is interesting in models that cannot compute the majority function, such as AC0, low-degree polynomials, automata, and low-width read-once branching programs.

In this talk I will show recent results with Joshua Brody on the coin problem in the model of width-w read-once branching programs. We prove that if a branching program solves the coin problem with bias beta, then beta must be at least 1/(log n)^{Theta(w)}. This is nearly tight, as can be seen by considering the recursive tribes function. We also briefly discuss various generalizations and variants of this result. The proof is based on appropriate use of techniques from work on AC0 circuits (mainly random restriction), along with some new ideas.

We suggest one application for this kind of theorem: we show that Nisan's generator epsilon-fools width-w read-once *regular* branching programs, using seed length O(log n * log log n) when epsilon and w are both constant.

We are looking for applications of the coin problem in other domains (e.g. streaming lower bounds).

This talk is based on work with Joshua Brody, presented in FOCS 2010 (i.e. more or less right now). The paper is available at: http://www.cs.dartmouth.edu/~jbrody/papers/TheCoinProblemFOCS.pdf


4 November 2010

Gates 498, 4:15 PM

Jelani Nelson (MIT)

Optimal Moment Estimation in Data Streams

We close the problem of understanding the space complexity of pth moment estimation in data streams for 0 < p <= 2 by giving the first optimal upper and lower bounds. In this problem, a high-dimensional vector receives a long sequence of coordinate-wise updates, and we must output a low relative-error approximation of the pth moment at the end of the stream while keeping a memory footprint exponentially smaller than the vector length. This problem has applications in network traffic monitoring, databases, and simply computing distance measures across massive datasets.

We obtain the upper bound by showing an invariance principle for sums of boundedly independent p-stable random variables, which may be of independent interest. Our main ingredient in this proof is the introduction of an approximation-theoretic tool we dub "FT-mollification", which has since found other applications in agnostic learning, pseudorandomness, and approximation theory. We obtain the lower bound by showing a direct sum property for the one-way communication complexity of the GapHamming problem.
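
For intuition about the upper-bound side, here is the classical p-stable sketch of Indyk for p = 1, which this line of work refines: the median of the absolute counters estimates the L_1 norm. (A simplified sketch using fully independent Cauchy variables; among other things, the paper shows that bounded independence suffices.)

import math, random, statistics

def cauchy(rng):
    """Standard Cauchy variate: 1-stable, so sum_i x[i]*C_i is
    distributed as ||x||_1 times a standard Cauchy."""
    return math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(0)
dim, rows = 100, 401
A = [[cauchy(rng) for _ in range(dim)] for _ in range(rows)]  # sketch matrix
counters = [0.0] * rows

def update(i, delta):
    """Coordinate-wise stream update x[i] += delta, folded into the sketch."""
    for r in range(rows):
        counters[r] += A[r][i] * delta

for i, delta in [(3, 5), (7, -2), (3, -1), (50, 10)]:  # a toy stream
    update(i, delta)

# The median of |Cauchy| is 1, so the median of |counters| estimates ||x||_1.
print(statistics.median(abs(c) for c in counters))  # close to 4 + 2 + 10 = 16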

Joint work with Daniel M. Kane (Harvard) and David P. Woodruff (IBM Almaden)


11 November 2010

Gates 463A, 4:15 PM

Silvio Micali (MIT)

Conservative Rationalizability and the Second Knowledge Mechanism

Bayesian assumptions are the traditional way to model uncertainty in settings of incomplete information. Such assumptions, however, are very strong, and may not hold in many real applications. We thus put forward (1) a set-theoretic way to model the knowledge that a player might have about his opponents, together with (2) a new class of mechanisms (not equilibrium-based) capable of leveraging this more conservative knowledge in a robust way.

In auctions of a single good, we show that one such mechanism can perfectly guarantee a revenue benchmark (always between the second-highest and the highest valuation) that no classical mechanism can even approximate in any robust way.

Joint work with Jing Chen.


18 November 2010

Gates 498, 4:15 PM

Amit Sahai (UCLA)

Resolving the Asymptotic Computational Complexity of Universal Hashing

In this talk, we resolve a 30-year-old line of work on the asymptotic computational complexity of universal hashing, an intensively studied topic since the introduction of universal hashing by Carter and Wegman (STOC 1977). Universal hash functions are fundamental algorithmic constructs, and their use is widespread throughout computer science. It was widely believed that pairwise-independent hashing, because of its pseudorandom properties, is inherently computationally costly. Our work shows that this is not the case by giving an explicit construction of linear-size circuits for universal hash functions, refuting a conjecture of Mansour, Nisan, and Tiwari (STOC 1990).
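
For context, the textbook Carter-Wegman family below is pairwise independent, and its per-evaluation multiplication is representative of the cost that a linear-size circuit construction avoids (a standard sketch, not the construction from the talk):

import random

class CarterWegmanHash:
    """The classic pairwise-independent family h(x) = ((a*x + b) mod p) mod m,
    for keys x in [0, p) with p prime (here the Mersenne prime 2^61 - 1)."""
    def __init__(self, m, p=(1 << 61) - 1):
        self.p, self.m = p, m
        self.a = random.randrange(1, p)
        self.b = random.randrange(p)

    def __call__(self, x):
        return ((self.a * x + self.b) % self.p) % self.m

h = CarterWegmanHash(m=1024)
print(h(42), h(43))  # pairwise independent over the random draw of (a, b)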

We were led to this result through our study of methods to reduce the computational complexity of cryptographic primitives and protocols. Current constructions of cryptographic primitives typically involve a large multiplicative computational overhead that grows with the desired level of security. We explore the possibility of implementing cryptographic primitives (such as encryption, authentication, signatures, or secure two-party computation) while incurring only a *constant* computational overhead compared to insecure implementations of the same tasks.

This is joint work with Yuval Ishai, Eyal Kushilevitz, and Rafail Ostrovsky.


1 December 2010

Gates 498, 4:15 PM

Ricky Rosen (Tel Aviv University)

A Strong Parallel Repetition Theorem for Projection Games on Expanders.

The parallel repetition theorem states that for any two-prover game with value at most $1-\epsilon$ (for $\epsilon < 1/2$), the value of the game repeated $n$ times in parallel is at most $(1-\epsilon^3)^{\Omega(n/s)}$, where $s$ is the length of the answers of the two provers (Raz 1998; Holenstein 2007). For projection games, the bound on the value of the game repeated $n$ times in parallel was improved to $(1-\epsilon^2)^{\Omega(n)}$ (Rao 2008), and this bound was shown to be tight (Raz 2008).

In this paper we study the case where the underlying distribution, according to which the questions for the two provers are generated, is uniform over the edges of a (bipartite) expander graph.

We show that if $\lambda$ is the (normalized) spectral gap of the underlying graph, the value of the repeated game is at most $(1-\epsilon^2)^{\Omega(c(\lambda) \cdot n/s)}$, where $c(\lambda) = \poly(\lambda)$; and if in addition the game is a projection game, we obtain a bound of $(1-\epsilon)^{\Omega(c(\lambda) \cdot n)}$, where $c(\lambda) = \poly(\lambda)$, that is, a strong parallel repetition theorem (when $\lambda$ is constant).

This gives a strong parallel repetition theorem for a large class of two prover games.

This is a joint work with Ran Raz.


2 December 2010

Gates 498, 4:15 PM

James Lee (University of Washington)

Discrete differentiation and local rigidity of cuts

Global rigidity of nice cuts in graphs is a well-studied phenomenon in complexity theory and hardness of approximation. For instance, consider the results of Kahn-Kalai-Linial and Friedgut which state that balanced cuts in the hypercube that don't cut too many edges must be close to halfspaces. Starting with work of Cheeger and Kleiner on the Heisenberg group, it became clear that _local_ rigidity of cuts plays a fundamental role in understanding low-distortion embeddings into L_1. In joint work with Naor, we showed that this has implications for the approximability of the general Sparsest Cut problem in graphs.

I will talk about a discrete version of this phenomenon, and how it can be applied to understand a number of questions in metric embeddings and semi-definite programming. In particular, in joint work with Moharrami, we show a large gap between the "weak" and "strong" SDP triangle inequalities, resolving a question that has been open since 2003.
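
For reference, the constraints in question on the Sparsest Cut SDP are the negative-type ("squared triangle") inequalities on the solution vectors (this display is only the standard constraint; the precise "weak" versus "strong" formulations are part of the talk):

\[
\|v_i - v_j\|^2 \;\le\; \|v_i - v_k\|^2 + \|v_k - v_j\|^2 \qquad \text{for all } i, j, k,
\]

i.e., the squared Euclidean distances between the SDP vectors must themselves form a metric.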

Using a more sophisticated version of this technique, I will analyze a doubling metric space whose n-point subspaces require distortion sqrt(log n / log log n) to embed into L_1, nearly matching the upper bound of Gupta-Krauthgamer-Lee (2003). This work, joint with Sidiropoulos, yields a nearly-tight "weak" integrality gap for the Sparsest Cut SDP. I will end by discussing how this weak gap might be amplified to fully resolve the negative-type embedding conjecture of Goemans and Linial.

The talk will be elementary in nature; in particular, most of the technical material will involve the structure of sets in the Euclidean plane, and discrete/randomized/approximate versions of classical results from integral geometry.


20 January 2011

Gates 498, 4:15 PM

Mohammad Hossein Bateni (Princeton)

Prize collecting clustering and its applications

Grouping a set of items into clusters according to their similarities is called clustering, a technique now common in applications as different as bioinformatics, data mining, machine learning, and social network analysis. Considerable effort has been put into the study of clustering techniques in recent years. We introduce a new clustering paradigm in which the items to be classified are vertices of a graph. Vertices have their own budgets, and the goal is to cluster them such that the cost of (connections in) each cluster can be paid by the budgets of its participants. Furthermore, we want vertices in different clusters to be, in some sense, far from each other. We propose and analyze algorithms for two similar problems in this paradigm. The resulting guarantees lead to the resolution of several network design questions. We improve the approximation ratios for prize-collecting Steiner tree and prize-collecting TSP. In addition, we present polynomial-time approximation schemes (PTASs) for planar Steiner forest, planar multiway cut, and planar prize-collecting Steiner tree. The talk is based on joint works with Aaron Archer, MohammadTaghi Hajiaghayi, Howard Karloff, Philip Klein, Daniel Marx, and Claire Mathieu.


27 January 2011

Gates 498, 4:15 PM

Debmalya Panigrahi (MIT)

A General Framework for Graph Sparsification

In this talk, I will present a general framework for constructing cut sparsifiers in undirected graphs, and use it to simplify, unify and improve upon previous sparsification results. As simple instantiations of this framework, we show that sparsifiers can be constructed by sampling edges according to their strength (a result of Bencz\'ur and Karger), effective resistance (a result of Spielman and Srivastava), edge connectivity (this resolves the main open question posed by Bencz\'ur and Karger) and Nagamochi-Ibaraki indices (this yields an elementary sparsification algorithm). In addition, we develop techniques that give the first (strictly) linear-time sparsification algorithm for unweighted graphs. Our algorithm produces sparsifiers containing O(n log n) edges in expectation; the only known construction of sparsifiers with fewer edges is by a substantially slower algorithm running in O(n^3 m) time. A key component of our results that might be of independent interest is a natural generalization of a seminal cut counting theorem due to Karger.
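
All of these instantiations share one importance-sampling skeleton, sketched minimally below; the substance of the framework lies in choosing the probabilities p[e] from connectivity parameters so that every cut stays concentrated, which the sketch does not attempt.

import random

def sample_sparsifier(edges, p):
    """Keep edge e independently with probability p[e], reweighted by
    1/p[e], so every cut's expected weight equals its original weight."""
    return [(e, 1.0 / p[e]) for e in edges if random.random() < p[e]]

edges = [(0, 1), (1, 2), (0, 2)]
print(sample_sparsifier(edges, {e: 0.5 for e in edges}))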

This work was done in collaboration with Ramesh Hariharan. Some of these results were also obtained by Fung and Harvey in independent concurrent work.


3 February 2011

Gates 498, 4:15 PM

Ilya Mironov (Microsoft Research)

The Limits of Two-Party Differential Privacy: Differential Privacy, Deterministic Extractors and Communication Complexity.

The rigorous notion of differential privacy introduced by Dwork et al. in 2006 is transforming research in the area of privacy in statistical databases. Among the many variations of the typical setting of differentially private mechanisms (static, dynamic, on-line, intrusion-resistant, etc.), most assume a single source and a single curator of sensitive data. A fundamental question in data analysis is whether the guarantee of differential privacy can be extended to a distributed setting with more than one party.

We study differential privacy in a symmetric setting where two parties would like to perform analysis of their joint data while preserving privacy for both datasets. Our results imply almost tight lower bounds on the accuracy of such data analyses, both for specific natural functions (such as Hamming distance) and in general. Our bounds expose a sharp contrast between the two-party setting and the simpler client-server setting (where privacy guarantees are one-sided). In addition, those bounds demonstrate a dramatic gap between the accuracy that can be obtained by differentially private data analysis and the accuracy obtainable when privacy is relaxed to a computational variant of differential privacy.

The first proof technique we develop demonstrates a connection between differential privacy and deterministic extraction from weak sources. A second connection indicates that the ability to approximate a function by a low-error differentially private protocol is strongly related to the ability to approximate it by a low-communication protocol. (The connection goes in both directions.)

Joint work with Andrew McGregor (UMass Amherst), Toniann Pitassi (U Toronto), Omer Reingold (MSR Silicon Valley), Kunal Talwar (MSR Silicon Valley), and Salil Vadhan (Harvard).
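
For contrast with these two-party lower bounds, the client-server baseline is simple: a single curator holding both datasets can release the Hamming distance with only O(1/eps) additive error via the standard Laplace mechanism. A textbook sketch (not from the paper):

import random

def laplace_noise(scale):
    """Laplace(0, scale) noise, as a difference of two exponentials."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_hamming_server(x, y, eps):
    """Changing one bit of either input shifts the Hamming distance by at
    most 1 (sensitivity 1), so adding Laplace(1/eps) noise gives
    eps-differential privacy with O(1/eps) expected additive error --
    exactly the accuracy the two-party lower bounds rule out."""
    d = sum(a != b for a, b in zip(x, y))
    return d + laplace_noise(1.0 / eps)

print(private_hamming_server([0, 1, 1, 0], [1, 1, 0, 0], eps=0.5))  # ~2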

17 February 2011

Gates 498, 4:15 PM

Andrew Goldberg (Microsoft Research, Silicon Valley)

Highway Dimension: From Practice to Theory and Back.

Computing driving directions has motivated many shortest path heuristics that answer queries on continental-scale networks, with tens of millions of intersections, in real time. The underlying algorithms are intuitive, but until recently there was no theoretical explanation of their performance. We review some of these algorithms and give the first theoretical analysis for them on a non-trivial class of networks.

For our analysis, we introduce the notion of highway dimension. The analysis works for networks with low highway dimension and gives a unified explanation of algorithms' good performance. Our analysis explores an unexpected relationship to VC-dimension and applies boosting to graph algorithm design.

We also show that a labeling algorithm, previously studied by the distributed computation community, achieves a better theoretical time bound. This motivates a heuristic implementation of the labeling algorithm. Our experiments show that the implementation outperforms the fastest previous codes, validating the theoretical prediction.
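
The query side of such a labeling algorithm is just a sorted-list intersection; a minimal sketch (the talk's contribution concerns bounding the label sizes via highway dimension, not this query routine):

def hub_label_query(label_u, label_v):
    """Each vertex stores a list of (hub, distance) pairs, sorted by hub id,
    such that some shortest u-v path passes through a hub common to both
    labels; the distance query is then a linear-time merge."""
    i, j, best = 0, 0, float("inf")
    while i < len(label_u) and j < len(label_v):
        hu, du = label_u[i]
        hv, dv = label_v[j]
        if hu == hv:
            best = min(best, du + dv)
            i += 1
            j += 1
        elif hu < hv:
            i += 1
        else:
            j += 1
    return best

# Toy labels for the path graph 0 - 1 - 2, using vertex 1 as the hub:
print(hub_label_query([(0, 0), (1, 1)], [(1, 1), (2, 0)]))  # 2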

Joint work with Ittai Abraham, Amos Fiat, and Renato Werneck.


1 March 2011

Gates 463A. 4:15 PM.

Facundo Memoli (Department of Mathematics, Stanford University)

The Gromov-Hausdorff distance and variants: lower bounds and applications.

The Gromov-Hausdorff distance is a notion of distance between metric spaces which was conceived by Gromov in the late 80s for studying topological properties of families of geometric structures. Recently, the GH distance has found applications in the context of matching objects under invariances. However, its direct computation leads to a bottleneck quadratic assignment problem and is thus NP-hard. Much of the effort has therefore been put into finding variants that lead to somewhat simpler problems, and also into producing families of lower bounds with different discriminative power. In this talk I will give an overview of the subject and discuss some applications.
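
As a taste of the lower bounds, the crudest member of the family compares diameters; a minimal sketch for finite metric spaces given as distance matrices (a folklore bound, not the talk's sharpest one):

def diameter(D):
    """Diameter of a finite metric space given as a distance matrix."""
    return max(max(row) for row in D)

def gh_diameter_lower_bound(DX, DY):
    """d_GH(X, Y) >= |diam(X) - diam(Y)| / 2 for any metric spaces X, Y."""
    return abs(diameter(DX) - diameter(DY)) / 2

# Two 2-point spaces, at distances 1 and 3 respectively:
print(gh_diameter_lower_bound([[0, 1], [1, 0]], [[0, 3], [3, 0]]))  # 1.0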

3 March 2011

Motwani Distinguished Lecture

Huang Engineering Center, 3rd floor, Mackenzie Room. 4:15 PM.

Moses Charikar (Princeton University)

Dimension Reduction in L_1

Given a set of n points in a normed space, how many dimensions are needed to represent all distances within a specified distortion? This dimension-distortion tradeoff is well understood for the L_2 norm, where O(log n) dimensions suffice to preserve distances almost exactly. In sharp contrast, the situation for L_1 is far from understood, and there is a significant gap between the upper and lower bounds. In this talk, I will survey what we currently know about dimension reduction in L_1 and present the first near-linear lower bounds for this question.
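
For comparison, the L_2 statement above is the Johnson-Lindenstrauss lemma, realizable by a Gaussian random projection; a standard sketch (with an illustrative constant of 8, not a tuned one):

import math, random

def jl_project(points, eps, seed=0):
    """Project n points onto k = O(log n / eps^2) random Gaussian
    directions; with high probability all pairwise L_2 distances are
    preserved to within a 1 +/- eps factor.  No analogous dimension
    bound holds for L_1, which is the subject of the talk."""
    n, d = len(points), len(points[0])
    k = max(1, math.ceil(8 * math.log(n) / eps ** 2))
    rng = random.Random(seed)
    G = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(row[i] * p[i] for i in range(d)) for row in G] for p in points]

pts = [[random.random() for _ in range(1000)] for _ in range(32)]
print(len(jl_project(pts, eps=0.5)[0]))  # 111 dimensions for 32 points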

21 April 2011

Gates 498, 4:15 PM.

Daniele Micciancio (University of California, San Diego)

Solving all Lattice Problems in Deterministic Single Exponential Time

We give deterministic exp(n)-time algorithms to solve all the most important computational problems on point lattices in NP, including the Shortest Vector Problem (SVP), Closest Vector Problem (CVP), and Shortest Independent Vectors Problem (SIVP). This improves the exp(n*log(n)) running time of the best previously known algorithms for CVP (Kannan, 1987) and SIVP (Micciancio, 2008), and gives a deterministic alternative to the randomized SVP algorithm of Ajtai, Kumar, and Sivakumar (2001). The core of our algorithm is a new method to solve the closest vector problem with preprocessing (CVPP) that uses the Voronoi cell of the lattice (described as an intersection of half-spaces) as the result of the preprocessing function. Talk based on joint work with P. Voulgaris.
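
For orientation, the simplest polynomial-time baseline for CVP is Babai's rounding, which is fast but only approximate; the Voronoi-cell algorithm of the talk is exact (at single-exponential cost). A sketch of the baseline, with basis vectors as the rows of B:

def babai_rounding(B, B_inv, t):
    """Write t in lattice coordinates (c = t * B^{-1}), round each
    coordinate to the nearest integer, and map back to a lattice point.
    Only an approximation to the closest vector, and a poor one for
    badly reduced bases."""
    n = len(t)
    c = [sum(t[j] * B_inv[j][i] for j in range(n)) for i in range(n)]
    z = [round(ci) for ci in c]
    return [sum(z[i] * B[i][j] for i in range(n)) for j in range(n)]

# The lattice spanned by (1, 1) and (1, -1):
B = [[1, 1], [1, -1]]
B_inv = [[0.5, 0.5], [0.5, -0.5]]
print(babai_rounding(B, B_inv, [2.2, 0.1]))  # [2, 0]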

28 April 2011

Gates 498, 4:15 PM.

Ilias Diakonikolas (University of California, Berkeley)

Regularity Lemma for Polynomial Threshold Functions

One of the most common ways to represent a Boolean function is as the sign of a real polynomial. If a Boolean function can be sign-represented by a polynomial of degree d, it is called a degree-d Polynomial Threshold Function (PTF). PTFs have played an important role in computer science for decades, at least since the 1968 work of Minsky and Papert on perceptrons.

In this talk, I will present a "regularity lemma" for low-degree polynomial threshold functions (PTFs) over the Boolean hypercube. Roughly speaking, this result shows that every degree-d PTF can be decomposed into a constant number of subfunctions such that almost all of the subfunctions are close to being "regular" PTFs. Hence, the result provides a way to reduce questions about arbitrary PTFs to ones about "regular" PTFs.

This lemma has played an important role in a range of recent results for PTFs. These include the discovery of various structural properties of PTFs -- which in turn lead to efficient learning algorithms -- and the construction of explicit pseudorandom generators. The talk will discuss these applications and emphasize the connections between them.

(Based on joint works with R. Servedio, Li-Yang Tan, and Andrew Wan.)
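
"Regular" is measured through influences; a brute-force sketch of both notions (illustrative only, exponential in n):

from itertools import product
from math import prod

def make_ptf(coeffs):
    """A PTF over {-1,1}^n: coeffs maps each monomial (a tuple of variable
    indices) to a real coefficient; the function is the polynomial's sign."""
    def f(x):
        return 1 if sum(c * prod(x[i] for i in mono)
                        for mono, c in coeffs.items()) >= 0 else -1
    return f

def influences(f, n):
    """Inf_i(f) = Pr_x[f(x) != f(x with coordinate i flipped)].  A PTF is
    'regular' when every influence is small; the regularity lemma splits
    an arbitrary PTF into pieces that are almost all close to regular."""
    infs = [0.0] * n
    for x in product([-1, 1], repeat=n):
        fx = f(list(x))
        for i in range(n):
            y = list(x)
            y[i] = -y[i]
            infs[i] += (fx != f(y))
    return [v / 2 ** n for v in infs]

# Majority on 3 bits: a degree-1 PTF; every variable has influence 1/2.
maj3 = make_ptf({(0,): 1, (1,): 1, (2,): 1})
print(influences(maj3, 3))  # [0.5, 0.5, 0.5]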

5 May 2011

Motwani Distinguished Lecture

CIS Auditorium, 4:15 PM.

David Karger (MIT)

Random Sampling and Cuts in Graphs: Three Combinatorial Proofs

I'll synthesize work showing that randomly sampling the edges of an undirected graph preserves cut values. I'll give three distinct proofs, based on random contraction, tree packing, and network reliability, that the values of cuts are tightly concentrated around their expectations when those expectations are sufficiently large. I'll give a combinatorial construction for sparsifying any n-vertex undirected graph down to O(n log n) edges with only small perturbations in cut value, and show how it can be used in fast algorithms for approximate and exact maximum flow computations. While one can achieve slightly better sparsification using algebraic techniques, I believe these combinatorial methods offer useful insights and may yield more in the future. The talk is self-contained, requiring only elementary knowledge of combinatorics and probability.
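
The random-contraction argument doubles as an algorithm; a compact sketch with union-find bookkeeping (a standard formulation, not tuned for speed):

import random

def karger_min_cut(edges, n, trials=None):
    """Karger's contraction: repeatedly contract a random edge until two
    super-vertices remain; the edges crossing between them form a cut, and
    one run finds a minimum cut with probability >= 2/(n*(n-1)), so
    O(n^2 log n) independent runs succeed with high probability."""
    def one_run():
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v
        remaining = n
        while remaining > 2:
            u, v = edges[random.randrange(len(edges))]
            ru, rv = find(u), find(v)
            if ru != rv:  # skip edges inside an already-contracted vertex
                parent[ru] = rv
                remaining -= 1
        return sum(1 for u, v in edges if find(u) != find(v))
    runs = trials or n * n
    return min(one_run() for _ in range(runs))

# Two triangles joined by one bridge edge: the minimum cut is 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(karger_min_cut(edges, 6))  # 1, with high probability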

12 May 2011

Gates 498, 4:15 PM.

Miklos Ajtai (IBM Almaden)

Secure Computation with Information Leaking to an Adversary

Assume that Alice is running a program $P$ on a RAM, and an adversary Bob would like to get some information about the input of the program. At each time, Bob is able to see the addresses of the memory cells involved in the instruction which is executed. In addition to this, at certain times, Bob can even see the contents of all of the memory cells involved in the instruction. We will call a time when this happens a compromised time. Bob can choose the compromised times in an adaptive way, that is, immediately before the instruction at time $t$ is executed, Bob, using all of the information at his disposal, can decide whether time $t$ will be compromised or not. The only restriction on his choice is that among $m$ consecutive instructions there can be at most $\epsilon m$ whose time is compromised, where $\epsilon>0$ is a small constant. We show that if $m=O(\log n)$, then for each program $P$, using $n$ memory cells and time $T<\poly(n)$, Alice can construct a functionally equivalent program $P'$, such that the probability that Bob gets any nontrivial information about the input of $P$ is negligible, and the time and space requirements of $P'$ grow, compared to $P$, only by a factor of $\poly(\log n)$. We assume that the program $P'$ gets its input in an encoded form, namely each input bit $b$ is encoded by a random $0,1$-sequence of length $m$ whose parity is $b$. The output bits of $P'$ must be encoded by $P'$ in a similar way.
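
The input encoding is easy to make concrete; a minimal sketch (any proper subset of an encoding's positions is uniformly random, which is what makes partial leakage harmless):

import random

def encode_bit(b, m):
    """Encode bit b as a uniformly random 0/1 sequence of length m whose
    parity is b, as the compiled program P' expects its input."""
    seq = [random.randrange(2) for _ in range(m - 1)]
    seq.append((b - sum(seq)) % 2)
    return seq

def decode_bit(seq):
    """Recover the bit as the parity of the sequence."""
    return sum(seq) % 2

m = 16
assert all(decode_bit(encode_bit(b, m)) == b for b in (0, 1))
print("parity encoding round-trips")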

19 May 2011

Gates 498, 4:15 PM.

Luca Trevisan (Stanford University)

TBA