In order of submission.
  1. Settling the Polynomial Learnability of Mixtures of Gaussians
    Authors: Ankur Moitra (MIT) and Gregory Valiant (UC Berkeley)
    Abstract: Given data drawn from a mixture of multivariate Gaussians, a basic problem is to accurately estimate the mixture parameters. We give an algorithm for this problem that has running time and data requirements polynomial in the dimension and the inverse of the desired accuracy, with provably minimal assumptions on the Gaussians. As simple consequences of our learning algorithm, we can perform near-optimal clustering of the sample points and density estimation for mixtures of $k$ Gaussians, efficiently.

    The building blocks of our algorithm are based on the work of Kalai \emph{et al.} (STOC 2010)~\cite{2Gs}, which gives an efficient algorithm for learning mixtures of two Gaussians by considering a series of projections down to one dimension, and applying the \emph{method of moments} to each univariate projection. A major technical hurdle in~\cite{2Gs} is showing that one can efficiently learn \emph{univariate} mixtures of two Gaussians. In contrast, because pathological scenarios can arise when considering univariate projections of mixtures of more than two Gaussians, the bulk of the work in this paper concerns how to leverage an algorithm for learning univariate mixtures (of many Gaussians) to yield an efficient algorithm for learning in high dimensions. Our algorithm employs \emph{hierarchical clustering} and rescaling, together with delicate methods for backtracking and recovering from failures that can occur in our univariate algorithm.
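
    As a minimal illustration of the method-of-moments ingredient described above, the sketch below (ours, not the authors') projects multivariate samples onto a direction and computes the first six empirical moments of the projection, the statistics that a univariate method-of-moments step would match; the polynomial-solving step that actually recovers the parameters is omitted.

        import numpy as np

        def projected_moments(samples, direction, num_moments=6):
            # Project d-dimensional samples onto a unit vector and return the
            # first num_moments empirical raw moments of the projection.
            direction = direction / np.linalg.norm(direction)
            proj = samples @ direction  # one-dimensional data
            return [np.mean(proj ** k) for k in range(1, num_moments + 1)]

        # Toy usage: samples from a mixture of two spherical Gaussians in R^5.
        rng = np.random.default_rng(0)
        samples = np.vstack([rng.normal(0.0, 1.0, (500, 5)),
                             rng.normal(3.0, 1.0, (500, 5))])
        print(projected_moments(samples, np.ones(5)))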

    Finally, while the running time and data requirements of our algorithm depend exponentially on the number of Gaussians in the mixture, we prove that such a dependence is necessary.

  2. Solving linear systems through nested dissection
    Authors: Noga Alon (Tel Aviv University) and Raphael Yuster (University of Haifa)
    Abstract: The generalized nested dissection method, developed by Lipton, Rose, and Tarjan, is a seminal method for solving a linear system $Ax=b$ where $A$ is a symmetric positive definite matrix. The method runs extremely fast whenever $A$ is a well-separable matrix (such as matrices whose underlying support is planar or avoids a fixed minor). In this work we extend the nested dissection method to apply to {\em any} non-singular well-separable matrix over {\em any} field. The running times we obtain essentially match those of the nested dissection method.
  3. Improved Bounds for Geometric Permutations
    Authors: Natan Rubin, Haim Kaplan and Micha Sharir (Tel Aviv University)
    Abstract: We show that the number of geometric permutations of an arbitrary collection of $n$ pairwise disjoint convex sets in $\mathbb{R}^d$, for $d\geq 3$, is $O(n^{2d-3}\log n)$, improving Wenger's 20-year-old bound of $O(n^{2d-2})$.
  4. Constructive Algorithms for Discrepancy Minimization
    Authors: Nikhil Bansal (IBM Research)
    Abstract: Given a set system (V,S), V=[n] and S={S_1,\ldots,S_m}, the minimum discrepancy problem is to find a 2-coloring X:V -> {-1,+1}, such that each set is colored as evenly as possible, i.e. find X to minimize max_{j \in [m]} |\sum_{i \in S_j} X(i)|.
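
    To make the objective concrete, here is a small brute-force sketch (ours, for illustration only) that evaluates the discrepancy of a coloring exactly as defined above and finds the minimum over all 2^n colorings; the paper's algorithms are of course far more efficient.

        import itertools

        def discrepancy(sets, coloring):
            # max_j |sum_{i in S_j} X(i)| for a coloring X: [n] -> {-1,+1}.
            return max(abs(sum(coloring[i] for i in S)) for S in sets)

        def min_discrepancy(n, sets):
            # Exhaustive search over all 2^n colorings (tiny n only).
            return min(discrepancy(sets, signs)
                       for signs in itertools.product((-1, 1), repeat=n))

        sets = [{0, 1, 2}, {1, 2, 3}, {0, 3}]
        print(min_discrepancy(4, sets))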

    In this paper we give the first polynomial time algorithms for discrepancy minimization that achieve bounds similar to those known existentially using the so-called Entropy Method. We also give the first approximation-like result for discrepancy. Specifically, we give efficient randomized algorithms to:

    1. Construct an $O(\sqrt{n})$ discrepancy coloring for general set systems when $m=O(n)$, matching the celebrated result of Spencer up to constant factors. Previously, no algorithmic guarantee better than the random coloring bound, i.e. O((n \log n)^{1/2}), was known. More generally, for $m\geq n$, we obtain a discrepancy bound of O(n^{1/2} \log (2m/n)).

    2. Construct a coloring with discrepancy $O(t^{1/2} \log n)$, if each element lies in at most $t$ sets. This matches the (non-constructive) result of Srinivasan \cite{Sr}.

    3. Construct a coloring with discrepancy O(\lambda\log(nm)), where \lambda is the hereditary discrepancy of the set system. In particular, this implies a logarithmic approximation for hereditary discrepancy.

    The main idea in our algorithms is to gradually produce a coloring by solving a sequence of semidefinite programs, while using the entropy method to guide the choice of the semidefinite program at each stage.

  5. An efficient test for product states, with applications to quantum Merlin-Arthur games
    Authors: Aram W. Harrow (Department of Mathematics, University of Bristol, and Department of Computer Science and Engineering, University of Washington) and Ashley Montanaro (Department of Computer Science, University of Bristol, and Department of Applied Mathematics and Theoretical Physics, University of Cambridge)
    Abstract: We give a test that can distinguish efficiently between product states of n quantum systems and states which are far from product. If applied to a state psi whose maximum overlap with a product state is 1-epsilon, the test passes with probability 1-Theta(epsilon), regardless of n or the local dimensions of the individual systems. The test uses two copies of psi. We prove correctness of this test as a special case of a more general result regarding stability of maximum output purity of the depolarising channel.

    A key application of the test is to quantum Merlin-Arthur games, where we show that a witness from two unentangled provers can simulate a witness from arbitrarily many unentangled provers, up to a constant loss of soundness. Building on a previous result of Aaronson et al, this implies that there is an efficient quantum algorithm to verify SAT with constant soundness, given two unentangled proofs of O(sqrt(n) polylog(n)) qubits. This result implies complexity-theoretic obstructions to finding a polynomial-time algorithm to determine separability of mixed quantum states, even up to constant error, and also to proving "weak" variants of the additivity conjecture for quantum channels.

    Finally, our test can also be used to construct an efficient test for determining whether a unitary operator is a tensor product, which is a generalisation of classical linearity testing.

  6. Replacement Paths via Fast Matrix Multiplication
    Authors: Oren Weimann (Weizmann Institute of Science) and Raphael Yuster (University of Haifa)
    Abstract: Let G = (V, E) be a directed edge-weighted graph and let P be a shortest path from s to t in G. The {\em replacement paths} problem asks to compute, for every edge e on P, the shortest s-to-t path that avoids e. Apart from approximation algorithms and algorithms for special graph classes, the naive solution to this problem -- removing each edge e on P one at a time and computing the shortest s-to-t path each time -- is surprisingly the only known solution for directed weighted graphs, even when the weights are integral. In particular, although the related {\em shortest paths} problem has benefited from fast matrix multiplication, the replacement paths problem has not, and still requires cubic time. For an n-vertex graph with integral edge-lengths in {-M,...,M}, we give a randomized O(Mn^{1+2\omega/3}) = O(Mn^{2.584}) time algorithm that uses fast matrix multiplication and is sub-cubic for appropriate values of M. In particular, it runs in O(n^{1+2\omega/3}) time if the weights are small (positive or negative) integers. We also show how to construct a distance sensitivity oracle in the same time bounds. A query (u,v,e) to this oracle requires sub-quadratic time and returns the length of the shortest u-to-v path that avoids the edge e. In fact, for any constant number of edge failures, we construct a data structure in sub-cubic time that answers queries in sub-quadratic time. Our results also apply to avoiding vertices rather than edges.
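
    For reference, the naive cubic solution mentioned above is easy to state in code. The sketch below (ours; it assumes non-negative weights so that Dijkstra applies, unlike the paper's more general setting) recomputes the s-to-t distance once per edge of P.

        import heapq

        def dijkstra(n, adj, s, t, banned=frozenset()):
            # Shortest s-to-t distance in an adjacency-list digraph,
            # skipping the edges in `banned`.
            dist = [float('inf')] * n
            dist[s] = 0
            pq = [(0, s)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue
                for v, w in adj[u]:
                    if (u, v) not in banned and d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(pq, (dist[v], v))
            return dist[t]

        def naive_replacement_paths(n, adj, path):
            # One shortest-path computation per edge of P: with Dijkstra this
            # is O(|P| * m log n) overall, cubic on dense graphs.
            s, t = path[0], path[-1]
            return {(u, v): dijkstra(n, adj, s, t, banned={(u, v)})
                    for u, v in zip(path, path[1:])}
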
  7. Logspace Versions of the Theorems of Bodlaender and Courcelle
    Authors: Michael Elberfeld, Andreas Jakoby, Till Tantau, Institute of Theoretical Computer Science, University of Lubeck, Germany.
    Abstract: Bodlaender's Theorem states that for every $k$ there is a linear-time algorithm that decides whether an input graph has tree width~$k$ and, if so, computes a width-$k$ tree decomposition. Courcelle's Theorem builds on Bodlaender's Theorem and states that for every monadic second-order formula $\phi$ and for every $k$ there is a linear-time algorithm that decides whether a given logical structure $\mathcal A$ of tree width at most $k$ satisfies $\phi$. We prove that both theorems still hold when ``linear time'' is replaced by ``logarithmic space.'' The transfer of the powerful theoretical framework of monadic second-order logic and bounded tree width to logarithmic space allows us to settle a number of both old and recent open problems in the logspace world.
  8. Impossibility of Differentially Private Universally Optimal Mechanisms
    Authors: Kobbi Nissim (Microsoft AI, Israel, and Dept. of Computer Science, Ben-Gurion University) and Hai Brenner (Dept. of Mathematics, Ben-Gurion University)
    Abstract: The notion of {\em a universally utility-maximizing privacy mechanism} was recently introduced by Ghosh, Roughgarden, and Sundararajan~[STOC 2009]. These are mechanisms that guarantee optimal utility to a large class of information consumers {\em simultaneously}, while preserving {\em Differential Privacy} [Dwork, McSherry, Nissim, and Smith, TCC 2006]. Ghosh et al.\ have demonstrated, quite surprisingly, a case where such a universally-optimal differentially-private mechanism exists, when the information consumers are Bayesian. This result was recently extended by Gupte and Sundararajan~[PODS 2010] to risk-averse consumers.
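
    The universally-optimal mechanism of Ghosh et al.\ for a single count query is the geometric mechanism, which perturbs the true count with two-sided geometric noise; the sketch below (ours, offered only as background for the positive result being discussed) samples such noise as the difference of two geometric random variables.

        import numpy as np

        def geometric_mechanism(true_count, epsilon, rng=np.random.default_rng()):
            # Release count + Z with Pr[Z = z] proportional to exp(-epsilon*|z|);
            # this is epsilon-differentially private for a count query, whose
            # sensitivity is 1.
            p = 1.0 - np.exp(-epsilon)
            # The difference of two i.i.d. geometric variables is two-sided
            # geometric noise.
            noise = rng.geometric(p) - rng.geometric(p)
            return true_count + noise

        print(geometric_mechanism(42, epsilon=0.5))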

    Both positive results deal with mechanisms (approximately) computing a {\em single count query} (i.e., the number of individuals satisfying a specific property in a given population), and the starting point of our work is an attempt to extend these results to similar settings, such as sum queries with non-binary individual values, histograms, and two (or more) count queries. We show, however, that universally-optimal mechanisms do not exist for all these queries, both for Bayesian and risk-averse consumers.

    For the Bayesian case, we go further, and give a characterization of those functions that admit universally-optimal mechanisms, showing that a universally-optimal mechanism exists, essentially, only for a (single) count query. At the heart of our proof is a representation of a query function $f$ by its {\em privacy constraint graph} $G_f$ whose edges correspond to values resulting from applying $f$ to neighboring databases.

  9. Determinant Sums for Undirected Hamiltonicity
    Author: Andreas Bjorklund, Lund University, Department of Computer Science, P.O.Box 118, SE-22100 Lund, Sweden
    Abstract: We present a Monte Carlo algorithm for Hamiltonicity detection in an $n$-vertex undirected graph running in $O^*(2^{\frac{3}{4}n})$ time. To the best of our knowledge, this is the first superpolynomial improvement on the worst case runtime for the problem since the $O^*(2^n)$ bound established for TSP almost fifty years ago (Bellman 1962, Held and Karp 1962). It answers in part the first open problem in Woeginger's 2003 survey on exact algorithms for NP-hard problems.

    For bipartite graphs, we improve the bound to $O^*(2^{\frac{1}{2}n})$ time. Both the bipartite and the general algorithm can be implemented to use space polynomial in $n$.

    We combine several recently resurrected ideas to get the result. Our main technical contribution is a new reduction inspired by the algebraic sieving method for $k$-Path (Koutis ICALP 2008, Williams IPL 2009). We transform the Hamiltonicity instance into many smaller Cycle Cover instances in which we are left to count weighted cycle covers over a finite field of characteristic two. We next adopt and apply the determinant summation technique for Exact Set Covers (Bj\"orklund STACS 2010).
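
    One classical fact behind the cycle-cover approach is worth seeing concretely: over a field of characteristic two the determinant equals the permanent, and the permanent of a digraph's adjacency matrix counts its cycle covers. The sketch below (ours; it illustrates only this single ingredient, not the paper's reduction) computes the parity of the number of cycle covers by Gaussian elimination mod 2.

        import numpy as np

        def det_mod2(a):
            # Determinant over GF(2) via Gaussian elimination. In characteristic
            # two det = perm, and perm(adjacency matrix) counts cycle covers, so
            # this returns the parity of the number of cycle covers.
            a = np.array(a, dtype=np.uint8) % 2
            n = len(a)
            for col in range(n):
                pivot = next((r for r in range(col, n) if a[r, col]), None)
                if pivot is None:
                    return 0
                a[[col, pivot]] = a[[pivot, col]]
                for r in range(col + 1, n):
                    if a[r, col]:
                        a[r] ^= a[col]
            return 1

        # Directed 3-cycle: exactly one cycle cover, so the parity is 1.
        print(det_mod2([[0, 1, 0], [0, 0, 1], [1, 0, 0]]))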

  10. A non-linear lower bound for planar epsilon-nets
    Author: Noga Alon, Tel Aviv University and IAS, Princeton
    Abstract: We show that the minimum possible size of an $\epsilon$-net for point objects and line (or rectangle)-ranges in the plane is (slightly) bigger than linear in $1/\epsilon$. This settles a problem raised by Matousek, Seidel and Welzl in 1990.
  11. Pseudorandom generators for CC_0[p] and the Fourier spectrum of low-degree polynomials over finite fields
    Authors: Shachar Lovett, The Weizmann Institute of Science, Partha Mukhopadhyay, Technion - Israel Institute of Technology, Amir Shpilka, Technion - Israel Institute of Technology
    Abstract: In this paper we give the first construction of a pseudorandom generator, with seed length $O(\log n)$, for $\mathrm{CC}_0[p]$, the class of constant-depth circuits with unbounded fan-in $\mathrm{MOD}_p$ gates, for some prime $p$. More accurately, the seed length of our generator is $O(\log n)$ for any constant error $\epsilon>0$. In fact, we obtain our generator by fooling distributions generated by low degree polynomials, over $\mathbb{F}_p$, {\em when evaluated on the Boolean cube}. This result significantly extends previous constructions that either required a long seed~\cite{LubyVeWi93} or that could only fool the distribution generated by linear functions over $\mathbb{F}_p$, when evaluated on the Boolean cube~\cite{LovettReTrVa09,MekaZu09:small_bias}.

    En route to constructing our PRG, we prove two structural results for low degree polynomials over finite fields that may be of independent interest.

    \begin{enumerate} \item Let $f$ be an $n$-variate degree $d$ polynomial over $\mathbb{F}_p$. Then, for every $\epsilon>0$ there exists a subset $S \subset [n]$, whose size depends only on $d$ and $\epsilon$, such that $\sum_{\alpha \in \mathbb{F}_p^n: \alpha \ne 0, \alpha_S=0}|\hat{f}(\alpha)|^2 \leq \epsilon$. Namely, there is a constant size subset $S$ such that the total weight of the nonzero Fourier coefficients that do not involve any variable from $S$ is small.

    \item Let $f$ be an $n$-variate degree $d$ polynomial over $\mathbb{F}_p$. If the distribution of $f$ when applied to uniform zero-one bits is $\epsilon$-far (in statistical distance) from its distribution when applied to biased bits, then for every $\delta>0$, $f$ can be approximated over zero-one bits, up to error $\delta$, by a function of a small number (depending only on $\epsilon,\delta$ and $d$) of lower degree polynomials. \end{enumerate}

  12. A lower bound for dynamic approximate membership data structures
    Authors: Shachar Lovett and Ely Porat
    Abstract: An approximate membership data structure is a randomized data structure for representing a set which supports membership queries. It allows for a small false positive error rate but has no false negative errors. Such data structures were first introduced by Bloom~\cite{Bloom70} in the 1970's, and have since had numerous applications, mainly in distributed systems, database systems, and networks.

    The algorithm of Bloom is quite effective: it can store a set $S$ of size $n$ by using only $\approx 1.44 n \log_2(1/\epsilon)$ bits while having false positive error $\epsilon$. This is within a constant factor of the entropy lower bound of $n \log_2(1/\epsilon)$ for storing such sets~\cite{CarterFlGiMaWe78}. Closing this gap is an important open problem, as Bloom filters are widely used in situations where storage is at a premium.
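
    For concreteness, here is a minimal Bloom filter sketch (ours) with the parameters quoted above: $m \approx 1.44 n \log_2(1/\epsilon)$ bits and $k = \lceil\log_2(1/\epsilon)\rceil$ hash functions, here derived from a salted blake2b digest as an arbitrary illustrative choice.

        import hashlib
        import math

        class BloomFilter:
            # Classical Bloom filter: ~1.44 * n * log2(1/eps) bits and
            # k = ceil(log2(1/eps)) hash functions give false positive rate
            # about eps; there are no false negatives.
            def __init__(self, n, eps):
                self.m = math.ceil(1.44 * n * math.log2(1 / eps))  # bits
                self.k = math.ceil(math.log2(1 / eps))             # hashes
                self.bits = bytearray((self.m + 7) // 8)

            def _positions(self, item):
                for salt in range(self.k):
                    h = hashlib.blake2b(f"{salt}:{item}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.m

            def add(self, item):
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def __contains__(self, item):
                return all(self.bits[pos // 8] >> (pos % 8) & 1
                           for pos in self._positions(item))

        bf = BloomFilter(n=1000, eps=0.01)
        bf.add("focs2010")
        print("focs2010" in bf, "stoc2010" in bf)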

    Bloom filters have another property: they are dynamic. That is, they support the iterative insertions of up to $n$ elements. In fact, if one removes this requirement, there exist static data structures which receive the entire set at once and can almost achieve the entropy lower bound~\cite{DietzfelbingerPa08,Porat09:bloom_static}; they require only $n \log_2(1/\epsilon)(1+o(1))$ bits.

    Our main result is a new lower bound for the memory requirements of any dynamic approximate membership data structure. We show that for any constant $\epsilon>0$, any such data structure which achieves false positive error rate of $\epsilon$ must use at least $C(\epsilon) \cdot n \log_2(1/\epsilon)$ memory bits, where $C(\epsilon)>1$ depends only on $\epsilon$. This shows that the entropy lower bound cannot be achieved by dynamic data structures for any constant error rate.

  13. A Decidable Dichotomy Theorem on Directed Graph Homomorphisms with Non-negative Weights
    Authors: Jin-Yi Cai (University of Wisconsin - Madison) and Xi Chen (University of Southern California)
    Abstract: The complexity of graph homomorphism problems has been the subject of intense study. It is a long-standing open problem to give a (decidable) complexity dichotomy theorem for the partition function of directed graph homomorphisms. In this paper, we prove a decidable complexity dichotomy theorem for this problem, and our theorem applies to all non-negative weighted forms of the problem: given any fixed matrix A with non-negative entries, the partition function Z_A(G) of directed graph homomorphisms from any directed graph G is either tractable in polynomial time or #P-hard, depending on the matrix A. The proof of the dichotomy theorem is combinatorial, but involves the definition of an infinite family of graph homomorphism problems. The proof of its decidability is algebraic, using properties of polynomials.
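
    To fix notation, the partition function in question can be written down directly; the brute-force sketch below (ours, exponential in |V| and for illustration only) sums, over all vertex maps h, the product of the matrix entries along the edges.

        from itertools import product

        def Z(A, vertices, edges):
            # Partition function of directed graph homomorphisms:
            # Z_A(G) = sum over maps h: V -> [m] of the product over directed
            # edges (u, v) of A[h(u)][h(v)].
            m = len(A)
            total = 0
            for h in product(range(m), repeat=len(vertices)):
                assign = dict(zip(vertices, h))
                w = 1
                for u, v in edges:
                    w *= A[assign[u]][assign[v]]
                total += w
            return total

        # With a 0/1 matrix A, Z_A(G) counts homomorphisms into the digraph
        # whose adjacency matrix is A.
        A = [[0, 1], [1, 0]]
        print(Z(A, ["a", "b"], [("a", "b"), ("b", "a")]))  # prints 2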

  14. The Geometry of Manipulation - a Quantitative Proof of the Gibbard Satterthwaite Theorem
    Authors: Marcus Isaksson and Guy Kindler and Elchanan Mossel
    Abstract: We prove a quantitative version of the Gibbard-Satterthwaite theorem. We show that a uniformly chosen voter profile for a neutral social choice function f of q>3 alternatives and n voters will be manipulable with probability at least $10^{-4}\epsilon^2n^{-3}q^{-30}$, where $\epsilon$ is the minimal statistical distance between f and the family of dictator functions.

    Our results extend those of Friedgut, Kalai and Naor [FKN09], which were obtained for the case of 3 alternatives, and imply that the approach of masking manipulations behind computational hardness (as considered in [BO91, CS03, EL05, PR06, CS06]) cannot hide manipulations completely.

  15. Optimal Testing of Reed-Muller Codes
    Authors: Arnab Bhattacharyya (MIT) Swastik Kopparty (MIT) Grant Schoenebeck (UC Berkeley) Madhu Sudan (Microsoft Research New England) David Zuckerman (UT Austin)
    Abstract: We consider the problem of testing if a given function $f : \mathbb{F}_2^n \rightarrow \mathbb{F}_2$ is close to any degree $d$ polynomial in $n$ variables, also known as the Reed-Muller testing problem. The Gowers norm is based on a natural $2^{d+1}$-query test for this property. Alon et al.~\cite{AKKLR} rediscovered this test and showed that it accepts every degree $d$ polynomial with probability $1$, while it rejects functions that are $\Omega(1)$-far with probability $\Omega(1/(d 2^{d}))$. We give an asymptotically optimal analysis of this test, and show that it rejects functions that are (even only) $\Omega(2^{-d})$-far with $\Omega(1)$-probability (so the rejection probability is a universal constant independent of $d$ and $n$). This implies a tight relationship between the $(d+1)^{\rm{st}}$-Gowers norm of a function and its maximal correlation with degree $d$ polynomials, when the correlation is close to 1.
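
    The $2^{d+1}$-query test referred to above has a compact description: a polynomial of degree at most d over $\mathbb{F}_2$ has all its (d+1)-fold discrete derivatives identically zero, so the tester picks random $x, a_1, \ldots, a_{d+1}$ and checks that the XOR of f over the corresponding $2^{d+1}$ points vanishes. A small sketch of one round (ours):

        import itertools
        import random

        def gowers_test(f, n, d):
            # Pick x, a_1..a_(d+1) uniformly in F_2^n and accept iff the XOR
            # over all subsets S of f(x + sum_{i in S} a_i) is 0. Polynomials
            # of degree <= d pass with probability 1.
            x = [random.randrange(2) for _ in range(n)]
            a = [[random.randrange(2) for _ in range(n)] for _ in range(d + 1)]
            acc = 0
            for S in itertools.product((0, 1), repeat=d + 1):
                point = tuple((x[j] + sum(s * a[i][j] for i, s in enumerate(S))) % 2
                              for j in range(n))
                acc ^= f(point)
            return acc == 0

        # A degree-2 polynomial passes the d = 2 test in every round.
        f = lambda p: (p[0] * p[1] + p[2]) % 2
        print(all(gowers_test(f, n=3, d=2) for _ in range(100)))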

    Our proof works by induction on $n$ and yields a new analysis of even the classical Blum-Luby-Rubinfeld~\cite{BLR} linearity test, for the setting of functions mapping $\mathbb{F}_2^n$ to $\mathbb{F}_2$. The optimality follows from a tighter analysis of counterexamples to the ``inverse conjecture for the Gowers norm'' constructed by \cite{GT07,LMS}.

    Our result has several implications. First, it shows that the Gowers norm test is tolerant, in that it also accepts close codewords. Second, it improves the parameters of an XOR lemma for polynomials given by Viola and Wigderson~\cite{VW}. Third, it implies a ``query hierarchy'' result for property testing of affine-invariant properties. That is, for every function $q(n)$, it gives an affine-invariant property that is testable with $O(q(n))$-queries, but not with $o(q(n))$-queries, complementing an analogous result of \cite{GKNR08} for graph properties.

  16. Pseudorandom generators for regular branching programs.
    Authors: Mark Braverman (MSR New England and University of Toronto), Anup Rao (University of Washington), Ran Raz (Weizmann Institute of Science) and Amir Yehudayoff (IAS and Technion)
    Abstract: We give new pseudorandom generators for \emph{regular} read-once branching programs of small width. A branching program is regular if the in-degree of every vertex in it is (0 or) $2$. For every width $d$ and length $n$, our pseudorandom generator uses a seed of length $O((\log d + \log\log n + \log(1/\epsilon))\log n)$ to produce $n$ bits that cannot be distinguished from a uniformly random string by any regular width $d$ length $n$ read-once branching program, except with probability $\epsilon$.

    We also give a result for general read-once branching programs, in the case that there are no vertices that are reached with small probability. We show that if a (possibly non-regular) branching program of length $n$ and width $d$ has the property that every vertex in the program is traversed with probability at least $\gamma$ on a uniformly random input, then the error of the generator above is at most $2 \epsilon/\gamma^2$.

  17. Local list decoding with a constant number of queries
    Authors: Avraham Ben-Aroya and Klim Efremenko and Amnon Ta-Shma
    Abstract: Recently, Efremenko constructed locally decodable codes of sub-exponential length and showed that these codes can handle up to a $\frac{1}{3}$ fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate $\frac{1}{2}-\alpha$ for any $\alpha>0$ and locally list-decoded from error rate $1-\alpha$ for any $\alpha>0$, with only a constant number of queries and a constant alphabet size. This gives the first sub-exponential codes that can be locally list-decoded with a constant number of queries.
  18. New Constructive Aspects of the Lovasz Local Lemma
    Authors: Bernhard Haeupler (MIT), Barna Saha (University of Maryland) and Aravind Srinivasan (University of Maryland)
    Abstract: The Lov\'{a}sz Local Lemma (LLL) is a powerful tool that gives sufficient conditions for avoiding all of a given set of ``bad'' events, with positive probability. A series of results have provided algorithms to efficiently construct structures whose existence is non-constructively guaranteed by the LLL, culminating in the recent breakthrough of Moser \& Tardos for the full asymmetric LLL. We show that the output distribution of the Moser-Tardos algorithm well-approximates the \emph{conditional LLL-distribution} -- the distribution obtained by conditioning on all bad events being avoided. We show how a known bound on the probabilities of events in this distribution can be used for further probabilistic analysis and give new constructive and non-constructive results.
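
    For readers unfamiliar with it, the Moser-Tardos algorithm analyzed above is strikingly simple; the sketch below (ours) specializes it to k-SAT, where the bad events are unsatisfied clauses and resampling an event means re-randomizing its variables.

        import random

        def moser_tardos_ksat(num_vars, clauses, rng=random.Random(0)):
            # Moser-Tardos resampling for k-SAT. A clause is a list of nonzero
            # ints: +i means variable i, -i its negation. While some clause is
            # unsatisfied (a bad event holds), resample its variables.
            assign = [rng.randrange(2) for _ in range(num_vars + 1)]
            def satisfied(clause):
                return any(assign[abs(l)] == (1 if l > 0 else 0) for l in clause)
            while True:
                bad = next((c for c in clauses if not satisfied(c)), None)
                if bad is None:
                    return assign[1:]
                for l in bad:
                    assign[abs(l)] = rng.randrange(2)

        clauses = [[1, 2, 3], [-1, 2, -3], [1, -2, 3], [-1, -2, -3]]
        print(moser_tardos_ksat(3, clauses))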

    We also show that when an LLL application provides a small amount of slack, the number of resamplings of the Moser-Tardos algorithm is nearly linear in the number of underlying independent variables (not events!), and can thus be used to give efficient constructions in cases where the underlying proof applies the LLL to super-polynomially many events. Even in cases where finding a bad event that holds is computationally hard, we show that applying the algorithm to avoid a polynomial-sized ``core'' subset of bad events leads to a desired outcome with high probability. This is shown via a simple union bound over the probabilities of non-core events in the conditional LLL-distribution, and automatically leads to simple and efficient Monte-Carlo (and in most cases $RNC$) algorithms.

    We demonstrate this idea on several applications. We give the first constant-factor approximation algorithm for the Santa Claus problem by making an LLL-based proof of Feige constructive. We provide Monte Carlo algorithms for acyclic edge coloring, non-repetitive graph colorings, and Ramsey-type graphs. In all these applications the algorithm falls directly out of the non-constructive LLL-based proof. Our algorithms are very simple, often provide better bounds than previous algorithms, and are in several cases the first efficient algorithms known.

    As a second type of application we show that the properties of the conditional LLL-distribution can be used in cases beyond the critical dependency threshold of the LLL: avoiding all bad events is impossible in these cases. As the first (even non-constructive) result of this kind, we show that by sampling from the LLL-distribution of a selected smaller core, we can avoid a fraction of bad events that is higher than the expectation. MAX $k$-SAT is an illustrative example of this.

  19. Matching vector codes
    Authors: Zeev Dvir (Princeton University), Parikshit Gopalan (Microsoft Research), Sergey Yekhanin (Microsoft Research)
    Abstract: An (r,delta,epsilon)-locally decodable code encodes a k-bit message x to an N-bit codeword C(x), such that for every i in [k], the i-th message bit can be recovered with probability 1-epsilon, by a randomized decoding procedure that queries only r bits, even if the codeword C(x) is corrupted in up to delta*N locations.
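
    As a toy instance of this definition (using the classical Hadamard code rather than the MV codes of the paper), the sketch below (ours) recovers a message bit with r = 2 queries; if a delta fraction of the codeword is corrupted, a union bound over the two queries gives failure probability at most 2*delta, i.e. a (2, delta, 2*delta)-locally decodable code.

        import itertools
        import random

        def hadamard_encode(x):
            # Hadamard code: the codeword lists <x, y> mod 2 for all y in F_2^k.
            k = len(x)
            return [sum(xi * yi for xi, yi in zip(x, y)) % 2
                    for y in itertools.product((0, 1), repeat=k)]

        def local_decode_bit(codeword, k, i, rng=random.Random()):
            # Two queries: C(y) + C(y + e_i) = <x, e_i> = x_i.
            y = [rng.randrange(2) for _ in range(k)]
            y_ei = list(y)
            y_ei[i] ^= 1
            idx = lambda v: int("".join(map(str, v)), 2)
            return (codeword[idx(y)] + codeword[idx(y_ei)]) % 2

        cw = hadamard_encode([1, 0, 1])
        print([local_decode_bit(cw, 3, i) for i in range(3)])  # [1, 0, 1]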

    Recently a new class of locally decodable codes, based on families of vectors with restricted dot products, has been discovered. We refer to those codes as Matching Vector (MV) codes. Several families of (r,delta,r*delta)-locally decodable MV codes have been obtained. While codes in those families were shorter than codes of earlier generations, they suffered from having large values of epsilon=r*delta, which meant that r-query MV codes could only handle error-rates below 1/r. Thus larger query complexity gave shorter length codes but at the price of less error-tolerance. No MV codes with a super-constant number of queries capable of tolerating a constant fraction of errors were known to exist.

    In this paper we present a new view of matching vector codes and uncover certain similarities between MV codes and classical Reed Muller codes. Our view allows us to obtain deeper insights into the power and limitations of MV codes. Specifically,

    1. We show that existing families of MV codes can be enhanced to tolerate a large constant fraction of errors, independent of the number of queries. Such enhancement comes at a price of a moderate increase in the number of queries;

    2. Our construction yields the first families of matching vector codes of super-constant query complexity that can tolerate a constant fraction of errors. Our codes are shorter than Reed Muller LDCs for all values of r < log k / (log log k)^c, for some constant c;

    3. We show that any MV code encodes messages of length k to codewords of length at least k*2^{sqrt{ log k} }. Therefore MV codes do not improve upon Reed Muller LDCs for r > (log k)^{sqrt{log k}}.

  20. Approximation Algorithms for the Edge-Disjoint Paths Problem via Raecke Decompositions
    Author: Matthew Andrews, Alcatel-Lucent Bell Laboratories, 600-700 Mountain Avenue, Murray Hill, NJ 07974

    Abstract: We study the Edge-Disjoint Paths with Congestion (EDPwC) problem in undirected networks in which we must integrally route a set of demands without causing large congestion on an edge. We present a $(polylog(n),poly(\log\log n))$-approximation, which means that if there exists a solution that routes $X$ demands integrally on edge-disjoint paths (i.e.\ with congestion $1$), then the approximation algorithm can route $X/polylog(n)$ demands with congestion $poly(\log\log n)$.

    The best previous result for this problem was a $(n^{1/\beta},\beta)$-approximation for $\beta<\log n$.

  21. The complexity of distributions
    Author: Emanuele Viola, Northeastern University
    Abstract: Complexity theory typically studies the complexity of computing a function $h : \{0,1\}^m \to \{0,1\}^n$ of a given input $x$. We advocate the study of the complexity of generating the distribution $h(x)$ for uniform $x$, given random bits. Our main results are:

    \begin{enumerate} \item Any function $f : \{0,1\}^\ell \to \{0,1\}^n$ such that (i) each output bit $f_i$ depends on $o(\log n)$ input bits, and (ii) $\ell \le \log_2 \binom{n}{\alpha n} + n^{0.99}$, has output distribution $f(U)$ at statistical distance $\ge 1 - 1/n^{0.49}$ from the uniform distribution over $n$-bit strings of Hamming weight $\alpha n$.

    We also prove lower bounds for generating $(X,b(X))$ for Boolean $b$, and for the case in which each bit $f_i$ is computed by a small-depth decision tree.

    These lower bounds seem to be the first of their kind; the proofs use anti-concentration results for the sum of random variables.

    \item Lower bounds for generating distributions imply succinct data structures lower bounds. As a corollary of (1), we obtain the first lower bound for the membership problem of representing a set $S \subseteq [n]$ of size $\alpha n$, in the case where $1/\alpha$ is a power of $2$: If queries ``$i \in S$?'' are answered by non-adaptively probing $o(\log n)$ bits, then the representation uses $\ge \log_2 \binom{n}{\alpha n} + \Omega(\log n)$ bits.

    \item Upper bounds complementing the bounds in (1) for various settings of parameters.

    \item Uniform randomized AC$^0$ circuits of $\mathrm{poly}(n)$ size and depth $d = O(1)$ with error $\epsilon$ can be simulated by uniform randomized AC$^0$ circuits of $\mathrm{poly}(n)$ size and depth $d+1$ with error $\epsilon + o(1)$ using $\le (\log n)^{O( \log \log n)}$ random bits.

    Previous derandomizations [Ajtai and Wigderson '85; Nisan '91] increase the depth by a constant factor, or else have poor seed length. \end{enumerate}

  22. All-Pairs Shortest Paths in $O(n^2)$ Time With High Probability
    Authors: Yuval Peres, Microsoft Research, Redmond, Dmitry Sotnikov, Tel Aviv University, Benny Sudakov, UCLA, Uri Zwick, Tel Aviv University
    Abstract: We present an all-pairs shortest path algorithm whose running time on a complete directed graph on $n$ vertices whose edge weights are chosen independently and uniformly at random from $[0,1]$ is~$O(n^2)$, in expectation and with high probability. This resolves a long standing open problem. The algorithm is a variant of the dynamic all-pairs shortest paths algorithm of Demetrescu and Italiano. The analysis relies on a proof that the number of \emph{locally shortest paths} in such randomly weighted graphs is $O(n^2)$, again in expectation and with high probability. We also present a dynamic version of the algorithm that recomputes all shortest paths after a random edge update in $O(\log^{2}n)$ expected time.
  23. Computational Transition at the Uniqueness Threshold
    Author: Allan Sly, Microsoft Research, Redmond
    Abstract: The hardcore model is a model of lattice gas systems which has received much attention in statistical physics, probability theory and theoretical computer science. It is the probability distribution over independent sets $I$ of a graph weighted proportionally to $\lambda^{|I|}$ with fugacity parameter $\lambda$. We prove that at the uniqueness threshold of the hardcore model on the $d$-regular tree, approximating the partition function becomes computationally hard.

    Specifically, we show that unless NP$=$RP there is no polynomial time approximation scheme for the partition function (the sum of such weighted independent sets) on graphs of maximum degree $d$ for fugacity $\lambda_c(d) < \lambda < \lambda_c(d) + \varepsilon(d)$, where $$\lambda_c(d) = \frac{(d-1)^{d-1}}{(d-2)^d}$$ is the uniqueness threshold on the $d$-regular tree and $\varepsilon(d)>0$ is a positive constant. Weitz produced an FPTAS for approximating the partition function when $0<\lambda < \lambda_c(d)$, so this result demonstrates that the computational threshold exactly coincides with the statistical physics phase transition, thus confirming the main conjecture of [MWW, '09]. We further analyze the special case of $\lambda=1, d=6$ and show there is no polynomial time approximation scheme for approximately counting independent sets on graphs of maximum degree $d= 6$, which is optimal, improving the previous bound of $d= 24$.
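
    Both objects in the statement above are easy to write down explicitly; the sketch below (ours, brute force and exponential in |V|, for illustration only) evaluates the hardcore partition function and the uniqueness threshold.

        from itertools import combinations

        def hardcore_Z(vertices, edges, lam):
            # Partition function of the hardcore model: the sum over
            # independent sets I of lam^|I|, by brute-force enumeration.
            edge_set = {frozenset(e) for e in edges}
            total = 0.0
            for r in range(len(vertices) + 1):
                for I in combinations(vertices, r):
                    if all(frozenset(p) not in edge_set
                           for p in combinations(I, 2)):
                        total += lam ** r
            return total

        def lambda_c(d):
            # Uniqueness threshold on the d-regular tree.
            return (d - 1) ** (d - 1) / (d - 2) ** d

        # 4-cycle at lam = 1: independent sets are {}, four singletons and the
        # two opposite pairs, so Z = 7.
        print(hardcore_Z(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)], 1.0))
        print(lambda_c(6))  # threshold for d = 6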

    Our proof is based on specially constructed random bipartite graphs which act as gadgets in a reduction to MAX-CUT. Building on the involved second moment method analysis of \cite{MWW:09}, combined with an analysis of the reconstruction problem on the tree, our proof establishes a strong version of the ``replica'' method heuristics developed by theoretical physicists. The result establishes the first rigorous correspondence between the hardness of approximate counting and sampling, and statistical physics phase transitions.

  24. Strong Fault-Tolerance for Self-Assembly with Fuzzy Temperature
    Authors: David Doty, University of Western Ontario, Matthew J. Patitz, University of Texas -- Pan American, Dustin Reishus, University of Southern California, Robert T. Schweller, University of Texas -- Pan American, Scott M. Summers, University of Wisconsin -- Platteville
    Abstract: We consider the problem of fault-tolerance in nanoscale algorithmic self-assembly. We employ a variant of Winfree's abstract Tile Assembly Model (aTAM), the *two-handed* aTAM, in which square "tiles" -- a model of molecules constructed from DNA for the purpose of engineering self-assembled nanostructures -- aggregate according to specific binding sites of varying strengths, and in which large aggregations of tiles may attach to each other, in contrast to the *seeded* aTAM, in which tiles aggregate one at a time to a single specially-designated "seed" assembly. We focus on a major cause of errors in tile-based self-assembly: that of unintended growth due to "weak" strength-1 bonds, which, if allowed to persist, may be stabilized by subsequent attachment of neighboring tiles in the sense that at least energy 2 is now required to break apart the resulting assembly; i.e., the errant assembly is *stable at temperature 2*.

    We study a common self-assembly benchmark problem, that of assembling an n x n square using O(log n) unique tile types, under the two-handed model of self-assembly. Our main result achieves a much stronger notion of fault-tolerance than those achieved previously. *Arbitrary* strength-1 growth is allowed; however, any assembly that grows sufficiently to become stable at temperature 2 is guaranteed to assemble into the correct final assembly of an n x n square. In other words, errors due to insufficient attachment, which is the cause of errors studied in earlier papers on fault-tolerance, are prevented *absolutely* in our main construction, rather than only with high probability and for sufficiently small structures, as in previous fault-tolerance studies. We term this the *fuzzy temperature* model of faults, due to the following equivalent characterization: the temperature is normally 2, but may drift down to 1, allowing unintended temperature-1 growth for an arbitrary period of time. Our construction ensures that this unintended growth cannot lead to permanent errors, so long as the temperature is eventually raised back to 2. Thus, our construction overcomes a major cause of errors, insufficient strength-1 attachments becoming stabilized by subsequent growth, without requiring the detachment of strength-2 bonds that slows down previous constructions, and without requiring the careful fine-tuning of thermodynamic parameters to balance forward and reverse rates of reaction necessary in earlier work on fault-tolerance.

    Although we focus on the task of assembling an n x n square, our construction uses a number of geometric motifs and synchronization primitives that will likely prove useful in other theoretical (and we hope, experimental) applications.

  25. Subcubic Equivalences Between Path, Matrix, and Triangle Problems
    Authors: Virginia Vassilevska Williams, UC Berkeley and Ryan Williams, IBM Almaden Research Center
    Abstract: We say an algorithm on n by n matrices with entries in [-M,M] (or n-node graphs with edge weights from [-M,M]) is truly subcubic if it runs in O(n^{3-d} poly(log M)) time for some d > 0. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under subcubic reductions. Namely, the following weighted problems either *all* have truly subcubic algorithms, or none of them do:
    • The all-pairs shortest paths problem on weighted digraphs (APSP).
    • Detecting if a weighted graph has a triangle of negative total edge weight.
    • Listing up to n^{2.99} negative triangles in an edge-weighted graph.
    • Finding a minimum weight cycle in a graph of non-negative edge weights.
    • The replacement paths problem on weighted digraphs.
    • Finding the second shortest simple path between two nodes in a weighted digraph.
    • Checking whether a given matrix defines a metric.
    • Verifying the correctness of a matrix product over the (min,+)-semiring.
    Therefore, if APSP cannot be solved in n^{3-eps} time for any eps > 0, then many other problems also need essentially cubic time. In fact we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on subcubic algorithms for all-pairs path problems, since it now suffices to give appropriate subcubic triangle detection algorithms.
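
    As one concrete member of the list above, here is the negative triangle problem in code (our sketch): a cubic-time check that, by the equivalences of the paper, can run in truly subcubic time if and only if APSP can.

        def has_negative_triangle(w):
            # Check an n-node complete weighted graph, given as a symmetric
            # matrix w, for a triangle of negative total edge weight.
            n = len(w)
            return any(w[i][j] + w[j][k] + w[k][i] < 0
                       for i in range(n)
                       for j in range(i + 1, n)
                       for k in range(j + 1, n))

        w = [[0, 1, -5],
             [1, 0, 2],
             [-5, 2, 0]]
        print(has_negative_triangle(w))  # 1 + 2 - 5 = -2 < 0, so True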

    Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR,AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Our approach also yields new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.

  26. Minimum-Cost Network Design with (Dis)economies of Scale
    Authors: Matthew Andrews and Spyridon Antonakopoulos and Lisa Zhang, Bell Laboratories, 600-700 Mountain Avenue, Murray Hill, NJ 07974
    Abstract: Given a network, a set of demands and a cost function $f(\cdot)$, the min-cost network design problem is to route all demands with the objective of minimizing $\sum_e f(\ell_e)$, where $\ell_e$ is the total traffic load under the routing. We focus on cost functions of the form $f(x)= \sigma + x^{\alpha}$ for $x > 0$, with $f(0) = 0$. For $\alpha \le 1$, $f(\cdot)$ is subadditive and exhibits behavior consistent with economies of scale. This problem corresponds to the well-studied Buy-at-Bulk network design problem and admits polylogarithmic approximation and hardness.

    In this paper, we focus on the less studied scenario of $\alpha > 1$ with a positive startup cost $\sigma > 0$. Now, the cost function $f(\cdot)$ is neither subadditive nor superadditive. This is motivated by minimizing network-wide energy consumption when supporting a set of traffic demands. It is commonly accepted that, for some computing and communication devices, doubling processing speed more than doubles the energy consumption. Hence, in Economics parlance, such a cost function reflects \emph{diseconomies of scale}.

    We begin by discussing why existing routing techniques such as randomized rounding and tree-metric embedding fail to generalize directly. We then present our main contribution, which is a polylogarithmic approximation algorithm. We obtain this result by first deriving a bicriteria approximation for a related capacitated min-cost flow problem that we believe is interesting in its own right. Our approach for this problem builds upon the well-linked decomposition due to Chekuri-Khanna-Shepherd~\cite{ChekuriKS05}, the construction of expanders via matchings due to Khandekar-Rao-Vazirani~\cite{KhandekarRV06}, and edge-disjoint routing in well-connected graphs due to Rao-Zhou~\cite{RaoZ06}. However, we also develop new techniques that allow us to keep a handle on the total cost, which was not a concern in the aforementioned literature.

  27. A Unified Framework for Testing Linear-Invariant Properties
    Authors: Arnab Bhattacharyya (MIT), Elena Grigorescu (MIT), Asaf Shapira (Georgia Tech)
    Abstract: In a sequence of recent papers, Sudan and coauthors have investigated the relation between testability of properties of Boolean functions and the invariance of the properties with respect to transformations of the domain. Linear-invariance is arguably the most common such symmetry for natural properties of Boolean functions on the hypercube. Hence, it is an important goal to find necessary and sufficient conditions for testability of linear-invariant properties. This is explicitly posed as an open problem in a recent survey of Sudan. We obtain the following results:

    1. We show that every linear-invariant property that can be characterized by forbidding induced solutions to a (possibly infinite) set of linear equations can be tested with one-sided error.

    2. We show that every linear-invariant property that can be tested with one-sided error can be characterized by forbidding induced solutions to a (possibly infinite) set of systems of linear equations.

    We conjecture that our result from item (1) can be extended to cover systems of linear equations. We further show that the validity of this conjecture would have the following implications:

    1. It would imply that every linear-invariant property that is closed under restrictions to linear subspaces is testable with one-sided error. Such a result would unify several previous results on testing Boolean functions, such as the results on testing low-degree polynomials and results on testing Fourier dimensionality.

    2. It would imply that a linear-invariant property P is testable with one-sided error *if and only if* P is closed under restrictions to linear subspaces, thus resolving Sudan's problem.

  28. Information Cost Tradeoffs for Augmented Index and Streaming Language Recognition
    Authors: Amit Chakrabarti, Dartmouth College, Graham Cormode, AT&T Labs -- Research, Ranganath Kondapally, Dartmouth College, Andrew McGregor, University of Massachusetts, Amherst
    Abstract: This paper makes three main contributions to the theory of communication complexity and stream computation. First, we present new bounds on the information complexity of AUGMENTED-INDEX. In contrast to analogous results for INDEX by Jain, Radhakrishnan and Sen [J. ACM, 2009], we have to overcome the significant technical challenge that protocols for AUGMENTED-INDEX may violate the "rectangle property" due to the inherent input sharing. Second, we use these bounds to resolve an open problem of Magniez, Mathieu and Nayak [STOC, 2010] that asked about the multi-pass complexity of recognizing Dyck languages. This results in a natural separation between the standard multi-pass model and the multi-pass model that permits reverse passes. Third, we present the first passive memory checkers that verify the interaction transcripts of priority queues, stacks, and double-ended queues. We obtain tight upper and lower bounds for these problems, thereby addressing an important sub-class of the memory checking framework of Blum et al. [Algorithmica, 1994].
  29. Optimal stochastic planarization
    Author: Anastasios Sidiropoulos, Toyota Technological Institute at Chicago
    Abstract: It has been shown by Indyk and Sidiropoulos [IS07] that any graph of genus g>0 can be stochastically embedded into a distribution over planar graphs with distortion 2^O(g). This bound was later improved to O(g^2) by Borradaile, Lee and Sidiropoulos [BLS09]. We give an embedding with distortion O(log g), which is asymptotically optimal.

    Apart from the improved distortion, another advantage of our embedding is that it can be computed in polynomial time. In contrast, the algorithm of [BLS09] requires solving an NP-hard problem.

    Our result implies in particular a reduction for a large class of geometric optimization problems from instances on genus-g graphs to corresponding ones on planar graphs, with an O(log g) loss factor in the approximation guarantee.

  30. Bounds on Monotone Switching Networks for Directed Connectivity
    Author: Aaron Potechin, MIT
    Abstract: We prove that any monotone switching network solving directed connectivity on N vertices must have size $N^{\Omega(\lg N)}$, which solves Open Problem 4 of A. Razborov, ``Lower Bounds for Deterministic and Nondeterministic Branching Programs,'' Proceedings of the 8th FCT, Lecture Notes in Computer Science, vol. 529, 1991, pp. 47-60, and proves that monotone-L does not equal monotone-NL.
  31. A Fourier-analytic approach to Reed-Muller decoding
    Author: Parikshit Gopalan, Microsoft Research Silicon Valley.
    Abstract: We present a Fourier-analytic approach to list-decoding Reed-Muller codes over arbitrary finite fields. We use this to show that quadratic forms over any field are locally list-decodable up to their minimum distance. The analogous statement for linear polynomials was proved in the celebrated works of Goldreich-Levin [GL89] and Goldreich-Rubinfeld-Sudan [GRS00]. Previously, tight bounds for quadratic polynomials were known only for q = 2, 3 [GKZ08]; the best bound known for other fields was the Johnson radius.

    Our approach departs from previous work on Reed-Muller decoding which relies on some form of self-correction [GRS00, AS03, STV01, GKZ08]. We seek to explain the good list-decoding properties of RM codes by using the rich structure in the weight distribution of these codes. We observe that the crux of the problem is to bound the number of low-weight codewords near a received word. We do this by applying ideas from Fourier analysis of Boolean functions to low-degree polynomials over finite fields, in conjunction with classical results about the structure of low-weight codewords.

  32. Holographic Algorithms with Matchgates Capture Precisely Tractable Planar #CSP
    Authors: Jin-Yi Cai, University of Wisconsin-Madison and Beijing University, Pinyan Lu, Microsoft Research Asia, and Mingji Xia, Institute of Software, Chinese Academy of Sciences
    Abstract: Valiant introduced matchgate computation and holographic algorithms. A number of seemingly exponential time problems can be solved by this novel algorithmic paradigm in polynomial time. We show that, in a very strong sense, matchgate computations and holographic algorithms based on them provide a universal methodology for a broad class of counting problems studied in the statistical physics community for decades. They capture precisely those problems which are #P-hard on general graphs but computable in polynomial time on planar graphs.

    More precisely, we prove complexity dichotomy theorems in the framework of counting CSP problems. The local constraint functions take Boolean inputs, and can be arbitrary real-valued symmetric functions. We prove that every problem in this class falls into exactly one of three categories: (1) those which are tractable (i.e., polynomial time computable) on general graphs, (2) those which are #P-hard on general graphs but tractable on planar graphs, or (3) those which are #P-hard even on planar graphs. The classification criteria are explicit. Moreover, problems in category (2) are tractable on planar graphs precisely by holographic algorithms with matchgates.

  33. Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation
    Authors: Yuriy Arbitman and Moni Naor and Gil Segev, Weizmann Institute of Science.
    Abstract: The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. However, although the first analysis of a dynamic dictionary dates back more than 45 years (Knuth analyzed linear probing in 1963), the trade-off between these aspects of performance is still not completely understood. In this paper we settle two fundamental open problems:

    -- We construct the first dynamic dictionary that enjoys the best of both worlds: it stores $n$ elements using $(1 + \epsilon) n$ memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any $\epsilon = \Omega ( (\log \log n / \log n)^{1/2} )$ and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of $\epsilon$. The construction is a two-level variant of cuckoo hashing, augmented with a "backyard" that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on $\epsilon$.

    -- We present a variant of the above construction that uses only $(1 + o(1)) B$ bits, where $B$ is the information-theoretic lower bound for representing a set of size $n$ taken from a universe of size $u$, and guarantees constant-time operations in the worst case with high probability, as before. This problem was open even in the amortized setting. Our approach is based on $k$-wise almost independent permutations with a succinct representation and a constant evaluation time.
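
    As background for the constructions above, here is plain (one-level) cuckoo hashing in a minimal sketch (ours; the paper's two-level variant with a backyard is substantially more involved): each key has one candidate slot in each of two tables, lookups probe both, and insertions evict occupants back and forth.

        import random

        class CuckooHash:
            # Plain cuckoo hashing: lookups and deletions probe at most two
            # slots, so they take worst-case constant time; insertion evicts
            # the current occupant and re-inserts it in the other table.
            def __init__(self, size, rng=random.Random(0)):
                self.size = size
                self.tables = [[None] * size, [None] * size]
                self.seeds = (rng.getrandbits(32), rng.getrandbits(32))

            def _slot(self, key, t):
                return hash((self.seeds[t], key)) % self.size

            def insert(self, key, max_kicks=32):
                if key in self:
                    return True
                t = 0
                for _ in range(max_kicks):
                    slot = self._slot(key, t)
                    key, self.tables[t][slot] = self.tables[t][slot], key
                    if key is None:
                        return True
                    t = 1 - t  # the evicted key goes to the other table
                return False   # a cycle: a real implementation would rehash

            def __contains__(self, key):
                return any(self.tables[t][self._slot(key, t)] == key
                           for t in (0, 1))

        h = CuckooHash(16)
        for k in ["a", "b", "c"]:
            h.insert(k)
        print("b" in h, "z" in h)  # True False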

  34. Vertex Sparsifiers and Abstract Rounding Algorithms
    Authors: Moses Charikar (Princeton University), Tom Leighton (MIT and Akamai Technologies, Inc), Shi Li (Princeton University), Ankur Moitra (MIT)
    Abstract: The notion of vertex sparsification (in particular cut-sparsification) was introduced in (M, 2009), where it was shown that for any graph $G = (V, E)$ and a subset of $k$ terminals $K \subset V$, there is a polynomial time algorithm to construct a graph $H = (K, E_H)$ \emph{on just the terminal set} so that simultaneously for all cuts $(A, K-A)$, the value of the minimum cut in $G$ separating $A$ from $K -A$ is approximately the same as the value of the corresponding cut in $H$. Then approximation algorithms can be run directly on $H$ as a proxy for running on $G$, yielding approximation guarantees independent of the size of the graph. In this work, we consider how well cuts in the sparsifier $H$ can approximate the minimum cuts in $G$, and whether algorithms that use such reductions need to incur a multiplicative penalty in the approximation guarantee depending on the quality of the sparsifier.

    We give the first super-constant lower bounds for how well a cut-sparsifier $H$ can simultaneously approximate all minimum cuts in $G$. We prove a lower bound of $\Omega(\log^{1/4} k)$ -- this is polynomially-related to the known upper bound of $O(\log k/\log \log k)$. This is an exponential improvement on the $\Omega(\log \log k)$ bound given in (LM, 2010) which in fact was for a stronger vertex sparsification guarantee, and did not apply to cut sparsifiers.

    Despite this negative result, we show that for many natural problems, we do not need to incur a multiplicative penalty for our reduction. Roughly, we show that any rounding algorithm which also works for the $0$-extension relaxation can be used to construct good vertex-sparsifiers for which the optimization problem is easy. Using this, we obtain optimal $O(\log k)$-competitive Steiner oblivious routing schemes, which generalize the results in \cite{R}. We also demonstrate that for a wide range of graph packing problems (which includes maximum concurrent flow, maximum multiflow and multicast routing, among others, as a special case), the integrality gap of the linear program is always at most $O(\log k)$ times the integrality gap restricted to trees. This result helps to explain the ubiquity of the $O(\log k)$ guarantees for such problems. Lastly, we use our ideas to give an efficient construction for vertex-sparsifiers that match the current best existential results -- this was previously open. Our algorithm makes novel use of Earth-mover constraints.

  35. Deciding first-order properties for sparse graphs
    Authors: Zdenek Dvorak (Charles University, Prague), Daniel Kral (Charles University, Prague), Robin Thomas (Georgia Institute of Technology, Atlanta)
    Abstract: We present a linear-time algorithm for deciding first-order logic (FOL) properties in classes of graphs with bounded expansion. Many natural classes of graphs have bounded expansion; for instance, graphs of bounded tree-width, all proper minor-closed classes of graphs, graphs of bounded degree, graphs with no subgraph isomorphic to a subdivision of a fixed graph, and graphs that can be drawn in a fixed surface in such a way that each edge crosses at most a constant number of other edges. We also develop an almost linear-time algorithm for deciding FOL properties in classes of graphs with locally bounded expansion; those include classes of graphs with locally bounded tree-width or locally excluding a minor.

    More generally, we design a dynamic data structure for graphs belonging to a class $\cal G$ of graphs of bounded expansion. After a linear-time initialization the data structure allows us to test an FOL property in constant time, and the data structure can be updated in constant time after addition/deletion of an edge, provided the list of possible edges to be added is known in advance and their addition results in a graph in $\cal G$. In addition, we design a dynamic data structure for testing $\Sigma_1$-properties or the existence of short paths between prescribed vertices in such classes of graphs. All our results hold for relational structures.

  36. Clustering with Spectral Norm and the k-means Algorithm
    Authors: Amit Kumar (IIT Delhi) and Ravindran Kannan (Microsoft Research India Lab, Bangalore)
    Abstract: There has been much progress on efficient algorithms for clustering data points generated by a mixture of $k$ probability distributions under the assumption that the means of the distributions are well-separated, i.e., the distance between the means of any two distributions is at least $\Omega(k)$ standard deviations. These results generally make heavy use of the generative model and particular properties of the distributions. In this paper, we show that a simple clustering algorithm works without assuming any generative (probabilistic) model. Our only assumption is what we call a ``proximity condition'': the projection of any data point onto the line joining its cluster center to any other cluster center is $\Omega(k)$ standard deviations closer to its own center than the other center. Here the notion of standard deviations is based on the spectral norm of the matrix whose rows represent the difference between a point and the mean of the cluster to which it belongs. We show that in the generative models studied, our proximity condition is satisfied and so we are able to derive most known results for generative models as corollaries of our main result. We also prove some new results for generative models - e.g., we can cluster all but a small fraction of points only assuming a bound on the variance. Our algorithm relies on the well-known $k$-means algorithm, and along the way, we prove a result of independent interest -- that the $k$-means algorithm converges to the ``true centers'' even in the presence of spurious points provided the initial (estimated) centers are close enough to the corresponding actual centers and all but a small fraction of the points satisfy the proximity condition. Finally, we present a new technique for boosting the ratio of inter-center separation to standard deviation. This allows us to prove results for learning mixtures of a class of distributions under weaker separation conditions.
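
    For reference, the classical Lloyd iteration that the result above builds on is a few lines of code; this sketch (ours) omits initialization and all of the paper's proximity-condition machinery.

        import numpy as np

        def lloyd_kmeans(points, centers, iters=50):
            # Classical Lloyd iteration: assign each point to its nearest
            # center, then move each center to the mean of its cluster.
            centers = np.array(centers, dtype=float)
            for _ in range(iters):
                d = np.linalg.norm(points[:, None, :] - centers[None, :, :],
                                   axis=2)
                labels = d.argmin(axis=1)
                for j in range(len(centers)):
                    members = points[labels == j]
                    if len(members):
                        centers[j] = members.mean(axis=0)
            return centers, labels

        rng = np.random.default_rng(1)
        pts = np.vstack([rng.normal(0, 1, (100, 2)),
                         rng.normal(8, 1, (100, 2))])
        init = pts[rng.choice(len(pts), 2, replace=False)]
        print(lloyd_kmeans(pts, init)[0])
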
  37. Testing Properties of Sparse Images
    Authors: Dana Ron and Gilad Tsur, Department of Electrical Engineering - Systems, Tel-Aviv University.
    Abstract: We initiate the study of testing properties of images that correspond to {\em sparse\/} $0/1$-valued matrices of size $n\times n$. Our study is related to but different from the study initiated by Raskhodnikova ({\em Proceedings of RANDOM, 2003\/}), where the images correspond to {\em dense\/} $0/1$-valued matrices. Specifically, while distance between images in the model studied by Raskhodnikova is the fraction of entries on which the images differ taken with respect to all $n^2$ entries, the distance measure in our model is defined by the fraction of such entries taken with respect to the actual number of $1$'s in the matrix. We study several natural properties: connectivity, convexity, monotonicity, and being a line. In all cases we give testing algorithms with sublinear complexity, and in some of the cases we also provide corresponding lower bounds.
  38. Frugal Mechanism Design via Spectral Techniques
    Authors: Ning Chen and Edith Elkind and Nick Gravin and Fedor Petrov
    Abstract: We study the design of truthful mechanisms for set systems, i.e., scenarios where a customer needs to hire a team of agents to perform a complex task. In this setting, frugality (Archer & Tardos '02) provides a measure to evaluate the ``cost of truthfulness'', that is, the overpayment of a truthful mechanism relative to the ``fair'' payment.

    We propose a uniform scheme for designing frugal truthful mechanisms for general set systems. Our scheme is based on scaling the agents' bids using the eigenvector of a matrix that encodes the interdependencies between the agents. We demonstrate that the $r$-out-of-$k$-system mechanism and the $\sqrt{\ }$-mechanism for buying a path in a graph (Karlin, Kempe, Tamir '05) can be viewed as instantiations of our scheme. We then apply our scheme to two other classes of set systems, namely, vertex cover systems and $k$-path systems, in which a customer needs to purchase $k$ edge-disjoint source-sink paths. For both settings, we bound the frugality of our mechanism in terms of the largest eigenvalue of the respective interdependency matrix.

    We show that our mechanism is optimal for a large subclass of vertex cover systems satisfying a simple local sparsity condition. For $k$-path systems, while our mechanism is within a factor of $k+1$ from optimal, we show that it is, in fact, {\em optimal} when one uses a modified definition of frugality proposed by Elkind, Goldberg, Goldberg (2007). Our lower bound argument combines spectral techniques and Young's inequality, and is applicable to all set systems. As both $r$-out-of-$k$ systems and single path systems can be viewed as special cases of $k$-path systems, our result improves the lower bounds of Karlin et al. (2005) and answers several open questions they posed.
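
    The generic spectral step of such a scheme can be illustrated by power iteration for the dominant eigenvector of a nonnegative interdependency matrix, used as bid multipliers; how the matrix M is built from the set system is specified in the paper, and this sketch shows only the eigenvector computation:

      import numpy as np

      def dominant_eigenvector(M, iters=1000):
          """Power iteration; for a nonnegative irreducible M,
          Perron-Frobenius gives a nonnegative leading eigenvector."""
          v = np.ones(M.shape[0])
          for _ in range(iters):
              v = M @ v
              v /= np.linalg.norm(v)
          return v

      # scaled_bids = bids * dominant_eigenvector(M)  # hypothetical usage
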

  39. A separator theorem in minor-closed classes
    Authors: Ken-ichi Kawarabayashi (National Institute of Informatics, Japan) and Bruce Reed (McGill University, Canada, and Sophia Antipolis, France)
    Abstract: It is shown that for each $t$, there is a separator of size $O(t \sqrt{n})$ in any $n$-vertex graph $G$ with no $K_t$-minor.

    This settles a conjecture of Alon, Seymour and Thomas (J. Amer. Math. Soc., 1990 and STOC'90), and generalizes results obtained independently by Djidjev (1981) and by Gilbert, Hutchinson and Tarjan (J. Algorithms, 1984), who proved that every graph with $n$ vertices and genus $g$ has a separator of order $O(\sqrt{gn})$; the generalization follows because $K_t$ has genus $\Omega(t^2)$.

    The bound $O(t \sqrt{n})$ is best possible because, for appropriately chosen positive constants $c,d$, every 3-regular expander graph with $n$ vertices has no $K_t$-minor for $t=cn^{1/2}$ and no separator of size $dn$.

    In addition, we give an $O(n^2)$-time algorithm to obtain such a separator, and then sketch how to obtain such a separator in $O(n^{1+\epsilon})$ time for any $\epsilon > 0$. Finally, we discuss several algorithmic aspects of our separator theorem, including the possibility of obtaining a separator of order $g(t)\sqrt{n}$, for some function $g$ of $t$, in an $n$-vertex graph $G$ with no $K_t$-minor in $O(n)$ time.

  40. On the Insecurity of Parallel Repetition for Leakage Resilience
    Allison Lewko (University of Texas at Austin) and Brent Waters (University of Texas at Austin)
    Abstract: A fundamental question in leakage-resilient cryptography is: can leakage resilience always be amplified by parallel repetition? It is natural to expect that if we have a leakage-resilient primitive tolerating $\ell$ bits of leakage, we can take $n$ copies of it to form a system tolerating $n\ell$ bits of leakage. In this paper, we show that this is not always true. We construct a public key encryption system which is secure when at most $\ell$ bits are leaked, but $n$ copies of the system are insecure when $n\ell$ bits are leaked. Our results hold either in composite order bilinear groups under a variant of the subgroup decision assumption \emph{or} in prime order bilinear groups under the decisional linear assumption where the public key systems share a common reference parameter.
  41. Approaching optimality for solving SDD linear systems
    Authors: Ioannis Koutis, Gary L. Miller and Richard Peng, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213
    Abstract: We present an algorithm that on input a graph $G$ with $n$ vertices and $m+n-1$ edges and a value $k$, produces an {\em incremental sparsifier} $\hat{G}$ with $n-1 + m/k$ edges, such that the condition number of $G$ with $\hat{G}$ is bounded above by $\tilde{O}(k\log^2 n)$, with probability $1-p$. The algorithm runs in time

    $$\tilde{O}((m \log{n} + n\log^2{n})\log(1/p)).$$

    As a result, we obtain an algorithm that on input an $n\times n$ symmetric diagonally dominant matrix $A$ with $m+n-1$ non-zero entries and a vector $b$, computes a vector $\bar{x}$ satisfying $||\bar{x}-A^{+}b||_A<\epsilon ||A^{+}b||_A $, in time

    $$\tilde{O}(m\log^2{n}\log(1/\epsilon)).$$

    The solver is based on a recursive application of the incremental sparsifier that produces a hierarchy of graphs which is then used to construct a recursive preconditioned Chebyshev iteration.
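
    The precondition-and-iterate structure can be mimicked with off-the-shelf tools; the sketch below uses SciPy's conjugate gradient with a crude Jacobi preconditioner standing in for the paper's incremental-sparsifier hierarchy and recursive Chebyshev scheme (all of it illustrative, none of it the authors' solver):

      import numpy as np
      from scipy.sparse import diags, identity
      from scipy.sparse.linalg import cg, LinearOperator

      # Toy SDD system: a path-graph Laplacian plus a small diagonal shift.
      n = 1000
      A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) + 1e-3 * identity(n)
      b = np.random.randn(n)

      # Jacobi (diagonal) preconditioner as a placeholder for \hat{G}.
      dinv = 1.0 / A.diagonal()
      M = LinearOperator((n, n), matvec=lambda v: dinv * v)

      x, info = cg(A, b, M=M)   # info == 0 signals convergence
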

  42. The subexponential upper bound for the on-line chain partitioning problem
    Authors: Bart\l{}omiej Bosek - Jagiellonian University, Theoretical Computer Science, ul. prof. Stanis\l{}awa \L{}ojasiewicza 6, Krak\'{o}w 30-348, Poland and Tomasz Krawczyk - Jagiellonian University, Theoretical Computer Science, ul. prof. Stanis\l{}awa \L{}ojasiewicza 6, Krak\'{o}w 30-348, Poland.
    Abstract: The main question in the on-line chain partitioning problem is to determine whether there exists an algorithm that partitions on-line posets of width at most $w$ into a polynomial number of chains -- see Trotter's chapter \emph{Partially ordered sets} in the \emph{Handbook of Combinatorics}. So far the best known on-line algorithm, due to Kierstead, uses at most $(5^w-1)/4$ chains; on the other hand, Szemer\'{e}di proved that any on-line algorithm requires at least $\binom{w+1}{2}$ chains. These results were obtained in the early eighties, and since then no progress has been made in the general case.

    We provide an on-line algorithm that partitions orders of width $w$ into at most $w^{16\log{w}}$ chains. This yields the first subexponential upper bound for the on-line chain partitioning problem.
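
    For contrast with the paper's bound, the naive First-Fit baseline for online chain partitioning is easy to state (this is not the authors' algorithm, and First-Fit is known to behave badly on some posets):

      def first_fit_chains(elements, comparable):
          """Place each arriving element on the first existing chain
          whose elements are all comparable to it; open a new chain
          otherwise."""
          chains = []
          for x in elements:
              for chain in chains:
                  if all(comparable(x, y) for y in chain):
                      chain.append(x)
                      break
              else:
                  chains.append([x])
          return chains

      # e.g. divisibility order:
      # first_fit_chains([4, 2, 3, 8], lambda a, b: a % b == 0 or b % a == 0)
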

  43. One Tree Suffices: A Simultaneous O(1)-Approximation for Single-Sink Buy-at-Bulk
    Authors: Ashish Goel (Stanford University) Ian Post (Stanford University)
    Abstract: We study the single-sink buy-at-bulk problem with an unknown cost function. We want to route flow from a set of demand nodes to a root node, where the cost of routing x total flow along an edge is proportional to f(x) for some concave, non-decreasing function f satisfying f(0)=0. We present a simple, fast, deterministic, combinatorial algorithm that takes a set of demands and constructs a single tree T such that for all f the cost f(T) is a 49.48-approximation of the optimal cost for that f. This is within a factor of 2 of the best approximation ratio currently achievable when the tree can be optimized for a specific function. Trees achieving simultaneous O(1)-approximations for all concave functions were previously not known to exist regardless of computation time.
  44. On the Queue Number of Planar Graphs
    Authors: Giuseppe Di Battista, Roma Tre University, Italy, Fabrizio Frati, Roma Tre University, Italy, J\'anos Pach, EPFL Lausanne, Switzerland
    Abstract: We prove that planar graphs have $O(\log^4 n)$ queue number, thus improving upon the previous $O(\sqrt n)$ upper bound. Consequently, planar graphs admit 3D straight-line crossing-free grid drawings in $O(n \log^c n)$ volume, for some constant $c$, thus improving upon the previous $O(n^{3/2})$ upper bound.
  45. Cryptography Against Continuous Memory Attacks
    Authors: Yevgeniy Dodis and Kristiyan Haralambiev and Adriana Lopez-Alt and Daniel Wichs
    Abstract: We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR), if it allows users to refresh their secret keys, using only fresh local randomness, such that:

    1.  The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the “outside world” is neither affected by these key refreshes, nor needs to know about their frequency.

    2. The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret-key of the system, as long as the amount of leaked information is bounded in between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system.

    In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well established assumption on bilinear groups, called the K-Linear assumption (for any constant K >= 1).

    Our constructions are highly modular, and we develop many interesting techniques and building-blocks along the way, including: leakage-indistinguishable re-randomizable relations, homomorphic NIZKs, and leakage-of-ciphertext non-malleable encryption schemes.

    Prior to our work, no “truly CLR” schemes were known, as previous leakage-resilient schemes suffer from one or more of the following drawbacks: (a) restrictions are placed on the type of allowed leakage, such as the axiom that “only computation leaks information”; (b) the overall amount of key leakage is bounded a-priori for the lifetime of the system and there is no method for refreshing keys; (c) the efficiency of the scheme degrades proportionally with the number of refreshes; (d) the key updates require an additional leak-free “master secret key” to be stored securely; (e) the scheme is only proven secure under a strong non-standard assumption.

    In a recent follow-up result, Brakerski et al. [13] solve the main open problem left by our work, by constructing a CLR public-key encryption scheme under the 2-Linear assumption. In addition, they give an alternate (though inefficient) construction of CLR Signatures under the same assumption. (Note that, in addition to these follow-up results, [13] also includes the above-mentioned result (e), which came prior to our work, and a slight improvement

  46. Hardness of Finding Independent Sets in Almost 3-Colorable Graphs
    Authors: Irit Dinur and Subhash Khot and Will Perkins and Muli Safra
    Abstract: For every $\eps > 0$, and integer $q \geq 3$, we show that given an $N$-vertex graph that has an induced $q$-colorable subgraph of size $(1-\eps)N$, it is NP-hard to find an independent set of size $ \frac{N}{q^2}$.
  47. Min st-Cut Oracle for Planar Graphs with Near-Linear Preprocessing Time
    Authors: Glencora Borradaile: School of Electrical Engineering and Computer Science, Oregon State University; Piotr Sankowski: Institute of Informatics, University of Warsaw and Department of Computer and System Science, Sapienza University of Rome; Christian Wulff-Nilsen: Department of Computer Science, University of Copenhagen
    Abstract: For an undirected $n$-vertex planar graph $G$ with non-negative edge-weights, we consider the following type of query: given two vertices $s$ and $t$ in $G$, what is the weight of a min $st$-cut in $G$? We show how to answer such queries in constant time with $O(n\log^5n)$ preprocessing time and $O(n\log n)$ space. We use a Gomory-Hu tree to represent all the pairwise min cuts implicitly. Previously, no subquadratic time algorithm was known for this problem. Since all-pairs min cut and the minimum cycle basis are dual problems in planar graphs, we also obtain an implicit representation of a minimum cycle basis in $O(n\log^5n)$ time and $O(n\log n)$ space and an explicit representation with additional $O(C)$ time and space where $C$ is the size of the basis.

    These results require that shortest paths be unique. We deterministically remove this assumption with an additional $\log^2 n$ factor in the running time.
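
    The query mechanism is easy to sketch once a Gomory-Hu tree is available: the min $st$-cut value is the lightest edge on the unique tree path between $s$ and $t$. The NetworkX construction below runs in polynomial, not near-linear, time; the paper's contribution is the fast planar construction (assumes capacities are stored in the edge attribute "weight"):

      import networkx as nx

      def min_cut_oracle(G):
          """Precompute a Gomory-Hu tree of G; answer each query by
          scanning the unique s-t path in the tree for its lightest edge."""
          T = nx.gomory_hu_tree(G, capacity="weight")

          def query(s, t):
              path = nx.shortest_path(T, s, t)
              return min(T[u][v]["weight"] for u, v in zip(path, path[1:]))

          return query
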

  48. Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate
    Authors: Venkatesan Guruswami (Carnegie Mellon University) Adam Smith (Pennsylvania State University)
    Abstract: In this paper, we consider coding schemes for \emph{computationally bounded} channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded by $p$ w.h.p. and (b) the process which adds the errors can be described by a sufficiently ``simple'' circuit. Codes for such channel models are attractive since, like codes for traditional adversarial errors, they can handle channels whose true behavior is \emph{unknown} or \emph{varying} over time.

    For three classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only \emph{in}efficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for \emph{every} channel in the class. The encoders are randomized, and probabilities are taken over the (local, unknown to the decoder) coins of the encoder and those of the channel.

    Unique decoding for additive errors: We give the first construction of polytime encodable/decodable codes for \emph{additive} (a.k.a. \emph{oblivious}) channels that achieve the Shannon capacity $1-H(p)$. These are channels which add an arbitrary error vector $e\in\{0,1\}^n$ of weight at most $pn$ to the transmitted word; the vector $e$ can depend on the code but not on the particular transmitted word. Such channels capture binary symmetric errors and burst errors as special cases.
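
    The rate target here is the Shannon capacity of the binary symmetric channel, a one-line computation:

      import math

      def shannon_capacity(p):
          """1 - H(p), the optimal rate for crossover probability p."""
          if p in (0.0, 1.0):
              return 1.0
          return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

      # shannon_capacity(0.11) is roughly 0.5
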

    List-decoding for log-space channels: A \emph{space-$S(n)$ bounded} channel reads and modifies the transmitted codeword as a stream, using at most $S(n)$ bits of workspace on transmissions of $n$ bits. For constant $S$, this captures many models from the literature, including \emph{discrete channels with finite memory} and \emph{arbitrarily varying channels}. We give an efficient code with optimal rate (up to $1-H(p)$) that recovers a short list containing the correct message with high probability for channels limited to \emph{logarithmic} space.

    List-decoding for poly-time channels: For any constant $c$, assuming the existence of pseudorandom generators, we give a similar list-decoding result for channels describable by circuits of size at most $n^c$. We do not know of any channel models considered in the information theory literature (other than purely adversarial channels) which require more than linear time to implement.

  49. Polynomial Learning of Distribution Families
    Authors: Mikhail Belkin (Ohio State University) and Kaushik Sinha (Ohio State University)
    Abstract: The question of polynomial learnability of probability distributions, particularly Gaussian mixture distributions, has recently received significant attention in theoretical computer science and machine learning. However, despite major progress, the general question of polynomial learnability of Gaussian mixture distributions remained open. The current work resolves the question of polynomial learnability for Gaussian mixtures in high dimension with an arbitrary but fixed number of components.

    The result for Gaussian distributions relies on a very general result of independent interest on learning parameters of distributions belonging to what we call {\it polynomial families}. These families are characterized by their moments being polynomials in the parameters and, perhaps surprisingly, include almost all common probability distributions as well as their mixtures and products. Using tools from real algebraic geometry, we show that parameters of any distribution belonging to such a family can be learned in polynomial time.

    To estimate parameters of a Gaussian mixture distribution, the general results on polynomial families are combined with a certain deterministic dimensionality reduction that reduces learning a high-dimensional mixture to a polynomial number of parameter estimation problems in low dimension.
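
    The simplest instance of the moment idea: a single Gaussian's parameters are themselves polynomials in its first two moments, so they can be read off empirical moments directly (the paper handles the vastly harder mixture case):

      import numpy as np

      def gaussian_from_moments(samples):
          """Method of moments for one Gaussian: mu = m1,
          sigma^2 = m2 - m1^2."""
          m1 = float(np.mean(samples))
          m2 = float(np.mean(np.square(samples)))
          return m1, m2 - m1 ** 2
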

  50. Overcoming the Hole in the Bucket: Public-Key Cryptography Resilient to Continual Memory Leakage
    Authors: Zvika Brakerski and Yael Tauman Kalai and Jonathan Katz and Vinod Vaikuntanathan
    Abstract: In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side channel attacks which, through various physical means, can recover part of the secret key. Micali and Reyzin (2004) suggested the intriguing possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used. To achieve this impossible-sounding goal, their work and several subsequent works all required the so-called ``only computation leaks information'' assumption, and it was unclear whether this strong assumption could be removed. We succeed in removing this assumption.

    We show how to securely update a secret key while information is leaked, allowing some leakage {\em even during the updates} themselves. Namely, we construct schemes that remain secure even if an attacker, {\em at each time period}, can probe the {\em entire} memory (containing a secret key) and ``leak out'' a $1/4-\epsilon$ fraction of the secret key. The attacker may also probe the memory {\em during the updates} and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process).

    Specifically, under the decisional linear assumption on bilinear groups, we achieve the above for public key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct encryption schemes even under the (stronger) assumption of Micali and Reyzin. One of our technical tools is a new linear algebraic theorem (which is unconditional, and requires no computational assumptions), stating that ``random subspaces are leakage-resilient''.

    The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (without the aforementioned strong assumption) and (2) giving a public key {\em encryption} (and IBE) scheme that are resilient to continual leakage.

    In a concurrent work, Dodis {\em et al.} construct a signature scheme that is secure in the continual leakage model (without relying on the Micali-Reyzin assumption). They can leak $\frac{1}{2}-\epsilon$ of the secret key between time periods, but (unlike our result) do not tolerate leakage from the update procedure. Their results were obtained after being aware of an initial version of this work (where we constructed such a scheme under a very strong and non-standard assumption). The rest of our results were obtained after we were aware of their work.

  51. Sequential Rationality in Cryptographic Protocols
    Authors: Ronen Gradwohl, Kellogg School of Management, Northwestern University; Noam Livne, Weizmann Institute of Science; Alon Rosen, Herzliya IDC
    Abstract: Much of the literature on rational cryptography focuses on analyzing the strategic properties of cryptographic protocols. However, due to the presence of computationally-bounded players and the asymptotic nature of cryptographic security, a definition of sequential rationality for this setting has thus far eluded researchers.

    We propose a new framework for overcoming these obstacles, and provide the first definitions of computational solution concepts that guarantee sequential rationality. We argue that natural computational variants of subgame perfection are too strong for cryptographic protocols. As an alternative, we introduce a weakening called threat-free Nash equilibrium that is more permissive but still eliminates the undesirable ``empty threats'' of non-sequential solution concepts.

    To demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibrium (Dodis-Halevi-Rabin, Crypto'00), and propose a variant of their protocol that is sequentially rational for a non-trivial class of correlated equilibria. Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.

  52. Dependent Randomized Rounding via Exchange Properties of Combinatorial Structures
    Authors: Chandra Chekuri Dept. of Computer Science, Univ. of Illinois; Jan Vondrak IBM Almaden Research Center; Rico Zenklusen Dept. of Mathematics, MIT;
    Abstract: We consider the problem of randomly rounding a fractional solution $x$ in a polytope $P \subset \RR^n$ to a vertex $X$ of $P$, so that $\E[X] = x$. Our goal is to achieve {\em concentration properties} for linear and submodular functions of the rounded solution. Such dependent rounding techniques, with concentration bounds for linear functions, have been developed in the past for two polytopes: the assignment polytope (that is, bipartite matchings and $b$-matchings) \cite{S01,GKPS06,KMPS09}, and more recently for the spanning tree polytope \cite{AGMGS10}. These schemes have led to a number of new algorithmic results.

    In this paper we describe a new {\em swap rounding} technique which can be applied in a variety of settings including {\em matroids}, {\em matroid intersection} and {\em non-bipartite graph $b$-matchings}, while providing Chernoff-type concentration bounds for linear and submodular functions of the rounded solution. In addition to existing techniques based on negative correlation, we use martingale methods to obtain an exponential tail estimate for monotone submodular functions, and also for linear functions in settings where negative correlation does not hold. The rounding scheme explicitly exploits {\em exchange properties} of the underlying combinatorial structures, and highlights these properties as the basis for concentration bounds.

    The framework of matroids and matroid intersection provides a unifying scheme for several known applications \cite{GKPS06,KMPS09,CCPV09,KST09,AGMGS10} as well as new ones, and its flexibility allows a richer set of constraints to be incorporated easily. We illustrate this on the max-min allocation problem with submodular valuations, submodular maximization subject to a matroid and multiple linear constraints, the crossing spanning tree problem, and resource allocation / broadcast scheduling problems with various demand/capacity constraints.

  53. From Sylvester-Gallai Configurations to Rank Bounds: Improved Black-box Identity Test for Depth-3 Circuits
    Authors: Nitin Saxena, Hausdorff Center for Mathematics, Bonn; C. Seshadhri, IBM Almaden
    Abstract: We study the problem of identity testing for depth-3 circuits of top fanin k and degree d. We give a new structure theorem for such identities. A direct application of our theorem improves the known deterministic d^{k^k}-time black-box identity test over rationals (Kayal & Saraf, FOCS 2009) to one that takes d^{k^2}-time. Our structure theorem essentially says that the number of independent variables in a real depth-3 identity is very small. This theorem affirmatively settles the strong rank conjecture posed by Dvir & Shpilka (STOC 2005).

    We devise a powerful algebraic framework and develop tools to study depth-3 identities. We use these tools to show that any depth-3 identity contains a much smaller nucleus identity that contains most of the ``complexity'' of the main identity. The special properties of this nucleus allow us to get almost optimal rank bounds for depth-3 identities.

  54. The Geometry of Scheduling
    Authors: Nikhil Bansal (IBM Research) and Kirk Pruhs (Univ. of Pittsburgh)
    Abstract: We consider the following general scheduling problem: There are $n$ jobs with arbitrary release times and sizes. In addition, each job has an associated arbitrary monotone function specifying the cost incurred when the job is completed at a particular time. This problem formulation is general enough to include many natural scheduling objectives, such as weighted flow time, weighted tardiness, and sum of flow time squared. The main contribution of this paper is a randomized polynomial-time algorithm with approximation ratio $O(\log \log (nP) )$, where $P$ is the maximum job size. This general result improves the best known approximation ratios by at least an exponential factor (and much more in some cases) for essentially {\em all} of the nontrivial common special cases of this problem.

    Our result is based on a novel connection between scheduling and geometry. We start with a certain strong linear programming relaxation for this problem, based on exponentially many knapsack cover inequalities. We show that the problem of rounding this linear program can be reduced to geometric weighted set multicover problems, where the sets/objects have near-linear union complexity. We then show how to apply Varadarajan's quasi-uniform sampling technique to obtain solutions for these geometric set multicover problems. We believe that this geometric interpretation of scheduling is of independent interest, and will likely find numerous future applications.

  55. Stability yields a PTAS for k-Median and k-Means Clustering
    Authors: Pranjal Awasthi and Avrim Blum and Or Sheffet, Carnegie Mellon University
    Abstract: We consider $k$-median clustering in finite metric spaces and $k$-means clustering in Euclidean spaces, in the setting where $k$ is part of the input (not a constant). For the $k$-means problem, Ostrovsky et al.~\cite{Ostrovsky06} show that if the input satisfies the condition that the optimal $(k-1)$-means clustering is more expensive than the optimal $k$-means clustering by a factor of $\max\{100, 1/\alpha^2\}$, then one can achieve a $(1+f(\alpha))$-approximation to the $k$-means optimal in time polynomial in $n$ and $k$ by using a variant of Lloyd's algorithm. In this work we substantially improve this approximation guarantee. We show that given only the condition that the $(k-1)$-means optimal is more expensive than the $k$-means optimal by a factor $1+\alpha$ for {\em some} constant $\alpha>0$, we can obtain a PTAS. In particular, under this assumption, for any $\eps>0$ we can achieve a $(1+\eps)$-approximation to the $k$-means optimal in time polynomial in $n$ and $k$, and exponential in $1/\eps$ and $1/\alpha$. We thus decouple the strength of the assumption from the quality of the approximation ratio. We also give a PTAS for the $k$-median problem in finite metrics under the analogous assumption as well. For $k$-means, we in addition give a randomized algorithm with improved running time of $n^{O(1)}(k \log n)^{\poly(1/\epsilon,1/\alpha)}$.

    We also use our technique to obtain a PTAS under the assumption considered by Balcan et al.~\cite{Balcan09} that all $(1+\alpha)$ approximations are $\delta$-close to a desired target clustering, when all target clusters have size greater than $2\delta n$. Both results are based on a new notion of clustering stability, that extends both the notions of~\cite{Ostrovsky06} and of~\cite{Balcan09}. No FPTAS for the $k$-median problem over such stable instances exists, unless $\P=\NP$. Thus our algorithm is in a sense best possible for such instances.

  56. Metric Extension Operators, Vertex Sparsifiers and Lipschitz Extendability
    Authors: Konstantin Makarychev (IBM Research) and Yury Makarychev (TTIC).
    Abstract: We study vertex cut and flow sparsifiers that were recently introduced by Moitra (2009), and Leighton and Moitra (2010). Our results improve and generalize the results by Moitra (2009), and Leighton and Moitra (2010). We give a new polynomial-time algorithm for constructing O(log k / log log k) cut and flow sparsifiers, matching the best existential upper bound on the quality of a sparsifier, and improving the previous algorithmic upper bound of O(log^2 k / log log k). We show that flow sparsifiers can be obtained from linear operators approximating minimum metric extensions. We introduce the notion of (linear) metric extension operators, prove that they exist, and give an exact polynomial-time algorithm for finding optimal operators.

    We then establish a direct connection between flow and cut sparsifiers and Lipschitz extendability of maps in Banach spaces, a notion studied in functional analysis since the 1950s. Using this connection, we prove a lower bound of Omega(sqrt{log k / log log k}) for flow sparsifiers and a super-constant lower bound for cut sparsifiers. We show that if a certain open question posed by Ball in 1992 has a positive answer, then there exist \tilde O(\sqrt{log k}) cut sparsifiers. On the other hand, any lower bound on cut sparsifiers better than \tilde Omega(sqrt{log k}) would imply a negative answer to this question.

  57. Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity
    Authors: Alexandr Andoni (Princeton University & Center for Computational Intractability) Robert Krauthgamer (Weizmann Institute) Krzysztof Onak (Massachusetts Institute of Technology)
    Abstract: We present a near-linear time algorithm that approximates the edit distance between two strings within a polylogarithmic factor; specifically, for strings of length $n$ and every fixed $\eps>0$, it can compute a $(\log n)^{O(1/\eps)}$-approximation in $n^{1+\eps}$ time. This is an {\em exponential} improvement over the previously known factor, $2^{\tilde O(\sqrt{\log n})}$, with a comparable running time~\cite{OR-edit,AO-edit}. Previously, no efficient polylogarithmic approximation algorithm was known for any computational task involving edit distance (e.g., nearest neighbor search or sketching).

    This result arises naturally in the study of a new \emph{asymmetric query} model. In this model, the input consists of two strings $x$ and $y$, and an algorithm can access $y$ in an unrestricted manner, while being charged for querying every symbol of $x$. Indeed, we obtain our main result by designing an algorithm that makes a small number of queries in this model. We also provide a nearly-matching lower bound on the number of queries.

    Our lower bound is the first to expose hardness of edit distance stemming from the input strings being ``repetitive'', which means that many of their substrings are approximately identical. Consequently, our lower bound provides the first rigorous separation between edit distance and Ulam distance, which is edit distance on non-repetitive strings, i.e., permutations.
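
    For reference, the exact quantity being approximated is computed by the textbook quadratic-time dynamic program; the paper's algorithm gets within polylogarithmic factors in $n^{1+\eps}$ time:

      def edit_distance(x, y):
          """Classical O(|x||y|) DP with a single rolling row."""
          d = list(range(len(y) + 1))
          for i in range(1, len(x) + 1):
              prev, d[0] = d[0], i
              for j in range(1, len(y) + 1):
                  prev, d[j] = d[j], min(d[j] + 1,       # delete x[i-1]
                                         d[j - 1] + 1,   # insert y[j-1]
                                         prev + (x[i - 1] != y[j - 1]))
          return d[-1]
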

  58. Fast approximation algorithms for flow and cut-based problems in undirected graphs
    Author: Aleksander Madry, MIT
    Abstract: We present a general method of designing fast approximation algorithms for undirected cut-based minimization problems. More precisely, we develop a technique that given any such cut-based problem that can be approximated quickly on trees, allows approximating it almost as quickly on general graphs while only losing a poly-logarithmic factor in the approximation guarantee.

    To illustrate the applicability of our paradigm, we focus our attention on the undirected sparsest cut problem with general demands, and the balanced separator problem. By a simple use of our framework, we obtain poly-logarithmic approximation algorithms for these problems that run in time close to linear. This establishes, in particular, the first poly-logarithmic-approximation algorithm for the generalized sparsest cut problem whose running time breaks the multicommodity flow barrier of $\Omega(n^2)$ time, and the first algorithms achieving such approximation ratio for the balanced separator and the (uniform) sparsest cut problem while running in time $o(m+n^{3/2})$.

    The main tool behind our result is an efficient procedure that decomposes general graphs into simpler ones while approximately preserving the cut-flow structure. This decomposition is inspired by hierarchical tree decompositions that were developed in the context of oblivious routing schemes.

  59. Efficient volume sampling for row/column subset selection
    Authors: Amit Deshpande, Microsoft Research India and Luis Rademacher, Computer Science and Engineering, Ohio State University
    Abstract: We give efficient algorithms for volume sampling, i.e., for picking $k$-subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala (see Section $7.4$ of \cite{KV}, also implicit in \cite{BDM, DRVW}).

    Our first algorithm for volume sampling $k$-subsets of rows from an $m$-by-$n$ matrix runs in $O(kmn^\omega \log n)$ arithmetic operations and a second variant of it for $(1+\epsilon)$-approximate volume sampling runs in $O(mn \log m \cdot k^{2}/\epsilon^{2} + m \log^{\omega} m \cdot k^{2\omega+1}/\epsilon^{2\omega} \cdot \log(k \epsilon^{-1} \log m))$ arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small $k$.

    Our efficient volume sampling algorithms imply the following results for low-rank matrix approximation: \begin{enumerate} \item Given $A \in R^{m \times n}$, in $O(kmn^{\omega} \log n)$ arithmetic operations we can find $k$ of its rows such that projecting onto their span gives a $\sqrt{k+1}$-approximation to the matrix of rank $k$ closest to $A$ under the Frobenius norm. This improves the $O(k \sqrt{\log k})$-approximation of Boutsidis, Drineas and Mahoney \cite{BDM} and matches the lower bound shown in \cite{DRVW}. The method of conditional expectations gives a \emph{deterministic} algorithm with the same complexity. The running time can be improved to $O(mn \log m \cdot k^{2}/\epsilon^{2} + m \log^{\omega} m \cdot k^{2\omega+1}/\epsilon^{2\omega} \cdot \log(k \epsilon^{-1} \log m))$ at the cost of losing an extra $(1+\epsilon)$ in the approximation factor. \item The same rows and projection as in the previous point give a $\sqrt{(k+1)(n-k)}$-approximation to the matrix of rank $k$ closest to $A$ under the spectral norm. In this paper, we show an almost matching lower bound of $\sqrt{n}$, even for $k=1$. \end{enumerate}
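
    A brute-force reference implementation of the distribution being sampled (exponential in $k$, so usable only as a sanity check; the point of the paper is to match this distribution in polynomial time):

      import itertools
      import numpy as np

      def volume_sample(A, k, rng=None):
          """Pick a k-subset S of rows of A with probability
          proportional to det(A_S A_S^T), the squared volume of the
          parallelepiped spanned by those rows."""
          rng = rng or np.random.default_rng()
          subsets = list(itertools.combinations(range(A.shape[0]), k))
          vols = np.array([np.linalg.det(A[list(S)] @ A[list(S)].T)
                           for S in subsets])
          return subsets[rng.choice(len(subsets), p=vols / vols.sum())]
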

  60. Estimating the longest increasing sequence in polylogarithmic time
    Authors: Michael Saks, Rutgers University and C. Seshadhri, IBM Almaden
    Abstract: Finding the longest increasing subsequence (LIS) is a classic algorithmic problem. Let $n$ denote the size of the array. Simple $O(n\log n)$ algorithms are known for this problem. What can a sublinear time algorithm achieve? We develop a polylogarithmic time randomized algorithm that for any constant $\delta > 0$, given the array $f$, outputs an estimate of the length of the LIS that, with high probability, is accurate to within an additive $\delta n$. More precisely, the running time of the algorithm is $(\log n)^c (1/\delta)^{O(1/\delta)}$ where the exponent $c$ is independent of $\delta$. Previously, the best known polylogarithmic time algorithms could only achieve an additive $n/2$ approximation.

    The LIS problem can be formulated as a dynamic program. Our overall approach seems very general, and might be applicable to approximating other dynamic programs.
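
    The exact baseline the estimator is compared against is the classical patience-sorting computation of the LIS length:

      import bisect

      def lis_length(a):
          """O(n log n) exact LIS length: tails[i] is the smallest
          possible tail of an increasing subsequence of length i+1."""
          tails = []
          for x in a:
              i = bisect.bisect_left(tails, x)
              if i == len(tails):
                  tails.append(x)
              else:
                  tails[i] = x
          return len(tails)
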

  61. The Monotone Complexity of k-Clique on Random Graphs
    Author: Benjamin Rossman, MIT
    Abstract: Understanding the average-case complexity of natural problems on natural distributions is an important challenge for complexity theory. In this paper we consider the average-case complexity of the $k$-\Clique{} problem (for fixed $k$) on monotone circuits. A natural class of distributions in this context is given by Erd\H{o}s-R\'enyi random graphs $G(n,p)$ at threshold functions $p(n) \in \Theta(n^{-2/(k-1)})$, for which $\Pr(G(n,p)$ contains a $k$-clique$)$ is bounded away from $0$ and $1$. Our main result is a lower bound of $\omega(n^{k/4})$ on the size of monotone circuits which solve $k$-\Clique{} (asymptotically almost surely) on $G(n,p)$ for two sufficiently far-apart threshold functions $p(n)$, such as $n^{-2/(k-1)}$ and $2n^{-2/(k-1)}$. This result complements a previous lower bound \cite{STOC08} of $\omega(n^{k/4})$ on the size of $\AC^0$ circuits which solve $k$-\Clique{} on $G(n,p)$ for a single threshold function $p(n)$. These two lower bounds---obtained by different techniques in the different settings of $\AC^0$ and monotone circuits---support an intuition that {\em random graphs at the threshold may be a source of hard instances for $k$-\Clique{} in general}. (Note that similar beliefs about random \textsc{SAT} are common in statistical physics.)

    In addition, we show that $k/4$ in this lower bound is tight up to $o(k)$ by constructing monotone circuits of size $n^{k/4 + O(1)}$ which solve $k$-\Clique{} on $G(n,p)$ for all functions $p : \N \to [0,1]$ (monotonizing a construction of $\AC^0$ circuits due to Amano \cite{Amano09}). This moreover demonstrates a gap between the worst-case and average-case complexity of $k$-\Clique{} on monotone circuits, in light of an $\wt\Omega(n^k)$ worst-case lower bound due to Razborov \cite{Razborov85}.

    One technical contribution of this paper is the introduction of a new variant of sunflowers called {\em $(p,q)$-sunflowers}, in which petals may overlap (but not too much on average). We prove a combinatorial theorem (along the lines of the Erd\H{o}s-Rado Sunflower Lemma \cite{ErdosRado60}) implying the existence of large $(p,q)$-sunflowers in any large enough uniform hypergraph. For many applications of sunflowers (especially in circuit lower bounds), the use of $(p,q)$-sunflowers may give better parameters, as this paper shows.

  62. Frugal and Truthful Auctions for Vertex Covers, Flows, and Cuts
    Authors: David Kempe and Mahyar Salek and Cristopher Moore

    Abstract: We study truthful mechanisms for hiring a team of agents in three classes of set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For Vertex Cover auctions, the vertices are owned by selfish and rational agents, and the auctioneer wants to purchase a vertex cover from them. For k-flow auctions, the edges are owned by the agents, and the auctioneer wants to purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know their costs, and the auctioneer needs to select a feasible set and payments based on bids made by the agents.

    We present constant-competitive truthful mechanisms for all three set systems. That is, the maximum overpayment of the mechanism is within a constant factor of the maximum overpayment of any truthful mechanism, for every set system in the class. The mechanism for Vertex Cover is based on scaling each bid by a multiplier derived from the dominant eigenvector of a certain matrix. The mechanism for k-flows prunes the graph to be minimally (k+1)-connected, and then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts contracts the graph until all s-t paths have length exactly 2, and then applies the Vertex Cover mechanism.

  63. The Coin Problem, and Pseudorandomness for Branching Programs
    Authors: Joshua Brody and Elad Verbin
    Abstract: The \emph{Coin Problem} is the following problem: a coin is given, which lands on heads with probability either $1/2 + \beta$ or $1/2 - \beta$. We are given the outcome of $n$ independent tosses of this coin, and the goal is to guess which way the coin is biased, and to be correct with probability $\ge 2/3$. When our computational model is unrestricted, the majority function is optimal, and succeeds when $\beta \ge c /\sqrt{n}$ for a large enough constant $c$. The coin problem is open and interesting in models that cannot compute the majority function.
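
    A quick Monte Carlo sanity check of the unrestricted-model claim (majority succeeds once $\beta$ is around $c/\sqrt{n}$); the parameters here are illustrative only:

      import numpy as np

      def majority_success(n, beta, trials=10000, seed=0):
          """Estimate how often majority voting correctly identifies
          the coin of bias 1/2 + beta from n tosses."""
          rng = np.random.default_rng(seed)
          heads = (rng.random((trials, n)) < 0.5 + beta).sum(axis=1)
          return float(np.mean(heads > n / 2))

      # majority_success(10001, 3 / 10001**0.5) comfortably exceeds 2/3
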

    In this paper we study the coin problem in the model of \emph{read-once width-$w$ branching programs}. We prove that in order to succeed in this model, $\beta$ must be at least $1/ (\log n)^{\Theta(w)}$. For constant $w$ this is tight by considering the recursive tribes function.

    We generalize this to a \emph{Dice Problem}, where instead of independent tosses of a coin we are given independent tosses of one of two $m$-sided dice. We prove that if the distributions are too close, then the dice cannot be distinguished by a small-width read-once branching program.

    We suggest one application for theorems of this kind: we prove that Nisan's Generator fools width-$w$ read-once \emph{permutation} branching programs, using seed length $O(w^4 \log n \log \log n + \log n \log (1/\eps))$. For $w=\eps=\Theta(1)$, this seed length is $O(\log n \log \log n)$. The coin theorem and its relatives might have other connections to PRGs. This application is related to the independent, but chronologically-earlier, work of Braverman, Rao, Raz and Yehudayoff (which might be submitted to this FOCS).

  64. Agnostically learning under permutation invariant distributions
    Author: Karl Wimmer (Duquesne University)
    Abstract: We generalize algorithms from computational learning theory that are successful under the uniform distribution on the Boolean hypercube $\{0,1\}^n$ to algorithms successful on permutation invariant distributions, distributions where the probability mass remains constant under permutations of an instance's coordinates. While the tools in our generalization mimic those used for the Boolean hypercube, the fact that permutation invariant distributions are not product distributions presents a significant obstacle.

    Under the uniform distribution, halfspaces can be agnostically learned in polynomial time for constant $\eps$. The main tools used are a theorem of Peres~\cite{Peres04} bounding the {\it noise sensitivity} of a halfspace, a result of~\cite{KOS04} that this theorem implies Fourier concentration, and a modification of the Low-Degree algorithm of Linial, Mansour, Nisan~\cite{LMN:93} made by Kalai et al.~\cite{KKMS08}. These results are extended to arbitrary product distributions in~\cite{BOWi08}.

    We prove analogous results for permutation invariant distributions; more generally, we work in the domain of the symmetric group. We define noise sensitivity in this setting, and show that noise sensitivity has a nice combinatorial interpretation in terms of Young tableaux. The main technical innovations involve techniques from the representation theory of the symmetric group, especially the combinatorics of Young tableaux. We show that low noise sensitivity implies concentration on ``simple'' components of the Fourier spectrum, and that this fact will allow us to agnostically learn halfspaces under permutation invariant distributions to constant accuracy in roughly the same time as in the uniform distribution over the Boolean hypercube case.

  65. Approximating Maximum Weight Matching in Near-linear Time
    Authors: Ran Duan, University of Michigan and Seth Pettie, University of Michigan
    Abstract: Given a weighted graph, the {\em maximum weight matching} problem (MWM) is to find a set of vertex-disjoint edges with maximum weight. In the 1960s Edmonds showed that MWMs can be found in polynomial time. At present the fastest MWM algorithm, due to Gabow and Tarjan, runs in $\tilde{O}(m\sqrt{n})$ time, where $m$ and $n$ are the number of edges and vertices in the graph. Surprisingly, restricted versions of the problem, such as computing $(1-\epsilon)$-approximate MWMs or finding maximum cardinality matchings, are not known to be much easier (on sparse graphs). The best algorithms for these problems also run in $\tilde{O}(m\sqrt{n})$ time.

    In this paper we present the first near-linear time algorithm for computing $(1-\epsilon)$-approximate MWMs. Specifically, given an arbitrary real-weighted graph and $\epsilon>0$, our algorithm computes such a matching in $O(m\epsilon^{-2}\log^3 n)$ time. The previous best approximate MWM algorithm with comparable running time could only guarantee a $(2/3-\epsilon)$-approximate solution. In addition, we present a faster algorithm, running in $O(m\log n\log\epsilon^{-1})$ time, that computes a $(3/4-\epsilon)$-approximate MWM.
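
    For scale, the classical greedy baseline, which already achieves a 1/2-approximation in O(m log m) time, is a few lines (the paper's algorithms are far more involved):

      def greedy_matching(edges):
          """Scan edges by decreasing weight, keeping each edge whose
          endpoints are both unmatched; classical 1/2-approximation.
          Edges are (weight, u, v) tuples."""
          used, total, matching = set(), 0.0, []
          for w, u, v in sorted(edges, reverse=True):
              if u not in used and v not in used:
                  used.update((u, v))
                  matching.append((u, v))
                  total += w
          return total, matching
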

  66. Bounded Independence Fools Degree-2 Threshold Functions
    Authors: Ilias Diakonikolas (Columbia), Daniel M. Kane (Harvard), Jelani Nelson (MIT)
    Abstract: Let x be a random vector coming from any k-wise independent distribution over {-1,1}^n. For an n-variate degree-2 polynomial p, we prove that E[sgn(p(x))] is determined up to an additive eps for k = poly(1/eps). This gives a large class of explicit pseudo-random generators against such functions and answers an open question of Diakonikolas et al. (FOCS 2009).

    In the process, we develop a novel analytic technique we dub multivariate FT-mollification. This provides a generic tool to approximate bounded (multivariate) functions by low-degree polynomials (with respect to several different notions of approximation). A univariate version of the method was introduced by Kane et al. (SODA 2010) in the context of streaming algorithms. In this work, we refine it and generalize it to the multivariate setting. We believe that our technique is of independent mathematical interest. To illustrate its generality, we note that it implies a multidimensional generalization of Jackson's classical result in approximation theory due to Ganzburg (1979).

    To obtain our main result, we combine the FT-mollification technique with several linear algebraic and probabilistic tools. These include the invariance principle of Mossel, O'Donnell and Oleszkiewicz, anti-concentration bounds for low-degree polynomials, an appropriate decomposition of degree-2 polynomials, and a generalized hypercontractive inequality for quadratic forms which takes the operator norm of the associated matrix into account. Our analysis is quite modular; it readily adapts to show that intersections of halfspaces and degree-2 threshold functions are fooled by bounded independence. From this it follows that Omega(1/eps^2)-wise independence derandomizes the Goemans-Williamson hyperplane rounding scheme.

    Our techniques unify, simplify, and in some cases improve several recent results in the literature concerning threshold functions. For the case of ``regular'' halfspaces we give a simple proof of an optimal independence bound of Theta(1/eps^2), improving upon Diakonikolas et al. (FOCS 2009) by polylogarithmic factors. This yields the first optimal derandomization of the Berry-Esseen theorem and -- combined with the results of Kalai et al. (FOCS 2005) -- implies a faster algorithm for the problem of agnostically learning halfspaces.

  67. Boosting and Differential Privacy
    Authors: Cynthia Dwork (Microsoft Research), Guy Rothblum (Princeton University), Salil Vadhan (Harvard University)
    Abstract: Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct {\em privacy-preserving synopses} of the input database. These are data structures that yield, for a given set $\Q$ of queries over an input database, reasonably accurate estimates of the responses to every query in~$\Q$. Given a {\em base synopsis generator} that takes a distribution on $\Q$ and produces a ``weak'' synopsis that yields ``good'' answers for a majority of the weight in $\Q$, our {\em Boosting for Queries} algorithm obtains a synopsis that is good for all of~$\Q$. We ensure privacy for the rows of the database, but the boosting is performed on the {\em queries}.

    We provide an (inefficient) base synopsis generator for sets of {\em arbitrary} low-sensitivity queries (queries whose answers do not vary much under the addition or deletion of a single row). This yields the first privacy-preserving synopsis generator for arbitrary low-sensitivity queries.

    Boosting is an iterative method. In our Boosting for Queries algorithm, each iteration incurs a certain privacy loss. In analyzing the cumulative privacy loss over many iterations, we obtain a bound on the {\em expected} privacy loss from a single $\eps$-\dfp{} mechanism. Combining this with {\em evolution of confidence} arguments from the literature, we get a fresh perspective -- and stronger bounds -- on the expected cumulative privacy loss due to multiple mechanisms, each providing $\eps$-differential privacy or one of its relaxations, and each operating on (potentially) different, adaptively chosen, databases.

    Finally, we can also view the input database as a training set in a learning algorithm, where each row corresponds to an element in the training set. Given the power and prevalence of boosting, it is natural to search for boosting techniques that preserve the privacy properties of the base learner. We present a differentially private boosting technique, in which privacy comes at little additional cost in accuracy. We call this {\em Boosting for People}, since rows corresponding to individual people are the elements of interest.

  68. Fighting Perebor: New and Improved Algorithms for Formula and QBF Satisfiability
    Author: Rahul Santhanam, University of Edinburgh
    Abstract: We investigate the possibility of finding satisfying assignments to Boolean formulae and testing validity of quantified Boolean formulae (QBF) asymptotically faster than a brute force search.

    Our first main result is a simple deterministic algorithm running in time $2^{n - \Omega(n)}$ for satisfiability of formulae of linear size in $n$, where $n$ is the number of variables in the formula. This algorithm extends to exactly counting the number of satisfying assignments, within the same time bound.

    Our second main result is a deterministic algorithm running in time $2^{n - \Omega(n/\log n)}$ for solving QBFs in which the number of occurrences of any variable is bounded by a constant. For instances which are ``structured'', in a certain precise sense, the algorithm can be modified to run in time $2^{n - \Omega(n)}$.

    To the best of our knowledge, no non-trivial algorithms were known for these problems before.

    As a byproduct of the technique used to establish our first main result, we show that every function computable by linear-size formulae can be represented by decision trees of size $2^{n - \Omega(n)}$. As a consequence, we get strong superlinear {\it average-case} formula size lower bounds for the Parity function.

  69. Lower Bounds for Near Neighbor Search via Metric Expansion
    Authors: Rina Panigrahy (Microsoft Research Silicon Valley), Kunal Talwar (Microsoft Research Silicon Valley), Udi Wieder (Microsoft Research Silicon Valley)
    Abstract: In this paper we show how the complexity of performing nearest neighbor search (NNS) on a metric space is related to the expansion of the metric space. Given a metric space we look at the graph obtained by connecting every pair of points within a certain distance $r$. We then look at various notions of expansion in this graph, relating them to the cell probe complexity of NNS for randomized and deterministic, exact and approximate algorithms. For example, if the graph has node expansion $\Phi$ then we show that any deterministic $t$-probe data structure for $n$ points must use space $S$ where $(St/n)^t > \Phi$. We show similar results for randomized algorithms as well. These relationships can be used to derive most of the known lower bounds in the well known metric spaces such as $l_1$, $l_2$, $l_\infty$ by simply computing their expansion. In the process, we strengthen and generalize our previous results~\cite{PTW08}. Additionally, we unify the approach in~\cite{PTW08} and the communication complexity based approach. Our work reduces the problem of proving cell probe lower bounds of near neighbor search to computing the appropriate expansion parameter.
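
    Rearranging the stated trade-off isolates the space bound explicitly:

    \[ \left(\frac{St}{n}\right)^{t} > \Phi \;\Longrightarrow\; S > \frac{n}{t}\,\Phi^{1/t}, \]

    so for any constant probe count $t$ the required space grows polynomially with the expansion $\Phi$.
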

    In our results, as in all previous results, the dependence on $t$ is weak; that is, the bound drops exponentially in $t$. We show a much stronger (tight) time-space tradeoff for the class of \emph{dynamic} \emph{low contention} data structures. These are data structures that support updates in the data set and that do not look up any single cell too often.

  70. Black-Box Randomized Reductions in Algorithmic Mechanism Design
    Authors: Shaddin Dughmi and Tim Roughgarden (Stanford University)
    Abstract: We give the first black-box reduction from arbitrary approximation algorithms to truthful approximation mechanisms for a non-trivial class of multi-parameter problems. Specifically, we prove that every packing problem that admits an FPTAS also admits a truthful-in-expectation randomized mechanism that is an FPTAS. Our reduction makes novel use of smoothed analysis, by employing small perturbations as a tool in algorithmic mechanism design. To argue that our perturbation schemes are incentive-compatible, we develop a “duality” between linear perturbations of the objective function of an optimization problem and of its feasible set. Viewed from the dual perspective, our mechanisms are maximal-in-distributional-range and hence truthful in expectation.
  71. Pure and Bayes-Nash Price of Anarchy for Generalized Second Price Auction
    Authors: Renato Paes Leme and Eva Tardos (Cornell)
    Abstract: The Generalized Second Price Auction, also known as the Ad Words auction, and its variants have been the main mechanism used by search companies to auction positions for sponsored search links. In this paper we study the social welfare of the Nash equilibria of this game. It is known that socially optimal Nash equilibria exist (i.e., that the Price of Stability for this game is 1). This paper is the first to prove bounds on the price of anarchy. Our main result is to show that under some mild assumptions the price of anarchy is small. For pure Nash equilibria we bound the price of anarchy by 1.618, assuming all bidders are playing un-dominated strategies. For mixed Nash equilibria we prove a bound of 4 under the same assumption. We also extend the result to the Bayesian setting, when bidders' valuations are also random, and prove a bound of 8 for this case. Our proof exhibits a combinatorial structure of Nash equilibria and uses this structure to bound the price of anarchy. While establishing the structure is simple in the case of pure and mixed Nash equilibria, the extension to the Bayesian setting requires the use of novel combinatorial techniques that can be of independent interest.
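
    A minimal sketch of the GSP allocation and pricing rule being analyzed, in a simplified model (per-click bids, fixed click-through rates, no reserve price; all names illustrative):

      def gsp(bids, ctrs):
          """Sort bidders by bid; the i-th highest bidder wins slot i
          (click-through rate ctrs[i]) and pays the next-highest bid
          per click."""
          order = sorted(range(len(bids)), key=lambda i: -bids[i])
          outcome = []
          for slot in range(min(len(ctrs), len(bids))):
              winner = order[slot]
              price = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
              outcome.append((winner, ctrs[slot], price))
          return outcome
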
  72. Learning Convex Concepts from Gaussian Distributions with PCA
    Authors: Santosh Vempala (Georgia Tech)
    Abstract: We present a new algorithm for learning a convex set in $\R^n$ given labeled examples drawn from any Gaussian distribution. The efficiency of the algorithm depends on the dimension of the {\em normal subspace}, the span of vectors orthogonal to supporting hyperplanes of the convex set. The key step of the algorithm uses a Singular Value Decomposition (SVD) to approximate the relevant normal subspace. The complexity of the resulting algorithm is $\poly(n)2^{\tilde{O}(k)}$ for an arbitrary convex set with normal subspace of dimension $k$. For the important special case of the intersection of $k$ halfspaces, the complexity is

    \[ \poly(n,k,1/\eps) + n \cdot \min\left\{ k^{O(\log k/\eps^4)}, (k/\eps)^{O(k\log (1/\eps))} \right\} \]

    to learn a hypothesis that correctly classifies $1-\eps$ of the unknown Gaussian distribution. This improves substantially on existing algorithms and is the first algorithm to achieve a fixed polynomial dependence on $n$. The proof of our main result is based on a monotonicity property of Gaussian space.

  73. Sublinear Optimization for Machine Learning
    Authors: Kenneth L. Clarkson and Elad Hazan and David P. Woodruff (IBM Almaden Research Center)
    Abstract: We give sublinear-time approximation algorithms for some optimization problems arising in machine learning, such as training linear classifiers and finding minimum enclosing balls. Our algorithms can be extended to some kernelized versions of these problems, such as SVDD, hard margin SVM, and $L_2$-SVM, for which sublinear-time algorithms were not known before. These new algorithms use a combination of novel sampling techniques and the randomized implementation of online learning algorithms. We give lower bounds which show the running times of our algorithms to be nearly best possible in the unit-cost RAM model. We also give implementations of our algorithms in the semi-streaming setting, obtaining the first low pass sublinear time algorithms with polylogarithmic space and arbitrary approximation factor. Finally we show that our algorithms, which are Monte Carlo, can be made Las Vegas with an additional linear-time step, and we show that such linear work is necessary for Las Vegas results.
  74. The Limits of Two-Party Differential Privacy
    Authors: Andrew McGregor (University of Massachusetts, Amherst) Ilya Mironov (Microsoft Research Silicon Valley) Toniann Pitassi (University of Toronto) Omer Reingold (Microsoft Research Silicon Valley) Kunal Talwar (Microsoft Research Silicon Valley) Salil Vadhan (Harvard)
    Abstract: We study differential privacy in a distributed setting where two parties would like to perform analysis of their joint data while preserving privacy for both datasets. Our results imply almost tight lower bounds on the accuracy of such data analyses, both for specific natural functions (such as Hamming distance) and in general. Our bounds expose a sharp contrast between the two-party setting and the simpler client-server setting (where privacy guarantees are one-sided). In addition, those bounds demonstrate a dramatic gap between the accuracy that can be obtained by differentially private data analysis versus the accuracy obtainable when privacy is relaxed to a computational variant of differential privacy.

    The first proof technique we develop demonstrates a connection between differential privacy and deterministic extraction from Santha-Vazirani sources. A second connection we expose indicates that the ability to approximate a function by a low-error differentially private protocol is strongly related to the ability to approximate it by a low-communication protocol (the connection goes both ways).
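
    For intuition about the two-party setting, here is a toy protocol (illustrative only, not from the paper) that approximates Hamming distance under differential privacy via randomized response; its additive error, on the order of sqrt(n), is exactly the kind of accuracy loss the paper's lower bounds show is inherent.

      import math
      import random

      def randomized_response(bits, eps):
          """Keep each bit w.p. e^eps/(1+e^eps), else flip it (eps-DP)."""
          keep = math.exp(eps) / (1.0 + math.exp(eps))
          return [b if random.random() < keep else 1 - b for b in bits]

      def estimate_hamming(a_bits, b_bits, eps):
          keep = math.exp(eps) / (1.0 + math.exp(eps))
          noisy = randomized_response(a_bits, eps)   # Alice's one message
          raw = sum(x != y for x, y in zip(noisy, b_bits))
          n = len(a_bits)
          # De-bias: E[raw] = keep*true + (1-keep)*(n-true).
          return (raw - (1 - keep) * n) / (2 * keep - 1)

      random.seed(0)
      n = 10_000
      a = [random.randint(0, 1) for _ in range(n)]
      b = [random.randint(0, 1) for _ in range(n)]
      true = sum(x != y for x, y in zip(a, b))
      print(true, round(estimate_hamming(a, b, eps=1.0)))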

  75. Distance Oracles Beyond the Thorup-Zwick Bound
    Authors: Mihai Patrascu - AT&T Labs; Liam Roditty - Bar Ilan University
    Abstract: We give the first improvement to the space/approximation trade-off of distance oracles since the seminal result of Thorup and Zwick [STOC'01].

    For unweighted graphs, our distance oracle has size O(n^{5/3}) = O(n^{1.66\cdots}) and, when queried about vertices at distance d, returns a path of length 2d+1.

    For weighted graphs with m=n^2/\alpha edges, our distance oracle has size O(n^2 / \sqrt[3]{\alpha}) and returns a factor 2 approximation.

    Based on a widely believed conjecture about the hardness of set intersection queries, we show that a 2-approximate distance oracle requires space \tilde{\Omega}(n^2 / \sqrt{\alpha}). For unweighted graphs, this implies an \tilde{\Omega}(n^{1.5}) space lower bound to achieve approximation 2d+1.
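
    For orientation, here is a minimal landmark-based oracle sketch; it conveys the space/query-time/stretch trade-off at stake, but it does not achieve the 2d+1 guarantee above, which requires the more refined cluster structures of Thorup-Zwick and of this paper.

      import random
      from collections import deque

      def bfs(adj, src):
          """Unweighted single-source distances."""
          dist = {src: 0}
          q = deque([src])
          while q:
              u = q.popleft()
              for v in adj[u]:
                  if v not in dist:
                      dist[v] = dist[u] + 1
                      q.append(v)
          return dist

      class LandmarkOracle:
          def __init__(self, adj, num_landmarks, seed=0):
              nodes = list(adj)
              self.landmarks = random.Random(seed).sample(nodes, num_landmarks)
              # Space: O(num_landmarks * n) precomputed distances.
              self.dist = {l: bfs(adj, l) for l in self.landmarks}

          def query(self, u, v):
              # Upper bound on d(u,v) through the best landmark
              # (triangle inequality); assumes a connected graph.
              return min(self.dist[l][u] + self.dist[l][v]
                         for l in self.landmarks)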

  76. A Multiplicative Weights Mechanism for Interactive Privacy-Preserving Data Analysis
    Authors: Moritz Hardt (Princeton University); Guy Rothblum (Princeton University)
    Abstract: We consider statistical data analysis with online queries. In this setting a trusted curator maintains a database of sensitive information about individual participants, and releases privacy-preserving answers to online queries as they arrive. Our primary contribution is a new differentially private multiplicative weights mechanism for answering a large number of counting queries that arrive online and may be adaptively chosen. This mechanism yields two main results.

    First, it is the first {\em efficient} mechanism for answering large numbers of online queries with worst-case accuracy guarantees (accuracy on every input database). The error is \emph{optimal} in its dependence on the number of participants and depends only logarithmically on the number of queries being answered. The running time is nearly {\em linear} in the size of the data universe. Previous mechanisms for answering many counting queries all run in super-cubic time, even for the (easier) offline setting where all queries are known in advance.

    Second, while continuing to guarantee worst-case privacy for {\em any} input database, we obtain exponential improvements in running time for a broad class of databases. When the input database is drawn from a {\em smooth} distribution that does not place too much weight on any single data item, accuracy remains as above and the running time becomes {\em poly-logarithmic} in the data universe size. To the best of our knowledge, this is the first example of a poly-logarithmic time mechanism for answering large numbers of general queries (indeed, for worst-case accuracy guarantees, there are known hardness results). Our main technical contributions are a new application of multiplicative weights techniques to the differential privacy setting, a new privacy analysis for multiplicative weights algorithms, and a general technique for reducing data dimensionality for databases drawn from smooth distributions.
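
    The following numpy sketch is a schematic rendering of a multiplicative-weights mechanism of this flavor. The noise scale, threshold, and learning rate are illustrative stand-ins; a real instantiation would calibrate the Laplace noise to the privacy budget and account for the number of update rounds.

      import numpy as np

      rng = np.random.default_rng(0)

      def private_mw(hist, queries, noise_scale=0.01, threshold=0.05, eta=0.25):
          """hist: true database as a normalized histogram over a universe
          of size N; queries: 0/1 vectors of length N (counting queries)."""
          N = len(hist)
          x = np.full(N, 1.0 / N)        # synthetic database, initially uniform
          answers = []
          for q in queries:
              noisy = q @ hist + rng.laplace(scale=noise_scale)
              est = q @ x
              if abs(noisy - est) <= threshold:
                  answers.append(est)    # "lazy" round: synthetic DB suffices
                  continue
              sign = 1.0 if noisy > est else -1.0
              x = x * np.exp(eta * sign * q)   # multiplicative-weights update
              x = x / x.sum()
              answers.append(q @ x)      # "update" round: costs privacy budget
          return answers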

  77. Subexponential Algorithms for Unique Games and Related problems
    Authors: Sanjeev Arora: Princeton University and Center for Computational Intractability; Boaz Barak: Microsoft Research and Princeton University; David Steurer: Princeton University and Center for Computational Intractability.
    Abstract: We give subexponential time approximation algorithms for the unique games and the small set expansion problems. Specifically, for some absolute constant c, we give:

    1. An exp(kn^epsilon)-time algorithm that, given as input a k-alphabet unique game on n variables that has an assignment satisfying a 1-epsilon^c fraction of its constraints, outputs an assignment satisfying a 1-epsilon fraction of the constraints.

    2. An exp(n^epsilon/delta)-time algorithm that, given as input an n-vertex regular graph that has a set S of delta n vertices with edge expansion at most epsilon^c, outputs a set S' of at most delta n vertices with edge expansion at most epsilon.

    We also obtain a subexponential algorithm with improved approximation for the Multi-Cut problem, as well as subexponential algorithms with improved approximations to Max-Cut, Sparsest-Cut and Vertex-Cover on some interesting subclasses of instances.

    Khot's Unique Games Conjecture (UGC) states that it is NP-hard to achieve approximation guarantees such as ours for unique games. While our results stop short of refuting the UGC, they do suggest that Unique Games is significantly easier than NP-hard problems such as 3SAT, 3LIN, Label Cover, and more, which are believed not to have a subexponential algorithm achieving a non-trivial approximation ratio.

    The main component in our algorithms is a new result on graph decomposition that may have other applications. Namely, we show that for every delta>0 and every regular n-vertex graph G, by changing at most a delta fraction of G's edges, one can break G into disjoint parts so that the induced graph on each part has at most n^epsilon eigenvalues larger than 1-eta (where epsilon and eta depend polynomially on delta). Our results are based on combining this decomposition with previous algorithms for unique games on graphs with few large eigenvalues (Kolla and Tulsiani 2007, Kolla 2010).
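
    The quantity driving the decomposition, the number of eigenvalues of a part's normalized adjacency matrix above 1-eta, is easy to inspect numerically; the random instance and value of eta below are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def count_large_eigenvalues(A, eta):
          """Eigenvalues of the normalized adjacency matrix above 1 - eta."""
          deg = A.sum(axis=1)
          D = np.diag(1.0 / np.sqrt(deg))
          eigs = np.linalg.eigvalsh(D @ A @ D)
          return int(np.sum(eigs > 1.0 - eta))

      n = 300
      A = (rng.random((n, n)) < 0.1).astype(float)
      A = np.triu(A, 1)
      A = A + A.T                              # random undirected graph
      print(count_large_eigenvalues(A, eta=0.2))  # expander-like: typically 1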

  78. Budget Feasible Mechanisms
    Author: Yaron Singer, UC Berkeley
    Abstract: We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet, to our knowledge, it has not been studied before. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
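
    As one concrete data point, here is a sketch of a budget-feasible allocation rule for the additive special case, a paraphrase of the proportional-share idea rather than the paper's exact mechanisms and payments: sellers are considered in decreasing value-per-cost order, and a seller is added only while her declared cost stays within her proportional share of the budget.

      def proportional_share_allocation(values, costs, budget):
          order = sorted(range(len(values)),
                         key=lambda i: values[i] / costs[i], reverse=True)
          chosen, total_value = [], 0.0
          for i in order:
              # Add seller i only if her cost is within her budget share.
              if costs[i] <= values[i] * budget / (total_value + values[i]):
                  chosen.append(i)
                  total_value += values[i]
              else:
                  break
          return chosen

      # Three sellers with declared costs; the buyer has budget 10.
      print(proportional_share_allocation(values=[6.0, 5.0, 1.0],
                                          costs=[2.0, 3.0, 2.0],
                                          budget=10.0))   # [0, 1]
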
  79. Black-Box, Round-Efficient Secure Computation via Non-Malleability Amplification
    Author: Hoeteck Wee (Queens College, CUNY)
    Abstract: We present new and improved protocols for secure multi-party computation. Our main contributions are as follows:

    -- an O(log^* n)-round protocol for secure multi-party computation with a dishonest majority that relies on black-box access to dense cryptosystems, homomorphic encryption schemes, or lossy encryption schemes. This improves upon the recent O(1)^{log^* n}-round protocol of Lin, Pass and Venkitasubramaniam (STOC 2009) that relies on non-black-box access to a smaller class of primitives.

    -- an O(1)-round protocol requiring, in addition, black-box access to a one-way function with sub-exponential hardness, improving upon the recent work of Pass and Wee (Eurocrypt 2010).

    These are the first black-box constructions for secure computation with sublinear round complexity. Our constructions build on and improve upon the work of Lin and Pass (STOC 2009) on non-malleability amplification, as well as that of Ishai et al. (STOC 2006) on black-box secure computation.

    In addition to the results on secure computation, we also obtain a simple, self-contained construction of an O(log^* n)-round non-malleable commitment scheme based on one-way functions, improving upon the recent O(1)^{log^* n}-round protocol of Lin and Pass (STOC 2009). Our construction uses a novel transformation for handling arbitrary man-in-the-middle scheduling strategies which improves upon a previous construction of Barak (FOCS 2002).

  80. On the Computational Complexity of Coin Flipping
    Authors: Hemanta K. Maji (UIUC) Manoj Prabhakaran (UIUC) Amit Sahai (UCLA)
    Abstract: Coin flipping is one of the most fundamental tasks in cryptographic protocol design. Informally, a coin flipping protocol should guarantee both (1) Completeness: an honest execution of the protocol by both parties results in a fair coin toss, and (2) Security: a cheating party cannot increase the probability of its desired outcome by any significant amount. Since its introduction by Blum~\cite{Blum82}, coin flipping has occupied a central place in the theory of cryptographic protocols. In this paper, we explore the implications that the existence of secure coin flipping protocols has for complexity theory. As exposited recently by Impagliazzo~\cite{Impagliazzo09talk}, surprisingly little is known about this question.

    Previous work has shown that if we interpret the Security property of coin flipping protocols very strongly, namely that nothing beyond a negligible bias by cheating parties is allowed, then one-way functions must exist~\cite{ImpagliazzoLu89}. However, for even a slight weakening of this security property (for example that cheating parties cannot bias the outcome by any additive constant $\epsilon>0$), the only complexity-theoretic implication that was known was that $\PSPACE \nsubseteq \BPP$.

    We put forward a new attack to establish our main result, which shows that, informally speaking, the existence of any (weak) coin flipping protocol that prevents a cheating adversary from biasing the output by more than $\frac14 - \epsilon$ implies that $\NP \nsubseteq \BPP$. Furthermore, for constant-round protocols, we show that the existence of any (weak) coin flipping protocol that allows an honest party to maintain any noticeable chance of prevailing against a cheating party implies the existence of (infinitely often) one-way functions.
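
    For reference, here is Blum-style coin flipping in toy Python form; the hash-based commitment stands in for a generic commitment scheme and is only computationally hiding and binding under assumptions on the hash function. Note that a party who aborts rather than open its commitment can bias the outcome, which is precisely the constant-bias regime the results above address.

      import os
      import hashlib

      def commit(bit):
          nonce = os.urandom(32)
          return hashlib.sha256(nonce + bytes([bit])).hexdigest(), nonce

      def verify(digest, nonce, bit):
          return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

      a = os.urandom(1)[0] & 1          # Alice's bit
      digest, nonce = commit(a)         # Alice -> Bob: commitment to a
      b = os.urandom(1)[0] & 1          # Bob -> Alice: his bit, in the clear
      assert verify(digest, nonce, a)   # Alice opens; Bob checks the opening
      print("coin:", a ^ b)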

  81. Adaptive Hardness and Composable Security in the Plain Model from Standard Assumptions
    Authors: Ran Canetti, Huijia Lin, and Rafael Pass.
    Abstract: We construct the first general secure computation protocols that require no trusted infrastructure other than authenticated communication, and that satisfy a meaningful notion of security that is preserved under universal composition---{\em assuming only the existence of enhanced trapdoor permutations.} The notion of security is a generalization of the ``angel-based'' notion of Prabhakaran and Sahai (STOC'04) and implies super-polynomial time simulation security.

    A key element in our construction is a new notion of security for commitment schemes. The new notion, security against chosen-commitment-attacks (CCA security), means that security holds even if the attacker has access to a {\em decommitment oracle.} This notion is stronger than concurrent non-malleability and is of independent interest. Our main technical contribution is constructing CCA-secure commitments based on standard one-way functions, and with no trusted set-up. This provides a construction of a primitive whose \emph{adaptive hardness} can be based on standard assumptions without set-up.
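
    A schematic toy rendering (for intuition only, not the paper's construction) of the decommitment-oracle interface in the CCA experiment: the commitments below use a deliberately tiny nonce space, so the oracle can open any commitment by brute force, which is the extra power the definition grants the attacker, while queries on the challenge itself are barred.

      import hashlib
      import itertools

      def commit(bit, nonce):                   # toy: nonce in 0..255 only
          return hashlib.sha256(bytes([nonce, bit])).hexdigest()

      def decommitment_oracle(commitment, challenge):
          """Open any commitment except the challenge itself."""
          if commitment == challenge:
              return None                       # challenge queries are barred
          for nonce, bit in itertools.product(range(256), (0, 1)):
              if commit(bit, nonce) == commitment:
                  return bit
          return None

      challenge = commit(1, nonce=42)    # the bit the attacker must not learn
      other = commit(0, nonce=7)
      print(decommitment_oracle(other, challenge))      # 0
      print(decommitment_oracle(challenge, challenge))  # None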