From a game-theoretic point of view, our results provide a characterization of the power of the main positive result of mechanism design: the VCG mechanism. Another motivation is that our lower bounds might play a crucial role in establishing lower bounds on the power of polynomial-time truthful mechanisms. We will explain and discuss both issues.
Joint work with Noam Nisan.
Among the other tools, we develop a sufficient condition for the tree and graph reconstruction problems to coincide. We apply this condition to antiferromagnetic colorings of random graphs.
Based on joint work with A. Gerschenfeld.
In this talk, we will illustrate some of these connections and their power by showing how recent breakthrough constructions of list-decodable codes, due to Parvaresh and Vardy (FOCS '05) and Guruswami and Rudra (STOC '06), can be used to give much simpler and improved constructions of both randomness extractors and highly unbalanced bipartite expander graphs.
Joint work with Venkatesan Guruswami and Christopher Umans.
The data stream model does not exploit all the resources available in modern storage architectures, e.g., Map-Reduce uses clusters to perform updates over multiple streams. In the second part of my talk, I will consider one generalization of the data stream model in which the algorithm has sequential access to multiple streams and can also write onto streams. There is no limit on the size of the streams but the number of passes made on the streams is restricted. On the other hand, the amount of internal memory used by the algorithm is scarce, similar to the data stream model. This model is truly more powerful than data streams: it is possible to sort N items in O(log N) passes with O(log N) memory. Since sorting is the "mother" of most streaming problems, this yields efficient algorithms for many problems that are intractable for data streams. This model of read/write streams was introduced by Grohe and Schweikardt in PODS 2005.
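The sorting claim can be illustrated with a small sketch of pass-by-pass run merging. Here Python lists stand in for sequential read/write streams, so this is only an illustration of the pass/memory trade-off, not the Grohe-Schweikardt model in full:

```python
def merge_pass(runs):
    """One pass: merge consecutive pairs of sorted runs, halving the run count.
    In the streaming model each merge needs only O(1) working memory, since
    both runs are read sequentially and the output is written sequentially."""
    out = []
    for i in range(0, len(runs), 2):
        if i + 1 == len(runs):
            out.append(runs[i])
            continue
        a, b, merged = runs[i], runs[i + 1], []
        ia = ib = 0
        while ia < len(a) and ib < len(b):
            if a[ia] <= b[ib]:
                merged.append(a[ia]); ia += 1
            else:
                merged.append(b[ib]); ib += 1
        merged.extend(a[ia:]); merged.extend(b[ib:])
        out.append(merged)
    return out

def stream_sort(items):
    runs = [[x] for x in items]    # start with N sorted runs of length 1
    passes = 0
    while len(runs) > 1:
        runs = merge_pass(runs)    # each pass reads and writes every item once
        passes += 1
    return (runs[0] if runs else []), passes

print(stream_sort([5, 3, 8, 1, 9, 2, 7, 4]))  # -> ([1, 2, 3, 4, 5, 7, 8, 9], 3)
```

Since the run count halves each pass, N items are sorted in ceil(log2 N) passes, matching the O(log N) bound quoted above.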
What happens if the number of passes is sub-logarithmic? Previous work by these authors and others yielded lower bounds only for deterministic algorithms. Any problem that involves approximation, however, requires one to analyze randomized algorithms with 2-sided error, for which these results do not yield anything. This was the main problem left open in their work, which we resolved recently. The main result that I will present is for the classical set-disjointness problem: a near-linear lower bound on the size of the internal memory when the number of passes is slightly sub-logarithmic. The technique involves a direct-sum type of argument that can be applied to a large class of problems.
This work appeared in STOC '07 and is joint with Paul Beame (Univ. of Washington) and Atri Rudra (SUNY Buffalo).
Motivated by the stochastic shortest path problem, we further embark on giving new tools for quasi-concave minimization--a problem of considerable difficulty, for which exact algorithms typically take exponential time, so heuristics with unknown approximation guarantees are usually employed.
To that effect, we provide the first smoothed analysis of quasi-concave minimization. The analysis is based on a smoothed bound for the number of extreme points of the projection of the feasible polytope onto a $k$-dimensional subspace. The bound we derive can be used to design a polynomial-time approximation algorithm for low-rank quasi-concave minimization (as is the case of the shortest path problem above). We supplement the positive smoothed bound with a hardness result: that general quasi-concave minimization is hard to approximate. The latter implies that our bound is tight, in that no polynomial smoothed bound is possible for general quasi-concave minimization.
This talk is based on two recent papers:
"On the Hardness and Smoothed Complexity of Quasi-Concave Minimization," joint with J. Kelner (FOCS '07), and
"Stochastic Shortest Paths via Quasi-Convex Maximization," joint with M. Brand, J. Kelner and M. Mitzenmacher (ESA '06).
Joint work with Ittai Abraham and Yair Bartal.
In this talk we focus on using linear sketches Ax to recover sparse approximations of x. Informally, a sparse approximation of x is a vector x' that is "sparse" (has few non-zero terms) and "well-approximates" x. The methods that accomplish this task can be roughly divided into two classes:
- combinatorial: utilize sparse sketching matrices; the recovery algorithms involve binary-search-like techniques
- geometric: utilize dense sketching matrices; the recovery algorithms involve linear or convex programming
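As a toy illustration of the combinatorial class (a Count-Min-style sketch, not the specific constructions discussed in the talk; all names and parameters below are illustrative), the sketching matrix is sparse--one nonzero per coordinate per row--and large entries of a nonnegative x are estimated by taking a minimum over rows:

```python
import random

def make_hashes(n, rows, width, seed=0):
    """Sparse 0/1 sketching matrix: each row maps coordinate i to one bucket."""
    rng = random.Random(seed)
    return [[rng.randrange(width) for _ in range(n)] for _ in range(rows)]

def sketch(x, hashes, width):
    """Compute the linear sketch Ax: add each coordinate into its bucket, per row."""
    table = [[0.0] * width for _ in hashes]
    for r, row in enumerate(hashes):
        for i, v in enumerate(x):
            table[r][row[i]] += v
    return table

def estimate(i, table, hashes):
    """For nonnegative x, the minimum over rows never underestimates x[i];
    collisions can only inflate a bucket."""
    return min(table[r][hashes[r][i]] for r in range(len(hashes)))

n, rows, width = 1000, 5, 50
x = [0.0] * n
x[7], x[42], x[500] = 100.0, 80.0, 60.0      # a 3-sparse signal
hashes = make_hashes(n, rows, width)
table = sketch(x, hashes, width)
for i, v in [(7, 100.0), (42, 80.0), (500, 60.0)]:
    print(i, estimate(i, table, hashes) >= v)  # True: estimates never undershoot
```

The sketch uses rows * width counters instead of n, and the per-coordinate sparsity is what makes sketching and recovery fast--the advantage the geometric methods are shown to inherit in the results below.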
Given that both methods have pros and cons, it is desirable to understand the connections between them, with the ultimate goal of obtaining the "best of both worlds" solution.
In this talk we present recent results on applying geometric recovery methods to sparse sketching matrices. We show that, both in theory and in practice, the sparse matrices are essentially as "good" as the dense ones. At the same time, our approach inherits many advantages of sparse matrices, such as lower sketching and recovery times, and the ability to construct the sketching matrices explicitly.
Joint work with: Radu Berinde, Anna Gilbert, Howard Karloff and Martin Strauss.
In this talk, we derive a 2-approximation algorithm for a general subclass of the Restless Bandit Problem, in which the state-transition probability for each bandit is "separable" into a constant probability matrix and a vector of arbitrary monotone functions of time. The monotonicity models increasing levels of uncertainty as time-since-last-observation increases. We mention applications of this model to wireless scheduling and unmanned aircraft navigation. The resulting algorithm is simple and intuitive, and in addition, applicable to related stochastic scheduling problems.
Joint work with Sudipto Guha, UPenn, and Peng Shi, Duke.
"Smooth histograms for sliding windows" is joint work with Rafail Ostrovsky (FOCS '07). This paper presents a smooth histogram method that not only captures and improves upon multiple previous results on sliding windows, but also extends the class of functions that can be approximated over sliding windows.
"Succinct sampling on streams" is a joint work with Rafail Ostrovsky and Carlo Zaniolo (submitted). We call sampling algorithms succinct if they use provably optimal worst-case memory. In this paper we ask the following question: Is succinct sampling on streams (or S^3) possible, and if so, for what models? We show that S^3 algorithms are possible for all variants of the problem, i.e., both with and without replacement and both for one-at-a-time and bursty arrival models.
The major result on the k-server problem is the Work Function Algorithm of Koutsoupias and Papadimitriou, which gives a (2k-1)-competitive deterministic algorithm. Slightly improved results (k-competitive algorithms) exist for some special cases, and a deterministic lower bound of k is also known. However, in many cases it is possible to improve online competitive ratios using randomization. In this talk, I will present a randomized algorithm for k-server on a special class of metric spaces -- hierarchical binary trees. While this may seem a restrictive case, a slight generalization of this result to non-binary hierarchically structured trees would apply to arbitrary finite metrics, thanks to a recent set of results on randomized metric embeddings. The talk presents a number of new ideas, including modeling an online problem as a hierarchy of independent online decision makers and an improved algorithm for a special case of the metrical task system problem.
This is joint work with Aaron Cote (UCLA) and Laura Poplawski (Northeastern) and has been accepted to STOC 2008.
We then consider the discounted secretary problem. There is a time-dependent ``discount'' factor d(t), and the benefit derived from assigning the item at time t to agent e is d(t) times v(e). For this problem, we show a lower bound of $\Omega(\frac{\log n}{\log \log n})$ and a nearly-matching O(log n)-competitive truthful auction for general (and possibly increasing) d(t). Additionally, we present a constant-competitive truthful auction for the case when the expected optimum is known in advance, demonstrating the considerable value of knowing this market statistic.
This talk is based on joint work with Immorlica and Kleinberg (SODA '07) and Dinitz, Gupta, Immorlica and Talwar (2008).
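For intuition on the undiscounted baseline behind these results, the classical secretary stopping rule selects the best of n randomly ordered values with probability about 1/e. The simulation below is a generic illustration of that rule, not the truthful mechanisms of the papers above:

```python
import math
import random

def secretary_rule(values):
    """Observe the first n/e candidates without accepting, then accept the
    first candidate who beats the best seen so far (or the last one if none does)."""
    n = len(values)
    cutoff = max(1, int(n / math.e))
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

# Estimate the success probability empirically on random 50-candidate streams.
rng = random.Random(0)
trials, wins = 5000, 0
for _ in range(trials):
    vals = [rng.random() for _ in range(50)]
    if secretary_rule(vals) == max(vals):
        wins += 1
print(wins / trials)   # close to 1/e ~ 0.368
```

The discounted variant above is harder precisely because waiting is penalized by d(t), which is what drives the $\Omega(\frac{\log n}{\log \log n})$ gap.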