Stanford Algorithms Seminar (formerly known as AFLB) is usually held in Gates 498 or Theory Lounge (Gates 463A).
It is run by Nikola Milosavljevic and Tim Roughgarden.

Contents:

  1. 16 Oct 2007: Shahar Dobzinski (Hebrew University of Jerusalem). The Power of VCG: On Algorithms that are Maximal in Range.
  2. 13 Nov 2007: Andrea Montanari (Stanford). Reconstruction for Models on Random Graphs.
  3. 15 Nov 2007: Salil Vadhan (Harvard). Expander Graphs, Randomness Extractors, and List-Decodable Codes.
  4. 27 Nov 2007: T. S. Jayram (IBM Almaden). Read/Write Streams for Massive Data Sets.
  5. 11 Dec 2007: Evdokia (Eddie) Nikolova (MIT). From Shortest Paths to Quasi-Concave Minimization.
  6. 22 Jan 2008: Ofer Neiman (Hebrew University of Jerusalem). Local Embedding of Metric Spaces.
  7. 25 Jan 2008: Paul Valiant (MIT). Testing Symmetric Properties of Distributions.
  8. 29 Jan 2008: Piotr Indyk (MIT). Sparse Recovery Using Sparse Random Matrices.
  9. 11 Mar 2008: Kamesh Munagala (Duke University). LP-duality Based Algorithms for Restless Bandit Problems.
  10. 15 Apr 2008: Vladimir Braverman (UCLA). Streaming Computations on Sliding Windows.
  11. 29 Apr 2008: Adam Meyerson (UCLA). Randomized K-Server on Hierarchical Binary Trees.
  12. 13 May 2008: Moshe Babaioff (Microsoft Research). Online Auctions, Matroids and Secretary Problems.

16 October 2007

Gates 498, 4:00pm

Shahar Dobzinski (Hebrew University of Jerusalem)

The Power of VCG: On Algorithms that are Maximal in Range

Consider the following simple type of approximation algorithm: limit the set of possible outcomes, and completely optimize over the restricted range. In this talk we will study the power of algorithms of this type in two settings: multi-unit auctions, and combinatorial auctions with subadditive bidders.

From a game-theoretic point of view, our results characterize the power of the main positive result of mechanism design: the VCG mechanism. Another motivation is that our lower bounds may play a crucial role in establishing lower bounds on the power of polynomial-time truthful mechanisms. We will explain and discuss both issues.

Joint work with Noam Nisan.


13 November 2007

Gates 498, 4:30pm

Andrea Montanari (Stanford)

Reconstruction for Models on Random Graphs

The reconstruction problem asks us to estimate a random variable given "far away" observations. Several theoretical results (and simple algorithms) are available when the underlying probability distribution is Markov with respect to a tree. In this work we establish several exact thresholds for loopy graphs. More precisely, we consider models on random graphs that converge locally to trees. We establish the reconstruction thresholds for the Ising model with both attractive and random interactions (respectively, "ferromagnetic" and "spin glass"). Remarkably, in the first case the result does not coincide with the corresponding tree threshold.

Among other tools, we develop a sufficient condition for the tree and graph reconstruction thresholds to coincide, and we apply this condition to antiferromagnetic colorings of random graphs.

Based on joint work with A. Gerschenfeld.


15 November 2007

Packard 101, 4:15pm

Salil Vadhan (Harvard)

Expander Graphs, Randomness Extractors, and List-Decodable Codes

One of the exciting contributions of the theory of pseudorandomness has been the discovery that a number of fundamental and widely studied objects are almost equivalent when interpreted appropriately. These objects include expander graphs, randomness extractors, list-decodable error-correcting codes, pseudorandom generators, and randomness-efficient samplers.

In this talk, we will illustrate some of these connections and their power by showing how recent breakthrough constructions of list-decodable codes, due to Parvaresh and Vardy (FOCS '05) and Guruswami and Rudra (STOC '06), can be used to give much simpler and improved constructions of both randomness extractors and highly unbalanced bipartite expander graphs.

Joint work with Venkatesan Guruswami and Christopher Umans.


27 November 2007

Gates 498, 4:30pm

T. S. Jayram (IBM Almaden)

Read/Write Streams for Massive Data Sets

The data stream model is an important computing paradigm for massive data sets. The field has matured quite nicely in the last decade, and the theory community has made several fundamental contributions to the field. In the first part of my talk, I will highlight some of these results, and briefly describe the techniques that have played a major role in both the design of algorithms and in proving the limits of efficient computation.

The data stream model does not exploit all the resources available in modern storage architectures, e.g., Map-Reduce uses clusters to perform updates over multiple streams. In the second part of my talk, I will consider one generalization of the data stream model in which the algorithm has sequential access to multiple streams and can also write onto streams. There is no limit on the size of the streams but the number of passes made on the streams is restricted. On the other hand, the amount of internal memory used by the algorithm is scarce, similar to the data stream model. This model is truly more powerful than data streams: it is possible to sort N items in O(log N) passes with O(log N) memory. Since sorting is the "mother" of most streaming problems, this yields efficient algorithms for many problems that are intractable for data streams. This model of read/write streams was introduced by Grohe and Schweikardt in PODS 2005.
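The O(log N)-pass sorting claim can be made concrete with merge sort over read/write streams: each pass reads the input stream sequentially, merges adjacent sorted runs onto a fresh output stream, and doubles the run length, so log N passes suffice. A minimal Python sketch (streams simulated as lists; for brevity the simulation slices each run into memory, whereas the actual model reads runs sequentially with only O(log N) working memory):

```python
def merge_pass(stream, run_len):
    """One pass: merge adjacent sorted runs of length run_len,
    reading the input stream in order and writing a new stream."""
    out = []
    for start in range(0, len(stream), 2 * run_len):
        a = stream[start:start + run_len]                 # first run
        b = stream[start + run_len:start + 2 * run_len]   # second run
        i = j = 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        out.extend(a[i:]); out.extend(b[j:])
    return out

def stream_sort(stream):
    """Sort with O(log N) merge passes; returns (sorted, pass count)."""
    run_len, passes = 1, 0
    while run_len < len(stream):
        stream = merge_pass(stream, run_len)
        run_len *= 2
        passes += 1
    return stream, passes
```

Sorting 8 items takes 3 passes (runs of length 1, then 2, then 4 are merged), matching the O(log N) bound.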

What happens if the number of passes is sub-logarithmic? Previous work by these authors and others yielded lower bounds only for deterministic algorithms. Any problem that involves approximation, however, requires analyzing randomized algorithms with two-sided error, for which those results yield nothing. This was the main open problem in their work, which we resolved recently. The main result I will present is for the classical set-disjointness problem: a near-linear lower bound on the size of the internal memory when the number of passes is slightly sub-logarithmic. The technique involves a direct-sum type of argument that can be applied to a large class of problems.

This work appeared in STOC '07 and is joint with Paul Beame (Univ. of Washington) and Atri Rudra (SUNY Buffalo).


11 December 2007

Gates 459, 4:30pm

Evdokia (Eddie) Nikolova (MIT)

From Shortest Paths to Quasi-Concave Minimization

In this talk I will walk through a line of research that started with a simple application in mind: finding the shortest path in an uncertain environment (or: how to get to the airport on time). In particular, our goal was to maximize the probability that the path length does not exceed a given threshold (deadline). This stochastic shortest path problem is one of the few with a nonlinear objective function, which is challenging both because closed-form expressions for the objective are hard to obtain and because optimizing it falls in the category of non-convex optimization. We give a surprising exact $n^{\Theta(\log n)}$ algorithm for the case of independent normally distributed edge lengths, based on quasi-concave minimization.
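For a fixed path with independent normal edge lengths, the objective has a closed form: the path length is normal with mean and variance equal to the sums along the path, so P(length <= deadline) = Phi((t - mu)/sigma). A quick illustration (edge parameters invented for the example; the hard part, not shown, is searching over exponentially many paths):

```python
import math

def on_time_probability(edges, deadline):
    """P(path length <= deadline) for a path whose edge lengths are
    independent normals, given as (mean, variance) pairs."""
    mu = sum(m for m, _ in edges)
    var = sum(v for _, v in edges)
    z = (deadline - mu) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

This makes the non-convexity concrete: for a tight deadline, a path with slightly larger mean but much smaller variance (e.g. mean 32, variance 4) beats the mean-minimizing path (mean 30, variance 100), so minimizing expected length is the wrong objective.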

Motivated by the stochastic shortest path problem, we then develop new tools for quasi-concave minimization, a problem of considerable difficulty: exact solutions typically take exponential time, and heuristics with unknown approximation guarantees are usually employed.

To that effect, we provide the first smoothed analysis of quasi-concave minimization. The analysis is based on a smoothed bound for the number of extreme points of the projection of the feasible polytope onto a $k$-dimensional subspace. The bound we derive can be used to design a polynomial-time approximation algorithm for low-rank quasi-concave minimization (as is the case of the shortest path problem above). We supplement the positive smoothed bound with a hardness result: that general quasi-concave minimization is hard to approximate. The latter implies that our bound is tight, in that no polynomial smoothed bound is possible for general quasi-concave minimization.

This talk is based on two recent papers:
- On the Hardness and Smoothed Complexity of Quasi-Concave Minimization, joint with J. Kelner (FOCS '07)
- Stochastic Shortest Paths via Quasi-Convex Maximization, joint with M. Brand, J. Kelner and M. Mitzenmacher (ESA '06)


22 January 2008

Gates 498, 4:00pm

Ofer Neiman (Hebrew University of Jerusalem)

Local Embedding of Metric Spaces

In many application areas, complex data sets are represented by a metric space, and metric embedding is used to provide a more structured representation of the data. In many of these applications, much greater emphasis is placed on preserving the local structure of the original space than on maintaining its complete structure. In this work we initiate the study of local embeddings of metric spaces and provide embeddings whose distortion depends solely on the local structure of the space.

Joint work with Ittai Abraham and Yair Bartal.


25 January 2008

Gates 498, 4:00pm

Paul Valiant (MIT)

Testing Symmetric Properties of Distributions

We introduce the notion of a Canonical Tester for a class of properties, that is, a tester strong and general enough that ``a property is testable if and only if the Canonical Tester tests it''. We construct a Canonical Tester for the class of symmetric properties of one or two distributions satisfying a certain weak continuity condition. Analyzing the performance of the Canonical Tester on specific properties resolves several open problems, establishing lower bounds that match known upper bounds: we show that distinguishing between entropy $<\alpha$ and $>\beta$ on distributions over $[n]$ requires $n^{\alpha/\beta - o(1)}$ samples, and that distinguishing whether a pair of distributions has statistical distance $<\alpha$ or $>\beta$ requires $n^{1-o(1)}$ samples. Our techniques also resolve a conjecture about a property to which our Canonical Tester does not apply: distinguishing identical distributions from those with statistical distance $>\beta$ requires $\Omega(n^{2/3})$ samples.

29 January 2008

Gates 498, 4:00pm

Piotr Indyk (MIT)

Sparse Recovery Using Sparse Random Matrices

Over the recent years, a new method for obtaining succinct approximate representations of n-dimensional vectors has been discovered. For any vector x, its representation is equal to Ax, where A is an m x n matrix (possibly chosen at random). Although typically the representation length m is much smaller than n, the "sketch" Ax often contains plenty of useful information about x. At the same time, the linearity of the sketching method is very convenient for many applications, such as data stream computing and compressed sensing.
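Linearity is what makes sketches so convenient for streaming: since A(x + e) = Ax + Ae, a stream update to one coordinate of x shifts the sketch by a single column of A, and with a sparse A each update touches only a few sketch entries. A toy illustration with a sparse random sign matrix (all dimensions and the per-column sparsity are chosen arbitrarily for the example; recovery of x from Ax is the subject of the talk and is not shown):

```python
import random

def sparse_sign_matrix(m, n, d, rng):
    """m x n matrix with d random +/-1 entries per column, zeros elsewhere."""
    A = [[0] * n for _ in range(m)]
    for col in range(n):
        for row in rng.sample(range(m), d):
            A[row][col] = rng.choice((-1, 1))
    return A

def sketch(A, x):
    """Compute the linear sketch Ax."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

rng = random.Random(0)
n, m = 100, 20
A = sparse_sign_matrix(m, n, d=3, rng=rng)

x = [0.0] * n
x[7], x[42] = 5.0, -2.0          # a sparse signal
y = sketch(A, x)

# Linearity: incrementing x[7] by 1 shifts the sketch by column 7 of A,
# so a stream of coordinate updates can maintain y without storing x.
x2 = list(x); x2[7] += 1.0
y2 = sketch(A, x2)
assert all(abs((y2[i] - y[i]) - A[i][7]) < 1e-9 for i in range(m))
```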

In this talk we focus on using linear sketches Ax to recover sparse approximations of x. Informally, a sparse approximation of x is a vector x' that is "sparse" (has few non-zero terms) and "well-approximates" x. The methods that accomplish this task can be roughly divided into two classes:
- combinatorial: utilize sparse sketching matrices; the recovery algorithms involve binary-search-like techniques
- geometric: utilize dense sketching matrices; the recovery algorithms involve linear or convex programming

Given that both methods have pros and cons, it is desirable to understand the connections between them, with the ultimate goal of obtaining the "best of both worlds" solution.

In this talk we present recent results on applying geometric recovery methods to sparse sketching matrices. We show that, both in theory and in practice, the sparse matrices are essentially as "good" as the dense ones. At the same time, our approach inherits many advantages of sparse matrices, such as lower sketching and recovery times, and the ability to construct the sketching matrices explicitly.

Joint work with: Radu Berinde, Anna Gilbert, Howard Karloff and Martin Strauss.


11 March 2008

Gates 498, 4:00pm

Kamesh Munagala (Duke University)

LP-duality Based Algorithms for Restless Bandit Problems

In numerous areas of operations research and artificial intelligence, we face a problem common to gamblers choosing slot machines (or bandits) in a casino: whether to try new machines or keep playing the one we know and hope for the best. More generally, we face the trade-off between exploration and exploitation: between learning about and adapting to a stochastic system and exploiting our current best knowledge. A fundamental decision-theoretic model that captures this trade-off is the celebrated multi-armed bandit problem, and the basic version admits an elegant polynomial-time solution. However, many stochastic scheduling applications can only be modeled by a generalization termed the Restless Bandit Problem, which, in its full generality, is PSPACE-hard to approximate to any non-trivial factor.
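The exploration/exploitation trade-off in the basic multi-armed bandit can be made concrete with the simple epsilon-greedy heuristic. To be clear, this is only an illustration of the trade-off itself, not the LP-duality algorithm of the talk nor the polynomial-time (Gittins-index) solution of the basic problem; all parameters are invented:

```python
import random

def epsilon_greedy(arm_means, rounds, eps, rng):
    """Play `rounds` pulls over Bernoulli arms: with probability eps
    explore a uniformly random arm, otherwise exploit the arm with
    the best empirical success rate. Returns the total reward."""
    n = len(arm_means)
    pulls, wins = [0] * n, [0] * n
    total = 0
    for _ in range(rounds):
        if rng.random() < eps or 0 in pulls:
            a = rng.randrange(n)                                 # explore
        else:
            a = max(range(n), key=lambda i: wins[i] / pulls[i])  # exploit
        r = 1 if rng.random() < arm_means[a] else 0
        pulls[a] += 1; wins[a] += r; total += r
    return total
```

With arms of means (0.2, 0.5, 0.8), a small exploration rate earns close to the best arm's 0.8 per round, while pure exploration (eps = 1) averages only 0.5: the gap is exactly the value of balancing the two modes.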

In this talk, we derive a 2-approximation algorithm for a general subclass of the Restless Bandit Problem, in which the state-transition probability for each bandit is "separable" into a constant probability matrix and a vector of arbitrary monotone functions of time. The monotonicity models increasing levels of uncertainty as time-since-last-observation increases. We mention applications of this model to wireless scheduling and unmanned aircraft navigation. The resulting algorithm is simple and intuitive, and in addition, applicable to related stochastic scheduling problems.

Joint work with Sudipto Guha, UPenn, and Peng Shi, Duke.


15 April 2008

Gates 498, 4:00pm

Vladimir Braverman (UCLA)

Streaming Computations on Sliding Windows

A streaming model is one where data items arrive over long periods of time, either one item at a time or in bursts. Maintaining statistics and aggregates is an important and non-trivial task in this model. This becomes even more challenging in the sliding windows model, where statistics must be maintained only over the most recent n elements. This talk covers two recent results for the sliding windows model.

"Smooth histograms for sliding windows" is a joint work with Rafail Ostrovsky (FOCS '07). This paper presents a smooth histogram method that not only captures and improves multiple previous results on sliding windows, but also extends the class of functions that can be approximated on sliding windows.

"Succinct sampling on streams" is a joint work with Rafail Ostrovsky and Carlo Zaniolo (submitted). We call sampling algorithms succinct if they use provably optimal worst-case memory. In this paper we ask the following question: Is succinct sampling on streams (or S^3) possible, and if so, for what models? We show that S^3 algorithms are possible for all variants of the problem, i.e., both with and without replacement and both for one-at-a-time and bursty arrival models.
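For context, the textbook baseline for uniform sampling without replacement from a one-at-a-time stream of unknown length is reservoir sampling, which stores exactly k items; the S^3 question above concerns provably optimal worst-case memory across all the sampling variants, including bursty arrivals, which this classic sketch does not address:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Uniform random sample of k items, without replacement, from a
    stream of unknown length, storing only k items at a time."""
    sample = []
    for t, item in enumerate(stream, start=1):
        if t <= k:
            sample.append(item)       # fill the reservoir
        else:
            j = rng.randrange(t)      # j uniform in [0, t)
            if j < k:
                sample[j] = item      # replace with probability k/t
    return sample
```

The invariant is that after t items, every item seen so far is in the reservoir with probability exactly k/t.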


29 April 2008

Gates 498, 4:00pm

Adam Meyerson (UCLA)

Randomized K-Server on Hierarchical Binary Trees

Metric K-Server is a major problem in the area of online algorithms. We are given a metric space along with a set of K initial server locations, and a sequence of location requests arrives one at a time. As each request arrives, we must move one of our servers to that location. The goal is to minimize the total (or average) distance traveled by servers. This models emergency response, and also has a wide range of applications relating to caching and paging, robot navigation, and reconfigurable computing. In general we cannot compute an optimum solution without foreknowledge of the request sequence, so we will seek an algorithm with good competitive performance (minimizing the worst-case ratio of our total distance traveled to the optimum).

The major result on K-Server is the Work Function Algorithm of Koutsoupias and Papadimitriou, establishing a (2k-1)-competitive deterministic algorithm. Slightly improved results (k-competitive) exist for some special cases, and a deterministic lower bound of k is also known. However, in many cases it is possible to improve online competitive results using a randomized algorithm. In this talk, I will present a randomized algorithm for K-Server on a special class of metric spaces: hierarchical binary trees. While this seems a restrictive case, a slight generalization of this result to non-binary hierarchically structured trees would apply to arbitrary finite metrics, via recent results on randomized metric embedding. This talk presents a number of new ideas, including modeling an online problem as a hierarchy of independent online decision makers, and an improved algorithm for a special case of the metrical task system problem.

This is joint work with Aaron Cote (UCLA) and Laura Poplawski (Northeastern) and has been accepted to STOC 2008.


13 May 2008

Gates 498, 4:00pm

Moshe Babaioff (Microsoft Research)

Online Auctions, Matroids and Secretary Problems

We consider the problem of online auction design (e.g., Priceline.com). Maximizing welfare in such auctions is complicated by the fact that decisions are instantaneous and irrevocable. The classic secretary problem, introduced by Dynkin (1963), can be used to design approximately welfare-maximizing auctions in the simple single-unit setting, assuming that bidders arrive in a random order. We generalize the problem to combinatorial settings; specifically, we study matroid domains, in which the sets of agents that may be simultaneously satisfied constitute a matroid. We present a truthful O(log k)-competitive auction for any rank-k matroid, and constant-competitive auctions for several specific matroids of economic interest.
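The single-unit rule underlying these generalizations is Dynkin's 1/e rule: observe the first n/e candidates without accepting, then accept the first candidate better than everything seen so far. This selects the single best candidate with probability tending to 1/e as n grows, which a short simulation confirms (trial counts and n are arbitrary):

```python
import math
import random

def secretary_trial(n, rng):
    """One run of the 1/e rule on a random permutation of ranks 0..n-1
    (higher is better). Returns True iff the best candidate is chosen."""
    ranks = list(range(n))
    rng.shuffle(ranks)
    cutoff = int(n / math.e)
    best_seen = max(ranks[:cutoff], default=-1)   # observation phase
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1     # accepted: did we pick the best?
    return ranks[-1] == n - 1     # never accepted: stuck with the last

rng = random.Random(0)
trials = 20000
wins = sum(secretary_trial(50, rng) for _ in range(trials))
# the empirical success rate should be close to 1/e (about 0.368)
```

Note the risk the rule accepts: if the best candidate happens to arrive during the observation phase, no later candidate is ever good enough and the run fails, which is exactly the event that caps the success probability near 1/e.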

We then consider the discounted secretary problem. There is a time-dependent ``discount'' factor d(t), and the benefit derived from assigning the item at time t to agent e is d(t) times v(e). For this problem, we show a lower bound of $\Omega(\frac{\log n}{\log \log n})$, and a nearly matching O(log n)-competitive truthful auction for general (and possibly increasing) d(t). Additionally, we present a constant-competitive truthful auction when the expected optimum is known in advance, demonstrating the significant value of knowing this market statistic.

This talk is based on joint work with Immorlica and Kleinberg (SODA '07) and Dinitz, Gupta, Immorlica and Talwar (2008).