Stanford Algorithms Seminar (formerly known as AFLB) is usually held in Gates 498 or Theory Lounge (Gates 463A).
It is run by Kshipra Bhawalkar and Tim Roughgarden.

Contents:

  1. 7 Oct 2008: Vijay Vazirani (Georgia Tech). Nash Bargaining via Flexible Budget Markets.
  2. 21 Oct 2008: Alex Slivkins (Microsoft Research). Multi-Armed Bandits in Metric Spaces.
  3. 11 Nov 2008: Daniel Golovin (Carnegie Mellon). Uniquely Represented Data Structures with Applications to Privacy.
  4. 18 Nov 2008: Luca Trevisan (Berkeley). Max Cut and the Smallest Eigenvalue.
  5. 15 Jan 2009: Christos Papadimitriou (Berkeley). The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences.
  6. 27 Jan 2009: Yaron Singer (Berkeley). Limitations of Incentive Compatible Algorithms.
  7. 17 Feb 2009: Vahab Mirrokni (Google). Submodular Maximization Applied to Marketing over Social Networks.
  8. 24 Feb 2009: Dana Moshkovitz (Princeton). Two Query PCP with Sub-Constant Error.
  9. 5 May 2009: Michael Schapira (Yale, Berkeley). The Quest for Stability in Interdomain Routing.
  10. 14 July 2009: Michael Mitzenmacher (Harvard). Some Open Problems in Cuckoo Hashing.

7 October 2008

Gates 463A, 4:00pm

Vijay Vazirani (Georgia Tech)

Nash Bargaining via Flexible Budget Markets

In his seminal 1950 paper, John Nash defined the bargaining game; the ensuing theory of bargaining lies at the heart of game theory. In this work, we initiate an algorithmic study of Nash bargaining games.

For a certain class of Nash bargaining games, we show that they can be transformed into a market model whose equilibrium yields the solution to the Nash bargaining game. This market model is a natural variant of a traditional market model from mathematical economics, dating back to 1891.

We then extend techniques developed within theoretical computer science over the last seven years to give a combinatorial, polynomial time algorithm for computing an equilibrium for this market model.

Over the years, a fascinating theory has started forming around a convex program given by Eisenberg and Gale in 1959. Besides market equilibria, this theory touches on such disparate themes as TCP congestion control and efficient solvability of nonlinear programs by combinatorial means. Our work shows that the Nash bargaining problem fits harmoniously in this collage of ideas.
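At its core, the Nash bargaining solution maximizes the product of the players' utility gains over the disagreement point. The following minimal sketch (not the market-based algorithm from the talk) computes it by grid search on an assumed linear utility frontier u1 + u2 = 1; the frontier and disagreement point are illustrative choices, not from the talk.

```python
# Minimal sketch of the two-player Nash bargaining solution: maximize
# the Nash product (u1 - d1) * (u2 - d2) over the (assumed) linear
# frontier u1 + u2 = 1, subject to individual rationality.

def nash_bargaining_linear(d1, d2, steps=10000):
    """Grid-search the Nash product on the frontier u1 + u2 = 1."""
    best, best_u1 = float("-inf"), None
    for i in range(steps + 1):
        u1 = i / steps
        u2 = 1.0 - u1
        if u1 >= d1 and u2 >= d2:        # individual rationality
            p = (u1 - d1) * (u2 - d2)
            if p > best:
                best, best_u1 = p, u1
    return best_u1, 1.0 - best_u1

# A symmetric disagreement point yields an equal split, as Nash's
# symmetry axiom predicts.
u1, u2 = nash_bargaining_linear(0.0, 0.0)
```

With an asymmetric disagreement point the solution shifts toward the player with the better outside option, which is the behavior the axioms force.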


21 October 2008

Gates 498, 4:00pm

Alex Slivkins (Microsoft Research)

Multi-Armed Bandits in Metric Spaces

(Joint work with Bobby Kleinberg and Eli Upfal, to appear in STOC'08)

In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions.

In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a complete solution for the multi-armed bandit problem in this setting. That is, for every metric space (L,X) we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.
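The talk's algorithm adapts its discretization to the metric; a much simpler baseline in the same spirit is to discretize the strategy space [0,1] uniformly and run UCB1 on the resulting finite arm set. The 1-Lipschitz payoff function and noise model below are illustrative assumptions, not from the talk.

```python
import math
import random

# Baseline for the Lipschitz MAB setting on [0,1]: uniform discretization
# into k arms, then standard UCB1 (empirical mean + confidence bonus).

def ucb1_on_grid(mu, k=10, horizon=2000, seed=0):
    rng = random.Random(seed)
    arms = [i / (k - 1) for i in range(k)]
    counts, sums = [0] * k, [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            i = t - 1                       # play each arm once
        else:
            i = max(range(k), key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2 * math.log(t) / counts[j]))
        reward = mu(arms[i]) + rng.uniform(-0.1, 0.1)   # noisy payoff
        counts[i] += 1
        sums[i] += reward
        total += reward
    return total / horizon

mu = lambda x: 1.0 - abs(x - 0.3)   # 1-Lipschitz, peak at x = 0.3
avg = ucb1_on_grid(mu)
```

The Lipschitz condition bounds how much payoff the discretization can lose; the talk's isometry invariant characterizes exactly how much better an adaptive discretization can do.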


11 November 2008

Gates 498, 4:00pm

Daniel Golovin (Carnegie Mellon)

Uniquely Represented Data Structures with Applications to Privacy

In a typical system, inspecting the memory representations of internal data structures may leave significant clues to the past use of the system. For example, a data structure with lazy deletions might store information (possibly in a very subtle manner) that the user believes was deleted long ago. This is problematic in environments requiring high security or strict privacy guarantees. This problem can be addressed by using uniquely represented data structures; these data structures have a unique physical state encoding each logical state of their abstract data type, and thus store exactly the information specified by the abstract data type and nothing more.

In this talk I will discuss new efficient uniquely represented versions of popular data structures (including hash tables, binary search trees, and queues), how these data structures work, and how they can be used to improve the privacy of real-world applications.
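One classic example of unique representation (not necessarily one of the constructions from the talk) is a treap whose priorities are derived deterministically from the keys by hashing: the tree's shape then depends only on the set of keys stored, never on the order of insertions, so an inspector learns nothing about operation history.

```python
import hashlib

# A treap with priorities derived from the keys themselves: for a fixed
# key set there is exactly one possible tree, i.e. the representation
# is history independent.

def priority(key):
    return int.from_bytes(hashlib.sha256(str(key).encode()).digest()[:8], "big")

class Node:
    def __init__(self, key):
        self.key, self.pri = key, priority(key)
        self.left = self.right = None

def insert(root, key):
    if root is None:
        return Node(key)
    if key == root.key:
        return root
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.pri > root.pri:          # rotate right
            l, root.left = root.left, root.left.right
            l.right = root
            return l
    else:
        root.right = insert(root.right, key)
        if root.right.pri > root.pri:         # rotate left
            r, root.right = root.right, root.right.left
            r.left = root
            return r
    return root

def shape(root):
    return None if root is None else (root.key, shape(root.left), shape(root.right))

# Two different insertion orders yield the identical physical tree.
t1 = t2 = None
for k in [3, 1, 4, 1, 5, 9, 2, 6]:
    t1 = insert(t1, k)
for k in [6, 2, 9, 5, 4, 1, 3]:
    t2 = insert(t2, k)
```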

Short bio:

Daniel Golovin is currently a postdoctoral fellow at the Center for Computational Thinking at Carnegie Mellon University. His research interests include using novel data structures to improve privacy, approximation and online algorithms, and provably good ways of dealing with uncertainty. He received a B.S. from Cornell University in 2003, an M.S. from Carnegie Mellon University in 2006, and a Ph.D. from Carnegie Mellon University in 2008 under the supervision of Guy Blelloch.


18 November 2008

Gates 498, 3:30pm

Luca Trevisan (Berkeley)

Max Cut and the Smallest Eigenvalue

We describe a new approximation algorithm for Max Cut. Our algorithm runs in nearly quadratic time, and achieves an approximation ratio of .531.

On instances in which an optimal solution cuts a $1-\epsilon$ fraction of edges, our algorithm finds a solution that cuts a $1-O(\epsilon^{1/2})-o(1)$ fraction of edges. This is best possible up to the constant in the big-Oh notation, assuming the unique games conjecture.

On instances in which an optimal solution cuts a $1/2 + \epsilon$ fraction of edges, a related algorithm cuts a $1/2 + \exp(-\Omega(1/\epsilon))$ fraction of edges. (Algorithms based on semidefinite programming are known to do considerably better.)

Our results rely on a variant of spectral partitioning, which can be implemented in nearly linear time. Our analysis establishes a relation between the smallest eigenvalue and a combinatorial quantity, similarly to how Cheeger's inequality establishes a relation between the second eigenvalue and edge expansion.

Our algorithm is the first one to achieve an approximation better than 1/2 for Max Cut by any means other than using a hyperplane to round the solution to a semidefinite program.

A full paper is available online at http://arxiv.org/abs/0806.1978
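The following toy sketch illustrates only the basic spectral idea, not the algorithm from the paper: an eigenvector of the smallest adjacency eigenvalue tends to take opposite signs across the edges of a large cut, so one can find it by power iteration on c*I - A and round it by sign. The example graph is an assumption for illustration.

```python
import random

# Sign-rounding the smallest adjacency eigenvector: power iteration on
# c*I - A (with c above the max degree) makes the smallest eigenvalue of
# A the dominant one of the shifted matrix.

def spectral_cut(n, edges, iters=500, seed=0):
    rng = random.Random(seed)
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    c = max(deg) + 1.0                    # shift so lambda_min dominates
    x = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(iters):
        y = [c * x[i] for i in range(n)]
        for u, v in edges:                # y = (c*I - A) x
            y[u] -= x[v]
            y[v] -= x[u]
        norm = max(abs(t) for t in y) or 1.0
        x = [t / norm for t in y]
    side = [t >= 0 for t in x]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut

# On an even cycle the smallest eigenvector alternates in sign around
# the cycle, so sign rounding recovers the full cut.
edges = [(i, (i + 1) % 6) for i in range(6)]
side, cut = spectral_cut(6, edges)
```

The actual algorithm is more delicate (it recurses and handles vertices whose eigenvector entries are small), which is what yields the stated approximation guarantees.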


15 January 2009

Packard 101, 4:15pm

Christos Papadimitriou (Berkeley)

The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences

Computational research transforms the sciences (physical, mathematical, life or social) not just by empowering them analytically, but mainly by providing a novel and powerful perspective which often leads to unforeseen insights. Examples abound: quantum computation provides the right forum for questioning and testing some of the most basic tenets of quantum physics, while statistical mechanics has found in the efficiency of randomized algorithms a powerful metaphor for phase transitions. In mathematics, the P vs. NP problem has joined the list of the most profound and consequential problems, and in economics considerations of computational complexity revise predictions of economic behavior and affect the design of economic mechanisms such as auctions. Finally, in biology some of the most fundamental problems, such as understanding the brain and evolution, can be productively recast in computational terms. My talk is structured around eight vignettes exemplifying this pattern.

27 January 2009

Gates 498, 4:00pm

Yaron Singer (Berkeley)

Limitations of Incentive Compatible Algorithms

The central focus in the study of algorithmic mechanism design is the tension between incentive compatibility and computational efficiency. We introduce a novel mechanism design problem that enables addressing this tension. We show that while the problem is easily solvable both from a purely computational viewpoint and from an economic perspective, no algorithm which is simultaneously incentive compatible and runs in polynomial time can obtain any reasonable approximation ratio.

We prove our result in both the communication- and computational-complexity models. In particular, we present a novel technique for proving computational lower bounds for the case in which the private information of the agents can be succinctly described. The proof highlights connections between algorithmic mechanism design, complexity theory, and combinatorics, and is one of the first impossibility results connecting mechanism design to computational-complexity theory. We will then show how such techniques enable taking first steps toward establishing the computational complexity of truthful algorithms for combinatorial auctions.

The talk is based on joint works with Elchanan Mossel, Christos Papadimitriou, and Michael Schapira.


17 February 2009

Gates 498, 4:00pm

Vahab Mirrokni (Google)

Submodular Maximization Applied to Marketing over Social Networks

Submodular maximization is a central problem in optimization with several applications. Unlike minimizing submodular functions, submodular maximization is NP-hard. In this talk, we first describe recent approximation algorithms for this problem, and then discuss an application in optimal marketing over social networks.

We give the first constant-factor approximation algorithms for maximizing non-negative submodular functions with multiple budget and matroid constraints. In particular, we give a randomized 2/5-approximation algorithm for maximizing non-negative submodular functions and a 1/5-approximation for maximizing submodular functions with multiple budget constraints. Furthermore, we prove that achieving an approximation factor better than 1/2 requires exponential time.
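As background (the talk's algorithms for non-monotone functions are more involved), the classic greedy algorithm for *monotone* submodular maximization under a cardinality constraint achieves a (1 - 1/e) approximation; max coverage is the standard example. The instance below is an illustrative assumption.

```python
# Greedy for monotone submodular maximization under a cardinality
# constraint, instantiated for max coverage: repeatedly pick the set
# with the largest marginal gain in covered elements.

def greedy_max_coverage(sets, k):
    """Pick up to k sets greedily to maximize the size of their union."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda s: len(set(s) - covered))
        if not set(best) - covered:
            break                          # no marginal gain left
        chosen.append(best)
        covered |= set(best)
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, 2)
```

For non-monotone functions this greedy can fail badly, which is why the talk's local-search-based algorithms (and their 2/5 and 1/5 guarantees) are needed.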

In the second half of the talk, I will discuss an application in marketing digital goods over social networks, and study revenue-maximizing strategies in the presence of positive externalities of buyers. In particular, we study a set of influence-and-exploit marketing strategies and show how to use submodular maximization to identify a set of influential buyers in a social network.

This talk is based on joint work with Feige and Vondrak (FOCS 2007), Hartline and Sundararajan (WWW 2008), and Lee, Nagarajan, Sviridenko (STOC 2009).



24 February 2009

Gates 459, 4:00pm

Dana Moshkovitz (Princeton)

Two Query PCP with Sub-Constant Error

We show that the NP-complete language 3SAT has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query.

Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size. There were also PCP Theorems with sub-constant error and almost-linear size, but a constant number of queries that is larger than 2.

As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem.

This is joint work with Ran Raz.


5 May 2009

Gates 463A, 4:00pm

Michael Schapira (Yale, Berkeley)

The Quest for Stability in Interdomain Routing

The Border Gateway Protocol (BGP) establishes routes between the Autonomous Systems (ASes) that make up the Internet -- a task known as "interdomain routing". BGP allows network administrators great flexibility in selecting routes, at the potential expense of persistent route oscillations -- a perennial concern with today's routing system. We present various possibility and impossibility results regarding BGP's convergence to a "stable" routing outcome. Our findings suggest that many restrictions on ASes' routing policies that are believed to be necessary for guaranteeing BGP convergence are, in fact, a byproduct of the worst-case model used to analyze BGP dynamics, and not inherent to real-life routing between ASes. In particular, we prove that, for general classes of routing policies that are notoriously unsafe in the conservative model, BGP convergence is guaranteed in a more realistic average-case model. Finally, we present a modest extension to BGP, namely "Neighbor-Specific BGP" (NS-BGP), that enables a much wider range of local ASes' routing policies without compromising global stability. We prove that this simple modification of BGP is safe to deploy partially/incrementally.
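The route oscillations at issue can be seen in a tiny worst-case instance: three ASes around a destination, each preferring the route through its clockwise neighbor over its direct route (the structure of the well-known "bad gadget"). The synchronous best-response simulation below is an illustrative assumption, not the talk's model.

```python
# Toy synchronous simulation of a bad-gadget-style instance: each node
# prefers the path through NEXT[node] when that path is loop-free, else
# falls back to its direct path. Best-response dynamics never stabilize.

DEST = 0
NODES = [1, 2, 3]
NEXT = {1: 2, 2: 3, 3: 1}   # cyclic preferences around the destination

def choose(node, paths):
    """Best response: route via the preferred neighbor if loop-free."""
    via = paths[NEXT[node]]
    if via and node not in via:
        return [node] + via
    return [node, DEST]

def run(rounds=12):
    paths = {n: [n, DEST] for n in NODES}   # start with direct paths
    history = []
    for _ in range(rounds):
        paths = {n: choose(n, paths) for n in NODES}   # synchronous step
        history.append(tuple(tuple(paths[n]) for n in NODES))
    return history

history = run()
```

Here the system cycles with period three and never repeats the same state twice in a row; the talk's point is that such behavior is largely an artifact of worst-case activation models, and is far rarer under average-case dynamics.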

The talk is self-contained. Based on joint work with Roee Engelberg, Jennifer Rexford, Rahul Sami, Yi Wang and Aviv Zohar.


14 July 2009

Gates 498, 4:00pm

Michael Mitzenmacher (Harvard)

Some Open Problems in Cuckoo Hashing

In this talk, we survey recent work in the area of cuckoo hashing, including a clear description of several open problems, with the hope of spurring further research.
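As background for the open problems, here is a minimal sketch of standard two-table cuckoo hashing: each key has one candidate bucket per table, and inserting into an occupied bucket evicts the resident, which is then reinserted at its other location. The modular hash functions below are toy choices for illustration, not what one would use in practice.

```python
import random

# Two-table cuckoo hashing with eviction chains. A real implementation
# would rehash with fresh hash functions when max_kicks is exceeded;
# bounding the probability of that event is central to the analysis.

class CuckooTable:
    def __init__(self, size=11, max_kicks=50, seed=0):
        self.size, self.max_kicks = size, max_kicks
        rng = random.Random(seed)
        self.a1, self.a2 = rng.randrange(1, size), rng.randrange(1, size)
        self.t1, self.t2 = [None] * size, [None] * size

    def h1(self, key):
        return (self.a1 * key) % self.size

    def h2(self, key):
        return (self.a2 * key + 7) % self.size

    def insert(self, key):
        for _ in range(self.max_kicks):
            i = self.h1(key)
            key, self.t1[i] = self.t1[i], key     # place in table 1, evict
            if key is None:
                return True
            j = self.h2(key)
            key, self.t2[j] = self.t2[j], key     # place in table 2, evict
            if key is None:
                return True
        return False          # possible cycle: a real table would rehash

    def lookup(self, key):
        return self.t1[self.h1(key)] == key or self.t2[self.h2(key)] == key

table = CuckooTable()
ok = all(table.insert(k) for k in [10, 20, 30, 40, 50])
```

Lookups probe exactly two locations in the worst case; the open problems concern, among other things, tight bounds on insertion behavior and failure probabilities for variants of this scheme.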