We show that a certain class of Nash bargaining games can be transformed into a market model whose equilibrium yields the solution to the original bargaining game. This market model is a natural variant of a traditional market model from mathematical economics, dating back to 1891.
We then extend techniques developed within theoretical computer science over the last seven years to give a combinatorial, polynomial time algorithm for computing an equilibrium for this market model.
Over the years, a fascinating theory has started forming around a convex program given by Eisenberg and Gale in 1959. Besides market equilibria, this theory touches on such disparate themes as TCP congestion control and efficient solvability of nonlinear programs by combinatorial means. Our work shows that the Nash bargaining problem fits harmoniously in this collage of ideas.
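For concreteness, here is a sketch of the Eisenberg-Gale convex program for the linear case of Fisher's 1891 market (the notation is ours: buyer $i$ has budget $m_i$ and utility $u_{ij}$ per unit of good $j$, with one unit of each good available):
$$\max \; \sum_i m_i \log u_i \quad \text{subject to} \quad u_i = \sum_j u_{ij} x_{ij} \;\; \forall i, \qquad \sum_i x_{ij} \le 1 \;\; \forall j, \qquad x_{ij} \ge 0.$$
Its optimal solutions are precisely the equilibrium allocations of this market, and the equilibrium prices emerge as the Lagrange duals of the goods' capacity constraints.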
In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions.
In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a complete solution for the multi-armed bandit problem in this setting. That is, for every metric space (L,X) we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.
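To make the setting concrete, here is a minimal toy sketch (our illustration, not the algorithm from the talk) for strategies in [0,1] with a 1-Lipschitz expected payoff: discretize the space into a uniform grid and run the standard UCB1 index over the grid points.

```python
import math
import random

def lipschitz_ucb(payoff, horizon, n_arms=None):
    """Toy Lipschitz bandit on the strategy space [0,1]: discretize into
    a uniform grid and run UCB1 on the grid points.  With roughly
    (T / log T)^(1/3) arms this gives the classical O(T^(2/3)) regret
    for 1-Lipschitz payoffs; it is not the adaptive algorithm from the
    talk, which exploits the structure of the metric space."""
    T = horizon
    K = n_arms or max(2, int((T / math.log(T)) ** (1.0 / 3)))
    arms = [k / (K - 1) for k in range(K)]
    counts = [0] * K
    means = [0.0] * K
    total = 0.0
    for t in range(1, T + 1):
        if t <= K:                      # play each arm once to initialize
            i = t - 1
        else:                           # UCB1 index: mean + exploration bonus
            i = max(range(K),
                    key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j]))
        r = payoff(arms[i])             # stochastic reward in [0,1]
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]
        total += r
    return total

# Example: noisy 1-Lipschitz payoff peaked at x = 0.3 (our toy instance).
reward = lambda x: min(max(1 - abs(x - 0.3) + random.uniform(-0.1, 0.1), 0), 1)
print(lipschitz_ucb(reward, horizon=10000))
```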
In this talk I will discuss new efficient uniquely represented versions of popular data structures (including hash tables, binary search trees, and queues), how these data structures work, and how they can be used to improve the privacy of real-world applications.
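To illustrate what unique representation means, here is a sketch of one standard technique (the names are ours, and the talk's data structures may differ): a treap whose priorities are a deterministic hash of the keys has a shape that depends only on the set of keys, never on the insertion history, so the in-memory structure leaks nothing about the order of past operations.

```python
import hashlib

class Node:
    __slots__ = ("key", "prio", "left", "right")
    def __init__(self, key):
        self.key = key
        # Priority is a deterministic hash of the key, so the treap's
        # shape is a function of the key set alone (history independence).
        self.prio = int.from_bytes(
            hashlib.sha256(repr(key).encode()).digest()[:8], "big")
        self.left = self.right = None

def insert(root, key):
    """Standard treap insertion by key, rotating to restore the heap
    property on the hash-derived priorities."""
    if root is None:
        return Node(key)
    if key == root.key:
        return root
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.prio > root.prio:       # rotate right
            l, root.left = root.left, root.left.right
            l.right = root
            return l
    else:
        root.right = insert(root.right, key)
        if root.right.prio > root.prio:      # rotate left
            r, root.right = root.right, root.right.left
            r.left = root
            return r
    return root

# Any insertion order of the same keys yields the identical tree.
```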
Short bio:
Daniel Golovin is currently a postdoctoral fellow at the Center for Computational Thinking at Carnegie Mellon University. His research interests include using novel data structures to improve privacy, approximation and online algorithms, and provably good ways of dealing with uncertainty. He received a B.S. from Cornell University in 2003, an M.S. from Carnegie Mellon University in 2006, and a Ph.D. from Carnegie Mellon University in 2008 under the supervision of Guy Blelloch.
On instances in which an optimal solution cuts a $1-\epsilon$ fraction of edges, our algorithm finds a solution that cuts a $1-O(\epsilon^{1/2})-o(1)$ fraction of edges. This is best possible up to the constant in the big-Oh notation, assuming the unique games conjecture.
On instances in which an optimal solution cuts a $1/2 + \epsilon$ fraction of edges, a related algorithm cuts a $1/2 + \exp(-\Omega(1/\epsilon))$ fraction of edges. (Algorithms based on semidefinite programming are known to do considerably better.)
Our results rely on a variant of spectral partitioning, which can be implemented in nearly linear time. Our analysis establishes a relation between the smallest eigenvalue and a combinatorial quantity, just as Cheeger's inequality establishes a relation between the second eigenvalue and edge expansion.
Our algorithm is the first one to achieve an approximation better than 1/2 for Max Cut by any means other than using a hyperplane to round the solution to a semidefinite program.
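Here is a minimal sketch of the spectral idea (our simplification; the paper's algorithm uses a more careful sweep rounding and recursion to obtain its guarantee): take an eigenvector for the smallest eigenvalue of the normalized adjacency matrix and split the vertices by sign.

```python
import numpy as np

def spectral_max_cut(adj):
    """Toy spectral rounding for Max Cut on an unweighted graph given by
    a symmetric 0/1 adjacency matrix: compute an eigenvector of the
    smallest eigenvalue of D^{-1/2} A D^{-1/2} and partition vertices
    by the sign of their coordinates."""
    A = np.asarray(adj, dtype=float)
    d = A.sum(axis=1)
    d[d == 0] = 1.0                        # guard isolated vertices
    Dinv = np.diag(1.0 / np.sqrt(d))
    M = Dinv @ A @ Dinv                    # normalized adjacency matrix
    vals, vecs = np.linalg.eigh(M)         # eigenvalues in ascending order
    x = vecs[:, 0]                         # bottom eigenvector
    side = x >= 0
    cut = sum(A[i, j] for i in range(len(A)) for j in range(i + 1, len(A))
              if side[i] != side[j])
    return side, cut

# Example: a 4-cycle, whose maximum cut contains all 4 edges.
C4 = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]
print(spectral_max_cut(C4))
```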
A full paper is available online at http://arxiv.org/abs/0806.1978
We prove our result in both the communication- and computational-complexity models. In particular, we present a novel technique for proving computational lower bounds for the case in which the private information of the agents can be succinctly described. The proof highlights connections between AMD, complexity theory, and combinatorics, and is one of the first impossibility results connecting mechanism design to computational-complexity theory. We will then show how such techniques enable first steps toward understanding the computational complexity of truthful algorithms for combinatorial auctions.
The talk is based on joint works with Elchanan Mossel, Christos Papadimitriou, and Michael Schapira.
We give the first constant-factor approximation algorithms for maximizing non-negative submodular functions with multiple budget and matroid constraints. In particular, we give a randomized 2/5-approximation algorithm for maximizing non-negative submodular functions in the unconstrained setting and a 1/5-approximation for maximizing submodular functions with multiple budget constraints. Furthermore, we prove that achieving an approximation factor better than 1/2 requires exponentially many queries in the value oracle model.
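To convey the flavor of such algorithms, here is a simplified deterministic local-search sketch for the unconstrained case (our illustration; the randomized 2/5 algorithm refines this with a smoothing step): repeatedly apply improving single-element additions or deletions, then return the better of the local optimum and its complement.

```python
def local_search_submodular(f, ground):
    """Simplified local search for unconstrained non-negative submodular
    maximization: greedily apply improving single-element moves, then
    return the better of the local optimum and its complement."""
    ground = set(ground)
    S = {max(ground, key=lambda e: f({e}))}      # best singleton start
    improved = True
    while improved:
        improved = False
        for e in ground:
            T = S - {e} if e in S else S | {e}
            if f(T) > f(S):                      # improving add/delete move
                S, improved = T, True
                break
    comp = ground - S
    return S if f(S) >= f(comp) else comp

# Example: the cut function of a 4-cycle (cut functions are submodular).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
print(local_search_submodular(cut, range(4)))
```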
In the second half of the talk, I will discuss an application in marketing digital goods over social networks, and study revenue-maximizing strategies in the presence of positive externalities of buyers. In particular, we study a set of influence-and-exploit marketing strategies and show how to use submodular maximization to identify a set of influential buyers in a social network.
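For the influential-buyers step, the natural workhorse is the classical greedy algorithm for monotone submodular maximization under a cardinality constraint, which achieves a (1 - 1/e) approximation. Below is a sketch; the one-hop "reach" influence function is an illustrative assumption of ours, not the paper's model.

```python
def greedy_influencers(f, candidates, k):
    """Greedy selection of k influential buyers: repeatedly add the
    element with the largest marginal gain.  For monotone submodular f
    this is the classical (1 - 1/e)-approximation."""
    S = set()
    for _ in range(k):
        e = max((c for c in candidates if c not in S),
                key=lambda c: f(S | {c}) - f(S))
        S.add(e)
    return S

# Illustrative influence function (our assumption): number of network
# nodes within one hop of the chosen seed set.
graph = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1, 4}, 4: {3}}
reach = lambda S: len(S | {v for u in S for v in graph[u]})
print(greedy_influencers(reach, list(graph), 2))
```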
This talk is based on joint work with Feige and Vondrak (FOCS 2007), Hartline and Sundararajan (WWW 2008), and Lee, Nagarajan, Sviridenko (STOC 2009).
Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size. There were also PCP Theorems with sub-constant error and almost-linear size, but a constant number of queries that is larger than 2.
As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness-of-approximation results that are proved using the parallel repetition theorem.
This is joint work with Ran Raz.
The talk is self-contained. Based on joint work with Roee Engelberg, Jennifer Rexford, Rahul Sami, Yi Wang and Aviv Zohar.