Research Overview

Wondering what my research is about, but not sure where to start? Too busy to sift through the details of my papers? This page outlines some of the major themes in my research over the past 15+ years and provides links to the best entry points for learning more (books, surveys, videos, and papers). Click on a topic to explore further. (Last updated Fall 2016.)

Algorithmic Game Theory (General)

Computer science and economics have had a productive relationship over the past 15 years, resulting in a new field called algorithmic game theory or alternatively economics and computation. Many problems central to modern computer science, ranging from resource allocation in large networks to online advertising, fundamentally involve interactions between multiple self-interested parties. Economics and game theory offer a host of useful models and definitions to reason about such problems. The flow of ideas also travels in the other direction, with approximation and complexity notions playing an increasingly important role in the latest developments in economic theory. My textbook and accompanying lecture videos offer an accessible introduction to the subject, with case studies on online advertising, wireless spectrum auctions, kidney exchange, and network management.

For a "reader's digest" version, see For a one-hour general-audience talk, see The following collection is older and targeted more to researchers than to students, but is still useful for several topics.
  • (co-edited with Noam Nisan, Eva Tardos, and Vijay Vazirani) Algorithmic Game Theory, Cambridge University Press, 2007. Read the entire book online by clicking here (look under the "Resources" tab).

The Price of Anarchy

From game theory 101, we know that the outcome of self-interested behavior can be bad for everybody. Two famous examples are the mutual betrayal in the Prisoner's dilemma and the overconsumption of a shared resource in the tragedy of the commons. One branch of algorithmic game theory studies the price of anarchy (POA), which quantifies the harm caused by self-interested behavior. This measure was first proposed by Koutsoupias and Papadimitriou, and it compares two things. The first is an outcome of selfish behavior (formally, a Nash equilibrium). The second is the best outcome, such as the one that maximizes the sum of everybody's payoffs. The POA measures how much worse the first is compared to the second, and the closer the POA of a game is to 1, the better its equilibria approximate what could be achieved with centralized coordination of all of the players' actions. The "Twenty Lectures" book above has several chapters on the price of anarchy, and the following older surveys are still relevant:

For a one-hour general-audience talk, see:
For a two-hour tutorial on the mathematics behind price-of-anarchy bounds, see:
  • Near-Optimal Equilibria (videos), Part 1 and Part 2, Economics and Computation Boot Camp, Simons Institute (2015). Slides
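
To make the definition concrete, here is a tiny self-contained sketch in Python (my own illustration, with made-up payoff numbers, not an example from any of the references above). It brute-forces the pure Nash equilibria of a Prisoner's Dilemma written in cost form and reports the price of anarchy as the ratio between the worst equilibrium's total cost and the minimum-possible total cost.

    from itertools import product

    # Prisoner's Dilemma in cost form (years in prison); the numbers are illustrative.
    # Actions: 0 = cooperate (stay silent), 1 = defect (betray the other).
    costs = {
        (0, 0): (1, 1),
        (0, 1): (3, 0),
        (1, 0): (0, 3),
        (1, 1): (2, 2),
    }

    def is_pure_nash(profile):
        # No player can strictly decrease her own cost by a unilateral deviation.
        for player in (0, 1):
            for deviation in (0, 1):
                alternative = list(profile)
                alternative[player] = deviation
                if costs[tuple(alternative)][player] < costs[profile][player]:
                    return False
        return True

    equilibria = [p for p in product((0, 1), repeat=2) if is_pure_nash(p)]
    worst_equilibrium_cost = max(sum(costs[p]) for p in equilibria)   # mutual defection: cost 4
    best_possible_cost = min(sum(costs[p]) for p in costs)            # mutual cooperation: cost 2
    print("price of anarchy =", worst_equilibrium_cost / best_possible_cost)   # 2.0

Here the unique equilibrium (mutual defection) costs twice as much as mutual cooperation, so the POA of this game is 2.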

The POA of Selfish Routing

Not long after the price of anarchy was first defined, Eva Tardos and I studied it in a classical model of "selfish routing," where each user of a network chooses its path to minimize its own delay. (Think drivers on a highway, or packets in a data network, or even current in an electrical network. If you're familiar with Braess's paradox, then you already know this model.) How much worse, in terms of average commute times, is selfish routing than a centrally optimal solution? The following paper has two main results: (i) when the delay of each network edge is a linear function of the amount of traffic on it, then the answer is 33%; and (ii) even with arbitrary delay functions, throwing hardware at the problem (i.e., selfish routing after a modest increase in network capacity) outperforms centrally optimal routing.
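
For a feel for where the 33% figure comes from, here is a small numeric sketch (my own illustration using the standard Pigou-style two-link example, not code accompanying the paper): one unit of traffic chooses between a link with constant delay 1 and a link whose delay equals its load. Selfish users all pile onto the second link, while the optimal solution splits the traffic, and the gap is exactly a factor of 4/3.

    # Pigou-style example: one unit of traffic, two parallel links from source to sink.
    # Link A has constant delay 1; link B has delay x when it carries x units of traffic.

    def average_delay(x_on_b):
        # x_on_b units take link B and experience delay x_on_b; the rest take link A.
        return (1 - x_on_b) * 1 + x_on_b * x_on_b

    # Equilibrium: link B is never slower than link A, so all selfish traffic takes B.
    equilibrium_cost = average_delay(1.0)                              # 1.0

    # Optimum: minimize the average delay over all possible splits (a grid search suffices).
    optimal_cost = min(average_delay(x / 1000) for x in range(1001))   # 0.75, at a 50/50 split

    print("price of anarchy =", equilibrium_cost / optimal_cost)       # 1.333... = 4/3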

The following paper generalizes the 33% result to nonlinear delay functions (where the POA grows with the "degree of nonlinearity") by showing that simple (two-node, two-link) networks always realize the worst-possible POA.

Here's a book I wrote on this topic:
And a "reader's digest" version:
And a survey of routing games more generally:

More on Braess's Paradox and Routing Games

I was pretty obsessed with routing and other network games during the first half of the oughts (e.g., the book above). Some other highlights of my work on this topic include the first generalization of Braess's paradox to larger networks (first with one origin and destination, and then with many):

A proof that Braess's paradox is ubiquitous in random networks:
And several strategies for improving the POA in various network games, including cost-sharing, tolls, and partially centralized routing:
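
To make Braess's paradox itself concrete, here is a self-contained sketch of the textbook four-node example (my own illustration, not code from any of the papers above). One unit of traffic travels from s to t; each of the two two-hop paths combines a load-dependent link (delay equal to its load) with a constant-delay link (delay 1). Adding a free shortcut that chains the two load-dependent links raises everyone's equilibrium travel time from 1.5 to 2.

    # Classic Braess network: one unit of traffic from s to t.
    # Edges s->v and w->t have delay equal to their load; v->t and s->w have constant delay 1;
    # the optional shortcut v->w has delay 0.
    # Paths: 0 = s->v->t, 1 = s->w->t, 2 = s->v->w->t (only available with the shortcut).

    def path_delays(f0, f1, f2):
        load_sv = f0 + f2                 # paths 0 and 2 share edge s->v
        load_wt = f1 + f2                 # paths 1 and 2 share edge w->t
        return (load_sv + 1, 1 + load_wt, load_sv + load_wt)

    def is_wardrop_equilibrium(flows, allowed, tol=1e-9):
        delays = path_delays(*flows)
        best = min(delays[i] for i in allowed)
        # Every path that carries traffic must have the minimum delay among available paths.
        return all(delays[i] <= best + tol for i in allowed if flows[i] > tol)

    def find_equilibrium(allowed, steps=100):
        # Grid search over the ways to split one unit of traffic across the three paths.
        for a in range(steps + 1):
            for b in range(steps + 1 - a):
                flows = (a / steps, b / steps, (steps - a - b) / steps)
                if all(flows[i] == 0 or i in allowed for i in range(3)):
                    if is_wardrop_equilibrium(flows, allowed):
                        delay = max(path_delays(*flows)[i] for i in allowed if flows[i] > 0)
                        return flows, delay
        return None

    print("without shortcut:", find_equilibrium(allowed={0, 1}))       # 50/50 split, delay 1.5
    print("with shortcut:   ", find_equilibrium(allowed={0, 1, 2}))    # all on the shortcut, delay 2.0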

Smooth Games and Robust POA Bounds

A price-of-anarchy bound applies when players of a game have successfully reached an equilibrium. But what if they haven't, for example because of the apparent intractability of computing a Nash equilibrium? In the second half of the oughts, there was a lot of nice work proving price-of-anarchy bounds that apply even when players have not reached a Nash equilibrium (including Mirrokni-Vetta on best-response dynamics, Christodoulou-Koutsoupias on correlated equilibria, and Blum-Hajiaghayi-Ligett-Roth on no-regret learners). The following paper gives a general theory of such "robust" price-of-anarchy bounds. It defines "smooth games," notes that many of the games studied in computer science and related fields are smooth, and proves that price-of-anarchy bounds for smooth games are automatically robust in several senses.
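
For readers who want the key definition without opening the paper, here is the standard statement for cost-minimization games with total cost C(s) = sum_i C_i(s) (sketched in LaTeX; see the paper or the videos below for the precise setup and the senses of robustness):

    % A game is (\lambda, \mu)-smooth if, for every pair of outcomes s and s*,
    \sum_i C_i(s^*_i, s_{-i}) \;\le\; \lambda \cdot C(s^*) + \mu \cdot C(s).

    % For a (\lambda, \mu)-smooth game with \mu < 1, every "robust" equilibrium concept
    % (pure, mixed, correlated, and coarse correlated equilibria, and hence no-regret play)
    % has price of anarchy at most
    \mathrm{POA} \;\le\; \frac{\lambda}{1 - \mu}.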

See also these eight-minute and hour-long videos that explain this theory.

Several extensions of the basic theory of smooth games have appeared over the past few years, such as the Syrgkanis-Tardos theory of smooth mechanisms. I've worked on extensions to incomplete-information games and large games (with many "small" players), and also "local" notions of smoothness.

The POA of Simple Auctions

Early work in computer science on auction design focused on auctions where every bidder has a foolproof strategy (like in a second-price auction). As impossibility results for such auctions mounted, attention naturally shifted to proving price-of-anarchy bounds for the equilibria of simple but strategically non-trivial auctions (beginning with Christodoulou-Kovacs-Schapira), such as simultaneous single-item auctions. The theory of smooth games and its extensions (see previous section) turns out to be well suited for this goal. Here is a survey of the state-of-the-art as of 2016:

The boot camp lectures mentioned above (Part 1 and Part 2) are also mostly about this topic.
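
As background for these results, recall the "foolproof strategy" mentioned at the top of this section: in a second-price auction, bidding your true value is a dominant strategy. The following sketch (an illustration of this textbook fact, not of any particular paper) verifies it by brute force over a discretized grid of values and bids.

    # Second-price (Vickrey) auction: the highest bid wins and pays the second-highest bid.
    # Utility of a bidder with value v who bids b when the highest competing bid is c
    # (ties are broken against our bidder, purely for simplicity).
    def utility(v, b, c):
        return v - c if b > c else 0.0

    grid = [i / 10 for i in range(11)]   # values, bids, and competing bids in {0.0, 0.1, ..., 1.0}

    truthful_is_dominant = all(
        utility(v, v, c) >= utility(v, b, c)
        for v in grid for b in grid for c in grid
    )
    print(truthful_is_dominant)   # True: no alternative bid ever beats bidding your value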

A representative positive result (later improved by Feldman-Fu-Gravin-Lucier) is:

The following paper uses communication complexity to prove limits on the best-possible POA of any simple auction. For example, it makes precise the folklore rule of thumb that "simple multi-item auctions cannot perform well when there are synergies between items."

Auction and Mechanism Design

A mechanism is a procedure for making a decision when agents have private preferences, and an auction is a mechanism specifically for the exchange of goods and money. Economists have been thinking hard about auctions and mechanisms for over a half-century. What could computer science possibly bring to the table? Several things: measures of approximation; alternatives to Bayesian analysis; techniques for learning from data; and a focus on computational tractability.

Two surveys on different facets of this area are:

Robust Revenue-Maximization

What selling procedure should a seller use to maximize her revenue? This is a hard question because the best auction to run --- like choosing the right opening bid in an eBay auction or the right reserve price in a sponsored search auction --- depends on what bidders are willing to pay. The traditional approach in economics, exemplified by Myerson's optimal auction theory, is to maximize the seller's expected revenue with respect to a known distribution over what bidders are willing to pay.
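
As a toy instance of this Bayesian approach (my own illustration of the textbook single-bidder case, with a uniform distribution assumed purely for concreteness): with one bidder whose value is uniform on [0, 1], a posted price p earns p(1 - p) in expectation, which is maximized at p = 1/2.

    # One bidder with value drawn uniformly from [0, 1]: a posted price p sells
    # with probability 1 - p, so the expected revenue is p * (1 - p).
    def expected_revenue(p):
        return p * (1 - p)

    # Grid search for the revenue-maximizing posted price.
    best_price = max((i / 1000 for i in range(1001)), key=expected_revenue)
    print(best_price, expected_revenue(best_price))   # 0.5 0.25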

In many cases, the theoretically optimal auction is too complex for direct use (especially when bidders are not symmetric). The following paper identifies many scenarios where simple and directly usable auctions do almost as well as theoretically optimal auctions.
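
A classical illustration of the simple-versus-optimal theme is the Bulow-Klemperer theorem (a much older result than the paper above, included here only for intuition): with bidder values drawn i.i.d. from a "regular" distribution, a plain second-price auction with one extra bidder earns at least as much as the optimal, distribution-tailored auction. The Monte Carlo sketch below checks this for uniform values; the simulation setup is my own.

    import random

    random.seed(0)
    TRIALS = 200_000

    # Optimal auction for one bidder with value ~ Uniform[0, 1]: post the price 1/2
    # (expected revenue 1/4).
    optimal_with_one_bidder = sum(
        0.5 if random.random() >= 0.5 else 0.0 for _ in range(TRIALS)
    ) / TRIALS

    # Prior-independent alternative: second-price auction with NO reserve but one extra
    # bidder; the revenue is the smaller of the two values (expected value 1/3).
    second_price_with_two_bidders = sum(
        min(random.random(), random.random()) for _ in range(TRIALS)
    ) / TRIALS

    print(round(optimal_with_one_bidder, 3))        # ~ 0.25
    print(round(second_price_with_two_bidders, 3))  # ~ 0.333, without knowing the distribution

Note that the two-bidder auction never looks at the underlying distribution, which is exactly the kind of prior-independence discussed next.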

But what if the distribution over what bidders are willing to pay is unknown or changing rapidly? Could there be prior-independent auctions, which are simultaneously near-optimal for a wide range of distributions? The following paper gives an affirmative answer.

The following two papers give prior-independent auctions for "multi-parameter" settings, like auctions with multiple distinct items, and for bidders with interdependent preferences.

In the theory of prior-free auctions, pioneered by Goldberg-Hartline-Karlin-Saks-Wright, the goal is to dispense with distributional assumptions entirely and prove revenue guarantees that hold no matter what bidders want (as opposed to on average over what bidders want). The following paper shows how to automatically generate meaningful revenue benchmarks for such worst-case guarantees.

For an example of this framework in action, see:
Here is an overview talk on robust revenue-maximization:

Data-Driven Auction Design

As discussed in the previous section, the traditional economic approach to revenue-maximizing auction design posits a known prior distribution over what bidders are willing to pay, and then solves for the auction that maximizes the seller's expected revenue with respect to this distribution. But where does this distribution come from? One obvious answer is from data, in the form of previous bids by comparable bidders in auctions for comparable items. (Ostrovsky-Schwarz applied this idea to great effect at Yahoo! in 2008.) How much data is necessary and sufficient to determine what auction you should run? The following paper answers this question for single-item auctions with asymmetric bidders.
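
To see what learning an auction from data means in the simplest case, a single buyer and a posted price, here is a small sketch of my own (not the algorithm from any of these papers): pick the price that maximizes revenue on a set of sampled values, then evaluate it against the price that is optimal for the true (here, uniform) distribution.

    import random

    random.seed(1)

    def empirical_revenue(samples, price):
        # A posted price earns `price` from every sampled buyer whose value is at least that price.
        return sum(price for v in samples if v >= price) / len(samples)

    # The seller sees only these samples; the underlying distribution (uniform on [0, 1])
    # is used solely to generate them.
    samples = [random.random() for _ in range(50)]

    # Empirical revenue maximization: the best posted price on the samples is always one of them.
    learned_price = max(samples, key=lambda p: empirical_revenue(samples, p))

    # For Uniform[0, 1], the true expected revenue of a price p is p * (1 - p), maximized at p = 1/2.
    print("learned price:", round(learned_price, 3),
          "-> true expected revenue:", round(learned_price * (1 - learned_price), 3))
    print("optimal price: 0.5 -> true expected revenue: 0.25")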

Two follow-up papers prove optimal bounds for learning the best selling price for a single buyer, and show how to learn an optimal auction when the willingness to pay of each bidder is drawn from an "irregular" distribution (like a bimodal distribution).

There is a clear analogy between learning a good auction from data and one of the most central problems in machine learning: learning a good classifier from labeled data. This connection is made explicit in the following works, which also show that the optimal auction from any "simple" class (such as reserve-price-based auctions) can be learned from a reasonable amount of data. The first paper considers "single-parameter" auction settings, and the second the more complicated case of "multi-parameter" settings.

A largely open direction is to understand the computational complexity of learning a near-optimal auction from data. (The papers above focus primarily on the amount of information needed, rather than on computational efficiency.) The following paper is a first step in this direction:

Algorithmic Mechanism Design

In settings with monetary payments (like auctions), a well-known solution from economics (the VCG mechanism) can be used, in principle, to determine the best thing to do even when you don't know participants' preferences a priori. (The Vickrey auction is a special case.) The only problem is that, in many scenarios including multi-item auctions, implementing this mechanism requires solving computationally intractable problems. The field of algorithmic mechanism design studies when the promise of the VCG mechanism can be preserved (at least approximately) while using a reasonable amount of computation. The best-case scenario is when computationally efficient approximation algorithms (which take preferences as given) can be translated in a generic way to equally good incentive-compatible mechanisms (which need to elicit preferences from the participants), as in the following paper.
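
To make the computational bottleneck concrete, here is a minimal brute-force VCG sketch for a two-item auction (the valuations are made up for illustration). It enumerates every way of assigning the items to find the welfare-maximizing allocation, and then charges each bidder the externality she imposes on the others; the exhaustive search in the first step is exactly what becomes intractable as the number of items grows.

    from itertools import product

    items = ["A", "B"]
    # Each bidder's value for any bundle of items (made-up numbers for illustration).
    valuations = {
        "alice": lambda bundle: 10 if set(bundle) == {"A", "B"} else 0,   # wants both items
        "bob":   lambda bundle: 6 if "A" in bundle else 0,
        "carol": lambda bundle: 5 if "B" in bundle else 0,
    }

    def best_allocation(bidders):
        # Try every assignment of items to bidders: exponentially many as the item count grows.
        best_bundles, best_welfare = None, -1.0
        for assignment in product(bidders, repeat=len(items)):
            bundles = {b: tuple(i for i, owner in zip(items, assignment) if owner == b)
                       for b in bidders}
            welfare = sum(valuations[b](bundles[b]) for b in bidders)
            if welfare > best_welfare:
                best_bundles, best_welfare = bundles, welfare
        return best_bundles, best_welfare

    everyone = list(valuations)
    allocation, welfare = best_allocation(everyone)

    # VCG payment of bidder i: the best welfare the others could achieve without i,
    # minus the welfare the others actually get in the chosen allocation.
    for i in everyone:
        _, welfare_without_i = best_allocation([b for b in everyone if b != i])
        welfare_of_others = welfare - valuations[i](allocation[i])
        print(i, "wins", allocation[i], "and pays", welfare_without_i - welfare_of_others)

In this instance bob and carol win the items and pay 5 and 4, respectively (their externalities on the others), while alice wins nothing and pays nothing.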

Most multi-item auctions require customized solutions, as in the following two papers. The first considers items that are "substitutes" and designs a best-possible auction for this case; the second designs auctions for items that are "complements."

Sometimes an approximation algorithm leads to a mechanism with incentive properties superior to those of the VCG mechanism. This is the case for deferred-acceptance auctions, where ascending/descending clock auctions are closely connected to greedy heuristics. The following paper explores the power and limitations of such heuristics.

Cost-Sharing Mechanisms

Another well-known drawback of the VCG mechanism (see preceding section) is that, in settings where there is an outcome-dependent cost, its revenue need not cover the cost incurred. There are "budget-balanced" alternatives, where the cost incurred is shared among the winners in the chosen outcome, but such mechanisms compromise on the quality of the outcome selected. The impossibility of maximizing social welfare while balancing the budget motivates using approximation measures to quantify the feasible trade-offs between the two objectives. This is the subject of the following paper.

For several follow-up results, see Mukund's PhD thesis.

The following paper introduces acyclic mechanisms, which are more flexible than the Moulin mechanisms studied in most of the cost-sharing literature, and shows that many primal-dual algorithms naturally lead to good acyclic mechanisms.
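
For readers unfamiliar with the Moulin mechanisms referenced above, here is a minimal sketch of the simplest one (equal splitting of a fixed cost; my own textbook-style illustration, not taken from any of these papers): offer each remaining player an equal share of the cost, drop anyone whose reported willingness to pay is below their share, and repeat until everyone left accepts. Equal splitting is cross-monotonic, since a player's share only increases as others drop out, and the collected shares always sum to the cost, so the mechanism is budget-balanced.

    # A minimal Moulin-style mechanism: share a fixed cost equally among the players who
    # remain, and iteratively drop anyone whose offered share exceeds their reported value.
    def moulin_equal_shares(bids, cost):
        served = set(bids)
        while served:
            share = cost / len(served)
            rejectors = {i for i in served if bids[i] < share}
            if not rejectors:
                return served, {i: share for i in served}   # shares sum exactly to the cost
            served -= rejectors
        return set(), {}

    winners, shares = moulin_equal_shares({"a": 6.0, "b": 5.5, "c": 1.0}, cost=10.0)
    print(sorted(winners), shares)   # ['a', 'b'] with a share of 5.0 each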

Algorithms for and Complexity of Game-Theoretic Equilibria

Can an equilibrium of a game be computed efficiently? Arguably, this is a prerequisite for interpreting an equilibrium as a prediction of players' behavior. (If no computer can find an equilibrium in a reasonable amount of time, then presumably a bunch of uncoordinated players can't either.) The following survey gives a general overview of the area, assuming minimal prior knowledge of computational complexity:

Evidence is mounting that there is no general algorithm for computing a Nash equilibrium of a game in polynomial time (see, e.g., the survey above). Meanwhile, the well-known correlated equilibrium concept can be viewed as a tractable relaxation of the Nash equilibrium. (It can also be interpreted in terms of a mediator, or as the natural outcome of Bayesian rationality.) The following paper gives fairly general results for computing a correlated equilibrium of a compactly described game (in time polynomial in the game description), and also (under stronger assumptions) for optimizing over the correlated equilibria.

On the negative side, returning to Nash equilibria, the following paper gives evidence that a large amount of (deterministic) communication is required to compute even an approximate equilibrium. In an exciting recent development, Babichenko and Rubinstein prove (unconditionally) that a large amount of (even randomized) communication is indeed required.
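
For concreteness, the tractability of correlated equilibria mentioned above ultimately rests on the fact that, in a normal-form game, they are exactly the solutions of a system of linear inequalities. The sketch below (my own illustration, using the two-player game of "chicken" and assuming scipy is installed) asks an off-the-shelf LP solver for the correlated equilibrium that maximizes the expected sum of payoffs. The compactly-described-games results in the paper require much more than this, since writing down one variable per action profile is already too expensive there.

    import numpy as np
    from scipy.optimize import linprog

    # Two-player game of "chicken"; actions: 0 = dare, 1 = swerve. payoffs[i][a][b] is
    # player i's payoff when player 1 plays a and player 2 plays b (illustrative numbers).
    payoffs = np.array([
        [[0, 7], [2, 6]],    # player 1
        [[0, 2], [7, 6]],    # player 2
    ])

    profiles = [(a, b) for a in (0, 1) for b in (0, 1)]   # one probability variable per profile

    # Correlated equilibrium constraints: for each player, recommended action a, and deviation d,
    # obeying the recommendation must be at least as good in expectation (written as <= 0 rows).
    A_ub, b_ub = [], []
    for player in (0, 1):
        for a in (0, 1):
            for d in (0, 1):
                if d == a:
                    continue
                row = []
                for (s1, s2) in profiles:
                    if (s1, s2)[player] != a:
                        row.append(0.0)
                    else:
                        deviated = (d, s2) if player == 0 else (s1, d)
                        row.append(float(payoffs[player][deviated] - payoffs[player][(s1, s2)]))
                A_ub.append(row)
                b_ub.append(0.0)

    welfare = [float(payoffs[0][s] + payoffs[1][s]) for s in profiles]
    result = linprog(
        c=[-w for w in welfare],                      # maximize expected total payoff
        A_ub=A_ub, b_ub=b_ub,
        A_eq=[[1.0] * len(profiles)], b_eq=[1.0],     # the probabilities sum to one
        bounds=[(0, 1)] * len(profiles),
    )
    for s, p in zip(profiles, result.x):
        print(s, round(p, 3))   # weight 1/2 on (swerve, swerve), 1/4 on each asymmetric profile
    print("maximum expected welfare:", round(-result.fun, 2))   # 10.5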

Beyond the Worst-Case Analysis of Algorithms

The dominant paradigm in the theoretical analysis of algorithms is worst-case analysis, where the overall performance of an algorithm is summarized by its worst performance on any input. Good worst-case guarantees are great when you can get them, but for many important problems (linear programming, clustering, many online algorithms, etc.), worst-case analysis gives inaccurate advice about which algorithm to use. Going beyond worst-case analysis requires more careful modeling of the "relevant inputs" of a problem. For a two-hour tutorial on this area, see

  • Beyond Worst-Case Analysis, Part 1 and Part 2, Algorithms and Uncertainty Boot Camp, Simons Institute (2016). Slides
The above tutorial is a "reader's digest" version of a Stanford course that I've taught since 2009; the 2014 version has a full set of lecture notes and lecture videos which explore this field much more deeply. I also organized the first workshop dedicated to the topic in 2011; there have been follow-up workshops in 2014 at Dagstuhl and in 2016 at the Simons Institute.

My interest in this area originally stems from my work in auction design and analysis (see the section "Auction and Mechanism Design: Robust Revenue-Maximization" above), where worst-case analysis provides no useful information and alternative frameworks are needed. My work there proposes interpolations between worst-case and average-case analysis, which inherit robustness from the former and meaningful performance guarantees from the latter (see this survey talk). (I call these "hybrid" analysis frameworks; other examples include smoothed analysis and semi-random models.) This led me to explore similar approaches to "mainstream" algorithmic problems.

The following three papers are representative of my interests. The first studies the problem of choosing the right algorithm for an application (where "application" means an unknown probability distribution over instances of a problem) based on some benchmark instances. The goal is to obtain good expected performance (this is the average-case aspect) no matter what the unknown distribution is (this is the worst-case aspect).

The second considers the problem of recovering an unknown (that is, worst-case) binary labeling of a graph (for example, foreground-background image segmentation) from noisy measurements. The model resembles the stochastic block model, but is additionally parameterized by a (non-complete) graph of pairwise measurements. The third proposes a new graph class meant to capture all plausible social and information networks (motivated by the triadic closure property of such networks). This graph class can be viewed as a "worst case" over all plausible probabilistic models of social and information networks. Thus structural and algorithmic results for such graphs automatically apply to the networks produced by your favorite generative model.

Extroverted Complexity Theory

Papadimitriou describes the extroverted side of complexity theory as "an applied mathematical discipline whose function is to gain insights into the problems of the natural, social, and applied and engineering sciences." For example, the following monograph was inspired by all the great results in communication complexity that explain why so many natural algorithmic goals are impossible.

Recently I've been exploring how "complexity barriers" can be used to prove impossibility results in economics and elsewhere. The following talk gives an overview of my recent work in the area.

My interest in this area started with the following paper, which uses communication complexity to make precise an old rule of thumb in multi-item auction design: simple auctions (like selling items separately) do not work well when there are synergies across items.

One of the most basic notions in economics is that of a competitive equilibrium, which comprises prices for items that equalize supply and demand. In a market with divisible goods (milk, wheat, etc.), a competitive equilibrium exists under reasonable assumptions. Many markets with indivisible goods (houses, wireless spectrum licenses, airport landing slots, etc.), however, can fail to possess a competitive equilibrium. The following paper uses computational complexity to explain the rampant non-existence of such equilibria.

Border's theorem is a basic result in auction theory that characterizes everything that can happen at equilibrium in any single-item auction. This characterization has not been generalized much beyond single-item auctions, despite significant effort. The following paper gives a complexity-theoretic explanation for this.

Switching application domains to modern parallel computation, as exemplified by MapReduce and Hadoop: it has been a challenge to understand which problems require a large number of synchronized rounds of parallel computation to solve. The following paper proves some general but modest lower bounds, and explains why stronger lower bounds have not been forthcoming: proving better lower bounds would imply a major breakthrough in circuit complexity.
