Wondering what my research is about, but not sure where to start? Too busy to sift through the details of my papers? This page outlines some of the major themes in my research over the past 15+ years and provides links to the best entry points for learning more (books, surveys, videos, and papers). Click on a topic to explore further. (Last updated Fall 2016.)
Computer science and economics have had a productive relationship over the past 15 years, resulting in a new field called algorithmic game theory or, alternatively, economics and computation. Many problems central to modern computer science, ranging from resource allocation in large networks to online advertising, fundamentally involve interactions between multiple self-interested parties. Economics and game theory offer a host of useful models and definitions to reason about such problems. The flow of ideas also travels in the other direction, with approximation and complexity notions playing an increasingly important role in the latest developments in economic theory. My textbook and accompanying lecture videos offer an accessible introduction to the subject, with case studies on online advertising, wireless spectrum auctions, kidney exchange, and network management.
From game theory 101, we know that the outcome of self-interested behavior can be bad for everybody. Two famous examples are the mutual betrayal in the Prisoner's Dilemma and the overconsumption of a shared resource in the tragedy of the commons. One branch of algorithmic game theory studies the price of anarchy (POA), which quantifies the harm caused by self-interested behavior. This measure, first proposed by Koutsoupias and Papadimitriou, compares two things: the objective function value of the worst outcome of selfish behavior (formally, of the worst Nash equilibrium), and that of the best outcome, such as the one that maximizes the sum of everybody's payoffs. The POA is the ratio between the first and the second, so the closer the POA of a game is to 1, the better its equilibria approximate what could be achieved with centralized coordination of all of the players' actions. The "Twenty Lectures" book above has several chapters on the price of anarchy, and the following older surveys are still relevant:
Not long after the price of anarchy was first defined, Eva Tardos and I studied it in a classical model of "selfish routing," where each user of a network chooses its path to minimize its own delay. (Think drivers on a highway, or packets in a data network, or even current in an electrical network. If you're familiar with Braess's paradox, then you already know this model.) How much worse, in terms of average commute times, is selfish routing than a centrally optimal solution? The following paper has two main results: (i) if the delay of each network edge is a linear function of the amount of traffic on it, then the answer is 33%; and (ii) even with arbitrary delay functions, throwing hardware at the problem (i.e., selfish routing after a modest increase in network capacity) outperforms centrally optimal routing.
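To see where the 33% comes from, consider the simplest bad example (Pigou's example; the notation here is mine): one unit of traffic travels from $s$ to $t$ over two parallel links, one with constant delay $1$ and one with delay $c(x) = x$, where $x$ is the fraction of traffic using it. Selfishly, everyone takes the second link (its delay never exceeds 1), for an average delay of 1; the optimal solution splits the traffic evenly, and the price of anarchy (the ratio between the equilibrium cost and the optimal cost) is

$$ \mathrm{PoA} \;=\; \frac{1}{\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot \tfrac{1}{2}} \;=\; \frac{1}{3/4} \;=\; \frac{4}{3}, $$

i.e., 33% worse, which turns out to be as bad as it gets with linear delay functions.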
I was pretty obsessed with routing and other network games during the first half of the oughts (e.g., the book above). Some other highlights of my work on this topic include the first generalization of Braess's paradox to larger networks (first with one origin and destination, and then with many):
A price-of-anarchy bound applies when players of a game have successfully reached an equilibrium. But what if they haven't, for example because of the apparent intractability of computing a Nash equilibrium? In the second half of the oughts, there was a lot of nice work proving price-of-anarchy bounds that apply even when players have not reached a Nash equilibrium (including Mirrokni-Vetta on best-response dynamics, Christodoulou-Koutsoupias on correlated equilibria, and Blum-Hajiaghayi-Ligett-Roth on no-regret learners). The following paper gives a general theory of such "robust" price-of-anarchy bounds. It defines "smooth games," notes that many of the games studied in computer science and related fields are smooth, and proves that price-of-anarchy bounds for smooth games are automatically robust in several senses.
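Roughly, the key definition is the following (see the paper for the precise statement and hypotheses): a cost-minimization game is $(\lambda,\mu)$-smooth (with $\lambda > 0$ and $\mu < 1$) if, for every pair of outcomes $s$ and $s^*$,

$$ \sum_{i} C_i(s^*_i, s_{-i}) \;\le\; \lambda \cdot \mathrm{cost}(s^*) \;+\; \mu \cdot \mathrm{cost}(s), $$

where $C_i$ is player $i$'s cost and $\mathrm{cost}$ is the objective function. In every such game, every Nash equilibrium (and, more generally, every coarse correlated equilibrium and every outcome sequence generated by no-regret learners) has expected cost at most $\lambda/(1-\mu)$ times the minimum possible.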
Several extensions of the basic theory of smooth games have appeared over the past few years, such as the Syrgkanis-Tardos theory of smooth mechanisms. I've worked on extensions to incomplete-information games and large games (with many "small" players), and also "local" notions of smoothness.
Early work in computer science on auction design focused on auctions where every bidder has a foolproof strategy (like in a second-price auction). As impossibility results for such auctions mounted, attention naturally shifted to proving price-of-anarchy bounds for the equilibria of simple but strategically non-trivial auctions (beginning with Christodoulou-Kovacs-Schapira), such as simultaneous single-item auctions. The theory of smooth games and its extensions (see previous section) turns out to be well suited for this goal. Here is a survey of the state-of-the-art as of 2016:
A representative positive result (later improved by Feldman-Fu-Gravin-Lucier) is:
A mechanism is a procedure for making a decision when agents have private preferences, and an auction is a mechanism specifically for the exchange of goods and money. Economists have been thinking hard about auctions and mechanisms for over a half-century. What could computer science possibly bring to the table? Several things: measures of approximation; alternatives to Bayesian analysis; techniques for learning from data; and a focus on computational tractability.
Two surveys on different facets of this area are:
What selling procedure should a seller use to maximize her revenue? This is a hard question because the best auction to run --- like choosing the right opening bid in an eBay auction or the right reserve price in a sponsored search auction --- depends on what bidders are willing to pay. The traditional approach in economics, exemplified by Myerson's optimal auction theory, is to maximize the seller's expected revenue with respect to a known distribution over what bidders are willing to pay.
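To convey the flavor (these are standard facts from Myerson's theory, not specific to the papers linked on this page): with independent bidders whose value distributions $F_1,\dots,F_n$ satisfy a mild "regularity" condition, expected revenue in a single-item auction is maximized by awarding the item to the bidder with the largest nonnegative virtual value

$$ \varphi_i(v_i) \;=\; v_i - \frac{1 - F_i(v_i)}{f_i(v_i)} $$

(where $f_i$ is the density of $F_i$), and charging the winner the smallest bid at which she would still have won. For example, if every bidder's value is drawn uniformly from $[0,1]$, then $\varphi(v) = 2v - 1$ and the optimal auction is just a second-price auction with a reserve price of $1/2$.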
In many cases, the theoretically optimal auction is too complex for direct use (especially when bidders are not symmetric). The following paper identifies many scenarios where simple and directly usable auctions do almost as well as theoretically optimal auctions.
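As a toy illustration of the simple-versus-optimal comparison (my own simulation sketch, not taken from the paper; the distributions and the particular "simple" format, a second-price auction with an anonymous reserve, are assumptions for the example): with two bidders whose values are uniform on $[0,1]$ and $[0,2]$, the optimal auction discriminates between the bidders via their virtual values, while the simple one does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
h = np.array([1.0, 2.0])                      # bidder i's value ~ Uniform[0, h[i]]
v = rng.uniform(0.0, h, size=(n_samples, 2))  # one row of values per simulated auction

def myerson_revenue(v, h):
    """Average revenue of the Myerson-optimal auction (truthful bidding assumed).
    For Uniform[0, h_i], the virtual value is phi_i(v) = 2v - h_i."""
    phi = 2 * v - h
    idx = np.arange(len(v))
    winner = np.argmax(phi, axis=1)
    win_phi, other_phi = phi[idx, winner], phi[idx, 1 - winner]
    sold = win_phi >= 0                        # sell only if the best virtual value is nonnegative
    threshold = np.maximum(0.0, other_phi)     # smallest virtual value that still wins
    payment = (threshold + h[winner]) / 2      # invert phi_i to get the winner's threshold bid
    return np.where(sold, payment, 0.0).mean()

def second_price_revenue(v, r):
    """Average revenue of a second-price auction with an anonymous reserve r."""
    clears = v >= r
    one, both = clears.any(axis=1) & ~clears.all(axis=1), clears.all(axis=1)
    rev = np.zeros(len(v))
    rev[one] = r                               # lone qualifying bidder pays the reserve
    rev[both] = np.maximum(r, v.min(axis=1)[both])
    return rev.mean()

# Grid-search the reserve on the same samples -- fine for a quick illustration.
best_r = max(np.linspace(0, 1.5, 151), key=lambda r: second_price_revenue(v, r))
print(f"Myerson-optimal revenue:           {myerson_revenue(v, h):.3f}")
print(f"2nd price, best anonymous reserve: {second_price_revenue(v, best_r):.3f}")
```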
As discussed in the previous section, the traditional economic approach to revenue-maximizing auction design posits a known prior distribution over what bidders are willing to pay, and then solves for the auction that maximizes the seller's expected revenue with respect to this distribution. But where does this distribution come from? One obvious answer is from data, in the form of previous bids by comparable bidders in auctions for comparable items. (Ostrovsky-Schwarz applied this idea to great effect at Yahoo! in 2008.) How much data is necessary and sufficient to determine what auction you should run? The following paper answers this question for single-item auctions with asymmetric bidders.
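Here is a bare-bones sketch of the data-driven approach in the simplest possible setting, a single bidder facing a posted price (the exponential distribution and the sample sizes are arbitrary choices for illustration; the paper's model and guarantees are far more general):

```python
import numpy as np

rng = np.random.default_rng(1)

def posted_price_revenue(price, values):
    """Average revenue from posting `price` to a single bidder, one value per sample."""
    return np.where(values >= price, price, 0.0).mean()

# "Past data": samples of what comparable bidders were willing to pay. The true
# distribution (here, exponential with mean 1) is unknown to the seller.
train = rng.exponential(1.0, size=50)
fresh = rng.exponential(1.0, size=1_000_000)   # stand-in for the true distribution

# Pick the empirically best reserve on the training samples...
learned_reserve = max(np.unique(train), key=lambda p: posted_price_revenue(p, train))

# ...and see how close it comes to the best possible expected revenue.
best_possible = max(posted_price_revenue(p, fresh) for p in np.linspace(0, 10, 2001))
print(f"reserve learned from 50 samples: {learned_reserve:.3f}")
print(f"its expected revenue:            {posted_price_revenue(learned_reserve, fresh):.3f}")
print(f"best possible expected revenue:  {best_possible:.3f}")
```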
In settings with monetary payments (like auctions), a well-known solution from economics (the VCG mechanism) can be used, in principle, to determine the best thing to do even when you don't know participants' preferences a priori. (The Vickrey auction is a special case.) The only problem is that, in many scenarios including multi-item auctions, implementing this mechanism requires solving computationally intractable problems. The field of algorithmic mechanism design studies when the promise of the VCG mechanism can be preserved (at least approximately) while using a reasonable amount of computation. The best-case scenario is when computationally efficient approximation algorithms (which take preferences as given) can be translated in a generic way to equally good incentive-compatible mechanisms (which need to elicit preferences from the participants), as in the following paper.
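Concretely (this is the standard textbook description): the VCG mechanism selects the outcome $\omega^*$ that maximizes the reported welfare $\sum_i v_i(\omega)$ and charges each participant $i$ the externality she imposes on the others,

$$ p_i \;=\; \max_{\omega} \sum_{j \neq i} v_j(\omega) \;-\; \sum_{j \neq i} v_j(\omega^*). $$

With a single item and bidders who each want one copy, this reduces to the Vickrey (second-price) auction: the highest bidder wins and pays the second-highest bid.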
Another well-known drawback of the VCG mechanism (see preceding section) is that, in settings where there is an outcome-dependent cost, its revenue need not cover the cost incurred. There are "budget-balanced" alternatives, where the cost incurred is shared among the winners in the chosen outcome, but such mechanisms compromise on the quality of the outcome selected. The impossibility of maximizing social welfare while balancing the budget motivates using approximation measures to quantify the feasible trade-offs between the two objectives. This is the subject of the following paper.
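To make budget-balance concrete, here is a minimal sketch of a Moulin-style cost-sharing mechanism (the equal-split cost shares and the specific numbers are assumptions for illustration, not from the paper): repeatedly offer each remaining player her current cost share, drop anyone whose bid is below it, and stop once everyone left accepts.

```python
def moulin_mechanism(bids, cost_share):
    """Iterative cost-sharing auction in the style of Moulin mechanisms.

    bids:       dict mapping each player to her reported willingness to pay.
    cost_share: function (player, frozenset of served players) -> that player's
                share of the cost if exactly that set is served.
    Returns (served set, payments), with payments summing to the shared cost.
    """
    alive = set(bids)
    while alive:
        shares = {i: cost_share(i, frozenset(alive)) for i in alive}
        dropouts = {i for i in alive if bids[i] < shares[i]}
        if not dropouts:              # everyone remaining accepts her share
            return alive, shares
        alive -= dropouts             # drop the refusers and re-offer
    return set(), {}

# Toy example: a fixed cost of 12 for a shared service, split equally among
# whoever is served. Equal split is "cross-monotonic" (your share only rises
# as others drop out), the property that makes mechanisms like this truthful.
TOTAL_COST = 12.0
equal_split = lambda i, served: TOTAL_COST / len(served)

bids = {"a": 7.0, "b": 6.5, "c": 2.0, "d": 1.0}
print(moulin_mechanism(bids, equal_split))
# -> serves {'a', 'b'}, each paying 6.0, so payments exactly cover the cost.
```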
The following paper introduces acyclic mechanisms, which are more flexible than the Moulin mechanisms studied in most of the cost-sharing literature, and shows that many primal-dual algorithms naturally lead to good acyclic mechanisms.
Can an equilibrium of a game be computed efficiently? Arguably, this is a prerequisite for interpreting an equilibrium as a prediction of players' behavior. (If no computer can find an equilibrium in a reasonable amount of time, then presumably a bunch of uncoordinated players can't either.) The following survey gives a general overview of the area, assuming minimal prior knowledge of computational complexity:
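A useful contrast to keep in mind (standard material, with an illustrative sketch of my own rather than anything from the survey): for two-player zero-sum games, an equilibrium can be computed in polynomial time by linear programming; the hardness results kick in for richer classes, such as general two-player (bimatrix) games.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Optimal mixed strategy for the row player of a zero-sum game with payoff
    matrix A (row player's payoffs): maximize v subject to x^T A >= v in every
    column, with x a probability vector."""
    m, n = A.shape
    c = np.concatenate([np.zeros(m), [-1.0]])          # variables x_1..x_m, v; maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])          # v - (x^T A)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]          # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

# Rock-paper-scissors: the unique equilibrium mixes uniformly, with value 0.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)
x, v = maximin_strategy(A)
print(np.round(x, 3), round(v, 3))   # ~[0.333 0.333 0.333], 0.0
```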
The dominant paradigm in the theoretical analysis of algorithms is worst-case analysis, where the overall performance of an algorithm is summarized by its worst performance on any input. Good worst-case guarantees are great when you can get them, but for many important problems (linear programming, clustering, many online algorithms, etc.), worst-case analysis gives inaccurate advice about which algorithm to use. Going beyond worst-case analysis requires more careful modeling of the "relevant inputs" of a problem. For a two-hour tutorial on this area, see
My interest in this area originally stems from my work in auction design and analysis (see the section "Auction and Mechanism Design: Robust Revenue-Maximization" above), where worst-case analysis provides no useful information and alternative frameworks are needed. My work there proposes interpolations between worst-case and average-case analysis, which inherit robustness from the former and meaningful performance guarantees from the latter (see this survey talk). (I call these "hybrid" analysis frameworks; other examples include smoothed analysis and semi-random models.) This led me to explore similar approaches to "mainstream" algorithmic problems.
The following three papers are representative of my interests. The first studies the problem of choosing the right algorithm for an application (where "application" means an unknown probability distribution over instances of a problem) based on some benchmark instances. The goal is to obtain good expected performance (this is the average-case aspect) no matter what the unknown distribution is (this is the worst-case aspect).
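The sketch below conveys the flavor of that setup in the simplest terms (the toy scheduling problem, the three candidate rules, and the sample-average selection rule are all illustrative assumptions of mine; the paper's model and guarantees are more general): sample benchmark instances from the unknown distribution and select the candidate algorithm with the best average performance on them.

```python
import random

def select_algorithm(algorithms, cost, benchmark_instances):
    """Return the candidate algorithm with the smallest average cost on the
    benchmark instances (an empirical stand-in for expected performance on
    the unknown underlying distribution)."""
    def average_cost(alg):
        return sum(cost(alg, inst) for inst in benchmark_instances) / len(benchmark_instances)
    return min(algorithms, key=average_cost)

# Toy setup: an "instance" is a list of job sizes, the candidate "algorithms"
# are three rules for ordering the jobs, and cost is the total waiting time.
def total_waiting_time(order_rule, jobs):
    elapsed, waiting = 0.0, 0.0
    for size in order_rule(jobs):
        elapsed += size
        waiting += elapsed
    return waiting

rules = {
    "as-given":       lambda jobs: jobs,
    "longest-first":  lambda jobs: sorted(jobs, reverse=True),
    "shortest-first": lambda jobs: sorted(jobs),   # the optimal rule for this cost
}

random.seed(0)
benchmarks = [[random.expovariate(1.0) for _ in range(20)] for _ in range(100)]
best = select_algorithm(rules, lambda name, inst: total_waiting_time(rules[name], inst), benchmarks)
print("selected rule:", best)   # expect: shortest-first
```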
Papadimitriou describes the extroverted side of complexity theory as "an applied mathematical discipline whose function is to gain insights into the problems of the natural, social, and applied and engineering sciences." For example, the following monograph was inspired by all the great results in communication complexity that explain why so many natural algorithmic goals are impossible.