Location: Gates 4B lobby.
Mailing list: thseminar@cs.stanford.edu. Email Rishi to sign up.
Contact: Rishi Gupta (rishig@cs.stanford...)
I gave a "Christmas Tree Lecture" last month with the same title. [People who want to see that lecture can view it via http://scpd.stanford.edu/knuth/index.jsp, and watching it will be helpful (maybe even interesting) but not a prerequisite for next Thursday's talk.]
At the end of that lecture I spoke about a fascinating conjecture that I had been trying to prove without success during the previous week. I had meanwhile accumulated overwhelming evidence for the truth of the conjecture.
My talk will describe the sequel: Three days after giving the Christmas Tree Lecture, I learned that the conjecture is true in an even stronger form, and that it has a simple proof! A happy ending...
A group of n countries all release pollution into the environment, which drifts across international borders. They strike a deal in which each country will restrict its economic production in an effort to unleash less pollution on its neighbors. But someday down the line, some smaller coalition of these countries will realize that they made a bad deal: this coalition could have struck a different deal that would leave the entire coalition happier, even without help from its neighbors. The coalition breaks off and the original deal falls apart.
There is something seemingly exponential about this notion of stability: it places a restriction on every possible coalition of countries, of which there are 2^n. Predictably, past work has shown that checking an outcome for stability is hard (NP- or coNP-complete) in various simple types of games. We study a new type, "Negotiation Games," which capture interactions in which each player can do work for the benefit of a group (much like our pollution-reducing countries). We show that Negotiation Games are an interesting exception: we outline a polytime algorithm that tests the outcome of a Negotiation Game for stability.
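As a toy illustration of why stability looks exponential, here is a brute-force check (in the language of cooperative games, a core membership test) that simply enumerates all 2^n coalitions. The talk's polytime algorithm for Negotiation Games avoids exactly this enumeration; the game and names below are made up for illustration, not taken from the talk.

```python
from itertools import combinations

def in_core(n, v, x):
    """Brute-force stability (core membership) test: payoff vector x is
    stable iff no coalition S can secure more on its own than x currently
    gives it, i.e. v(S) <= sum of x over S for every S.
    Enumerates all 2^n coalitions, hence exponential time."""
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if v(S) > sum(x[i] for i in S) + 1e-9:
                return False  # coalition S would break off
    return True

# Toy 3-country game: any pair can secure 60 by itself, all three
# together secure 100, and a country alone secures 0.
def v(S):
    return {1: 0, 2: 60, 3: 100}[len(S)]

print(in_core(3, v, [40, 40, 20]))  # True: every coalition is satisfied
print(in_core(3, v, [50, 30, 20]))  # False: the last two get 50 < 60
```

The second deal falls apart for exactly the reason described above: the coalition of the last two countries could strike its own deal worth 60 but is only getting 50.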
Imagine that someone presents you with a distribution over discrete objects (e.g., a distribution over jelly-bean colors) and tells you what the distribution is (Pr(red)=.3, Pr(green)=.1, ...). How many draws from the distribution must you make to verify that it is indeed the promised distribution? Specifically, given an error parameter epsilon, you would like to take some draws from the distribution and run an algorithm that outputs "Accept" (with high probability) if the distribution is the one described, and "Reject" (with high probability) if the distribution has total variation distance more than epsilon from the described distribution. We give instance-optimal tight bounds on the required sample size, as a function of the distribution in question.
I will briefly sketch the algorithm, though I will spare you the lower bound construction. I'll also discuss the main technical tool that enables the analysis of the algorithm: a complete (and clean) characterization of a useful and general class of inequalities that generalize the Cauchy-Schwarz inequality, Hölder's inequalities, and the monotonicity of Lp norms.
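For contrast with the instance-optimal algorithm from the talk, here is the naive plug-in tester (names, thresholds, and the example distribution are illustrative): estimate the distribution empirically and accept iff the empirical total variation distance to the claimed distribution is below epsilon/2. This needs on the order of (support size)/epsilon^2 samples, which the talk's bounds improve on as a function of the claimed distribution itself.

```python
import random
from collections import Counter

def naive_identity_test(claimed, sampler, eps, num_samples):
    """Naive plug-in identity tester (NOT the instance-optimal algorithm
    from the talk): build the empirical distribution from num_samples
    draws and accept iff its total variation distance to `claimed` is
    below eps/2."""
    counts = Counter(sampler() for _ in range(num_samples))
    support = set(claimed) | set(counts)
    tv = 0.5 * sum(abs(claimed.get(o, 0.0) - counts[o] / num_samples)
                   for o in support)
    return tv < eps / 2

claimed = {"red": 0.3, "green": 0.1, "blue": 0.6}
objects, probs = zip(*claimed.items())

def honest():  # draws from exactly the claimed distribution
    return random.choices(objects, probs)[0]

print(naive_identity_test(claimed, honest, eps=0.2, num_samples=5000))
```

With 5000 samples from the honest sampler, the empirical TV distance concentrates near 0, so the tester accepts with high probability; a sampler at TV distance 0.3 from the claim would be rejected with high probability.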
We consider the problem of online scheduling of jobs on unrelated machines with dynamic speed scaling to minimize the sum of energy and weighted flow time. We give an algorithm with an asymptotically optimal competitive ratio for arbitrary power functions. (No earlier results handled arbitrary power functions for unrelated machines.) For power functions of the form f(s) = s^a for some constant a > 1, we get a competitive ratio of O(a / log a), improving upon the previous competitive ratio of O(a^2) by Anand et al. (SODA '12), along with a matching lower bound of Omega(a / log a). Further, in the resource augmentation model, with a (1 + eps) speedup, we give a 2(1 + 1/eps)-competitive algorithm with essentially the same techniques, matching the bound of Anand et al. (SODA '12) for the special case of fixed-speed unrelated machines. Unlike the previous results, most of which used an amortized local competitiveness argument or dual fitting methods, we use a primal-dual method, which is useful not only for analyzing the algorithms but also for designing the algorithm itself.
Joint work with Nikhil R. Devanur from Microsoft Research Redmond.
I will show how to write a symmetric LP that achieves the best possible approximation (even instance-wise) among all the symmetric LPs of roughly the same size for the Traveling Salesman Problem.
If time permits, I will also show that the Sum of Squares (Lasserre) SDPs are optimal among all symmetric SDP relaxations for Constraint Satisfaction Problems, generalizing a recent result of Chan et al.
Joint work with James R. Lee, Prasad Raghavendra, and David Steurer.
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known not to be implementable truthfully, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to report their valuations truthfully. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option.
For a multi-dimensional domain with an arbitrary number of agents and items, and for the very large class of homogeneous valuation functions, we prove that our mechanism provides every agent with at least a 1/e \approx 0.368 fraction of her Proportionally Fair valuation. To the best of our knowledge, this is the first result that gives a constant factor approximation to every agent for the Proportionally Fair solution. To complement this result, we show that no truthful mechanism can guarantee more than a 0.5 approximation, even for the restricted class of additive linear valuations. In addition, we uncover a connection between the Partial Allocation mechanism and VCG-based mechanism design.
Joint work with Richard Cole and Gagan Goel that appeared at EC 2013 and AAMAS 2013.
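To make the solution concept concrete: for divisible goods and additive valuations, the Proportionally Fair allocation is the one maximizing the product of the agents' utilities (equivalently, the Nash social welfare). A minimal sketch, using brute-force grid search rather than a proper convex solver; the two-agent, two-good valuations below are made up for illustration.

```python
# Sketch: the Proportionally Fair (max Nash welfare) allocation for
# 2 agents and 2 divisible goods with additive linear valuations,
# found by grid search (illustration only; real solvers use convex opt).
from math import log

v = [[2.0, 1.0],   # agent 0's value per unit of goods A, B
     [1.0, 3.0]]   # agent 1's value per unit of goods A, B

best, best_alloc = float("-inf"), None
steps = 200
for i in range(steps + 1):          # fraction of good A given to agent 0
    for j in range(steps + 1):      # fraction of good B given to agent 0
        x, y = i / steps, j / steps
        u0 = v[0][0] * x + v[0][1] * y
        u1 = v[1][0] * (1 - x) + v[1][1] * (1 - y)
        if u0 > 0 and u1 > 0:
            nash = log(u0) + log(u1)   # log of the product of utilities
            if nash > best:
                best, best_alloc = nash, (x, y)

print(best_alloc)  # (1.0, 0.0): each good goes to the agent who values it more
```

Here Proportional Fairness gives good A entirely to agent 0 and good B entirely to agent 1; the Partial Allocation mechanism would then hand each agent only a fraction of her bundle, chosen so that truthful reporting is optimal.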
In the k-SUM problem, we are given a set of numbers and asked if there are k of them that sum to 0. The case k=3 has been extensively studied in computational geometry and is known to be intimately related to many low-dimensional problems. The case of arbitrary k is the natural parameterization of Subset Sum and is well known in parameterized algorithms and complexity.
We present new FPT reductions between the k-SUM problem and the k-Clique problem, yielding several complexity-theoretic and algorithmic consequences. Our reductions show that k-SUM on "small" numbers (in the range [-n^f(k), n^f(k)] for a sufficiently large function f) is W[1]-complete, and that k-SUM (in general) is W[1]-complete under a common derandomization assumption. These results effectively resolve the parameterized complexity of k-SUM, initially posed in 1992 by Downey and Fellows in their seminal paper on parameterized intractability. Our method is quite general and also applies to other weighted problems.
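To fix ideas, here is the textbook meet-in-the-middle algorithm for k-SUM (an illustration of the problem itself, not of the reductions from the talk): hashing the sums of all floor(k/2)-subsets cuts the exponent from k down to roughly ceil(k/2).

```python
from itertools import combinations

def k_sum(nums, k):
    """Meet-in-the-middle k-SUM: split k = a + (k - a) with a = k // 2,
    hash every sum of an a-subset of indices, then look up the negation
    of each (k - a)-subset sum, requiring disjoint index sets so no
    element is used twice.  Roughly O(n^ceil(k/2)) subset enumerations,
    versus O(n^k) for brute force."""
    a = k // 2
    idx = range(len(nums))
    table = {}
    for comb in combinations(idx, a):
        table.setdefault(sum(nums[i] for i in comb), []).append(set(comb))
    for comb in combinations(idx, k - a):
        target = -sum(nums[i] for i in comb)
        for other in table.get(target, []):
            if other.isdisjoint(comb):
                return True
    return False

print(k_sum([8, -5, 1, 4, -3, 7, -6], 3))  # True: 8 + (-5) + (-3) == 0
print(k_sum([1, 2, 3, 4], 2))              # False: no pair sums to 0
```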
The economic notion of a competitive equilibrium aims to capture the steady state of combinatorial markets: prices create equality between demand and supply, thus stabilizing the market; moreover, prices provide the common information that drives an efficient allocation out of disparate individual decisions. It is well known, however, that a competitive equilibrium is not guaranteed to exist when valuations are not gross substitutes. We study an alternative notion of equilibrium that accommodates bundles of goods, as found in many real-life markets (e.g., a six-pack). The purpose of this work is to study the stabilizing role of bundles alongside prices, and the welfare and revenue properties of equilibria with bundles. We establish non-trivial welfare and revenue guarantees for competitive equilibrium with bundling in different classes of markets. In studying these properties we address an open question raised by Feldman, Gravin, and Lucier in their recent STOC '13 paper.
Joint work with Shahar Dobzinski, Michal Feldman and Omri Weinstein.
The Personalized PageRank from a start node s to a target node t in a directed graph is the probability that a random walk from s ends at t, where after each step the walk flips a coin and stops with some fixed probability. Existing methods like Monte Carlo require time Ω(1/δ) to detect a Personalized PageRank probability larger than δ; this makes them unsuitable for use in large social networks for applications requiring values of δ = O(1/n). We present a new bidirectional method with a provable average running-time guarantee of O(sqrt(d/δ)), where d is the average degree of the graph. We complement this result with an Ω(1/sqrt(δ)) lower bound for Significant-PageRank, showing that the dependence on δ cannot be improved.
This Thursday I will present our new bidirectional algorithm for Personalized PageRank estimation and a proof of its correctness and running time. This is joint work with Siddhartha Banerjee, Ashish Goel, and C. Seshadhri.
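For context, here is the baseline Monte Carlo estimator that the Ω(1/δ) bound above refers to (a sketch: the graph, the dangling-node convention, and the parameter names are illustrative, and this is not the bidirectional algorithm itself).

```python
import random

def monte_carlo_ppr(graph, s, t, alpha, num_walks):
    """Baseline Monte Carlo estimate of the Personalized PageRank pi_s(t):
    run random walks from s that stop with probability alpha after each
    step, and return the fraction that end at t.  Reliably detecting a
    value on the order of delta needs about 1/delta walks, which is what
    the bidirectional method improves on.
    `graph` maps each node to a list of its out-neighbors."""
    hits = 0
    for _ in range(num_walks):
        node = s
        while random.random() > alpha:
            nbrs = graph.get(node)
            node = random.choice(nbrs) if nbrs else s  # dangling: restart at s
        hits += (node == t)
    return hits / num_walks

# 3-cycle: a walk from 0 ends back at 0 iff its length is a multiple of 3,
# so pi_0(0) = alpha / (1 - (1 - alpha)^3), about 0.41 for alpha = 0.2.
cycle = {0: [1], 1: [2], 2: [0]}
print(monte_carlo_ppr(cycle, 0, 0, alpha=0.2, num_walks=20000))
```

The estimate concentrates around the exact value 0.2 / (1 - 0.8^3) ≈ 0.4098, but note that 20000 walks were needed for a probability of order 0.4; for δ = O(1/n) on a social-network-scale graph, this cost is exactly the obstacle described above.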