Location: Gates 4B lobby.
Mailing list: thseminar@cs.stanford.edu. Email Kshipra to sign up.
Contact: Kshipra Bhawalkar (kshipra@stanford...)
We study the design and approximation of optimal crowdsourcing contests. Crowdsourcing contests can be modeled as all-pay auctions because entrants must exert effort up-front to enter. Unlike in all-pay auctions, where the usual design objective is to maximize revenue, in a crowdsourcing contest the principal benefits only from the highest-quality submission. We give a theory of optimal crowdsourcing contests that mirrors the theory of optimal auction design. We also compare crowdsourcing contests with more conventional means of procurement and show that crowdsourcing contests are constant-factor approximations to conventional methods.
Joint work with Shuchi Chawla and Jason Hartline.
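To make the contrast between the two objectives concrete, here is a toy Monte Carlo sketch in Python (illustrative only, not from the paper). It assumes n bidders with i.i.d. uniform[0,1] values playing the standard symmetric all-pay equilibrium b(v) = ((n-1)/n) v^n, and compares the expected total effort (the revenue objective) with the expected best submission (the contest principal's objective).

```python
import numpy as np

def all_pay_uniform(n_bidders, n_trials=100_000, seed=0):
    """Toy Monte Carlo: n bidders with i.i.d. uniform[0,1] values play the
    symmetric all-pay equilibrium b(v) = ((n-1)/n) * v**n.  A revenue
    maximizer cares about the SUM of the bids (total effort); the contest
    principal in the abstract only values the MAXIMUM bid (best submission)."""
    rng = np.random.default_rng(seed)
    values = rng.uniform(size=(n_trials, n_bidders))
    bids = (n_bidders - 1) / n_bidders * values ** n_bidders
    expected_total_effort = bids.sum(axis=1).mean()
    expected_best_submission = bids.max(axis=1).mean()
    return expected_total_effort, expected_best_submission

# Example: all_pay_uniform(5) compares the two objectives for five bidders.
```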
There has been a recent rush of approximation algorithms for planar network design problems. These algorithms show that problems such as travelling salesperson, Steiner tree, and Steiner forest can be approximated much more accurately in planar graphs than in general graphs. In particular, these problems admit polynomial-time approximation schemes (PTASes) in planar graphs, whereas only constant-factor approximations are known in general graphs. (For a fixed epsilon, a PTAS finds, in polynomial time, a solution whose value is within a factor of 1+epsilon of optimal.)
This talk will unravel the last 5 years of work in this area. We will see a framework for designing PTASes in planar graphs. We will see how this paradigm builds on ideas from Arora and Mitchell for geometric problems and Baker for packing and covering problems. We will see how far this framework can go -- and how far it cannot.
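As a small illustration of one ingredient of this framework, the sketch below (written for this summary, not taken from the talk) implements the BFS-layering step behind Baker's technique: vertices are grouped by BFS level mod m, and deleting any one group leaves pieces spanning at most m-1 consecutive layers, which in a planar graph have bounded treewidth.

```python
from collections import deque

def bfs_layers(adj, root):
    """Label each vertex of a connected graph with its BFS distance from root.
    The graph is given as an adjacency dict {v: iterable_of_neighbors}."""
    level = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in level:
                level[u] = level[v] + 1
                queue.append(u)
    return level

def baker_slices(adj, root, m):
    """Partition vertices into m classes by (BFS level mod m).  Deleting any one
    class breaks the graph into slices of at most m - 1 consecutive BFS layers;
    in a planar graph each such slice has treewidth O(m), so it can be solved
    (near-)optimally, and for many packing/covering problems the best choice of
    deleted class loses only an O(1/m) fraction of the optimum."""
    level = bfs_layers(adj, root)
    classes = [[] for _ in range(m)]
    for v, l in level.items():
        classes[l % m].append(v)
    return classes
```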
We attempt to characterize the complexity of the following problem: given a graph G, how does one choose a set S of vertices that maximizes the size of the largest subgraph in which every vertex is either a member of S or has degree at least k within the subgraph? This problem has applications in social networks.
In this talk, we'll see that the problem is quite hard to solve in general, but algorithmic progress is possible when the graph has certain structural properties.
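To make the objective concrete, here is a minimal Python sketch (illustrative only, not the talk's algorithm) that, given a candidate set S and the parameter k, computes the resulting subgraph by iterative peeling; the optimization problem above is to choose S so that this subgraph is as large as possible.

```python
from collections import deque

def anchored_subgraph(adj, anchors, k):
    """Given an undirected graph as an adjacency dict {v: set_of_neighbors},
    a set of chosen vertices S (`anchors`), and an integer k, return the vertex
    set of the maximal subgraph in which every vertex is either in S or has at
    least k neighbors inside the subgraph, computed by iterative peeling.
    Illustrative only; the hard problem is choosing S in the first place."""
    alive = set(adj)
    deg = {v: len(adj[v]) for v in adj}
    # Start by queueing every non-member of S whose degree is already below k.
    queue = deque(v for v in adj if v not in anchors and deg[v] < k)
    removed = set(queue)
    while queue:
        v = queue.popleft()
        alive.discard(v)
        for u in adj[v]:
            if u in alive and u not in removed:
                deg[u] -= 1
                if u not in anchors and deg[u] < k:
                    removed.add(u)
                    queue.append(u)
    return alive
```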
For a long time, coding theory has sought constructions of good error-correcting codes with very efficient encoding and decoding algorithms. Until 1994, however, most of the known good codes were algebraic codes based on polynomials, and their encoding and decoding algorithms usually required at least O(n log n) time. This changed in 1994, when Sipser and Spielman constructed error-correcting codes based on expander graphs whose decoding algorithm runs in time O(n). Their construction inspired a sequence of works on this subject, yielding better and better codes with linear-time encoding and decoding algorithms.
In this talk, I will describe the core ideas of one of the last constructions in this sequence, due to Guruswami and Indyk, which achieves codes that can be encoded and decoded in linear time and have a nearly optimal tradeoff between rate and distance. I will focus on a nice idea that shows how expanders can be used to move from a random noise model to an adversarial noise model.
The talk will be self-contained.
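As a toy illustration of the flavor of these decoders, the following sketch (written for this summary, not from the talk) implements a sequential bit-flipping rule in the spirit of Sipser and Spielman for an arbitrary parity-check matrix; in the actual constructions, the expansion of the underlying bipartite graph is what guarantees that such a local rule corrects a constant fraction of errors in linear time.

```python
import numpy as np

def flip_decode(H, y, max_rounds=100):
    """Toy bit-flipping decoder for a binary code with parity-check matrix H
    (rows are checks, columns are bits), in the spirit of Sipser-Spielman.
    Repeatedly flip a bit that lies in more unsatisfied than satisfied checks.
    When H comes from a good expander, this rule provably corrects a constant
    fraction of errors; this generic sketch makes no such guarantee."""
    y = y.copy() % 2
    for _ in range(max_rounds):
        syndrome = H.dot(y) % 2            # 1 for each unsatisfied check
        if not syndrome.any():
            return y                        # all checks satisfied: done
        unsat = H.T.dot(syndrome)           # unsatisfied checks per bit
        total = H.sum(axis=0)               # total checks per bit
        gain = 2 * unsat - total            # > 0 iff flipping helps locally
        bit = int(np.argmax(gain))
        if gain[bit] <= 0:
            break                           # no bit is in a majority of unsatisfied checks
        y[bit] ^= 1
    return y
```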
Most results in revenue-maximizing auction design hinge on "getting the price right" - offering goods to bidders at a price low enough to encourage a sale, but high enough to garner non-trivial revenue. Getting the price right can be hard work, especially when the seller has little or no a priori information about bidders' valuations.
A simple alternative approach is to "let the market do the work" and have prices emerge from competition for scarce goods. The simplest imaginable implementation of this idea is the following: first, if necessary, impose an artificial limit on the number of goods that can be sold; second, run the welfare-maximizing VCG mechanism subject to this limit.
We prove that such "supply-limiting mechanisms" achieve near-optimal expected revenue in a range of single- and multi-parameter Bayesian settings. Indeed, despite their simplicity, we prove that they essentially match the state of the art in prior-independent mechanism design.
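The simplest concrete instance of this recipe, shown below as an illustrative Python sketch rather than the paper's general construction, sells identical units to unit-demand bidders under an artificial supply cap of k: VCG then awards the k highest bidders and charges each of them the (k+1)-st highest bid.

```python
def supply_limited_vcg(bids, k):
    """Toy illustration: sell identical units to unit-demand bidders, but
    artificially cap supply at k.  VCG with k units awards a unit to the k
    highest bidders and charges each winner the (k+1)-st highest bid, i.e.
    the externality the winner imposes on the others.
    Returns (set of winning bidder indices, price each winner pays)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winners = set(order[:k])
    price = bids[order[k]] if len(bids) > k else 0.0
    return winners, price
```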
The girth of a graph is the length of its shortest cycle. Computing the girth has been studied at least since the 1970s. The fastest known algorithm that computes the girth exactly in n-node undirected unweighted graphs is an algorithm by Itai and Rodeh from 1978 that runs in O(n^\omega) time, where \omega < 2.38 is the exponent of n x n matrix multiplication. Any deterministic exact algorithm for the problem must take at least \Omega(n^2) time in dense graphs, as it must be able to distinguish graphs containing a triangle from triangle-free graphs. A natural question is then: how good an approximation to the girth can one return in subquadratic time? Such an algorithm would not even be able to read the entire input for dense enough graphs!
This problem has been studied before. The best known algorithm is by Lingas and Lundell '09, who show how to return a cycle of length at most 8g/3 for graphs of girth g in subquadratic, O(n^{1.5}) time. An open problem in their paper was whether a 2-approximation is possible in subquadratic time. We answer this question in the affirmative by giving a deterministic 2-approximation algorithm running in O(n^{5/3}) time. I will present this algorithm and discuss some extensions.
(Joint work with Liam Roditty, appeared in SODA'12.)
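For contrast, here is the standard superquadratic baseline as an illustrative Python sketch (not the talk's algorithm): run BFS from every vertex and record, for each non-tree edge scanned, the length of the cycle it certifies. The minimum value found this way is known to be the girth or the girth plus one (Itai and Rodeh).

```python
from collections import deque

def girth_upper_bound(adj):
    """Simple O(n*m) baseline: BFS from every vertex and, whenever a non-tree
    edge (x, y) is scanned, record dist[x] + dist[y] + 1, which upper-bounds
    the length of some cycle in the graph.  The minimum over all BFS roots is
    the girth or the girth plus one.  Graph: adjacency dict {v: neighbors}."""
    best = float("inf")
    for root in adj:
        dist = {root: 0}
        parent = {root: None}
        queue = deque([root])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    parent[y] = x
                    queue.append(y)
                elif parent[x] != y and parent[y] != x:
                    # Non-tree edge: certifies a cycle through the BFS tree.
                    best = min(best, dist[x] + dist[y] + 1)
    return best
```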
While most complexity theorists believe that P is not equal to NP, the existence of a fastest (up to a polynomial) algorithm for any problem in NP \ P is an intriguing open question. An explicit construction of such an algorithm for any NP-complete problem would reduce the P vs NP question to the question of whether this particular algorithm runs in polynomial time.
In this talk we will look at recent results in this area. We will see constructions of different optimal algorithms in different models of computation. We will see how these ideas can be applied to the graph non-isomorphism problem and how they relate to the notion of pseudorandom generators.
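One classical construction in this vein is Levin-style universal search. The sketch below is a toy rendering, with the enumeration of machines and the witness verifier left as assumed inputs `program` and `verifier`: it dovetails over all candidate machines, and if some machine in the enumeration solves the problem quickly, the schedule is at most polynomially slower, which is the sense in which such an algorithm is optimal up to a polynomial.

```python
import itertools

def universal_search(program, verifier, instance):
    """Toy sketch of Levin-style universal (dovetailed) search.  Assumes that
    program(i) returns the i-th machine in some fixed enumeration, as a callable
    taking (instance, budget) and returning a candidate witness or None within
    at most `budget` steps, and that verifier(instance, witness) checks a
    witness in polynomial time.  Phase t runs the first t machines for 2**t
    steps each, so if any machine in the enumeration solves the problem in time
    T, the total work exceeds T by at most a factor depending on that machine's
    index (plus lower-order terms)."""
    for t in itertools.count(1):
        for i in range(t):
            witness = program(i)(instance, budget=2 ** t)
            if witness is not None and verifier(instance, witness):
                return witness
```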
Truthfulness is fragile and demanding. It is oftentimes computationally harder than solving the original problem. Even worse, truthfulness can be utterly destroyed by small uncertainties in a mechanism's outcome. One obstacle is that truthful payments depend precisely on outcomes other than the one realized, such as the lengths of non-shortest-paths in a shortest-path auction. Single-call mechanisms are a powerful tool that circumvents this obstacle --- they implicitly charge truthful payments, guaranteeing truthfulness in expectation using only the outcome realized by the mechanism. The cost of such truthfulness is a trade-off between the expected quality of the outcome and the risk of large payments.
We largely settle when and to what extent single-call mechanisms are possible. The first single-call construction was discovered by Babaioff, Kleinberg, and Slivkins [BKS10] in single-parameter domains. They give a transformation that turns any monotone, single-parameter allocation rule into a truthful-in-expectation single-call mechanism. Our first result is a natural complement to [BKS10]: we give a new transformation that produces a single-call VCG mechanism from any allocation rule for which VCG payments are truthful. Second, in both the single-parameter and VCG settings, we precisely characterize the possible transformations, showing that a wide variety of transformations are possible but that all take a very simple form. Finally, we study the inherent trade-off between the expected quality of the outcome and the risk of large payments. We show that our construction and [BKS10] simultaneously optimize a variety of metrics in their respective domains.
Our study is motivated by settings where uncertainty in a mechanism renders other known techniques untruthful. As an example, we analyze pay-per-click advertising auctions, where the truthfulness of the standard VCG-based auction is easily broken when the auctioneer's estimated click-through-rates are imprecise.
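To make the obstacle concrete, the sketch below (illustrative, not from the paper) computes the standard VCG payments in a shortest-path procurement auction: each winning edge's payment depends on the cost of the best path avoiding that edge, an outcome the mechanism never realizes.

```python
import heapq

def shortest_path_cost(edges, s, t, exclude=None):
    """Dijkstra over an undirected graph given as {(u, v): cost}; returns the
    cost of a shortest s-t path, ignoring the edge key `exclude`, or inf."""
    adj = {}
    for (u, v), c in edges.items():
        if (u, v) == exclude:
            continue
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, x = heapq.heappop(heap)
        if x == t:
            return d
        if d > dist.get(x, float("inf")):
            continue
        for y, c in adj.get(x, []):
            if d + c < dist.get(y, float("inf")):
                dist[y] = d + c
                heapq.heappush(heap, (d + c, y))
    return float("inf")

def vcg_path_payments(edges, s, t, path):
    """VCG payment to each edge e on the purchased shortest path `path`
    (a list of edge keys of `edges`): (cost of the shortest s-t path avoiding e)
    minus (cost of the shortest path) plus (e's own reported cost).  The payment
    depends on paths that are never part of the realized outcome, which is the
    fragility the abstract refers to."""
    base = shortest_path_cost(edges, s, t)
    return {e: shortest_path_cost(edges, s, t, exclude=e) - base + edges[e]
            for e in path}
```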
I will describe a constructive proof of von Neumann's minmax theorem for 2-player zero-sum games --- specifically, an algorithm that builds a near-optimal strategy for the first player from several best responses of the first player to strategies of the second player. The idea is based on similar results of Freund and Schapire [FS99]; however, our algorithm runs in poly(n) time even when a pure strategy for player 2 is a distribution (from a convex set of distributions) over [N] for N = 2^n.
I will then highlight a few of its applications in computational complexity and cryptography: characterizing (uniform) conditional pseudoentropy [Vadhan and Zheng, 2011]; a uniform hardcore lemma with optimal parameters; a uniform dense model theorem; "separating SNARGs from all falsifiable assumptions" [Gentry and Wichs, 2011] in the uniform model; etc. (as time permits).
Joint work with Salil Vadhan.
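For readers unfamiliar with [FS99], here is the classical multiplicative-weights dynamics on a small, explicitly given payoff matrix (a toy sketch; the talk's contribution is the extension to exponentially large, implicitly represented strategy spaces): one player updates weights multiplicatively while the other best-responds each round, and the averaged strategy is near-minimax.

```python
import numpy as np

def approx_minmax(A, T=1000):
    """Freund-Schapire style dynamics for a zero-sum game with payoff matrix A
    (row player minimizes A[i, j], column player maximizes), given explicitly
    as a small matrix with entries assumed to lie in [0, 1].  The row player
    updates weights multiplicatively; the column player best-responds each
    round.  The averaged row strategy is near-minimax for large T."""
    n = A.shape[0]
    eta = np.sqrt(np.log(n) / T)           # standard learning-rate choice
    weights = np.ones(n)
    avg_row = np.zeros(n)
    for _ in range(T):
        p = weights / weights.sum()        # row player's current mixed strategy
        j = int(np.argmax(p @ A))          # column player's best response
        weights *= np.exp(-eta * A[:, j])  # penalize rows that lose a lot to j
        avg_row += p
    return avg_row / T                     # near-minimax strategy for the row player
```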