Location: We meet in the Gates 4B lobby, also known as the theory lounge.
Mailing List: thseminar@cs.stanford.edu. Email shaddin to sign up.
Contact: Shaddin Dughmi (shaddin at cs dot stanford dot edu)
Food: Wendy usually chooses the food at her discretion. However, the speaker may suggest a preference to Wendy ahead of time for the day of their talk.
For previous quarter schedules see here.
The price of anarchy is a measure of the inefficiency of selfish behavior that has been successfully analyzed in many applications. It is defined as the worst-case ratio between the welfare of a Nash equilibrium and that of an optimal (first-best) solution. Seemingly, a bound on the price of anarchy is meaningful only if players successfully reach some Nash equilibrium. Our main result is that for most of the classes of games in which the price of anarchy has been studied, results are "intrinsically robust" in the following sense: a bound on the worst-case price of anarchy for pure Nash equilibria *necessarily* implies the exact same worst-case bound for much larger sets of outcomes, including mixed Nash equilibria, correlated equilibria, and sequences of outcomes generated by natural experimentation strategies (such as successive best responses or simultaneous regret-minimization). Byproducts of our work include several new results for the inefficiency of equilibria in congestion games.
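In symbols, with $W$ denoting the welfare objective, $S$ the set of outcomes, and $\mathrm{Eq}(G)$ the relevant equilibrium set of a game $G$ (notation chosen here for concreteness, not taken from the paper), the definition reads $\mathrm{PoA}(G) = \min_{\sigma \in \mathrm{Eq}(G)} W(\sigma) \,/\, \max_{s \in S} W(s)$; the robustness result says that a worst-case bound on this ratio proved with $\mathrm{Eq}(G)$ equal to the set of pure Nash equilibria carries over unchanged when $\mathrm{Eq}(G)$ is enlarged to the outcome sets listed above.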
We consider the problem of matching a polygonal curve of complexity p with an arbitrary curve on an embedded planar graph of complexity q. With the large amount of GPS and location trace data that has become available in recent years, algorithms for such problems have become increasingly important. Direct applications include matching a GPS trace to a given roadmap; indirect applications include building and querying databases of polygonal curves. As problem sizes increase, improvements in asymptotic runtime may prove to be useful, and we present an algorithm which solves the problem with respect to the weak Fréchet distance in O(pq) time, an asymptotic improvement. In the process, we also provide a more intuitive and practical approach to the recursive division procedure used in Henzinger et al.'s shortest path algorithm for planar graphs. Unlike previous approaches to minimizing the asymptotic complexity of calculating the Fréchet distance, our method does not involve parametric search, and hence is also more practical to implement.
link: http://www.stanford.edu/~sunjian/paper_ppt/matchingpm.pdf
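As a rough point of reference for readers unfamiliar with Fréchet-style distances, the following Python sketch computes the standard discrete Fréchet distance between two polygonal curves via the classic Eiter-Mannila recurrence. It only illustrates the flavor of dynamic program involved; it is not the talk's algorithm, which handles the weak Fréchet distance between a curve and paths in an embedded planar graph, and all names in the sketch are ours.

    from math import dist
    from functools import lru_cache

    def discrete_frechet(P, Q):
        """Discrete Frechet distance between point sequences P and Q.
        Illustrative only; the talk concerns the weak Frechet distance
        between a polygonal curve and curves on an embedded planar graph."""
        @lru_cache(maxsize=None)
        def c(i, j):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                return d
            if i == 0:
                return max(c(0, j - 1), d)
            if j == 0:
                return max(c(i - 1, 0), d)
            # advance along P, along Q, or along both; keep the cheapest coupling
            return max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)), d)
        return c(len(P) - 1, len(Q) - 1)

    # Example: two parallel three-vertex curves at distance 1
    print(discrete_frechet(((0, 0), (1, 0), (2, 0)), ((0, 1), (1, 1), (2, 1))))  # 1.0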
We present a randomized mechanism for multi-unit auctions that is truthful in expectation and achieves the optimal approximation guarantee assuming P ≠ NP: a fully polynomial-time approximation scheme in both the computational and communication models. The previous best known result was a truthful 2-approximation, matched by a lower bound for deterministic maximal-in-range (MIR) mechanisms. This means that any improvement must either leverage randomization or be non-MIR; we accomplish the former in our result.
This is joint work with Shahar Dobzinski.
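For readers who want the underlying optimization pinned down, here is a minimal Python sketch of exact welfare maximization in a multi-unit auction (m identical items; each bidder reports a value for every possible number of items). This brute force is exponential in the number of bidders and is only meant to state the problem; it is not the mechanism or the approximation scheme from the talk, and all names are ours.

    from itertools import product

    def max_welfare(valuations, m):
        """Exact welfare maximization for a multi-unit auction with m identical items.

        valuations[i][k] is bidder i's value for receiving k items (valuations[i][0] == 0).
        Returns (best welfare, allocation).  Brute force, exponential in the number of
        bidders; the talk concerns truthful mechanisms that approximate this optimum
        in polynomial time.
        """
        n = len(valuations)
        best = (0, (0,) * n)
        for alloc in product(range(m + 1), repeat=n):
            if sum(alloc) <= m:
                welfare = sum(valuations[i][alloc[i]] for i in range(n))
                if welfare > best[0]:
                    best = (welfare, alloc)
        return best

    # Two bidders, three items: bidder 0 has decreasing marginal values,
    # bidder 1 only values the full bundle.
    print(max_welfare([[0, 5, 8, 9], [0, 1, 2, 10]], 3))  # (10, (0, 3))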
The Reverse Nearest Neighbor (RNN) query, as the name suggests, is the inverse of the nearest neighbor problem and arises in a variety of contexts. The RNN problem asks for a data structure that can retrieve the set of points in a given point set that have a specified query point as their nearest neighbor. Although a number of heuristic approaches have been proposed for the problem, currently known data structures suffer from the "curse of dimensionality" (they have either a linear query time or an exponential dependence on the dimension). In analogy with the nearest neighbor problem, we define the natural approximate version of the RNN problem and show that it is possible to overcome the curse of dimensionality for the approximate version. In particular, we propose a locality-sensitive hashing based data structure for answering these queries in sublinear time (with an additive term proportional to the size of the solution) and polynomial space with high probability.
Joint work with David Arthur and Steve Oudot.
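To pin down the query being answered, here is a naive Python sketch of an exact reverse nearest neighbor query that simply scans all points. It takes linear time per query point examined (quadratic overall) and is only meant to illustrate the definition, not the LSH-based structure from the talk; all names are ours.

    from math import dist

    def reverse_nearest_neighbors(points, q):
        """Return the points p in `points` for which q is a nearest neighbor of p,
        i.e. dist(p, q) is no larger than dist(p, p') for every other p' in `points`.
        Naive scan for illustration; the talk's data structure answers the
        approximate version of this query in sublinear time."""
        result = []
        for i, p in enumerate(points):
            if all(dist(p, q) <= dist(p, other)
                   for j, other in enumerate(points) if j != i):
                result.append(p)
        return result

    pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
    print(reverse_nearest_neighbors(pts, (4.0, 4.0)))  # [(5.0, 5.0)]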
Finding the right benchmark for worst-case (prior-free) revenue maximization is usually done on a problem-by-problem basis, without any common principle or justification. I will talk about recent results by Hartline and Roughgarden on a new benchmark for all single-parameter domains (satisfying some conditions) and its application to sponsored search auctions. Comparisons will be made between the new benchmark and existing ones (studied by Abrams and Ghosh), together with their competitive mechanisms.
I will present the following paper: Asymptotically Efficient Lattice-Based Digital Signatures by Vadim Lyubashevsky and Daniele Micciancio. This appeared in TCC 2008.
The authors give a direct construction of digital signatures based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices. The construction is provably secure based on the worst-case hardness of approximating the shortest vector in such lattices within a polynomial factor, and it is also asymptotically efficient: the time complexity of the signing and verification algorithms, as well as the key and signature sizes, are almost linear (up to poly-logarithmic factors) in the dimension n of the underlying lattice.
In this talk, I will present how formal methods have been used over the last decade to improve network security. I will then introduce a model based on game theory and TATL that overcomes the limitations of previous models, and illustrate it with an example. Finally, I will discuss how this model scales in practice and present the implementation, NetQi (www.netqi.org).
Fingerprinting codes, originally designed for embedding traceable fingerprints in digital content, have many applications in cryptography. Most notably, they are used to construct traitor tracing systems. Recently there has been some interest in constructing {\em robust} fingerprinting codes: codes capable of tracing words even when the pirate adversarially destroys a $\delta$ fraction of the marks in the fingerprint. The best construction to date, due to Boneh and Naor, produces codewords whose length is proportional to $c^4/(1-\delta)^2$ where $c$ is the number of words at the adversary's disposal. We construct codes whose length is proportional to $(c \log c)^2 / (1-\delta)$, which is optimal up to $\log c$ factors. These new codes lead to traitor tracing systems with constant size ciphertext and much shorter secret keys than previously possible.