Location: Gates 415, Stanford (directions at http://infolab.stanford.edu/pub/keller/gates-map.html)
Schedule:
9:30 Coffee (provided) and hobnobbing
10-10:40 Nikhil Bansal, "Min-Max Graph Partitioning"
10:40-11:20 David P. Woodruff, "Tight Bounds for Distributed Functional Monitoring"
11:20-12 C. Seshadhri, "Understanding simple triangle enumeration algorithms on real-world graphs"
12-1 lunch (on your own)
1-1:40 Sofya Raskhodnikova, "Testing and Reconstruction of Lipschitz Functions with Applications to Data Privacy"
1:40-2:20 Aaron Roth, "Privately Computing Equilibria"
2:20-3 Berthold Vöcking, "A Universally-truthful Approximation Scheme for Multi-unit Auctions"
3-3:30 coffee break (provided)
3:30-4:10 Rafael Pass, "An Epistemic Approach to Mechanism Design"
4:10-4:50 Harry Buhrman, "Position-based cryptography"
4:50-5:30 Swastik Kopparty, "List-Decoding Multiplicity Codes"
(dinner on your own)
---
ABSTRACTS
Nikhil Bansal, "Min-Max Graph Partitioning"
Abstract: I will talk about graph partitioning problems from a min-max
perspective, in which an input graph on n vertices should be
partitioned into k parts, and the objective is to minimize the maximum
number of edges leaving a single part. The two main versions we
consider are where the k parts need to be of equal size, and where
they must separate a set of k given terminals. We design
O(\sqrt{\log n \log k})-approximations for these problems,
substantially improving upon previous results. The main tool we use is
a new O(\sqrt{\log n \log k})-approximation algorithm for the small
set expansion problem, and a new randomized uncrossing procedure which
may be of independent interest.
Based on joint work with Uriel Feige, Robert Krauthgamer, Konstantin
Makarychev, Viswanath Nagarajan, Joseph Naor and Roy Schwartz.
---
David P. Woodruff, "Tight Bounds for Distributed Functional Monitoring"
We resolve several open questions in the area of distributed
functional monitoring. For the purposes of this talk, we can think of
there being k parties, each holding their own private vector, and the
parties want to compute a statistic on the sum v of the k vectors. We
show that to compute a 1+eps-approximation to the support of v or to
the Euclidean norm of v, k/eps^2 communication is necessary up to
polylogarithmic factors. These lower bounds are matched by known upper
bounds. We also mention results on approximating the Euclidean norm in
the model in which the private vectors are evolving with time and a
central coordinator must maintain an approximation to |v|_2 at all
times. We improve the previous k^2/poly(eps) communication bound to
k/poly(eps), which resolves a question of Cormode, Muthukrishnan, and
Yi.
Joint work with Qin Zhang
---
C. Seshadhri, "Understanding simple triangle enumeration algorithms on real-world graphs"
Abstract: Triangle enumeration is an important practical (and
interesting theoretical) problem for real-world graphs. This work is
motivated by the following observation: simple, embarrassingly
parallel heuristics for triangle enumeration perform rather well on
certain classes of real-world graphs (like infrastructure networks and
web networks). On the other hand, it is very easy to come up with
terrible worst-case examples for these heuristics. Can we give some
theoretical reason for why these algorithms work?
We analyze a basic binning algorithm for triangle counting (which is
quite commonly used among practitioners) for graphs generated by the
edge-configuration model. This is a fairly common model studied by
statisticians and physicists who wish to construct graphs with a given
degree-sequence. We prove that the running time is basically
proportional to the four-thirds moment of the degree
distribution. This is a very pleasant discovery: moments of degree
distributions of real-world graphs are considered a very important
property (think of heavy-tail and power law), and it is this property
that allows speedup in simple triangle enumeration algorithms.
Joint work with Jonathan Berry, Luke Fostvedt, Daniel Nordman, Cynthia
Phillips, and Alyson Wilson
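The abstract does not spell out the binning algorithm; a common variant
used by practitioners ranks vertices by degree, orients every edge
toward its higher-ranked endpoint, and counts the wedges that close
into triangles. A sketch under that assumption (the paper's exact
variant may differ):

```python
from itertools import combinations

def count_triangles(adj):
    # Rank vertices by (degree, label); orienting each edge from
    # lower to higher rank ensures every triangle is counted exactly
    # once, at its minimum-rank vertex.
    rank = {v: (len(adj[v]), v) for v in adj}
    out = {v: [u for u in adj[v] if rank[u] > rank[v]] for v in adj}
    triangles = 0
    for v in adj:
        for u, w in combinations(out[v], 2):
            if w in adj[u]:  # the wedge u-v-w closes into a triangle
                triangles += 1
    return triangles

# A 4-clique contains exactly 4 triangles.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(count_triangles(k4))  # 4
```

The work per vertex is governed by out-degrees under this orientation,
which is how moments of the degree distribution enter the analysis.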
---
Sofya Raskhodnikova, "Testing and Reconstruction of Lipschitz Functions with Applications to Data Privacy"
Abstract:
A function f : D -> R has Lipschitz constant c if d_R(f(x), f(y)) <= c
d_D(x, y) for all x, y in D, where d_R and d_D denote the distance
functions on the range and domain of f, respectively. We say a
function is Lipschitz if it has Lipschitz constant 1. (Note that
rescaling by a factor of 1/c converts a function with a Lipschitz
constant c into a Lipschitz function.) Intuitively, a Lipschitz
constant of f is a bound on how sensitive f is to small changes in its
input. Lipschitz constants are important in mathematical analysis, the
theory of differential equations and other areas of mathematics and
computer science. However, in general, it is computationally
infeasible to find a Lipschitz constant of a given function f or even
to verify that f is c-Lipschitz for a given number c.
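For concreteness, on a small finite domain the definition can be
verified by brute force; the sketch below (illustrative, not from the
paper) checks the c-Lipschitz condition for f over {1,...,n}^d under
L1 distance, at a cost quadratic in the n^d domain size, which is
exactly the blow-up that motivates testing:

```python
from itertools import product

def is_c_lipschitz(f, n, d, c):
    # Check |f(x) - f(y)| <= c * ||x - y||_1 over all pairs of
    # points in {1,...,n}^d; cost is quadratic in n^d.
    points = list(product(range(1, n + 1), repeat=d))
    for x in points:
        for y in points:
            dist = sum(abs(a - b) for a, b in zip(x, y))
            if abs(f(x) - f(y)) > c * dist:
                return False
    return True

print(is_c_lipschitz(lambda x: sum(x), 3, 2, 1))      # True
print(is_c_lipschitz(lambda x: 2 * sum(x), 3, 2, 1))  # False
```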
We initiate the study of testing (which is a relaxation of the
decision problem described above) and local reconstruction of the
Lipschitz property of functions. A property tester, given a proximity
parameter epsilon, has to distinguish functions with the property (in
this case, Lipschitz) from functions that are epsilon-far from having
the property, that is, differ from every function with the property on
at least an epsilon fraction of the domain. A local filter
reconstructs a desired property (in this case, Lipschitz) in the
following sense: given an arbitrary function f and a query x, it
returns g(x), where the resulting function g satisfies the property,
changing f only when necessary. If f has the property, g must be equal
to f.
We design efficient testers and local filters for functions over
domains of the form {1,...,n}^d, equipped with L1 distance, and give
corresponding lower bounds. The testers that we developed have
applications to program analysis. The local filters have applications
to data privacy. The application to privacy is based on the fact that
a function f of entries in a database of sensitive information can be
released with noise of magnitude proportional to a Lipschitz constant
of f, while preserving the privacy of individuals whose data is stored
in the database (Dwork, McSherry, Nissim and Smith, TCC 2006). We give
a differentially private mechanism, based on local filters, for
releasing a function f when a purported Lipschitz constant of f is
provided by a distrusted client.
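The cited mechanism of Dwork et al. adds Laplace noise scaled to the
Lipschitz constant (the sensitivity) of f; a minimal sketch, where the
function names and the inverse-transform sampler are ours, not the
paper's:

```python
import math
import random

def laplace_noise(scale):
    # Sample Lap(0, scale) by inverse-transform sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_release(value, lipschitz_const, epsilon):
    # Adding Lap(c / epsilon) noise to f's value gives
    # epsilon-differential privacy when c bounds how much f can
    # change if a single database entry changes.
    return value + laplace_noise(lipschitz_const / epsilon)
```

If a client understates the constant, this naive release would break
privacy; the filter-based mechanism in the talk guards against exactly
that.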
Based on a FOCS 2011 paper with M. Jha and recent work with
P. Awasthi, M. Jha and M. Molinaro
---
Aaron Roth - Privately Computing Equilibria (with applications to equilibrium
selection and repeated games)
We consider "large games" in which the choice of action of each
individual agent can have an arbitrarily large effect on his own
utility, but only a bounded effect on the utility of any other agent.
In such games, we study the problem of computing correlated equilibria
while preserving the (differential) privacy of the utility function of
each agent. We give a tight information theoretic lower bound on the
accuracy to which approximate equilibria can be computed while
preserving differential privacy, and two algorithms. Our first
algorithm is computationally efficient, but has a suboptimal
dependence on the number of actions in the game (it continues to match
the information theoretic lower bound in games with O(1) actions). The
second algorithm is not computationally efficient, but gives a tight
approximation factor that depends only logarithmically on the number
of actions in the game.
Finally, we mention a couple of game theoretic implications of the
ability to compute private equilibria. We get an approximately
truthful equilibrium selection mechanism for large games, and a way to
analyze noisy repeated games with growing populations that eliminates
folk theorem equilibria, and collapses all of the equilibria of the
repeated game to the equilibria of the single-shot game.
This is joint work with Michael Kearns, Mallesh Pai, and Jon Ullman.
---
Berthold Vöcking - A Universally-truthful Approximation Scheme for Multi-unit Auctions
We present a randomized, polynomial-time approximation scheme for
multi-unit auctions. Our mechanism is truthful in the universal sense,
i.e., a distribution over deterministically truthful
mechanisms. Previously known approximation schemes were truthful in
expectation, which is a weaker notion of truthfulness assuming
risk-neutral bidders. The existence of a universally truthful approximation
scheme was questioned by previous work showing that multi-unit
auctions with certain technical restrictions on their output do not
admit a polynomial-time, universally truthful mechanism with
approximation factor better than two.
Our new mechanism employs VCG payments in a non-standard way: the
deterministic mechanisms underlying our universally truthful
approximation scheme are not maximal in range and do not belong to the
class of affine maximizers, which at first glance seems to contradict
previous characterizations of VCG-based mechanisms. Instead, each of
these deterministic mechanisms is composed of a collection of affine
maximizers, one for each bidder, yielding a subjective variant of VCG.
---
Rafael Pass - An Epistemic Approach to Mechanism Design
We introduce an epistemic framework for analyzing mechanisms. This
framework enables mechanism designers to define desirability of
outcomes not only based on players' actual payoff types and their
beliefs about the payoff types of other players (as in the classic
models), but also based on higher order beliefs of the players (i.e.,
beliefs about beliefs about ... the payoff types of the players). In
this framework, we may also use epistemic solution concepts to analyze
what outcomes are consistent with different levels of rationality: a
player is k-level rational if he is rational and considers all other
players (k-1)-level rational; following Aumann, we consider a very
weak notion of rationality: player i is *rational* if he uses a
strategy \sigma such that for every alternative strategy \sigma', i
considers some world possible where \sigma performs at least as well
as \sigma'.
We showcase this framework in the context of single-good auctions,
presenting an interim individually-rational mechanism with the
following revenue guarantee: for any k\geq 0, any outcome consistent
with all players being (k+1)-level rational guarantees the seller a
revenue of G^k - \epsilon (for any \epsilon > 0), where G^k is the
second highest belief about belief about ... (k times) about the
highest valuation of some player. We additionally show that no interim
individually rational mechanism can guarantee a revenue of G^k -
\epsilon for any constant \epsilon, if only assuming players are
k-level rational (as opposed to (k+1)-level rational). Taken together,
these results demonstrate the existence of a ``revenue-rationality
hierarchy'': strictly higher revenue may be extracted by assuming
players satisfy higher levels of rationality.
Towards analyzing our mechanism and proving our lower bounds, we
introduce an iterative deletion procedure of dominated strategies that
precisely characterizes strategies consistent with k-level
rationality.
Prior knowledge of mechanism design or epistemic logic will not be
assumed.
Joint work with Jing Chen and Silvio Micali.
---
Harry Buhrman, "Position-based cryptography"
On 20 July 1969, millions of people held their breath as they watched,
live on television, Neil Armstrong set foot on the Moon. Yet Fox
Television has reported that a staggering 20% of Americans have had
doubts about the Apollo 11 mission. Could it have been a hoax staged
by Hollywood studios here on Earth? Position-based cryptography may
offer a solution. This kind of cryptography uses the geographic
position of a party as its sole credential, whereas normally digital
keys or biometric features are used.
A central building block in position-based cryptography is that of
position-verification. The goal is to prove to a set of verifiers
that one is at a certain geographical location. Protocols typically
assume that messages cannot travel faster than the speed of light. By
responding to a verifier in a timely manner one can guarantee that one
is within a certain distance of that verifier. Quite recently it was
shown that position-verification protocols only based on this
relativistic principle can be broken by attackers who simulate being
at the claimed position while physically residing elsewhere in space.
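The relativistic timing argument reduces to simple arithmetic: a reply
that arrives after round-trip time t can only have come from within
c*t/2 of the verifier. A sketch with an illustrative timing value:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def max_distance(round_trip_seconds):
    # The reply cannot travel faster than light, so the prover is
    # within (speed of light) * t / 2 of the verifier.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A one-microsecond round trip places the prover within ~150 meters.
print(max_distance(1e-6))  # ~149.9
```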
Because of the no-cloning property of quantum information (qubits) it
was believed that with the use of quantum messages one could devise
protocols that were resistant to such collaborative attacks. Several
schemes were proposed that later turned out to be insecure. Finally,
it was shown that no unconditionally secure scheme is possible in the
quantum case either. We will review the field of position-based quantum
cryptography and highlight some of the research currently going on in
order to develop, using reasonable assumptions on the capabilities of
the attackers, protocols that are secure in practice.
---
Swastik Kopparty, "List-Decoding Multiplicity Codes"
Multiplicity codes allow one to encode data with just an epsilon
fraction of redundancy, so that even if a constant fraction of the
encoded bits are corrupted, any one bit of the original data can be
recovered in sublinear time with high probability.
I will talk about a new result showing that these codes also tolerate
a large fraction of errors:
(1) they can achieve "list-decoding capacity",
(2) they can be locally list-decoded beyond half their minimum
distance.
In particular, we give the first polynomial-time algorithms for
decoding multiplicity codes up to half their minimum distance.
The first of these results is based on solving some kinds of algebraic
differential equations.
The second is based on a family of algebraically repelling
"space-filling" curves.