Location: either the Gates 4B lobby or Terman 453.
Mailing list: thseminar@cs.stanford.edu. Email Qiqi to sign up.
Contact: Qiqi Yan (qiqiyan@stanford...)
Food: Wendy usually chooses the food at her discretion, but the speaker may suggest a preference.
This meeting is the place where people sign up for talks.
We study a general subclass of concave games that we call socially-concave games. We show that if each player follows any no-external-regret minimization procedure, then the dynamics converge, in the sense that the average action vector converges to a Nash equilibrium and the utility of each player converges to her utility at that Nash equilibrium.
We show that many natural games are socially-concave games. Specifically, we show that linear Cournot competition, linear resource allocation games, and atomic splittable routing with affine cost functions are socially-concave games, and therefore our convergence result applies to them. In addition, we show that simple best-response dynamics may diverge for linear resource allocation games, and are known to diverge for linear Cournot competition and for atomic routing games. For TCP congestion games, we show that the games are socially-concave "near" the equilibrium, and using our general methodology we show convergence of a specific regret-minimization dynamics.
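For intuition, here is a minimal simulation (ours, not the paper's construction) of one no-external-regret procedure, projected online gradient ascent, run by both firms in a linear Cournot duopoly with inverse demand p = a - b(q1 + q2) and unit cost c. The time-averaged quantities approach the Nash equilibrium q* = (a - c)/(3b), as the convergence result predicts; all parameter values are illustrative.

    import numpy as np

    a, b, c = 10.0, 1.0, 1.0        # inverse demand p = a - b*(q1+q2), unit cost c
    q_star = (a - c) / (3 * b)      # unique Cournot-Nash quantity per firm

    q = np.array([0.0, 5.0])        # arbitrary starting quantities
    running_sum = np.zeros(2)
    T = 20000
    for t in range(1, T + 1):
        # gradient of firm i's profit (a - b*(q_i + q_j) - c)*q_i with respect to q_i
        grad = a - c - 2 * b * q - b * q[::-1]
        q = np.clip(q + grad / np.sqrt(t), 0.0, a / b)   # projected ascent step
        running_sum += q

    print("average play:", running_sum / T, " Nash quantity:", q_star)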
Joint work with Eyal Even-Dar and Yishay Mansour.
Bargaining networks model the behavior of a set of players that need
to reach pairwise agreements for making profits. Nash bargaining
solutions are special outcomes of such games that are both stable and
balanced. Kleinberg and Tardos proved a sharp algorithmic
characterization of such outcomes, but left open the problem of how
the actual bargaining process converges to them. A partial answer was
provided by Azar et al., who proposed a distributed, but not natural,
algorithm for constructing Nash bargaining solutions, without
polynomial bounds on its convergence rate. In this paper, we introduce
a simple and natural model for the bargaining process, and study its
convergence rate to Nash bargaining solutions.
At each time step, each player proposes a deal to each of her
neighbors. The proposal consists of a share of the potential profit in
case of agreement. The share is chosen to be balanced in Nash's sense
as far as this is feasible (with respect to the current best
alternatives for both players). We prove that, whenever the Nash
bargaining solution is unique (and satisfies a positive gap condition),
this dynamics converges to it in polynomial time.
Our analysis is based on an approximate decoupling phenomenon between
the dynamics on different substructures of the network. This approach
may be of general interest for the analysis of local algorithms on
networks.
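As a toy illustration of such dynamics, here is a simplified synchronous rendition (ours; the paper's exact update rule and analysis differ) on a three-player path with profits 1.0 and 0.5 on its two edges. Each player repeatedly proposes to each neighbor the Nash-balanced share given her current best alternative; the offers settle on the split (0.25, 0.75) of the stronger edge, which is the balanced outcome here.

    w = {(0, 1): 1.0, (1, 2): 0.5}                    # edge -> profit to be split
    w.update({(j, i): p for (i, j), p in w.items()})  # make it symmetric
    nbrs = {0: [1], 1: [0, 2], 2: [1]}

    offer = {e: 0.0 for e in w}          # offer[(i, j)]: share i proposes j receive
    for _ in range(100):
        new = {}
        for (i, j) in w:
            # each side's current best alternative, excluding the other
            a_i = max((offer[(k, i)] for k in nbrs[i] if k != j), default=0.0)
            a_j = max((offer[(k, j)] for k in nbrs[j] if k != i), default=0.0)
            surplus = w[(i, j)] - a_i - a_j
            if surplus >= 0:             # Nash-balanced split of the surplus
                new[(i, j)] = a_j + surplus / 2
            else:                        # no profitable deal: offer what remains
                new[(i, j)] = max(0.0, w[(i, j)] - a_i)
        offer = new

    for (i, j) in sorted(w):
        print(f"{i} offers {j} a share of {offer[(i, j)]:.3f}")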
This is joint work with Mohsen Bayati, Christian Borgs, Jennifer
Chayes and Andrea Montanari.
It is well established that the problem of detecting a triangle in a graph can be reduced to
Boolean matrix multiplication (BMM). Many have asked whether there is a reduction in the other
direction: can a fast triangle detection algorithm be used to solve BMM faster? The general
intuition has been that such a reduction is impossible: for example, triangle detection returns one
bit, while a BMM algorithm returns n^2 bits. Similar reasoning goes for other matrix products
and their corresponding triangle problems.
We show this intuition is false, and present a new generic strategy for efficiently computing
matrix products over algebraic structures used in optimization. We say an algorithm on n x n
matrices (or n-node graphs) is truly subcubic if it runs in O(n^{3-eps} poly(log M)) time for some
eps > 0, where M is the absolute value of the largest entry (or the largest edge weight). We prove an
equivalence between the existence of truly subcubic algorithms for any (min, *) matrix product,
the corresponding matrix product verification problem, and a corresponding triangle detection
problem. Our work simplifies and unifies prior work, and has some new consequences:
(1) The following problems either all have truly subcubic algorithms, or none of them do:
- The all-pairs shortest paths problem on directed graphs.
- The all-pairs shortest paths problem on undirected graphs.
- Detecting if a weighted graph has a triangle of negative total edge weight.
- Reporting n^{2.99} negative triangles in a graph.
- Checking whether a given matrix defines a metric.
- Verifying the correctness of a matrix product over the (min, +)-semiring.
(2) The following either all have truly subcubic combinatorial algorithms, or none of them do:
- Boolean matrix multiplication.
- Detecting if a graph has a triangle.
- Reporting n^{2.99} triangles in a graph.
- Verifying the correctness of a matrix product over the Boolean semiring.
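To make the correspondence concrete, the long-known "easy" direction for the (min, +) structure is that negative-triangle detection reduces to a single (min, +) product; the talk's contribution is, roughly, the converse. A small illustrative instance:

    INF = float("inf")

    def min_plus(A, B):
        n = len(A)
        return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    # edge-weight matrix; INF on the diagonal rules out degenerate triangles
    W = [[INF,   2,  -1],
         [  2, INF,  -4],
         [ -1,  -4, INF]]

    C = min_plus(W, W)               # C[i][j] = min_k W[i][k] + W[k][j]
    n = len(W)
    has_negative_triangle = any(C[i][j] + W[j][i] < 0
                                for i in range(n) for j in range(n))
    print(has_negative_triangle)     # True: triangle 0-1-2 has total weight -3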
Joint work with Ryan Williams.
We consider the well-studied problem of finding a perfect
matching in a d-regular bipartite graph on 2n nodes with m = nd
edges. The best-known algorithm for general bipartite graphs (due to
Hopcroft and Karp) takes time O(m\sqrt{n}). In regular bipartite graphs,
however, a matching is known to be computable in O(m) time (due to Cole,
Ost, and Schirra). In a recent line of work by Goel, Kapralov, and Khanna, the
O(m) time algorithm was improved first to \tilde O\left(\min\{m,
n^{2.5}/d\}\right) and then to \tilde O\left(\min\{m,
n^2/d\}\right). It was also shown that the latter algorithm is optimal up
to polylogarithmic factors among all algorithms that use non-adaptive
uniform sampling to reduce the size of the graph as a first step.
We give a randomized algorithm that finds a perfect matching
in a d-regular bipartite graph and runs in O(n\log n) time (both in expectation
and with high probability). The algorithm performs an appropriately
truncated random walk on a modified graph to find augmenting
paths. Our algorithm may be viewed as using adaptive uniform sampling, and
is thus able to bypass the limitations of (non-adaptive) uniform sampling
established in earlier work. Our techniques also give an
algorithm that successively finds a matching in the support of a doubly
stochastic matrix in expected time O(n\log^2 n), with O(m)
pre-processing time; this gives a simple O(m+mn\log^2 n) time algorithm
for finding the Birkhoff-von Neumann decomposition of a doubly stochastic
matrix.
We show that randomization is crucial for obtaining o(nd) time algorithms
by establishing an \Omega(nd) lower bound for any deterministic
algorithm. We also show that there does not exist a randomized algorithm that
finds a matching in a regular bipartite multigraph and takes o(n\log n)
time with high probability.
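The following is a much-simplified sketch (ours; the paper's modified graph, truncation, and analysis are omitted, and the helper names are hypothetical) of the random-walk augmentation idea: walk from an unmatched left vertex, taking a random edge at each left vertex and the matched edge back from each matched right vertex, then augment along the loop-erased walk.

    import random

    def regular_bipartite(n, d):
        # union of d random permutations: a d-regular bipartite multigraph
        adj = [[] for _ in range(n)]
        for _ in range(d):
            perm = random.sample(range(n), n)
            for u in range(n):
                adj[u].append(perm[u])
        return adj

    def perfect_matching(adj):
        n = len(adj)
        match_l, match_r = [None] * n, [None] * n
        for u0 in range(n):            # one augmentation per left vertex
            last_exit = {}             # last edge the walk took out of each left vertex
            u = u0
            while True:                # random alternating walk
                v = random.choice(adj[u])
                last_exit[u] = v
                if match_r[v] is None:
                    break              # reached an unmatched right vertex
                u = match_r[v]         # follow the matched edge back to the left side
            u = u0                     # augment along the loop-erased walk
            while True:
                v = last_exit[u]
                prev = match_r[v]      # v's old partner (None at the walk's end)
                match_l[u], match_r[v] = v, u
                if prev is None:
                    break
                u = prev
        return match_l

    adj = regular_bipartite(1000, 5)
    matching = perfect_matching(adj)
    assert sorted(matching) == list(range(1000))   # it is a perfect matching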
This is joint work with Ashish Goel and Sanjeev Khanna.
We study the complexity of rationalizing network formation. In this
problem we fix an underlying model describing how selfish parties (the
vertices) produce a graph by making individual decisions to form or not
form incident edges. The model is equipped with a notion of stability
(or equilibrium), and we observe a set of "snapshots" of graphs that are
assumed to be stable. From this we would like to infer some unobserved
data about the system: edge prices, or how much each vertex values short
paths to each other vertex.
We study two rationalization problems arising from the network formation
model of Jackson and Wolinsky [JW96]. When the goal is to infer edge
prices, we observe that the rationalization problem is easy. The problem
remains easy even when rationalizing prices do not exist and we instead
wish to find prices that maximize the stability of the system. The
latter situation may arise from noisy data or inadequacies in the model.
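To see why, note that in this setting the stability conditions are linear in the unknown prices, so finding rationalizing prices is a linear-programming feasibility question. The tiny system below is fabricated purely for illustration; the actual inequalities come from the Jackson-Wolinsky stability conditions.

    import numpy as np
    from scipy.optimize import linprog

    # unknown prices x = (p1, p2); illustrative stability constraints A x <= b
    A = np.array([[ 1.0,  0.0],    # p1 <= 3  (a formed edge must be affordable)
                  [ 0.0,  1.0],    # p2 <= 2
                  [-1.0, -1.0]])   # p1 + p2 >= 4  (an absent edge must be too costly)
    b = np.array([3.0, 2.0, -4.0])

    res = linprog(c=[0.0, 0.0], A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    print("rationalizing prices exist:", res.success, "e.g.", res.x)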
In contrast, when the edge prices are given and the goal is instead to
infer valuations of each vertex by each other vertex, we prove that the
rationalization problem becomes NP-hard. Our proof exposes a close
connection between rationalization problems and the Inequality-SAT
(I-SAT) problem.
Finally and most significantly, we prove that an approximation version
of this NP-complete rationalization problem is NP-hard to approximate to
within a ratio better than 1/2. To do this we prove
(1/2+\eps)-approximation hardness for a variant of I-SAT in which all
coefficients are non-negative. This in turn follows from a tight
hardness result for MAX-LINR+ (linear equations over the reals, with
non-negative coefficients), which we prove by a (non-trivial)
modification of a result of Guruswami and Raghavendra [GR07], which
achieved tight hardness for this problem without the non-negativity
constraint.
Our technical contributions regarding the hardness of I-SAT and
MAX-LINR+ may be of independent interest, given the generality of these
problems.
Congestion games, introduced by Rosenthal (1973), form a broad class of games that includes models of selfish routing, network formation, migration of species, and more. A theoretical study of equilibria in these games can therefore provide useful insight into the practical situations they model. The price of anarchy is one metric that captures the social inefficiency of the Nash equilibria in these games.
We characterize the price of anarchy in weighted congestion games as a function of the allowable resource cost functions. Our results provide as thorough an understanding of this quantity as is already known for nonatomic and unweighted congestion games, and take the form of universal (cost-function-independent) worst-case examples. One noteworthy byproduct of our proofs is the fact that weighted congestion games are "tight", which implies that the worst-case prices of anarchy with respect to pure Nash, mixed Nash, correlated, and coarse correlated equilibria are always equal (under mild conditions on the allowable cost functions). Another is the fact that, like nonatomic but unlike atomic (unweighted) congestion games, weighted congestion games with trivial structure already realize the worst-case price of anarchy, at least for polynomial cost functions.
We also prove a new result about unweighted congestion games: the worst-case price of anarchy in symmetric games is, as the number of players goes to infinity, as large as in their more general asymmetric counterparts.
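As a toy illustration (not the universal worst-case construction from the talk), the following brute-force computation finds the pure-Nash price of anarchy of a tiny weighted congestion game on two parallel links with linear cost functions c_e(x) = a_e * x; with the parameters below it reports 9/7.

    from itertools import product

    weights = [1.0, 2.0]              # player weights
    a = [1.0, 3.0]                    # per-unit cost coefficient of each link

    def loads(profile):
        x = [0.0] * len(a)
        for w, e in zip(weights, profile):
            x[e] += w
        return x

    def social_cost(profile):
        return sum(a[e] * x * x for e, x in enumerate(loads(profile)))

    def is_pure_nash(profile):
        # player i's cost is weights[i] * a_e * x_e; the weight cancels in comparisons
        x = loads(profile)
        for i, e in enumerate(profile):
            for f in range(len(a)):   # can player i strictly gain by moving to f?
                if f != e and a[f] * (x[f] + weights[i]) < a[e] * x[e]:
                    return False
        return True

    profiles = list(product(range(len(a)), repeat=len(weights)))
    nash = [p for p in profiles if is_pure_nash(p)]
    opt = min(social_cost(p) for p in profiles)
    print("price of anarchy:", max(social_cost(p) for p in nash) / opt)   # 9/7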
Joint work with Martin Gairing (Liverpool) and Tim Roughgarden (Stanford).
No abstract.
We propose a model for the evolution of market share in the presence of social influence. Using techniques from stochastic approximation theory, we show that market shares converge to an equilibrium. In the special case where the choice model is a multinomial logit model, we show that inequality in the market increases with social influence, and that with strong enough social influence, monopoly occurs.
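As a quick illustration (our parameterization, not necessarily the talk's), the simulation below lets each arriving consumer choose among three equal-quality products with multinomial-logit probabilities that depend on current market shares. Weak social influence leaves the shares balanced, while strong social influence drives the market toward monopoly.

    import numpy as np

    rng = np.random.default_rng(0)
    q = np.array([0.0, 0.0, 0.0])        # intrinsic product qualities (equal here)

    for beta in (0.5, 8.0):              # weak vs. strong social influence
        counts = np.ones(3)              # one initial adopter per product
        for _ in range(50_000):
            share = counts / counts.sum()
            p = np.exp(q + beta * share) # multinomial-logit choice probabilities
            p /= p.sum()
            counts[rng.choice(3, p=p)] += 1
        print(f"beta={beta}: final shares = {np.round(counts / counts.sum(), 3)}")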
The web security community has been concerned for some time with attacks that aim to steal a user's cookies (or other items). Here we present a study of the prevalence of such attacks in the real world, focusing on several sites that may be familiar to the audience. We also examine defenses, in particular the effectiveness of secure cookies in thwarting such attacks.
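For background on the main defense examined: a cookie marked Secure is only ever sent over HTTPS, so a network attacker observing (or forcing) plain-HTTP traffic never sees it; the HttpOnly flag additionally hides it from page scripts. A minimal standard-library sketch with an illustrative cookie name:

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "opaque-random-token"  # hypothetical session identifier
    cookie["session"]["secure"] = True         # sent over HTTPS connections only
    cookie["session"]["httponly"] = True       # invisible to document.cookie
    print(cookie.output())                     # a Set-Cookie header with both flags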
This is joint work with Elie Bursztein, Mike Hamburg, and Hart Montgomery.