Location: Gates 4B lobby.
Mailing list: thseminar@cs.stanford.edu. Email Qiqi to sign up.
Contact: Qiqi Yan (qiqiyan@stanford...)
Abstract: I'll discuss Leonid Gurvits's breakthrough on lower bounds for the permanent.
Abstract:
Algorithmic Mechanism Design is concerned with solving computational problems in situations where essential problem data is being held privately by selfish agents. Techniques from economics have long existed for aligning the incentives of the agents with the social good, yet they often require solving a hard optimization problem exactly. On the other hand, computer scientists have coped with intractability by designing approximation algorithms. The main challenge of algorithmic mechanism design is to combine these economic and computational guarantees by designing mechanisms that are "truthful", and moreover compute socially desirable solutions in polynomial time.
A series of negative results has demonstrated that, using existing techniques, combining truthfulness with polynomial-time computation forces an inevitable deterioration of the approximation ratio for many important problems. Fortunately, these negative results pertain almost exclusively to deterministic mechanisms, so it may be possible to exploit randomization to overcome these barriers. In this work we propose a new framework, based on convex optimization, for designing randomized truthful approximation mechanisms, and apply our techniques to the paradigmatic problem of combinatorial auctions. We show that approximation algorithms based on rounding a mathematical relaxation (such as a linear program) can be generically converted into truthful mechanisms provided the rounding scheme is "convex", in a precise sense we define. This reduces the design of truthful mechanisms to the design of a "convex rounding scheme". We apply this framework to obtain an optimal approximation mechanism for a large class of combinatorial auctions. Namely, we consider a large subclass of submodular valuations that includes most concrete submodular functions studied in the literature. When player valuations lie in this class, we show how to compute an optimal (1-1/e)-approximation to combinatorial auctions via a truthful randomized mechanism. The best previously known bound was an O(log m log log m)-approximation mechanism due to Dobzinski. Our result is the first optimal truthful mechanism for an NP-hard class of combinatorial auctions with restricted valuations.
Joint work with Tim Roughgarden and Qiqi Yan.
Even commercial router vendors have adopted randomized algorithms in a few cases because they are simple, fast, and memory-efficient. Further, because of the opportunity to see every arriving packet and preserve summary information about the entire population, such randomized algorithms can obtain an "edge" over standard algorithms that merely sample the population. I illustrate this thesis by describing an algorithm that provides an inexpensive technique for measuring the average and variance of packet latencies and loss on a link. By contrast, the majority of routers have no support for fine-grained latency measurement; managers must instead rely on approximate methods such as sending probe packets or using "tomographic" techniques.
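The core trick behind such a streaming latency estimator can be illustrated with a minimal sketch (our own simplification, not the authors' actual data structure): if the sender and receiver each keep only a running sum of packet timestamps and a packet count, the difference of the sums divided by the count recovers the average one-way latency, assuming no loss and synchronized clocks -- both simplifications of the real scheme.

```python
import random

def measure_average_latency(latencies):
    """Estimate mean latency from per-side timestamp sums.

    Sender and receiver each maintain O(1) state: a running sum of
    timestamps and a packet count. No per-packet records are stored,
    yet (recv_sum - send_sum) / count is exactly the mean latency
    (assuming no loss and synchronized clocks).
    """
    send_sum = 0.0
    recv_sum = 0.0
    count = 0
    t = 0.0
    for lat in latencies:
        t += 1.0             # packet transmitted at time t
        send_sum += t        # sender-side running sum
        recv_sum += t + lat  # receiver-side running sum
        count += 1
    return (recv_sum - send_sum) / count

random.seed(0)
lats = [random.uniform(1.0, 3.0) for _ in range(10000)]
est = measure_average_latency(lats)
print(est)  # matches the empirical mean of the latencies
```

The same idea extends to higher moments (for variance, keep sums of squared timestamps), which is in the spirit of the summary-based measurement described above.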
I will end by phrasing this problem abstractly as one of maximizing the mass of a set of experimental samples given T test tubes in the face of an unknown and randomly selected number of contaminants.
Joint work with Ramana Kompella, Kirill Levchenko, Terry Lam, and Alex Snoeren.
George Varghese obtained his Ph.D. in 1992 from MIT. He joined UCSD in 1999, where he is currently a professor of computer science. He was elected a Fellow of the Association for Computing Machinery (ACM) in 2002. He has written a book on building fast router and endnode implementations, "Network Algorithmics", published in December 2004 by Morgan Kaufmann. In May 2004, he co-founded NetSift Inc., which was acquired by Cisco Systems in 2005.
Abstract: We give the first combinatorial approximation algorithm for Maxcut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is intuitive, natural, and easily described: it essentially performs a number of random walks and aggregates the information to produce the partition. We can control the running time to obtain a tradeoff between approximation factor and running time. We show that for any constant b > 1.5, there is an O(n^b) algorithm that outputs a (0.5+d)-approximation for Maxcut, where d = d(b) is some positive constant. One component of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter f, unless a random walk of length l = O(log n) starting from i mixes rapidly (in terms of f and l), we can find a cut of conductance at most f close to the starting vertex. The work done per vertex found in the cut is sublinear in n.
This is joint work with C. Seshadhri.
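For context, the trivial 0.5 factor mentioned in the abstract comes from assigning each vertex to a side by a fair coin flip, which cuts each edge with probability 1/2 and so achieves half the number of edges (hence at least half the optimum) in expectation. A minimal sketch, with a toy graph of our own:

```python
import random

def random_cut(n, edges, seed=0):
    """Trivial randomized Max-Cut: each vertex picks a side by a fair
    coin flip, so each edge is cut with probability 1/2 and the
    expected cut value is half the number of edges."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    return sum(1 for u, v in edges if side[u] != side[v])

# Complete graph on 6 vertices: 15 edges, so the average cut value
# over many random seeds concentrates near 15 / 2 = 7.5.
edges = [(u, v) for u in range(6) for v in range(u + 1, 6)]
avg = sum(random_cut(6, edges, seed=s) for s in range(2000)) / 2000
print(avg)
```

The algorithm in the abstract beats this baseline by replacing the independent coin flips with sides derived from aggregated random-walk statistics.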
We show that the problem of bounding the approximation ratio of certain algorithms can sometimes be reduced to solving a large linear program with an LP solver. With this unconventional approach, we are able to show that for online bipartite matching with random arrivals, the competitive ratio of the simple ranking algorithm is better than 1-1/e. Our approach also yields a slightly improved ratio for the uncapacitated metric facility location problem.
Joint work with Mohammad Mahdian.
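The ranking algorithm analyzed in the abstract admits a short description: choose a uniformly random ranking of the offline vertices once, then match each arriving online vertex to its highest-ranked unmatched neighbor. A minimal sketch on a toy instance of our own:

```python
import random

def ranking(offline, arrivals, seed=0):
    """Ranking algorithm for online bipartite matching.

    A uniformly random ranking of the offline vertices is fixed up
    front; each arriving online vertex is matched to its unmatched
    neighbor of highest rank (lowest rank number), if any.
    """
    rng = random.Random(seed)
    order = list(offline)
    rng.shuffle(order)
    rank = {v: i for i, v in enumerate(order)}
    matched = set()
    matching = {}
    for u, nbrs in arrivals:  # online vertices arrive one by one
        free = [v for v in nbrs if v not in matched]
        if free:
            v = min(free, key=rank.__getitem__)
            matched.add(v)
            matching[u] = v
    return matching

offline = ['a', 'b', 'c']
arrivals = [('x', ['a', 'b']), ('y', ['a']), ('z', ['b', 'c'])]
m = ranking(offline, arrivals, seed=1)
print(len(m), m)
```

The classical guarantee of 1-1/e for ranking holds under adversarial arrival order; the result above shows the ratio improves when the arrival order is random.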
I'll talk about the following paper:
S. Arora, B. Barak, and D. Steurer. Subexponential Algorithms for Unique Games and Related Problems. In Proc. of FOCS, 2010.
For some positive constant \eps, we give a (3/2-\eps)-approximation algorithm for the following problem: given a graph G_0=(V,E), find the shortest tour that visits every vertex at least once. This problem is known as the unweighted TSP. The result improves on the 3/2-approximation algorithm due to Christofides.
As in Christofides's algorithm, our algorithm finds a spanning tree whose cost is upper bounded by the optimum, and then finds the minimum-cost Eulerian augmentation of that tree. The main difference is in the selection of the spanning tree: except in certain cases where the solution of the LP is nearly integral, we select the spanning tree randomly by sampling from a maximum entropy distribution defined by the linear programming relaxation.
Despite the simplicity of the algorithm, the analysis builds on a variety of ideas such as properties of strongly Rayleigh measures from probability theory, graph theoretical results on the structure of near minimum cuts, and the integrality of the T-join polytope from polyhedral theory.
This is joint work with Amin Saberi and Mohit Singh.
Consider a sequence of bits where we are trying to predict the next bit from the previous bits. Assume we are allowed to say `predict 0' or `predict 1', and our payoff is $+1$ if the prediction is correct and $-1$ otherwise. We will say that at each point in time the loss of an algorithm is the number of wrong predictions minus the number of right predictions so far. In this paper we are interested in algorithms that have essentially zero (expected) loss over any string at any point in time and yet have small regret with respect to always predicting $0$ or always predicting $1$. This can be formulated as the experts problem in which we require exponentially small regret with respect to one special expert (which corresponds to the `no prediction' strategy in our setting). It was shown by Even-Dar et al. (COLT'07) that constant expected loss can be achieved. In this paper we give an algorithm that has small regret and exponentially small loss (in expectation), achieving the optimal tradeoff between the two. For a sequence of length $T$ our algorithm has an amortized per-time-step regret $\epsilon$ and loss $e^{-\epsilon^2 T+1}/\sqrt{T}$ in expectation for all strings. The algorithm extends to the general expert setting, yielding essentially zero loss with respect to the special expert and optimal loss/regret tradeoff.
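The loss and regret bookkeeping defined above can be made concrete; the sketch below (our own illustration, not the paper's algorithm) evaluates the trivial `no prediction' strategy, which has zero loss by definition but pays regret equal to the best constant expert's payoff:

```python
def evaluate(bits, predictions):
    """Compute the loss (wrong minus right predictions) of a predictor
    and its regret against the two constant experts.

    predictions[t] is the guess for bits[t]; None means 'no prediction'
    (payoff 0), mirroring the special expert in the abstract.
    """
    loss = 0
    for b, p in zip(bits, predictions):
        if p is not None:
            loss += -1 if p == b else 1
    # Constant experts: 'always k' earns +1 per match, -1 per miss.
    best_expert_payoff = max(
        sum(1 if b == k else -1 for b in bits) for k in (0, 1)
    )
    regret = best_expert_payoff - (-loss)  # predictor's payoff is -loss
    return loss, regret

bits = [1, 1, 0, 1, 1, 1, 0, 1]
never = [None] * len(bits)  # the 'no prediction' special expert
loss, regret = evaluate(bits, never)
print(loss, regret)  # prints: 0 4
```

The algorithm in the abstract interpolates between this zero-loss extreme and the low-regret extreme of standard expert algorithms, achieving the optimal tradeoff.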
The strong loss bounds of the algorithm have some surprising consequences. First, we obtain a parameter free algorithm for the experts problem that has optimal {\em adaptive regret} bounds, i.e. bounds with respect to the optimum that is allowed to change arms multiple times. Moreover, for {\em any window of size $n$} the regret of our algorithm to any expert never exceeds $O(\sqrt{n(\log N+\log T)})$, where $N$ is the number of experts and $T$ is the time horizon, while maintaining the essentially zero loss property.
Joint work with Rina Panigrahy.