Location: either the Gates 4B lobby or Terman 453.
Mailing list: thseminar@cs.stanford.edu. Email Qiqi to sign up.
Contact: Qiqi Yan (qiqiyan@stanford...)
Food: Wendy usually chooses the food at her discretion, but the speaker may suggest a preference to her.
This meeting is the place where people sign up for talks.
I'll give a survey of Descriptive Complexity (http://www.cs.umass.edu/~immerman/descriptive_complexity.html). In the last third of the talk, I'll discuss some recent work I'm doing at Stanford with Shachar Itzhaky, Sumit Gulwani, and Mooly Sagiv on the automatic synthesis of efficient data structures and algorithms from very high-level algorithmic specifications.
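As a taste of the area (a standard textbook example, not necessarily one from the talk): Fagin's theorem characterizes NP as the class of properties expressible in existential second-order logic, so for instance 3-colorability of a graph with edge relation E is expressed by the sentence

    \exists R\, \exists G\, \exists B\; \Big[ \forall x\, \big( R(x) \lor G(x) \lor B(x) \big)
      \;\land\; \forall x\, \forall y\, \Big( E(x,y) \rightarrow
        \neg \big( (R(x) \land R(y)) \lor (G(x) \land G(y)) \lor (B(x) \land B(y)) \big) \Big) \Big]

which existentially quantifies over three color predicates and then checks, in first-order logic, that every vertex receives a color and no edge is monochromatic.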
We give the first black-box reduction from arbitrary approximation
algorithms to truthful approximation mechanisms for a non-trivial
class of multi-parameter problems. Specifically, we prove that every
packing problem that admits an FPTAS also admits a
truthful-in-expectation randomized mechanism that is an FPTAS.
Our reduction makes novel use of smoothed analysis, by employing small
perturbations as a tool in algorithmic mechanism design.
To argue that our perturbation schemes are incentive-compatible, we
develop a "duality" between linear perturbations of the
objective function of an optimization problem and of its feasible set.
Viewed from the dual perspective, our mechanisms are
maximal-in-distributional-range and hence truthful in expectation.
Joint work with Tim Roughgarden
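A rough illustration of the duality, glossing over the details of the actual perturbation scheme: for a packing problem with feasible set P contained in [0,1]^n and coordinate-wise multiplicative perturbations delta, the same computation can be read in two ways,

    \max_{x \in P} \sum_i (1+\delta_i)\, v_i\, x_i
      \;=\; \max_{y \in P_\delta} \sum_i v_i\, y_i,
    \qquad
    P_\delta := \{ ((1+\delta_1) x_1, \dots, (1+\delta_n) x_n) : x \in P \}.

Under the second reading, the mechanism exactly optimizes the reported valuations over a range of outcomes that is fixed independently of the reports, which is the maximal-in-distributional-range condition yielding truthfulness in expectation.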
Given data drawn from a mixture of multivariate Gaussians, a basic problem is to accurately estimate the mixture parameters. This problem has a rich history of study in both statistics and, more recently, in CS Theory. We give an efficient algorithm for this problem (running time and data requirements polynomial in the dimension and the inverse of the desired accuracy), with provably minimal assumptions on the Gaussians. One component of the proof is showing that noisy estimates of the low-order moments of a 1-dimensional mixture suffice to recover accurate estimates of the mixture parameters, as conjectured by Pearson (1894), and in fact these estimates converge at an inverse polynomial rate. The remainder of the proof is a dimension-reduction argument for how one can piece together information from different 1-dimensional projections to yield accurate parameters.
As a simple consequence of our learning algorithm, one can efficiently perform near-optimal clustering: in the case where the overlap between the Gaussians is small, one can accurately cluster the data, and when the Gaussians have partial overlap, one can still accurately cluster those data points which are not in the overlap region.
Joint work with Adam Kalai and Ankur Moitra
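To give a sense of the one-dimensional step: for a univariate mixture w_1 N(mu_1, sigma_1^2) + w_2 N(mu_2, sigma_2^2) with w_1 + w_2 = 1, the raw moments are explicit polynomials in the parameters,

    \mathbb{E}[X]   = w_1 \mu_1 + w_2 \mu_2,
    \mathbb{E}[X^2] = w_1 (\mu_1^2 + \sigma_1^2) + w_2 (\mu_2^2 + \sigma_2^2),
    \mathbb{E}[X^3] = w_1 (\mu_1^3 + 3 \mu_1 \sigma_1^2) + w_2 (\mu_2^3 + 3 \mu_2 \sigma_2^2),

and Pearson's program, which this work makes robust, is to recover the parameters from noisy estimates of the first few such moments.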
Matroid matching was proposed as a common generalization of two classical problems in P: maximum matching and matroid intersection. Unfortunately, it turns out that matroid matching is intractable for general matroids. For linear matroids (representable over a field), it was shown by Lovasz that matroid matching can be solved in polynomial time. We show that for any matroid given by an oracle, there is a simple PTAS for matroid matching, which is achieved by local search. More generally, we get a (k/2+epsilon)-approximation for matroid matching in k-uniform hypergraphs, which is a common generalization of k-set packing and the intersection of k matroids.
On the negative side, we show that the known LP relaxations of the matroid matching problem have an Omega(n) integrality gap, and even after r rounds of the Sherali-Adams hierarchy the gap is still Omega(n/r). Thus matroid matching is an example of a problem where the Sherali-Adams hierarchy fails to capture a simple combinatorial algorithm.
Joint work with Jon Lee and Maxim Sviridenko (IBM Watson)
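To illustrate only the local-search idea (with a hypothetical independence-oracle interface, not the algorithm from the paper), here is a naive Python sketch that repeatedly looks for a small improving swap of lines:

    from itertools import combinations

    def local_search_matroid_matching(lines, independent, t=2):
        # Naive local-search sketch for unweighted matroid matching.
        #   lines:       list of pairs (a, b) of ground-set elements ("lines")
        #   independent: oracle; independent(S) is True iff the element set S
        #                is independent in the matroid
        #   t:           swap size: add k <= t lines after removing fewer than k
        def feasible(match):
            elems = set()
            for a, b in match:
                elems.update((a, b))
            # endpoints pairwise distinct and their union independent
            return len(elems) == 2 * len(match) and independent(elems)

        def improving_swap(matching):
            outside = [l for l in lines if l not in matching]
            for k in range(1, t + 1):
                for add in combinations(outside, k):
                    for r in range(k):  # remove r < k lines, add k lines
                        for rem in combinations(matching, r):
                            trial = [l for l in matching if l not in rem] + list(add)
                            if feasible(trial):
                                return trial
            return None

        matching = []
        while True:
            better = improving_swap(matching)
            if better is None:
                return matching
            matching = better

With the free matroid (independent = lambda S: True) this specializes to a local-search heuristic for ordinary graph matching.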
Ant colony optimization is inspired by the foraging behavior of natural ants. Ants deposit pheromone trails on the ground that attract other ants. This simple communication mechanism results in swarm intelligence phenomena that can solve complex tasks in nature such as finding short paths to food sources. Ant-inspired algorithms form a powerful optimization paradigm with many successful applications to problems such as the TSP, network routing, and scheduling. I will present analyses of the running time of ant algorithms for shortest path problems. This is joint work with Christian Horoba, and it extends previous work by Attiratanasunthron and Fakcharoenphol. For the single-destination shortest path problem we obtain an almost tight upper bound. This bound transfers to the all-pairs shortest path problem, where different types of ants search for different destinations. A simple interaction mechanism between different types of ants leads to a significant speed-up. Interestingly, this speed-up is achieved by sending ants to intermediate destinations chosen uniformly at random.
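A minimal pheromone-based sketch, assuming a generic best-so-far reinforcement rule rather than the exact algorithms analyzed in the talk:

    import random

    def aco_shortest_path(graph, source, dest, iters=1000, rho=0.1):
        # graph: dict mapping node -> list of (neighbor, edge_length).
        # One ant per iteration walks from source toward dest, choosing edges
        # with probability proportional to pheromone; the best-so-far path is
        # reinforced while all pheromone evaporates at rate rho.
        tau = {(u, v): 1.0 for u in graph for v, _ in graph[u]}
        best_path, best_len = None, float("inf")
        for _ in range(iters):
            path, length, node, visited = [source], 0.0, source, {source}
            while node != dest:
                options = [(v, w) for v, w in graph[node] if v not in visited]
                if not options:               # dead end: abandon this walk
                    path = None
                    break
                weights = [tau[(node, v)] for v, _ in options]
                v, w = random.choices(options, weights=weights)[0]
                path.append(v); length += w; visited.add(v); node = v
            if path is not None and length < best_len:
                best_path, best_len = path, length
            for e in tau:                      # evaporation everywhere
                tau[e] *= 1.0 - rho
            if best_path is not None:          # reinforce best-so-far path
                for u, v in zip(best_path, best_path[1:]):
                    tau[(u, v)] += rho
        return best_path, best_len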
We consider the online stochastic matching problem proposed by Feldman et al. as a
model of display ad allocation. We are given a bipartite graph; one side of the graph
corresponds to a fixed set of advertisers and the other side represents the set of possible
impression types. We are also given a distribution over the impression types that indicates
the sampling rate of each type. At each time step, an impression is sampled
independently from the given distribution and it needs to be matched upon its arrival.
The goal is to maximize the number of matched advertisers. We
develop an online algorithm that, in expectation, achieves a competitive ratio better
than that of the previous algorithm of Feldman et al.
Our algorithm is adaptive and works for any sampling rates, unlike the algorithm
proposed by Feldman et al., which only works when the sampling rates are integral. The algorithm
is based on sampling from the expected optimum offline solution which can be computed efficiently using
Monte Carlo methods. Furthermore, we show that the competitive ratio of
our algorithm is within 5% of that of the optimum online algorithm.
Based on joint work with Shayan Oveis Gharan and Amin Saberi
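To make the arrival model concrete, here is a toy simulation with a plain greedy rule (not the adaptive algorithm from the talk, which samples from the expected optimum offline solution):

    import random

    def simulate_online_matching(neighbors, rates, num_arrivals):
        # neighbors: dict mapping impression type -> list of advertiser ids
        # rates:     dict mapping impression type -> sampling probability
        # Each arriving impression is drawn i.i.d. from the rates and must be
        # matched immediately; greedy picks any unmatched neighboring advertiser.
        types = list(rates)
        probs = [rates[t] for t in types]
        matched, size = set(), 0
        for _ in range(num_arrivals):
            t = random.choices(types, weights=probs)[0]   # impression arrives
            free = [a for a in neighbors.get(t, []) if a not in matched]
            if free:                                      # irrevocable match
                matched.add(free[0])
                size += 1
        return size

    # Example: two advertisers, two impression types with equal rates.
    print(simulate_online_matching({"x": [1, 2], "y": [2]}, {"x": 0.5, "y": 0.5}, 4))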
We present new methods for the running time analysis of parallel evolutionary algorithms with spatially structured populations. These methods are applied to estimate the speed-up gained by parallelization in pseudo-Boolean optimization. The possible speed-up increases with the density of the topology. Surprisingly, even sparse topologies like ring graphs lead to a significant speed-up for many functions while not increasing the total number of function evaluations. We also give practical hints towards choosing the minimum number of processors that gives an optimal speed-up. Joint work with Dirk Sudholt.
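A minimal sketch of a spatially structured EA on a ring, assuming a simple cellular model with OneMax as the pseudo-Boolean objective (an illustration of the setting, not the exact algorithms analyzed):

    import random

    def ring_cellular_ea(n_bits, pop_size, generations, fitness=sum):
        # One individual per ring node; each generation every node mutates the
        # best individual in its ring neighborhood and keeps the better of
        # child and incumbent. Default fitness is OneMax (number of ones).
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            new_pop = []
            for i in range(pop_size):
                nbh = [pop[(i - 1) % pop_size], pop[i], pop[(i + 1) % pop_size]]
                parent = max(nbh, key=fitness)
                child = [1 - b if random.random() < 1.0 / n_bits else b
                         for b in parent]              # standard bit mutation
                new_pop.append(child if fitness(child) >= fitness(pop[i]) else pop[i])
            pop = new_pop
        return max(pop, key=fitness)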
We construct an algebraic pseudorandom function (PRF) that is
more efficient than the classic Naor-Reingold algebraic PRF. Our PRF is
the result of adapting the cascade construction, which is the basis of
HMAC (hash-based message authentication code), to the algebraic
setting. To do so, we define an augmented cascade construction and prove
it secure when the underlying PRF satisfies a property called parallel
security. We then use the augmented cascade to build new algebraic PRFs.
The algebraic structure of our PRF leads to an efficient large-domain
Verifiable Random Function (VRF) and a large-domain simulatable VRF.
Joint work with Dan Boneh and Hart Montgomery.
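For context only, the classic Naor-Reingold PRF mentioned above: over a group G of prime order q with generator g, keyed by (a_0, a_1, ..., a_n) in Z_q^{n+1}, it maps an n-bit input x = x_1...x_n to

    F_{a_0, a_1, \dots, a_n}(x_1 \cdots x_n) \;=\; g^{\, a_0 \prod_{i:\, x_i = 1} a_i} \in G.

The construction in the talk is compared against this baseline.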