Talks are held in the Gates Building, near the Main Quad of Stanford's campus.
We present a strongly polynomial algorithm for the restricted generalized uncapacitated transshipment problem, a generalized version of the shortest path problem. We exhibit a left-distributive closed semiring that permits use of the Bellman-Ford algorithm to solve this problem given a guess for the value of the optimal solution. Using Megiddo's parametric search scheme, we can then compute the optimal value in strongly polynomial time.
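As a rough illustration of the Bellman-Ford step (a minimal sketch, not code from the talk), the relaxation can be written generically over an abstract semiring; oplus, otimes, zero, and one below are placeholders for the talk's left-distributive closed semiring, and the ordinary shortest-path semiring (min, +) appears only as an example.

    # Hypothetical sketch: Bellman-Ford parameterized by a semiring (oplus, otimes, zero, one).
    # With oplus=min and otimes=+ this is ordinary shortest-path relaxation; the talk's
    # left-distributive closed semiring would be substituted for these placeholders.
    def bellman_ford(num_nodes, edges, source, oplus, otimes, zero, one):
        """edges: list of (u, v, weight); returns the best label for each node."""
        label = [zero] * num_nodes   # zero: identity for oplus ("no path yet")
        label[source] = one          # one: identity for otimes ("empty path")
        for _ in range(num_nodes - 1):
            updated = False
            for u, v, w in edges:
                candidate = otimes(label[u], w)      # extend u's label along edge (u, v)
                better = oplus(label[v], candidate)  # keep the preferable alternative
                if better != label[v]:
                    label[v] = better
                    updated = True
            if not updated:
                break
        return label

    # Example with the shortest-path semiring (min, +):
    INF = float("inf")
    edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0)]
    print(bellman_ford(3, edges, 0, min, lambda a, b: a + b, INF, 0.0))  # [0.0, 3.0, 1.0]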
Using the restricted generalized uncapacitated transshipment algorithm, we present fully polynomial-time approximation schemes for the generalized versions of the maximum flow, minimum-cost flow, concurrent flow, multicommodity maximum flow, and multicommodity minimum-cost flow problems. For all of these problems except the maximum flow variant, these combinatorial approximation schemes are the first polynomial-time algorithms not based on interior-point methods.
If there is time, we may present preliminary implementation results for the restricted generalized transshipment problem. The implementation is joint work with Amit Patel.
Email:
oldham@cs.stanford.edu
WWW:
http://theory.stanford.edu/~oldham/
Related paper:
http://Theory.Stanford.EDU/~oldham/publications/generalized/
Classically, space-time tradeoffs have been studied in this model under the assumption that the family F is an unrestricted function family. In this setting, a large gap exists between the expected and the provable lower bounds. Augmenting the model with a parameter F that characterizes the preprocessing power makes for a more realistic computational model and unifies many fundamental questions previously studied in isolated settings. In particular, the extreme settings of our model correspond to the cell-probe and the generalized decision-tree complexities. Thus our general model combines the issues of "reusing" limited space and "adaptive expressibility" in computing f. Using graph properties as candidates for the source function f, we develop new techniques for deriving space-time tradeoffs in this model. In doing so, we illustrate various facets of our model and strengthen some existing results.
This is joint work with Sanjeev Khanna (Bell Labs).
Email:
yevgen@theory.lcs.mit.edu
WWW:
http://theory.lcs.mit.edu/~yevgen/
Related paper:
Email:
krv@dimacs.rutgers.edu
WWW:
http://dimacs.rutgers.edu/People/Postdocs/Varadarajan.html
Related paper: FOCS'98
Email:
ranade@surya.cse.iitb.ernet.in
WWW:
http://www.cse.iitb.ernet.in/~ranade
Related paper:
http://www.cse.iitb.ernet.in/~ranade/nstcs97.dvi
Email:
feige@pa.dec.com
Related paper: available upon request.
Joint work with Funda Ergun (Bell Labs) and S. Ravi Kumar (IBM Almaden).
Email:
ronitt@cs.cornell.edu
WWW:
http://simon.cs.cornell.edu/Info/People/ronitt/homepage.html
Email:
val@csr.uvic.ca
WWW:
http://csr.uvic.ca/~val/val.html
Related paper: STOC'99, available upon request.
Email:
madhu@lcs.mit.edu
WWW:
http://theory.lcs.mit.edu/~madhu
Related paper: http://theory.lcs.mit.edu/~madhu/papers.html
This is joint work with Ravi Kumar, Prabhakar Raghavan and Andrew Tomkins.
Email:
sridhar@almaden.ibm.com
We extend the pseudo-random generators of Nisan and Wigderson and Impagliazzo and Wigderson to create an "all purpose" derandomization technique applicable to any randomized procedure with the following property: for any fixed input, the complexity of checking whether the procedure succeeds on a given random bit sequence is not too high. To demonstrate the generality of our framework, we derandomize a variety of probabilistic complexity theoretic constructions (in each case assuming a sufficiently strong worst-case hardness assumption) including the Valiant-Vazirani random hashing technique, the construction of matrices with high rigidity, and universal traversal sequences.
Critical to our argument is the "black-box" nature of the proofs found in the construction of the pseudo-random generator of Nisan and Wigderson [NW94]. We show how the ideas from [NW94] and our work relate to the recent breakthrough extractor construction of Trevisan.
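As a purely illustrative sketch of the framework (assumed names, not the construction from the talk), the derandomization pattern is: enumerate all short seeds, stretch each with a generator, run the randomized procedure on the resulting bit string, and rely on the assumed-cheap success check to recognize a good outcome.

    # Illustrative sketch only, not the talk's construction: the generic pattern of
    # derandomizing a procedure whose success on a given random string is easy to check.
    from itertools import product

    def generator(seed_bits, output_length):
        # Trivial placeholder standing in for an NW-style pseudo-random generator;
        # a real construction would stretch the seed using a hardness assumption.
        return [seed_bits[i % len(seed_bits)] for i in range(output_length)]

    def derandomize(instance, randomized_procedure, succeeded, seed_length, output_length):
        """Try every short seed; return the first outcome whose success can be verified."""
        for seed in product([0, 1], repeat=seed_length):
            bits = generator(list(seed), output_length)
            outcome = randomized_procedure(instance, bits)
            if succeeded(instance, outcome):   # the "not too high" checking step
                return outcome
        return None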
Joint work with Dieter van Melkebeek.
Email:
klivans@math.mit.edu
Related paper:
ftp://ftp.eccc.uni-trier.de/pub/eccc/reports/1998/TR98-075/index.html
for a constant alpha > 0. This collective sampling protocol is not subsumed by Feige's recent leader election protocol.
We also give a lower bound for both collective coin-flipping and leader election in the case where each player can broadcast only 1 bit per round (and the good players' bits are unbiased). In particular, we show that any protocol with linear resilience must take at least (1/2 - o(1)) log^* n rounds. The primary component of this result is a new bound on the influence of random sets of variables on Boolean functions.
This is joint work with Alex Russell and Mike Saks.
Email:
diz@cs.berkeley.edu
WWW:
http://www.cs.utexas.edu/users/diz/
Related papers:
http://www.cs.utexas.edu/users/diz/leader-upper.ps
Email:
lieven@snowmass.stanford.edu
WWW:
http://www-snow.stanford.edu/~lieven/
Related resources:
http://squint.stanford.edu/
1/4 can be replaced by any fixed gamma less than 1/2 if c is sufficiently large with respect to gamma. We actually show that the promise problem:
where gamma > 0 is an arbitrarily small constant, cannot be solved by a nonlinear algorithm with the described limitations, even if we know that we only get inputs where one of these conditions holds. (In this case epsilon may depend on gamma too.)
Email:
ajtai@almaden.ibm.com
Related paper:
ftp://ftp.eccc.uni-trier.de/pub/eccc/reports/1998/TR98-077/index.html
We will then show how to apply the algorithms to text and image retrieval, VLSI design, clustering point sets and searching the web, along with some (preliminary) experimental results.
Joint work with Alan Frieze (CMU), Ravi Kannan, Petros Drineas (Yale), and V. Vinay (IISc).
Email:
vempala@lcs.mit.edu
WWW:
http://www-math.mit.edu/~vempala/
Following notions from game theory, micro-economics, and, in particular, from mechanism design, we suggest a framework for studying such algorithms. We also suggest extensions to the basic framework -- extensions that are motivated by the algorithmic domain. We apply the standard tools of mechanism design to algorithmic problems and observe their limitations.
We proceed by analyzing a representative problem, task scheduling, that cannot be handled by the standard tools. We present both upper and lower bounds, as well as theorems regarding several key issues, including approximations, the power of randomness, and the implications of our model extensions.
Joint work with Noam Nisan.
To achieve such goal-directed crawling, we design two hypertext mining programs that guide our crawler: a "classifier" that evaluates the relevance of a hypertext document with respect to the focus topics, and a "distiller" that identifies hypertext nodes that are great access points to many relevant pages within a few links. We report on extensive focused-crawling experiments using several topics at different levels of specificity. Focused crawling acquires relevant pages steadily while standard crawling quickly loses its way, even though they are started from the same root set. Focused crawling is robust against large perturbations in the starting set of URLs. It discovers largely overlapping sets of resources in spite of these perturbations. It is also capable of exploring out and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius. Our anecdotes suggest that focused crawling is very effective for building high-quality collections of web documents on specific topics, using modest desktop hardware.
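A minimal sketch of the crawl loop, with hypothetical helper names (fetch_page, extract_links, classifier_score) rather than the authors' actual system: the classifier's relevance score gates which pages are kept and prioritizes which out-links the crawler expands next; the distiller step is omitted.

    # Hypothetical sketch of a focused-crawl loop; fetch_page, extract_links, and
    # classifier_score are assumed helpers standing in for real HTTP, parsing, and the
    # trained classifier.
    import heapq

    def focused_crawl(seed_urls, classifier_score, fetch_page, extract_links,
                      relevance_threshold=0.5, max_pages=1000):
        frontier = [(-1.0, url) for url in seed_urls]   # max-heap via negated priority
        heapq.heapify(frontier)
        visited, collected = set(), []
        while frontier and len(collected) < max_pages:
            neg_priority, url = heapq.heappop(frontier)
            if url in visited:
                continue
            visited.add(url)
            page = fetch_page(url)
            score = classifier_score(page)              # relevance w.r.t. the focus topics
            if score >= relevance_threshold:
                collected.append((url, score))
                for link in extract_links(page):
                    if link not in visited:
                        # inherit the parent's relevance as the crawl priority
                        heapq.heappush(frontier, (-score, link))
        return collected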
Joint work with Martin van den Berg and Byron Dom.
A fundamental difference between connection caching and standard caching is that connection caching involves multiple interacting caches: each open connection is cached by two hosts and may be terminated by either; the other host is then forced to accept the closing decision. Policies may also assume different levels of inter-host communication.
We propose and study a theoretical model for connection caching. While the ordinary off-line caching problem is easily solved in polynomial time, we show that the off-line connection caching problem is NP-complete, and in fact hard to approximate. Single-cache replacement policies can be cast as connection-caching policies when implemented locally at each node. We provide a tight 2-approximation bound for a distributed version of Belady's algorithm (which is optimal for a single cache). We show that any (on-line) c-competitive single-cache policy can be converted into a distributed (on-line) 2c-competitive connection-caching algorithm. We provide a tight (2k-1) bound for the competitiveness of connection-caching LRU (single-cache LRU is k-competitive), where k is the size of the largest cache in the network. Unlike the situation for a single cache, LRU is not the best deterministic algorithm: we construct a family of k-competitive connection-caching policies. We also discuss variants of the model allowing different levels of inter-host communication.
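For intuition only, here is a small sketch (under assumed simplifications, not the paper's formal model) of per-host LRU connection caching, showing how an eviction at one endpoint forces the peer to drop the connection too.

    # Rough illustrative sketch, not the paper's model: each host keeps an LRU cache of open
    # connections; evicting a connection at one endpoint closes it, so the peer must drop it
    # as well, which is what makes the caches interact.
    from collections import OrderedDict

    class Host:
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity
            self.connections = OrderedDict()   # peer name -> Host, in LRU order

        def _insert(self, peer):
            # Make room (closing the LRU connection at both endpoints), then cache the peer.
            if len(self.connections) >= self.capacity:
                victim_name, victim = self.connections.popitem(last=False)
                victim.connections.pop(self.name, None)   # the victim must accept the close
            self.connections[peer.name] = peer

        def request(self, peer):
            """Use (or open) a connection to peer; returns "hit" or "miss"."""
            if peer.name in self.connections:
                self.connections.move_to_end(peer.name)     # refresh recency on a hit
                peer.connections.move_to_end(self.name)
                return "hit"
            self._insert(peer)      # opening a connection occupies a slot at *both* hosts
            peer._insert(self)
            return "miss"

    a, b, c = Host("a", 1), Host("b", 2), Host("c", 2)
    print(a.request(b))   # miss: opens a-b
    print(a.request(c))   # miss: a evicts a-b to fit a-c, so b loses the connection too
    print(b.request(a))   # miss again, because the eviction closed the connection at b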
This is joint work with Haim Kaplan and Uri Zwick.