Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.
We also show how our algorithm can be used to order matrices with the Robinson property, and we illustrate this application with examples from seriation in archaeology.
To our knowledge, this is the first result in which a spectral method solves a combinatorial problem exactly instead of merely supplying a heuristic or a bound.
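To give a feel for the seriation application, here is a small numpy sketch (our illustration, not the speakers' implementation) that reorders a scrambled Robinson similarity matrix by sorting the entries of the Fiedler vector of its Laplacian, the kind of spectral computation at the heart of this approach:

    import numpy as np

    def spectral_order(S):
        # Laplacian of the similarity matrix, then the eigenvector of
        # its second-smallest eigenvalue (the Fiedler vector)
        L = np.diag(S.sum(axis=1)) - S
        _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
        return np.argsort(vecs[:, 1])        # sort items by Fiedler coordinate

    # toy test: a Robinson matrix (entries decay away from the diagonal),
    # scrambled by a random permutation and then recovered
    n = 8
    R = np.fromfunction(lambda i, j: n - abs(i - j), (n, n))
    p = np.random.permutation(n)
    print(spectral_order(R[np.ix_(p, p)]))   # recovers p's order, up to reversal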
Joint work with Bruce Hendrickson and Jon Atkins. (This talk will partially overlap the talk Bruce Hendrickson gave at Xerox PARC on September 21.)
We attempt to reconcile the two distinct views of approximation classes: syntactic and computational. Syntactic classes such as MAXSNP permit structural results and have natural complete problems, while computational classes such as APX allow us to work with classes of problems whose approximability is well-understood. Our results provide a syntactic characterization of computational classes and give a computational framework for syntactic classes.
We compare the syntactically defined class MAXSNP with the computationally defined class APX, and show that every problem in APX can be ``placed'' (i.e., has an approximation-preserving reduction to a problem) in MAXSNP. Our methods introduce a general technique for creating approximation-preserving reductions, showing that any ``well'' approximable problem can be reduced, in an approximation-preserving manner, to a problem that is hard to approximate to corresponding factors.
We use the syntactic nature of MAXSNP to define a general paradigm, non-oblivious local search, useful for developing simple yet efficient approximation algorithms. We show that such algorithms can find good approximations for all MAXSNP problems, yielding approximation ratios comparable to the best known for a variety of specific MAXSNP-hard problems. Non-oblivious local search provably outperforms standard local search in both the degree of approximation achieved and the efficiency of the resulting algorithms.
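Here is a minimal Python sketch of the paradigm for MAX 2-SAT (our illustration; the potential weighting clauses with one and two true literals by 3/2 and 2 is the choice used in this line of work, whereas oblivious local search would weight every satisfied clause equally):

    def nonoblivious_max2sat(clauses, n):
        # clauses: tuples of literals, +v for x_v, -v for its negation
        def true_lits(cl, x):
            return sum((x[l] if l > 0 else not x[-l]) for l in cl)

        def potential(x):
            w = (0.0, 1.5, 2.0)      # non-oblivious weights by # of true literals
            return sum(w[true_lits(cl, x)] for cl in clauses)

        x = [False] * (n + 1)        # x[v] = value of variable v; x[0] unused
        changed = True
        while changed:               # flip any variable that raises the potential
            changed = False
            for v in range(1, n + 1):
                before = potential(x)
                x[v] = not x[v]
                if potential(x) > before + 1e-9:
                    changed = True
                else:
                    x[v] = not x[v]  # revert a non-improving flip
        return x, sum(true_lits(cl, x) > 0 for cl in clauses)

    clauses = [(1, 2), (-1, 2), (1, -2), (-1, -2), (2, 3)]
    _, sat = nonoblivious_max2sat(clauses, 3)
    print(sat, "of", len(clauses), "clauses satisfied")   # 4 of 5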
Anil Kamath:
The classical occupancy problem is concerned with studying the number of empty bins resulting from a random allocation of $m$ balls to $n$ bins. We provide a series of tail bounds on the distribution of the number of empty bins. These tail bounds should find application in randomized algorithms and probabilistic analysis. Our motivating application is the following well-known conjecture on the threshold phenomenon for the satisfiability problem. Consider random 3-SAT formulas with $cn$ clauses over $n$ variables, where each clause is chosen uniformly and independently from the space of all clauses of size 3. It has been conjectured that there is a sharp threshold for satisfiability at $c^*$ approximately 4.2. We provide the first non-trivial upper bound on the value of $c^*$, showing that for $c > 4.758$ a random 3-SAT formula is unsatisfiable with high probability. This result is based on a structural property, possibly of independent interest, whose proof needs several applications of the occupancy tail bounds.
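The occupancy setting is easy to simulate. A quick Python check (ours, not from the talk) compares the empirical number of empty bins against its exact expectation $n(1-1/n)^m$:

    import random

    def empty_bins(m, n):
        counts = [0] * n
        for _ in range(m):
            counts[random.randrange(n)] += 1   # throw one ball uniformly
        return sum(c == 0 for c in counts)

    m, n, trials = 5000, 1000, 200
    mean = sum(empty_bins(m, n) for _ in range(trials)) / trials
    print("empirical mean:   ", mean)
    print("exact expectation:", n * (1 - 1 / n) ** m)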
Joint work with Rajeev Motwani, Krishna Palem, and Paul Spirakis.
We develop efficient algorithms for embedding graphs in low-dimensional spaces with small distortion; a sketch of the basic embedding follows the numbered list below. Further algorithmic applications include:
1. A simple, unified approach to a number of problems on multicommodity flows, including the Leighton-Rao Theorem [T. Leighton and S. Rao, An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms, FOCS 29 (1988), 422-431] and some of its extensions.
2. For graphs embeddable in low-dimensional spaces with a small distortion, we can find low-diameter decompositions (in the sense of [B. Awerbuch and D. Peleg, Sparse partitions, FOCS 31 (1990), 503-513] and [N. Linial and M. Saks, Low diameter graph decompositions, Combinatorica 13 (1993), 441-454]). The parameters of the decomposition depend only on the dimension and the distortion and not on the size of the graph.
3. In graphs embedded this way, small balanced separators can be found efficiently.
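The basic embedding mentioned above can be sketched as follows (a rough Python rendition of the Bourgain-style construction; the coordinate counts and set sizes are the standard illustrative choices, not tuned): each vertex is mapped to its distances from randomly chosen vertex sets of geometrically growing sizes.

    import random

    def bourgain_embedding(dist, V, seed=0):
        # coordinates are distances to random sets of sizes 1, 2, 4, ...;
        # O(log^2 n) coordinates give distortion O(log n) in this scheme
        rng = random.Random(seed)
        n, logn = len(V), max(1, len(V).bit_length())
        coords = {v: [] for v in V}
        for j in range(logn):                    # set size 2^j
            for _ in range(logn):                # O(log n) sets per size
                S = rng.sample(V, min(2 ** j, n))
                for v in V:
                    coords[v].append(min(dist[v][u] for u in S))
        return coords

    # usage on a 5-vertex path with hop distances
    V = list(range(5))
    dist = {v: {u: abs(v - u) for u in V} for v in V}
    emb = bourgain_embedding(dist, V)
    print(len(emb[0]), "coordinates per vertex")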
Faithful low-dimensional representations of statistical data allow for meaningful and efficient clustering, which is one of the most basic tasks in pattern recognition. For the (mostly heuristic) methods used in the practice of pattern recognition, see [R.O. Duda and P.E. Hart, "Pattern Classification and Scene Analysis", John Wiley and Sons, New York, 1973], especially chapter 6.
Keywords: Graph, metric space, normed space, embedding, isometry, distortion, separator, multicommodity flow, clustering, dimension, graph decomposition, algorithm.
Joint work with Nathan Linial and Yuri Rabinovich.
However, $d$-wise independence is often an overestimate of the degree of randomness actually required by the constraints. We investigate the problem of constructing distributions that satisfy only a given set of constraints. We show that if the constraints are consistent, then there exists a distribution satisfying them that is supported by a ``small'' sample space (one whose cardinality is equal to the number of constraints).
We provide two constructive versions of this proof for the important case where the constraints are satisfied by independent random variables. The first is a polynomial-time algorithm for constructing a distribution that precisely satisfies a set of _expectation constraints_. The second is an NC algorithm for constructing a distribution that approximately satisfies constraints over any event that can be verified by finite automata (including independence constraints, constraints on the parity, and more). The constraints are satisfied up to a small _relative_ error. The parallel construction relies on a new and independently interesting parallel algorithm for converting a solution to a linear system into an almost basic approximate solution to the same system.
We demonstrate the use of this technique for derandomizing algorithms by applying it to the problem of finding large independent sets in sparse $d$-uniform hypergraphs. In particular, we provide the first NC derandomization for arbitrary $d$. We also show how the linear programming perspective suggests new proof techniques which might be useful in general probabilistic analysis.
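The support-size claim is, at heart, a linear-programming fact: a consistent system of $k$ linear constraints on a probability distribution admits a basic feasible solution with at most $k$ nonzero probabilities. A hedged scipy sketch (the constraint set and solver choice are ours, for illustration):

    import numpy as np
    from scipy.optimize import linprog

    # sample space: all 2^4 outcomes of four binary random variables
    outcomes = [(a, b, c, d) for a in (0, 1) for b in (0, 1)
                for c in (0, 1) for d in (0, 1)]

    # constraints: each variable has mean 1/2, each adjacent pair has
    # E[X_i X_{i+1}] = 1/4, and the probabilities sum to 1
    rows, rhs = [], []
    for i in range(4):
        rows.append([o[i] for o in outcomes]); rhs.append(0.5)
    for i in range(3):
        rows.append([o[i] * o[i + 1] for o in outcomes]); rhs.append(0.25)
    rows.append([1.0] * len(outcomes)); rhs.append(1.0)

    # simplex-type solvers typically return a basic (vertex) solution,
    # whose support size is at most the number of constraints
    res = linprog(np.zeros(len(outcomes)), A_eq=np.array(rows),
                  b_eq=np.array(rhs), bounds=(0, 1), method="highs")
    support = [(o, p) for o, p in zip(outcomes, res.x) if p > 1e-9]
    print(len(support), "support points for", len(rhs), "constraints")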
The first part of this research is joint work with Nimrod Megiddo, and the second part with David Karger.
The talk will be a survey of this area and will be self-contained.
We also provide new results on the problem of estimating the worst case ratio between the independence number and the $\theta$-function of a graph.
Joint work with Noga Alon.
Using this new framework, we derive intuitively appealing results which were unattainable in the previous framework. First, we show that deterministic marking algorithms such as the least recently used (LRU) page replacement algorithm achieve constant competitive ratios against a restricted adversary which must choose a request sequence exhibiting a quantifiable amount of locality of reference, and we show that this performance guarantee is, in some sense, optimal. We prove a similar result for the randomized marking algorithm. These results lead us to a better understanding of when on-line algorithms can guarantee good performance, and this, in turn, helps us explain why the competitive ratio of the optimal on-line algorithm increases with $k$, the size of fast memory. Next, we show that on-line algorithms augmented with lookahead can achieve better competitive ratios than on-line algorithms without lookahead. In particular, we show that a modified LRU algorithm with sufficient lookahead can achieve a competitive ratio of 2 even against an unrestricted adversary. Finally, because our model includes both fast memory size and the relative difference in access speed between fast and slow memories, we are able to compare the effects of increasing fast memory size, fast memory speed, and slow memory speed on system performance.
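For readers who want a concrete baseline for such comparisons, here is a small fault-counting simulator (ours; it implements plain LRU and Belady's offline optimum, not the paper's restricted-adversary model):

    def lru_faults(requests, k):
        # page faults of LRU with fast memory of size k
        cache, faults = [], 0
        for p in requests:
            if p in cache:
                cache.remove(p)
            else:
                faults += 1
                if len(cache) == k:
                    cache.pop(0)          # evict the least recently used page
            cache.append(p)               # p is now most recently used
        return faults

    def opt_faults(requests, k):
        # Belady's offline optimum: evict the page used farthest ahead
        cache, faults = set(), 0
        for i, p in enumerate(requests):
            if p in cache:
                continue
            faults += 1
            if len(cache) == k:
                def next_use(q):
                    rest = requests[i + 1:]
                    return rest.index(q) if q in rest else float("inf")
                cache.remove(max(cache, key=next_use))
            cache.add(p)
        return faults

    seq = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] * 5   # some locality of reference
    print("LRU:", lru_faults(seq, 3), "OPT:", opt_faults(seq, 3))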
We develop a general scheme for implementing PET, and we also provide an information theoretic proof that the total bandwidth of the scheme for the given priority levels is optimal.
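The engine underneath a scheme like this is an erasure code per priority level: a level assigned redundancy $b$ is spread over $n$ packets so that any $b$ of them suffice to recover it. A toy Reed-Solomon-style sketch over the prime field GF(257) (field size, packetization, and function names are our illustration, not the PET codebase):

    P = 257  # a prime; symbols live in the field GF(P)

    def encode(msg, n):
        # msg = coefficients of a degree-<len(msg) polynomial f;
        # a share is (x, f(x)), and any len(msg) shares determine f
        return [(x, sum(m * pow(x, j, P) for j, m in enumerate(msg)) % P)
                for x in range(1, n + 1)]

    def decode(shares, k):
        # Lagrange interpolation over GF(P), in coefficient form
        pts = shares[:k]
        coeffs = [0] * k
        for i, (xi, yi) in enumerate(pts):
            basis, denom = [1], 1                      # l_i(x), low-order first
            for j, (xj, _) in enumerate(pts):
                if j == i:
                    continue
                new = [0] * (len(basis) + 1)
                for t, b in enumerate(basis):          # multiply basis by (x - xj)
                    new[t] = (new[t] - xj * b) % P
                    new[t + 1] = (new[t + 1] + b) % P
                basis = new
                denom = denom * (xi - xj) % P
            scale = yi * pow(denom, P - 2, P) % P      # divide via Fermat inverse
            for t, b in enumerate(basis):
                coeffs[t] = (coeffs[t] + scale * b) % P
        return coeffs

    import random
    msg = [72, 105, 33]                  # a high-priority level: 3 symbols
    shares = encode(msg, 7)              # spread over 7 packets
    random.shuffle(shares)               # packets arrive in any order
    assert decode(shares[:3], 3) == msg  # any 3 packets recover the level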
Our primary practical goal at this point is to incorporate PET into real-time multicast teleconferencing applications. This would allow graceful degradation of picture quality as a function of packet loss. Preliminary simulations of a prototype PET system are quite encouraging: When sending an MPEG video stream without any protection over the network, the video degrades substantially and erratically with even moderate packet loss, whereas if PET is used to selectively protect the MPEG video, the degradation is reasonable and controlled.
We have focused on minimizing the computational processing time used by PET. We have developed a novel software implementation of the PET system that can encode a 3 Mbit/second MPEG video stream into IP packets and decode the packets back into an MPEG video stream in real time on a 50 MHz SPARC 10 machine.
In this talk we show that the median can be found using at most $2.95n$ comparisons. We also show that the $an$-th element can be found using at most $(1 + a\log(1/a) + O(a\log\log(1/a)))n$ comparisons. This comes very close to the $(1+H(a))n - o(n)$ lower bound obtained by Bent and John. As $H(a) = a\log(1/a) + O(a)$, the coefficient of $a\log(1/a)$ in our new upper bound cannot be improved.
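For context, the classical deterministic selection algorithm (median of medians; emphatically not the talk's $2.95n$ procedure) already finds the $an$-th element in $O(n)$ comparisons, but with a much larger constant:

    def select(a, i):
        # i-th smallest element of a (0-indexed), median-of-medians pivot rule
        if len(a) <= 5:
            return sorted(a)[i]
        # median of each group of 5, then the median of those medians
        medians = [sorted(a[j:j + 5])[len(a[j:j + 5]) // 2]
                   for j in range(0, len(a), 5)]
        pivot = select(medians, len(medians) // 2)
        lo = [x for x in a if x < pivot]
        hi = [x for x in a if x > pivot]
        eq = len(a) - len(lo) - len(hi)
        if i < len(lo):
            return select(lo, i)
        if i < len(lo) + eq:
            return pivot
        return select(hi, i - len(lo) - eq)

    import random
    xs = random.sample(range(1000), 101)
    assert select(xs, 50) == sorted(xs)[50]   # the median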
This is joint work with Uri Zwick.
This is joint work with Saul Schleimer and Daniel Wilkerson.
Joint work with Dan Kleitman and Tom Leighton.
Some of the major areas of work include:
* Developing and implementing new interior-point techniques for efficiently solving large-scale multi-commodity network flow problems. These algorithms are based on solving linear and quadratic programming formulations of the problem.
* Designing fast algorithms based on continuous methods for the approximate solution of the {\em minimum-cost} multi-commodity flow problem. These techniques are geared towards distributed implementation.
* Applying multi-commodity flow techniques to the design and implementation of centralized and distributed algorithms for on-line routing of virtual circuits in ATM (Asynchronous Transfer Mode) networks.
Our approach is to design algorithms that have provably good worst-case or average-case performance and to demonstrate their efficiency by implementing and evaluating them on real data.
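As a toy illustration of the linear programming formulations mentioned in the first item above, the following scipy sketch (instance, variable layout, and solver choice are ours) maximizes the concurrent throughput $z$ of a two-commodity flow on a four-node network:

    from scipy.optimize import linprog

    nodes = [0, 1, 2, 3]
    cap = {(0, 1): 1.0, (1, 3): 1.0, (0, 2): 1.0, (2, 3): 1.0, (1, 2): 1.0}
    arcs = [(u, v) for (u, v) in cap] + [(v, u) for (u, v) in cap]
    demands = [(0, 3, 1.0), (1, 2, 1.0)]       # (source, sink, demand)

    K, A = len(demands), len(arcs)
    nvar = K * A + 1                           # per-commodity arc flows, plus z
    col = lambda k, a: k * A + a

    A_eq, b_eq = [], []
    for k, (s, t, d) in enumerate(demands):    # flow conservation at every node
        for v in nodes:
            row = [0.0] * nvar
            for a, (u, w) in enumerate(arcs):
                if u == v: row[col(k, a)] += 1.0
                if w == v: row[col(k, a)] -= 1.0
            if v == s:   row[-1] = -d          # net outflow at source = z * d
            elif v == t: row[-1] = d           # net inflow at sink    = z * d
            A_eq.append(row); b_eq.append(0.0)

    A_ub, b_ub = [], []
    for (u, v), c in cap.items():              # both directions share capacity
        row = [0.0] * nvar
        for k in range(K):
            row[col(k, arcs.index((u, v)))] = 1.0
            row[col(k, arcs.index((v, u)))] = 1.0
        A_ub.append(row); b_ub.append(c)

    obj = [0.0] * nvar; obj[-1] = -1.0         # linprog minimizes, so min -z
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print("max concurrent throughput z =", round(-res.fun, 3))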
Joint work with M. Schlansker, Hewlett Packard Labs.
We will show that this problem is polynomial on average for some very special input distributions. We then investigate interactive proof (IP) protocols for verifying that $GR(x)=y$, and show that an interactive protocol can check GR in $O(n)$ interactive steps. The new aspect of this protocol (ISTTCS-94) is that it is presented in purely graph-theoretic terms.
We then discuss possible extensions of this problem to other uncertainty models encountered in control theory.
This talk presents a recent breakthrough in the area of dynamic graph algorithms: we present the first dynamic algorithms for various graph connectivity problems with polylogarithmic time per operation. The previously best algorithms took $\Omega(\sqrt{n})$ time per operation. Our algorithms almost match the corresponding lower bounds. The algorithms are Las Vegas-type randomized algorithms, which use simple data structures and have small constant factors.
This represents joint work with Valerie King and will be presented at STOC 95.
We give a polynomial time algorithm for constructing the minimal trellis for a code over an abelian group, given a generator matrix for the code. This extends the work of Kschischang and Sorokine, who handled the case of linear codes over fields (such a code is a vector subspace). A key step in our work is handling codes over cyclic groups $C_{p^\alpha}$, where $p$ is a prime. Such a code can be viewed as a submodule over the ring $Z_{p^\alpha}$. Because of the presence of zero-divisors in the ring, submodules do not share the useful properties of a vector subspace. We get around this difficulty by restricting the notion of linear combination to $p$-linear combination, and by introducing the notion of a $p$-generator sequence, which enjoys properties similar to those of a basis for a vector subspace. In turn, this leads to the minimal trellis algorithm.
We also present two characterizations of minimal trellises for group codes. The first shows that a trellis for a group code is minimal iff it is two-way proper. All our algorithmic results use this characterization to establish minimality of the trellis constructed. The second gives algebraic structural properties of the set of transitions of the minimal trellis for a group code; we call this the Transition Space Theorem. This theorem complements the State Space Theorem of Forney and Trott, which gives algebraic structural properties of the set of states of the minimal trellis.
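To make ``minimal trellis'' concrete, here is a brute-force sketch for a tiny binary linear code (the Kschischang-Sorokine setting that the talk extends; the talk's polynomial-time algorithm for group codes avoids such enumeration). Two prefixes reach the same trellis state at depth $i$ exactly when they admit the same set of completions:

    from itertools import product

    # toy binary linear code: the row space of G over GF(2)
    G = [(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1)]
    n = 4
    code = set()
    for coeffs in product((0, 1), repeat=len(G)):
        code.add(tuple(sum(c * row[i] for c, row in zip(coeffs, G)) % 2
                       for i in range(n)))

    # minimal trellis: merge prefixes with identical suffix (future) sets
    for i in range(n + 1):
        futures = {}
        for cw in code:
            futures.setdefault(cw[:i], set()).add(cw[i:])
        states = {frozenset(s) for s in futures.values()}
        print(f"depth {i}: {len(states)} states")   # profile 1, 2, 2, 2, 1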
Joint work with Huzur Saran and B. Sundar Rajan (IIT-New Delhi).
This is joint work with Micha Sharir.
Hosted by Dan Halperin.