Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.
Joint work with Boris Cherkassky and Tomasz Radzik.
This is joint work with S. Cheng, E. Deprit, J. Jones, S. Shih.
We obtain $RNC^3$ approximation algorithms for set cover and $RNC^4$ algorithms for its generalizations. These algorithms are guaranteed to output solutions within a factor of $O(\log n)$ of optimal, where $n$ is the number of elements (variables). This factor is essentially tight (modulo NP-completeness) by the recent result of Lund and Yannakakis. Parallel approximation algorithms were previously known for set cover, though not for its generalizations.
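For contrast with the parallel algorithms, the familiar sequential greedy algorithm attains the same $O(\log n)$ factor. A minimal sketch (the function name and encoding are illustrative, not from the talk):

```python
def greedy_set_cover(universe, sets):
    """Sequential greedy set cover: repeatedly pick the set covering
    the most still-uncovered elements.  Achieves an O(log n)
    approximation factor, matching the parallel guarantee."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[i] & uncovered:
            raise ValueError("universe is not coverable by the sets")
        chosen.append(i)
        uncovered -= sets[i]
    return chosen
```

The returned list indexes the chosen sets in the order the greedy rule picks them.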
This is joint work with Prof. Vijay Vazirani, Indian Institute of Technology, Delhi.
In bin packing, we are given a sequence of N items with sizes in the interval (0,1], and asked to pack them into a minimum number of unit-capacity bins. Under BEST FIT (in practice perhaps the best of the simple on-line heuristics), items are packed in order, with each item going into the bin with the smallest gap that can hold it (ties broken arbitrarily). If item sizes are uniformly distributed in the interval (0,1], it can be shown that the expected waste (the sum of the gaps in all the bins) grows sublinearly with N. Surprisingly, if the distribution of sizes is uniform over an interval (0,u], u < 1, experiments indicate that the waste grows linearly. This also holds if sizes are uniform over the discrete sizes i/K, 1 <= i <= J, when J < K-2 is sufficiently large. Unfortunately, proving this observation (for ANY such continuous or discrete distribution) has remained an open problem for almost a decade.
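The BEST FIT rule described above can be sketched directly (a naive quadratic-time illustration of the packing rule only, not an efficient implementation; the function name is illustrative):

```python
def best_fit(items):
    """Pack items (sizes in (0,1]) with the BEST FIT rule: each item
    goes into the open bin with the smallest gap that can hold it,
    or into a new bin if none fits.  Returns the list of final gaps."""
    gaps = []  # remaining capacity of each open bin
    for size in items:
        fitting = [g for g in gaps if g >= size]
        if fitting:
            best = min(fitting)  # smallest gap that holds the item
            gaps[gaps.index(best)] = best - size
        else:
            gaps.append(1.0 - size)  # open a new bin
    return gaps

# The waste studied in the talk is sum(gaps); the open question
# concerns how it grows with N for various size distributions.
```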
Ed Coffman, Peter Shor, Richard Weber and I have been studying the simplest of the distributions for which linear waste appears to occur (the discrete uniform distribution with K=11, J=8). Assuming one is prepared to trust the result of a 24-hour computer run, we now have a proof. This talk will cover the bin packing background, and then give an overview of the mathematical and algorithmic techniques involved in our proof, as well as the general applicability and limitations of the approach. We also discuss related computer proofs that show that the expected waste is bounded when K=11, J<8 or K<11, J
We also describe an improved $O(nk \log^2 k)$ upper bound for the case when the $k$ polyhedra have the form $A_i \oplus B$, $i=1,\ldots,k$, where $A_i$'s are disjoint convex polyhedra, $B$ is a convex polyhedron, and $\oplus$ denotes the Minkowski sum. This case is relevant for translational motion planning.
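As background for the $A_i \oplus B$ case, the Minkowski sum of two convex polygons in the plane can be computed as the convex hull of all pairwise vertex sums; this is a simple $O(|A|\,|B| \log(|A|\,|B|))$ illustration, not the standard linear-time edge merge, and the function names are illustrative only:

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counterclockwise."""
    def chain(pts):
        h = []
        for p in pts:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    pts = sorted(set(points))
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def minkowski_sum(A, B):
    """Minkowski sum of convex polygons A and B (vertex lists):
    the convex hull of all pairwise vertex sums a + b."""
    return convex_hull([(ax + bx, ay + by) for ax, ay in A for bx, by in B])
```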
Efficient incremental randomized algorithms exist for constructing the boundary of the union in both cases. We will outline them if time permits.
This is joint work with Micha Sharir.
These results were obtained jointly with Hal Gabow, Michel Goemans, Andrew Goldberg, Milena Mihail, Serge Plotkin, David Shmoys, Eva Tardos, and Vijay Vazirani.
The study of a mysterious algorithm for transitive closure in 1973 led us to the Postman algorithm for delivering mail using stack-based storage. This algorithm evolved in several stages to a priority queue algorithm, known as Fishspear, which has a number of unusual features.
Fishspear can be implemented efficiently using sequential storage such as stacks or tapes, making it potentially attractive for implementation of very large queues on paged memory systems. It is comparable to the usual heap algorithm in its worst case running time, and its relative performance is much better in many common situations.
A full paper, coauthored with Mike Fischer, is about to be published in the JACM.
This talk deals with problems related to the number of manipulators needed to assemble configurations of simple geometric objects. More specifically, we find counterexamples to a conjecture that B. K. Natarajan made in 1985: that every set of disjoint convex objects in 3-space could be assembled (or, by reversing time, taken apart) by translation with two hands---meaning that some proper subset could be translated to infinity without disturbing its complement.
In 2-d, any set of convex objects with disjoint interiors can be ``taken apart'' by translation in any chosen direction---there is always one object that can be translated away without disturbing the rest. The graphics textbook example of three overlapping triangles shows that one cannot have such a strong property in 3-d; in fact, sets where no single object could be translated to infinity were already known.
We have proved that Natarajan's conjecture holds for five or fewer objects, and we exhibit a minimal counterexample with six objects. We extend the counterexample to a configuration of 30 disjoint convex objects that cannot be taken apart with two hands using arbitrary rigid motions (isometries).
The counterexamples are formed by the action of a group on one carefully constructed convex object. The resulting symmetries help reduce the case analysis and make for an interesting sculpture when the objects are made from 2-meter-long sections of aluminum pipe.
This is joint work with Jorge Stolfi.
In the talk, we will survey recent work on improved algorithms for the multicommodity flow problem. Particular emphasis will be placed on approximation algorithms that can compute near optimal flow paths in a relatively small amount of time. A new and very simple local-control algorithm for multicommodity flow will also be presented. The latter algorithm is notable in that it does not rely on augmenting paths, shortest paths, min-cost paths, or similar techniques to push flow through the network. As a consequence, the algorithm is particularly well-suited for applications involving distributed networks.
We will also briefly mention recent work on the development of a max-flow/min-cut theorem for multicommodity flows.
The talk will be introductory in nature.
As an immediate consequence of the above result, one can infer that the chromatic number of a graph cannot be approximated to within $n^{1/20}$. We show a further improvement on this hardness result by presenting a more efficient reduction from the clique number to the chromatic number problem (improving on [LY,KLS]). This yields an $n^{1/10}$-factor hardness of approximation result for the chromatic number problem. We also report some marginal improvements for the MAX SAT problem.
This is joint work with Mihir Bellare at IBM, T.J. Watson.
Due to the large bandwidth-delay product of broadband networks, there are several restrictions on virtual circuit implementations. First, each circuit has to be routed on a single path. Second, rerouting circuits is either heavily discouraged or outright forbidden.
This raises several natural questions:
What path should be used to route a given circuit?
Which requests should be satisfied and which should be rejected?
In this talk we will discuss recent progress in online algorithms for virtual circuit routing. There are several natural models that differ according to the optimization parameters; I will describe two such models. In the ``congestion model'', rejections are not allowed and the goal is to minimize the maximum congestion. In the ``throughput model'', rejections are allowed and the goal is to maximize the total number of transmitted bits.
The performance of virtual circuit routing algorithms can be naturally measured by a competitive ratio. To illustrate the online routing techniques, I will give a fairly detailed description of an $O(\log (nT))$-competitive strategy for the ``throughput model'', where $n$ is the number of nodes and $T$ is the maximum call duration. Roughly speaking, this means that this strategy achieves throughput within an $O(\log (nT))$ factor of the highest possible throughput achievable by an omniscient algorithm that knows all of the requests in advance.
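The talk's algorithms are not reproduced here, but the exponential-cost idea common to this line of work can be illustrated in the simpler congestion model: route each circuit on the path that least increases a potential of the form $\sum_e a^{\mathrm{load}(e)}$, so heavily loaded edges become rapidly more expensive. The sketch below assumes unit edge capacities and permanent circuits; all names and the parameter $a$ are illustrative:

```python
import heapq
from collections import defaultdict

def route_requests(adj, requests, a=2.0):
    """Route each (s, t) circuit on the path minimizing the increase
    of sum_e a**load(e); return the resulting maximum edge load."""
    load = defaultdict(int)

    def cheapest_path(s, t):
        # edge weight = potential increase if one more circuit uses it
        dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                break
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v in adj[u]:
                e = frozenset((u, v))
                nd = d + a ** (load[e] + 1) - a ** load[e]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, u = [t], t
        while u != s:
            u = prev[u]
            path.append(u)
        return path[::-1]

    for s, t in requests:
        path = cheapest_path(s, t)
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += 1
    return max(load.values())
```

On a graph with two disjoint two-hop routes between s and t, the potential steers the second circuit away from the first one's path, keeping the maximum congestion at 1.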
This talk is based on several papers that are joint work with Aspnes, Awerbuch, Azar, Fiat, and Waarts.
(Joint work with Subhash Suri.)
In the absence of error, the correct placement can be found easily using a PQ-tree. The data is never free from error, however, and algorithms are differentiated by their performance in the presence of errors. We approach errors from two angles: by detecting and removing them, and by using algorithms which are robust in the presence of errors.
We generate solutions to an instance of the problem by using a sequence of algorithms. First, we use one of two methods for screening errors from the data. After generating a good initial starting solution and possibly subdividing the problem, we use one of two algorithms to generate a near-optimal solution to an objective function. The objective function maximizes the probability of the noiseless data given the measured data. The screening process may disconnect the data, so after obtaining an ordering for each connected component of clones and probes, we reconnect the components in the most likely order.
We have also developed a strategy to recover noiseless data through an interactive process which detects anomalies in the data and retests questionable entries in the incidence matrix of clones and probes.
Since all our central computational problems are NP-hard, we evaluate the effectiveness of our algorithms empirically, using simulated data as well as real data from human chromosome 21.
This is joint work with Farid Alizadeh, Richard Karp, and Geoff Zweig.
Recent work has yielded a simple explanation: many NP-hard optimization problems are also NP-hard to approximate. Among the problems for which this has been proved are Independent Set, Chromatic Number, Set Cover, Vertex Cover, Max-Cut, Decoding to the nearest codeword, Learning in the presence of Errors, and many others. At the heart of these new results is a new type of NP-completeness reduction which acts globally (as opposed to the classical reductions, which used local transformations involving gadgets etc.). A more interesting way to view this reduction is as a new probabilistic definition of NP: NP is exactly the class of languages for which membership proofs (i.e., certificates) can be checked in polynomial time by using only O(log n) random bits and examining only O(1) bits of them.
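In the standard notation of probabilistically checkable proofs, the characterization just stated reads:

```latex
\mathrm{NP} \;=\; \mathrm{PCP}\bigl(O(\log n),\, O(1)\bigr)
```

that is, the verifier uses $O(\log n)$ random bits and queries only $O(1)$ bits of the proof.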
The talk will be a survey of this new area (including the work of other researchers) and its open problems.
Joint work with Noga Alon.
The natural representation for most situations is as a game tree, where the structure of the tree represents the dynamics of the situation. When solving games in this form, the standard procedure has been to first transform the game tree into the normal form; that is, into a matrix listing the payoff for each strategy combination of the players. It is then often possible to use a standard algorithm, such as linear programming or linear complementarity, to find the solution to the game. However, the normal form of a game tree is often very large---exponential in the size of the game tree---thus making the problem intractable in all but the simplest cases. In this talk, we demonstrate the advantages of leaving the game in the form of a tree, and present efficient algorithms to solve games in that form.
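The exponential blowup of the normal form is easy to see by counting pure strategies: a pure strategy is a full contingency plan that fixes an action at every decision node of the player, so the count multiplies across nodes. A small illustration (the dict-based tree encoding is an assumption of this sketch, not from the talk):

```python
def count_pure_strategies(tree, player):
    """Number of pure strategies = product of the branching factors
    over all decision nodes owned by the player (a plan specifies a
    move at every such node, reached or not)."""
    count, stack = 1, [tree]
    while stack:
        node = stack.pop()
        children = node.get("children", [])
        if children and node.get("player") == player:
            count *= len(children)
        stack.extend(children)
    return count

def binary_tree(depth, player):
    """Complete binary game tree, all decision nodes owned by one
    player -- a worst case for the normal-form transformation."""
    if depth == 0:
        return {}  # leaf / payoff node
    return {"player": player,
            "children": [binary_tree(depth - 1, player) for _ in range(2)]}
```

A depth-4 tree has only 31 nodes but $2^{15} = 32768$ pure strategies, so its normal-form matrix has 32768 rows.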
The talk will be self-contained, and will include an introduction to the relevant concepts from game theory.
This is joint work with Nimrod Megiddo and Bernhard von Stengel.
This is joint work with Philip Klein, Satish Rao, and Sairam Subramanian.
Another way to answer this question is to demonstrate that {\em known} methods are inherently too weak to solve problems such as $P\stackrel{?}{=}NP$. This approach was taken by Baker, Gill, and Solovay \cite{BGS}, who used oracle separation results for many major complexity classes to argue that relativizing proof techniques could not solve these problems. Since relativizing proof techniques involving diagonalization and simulation were the only tools available at the time of their work, progress along known lines was ruled out.
Instead, people started to look at these problems in terms of Boolean complexity. Along these lines, many (non-relativizing) proof techniques have been discovered and used to prove lower bounds. These techniques are highly combinatorial; they exist in a much larger variety than their recursion-theoretic predecessors.
In this paper we introduce the notion of a {\em natural} proof. We argue that {\em all lower bound proofs known to us in Boolean complexity either are natural or can be represented as natural}. We show that if a cryptographic hardness assumption is true, then {\em no natural proof can prove superpolynomial lower bounds for general circuits}. Furthermore, no complexity class containing good pseudo-random function generators has a natural proof against it.
Natural proofs form a natural hierarchy depending on the degree to which the combinatorial property involved in the proof is constructive. We show, without using any cryptographic assumption, that $AC^0$-natural proofs, which suffice to prove the parity lower bounds of Furst-Saxe-Sipser, Ajtai, Yao, and Hastad, are inherently incapable of proving the bounds for $AC^0[q]$-circuits of Razborov and Smolensky. We also give a technical argument suggesting one reason that natural proofs are indeed natural: we show that every formal complexity measure that can prove super-polynomial lower bounds for a {\em single} function can do so for {\em almost all} functions. This is one of the key requirements for a natural proof in our sense.
This is joint work with Alexander Razborov.
We present an overview of subsequent developments in this line of research, which can be viewed as the investigation of optimization problems from the standpoint of descriptive complexity. We delineate the expressive power of the logical framework and establish the existence of logical hierarchies for both maximization problems and minimization problems. This approach makes it possible to draw sharp distinctions between maximization and minimization problems. More specifically, it turns out that logical definability has different implications for NP-maximization problems than it has for NP-minimization problems, in terms of both expressive power and approximation properties.
Most of the results reported in this talk represent joint work with M. N. Thakur.
By contrast, more straightforward search strategies can be shown to take exponential time in the worst case. Our original motivation for studying this model was that it mimics the polynomial time behavior of simulated annealing algorithms. The tree analysis suggests that a "go with the winners" implementation of simulated annealing might be superior to several independent runs of simulated annealing.
Joint work with David Aldous.
When computing with arrangements, one may benefit from focusing on the portion of an arrangement that is relevant to the problem at hand, instead of computing the entire arrangement. The lower envelope of an arrangement, any single cell, or the zone of an additional object in the arrangement (i.e., the collection of cells of the arrangement intersected by this object) are among the portions that lead to such savings. The centerpiece of the talk will be the following recent result for a single cell in 3D arrangements: Given an arrangement of $n$ low-degree algebraic surface patches in 3-space, we show that the maximum combinatorial complexity of any single cell in the arrangement is $O(n^{2+\eps})$, for any $\eps>0$, where the constant of proportionality depends on $\eps$ and on the maximum degree of the given surfaces and of their boundaries. This extends several previous results, and has applications to motion planning of general robot systems with three degrees of freedom. (In this setting, the lower envelope and the zone problems are special cases of the single cell problem.)
This is joint work with Micha Sharir.
In our framework, the ``request sequence'' to be handled by the competitive algorithm includes the interleaving of process step times. Any distributed algorithm mediates between unpredictable users above it and an unpredictable system below it. To our knowledge, when competitive analysis has been applied to distributed algorithms in the literature, it has been applied only to the ``upper'' part of the problem: the unpredictable sequence of requests from the users. The ``lower'' part of the problem has been considered only in standard distributed terms. Using this approach, the competitive ratio for an algorithm compares its cost to satisfy a particular sequence of requests under the worst possible conditions in the system to the cost for an off-line algorithm to satisfy the same requests under conditions that may be very different. Our approach is to treat all sources of nondeterminism equally, and therefore to require that the on-line and off-line algorithms operate under the same conditions (for example, under the same schedule of process steps). This requirement creates additional difficulties: we must be very careful in defining what we mean by ``the same conditions.''
In this talk we extend the notion of competitive analysis to one appropriate for wait-free algorithms. We present Speedy Collect, an algorithm for the cooperative collect primitive, and give a competitive analysis of this algorithm.
This is joint work with Jim Aspnes, Miki Ajtai, and Orli Waarts.
Joint work with Yossi Azar (Tel-Aviv), Anna Karlin (Digital), and Eli Upfal (Weizmann).
The talk, which represents joint work with P. Kolaitis, will be self-contained. Only a basic familiarity with first-order logic will be assumed.
We then discuss the minimum cut problem, a basic optimization problem with numerous applications. We give a simple new randomized algorithm which outperforms all previously known algorithms for the problem, running significantly faster and giving more information in its output. This algorithm is also the first efficient parallel algorithm for the problem. We show how the algorithm can be derandomized to yield the first deterministic parallel algorithm for the problem.
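The abstract does not name the algorithm, but the description matches randomized edge contraction; here is a hedged sketch of that general technique (not necessarily the talk's algorithm), with a union-find structure tracking the contracted supernodes:

```python
import random

def contract_min_cut(edges, n, trials=500):
    """Estimate the minimum cut of an n-vertex graph by repeated
    random edge contraction; each trial finds a minimum cut with
    probability at least 2/(n*(n-1)), so many trials succeed
    with high probability."""
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v

        components, pool = n, edges[:]
        random.shuffle(pool)
        for u, v in pool:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:          # contract edge (u, v)
                parent[ru] = rv
                components -= 1
        # edges still crossing the two supernodes form a cut
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```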
The minimum cut algorithm leads in turn to several other random sampling techniques which can be applied to problems involving cuts in graphs, such as the maximum flow problem. We use random sampling to compute exact and approximate solutions to such problems quickly. A particular consequence of the technique is a new randomized rounding approach to the ``network synthesis problem'' of constructing minimum cost networks satisfying certain connectivity requirements.
For the past two years, we have been working to improve the algorithms, code, functionality, and documentation in the most popular general-purpose linkage analysis package, which is called LINKAGE. User-reported speedups for our sequential code range from multiplicative factors of 2 to 72, with a factor of 10 to 15 being typical. We have also done two experimental parallel implementations on a network of 8 workstations. The parallel implementations run on top of the TreadMarks distributed shared memory system being developed at Rice; they can also run on existing shared memory multiprocessors.
The first half of the talk will be an introduction to linkage analysis from a Computer Science point of view.
The work on the sequential algorithms is joint with Robert Cottingham (Baylor College of Medicine) and Ramana Idury (formerly of Rice, now at USC), Sandeep Gupta and K. Shriram (both of Rice). The work on the parallel implementations is joint with Alan Cox, Sandhya Dwarkadas, Sandeep Gupta, Peter Keleher, and Willy Zwaenepoel (all from Rice).
For network optimization problems, significant progress has been made recently in the design of algorithms with good worst-case bounds. Although good worst-case bounds do not guarantee good practical performance, some of the recent algorithms are practical, outperforming previous methods by a significant margin.
We consider four important optimization problems: maximum flow, minimum-cost flow, shortest path, and assignment. We survey the recent results and describe implementations of some of the algorithms, including heuristics and data structures that speed up the codes in practice. We present experimental data for our implementations and for established codes (such as network simplex and the Hungarian method). Our implementations are more robust and have significantly better overall performance.
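As a point of reference for the shortest-path codes discussed, a binary-heap Dijkstra baseline is sketched below; the talk's actual implementations and speed-up heuristics are not reproduced here, and the adjacency-dict encoding is an assumption of this sketch:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with a binary heap; adj maps each
    node to a list of (neighbor, nonnegative_weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled cheaper
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```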
Joint work with Adi Shamir.
Joint work with Laurent Alonso and Ren\'e Schott.
Joint work with A. Blum, P. Berman, A. Fiat, H. Karloff, and M. Saks.