SAS is the Stanford Algorithms Seminar (formerly the AFLB -- Algorithms for Lunch Bunch). It is run by Jeffrey Oldham. Meetings are generally Thursdays at 4:15 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

Contents:

  1. 30 September 1993: Andrew Goldberg (Stanford). Shortest Paths Algorithms: Theory and Experimental Evaluation.
  2. 22 October 1993: Abhiram Ranade (UC-Berkeley). Parallelism and Locality in Priority Queues.
  3. 28 October 1993: Sridhar Rajagopalan (UC-Berkeley). Primal-Dual RNC Approximation Algorithms for Set Cover and Its Extensions.
  4. 1 November 1993: David S. Johnson (AT&T Bell Laboratories). Computer Proofs of Expected Behavior Results for Bin Packing (Theory Colloquium).
  5. 2 November 1993: Boris Aronov (Polytechnic University). On the Union of Convex Polyhedra, or Castles in the Air: Skeletons in the Closet.
  6. 9 November 1993: David Williamson (Cornell University). An Approximation Algorithm for General Graph Connectivity Problems.
  7. 11 November 1993: Mike Paterson (University of Warwick, England). Evolution of an Algorithm: Fishspear, a New Priority Queue Algorithm.
  8. 18 November 1993: Jack Snoeyink (University of British Columbia). Objects that cannot be taken apart with two hands.
  9. 18 November 1993: Tom Leighton (MIT). Multicommodity Flows: A Survey of Recent Research (Theory Colloquium).
  10. 22 November 1993: Madhu Sudan (IBM Watson). Improved Non-Approximability Results.
  11. 2 December 1993: Serge Plotkin (Stanford). Online Algorithms for Virtual Circuit Routing.
  12. 9 December 1993: John Hershberger. An Optimal Algorithm for Euclidean Shortest Paths in the Plane.
  13. 13 January 1994: Edith Cohen (AT&T Bell Laboratories). Polylog-Time and Near-Linear Work Approximation-Scheme for Undirected Shortest Paths.
  14. 20 January 1994: John Canny (UC-Berkeley). Computational Problems in Molecular Biology: What Silicon and Carbon Scientists Need to Know in the 90's.
  15. 3 February 1994: Deborah K. Weisser (ICSI). Physical Mapping of DNA Using Unique Probes.
  16. 10 February 1994: Sanjeev Arora (UC-Berkeley). Probabilistic Checking of Proofs and Hardness of Approximation Problems (Theory Colloquium).
  17. 23 February 1994: Nabil Kahale (DIMACS). A spectral technique for coloring random $3$-colorable graphs.
  18. 24 February 1994: Daphne Koller (UC-Berkeley and IBM Almaden). Fast Algorithms for Finding Randomized Strategies in Game Trees.
  19. 16 March 1994: Monica Rauch (Cornell University). A Linear Time Algorithm for Computing a Single-Source Shortest-Paths Tree in a Planar Graph.
  20. 17 March 1994: Steven Rudich (Carnegie Mellon University). Natural Proofs.
  21. 31 March 1994: Daniel Solow (Case Western Reserve University). A Finite-Improvement Algorithm for Finding the Minimum Cut (and Maximum Flow) in a Network.
  22. 14 April 1994: Phokion G. Kolaitis (UC-Santa Cruz). The Descriptive Complexity of NP-Optimization Problems.
  23. 21 April 1994: Umesh Vazirani (UC-Berkeley). "Go With the Winners" Algorithms.
  24. 28 April 1994: Dan Halperin (Stanford). Lower Envelope, Single Cell and Zone in Arrangements: Recent Developments.
  25. 5 May 1994: Cynthia Dwork (IBM Almaden). The Competitive Analysis of Wait-Free Algorithms and its Application to the Cooperative Collect Problem.
  26. 12 May 1994: Andrei Broder (DEC SRC). Balanced Allocations.
  27. 19 May 1994: Moshe Y. Vardi (IBM Almaden and Rice University). Infinitary Logics in Computer Science.
  28. 26 May 1994: David Karger (Stanford). Random Sampling in Graph Optimization Problems.
  29. 2 June 1994: Ronitt Rubinfeld (Cornell University). Self-Testing/Correcting Based on Functional Equations.
  30. 9 June 1994: Alejandro Schaffer (Rice University). Faster Genetic Linkage Analysis Computations.
  31. 22 June 1994: Andrew Goldberg (Stanford). Optimization Algorithms for Large Networks.
  32. 21 July 1994: Tomas Feder (IBM Almaden). A Sublinear Parallel Algorithm for Stable Matching.
  33. 28 July 1994: Andrew Yao (Princeton University). Quantum Computation.
  34. 4 August 1994: Moni Naor (Weizmann Institute). Visual Cryptography.
  35. 5 August 1994: Randall Wilson (Sandia National Labs). Recent Progress and Open Problems in Assembly Planning.
  36. 11 August 1994: Ed Reingold (University of Illinois-UC). The Complexity of Determining the Majority.
  37. 18 August 1994: Adi Rosen (Tel-Aviv University). Randomized Robot Navigation Algorithms.

30 September 1993

Andrew Goldberg (Stanford)

Shortest Paths Algorithms: Theory and Experimental Evaluation

We conduct an extensive computational study of shortest paths algorithms, including some very recent algorithms. We also suggest new algorithms motivated by the experimental results and prove interesting theoretical results suggested by the experimental data. Our computational study is based on several natural problem classes which identify strengths and weaknesses of various algorithms. These problem classes and algorithm implementations form an environment for testing the performance of shortest paths algorithms. The interaction between the experimental evaluation of algorithm behavior and the theoretical analysis of algorithm performance plays an important role in our research.

Joint work with Boris Cherkassky and Tomasz Radzik.


22 October 1993

Abhiram Ranade (UC-Berkeley)

Parallelism and Locality in Priority Queues

We explore two ways of incorporating parallelism into priority queues. The first is to speed up the execution of individual priority operations so that they can be serviced at the rate of one operation per time step, unlike sequential implementations which require $O(\log{N})$ time steps per operation, where $N$ denotes the number of elements in the heap. For this we give an optimal parallel implementation that uses $O(\log{N})$ processors, and further, the processors are connected together in the simplest possible manner: linear array. We also consider parallel insertions into the priority queue. Our main result here is that a $\sqrt{P}\times\sqrt{P}$ mesh of processors can insert and delete (the smallest) $P$ elements from a heap in time $O(\sqrt{P\log{N}})$. We also show a matching lower bound (based on $AT^2$ arguments) for a wide range of possible implementations.
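For reference, the sequential baseline being improved upon is the ordinary binary heap, which spends $O(\log N)$ time sifting an element up or down on every operation. A minimal Python sketch of that baseline (illustrative only; the talk's linear-array and mesh implementations are not shown here):

class BinaryHeap:
    """Sequential binary heap: O(log N) time per insert and delete-min."""
    def __init__(self):
        self.a = []

    def insert(self, key):
        self.a.append(key)
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:      # sift up: O(log N)
            self.a[i], self.a[(i - 1) // 2] = self.a[(i - 1) // 2], self.a[i]
            i = (i - 1) // 2

    def delete_min(self):
        top = self.a[0]
        last = self.a.pop()
        if self.a:
            self.a[0] = last
            i = 0
            while True:                                        # sift down: O(log N)
                l, r, small = 2 * i + 1, 2 * i + 2, i
                if l < len(self.a) and self.a[l] < self.a[small]:
                    small = l
                if r < len(self.a) and self.a[r] < self.a[small]:
                    small = r
                if small == i:
                    break
                self.a[i], self.a[small] = self.a[small], self.a[i]
                i = small
        return top

h = BinaryHeap()
for x in [5, 3, 8, 1]:
    h.insert(x)
print([h.delete_min() for _ in range(4)])   # [1, 3, 5, 8]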

This is joint work with S. Cheng, E. Deprit, J. Jones, S. Shih.


28 October 1993

Sridhar Rajagopalan (UC-Berkeley)

Primal-Dual RNC Approximation Algorithms for Set Cover and Its Extensions

We build on the classical greedy approximation algorithm for set cover to obtain simple parallel approximation algorithms for set cover and its generalizations. The algorithms are based on the primal-dual schema and rely on randomization to guarantee rapid execution.
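For background, the classical greedy algorithm referred to above repeatedly picks the set covering the most still-uncovered elements, which yields an $H_n = O(\log n)$ approximation. A short Python sketch on a made-up instance:

# Classical greedy set cover: repeatedly take the set that covers the most
# still-uncovered elements.  Achieves an H_n = O(log n) approximation factor.
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        cover.append(best)
        uncovered -= best
    return cover

# Hypothetical example instance.
universe = range(1, 10)
sets = [{1, 2, 3, 4}, {4, 5, 6}, {6, 7, 8, 9}, {1, 5, 9}, {2, 6, 7}]
print(greedy_set_cover(universe, sets))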

We obtain $RNC^3$ approximation algorithms for set cover and $RNC^4$ algorithms for its generalizations. These algorithms are guaranteed to output solutions that are within $O(\log n)$ of optimal, where $n$ is the number of elements (variables). This factor is essentially tight (to within NP-completeness) due to the recent result of Lund and Yannakakis. Parallel approximation algorithms for set cover were previously known, though not for its generalizations.

This is joint work with Prof. Vijay Vazirani, Indian Institute of Technology, Delhi.


1 November 1993

David S. Johnson (AT&T Bell Laboratories)

Computer Proofs of Expected Behavior Results for Bin Packing (Theory Colloquium)

We show how to apply dynamic and linear programming to design sophisticated potential functions for infinite, multi-dimensional Markov chains, thereby deriving bounds on the expected quality of packings produced by the BEST FIT bin packing algorithm and solving a long-standing open problem.

In bin packing, we are given a sequence of N items with sizes in the interval (0,1], and asked to pack them into a minimum number of unit-capacity bins. Under BEST FIT (in practice perhaps the best of the simple on-line heuristics), items are packed in order, with each item going into the bin with the smallest gap that can hold it (ties broken arbitrarily). If item sizes are uniformly distributed in the interval (0,1], it can be shown that the expected waste (sum of the gaps in all the bins) grows sublinearly with N. Surprisingly, if the distribution of sizes is uniform over an interval (0,u], u < 1, experiments indicate that the waste grows linearly. This also holds if sizes are uniform over the discrete sizes i/K, 1 <= i <= J, when J (with J < K-2) is sufficiently large. Unfortunately, proving this observation (for ANY such continuous or discrete distribution) has remained an open problem for almost a decade.
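The behavior described above is easy to observe empirically. The following Python sketch simulates BEST FIT on uniform item sizes and reports the waste; it is only an illustration of the experiments mentioned, not the computer proof:

# Quick simulation of BEST FIT waste under uniform item sizes, in the spirit
# of the experiments described above.
import random
import bisect

def best_fit_waste(n, u=1.0, seed=0):
    random.seed(seed)
    gaps = []                                # sorted remaining gaps, one per open bin
    for _ in range(n):
        size = random.uniform(0.0, u)
        i = bisect.bisect_left(gaps, size)   # smallest gap that still fits the item
        if i < len(gaps):
            g = gaps.pop(i) - size
        else:
            g = 1.0 - size                   # open a new unit-capacity bin
        bisect.insort(gaps, g)               # ties broken by position in the sorted list
    return sum(gaps)                         # waste = total unused space over all bins

for u in (1.0, 0.8):
    print(u, [round(best_fit_waste(n, u), 1) for n in (1000, 4000, 16000)])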

Ed Coffman, Peter Shor, Richard Weber and I have been studying the simplest of the distributions for which linear waste appears to occur (the discrete uniform distribution with K=11, J=8). Assuming one is prepared to trust the result of a 24-hour computer run, we now have a proof. This talk will cover the bin packing background, and then give an overview of the mathematical and algorithmic techniques involved in our proof, as well as the general applicability and limitations of the approach. We also discuss related computer proofs showing that the expected waste is bounded when K=11, J<8, as well as for certain cases with K<11. The talk should be accessible to a general Math/CS/OR audience.


2 November 1993

Boris Aronov (Polytechnic University)

On the Union of Convex Polyhedra, or Castles in the Air: Skeletons in the Closet

We show that the number of vertices, edges, and faces of the union of $k$ convex polyhedra in 3-space, having a total of $n$ faces, is $O(k^3+nk\log^2 k)$. This bound is almost tight in the worst case.

We also describe an improved $O(nk \log^2 k)$ upper bound for the case when the $k$ polyhedra have the form $A_i \oplus B$, $i=1,\ldots,k$, where $A_i$'s are disjoint convex polyhedra, $B$ is a convex polyhedron, and $\oplus$ denotes the Minkowski sum. This case is relevant for translational motion planning.

Efficient incremental randomized algorithms exist for constructing the boundary of the union in both cases. We will outline them if time permits.

This is joint work with Micha Sharir.


9 November 1993

David Williamson (Cornell University)

An Approximation Algorithm for General Graph Connectivity Problems

I will describe an approximation algorithm that solves very general kinds of graph connectivity and augmentation problems. In addition to providing the first or best known approximation algorithm for several NP-hard problems, the algorithm also generalizes some classical algorithms (e.g., Kruskal's minimum spanning tree algorithm) and approximates some problems faster than the best known exact algorithms (e.g., the minimum-weight perfect matching problem with triangle inequality).

An example of an NP-hard problem which the algorithm can approximate is the survivable network design problem (also called the generalized Steiner network problem), in which the goal is to find a minimum-cost subgraph such that every pair of nodes i and j has at least r_{ij} edge-disjoint paths between them. This problem has applications in the design of fiber-optic telephone networks. No previous approximation algorithm was known even for the case in which all r_{ij} are either 0, 1, or 2. The approximation factor is at worst 2H(k), where k is the highest connectivity requirement, and H(n) = 1 + 1/2 + ... + 1/n. The running time of the algorithm is O(n^3) when k is O(1).

If there is time, I will describe some experimental work I have done in testing a 2-approximation algorithm for Euclidean matching given by these results. In over 1,400 trials on random and structured instances on 1,000 to 131,072 vertices, the approximation algorithm was never more than 4% from optimal.
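Kruskal's minimum spanning tree algorithm, mentioned above as a special case, may be a useful reference point; here is a standard Python sketch on a made-up graph (the approximation algorithm itself is not shown):

# Kruskal's MST algorithm, one of the classical algorithms generalized by the
# approximation algorithm above.  Standard union-find implementation.
def kruskal(n, edges):                      # edges: list of (weight, u, v)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):           # consider edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Hypothetical 5-node example.
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 3, 4), (7, 1, 4)]
print(kruskal(5, edges))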

These results were obtained jointly with Hal Gabow, Michel Goemans, Andrew Goldberg, Milena Mihail, Serge Plotkin, David Shmoys, Eva Tardos, and Vijay Vazirani.


11 November 1993

Mike Paterson (University of Warwick, England)

Evolution of an Algorithm: Fishspear, a New Priority Queue Algorithm

The life-history of an algorithm is charted, from its parentage through conception to publication. This story is unusually long, covering a period of some twenty years.

The study of a mysterious algorithm for transitive closure in 1973 led us to the Postman algorithm for delivering mail using stack-based storage. This algorithm evolved in several stages to a priority queue algorithm, known as Fishspear, which has a number of unusual features.

Fishspear can be implemented efficiently using sequential storage such as stacks or tapes, making it potentially attractive for implementation of very large queues on paged memory systems. It is comparable to the usual heap algorithm in its worst case running time, and its relative performance is much better in many common situations.

A full paper, coauthored with Mike Fischer, is about to be published in the JACM.


18 November 1993

Jack Snoeyink (University of British Columbia)

Objects that cannot be taken apart with two hands

Have you ever found, when you were assembling a puzzle, that you needed an extra hand?

This talk deals with problems related to the number of manipulators needed to assemble configurations of simple geometric objects. More specifically, we find counterexamples to a conjecture that B. K. Natarajan made in 1985: that every set of disjoint convex objects in 3-space could be assembled (or, by reversing time, taken apart) by translation with two hands---meaning that some proper subset could be translated to infinity without disturbing its complement.

In 2-d, any set of convex objects with disjoint interiors can be ``taken apart'' by translation in any chosen direction---there is always one object that can be translated away without disturbing the rest. The graphics textbook example of three overlapping triangles shows that one cannot have such a strong property in 3-d; in fact, sets where no single object could be translated to infinity were already known.

We have proved that Natarajan's conjecture holds for five or fewer objects and give a minimum counterexample with six objects. We extend the counterexample to a configuration of 30 disjoint convex objects that cannot be taken apart with two hands using arbitrary rigid motions (isometries).

The counterexamples are formed by the action of a group on one carefully constructed convex object. The resulting symmetries help reduce case analysis and make for an interesting sculpture when the objects are made from 2 meter long sections of aluminum pipe.

This is joint work with Jorge Stolfi.


18 November 1993

Tom Leighton (MIT)

Multicommodity Flows: A Survey of Recent Research (Theory Colloquium)

The multicommodity flow problem consists of shipping a collection of different commodities through a common network so that the total flow going through each edge in the network does not exceed its capacity. Multicommodity flow problems arise in a wide variety of applications and have been extensively studied during the past several decades.
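For concreteness, a standard path-based linear-programming formulation of the maximum-throughput version is the following (the notation here is generic background rather than anything specific to the talk). Writing $\mathcal{P}_i$ for the set of paths joining the terminals of commodity $i$ and $c(e)$ for the capacity of edge $e$,
$$\max \sum_{i=1}^{k} \sum_{P \in \mathcal{P}_i} f_P \qquad \mbox{subject to} \qquad \sum_{i} \; \sum_{P \in \mathcal{P}_i \,:\, e \in P} f_P \le c(e) \quad \forall e \in E, \qquad f_P \ge 0.$$
In the concurrent-flow variant one instead maximizes the fraction $z$ of every demand $d_i$ that is routed, adding the constraints $\sum_{P \in \mathcal{P}_i} f_P \ge z\, d_i$.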

In the talk, we will survey recent work on improved algorithms for the multicommodity flow problem. Particular emphasis will be placed on approximation algorithms that can compute near optimal flow paths in a relatively small amount of time. A new and very simple local-control algorithm for multicommodity flow will also be presented. The latter algorithm is notable in that it does not rely on augmenting paths, shortest paths, min-cost paths, or similar techniques to push flow through the network. As a consequence, the algorithm is particularly well-suited for applications involving distributed networks.

We will also briefly mention recent work on the development of a max-flow/min-cut theorem for multicommodity flows.

The talk will be introductory in nature.


22 November 1993

Madhu Sudan (IBM Watson)

Improved Non-Approximability Results

We present strong hardness of approximation results for the Max Clique problem. We show, under reasonable intractability assumptions, that the clique number of a graph with $n$ nodes cannot be approximated to within a factor of $n^{1/4 - o(1)}$. Our result is based upon a new observation of Feige and Kilian, who show that a modification of the query parameter in probabilistically checkable proofs [FGLSS,AS] is much more closely related to the hardness of clique approximation. Direct analysis of existing proof systems with respect to this new parameter gives them an $n^{1/15}$-factor hardness result for clique. Our improvement over this comes from a new protocol and analysis which amounts to saying that the modified query parameter can be recycled efficiently when amplifying the gap in the proof system.

As an immediate consequence of the above result, one can infer that the chromatic number of a graph cannot be approximated to within $n^{1/20}$. We show a further improvement on this hardness result by presenting a more efficient reduction from the clique number to the chromatic number problem (improving on [LY,KLS]). This yields an $n^{1/10}$-factor hardness of approximation result for the chromatic number problem. We also report some marginal improvements on the MAX SAT problem.

This is joint work with Mihir Bellare at IBM, T.J. Watson.


2 December 1993

Serge Plotkin (Stanford)

Online Algorithms for Virtual Circuit Routing

Virtual circuits serve as a convenient abstraction for sharing the available resources (bandwidth, buffer space, etc) among the users of a high-speed communication network. Roughly speaking, in order to use the network, the customer has to request some bandwidth between two specified points. The network then allocates a virtual circuit connecting these points. Such a circuit provides the customer with an illusion of a dedicated line.

Due to the large bandwidth-delay product of broadband networks, there are several restrictions on virtual circuit implementations. First, each circuit has to be routed on a single path. Second, rerouting circuits is either heavily discouraged or outright forbidden.

This raises several natural questions:

What path should be used to route a given circuit?

Which requests should be satisfied and which should be rejected?

In this talk we will discuss recent progress in online algorithms for virtual circuit routing. There are several natural models that differ according to the optimization parameters; I will describe two such models. In the ``congestion model'', the rejections are not allowed and the goal is to minimize maximum congestion. In the ``throughput model'' rejections are allowed and the goal is to maximize the total number of transmitted bits.

The performance of virtual circuit routing algorithms can be naturally measured by a competitive ratio. To illustrate the online routing techniques, I will give a fairly detailed description of an $O(\log (nT))$-competitive strategy for the ``throughput model'', where $n$ is the number of nodes and $T$ is the maximum call duration. Roughly speaking, this means that this strategy results in throughput that is within an $O(\log (nT))$ factor of the highest possible throughput achievable by an omniscient algorithm that knows all of the requests in advance.
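To give a flavor of such online strategies, the sketch below routes each arriving request on the path of minimum marginal cost, where an edge's cost grows exponentially with its current relative load. This is a generic illustration of the exponential-cost idea used in this line of work, on a made-up four-node network; it is not the specific algorithm of the talk:

# Generic exponential-cost online routing sketch (congestion flavor): each
# request is routed on the path of minimum marginal exponential cost, which
# discourages loading edges that are already busy.
def all_simple_paths(adj, s, t, path=None):
    if path is None:
        path = [s]
    if s == t:
        yield list(path)
        return
    for v in adj[s]:
        if v not in path:
            yield from all_simple_paths(adj, v, t, path + [v])

def route(requests, edges, capacity, mu=2.0):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    load = {frozenset(e): 0.0 for e in edges}
    for s, t, r in requests:                        # request: r units from s to t
        def marginal_cost(path):
            c = 0.0
            for u, v in zip(path, path[1:]):
                l = load[frozenset((u, v))]
                c += mu ** ((l + r) / capacity) - mu ** (l / capacity)
            return c
        best = min(all_simple_paths(adj, s, t), key=marginal_cost)
        for u, v in zip(best, best[1:]):
            load[frozenset((u, v))] += r
        print(s, "->", t, "routed via", best)
    return load

# Tiny hypothetical network: two parallel routes between A and D.
edges = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
loads = route([("A", "D", 1), ("A", "D", 1), ("A", "D", 1)], edges, capacity=2)
print({tuple(sorted(e)): l for e, l in loads.items()})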

This talk is based on several papers that are joint work with Aspnes, Awerbuch, Azar, Fiat, and Waarts.


9 December 1993

John Hershberger

An Optimal Algorithm for Euclidean Shortest Paths in the Plane

I will describe an optimal-time algorithm for computing planar shortest paths that avoid polygonal obstacles. The algorithm runs in O(n log n) time and space, where n is the total number of vertices in the obstacle polygons. The algorithm computes a planar map encoding shortest paths from a fixed source point to all other points of the plane; this map can be used to answer single-source shortest path queries in O(log n) time. The algorithm's time complexity is a significant improvement over all previously published results on the shortest path problem.

(Joint work with Subhash Suri.)


13 January 1994

Edith Cohen (AT&T Bell Laboratories)

Polylog-Time and Near-Linear Work Approximation-Scheme for Undirected Shortest Paths

Shortest paths computations constitute one of the most fundamental network problems. Nonetheless, known parallel shortest-paths algorithms are generally inefficient: they perform significantly more work (product of time and processors) than their sequential counterparts. This gap, known in the literature as the ``transitive closure bottleneck,'' poses a long-standing open problem.

Our main result is an $O(mn^{\epsilon_0}+s(m+n^{1+\epsilon_0}))$ work polylog-time randomized algorithm that computes paths within $(1+O(1/\polylog n))$ of shortest from $s$ source nodes to all other nodes in weighted undirected networks with $n$ nodes and $m$ edges (for any fixed $\epsilon_0>0$). This work bound nearly matches the $\tilde{O}(sm)$ sequential time. In contrast, previous polylog-time algorithms required $\min\{\tilde{O}(n^3),\tilde{O}(m^2)\}$ work (even when $s=1$), and previous near-linear work algorithms required near-$O(n)$ time. Another result is faster shortest-paths algorithms if accurate distances are required only between ``distant'' vertices: We obtain an $O((m+sn)n^{\epsilon_0})$ time algorithm that computes paths of weight $(1+O(1/\polylog n))\dist + O(w_{\max}\polylog n)$, where $\dist$ is the corresponding distance and $w_{\max}$ is the maximum edge weight.

Our chief instrument, which is of independent interest, is an efficient construction of sparse hop sets. A $(d,\epsilon)$-hop set of a network $G=(V,E)$ is a set $E^*$ of new weighted edges such that minimum-weight $d$-edge paths in $(V,E\cup E^*)$ have weight within $(1+\epsilon)$ of the respective distances in $G$. We construct hop sets of size $O(n^{1+\epsilon_0})$ where $\epsilon=O(1/\polylog n)$ and $d=O(\polylog n)$.


20 January 1994

John Canny (UC-Berkeley)

Computational Problems in Molecular Biology: What Silicon and Carbon Scientists Need to Know in the 90's

This talk began with a literature search for problems in molecular biology related to previously studied problems in robotics and computational geometry. What was striking during this process was the close resemblance between problems in biochemistry and in computer algorithms, and the simultaneous lack of cross-fertilization between the fields. Since computational biology is an oft-cited grand challenge application of large-scale computing, this is an opportunity being missed. The first part of this talk will be a short enumeration of geometric problems in molecular biochemistry, namely: (i) Structure determination and refinement, (ii) Feature recognition, (iii) Feature matching and alignment. A half-dozen associated graph-theoretic and geometric problems will be described. The second part of the talk is about the work that led up to the first: techniques in non-linear algebra that can be applied to structure determination and geometric design problems.


3 February 1994

Deborah K. Weisser (ICSI)

Physical Mapping of DNA Using Unique Probes

The goal of physical mapping of the genome is to reconstruct a strand of DNA given a collection of overlapping fragments, or clones, from the strand. We present several algorithms to infer how the clones overlap, given data about each clone. We focus on data used to map human chromosomes 21 and Y, in which relatively short substrings, or probes, are extracted from the ends of clones. The substrings are long enough to be unique with high probability. The data we are given is an incidence matrix of clones and probes.

In the absence of error, the correct placement can be found easily using a PQ-tree. The data is never free from error, however, and algorithms are differentiated by their performance in the presence of errors. We approach errors from two angles: by detecting and removing them, and by using algorithms which are robust in the presence of errors.
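A toy simulation makes the error-free structure easy to see: if clones are intervals on a line and the probes are their (unique) endpoints, then each row of the clone-probe incidence matrix has its ones in consecutive positions once the probes are listed in true order, which is exactly the structure a PQ-tree captures. A hedged Python sketch with made-up parameters and no errors injected (real data additionally contains the errors the algorithms above must cope with):

# Toy simulation of clone/probe incidence data: clones are random intervals on
# a line, probes sit at the clone endpoints, and entry [c][p] records whether
# probe p falls inside clone c.
import random

random.seed(1)
GENOME, CLONES, CLONE_LEN = 100.0, 12, 20.0

starts = sorted(random.uniform(0, GENOME - CLONE_LEN) for _ in range(CLONES))
clones = [(s, s + CLONE_LEN) for s in starts]
probes = sorted(p for c in clones for p in c)        # probes from clone ends

incidence = [[int(a <= p <= b) for p in probes] for a, b in clones]
for row in incidence:
    print("".join(map(str, row)))                    # consecutive-ones structure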

We generate solutions to an instance of the problem by using a sequence of algorithms. First, we use one of two methods for screening errors from the data. After generating a good initial starting solution and possibly subdividing the problem, we use one of two algorithms to generate a near-optimal solution to an objective function. The objective function maximizes the probability of the noiseless data given the measured data. The screening process may disconnect the data, so after obtaining an ordering for each connected component of clones and probes, we reconnect the components in the most likely order.

We have also developed a strategy to recover noiseless data through an interactive process which detects anomalies in the data and retests questionable entries in the incidence matrix of clones and probes.

Since all our central computational problems are NP-hard, we evaluate the effectiveness of our algorithms empirically, using simulated data as well as real data from human chromosome 21.

This is joint work with Farid Alizadeh, Richard Karp, and Geoff Zweig.


10 February 1994

Sanjeev Arora (UC-Berkeley)

Probabilistic Checking of Proofs and Hardness of Approximation Problems (Theory Colloquium)

The classical theory of NP-completeness, pioneered by Cook, Karp and Levin in the 70s, explains the apparent computational intractability of optimization problems by proving them NP-hard. But its techniques were unable to explain why for many optimization problems, even finding good approximation algorithms is hard (or at least, why significant amounts of research failed to find such algorithms).

Recent work has yielded a simple explanation: many NP-hard optimization problems are also NP-hard to approximate. Among the problems for which this has been proved are Independent Set, Chromatic Number, Set Cover, Vertex Cover, Max-Cut, Decoding to the nearest codeword, Learning in the presence of Errors, and many others. At the heart of these new results is a new type of NP-completeness reduction which acts globally (as opposed to the classical reductions, which used local transformations involving gadgets etc.). A more interesting way to view this reduction is as a new probabilistic definition of NP: NP is exactly the class of languages for which membership proofs (i.e., certificates) can be checked in polynomial time by using only O(log n) random bits and examining only O(1) bits in them.
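Stated in what has since become standard notation (included here as background), this characterization reads
$$NP = PCP(O(\log n),\, O(1)),$$
i.e., a language is in NP if and only if membership proofs can be verified by a randomized polynomial-time verifier that uses $O(\log n)$ random bits, reads $O(1)$ bits of the proof, accepts valid proofs of true statements with probability 1, and accepts any purported proof of a false statement with probability at most $1/2$.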

The talk will be a survey of this new area (including the work of other researchers) and its open problems.


23 February 1994

Nabil Kahale (DIMACS)

A spectral technique for coloring random $3$-colorable graphs

Spectral methods have been recently used to solve several combinatorial problems. In this talk, we present a spectral technique for coloring random $3$-colorable graphs. Let $G(3n,p,3)$ be a random $3$-colorable graph on a set of $3n$ vertices generated as follows. First, split the vertices arbitrarily into three equal color classes and then choose every pair of vertices of distinct color classes, randomly and independently, to be an edge with probability $p$. We describe a polynomial time algorithm that finds a proper $3$-coloring of $G(3n,p,3)$ with high probability, whenever $p \ge c/n$, where $c$ is a sufficiently large constant. This settles a problem of Blum and Spencer, who asked if one can design an algorithm that works almost surely for $p \ge polylog(n)/n$. The algorithm can be extended to produce optimal $k$-colorings of random $k$-colorable graphs in a similar model, as well as in various related models.
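As a quick numerical illustration of why the spectrum is informative here (this is not the coloring algorithm itself): the adjacency matrix of $G(3n,p,3)$ has one large eigenvalue near $2np$ and two eigenvalues near $-np$, whose eigenvectors carry information about the planted color classes. A numpy sketch with made-up parameters:

# The spectrum that spectral coloring methods exploit: for G(3n, p, 3) the
# adjacency matrix has one eigenvalue near 2np and two eigenvalues near -np.
# Numerical illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.1                      # 3n = 600 vertices
color = np.repeat([0, 1, 2], n)      # planted 3-coloring

A = np.zeros((3 * n, 3 * n))
for i in range(3 * n):
    for j in range(i + 1, 3 * n):
        if color[i] != color[j] and rng.random() < p:
            A[i, j] = A[j, i] = 1.0

vals = np.sort(np.linalg.eigvalsh(A))
print("two smallest eigenvalues:", vals[:2], " (about -np =", -n * p, ")")
print("largest eigenvalue      :", vals[-1], " (about 2np =", 2 * n * p, ")")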

Joint work with Noga Alon.


24 February 1994

Daphne Koller (UC-Berkeley and IBM Almaden)

Fast Algorithms for Finding Randomized Strategies in Game Trees

Many situations describing interactions between agents can be naturally described as a multi-player game. This includes real-world situations such as economic interactions, as well as many important problems in computer science. The representation as a game allows us to reason formally about the situation using established techniques from the discipline of game theory. In particular, the game-theoretic notion of a solution to a game often allows us to design mechanisms for dealing with the original situation. While the formal notion of solution is well-established, previous techniques for solving games were computationally intractable for most games arising in practice. In this talk, I will describe a new representation for games, which results in techniques for solving games that are much more efficient.

The natural representation for most situations is as a game tree, where the structure of the tree represents the dynamics of the situation. When solving games in this form, the standard procedure has been to first transform the game tree into the normal form; that is, into a matrix listing the payoff for each strategy combination of the players. It is then often possible to use a standard algorithm, such as linear programming or complementarity, to find the solution to the game. However, the normal form of a game tree is often very large---exponential in the size of the game tree---thus making the problem intractable in all but the simplest cases. In this talk, we demonstrate the advantages of leaving the game in the form of a tree, and present efficient algorithms to solve games in that form.
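The size gap is easy to quantify with a back-of-the-envelope count: a player who acts at $k$ information sets with $b$ choices at each has $b^k$ pure strategies in the normal form, but only on the order of $k \cdot b$ move sequences in the tree-based representation. A tiny Python illustration (hypothetical numbers; the exact sequence-form size depends on the tree):

# Why the normal form blows up: a player with k information sets and b moves
# at each has b**k pure strategies, but only about k*b move sequences.
for k in (5, 10, 20, 40):
    b = 2
    print(f"k={k:2d}  pure strategies = {b**k:>18,}  move sequences ~ {k*b:4d}")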

The talk will be self-contained, and will include an introduction to the relevant concepts from game theory.

This is joint work with Nimrod Megiddo and Bernhard von Stengel.


16 March 1994

Monica Rauch (Cornell University)

A Linear Time Algorithm for Computing a Single-Source Shortest-Paths Tree in a Planar Graph

Given a planar graph $G$ with nonnegative edge costs and a node $s$ of $G$ we present an algorithm that computes the shortest path from $s$ to every other node in time linear in the size of the graph. The best previous algorithm takes time $O(n \sqrt {\log n})$, where $n$ is the number of nodes in $G$. The algorithm is a generalization of Dijkstra's algorithm.
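For readers who want the baseline being generalized: Dijkstra's algorithm with a binary heap runs in $O(m \log n)$ time on general graphs with nonnegative edge costs. A minimal Python sketch on a made-up example:

# Dijkstra's algorithm, the classical baseline that the linear-time planar
# algorithm generalizes.  O(m log n) with a binary heap on general graphs.
import heapq

def dijkstra(adj, s):                 # adj: {u: [(v, cost), ...]}
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                  # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Hypothetical example.
adj = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(dijkstra(adj, "s"))             # distances: s=0, a=2, b=3, c=4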

This is joint work with Philip Klein, Satish Rao, and Sairam Subramanian.


17 March 1994

Steven Rudich (Carnegie Mellon University)

Natural Proofs

It is natural to ask what makes lower bound questions such as $P\stackrel{?}{=}PSPACE$ and $P\stackrel{?}{=}NC$ so difficult to solve. A non-technical reason for thinking they are difficult might be that some very bright people have tried and failed -- but this is hardly satisfactory. A technical reason along the same lines would be provided by a reduction to these questions from another problem known to be really hard such as the Riemann Hypothesis. Perhaps the ultimate demonstration that $P\stackrel{?}{=}NP$ is a hard problem would be to show it to be independent of set theory (ZFC).

Another way to answer this question is to demonstrate that {\em known} methods are inherently too weak to solve problems such as $P\stackrel{?}{=}NP$. This approach was taken by Baker, Gill, and Solovay \cite{BGS}, who used oracle separation results for many major complexity classes to argue that relativizing proof techniques could not solve these problems. Since relativizing proof techniques involving diagonalization and simulation were the only available tools at the time of their work, progress along known lines was ruled out.

Instead, people started to look at these problems in terms of Boolean complexity. Along these lines, many (non-relativizing) proof techniques have been discovered and used to prove lower bounds. These techniques are highly combinatorial; they exist in a much larger variety than their recursion-theoretic predecessors.

In this paper we introduce the notion of a {\em natural} proof. We argue that {\em all lower bound proofs known to us in Boolean complexity either are natural or can be represented as natural}. We show that if a cryptographic hardness assumption is true, then {\em no natural proof can prove superpolynomial lower bounds for general circuits}. Furthermore, no complexity class containing good pseudo-random function generators has a natural proof against it.
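For readers unfamiliar with the terminology, the formalization used in this work is roughly the following (paraphrased background, not a quotation from the abstract): a combinatorial property $C_n$ of $n$-variable Boolean functions is {\em natural} if it contains a subproperty $C^*_n$ that is (i) constructive -- membership of $f$ in $C^*_n$ is decidable in time polynomial in the length $2^n$ of its truth table -- and (ii) large -- $C^*_n$ holds for at least a $2^{-O(n)}$ fraction of all $n$-variable functions. The property is {\em useful} against $P/poly$ if every sequence of functions $f_n \in C_n$ has super-polynomial circuit size.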

Natural proofs form a natural hierarchy depending on the degree to which the combinatorial property involved in the proof is constructive. We show without using any cryptographic assumption that $AC^0$-natural proofs, which are sufficient to prove the parity lower bounds of Furst-Saxe-Sipser, Ajtai, Yao, and Hastad, are inherently incapable of proving the bounds for $AC^0[q]$-circuits of Razborov and Smolensky. We also give a technical argument suggesting one reason that natural proofs are indeed natural: we show that every formal complexity measure which can prove super-polynomial lower bounds for a {\em single} function can do so for {\em almost all functions}. This is one of the key requirements for a natural proof in our sense.

This is joint work with Alexander Razborov.


31 March 1994

Daniel Solow (Case Western Reserve University)

A Finite-Improvement Algorithm for Finding the Minimum Cut (and Maximum Flow) in a Network

A new algorithm for finding a minimum cut in a network is presented. Starting with an arbitrary cut, the algorithm repeatedly uses a movement mechanism to create a new cut with a strictly smaller capacity than that of the current cut. When the movement mechanism fails to produce a better cut, the current cut is shown to be optimal by constructing a flow whose value is equal to the capacity of the current cut.


14 April 1994

Phokion G. Kolaitis (UC-Santa Cruz)

The Descriptive Complexity of NP-Optimization Problems

One of the challenges faced by researchers in computational complexity is to develop a coherent structural theory that would account for the radically different approximation properties of NP-optimization problems. In 1988, Papadimitriou and Yannakakis brought a fresh insight to this area by focusing on the logical definability of NP-optimization problems. In particular, they isolated a natural syntactic class of NP-maximization problems and showed that every problem in it possesses a constant-approximation algorithm.
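The syntactic class in question is MAX SNP; in that framework a maximization problem asks, for an input structure $G$, to compute (standard formulation, included as background)
$$\max_{S} \left| \{\, \bar{x} : \varphi(\bar{x}, G, S) \,\} \right|,$$
where $\varphi$ is a quantifier-free formula and $S$ ranges over relations on the universe of $G$. MAX CUT, for instance, is obtained by taking $\varphi(u,v,E,S) \equiv E(u,v) \wedge (S(u) \leftrightarrow \neg S(v))$.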

We present an overview of subsequent developments in this line of research, which can be viewed as the investigation of optimization problems from the standpoint of descriptive complexity. We delineate the expressive power of the logical framework and establish the existence of logical hierarchies for both maximization problems and minimization problems. This approach makes it possible to draw sharp distinctions between maximization and minimization problems. More specifically, it turns out that logical definability has different implications for NP-maximization problems than it has for NP-minimization problems, in terms of both expressive power and approximation properties.

Most of the results reported in this talk represent joint work with M. N. Thakur.


21 April 1994

Umesh Vazirani (UC-Berkeley)

"Go With the Winners" Algorithms

We can view certain randomized optimization algorithms as rules for randomly moving a particle around the state space of all possible solutions to the optimization problem. One general paradigm for designing heuristics is to run several simulations simultaneously, and every so often classify the particles as "doing well" or "doing badly," and move each particle that is "doing badly" to the position of one that is "doing well." Such "go with the winners" schemes are obviously reminiscent of genetic algorithms but the latter typically have much additional special structure. We give a rigorous analysis of such a "go with the winners" scheme in the concrete setting of searching for a deep leaf in a tree. The running time of the scheme is at most a polynomial in parameters of the tree being explored.

By contrast, more straightforward search strategies can be shown to take exponential time in the worst case. Our original motivation for studying this model was that it mimics the polynomial time behavior of simulated annealing algorithms. The tree analysis suggests that a "go with the winners" implementation of simulated annealing might be superior to several independent runs of simulated annealing.
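A minimal sketch of the deep-leaf search may help fix ideas: a population of particles walks down a tree one level per round, and any particle stuck at a leaf is restarted at the position of a randomly chosen particle that can still go deeper. The Python toy below (made-up random tree and parameters, without the cloning-rate bookkeeping of the actual analysis) is a simple instance of the scheme:

# "Go with the winners" on an implicit random tree: B particles descend one
# level per round; particles stuck at leaves jump to the position of a random
# particle that can still go deeper.
import random

random.seed(4)

def children(node):
    if node == ():                                  # force the root to branch
        return [(0,), (1,)]
    rng = random.Random(hash(node))                 # child count derived from the node
    return [node + (i,) for i in range(rng.choice([0, 1, 1, 2, 2, 2]))]

def go_with_the_winners(B=64, rounds=40):
    particles = [()] * B                            # all particles start at the root
    for _ in range(rounds):
        moved = [random.choice(children(p)) if children(p) else p for p in particles]
        alive = [p for p in moved if children(p)]   # particles that can go deeper
        if not alive:
            return moved
        # Dead particles jump to randomly chosen alive positions.
        particles = [p if children(p) else random.choice(alive) for p in moved]
    return particles

print("reached depth", max(len(p) for p in go_with_the_winners()))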

Joint work with David Aldous.


28 April 1994

Dan Halperin (Stanford)

Lower Envelope, Single Cell and Zone in Arrangements: Recent Developments

Arrangements play a central role in the design and analysis of geometric algorithms, and they emerge in a variety of seemingly unrelated application areas: from robot motion planning and computer vision to computer-assisted surgery and molecular biology. An arrangement of geometric objects is the subdivision of the space where these objects reside into cells of various dimensions. For example, an arrangement of a set L of lines in the plane is the subdivision of the plane into vertices, edges and faces induced by the lines in L. In this talk, I'll survey a collection of recent combinatorial and algorithmic results in the study of arrangements of low-degree algebraic surface patches in three or higher dimensions, and their applications. The new results extend known results involving 2-dimensional arrangements, and they almost settle several conjectures posed eight years ago.

When computing with arrangements, one may benefit from focusing on the portion of an arrangement that is relevant to the problem at hand, instead of computing the entire arrangement. The lower envelope of an arrangement, any single cell, or the zone of an additional object in the arrangement (i.e., the collection of cells of the arrangement intersected by this object) are among the portions that lead to such savings. The center of the talk will be the following recent result for a single cell in 3D arrangements: Given an arrangement of $n$ low-degree algebraic surface patches in 3-space, we show that the maximum combinatorial complexity of any single cell in the arrangement is $O(n^{2+\epsilon})$, for any $\epsilon>0$, where the constant of proportionality depends on $\epsilon$ and on the maximum degree of the given surfaces and of their boundaries. This extends several previous results, and has applications to motion planning of general robot systems with three degrees of freedom. (In this setting, the lower envelope and the zone problems are special cases of the single cell problem.)

This is joint work with Micha Sharir.


5 May 1994

Cynthia Dwork (IBM Almaden)

The Competitive Analysis of Wait-Free Algorithms and its Application to the Cooperative Collect Problem

In many shared-memory applications, processes repeatedly collect values stored in a set of registers. If each process reads every register itself, then the communication costs increase dramatically with the degree of concurrency, due to bus congestion and contention. Interestingly, this is the (trivial) solution that is used in the current literature on wait-free shared-memory applications, including all but one of the algorithms known to us for consensus, snapshots, coin flipping, timestamps, and multi-writer registers. (The exception is the consensus algorithm of Saks, Shavit, and Woll.) The cost of this naive implementation is easily shown to be a lower bound on the worst-case cost of any implementation. Here, the worst case is taken over the set of adversarially chosen schedules of events. In this talk we show that in the interesting cases -- those in which concurrency is high -- it is possible to do much better than the naive solution. This suggests that a competitive analysis of the problem may be fruitful. The goal of a competitive algorithm is to do better when the schedule permits.

In our framework, the ``request sequence'' to be handled by the competitive algorithm includes the interleaving of process step times. Any distributed algorithm mediates between unpredictable users above it and an unpredictable system below it. To our knowledge, when competitive analysis has been applied to distributed algorithms in the literature, it has been applied only to the ``upper'' part of the problem: the unpredictable sequence of requests from the users. The ``lower'' part of the problem has been considered only in standard distributed terms. Using this approach, the competitive ratio for an algorithm compares its cost to satisfy a particular sequence of requests under the worst possible conditions in the system to the cost for an off-line algorithm to satisfy the same requests under conditions that may be very different. Our approach is to treat all sources of nondeterminism equally, and therefore to require that the on-line and off-line algorithms operate under the same conditions (for example, under the same schedule of process steps). This requirement creates additional difficulties: we must be very careful in defining what we mean by ``the same conditions.''

In this talk we extend the notion of competitive analysis to one appropriate for wait-free algorithms. We present the Speedy Collect, an algorithm for the cooperative collect primitive, and give a competitive analysis for this algorithm.

This is joint work with Jim Aspnes, Miki Ajtai, and Orli Waarts.


12 May 1994

Andrei Broder (DEC SRC)

Balanced Allocations

Suppose that we sequentially place n balls into n boxes by putting each ball into a randomly chosen box. It is well known that when we are done, the fullest box has, with high probability, ln n/ln ln n (1 + o(1)) balls in it. Suppose instead that for each ball we choose two boxes at random and place the ball into the one which is less full at the time of placement. We show that with high probability, the fullest box contains only ln ln n/ln 2 + O(1) balls -- an exponential difference. Furthermore, we show that a similar gap exists in the infinite process, where at each step one ball, chosen uniformly at random, is deleted, and one ball is added in the manner above. We discuss consequences of this and related theorems for dynamic resource allocation, hashing, and on-line load balancing.
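The gap is easy to reproduce in a few lines of Python; the simulation below compares the maximum load under one random choice per ball versus two choices (illustration only):

# Simulation of the one-choice vs. two-choice maximum load described above.
import random
from math import log

def max_load(n, choices, seed=0):
    random.seed(seed)
    boxes = [0] * n
    for _ in range(n):
        picks = [random.randrange(n) for _ in range(choices)]
        best = min(picks, key=lambda i: boxes[i])   # place ball in the emptier box
        boxes[best] += 1
    return max(boxes)

n = 1_000_000
print("one choice :", max_load(n, 1), "  ~ ln n/ln ln n =", round(log(n) / log(log(n)), 1))
print("two choices:", max_load(n, 2), "  ~ ln ln n/ln 2 =", round(log(log(n)) / log(2), 1))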

Joint work with Yossi Azar (Tel-Aviv), Anna Karlin (Digital), and Eli Upfal (Weizmann).


19 May 1994

Moshe Y. Vardi (IBM Almaden and Rice University)

Infinitary Logics in Computer Science

Infinitary logic extends first-order logic by allowing infinitary conjunctions and disjunctions (i.e., conjunctions with an infinite number of conjuncts and disjunctions with an infinite number of disjuncts). One usually thinks of infinitary logic as a fairly esoteric logic, which is not of much interest in computer science. Surprisingly, a certain fragment L of infinitary logic is of great interest in computer science. This fragment L is obtained by restricting formulas, which can be of infinite length, to contain a finite number of distinct variables. The advantage of dealing with L is that its expressive power can be completely characterized in game-theoretic terms. I will describe applications of this logic to the study of 0-1 laws and the expressive power of database logic programs.

The talk, which represents joint work with P. Kolaitis, will be self-contained. Only a basic familiarity with first-order logic will be assumed.


26 May 1994

David Karger (Stanford)

Random Sampling in Graph Optimization Problems

Random sampling can be a useful part of an algorithm designer's toolkit. In many cases it leads to algorithms which are faster, simpler, more practical and easier to code than their deterministic counterparts. We present several old and new random-sampling based algorithms for classical problems with many practical applications. If time permits, we will begin by reviewing an elegant median finding algorithm by Floyd and Rivest. We then show how random sampling leads to the first linear time algorithm for the minimum spanning tree problem.

We then discuss the minimum cut problem, a basic optimization problem with numerous applications. We give a simple new randomized algorithm which outperforms all previously known algorithms for the problem, running significantly faster and giving more information in its output. This algorithm is also the first efficient parallel algorithm for the problem. We show how the algorithm can be derandomized to yield the first deterministic parallel algorithm for the problem.
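The simplest representative of this family of ideas is the random edge-contraction approach to minimum cuts: repeatedly contract a uniformly random edge until two super-vertices remain, and repeat the experiment enough times that the minimum cut survives at least once with high probability. The Python toy below illustrates that contraction idea on a made-up graph; it is not the full algorithm or analysis from the talk:

# Random edge contraction for global minimum cut: contract random edges until
# two super-vertices remain; the edges crossing between them form a cut.
import random

def contract_once(edges, n, rng):
    comp = list(range(n))                       # union-find over vertices

    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x

    remaining = n
    while remaining > 2:
        u, v = rng.choice(edges)                # pick a random edge
        ru, rv = find(u), find(v)
        if ru != rv:
            comp[ru] = rv                       # contract it
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))   # crossing edges

def min_cut(edges, n, trials=200, seed=0):
    rng = random.Random(seed)
    return min(contract_once(edges, n, rng) for _ in range(trials))

# Hypothetical graph: two 4-cliques joined by a single edge (min cut = 1).
K = [(i, j) for i in range(4) for j in range(i + 1, 4)]
edges = K + [(i + 4, j + 4) for i, j in K] + [(0, 4)]
print(min_cut(edges, 8))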

The minimum cut algorithm leads in turn to several other random sampling techniques which can be applied to problems involving cuts in graphs, such as the maximum flow problem. We use random sampling to compute exact and approximate solutions to such problems quickly. A particular consequence of the technique is a new randomized rounding approach to the ``network synthesis problem'' of constructing minimum cost networks satisfying certain connectivity requirements.


2 June 1994

Ronitt Rubinfeld (Cornell University)

Self-Testing/Correcting Based on Functional Equations

The idea of self-testing/correcting programs, introduced by Blum, Luby, and Rubinfeld in 1990, is a powerful tool for attacking the problem of program correctness. However, one of the main criticisms of this approach was that it seemed to have limited scope, applying mainly to functions that could be viewed as linear or as low degree polynomials. In this talk, we survey the known results and discuss new results which show that the concept of self-testing/correcting has broader applications than we previously understood. We concentrate on functions $f$ satisfying functional equations of the form $\forall x,y~~ F[f(x-y), f(x+y), f(x),f(y)]=0$, where $F$ is an algebraic function. We show that self-testers and self-correctors can be found for many such functions, including $\tan{x},{1 \over {1+\cot{x}}},{Ax \over {1-Ax}},\cosh{x}$. We make an initial attempt at characterizing properties of functional equations that make them useful for self-testing and self-correcting the functions that satisfy them.
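As a concrete instance of the flavor of test involved: $\cosh$ satisfies $f(x+y)+f(x-y)=2f(x)f(y)$ for all $x,y$, so a program claiming to compute $\cosh$ can be spot-checked at random argument pairs without knowing any correct values in advance. The Python toy below illustrates this; it is not the authors' tester, which must treat finite precision and the choice of test distribution much more carefully:

# Spot-checking a program against the functional equation
#     f(x+y) + f(x-y) = 2 f(x) f(y),
# which cosh satisfies for all x, y.
import math
import random

def functional_equation_test(program, trials=1000, tol=1e-9):
    random.seed(0)
    bad = 0
    for _ in range(trials):
        x, y = random.uniform(-3, 3), random.uniform(-3, 3)
        lhs = program(x + y) + program(x - y)
        rhs = 2 * program(x) * program(y)
        if abs(lhs - rhs) > tol * max(1.0, abs(rhs)):
            bad += 1
    return bad / trials                    # fraction of random pairs that fail

print("cosh       :", functional_equation_test(math.cosh))                       # ~0.0
print("buggy cosh :", functional_equation_test(lambda x: math.cosh(x) + 0.01 * x))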


9 June 1994

Alejandro Schaffer (Rice University)

Faster Genetic Linkage Analysis Computations

Linkage analysis is a statistical technique that geneticists use in the early stages of mapping human genes and locating disease genes. The size and complexity of linkage analysis problems that geneticists want to solve seem to be unbounded. Thus linkage analysis presents an excellent opportunity to apply sequential algorithmic techniques that feature good asymptotic performance and to use parallel computers.

For the past two years, we have been working to improve the algorithms, code, functionality, and documentation in the most popular general-purpose linkage analysis package, which is called LINKAGE. User-reported speedups for our sequential code range from multiplicative factors of 2 to 72, with a factor of 10 to 15 being typical. We have also done two experimental parallel implementations on a network of 8 workstations. The parallel implementations run on top of the TreadMarks distributed shared memory system being developed at Rice; they can also run on existing shared memory multiprocessors.

The first half of the talk will be an introduction to linkage analysis from a Computer Science point of view.

The work on the sequential algorithms is joint with Robert Cottingham (Baylor College of Medicine) and Ramana Idury (formerly of Rice, now at USC), Sandeep Gupta and K. Shriram (both of Rice). The work on the parallel implementations is joint with Alan Cox, Sandhya Dwarkadas, Sandeep Gupta, Peter Keleher, and Willy Zwaenepoel (all from Rice).


22 June 1994

Andrew Goldberg (Stanford)

Optimization Algorithms for Large Networks

As computer memory size increases and applications get more sophisticated, larger problems arise in practice. Algorithm running times are usually superlinear in the problem size, so big problems require more time even though computers get faster. This makes efficient algorithms more important.

For network optimization problems, significant progress has been made recently in the design of algorithms with good worst-case bounds. Although good worst-case bounds do not guarantee good practical performance, some of the recent algorithms are practical, outperforming previous methods by a significant margin.

We consider four important optimization problems: maximum flow, minimum-cost flow, shortest path, and assignment. We survey the recent results and describe implementations of some of the algorithms, including heuristics and data structures which speed up the codes in practice. We present experimental data for our implementations and for established codes (such as network simplex and the Hungarian Method). Our implementations are more robust and have significantly better overall performance.


21 July 1994

Tomas Feder (IBM Almaden)

A Sublinear Parallel Algorithm for Stable Matching

Parallel algorithms for various versions of the stable matching problem are presented. The algorithms are based on the primal-dual interior path-following method for linear programming. The main result is that a stable matching can be found in O^*(\sqrt{m}) time by a polynomial number of processors, where m is the total length of preference lists of individuals.
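For comparison, the classical sequential baseline is the Gale-Shapley proposal algorithm, which finds a stable matching in O(m) time since each proposal consumes one preference-list entry. A standard Python sketch on a made-up 3-by-3 instance:

# Classical Gale-Shapley proposal algorithm: the O(m)-time sequential baseline
# for stable matching (m = total length of the preference lists).
from collections import deque

def stable_matching(men_prefs, women_prefs):
    rank = {w: {m: r for r, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}
    engaged_to = {}                                 # woman -> man
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        w = men_prefs[m][next_proposal[m]]          # m's best woman not yet proposed to
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:   # w prefers m: swap partners
            free.append(engaged_to[w])
            engaged_to[w] = m
        else:
            free.append(m)                          # w rejects m
    return {m: w for w, m in engaged_to.items()}

# Hypothetical 3x3 instance.
men = {"a": ["X", "Y", "Z"], "b": ["Y", "X", "Z"], "c": ["X", "Z", "Y"]}
women = {"X": ["b", "a", "c"], "Y": ["a", "b", "c"], "Z": ["c", "a", "b"]}
print(stable_matching(men, women))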


28 July 1994

Andrew Yao (Princeton University)

Quantum Computation

In recent years, among physicists and computer scientists, there has been much interest in the possibility of performing computations based on quantum mechanics. In this talk we survey some of the recent significant theoretical developments in this area, such as the construction of efficient universal quantum machines, fast quantum algorithms for seemingly hard problems, etc. We also discuss quantum communication theory, and its relation with quantum computation.


4 August 1994

Moni Naor (Weizmann Institute)

Visual Cryptography

We consider a new type of cryptographic scheme, which can decode concealed images without any cryptographic computations. The scheme is perfectly secure and very easy to implement. We extend it into a visual variant of the $k$ out of $n$ secret sharing problem, in which a dealer provides a transparency to each one of the $n$ users; any $k$ of them can see the image by stacking their transparencies, but any $k-1$ of them gain no information about it. We describe efficient combinatorial constructions which solve it for various choices of $k$ and $n$.
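The simplest instance is the 2-out-of-2 scheme: each secret pixel is expanded into two subpixels on each transparency, with identical subpixel patterns on the two shares for a white pixel and complementary patterns for a black one, so each share alone is a uniformly random pattern while stacking (a Boolean OR) reveals the image. A toy Python sketch along these lines (the constructions for general $k$ and $n$ are more involved):

# Toy 2-out-of-2 visual secret sharing: each pixel becomes two subpixels per
# share.  White pixel -> both shares get the same random pattern (stack: half
# dark); black pixel -> complementary patterns (stack: fully dark).
import random

random.seed(0)
secret = ["01100",
          "01100",
          "00000",
          "10001",
          "01110"]                                   # '1' = black pixel

share1, share2, stacked = [], [], []
for row in secret:
    r1, r2, st = [], [], []
    for pixel in row:
        pat = random.choice([(0, 1), (1, 0)])        # random subpixel pattern
        other = pat if pixel == "0" else (1 - pat[0], 1 - pat[1])
        r1 += pat                                    # list += tuple extends the list
        r2 += other
        st += (pat[0] | other[0], pat[1] | other[1]) # stacking = Boolean OR
    share1.append(r1)
    share2.append(r2)
    stacked.append(st)

for row in stacked:
    print("".join("#" if b else "." for b in row))   # black pixels appear as '##'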

Joint work with Adi Shamir.


5 August 1994

Randall Wilson (Sandia National Labs)

Recent Progress and Open Problems in Assembly Planning

The goal of assembly planning is to automatically determine the operations and order of those operations required to construct an assembly from its constituent parts, given only a description of the final assembly as input. While a great number of assembly planning systems have been written, few have taken careful computational approaches to the problems. This talk will give an overview of the field and some of the approaches that have been taken, and attempt to identify crisp open problems that are interesting from both theoretical and practical viewpoints.


11 August 1994

Ed Reingold (University of Illinois-UC)

The Complexity of Determining the Majority

Given a set $\{x_1, x_2, \ldots, x_n\}$, each element of which is colored either red or blue, we must determine an element of the majority color by making equal/not-equal color comparisons; when n is even, we must report that there is no majority if there are equal numbers of each color. We give a new proof that in the worst case exactly $n - \nu(n)$ questions are necessary and sufficient, where $\nu(n)$ is the number of 1-bits in the binary representation of n. Furthermore, we prove that $2n/3 - \sqrt{8n/(9\pi)} + O(\log n)$ such comparisons are necessary and sufficient in the average case.
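To get a feel for the two bounds, the small Python snippet below simply evaluates them for a few values of n (an arithmetic illustration, not the algorithm that achieves the bounds):

# Evaluating the comparison bounds stated above:
# worst case n - nu(n), average case 2n/3 - sqrt(8n/(9*pi)) + O(log n).
from math import pi, sqrt

def nu(n):
    return bin(n).count("1")     # number of 1-bits in the binary representation

for n in (8, 100, 1000, 10**6):
    worst = n - nu(n)
    avg = 2 * n / 3 - sqrt(8 * n / (9 * pi))
    print(f"n={n:>7}  worst case={worst:>7}  average ~ {avg:10.1f} (+ O(log n))")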

Joint work with Laurent Alonso and Ren\'e Schott.


18 August 1994

Adi Rosen (Tel-Aviv University)

Randomized Robot Navigation Algorithms

We give randomized strategies for robot navigation in an unknown environment. In particular, we consider the "wall-problem," first considered by Papadimitriou and Yannakakis, and Blum, Raghavan and Schieber: A robot has to reach a wall through a scene with rectilinear obstacles without having any prior knowledge about the obstacles. The aim of the strategy is to minimize the worst case ratio between the distance the robot travels and the optimum path to the wall. We give randomized strategies that are provably better than any deterministic strategy, thus partially settling an open problem posed by Blum, Raghavan and Schieber.

Joint work with A. Blum, P. Berman, A. Fiat, H. Karloff, and M. Saks.