SAS is the Stanford Algorithms Seminar (formerly the AFLB -- Algorithms for Lunch Bunch). It has been run by Daphne Koller (September 1992-March 1993), Jeffrey Oldham (April 1993-June 1993), and Ram Kumar (July 1993-September 1993). Meetings are generally Thursdays at 4:15 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.


  1. 17 September 1992: Prabhakar Raghavan (IBM Yorktown). Exact Analysis of Hot Potato Routing.
  2. 1 October 1992: Alistair Sinclair (University of Edinburgh and ICSI). Quadratic Dynamical Systems.
  3. 15 October 1992: Andrew Goldberg (Stanford). Implementing Scaling Algorithms for the Min-Cost Flow Problem (Why and How).
  4. 22 October 1992: Krishna V. Palem (IBM Watson). Efficient (Program) Transformations for Responsive Parallel Computing.
  5. 5 November 1992: Gene Itkis (Boston University). Self-Stabilizing Distributed Protocols in Constant Space Per Edge.
  6. 12 November 1992: Donald Knuth (Stanford). Textbook Examples of Recursion.
  7. 19 November 1992: Bernd Sturmfels (Cornell University). Sparse systems of algebraic equations.
  8. 3 December 1992: Steven Phillips (Stanford University). Markov Paging.
  9. 8 December 1992: Neal Young (UMIACS). Linear-Programming Duality and the K-Server Problem.
  10. 10 December 1992: Dorit S. Hochbaum (UC-Berkeley). Generalizing the Vertex Cover Problem Properties to Integer Programs with Two Variables Per Inequality.
  11. 17 December 1992: Claire Kenyon (ENS-Lyon) and William Thurston (MSRI). Rotation distance between binary trees: hyperbolic geometry vs. max-flow min-cut.
  12. 7 January 1993: Moshe Y. Vardi (IBM Almaden). Fixpoint Logics, Relational Machines, and Computational Complexity.
  13. 14 January 1993: David Karger (Stanford University). A New Algorithm for the Minimum Cut Problem.
  14. 21 January 1993: Andrei Broder (DEC SRC). On the satisfiability of random 3-CNF formulas.
  15. 28 January 1993: David Zuckerman (MIT). Diminishing our Reliance on Randomness in Computation.
  16. 4 February 1993: Cynthia Dwork (IBM Almaden). Contention in Shared-Memory Algorithms.
  17. 9 February 1993: Samson Abramsky (Imperial College). Games, Types and Interactions.
  18. 9 February 1993: David Harel (Weizmann Institute). Towards a Theory of Infinite Computable Structures and Databases.
  19. 11 February 1993: Edith Cohen (AT&T Bell Labs). Fast algorithms for constructing t-spanners and paths with stretch t.
  20. 18 February 1993: Tomasz Radzik (Cornell University). Approximate Generalized Circulation.
  21. 25 February 1993: Abhijit Sahay (UC-Berkeley). LogP: Towards a Realistic Model of Parallel Computation.
  22. 4 March 1993: Leonard J. Schulman (UC-Berkeley). Deterministic Coding for Interactive Communication.
  23. 11 March 1993: Ashok K. Chandra (IBM Almaden). Towards More Realistic Models of Parallel Computers.
  24. 16 March 1993: Steven Phillips (Stanford). Online Load Balancing and Network Flow.
  25. 1 April 1993: Orli Waarts (IBM Almaden). On-Line Load Balancing with Applications to Machine Scheduling and Virtual Circuit Routing.
  26. 8 April 1993: Yuval Rabani (ICSI). A Decomposition Theorem and Bounds For Randomized Server Problems.
  27. 15 April 1993: Michael Luby (ICSI). A Parallel Approximation Algorithm for Positive Linear Programming.
  28. 22 April 1993: Anna R. Karlin (DEC SRC). On the Evolution of the Butterfly.
  29. 29 April 1993: Larry Stockmeyer (IBM Almaden). What Can Be Computed Locally?
  30. 14 May 1993: Yoram Moses (Weizmann Institute). Fully Polynomial Byzantine Agreement in t+1 Rounds.
  31. 25 May 1993: Narendra Karmarkar (AT&T Bell Laboratories). New Massively Parallel Architecture Based on Finite Projective Geometries.
  32. 27 May 1993: Rafail Ostrovsky (ICSI). One-Way Functions are Essential for Non-Trivial Zero-Knowledge.
  33. 1 June 1993: Farid Alizadeh (ICSI). Perfect graphs, Lovasz numbers and the positive semidefinite cone: A study based on duality theory and interior point algorithms.
  34. 3 June 1993: Jean-Claude Latombe (Stanford). Robot Algorithms.
  35. 15 July 1993: Bernard Chazelle (Princeton). A Simple Proof Technique for Geometric Discrepancy.
  36. 22 July 1993: Herbert Edelsbrunner (University of Illinois-UC). Modeling with shapes and simplicial complexes.
  37. 29 July 1993: Eric Torng (Stanford). A Better Algorithm For an Ancient Scheduling Problem.
  38. 5 August 1993: Dan Halperin (Stanford). Moving a Polygon in a Polygonal Environment.
  39. 26 August 1993: Naveen Garg (Indian Institute of Technology, Delhi). Approximate max-flow min-(multi)cut theorems for multicommodity flow.
  40. 22 September 1993: Jeff Vitter (Duke). Load Balancing Paradigms for Optimal Use of Parallel Disks and Parallel Memory Hierarchies.

17 September 1992

Prabhakar Raghavan (IBM Yorktown)

Exact Analysis of Hot Potato Routing

We consider a form of packet routing known as {\em hot potato routing} or {\em deflection routing}. Its striking feature is that there are no buffers at intermediate nodes. Thus packets are always moving (sometimes possibly in the ``wrong'' direction), giving rise to the term ``hot potato''. We give a simple deterministic algorithm that will route a random instance on the $n\times n$ torus in $2n+O(\log n)$ steps with high probability. We add random delays to this algorithm so that it solves the permutation routing problem on the torus in $9n$ steps with high probability, on every instance. On a hypercube with $N=2^n$ nodes, we give a simple deterministic algorithm that will route a random instance in $O(n)$ steps with high probability. Various other results are discussed.
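The buffer-free constraint is easy to see in a toy simulation. The sketch below (our illustration, not the authors' algorithm) deflection-routes packets on a one-dimensional ring rather than the torus of the talk: every packet must leave its node at every step, and a node holding two packets deflects one of them.

```python
import collections

def route_ring(n, packets, max_steps=1000):
    """Deflection-route packets on an n-node bidirectional ring.

    `packets` is a list of (source, destination) pairs, at most one per
    source node.  Each step every packet must leave its node (no buffers);
    a node with two packets sends one clockwise and one counter-clockwise,
    deflecting one of them if both prefer the same direction.  Returns the
    number of steps until all packets are delivered.
    """
    at = collections.defaultdict(list)   # position -> destinations of packets there
    for s, d in packets:
        if s != d:
            at[s].append(d)
    if not at:
        return 0
    for step in range(1, max_steps + 1):
        nxt = collections.defaultdict(list)
        for v, dests in at.items():
            assert len(dests) <= 2       # in-degree 2: never more than 2 packets
            free = {+1, -1}              # clockwise / counter-clockwise links
            for d in dests:
                cw = (d - v) % n         # distance going clockwise
                pref = +1 if cw <= n - cw else -1
                move = pref if pref in free else next(iter(free))  # deflect if taken
                free.discard(move)
                nxt[(v + move) % n].append(d)
        # drop delivered packets; keep the rest moving
        at = {v: [d for d in ds if d != v] for v, ds in nxt.items()}
        at = {v: ds for v, ds in at.items() if ds}
        if not at:
            return step
    return None
```

With a single packet there are no conflicts, so delivery takes exactly the ring distance.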

Joint work with Uri Feige of The Weizmann Institute.

1 October 1992

Alistair Sinclair (University of Edinburgh and ICSI)

Quadratic Dynamical Systems

Quadratic dynamical systems are widely used to model phenomena in the natural sciences, and serve as the basis for many computer simulations of these phenomena. Examples include population genetics and the kinetic theory of ideal gases. Less classically, they also provide an appropriate framework for the study of genetic algorithms for combinatorial optimisation. In contrast to linear systems, which are well understood, there is little general theory available for the quantitative analysis of quadratic systems.

In this talk, I will present several fundamental properties of the large class of symmetric quadratic systems acting on populations over a fixed set of types. I will go on to give a complete analysis of one particular system, defined on populations over the set of matchings in a tree. In particular, it will turn out that convergence to the limit in this system requires only polynomial time. This demonstrates that such systems, though non-linear, are sometimes amenable to analysis.
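For concreteness, here is a toy symmetric quadratic system (our illustration, not an example from the talk): random mating with uniform crossover on two-bit types. The marginal of each bit is invariant under the update, and the population converges to the product of its marginals.

```python
import itertools

def crossover_step(p):
    """One generation of random mating with uniform crossover.

    `p` maps length-2 bit tuples to probabilities.  Two parents are drawn
    independently from p; each child bit comes from either parent with
    probability 1/2.  Because the update is a sum over ordered parent
    pairs weighted by p[x] * p[y], this is a quadratic dynamical system.
    """
    q = {t: 0.0 for t in p}
    for x in p:
        for y in p:
            w = p[x] * p[y]
            # four equally likely children (each bit taken from x or from y)
            for mask in itertools.product((0, 1), repeat=2):
                child = tuple(x[i] if mask[i] == 0 else y[i] for i in range(2))
                q[child] += w / 4
    return q

# start from a perfectly correlated population
p = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}
for _ in range(20):
    p = crossover_step(p)
# marginals stay (1/2, 1/2); the correlation halves each generation,
# so p converges geometrically to the product distribution (1/4 each)
```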

This is joint work with Yuri Rabinovich and Avi Wigderson.

15 October 1992

Andrew Goldberg (Stanford)

Implementing Scaling Algorithms for the Min-Cost Flow Problem (Why and How)

The scaling push-relabel method is an important theoretical development in the area of minimum-cost flow algorithms. We study practical implementations of this method. We are especially interested in heuristics which improve real-life performance of the method.

Our implementation works very well over a wide range of problem classes. In our experiments, it was always competitive with the established codes, and usually outperformed these codes by a wide margin.

Some heuristics we develop may apply to other network algorithms. Our experimental work on the minimum-cost flow problem motivated theoretical work on related problems.

22 October 1992

Krishna V. Palem (IBM Watson)

Efficient (Program) Transformations for Responsive Parallel Computing

High-level parallel programming languages are designed to present a simple virtual machine model to the programmer. This allows the programmer to focus on the essentials of the correctness and efficiency of programs, and thereby improves productivity. However, engineering constraints do not always allow these idealized abstractions to be directly implemented in practice. A natural approach to bridging this gap is via mechanisms for automatically transforming the parallel algorithms or programs designed in the (idealized) high-level language to execute correctly on machines that can be actually realized in practice.

In this talk, we will describe transformations which provide a basis for the compilation of ideal programs that allow synchronization, to run efficiently on asynchronous parallel machines with shared memory. In addition to being correct and efficient, the transformed programs have the important property that they are responsive. Essentially, responsive computations always make progress on the asynchronous target machine without waiting for slow or failed processors to complete their work. Therefore, responsive executions are fault-tolerant in the following strong sense: the computation will terminate correctly even if processors fail arbitrarily often during the computation, so long as at least one processor is executing correctly at any time. No implicit assumptions about the relative speeds of processors, or about synchronization primitives built into the target machines, are required for our methods to work. By using randomization, we can guarantee that the executions on the asynchronous targets require only $O(\log n)$ extra space and $O(\log^3 n)$ additional expected work, compared to the original source programs.

This is joint work with Z. Kedem, M. Rabin and A. Raghunathan.

5 November 1992

Gene Itkis (Boston University)

Self-Stabilizing Distributed Protocols in Constant Space Per Edge

We present randomized distributed protocols utilizing only constant memory per edge for the following problems:

* Leader Election

* Spanning Tree (Minimum, BFS, etc.)

* Shortest-Path

* Deadlock Detection

* Network Reset

* Network Synchronization.

Our protocols work for asynchronous general-topology networks of identical nameless nodes. Moreover, they are efficient and self-stabilizing, i.e., the protocols can be executed beginning from an arbitrary configuration. We stress that all the previous protocols (even without self-stabilization) required logarithmic memory per processor.

In fact, not only the above problems but everything computable in randomized linear space can be computed by an asynchronous coin-flipping network (with the same memory and in a self-stabilizing fashion). Thus, we obtain a tight and robust characterization of the power of randomized distributed networks.

Based on joint work with B. Awerbuch (MIT) and R. Ostrovsky (MIT, ICSI, Berkeley) and further developments by G. Itkis and Leonid Levin (BU).

12 November 1992

Donald Knuth (Stanford)

Textbook Examples of Recursion

We discuss properties of recursive schemas related to McCarthy's ``91 function'' and to Takeuchi's triple recursion. Several theorems are proposed as interesting candidates for machine verification, and some intriguing open questions are raised.
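Both schemas are short enough to state directly as code; the variant of the triple recursion below is Takeuchi's original, which returns y (McCarthy also studied a variant returning z).

```python
def mc91(n):
    """McCarthy's 91 function: equals 91 for all n <= 100, and n - 10 above."""
    return n - 10 if n > 100 else mc91(mc91(n + 11))

def tak(x, y, z):
    """Takeuchi's triple recursion.

    Its well-known closed form is: y if x <= y, else z if y <= z, else x.
    """
    if x <= y:
        return y
    return tak(tak(x - 1, y, z), tak(y - 1, z, x), tak(z - 1, x, y))
```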

19 November 1992

Bernd Sturmfels (Cornell University)

Sparse systems of algebraic equations

Given N+1 algebraic equations in N variables, under which condition do they have a common solution?

If the equations are linear, then this condition is expressed by the determinant of the coefficient matrix. In order to solve linear equations, either symbolically or numerically, we first need to understand the structural properties of the determinant.

If the equations are non-linear, and possibly sparse, then the role of the determinant is played by the "sparse resultant". In this lecture we explore the combinatorial structure of the sparse resultant, and its application to solving non-linear algebraic equations.
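In the univariate case, the role of the determinant is played by the classical Sylvester resultant, which already illustrates the "determinant condition" for a common root. A small stdlib-only sketch (our illustration, not material from the lecture):

```python
def det(M):
    """Determinant by cofactor expansion (fine for the tiny matrices here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def resultant(f, g):
    """Resultant of two univariate polynomials via the Sylvester matrix.

    f and g are coefficient lists, highest degree first.  The resultant
    vanishes exactly when f and g have a common root.
    """
    m, n = len(f) - 1, len(g) - 1            # degrees of f and g
    size = m + n
    S = []
    for i in range(n):                        # n shifted copies of f's coefficients
        S.append([0] * i + f + [0] * (size - m - 1 - i))
    for i in range(m):                        # m shifted copies of g's coefficients
        S.append([0] * i + g + [0] * (size - n - 1 - i))
    return det(S)
```

For example, x^2 - 1 and x - 1 share the root 1, so their resultant is 0, while against x - 2 the resultant is f(2) = 3.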

3 December 1992

Steven Phillips (Stanford University)

Markov Paging

We consider the problem of paging under the assumption that the sequence of pages accessed is a Markov chain. The states of the Markov chain are the pages of memory, while the transition probabilities encode the locality of reference of the program. This model is less adversarial than the "competitive" model for the study of on-line problems.

It is easy to learn the Markov chain, just by observing it. However, the question we focus on is: if you know the Markov chain, what do you do with it? How can you use it to get the lowest possible fault rate?

We look at a number of natural algorithms, all of which turn out to have a high fault rate on some Markov chains. We then describe a specific algorithm whose fault rate is at most a constant times optimal, for every Markov chain.
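As a toy illustration of the model (ours, not the algorithm from the talk), here is a simulator that measures the fault count of LRU when requests are drawn from a given Markov chain:

```python
import random

def simulate_lru(transition, start, cache_size, steps, seed=0):
    """Fault count of LRU on a request sequence drawn from a Markov chain.

    `transition[p]` is a list of (next_page, probability) pairs.
    """
    rng = random.Random(seed)
    cache = []                        # most recently used page kept last
    page, faults = start, 0
    for _ in range(steps):
        if page in cache:
            cache.remove(page)        # refresh recency
        else:
            faults += 1
            if len(cache) == cache_size:
                cache.pop(0)          # evict the least recently used page
        cache.append(page)
        r, acc = rng.random(), 0.0    # sample the next request
        for q, pr in transition[page]:
            acc += pr
            if r <= acc:
                page = q
                break
    return faults
```

On the deterministic chain cycling through pages 0, 1, 2 with a cache of size 2, LRU faults on every single request, which is the kind of bad behavior a Markov-aware algorithm can avoid.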

(Joint work with Anna Karlin and Prabhakar Raghavan)

8 December 1992

Neal Young (UMIACS)

Linear-Programming Duality and the K-Server Problem

The k-server problem is an outstanding open problem in on-line algorithms. We will discuss why we feel linear-programming techniques, in particular primal-dual techniques, are relevant to the problem of finding a competitive on-line strategy. We will discuss some partial successes of this approach and, if time permits, what the next steps might be.

10 December 1992

Dorit S. Hochbaum (UC-Berkeley)

Generalizing the Vertex Cover Problem Properties to Integer Programs with Two Variables Per Inequality

The vertex cover problem has several interesting properties: the optimal basic solutions to the linear programming relaxation assume values in {0, 1/2, 1}; such solutions can be obtained by solving a certain minimum cut problem; rounding the linear programming relaxation solution gives a 2-approximate solution; and finally, there is an optimal solution identical to the linear programming solution in its integer components.

We consider integer programming problems with a minimization objective and such that each constraint involves up to two variables. Such problems are NP-hard even if the variables are binary (e.g., vertex cover), and even if the inequalities are monotone (the coefficients of the variables have opposite signs).

Let U be an upper bound on the value of the variables. An O(mnU^2 log(n^2 U/m)) algorithm is given for the problem over monotone inequalities, thus proving it is only weakly NP-hard. This algorithm has a direct application to a problem of layout of VLSI circuits. This algorithm is also the key to the approximation in the general integer programming case.

For the general (nonmonotone) case, we show that the properties of the vertex cover problem are a special case of the properties of integer programs with two variables per inequality. The results include an O(mnU^2 log(n^2 U/m)) algorithm delivering a super-optimal solution that provides a lower bound at least as good as the corresponding bound derived from the linear programming relaxation. This super-optimal solution consists of integer multiples of 1/2, and it also has the property that some of its integer components retain their value in an optimal solution. It is shown how several approximation algorithms for vertex cover that exploit certain graph properties in order to derive better approximations generalize to our integer programs.
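The half-integrality property can be demonstrated on tiny vertex cover instances by brute force: since some optimal basic LP solution uses only the values {0, 1/2, 1}, it suffices to search that finite grid and round every 1/2 up. This is illustration only; the algorithms in the talk are of course far more efficient.

```python
from fractions import Fraction
from itertools import product

def vc_halfintegral_round(n, edges):
    """2-approximate vertex cover via half-integrality (tiny instances only).

    Searches the grid {0, 1/2, 1}^n for the cheapest fractional cover
    (valid because an optimal basic LP solution is half-integral), then
    rounds every value >= 1/2 up to 1.  Returns (LP value, cover).
    """
    half = Fraction(1, 2)
    best, best_val = None, None
    for x in product((Fraction(0), half, Fraction(1)), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):
            val = sum(x)
            if best_val is None or val < best_val:
                best, best_val = x, val
    cover = [v for v in range(n) if best[v] >= half]
    return best_val, cover
```

On a triangle the LP optimum is 3/2 (all variables at 1/2); rounding yields all three vertices, cost 3, which is exactly twice the LP bound.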

Parts of this work are joint with J. Naor and with N. Megiddo and A. Tamir.

17 December 1992

Claire Kenyon (ENS-Lyon) and William Thurston (MSRI)

Rotation distance between binary trees: hyperbolic geometry vs. max-flow min-cut.

The maximum number of rotations needed to go from one given binary tree with $n$ nodes to another is exactly $2n-6$ when $n$ is large enough. We first sketch Sleator, Tarjan and Thurston's original proof of this theorem, which involves hyperbolic geometry volume arguments, the Riemann mapping theorem, approximate calculations of integrals and an induction argument. We then present an alternate, elementary proof, based on the max-flow min-cut theorem. Finally, we compare the two proofs and show how they are essentially two versions of the same proof, by relating successively hyperbolic volume to cocycles to linear programming to amortized analysis to flow problems.
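Rotation distance is easy to compute exactly for small trees by breadth-first search in the rotation graph (a toy illustration of the notion; the theorem concerns the worst case for large n). Trees are nested pairs, with the empty tuple as a leaf.

```python
from collections import deque

LEAF = ()

def rotations(t):
    """All trees reachable from binary tree t by a single rotation."""
    if t == LEAF:
        return
    l, r = t
    if l != LEAF:                     # right rotation at the root: ((A,B),C) -> (A,(B,C))
        a, b = l
        yield (a, (b, r))
    if r != LEAF:                     # left rotation at the root: (A,(B,C)) -> ((A,B),C)
        b, c = r
        yield ((l, b), c)
    for l2 in rotations(l):           # rotations inside the subtrees
        yield (l2, r)
    for r2 in rotations(r):
        yield (l, r2)

def rotation_distance(s, t):
    """BFS in the rotation graph (exponential; small trees only)."""
    if s == t:
        return 0
    seen, frontier = {s}, deque([(s, 0)])
    while frontier:
        u, d = frontier.popleft()
        for v in rotations(u):
            if v == t:
                return d + 1
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return None
```

With three internal nodes the rotation graph is the pentagon, and the two "combs" are two rotations apart.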

7 January 1993

Moshe Y. Vardi (IBM Almaden)

Fixpoint Logics, Relational Machines, and Computational Complexity

We establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. Our parameterized fixpoint logics capture the complexity classes P, NP, PSPACE, and EXPTIME, though equality is achieved only over ordered structures.

There is, however, an inherent mismatch between complexity and logic -- while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we develop a theory of relational complexity, which bridges the gap between standard complexity and fixpoint logic. On one hand, we show that questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes. On the other hand, the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight three-way relationship among fixpoint logics, relational complexity and standard complexity yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE, and EXPTIME. The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators.

This talk represents joint work with S. Abiteboul and V. Vianu.

14 January 1993

David Karger (Stanford University)

A New Algorithm for the Minimum Cut Problem

This talk presents a new algorithm for finding global min-cuts in weighted, undirected graphs. One of the strengths of the algorithm is its extreme simplicity. As a first step, we show that this algorithm can be implemented as a strongly polynomial sequential algorithm with running time $O^*(mn^2)$, even if space is restricted to $O(n)$, or can be parallelized as an $\RNC$ algorithm which runs in time $O(\log^2 n)$ on a CRCW PRAM with $mn^2\log n$ processors. This provides the first proof that the min-cut problem for weighted undirected graphs is in $\RNC$. The algorithm does more than find a single min-cut; it finds all of them. The algorithm also yields numerous results on network reliability, enumeration of cuts, multi-way cuts, and approximate min-cuts.
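The simplicity claim is easy to appreciate: the core of the randomized contraction step fits in a few lines. A sketch for small unweighted multigraphs (the trial count and seed are our choices, and repetition is what boosts the success probability):

```python
import random

def karger_min_cut(edges, trials=500, seed=1):
    """Estimate the min cut by repeated random edge contraction.

    `edges` is a list of (u, v) pairs (parallel edges allowed).  Each
    trial contracts random edges until two super-nodes remain; the edges
    that survive form a cut, so every trial's answer is a valid cut and
    the minimum over trials is the estimate.
    """
    rng = random.Random(seed)
    label = {}

    def find(x):                      # follow contraction labels to the super-node
        while label[x] != x:
            x = label[x]
        return x

    best = None
    for _ in range(trials):
        e = list(edges)
        nodes = {x for u, v in e for x in (u, v)}
        for x in nodes:
            label[x] = x
        remaining = len(nodes)
        while remaining > 2:
            u, v = e[rng.randrange(len(e))]       # pick a random surviving edge
            label[find(u)] = find(v)              # contract it
            remaining -= 1
            e = [(a, b) for a, b in e if find(a) != find(b)]   # drop self-loops
        best = len(e) if best is None else min(best, len(e))
    return best
```

On two triangles joined by a bridge, the algorithm finds the bridge as the (unique) minimum cut.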

New work (joint with Clifford Stein) improves the bounds to $O^*(n^2)$ sequential time or parallel processors, thus providing the most efficient known algorithm for the minimum cut problem in both the sequential and the parallel case.

If time permits, we will show how a theorem based upon the above algorithm can be applied to minimum spanning trees. We give a simple and practical minimum spanning tree algorithm with running time $O(m+ n\log^2 n)$, as well as an EREW PRAM minimum spanning tree algorithm which runs in $O(\log n)$ time using $O(m+n^{1+\epsilon})$ processors--this is the best possible running time and is work-efficient on dense graphs.

21 January 1993

Andrei Broder (DEC SRC)

On the satisfiability of random 3-CNF formulas

Given a boolean formula in conjunctive normal form (CNF), the ``satisfiability problem'' (SAT) is to determine whether there is a truth assignment that satisfies it. Since SAT is NP-complete, the interest is in efficient heuristics that perform well ``on average,'' or with high probability.

Here we analyze the pure literal rule heuristic for computing a satisfying assignment to a random 3-CNF formula with n variables. Previous work showed that the unit clause rule and the minimal clause rule can find satisfying assignments for almost all 3-CNF formulas with up to n clauses. We show that the pure literal rule by itself finds satisfying assignments for almost all 3-CNF formulas with up to 1.63 n clauses, but it fails for more than 1.7 n clauses.
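The pure literal rule itself is straightforward to implement (a sketch of the rule, not of the analysis):

```python
def pure_literal(clauses):
    """Repeatedly satisfy pure literals (those whose negation never occurs).

    `clauses` is a list of tuples of nonzero ints (DIMACS style: -3 means
    the negation of variable 3).  Returns a set of satisfied literals if
    the rule alone satisfies the formula, or None if it gets stuck.
    """
    clauses = [set(c) for c in clauses]
    assignment = set()
    while clauses:
        lits = {l for c in clauses for l in c}
        pure = [l for l in lits if -l not in lits]
        if not pure:
            return None                        # no pure literal left: stuck
        assignment.update(pure)
        # clauses containing a satisfied pure literal disappear
        clauses = [c for c in clauses if not (c & set(pure))]
    return assignment
```

The rule can get stuck even on satisfiable formulas, e.g. when every literal occurs in both polarities.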

The talk is intended for a relatively broad audience: I will try to build some intuition for the problem and give a sketch of the proofs, but will only skim the technical details of the analysis.

This is joint work with Alan Frieze (CMU), and Eli Upfal (IBM Almaden and Weizmann Institute).

28 January 1993

David Zuckerman (MIT)

Diminishing our Reliance on Randomness in Computation

In this talk, I will discuss three results about minimizing our reliance on randomness, and show how they are related. The first result is that a randomized space(S) machine using only poly(S) random bits can be simulated deterministically in space(S). The second is that a randomized polynomial-time machine can be simulated even if the source of randomness is defective. The third is an explicit construction of expanders that beat the eigenvalue bound, with applications to sorting and selection algorithms, and to superconcentrator and nonblocking-network constructions.

The first result is joint work with N. Nisan, and the third is joint work with A. Wigderson.

4 February 1993

Cynthia Dwork (IBM Almaden)

Contention in Shared-Memory Algorithms

The principal focus of theoretical research on asynchronous distributed algorithms for shared-memory architecture has been on the cost measures of time (number of process steps), register size, and total memory. The issue of contention--simultaneous access by multiple processes to the same location in memory--has received little formal treatment. However, write-contention induces high real-time costs. For example, the real cost of $n$ simultaneous writes to a single shared variable can grow proportionately to $n^2$.

In this work we provide the theoretical underpinnings needed to analyze the relative contention costs of distributed algorithms. We show that the bitonic counting network of Aspnes, Herlihy, and Shavit enjoys much lower contention costs than the single-variable solution to the counting problem. In addition, we obtain high lower bounds on contention and contention/latency tradeoffs for a variety of other problems, including consensus and mutual exclusion.

This is joint work with Maurice Herlihy and Orli Waarts.

9 February 1993

Samson Abramsky (Imperial College)

Games, Types and Interactions

We introduce the ideas of Game semantics for programming languages and constructive logics, in which types (or formulas) denote games, and programs (or proofs) denote winning strategies. Games have a natural duality (between player and opponent); there are also constructions for combining games, with fine control over the flow of information. These constructions lead to a model of Linear Logic, which captures the dynamical intuitions which make that logic important to Computer Science. The winning strategies corresponding to proofs are ``dynamic tautologies'', processes which conserve the flow of information. They also lead to a subtle model of sequential processes, constrained enough to lead to very strong definability results. We present one such result, a ``Full Completeness theorem'' for Multiplicative Linear Logic (plus MIX): every winning strategy is the denotation of a unique cut-free proof net.

Game semantics may also provide the ``missing link'' between programming language semantics and Complexity theory. We conclude with some speculative remarks on this theme.

9 February 1993

David Harel (Weizmann Institute)

Towards a Theory of Infinite Computable Structures and Databases

Computer scientists are interested predominantly in finite objects. However, insight into finite objects can often be obtained by considering infinite, but recursive (i.e., computable), variants thereof. This leads to the notion of a recursive graph, or, more generally, a recursive structure in the model-theoretic sense. Moreover, in the database world, infinite computable collections of data are not out of place: a table of logarithms or trigonometric functions, for example, can be viewed as an infinite relation that is given by a recursive procedure for generating any desired part of it. This is a rather general notion of a deductive database.

The talk will describe our recent work on infinite structures and databases, including the high undecidability of many problems on recursive graphs, a completeness result for computable database queries, and ways of deducing results for the finite case from those for the infinite case and vice versa. In view of the recent results about the impossibility of approximating problems hard for Max-SNP, one of our results seems to be particularly interesting, since it shows that any problem in NP whose infinite version is highly undecidable cannot be in Max-SNP (in fact, it is not even in Max-NP).

11 February 1993

Edith Cohen (AT&T Bell Labs)

Fast algorithms for constructing t-spanners and paths with stretch t

A path in a weighted graph G=(V,E) has stretch t if its weight is at most t times the distance between its end points. For weighted undirected graphs we present an O(mn^{(2+\eps)/t}+kn^{(2+\eps)/t}) time randomized algorithm for computing paths with stretch t between k specified pairs of vertices (where n=|V| and m=|E|). This is a significant improvement over previous bounds. As a concrete example, for the problem of computing paths between O(m) arbitrary pairs of vertices, the best shortest-paths algorithm runs in time ~O(mn); our algorithm finds paths with stretch 4 in time ~O(mn^{0.5}), whereas the previous algorithm ([ABCP92]) can only obtain paths with stretch 128 within this time bound. We also present deterministic and randomized parallel algorithms.

A t-spanner of a graph G is a spanning subgraph of G such that distances in the spanner are within a factor of t from the corresponding distances in G. Much research was recently conducted on bounding the size and efficiently constructing t-spanners. All previous constructions, however, were based on solving many instances of the exact shortest-paths problem. We avoid that and construct t-spanners, of even smaller size, much more efficiently than known before.

The main tool we employ are efficient constructions of (t,W)-pairwise-covers of the input graph (where W>0 and t\geq 2). A (t,W)-cover is, roughly, a collection of subsets of vertices of total size ~O(n^{1+2/t}) such that any pair of vertices of distance at most W are both contained in at least one subset, and any pair of vertices which are in the same subset are of distance at most tW. We use a logarithmic number of pairwise covers for different values of W to generate a t(1+\eps)-spanner and paths of stretch t(1+\eps).
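For contrast with the constructions in the talk, the classical greedy t-spanner construction, which does rely on shortest-path computations, can be sketched as follows (our illustration; this is the well-known greedy algorithm, not the paper's method):

```python
import heapq

def dijkstra(adj, s, t, cutoff):
    """Shortest s-t distance in an adjacency dict; infinity if none found."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")) or d > cutoff:   # stale entry or too far
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def greedy_spanner(vertices, edges, t):
    """Classical greedy t-spanner: scan edges by weight; keep (u, v, w)
    only if the spanner built so far has no u-v path of length <= t*w."""
    adj = {v: [] for v in vertices}
    spanner = []
    for w, u, v in sorted((w, u, v) for u, v, w in edges):
        if dijkstra(adj, u, v, t * w) > t * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            spanner.append((u, v, w))
    return spanner
```

On a 4-cycle of unit edges plus a weight-1.4 chord, a 2-spanner keeps the cycle and discards the chord (the path around has length 2 <= 2 * 1.4).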

18 February 1993

Tomasz Radzik (Cornell University)

Approximate Generalized Circulation

The generalized circulation problem is a generalization of the maximum flow problem. Each arc (v,w) has a gain factor g(v,w): if f(v,w) units of flow enter (v,w) at v, then f(v,w) g(v,w) units arrive at w. The objective is to maximize the net flow into a distinguished node, the source, while maintaining the conservation of flow at all other nodes and capacity constraints on all arcs. We prove that the fat path algorithm designed for this problem by Goldberg, Plotkin, and Tardos gives a good approximate solution in strongly polynomial time. More precisely, after $O^*(m^2n\log(1/a))$ time the net flow at the source is at least (1-a) times the optimum. Our bound improves by a factor of n the previous best strongly polynomial approximation given by Cohen and Megiddo.

25 February 1993

Abhijit Sahay (UC-Berkeley)

LogP: Towards a Realistic Model of Parallel Computation

Much of the research in the design of parallel algorithms has been based either on simplistic models of parallel computers (such as the PRAM) or on models that are too closely tied to the architectures of specific machines (such as various network models). In order to develop techniques that yield performance across a range of parallel machines, a more realistic and general model is needed.

We argue that critical technology trends underlying parallel machine design offer opportunities to model a wide range of machines uniformly with a small number of parameters. Specifically, we offer the {\em LogP model} in which a parallel computer is viewed as a collection of homogeneous processors, each with a local memory, connected together in a network capable of delivering point-to-point messages of fixed size. The machine is characterized by four parameters: $P$, the number of processors; $L$, an upper bound on the latency of a message in the network; $o$, the send/receive overhead; and $g$, the minimum separation between consecutive sends/receives at a processor. Our model reveals important bottlenecks but is simple enough to support interesting algorithmic analysis.

This talk will describe the model and optimal algorithms for some broadcast problems, for summation, and for computing FFTs. All our algorithms are absolutely optimal in that not even the constant factors can be improved.
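As a small illustration of LogP-style analysis (our sketch, not the talk's algorithms), here is the greedy single-item broadcast schedule in which the earliest-available informed processor keeps sending; a message sent at time s is fully received at s + o + L + o, and the sender may send again g time units later (assuming g >= o):

```python
import heapq

def logp_broadcast_time(P, L, o, g):
    """Completion time of a greedy single-item broadcast under LogP.

    Repeatedly let the earliest-available informed processor send to a
    new processor; return the time at which the last receiver finishes
    its receive overhead.
    """
    if P <= 1:
        return 0
    ready = [0]                       # times at which informed processors can send
    finish, informed = 0, 1
    while informed < P:
        s = heapq.heappop(ready)      # earliest available sender
        arrive = s + 2 * o + L        # receiver done with its receive overhead
        informed += 1
        finish = max(finish, arrive)
        heapq.heappush(ready, s + g)      # sender available again after the gap
        heapq.heappush(ready, arrive)     # the new processor can start sending too
    return finish
```

With L = 6, o = 2, g = 4, informing a second processor takes L + 2o = 10 time units, and a third is reached at 14 via the root's second send at time g = 4.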

The LogP model is the result of work done jointly with David Culler, Richard Karp, David Patterson, Eunice Santos, Klaus Schauser, Ramesh Subramonian and Thorsten von Eicken.

4 March 1993

Leonard J. Schulman (UC-Berkeley)

Deterministic Coding for Interactive Communication

Two factors are prominent among those contributing to the increases in speed and storage capacity in current generations of computers. The first is increasing parallelism --- whether in actual parallel and distributed computers, or among the steadily more numerous components of a sequential machine. The second is the dramatic miniaturization of logical devices and wires. The first of these factors greatly magnifies the number of interprocessor communications performed during any computation, while the second increases the noise level affecting transmissions.

In view of this phenomenon the following concern was recently identified as basic. Consider a problem whose input is split between two processors connected by a communication link; and for which an interactive protocol exists which solves the problem in $T$ transmissions on any input, provided the channel is noiseless. If in fact there is some noise on the channel, what is the effect upon the number of transmissions needed in order to solve the communication problem reliably?

We will address this question by describing a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slow-down. This is an analog for general interactive protocols of Shannon's coding theorem, which dealt only with data transmission, i.e. one-way protocols.

We will also discuss some of the principal directions for further work.

11 March 1993

Ashok K. Chandra (IBM Almaden)

Towards More Realistic Models of Parallel Computers

A PRAM is a very convenient model of parallel computation, both for theoretical analysis and for programming. Unfortunately, it cannot be built. In practice, access to remote memory takes much more time than local access - sometimes by a factor of a thousand or more when data transfers take place using messages or pages. Such large factors have to be addressed explicitly when designing efficient algorithms. This is often done in an ad-hoc fashion. Is there a robust model of computation which can take these issues into account and guide efficient algorithm design?

Communication delays arise from two major limitations: bandwidth and latency, which have different consequences for algorithm design. A Block PRAM model of computation (BPRAM for short) models these two effects by allowing processors to communicate by transferring blocks of data - here n words can be transferred in time l + bn, where l corresponds to latency, and b represents the (inverse of) bandwidth per processor. This model can be used to study several phenomena which arise in practice: inherent limits to parallelizing communication, computation-communication tradeoffs, bandwidth limits for problems such as sorting, FFT etc., importance of using both temporal and spatial locality of reference, the role of the l·p product of a machine as a determiner of the difficulty of generating efficient algorithms, etc. Furthermore, one can model phenomena akin to practice where tiny changes in the communication pattern can result in catastrophic changes in the running time of certain algorithms.
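The model's basic point about spatial locality is visible directly in the cost formula (the parameter values below are made up for illustration):

```python
def block_transfer_cost(n, l, b):
    """BPRAM time to move n words as a single block: one startup, then streaming."""
    return l + b * n

def word_at_a_time_cost(n, l, b):
    """BPRAM time to move n words as n one-word messages: n startups."""
    return n * (l + b)
```

With l = 100 and b = 1, moving 1000 words as one block costs 1100 time units versus 101000 word-at-a-time, a factor of nearly 100: exploiting spatial locality amortizes the latency.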

The research was done jointly with Alok Aggarwal and Marc Snir.

16 March 1993

Steven Phillips (Stanford)

Online Load Balancing and Network Flow

We study two problems that can be viewed as on-line games on a dynamic bipartite graph. The first problem is on-line load balancing with preemption. A centralized scheduler must assign tasks to servers, processing online a sequence of task arrivals and departures. Each task is restricted to run on some subset of the servers. The scheduler attempts to keep the load well-balanced. If preemptive reassignments are disallowed, Azar, Karlin and Broder proved a lower bound of Omega(Sqrt(n)) on the ratio between the maximum load achieved by an on-line algorithm and the optimum off-line maximum load. We show that this ratio can be reduced to O(log n) by an efficient scheduler using only a small amount of rescheduling.
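To make the restricted-assignment setting concrete, here is a sketch of the natural non-preemptive greedy baseline (my own illustration, not the authors' O(log n) scheduler): each arriving task goes to the least-loaded server among those permitted to run it.

```python
# Greedy non-preemptive load balancing with restricted assignment:
# each task carries a set of permitted servers and is assigned, on
# arrival, to the currently least-loaded permitted server. This is a
# baseline sketch, not the preemptive scheduler from the talk.

def greedy_assign(num_servers, tasks):
    """tasks: list of (load, permitted_server_set). Returns (loads, assignment)."""
    loads = [0.0] * num_servers
    assignment = []
    for load, permitted in tasks:
        # pick the permitted server with the smallest current load
        # (sorted() makes tie-breaking deterministic: lowest index wins)
        s = min(sorted(permitted), key=lambda i: loads[i])
        loads[s] += load
        assignment.append(s)
    return loads, assignment

loads, assignment = greedy_assign(3, [
    (1.0, {0, 1}),
    (1.0, {0, 1}),
    (1.0, {2}),
    (1.0, {0}),   # restricted to server 0, which already holds a task
])
print(max(loads))  # 2.0: server 0 ends up with two unit tasks
```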

We then apply these ideas to network flow. Cheriyan and Hagerup introduced an on-line game on a bipartite graph as a fundamental step in improving algorithms for computing the maximum flow in networks. They described a randomized strategy to play the game. King, Rao and Tarjan studied a modified version of this game, called "node kill", and gave a deterministic strategy. We obtain an improved deterministic algorithm for the node kill game (and hence for maximum flow) in all but the sparsest graphs.

These problems combine a demand for good competitive ratios with more traditional requirements of implementation efficiency. Our solutions deal with the tradeoffs between these measures.

This is joint work with Jeff Westbrook.

1 April 1993

Orli Waarts (IBM Almaden)

On-Line Load Balancing with Applications to Machine Scheduling and Virtual Circuit Routing

In this work we study an idealized problem of on-line allocation of routes to virtual circuits where the goal is to minimize the required bandwidth. For the case where virtual circuits continue to exist forever, we describe an algorithm that achieves an O(log n) competitive ratio, where n is the number of nodes in the network. Informally, our results show that instead of knowing all of the future requests, it is sufficient to increase the bandwidth of the communication links by an O(log n) factor. We also show that this result is tight, i.e. for any on-line algorithm there exists a scenario in which an Omega(log n) increase in bandwidth is necessary.

We view virtual circuit routing as a generalization of an on-line scheduling problem, and hence a major part of this work focuses on development of algorithms for non-preemptive on-line scheduling for related and unrelated machines. Specialization of routing to scheduling leads us to concentrate on scheduling in the case where jobs must be assigned immediately upon arrival; assigning a job to a machine increases this machine's load by an amount that depends both on the job and on the machine. The goal is to minimize the maximum load.

For the related machines case, we describe the first algorithm that achieves a constant competitive ratio. For the unrelated case (with n machines), we describe a new method that yields an O(log n)-competitive algorithm. This stands in contrast to the natural greedy approach, which we show has only a Theta(n) competitive ratio. The virtual circuit routing result follows as a generalization of the unrelated machines case. Our techniques can be extended to provide low congestion solutions for more general routing problems as well.
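For concreteness, here is a sketch (mine, not the paper's O(log n) method) of the natural greedy rule for unrelated machines: assign each arriving job to the machine whose load after the assignment would be smallest.

```python
# Natural greedy for on-line scheduling on unrelated machines: job j
# has a machine-dependent size sizes[m]; greedy assigns it to the
# machine minimizing the resulting load. The talk shows this rule is
# only Theta(n)-competitive; this sketch just illustrates the rule.

def greedy_unrelated(num_machines, jobs):
    """jobs: list of per-machine size vectors. Returns final loads."""
    loads = [0.0] * num_machines
    for sizes in jobs:
        i = min(range(num_machines), key=lambda m: loads[m] + sizes[m])
        loads[i] += sizes[i]
    return loads

# Two machines: job 0 is cheap on machine 0, job 1 cheap on machine 1.
loads = greedy_unrelated(2, [[1.0, 4.0], [4.0, 1.0]])
print(loads)        # [1.0, 1.0]
print(max(loads))   # makespan 1.0
```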

The talk is self-contained.

This is joint work with James Aspnes from IBM Almaden Research Center, Yossi Azar from DEC Systems Research Center, Amos Fiat from Tel-Aviv University, and Serge Plotkin from Stanford University.

8 April 1993

Yuval Rabani (ICSI)

A Decomposition Theorem and Bounds For Randomized Server Problems

We present a general decomposition theorem for a simpler version of both the $k$-server problem and the metrical task system problem, which we call the ``pursuit-evasion game.'' We show that if a metric space M can be decomposed into two subspaces M_L and M_R that are very far from each other, then the competitive ratio for this game on M can be expressed nearly exactly in terms of the ratios on each of the two subspaces. This provides a natural way of analyzing the competitive ratio of a complex space made up of several regions in terms of the ratios of its components.

Using the decomposition theorem, we prove a lower bound of $\Omega(\sqrt{\log k / \log\log k})$ for the competitive ratio of randomized algorithms for the $k$-server problem against the oblivious adversary. The bound holds for arbitrary metric spaces (of at least $k+1$ points) and provides a new lower bound for the metrical task system problem as well. This improves the previous best lower bound of $\Omega(\log\log k)$ for arbitrary metric spaces \cite{KRR}, more closely approaching the conjectured lower bound of $\Omega(\log k)$. We also prove a lower bound of $\Omega(\log k / \log\log k)$ for the server problem on $k+1$ equally-spaced points on a line, which corresponds to several natural motion-planning problems.

This is joint work with Avrim Blum, Howard Karloff and Mike Saks.

15 April 1993

Michael Luby (ICSI)

A Parallel Approximation Algorithm for Positive Linear Programming

We introduce a fast parallel approximation algorithm for the positive linear programming optimization problem, i.e. the special case of the linear programming optimization problem where the input constraint matrix and constraint vector consist entirely of positive entries. The algorithm is elementary, and has a simple parallel implementation that runs in polylog time using a linear number of processors.
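For reference, a positive (packing) linear program and its dual (covering) program can be written as follows, where every entry of $A$, $b$, and $c$ is nonnegative and LP duality makes the two optima equal:

```latex
\text{(packing)}\quad \max\; c^{\top}x \;\;\text{s.t.}\;\; Ax \le b,\; x \ge 0
\qquad
\text{(covering)}\quad \min\; b^{\top}y \;\;\text{s.t.}\;\; A^{\top}y \ge c,\; y \ge 0
```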

This is joint work with Noam Nisan.

22 April 1993

Anna R. Karlin (DEC SRC)

On the Evolution of the Butterfly

There has been much recent work on the robustness of interconnection networks against random static faults. In this talk we examine the fault tolerance of the butterfly network. Specifically, we are interested in properties of B_n(p), the butterfly network with n nodes, where each edge is independently faulted with probability p.

We show that there is a constant p*, 1/2 < p* < 2/3, such that

1. There is a constant c > 0, such that for p < p*, the probability that B_n(p) has a connected component of size cn tends to 1, as n tends to infinity.

2. If p > p*, then for any constant c' > 0, the probability that B_n(p) has a connected component of size c'n tends to 0, as n tends to infinity.
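The random model is easy to experiment with. Below is a Monte Carlo sketch (my illustration, not the paper's analysis) that faults each edge of a d-dimensional butterfly independently and measures the largest surviving component:

```python
# Nodes of the d-dimensional butterfly are (level, row) pairs with
# level in 0..d and row in 0..2^d - 1; each node at level i < d has a
# "straight" edge to (i+1, row) and a "cross" edge to (i+1, row with
# bit i flipped). Each edge survives independently with probability
# 1 - p; components are tracked with union-find.

import random

def largest_component(d, p, rng):
    levels, rows = d + 1, 1 << d
    parent = list(range(levels * rows))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    def node(level, row):
        return level * rows + row

    for i in range(d):
        for r in range(rows):
            for nxt in (r, r ^ (1 << i)):      # straight and cross edges
                if rng.random() >= p:          # edge not faulted
                    union(node(i, r), node(i + 1, nxt))

    sizes = {}
    for v in range(levels * rows):
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

rng = random.Random(0)
print(largest_component(5, 0.0, rng))  # 192: no faults, all (5+1)*2^5 nodes
print(largest_component(5, 1.0, rng))  # 1: every edge faulted
```

Sweeping p between the two extremes with many trials exhibits the threshold behavior the theorem describes.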

This is joint work with Greg Nelson (DEC SRC) and Hisao Tamaki (University of Toronto).

29 April 1993

Larry Stockmeyer (IBM Almaden)

What Can Be Computed Locally?

An issue in distributed systems is locality. Informally, a distributed computation is local if each processor communicates only with processors located ``near'' it. The subject of the talk is the question of what can be computed by algorithms which satisfy a strong requirement of locality, namely, that the algorithm must run in constant time, independent of the size of the network. A processor running in constant time t must base its output solely on the information it can collect from processors located within distance t from it in the network. The work has three goals: first, to lay some groundwork for studying the question of what can and cannot be computed locally; second, to establish some basic, general results; and third, to study particular examples.

For the most part, we consider problems of producing labelings of the nodes of the network where the legality of a labeling can be checked locally (e.g., coloring). Two general results about such problems are: it is undecidable whether a given problem can be solved locally; and randomization does not help in solving problems locally. Regarding particular examples: there is a non-trivial labeling problem which can be solved locally; and this problem has been used as a basis for a local solution to a certain relaxed version of the dining philosophers problem. Open questions will also be discussed.
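As a toy illustration of local checkability (my example, not one from the talk): a proper graph coloring is exactly the kind of labeling whose legality is locally checkable, since each node needs only radius-1 information to run its check.

```python
# A labeling problem whose legality is locally checkable: proper graph
# coloring. Each node sees only its own color and its neighbors'
# colors; the labeling is legal iff every local check passes.

def local_check(v, color, adj):
    """Radius-1 check run at node v: my color differs from my neighbors'."""
    return all(color[v] != color[u] for u in adj[v])

def legal(color, adj):
    return all(local_check(v, color, adj) for v in adj)

# A 4-cycle 0-1-2-3-0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(legal({0: 'a', 1: 'b', 2: 'a', 3: 'b'}, adj))  # True
print(legal({0: 'a', 1: 'a', 2: 'b', 3: 'b'}, adj))  # False: 0 and 1 clash
```

The hard question the talk addresses is not checking but *producing* such labelings in constant time.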

This is joint work with Moni Naor.

14 May 1993

Yoram Moses (Weizmann Institute)

Fully Polynomial Byzantine Agreement in t+1 Rounds

This talk presents a polynomial protocol for reaching Byzantine agreement in t+1 rounds whenever n>3t, where n is the number of processors and t is an a priori upper bound on the number of failures. This resolves a longstanding open problem presented by Pease, Shostak and Lamport in 1980.

This is joint work with Juan Garay of IBM T.J. Watson Research Center.

25 May 1993

Dr. Narendra Karmarkar (AT&T Bell Laboratories)

New Massively Parallel Architecture Based on Finite Projective Geometries

Despite intense research activity in massively parallel architectures, several difficult problems remain in the design of high performance, low cost parallel machines that are also easy to program and achieve high efficiency.

In this talk, I will present a fundamentally new approach to organizing a parallel system, based on finite projective geometries. In the simplest (two dimensional) case of such a system, every pair of processors share a memory and every pair of memories share a processor. The instruction set of the machine is designed in such a way that each basic instruction results in massively parallel and conflict-free use of all components of the system (processors, memories and communication network) and the collection of all instructions has a rich mathematical structure that facilitates automatic transformation of user programs into sequences of basic instructions. Problem partitioning is based on a simple geometric rule implemented in hardware. As a result, many difficult problems arising in parallel architectures (such as load balancing, data routing, memory access conflicts etc.) are solved efficiently, without programmer intervention.
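The incidence structure behind such a design can be sketched directly (my illustration, not the speaker's construction): in the projective plane PG(2, q) over GF(q), q prime, points and lines are both nonzero triples up to scaling, point P lies on line L iff their dot product is 0 mod q, and any two distinct lines meet in exactly one point. Mapping processors to points and memories to lines (or the reverse) yields "every pair of memories shares a processor" and vice versa.

```python
# Build PG(2, q) for prime q: normalize each nonzero triple over GF(q)
# so its first nonzero coordinate is 1; the plane is self-dual, so the
# same triples serve as both points and lines.

from itertools import product

def pg2(q):
    """Return (points, lines) of PG(2, q) as normalized triples."""
    elems = []
    for v in product(range(q), repeat=3):
        if v == (0, 0, 0):
            continue
        first = next(x for x in v if x != 0)
        inv = pow(first, q - 2, q)  # inverse mod prime q (Fermat)
        elems.append(tuple((x * inv) % q for x in v))
    reps = sorted(set(elems))
    return reps, reps

def incident(p, l, q):
    """Point p lies on line l iff p . l == 0 (mod q)."""
    return sum(a * b for a, b in zip(p, l)) % q == 0

q = 2
points, lines = pg2(q)
print(len(points))  # 7 = q*q + q + 1 (the Fano plane)
# every pair of distinct lines meets in exactly one point:
meets = [sum(incident(p, l1, q) and incident(p, l2, q) for p in points)
         for l1 in lines for l2 in lines if l1 != l2]
print(set(meets))  # {1}
```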

I will also present simulation results on large-scale problems arising in linear programming, solution of partial differential equations, signal processing, control systems and simulation of non-linear electronic circuits.


Dr. Narendra Karmarkar received his Ph.D. in Computer Science from the University of California, Berkeley in 1983. Since 1983 he has been a Member of Technical Staff, Mathematical Sciences Research Center, AT&T Bell Laboratories, Murray Hill, New Jersey.

He received the Lanchester Prize of the Operations Research Society of America for the best published contribution to operations research in 1984. AT&T named him a Fellow of Bell Laboratories in 1986. He received the Fulkerson Prize in discrete mathematics, given jointly by the American Mathematical Society and the Mathematical Programming Society, in 1988.

He has received several patents for original inventions in the field of linear programming and its implementation.

27 May 1993

Rafail Ostrovsky (ICSI)

One-Way Functions are Essential for Non-Trivial Zero-Knowledge

It was known that if one-way functions exist, then there are zero-knowledge proofs for every language in PSPACE. We prove that unless very weak one-way functions exist, Zero-Knowledge proofs can be given only for languages in BPP. For average-case definitions of BPP we prove an analogous result under the assumption that uniform one-way functions do not exist.

Thus, very loosely speaking, zero-knowledge is either useless (exists only for ``easy'' languages), or universal (exists for every provable language).

The talk will be self-contained.

Joint work with Avi Wigderson. The paper is to appear in the Israel Symposium on Theory of Computing and Systems (ISTCS93) this summer, where it won the Henry Taub Award.

1 June 1993

Farid Alizadeh (ICSI)

Perfect graphs, Lovasz numbers and the positive semidefinite cone: A study based on duality theory and interior point algorithms

Duality theory in linear programming is a fundamental concept with numerous applications in combinatorial problems. For instance many theorems such as the max-flow min-cut theorem, various theorems of Koenig, Dilworth's theorem, and the Lovasz perfect graph theorem are all in some sense special cases of linear programming duality. However, there is an almost identical duality theory for more general convex optimization problems which is surprisingly less well-known. I will review this duality theory in the special case of semidefinite programming (SDP for short): that is, minimization of a linear function of a symmetric matrix subject to linear equality constraints plus an additional constraint that the symmetric matrix be positive semidefinite.

I will then review applications of SDP to combinatorial optimization. The most striking application arises in the Lovasz number of a graph, an invariant which is simultaneously an upper bound on the maximum clique size and a lower bound on the minimum coloring of a graph. Furthermore, this invariant is polynomial-time computable by virtue of the ellipsoid method. However, I will show that interior point methods also result in polytime algorithms for SDP in general, and the Lovasz number in particular.
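One standard SDP formulation of the Lovasz number (stated here for reference; the talk may use a different but equivalent form) is

```latex
\vartheta(G) \;=\; \max \sum_{i,j} X_{ij}
\quad\text{s.t.}\quad \operatorname{tr}(X) = 1,\;\;
X_{ij} = 0 \;\;\forall\, \{i,j\} \in E,\;\;
X \succeq 0 ,
```

which is a linear objective over the positive semidefinite cone with linear equality constraints, exactly the SDP shape described above.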

I will sketch a particular primal-dual method for solving SDP and argue that any linear programming interior point method can be mechanically transformed into a similar algorithm for SDP. As a result we can get a sublinear-time parallel algorithm for computing maximum cliques in perfect graphs (that is, graphs for which the clique number equals the chromatic number, and the same is true for all induced subgraphs). If there is time, I will also review a generalization of Lovasz numbers due to Narasimhan and Manber and derive some characterization of it.

3 June 1993

Jean-Claude Latombe (Stanford)

Robot Algorithms

Robotics is often viewed as the combined application of component disciplines, such as kinematics, dynamics, control, and programming, to create programmable mechanical systems capable of performing a great variety of tasks in the physical world. In this talk, I take the stand that, like Computer Science, Robotics is fundamentally about algorithms.

Like computer algorithms, robot algorithms are abstractions that can be designed and analyzed independent of any particular technology. Moreover, algorithms based on very different principles can be used to perform the same task. However, robot algorithms also differ in significant ways from computer algorithms. While the latter apply to data that can be observed and acted upon without uncertainty (leaving aside a few issues such as finite-precision arithmetic), robot algorithms apply to physical objects, which they attempt to control despite, or thanks to, the laws of nature. The real world is also so complex that, for most tasks, the set of states that may have to be considered by a robot algorithm is too large to be explicitly anticipated. Hence, most robot algorithms must incorporate a planner whose role is to dynamically change the control part of the algorithm. But many planning problems are known to be computationally hard. Hence, robot algorithms blend fundamental control issues (reachability and recognizability of states) and computational issues (computability, complexity) in a unique fashion.

Investigating robot algorithms casts new light on Robotics. As our understanding of these algorithms matures, robots will more and more appear as machines executing robot algorithms, just as computers are machines executing computation algorithms, independent of any ephemeral technology.

15 July 1993

Bernard Chazelle (Princeton)

A Simple Proof Technique for Geometric Discrepancy

I will discuss a new method for proving lower bounds on the discrepancy of various geometric objects. The proof technique can be used to rederive classical results in a much simpler fashion and sharpen existing bounds. It also provides an intuitive probabilistic interpretation of the "large-discrepancy" phenomenon.

22 July 1993

Herbert Edelsbrunner (University of Illinois-UC)

Modeling with shapes and simplicial complexes

Given a finite point set in three-dimensional space, a method is described that extracts shapes of various degrees of detail. Each shape is a geometric object that represents the intuitive notion of shape of the point set. Technically, each shape is the underlying space of a subcomplex of the Delaunay simplicial complex of the points. A real parameter, alpha, controls the degree of detail expressed by the shape.

Applications to various problems and research areas motivate extensions of the ideas of shape and subcomplexes in a number of directions. For modeling molecules, it is important to understand the deeper connections between shapes and the popular space filling diagrams. For surface reconstruction methods it is necessary to vary the degree of detail at different places in space.

(Talk includes a demo on an SGI workstation).

29 July 1993

Eric Torng (Stanford)

A Better Algorithm For an Ancient Scheduling Problem

One of the oldest and simplest variants of scheduling $n$ jobs on $m$ machines is the on-line scheduling problem studied by Graham in 1966: the jobs arrive on-line and must be scheduled non-preemptively on identical machines so as to minimize the makespan. Graham proved that the List Processing Algorithm (List for short), which assigns each job to the currently least loaded machine, has competitive ratio $(2 - 1/m)$. No improvements to List were discovered until 1991, when several authors found algorithms that have better competitive ratios than List for $m \ge 4$, but for large $m$ these algorithms are still asymptotically $2$-competitive. Bartal et al. solved the open question of whether there exists an algorithm with competitive ratio bounded away from $2$ when they developed an algorithm for which they could prove a competitive ratio of at most $(2 - 1/70) \approx (2 - 0.014)$ for large $m$, but for $m < 70$, their algorithm is not provably better than List.
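List is simple enough to state in a few lines, and the classical worst case for it is easy to reproduce (this sketch is my illustration of the known $(2 - 1/m)$ bound, not the new algorithm of the talk):

```python
# Graham's List algorithm (1966): assign each arriving job to the
# currently least-loaded machine. On m machines it is exactly
# (2 - 1/m)-competitive; m*(m-1) unit jobs followed by one job of
# size m force that ratio.

def list_schedule(m, jobs):
    loads = [0] * m
    for p in jobs:
        i = min(range(m), key=lambda k: loads[k])
        loads[i] += p
    return max(loads)  # makespan

m = 3
jobs = [1] * (m * (m - 1)) + [m]  # six unit jobs, then a job of size 3
makespan = list_schedule(m, jobs)
opt = m                           # optimal: big job alone on one machine
print(makespan, opt)              # 5 3
print(makespan / opt)             # 5/3, i.e. 2 - 1/m
```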

We present a more natural algorithm that outperforms List whenever $m\ge6$ and has a competitive ratio of at most $(2 - .055) \approx 1.945$ for all $m$, which is significantly closer to the best known lower bound of $1.837$ for the problem. Our proof of this is computer-aided because we need to solve many large linear programs. We then show that the analysis of our algorithm is almost tight by presenting a lower bound of $(2 - .0622)$ on its competitive ratio for large $m$.

This is joint work with David Karger and Steven Phillips of Stanford University.

5 August 1993

Dan Halperin (Stanford)

Moving a Polygon in a Polygonal Environment

Given a simple polygon B free to translate and rotate in the plane among polygonal obstacles, we wish to plan a collision-free motion for B between given initial and final free placements, if such a motion exists. We present an algorithm to plan such a motion whose running time is near-quadratic in the complexity of the obstacles, and thus constitutes an improvement of almost an order of magnitude over the best previously known solution for this problem.

Our solution is based on a bound that we obtain on the maximum combinatorial complexity of a single connected component in the three-dimensional configuration space of B, and on the ability to efficiently compute such a component. More specifically, assuming the polygon B has a constant number of sides and the obstacles are bounded by $n$ edges, we show that the complexity of a single connected component of the configuration space is $O(n^{2+\epsilon})$ for any $\epsilon>0$, where the constant of proportionality depends on $\epsilon$. We also present an algorithm for constructing a single component in $O(n^{2+\epsilon})$ time, for any $\epsilon>0$.

In the talk I'll also review related work in algorithmic motion planning in robotics as well as in the area of arrangements of surfaces, with an emphasis on recent advances in the study of the `lower envelope' and `single cell' problems.

This is joint work with Micha Sharir.

26 August 1993

Naveen Garg (Indian Institute of Technology, Delhi)

Approximate max-flow min-(multi)cut theorems for multicommodity flow

Much of flow theory, and the theory of cuts in graphs, is built around a single theorem - the celebrated max-flow min-cut theorem. The importance of this theorem has led researchers to seek its generalization to the case of multicommodity flow. In this setting, each commodity has its own source and sink, and the object is to maximize the sum of the flows subject to capacity and flow conservation requirements. The notion of a multicut generalizes that of a cut, and is defined as a set of edges whose removal disconnects each source from its corresponding sink. We prove the following approximate max-flow min-multicut theorem:

min multicut / O(log k) <= max flow <= min multicut

where k is the number of commodities.

A tighter approximate max-flow min-multicut theorem can be obtained for multicommodity flow on trees. This setting also captures a surprisingly rich collection of problems.

22 September 1993

Jeff Vitter (Duke)

Load Balancing Paradigms for Optimal Use of Parallel Disks and Parallel Memory Hierarchies

We present several balancing paradigms pertinent to optimizing I/O performance with disk and processor parallelism. We use sorting as our canonical application and subroutine to illustrate the paradigms. The use of parallel disks can help overcome the I/O bottleneck in sorting if the records in each read or write are evenly balanced among the disks. There are three known record-balancing paradigms that lead to optimal I/O algorithms: using randomness to assign blocks to disks, using the disks predominantly independently, and deterministically balancing the blocks by matching. In this talk, we describe these techniques and compare their relative advantages. We show how randomized and deterministic balancing can be extended to provide algorithms that are optimal both in terms of the number of I/Os and the internal processing time for parallel-processing machines with scalable I/O subsystems and with parallel memory hierarchies. We also survey some I/O results in other areas, such as on-line and batch range searching and several problems in computational geometry.
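The balance requirement is easy to quantify (my illustration, not an example from the talk): fetching one block from each of several logical positions completes in as many parallel I/O steps as the most heavily hit disk, so a skewed block-to-disk assignment serializes what striping does in parallel.

```python
# Parallel-disk read cost sketch: a batch of block fetches takes as
# many parallel I/O steps as the maximum number of requested blocks
# resident on any single disk.

def parallel_ios(assignment, num_disks):
    """assignment: disk index of each requested block. Returns I/O steps."""
    per_disk = [0] * num_disks
    for d in assignment:
        per_disk[d] += 1
    return max(per_disk)

D, blocks = 4, 8
round_robin = [i % D for i in range(blocks)]  # perfectly striped
skewed = [0] * blocks                         # everything on disk 0
print(parallel_ios(round_robin, D))           # 2: balanced across disks
print(parallel_ios(skewed, D))                # 8: fully serialized
```

The paradigms in the talk are different strategies for keeping every batch of reads and writes close to the balanced case.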

(The sorting results are joint with Mark Nodine and with Liddy Shriver. The online range searching results are joint with Paris Kanellakis, Sridhar Ramaswamy, and Darren Vengroff. The geometry results are joint with Mike Goodrich, J.-J. Tsay, and Darren Vengroff.)