AFLB is the Algorithms for Lunch Bunch. It is run by Steven Phillips (Oct 91-Jan 92) and Daphne Koller (Jan 92 - Aug 92). Meetings are usually on Thursdays, at noon or at 4:15 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.


  1. 10 October 1991: Don Knuth (Stanford). Efficient Representations of Perm Groups.
  2. 16 October 1991: Igor Rivin (NEC). Geometry of Polyhedra.
  3. 14 November 1991: Tomasz Radzik. Parametric search with Newton's Method, and an application to the problem of minimizing maximum arc cost in a transshipment network.
  4. 21 November 1991: Madhu Sudan (UC-Berkeley). Reconstructing Algebraic Functions from Noisy Data.
  5. 5 December 1991: Orli Waarts (Stanford). Simple and Efficient Bounded Concurrent Timestamping, or Bounded Concurrent Timestamp Systems are Comprehensible!
  6. 16 January 1992: Yossi Azar (DEC SRC). The Competitiveness of On-Line Assignments.
  7. 23 January 1992: David Patterson (UC-Berkeley). Massively Parallel Computer Architecture: Observations and Need for a New Theoretical Model.
  8. 24 January 1992: Micha Sharir (Tel Aviv University). A Subexponential Randomized Incremental Algorithm for Linear Programming.
  9. 6 February 1992: Milena Mihail (Bellcore). On the Number of Eulerian Orientations of a Graph.
  10. 13 February 1992: Shmuel Safra. Approximating Clique is NP-hard.
  11. 20 February 1992: Moni Naor (IBM Almaden Research Center). Optimal File Sharing in Distributed Networks.
  12. 27 February 1992: Robert Tarjan (Princeton). Open Problems in Data Structures.
  13. 5 March 1992: Richard M. Karp (UC Berkeley and ICSI). Physical Mapping of Chromosomes: A Combinatorial Problem in Molecular Biology.
  14. 12 March 1992: Tomas Feder (IBM Almaden Research Center). Balanced Matroids.
  15. 2 April 1992: Ronald Fagin (IBM Almaden Research Center, San Jose, California). Finite-Model Theory - a Personal Perspective.
  16. 9 April 1992: Eli Upfal (IBM Almaden and The Weizmann Institute). A Theory of Wormhole Routing in Parallel Computers.
  17. 16 April 1992: Steven Phillips (Stanford University). Biased Random Walks.
  18. 23 April 1992: David R. Karger (Stanford University). Fast Connected Components Algorithms for the EREW PRAM.
  19. 27 April 1992: Christos H. Papadimitriou. Inefficient Existence Proofs and Complexity.
  20. 30 April 1992: Satish B. Rao (NEC Research Institute). Faster Algorithms for Finding Small Edge Cuts in Planar Graphs.
  21. 1 May 1992: Ernst W. Mayr (J.W. Goethe-University, Frankfurt, Germany). "Managing Divide-And-Conquer Efficiently on the Hypercube".
  22. 8 May 1992: Edith Cohen (AT&T Bell Laboratories). Approximate max flow on small depth acyclic networks.
  23. 14 May 1992: Mike Luby (UC Berkeley). New Definitions and Simplified Proofs in Cryptography.
  24. 21 May 1992: Joel Spencer (NYU). Flip a Coin! From Erdos to Algorithms.
  25. 28 May 1992: Andrew Goldberg (Stanford). SPUR: An Implementation of a Scaling Push-Relabel Algorithm for the Minimum-Cost Flow Problem.
  26. 4 June 1992: Robert Cypher (IBM Almaden Research Center).
  27. 25 June 1992: Sivan Toledo (MIT). Maximizing Non-Linear Concave Functions in Fixed Dimension.
  28. 30 July 1992: Noam Nisan. Undirected Connectivity in O(log^{1.5} n) Space.
  29. 13 August 1992: Adi Rosen (Tel-Aviv University). The Distributed $k$-Server Problem--- A Competitive Distributed Translator for $k$-Server Algorithms.

10 October 1991

Don Knuth (Stanford)

Efficient Representations of Perm Groups

We present an elementary version of Sims's algorithm for computing strong generators of a given permutation group, together with a proof of correctness and some notes about appropriate low-level data structures. Upper and lower bounds on the running time are also obtained.
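
As a concrete illustration (our own Python sketch, not Knuth's presentation), the basic building block of Sims's algorithm is the orbit/transversal computation: given generators acting on {0,...,n-1}, find the orbit of a base point together with a group element witnessing how each orbit member is reached.

```python
def compose(p, q):
    """Return the permutation p∘q, i.e. (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def orbit_transversal(gens, base, n):
    """Orbit of `base` under the group generated by `gens`, with a
    transversal mapping each orbit point to a permutation sending base there."""
    identity = tuple(range(n))
    transversal = {base: identity}      # orbit point -> witness permutation
    frontier = [base]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = g[x]
            if y not in transversal:
                transversal[y] = compose(g, transversal[x])
                frontier.append(y)
    return transversal

# Toy example: the cyclic group generated by the 4-cycle (0 1 2 3).
cycle = (1, 2, 3, 0)
t = orbit_transversal([cycle], 0, 4)
print(sorted(t))   # the orbit of 0 is all four points
```

In Sims's algorithm this routine is applied at every level of a stabilizer chain; the sketch above shows only one level.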

16 October 1991

Igor Rivin (NEC)

Geometry of Polyhedra

In 1832 Jakob Steiner asked for a combinatorial characterization of convex polyhedra all of whose vertices lie on the unit sphere. This problem turns out to have quite wide connections with other (seemingly unrelated) problems in geometry.

In the present talk I give a simple answer to Steiner's question, and give a sketch of some of the techniques, insights and consequences.

14 November 1991

Tomasz Radzik

Parametric search with Newton's Method, and an application to the problem of minimizing maximum arc cost in a transshipment network

We discuss two paradigms for solving a fractional optimization problem: one known as Megiddo's parametric search and the other one using Newton's method. Megiddo's method is efficient if the underlying problem has an efficient parallel algorithm. This is not always the case. We show that Newton's method gives faster algorithms for the maximum mean cut problem and for the problem of minimizing the maximum arc cost in a transshipment network. Here, the underlying non-fractional optimization problem is the maximum flow problem, which does not have an efficient parallel algorithm.
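
The Newton iteration for a fractional objective can be sketched abstractly (a toy instance with a brute-force inner problem, not the transshipment setting of the talk): to maximize a(x)/b(x), repeatedly solve the non-fractional problem max_x [a(x) - lambda*b(x)] and set lambda to the ratio at the maximizer.

```python
def newton_fractional_max(items, lam=0.0):
    """items: list of (a, b) pairs with b > 0; returns max a/b.
    The inner parametric problem is solved by brute force here; in the
    talk's setting it would be a maximum flow computation."""
    while True:
        a, b = max(items, key=lambda ab: ab[0] - lam * ab[1])
        if a - lam * b <= 1e-12:     # lambda is already optimal
            return lam
        lam = a / b                  # Newton step

ratios = [(3, 2), (5, 4), (7, 3), (1, 1)]
print(newton_fractional_max(ratios))   # 7/3
```

Each iteration strictly increases lambda, and the analysis in the talk bounds the number of such iterations for the flow problems considered.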

21 November 1991

Madhu Sudan (UC-Berkeley)

Reconstructing Algebraic Functions from Noisy Data

We consider the task of learning algebraic functions, in particular polynomials, given by erroneous oracles. We look at situations where the oracle errs on an extremely large fraction of the domain, but the errors are not arbitrary; rather, they are somehow {\em structured}.

We define a simple algebraic notion of ``structure'' for errors and show how to reconstruct algebraic functions in situations where the oracle errs (in a structured manner) on all but a small fraction of the inputs.

The definition of structured error is quite restrictive, but we justify the notion by presenting applications to a widely varying set of areas: curve fitting, bivariate polynomial factoring, and learning and correcting programs.

This is joint work with Sigal Ar, Richard Lipton and Ronitt Rubinfeld.

5 December 1991

Orli Waarts (Stanford)

Simple and Efficient Bounded Concurrent Timestamping, or Bounded Concurrent Timestamp Systems are Comprehensible!

In a bounded concurrent timestamping system we have n asynchronous concurrent processors communicating via shared memory, repeatedly assigning themselves labels such that the order of the labels reflects the real-time order in which they were assigned. The system should be wait free (that is, there is an a priori upper bound on the number of steps a processor must take when running the algorithm, regardless of the behavior of the other processors in the system) and have labels of bounded size.

Concurrent timestamping is an extremely powerful tool for concurrency control.

The first to isolate the notion of bounded timestamping were Israeli and Li, who also developed a theory of the weaker problem: sequential timestamping. (That is, timestamping in a world where no two operations are ever concurrent.) Later, Dolev and Shavit defined and constructed the first bounded concurrent timestamping system. Their solution, however, is very complex.

The contributions of this paper are twofold. First, our solution is based on an entirely new approach, very different from that of Dolev and Shavit as well as that of Israeli and Li, yielding a much simpler construction and proof of correctness. Second, our solution is more efficient than that of Dolev and Shavit; in particular, its time complexity is linear, in contrast to their more-than-quadratic bound. Finally, to implement our approach using bounded registers, we introduce a new communication primitive which we believe will have applications in other contexts in which boundedness is desired.

This is joint work with Cynthia Dwork of IBM Almaden.

16 January 1992

Yossi Azar (DEC SRC)

The Competitiveness of On-Line Assignments

Consider the on-line problem where a number of servers are ready to provide service to a set of customers. Each customer's job can be handled by any of a {\it subset} of the servers. Customers arrive one-by-one and the problem is to assign each customer to an appropriate server in a manner that will balance the load on the servers. This problem can be modeled in a natural way by a bipartite graph where the vertices of one side (customers) appear one at a time and the vertices of the other side (servers) are known in advance. We derive tight bounds on the competitive ratio in both deterministic and randomized cases. Let $n$ denote the number of servers. In the deterministic case we provide an on-line algorithm that achieves a competitive ratio of $k=\log _2 n $ (up to an additive 1) and prove that this is the best competitive ratio that can be achieved by any deterministic on-line algorithm. In a similar way we prove that the competitive ratio for the randomized case is $k'=\ln(n)$ (up to an additive 1). We conclude that for this problem, randomized algorithms differ from deterministic ones by precisely a constant factor.
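
The natural deterministic strategy in this setting is greedy: send each arriving customer to the least-loaded server that is allowed to serve it. A minimal sketch on a made-up instance (the instance and the lowest-index tie-breaking rule are our own, not from the talk):

```python
def greedy_assign(n_servers, requests):
    """requests: list of (allowed_servers, load) arriving on-line.
    Each request goes to the least-loaded permitted server
    (ties broken toward the lowest server index)."""
    load = [0] * n_servers
    assignment = []
    for allowed, w in requests:
        s = min(sorted(allowed), key=lambda i: load[i])
        load[s] += w
        assignment.append(s)
    return load, assignment

requests = [({0, 1}, 1), ({0, 1}, 1), ({1, 2}, 2), ({2}, 1)]
load, assignment = greedy_assign(3, requests)
print(load)   # [1, 1, 3]
```

The talk's result says no deterministic on-line rule, greedy included, can beat a competitive ratio of about log2 n against the off-line optimum.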

This is joint work with J. Naor and R. Rom.

23 January 1992

David Patterson (UC-Berkeley)

Massively Parallel Computer Architecture: Observations and Need for a New Theoretical Model

Just a few years ago I doubted that massively parallel processors (MPP) and software would ever converge on a common foundation, which is absolutely essential if MPP is to become popular. Today I can see that convergence, with a machine operating 1000 times faster than the fastest Cray computer being feasible in just a few years. Now I am concerned whether computer science in general (and computer science theory in particular) will take advantage of this opportunity to contribute to and accelerate the success of massive parallelism.

This talk will first relay a few of my (possibly controversial) observations:

1: We really can get to 1 TeraFLOPS (a million MFLOPS)! And soon!

2: Early guesses at MPP issues were wrong.

3: Both MPP programming models and MPP hardware models are converging.

4: Current theoretical models may be too inaccurate to expect them to lead to important contributions in MPP.

5: Computer science has an obligation as well as an opportunity in MPP.

I will then mention the need for a better theoretical model for MPP algorithms. Such a model must emphasize the importance of algorithms that localize memory accesses. (Perhaps we can derive an appropriate model via audience participation!)

24 January 1992

Micha Sharir (Tel Aviv University)

A Subexponential Randomized Incremental Algorithm for Linear Programming

We present a very simple randomized and incremental algorithm for linear programming. The expected running time of the algorithm is shown to be at most $\exp (c\sqrt{d\ln n})$, although in practice the algorithm seems to be much faster.

Joint work with Emo Welzl and Jirka Matousek.

Host: Leonidas Guibas

6 February 1992

Milena Mihail (Bellcore)

On the Number of Eulerian Orientations of a Graph

An Eulerian orientation of an undirected Eulerian graph is an assignment of directions to the edges of the graph such that for every vertex the in-degree is equal to the out-degree. Eulerian orientations are natural flow-like structures, and, for example, Welsh has pointed out that computing their number (i) is equivalent to evaluating the (0,2) point of the Tutte plane, and (ii) arises in the evaluation of ice-type partition functions in statistical physics.

Apparently there is no known efficient method to compute the number of Eulerian orientations; we justify this lack by showing that the problem is #P-complete. On the positive side, we give an efficient randomized algorithm to approximate the number of Eulerian orientations up to arbitrarily small relative error.
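
To make the object concrete, here is a brute-force count on a toy graph (our own illustration; exact counting at scale is ruled out by the #P-completeness result, which is exactly why the randomized approximation algorithm matters).

```python
from itertools import product

def count_eulerian_orientations(n, edges):
    """Count orientations of the given undirected edges in which every
    vertex has in-degree equal to out-degree. Brute force: try all 2^m
    orientations, so only feasible for tiny graphs."""
    count = 0
    for dirs in product((0, 1), repeat=len(edges)):
        out = [0] * n
        inn = [0] * n
        for (u, v), d in zip(edges, dirs):
            a, b = (u, v) if d == 0 else (v, u)
            out[a] += 1
            inn[b] += 1
        if out == inn:
            count += 1
    return count

triangle = [(0, 1), (1, 2), (2, 0)]
print(count_eulerian_orientations(3, triangle))   # the two cyclic orientations
```

A graph with any odd-degree vertex has no Eulerian orientation at all, which the same brute force confirms on a path.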

Technically, the new ingredient in our work is a decomposition method for Eulerian graphs; we shall discuss the potential applicability of this method to further problems.

This is joint work with Peter Winkler.

13 February 1992

Shmuel Safra

Approximating Clique is NP-hard

We give a new characterization of NP: the class NP contains exactly those languages $L$ for which membership proofs can be verified probabilistically using a logarithmic number of random bits and a logarithmic number of queries to the proof. We conclude that approximating Clique (or Independent Set) is NP-hard.

Joint work with Sanjeev Arora (UC Berkeley)

20 February 1992

Moni Naor (IBM Almaden Research Center)

Optimal File Sharing in Distributed Networks

Given a distributed network of processors represented by an undirected graph $G = (V,E)$, and a file size $k$, we consider the following problem: An arbitrary file $W$ of $k$ bits is to be distributed among all nodes of the network $G$. We are to assign memory devices to the nodes of $G$ such that, by accessing only the memory of its adjacent neighbors, each node can reconstruct the contents of $W$. The objective is to minimize the total size of memory in the network.

We derive linear and integer programming lower bounds on the total size of memory required for any network $G$ and file size $k$. We present a scheme that is within $1+o(1)$ of these bounds when $k$ is larger than $\log \Delta$, where $\Delta$ is the largest degree in the network. The memory allocation scheme is constructive and the reconstruction method is efficient.

Finally, we show that the requirement that $k$ be much larger than $\log \Delta$ is necessary in order to have total memory size close to the integer programming lower bound.

Joint work with Ronny Roth.

27 February 1992

Robert Tarjan (Princeton)

Open Problems in Data Structures

(no abstract)

5 March 1992

Theory Colloquium.

Richard M. Karp (UC Berkeley and ICSI)

Physical Mapping of Chromosomes: A Combinatorial Problem in Molecular Biology

A fundamental tool for exploring the structure of a long DNA sequence is to construct a ``library'' consisting of many cloned fragments of the sequence. Each fragment can be replicated indefinitely and then ``fingerprinted'' to obtain partial information about its structure. Two common types of fingerprinting are:

(1) restriction fingerprinting, in which an enzyme called a restriction nuclease cleaves the fragment wherever a particular short sequence of nucleotides occurs, and the lengths of the resulting pieces are measured;

(2) hybridization fingerprinting, in which each fragment is tested to determine which of certain short nucleotide sequences called probes it contains.

An important combinatorial problem is to determine, from such fingerprint information, the most probable arrangement of the cloned fragments along the overall sequence. Such a physical map can then be used for tasks of biological or medical significance, such as determining the locations of particular genes within a chromosome.

The most common approach to physical mapping is to compute, for each pair of fragments, a weight (either positive or negative) related to the probability that the two fragments overlap, and then construct an interval graph of maximum total weight. We argue that this approach makes insufficient use of fingerprint information. Instead, we cast the problem as a variant of the traveling-salesman problem in which the cost of travel between two cities depends on a small neighborhood around the cities within the overall tour. We then discuss some of the issues of algorithm design that arise in constructing efficient local optimization methods for this generalized traveling-salesman problem.

No prior knowledge of molecular biology will be needed to understand this talk.

12 March 1992

Tomas Feder (IBM Almaden Research Center)

Balanced Matroids

We introduce the notion of ``balance'', and say that a matroid is balanced if the matroid and all its minors satisfy the property that, for a randomly chosen basis, the presence of an element can only make any other element less likely.

We establish strong expansion properties for the bases-exchange graph of balanced matroids; consequently, the set of bases of a balanced matroid can be sampled and approximately counted using rapidly mixing Markov chains. Specific classes for which balance is known to hold include graphic (whose bases are spanning trees of a graph) and regular matroids.
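
The negative-correlation property behind ``balance'' can be checked by brute force on a small graphic matroid (our own toy example): enumerate the spanning trees and verify that for every pair of edges, Pr[both present] <= Pr[e] * Pr[f] under the uniform distribution on bases.

```python
from itertools import combinations

def spanning_trees(n, edges):
    """All spanning trees of the graph, as sets of edge indices.
    Brute force over (n-1)-subsets, with union-find for acyclicity."""
    trees = []
    for subset in combinations(range(len(edges)), n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for i in subset:
            ru, rv = find(edges[i][0]), find(edges[i][1])
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            trees.append(set(subset))
    return trees

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # triangle with a pendant edge
trees = spanning_trees(4, edges)

def pr(*idx):
    return sum(all(i in t for i in idx) for t in trees) / len(trees)

ok = all(pr(e, f) <= pr(e) * pr(f) + 1e-12
         for e, f in combinations(range(len(edges)), 2))
print(len(trees), ok)   # 3 True
```

Graphic matroids are among the classes for which the talk establishes this property in general; the check above is only a sanity test on one instance.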

Joint work with Milena Mihail, Bellcore.

2 April 1992

Theory Colloquium.

Ronald Fagin (IBM Almaden Research Center, San Jose, California)

Finite-Model Theory - a Personal Perspective

Finite-model theory is a study of the logical properties of finite mathematical structures. This talk gives an overview, including:

(1) Differences between the model theory of finite structures and infinite structures. Most of the classical theorems of logic fail for finite structures, which gives us a challenge to develop new concepts and tools, appropriate for finite structures.

(2) The relationship between finite-model theory and complexity theory. Surprisingly enough, it turns out that in some cases, we can characterize complexity classes (such as NP) in terms of logic, without using any notion of machine, computation, or time.

(3) Zero-one laws. There is a remarkable phenomenon, which says that certain properties (such as those expressible in first-order logic) are either almost surely true or almost surely false.

(4) Descriptive complexity. Here we consider how complex a formula must be to express a given property.

In recent years, there has been a re-awakening of interest in finite-model theory. One goal of this talk is to help "fan the flames" of interest, by introducing the audience to this fascinating area.

9 April 1992

Eli Upfal

A Theory of Wormhole Routing in Parallel Computers

IBM Almaden and The Weizmann Institute

Virtually all theoretical work on message routing in parallel computers has dwelt on {\em packet routing:} messages are conveyed as packets, an entire packet can reside at a node of the network, and a packet is sent from the queue of one node to the queue of another node until it reaches its destination. The current trend in multicomputer architecture, however, is to use {\em wormhole routing}. In wormhole routing a message is transmitted as a continuous stream of bits, physically occupying a sequence of nodes/edges in the network. Thus, a message resembles a worm burrowing through the network. In this paper we give theoretical analyses of simple wormhole routing algorithms, showing them to be nearly optimal for butterfly and mesh connected networks.

(Joint work with Prabhakar Raghavan, IBM Watson Research Center.)

16 April 1992

Steven Phillips (Stanford University)

Biased Random Walks

How much can an imperfect source of randomness affect an algorithm? We examine some simple questions of this type concerning the long-term behavior of a random walk on a finite graph. In our setup, at each step of the random walk a ``controller'' can, with a certain small probability, fix the next step, thus introducing a bias. We analyze the extent to which the bias can affect the limit behavior of the walk. The controller is assumed to associate a real, nonnegative, ``benefit'' with each state, and to strive to maximize the long-term expected benefit. We derive tight bounds on the maximum of this objective function over all controller's strategies, and present polynomial time algorithms for computing the optimal controller strategy.
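
A small numerical illustration (our own toy chain, not from the talk): a walk on a 3-cycle where, with probability eps, the controller forces the step toward state 0, the only state carrying benefit. Power iteration on the transition matrix shows the bias shifting the limit distribution toward the beneficial state.

```python
def stationary(P, iters=500):
    """Approximate the stationary distribution by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def biased_cycle(eps):
    """Walk on a 3-cycle: with prob. 1-eps, a uniform neighbor; with
    prob. eps, the controller fixes the step to head for state 0
    (from state 0 itself it picks state 1)."""
    target = {0: 1, 1: 0, 2: 0}
    P = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in ((i - 1) % 3, (i + 1) % 3):
            P[i][j] += (1 - eps) * 0.5
        P[i][target[i]] += eps
    return P

pi0 = stationary(biased_cycle(0.0))   # unbiased: uniform limit
pi3 = stationary(biased_cycle(0.3))   # biased toward state 0
print(round(pi0[0], 3), round(pi3[0], 3))   # 0.333 0.394
```

With benefit 1 at state 0 and 0 elsewhere, the long-term expected benefit is exactly the stationary mass at state 0, so the controller's gain here is the gap between the two printed values.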

This is joint work with Nati Linial, Anna Karlin, Andrei Broder, and Yossi Azar.

23 April 1992

David R. Karger (Stanford University)

Fast Connected Components Algorithms for the EREW PRAM

We present fast, almost optimal parallel algorithms for finding the connected components of an undirected graph. These algorithms run even on the exclusive-read, exclusive-write (EREW) PRAM. On a graph with $n$ vertices and $m$ edges, our randomized algorithm runs in $O(\log n)$ parallel time using $(m+n^{1+\epsilon})/\log n$ processors (for any fixed $\epsilon>0$). A variant uses $(m+n)/\log n$ processors and runs in $O(\log n \log \log n)$ time, thus coming within an $O(\log\log n)$ factor of the best possible parallel connected components algorithm. A deterministic version of the algorithm runs in $O(\log^{1.5}n)$ time using $m+n$ EREW processors.
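
For contrast with these parallel bounds, the standard sequential baseline (a routine union-find sketch, not the talk's algorithm) finds connected components in near-linear time:

```python
def connected_components(n, edges):
    """Label each vertex with a representative of its component,
    using union-find with path halving."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return [find(v) for v in range(n)]

labels = connected_components(6, [(0, 1), (1, 2), (3, 4)])
print(len(set(labels)))   # 3 components: {0,1,2}, {3,4}, {5}
```

The point of the talk is matching this (m+n)-ish total work while running in polylogarithmic parallel time on an EREW PRAM.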

If time permits, I will draw some parallels with the new result on finding connected components using $O(\log^{1.5}n)$ space.

Joint work with Noam Nisan and Michal Parnas from the Hebrew University of Jerusalem.

27 April 1992

Theory Colloquium.

Christos H. Papadimitriou

Inefficient Existence Proofs and Complexity

We point out that in several well-known instances in mathematics the existence of a mathematical object is established by a constructive argument based on an exponentially large graph. These include Brouwer's Fix-point Theorem, the Borsuk-Ulam Theorem, the Arrow-Debreu theory in mathematical economics, Chevalley's Theorem in number theory, the convergence of Hopfield neural nets, Smith's Theorem in graph theory, and many others. We show that this phenomenon can be captured by complexity classes, containing several important complete problems.

COMMENT: Of interest to Theoreticians and Mathematicians.

30 April 1992

Satish B. Rao (NEC Research Institute)

Faster Algorithms for Finding Small Edge Cuts in Planar Graphs

In this paper, we consider partitioning a planar graph by removing either nodes or edges. In particular, we consider a cut to be either a set of nodes or edges whose removal divides the graph into two pieces.

We define {\em the balance of a cut} as the ratio of the weight of the smaller side of the cut to the total weight in the graph. Thus, the best possible balance is 1/2. We define {\em the quotient cost} of a cut as the ratio of the cost of the cut to the size of the smaller side of the cut. Thus, a costly cut with a high balance may have lower quotient cost than a less costly cut with a lower balance. Finally, we define the {\em flux} of a graph as the minimum quotient cost of any cut in the graph.
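
These definitions are easy to make concrete (the graph, weights, and costs below are our own toy example): the balance of a cut is the weight of its smaller side over the total weight, and the quotient cost is the cut cost over the weight of the smaller side.

```python
def cut_stats(node_weight, edge_cost, S):
    """node_weight: {v: weight}, edge_cost: {(u, v): cost},
    S: the set of nodes on one side of the cut.
    Returns (balance, quotient cost)."""
    total = sum(node_weight.values())
    s_weight = sum(node_weight[v] for v in S)
    side = min(s_weight, total - s_weight)          # weight of smaller side
    cost = sum(c for (u, v), c in edge_cost.items()
               if (u in S) != (v in S))             # edges crossing the cut
    return side / total, cost / side

weights = {0: 1, 1: 1, 2: 1, 3: 1}
costs = {(0, 1): 1, (1, 2): 3, (2, 3): 1, (3, 0): 3}
balance, quotient = cut_stats(weights, costs, {0, 1})
print(balance, quotient)   # 0.5 3.0
```

The flux of the graph is then the minimum quotient cost over all cuts; the algorithms in the talk approximate this minimum rather than enumerating cuts.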

We are interested in finding small cuts with high balance: either finding the cut that minimizes the quotient cost of the cut, or finding the smallest cut of some specified balance. (For balance $1/3$, this problem has been called the minimum separator problem.)

We present a very simple algorithm for finding $1.5$ times optimal quotient node or edge cuts in planar graphs in time $O(n^2\min(W,C))$, where $W$ is the total weight of the graph, and $C$ is the total cost of the graph. In fact, these cuts are only nonoptimal when they have very good balance. Thus, the algorithms either find an optimal quotient cut or a nearly optimal quotient cut with very high balance.

We can greatly improve the running time if we settle for finding $3.5$ times optimal quotient node and edge cuts. We can accomplish this in $O(n^2(\log W + \log C))$ time. And, if we are willing to settle for $O(k)$ times optimal solutions, we can make the running time quite fast; $O(n^{1+1/k} \log n (\log W + \log C) \log C).$ If $k = \log n$, and $W$ and $C$ are polynomial in $n$, this becomes $O(n \log^3 n)$.

Finally, we give approximation algorithms for finding nearly optimal {\em balanced} node or edge separators in planar graphs, based on the quotient cut algorithms.

These algorithms improve upon previous algorithms by being much faster, much much simpler, and by applying to node separation as well as edge separation.

1 May 1992

Ernst W. Mayr (J.W. Goethe-University, Frankfurt, Germany)

"Managing Divide-And-Conquer Efficiently on the Hypercube"

It is often claimed that the recursive structure of the $n$-dimensional boolean hypercube directly supports algorithms based on a divide-and-conquer approach. To obtain fast and efficient parallel algorithms, however, the following problems have to be taken care of: processor allocation and the routing of subproblems and resulting data. Processor allocation becomes nontrivial since the size of the generated subproblems will vary in general and will also most likely not be powers of two, as are the sizes of the subcubes. However, even for arbitrary sizes of the subproblems, constant average ``utilization'' of the processors can be achieved while keeping the routing overhead low.

To illustrate our new techniques, we present an $O(\log n)$ time algorithm for a new class of partial permutation routing problems on the hypercube which can be defined via matching pairs of parentheses. We also give an $O(\log n)$ time algorithm for the algebraic tree evaluation problem.

This is joint work with R. Werchner, J.W. Goethe-University.

8 May 1992

Edith Cohen (AT&T Bell Laboratories)

Approximate max flow on small depth acyclic networks

We consider the maximum flow problem on directed acyclic networks with $m$ edges and {\em depth}\/ $r$ (the length of the longest $s$-$t$ path). We give deterministic sequential and parallel algorithms for finding blocking flows and $s$-$t$ flows of value at least $(1-\epsilon)$ times the maximum flow. When $r$ and $\epsilon^{-1}$ are small (e.g., $O(\polylog(m))$), these algorithms are in NC and use only $O(m)$ processors, a significant improvement over existing parallel algorithms. As one consequence, we obtain an NC algorithm with $O(m)$ processors that finds a bipartite matching of cardinality at least $(1-\epsilon)$ times the maximum (for $\epsilon^{-1} = O(\polylog(m))$).

14 May 1992

Mike Luby (UC Berkeley)

New Definitions and Simplified Proofs in Cryptography

Recently I developed some natural (at least to me) extensions of the definitions of one-way functions and pseudo-random generators, which I would like to propose in this talk. (One of the reasons for giving the talk is to provoke thought about whether these new definitions are reasonable.) Under the new definitions, some previously rather hard to prove results become much simpler, e.g., a security-preserving reduction from a weak one-way permutation (one that is hard to invert on at least a small fraction of the inputs) to a one-way permutation (one that is hard to invert on almost all inputs). This has led to some alternative ways of looking at the previous reductions, and intuitive insight has been gained.

21 May 1992

Theory Colloquium.

Joel Spencer (NYU)

Flip a Coin! From Erdos to Algorithms

The Probabilistic Method, as developed over the past half century by Paul Erdos, is a powerful proof technique in Discrete Mathematics. To prove the existence of a combinatorial object - e.g., a graph, a tournament, a coloring - one devises a ``thought experiment'' for which the probability of creating the desired object is positive. In recent decades this technique has found natural application in Theoretical Computer Science. Here it is desirable to replace Hilbertian existence arguments with polynomial time algorithms, and this can be done. Sometimes.

28 May 1992

Andrew Goldberg (Stanford)

SPUR: An Implementation of a Scaling Push-Relabel Algorithm for the Minimum-Cost Flow Problem

The scaling push-relabel method is an important theoretical development in the area of minimum-cost flow algorithms. We study practical implementations of this method, and are especially interested in heuristics that improve its real-life performance.

Our implementation works very well on many classes of problems. We also exhibit a class of problems on which it does not work so well, and suggest directions for improvement.

Some of the heuristics we suggest may apply to other algorithms for the maximum and minimum-cost flow problems.

4 June 1992

Robert Cypher (IBM Almaden Research Center)

In this talk we study the problem of deadlock-free packet routing in parallel and distributed architectures. In particular, we focus on asynchronous routing algorithms in which a packet is allowed to move from one queue to another based solely on the queues involved and on the packet's source and destination. Our goal is to characterize those properties which such an algorithm must have in order to be free of deadlock. We present three main results.

First, we show that the standard technique of ordering the queues so that every packet always has the possibility of moving to a higher ordered queue is not necessary for deadlock-freedom. Second, we show that every deadlock-free, adaptive packet routing algorithm can be restricted, by limiting the adaptivity available, to obtain an oblivious algorithm which is also deadlock-free. Third, we show that any packet routing algorithm for a cycle or torus network which is free of deadlock and which uses only minimal length paths must require at least three queues in some node. This matches the known upper bound of three queues per node for deadlock-free, minimal packet routing on cycle and torus networks.

This is joint work with Luis Gravano from IBM Almaden and IBM Argentina.

25 June 1992

Sivan Toledo (MIT)

Maximizing Non-Linear Concave Functions in Fixed Dimension

Consider a convex set P in d-dimensional space and a piecewise polynomial concave function F defined on P. Let A be an algorithm that given a point x in d-dimensional space, computes F(x) if x is in P, or returns a concave polynomial p such that p(x) < 0 but for any y in P, p(y) >= 0. We assume that d is fixed and that all comparisons in A depend on the sign of polynomial functions of the input point. We show that under these conditions, one can find the maximum of F over P in time which is polynomial in the number of arithmetic operations of A.

Cohen and Megiddo, and Norton, Plotkin and Tardos obtained a similar result for the case where F is piecewise linear and the separation and comparison polynomials are all linear. We develop several new techniques to cope with the general non-linear case. Using our method we give the first strongly polynomial algorithms for many non-linear parametric problems in fixed dimension, such as the parametric max flow problem, the parametric minimum s-t distance, the parametric spanning tree problem and other problems.

In addition we show that in one dimension, the same result holds even if we only know how to approximate the value of F. Specifically, if we can obtain an alpha-approximation for F(x) in time T, then we can alpha-approximate the value of max F in time O(T^2). We thus obtain the first polynomial approximation algorithms for many NP-hard problems such as the parametric Euclidean Traveling Salesman Problem.
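
The one-dimensional concave case lends itself to a simple sketch (our own illustration of the principle, not the paper's algorithm): concavity lets ternary search discard a constant fraction of the interval per comparison.

```python
def concave_max(f, lo, hi, tol=1e-9):
    """Maximize a concave function f on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1          # by concavity, the maximum lies in [m1, hi]
        else:
            hi = m2          # by concavity, the maximum lies in [lo, m2]
    return (lo + hi) / 2

# Toy concave function with its maximum at x = 2.
x_star = concave_max(lambda x: -(x - 2.0) ** 2 + 5.0, 0.0, 10.0)
print(round(x_star, 6))   # 2.0
```

Note this only needs evaluations (or approximate evaluations) of f, which is what makes the approximation result for the one-dimensional case plausible.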

30 July 1992

Noam Nisan

Undirected Connectivity in O(log^{1.5} n) Space

We present a deterministic algorithm for the connectivity problem on undirected graphs that runs in O(log^{1.5} n) space. Thus, the recursive doubling technique of Savitch, which requires O(log^2 n) space, is not optimal for this problem.

Joint work with Endre Szemeredi and Avi Wigderson.

13 August 1992

Adi Rosen (Tel-Aviv University)

The Distributed $k$-Server Problem--- A Competitive Distributed Translator for $k$-Server Algorithms

We consider the $k$-server problem in a distributed setting. Given a network of $n$ processors, and $k$ identical mobile servers, requests for service appear at the processors and a server must reach the request point. Besides modeling problems in computer networks where $k$ identical mobile resources are shared by the processors of the network, this models a realistic situation where the transfer of information is costly and there is no central control that governs the behavior of servers that move around to satisfy requests for service.

The problem is that of devising algorithms that minimize not only the travel of the servers but also the communication cost incurred for transmission of control messages. Thus, in addition to the traditional lack of knowledge of future requests that a global-control online algorithm has to deal with, a distributed $k$-server algorithm must also deal with issues of locality. As a result, the algorithm has only partial information on past requests, and even on the current configuration of all its servers.

The main contribution of this paper is a general translator to transform any deterministic global-control competitive $k$-server algorithm into a distributed competitive one.

In contrast to the global-control case where $k$-server algorithms with competitive ratio that depends on $k$ solely are known, we have a lower bound of $\Omega(\max\{k,\log n /\log\log n\})$ on the competitive ratio of any distributed $k$-server algorithm.

We also give a distributed version of the Harmonic randomized $k$-server algorithm, which currently has the best proved competitive ratio. Given that the original Harmonic is $c_H$-competitive, the competitive ratio of the distributed version can be stated as $O(c_H\cdot k \cdot \polylog(n))$ for the cases where the diameter of the network is bounded by a fixed polynomial in $n$.

Joint work with Yair Bartal.