AFLB is the Algorithms for Lunch Bunch. It is run by Steven Phillips. Meetings are Thursdays at 12:00 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

Contents:

  1. 27 September 1990: Amos Fiat (Tel-Aviv University and IBM Almaden). Recent Developments in Online Algorithms.
  2. 4 October 1990: Thane Plambeck (Stanford University). M & M's, Kayles, and the Conway-Sibert decomposition of misere octal games.
  3. 11 October 1990: Tomas Feder. Network Flow and 2-Satisfiability.
  4. 18 October 1990: Orli Waarts. Perfectly Secure Message Transmission.
  5. 25 October 1990: Anna Karlin (DEC SRC). On the Parallel Complexity of Evaluating Game-Trees.
  6. 1 November 1990: Heather Woll. OPTIMAL TIME RANDOMIZED CONSENSUS - MAKING RESILIENT ALGORITHMS FAST IN PRACTICE.
  7. 14 November 1990: Laszlo Babai (University of Chicago and Eotvos University, Budapest). Complexity in Finite Groups.
  8. 15 November 1990: Dan Gusfield (UC-Davis). Recent results in parametric combinatorial optimization.
  9. 29 November 1990: Andrei Broder (DEC SRC). Finding hidden Hamiltonian cycles.
  10. 6 December 1990: Michael Kearns (ICSI). Computational Learning Theory: An Introduction and Overview.
  11. 13 December 1990: Michael Kharitonov (Stanford). Asking questions often doesn't help to learn.
  12. 9 January 1991: Don Knuth (Stanford). There are at most 3^(n choose 2) simple arrangements of pseudolines.
  13. 10 January 1991: Yuval Peres. Iterating Von Neumann's procedure for extracting random bits.
  14. 17 January 1991: Richard Karp (UC-Berkeley and ICSI). Probabilistic Recurrence Relations.
  15. 23 January 1991: Muli Safra. Approximating Clique is NP-complete.
  16. 31 January 1991: Joel Wein (MIT). On-line Scheduling of Parallel Machines.
  17. 5 February 1991: Elwyn Berlekamp (UC-Berkeley). Mathematical Go Endgames.
  18. 7 February 1991: Vaughan Pratt. Event Spaces.
  19. 7 February 1991: Alexandre Razborov (Steklov Mathematical Institute, Moscow). Switching-and-Rectifier Networks for Majority Require Superlinear Size.
  20. 21 February 1991: Pete Gemmell (UC-Berkeley). Self-Testers/Correctors for Polynomials and approximate functions.
  21. 21 February 1991: Russell Impagliazzo (University of Toronto). Recycling Random Bits: Provably Good Pseudo-Random Generators for Amplifying Probabilities of Correctness and Sampling from Distributions.
  22. 28 February 1991: Ron Shamir (Tel-Aviv University and Rutgers University). Greedily Solvable Network Flow Problems: Characterizations and Algorithms.
  23. 7 March 1991: Leonidas Guibas. Random Sampling in Computational Geometry.
  24. 14 March 1991: Ravi Kannan. A simple coin changing problem and its not so simple solution.
  25. 4 April 1991: Mihalis Yannakakis. Testing Finite State Machines.
  26. 11 April 1991: David Karger. Finding the Hidden Path: Time Bounds For All-Pairs Shortest Paths.
  27. 16 April 1991: Alan Siegel (NYU). The anatomy of random hash functions.
  28. 18 April 1991: Gil Kalai (IBM Almaden). The diameter problem for convex polytopes and the simplex algorithm for linear programming.
  29. 25 April 1991: Sandy Irani (UC-Berkeley). Competitive Paging with Locality of Reference.
  30. 2 May 1991: Serge Plotkin (Stanford). Fast Approximation Algorithms for Multicommodity Flow Problems.
  31. 9 May 1991 (?): Baruch Awerbuch. The Maintenance of Common Data in a Distributed System.
  32. 16 May 1991 (?): Anne Condon. The Complexity of Space Bounded Interactive Proof Systems.
  33. 25 July 1991: David Harel (Weizmann Institute). How Hard is it to Reason About Propositional Programs?
  34. 1 August 1991: Madhu Sudan (UC-Berkeley). Testing Low-degree Polynomials.
  35. 8 August 1991: Oded Maler (INRIA). Tight Bounds on the Complexity of Cascaded Decomposition of Automata.
  36. 19 August 1991: Amos Fiat. An O(log n) competitive algorithm for the room problem.

27 September 1990

Margaret Jacks Hall 301. 12:00 p.m.

Amos Fiat (Tel-Aviv University and IBM Almaden)

Recent Developments in Online Algorithms

We give an overview of recent results concerning the k-server problem, traversing layered graphs, and other online/offline issues.


4 October 1990

Margaret Jacks Hall 301. 12:00 p.m.

Thane Plambeck (Stanford University)

M & M's, Kayles, and the Conway-Sibert decomposition of misere octal games

Arranging several heaps of M & M candies on a table, I challenge you to a game with the following rules. On your move, you first select one of the heaps. You then must choose between exercising either (or both) of two options:

1. Remove a single M & M from the selected heap.

2. Split the remaining M & M's of the selected heap into 2 nonempty heaps.

Now it is my turn, and I have the exact same set of options. We move alternately in this way until the final M & M is taken.

If the last player to take an M & M is declared the winner, then we have _normal_play_ and the Sprague-Grundy theory can be used to classify all first- and second-player winning positions in the M & M game. One arrives at a winning strategy based on the heap sizes taken modulo two and addition over GF(4).
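
As a concrete illustration of the normal-play analysis (not part of the original abstract), the Sprague-Grundy values of single heaps under exactly the moves described above can be computed by a short memoized recursion; the heap-size range printed at the end is arbitrary.

from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(n):
    """Sprague-Grundy value of a single heap of n M & M's (normal play)."""
    moves = set()
    if n >= 1:
        moves.add(grundy(n - 1))                 # option 1 alone: remove one M & M
    if n >= 2:
        for a in range(1, n // 2 + 1):           # option 2 alone: split n into a + (n - a)
            moves.add(grundy(a) ^ grundy(n - a))
    if n >= 3:
        for a in range(1, (n - 1) // 2 + 1):     # both options: remove one, split the rest
            moves.add(grundy(a) ^ grundy(n - 1 - a))
    g = 0                                        # mex of the reachable Grundy values
    while g in moves:
        g += 1
    return g

print([grundy(n) for n in range(1, 16)])
# A position (a collection of heaps) is a second-player win under normal play
# exactly when the XOR of its heap values is 0.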

If the last player to take an M & M is declared to be the _loser_, then we have _misere_play_ and a complete classification of all first-player winning positions in the M & M game was---until today's talk!---unknown.

The M & M game belongs to a general class of Nim-like games known as the octal games. Until very recently, the best weapon for the misere play analysis of such games was the difficult Tame, Restive, and Wild theory from Chapter 13 of Berlekamp, Conway, and Guy's ``Winning Ways.'' The Conway-Sibert decomposition has only recently emerged as an exciting new method of attack capable of yielding solutions to misere octal games without direct reference to complicated genus considerations.

In our talk, we shall briefly review the Sprague-Grundy and Tame, Restive, and Wild theories for such games and then shall move on to a discussion of the Conway-Sibert technique and the associated ideas of frisky and frigid positions. Finally, the solution of several other such games under misere rules will be presented.

Although no knowledge of combinatorial games will be assumed, near the end of our talk we will approach the limit of current knowledge about impartial play combinatorial games. An outrageous conjecture will be shared with the audience. If you are familiar with the Noah's Ark theorem, Half-Tameness, or the Firm and Fickle Units, please come (maybe you can explain them to me)!


11 October 1990

Margaret Jacks Hall 301. 12:00 p.m.

Tomas Feder

Network Flow and 2-Satisfiability

I will present two algorithms for network flow in the case of networks with infinite capacities and finite supplies and demands. The first algorithm is efficient when the value of the optimal flow is small, and is based on matching. The second algorithm exploits networks of small `width'. The motivation for these two problems comes from the `optimal' stable marriage problem, and the results give new algorithms for this problem in both the weighted and the unweighted case. The reduction from flow to marriage considers the problem of finding minimum weight ideals in a partial order, or more generally minimum weight solutions for a 2-SAT instance.


18 October 1990

Margaret Jacks Hall 301. 12:00 p.m.

Orli Waarts

Perfectly Secure Message Transmission

Recent advances in fiber optics make realizable the construction of networks in which communication paths are shared by an immense number of clients and where each client can use many communication paths. As more and more personal and business communication takes place over these systems, issues of correctness and privacy become increasingly important.

The secret message transmission problem is an abstraction of the above problem of secure communication in a general network. It considers a Receiver and a Sender connected by n communication lines, d of which may arbitrarily disrupt the information they carry, and l of which may listen to the information they carry. The goal is for the Sender to convey a message to the Receiver in such a way that the Receiver will always learn the message correctly (despite the presence of the disrupting lines) and no subset of size l of the lines will ever be able to learn anything about the message even when combining the information on them.
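
To make the secrecy requirement concrete, here is a toy sketch (not the protocol from the talk) for the degenerate case with no disrupting lines (d = 0): send one XOR share of the message on each of the n lines, so that any proper subset of the lines carries no information about the message. Handling listeners and disruptors simultaneously is the problem the talk addresses.

import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(message: bytes, n: int):
    """Split message into n XOR shares, one per communication line."""
    shares = [secrets.token_bytes(len(message)) for _ in range(n - 1)]
    last = message
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

msg = b"meet at noon"
lines = share(msg, 5)
assert reconstruct(lines) == msg   # the Receiver, seeing all n lines, recovers the message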

Given the maximal number of listening lines and the maximal number of disrupting lines, this paper derives lower bounds on the total number of lines between Sender and Receiver required for successful secure communication in the different cases, and then presents efficient algorithms that match these lower bounds. The secrecy achieved by these algorithms does not rely on any cryptographic assumptions. Moreover, the algorithms require only three rounds of information exchange. These are the first algorithms for secure communication in a general network to simultaneously achieve the three goals of perfect secrecy (no subset of l lines learns *anything* about the message), perfect resiliency (the Receiver *always* learns the message correctly), and a constant number of rounds of information exchange.

Underlying our results are new techniques for achieving secrecy and resiliency which may have wide applications elsewhere.

The talk is self contained.

This is joint work with Danny Dolev, Cynthia Dwork and Moti Yung, and will be presented at the 31st FOCS, Oct., 1990.


25 October 1990

Margaret Jacks Hall 301. 12:00 p.m.

Anna Karlin (DEC SRC)

On the Parallel Complexity of Evaluating Game-Trees

We consider efficient parallel algorithms for the evaluation of game-trees. We will describe the proof of an inherent limitation on the speedup achievable, and present an algorithm that achieves its best performance bounds on trees of the sort that are likely to arise in game-playing programs.

This is joint work with Andrei Broder, Prabhakar Raghavan and Eli Upfal.


1 November 1990

Margaret Jacks Hall 301. 12:00 p.m.

Heather Woll

OPTIMAL TIME RANDOMIZED CONSENSUS - MAKING RESILIENT ALGORITHMS FAST IN PRACTICE

We present an optimally fast and highly resilient shared-memory randomized consensus algorithm that runs in only $O(\log n)$ expected time if $\sqrt n$ or fewer failures occur, and whose $O(\frac{n^3}{n-f})$ expected time performance degrades gracefully as the number of failures $f$ grows beyond $\sqrt n$.

Joint work with Michael Saks and Nir Shavit.


14 November 1990

Margaret Jacks Hall 352. 3:00 p.m.

Laszlo Babai (University of Chicago and Eotvos University, Budapest)

Complexity in Finite Groups

We shall survey recent results on the complexity of fundamental computational tasks in finite groups in a variety of computational models.

The groups under consideration are represented as permutation groups, matrix groups over finite fields, or more generally, as ``black box groups'' (operations are performed by an oracle), always given by a list of generators.

The techniques to be illustrated on representative sample cases involve a combination of combinatorial results on finite groups, some classical elementary group theory, and a moderate use of easily stated consequences of the classification of finite simple groups.

Particularly efficient and completely elementary Monte Carlo methods, obtained over the past half year in collaboration with G. Cooperman, L. Finkelstein, E. Luks, and A. Seress, will be discussed.


15 November 1990

Margaret Jacks Hall 301. 12:00 p.m.

Dan Gusfield (UC-Davis)

Recent results in parametric combinatorial optimization

I will talk about two recent results in parametric optimization, the first involving network flows, and the second involving string matching and alignment.

Gallo, Grigoriadis and Tarjan \cite{GGT} recently examined the problem of computing online a sequence of $k$ maximum flows and minimum cuts in a network of $n$ nodes, where certain edge capacities change between each flow. They showed that for an important class of networks, the $k$ flows can be computed in $O(n^3+kn^2)$ total time, provided that the capacity changes are made ``in order''. We show how to reduce the total time to $O(n^3 + kn)$. We also show that the faster bound holds even when the capacity changes are not ``in order''. If time allows we illustrate the utility of these results by applying them to the {\it rectilinear layout problem}.

In DNA and Protein sequence alignment there is considerable disagreement about how to weight matches, mismatches and spaces. Parametric Sequence Alignment is the problem of computing the optimal valued alignment between two sequences as a {\it function} of variable weights for matches, mismatches, and spaces. For the case where all spaces are counted, we show that the functional dependence of the optimal alignments on the parameters is surprisingly simple, and can be completely determined in $O(n^{1.66}m)$ time, where $n$ and $m$ are the lengths of the two strings. For contrast, it takes $O(nm)$ time to compute just a single alignment for fixed weights.
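
For reference, a sketch of the $O(nm)$ fixed-weight computation mentioned in the last sentence above; the parametric problem treats the three weights as variables, whereas this only evaluates a single point of the parameter space, and the weight values are illustrative.

def alignment_value(s, t, match=1.0, mismatch=-1.0, space=-2.0):
    """Optimal global alignment value of s and t for fixed weights, in O(nm) time."""
    n, m = len(s), len(t)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]   # dp[i][j]: best value for s[:i] vs t[:j]
    for i in range(1, n + 1):
        dp[i][0] = i * space
    for j in range(1, m + 1):
        dp[0][j] = j * space
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,    # substitute / match
                           dp[i - 1][j] + space,      # s[i-1] opposite a space
                           dp[i][j - 1] + space)      # t[j-1] opposite a space
    return dp[n][m]

# alignment_value("ACACACTA", "AGCACACA") gives the optimal score for these weights.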

Different parts of this work are joint with Eva Tardos, Dalit Naor, and K. Balasubramanian.


29 November 1990

Margaret Jacks Hall 301. 12:00 p.m.

Andrei Broder (DEC SRC)

Finding hidden Hamiltonian cycles

Consider a random graph G composed of a Hamiltonian cycle on n labeled vertices and d*n random edges that ``hide'' the cycle. Is it possible to unravel the structure, that is, to efficiently find a Hamiltonian cycle in G?

We describe an O(d*n^3) steps algorithm for this purpose, and prove that it succeeds almost surely. Part one of the algorithm properly covers the ``trouble spots'' of G by a collection of disjoint paths. (This is the hard part to analyze.) Part two of the algorithm extends this cover to a full cycle by the rotation-extension technique which is already classical for such problems.

This is joint work with Alan Frieze (CMU) and Eli Shamir (Hebrew University).


6 December 1990

Margaret Jacks Hall 301. 12:00 p.m.

Michael Kearns (ICSI)

Computational Learning Theory: An Introduction and Overview

In this talk I will give a survey of recent results on the subject of machine learning from the theoretical computer science community. Using the "probably approximately correct" model due to Valiant as a starting point, I will discuss results demonstrating the possibilities and limitations for efficient machine learning from examples in several natural settings. I will then give an overview of several recent variants of Valiant's model intended to incorporate real-world conditions such as uncertainty and the ability of the learner to ask questions. The talk will highlight connections between learning theory and other areas of theoretical computer science such as cryptography, complexity theory and computational geometry.


13 December 1990

Margaret Jacks Hall 301. 12:00 p.m.

Michael Kharitonov (Stanford)

Asking questions often doesn't help to learn

We consider a model of PAC learning in which membership queries by a learner are allowed. Such queries have been shown to make learnable several concept classes that are unlearnable (under plausible cryptographic assumptions) from examples only (e.g., DFAs). We investigate cryptographic limitations on the power of membership queries to help with concept learning. In particular, we show that (under specific cryptographic assumptions) boolean formulas, read-thrice boolean formulas, finite unions or intersections of DFAs, 2-way DFAs, NFAs, and context free grammars are not PAC learnable even with membership queries. Also, we show that if there exist one-way functions that cannot be inverted by polynomial-sized circuits, membership queries will not help to learn CNF or DNF formulas, which may or may not be learnable from examples only.

This is joint work with Dana Angluin of Yale.

NOTE: I will assume some familiarity with the fundamentals of computational learning theory. People not familiar with the basic definitions and results are encouraged to attend the preceding AFLB talk by Michael Kearns on December 6.


9 January 1991

Margaret Jacks Hall 301. 12:30 p.m.

Don Knuth (Stanford)

There are at most 3^(n choose 2) simple arrangements of pseudolines

One of the vexing questions in geometry has been to know how many ways n simple closed curves can divide the projective plane into cell complexes, when every pair of curves intersects exactly once, and these (n choose 2) points of intersection are distinct. Goodman and Pollack constructed 2^(n^2/8) distinct arrangements and proved an upper bound of 2^(O(n^2 log n)). We prove their long-standing conjecture that the true number is 2^(O(n^2)). The proof is based on some interesting properties of primitive sorting networks.


10 January 1991

Margaret Jacks Hall 301. 12:00 p.m.

Yuval Peres

Iterating Von Neumann's procedure for extracting random bits

(No abstract provided.)
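
For background, a minimal sketch of the classical procedure named in the title (the talk concerns iterating it to recover more of the available entropy): read the independent, possibly biased, coin flips in pairs, output 0 for the pair (0,1) and 1 for (1,0), and discard (0,0) and (1,1); the output bits are unbiased and independent.

def von_neumann(bits):
    """bits: iterable of 0/1 flips from an independent, possibly biased, source."""
    it = iter(bits)
    for a, b in zip(it, it):      # consume the stream two flips at a time
        if a != b:
            yield a               # (0,1) -> 0, (1,0) -> 1; equal pairs are discarded

# list(von_neumann([1, 1, 0, 1, 1, 0, 0, 0])) == [0, 1]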


17 January 1991

Margaret Jacks Hall 301. 12:00 p.m.

Richard Karp (UC-Berkeley and ICSI)

Probabilistic Recurrence Relations

This talk is concerned with recurrence relations that arise frequently in the analysis of divide-and-conquer algorithms. In order to solve a problem instance of size $x$, such an algorithm invests an amount of work $a(x)$ to break the problem into subproblems of sizes $h_1(x),h_2(x),\ldots,h_k(x)$, and then proceeds to solve the subproblems. Our particular interest is in the case where the sizes $h_i(x)$ are random variables; this may occur either because of randomization within the algorithm or because the instances to be solved are assumed to be drawn from a probability distribution. When the $h_i$ are random variables the running time of the algorithm on instances of size $x$ is also a random variable $T(x)$. We shall give a convenient general method for obtaining a fairly tight bound on the upper tail of the probability distribution of $T(x)$. The proof of the bound requires an interesting analysis of optimal strategies in certain gambling games. We shall demonstrate how our bounds can be used to analyze algorithms for list ranking, sorting and the construction of maximal independent sets in graphs, and to study the distribution of cycle lengths in random permutations. It is our hope that the method will take its place alongside Chernoff bounds and martingale tail inequalities as a useful probabilistic tool for the analysis of algorithms.
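
As a toy illustration of the setting (not of the bounding method itself), one can simulate a recurrence of this form and inspect the upper tail empirically; the cost function and the distribution of the subproblem size below are arbitrary choices for illustration.

import random

def sample_T(n):
    """One sample of T(n) for T(x) = x + T(h(x)), with h(x) uniform on {0, ..., x-1}."""
    total, x = 0, n
    while x > 0:
        total += x                  # work a(x) = x at the current level
        x = random.randrange(x)     # random size of the surviving subproblem
    return total

def tail_estimate(n, threshold, trials=10000):
    """Empirical estimate of Pr[T(n) >= threshold]."""
    return sum(sample_T(n) >= threshold for _ in range(trials)) / trials

# E[T(n)] is roughly 2n for this choice; tail_estimate(1000, 4000) is already quite small.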


23 January 1991

Margaret Jacks Hall 301. 12:00 p.m.

Muli Safra

Approximating Clique is NP-complete

It is well known that the Clique problem (i.e., finding the size of the largest complete subgraph in a given graph $G$) is NP-complete. However, the related problem of {\em approximating} this size (say within a constant factor) is not known to have an efficient algorithm, nor is it known to be NP-complete.

We consider the class of languages accepted in quasi-polynomial ($= 2^{\log^k n}$) time, which we denote by \~P, and its corresponding nondeterministic class N\~P. We show that if there is a \~P approximation algorithm for Clique (even within a factor of $2^{\log^c n}$ for $c < 1$) then N\~P = \~P. Thus we can conclude that if such an approximation procedure exists then:

\begin{enumerate}

\item EXPTIME = NEXPTIME

\item NP $\subseteq$ \~P

\end{enumerate}

We also discuss how a better approximation procedure would imply that NP is contained in deterministic time $n^{O(\log \log n)}$.

This work uses techniques from multi-prover interactive proof systems, in particular, the recent theorem of Babai, Fortnow and Lund, showing that NEXPTIME has multi-prover interactive proofs.

This is joint work with Uriel Feige, Shafi Goldwasser and Laszlo Lovasz.


31 January 1991

Margaret Jacks Hall 301. 12:00 p.m.

Joel Wein (MIT)

On-line Scheduling of Parallel Machines

We study the problem of scheduling jobs on parallel machines in an {\em on-line} fashion, where the processing requirement of a job is unknown until the job is completed, and the arrival time of a job is unknown until the job arrives. Despite this lack of knowledge of the future, we wish to schedule so as to minimize the completion time of the entire set of jobs. We study the three fundamental models of this problem: {\em identical} machines, in which all the machines run at the same speed; {\em uniformly related} machines, in which the machines run at different but uniformly related speeds; and {\em unrelated} machines, in which the speeds of the machines on different jobs need not be related. We measure the performance of these on-line algorithms by their {\em competitive ratio}, and prove matching or close to matching upper and lower bounds for these models. We also prove a surprisingly strong lower bound on the performance of randomized algorithms for identical machines that nearly matches the deterministic upper bound.

Joint work with David Shmoys and David Williamson.
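
For concreteness, a sketch of the classical greedy rule for the identical-machines model (not necessarily the algorithm analyzed in the talk, and simplified: it ignores release times and takes job lengths as given, whereas in the on-line model above a job's processing requirement is unknown until it completes).

import heapq

def list_schedule(job_lengths, m):
    """Assign each job to a currently least-loaded of m identical machines; return the makespan."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in job_lengths:
        least = heapq.heappop(loads)        # least-loaded machine so far
        heapq.heappush(loads, least + p)    # run the job there
    return max(loads)

# list_schedule([3, 1, 4, 1, 5, 9, 2, 6], m=3)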


5 February 1991

Margaret Jacks Hall 252. 11:00 a.m.

Elwyn Berlekamp (UC-Berkeley)

Mathematical Go Endgames

"Mathematical Go" is defined. Under appropriate conditions, there is a direct correspondence between winning strategies for this game and for the corresponding endgame positions in Chinese and Japanese Go.

Useful notions of equality and inequality are defined among positions in mathematical Go. Some Go endgames are seen to be "equal" to abstract games which are also equal to games whose rules appear quite different from the rules of Go (e.g., Domineering). There is a direct correspondence between winning strategies for these "equal" games.

A recursive abstract mapping called "cooling" maps games into games.

Cooling by a large amount (called "freezing") naturally defines a game's mean value and its "temperature", both of which are often fractions of a point. The temperature corresponds to the "value of the move" in Go.

Although the general cooling operator is many-to-one, the special case of cooling by one degree (called "chilling") is one-to-one over a large class of interesting Go endgames. Its inverse, called "warming", is monotonic. This ensures a direct correspondence between winning strategies for chilled games and warmed games. Many Go endgames chill into numbers and infinitesimals that were analyzed in "Winning Ways": star, up, down, tiny. Others chill into new types of infinitesimals with interesting properties.

Using these techniques, it is possible to determine precise and rigorous strategies for an interesting class of Go endgame problems, in reasonable time, using only pencil and paper. Yet some of these problems are so difficult that a professional Japanese 7-dan was unable to learn the solution, even after two hours of study and several unsuccessful practice plays (trying both Black and White) against a mathematically perfect opponent. Another challenging 19 x 19 problem has the property that although nearly all moves are worth no more than one point, and White can ultimately win by one point, it requires at least 101 moves to do so against an astute opponent.


7 February 1991

Margaret Jacks Hall 301. 12:00 p.m.

Vaughan Pratt

Event Spaces

An event space encodes the separate notions of AND, OR, and THEN in a single partially ordered set. The set has a top and all nonempty joins including infinite joins. The order itself codes THEN, nontop joins code AND, and top joins code OR. There exists a very beautiful calculus of event spaces whose basic operations are direct product and tensor product and their respective units, along with formation of free spaces. From these are derived several other operations constituting simultaneously a concurrent programming language and a linear logic of concurrency. While event spaces resemble vector spaces in many ways, they improve on them with respect to duality (limited to finite dimensions for vector spaces), flexibility (vector spaces are rigid), and nondegeneracy (product and sum of vector spaces coincide).


7 February 1991

Margaret Jacks Hall 252. 4:00 p.m.

Alexandre Razborov (Steklov Mathematical Institute, Moscow)

Switching-and-Rectifier Networks for Majority Require Superlinear Size

Switching-and-rectifier networks (sometimes also called nondeterministic branching programs) are Boolean devices which correspond to nondeterministic space and generalize both switching networks and branching programs. The only lower bound in this model (for explicitly given Boolean functions) known so far comes from the famous paper of Nechiporuk. We prove the first nonlinear lower bound for the complexity of symmetric Boolean functions (to which Nechiporuk's method doesn't apply). The proof involves a version of the method of approximations.


21 February 1991

Margaret Jacks Hall 301. 12:00 p.m.

Pete Gemmell (UC-Berkeley)

Self-Testers/Correctors for Polynomials and approximate functions

The study of self-testing/correcting programs was introduced by Blum, Luby and Rubinfeld to allow one to use a program $P$ to compute a function $f$ without trusting that $P$ works correctly. A self-tester for $f$ estimates the fraction of $x$ for which $P(x) = f(x)$; and a self-corrector for $f$ takes a program that is correct on most inputs and turns it into a program that is correct on every input with high probability. Both access $P$ only as a black-box and in some precise way are not allowed to compute the function $f$. (The notion of self-testing was also independently introduced by Lipton.)

Self-correcting is usually easy when the function has the random self-reducibility property. One class of such functions that has this property is the class of multivariate polynomials over finite fields (shown by Beaver, Feigenbaum and Lipton). We extend this result in two directions. First, we show that polynomials are random self-reducible over more general domains: specifically, over the rationals and over noncommutative rings. Secondly, we show that one can get self-correctors even when the program satisfies weaker conditions, i.e. when the program has more errors, or when the program behaves in a more adversarial manner by changing the function it computes between successive calls.
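
A minimal sketch of self-correction via random self-reducibility, for the simplest case of a linear function over $Z_p$ (so f(x + y) = f(x) + f(y)); the program below is a stand-in and the number of trials is illustrative. If P is correct on most inputs, then P(x + r) - P(r) for random r is correct for any fixed x with high probability, and a majority vote boosts the confidence further.

import random
from collections import Counter

def self_correct_linear(P, x, p, trials=21):
    """Majority value of P(x + r) - P(r) mod p over independent random r."""
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(p)
        votes[(P((x + r) % p) - P(r)) % p] += 1
    return votes.most_common(1)[0][0]

p = 101
buggy = lambda x: 0 if x in (3, 5) else (7 * x) % p    # computes 7x mod p, wrong on two inputs
print(self_correct_linear(buggy, 3, p))                # 21 == 7*3 mod p, with high probability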

Self-testing is a much harder task. Previously it was known how to self-test for a few special examples of functions, such as the class of linear functions. We show how to self-test the class of univariate polynomial functions and multivariate polynomial functions in few variables over $Z_p$.

We also initiate the study of self-testing (and self-correcting) programs which only approximately compute $f$. This setting captures in particular the digital computation of real valued functions. We present a rigorous framework and obtain the first results in this area: namely that the class of linear functions, the log function and floating point exponentiation can be self-tested. All of the above functions also have self-correctors.

This is a result of joint work with Richard Lipton, Ronitt Rubinfeld, Madhu Sudan and Avi Wigderson.


21 February 1991

Margaret Jacks Hall 301. 4:00 p.m.

Russell Impagliazzo (University of Toronto)

Recycling Random Bits: Provably Good Pseudo-Random Generators for Amplifying Probabilities of Correctness and Sampling from Distributions

We analyze the performance of a simple pseudo-random generator which is a modification of a linear congruential generator when used to amplify the correctness of a probabilistic algorithm, or to sample from an arbitrary sequence of distributions. We prove that, for these purposes, pseudo-random sequences output by these generators perform almost as well as truly random bits. The type of pseudo-random generator we construct can be based on any universal family of hash functions; in particular, examples can be constructed bearing a strong relationship to linear congruential generators and to shift register generators, both of which are used in practice. The difference between the generators considered here and those used in practice is that, in our constructions, a small number of bits of the seed are replaced by new random bits in each iteration. These generators can be proven, without any complexity-theoretic assumptions, to be almost as good as truly random bits for the purposes of amplifying the correctness of probabilistic algorithms, or for sampling from distributions (e.g., picking a list of random $n$ bit primes). Our results both help to theoretically justify the use of simple pseudo-random generators in practice, and suggest a minor alteration in the commonly used pseudo-random generators which would make them provably good for many purposes.
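
For background, a sketch of the kind of universal (pairwise independent) hash family such generators can be built from, namely the affine family h_{a,b}(x) = (a*x + b) mod p with a random seed (a, b); this is not the talk's construction itself.

import random

def make_hash(p):
    """Draw a random member h_{a,b} of the pairwise-independent affine family over Z_p."""
    a = random.randrange(p)
    b = random.randrange(p)
    return lambda x: (a * x + b) % p

# From the single short seed (a, b), the values h(0), h(1), h(2), ... are pairwise
# independent and uniform over Z_p, which already suffices (via Chebyshev's inequality)
# to amplify the success probability of a probabilistic algorithm using few random bits.
h = make_hash(1_000_003)
samples = [h(i) for i in range(100)]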


28 February 1991

Margaret Jacks Hall 301. 12:00 p.m.

Ron Shamir (Tel-Aviv University and Rutgers University)

Greedily Solvable Network Flow Problems: Characterizations and Algorithms

Given a network flow problem, does the greedy algorithm provide a solution for every feasible supply function on the network? We give characterizations of the uncapacitated networks on which the answer is positive, both for the max flow and for the minimum cost flow problem. We also give efficient algorithms for the recognition of those networks, and for the construction of the order of variable maximization in each problem.

Our approach builds on some interesting relations between those problems and Monge sequences, chordal bipartite graphs and totally balanced matrices.

(Joint work with Ilan Adler (UC Berkeley) and Alan Hoffman (IBM Yorktown))


7 March 1991

Margaret Jacks Hall 301. 12:00 p.m.

Leonidas Guibas

Random Sampling in Computational Geometry

In this talk we survey the use of randomization in devising efficient geometric algorithms. Randomization is a tool that has yielded major improvements in the solution of several geometric problems in the last few years. Though the resulting randomized algorithms are often challenging to analyze, they are equally often simple to implement, and thus useful in practice. They have been used to solve problems in many application areas, ranging from motion planning in robotics to hidden-surface elimination in computer graphics.

Randomization is frequently useful for effective divide-and-conquer algorithms in geometry. By choosing a random sample of the objects defining our problem we can partition the underlying space into parts so that each part receives its "fair share" of the overall complexity of the problem. We illustrate this use of randomization by an example from the theory of arrangements. We also discuss other paradigms for the use of randomization, including randomized incremental constructions and randomized re-weighting techniques.

We will briefly develop the rudiments of the general theories proposed by Clarkson-Shor and Haussler-Welzl (epsilon nets) for formalizing the contexts in which random samples are "universally good" samples for certain types of queries. These theories help explain the unusual effectiveness of randomization in geometry. If time allows we will also mention recent work of Chazelle/Friedman, Matousek, and Agarwal in rendering some of the above techniques deterministic.


14 March 1991

Margaret Jacks Hall 301. 12:00 p.m.

Ravi Kannan

A simple coin changing problem and its not so simple solution

The following problem (attributed to Frobenius) has been around for many decades: given n natural numbers whose greatest common divisor is 1, find the largest natural number which cannot be expressed as a nonnegative integer combination of them. For n=2, there is a simple closed-form formula. For n=3, a polynomial time algorithm was developed about 10 years ago. I will describe a polynomial time algorithm for each fixed n. The algorithm is able to solve the more general parametric integer programming problem of whether an integer program is feasible for every integer right hand side vector drawn from a polyhedron. In the terminology of logic, the algorithm decides Pi-2 sentences and thus generalizes Lenstra's result showing that Sigma_1 sentences can be decided in polynomial time. It remains an interesting open problem to extend these results to a higher number of alternations and perhaps apply them to Presburger arithmetic.
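
For the two easy cases mentioned above, a small sketch (function names are illustrative): for n = 2 coprime values a and b the answer is the classical a*b - a - b, and for small inputs one can also find it by marking representable numbers until min(coins) consecutive ones have been seen, after which every larger number is representable.

from math import gcd
from functools import reduce

def frobenius_two(a, b):
    assert gcd(a, b) == 1
    return a * b - a - b

def frobenius_brute(coins):
    """Largest number not representable as a nonnegative integer combination of coins (gcd 1)."""
    assert reduce(gcd, coins) == 1
    c = min(coins)
    reachable = [True]                # 0 is representable
    run, largest_gap, x = 1, -1, 0    # run = current streak of representable numbers
    while run < c:
        x += 1
        ok = any(x >= k and reachable[x - k] for k in coins)
        reachable.append(ok)
        run = run + 1 if ok else 0
        if not ok:
            largest_gap = x
    return largest_gap

print(frobenius_two(3, 5), frobenius_brute([6, 9, 20]))   # 7 and 43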


4 April 1991

Margaret Jacks Hall 301. 12:00 p.m.

Mihalis Yannakakis

Testing Finite State Machines

We are given a Mealy machine M, which we may test by providing inputs and observing the outputs produced. We discuss the complexity of the following problems.

Distinguishing Sequence (State Identification): We know the state diagram of M and wish to determine its unknown initial state.

Unique Input-Output Sequence (State Verification): We wish to verify that M starts from a specific initial state.

Checking Sequence (Fault Detection): We do not know the state diagram of M. Rather, we are given the diagram of a specification machine A, and wish to check that M is a correct implementation of A.

(This is joint work with David Lee.)
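
As a small illustration of the first problem (the talk is about the complexity of finding such sequences; this only checks a candidate, and the machine encoding is an arbitrary choice):

def run(machine, state, inputs):
    """machine maps (state, input) -> (next_state, output); returns the output string."""
    out = []
    for a in inputs:
        state, o = machine[(state, a)]
        out.append(o)
    return "".join(out)

def is_distinguishing(machine, states, inputs):
    """True iff the input sequence produces a different output from every starting state."""
    outputs = [run(machine, s, inputs) for s in states]
    return len(set(outputs)) == len(states)

M = {(1, 'a'): (2, '0'), (2, 'a'): (1, '1')}      # a two-state Mealy machine over {'a'}
print(is_distinguishing(M, [1, 2], "a"))          # True: outputs '0' vs '1'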


11 April 1991

Margaret Jacks Hall 301. 12:00 p.m.

David Karger

Finding the Hidden Path: Time Bounds For All-Pairs Shortest Paths

We investigate the problem of finding the shortest path between each pair of vertices in a weighted graph (the all-pairs shortest paths problem). We present an algorithm -- the Hidden Paths algorithm -- that finds these paths in time $O(m^* n \log n)$ using simple data structures, or $O(m^* n+n^2 \log n)$ using Fibonacci heaps, where $m^*$ is the number of edges participating in shortest paths. Our algorithm is a practical substitute for Dijkstra's algorithm, and is an improvement whenever $m^*$ is smaller than $m$. We give evidence that this condition will occur frequently in practice, by showing that with high probability $m^* = O(m / \sqrt{n})$, if the edge weights in the complete graph are independently and uniformly chosen from a suitable range of integers. Finally we prove an $\Omega(mn)$ lower bound on the runtime of any {\em comparison-based} algorithm for the all-pairs shortest paths problem. Comparison-based algorithms form a natural class containing the Hidden Paths algorithm, as well as the algorithms of Dijkstra and Floyd.

Joint work with Daphne Koller and Steven Phillips.
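
For reference, a sketch of the baseline the abstract compares against -- all-pairs shortest paths by running Dijkstra's algorithm from every source -- not the Hidden Paths algorithm itself; the adjacency-list encoding is illustrative.

import heapq

def dijkstra(adj, src):
    """adj: {u: [(v, w), ...]} with nonnegative weights; returns shortest distances from src."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                               # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def all_pairs(adj):
    return {u: dijkstra(adj, u) for u in adj}

# all_pairs({'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []})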


16 April 1991

Margaret Jacks Hall (room TBA). 12:00 p.m.

Alan Siegel (NYU)

The anatomy of random hash functions

The analyses of classical hashing algorithms have depended upon an underlying probing behavior that is completely random. Unfortunately, completely random functions have an average program size (Kolmogorov Complexity) that is huge. For example, the average random function mapping $[0,n^2]\mapsto [0,n]$ requires about $n^2\log n$ bits. This talk presents formally constructive functions and corresponding analyses for several hashing algorithms.

Let $S$ be a set containing $\alpha n$ elements belonging to the domain $D=[0,1 ,\dots, m]$, $\alpha <1$. In open addressing, random mappings $f(x,probe):D\times[1,\dots,n]\mapsto[0,n-1]$ are used to locate data within a size $n$ table. Collisions are resolved by placing data in the first vacant probe location within the table. In Coalesced Hashing, an item $x$ has a single probe location $f(x)$. Items colliding at an occupied location are added to the list extending from the occupant.
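
As a concrete instance of the open-addressing scheme just described, a sketch of insertion under double hashing (scheme c) in the list below): the probe sequence for $x$ is f(x, i) = (h1(x) + i*h2(x)) mod n, and the two hash functions here are placeholders for members of the small programmable families the talk constructs.

def insert(table, x, h1, h2):
    """Place x at the first vacant location of its double-hashing probe sequence."""
    n = len(table)
    for i in range(n):
        slot = (h1(x) + i * h2(x)) % n       # i-th probe location for x
        if table[slot] is None:
            table[slot] = x
            return slot
    raise RuntimeError("table full")

n = 11                                       # prime table size, so any nonzero step probes all slots
table = [None] * n
h1 = lambda x: x % n
h2 = lambda x: 1 + (x % (n - 1))             # second hash: a nonzero step size
for x in (5, 16, 27):                        # all three collide at slot 5 under h1 alone
    insert(table, x, h1, h2)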

1) We construct highly random functions to map $[0,m]\mapsto[0,n]$, where the function uses a random seed of $O(\log\log m+\log^2n)$ bits, and evaluation requires ``only'' a constant number of word operations.

2) We show these functions can be used to achieve the same asymptotic performance as fully random functions for

a) Uniform Hashing,

b) Linear Probing,

c) Double Hashing, and

d) Coalesced Hashing.

Thus, the performance of each scheme is established in a randomized sense for a class of programmable hash functions, which gives the strongest proof of performance to date.

Some of our results for c) and d) are new even in the case of full randomness.

In particular, we show that for Double Hashing, the expected insertion time for the $\alpha n$-th item is

$${1\over 1-\alpha}+O({1\over n(1-\alpha)^5}).$$

This bound extends to any Double Hashing-like scheme where the first two probes are approximately independent, and subsequent probes are computed from any reasonable deterministic function of the first two probes. The $O(1/n)$ performance penalty extends to any fixed moment of the probe count.

For Coalesced Hashing, we show how to obtain the generating function governing the asymptotic distribution of the chain lengths, as well as semiclosed forms for the exact distribution. The discrepancy between the asymptotic formula and the exact expectation can also be measured quite accurately. The results extend to Coalesced Hashing with a cellar.

The discipline of limited randomness requires fundamentally different problem models and methods of analysis, since the hash table statistics cannot be viewed as the outcome of $\alpha n$ random insertions. We argue that these restrictions incur penalties outweighed by their rewards, and close with a few questions deserving further thought.

The work on Double Hashing is joint with Jeanette Schmidt.


18 April 1991

Margaret Jacks Hall 301. 12:00 p.m.

Gil Kalai (IBM Almaden)

The diameter problem for convex polytopes and the simplex algorithm for linear programming

Let P be a d-dimensional polytope defined by n linear inequalities. The graph of P, G(P), is a graph whose vertices are the extreme points of P and two vertices are adjacent if they form a 1-dimensional face of P. Let d(P) denote the diameter of G(P) and D(d,n) denote the maximum of d(P) over all d-polytopes determined by n linear inequalities.

In 1957 Hirsch conjectured that D(d,2d)=d and D(d,n)<=n-d. Until a few months ago, the best known upper bounds on D(d,n) grew exponentially with d. D(d,n) is clearly a lower bound on the worst-case behaviour of any edge-following algorithm for linear programming (LP). We will discuss the relation with LP and, in particular, discuss monotone paths on polytopes (in the presence of a linear objective function).


25 April 1991

Margaret Jacks Hall 301. 12:00 p.m.

Sandy Irani (UC-Berkeley)

Competitive Paging with Locality of Reference

The Sleator-Tarjan competitive analysis of paging (CACM 85) gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless practitioners voice reservations about the model, citing its inability to discern between LRU and FIFO (algorithms whose performances differ markedly in practice), and the fact that the theoretical competitiveness of LRU is much larger than observed in practice. In addition, we would like to address the following important question: given some knowledge of a program's reference pattern, can we use it to improve paging performance on that program?

We address these concerns by introducing an important practical element that underlies the philosophy behind paging: locality of reference. We devise a graph-theoretical model, the access graph, for studying locality of reference. We use it to prove results that address the practical concerns mentioned above. In addition, we use our model to answer the following questions: How well is LRU likely to perform on a given program? Is there a universal paging algorithm that achieves (nearly) the best possible paging performance on every program? We do so without compromising the benefits of the Sleator-Tarjan model, while bringing it closer to practice.

This is joint work with Allan Borodin, Prabhakar Raghavan, and Baruch Schieber.
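
For concreteness, a fault-counting sketch of the LRU policy discussed above (the request sequence and cache size are illustrative); competitive analysis compares such fault counts against those of an optimal offline algorithm on the same request sequence.

from collections import OrderedDict

def lru_faults(requests, k):
    """Number of page faults LRU incurs on `requests` with a fast memory of k pages."""
    cache = OrderedDict()                      # ordered from least to most recently used
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)            # hit: mark as most recently used
        else:
            faults += 1                        # miss: bring the page in
            if len(cache) == k:
                cache.popitem(last=False)      # evict the least recently used page
            cache[page] = True
    return faults

# lru_faults([1, 2, 3, 1, 4, 2, 3], k=3)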


2 May 1991

Margaret Jacks Hall 301. 12:00 p.m.

Serge Plotkin (Stanford)

Fast Approximation Algorithms for Multicommodity Flow Problems

All previously known algorithms for solving the general multicommodity flow problem are based on linear programming. The best of these algorithms, due to Vaidya, makes use of a fast matrix multiplication algorithm and takes $O(k^{2.5}n^2 m^{.5}\log(nDU))$ time to find an approximate solution, where $k$ is the number of commodities, $n$ and $m$ denote the number of nodes and edges in the network, $D$ is the largest demand, and $U$ is the largest edge capacity. Substantially more time is needed to find an exact solution. As a consequence, even multicommodity flow problems with just a few commodities are believed to be much harder than single-commodity maximum flow or minimum-cost flow problems.

In this work, we describe the first polynomial-time combinatorial algorithms for approximately solving the multicommodity flow problem. The running time of our randomized algorithm is (up to log factors) the same as the time needed to solve $k$ single-commodity flow problems, thus giving the surprising result that approximately computing a $k$-commodity maximum flow is not much harder than computing about $k$ single-commodity maximum flows in isolation.

In fact, we prove that a $k$-commodity flow problem can be approximately solved by approximately solving $O(k \log k \log n)$ single-commodity minimum-cost flow problems. Hence our $k$-commodity algorithm runs in $O(knm\log^4 n)$ time with high probability. We also describe a deterministic algorithm that uses an $O(k)$-factor more time. Given any multicommodity flow problem as input, both algorithms are guaranteed to provide a feasible solution to a modified flow problem in which all capacities are increased by a $(1+\epsilon)$-factor, or to provide a proof that there is no feasible solution to the original problem.

We also describe faster approximation algorithms for multicommodity flow problems with a special structure, such as those that arise in the ``sparsest cut'' problems, ``minimum-ratio cut'' problems, and the uniform concurrent flow problems.

Joint work with: T. Leighton, F. Makedon, C. Stein, E. Tardos, S. Tragoudas.

To appear in: Proc. 23rd ACM Symposium on the Theory of Computing, May 1991.


9 May 1991 (?)

Margaret Jacks Hall 301. 12:00 p.m.

Baruch Awerbuch

The Maintenance of Common Data in a Distributed System

A basic task in distributed computation is the maintenance, at each processor of the network, of a current and accurate copy of a common database. A primary example is the maintenance, for routing and other purposes, of a record of the current topology of the system.

Such a database must be updated in the wake of locally generated changes to its contents. Due to previous disconnections of parts of the network, a maintenance protocol may need to update processors holding widely varying versions of the database, while taking advantage of prior partial knowledge.

In this talk, we describe efficient randomized and deterministic solutions for this problem, which have only polylogarithmic overhead in both time and communication complexities. Previous solutions required polynomial overhead in at least one of these measures.


16 May 1991 (?)

Margaret Jacks Hall 301. 3:30 p.m.

Anne Condon

The Complexity of Space Bounded Interactive Proof Systems

Recent results on time bounded interactive proof systems have demonstrated that the model is much more powerful than was first expected. The study of space bounded interactive proof systems has also led to surprising results on the power of interaction, with applications in diverse areas of computer science.

In this talk, I will describe the model of space bounded interactive proof systems, in particular when the verifier is a 2-way probabilistic finite state automaton. I will present some results on the classes of languages they accept. For example, if they are not required to halt with high probability on rejected inputs, these interactive proof systems can accept any recursively enumerable language.

I will also describe applications of these results to problems in formal language theory, the theory of time-varying Markov processes and to the non-approximability of NP-complete and P-complete problems.


25 July 1991

Margaret Jacks Hall 301. 12:00 p.m.

David Harel (Weizmann Institute)

How Hard is it to Reason About Propositional Programs?

We survey a variety of results pertaining to the difficulty of reasoning about schematic programs. We adopt the general framework of propositional dynamic logic (PDL), which is a natural extension of the propositional calculus, subsuming Hoare logic and various modal logics. Formulas in PDL can express a wide spectrum of statements about programs, including correctness, termination and equivalence.

There are many variants of PDL, differing in the class of programs allowed. Two of the most interesting are PDL_reg and PDL_cf, in which the programs are regular and context-free, respectively. The former corresponds to iterative while-programs, or flowcharts, and the latter to recursive procedures. As it turns out, the difficulty of deciding the truth of formulas in such logics can range from being NP-complete to being highly undecidable.

The talk will describe and put into perspective the work of many people during the last 15 years. It will include some very recent results concerning the PDL of concurrency, that of recursive programs, and that of logic programs. We will also pose a number of open problems.


1 August 1991

Margaret Jacks Hall 301. 12:00 p.m.

Madhu Sudan (UC-Berkeley)

Testing Low-degree Polynomials

This talk essentially surveys some recent results in program testing and their connection to some results in complexity theory. We will start by introducing the notions of program testing as developed by Blum, Luby and Rubinfeld. A simple test, which verifies that a program supposedly computing a low-degree polynomial of its input is correct, will be described next. We will also describe why the same tester works for multivariate polynomials. This in turn yields one of the simplest proofs yet of MIP=NEXP and this connection will be shown (the proof is the same as that of Babai et al., where their multilinearity test is replaced with our low-degree test). Lastly, extensions of the tester, based on efficiency and practicality considerations, will be mentioned.

This combines joint work with Gemmell, Lipton, Rubinfeld, Shen and Wigderson.
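
One standard way to phrase a test of this kind in the univariate case (not necessarily the exact formulation from the talk): over $Z_p$, with d small relative to p, a function has degree at most d exactly when its (d+1)-st finite difference vanishes, i.e. sum_{i=0}^{d+1} (-1)^i C(d+1,i) f(x + i*t) = 0 for all x and t; the tester checks this identity at random x and t.

import random
from math import comb

def low_degree_test(f, d, p, trials=50):
    """Probabilistic check that f: Z_p -> Z_p is (close to) a polynomial of degree <= d."""
    for _ in range(trials):
        x, t = random.randrange(p), random.randrange(1, p)
        s = sum((-1) ** i * comb(d + 1, i) * f((x + i * t) % p) for i in range(d + 2))
        if s % p != 0:
            return False
    return True

print(low_degree_test(lambda z: (3 * z * z + 1) % 97, d=2, p=97))   # True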


8 August 1991

Margaret Jacks Hall 301. 12:00 p.m.

Oded Maler (INRIA)

Tight Bounds on the Complexity of Cascaded Decomposition of Automata

In this work we give exponential upper and lower bounds on the size of the cascaded (Krohn-Rhodes) decomposition of automata. These results are used for giving elementary algorithms for various translations between automata and temporal logic, where the previously-known translations were non-elementary.


19 August 1991

Margaret Jacks Hall 301. 12:00 p.m.

Amos Fiat

An O(log n) competitive algorithm for the room problem

Blum, Raghavan and Schieber give an algorithm for motion planning in a warehouse containing unknown rectilinearly oriented rectangular objects; the path it traverses is no longer than n * exp(sqrt(log n)), where n is the length of the shortest path to the target. We improve this to n * log(n).