AFLB is the Algorithms for Lunch Bunch. Meetings are Thursdays at 12:30 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

Contents:

  1. 24 August 1989: Greg Plaxton (Stanford). Deterministic Sorting on Practical Networks.
  2. 12 October 1989: Moni Naor (IBM Almaden). Efficient Cryptographic Schemes Provably as Secure as Subset Sum.
  3. 19 October 1989: Vijay Vazirani (Cornell). A Theory of Alternating Paths and Blossoms for Proving Correctness of the O(sqrt(V) E) General Graph Matching Algorithm.
  4. 25 October 1989: Umesh Vazirani (Berkeley). Learning on a Graph.
  5. 26 October 1989: Michael Kharitonov (Stanford). Lower Bounds for Pseudorandom Number Generators.
  6. 9 November 1989: Joseph Cheriyan (Universitaet des Saarlandes). A Randomized Maximum-Flow Algorithm.
  7. 16 November 1989: Noga Alon (IBM Almaden and Tel Aviv University). Separating graphs with an excluded minor.
  8. 30 November 1989: Miklos Ajtai (IBM Almaden). Induction, Pigeonhole Principle and Parity.
  9. 1 December 1989: David Zuckerman (Berkeley). New Pseudo-Random Generators for Weak Random Sources.
  10. 8 December 1989: Danny Dolev (IBM Almaden and Hebrew University). Sharing Memory Robustly in Message-Passing Systems.
  11. 14 December 1989: Joel Friedman (Princeton). The Second Eigenvalue of Hypergraphs.
  12. 18 January 1990: Seffi Naor (Stanford). Small-bias probability spaces: efficient constructions and applications.
  13. 25 January 1990: Danny Sleator (Carnegie Mellon). Competitive Randomized Paging Algorithms.
  14. 1 February 1990: Serge Plotkin (Stanford). NETWORK DECOMPOSITION AND LOCALITY IN DISTRIBUTED COMPUTATION.
  15. 8 February 1990: Anil Gangolli (Stanford). Mean-Value Estimation with Markov Chains.
  16. 15 February 1990: Ronitt Rubinfeld (Berkeley). Self-Testing/Correcting Programs with Applications to Numerical Problems.
  17. 22 February 1990: Eli Gafni (UCLA and Tel-Aviv University). Bootstrap Network Resynchronization: An Efficient Technique for End-to-End Communication.
  18. 1 March 1990: Jack Edmonds (Stanford). Existentially polynomial formulas.
  19. 8 March 1990: Moni Naor (IBM Almaden). Public-key cryptosystems provably secure against chosen ciphertext attack.
  20. 15 March 1990: Marshall Bern (Xerox PARC). ON-LINE ALGORITHMS FOR LOCATING CHECKPOINTS.
  21. 4 April 1990: Leonid Levin (Boston University). Pseudo-Random Bits: Full Security at Last.
  22. 12 April 1990: Tomas Feder (Stanford). Product Graph Representations.
  23. 26 April 1990: Kireeti Kompella (USC). Program Checkers for Modular Exponentiation.
  24. 3 May 1990: Ching-Tien (Howard) Ho (IBM Almaden). Optimal Communications on Hypercubes.
  25. 10 May 1990: Jack Snoeyink (Stanford). A trivial knot with exponential-sized spanning disks.
  26. 24 May 1990: Peter Sarnak (Stanford and IBM Almaden). Diophantine problems and graphs.
  27. 31 May 1990: Yitzhak Birk (IBM Almaden). CONNECTING GRID POINTS TO THE GRID'S BOUNDARY USING NON-INTERSECTING STRAIGHT LINES.
  28. 7 June 1990: Yatin Saraiya (Stanford). Subtree-elimination algorithms in deductive databases.
  29. 14 June 1990: Naoki Abe (UC Santa Cruz). On the Computational Complexity of Approximating Distributions by Probabilistic Automata.
  30. 21 June 1990: Michael Hirsch (Berkeley). Applications of Topology to Lower Bound Computations.
  31. 19 July 1990: Philip Milne (University of Bath). An extension of Sturm's theorem to higher dimensions.
  32. 9 August 1990: Jim Aspnes (Carnegie-Mellon). The expressive power of Voting Polynomials.
  33. 16 August 1990: Orli Waarts (Stanford). A Characterization of Eventual Byzantine Agreement.
  34. 23 August 1990: Richard Beigel (Yale). $\PP$ is closed under intersection.

24 August 1989

Margaret Jacks Hall 352. 12:30 p.m.

Greg Plaxton (Stanford)

Deterministic Sorting on Practical Networks

This talk will describe a new deterministic algorithm for sorting on realistic parallel computers such as the butterfly, hypercube and shuffle-exchange. The algorithm runs in $O(\log n (\log\log n)^3)$ time and requires only $O(1)$ storage per processor. Previously, the best known deterministic sorting algorithm for such computers was Batcher's bitonic sort, which runs in $O(\log^2 n)$ time (and $O(1)$ storage). Expander graphs and other techniques related to the AKS sorting circuit are not employed. The algorithm does not provide another $o(\log^2 n)$ depth sorting circuit.
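
For reference, here is a compact sketch of the Batcher baseline mentioned above, written sequentially in Python; on a network, each level of compare-exchanges in the merge corresponds to one parallel round, which is where the $O(\log^2 n)$ bound comes from.

    # Batcher's bitonic sort on n = 2^k keys (standard textbook version).
    def bitonic_sort(a, ascending=True):
        if len(a) <= 1:
            return a
        half = len(a) // 2
        left = bitonic_sort(a[:half], True)    # sort first half ascending
        right = bitonic_sort(a[half:], False)  # sort second half descending
        return bitonic_merge(left + right, ascending)

    def bitonic_merge(a, ascending):
        if len(a) <= 1:
            return a
        half = len(a) // 2
        for i in range(half):  # one parallel compare-exchange round
            if (a[i] > a[i + half]) == ascending:
                a[i], a[i + half] = a[i + half], a[i]
        return (bitonic_merge(a[:half], ascending)
                + bitonic_merge(a[half:], ascending))

    print(bitonic_sort([7, 3, 6, 2, 5, 1, 4, 0]))  # [0, 1, 2, 3, 4, 5, 6, 7]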

Joint work with Bob Cypher.


12 October 1989

Margaret Jacks Hall 352. 12:30 p.m.

Moni Naor (IBM Almaden)

Efficient Cryptographic Schemes Provably as Secure as Subset Sum

The subset sum (a.k.a. knapsack) problem is: given a set of numbers and a target sum, is there a subset of the numbers whose sum is the target?
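
As a minimal illustration (Python; the function names are ours, not the paper's), the subset sum function is easy to evaluate in one pass, while inverting it means recovering the subset:

    import random

    def subset_sum(weights, subset_bits):
        # Easy direction: a single pass over the numbers.
        return sum(w for w, b in zip(weights, subset_bits) if b)

    def invert_by_search(weights, target):
        # Hard direction: naive search tries all 2^n subsets.
        n = len(weights)
        for mask in range(2**n):
            bits = [(mask >> i) & 1 for i in range(n)]
            if subset_sum(weights, bits) == target:
                return bits
        return None

    weights = [random.getrandbits(16) for _ in range(8)]
    bits = [random.getrandbits(1) for _ in range(8)]
    print(invert_by_search(weights, subset_sum(weights, bits)) is not None)  # True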

We show very efficient constructions for a pseudo-random generator and for a universal one-way hash function based on the intractability of random instances of the subset sum problem for certain dimensions. (Pseudo-random generators can be used for private-key encryption, and universal one-way hash functions for authentication and signature schemes.) The efficiency of our construction is due to the fact that many bits can be generated/hashed with one application of the assumed one-way function. We also show a bit commitment protocol with the property that it is secure if the subset sum problem cannot be broken on-line.

All our constructions can be implemented in NC using an optimal number of processors.

Joint work with Russell Impagliazzo.

The talk is accessible to non-specialists in cryptography.


19 October 1989

Margaret Jacks Hall 352. 12:30 p.m.

Vijay Vazirani (Cornell)

A Theory of Alternating Paths and Blossoms for Proving Correctness of the O(sqrt(V) E) General Graph Matching Algorithm

We will present new algorithmically relevant combinatorial structure of matchings; this structure yields the first proof of correctness of the general graph matching algorithm obtained jointly with S. Micali (FOCS 1980). Central to this structure is a definition of blossoms from the perspective of minimum-length alternating paths, and a study of properties of these paths with respect to the blossoms. A simple description of the algorithm is made possible in light of this structure.


25 October 1989

Margaret Jacks Hall 352. 12:30 p.m.

Umesh Vazirani (Berkeley)

Learning on a Graph.

We give a generalization of Valiant's distribution free model of learning: in our model the positive and negative examples of the concept to be learnt are placed on the vertices of a large (arbitrary) graph. The learner performs a random walk on this graph, and tries to learn the concept based on the data encountered. Valiant's model is the case when the walk is performed on the complete graph. The advantage of the generalization is that the model can express short term correlations in the observed data (e.g. two successively observed examples of chairs are likely to look similar).

The relationship between learning on a graph and data compression can now be expressed as a natural problem about random walks on graphs. We show the equivalence between learning nested concepts on a graph and data compression by proving the following theorem: Consider a random walk on an arbitrary n-vertex graph G whose vertices are labelled with (arbitrary) integers. Then the expected length of the lexicographically first increasing subsequence of integers encountered in a random walk of length t is at most sqrt(t) log n.

This is joint work with David Aldous.


26 October 1989

Margaret Jacks Hall 352. 12:30 p.m.

Michael Kharitonov (Stanford)

Lower Bounds for Pseudorandom Number Generators

In this paper we look at computational resources necessary to generate pseudorandom strings. We prove lower bounds on pseudorandom generation with respect to random polynomial-time testers. We describe several techniques for proving such bounds. By enumerating configurations of the generator that are consistent with its current output, we show that a one-way log-space machine cannot generate even an $(n+1)$-bit pseudorandom string from an $n$-bit string. We strengthen this result by exhibiting an upper bound showing that under a realistic complexity assumption, there is a one-way pseudorandom generator that uses slightly more than logarithmic space.

Based on a language-recognition technique, we prove that one-way pushdown machines (with no restriction on their space size) also cannot produce an extra bit of pseudorandomness, and we show that bounded-reversal log-space machines cannot produce more than a linear number of pseudorandom bits.

We also introduce a notion of separation of machine-based complexity classes based on their ability (or inability) to generate pseudorandom strings. We discuss how this separation notion, combined with the assumption that secure pseudorandom generators with respect to polynomial-time machines exist, can be used to separate $\cal P$ from log-space.

Joint work with Andrew V. Goldberg and Moti Yung.


9 November 1989

Margaret Jacks Hall 352. 12:30 p.m.

Joseph Cheriyan (Universitaet des Saarlandes)

A Randomized Maximum-Flow Algorithm

A network is a graph each of whose edges has an associated real number. Let n and m denote the number of vertices and edges, respectively, and U denote the maximum absolute edge weight. An algorithm for a problem on networks is said to be strongly polynomial if its running time depends only on n (and m).

Ahuja and Orlin, using a generic maximum-flow algorithm due to Goldberg and Tarjan, recently developed an O(nm + n^2 log U)-time algorithm for the restricted class of integer networks.

Based on this work, a strongly-polynomial algorithm is presented, and its running time is shown to be O(nm + #selects log n), where #selects denotes the expected number of iterations in the execution. By randomly permuting the adjacency lists at the start (this is the only use of randomization) and using a simple analysis, the trivial O(nm) bound on #selects is improved to O(n (nm^2 log n)^{1/3}) with high probability. This gives an O(nm + n^2 (log n)^4) maximum-flow algorithm. (The fastest deterministic algorithm takes O(nm log(n^2/m)) time.)

(This is joint work with Torben Hagerup.)


16 November 1989

Margaret Jacks Hall 352. 12:30 p.m.

Noga Alon (IBM Almaden and Tel Aviv University)

Separating graphs with an excluded minor.

Let G be an n-vertex graph with nonnegative weights whose sum is 1 assigned to its vertices, and with no minor isomorphic to a given h-vertex graph H. We prove that there is a set X of no more than h^{3/2}n^{1/2} vertices of G whose deletion creates a graph in which the total weight of every connected component is at most 1/2. This extends significantly a theorem of Lipton and Tarjan for planar graphs. We exhibit an algorithm which finds, given an n-vertex graph G with m edges and with weights as above and an h-vertex graph H, either such a set X or a minor of G isomorphic to H. The algorithm runs in time O(h^{1/2} n^{1/2} m).

Besides several combinatorial applications, these results supply extensions of the many known applications of the Lipton-Tarjan separator theorem from the class of planar graphs (or that of graphs with bounded genus) to any class of graphs with an excluded minor. For example, it follows that for any fixed graph H, given a graph G with n vertices and with no H-minor, one can approximate the size of the maximum independent set of G up to a relative error of 1/sqrt{log n} in polynomial time, find that size exactly and find the chromatic number of G in time 2^{O(sqrt{n})}, and solve any sparse system of n linear equations in n unknowns whose sparsity structure corresponds to G in time O(n^{3/2}).

This is joint work with Paul Seymour and Robin Thomas.


30 November 1989

Margaret Jacks Hall 352. 12:30 p.m.

Miklos Ajtai (IBM Almaden)

Induction, Pigeonhole Principle and Parity.

Induction up to n, the Pigeonhole Principle for the number n, and the statement that a set of size 2n+1 cannot be partitioned into subsets of size two are all axioms expressing the finiteness of a set of size n. We show that in a natural sense the Pigeonhole Principle is stronger than Induction, and the Parity Axiom is stronger than the Pigeonhole Principle. It follows from these results that the Boolean formula expressing the Pigeonhole Principle for n has no propositional proof of polynomial length if all of the formulas in the proof are of constant depth and may use only the original variables.


1 December 1989

Margaret Jacks Hall 352. 12:30 p.m.

David Zuckerman (Berkeley)

New Pseudo-Random Generators for Weak Random Sources

We consider the following model for a weak random source: the source is asked only once for $R$ bits, and the source outputs an $R$-bit string such that no string has probability more than $2^{-\delta R}$ of being output, for some fixed $\delta >0$. We show that under the Generalized Paley Graph Conjecture, there is a pseudo-random generator that simulates RP using as a seed a string from such a source for any $\delta > 0$. For $\delta > 1/2$, we can simplify our generator considerably and prove its correctness without relying on any unproven assumption. Cohen and Wigderson [CW] have also solved the case $\delta > 1/2$ using different techniques. Finally, we prove that for any $\delta > 0$ and for all but an exponential fraction of $\delta$-sources, an even simpler generator can simulate RP.


8 December 1989

Margaret Jacks Hall 352. 12:30 p.m.

Danny Dolev (IBM Almaden and Hebrew University)

Sharing Memory Robustly in Message-Passing Systems

{\em Emulators} that translate algorithms from the shared-memory model to two different message-passing models are presented. Both are achieved by implementing an atomic, wait-free, single-writer multi-readers register in unreliable, asynchronous networks. The two message-passing models considered are a complete network with processor failures and an arbitrary network with dynamic link failures.

These results make it possible to view the shared-memory model as a higher-level language for designing algorithms in asynchronous distributed systems. We can automatically emulate any wait-free algorithm based on atomic, wait-free, single-writer multi-readers registers in message-passing systems. The overhead introduced by these emulations is polynomial in the number of processors in the systems.

Immediate new results are obtained by applying the emulators to the following constructions: multi-writer-multi-readers registers, concurrent time-stamp system, atomic snapshot scan, and randomized consensus algorithms.

Joint work with Hagit Attiya and Amotz Bar-Noy.


14 December 1989

Margaret Jacks Hall 352. 12:30 p.m.

Joel Friedman (Princeton)

The Second Eigenvalue of Hypergraphs

We define a notion of second eigenvalue for regular hypergraphs. It turns out that a random hypergraph has a very small second eigenvalue. A class of applications of graphs with small second eigenvalue would be greatly improved if we could explicitly construct hypergraphs with as small a second eigenvalue as random ones have. Unfortunately we have no such constructions as yet. In this talk we define the second eigenvalue, discuss the second eigenvalue of random hypergraphs, and discuss some constructions and applications. This is joint work with Avi Wigderson.


18 January 1990

Margaret Jacks Hall 352. 12:30 p.m.

Seffi Naor (Stanford)

Small-bias probability spaces: efficient constructions and applications.

We show how to efficiently construct a small probability space on $n$ binary random variables such that for every subset, its parity is either zero or one with ``almost'' equal probability. They are called $\epsilon$-biased random variables. The number of random bits needed to generate the random variables is $O(\log n + \log(1/\epsilon))$. Thus, if $\epsilon$ is polynomially small, then the size of the sample space is also polynomial. $\epsilon$-biased random variables can be used to construct ``almost'' $k$-wise independent random variables, where $\epsilon$ is a function of $k$.
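
Concretely, the ``almost'' can be formalized as follows: a distribution over $x \in \{0,1\}^n$ is $\epsilon$-biased when, for every nonempty subset $S$ of the indices,

    $| \Pr[\bigoplus_{i \in S} x_i = 0] - \Pr[\bigoplus_{i \in S} x_i = 1] | \le \epsilon$.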

These probability spaces have various applications:

(1) Derandomization of algorithms: many randomized algorithms that require only $k$-wise independence of their random bits (where $k$ is $O(\log n)$) can be derandomized by using $\epsilon$-biased random variables.

(2) Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication.

(3) Exhaustive testing of combinatorial circuits. We provide the smallest known family for such testing.

(4) Communication complexity: two parties can verify equality of strings with high probability exchanging only a logarithmic number of bits.

(5) Hash functions: we can construct a polynomial-sized family of hash functions such that with high probability, the sum of a random function over two different sets is not equal.

This is joint work with Moni Naor.


25 January 1990

Margaret Jacks Hall 352. 12:30 p.m.

Danny Sleator (Carnegie Mellon)

Competitive Randomized Paging Algorithms

The {\em paging problem} is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. We develop the {\em partitioning algorithm}, a randomized on-line algorithm for the paging problem. We prove that its expected cost on any sequence of requests is within a factor of H_k of optimum. (H_k is the k^{th} harmonic number, which is about ln(k).) No on-line algorithm can perform better by this measure. Our result improves by a factor of two the best previous algorithm.
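
Written out (writing PART for the partitioning algorithm, with $c$ a constant independent of the request sequence, as usual in competitive analysis), the guarantee is that for every request sequence $\sigma$,

    $E[\mbox{cost}_{PART}(\sigma)] \le H_k \cdot \mbox{cost}_{OPT}(\sigma) + c$, where $H_k = \sum_{i=1}^{k} 1/i = \ln k + O(1)$.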

Joint work with Lyle A. McGeoch.


1 February 1990

Margaret Jacks Hall 352. 12:30 p.m.

Serge Plotkin (Stanford)

NETWORK DECOMPOSITION AND LOCALITY IN DISTRIBUTED COMPUTATION

We introduce a concept of {\em network decomposition}, a partitioning of an arbitrary graph into small-diameter connected components, such that the graph created by contracting each component into a single node has low chromatic number and low arboricity. We present an efficient distributed algorithm for constructing such a decomposition, and demonstrate its use for design of efficient distributed algorithms.

Our method yields new deterministic distributed algorithms for finding a maximal independent set in an arbitrary graph and for $(\Delta+1)$-coloring of graphs with maximum degree $\Delta$. These algorithms run in $O(n^\epsilon)$ time for $\epsilon = O(\sqrt{\log \log n / \log n})$, while the best previously known deterministic algorithms required $\Omega (n)$ time. Our techniques can also be used to remove randomness from the previously known most efficient distributed breadth-first search algorithm, without increasing the asymptotic communication and time complexity.

Joint work with B. Awerbuch (MIT), A. Goldberg (Stanford), and M. Luby (ICSI).


8 February 1990

Margaret Jacks Hall 352. 12:30 p.m.

Anil Gangolli (Stanford)

Mean-Value Estimation with Markov Chains

A common statistical problem is to estimate the mean of an easily computable function on a set S under a given distribution P.

We use a theorem of Aldous on the variance of certain estimators to show two results:

* Given the same number of random input bits, adjacent correlated samples drawn from a stationary time-reversible Markov chain typically give better estimates than well-spaced roughly-independent ones drawn from the same chain. This eliminates a factor of log|S| from bounds for mean-estimation methods based on the second eigenvalue alone (e.g. Jerrum/Sinclair). As |S| may be exponential in the input size, the saving can be appreciable. We illustrate with an application to estimating the mean of a function on restricted permutation data under the uniform distribution (i.e. on random matchings).

* More interestingly, using a random walk on a type of expander graph over {0,1}^k, one can obtain n random binary k-tuples using only (k + c(n-1)) random input bits, for a constant c independent of k and n. These samples are correlated, but for the purposes of estimating the mean they are essentially as good as independent samples (which would cost nk random bits), and much cheaper. We discuss the relation of this result to the recent work of Impagliazzo and Zuckerman (FOCS 89).
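
A bit-accounting sketch of the walk-based sampler in Python (here `neighbors` is a hypothetical oracle returning the d neighbors of a vertex of some expander on {0,1}^k; the talk's specific graph family is not reproduced):

    import random

    def walk_samples(k, n, d, neighbors):
        v = random.getrandbits(k)                  # k bits for the start vertex
        samples = [v]
        for _ in range(n - 1):
            v = neighbors(v)[random.randrange(d)]  # about log2(d) bits per step
            samples.append(v)
        return samples

    # Total randomness: k + (n-1)*ceil(log2(d)) bits, with c = ceil(log2(d))
    # a constant, versus n*k bits for n independent samples.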


15 February 1990

Margaret Jacks Hall 352. 12:30 p.m.

Ronitt Rubinfeld (Berkeley)

Self-Testing/Correcting Programs with Applications to Numerical Problems

A {\em self-testing/correcting pair} for a function $f$ is a pair of probabilistic programs $(T,C)$ that have the following properties. Let $P$ be any program that purportedly computes $f$. $T$ and $C$ both make calls to $P$, and both are required to run much faster than any correct program for $f$, not counting the time for calls to $P$. Tester $T$ estimates the probability that $P$ incorrectly computes $f$ with respect to a distribution $D$ on inputs. If $P$ is not too faulty with respect to $D$, then corrector $C$ on input $x$ uses $P$ to compute $f(x)$ correctly with high probability, where this probability is independent of $x$. $(T,C)$ is an {\em efficient} pair if their total running time, including time for calls to $P$, is big Oh of the running time of $P$.

We present general techniques for constructing simple to program and efficient self-testing/correcting pairs for a variety of numerical problems, including integer multiplication, modular multiplication, matrix multiplication, inverting matrices, computing the determinant of a matrix, computing the rank of a matrix, integer division, modular exponentiation and polynomial multiplication.
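
To convey the flavor of a corrector, here is a sketch of the random self-reducibility idea for one linear function, f(x) = ax mod m (an illustration of the general approach, not the paper's construction verbatim): a program that is correct on most inputs can be used to compute f correctly everywhere.

    import random
    from collections import Counter

    def correct(P, x, m, trials=21):
        # f(x1) + f(x2) = f(x) mod m whenever x1 + x2 = x mod m, and each of
        # x1, x2 is individually uniform; so if P errs on few inputs, a
        # majority vote over random splits of x recovers f(x).
        votes = Counter()
        for _ in range(trials):
            x1 = random.randrange(m)
            x2 = (x - x1) % m
            votes[(P(x1) + P(x2)) % m] += 1
        return votes.most_common(1)[0][0]

    # A program for f(x) = 7x mod 101 that is wrong on a small slice of inputs.
    P = lambda x: (7 * x + (3 if x % 20 == 0 else 0)) % 101
    print(correct(P, 33, 101), (7 * 33) % 101)  # agree with high probability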

One of the central ideas is to design a self-testing/correcting pair that makes calls to a library of programs for a variety of functions. For example, the self-testing/correcting pair for computing the rank of a matrix makes calls to programs for computing matrix multiplication, inverting matrices and computing determinants of matrices, but does not assume that any particular answer it receives from these programs is correct.

Joint work with Manuel Blum and Michael Luby.


22 February 1990

Margaret Jacks Hall 352. 12:30 p.m.

Eli Gafni (UCLA and Tel-Aviv University)

Bootstrap Network Resynchronization: An Efficient Technique for End-to-End Communication

Many fault tolerant network-communication algorithms rely on the ability of the network nodes to distinguish between ``old'' and ``new'' messages. Usually such algorithms accomplish this distinction by time-stamping messages, and hence have unbounded complexity. Here we give an efficient technique to achieve this distinction, without time-stamps, in the weakest model of dynamic networks. The technique supports the growth of a ``resynchronized territory'' around a resynchronization initiator. ``Old'' messages that try to enter the resynchronized territory are recognized as such. The correctness of the process of enlarging the territory hinges on the fact that the territory is already resynchronized and relays only ``new'' messages, hence the name ``Bootstrap-Resynchronization.'' We apply the technique to the end-to-end communication problem to yield an $O(nE)$ message complexity algorithm. Although the best previous algorithm for this problem [AMS 89] has polynomial message complexity, its complexity is too high to render it practical.

(Joint work with Yehuda Afek, TAU)


1 March 1990

Margaret Jacks Hall 352. 12:30 p.m.

Jack Edmonds (Stanford)

Existentially polynomial formulas.

"Expressible by an EP-formula" is what defines a predicate to be in the class NP (or EP). We will give a particularly simple proof of the Cook-Levin NP-Completeness theorem, and also show how EP-formulas have a more positive use.


8 March 1990

Margaret Jacks Hall 352. 12:30 p.m.

Moni Naor (IBM Almaden)

Public-key cryptosystems provably secure against chosen ciphertext attack

In a public-key cryptosystem, any user can easily encrypt messages but, if the cryptosystem is secure, cannot decrypt messages without access to the decryption device.

A chosen ciphertext attack on a cryptosystem consists of a preliminary stage, where the attacker is allowed to encrypt any message she wants and is allowed to use the decryption device as a black box to decrypt any message she wants. Afterward, the attack is considered successful if she is able to decrypt subsequent messages without the help of the decryption device.

We show how to use a non-interactive zero-knowledge proof system to convert any public-key cryptosystem into one that is secure against a chosen ciphertext attack.

Blum, Feldman and Micali introduced non-interactive zero-knowledge proof systems for language membership. Such a system allows users who share a random string to prove membership in a language non-interactively (i.e. by sending a single message), without revealing any other useful information.

Using a non-interactive proof system suggested by De Santis, Micali and Persiano, this yields a concrete implementation of a public-key cryptosystem secure against chosen ciphertext attack based on the quadratic residuosity assumption.

This is joint work with Moti Yung.


15 March 1990

Margaret Jacks Hall 352. 12:30 p.m.

Marshall Bern (Xerox PARC)

ON-LINE ALGORITHMS FOR LOCATING CHECKPOINTS

We consider the problem of choosing locations in a long computation at which to save intermediate results. Such checkpoints allow faster recomputation of arbitrary requested points within the computation. We investigate adaptive solutions to the problem by abstracting the problem to a quite nonstandard server problem. In this talk, I'll give some of the proofs not given in the BATS talk on the same work.

[Joint work with Dan Greene, Arvind Raghunathan, and Madhu Sudan]


4 April 1990

Margaret Jacks Hall 352. 12:30 p.m.

Leonid Levin (Boston University)

Pseudo-Random Bits: Full Security at Last.

I will show a tight relationship between two remarkable phenomena: pseudo-random bits and one-way functions.

Function inverting is a challenging task. Consider F(x) which checks whether x is an absolutely complete mathematical proof and outputs its length and the last proven fact. F is trivial to compute, but inverting it amounts to finding shortest proofs for any given theorem! Still, the problem (known as "P =? NP") of actually proving that some functions are one-way (i.e. easy to compute while hard to invert) remains open. Security of F is the time needed to notice that some algorithm can successfully invert F with an observable frequency.

A reason for F being one-way may be in a particularly simple predicate b(x) that is determined by but hard to compute from F(x). Moreover, b(x) may even be hard to guess with any noticeable correlation (then F must be one-way on most inputs). Security of b is the time needed to notice the correlation. b(x) may be determined by F(x) while appearing completely random. So, randomness and determinism may not be so opposite as our scientific culture presumes.

Such (hypothetical) hidden bits play a significant role in foundations of (pseudo)randomness, information, cryptography, and other areas. Blum, Micali and Yao found hidden bits for functions of a special form (assuming there are one-way ones among them). [Goldreich-Levin, STOC-89] shows that every one-way function hides, with at most a polynomial security loss, almost all linear predicates. We will see that no polynomial security loss actually occurs.


12 April 1990

Margaret Jacks Hall 352. 12:30 p.m.

Tomas Feder (Stanford)

Product Graph Representations

We study a hierarchy of canonical representations of graphs as subgraphs of cartesian products of graphs. This hierarchy starts with the isometric representation, includes the 2-isometric representation, and ends with the cartesian prime factorization. We show that all three representations can be obtained in O(n m) time using O(m) space, for graphs with n vertices and m edges. Our algorithms give immediate parallel versions that use n^3 processors and run in O(log^2 n) time.
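
As background, the product operation itself is easy to state in code (a small illustration under the standard definition; the talk's factoring algorithms go in the much harder opposite direction):

    import itertools

    def cartesian_product(G, H):
        # G, H: adjacency dicts. Vertices of the product are pairs (u, v);
        # (u, v) ~ (u', v') iff u = u' and v ~ v', or v = v' and u ~ u'.
        P = {}
        for u, v in itertools.product(G, H):
            P[(u, v)] = [(u, w) for w in H[v]] + [(w, v) for w in G[u]]
        return P

    K2 = {0: [1], 1: [0]}
    print(cartesian_product(K2, K2))  # K2 x K2 is the 4-cycle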


26 April 1990

Margaret Jacks Hall 352. 12:30 p.m.

Kireeti Kompella (USC)

Program Checkers for Modular Exponentiation

Program checkers are an algorithmic approach to the program correctness issue. Introduced by Manuel Blum, they provide in many cases a practical solution to the problem, "Has this program produced a correct output at the given input value?" Checkers also pose important theoretical questions, among them, whether it is always easier to check than to compute, and determining the structure of problems that admit fast checkers.

This talk presents program checkers for modular exponentiation: given integers a, b, c, compute a^b mod c. We show that, whereas current techniques for exponentiating require O(log b) multiplications, a processor restricted to performing only O(loglog b) multiplications can check exponentiation using O(loglog b) queries of the program. Independently, Ronitt Rubinfeld has obtained a checker that requires O((loglog b)^4) multiplications and O((loglog b)^3) queries. We also demonstrate a hypothesis under which checking can be done with a constant number of multiplications and queries. Such a checker would imply that any program for modular exponentiation can be transformed to a "self-checking" program with asymptotically no loss in efficiency.
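
For reference, the O(log b) baseline is standard square-and-multiply; a short Python sketch:

    def mod_exp(a, b, c):
        # Computes a^b mod c with O(log b) multiplications.
        result = 1
        a %= c
        while b > 0:
            if b & 1:                # multiply in the current power of a
                result = (result * a) % c
            a = (a * a) % c          # square
            b >>= 1
        return result

    print(mod_exp(5, 117, 19) == pow(5, 117, 19))  # True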

This work was done jointly with Leonard Adleman.


3 May 1990

Margaret Jacks Hall 352. 12:30 p.m.

Ching-Tien (Howard) Ho (IBM Almaden)

Optimal Communications on Hypercubes

Many scientific computations can be characterized by local computation steps interposed with a set of regular communication steps. If frequently used communication primitives are implemented efficiently then high-level parallel programming is feasible for a large class of problems. In this talk, we demonstrate how parallel computing on hypercubes can be made more convenient and efficient for a large class of problems by providing a set of optimal communication primitives.

For many common communication patterns, we present optimal routing algorithms which, in most cases, outperform the previous best known algorithms by an order of O(log N) in an N processor hypercube. The algorithms use the edge-disjoint spanning trees, the balanced tree or the rotated binomial trees on hypercubes. The techniques used in the algorithms are extensible to a large class of networks, and similar optimal algorithms can be derived for multidimensional meshes/torus, generalized hypercubes, folded hypercubes, butterflies and star graphs.

The set of communication primitives includes broadcasting, reduction, personalized communication (scatter/gather), transposition, dimension rotation (shuffle), dimension reflection, conversion between various encodings and dimension permutations. We present implementation results on the Intel iPSC hypercube. Based on these efficient communication primitives, we have also implemented several applications such as matrix multiplication, matrix transpose, tridiagonal systems solvers, and have a suite of algorithms for various problem sizes and machine parameters.
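
As a taste of the simplest primitive on this list, broadcasting along a binomial tree reaches all N = 2^d nodes of a d-dimensional hypercube in d rounds (a standard textbook scheme, not necessarily the talk's optimized variant):

    def broadcast_rounds(d, source=0):
        # In round i, every node holding the message forwards it across
        # dimension i (flip bit i of the node label).
        have = {source}
        for i in range(d):
            have |= {v ^ (1 << i) for v in have}
        return have

    print(len(broadcast_rounds(4)))  # all 16 nodes reached after 4 rounds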


10 May 1990

Margaret Jacks Hall 352. 12:30 p.m.

Jack Snoeyink (Stanford)

A trivial knot with exponential-sized spanning disks

How does one determine if a closed polygonal curve in space is knotted or if it can be pulled into a circle without crossing itself or being cut? Though this ``knot triviality'' problem is known to be decidable, it is not known to be in polynomial time. Furthermore, the mathematical tools commonly used for this problem---knot groups and knot polynomials---lead to computationally intractable problems.

Rather than look at algebraic certificates of knot triviality, we could look at geometric certificates, such as the Seifert surface. If a knot is the boundary of a polyhedral surface without holes or self-intersections, then it is trivial---it can be untied without cutting.

Researchers have conjectured that every knot has a `spanning surface' with only polynomially many facets; if this were true then knot triviality would be in NP. Unfortunately, we disprove this conjecture by exhibiting trivial knots composed of $n$ line segments that require exp(c n) facets, for a positive constant c. The proof uses a geometric approach: slicing the knot and its spanning surface and discovering their geometric structure.

I expect the talk to be self-contained, but be sure to bring your lunch because the algorithmic content will be small.


24 May 1990

Margaret Jacks Hall 352. 12:30 p.m.

Peter Sarnak (Stanford and IBM Almaden)

Diophantine problems and graphs

We will discuss the problem of counting integer points on sets defined by polynomial equations and related eigenvalue problems. This relationship allows us to give simple and elementary constructions of expander graphs.


31 May 1990

Margaret Jacks Hall 352. 12:30 p.m.

Yitzhak Birk (IBM Almaden)

CONNECTING GRID POINTS TO THE GRID'S BOUNDARY USING NON-INTERSECTING STRAIGHT LINES

Consider the following problem. Given a set of N points in a rectangular (m x n) grid, determine whether it is possible to connect all points to the grid's boundary using N non-intersecting straight (horizontal or vertical) lines. If such a connection is possible, provide a legal set of lines. In this talk, we present a sequential algorithm for solving the problem. Its worst-case time complexity is the minimum of O(m+n) and O(N log N). This improves upon a recent O(N^2) algorithm by Roychowdhury and Bruck. We then consider two related topologies: (i) a 3-dimensional grid, and (ii) a 2-dimensional grid which is partitioned into subgrids. In the latter case, we allow a connection from at most one of two adjacent subgrids to any given point on their common boundary. We show that for both topologies the problem is NP-complete.
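
To make the problem concrete, here is a brute-force validity checker (ours, not the talk's algorithm): given each point's escape direction, two axis-parallel segments intersect exactly when both their x- and y-extents overlap.

    def segment(point, direction, m, n):
        # Closed axis-parallel segment from `point` to the grid boundary,
        # returned as ((x1, y1), (x2, y2)) with x1 <= x2 and y1 <= y2.
        x, y = point
        if direction == 'left':  return ((0, y), (x, y))
        if direction == 'right': return ((x, y), (m - 1, y))
        if direction == 'down':  return ((x, 0), (x, y))
        if direction == 'up':    return ((x, y), (x, n - 1))

    def intersects(s, t):
        (ax1, ay1), (ax2, ay2) = s
        (bx1, by1), (bx2, by2) = t
        return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

    def valid(assignment, m, n):
        segs = [segment(p, d, m, n) for p, d in assignment.items()]
        return not any(intersects(s, t)
                       for i, s in enumerate(segs) for t in segs[i + 1:])

    # Two points in a 4x4 grid sent to opposite sides: legal.
    print(valid({(1, 1): 'left', (2, 2): 'right'}, 4, 4))  # True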

The 2-dimensional problems (with an additional constraint, which we also address) were originally posed by S.-Y. Kung, S.N. Jean and C.W. Chang in the context of fault-tolerant arrays of standard cells. A solution to these problems directly provides a reconfiguration assignment for a certain class of rectangular arrays. This can be used to increase the manufacturing yield of VLSI or WSI arrays and to reduce the reconfiguration time of operational fault-tolerant arrays. The partitioned-grid problem is motivated by the fact that for very large arrays it is necessary to provide additional rows and columns of spare cells, since the ratio of the total number of cells to spare cells increases as the array grows (area/perimeter). Additional applications may include automated assembly and testing.

This is joint work with Jeff Lotspiech.


7 June 1990

Margaret Jacks Hall 352. 12:30 p.m.

Yatin Saraiya (Stanford)

Subtree-elimination algorithms in deductive databases

A number of optimizations in deductive databases involve the transformation of database logic programs to simpler, more efficiently evaluable programs. In many cases, the desired transformation is a simple syntactic restriction on the form of the original program, and the correctness of the transformation indicates the existence of a desirable property in the program. For example, the existence of {\it basis-linearizability} in a nonlinear program indicates that the program is inherently linear, and permits the use of special-purpose query evaluators for linear recursions. Similarly, if a program is {\it sequencable}, then it is conducive to a pipelined evaluation. Further, the existence of $k$-{\it boundedness\/} in a program permits the elimination of recursion overhead in evaluating the program. We investigate the complexity of deciding whether such transformations may correctly be applied to a given program.

Each of the problems that are mentioned above may be described in terms of the {\it subtree-elimination problem}, which we define, and for which we provide algorithms that are correct but not always complete. Further, we show that the basis-linearizability and sequencability problems are undecidable. Our construction yields tight undecidability results for the detection of program equivalence. In addition, we provide a decision procedure for the detection of basis-linearizability in a restricted class of recursive programs. Finally, we consider the tractability of simple cases of the basis-linearizability, sequencability and $k$-boundedness problems, by providing $\cal NP$-hardness results and efficient algorithms for various classes of each problem. Our investigation yields a complete characterization of the complexity of testing the equivalence of conjunctive queries, in terms of the polynomial hierarchy.


14 June 1990

Margaret Jacks Hall 352. 12:30 p.m.

Naoki Abe (UC Santa Cruz)

On the Computational Complexity of Approximating Distributions by Probabilistic Automata

We address the question of approximating an arbitrary, unknown source distribution by distributions generated by probabilistic automata. Probabilistic automata (PA's), and hidden Markov models (HMM's) which are closely related to PA's, are used extensively as models for probabilistic generation of speech signals for the purpose of speech recognition. We investigate a hitherto neglected aspect of this important, well-studied problem: Does there exist an {\em efficient} training algorithm such that the trained PA's {\em provably converge} to a model close to the optimal one with high confidence, after only a feasibly small set of training data? We model this problem in the framework of computational learning theory and analyze the sample as well as computational complexity. We show that the number of examples required for training PA's is moderate -- essentially linear in the number of transition probabilities to be trained and a low-degree polynomial in the example length and parameters quantifying the accuracy and confidence. Computationally, however, training PA's is quite demanding: Fixed state size PA's are trainable in time polynomial in the accuracy and confidence parameters and input length, but {\em not} in the alphabet size, unless $RP = NP$. The latter result is shown via a strong non-approximability result for the single string training problem for 2-state PA's, which is of independent interest.

Joint work with Manfred Warmuth.


21 June 1990

Margaret Jacks Hall 352. 12:30 p.m.

Michael Hirsch (Berkeley)

Applications of Topology to Lower Bound Computations

When an algorithm is viewed as a branching tree, the Topological Complexity of the algorithm is the number of branches in the tree. The Topological Complexity of a problem is the smallest Topological Complexity of any algorithm to solve the problem. This is a quite different measure of complexity than the traditional computational complexity, though not unrelated; its logarithm clearly provides a lower bound for the computational complexity of the problem.

The name "Topological Complexity" is motivated by a definition which is purely topological, involving only spaces and continuous maps with no reference to computers or algorithms. Under this alternative viewpoint, it is natural to bring forth the techniques and technology of topology to try to calculate Topological Complexity. This has been done with some success in the case of root findng for univariate (Smale, Vasiliev) and multivariate (Levine) polynomials.

In this talk I will examine a new problem, the New Point Problem, which is particularly natural to view topologically and discuss its Topological Complexity. The emphasis will be on the topological techniques involved with a view towards perhaps applying them to more central problems in Computer Science, as well as on the idea that there are many results in the mathematical literature that are useful to computer scientists.


19 July 1990

Margaret Jacks Hall 352. 12:30 p.m.

Philip Milne (University of Bath)

An extension of Sturm's theorem to higher dimensions

This informal talk will explain the structure of a simple extension of Sturm's theorem to higher dimensions. A Sturm sequence may be used to count the number of real roots of a polynomial equation which lie within an interval along the real line. The new sequence extends this notion to polynomial maps in higher dimensions. It may be used to count, in two dimensions for example, the number of simultaneous real solutions of a pair of bivariate polynomials which lie within a coordinate-aligned rectangle. The seminar will not be given by a mathematician, and will therefore involve pretty, rather than deep, mathematics.
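
For background, the one-dimensional statement is easy to try out (a sketch using sympy; the higher-dimensional sequence from the talk is not reproduced here):

    from sympy import symbols, sturm, sign

    x = symbols('x')
    p = x**3 - 3*x + 1  # three real roots
    chain = sturm(p)    # Sturm sequence: p, p', then negated remainders

    def sign_changes(value):
        signs = [sign(q.subs(x, value)) for q in chain]
        signs = [s for s in signs if s != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

    # Sturm's theorem: the number of distinct real roots in (a, b] equals
    # V(a) - V(b), the drop in the number of sign changes.
    a, b = -2, 2
    print(sign_changes(a) - sign_changes(b))  # 3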


9 August 1990

Margaret Jacks Hall 352. 12:30 p.m.

Jim Aspnes (Carnegie-Mellon)

The expressive power of Voting Polynomials.

Boolean functions from $\{0,1\}^{n}$ to $\{0,1\}$ can be represented in multiple ways. We consider the representation of boolean functions as n-variable polynomials over the reals, which we call Voting Polynomials. A Voting Polynomial represents a boolean function F when the polynomial takes a positive value on all points in $\{0,1\}^{n}$ where F has the value 1, and takes a negative value on the remaining points where F is 0. A Voting Polynomial approximates a boolean function F when its sign is correct on a large number of points of $\{0,1\}^{n}$. We focus on the following question: Given a positive integer k and a boolean function F, how well can the best Voting Polynomial of degree k approximate F, i.e., how few points can it be wrong on? We give tight bounds in the case of the parity function. We give bounds with an intriguing gap in the case of arbitrary symmetric functions. Our lower bound technique can be applied to any boolean function.

These results have applications in complexity theory. For example, the lower bounds for parity repair a broken proof of Bennett and Gill stating that PP is not in Parity-P relative to a random oracle. The bounds also give a proof that parity can't be well approximated by an AC0 circuit with one majority gate.

THIS WORK IS BEST APPRECIATED BY CONSIDERING THE FOLLOWING CLOSELY RELATED PUZZLES:

Imagine that n people have a random bit on their forehead. (Uniform distribution.) Assume n is an odd number. The people want to vote on the parity of the n bits. Formally, each person has access to all bits but his own; there is no communication between the people. Each person casts one vote. The result of the voting is the majority of the votes cast. The voters wish the result of the vote to be the parity of the n bits.

A voting strategy is a procedure for each voter. The procedure takes what the voter sees and returns a vote. It is clear by the nature of the parity function that a voter can have only a 50% chance of casting a correct vote. (Each voter has a separate identity so each procedure can be different.)

A voting strategy succeeds with probability p if the probability that the voters vote the parity is p. (Varying over all possible assignments of bits to foreheads.)
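
For readers who want to experiment before peeking at the puzzles, here is a small brute-force harness (ours, not from the talk) for measuring the success probability of a single-vote strategy:

    import itertools

    def success_probability(n, strategy):
        # strategy(i, visible) -> vote in {0, 1}, where visible is the
        # tuple of the n-1 bits voter i sees.
        wins = 0
        for bits in itertools.product([0, 1], repeat=n):
            parity = sum(bits) % 2
            votes = [strategy(i, bits[:i] + bits[i + 1:]) for i in range(n)]
            result = 1 if sum(votes) > n / 2 else 0
            wins += (result == parity)
        return wins / 2**n

    # Placeholder strategy: vote the parity of the visible bits.
    naive = lambda i, visible: sum(visible) % 2
    print(success_probability(5, naive))  # 0.5, no better than a coin flip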

Two puzzles:

1) Give a very simple strategy the voters can use to succeed with a probability converging to one as n grows. (The proof should also be very simple. I expect your strategy to fail with prob 1/O(sqrt(n)) )

2*) Find a strategy that is optimal for n equal to a power of two minus one. (This can be extended to a strategy that is optimal for all n.)

Now change the rules slightly to allow for "Chicago voting". Each person can vote as many times as he/she wants! The other conditions remain the same.

Two more puzzles:

1) Find a strategy that fails on only one situation (with prob 1/2^n). (This is clearly optimal.)

2) Show that in the case of any function other than parity or its negation the voters can be correct all the time.

Again, let's change the scenario a bit. This time each of the n people gets a bit in his/her hand. (Everybody only sees one bit.) Again, let's allow Chicago voting:

Puzzle: Find an optimal strategy for the voters.

Joint work with Richard Beigel, Merrick Furst, and Steven Rudich.


16 August 1990

Margaret Jacks Hall 352. 12:30 p.m.

Orli Waarts (Stanford)

A Characterization of Eventual Byzantine Agreement

Fault tolerant systems often require means by which independent processors can arrive at an exact mutual agreement of some kind. The eventual Byzantine agreement problem (EBA) captures a fundamental aspect of such coordination. The problem requires n processors, at most t of which are faulty, to agree on a value, in such a way that all the nonfaulty processors decide on the same value, and if all processors started with the same initial value, they should decide on this value.

This paper investigates EBA in the case of crash failures (where a processor that is faulty sends no messages after it fails) and in the more general case of omission failures (where a processor may continue to send messages after it has failed). The emphasis is on characterizing optimal EBA protocols (EBA protocols in which the processors decide at least as fast as in any other EBA protocol) in terms of the states of knowledge required by the processors in order to attain EBA. We define a new state of knowledge in terms of which we can characterize necessary and sufficient conditions for attaining EBA. Using our characterization, we provide a technique that allows us to start with any EBA protocol, apply a certain construction twice, and arrive at an optimal EBA protocol.

Although we focus here on Byzantine agreement, it is straightforward to extend our results to general coordination problems.

The talk is self-contained. This is joint work with Joe Halpern and Yoram Moses, and will be presented at the 9th PODC, August 1990.


23 August 1990

Margaret Jacks Hall 352. 12:30 p.m.

Richard Beigel (Yale)

$\PP$ is closed under intersection.

Abstract: In his seminal paper on probabilistic Turing machines, Gill~\cite{GillRandom} asked whether the class $\PP$ is closed under intersection and union. We give a positive answer to this question. In fact, $\PP$ is closed under polynomial-time multilinear reductions. In circuits, this allows us to combine several threshold gates into a single threshold gate, while increasing depth by only a constant. Consequences in complexity theory include definite collapse and plausible separation of certain query hierarchies over $\PP$. Consequences in the study of circuits include the simulation of circuits with a small number of threshold gates by circuits having only a single threshold gate at the root (perceptrons), and a lower bound on the number of threshold gates needed in order to compute the parity function.

This is joint work with Nick Reingold and Dan Spielman.