AFLB is the Algorithms for Lunch Bunch. It is run by Misha Kharitonov. Meetings are Thursdays at 12:30 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.


  1. 29 September 1988: Dr. Ron Shamir (School of Mathematics, Tel Aviv University). An algorithm for the recognition of greedily solvable transportation problems.
  2. 6 October 1988: Moti Yung (IBM Almaden). Universal One-Way Hash Functions and their Cryptographic Applications.
  3. 13 October 1988: Marek Karpinski (University of Bonn and ICSI). (no title provided).
  4. 18 October 1988: Milena Mihail (Harvard University and UC-Berkeley). On the Magnification of 0-1 Polytopes.
  5. 27 October 1988: Thane Plambeck. Periodicity Results for Some Simple Octal Games.
  6. 4 November 1988: Jin-Yi Cai (Yale University). Take a Walk and Grow a Tree on the Boolean Hypercube.
  7. 17 November 1988: Shmuel Safra (Weizmann Institute). On The Complexity of $\omega$-Automata.
  8. 1 December 1988: Alok Aggarwal (IBM Yorktown). Complexity Considerations in Memory Hierarchies.
  9. 8 December 1988: Tom Leighton (MIT). (no title provided).
  10. 15 December 1988: Ilan Vardi (Stanford). A Medical Application of Combinatorial Optimization.
  11. 12 January 1989 (?): Don Knuth (Stanford). Stable Husbands.
  12. 19 January 1989: Bernd Sturmfels. Recent Applications of Gröbner Bases Theory.
  13. 24 January 1989: David Shmoys (MIT). Approximation Algorithms for Constrained Parallel Machine Scheduling Problems.
  14. 26 January 1989: Eva Tardos. Combinatorial algorithms for the generalized network flow problem.
  15. 2 February 1989: Arvind Raghunathan (UC-Berkeley). A Constructive Result from Graph Minors: Linkless Embeddings.
  16. 9 February 1989: Rajeev Motwani (Stanford). (no title provided).
  17. 14 February 1989: Mauricio Karchmer (University of Toronto). Communication Complexity & Circuit Complexity.
  18. 23 February 1989: Zvi Galil (Columbia University and Tel Aviv University). The subset-sum problem and analytic number theory -- an interplay.
  19. 28 February 1989: Serge Plotkin (Stanford). (no title provided).
  20. 2 March 1989: Bob Cypher (University of Washington). Lower and Upper Bounds on Parallel Sorting.
  21. 7 March 1989: Joe Kilian. Completeness results for two-party cryptographic protocols.
  22. 9 March 1989: Baruch Schieber (IBM Watson). (no title provided).
  23. 6 April 1989: Dan Pehoushek (Stanford). Hamiltonian Circuit Problems.
  24. 13 April 1989: Edith Cohen (Stanford). Strongly Polynomial-Time and NC Algorithms for Detecting Cycles in Dynamic Graphs.
  25. 20 April 1989: Nathan Linial (Hebrew University). (no title provided).
  26. 27 April 1989: Greg Frederickson (ICSI). (no title provided).
  27. 4 May 1989: Russell Impagliazzo (UC-Berkeley). How to Recycle Random Bits.
  28. 11 May 1989: Mario Szegedy (University of Chicago). Approximation of Boolean Functions by Various Schemes.
  29. 18 May 1989: Mihalis Yannakakis. Shortest Paths Without a Map.
  30. 25 May 1989: Nati Linial (Hebrew University). Every boolean function has a small, dominant set of variables.
  31. 8 June 1989: Michael Luby (ICSI). Removing Randomness in Parallel Computation Without a Processor Penalty.

29 September 1988

Margaret Jacks Hall 220. 12:30 p.m.

Dr. Ron Shamir (School of Mathematics, Tel Aviv University)

An algorithm for the recognition of greedily solvable transportation problems.

In this talk we address the following question: Given a matrix of costs for the transportation problem, determine if there exists a permutation of the problem's variables, such that the problem is greedily solvable by that permutation for all possible supply and demand vectors. If the answer is positive, find such a permutation. We say that a problem is "greedily solvable" by a permutation, if the greedy algorithm which maximizes each variable in turn, according to the ordering in that permutation, provides an optimal solution.
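
The greedy rule described above can be sketched in a few lines. The following Python fragment is our own illustration (function and variable names are ours, not from the talk): it maximizes each cell of the transportation tableau in the order given by a fixed permutation.

```python
# Sketch of the greedy rule: maximize each variable x[i][j] in the order
# given by a permutation of the cells.  Illustrative only, not the talk's
# recognition algorithm.
def greedy_transport(cost, supply, demand, order):
    """Greedily ship along cells in `order`; returns (flow dict, total cost)."""
    s, d = list(supply), list(demand)
    flow, total = {}, 0
    for i, j in order:
        x = min(s[i], d[j])   # maximize x[i][j] given remaining supply/demand
        if x > 0:
            flow[(i, j)] = x
            s[i] -= x
            d[j] -= x
            total += x * cost[i][j]
    return flow, total
```

For a small Monge matrix such as cost = [[1, 3], [2, 3]], processing cells in nondecreasing cost order is a Monge sequence, so the greedy solution is optimal for any supplies and demands.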

In 1963, Hoffman gave a necessary and sufficient condition which, given the matrix and the permutation of the variables, verifies that the matrix is greedily solvable by that ordering. (In that case the permutation is called a Monge sequence.) Hoffman's condition, though, did not answer the question raised above, since it required prior knowledge of the permutation in order to verify that the condition holds. Until now, the question of existence and the generation of Monge sequences had to be approached separately for every family of transportation problems.

We shall describe an efficient algorithm for the problem. The algorithm uses Hoffman's condition, but it does not require prior knowledge of the Monge sequence. Our algorithm also generates the corresponding Monge sequence whenever it exists. This facilitates the solution of every subsequent transportation problem with that cost matrix in linear time.

The running time of our algorithm is better than that of the best known algorithms for the transportation problem, and thus it can also be used as a preliminary step towards solving such problems, without an increase in the overall complexity.

(Joint work with N. Alon, S. Cosares and D. Hochbaum.)

6 October 1988

Margaret Jacks Hall 352. 12:30 p.m.

Moti Yung (IBM Almaden)

Universal One-Way Hash Functions and their Cryptographic Applications

We define a Universal One-Way Hash Function Family, a tool which enables the compression of elements in the function domain. The main property of such a family is that given an element x in the domain, it is computationally hard to find a different domain element which collides with x. We prove that Universal One-Way Hash Functions exist if One-Way Functions exist.

Among the applications of the tool is a Secure Digital Signature Scheme. Secure Signature Schemes were previously known to exist only under the assumption that (specific and recently general) Trapdoor One-Way Functions exist.

(Joint work with Moni Naor, UC-Berkeley.)

13 October 1988

Margaret Jacks Hall 352. 12:30 p.m.

Marek Karpinski (University of Bonn and ICSI)

(no title provided)

We construct a fast parallel algorithm for enumerating all the perfect matchings in bipartite graphs with polynomially bounded permanents. Some implications towards the general maximum matching and counting problems are formulated as well as some surprising applications towards efficient deterministic interpolation schemes for polynomials over arbitrary fields. These results imply in particular the existence of efficient deterministic sparse conversion algorithms working over arbitrary fields, and the first deterministic polynomial time (and boolean NC) algorithm for the separation of numerator and denominator of general rational functions. As another application we display a deterministic polynomial time (boolean NC) RSE-conversion algorithm for the sparse boolean circuits. The last result implies that the SPARSE SAT Problem is in P, and in deterministic NC.

18 October 1988

Margaret Jacks Hall 352. 12:30 p.m.

Milena Mihail (Harvard University and UC-Berkeley)

On the Magnification of 0-1 Polytopes

A polytope in R^n is a "0-1 polytope" if the coordinates of its vertices are either 0 or 1. The "1-skeleton" of such a polytope is a graph whose vertex set is the set of vertices of the polytope and whose edges correspond to 1-dimensional faces (edges) of the polytope. For several non-trivial combinatorial objects - matchings, matroids, order ideals - interesting structural information can be expressed in terms of their associated 0-1 polytopes: the 1-skeletons of these polytopes are natural "exchange-graphs".

We show strong magnification properties for the 1-skeletons of the following classes of polytopes: (i) Arbitrary truncations of the matching polytope. (ii) Arbitrary truncations of partition matroid polytopes. (iii) The polytope of order ideals. The proof technique is a probabilistic extension of Jerrum and Sinclair's idea to "encode and bound paths using the state-space". New sampling schemes for matchings follow as a concrete consequence of these results.

We conjecture that the magnification properties of the specific polytopes listed above are examples of a more general phenomenon: "All 0-1 polytopes are magnifying". A positive resolution of this conjecture would imply efficient schemes for estimating the number of vertices of polytopes whose degrees are bounded by a polynomial in n. Several unsolved counting problems (including counting bases of a matroid and network reliability) can be expressed in terms of counting the number of vertices of suitably chosen polytopes that satisfy the condition of bounded degrees.

(Joint work with Umesh Vazirani.)

27 October 1988

Margaret Jacks Hall 352. 12:30 p.m.

Thane Plambeck

Periodicity Results for Some Simple Octal Games

Taking and breaking games are a class of impartial games played by removing beans from a pile and leaving the pile in zero or more parts. Following Conway, the rules for a taking and breaking game can be specified as a numeric code which describes the number of beans one can take and the number of piles into which one can break a pile. The games which have Conway codes consisting only of the octal digits 0...7 are known as octal games.

The theorem of Sprague and Grundy tells us that every instance of an impartial game is equivalent to an instance of the game Nim. The instances of a taking and breaking game thus specify a sequence of Nim values. It is conjectured [R. Guy] that all finitely-specified octal games have ultimately periodic Nim-value sequences, but the conjecture is not known to hold even for all octal games with 3 or fewer code digits. Slightly fewer than sixty of these now remain unsettled. We present techniques for giving computational proofs of the periodicity of some of these sequences and consequent periodicity results for three such games.
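
The Sprague-Grundy computation behind these Nim-value sequences is short to write down. The following Python sketch (our own, not the authors' code) interprets a Conway octal code: digit d_k governs removing k beans, with bit 1 meaning "take a whole pile", bit 2 "leave one nonempty pile", and bit 4 "split into two nonempty piles".

```python
# Minimal Grundy-value calculator for octal games, per the Conway coding.
def grundy_sequence(digits, limit):
    """digits[k-1] is the octal digit d_k; returns Grundy values G[0..limit]."""
    G = [0] * (limit + 1)
    for n in range(1, limit + 1):
        reachable = set()
        for k, d in enumerate(digits, start=1):
            if k > n:
                break
            if d & 1 and n == k:                 # remove the whole pile
                reachable.add(0)
            if d & 2 and n > k:                  # leave one nonempty pile
                reachable.add(G[n - k])
            if d & 4 and n - k >= 2:             # split into two nonempty piles
                for a in range(1, (n - k) // 2 + 1):
                    reachable.add(G[a] ^ G[n - k - a])
        g = 0
        while g in reachable:                    # mex of reachable values
            g += 1
        G[n] = g
    return G
```

For example, Kayles has Conway code 0.77; its Grundy sequence begins 0, 1, 2, 3, 1, 4 and is known to be ultimately periodic.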

No background in the theory of impartial games will be assumed.

This is joint work with Anil Gangolli.

4 November 1988

Margaret Jacks Hall 352. 12:30 p.m.

Jin-Yi Cai (Yale University)

Take a Walk and Grow a Tree on the Boolean Hypercube.

We present a simple randomized algorithm for maintaining dynamically evolving binary trees on hypercube networks. The algorithm guarantees that: (1) nodes adjacent in the tree are within distance $O(\log\log N)$ in an $N$ processor hypercube, and (2) with overwhelming probability, no hypercube processor is assigned more than $O(1+\frac{M}{N})$ tree nodes, where $M$ is the number of nodes in the tree. The algorithm is distributed and does not require any global information. This is the first load-balancing algorithm with provably good performance.

The algorithm can be used to efficiently parallelize any tree based computation such as divide-and-conquer, branch-and-bound, or functional expression evaluation. It can also be used to efficiently maintain dynamic data structures such as quad-trees that arise in scientific applications and image processing.

A novel technique -- tree surgery -- is introduced to deal with dependencies inherent in trees. Together with tree surgery, the study of random walks is used to analyze the algorithm.

(Joint work with Sandeep Bhatt.)

17 November 1988

Margaret Jacks Hall 352. 12:30 p.m.

Shmuel Safra (Weizmann Institute)

On The Complexity of $\omega$-Automata

Automata on infinite words were introduced by Buchi in order to give a decision procedure for S1S, the monadic second-order theory of one successor. Muller suggested deterministic $\omega$-automata as a means of describing the behavior of non-stabilizing circuits. McNaughton proved that the classes of languages accepted by nondeterministic Buchi automata and by deterministic Muller automata are the same. His construction and its proof are quite complicated, and the blow-up of the construction is doubly exponential.

Our main result is a new determinization construction. The advantages of the construction are that it is simpler and yields a singly exponential upper bound for the general case. This construction is essentially optimal. Using the construction we can also obtain an improved complementation construction for Buchi automata, which is also optimal. Both constructions can be used to improve the complexity of decision procedures that use automata-theoretic techniques.

1 December 1988

Margaret Jacks Hall 352. 12:30 p.m.

Alok Aggarwal (IBM Yorktown)

Complexity Considerations in Memory Hierarchies

In theoretical computer science, algorithms traditionally have been studied for the random access machine model, in which it is assumed that memory is unlimited and access to any memory location takes unit time. This is not the case in actual computer systems wherein the time to access a location depends upon its address. Furthermore, the time to access a block of contiguous locations is smaller than the total time spent in accessing each location individually. In this talk, we present and study models of computation that incorporate the characteristics of such real machines. Algorithms that take into account nonuniform memory access cost are developed for well-known problems; most of them are shown to be optimal by providing matching lower bounds in these models. Finally, the issue of memory management is studied -- while on-line memory management is shown to be efficient in some models, explicit memory management by the user is required in other models.

8 December 1988

Margaret Jacks Hall 352. 12:30 p.m.

Tom Leighton (MIT)

(no title provided)

(no abstract provided.)

15 December 1988

Margaret Jacks Hall 352. 12:30 p.m.

Ilan Vardi (Stanford)

A Medical Application of Combinatorial Optimization

The process of in vivo genetic recombination (IVGR) is well known to be susceptible to transfer of adventitious viral or bacterial information (VBI) during invasive chromosomal dispersion (ICD). In recent years focus has turned to polymer based occlusive sheaths (PBOS) as a method of preventing VBI during ICD.

In the talk, we consider the question of minimizing the number of PBOS's, given that all dyadic ICD's with IVGR capability are to be performed in a group of ICD donors (ICDD's) and ICD receptors (ICDR's), each with a different VBI, so that no VBI is transmitted. Previous work of Hajnal and Lovasz found upper and lower bounds that differed by an additive constant of one. I will fill in this gap by providing an algorithm that gives the optimal solution for any number of ICDD's and ICDR's.

Caveat lector: anyone who doesn't understand this abstract may be asked to participate in a real time demonstration of the algorithm.

12 January 1989 (?)

Margaret Jacks Hall 352. 12:30 p.m.

Don Knuth (Stanford)

Stable Husbands

Suppose $n$ boys and $n$ girls rank each other at random. We show that any particular girl has at least $({1\over 2}-\epsilon) \ln n$ different husbands in the set of all Gale/Shapley stable matchings defined by these rankings, with probability approaching~1 as $n \to \infty$, if $\epsilon$ is any positive constant. The proof emphasizes general methods that appear to be useful for the analysis of many other combinatorial algorithms.
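
To recall the stable-matching setup the theorem refers to, here is a compact boy-proposing Gale/Shapley sketch in Python (variable names are ours):

```python
from collections import deque

# Boy-proposing Gale/Shapley: always produces a stable matching.
def gale_shapley(boy_prefs, girl_prefs):
    """boy_prefs[b] lists girls in preference order; returns girl -> boy."""
    n = len(boy_prefs)
    rank = [{b: r for r, b in enumerate(p)} for p in girl_prefs]
    next_choice = [0] * n            # next girl each boy will propose to
    husband = [None] * n             # husband[g] = current fiance of girl g
    free = deque(range(n))
    while free:
        b = free.popleft()
        g = boy_prefs[b][next_choice[b]]
        next_choice[b] += 1
        if husband[g] is None:
            husband[g] = b
        elif rank[g][b] < rank[g][husband[g]]:   # g prefers b; jilt fiance
            free.append(husband[g])
            husband[g] = b
        else:
            free.append(b)
    return husband
```

Running the same procedure with the roles swapped yields the girl-optimal stable matching; a girl's "stable husbands" are her partners across all stable matchings.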

(joint work with Rajeev Motwani and Boris Pittel)

19 January 1989

Margaret Jacks Hall 352. 12:30 p.m.

Bernd Sturmfels

Recent Applications of Gr\"obner Bases Theory

B. Buchberger's Gr\"obner bases theory provides both a theoretical framework and practical algorithms for a large class of computational problems in algebraic geometry. Moreover, implementations of the basic algorithms are now available in many computer algebra systems (e.g. MAPLE, MACSYMA, SCRATCHPAD, REDUCE, ...).

After a brief introduction to Gr\"obner bases ``from a user's point of view'', we will discuss recent applications of these methods to problems in the following areas:

Computational invariant theory

Inverse kinematics in robot programming

Determinantal ideals and algebraic combinatorics

Algorithmic versions of the Quillen-Suslin theorem

24 January 1989

Margaret Jacks Hall 301. 12:30 p.m.

David Shmoys (MIT)

Approximation Algorithms for Constrained Parallel Machine Scheduling Problems

In this talk, we consider the effect of precedence constraints on the complexity of scheduling problems. In particular, we shall consider the following scheduling problem: there are n independent jobs to be scheduled on m identical parallel machines; each job may be scheduled only after a specified release date; each job has a due date, and the objective is to minimize the maximum lateness. If, in addition, there are precedence constraints, we show that a simple heuristic delivers solutions that are no worse than twice optimal, and by a result of Lenstra & Rinnooy Kan, no algorithm exists that guarantees better than a factor of 4/3 unless P=NP. Without precedence constraints, we present a polynomial approximation scheme. For scheduling problems with precedence constraints, it is typical that no algorithms are known that guarantee better than a factor of 2, but in the special case where m=1, the first result can be improved to a factor of 4/3.
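
The flavor of such factor-2 list-scheduling heuristics is captured by Graham's classic rule for makespan on identical machines (a related textbook example, not the talk's algorithm): give each job to the earliest-free machine.

```python
import heapq

# Graham-style list scheduling on m identical machines.  The makespan is
# provably less than twice optimal.  Illustrative sketch only.
def list_schedule(jobs, m):
    """jobs: processing times; returns the makespan of the greedy schedule."""
    machines = [0.0] * m             # heap of machine finishing times
    heapq.heapify(machines)
    for p in jobs:
        t = heapq.heappop(machines)  # earliest-free machine
        heapq.heappush(machines, t + p)
    return max(machines)
```

For jobs [3, 3, 2, 2, 2] on 2 machines the greedy makespan is 7, while the optimum is 6 (3+3 versus 2+2+2), illustrating the gap the factor-2 bound allows.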

26 January 1989

Margaret Jacks Hall 352. 12:30 p.m.

Eva Tardos

Combinatorial algorithms for the generalized network flow problem

We consider a generalization of the maximum flow problem in which the amounts of flow entering and leaving an arc are linearly related. More precisely, if $x(e)$ units of flow enter an arc $e$, $x(e) \gamma (e)$ units arrive at the other end. For instance, nodes of the graph can correspond to different currencies, with the multipliers being the exchange rates. We require conservation of flow at every node except a given source node. The goal is to maximize the amount of flow excess at the source.
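
The currency interpretation above has a well-known elementary counterpart (an aside of ours, not the talk's combinatorial algorithm): a cycle whose gain factors multiply to more than 1 generates flow "out of nothing", and taking arc weights $-\log \gamma(e)$ turns such a cycle into a negative cycle, detectable by Bellman-Ford.

```python
from math import log

# Detect a flow-generating (gain > 1) cycle via Bellman-Ford on -log(gamma).
def has_gain_cycle(n, arcs):
    """arcs: list of (u, v, gamma); True iff some cycle has gain product > 1."""
    dist = [0.0] * n                     # start from all nodes at once
    edges = [(u, v, -log(g)) for u, v, g in arcs]
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # a further relaxation is possible iff a negative (i.e. gain > 1) cycle exists
    return any(dist[u] + w < dist[v] for u, v, w in edges)
```

With exchange rates 0.5, 4.0, 0.6 around a triangle the gain product is 1.2, so the cycle is flow-generating; with 0.5, 4.0, 0.4 the product is 0.8 and no such cycle exists.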

This problem is a special case of linear programming, and therefore can be solved in polynomial time. We present a simple combinatorial algorithm for this problem, that is based on ideas from maximum flow and minimum cost flow algorithms.

Joint work with Andrew Goldberg and Serge Plotkin.

2 February 1989

Margaret Jacks Hall 352. 12:30 p.m.

Arvind Raghunathan (UC-Berkeley)

A Constructive Result from Graph Minors: Linkless Embeddings

We consider the problem of checking whether a graph can be embedded in 3-space such that no collection of vertex disjoint cycles is topologically linked. The Robertson-Seymour theory of graph minors is applicable to this problem and guarantees the existence of an O(n^3) algorithm for this decision problem. However, not even a finite time decision procedure was explicitly known for this problem.

We exhibit a small set of forbidden minors for linkless embeddable graphs and show that any graph with these minors cannot be embedded without linked cycles. We further establish that any graph which does not contain these minors is embeddable without linked cycles. Thus, we demonstrate an O(n^3) algorithm for the decision problem. This is one of the first problems for which a polynomial time algorithm has been constructed after it was shown that such an algorithm must exist.

An important consequence of our work is a proof that a linkless embeddable graph is also knotless embeddable. We also show that a graph which is embeddable without any two cycles being linked is linkless embeddable. Motivated by these results, we define the notion of a "good" three-dimensional embedding of a graph and show that a graph has a good embedding if and only if it is linkless embeddable. This notion of a good embedding gives us a finite time algorithm to actually embed a graph without linked cycles, if such an embedding exists.

(This is joint work with Rajeev Motwani and Huzur Saran)

9 February 1989

Margaret Jacks Hall 352. 12:30 p.m.

Rajeev Motwani (Stanford)

(no title provided)

We present a fairly general technique for converting RNC algorithms into NC algorithms. Our approach is based on a parallel implementation of the Method of Conditional Probabilities due to Joel Spencer. This method was used to convert probabilistic proofs of existence of combinatorial structures into polynomial time deterministic algorithms. It had the apparent drawback of being extremely sequential in nature. We show that, under certain general conditions, it is possible to use this technique for devising deterministic parallel algorithms.
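
The sequential starting point, the Method of Conditional Probabilities, is easy to illustrate on a toy case (our sketch, not the talk's parallel algorithm): derandomize the random cut of a graph by fixing vertices one at a time so the conditional expected cut size never drops, guaranteeing a cut with at least half the edges.

```python
# Toy sequential Method of Conditional Probabilities: derandomized max-cut.
def derandomized_cut(n, edges):
    """Fix vertices one by one so the conditional expected cut never drops."""
    side = [None] * n
    for v in range(n):
        # count already-placed neighbors of v on each side
        on_true = sum(1 for u, w in edges
                      if v in (u, w) and side[u + w - v] is True)
        on_false = sum(1 for u, w in edges
                       if v in (u, w) and side[u + w - v] is False)
        side[v] = on_false >= on_true   # join the side that cuts more placed edges
    cut = sum(1 for u, w in edges if side[u] != side[w])
    return cut, side
```

On K_4 this finds a cut of 4 of the 6 edges, which happens to be optimal; in general the guarantee is only |E|/2.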

We use our technique to devise NC algorithms for various problems including set balancing problems, near-optimal edge coloring of graphs, finding independent sets in hypergraphs and lattice approximation problems. Simple RNC algorithms were known for each of these problems previously but no deterministic NC algorithms had been devised.

[Joint work with Seffi Naor (Stanford University) and Moni Naor (IBM-ARC). ]

14 February 1989

Margaret Jacks Hall 352. 11:00 a.m.

Mauricio Karchmer (University of Toronto)

Communication Complexity & Circuit Complexity

Three approaches to the $NC^1$ vs. $P$ question will be presented. All three approaches are based on the equivalence between circuit depth and some problems in communication complexity. We are going to motivate each of the approaches and state intermediate questions that may give ideas for the general problem.

23 February 1989

Margaret Jacks Hall 352. 12:30 p.m.

Zvi Galil (Columbia University and Tel Aviv University)

The subset-sum problem and analytic number theory -- an interplay

One way of attacking NP-complete problems is to identify input domains for which it is possible to design polynomial time (or pseudo polynomial time) algorithms. For example, consider the subset-sum problem. If the m summands are bounded by l, one can solve the problem in time O(l m^2) by dynamic programming.
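
The dynamic program mentioned above is the textbook one; a minimal Python version (our sketch) computes the set of achievable subset sums:

```python
# Classic subset-sum dynamic program: reachable[s] is True iff some subset
# of nums sums to s.  Runs in time proportional to len(nums) * sum(nums).
def subset_sums(nums):
    total = sum(nums)
    reachable = [False] * (total + 1)
    reachable[0] = True
    for a in nums:
        for s in range(total, a - 1, -1):   # descend so each item is used once
            if reachable[s - a]:
                reachable[s] = True
    return {s for s, ok in enumerate(reachable) if ok}
```

For the summands {3, 5, 7} the achievable sums are {0, 3, 5, 7, 8, 10, 12, 15}; the structural theorems discussed next describe how, for dense inputs, such sets collapse into arithmetic progressions.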

G. Freiman initiated an entirely different approach for solving the subset-sum problem. Using analytic number theory, he and others proved theorems characterizing the set of subset sums as a collection of arithmetic progressions with the same difference. These theorems were used to design algorithms for the subset-sum problem that improved the ones using dynamic programming in case of dense enough inputs. (A problem is dense if m is large enough as a function of l.)

More recently, a family of better algorithms was designed. They do not use any analytic number theory, only elementary number theoretic arguments. The fastest algorithms in the family run in time O(m), and the slowest in time O(l log l). Moreover the algorithms yield a characterization of the set of subset sums, improving the theorems mentioned above.

The talk will discuss the approach, its potential and its limitations.

Joint results with M. Chaimovich, G. Freiman and O. Margalit of Tel-Aviv University.

28 February 1989

Margaret Jacks Hall 352. 4:15 p.m.

Serge Plotkin (Stanford)

(no title provided)

We present the first sublinear-time deterministic parallel algorithms for bipartite matching and several related problems, including maximal node-disjoint paths, depth-first search, and flows in zero-one networks. Our results are based on a better understanding of the combinatorial structure of the above problems, which leads to new algorithmic techniques. In particular, we show how to use maximal matching to extend, in parallel, a current set of node-disjoint paths and how to take advantage of the parallelism that arises when a large number of nodes are ``active'' during an execution of a push/relabel network flow algorithm.
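
The maximal-matching subroutine above (a matching that leaves no edge with both endpoints unmatched) is trivial to state sequentially; the talk of course uses parallel maximal matching, which we do not attempt here. A sequential sketch:

```python
# Greedy maximal matching: scan edges once, keep an edge whenever both
# endpoints are still unmatched.  The result is maximal (not maximum).
def greedy_maximal_matching(edges):
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

On the path 0-1-2-3 this keeps (0,1) and (2,3); every remaining edge touches a matched vertex, which is exactly the property used to extend node-disjoint paths in parallel.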

We also show how to apply our techniques to design parallel algorithms for the weighted versions of the above problems. In particular, we present sublinear-time deterministic parallel algorithms for finding a minimum-weight bipartite matching and for finding a minimum-cost flow in a network with zero-one capacities, if the weights are polynomially bounded integers.

Joint work with A. Goldberg and P. Vaidya.

2 March 1989

Margaret Jacks Hall 352. 12:30 p.m.

Bob Cypher (University of Washington)

Lower and Upper Bounds on Parallel Sorting

This talk presents two results in parallel sorting. The first result is an algorithm, called Cubesort, that sorts efficiently on a number of bounded degree parallel computers when the number of data items exceeds the number of processors. Let $N$ be the number of data items to be sorted, let $P$ be the number of processors in the parallel computer and let $L = N/P$. If $\log^{(x)} N < L < N^{x}$ for some constant $x$, then Cubesort sorts in $O(L\log^{2} N/\log L)$ time on a shuffle-exchange or cube-connected cycles computer. This result is the fastest known for these types of computers.

The second result is a lower bound on the size of sorting networks based on Shellsort. A comparator is a device that sorts 2 items, and a sorting network is a hardwired collection of comparators that sorts $N$ items. Shellsort is a well known sorting algorithm that is controlled by a sequence of parameters called increments. Researchers have suggested that a sorting network having $O(N\log N)$ comparators could be obtained by using Shellsort with the correct increment sequence. All increment sequences that have been proposed for Shellsort have been monotonic. It will be shown that any monotonic increment sequence yields a sorting network having at least $\Omega(N\log^{2} N/\log\log N)$ comparators.
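
For reference, here is Shellsort with a monotonic (decreasing) increment sequence, the kind of sequence the lower bound addresses (a standard textbook sketch, not tied to any particular proposed sequence):

```python
# Shellsort: each pass h-sorts the array by insertion sort with stride h.
# Any decreasing increment sequence ending in 1 sorts correctly.
def shellsort(a, increments=(7, 3, 1)):
    a = list(a)
    for h in increments:
        for i in range(h, len(a)):
            x, j = a[i], i
            while j >= h and a[j - h] > x:
                a[j] = a[j - h]
                j -= h
            a[j] = x
    return a
```

Unrolling the comparisons of such a pass structure into hardwired comparators is what yields a Shellsort-based sorting network.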

7 March 1989

Margaret Jacks Hall 352. 4:15 p.m.

Joe Kilian

Completeness results for two-party cryptographic protocols.

This paper will survey recent work in proving completeness results for two-party protocols. Joe will primarily discuss work he has done with Ben-Or, Crepeau, Goldwasser, Nisan, and Wigderson.

The goal of this research is to develop an understanding of the cryptographic power of simple protocols, and to apply this knowledge to remove or weaken cryptographic assumptions. One of the early results in this area says that if one can implement a simple protocol, known as oblivious transfer, then one can implement some very general two-party games. Machinery has also been developed to show completeness results for other ``protocols,'' such as an ideal noisy channel. Applications of this research have been applied to the study of multi-prover interactive proof systems, bounded interaction zero-knowledge, space-bounded automata, efficient zero-knowledge protocols for NP, and weakening the cryptographic assumptions necessary for secure circuit evaluation.

9 March 1989

Margaret Jacks Hall 352. 12:30 p.m.

Baruch Schieber (IBM Watson)

(no title provided)

We prove an $\Omega(\sqrt{\log n})$ lower bound on the depth of any decision tree with operations {+,-,*,/,floor,<} that decides whether an $n$-bit integer is a perfect square. We then extend the arguments to obtain an $\Omega(\log\log n)$ lower bound on the depth of any decision tree with the same operations that decides whether two $n$-bit integers are relatively prime. Although our lower bounds are not tight, they are the first non-trivial bounds for natural problems in the presence of the floor operation.

A novel technique for handling the floor operation is presented. The same technique can be used to derive lower bounds for many other problems.

Finally, we show that the same lower bounds hold for the time complexity of RAMs with the same set of arithmetic operations.

This is joint work with Yishay Mansour and Prasoon Tiwari.

6 April 1989

Margaret Jacks Hall 352. 12:30 p.m.

Dan Pehoushek (Stanford)

Hamiltonian Circuit Problems

A divide and conquer algorithm is used to solve various Hamiltonian Circuit problems, including the decision, minimum weighted, and enumeration versions of the problem. The general approach is described quite well in Arnborg and Proskurowski, 1984. With respect to this algorithm, the space of graphs is stratified into "complexity classes" based on tree-width, ranging from linearly to exponentially hard. One surprising aspect of the algorithm is the small difference in difficulty between the enumeration (#P) and decision (NP and coNP) HC problems, even though #P-complete problems are not known to be in NP.
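
As a brute-force baseline for the enumeration (#P) version, one can count Hamiltonian circuits of a small graph directly (our reference sketch; the tree-width algorithm of the talk is far more efficient):

```python
from itertools import permutations

# Count distinct Hamiltonian circuits of a small undirected graph, fixing the
# start vertex and the orientation to avoid double counting.
def count_hamiltonian_circuits(n, edges):
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    count = 0
    for perm in permutations(range(1, n)):
        if perm[0] < perm[-1]:               # fix one orientation per circuit
            cycle = (0,) + perm
            if all(adj[cycle[i]][cycle[(i + 1) % n]] for i in range(n)):
                count += 1
    return count
```

K_4 has 3 distinct Hamiltonian circuits and the 5-cycle has exactly 1, which makes for quick sanity checks.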

13 April 1989

Margaret Jacks Hall 352. 12:30 p.m.

Edith Cohen (Stanford)

Strongly Polynomial-Time and NC Algorithms for Detecting Cycles in Dynamic Graphs

This talk is concerned with the problem of recognizing, in a graph with rational vector-weights associated with the edges, the existence of a cycle whose total weight is the zero vector. This problem is known to be equivalent to the problem of recognizing the existence of cycles in dynamic graphs and to the validity of some systems of recursive formulas.
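
To make the problem statement concrete, here is an exponential brute-force check (our sketch, restricted to simple cycles for brevity; the talk's contribution is doing this in strongly polynomial time):

```python
# Does the directed graph contain a simple cycle whose vector weights sum to
# the zero vector?  Each cycle is enumerated once, from its smallest vertex.
def has_zero_cycle(n, arcs):
    """arcs: list of (u, v, weight_tuple)."""
    dim = len(arcs[0][2])
    out = [[] for _ in range(n)]
    for u, v, w in arcs:
        out[u].append((v, w))

    def dfs(start, v, total, visited):
        for u, w in out[v]:
            t = tuple(a + b for a, b in zip(total, w))
            if u == start and all(x == 0 for x in t):
                return True
            if u > start and u not in visited:
                if dfs(start, u, t, visited | {u}):
                    return True
        return False

    zero = (0,) * dim
    return any(dfs(s, s, zero, {s}) for s in range(n))
```

For instance, arcs 0->1 with weight (1,0) and 1->0 with weight (-1,0) form a zero cycle, while changing the second weight to (-1,1) destroys it.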

The best previously known result was that the problem can be solved through a polynomial number of linear programming problems. It was previously conjectured that combinatorial algorithms exist for the cases of two- and three-dimensional vector-weights. This work gives strongly polynomial algorithms for any fixed dimension. Moreover, these algorithms also establish membership in the class NC. On the other hand, it is shown that when the dimension of the weights is not fixed, the problem is equivalent to the general linear programming problem under strongly polynomial and logspace reductions.

Joint work with Nimrod Megiddo (IBM Almaden)

20 April 1989

Margaret Jacks Hall 352. 12:30 p.m.

Nathan Linial (Hebrew University)

(no title provided)

This talk is a survey of a number of papers (joint works with M. Ben-Or, J. Kahn, G. Kalai, M. Saks and M. Ajtai) in which the following general problem is addressed: Consider a perfect-information n-player game to achieve a goal such as generating one random bit (=flipping a coin) or electing a random player. It is assumed that some coalition of players may try to bias the outcome and the question is to design games in which coalitions of restricted size cannot affect the outcome too much. The complementary problem is to show that in every game some coalitions of large enough size will have a substantial influence over the outcome. Games which run for only one round are in effect boolean functions and the questions considered address some fundamental properties of general boolean functions. A number of basic questions in this area were recently answered. Our understanding of influence in general perfect-information games is still incomplete and some of the open problems will be reviewed.

27 April 1989

Margaret Jacks Hall 352. 12:30 p.m.

Greg Frederickson (ICSI)

(no title provided)

In this talk we bring insights from topological graph theory to bear on the solution of the all pairs shortest path problem. We present a shortest path algorithm faster than any previously known for certain classes of weighted graphs, based on how the graphs can be embedded in orientable and nonorientable surfaces. To generate shortest paths, we define and make use of a decomposition of the graph based on an appropriate embedding. Since the problem of finding an embedding of minimum genus is NP-hard, we rely on several graph grammar techniques to identify efficiently a non-minimal embedding that is still suitable for our purposes. Crucial to our approach is our encoding of shortest paths into a compact form, similar to that of routing tables for computer networks that send point-to-point messages along shortest paths. Determining compact routing tables for sparse networks is consequently one application of the algorithm. The talk will assume that the audience has no background in topological graph theory.

4 May 1989

Margaret Jacks Hall 352. 12:30 p.m.

Russell Impagliazzo (UC-Berkeley)

How to Recycle Random Bits

Random bits are a scarce resource; pseudo-random generators are an important means for using random bits most efficiently. The pseudo-random generators used in practice are the linear congruential generator and the shift register generator. Recent research has attempted to show why these generators perform well in practice, even though they are not cryptographically secure. We show that modified versions of these two generators are provably good, i.e. quasi-perfect. More precisely, if $r$ random bits are needed for a BPP algorithm to be correct with probability at least $2/3$, then $O(r+k^2)$ bits are needed to improve this probability to $1-2^{-k}$. We also present a different pseudo-random generator that is optimal, up to a constant factor, in this regard: it uses only $O(r+k)$ bits to improve the probability to $1-2^{-k}$. This generator is based on random walks on expanders, and requires only $O(k)$ arithmetic operations, for $k$ as above.

Next we show that our modified versions of the shift register and linear congruential generators can be used for more general purposes. For example, they can be used to generate a list of $k(n)$ $n$-bit primes using the information theoretical lower bound of $n-\log n-c$ bits per prime, amortized, for a suitable polynomial~$k(n)$. More generally, they can be used to generate polynomially-many samples from any polynomially-samplable distribution using the information theoretical lower bound of the entropy $+ o(1)$ bits per sample, amortized. In fact, this is possible even if the samples are needed from different distributions; the number of random bits used will approach the information-theoretic bound in the limit.

11 May 1989

Margaret Jacks Hall 352. 12:30 p.m.

Mario Szegedy (University of Chicago)


The approximation technique first introduced by Razborov has proven to be one of the most powerful tools so far in the complexity theory of Boolean functions. Smolensky, inspired by these results, discovered that $AC^0$ circuits can always be approximated by low-degree polynomials over a finite field. This and other results brought degree as a complexity measure to the attention of many in theoretical computer science. Unfortunately, no one has so far obtained really strong results from degree considerations, and perhaps the nature of the technique makes this impossible. Here we prove high-degree lower bounds for two different approximation schemes. First we prove that if a polynomial over the reals approximates a Boolean function in the $L_\infty$ norm, then the degree of the polynomial is at least the square root of the sensitivity of the function. We also prove, for a variety of rings, that if the majority function is expressed by a polynomial over the ring in a zero/non-zero fashion almost everywhere, then the polynomial must have high degree. As a consequence, small $AC^0$ circuits with a single mod 6 gate at the top cannot compute majority.
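The two quantities in the first result can be computed by brute force for small functions. The sketch below computes the sensitivity and, as a sanity check, the degree of the exact multilinear representation over the reals (which upper-bounds the $L_\infty$-approximation degree in the theorem); the example functions are my own, not from the talk.

```python
from itertools import product

# Brute-force companion to the degree/sensitivity bound above:
#   sensitivity(f) = max over x of the number of single-bit flips
#                    that change f(x);
#   real_degree(f) = degree of the unique multilinear real polynomial
#                    agreeing with f on {0,1}^n (Mobius inversion).

def sensitivity(f, n):
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(
            f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n)
        )
        best = max(best, flips)
    return best

def real_degree(f, n):
    coeff = {}
    for s in product((0, 1), repeat=n):       # monomial = support of s
        total = 0
        for t in product((0, 1), repeat=n):   # sum over subsets t of s
            if all(ti <= si for ti, si in zip(t, s)):
                total += (-1) ** (sum(s) - sum(t)) * f(t)
        coeff[s] = total
    return max((sum(s) for s, c in coeff.items() if c != 0), default=0)

xor3 = lambda x: x[0] ^ x[1] ^ x[2]   # parity: sensitivity 3, degree 3
or3 = lambda x: int(any(x))           # OR: sensitivity 3, degree 3
```

For both examples the exact degree meets the theorem's $\sqrt{\text{sensitivity}}$ bound with plenty of room, as it must, since exact degree dominates approximation degree.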

18 May 1989

Margaret Jacks Hall 352. 12:30 p.m.

Mihalis Yannakakis

Shortest Paths Without a Map

We examine several versions of the shortest-path problem in which the map (i.e., the distances) is not known in advance but is discovered dynamically. We seek search strategies that minimize the worst-case ratio of the distance traveled to the length of the shortest path. We describe simple optimal strategies for two cases: layered graphs of bounded width, and two-dimensional scenes with unit square obstacles. For slightly more general graphs and scenes, no bounded ratio is possible. We also show that designing a strategy that achieves the best possible ratio is, in general, PSPACE-complete.

(Joint work with Christos Papadimitriou)
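The ratio being optimized can be illustrated on a tiny width-2 layered graph. The greedy rule below is purely illustrative, not the optimal strategy from the talk, and its ratio can be made arbitrarily bad; the point is only to show the online traveler, the offline optimum, and the ratio between them.

```python
# Toy version of the competitive ratio above: an online traveller on a
# width-2 layered graph sees only the next layer's edge costs.  Greedy
# is NOT the talk's optimal strategy -- it is a strawman for comparison.

def greedy_walk(layers):
    """layers[i][a][b] = cost of the edge from node a of layer i to node b."""
    pos, travelled = 0, 0.0
    for layer in layers:
        nxt = min((0, 1), key=lambda b: layer[pos][b])  # cheapest visible edge
        travelled += layer[pos][nxt]
        pos = nxt
    return travelled

def offline_opt(layers):
    """Cheapest start-to-end path with full knowledge (simple DP)."""
    best = [0.0, float("inf")]        # start at node 0 of layer 0
    for layer in layers:
        best = [min(best[a] + layer[a][b] for a in (0, 1)) for b in (0, 1)]
    return min(best)

# Greedy grabs the cheap first edge and pays dearly in the next layer.
layers = [
    [[1.0, 2.0], [9.0, 9.0]],    # layer 0 -> 1
    [[50.0, 50.0], [1.0, 1.0]],  # layer 1 -> 2
]
ratio = greedy_walk(layers) / offline_opt(layers)
```

Here greedy travels 51 against an optimum of 3, a ratio of 17; the talk's result is that for bounded-width layered graphs a strategy with a bounded worst-case ratio exists.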

25 May 1989

Margaret Jacks Hall 352. 12:30 p.m.

Nati Linial (Hebrew University)

Every boolean function has a small, dominant set of variables

In this talk I will survey a proof of the following theorem by J. Kahn, G. Kalai and myself:

Let f be a boolean function on n variables. Then there is a set of only o(n) variables whose influence on f is 1-o(1).

The definition of influence, as well as all the background from harmonic analysis that will be needed, is included in the talk. Having heard my previous AFLB talk will help, but is by no means necessary for following this one.

If time permits I will make some remarks on a very recent paper of Y. Mansour, N. Nisan and myself where similar techniques are used to study problems in circuit complexity.
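The influence in the theorem has a simple brute-force form: the influence of variable $i$ on $f$ is the probability, over a uniform input $x$, that flipping bit $i$ changes $f(x)$. A minimal sketch (the example functions are my own):

```python
from itertools import product

# influence_i(f) = Pr_x[ f(x) != f(x with bit i flipped) ],
# computed exactly by enumerating all 2^n inputs.

def influences(f, n):
    counts = [0] * n
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if f(y) != fx:
                counts[i] += 1
    return [c / 2 ** n for c in counts]

maj3 = lambda x: int(sum(x) >= 2)   # each variable has influence 1/2
dictator = lambda x: x[0]           # influences (1, 0, 0)
```

For majority each variable matters equally; for a dictator one variable carries all the influence. The theorem says every Boolean function has some $o(n)$-size set of variables that together carry almost all of it.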

8 June 1989

Margaret Jacks Hall 352. 12:30 p.m.

Michael Luby (ICSI)

Removing Randomness in Parallel Computation Without a Processor Penalty

We develop some general techniques for removing randomness from randomized $NC$ algorithms without a blowup in the number of processors. One requirement for applying these techniques is that the analysis of the randomized algorithm use only pairwise independence. Our main new result is a parallel algorithm for the $\Delta + 1$ vertex coloring problem with running time $O(\log^3 n \log\log n)$ using a linear number of processors on a CREW PRAM. Our techniques also apply to several other problems, including the maximal independent set problem and the maximal matching problem. For these last two problems the application of the general technique is mostly of academic interest, because $NC$ algorithms that use a linear number of processors and have better running times were previously known.
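The pairwise-independence recipe can be seen on the classic warm-up (not the talk's coloring algorithm): from a seed of about $\log n$ bits, the parities $\langle \mathrm{binary}(i), s\rangle$ give $n$ pairwise-independent coins, so the expected-cut argument for MaxCut survives, and there are only $O(n)$ seeds to try, all in parallel in $NC$.

```python
# Derandomizing the random-cut argument via pairwise independence:
# bit(i) = parity of <binary(i), seed>.  Over a uniform seed these bits
# are pairwise independent, so the average cut over all seeds is |E|/2,
# and some seed achieves at least that.  This is the standard warm-up,
# not the Delta+1 coloring algorithm from the talk.

def pairwise_bits(seed, n):
    """n pairwise-independent bits from one small seed (i runs 1..n)."""
    return [bin(i & seed).count("1") % 2 for i in range(1, n + 1)]

def derandomized_cut(n, edges):
    m = n.bit_length()               # seed length: indices 1..n nonzero
    best_side, best = None, -1
    for seed in range(1, 2 ** m):    # brute force over all O(n) seeds
        side = pairwise_bits(seed, n)
        cut = sum(side[u] != side[v] for u, v in edges)
        if cut > best:
            best_side, best = side, cut
    return best_side, best

# Triangle: a random 2-coloring cuts 1.5 edges on average, so some seed
# must cut at least 2 of the 3 edges.
edges = [(0, 1), (1, 2), (0, 2)]
side, cut = derandomized_cut(3, edges)
```

The same pattern underlies the processor-efficient derandomizations above: replace full independence by pairwise independence in the analysis, then search the small seed space in parallel instead of flipping coins.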