Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

- 9 October 1986: **Cynthia Dwork** (IBM Almaden). Distributed Coin Flipping -- A Survey of Results.
- 16 October 1986: **Eli Shamir** (Hebrew University and DEC SRC). Parallel Routing in Networks -- Back to Virtual Paths?
- 23 October 1986: **Ramsey Haddad** (Stanford University). Beta Operations: Efficient Implementation of a Primitive Parallel Operation.
- 30 October 1986: **Andrew Yao** (Princeton University). A Slightly Superlinear Lower Bound for the Randomized Complexity of Monotone Graph Properties.
- 6 November 1986: **Robert W. Floyd** (Stanford University). Programs Are Random Walks Are Programs.
- 13 November 1986: **Steven Rudich** (UC Berkeley). Circuit Complexity Stuff.
- 20 November 1986: **Leonid Levin** (Boston University and MIT). Extracting Unpredictable Bits from Inputs of One-Way Functions.
- 4 December 1986: **Russell Impagliazzo** (UC Berkeley). Direct Zero-Knowledge Protocols for NP-Complete Problems.
- 11 December 1986: **Marshall Bern** (UC Berkeley). A More General Special Case of the Steiner Tree Problem.
- 8 January 1987: **Nicholas Pippenger** (IBM Almaden Research Center). The Conference-Call Telephone-Exchange Problem.
- 15 January 1987: **Ashok Chandra** (IBM Yorktown Heights). A Model for Hierarchical Memory.
- 22 January 1987: **Silvio Ursic** (Madison, WI). A Linear Characterization of Counting Problems.
- 29 January 1987: **Robert Tarjan, Forsythe Lecturer** (Princeton and AT&T Bell Labs). Fast Algorithms for Network Optimization.
- 5 February 1987: **Persi Diaconis** (Stanford University). Is There a Non-Commutative Fast Fourier Transform?
- 9 February 1987: **Richard Cole** (Courant Institute, NYU). Parallel Merge Sort.
- 12 February 1987: **Daniel Greene** (Xerox PARC). Finite-Resolution Computational Geometry.
- 19 February 1987: **A. Schaffer**. Open problem session.
- 25 February 1987: **William Aiello** (MIT). On the Power of Interaction.
- 26 February 1987: **Daniel Sleator** (Carnegie-Mellon University). Competitive Algorithms for On-line Problems.
- 4 March 1987: **Joel Friedman** (UC Berkeley). On Newton's Method.
- 5 March 1987: **Subhash Suri** (Johns Hopkins). An O(n log n) Time Algorithm for Computing the Link Diameter of a Simple Polygon.
- 11 March 1987: **Andrew Goldberg** (MIT). Solving Minimum-Cost Flow Problems by Successive Approximation.
- 12 March 1987: **B.K. Natarajan** (Carnegie-Mellon). The Automated Design of Parts Feeders.
- 19 March 1987: **Pang Chen**. Problem Session.
- 26 March 1987: **Stavros S. Cosmadakis** (IBM Yorktown). The Word and Generator Problems for Lattices.
- 2 April 1987: **Lenore D. Zuck** (Yale University). A Little Knowledge Goes a Long Way: Simple Knowledge-Based Derivations and Correctness Proofs for a Family of Protocols.
- 16 April 1987: **Richard Beigel** (Johns Hopkins University). Tradeoffs in the Solution of Several Instances of an NP-Complete Problem.
- 23 April 1987: **Leonidas J. Guibas** (Stanford and DEC SRC). The Complexity of Many Faces in Arrangements of Lines and Segments.
- 30 April 1987: **Stephen Vavasis** (Stanford). Exponential Lower Bounds for Brouwer Fixed Points.
- 7 May 1987: **James Renegar** (Stanford). On the Worst-Case Arithmetic Complexity of Approximating Zeros of Polynomials.
- 14 May 1987: **John Hershberger** (Stanford). Optimal Shortest Path Queries in a Simple Polygon.
- 21 May 1987: **Miklos Ajtai** (IBM Almaden). Deterministic Simulation in LOGSPACE.
- 28 May 1987: **Christos H. Papadimitriou** (Stanford). The Geometry of Fingers.
- 4 June 1987: **Professors D.E. Knuth, C.H. Papadimitriou, J.D. Ullman** (Stanford). Solutions to the 1987 Analysis of Algorithms Qualifying Examination, Part I.
- 11 June 1987: **Professors D.E. Knuth, C.H. Papadimitriou, J.D. Ullman** (Stanford). Solutions to the 1987 Analysis of Algorithms Qualifying Examination, Part II.
- 25 June 1987: **Philippe Flajolet** (INRIA, France). Singularity Analysis of Generating Functions.
- 2 July 1987: **Ashok Subramanian** (Stanford). PSPACE Survives Three-Bit Bottlenecks.
- 9 July 1987: **Ramsey W. Haddad** (Stanford). Recognizing Bellman-Ford-Orderable Graphs.
- 23 July 1987: **Jack Snoeyink** (Stanford). Stabbing Vertical Segments with a Convex Polygon.
- 6 August 1987: **Anil Gangolli** (Stanford). Generating Random Combinatorial Objects with Random Walks.
- 26 August 1987: **Yossi Matias** (Weizmann Institute). A Video Scrambling Technique Based on Space-Filling Curves.
- 27 August 1987: **Ken Clarkson** (AT&T Bell Laboratories). Probabilistic Methods in Combinatorial Geometry.

Algorithms and implementation issues will be discussed.

Let the input size of the problem be N and the output size be M (note that 1 <= M <= N). We will show how to perform the operation efficiently, in time that is a function of BOTH N and M.

If the parallel time complexity of performing the operation is expressed solely as a function of the number of inputs, the time bounds are trivially seen to match those of sorting --- O(log^2 N) for the N-node hypercube and O(sqrt N) for the sqrt N X sqrt N mesh-of-trees. It is also clear that if M=1, the operation can be performed in O(log N) time on either parallel architecture.

What we will show is that the Beta Operation can be performed on an N-node hypercube in O(log N + log^2 M) time. For a sqrt N X sqrt N mesh-of-trees, we require O(log N + sqrt M) time. Thus, we reduce the time needed when using the parallel primitive in cases where a lot of data reduction leaves few outputs (that is, with small M --- for example, if M is polylog in N).
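Although the talk concerns parallel implementations, the semantics of the primitive can be pinned down with a short sequential sketch (the name `beta_operation` and the (key, value) framing here are my own illustration, not taken from the abstract): each input carries a destination key, values routed to the same key are folded with an associative operator, and M is the number of distinct keys.

```python
from collections import defaultdict
from functools import reduce

def beta_operation(pairs, combine):
    """Sequential reference semantics for a beta-style operation:
    route each (key, value) pair to its key's bucket, then fold the
    values in every bucket with the associative `combine` operator."""
    buckets = defaultdict(list)
    for key, value in pairs:
        buckets[key].append(value)
    # one combined output per distinct key; M = len(result)
    return {k: reduce(combine, vs) for k, vs in buckets.items()}

# N = 5 inputs reduced to M = 2 outputs
result = beta_operation([(0, 1), (1, 2), (0, 3), (1, 4), (0, 5)],
                        lambda a, b: a + b)
# result == {0: 9, 1: 6}
```

A parallel machine would realize the routing step with the network topology; the data-reduction case the talk targets is the one where `result` is much smaller than the input.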

* Introduce explicit variables to track all parameters of interest.

* By operator strength reduction (e.g., finite difference) methods, semilinearize the computational steps.

* Introduce explicit deterministic variables that track, by linear recurrence, the expected values of the random ones. A well-known theorem about conditional expected values is useful.

* Find invariants of the resulting program and solve for final values. Typically, this entails finding eigenvectors of a triangular linear system.

The method has determined several means and variances in coalesced hashing, and high moments of certain random walks. It uses no higher mathematical notions than those mentioned above, and tends to provide a firm sense of direction to the analysis.
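As a toy illustration of the recurrence step (my example, not one from the talk): for a simple ±1 walk with up-probability p, deterministic variables tracking E[X_t] and E[X_t^2] obey linear recurrences obtained by conditioning on the current position, and solving them recovers the mean and variance without any simulation.

```python
def walk_moments(p, steps):
    """Track E[X] and E[X^2] of a +/-1 random walk (step is +1 with
    probability p) by deterministic linear recurrences."""
    d = 2 * p - 1            # E[step], by conditional expectation
    m = s = 0.0              # m = E[X_t], s = E[X_t^2], with X_0 = 0
    for _ in range(steps):
        # X_{t+1} = X_t + D, D independent of X_t, D^2 = 1, so:
        #   E[X_{t+1}]   = E[X_t] + d
        #   E[X_{t+1}^2] = E[X_t^2] + 2*d*E[X_t] + 1
        m, s = m + d, s + 2 * d * m + 1
    return m, s - m * m      # mean and variance after `steps` steps

mean, var = walk_moments(0.75, 10)
# closed form: mean = t(2p-1) = 5.0, variance = 4p(1-p)t = 7.5
```

The final values match the closed forms t(2p-1) and 4p(1-p)t, which is exactly the kind of invariant-solving the last step of the method performs.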

1) Parity requires exactly 4n-4 fan-in-2 AND, OR, and NOT gates to compute.

2) The minimal circuit computing any graph-theoretic property (connectivity, 3-colorability, ...) has no input that fans out to more than 34n gates (n is the number of inputs to the circuit).

3) Other fun stuff, as time permits.
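The upper bound in the first item is easy to exhibit (this construction is standard, not necessarily the talk's): x XOR y can be written with four fan-in-2 gates as (x OR y) AND NOT (x AND y), and chaining n-1 such blocks computes n-bit parity with 4(n-1) = 4n-4 gates. A sketch that evaluates the circuit while counting gates:

```python
def parity_circuit(bits):
    """Evaluate n-bit parity as a chain of XOR blocks, each block
    using four fan-in-2 gates: x XOR y = (x OR y) AND NOT (x AND y).
    Returns (parity_bit, gate_count); gate_count is 4(n-1)."""
    count = 0
    def gate(v):             # tally one gate evaluation
        nonlocal count
        count += 1
        return v
    acc = bool(bits[0])
    for b in bits[1:]:
        b = bool(b)
        o = gate(acc or b)       # OR gate
        a = gate(acc and b)      # AND gate
        na = gate(not a)         # NOT gate
        acc = gate(o and na)     # AND gate -> acc XOR b
    return int(acc), count

# parity_circuit([1, 0, 1, 1]) == (1, 12)   # n = 4, so 4n-4 = 12 gates
```

The talk's result is the matching lower bound: no circuit over this basis can do better.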

However, the same techniques can be extended to give general protocols that enable one party to verify that she has performed a fixed arithmetic, Boolean, or Turing machine computation correctly, without revealing any information about either the inputs or the outputs. This leads to direct protocols for problems such as Subset Sum and Satisfiability, as well as a general method for converting a polynomial-time nondeterministic Turing machine program into a zero-knowledge protocol for the language it recognizes. A slight adaptation allowing probabilistic computation gives a general method for converting any protocol into an equivalent zero-knowledge protocol.

I'll give a polynomial-time algorithm for the Steiner tree problem on planar networks in the special case that a small number of faces contain all the terminals in N. I'll then show improvements of the basic algorithm as it applies to the rectilinear Steiner problem. My basic algorithm is a refinement of a dynamic programming algorithm due to Dreyfus and Wagner that is, in general, exponential time.
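For reference, the general Dreyfus-Wagner dynamic program that the talk refines can be sketched as follows; this is a minimal textbook version (exponential in the number of terminals), not the speaker's planar refinement.

```python
INF = float("inf")

def steiner_tree_cost(n, edges, terminals):
    """Dreyfus-Wagner DP: minimum weight of a tree connecting the
    `terminals` in an undirected graph on vertices 0..n-1."""
    # All-pairs shortest paths (Floyd-Warshall).
    dist = [[INF] * n for _ in range(n)]
    for v in range(n):
        dist[v][v] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    t = len(terminals)
    full = (1 << t) - 1
    # dp[S][v]: cheapest tree spanning terminal subset S plus vertex v.
    dp = [[INF] * n for _ in range(full + 1)]
    for i, term in enumerate(terminals):
        for v in range(n):
            dp[1 << i][v] = dist[term][v]
    for S in range(1, full + 1):
        if S & (S - 1) == 0:
            continue                      # singletons are the base case
        for v in range(n):                # merge two subtrees at v
            T = (S - 1) & S
            while T:
                cand = dp[T][v] + dp[S ^ T][v]
                if cand < dp[S][v]:
                    dp[S][v] = cand
                T = (T - 1) & S
        for v in range(n):                # re-root along a shortest path
            for u in range(n):
                if dp[S][u] + dist[u][v] < dp[S][v]:
                    dp[S][v] = dp[S][u] + dist[u][v]
    return min(dp[full][v] for v in range(n))
```

The running time is roughly O(3^t n + 2^t n^2 + n^3); the talk's point is that when the terminals lie on few faces of a planar network, this blow-up can be avoided.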

The polytopes defined by the linear program are projections of cubes. Their vertices are labeled with some of the boolean functions in N boolean variables. They are defined with recursions that mirror the binomial recursion and have been named ``binomial polytopes''.

Boolean symmetric functions assume a preeminent role in this work. A new symmetry, distinct from negation and permutation of variables, is found to be present. The results extend the range of combinatorial problems that can be approached with methods for solving systems of linear inequalities to include counting problems. They show that incomplete knowledge of the face structure of the associated polytopes leads to imprecise counting: with a linear program, we obtain upper and lower bounds on the number of inputs accepted by the associated Turing machine. They also show that probabilistic Turing machines are a more natural model for integer and linear programming methods based on an incomplete description of the underlying polytopes.

Recently I have had to compute some non-commutative transforms on the symmetric group S_49. I will sketch available ideas and open problems.

(Joint work with Frances Yao)

The class IP[f(n)] is said to contain L if there exists an interactive proof system with f(n)-message exchanges (interactions) such that with high probability the verifier accepts x (of size n) if and only if x is in L.

Babai [85] showed that all languages recognized by interactive proof systems with a bounded number of interactions can be recognized by interactive proof systems with only two interactions. That is, for every constant k, IP[k] collapses to IP[2].

In this paper, we give evidence that interactive proof systems with some unbounded number of interactions may be MORE POWERFUL than interactive proof systems with a bounded number of interactions. We show that for any unbounded function f(n) there exists an oracle B such that IP^B[f(n)] is not contained in PH^B. This implies that IP^B[f(n)] is not contained in IP^B[2], since IP^B[2] is contained in Pi_2^B for all oracles B.

The techniques employed are extensions of the techniques for proving lower bounds on small depth circuits used in Furst, Saxe, and Sipser[81]; Yao[85]; and Hastad[86].

Optimal algorithms are obtained also for related problems, e.g., finding a minimum link path between two fixed points, and finding a collection of minimum link paths from a fixed point to all vertices of P. Our diameter finding algorithm also gives a close approximation to the link center of P, which is a point that minimizes the maximum link distance to all points of P.

We discuss two methods for producing such streams: (a) sensorless manipulation of the object to achieve a unique orientation; (b) sensing without manipulation to detect orientation. For both methods, we present paradigms and develop algorithms that take the geometry of the object as input and produce manipulation/sensing schemes as output.

Our approach will be a little more formal than is traditional in the area.

This is joint work with J.Y. Halpern of IBM Almaden Research Center.

The corresponding problem for complete (for r.e.) sets has a positive answer: we can always reduce the problem of answering 2^n-1 instances of a complete problem to answering only n instances of the complete problem. The reduction requires a tradeoff, because we cannot know in advance which n instances of the complete problem we will have to solve.
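The flavor of that reduction can be seen in a toy model (my illustration; the choice of threshold queries and the simulated enumeration are assumptions, not details from the abstract): ask n "are at least i of these instances in L?" questions to binary-search the exact number of members, then let enumeration of the r.e. set reveal which ones they are.

```python
def recover_memberships(instances, in_L):
    """Toy model of answering 2^n - 1 membership questions with only
    n oracle queries: each query asks 'are at least i of the instances
    in L?', itself an r.e. question when L is r.e.  Here L is decidable
    so both the oracle and the final enumeration can be simulated."""
    queries = 0
    def at_least(i):                      # the only oracle access
        nonlocal queries
        queries += 1
        return sum(map(in_L, instances)) >= i
    lo, hi = 0, len(instances)            # binary-search the count
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if at_least(mid):
            lo = mid
        else:
            hi = mid - 1
    count = lo
    # Knowing the exact count, enumerating L until `count` of the
    # instances appear pins down precisely which ones belong.
    members = set()
    for x in instances:
        if len(members) == count:
            break
        if in_L(x):
            members.add(x)
    return {x: (x in members) for x in instances}, queries

# 2^3 - 1 = 7 instances answered with only 3 threshold queries
answers, q = recover_memberships(list(range(7)), lambda x: x % 2 == 0)
```

The "tradeoff" in the abstract is visible here: the n queries depend adaptively on earlier answers, so we cannot name them in advance.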

We might hope that a similar tradeoff would allow us to efficiently solve a large number of instances of SAT.

We also consider variants of question (1) in which we are only trying to solve some decision problem that depends on the answers to k instances of SAT. We ask whether we can reduce the decision problem to answering (2) only k-1 instances of SAT or (3) only a fixed number of instances of SAT independent of k.

We show:

(1) is false under almost all relativizations, i.e., no tradeoff is possible in the solution of k instances of SAT.

(2) is true if k > 2, i.e., a tradeoff is possible in the solution of a decision problem.

(3) implies not (1), unless P = NP, i.e., if a huge tradeoff is possible in the solution of a decision problem, then no tradeoff is possible in the solution of k instances of SAT.

The talk will assume only a very basic familiarity with the theory of NP-completeness.

(joint work with Micha Sharir)

This talk represents joint work with Michael Hirsch of the UC-Berkeley math department.

This is joint work with Leo Guibas.

The sequence that we construct does not really depend on the test, in the sense that a polynomial family of sequences is constructed so that at least one of them passes any test. The polynomial family is the same even if the test is allowed to use an oracle of polynomial size, and it can be constructed in LOGSPACE (without using an oracle).

In this talk I shall outline a proof of this conjecture (discovered jointly with Xanthippi Markenscoff and Luqun Ni). In particular we show that form closure of any two-dimensional object with piecewise smooth boundary, except for the circle, can be achieved by four fingers. For three dimensions we prove that form closure of any object with piecewise smooth boundary can be achieved with twelve fingers if and only if the object does not have a rotational symmetry (in which case, of course, form closure is not achievable, since no moment along the axis of symmetry can be balanced). We also show that, under very general conditions, form closure of three-dimensional objects without rotational symmetries can be achieved with seven fingers. Finally, we show that, when friction is taken into account, under the most relaxed assumptions three fingers are necessary and sufficient in two dimensions, and four fingers in three dimensions. If time permits, I shall discuss the problem of achieving the *best* stable grip of an object.

The classical methods are: (i) Tauberian theorems; (ii) the Darboux method, which relies on a simple lemma from Fourier analysis and was used extensively by Polya in tree enumerations.

We shall present here the "transfer lemma" approach based on work of Odlyzko (counting of 2-3 trees) and Flajolet-Odlyzko (height of trees). The method requires only information about the order of growth of the function around its dominant singularity (or singularities) in a complex domain. It is useful in cases where smoothness conditions are not necessarily satisfied, and it can accommodate a wide range of singular behaviours for generating functions.

Applications to be given include:

-The counting of 2-3 trees [Odlyzko]

-The height of binary trees [Flajolet-Odlyzko]

-Common subexpression problem

-Ramanujan's Q(n) functions and random mappings

-Full expansions for height of planted plane trees, register allocation and Odd-Even Merge, where singularity analysis can be used in conjunction with Mellin transform techniques.
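For orientation, the basic transfer result can be stated in its standard modern form (quoted from general knowledge of the method, not transcribed from the talk): the growth of f near its dominant singularity converts directly into coefficient asymptotics.

```latex
% If f(z) is analytic in a Delta-domain at z = 1 and, as z -> 1
% within that domain, f(z) ~ C (1 - z)^{-\alpha} with
% \alpha \notin \{0, -1, -2, \dots\}, then the coefficients satisfy
\[
  f(z) \sim C\,(1-z)^{-\alpha} \quad (z \to 1)
  \;\Longrightarrow\;
  [z^n]\, f(z) \sim \frac{C}{\Gamma(\alpha)}\, n^{\alpha-1}
  \quad (n \to \infty).
\]
```

The applications above follow this template with different singular exponents (square-root singularities for trees, logarithmic factors for heights, and so on).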

PSPACE is logspace-serializable

Gee-whiz, the lone inhabitant of the planet of Amnesia, is in real trouble, for Shomu, the super-hyper-overlord of the galaxy, refuses to leave her alone. Every so often, he writes problem instances on the sky of Amnesia, and insists that Gee-whiz solve them. Life is hard, for there are only so many trees on the tiny planet, and Shomu's problem instances are so huge that they cannot even be written down on all the paper on the planet. To cap it all, Amnesia is cursed with occasional freak thunderstorms, of such violence that every scrap of paper is washed away in the ensuing flood. [Hence the name of the planet.] Each time such a calamity happens, Gee-whiz has no choice but to make fresh paper. And Gee-whiz absolutely hates chopping trees.

Of late, Shomu has been setting Gee-whiz harder and harder tasks. But today he has gone too far.

``Shame on you, Shomu,'' cried Gee-whiz. ``You set me problems that are too hard to solve. If I used every scrap of paper on the planet, I would still only have space logarithmic in the size of what you write on the sky. Even if it didn't rain so often, I could only tackle problems in logspace. But the task you have set me is complete for PSPACE,'' and she muttered something about space-hierarchy theorems that Shomu did not quite understand.

``Tell you what,'' Shomu rejoined, ``I'll grant you a boon. As long as you are still industriously working away, it won't rain. The moment you stop, however, there will be the customary few drops.''

``Thanks for the offer,'' Gee-whiz replied, ``Now I can let my axe rust. But it isn't going to do you a whit of good. For I still only have logspace.''

``Look, Gee-whiz, it is beyond my power to increase the size of your planet. But I shall make the task well worth your while; if you solve this problem, there will be no others. And to make life less monotonous, I shall set up this huge polynomial-bit binary clock in the sky. You can look at it when you wish, and it will tell you how many times it has rained since your toil began.''

``Gee whiz, I can do it!'' she exclaimed, ``and now I will be rid of you for good.''

To find out how Gee-whiz does it, show up at AFLB!

[This talk is based on work of Cai and Furst, presented at the Structure in Complexity Theory conference, 1987. Only very elementary facts about complexity theory will be assumed.]

(joint work with Alejandro Schaffer of Stanford University)

A collection of segments S = {s1, s2, ..., sn} in the plane R^2 is *stabbed* by a convex polygon P if the boundary of P intersects every segment in S. Arie Tamir at the Fourth NYU Computational Geometry Day posed the problem of finding a convex stabbing polygon for S or determining that none exists. In this talk we restrict S to contain only vertical segments and develop an optimal O(n log n) time and linear space algorithm.

We distinguish two cases for stabbing polygons, based on a notion of a ``canonical'' stabbing polygon. In one case, either the upper or the lower chain of the canonical stabbing polygon is simply a line segment; in the other, both the upper and lower chains are non-trivial. (Somewhat surprisingly, the former appears more difficult than the latter.) In each case, we solve the stabbing problem by transforming it to what we will call a *bipartite stabbing problem*. We show how to solve this problem in O(n log n) time and O(n) space using the notion of geometric duality.

We reduce both cases of the convex stabbing problem to this bipartite stabbing problem in O(n log n) time and O(n) space, where n is the number of segments in S, thus solving the convex stabbing problem in these same bounds. Our reduction in both cases is based on a `point-sweeping' paradigm (or, alternatively, a `chain-unwrapping' paradigm).
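The role of duality can be made concrete with a standard observation (my illustration, not the paper's actual construction): a non-vertical line y = m*x + c crosses the vertical segment at abscissa x with y-range [ylo, yhi] exactly when ylo <= m*x + c <= yhi, so in the dual (m, c)-plane each segment contributes a slab of two halfplanes, and the lines stabbing every segment form a convex region.

```python
def line_stabs_all(m, c, segments):
    """Check whether the line y = m*x + c crosses every vertical
    segment (x, ylo, yhi).  Each segment imposes the dual-plane slab
    ylo <= m*x + c <= yhi, so stabbing all segments means the point
    (m, c) lies in the intersection of all the slabs."""
    return all(ylo <= m * x + c <= yhi for (x, ylo, yhi) in segments)

segs = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (2.0, 0.0, 1.0)]
# the horizontal line y = 0.5 stabs all three segments;
# the steep line y = 2x overshoots the segment at x = 2
```

A single line is the degenerate stabbing polygon; the algorithm's harder task is to assemble convex upper and lower chains from such dual constraints.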

We'll discuss how this approach works, some of the questions which arise naturally when one takes it, and some techniques one can use for answering these questions.

Only a basic background in discrete probability, and some linear algebra will be assumed.

This work is in collaboration with Adi Shamir.

For a set S \subset E^d with n points in general position, let T \subset S contain d points. If the hyperplane defined by T bounds an open halfspace containing no more than k points of S, then T is a (<=k)-facet of S. For example, a (<=0)-facet of a set of points in the plane defines an edge of the convex hull of S, and in general, an ordinary facet of the convex hull of S corresponds to a (<=0)-facet of S. The Upper Bound Theorem of convex polytope theory says that the number of (<=0)-facets of S is O(n^s), where s is the integer part of d/2. In this talk, I'll use a simple probabilistic argument to show that the number of (<=k)-facets of S is no more than O(n^s k^{d-s}). Similar ideas can be used to show that this bound is tight, and to derive bounds for the expected number of (<=k)-facets of random points. This result has applications to algorithms for halfspace range reporting.
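In the plane (d = 2, s = 1) the definition is easy to check by brute force; this sketch only illustrates the definition, not the probabilistic argument: a pair of points is a (<=k)-facet when one open side of its line holds at most k of the remaining points, and the (<=0)-facets are exactly the convex hull edges.

```python
from itertools import combinations

def count_le_k_facets(points, k):
    """Count (<=k)-facets of a planar point set in general position:
    pairs {p, q} such that one open side of the line pq contains at
    most k of the remaining points."""
    def cross(o, a, b):          # signed area test: which side is b on?
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    n = len(points)
    total = 0
    for i, j in combinations(range(n), 2):
        left = sum(1 for m in range(n) if m not in (i, j)
                   and cross(points[i], points[j], points[m]) > 0)
        if min(left, n - 2 - left) <= k:
            total += 1
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# k = 0 counts the 4 hull edges; k = 1 also admits both diagonals
```

The talk's bound says this count is O(n^s k^{d-s}), i.e. O(n k) in the plane, far below the naive O(n^2) pairs when k is small.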

If time permits, I'll discuss a probabilistic argument that the total number of edges of a set of m cells in an arrangement of n lines in the plane is bounded by O(n^{2/3+epsilon} m^{2/3} + n log m) for any fixed epsilon > 0.