AFLB is the Algorithms for Lunch Bunch. It is run by Alex Schaffer. Meetings are Thursdays at 12:30 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

Contents:

  1. 9 October 1986: Cynthia Dwork (IBM Almaden). Distributed Coin Flipping -- A Survey of Results.
  2. 16 October 1986: Eli Shamir (Hebrew University and DEC SRC). Parallel Routing in Networks -- Back to Virtual Paths?
  3. 23 October 1986: Ramsey Haddad (Stanford University). Beta Operations: Efficient implementation of a primitive parallel operation.
  4. 30 October 1986: Andrew Yao (Princeton University). A Slightly Superlinear Lower Bound for the Randomized Complexity of Monotone Graph Properties.
  5. 6 November 1986: Robert W Floyd (Stanford University). Programs are random walks are programs.
  6. 13 November 1986: Steven Rudich (UC Berkeley). Circuit Complexity Stuff.
  7. 20 November 1986: Leonid Levin (Boston University and MIT). Extracting unpredictable bits from inputs of one-way functions.
  8. 4 December 1986: Russell Impagliazzo (UC Berkeley). Direct Zero-Knowledge Protocols for NP-complete Problems.
  9. 11 December 1986: Marshall Bern (UC Berkeley). A More General Special Case of the Steiner Tree Problem.
  10. 8 January 1987: Nicholas Pippenger (IBM Almaden Research Center). THE CONFERENCE-CALL TELEPHONE-EXCHANGE PROBLEM.
  11. 15 January 1987: Ashok Chandra (IBM Yorktown Heights). A Model for Hierarchical Memory.
  12. 22 January 1987: Silvio Ursic (Madison, WI). A Linear Characterization of Counting Problems.
  13. 29 January 1987: Robert Tarjan, Forsythe Lecturer (Princeton and AT&T Bell Labs). Fast Algorithms for Network Optimization.
  14. 5 February 1987: Persi Diaconis (Stanford University). Is there a non-commutative Fast Fourier Transform?
  15. 9 February 1987: Richard Cole (Courant Institute, NYU). Parallel Merge Sort.
  16. 12 February 1987: Daniel Greene (Xerox PARC). Finite-Resolution Computational Geometry.
  17. 19 February 1987: A. Schaffer. Open problem session.
  18. 25 February 1987: William Aiello (MIT). ON THE POWER OF INTERACTION.
  19. 26 February 1987: Daniel Sleator (Carnegie-Mellon University). Competitive Algorithms for On-line Problems.
  20. 4 March 1987: Joel Friedman (UC Berkeley). On Newton's Method.
  21. 5 March 1987: Subhash Suri (Johns Hopkins). An O(n log n) time algorithm for computing the Link Diameter of a simple polygon.
  22. 11 March 1987: Andrew Goldberg (MIT). Solving Minimum-Cost Flow Problems by Successive Approximation.
  23. 12 March 1987: B.K. Natarajan (Carnegie-Mellon). The Automated Design of Parts Feeders.
  24. 19 March 1987: Pang Chen. Problem Session.
  25. 26 March 1987: Stavros S. Cosmadakis (IBM, Yorktown). The Word and Generator Problems for Lattices.
  26. 2 April 1987: Lenore D. Zuck (Yale University). A Little Knowledge Goes a Long Way: Simple knowledge-based derivations and correctness proofs for a family of protocols.
  27. 16 April 1987: Richard Beigel (Johns Hopkins University). Tradeoffs in the Solution of Several Instances of an NP-Complete Problem.
  28. 23 April 1987: Leonidas J. Guibas (Stanford and DEC SRC). The Complexity of Many Faces in Arrangements of Lines and Segments.
  29. 30 April 1987: Stephen Vavasis (Stanford). EXPONENTIAL LOWER BOUNDS FOR BROUWER FIXED POINTS.
  30. 7 May 1987: James Renegar (Stanford). On the Worst-Case Arithmetic Complexity of Approximating Zeros of Polynomials.
  31. 14 May 1987: John Hershberger (Stanford). Optimal Shortest Path Queries in a Simple Polygon.
  32. 21 May 1987: Miklos Ajtai (IBM Almaden). Deterministic Simulation in LOGSPACE.
  33. 28 May 1987: Christos H. Papadimitriou (Stanford). THE GEOMETRY OF FINGERS.
  34. 4 June 1987: Professors D.E. Knuth, C.H. Papadimitriou, J.D. Ullman (Stanford). Solutions to the 1987 Analysis of Algorithms Qualifying Examination, Part I.
  35. 11 June 1987: Professors D.E. Knuth, C.H. Papadimitriou, J.D. Ullman (Stanford). Solutions to the 1987 Analysis of Algorithms Qualifying Examination, Part II.
  36. 25 June 1987: Philippe Flajolet (INRIA, France). SINGULARITY ANALYSIS OF GENERATING FUNCTIONS.
  37. 2 July 1987: Ashok Subramanian (Stanford). PSPACE Survives Three-Bit Bottlenecks.
  38. 9 July 1987: Ramsey W. Haddad (Stanford). Recognizing Bellman-Ford-Orderable Graphs.
  39. 23 July 1987: Jack Snoeyink (Stanford). Stabbing Vertical Segments with a Convex Polygon.
  40. 6 August 1987: Anil Gangolli (Stanford). Generating Random Combinatorial Objects with Random Walks.
  41. 26 August 1987: Yossi Matias (Weizmann Institute). A Video Scrambling Technique Based on Space Filling Curves.
  42. 27 August 1987: Ken Clarkson (AT & T Bell Laboratories). Probabilistic Methods in Combinatorial Geometry.

9 October 1986

Margaret Jacks Hall 352. 12:30 p.m.

Cynthia Dwork (IBM Almaden)

Distributed Coin Flipping -- A Survey of Results

In the context of a distributed system a global coin is a ``sufficiently unbiased'' source of randomness visible to ``sufficiently many'' processors. In 1983 Rabin observed that Byzantine agreement can be obtained in expected time O(T(n)), independent of the number of faulty processors, where T(n) is the number of rounds of communication required to achieve a global coin among n processors of which up to a linear fraction may be faulty. Rabin's result offered the possibility, soon realized, of beating the lower bound of t+1 rounds required by any deterministic algorithm to reach agreement in the presence of t faults. At the same time a similar idea enabled Ben-Or to obtain a randomized asynchronous agreement algorithm, beating the impossibility result for deterministic algorithms in that model. This talk surveys work of the past 3 or 4 years on the problem of achieving a global coin. Several open questions will be raised.


16 October 1986

Margaret Jacks Hall 352. 12:30 p.m.

Eli Shamir (Hebrew University and DEC SRC)

Parallel Routing in Networks -- Back to Virtual Paths?

There are networks, ranging in size from a few hundred to a few million nodes, which achieve a diameter-to-fanout ratio much smaller (hence better) than that of hypercubes. This implies a massive number of edge-disjoint paths, allowing parallel communication requests to be routed over virtual paths instead of by randomized packet switching (which is used only for the tracers that find and mark these paths).

Algorithms and implementation issues will be discussed.


23 October 1986

Margaret Jacks Hall 352. 12:30 p.m.

Ramsey Haddad (Stanford University)

Beta Operations: Efficient implementation of a primitive parallel operation

The Beta Operation was introduced as a parallel programming primitive by Hillis as a means to reduce the complexity of programming his hypercube-based Connection Machine. The Beta Operation performs a combination of sorting and data reduction. (One possible application would be: sorting numbers in the presence of MANY duplicates.) We shall explore efficient ways to perform this operation on the hypercube and the mesh-of-trees.

Let the input size of the problem be N and the output size M (note that 1 <= M <= N). We will show how to perform the operation efficiently in time that is a function of BOTH N and M.

If the parallel time complexity of performing the operation is expressed solely as a function of the number of inputs, the time bounds are trivially seen to match those of sorting --- O(log^2 N) for the N-node hypercube and O(sqrt N) for the sqrt N x sqrt N mesh-of-trees. It is also clear that if M=1, the operation can be performed in O(log N) time on either parallel architecture.

What we will show is that the Beta Operation can be performed on an N-node hypercube in O(log N + log^2 M) time. For a sqrt N x sqrt N mesh-of-trees, we require O(log N + sqrt M) time. Thus, we reduce the time needed when using the parallel primitive in cases where a lot of data reduction leaves few outputs (that is, with small M --- for example, if M is polylog in N).
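
As a concrete reading of these semantics, here is a minimal sequential sketch of the operation itself (not the hypercube or mesh-of-trees implementations the talk analyzes): sort the (key, value) pairs, then reduce each run of equal keys with an associative combining function. The names beta and combine and the pair representation are illustrative assumptions.

    from itertools import groupby

    def beta(pairs, combine):
        # Sort by key, then reduce each run of equal keys: the output
        # size M is the number of distinct keys, 1 <= M <= N.
        result = []
        for key, run in groupby(sorted(pairs), key=lambda kv: kv[0]):
            values = [v for _, v in run]
            acc = values[0]
            for v in values[1:]:
                acc = combine(acc, v)
            result.append((key, acc))
        return result

    # e.g. beta([(3, 1), (1, 1), (3, 1)], lambda a, b: a + b) == [(1, 1), (3, 2)]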


30 October 1986

Margaret Jacks Hall 352. 12:30 p.m.

Andrew Yao (Princeton University)

A Slightly Superlinear Lower Bound for the Randomized Complexity of Monotone Graph Properties

(No abstract received due to network troubles)


6 November 1986

Margaret Jacks Hall 352. 12:30 p.m.

Robert W Floyd (Stanford University)

Programs are random walks are programs

To determine moments and other expected values of variables resulting from random walks or from programs with pseudorandom components, a certain systematic method seems effective:

* Introduce explicit variables to track all parameters of interest.

* By operator strength reduction (e.g., finite difference) methods, semilinearize the computational steps.

* Introduce explicit deterministic variables that track expected values of the random ones by linear recurrence. A well-known theorem about conditional expected values is useful.

* Find invariants of the resulting program and solve for final values. Typically, this entails finding eigenvectors of a triangular linear system.

The method has determined several means and variances in coalesced hashing, and high moments of certain random walks. It uses no higher mathematical notions than those mentioned above, and tends to provide a firm sense of direction to the analysis.
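
As a worked miniature of the method (a standard example, not taken from the talk): for the symmetric random walk X_{n+1} = X_n + B_n, with B_n = +1 or -1 equiprobably and independent of X_n, introduce the deterministic variables m_n = E[X_n] and s_n = E[X_n^2]. The well-known theorem alluded to above (the tower law for conditional expectation) yields linear recurrences:

    \mathbb{E}[X_{n+1} \mid X_n] = X_n \;\Rightarrow\; m_{n+1} = m_n,
    \qquad
    \mathbb{E}[X_{n+1}^2 \mid X_n] = X_n^2 + 2X_n\,\mathbb{E}[B_n] + \mathbb{E}[B_n^2]
                                   = X_n^2 + 1 \;\Rightarrow\; s_{n+1} = s_n + 1,

so m_n = m_0 and s_n = s_0 + n; in particular Var[X_n] = n when X_0 = 0.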


13 November 1986

Margaret Jacks Hall 352. 12:30 p.m.

Steven Rudich (UC Berkeley)

Circuit Complexity Stuff

I will prove:

1) computing parity requires exactly 4n-4 fan-in-2 AND, OR, and NOT gates.

2) the minimal circuit computing any graph-theoretic property (connectivity, 3-colorability, ...) has no input that fans out to more than 34n gates (n is the number of inputs into the circuit).

3) other fun stuff as time permits.


20 November 1986

Margaret Jacks Hall 352. 12:30 p.m.

Leonid Levin (Boston University and MIT)

Extracting unpredictable bits from inputs of one-way functions

There is an abundant supply of functions believed to be one-way, i.e., easy to compute but hard to invert (such beliefs imply P != NP and, thus, are not supported by proofs). But even if x cannot be completely computed from f(x) in polynomial time, some valuable information about x may be easy to infer. Which bits b(x) of information about x are unpredictable from f(x), i.e., cannot be guessed in polynomial time with polynomial correlation? I believe (but cannot prove) that the boolean inner product (z*x) is independent of (z,f(x)), no matter what one-way function f is chosen. Other methods do work and will be described. This leads to various applications, in particular to simpler and more efficient "perfect" pseudorandom generators.
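
For concreteness, a minimal sketch of the boolean inner-product predicate singled out above, on bitmask-encoded strings (the encoding, the function names, and the toy sampling routine are assumptions, not constructions from the talk):

    import random

    def inner_product_bit(z, x):
        # Boolean inner product <z, x> over GF(2): the parity of the
        # bitwise AND of the two bitmasks.
        return bin(z & x).count("1") % 2

    def hardcore_sample(f, nbits):
        # Publish (z, f(x)) and keep b(x) = <z, x>; the question raised
        # above is whether b(x) remains unpredictable from the published
        # pair for every one-way f.
        x = random.getrandbits(nbits)
        z = random.getrandbits(nbits)
        return (z, f(x)), inner_product_bit(z, x)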


4 December 1986

Margaret Jacks Hall 352. 12:30 p.m.

Russell Impagliazzo (UC Berkeley)

Direct Zero-Knowledge Protocols for NP-complete Problems

A zero-knowledge protocol is a way for one party who knows a solution to a problem to convince another party that the problem is solvable without revealing any other information about the problem. Earlier protocols for NP-complete problems seemed to rely on the fact that solutions to these problems, such as Hamiltonian Circuit or Graph 3-colorability, can be verified through ``geometric properties''; i.e., verification of the solutions does not involve computation of any complexity.

However, the same techniques can be extended to give general protocols that enable one party to verify that she has performed a fixed arithmetic, Boolean or Turing machine computation correctly, without revealing any information about either the inputs or the outputs. This leads to direct protocols for problems such as Subset Sum and Satisfiability, as well as a general method for converting a polynomial-time nondeterministic Turing Machine program into a zero-knowledge protocol for the language it recognizes. A slight adaptation allowing probabilistic computation gives a general method for converting any protocol into an equivalent zero-knowledge protocol.


11 December 1986

Margaret Jacks Hall 352. 12:30 p.m.

Marshall Bern (UC Berkeley)

A More General Special Case of the Steiner Tree Problem

The Steiner tree problem on networks asks for a minimum length tree spanning a given subset N of the vertices in the network. (The vertices not in N are optional.) This problem is NP-complete even for planar networks.

I'll give a polynomial-time algorithm for the Steiner tree problem on planar networks in the special case that there are a small number of faces which contain all the vertices in N. I'll then show improvements of the basic algorithm as it applies to the rectilinear Steiner problem. My basic algorithm is a refinement of a dynamic programming algorithm due to Dreyfus and Wagner that is, in general, exponential time.
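
For reference, a minimal sketch of the Dreyfus-Wagner dynamic program that the talk's algorithm refines (exponential only in the number of terminals); the bitmask state encoding and the Floyd-Warshall preprocessing are implementation assumptions:

    def dreyfus_wagner(n, edges, terminals):
        # dp[S][v] = cost of a cheapest tree spanning the terminal
        # subset S (a bitmask) together with vertex v.
        INF = float("inf")
        d = [[INF] * n for _ in range(n)]     # metric closure
        for v in range(n):
            d[v][v] = 0
        for u, v, w in edges:
            d[u][v] = min(d[u][v], w)
            d[v][u] = min(d[v][u], w)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]

        full = (1 << len(terminals)) - 1
        dp = [[INF] * n for _ in range(full + 1)]
        for i, t in enumerate(terminals):
            for v in range(n):
                dp[1 << i][v] = d[t][v]       # base case: a shortest path
        for S in range(1, full + 1):
            if S & (S - 1) == 0:
                continue                      # singletons already done
            for v in range(n):
                sub = (S - 1) & S             # merge two subtrees at v
                while sub:
                    dp[S][v] = min(dp[S][v], dp[sub][v] + dp[S ^ sub][v])
                    sub = (sub - 1) & S
            merged = dp[S]                    # move the junction along
            dp[S] = [min(merged[u] + d[u][v] for u in range(n))
                     for v in range(n)]       # shortest paths
        return min(dp[full])

The running time is roughly O(3^t n + 2^t n^2 + n^3) for t terminals; the talk's refinement exploits the terminals lying on a small number of faces of a planar network.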


8 January 1987

Margaret Jacks Hall 352. 12:30 p.m.

Nicholas Pippenger (IBM Almaden Research Center)

THE CONFERENCE-CALL TELEPHONE-EXCHANGE PROBLEM

We consider the problem of constructing, with a minimum number of ``switches'', a telephone exchange capable of continuously accommodating any number of disjoint but dynamically changing ``conferences'', always without disturbing existing connections. It has long been known that Omega(n log n) switches are required, but until recently the best upper bound available was O(n (log n)^2). We shall present a new upper bound of O(n log n), due to P. Feldman, J. Friedman and the speaker, and describe some algorithmic problems left open by this work.


15 January 1987

Margaret Jacks Hall 352. 12:30 p.m.

Ashok Chandra (IBM Yorktown Heights)

A Model for Hierarchical Memory

Large processors typically have a complex memory hierarchy. For instance, there may be registers, cache, main memory and extended store. We consider the following approximation (the hierarchical memory model or HMM): a random access machine where access to memory location x requires log x time instead of the usual constant time. Algorithms that take time T(n) on a RAM can be run in time O(T(n) log n) when T(n) is polynomial. It is shown that in some cases this log n factor cannot be improved (as for searching). In other cases it can be reduced to a constant (e.g. for matrix multiplication using only semiring operations). For yet others it can be reduced only to log log n (e.g. for oblivious sorting or for computations on a butterfly network). Differences in the various algorithms give some indication of the inherent locality of reference in the problem.
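
A minimal sketch of the model's cost accounting (the exact charge at address 0 and the base of the logarithm are conventions I am assuming):

    import math

    def hmm_cost(accesses):
        # Hierarchical memory model: touching address x costs about
        # log x, charged here as max(1, ceil(log2(x + 1))); a uniform
        # RAM would charge 1 per access.
        return sum(max(1, math.ceil(math.log2(x + 1))) for x in accesses)

A left-to-right scan of n cells then costs Theta(n log n) against Theta(n) on a RAM; algorithms recover the log factor by staging frequently used data at low addresses, which is the locality of reference the abstract refers to.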


22 January 1987

Margaret Jacks Hall 352. 12:30 p.m.

Silvio Ursic (Madison, WI)

A Linear Characterization of Counting Problems

The problem of counting the number of inputs accepted by a Turing machine in a polynomial number of steps is transformed to a linear program in a polynomial number of variables and with integer coefficients of polynomial size.

The polytopes defined by the linear program are projections of cubes. Their vertices are labeled with some of the boolean functions in N boolean variables. They are defined with recursions that mirror the binomial recursion and have been named ``binomial polytopes''.

Boolean symmetric functions assume a preeminent role in this work. A new symmetry, distinct from negation and permutation of variables, is found to be present. The results extend to counting problems the range of combinatorial problems that can be approached with methods for solving systems of linear inequalities. They show that an incomplete knowledge of the face structure of the associated polytopes leads to imprecise counting. With a linear program, we obtain upper and lower bounds on the number of inputs accepted by the associated Turing machine. They also show that probabilistic Turing machines are a more natural model for integer and linear programming methods based on an incomplete description of the underlying polytopes.


29 January 1987

Margaret Jacks Hall 352. 12:30 p.m.

Robert Tarjan, Forsythe Lecturer (Princeton and AT&T Bell Labs)

Fast Algorithms for Network Optimization

Recent work by Andrew Goldberg of MIT, Harold Gabow of the University of Colorado, and the speaker has yielded new fast algorithms for the maximum flow problem, the minimum cost flow problem, and the weighted matching problem. These algorithms and the ideas behind them will be discussed.


5 February 1987

Margaret Jacks Hall 352. 12:30 p.m.

Persi Diaconis (Stanford University)

Is there a non-commutative Fast Fourier Transform?

The Fast Fourier Transform is a collection of ideas for computing discrete Fourier Transforms. Usually it is carried out on the integers modulo n, or on the group of binary n-tuples.

Recently I have had to compute some non-commutative transforms on the symmetric group S_49. I will sketch available ideas and open problems.
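
For the commutative case mentioned above, the fast transform on the group of binary n-tuples is the Walsh-Hadamard transform; a sketch is included here purely as a point of comparison for the non-commutative question (the butterfly formulation is the standard one, not the speaker's):

    def walsh_hadamard(a):
        # Fast transform over the group Z_2^n; len(a) must be a power
        # of two. Each pass combines entries whose indices differ in a
        # single bit -- the butterfly structure behind the speed.
        a = list(a)
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                for j in range(i, i + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y
            h *= 2
        return a

    # e.g. walsh_hadamard([1, 0, 0, 0]) == [1, 1, 1, 1]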


9 February 1987

Margaret Jacks Hall 352. 10:00 a.m.

Richard Cole (Courant Institute, NYU)

Parallel Merge Sort

We give a parallel implementation of merge sort on a CREW PRAM that uses n processors and O(log n) time; the constant in the running time is small. We also give a more complex version of the algorithm for the EREW PRAM; it also uses n processors and O(log n) time. The constant in the running time is still moderate, though not as small.


12 February 1987

Margaret Jacks Hall 352. 12:30 p.m.

Daniel Greene (Xerox PARC)

Finite-Resolution Computational Geometry

Many of the algorithms of Computational Geometry are designed assuming infinite precision arithmetic, and yet often, for computer graphics applications, they must be used at a finite resolution. Because of this fundamental mismatch, some of the basic techniques of Computational Geometry are notoriously hard to use in practice. This talk will explore ways of converting algorithms that require infinite precision to finite precision counterparts. In the case of the line intersection problem, criteria will be formulated for an accurate finite-resolution solution, and we will show how this can be achieved using a variation of the continued fraction algorithm coupled with a succinct way of controlling the movement of line segments.

(Joint work with Frances Yao)
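
The continued-fraction ingredient can be illustrated by its standard building block: the simplest rational in an interval, which is one way an exactly computed intersection can be snapped to a finite-resolution representative. A minimal sketch under those assumptions (the closed-interval convention and the function name are mine; the talk's variation is more involved):

    import math
    from fractions import Fraction

    def simplest_between(lo, hi):
        # Rational with the smallest denominator in [lo, hi]: peel off
        # the integer part shared by both endpoints' continued-fraction
        # expansions, then recurse on the reciprocals.
        lo, hi = Fraction(lo), Fraction(hi)
        if math.ceil(lo) <= hi:          # an integer lies in the interval
            return Fraction(math.ceil(lo))
        a = math.floor(lo)               # lo and hi agree on integer part a
        return a + 1 / simplest_between(1 / (hi - a), 1 / (lo - a))

    # e.g. simplest_between(Fraction(27, 100), Fraction(1, 3)) == Fraction(1, 3)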


19 February 1987

Margaret Jacks Hall 352. 12:30 p.m.

A. Schaffer

Open problem session


25 February 1987

Margaret Jacks Hall 352. 1:00 p.m.

William Aiello (MIT)

ON THE POWER OF INTERACTION

A hierarchy of probabilistic complexity classes generalizing NP has recently emerged in the work of Babai[85]; Goldwasser, Micali, and Rackoff[85]; and Goldwasser and Sipser[86]. The IP hierarchy is defined through the notion of an interactive proof system in which an all powerful prover tries to convince a probabilistic polynomial time verifier that a string x is in a language L. The verifier tosses coins and exchanges messages back and forth with the prover before it decides whether to accept x. This proof-system yields ``probabilistic'' proofs: the verifier may erroneously accept or reject x with small probability.

The class IP[f(n)] is said to contain L if there exists an interactive proof system with f(n)-message exchanges (interactions) such that with high probability the verifier accepts x (of size n) if and only if x is in L.

Babai[85] showed that all languages recognized by interactive proof systems with a bounded number of interactions can be recognized by interactive proof systems with only two interactions. That is, for every constant k, IP[k] collapses to IP[2].

In this paper, we give evidence that interactive proof systems with some unbounded number of interactions may be MORE POWERFUL than interactive proof systems with a bounded number of interactions. We show that for any unbounded function f(n) there exists an oracle B such that IP^B[f(n)] is not contained in PH^B. This implies that IP^B[f(n)] is not contained in IP^B[2], since IP^B[2] is contained in Pi_2^B for all oracles B.

The techniques employed are extensions of the techniques for proving lower bounds on small depth circuits used in Furst, Saxe, and Sipser[81]; Yao[85]; and Hastad[86].


26 February 1987

Margaret Jacks Hall 352. 12:30 p.m.

Daniel Sleator (Carnegie-Mellon University)

Competitive Algorithms for On-line Problems

In a variety of situations, ranging from motion planning, to data structure design, to control of caches, it is necessary to process a sequence of requests on-line. In such a situation an algorithm must be devised that decides how to process each request and how to arrange the system, without knowing what the future requests will be. An on-line algorithm for processing a sequence of requests is said to be competitive if for any sequence of requests the performance of the algorithm on that sequence is within a constant factor of the performance of the optimum off-line algorithm. There is reason to believe that competitiveness is a better measure of an algorithm's performance in real situations than other methods, such as average case analysis. Competitive algorithms have been discovered for a variety of problems. This talk will describe several of these.
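
As a toy instance of the definition, move-to-front for the list-update problem (one of the competitive rules from the speaker's work with Tarjan); charging an access at position i a cost of i is a common simplification I am adopting:

    def move_to_front_cost(requests, items):
        # Serve each request by scanning the list, paying the position
        # of the requested item, then move it to the front. On every
        # request sequence the total cost is within a constant factor
        # of the best off-line strategy -- i.e., the rule is competitive.
        lst = list(items)
        total = 0
        for r in requests:
            i = lst.index(r)
            total += i + 1
            lst.insert(0, lst.pop(i))
        return total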


4 March 1987

Margaret Jacks Hall 352. 2:15 p.m.

Joel Friedman (UC Berkeley)

On Newton's Method

In this talk we estimate the density of the set of points for which Newton's method converges. Given a polynomial of degree d with complex coefficients whose roots lie in the unit ball (a condition which is easy to check and to ensure by suitable rescaling), we ask what is the measure of the set of points in the ball of radius 2 for which Newton's method converges. We prove that for each d this measure is bounded from below by a positive number, and we give an estimate of this number. We may also discuss results on a related problem, an average case analysis of the set of points for which Newton's method converges "very" quickly (i.e. "approximate zero" for Newton's method).
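
The quantity being bounded can be estimated empirically. A minimal Monte Carlo sketch (the iteration budget, tolerance, and example polynomial are assumptions):

    import random

    def newton_converges(p, dp, z, iters=60, tol=1e-9):
        # Iterate z <- z - p(z)/dp(z); report convergence if the step
        # shrinks below tol within the iteration budget.
        for _ in range(iters):
            d = dp(z)
            if d == 0:
                return False
            step = p(z) / d
            z = z - step
            if abs(step) < tol:
                return True
        return False

    def convergence_measure(p, dp, trials=20000):
        # Estimate the fraction of the radius-2 disk from which Newton's
        # method converges -- the measure the talk bounds from below.
        hits = 0
        for _ in range(trials):
            while True:                  # uniform point in the disk |z| <= 2
                x, y = random.uniform(-2, 2), random.uniform(-2, 2)
                if x * x + y * y <= 4:
                    break
            hits += newton_converges(p, dp, complex(x, y))
        return hits / trials

    # e.g. convergence_measure(lambda z: z**3 - 1, lambda z: 3 * z * z)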


5 March 1987

Margaret Jacks Hall 352. 12:30 p.m.

Subhash Suri (Johns Hopkins)

An O(n log n) time algorithm for computing the Link Diameter of a simple polygon

We study motion planning problems for a point-size robot, where a minimum possible number of turns is to be made during the motion. The motion takes place within a simple polygon P having n vertices, whose boundary acts as the fixed set of obstacles. The link distance between two points of P equals the minimum number of straight segments needed by any polygonal path that connects the two points in P. Our main result is an O(n log n) time and O(n) space algorithm for computing the maximal link distance between any two points of P, also called the Link Diameter of P.

Optimal algorithms are obtained also for related problems, e.g., finding a minimum link path between two fixed points, and finding a collection of minimum link paths from a fixed point to all vertices of P. Our diameter finding algorithm also gives a close approximation to the link center of P, which is a point that minimizes the maximum link distance to all points of P.


11 March 1987

Margaret Jacks Hall 252. 10:00 a.m.

Andrew Goldberg (MIT)

Solving Minimum-Cost Flow Problems by Successive Approximation

We introduce a framework for solving minimum-cost flow problems. Our approach is to measure the quality of a solution by the amount that the complementary slackness conditions are violated. We show how to extend techniques developed for the maximum flow problem to improve the quality of a solution. This framework allows us to achieve O(min(n^3, n^{5/3} m^{2/3}, nm log n) log(nC)) running time. (Here n is the number of nodes in the input network, m is the number of edges, and C is the absolute value of the biggest cost.) This significantly improves the previous bounds. We believe that our results are interesting from both theoretical and practical viewpoints.
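
The violation measure is usually stated through reduced costs with respect to a node price function p; in the standard presentation of this framework (notation assumed), a flow is epsilon-optimal when

    c_p(v, w) \;=\; c(v, w) + p(v) - p(w) \;\ge\; -\varepsilon
    \qquad \text{for every edge } (v, w) \text{ of the residual graph.}

Each phase scales epsilon down geometrically, and with integral costs any epsilon-optimal flow with epsilon < 1/n is exactly optimal, which is where the log(nC) factor in the bound comes from.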


12 March 1987

Margaret Jacks Hall 352. 12:30 p.m.

B.K. Natarajan (Carnegie-Mellon)

The Automated Design of Parts Feeders

In robot-automated assembly, precise knowledge of the orientation of the components involved is essential if the robot is to successfully pick up and assemble them. Consequently, it is desirable to feed the robot with a stream of each component, the orientation of the object at the head of each stream being uniquely known.

We discuss two methods for producing such streams: (a) Sensorless manipulation of the object to achieve unique orientation (b) Sensing without manipulation to detect orientation. For both methods, we present paradigms and develop algorithms that take the geometry of the object as input and produce manipulation/sensing schemes as output.

Our approach will be a little more formal than is traditional in the area.


19 March 1987

Margaret Jacks Hall 352. 12:30 p.m.

Pang Chen

Problem Session


26 March 1987

Margaret Jacks Hall 352. 12:30 p.m.

Stavros S. Cosmadakis (IBM, Yorktown)

The Word and Generator Problems for Lattices

We present polynomial-time algorithms for the uniform word problem and for the generator problem for lattices. The algorithms are derived from novel, proof-theoretic approaches. We show that both problems are log-space complete for P, but can be solved in deterministic logarithmic space in the case of free lattices. We also show that the more general problem of testing whether a given open sentence is true in all lattices is co-NP complete.


2 April 1987

Margaret Jacks Hall 352. 12:30 p.m.

Lenore D. Zuck (Yale University)

A Little Knowledge Goes a Long Way: Simple knowledge-based derivations and correctness proofs for a family of protocols

We use a high-level, knowledge-based approach for deriving a family of protocols for the sequence transmission problem. The protocols of Aho, Ullman, and Yannakakis, the Alternating Bit protocol, and Stenning's protocol are all instances of one knowledge-based protocol that we derive. Our derivation leads to easy and uniform correctness proofs for all these protocols.

This is joint work with J.Y. Halpern of IBM Almaden Research Center.


16 April 1987

Margaret Jacks Hall 352. 12:30 p.m.

Richard Beigel (Johns Hopkins University)

Tradeoffs in the Solution of Several Instances of an NP-Complete Problem

In this talk we address the complexity of answering k instances of an NP-complete decision problem. The central question is (1) Can we reduce the problem of answering k instances of SAT to answering only k-1 instances of SAT? (Naturally, we only allow polynomial time reductions in connection with SAT.)

The corresponding problem for complete (for r.e.) sets has a positive answer: We can always reduce the problem of answering 2^n-1 instances of a complete problem to answering only n instances of the complete problem. The reduction requires a tradeoff, because we cannot know in advance which n instances of the complete problem we will have to solve.

We might hope that a similar tradeoff would allow us to efficiently solve a large number of instances of SAT.

We also consider variants of question (1) in which we are only trying to solve some decision problem that depends on the answers to k instances of SAT. We ask whether we can reduce the decision problem to answering (2) only k-1 instances of SAT or (3) only a fixed number of instances of SAT independent of k.

We show:

(1) is false under almost all relativizations, i.e., no tradeoff is possible in the solution of k instances of SAT.

(2) is true if k > 2, i.e., a tradeoff is possible in the solution of a decision problem.

(3) implies not (1), unless P = NP, i.e., if a huge tradeoff is possible in the solution of a decision problem, then no tradeoff is possible in the solution of k instances of SAT.

The talk will assume only a very basic familiarity with the theory of NP-completeness.


23 April 1987

Margaret Jacks Hall 352. 12:30 p.m.

Leonidas J. Guibas (Stanford and DEC SRC)

The Complexity of Many Faces in Arrangements of Lines and Segments

We show that the total number of edges in m faces of an arrangement of n lines in the plane is O(m^{.6+delta} n^{.8+delta} + m + n log n), for any delta > 0, thereby improving a previous upper bound of O(m^{.5} n) due to Edelsbrunner and Welzl. Our proof uses an algorithmic approach: we describe an algorithm for the calculation of these m faces, and derive the upper bound from the analysis of the space complexity of the algorithm. The algorithm is based on half-plane range searching, and uses the epsilon-net technique of Haussler and Welzl. The time complexity of our algorithm is within a log n factor of the above bound. If instead of lines we have an arrangement of n line segments, then we show that the total complexity of m faces in this arrangement is O(m^{.6+delta} n^{.8+delta} + m + n alpha(n) log n), for any delta > 0, where alpha(n) is the extremely slowly growing functional inverse of Ackermann's function.

(joint work with Micha Sharir)


30 April 1987

Margaret Jacks Hall 352. 12:30 p.m.

Stephen Vavasis (Stanford)

EXPONENTIAL LOWER BOUNDS FOR BROUWER FIXED POINTS

The Brouwer fixed point theorem has become a major tool for modeling economic systems during the 20th century. It was intractable to use the theorem in a computational manner until 1965, when Scarf provided the first practical algorithm for finding a fixed point of a Brouwer map. Scarf's work left open the question of worst-case complexity, although he hypothesized that his algorithm had ``typical'' behavior of polynomial time in the number of variables of the problem. In the talk we show that ANY algorithm for fixed points based on function evaluation (which includes all general purpose fixed-point algorithms) must in the worst case take a number of steps which is exponential both in the number of digits of accuracy and in the number of variables.

This talk represents joint work with Michael Hirsch of the UC-Berkeley math department.


7 May 1987

Margaret Jacks Hall 352. 12:30 p.m.

James Renegar (Stanford)

On the Worst-Case Arithmetic Complexity of Approximating Zeros of Polynomials

Let Pd(R) denote the set of degree d complex polynomials with all zeros z satisfying |z| <= R. For fixed d >= 2, we show that, with respect to a certain model of computation, the worst-case computational complexity of obtaining an epsilon-approximation to either one, or to each, zero of an arbitrary f in Pd(R) is Theta(log log(R/epsilon)); that is, we prove both lower and upper bounds. A new algorithm, based on Newton's method, is introduced for proving the upper bound.


14 May 1987

Margaret Jacks Hall 352. 12:30 p.m.

John Hershberger (Stanford)

Optimal Shortest Path Queries in a Simple Polygon

Let P be a simple polygon with n sides. I will show how to preprocess the polygon so that, given two query points p and q inside P, the length of the shortest path inside the polygon from p to q can be found in time O(log n). The path itself must be polygonal and can be extracted in additional time proportional to the number of turns it makes. The preprocessing consists of triangulation plus a linear amount of additional work. I will also describe several applications of the result.

This is joint work with Leo Guibas.


21 May 1987

Margaret Jacks Hall 352. 12:30 p.m.

Miklos Ajtai (IBM Almaden)

Deterministic Simulation in LOGSPACE

We show that a wide class of probabilistic algorithms can be simulated by deterministic algorithms. Namely, if there is a test in LOGSPACE such that a random sequence of length (log n)^{1.5} passes the test with probability at least 1/n, then a deterministic sequence can be constructed in LOGSPACE that also passes the test. It is important that the machine performing the test gets each bit of the sequence only once. The theorem remains valid if both the test and the machine constructing the satisfying sequence have access to the same oracle of polynomial size.

The sequence that we construct does not really depend on the test, in the sense that a polynomial family of sequences is constructed so that at least one of them passes any test. The polynomial family is the same even if the test is allowed to use an oracle of polynomial size, and it can be constructed in LOGSPACE (without using an oracle).


28 May 1987

Margaret Jacks Hall 352. 12:30 p.m.

Christos H. Papadimitriou (Stanford)

THE GEOMETRY OF FINGERS

ABSTRACT: A *form closure* of a solid object is a finite set of wrenches (force-moment combinations) applied on the object, with the property that any other wrench acting on the object can be balanced by a positive combination of the original ones. Intuitively, form closure is a way of defining the notion of a ``stable grip'' of the object, when friction is not taken into account. It has been pointed out by Reuleaux and Somoff in the last century, and more recently by Lakshminarayana, that the form closure of a two-dimensional object requires at least four wrenches, and that form closure of a three-dimensional object requires at least seven wrenches. It was also conjectured that these numbers can be achieved by wrenches realizable as forces normal to the surface of the object (such wrenches are called *fingers*).

In this talk I shall outline a proof of this conjecture (discovered jointly with Xanthippi Markenscoff and Luqun Ni). In particular we show that form closure of any two-dimensional object with piecewise smooth boundary, except for the circle, can be achieved by four fingers. For three dimensions we prove that form closure of any object with piecewise smooth boundary can be achieved with twelve fingers if and only if the object does not have a rotational symmetry (in which case, of course, form closure is not achievable, since no moment along the axis of symmetry can be balanced). We also show that, under very general conditions, form closure of three-dimensional objects without rotational symmetries can be achieved with seven fingers. Finally, we show that, when friction is taken into account, under the most relaxed assumptions three fingers are necessary and sufficient in two dimensions, and four fingers in three dimensions. If time permits, I shall discuss the problem of achieving the *best* stable grip of an object.


4 June 1987

Margaret Jacks Hall 352. 12:30 p.m.

Professors D.E. Knuth, C.H. Papadimitriou, J.D. Ullman (Stanford)

Solutions to the 1987 Analysis of Algorithms Qualifying Examination, Part I

Copies of the exam will be available in Phyllis Winkler's office.


11 June 1987

Margaret Jacks Hall 352. 12:30 p.m.

Professors D.E. Knuth, C.H. Papadimitriou, J.D. Ullman (Stanford)

Solutions to the 1987 Analysis of Algorithms Qualifying Examination, Part II


25 June 1987

Margaret Jacks Hall 352. 12:30 p.m.

Philippe Flajolet (INRIA, France)

SINGULARITY ANALYSIS OF GENERATING FUNCTIONS

It is known from classical mathematics that the Taylor coefficients of a function are largely determined by the singularities of the function. This talk will review methods for extracting asymptotics of coefficients of generating functions given either explicitly or implicitly via functional equations.

The classical methods are: (i) Tauberian Theorems. (ii) The Darboux method that relies on a simple lemma from Fourier analysis and which was used extensively by Polya in tree enumerations.

We shall present here the "transfer lemma" approach based on works by Odlyzko (counting of 2-3 trees) and Flajolet-Odlyzko (height of trees). That method requires only information about the order of growth of the function around its dominant singularity (or singularities) in a complex domain. It is useful in cases where smoothness conditions are not necessarily satisfied, and it can accommodate a wide range of singular behaviours for generating functions. (A worked example follows the list of applications below.)

Applications to be given include:

-The counting of 2-3 trees [Odlyzko]

-The height of binary trees [Flajolet-Odlyzko]

-Common subexpression problem

-Ramanujan's Q(n) functions and random mappings

-Full expansions for height of planted plane trees, register allocation and Odd-Even Merge, where singularity analysis can be used in conjunction with Mellin transform techniques.
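
As a worked illustration of the transfer approach (a standard textbook example, not drawn from the talk): the Catalan generating function has an algebraic singularity at z = 1/4, and the transfer lemma converts its square-root behaviour directly into coefficient asymptotics:

    C(z) = \frac{1 - \sqrt{1 - 4z}}{2z} = \sum_{n \ge 0} C_n z^n,
    \qquad
    [z^n](1 - 4z)^{1/2} \sim \frac{4^n n^{-3/2}}{\Gamma(-1/2)}
                         = -\frac{4^n}{2\sqrt{\pi}\, n^{3/2}},

    \text{hence} \qquad C_n \sim \frac{4^n}{\sqrt{\pi}\, n^{3/2}}.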


2 July 1987

Margaret Jacks Hall 352. 12:30 p.m.

Ashok Subramanian (Stanford)

PSPACE Survives Three-Bit Bottlenecks

What follows are a title and abstract that Ashok concocted to describe the paper he will present.

PSPACE is logspace-serializable

Gee-whiz, lone inhabitant of the planet of Amnesia, is in real trouble, for Shomu, the super-hyper-overlord of the galaxy, refuses to leave her alone. Every so often, he writes problem instances on the sky of Amnesia, and insists that Gee-whiz solve them. Life is hard, for there are only so many trees on the tiny planet, and Shomu's problem instances are so huge that they cannot even be written down on all the paper on the planet. To cap it all, Amnesia is cursed with occasional freak thunderstorms, of such a violence that every scrap of paper is washed away in the ensuing flood. [Hence the name of the planet.] Each time such a calamity happens, Gee-whiz has no choice but to make fresh paper. And Gee-whiz absolutely hates chopping trees.

Of late, Shomu has been setting Gee-whiz harder and harder tasks. But today he has gone too far.

``Shame on you, Shomu,'' cried Gee-whiz. ``You set me problems that are too hard to solve. If I used every scrap of paper on the planet, I would still only have space logarithmic in the size of what you write on the sky. Even if it didn't rain so often, I could only tackle problems in logspace. But the task you have set me is complete for PSPACE,'' and she muttered something about space-hierarchy theorems that Shomu did not quite understand.

``Tell you what,'' Shomu rejoined, ``I'll grant you a boon. As long as you are still industriously working away, it won't rain. The moment you stop, however, there will be the customary few drops.''

``Thanks for the offer,'' Gee-whiz replied, ``Now I can let my axe rust. But it isn't going to do you a whit of good. For I still only have logspace.''

``Look, Gee-whiz, it is beyond my power to increase the size of your planet. But I shall make the task well worth your while; if you solve this problem, there will be no others. And to make life less monotonous, I shall set up this huge polynomial-bit binary clock in the sky. You can look at it when you wish, and it will tell you how many times it has rained since your toil began.''

``Gee whiz, I can do it!!'' she exclaimed, ``and now I will be rid of you for good.''

To find out how Gee-whiz does it, show up at AFLB!

[This talk is based on work of Cai and Furst, presented at the Structure in Complexity Theory conference, 1987. Only very elementary facts about complexity theory will be assumed.]


9 July 1987

Margaret Jacks Hall 352. 12:30 p.m.

Ramsey W. Haddad (Stanford)

Recognizing Bellman-Ford-Orderable Graphs

K. Mehlhorn and B. Schmidt [Discr. Appl. Math. 15 (1986), pp. 315-327] consider the following problem. Given a directed graph with distinguished source vertex s, is it possible to order the edges so that all simple paths starting at s use edges in increasing order? They show how to solve this problem in O(|E|^2) steps, where |E| is the number of edges. We give an algorithm that runs in O(|V|^2) steps, where |V| is the number of vertices. The new algorithm and its analysis apply and extend previous results on dominators in directed graphs.

(joint work with Alejandro Schaffer of Stanford University)


23 July 1987

Margaret Jacks Hall 352. 12:30 p.m.

Jack Snoeyink (Stanford)

Stabbing Vertical Segments with a Convex Polygon

(joint work with Michael T. Goodrich of The Johns Hopkins Univ.)

A collection of segments S = {s1, s2, ..., sn} in the plane R^2 is *stabbed* by a convex polygon P if the boundary of P intersects every segment in S. Arie Tamir, at the Fourth NYU Computational Geometry Day, posed the problem of finding a convex stabbing polygon for S or determining that none exists. In this talk we restrict S to contain only vertical segments and develop an optimal O(n log n) time and linear space algorithm.

We distinguish two cases for stabbing polygons, based on a notion of a ``canonical'' stabbing polygon. In one case either the upper or the lower chain of the canonical stabbing polygon is simply a line segment; in the other, both the upper and lower chains are non-trivial. (Somewhat surprisingly, the former appears more difficult than the latter.) In each case, we solve the stabbing problem by transforming it to what we will call a *bipartite stabbing problem*. We show how to solve this problem in O(n log n) time and O(n) space using the notion of geometric duality.

We reduce both cases of the convex stabbing problem to this bipartite stabbing problem in O(n log n) time and O(n) space, where n is the number of segments in S, thus solving the convex stabbing problem in these same bounds. Our reduction in both cases is based on a `point-sweeping' paradigm (or, alternatively, a `chain-unwrapping' paradigm).


6 August 1987

Margaret Jacks Hall 352. 12:30 p.m.

Anil Gangolli (Stanford)

Generating Random Combinatorial Objects with Random Walks

Suppose we have a set S of objects from which we wish to choose a member approximately uniformly at random, that is, each with probability 1/|S|, or nearly so. Even given a source of unbiased independent random bits, it may not be obvious how to do this. We may not even know precisely what |S| is. Recent results by Broder on approximating the permanent, and work in progress by Diaconis and Gangolli on problems related to chi-squared testing use "random walks" on relevant sets S to choose elements uniformly or nearly uniformly at random.

We'll discuss how this approach works, some of the questions which arise naturally when one takes it, and some techniques one can use for answering these questions.

Only a basic background in discrete probability, and some linear algebra will be assumed.
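
A minimal sketch of the random-walk approach on an implicitly given set S (the neighbors interface, the degree bound, and the step count are assumptions; choosing steps large enough that the walk is nearly uniform is exactly the mixing-time question the talk addresses):

    import random

    def max_degree_walk(start, neighbors, dmax, steps):
        # Walk on S with P(x -> y) = 1/dmax for each neighbor y of x,
        # staying put with the leftover probability. The transition
        # matrix is symmetric, so the uniform distribution on the
        # connected component of start is stationary; pass dmax strictly
        # larger than the true maximum degree so the walk is lazy and
        # hence aperiodic.
        x = start
        for _ in range(steps):
            nbrs = neighbors(x)
            i = random.randrange(dmax)
            if i < len(nbrs):
                x = nbrs[i]
        return x

    # e.g. near-uniform 10-bit strings via a walk on the hypercube:
    # max_degree_walk(0, lambda x: [x ^ (1 << b) for b in range(10)], 11, 500)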


26 August 1987

Margaret Jacks Hall 352. 12:30 p.m.

Yossi Matias (Weizmann Institute)

A Video Scrambling Technique Based on Space Filling Curves

In this paper we propose a video scrambling technique which scans a picture stored in a frame buffer along a pseudo-random space filling curve. We describe several efficient methods for generating cryptographically strong curves, and show that they actually decrease the bandwidth required to transmit the picture.

This work is in collaboration with Adi Shamir.
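
For orientation, here is a deterministic example of the kind of scan involved: the Hilbert curve's index-to-coordinate map in a standard formulation (the talk's curves are pseudo-randomized variants, and the row-major frame layout in scan is an assumption):

    def hilbert_d2xy(order, d):
        # Map position d along a Hilbert curve to (x, y) in a
        # 2^order x 2^order grid, consuming one bit-pair of d per level
        # and rotating the quadrant orientation as the curve recurses.
        x = y = 0
        s = 1
        while s < (1 << order):
            rx = 1 & (d // 2)
            ry = 1 & (d ^ rx)
            if ry == 0:
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            d //= 4
            s *= 2
        return x, y

    def scan(frame, order):
        # Emit the pixels of a square frame in curve order rather than
        # raster order -- the scrambler's basic move.
        n = 1 << order
        return [frame[y][x]
                for x, y in (hilbert_d2xy(order, d) for d in range(n * n))]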


27 August 1987

Margaret Jacks Hall 352. 12:30 p.m.

Ken Clarkson (AT & T Bell Laboratories)

Probabilistic Methods in Combinatorial Geometry

Sometimes a combinatorial argument is best expressed in terms of probabilities. I'll discuss an instance of this in discrete geometry, a proof of a new bound for semi-space partitions of point sets.

For a set S \subset E^d with n points in general position, let T \subset S contain d points. If the hyperplane defined by T bounds an open halfspace containing no more than k points of S, then T is a (<=k)-facet of S. For example, a (<=0)-facet of a set of points in the plane defines an edge of the convex hull of S, and in general, an ordinary facet of the convex hull of S corresponds to a (<=0)-facet of S. The Upper Bound Theorem of convex polytope theory says that the number of (<=0)-facets of S is O(n^s), where s is the integer part of d/2. In this talk, I'll use a simple probabilistic argument to show that the number of (<=k)-facets of S is no more than O(n^s k^{d-s}). Similar ideas can be used to show that this bound is tight, and to derive bounds for the expected number of (<=k)-facets of random points. This result has applications to algorithms for halfspace range reporting.

If time permits, I'll discuss a probabilistic argument that the total number of edges of a set of m cells in an arrangement of n lines in the plane is bounded by O(n^{2/3+epsilon} m^{2/3} + n log m) for any fixed epsilon > 0.