SAS is the Stanford Algorithms Seminar (formerly the AFLB -- Algorithms for Lunch Bunch). It is run by Jeffrey Oldham. Meetings are generally Thursdays at 4:15 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

Contents:

  1. 27 October 1994: Eric Boman (Stanford). A Spectral Graph Algorithm for the Consecutive Ones Problem.
  2. 15 November 1994: Sanjeev Khanna and Anil Kamath (Stanford). On Syntactic versus Computational Views of Approximability / Tail Bounds for Occupancy and the Satisfiability Threshold.
  3. 29 November 1994: Eran London (Hebrew University). The geometry of graphs and some of its algorithmic applications.
  4. 1 December 1994: Daphne Koller (UC Berkeley). Constructing Small Sample Spaces.
  5. 8 December 1994: Rafail Ostrovsky (ICSI). Reducibility and Completeness in Private Computations.
  6. 12 January 1995: Nabil Kahale (Xerox PARC). Approximating the independence number via the $\theta$-function.
  7. 19 January 1995: Eric Torng (Michigan State U.). Redefining the Paging Problem.
  8. 26 January 1995: Michael Luby (ICSI). Resilient Video Transmission.
  9. 26 January 1995: Dorit Dor (Tel Aviv University). Selecting the median and other percentile elements.
  10. 24 February 1995: Abhiram Ranade (U.C. Berkeley). Nearly Tight Bounds on Wormhole Routing.
  11. 27 February 1995: Neal Young (AT&T Bell Labs). Randomized Rounding Without Solving the Linear Program.
  12. 2 March 1995: Yuan Ma (Stanford). On the Design of Boolean Circuits that Contain Partially Unreliable Gates.
  13. 9 March 1995: Anil Kamath (Stanford). Efficient Continuous Algorithms for Combinatorial Optimization.
  14. 16 March 1995: Edith Cohen (AT&T Bell Labs). Size-Estimation Framework with Applications to Transitive Closure and Reachability.
  15. 6 April 1995: Balas Natarajan (HP Palo Alto). Scheduling Directed Acyclic Graphs with Register Constraints.
  16. 20 April 1995: Michel de Rougemont (U. Orsay, France). Computation and Verification of the Graph Reliability Problem.
  17. 11 May 1995: Monika Rauch Henzinger (Cornell). Dynamic Graph Algorithms in Polylogarithmic Time per Operation.
  18. 2 June 1995: Omri Palmon (Stanford). (no title available.).
  19. 5 June 1995: Vijay Vazirani (IIT). Minimal Trellises for Group Codes.
  20. 15 June 1995: Otfried Schwarzkopf (Utrecht U.). The Vertical Decomposition of a Single Cell in an Arrangement of Surface Patches.

27 October 1994

Margaret Jacks Hall 352. 4:15 p.m.

Eric Boman (Stanford)

A Spectral Graph Algorithm for the Consecutive Ones Problem

The consecutive ones problem can be formulated as: Given an m by n 0-1 matrix, find a column permutation such that for each row, all the ones occur in a consecutive block. This problem turns out to be important in physical mapping of chromosomes (DNA). If the given instance is such that a solution exists, then there are linear-time algorithms which solve the problem. In most applications, however, no exact solution exists (due to errors in the data etc.) and we seek a "best" approximation. This gives an NP-hard problem. We propose a new algorithm based on spectral graph theory. We prove that the algorithm finds a solution if one exists. If there is no exact solution, our algorithm still produces an approximate solution which can be used in real applications. We therefore claim our algorithm is more robust than previous methods.
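
To make the spectral approach concrete, here is a minimal sketch of the spectral-ordering idea in Python (my reconstruction, not the algorithm exactly as presented in the talk): build a column-similarity matrix, take the Fiedler vector of its Laplacian, and order the columns by the entries of that vector.

    import numpy as np

    def spectral_column_order(A):
        """A: m x n 0-1 matrix. Return a column permutation as an index array."""
        B = A.T @ A                      # column-column similarity (n x n)
        np.fill_diagonal(B, 0)
        L = np.diag(B.sum(axis=1)) - B   # Laplacian of the column-similarity graph
        w, V = np.linalg.eigh(L)         # eigenvalues in ascending order
        fiedler = V[:, 1]                # eigenvector of the second-smallest eigenvalue
        return np.argsort(fiedler)       # order columns by their Fiedler-vector entries

    # Columns of A are scrambled; the ordering should (approximately) restore
    # a consecutive-ones arrangement when one exists.
    A = np.array([[1, 0, 1, 0, 1],
                  [0, 1, 0, 1, 1],
                  [0, 1, 0, 0, 1]])
    perm = spectral_column_order(A)
    print(A[:, perm])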

We also show how our algorithm can be used to order matrices with the Robinson property. An application of this will be illustrated with examples from seriation in archeology.

To our knowledge, this is the first result in which a spectral method solves a combinatorial problem exactly instead of merely supplying a heuristic or a bound.

Joint work with Bruce Hendrickson and Jon Atkins. (This talk will partially overlap the talk Bruce Hendrickson gave at Xerox PARC on September 21.)


15 November 1994

Margaret Jacks Hall 352. 4:15 p.m.

Sanjeev Khanna and Anil Kamath (Stanford)

On Syntactic versus Computational Views of Approximability / Tail Bounds for Occupancy and the Satisfiability Threshold

Sanjeev Khanna:

We attempt to reconcile the two distinct views of approximation classes: syntactic and computational. Syntactic classes such as MAXSNP permit structural results and have natural complete problems, while computational classes such as APX allow us to work with classes of problems whose approximability is well-understood. Our results provide a syntactic characterization of computational classes and give a computational framework for syntactic classes.

We compare the syntactically defined class MAXSNP with the computationally defined class APX, and show that every problem in APX can be ``placed'' (i.e., has an approximation-preserving reduction to a problem) in MAXSNP. Our methods introduce a general technique for creating approximation-preserving reductions, showing that any ``well'' approximable problem can be reduced, in an approximation-preserving manner, to a problem which is hard to approximate to corresponding factors.

We use the syntactic nature of MAXSNP to define a general paradigm, non-oblivious local search, useful for developing simple yet efficient approximation algorithms. We show that such algorithms can find good approximations for all MAXSNP problems, yielding approximation ratios comparable to the best known for a variety of specific MAXSNP-hard problems. Non-oblivious local search provably outperforms standard local search in both the degree of approximation achieved and the efficiency of the resulting algorithms.
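
A minimal sketch of the non-oblivious local-search paradigm, illustrated on MAX 2-SAT. The potential weights below (1.5 for a clause satisfied by exactly one literal, 2 for a clause satisfied by both) are illustrative assumptions; the exact potential used in the paper may differ. Oblivious local search would instead flip a variable based only on the plain count of satisfied clauses.

    import random

    def satisfied_literals(clause, assignment):
        # clause: tuple of literals; a literal is (variable index, is_positive)
        return sum(assignment[v] == positive for v, positive in clause)

    def potential(clauses, assignment, w1=1.5, w2=2.0):
        """Non-oblivious potential: weight clauses by HOW they are satisfied."""
        total = 0.0
        for clause in clauses:
            s = satisfied_literals(clause, assignment)
            total += w1 if s == 1 else (w2 if s == 2 else 0.0)
        return total

    def non_oblivious_local_search(clauses, n_vars, seed=0):
        rng = random.Random(seed)
        assignment = [rng.random() < 0.5 for _ in range(n_vars)]
        improved = True
        while improved:
            improved = False
            for v in range(n_vars):
                before = potential(clauses, assignment)
                assignment[v] = not assignment[v]
                if potential(clauses, assignment) > before + 1e-12:
                    improved = True                      # keep the flip
                else:
                    assignment[v] = not assignment[v]    # undo the flip
        return assignment

    clauses = [((0, True), (1, False)), ((1, True), (2, True)), ((0, False), (2, False))]
    a = non_oblivious_local_search(clauses, n_vars=3)
    print(sum(satisfied_literals(c, a) > 0 for c in clauses), "of", len(clauses), "clauses satisfied")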

Anil Kamath:

The classical occupancy problem is concerned with studying the number of empty bins resulting from a random allocation of $m$ balls to $n$ bins. We provide a series of tail bounds on the distribution of the number of empty bins. These tail bounds should find application in randomized algorithms and probabilistic analysis. Our motivating application is the following well-known conjecture on threshold phenomenon for the satisfiability problem. Consider random 3-SAT formulas with $cn$ clauses over $n$ variables, where each clause is chosen uniformly and independently from the space of all clauses of size 3. It has been conjectured that there is a sharp threshold for satisfiability at $c^*$ approximately 4.2. We provide the first non-trivial upper bound on the value of $c^*$, showing that for $c > 4.758$ a random 3-SAT formula is unsatisfiable with high probability. This result is based on a structural property, possibly of independent interest, whose proof needs several applications of the occupancy tail bounds.
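
A quick Monte Carlo illustration of the occupancy quantity in question (not from the talk): throw $m$ balls into $n$ bins uniformly at random and count empty bins; the expectation is $n(1-1/n)^m$, and the talk's results bound the tails around it.

    import random

    def empty_bins(m, n, rng):
        occupied = [False] * n
        for _ in range(m):
            occupied[rng.randrange(n)] = True
        return occupied.count(False)

    rng = random.Random(0)
    m, n, trials = 500, 200, 2000
    samples = [empty_bins(m, n, rng) for _ in range(trials)]
    print("empirical mean of empty bins:", sum(samples) / trials)
    print("expected number of empty bins:", n * (1 - 1 / n) ** m)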

Joint work with Rajeev Motwani, Krishna Palem, and Paul Spirakis.


29 November 1994

Margaret Jacks Hall 352. 4:15 p.m.

Eran London (Hebrew University)

The geometry of graphs and some of its algorithmic applications

We explore some implications of viewing graphs as geometric objects. This approach offers a new perspective on a number of graph-theoretic and algorithmic problems. There are several ways to model graphs geometrically and our main concern here is with geometric representations that respect the metric of the (possibly weighted) graph. Given a graph G we map its vertices to a normed space in an attempt to (i) Keep down the dimension of the host space and (ii) Guarantee a small distortion, i.e., make sure that distances between vertices in G closely match the distances between their geometric images.
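
As a concrete illustration of the distortion being minimized, the sketch below measures the (scale-invariant) distortion of a given vertex embedding against shortest-path distances. The embedding here is random, purely for illustration; the talk's algorithms are what choose an embedding so that this number is small.

    import itertools
    import math
    import random

    def shortest_paths(adj):
        """Floyd-Warshall on a weighted adjacency dict {u: {v: weight}}."""
        nodes = list(adj)
        d = {u: {v: math.inf for v in nodes} for u in nodes}
        for u in nodes:
            d[u][u] = 0.0
            for v, w in adj[u].items():
                d[u][v] = min(d[u][v], w)
        for k in nodes:
            for i in nodes:
                for j in nodes:
                    d[i][j] = min(d[i][j], d[i][k] + d[k][j])
        return d

    def distortion(adj, embedding):
        """Product of worst expansion and worst contraction over all vertex pairs."""
        d = shortest_paths(adj)
        expansion, contraction = 1.0, 1.0
        for u, v in itertools.combinations(adj, 2):
            euclidean = math.dist(embedding[u], embedding[v])
            expansion = max(expansion, euclidean / d[u][v])
            contraction = max(contraction, d[u][v] / euclidean)
        return expansion * contraction

    adj = {0: {1: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {2: 1}}   # a path on 4 vertices
    rng = random.Random(0)
    embedding = {v: (rng.random(), rng.random()) for v in adj}
    print("distortion of a random 2-D embedding:", distortion(adj, embedding))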

We develop efficient algorithms for embedding graphs low-dimensionally with a small distortion. Further algorithmic applications include:

1. A simple, unified approach to a number of problems on multicommodity flows, including the Leighton-Rao Theorem [T. Leighton and S. Rao, An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms, FOCS 29 (1988), 422-431] and some of its extensions.

2. For graphs embeddable in low-dimensional spaces with a small distortion, we can find low-diameter decompositions (in the sense of [B. Awerbuch and D. Peleg, Sparse partitions, FOCS 31 (1990), 503-513] and [N. Linial and M. Saks, Low diameter graph decompositions, Combinatorica 13 (1993), 441-454]). The parameters of the decomposition depend only on the dimension and the distortion and not on the size of the graph.

3. In graphs embedded this way, small balanced separators can be found efficiently.

Faithful low-dimensional representations of statistical data allow for meaningful and efficient clustering, which is one of the most basic tasks in pattern-recognition. For the (mostly heuristic) methods used in the practice of pattern-recognition, see [R.O. Duda and P.E. Hart, "Pattern Classification and Scene Analysis", John Wiley and Sons, New York, 1973], especially chapter 6.

Keywords: Graph, metric space, normed space, embedding, isometry, distortion, separator, multicommodity flow, clustering, dimension, graph decomposition, algorithm.

Joint work with Nathan Linial and Yuri Rabinovich.


1 December 1994

Margaret Jacks Hall 146. 4:15 p.m.

Daphne Koller (UC Berkeley)

Constructing Small Sample Spaces

The correctness of randomized algorithms is usually proved with respect to a particular probability distribution, typically one based on independent random variables. During the course of the proof, the distribution is assumed to satisfy various constraints Pr(Event) = p. One can derandomize the algorithm by constructing a different distribution with a ``small'' (polynomial-size) sample space; this space can be searched exhaustively in order to find a ``good'' point in the space. As d-wise independence often suffices to satisfy the constraints, most of the attention so far has been concentrated on finding small sample spaces for (approximately) d-wise independent distributions.
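
For concreteness, here is the classical d=2 instance of the small-sample-space idea alluded to above (a standard construction, not taken from the talk): k seed bits yield 2^k - 1 pairwise-independent unbiased bits, so the sample space has size 2^k rather than 2^(2^k - 1).

    from itertools import product

    def pairwise_independent_bits(seed_bits):
        """Map a k-bit seed to 2^k - 1 output bits: one parity per nonempty subset."""
        k = len(seed_bits)
        out = []
        for mask in range(1, 2 ** k):
            parity = 0
            for i in range(k):
                if mask >> i & 1:
                    parity ^= seed_bits[i]
            out.append(parity)
        return out

    # Exhaustively verify pairwise independence over the whole (uniform) sample space.
    k = 3
    space = [pairwise_independent_bits(list(seed)) for seed in product([0, 1], repeat=k)]
    n_bits = len(space[0])
    for i in range(n_bits):
        for j in range(i + 1, n_bits):
            both_one = sum(x[i] == 1 and x[j] == 1 for x in space)
            assert both_one * 4 == len(space)   # each pair is (1, 1) exactly a quarter of the time
    print(n_bits, "pairwise-independent bits supported on a sample space of size", len(space))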

However, d-wise independence is often an over-estimate on the degree of randomness actually required by the constraints. We investigate the problem of constructing distributions that satisfy only a given set of constraints. We show that if the constraints are consistent, then there exists a distribution satisfying them supported by a ``small'' sample space (one whose cardinality is equal to the number of constraints).

We provide two constructive versions of this proof for the important case where the constraints are satisfied by independent random variables. The first is a polynomial-time algorithm for constructing a distribution that precisely satisfies a set of _expectation constraints_. The second is an NC algorithm for constructing a distribution that approximately satisfies constraints over any event that can be verified by finite automata (including independence constraints, constraints on the parity, and more). The constraints are satisfied up to a small _relative_ error. The parallel construction relies on a new and independently interesting parallel algorithm for converting a solution to a linear system into an almost basic approximate solution to the same system.

We demonstrate the use of this technique for derandomizing algorithms by applying it to the problem of finding large independent sets in sparse d-uniform hypergraphs. In particular, we provide the first NC derandomization of this algorithm for arbitrary d. We also show how the linear programming perspective suggests new proof techniques which might be useful in general probabilistic analysis.

The first part of this research is joint work with Nimrod Megiddo, and the second part with David Karger.


8 December 1994

Margaret Jacks Hall 352. 4:15 p.m.

Rafail Ostrovsky (ICSI)

Reducibility and Completeness in Private Computations

In recent years, two different approaches to cryptographic protocol design have been proposed. The first approach is to base the security of protocols on general (hardness) complexity assumptions. The second approach is to assume (as a black box) an implementation of a "simple" protocol problem and then to "reduce" all other problems to this "simple" problem while preserving security measures. In this talk, we survey the second approach, including recent progress on this front. In particular, we give appropriate definitions of reducibility and completeness in two-party and multi-party settings and characterize all (boolean) functions which are complete for private computations.

THE TALK WILL BE A SURVEY OF THIS AREA AND WILL BE SELF-CONTAINED.


12 January 1995

Margaret Jacks Hall 352. 4:15 p.m.

Nabil Kahale (Xerox PARC)

Approximating the independence number via the $\theta$-function

We describe an approximation algorithm for the independence number of a graph (the number of vertices in a maximum independent set of a graph). If a graph on $n$ vertices has an independence number $n/k + m$ for some fixed integer $k >= 3$ and some $m > 0$, the algorithm finds, in random polynomial time, an independent set of size $\tilde \Omega(m^{3/(k+1)})$, improving the best known previous algorithm of Boppana and Halldorsson that finds an independent set of size $\Omega(m^{1/(k-1)})$ in such a graph. The algorithm is based on semi-definite programming, some properties of the Lovasz $\theta$-function of a graph and the recent algorithm of Karger, Motwani and Sudan for approximating the chromatic number of a graph. If the $\theta$-function of an $n$ vertex graph is at least $Mn^{1-2/k}$ for some absolute constant $M$, we describe another algorithm that finds an independent set of size $k$.
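
The $\theta$-function itself can be computed by semidefinite programming; the sketch below uses one of its standard SDP formulations. It assumes the third-party cvxpy package and an SDP-capable solver, and it only computes $\theta$, not the talk's rounding procedure for extracting an independent set.

    import cvxpy as cp
    import numpy as np

    def lovasz_theta(n, edges):
        """Standard SDP: maximize sum(X) over PSD X with trace 1 and X_ij = 0 on edges."""
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0, cp.trace(X) == 1]
        constraints += [X[i, j] == 0 for i, j in edges]
        problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
        problem.solve()
        return problem.value

    # The 5-cycle: theta(C5) = sqrt(5), strictly between the independence number (2)
    # and the fractional chromatic number of the complement (5/2).
    edges_c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(lovasz_theta(5, edges_c5), "vs", np.sqrt(5))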

We also provide new results on the problem of estimating the worst case ratio between the independence number and the $\theta$-function of a graph.

Joint work with Noga Alon.


19 January 1995

Margaret Jacks Hall 352. 4:15 p.m.

Eric Torng (Michigan State U.)

Redefining the Paging Problem

We redefine the on-line paging problem by introducing a new parameter $s$ to model the relative difference in access speed between fast and slow memories. This allows us to focus on the fundamental goal of a two-level store, which is minimizing total memory access time rather than minimizing the number of page faults incurred. While the optimization versions of these problems are identical (i.e., minimizing the number of page faults incurred also minimizes total memory access time), the approximation versions are not. We also present a new characterization of request sequences by the amount of locality of reference they exhibit.

Using this new framework, we derive intuitively appealing results which were unattainable in the previous framework. First, we show that deterministic marking algorithms such as the least recently used (LRU) page replacement algorithm achieve constant competitive ratios against a restricted adversary which must choose a request sequence exhibiting a quantifiable amount of locality of reference, and we show that this performance guarantee is, in some sense, optimal. We prove a similar result for the randomized marking algorithm. These results lead us to a better understanding of when on-line algorithms can guarantee good performance, and this, in turn, helps us explain why the competitive ratio of the optimal on-line algorithm increases with $k$, the size of fast memory. Next, we show that on-line algorithms augmented with lookahead can achieve better competitive ratios than on-line algorithms without lookahead. In particular, we show that a modified LRU algorithm with sufficient lookahead can achieve a competitive ratio of 2 even against an unrestricted adversary. Finally, because our model includes both fast memory size and the relative difference in access speed between fast and slow memories, we are able to compare the effects of increasing fast memory size, fast memory speed, and slow memory speed on system performance.
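
A minimal simulation of the cost model sketched above, under one plausible reading of it: a fast-memory hit costs one time unit, a fault costs $s$, and the cache of $k$ pages is managed by LRU. Counting faults alone corresponds to letting $s$ grow without bound.

    from collections import OrderedDict

    def lru_access_time(requests, k, s):
        """Total access time and fault count for LRU with k frames.

        A hit costs 1 time unit and a fault costs s (the slow-memory access),
        which is one plausible reading of the cost model sketched above."""
        cache = OrderedDict()
        time, faults = 0, 0
        for page in requests:
            if page in cache:
                cache.move_to_end(page)        # mark as most recently used
                time += 1
            else:
                faults += 1
                time += s
                if len(cache) >= k:
                    cache.popitem(last=False)  # evict the least recently used page
                cache[page] = True
        return time, faults

    requests = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3]
    print(lru_access_time(requests, k=3, s=100))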


26 January 1995

Margaret Jacks Hall 352. 2:45 p.m.

Michael Luby (ICSI)

Resilient Video Transmission

We introduce a novel approach for sending messages over lossy packet-based networks. The new method, called Priority Encoding Transmission, allows a user to specify a different priority on each segment of the message. Based on the priorities, the sender encodes the segments into packets for transmission. The priority of a segment determines the fraction of the packets sufficient to recover the segment. The receiver is able to decode the segments of the message in order of their priority, based solely on the quantity of packets received.

We develop a general scheme for implementing PET, and we also provide an information theoretic proof that the total bandwidth of the scheme for the given priority levels is optimal.
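
The property PET relies on is an erasure code in which any sufficiently large subset of packets recovers a segment. The toy Reed-Solomon-style sketch below (polynomial interpolation over a small prime field) illustrates the ``any k of n packets suffice'' guarantee; it is not the encoding used in the actual PET implementation.

    P = 257  # a small prime field; message symbols are integers mod P

    def lagrange_eval(points, x):
        """Evaluate, at x, the unique polynomial mod P through the given (xi, yi) points."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    def encode(message, n):
        """Spread k message symbols over n packets so that any k packets suffice."""
        k = len(message)
        base = list(enumerate(message))          # polynomial through (0, m0), ..., (k-1, m_{k-1})
        return [(x, lagrange_eval(base, x)) for x in range(k, k + n)]

    def decode(packets, k):
        """Recover the k message symbols from any k received packets."""
        return [lagrange_eval(packets[:k], i) for i in range(k)]

    message = [72, 105, 33, 200]                           # k = 4 symbols
    packets = encode(message, n=10)
    received = packets[2:3] + packets[5:7] + packets[9:]   # any 4 of the 10 packets
    print(decode(received, k=4) == message)                # True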

Our primary practical goal at this point is to incorporate PET into real-time multicast teleconferencing applications. This would allow graceful degradation of picture quality as a function of packet loss. Preliminary simulations of a prototype PET system are quite encouraging: When sending an MPEG video stream without any protection over the network, the video degrades substantially and erratically with even moderate packet loss, whereas if PET is used to selectively protect the MPEG video, the degradation is reasonable and controlled.

We have focused on minimizing the computational processing time used by PET. We have developed a novel software implementation of the PET system that can encode a 3 Mbit/second MPEG video stream into IP packets and decode the packets back into an MPEG video stream in real time on a 50MHz SPARC 10 machine.


26 January 1995

Margaret Jacks Hall 352. 4:15 p.m.

Dorit Dor (Tel Aviv University)

Selecting the median and other percentile elements

Given a set X containing n distinct elements, and given a number 1<=i<=n, we would like to find the element whose rank in X is exactly i, i.e., the element of X larger than exactly i-1 elements of X and smaller than the other n-i elements of X. In particular we are interested in the case where i=a*n for some 0<a<1.

This problem dates back to Charles L. Dodgson (better known as Lewis Carroll, the author of ``Alice's Adventures in Wonderland''), who noticed in 1883 that the second prize in lawn tennis tournaments was usually unfairly awarded. Only in the early 1970s was it shown, by Blum, Floyd, Pratt, Rivest and Tarjan, that the selection problem can be solved in O(n) time. Their bound was improved a few years later by Schonhage, Paterson and Pippenger, who gave an algorithm for finding any element using at most 3n+o(n) comparisons. Recently, we have been able to improve this bound to about 2.95n+o(n).

In this talk we show that the median can be found using at most 2.95n comparisons. We also show that the a*n-th element can be found using at most [1+a*log(1/a)+O(a*loglog(1/a))]*n comparisons. This comes very close to the [1+H(a)]*n - o(n) lower bound obtained by Bent and John. As H(a)=a*log(1/a)+O(a), the coefficient of a*log(1/a) in our new upper bound cannot be improved.
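
For reference, here is a sketch of the classical linear-time selection scheme of Blum, Floyd, Pratt, Rivest and Tarjan mentioned above (median of medians of groups of five); the 2.95n-comparison algorithm of the talk refines comparison counts far beyond what this sketch shows.

    def select(xs, i):
        """Return the element of rank i (1-based) among the distinct elements of xs."""
        if len(xs) <= 5:
            return sorted(xs)[i - 1]
        # Median of each group of five, then the median of those medians as the pivot.
        groups = [sorted(xs[j:j + 5]) for j in range(0, len(xs), 5)]
        medians = [g[len(g) // 2] for g in groups]
        pivot = select(medians, (len(medians) + 1) // 2)
        lo = [x for x in xs if x < pivot]
        hi = [x for x in xs if x > pivot]
        if i <= len(lo):
            return select(lo, i)
        if i == len(lo) + 1:
            return pivot
        return select(hi, i - len(lo) - 1)

    data = [9, 1, 8, 2, 7, 3, 6, 4, 5, 0]
    print(select(data, (len(data) + 1) // 2))   # a median element; here 4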

This result is joint work with Uri Zwick.


24 February 1995

Margaret Jacks Hall 352. 4:15 p.m.

Abhiram Ranade (U.C. Berkeley)

Nearly Tight Bounds on Wormhole Routing

We present nearly matching upper and lower bounds on wormhole routing on Butterfly networks. We show that wormhole routing is strictly less efficient than store-and-forward packet routing in utilizing communication links. Our results derive from a relationship between wormhole routing problems and the chromatic number of certain graphs, which we show how to approximate. We also give nearly matching routing algorithms which significantly improve upon existing algorithms. Our algorithms also extend to other networks such as two dimensional meshes.

This is joint work with Saul Schleimer and Daniel Wilkerson.


27 February 1995

Margaret Jacks Hall 301. 1:00 p.m.

Neal Young (AT&T Bell Labs)

Randomized Rounding Without Solving the Linear Program

Randomized rounding is a basic framework for algorithm design. Typically, one randomly rounds an optimal fractional solution of a linear program to obtain an approximately optimal integer solution. We introduce a variant called oblivious rounding that avoids the bottleneck of first solving the linear program. This provides a unified approach to algorithms for fractional and integer packing and covering problems. In this talk we introduce the technique using a simple example and then sketch how it can be used on the concurrent multi-commodity flow problem.
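
For contrast, here is the ``typical'' randomized-rounding step referred to in the first sentence, shown for set cover: given a fractional LP solution, include each set independently with probability scaled by a logarithmic factor. The fractional values, the scaling constant, and the patch-up step below are illustrative assumptions; oblivious rounding, the subject of the talk, builds the integer solution without ever computing the fractional one.

    import math
    import random

    def round_set_cover(sets, frac, n_elements, c=2.0, seed=0):
        """sets: list of frozensets; frac: fractional LP values x_S in the same order."""
        rng = random.Random(seed)
        scale = c * math.log(max(n_elements, 2))
        chosen = [i for i, x in enumerate(frac) if rng.random() < min(1.0, scale * x)]
        covered = set().union(*(sets[i] for i in chosen)) if chosen else set()
        # With high probability everything is already covered; patch up any leftovers.
        for e in range(n_elements):
            if e not in covered:
                i = next(j for j, s in enumerate(sets) if e in s)
                chosen.append(i)
                covered |= sets[i]
        return sorted(set(chosen))

    sets = [frozenset({0, 1, 2}), frozenset({2, 3}), frozenset({3, 4, 5}), frozenset({0, 5})]
    frac = [0.5, 0.5, 0.5, 0.5]   # a pretend fractional solution, for illustration only
    print(round_set_cover(sets, frac, n_elements=6))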


2 March 1995

Margaret Jacks Hall 352. 4:15 p.m.

Yuan Ma (Stanford)

On the Design of Boolean Circuits that Contain Partially Unreliable Gates

We will investigate a new model of gate failure for Boolean circuits in which a faulty gate is restricted to output one of its input values. The new model allows both worst-case fault-tolerance and arbitrarily high success probability in the case of random faults, whereas the classic von Neumann fault model allows neither of them. Moreover, the new model captures a particular type of fault that commonly appears in practice, and it suggests a simple method for performing post-test alterations to circuits that have more severe types of faults. We will present some bounds on the size of fault-tolerant Boolean circuits, and raise an intriguing open problem. Finally, a related open problem in the area of fault-tolerant sorting will be discussed.
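
A tiny simulation of the fault model described above (illustrative only): a faulty gate may output only one of its own input values, so on inputs whose values both agree with the correct output the fault is harmless, unlike in the von Neumann model where a faulty gate may output anything.

    import random

    def and_gate(a, b, faulty, rng):
        if faulty:
            return rng.choice([a, b])   # restricted fault: the gate must echo one of its inputs
        return a & b

    rng = random.Random(0)
    trials = 10000
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        wrong = sum(and_gate(a, b, faulty=True, rng=rng) != (a & b) for _ in range(trials))
        print("inputs", (a, b), "error rate under the restricted fault:", wrong / trials)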

Joint work with Dan Kleitman and Tom Leighton.


9 March 1995

Margaret Jacks Hall 352. 3:00 p.m.

Anil Kamath (Stanford)

Efficient Continuous Algorithms for Combinatorial Optimization

The thesis concentrates on design, analysis, and implementation of efficient algorithms for large scale optimization problems with special emphasis on problems related to network flows and routing in high-speed networks. The main focus is on using continuous techniques for solving large combinatorial optimization problems in off-line and online settings.

Some of the major areas of work include:

* Developing and implementing new interior-point techniques for efficiently solving large scale multi-commodity network flow problems. These algorithms are based on solving linear and quadratic programming formulations of the problem.

* Designing fast algorithms based on continuous methods for approximate solution of {\em minimum cost} multi-commodity flow problem. These techniques are geared towards distributed implementation.

* Application of multi-commodity flow techniques for design and implementation of centralized and distributed algorithms for on-line routing of Virtual Circuits in ATM (Asynchronous Transfer Mode) networks.

Our approach is to design algorithms that have provably good worst or average case performance and demonstrate their efficiency by implementing and evaluating them on real data.


16 March 1995

Margaret Jacks Hall 352. 4:15 p.m.

Edith Cohen (AT&T Bell Labs)

Size-Estimation Framework with Applications to Transitive Closure and Reachability

Computing transitive closure and reachability information in directed graphs is a fundamental graph problem with many applications. The fastest known algorithms run in $O(\min(mn,n^{2.38}))$ time, where $n$ is the number of nodes and $m$ the number of edges in the graph. It is often the case that the aim is to compute the size of the transitive closure and the number of nodes reachable from certain nodes rather than computing them explicitly. Examples of potential applications are database query optimization or optimizing multiplications of large real-valued matrices. We present an $O(m)$ time randomized algorithm that estimates the number of nodes reachable from every node and the size of the transitive closure. Another ramification of our estimation scheme is a $\tilde{O}(m)$ time algorithm for estimating sizes of neighborhoods in directed graphs with nonnegative weights. Our size-estimation algorithms are much faster than performing the respective explicit computations and improve significantly over previous estimation methods.
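
A sketch of one natural way to realize such a size estimator (my reconstruction of the general idea, not necessarily the paper's algorithm): assign each node an independent Exp(1) value, let every node learn the minimum value over its reachability set, repeat k times, and invert the sum of the minima.

    import random

    def min_reachable_values(adj_rev, values):
        """For each node u, the minimum value over all nodes reachable from u.
        Nodes are processed in increasing value order and searched backwards,
        so every node is finalized exactly once; one round costs about O(n + m)."""
        n = len(values)
        result = [None] * n
        for v in sorted(range(n), key=lambda u: values[u]):
            if result[v] is not None:
                continue
            result[v] = values[v]
            stack = [v]
            while stack:
                u = stack.pop()
                for w in adj_rev[u]:          # w has an edge into u, so w reaches v
                    if result[w] is None:
                        result[w] = values[v]
                        stack.append(w)
        return result

    def estimate_reachability_sizes(adj, k=64, seed=0):
        rng = random.Random(seed)
        n = len(adj)
        adj_rev = [[] for _ in range(n)]
        for u in range(n):
            for w in adj[u]:
                adj_rev[w].append(u)
        sums = [0.0] * n
        for _ in range(k):
            values = [rng.expovariate(1.0) for _ in range(n)]
            for u, m in enumerate(min_reachable_values(adj_rev, values)):
                sums[u] += m
        # The minimum of r independent Exp(1) values is Exp(r), so (k - 1) / sum estimates r.
        return [(k - 1) / s for s in sums]

    adj = [[1], [2], [3], [], [0]]   # the path 0 -> 1 -> 2 -> 3, plus the edge 4 -> 0
    print([round(x, 1) for x in estimate_reachability_sizes(adj)])   # roughly [4, 3, 2, 1, 5]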


6 April 1995

Margaret Jacks Hall 301. 4:15 p.m.

Balas Natarajan (HP Palo Alto)

Scheduling Directed Acyclic Graphs with Register Constraints

With the increased use of wide-word computer architectures, several scheduling problems merit fresh attention. One such problem is minimum length scheduling of a set of operations with precedence constraints forming a directed acyclic graph, using a bounded number of registers. Even in the absence of register constraints the problem is NP-hard, but a number of provably good heuristics are known. We present a heuristic for the problem, obtain goodness bounds, and examine the performance of an implementation of the algorithm on randomly generated graphs.

Joint work with M. Schlansker, Hewlett Packard Labs.


20 April 1995

Margaret Jacks Hall 301. 4:15 p.m.

Michel de Rougemont (U. Orsay, France)

Computation and Verification of the Graph Reliability Problem

The graph reliability problem (GR) is the probabilistic version of the accessibility problem in a graph: given a graph with $n$ nodes such that each edge $e$ has an existence probability $p(e)$ and two distinguished nodes $s$ and $t$, the problem asks for the probability that there is a path from $s$ to $t$. Valiant showed that this problem is #P-hard.
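
As a concrete illustration of the quantity GR measures (not of the talk's results), the sketch below estimates two-terminal reliability by naive Monte Carlo: sample each edge independently with its existence probability and check whether $s$ still reaches $t$.

    import random

    def estimate_reliability(n, edges, s, t, trials=20000, seed=0):
        """edges: list of (u, v, p) directed edges, each present independently with probability p."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            adj = [[] for _ in range(n)]
            for u, v, p in edges:
                if rng.random() < p:
                    adj[u].append(v)
            seen, stack = {s}, [s]      # search from s over the surviving edges
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            hits += t in seen
        return hits / trials

    edges = [(0, 1, 0.9), (1, 2, 0.9), (0, 2, 0.5)]
    print(estimate_reliability(3, edges, s=0, t=2))   # exact value: 1 - (1 - 0.81) * (1 - 0.5) = 0.905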

We will show how this problem is polynomial on the average for some very special input distributions. We then investigate IP protocols to verify that $GR(x)=y$ and show that an interactive protocol can check GR in O(n) interactive steps. The new aspect of this protocol (presented at ISTTCS-94) is that it is stated in purely graph-theoretic terms.

We then discuss possible extensions of this problem to other uncertainty models encountered in control theory.


11 May 1995

Margaret Jacks Hall 301. 4:15 p.m.

Monika Rauch Henzinger (Cornell)

Dynamic Graph Algorithms in Polylogarithmic Time per Operation

In compilers, databases, and combinatorial optimization problems, graph properties like connected components need to be computed repeatedly on graphs that closely resemble each other. Efficient algorithms for these situations are called dynamic graph algorithms. These are algorithms that allow the input graph to be modified by insertions and deletions of edges and vertices.

This talk presents a recent breakthrough in the area of dynamic graph algorithms: we present the first dynamic algorithms for various graph connectivity problems with polylogarithmic time per operation. The previously best algorithms took $\Omega(\sqrt{n})$ time per operation. Our algorithms almost match the corresponding lower bounds. The algorithms are Las Vegas-type randomized algorithms which use simple data structures and have small constant factors.

This represents joint work with Valerie King and will be presented at STOC 95.


2 June 1995

Margaret Jacks Hall 352. 12:30 p.m.

Omri Palmon (Stanford)

(no title available.)

(no abstract available.)

5 June 1995

Margaret Jacks Hall 301. 4:15 p.m.

Vijay Vazirani (IIT)

Minimal Trellises for Group Codes

The success of trellis coded modulation (which revolutionized transmission rates of modems in bandwidth limited channels), has led researchers to study block coded modulation. Two basic ingredients for a large class of such schemes are group block codes and trellises. Since the decoding effort in a trellis-based decoding scheme is directly proportional to the size of the trellis, much work has been done on characterizing and constructing minimal trellises.

We give a polynomial time algorithm for constructing the minimal trellis for a code over an abelian group, given a generator matrix for the code. This extends the work of Kschischang and Sorokine who handled the case of linear codes over fields (such a code is a vector subspace). A key step in our work is handling codes over cyclic groups $C_{p^\alpha}$, where $p$ is a prime. Such a code can be viewed as a submodule over the ring $Z_{p^\alpha}$. Because of the presence of zero-divisors in the ring, submodules do not share the useful properties of a vector subspace. We get around this difficulty by restricting the notion of linear combination to p-linear combination, and introducing the notion of a p-generator sequence, which enjoys properties similar to that of a basis for a vector subspace. In turn, this leads to the minimal trellis algorithm.

We also present two characterizations of minimal trellises for group codes. The first shows that a trellis for a group code is minimal iff it is two-way proper. All our algorithmic results use this characterization to establish minimality of the trellis constructed. The second gives algebraic structural properties of the set of transitions of the minimal trellis for a group code; we call this the Transition Space Theorem. This theorem complements the State Space Theorem of Forney and Trott, which gives algebraic structural properties of the set of states of the minimal trellis.

Joint work with Huzur Saran and B. Sundar Rajan (IIT-New Delhi).


15 June 1995

Margaret Jacks Hall 301. 4:15 p.m.

Otfried Schwarzkopf (Utrecht U.)

The Vertical Decomposition of a Single Cell in an Arrangement of Surface Patches

Let $\Sigma$ be a collection of $n$ algebraic surface patches of constant maximum degree in ${\bf R}^3$. We show that the combinatorial complexity of the vertical decomposition of a single cell in the arrangement ${\cal A}(\Sigma)$ is $O(n^{2+\varepsilon})$, for any $\varepsilon>0$, where the constant of proportionality depends on $\varepsilon$ and on the maximum degree of the surfaces and of their boundaries. As an application, we obtain a near-quadratic motion planning algorithm for general systems with three degrees of freedom.

This is joint work with Micha Sharir.

Hosted by Dan Halperin.