Talks are held in the Gates Building, near the Main Quad of Stanford's campus.

- 9 October 1996: **Roy Armoni** (Hebrew University). Discrepancy Sets and Pseudorandom Generators for Combinatorial Rectangles.
- 17 October 1996: **Bruce Donald** (Cornell). Algorithmic Foundations for Planar Force-Field Devices: Geometric Algorithms for Massively-Parallel Distributed Manipulation.
- 24 October 1996: **Monika Henzinger** (Digital SRC and Cornell). New Bounds for Computing Vertex Connectivity.
- 31 October 1996: **Danny Raz** (ICSI). Approximating Total Flow Time on Parallel Machines.
- 21 November 1996: **Moses Charikar** (Stanford). Dynamic Servers: A Model for Kinematic Structures and Video-on-Demand.
- 9 January 1997: **Marshall Bern** (Xerox PARC). Meshing, Morphing, and Mapping.
- 16 January 1997: **Andrew Yao** (Princeton). New-Element Identification and the Pigeonhole Principle.
- 23 January 1997: **Nina Amenta** (Xerox PARC). The Shape of a Set of Points.
- 30 January 1997: **Mohan Paturi** (UC San Diego). Exponential Lower Bounds for Depth 3 Boolean Circuits.
- 6 February 1997: **Amin Shokrollahi** (ICSI). Practical Loss-Resilient Codes.
- 13 February 1997: **Jon Kleinberg** (Cornell University and IBM Almaden). Nearest-Neighbor Search in High Dimensions.
- 6 March 1997: **Monika Henzinger** (DEC). Fully Dynamic Minimum Spanning Trees in O(n^{1/3} log n) Time per Operation.
- 18 March 1997: **Sanjeev Arora** (Princeton). Nearly Linear Time Approximation Schemes for Euclidean TSP and Other Geometric Problems.
- 10 April 1997: **Edith Cohen** (UC Berkeley and AT&T Labs). Approximating Matrix Multiplication for Pattern Recognition Tasks.
- 1 May 1997: **Susanne Albers** (Max-Planck-Institut für Informatik, Saarbrücken). Better Bounds for Online Scheduling.
- 8 May 1997: **Martin Dietzfelbinger** (Universität Dortmund). The Linear-Array Problem in Communication Complexity Resolved.
- 15 May 1997: **Yoram Singer** (AT&T Labs). Using and Combining the Advice of Experts that Specialize.
- 17 July 1997: **Moni Naor** (Weizmann Institute). Number-Theoretic Constructions of Efficient Pseudo-random Functions and Other Cryptographic Primitives.

Joint work with Michael Saks, Avi Wigderson, and Shiyu Zhou.

**Email:**
aroy@cs.huji.ac.il

**WWW:**
http://www.cs.huji.ac.il/~aroy

**Related paper:** To appear in FOCS '96

The manufacturing problems of parts-feeding and -orientation motivate our work on massively-parallel distributed manipulation. I'll discuss our algorithms for sensorless manipulation using SIMD arrays of microfabricated actuators. We are testing these strategies on the M-CHIP (Manipulation Chip), an array of programmable micro-motion pixels fabricated at Cornell. I'll show a prototype M-CHIP containing over 15,000 silicon "cilia" (actuators) in one square inch. This may be a kind of record for parallelism and density. We have developed and analyzed efficient SIMD manipulation strategies and planning algorithms for parts-sorting and -orienting using a computational-geometric analysis of programmable vector fields. I will also discuss new experimental results using an organic ciliary array developed at Stanford, and new results on macroscopic parts feeders.

We believe such systems could result in "smart" (i.e., programmable) manipulation surfaces, tiled with microactuators. I'll discuss the challenges in programming such systems and in designing efficient control and planning algorithms, present new upper and lower bounds, and describe connections to computational dynamical systems theory. I'll show a video of our prototype systems.
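To give the flavor of sensorless manipulation by a programmable vector field, here is a toy quasi-static simulation of a "squeeze field" acting on point parts; real parts are rigid bodies with rotation, which the actual analysis handles and this sketch does not, and all names here are illustrative.

```python
import random

def settle(parts, field, steps=200, dt=0.05):
    """Quasi-static toy simulation: each point part takes small steps
    along the programmed vector field until it (nearly) reaches
    equilibrium. Ignores rigid-body rotation and dynamics."""
    for _ in range(steps):
        parts = [(x + dt * field(x, y)[0], y + dt * field(x, y)[1])
                 for x, y in parts]
    return parts

def squeeze(x, y):
    """A 'squeeze field': unit force everywhere directed toward the
    line x = 0. Sensorless: the same field acts on every part."""
    return (-1.0 if x > 0 else (1.0 if x < 0 else 0.0), 0.0)

rng = random.Random(1)
parts = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(10)]
final = settle(parts, squeeze)
# Wherever the parts started, each ends up within one step size (dt)
# of the line x = 0.
assert max(abs(x) for x, _ in final) <= 0.05 + 1e-9
```

The point of the sketch is the sensorless property: a single fixed field drives every initial configuration to the same equilibrium set, which is what makes open-loop SIMD strategies possible.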

**Email:**
brd@cs.stanford.edu

**WWW:**
http://www.cs.cornell.edu/home/brd/

We modify the Hao & Orlin approach for computing edge connectivity to give the fastest known deterministic and randomized algorithms for computing vertex connectivity.

Our randomized algorithm improves the running time over existing algorithms

- by a factor of $n\kappa$ in the weighted case, and
- by a factor of at least $\min(\kappa, \sqrt{n})$ in the unweighted case.

To be precise, the randomized algorithm computes $\kappa$ with probability $1/2$ in time

- $O(nm)$ in unweighted graphs, and
- $O(nm \log (n^2/m))$ in weighted graphs.

This is joint work with Satish Rao and Hal Gabow and will appear in FOCS '96.

**Email:**
monika@pa.dec.com

**WWW:**
http://www.cs.cornell.edu/Info/People/mhr/home.html

**Related paper:**
http://www.cs.cornell.edu/Info/People/mhr/papers/focs96.ps

We prove that when preemption is allowed, {\em Shortest Remaining Processing Time} (SRPT) is an $O(\log ({\rm min}\{\frac{n}{m}, P\}))$ approximation algorithm for the total flow time, where $n$ is the number of jobs, $m$ is the number of machines, and $P$ is the ratio between the maximum and the minimum processing time of a job. We also provide an $\Omega(\log ({\rm max}\{\frac{n}{m},P\}))$ lower bound on the competitive ratio of any randomized on-line algorithm, therefore showing that SRPT is an optimal on-line algorithm.
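The algorithm being analyzed is simple to state; as a concrete illustration, here is a minimal unit-time simulation of preemptive SRPT on $m$ identical machines (integer release and processing times assumed; the function name is mine):

```python
import heapq

def srpt_total_flow_time(jobs, m):
    """Simulate preemptive SRPT on m identical machines in unit time
    steps. jobs: list of (release_time, processing_time), integers.
    Returns the total flow time, i.e. the sum over jobs of
    (completion_time - release_time)."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    release = [r for r, _ in jobs]
    completed = {}
    active = []  # min-heap of (remaining_time, job_id)
    t, i = 0, 0
    while len(completed) < len(jobs):
        # Admit every job released by time t.
        while i < len(jobs) and release[order[i]] <= t:
            j = order[i]
            heapq.heappush(active, (jobs[j][1], j))
            i += 1
        if not active:  # idle until the next release
            t = release[order[i]]
            continue
        # Run the (up to) m jobs with shortest remaining time for one unit.
        running = [heapq.heappop(active) for _ in range(min(m, len(active)))]
        t += 1
        for rem, j in running:
            if rem == 1:
                completed[j] = t
            else:
                heapq.heappush(active, (rem - 1, j))
    return sum(completed[j] - jobs[j][0] for j in completed)
```

For example, on one machine with three jobs of lengths 3, 1, 2 all released at time 0, SRPT finishes them at times 1, 3, and 6, for a total flow time of 10.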

The solution of the preemptive version of the problem is used as a basis to obtain a scheduler in the non-preemptive case. Here we give an $O(\sqrt {\frac{n}{m}}\log {\frac{n}{m}})$ approximation algorithm and an $\Omega(n^{\frac{1}{3}-\epsilon})$ lower bound on the approximability of this problem (assuming $P \not= NP$).

This is a joint work with Stefano Leonardi.

**Email:**
draz@icsi.berkeley.edu

**WWW:**
http://www.icsi.berkeley.edu/~draz

This is joint work with Dan Halperin and Rajeev Motwani.

**Email:**
moses@cs.stanford.edu

**WWW:**
http://www-cs-students.stanford.edu/~moses

**Email:**
yao@cs.princeton.edu

**WWW:**
http://www.cs.princeton.edu/~yao

We can prove that if a smooth curve is sampled densely enough, then the crust of the samples approximates the curve, with no extraneous features. The minimum required sampling density varies along the curve according to the {\em Local Feature Size} (which is also simply defined), so that areas of less detail can be sampled less densely.

The crust is easy to compute using Voronoi diagrams in $O(n \lg n)$ time (even in practice). We will demonstrate the computation of crusts on arbitrary point sets.

**Email:**
amenta@parc.xerox.com

**WWW:**
http://www.geom.umn.edu/~nina

**Related paper:** (TBA)

This is joint work with Mike Saks and Francis Zane.

Despite the simplicity of the algorithms, their design and analysis are very intricate and elegant mathematically. The design requires the careful choice of a random irregular bipartite graph, where the structure of the irregular graph is extremely important. We model the progress of the decoding algorithm by a set of differential equations. The solution to these equations can then be expressed as polynomials in one variable with coefficients determined by the graph structure. Based on these polynomials, we design a graph structure that guarantees successful decoding with high probability. Joint work with Mike Luby, Mike Mitzenmacher, Dan Spielman and Volker Stemann.
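The decoder itself is just iterative "peeling" on the bipartite graph. The sketch below shows that core step on an arbitrary small graph; the talk's contribution is precisely the careful design of the irregular graph, which this toy example does not attempt, and the names are mine.

```python
def peel_decode(n, checks, received):
    """Iterative erasure decoding by peeling. `checks` is a list of
    (index_tuple, xor_value) redundant symbols, each the XOR of the
    message bits at the given indices; `received` maps the known
    message positions to their bit values. Repeatedly find a check
    with exactly one unknown neighbor and solve for it."""
    known = dict(received)
    progress = True
    while progress and len(known) < n:
        progress = False
        for idx, val in checks:
            missing = [i for i in idx if i not in known]
            if len(missing) == 1:
                x = val
                for i in idx:
                    if i != missing[0]:
                        x ^= known[i]
                known[missing[0]] = x
                progress = True
    return known
```

Decoding succeeds exactly when peeling never gets stuck with every remaining check having two or more unknown neighbors; the differential-equation analysis in the talk tracks how likely that is as a function of the graph's degree structure.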

**Email:**
amin@icsi.berkeley.edu

**WWW:**
http://www.icsi.berkeley.edu/~amin

**Related paper:**
http://theory.stanford.edu/~aflb/shokrollahi97.ps

We consider the nearest-neighbor problem for $d$-dimensional Euclidean space: we wish to pre-process a database of $n$ points so that given a query point, one can efficiently determine its nearest neighbors in the database. There is a large literature on algorithms for this problem, in both the exact and approximate cases. The more sophisticated algorithms typically achieve a query time that is logarithmic in $n$ at the expense of an exponential dependence on the dimension $d$; indeed, even the average-case analysis of heuristics such as k-d trees reveals an exponential dependence on $d$ in the query time.

In this work, we develop a new approach to the nearest-neighbor problem, based on a method for combining randomly chosen one-dimensional projections of the underlying point set. We use this method to obtain, among other results, an approximate nearest-neighbor algorithm with a query time that is logarithmic in $n$ and polynomial in the dimension $d$.
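To convey the flavor of random one-dimensional projections (though not the talk's actual combination scheme, which is more subtle), here is a naive sketch that simply unions the bracketing candidates from each projection; the function names are mine.

```python
import bisect
import math
import random

def build_index(points, num_projections=8, seed=0):
    """Pre-processing: project the point set onto a few random unit
    directions and sort each resulting 1-D projection."""
    rng = random.Random(seed)
    d = len(points[0])
    index = []
    for _ in range(num_projections):
        v = [rng.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        u = [c / norm for c in v]
        proj = sorted((sum(p[i] * u[i] for i in range(d)), j)
                      for j, p in enumerate(points))
        index.append((u, proj))
    return index

def approx_nearest(points, index, q):
    """Query: in each 1-D projection, binary-search for the two points
    whose projected values bracket q's, then return the true nearest
    point among all candidates found."""
    d = len(q)
    candidates = set()
    for u, proj in index:
        x = sum(q[i] * u[i] for i in range(d))
        k = bisect.bisect_left(proj, (x, -1))
        for t in (k - 1, k):
            if 0 <= t < len(proj):
                candidates.add(proj[t][1])

    def dist2(j):
        return sum((points[j][i] - q[i]) ** 2 for i in range(d))

    return min(candidates, key=dist2)
```

Each query inspects only O(num_projections) candidates via binary search, rather than all n points; the price is that the answer is approximate, which is the trade-off the talk's algorithm controls rigorously.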

**Email:**
kleinber@almaden.ibm.com

**WWW:**
http://www.cs.cornell.edu/home/kleinber/kleinber.html

**Related paper:** (TBA)

Our proof generalizes the multi-dimensional Markov chain analysis used by Kenyon, Rabani, and Sinclair to prove that Best Fit is also stable under these distributions.

Our proof is motivated by a similar analysis of Random Fit, a new simple packing algorithm related to First Fit, that is interesting in its own right.

This is joint work with Susanne Albers.

**Email:**
michaelm@pa.dec.com

**WWW:**
http://www.research.digital.com/SRC/personal/Michael_Mitzenmacher/home.html

**Related paper:** (TBA)

The fully dynamic minimum spanning tree problem requires building a data structure that, after an edge insertion or deletion,

- can quickly determine whether and how the minimum spanning forest is affected by the change, and
- can quickly be updated.

We achieve this result in two steps: First we give a deletions-only data structure that takes time O(n^{1/3} log n) per operation. Second we present a general technique to combine a deletions-only minimum spanning tree data structure with an insertions-only minimum spanning tree data structure to create a fully dynamic data structure. We expect that this technique will also be useful for other dynamic graph problems.

This is joint work with Valerie King.

**Email:**
monika@pa.dec.com

**WWW:**
http://www.research.digital.com/SRC/personal/Monika_Henzinger/home.html

**Related paper:** (TBA)

Last year we presented a new algorithm that is an approximation scheme: for any fixed c>0, it computes a tour of cost at most 1+c times the optimal. The scheme's running time was n^{O(1/c)}.

This talk will describe an approximation scheme that is simpler and much more efficient. For any fixed c>0, it finds a (1+c)-approximate tour in n*polylog(n) time, which is nearly linear in n. The scheme also works in nearly linear time on instances in d dimensions (for fixed d).

Our algorithm generalizes to many other geometric problems, including the famous Minimum Steiner Tree problem, and the Euclidean Matching problem.

**Email:**
arora@cs.princeton.edu

**WWW:**
http://www.cs.princeton.edu/~arora

Joint work with David D. Lewis.

**Email:**
edith@CS.Berkeley.EDU

**WWW:**
http://www.research.att.com/~edith

**Related paper:**
http://www.research.att.com/~edith/Papers/posmat.ps.Z

The problem was introduced by Graham in 1966. Graham developed the well-known List scheduling algorithm that schedules every incoming job on the least loaded machine. The List algorithm is $(2- {1\over m})$-competitive. In recent years, research has focused on finding algorithms with a competitive ratio bounded away from 2.
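Graham's List algorithm is simple enough to state in a few lines; a minimal sketch (the function name is mine):

```python
import heapq

def list_schedule(jobs, m):
    """Graham's List algorithm: place each incoming job, in arrival
    order, on the currently least loaded of the m machines.
    Returns the makespan (maximum machine load)."""
    loads = [0] * m  # min-heap of current machine loads
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)
```

The classic tight instance for the $(2 - \frac{1}{m})$ bound is $m(m-1)$ unit jobs followed by one job of length $m$: List spreads the unit jobs evenly and then must put the long job on a machine of load $m-1$, giving makespan $2m-1$ against an optimum of $m$.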

Bartal, Fiat, Karloff and Vohra (1992) gave a deterministic online algorithm that is 1.986-competitive. Karger, Phillips and Torng (1994) generalized the algorithm and proved an upper bound of 1.945. The best lower bound currently known on the competitive ratio achievable by deterministic online algorithms is 1.837.

We present an improved deterministic online scheduling algorithm that is 1.923-competitive, for all $m\geq 2$. The algorithm is based on a new scheduling strategy; that is, it is not a generalization of the approach by Bartal et al. It also has a simple structure. Furthermore, we develop a better lower bound: we prove that, for general $m$, no deterministic online scheduling algorithm can be better than 1.852-competitive. The contribution of our work is not primarily the improvement in the competitive ratios but the new scheduling algorithm underlying our solution.

**Email:**
albers@mpi-sb.mpg.de

**WWW:**
http://www.mpi-sb.mpg.de/~albers/

Tiwari proved that D_k(f) >= k*(D(f)-O(1)) for almost all functions, and conjectured this to be true for all f. His conjecture was falsified by Kushilevitz, Linial, and Ostrovsky (1996): they exhibited a function f for which D_k(f) is essentially bounded above by 0.75*k*D(f). The best general lower bound known is D_k(f) >= k*(D(f)^(1/2)-log k-3).

We prove: D_k(f) >= 0.146 * k * D(f), for all functions f and all k >= 2. Applying the general framework provided by Tiwari, one may derive lower bounds for the communication complexity of a function in general asynchronous networks in terms of its two-party communication complexity. Moreover, resolving a longstanding open question, the result shows that the two-party communication complexity of a function directly yields a lower bound on the running time of one-tape Turing machines computing it.

The nondeterministic and randomized cases are also studied, leading to similar results.

**Email:**
dietzf@noether.informatik.uni-dortmund.de

**Related paper:** To appear in STOC '97

Based on the following papers:

- Freund, Schapire, Singer, Warmuth (STOC '97). Using and Combining Predictors that Specialize.
- Cohen, Singer (SIGIR '96). Context-Sensitive Learning Methods for Text Categorization.
- Helmbold, Schapire, Singer, Warmuth (submitted to Mathematical Finance). On-line Portfolio Selection Using Multiplicative Updates.

Joint work with Omer Reingold.

**Email:**
naor@wisdom.weizmann.ac.il