For 1995, talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

For 1996, talks are held in the Gates Building, near the Main Quad of Stanford's campus.

- 12 October 1995: **Howard Karloff** (Georgia Tech). A Better Approximation Algorithm for Finding Planar Subgraphs.
- 2 November 1995: **Michael Luby** (ICSI). Linear Time Erasure Codes With Nearly Optimal Recovery.
- 16 November 1995: **Sanjeev Khanna** (Stanford). Angular-Metric TSP and the Matroid Parity Problem.
- 30 November 1995: **Alistair Sinclair** (UC-Berkeley). Biased Random Walks, Lyapunov Functions, and Stochastic Analysis of Best-Fit Bin Packing.
- 7 December 1995: **Cynthia Dwork** (IBM Almaden). Digital Signets for Protection of Digital Information.
- 14 December 1995: **Donald Aingworth** (Stanford). Estimating Diameter and Shortest Paths without Matrix Multiplication.
- 18 January 1996: **Eli Gafni** (UCLA). Read-Write Protocols with Scheduling Constraints.
- 1 February 1996: **Naoki Katoh** (Kobe University of Commerce). Computing {\em LMT}-skeleton efficiently.
- 8 February 1996: **Uri Zwick** (Tel Aviv University). Finding the median requires $(2+\epsilon)n$ comparisons.
- 15 February 1996: **Yair Bartal** (ICSI). Lower Bounds for On-line Graph Problems with Application to On-line Circuit and Optical Routing.
- 22 February 1996: **Eli Upfal** (IBM Almaden and The Weizmann Institute). Steady State Analysis of Computer Processes.
- 29 February 1996: **Michael Mitzenmacher** (UC-Berkeley). Large Markovian Particle Processes and Some Applications to Load Balancing.
- 5 March 1996: **Sally Goldman** (Washington University). Noise-Tolerant Distribution-Free Learning of General Geometric Concepts.
- 4 April 1996: **Miki Ajtai** (IBM Almaden). Generating Hard Problems with Their Solutions.
- 5 April 1996: **Andrew Goldberg** (NEC). Negative-Cycle Detection Algorithms.
- 18 April 1996: **Ron Fagin** (IBM Almaden). Easier Ways to Win Logical Games.
- 25 April 1996: **John Mitchell** (Stanford). Linear Logic, Probabilistic Games and the Complexity of Approximate Proof Search.
- 2 May 1996: **Cliff Stein** (Dartmouth). Designing Schedules that Minimize Average Completion Time.
- 9 May 1996: **Dan Spielman** (UC-Berkeley and MIT). Spectral Partitioning Works: Planar graphs and finite element meshes.
- 16 May 1996: **Samir Khuller** (Maryland). Approximation Algorithms for Facility Location Problems.
- 23 May 1996: **Julien Basch** (Stanford). Reporting Red-Blue Intersections between Connected Sets of Line Segments.
- 30 May 1996: **Stavros Kolliopoulos** (Dartmouth). Finding Real-Valued Single-Source Shortest Paths in $o(n^3)$ Expected Time.
- 6 June 1996: **Leszek Gasieniec** (Max Planck Institut fur Informatik, Saarbrucken). From Parallel to Small Space String Matching.
- 22 August 1996: **David Karger** (MIT). Random Sampling in Cut Problems.

This is joint work with Cristina Fernandes, Gruia Calinescu, and Ulrich Finkler.

**Email:**
howard@cc.gatech.edu

**Related paper:**
http://theory.stanford.edu/~aflb/karloff95.ps

Erasure codes have immediate applications, e.g., to sending real-time high-volume video information over lossy packet based networks. The two most important properties of erasure codes are the running time of the algorithms and the amount of encoding sufficient to recover the message, i.e., the value of $r$. It is not hard to see that the running time must be at least linear and that $r$ must be at least $n$.

We describe erasure codes where both the encoding and decoding algorithms run in linear time and where $r$ is only slightly larger than $n$. Our first scheme is deterministic: on input parameter $\epsilon > 0$, the run time of the encoding and decoding algorithms is $O(n/\epsilon^4)$ and $r = (1+\epsilon)n$. Although this is an interesting theoretical result, it is not clear if it is practical.

Our second scheme is probabilistic: on input parameter $\epsilon > 0$, the run time of the encoding and decoding algorithms is $O(n\log(1/\epsilon)/\epsilon)$ and $r = (1+\epsilon)n$. In this setting, both the encoding and decoding algorithms are randomized with access to the same set of random bits, the network is assumed to drop packets independently of their contents, and the message is recovered from $r$ packets with high probability. This scheme shows promise of being practical.
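As a toy illustration of the erasure-channel model (not the codes in the talk), the sketch below adds a single XOR parity packet, so that any $n$ of the $n+1$ packets recover an $n$-block message; the function names are illustrative. The talk's codes achieve the much stronger regime of $r = (1+\epsilon)n$ received packets out of a larger encoding, with linear-time coding.

```python
import random

def encode(blocks):
    """Append one XOR parity packet; any n of the n+1 packets suffice."""
    parity = 0
    for b in blocks:
        parity ^= b
    return blocks + [parity]

def decode(received):
    """received: n (index, value) pairs out of the n+1 sent packets."""
    n = len(received)
    present = dict(received)
    missing = [i for i in range(n + 1) if i not in present]
    if missing:
        # the one lost packet is the XOR of everything that arrived
        x = 0
        for v in present.values():
            x ^= v
        present[missing[0]] = x
    return [present[i] for i in range(n)]

message = [7, 13, 42, 99]
packets = list(enumerate(encode(message)))
random.shuffle(packets)                            # the network reorders...
assert decode(packets[:len(message)]) == message   # ...and drops one packet
```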

Joint work with Noga Alon and Jeff Edmonds.

Presented at FOCS '95.

**Email:**
luby@icsi.berkeley.edu

**WWW:**
http://www.icsi.berkeley.edu:80/~luby

**Related paper:**
to appear in IEEE Transactions on Information Theory

This is joint work with Alok Aggarwal, Don Coppersmith, Rajeev Motwani and Baruch Schieber.

**Email:**
sanjeev@cs.stanford.edu

**WWW:**
http://Theory.Stanford.edu/people/sanjeev/sanjeev.html

**Related paper:** (TBA)

Large-scale simulations (due to Coffman et al.) point to an interesting dichotomy: when $j$ is either very large (i.e., close to $k$) or very small, the expected waste (i.e., unused space in the bins) remains bounded for an infinite stream of items, but there is a critical region (around $j\approx 0.8k$) in which the waste grows linearly. However, these experimental results have yet to be verified theoretically for most values of $j$ and $k$.

In this talk, we take a step in this direction by proving that the waste remains bounded along the line $j=k-2$. The proof is the first to involve a detailed understanding of the high-dimensional random walk underlying the algorithm.
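The simulations can be reproduced with a short Best-Fit sketch (the function name and parameters are illustrative): item sizes are drawn uniformly from $\{1/k,\dots,j/k\}$ and bins have capacity 1, following the discrete uniform model above.

```python
import random

def best_fit_waste(j, k, n_items, seed=0):
    """Best Fit under the discrete uniform distribution: item sizes are
    drawn uniformly from {1/k, ..., j/k}, bins have capacity 1.  Works
    in integer units of 1/k; returns the total unused space in open bins."""
    rng = random.Random(seed)
    bins = []                       # remaining free space per open bin
    for _ in range(n_items):
        s = rng.randint(1, j)
        fits = [(r, i) for i, r in enumerate(bins) if r >= s]
        if fits:
            r, i = min(fits)        # Best Fit: tightest bin that still fits
            bins[i] = r - s
        else:
            bins.append(k - s)      # no bin fits: open a new one
    return sum(bins) / k
```

The linear scan per item makes each placement O(number of open bins); an implementation meant for large streams would keep the gap sizes in a balanced search tree instead.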

(Joint work with Claire Kenyon and Yuval Rabani.)

**Email:**
sinclair@cs.berkeley.edu

**Related paper:** In Proceedings of SODA, 1996

In this paper we introduce {\em digital signets}, a new technique for protecting digital content from illegal redistribution. In broad terms, signets work as follows. There is some common public data, some of which is encrypted {\em content}, an {\em authorization center}, and any number of potential {\em users}. Each user has access to the common public data. To gain access to the content, user $U$ interacts with an authorization center to obtain a short {\em digital signet}, which is a function of information private to $U$. Using only the public data, its own private information, and the signet, $U$ can decrypt the content. Signets are designed in such a way that for $U$ to help any other $U'$ to access the content, $U$ would have to either transmit something very long (such as the content itself), or reveal $U$'s private information. Thus, users have incentive to police {\em themselves.}

The work motivates the study of the previously unexamined class of {\em incompressible} functions, analysis of which adds a cryptographic twist to communication complexity.

This is joint work with Jeffrey Lotspiech and Moni Naor.

This is joint work with Chandra Chekuri and Rajeev Motwani.

In this talk we answer the question for a class of scheduling constraints large enough to capture any reasonable modeling of processor speed-mismatch. We show that a given scheduling constraint is characterized by the multi-valued consensus problems it enables one to solve. In particular, we close the gap between 1-resiliency and $(n-1)$-resiliency, which were characterized by Biran, Moran and Zaks in 1988 and by Herlihy and Shavit in 1994, respectively: we characterize $t$-resiliency for all $t$, and as a by-product also characterize the set of problems that may be solved when multi-valued consensus is a system primitive.

(Joint work with Elizabeth Borowsky).

**Email:**
eli@cs.ucla.edu

**WWW:**
http://www.cs.ucla.edu/csd-lanai/fweb/eli.html

We shall propose an extension of their criteria to find an {\em LMT}-skeleton larger than the one by Dickerson and Montague, and an improvement upon the time and space complexities of their algorithm to find the {\em LMT}-skeleton.

The implementation described by Dickerson and Montague runs in $O(n^4)$ time and uses $O(n^3)$ space in the worst case. We present a faster implementation that runs in $O(n^3\log n)$ time and $O(n^2)$ space.

**Email:**
naoki@kucgw.kobeuc.ac.jp

Joint work with Dorit Dor

**Email:**
zwick@math.tau.ac.il

**WWW:**
http://www.math.tau.ac.il/~zwick/

**Related paper:**
http://www.math.tau.ac.il/~ddorit/thesis.ps.gz

Lund and Yannakakis give unapproximability results for the problem of finding the largest induced subgraph satisfying any non-trivial, hereditary property, e.g., independent set, planar, acyclic, or bipartite. We consider the on-line version of this family of problems.

Furthermore, we study the on-line version of graph coloring, whose off-line version has also been shown to be unapproximable by Lund and Yannakakis, as well as the on-line max edge-disjoint paths and on-line path coloring problems.

Irrespective of the time complexity, we show an $\Omega(n^\epsilon)$ lower bound on the competitive ratio of randomized on-line algorithms for any of these problems.

As a consequence, we obtain an $\Omega(n^\epsilon)$ lower bound on the competitive ratio of randomized on-line algorithms for virtual circuit routing and optical routing on general networks, in contrast to the known results for some specific networks, and disproving a recently published claim of Kapoulas and Spirakis. Moreover, our lower bounds hold even if the use of preemption is allowed.

Joint work with Amos Fiat and Stefano Leonardi.

**Email:**
yairb@icsi.berkeley.edu

**WWW:**
http://www.icsi.berkeley.edu/~yairb

**Related paper:** (TBA)

In this talk we present steady-state analyses of two continuous processes: a contention resolution protocol, and a hot-potato routing mechanism on arrays.

We provide a novel approach to analyzing such a system, based on studying an idealized, infinitely large version of the system and showing that the behavior of the finite system is "close" to the behavior of the infinite system in a well-defined sense. This methodology proves quite general, and can be used to study the behavior of many similar load balancing problems.
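As an illustrative example of the kind of finite load-balancing system such an infinite-system analysis approximates (this specific balls-and-bins process is an assumption, not necessarily one of the talk's processes), consider balls placed sequentially, each joining the least loaded of $d$ bins chosen at random:

```python
import random

def two_choice_loads(n_balls, n_bins, d=2, seed=0):
    """Sequentially place n_balls; each samples d bins uniformly at
    random and joins the least loaded one.  Returns the final loads."""
    rng = random.Random(seed)
    load = [0] * n_bins
    for _ in range(n_balls):
        # min over the d sampled bin indices, compared by current load
        i = min((rng.randrange(n_bins) for _ in range(d)), key=lambda b: load[b])
        load[i] += 1
    return load
```

For $n$ balls in $n$ bins, $d=1$ gives maximum load $\Theta(\log n/\log\log n)$ while $d=2$ gives $\Theta(\log\log n)$, the kind of gap that an idealized infinite-system (mean-field) analysis predicts.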

**Email:**
mitzen@cs.berkeley.edu

**WWW:**
http://http.cs.Berkeley.edu/~mitzen/

The concept class we study here is much more general than any geometric concept class known to be PAC-learnable. As special cases of our main result, we obtain learning algorithms for several new geometric concept classes. While we consider geometric concepts defined by halfspaces, any polyhedron (not necessarily convex) defined by $f$ faces can be formed by combining $f$ halfspaces. Thus our algorithm can also learn any boolean combination of a polynomial number of polyhedra each with a polynomial number of faces. Even further, our results can be easily extended to efficiently learn any boolean combination of a polynomial number of concepts selected from any concept class $\cal C$ over $\Re^d$ given that the VC-dimension of $\cal C$ is polynomial in $d$ (and thus constant for constant $d$), and there is a polynomial time algorithm to determine if there is a concept from $\cal C$ consistent with a given set of labelled examples.

We also present a statistical query version of our algorithm that can tolerate random classification noise for any noise rate strictly less than 1/2. Finally we present a generalization of the standard $\epsilon$-net result of Haussler and Welzl~\cite{HW87} and apply it to give an alternative noise-tolerant algorithm for $d=2$ based on geometric subdivisions.

Joint work with Nader Bshouty, David Mathias, Subhash Suri and Hisao Tamaki.

(Accepted to STOC '96)

**Email:**
sg@cs.wustl.edu

**WWW:**
http://www.cs.wustl.edu/~sg/

**Related paper:**
http://siesta.cs.wustl.edu/~sg/general-geometric.ps.Z

The natural assumptions that can be used for such a proof state that a class of problems (e.g., integer factorization) is difficult to solve in the worst case. We cannot use this directly to generate single problems that are difficult to solve: in spite of the fact that there is no known efficient algorithm that can factor every large number, there are specific large numbers that are easy to factor. There are many other famous problems that are thought to be difficult to solve in the worst case.

It was realized long ago that a possible way of generating hard instances of problems would be to find a set of randomly generated problems and to show that if there is an algorithm that finds a solution of a random instance with a nontrivial probability, then there is also an algorithm that solves one of the famous unsolved problems in the worst case.

In this work, such a random class is given. The hardness of the problem is reduced to the hardness of famous unsolved worst-case problems related to lattices in $n$-dimensional space. Namely, we show that if the random problems are easy to solve with nontrivial probability, then it is also easy to approximate the length of the shortest non-zero vector in a lattice up to a polynomial factor in the worst case. The hardness of this approximation problem is guaranteed by the fact that, in spite of the efforts of many great mathematicians (including the works of Dirichlet and Minkowski in the last century), it is still unsolved.
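To illustrate why the shortest-vector problem is a credible source of hardness, note that the naive exact method is an exponential search over integer coefficient vectors; a brute-force sketch for tiny dimensions (illustrative only, and exact only when the true shortest vector has coefficients within the assumed bound):

```python
import itertools
import math

def shortest_vector_bruteforce(basis, bound=3):
    """Search all nonzero integer combinations of the basis rows with
    coefficients in [-bound, bound] for the shortest lattice vector.
    The search space has (2*bound+1)^dim points, which is the point:
    no polynomial-time algorithm for this problem is known."""
    dim, m = len(basis), len(basis[0])
    best = None
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=dim):
        if not any(coeffs):
            continue                      # skip the zero vector
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(m)]
        norm = math.sqrt(sum(x * x for x in v))
        if best is None or norm < best[0]:
            best = (norm, v)
    return best
```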

An algorithm for the negative cycle problem combines a shortest path algorithm and a cycle detection strategy. We study various combinations of shortest path algorithms and cycle detection strategies and find the best combinations.

All shortest path algorithms in our study are based on the labeling method of Ford. We prove the following result about this method: if the input graph contains a negative cycle, then, after a finite number of labeling operations, the distance label of some vertex is less than the length of a shortest simple path in the graph, and the graph of parent pointers always contains a cycle. This result is fundamental for many cycle detection strategies, yet it was unknown prior to our paper.

Tarjan proposed a cycle detection strategy that allows immediate detection of cycles in the graph of parent pointers, without asymptotic increase in complexity of underlying shortest path algorithms. Surprisingly, this strategy improves practical performance of shortest path algorithms.
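As a concrete illustration of labeling plus parent-pointer cycle extraction, here is a textbook Bellman-Ford sketch (a simplification: it checks for a cycle only after $n$ full passes, whereas Tarjan's strategy detects cycles in the parent graph immediately during relaxation):

```python
def find_negative_cycle(n, edges):
    """Labeling method with end-of-pass cycle detection.  edges is a
    list of (u, v, w) arcs.  Returns a negative cycle as a vertex list,
    or None.  Starting every label at 0 acts as a virtual source."""
    dist = [0] * n
    parent = [-1] * n
    x = -1
    for _ in range(n):
        x = -1
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                parent[v] = u
                x = v
        if x == -1:
            return None     # a full pass with no relaxation: no negative cycle
    # x was relaxed on the n-th pass, so it is reachable from a cycle in
    # the parent graph; step back n times to land on that cycle
    for _ in range(n):
        x = parent[x]
    cycle, v = [], x
    while True:
        cycle.append(v)
        v = parent[v]
        if v == x:
            break
    return cycle[::-1]      # consecutive pairs follow parent-graph arcs
```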

Our previous study showed that an algorithm of Goldberg and Radzik is the best for shortest path problems with negative length arcs. Tarjan's algorithm is competitive with the Goldberg--Radzik algorithm on a wide range of negative cycle and shortest path problems.

**Email:**
avg@research.nj.nec.com

**WWW:**
http://www.neci.nj.nec.com/homepages/avg.html

**Related paper:**
http://www.neci.nj.nec.com/homepages/avg/pub/neci-tr-96-029.ps

Essentially the only known technique for proving inexpressibility results in descriptive complexity is to make use of games on graphs played between two players. The purpose of this talk is to discuss some results and techniques that assist in the use of these games, and in particular that make the task of proving inexpressibility results easier. Thereby, we can prove new and deeper inexpressibility results. Our hope is that we can develop such a powerful toolkit that we can eventually make a serious assault on such fundamental problems as the question of whether NP = co-NP.

Only a general familiarity with first-order logic will be assumed.

**Email:**
fagin@almaden.ibm.com

**Related paper:**
http://theory.stanford.edu/~aflb/fagin96.ps (preliminary version)

There are several forms of proof games for linear logic. We will look at a few of them briefly:

1) a solitaire version, in which a single player writes a proof by placing tiles on a grid,

2) a symmetric, two-player version in which both players must have PSpace capability to play optimally,

3) an asymmetric two-player version in which the proponent plays against a randomized adversary and the final score is assigned probabilistically,

4) another asymmetric two-player version in which the proponent plays against a randomized adversary and the final score is a deterministic numeric (as opposed to 0/1) score based on the approximate correctness of the proof fragment.

The characterization of MALL using winning strategies for game (3) is based on the complexity class of "Games Against Nature." Using game (4) and results from the literature on the complexity of approximation, we can derive lower bounds on the complexity of local proof search.

This is joint work with Patrick Lincoln (SRI) and Andre Scedrov (U Penn). No prior knowledge of linear logic is assumed.

**Email:**
mitchell@cs.stanford.edu

**WWW:**
http://theory.stanford.edu/people/jcm/home.html

If we schedule a set of $n$ jobs on a set of machines, each job $j$ finishes at some time $C_j$; our goal is to minimize $\sum_j C_j/n$. If all the jobs are available at time $0$ and there are no precedence constraints, this problem is known to be solvable in polynomial time. Once we introduce release dates, or precedence constraints, or add weights to the jobs, the problem becomes NP-hard. Until our recent work, no approximation algorithms were known for any (non-trivial) variants of these problems.
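The polynomial-time case just mentioned can be made concrete for a single machine: ordering jobs by shortest processing time first (the classical SPT rule; the function name is illustrative) minimizes the average completion time.

```python
def avg_completion_spt(times):
    """One machine, all jobs available at time 0, no precedence
    constraints: sequence jobs in nondecreasing processing time (SPT)
    and return the resulting average completion time."""
    t = total = 0
    for p in sorted(times):
        t += p              # this job completes at the current makespan
        total += t          # accumulate the sum of completion times
    return total / len(times)
```

The optimality argument is a simple exchange: swapping two adjacent jobs that are out of SPT order never decreases the sum of completion times, so the sorted order is optimal.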

We introduce several fairly general techniques for solving problems of this type and show how they lead to small-constant-factor approximation algorithms. Our first technique is to show how to use the solution to a preemptive variant of a scheduling problem to guide the design of a non-preemptive schedule. Second, we show how the solutions to linear programming relaxations of these scheduling problems can be used to guide the design of an integral non-preemptive schedule. We then give a fairly general framework for designing schedules on-line, using a subroutine for a makespan-like problem.

Applying these techniques, we get approximation algorithms for many problems including one-machine, parallel machines, unrelated machines and job shop scheduling problems. We can handle release dates and precedence constraints and weights on the jobs.

We also prove that we can modify our algorithms so that they simultaneously approximate the optimal makespan (schedule length) as well; this leads to the following quite general result: for any instance of any (reasonable) scheduling problem, there is a schedule that simultaneously comes within a small constant factor of both the optimal makespan and the optimal average weighted completion time.

We will discuss work from several papers, joint with Soumen Chakrabarti (U.C. Berkeley), Cynthia Phillips (Sandia National Labs), Andreas Schulz (Berlin), David Shmoys (Cornell) and Joel Wein (Polytechnic).

**Email:**
cliff@theory.stanford.edu

**WWW:**
http://www.cs.dartmouth.edu/~cliff

In this talk, we show that the proper application of spectral partitioning works well on bounded-degree planar graphs and finite element meshes, the classes of graphs to which it is usually applied. In particular, we prove that the Fiedler vector can be used to produce separators whose ratio of vertices removed to edges cut is $O(\sqrt{n})$ for bounded-degree planar graphs and two-dimensional meshes and $O(n^{1/d})$ for well-shaped $d$-dimensional meshes. The main ingredient of our analysis is a geometric technique for estimating the second-smallest eigenvalues of the Laplacian matrices of these graphs.

We will also explain why naive applications of spectral partitioning, such as spectral bisection, will fail miserably on some graphs that could arise in practice.
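A minimal sketch of the method under discussion, assuming a dense symmetric 0/1 NumPy adjacency matrix for clarity (the function name and the simple cut-ratio measure are illustrative choices, not the paper's exact definitions):

```python
import numpy as np

def fiedler_sweep_cut(adj):
    """Sweep cut along the Fiedler vector: form the graph Laplacian,
    take the eigenvector of its second-smallest eigenvalue, sort the
    vertices by their coordinate, and return the prefix whose ratio of
    cut edges to smaller-side size is best."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj
    _, vecs = np.linalg.eigh(lap)            # eigenvalues in ascending order
    order = np.argsort(vecs[:, 1]).tolist()  # sort by Fiedler coordinate
    n, best_ratio, best_set = len(order), None, None
    for i in range(1, n):
        side = set(order[:i])
        cut = sum(adj[u][v] for u in side for v in range(n) if v not in side)
        ratio = cut / min(i, n - i)
        if best_ratio is None or ratio < best_ratio:
            best_ratio, best_set = ratio, side
    return best_set
```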

This is joint work with Shang-Hua Teng of the University of Minnesota

**Email:**
spielman@conga.cs.berkeley.edu

**WWW:**
http://theory.lcs.mit.edu/~spielman

**Related paper:**
http://http.cs.berkeley.edu/~spielman/spect.html

(Parts of this are joint work with Robert Pless and Yoram Sussmann.)

**Email:**
samir@cs.UMD.EDU

**WWW:**
http://www.cs.umd.edu/~samir

Under this assumption of connectedness, we propose an algorithm that reports all purple intersections in time $O((N+K)\log^3 N\,\alpha(N))$, where $N$ is the number of streets, $K$ is the number of purple intersections, and $\alpha(N)$ is the inverse of Ackermann's function. The technique involved is a new type of line sweep that keeps a sequence of partial orders in randomized heaps.

This problem was extensively studied in the case where there are no red-red or blue-blue intersections: a number of solutions have been provided that achieve an optimal $O(N\log N + K)$ running time. In the general case, things are harder: the problem can be solved in time $O(N^{4/3}\log^{O(1)} N + K)$ using range searching techniques, and this is likely to be a lower bound. The connectedness hypothesis was taken advantage of by Agarwal and Sharir, who gave an $O(N\log^2 N\,\alpha(N))$ algorithm to detect a single purple intersection.
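For scale, the brute-force baseline tests every red-blue pair in $O(N^2)$ time; a sketch with an orientation-based proper-crossing predicate (segments represented as endpoint pairs, names illustrative):

```python
def orient(a, b, c):
    """Twice the signed area of triangle abc (positive if counterclockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def proper_cross(s, t):
    """True if segments s and t cross at a single interior point."""
    a, b = s
    c, d = t
    return (orient(a, b, c) * orient(a, b, d) < 0
            and orient(c, d, a) * orient(c, d, b) < 0)

def count_purple(red, blue):
    """Quadratic baseline: test every red segment against every blue one."""
    return sum(proper_cross(r, b) for r in red for b in blue)
```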

This work may have applications for the L.A.P.D.

This is joint work with Leo Guibas and G.D. Ramkumar.

**Email:**
jbasch@cs.stanford.edu

**WWW:**
http://robotics.stanford.edu/~jbasch/

**Related paper:**
http://robotics.stanford.edu/~jbasch/files/red-blue.ps

The shortest path problem is an old and fundamental problem with a host of applications. Our algorithm is the first strongly-polynomial algorithm in over 35 years to improve upon some aspect of the running time of the celebrated Bellman-Ford algorithm for arbitrary networks, with any type of cost assignments.

Joint work with Cliff Stein

**Email:**
stavros@cs.dartmouth.edu

**WWW:**
http://www.cs.dartmouth.edu/~stavros/

**Related paper:**
to appear in Proc. of IPCO '96

The presented results are based on joint work with Wojciech Plandowski and Wojciech Rytter of Warsaw University.

**Email:**
leszek@mpi-sb.mpg.de

**WWW:**
http://www.mpi-sb.mpg.de/~leszek

**Related paper:**
ftp://zaa.mimuw.edu.pl/pub/users/lechu/znew.ps.gz

**Related paper:**
ftp://zaa.mimuw.edu.pl/pub/users/lechu/cpm95.ps.gz

- A nearly linear time algorithm for finding minimum cuts;
- An $O(n^2)$ time algorithm for approximating s-t minimum cuts in undirected graphs, where the best exact algorithm takes $O(n^3)$ time;
- Fast algorithms approximating maximum flows and spanning tree packings;
- Algorithms for approximately computing the reliability of networks with random edge failures;
- Randomized rounding methods that give $(1+\epsilon)$ times optimal solutions to network design problems with large connectivity demands.
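These results build on random edge sampling and contraction; a compact sketch of Karger's basic contraction algorithm (illustrative only: the nearly linear time and $O(n^2)$ time algorithms above use considerably more machinery, and the trial count and seed here are arbitrary choices):

```python
import random

def karger_min_cut(edges, n, trials=200, seed=1):
    """Random contraction on a connected multigraph: repeatedly merge
    the endpoints of a uniformly random surviving edge until two
    super-vertices remain; the edges between them form a cut.  A single
    trial finds a minimum cut with probability >= 2/(n*(n-1)), so the
    minimum over many trials is a minimum cut with high probability."""
    rng = random.Random(seed)
    best = len(edges) + 1
    for _ in range(trials):
        parent = list(range(n))         # union-find over the n vertices
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        comps = n
        while comps > 2:
            u, v = edges[rng.randrange(len(edges))]
            ru, rv = find(u), find(v)
            if ru != rv:                # self-loops are skipped, not contracted
                parent[ru] = rv
                comps -= 1
        best = min(best, sum(1 for u, v in edges if find(u) != find(v)))
    return best
```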