SAS is the Stanford Algorithms Seminar (formerly the AFLB -- Algorithms for Lunch Bunch). It is run by Craig Silverstein. Meetings are generally Thursdays at 4:15 p.m.

For 1995, talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

For 1996, talks are held in the Gates Building, near the Main Quad of Stanford's campus.

Contents:

  1. 12 October 1995: Howard Karloff (Georgia Tech). A Better Approximation Algorithm for Finding Planar Subgraphs.
  2. 2 November 1995: Michael Luby (ICSI). Linear Time Erasure Codes With Nearly Optimal Recovery.
  3. 16 November 1995: Sanjeev Khanna (Stanford). Angular-Metric TSP and the Matroid Parity Problem.
  4. 30 November 1995: Alistair Sinclair (UC-Berkeley). Biased Random Walks, Lyapunov Functions, and Stochastic Analysis of Best-Fit Bin Packing.
  5. 7 December 1995: Cynthia Dwork (IBM Almaden). Digital Signets for Protection of Digital Information.
  6. 14 December 1995: Donald Aingworth (Stanford). Estimating Diameter and Shortest Paths without Matrix Multiplication.
  7. 18 January 1996: Eli Gafni (UCLA). Read-Write Protocols with Scheduling Constraints.
  8. 1 February 1996: Naoki Katoh (Kobe University of Commerce). Computing {\em LMT}-skeleton efficiently.
  9. 8 February 1996: Uri Zwick (Tel Aviv University). Finding the median requires (2+epsilon)n comparisons.
  10. 15 February 1996: Yair Bartal (ICSI). Lower Bounds for On-line Graph Problems with Application to On-line Circuit and Optical Routing.
  11. 22 February 1996: Eli Upfal (IBM Almaden and The Weizmann Institute). Steady State Analysis of Computer Processes.
  12. 29 February 1996: Michael Mitzenmacher (UC-Berkeley). Large Markovian Particle Processes and Some Applications to Load Balancing.
  13. 5 March 1996: Sally Goldman (Washington University). Noise-Tolerant Distribution-Free Learning of General Geometric Concepts.
  14. 4 April 1996: Miki Ajtai (IBM Almaden). Generating Hard Problems with Their Solutions.
  15. 5 April 1996: Andrew Goldberg (NEC). Negative-Cycle Detection Algorithms.
  16. 18 April 1996: Ron Fagin (IBM Almaden). Easier Ways to Win Logical Games.
  17. 25 April 1996: John Mitchell (Stanford). Linear Logic, Probabilistic Games and the Complexity of Approximate Proof Search.
  18. 2 May 1996: Cliff Stein (Dartmouth). Designing Schedules that Minimize Average Completion Time.
  19. 9 May 1996: Dan Spielman (UC-Berkeley and MIT). Spectral Partitioning Works: Planar graphs and finite element meshes.
  20. 16 May 1996: Samir Khuller (Maryland). Approximation Algorithms for Facility Location Problems.
  21. 23 May 1996: Julien Basch (Stanford). Reporting Red-Blue Intersections between Connected Sets of Line Segments.
  22. 30 May 1996: Stavros Kolliopoulos (Dartmouth). Finding Real-Valued Single-Source Shortest Paths in $o(n^3)$ Expected Time.
  23. 6 June 1996: Leszek Gasieniec (Max Planck Institut fur Informatik, Saarbrucken). From Parallel to Small Space String Matching.
  24. 22 August 1996: David Karger (MIT). Random Sampling in Cut Problems.

12 October 1995

Margaret Jacks Hall 352. 4:15 p.m.

Howard Karloff (Georgia Tech)

A Better Approximation Algorithm for Finding Planar Subgraphs

The MAXIMUM PLANAR SUBGRAPH problem---given a graph G, find a largest planar subgraph of G---has applications in circuit layout, facility layout, and graph drawing. No previous polynomial-time approximation algorithm for this NP-Complete problem was known to achieve a performance ratio larger than 1/3, which is achieved simply by producing a spanning tree of G. We present the first approximation algorithm for MAXIMUM PLANAR SUBGRAPH with higher performance ratio (0.4 instead of 1/3). (The same techniques can be applied to find large outerplanar subgraphs.) We also show that MAXIMUM PLANAR SUBGRAPH is Max SNP-Hard.
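
For concreteness, here is a minimal Python sketch (not from the talk) of the trivial baseline mentioned above: any spanning tree or forest of G is a planar subgraph, and since a simple planar graph on n vertices has at most 3n - 6 edges, this already guarantees a ratio of at least 1/3.

    def spanning_forest(n, edges):
        """Trivial 1/3 baseline: a spanning forest is always a planar subgraph.
        n: number of vertices; edges: list of (u, v) pairs with 0 <= u, v < n."""
        parent = list(range(n))
        def find(x):                       # union-find with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        chosen = []
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:                   # keep the edge only if it creates no cycle
                parent[ru] = rv
                chosen.append((u, v))
        return chosen                      # at most n - 1 edges, trivially planar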

This is joint work with Cristina Fernandes, Gruia Calinescu, and Ulrich Finkler.

Email: howard@cc.gatech.edu
Related paper: http://theory.stanford.edu/~aflb/karloff95.ps


2 November 1995

Margaret Jacks Hall 352. 1:15 p.m.

Michael Luby (ICSI)

Linear Time Erasure Codes With Nearly Optimal Recovery

An erasure code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of equal length packets of total length $cn$ from an $n$-bit message, where $c > 1$ is an input parameter. The decoding algorithm is able to recover the message from any set of packets whose total length is $r$.

Erasure codes have immediate applications, e.g., to sending real-time high-volume video information over lossy packet-based networks. The two most important properties of erasure codes are the running time of the algorithms and the amount of encoding sufficient to recover the message, i.e., the value of $r$. It is not hard to see that the running time must be at least linear and that $r$ must be at least $n$.

We describe erasure codes where both the encoding and decoding algorithms run in linear time and where $r$ is only slightly larger than $n$. Our first scheme is deterministic: On input parameter $\epsilon > 0$, the run time of the encoding and decoding algorithms is $O(n/\epsilon^4)$ and $r = (1+\epsilon)n$. Although this is an interesting theoretical result, it is not clear if it is practical.

Our second scheme is probabilistic: On input parameter $\epsilon > 0$, the run time of the encoding and decoding algorithms is $O(n\log(1/\epsilon)/\epsilon)$ and $r = (1+\epsilon)n$. In this setting, both the encoding and decoding algorithms are randomized with access to the same set of random bits, the network is assumed to drop packets independent of their contents, and the message is recovered from $r$ packets with high probability. This scheme shows promise of being practical.
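
As a toy illustration only (this is not the construction from the talk, and the degree distribution below is arbitrary), here is a Python sketch of a sparse-XOR erasure code with an iterative "peeling" decoder, which conveys the flavor of linear-time sparse-graph encoding and decoding. Message blocks are integers; each packet carries the XOR of a small random subset of blocks together with the subset's indices.

    import random

    def encode(blocks, num_packets, max_deg=4, seed=0):
        """Each packet = (set of block indices, XOR of those blocks)."""
        rng = random.Random(seed)
        k = len(blocks)
        packets = []
        for _ in range(num_packets):
            idxs = set(rng.sample(range(k), rng.randint(1, min(max_deg, k))))
            val = 0
            for i in idxs:
                val ^= blocks[i]
            packets.append((idxs, val))
        return packets

    def decode(packets, k):
        """Peeling: repeatedly take a degree-1 packet, recover its block, and
        subtract every recovered block from the packets that still contain it."""
        recovered = [None] * k
        packets = [(set(idxs), val) for idxs, val in packets]
        progress = True
        while progress:
            progress = False
            for pos, (idxs, val) in enumerate(packets):
                for i in [i for i in idxs if recovered[i] is not None]:
                    idxs.discard(i)        # subtract already-recovered blocks
                    val ^= recovered[i]
                if len(idxs) == 1:
                    (i,) = idxs
                    recovered[i] = val
                    idxs.clear()
                    progress = True
                packets[pos] = (idxs, val)
        return recovered                   # None entries could not be recovered

    # blocks = [random.getrandbits(32) for _ in range(100)]
    # received = random.sample(encode(blocks, 250), 140)  # survive packet loss
    # decode(received, 100) == blocks   # usually True for this toy, not guaranteed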

JOINT WORK WITH: Noga Alon and Jeff Edmonds

Presented at FOCS '95

Email: luby@icsi.berkeley.edu
WWW: http://www.icsi.berkeley.edu:80/~luby
Related paper: to appear in IEEE Transactions on Information Theory


16 November 1995

Margaret Jacks Hall 352. 4:15 p.m.

Sanjeev Khanna (Stanford)

Angular-Metric TSP and the Matroid Parity Problem

We consider the problem of minimizing the total angle of a TSP tour, where the angle of the tour is the sum of the direction changes at the vertices. The problem is originally motivated by the goal of finding a ``smooth'' tour that visits a set of prescribed viewpoints of an object. A rather unexpected consequence of our study of this problem is the resolution of a longstanding open problem in matroid theory: is the Matroid Parity problem for weighted linear matroids NP-hard? We settle this question in the affirmative via a complex sequence of reductions involving the angular TSP problem. We also devise approximation algorithms for the angular TSP and the angular cycle cover problems. Finally, we consider the trade-off issue in simultaneously approximating both the length and the angle metric. By understanding the combinatorial structure of the angle measure, we establish essentially matching upper and lower bounds on the quality of this trade-off.
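
To make the objective concrete, here is a small Python sketch (my reading of the definition above, not code from the talk) that computes the angular cost of a closed tour over points in the plane: at each vertex we add the change in direction between the incoming and outgoing edges.

    import math

    def tour_angle(points, tour):
        """Sum of direction changes of the closed tour (indices into points).
        The turn at a vertex is in [0, pi]: 0 = go straight, pi = U-turn."""
        total = 0.0
        k = len(tour)
        for i in range(k):
            a = points[tour[i - 1]]        # previous vertex (wraps around)
            b = points[tour[i]]            # current vertex
            c = points[tour[(i + 1) % k]]  # next vertex
            d_in = math.atan2(b[1] - a[1], b[0] - a[0])
            d_out = math.atan2(c[1] - b[1], c[0] - b[0])
            turn = abs(d_out - d_in)
            total += min(turn, 2 * math.pi - turn)
        return total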

This is joint work with Alok Aggarwal, Don Coppersmith, Rajeev Motwani and Baruch Schieber.

Email: sanjeev@cs.stanford.edu
WWW: http://Theory.Stanford.edu/people/sanjeev/sanjeev.html
Related paper: (TBA)


30 November 1995

Margaret Jacks Hall 352. 4:15 p.m.

Alistair Sinclair (UC-Berkeley)

Biased Random Walks, Lyapunov Functions, and Stochastic Analysis of Best-Fit Bin Packing

The Best Fit algorithm for online bin packing is widely used in practice and has inspired some elegant theoretical analysis over two decades. In recent years, much attention has focused on the average case behavior under the discrete uniform distribution $U(j,k)$, where the bins have size $k$ and the items to be packed have sizes uniformly distributed in the set $\{1,...,j\}$.

Large-scale simulations (due to Coffman et al.) point to an interesting dichotomy: when $j$ is either very large (i.e., close to $k$) or very small, the expected waste (i.e., unused space in the bins) remains bounded for an infinite stream of items, but there is a critical region (around $j\approx 0.8k$) in which the waste grows linearly. However, these experimental results have yet to be verified theoretically for most values of $j$ and $k$.

In this talk, we take a step in this direction by proving that the waste remains bounded along the line $j=k-2$. The proof is the first to involve a detailed understanding of the high-dimensional random walk underlying the algorithm.
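
The simulations mentioned above are easy to reproduce in miniature. The following Python sketch (an illustration written for this page, not the experiments of Coffman et al.) packs a stream of items drawn uniformly from {1,...,j} into bins of size k using Best Fit and reports the waste; since gaps only take values in {1,...,k-1}, one counter per gap value suffices.

    import random

    def best_fit_waste(j, k, num_items=1_000_000, seed=0):
        """Waste (total unused space over all opened bins) of Best Fit on a
        random U(j,k) item stream; assumes 1 <= j <= k."""
        rng = random.Random(seed)
        count = [0] * k          # count[g] = open bins with remaining capacity g
        waste = 0
        for _ in range(num_items):
            x = rng.randint(1, j)
            # Best Fit: the tightest open bin that still fits, else a new bin (g = k).
            g = next((g for g in range(x, k) if count[g] > 0), k)
            if g < k:
                count[g] -= 1
                waste -= g
            leftover = g - x
            if leftover > 0:
                count[leftover] += 1
            waste += leftover
        return waste

    # best_fit_waste(k - 2, k) should stay bounded as num_items grows (the talk's
    # theorem), while values of j around 0.8 * k give waste that grows linearly.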

(Joint work with Claire Kenyon and Yuval Rabani.)

Email: sinclair@cs.berkeley.edu
Related paper: In Proceedings of SODA, 1996


7 December 1995

Margaret Jacks Hall 352. 4:15 p.m.

Cynthia Dwork (IBM Almaden)

Digital Signets for Protection of Digital Information

The problem of protecting digital content -- software, video, documents, music, etc. -- from illegal redistribution by an authorized user is the focus of considerable industrial and academic effort. In the absence of special-purpose tamper-proof hardware, the problem has no cryptographically secure solution: once a legitimate user has purchased the content, the user, by definition, has access to the material and can therefore capture it and redistribute it. A number of techniques have been suggested or are currently employed to make redistribution either inconvenient or traceable. The problem with traceability is that it requires a ``digital content police'' that has no automatic means of generating suspicion.

In this paper we introduce {\em digital signets}, a new technique for protecting digital content from illegal redistribution. In broad terms, signets work as follows. There are three components: some common public data, part of which is the encrypted {\em content}; an {\em authorization center}; and any number of potential {\em users}. Each user has access to the common public data. To gain access to the content, user $U$ interacts with the authorization center to obtain a short {\em digital signet}, which is a function of information private to $U$. Using only the public data, its own private information, and the signet, $U$ can decrypt the content. Signets are designed in such a way that for $U$ to help any other $U'$ to access the content, $U$ would have to either transmit something very long (such as the content itself), or reveal $U$'s private information. Thus, users have incentive to police {\em themselves.}

The work motivates the study of the previously unexamined class of {\em incompressible} functions, analysis of which adds a cryptographic twist to communication complexity.

This is joint work with Jeffrey Lotspiech and Moni Naor.

Email: dwork@almaden.ibm.com


14 December 1995

Margaret Jacks Hall 352. 4:15 p.m.

Donald Aingworth (Stanford)

Estimating Diameter and Shortest Paths without Matrix Multiplication

Consider the problem of computing all-pairs shortest paths in an undirected, unweighted graph with $n$ vertices and $m$ edges. Using fast matrix multiplication, this problem can be solved in time $\tilde{O}(n^\omega)$, where $\omega$ is the exponent in the running time of the fast matrix multiplication algorithm. Not only are these algorithms infeasible in practice, they are unsatisfying in that by ignoring the combinatorial structure of these problems they hide the nature of the solution. We present a purely combinatorial algorithm for approximating all-pairs shortest paths (and thus the diameter) -- to within an additive constant error -- in $\tilde{O}(n^{2.5})$ time. Implementation results show that this algorithm is significantly faster than prior approaches, with only a small loss in accuracy. Open questions are whether the error can be eliminated, and whether the running time can be reduced below the current exponent for fast matrix multiplication (which is currently 2.376).
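
For scale, the purely combinatorial exact baseline (not the algorithm of the talk) is simply a breadth-first search from every vertex, which runs in O(nm) time and already avoids matrix multiplication; a minimal Python sketch:

    from collections import deque

    def bfs_apsp(adj):
        """Exact all-pairs shortest paths in an unweighted, undirected graph.
        adj: adjacency lists; returns dist[s][v], with -1 for unreachable pairs."""
        n = len(adj)
        dist = [[-1] * n for _ in range(n)]
        for s in range(n):
            dist[s][s] = 0
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if dist[s][v] == -1:
                        dist[s][v] = dist[s][u] + 1
                        q.append(v)
        return dist    # the diameter is the largest finite entry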

This is joint work with Chandra Chekuri and Rajeev Motwani.


18 January 1996

Gates Building 498. 4:15 p.m.

Eli Gafni (UCLA)

Read-Write Protocols with Scheduling Constraints

A distributed protocol solves a problem $t$-resiliently if it works under the assumption that at most $t$ processors will fail stop. Resiliency can be abstracted as a scheduling constraint. A $t$-resilient protocol works for executions in a given set but may not work for others. One can imagine many scheduling constraints that do not correspond to $t$-resiliency for any $t$. Given a problem and a scheduling constraint, does there exist a read-write shared memory protocol to solve the problem for executions satisfying the constraints?

In this talk we answer the question for a class of scheduling constraints large enough to capture any reasonable modeling of processor speed-mismatch. We show that a given scheduling constraint is characterized by the multi-valued consensus problems it enables one to solve. In particular, we close the gap between 1-resiliency (characterized by Biran, Moran, and Zaks in 1988) and $(n-1)$-resiliency (characterized by Herlihy and Shavit in 1994) by characterizing $t$-resiliency for all $t$; as a by-product, we also characterize the set of problems that may be solved when multi-valued consensus is a system primitive.

(Joint work with Elizabeth Borowsky).

Email: eli@cs.ucla.edu
WWW: http://www.cs.ucla.edu/csd-lanai/fweb/eli.html


1 February 1996

Gates Building 498. 4:15 p.m.

Naoki Katoh (Kobe University of Commerce)

Computing {\em LMT}-skeleton efficiently

Given a planar point set $S$, the {\em LMT}-skeleton, a subgraph of a minimum weight triangulation of $S$, was recently proposed by Dickerson and Montague. They showed experimentally that the {\em LMT}-skeleton is a very large subgraph of a minimum weight triangulation, and that an exact minimum weight triangulation of more than 200 points can be found very quickly from the {\em LMT}-skeleton combined with dynamic programming.

We propose an extension of their criteria that finds an {\em LMT}-skeleton larger than the one found by Dickerson and Montague, and an improvement of the time and space complexities of their algorithm for computing the {\em LMT}-skeleton.

The implementation described by Dickerson and Montague runs in $O(n^4)$ time and uses $O(n^3)$ space in the worst case. We present a faster implementation that runs in $O(n^3\log n)$ time and $O(n^2)$ space.

Email: naoki@kucgw.kobeuc.ac.jp


8 February 1996

Gates Building 498. 4:15 p.m.

Uri Zwick (Tel Aviv University)

Finding the median requires (2+epsilon)n comparisons

Improving a long-standing result of Bent and John, we obtain a (2+epsilon)n lower bound (for some fixed epsilon > 0) on the number of comparisons required, in the worst case, for selecting the MEDIAN of n elements. The new lower bound is obtained by enhancing the leaf-counting approach with adversary arguments.

Joint work with Dorit Dor

Email: zwick@math.tau.ac.il
WWW: http://www.math.tau.ac.il/~zwick/
Related paper: http://www.math.tau.ac.il/~ddorit/thesis.ps.gz


15 February 1996

Gates Building 498. 4:15 p.m.

Yair Bartal (ICSI)

Lower Bounds for On-line Graph Problems with Application to On-line Circuit and Optical Routing

We present lower bounds on the competitive ratio of randomized algorithms for a wide class of on-line graph optimization problems and we apply such results to on-line virtual circuit and optical routing problems.

Lund and Yannakakis give unapproximability results for the problem of finding the largest induced subgraph satisfying any non-trivial, hereditary property (e.g., independent set, planar, acyclic, bipartite). We consider the on-line version of this family of problems.

Furthermore, we study the on-line version of graph coloring whose off-line version has also been shown to be unapproximable by Lund and Yannakakis, on-line max edge-disjoint paths and on-line path coloring problems.

Irrespective of the time complexity, we show an $\Omega(n^\epsilon)$ lower bound on the competitive ratio of randomized on-line algorithms for any of these problems.

As a consequence, we obtain an $\Omega(n^\epsilon)$ lower bound on the competitive ratio of randomized on-line algorithms for virtual circuit routing and optical routing on general networks, in contrast to the known results for some specific networks, and disproving a recently published claim of Kapoulas and Spirakis. Moreover, our lower bounds hold even if the use of preemption is allowed.

Joint work with Amos Fiat and Stefano Leonardi.

Email: yairb@icsi.berkeley.edu
WWW: http://www.icsi.berkeley.edu/~yairb
Related paper: (TBA)


22 February 1996

Gates Building 498. 4:15 p.m.

Eli Upfal (IBM Almaden and The Weizmann Institute)

Steady State Analysis of Computer Processes

Research on probabilistic analysis has focused mainly on analyzing terminating processes (algorithms), where the goal is to bound the number of steps till the process terminates. However, many important processes in a computing system (such as load balancing, or management of a communication network) are continuous, and non-terminating. The goal in analyzing such processes is to characterize their steady state performance. Such an analysis requires different techniques.

In the talk we present steady-state analyses of two continuous processes: a contention resolution protocol, and a hot-potato routing mechanism on arrays.

Email: ely@almaden.ibm.com


29 February 1996

Gates Building 498. 4:15 p.m.

Michael Mitzenmacher (UC-Berkeley)

Large Markovian Particle Processes and Some Applications to Load Balancing

Consider the following dynamic problem: jobs arrive as a Poisson stream of rate $\lambda n$, $\lambda < 1$, to a collection of $n$ servers. Each job chooses some constant $d$ servers independently and uniformly at random from the $n$ servers, and waits for service at the one currently containing the fewest jobs (ties are broken arbitrarily). Jobs are served using the first-in first-out (FIFO) protocol, and the service time for a job is exponentially distributed with mean 1. We wish to know how this system behaves, and in particular we are interested in the expected time a job spends in the system.

We provide a novel approach to analyzing such a system, based on studying an idealized, infinitely large version of the system and showing that the behavior of the finite system is "close" to the behavior of the infinite system in a well-defined sense. This methodology proves quite general, and can be used to study the behavior of many similar load balancing problems.
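
The finite system described above is easy to simulate directly; the following Python sketch (a toy event-driven simulation, with parameter values chosen arbitrarily) estimates the expected time a job spends in the system.

    import heapq, random
    from collections import deque

    def simulate(n=100, lam=0.9, d=2, num_jobs=200_000, seed=1):
        """Poisson(lam*n) arrivals, d independent uniform choices, join the
        shortest of the d queues, FIFO service, Exp(1) service times.
        Returns the average time in system over the first num_jobs departures."""
        rng = random.Random(seed)
        queues = [deque() for _ in range(n)]       # arrival times queued per server
        events = [(rng.expovariate(lam * n), "arrival", -1)]
        sojourn, done, arrived = 0.0, 0, 0
        while done < num_jobs:
            t, kind, s = heapq.heappop(events)
            if kind == "arrival":
                arrived += 1
                if arrived < 2 * num_jobs:          # keep the arrival stream going
                    heapq.heappush(events, (t + rng.expovariate(lam * n), "arrival", -1))
                choices = [rng.randrange(n) for _ in range(d)]
                s = min(choices, key=lambda i: len(queues[i]))
                queues[s].append(t)
                if len(queues[s]) == 1:             # server was idle: start service
                    heapq.heappush(events, (t + rng.expovariate(1.0), "departure", s))
            else:                                   # departure from server s
                sojourn += t - queues[s].popleft()
                done += 1
                if queues[s]:
                    heapq.heappush(events, (t + rng.expovariate(1.0), "departure", s))
        return sojourn / done

    # Already for d = 2 the average time in system is far smaller than for d = 1.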

Email: mitzen@cs.berkeley.edu
WWW: http://http.cs.Berkeley.edu/~mitzen/


5 March 1996

Gates Building 498. 4:15 p.m.

Sally Goldman (Washington University)

Noise-Tolerant Distribution-Free Learning of General Geometric Concepts

We present an efficient algorithm for PAC-learning a very general class of geometric concepts over $\Re^d$ for fixed $d$. More specifically, let $\cal T$ be a set of $s$ halfspaces. Let $x = (x_1, \ldots, x_d)$ be an arbitrary point in $\Re^d$. With each $t \in {\cal T}$ we associate a boolean indicator function $I_{t}(x)$ which is $1$ if and only if $x$ is in the halfspace $t$. The concept class, ${\cal C}^d_s$, that we study consists of all concepts formed by any boolean function over $I_{t_1}, \ldots, I_{t_s}$ for $t_i \in {\cal T}$ where $\cal T$ is any set of $s$ halfspaces.

The concept class we study here is much more general than any geometric concept class known to be PAC-learnable. As special cases of our main result, we obtain learning algorithms for several new geometric concept classes. While we consider geometric concepts defined by halfspaces, any polyhedron (not necessarily convex) defined by $f$ faces can be formed by combining $f$ halfspaces. Thus our algorithm can also learn any boolean combination of a polynomial number of polyhedra each with a polynomial number of faces. Even further, our results can be easily extended to efficiently learn any boolean combination of a polynomial number of concepts selected from any concept class $\cal C$ over $\Re^d$ given that the VC-dimension of $\cal C$ is polynomial in $d$ (and thus constant for constant $d$), and there is a polynomial time algorithm to determine if there is a concept from $\cal C$ consistent with a given set of labelled examples.

We also present a statistical query version of our algorithm that can tolerate random classification noise for any noise rate strictly less than 1/2. Finally we present a generalization of the standard $\epsilon$-net result of Haussler and Welzl~\cite{HW87} and apply it to give an alternative noise-tolerant algorithm for $d=2$ based on geometric subdivisions.
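
To fix the notation from the first paragraph, here is a tiny Python sketch of the indicator functions and one sample concept from ${\cal C}^d_s$. (The representation of a halfspace as a pair $(w, b)$ meaning $\{x : w \cdot x \le b\}$ is an assumption made here for illustration.)

    import numpy as np

    def indicators(halfspaces, x):
        """I_t(x) for each halfspace t = (w, b), i.e. 1 iff w . x <= b."""
        return [int(np.dot(w, x) <= b) for (w, b) in halfspaces]

    def example_concept(halfspaces, x):
        """A concept from C^d_s: an arbitrary boolean function of the indicators,
        here 'inside the first two halfspaces but outside the third'."""
        i = indicators(halfspaces, x)
        return bool(i[0] and i[1] and not i[2])

    # halfspaces = [(np.array([1.0, 0.0]), 2.0),
    #               (np.array([0.0, 1.0]), 2.0),
    #               (np.array([1.0, 1.0]), 1.0)]
    # example_concept(halfspaces, np.array([0.5, 1.5]))   # -> True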

Joint work with Nader Bshouty, David Mathias, Subhash Suri and Hisao Tamaki.

(Accepted to STOC '96)

Email: sg@cs.wustl.edu
WWW: http://www.cs.wustl.edu/~sg/
Related paper: http://siesta.cs.wustl.edu/~sg/general-geometric.ps.Z


4 April 1996

Gates Building 498. 4:15 p.m.

Miki Ajtai (IBM Almaden)

Generating Hard Problems with Their Solutions

One of the most basic questions of cryptography is how to generate problems (with known solutions) that are difficult to solve. As an illustration, we may think of the problem of identification. If you communicate with somebody, you can send her an equation and say that you know the solution. The next time you communicate, you can send the solution as a proof of your identity. If the equation is really difficult to solve, then nobody will be able to send messages in your name. In this work we describe a method for generating specific instances of problems, together with a known solution, so that with high probability each single generated problem is difficult to solve. The hardness of these problems can be proved from generally accepted assumptions in complexity theory.

The natural assumptions that can be used for such a proof state that a class of problems (e.g. integer factorization) is difficult to solve in the worst case. We cannot use this directly to generate single problems that are difficult to solve. E.g., in spite of the fact that there is no known efficient algorithm that can factor every large number, there are specific large numbers that are easy to factor. There are many other famous problems that are thought to be difficult to solve in the worst case. It was realized long ago that a possible way of generating hard instances of problems would be to find a set of randomly generated problems and to show that if there is an algorithm that finds a solution of a random instance with a nontrivial probability, then there is also an algorithm that solves one of the famous unsolved problems in the worst case. In this work, such a random class is given. The hardness of the problem is reduced to the hardness of famous unsolved worst-case problems related to lattices in the n-dimensional space. Namely, we show that if the random problems are easy to solve with nontrivial probability, then it is also easy to approximate the length of the shortest non-zero vector in a lattice up to a polynomial factor in the worst case. The hardness of this approximation problem is guaranteed by the fact that in spite of the efforts of many great mathematicians (including the works of Dirichlet and Minkowski in the last century), it is still unsolved.

Email: ajtai@almaden.ibm.com


5 April 1996

Gates Building 498. 1:30 p.m.

Andrew Goldberg (NEC)

Negative-Cycle Detection Algorithms

We study the classical problem of finding a negative-length cycle in a network. This problem comes up directly and as a subproblem in algorithms for other important problems.

An algorithm for the negative cycle problem combines a shortest path algorithm and a cycle detection strategy. We study various combinations of shortest path algorithms and cycle detection strategies and find the best combinations.

All shortest path algorithms in our study are based on the labeling method of Ford. We prove the following result about this method: If the input graph contains a negative cycle, then, after a finite number of labeling operations, the distance label of some vertex is less than the length of a shortest simple path in the graph and the graph of parent pointers always contains a cycle. This result is fundamental for many cycle detection strategies, yet it was unknown prior to our paper.

Tarjan proposed a cycle detection strategy that allows immediate detection of cycles in the graph of parent pointers, without asymptotic increase in complexity of underlying shortest path algorithms. Surprisingly, this strategy improves practical performance of shortest path algorithms.

Our previous study showed that an algorithm of Goldberg and Radzik is the best for shortest path problems with negative length arcs. Tarjan's algorithm is competitive with the Goldberg--Radzik algorithm on a wide range of negative cycle and shortest path problems.
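
For readers who want the labeling method and the parent-pointer graph in concrete form, here is a minimal Python sketch of Bellman-Ford-style labeling that detects a negative cycle reachable from the source and recovers it by walking parent pointers. (This is a textbook baseline, not any particular strategy studied in the paper; in particular it does not perform Tarjan's immediate detection.)

    def bellman_ford_negative_cycle(n, edges, s):
        """Labeling (Bellman-Ford) from source s with parent pointers.
        edges: list of (u, v, w) triples. Returns ("cycle", vertex list) if a
        negative cycle is reachable from s, else ("dist", distance list)."""
        INF = float("inf")
        dist, parent = [INF] * n, [None] * n
        dist[s] = 0
        for _ in range(n):                 # one extra pass beyond the usual n - 1:
            x = None                       # a relaxation here implies a negative cycle
            for u, v, w in edges:
                if dist[u] != INF and dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    parent[v] = u
                    x = v
        if x is None:
            return "dist", dist
        for _ in range(n):                 # walk back n steps to land on the cycle
            x = parent[x]
        cycle, v = [x], parent[x]          # trace the cycle in the parent graph
        while v != x:
            cycle.append(v)
            v = parent[v]
        cycle.reverse()
        return "cycle", cycle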

Email: avg@research.nj.nec.com
WWW: http://www.neci.nj.nec.com/homepages/avg.html
Related paper: http://www.neci.nj.nec.com/homepages/avg/pub/neci-tr-96-029.ps


18 April 1996

Gates Building 498. 4:15 p.m.

Ron Fagin (IBM Almaden)

Easier Ways to Win Logical Games

The "computational complexity" of a problem is the amount of resources, such as time or space, required by a machine that solves the problem. The "descriptive complexity" of a problem is the complexity of describing the problem in some logical formalism. There is an intimate connection between the descriptive complexity and the computational complexity. This connection was first discovered by the speaker, who showed that the complexity class NP coincides with the class of properties of finite structures expressible in existential second-order logic (where we are allowed to existentially quantify over not just points, as in first-order logic, but also over relations). The equivalence of questions in computational and descriptive complexity holds the promise that techniques from one domain can be brought to bear on questions in the other domain.

Essentially the only known technique for proving inexpressibility results in descriptive complexity is to make use of games on graphs played between two players. The purpose of this talk is to discuss some results and techniques that assist in the use of these games, and in particular that make the task of proving inexpressibility results easier. Thereby, we can prove new and deeper inexpressibility results. Our hope is that we can develop such a powerful toolkit that we can eventually make a serious assault on such fundamental problems as the question of whether NP = co-NP.

Only a general familiarity with first-order logic will be assumed.

Email: fagin@almaden.ibm.com
Related paper: http://theory.stanford.edu/~aflb/fagin96.ps (preliminary version)


25 April 1996

Gates Building 498. 4:15 p.m.

John Mitchell (Stanford)

Linear Logic, Probabilistic Games and the Complexity of Approximate Proof Search

Linear logic is a refinement of classical and constructive logic that has a "resource sensitive" character. Intuitively, a formula A asserts the presence or availability of some resource. Therefore "A and A" is not logically equivalent to A. In past work, we have shown that fragments of propositional linear logic are undecidable, PSpace-complete and NP-complete. This talk will focus on so-called "multiplicative, additive linear logic", or MALL, which is PSpace-complete. This is a natural complexity class to associate with games.

There are several forms of proof games for linear logic. We will look at a few of them briefly:

1) a solitaire version, in which a single player writes a proof by placing tiles on a grid,

2) a symmetric, two-player version in which both players must have PSpace capability to play optimally,

3) an asymmetric two-player version in which the proponent plays against a randomized adversary and the final score is assigned probabilistically,

4) another asymmetric two-player version in which the proponent plays against a randomized adversary and the final score is a deterministic numeric (as opposed to 0/1) score based on the approximate correctness of the proof fragment.

The characterization of MALL using winning strategies for game (3) is based on the complexity class of "Games Against Nature." Using game (4) and results from the literature on the complexity of approximation, we can derive lower bounds on the complexity of local proof search.

This is joint work with Patrick Lincoln (SRI) and Andre Scedrov (U Penn). No prior knowledge of linear logic is assumed.

Email: mitchell@cs.stanford.edu
WWW: http://theory.stanford.edu/people/jcm/home.html


2 May 1996

Gates Building 498. 4:15 p.m.

Cliff Stein (Dartmouth)

Designing Schedules that Minimize Average Completion Time

We discuss new approaches to the design of approximation algorithms for several NP-hard scheduling problems in which the objective is to minimize the average completion time.

If we schedule a set of $n$ jobs on a set of machines, each job $j$ finishes at some time $C_j$; our goal is to minimize $\sum_j C_j/n$. If all the jobs are available at time $0$ and there are no precedence constraints, this problem is known to be solvable in polynomial time. Once we introduce release dates, or precedence constraints, or add weights to the jobs, the problem becomes NP-hard. Until our recent work no approximation algorithms were known for any (non-trivial) variants of these problems.
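
The polynomially solvable special case mentioned above is easy to make concrete for a single machine: with all jobs available at time 0 and no precedence constraints, sequencing the jobs in nondecreasing processing time (the shortest-processing-time rule) minimizes the average completion time. A minimal Python sketch:

    def spt_average_completion(processing_times):
        """One machine, all jobs released at time 0, no precedence constraints:
        SPT order minimizes the average completion time.
        Returns (order, average completion time)."""
        order = sorted(range(len(processing_times)), key=lambda j: processing_times[j])
        t, total = 0, 0
        for j in order:
            t += processing_times[j]       # completion time C_j of job j
            total += t
        return order, total / len(order)

    # spt_average_completion([3, 1, 2])    # -> ([1, 2, 0], (1 + 3 + 6) / 3)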

We introduce several fairly general techniques for solving problems of this type and show how they lead to small-constant-factor approximation algorithms. Our first technique is to show how to use the solution to a preemptive variant of a scheduling problem to guide the design of a non-preemptive schedule. Second, we show how the solution to linear programming relaxations of these scheduling problems can be used to guide the design of an integral non-preemptive schedule. We then show a fairly general framework for designing schedules on-line, using a subroutine for a makespan-like problem.

Applying these techniques, we get approximation algorithms for many problems including one-machine, parallel machines, unrelated machines and job shop scheduling problems. We can handle release dates and precedence constraints and weights on the jobs.

We also prove that we can modify our algorithms so that they simultaneously approximate the optimal makespan (schedule length) as well; this leads to the following quite general result: for any instance of any (reasonable) scheduling problem, there is a schedule that simultaneously comes within a small constant factor of both the optimal makespan and the optimal average weighted completion time.

We will discuss work from several papers, joint with Soumen Chakrabarti (U.C. Berkeley), Cynthia Phillips (Sandia National Labs), Andreas Schulz (Berlin), David Shmoys (Cornell) and Joel Wein (Polytechnic)

Email: cliff@theory.stanford.edu
WWW: http://www.cs.dartmouth.edu/~cliff


9 May 1996

Gates Building 498. 4:15 p.m.

Dan Spielman (UC-Berkeley and MIT)

Spectral Partitioning Works: Planar graphs and finite element meshes

Spectral partitioning methods use the Fiedler vector---the eigenvector of the second-smallest eigenvalue of the Laplacian matrix---to find a small separator of a graph. These methods are important components of many circuit design and scientific numerical algorithms, and have been demonstrated by experiment to work extremely well. However, the quality of the partition that these methods should produce had eluded precise analysis.

In this talk, we show that the proper application of spectral partitioning works well on bounded-degree planar graphs and finite element meshes---the classes of graphs to which they are usually applied. In particular, we prove that the Fiedler vector can be used to produce separators whose ratio of vertices removed to edges cut is $O(\sqrt{n})$ for bounded-degree planar graphs and two-dimensional meshes and $O(n^{1/d})$ for well-shaped $d$-dimensional meshes. The main ingredient of our analysis is a geometric technique for estimating the second-smallest eigenvalues of the Laplacian matrices of these graphs.
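
For illustration, here is a small Python/NumPy sketch of one common way to turn the Fiedler vector into a cut: sort the vertices by their Fiedler-vector entries and sweep over the threshold cuts, keeping the one with the best ratio of edges cut to the size of the smaller side. (The sweep and the particular ratio used here are choices made for this sketch, not necessarily those analyzed in the talk.)

    import numpy as np

    def fiedler_sweep_cut(adj):
        """adj: symmetric 0/1 NumPy adjacency matrix of a connected graph.
        Returns the two sides of the best threshold cut along the Fiedler order."""
        n = adj.shape[0]
        L = np.diag(adj.sum(axis=1)) - adj        # graph Laplacian
        _, vecs = np.linalg.eigh(L)               # eigenvalues in ascending order
        order = np.argsort(vecs[:, 1])            # sort vertices by Fiedler entries
        in_S, best = np.zeros(n, dtype=bool), (np.inf, 0)
        for k in range(n - 1):
            in_S[order[k]] = True
            cut = adj[in_S][:, ~in_S].sum()       # edges crossing the threshold cut
            best = min(best, (cut / min(k + 1, n - k - 1), k))
        k = best[1]
        return list(order[:k + 1]), list(order[k + 1:])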

We will also explain why naive applications of spectral partitioning, such as spectral bisection, will fail miserably on some graphs that could arise in practice.

This is joint work with Shang-Hua Teng of the University of Minnesota

Email: spielman@conga.cs.berkeley.edu
WWW: http://theory.lcs.mit.edu/~spielman
Related paper: http://http.cs.berkeley.edu/~spielman/spect.html


16 May 1996

Gates Building 498. 4:15 p.m.

Samir Khuller (Maryland)

Approximation Algorithms for Facility Location Problems

Facility location problems arise in a variety of contexts: the location of emergency facilities, supply depots, information servers in a network, and so on. The most basic facility location problem is the K-center problem, which asks for a placement of K centers in a graph so that each vertex is guaranteed to have a center close to it. Specifically, we wish to minimize the maximum distance from a vertex to its closest center. This problem is NP-hard, and several optimal approximation algorithms are known for it. We study generalizations of this problem to the "capacitated" case, where there are additional constraints on the number of vertices that can be assigned to a center. We also study extensions to the case when multiple centers are required to be close to each vertex. We present constant factor approximation algorithms in both cases.
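
As background, the classical 2-approximation for the basic (uncapacitated) K-center problem is farthest-point clustering: repeatedly add the point farthest from the current centers. A minimal Python sketch for points in Euclidean space (the metric and the tie-breaking are choices made here; the capacitated algorithms of the talk are more involved):

    import math, random

    def k_center_farthest_point(points, k, seed=0):
        """Farthest-point clustering: a 2-approximation for K-center.
        Assumes k <= len(points). Returns center indices and covering radius."""
        rng = random.Random(seed)
        centers = [rng.randrange(len(points))]
        dist = [math.dist(p, points[centers[0]]) for p in points]
        while len(centers) < k:
            far = max(range(len(points)), key=lambda i: dist[i])
            centers.append(far)                   # the point farthest from all centers
            dist = [min(dist[i], math.dist(points[i], points[far]))
                    for i in range(len(points))]
        return centers, max(dist)                 # max distance to the nearest center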

(Parts of this are joint work with Robert Pless and Yoram Sussmann.)

Email: samir@cs.UMD.EDU
WWW: http://www.cs.umd.edu/~samir


23 May 1996

Gates Building 498. 4:15 p.m.

Julien Basch (Stanford)

Reporting Red-Blue Intersections between Connected Sets of Line Segments

We study the following problem: the "Blues", a city gang, controls a connected network of streets. The "Reds" similarly have their own connected network. There are some "purple" intersections between the two networks, and the goal of the city police is to find all these points efficiently to avoid possible rumbles.

Under this assumption of connectedness, we propose an algorithm that reports all purple intersections in time O((N+K) \log^3 N \alpha(N)), where N is the number of streets, K is the number of purple intersections, and \alpha(N) is the inverse of Ackermann's function. The technique involved is a new type of line sweep that keeps a sequence of partial orders in randomized heaps.

This problem was extensively studied in the case where there are no red-red or blue-blue intersections: a number of solutions have been provided that give an optimal O(N log N + K) running time. In the general case, things are harder: the problem can be solved in time O(N^{4/3} log^{O(1)} N + K), using range searching techniques, and this is likely to be a lower bound. The connectedness hypothesis was taken advantage of by Agarwal and Sharir, who gave an O(N log^2 N \alpha(N)) algorithm to detect one single purple intersection.

This work may have applications for the L.A.P.D.
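
For comparison, the completely naive baseline (not from the talk) simply tests every red-blue pair of segments with the standard orientation test, in O(N^2) time plus output; a Python sketch:

    def orient(a, b, c):
        """Sign of the cross product (b - a) x (c - a)."""
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)

    def on_segment(a, b, c):
        """Assuming c is collinear with segment ab, is c between a and b?"""
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

    def segments_intersect(p, q, r, s):
        """Do segments pq and rs intersect (properly or at an endpoint)?"""
        d1, d2 = orient(r, s, p), orient(r, s, q)
        d3, d4 = orient(p, q, r), orient(p, q, s)
        if d1 * d2 < 0 and d3 * d4 < 0:
            return True
        return ((d1 == 0 and on_segment(r, s, p)) or (d2 == 0 and on_segment(r, s, q)) or
                (d3 == 0 and on_segment(p, q, r)) or (d4 == 0 and on_segment(p, q, s)))

    def purple_pairs_naive(red, blue):
        """All (i, j) such that red segment i intersects blue segment j."""
        return [(i, j) for i, (p, q) in enumerate(red)
                       for j, (r, s) in enumerate(blue)
                       if segments_intersect(p, q, r, s)]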

This is joint work with Leo Guibas and G.D. Ramkumar.

Email: jbasch@cs.stanford.edu
WWW: http://robotics.stanford.edu/~jbasch/
Related paper: http://robotics.stanford.edu/~jbasch/files/red-blue.ps


30 May 1996

Gates Building 498. 2:30 p.m.

Stavros Kolliopoulos (Dartmouth)

Finding Real-Valued Single-Source Shortest Paths in $o(n^3)$ Expected Time.

Given an $n$-vertex directed network $G$ with real costs on the edges and a designated source vertex $s,$ we give a new algorithm to compute shortest paths from $s$. Our algorithm is a simple deterministic one with $O(n^2 \log n)$ expected running time over a large class of input distributions.

The shortest path problem is an old and fundamental problem with a host of applications. Our algorithm is the first strongly-polynomial algorithm in over 35 years to improve upon some aspect of the running time of the celebrated Bellman-Ford algorithm for arbitrary networks, with any type of cost assignments.

Joint work with Cliff Stein

Email: stavros@cs.dartmouth.edu
WWW: http://www.cs.dartmouth.edu/~stavros/
Related paper: to appear in Proc. of IPCO '96


6 June 1996

Gates Building 498. 4:15 p.m.

Leszek Gasieniec (Max Planck Institut fur Informatik, Saarbrucken)

From Parallel to Small Space String Matching

Given two words, a text T of length n and a pattern P of length m, the string matching problem is to find all occurrences of the pattern P in the text T. We show how techniques designed for fast parallel string matching, such as duels by witnesses and deterministic sampling, can be used efficiently for string matching with only a small amount of additional memory. First we show how the parallel method of expensive duels is transformed into the sequential zooming technique used in a `constant'-space string matching algorithm that makes $(2+\varepsilon)n$ comparisons. Then we present an application of deterministic sampling in a true constant-space string matching algorithm that makes $(1+\varepsilon)n$ comparisons.

The presented results are based on joint work with Wojciech Plandowski and Wojciech Rytter of Warsaw University.

Email: leszek@mpi-sb.mpg.de
WWW: http://www.mpi-sb.mpg.de/~leszek
Related paper: ftp://zaa.mimuw.edu.pl/pub/users/lechu/znew.ps.gz
Related paper: ftp://zaa.mimuw.edu.pl/pub/users/lechu/cpm95.ps.gz


22 August 1996

Gates Building 498. 4:15 p.m.

David Karger (MIT)

Random Sampling in Cut Problems

We will discuss several uses of random sampling for solving problems involving cuts and flows in undirected graphs. Given a large input problem, random sampling can be used to build a "representative subproblem" that can be solved quickly because it is small. The solution to the subproblem gives us a great deal of information that can be used to approximately or exactly solve the original problem. We show that undirected graphs are "well behaved" under random sampling experiments. In particular, the values of all cuts in the sampled graph are tightly concentrated near their expectations. We will discuss a few applications of this principle; depending on time and interest, we may also discuss further applications.
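
The concentration statement is easy to see in action for a single cut; the sketch below (an illustration written for this page, with arbitrary parameters) samples each edge of a random multigraph independently with probability p and compares the sampled cut value, rescaled by 1/p, with the true value. The talk's results are much stronger: all cuts are concentrated simultaneously.

    import random

    def cut_value(edges, side):
        """Number of edges crossing the bipartition given by the boolean list side."""
        return sum(1 for u, v in edges if side[u] != side[v])

    def sampling_demo(n=200, m=4000, p=0.25, seed=0):
        """Compare one cut's value in the full graph and in a p-sample (rescaled)."""
        rng = random.Random(seed)
        edges = [(u, v) for u, v in ((rng.randrange(n), rng.randrange(n))
                                     for _ in range(m)) if u != v]
        side = [rng.random() < 0.5 for _ in range(n)]       # an arbitrary fixed cut
        sample = [e for e in edges if rng.random() < p]
        return cut_value(edges, side), cut_value(sample, side) / p

    # sampling_demo() returns two numbers that are typically within a few percent.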

Email: karger@.mit.edu
WWW: http://theory.lcs.mit.edu/~karger
Related paper: http://theory.lcs.mit.edu/~karger/Papers/lincut.ps.gz
Related paper: http://theory.lcs.mit.edu/~karger/Papers/stcut.ps.gz