Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.
In the present talk I give a simple answer to Steiner's question, and give a sketch of some of the techniques, insights and consequences.
We define a simple algebraic notion of ``structured'' error and show how to reconstruct algebraic functions in situations where the oracle errs (in a structured manner) on all but a small fraction of the inputs.
The definition of structured error is quite restrictive, but we justify the notion by presenting applications to a widely varying set of areas: curve fitting, bivariate polynomial factoring, learning, and correcting programs.
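The talk's setting of reconstructing algebraic functions from an erring oracle is related to classical decoding of polynomial evaluations. As a concrete (and much simpler) illustration, here is a sketch of Berlekamp--Welch decoding over a small prime field: it recovers a degree-&lt;k polynomial from n evaluations of which at most e are wrong, provided n &ge; k + 2e. This is a standard technique, not the construction from the talk; the field size 97 and all parameters are illustrative choices.

```python
P = 97  # a small prime, purely for illustration

def solve_mod(A, b, p):
    """Gaussian elimination mod p; returns one solution x of A x = b."""
    n, m = len(A), len(A[0])
    M = [list(row) + [bi % p] for row, bi in zip(A, b)]
    where = [-1] * m        # pivot row for each column, -1 if free
    r = 0
    for c in range(m):
        piv = next((i for i in range(r, n) if M[i][c] % p != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(n):
            if i != r and M[i][c] % p != 0:
                f = M[i][c]
                M[i] = [(v - f * w) % p for v, w in zip(M[i], M[r])]
        where[c] = r
        r += 1
        if r == n:
            break
    # free variables are set to 0
    return [M[where[c]][m] if where[c] >= 0 else 0 for c in range(m)]

def poly_divide(Q, E, p):
    """Exact division of Q by a monic E over GF(p); coeffs low -> high."""
    Q = Q[:]
    d = len(E) - 1
    quot = [0] * (len(Q) - d)
    for i in range(len(Q) - 1, d - 1, -1):
        c = Q[i] % p
        quot[i - d] = c
        for j in range(d + 1):
            Q[i - d + j] = (Q[i - d + j] - c * E[j]) % p
    return quot

def decode(points, k, e, p=P):
    """Recover a degree-<k polynomial from points with at most e errors.

    Unknowns: coefficients q_0..q_{k+e-1} of Q and e_0..e_{e-1} of a monic
    degree-e error locator E.  Each point (x, y) imposes Q(x) = y * E(x);
    any solution satisfies Q = f * E, so f = Q / E.
    """
    A, b = [], []
    for x, y in points:
        row = [pow(x, j, p) for j in range(k + e)]
        row += [(-y * pow(x, j, p)) % p for j in range(e)]
        A.append(row)
        b.append((y * pow(x, e, p)) % p)
    sol = solve_mod(A, b, p)
    Q = sol[:k + e]
    E = sol[k + e:] + [1]       # force E monic of degree e
    return poly_divide(Q, E, p)
```

For example, evaluating f(x) = 1 + 2x + 3x^2 at x = 1..7 and corrupting two of the seven values still allows exact recovery of f.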
This is joint work with Sigal Ar, Richard Lipton and Ronitt Rubinfeld.
Concurrent timestamping is an extremely powerful tool for concurrency control.
The first to isolate the notion of bounded timestamping were Israeli and Li, who also developed a theory of the weaker problem: sequential timestamping. (That is, timestamping in a world where no two operations are ever concurrent.) Later, Dolev and Shavit defined and constructed the first bounded concurrent timestamping system. Their solution, however, is very complex.
Hence, the contributions of this paper are twofold. First, our solution is based on an entirely new approach, very different from that of Dolev and Shavit as well as that of Israeli and Li, yielding a much simpler construction and proof of correctness. Second, our solution is more efficient than that of Dolev and Shavit; in particular, its time complexity is linear, in contrast to their more-than-quadratic one. Finally, to implement our approach using bounded registers, we introduce a new communication primitive which we believe will find application in other contexts in which boundedness is desired.
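To fix intuition for what a timestamping system provides, here is a toy *unbounded* version: each labeling operation grabs a fresh integer, and a scan orders processes by label. The whole point of the bounded constructions (Israeli--Li, Dolev--Shavit, and the one in this talk) is to achieve the same ordering semantics with labels drawn from a finite domain and without locks; the lock below merely stands in for atomic register operations in this sketch.

```python
import itertools
import threading

class UnboundedTimestamps:
    """Toy unbounded timestamping object for n processes.

    label(i): process i takes a fresh, ever-growing integer label.
    scan():   returns process ids ordered by their labels (oldest first).
    """

    def __init__(self, n):
        self.labels = [0] * n               # 0 = "never labeled"
        self.counter = itertools.count(1)   # unbounded label supply
        self.lock = threading.Lock()        # stand-in for atomic registers

    def label(self, i):
        with self.lock:
            self.labels[i] = next(self.counter)

    def scan(self):
        with self.lock:
            return sorted(range(len(self.labels)),
                          key=lambda i: self.labels[i])
```

Running `label` on processes 2 and then 0 makes a scan report process 1 (never labeled), then 2, then 0.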
This is joint work with Cynthia Dwork from IBM Almaden.
This is joint work with J. Naor and R. Rom.
This talk will first relay a few of my (possibly controversial) observations:
1: We really can get to 1 TeraFLOPS (a million MFLOPS)! And soon!
2: Early guesses at MPP issues were wrong.
3: Both MPP programming models and MPP hardware models are converging.
4: Current theoretical models may be too inaccurate to expect them to lead to important contributions in MPP.
5: Computer science has an obligation as well as an opportunity in MPP.
I will then mention the need for a better theoretical model for MP algorithms. Such a model must emphasize the importance of algorithms that localize memory accesses. (Perhaps we can derive an appropriate model via audience participation!)
Joint work with Emo Welzl and Jirka Matousek.
Host: Leonidas Guibas
Apparently there is no known efficient method to compute the number of Eulerian orientations; we justify this lack by showing that the problem is #P-complete. On the positive side, we give an efficient randomized algorithm to approximate the number of Eulerian orientations up to arbitrarily small relative error.
Technically, the new ingredient in our work is a decomposition method for Eulerian graphs; we shall discuss the potential applicability of this method to further problems.
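For very small graphs, the quantity in question can be computed by brute force, which makes the definition concrete (and shows why exact counting quickly becomes infeasible, consistent with the #P-completeness result). The following sketch enumerates all 2^m orientations of an even-degree multigraph and counts those in which every vertex has equal in-degree and out-degree.

```python
from itertools import product

def eulerian_orientations(edges, n):
    """Count Eulerian orientations of a multigraph on vertices 0..n-1.

    edges: list of (u, v) pairs.  An orientation is Eulerian when every
    vertex has in-degree equal to out-degree.  Exponential in len(edges);
    for illustration only.
    """
    count = 0
    for bits in product([0, 1], repeat=len(edges)):
        balance = [0] * n               # out-degree minus in-degree
        for (u, v), b in zip(edges, bits):
            if b:
                u, v = v, u             # flip this edge's direction
            balance[u] += 1
            balance[v] -= 1
        if all(d == 0 for d in balance):
            count += 1
    return count
```

A triangle and a 4-cycle each have exactly two Eulerian orientations (the two directed cycles).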
This is joint work with Peter Winkler.
Joint work with Sanjeev Arora (UC Berkeley)
We derive linear and integer programming lower bounds on the total size of memory required for any network $G$ and file size $k$. We present a scheme that is within $1+o(1)$ of these bounds when $k$ is larger than $\log \Delta$, where $\Delta$ is the largest degree in the network. The memory allocation scheme is constructive and the reconstruction method is efficient.
Finally, we show that the requirement for $k$ being much larger than $\log \Delta_G$ is necessary in order to have total memory size close to the integer programming lower bound.
Joint work with Ronny Roth.
(1) restriction fingerprinting, in which an enzyme called a restriction nuclease cleaves the fragment wherever a particular short sequence of nucleotides occurs, and the lengths of the resulting pieces are measured;
(2) hybridization fingerprinting, in which each fragment is tested to determine which of certain short nucleotide sequences called probes it contains.
An important combinatorial problem is to determine, from such fingerprint information, the most probable arrangement of the cloned fragments along the overall sequence. Such a physical map can then be used for tasks of biological or medical significance, such as determining the locations of particular genes within a chromosome.
The most common approach to physical mapping is to compute, for each pair of fragments, a weight (either positive or negative) related to the probability that the two fragments overlap, and then construct an interval graph of maximum total weight. We argue that this approach makes insufficient use of fingerprint information. Instead, we cast the problem as a variant of the traveling-salesman problem in which the cost of travel between two cities depends on a small neighborhood around the cities within the overall tour. We then discuss some of the issues of algorithm design that arise in constructing efficient local optimization methods for this generalized traveling-salesman problem.
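As a toy illustration of the pairwise-weight step (not the scoring actually used in the work, which would be a likelihood-based weight), one can score a pair of clones from their hybridization fingerprints by rewarding shared probes and penalizing probes present in only one clone; the `match` and `mismatch` parameters here are hypothetical.

```python
def overlap_weight(fp1, fp2, match=1.0, mismatch=-0.5):
    """Toy overlap score for two hybridization fingerprints (sets of
    probe names): shared probes suggest overlap, unshared probes argue
    against it.  Purely illustrative shape, not a calibrated weight."""
    shared = len(fp1 & fp2)          # probes hit by both clones
    differ = len(fp1 ^ fp2)          # probes hit by exactly one clone
    return match * shared + mismatch * differ
```

For instance, clones with probe sets {A, B, C} and {B, C, D} share two probes and differ on two, giving a score of 1.0 under these toy parameters.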
No prior knowledge of molecular biology will be needed to understand this talk.
We establish strong expansion properties for the bases-exchange graph of balanced matroids; consequently, the set of bases of a balanced matroid can be sampled and approximately counted using rapidly mixing Markov chains. Specific classes for which balance is known to hold include graphic (whose bases are spanning trees of a graph) and regular matroids.
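In the graphic-matroid case, the bases-exchange walk is a random walk on spanning trees. The sketch below implements one exchange move: delete a uniformly random tree edge and replace it with a uniformly random graph edge crossing the resulting cut. This conveys the kind of chain being analyzed; the exact transition probabilities of the chain in the paper may differ, and the rapid-mixing guarantee is the talk's result, not something this sketch establishes.

```python
import random

def exchange_step(tree, edges, rng=random):
    """One bases-exchange-style move on spanning trees.

    tree:  set of (u, v) tuples forming a spanning tree.
    edges: list of all graph edges as (u, v) tuples.
    Removing a tree edge splits the tree into two components; any edge
    crossing that cut (including the removed one) restores a spanning tree.
    """
    e = rng.choice(sorted(tree))
    rest = tree - {e}
    # Find the component of e[0] in the broken tree by DFS.
    adj = {}
    for u, v in rest:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    comp, stack = {e[0]}, [e[0]]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in comp:
                comp.add(v)
                stack.append(v)
    crossing = [f for f in edges if (f[0] in comp) != (f[1] in comp)]
    return rest | {rng.choice(crossing)}
```

Every step provably yields a spanning tree again, so iterating the move walks on the set of bases.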
Joint work with Milena Mihail, Bellcore.
(1) Differences between the model theory of finite structures and infinite structures. Most of the classical theorems of logic fail for finite structures, which challenges us to develop new concepts and tools appropriate for finite structures.
(2) The relationship between finite-model theory and complexity theory. Surprisingly enough, it turns out that in some cases, we can characterize complexity classes (such as NP) in terms of logic, without using any notion of machine, computation, or time.
(3) Zero-one laws. There is a remarkable phenomenon, which says that certain properties (such as those expressible in first-order logic) are either almost surely true or almost surely false.
(4) Descriptive complexity. Here we consider how complex a formula must be to express a given property.
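The zero-one phenomenon in item (3) is easy to observe empirically. The first-order property "there exist three mutually adjacent vertices" (containing a triangle) is almost surely true in a random graph: the sketch below samples G(n, p) and tests it, and already at n = 30 every sample contains a triangle with overwhelming probability.

```python
import random

def has_triangle(n, p, rng=random):
    """Sample an Erdos-Renyi random graph G(n, p) and test the
    first-order property 'contains a triangle'."""
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = rng.random() < p
    return any(adj[i][j] and adj[j][k] and adj[i][k]
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))
```

For n = 30 and p = 1/2, the probability of sampling a triangle-free graph is astronomically small, illustrating the "almost surely true" side of the law.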
In recent years, there has been a re-awakening of interest in finite-model theory. One goal of this talk is to help "fan the flames" of interest, by introducing the audience to this fascinating area.
Virtually all theoretical work on message routing in parallel computers has dwelt on {\em packet routing:} messages are conveyed as packets, an entire packet can reside at a node of the network, and a packet is sent from the queue of one node to the queue of another node until it reaches its destination. The current trend in multicomputer architecture, however, is to use {\em wormhole routing}. In wormhole routing a message is transmitted as a continuous stream of bits, physically occupying a sequence of nodes/edges in the network. Thus, a message resembles a worm burrowing through the network. In this paper we give theoretical analyses of simple wormhole routing algorithms, showing them to be nearly optimal for butterfly and mesh-connected networks.
(Joint work with Prabhakar Raghavan, IBM Watson Research Center.)
This is joint work with Nati Linial, Anna Karlin, Andrei Broder, and Yossi Azar.
If time permits, I will draw some parallels with the new result on finding connected components using $O(\log^{1.5}n)$ space.
Joint work with Noam Nisan and Michal Parnas from the Hebrew University of Jerusalem.
We point out that in several well-known instances in mathematics the existence of a mathematical object is established by a constructive argument based on an exponentially large graph. These include Brouwer's Fix-point Theorem, the Borsuk-Ulam Theorem, the Arrow-Debreu theory in mathematical economics, Chevalley's Theorem in number theory, the convergence of Hopfield neural nets, Smith's Theorem in graph theory, and many others. We show that this phenomenon can be captured by complexity classes, containing several important complete problems.
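The flavor of these existence arguments can be seen in the "end of a path" parity principle: in a graph of in- and out-degree at most one, a known endpoint of a path guarantees another endpoint, but when the graph is given implicitly (and may be exponentially large), the obvious algorithm just walks the path, perhaps for exponentially many steps. The following minimal sketch is an illustration of that principle, not a construction from the talk.

```python
def other_end(succ, source):
    """Given a successor function of an implicit path (out-degree <= 1),
    follow the path from a known source to the other endpoint, whose
    existence is guaranteed by a parity argument."""
    u = source
    while u in succ:
        u = succ[u]
    return u
```

The hardness captured by the complexity classes mentioned above lies exactly in the gap between this guaranteed existence and the cost of the walk when `succ` encodes an exponentially long path.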
COMMENT: Of interest to Theoreticians and Mathematicians.
We define {\em the balance of a cut} as the ratio of the weight of the smaller side of the cut to the total weight in the graph. Thus, the best possible balance is 1/2. We define {\em the quotient cost} of a cut as the ratio of the cost of the cut to the size of the smaller side of the cut. Thus, a costly cut with a high balance may have lower quotient cost than a less costly cut with a lower balance. Finally, we define the {\em flux} of a graph as the minimum quotient cost of any cut in the graph.
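The definitions above can be made concrete by brute force on a tiny graph: enumerate all cuts and take the minimum quotient cost, which is the flux. This sketch is exponential and purely illustrative (here "size of the smaller side" is taken as its total vertex weight).

```python
def flux(n, edge_cost, weight):
    """Brute-force flux of a small weighted graph.

    n:         number of vertices, labeled 0..n-1.
    edge_cost: dict mapping (u, v) -> cost of cutting that edge.
    weight:    list of vertex weights.
    Returns the minimum over all cuts of cost(cut) / weight(smaller side).
    """
    total = sum(weight)
    best = None
    for mask in range(1, 2 ** n - 1):       # every nontrivial cut (S, V-S)
        S = {v for v in range(n) if mask >> v & 1}
        cut = sum(c for (u, v), c in edge_cost.items()
                  if (u in S) != (v in S))
        wS = sum(weight[v] for v in S)
        q = cut / min(wS, total - wS)
        best = q if best is None else min(best, q)
    return best
```

On the unit-weight path 0-1-2-3, cutting the middle edge yields quotient cost 1/2, which is also the flux (and has the best possible balance of 1/2).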
We are interested in finding small cuts with high balance: either finding the cut that minimizes the quotient cost of the cut, or finding the smallest cut of some specified balance. (For balance $1/3$, this problem has been called the minimum separator problem.)
We present a very simple algorithm for finding $1.5$ times optimal quotient node or edge cuts in planar graphs in time $O(n^2\min(W,C))$, where $W$ is the total weight of the graph, and $C$ is the total cost of the graph. In fact, these cuts are only nonoptimal when they have very good balance. Thus, the algorithms either find an optimal quotient cut or a nearly optimal quotient cut with very high balance.
We can greatly improve the running time if we settle for finding $3.5$ times optimal quotient node and edge cuts. We can accomplish this in $O(n^2(\log W + \log C))$ time. And, if we are willing to settle for $O(k)$ times optimal solutions, we can make the running time quite fast; $O(n^{1+1/k} \log n (\log W + \log C) \log C).$ If $k = \log n$, and $W$ and $C$ are polynomial in $n$, this becomes $O(n \log^3 n)$.
Finally, we give approximation algorithms for finding nearly optimal {\em balanced} node or edge separators in planar graphs, based on the quotient cut algorithms.
These algorithms improve upon previous algorithms by being much faster, much simpler, and by applying to node separation as well as edge separation.
To illustrate our new techniques, we present an $O(\log n)$ time algorithm for a new class of partial permutation routing problems on the hypercube which can be defined via matching pairs of parentheses. We also give an $O(\log n)$ time algorithm for the algebraic tree evaluation problem.
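The communication pattern behind such parentheses-defined routing problems pairs each open parenthesis with its matching close. Sequentially this matching is trivial with a stack, as sketched below; the content of the talk is doing the corresponding routing in $O(\log n)$ time on the hypercube, which this sketch does not attempt.

```python
def match_parens(s):
    """Match parenthesis pairs in a balanced string.

    Returns a dict pairing each position with its partner, i.e. the
    (partial) permutation that a parentheses-structured routing problem
    would realize.
    """
    stack, match = [], {}
    for i, c in enumerate(s):
        if c == '(':
            stack.append(i)
        else:                       # ')': pair with most recent open
            j = stack.pop()
            match[j], match[i] = i, j
    return match
```

For "(()())" the outermost pair is (0, 5) and the inner pairs are (1, 2) and (3, 4).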
This is joint work with R. Werchner, J.W. Goethe-University.
Our implementation works very well on many classes of problems. We also exhibit a class of problems on which it does not work so well, and suggest directions for improvement.
Some of the heuristics we suggest may apply to other algorithms for the maximum and minimum-cost flow problems.
First, we show that the standard technique of ordering the queues so that every packet always has the possibility of moving to a higher ordered queue is not necessary for deadlock-freedom. Second, we show that every deadlock-free, adaptive packet routing algorithm can be restricted, by limiting the adaptivity available, to obtain an oblivious algorithm which is also deadlock-free. Third, we show that any packet routing algorithm for a cycle or torus network which is free of deadlock and which uses only minimal length paths must require at least three queues in some node. This matches the known upper bound of three queues per node for deadlock-free, minimal packet routing on cycle and torus networks.
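The queue-ordering technique mentioned first is a way of certifying that the "waits-for" relation among queues is acyclic, which suffices for deadlock-freedom. As a small illustration (not taken from the paper), one can check a queue-dependency digraph for cycles directly:

```python
def has_cycle(dep):
    """Detect a cycle in a queue-dependency digraph.

    dep maps each queue to the list of queues its packets may wait on.
    An acyclic dependency graph certifies deadlock-freedom; the talk's
    first result is that acyclicity via a global queue ordering is
    sufficient but not necessary.
    """
    color = {}                      # 0/absent: unseen, 1: on stack, 2: done
    def dfs(u):
        color[u] = 1
        for v in dep.get(u, []):
            c = color.get(v, 0)
            if c == 1:              # back edge: cycle found
                return True
            if c == 0 and dfs(v):
                return True
        color[u] = 2
        return False
    return any(dfs(u) for u in list(dep) if color.get(u, 0) == 0)
```

A three-queue cyclic wait (0 waits on 1, 1 on 2, 2 on 0) is flagged, while the same chain without the closing edge is not.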
This is joint work with Luis Gravano from IBM Almaden and IBM Argentina.
Cohen and Megiddo, and Norton, Plotkin and Tardos obtained a similar result for the case where F is piecewise linear and the separation and comparison polynomials are all linear. We develop several new techniques to cope with the general non-linear case. Using our method we give the first strongly polynomial algorithms for many non-linear parametric problems in fixed dimension, such as the parametric max flow problem, the parametric minimum s-t distance, the parametric spanning tree problem and other problems.
In addition we show that in one dimension, the same result holds even if we only know how to approximate the value of F. Specifically, if we can obtain an alpha-approximation for F(x) in time T, then we can alpha-approximate the value of max F in time O(T^2). We thus obtain the first polynomial approximation algorithms for many NP-hard problems such as the parametric Euclidean Traveling Salesman Problem.
Joint work with Endre Szemeredi and Avi Wigderson.
The problem is that of devising algorithms that minimize not only the travel of the servers but also the communication cost incurred for transmission of control messages. Thus, in addition to the traditional lack of knowledge of future requests that a global-control online algorithm has to deal with, a distributed $k$-server algorithm must also deal with issues of locality. As a result, the algorithm has only partial information on past requests, and even on the current configuration of all its servers.
The main contribution of this paper is a general translator to transform any deterministic global-control competitive $k$-server algorithm into a distributed competitive one.
In contrast to the global-control case, where $k$-server algorithms whose competitive ratio depends solely on $k$ are known, we show a lower bound of $\Omega(\max\{k,\log n /\log\log n\})$ on the competitive ratio of any distributed $k$-server algorithm.
We also give a distributed version of the Harmonic randomized $k$-server algorithm, which currently has the best proved competitive ratio. Given that the original Harmonic is $c_H$-competitive, the competitive ratio of the distributed version can be stated as $O(c_H\cdot k \cdot \polylog(n))$ for the cases where the diameter of the network is bounded by a fixed polynomial in $n$.
Joint work with Yair Bartal.