Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.
Joint work with Robert Tarjan (Princeton and Bell Labs).
We show that, while there is a polynomial time approximation scheme to estimate the number of clusters needed for a fixed cluster size, one cannot approximate the optimal cluster size for a fixed number of clusters within any factor smaller than 2 in polynomial time, unless P=NP. We present an algorithm which achieves this factor of 2 in time O(n log k), and show that this running time is optimal in the algebraic decision tree model. Our approach can be extended to provide optimal O(n log k) approximation algorithms for the related k-centers, k-suppliers, and weighted k-suppliers problems.
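The abstract does not spell out the algorithm, so here is a hedged sketch: the classical greedy farthest-point method of Gonzalez achieves the same factor of 2, shown below in its simple O(nk) form (the function name, the random starting center, and the Euclidean metric are our choices; the O(n log k) bound of the talk requires a more careful implementation).

    import math, random

    def farthest_point_clustering(points, k):
        # Greedy 2-approximation for the k-center objective: repeatedly
        # promote the point farthest from the current centers.
        centers = [random.choice(points)]
        dist = [math.dist(p, centers[0]) for p in points]   # distance to nearest center
        while len(centers) < k:
            i = max(range(len(points)), key=dist.__getitem__)
            centers.append(points[i])
            dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
        return centers, max(dist)   # max(dist) is the achieved cluster radius

    points = [(random.random(), random.random()) for _ in range(200)]
    centers, radius = farthest_point_clustering(points, 5)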
This talk represents joint work with Daniel Greene at Xerox PARC.
To date, almost all research in interactive proof systems has dealt with the case that the verifier is a probabilistic Turing machine (ptm) which runs in polynomial time. Several classes of interactive proof systems have been defined, for example, (1) the verifier may employ either public coins or private coins, defined according to whether the prover can see the outcomes of the random choices made by the verifier; (2) the IPS may or may not be zero-knowledge, meaning, roughly speaking, that no verifier can extract any information from the interactive proof that x is in L except for the fact that x is in L. Many previous results for ptm verifiers depend on unproved assumptions.
By restricting attention to verifiers which are 2-way probabilistic finite state automata (2pfa's), we obtain results about IPS's without unproved assumptions. This talk will focus on three results: (1) private coins are more powerful than public coins; (2) IPS's with public coins are more powerful than 2pfa's alone; (3) there is a language with an IPS but with no zero-knowledge IPS.
(This is joint work with Cynthia Dwork who will speak at the Nov. 12 BATS on this work. Different material will be emphasized in the two talks.)
In 1973, Aanderaa and Rosenberg conjectured that any property which is a graph or digraph property (invariant under graph isomorphism) and is monotone (not destroyed by the deletion of edges) and nontrivial (holds for some but not all graphs) has complexity at least c n^2 where n is the number of nodes and c is a constant. This conjecture was proved by Rivest and Vuillemin who found a constant of 1/16.
In 1984, Kahn, Saks, and Sturtevant discovered that algebraic topology could be used to prove evasiveness when n is a prime power. A consequence of this was a lower bound of almost 1/4 n^2 for graph and digraph properties and general n. Recently, their technique was used to prove evasiveness for all monotone, nontrivial bipartite graph properties (Yao, 1987) and a further improvement on the Aanderaa-Rosenberg constant, to almost 1/2 n^2 for all monotone nontrivial digraph properties (King, 1987).
This talk will describe the method developed by KSS and its new applications.
Our techniques involve the new notion of the "block sensitivity" of a function, a generalization of the critical sensitivity measure. They apply to a completely different model of computation as well: the CREW PRAM. We show that the block sensitivity completely characterizes the complexity in the CREW PRAM model (with an unbounded number of processors). It follows that the parallel time needed to compute f on such a PRAM is equal (up to a constant factor) to the logarithm of the boolean decision tree complexity of f.
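As a hedged illustration (ours, not the speakers'): the block sensitivity of f at an input x is the largest number of pairwise disjoint blocks of input positions such that flipping any one whole block changes f(x). A brute-force sketch, sensible only for tiny n:

    from itertools import combinations

    def block_sensitivity_at(f, x):
        # Enumerate every sensitive block (a set of positions whose wholesale
        # flip changes the output), then search for the largest pairwise-
        # disjoint family.  Exponential time; for illustration only.
        n = len(x)
        blocks = []
        for r in range(1, n + 1):
            for block in combinations(range(n), r):
                y = list(x)
                for i in block:
                    y[i] ^= 1
                if f(tuple(y)) != f(x):
                    blocks.append(frozenset(block))
        def best(avail, used):
            top = 0
            for i, b in enumerate(avail):
                if not (b & used):
                    top = max(top, 1 + best(avail[i + 1:], used | b))
            return top
        return best(blocks, frozenset())

    f_or = lambda bits: int(any(bits))
    print(block_sensitivity_at(f_or, (0, 0, 0)))   # three disjoint singleton blocks -> 3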
The roadmap algorithm is in effect a decision algorithm for the first order existential theory of real numbers. It makes use of results in singularity theory, in particular the notion of stratified sets, which provide a very natural and efficient way to represent geometric objects. It also uses a new (actually very old but obscure) algebraic tool called the multivariate resultant. This latter tool gives a fast parallel method for solving systems of equations, and should give improvements in algorithms for inverse kinematics, CSG modeling, and geometric theorem proving.
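As a hedged aside (ours): the classical univariate special case of the resultant is computed from the Sylvester matrix, and already shows the tool's character; the multivariate resultant used in the talk is considerably more involved.

    import numpy as np

    def sylvester_resultant(p, q):
        # Resultant of two univariate polynomials (coefficients listed from
        # the highest degree down); it vanishes iff p and q share a common root.
        m, n = len(p) - 1, len(q) - 1
        S = np.zeros((m + n, m + n))
        for i in range(n):                 # n shifted copies of p's coefficients
            S[i, i:i + m + 1] = p
        for i in range(m):                 # m shifted copies of q's coefficients
            S[n + i, i:i + n + 1] = q
        return np.linalg.det(S)

    # (x-1)(x-2) and (x-1)(x-3) share the root x = 1, so the resultant is 0
    print(round(sylvester_resultant([1, -3, 2], [1, -4, 3]), 6))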
The following three results are proved:
- The n-cycle cannot be 3-colored faster than log* n (and this is tight by previous work; a sketch of a matching upper-bound procedure follows this list), and cannot be 2-colored faster than cn.
- The d-regular tree cannot be colored with sqrt(d) colors faster than diameter/3.
- Every graph may be colored with O(D^2) colors in time O(log* n), where D is the largest degree.
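As a hedged sketch of the matching O(log* n) upper bound for 3-coloring the cycle (the well-known Cole-Vishkin color-reduction technique, not code from the talk; the cycle length and helper names are ours):

    # Synchronous simulation of Cole-Vishkin colour reduction on a directed
    # n-cycle.  Nodes start with unique IDs as colours; each round shrinks the
    # colour space roughly logarithmically, reaching 6 colours in O(log* n)
    # rounds, after which three clean-up rounds yield a proper 3-coloring.
    def cv_round(colors):
        n, new = len(colors), []
        for v in range(n):
            c, succ = colors[v], colors[(v + 1) % n]
            diff = c ^ succ
            idx = (diff & -diff).bit_length() - 1   # first bit where c differs from succ
            new.append(2 * idx + ((c >> idx) & 1))  # encode (position, own bit)
        return new

    def reduce_to_three(colors):
        # classes 5, 4, 3 recolour in turn; same-coloured nodes are never
        # adjacent, so each class can safely grab a free colour in {0, 1, 2}
        n = len(colors)
        for c in (5, 4, 3):
            colors = [min({0, 1, 2} - {colors[v - 1], colors[(v + 1) % n]})
                      if colors[v] == c else colors[v] for v in range(n)]
        return colors

    colors = list(range(16))                # unique IDs on a 16-cycle
    while max(colors) > 5:
        colors = cv_round(colors)
    colors = reduce_to_three(colors)
    assert all(colors[v] != colors[(v + 1) % 16] for v in range(16))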
More realistic models for distributed systems, in which the computations must be local, will also be mentioned.
There is a train chugging along a straight stretch of track from station A to station B. The engineer realizes he needs to return to station A and is not very excited about the prospect of backing the train all the way. Luckily there is a short spur line off to the right side of the track. It is just large enough to hold one car and has a switch for either direction of the track.
The engineer must use this spur line to turn the entire train around. This entails reversing the ordering of the cars, and the orientation of each individual car. We count one unit of work as moving one car one car-length on the track. Dewdney has an O(n^3) solution, where n is the number of cars, and he challenges his readers to do better.
We present an O(n^2 log n) algorithm for solving a variation of this problem in which all cars are taken to be locomotives, i.e. we have n self-propelled cars. We then outline a proof that the algorithm can be simulated by the original single-locomotive train with at most a constant-factor time loss.
This is far from the case in computing over continuous structures (such as the reals). To point to a concrete case, the main competing algorithms for the linear programming problem (e.g., Simplex and Karmarkar's algorithm) are defined and analyzed using incomparable models of computation and measures of complexity.
I will discuss various issues involved (e.g., the role of the "condition" of a problem), and a proposed model of computation over an arbitrary ring R. In this general setting, we get universal machines, normal forms, R.E. sets, as well as P, NP and NP complete classes. While our theory reflects the classical theory over Z (e.g., the computable functions are the recursive functions), it also reflects the special mathematical character of the underlying ring R (e.g., complements of Julia sets provide examples of undecidable R.E. sets over the reals) and provides a natural setting for studying foundational issues concerning algorithms in numerical analysis.
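As a hedged illustration of that last example (ours; Python floats only approximate the exact real arithmetic such a machine would use): the complement of the filled Julia set of z -> z^2 + c is R.E. over the reals because escape is certified in finitely many steps once the orbit leaves a fixed disk, while points inside the set never yield such a certificate.

    def escapes(z, c, budget=10_000):
        # Semidecision by escape: once |z| exceeds max(2, |c|), the orbit of z
        # under z -> z^2 + c provably tends to infinity.  On points of the
        # filled Julia set this loop would run forever; we cap it for the demo.
        bound = max(2.0, abs(c))
        for _ in range(budget):
            if abs(z) > bound:
                return True
            z = z * z + c
        return False   # no certificate found within the budget

    c = -0.8 + 0.156j
    print(escapes(2.0 + 0j, c))   # well outside: certified after one step
    print(escapes(0.0 + 0j, c))   # appears to stay bounded: no certificate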
This is joint work in progress with M. Shub and S. Smale.
In order to demonstrate the capabilities of the new model we examine two problems that have been extensively studied in the distributed algorithm and networking literature. For the problem of maintaining network topology we devise a broadcast algorithm which takes O(n) "messages" and O(log n) time. For the problem of leader election we present a simple algorithm that uses O(n) "messages" and O(n) time. Both algorithms are better than the existing algorithms under the new network model. Some extensions will also be discussed.
(Joint work with I. Cidon and I. Gopal.)
Joint work with Jorge More of Argonne National Laboratory.
The algorithm uses new techniques in coordinating the flows from the sources to the sinks.
Joint work with Gary Miller.
* Set up a recurrence relation.
* Eliminate a weighted summation by differencing often enough.
* Solve the (linear) recurrence for the forcing function in terms of the unknown function.
* Try a large enough repertory of unknown functions that the forcing function of interest is in the space spanned by the above solutions. (A small worked example follows this list.)
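A hedged toy instance of the repertory step (ours, not from the talk): for the recurrence x_n = x_(n-1) + g_n with x_0 = 0, trying the unknown function x_n = n forces g_n = 1, and trying x_n = n^2 forces g_n = 2n - 1. The forcing function of interest, g_n = n, is the combination (1/2)(2n - 1) + (1/2)(1), so by linearity x_n = (1/2)n^2 + (1/2)n, the familiar closed form for 1 + 2 + ... + n.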
One direct application is to the uniform sampling of combinatorial structures. The states of the Markov chain in this instance correspond to the sample space of all structures, and the transitions to simple local perturbations. Sampling is done by simulating the chain for a certain number of steps and noting the final state. Provided the stationary distribution of the chain is uniform over states, and provided the rapid mixing property holds, experiments of this type give a computationally efficient sampling procedure whose output distribution is close to uniform. Efficient sampling procedures are of practical interest in the study of statistical physics.
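As a hedged concrete instance (ours, not necessarily the chain from the talk): a well-known local-move chain on the matchings of a graph adds, deletes, or slides a single edge at each step. The transition rule is symmetric, so the stationary distribution is uniform over all matchings.

    import random

    def matching_step(edges, matching):
        # One step of a symmetric local-move chain on matchings: pick a
        # uniformly random edge and try to delete it, add it, or slide it.
        u, v = e = random.choice(edges)
        covered = {x for f in matching for x in f}
        if e in matching:
            matching.remove(e)                  # delete move
        elif u not in covered and v not in covered:
            matching.add(e)                     # add move
        else:
            touching = [f for f in matching if u in f or v in f]
            if len(touching) == 1:              # exactly one endpoint covered
                matching.remove(touching[0])
                matching.add(e)                 # slide move
        return matching

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    M = set()
    for _ in range(10_000):   # simulate the chain and note the final state
        M = matching_step(edges, M)
    print(sorted(M))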
Rapidly mixing Markov chains have other, less obvious, computational applications. For example, they may allow the size of a class of combinatorial structures to be estimated when exact enumeration is infeasible. In particular, it can be shown that the permanent of a sufficiently dense 0-1 matrix can be evaluated with arbitrary (fixed) accuracy in polynomial time.
The results described in this talk were the product of joint work with Mark Jerrum.
This is joint work with David McAllester.