Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.
Joint work with Bob Cypher.
We show very efficient constructions for a pseudo-random generator and for a universal one-way hash function based on the intractability of random instances of the subset sum problem for certain dimensions. (Pseudo-random generators can be used for private key encryption and universal one-way hash functions for authentication and signature schemes). The efficiency of our construction is due to the fact that many bits can be generated/hashed with one application of the assumed one-way function. We also show a bit commitment protocol with the property that it is secure if the subset sum problem cannot be broken on-line.
All our constructions can be implemented in NC using an optimal number of processors.
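As a rough illustration (a sketch of the expanding direction only, not the paper's construction in detail), a subset-sum generator maps an n-bit seed to the subset sum, modulo 2^m with m > n, of n publicly known random m-bit weights. The function name and encoding below are illustrative.

```python
def subset_sum_prg(seed_bits, weights, m):
    """Map an n-bit seed to an m-bit string: the sum of the chosen
    weights mod 2^m.  With m > n this expands the seed; the weights
    would be chosen uniformly at random and published."""
    total = sum(w for b, w in zip(seed_bits, weights) if b) % (2 ** m)
    return format(total, f'0{m}b')
```

The hardness of recovering the seed from the output is exactly the hardness of the corresponding random subset sum instance; with m < n the same map compresses, which is the hashing direction.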
Joint work with Russell Impagliazzo.
The talk is accessible to non-specialists in cryptography.
The relationship between learning on a graph and data compression can now be expressed as a natural problem about random walks on graphs. We show the equivalence between learning nested concepts on a graph and data compression by proving the following theorem: Consider a random walk on an arbitrary n-vertex graph G whose vertices are labelled with (arbitrary) integers. Then the expected length of the lexicographically first increasing subsequence of integers encountered in a random walk of length t is at most sqrt(t) log n.
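The "lexicographically first increasing subsequence" is the greedy one: scan the labels in walk order and keep each label that strictly exceeds the last one kept. A minimal simulation sketch (names are illustrative):

```python
import random

def greedy_increasing(labels):
    """Length of the lexicographically first (greedy) increasing
    subsequence: keep each label that exceeds the last one kept."""
    count, last = 0, None
    for x in labels:
        if last is None or x > last:
            count += 1
            last = x
    return count

def walk_labels(adj, label, start, t, rng):
    """Sequence of labels seen along a t-step random walk on the
    graph given by adjacency lists adj, starting at `start`."""
    v, seen = start, [label[start]]
    for _ in range(t):
        v = rng.choice(adj[v])
        seen.append(label[v])
    return seen
```

Averaging `greedy_increasing(walk_labels(...))` over many walks gives an empirical check of the sqrt(t) log n bound on small graphs.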
This is joint work with David Aldous.
Based on a language-recognition technique, we prove that one-way pushdown machines (with no restriction on their space size) also cannot produce an extra bit of pseudorandomness, and we show that bounded-reversal log-space machines cannot produce more than a linear number of pseudorandom bits.
We also introduce a notion of separation of machine-based complexity classes based on their ability (or inability) to generate pseudorandom strings. We discuss how this separation notion, combined with the assumption that secure pseudorandom generators with respect to polynomial-time machines exist, can be used to separate $\cal P$ from log-space.
Joint work with Andrew V. Goldberg and Moti Yung
Ahuja and Orlin, using a generic maximum-flow algorithm due to Goldberg and Tarjan, recently developed an O(nm + n^2 log U)-time algorithm for the restricted class of integer networks.
Based on this work, a strongly-polynomial algorithm is presented, and its running time is shown to be O(nm + #selects log n), where #selects denotes the expected number of iterations in the execution. By randomly permuting the adjacency lists at the start (this is the only use of randomization) and using a simple analysis, the trivial O(nm) bound on #selects is improved to O(n (nm^2 log n)^{1/3}) with high probability. This gives an O(nm + n^2 (log n)^4) maximum-flow algorithm. (The fastest deterministic algorithm takes O(nm log(n^2/m)) time.)
(This is joint work with Torben Hagerup.)
These results make it possible to view the shared-memory model as a higher-level language for designing algorithms in asynchronous distributed systems. We can automatically emulate any wait-free algorithm based on atomic, wait-free, single-writer multi-reader registers in message-passing systems. The overhead introduced by these emulations is polynomial in the number of processors in the system.
Immediate new results are obtained by applying the emulators to the following constructions: multi-writer multi-reader registers, concurrent time-stamp systems, atomic snapshot scans, and randomized consensus algorithms.
Joint work with Hagit Attiya and Amotz Bar-Noy.
These probability spaces have various applications:
(1) Derandomization of algorithms: many randomized algorithms that require only $k$-wise independence of their random bits (where $k$ is $O(\log n)$) can be derandomized by using $\epsilon$-biased random variables.
(2) Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication.
(3) Exhaustive testing of combinatorial circuits. We provide the smallest known family for such testing.
(4) Communication complexity: two parties can verify equality of strings with high probability while exchanging only a logarithmic number of bits.
(5) Hash functions: we can construct a polynomial-sized family of hash functions such that, with high probability, the sums of a random function over two different sets are not equal.
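Application (4) can be illustrated with a toy protocol that uses truly random shared bits in place of an $\epsilon$-biased sample space (the abstract's point is precisely that small-bias bits suffice, which this sketch does not capture): the parties compare inner-product fingerprints of their strings.

```python
import random

def fingerprint(bits, r):
    """Inner product over GF(2) of the string with shared randomness r."""
    return sum(b & ri for b, ri in zip(bits, r)) % 2

def equality_test(x, y, k, rng):
    """k rounds of fingerprint comparison: equal strings always pass;
    unequal strings pass with probability 2^-k under true randomness,
    since a random r separates them with probability 1/2 per round."""
    n = len(x)
    for _ in range(k):
        r = [rng.randrange(2) for _ in range(n)]
        if fingerprint(x, r) != fingerprint(y, r):
            return False
    return True
```

Only the k fingerprint bits need to be exchanged; replacing the fresh random r's by points of a small-bias space is what brings the shared randomness down as well.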
This is joint work with Moni Naor.
Joint work with Lyle A. McGeoch.
Our method yields new deterministic distributed algorithms for finding a maximal independent set in an arbitrary graph and for $(\Delta+1)$-coloring of graphs with maximum degree $\Delta$. These algorithms run in $O(n^\epsilon)$ time for $\epsilon = O(\sqrt{\log \log n / \log n})$, while the best previously known deterministic algorithms required $\Omega (n)$ time. Our techniques can also be used to remove randomness from the previously known most efficient distributed breadth-first search algorithm, without increasing the asymptotic communication and time complexity.
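The sequential fact behind the coloring guarantee is the classical greedy bound: giving each vertex, in any order, the smallest color unused by its already-colored neighbours never needs more than $\Delta+1$ colors. A sketch of that sequential fact (not of the distributed algorithm of the talk):

```python
def greedy_coloring(adj):
    """Sequential greedy coloring: each vertex takes the smallest color
    not used by its already-colored neighbours.  Since a vertex of
    degree d has at most d forbidden colors, at most max-degree + 1
    colors are ever used."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

The difficulty addressed by the talk is doing this distributively and deterministically in sublinear time, without the inherently sequential vertex ordering.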
Joint work with: B. Awerbuch (MIT) A. Goldberg (Stanford) M. Luby (ICSI)
We use a theorem of Aldous on the variance of certain estimators to show two results:
* Given the same number of random input bits, adjacent correlated samples drawn from a stationary time-reversible Markov chain typically give better estimates than well-spaced, roughly independent ones drawn from the same chain. This eliminates a factor of log|S| from bounds for mean-estimation methods based on the second eigenvalue alone (e.g. Jerrum/Sinclair). As |S| may be exponential in the input size, the saving can be appreciable. We illustrate with an application to estimating the mean of a function on restricted permutation data under the uniform distribution (i.e. on random matchings).
* More interestingly, using a random walk on a type of expander graph over {0,1}^k, one can obtain n random binary k-tuples using only (k + c(n-1)) random input bits, for a constant c independent of k and n. These samples are correlated, but for the purposes of estimating the mean they are essentially as good as independent samples (i.e., nk random bits), and much cheaper. We discuss the relation of this result to the recent work of Impagliazzo and Zuckerman (FOCS 89).
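The flavor of the first result can be seen in the simplest estimator: average the function over every state of a single walk, adjacent (correlated) samples included, rather than subsampling well-spaced states. The toy two-state chain below is purely illustrative.

```python
import random

def walk_average(step, f, start, t, rng):
    """Estimate the stationary mean of f by averaging f over all t
    states of one walk.  `step(v, rng)` advances the chain one step;
    adjacent samples are correlated but all of them are used."""
    v, total = start, 0.0
    for _ in range(t):
        v = step(v, rng)
        total += f(v)
    return total / t
```

For a rapidly mixing chain the variance of this estimator is governed by the chain's spectral properties, which is where the log|S| saving over spaced-sample analyses enters.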
We present general techniques for constructing simple to program and efficient self-testing/correcting pairs for a variety of numerical problems, including integer multiplication, modular multiplication, matrix multiplication, inverting matrices, computing the determinant of a matrix, computing the rank of a matrix, integer division, modular exponentiation and polynomial multiplication.
One of the central ideas is to design a self-testing/correcting pair that makes calls to a library of programs for a variety of functions. For example, the self-testing/correcting pair for computing the rank of a matrix makes calls to programs for computing matrix multiplication, inverting matrices and computing determinants of matrices, but does not assume that any particular answer it receives from these programs is correct.
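The self-correcting half of such a pair can be sketched for modular multiplication via random self-reducibility: the desired product is recovered from the program's answers at randomized points, so a program that errs on a small fraction of inputs still yields the correct answer with high probability. A hedged sketch (helper names are illustrative, and this is not presented as the paper's exact construction):

```python
import random

def corrected_mod_mult(P, x, y, N, rng, trials=5):
    """Self-corrector for x*y mod N.  Since x*y = (x+r)*y - r*y (mod N),
    each trial queries the possibly-faulty program P at the random point
    (x+r) mod N and at r, then a majority vote over trials is taken."""
    votes = {}
    for _ in range(trials):
        r = rng.randrange(N)
        ans = (P((x + r) % N, y) - P(r, y)) % N
        votes[ans] = votes.get(ans, 0) + 1
    return max(votes, key=votes.get)
```

The corrector never trusts any single answer of P; it only relies on P being correct on most random inputs, which is exactly what a self-tester certifies.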
Joint work with Manuel Blum and Michael Luby.
(Joint work with Yehuda Afek, TAU)
A chosen ciphertext attack on a cryptosystem consists of a preliminary stage, in which the attacker is allowed to encrypt any message she wants and to use the decryption device as a black box to decrypt any ciphertext she wants. Afterward, the attack is considered successful if she is able to decrypt subsequent messages without the help of the decryption device.
We show how to use a non-interactive zero-knowledge proof system to convert any public-key cryptosystem into one that is secure against a chosen ciphertext attack.
Blum, Feldman and Micali introduced non-interactive zero-knowledge proof systems for language membership. Such a system allows users who share a random string to prove membership in a language non-interactively (i.e. by sending a single message), without revealing any other useful information.
Using a non-interactive proof system suggested by De Santis, Micali and Persiano, this yields a concrete implementation of a public-key cryptosystem secure against chosen ciphertext attack based on the quadratic residuosity assumption.
This is joint work with Moti Yung.
[Joint work with Dan Greene, Arvind Raghunathan, and Madhu Sudan]
Function inverting is a challenging task. Consider F(x), which checks whether x is an absolutely complete mathematical proof and outputs its length and the last proven fact. F is trivial to compute, but inverting it amounts to finding shortest proofs for any given theorem! Still, the problem (known as "P =? NP") of actually proving that some functions are one-way (i.e. easy to compute while hard to invert) remains open. The security of F is the time needed to notice that some algorithm can successfully invert F with an observable frequency.
A reason for F being one-way may be in a particular simple predicate b(x) that is determined by but hard to compute from F(x). Moreover, b(x) may even be hard to guess with any noticeable correlation (then F must be one-way on most inputs). The security of b is the time needed to notice the correlation. b(x) may be determined by F(x) while appearing completely random. So, randomness and determinism may not be so opposite as our scientific culture presumes.
Such (hypothetical) hidden bits play a significant role in foundations of (pseudo)randomness, information, cryptography, and other areas. Blum, Micali and Yao found hidden bits for functions of a special form (assuming there are one-way ones among them). [Goldreich Levin, STOC-89] show that every one-way function hides, with at most a polynomial security loss, almost all linear predicates. We will see that no polynomial security loss actually occurs.
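The linear predicates in question are inner products mod 2 of the input with a public random string. A minimal sketch of the predicate itself (the hardness claims are, of course, not something code can illustrate):

```python
def gl_predicate(x_bits, r_bits):
    """The Goldreich-Levin style hard-core predicate: the inner
    product mod 2 of the secret input x with a public string r."""
    return sum(a & b for a, b in zip(x_bits, r_bits)) % 2
```

For a one-way function f, the padded function F(x, r) = (f(x), r) hides gl_predicate(x, r): computing the bit from F's output is as hard as inverting f, and the talk's claim is that this reduction loses no polynomial factor in security.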
This talk presents program checkers for modular exponentiation: given integers a, b, c, compute a^b mod c. We show that, whereas current techniques for exponentiating require O(log b) multiplications, a processor restricted to performing only O(loglog b) multiplications can check exponentiation using O(loglog b) queries of the program. Independently, Ronitt Rubinfeld has obtained a checker that requires O((loglog b)^4) multiplications and O((loglog b)^3) queries. We also demonstrate a hypothesis under which checking can be done with a constant number of multiplications and queries. Such a checker would imply that any program for modular exponentiation can be transformed to a "self-checking" program with asymptotically no loss in efficiency.
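A consistency check in the spirit of such checkers (an illustrative sketch, not the checker of the talk) can exploit the identity a^(b1+b2) = a^b1 * a^b2 (mod c) at random splits of the exponent, using the program itself as a black box:

```python
import random

def check_modexp(P, a, b, c, rng, trials=10):
    """Sketch of a consistency checker for P(a, b, c) ?= a^b mod c.
    Each trial splits b randomly as b1 + b2 and checks the
    multiplicative identity a^(b1+b2) = a^b1 * a^b2 (mod c) using
    only queries to P."""
    for _ in range(trials):
        b1 = rng.randrange(b + 1)
        b2 = b - b1
        if (P(a, b1, c) * P(a, b2, c)) % c != P(a, b, c) % c:
            return False
    return True
```

Each trial costs one multiplication and three queries; an adversarially self-consistent wrong program could pass this particular sketch, which is why real checkers need more care.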
This work was done jointly with Leonard Adleman.
For many common communication patterns, we present optimal routing algorithms which, in most cases, outperform the previous best known algorithms by a factor of O(log N) in an N-processor hypercube. The algorithms use edge-disjoint spanning trees, balanced trees, or rotated binomial trees on hypercubes. The techniques used in the algorithms extend to a large class of networks, and similar optimal algorithms can be derived for multidimensional meshes/tori, generalized hypercubes, folded hypercubes, butterflies and star graphs.
The set of communication primitives includes broadcasting, reduction, personalized communication (scatter/gather), transposition, dimension rotation (shuffle), dimension reflection, conversion between various encodings and dimension permutations. We present implementation results on the Intel iPSC hypercube. Based on these efficient communication primitives, we have also implemented several applications such as matrix multiplication, matrix transpose, tridiagonal systems solvers, and have a suite of algorithms for various problem sizes and machine parameters.
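The simplest of these primitives, broadcasting, is classically done with a binomial tree: in round i, every node that already holds the message forwards it along dimension i, so all 2^d nodes are informed after d rounds. A simulation sketch (illustrative, not the iPSC implementation):

```python
def hypercube_broadcast(dim, source, value):
    """Binomial-tree broadcast on a 2^dim-node hypercube.  In round i
    every informed node sends to its neighbour across dimension i
    (node IDs differing in bit i); informed nodes double each round."""
    have = {source: value}
    for i in range(dim):
        for v in list(have):
            have[v ^ (1 << i)] = have[v]
    return have
```

The same doubling pattern, run in reverse, gives reduction; scatter/gather and the other primitives refine it by splitting the data among subtrees.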
Rather than look at algebraic certificates of knot triviality, we could look at geometric certificates, such as the Seifert surface. If a knot is the boundary of a polyhedral surface without holes or self-intersections, then it is trivial---it can be untied without cutting.
Researchers have conjectured that every knot has a `spanning surface' with only polynomially many facets; if this were true then knot triviality would be in NP. Unfortunately, we disprove this conjecture by exhibiting trivial knots composed of $n$ line segments that require exp(c n) facets, for a positive constant c. The proof uses a geometric approach: slicing the knot and its spanning surface and discovering their geometric structure.
I expect the talk to be self-contained, but be sure you bring your lunch because the algorithmic content will be small.
The 2-dimensional problems (with an additional constraint, which we also address) were originally posed by S.-Y. Kung, S.N. Jean and C.W. Chang in the context of fault-tolerant arrays of standard cells. A solution to these problems directly provides a reconfiguration assignment for a certain class of rectangular arrays. This can be used to increase the manufacturing yield of VLSI or WSI arrays and to reduce the reconfiguration time of operational fault-tolerant arrays. The partitioned-grid problem is motivated by the fact that for very large arrays, it is necessary to provide additional rows and columns of spare cells, since the ratio of the total number of cells to spare cells increases as the array grows (area/perimeter). Additional applications may include automated assembly and testing.
This is joint work with Jeff Lotspiech
Each of the problems that are mentioned above may be described in terms of the {\it subtree-elimination problem}, which we define, and for which we provide algorithms that are correct but not always complete. Further, we show that the basis-linearizability and sequencability problems are undecidable. Our construction yields tight undecidability results for the detection of program equivalence. In addition, we provide a decision procedure for the detection of basis-linearizability in a restricted class of recursive programs. Finally, we consider the tractability of simple cases of the basis-linearizability, sequencability and $k$-boundedness problems, by providing $\cal NP$-hardness results and efficient algorithms for various classes of each problem. Our investigation yields a complete characterization of the complexity of testing the equivalence of conjunctive queries, in terms of the polynomial hierarchy.
Joint work with Manfred Warmuth.
The name "Topological Complexity" is motivated by a definition which is purely topological, involving only spaces and continuous maps with no reference to computers or algorithms. Under this alternative viewpoint, it is natural to bring forth the techniques and technology of topology to try to calculate Topological Complexity. This has been done with some success in the case of root finding for univariate (Smale, Vasiliev) and multivariate (Levine) polynomials.
In this talk I will examine a new problem, the New Point Problem, which is particularly natural to view topologically and discuss its Topological Complexity. The emphasis will be on the topological techniques involved with a view towards perhaps applying them to more central problems in Computer Science, as well as on the idea that there are many results in the mathematical literature that are useful to computer scientists.
These results have applications in complexity theory. For example, the lower bounds for parity repair a broken proof of Bennett and Gill stating that PP is not in Parity-P relative to a random oracle. The bounds also give a proof that parity can't be well approximated by an AC0 circuit with one majority gate.
THIS WORK IS BEST APPRECIATED BY CONSIDERING THE FOLLOWING CLOSELY RELATED PUZZLES:
Imagine that n people each have a random bit on their forehead. (Uniform distribution.) Assume n is an odd number. The people want to vote on the parity of the n bits. Formally, each person has access to all bits but his own; there is no communication between the people. Each person casts one vote. The result of the voting is the majority of the votes cast. The voters wish the result of the vote to be the parity of the n bits.
A voting strategy is a procedure for each voter. The procedure takes what the voter sees and returns a vote. It is clear by the nature of the parity function that a voter can have only a 50% chance of casting a correct vote. (Each voter has a separate identity so each procedure can be different.)
A voting strategy succeeds with probability p if the probability that the voters vote the parity is p. (Varying over all possible assignments of bits to foreheads.)
Two puzzles:
1) Give a very simple strategy the voters can use to succeed with probability converging to one as n grows. (The proof should also be very simple. I expect your strategy to fail with probability O(1/sqrt(n)).)
2*) Find a strategy that is optimal for n equal to a power of two minus one. (This can be extended to a strategy that is optimal for all n.)
Now change the rules slightly to allow for "Chicago voting". Each person can vote as many times as he/she wants! The other conditions remain the same.
Two more puzzles:
1) Find a strategy that fails on only one situation (with prob 1/2^n). (This is clearly optimal.)
2) Show that in the case of any function other than parity or its negation the voters can be correct all the time.
Again, let's change the scenario a bit. This time each of the n people gets a bit in his/her hand. (Everybody only sees one bit.) Again, let's allow Chicago voting:
Puzzle: Find an optimal strategy for the voters.
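For readers who want to experiment with the basic (one-vote) game, the exact success probability of any strategy can be computed by enumerating all 2^n forehead assignments. This harness is illustrative and deliberately does not give away the puzzle solutions.

```python
from itertools import product

def success_probability(n, strategy):
    """Exact success probability of a voting strategy in the one-vote
    game: for every assignment of bits to foreheads, voter i sees all
    bits but bits[i], casts strategy(i, seen), and the outcome is the
    majority vote; success means the outcome equals the parity."""
    wins = 0
    for bits in product([0, 1], repeat=n):
        parity = sum(bits) % 2
        votes = [strategy(i, bits[:i] + bits[i + 1:]) for i in range(n)]
        if (sum(votes) * 2 > n) == (parity == 1):
            wins += 1
    return wins / 2 ** n
```

For example, the naive strategy "vote the parity of the bits you see" scores exactly 1/2, as the voters' claim about individual votes predicts.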
Joint work with Richard Beigel, Merrick Furst, and Steven Rudich.
This paper investigates EBA in the case of crash failures (where a processor that is faulty sends no messages after it fails) and in the more general case of omission failures (where a processor may continue to send messages after it has failed). The emphasis is on characterizing optimal EBA protocols (EBA protocols in which the processors decide at least as fast as in any other EBA protocol) in terms of the states of knowledge required by the processors in order to attain EBA. We define a new state of knowledge in terms of which we can characterize necessary and sufficient conditions for attaining EBA. Using our characterization, we provide a technique that allows us to start with any EBA protocol, apply a certain construction twice, and arrive at an optimal EBA protocol.
Although we focus here on Byzantine agreement, it is straightforward to extend our results to general coordination problems.
The talk is self-contained. This is joint work with Joe Halpern and Yoram Moses, and will be presented at the 9th PODC, August 1990.
This is joint work with Nick Reingold and Dan Spielman.