Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.
In 1963, Hoffman gave a necessary and sufficient condition which, given the matrix and the permutation of the variables, verifies that the matrix is greedily solvable by that ordering. (In that case the permutation is called a Monge sequence.) Hoffman's condition, though, did not answer the question raised above, since it required prior knowledge of the permutation in order to verify that the condition holds. Until now, the question of existence and the generation of Monge sequences had to be approached separately for every family of transportation problems.
We shall describe an efficient algorithm for the problem. The algorithm uses Hoffman's condition, but it does not require prior knowledge of the Monge sequence. Our algorithm also generates the corresponding Monge sequence whenever it exists. This facilitates the solution of every subsequent transportation problem with that cost matrix in linear time.
The running time of our algorithm is better than that of the best known algorithms for the transportation problem, and thus it can also be used as a preliminary step towards solving such problems, without an increase in the overall complexity.
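To make the setting concrete, here is a minimal Python sketch (not the algorithm of the talk) of the greedy rule that a Monge sequence enables: cells of the cost matrix are saturated in the given order, and Hoffman's result says this yields an optimal solution exactly when the order is a Monge sequence. The matrix, supplies, and demands below are illustrative.

```python
def greedy_transport(c, supplies, demands, seq):
    """Greedy transportation rule: process cells in the order `seq`
    (assumed to be a Monge sequence for the cost matrix c), shipping
    as much as possible through each cell."""
    a, b = list(supplies), list(demands)
    x = [[0] * len(b) for _ in a]
    for i, j in seq:
        t = min(a[i], b[j])   # saturate the cheaper-constrained side
        x[i][j] = t
        a[i] -= t
        b[j] -= t
    return x

# A Monge matrix (c[i][j] + c[k][l] <= c[i][l] + c[k][j] for i<k, j<l)
# is greedily solvable by row-major order.
c = [[1, 2, 3],
     [2, 3, 4],
     [3, 4, 5]]
seq = [(i, j) for i in range(3) for j in range(3)]  # row-major order
x = greedy_transport(c, [5, 3, 2], [4, 4, 2], seq)
total = sum(x[i][j] for i in range(3) for j in range(3))
```

Once the Monge sequence is known, each subsequent transportation problem with the same cost matrix is solved by this single linear-time pass.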
(Joint work with N. Alon, S. Cosares and D. Hochbaum.)
Among the applications of the tool is a Secure Digital Signature Scheme. Secure Signature Schemes were previously known to exist only under the assumption that Trapdoor One-Way Functions (at first specific ones, and more recently general ones) exist.
(Joint work with Moni Naor, Berkeley.)
We show strong magnification properties for the 1-skeletons of the following classes of polytopes: (i) Arbitrary truncations of the matching polytope. (ii) Arbitrary truncations of partition matroid polytopes. (iii) The polytope of order ideals. The proof technique is a probabilistic extension of Jerrum and Sinclair's idea to "encode and bound paths using the state-space". New sampling schemes for matchings follow as a concrete consequence of these results.
We conjecture that the magnification properties of the specific polytopes listed above are examples of a more general phenomenon: "All 0-1 polytopes are magnifying". A positive resolution of this conjecture would imply efficient schemes for estimating the number of vertices of polytopes whose degrees are bounded by a polynomial in n. Several unsolved counting problems (including counting bases of a matroid and network reliability) can be expressed in terms of counting the number of vertices of suitably chosen polytopes that satisfy the condition of bounded degrees.
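As a toy illustration of the random walks underlying such results, here is a Python sketch of a Jerrum-Sinclair style chain on the matchings of a graph: pick an edge uniformly and delete it, add it, or "slide" along it. The graph and step count are illustrative, and this is only a simplified version of the sampling schemes for matchings mentioned above.

```python
import random

def matching_walk_step(graph_edges, M, rng):
    """One step of a Jerrum-Sinclair style random walk on matchings."""
    u, v = rng.choice(graph_edges)
    if (u, v) in M:
        M = M - {(u, v)}                  # delete move
    else:
        hits = {e for e in M if u in e or v in e}
        if not hits:
            M = M | {(u, v)}              # add move
        elif len(hits) == 1:
            M = (M - hits) | {(u, v)}     # slide move
        # if both endpoints are matched by other edges, stay put
    return M

rng = random.Random(0)
# 4-cycle: its matchings are the empty set, four single edges,
# and two perfect matchings (7 states in all).
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
M = frozenset()
seen = set()
for _ in range(2000):
    M = frozenset(matching_walk_step(edges, set(M), rng))
    seen.add(M)
```

Magnification (expansion) of the 1-skeleton is what makes such a chain mix rapidly, which is the link between the geometric statement and the sampling application.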
This is joint work with Umesh Vazirani.
The theorem of Sprague and Grundy tells us that every instance of an impartial game is equivalent to an instance of the game Nim. The instances of a taking and breaking game thus specify a sequence of Nim values. It is conjectured [R. Guy] that all finitely-specified octal games have ultimately periodic Nim-value sequences, but the conjecture is not known to hold even for all octal games with 3 or fewer code digits. Slightly fewer than sixty of these now remain unsettled. We present techniques for giving computational proofs of the periodicity of some of these sequences and consequent periodicity results for three such games.
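For concreteness, the Nim-value sequence of an octal game can be computed by brute force, and an apparent period can be read off the computed prefix, as in the Python sketch below. The game .33 is illustrative; note that an apparent repeat is only evidence, and a rigorous periodicity proof requires a stopping condition such as the Guy-Smith criterion.

```python
def mex(s):
    """Minimum excludant: smallest non-negative integer not in s."""
    m = 0
    while m in s:
        m += 1
    return m

def grundy_values(code, n_max):
    """Nim-values G(0..n_max) of the octal game with digits code[0..].
    Digit d_k's bits: 1 = take k tokens leaving no heap, 2 = take k and
    leave one nonempty heap, 4 = take k and split into two nonempty heaps."""
    G = [0] * (n_max + 1)
    for n in range(1, n_max + 1):
        opts = set()
        for k, d in enumerate(code, start=1):
            if k > n:
                break
            if d & 1 and n == k:
                opts.add(0)                        # remove the whole heap
            if d & 2 and n > k:
                opts.add(G[n - k])                 # leave one heap
            if d & 4 and n > k:
                for i in range(1, (n - k) // 2 + 1):
                    opts.add(G[i] ^ G[n - k - i])  # split into two heaps
        G[n] = mex(opts)
    return G

def apparent_period(G):
    """Smallest p with G[i] == G[i+p] over the computed prefix (heuristic)."""
    for p in range(1, len(G)):
        if all(G[i] == G[i + p] for i in range(len(G) - p)):
            return p

# Game .33: take 1 or 2 tokens, removing the heap or leaving one heap.
G = grundy_values([3, 3], 20)
```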
No background in the theory of impartial games will be assumed.
This is joint work with Anil Gangolli.
The algorithm can be used to efficiently parallelize any tree based computation such as divide-and-conquer, branch-and-bound, or functional expression evaluation. It can also be used to efficiently maintain dynamic data structures such as quad-trees that arise in scientific applications and image processing.
A novel technique -- tree surgery -- is introduced to deal with dependencies inherent in trees. Together with tree surgery, the study of random walks is used to analyze the algorithm.
(Joint work with Sandeep Bhatt.)
Our main result is a new determinization construction. The advantages of the construction are that it is simpler and yields a singly exponential upper bound for the general case. This construction is essentially optimal. Using the construction we can also obtain an improved complementation construction for Büchi automata, which is also optimal. Both constructions can be used to improve the complexity of decision procedures that use automata-theoretic techniques.
In the talk, we consider the question of minimizing the number of PBOS', given that all dyadic ICD's with IVGR capability are to be performed in a group of ICD donors (ICDD) and ICD receptors (ICDR), each with a different VBI, so that no VBI is transmitted. Previous work of Hajnal and Lovasz found upper and lower bounds that differed by an additive constant of one. I will fill in this gap by providing an algorithm that gives the optimal solution for any number of ICDD's and ICDR's.
Caveat lector: anyone who doesn't understand this abstract may be asked to participate in a real time demonstration of the algorithm.
(joint work with Rajeev Motwani and Boris Pittel)
After a brief introduction to Gr\"obner bases ``from a user's point of view'', we will discuss recent applications of these methods to problems in the following areas:
Computational invariant theory
Inverse kinematics in robot programming
Determinantal ideals and algebraic combinatorics
Algorithmic versions of the Quillen-Suslin theorem
This problem is a special case of linear programming, and therefore can be solved in polynomial time. We present a simple combinatorial algorithm for this problem, that is based on ideas from maximum flow and minimum cost flow algorithms.
Joint work with Andrew Goldberg and Serge Plotkin.
We exhibit a small set of forbidden minors for linkless embeddable graphs and show that any graph containing one of these minors cannot be embedded without linked cycles. We further establish that any graph which does not contain these minors is embeddable without linked cycles. Thus, we demonstrate an O(n^3) algorithm for the decision problem. This is one of the first problems for which a polynomial-time algorithm has been constructed after it was shown that such an algorithm must exist.
An important consequence of our work is a proof that a linkless embeddable graph is also knotless embeddable. We also show that a graph which is embeddable without any two cycles being linked is linkless embeddable. Motivated by these results, we define the notion of a "good" three-dimensional embedding of a graph and show that a graph has a good embedding if and only if it is linkless embeddable. This notion of a good embedding gives us a finite-time algorithm to actually embed a graph without linked cycles, if such an embedding exists.
(This is joint work with Rajeev Motwani and Huzur Saran)
We use our technique to devise NC algorithms for various problems including set balancing problems, near-optimal edge coloring of graphs, finding independent sets in hypergraphs and lattice approximation problems. Simple RNC algorithms were known for each of these problems previously but no deterministic NC algorithms had been devised.
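For reference, the sequential method of conditional probabilities for set balancing, the classical technique whose parallelization is at issue here, can be sketched as follows. The hyperbolic-cosine pessimistic estimator and the choice of parameter below are the standard ones and are not necessarily those of the talk.

```python
import math

def balance(A):
    """Set balancing by the method of conditional probabilities: fix
    x_j = +/-1 one at a time, each time minimizing the pessimistic
    estimator  sum_i 2*cosh(lam*s_i)*cosh(lam)**r_i,  where s_i is the
    fixed part of row i's signed sum and r_i counts its unfixed 1s."""
    m, n = len(A), len(A[0])
    lam = math.sqrt(2.0 * math.log(2 * m) / n)
    s = [0.0] * m                       # fixed partial sums per row
    r = [sum(row) for row in A]         # unfixed 1-entries per row
    x = []
    for j in range(n):
        best = None
        for sign in (+1, -1):
            phi = sum(2 * math.cosh(lam * (s[i] + sign * A[i][j]))
                      * math.cosh(lam) ** (r[i] - A[i][j])
                      for i in range(m))
            if best is None or phi < best[0]:
                best = (phi, sign)
        x.append(best[1])
        for i in range(m):
            s[i] += best[1] * A[i][j]
            r[i] -= A[i][j]
    return x

# Illustrative random 0-1 instance; the estimator argument guarantees
# discrepancy at most sqrt(2 n ln(2m)).
import random
rng = random.Random(2)
n = 16
A = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
x = balance(A)
disc = max(abs(sum(A[i][j] * x[j] for j in range(n))) for i in range(n))
```

The difficulty the talk addresses is that this greedy fixing is inherently sequential; the NC algorithms must fix many variables in parallel.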
[Joint work with Seffi Naor (Stanford University) and Moni Naor (IBM-ARC). ]
G. Freiman initiated an entirely different approach to solving the subset-sum problem. Using analytic number theory, he and others proved theorems characterizing the set of subset sums as a collection of arithmetic progressions with the same difference. These theorems were used to design algorithms for the subset-sum problem that improved on those based on dynamic programming for sufficiently dense inputs. (A problem is dense if m is large enough as a function of l.)
More recently, a family of better algorithms was designed. They do not use any analytic number theory, only elementary number theoretic arguments. The fastest algorithms in the family run in time O(m), and the slowest in time O(l log l). Moreover the algorithms yield a characterization of the set of subset sums, improving the theorems mentioned above.
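For reference, the dynamic-programming baseline that these results improve on simply accumulates the set of attainable subset sums. In the dense example below, the sums form a single arithmetic progression of difference 1, which is the kind of structure the characterization theorems describe; the numbers are illustrative.

```python
def subset_sums(items):
    """Classical dynamic-programming computation of the set of all
    subset sums: fold each item into the running set of sums."""
    sums = {0}
    for a in items:
        sums |= {s + a for s in sums}
    return sums

# Dense instance: with enough distinct small numbers, every value
# between 0 and the total is attainable (one progression, difference 1).
items = [1, 2, 3, 5, 7, 8]
S = subset_sums(items)
```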
The talk will discuss the approach, its potential and its limitations.
Joint results with M. Chaimovich, G. Freiman and O. Margalit of Tel-Aviv University.
We also show how to apply our techniques to design parallel algorithms for the weighted versions of the above problems. In particular, we present sublinear-time deterministic parallel algorithms for finding a minimum-weight bipartite matching and for finding a minimum-cost flow in a network with zero-one capacities, if the weights are polynomially bounded integers.
Joint work with A. Goldberg and P. Vaidya.
The second result is a lower bound on the size of sorting networks based on Shellsort. A comparator is a device that sorts 2 items, and a sorting network is a hardwired collection of comparators that sorts $N$ items. Shellsort is a well known sorting algorithm that is controlled by a sequence of parameters called increments. Researchers have suggested that a sorting network having $O(N\log N)$ comparators could be obtained by using Shellsort with the correct increment sequence. All increment sequences that have been proposed for Shellsort have been monotonic. It will be shown that any monotonic increment sequence yields a sorting network having at least $\Omega(N\log^{2} N/\log\log N)$ comparators.
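For readers unfamiliar with Shellsort, a short Python sketch follows: each increment h triggers one pass of stride-h insertion sort, and in a network each such pass would be realized by a fixed set of comparators. The increment sequence used below (Knuth's $(3^k - 1)/2$) is just one example of a monotonic sequence.

```python
def shellsort(a, increments):
    """Shellsort driven by a decreasing increment sequence: for each
    increment h, perform insertion sort on the stride-h subsequences."""
    a = list(a)
    for h in increments:
        for i in range(h, len(a)):
            x = a[i]
            j = i
            while j >= h and a[j - h] > x:  # shift larger elements right
                a[j] = a[j - h]
                j -= h
            a[j] = x
    return a

import random
rng = random.Random(1)
data = [rng.randrange(1000) for _ in range(200)]
incs = [121, 40, 13, 4, 1]   # Knuth's increments, a monotonic sequence
out = shellsort(data, incs)
```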
The goal of this research is to develop an understanding of the cryptographic power of simple protocols, and to apply this knowledge to remove or weaken cryptographic assumptions. One of the early results in this area says that if one can implement a simple protocol, known as oblivious transfer, then one can implement some very general two-party games. Machinery has also been developed to show completeness results for other ``protocols,'' such as an ideal noisy channel. These techniques have been applied to the study of multi-prover interactive proof systems, bounded-interaction zero-knowledge, space-bounded automata, efficient zero-knowledge protocols for NP, and weakening the cryptographic assumptions necessary for secure circuit evaluation.
A novel technique for handling the floor operation is presented. The same technique can be used to derive lower bounds for many other problems.
Finally, we show that the same lower bounds hold for the time complexity of RAMs with the same set of arithmetic operations.
This is a joint work with Yishay Mansour and Prasoon Tiwari.
The best previously known result was that the problem can be solved through a polynomial number of linear programming problems. It was previously conjectured that combinatorial algorithms exist for the cases of two- and three-dimensional vector-weights. The work gives strongly polynomial algorithms for any fixed dimension. Moreover, these algorithms also establish membership in the class NC. On the other hand, it is shown that when the dimension of the weights is not fixed, the problem is equivalent to the general linear programming problem under strongly polynomial and logspace reductions.
Joint work with Nimrod Megiddo (IBM Almaden)
Next we show that our modified versions of the shift register and linear congruential generators can be used for more general purposes. For example, they can be used to generate a list of $k(n)$ $n$-bit primes using the information-theoretic lower bound of $n-\log n-c$ bits per prime, amortized, for a suitable polynomial~$k(n)$. More generally, they can be used to generate polynomially-many samples from any polynomially-samplable distribution using the information-theoretic lower bound of the entropy $+ o(1)$ bits per sample, amortized. In fact, this is possible even if the samples are needed from different distributions; the number of random bits used will approach the information-theoretic bound in the limit.
(Joint work with Christos Papadimitriou)
Let f be a boolean function on n variables. Then there is a set of only o(n) variables whose influence on f is 1-o(1).
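For a small example, the influence of a single variable can be computed by brute force as the probability, over a uniformly random input, that flipping that bit changes the function's value. The Python sketch below uses majority on three bits as an illustration.

```python
from itertools import product

def influences(f, n):
    """influences(f, n)[i] = Pr over uniform x in {0,1}^n that
    flipping bit i changes f(x)."""
    infl = [0.0] * n
    for x in product([0, 1], repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1                 # flip the i-th bit
            if f(tuple(y)) != fx:
                infl[i] += 1.0
    return [v / 2 ** n for v in infl]

# Majority of 3 bits: flipping a bit matters exactly when the other
# two bits disagree, so each variable has influence 1/2.
maj3 = lambda x: int(x[0] + x[1] + x[2] >= 2)
I = influences(maj3, 3)
```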
The definition of influence, as well as all the background from harmonic analysis that will be needed, will be included in the talk. Having heard my previous AFLB talk will help, but will by no means be necessary to follow this talk.
If time permits I will make some remarks on a very recent paper of Y. Mansour, N. Nisan and myself where similar techniques are used to study problems in circuit complexity.