Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.
1. Remove a single M & M from the selected heap.
2. Split the remaining M & M's of the selected heap into 2 nonempty heaps.
Now it is my turn, and I have the exact same set of options. We move alternately in this way until the final M & M is taken.
If the last player to take an M & M is declared the winner, then we have _normal_play_ and the Sprague-Grundy theory can be used to classify all first- and second-player winning positions in the M & M game. One arrives at a winning strategy based on the heap sizes taken modulo two and addition over GF(4).
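To make the normal-play discussion concrete, here is a minimal Python sketch (mine, not from the talk) of the generic Sprague-Grundy computation for this kind of heap game. It assumes the move described above: select a heap, remove one M & M, and either leave the remainder as a single heap or split it into two nonempty heaps.

from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(n):
    # Grundy value of a single heap of n M&M's, under the assumed move rule.
    if n == 0:
        return 0
    options = {grundy(n - 1)}                 # remove one, keep the rest together
    for a in range(1, (n - 1) // 2 + 1):      # remove one, split the rest as a + b
        options.add(grundy(a) ^ grundy((n - 1) - a))
    g = 0                                     # mex: least value not among the options
    while g in options:
        g += 1
    return g

def first_player_wins(heaps):
    # Normal play: a position is a first-player win iff the nim-sum (XOR) of
    # the Grundy values of its heaps is nonzero.
    total = 0
    for h in heaps:
        total ^= grundy(h)
    return total != 0

Tabulating grundy(n) for small n is how closed forms like the one mentioned above are discovered; the misere analysis below cannot, in general, be read off from these normal-play values.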
If the last player to take an M & M is declared to be the _loser_, then we have _misere_play_ and a complete classification of all first-player winning positions in the M & M game was---until today's talk!---unknown.
The M & M game belongs to a general class of Nim-like games known as the octal games. Until very recently, the best weapon for the misere play analysis of such games was the difficult Tame, Restive, and Wild theory from Chapter 13 of Berlekamp, Conway, and Guy's ``Winning Ways.'' The Conway-Sibert decomposition has only recently emerged as an exciting new method of attack capable of yielding solutions to misere octal games without direct reference to complicated genus considerations.
In our talk, we shall briefly review the Sprague-Grundy and Tame, Restive, and Wild theories for such games and then shall move on to a discussion of the Conway-Sibert technique and the associated ideas of frisky and frigid positions. Finally, the solution of several other such games under misere rules will be presented.
Although no knowledge of combinatorial games will be assumed, near the end of our talk we will approach the limit of current knowledge about impartial combinatorial games. An outrageous conjecture will be shared with the audience. If you are familiar with the Noah's Ark theorem, Half-Tameness, or the Firm and Fickle Units, please come (maybe you can explain them to me)!
The secret message transmission problem is an abstraction of the above problem of secure communication in a general network. It considers a Receiver and a Sender connected by n communication lines, d of which may arbitrarily disrupt the information they carry, and l of which may listen to the information they carry. The goal is for the Sender to convey a message to the Receiver in such a way that the Receiver will always learn the message correctly (despite the presence of the disrupting lines) and no subset of size l of the lines will ever be able to learn anything about the message even when combining the information on them.
Given the maximal number of listening lines and the maximal number of disrupting lines, this paper derives lower bounds on the total number of lines between Sender and Receiver required for successful secure communication in the different cases. It then presents efficient algorithms that solve the problem while meeting these lower bounds. The secrecy achieved by these algorithms does not rely on any cryptographic assumptions. Moreover, the algorithms require only three rounds of information exchange. These are the first algorithms for secure communication in a general network to simultaneously achieve the three goals of perfect secrecy (no subset of l lines learns *anything* about the message), perfect resiliency (the Receiver *always* learns the message correctly), and a constant number of rounds of information exchange.
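To fix ideas, here is a toy Python sketch of one classical ingredient for this kind of problem: Shamir-style sharing of the message across the n lines. This is my own illustration, not the protocol of the paper; a random polynomial of degree l hides the message from any l listeners, and this naive one-shot scheme tolerates d corrupted shares only when n >= l + 2d + 1, whereas the paper's algorithms use interaction to meet the lower bounds derived there.

import random
from itertools import combinations

P = 2**31 - 1   # a prime large enough for small messages

def share(message, n, l):
    # Split `message` into n shares; any l of them reveal nothing about it.
    coeffs = [message] + [random.randrange(P) for _ in range(l)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def lagrange_eval(points, x):
    # Evaluate the unique polynomial through `points` at x, over GF(P).
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def recover(shares, l, d):
    # Brute-force unique decoding: find a degree-l polynomial that agrees with
    # some n-d of the received shares; with n >= l + 2d + 1 it must be the one
    # the Sender used, and its value at 0 is the message.
    n = len(shares)
    for subset in combinations(range(n), n - d):
        pts = [shares[i] for i in subset]
        base = pts[:l + 1]
        if all(lagrange_eval(base, x) == y for x, y in pts):
            return lagrange_eval(base, 0)
    raise ValueError("too many corrupted shares")

For example, with n, l, d = 8, 2, 2, corrupting any two of the eight shares still lets recover() return the original message, while any two shares on their own are uniformly distributed regardless of the message.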
Underlying our results are new techniques for achieving secrecy and resiliency which may have wide applications elsewhere.
The talk is self contained.
This is joint work with Danny Dolev, Cynthia Dwork and Moti Yung, and will be presented at the 31st FOCS, October 1990.
This is joint work with Andrei Broder, Prabhakar Raghavan and Eli Upfal.
The groups in consideration are represented as permutation groups, matrix groups over finite fields, or more generally, as ``black box groups'' (operations are performed by an oracle), always given by a list of generators.
The techniques to be illustrated on representative sample cases involve a combination of combinatorial results on finite groups, some classical elementary group theory, and a moderate use of easily stated consequences of the classification of finite simple groups.
Particularly efficient and completely elementary Monte Carlo methods, obtained over the past half year in collaboration with G. Cooperman, L. Finkelstein, E. Luks, and A. Seress, will be discussed.
Gallo, Grigoriadis and Tarjan \cite{GGT} recently examined the problem of computing online a sequence of $k$ maximum flows and minimum cuts in a network of $n$ nodes, where certain edge capacities change between each flow. They showed that for an important class of networks, the $k$ flows can be computed in $O(n^3+kn^2)$ total time, provided that the capacity changes are made ``in order''. We show how to reduce the total time to $O(n^3 + kn)$. We also show that the faster bound holds even when the capacity changes are not ``in order''. If time allows we illustrate the utility of these results by applying them to the {\it rectilinear layout problem}.
In DNA and Protein sequence alignment there is considerable disagreement about how to weight matches, mismatches and spaces. Parametric Sequence Alignment is the problem of computing the optimal valued alignment between two sequences as a {\it function} of variable weights for matches, mismatches, and spaces. For the case where all spaces are counted, we show that the functional dependence of the optimal alignments on the parameters is surprisingly simple, and can be completely determined in $O(n^{1.66}m)$ time, where $n$ and $m$ are the lengths of the two strings. For contrast, it takes $O(nm)$ time to compute just a single alignment for fixed weights.
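For reference, here is the standard O(nm) dynamic program for a single alignment with one fixed choice of weights, in Python; it is the subroutine whose optimal value the parametric analysis studies as a function of the weights. The weight names and values are mine, chosen only for illustration.

def alignment_value(s, t, w_match=1.0, w_mismatch=-1.0, w_space=-2.0):
    # V[i][j] = best value of an alignment of s[:i] with t[:j], where every
    # space is charged w_space (the "all spaces counted" convention above).
    n, m = len(s), len(t)
    V = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        V[i][0] = i * w_space
    for j in range(1, m + 1):
        V[0][j] = j * w_space
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = w_match if s[i - 1] == t[j - 1] else w_mismatch
            V[i][j] = max(V[i - 1][j - 1] + sub,    # align s[i-1] with t[j-1]
                          V[i - 1][j] + w_space,    # s[i-1] opposite a space
                          V[i][j - 1] + w_space)    # t[j-1] opposite a space
    return V[n][m]

The parametric problem asks how the optimal value and the optimal alignments change as (w_match, w_mismatch, w_space) vary, rather than for one call like alignment_value("ACCGT", "ACGGT").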
Different parts of this work are joint with Eva Tardos, Dalit Naor, and K. Balasubramanian.
We describe an O(d*n^3)-step algorithm for this purpose, and prove that it succeeds almost surely. Part one of the algorithm properly covers the ``trouble spots'' of G by a collection of disjoint paths. (This is the hard part to analyze.) Part two of the algorithm extends this cover to a full cycle by the rotation-extension technique, which is already classical for such problems.
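For background, here is a small Python sketch of the generic rotation-extension primitive that part two builds on. The representation and names are mine; this is only the classical single step, not the paper's algorithm or the covering phase on which the analysis actually hinges.

def extend_or_rotate(path, adj):
    # `path` is a simple path given as a list of vertices; `adj` maps each
    # vertex to its set of neighbours. Either extend the path by a new vertex,
    # or perform a rotation that keeps the vertex set but exposes a new endpoint.
    end = path[-1]
    on_path = set(path)
    for v in adj[end]:                        # extension step
        if v not in on_path:
            return path + [v], "extended"
    for i in range(len(path) - 2):            # rotation step
        if path[i] in adj[end]:
            # keep path[0..i], then traverse the tail in reverse; the old
            # vertex path[i+1] becomes the new endpoint
            return path[:i + 1] + path[i + 1:][::-1], "rotated"
    return path, "stuck"

def closes_to_hamilton_cycle(path, adj, num_vertices):
    # A spanning path closes into a Hamilton cycle when its endpoints are adjacent.
    return len(path) == num_vertices and path[0] in adj[path[-1]]

The real algorithm explores many rotated endpoints starting from the path cover produced in part one; the sketch shows only what a single extension or rotation does.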
This is joint work with Alan Frieze (CMU) and Eli Shamir (Hebrew University).
This is joint work with Dana Angluin of Yale.
NOTE: I will assume some familiarity with the fundamentals of computational learning theory. People not familiar with the basic definitions and results are encouraged to attend the preceding AFLB talk by Michael Kearns on December 6.
We consider the class of languages accepted in quasi-polynomial ($= 2^{\log^k n}$) time, which we denote by \~P, and its corresponding nondeterministic class N\~P. We show that if there is a \~P approximation algorithm for Clique (even within a factor of $2^{\log^c n}$ for $c < 1$) then N\~P = \~P. Thus we can conclude that if such an approximation procedure exists then:
\begin{enumerate}
\item EXPTIME = NEXPTIME
\item NP $\subseteq$ \~P
\end{enumerate}
We also discuss which better approximation procedures would imply that NP is contained in deterministic time $n^{O(\log \log n)}$.
This work uses techniques from multi-prover interactive proof systems, in particular, the recent theorem of Babai, Fortnow and Lund, showing that NEXPTIME has multi-prover interactive proofs.
This is joint work with Uriel Feige, Shafi Goldwasser and Laszlo Lovasz.
Joint work with David Shmoys and David Williamson.
Useful notions of equality and inequality are defined among positions in mathematical Go. Some Go endgames are seen to be "equal" to abstract games which are also equal to games whose rules appear quite different from the rules of Go (e.g., Domineering). There is a direct correspondence between winning strategies for these "equal" games.
A recursive abstract mapping called "cooling" maps games into games.
Cooling by a large amount (called "freezing") naturally defines a game's mean value and its "temperature", both of which are often fractions of a point. The temperature corresponds to the "value of the move" in Go.
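For background, cooling has a simple recursive definition (recalled here from the standard combinatorial game theory literature; it is not a new ingredient of this work): for $t \ge 0$,
$$ G_t \;=\; \{\, G^L_t - t \;\mid\; G^R_t + t \,\}, $$
where $G^L$ and $G^R$ range over the Left and Right options of $G$, unless some smaller $t'$ already makes $G_{t'}$ infinitesimally close to a number $m$, in which case $G_t = m$ for all $t \ge t'$. That least $t'$ is the temperature and $m$ is the mean value referred to above.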
Although the general cooling operator is many-to-one, the special case of cooling by one degree (called "chilling") is one-to-one over a large class of interesting Go endgames. Its inverse, called "warming", is monotonic. This ensures a direct correspondence between winning strategies for chilled games and warmed games. Many Go endgames chill into numbers and infinitesimals that were analyzed in "Winning Ways": star, up, down, tiny. Others chill into new types of infinitesimals with interesting properties.
Using these techniques, it is possible to determine precise and rigorous strategies for an interesting class of Go endgame problems, in reasonable time, using only pencil and paper. Yet some of these problems are so difficult that a professional Japanese 7-dan was unable to learn the solution, even after two hours of study and several unsuccessful practice plays (trying both Black and White) against a mathematically perfect opponent. Another challenging 19 x 19 problem has the property that although nearly all moves are worth no more than one point, and White can ultimately win by one point, it requires at least 101 moves to do so against an astute opponent.
Self-correcting is usually easy when the function has the random self-reducibility property. One class of functions with this property is the class of multivariate polynomials over finite fields (shown by Beaver, Feigenbaum and Lipton). We extend this result in two directions. First, we show that polynomials are random self-reducible over more general domains: specifically, over the rationals and over noncommutative rings. Second, we show that one can get self-correctors even when the program satisfies weaker conditions, i.e. when the program has more errors, or when the program behaves in a more adversarial manner by changing the function it computes between successive calls.
Self-testing is a much harder task. Previously it was known how to self-test for only a few special examples of functions, such as the class of linear functions. We show how to self-test the class of univariate polynomial functions and multivariate polynomial functions in a few variables over $Z_p$.
We also initiate the study of self-testing (and self-correcting) programs which only approximately compute $f$. This setting captures in particular the digital computation of real-valued functions. We present a rigorous framework and obtain the first results in this area: namely that the class of linear functions, the log function and floating point exponentiation can be self-tested. All of the above functions also have self-correctors.
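To convey the flavor of these tools, here is a Python sketch of a self-tester and self-corrector for the simplest case mentioned above, linear functions $f(x) = ax$ over $Z_p$, in the spirit of the Blum-Luby-Rubinfeld linearity results. It is my own illustration, with my own parameter choices, not code from this work; `program` is an untrusted black box that is supposed to compute $f$.

import random

def self_test_linear(program, p, trials=200):
    # Reject, with high probability, any program that is far from every
    # linear function, by checking program(x) + program(y) = program(x+y)
    # at randomly chosen points.
    for _ in range(trials):
        x, y = random.randrange(p), random.randrange(p)
        if (program(x) + program(y)) % p != program((x + y) % p) % p:
            return False
    return True

def self_correct_linear(program, x, p, trials=41):
    # If the program agrees with some linear f on most of Z_p, return f(x)
    # with high probability, using random self-reducibility: f(x) = f(x+r) - f(r).
    votes = {}
    for _ in range(trials):
        r = random.randrange(p)
        v = (program((x + r) % p) - program(r)) % p
        votes[v] = votes.get(v, 0) + 1
    return max(votes, key=votes.get)          # majority vote over the trials

In the corrector, each of the two queries $x+r$ and $r$ is individually uniform over $Z_p$, so a program that errs on only a small fraction of inputs returns the correct difference in most trials, and the majority vote amplifies the success probability.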
This is a result of joint work with Richard Lipton, Ronitt Rubinfeld, Madhu Sudan and Avi Wigderson.
Our approach builds on some interesting relations between those problems and Monge sequences, chordal bipartite graphs and totally balanced matrices.
(Joint work with Ilan Adler (UC Berkeley) and Alan Hoffman (IBM Yorktown))
Randomization is frequently useful for effective divide-and-conquer algorithms in geometry. By choosing a random sample of the objects defining our problem we can partition the underlying space into parts so that each part receives its "fair share" of the overall complexity of the problem. We illustrate this use of randomization by an example from the theory of arrangements. We also discuss other paradigms for the use of randomization, including randomized incremental constructions and randomized re-weighting techniques.
We will briefly develop the rudiments of the general theories proposed by Clarkson-Shor and Haussler-Welzl (epsilon nets) for formalizing the contexts in which random samples are "universally good" samples for certain types of queries. These theories help explain the unusual effectiveness of randomization in geometry. If time allows we will also mention recent work of Chazelle/Friedman, Matousek, and Agarwal in rendering some of the above techniques deterministic.
Distinguishing Sequence (State Identification): We know the state diagram of M and wish to determine its unknown initial state.
Unique Input-Output Sequence (State Verification): We wish to verify that M starts from a specific initial state.
Checking Sequence (Fault Detection): We do not know the state diagram of M. Rather, we are given the diagram of a specification machine A, and wish to check that M is a correct implementation of A.
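To make the first two notions concrete, here is a small Python sketch (representation and names are mine, not from the talk) that checks whether a candidate input sequence is a distinguishing sequence for a Mealy machine, and whether it is a UIO sequence for a particular state.

def output_sequence(delta, lam, state, inputs):
    # delta[state][symbol] is the next state; lam[state][symbol] is the output.
    out = []
    for a in inputs:
        out.append(lam[state][a])
        state = delta[state][a]
    return tuple(out)

def is_distinguishing(delta, lam, states, inputs):
    # A distinguishing sequence yields pairwise distinct output sequences from
    # every initial state, so observing the outputs identifies the start state.
    seen = set()
    for s in states:
        seq = output_sequence(delta, lam, s, inputs)
        if seq in seen:
            return False
        seen.add(seq)
    return True

def is_uio_for(delta, lam, states, s, inputs):
    # A UIO sequence for s is weaker: its output from s must differ from its
    # output from every other state, but the other states need not be told apart.
    target = output_sequence(delta, lam, s, inputs)
    return all(output_sequence(delta, lam, t, inputs) != target
               for t in states if t != s)

Checking sequences are not captured by so simple a routine, since there the implementation machine M itself, rather than a candidate input string, is the unknown.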
(This is joint work with David Lee.)
Joint work with Daphne Koller and Steven Phillips.
Let $S$ be a set containing $\alpha n$ elements belonging to the domain $D=[0,1,\dots,m]$, $\alpha <1$. In open addressing, random mappings $f(x,probe):D\times[1,\dots,n]\mapsto[0,n-1]$ are used to locate data within a size $n$ table. Collisions are resolved by placing data in the first vacant probe location within the table. In Coalesced Hashing, an item $x$ has a single probe location $f(x)$. Items colliding at an occupied location are added to the list extending from the occupant.
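For readers who have not seen these schemes, here are toy insertion routines for two of them in Python. This is my own sketch to fix the definitions; the probe functions h, h1, h2 are placeholders, not the limited-randomness constructions of the talk.

def double_hash_insert(table, key, h1, h2):
    # Open addressing with double hashing: probe h1(key) + i*h2(key) mod n and
    # place the key in the first vacant location. (Linear probing is the
    # special case of a fixed step of 1; taking n prime ensures every nonzero
    # step visits all n cells.)
    n = len(table)
    start, step = h1(key) % n, (h2(key) % n) or 1
    for i in range(n):
        slot = (start + i * step) % n
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table full")

def coalesced_insert(table, nxt, key, h):
    # Coalesced hashing: each key has a single probe location h(key); a
    # colliding key is stored in a vacant cell and linked (via nxt) onto the
    # end of the chain extending from the occupant.
    n = len(table)
    slot = h(key) % n
    if table[slot] is None:
        table[slot] = key
        return slot
    while nxt[slot] is not None:              # walk to the end of the chain
        slot = nxt[slot]
    free = n - 1
    while table[free] is not None:            # take the highest vacant cell
        free -= 1
    table[free] = key
    nxt[slot] = free
    return free

Both routines expect table (and, for coalesced hashing, nxt) to be initialized to [None] * n.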
1) We construct highly random functions to map $[0,m]\mapsto[0,n]$, where the function uses a random seed of $O(\log\log m+\log^2n)$ bits, and evaluation requires ``only'' a constant number of word operations.
2) We show these functions can be used to achieve the same asymptotic performance as fully random functions for
a) Uniform Hashing,
b) Linear Probing,
c) Double Hashing, and
d) Coalesced Hashing.
Thus, the performance of each scheme is established in a randomized sense for a class of programmable hash functions, which gives the strongest proof of performance to date.
Some of our results for c) and d) are new even in the case of full randomness.
In particular, we show that for Double Hashing, the expected insertion time for the $\alpha n$-th item is
$${1\over 1-\alpha}+O({1\over n(1-\alpha)^5}).$$
This bound extends to any Double Hashing-like scheme where the first two probes are approximately independent, and subsequent probes are computed from any reasonable deterministic function of the first two probes. The $O(1/n)$ performance penalty extends to any fixed moment of the probe count.
For Coalesced Hashing, we show how to obtain the generating function governing the asymptotic distribution of the chain lengths, as well as semiclosed forms for the exact distribution. The discrepancy between the asymptotic formula and the exact expectation can also be measured quite accurately. The results extend to Coalesced Hashing with a cellar.
The discipline of limited randomness requires fundamentally different problem models and methods of analysis, since the hash table statistics cannot be viewed as the outcome of $\alpha n$ random insertions. We argue that these restrictions incur penalties outweighed by their rewards, and close with a few questions deserving further thought.
The work on Double Hashing is joint with Jeanette Schmidt.
In 1957 Hirsch conjectured that D(d,2d)=d and D(d,n)<=n-d. The best
upper bound until a few months ago was D(d,n)
We address these concerns by introducing an important practical element that underlies the philosophy behind paging: locality of reference. We devise a graph-theoretical model, the access graph, for studying locality of reference. We use it to prove results that address the practical concerns mentioned above. In addition, we use our model to answer the following questions: How well is LRU likely to perform on a given program? Is there a universal paging algorithm that achieves (nearly) the best possible paging performance on every program? We do so without compromising the benefits of the Sleator-Tarjan model, while bringing it closer to practice.
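For concreteness, here is a small LRU simulator in Python (my own illustration, not the paper's model): it counts the page faults LRU incurs on a given reference string with a cache of k pages. The access graph constrains which reference strings a program can generate; this routine only measures one such string.

from collections import OrderedDict

def lru_faults(references, k):
    cache = OrderedDict()                     # ordered from least to most recently used
    faults = 0
    for page in references:
        if page in cache:
            cache.move_to_end(page)           # hit: becomes most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)     # evict the least recently used page
            cache[page] = True
    return faults

For example, lru_faults("abcabdacd", 3) counts faults for one short reference string; on strings generated by walks in a sparse access graph, LRU's fault rate can be far below what worst-case competitive analysis charges it with.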
This is joint work with Allan Borodin, Prabhakar Raghavan, and Baruch Schieber.
In this work, we describe the first polynomial-time combinatorial algorithms for approximately solving the multicommodity flow problem. The running time of our randomized algorithm is (up to log factors) the same as the time needed to solve $k$ single-commodity flow problems, thus giving the surprising result that approximately computing a $k$-commodity maximum flow is not much harder than computing about $k$ single-commodity maximum flows in isolation.
In fact, we prove that a $k$-commodity flow problem can be approximately solved by approximately solving $O(k \log k \log n)$ single-commodity minimum-cost flow problems. Hence our $k$-commodity algorithm runs in $O(knm\log^4 n)$ time with high probability. We also describe a deterministic algorithm that uses an $O(k)$-factor more time. Given any multicommodity flow problem as input, both algorithms are guaranteed to provide a feasible solution to a modified flow problem in which all capacities are increased by a $(1+\epsilon)$-factor, or to provide a proof that there is no feasible solution to the original problem.
We also describe faster approximation algorithms for multicommodity flow problems with a special structure, such as those that arise in the ``sparsest cut'' problems, ``minimum-ratio cut'' problems, and the uniform concurrent flow problems.
Joint work with: T. Leighton, F. Makedon, C. Stein, E. Tardos, S. Tragoudas.
To appear in: Proc. 23rd ACM Symposium on the Theory of Computing, May 1991.
Such a database must be updated in the wake of locally generated changes to its contents. Due to previous disconnections of parts of the network, a maintenance protocol may need to update processors holding widely varying versions of the database, while taking advantage of prior partial knowledge.
In this talk, we describe efficient randomized and deterministic solutions for this problem, which have only polylogarithmic overhead in both time and communication complexities. Previous solutions required polynomial overhead in at least one of these measures.
In this talk, I will describe the model of space bounded interactive proof systems, in particular when the verifier is a 2-way probabilistic finite state automaton. I will present some results on the classes of languages they accept. For example, if they are not required to halt with high probability on rejected inputs, these interactive proof systems can accept any recursively enumerable language.
I will also describe applications of these results to problems in formal language theory, the theory of time-varying Markov processes and to the non-approximability of NP-complete and P-complete problems.
There are many variants of PDL, differing in the class of programs allowed. Two of the most interesting are PDL_reg and PDL_cf, in which the programs are regular and context-free, respectively. The former correspond to iterative while-programs, or to flowcharts, and the latter to recursive procedures. As it turns out, the difficulty of deciding truth of formulas in such logics can range from being NP-complete to being highly undecidable.
The talk will describe and put into perspective the work of many people during the last 15 years. It will include some very recent results concerning the PDL of concurrency, that of recursive programs, and that of logic programs. We will also pose a number of open problems.
This combines joint work with Gemmell, Lipton, Rubinfeld, Shen and Wigderson.