Stanford Theory Lunch Speaker List
Spring 2005

Thursdays, 12:15pm, in the Theory Lounge (wing 4B in the Gates Computer Science building)

Mar 31 Don Knuth
Generating Trees
Apr 7 Dilys Thomas
Search problems in Arrays
Apr 14 Don Knuth
The Coffee Cup (or The Napkin) Problem
Apr 21 Mihaela Enachescu
On Network Coding
Apr 28 Hovav Shacham
Projective Hashing with Applications
May 5 Anupam Datta
Symbolic logic for complexity-theoretic model of cryptographic protocols
May 12 Krishnaram Kenthapadi
Balanced Allocation on Graphs
May 19 Cesar Sanchez
Thread Allocation for Distributed Real-Time and Embedded Systems
May 26 Sanatan Rai
Eigenvalue Distribution of Random Geometric Graphs


Generating Trees

Don Knuth

Expanding on ideas from Pre-Fascicle 4a: Generating all trees (version of 31 March 2005) of Volume 4 of The Art of Computer Programming, Don talks about the link between generating trees, partitions, and more. He presents a method for generating binary trees, which can be extended (see the literature) to generate ternary trees, and further still: to generate any tree with a given number of nodes (say $n$) and any sequence of $t_k$'s, where $t_k$ is the number of link fields of the $k$th node in preorder and $k$ ranges from 1 to $n-1$.
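This is not Knuth's algorithm from the fascicle, but a minimal recursive sketch of the basic task: enumerating all binary tree shapes with $n$ nodes (the function name and tuple encoding are ours).

```python
def binary_trees(n):
    """Yield every binary tree with n nodes, encoded as nested
    (left, right) tuples, with None for an empty subtree."""
    if n == 0:
        yield None
        return
    for k in range(n):                        # k nodes go to the left subtree
        for left in binary_trees(k):
            for right in binary_trees(n - 1 - k):
                yield (left, right)

# The number of shapes follows the Catalan numbers: 1, 1, 2, 5, 14, 42, ...
counts = [sum(1 for _ in binary_trees(n)) for n in range(6)]
print(counts)   # → [1, 1, 2, 5, 14, 42]
```

The Catalan recurrence $C_n = \sum_k C_k C_{n-1-k}$ is visible directly in the nested loops.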


Search problems in Arrays

Dilys Thomas

Locating an element in a one-dimensional sorted array of size n requires log(n) lookups in the cell-probe model, achieved by binary search. This is tight: no algorithm can do better than log(n) lookups. We will look at a natural extension of this problem to multi-dimensional arrays: how many lookups are required to locate an element in a partially ordered multidimensional array, i.e. an array whose elements increase in value along each dimension? An interesting exponential jump is observed as we move from one to two dimensions, after which the increase is polynomial.
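For the two-dimensional case, the classic "saddleback" (staircase) search locates an element with at most rows + cols probes; this standard technique is offered as context, not necessarily the algorithm of the talk.

```python
def saddleback_search(matrix, target):
    """Search a matrix whose rows and columns are both sorted ascending.
    Start at the top-right corner; each probe discards a full row or
    column, so at most rows + cols lookups are made."""
    if not matrix:
        return None
    row, col = 0, len(matrix[0]) - 1
    while row < len(matrix) and col >= 0:
        v = matrix[row][col]
        if v == target:
            return (row, col)
        elif v > target:
            col -= 1      # everything below in this column is even larger
        else:
            row += 1      # everything to the left in this row is smaller
    return None

m = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]
print(saddleback_search(m, 6))    # → (2, 1)
```

Linear (rather than logarithmic) probe counts in two dimensions are exactly the "exponential jump" the abstract alludes to.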


The Coffee Cup (or The Napkin) Problem

Don Knuth

Assume you have n people sitting at a circular table and n cups of coffee (each holding a napkin) in between them. It is not clear which napkin to take: the one in the left cup or the one in the right. Assuming everybody picks a direction (left or right) at random, how many people are left without a napkin (in expectation)? What if the seating is done by an adversarial maître d'hôtel?
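For the simultaneous-random-choice variant, each napkin goes unclaimed with probability 1/4 (both neighbours must look the other way), so a 1/4 fraction of diners goes without in expectation. A small simulation (our own sketch, not from the talk) confirms this:

```python
import random

def napkinless(n, rng):
    """One round: each of n diners picks the left or right napkin at
    random.  Napkin i sits between diner i and diner i+1 (mod n); a
    napkin serves only one diner, so diners left out = picks made minus
    distinct napkins claimed."""
    chosen = set()
    for i in range(n):
        # diner i's left napkin is (i - 1) mod n, right napkin is i
        chosen.add((i - 1) % n if rng.random() < 0.5 else i)
    return n - len(chosen)

rng = random.Random(2005)
trials, n = 2000, 60
avg = sum(napkinless(n, rng) for _ in range(trials)) / trials
print(avg / n)    # ≈ 0.25
```

The adversarial maître d' question, of course, is where the new results lie.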

References regarding this problem: Mathematical Puzzles: A Connoisseur's Collection by Peter Winkler. The article Napkins In a Random Setting by Anders Claesson and T. Kyle Petersen will likely soon include the latest findings presented in this talk, discovered by Don Knuth just a couple of days before this theory lunch.


On Network Coding

Mihaela Enachescu

In this talk I will introduce a recent model for multicommodity flow when the commodities are "bits" (information). Unlike the traditional model, here a node in the network can not only forward the commodities it receives, but also combine them before forwarding (by applying some function, such as XOR). This model is strictly more powerful in the case of directed graphs, especially for multicasting and k-pairs routing, but its power in undirected networks is not yet known.

Specific results covered in the talk: the min-cut/max-flow theorem for the multicast case, an example showing the logarithmic gap between network coding and multicommodity flow, also for the multicast case, and some open problems.
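The textbook illustration of the power of coding is the butterfly network: two sources multicast to two sinks through a single bottleneck edge, which carries the XOR of the two messages rather than one of them. A few lines make the recovery step concrete (the variable names are ours):

```python
# Two sources hold 4-bit messages a and b; both sinks want both messages.
a, b = 0b1011, 0b0110

coded = a ^ b          # the bottleneck edge forwards a XOR b

# Sink 1 hears a directly plus the coded stream; sink 2 hears b directly.
# Each XORs what it has against the coded stream to recover the other bit.
sink1 = (a, a ^ coded)        # a ^ (a ^ b) = b
sink2 = (coded ^ b, b)        # (a ^ b) ^ b = a

print(sink1 == (a, b), sink2 == (a, b))    # → True True
```

Plain forwarding on the same network can serve only one sink per use of the bottleneck edge, which is exactly the gap network coding closes here.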


Projective Hashing with Applications

Hovav Shacham

Let X be a set, and L \subset X a language. A hash function family is a family of functions H: K \times X -> \Pi such that, for each x \in X, and each key k \in K, H_k(x) can be computed efficiently. Such a family is *projective* if, for each k, there is a key \alpha(k) such that knowledge of \alpha(k) allows H_k(x) to be computed when x is in L, but not otherwise.

We consider two notions that projective hashes might satisfy. The first, smoothness, says that H_k(x) is independent of \alpha(k) when x is not in L. The second, universality, says that knowing H_k(x) for a single x \in L doesn't fix the value H_k(y) for some y \notin L.

We then give a simple example construction satisfying these properties, based on the Decisional Diffie-Hellman problem.
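A toy sketch of that DDH-based construction, in a tiny subgroup of order q = 11 inside Z_23^* (real instantiations use cryptographically large groups; the variable names are ours): the language is L = {(g1^r, g2^r)}, the key is k = (a, b), and the projected key alpha(k) = g1^a g2^b lets anyone holding a witness r compute the hash on words of L.

```python
import random

p, q = 23, 11
g1, g2 = 2, 4                  # both generate the order-11 subgroup of Z_23^*

rng = random.Random(0)
a, b = rng.randrange(q), rng.randrange(q)      # hashing key k = (a, b)
proj = (pow(g1, a, p) * pow(g2, b, p)) % p     # projected key alpha(k)

def H(x1, x2):
    """Full hash H_k: requires the secret key (a, b), defined on all of X."""
    return (pow(x1, a, p) * pow(x2, b, p)) % p

# A word in L = {(g1^r, g2^r)} together with its witness r:
r = rng.randrange(q)
x = (pow(g1, r, p), pow(g2, r, p))

# Projective property: on L the hash is computable from alpha(k) and the
# witness alone, since H_k(g1^r, g2^r) = (g1^a g2^b)^r = alpha(k)^r.
print(H(*x) == pow(proj, r, p))    # → True
```

Smoothness comes from DDH: on words outside L, the value g1^{ar} g2^{bs} with r != s remains uniform even given alpha(k).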

Finally, we present two major results in cryptography that rely on smooth projective hashing. The first (due to Cramer and Shoup) is the first efficient encryption scheme secure in a strong sense (the so-called CCA2 model). The second, due to Gennaro and Lindell, is the first construction for authenticated key exchange between two parties that share only a low-entropy key (e.g., a human-memorable password).


Balanced Allocation on Graphs

Krishnaram Kenthapadi

It is well known that if n balls are inserted into n bins, with high probability, the bin with maximum load contains (1+ o(1)) log n/loglog n balls. Azar, Broder, Karlin, and Upfal showed that instead of choosing one bin, if d > 1 bins are chosen at random and the ball inserted into the least loaded of the d bins, the maximum load reduces drastically to loglog n/log d + O(1).
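A minimal simulation of the d-choice process (our own sketch, not from the paper) makes the drop in maximum load visible:

```python
import random

def max_load(n, d, rng):
    """Throw n balls into n bins; each ball probes d uniformly random
    bins and lands in the least loaded of them.  Returns the max load."""
    bins = [0] * n
    for _ in range(n):
        probes = [rng.randrange(n) for _ in range(d)]
        target = min(probes, key=lambda i: bins[i])
        bins[target] += 1
    return max(bins)

rng = random.Random(42)
n = 100_000
m1, m2 = max_load(n, 1, rng), max_load(n, 2, rng)
print(m1, m2)    # d = 2 gives a dramatically smaller maximum load
```

With a single choice the maximum load grows like log n / loglog n; with two choices it collapses to loglog n / log 2 + O(1), which for n = 100,000 is a small single-digit number.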

We study the two-choice balls and bins process when balls are not allowed to choose any two random bins, but only bins that are connected by an edge in an underlying graph. We show that for n balls and n bins, if the graph is almost regular with degree n^e, where e is not too small, the previous bounds on the maximum load continue to hold. Precisely, the maximum load is loglog n + O(1/e) + O(1). So even if the graph has degree $n^{\Omega(1/\log\log n)}$, the maximum load is O(loglog n). For general $\Delta$-regular graphs, we show that the maximum load is $\log\log n + O(\frac{\log n}{\log (\Delta/\log^4 n)}) + O(1)$ and also provide an almost matching lower bound of $\log \log n + \frac{\log n}{\log (\Delta \log n)}$. Further, this does not hold for non-regular graphs, even if the minimum degree is high.

Voecking showed that the maximum load with d-choice load balancing can be further improved to loglog n / d + O(1) by breaking ties to the left. This requires d random bin choices. We show that such bounds can be achieved by making only two random accesses and querying d/2 contiguous bins in each access. Group a sequence of n bins into 2n/d groups of d/2 consecutive bins each: if each ball chooses two groups at random and is inserted into the least-loaded bin of the lesser-loaded group, then the maximum load is loglog n / d + O(1) with high probability. Furthermore, this grouping into groups of size d/2 turns out to be essential for achieving the bound: if instead we simply choose two random sets of d/2 consecutive bins, the maximum load jumps to loglog n / log d.

Joint work with Rina Panigrahy.


Symbolic logic for complexity-theoretic model of cryptographic protocols

Anupam Datta

Symbolic protocol analysis based on the Dolev-Yao model [DY84] and complexity-theoretic analysis (starting from the work of Goldwasser and Micali on encryption [GM84]) have emerged as two largely independent research areas over the last two decades. The connection between these two models has remained unclear. The symbolic model operates under the perfect cryptography assumption (e.g., in order to decrypt a ciphertext it is essential to possess the key). The attacker is allowed to perform only a fixed set of actions (e.g., decryption with a known key). This idealization has paved the way for formal analysis techniques and tools, which have been successfully applied to practical protocols. On the other hand, the complexity-theoretic model allows more fine-grained specification of protocol properties (e.g., secrecy of a message is defined as not possessing any partial information about the message). The attacker model is much more powerful: she can carry out any probabilistic polynomial-time computation in order to mount an attack. However, protocol correctness proofs are long, detailed, done by hand, and require explicit reasoning about probability and complexity.

In this work, our goal is to get the best of both worlds by (i) using a symbolic logic for specifying and proving protocol properties; while (ii) using the complexity-theoretic model for interpreting these properties and defining the execution model, in particular, the adversary capabilities. Our result is a logic called Computational PCL which is developed along these lines. The soundness theorem for CPCL's proof system ensures that our axiomatic proofs (which do not explicitly mention probability or complexity) carry the same meaning as hand-proofs done by cryptographers.

Joint work with A. Derek, J. C. Mitchell, V. Shmatikov and M. Turuani. To appear in ICALP 2005.


Thread Allocation for Distributed Real-Time and Embedded Systems

Cesar Sanchez

Deadlock situations (also known as ``deadly embrace'') have been studied in Computer Science since the 1960s. Dijkstra and others studied the problem in the centralized case and proposed solutions to it. These solutions can be classified into three categories:

1. Deadlock detection: this is the common approach in databases, where the optimistic assumption is made that deadlocks are infrequent. A deadlock detection algorithm is run on the side, together with a mechanism to roll back deadlocked processes.

2. Deadlock prevention: one of the necessary conditions for the occurrence of deadlocks is (statically) eliminated, typically at a high price in concurrency.

3. Deadlock avoidance: at run-time, before each resource is granted, a check is performed to guarantee that the resulting state is ``safe''; otherwise, the resource is not granted. In a safe state, every resource is guaranteed to be ``potentially'' available.
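The classic centralized instance of category 3 is Dijkstra's banker's algorithm; a minimal safety check (our own illustration, not the distributed protocol of this talk) looks like this:

```python
def is_safe(available, allocation, need):
    """Banker's-algorithm safety check: is there an order in which every
    process can run to completion, each releasing its held resources when
    done?  available: free units per resource type; allocation[i] and
    need[i]: units held / still needed by process i."""
    work = list(available)
    done = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, finished in enumerate(done):
            if not finished and all(n <= w for n, w in zip(need[i], work)):
                # process i can finish and return its holdings to the pool
                work = [w + a for w, a in zip(work, allocation[i])]
                done[i] = True
                progress = True
    return all(done)

# A request is granted only if the state after granting it is still safe:
print(is_safe([3, 3], [[1, 0], [2, 1]], [[1, 1], [0, 2]]))    # → True
```

The distributed difficulty is exactly that this check, as written, consults a single global state.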

Different solutions to the deadlock avoidance problem vary depending on the knowledge of the processes and the resources they demand.

In the distributed case, it is known that the general solution for deadlock avoidance is impractical, since it requires maintaining global states and global atomicity of actions.

We present here efficient solutions for the particular case of distributed deadlock avoidance where (a) threads are the only resource managed, and (b) all processes are instances of static (and acyclic) call graphs. This is commonly the case in distributed real-time and embedded systems.

This is joint work with Henny Sipma and Zohar Manna, as a part of a broader collaboration with Venkita Subramonian and Chris Gill from Washington University.


Eigenvalue Distribution of Random Geometric Graphs

Sanatan Rai

Random geometric graphs are a standard abstract model for wireless sensor networks. I shall present recent results showing that the spectrum of the random graph is close to the spectrum of the deterministic grid. Time permitting, I shall draw connexions with random matrix theory and free probability.
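For concreteness, a random geometric graph places n points uniformly in the unit square and joins pairs within distance r; the sketch below (our own, pure-Python illustration) builds one and estimates its largest adjacency eigenvalue by power iteration.

```python
import math, random

def random_geometric_graph(n, radius, rng):
    """n uniform points in the unit square; edge when distance <= radius.
    Returns a dense adjacency matrix as a list of lists."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= radius:
                adj[i][j] = adj[j][i] = 1.0
    return adj

def top_eigenvalue(adj, iters=100):
    """Power iteration for the largest adjacency eigenvalue (the matrix is
    symmetric and nonnegative, so the dominant eigenvalue is real)."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0:
            return 0.0
        v = [x / lam for x in w]
    return lam

rng = random.Random(7)
adj = random_geometric_graph(120, 0.15, rng)
lam = top_eigenvalue(adj)
max_deg = max(sum(row) for row in adj)
print(lam <= max_deg + 1e-9)   # the top eigenvalue never exceeds the max degree
```

The talk's results compare the full eigenvalue distribution of such graphs to that of the grid, for which the spectrum is known in closed form.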

