STC is the Stanford Theory Colloquium. What follows is an archive of many of the talks that have been given in the Colloquium.

Talks are usually held in Jordan Hall, on the Main Quad of Stanford's campus; some talks are held in the Gates Building, near the Main Quad.

Contents:

  1. 16 April 1997: Bernard Chazelle (Princeton). Discrepancy Theory and Computational Geometry.
  2. 31 May 1996: Sanjeev Arora (Princeton). Polynomial-time Approximation Schemes for Euclidean TSP and other Geometric Problems.
  3. 30 May 1996: Laszlo Lovasz (Yale University). The delight of walking randomly.
  4. 11 April 1996: Christos Papadimitriou (UC Berkeley). Computational Approaches to Organization Theory.
  5. 25 January 1996: Andrew Yao (Princeton). An Overview of Quantum Cryptography.
  6. 1 December 1995: Frank Harary (New Mexico State University). Recent results on hypercube theory.
  7. 9 November 1995: Prabhakar Raghavan (IBM Almaden). Queueing theory without independence assumptions.
  8. 23 May 1995: N.G. de Bruijn (Eindhoven University). Reflections on type-theoretical computer-aided checking of mathematics.
  9. 5 April 1995: Robin Milner (Cambridge University). Some applications of the pi calculus.
  10. 4 November 1992: Christos Papadimitriou (UC San Diego). Complexity as a Metaphor.
  11. 19 May 1994: Moshe Y. Vardi (IBM Almaden and Rice University). Infinitary logics in computer science.
  12. 4 May 1994: Leonard Adleman (University of Southern California). Molecular computation of solutions to combinatorial problems.
  13. 21 April 1994: Umesh Vazirani. "Go with the winners" algorithms.
  14. 10 February 1994: Sanjeev Arora (UC Berkeley). Probabilistic checking of proofs and hardness of approximation problems.
  15. 18 November 1993: Tom Leighton (MIT). Multicommodity flows: A survey of recent research.
  16. 1 November 1993: David S. Johnson (AT&T Bell Laboratories). Average-case analysis of the Best Fit bin packing algorithm.
  17. 11 March 1993: Ashok K. Chandra (IBM Almaden). Towards More Realistic Models of Parallel Computers.
  18. 9 February 1993: David Harel (Weizmann Institute). Towards a theory of infinite computable structures and databases.
  19. 9 February 1993: Samson Abramsky (Imperial College). Games, types, and interactions.
  20. 12 November 1992: Donald Knuth (Stanford). Textbook examples of recursion.
  21. 21 May 1992: Joel Spencer (NYU). Flip a coin! From Erdos to algorithms.
  22. 27 April 1992: Christos Papadimitriou (UC San Diego). Inefficient existence proofs and complexity.
  23. 2 April 1992: Ronald Fagin (IBM Almaden). Finite model theory: A personal perspective.
  24. 5 March 1992: Richard M. Karp (UC Berkeley and ICSI). Physical mapping of chromosomes: A combinatorial problem in molecular biology.

16 April 1997

Jordan Hall 040. 4:15 p.m.

Bernard Chazelle (Princeton)

Discrepancy Theory and Computational Geometry

The recent development of a theory of computational geometric sampling has revolutionized the design of geometric algorithms and led to the solution of some of the most outstanding problems in the field. Much of this development is due to the interplay between computational geometry and discrepancy theory. I will discuss some intriguing aspects of this development, including the use of data structuring ideas to prove theorems in discrepancy theory.

Biography: Bernard Chazelle is professor of computer science at Princeton University. He received his PhD in computer science from Yale University in 1980. His research interests lie in the design and analysis of algorithms, in particular geometric algorithms. Recently, his work has focussed on derandomization, graph algorithms, and decomposition algorithms for 3D solids.

WWW: http://www.cs.princeton.edu/~chazelle


31 May 1996

Gates Building B08. 11:00 a.m.

Sanjeev Arora (Princeton)

Polynomial-time Approximation Schemes for Euclidean TSP and other Geometric Problems

The Euclidean Traveling Salesman Problem is NP-hard. In 1976, Christofides discovered a polynomial-time algorithm that always computes a salesman tour whose cost is at most 1.5 times the optimal. No better algorithm has been discovered since.

In this talk we will present a new algorithm that is an approximation scheme: for any fixed c>0, it computes a tour whose cost is at most 1+c times the optimal. The algorithm's running time is n^{O(1/c)}.

The algorithm generalizes to many other geometric problems, including the famous Minimum Steiner Tree problem. It also generalizes (with some increase in running time) to d-dimensional instances.
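As a point of reference for these guarantees, the sketch below is a minimal Python implementation of the much simpler MST-doubling heuristic for metric instances (it is not Christofides' algorithm and not the new approximation scheme): a preorder walk of a minimum spanning tree yields a tour of cost at most twice the optimum. The random point set and all names are illustrative.

  # MST-doubling heuristic for metric TSP: the tour obtained by a preorder
  # walk of a minimum spanning tree costs at most twice the optimum, since
  # the MST costs at most OPT and shortcutting never increases metric cost.
  import math, random

  def mst_doubling_tour(points):
      n = len(points)
      def dist(a, b):
          return math.dist(points[a], points[b])
      # Prim's algorithm: grow an MST from vertex 0, recording each vertex's parent.
      in_tree = [False] * n
      best = [math.inf] * n
      parent = [0] * n
      best[0] = 0.0
      children = [[] for _ in range(n)]
      for _ in range(n):
          u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
          in_tree[u] = True
          if u != 0:
              children[parent[u]].append(u)
          for v in range(n):
              if not in_tree[v] and dist(u, v) < best[v]:
                  best[v], parent[v] = dist(u, v), u
      # Preorder (depth-first) walk of the MST, visiting each vertex once.
      tour, stack = [], [0]
      while stack:
          u = stack.pop()
          tour.append(u)
          stack.extend(reversed(children[u]))
      cost = sum(dist(tour[i], tour[(i + 1) % n]) for i in range(n))
      return tour, cost

  random.seed(0)
  pts = [(random.random(), random.random()) for _ in range(50)]
  tour, cost = mst_doubling_tour(pts)
  print("tour length:", round(cost, 3), "(at most twice the optimum)")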

Biography: Sanjeev Arora is a faculty member at Princeton University. His research interests include computational complexity, uses of randomness in computation, probabilistically checkable proofs, and computing approximate solutions to NP-hard optimization problems.

WWW: http://www.cs.princeton.edu/~arora/


30 May 1996

Laszlo Lovasz (Yale University)

The delight of walking randomly

Imagine you arrive at a wonderful city you have never been to before: say, Paris. Leaving your hotel, you stroll along a street to the next corner; then you make a random turn and walk a block; then you make a random turn again and walk to the next corner, and so on.

How much time will you need (on average) before you get back to your hotel? Or before you get to the Louvre? How long does it take to see the whole city? How long does it take before you are hopelessly lost (i.e., you can be anywhere with the same probability)?

To the physicist, you are performing Brownian motion; to the probabilist, you are following a Markov chain; and to the computer scientist, you are using the best (in many cases, the only known) way to generate a random place in Paris.
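A few lines of simulation convey the flavor of the first question. The setting below is an illustrative assumption (a 10 x 10 street grid with wrap-around and uniformly random turns), not anything from the talk; for a random walk on such a graph the expected return time to the starting corner is 2|E|/degree = 100 exactly, and the simulation should come out close to that.

  # Estimate the expected time for a random walker on a 10x10 toroidal street
  # grid to return to the starting corner.  Theory: the expected return time to
  # a vertex v is 2|E|/deg(v); here |E| = 200 and deg(v) = 4, so the answer is 100.
  import random

  def return_time(n=10):
      start = pos = (0, 0)
      steps = 0
      while True:
          dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
          pos = ((pos[0] + dx) % n, (pos[1] + dy) % n)
          steps += 1
          if pos == start:
              return steps

  random.seed(1)
  trials = 2000
  print(sum(return_time() for _ in range(trials)) / trials)   # close to 100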

Biography: László Lovász was born on March 9, 1948 in Budapest, Hungary. He is married, has 4 children. He obtained his doctoral degree in mathematics from the Eötvös Loránd University in Budapest, Hungary in 1971. Currently he is Professor of Mathematics and Computer Science at Yale University. He is a member of the Hungarian Academy of Sciences, and the Academia Europaea. His field of research is discrete mathematics, in particular its applications in the theory of algorithms and the theory of computing.

WWW: http://www.cs.yale.edu/HTML/YALE/CS/Brochure/faculty/lovasz.html


11 April 1996

Christos Papadimitriou (UC Berkeley)

Computational Approaches to Organization Theory

Starting with Herbert Simon almost four decades ago, and recently with increased intensity, economists have studied organizations as systems of decision-making agents that operate under imperfect information, communication, rationality, and coordination. In theoretical computer science, on the other hand, the study of these and similar constraints is commonplace. In this talk I will recount several recent instances in which work in theoretical computer science (namely, work on on-line algorithms and combinatorial optimization, with Deng Xiaotie and Mihalis Yannakakis) seems to shed some light on aspects of organization theory. In restricted contexts, one can prove results that can be paraphrased as organizational conventional wisdom: "Never let too many managers compete for the same resource," and "A manager should roughly agree with each subordinate on issues of the subordinate's domain."

Biography: Christos H. Papadimitriou currently teaches at the University of California at Berkeley. Before joining UCB, he taught at Harvard, MIT, Stanford, Athens Polytechnic, and UC San Diego. He has published four books: Computational Complexity, The Theory of Database Concurrency Control, Combinatorial Optimization: Algorithms and Complexity, and Elements of the Theory of Computation. His interests include the theory of algorithms and complexity and its applications to databases, optimization, artificial intelligence, and game theory.

WWW: http://HTTP.CS.Berkeley.EDU/~christos


25 January 1996

4:15 p.m.

Andrew Yao (Princeton)

An Overview of Quantum Cryptography

In recent years, there has been much interest in the possible use of quantum mechanical effects to perform computational tasks that are hard by classical means. Research efforts can be broken down into two broad categories: "quantum computation" and "quantum cryptography". The former is concerned with constructing quantum computers to solve currently time-consuming computations such as factoring large integers, while the latter is aimed at designing communication protocols, with the use of quantum signals, to achieve cryptographic goals. This talk will be focused on the latter subject. We will introduce the basic principles of quantum cryptography, and review the theoretical and experimental results that have been obtained. No previous knowledge of quantum computation will be assumed.
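The flavor of such protocols can be conveyed with a purely classical toy: the sketch below simulates only the sifting step of BB84, the 1984 Bennett-Brassard key-distribution protocol that is the best-known example of the genre. No eavesdropper is modelled, measurement in the wrong basis is replaced by a coin flip, and all parameters and names are illustrative.

  # Classical toy simulation of BB84 sifting: Alice sends random bits in random
  # bases, Bob measures in random bases (wrong basis = random outcome), and the
  # two keep only the positions where their bases happened to agree.
  import random

  def bb84_sift(n=32, rng=random.Random(7)):
      alice_bits  = [rng.randint(0, 1) for _ in range(n)]
      alice_bases = [rng.choice("+x") for _ in range(n)]    # rectilinear / diagonal
      bob_bases   = [rng.choice("+x") for _ in range(n)]
      bob_bits = [b if ab == bb else rng.randint(0, 1)
                  for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
      key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
      key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
      return key_a, key_b

  key_a, key_b = bb84_sift()
  print(key_a == key_b, len(key_a))   # keys agree; about half the positions survive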

Biography: Andrew C. Yao is William and Edna Macaleer Professor of Engineering and Applied Science at Princeton University. He received a Ph.D. degree in Physics from Harvard University in 1972, and a Ph.D. degree in Computer Science from the University of Illinois in 1975. He had served on the faculty at MIT, UC Berkeley and Stanford before joining Princeton University in 1986. Andrew Yao is a fellow of the ACM, and he was awarded the SIAM George Polya Prize in 1987 for his work in computational complexity.


1 December 1995

Jordan Hall 041. 3:30 p.m.

Frank Harary (New Mexico State University)

Recent results on hypercube theory

Motivated by the increasing use of hypercubes in the architecture of massively parallel computers, we are investigating their graph theoretic structure. First, several equivalent characterizations and presentations of hypercubes, Q = Q_n, are given. It is well known that boolean functions can be regarded as subsets of the node set of a hypercube. In fact, boolean algebras are representable as hypercubes. We shall discuss cubical graphs and their cubical dimensions, spanning subgraphs of Q, embedding into Q of arbitrary graphs by subdivision of edges, Gray codes as hamiltonian cycles, packing and mispacking, fault tolerance, and the hamiltonicity of the middle bigraph of odd Q.
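One item in this list is easy to make concrete: the standard reflected binary Gray code lists the 2^n vertices of Q_n so that consecutive codewords (cyclically) differ in exactly one bit, i.e., it traces a Hamiltonian cycle in the hypercube. A minimal sketch, with an illustrative function name:

  # Reflected binary Gray code: consecutive codewords differ in one bit, and so
  # does the last/first pair, giving a Hamiltonian cycle in the hypercube Q_n.
  def gray_code(n):
      if n == 0:
          return [""]
      prev = gray_code(n - 1)
      return ["0" + w for w in prev] + ["1" + w for w in reversed(prev)]

  cycle = gray_code(4)
  print(cycle)
  assert all(sum(a != b for a, b in zip(cycle[i], cycle[(i + 1) % 16])) == 1
             for i in range(16))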

Biography: FRANK HARARY holds the PhD from the U of California, Berkeley, 1948, in mathematical logic. This degree is so old that it was obtained B.C. (before computers). He has been awarded three honorary doctorates: U of Aberdeen, Scotland 1975 (mathematics); U of Lund, Sweden 1978 (social sciences); U of Exeter, England 1992 (computer science). He holds fellowships at both Cambridge U (Churchill College) and Oxford U (Wolfson College) and is also a Fellow of the National Academy of Sciences, India. He was awarded a Humboldt Senior Fellowship Prize in 1978. He founded both the Journal of Combinatorial Theory (1966) and the Journal of Graph Theory (1977, when it won the American Association of Publishers award for "Best new journal of the year"). He is a member of the editorial board of 14 scholarly journals. He has published over 600 papers on graph theory and its applications, not only in mathematics and computer science journals, but also in the fields of anthropology, art, biology, chemistry, geography, linguistics, mechanical engineering, operations research, physics, psychology, and statistics. He has written 7 books and edited 10 others. His 1969 book, "Graph Theory", became the fifth most cited work in the mathematical research literature. He has delivered over one thousand invited lectures at conferences and universities in a total of 71 countries. In particular, he has spoken in all 50 states of the USA and all 10 provinces of Canada. He has completed 5 alphabetical lists from A to Z, namely: co-authors, journals of published papers, American universities addressed, American cities where lectures have been given, and foreign cities. From the U of Michigan he is professor emeritus of mathematics. He is also distinguished professor emeritus of computer science at New Mexico State U.


9 November 1995

Jordan Hall 040. 4:15 p.m.

Prabhakar Raghavan (IBM Almaden)

Queueing theory without independence assumptions

Classical queueing theory relies heavily on unproven independence assumptions --- for instance that the behavior of a node in a network is independent from one time step to the next. We describe two approaches to studying queueing phenomena without such assumptions, pointing out the advantages and disadvantages of each. We discuss applications to contention resolution (as in ethernet protocols) and to various packet routing protocols.

Biography: Prabhakar Raghavan received his PhD from the University of California, Berkeley, in 1986. He worked at the Mathematical Sciences Department of IBM's T.J. Watson Research Center until July 1995; he is currently Manager of Mathematics and Related Computer Science at IBM's Almaden Research Center. His research interests are centered on applications of randomized algorithms and combinatorial optimization.


23 May 1995

Jordan Hall 040. 4:15 p.m.

N.G. de Bruijn (Eindhoven University)

Reflections on type-theoretical computer-aided checking of mathematics

Around 1967, at the very start of the mathematics correctness checking project ``Automath'', a number of ideas were developed that evolved into fundamental decisions about the design of the system that was to be used. At that time the concept of such a project got hardly any moral support from the worlds of mathematicians, logicians and computer scientists. This may account for the fact that many of the ideas involved in Automath were developed in some kind of isolation. Some of those ideas were quite unconventional at that time, and some of them still are. The talk will try to explain where those ideas came from.

Biography: Professor de Bruijn is one of the world's most distinguished mathematicians. After writing many deep papers in mathematical analysis during the 1940s and 1950s, culminating in his definitive book on asymptotic methods, he began to turn his attention to questions of automating the validation of mathematical proofs. He has published papers on a great variety of topics, including an early Dutch computer program for pentominoes. He is also known for many theorems in combinatorial mathematics and graph theory, for studies of Penrose tiles, etc.

Related paper: N.G. de Bruijn, Reflections on Automath. In: Selected Papers on Automath, edited by R.P. Nederpelt, J.H. Geuvers and R.C. de Vrijer, Studies in Logic, vol. 133, pp. 201-228. North-Holland 1994.


5 April 1995

Jordan Hall 040. 4:15 p.m.

Robin Milner (Cambridge University)

Some applications of the pi calculus

The pi calculus was originated by Milner, Parrow and Walker, building on ideas of Nielsen and Engberg. It expresses process systems which change their connectivity, i.e., "mobile" systems. This allows one to use it for two purposes which apparently differ widely. The first is to analyse the disciplined mobility exhibited by certain practical systems, such as a mobile phone system, by expressing invariants which they satisfy. The second is to provide a common substrate for functional programming and communicational behaviour.

I shall discuss both these applications, as far as time allows. Along the way I shall define the pi calculus formally, and then use whatever of its theory is needed for the applications. In particular, I shall use a simple system of sorts (analogous to simple types in lambda calculus) in a refined version due to Pierce and Sangiorgi. I shall also explain the representation of the lambda calculus in the pi calculus, and show that its soundness depends on certain key properties of bisimilarity.

No previous knowledge of the pi calculus will be assumed.

Biography: 1991 Turing Award winner Robin Milner was appointed Professor of Computer Science at Cambridge (UK) in January 1995. Before that he was a Professor at Edinburgh, where he worked for twenty-two years. In 1986 he became founding Director of the Laboratory for Foundations of Computer Science at Edinburgh.


4 November 1992

Christos Papadimitriou (UC San Diego)

Complexity as a Metaphor

In conventional applications of complexity, the computational problem of interest is shown to be, say, NP-hard and therefore presumably intractable. This talk deals with a metaphorical application of complexity, in which NP-completeness of a problem, typically one that is rather artificial and of no direct computational interest, is used as indirect evidence that an area or approach is mathematically nasty or conceptually complex. We discuss examples in which computational complexity is used as an allegory for chaos in dynamical systems, unfairness in mathematical economics, and bounded rationality in game theory.

Biography: Christos H. Papadimitriou is the Irwin and Joan Jacobs Professor of Information and Computer Science at the University of California at San Diego. Before joining UCSD, he taught at Harvard, MIT, Stanford, and Athens Polytechnic. He has published four books, "Computational Complexity," "Elements of the Theory of Computation," "Combinatorial Optimization: Algorithms and Complexity," and "The Theory of Database Concurrency Control," as well as numerous journal articles on various aspects of the theory of algorithms and complexity, as well as its applications to optimization, databases, logic, artificial intelligence, and mathematical economics.


19 May 1994

Jordan Hall 041. 4:15 p.m.

Moshe Y. Vardi (IBM Almaden and Rice University)

Infinitary logics in computer science

Infinitary logic extends first-order logic by allowing infinitary conjunctions and disjunctions (i.e., conjunctions with an infinite number of conjuncts and disjunctions with an infinite number of disjuncts). One usually thinks of infinitary logic as a fairly esoteric logic, which is not of much interest in computer science. Surprisingly, a certain fragment L of infinitary logic is of great interest in computer science. This fragment L is obtained by restricting formulas, which can be of infinite length, to contain a finite number of distinct variables. The advantage of dealing with L is that its expressive power can be completely characterized in game-theoretic terms. I will describe applications of this logic to the study of 0-1 laws and the expressive power of database logic programs.

The talk, which represents joint work with P. Kolaitis, will be self-contained. Only a basic familiarity with first-order logic will be assumed.

Biography: Moshe Vardi is interested in applications of logic to computer science: database theory, finite-model theory, knowledge theory, and program specification and verification. Recent work includes the use of knowledge to analyze distributed systems, connections between fixpoint logics and complexity classes, and using automata to verify concurrent and nondeterministic programs. Currently serving as the chairperson of the Rice University Computer Science Department, he is also affiliated with the IBM Almaden Research Center.


4 May 1994

Leonard Adleman (University of Southern California)

Molecular computation of solutions to combinatorial problems

Recently a small instance of the `Hamiltonian path problem' was encoded in molecules of DNA and solved inside a test tube using standard methods of molecular biology. In this talk, that experiment will be reviewed and its implications will be discussed. Can practical molecular computers actually be built? Might they be as much as a billion times faster than current supercomputers? Are there implications for biology, chemistry and medicine? What are the directions for future research? It is hoped that an exchange of ideas among biologists, chemists, computer scientists, engineers, mathematicians, physicists and others will ensue.

FREE SUPER COMPUTERS will be given to all who attend this talk.

Biography: Currently the Henry Salvatori Professor of Computer Science at USC, Leonard Adleman coinvented the RSA public key cryptosystem and has worked on primality testing algorithms. He and his student Fred Cohen worked on computer viruses, and he currently also does research on AIDS and immunology. A former MIT associate professor of mathematics, he received his Ph.D. from UC Berkeley.


21 April 1994

Jordan Hall 041. 4:15 p.m.

Umesh Vazirani

"Go with the winners" algorithms

We can view certain randomized optimization algorithms as rules for randomly moving a particle around the state space of all possible solutions to the optimization problem. One general paradigm for designing heuristics is to run several simulations simultaneously, and every so often classify the particles as "doing well" or "doing badly," and move each particle that is "doing badly" to the position of one that is "doing well." Such "go with the winners" schemes are obviously reminiscent of genetic algorithms but the latter typically have much additional special structure. We give a rigorous analysis of such a "go with the winners" scheme in the concrete setting of searching for a deep leaf in a tree. The running time of the scheme is at most a polynomial in parameters of the tree being explored.

By contrast, more straightforward search strategies can be shown to take exponential time in the worst case. Our original motivation for studying this model was that it mimics the polynomial time behavior of simulated annealing algorithms. The tree analysis suggests that a "go with the winners" implementation of simulated annealing might be superior to several independent runs of simulated annealing.

Joint work with David Aldous.
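A toy rendering of the tree search described above may help fix ideas; the random-tree model (each non-root node is a leaf with probability 1/3, otherwise it has two children), the particle count, and all names are illustrative assumptions rather than anything analyzed in the talk.

  # "Go with the winners" on a lazily generated random binary tree: particles
  # stuck at leaves ("doing badly") jump to the node of a randomly chosen
  # surviving particle, and then every particle takes one random step down.
  import random

  SEED = 3

  def is_leaf(node):                 # node = tuple of 0/1 branch choices from the root
      if not node:                   # treat the root as internal
          return False
      return random.Random(hash((SEED,) + node)).random() < 1/3

  def go_with_the_winners(num_particles=64, max_depth=200, rng=random.Random(1)):
      particles = [()] * num_particles            # everyone starts at the root
      for depth in range(max_depth):
          winners = [p for p in particles if not is_leaf(p)]
          if not winners:                          # every particle is stuck at a leaf
              return depth
          particles = [p if not is_leaf(p) else rng.choice(winners) for p in particles]
          particles = [p + (rng.randint(0, 1),) for p in particles]
      return max_depth

  print("depth reached:", go_with_the_winners())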


10 February 1994

Jordan Hall 041. 4:15 p.m.

Sanjeev Arora (UC Berkeley)

Probabilistic checking of proofs and hardness of approximation problems

The classical theory of NP-completeness, pioneered by Cook, Karp and Levin in the 1970s, explains the apparent computational intractability of optimization problems by proving them NP-hard. But its techniques were unable to explain why, for many optimization problems, even finding good approximation algorithms is hard (or at least, why significant amounts of research failed to find such algorithms).

Recent work has yielded a simple explanation: many NP-hard optimization problems are also NP-hard to approximate. Among the problems for which this has been proved are Independent Set, Chromatic Number, Set Cover, Vertex Cover, Max-Cut, Decoding to the nearest codeword, Learning in the presence of Errors, and many others. At the heart of these new results is a new type of NP-completeness reduction which acts globally (as opposed to the classical reductions, which used local transformations involving gadgets etc.). A more interesting way to view this reduction is as a new probabilistic definition of NP: NP is exactly the class of languages for which membership proofs (i.e., certificates) can be checked in polynomial time by using only O(log n) random bits and examining only O(1) bits in them.

The talk will be a survey of this new area (including the work of other researchers) and its open problems.

Biography: Sanjeev Arora is a graduate student at UC Berkeley and has co-authored several papers which contributed to the above theory. These will form part of his dissertation (he is graduating this year). His other research interests include computational complexity, and the computational uses of randomness.


18 November 1993

Jordan Hall 041. 4:15 p.m.

Tom Leighton (MIT)

Multicommodity flows: A survey of recent research

The multicommodity flow problem consists of shipping a collection of different commodities through a common network so that the total flow going through each edge in the network does not exceed its capacity. Multicommodity flow problems arise in a wide variety of applications and have been extensively studied during the past several decades.
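In its linear-programming form (stated here with generic notation that need not match the talk's), the maximum concurrent flow version of the problem asks, given demands d_i between source-sink pairs (s_i, t_i) and edge capacities c(e), for the largest z such that a fraction z of every demand can be shipped simultaneously:

  maximize    z
  subject to  f_i routes z * d_i units from s_i to t_i, obeying flow
              conservation at every node other than s_i and t_i,
              sum_i f_i(e) <= c(e)   for every edge e,
              f_i(e) >= 0.

Unlike the single-commodity case, the optimum z can be strictly smaller than the natural cut bound, and bounding that gap is the subject of the max-flow/min-cut theorems mentioned below.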

In the talk, we will survey recent work on improved algorithms for the multicommodity flow problem. Particular emphasis will be placed on approximation algorithms that can compute near optimal flow paths in a relatively small amount of time. A new and very simple local-control algorithm for multicommodity flow will also be presented. The latter algorithm is notable in that it does not rely on augmenting paths, shortest paths, min-cost paths, or similar techniques to push flow through the network. As a consequence, the algorithm is particularly well-suited for applications involving distributed networks.

We will also briefly mention recent work on the development of a max-flow/min-cut theorem for multicommodity flows.

The talk will be introductory in nature.

Biography: F. Thomson Leighton, professor of Applied Mathematics at MIT and a member of the MIT Laboratory for Computer Science, performs research in algorithms, VLSI, complexity theory, and combinatorics, with an emphasis on parallel algorithms and architectures. He is currently the editor-in-chief of the Journal of the ACM.


1 November 1993

Jordan Hall 040. 4:15 p.m.

David S. Johnson (AT&T Bell Laboratories)

Average-case analysis of the Best Fit bin packing algorithm

We show how to apply dynamic and linear programming to design sophisticated potential functions for infinite, multi-dimensional Markov chains, thereby deriving bounds on the expected quality of packings produced by the BEST FIT bin packing algorithm and solving a long-standing open problem.

In bin packing, we are given a sequence of N items with sizes in the interval (0,1], and asked to pack them into a minimum number of unit-capacity bins. Under BEST FIT (in practice perhaps the best of the simple on-line heuristics), items are packed in order, with each item going into the bin with the smallest gap that can hold it (ties broken arbitrarily). If item sizes are uniformly distributed in the interval (0,1], it can be shown that the expected waste (sum of the gaps in all the bins) grows sublinearly with N. Surprisingly, if the distribution of sizes is uniform over an interval (0,u], u < 1, experiments indicate that the waste grows linearly. This also holds if sizes are uniform over the discrete sizes i/K, 1 <= i <= J, when J < K-2 is sufficiently large. Unfortunately, proving this observation (for ANY such continuous or discrete distribution), has remained an open problem for almost a decade.
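Best Fit itself takes only a few lines to implement; the sketch below (illustrative code, not from the talk) works in integer units of 1/K so that the arithmetic is exact, and uses the distribution with K=11, J=8 mentioned below. Observing the asymptotic growth rate of the waste reliably would require far larger N, and averaging over many runs, than this toy run performs.

  # Best Fit as defined above: each item goes into the open bin whose gap is the
  # smallest one that can still hold it; otherwise a new unit-capacity bin is
  # opened.  Sizes are i/K, so bins are tracked by their integer gap 0..K-1.
  import random

  def best_fit_waste(items, K):
      bins_with_gap = [0] * K            # bins_with_gap[g] = number of open bins with gap g
      for s in items:
          g = next((g for g in range(s, K) if bins_with_gap[g]), None)
          if g is None:                  # no open bin can hold the item: open a new one
              bins_with_gap[K - s] += 1
          else:                          # fill the tightest feasible bin
              bins_with_gap[g] -= 1
              bins_with_gap[g - s] += 1
      return sum(g * count for g, count in enumerate(bins_with_gap))

  rng = random.Random(0)
  K, J = 11, 8                           # item sizes i/K with 1 <= i <= J
  for N in (100_000, 200_000, 400_000):
      items = [rng.randint(1, J) for _ in range(N)]
      print(N, best_fit_waste(items, K)) # total waste, in units of 1/K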

Ed Coffman, Peter Shor, Richard Weber and I have been studying the simplest of the distributions for which linear waste appears to occur (the discrete uniform distribution with K=11,J=8). Assuming one is prepared to trust the result of a 24-hour computer run, we now have a proof. This talk will cover the bin packing background, and then give an overview of the mathematical and algorithmic techniques involved in our proof, as well as the general applicability and limitations of the approach. We also discuss related computer proofs that show that the expected waste is bounded when K=11,J<8 or K<11,J<K-1.

The talk should be accessible to a general Math/CS/OR audience.


11 March 1993

Ashok K. Chandra (IBM Almaden)

Towards More Realistic Models of Parallel Computers

A PRAM is a very convenient model of parallel computation, both for theoretical analysis and for programming. Unfortunately, it cannot be built. In practice, access to remote memory takes much more time than local access - sometimes by a factor of a thousand or more when data transfers take place using messages or pages. Such large factors have to be addressed explicitly when designing efficient algorithms. This is often done in an ad-hoc fashion. Is there a robust model of computation which can take these issues into account and guide efficient algorithm design?

Communication delays arise from two major limitations: bandwidth and latency, which have different consequences for algorithm design. A Block PRAM model of computation (BPRAM for short) models these two effects by allowing processors to communicate by transferring blocks of data - here n words can be transferred in time l+bn, where l corresponds to latency, and b represents the (inverse of) bandwidth per processor. This model can be used to study several phenomena which arise in practice: inherent limits to parallelizing communication, computation-communication tradeoffs, bandwidth limits for problems such as sorting, FFT etc., importance of using both temporal and spatial locality of reference, the role of the l·p product of a machine as a determiner of the difficulty of generating efficient algorithms, etc. Furthermore, one can model phenomena akin to practice where tiny changes in the communication pattern can result in catastrophic changes in the running time of certain algorithms.
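A quick calculation with illustrative numbers (not from the talk) shows why block transfers matter in this model: with l = 100 and b = 1, moving 1000 words as a single block costs l + b*1000 = 1100 time units, while moving them one word at a time costs 1000*(l + b) = 101,000, nearly a hundred times more. This is the sense in which the model rewards algorithms that exploit spatial locality by communicating in large blocks.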

The research was done jointly with Alok Aggarwal and Marc Snir.


9 February 1993

1:30 p.m.

David Harel (Weizmann Institute)

Towards a theory of infinite computable structures and databases

Computer scientists are interested predominantly in finite objects. However, insight into finite objects can often be obtained by considering infinite, but recursive (i.e., computable), variants thereof. This leads to the notion of a recursive graph, or, more generally, a recursive structure in the model-theoretic sense. Moreover, in the database world, infinite computable collections of data are not out of place: a table of logarithms or trigonometric functions, for example, can be viewed as an infinite relation that is given by a recursive procedure for generating any desired part of it. This is a rather general notion of a deductive database.
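The logarithm-table example can be made literal in a few lines; the sketch below (an illustration, with made-up names) represents an infinite relation by a generator, a recursive procedure that produces any desired finite part of the table on demand.

  # An infinite computable relation: the set of pairs (n, log10 n rounded to
  # four places), enumerated lazily by a generator rather than stored.
  import itertools, math

  def log_table():
      for n in itertools.count(1):
          yield (n, round(math.log10(n), 4))

  # Any finite prefix of the infinite table can be materialized when needed:
  print(list(itertools.islice(log_table(), 5)))
  # [(1, 0.0), (2, 0.301), (3, 0.4771), (4, 0.6021), (5, 0.699)]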

The talk will describe our recent work on infinite structures and databases, including the high undecidability of many problems on recursive graphs, a completeness result for computable database queries, and ways of deducing results for the finite case from those for the infinite case and vice versa. In view of the recent results about the impossibility of approximating problems hard for Max-SNP, one of our results seems to be particularly interesting, since it shows that any problem in NP whose infinite version is highly undecidable cannot be in Max-SNP (in fact, it is not even in Max-NP).


9 February 1993

10:00 a.m.

Samson Abramsky (Imperial College)

Games, types, and interactions

We introduce the ideas of Game semantics for programming languages and constructive logics, in which types (or formulas) denote games, and programs (or proofs) denote winning strategies. Games have a natural duality (between player and opponent); there are also constructions for combining games, with fine control over the flow of information. These constructions lead to a model of Linear Logic, which captures the dynamical intuitions which make that logic important to Computer Science. The winning strategies corresponding to proofs are ``dynamic tautologies'', processes which conserve the flow of information. They also lead to a subtle model of sequential processes, constrained enough to lead to very strong definability results. We present one such result, a ``Full Completeness theorem'' for Multiplicative Linear Logic (plus MIX): every winning strategy is the denotation of a unique cut-free proof net.


12 November 1992

Donald Knuth (Stanford)

Textbook examples of recursion

We discuss properties of recursive schemas related to McCarthy's ``91 function'' and to Takeuchi's triple recursion. Several theorems are proposed as interesting candidates for machine verification, and some intriguing open questions are raised.
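For reference, here are the two functions named above in a minimal sketch; the Takeuchi definition shown (returning y when x <= y) is the original formulation.

  # McCarthy's 91 function: equal to n - 10 for n > 100 and to 91 otherwise,
  # even though the nested recursion below makes that far from obvious.
  def mccarthy91(n):
      return n - 10 if n > 100 else mccarthy91(mccarthy91(n + 11))

  # Takeuchi's triple recursion (original form, returning y in the base case).
  def tak(x, y, z):
      if x <= y:
          return y
      return tak(tak(x - 1, y, z), tak(y - 1, z, x), tak(z - 1, x, y))

  print(mccarthy91(17), mccarthy91(250))   # 91 240
  print(tak(5, 2, 0))                      # 5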


21 May 1992

Joel Spencer (NYU)

Flip a coin! From Erdos to algorithms

The Probabilistic Method, as developed over the past half century by Paul Erdos, is a powerful proof technique in Discrete Mathematics. To prove the existence of a combinatorial object - e.g., a graph, a tournament, a coloring - one devises a ``thought experiment'' for which the probability of creating the desired object is positive. In recent decades this technique has found natural application in Theoretical Computer Science. Here it is desirable to replace Hilbertian existence arguments with polynomial time algorithms and this can be done. Sometimes.
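A textbook instance of the method (a standard example, not necessarily one from the talk): color each edge of the complete graph K_n red or blue by independent fair coin flips. The expected number of monochromatic k-vertex cliques is C(n,k) * 2^{1 - k(k-1)/2}; whenever this quantity is less than 1, some coloring contains no monochromatic K_k at all, so the Ramsey number satisfies R(k,k) > n. Taking n = 2^{k/2} makes the expectation less than 1 for k >= 3, giving R(k,k) > 2^{k/2}. The proof, however, gives no efficient way to exhibit such a coloring, which is exactly the gap between existence and algorithms described above.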


27 April 1992

Christos Papadimitriou (UC San Diego)

Inefficient existence proofs and complexity

We point out that in several well-known instances in mathematics the existence of a mathematical object is established by a constructive argument based on an exponentially large graph. These include Brouwer's Fix-point Theorem, the Borsuk-Ulam Theorem, the Arrow-Debreu theory in mathematical economics, Chevalley's Theorem in number theory, the convergence of Hopfield neural nets, Smith's Theorem in graph theory, and many others. We show that this phenomenon can be captured by complexity classes, containing several important complete problems.

COMMENT: Of interest to Theoreticians and Mathematicians.

Biography: Christos H. Papadimitriou is the Irwin and Joan Jacobs Professor of Information and Computer Science at the University of California at San Diego. Before joining UCSD, he taught at Harvard, MIT, Stanford, and Athens Polytechnic. He has published four books, "Computational Complexity," "Elements of the Theory of Computation," "Combinatorial Optimization: Algorithms and Complexity," and "The Theory of Database Concurrency Control," as well as numerous journal articles on various aspects of the theory of algorithms and complexity, as well as its applications to optimization, databases, logic, artificial intelligence, and mathematical economics.


2 April 1992

Ronald Fagin (IBM Almaden)

Finite model theory: A personal perspective

Finite-model theory is a study of the logical properties of finite mathematical structures. This talk gives an overview, including:
  1. Differences between the model theory of finite structures and infinite structures. Most of the classical theorems of logic fail for finite structures, which challenges us to develop new concepts and tools appropriate for finite structures.
  2. The relationship between finite-model theory and complexity theory. Surprisingly, in some cases we can characterize complexity classes (such as NP) in terms of logic, without using any notion of machine, computation, or time (an example is given after this list).
  3. Zero-one laws. There is a remarkable phenomenon, which says that certain properties (such as those expressible in first-order logic) are either almost surely true or almost surely false.
  4. Descriptive complexity. Here we consider how complex a formula must be to express a given property.
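The best-known characterization of the kind referred to in item 2 is Fagin's theorem: a property of finite structures is in NP if and only if it can be expressed in existential second-order logic, that is, by a sentence of the form "there exist relations R_1, ..., R_m such that phi" with phi first-order. For example, 3-colorability of a graph is expressed by asserting the existence of three sets of vertices that together cover all the vertices and such that no edge joins two vertices lying in the same set.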

In recent years, there has been a re-awakening of interest in finite-model theory. One goal of this talk is to help "fan the flames" of interest, by introducing the audience to this fascinating area.


5 March 1992

Richard M. Karp (UC Berkeley and ICSI)

Physical mapping of chromosomes: A combinatorial problem in molecular biology

A fundamental tool for exploring the structure of a long DNA sequence is to construct a ``library'' consisting of many cloned fragments of the sequence. Each fragment can be replicated indefinitely and then ``fingerprinted'' to obtain partial information about its structure. Two common types of fingerprinting are:
  1. restriction fingerprinting, in which an enzyme called a restriction nuclease cleaves the fragment wherever a particular short sequence of nucleotides occurs, and the lengths of the resulting pieces are measured;
  2. hybridization fingerprinting, in which each fragment is tested to determine which of certain short nucleotide sequences called probes it contains.
An important combinatorial problem is to determine, from such fingerprint information, the most probable arrangement of the cloned fragments along the overall sequence. Such a physical map can then be used for tasks of biological or medical significance, such as determining the locations of particular genes within a chromosome.

The most common approach to physical mapping is to compute, for each pair of fragments, a weight (either positive or negative) related to the probability that the two fragments overlap, and then construct an interval graph of maximum total weight. We argue that this approach makes insufficient use of fingerprint information. Instead, we cast the problem as a variant of the traveling-salesman problem in which the cost of travel between two cities depends on a small neighborhood around the cities within the overall tour. We then discuss some of the issues of algorithm design that arise in constructing efficient local optimization methods for this generalized traveling-salesman problem.
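As a toy illustration of going from fingerprints to an ordering (this is neither the weighting scheme nor the local optimization method of the talk; the data and all names are made up), one can score pairs of fragments by the probes they share and chain fragments greedily:

  # Hypothetical hybridization fingerprints: fragment name -> set of probes it
  # contains.  Fragments that overlap on the chromosome tend to share probes.
  fingerprints = {
      "A": {"p1", "p2", "p3"},
      "B": {"p3", "p4", "p5"},
      "C": {"p5", "p6", "p7"},
      "D": {"p7", "p8"},
  }

  def shared_probes(f, g):
      return len(fingerprints[f] & fingerprints[g])

  def greedy_order(names):
      remaining = sorted(names)              # deterministic starting fragment
      order = [remaining.pop(0)]
      while remaining:
          nxt = max(remaining, key=lambda f: shared_probes(order[-1], f))
          remaining.remove(nxt)
          order.append(nxt)
      return order

  print(greedy_order(fingerprints))          # ['A', 'B', 'C', 'D']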

No prior knowledge of molecular biology will be needed to understand this talk.