Date: Tue, 4 May 93 09:37:25 CDT
From: Robert S. Boyer <boyer@CLI.COM>
Subject: QED Manifesto - Notes and Comments
I enjoyed your remarks about the manifesto and the commentary. I
agree with some of what you say, and disagree with some. Here are my notes.
1. On discovery. Your first `non-flame' remark seems to accuse the
manifesto of trying to formalize discovery. But I don't read it that
way. Far from it. You say that otherwise QED is a pointless
exercise. Here we seem to disagree radically. I think that if QED
simply helps with, say, program verification, it will be far from
pointless. I believe it will help there. I've done a lot of program
verification, and I can tell you I'd just love to be able to tap into
a formalized edifice that treats Fourier transforms and other parts of
engineering mathematics. I would say that the formalization of
discovery is extremely hard, much harder than the QED project. The
difficulty stems from the fact that discovery can be done in so many
disparate, even mysterious ways. Poincaré's description of suddenly
seeing a complete solution to some ultra-hard problem, after not
having thought about it for months, just as he stepped on a trolley,
makes me think that mechanizing discovery is a long way off.
2. On the increasing complexity of high technology. From the
conversations I have had, I believe the increasing size of the
internal documentation of chips is strictly a function of the
increasing number of transistors one can fit in a square inch. It is
not caused by stupidity or ineptness, as you seem to imply.
Semiconductor companies do whatever they think will work to reduce
costs, and use imagination and abstraction a good deal. When they
know how to do better, they will, or someone else will.
3. On how much the foundations of mathematics are likely to change.
We certainly disagree on the status of the foundations of mathematics!
I would say that first order logic and set theory are here to stay in
this sense: whatever follows, and surely there will be some
improvements, will retain essentially all of the familiar notions,
theorems, and rules of inference that we currently have. And PRA will
continue to be adequate, in principle, for proof theory. Euclid's
geometry and Newton's calculus only had to be discovered once. So too
set theory and the primitive recursive functions. I believe the real
foundations of mathematics were mainly found, once and for all,
around 1900-1930. Time will tell. To the extent that
self-referring objects are worth studying but not yet understood, I
suspect they can be accommodated within existing mathematical theories
or minor extensions thereof. For example, Scott showed how to give a
model for the lambda calculus, in which functions can (appear to) take
themselves as arguments. Undoubtedly mathematics will improve greatly
in the future, but I personally don't expect the basic foundations to
change any more than I expect plane geometry to change.
4. On the social process. I do not agree at all that "it is
well-known that what constitutes a proof is at least partly a social
phenomenon", unless you are playing on two different meanings of the
word "proof". There is the "proof" that people, with their
meat-brains do, and some of those "proofs" are simply false or
fraudulent, every bit as open to mistakes and even corruption as
banking! And then there is the Hilbert-Goedel notion of proof
(sequence of wffs such that each is an axiom or follows from previous
members ...). This notion of proof is no more social than the
sequence of integers! Do you also think that a "social process"
determines whether 27 follows 26 in the sequence of natural numbers?
Whether something is a proof, in this second, exact sense, is just as
much a matter of mathematics as 1, 2, 3.
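To underline how mechanical this second sense of proof is, here is a
minimal sketch of a checker for Hilbert-style proofs in propositional
logic, restricted to modus ponens; the tuple representation and the
example formulas are my own illustration, not code from any existing
proof-checking system.

```python
# A minimal sketch (my own illustration, not from any existing prover)
# of the Hilbert-Goedel notion of proof: a sequence of formulas, each
# of which is an axiom or follows from earlier members by a rule, here
# just modus ponens.  Formulas are nested tuples, with ('->', p, q)
# standing for the implication p -> q.

def is_proof(lines, axioms):
    """Check mechanically that every line is an axiom or follows by
    modus ponens from two earlier lines.  No social process required."""
    for i, f in enumerate(lines):
        earlier = lines[:i]
        if f in axioms:
            continue
        # modus ponens: some earlier p together with earlier ('->', p, f)
        if any(('->', p, f) in earlier for p in earlier):
            continue
        return False
    return True

axioms = {'A', ('->', 'A', 'B')}
print(is_proof(['A', ('->', 'A', 'B'), 'B'], axioms))  # True: B by modus ponens
print(is_proof(['A', 'B'], axioms))                    # False: B is unjustified
```

Whether a sequence passes this check is a fact of the same kind as
whether 27 follows 26; two machines running the checker cannot
disagree.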
Do you think that there is anything in principle that would prevent
the entering of Hilbert's proof of the Prime Ideal Theorem or the
recent computer-aided proof of the four color theorem into existing
proof checking systems? I don't. I suspect that whatever form of
induction Hilbert used is now well understood and well formalized in
ZF. The curious part of the four color proof was its use of long
computer runs, but, again in principle, program verification is up to
handling this. There is nothing theoretically difficult, or even new,
about using a calculation in a proof. (Note to everyone: please
notice the several uses of "in principle" above before accusing me of
the ridiculous claim that anyone could, in practice, get the four
color conjecture checked by an existing system any time soon!)
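As a toy illustration of "using a calculation in a proof": the
computational part of such a proof is a mechanical check that a
claimed witness has a decidable property. The little map graph and
coloring below are made-up data for illustration only, not anything
from the actual four color proof.

```python
# Toy instance of a calculation inside a proof: verify that a claimed
# 4-coloring of a small "map" graph is proper.  The graph and coloring
# are hypothetical illustrative data.

def is_proper_coloring(edges, coloring, n_colors=4):
    """True if every vertex gets one of n_colors colors and no edge
    joins two vertices of the same color."""
    return (all(c in range(n_colors) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))

edges = [('TX', 'OK'), ('TX', 'NM'), ('OK', 'NM'), ('TX', 'LA'), ('OK', 'AR')]
coloring = {'TX': 0, 'OK': 1, 'NM': 2, 'LA': 1, 'AR': 0}
print(is_proper_coloring(edges, coloring))  # True: the witness checks out
```

The long computer runs in the four color proof are, in principle, the
same kind of step on a vastly larger case analysis.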
5. On irrelevant mathematics. As for objection 3 of the manifesto,
it may be that there is lots of irrelevant mathematics, but it is, I
think, politically dumb to put something like that into an organizing
document. It is hard enough in this society even to talk about
anything being important. We seem to be getting to the moral state
that all things are equal, including plus and minus one. While I
personally can imagine standing up in public and saying that "the
fundamental theorem of calculus" is IMPORTANT, I'm not sure I would
ever see the point of standing up, in public, and saying "the paper by
Smith and Jones in the last issue of the Proceedings of the Society
for Write Only Papers" is irrelevant and doesn't deserve checking. As
soon as you start asserting that lots of mathematics is irrelevant,
someone will start asking you to name names.
6. Objections 4 and 5 may be dumb, but they are made, and I think all
common objections should be treated in the manifesto. The more the
better. In one paper, de Bruijn remarks that the two standard initial
objections to Automath were (a) Goedel proved you cannot do that and
(b) Peano already did that.
7. On formalizing mathematical notations. You seem to think that
formalizing existing mathematical notations is a lot harder than I do.
Perhaps we have had radically different exposures to notations. I was
personally convinced in 1967, from working with A. P. Morse's book "A
Theory of Sets", in which he carefully introduces loads of standard
notations, that the formalization of any mathematical notation can
probably be done in an acceptable way in a reasonable amount of time.
The most damage I have seen in formalizing notations used in practice
by skilled mathematicians is that when formalizing some things (a) you
have to insist that the bound variables be explicitly identified and
mentioned and (b) a few extra marks must be added here and there to
distinguish one defined concept from another. A good case study is
the O(n) notation; some nice remarks on the various things it can
mean can be found in "Concrete Mathematics". I would be
happy to discuss a particular problematic notation or two of your
choice with you.
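As a small illustration of points (a) and (b), here is one common
set-theoretic reading of the O-notation with its bound variables made
fully explicit; this is only one of the several readings discussed in
"Concrete Mathematics", written in notation of my own choosing.

```latex
% One standard reading: O(g) is a *set* of functions, and the
% customary "f = O(g)" is really set membership f \in O(g).  The bound
% variables c and n_0, usually suppressed, must now appear explicitly:
O(g) \;=\; \bigl\{\, f \;\bigm|\; \exists c > 0\;\, \exists n_0\;\,
           \forall n \ge n_0 :\; |f(n)| \le c\,|g(n)| \,\bigr\}
```

The "damage" is exactly as described: the quantified variables must be
written down, and an extra mark (the membership sign in place of the
overloaded equality) distinguishes the defined concept.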
8. On multiplicities.
It has been seen repeatedly in our studies of computational support for
complex systems that there is no one method, approach, notation, language or
even paradigm that can be adequate, so that multiplicities must be allowed
even at the lowest levels of processing [Landauer-Bellman92].
I feel you have given up much too early! There is so much evidence
that the beautiful edifice of mathematics can be formalized in, say,
PRA and developed in, say, ZF! Not that PRA or ZF are the only
vehicles or even the right vehicles. The problem here, I think, is
exactly opposite to what you think. The problem is that there are TOO
MANY EQUIVALENT, ULTRAPOWERFUL FORMALISMS! We are like children in a
candy shop, in a heaven of infinitely many equivalent formalisms.
9. On getting `approval' for the major defined concepts.
One may usefully choose a preferred definition (for expository
purposes, for example), but the alternatives are as valid and also
important for exploring the relationships among the different
concepts in the subfield.
I agree, and it will certainly be important in QED to prove the
equivalence of various definitions. But I believe it will be
psychologically important to get a panel of the best experts to agree
that at least one of the definitions of each important mathematical
concept is "right". Same for the statement of at least one version of
each important, named theorem. This will give people who enter
things into QED building on the work of others confidence that when
they invoke, say, the fundamental theorem of calculus, they are
employing the common notion of the reals, continuous function,
integral, and derivative.
10. Who decides what goes into QED?
From the earliest days, I think the general consensus has been, and
ought to be, that this should be a really open business, as open as we can
make it. For example, we have this remark of inclusiveness from de
Bruijn back in the 70's:
The idea is to make a language such that everything we
write in it is interpretable as correct mathematics ...
This may include the writing of a vast mathematical
encyclopedia, to which everybody (either a human or a
machine) may contribute what he likes. The idea of a kind
of formalized encyclopedia was already conceived and
partly carried out by Peano around 1900, but that was still
far from what we might call automatically readable.
11. The ongoing denigration of the meta-discussion of root logic.
Starting with a choice of root logic before we know what we are going to need
PRA and its equivalents have been in use in proof theory for about 60
years, about as long as the discussion of ZF vs.
von-Neumann-Bernays-Goedel set theory. The questions `which root
logic', `which strong logics', `which syntax', `which theorems' will
probably all go on in parallel, as they have for many decades. The
discussion of LF vs. FS0 vs. PRA vs. ?? is not improved, not even
dampened, by assertions that it is premature.
12. Methods more important than theorems.
It is only superficially that the importance of mathematical theory
is in the theorems. Actually, it is in the methods ...
I can agree with that intuitively, and I think that is the way to
think if you want to pursue mathematical discovery via AI. But QED is
less ambitious than AI. When math journals contain formal statements
and proofs of "methods", then they can be added to QED. The success
of programs like Macsyma is rooted in their success at formalizing
methods for doing some kinds of symbolic manipulation. Maybe some
day someone will reduce, say, the method of `forcing' to something as
routine as differentiation, but I don't think it has happened yet, and
so it would be extremely hard to add something like that to a system.