
Message-id: <199305032105.AA20489@aerospace.aero.org>
Date: Mon, 3 May 93 14:04:58 PDT
From: cal@aero.org
To: qed@mcs.anl.gov
Subject: QED Manifesto - Notes and Comments
Sender: qed-owner
Precedence: bulk

file qed930503.txt
first notes on QED discussion

I have read the QED Manifesto and the subsequent discussion with much interest and not a little horror. I believe that the project is fundamentally misguided, and that even if one ignores that, the designs under discussion will be inadequate to the task. Ok, so enough for the flames; now for some serious comment: first on the manifesto itself, then on some of the subsequent commentary.

If I'm being too sarcastic at times, then I apologize, but it irritates me to see such naivete about "the way to do mathematics". I have no doubt that each of you who writes about "_the_ way math is (or should be) being done" is right about _a_ way it is being done. My complaint is just that none of you takes into account the enormously different styles and methods that are in use. None of those methods is exclusive, and mathematics is being done in an amazingly large variety of ways, sometimes by the very same person. It is this variation of methods that QED must support in order to gain acceptance from the mathematical community.

As to my use of "misguided": I think QED is trying to formalize an inherently non-formalizable process, that of discovering new mathematics (even though that is expressly not a goal of QED, I believe that it should be, since otherwise QED is a pointless exercise). The real world is out there to foil any attempt at completely severing mathematics from external influences (though that is exactly what formalization is). The attempt is still worthwhile, but we should all keep in mind the quote M. Thayer used: "God has more disk space than we do", which I take to mean only that reality seems to be much more diverse than any of our formal mechanisms can handle adequately. The strong emotional reaction we see here results from the excessive reliance of some people, myself included, on a notion of "truth" to provide life's certainties, and on mathematics to provide that notion of truth.
This is not a technical issue, but it pervades the discussion of QED.

Comments on Manifesto Motivations

I agree with some of the situation description in the first reason, about the increase of mathematical knowledge. Of course, most of that increase is in areas known to be "irrelevancies", with a very small theorem-to-definition ratio, and I am not convinced that even publishing those results in the usual way is useful. By the way, I can't quite decide why the theorem-to-definition ratio matters to me. It is something about getting out more than you put in. It seems that good mathematics should have a high ratio (yes, I know that the ratio is not meaningfully well-defined, but there is some notion there that is important).

Problem solving and theorem proving are only one of the motivations for doing mathematics. Another is less goal-directed: a kind of exploration of new (or old) mathematical territory. The criteria for success here are harder to elaborate, because the time scale is different (e.g., Levi-Civita's development of tensors preceded their use in relativity; there are many other examples). Despite the phrase "knowledge and techniques" in the first sentence of the manifesto, all of the subsequent discussion of the mathematics itself is about theorems and proving theorems. The methods of a mathematical theory are often more important than the theorems anyway, since it is often only the methods that generalize across theories.

I believe that the (rightly, I think) expected enormous sizes of "internal documentation of ..." and "specifications of ..." represent failures of imagination and abstraction, and simply follow from the amount of thought and money typically allowed for their construction. Also, I think they partly reflect the inadequacy of the available software tools to support computation with abstractions.
I wholly support the third motivation, for education, and hope that there is LOTS more study of the kinds of tools that must be available in such a system; no present system comes close, regardless of subject matter. Some of my own research is relevant here, in the area of software support of complex systems [Landauer-Bellman92], [Landauer-Bellman93].

The fourth motivation is, in my mind, neither beautiful nor compelling. It is simply wrong-headed. In the history of mathematics, we have seen many instances when the very foundations of certain areas of mathematics have changed: calculus by infinitesimals to calculus by epsilons and deltas; Riemann's "hypotheses" vs. Kronecker(?)'s "facts" that underlie the foundations of geometry; Cantor's notion of infinite vs. "das ist nicht Mathematik, das ist Theologie" ("that is not mathematics, that is theology"); the axiomatic method itself. Some less well-known examples are Aczel's set theory with an anti-foundation axiom [Aczel88]; Chaitin's complexity theory [Chaitin93] and Barwise's situation theory [Barwise89] are possibly others. Such changes sometimes happen when new application areas (such as natural language) are treated. I believe that mathematics will need further changes to treat computing adequately (we are still stuck with the mathematics that derives from our understanding of compilation; there are harder problems we have ignored), and to treat the full complexity of biological systems at all.

In a sense, we have started in the middle somewhere, as in physics, with concepts and methods that we can comprehend in some direct way. Then we built structures on top of that, and we also built structures underneath that make us feel like the others are well supported. Unlike physics, however, our criterion for success in mathematics is not just the application of methods to "the real world"; it is also (or perhaps predominantly) in applications of those methods to other parts of mathematics.
Methods that unify and simplify several parts of mathematics at once are successful; group theory and category theory are two well-known examples of that process, and another, perhaps more obscure, is the use of sheaves and then schemes to redo the basics of algebraic geometry.

I think this foundational aspect is a key to the motivation for QED, and to its expected failure. We become uncomfortable when there is no foundation (set of bottom elements, concrete objects, atoms, fundamental concepts, or some other term that makes us think we understand these basic objects) from which we can build the defined structures. And yet we can reason with the unfounded elements quite well, if not quite without trepidation. What mathematics has done in the past is to build a rigid structure (well, actually several rigid structures) somewhere in the middle, with tentacles leading up into the large and down into the small. As long as we work in that structure, we are comfortable, but when we get to the boundaries at either end, we have difficulty. As the manifesto says, in the Reply to Objection 7, "The QED system will ... perhaps ... never [be used] in the discovery of ... deep results." This shows a recognition that the ends of the structure are difficult. It is true, though, that the foundational issues are the most difficult; all of the examples above are difficulties at the bottom of this structure.

Now QED wants to build such a structure, which is perfectly reasonable. But the manifesto also describes it as though that structure will be foundationally complete (yes, this is an undefined term, but it means that there is no further foundation needed), and I think that attitude is wrong. I believe mathematics will follow the same history as physics, namely, that there is no ultimate foundation, only plateaus on which we seem to rest for a while, until we examine them closely enough.
This is why I expect QED, as it is described in the manifesto, to be wishful thinking. I think the idea could be salvaged by changing the expressed attitude about mathematical foundations, but that is always difficult.

The fifth motivation is too militaristic for me. If the thought police return, they will be the ones who have QED, not us. Moreover, it is well known that what constitutes a proof is at least partly a social phenomenon (witness Hilbert's use of a form of mathematical induction that does not use successive integers to prove the Prime Ideal Theorem; that style of inference was new to him and not well accepted for a while; witness also Appel and Haken's proof of the 4-color theorem, which is still not completely accepted), and that flexibility must be maintained for mathematics to progress.

Just to throw another grenade into the discussion, I also think that self-reference is important. We seem to find potentially infinite towers on well-founded bases (e.g., the set of non-negative integers) much more congenial than infinitely descending towers, and when there is self-reference, we get even more worried. The tremendous success of the language / meta-language separation in providing foundational coherence in mathematics and logic has blinded most of its users to its cost in expressiveness. Instead of working to find logics that allow study of self-reference, we simply eliminated it altogether, because it could (and did) cause trouble, as illustrated by various set-theoretic paradoxes [Barwise-Etchemendy87]. Self-reference has always been somewhat suspect in mathematics and in logic because of these paradoxes. The only approach I have seen that even addresses this issue is Barwise's situation theory, which is being developed in conjunction with new logics of self-reference [Barwise89], and in fact a new set theory that supports them [Aczel88] [Barwise-Moss91].
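To make the foundational contrast concrete, here is the standard one-equation illustration of the anti-foundation idea; this is my own summary sketch of the point of [Aczel88], not text from the manifesto or the discussion:

```latex
% Foundation vs. anti-foundation, in one equation (summary sketch):
\[
  x = \{\, x \,\}
\]
% Under ZF's Foundation axiom this equation has no solution, since any
% such $x$ would satisfy $x \in x$ and yield an infinite descending
% $\in$-chain. Under Aczel's Anti-Foundation Axiom, every such system
% of equations has a unique solution: there is exactly one set
% $\Omega$ with $\Omega = \{\Omega\}$, so circular, self-referential
% structures become ordinary set-theoretic objects rather than
% paradoxes.
```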
These logics arose from the study of natural language, with its very definite non-separation of the subject of discourse from the discourse itself (the chief culprits here are words like ``this'', ``that'', and ``the other thing''). This new kind of logic both facilitates reasoning with self-reference and helps resolve some of the sticky paradoxes of set theory.

Replies to Objections

It is clear that there are many similar logics in use now. Many people naively assume the pseudo-"mathematical" ideal of having one base system into which everything can be converted and compared. I claim first that it can't be done, and second that it shouldn't be done. First, any formal model is limiting by definition, so choosing one such model automatically and irrevocably precludes studying certain things. Second, the well-known failures of mathematical methods in computing and in biological systems imply that new and different mathematics are needed. These may require new inferential mechanisms as well. It is much more important to standardize on the allowed variability in a system than to standardize on one kind of expression. Also, if the goal is truly _all_ of mathematics, then all the different kinds of limited inference mechanisms must be used also, so that one can easily determine, given a collection of allowed inference mechanisms, what are and are not theorems and other derived methods. Note also that there will be new mathematics required in the construction and analysis of such a large system, so that certain kinds of self-reference will be necessary (in so far as QED is a formal system, it must be representable within QED's logics; some careful form of computational reflection will be essential).

I completely agree with the Reply to Objection 2, but would recommend that results and notations and methods and algorithms also have an associated pedigree, to keep track of source information.
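As a minimal sketch of what such a pedigree might look like in software (the record fields and entry names below are my own illustrative assumptions, not anything proposed in the manifesto):

```python
# Hypothetical sketch: a "pedigree" record for one QED library entry,
# plus a traversal that audits where a result ultimately came from.
# All field names and entry ids here are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pedigree:
    """Provenance for one result, notation, method, or algorithm."""
    entry_id: str            # name of the theorem/definition in the library
    source: str              # original publication or contributor
    depends_on: tuple = ()   # entry_ids this result was derived from

def ancestry(entry_id, table):
    """All transitive dependencies of an entry, for source auditing."""
    seen, stack = set(), [entry_id]
    while stack:
        for dep in table[stack.pop()].depends_on:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Tiny example library (contents invented for the sketch).
table = {
    "peano.add_comm": Pedigree("peano.add_comm", "Peano 1889"),
    "int.add_comm": Pedigree("int.add_comm", "folklore",
                             depends_on=("peano.add_comm",)),
    "rat.add_comm": Pedigree("rat.add_comm", "folklore",
                             depends_on=("int.add_comm",)),
}
print(sorted(ancestry("rat.add_comm", table)))
# ['int.add_comm', 'peano.add_comm']
```

The point of the sketch is only that provenance is cheap to record and mechanically traversable, which is what makes it useful for the kind of validation described below.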
We have found that to be an invaluable aid to validating the behavior and internal information in a complex software system. Moreover, it might help reduce the exponentially growing volume of published mathematics to allow "publication" into QED rather than on paper.

The Reply to Objection 3 does not address the issue raised there at all. I think that Objection 3 may be correct (though not relevant, since most of the mathematics that has been published is not relevant, as I mentioned above), and the reply is also both correct and irrelevant. The issue is not that we should do the easy stuff first, but rather how and by what means the system should be "bootstrapped". I was glad to see that notion mentioned in the discussion.

Objections 4 and 5 are specious. Both are correct; both are irrelevant. First, one should make a careful distinction between "computationally checked" and "mechanically checked", and note that neither is possible, since we cannot formally check the operation of the computers or other mechanisms; they are objects of the "real world" and mathematics is not. Nonetheless, QED may be feasible; it seems worth the effort to find out.

The Reply to Objection 6 is ok as far as it goes, but I suggest that some professional journals would eventually be interested, because it would greatly simplify the problem of relating a new piece of mathematics to the rest of the field. The repeated failure of technical reviewers to exhibit the requisite breadth of knowledge is a hint that it is more and more difficult to gain that breadth, and not just because the subject is growing fast.

Objection 7 is exactly right, and should be carefully taken into account. The Reply to Objection 7 is naive, both from the notational point of view and from the computational point of view.
The notion that TeX "demonstrates that it is no problem for computers to deal with any known mathematical notation" is ludicrous; it only demonstrates that we can cause the marks to appear on paper using TeX, not that there is any set of computational methods that can be used to describe the relationships those marks are intended to denote, or even to identify what context or conventions are to be used to interpret the notations.

Mathematicians are amazingly productive in inventing new notations that convert problems in one domain into problems in another. One scenario for this progression is that new notations are (1) first used as abbreviations in terms of other base symbols, then (2) combinations of the accepted rules of manipulation of the base symbols are gradually found useful or interesting for manipulating those abbreviations, and finally (3) those manipulations of the abbreviations become formalized into theories about safe manipulations of the new notations. It is my opinion that QED must use this same mechanism (and many others, too) of incorporating new processes that manipulate objects in one of several different ways, depending on how deep their abbreviation definitions are.

For a simple example of this abbreviation process, consider the rational numbers. The expression 1/4 is not a number, it is a problem. It is such a familiar problem, though, that we know lots of manipulations that we can safely do with it, most of which were derived from safe manipulations with integers, which can in turn be derived from safe manipulations with sets (in one definition of integers as sets that is used to prove Peano's axioms). We therefore think of 1/4 as a fundamental object, and treat it that way in most investigations.
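The three-stage progression above can be sketched in code. The following is my own minimal Python construction, not anything from the manifesto: a rational is stored as a pair of integers, and every "safe manipulation" of the abbreviation is defined in terms of safe manipulations of integers, until the abbreviation is treated as an object in its own right.

```python
# Sketch of the abbreviation process: p/q is "a problem" stored as a
# reduced pair of integers, with its rules justified by integer facts.
from math import gcd

class Frac:
    """A rational p/q, stored as a reduced pair of integers."""
    def __init__(self, p, q):
        if q == 0:
            raise ZeroDivisionError("denominator must be nonzero")
        sign = -1 if q < 0 else 1
        g = gcd(p, q)
        self.p, self.q = sign * p // g, sign * q // g

    # Stage (2): manipulation rules for the abbreviation, each one
    # derived from ordinary integer arithmetic.
    def __add__(self, other):
        return Frac(self.p * other.q + other.p * self.q, self.q * other.q)

    def __mul__(self, other):
        return Frac(self.p * other.p, self.q * other.q)

    def __eq__(self, other):
        # p/q = r/s iff p*s = r*q -- an integer fact, not a new primitive.
        return self.p * other.q == other.p * self.q

    def __repr__(self):
        return f"{self.p}/{self.q}"

# Stage (3): we now treat 1/4 as a fundamental object.
quarter = Frac(1, 4)
print(quarter + quarter)                 # 1/2
print(quarter + quarter == Frac(2, 4))   # True
```

Nothing in the arithmetic above mentions rationals as primitives; the class is exactly a theory of "safe manipulations of the new notation" built on the integer layer beneath it.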
The computational naivete lies in the thought that everything will be translated into a common internal form (an implication which may not be intended is that it be the same form as implied by the root logic; simple syntactic analyses and prettyprinting are, as the reply says, quite feasible). It has been seen repeatedly in our studies of computational support for complex systems that there is no one method, approach, notation, language, or even paradigm that can be adequate, so that multiplicities must be allowed even at the lowest levels of processing [Landauer-Bellman92]. In fact, the last sentence of the reply is one of the horrifying things about the entire proposal. If QED is not intended to support the development of new mathematics from the very first deep insights, then I think it will not be used by anyone except clerks, and that few mathematicians will even notice it, let alone support it.

Objection 8 is right also. Any piece of code as large as QED must be is inherently unreliable, for many reasons, not the least of which is the software and hardware base (compilers, operating systems, and processors). The Reply to Objection 8 is also right. However, it is not the human errors in proofs that are in question; it is the programming errors in proof discovery, construction, and verification that are the problem. These errors will be harder to discover, because humans are less familiar with the peculiarities of computer programs than they are with the peculiarities of human reasoning. Still, improved accuracy is likely, once the presently accepted body of mathematics is entered and verified. There are two other parts of the reply that are difficult for me.
I hope it is clear that "expressing the full foundation of the QED system in a few pages of mathematics" will require some new mathematics, since the expressiveness of current mathematical systems of notation cannot handle the variability of components that would be required for such a compact representation. Lastly, "agreement by experts ... that the definitions are 'right' ..." smacks of more cliquishness than I like. In every subfield there are some generally accepted definitions, and some alternative sets of definitions that are all basically equivalent (sometimes that equivalence is the major result of the theory). One may usefully choose a preferred definition (for expository purposes, for example), but the alternatives are just as valid, and also important for exploring the relationships among the different concepts in the subfield.

I do think there are some real objections to the concept of QED as stated. My main non-technical concern is the social or political issue: who decides what goes into QED? Not every mathematician will play this game, no matter what the rules are (mathematicians, indeed all humans, are just like that). This objection is related to Objection 7, and to my mind much more serious than the trivialities of syntax.

Background Problems

"A second reason is that few researchers are interested in doing the hard work of checking proofs". I do not believe for a moment that anyone who thinks of herself or himself as a research mathematician is going to spend any significant time "checking proofs", especially according to rules that are not common mathematical practice. Even if the rules were commonly accepted, that is clerk work, not for humans.
The result is that the researchers involved in QED will primarily be computer folk who will concentrate on the software that makes checking proofs easier (which is commendable), and then inevitably (I believe that some of the discussion already exhibits this tendency) on suitable restrictions on the mathematics to make producing the software easier (which is deplorable).

1. Reliability. This is also Objection 1.

2, 3, 4. Logic. This is answered by the use of multiple logics and appropriate interconnection theories. The research necessary here is to extend the work of [Nelson-Oppen] to more complicated theories with more complicated interactions.

5. Syntax. This is a design requirement, not a problem. It is both technically feasible (using cooperative grammars) and essential.

6. Human. This is Objection 2 also.

7. Extensibility. Why do these plateaus occur? They also occur in mathematical theory, and in most sciences also, and these endeavors are much more complex than the scope of any of the theorem provers in question. I believe that the plateaus are evidence for our earlier claim that no formal system can encompass all aspects of a real system. In so far as the correspondence between the formal system and the real system is close (that is, the formal system is a good model of the real system), it will take time to perform the analyses, discover the relationships in the model, and design the experiments that define the boundaries of that correspondence. That time is part of the plateau, as is the time spent exploring the boundaries, discovering new anomalous relationships, and inventing new models. The plateaus end in science and mathematics because humans are great at stepping outside systems to look at them from different viewpoints; we do not know how that is done.

8. Infrastructure. QED will need tools that work with objects within the system, not just with objects in the application domain.

9. Rigor. This seems like Objection 2 also.

10. Standards.
The problem mentioned is the result of naive standardization instead of explicit management of variability. It should not matter what syntax is used to express induction schemas, as long as the requisite information is there. Also, by the way, there are many different induction schemas that interact in different ways with other inference rules, according to the amount of information that is included with them. It is essential for the utility of QED to allow the full range of variability here, and to have explicit models for the interactions.

11. User Interface. Ease of use for whom? I would be glad if proving a theorem in QED were no harder than proving a theorem using pen and paper. However, while there are long-standing traditions about many mathematical objects and notations for them, there are few or no traditions of any kind for notations that describe manipulations of expressions in those more common notations. There is a basic difference of feeling between doing something (e.g., moving around in a mathematical space) and telling an idiot savant how and where to move. As a species, we have been moving around far longer than we have been telling others how to move. This difference between actions and words for actions may be insurmountable.

Root Logic

Starting with a choice of root logic before we know what we are going to need is self-defeating. All of the discussion of a "root logic" is unproductive until much more of the support infrastructure is in place. Which logic is not the first question; that is like deciding what programming language to use before you know what the program is supposed to do. The most important part of the description here is the last paragraph, but it appears that its message is ignored elsewhere in the manifesto.

Recommendations

The first two planning steps are fairly interesting. There is an almost reasonable list of items to be considered for inclusion, but it is much too superficial.
It only contains "object-like" elements, confuses the software items with the "application domain" items, and completely misses the notion of methods and processes in both domains. It is only superficially that the importance of mathematical theory is in the theorems. Actually, it is in the methods, which are in one sense just like the theorems, except that they have much more complicated parameter objects (which can include methods as well). One hint of this difficulty is in the cases in which essentially the very same proof can be used to prove two different theorems in two different parts of mathematics: different objects, different axioms, and yet the properties of those objects are sufficiently similar that the same proof can be used.

Other things missing are primarily software infrastructure items, such as the integration agents that make interconnections between parts of the system (it is clear, I hope, that this will be a very widely distributed system), the registration agents that will maintain knowledge of certain information or services (it is clear also, I hope, that this will be a very dynamic system), the "middlemen" that will direct requests for information or services to appropriate providers, and a host of other software support structures, such as self-monitoring agents that watch for trouble in various parts of the system. My main point here on the software structure is simple: QED will need more software infrastructure than you will believe at first, and perhaps more than you will ever believe.

The third planning step is separable from all the others, and would be greatly beneficial to a number of projects. The fourth step is another sort of mire. Many theorems have several different statements, all somewhat related. There should be a careful collection of several versions of useful theorems, and their relationships should be indicated.
Moreover, there are some expositions of mathematics in which results are proven in different orders; one takes a known theorem as a definition and proves a result otherwise treated as an axiom. The inferential structure of the subject can be traveled in many directions. Sometimes these different views of the foundations of a subject area are very helpful for understanding, and this style of analysis should be available in QED.

Comments on Discussion

The discussion has mainly been in a few categories:

- choices of language, logic, implementation (db, ui)
- the effect of Bourbaki
- priorities of mathematical subjects
- little theories, and other information organizations
- existing theorem provers and information sharing
- existing logics and the question of which root logic to use

I will start with some common incorrect assumptions that I think occur in the manifesto and in the comments. Because of the length of this note already (and I only thought I was making a _few_ comments!), I will save my comments on the discussion for later.

Most discussants are missing the notion of the context of statements in a formal language (Beeson is an exception here). Instead, most of the commenters are trying to make the statements all self-contained, and that is only possible by overly restricting the range of allowable interpretations. Mathematical notations depend on context for the interpretation of the symbols, both syntax and semantics, and it seems to me that much more information is actually in the context of an expression than in the particular expression itself. Digression: "lingua franca" is a great term, referring to one universal language within another, though they were universal at different times and for different purposes.

Most discussants are missing the notion of how much software infrastructure will be needed in such a system (Boyer is an exception here).
Instead, most of the commenters are concentrating on the superficial (as far as the programming is concerned) questions of application domain structure, root logic, and algorithms for processing the application domain objects.

Most discussants seem to be assuming (mistakenly, in my view) that mathematics is in the theorems, not in the processes and relationships among concepts. Until those are handled explicitly and properly, QED will have little or nothing to do with the practice of mathematics, despite its declared subject matter.

Most discussants are still assuming that rigor means everything is reducible to the same level of detail in the foundations. I do not think that is true. Each object and each process will exist at many different levels of detail simultaneously, and inferences at some levels will automatically convert to inferences at other levels. There is no reason to translate everything into a single level. Besides, I do not believe that will be possible. In so far as we can translate everything into one formal system, that formal system explicitly limits what we can write and how we can process it, and I think that historically, mathematics has shown a great inability to be so constrained.

References

[Aczel88] Peter Aczel, Non-well-founded Sets, CSLI Lecture Notes No. 14, Center for the Study of Language and Information, Stanford U., U. Chicago Press (1988)

[Barwise89] Jon Barwise, The Situation in Logic, CSLI Lecture Notes No. 17, Center for the Study of Language and Information, Stanford U., U. Chicago Press (1989)

[Barwise-Etchemendy87] Jon Barwise, John Etchemendy, The Liar: An Essay on Truth and Circularity, Oxford U. Press (1987)

[Barwise-Moss91] Jon Barwise, Larry Moss, ``Hypersets'', Mathematical Intelligencer, Vol. 13, No. 4, pp. 31-41 (1991)

[Chaitin93] Gregory Chaitin, Information-Theoretic Completeness, Lecture Notes, UCLA

[Landauer-Bellman92] Christopher Landauer, Kirstie L. Bellman, ``Integrated Simulation Environments'' (invited paper), Proceedings of DARPA Variable-Resolution Modeling Conference, 5-6 May 1992, Herndon, Virginia, Conference Proceedings CF-103-DARPA, published by RAND (March 1993)

[Landauer-Bellman93] Christopher Landauer, Kirstie L. Bellman, ``The Role of Self-Referential Logics in a Software Architecture Using Wrappings'', Proceedings of ISS '93: the 3rd Irvine Software Symposium, 30 April 1993, U.C. Irvine, California (1993)

[Nelson-Oppen] Greg Nelson, Derek C. Oppen, ``Simplification by Cooperating Decision Procedures'', ACM Trans. Programming Languages and Systems, Vol. 1, No. 2, pp. 245-257 (October 1979)