Date: Thu, 22 Apr 93 16:13:21 BST
From: Randy Pollack <email@example.com>
Subject: Root Logics
I want to say something about the Root Logic, but will motivate this
by replying to Michael Beeson's recent comment:
> I suggest the following organization of the QED effort:
> 1. Machine Math language design and specification.
> 2. Formalization of a sizeable body of mathematics in Machine Math.
> 3. Specification of low-order logic(s) as target language(s).
> 4. Construction of Machine Math compiler(s) into the target language(s).
> 5. Construction of proof-checkers for the target languages.
> 6. Construction of database software to manipulate a large body of
> mathematics in Machine Math form.
I take a different view: The Root Logic must be agreed upon first,
along with some notion of "modularization" for the Root Logic, such as
little theories or whatever. In fact, these are the only things that
have to be agreed upon.
Once the Root Logic is decided on, everyone can go and write proof
tools for whatever language and logic they like, using whatever proof
search technology they like, as long as the proofs can be translated
into the Root Logic and its theory mechanism. Everyone can use the
high-level language and logic of his/her choice, within the
restriction that its proofs can be translated to proofs in the Root
Logic with its theories. It is even possible that some existing proof
tools might be modified to translate their internal justifications
into explicit proofs in the Root Logic. Of course the correctness of
derivations in the Root Logic must be efficiently checkable without
any fancy proof search; all proof search is done by whatever front-end
is used. Thus I am assuming that the Root Logic has a reasonably
efficient notation for completely explicit proofs. (See my comments below.)
This suggestion is not an argument against an agreed-upon high-level
logic; but that is a much harder problem than agreeing on a Root
Logic, and it is not necessary to solve this harder problem. Also,
different kinds of mathematics may be more amenable to different
specialized proof tools.
This brings up a point related to Javier's recent comment (these
messages are coming faster than I can write!):
> The mathematics QED should attempt to cover is, I think, the
> fundamental issue we should be discussing now. From a marketing point
> of view, this makes a lot of sense. Using an automobile manufacturer's
> analogy, let's design and produce a car for the market, not find a
> market for a car we designed and produced without consideration of the
> market.
Yes, we should be discussing what mathematics QED will cover, and we
may well design specialized proof tools for these various "markets", as
I've suggested above. However, we should not design the Root Logic (in
which all the different developments of parts of mathematics are put
together) with some particular market in mind. The Root Logic is the
one thing that, once agreed upon, is very difficult to change. It
should be expressive enough to handle unexpected problems.
On the Root Logic
Bob Boyer writes:

> The key point, I think that classicist David Hilbert would say, is
> that WE CAN ALL AGREE to present and discuss essentially all important
> logics and theories in a `finitistic' format, such as PRA admits.
David McAllester writes:

> Although I am sympathetic to the idea that we can all accept primitive
> recursive arithmetic (PRA), I am a little troubled by using it as the
> foundation of QED. Consider the mean value theorem (MVT) of calculus.
> I agree that this theorem can be encoded as "P is a proof in ZFC of
> ZFC-MVT", where ZFC-MVT is some formal statement of the mean value
> theorem in the language of set theory. I am concerned about the
> difficulty of using this entry in the QED library. It seems ...
Larry Paulson writes:

> Mathematics done in simple base logics is not encrypted at all. Isabelle
> supports the illusion that you are working directly in the formalized logic,
> be it ZF, HOL, Constructive Type Theory or what have you. AUTOMATH type
> theories also support a natural proof style.
This topic of representing logics in other logics is not (yet) a
cleaned-up and codified field of mathematics, so different words are
used for the same thing, and the same words for different things.
Let me distinguish three approaches to representing logics, although
they may be combined in different ways, and I'm probably leaving some
out.
In PRA, we use the natural numbers, one canonical inductively defined
class, to encode PR definable classes of well-founded trees, e.g. the
syntax and derivation trees of some object logic. This is really
encryption, as McAllester said, and clearly depends on the power of
computation over the canonical class.
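As an illustrative sketch of this kind of coding (plain Python, not PRA itself; the function names are my own), here is how syntax trees become natural numbers via a primitive-recursive pairing function:

```python
# Sketch of the PRA-style "encryption": syntax trees are coded as natural
# numbers via a primitive-recursive pairing function. Names are illustrative.

def pair(x, y):
    # Cantor pairing: a primitive-recursive bijection N x N -> N
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # inverse of the pairing function, recovered by bounded search
    w = 0
    while (w + 1) * (w + 2) // 2 <= z:
        w += 1
    y = z - w * (w + 1) // 2
    return (w - y, y)

# Code a formula tree, using numeric tags for the constructors
VAR, IMP = 0, 1

def encode(t):
    if t[0] == VAR:
        return pair(VAR, t[1])
    return pair(IMP, pair(encode(t[1]), encode(t[2])))

f = (IMP, (VAR, 0), (VAR, 0))   # the formula  v0 -> v0
code = encode(f)
```

Everything about the object logic ("is this a well-formed formula?", "is this a correct derivation?") then becomes arithmetic on such codes, which is why this representation feels like encryption.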
In Feferman's FS0, we can directly construct classes of finitely
branching well-founded trees, and use these classes to represent,
isomorphically, the trees of syntax and derivations of some object
logic. This is a lot less like encryption, and a lot more natural,
than in PRA. These two ways of representing a logic are similar in
that they employ the power of the meta-logic to represent and compute
over inductively defined classes; the terms and derivations of the
object logic are objects of the meta-logic. One might, for example,
quantify over derivations (well, not in PRA, but in FS0), and do
induction over derivations to prove admissible rules of the object
logic. This is the point Feferman is making in [1] when he says
(sect. 21) "Examples of elementary meta-logical theorems about Prov(S)
which can be proved in FS0 ... are the Deduction Theorem and the ..."
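To make the contrast with the numeric coding concrete, here is a sketch of this second style (plain Python with invented names; FS0 itself is a formal theory, not a programming language): the object logic's formulas form an inductively defined class of finitely branching trees, represented directly, with meta-level functions computing over them by structural recursion.

```python
# The object logic's formulas as trees, represented directly (no numeric
# coding); meta-level functions work by structural recursion over the class.

def Var(n):    return ("var", n)
def Imp(a, b): return ("imp", a, b)

def subformulas(f):
    # structural recursion over the inductively defined class of formulas
    if f[0] == "var":
        return [f]
    return [f] + subformulas(f[1]) + subformulas(f[2])

f = Imp(Var(0), Imp(Var(1), Var(0)))   # the formula  v0 -> (v1 -> v0)
```

Because formulas and derivations are first-class objects here, structural induction over them is available, which is what makes meta-theorems like the Deduction Theorem provable rather than merely executable.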
The third approach for representing logics is different, and can be
used in meta-logics with no inductively defined classes, and no power
to construct them. It is used in Automath, and more recently by
Martin-L\"of, and also by the Edinburgh Logical Framework (LF) [2]. The
idea is to use the inductive structure of the meta-theory itself to
represent the inductive structure of the object theory. I'm being
vague because I don't know how to explain this other than by example:
see [2], [3], or [4]. LF is a first-order logic; its consistency is
provable in PA, and it represents exactly the same functions as simply
typed lambda calculus (much less than PR), yet it can faithfully
represent a great many powerful object-logics. This method is also
used in some higher-order meta-logics. For example some versions of
Automath had some second-order power. Isabelle, as mentioned by Larry
Paulson in this mailing list, uses this method of representing logics.
When Larry says "Isabelle supports the illusion that you are working
directly in the formalized logic" he means using this form of
representation. Isabelle is higher-order (but without dependent
types), but for representing logics in this way it really needs little
more than first order. Lambda-prolog also uses this third approach.
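A tiny sketch of this third style, transposed into Python (the names Term, Pred, Imp, and All are mine, not any system's actual syntax): the object logic's binder is represented by a meta-level function, so instantiation is just function application.

```python
# Higher-order abstract syntax: the object-level binder is a meta-level
# function, so the meta-language handles substitution and binding for us.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Term:                      # individual terms of the object logic
    name: str

@dataclass
class Pred:                      # atomic formula P(t)
    name: str
    arg: Term

@dataclass
class Imp:                       # object-level implication
    left: Any
    right: Any

@dataclass
class All:                       # the binder: body is a META-level function
    body: Callable[[Term], Any]

# "forall x. P(x) -> P(x)": the bound variable is a lambda argument
phi = All(lambda x: Imp(Pred("P", x), Pred("P", x)))

# Instantiation at a term is function application; substitution and
# capture-avoidance come for free from the meta-language.
inst = phi.body(Term("c"))
```

Note that in a meta-language with unrestricted recursion, the function space also contains "exotic" functions that case-split on their argument and correspond to no piece of object syntax; that is exactly the adequacy worry for meta-logics stronger than LF.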
(The question of how strong the meta-logic has to be to use this
third approach has been somewhat quantified by Amy Felty in a series
of papers.)
This third approach sounds pretty good. Larry says he can "support
the illusion that you are working directly in the formalized logic",
and I've just made a point that it can be carried out in very weak
meta-logics. What's the problem? I see two major issues: how many
logics can be represented in this way, and how easy are the
representations to use. (Both of these points are hinted at by
Feferman in [1].)
How Many Object-Logics are Representable
The representations in this third style use the connectives of the
meta-logic to represent meta-connectives of the object logic. For
example, the implication of the meta-theory is used for "consequence"
(the horizontal line in rules) of the object logic (see [2]). Is it
possible to "faithfully and adequately" represent every object logic
in this way? (Clearly "every object logic" is too vague to be
meaningful.) This is addressed in [4] for LF, and is a technical
question; even the definition of "faithful and adequate" is not clear.
It is even more problematic for stronger meta-logics than LF; for
example, if functions are definable by Primitive Recursion, then the
function type A->B, which is used to code syntactic abstraction (i.e.
a B-thing with one A-thing hole), contains more than just syntactic
abstractions. The correctness of representations in the first two
approaches mentioned above is more straightforward: one just builds
copies of the object-logic's informal term, formula and derivation
trees. Similarly, more object-logics are representable with the first
two approaches: we just build the derivation trees. Notice, however,
that for FS0 it is still possible to find naturally occurring formal
systems that are not representable: FS0 only represents finitely
branching trees, not the so-called "generalised inductive
definitions", which include infinitely branching well-founded trees.
(See [5] for an example of a use of generalised inductive definitions in
formalizing everyday mathematics. By the way, does anyone know of a
philosophical school that accepts inductive definitions but rejects
generalized inductive definitions where the infinite branching is
given by a finite rule?)
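For concreteness, here is a sketch (illustrative Python, my own names) of an infinitely branching well-founded tree where the branching is given by a finite rule: Brouwer-style ordinal notations, whose limit nodes have one child for each natural number, all produced by a single function.

```python
# Ordinal notations as well-founded trees; a Lim node branches infinitely,
# but its infinitely many children are given by one finite rule (a function).

def Zero():  return ("zero",)
def Suc(o):  return ("suc", o)
def Lim(f):  return ("lim", f)   # f maps each natural number to a subtree

def fin(n):
    # the finite ordinal n as a notation
    o = Zero()
    for _ in range(n):
        o = Suc(o)
    return o

omega = Lim(fin)   # one node with children fin(0), fin(1), fin(2), ...
```

This is the kind of class FS0, with only finitely branching trees, cannot construct, even though each tree here is well-founded and its branching is completely determined by a finite piece of data.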
How Easy is it to Use the Representations
In the third approach, the terms, formulas, and derivations of the
object logic are not objects of the meta-logic. For one thing, you
cannot do induction over them, so the meta-theory of the object logic
is not provable. (How much meta-theory you can do may depend on how
much extra power the meta-logic has. In Isabelle's representation of
first-order logic, is the Deduction Theorem provable? It is certainly
not provable in the LF presentation of FOL.) In such a representation
the Deduction Theorem is only representable as a tactic; i.e., a program
in some meta-language of the framework that actually transforms
represented derivations of A |- B (these are terms of the framework)
into derivations of |- A->B (also terms of the framework).
Correctness of this tactic cannot even be stated in the framework.
(Of course, we don't have to state or prove correctness of a tactic to
use it, but actually running tactics blows up the size of proof
trees!) In FS0, the Deduction Theorem for a representation of FOL is
just a theorem of FS0. This is how mathematics is actually done!
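The blowup is easy to see in a toy setting. The sketch below (plain Python, my own encoding, not any actual framework's tactic language) implements the Deduction Theorem as a program over derivation trees for a Hilbert system with axiom schemes K and S and modus ponens: it consumes a derivation of B from assumption A and emits a strictly larger derivation of A -> B.

```python
# Formulas: atoms are strings; ("->", a, b) is implication.
def imp(a, b): return ("->", a, b)

# Derivations are trees whose nodes record their conclusion.
def ax(f):  return ("ax", f)     # an instance of scheme K or S (unchecked)
def hyp(f): return ("hyp", f)    # an undischarged assumption

def concl(d):
    return d[1] if d[0] in ("ax", "hyp") else d[3]

def mp(d1, d2):                  # modus ponens: from X -> Y and X, infer Y
    _, x, y = concl(d1)
    assert concl(d2) == x
    return ("mp", d1, d2, y)

def K(a, b): return ax(imp(a, imp(b, a)))
def S(a, b, c):
    return ax(imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c))))

def deduce(a, d):
    """The Deduction Theorem as a 'tactic': turn a derivation d of B
    (possibly using hypothesis a) into a derivation of a -> B."""
    b = concl(d)
    if d[0] == "hyp" and b == a:
        # derive a -> a from S and K
        return mp(mp(S(a, imp(a, a), a), K(a, imp(a, a))), K(a, a))
    if d[0] in ("ax", "hyp"):
        return mp(K(b, a), d)    # from B and B -> (a -> B), infer a -> B
    _, d1, d2, _ = d             # d is modus ponens from C -> B and C
    c = concl(d2)
    return mp(mp(S(a, c, b), deduce(a, d1)), deduce(a, d2))

def size(d):
    return 1 if d[0] in ("ax", "hyp") else 1 + size(d[1]) + size(d[2])

# A -> B, A |- B, then discharge the hypothesis A:
d  = mp(hyp(imp("A", "B")), hyp("A"))   # 3 nodes
d2 = deduce("A", d)                     # proves A -> B; 11 nodes
```

Here a 3-node derivation becomes an 11-node one, and the transformation runs again at every discharged hypothesis; in a logic like FS0 the same theorem is stated and proved once, then simply cited.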
Which Root Logic?
It may sound like I am putting forward FS0, but I am only using FS0 as
an example of a certain style. I am arguing against the
LF/Isabelle/Automath style of representation. I am arguing for a Root
Logic that can express inductively defined classes. How strong is the
Root Logic? How complex are the inductively defined classes it can
express? I have argued that FS0 is slightly deficient in only having
finitely branching trees. (Feferman suggests this can be easily
fixed.) I only happen to know from experience [5] that these are very
useful. The QED project needs to collect this kind of experience in
order to make reasonable decisions on the Root Logic.
Another weak but expressive logic, very different in character from
FS0, is the Martin-L\"of logical framework, presented somewhat
informally in [6]. It has generalized inductive definitions, and is
acceptable to most constructivists.
References

[1] S. Feferman, "Finitary inductively presented logics", Logic
Colloquium '88, Ferro, Bonotto, Valenti and Zanardo (eds.), Elsevier,
North-Holland, 1989, pp. 191-220.

[2] Harper, Honsell and Plotkin, "A Framework for Defining Logics",
JACM, Jan. 1993.

[3] Avron, Honsell, Mason and Pollack, "Using Typed Lambda Calculus to
Implement Formal Systems on a Machine", JAR 9:309-354, 1992.

[4] Philippa Gardner, "Representing Logics in Type Theory", PhD thesis,
Computer Science Dept., University of Edinburgh, July 1992.

[5] McKinna and Pollack, "Pure Type Systems Formalized", TLCA'93
(Utrecht), LNCS 664, March 1993. Also available by anonymous ftp from
ftp.ed.ac.uk; cd to export/lego and get README.

[6] Nordstr\"om, Petersson and Smith, "Programming in Martin-L\"of's
Type Theory", Oxford University Press, 1990.