Date: Sun, 25 Apr 93 00:49:19 -0400
Subject: Boyer's hypothesis
One assumption behind the QED project is what I will call the BOYER
HYPOTHESIS. My reason for giving it this name is the following: Bob
Boyer has been polling various people involved in automated theorem
proving for some time now, and put this idea forward based on the
evidence provided to him by his informants.
For any theorem, the length l_c of its computer proof is not
significantly larger than the length l_i of its rigorous (but possibly
informal) proof.
Obviously there is a lot of vagueness in the above formulation of the
Boyer Hypothesis, but current evidence seems to suggest that the
statement does not depend too strongly on how the various quantities
are measured. Moreover, current evidence also suggests the Boyer
constant l_c/l_i is < 100.
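To make the quantity concrete, here is a minimal sketch of how one might tabulate the Boyer constant from measured proof lengths. All figures below are invented for illustration only; real data would come from actual formalization efforts.

```python
# Hypothetical (informal length l_i, computer length l_c) pairs,
# measured in lines of text. These numbers are made up for
# illustration; they are not real measurements.
samples = [
    (20, 900),
    (15, 1200),
    (40, 2600),
]

# Ratio l_c / l_i for each theorem.
ratios = [lc / li for li, lc in samples]

# Taking the worst case over the sample as an estimate of the
# Boyer constant for this (imaginary) body of theorems.
boyer_constant = max(ratios)

print(boyer_constant)  # 80.0, which is < 100, consistent with the hypothesis
```

Under the hypothesis, this worst-case ratio should stay bounded (current evidence suggesting below 100) as the sample grows to cover more of mathematics.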
I would like to propose that we ask ourselves how likely is it that
the Boyer constant is of this order of magnitude for current research
mathematics (by current I mean mathematics done within the last 30
years). I think this is important, because I feel that what has been
done with automated tools is still an insignificant part of
mathematics and includes virtually nothing from algebraic topology,
partial differential equations, algebraic number theory, etc.
If it turns out that the Boyer constant for proofs in these areas is
10000, then much of what we have been saying in this forum about
"doing real mathematics" is highly speculative and may border on fantasy.
Unfortunately, I do not have a clue how one would answer this question
in a convincing way other than by experimentation -- which seems to go
against the aprioristic program suggested earlier in this discussion
by several contributors.