
Frege's Mathematical Setting

Mark Wilson, University of Pittsburgh

I have not yet any clear view as to the extent to which we are at liberty arbitrarily to create imaginaries, and to endow them with supernatural properties
—John Graves, ‘Letter to William Hamilton’1

Introduction

Although Gottlob Frege was a professional mathematician, trained at one of the world's greatest centers for mathematical research, it has been common for modern commentators to assume that his interests in the foundations of arithmetic were almost entirely ‘philosophical’ in nature, unlike the more ‘mathematical’ motivations of a Karl Weierstrass or Richard Dedekind. As Philip Kitcher expresses the thesis:

The mathematicians did not listen [to Frege because]... none of the techniques of elementary arithmetic cause any trouble akin to the problems generated by the theory of series or results about the existence of limits.2

Indeed, Frege's own presentation of his work easily encourages such a reading. Nonetheless, recent research into his professional background reveals ties to a rich mathematical problematic that, pace Kitcher, was as central to the 1870's as any narrow questions about series and limits per se. An appreciation of the basic facts involved (which this essay will attempt to describe in non-technical terms) can only heighten our appreciation of the depths of Frege's thought and of the persistent difficulties that any adequate philosophy of mathematics must confront. Although it may be possible to appreciate Frege's approach to language on its own terms, some awareness of the rather unusual examples that he encountered in the course of his mathematical work can only enhance our understanding of his motivations within linguistic philosophy as well.

In the most general terms, the ontological world of nineteenth century mathematics expanded far beyond its traditionally circumscribed boundaries, a phenomenon that first became evident in the extension element problems that we shall emphasize in this essay. In response, a philosophy of relative logicism emerged that sought to explain the mysterious new entities as logical constructions of some sort or other. The absolute logicism that Frege proposed with respect to the regular number systems can be viewed as a natural outgrowth of, and improvement upon, these established logicist traditions. Many of Frege’s methodological remarks enjoy a sharper piquancy, I believe, if they are examined against this richer mathematical backdrop (Frege rarely draws explicit notice to such issues, but his central examples (‘the direction of a line’) often represented commonplaces within the prior discussions). Beyond its relevance to Frege’s thinking, an acquaintance with the extension element problem can revive our own appreciation of the weird wonders of the philosophy of mathematics, lest we forget about the unexpected factors that frequently force mathematics to alter its course in uncharted ways.

Extension elements within geometry

In working on a mathematical problem, we often find it hard to reason rigorously directly from point A to point B, due to some barrier (imagine a large mountain as a metaphor) lying between the points. However, we can sometimes espy another location C outside the borders of our native country that would sustain an easy path A → C → B. An early illustration of this phenomenon dating to the 1530's can be found in the problem of extracting the roots of cubic and quartic equations: mathematicians such as Gerolamo Cardano uncovered algebraic techniques that eventually led to the real roots desired, but their computational pathways wandered through strange intermediate values such as ‘-3 + √-2’. As time wore on, these intervening ‘imaginary’ (or ‘complex’) numbers gradually assumed a vital importance within mathematical practice generally, but the exact rationale of their employment, beyond raw expediency, remained hazy.
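To fix ideas, here is Bombelli's classic instance of the phenomenon (a standard textbook reconstruction in modern notation, supplied here for illustration): the cubic x³ = 15x + 4 possesses the perfectly real root 4, yet Cardano's formula reaches that root only by detouring through imaginary intermediates.

```latex
% Cardano's formula applied to x^3 = 15x + 4 (so p = 15, q = 4):
\[
x \;=\; \sqrt[3]{\,2 + \sqrt{-121}\,} \;+\; \sqrt[3]{\,2 - \sqrt{-121}\,}
  \;=\; \big(2 + \sqrt{-1}\big) + \big(2 - \sqrt{-1}\big) \;=\; 4,
\]
% using Bombelli's observation that (2 + \sqrt{-1})^3 = 2 + 11\sqrt{-1}.
```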

In the first half of the nineteenth century, a plethora of ‘foreign elements’ had invaded many traditional mathematical topics and it became clear that a modernized philosophy of mathematics was needed to rationalize their employment. For example, after analytic geometry (= the use of algebraic equations to represent geometrical facts such as ‘line L crosses circle C’) had been invented in the 1600's by Descartes and Fermat, it was quickly observed that the conclusions of standard Euclidean argumentation could often be replicated by swift manipulation of formulae. Furthermore, the algebraic pathway to a geometrical conclusion can often obtain its result without engaging in all of the delicate fussing about subcases that one finds in Euclid (proofs in his traditional diagram-based, constructive style are commonly called ‘synthetic,’ in contrast to an ‘analytic’ approach relying upon algebra). What secret power permits this dramatic algebraic simplification? Inspection of such ‘analytic’ proofs indicates that their reasoning pathways often travel through intermediate ‘points’ bearing strange coordinate locations such as <2.5, -3 + √-2>.
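A quick symbolic computation displays algebra volunteering such ‘points’ unbidden (a minimal modern sketch using Python's sympy library; nothing of the sort appears in the historical sources, of course):

```python
import sympy as sp

x, y = sp.symbols('x y')

# A unit circle and a line passing well above it: no visible intersection.
circle = sp.Eq(x**2 + y**2, 1)
line = sp.Eq(y, x + 2)

# The algebra nonetheless delivers two 'intersection points' whose
# coordinates are complex: x = -1 +/- sqrt(2)*I/2, y = x + 2.
for sol in sp.solve([circle, line], [x, y]):
    print(sol)
```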

But how can reasoning patterns developed with respect to numerical values produce such surprising unifications within geometry? Various English mathematicians of the early nineteenth century articulated a somewhat mystical faith that the blind application of algebraic algorithms must always lead to correct results, even if the paths pursued seem completely unintelligible—this point of view was often called the generality of algebra. On this reckoning, the imaginary points arrive, in the phrase of the mathematician Hermann Hankel, as ‘a gift from algebra.’ Bertrand Russell expressed the obvious objection to this facile manner of thinking:

As well might a postman presume that, because every house in a street is uniquely determined by its number, therefore there must be a house for every imaginable number.3

Indeed, a brute appeal to algebraic formalism is plainly an inadequate rationale for introducing novel entities into mathematics—as Russell suggests, one would quickly reach ridiculous conclusions if one applied ‘the generality of algebra’ within all walks of life.

In the 1820's, a number of synthetic geometers, led by J. V. Poncelet, decided that the simplifications offered by algebraic proof must spring from a deeper source: namely, the world of standard geometry could be greatly improved if mathematicians would tolerate a variety of ‘extension elements’ lying just outside the perimeters that circumscribe traditional Euclidean thinking (in the manner that convenient location C lies beyond the borders of our mountainous country). These early ‘projective geometers’ typically justified their unseen supplements through appeals to ‘persistence of form,’ a methodological doctrine we shall explore in a moment. However, this thesis is greatly troubled by its own inherent vagueness and, by Frege’s time, most rigor-minded mathematicians had replaced appeals to ‘persistence of form’ with relative logicism: the claim that the extension elements can be justified using purely logical resources alone. The methodological appeal of this newer point of view has diminished over time, its precepts having been displaced in turn by Hilbert-style axiomatics at the turn of the twentieth century (in a manner to be considered at the end of this essay). Frege’s methodological motivations have often been misunderstood, I believe, through a failure to properly locate their placement within these forgotten relative logicist traditions.

Let us now examine how the original ‘persistence of form’ thinking operated, because relative logicism should be seen as an adaptation of its basic contours (here, and in several other portions of the essay, I will include details that a casual reader may wish to skip; they are provided to help interested parties find their way through the standard history of mathematics literature). Let’s begin with a simple circle C and a line L running through C. Their intersections engender two regular points a and b. We will now tell a story of how a and b can become invisible if their surrounding geometrical relationships become altered in a natural way. We will set up a somewhat complicated web of geometric constructions around a, b, C and L and then slowly adjust their internal relationships so that a and b seem as if they have been merely ‘pushed off the page.’ First locate the exterior point p such that rays emerging from p intersect the circle tangentially at a and b (vide the lefthand side of our diagram). This p is called the pole of the line L relative to C while L reciprocally serves as the polar of p (such ‘pole and polar’ arrangements possess many interesting geometrical properties). Note that an arbitrary ray from p will typically cross C in two spots, linking together pairs of points on the circle in the match-ups indicated by the small numbers. Let’s now choose an arbitrary new point v upon C and use it to project our C-based match-ups onto the line L. Here we should think of v as acting like the lamp in a film projector that projects the circular match-ups registered within C’s ‘film’ onto the ‘screen’ represented by line L (this variety of mathematics is called ‘projective geometry’ precisely because it studies the transfer of ordering arrangements from one surface to another). The projected image seen on L will be a nested mapping of points to one another known as an involution. Such mappings display a host of special geometric properties prized by the ancient geometers, including the fact that the distances x and x* of the paired points will obey the relationship x·x* = +D within a suitable coordinate system. Plainly, our original a and b points serve as the two centers of this nested involution in an obvious way (their locations x satisfy the ‘self-correspondent’ condition x·x = +D).
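For readers who would like coordinates to hang onto, here is a minimal worked instance of the pole and polar relationship (my own choice of numbers): relative to the unit circle C, the polar of a point p = (p1, p2) is the line p1x + p2y = 1.

```latex
% Unit circle C: x^2 + y^2 = 1 with exterior pole p = (2, 0).
% Its polar L is the line 2x = 1, i.e. x = 1/2, which cuts C at
\[
a, b \;=\; \left(\tfrac{1}{2},\; \pm\tfrac{\sqrt{3}}{2}\right),
\]
% precisely the two points where the tangent rays from p touch the circle.
```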


Here are the considerations that encourage ‘persistence of form’ thinking. Picture our diagram’s maze of lines as if they constitute a little mechanism whose moveable parts are linked to one another. Let’s now adjust those parts by gradually pushing the pole point p inside C (conceive of p as a lever that forces the other parts of the diagram to shift positions). We can easily ‘see’ what will happen: as p moves towards C, its polar L will shift in the opposite direction, eventually leading to the situation pictured on the righthand side of the diagram, with p inside C and L outside. Our ‘lamp’ v continues to project an involution match-up onto L but its point-wise associations will become overlapping after p crosses into C. And, finally, our two erstwhile centers a and b seem to vanish.

Or do they? Why not assume a and b remain present, but have merely become invisible through being pushed off the page? That is, the collective geometrical ‘mechanisms’ on the left and right sides of our diagram should be regarded as essentially the same, except that their a and b parts can no longer be ‘seen’ in righthand circumstances. Thus ‘persistence of form’: we conclude that our diagram’s missing a and b are still present in righthand circumstances, because the same geometrical unities (= ‘form’) preserve themselves as we gradually adjust our diagrams. Revisiting algebra’s ability to simplify traditional proofs from this new point of view, we recognize that algebra obtains its unificatory advantages by automatically supplying imaginary coordinate names to extension elements that, properly speaking, should have been added to traditional geometry through ‘persistence of form’ considerations. That is, ‘the generality of algebra’ achieves its apparent successes within a geometrical context only through a happy accident: its computational procedures happen to provide names for the auxiliary elements required to keep the organic ‘mechanisms’ of Euclidean geometry intact under adjustments in ‘form.’ Considering our involution equation x·x* = +D, we find that, as p moves toward C, the originally positive constant gradually shrinks to 0 and changes sign once p moves inside C, so that the overlapping involution obeys x·x* = -D for some positive D. Solving the ‘self-correspondent’ condition x·x = -D for its ‘centers,’ we find that our missing centers a and b take up the imaginary coordinate locations +√-D and -√-D along L. So our invisible a and b are not truly ‘gifts from algebra’; their real sources are the invariant properties contained in our family of geometrical constructions.
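Continuing the same worked example (again my own numbers): push the pole inside to p = (1/2, 0). Its polar becomes the line x = 2, now lying wholly outside the circle, and the erstwhile tangency points a and b reappear bearing imaginary coordinates, exactly as the √-D computation predicts.

```latex
% The polar of p = (1/2, 0) relative to x^2 + y^2 = 1 is the line x = 2.
% 'Intersecting' that line with the circle:
\[
4 + y^{2} = 1 \;\Longrightarrow\; y^{2} = -3 \;\Longrightarrow\;
a, b = \left(2,\; \pm\sqrt{-3}\right).
\]
```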

Once this extraordinary ontological gambit is accepted, we realize that traditional Euclidean proofs often became complicated because they could not cite those vital parts of the general geometrical ‘form’ that had been pushed into ‘invisibility.’ This prohibition forced traditional argumentation to work around the missing pieces by dividing a proof into a large number of subcases, distinguished from one another according to their missing parts. Restoring the invisible ingredients allows a modernized geometry to treat a multitude of cases in a unitary manner.

An allied simplification of Euclidean proof can also be achieved if we tolerate a supplementary line at infinity such that, if an ellipse is moved across its bounds, the full figure will reappear in our ‘local space’ as a hyperbola (such an identification of seemingly different figures again permits a great simplification in our proofs). In the illustration, our regular local Euclidean plane has been contracted to lie inside the gray circle, so that the manner in which an ellipse moves across the line at infinity can be observed (note that diametrically opposite points along the line at infinity are to be identified). Observe that normally parallel lines will now intersect at points upon this infinitely distant line—we will revisit these ‘points at infinity’ in our discussion of Frege’s Grundlagen.

In 1851 the mathematician H.J.S. Smith explained the ‘persistence of form’ doctrine as follows:

[I]f we once demonstrate a property for a figure in any one of its general states, and if we then suppose the figure to change its form, subject of course to the conditions with which it was first traced, the property we have proved, though it may become unmeaning, can never become untrue, even if every point and every line, by means of which it was originally proved, should wholly disappear.5

Because our diagram’s basic ‘pole and polar’ properties persist after point p moves inside C, we may postulate ‘unmeaning’ (= without representation in intuition) ideal points to fill out the interconnected mechanism that neatly explains why we continue to see the same basic traits after the adjustments. It is obvious that such an unrefined methodology can easily lead to gross error if one posits ‘persistent elements’ in the wrong places. Relative logicism, in fact, was proposed as a tamer doctrine that could add the supplementary objects to traditional subject matters without the risk of horrible error. In doing so, most relative logicist approaches work with the same evaluative concepts as stand at the center of the original ‘persistence of form’ arguments, but they handle these extensions with a greater display of methodological control. With respect to our invisible ‘points’ a and b, most relative logicist treatments focus upon the property ‘sets up an involution along L based upon C and v’ as the ‘evaluative concept’ invoked on the road to constructing ‘logical elements’ that serve as a and b surrogates in a sounder manner.

Although we can’t survey such issues here, many of the rival methodologists that Frege criticized (e.g., Hermann Schubert) maintained that unsupplemented appeals to ‘persistence of form’ can provide an adequate defense for conceptual innovation within mathematics (including the introduction of the natural numbers in the first place). On such critics’ behalf, we might note that ‘persistence of form’ doctrines directly highlight the epistemological considerations that actually inspired the postulation of the extra mathematical entities, whereas the motives that drive conceptual development within mathematics are often left obscure in logicist accounts.

Extension elements within number theory

Relative logicism’s career can’t be completely appreciated without some knowledge of parallel developments that arose in connection with the ‘ideal numbers’ of algebraic number theory (once again, the unconcerned reader may skim this section).

The original impetus for introducing ‘ideal numbers’ came when C.F. Gauss wrote of ‘complex integers’ in his studies of biquadratic residues, work extending themes of his Disquisitiones Arithmeticae of 1801.7 In themselves, ‘complex integers’ are nothing new; they simply comprise numbers of the form a + bi where a and b are normal integers (= ‘whole numbers’) and i = √-1. One of the most salient facts about the regular integers is that they break uniquely into prime factors (i.e., 24 can only be expressed as 2×2×2×3) whereas a more general number such as π or 6 - 2i can be decomposed into myriad sets of factors. The great advantage of possessing prime factors is that they allow a great deal of control over the integers that is typically lost within the more amorphous realms of number. But Gauss realized that if we remain within a restricted orbit of complex numbers—his ‘complex integers’—then a variety of unique factorization persists within this enlarged realm, with all the advantages to be gained therefrom. Unique factorization then allowed Gauss to answer certain important questions in number theory easily, e.g. how to characterize all integers whose fourth powers give a remainder of n when divided by p. As in our geometrical case, once we envision the regular integers as enriched with a slightly extended halo of ‘complex integers’, the commonalities of behavior amongst the regular integers become more transparent. This fact impressed Gauss greatly:

It is simply that a true basis for the theory of the biquadratic residues [i.e., the questions about fourth powers] is to be found only by making the field of the higher arithmetic, which usually covers only the real whole numbers, include also the imaginary ones, the latter being given full equality of citizenship with the former. As soon as one has perceived the bearing of this principle, the theory appears in an entirely new light, and its results become surprisingly simple.8
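A concrete sample of the enlarged realm's behavior (standard facts about the complex integers, supplied here for illustration): some ordinary primes split into complex-integer factors while others remain prime, and tracking which do which supplies exactly the added control Gauss describes.

```latex
% Within the complex integers, 2 and 5 factor further; 3 does not:
\[
2 = -\sqrt{-1}\,\big(1 + \sqrt{-1}\big)^{2}, \qquad
5 = \big(2 + \sqrt{-1}\big)\big(2 - \sqrt{-1}\big), \qquad
3 \text{ remains prime.}
\]
```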

In the 1840's, treating matters related to Gauss' investigations and to Fermat's ‘last theorem’, E. E. Kummer realized that unique factorization becomes lost again as we move out to further collections of generalized ‘integers.’ Consider the ‘algebraic integers’ that arise when √15 is adjoined to the rational numbers. In this range of numbers, 10 breaks into irreducible factors in two distinct ways: as 2·5 and (5 + √15)(5 - √15). If we only had further factors to work with, e.g. √5 and √3, unique factorization could be restored in this realm because 2 = (√5 + √3)(√5 - √3), 5 = (√5)², 5 + √15 = √5(√5 + √3) and 5 - √15 = √5(√5 - √3). In such terms, 10 can be seen as ‘really’ decomposing into (√5)(√5)(√5 + √3)(√5 - √3)—the apparent non-unique factorizations of 10 arising as these four basic ingredients get paired up in different ways. But we can’t remedy the situation by simply including √5 and √3 along with the numbers generated by √15, because that closure will include a lot of values we don't want. Kummer finessed these difficulties in an intriguing way. He would only add an unspecified ‘ideal number’ to the √15 field to capture the highest commonality between 2 and 5 + √15, without identifying the missing ‘factor’ concretely with ‘√5 + √3’ or any other concrete representation of that type. Instead, he let the pairing9 ‘(2, 5 + √15)’ name his desired ‘ideal number’ and observed that other pairs such as ‘(4, 10 + 2√15)’ must denote the same ‘ideal factor.’ He wrote:

In order to secure a sound definition of the true (usually ideal) prime factors of complex numbers, it was necessary to use the properties of prime factors of complex numbers which hold in every case and which are entirely independent of the contingency of whether or not actual decomposition takes place; just as in geometry, if it is the question of the common chords of two circles even though the circles do not intersect, one seeks an actual definition of these ideal common chords which shall hold for all positions of the circles. There are several such permanent properties of complex numbers which could be used as definitions of ideal prime factors..., I have chosen one as the simplest and most general... One sees therefore that ideal prime factors disclose the inner nature of complex numbers, make them transparent, as it were, and show their inner crystalline nature.10

In other words, a specific group of algebraic numbers may cry out for supplementary ‘ideal factors’ to consolidate their behaviors into a fully satisfactory domain. In a famous letter to Kronecker, Kummer compares this enlargement process to the postulation of unseen elements in chemistry (an apt comparison because, in the chemical doctrine of Kummer's time, such ‘elements’ were never supposed to appear in ‘naked’ form in nature, rather like the quarks of modern science). Note that Kummer also aligns his practices with the geometrical circumstances we have surveyed.
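The factorization identities displayed earlier can be checked mechanically (a sketch using Python's sympy library; note that √5 and √3 live outside the √15 realm, which is exactly why Kummer's ‘ideal number’ must remain officially unspecified):

```python
import sympy as sp

r15, r5, r3 = sp.sqrt(15), sp.sqrt(5), sp.sqrt(3)

# The two rival factorizations of 10 within the sqrt(15) realm:
assert sp.expand((5 + r15) * (5 - r15)) == 10
assert 2 * 5 == 10

# The finer 'ideal' factors that would reconcile them:
assert sp.expand((r5 + r3) * (r5 - r3)) == 2
assert sp.expand(r5 * (r5 + r3)) == sp.expand(5 + r15)
assert sp.expand(r5 * (r5 - r3)) == sp.expand(5 - r15)

print("10 decomposes as (sqrt5)(sqrt5)(sqrt5 + sqrt3)(sqrt5 - sqrt3)")
```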

‘Free creativity’ and relative logicism

For such reasons, most mathematicians had concluded by 1860 that mathematics no longer needed to confine its researches to the more or less fixed domains characteristic of classical thinking (Euclidean geometry and the real numbers). To be sure, earlier investigators such as Euler had explored the properties of the complex numbers intently, but such researches had been largely ignored by methodologists such as Kant. Under the influence of nineteenth century Romanticism, it became common to assert that the ‘free creativity’ of mathematicians allows them to explore whatever domains they may wish.

But, clearly, unbridled appeal to ‘free creativity’ will easily engender potential problems with respect to rigor and reliability, especially in situations where one’s ‘free creativity’ extends to infinitary domains and processes. A famous illustration of these dangers arose in the context of G.F.B. Riemann’s celebrated work in complex function theory (an episode presumably familiar to Frege, as his teacher Alfred Clebsch had labored to render Riemann’s results mathematically respectable). Riemann had argued that the behavior of such functions can be better understood if they are aligned with so-called ‘Riemann surfaces’, which are spaces that cannot always be understood in regular spatial terms. To prove key facts about his ‘surfaces,’ Riemann relied upon an existence criterion he dubbed ‘Dirichlet's Principle’: if a collection of functions can be graded by positive number assignments, then some minimal function must exist within this set.11 Here’s a simple illustration of what is at issue. Take a wire rim of arbitrary shape and apply a soap film to it. Such a membrane stores internal energy according to its degree of bending; so a calculation of the energy stored within a particular coating will grade that shape in the ‘positive number assignment’ manner required by Dirichlet’s principle. In real life, we intuitively expect that the film will eventually assume an equilibrium configuration that stores energy in a minimal way (sometimes there will be several placements that manage this). Dirichlet's principle simply converts these intuitive expectations into a general principle. But Karl Weierstrass showed that this assumption cannot be true in general. Let our ‘rim’ consist of a regular oval plus a single point above its center. Now consider the sequence of bell-like patterns illustrated, where our film attaches to our oval and point in the manner required. As we progressively examine the sequence of shapes C1, C2, ..., we find that their total degree of bending continuously decreases but never reaches a minimum. Their limit C∞ displays a discontinuous jump that disqualifies it from counting as a true soap film at all. We have thus constructed a descending sequence of positive energy films whose lower bound does not represent a mathematical object of the same type as the Ci, contrary to Dirichlet’s principle. Without some deep repair, brute appeals to Dirichlet's Principle cannot be regarded as reliable.
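Weierstrass's published counterexample can be stated even more compactly than the soap-film picture (the standard modern rendering, paraphrased here): among smooth functions y(x) with y(-1) = -1 and y(1) = 1, the ‘energy’ below can be driven as close to 0 as we please, yet no admissible function attains 0.

```latex
\[
J[y] \;=\; \int_{-1}^{1} x^{2}\, y'(x)^{2}\, dx \;>\; 0
\quad\text{for every admissible } y,
\qquad \inf_{y} J[y] \;=\; 0 .
\]
% The family y_e(x) = arctan(x/e)/arctan(1/e) drives J toward 0 as e -> 0,
% but a zero-energy 'minimizer' would need y' = 0 wherever x is nonzero,
% forcing a discontinuous step at the origin; it would no longer be an
% admissible function, just as the limit C-infinity fails to be a soap film.
```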

Such failures of intuitive expectation where infinite collections are concerned led many nineteenth century mathematicians to decide that only logic could properly settle what occurs when such limits are reached. In particular, Richard Dedekind observed that normal Euclidean ruler and compass constructions will not install all the points we wish upon a straight line, but will only carry us to positions expressible through iterated square roots, such as √2. But Kantian spatial intuition can only certify the presence of points of this limited ilk, leaving a line with a lot of unfilled gaps in it. Dedekind maintained that the plugging of these ‘holes’ was tacitly prompted by logical thinking on the part of the mathematician, not by any variety of true geometrical intuition:

All constructions that occur in Euclid's Elements, can, so far as I can see, be just as accurately effected [in an algebraically constructed discontinuous] space; the discontinuity of this space would not be noticed in Euclid's science, would not be felt at all.... All the more beautiful it appears to me that without any notion of measurable quantities and simply by a finite number of simple thought-steps man can advance to the creation of the pure continuous number domain; and only by this means in my view is it possible for him to render the notion of continuous space clear and definite.12

In such doctrines, the thesis I have dubbed ‘relative logicism’ was born: logical thinking has a capacity to create further entities to fill in unwanted gaps within some independently given domain. The doctrine is a relative logicism, because logic requires properties within the preexisting domain to guide its creation of the supplementary entities.13 Thus logical construction becomes viewed as the crucial methodology that allows the ‘free creativity’ of the mathematician to explore enlarged domains of objects unreachable by ‘intuitive’ consideration. Clearly, a logic-based approach might also avoid the vagaries of ‘persistence of form’ doctrine in tackling the extension element problems we have surveyed.

To be sure, both Dedekind and Frege were also absolute logicists with respect to the sundry number systems, which is not surprising, given that such thinking represents a natural extension of the relative logicist point of view (albeit not an obligatory move, for the latter position was accepted by many mathematicians who rejected absolute logicism itself). We shall see that Frege’s own ‘absolute logicist’ thinking was influenced by several relative logicist programs popular in his era.

Frege plainly views the limit-fixing capacities of a proper ‘logic’ in a manner similar to Dedekind’s. In a revealing passage where he compares the merits of his own system of logic to schemes such as George Boole's, he turns this theme to his advantage:

If we look at the [concepts that can be defined in a logic like Boole's], we notice that...the boundary of the concept...is made up of parts of the boundaries of concepts already given...It is the fact that attention is primarily given to this sort of formation of new concepts from old ones...which is surely responsible for the impression one easily gets in logic that for all our to-ing and fro-ing we never really leave the same spot.... [But i]f we compare what we have here with the definitions contained in our examples of the continuity of a function and of a limit and again that of following a series which I gave in §26 of my Begriffsschrift, we see that there's no question there of using the boundary lines we already have to form the boundaries of the new ones. Rather totally new boundary lines are drawn by such definitions—and these are the scientifically fruitful ones.14

Start with simple concepts A, B and C. From these, traditional formal logic could only construct simple compounds like (A & -B) v C, which corresponds to the gray region within the illustrated trio of Euler's circles (a.k.a. ‘Venn diagrams’). Note that the boundary of (A & -B) v C is comprised of arcs from the circles A,B,C. If logic could range no further from home base than that, its powers would truly prove as circumscribed as critics like Kant had assumed. Employing Frege’s richer logic, utilizing both relations and second order quantifiers, we can define the wholly distinct line that serves as the envelope of all the basic circles upon which it depends. This boundary is ‘totally new’, not coincident with any arcs of its spawning circles.
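Frege's envelope can be given a stock concrete computation (my choice of family, not his): for the family of unit circles centered along a line, eliminating the family parameter yields two boundary lines that coincide with no arc of any single member circle.

```latex
% Family F(x, y, t) = (x - t)^2 + y^2 - 1 = 0. Envelope: F = 0 and dF/dt = 0.
\[
\frac{\partial F}{\partial t} = -2(x - t) = 0 \;\Longrightarrow\; t = x
\;\Longrightarrow\; y^{2} = 1 \;\Longrightarrow\; y = \pm 1 ,
\]
% two 'totally new' lines, tangent to every circle yet part of none.
```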

However, we must distinguish between two different logical capacities here: the ability to express what is required in the hypothetical bounding curve and the ability to prove that such a curve actually exists. The first task is accomplished by the system laid down in the Begriffsschrift, but the second requires some supplementary doctrine about the existence of ‘logical objects,’ in the manner, say, of the notorious ‘Axiom V’ of the Grundgesetze. But there were several contemporaneous doctrines about ‘logical object existence’ afloat within the general relative logicist tradition, and to these we shall now turn.

Definition by abstraction and equivalence classes

In 1871, Richard Dedekind suggested both an improvement and a rationalization of Kummer's approach in a famous supplement to Dirichlet's lectures on number theory.15 He asks, in effect, ‘What does Kummer want his “ideal numbers” to do?’ Answer: to serve as divisors of a certain collection of algebraic numbers. ‘Why,’ Dedekind then proposed, ‘don’t we let the entire set of numbers we want divided comprise the missing “ideal number”?’ That is, let us simply replace Kummer’s posited ‘ideal number’ (2, 5 + √15) with the infinite set of numbers it needs to factor, {2, 3, 3 + √15, 5 + √15, 4, ...}, a single gizmo which avoids the multiple representations to which Kummer appeals. Dedekind explained:

[I]t has seemed desirable to replace the ideal number of Kummer, which is never defined in its own right,... by a noun for something that actually exists...16

Dedekind's sets (which he dubbed ‘ideals’) are distinguished by the fact that they are closed under the property that if elements λ and μ are already in the ideal, then so is αλ + βμ, where α and β are any integers of the field in question. Dedekind suggested that we reinterpret Kummer's procedure as follows: rather than adding ‘ideal numbers’ into an original range of numbers N, we should instead climb from N to a new range of objects N* formed by considering all the ‘ideal’ sets that can be manufactured from N. The original members of N become replaced at the N* level by their ‘principal ideal’ surrogates, viz., those sets that simply consist of all multiples of a single N element. The advantage of working within this higher domain of sets is that, unlike in N, unique factorization obtains within N*. This basic format for interrelating structures, where one domain is built from another through set-theoretic processes, is now standard in modern algebra courses, although, historically, it took some time before the equivalence class approach became canonical.
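In the √15 example the repair works out as follows (a reconstruction in modern ideal-theoretic notation, not spelled out in the text): write p = (2, 5 + √15) and q = (5, √15) for the ideals generated by the indicated elements; the generators 5 + √15 and 5 - √15 differ only by the unit 4 + √15, so they span one and the same ideal.

```latex
\[
\mathfrak{p}^{2} = (2), \qquad \mathfrak{q}^{2} = (5), \qquad
\mathfrak{p}\,\mathfrak{q} = (5 + \sqrt{15}) = (5 - \sqrt{15}),
\qquad (10) \;=\; \mathfrak{p}^{2}\,\mathfrak{q}^{2},
\]
% so 2 x 5 and (5 + sqrt(15))(5 - sqrt(15)) are merely different groupings
% of the unique factorization into the ideal primes p and q.
```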

In a similar vein, we could ‘jump up’ to a new realm of geometry G* by considering as its ‘points’ all sets of involution mappings operating over our old-fashioned geometry G. An old-fashioned point in G will reappear within G* as the center of a nested G-involution, whereas the new ‘points’ will correspond to overlapping G-involutions.

The basic trick displayed here—manufacturing ‘new’ entities by forming sets of old objects—is, of course, employed by Frege in his own construction of the natural numbers, which are treated as equivalence classes of concepts whose extensions can be mapped to one another in one-one fashion. As we shall see later, the rationale Frege offers for this process is rather different from that suggested by Dedekind. Nonetheless, both men regarded these set-theoretic constructions as sanctioned by logic. If the ‘laws of thought’ can build the missing elements needed to bring a mathematical domain to satisfactory ontological completion, it appears that we have finally reached a resolution to the puzzle of the extension elements that does not upset mathematics' claims to be both a priori and grounded within intuitive sources of knowledge.
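A toy rendering of the construction may help (my sketch; finite sets stand in for Frege's concept-extensions, and for finite extensions an explicit one-one correlation exists exactly when the tallies agree):

```python
from itertools import combinations

# A tiny universe; its subsets stand in for concept-extensions.
universe = ('a', 'b', 'c')
concepts = [frozenset(s) for r in range(len(universe) + 1)
            for s in combinations(universe, r)]

def equinumerous(F, G):
    # Frege demands a one-one onto correlation of the F's with the G's;
    # for finite extensions such a map exists iff the tallies agree.
    return len(F) == len(G)

def number_of(F):
    # The Number belonging to F: the class of all concepts equinumerous with F.
    return frozenset(G for G in concepts if equinumerous(G, F))

two = number_of(frozenset({'a', 'b'}))
print(len(two))                                  # 3: the two-membered concepts
# Hume's principle as 'recognition judgement': same Number iff equinumerous.
print(number_of(frozenset({'b', 'c'})) == two)   # True
```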

Appeals to equivalence classes will seem quite natural if one regards the novel elements as formed by conceptual abstraction in a traditional philosophical mode: one first surveys a range of concrete objects and then abstracts their salient commonalities. It is possible (but not certain17) that Dedekind viewed his invocation of set theory as simply a mathematical precisification of the ‘abstraction’ process described by earlier logicians. The notion of replacing Kummer's ideal number (2, 5 + √15) by the set ‘ideal’ {2, 3, 3 + √15, 5 + √15, 4, ...} will seem natural because the latter set represents the source objects from whose shared features Kummer ‘abstracted’ his ideal factor. Indeed, the noted geometer Federigo Enriques explicitly rationalized Dedekind's procedures in exactly this vein:

For it can be admitted that entities connected by such a relation [of equivalence class type] possess a certain property in common, giving rise to a concept which is a logical function of the entities in question and which is in this way defined by abstraction.18

In fact, Dedekind pursued the traditional abstractionist story a step further by recommending that, once one has ‘jumped up’ into the required set theoretic realm, we complete the abstractive process by replacing these sets by ‘freely created’ mathematical objects that retain only the properties we really need inside the enlarged realm itself (our set theoretic construction merely serves as a disposable ladder to lift us safely into the autonomous higher realm we seek). In an often quoted letter to Heinrich Weber, Dedekind wrote, referring to his famous articulation of real numbers as sets (= ‘sections’ or ‘cuts’) of rational numbers:

You say that the irrational number ought to be nothing other than the section itself, whereas I prefer it to be created as something new (different from the section) which corresponds to the section and produces the section. We have the right to allow ourselves such a power of creation and it is more appropriate to proceed thus, on account of treating all numbers equally.19

Dedekind’s ‘throw away your constructive ladder after you have climbed it’ policy represented a fairly common theme within the abstractionist tradition.

As it happens, in his own ‘logical’ approach to simple arithmetic, Dedekind does not bother with equivalence classes per se, but only employs set theory to build a specific exemplar of arithmetical structure. Because of this difference, Frege is often portrayed within the folklore of modern philosophical commentary as the thinker who tried to argue ‘philosophically’ that numbers had to be identified with sets of equinumerous concepts on the grounds that such identification was the only proposal that abstracts properly from all of number's potential applications, whereas the more ‘mathematical’ Dedekind sought only to articulate ‘freely created’ objects sufficient ‘to do a mathematical job.’20

I find little textual evidence for attributing such motivations to Frege. He is critical of ‘abstractionist’ views generally and often observes that extension elements can be acceptably introduced in a wide variety of ways. Generally, the remarks that are often misread as Fregean expressions of a ‘supply a unique abstractionist story for justifying the numbers’ philosophy merely express the formal requirement that, however the new mathematical entities are handled, their introduction must be executed in a manner that ensures that the new objects will be properly counted (so that, in whatever manner we define the complex points a and b, there must be exactly two of them).

Relative logicism without using sets

There were a number of alternative approaches to relative logicism within Frege’s era that have become largely forgotten today but which seem to have influenced his own philosophical thinking. In particular, it was often emphasized that concepts should be given conceptual priority over their extensions. Christoph Sigwart wrote in an influential logic primer of the period:

[Some logicians believe] that concepts are gained by abstraction, i.e., by a process which separates the particular objects from those by which they are distinguished from each other, and gathers the former together into a unity. But the supporters of this view forget that, in order to resolve an object of thought into its particular characteristics, judgments are necessary which have for their predicates general ideas..., and as these concepts make the process of abstraction possible, they must have been originally obtained in some other way... [To try to] form a concept by abstraction in this way is to look for the spectacles we are wearing by aid of the spectacles themselves.21

In this regard, it was often remarked that predicative concepts can be directly converted into a species of ‘concept-object,’ as when we frame the abstract object motherhood from the everyday trait ... is a mother.

In mid-century, the German geometer Karl von Staudt22 tacitly relied upon this observation when he proposed an influential program for converting ‘persistence of form’ considerations into more respectable patterns of definitional extension within standard geometry. Following the pattern of our motherhood example, he observes that we are citing a similar concept-object when we speak of the common direction of two parallel lines. That is, starting with the relational concept ‘x is parallel to line L0,’ logic allows us to speak instead of an abstract object ‘the direction of line L0.’ Von Staudt then made the remarkable suggestion that these commonplace concept-objects could serve as adequate replacements for Poncelet’s ‘points at infinity’—we simply let the direction of L0 become the missing ‘point’ that sits at the far end of L0. In an allied vein, he suggests that we convert ‘x maps to y under a right-handed overlapping involution’ into a concept-object and let it replace one of the missing complex points that serve as the centers of this involution (he utilizes ‘x maps to y under a left-handed overlapping involution’ to instantiate the other missing center).

Historically, the suggestion that concepts-treated-as-objects could be substituted in place of otherwise problematic entities was quite unprecedented in mathematical practice,23 but, once this unexpected pill was swallowed, von Staudt found he could rationalize all of projective geometry’s maneuvers through a straightforward, if tedious, program of redefinition. The trick is to amalgamate the new concept-objects into the old world of geometry by redefining our old geometrical notions to suit the new elements. Thus we must redefine our original Euclidean notion of ‘lying upon’ (call it ‘lies upon0’) so that our new ‘points at infinity’ can be meaningfully held to ‘lie upon1’ the line L0 (obviously, no concept-objects can lie upon0 L0 if ‘lies upon0’ is understood in the old sense).
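A minimal executable rendering of this redefinition strategy may clarify its shape (entirely my own sketch, restricted to non-vertical lines written as slope-intercept pairs):

```python
from fractions import Fraction

class FinitePoint:
    def __init__(self, x, y):
        self.x, self.y = Fraction(x), Fraction(y)

class Direction:
    """von Staudt-style concept-object: 'the direction of L0'."""
    def __init__(self, slope):
        self.slope = Fraction(slope)
    def __eq__(self, other):
        # Recognition judgement: the same 'point at infinity' iff the
        # generating lines are parallel (here: equal slope).
        return isinstance(other, Direction) and self.slope == other.slope

def lies_upon1(p, line):
    """Redefined incidence; a line y = m*x + b is the pair (m, b)."""
    m, b = line
    if isinstance(p, FinitePoint):
        return p.y == m * p.x + b        # the old 'lies upon0', undisturbed
    return p.slope == m                  # the new clause for Directions

L0, L1 = (Fraction(2), Fraction(0)), (Fraction(2), Fraction(5))
d = Direction(2)
print(lies_upon1(d, L0), lies_upon1(d, L1))   # True True: parallels now 'meet'
print(lies_upon1(FinitePoint(1, 2), L0))      # True: old incidence preserved
```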

This process of carefully crafted redefinition must be repeated several times before von Staudt can work his way to the full conceptual world needed within extended geometry. Observe that von Staudt’s program employs simple concept-objects directly as replacements for the entities sought, rather than collecting together infinite equivalence classes in Dedekind's manner. In fact, the infinities Dedekind’s technique blithely evokes were often rejected as extravagant by critics in this period.24 Although Frege himself employs extensions in his own constructions, he may have originally intended to utilize simpler concept-objects in von Staudt’s manner (I’ll suggest how in the next section). However this may be, Frege discusses the ‘direction of L0’ case in §§64-68 of the Grundlagen (as well as the concept von Staudt substitutes for the ‘line at infinity’) without remarking upon their utilization within the prior geometer’s work (with which Frege was undoubtedly familiar).

As noted above, in performing these extension element introductions, we must carefully circumscribe the concepts employed so that the requisite number of new objects will be engendered in the conversion to concept-objects. Thus there should normally be exactly two complex points acting as the centers of an overlapping involution. Von Staudt distinguishes between righthand and lefthand mappings simply as a trick for getting this ‘object count’ to come out properly. Frege’s worries about the proper ‘criterion of identity’ for a specific class of ‘objects’ are closely tied to sharp mathematical demands such as this.

We might observe that von Staudt’s approach and Dedekind’s share a common theme: they both return to the original motives that inspired the introduction of the extra elements and search for the evaluative concepts that sparked such postulation. Thus the points at infinity were inspired by the evaluative concept ‘x is parallel to y,’ for we evaluate lines L0 and L1 as meeting in the same infinite point only if L0 and L1 are parallel to one another. Both men then construct a suitable ‘logical object’ from the evaluative concept highlighted: von Staudt proposing that we replace a point at infinity by ‘the direction of L0’ while Dedekind’s approach suggests that the set {L | L is parallel to L0} be employed.

As Frege conceptualized these issues, the evaluative concepts selected must adequately serve as the core of the ‘recognition judgements’ that indicate how our newly introduced elements display their handiwork within concrete mathematical circumstances (according to the philosophy ‘by their fruits, you shall know them’). Furthermore, the highlighted judgements must embody proper standards of identity for their corresponding concept-objects. Question: how do we know that we are considering the same points at infinity in a given context? Answer: only if the lines with which they are associated lie parallel to one another. Question: how do we determine whether (2, 5 + √15) represents the same ideal factor as (3, 3 + √15)? Answer: only if they satisfy the same divisibility tests for the regular numbers in the base ring.

As other essays in this volume make evident, there has been much contemporary philosophical interest in a revived ‘neo-logicism’ that attempts to base all invocation of ‘abstract objects’ upon unsupplemented ‘recognition judgements’ similar to the ‘Hume’s Principle’ of Grundlagen, §73. I doubt that such a program would have enjoyed Frege’s philosophical imprimatur, for his central intention in highlighting ‘recognition judgements’ in the Grundlagen is to isolate the precise traits that earlier mathematicians had utilized in their vague appeals to ‘persistence of form.’ As such, these evaluative concepts merely provide the raw material with which a proper program for introducing the desired objects in an absolute logicist fashion might begin. The notion that claims like Hume’s principle alone could constitute an adequate method for handling questions of mathematical existence would have almost certainly struck Frege as an unhappy return to the methodological vagaries of earlier times.25

Plücker’s recarving of content and the context principle

There was a particular recasting of von Staudt’s work in an algebraic vein that drew much attention during Frege’s student days at Göttingen. It merits a brisk survey here, for it potentially casts a revealing light upon many of Frege’s puzzling claims about his celebrated but obscure ‘context principle.’ This technique carves out simple ‘concept-objects’ in von Staudt’s manner through reversing the direction of functionality within target mathematical claims. For technical reasons, Frege eventually employed equivalence class constructions in the Grundlagen, but the discussion preceding that construction often suggests a sympathy for the ‘reversing functionality’ approach. This technique was developed in the early 1870's by the mathematicians Otto Stolz and Felix Klein,26 following the precepts of their teacher, Julius Plücker, often regarded as ‘the father of algebraic geometry’ today. Plücker had introduced a revolutionary perspective into the subject by carving up previously understood ‘geometrical contents’ in novel ways. In so-called ‘homogeneous coordinates’ (see any college geometry text), the equation of a planar straight line assumes the form Ax + By + Cz = 0. When we first consider this equation, we naturally regard the list of constants [A,B,C] as acting upon the range of variability (x,y,z). That is, we read the equation as claiming that the function [A,B,C] carves out the range of points (a,b,c) that lie upon a common straight line. But what happens if we instead hold a specific point (a,b,c) fixed and let the erstwhile [A,B,C] ‘constants’ vary, i.e., we consider the reversed equation Xa + Yb + Zc = 0? Here we let ‘(a,b,c)’ act as the function which then carves out a range of lines [A,B,C]. In fact, the locus of this new ‘range’ comprises a natural geometrical entity: it represents the pencil of all lines running through (a,b,c), whose individual rays are now distinguished by the varying ‘line coordinates’ [A,B,C]. To highlight these symmetries better, we might rewrite the claim that ‘point (a,b,c) lies upon the line [A,B,C]’ as ‘[A,B,C]ᵀ(a,b,c) = 0’ where standard matrix multiplication is employed. Then, according to whether we select the [] block or the () block as open to variation, we will parse our original proposition as representing the actions of distinct ‘unsaturated’ functions acting upon distinct ranges of saturated ‘objects’ (borrowing Frege’s terminology from ‘Concept and Object’). From this point of view, a given curve can be carved with equal justice into either the union of its range of points or the intersection of its range of tangent lines, depending upon the direction of functionality chosen.27 Readers of the Ricketts article elsewhere in this volume will note the immediate affinities of this Plückerian point of view with Frege’s own thinking upon ‘range’ and ‘variation.’
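A small concrete case (my numbers): fix the point (a,b,c) = (1, 2, 3) and let the line coordinates vary. Read in this reversed direction, the incidence equation carves out a one-parameter family of lines rather than a locus of points.

```latex
% Holding the point fixed and letting the line coordinates vary:
\[
[X, Y, Z]^{\mathsf{T}}(1, 2, 3) = 0
\quad\Longleftrightarrow\quad X + 2Y + 3Z = 0 ,
\]
% whose solutions [X, Y, Z], taken up to scale, are precisely the lines
% of the pencil through the point (1, 2, 3).
```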

The older geometer’s work inspired a large number of contemporaneous attempts to reconfigure geometrical intuition by carving space into various choices of primitive ‘elements’. The most famous of these investigations was Sophus Lie's ‘sphere geometry’, but Frege himself worked upon a decomposition where the ‘elements’ were pairs of points treated as fused unities.28 Such examples provide a concrete (and rather startling) significance to Frege’s frequent assertions that propositional contents can be ‘carved up’ in unexpected ways, e.g.,

[I]nstead of putting a judgement together out of an individual as subject and an already previously formed concept as a predicate, we do the opposite and arrive at the concept by splitting up the content of a possible judgement... [T]he ideas of these properties and relations are [not] formed apart from objects: on the contrary they arise simultaneously with the first judgement in which they are ascribed to things.29

Returning to relative logicism, Stolz and Klein applied Plücker’s ‘recarving of content’ point of view to von Staudt’s extension element program in an interesting fashion. When we treat the ‘(a,b,c)’ piece of ‘[A,B,C]ᵀ(a,b,c) = 0’ as a function, we find that no triple beginning with a zero (i.e., (0, b, c)) will carve out a true pencil of intersecting lines—the various lines whose coordinates [A,B,C] algebraically satisfy ‘[A,B,C]ᵀ(a,b,c) = 0’ will run parallel to one another, rather than sharing a common point. Aha, Stolz and Klein recognized, isn’t this exactly the algebraic feature we require in a point at infinity? So why don’t we redefine our old ‘(a,b,c) lies upon0 [A,B,C]’ claim so that (0,b,c) becomes meaningfully permitted to lie upon1 [A,B,C]? We only need to guarantee that we set up the right number of new ‘points’ when we proceed in this way (the trick is to follow the ‘recognition judgement’ that (0,a,b) and (0,c,d) qualify as the same ‘point’ if and only if they are exact multiples of one another). In other words, through a mixture of Plückerian recarving and definitional extension, we extend the reach of the expression ‘[A,B,C]ᵀ(a,b,c) = 0’ to cover point at infinity situations. Utilizing allied tricks with involutions, Stolz and Klein handled the complex points nicely as well.
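To see why such triples deserve the title ‘points at infinity’ (my computation, following the text's convention that a finite point takes the form (1, x, y)): fix (0, b, c) and ask which lines pass through it.

```latex
% Lines [A, B, C] 'through' (0, b, c) must satisfy Bb + Cc = 0, so the
% ratio B : C is fixed. In affine form A + Bx + Cy = 0, every such line
% has slope -B/C = c/b -- one and the same slope for each member:
\[
[A, B, C]^{\mathsf{T}}(0, b, c) = 0
\;\Longrightarrow\; \text{slope} = \frac{c}{b}\,,
\]
% a family of parallels, exactly the behavior Stolz and Klein exploit.
```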

Many of Frege’s characteristic remarks about ‘recognition judgements’ and ‘contextual definition’ fit the Stolz/Klein techniques nicely. We first extract suitable concept-objects out of a family of claims through reversed function recarving and then expand these assertions into a fuller range by installing identity conditions upon these new ‘objects’ through suitable ‘recognition judgements.’ In this regard, Frege elsewhere remarks that, if we wish, the complex geometrical points could be introduced as the (finite) ‘commonalities’ between an arbitrary circle C and any line L not intersecting C.30 Since many distinct circle/line pairs correspond to the same imaginary points, we face the problem of finding a ‘recognition judgement’ that will resolve when (C, L) and (C*, L*) represent the same complex points. It is only because addressing this question directly proves a bit tricky that most geometers favor involution mappings as the canonical means for introducing the complex points.

It is quite conceivable that Frege began the Grundlagen with a plan to introduce the integers through an allied ‘functional recarving’ pattern. Begin with the claim ‘Concept C maps in 1-1 fashion to Cn’ where Cn is some canonical concept that, logically, is satisfied by exactly n members (for 0, such a canonical concept could be ‘x ≠ x’). Now reverse the direction of the functionality within our mapping claim to obtain ‘The concept-object corresponding to “maps in 1-1 fashion to Cn” belongs to concept C’ (or, more briefly, ‘the number belonging to Cn belongs to the concept C’). ‘Hume’s principle’ will then serve as the requisite ‘recognition judgement’ that determines whether two of these newly introduced ‘numbers’ qualify as the same or not. Under this approach, we do not require infinite Dedekind-style sets, but only simple concept-objects obtained through functional reversal.

However, closer analysis shows that such ploys can supply only context dependent ‘objects’ that qualify as ‘incomplete symbols’ in Bertrand Russell’s sense and cannot behave in the entirely self-sufficient manner that naïve Plücker-like thinking first suggests. Many commentators have noted that Frege's deliberations in the Grundlagen take an abrupt turn in §68, when, without preparation, extensions suddenly enter the scene.31 If Frege had originally expected to apply a Plücker-like strategy to his numbers but recognized their ‘incomplete symbol’-like features by the time he came to §68, his initial friendliness towards ‘definitions in context’ and his stress upon ‘context principle’ recarvings of content would appear better motivated. Such a mid-stream shift in strategy would explain his puzzling remark in §68:

I believe that for ‘extension of the concept’ we could simply write ‘concept’. But this would be open to the two objections:

  1. that this contradicts my earlier statement that individual numbers are objects, as is indicated by the use of the definite article in expressions like ‘the number two’ and by the impossibility of speaking of ones, twos, etc. in the plural, as also by the fact that the number constitutes only an element in the predicate of a statement of number;
  2. that concepts can have identical extensions without themselves coinciding.
I am, as it happens, convinced that both these objections can be met; but to do this would take us too far afield for present purposes. I assume that it is well known what the extension of a concept is.32

Certainly, Plücker-like recarvings provide a more vivid application for Frege’s context principle than do the equivalence class techniques he actually adopts (in the latter, the existence of the needed ‘logical objects’ must be established through Axiom V-like postulation, rather than simple ‘conceptual recarvings’).

I hasten to add that it is hard, on the basis of the available texts, to establish that Frege ever had such a strategy in view. I have devoted a fair amount of space to these precedents because (1) given his own mathematical work and training, Frege was plainly aware of these proposals and (2) Plückerian examples cast a potentially revealing light upon his often elusive remarks about ‘propositional content.’

With respect to the latter, the recarving techniques suggest that modern geometers continue to traffic in the same fixed realm of Euclidean facts as the ancients, but over time that original domain has become progressively recarved into ever richer ranges of novel geometrical objects (i.e., the holistic ‘propositional content’ of the underlying facts does not alter under the recarvings, but their ontological parsing adjusts considerably). From this point of view, science should not regard a proposition’s ‘objective content’ as altered even when its surface expression gets reconfigured in quite unexpected ways. Such themes emerge in Frege’s writings in a variety of ways. For example, he often argued that, insofar as objective science was concerned, a holistically conceived proposition does not lose its ‘scientific content’ if it loses (or gains) some ‘intuitive garb’ it had previously displayed (or lacked). In his earliest mathematical work, Frege experimented with methods for aligning claims about the (affine) complex points on a plane with imagery comprised of entanglements of 3D lines above the plane.33 The purpose of this exercise was to associate an artificial ‘intuitive presentation’ with the claims about the complex point facts. Frege did not regard the ‘propositional content’ of the original claims as altered by this annexation; the supplementation was viewed merely as a convenient tool to help the geometer reason more easily about the ‘unintuitive’ matters at hand. In §26 of the Grundlagen, Frege describes two imaginary creatures whose limited projective ‘intuitions’ correspond to different aspects of geometrical reality in classic ‘inverted spectrum’ fashion:

Over all geometrical theorems they would be in complete agreement, only interpreting the words differently in terms of their respective intuitions. With the word ‘point’ for example, one would connect one intuition and the other another. We can therefore still say that this word has for them an objective meaning, providing only that by this meaning we do not understand any of the peculiarities of their respective intuitions.34

Once again the implication seems to be: insofar as scientific communication is concerned, their sundry theorems traffic in the same ‘objective content,’ despite the different intuitive trappings in which the two creatures privately cloak these ‘contents.’

Such considerations suggest the following picture of truth in mathematics.

Within Euclidean geometry, the original fixed set of holistic facts is delivered to us through Kantian ‘intuition,’ although the modern geometer can displace these original ‘intuitive presentations’ at will and supplement the geometrical domain with sundry ‘logical objects.’ Arithmetic, at first blush, seems to have its fundamental contents supplied by intuition in an allied way, but closer analysis shows that numbers secretly serve as purely logical evaluators and can be safely applied to any subject matter whatsoever (I’ll enlarge upon this reasoning in the next section).

However, Frege’s writings are not sufficiently explicit upon many of these issues, although they all constitute natural responses to the scientific dilemmas of his time. Modern commentators frequently discuss Frege’s notions of ‘propositional content’ in a manner decoupled from the rather radical methodological policies that he adopts within his own mathematical projects. I suggest that this policy of divorcement may overlook vital clues to his actual thinking.

Absolute logicism

This essay has been largely devoted to the thesis of relative logicism as an account of how long established mathematical domains might spawn satellite ‘logical objects’ to aid in understanding the original setting. Absolute logicism, as advocated by Frege and Dedekind, claims that various traditional mathematical domains can themselves be regarded as comprised of ‘logical objects’ engendered by the need to understand the structure of non-mathematical realms.35 Once again, such doctrines were not spawned by philosophical musings alone, but by a mathematical need to understand more precisely the range of cases in which number-like evaluators could be profitably employed.

For example, the regular complex numbers can nicely compute how repeated rotations will compose within a plane (if we can independently manipulate an adjustable rod to reach positions a and b through operations A and B, where will the rod reach if operation B is applied after A? Answer: a·b). Can we find more general complex number-like gizmos that can capture three-dimensional movements in a comparable vein? Such research led to the sundry ‘dual numbers,’ quaternions and allied number-like systems that were widely studied in Frege’s era (such inquiries have become important once again in the context of modern robotics). Alternatively, one might try to tackle these representational problems by applying the regular complex numbers in unexpected ways. In fact, in early work36 Frege experimented with grading a restricted class of functional representations correlated with infinitesimal rotations in this fashion, somewhat in the manner of Sophus Lie. Frege’s interest in the application problem for the various number systems may have emerged from these background concerns: under what conditions can a calculus historically devised for purpose P be successfully transferred to a novel purpose Q? And the natural answer suggests itself: only if a certain logical structure is present within the local realm of traits under consideration. One can discern this vein of thinking most clearly in Frege’s approach to the real and complex numbers. Although he never completed the intended developments, Peter Simons37 has supplied a plausible delineation of how the scheme would have worked: a real number r is treated as an evaluator of a given property P*’s position within a linearly ordered family of properties P. More explicitly, to claim that ‘a is π meters long’ indicates that ‘a possesses that length property L* which occupies the πth place within a broader family of length traits <L, L1, Abut>’, where this collection represents the smallest family of traits that contains L1 (= the property of having the same length as the standard meter bar in Paris) and is also closed under end-to-end composition (Li+j represents the length property framed when two objects possessing length properties Li and Lj are abutted end-to-end). Considering π as a concept-object that marks a property’s position within such a relational family, π gets identified with the set of all quadruples <P*, P, P1, R> that can be mapped onto a canonical non-empty family of properties constructed with logical materials alone (Frege would have employed his already defined integers to build up (and complete) a suitable canonical family of fraction-like properties).
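
To fix ideas with a minimal worked example (the notation here is mine, not Frege’s or Simons’): encode a counterclockwise rotation of the plane through angle α about the origin as the unit complex number e^{iα}. Applying rotation B (through β) after rotation A (through α) yields a rotation through α + β, and ordinary complex multiplication computes exactly this:

a·b = e^{iα}·e^{iβ} = e^{i(α+β)}.

In the same spirit, the abutment clause in Simons’ reconstruction simply demands that the evaluators add under end-to-end composition: if rods a and b carry the length properties Li and Lj, the composite rod formed by abutting them carries Li+j.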

Prescinding from these technical complexities, the natural numbers serve as logical evaluators of the cardinal size of a concept C, whereas the real numbers evaluate C’s comparative position within a linearly ordered family of related concepts (if such a family is pertinent to C). Our philosophical task in setting up the sundry number systems is to elucidate the underlying logical basis for the relevant evaluation of the concept C and then to employ some method for providing logic-based concept-objects able to capture the assessment under review. We thereby adopt the same basic methodology as pertains within geometry’s circumstances, but our real number evaluations needn’t rest upon any underlying range of intuitively supplied facts comparable to those required in geometrical assessment, simply because only the logical structure of the relevant family is wanted for their applicability. Thus we obtain an absolute logicism for the number systems that is impossible within geometrical circumstances. In rejecting the support of Kantian ‘intuition’ for his number systems, Frege conformed to opinions commonly shared by investigators then exploring the application range of number-like evaluators.
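
The cardinal case can be stated crisply in the contextual form made famous in the Grundlagen (§63), rendered here in the notation of the later neo-Fregean literature (see note 25): the number belonging to a concept F is identical with the number belonging to a concept G just in case the Fs can be correlated one-to-one with the Gs,

Nx:Fx = Nx:Gx ↔ F ≈ G

(where F ≈ G abbreviates the second-order claim that some one-to-one relation correlates the Fs with the Gs). Since one-to-one correlation is definable in purely logical vocabulary, the applicability of this evaluator presupposes no intuitively given subject matter.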

Axiomatic postulation

Until 1900 or so, von Staudt’s program was commonly regarded as providing the ‘right explanation’ for why the strange extension elements could be legitimately added to traditional geometry, although few studied his techniques in detail simply because the work involved was so tedious. However, this methodological consensus vanished virtually overnight with the rise of the axiomatic approach followed by David Hilbert and his school. Under their approach, Euclidean and projective geometry came to be regarded as ‘implicitly defined,’ quite independently, by their own parochial collections of axioms, leaving the question of their interrelationships to be determined by now-standard model-theoretic techniques (‘Can a model M of the Euclidean group be extended to frame a projective model M*?,’ etc.). If so, von Staudt’s tiresome stagewise constructions can often be avoided: if you think that Euclidean geometry might be better understood in a system with imaginary points, directly specify axiomatically the richer structure you desire and then indicate how Euclidean geometry can be profitably embedded within it. Don't waste your time trying to construct what you seek out of strange and improbable concept-object slicings of your original domain.38 The almost instantaneous popularity of this new point of view drove von Staudt’s method of relying upon ‘recognition judgement’-based concepts into intellectual oblivion. Post-Hilbertian commentators often sarcastically dismissed von Staudt’s efforts as motivated by antiquated, ‘extra-mathematical’ demands upon mathematical existence, not unlike the criticisms of Frege considered at the head of this article. In this vein, the irrepressible E.T. Bell wrote:

In proving that geometry could, conceivably, get along without analysis, von Staudt simultaneously demonstrated the utter futility of such a parthenogenetic mode of propagation, should all geometers ever be singular enough to insist upon an exclusive indulgence in unnatural practices.39

Essentially, Hilbert’s appeals to independent axiomatization provided a fresh methodology for rigorously implementing the philosophy of ‘free creativity’ enunciated earlier: mathematicians are free to cook up any internally consistent realm they please, unfettered by foundational tethers to more familiar mathematical territory. As Hilbert wrote Frege in a celebrated exchange:

Of course I must also be able to do as I please in the matter of positing characteristics; for as soon as I have posited an axiom, it will exist and be ‘true’... If the arbitrarily posited axioms together with all their consequences do not contradict one another, then they are true and the things defined by these axioms exist. For me, this is the criterion of truth and existence.40

As a relative logicist, Frege would have been heartily opposed to ‘formalism’ of this ilk.

Despite these obvious differences in philosophical attitude, Frege’s tone in his exchanges with Hilbert and Hilbert’s amanuensis A. Korselt often seems excessively harsh, as if Frege were writing from a conservative and antiquated geometrical methodology that he does not adopt within his own mathematical work. Perhaps our ruminations on relative logicism suggest some deeper reasons for his unhappiness. In his 1898 lectures on geometry,41 Hilbert claims that his attention to subgroups of axioms allows us to ‘diagnose the structure of our spatial intuition.’ He had in mind situations such as the following. A certain restricted group of non-metrical axioms F about points and lines within 3D space is sufficient to establish the 2D claim known as Desargues' theorem:

If two triangles are placed so that the straight lines connecting corresponding vertices meet in a point, then the points of intersection of corresponding sides will lie upon a common line.
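
In labeled form (a standard modern rendering, not the text's own): let triangles ABC and A′B′C′ be situated so that the lines AA′, BB′ and CC′ all pass through a common point O. Then the three points

P = AB ∩ A′B′, Q = BC ∩ B′C′, R = CA ∩ C′A′

lie upon a single line.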

Essentially, the relevant proof proceeds by viewing the planar figure as the collapse of a 3D configuration in which the two triangles lie within different planes, where the theorem follows easily. But Hilbert proved the surprising fact that a purely 2D analog of the facts F could not, by itself, logically force the truth of Desargues’ theorem. Results of this type instantly made Hilbert’s work widely celebrated, and Frege’s reluctance to extend him any credit for his discoveries seems quite uncharitable. Some of Frege’s discomfort may trace to the fact that Hilbert’s conception of ‘logically forces’ is tacitly first-order in nature (more or less), whereas Frege’s basic approach to ‘logic’ tolerates the liberal invocation of extra ‘logical objects,’ whether they arise as abstracted sets or as Plückerish recarvings. But from the recarving point of view, the dimensionality of the plane is not a fixed matter, for a plane will change its dimension if it is carved into circles rather than points as its primitive elements (the circles of the plane form a three-parameter family). Starting within a 2D group of facts F, it might be possible to devise extra ‘logical objects’ through recarving that will permit a reinstatement of the standard 3D proof of Desargues’ theorem. From this point of view, Hilbert’s claim that Desargues’ theorem is ‘logically independent’ of the 2D F group may lack clear significance. In his final essay on geometry, Frege attempted to render greater justice to Hilbert’s independence results. As various commentators have observed,42 that essay articulates what is, in effect, a model-theoretical account of first-order logical consequence. This similarity does not show that Frege himself has adopted a modern ‘semantic approach to logic’; it is more likely that he remained loyal to the nineteenth century traditions which had assumed that ‘logic’ must somehow validate appeals to novel ‘objects’ of a ‘direction of line L0’ variety.

To the modern reader, this old-fashioned appraisal of logic's ‘creative’ capabilities may seem startling, as we no longer expect that ‘logic’ alone can erect mighty layers of supplementary ‘objects’ above an originally limited domain. But this limited ‘first-order’ view of logic’s capacities did not become canonical until the 1930's, and Frege’s underlying objections to Hilbert’s point of view may trace tacitly to this divergence.

After 1904, Hilbert hoped that the consistency and completeness of his free-standing axiomatic schemes could sometimes be established by elementary means (otherwise, the direct construction of a suitable model was required43). But Kurt Gödel’s famous second incompleteness theorem showed that the consistency of a sufficiently rich axiomatic system can be authenticated only if the consistency of some yet stronger theory is assumed. So the problems that originally bedeviled the naïve ‘free creativity’ thesis return again: how can we ascertain that our ‘free creativity’ does not depend upon an inconsistently described structure? In light of the unavailability of elementary checks upon consistency, modern mathematical orthodoxy has settled upon the following resolution: mathematics is free to study any subject that can be legitimated as a well-defined class within the set-theoretical hierarchy. This view can be called set-theoretic absolutism, for it makes reduction to set theory the final arbiter of mathematical existence. Although such ontological absolutism represents ‘official policy’ today, some mathematicians and philosophers (who often don't like set theory much) harbor in their bosoms opinions closer to naïve ‘free creativity’: mathematics should be free to study the properties of any self-consistent, free-standing construct. But we currently lack any well-developed philosophy of mathematics that can support this hope (which seems to rely upon an unsupported faith that Dirichlet Principle-like problems will never visit us again). In these respects, we are still confronted with the same task of reconciling ‘safe procedure’ with ‘free creativity’ that had troubled Frege and the other relative logicists of the nineteenth century.
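
(In modern notation, Gödel’s obstacle can be put thus: if T is a consistent, effectively axiomatized theory containing enough arithmetic, then T ⊬ Con(T), so any consistency proof for T must be conducted within some theory whose own consistency stands equally in need of certification.)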

Notes

  1. First written 1998; rewritten 2008. Thanks to Jeremy Avigad, Bill Demopoulos, Michael Friedman, Jeremy Heis, Penelope Maddy, Tom Ricketts, Jamie Tappenden and Michael Thompson for their helpful comments.
  2. ‘Frege's Epistemology’, Philosophical Review 88, 1979. The prevalence of such attitudes is surveyed by Jamie Tappenden, ‘Extending Knowledge and “Fruitful Concepts”’, Noûs 29, 1995.
  3. An Essay on the Foundations of Geometry (New York: Dover, 1956), p. 44.
  4. A good source is Jeremy Gray, Worlds Out of Nothing (London: Springer-Verlag, 2007). For an important philosophical examination of ‘proof unity,’ cf. Ken Manders’ forthcoming ‘The Euclidean Diagram.’
  5. ‘On Some of the Methods at Present in Use in Pure Geometry’ in Collected Papers, Vol. 1 (New York: Chelsea, 1965), p. 4.
  6. Mathematical Essays and Recreations (Chicago: Open Court, 1910). Schubert interprets Kronecker’s celebrated pronouncement, ‘God created the integers; all else is the work of man’ as expressing the thesis that the other entities of mathematics are engendered from the natural numbers through ‘persistence of form’-like ‘free creativity.’ Jeremy Heis has recently directed my attention to allied remarks in the influential Logik by Wilhelm Wundt (Stuttgart: Ferdinand Enke, 1880-3). For an excellent discussion of the general manner in which Frege’s uncharitable readings of his rivals have distorted our modern appreciation of their merits, see W.W. Tait, ‘Frege Versus Cantor and Dedekind’ in W.W. Tait, ed., Early Analytic Philosophy (Chicago: Open Court, 1997).
  7. K.F. Gauss, Disquisitiones Arithmeticae, A.A. Clarke, trans. (New Haven: Yale University Press, 1965).
  8. Quoted in L.W. Reid, The Elements of the Theory of Algebraic Numbers (New York: Macmillan, 1910), p. 208.
  9. ‘(n,m)’ is standard notation for the greatest common divisor of n and m.
  10. D.E. Smith, A Source Book in Mathematics, Vol. I (New York: Dover), pp. 120-4.
  11. A. F. Monna, Dirichlet's Principle (Utrecht: Oosthoek, Scheltema and Holkema, 1975).
  12. Essays on the Theory of Numbers, translated by W.W. Beman (New York: Dover, 1963), p. 38.
  13. It should also be considered an elective logicism, in the sense that the mathematician can choose the specific sub-range of potential ‘logical objects’ she favors in order to frame a closed extension domain with nice properties (e.g., a ring with unique factorization). Pace those interpreters who maintain that Frege is an absolutist with respect to quantifier ranges, I believe that he is an electivist at heart. But such interpretative issues would take us too far afield here.
  14. ‘Boole's Logical Calculus and the Concept-script’ in Posthumous Writings, translated by P. Long and R. White (Chicago: University of Chicago, 1979), p. 34. A similar passage, other aspects of which are discussed in Tappenden, ‘Extending Concepts’, can be found in the Grundlagen, §88.
    In the context of Fourier series, A-L. Cauchy had mistakenly assumed that the limit of a sequence of continuous functions must be continuous, when this property is guaranteed only if the convergence is uniform. This distinction between pointwise and uniform convergence, introduced by Stokes and Weierstrass, hinges upon distinctions of quantifier scope. Cauchy's error, which was widely discussed in the 1870's, is probably what Frege has in mind here (although there are analogous examples within the calculus of variations that would have also been familiar to him).
  15. In Gesammelte Mathematische Werke, Vol. III, R. Fricke, E. Noether, O. Ore, eds. (Braunschweig: F. Vieweg and Sohn, 1930). Dedekind's first use of the equivalence class idea emerges, almost in passing, to introduce some modular arithmetics in ‘Abriss einer Theorie der höheren Kongruenzen in Bezug auf einen reellen Primzahl-Modulus’ in Vol. 1 of the same collection.
  16. Theory of Algebraic Numbers, translated by John Stillwell (Cambridge: Cambridge University Press, 1996), p. 94. This 1877 work still provides an excellent introduction to the subject and its motivations.
  17. For an excellent survey of Dedekind’s opinions, see Jeremy Avigad, ‘Methodology and Metaphysics in the Development of Dedekind’s Theory of Ideals’ in J. Ferreirós and J. Gray, The Architecture of Modern Mathematics (Oxford: Oxford University Press, 2006). It is fairly common to employ ‘abstraction’ as a means of rendering a subject matter ‘representation independent.’
  18. The Historical Development of Logic, translated by Jerome Rosenthal (New York: Holt, Rinehart and Winston, 1929), p. 132. He employs the ‘direction of L’ example more or less as Frege did, citing the geometers Vailati and Burali-Forti in this regard.
  19. ‘Letter to Hermann Weber’ in William Ewald, ed., From Kant to Hilbert, vol. 2 (Oxford: Oxford University Press, 1996).
  20. Paul Benacerraf, ‘What Numbers Could Not Be’ in P. Benacerraf and H. Putnam, eds., Philosophy of Mathematics: Selected Readings, 2nd edition (Cambridge: Cambridge University Press, 1983). Howard Stein, ‘Logos, Logic and Logistiké’ in Aspray and Kitcher, eds., History and Philosophy of Modern Mathematics (Minneapolis: University of Minnesota, 1988).
  21. Christoph Sigwart, Logic, Vol. I, translated by Helen Dendy (London: Swan Sonnenschein and Co., 1895), pp. 248-9.
  22. Geometrie der Lage (Nürnberg: Bauer and Raspe, 1847) and Beiträge zur Geometrie der Lage (Nürnberg: Bauer and Raspe, 1856).
  23. Hans Freudenthal, ‘The Impact of von Staudt's Foundations of Geometry’ in P. Plaumann and K. Strambach, eds., Geometry—von Staudt's Point of View (Dordrecht: D. Reidel, 1981). Mark Wilson, ‘Frege: The Royal Road from Geometry’, in William Demopoulos, ed., Frege's Philosophy of Mathematics (Cambridge: Harvard University, 1995).
  24. Harold Edwards, ‘The Genesis of Ideal Theory’, Archive for History of Exact Sciences 23 (1980).
  25. Crispin Wright and Bob Hale, The Reason’s Proper Study (Oxford: Oxford University Press, 2001).
  26. Stolz, ‘Die geometrische Bedeutung der complexen Elemente in der analytischen Geometrie,’ Mathematische Annalen, vol. 4, 1871; Klein, Elementary Mathematics from an Advanced Standpoint: Geometry (New York: Dover, 1941).
  27. If we write down a formula with respect to the line coordinates [A,B,C] belonging to a curve, we typically get a new equation: the ‘point equation’ x³ − y²z = 0 converts to the ‘line equation’ 4X³ + 27Y²Z = 0. The latter formula reveals singularities that we might not have noticed in its ‘point equation’ garb. The striking revelations possible through functional recarving probably made a deep impression upon Frege’s philosophical thinking.
  28. ‘Lecture on the Geometry of Pairs of Points in the Plane’ in Gottlob Frege: Collected Papers on Mathematics, Logic and Philosophy, Brian McGuinness, ed. (Oxford: Basil Blackwell, 1984).
  29. ‘Boole's Logical Calculus’ in Gottlob Frege: Posthumous Writings, H. Hermes, F. Kambartel and F. Kaulbach, eds. (Oxford: Basil Blackwell, 1979), p. 17.
  30. ‘On a Geometrical Representation of Imaginary Forms in the Plane’ in Collected Papers, p. 2.
  31. Michael Dummett, Frege's Philosophy of Mathematics (Cambridge: Harvard University Press, 1991).
  32. Translated by J.L. Austin as The Foundations of Arithmetic (New York: Harper and Row, 1960), p. 80.
  33. ‘Imaginary Forms in the Plane’ in Collected Papers, op. cit. J.L. Coolidge, The Geometry of the Complex Domain (Oxford: Clarendon Press, 1924) surveys the history of allied investigations.
  34. Grundlagen, p. 35. He presumes that these hypothetical individuals do not ‘intuit’ any of the metrical characteristics that break the formal duality between planar ‘line’ and ‘point’ for the likes of us.
  35. Frege also insisted that our ‘numbers’ must be directly applicable to mathematical situations as well, for we want to gauge the size of various collections of ‘natural numbers’ through the application of these very same ‘numbers’ (Russell’s type-based ‘number’ constructions, notoriously, could not do this).
  36. ‘Methods of Calculation based upon an Extension of the Concept of Quantity’ in Collected Papers.
  37. ‘Frege's Theory of Real Numbers’, History and Philosophy of Logic, 1987.
  38. This is the policy recommended in O. Veblen and J.W. Young, Projective Geometry (Boston: Ginn, 1910). To be sure, many of von Staudt’s techniques will return as methods of constructing extensions to old models, but the relative logicist suggestion that they capture the ‘recognition judgements’ that conceptually prompted the enlargements is abandoned. In this same regard, Hilbert helped popularize the modern employment of Dedekind’s ideals within algebra, without any pretension that the equivalence classes somehow ‘abstract’ from the original domain.
  39. The Development of Mathematics, p. 349. Allied attitudes are expressed, in a more philosophical context, in the two historical articles reprinted in Ernest Nagel's Teleology Revisited (New York: Columbia University Press, 1982).
  40. On the Foundations of Geometry and Formal Theories of Arithmetic, E-H. Kluge, trans. (New Haven: Yale University, 1971), p. 12.
  41. Lectures on the Foundations of Geometry 1891-1902, M. Hallett and U. Majer, eds. (Berlin: Springer-Verlag, 2004). Hallett’s editorial comments are particularly helpful.
  42. William Demopoulos, ‘Frege, Hilbert, and the Conceptual Structure of Model Theory’, History and Philosophy of Logic 15, 1994.
  43. Hilbert was never one-sided in his thinking and served, in fact, as a great advocate for algebraic construction in Dedekind’s vein. He could have readily accepted that there might be natural mathematical objects (‘differentiable manifolds’) that can be readily constructed but are not easily captured within an axiomatic frame.