If mathematics is regarded as a science, then the philosophy of mathematics can be regarded as a branch of the philosophy of science, next to disciplines such as the philosophy of physics and the philosophy of biology. However, because of its subject matter, the philosophy of mathematics occupies a special place in the philosophy of science. Whereas the natural sciences investigate entities that are located in space and time, it is not at all obvious that this is also the case for the objects that are studied in mathematics. In addition to that, the methods of investigation of mathematics differ markedly from the methods of investigation in the natural sciences. Whereas the latter acquire general knowledge using inductive methods, mathematical knowledge appears to be acquired in a different way: by deduction from basic principles. The status of mathematical knowledge also appears to differ from the status of knowledge in the natural sciences. The theories of the natural sciences appear to be less certain and more open to revision than mathematical theories. For these reasons mathematics poses problems of a quite distinctive kind for philosophy. Therefore philosophers have accorded special attention to ontological and epistemological questions concerning mathematics.
On the one hand, philosophy of mathematics is concerned with problems that are closely related to central problems of metaphysics and epistemology. At first blush, mathematics appears to study abstract entities. This makes one wonder what the nature of mathematical entities consists in and how we can have knowledge of mathematical entities. If these problems are regarded as intractable, then one might try to see if mathematical objects can somehow belong to the concrete world after all.
On the other hand, it has turned out that to some extent it is possible to bring mathematical methods to bear on philosophical questions concerning mathematics. The setting in which this has been done is that of mathematical logic when it is broadly conceived as comprising proof theory, model theory, set theory, and computability theory as subfields. Thus the twentieth century has witnessed the mathematical investigation of the consequences of what are at bottom philosophical theories concerning the nature of mathematics.
When professional mathematicians are concerned with the foundations of their subject, they are said to be engaged in foundational research. When professional philosophers investigate philosophical questions concerning mathematics, they are said to contribute to the philosophy of mathematics. Of course the distinction between the philosophy of mathematics and the foundations of mathematics is vague, and the more interaction there is between philosophers and mathematicians working on questions pertaining to the nature of mathematics, the better.
The general philosophical and scientific outlook in the nineteenth century tended toward the empirical: platonistic aspects of rationalistic theories of mathematics were rapidly losing support. Especially the once highly praised faculty of rational intuition of ideas was regarded with suspicion. Thus it became a challenge to formulate a philosophical theory of mathematics that was free of platonistic elements. In the first decades of the twentieth century, three non-platonistic accounts of mathematics were developed: logicism, formalism, and intuitionism. There emerged in the beginning of the twentieth century also a fourth program: predicativism. Due to contingent historical circumstances, its true potential was not brought out until the 1960s. However it deserves a place beside the three traditional schools that are discussed in most standard contemporary introductions to philosophy of mathematics, such as (Shapiro 2000) and (Linnebo 2017).
The logicist project consists in attempting to reduce mathematics to logic. Since logic is supposed to be neutral about matters ontological, this project seemed to harmonize with the anti-platonistic atmosphere of the time.
The idea that mathematics is logic in disguise goes back to Leibniz. But an earnest attempt to carry out the logicist program in detail could be made only when in the nineteenth century the basic principles of central mathematical theories were articulated (by Dedekind and Peano) and the principles of logic were uncovered (by Frege).
Frege devoted much of his career to trying to show how mathematics can be reduced to logic (Frege 1884). He managed to derive the principles of (second-order) Peano arithmetic from the basic laws of a system of second-order logic. His derivation was flawless. However, he relied on one principle which turned out not to be a logical principle after all. Even worse, it is untenable. The principle in question is Frege’s Basic Law V:
\[ \{x|Fx\}=\{x|Gx\} \text{ if and only if } \forall x(Fx \equiv Gx), \]
In words: the set of the Fs is identical with the set of the Gs iff the Fs are precisely the Gs.
In a famous letter to Frege, Russell showed that Frege’s Basic Law V entails a contradiction (Russell 1902). This argument has come to be known as Russell’s paradox (see section 2.4).
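The derivation of the contradiction is short. A standard reconstruction in modern notation (using the class comprehension that Basic Law V licenses) runs as follows:

\[
R = \{x \mid x \notin x\} \quad\Longrightarrow\quad \forall x\,(x \in R \leftrightarrow x \notin x) \quad\Longrightarrow\quad R \in R \leftrightarrow R \notin R,
\]

which is a contradiction: \(R\) belongs to itself if and only if it does not.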
Russell himself then tried to reduce mathematics to logic in another way. Frege’s Basic Law V entails that corresponding to every property of mathematical entities, there exists a class of mathematical entities having that property. This was evidently too strong, for it was exactly this consequence which led to Russell’s paradox. So Russell postulated that only properties of mathematical objects that have already been shown to exist determine classes. Predicates that implicitly refer to the class that they were to determine if such a class existed do not determine a class. Thus a typed structure of properties is obtained: properties of ground objects, properties of ground objects and classes of ground objects, and so on. This typed structure of properties determines a layered universe of mathematical objects, starting from ground objects, proceeding to classes of ground objects, then to classes of ground objects and classes of ground objects, and so on.
Unfortunately, Russell found that the principles of his typed logic did not suffice for deducing even the basic laws of arithmetic. He needed, among other things, to lay down as a basic principle that there exists an infinite collection of ground objects. This could hardly be regarded as a logical principle. Thus the second attempt to reduce mathematics to logic also faltered.
And there matters stood for more than fifty years. In 1983, Crispin Wright’s book on Frege’s theory of the natural numbers appeared (Wright 1983). In it, Wright breathes new life into the logicist project. He observes that Frege’s derivation of second-order Peano Arithmetic can be broken down into two stages. In a first stage, Frege uses the inconsistent Basic Law V to derive what has come to be known as Hume’s Principle:
The number of the Fs = the number of the Gs if and only if \(F\approx G\),
where \(F \approx G\) means that the Fs and the Gs stand in one-to-one correspondence with each other. (This relation of one-to-one correspondence can be expressed in second-order logic.) Then, in a second stage, the principles of second-order Peano Arithmetic are derived from Hume’s Principle and the accepted principles of second-order logic. In particular, Basic Law V is not needed in the second part of the derivation. Moreover, Wright conjectured that in contrast to Frege’s Basic Law V, Hume’s Principle is consistent. George Boolos and others observed that Hume’s Principle is indeed consistent (Boolos 1987).
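For concreteness, one standard textbook way of expressing one-to-one correspondence in second-order logic (not a quotation from Frege or Wright) is:

\[ F \approx G \;\equiv_{df}\; \exists R\, \big[\, \forall x\,\big(Fx \rightarrow \exists! y\,(Gy \wedge Rxy)\big) \;\wedge\; \forall y\,\big(Gy \rightarrow \exists! x\,(Fx \wedge Rxy)\big) \,\big], \]

where the second-order quantifier \(\exists R\) ranges over binary relations, and \(\exists!\) abbreviates unique existence, which is itself definable in first-order terms.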
Wright went on to claim that Hume’s Principle can be regarded as a truth of logic. If that is so, then at least second-order Peano arithmetic is reducible to logic alone. Thus a new form of logicism was born; today this view is known as neo-logicism (Hale & Wright 2001). Most philosophers of mathematics today doubt that Hume’s Principle is a principle of logic. Indeed, even Wright later sought to qualify this claim. Nonetheless, many philosophers of mathematics feel that the introduction of natural numbers through Hume’s Principle is attractive from an ontological and from an epistemological point of view. Linnebo argues that because the left-hand side of Hume’s Principle merely re-carves the content of its right-hand side, not much is needed from the world to make Hume’s Principle true. For this reason, he calls natural numbers and mathematical objects that can be introduced in a similar way light mathematical objects (Linnebo 2018).
Wright’s work has drawn the attention of philosophers of mathematics to the kind of principles of which Basic Law V and Hume’s Principle are examples. These principles are called abstraction principles. At present, philosophers of mathematics attempt to construct general theories of abstraction principles that explain which abstraction principles are acceptable and which are not, and why (Weir 2003; Fine 2002). Also, it has emerged that in the context of weakened versions of second-order logic, Frege’s Basic Law V is consistent. But these weak background theories only allow very weak arithmetical theories to be derived from Basic Law V (Burgess 2005).
Intuitionism originates in the work of the mathematician L.E.J. Brouwer (van Atten 2004), and it is inspired by Kantian views of what objects are (Parsons 2008, chapter 1). According to intuitionism, mathematics is essentially an activity of construction. The natural numbers are mental constructions, the real numbers are mental constructions, proofs and theorems are mental constructions, mathematical meaning is a mental construction… Mathematical constructions are produced by the ideal mathematician, i.e., abstraction is made from contingent, physical limitations of the real life mathematician. But even the ideal mathematician remains a finite being. She can never complete an infinite construction, even though she can complete arbitrarily large finite initial parts of it. This entails that intuitionism resolutely rejects the existence of the actual (or completed) infinite; only potentially infinite collections are given in the activity of construction. A basic example is the successive construction in time of the individual natural numbers.
From these general considerations about the nature of mathematics, based on the condition of the human mind (Moore 2001), intuitionists infer to a revisionist stance in logic and mathematics. They find non-constructive existence proofs unacceptable. Non-constructive existence proofs are proofs that purport to demonstrate the existence of a mathematical entity having a certain property without even implicitly containing a method for generating an example of such an entity. Intuitionism rejects non-constructive existence proofs as ‘theological’ and ‘metaphysical’. The characteristic feature of non-constructive existence proofs is that they make essential use of the principle of excluded third
\[ \phi \vee \neg \phi, \]
or one of its equivalents, such as the principle of double negation
\[ \neg \neg \phi \rightarrow \phi \]
In classical logic, these principles are valid. The logic of intuitionistic mathematics is obtained by removing the principle of excluded third (and its equivalents) from classical logic. This of course leads to a revision of mathematical knowledge. For instance, the classical theory of elementary arithmetic, Peano Arithmetic, can no longer be accepted. Instead, an intuitionistic theory of arithmetic (called Heyting Arithmetic) is proposed which does not contain the principle of excluded third. Although intuitionistic elementary arithmetic is weaker than classical elementary arithmetic, the difference is not all that great. There exists a simple syntactical translation which translates all classical theorems of arithmetic into theorems which are intuitionistically provable.
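The translation alluded to here is the Gödel–Gentzen negative translation \(\phi \mapsto \phi^N\), defined by recursion on formulas; its standard clauses are:

\[
\begin{aligned}
\phi^N &= \neg\neg\phi && \text{for atomic } \phi\\
(\phi \wedge \psi)^N &= \phi^N \wedge \psi^N\\
(\phi \vee \psi)^N &= \neg(\neg\phi^N \wedge \neg\psi^N)\\
(\phi \rightarrow \psi)^N &= \phi^N \rightarrow \psi^N\\
(\forall x\, \phi)^N &= \forall x\, \phi^N\\
(\exists x\, \phi)^N &= \neg\forall x\, \neg\phi^N
\end{aligned}
\]

If Peano Arithmetic proves \(\phi\), then Heyting Arithmetic proves \(\phi^N\); in this sense classical arithmetic is interpretable within intuitionistic arithmetic.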
In the first decades of the twentieth century, parts of the mathematical community were sympathetic to the intuitionistic critique of classical mathematics and to the alternative that it proposed. This situation changed when it became clear that in higher mathematics, the intuitionistic alternative differs rather drastically from the classical theory. For instance, intuitionistic mathematical analysis is a fairly complicated theory, and it is very different from classical mathematical analysis. This dampened the enthusiasm of the mathematical community for the intuitionistic project. Nevertheless, followers of Brouwer have continued to develop intuitionistic mathematics up to the present day (Troelstra & van Dalen 1988).
David Hilbert agreed with the intuitionists that there is a sense in which the natural numbers are basic in mathematics. But unlike the intuitionists, Hilbert did not take the natural numbers to be mental constructions. Instead, he argued that the natural numbers can be taken to be symbols. Symbols are strictly speaking abstract objects. Nonetheless, it is essential to symbols that they can be embodied by concrete objects, so we may call them quasi-concrete objects (Parsons 2008, chapter 1). Perhaps physical entities could play the role of the natural numbers. For instance, we may take a concrete ink trace of the form | to be the number 0, a concretely realized ink trace || to be the number 1, and so on. Hilbert thought it doubtful at best that higher mathematics could be directly interpreted in a similarly straightforward and perhaps even concrete manner.
Unlike the intuitionists, Hilbert was not prepared to take a revisionist stance toward the existing body of mathematical knowledge. Instead, he adopted an instrumentalist stance with respect to higher mathematics. He thought that higher mathematics is no more than a formal game. The statements of higher-order mathematics are uninterpreted strings of symbols. Proving such statements is no more than a game in which symbols are manipulated according to fixed rules. The point of the ‘game of higher mathematics’ consists, in Hilbert’s view, in proving statements of elementary arithmetic, which do have a direct interpretation (Hilbert 1925).
Hilbert thought that there can be no reasonable doubt about the soundness of classical Peano Arithmetic — or at least about the soundness of a subsystem of it that is called Primitive Recursive Arithmetic (Tait 1981). And he thought that every arithmetical statement that can be proved by making a detour through higher mathematics, can also be proved directly in Peano Arithmetic. In fact, he strongly suspected that every problem of elementary arithmetic can be decided from the axioms of Peano Arithmetic. Of course solving arithmetical problems in arithmetic is in some cases practically impossible. The history of mathematics has shown that making a “detour” through higher mathematics can sometimes lead to a proof of an arithmetical statement that is much shorter and that provides more insight than any purely arithmetical proof of the same statement.
Hilbert realized, albeit somewhat dimly, that some of his convictions can actually be considered to be mathematical conjectures. For a proof in a formal system of higher mathematics or of elementary arithmetic is a finite combinatorial object which can, modulo coding, be considered to be a natural number. But in the 1920s the details of coding proofs as natural numbers were not yet completely understood.
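The basic idea of such coding can be illustrated with a toy prime-power scheme in the spirit of Gödel numbering. The Python sketch below is an illustration only, not Gödel’s actual 1931 numbering: a finite sequence of positive symbol codes is mapped to a single natural number via the exponents of successive primes.

```python
def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(seq):
    """Code a finite sequence of positive integers as one natural number:
    <a1, ..., ak> -> 2^a1 * 3^a2 * 5^a3 * ... (prime-power coding)."""
    result = 1
    for p, a in zip(primes(len(seq)), seq):
        result *= p ** a
    return result

def decode(n):
    """Recover the sequence from its code by reading off prime exponents."""
    seq = []
    p = 2
    while n > 1:
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        seq.append(exp)
        # Move to the next prime.
        p += 1
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
    return seq

# Example: the sequence (3, 1, 2) is coded as 2^3 * 3^1 * 5^2 = 600.
```

Because factorization into primes is unique, `encode` is injective, so syntactic objects such as formulas and proofs can be identified with natural numbers and reasoned about arithmetically.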
On the formalist view, a minimal requirement of formal systems of higher mathematics is that they are at least consistent. Otherwise every statement of elementary arithmetic can be proved in them. Hilbert also saw (again, dimly) that the consistency of a system of higher mathematics entails that this system is at least partially arithmetically sound. So Hilbert and his students set out to prove statements such as the consistency of the standard postulates of mathematical analysis. Of course such statements would have to be proved in a ‘safe’ part of mathematics, such as elementary arithmetic. Otherwise the proof does not increase our conviction in the consistency of mathematical analysis. And, fortunately, it seemed possible in principle to do this, for in the final analysis consistency statements are, again modulo coding, arithmetical statements. So, to be precise, Hilbert and his students set out to prove the consistency of, e.g., the axioms of mathematical analysis in classical Peano arithmetic. This project was known as Hilbert’s program (Zach 2006). It turned out to be more difficult than they had expected. In fact, they did not even succeed in proving the consistency of the axioms of Peano Arithmetic in Peano Arithmetic.
Then Kurt Gödel proved that there exist arithmetical statements that are undecidable in Peano Arithmetic (Gödel 1931). This has become known as Gödel’s first incompleteness theorem. This did not bode well for Hilbert’s program, but it left open the possibility that the consistency of higher mathematics is not one of these undecidable statements. Unfortunately, Gödel then quickly realized that, unless (God forbid!) Peano Arithmetic is inconsistent, the consistency of Peano Arithmetic is independent of Peano Arithmetic. This is Gödel’s second incompleteness theorem. Gödel’s incompleteness theorems turn out to be generally applicable to all sufficiently strong but consistent recursively axiomatizable theories. Together, they entail that Hilbert’s program fails. It turns out that higher mathematics cannot be interpreted in a purely instrumental way. Higher mathematics can prove arithmetical sentences, such as consistency statements, that are beyond the reach of Peano Arithmetic.
All this does not spell the end of formalism. Even in the face of the incompleteness theorems, it is coherent to maintain that mathematics is the science of formal systems.
One version of this view was proposed by Curry (Curry 1958). On this view, mathematics consists of a collection of formal systems which have no interpretation or subject matter. (Curry here makes an exception for metamathematics.) Relative to a formal system, one can say that a statement is true if and only if it is derivable in the system. But on a fundamental level, all mathematical systems are on a par. There can be at most pragmatical reasons for preferring one system over another. Inconsistent systems can prove all statements and therefore are pretty useless. So when a system is found to be inconsistent, it must be modified. It is simply a lesson from Gödel’s incompleteness theorems that a sufficiently strong consistent system cannot prove its own consistency.
There is a canonical objection against Curry’s formalist position. Mathematicians do not in fact treat all apparently consistent formal systems as being on a par. Most of them are unwilling to admit that the preference for arithmetical systems in which the arithmetical sentence expressing the consistency of Peano Arithmetic is derivable over those in which its negation is derivable, for instance, can ultimately be explained in purely pragmatical terms. Many mathematicians want to maintain that the perceived correctness (incorrectness) of certain formal systems must ultimately be explained by the fact that they correctly (incorrectly) describe certain subject matters.
Detlefsen has emphasized that the incompleteness theorems do not preclude that the consistency of parts of higher mathematics that are in practice used for solving arithmetical problems that mathematicians are interested in can be arithmetically established (Detlefsen 1986). In this sense, something can perhaps be rescued from the flames even if Hilbert’s instrumentalist stance towards all of higher mathematics is ultimately untenable.
Another attempt to salvage a part of Hilbert’s program was made by Isaacson (Isaacson 1987). He defends the view that in some sense, Peano Arithmetic may be complete after all. He argues that true sentences undecidable in Peano Arithmetic can only be proved by means of higher-order concepts. For instance, the consistency of Peano Arithmetic can be proved by induction up to a transfinite ordinal number (Gentzen 1938). But the notion of an ordinal number is a set-theoretic, and hence non-arithmetical, concept. If the only ways of proving the consistency of arithmetic make essential use of notions which arguably belong to higher-order mathematics, then the consistency of arithmetic, even though it can be expressed in the language of Peano Arithmetic, is a non-arithmetical problem. And generalizing from this, one can wonder whether Hilbert’s conjecture that every problem of arithmetic can be decided from the axioms of Peano Arithmetic might not still be true.
As was mentioned earlier, predicativism is not ordinarily described as one of the schools. But it is only for contingent reasons that before the advent of the second world war predicativism did not rise to the level of prominence of the other schools.
The origin of predicativism lies in the work of Russell. Taking a cue from Poincaré, he arrived at the following diagnosis of the Russell paradox. The argument of the Russell paradox defines the collection C of all mathematical entities that satisfy \(\neg x\in x\). The argument then proceeds by asking whether C itself meets this condition, and derives a contradiction.
The Poincaré-Russell diagnosis of this argument states that this definition does not pick out a collection at all: it is impossible to define a collection S by a condition that implicitly refers to S itself. This is called the vicious circle principle. Definitions that violate the vicious circle principle are called impredicative. A sound definition of a collection only refers to entities that exist independently from the defined collection. Such definitions are called predicative. As Gödel later pointed out, a platonist would find this line of reasoning unconvincing. If mathematical collections exist independently of the act of defining, then it is not immediately clear why there could not be collections that can only be defined impredicatively (Gödel 1944).
All this led Russell to develop the simple and the ramified theory of types, in which syntactical restrictions were built in that make impredicative definitions ill-formed. In simple type theory, the free variables in defining formulas range over entities to which the collection to be defined does not belong. In ramified type theory, it is required in addition that the range of the bound variables in defining formulas does not include the collection to be defined. It was pointed out in section 2.1 that Russell’s type theory cannot be seen as a reduction of mathematics to logic. But even aside from that, it was observed early on that especially in ramified type theory it is too cumbersome to formalize ordinary mathematical arguments.
When Russell turned to other areas of analytical philosophy, Hermann Weyl took up the predicativist cause (Weyl 1918). Like Poincaré, Weyl did not share Russell’s desire to reduce mathematics to logic. And right from the start he saw that it would be in practice impossible to work in a ramified type theory. Weyl developed a philosophical stance that is in a sense intermediate between intuitionism and platonism. He took the collection of natural numbers as unproblematically given. But the concept of an arbitrary subset of the natural numbers was not taken to be immediately given in mathematical intuition. Only those subsets which are determined by arithmetical (i.e., first-order) predicates are taken to be predicatively acceptable.
On the one hand, it emerged that many of the standard definitions in mathematical analysis are impredicative. For instance, the minimal closure of an operation on a set is ordinarily defined as the intersection of all sets that are closed under applications of the operation. But the minimal closure itself is one of the sets that are closed under applications of the operation. Thus, the definition is impredicative. In this way, attention gradually shifted away from concern about the set-theoretical paradoxes to the role of impredicativity in mainstream mathematics. On the other hand, Weyl showed that it is often possible to bypass impredicative notions. It even emerged that most of mainstream nineteenth century mathematical analysis can be vindicated on a predicative basis (Feferman 1988).
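The impredicativity of the minimal closure can be made explicit. For a set \(A\) and an operation \(f\), the standard definition reads:

\[ \mathrm{Cl}(A) \;=\; \bigcap \,\{\, X \mid A \subseteq X \;\wedge\; \forall x\,(x \in X \rightarrow f(x) \in X) \,\}. \]

Since \(\mathrm{Cl}(A)\) itself contains \(A\) and is closed under \(f\), it is among the sets \(X\) over which the defining intersection quantifies, so the definition violates the vicious circle principle.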
In the 1920s, history intervened. Weyl was won over to Brouwer’s more radical intuitionistic project. In the meantime, mathematicians became convinced that the highly impredicative transfinite set theory developed by Cantor and Zermelo was less acutely threatened by Russell’s paradox than previously suspected. These factors caused predicativism to lapse into a dormant state for several decades.
Building on work in generalized recursion theory, Solomon Feferman extended the predicativist project in the 1960s (Feferman 2005). He realized that Weyl’s strategy could be iterated into the transfinite. Also those sets of numbers that can be defined by using quantification over the sets that Weyl regarded as predicatively justified, should be counted as predicatively acceptable, and so on. This process can be propagated along an ordinal path. This ordinal path stretches as far into the transfinite as the predicative ordinals reach, where an ordinal is predicative if it measures the length of a provable well-ordering of the natural numbers. This calibration of the strength of predicative mathematics, which is due to Feferman and (independently) Schütte, is nowadays fairly generally accepted. Feferman then investigated how much of standard mathematical analysis can be carried out within a predicativist framework. The research of Feferman and others (most notably Harvey Friedman) shows that most of twentieth century analysis is acceptable from a predicativist point of view. But it is also clear that not all of contemporary mathematics that is generally accepted by the mathematical community is acceptable from a predicativist standpoint: transfinite set theory is a case in point.
In the years before the second world war it became clear that weighty objections had been raised against each of the three anti-platonist programs in the philosophy of mathematics. Predicativism was perhaps an exception, but it was at the time a program without defenders. Thus room was created for a renewed interest in the prospects of platonistic views about the nature of mathematics. On the platonistic conception, the subject matter of mathematics consists of abstract entities.
Gödel was a platonist with respect to mathematical objects and with respect to mathematical concepts (Gödel 1944; Gödel 1964). But his platonistic view was more sophisticated than that of the mathematician in the street.
Gödel held that there is a strong parallelism between plausible theories of mathematical objects and concepts on the one hand, and plausible theories of physical objects and properties on the other hand. Like physical objects and properties, mathematical objects and concepts are not constructed by humans. Like physical objects and properties, mathematical objects and concepts are not reducible to mental entities. Mathematical objects and concepts are as objective as physical objects and properties. Mathematical objects and concepts are, like physical objects and properties, postulated in order to obtain a satisfactory theory of our experience. Indeed, in a way that is analogous to our perceptual relation to physical objects and properties, through mathematical intuition we stand in a quasi-perceptual relation with mathematical objects and concepts. Our perception of physical objects and properties is fallible and can be corrected. In the same way, mathematical intuition is not fool-proof — as the history of Frege’s Basic Law V shows — but it can be trained and improved. Unlike physical objects and properties, mathematical objects do not exist in space and time, and mathematical concepts are not instantiated in space or time.
Our mathematical intuition provides intrinsic evidence for mathematical principles. Virtually all of our mathematical knowledge can be deduced from the axioms of Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC). In Gödel’s view, we have compelling intrinsic evidence for the truth of these axioms. But he also worried that mathematical intuition might not be strong enough to provide compelling evidence for axioms that significantly exceed the strength of ZFC.
Aside from intrinsic evidence, it is in Gödel’s view also possible to obtain extrinsic evidence for mathematical principles. If mathematical principles are successful, then, even if we are unable to obtain intuitive evidence for them, they may be regarded as probably true. Gödel says that:
… success here means fruitfulness in consequences, particularly in ‘verifiable’ consequences, i.e. consequences verifiable without the new axiom, whose proofs with the help of the new axiom, however, are considerably simpler and easier to discover, and which make it possible to contract into one proof many different proofs […] There might exist axioms so abundant in their verifiable consequences, shedding so much light on a whole field, yielding such powerful methods for solving problems […] that, no matter whether or not they are intrinsically necessary, they would have to be accepted at least in the same sense as any well-established physical theory. (Gödel 1947, p. 477)
This inspired Gödel to search for new axioms which can be extrinsically motivated and which can decide questions such as the continuum hypothesis which are highly independent of ZFC (cf. section 5.1).
Gödel shared Hilbert’s conviction that all mathematical questions have definite answers. But platonism in the philosophy of mathematics should not be taken to be ipso facto committed to holding that all set-theoretical propositions have determinate truth values. There are versions of platonism that maintain, for instance, that all theorems of ZFC are made true by determinate set-theoretical facts, but that there are no set-theoretical facts that make certain statements that are highly independent of ZFC truth-determinate. It seems that the famous set theorist Paul Cohen held some such view (Cohen 1971).
Quine formulated a methodological critique of traditional philosophy. He suggested a different philosophical methodology instead, which has become known as naturalism (Quine 1969). According to naturalism, our best theories are our best scientific theories. If we want to obtain the best available answer to philosophical questions such as What do we know? and Which kinds of entities exist?, we should not appeal to traditional epistemological and metaphysical theories. We should also refrain from embarking on a fundamental epistemological or metaphysical inquiry starting from first principles. Rather, we should consult and analyze our best scientific theories. They contain, albeit often implicitly, our currently best account of what exists, what we know, and how we know it.
Putnam applied Quine’s naturalistic stance to mathematical ontology (Putnam 1972). At least since Galilei, our best theories from the natural sciences are mathematically expressed. Newton’s theory of gravitation, for instance, relies heavily on the classical theory of the real numbers. Thus an ontological commitment to mathematical entities seems inherent to our best scientific theories. This line of reasoning can be strengthened by appealing to the Quinean thesis of confirmational holism. Empirical evidence does not bestow its confirmatory power on any one individual hypothesis. Rather, experience globally confirms the theory in which the individual hypothesis is embedded. Since mathematical theories are part and parcel of scientific theories, they too are confirmed by experience. Thus, we have empirical confirmation for mathematical theories. Even more appears true. It seems that mathematics is indispensable to our best scientific theories: it is not at all obvious how we could express them without using mathematical vocabulary. Hence the naturalist stance commands us to accept mathematical entities as part of our philosophical ontology. This line of argumentation is called an indispensability argument (Colyvan 2001).
If we take the mathematics that is involved in our best scientific theories at face value, then we appear to be committed to a form of platonism. But it is a more modest form of platonism than Gödel’s platonism. For it appears that the natural sciences can get by with (roughly) function spaces on the real numbers. The higher regions of transfinite set theory appear to be largely irrelevant to even our most advanced theories in the natural sciences. Nevertheless, Quine thought (at some point) that the sets that are postulated by ZFC are acceptable from a naturalistic point of view; they can be regarded as a generous rounding off of the mathematics that is involved in our scientific theories. Quine’s judgement on this matter is not universally accepted. Feferman, for instance, argues that all the mathematical theories that are essentially used in our currently best scientific theories are predicatively reducible (Feferman 2005). Maddy even argues that naturalism in the philosophy of mathematics is perfectly compatible with a non-realist view about sets (Maddy 2007, part IV).
In Quine’s philosophy, the natural sciences are the ultimate arbiters concerning mathematical existence and mathematical truth. This has led Charles Parsons to object that this picture makes the obviousness of elementary mathematics somewhat mysterious (Parsons 1980). For instance, the question whether every natural number has a successor ultimately depends, in Quine’s view, on our best empirical theories; however, somehow this fact appears more immediate than that. In a kindred spirit, Maddy notes that mathematicians do not take themselves to be in any way restricted in their activity by the natural sciences. Indeed, one might wonder whether mathematics should not be regarded as a science in its own right, and whether the ontological commitments of mathematics should not be judged rather on the basis of the rational methods that are implicit in mathematical practice.
Motivated by these considerations, Maddy set out to inquire into the standards of existence implicit in mathematical practice, and into the implicit ontological commitments of mathematics that follow from these standards (Maddy 1990). She focussed on set theory, and on the methodological considerations that are brought to bear by the mathematical community on the question of which large cardinal axioms can be taken to be true. Thus her view is closer to that of Gödel than to that of Quine. In more recent work, she isolates two maxims that seem to be guiding set theorists when contemplating the acceptability of new set-theoretic principles: unify and maximize (Maddy 1997). The maxim “unify” is an instigation for set theory to provide a single system in which all mathematical objects and structures of mathematics can be instantiated or modelled. The maxim “maximize” means that set theory should adopt set-theoretic principles that are as powerful and mathematically fruitful as possible.
Bernays observed that when a mathematician is at work she “naively” treats the objects she is dealing with in a platonistic way. Every working mathematician, he says, is a platonist (Bernays 1935). But when the mathematician is caught off duty by a philosopher who quizzes her about her ontological commitments, she is apt to shuffle her feet and withdraw to a vaguely non-platonistic position. This has been taken by some to indicate that there is something wrong with philosophical questions about the nature of mathematical objects and of mathematical knowledge.
Carnap introduced a distinction between questions that are internal to a framework and questions that are external to a framework (Carnap 1950). It has been argued that Carnap’s distinction in some guise survives the demise of the logical empiricist framework in which it was first articulated (Burgess 2004b). Tait has attempted to work out in detail how the resulting distinction can be applied to mathematics (Tait 2005). This has resulted in what might be regarded as a deflationary version of platonism.
According to Tait, questions of existence of mathematical entities can only be sensibly asked and reasonably answered from within (axiomatic) mathematical frameworks. If one is working in number theory, for instance, then one can ask whether there are prime numbers that have a given property. Such questions are then to be decided on purely mathematical grounds. Philosophers have a tendency to step outside the framework of mathematics and ask “from the outside” whether mathematical objects really exist and whether mathematical propositions are really true. In asking this, they are asking for supra-mathematical or metaphysical grounds for mathematical truth and existence claims. Tait argues that it is hard to see how any sense can be made of such external questions. He attempts to deflate them, and bring them back to where they belong: to mathematical practice itself. Of course not everyone agrees with Tait on this point. Linsky and Zalta have developed a systematic way of answering precisely the sort of external questions that Tait approaches with disdain (Linsky & Zalta 1995).
It comes as no surprise that Tait has little use for Gödelian appeals to mathematical intuition in the philosophy of mathematics, or for the philosophical thesis that mathematical objects exist “outside space and time”. More generally, Tait believes that mathematics is not in need of a philosophical foundation; he wants to let mathematics speak for itself. In this sense, his position is reminiscent of the (in some sense Wittgensteinian) natural ontological attitude that is advocated by Arthur Fine in the realism debate in the philosophy of science.
Benacerraf formulated an epistemological problem for a variety of platonistic positions in the philosophy of mathematics (Benacerraf 1973). The argument is specifically directed against accounts of mathematical intuition such as that of Gödel. Benacerraf’s argument starts from the premise that our best theory of knowledge is the causal theory of knowledge. It is then noted that according to platonism, abstract objects are not spatially or temporally localized, whereas flesh and blood mathematicians are spatially and temporally localized. Our best epistemological theory then tells us that knowledge of mathematical entities should result from causal interaction with these entities. But it is difficult to imagine how this could be the case.
Today few epistemologists hold that the causal theory of knowledge is our best theory of knowledge. But it turns out that Benacerraf’s problem is remarkably robust under variation of epistemological theory. For instance, let us assume for the sake of argument that reliabilism is our best theory of knowledge. Then the problem becomes to explain how we succeed in obtaining reliable beliefs about mathematical entities.
Hodes has formulated a semantical variant of Benacerraf’s epistemological problem (Hodes 1984). According to our currently best semantic theory, causal-historical connections between humans and the world of concreta enable our words to refer to physical entities and properties. According to platonism, mathematics refers to abstract entities. The platonist therefore owes us a plausible account of how we (physically embodied humans) are able to refer to them. On the face of it, it appears that the causal theory of reference will be unable to supply us with the required account of the ‘microstructure of reference’ of mathematical discourse.
A version of platonism has been developed which is intended to provide a solution to Benacerraf’s epistemological problem (Linsky & Zalta 1995; Balaguer 1998). This position is known as plenitudinous platonism. The central thesis of this theory is that every logically consistent mathematical theory necessarily refers to an abstract entity. Whether the mathematician who formulated the theory knows that it refers or does not know this is largely immaterial. By entertaining a consistent mathematical theory, a mathematician automatically acquires knowledge about the subject matter of the theory. So, on this view, there is no epistemological problem to solve anymore.
In Balaguer’s version, plenitudinous platonism postulates a multiplicity of mathematical universes, each corresponding to a consistent mathematical theory. Thus, in particular, a question such as the continuum problem (cf. section 5.1) does not receive a unique answer: in some set-theoretical universes the continuum hypothesis holds, in others it fails to hold. However, not everyone agrees that this picture can be maintained. Martin has developed an argument to show that multiple universes can always to a large extent be “accumulated” into a single universe (Martin 2001).
In Linsky and Zalta’s version of plenitudinous platonism, the mathematical entity that is postulated by a consistent mathematical theory has exactly the mathematical properties which are attributed to it by the theory. The abstract entity corresponding to ZFC, for instance, is partial in the sense that it makes the continuum hypothesis neither true nor false. The reason is that ZFC entails neither the continuum hypothesis nor its negation. This does not entail that all ways of consistently extending ZFC are on a par. Some ways may be fruitful and powerful, others less so. But the view does deny that certain consistent ways of extending ZFC are preferable because they consist of true principles, whereas others contain false principles.
Benacerraf’s work motivated philosophers to develop both structuralist and nominalist theories in the philosophy of mathematics (Reck & Price 2000). And since the late 1980s, combinations of structuralism and nominalism have also been developed.
As if saddling platonism with one difficult problem were not enough (section 3.4), Benacerraf formulated a challenge for set-theoretic platonism (Benacerraf 1965). The challenge takes the following form.
There exist infinitely many ways of identifying the natural numbers with pure sets. Let us restrict, without essential loss of generality, our discussion to two such ways:
\[\begin{align*} \mathrm{I}{:} & \\ 0 &= \varnothing \\ 1 &= \{\varnothing\} \\ 2 &= \{\{\varnothing\}\} \\ 3 &= \{\{\{\varnothing\}\}\} \\ \vdots&\\ &\\ \mathrm{II}{:} & \\ 0 &= \varnothing \\ 1 &= \{\varnothing \} \\ 2 &= \{\varnothing , \{ \varnothing \}\}\\ 3 &= \{\varnothing , \{\varnothing \}, \{\varnothing , \{\varnothing \}\}\} \\ \vdots& \end{align*}\]

The simple question that Benacerraf asks is:
Which of these consists solely of true identity statements: I or II?
It seems very difficult to answer this question. It is not hard to see how a successor function and addition and multiplication operations can be defined on the number-candidates of I and on the number-candidates of II so that all the arithmetical statements that we take to be true come out true. Indeed, if this is done in the natural way, then we arrive at isomorphic structures (in the set-theoretic sense of the word), and isomorphic structures make the same sentences true (they are elementarily equivalent). It is only when we ask extra-arithmetical questions, such as ‘is \(1 \in 3\)?’, that the two accounts of the natural numbers yield diverging answers. So it is impossible that both accounts are correct. According to story I, \(3 = \{\{\{\varnothing \}\}\}\), whereas according to story II, \(3 = \{\varnothing , \{\varnothing \}, \{\varnothing , \{\varnothing \}\}\}\). If both accounts were correct, then the transitivity of identity would yield a purely set-theoretic falsehood.
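To make the divergence concrete, the two codings can be mimicked with nested frozensets in Python. This is our own illustration, not part of Benacerraf’s argument; the helper names `zermelo` (account I) and `von_neumann` (account II) are our labels for the two sequences displayed above.

```python
def zermelo(n):
    """Account I (Zermelo numerals): 0 = {} and n+1 = {n}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset({s})
    return s

def von_neumann(n):
    """Account II (von Neumann numerals): 0 = {} and n+1 = n ∪ {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}
    return s

# Both codings validate the same arithmetical statements, but the
# extra-arithmetical question "is 1 ∈ 3?" receives diverging answers:
print(zermelo(1) in zermelo(3))       # False: zermelo(3) contains only zermelo(2)
print(von_neumann(1) in von_neumann(3))  # True: von_neumann(3) = {0, 1, 2}
```

The two runs of truth values differ only on such membership questions, which is exactly the point at which the accounts come apart.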
Summing up, we arrive at the following situation. On the one hand, there appear to be no reasons why one account is superior to the other. On the other hand, the accounts cannot both be correct. This predicament is sometimes labelled Benacerraf’s identification problem.
The proper conclusion to draw from this conundrum appears to be that neither account I nor account II is correct. Since similar considerations would emerge from comparing other reasonable-looking attempts to reduce natural numbers to sets, it appears that natural numbers are not sets after all. It is clear, moreover, that a similar argument can be formulated for the rational numbers, the real numbers… Benacerraf concludes that they, too, are not sets at all.
It is not at all clear whether Gödel, for instance, is committed to reducing the natural numbers to pure sets. A platonist can uphold the claim that the natural numbers can be embedded into the set-theoretic universe while maintaining that the embedding should not be seen as an ontological reduction. Indeed, on Linsky and Zalta’s plenitudinous platonist account, the natural numbers have no properties beyond those that are attributed to them by our theory of the natural numbers (Peano Arithmetic). But then it seems that platonists would have to take a similar line with respect to the rational numbers, the complex numbers, …. Whereas maintaining that the natural numbers are sui generis admittedly has some appeal, it is perhaps less natural to maintain that the complex numbers, for instance, are also sui generis. And, anyway, even if the natural numbers, the complex numbers, … are in some sense not reducible to anything else, one may wonder if there may not be another way to elucidate their nature.
Shapiro draws a useful distinction between algebraic and non-algebraic mathematical theories (Shapiro 1997). Roughly, non-algebraic theories are theories which appear at first sight to be about a unique model: the intended model of the theory. We have seen examples of such theories: arithmetic, mathematical analysis… Algebraic theories, in contrast, do not carry a prima facie claim to be about a unique model. Examples are group theory, topology, graph theory…
Benacerraf’s challenge can be mounted for the objects that non-algebraic theories appear to describe. But his challenge does not apply to algebraic theories. Algebraic theories are not interested in mathematical objects per se; they are interested in structural aspects of mathematical objects. This led Benacerraf to speculate whether the same could not be true also of non-algebraic theories. Perhaps the lesson to be drawn from Benacerraf’s identification problem is that even arithmetic does not describe specific mathematical objects, but instead only describes structural relations?
Shapiro and Resnik hold that all mathematical theories, even non-algebraic ones, describe structures. This position is known as structuralism (Shapiro 1997; Resnik 1997). Structures consist of places that stand in structural relations to each other. Thus, derivatively, mathematical theories describe places or positions in structures. But they do not describe objects. The number three, for instance, will on this view not be an object but a place in the structure of the natural numbers.
Systems are instantiations of structures. The systems that instantiate the structure that is described by a non-algebraic theory are isomorphic with each other, and thus, for the purposes of the theory, equally good. The systems I and II that were described in section 4.1 can be seen as instantiations of the natural number structure. \(\{\{\{\varnothing \}\}\}\) and \(\{\varnothing , \{\varnothing \}, \{\varnothing , \{\varnothing \}\}\}\) are equally suitable for playing the role of the number three. But neither is the number three. For the number three is an open place in the natural number structure, and this open place does not have any internal structure. Systems typically contain structural properties over and above those that are relevant for the structures that they are taken to instantiate.
Sensible identity questions are those that can be asked from within a structure. They are those questions that can be answered on the basis of structural aspects of the structure. Identity questions that go beyond a structure do not make sense. One can pose the question whether \(3 \in 4\), but not cogently: this question involves a category mistake. The question mixes two different structures: \(\in\) is a set-theoretical notion, whereas 3 and 4 are places in the structure of the natural numbers. This seems to constitute a satisfactory answer to Benacerraf’s challenge.
In Shapiro’s view, structures are not ontologically dependent on the existence of systems that instantiate them. Even if there were no infinite systems to be found in Nature, the structure of the natural numbers would exist. Thus structures as Shapiro understands them are abstract, platonic entities. Shapiro’s brand of structuralism is often labeled ante rem structuralism.
In textbooks on set theory we also find a notion of structure. Roughly, the set-theoretic definition says that a structure is an ordered \(n+1\)-tuple consisting of a set, a number of relations on this set, and a number of distinguished elements of this set. But this cannot be the notion of structure that structuralism in the philosophy of mathematics has in mind. For the set-theoretic notion of structure presupposes the concept of set, which, according to structuralism, should itself be explained in structural terms. Or, to put the point differently, a set-theoretical structure is merely a system that instantiates a structure that is ontologically prior to it.
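For concreteness, a standard textbook example of such a tuple (our example, not one given in the text) is the structure of arithmetic:

\[
\mathfrak{N} = \langle \mathbb{N}, +, \times, 0 \rangle ,
\]

where \(\mathbb{N}\) is the underlying set, \(+\) and \(\times\) are operations (relations) on that set, and \(0\) is a distinguished element of it.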
Nonetheless, the motivation for extending ante rem structuralism even to the most encompassing mathematical discipline (set theory) is not entirely evident (Burgess 2015). Recall that the main motivation for arriving at a structuralist understanding of a mathematical discipline lies in Benacerraf’s identification problem. For set theory, it seems hard to mount an identification challenge: sets are not usually defined in terms of more primitive concepts.
It appears that ante rem structuralism describes the notion of a structure in a somewhat circular manner. A structure is described as consisting of places that stand in relations to each other, but a place cannot be described independently of the structure to which it belongs. Yet this is not necessarily a problem. For the ante rem structuralist, the notion of structure is a primitive concept, which cannot be defined in other more basic terms. At best, we can construct an axiomatic theory of mathematical structures.
But Benacerraf’s epistemological problem still appears to be urgent. Structures and places in structures may not be objects, but they are abstract. So it is natural to wonder how we succeed in obtaining knowledge of them. This problem has been taken by certain philosophers as a reason for developing a nominalist theory of mathematics and then to reconcile this theory with basic tenets of structuralism.
Goodman and Quine tried early on to bite the bullet: they embarked on a project to reformulate theories from natural science without making use of abstract entities (Goodman & Quine 1947). The nominalistic reconstruction of scientific theories proved to be a difficult task. Quine, for one, abandoned it after this initial attempt. In the past decades many theories have been proposed that purport to give a nominalistic reconstruction of mathematics. (Burgess & Rosen 1997) contains a good critical discussion of such views.
In a nominalist reconstruction of mathematics, concrete entities will have to play the role that abstract entities play in platonistic accounts of mathematics, and concrete relations (such as the part-whole relation) have to be used to simulate mathematical relations between mathematical objects. But here problems arise. First, Hilbert already observed that, given the discretization of nature in quantum mechanics, the natural sciences may in the end claim that there are only finitely many concrete entities (Hilbert 1925). Yet it seems that we would need infinitely many of them to play the role of the natural numbers, never mind the real numbers. Where does the nominalist find the required collection of concrete entities? Secondly, even if the existence of infinitely many concrete objects is assumed, it is not clear that even elementary mathematical theories such as Primitive Recursive Arithmetic can be “simulated” by means of nominalistic relations (Niebergall 2000).
Field made an earnest attempt to carry out a nominalistic reconstruction of Newtonian mechanics (Field 1980). The basic idea is this. Field wanted to use concrete surrogates of the real numbers and functions on them. He adopted a realist stance toward the spatial continuum, and took regions of space to be as physically real as chairs and tables. And he took regions of space to be concrete (after all, they are spatially located). If we also count the very disconnected ones, then there are as many regions of Newtonian space as there are subsets of the real numbers. And then there are enough concrete entities to play the role of the natural numbers, the real numbers, and functions on the real numbers. And the theory of the real numbers and functions on them is all that is needed to formulate Newtonian mechanics. Of course it would be even more interesting to have a nominalistic reconstruction of a truly contemporary scientific theory such as quantum mechanics. But given that the project can be carried out for Newtonian mechanics, some degree of initial optimism seems justified.
This project clearly has its limitations. It may be possible nominalistically to interpret theories of function spaces on the real numbers, say. But it seems far-fetched to think that along Fieldian lines a nominalistic interpretation of set theory can be found. Nevertheless, if it is successful within its confines, then Field’s program has really achieved something. For it would mean that, to some extent at least, mathematical entities are dispensable after all. He would thereby have taken an important step towards undermining the indispensability argument for Quinean modest platonism in mathematics.
Field’s strategy only has a chance of working if Hilbert’s fear, that in a very fundamental sense our best scientific theories may entail that there are only finitely many concrete entities, is ill-founded. If one sympathizes with Hilbert’s concern but does not believe in the existence of abstract entities, then one might bite the bullet and claim that there are only finitely many mathematical entities, thus contradicting the basic principles of elementary arithmetic. This leads to a position that has been called ultra-finitism (Essenin-Volpin 1961).
On most accounts, ultra-finitism leads, like intuitionism, to revisionism in mathematics. For it would seem that one would then have to say that there is a largest natural number, for instance. From the outside, a theory postulating only a finite mathematical universe appears proof-theoretically weak, and therefore very likely to be consistent. But Woodin has developed an argument that purports to show that from the ultra-finitist perspective, there are no grounds for asserting that the ultra-finitist theory is likely to be consistent (Woodin 2011).
Regardless of this argument (the details of which are not discussed here), many already find the assertion that there is a largest number hard to swallow. But Lavine has articulated a sophisticated form of set-theoretical ultra-finitism which is mathematically non-revisionist (Lavine 1994). He has developed a detailed account of how the principles of ZFC can be taken to be principles that describe determinately finite sets, if these are taken to include indefinitely large ones.
Field’s physicalist interpretation of arithmetic and analysis not only undermines the Quine-Putnam indispensability argument. It also partially provides an answer to Benacerraf’s epistemological challenge. Admittedly it is not a simple task to give an account of how humans obtain knowledge of spacetime regions. But at least according to many (but not all) philosophers, spacetime regions are physically real. So we are no longer required to explicate how flesh and blood mathematicians stand in contact with non-physical entities. But Benacerraf’s identification problem remains. One may wonder why one spacetime point or region rather than another plays the role of the number \(\pi\), for instance.
In response to the identification problem, it seems attractive to combine a structuralist approach with Field’s nominalism. This leads to versions of nominalist structuralism, which can be outlined as follows. Let us focus on mathematical analysis. The nominalist structuralist denies that any concrete physical system is the unique intended interpretation of analysis. All concrete physical systems that satisfy the basic principles of Real Analysis (RA) would do equally well. So the content of a sentence \(\phi\) of the language of analysis is (roughly) given by:
Every concrete system S that makes RA true, also makes \(\phi\) true.
This entails that, as with ante rem structuralism, only structural aspects are relevant to the truth or falsehood of mathematical statements. But unlike ante rem structuralism, no abstract structure is postulated above and beyond concrete systems.
According to in rebus structuralism, no abstract structures exist over and above the systems that instantiate them; structures exist only in the systems that instantiate them. For this reason nominalist in rebus structuralism is sometimes described as “structuralism without structures”. Nominalist structuralism is a form of in rebus structuralism. But in rebus structuralism is not exhausted by nominalist structuralism. Even the version of platonism that takes mathematics to be about structures in the set-theoretic sense of the word can be viewed as a form of in rebus structuralism.
In mathematical discourse, non-algebraic structures (such as ‘the’ natural numbers) and mathematical objects (such as ‘the’ number 1) are referred to by definite descriptions. This strongly suggests that mathematical symbols (N, 1) have a unique reference rather than a ‘distributed’ one, as in rebus structuralism would have it. But in rebus structuralists argue that such mathematical symbols function as dedicated variables, in much the same way as, in the World War II slogan ‘Tommy needs his letters from home’, the name ‘Tommy’ is chosen to stand for some arbitrary concrete soldier, and is re-used on many occasions without changing its reference (Pettigrew 2008).
If Hilbert’s worry is well-founded in the sense that there are no concrete physical systems that make the postulates of mathematical analysis true, then the above nominalist structuralist rendering of the content of a sentence \(\phi\) of the language of analysis gets the truth conditions of such sentences wrong. For then, for every sentence \(\phi\), its universally quantified paraphrase will come out vacuously true. So an existential assumption to the effect that there exist concrete physical systems that can serve as a model for RA is needed to back up the above analysis of the content of mathematical statements. Perhaps something like Field’s construction fits the bill.
Putnam noticed early on that if the above explication of the content of mathematical sentences is modified somewhat, a substantially weaker background assumption is sufficient to obtain the correct truth conditions (Putnam 1967). Putnam proposed the following modal rendering of the content of a sentence \(\phi\) of the language of analysis:
Necessarily, every concrete system S that makes RA true, also makes \(\phi\) true.
This is a stronger statement than the nonmodal rendering that was presented earlier. But it seems equally plausible. And an advantage of this rendering is that the following modal existential background assumption is sufficient to make the truth conditions of mathematical statements come out right:
It is possible that there exists a concrete physical system that can serve as a model for RA.
(‘It is possible that’ here means ‘It is or might have been the case that’.) Now Hilbert’s concern seems adequately addressed. For on Putnam’s account, the truth of mathematical sentences no longer depends on physical assumptions about the actual world.
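Using \(\models\) for satisfaction (the notation is our choice), the nonmodal rendering, Putnam’s modal rendering, and the modal background assumption can be summarized as:

\[
\begin{align*}
&\forall S\, (S \models \mathrm{RA} \rightarrow S \models \phi) && \text{(nonmodal rendering)}\\
&\Box\, \forall S\, (S \models \mathrm{RA} \rightarrow S \models \phi) && \text{(modal rendering)}\\
&\Diamond\, \exists S\, (S \models \mathrm{RA}) && \text{(background assumption)}
\end{align*}
\]

The background assumption guarantees that the modal conditional is not vacuously true: in at least one possible world its antecedent is satisfied.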
It is admittedly not easy to give a satisfying account of how we know that this modal existential assumption is fulfilled. But it may be hoped that the task is less daunting than the task of explaining how we succeed in knowing facts about abstract entities. And it should not be forgotten that the structuralist aspect of this (modal) nominalist position keeps Benacerraf’s identification challenge at bay.
Putnam’s strategy also has its limitations. Chihara sought to apply Putnam’s strategy not only to arithmetic and analysis but also to set theory (Chihara 1973). Then a crude version of the relevant modal existential assumption becomes:
It is possible that there exist concrete physical systems that can serve as a model for ZFC.
Parsons has noted that when possible worlds are needed which contain collections of physical entities that have large transfinite cardinalities or perhaps are even too large to have a cardinal number, it becomes hard to see these as possible concrete or physical systems (Parsons 1990a). We seem to have no reason to believe that there could be physical worlds that contain highly transfinitely many entities.
According to the previous proposals, the statements of ordinary mathematics are true when suitably, i.e., nominalistically, interpreted. The nominalistic account of mathematics that will now be discussed holds that all existential mathematical statements are false simply because there are no mathematical entities. (For the same reason all universal mathematical statements will be trivially true.)
Fictionalism holds that mathematical theories are like fiction stories such as fairy tales and novels. Mathematical theories describe fictional entities, in the same way that literary fiction describes fictional characters. This position was first articulated in the introductory chapter of (Field 1989), and has in recent years been gaining in popularity.
This crude description of the fictionalist position immediately opens up the question of what sort of entities fictional entities are. This appears to be a deep metaphysical problem. One way to avoid this question altogether is to deny that there exist fictional entities. Mathematical theories should then be viewed as invitations to participate in games of pretence, in which we act as if certain mathematical entities exist. Pretence or make-believe operators shield their propositional objects from existential exportation (Leng 2010).
In any case, as said above, on the fictionalist view a mathematical theory isn’t literally true. Nonetheless, mathematics is used to get truths across. So we must subtract something from what is literally said when we assert a physical theory that involves mathematics, if we want to get at the truth. But this requires a theory of how this subtraction of content works. Such a theory has been developed in (Yablo 2014).
If the fictionalist thesis is correct, then one demand that must be imposed on mathematical theories is surely consistency. Yet Field adds to this a second requirement: mathematics must be conservative over natural science. This means, roughly, that whenever a statement of an empirical theory can be derived using mathematics, it can in principle also be derived without using any mathematical theories. If this were not the case, then an indispensability argument could be played out against fictionalism. Whether mathematics is in fact conservative over physics, for instance, is currently a matter of controversy. Shapiro has formulated an incompleteness argument that intends to refute Field’s claim (Shapiro 1983).
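Schematically (the lettering here is ours, not Field’s), the conservativeness requirement says that for any mathematical theory \(M\), any body of nominalistically stated premises \(N\), and any nominalistically stated conclusion \(A\):

\[
\text{if } N \cup M \vdash A, \text{ then } N \vdash A .
\]

In words: adding mathematics to a nominalistically formulated empirical theory never yields new nominalistic consequences.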
If there are indeed no mathematical (fictional) entities, as one form of fictionalism has it, then Benacerraf’s epistemological problem does not arise. Fictionalism then shares this advantage over most forms of platonism with nominalistic reconstructions of mathematics. But the appeal to pretence operators entails that the logical form of mathematical sentences then differs somewhat from their surface form. If there are fictional objects, then the surface form of mathematical sentences can be taken to coincide with their logical form. But if they exist as abstract entities, then Benacerraf’s epistemological problem reappears.
Whether Benacerraf’s identification problem is solved is not completely clear. In general, fictionalism is a non-reductionist account. Whether an entity in one mathematical theory is identical with an entity that occurs in another theory is usually left indeterminate by mathematical “stories”. Yet Burgess has rightly emphasized that mathematics differs from literary fiction in the fact that fictional characters are usually confined to one work of fiction, whereas the same mathematical entities turn up in diverse mathematical theories (Burgess 2004). After all, entities with the same name (such as \(\pi\)) turn up in different theories. Perhaps the fictionalist can maintain that when mathematicians develop a new theory in which an “old” mathematical entity occurs, the entity in question is made more precise. More determinate properties are ascribed to it than before, and this is all right as long as overall consistency is maintained.
The canonical objection to formalism seems also applicable to fictionalism. The fictionalist should find some explanation of the fact that extending a mathematical theory in one way is often considered preferable to continuing it in another way that is incompatible with the first. There is often at least an appearance that there is a right way to extend a mathematical theory.
In recent years, subdisciplines of the philosophy of mathematics have started to arise. They evolve in a way that is not completely determined by the “big debates” about the nature of mathematics. In this section, we look at a few of these disciplines.
Many regard set theory as in some sense the foundation of mathematics. It seems that just about any piece of mathematics can be carried out in set theory, even though it is sometimes an awkward setting for doing so. In recent years, the philosophy of set theory has been emerging as a philosophical discipline of its own. This is not to say that in specific debates in the philosophy of set theory it cannot make an enormous difference whether one approaches it from a formalistic point of view or from a platonistic point of view, for instance.
The thesis that set theory is most suitable for serving as the foundations of mathematics is by no means uncontroversial. Over the past decades, category theory has presented itself as a rival for this role. Category theory is a mathematical theory that was developed in the middle of the twentieth century. Unlike in set theory, in category theory mathematical objects are only defined up to isomorphism. This means that Benacerraf’s identification problem cannot be raised for category-theoretical concepts and ‘objects’. At the same time, (roughly) everything that can be done in set theory can be done in category theory (but not always in a natural manner), and vice versa (again not always in a natural manner). This means that from a structuralist perspective, category theory is an attractive candidate for providing the foundations of mathematics (McLarty 2004).
One question that has been important from the beginning of set theory concerns the difference between sets and proper classes. (This question has a natural counterpart for category theory: the difference between small and large categories.) Cantor’s diagonal argument forces us to recognize that the set-theoretical universe as a whole cannot be regarded as a set. Cantor’s Theorem shows that the power set (i.e., the set of all subsets) of any given set has a larger cardinality than the given set itself. Now suppose that the set-theoretical universe forms a set: the set of all sets. Then the power set of the set of all sets would have to be a subset of the set of all sets. This would contradict the fact that the power set of the set of all sets would have a larger cardinality than the set of all sets. So we must conclude that the set-theoretical universe cannot form a set.
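The diagonal argument behind Cantor’s Theorem can be displayed compactly (a standard reconstruction, not tied to any particular axiomatization). Suppose, for reductio, that some function \(f : S \to \mathcal{P}(S)\) were onto, and consider the diagonal set:

```latex
D \;=\; \{\, x \in S \mid x \notin f(x) \,\}.
```

Since \(D \in \mathcal{P}(S)\), surjectivity would give some \(d \in S\) with \(f(d) = D\); but then \(d \in D \leftrightarrow d \notin f(d) = D\), a contradiction. So no set can be mapped onto its own power set, and in particular a putative set of all sets cannot contain its own power set as a subset.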
Cantor called pluralities that are too large to be considered as a set inconsistent multiplicities (Cantor 1932). Today, Cantor’s inconsistent multiplicities are called proper classes. Some philosophers of mathematics hold that proper classes still constitute unities, and hence can be seen as a sort of collection. They are, in a Cantorian spirit, just collections that are too large to be sets. Nevertheless, there are problems with this view. Just as there can be no set of all sets, there can, for diagonalization reasons, also be no proper class of all proper classes. So the proper class view seems compelled to recognize in addition a realm of super-proper classes, and so on. For this reason, Zermelo claimed that proper classes simply do not exist. This position is less strange than it looks at first sight. On close inspection, one sees that in ZFC one never needs to quantify over entities that are too large to be sets (although there exist systems of set theory that do quantify over proper classes). On this view, the set-theoretical universe is potentially infinite in an absolute sense of the word. It never exists as a completed whole, but is forever growing, and hence forever unfinished (Zermelo 1930). This way of speaking indicates that in our attempts to understand this notion of potential infinity, we are drawn to temporal metaphors. It is not surprising that these temporal metaphors cause some philosophers of mathematics acute discomfort. For this reason, contemporary philosophers of mathematics who are sympathetic to Zermelo’s potentialist interpretation of the set-theoretic universe tend to regard the modality involved in this interpretation as a non-temporal one; the nature of this modality is hotly debated (Linnebo 2013, Studd 2019).
A second subject in the philosophy of set theory concerns the justification of the accepted basic principles of mathematics, i.e., the axioms of ZFC. An important historical case study is the process by which the Axiom of Choice came to be accepted by the mathematical community in the early decades of the twentieth century (Moore 1982). The importance of this case study is largely due to the fact that an open and explicit discussion of its acceptability was held in the mathematical community. In this discussion, general reasons for accepting or refusing to accept a principle as a basic axiom came to the surface. On the systematic side, two conceptions of the notion of set have been elaborated which aim to justify all axioms of ZFC in one fell swoop. On the one hand, there is the iterative conception of sets, which describes how the set-theoretical universe can be thought of as generated from the empty set by means of the power set operation (Boolos 1971, Linnebo 2013). On the other hand, there is the limitation of size conception of sets, which states that every collection which is not too big to be a set is a set (Hallett 1984). The iterative conception motivates some axioms of ZFC very well (the power set axiom, for instance), but fares less well with respect to other axioms, such as the replacement axiom (Potter 2004, Part IV). The limitation of size conception motivates other axioms better (such as the restricted comprehension axiom). It seems fair to say that there is no uniform conception that clearly justifies all axioms of ZFC.
The motivation of putative axioms that go beyond ZFC constitutes a third concern of the philosophy of set theory (Maddy 1988; Martin 1998). One such class of principles is constituted by the large cardinal axioms. Nowadays, large cardinal hypotheses are taken to express some kind of embedding property between the set-theoretic universe and inner models of set theory (Kanamori 2009). Most of the time, large cardinal principles entail the existence of sets that are larger than any sets which can be guaranteed by ZFC to exist.
The weaker of the large cardinal principles are supported by intrinsic evidence (see section 3.1). They follow from what are called reflection principles. These are principles that state that the set-theoretic universe as a whole is so rich that it is very similar to some set-sized initial segment of it. The stronger of the large cardinal principles hitherto only enjoy extrinsic support. Many researchers are skeptical about the possibility that reflection principles, for instance, can be found that support them (Koellner 2009); others, however, disagree (Welch & Horsten 2016).
Gödel hoped that on the basis of such large cardinal axioms, the most important open question of set theory could eventually be settled. This is the continuum problem. The continuum hypothesis was proposed by Cantor in the late nineteenth century. It states that there are no sets \(S\) which are too large for there to be a one-to-one correspondence between \(S\) and the natural numbers, but too small for there to exist a one-to-one correspondence between \(S\) and the real numbers. Despite strenuous efforts, all attempts to settle the continuum problem failed. Gödel came to suspect that the continuum hypothesis is independent of the accepted principles of set theory (ZFC). Around 1940, he managed to show that the continuum hypothesis is consistent with ZFC. A few decades later, Paul Cohen proved that the negation of the continuum hypothesis is also consistent with ZFC. Thus Gödel’s conjecture of the independence of the continuum hypothesis was eventually confirmed.
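In cardinal notation, the continuum hypothesis can be displayed as the claim that no cardinality lies strictly between that of the natural numbers, \(\aleph_0\), and that of the real numbers, \(2^{\aleph_0}\):

```latex
\neg \exists S \,\bigl( \aleph_0 < |S| < 2^{\aleph_0} \bigr),
\qquad \text{equivalently (given the axiom of choice):} \qquad
2^{\aleph_0} = \aleph_1 .
```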
But Gödel’s hope that large cardinal axioms could solve the continuum problem turned out to be unfounded. The continuum hypothesis is independent of ZFC even in the context of large cardinal axioms. Nevertheless, large cardinal principles have managed to settle restricted versions of the continuum hypothesis (in the affirmative). The existence of so-called Woodin cardinals ensures that sets definable in analysis are either countable or the size of the continuum. Thus the definable continuum problem is settled.
In recent years, attempts have been focused on finding principles of a different kind which might be justifiable and which might yet decide the continuum hypothesis (Woodin 2001a, Woodin 2001b). One of the more general philosophical questions that have emerged from this research is the following: which conditions have to be satisfied in order for a principle to be a putative basic axiom of mathematics?
Some of the researchers who seek to decide the continuum hypothesis think that it is true; others think that it is false. But there are also many set theorists and philosophers of mathematics who believe that the continuum hypothesis is not just undecidable in ZFC but absolutely undecidable, i.e., that it is neither provable (in the informal sense of the word) nor disprovable (in the informal sense of the word) because it is neither true nor false. If the mathematical universe is a set-theoretic multiverse, for instance, then there are equally good models that make the continuum hypothesis true and equally good models that make it false, and there is no more to be said (Hamkins 2015).
In the second half of the nineteenth century Dedekind proved that the basic axioms of arithmetic have, up to isomorphism, exactly one model, and that the same holds for the basic axioms of real analysis. If a theory has, up to isomorphism, exactly one model, then it is said to be categorical. So modulo isomorphisms, arithmetic and analysis each have exactly one intended model. Half a century later Zermelo proved that the principles of set theory are “almost” categorical or quasi-categorical: for any two models \(M_1\) and \(M_2\) of the principles of set theory, either \(M_1\) is isomorphic to \(M_2\), or \(M_1\) is isomorphic to a strongly inaccessible rank of \(M_2\), or \(M_2\) is isomorphic to a strongly inaccessible rank of \(M_1\) (Zermelo 1930). In recent years, attempts have been made to develop arguments to the effect that Zermelo’s conclusion can be strengthened to a full categoricity assertion (McGee 1997; Martin 2001), but we will not discuss these arguments here.
At the same time, the Löwenheim-Skolem theorem says that every first-order formal theory that has at least one model with an infinite domain must have models with domains of all infinite cardinalities. Since the principles of arithmetic, analysis and set theory had better possess at least one infinite model, the Löwenheim-Skolem theorem appears to apply to them. Is this not in tension with Dedekind’s categoricity theorems?
The solution of this conundrum lies in the fact that Dedekind did not even implicitly work with first-order formalizations of the basic principles of arithmetic and analysis. Instead, he informally worked with second-order formalizations.
Let us focus on arithmetic to see what this amounts to. The basic postulates of arithmetic contain the induction axiom. In first-order formalizations of arithmetic, this is formulated as a scheme: for each first-order formula of the language of arithmetic with one free variable, one instance of the induction principle is included in the formalization of arithmetic. Elementary cardinality considerations reveal that there are uncountably many properties of natural numbers that are not expressed by a first-order formula. But intuitively, it seems that the induction principle holds for all properties of natural numbers. So in a first-order language, the full force of the principle of mathematical induction cannot be expressed. For this reason, a number of philosophers of mathematics insist that the postulates of arithmetic should be formulated in a second-order language (Shapiro 1991). Second-order languages contain not just first-order quantifiers that range over elements of the domain, but also second-order quantifiers that range over properties (or subsets) of the domain. In full second-order logic, it is insisted that these second-order quantifiers range over all subsets of the domain. If the principles of arithmetic are formulated in a second-order language, then Dedekind’s argument goes through and we have a categorical theory. For similar reasons, we also obtain a categorical theory if we formulate the basic principles of real analysis in a second-order language, and the second-order formulation of set theory turns out to be quasi-categorical.
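The contrast between the two formulations of induction can be made explicit. In first-order Peano Arithmetic, induction is a scheme with one axiom for each formula \(\varphi(x)\) of the language; the second-order version is a single axiom quantifying over all subsets of the domain:

```latex
\text{(first-order scheme, one instance per formula } \varphi \text{):}\quad
\bigl(\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(x+1))\bigr)
\rightarrow \forall x\,\varphi(x)

\text{(second-order axiom):}\quad
\forall P\,\Bigl(\bigl(P(0) \wedge \forall x\,(P(x) \rightarrow P(x+1))\bigr)
\rightarrow \forall x\,P(x)\Bigr)
```

Since a first-order language has only countably many formulas while an infinite domain has uncountably many subsets, the scheme captures only countably many instances of the second-order axiom.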
Ante rem structuralism, as well as the modal nominalist structuralist interpretation of mathematics, could benefit from a second-order formulation. If the ante rem structuralist wants to insist that the natural number structure is fixed up to isomorphism by the Peano axioms, then she will want to formulate the Peano axioms in second-order logic. And the modal nominalist structuralist will want to insist that the relevant concrete systems for arithmetic are those that make the second-order Peano axioms true (Hellman 1989). Similarly for real analysis and set theory. Thus the appeal to second-order logic appears as the final step in the structuralist project of isolating the intended models of mathematics.
Yet appeal to second-order logic in the philosophy of mathematics is by no means uncontroversial. A first objection is that the ontological commitment of second-order logic is higher than the ontological commitment of first-order logic. After all, use of second-order logic seems to commit us to the existence of abstract objects: classes. In response to this problem, Boolos has articulated an interpretation of second-order logic which avoids this commitment to abstract entities (Boolos 1985). His interpretation spells out the truth clauses for the second-order quantifiers in terms of plural expressions, without invoking classes. For instance, a second-order expression of the form \(\exists X\, F(X)\) is interpreted as: “there are some (first-order) objects \(x\) such that they have the property \(F\)”. This interpretation is called the plural interpretation of second-order logic. It is controversial whether there is a real difference between the mathematical use of pluralities and of sets (Linnebo 2003). Nevertheless it is clear that an appeal to the plural interpretation of second-order logic will be tempting for nominalist versions of structuralism.
A second objection against second-order logic can be traced back to Quine (Quine 1970). This objection states that the interpretation of full second-order logic is connected with set-theoretical questions. This is already indicated by the fact that most regimentations of second-order logic adopt a version of the axiom of choice as an axiom. But more worrisome is the fact that second-order logic is inextricably intertwined with deep problems in set theory, such as the continuum hypothesis. For theories such as arithmetic that intend to describe an infinite collection of objects, even a matter as elementary as the question of the cardinality of the range of the second-order quantifiers is equivalent to the continuum problem. Also, it turns out that there exists a sentence which is a second-order logical truth if and only if the continuum hypothesis holds (Boolos 1975). We have seen that the continuum problem is independent of the currently accepted principles of set theory, and many contemporary researchers believe it not to have a determinate truth value at all. If this is so, then, it is argued, there is an inherent indeterminacy in the very notion of an (infinite) model of full second-order logic.
If one does not want to appeal to full second-order logic, then there are other ways to ensure categoricity of mathematical theories. One idea would be to make use of quantifiers which are somehow intermediate between first-order and second-order quantifiers. For instance, one might treat “there are finitely many \(x\)” as a primitive quantifier. This will allow one, for instance, to construct a categorical axiomatization of arithmetic.
But ensuring categoricity of mathematical theories does not require introducing stronger quantifiers. Another option would be to take the informal concept of algorithmic computability as a primitive notion (Halbach & Horsten 2005; Horsten 2012). A theorem of Tennenbaum states that all first-order models of Peano Arithmetic in which addition and multiplication are computable functions are isomorphic to each other. Now our operations of addition and multiplication are computable: otherwise we could never have learned these operations. This, then, is another way in which we may be able to isolate the intended models of our principles of arithmetic. Against this account, however, it may be pointed out that it seems that the categoricity of intended models for real analysis, for instance, cannot be ensured in this manner. For computation on models of the principles of real analysis, we do not have a theorem that plays the role of Tennenbaum’s theorem.
If one accepts a certain open-endedness of the collection of arithmetical predicates, then a categoricity theorem of sorts for arithmetic can be obtained without overstepping the bounds of first-order logic and without appealing to an informal concept of computability. Suppose that there are two mathematicians, A and B, who both assert the first-order Peano axioms in their own idiolect. Suppose furthermore that A and B regard the collection of predicates for which mathematical induction is permissible as open-ended, and are both willing to accept the other’s induction scheme as true. Then A and B have the wherewithal to convince themselves that both idiolects describe isomorphic structures (Parsons 1990b). Such arguments are called internal categoricity arguments. They are widely debated in contemporary philosophy of mathematics: see for instance (Button & Walsh 2019).
Many of those who are sceptical of the philosophical use of categoricity arguments in the philosophy of mathematics take all of our consistent mathematical theories to have many structurally different models, and take all or many of those models to be on a par with one another. As we saw in the previous subsection, the set-theoretic multiverse view is a case in point, and so is set-theoretic potentialism. But one can go further, and defend the thesis that any consistent mathematical theory describes a free-standing mathematical universe, and that no such theory is more true than any other (Linsky & Zalta 1995, Bueno 2011).
These theories belong to a family of views that is called mathematical pluralism, which is an increasingly prominent theme in the philosophy of mathematics. Historically, this constellation of views has roots in the work of Hilbert and of Carnap. In a debate with Frege, Hilbert insisted that consistency suffices for a mathematical theory to have a subject matter (Resnik 1974); Carnap argued that the choice between alternative large-scale theories (frameworks) is ultimately never more than a pragmatic matter (Carnap 1950).
As is everywhere the case in philosophy, there is disagreement here: for a critique of the doctrine that mathematical truth is an irrevocably use-relative notion, see (Koellner 2009b); for a retort, see (Warren 2015). Some react to mathematical pluralism by taking it one step further still, and argue that all inconsistent mathematical theories should also be regarded as true (in a relativised sense). Moreover, some mathematical theories that are trivial in the sense of being inconsistent are commonly taken to be just as valuable as many venerable consistent ones: “Historically, there are three [to the author’s knowledge] mathematical theories which had a profound impact on mathematics and logic, and were found to be trivial. These are Cantor’s naive set theory, Frege’s formal theory of logic and the first version of Church’s formal theory of mathematical logic. All three had profound repercussions on subsequent mathematics” (Friend 2013, p. 294).
Until fairly recently, the subject of computation did not receive much attention in the philosophy of mathematics. This may be due in part to the fact that in Hilbert-style axiomatizations of number theory, computation is reduced to proof in Peano Arithmetic. But this situation has changed in recent years. It seems that along with the increased importance of computation in mathematical practice, philosophical reflections on the notion of computation will occupy a more prominent place in the philosophy of mathematics in the years to come.
Church’s Thesis occupies a central place in computability theory. It says that every algorithmically computable function on the natural numbers can be computed by a Turing machine.
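Writing \(f_M\) for the function computed by Turing machine \(M\), the thesis can be displayed as follows; note that the notion on the left-hand side is informal, which is precisely what makes the statement resistant to formal proof:

```latex
\forall f : \mathbb{N} \to \mathbb{N}\;
\bigl(\, f \text{ is algorithmically computable}
\;\Longrightarrow\; \exists M \;\, f = f_M \,\bigr).
```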
As a principle, Church’s Thesis has a somewhat curious status. It appears to be a basic principle. On the one hand, the principle is almost universally held to be true. On the other hand, it is hard to see how it can be mathematically proved. The reason is that its antecedent contains an informal notion (algorithmic computability) whereas its consequent contains a purely mathematical notion (Turing machine computability). Mathematical proofs can only connect purely mathematical notions—or so it seems. The received view was that our evidence for Church’s Thesis is quasi-empirical. Attempts to find convincing counterexamples to Church’s Thesis have come to naught. Independently, various proposals have been made to mathematically capture the algorithmically computable functions on the natural numbers. Instead of Turing machine computability, the notions of general recursiveness, Herbrand-Gödel computability, lambda-definability, and so on have been proposed. But these mathematical notions all turn out to be equivalent. Thus, to use Gödelian terminology, we have accumulated extrinsic evidence for the truth of Church’s Thesis.
Kreisel pointed out long ago that even if a thesis cannot be formally proved, it may still be possible to obtain intrinsic evidence for it from a rigorous but informal analysis of intuitive notions (Kreisel 1967). Kreisel calls such analyses exercises in informal rigour. Detailed scholarship by Sieg revealed that the seminal article (Turing 1936) constitutes an exquisite example of just this sort of analysis of the intuitive concept of algorithmic computability (Sieg 1994).
Currently, the most active subjects of investigation in the domain of foundations and philosophy of computation appear to be the following. First, energy has been invested in developing theories of algorithmic computation on structures other than the natural numbers. In particular, efforts have been made to obtain analogues of Church’s Thesis for algorithmic computation on various structures. In this context, substantial progress has been made in recent decades in developing a theory of effective computation on the real numbers (Pour-El 1999). Second, attempts have been made to explicate notions of computability other than algorithmic computability by humans. One area of particular interest here is the area of quantum computation (Deutsch et al. 2000).
We know much about the concepts of formal proof and formal provability, their connection with algorithmic computability, and the principles by which these concepts are governed. We know, for instance, that the proofs of a formal system are computably enumerable, and that provability in a sound (strong enough) formal system is subject to Gödel’s incompleteness theorems. But a mathematical proof as you find it in a mathematical journal is not a formal proof in the sense of the logicians: it is a (rigorous) informal proof (Myhill 1960, Detlefsen 1992, Antonutti 2010).
First, whereas the collection of sentences provable in a formal system is always computably enumerable, we know much less about the extension of the concept of informal provability. Lucas (Lucas 1961), and later Penrose (Penrose 1989, 1994), have argued that informal mathematical provability outstrips provability in any given formal system. But their arguments are widely regarded as unpersuasive. Benacerraf has argued against Lucas and Penrose that it cannot be excluded that there is a formal system \(T\) such that in fact mathematical provability extensionally coincides with provability in \(T\), even though we cannot know that it does (Benacerraf 1967). Others have argued that the concept of informal mathematical provability is not even clear enough for the question whether its extension is computably enumerable to have a definite answer (Horsten & Welch 2016).
Second, there is no agreement about what the standard is for an argument to qualify as a mathematical proof. According to what may be called the received view, a mathematical argument for a statement \(p\) constitutes an informal mathematical proof if the argument allows a competent mathematician to transform it into a formal deduction of \(p\) from generally accepted mathematical axioms (Avigad 2021). An informal mathematical proof can then be taken to be a derivation-indicator for \(p\) (Azzouni 2004). But the received view of the standard of mathematical proof has come under attack in recent years. It has been argued, for instance, that the interpolation of reasons in an informal mathematical proof until a logically correct and non-elliptical first-order derivation is reached can be an infinite process (Rav 1999, pp. 14–15). Others are mounting a defence of the received view, so that there is a lively debate about these issues at the moment (Tatton-Brown forthcoming, Di Toffoli 2021).
The past decades have witnessed the first occurrences of mathematical proofs in which computers appear to play an essential role. The four-colour theorem is one example. It says that for every map, only four colours are needed to colour the countries in such a way that no two countries that have a common border receive the same colour. This theorem was proved in 1976 (Appel et al. 1977). But the proof distinguishes many cases which were verified by a computer. These computer verifications are too long to be double-checked by humans. The proof of the four-colour theorem gave rise to a debate about the question to what extent computer-assisted proofs count as proofs in the true sense of the word.
The received view has it that mathematical proofs yield a priori knowledge. Yet when we rely on a computer to generate part of a proof, we appear to rely on the proper functioning of computer hardware and on the correctness of a computer program. These appear to be empirical factors. Thus one is tempted to conclude that computer proofs yield quasi-empirical knowledge (Tymoczko 1979). In other words, through the advent of computer proofs the notion of proof has lost its purely a priori character. Burge, in contrast, held the view that because the empirical factors on which we rely when we accept computer proofs do not appear as premises in the argument, computer proofs can yield a priori knowledge after all (Burge 1998). (Burge later retracted this claim: see (Burge 2013, p. 31).)
In the twentieth century, research in the philosophy of mathematics revolved mostly around the nature of mathematical objects, the fundamental laws that govern them, and how we acquire mathematical knowledge about them. These are foundational concerns that are intimately connected with traditional metaphysical and epistemological questions.
In the second half of the twentieth century, research in the philosophy of science to a significant extent moved away from foundational concerns. Instead, philosophical questions relating to the growth of scientific knowledge and of scientific understanding became more central. As early as the 1970s, there were voices that argued that a similar shift of attention should take place in the philosophy of mathematics. Lakatos initiated the philosophical investigation of the evolution of mathematical concepts (Lakatos 1976). He argued that the content of a mathematical concept evolves in roughly the following way. A mathematician formulates a deep conjecture, but is unable to prove it. Then counterexamples to the conjecture are found. In response, the definition of one or more central concepts in the conjecture is changed in such a way that the counterexamples are at least eliminated. Still the thus revised conjecture cannot be proved, and gradually new counterexamples appear. The procedure of revising the definition of one or more central concepts is applied again and again, until a proof of the conjecture is found. Lakatos calls this procedure concept stretching. In recent decades, Lakatos’ model of concept change in mathematics has been revised and refined (Mormann 2002).
For some decades, the view that the philosophy of mathematics should take a historical and sociological turn remained restricted to a somewhat marginal school of thought in the philosophy of mathematics. However, in recent years the opposition between this new movement of mathematical practice on the one hand, and ‘mainstream’ philosophy of mathematics on the other hand, is softening. Philosophical questions relating to mathematical practice, the evolution of mathematical theories, and mathematical explanation and understanding have become more prominent, and have been related to more traditional themes from the philosophy of mathematics (Mancosu 2008). This trend will doubtlessly continue in the years to come.
For an example, let us briefly return to the subject of computer proofs (see section 5.3). The source of the discomfort that mathematicians experience when confronted with computer proofs appears to be the following. A “good” mathematical proof should do more than convince us that a certain statement is true. It should also explain why the statement in question holds. And this is done by referring to deep relations between deep mathematical concepts that often link different mathematical domains (Manders 1989). Until now, computer proofs typically employ only fairly low-level mathematical concepts. They are notoriously weak at developing deep concepts on their own, and have difficulties with linking concepts from different mathematical fields. All this leads us to a philosophical question which is just now beginning to receive the attention that it deserves: what is mathematical understanding?
The Stanford Encyclopedia of Philosophy iscopyright © 2023 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University