Rebuilding the foundations
Arithmetization of analysis
Before the 19th century, analysis rested on makeshift foundations of arithmetic and geometry, supporting the discrete and continuous sides of the subject, respectively. Mathematicians since the time of Eudoxus had doubted that “all is number,” and when in doubt they used geometry. This pragmatic compromise began to fall apart in 1799, when Gauss found himself obliged to use continuity in a result that seemed to be discrete—the fundamental theorem of algebra.
The theorem says that any polynomial equation has a solution in the complex numbers. Gauss’s first proof fell short (although this was not immediately recognized) because it assumed as obvious a geometric result actually harder than the theorem itself. In 1816 Gauss attempted another proof, this time relying on a weaker assumption known as the intermediate value theorem: if f(x) is a continuous function of a real variable x and if f(a) < 0 and f(b) > 0, then there is a c between a and b such that f(c) = 0.
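The intermediate value theorem is not only a foundational statement; it also licenses a concrete root-finding procedure. A minimal Python sketch of the bisection idea (the example function and tolerance are our illustrative choices, not from the text):

```python
def bisect(f, a, b, tol=1e-12):
    """Find c in [a, b] with f(c) ≈ 0, given f(a) < 0 < f(b).

    The intermediate value theorem guarantees such a c exists
    for any continuous f that changes sign on [a, b].
    """
    assert f(a) < 0 < f(b), "f must change sign on [a, b]"
    while b - a > tol:
        mid = (a + b) / 2
        if f(mid) < 0:
            a = mid  # the sign change, hence a root, lies in the upper half
        else:
            b = mid  # the sign change lies in the lower half
    return (a + b) / 2

# Example: x² − 2 is continuous and changes sign on [1, 2],
# so the theorem guarantees a root there (namely √2).
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Each halving step preserves the sign change, so the theorem applies to every subinterval; the nested intervals shrink toward the promised point c.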
The importance of proving the intermediate value theorem was recognized in 1817 by the Bohemian mathematician Bernhard Bolzano, who saw an opportunity to remove geometric assumptions from algebra. His attempted proof introduced essentially the modern condition for continuity of a function f at a point x: f(x + h) − f(x) can be made smaller than any given quantity, provided h can be made arbitrarily close to zero. Bolzano also relied on an assumption—the existence of a greatest lower bound: if a certain property M holds only for values greater than some quantity l, then there is a greatest quantity u such that M holds only for values greater than or equal to u. Bolzano could go no further than this, because in his time the notion of quantity was still too vague. Was it a number? Was it a line segment? And in any case how does one decide whether points on a line have a greatest lower bound?
The same problem was encountered by the German mathematician Richard Dedekind when teaching calculus, and he later described his frustration with appeals to geometric intuition:
For myself this feeling of dissatisfaction was so overpowering that I made a fixed resolve to keep meditating on the question till I should find a purely arithmetic and perfectly rigorous foundation for the principles of infinitesimal analysis.… I succeeded on November 24, 1858.
Dedekind eliminated geometry by going back to an idea of Eudoxus but taking it a step further. Eudoxus said, in effect, that a point on the line is uniquely determined by its position among the rationals. That is, two points are equal if the rationals less than them (and the rationals greater than them) are the same. Thus, each point creates a unique “cut” (L, U) in the rationals, a partition of the set of rationals into sets L and U with each member of L less than every member of U.
Dedekind’s small but crucial step was to dispense with the geometric points supposed to create the cuts. He defined the real numbers to be the cuts (L, U) just described—that is, as partitions of the rationals with each member of L less than every member of U. Cuts included representatives of all rational and irrational quantities previously considered, but now the existence of greatest lower bounds became provable and hence also the intermediate value theorem and all its consequences. In fact, all the basic theorems about limits and continuous functions followed from Dedekind’s definition—an outcome called the arithmetization of analysis. (See Sidebar: Infinitesimals.)
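The arithmetic character of a cut can be made vivid with a small sketch: membership in the lower set L of a cut is decided by a purely rational test, with no geometric point assumed. A minimal Python illustration for the cut defining √2 (the choice of √2 as the example is ours):

```python
from fractions import Fraction

def in_lower_set(q: Fraction) -> bool:
    """Membership test for L in the cut (L, U) defining √2.

    L = { rational q : q < 0 or q² < 2 } — a purely arithmetic
    condition on rationals, with no appeal to geometry.
    """
    return q < 0 or q * q < 2

# Every member of L is less than every member of U = complement of L,
# and L has no greatest element: the cut itself pins down √2 exactly.
assert in_lower_set(Fraction(7, 5))       # (7/5)² = 49/25 < 2
assert not in_lower_set(Fraction(3, 2))   # (3/2)² = 9/4 > 2
```

The point is that the test uses only comparisons of rationals, so the real number √2 is characterized entirely within arithmetic.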
The full program of arithmetization, based on a different but equivalent definition of the real numbers, is mainly due to Weierstrass in the 1870s. He relied on rigorous definitions of the real numbers and limits to justify the computations previously made with infinitesimals. Bolzano’s 1817 definition of continuity of a function f at a point x, mentioned above, came close to saying what it meant for the limit of f(x + h) to be f(x). The final touch of precision was added with Cauchy’s “epsilon-delta” definition of 1821: for each ε > 0 there is a δ > 0 such that |f(x + h) − f(x)| < ε for all |h| < δ.
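The epsilon-delta condition can be exhibited concretely for a particular function. A small Python sketch (the function f(x) = x², the point x = 1, and the witness δ = min(1, ε/3) are our illustrative assumptions, not from the text):

```python
def delta_for(eps: float) -> float:
    """A δ witnessing continuity of f(x) = x² at x = 1.

    |f(1 + h) − f(1)| = |2h + h²| ≤ 3|h| whenever |h| < 1,
    so δ = min(1, ε/3) suffices for any ε > 0.
    """
    return min(1.0, eps / 3.0)

def f(x):
    return x * x

# Sample many h with |h| < δ and confirm |f(1 + h) − f(1)| < ε.
eps = 1e-3
delta = delta_for(eps)
samples = [delta * (k - 500) / 1001 for k in range(1001)]
ok = all(abs(f(1 + h) - f(1)) < eps for h in samples)
```

Sampling of course proves nothing by itself; the short inequality in the docstring is the actual proof, and the loop merely illustrates it.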
Analysis in higher dimensions
While geometry was being purged from the foundations of analysis, its spirit was taking over the superstructure. The study of complex functions, or functions with two or more variables, became allied with the rich geometry of higher-dimensional spaces. Sometimes the geometry guided the development of concepts in analysis, and sometimes it was the reverse. A beautiful example of this interaction was the concept of a Riemann surface. The complex numbers can be viewed as a plane (see Fluid flow), so a function of a complex variable can be viewed as a function on the plane. Riemann’s insight was that other surfaces can also be provided with complex coordinates, and certain classes of functions belong to certain surfaces. For example, by mapping the plane stereographically onto the sphere, each point of the sphere except the north pole is given a complex coordinate, and it is natural to map the north pole to infinity, ∞. When this is done, all rational functions make sense on the sphere; for example, 1/z is defined for all points of the sphere by making the natural assumptions that 1/0 = ∞ and 1/∞ = 0. This leads to a remarkable geometric characterization of the class of rational complex functions: they are the differentiable functions on the sphere. One similarly finds that the elliptic functions (complex functions that are periodic in two directions) are the differentiable functions on the torus.
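The conventions 1/0 = ∞ and 1/∞ = 0 can be modeled directly. A minimal Python sketch of the extended reciprocal on the sphere (the INF sentinel standing for the north pole is our illustrative device):

```python
INF = "inf"  # sentinel for the point ∞ adjoined to the complex plane

def recip(z):
    """Reciprocal on the Riemann sphere.

    With the natural conventions 1/0 = ∞ and 1/∞ = 0,
    the function 1/z is defined at every point of the sphere.
    """
    if z == INF:
        return 0j
    if z == 0:
        return INF
    return 1 / z

# 1/z swaps 0 and ∞ and inverts every other point.
assert recip(0j) == INF
assert recip(INF) == 0j
assert recip(2 + 0j) == 0.5 + 0j
```

With ∞ adjoined, 1/z has no exceptional points at all, which is the first step toward viewing rational functions as functions on the whole sphere.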
Functions of three, four, … variables are naturally studied with reference to spaces of three, four, … dimensions, but these are not necessarily the ordinary Euclidean spaces. The idea of differentiable functions on the sphere or torus was generalized to differentiable functions on manifolds (topological spaces of arbitrary dimension). Riemann surfaces, for example, are two-dimensional manifolds.
Manifolds can be complicated, but it turned out that their geometry, and the nature of the functions on them, is largely controlled by their topology, the rather coarse properties invariant under one-to-one continuous mappings. In particular, Riemann observed that the topology of a Riemann surface is determined by its genus, the number of closed curves that can be drawn on the surface without splitting it into separate pieces. For example, the genus of a sphere is zero and the genus of a torus is one. Thus, a single integer controls whether the functions on the surface are rational, elliptic, or something else.
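The genus can in fact be computed combinatorially: for a closed orientable surface cut into V vertices, E edges, and F faces, Euler’s formula gives χ = V − E + F = 2 − 2g. A small Python check (the particular cell decompositions are our examples):

```python
def genus(v: int, e: int, f: int) -> int:
    """Genus of a closed orientable surface from a cell
    decomposition, via Euler's formula V − E + F = 2 − 2g."""
    chi = v - e + f
    assert (2 - chi) % 2 == 0, "χ must be even for an orientable surface"
    return (2 - chi) // 2

# Tetrahedron: a decomposition of the sphere, so genus 0.
assert genus(4, 6, 4) == 0
# Császár torus: the minimal triangulation of the torus, genus 1.
assert genus(7, 21, 14) == 1
```

This is the sense in which a single integer, extractable from any finite decomposition of the surface, controls its topology.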
The topology of higher-dimensional manifolds is subtle, and it became a major field of 20th-century mathematics. The first inroads were made in 1895 by the French mathematician Henri Poincaré, who was drawn into topology from complex function theory and differential equations. The concepts of topology, by virtue of their coarse and qualitative nature, are capable of detecting order where the concepts of geometry and analysis can see only chaos. Poincaré found this to be the case in studying the three-body problem, and it continues with the intense study of chaotic dynamical systems.
The moral of these developments is perhaps the following: It may be possible and desirable to eliminate geometry from the foundations of analysis, but geometry still remains present as a higher-level concept. Continuity can be arithmetized, but the theory of continuity involves topology, which is part of geometry. Thus, the ancient complementarity between arithmetic and geometry remains the essence of analysis.
John Colin Stillwell