(2003-07-26) Zero is a number like any other, only more so...
Zero is probably the most misunderstood number. Even the imaginary number i is probably better understood (because it's usually introduced only to comparatively sophisticated audiences). It took humanity thousands of years to realize what a great mathematical simplification it was to have an ordinary number used to indicate "nothing", the absence of anything to count... The momentous introduction of zero metamorphosed the ancient Indian system of numeration into the familiar decimal system we use today.
The counting numbers start with 1, but the natural integers start with 0... Most mathematicians prefer to start the indexing of the terms in a sequence with zero, if at all possible. Physicists do that too, in order to mark the origin of a continuous quantity: If you want to measure 10 periods of a pendulum, say "0" when you see it cross a given point from left to right (say) and start your stopwatch. Keep counting each time the same event happens again and stop your timepiece when you reach "10", for this will mark the passing of 10 periods. If you don't want to use zero in that context, just say something like "Umpf" when you first press your stopwatch; many do...
A universal tradition, which probably predates the introduction of zero by a few millennia, is to use counting numbers (1, 2, 3, 4...) to name successive intervals of time; a newborn baby is "in its first year", whereas a 24-year old is in his 25th. When applied to calendars, this unambiguous tradition seems to disturb more people than it should. Since the years of the first century are numbered 1 to 100, the second century goes from 101 to 200, and the twentieth century consists of the years 1901 to 2000. The third millennium starts with January 1, 2001. Quantum mechanics was born in the nineteenth century (with Planck's explanation of the blackbody law, on 1900-12-14).
For some obscure reason, many people seem to have a mental block about some ordinary mathematics applied to zero. A number of journalists, who should have known better, once questioned the simple fact that zero is even. Of course it is: Zero certainly qualifies as a multiple of two (it's zero times two). Also, in the integer sequence, any even number is surrounded by two odd ones, just as zero is surrounded by the odd integers -1 and +1... Nevertheless, we keep hearing things like: "Zero should be an exception, an integer that's neither even nor odd." Well, why on Earth would anyone want to introduce such unnatural exceptions where none is needed?
What about 0^0? Well, anything raised to the power of zero is equal to unity, and a closer examination reveals that there's no need to make an exception for zero in this case either: Zero to the power of zero is equal to one! Any other "convention" would invalidate a substantial portion of the mathematical literature (especially concerning common notations for polynomials and/or power series).
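As a quick illustration (the toy polynomial below is my own example, not from the original text), programming languages generally agree with this convention, and it is exactly what makes polynomial evaluation work uniformly at the origin:

```python
# Python follows the convention 0**0 = 1, which is what lets
# p(x) = sum of c[k] * x**k  return the constant term at x = 0.
c = [5, 3, 2]                                     # p(x) = 5 + 3x + 2x^2
p = lambda x: sum(ck * x ** k for k, ck in enumerate(c))

print(0 ** 0)   # 1
print(p(0))     # 5, the constant term, as the convention guarantees
```

Without 0^0 = 1, the formula for p(0) would need a special case for the k = 0 term.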
A related discussion involves the factorial of zero (0!), which is also equal to 1. However, most people seem less reluctant to accept this one, because the generalization of the factorial function (involving the Gamma function) happens to be continuous about the origin...
(2003-07-26) The unit number to which all nonzero numbers refer.
(2003-07-26) π = 3.141592653589793238462643383279502884+ Pi is the ratio of the perimeter of a circle to its diameter.
The symbol π for the most famous transcendental number was introduced in a 1706 textbook by William Jones (1675-1749), reportedly because it's the first letter of the Greek verb perimetrein ("to measure around"), from which the word "perimeter" is derived. Euler popularized the notation after 1736. It's not clear whether Euler knew of the previous usage pioneered by Jones.
Historically, ancient mathematicians did convince themselves that LR/2 was the area of the surface generated by a segment of length R when one of its extremities (the "apex") is fixed and the other extremity has a trajectory of length L (which remains perpendicular to that segment).
The record shows that they did this for planar geometry (in which case the trajectory is a circle), but the same reasoning would apply to nonplanar trajectories as well (any curve drawn on the surface of a sphere centered on the apex will do).
They reasoned that the trajectory (the circle) could be approximated by a polygonal line with many small sides. The surface could then be seen as consisting of many thin triangles whose heights were very nearly equal to R, whereas the base was very nearly a portion of the trajectory. As the area of each triangle is R/2 times such a portion, the area of the whole surface is R/2 times the length of the entire trajectory [QED?].
Of course, this type of reasoning was made fully rigorous only with the advent of infinitesimal calculus, but it did convince everyone of the existence of a single number π which would give both the perimeter (2πR) and the surface area (πR²) of a circle of radius R...
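That limiting process can be replayed numerically. The sketch below (my own Archimedes-style side-doubling, not part of the original text) starts from an inscribed hexagon and repeatedly doubles the number of sides, so the half-perimeter converges to π:

```python
import math

# Perimeter of a regular polygon inscribed in a unit circle.
# If s is the side for a central angle t, the side for t/2 is
#     s' = s / sqrt(2 + sqrt(4 - s*s))     (numerically stable form)
s, n = 1.0, 6                  # hexagon: 6 sides of length 1
for _ in range(20):            # double the number of sides 20 times
    s = s / math.sqrt(2.0 + math.sqrt(4.0 - s * s))
    n *= 2
print(n * s / 2)               # half-perimeter -> 3.14159265...
```

The naive half-chord formula sqrt(2 - sqrt(4 - s²)) loses precision by cancellation when s is small; dividing by the conjugate, as above, avoids that.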
The ancient problem of squaring the circle asked for a ruler-and-compass construction of a square having the same area as a circle of given diameter. Such a construction would constitute a proof that π is constructible, which it's not. Therefore, it's not possible to square the circle...
π isn't even algebraic (i.e., it's not the root of any polynomial with integer coefficients). All constructible numbers are algebraic, but the converse doesn't hold. For example, the cube root of two is algebraic but not constructible, which is to say that there's no solution to another ancient puzzle known as the Delian problem (or duplication of the cube). A number which is not algebraic is called transcendental.
In 1882, π was shown to be transcendental by C.L. Ferdinand von Lindemann (1852-1939), using little more than the tools devised 9 years earlier by Charles Hermite to prove the transcendence of e (1873).
π was proved irrational much earlier (1761) by Lambert (1728-1777).
Since 1988, Pi Day has been celebrated worldwide on March 14 (3-14 is the beginning of the decimal expansion of Pi and it's also the birthday of Albert Einstein, 1879-1955). This geeky celebration was the brainchild of the physicist Larry Shaw (1939-2017). The thirtieth Pi Day was celebrated by Google with a Doodle on their home page, on 2018-03-14.
On that fateful Day, Stephen Hawking (1942-2018) died at the age of 76.
(2003-07-26) √2 = 1.414213562373095048801688724209698+ Root 2. The diagonal of a square of unit side. Pythagoras' Constant.
He is unworthy of the name of man who is ignorant of the fact that the diagonal of a square is incommensurable with its side. -- Plato (427-347 BC)
When they learned about the irrationality of √2, the Pythagoreans sacrificed 100 oxen to the gods (a so-called hecatomb)... The followers of Pythagoras (c. 569-475 BC) kept this sensational discovery a secret, to be revealed to the initiated mathematikoi only.
Hippasus of Metapontum is credited with the classical proof (ca. 500 BC) which is summarized below. It is based on the fundamental theorem of arithmetic (i.e., the unique factorization of any integer into primes).
The irrationality of the square root of 2 may also be proved very nicely using the method of infinite descent, without any notion of divisibility!
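Here is a compressed rendering of that descent argument in modern notation (my own summary, not the site's):

```latex
% If sqrt(2) = p/q with q minimal, then q < p < 2q, and multiplying the
% fraction by (sqrt(2)-1)/(sqrt(2)-1) yields a smaller denominator:
\sqrt{2} \;=\; \frac{p}{q} \;=\; \frac{2q-p}{\,p-q\,},
\qquad\text{with}\quad 0 < p-q < q,
% contradicting the minimality of q.  Hence no such fraction exists.
```

No divisibility is used: only the fact that a decreasing sequence of positive integers must terminate.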
(2013-07-17) √3 = 1.732050807568877293527446341505872+ Root 3. Diameter of a cube of unit side. Constant of Theodorus.
Theodorus taught mathematics to Plato, who reported that he was teaching about the irrationality of the square roots of all integers besides perfect squares "up to 17", before 399 BC. Of course, the theorem of Theodorus is true without that artificial restriction (which Theodorus probably imposed for pedagogical purposes only). Once the conjecture is made, the truth of the general theorem is fairly easy to establish.
Elsewhere on this site, we give a very elegant short modern proof of the general theorem, by the method of infinite descent. A more pedestrian approach, probably used by Theodorus, is suggested below...
There's also a partial proof which settles only the cases below 17. Some students of the history of mathematics jumped to the conclusion that this must have been the (lost) reasoning of Theodorus (although this guess flies in the face of the fact that the Greek words used by Plato do mean "up to 17" and not "up to 16"). Let's present that weak argument, anachronistically, in the vocabulary of congruences, for the sake of brevity:
If q is an odd integer with a rational square root expressed in lowest terms as x/y, then:
q y² = x²
Because q is odd, so are both sides (or else x and y would have a common even factor). Therefore, the two odd squares are congruent to 1 modulo 8, and q must be too. Below 17, the only possible such values of q are 1 and 9 (both of which are perfect squares).
This particular argument doesn't settle the case of q = 17 (which Theodorus was presenting in class as solved) and it's not much simpler (if at all) than a discussion based on a full factorization of both sides (leading to a complete proof by mere generalization of the method which had established the irrationality of the square root of 2, one century earlier).
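The congruence at the heart of that argument is easy to check by machine (a small verification script of my own):

```python
# Every odd square is congruent to 1 modulo 8, since
# (2k+1)^2 = 4k(k+1) + 1  and  k(k+1) is always even.
assert all((2 * k + 1) ** 2 % 8 == 1 for k in range(10000))

# Hence q*y^2 = x^2 (x, y odd) forces q = 1 (mod 8).  Below 17, the only
# odd candidates are 1 and 9 -- both perfect squares -- so every non-square
# q < 17 is ruled out.  The argument says nothing about q = 17 itself.
print([q for q in range(1, 17) if q % 2 == 1 and q % 8 == 1])   # [1, 9]
```

Note that 17 ≡ 1 (mod 8), which is precisely why the congruence test is silent about it.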
Therefore, my firm opinion is that Theodorus himself knew very well that his theorem was perfectly general, because he had proved it so... The judgment of history, that the square root of 3 was the second number proved to be irrational, seems fair. So does the naming of that constant and the related theorem after Theodorus of Cyrene (465-398 BC).
(2003-07-26) φ = 1.61803398874989484820458683436563811772+ The diagonal of a regular pentagon of unit side: φ = (1+√5)/2
This ubiquitous number is variously known as the Golden Number, the Golden Section, the Golden Mean, the Divine Proportion or the Fibonacci Ratio (because it's the limit of the ratio of consecutive terms in the Fibonacci sequence). It's the aspect ratio of a rectangle whose semiperimeter is to the larger side what the larger side is to the smaller one.
e is called Euler's number (not to be confused with Euler's constant γ).
e was first proved transcendental by Charles Hermite (1822-1901) in 1873.
The letter e may now no longer be used to denote anything other than this positive universal constant. -- Edmund Landau (1877-1938)
(2014-05-15) 1 - 1/e = 0.632120558828557678+ Rise time and fixed-point probability: 1/1! - 1/2! + 1/3! - 1/4! + 1/5! - ...
Every electrical engineer knows that the time constant of a first-order linear filter is the time it takes to reach 63.2% of a sudden level change.
For example, to measure a capacitor C with an oscilloscope, use a known resistor R and feed a square wave to the input of the basic first-order filter formed by R and C. Assuming the period of the wave is much larger than RC, the value of RC is equal to the time it takes the output to change by 63.2% of the peak-to-peak amplitude on every transition.
Proof: If the time constant of a first-order lowpass filter is taken as the unit of time, then its response to a unit step will be 1 - exp(-t) at time t. That's 10% at time ln(10/9) and 90% at time ln(10). The rise time is the interval between those two times, namely ln(9), or nearly 2.2. The reciprocal of that is about 45.512%. More precisely:
The time constant (RC) of a first-order lowpass filter is 45.5% of its rise time.
Waveform                             | Time unit | Rise time, 0 to 63.212%     | Rise time, 10% to 90% | Rise time, 0 to 100%
RC-filtered long-period squarewave   | RC        | 1                           | ln 9 = 2.1972         | n/a
Sinewave                             | Period    | ¼ + asin(1-1/e)/2π = 0.3589 | asin(0.8)/π = 0.2952  | 0.5
Probability of existence of a fixed point :
The number 63.212...% is also famously known as the probability that a permutation of many elements will have at least one fixed point (i.e., an element equal to its image). Technically, it's only the limit of that probability as the number of elements tends to infinity. However, the convergence is so rapid that the difference is negligible. The exact probability for n elements is the alternating sum 1/1! - 1/2! + 1/3! - ... ± 1/n!
With n = 10, for example, this is 28319 / 44800 = 0.63212053571428... (which approximates the limit to within about 37 ppb).
A random self-mapping (not necessarily bijective) of a set of n points will have at least one fixed point with a probability that tends slowly to that same limit when n tends to infinity. The exact probability is:
1 - ( 1 - 1/n )^n
For n = 10, this is 0.6513215599, which exceeds the limit by 1.92 percentage points.
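Both exact probabilities can be computed symbolically; the helpers below (names of my own choosing) reproduce the two n = 10 values quoted above:

```python
from fractions import Fraction
from math import factorial

def perm_fix_prob(n):
    """Exact probability that a random permutation of n elements has at
    least one fixed point:  1/1! - 1/2! + 1/3! - ... (n terms)."""
    return sum(Fraction((-1) ** (k + 1), factorial(k)) for k in range(1, n + 1))

def map_fix_prob(n):
    """Probability that a random self-mapping of n points (not necessarily
    a bijection) has at least one fixed point:  1 - (1 - 1/n)^n."""
    return 1 - (1 - Fraction(1, n)) ** n

print(perm_fix_prob(10))         # 28319/44800 = 0.63212053571...
print(float(map_fix_prob(10)))   # 0.6513215599
```

The permutation probability converges to 1 - 1/e factorially fast; the self-mapping probability only at rate 1/n, which is why the n = 10 value is still almost 2 points above the limit.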
(2003-07-26) ln(2) = 0.693147180559945+ The alternating sum 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + 1/7 - 1/8 + ... or the straight sum ½ + (½)²/2 + (½)³/3 + (½)⁴/4 + (½)⁵/5 + ...
Both expressions come from the Mercator series: ln 2 = -H(-1) = H(½)
where H(x) = x + x²/2 + x³/3 + x⁴/4 + ... + xⁿ/n + ... = -ln(1-x)
The first few decimals of this pervasive constant are worth memorizing!
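The two series converge at very different speeds, which is easy to see numerically (a quick comparison of mine):

```python
import math

# Alternating series 1 - 1/2 + 1/3 - ... : error ~ 1/N (very slow).
slow = sum((-1) ** (n + 1) / n for n in range(1, 1001))

# Mercator series at x = 1/2, i.e. H(1/2): error ~ 2^-N (very fast).
fast = sum(0.5 ** n / n for n in range(1, 55))

print(slow, fast, math.log(2))
```

A thousand terms of the alternating sum give barely 3 correct decimals, while some fifty terms of H(½) already reach machine precision.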
(2011-08-24)
When many actual computations used decimal logarithms, every engineer memorized the 5-digit value (0.30103) and trusted it to 8-digit precision.
If decibels (dB) are used, a power factor of 2 thus corresponds to 3 dB or, more precisely, 3.0103 dB.
To a filter designer, the attenuation of a first-order filter is quoted as 6 dB per octave, which means that amplitudes change by a factor of 2 when frequencies change by an octave (a factor of 2 in frequency). A second-order low-pass filter would have an ultimate slope of 12 dB per octave, etc.
(2003-07-26) γ = 0.577215664901532860606512090082402431+ The limit of [1 + 1/2 + 1/3 + 1/4 + ... + 1/n] - ln(n), as n → ∞
The previous sum can be recast as the partial sum of a convergent series, by introducing telescoping terms. The general term of that series (for n ≥ 2) is:
1/n - ln(n) + ln(n-1)  =  1/n + ln(1 - 1/n)  =  - Σp≥2 1/(p n^p)
Therefore, since terms in absolutely convergent series can be reordered:
1 - γ  =  Σn≥2 Σp≥2 1/(p n^p)  =  Σp≥2 Σn≥2 1/(p n^p)
Therefore, using the zeta function:  1 - γ  =  Σp≥2 (ζ(p)-1)/p
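Both the defining limit and the zeta-series identity can be checked numerically. The sketch below is my own; the 1/(2n) correction term is a standard acceleration device not mentioned in the text above:

```python
import math

# Euler's constant from its definition:  H_n - ln n  has error ~ 1/(2n);
# subtracting 1/(2n) reduces the error to ~ 1/(12 n^2).
n = 10 ** 6
H = sum(1.0 / k for k in range(1, n + 1))
gamma = H - math.log(n) - 1 / (2 * n)
print(gamma)                                  # 0.5772156649...

# Check of the identity  1 - gamma = sum over p >= 2 of (zeta(p)-1)/p,
# truncating each inner sum at N and estimating its tail by an integral.
N = 4000
rhs = 0.0
for p in range(2, 60):
    zeta_minus_1 = sum(k ** -p for k in range(2, N + 1)) + N ** (1 - p) / (p - 1)
    rhs += zeta_minus_1 / p
print(1 - gamma, rhs)                         # both ~ 0.4227843351
```

The p-sum converges geometrically (terms shrink like 2^-p), so sixty terms are far more than enough; the truncation of the inner sums is what limits the accuracy here.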
The constant γ was calculated to 16 digits by Euler in 1781. The symbol γ is due to Mascheroni, who gave 32 digits in 1790 (his other claim to fame is the Mohr-Mascheroni theorem). Only the first 19 of Mascheroni's digits were correct. The mistake was only spotted in 1809 by Johann von Soldner (the eponym of another constant), who obtained 24 correct decimals...
In 1878, the thing was worked out to 263 decimal places by the astronomer John Couch Adams (1819-1892), who had almost discovered Neptune as a young man (in 1846).
In 1962, gamma was computed electronically to 1271 digits by D.E. Knuth, then to 3566 digits by Dura W. Sweeney (1922-1999) with a new approach.
7000 digits were obtained in 1974 (W.A. Beyer & M.S. Waterman) and 20 000 digits in 1977 (by R.P. Brent, using Sweeney's method). Teaming up with Edwin McMillan (1907-1991; Nobel 1951), Brent would produce more than 30 000 digits in 1980.
Alexander J. Yee, a 19-year old freshman at Northwestern University, made UPI news (on 2007-04-09) for his computation of 116 580 041 decimal places in 38½ hours on a laptop computer, in December 2006. Reportedly, this broke a previous record of 108 million digits, set in 47 hours and 36 minutes of computation (from September 23 to 26, 1999) by the Frenchmen Xavier Gourdon (X1989) and Patrick Demichel.
(and also of log 2) as of 2009-03-13. Kondo and Yee then collaborated to produce 1 trillion digits of √2 in 2010. Later that year, they computed 5 trillion digits of π, breaking the previous record of 2.7 trillion digits of π (2009-12-31) held by the Frenchman Fabrice Bellard (X1993, born in 1972).
Everybody's guess is that γ is transcendental, but this constant has not even been proven irrational yet...
Charles de la Vallée-Poussin (1866-1962) is best known for having given an independent proof of the Prime Number Theorem in 1896, at the same time as Jacques Hadamard (1865-1963). In 1898, he investigated the average fraction by which the quotient of a positive integer n by a lesser prime falls short of an integer. Vallée-Poussin proved that this tends to γ for large values of n (and not to ½, as might have been guessed).
What caused the admiration of Alf van der Poorten was the proof of the irrationality of ζ(3) by the French mathematician Roger Apéry (1916-1994) in 1977. That proof is based on an equation featuring a rapidly-converging series:
ζ(3)  =  (5/2) Σk≥1 (-1)^(k-1) / (k³ C(2k,k))      [where C(2k,k) is the central binomial coefficient]
The reciprocal of Apéry's constant, 1/ζ(3), is equally important: (A088453)
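The speed of Apéry's series is easy to appreciate numerically (a short comparison of my own):

```python
from math import comb

# Apery's rapidly converging series:
#   zeta(3) = (5/2) * sum over k >= 1 of (-1)^(k-1) / (k^3 * C(2k,k))
# Terms shrink roughly like 4^-k, so ~30 terms reach machine precision.
apery = 2.5 * sum((-1) ** (k - 1) / (k ** 3 * comb(2 * k, k))
                  for k in range(1, 30))

# Direct summation of 1/n^3 for comparison: error ~ 1/(2 N^2).
direct = sum(1 / n ** 3 for n in range(1, 20000))

print(apery, direct)   # both ~ 1.2020569031...
```

Twenty thousand terms of the direct sum still trail the thirty-term Apéry sum by several digits.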
Such a shortcut must be avoided unless one is prepared to give up the most trusted properties of the square root function, including:
√(xy) = √x √y
If you are not convinced that the square root function (and its familiar symbol √) should be strictly limited to nonnegative real numbers, just consider what the above relation would mean with x = y = -1.
Neither of the two complex numbers (i and -i) whose square is -1 can be described as the "square root of -1". The square root function cannot be defined as a continuous function over the domain of complex numbers. Continuity can be rescued if the domain of the function is changed to a strange beast consisting of two properly connected copies (Riemann sheets) of the complex plane sharing the same origin. Such considerations do not belong in an introduction to complex numbers. Neither does the deceptive square-root symbol (√).
Exotic Mathematical Constants
(2008-04-13) The Delian constant is the scaling factor which doubles a volume.
The cube root of 2 is much less commonly encountered than its square root (1.414...). There's little need to remember that it's roughly equal to 1.26, but it can be useful (e.g., a 5/8" steel ball weighs almost twice as much as a 1/2" one).
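The steel-ball remark is just the cube-volume scaling law in disguise (a two-line check of mine):

```python
# Scaling every length by 2^(1/3) = 1.2599... exactly doubles a volume.
# A 5/8" ball vs a 1/2" ball: diameter ratio 1.25 is close to 2^(1/3),
# so the weight (volume) ratio is close to 2.
ratio = (5 / 8) / (1 / 2)     # 1.25
print(2 ** (1 / 3))           # 1.2599210498948732
print(ratio ** 3)             # 1.953125 -- almost twice the weight
```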
The fact that this quantity cannot be constructed "classically" (i.e., with ruler and compass alone) shows that there's no "classical" solution to the so-called Delian problem, whereby the Athenians were asked by the Oracle of Apollo at Delos to resize the altar of Apollo to make it "twice as large".
The Delian constant has also grown to be a favorite example of an algebraic number of degree 3 (arguably, it's the simplest such number). Thus, its continued fraction expansion (CFE) has been under considerable scrutiny... There does not seem to be anything special about it, but the question remains theoretically open whether it's truly normal or not (by contrast, the CFE of any algebraic number of degree 2 is periodic).
In Western music theory, the chromatic octave (the interval which doubles the frequency of a tone) is subdivided into 12 equal intervals (semitones). An interval of four semitones is known as a major third, and three consecutive major thirds (three equal steps of four semitones each) correspond to a doubling of the frequency. Thus, the Delian constant (1.259921...) is the frequency ratio corresponding to a major third.
A Delian brick is a cuboid with sides proportional to 1, 2^(1/3) and 2^(2/3).
That term was coined by Ed Pegg on 2018-06-19. A planar cut across the middle of its longest side splits a Delian brick intotwo Delian bricks.
This is analogous to the √2 aspect ratio for rectangles, on which the common A-series of paper sizes is based (as are the B-series, used for some playing cards, and the C-series, for envelopes).
(2009-02-08) Gauss's constant (G) is the reciprocal of agm(1, √2).
The symbol G is also used for Catalan's constant, which is best denoted β(2) whenever there is any risk of confusion.
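The arithmetic-geometric mean converges quadratically, so Gauss's constant takes only a handful of iterations to compute (a minimal sketch of mine):

```python
import math

# agm(a, b): repeatedly replace (a, b) by their arithmetic and
# geometric means; both sequences converge to the same limit.
a, b = 1.0, math.sqrt(2.0)
while abs(a - b) > 1e-15:       # quadratic convergence: ~5 iterations
    a, b = (a + b) / 2, math.sqrt(a * b)
G = 1 / a
print(G)                        # Gauss's constant, 0.8346268416...
```

The number of correct digits roughly doubles with each pass through the loop, which is why the AGM is also the engine behind some record computations of π.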
(2015-07-12) Conventional coefficient pertaining to the diffraction limit on resolution.
This is equal to the first zero of the J1 Bessel function divided by π. Commonly approximated as 1.22 or 1.220.
This coefficient appears in the formula which gives the limit of the angular resolution θ of a perfect lens of diameter D for light of wavelength λ:
θ = 1.220 λ / D
This precise coefficient is arrived at theoretically by using Rayleigh's criterion, which states that two points of light (e.g., distant stars) can't be distinguished if their angular separation is less than the angular radius of their Airy disks (out to the first dark circle in the interference pattern described theoretically by George Airy in 1835).
The precise value of the factor to use is ultimately a matter of convention about what constitutes optical distinguishability. The theoretical criterion on which the above formula is based was originally proposed by Rayleigh for sources of equal magnitudes. It has proved more appealing than all other considerations, including the empirical Dawes' limit, which ignores the relevance of wavelength. Dawes' limit would correspond to a coefficient of about 1.1 at a wavelength of 507 nm (most relevant to the scotopic astronomical observations used by Dawes).
Note that the digital deconvolution of images allows finer resolutionsthan what the above classical formula implies.
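The 1.22 coefficient can be recovered from scratch with nothing but the power series of J1 and a bisection (my own sketch, not a library routine):

```python
import math

def J1(x):
    """Bessel function of the first kind, order 1, via its power series:
    J1(x) = sum over k >= 0 of (-1)^k (x/2)^(2k+1) / (k! (k+1)!)."""
    total, term = 0.0, x / 2
    for k in range(60):
        total += term
        term *= -(x / 2) ** 2 / ((k + 1) * (k + 2))
    return total

# First positive zero of J1 lies between 3 and 4 (J1 changes sign there).
lo, hi = 3.0, 4.0
for _ in range(60):
    mid = (lo + hi) / 2
    if J1(lo) * J1(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(lo / math.pi)   # ~ 1.2197, the Rayleigh coefficient
```

The zero itself is 3.83170597...; dividing by π yields the conventional 1.22.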
This is often called Mertens' constant in honor of the number theorist Franz Mertens (1840-1927). It is to the sequence of primes what Euler's constant γ is to the sequence of integers. It's sometimes also called Kronecker's constant or the Reciprocal Prime Constant.
Proposals have been made to name this constant after Charles de la Vallée-Poussin (1866-1962) and/or Jacques Hadamard (1865-1963), the two mathematicians who first proved (independently) the Prime Number Theorem, in 1896.
(2006-06-15) The product of all the factors [1 - 1/(q² - q)] for prime values of q.
For any prime p besides 2 and 5, the decimal expansion of 1/p has a period at most equal to p-1 (since only that many different nonzero "remainders" can possibly show up in the long division process). Primes yielding this maximal period are called long primes [to base ten] by recreational mathematicians and others. The number 10 is a primitive root modulo such a prime p, which is to say that the first p-1 powers of 10 are distinct modulo p (the cycle then repeats, by Fermat's little theorem). Putting a = 10, this is equivalent to the condition:
a^((p-1)/d) ≢ 1 (modulo p), for any prime factor d of p-1.
For a given prime p, there are φ(p-1) satisfactory values of a (modulo p), where φ is Euler's totient function. Conversely, for a given integer a, we may investigate the set of long primes to base a...
It seems that the proportion C(a) of such primes (among all prime numbers) is equal to the above numerical constant C for many values of a (including negative ones), and that it's always a rational multiple of C. The precise conjecture tabulated below originated with Emil Artin (1898-1962), who communicated it to Helmut Hasse in September 1927.
Neither -1 nor a quadratic residue can be a primitive root modulo p > 3. Hence, the table's first row is as stated.
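The conjectured density is easy to compare against an actual count of long primes to base ten (my own naive script, using trial division and a brute-force multiplicative order):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def order(a, p):
    """Multiplicative order of a modulo the prime p (a not divisible by p)."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

# 10 is a primitive root mod p exactly when its order is p - 1,
# i.e. when p is a "long prime" to base ten (p = 2, 5 excluded).
primes = [p for p in range(2, 10000) if is_prime(p) and p not in (2, 5)]
long_primes = [p for p in primes if order(10, p) == p - 1]
print(len(long_primes) / len(primes))   # ~ 0.38, vs C = 0.3739558...
```

The empirical proportion among the first few thousand primes hovers a little above Artin's constant; convergence is slow.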
Artin's conjecture for primitive roots (1927), first refined by Dick Lehmer:
Base a                  | Proportion C(a) of primes p for which a is a primitive root
-1 or b²                | 0
a = b^k                 | C(a) = v(k) C(b), where v is multiplicative: v(q^n) = q(q-2)/(q²-q-1) if q is prime
sf(a) ≡ 1 (mod 4)  (*)  | C(a) = [1 - Π 1/(1+q-q²)] C   (product over the primes q dividing a)
Otherwise               | C(a) = C = 0.3739558136192022880547280543464164151116...
(*) In the above, sf(a) is the squarefree part of a, namely the integer of least magnitude which makes the product a·sf(a) a square. The squarefree part of a negative integer is the opposite of the squarefree part of its absolute value.
The conjecture can be deduced from its special case about prime values of a, which states that the density is C unless a ≡ 1 (mod 4), in which case it's equal to:
[ (a² - a) / (a² - a - 1) ] C
In 1984, Rajiv Gupta and M. Ram Murty showed Artin's conjecture to be true for infinitely many values of a. In 1986, David Rodney ("Roger") Heath-Brown proved nonconstructively that there are at most 2 primes for which it fails... Yet, we don't know of any single value of a for which the result is certain!
(2003-07-30) μ = 1.451369234883381050283968485892027449493+ Ramanujan-Soldner constant, zero of the logarithmic integral: li(μ) = 0
μ is the only positive root of the logarithmic integral function "li" (which shouldn't be confused with the older capitalized offset logarithmic integral "Li", still used by number theorists when x is large: Li(x) = li(x) - li(2)).
The above integrals must be understood as Cauchy principal values whenever the singularity at t = 1 is in the interval of integration...
The function li is also called the integral logarithm (French: logarithme intégral).
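The constant μ can be computed with nothing but the classical series for li and Newton's method (a sketch of mine; the series and the derivative li'(x) = 1/ln x are standard):

```python
import math

GAMMA = 0.5772156649015329    # Euler's constant

def li(x):
    """Logarithmic integral (principal value, x > 1) via the series
    li(x) = gamma + ln|ln x| + sum over n >= 1 of (ln x)^n / (n * n!)."""
    u = math.log(x)
    total, term = GAMMA + math.log(abs(u)), 1.0
    for n in range(1, 60):
        term *= u / n            # term is now u^n / n!
        total += term / n        # add u^n / (n * n!)
    return total

# Newton's method for li(x) = 0:  x -> x - li(x) / li'(x) = x - li(x) ln x
x = 1.5
for _ in range(10):
    x -= li(x) * math.log(x)
print(x)   # 1.4513692348... (the Ramanujan-Soldner constant)
```

As a sanity check, the same series gives li(2) = 1.04516378..., the classical offset used by number theorists.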
(2017-11-25) Defined by Landau and expressed as an integral by Ramanujan.
Asymptotically, the density of integers below x expressible as the sum of two squares is inversely proportional to the square root of the natural logarithm of x. The coefficient of proportionality is, by definition, the Landau-Ramanujan constant.
Ramanujan expressed as an integral the constant so defined by Landau.
(2004-02-19) For no good reason, this is sometimes called the Omega constant.
It's the solution of the equation x = e^(-x) or, equivalently, x = ln(1/x). In other words, it's the value at point 1 of Lambert's W function.
The value of that constant could be obtained by iterating the function e^(-x), but the convergence is very slow. It's much better to iterate the function:
f(x) = (1+x) / (1+e^x)
This has the same fixed point but features a zero derivative there, so that the convergence is quadratic (the number of correct digits is roughly doubled with each iteration). This fast approach is an example of Newton's method.
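The quadratic convergence is striking in practice; a handful of iterations already reaches machine precision (a minimal sketch of mine):

```python
import math

# Iterating f(x) = (1+x)/(1+e^x) is Newton's method for x - e^(-x) = 0.
x = 1.0
for _ in range(8):
    x = (1 + x) / (1 + math.exp(x))
print(x, math.exp(-x))   # both ~ 0.5671432904097838 (Omega constant)
```

By contrast, the naive iteration x → e^(-x) only gains a fraction of a digit per step, since its derivative at the fixed point is about -0.567 rather than 0.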
What's known as the [first] Feigenbaum constant is the "bifurcation velocity" (δ) which governs the geometric onset of chaos via period-doubling in iterative sequences (with respect to some parameter which is used linearly in each iteration, to damp a given function having a quadratic maximum). This universal constant was unearthed in October 1975 by Mitchell J. Feigenbaum (1944-2019). The related "reduction parameter" (α) is the second Feigenbaum constant...
(2021-06-21)
F  =  ∫0→∞ dx / Γ(x)  =  e + ∫0→∞ e^(-x) dx / (π² + (ln x)²)
Historically, this constant was successively computed...
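The first form of the integral is straightforward to evaluate with elementary numerical quadrature (my own sketch, using Simpson's rule; 1/Γ is an entire function, so the integrand is smooth and vanishes at both ends of the range used):

```python
import math

def f(x):
    """Integrand 1/Gamma(x), extended by its limit 0 at x = 0."""
    return 1.0 / math.gamma(x) if x > 0 else 0.0

# Composite Simpson's rule on [0, 40]: beyond x = 40 the integrand is
# utterly negligible, since Gamma(40) is already about 2e46.
a, b, n = 0.0, 40.0, 40000      # n must be even
h = (b - a) / n
F = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
F *= h / 3
print(F)   # ~ 2.8077702420 (Fransen-Robinson constant)
```

The second form of the integral, with the e^(-x)/(π² + ln²x) kernel, converges much faster analytically but needs more care near x = 0.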
It's conjectured that the above upper bound, using the Gamma function, is actually the true value of Bloch's constant, but this hasn't been proved yet. When André Bloch (1893-1948) originally stated his theorem, he merely stated that the universal constant B he introduced was no less than 1/72.
Bloch's theorem (1925)
Consider the space S of all schlicht functions (holomorphic injections on the open disk D of radius 1 centered on 0). The largest disk contained in f(D) has a radius which is no less than a certain universal positive constant B.
Bloch's constant is defined as the largest value of B for which the theorem holds. Originally, Bloch only proved that B ≥ 1/72.
Some Third-Tier Mathematical Constants
(2016-01-19) e^π = 23.1406926327792690... Raising this transcendental number to the power of i gives (e^π)^i = e^(iπ) = -1.
Because i is irrational but algebraic (not transcendental), the Gelfond-Schneider theorem, applied to e^π = (-1)^(-i), implies that Gelfond's constant is transcendental.
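The headline identity can be checked directly in floating point (a two-line demonstration of mine):

```python
import math

x = math.exp(math.pi)   # Gelfond's constant, 23.140692632779...
print(x ** 1j)          # essentially -1: (e^pi)^i = e^(i*pi) = -1
```

The tiny imaginary residue in the printed value is just the rounding of π through exp and log.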
(2004-05-22) Sum of the reciprocals of [pairs of] twin primes: (1/3+1/5) + (1/5+1/7) + (1/11+1/13) + (1/17+1/19) + (1/29+1/31) + ...
This constant is named after the Norwegian mathematician who proved the sum to be convergent, in 1919: Viggo Brun (1885-1978).
The scientific notation used above and throughout Numericana indicates a numerical uncertainty by giving an estimate of the standard deviation (σ). This estimate is shown between parentheses to the right of the least significant digit (expressed in units of that digit). The magnitude of the error is thus stated to be less than this with a probability of 68.27% or so.
Thomas R. Nicely, professor of mathematics at Lynchburg College, started his computation of Brun's constant in 1993. He made headlines in the process, by uncovering a flaw in the Pentium microprocessor's arithmetic, which ultimately forced a costly ($475M) worldwide recall by Intel.
Usually, mathematicians have to shoot somebody to get this much publicity. -- Dr. Thomas R. Nicely (quoted in The Cincinnati Enquirer)
Nicely kept updating his estimate of Brun's constant for a few years until 2010 or so, at which point he was basing his computation on the exact number of twin primes found below 1.6×10^15. Because he felt a general audience could not be expected to be familiar with the aforementioned standard way scientists report uncertainties, Nicely chose to report the so-called 99% confidence level, which is three times as big. (More precisely, 3σ is a 99.73% confidence level.) The following expressions thus denote the same value, with the same uncertainty:
The sum of the reciprocals of the Fibonacci numbers was proved irrational by Marc Prévost, in the wake of Roger Apéry's celebrated proof of the irrationality of ζ(3), which has been known as Apéry's constant ever since.
The question of the irrationality of the sum of the reciprocals of the Fibonacci numbers was formally raised by Paul Erdös and may still be erroneously listed as open, despite the proof of Marc Prévost (Université du Littoral Côte d'Opale).
(2003-08-05) Grossman's Constant. [Not known much beyond the above accuracy.]
A 1986 conjecture of Jerrold W. Grossman (which was proved in 1987 by Janssen & Tjaden) states that the following recurrence defines a convergent sequence for only one value of x, which is now called Grossman's Constant:
a(0) = 1 ;  a(1) = x ;  a(n+2) = a(n) / (1 + a(n+1))
Similarly, there's another constant, first investigated by Michael Somos in 2000, above which value of x the following quadratic recurrence diverges (below it, there's convergence to a limit that's less than 1): (where the terminal "7-" stands for something probably close to "655").
a(0) = 0 ;  a(1) = x ;  a(n+2) = a(n+1) ( 1 + a(n+1) a(n) )
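Grossman's constant can be estimated by bisection. The sketch below is my own; it relies on the empirically observed behavior of the first recurrence (for x below the critical value, the odd-indexed terms die out while the even-indexed terms stay bounded away from zero, and the roles swap above it):

```python
def below_constant(x, steps=2000):
    """For a(0)=1, a(1)=x, a(n+2) = a(n)/(1 + a(n+1)):
    return True when the even-indexed terms dominate after many steps,
    which happens when x is below the critical (Grossman) value.
    'steps' must be even so that (a, b) = (a_even, a_odd) at the end."""
    a, b = 1.0, x
    for _ in range(steps):
        a, b = b, a / (1 + b)
    return a > b

lo, hi = 0.5, 0.9                 # bracket verified by direct simulation
for _ in range(50):
    mid = (lo + hi) / 2
    if below_constant(mid):
        lo = mid
    else:
        hi = mid
print(lo)   # ~ 0.737..., reportedly Grossman's constant
```

This is only a numerical sketch: the convergence proof of Janssen & Tjaden is what guarantees there is a single critical x to bisect toward.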
(2003-08-06) Ramanujan's number: exp(π√163) is almost an integer.
The attribution of this irrational constant to Ramanujan was made by Simon Plouffe, as a monument to a famous 1975 April fools column by Martin Gardner in Scientific American (Gardner wrote that this constant had been proved to be an integer, as "conjectured by Ramanujan" in 1914 [sic!]).
Actually, this particular property of 163 was first noticed in 1859 by Charles Hermite (1822-1901). It doesn't appear in Ramanujan's relevant 1914 paper.
There are reasons why the expression exp(π√n) should be close to an integer for specific integral values of n, in particular when n is a large Heegner number (43, 67 and 163 are the largest Heegner numbers). The value n = 58, which Ramanujan did investigate in 1914, is also most interesting. Below are the first values of n for which exp(π√n) is less than 0.001 away from an integer:
In 1960, Hillel Furstenberg and Harry Kesten showed that, for a certain class of random sequences, geometric growth was almost always obtained, although they did not offer any efficient way to compute the geometric ratio involved in each case. The work of Furstenberg and Kesten was used in the research that earned the 1977 Nobel Prize in Physics for Philip Anderson, Neville Mott, and John van Vleck. This had a variety of practical applications in many domains, including lasers, industrial glasses, and even copper spirals for birth control...
At UC Berkeley in 1999, Divakar Viswanath investigated the particular random sequences in which each term is either the sum or the difference of the two previous ones (a fair coin is flipped to decide whether to add or subtract). As stated by Furstenberg and Kesten, the absolute values of the numbers in almost all such sequences tend to have a geometric growth whose ratio is a constant. Viswanath was able to compute this particular constant to 8 decimals.
Currently, more than 14 significant digits are known (seeA078416).
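A crude Monte-Carlo estimate of the growth ratio is easy to produce (my own simulation; it recovers only 2-3 digits, nowhere near Viswanath's method, which was not a simulation):

```python
import math
import random

# "Random Fibonacci": a(n+1) = a(n) +/- a(n-1), fair coin each step.
# Almost surely |a(n)| ~ V**n with V = 1.1319882... (Viswanath's constant).
random.seed(12345)
a, b, logsum, n = 1.0, 1.0, 0.0, 10 ** 6
for _ in range(n):
    a, b = b, (b + a) if random.random() < 0.5 else (b - a)
    s = abs(b)
    if s > 1e100 or 0 < s < 1e-100:        # rescale to avoid over/underflow
        a, b, logsum = a / s, b / s, logsum + math.log(s)
growth = math.exp((logsum + math.log(max(abs(a), abs(b)))) / n)
print(growth)   # ~ 1.13
```

The statistical error shrinks only like 1/√n, which is exactly why Viswanath had to replace simulation by an analysis of the underlying invariant measure.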
(2012-07-01) Concatenating the digits of the primes forms a normal number.
Borel defined a normal number (to base ten) as a real number whose decimal expansion is completely random, in the sense that all sequences of digits of a prescribed length are equally likely to occur at a random position in the decimal expansion.
It is well known that almost all real numbers are normal in that sense (which is to say that the set of the other real numbers is contained in a set of zero measure). Pi is conjectured to be normal, but this is not known for sure.
It is actually surprisingly difficult to define explicitly a number that can be proved to be normal. So far, all such numbers have been defined in terms of a peculiar decimal expansion. The simplest of those is Champernowne's constant, whose decimal expansion is obtained by concatenating the digits of all the integers in sequence. This number was proved to be decimally normal in 1933 by David G. Champernowne (1912-2000), as an undergraduate.
In 1946, the normality of the number obtained by concatenating the digits of the primes was proved by Arthur H. Copeland (1898-1970) and Paul Erdös (1913-1996), and that number was named after them.
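Constructing the first digits of the Copeland-Erdös constant takes only a few lines (a small script of mine, with a naive primality test):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Copeland-Erdos constant: concatenate the decimal digits of the primes.
digits = "".join(str(p) for p in range(2, 100) if is_prime(p))
print("0." + digits[:30])   # 0.235711131719232931374143475359
```

Champernowne's constant is obtained the same way by concatenating all integers (0.12345678910111213...).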
The 6+1 Basic Dimensionful Physical Constants (Proleptic SI)
(2003-07-26) The speed of light in a vacuum. [Exact, by definition of the meter (m)]
In April 2000, Kenneth Brecher (of Boston University) produced experimental evidence, at an unprecedented level of accuracy, which supports the main tenet of Einstein's Special Theory of Relativity, namely that the speed of light (c) does not depend on the speed of the source.
Brecher was able to claim a fabulous accuracy of less than one part in 10^20, improving the state-of-the-art by 10 orders of magnitude! Brecher's conclusions were based on the study of the sharpness of gamma-ray bursts (GRB) received from very distant sources: In such explosive events, gamma rays are emitted from points of very different [vectorial] velocities. Even minute differences in the speeds of these photons would translate into significantly different times of arrival, after traveling over immense cosmological distances. As no such spread is observed, a careful analysis of the data translates into the fabulous experimental accuracy quoted above, in support of Einstein's theoretical hypothesis.
When he announced his results at the April 2000 APS meeting in Long Beach (CA), Brecher declared that the constant c appears "even more fundamental than light itself" and he urged his colleagues to give it a proper name and start calling it Einstein's constant. The proposal was well received and has been gaining momentum ever since, to the point that the "new" name now seems fairly well accepted.
Since 1983, the constant c has been used to define the meter in terms of the second, by enacting as exact the above value of 299792458 m/s.
Where does the symbol "c" come from?
Historically, "c" was used for a constant which later came to be identified as the speed of electromagnetic propagation multiplied by the square root of 2 (this would be c√2, in modern terms). This constant appeared in Weber's force law and was thus known as "Weber's constant" for a while.
On at least one occasion, in 1873, James Clerk Maxwell (who normally used "V" to denote the speed of light) adjusted the meaning of "c" to let it denote the speed of electromagnetic waves instead.
In 1894, Paul Drude (1863-1906) made this explicit and was instrumental in popularizing "c" as the preferred notation for the speed of electromagnetic propagation. However, Drude still kept using the symbol "V" for the speed of light in an optical context, because the identification of light with electromagnetic waves was not yet common knowledge: Electromagnetic waves had first been observed in 1888, by Heinrich Hertz (1857-1894). Einstein himself used "V" for the speed of light and/or electromagnetic waves as late as 1907.
c may also be called the celerity of light: [Phase] celerity and [group] speed are normally two different things, but they coincide for light in a vacuum.
(2003-07-26) μ0 = 4π × 10^-7 N/A^2 = 1.256637061435917295... × 10^-6 H/m Magnetic permeability of the vacuum. [Definition of the ampere (A)]
The relation ε0 μ0 c^2 = 1 and the exact value of c yield an exact SI value, with a finite decimal expansion, for Coulomb's constant (in Coulomb's law):
1 / (4 π ε0) = 8.9875517873681764 × 10^9 ≈ 9 × 10^9 N·m^2/C^2
Consequently, the electric constant (dielectric permittivity of the vacuum) has a known infinite decimal expansion, derived from the above: ε0 = 1/(μ0 c^2) = 8.854187817... × 10^-12 F/m.
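These relations can be checked numerically from the exact value of c alone. A Python sketch (the variable names are mine):

```python
from math import pi

c    = 299792458          # speed of light in m/s, exact by definition of the meter
mu0  = 4e-7 * pi          # magnetic constant, exact (pre-2019 SI definition of the ampere)
k_e  = 1e-7 * c**2        # Coulomb's constant 1/(4 pi eps0): finite decimal expansion
eps0 = 1 / (mu0 * c**2)   # electric constant: infinite, but exactly known, expansion

print(k_e)                # about 8.9875517874e9 N·m²/C²
print(eps0)               # about 8.854187817e-12 F/m
```

Note that k_e = c² × 10⁻⁷ has a finite decimal expansion precisely because c is a defined integer, whereas ε0 inherits an infinite expansion from the factor of π.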
A photon of frequency ν has an energy hν, where h is Planck's constant. Using the pulsatance ω = 2πν, this is ħω, where ħ is Dirac's constant.
The constant ħ = h/2π is actually known under several names:
Dirac's constant.
The reduced Planck constant.
The rationalized Planck constant.
The quantum of angular momentum.
The quantum of spin (although some spins are half-multiples of this).
The constant ħ is pronounced either "h-bar" or (more rarely) "h-cross". It is equal to unity in the natural system of units of theoreticians (h is then 2π). The spins of all particles are multiples of ħ/2 = h/4π (an even multiple for bosons, an odd multiple for fermions).
There's a widespread belief that the letter h initially meant Hilfsgrösse ("auxiliary parameter" or, literally, "helpful quantity" in German) because that's the neutral way Max Planck (1858-1947) introduced it, in 1900.
Units :
As noted at the outset, the actual numerical value of Planck's constant depends on the units used. This, in turn, depends on whether we choose to express the rate of change of a periodic phenomenon directly as the change with time of its phase expressed in angular units (pulsatance) or as the number of cycles per unit of time (frequency). The latter can be seen as a special case of the former, when the angular unit of choice is a complete revolution (i.e., a "cycle" or "turn" of 2π radians).
A key symptom that angular units ought to be involved in the measurement of spin is that the sign of a spin depends on the conventional orientation of space (it's an axial quantity).
Likewise, angular momentum and the dynamic quantity which induces a change in it (torque) are axial properties, normally obtained as the cross-product of two radial vectors. One good way to stress this fact is to express torque in joules per radian (J/rad) when obtained as the cross-product of a distance in meters (m) and a force in newtons (N).
1 N·m = 1 J/rad = 2π J/cycle = 2π W/Hz = π/30 W/rpm
Note that torque and spectral power have the same physical dimension.
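A quick numerical check of these torque conversions (using 1 cycle = 2π rad and 1 rpm = 1/60 Hz; the variable names are mine):

```python
import math

torque   = 1.0                    # N·m, i.e. 1 J/rad
per_turn = torque * 2 * math.pi   # J/cycle: one full turn is 2*pi radians
per_hz   = per_turn               # W/Hz: power per unit of rotation frequency
per_rpm  = per_hz / 60            # W/rpm: 1 rpm = 1/60 Hz

print(per_turn, per_rpm)          # about 6.2832 and 0.10472 (= pi/30)
```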
Evolution from measured to defined values :
Current technology of the watt balance (which compares an electromagnetic force with a weight) is almost able to measure Planck's constant with the same precision as the best comparisons with the international prototype of the kilogram, the only SI unit still defined in terms of an arbitrary artifact. It is thus likely that Planck's constant could be given a de jure value in the near future, which would amount to a new definition of the SI unit of mass.
Resolution 7 of the 21st CGPM (October 1999) recommends "that national laboratories continue their efforts to refine experiments that link the unit of mass to fundamental or atomic constants with a view to a future redefinition of the kilogram". Although precise determinations of Avogadro's constant were mentioned in the discussion leading up to that resolution, the watt balance approach was considered more promising. It's also more satisfying to define the kilogram in terms of the fundamental Planck constant, rather than make it equivalent to a certain number of atoms in a silicon crystal. (Incidentally, the mass of N identical atoms in a crystal is slightly less than N times the mass of an isolated atom, because of the negative energy of interaction involved.)
In 1999, Peter J. Mohr and Barry N. Taylor proposed to define the kilogram in terms of an equivalent frequency ν = 1.35639274 × 10^50 Hz, which would make h equal to c^2/ν, or 6.626068927033756019661385... × 10^-34 J/Hz.
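The arithmetic behind that equivalence is just E = mc² = hν with m = 1 kg; plugging in Mohr and Taylor's proposed frequency recovers the implied value of h:

```python
c  = 299792458        # m/s, exact by definition of the meter
nu = 1.35639274e50    # Mohr-Taylor "kilogram frequency", in Hz
h  = c**2 / nu        # implied Planck constant, in J/Hz (E = m c² = h nu, m = 1 kg)

print(h)              # about 6.626069e-34
```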
Instead, it would probably be better to assign h or [rather] ħ = h/2π a rounded decimal value de jure. This would make the future definition of the kilogram somewhat less straightforward, but would facilitate actual usage when the utmost precision is called for. To best fit the "kilogram frequency" proposed by Mohr and Taylor, the de jure value of ħ would have been:
ħ = 1.054571623 × 10^-34 J·s/rad
However, a mistake which was corrected with the 2010 CODATA set makes that value substantially incompatible with our best experimental knowledge. Currently (2011) the simplest candidate for a de jure definition is:
ħ = 1.0545717 × 10^-34 J·s/rad
In 2018, an exact value of h will define the kilogram :
The instrument which will perform the defining measurement is the watt balance, invented in 1975 by Bryan Kibble (1938-2016). In 2016, the metrology community decided to rename the instrument a Kibble balance, in his honor (in a unanimous decision by the CCU = Consultative Committee for Units).
(2003-08-10) k = 1.3806488(13) × 10^-23 J/K Defining entropy and/or relating temperature to energy.
Boltzmann's constant is currently a measured quantity. However, it would be sensible to assign it a de jure value that would serve as an improved definition of the unit of thermodynamic temperature, the kelvin (K), which is currently defined in terms of the temperature of the triple point of water (i.e., 273.16 K = 0.01°C, both expressions being exact by definition).
History :
What's now known as Boltzmann's relation was first formulated by Boltzmann in 1877. It gives the entropy S of a system known to be in one of Ω equiprobable states. Following Abraham Pais, Eric W. Weisstein reports that Max Planck first used the constant k in 1900.
S = k ln (Ω)
The constant k became known as Boltzmann's constant around 1911 (Boltzmann had died in 1906) under the influence of Planck. Before that time, Lorentz and others had named the constant after Planck!
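As a worked example of Boltzmann's relation, a minimal Python sketch (the function name is mine):

```python
import math

k = 1.3806488e-23     # Boltzmann constant, J/K (2010 CODATA value)

def boltzmann_entropy(omega):
    """S = k ln(Omega) for a system with Omega equiprobable microstates."""
    return k * math.log(omega)

# A single microstate carries zero entropy; doubling Omega adds k ln 2.
print(boltzmann_entropy(1))                          # 0.0
print(boltzmann_entropy(2) - boltzmann_entropy(1))   # k ln 2, about 9.57e-24 J/K
```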
(2003-08-10) Number of things per mole of stuff: 6.02214129(27) × 10^23 /mol. In January 2011, the IAC argued for 6.02214082(18) × 10^23 /mol.
The constant is named after the Italian physicist Amedeo Avogadro (1776-1856), who formulated what is now known as Avogadro's Law, namely:
At the same temperature and[low] pressure,equal volumes of different gases contain the same number of molecules.
The current definition of the mole states that there are as many countable things in a mole as there are atoms in 12 grams of carbon-12 (the most common isotope of carbon).
Keeping this definition and giving a de jure value to the Avogadro number would effectively constitute a definition of the unit of mass. Rather, the above definition could be dropped, so that a de jure value given to Avogadro's number would constitute a proper definition of the mole, which would then be only approximately equal to 12 g of carbon-12 (or 27.97697027 g of silicon-28).
(2003-07-26) The "mechanical equivalent of light". [Definition of the candela (cd)]
The frequency of 540 THz (5.4 × 10^14 Hz) corresponds to yellowish-green light. This translates into a wavelength of about 555.1712185 nm in a vacuum, or about 555.013 nm in the air, which is usually quoted as 555 nm.
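The quoted vacuum wavelength is just the exact speed of light divided by the reference frequency:

```python
c = 299792458     # m/s, exact
f = 540e12        # Hz, the photometric reference frequency
lam = c / f       # vacuum wavelength, in m

print(lam * 1e9)  # about 555.171 nm
```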
This frequency, sometimes dubbed "the most visible light", was chosen as a basis for luminous units because it corresponds to a maximal combined sensitivity for the cones of the human retina (the receptors which allow normal color vision under bright-light photopic conditions).
The situation is quite different under low-light scotopic conditions, where human vision is essentially black-and-white (due to rods, not cones) with a peak response around a wavelength of 507 nm.
(2007-10-25) Newton's constant of gravitation: G = 6.674 × 10^-11 m^3 / (kg·s^2)
Assuming the above evolutions [1, 2, 3] come to pass, the SI scheme would define every unit in terms of de jure values of fundamental constants, using only one arbitrary definition for the unit of time (the second). There would be no need for that remaining arbitrary definition if the Newtonian constant of gravitation (the remaining fundamental constant) was given a de jure value.
There's no hope of ever measuring the constant of gravitation directly with enough precision to allow a metrological definition of the unit of time (the SI second) based on such a measurement.
However, if our mathematical understanding of the physical world progresses well beyond its current state, we may eventually be able to find a theoretical expression for the mass of the electron in terms of G. This would equate the determination of G to a measurement of the mass of the electron. Possibly, that could be done with the required metrological precision...
Fundamental Physical Constants
Here are a few physical constants of significant metrological importance, with the most precisely known ones listed first. For the utmost in precision, this is roughly the order in which they should be either measured or computed.
That number is a difficult-to-compute function of the fine-structure constant (α), which is actually known with a far lesser relative precision. However, that "low" precision pertains to a small corrective term away from unity, and the overall precision is much better.
The list starts with numbers that are known exactly (no uncertainty whatsoever) simply because of the way SI units are currently defined. Such exact numbers include the speed of light (c) in meters per second (cf. SI definition of the meter) or the vacuum permeability (μ0) in henries per meter (or, equivalently, newtons per squared ampere; see SI definition of the ampere).
Except as noted, all values are derived from CODATA 2010.
Physical Constants (sorted by relative uncertainty Δx/x)
Carl Sagan once needed an "obvious" universal length as a basic unit in a graphic message intended for [admittedly very unlikely] extra-terrestrial decoders. That famous picture was attached to the two space probes (Pioneer 10 and 11, launched in 1972 and 1973) which would become the first man-made objects ever to leave the Solar System.
Sagan chose one of the most prevalent lengths in the Cosmos, namely the wavelength of 21 cm corresponding to the hyperfine spin-flip transition of neutral hydrogen (isolated hydrogen atoms do pervade the Universe).
Hydrogen Line : 1420.4057517667 MHz 21.106114054179 cm
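The quoted wavelength is simply c divided by the measured frequency; a quick check in Python:

```python
c = 299792458             # m/s, exact
f = 1420.4057517667e6     # hydrogen hyperfine frequency, in Hz
lam = c / f               # wavelength, in m

print(lam * 100)          # about 21.10611 cm
```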
Back in 1970, the value of the hyperfine "spin-flip" transition frequency of the ground state of atomic hydrogen (protium) had already been measured with superb precision by Hellwig et al.:
1420.405751768 MHz.
This was based on a direct comparison with the hyperfine frequency of cesium-133, carried out at NBS (now NIST). In 1971, Essen et al. pushed the frontiers of precision to a level that has not been equaled since. Their results stood for nearly 40 years as the most precise measurement ever performed (the value of the magnetic moment of the electron expressed in Bohr magnetons is now known with slightly better precision).
1420.4057517667 MHz
Three years earlier (in 1967), a new definition of the SI second had been adopted, based on cesium-133, for technological convenience. Now, the world is almost ripe for a new definition of the unit of time based on hydrogen, the simplest element. Such a new definition might have much better prospects of being ultimately tied to the theoretical constants of Physics in the future.
A similar hyperfine "spin-flip" transition is observed for the 3He+ ion, which is another system consisting of a single electron orbiting a fermion. Like the proton, the helion has a spin of 1/2 in its ground state (unlike the proton, it also exists in a rare excited state of spin 3/2). The corresponding frequency was measured to be:
8665.649905 MHz (1966)
8665.649867 MHz (1969)
A very common microscopic yardstick is the equilibrium bond length in a hydrogen molecule (i.e., the average distance between the two protons in an ordinary molecule of hydrogen). It is not yet tied to the above fundamental constants and it's only known at modest experimental precision:
0.7414 Å = 7.414 × 10^-11 m
Primary Conversion Factors
Below are the statutory quantities which allow exact conversions between various physical units in different systems:
149597870700 m to the au: Astronomical unit of length. (2012) Enacted by the International Astronomical Union on August 31, 2012. This is the end of a long road which began in 1672, as Cassini proposed a unit equal to the mean distance between the Earth and the Sun. This was recast as the radius of the circular trajectory of a tiny mass that would orbit an isolated solar mass in one "year" (first an actual sidereal year, then a fixed approximation thereof, known as the Gaussian year).
(The obscure siriometer, introduced in 1911 by Carl Charlier (1862-1934) for interstellar distances, is 1 Mau = 1.495978707 × 10^17 m, or about 4.848 pc.)
25.4 mm to the inch: International inch. (1959) Enacted by an international treaty, effective January 1, 1959. This gives the following exact metric equivalences for other units of length: 1 ft = 0.3048 m, 1 yd = 0.9144 m, 1 mi = 1609.344 m.
39.37 "US survey" inches to the meter: "US Survey" inch. (1866, 1893) This equivalence is now obsolete, except in some records of the US Coast and Geodetic Survey. The International units defined in 1959 are exactly 2 ppm smaller than their "US Survey" counterparts (the ratio is 999998/1000000).
1 lb = 0.45359237 kg: International pound. (1959) Enacted by an international treaty, effective January 1, 1959. This gives the following exact metric equivalences for other customary units of mass: 1 oz = 28.349523125 g, 1 ozt = 31.1034768 g, 1 gn = 64.79891 mg, since there are 7000 gn to the lb, 16 oz to the lb, and 480 gn to the troy ounce (ozt).
231 cubic inches to the Winchester gallon: U.S. Gallon. (1707, 1836) This is now tied to the 1959 International inch, which makes the [Winchester] US gallon equal to exactly 3.785411784 L.
4.54609 L to the Imperial gallon: U.K. Gallon. (1985) This is the latest and final metric equivalence for a unit proposed in 1819 (and effectively introduced in 1824) as the volume of 10 lb of water at 62°F.
9.80665 m/s^2: Standard acceleration of gravity. (1901) Multiplying this by a unit of mass gives a unit of force equal to the weight of that mass under standard conditions, approximately equivalent to those that would prevail at 45° of latitude on Earth, at sea level. The value was enacted by the third CGPM in 1901. 1 kgf = 9.80665 N and 1 lbf = 4.4482216152605 N.
101325 Pa = 1 atm: Normal atmospheric pressure. (1954) As enacted by the 10th CGPM in 1954, the atmosphere unit (atm) is exactly 760 Torr. It's only approximately 760 mmHg, because of the following specification for the mmHg and other units of pressure based on the conventional density of mercury.
13595.1 g/L (or kg/m^3): Conventional density of mercury. This makes 760 mmHg equal a pressure of (0.76)(13595.1)(9.80665), or exactly 101325.0144354 Pa, which was rounded down in 1954 to give the official value of the atm stated above. The torr (whose symbol is capitalized: Torr) was then defined as 1/760 of the rounded value, which makes the mmHg very slightly larger than the torr, although both are used interchangeably in practice. The mmHg is based on this conventional density (which is close to the actual density of mercury at 0°C) regardless of whatever the actual density of mercury may be at the prevailing temperature when measurements are taken. Beware of what apparently authoritative sources may say on this subject...
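The exact figure can be reproduced with decimal arithmetic (binary floating point would only approximate it):

```python
from decimal import Decimal

rho_hg = Decimal("13595.1")   # conventional density of mercury, kg/m³
g_n    = Decimal("9.80665")   # standard acceleration of gravity, m/s²
h      = Decimal("0.76")      # 760 mm of mercury, expressed in meters

p = h * rho_hg * g_n          # pressure of 760 mmHg, in Pa
print(p)                      # exactly 101325.0144354 Pa
```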
999.972 g/L (or kg/m^3): Conventional density of "water". This is the conventional conversion factor between so-called relative density and absolute density. This is also the factor to use for units of pressure expressed as heights of a water column (just like the above conventional density of mercury is used for similar purposes, to obtain temperature-independent pressure units). This density is clearly very close to that of natural water at its densest point. However, it's best considered to be a conventional conversion factor.
The above number can be traced to the 1904 work of the Swiss-born French metrologist Charles E. Guillaume (1861-1938; Nobel 1920). Guillaume had joined the BIPM in 1883 and would be its director from 1915 to 1936. From 1901 (3rd CGPM) to 1964 (12th CGPM), the liter was (unfortunately) not defined as a cubic decimeter, but instead as the volume of 1 kg of water in its densest state under 1 atm of pressure (which indicates a temperature of about 3.984°C). Guillaume measured that volume to be 1000.028 cc, which is equivalent to the above conversion factor (to a 9-digit accuracy).
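The equivalence between Guillaume's 1000.028 cc and the conventional density is a one-line computation:

```python
v_cc = 1000.028       # Guillaume's volume, in cm³, of 1 kg of water at its densest
rho  = 1e6 / v_cc     # density in g/L (1 kg = 1000 g, 1 L = 1000 cm³)

print(rho)            # about 999.97200 g/L
```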
4.184 J to the calorie (cal): Thermochemical calorie. (1935) This is currently understood as the value of a calorie, unless otherwise specified (the 1956 "IST" calorie described below is slightly different). Watch out! The kilocalorie (1 kcal = 1000 cal) was dubbed "Calorie" or "Cal" [capital "C"] in dietetics before 1969 (it still is, at times).
2326 J/kg = 1 Btu/lb: IST heat capacity of water, per °F. (1956) This defines the IT or IST ("International [Steam] Tables") flavor of the Btu ("British Thermal Unit") in SI units, once the lb/kg ratio is known. That value was adopted in July 1956 by the 5th International Conference on the Properties of Steam, which took place in London, England. The subsequent definition of the pound as 0.45359237 kg (effective since January 1, 1959) makes the official Btu equal to exactly 1055.05585262 J. The rarely used centigrade heat unit (chu) is defined as 1.8 Btu (exactly 1899.100534716 J). The Btu was apparently introduced by Michael Faraday (before 1820?) as the quantity of heat required to raise one pound (lb) of water from 63°F to 64°F. This deprecated definition is roughly compatible with the modern one (and it remains mentally helpful) but it's metrologically inferior.
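The exact Btu and chu values follow from decimal multiplication:

```python
from decimal import Decimal

cap = Decimal("2326")         # IST heat capacity of water per °F, in J/kg
lb  = Decimal("0.45359237")   # international pound, in kg

btu = cap * lb                # exactly 1055.05585262 J
chu = btu * Decimal("1.8")    # exactly 1899.100534716 J

print(btu, chu)
```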
Dimensionless Physical Constants
(2018-06-02) A constant Galileo once had to measure is now known perfectly.
Galileo detected the simultaneity of two events by ear. When two bangs were less than about 11 ms apart, he heard a single sound and considered the two events simultaneous. That's probably why he chose that particular duration as his unit of time, which he called a tempo (plural tempi). The precise definition of the unit was in terms of a particular water-clock which he was using to measure longer durations.
Using a simple pendulum of length R, he would produce a bang one quarter-period after the release, by placing a metal gong just underneath the pivot point. On the other hand, he could also release a ball in free fall from a height H over another gong. Releasing the two things simultaneously, he could tell whether the two durations were equal (within the aforementioned precision) and adjust either length until they were.
Galileo observed that the ratio R/H was always the same and he measured the value of that constant as precisely as he could. Nowadays, we know the ideal value of that constant:
R/H = 8/π^2 = 0.8105694691387021715510357...
This much can be derived in any freshman physics class, using the elementary principles established by Newton after Galileo's death.
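The derivation amounts to equating a free-fall time √(2H/g) with a quarter period (π/2)√(R/g) of a small-oscillation pendulum; the value of g cancels out. A numerical check in Python:

```python
import math

g = 9.80665                                    # any value of g works; it cancels out
H = 1.0                                        # drop height, in m
R = (8 / math.pi**2) * H                       # pendulum length with matching quarter-period

t_fall    = math.sqrt(2 * H / g)               # free-fall time from height H
t_quarter = (math.pi / 2) * math.sqrt(R / g)   # quarter period (small oscillations)

print(R / H)                                   # 0.81056947...
```

(Strictly speaking, the quarter-period formula holds only in the small-amplitude limit, so Galileo's measured ratio would also carry a small amplitude-dependent correction.)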
Thus, Galileo's results can now be used backwards to estimate how good his experimental methods were. (Indeed, they were as good as can be expected when simultaneity is appreciated by ear.)
The dream of some theoretical physicists is now to advance our theories to the point that the various dimensionless physical constants which are now mysterious to us can be explained as easily as what I've called Galileo's constant here (for shock value).
Combining Planck's constant (h) with the two electromagnetic constants and/or the speed of light (recall that ε0 μ0 c^2 = 1), there's essentially only one way to obtain a quantity whose dimension is the square of an electric charge. The ratio of the square of the charge of an electron to that quantity is a pure dimensionless number known as Sommerfeld's constant or the fine-structure constant:
α = μ0 c e^2 / 2h = e^2 / 2 ε0 h c = 1 / 137.035999...
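Plugging 2010 CODATA values of e and h into the first form of the formula (with the exact pre-2019 values of c and μ0) reproduces the famous reciprocal; a Python sketch:

```python
import math

e   = 1.602176565e-19   # elementary charge, C (2010 CODATA)
h   = 6.62606957e-34    # Planck constant, J·s (2010 CODATA)
c   = 299792458         # m/s, exact
mu0 = 4e-7 * math.pi    # H/m, exact in the pre-2019 SI

alpha = mu0 * c * e**2 / (2 * h)
print(1 / alpha)        # about 137.036
```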
The value of this constant has captured the imagination of many generationsof physicists, professionals and amateurs alike. Many wild guesses have been made, often based on little more than dubious numerology.
In 1948, Edward Teller (1908-2003) suggested that the electromagnetic interaction might be weakening in cosmological time, and he ventured the guess that the fine-structure constant could be inversely proportional to the logarithm of the age of the Universe. This proposal was demolished by Dennis Wilkinson (1918-2013) using ordinary mineralogy, which shows that the rate of alpha-decay for U-238 could not have varied by much more than 10% in a billion years (that rate is extremely sensitive to the exact value of the fine-structure constant). Teller's proposal was further destroyed by precise measurements from the fossil reactors at Oklo (Gabon), which show that, two billion years ago, the fine-structure constant had essentially the same value as today.
(2016-11-17) A huge dimensionless ratio relates electricity to gravity. Paul Dirac tried to link it to other dimensionless physical constants.
In 1919, Hermann Weyl (1885-1955) remarked that the radius of the Universe and the radius of an electron would be exactly in the above ratio if the mass of the Universe was to gravitational energy what the mass of an electron is to electromagnetic energy (using, for example, the electrostatic argument leading to the classical radius of the electron).
In 1937, Dirac singled out the interactions between an electron and a proton instead, which led him to ponder a quantity equal to the above divided by the proton-to-electron mass ratio:
In 1966, E. Pascual Jordan (1902-1980) used Dirac's "variable gravity" cosmology to argue that the Earth had doubled in size since the continents were formed, thus advocating a very misguided alternative to plate tectonics (or continental drift).