Nowadays, we call Taylor's Theorem any of several variants of the following expansion of a smooth function f about a regular point a, in terms of a polynomial whose coefficients are determined by the successive derivatives of the function at that point:
$$f(a+x) \;=\; f(a) + f'(a)\,x + f''(a)\,\frac{x^2}{2} + \cdots + f^{(n)}(a)\,\frac{x^n}{n!} + R_n(x)$$
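For instance, here is a minimal Python sketch (our own illustration, using f = sin, whose successive derivatives at a cycle among sin, cos, -sin and -cos) showing the Taylor polynomial closing in on f(a+x) as the order n grows:

```python
import math

def taylor_sin(a, x, n):
    """Degree-n Taylor polynomial of sin about the point a."""
    # Successive derivatives of sin at a repeat with period 4:
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(derivs[k % 4] * x**k / math.factorial(k) for k in range(n + 1))

a, x = 1.0, 0.3
for n in (1, 3, 5, 7):
    # The gap printed last is the remainder R_n(x); it shrinks rapidly.
    print(n, taylor_sin(a, x, n), math.sin(a + x) - taylor_sin(a, x, n))
```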
A Taylor expansion about the origin (a = 0) is often called a Taylor-Maclaurin expansion, in honor of Colin Maclaurin (1698-1746), who focused on that special case in 1742.
Other variants of Taylor's theorem differ by the distinct explicit expressions which can be given for the so-called remainder R_n.
Taylor published two versions of his theorem in 1715. In a letter to his friend John Machin (1680-1751) dated July 26, 1712, Taylor gave Machin credit for the idea. Several variants or precursors of the theorem had also been discovered independently by James Gregory (1638-1675), Isaac Newton (1643-1727), Gottfried Leibniz (1646-1716), Abraham de Moivre (1667-1754) and Johann Bernoulli (1667-1748).
The term Taylor series was apparently coined by Simon Lhuilier (1785).
(2019-11-25) Due to Lagrange, Cauchy (1821), Young (1910) and Schlömilch (1923).
The aforementioned difference R_n(x) between the value of a function f and that of its Taylor polynomial (at order n) has a nice exact expression. In Lagrange's form, for some θ between 0 and 1:

$$R_n(x) \;=\; f^{(n+1)}(a+\theta x)\,\frac{x^{n+1}}{(n+1)!}$$
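As a sanity check, here is a short Python sketch (our own, using f = exp, so that every derivative is again exp): solving Lagrange's form for the intermediate point shows that θ does fall strictly between 0 and 1:

```python
import math

a, x, n = 0.5, 0.8, 6
taylor = sum(math.exp(a) * x**k / math.factorial(k) for k in range(n + 1))
remainder = math.exp(a + x) - taylor                     # R_n(x), computed directly
# Solve  R_n(x) = exp(a + t*x) * x^(n+1) / (n+1)!  for the intermediate point t:
t = (math.log(remainder * math.factorial(n + 1) / x**(n + 1)) - a) / x
print(remainder, t)                                      # t is about 0.13, inside (0,1)
```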
Taylor's theorem was brought to great prominence in 1772 by Joseph-Louis Lagrange (1736-1813), who declared it the basis for differential calculus (he made this part of his own lectures at Polytechnique in 1797).
Arguably, this was a rebuttal to religious concerns which had been raised in 1734 (The Analyst) by George Berkeley (1685-1753), Bishop of Cloyne (1734-1753), about the infinitesimal foundations of Calculus.
The mathematical concepts behind differentiation and/or integration are so pervasive that they can be introduced or discussed outside of the historical context which originally gave birth to them, one century before Lagrange.
Lagrange's starting point is an exact expression, valid for any polynomial f of degree n or less, over any commutative ring:
$$f(a+x) \;=\; f(a) + D_1 f(a)\,x + D_2 f(a)\,x^2 + \cdots + D_n f(a)\,x^n$$
In the ordinary interpretation of Calculus [over any field of characteristic zero] the following relations hold, for any polynomial f:
$$D_0 f(a) \;=\; f(a) \qquad\qquad D_k f(a) \;=\; \frac{f^{(k)}(a)}{k!}$$
However, the expressions below are true over any commutative ring, even when neither the reciprocal of k! nor higher-order derivatives are defined.
Lagrange's definitions of the D_k f(a) are based purely on the binomial theorem: when f has degree n, D_k f is simply a polynomial of degree n-k. No divisions are needed.
The following manipulations are limited to the case when f is a polynomial of degree at most n, so only finitely many terms are involved, both in the data and in the results. With infinitely many terms, convergence would not be guaranteed for either.
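A minimal Python sketch of this approach (our own; polynomials are represented as lists of coefficients, lowest degree first). Only ring operations appear, since the binomial coefficients are integers:

```python
from math import comb

def D(f, k):
    """Lagrange's k-th coefficient polynomial:
    D_k f(a) = sum over j of f_j * C(j,k) * a^(j-k), from the binomial theorem.
    No division is performed: C(j,k) is an integer."""
    return [comb(j, k) * c for j, c in enumerate(f)][k:]

def eval_poly(f, a):
    return sum(c * a**j for j, c in enumerate(f))

f = [5, 0, -3, 2]            # f(X) = 5 - 3 X^2 + 2 X^3, integer coefficients
a, x = 7, 4
lhs = eval_poly(f, a + x)
rhs = sum(eval_poly(D(f, k), a) * x**k for k in range(len(f)))
print(lhs == rhs)            # True:  f(a+x) = sum of D_k f(a) x^k
```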
(2008-12-23) A complex power series converges inside a disk and diverges outside of it (the situation at different points of the boundary circle may vary).
That disk is called the disk of convergence. Its radius is the radius of convergence and its boundary is the circle of convergence.
The result advertised above is often called Abel's power series theorem. Although it was known well before him, Abel is credited for making this part of a general discussion which includes the status of points on the circumference of the circle of convergence. The main tool for that is another theorem due to Abel, discussed in the next section.
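In practice, the radius of convergence can be read off the coefficients via the Cauchy-Hadamard formula 1/R = limsup |a_n|^(1/n) (not derived here). A tiny Python sketch, for the hypothetical coefficients a_n = n 2^n, whose radius is 1/2:

```python
import math

# Root test: 1/R = limsup |a_n|^(1/n).  Here a_n = n * 2^n, so R = 1/2.
n = 1000
log_a = math.log(n) + n * math.log(2)   # log |a_n|, avoiding huge integers
print(math.exp(-log_a / n))             # about 0.4966: tends to 1/2 as n grows
```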
(2018-06-01) Slice of the disk of convergence with its apex on the boundary.
(2019-12-06) Lagrange-Bürmann Inversion Formula.
Brian Keiffer (Yahoo! 2011-08-07) Defining exp(x) = Σ_n x^n/n! and e = exp(1), prove that exp(x) = e^x.
In their open disk of convergence (i.e., circular boundary excluded, unless it's at infinity) power series are absolutely convergent series. So, in that domain, the sum of the series is unchanged by modifying the order of the terms (commutativity) and/or grouping them together (associativity). This allows us to establish directly the following fundamental property (using the binomial theorem):
exp(x) exp(y) = exp(x+y)

$$\exp(x)\,\exp(y) \;=\; \left(\sum_{n}\frac{x^n}{n!}\right)\left(\sum_{m}\frac{y^m}{m!}\right) \;=\; \sum_{n,m}\frac{x^n}{n!}\,\frac{y^m}{m!} \;=\; \sum_{n}\,\sum_{k=0}^{n}\frac{x^k\,y^{n-k}}{k!\,(n-k)!} \;=\; \sum_{n}\frac{(x+y)^n}{n!} \;=\; \exp(x+y)$$
This lemma shows immediately that exp(-x) = (exp x)^-1. Then, by induction on the absolute value of the integer n, we can establish that:

$$\exp(n\,x) \;=\; (\exp x)^n$$

With m = n and y = n x, this gives exp(y) = (exp(y/m))^m. So:

$$\exp(y/m) \;=\; (\exp y)^{1/m}$$

Chaining those two results, we obtain, for any rational q = n/m:

$$\exp(q\,y) \;=\; (\exp y)^q$$

By continuity, the result holds for any real q = x. In particular, with y = 1:

$$\exp(x) \;=\; (\exp 1)^x \;=\; e^x$$
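A quick numerical confirmation in Python (our own sketch; exp_series is just a truncated partial sum of the defining series):

```python
import math

def exp_series(x, terms=60):
    """Partial sum of the defining series: sum of x^n / n! for n < terms."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)      # turn x^n/n! into x^(n+1)/(n+1)!
    return total

e = exp_series(1.0)
for x in (0.5, 2.0, -3.0):
    print(x, exp_series(x), e ** x)    # the two columns agree to many digits
```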
(2008-12-23) Power series that coincide wherever their disks of convergence overlap.
In the realm of real or complex numbers, two polynomials which coincide at infinitely many distinct points are necessarily equal (proof: as a polynomial with infinitely many roots, their difference must be zero).
This result on polynomials doesn't have an immediate generalization to analytic functions, for the simple reason that there are analytic functions with infinitely many zeroes. The sine function is one example of an analytic function with infinitely many discrete zeroes.
However, an analytic function defined on a nonempty open domain can be extended in only one way to a larger open domain of definition which doesn't encircle any point outside the previous one. Such an extension of an analytic function is called an analytic continuation thereof.
Divergent Series :
Loosely speaking, analytic continuations can make sense of divergent series in a consistent way. Consider, for example, the classic summation formula for the geometric series, which converges when |z| < 1 :
$$1 + z + z^2 + z^3 + z^4 + \cdots + z^n + \cdots \;=\; \frac{1}{1-z}$$
The right-hand side always makes sense, unless z = 1. It's thus tempting to equate it formally to the left-hand side, even when the latter diverges! This viewpoint has been shown to be consistent. It makes perfect sense of the following "sums" of divergent series, which may otherwise look like monstrosities (respectively obtained for z = -1, 2, 3):

$$1 - 1 + 1 - 1 + \cdots \;=\; \tfrac{1}{2} \qquad\quad 1 + 2 + 4 + 8 + \cdots \;=\; -1 \qquad\quad 1 + 3 + 9 + 27 + \cdots \;=\; -\tfrac{1}{2}$$
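In a short Python sketch (ours), the partial sums wander off (or oscillate) while the continuation 1/(1-z) quietly assigns the values quoted above:

```python
# The analytic continuation 1/(1-z) assigns a finite value even where
# the geometric series itself diverges (z = -1, 2, 3):
for z in (-1, 2, 3):
    partial = sum(z**n for n in range(20))   # oscillates or blows up
    print(z, partial, 1 / (1 - z))           # "sum" assigned by continuation
```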
(2021-07-22) A slight generalization of the technique introduced above.
Instead of retaining only the terms of a power series whose indices are multiples of a given modulus k, we may wish to keep only the terms whose indices are congruent to a prescribed remainder r modulo k. Thus, we're now after:
$$f_{k,r}(z) \;=\; \sum_{n\ge 0} a_{kn+r}\, z^{kn+r}$$
That can be worked out with our previous result (the special case r = 0) by applying it to the function z^(k-r) f(z). Using ζ = exp(2iπ/k), we have:

$$f_{k,r}(z) \;=\; \frac{1}{k}\,\sum_{j=0}^{k-1} \zeta^{-jr}\, f(\zeta^j z)$$
In the example k = 4 for f = exp, we have ζ = i and, therefore:
$$f_{4,r}(z) \;=\; \tfrac{1}{4}\left[\,e^z + (-i)^r\, e^{iz} + (-1)^r\, e^{-z} + i^r\, e^{-iz}\,\right]$$
That translates into four equations, for r = 0, 1, 2 or 3:
\begin{align*}
f_{4,0}(z) \;&=\; \tfrac{1}{4}\left[\,e^z + e^{iz} + e^{-z} + e^{-iz}\,\right] &&=\; \tfrac{1}{2}\left[\,\operatorname{ch} z + \cos z\,\right] \\
f_{4,1}(z) \;&=\; \tfrac{1}{4}\left[\,e^z - i\,e^{iz} - e^{-z} + i\,e^{-iz}\,\right] &&=\; \tfrac{1}{2}\left[\,\operatorname{sh} z + \sin z\,\right] \\
f_{4,2}(z) \;&=\; \tfrac{1}{4}\left[\,e^z - e^{iz} + e^{-z} - e^{-iz}\,\right] &&=\; \tfrac{1}{2}\left[\,\operatorname{ch} z - \cos z\,\right] \\
f_{4,3}(z) \;&=\; \tfrac{1}{4}\left[\,e^z + i\,e^{iz} - e^{-z} - i\,e^{-iz}\,\right] &&=\; \tfrac{1}{2}\left[\,\operatorname{sh} z - \sin z\,\right]
\end{align*}
The whole machinery may be overkill in this case, where the above four relations are fairly easy to obtain directly from the expansions of cos, ch, sin and sh. However, that's a good opportunity to introduce the methodology which is needed in less trivial cases, with other roots of unity...
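For the record, here is a direct numerical check in Python (a minimal sketch of ours; f_section implements the root-of-unity average for f = exp, and the imaginary parts vanish up to rounding):

```python
import cmath, math

def f_section(r, z, k=4):
    """Series multisection: (1/k) * sum over j of zeta^(-j*r) * exp(zeta^j * z)."""
    zeta = cmath.exp(2j * cmath.pi / k)
    return sum(zeta**(-j * r) * cmath.exp(zeta**j * z) for j in range(k)) / k

z = 0.7
print(f_section(0, z), (math.cosh(z) + math.cos(z)) / 2)
print(f_section(1, z), (math.sinh(z) + math.sin(z)) / 2)
print(f_section(2, z), (math.cosh(z) - math.cos(z)) / 2)
print(f_section(3, z), (math.sinh(z) - math.sin(z)) / 2)
```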
(2021-10-05) Applying the methods of calculus to discrete sequences.
Difference Operator (discrete derivative) :
$$\Delta f(n) \;=\; f(n+1) - f(n)$$
Like the usual differential operator (d) this is a linear operator, as are all the iterated difference operators Δ^k, recursively defined for k ≥ 0:
$$\Delta^0 f \;=\; f \qquad\qquad \Delta^{k+1} f \;=\; \Delta^k(\Delta f) \;=\; \Delta(\Delta^k f)$$
Unlike the differential operator d of infinitesimal calculus, the above difference operator yields ordinary finite quantities whose products can't be neglected; there's a third term in the corresponding product rule:

$$\Delta(uv) \;=\; (\Delta u)\,v + u\,(\Delta v) + (\Delta u)(\Delta v)$$
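That three-term product rule is easy to test numerically; a throwaway Python sketch (sequences represented as functions of n):

```python
def delta(f):
    """Forward difference operator: (delta f)(n) = f(n+1) - f(n)."""
    return lambda n: f(n + 1) - f(n)

u = lambda n: n * n
v = lambda n: 2 ** n
lhs = delta(lambda n: u(n) * v(n))
rhs = lambda n: delta(u)(n) * v(n) + u(n) * delta(v)(n) + delta(u)(n) * delta(v)(n)
for n in range(5):
    print(n, lhs(n), rhs(n))   # identical columns: all three terms are needed
```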
Falling Powers (falling factorials) :
The number of ways to pick a sequence of m objects out of n possible choices (allowing repetitions) is n^m, pronounced n to the power of m.
When objects already picked are disallowed, the result is denoted $n^{\underline{m}}$ and called n to the falling power of m. It's the product of m decreasing factors:
$$n^{\underline{m}} \;=\; (n)_m \;=\; n\,(n-1)\,(n-2)\,\cdots\,(n+1-m)$$
As usual, that's 1 when m = 0, because it's the product of zero factors. Falling powers are closely related to choice numbers:
$$C(n,m) \;=\; \binom{n}{m} \;=\; \frac{n!}{(n-m)!\;m!} \;=\; \frac{n^{\underline{m}}}{m!} \;=\; \frac{(n)_m}{m!}$$
Falling powers are to the calculus of finite differences (FDC) what ordinary powers are to infinitesimal calculus, since:
$$\Delta\, n^{\underline{m}} \;=\; m\; n^{\underline{m-1}}$$
Iterating this relation yields the pretty formula:
$$\Delta^k\, n^{\underline{m}} \;=\; m^{\underline{k}}\; n^{\underline{m-k}}$$
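These identities are straightforward to test in Python (a minimal sketch; the helper falling is our own):

```python
from math import comb, factorial

def falling(n, m):
    """Falling power of n, with m decreasing factors: n (n-1) ... (n+1-m)."""
    result = 1
    for i in range(m):
        result *= n - i
    return result

n, m = 9, 4
print(falling(n, m) == factorial(n) // factorial(n - m))            # True
print(comb(n, m) == falling(n, m) // factorial(m))                  # True
# Discrete analogue of the power rule (x^m)' = m x^(m-1):
print(falling(n + 1, m) - falling(n, m) == m * falling(n, m - 1))   # True
```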
Gregory-Newton forward-difference formula :
$$f(n) \;=\; \sum_{k\ge 0} \binom{n}{k}\, \Delta^k f(0)$$
When n is a natural integer, the right-hand side is a finite sum, as all binomial coefficients with k > n vanish.
Proof: By induction on n (the case n = 0 being trivial).
Assuming the formula holds for a given n, we apply it to Δf and obtain:
$$f(n+1) - f(n) \;=\; \Delta f(n) \;=\; \sum_{k\ge 0} \binom{n}{k}\, \Delta^{k+1} f(0) \;=\; \sum_{k\ge 0} \binom{n}{k-1}\, \Delta^{k} f(0)$$
Note the zero leading term (k = 0) in the re-indexed rightmost sum. We may add this finite sum termwise to the previous expansion of f(n) to obtain, by Pascal's rule:

$$f(n+1) \;=\; \sum_{k\ge 0} \left[\binom{n}{k} + \binom{n}{k-1}\right] \Delta^k f(0) \;=\; \sum_{k\ge 0} \binom{n+1}{k}\, \Delta^k f(0)$$

This completes the induction.
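A compact Python verification of the Gregory-Newton formula (our own sketch; the difference table of a cubic terminates, so five samples determine the whole sequence):

```python
from math import comb

def gregory_newton(values, n):
    """Reconstruct f(n) from forward differences at 0:
    f(n) = sum over k of C(n,k) * Delta^k f(0)."""
    diffs, row = [], list(values)
    while row:
        diffs.append(row[0])                       # Delta^k f(0)
        row = [b - a for a, b in zip(row, row[1:])]
    return sum(comb(n, k) * d for k, d in enumerate(diffs))

f = lambda n: n**3 - 2 * n + 5
values = [f(n) for n in range(5)]                  # 5 samples suffice for a cubic
print(all(gregory_newton(values, n) == f(n) for n in range(20)))   # True
```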