Power Series


Related Links (Outside this Site)

Complex Variables, Complex Analysis   by John H. Mathews  (2000).
Complex Variables, Contour Integration   by Joceline Lega  (1998).

 

Power Series and Analytic Continuations

(2009-01-07)   Brook Taylor  (1685-1731)
Smooth functions as sums of power series.

Brook Taylor (1685-1731) invented the calculus of finite differences  and came up with the fundamental technique of  integration by parts.

Nowadays,  we call Taylor's Theorem  several variants of the following expansion of a smooth function f  about a regular point a,  in terms of a polynomial whose coefficients are determined by the successive derivatives of the function at that point:

f (a + x)   =   f (a)  +  f '(a) x  +  f ''(a) x^2/2  +  ...  +  f^(n)(a) x^n/n!  +  R_n(x)

A Taylor expansion about the origin  (a = 0)  is often called a Taylor-Maclaurin expansion,  in honor of Colin Maclaurin (1698-1746)  who focused on that special case in 1742.

Other variants of Taylor's theorem differ by the distinct explicit expressions which can be given for the so-called remainder  R_n .

Taylor published two versions of his theorem in 1715.  In a letter to his friend John Machin (1680-1751)  dated July 26, 1712,  Taylor gave Machin credit for the idea.  Several variants or precursors of the theorem had also been discovered independently by James Gregory (1638-1675),  Isaac Newton (1643-1727),  Gottfried Leibniz (1646-1716),  Abraham de Moivre (1667-1754)  and Johann Bernoulli (1667-1748).

The term Taylor series  was apparently coined by Simon Lhuilier (1785).


(2019-11-25)  
Due to Lagrange, Cauchy (1821), Young (1910) and Schlömilch (1923).

The aforementioned  difference  Rn(x) between the value of a function f  and that of its Taylor polynomial (at order n)  has a nice exact  expression:

Taylor Remainder   (Brook Taylor, 1715)

$$ R_n(x) \;=\; \int_0^x \frac{(x-t)^n}{n!}\; f^{(n+1)}(a+t)\; dt $$

Proof :  If  n > 0 ,  we use  Taylor's own integration by parts  to obtain:

R_n(x)   =   R_{n-1}(x)  −  f^(n)(a) x^n / n!

By induction  on  n,  the general formula then follows from the trivial case  (n = 0)  which is just the fundamental theorem of calculus:

R_0(x)   =   f (a+x)  −  f (a)    ∎
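As an illustration (not part of the original text), the integral form of the remainder is easy to check numerically.  The sketch below uses f = sin, a = 0.3, x = 0.8 and a crude midpoint-rule quadrature, all arbitrary choices;  it compares f(a+x) minus the Taylor polynomial of order n with the integral above.

```python
import math

def taylor_remainder_check(n=4, a=0.3, x=0.8, steps=1000):
    """Compare f(a+x) - [Taylor polynomial of order n] with the integral form of Rn(x), for f = sin."""
    def d(k, t):                     # k-th derivative of sin at t (derivatives of sin cycle with period 4)
        return math.sin(t + k * math.pi / 2)

    # Left-hand side: the actual remainder.
    poly = sum(d(k, a) * x**k / math.factorial(k) for k in range(n + 1))
    lhs = math.sin(a + x) - poly

    # Right-hand side: midpoint-rule value of the integral of (x-t)^n/n! * f^(n+1)(a+t), t from 0 to x.
    h = x / steps
    rhs = h * sum((x - (i + 0.5) * h)**n / math.factorial(n) * d(n + 1, a + (i + 0.5) * h)
                  for i in range(steps))
    return lhs, rhs

print(taylor_remainder_check())      # the two values agree to many decimal places
```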

Taylor-Lagrange Formula :

Lagrange Remainder   (Joseph-Louis Lagrange)

$$ R_n(x) \;=\; \frac{x^{n+1}}{(n+1)!}\; f^{(n+1)}(a+\theta) \qquad \text{for some } \theta \text{ with } |\theta| \le |x| $$

Proof :  That's the result of applying the mean-value theorem to Taylor's original expression  (1715)  for  R_n(x)  as given in the previous section.  ∎

Taylor-Cauchy Formula

 Come back later, we're still working on this one...

Taylor-Young formula (1910) :   R_n(x) is negligible  compared to  x^n  as  x  tends to  0.  A formulation due to William Henry Young (1863-1942).

R_n(x)   <<   x^n

 Come back later, we're still working on this one...


(2015-04-19)   Lagrange (1736-1813)
Lagrange's strict algebraic  interpretation of differential calculus.

Taylor's theorem  was brought to great prominence in 1772 by Joseph-Louis Lagrange (1736-1813)  who declared it the basis for differential calculus  (he made this part of his own lectures at Polytechnique  in 1797).

Arguably,  this was a rebuttal to religious concerns which had been raised in 1734 (The Analyst)  by George Berkeley (1685-1753),  Bishop of Cloyne (1734-1753),  about the infinitesimal  foundations of Calculus.

The mathematical concepts behind differentiation and/or integration are so pervasive that they can be introduced or discussed outside of the historical context which originally gave birth to them,  one century before Lagrange.

The starting point of Lagrange is an exact expression,  valid for any polynomial  f of degree  n  or less,  in any commutative ring :

f (a + x)   =   f (a)  +  D_1 f (a) x  +  D_2 f (a) x^2  +  ...  +  D_n f (a) x^n

In the ordinary interpretation of Calculus  [over any field of characteristic zero]  the following relation holds,  for any polynomial f :

D_0 f (a)   =   f (a)               D_k f (a)   =   f^(k)(a) / k!

However,  the expressions below are true even when neither the reciprocal of  k!  nor higher-order derivatives are defined,  over any commutative ring.

Lagrange's definitions of   D_k f (a)   are just based on the binomial theorem.  D_k f  is simply another polynomial  (of degree  n-k  when  f  has degree  n).  No divisions are needed.

The following manipulations are limited to the case when  f  is a polynomial of degree at most  n,  so only finitely many terms are involved in the data and in the results.  With infinitely many terms,  convergence could not be guaranteed on either side.
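As an aside (not in the original text), here is a minimal sketch of that division-free definition for an integer-coefficient polynomial:  D_k f (a)  is read off from the binomial theorem, using only ring operations and the integer binomial coefficients C(m,k).

```python
from math import comb            # comb(m, k) is an integer: no division by k! is ever performed

def D(k, coeffs, a):
    """Division-free D_k f(a), where f(X) = sum of coeffs[m] * X^m:
       expand f(a+x) with the binomial theorem and keep the coefficient of x^k."""
    return sum(c * comb(m, k) * a**(m - k) for m, c in enumerate(coeffs) if m >= k)

# Example over the integers:  f(X) = 2 + 3X + 5X^3,  expanded about  a = 7.
coeffs, a = [2, 3, 0, 5], 7
print([D(k, coeffs, a) for k in range(4)])           # [1738, 738, 105, 5]

# Defining identity  f(a+x) = sum_k D_k f(a) x^k,  checked at x = 2:
f = lambda X: sum(c * X**m for m, c in enumerate(coeffs))
assert f(a + 2) == sum(D(k, coeffs, a) * 2**k for k in range(4))
```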

 Come back later, we're still working on this one...


(2008-12-23)  
A complex power series converges inside a disk and diverges outside of it  (the situation at different points of the boundary circle may vary).

That disk is called the disk of convergence. Its radius is the radius of convergence and its boundary is the circle of convergence.

The result advertised above is often called Abel's power series theorem.  Although it was known well before him,  Abel is credited for making this part of a general discussion which includes the status of points on  the circumference of the circle of convergence.  The main tool for that is another  theorem due to Abel,  discussed in the next section.

 Come back later, we're still working on this one...


(2018-06-01)  
Slice of the disk of convergence with its apex on the boundary.

 Come back later, we're still working on this one...


(2021-08-08)  

 Come back later, we're still working on this one...


(2019-12-06)  
Lagrange-Bürmann Inversion Formula.

 Come back later, we're still working on this one...


Brian Keiffer (Yahoo! 2011-08-07)
Defining  exp (x) = Σ_n x^n/n!  and  e = exp (1),  prove that  exp (x) = e^x

In their open disk of convergence (i.e., circular boundary excluded, unless it's at infinity)  power series are absolutely convergent  series.  So, in that domain, the sum of the series is unchanged by modifying the order of the terms  (commutativity)  and/or  grouping them together  (associativity).  This allows us to establish directly the following fundamental property  (using the binomial theorem):

exp (x) exp (y)   =  exp (x+y)

$$ \exp(x)\,\exp(y) \;=\; \Big(\sum_{n}\frac{x^n}{n!}\Big)\Big(\sum_{m}\frac{y^m}{m!}\Big) \;=\; \sum_{n}\,\sum_{m}\,\frac{x^n}{n!}\cdot\frac{y^m}{m!} $$

$$ \;=\; \sum_{n}\;\sum_{k=0}^{n}\;\frac{x^k\,y^{n-k}}{k!\,(n-k)!} \;=\; \sum_{n}\frac{(x+y)^n}{n!} \;=\; \exp(x+y) $$
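Before moving on, here is a quick numerical illustration of that diagonal regrouping (a sketch; the truncation order and the sample values of x and y are arbitrary choices):

```python
from math import factorial, exp

N = 30                       # truncation order (ample for |x|, |y| of order 1)
x, y = 0.7, -1.3

# Regrouping along the diagonals:  sum over k of  x^k y^(n-k) / (k! (n-k)!)  equals  (x+y)^n / n!
diagonals = sum(sum(x**k * y**(n - k) / (factorial(k) * factorial(n - k)) for k in range(n + 1))
                for n in range(N))
print(diagonals, exp(x + y))         # both are about 0.548811636...
```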

This lemma shows immediately that  exp (-x)  =  (exp x)^-1.  Then,  by induction  on the absolute value of the integer  n,  we can establish that:

exp (n x)   =   (exp x)^n

With  m = n  and  y = n x,  this gives  exp (y)   =   (exp (y/m))^m .   So :

exp (y / m)   =   (exp y)^(1/m)

Chaining those two results,  we obtain,  for any rational  q = n/m :

exp (q y)   =   (exp y)^q

By continuity, the result holds for any real  q = x.  In particular, with y = 1:

exp (x)   =   (exp 1)^x  =  e^x     QED


(2008-12-23)  
Power series that coincide wherever their disks of convergence overlap.

In the realm of real or complex  numbers,  two polynomials which coincide at infinitely many distinct points are necessarily equal  (proof:  as a polynomial with infinitely many roots,  their difference must be zero).

This result on polynomials doesn't have an immediate generalization to analytic functions,  for the simple reason that there are analytic functions  with infinitely many zeroes.  The sine  function is one example of an analytic function with infinitely many discrete  zeroes.

However,  an analytic function defined on a nonempty open domain can be extended in only one way to a larger open domain of definition which doesn't encircle any point outside the previous one.  Such an extension of an analytic function is called an analytic continuation  thereof.

Divergent Series :

Loosely speaking, analytic continuations can make sense of divergent series  in a consistent way. Consider, for example, the classic summation formula for the geometric series,  which converges when |z| < 1 :

1  +  z  +  z^2  +  z^3  +  z^4  +  ...  +  z^n  +  ...   =   1 / (1-z)

The right-hand-side always makes sense, unless  z = 1. It's thus tempting to equate it formally to the left-hand-side, even when the latter diverges! This viewpoint has been shown to be consistent. It makes perfect sense of the following "sums" of divergent series  which may otherwise look like monstrosities (respectively obtained for  z = -1, 2, 3) :

1  −  1  +  1  −  1  +  ...  +  (-1)^n  +  ...   =    ½
1  +  2  +  4  +  8  +  16  +  ...  +  2^n  +  ...   =   -1
1  +  3  +  9  +  27  +  81  +  ...  +  3^n  +  ...   =   -½


Dimitrina Stavrova (2008-12-22; e-mail)  
What is the sum of  8^n / (3n)!  over all natural integers  n ?

Answer :   1/3 ( e^2  +  2 cos (√3) / e )   =   2.423641733185364535425...

That's a special case  (for  z = 2,  k = 3,  a_n = 1/n! )  of this problem:

For an integer  k  and a known series  f (z)  =  Σ_n a_n z^n ,  find the value of:

f_k (z)   =   Σ_n  a_{kn} z^{kn}

The key is to introduce a primitive k-th  root of unity,  like  ω = exp (2πi / k).

1  +  ω  +  ω^2  +  ...  +  ω^(k-1)   =   0           ω^k   =   1

The quantity   1  +  ω^j  +  ω^2j  +  ...  +  ω^(k-1)j   is  k  when  j  is a multiple of  k  and vanishes  otherwise.  Matching coefficients of  a_j z^j ,  we obtain:

f (z)  +  f (ω z)  +  f (ω^2 z)  +  ...  +  f (ω^(k-1) z)     =     k f_k (z)

For  f (z)  =  e^z  this gives the advertised result as  f_3 (2)  in the form:

1/3 [ exp (2)  +  exp (2ω)  +  exp (2ω^2) ]    where   ω  =  ½ (-1 + i √3 )
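That closed form is easy to confirm numerically;  the sketch below (not in the original text) sums the series directly and compares it with  (1/3)(e^2 + 2 cos(√3)/e):

```python
from math import factorial, exp, cos, sqrt

series = sum(8**n / factorial(3 * n) for n in range(20))        # the series converges extremely fast
closed = (exp(2) + 2 * cos(sqrt(3)) / exp(1)) / 3
print(series, closed)                # both print 2.4236417331853645...
```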

On 2008-12-26, Dimitrina Stavrova wrote:   [edited summary]
I am greatly impressed by the quick and accurate generalization of my question, which gave me a deeper understanding of the related material.  Thank you for creating such a great site!
Dimitrina Stavrova, Ph.D.
Sofia, Bulgaria

Thanks for the kind words, Dimitrina.

Σ x^n / (3n)!   in closed form


(2021-07-22)  
A slight generalization of the technique introduced above.

Instead of retaining only the terms of a power series whose indices are multiples of a given modulus  k,  we may wish to keep only indices whose residue modulo  k  is a prescribed remainder  r.  Thus,  we're now after:

f_k,r (z)   =   Σ_n  a_{kn+r} z^{kn+r}

That can be worked out with our previous result  (the special case  r = 0)  by applying it to the function  z^(k-r) f (z).  Using  ω = exp (2πi/k),  we have:

z^(k-r) f (z)  +  (ω z)^(k-r) f (ω z)  + ... +  (ω^(k-1) z)^(k-r) f (ω^(k-1) z)   =   k  z^(k-r) f_k,r (z)

Dividing both sides by  k z^(k-r)  and using  ω^k = 1,  we obtain the desired result:

f_k,r (z)  =  (1/k) [ f (z)  +  ω^-r f (ω z)  +  ω^-2r f (ω^2 z)  + ... +  ω^-(k-1)r f (ω^(k-1) z) ]

In the example  k = 4  for  f = exp,  we have  ω = i  and,  therefore:

f_4,r (z)   =   ¼  [ e^z  +  (-i)^r e^(iz)  +  (-1)^r e^(-z)  +  i^r e^(-iz) ]

That translates into four equations,  for  r = 0, 1, 2 or 3:

f_4,0 (z)   =   ¼  [ e^z  +  e^(iz)  +  e^(-z)  +  e^(-iz) ]   =   ½  [ ch z  +  cos z ]
f_4,1 (z)   =   ¼  [ e^z  −  i e^(iz)  −  e^(-z)  +  i e^(-iz) ]   =   ½  [ sh z  +  sin z ]
f_4,2 (z)   =   ¼  [ e^z  −  e^(iz)  +  e^(-z)  −  e^(-iz) ]   =   ½  [ ch z  −  cos z ]
f_4,3 (z)   =   ¼  [ e^z  +  i e^(iz)  −  e^(-z)  −  i e^(-iz) ]   =   ½  [ sh z  −  sin z ]

The whole machinery may be overkill in this case,  where the above four relations are fairly easy to obtain directly from the expansions of cos, ch, sin and sh.  However,  that's a good opportunity to introduce the methodology which is needed in less trivial cases with other roots of unity...
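Here is a small numerical check (not in the original text) of the general multisection formula, using the case  k = 4,  r = 1  as an example;  the sample point  z = 1.7  is an arbitrary choice:

```python
import cmath, math

def multisection(f, k, r, z):
    """f_{k,r}(z) = (1/k) * sum over m of  omega^(-m*r) * f(omega^m * z),  omega = exp(2*pi*i/k)."""
    omega = cmath.exp(2j * cmath.pi / k)
    return sum(omega**(-m * r) * f(omega**m * z) for m in range(k)) / k

z = 1.7
direct   = sum(z**n / math.factorial(n) for n in range(1, 60, 4))     # terms n = 1, 5, 9, ...
filtered = multisection(cmath.exp, 4, 1, z)
print(direct, filtered.real, (math.sinh(z) + math.sin(z)) / 2)        # all three values agree
```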

Let's use this to compute the deformed exponential  exp_q (z)  when  q = -1.

$$ \exp_q(z) \;=\; \sum_{n}\ \frac{z^n}{n!}\ q^{\,n(n-1)/2} $$

When  q = -1,  the value  of  q^((n-1)n/2)  depends only on what  n  is modulo 4:

n mod 4            |   0   |   1   |   2   |   3
(-1)^((n-1)n/2)    |  +1   |  +1   |  -1   |  -1

Therefore,   exp_-1 (z)   =   f_4,0 (z)  +  f_4,1 (z)  −  f_4,2 (z)  −  f_4,3 (z).   So:

exp_-1 (z)   =   cos z  +  sin z   =   √2  sin (z + π/4)

The technique applies when  q  is a root of unity.  For example,  with  q = i  the series splits according to the residue of the index  n  modulo 8:

n mod 8           |   0   |   1   |   2   |   3   |   4   |   5   |   6   |   7
i^((n-1)n/2)      |  +1   |  +1   |  +i   |  -i   |  -1   |  -1   |  -i   |  +i

exp_i (z)    =    f_8,0  +  f_8,1  +  i f_8,2  −  i f_8,3  −  f_8,4  −  f_8,5  −  i f_8,6  +  i f_8,7

where

$$ f_{8,r}(z) \;=\; \frac{1}{8}\ \sum_{m=0}^{7}\ e^{-mr\pi i/4}\ \exp\!\left( z\, e^{m\pi i/4} \right) $$

After a fairly tedious computation,  this boils down to:

exp_i (z)    =    ½ [ e^(iπ/4) exp( z e^(7iπ/4) )  +  exp( z e^(5iπ/4) )  −  e^(iπ/4) exp( z e^(3iπ/4) )  +  exp( z e^(iπ/4) ) ]
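The last two identities can be verified directly against the defining series;  a minimal sketch (not in the original text; the sample point is an arbitrary choice):

```python
import cmath, math

def exp_q(q, z, terms=60):
    """Deformed exponential:  sum of  z^n/n! * q^(n(n-1)/2)."""
    return sum(z**n / math.factorial(n) * q**(n * (n - 1) // 2) for n in range(terms))

z = 0.9
# q = -1 :   exp_{-1}(z)  =  cos z + sin z
print(exp_q(-1, z), math.cos(z) + math.sin(z))

# q = i :   the four-term expression above,  with  w = exp(i*pi/4)  a primitive 8th root of unity
w = cmath.exp(1j * cmath.pi / 4)
closed = 0.5 * (w * cmath.exp(z * w**7) + cmath.exp(z * w**5)
                - w * cmath.exp(z * w**3) + cmath.exp(z * w))
print(exp_q(1j, z), closed)          # both values agree
```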

The case  k = 3   (ω = e^(2πi/3))  is much simpler,  entailing only a 3-way split:

n mod 3           |   0   |   1   |   2
ω^((n-1)n/2)      |  +1   |  +1   |   ω

exp_ω (z)    =    f_3,0  +  f_3,1  +  ω f_3,2

f_3,0 (z)   =   1/3  [ e^z  +  e^(ωz)  +  e^(ω*z) ]
f_3,1 (z)   =   1/3  [ e^z  +  ω* e^(ωz)  +  ω  e^(ω*z) ]
f_3,2 (z)   =   1/3  [ e^z  +  ω  e^(ωz)  +  ω* e^(ω*z) ]

exp_ω (z)    =   1/3  [ (2+ω) e^z  +  (1+2ω*) e^(ωz)  +  (2+ω) e^(ω*z) ]

Period (along  n)  of  exp ( n(n-1)πi/k )    (A022998)

k   |   1   2   3   4   5   6   7   8   9   10   11   12   13   14   15   16
m   |   1   2   3   8   5  12   7  16   9   20   11   24   13   28   15   32
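The table can be recomputed by brute force (a sketch, not in the original text):  the period is the smallest  m > 0  such that  n(n-1)/2  and  (n+m)(n+m-1)/2  agree modulo  k  for every  n,  i.e. such that  nm + m(m-1)/2  is always a multiple of  k.

```python
def period(k):
    """Smallest m > 0 with  n*m + m*(m-1)/2  divisible by k  for every n  (checking n mod k suffices)."""
    m = 1
    while any((n * m + m * (m - 1) // 2) % k for n in range(k)):
        m += 1
    return m

print([period(k) for k in range(1, 17)])
# [1, 2, 3, 8, 5, 12, 7, 16, 9, 20, 11, 24, 13, 28, 15, 32]      (A022998)
```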


(2021-10-05)
Applying the methods of calculus to discrete sequences.

Difference Operator    (discrete derivative) :

Δf (n)   =   f (n+1)  −  f (n)

Like the usual differential operator  (d)  this is a linear  operator,  as are all iterated  difference operators  Δ^k,  recursively defined for  k ≥ 0 :

Δ^0 f   =   f
Δ^(k+1) f   =   Δ^k (Δf )   =   Δ (Δ^k f )

Unlike the differential operator  d  of infinitesimal calculus,  the above difference operator  Δ  yields ordinary finite quantities whose products can't be neglected;  there's a third term in the corresponding product rule :

Δ (uv)   =   (Δu) v   +   u (Δv)   +   (Δu) (Δv)
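The extra cross term is easy to verify on sample sequences;  a minimal sketch (the sequences u and v are arbitrary choices):

```python
def delta(f):
    """Forward difference operator:  (delta f)(n) = f(n+1) - f(n)."""
    return lambda n: f(n + 1) - f(n)

u = lambda n: n * n                  # u(n) = n^2
v = lambda n: 3 * n + 1              # v(n) = 3n + 1
uv = lambda n: u(n) * v(n)

n = 5
lhs = delta(uv)(n)
rhs = delta(u)(n) * v(n) + u(n) * delta(v)(n) + delta(u)(n) * delta(v)(n)
print(lhs, rhs)                      # both equal 284
```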

Falling Powers   (falling factorials) :

The number of ways to pick a sequence of  m  objects out of  n  possible choices  (allowing repetitions)  is  n^m,  pronounced  n  to the power of  m.

When objects already picked are disallowed,  the result is denoted  (n)_m  (or  n  with an underlined exponent  m)  and called  n  to the falling power of  m.  It's the product of  m  decreasing factors:

(n)_m   =   n (n-1) (n-2) ... (n+1-m)

As usual,  that's  1  when  m = 0,  because it's the product of zero factors. Falling powers are closely related to choice numbers:

$$ C(n,m) \;=\; \binom{n}{m} \;=\; \frac{n!}{(n-m)!\;m!} \;=\; \frac{n^{\underline{m}}}{m!} \;=\; \frac{(n)_m}{m!} $$

Falling powers are to the calculus of finite differences (FDC) what ordinary powers are to infinitesimal calculus,  since:

Δ (n)_m   =   m (n)_{m-1}

Iterating this relation yields the pretty formula:

Δ^k (n)_m   =   (m)_k  (n)_{m-k}

Gregory-Newton forward-difference formula :

$$ f(n) \;=\; \sum_{k\ge 0}\ \binom{n}{k}\ \Delta^k f(0) $$

When  n  is a natural integer, the right-hand side is a finite  sum, as all binomial coefficients  with  k > n  vanish.

Proof :  By induction  on  n  (the case  n = 0  being trivial):

Assuming the formula holds for a given  n,  we apply it to  Δf  and obtain:

$$ f(n+1) - f(n) \;=\; \Delta f(n) \;=\; \sum_{k\ge 0}\binom{n}{k}\,\Delta^{k+1} f(0) \;=\; \sum_{k\ge 0}\binom{n}{k-1}\,\Delta^{k} f(0) $$
 

Note the zero leading term  (k = 0)  in the re-indexed rightmost sum.  We may add this finite sum termwise to the previous expansion of  f (n)  to obtain:

$$ f(n+1) \;=\; \sum_{k\ge 0}\left[\,\binom{n}{k-1}+\binom{n}{k}\,\right]\Delta^k f(0) \;=\; \sum_{k\ge 0}\binom{n+1}{k}\,\Delta^k f(0) $$

This says that the formula holds for  n+1   QED

Falling powers make the above look like a Taylor-Maclaurin  expansion:

f (n)   =   f (0)  +  Δf (0) (n)_1  +  Δ^2 f (0) (n)_2 / 2  +  ...  +  Δ^k f (0) (n)_k / k!  +  ...

When one  Δ^k f  vanishes identically,  so do all the subsequent ones,  and the above right-hand side then gives  f  directly as a polynomial function of  n.
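To close, here is a short sketch (not in the original text) that computes the iterated differences  Δ^k f (0)  of a sample polynomial sequence and rebuilds the sequence with the Gregory-Newton formula:

```python
from math import comb

def forward_differences(values):
    """From [f(0), f(1), f(2), ...] return [f(0), delta f(0), delta^2 f(0), ...]."""
    out, row = [], list(values)
    while row:
        out.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return out

f = lambda n: n**3 - 2 * n + 5                        # sample cubic sequence (arbitrary choice)
d = forward_differences([f(n) for n in range(6)])
print(d)                                              # [5, -1, 6, 6, 0, 0]: differences vanish beyond order 3

# Gregory-Newton:  f(n) = sum over k of  C(n,k) * delta^k f(0)
g = lambda n: sum(comb(n, k) * d[k] for k in range(len(d)))
print([f(n) for n in range(10)] == [g(n) for n in range(10)])     # True
```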

 (c) Copyright 2000-2023, Gerard P. Michon, Ph.D.
