
Distribution Theory/Bump functions

From Wikibooks, open books for an open world

Preliminary definitions


Definition:

Let $\varphi : U \to \mathbb{R}$ be a function, where $U$ is an open subset of $\mathbb{R}^d$. We say that $\varphi$ is smooth, written $\varphi \in \mathcal{C}^\infty(U)$, if and only if all partial derivatives of $\varphi$ of every order exist and are continuous on $U$.

Definition:

Let $(X, \tau)$ be a topological space and let $f : X \to \mathbb{R}$ be a function. Then the support of $f$ is defined to be the set

$$\operatorname{supp} f := \overline{\{x \in X \mid f(x) \neq 0\}};$$

the bar above the set on the right denotes the topological closure.

Definition:

A bump function is a function $\varphi$ from an open set $U \subseteq \mathbb{R}^d$ to $\mathbb{R}$ such that the following two conditions are satisfied:

  1. $\operatorname{supp} \varphi$ is compact
  2. $\varphi \in \mathcal{C}^\infty(U)$
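The definitions above admit a classical example: the function $x \mapsto e^{-1/(1-x^2)}$ on $(-1, 1)$, extended by zero outside. The following Python sketch (the function name `bump` and the scale parameter `r` are our own choices, not from the text) evaluates the one-dimensional version:

```python
import math

def bump(x, r=1.0):
    """Classic one-dimensional bump function:
    exp(-1/(1 - (x/r)^2)) for |x| < r, and 0 otherwise.
    It is smooth on all of R, and its support is the compact set [-r, r]."""
    t = (x / r) ** 2
    if t >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - t))

# Strictly positive inside (-r, r), identically zero outside.
print(bump(0.0))  # e^{-1} ≈ 0.3679
print(bump(2.0))  # 0.0
```

Smoothness at the boundary $|x| = r$ holds because every derivative of $e^{-1/(1-t)}$ tends to zero as $t \to 1^-$.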

Multiindex notation


Multiindex notation is an efficient way of denoting several objects in multi-dimensional space. For instance, writing a partial derivative the usual way takes fairly long: in the usual notation, a partial derivative is denoted

$$\partial_1^{k_1} \cdots \partial_d^{k_d} f$$

for some $k_1, \ldots, k_d \in \mathbb{N}$. In multiindex notation, the $k_1, \ldots, k_d$ are assembled into a vector $\alpha = (k_1, \ldots, k_d)$, and the term

$$\partial_\alpha f$$

is then used instead of the partial derivative notation above. For a single partial derivative this may not be a huge advantage (unless one is talking about a general partial derivative), but when, say, one sums all partial derivatives of a polynomial $p$, one obtains expressions such as

$$\sum_{\alpha \in \mathbb{N}_0^d} \partial_\alpha p$$ (note that this is well-defined: since $p$ is a polynomial, only finitely many of its partial derivatives are nonzero, so the sum is finite).

Now compare this to the much longer

$$\sum_{k_1=0}^{\infty} \cdots \sum_{k_d=0}^{\infty} \partial_1^{k_1} \cdots \partial_d^{k_d} p;$$

as you can see, we save a lot of writing, and that is what the notation is all about. Multiindex notation was invented by Laurent Schwartz.

Other multiindex conventions are the following (we use a convention by Béla Bollobás and denote $[d] := \{1, \ldots, d\}$): for a multiindex $\alpha = (k_1, \ldots, k_d)$, a multiindex $\beta$ and $x \in \mathbb{R}^d$,

- $x^\alpha := x_1^{k_1} \cdots x_d^{k_d}$,
- $\alpha! := k_1! \cdots k_d!$,
- $\beta \leq \alpha$ if and only if the inequality holds in every component, and
- $\binom{\alpha}{\beta} := \frac{\alpha!}{\beta!\,(\alpha - \beta)!}$ for $\beta \leq \alpha$.

Further, the absolute value of a multiindex $\alpha = (k_1, \ldots, k_d)$ is defined as

$$|\alpha| := \sum_{j=1}^d k_j.$$
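The absolute value above, together with the standard componentwise conventions $x^\alpha := x_1^{k_1} \cdots x_d^{k_d}$ and $\alpha! := k_1! \cdots k_d!$, is mechanical enough to sketch in Python (the helper names are our own; a multiindex is represented as a tuple of nonnegative integers):

```python
from math import factorial, prod

def abs_multiindex(alpha):
    # |alpha| = k_1 + ... + k_d
    return sum(alpha)

def power(x, alpha):
    # x^alpha = x_1^{k_1} * ... * x_d^{k_d}
    return prod(xi ** ki for xi, ki in zip(x, alpha))

def mi_factorial(alpha):
    # alpha! = k_1! * ... * k_d!
    return prod(factorial(k) for k in alpha)

alpha = (2, 0, 3)
print(abs_multiindex(alpha))           # 5
print(power((2.0, 5.0, 1.0), alpha))   # 2^2 * 5^0 * 1^3 = 4.0
print(mi_factorial(alpha))             # 2! * 0! * 3! = 12
```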

A few sample theorems on multiindices are these (we'll need them often):

Theorem (multiindex binomial formula):

Let $\alpha \in \mathbb{N}_0^d$ be a multiindex and let $x, y \in \mathbb{R}^d$. Then

$$(x+y)^\alpha = \sum_{\mathbf{0} \leq \beta \leq \alpha} \binom{\alpha}{\beta} x^\beta y^{\alpha - \beta}.$$

Note that this formula looks exactly as in the one-dimensional case, with one dimensional variables replaced by multiindex variables. This will be a recurrent phenomenon.
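The formula can be checked numerically for small multiindices. The following sketch (the names `mi_binom` and `binomial_rhs` are ours) evaluates the right-hand side by enumerating all $\beta \leq \alpha$ componentwise:

```python
from itertools import product
from math import comb, prod

def mi_binom(alpha, beta):
    # multiindex binomial coefficient: a product of 1-d binomial coefficients
    return prod(comb(a, b) for a, b in zip(alpha, beta))

def binomial_rhs(x, y, alpha):
    """Sum over all 0 <= beta <= alpha of binom(alpha, beta) x^beta y^(alpha-beta)."""
    total = 0.0
    for beta in product(*(range(a + 1) for a in alpha)):
        xb = prod(xi ** bi for xi, bi in zip(x, beta))
        yab = prod(yi ** (ai - bi) for yi, ai, bi in zip(y, alpha, beta))
        total += mi_binom(alpha, beta) * xb * yab
    return total

x, y, alpha = (1.5, -2.0), (0.5, 3.0), (2, 3)
lhs = prod((xi + yi) ** ai for xi, yi, ai in zip(x, y, alpha))
print(lhs, binomial_rhs(x, y, alpha))  # both 4.0
```

Here $(x+y)^\alpha = (1.5 + 0.5)^2 \cdot (-2.0 + 3.0)^3 = 4$, and the enumerated sum agrees.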

Proof:

We prove the theorem by induction on $|\alpha|$. For $|\alpha| = 0$ the claim is clear. Now suppose the theorem has been proven whenever $|\alpha| = n$, and let instead $|\alpha| = n + 1$. Then $\alpha$ has at least one nonzero component; say the $j$-th component of $\alpha$ is nonzero. Then $\alpha' := \alpha - e_j$ (where $e_j$ denotes the $j$-th unit vector, i.e. $e_j = (0, \ldots, 0, 1, 0, \ldots, 0)$ with the $1$ in the $j$-th place) is a multiindex of absolute value $n$. By induction,

$$(x+y)^{\alpha'} = \sum_{\mathbf{0} \leq \beta \leq \alpha'} \binom{\alpha'}{\beta} x^\beta y^{\alpha' - \beta}$$

and hence, multiplying both sides by $(x+y)^{e_j} = x_j + y_j$,

$$\begin{aligned}
(x+y)^{\alpha} &= (x_j + y_j) \sum_{\mathbf{0} \leq \beta \leq \alpha'} \binom{\alpha'}{\beta} x^\beta y^{\alpha' - \beta} \\
&= \sum_{e_j \leq \beta \leq \alpha} \binom{\alpha'}{\beta - e_j} x^\beta y^{\alpha - \beta} + \sum_{\mathbf{0} \leq \beta \leq \alpha'} \binom{\alpha'}{\beta} x^\beta y^{\alpha - \beta} \\
&= \binom{\alpha'}{\alpha - e_j} x^\alpha y^{\mathbf{0}} + \sum_{e_j \leq \beta \leq \alpha'} \left( \binom{\alpha'}{\beta - e_j} + \binom{\alpha'}{\beta} \right) x^\beta y^{\alpha - \beta} + \binom{\alpha'}{\mathbf{0}} x^{\mathbf{0}} y^{\alpha} \\
&= \sum_{\mathbf{0} \leq \beta \leq \alpha} \binom{\alpha}{\beta} x^\beta y^{\alpha - \beta}
\end{aligned}$$

because

$$\binom{\alpha'}{\beta - e_j} + \binom{\alpha'}{\beta} = \binom{\alpha' + e_j}{\beta} = \binom{\alpha}{\beta}$$

by the corresponding rule for the usual $1$-dimensional binomial coefficient, applied in the $j$-th component. $\Box$

Theorem (multiindex product rule):

Let $\alpha \in \mathbb{N}_0^d$ be a multiindex, let $U \subseteq \mathbb{R}^d$ be open and let $f, g \in \mathcal{C}^{|\alpha|}(U)$. Then

$$\partial_\alpha (f \cdot g) = \sum_{\beta \leq \alpha} \binom{\alpha}{\beta} \partial_\beta f \cdot \partial_{\alpha - \beta} g;$$

in particular, $f \cdot g \in \mathcal{C}^{|\alpha|}(U)$.

Proof:

Again, we proceed by induction on $|\alpha|$; the base case $|\alpha| = 0$ is clear. For the induction step, pick, as before, $j \in [d]$ such that the $j$-th entry of $\alpha$ is nonzero, and define $\alpha' := \alpha - e_j$. Then by induction

$$\begin{aligned}
\partial_\alpha (f \cdot g) &= \partial_{e_j} \sum_{\beta \leq \alpha'} \binom{\alpha'}{\beta} \partial_\beta f \cdot \partial_{\alpha' - \beta} g \\
&= \sum_{\beta \leq \alpha'} \binom{\alpha'}{\beta} \left( \partial_{\beta + e_j} f \cdot \partial_{\alpha' - \beta} g + \partial_\beta f \cdot \partial_{\alpha - \beta} g \right) \\
&= \sum_{e_j \leq \beta \leq \alpha} \binom{\alpha'}{\beta - e_j} \partial_\beta f \cdot \partial_{\alpha - \beta} g + \sum_{\mathbf{0} \leq \beta \leq \alpha'} \binom{\alpha'}{\beta} \partial_\beta f \cdot \partial_{\alpha - \beta} g \\
&= \binom{\alpha'}{\alpha - e_j} \partial_\alpha f \cdot g + \sum_{e_j \leq \beta \leq \alpha'} \left( \binom{\alpha'}{\beta - e_j} + \binom{\alpha'}{\beta} \right) \partial_\beta f \cdot \partial_{\alpha - \beta} g + \binom{\alpha'}{\mathbf{0}} f \cdot \partial_\alpha g \\
&= \sum_{\mathbf{0} \leq \beta \leq \alpha} \binom{\alpha}{\beta} \partial_\beta f \cdot \partial_{\alpha - \beta} g,
\end{aligned}$$

where the last step uses the same binomial-coefficient identity as in the previous proof. $\Box$

Note that the proof is essentially the same as in the previous theorem, since by the product rule, differentiation in one direction has the same effect as multiplying the "sum of derivatives" to the existing derivatives.
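The product rule can be verified exactly on polynomials, where every derivative is computable symbolically. In this sketch (the representation and helper names are our own: a polynomial in $d$ variables is a dict mapping multiindices to coefficients), both sides of the identity are computed and compared:

```python
from itertools import product
from math import comb, prod

def poly_mul(f, g):
    """Multiply polynomials stored as {multiindex: coefficient}."""
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            m = tuple(i + j for i, j in zip(a, b))
            h[m] = h.get(m, 0) + ca * cb
    return {m: c for m, c in h.items() if c}

def poly_diff(f, alpha):
    """Apply the partial derivative d_alpha to f, monomial by monomial."""
    h = {}
    for mono, c in f.items():
        if all(m >= k for m, k in zip(mono, alpha)):
            # falling factorials m(m-1)...(m-k+1), one per variable
            fall = prod(prod(range(m - k + 1, m + 1)) for m, k in zip(mono, alpha))
            h[tuple(m - k for m, k in zip(mono, alpha))] = c * fall
    return {m: c for m, c in h.items() if c}

def leibniz_rhs(f, g, alpha):
    """Sum over beta <= alpha of binom(alpha, beta) d_beta f * d_(alpha-beta) g."""
    h = {}
    for beta in product(*(range(k + 1) for k in alpha)):
        s = prod(comb(a, b) for a, b in zip(alpha, beta))
        gamma = tuple(a - b for a, b in zip(alpha, beta))
        for m, c in poly_mul(poly_diff(f, beta), poly_diff(g, gamma)).items():
            h[m] = h.get(m, 0) + s * c
    return {m: c for m, c in h.items() if c}

f = {(2, 1): 3, (0, 2): 1}   # 3 x^2 y + y^2
g = {(1, 1): 2}              # 2 x y
print(poly_diff(poly_mul(f, g), (1, 1)) == leibniz_rhs(f, g, (1, 1)))  # True
```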

Note that the dimension of the respective multiindex must always match the dimension of the space we are considering.

Stability properties, TVS of bump functions, convergence

Retrieved from "https://en.wikibooks.org/w/index.php?title=Distribution_Theory/Bump_functions&oldid=3177516"