
Product rule

From Wikipedia, the free encyclopedia
Formula for the derivative of a product
This article is about the derivative of a product. For the relation between derivatives of three dependent variables, see Triple product rule. For a counting principle in combinatorics, see Rule of product. For conditional probabilities, see Chain rule (probability).

Geometric illustration of a proof of the product rule[1]

In calculus, the product rule (or Leibniz rule[2] or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as {\displaystyle (u\cdot v)'=u'\cdot v+u\cdot v'} or in Leibniz's notation as {\displaystyle {\frac {d}{dx}}(u\cdot v)={\frac {du}{dx}}\cdot v+u\cdot {\frac {dv}{dx}}.}
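As a quick numerical sanity check of the statement above (the functions sin and exp below are illustrative choices, not taken from the article), the two sides of the rule can be compared with a central-difference approximation of the derivative:

```python
import math

# Numerical sanity check of the product rule (u*v)' = u'*v + u*v'
# for u(x) = sin(x), v(x) = exp(x); the functions and the point x
# are example choices for illustration.

def u(x): return math.sin(x)
def v(x): return math.exp(x)

def deriv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
lhs = deriv(lambda t: u(t) * v(t), x)          # derivative of the product
rhs = deriv(u, x) * v(x) + u(x) * deriv(v, x)  # product rule
assert abs(lhs - rhs) < 1e-6
```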

The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts.

Discovery


Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using "infinitesimals" (a precursor to the modern differential).[3] (However, J. M. Child, a translator of Leibniz's papers,[4] argues that it is due to Isaac Barrow.) Here is Leibniz's argument:[5] Let u and v be functions. Then d(uv) is the same thing as the difference between two successive uv's; let one of these be uv, and the other (u + du)(v + dv); then: {\displaystyle {\begin{aligned}d(u\cdot v)&{}=(u+du)\cdot (v+dv)-u\cdot v\\&{}=u\cdot dv+v\cdot du+du\cdot dv.\end{aligned}}}

Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that {\displaystyle d(u\cdot v)=v\cdot du+u\cdot dv} and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain {\displaystyle {\frac {d}{dx}}(u\cdot v)=v\cdot {\frac {du}{dx}}+u\cdot {\frac {dv}{dx}}} which can also be written in Lagrange's notation as {\displaystyle (u\cdot v)'=v\cdot u'+u\cdot v'.}
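Leibniz's "negligible" term can be made concrete numerically (the particular u, v, and base point below are example choices): the increment of the product splits exactly into u·dv + v·du + du·dv, and the leftover du·dv shrinks like h² while the main terms shrink like h:

```python
import math

# Illustrates Leibniz's argument: d(uv) = u*dv + v*du + du*dv, where
# the du*dv term is of order h**2 and so is negligible relative to
# the other two terms as the increment h shrinks.
u_f = lambda x: x**2
v_f = lambda x: math.cos(x)

x = 1.0
for h in (1e-2, 1e-4):
    du = u_f(x + h) - u_f(x)
    dv = v_f(x + h) - v_f(x)
    d_uv = u_f(x + h) * v_f(x + h) - u_f(x) * v_f(x)
    main = u_f(x) * dv + v_f(x) * du
    # the leftover is exactly du*dv (an algebraic identity) ...
    assert abs(d_uv - main - du * dv) < 1e-12
    # ... and it is of order h**2
    assert abs(du * dv) < 10 * h**2
```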

First proofs


Both Leibniz and Newton gave proofs that are not rigorous by modern standards. Leibniz reasoned with "infinitely small quantities", interpreting products as areas of rectangles, while Newton reasoned with "flowing quantities".[6][7]


Proofs


Limit definition of derivative


Let h(x) = f(x)g(x) and suppose that f and g are each differentiable at x. We want to prove that h is differentiable at x and that its derivative, h′(x), is given by f′(x)g(x) + f(x)g′(x). To do this, {\displaystyle f(x)g(x+\Delta x)-f(x)g(x+\Delta x)} (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used. {\displaystyle {\begin{aligned}h'(x)&=\lim _{\Delta x\to 0}{\frac {h(x+\Delta x)-h(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)g(x+\Delta x)-f(x)g(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)g(x+\Delta x)-f(x)g(x+\Delta x)+f(x)g(x+\Delta x)-f(x)g(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {{\big [}f(x+\Delta x)-f(x){\big ]}\cdot g(x+\Delta x)+f(x)\cdot {\big [}g(x+\Delta x)-g(x){\big ]}}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}\cdot \lim _{\Delta x\to 0}g(x+\Delta x)+\lim _{\Delta x\to 0}f(x)\cdot \lim _{\Delta x\to 0}{\frac {g(x+\Delta x)-g(x)}{\Delta x}}\\[5pt]&=f'(x)g(x)+f(x)g'(x).\end{aligned}}} The fact that {\displaystyle \lim _{\Delta x\to 0}g(x+\Delta x)=g(x)} follows from the fact that differentiable functions are continuous.
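The add-and-subtract trick in the proof can be traced numerically (f, g, x, and the step size below are example choices): the raw difference quotient of the product equals the two-term decomposition exactly, and both converge to f′g + fg′:

```python
import math

# Mirrors the proof's decomposition: the difference quotient of h = f*g
# splits into [f(x+dx)-f(x)]/dx * g(x+dx) + f(x) * [g(x+dx)-g(x)]/dx.
f = lambda x: math.exp(x)
g = lambda x: math.sin(x)

x, dx = 0.5, 1e-7
quotient = (f(x + dx) * g(x + dx) - f(x) * g(x)) / dx
split = ((f(x + dx) - f(x)) / dx) * g(x + dx) + f(x) * ((g(x + dx) - g(x)) / dx)
exact = math.exp(x) * math.sin(x) + math.exp(x) * math.cos(x)  # f'g + fg'

assert abs(quotient - split) < 1e-6   # the decomposition is an identity
assert abs(quotient - exact) < 1e-5   # and it converges to f'g + fg'
```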

Linear approximations


By definition, if {\displaystyle f,g:\mathbb {R} \to \mathbb {R} } are differentiable at {\displaystyle x}, then we can write linear approximations: {\displaystyle f(x+h)=f(x)+f'(x)h+\varepsilon _{1}(h)} and {\displaystyle g(x+h)=g(x)+g'(x)h+\varepsilon _{2}(h),} where the error terms are small with respect to h: that is, {\textstyle \lim _{h\to 0}{\frac {\varepsilon _{1}(h)}{h}}=\lim _{h\to 0}{\frac {\varepsilon _{2}(h)}{h}}=0,} also written {\displaystyle \varepsilon _{1},\varepsilon _{2}\sim o(h)}. Then: {\displaystyle {\begin{aligned}f(x+h)g(x+h)-f(x)g(x)&=(f(x)+f'(x)h+\varepsilon _{1}(h))(g(x)+g'(x)h+\varepsilon _{2}(h))-f(x)g(x)\\[.5em]&=f(x)g(x)+f'(x)g(x)h+f(x)g'(x)h-f(x)g(x)+{\text{error terms}}\\[.5em]&=f'(x)g(x)h+f(x)g'(x)h+o(h).\end{aligned}}} The "error terms" consist of items such as {\displaystyle f(x)\varepsilon _{2}(h),f'(x)g'(x)h^{2}} and {\displaystyle hf'(x)\varepsilon _{1}(h)} which are easily seen to have magnitude {\displaystyle o(h).} Dividing by {\displaystyle h} and taking the limit {\displaystyle h\to 0} gives the result.
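The o(h) claim can be observed directly (the functions and base point below are example choices): subtracting the linear part [f′g + fg′]h from the product's increment leaves a remainder whose ratio to h tends to 0:

```python
import math

# Checks that the remainder in f(x+h)g(x+h) - f(x)g(x) - [f'g + fg']h
# is o(h): dividing it by h still tends to 0 as h shrinks.
f, fp = math.sin, math.cos            # f and its derivative f'
g, gp = math.exp, math.exp            # g and its derivative g'

x = 0.3
ratios = []
for h in (1e-1, 1e-2, 1e-3):
    remainder = f(x + h) * g(x + h) - f(x) * g(x) - (fp(x) * g(x) + f(x) * gp(x)) * h
    ratios.append(abs(remainder) / h)

# each tenfold refinement of h shrinks remainder/h roughly tenfold
assert ratios[0] > ratios[1] > ratios[2]
assert ratios[2] < 5e-3
```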

Quarter squares


This proof uses the chain rule and the quarter square function {\displaystyle q(x)={\tfrac {1}{4}}x^{2}} with derivative {\displaystyle q'(x)={\tfrac {1}{2}}x}. Writing f = uv, we have: {\displaystyle uv=q(u+v)-q(u-v),} and differentiating both sides gives: {\displaystyle {\begin{aligned}f'&=q'(u+v)(u'+v')-q'(u-v)(u'-v')\\[4pt]&=\left({\tfrac {1}{2}}(u+v)(u'+v')\right)-\left({\tfrac {1}{2}}(u-v)(u'-v')\right)\\[4pt]&={\tfrac {1}{2}}(uu'+vu'+uv'+vv')-{\tfrac {1}{2}}(uu'-vu'-uv'+vv')\\[4pt]&=vu'+uv'.\end{aligned}}}
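Both steps of the quarter-square argument can be checked with concrete numbers (the sample values of u, v and their derivatives below are arbitrary illustrative choices): the identity uv = q(u+v) − q(u−v) holds exactly, and the chain-rule expansion collapses to vu′ + uv′:

```python
# Verifies the quarter-square identity uv = q(u+v) - q(u-v) with
# q(x) = x**2 / 4, and the resulting derivative v*u' + u*v',
# for sample values (illustrative, not from the article).
q = lambda x: x * x / 4.0

u, up = 3.0, 0.5    # value of u and its derivative u' at some point
v, vp = -2.0, 4.0   # value of v and its derivative v'

assert q(u + v) - q(u - v) == u * v

# chain rule applied to both quarter squares:
lhs = 0.5 * (u + v) * (up + vp) - 0.5 * (u - v) * (up - vp)
assert abs(lhs - (v * up + u * vp)) < 1e-12
```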

Multivariable chain rule


The product rule can be considered a special case of the chain rule for several variables, applied to the multiplication function {\displaystyle m(u,v)=uv}: {\displaystyle {d(uv) \over dx}={\frac {\partial (uv)}{\partial u}}{\frac {du}{dx}}+{\frac {\partial (uv)}{\partial v}}{\frac {dv}{dx}}=v{\frac {du}{dx}}+u{\frac {dv}{dx}}.}
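The chain-rule viewpoint can be checked numerically (u, v, and the base point below are example choices): the partials of m(u, v) = uv along the curve are v and u, and combining them with du/dx and dv/dx reproduces the direct derivative of the product:

```python
import math

# Product rule as the multivariable chain rule applied to m(u, v) = u*v:
# d(uv)/dx = (dm/du) * du/dx + (dm/dv) * dv/dx = v*u' + u*v'.
u = lambda x: math.log(1 + x * x)
v = lambda x: math.cos(x)

def deriv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.9
dm_du = v(x)   # partial of uv with respect to u, evaluated along the curve
dm_dv = u(x)   # partial of uv with respect to v
chain = dm_du * deriv(u, x) + dm_dv * deriv(v, x)
direct = deriv(lambda t: u(t) * v(t), x)
assert abs(chain - direct) < 1e-6
```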

Non-standard analysis


Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals within the framework of non-standard analysis, specifically the hyperreal numbers. Using st to denote the standard part function that associates to a finite hyperreal number the real infinitely close to it, this gives {\displaystyle {\begin{aligned}{\frac {d(uv)}{dx}}&=\operatorname {st} \left({\frac {(u+du)(v+dv)-uv}{dx}}\right)\\&=\operatorname {st} \left({\frac {uv+u\cdot dv+v\cdot du+du\cdot dv-uv}{dx}}\right)\\&=\operatorname {st} \left({\frac {u\cdot dv+v\cdot du+du\cdot dv}{dx}}\right)\\&=\operatorname {st} \left(u{\frac {dv}{dx}}+(v+dv){\frac {du}{dx}}\right)\\&=u{\frac {dv}{dx}}+v{\frac {du}{dx}}.\end{aligned}}} This was essentially Leibniz's proof exploiting the transcendental law of homogeneity (in place of the standard part above).

Smooth infinitesimal analysis


In the context of Lawvere's approach to infinitesimals, let {\displaystyle dx} be a nilsquare infinitesimal. Then {\displaystyle du=u'\ dx} and {\displaystyle dv=v'\ dx}, so that {\displaystyle {\begin{aligned}d(uv)&=(u+du)(v+dv)-uv\\&=uv+u\cdot dv+v\cdot du+du\cdot dv-uv\\&=u\cdot dv+v\cdot du+du\cdot dv\\&=u\cdot dv+v\cdot du\end{aligned}}} since {\displaystyle du\,dv=u'v'(dx)^{2}=0.} Dividing by {\displaystyle dx} then gives {\displaystyle {\frac {d(uv)}{dx}}=u{\frac {dv}{dx}}+v{\frac {du}{dx}}} or {\displaystyle (uv)'=u\cdot v'+v\cdot u'}.
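The nilsquare argument has a faithful computational analogue in dual numbers, a + bε with ε² = 0 (the small class and sample values below are a sketch of this idea, not something from the article): multiplying two duals drops the du·dv term automatically, so the ε-part of a product is exactly u·v′ + v·u′:

```python
# A minimal dual-number class (a + b*eps with eps**2 = 0), mirroring the
# nilsquare-infinitesimal argument: the eps-part of a product of duals is
# exactly u*v' + v*u', with the du*dv term vanishing by construction.
class Dual:
    def __init__(self, re, eps):
        self.re, self.eps = re, eps

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)

u = Dual(3.0, 0.5)   # u with derivative u' = 0.5
v = Dual(2.0, -1.5)  # v with derivative v' = -1.5
p = u * v
assert p.re == 6.0                          # the product uv
assert p.eps == 3.0 * (-1.5) + 2.0 * 0.5    # u*v' + v*u'
```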

Logarithmic differentiation


Let {\displaystyle h(x)=f(x)g(x)}. Taking the absolute value of each function and the natural log of both sides of the equation, {\displaystyle \ln |h(x)|=\ln |f(x)g(x)|.} Applying properties of the absolute value and logarithms, {\displaystyle \ln |h(x)|=\ln |f(x)|+\ln |g(x)|.} Taking the logarithmic derivative of both sides: {\displaystyle {\frac {h'(x)}{h(x)}}={\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}.} Solving for {\displaystyle h'(x)} and substituting back {\displaystyle f(x)g(x)} for {\displaystyle h(x)} gives: {\displaystyle {\begin{aligned}h'(x)&=h(x)\left({\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}\right)\\&=f(x)g(x)\left({\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}\right)\\&=f'(x)g(x)+f(x)g'(x).\end{aligned}}} Note: taking the absolute value of the functions is necessary because logarithms are only real-valued for positive arguments; this does not affect the derivative, since {\displaystyle {\tfrac {d}{dx}}(\ln |u|)={\tfrac {u'}{u}}}.
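The key identity h′/h = f′/f + g′/g can be verified numerically (the functions and base point below are example choices, with f and g nonzero near the point; abs() plays the role described in the note above):

```python
import math

# Numerical check of h'/h = f'/f + g'/g for h = f*g, using functions
# that stay nonzero near the sample point.
f = lambda x: x**3 + 2.0
g = lambda x: math.exp(-x)

def deriv(fn, x, h=1e-6):
    """Central-difference approximation of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

x = 1.2
h_fn = lambda t: f(t) * g(t)
log_deriv_h = deriv(lambda t: math.log(abs(h_fn(t))), x)
assert abs(log_deriv_h - (deriv(f, x) / f(x) + deriv(g, x) / g(x))) < 1e-6
```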

Generalizations


Product of more than two factors


The product rule can be generalized to products of more than two factors. For example, for three factors we have {\displaystyle {\frac {d(uvw)}{dx}}={\frac {du}{dx}}vw+u{\frac {dv}{dx}}w+uv{\frac {dw}{dx}}.} For a collection of functions {\displaystyle f_{1},\dots ,f_{k}}, we have {\displaystyle {\frac {d}{dx}}\left[\prod _{i=1}^{k}f_{i}(x)\right]=\sum _{i=1}^{k}\left(\left({\frac {d}{dx}}f_{i}(x)\right)\prod _{j=1,j\neq i}^{k}f_{j}(x)\right)=\left(\prod _{i=1}^{k}f_{i}(x)\right)\left(\sum _{i=1}^{k}{\frac {f'_{i}(x)}{f_{i}(x)}}\right).}
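The k-factor sum formula translates directly into a short loop (the three sample factors and the base point below are illustrative choices), which can be compared against a direct numerical derivative of the full product:

```python
import math

# Derivative of a k-fold product via the generalized rule:
# (prod f_i)' = sum_i f_i' * prod_{j != i} f_j, checked numerically
# for three sample factors.
fs = [math.sin, math.exp, lambda x: x * x + 1.0]

def deriv(fn, x, h=1e-6):
    """Central-difference approximation of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

def prod_rule_deriv(funcs, x):
    """Sum over i of f_i'(x) times the product of the other factors."""
    total = 0.0
    for i, fi in enumerate(funcs):
        term = deriv(fi, x)
        for j, fj in enumerate(funcs):
            if j != i:
                term *= fj(x)
        total += term
    return total

x = 0.8
direct = deriv(lambda t: math.prod(f(t) for f in fs), x)
assert abs(prod_rule_deriv(fs, x) - direct) < 1e-5
```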

The logarithmic derivative provides a simpler expression of the last form, as well as a direct proof that does not involve any recursion. The logarithmic derivative of a function f, denoted here Logder(f), is the derivative of the logarithm of the function. It follows that {\displaystyle \operatorname {Logder} (f)={\frac {f'}{f}}.} Since the logarithm of a product is the sum of the logarithms of the factors, the sum rule for derivatives gives immediately {\displaystyle \operatorname {Logder} (f_{1}\cdots f_{k})=\sum _{i=1}^{k}\operatorname {Logder} (f_{i}).} The last expression above for the derivative of a product is obtained by multiplying both sides of this equation by the product of the {\displaystyle f_{i}.}

Higher derivatives

Main article: General Leibniz rule

The rule can also be generalized to the general Leibniz rule for the nth derivative of a product of two factors, by symbolically expanding according to the binomial theorem: {\displaystyle d^{n}(uv)=\sum _{k=0}^{n}{n \choose k}\cdot d^{(n-k)}(u)\cdot d^{(k)}(v).}

Applied at a specific point x, the above formula gives: {\displaystyle (uv)^{(n)}(x)=\sum _{k=0}^{n}{n \choose k}\cdot u^{(n-k)}(x)\cdot v^{(k)}(x).}
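The general Leibniz rule can be verified exactly on polynomials, where derivatives are exact coefficient manipulations (the two sample polynomials and n = 3 below are illustrative choices):

```python
from math import comb

# General Leibniz rule checked exactly on polynomials represented as
# coefficient lists (index i = coefficient of x**i):
# (uv)^(n) = sum_k C(n, k) * u^(n-k) * v^(k).
def poly_mul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_diff(a, n=1):
    """nth derivative of a polynomial given as a coefficient list."""
    for _ in range(n):
        a = [i * c for i, c in enumerate(a)][1:] or [0]
    return a

u = [1, 0, 3]      # 1 + 3x**2
v = [2, -1, 0, 5]  # 2 - x + 5x**3
n = 3

leibniz = [0] * 10
for k in range(n + 1):
    term = poly_mul(poly_diff(u, n - k), poly_diff(v, k))
    for i, c in enumerate(term):
        leibniz[i] += comb(n, k) * c

direct = poly_diff(poly_mul(u, v), n)  # differentiate the product directly
assert leibniz[:len(direct)] == direct
assert not any(leibniz[len(direct):])
```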

Furthermore, for the nth derivative of an arbitrary number of factors, one has a similar formula with multinomial coefficients: {\displaystyle \left(\prod _{i=1}^{k}f_{i}\right)^{\!\!(n)}=\sum _{j_{1}+j_{2}+\cdots +j_{k}=n}{n \choose j_{1},j_{2},\ldots ,j_{k}}\prod _{i=1}^{k}f_{i}^{(j_{i})}.}

Higher partial derivatives


For partial derivatives, we have[8] {\displaystyle {\partial ^{n} \over \partial x_{1}\,\cdots \,\partial x_{n}}(uv)=\sum _{S}{\partial ^{|S|}u \over \prod _{i\in S}\partial x_{i}}\cdot {\partial ^{n-|S|}v \over \prod _{i\not \in S}\partial x_{i}}} where the index S runs through all 2^n subsets of {1, ..., n}, and |S| is the cardinality of S. For example, when n = 3, {\displaystyle {\begin{aligned}&{\partial ^{3} \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}(uv)\\[1ex]={}&u\cdot {\partial ^{3}v \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}+{\partial u \over \partial x_{1}}\cdot {\partial ^{2}v \over \partial x_{2}\,\partial x_{3}}+{\partial u \over \partial x_{2}}\cdot {\partial ^{2}v \over \partial x_{1}\,\partial x_{3}}+{\partial u \over \partial x_{3}}\cdot {\partial ^{2}v \over \partial x_{1}\,\partial x_{2}}\\[1ex]&+{\partial ^{2}u \over \partial x_{1}\,\partial x_{2}}\cdot {\partial v \over \partial x_{3}}+{\partial ^{2}u \over \partial x_{1}\,\partial x_{3}}\cdot {\partial v \over \partial x_{2}}+{\partial ^{2}u \over \partial x_{2}\,\partial x_{3}}\cdot {\partial v \over \partial x_{1}}+{\partial ^{3}u \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}\cdot v.\end{aligned}}}
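The n = 2 instance of this formula, with its four subset terms, can be checked exactly at a point (u = x²y and v = x + y² below are example functions whose partials are easy to write down by hand):

```python
# The n = 2 case of the higher-partial-derivative formula:
# d2(uv)/dxdy = u*v_xy + u_x*v_y + u_y*v_x + u_xy*v,
# checked at one sample point for u = x**2 * y, v = x + y**2.
x, y = 1.5, -0.5

# u = x**2 * y and its partials
u, u_x, u_y, u_xy = x * x * y, 2 * x * y, x * x, 2 * x
# v = x + y**2 and its partials
v, v_x, v_y, v_xy = x + y * y, 1.0, 2 * y, 0.0

formula = u * v_xy + u_x * v_y + u_y * v_x + u_xy * v
# direct mixed partial: uv = x**3 * y + x**2 * y**3, so
# d2(uv)/dxdy = 3x**2 + 6xy**2
assert abs(formula - (3 * x * x + 6 * x * y * y)) < 1e-12
```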

Banach space


Suppose X, Y, and Z are Banach spaces (which include Euclidean spaces) and B : X × Y → Z is a continuous bilinear operator. Then B is differentiable, and its derivative at the point (x, y) in X × Y is the linear map D_(x,y)B : X × Y → Z given by {\displaystyle (D_{\left(x,y\right)}\,B)\left(u,v\right)=B\left(u,y\right)+B\left(x,v\right)\qquad \forall (u,v)\in X\times Y.}

This result can be extended[9] to more general topological vector spaces.

In vector calculus

Main article: Vector calculus identities § First derivative identities

The product rule extends to various product operations of vector functions on {\displaystyle \mathbb {R} ^{n}}:[10]

For scalar multiplication: {\displaystyle (f\cdot \mathbf {g} )'=f'\cdot \mathbf {g} +f\cdot \mathbf {g} '}
For dot products: {\displaystyle (\mathbf {f} \cdot \mathbf {g} )'=\mathbf {f} '\cdot \mathbf {g} +\mathbf {f} \cdot \mathbf {g} '}
For cross products of vector functions on {\displaystyle \mathbb {R} ^{3}}: {\displaystyle (\mathbf {f} \times \mathbf {g} )'=\mathbf {f} '\times \mathbf {g} +\mathbf {f} \times \mathbf {g} '}

There are also analogues for other notions of derivative: if f and g are scalar fields then there is a product rule with the gradient: {\displaystyle \nabla (f\cdot g)=\nabla f\cdot g+f\cdot \nabla g}

Such a rule will hold for any continuous bilinear product operation. Let B : X × Y → Z be a continuous bilinear map between vector spaces, and let f and g be differentiable functions into X and Y, respectively. The only properties of multiplication used in the proof via the limit definition of the derivative are that multiplication is continuous and bilinear. So for any continuous bilinear operation, {\displaystyle B(f,g)'=B(f',g)+B(f,g').} This is also a special case of the product rule for bilinear maps in Banach space.
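The bilinear rule can be checked numerically with the dot product as the bilinear operation (the two sample curves in R² below are illustrative choices): differentiating the scalar function t ↦ f(t)·g(t) matches the sum of the two one-sided terms:

```python
import math

# The bilinear rule B(f, g)' = B(f', g) + B(f, g') with B the dot
# product on R**2, for sample vector-valued curves f and g.
f = lambda t: (math.cos(t), math.sin(t))
g = lambda t: (t, t * t)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def deriv_vec(fn, t, h=1e-6):
    """Componentwise central-difference derivative of a curve in R**2."""
    a, b = fn(t + h), fn(t - h)
    return ((a[0] - b[0]) / (2 * h), (a[1] - b[1]) / (2 * h))

def deriv(fn, t, h=1e-6):
    """Central-difference derivative of a scalar function."""
    return (fn(t + h) - fn(t - h)) / (2 * h)

t = 0.4
lhs = deriv(lambda s: dot(f(s), g(s)), t)
rhs = dot(deriv_vec(f, t), g(t)) + dot(f(t), deriv_vec(g, t))
assert abs(lhs - rhs) < 1e-6
```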

Derivations in abstract algebra and differential geometry


In abstract algebra, the product rule is the defining property of a derivation. In this terminology, the product rule states that the derivative operator is a derivation on functions.

In differential geometry, a tangent vector to a manifold M at a point p may be defined abstractly as an operator on real-valued functions which behaves like a directional derivative at p: that is, a linear functional v which is a derivation, {\displaystyle v(fg)=v(f)\,g(p)+f(p)\,v(g).} Generalizing (and dualizing) the formulas of vector calculus to an n-dimensional manifold M, one may take differential forms of degrees k and ℓ, denoted {\displaystyle \alpha \in \Omega ^{k}(M),\beta \in \Omega ^{\ell }(M)}, with the wedge or exterior product operation {\displaystyle \alpha \wedge \beta \in \Omega ^{k+\ell }(M)}, as well as the exterior derivative {\displaystyle d:\Omega ^{m}(M)\to \Omega ^{m+1}(M)}. Then one has the graded Leibniz rule: {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta .}

Applications


Among the applications of the product rule is a proof that {\displaystyle {d \over dx}x^{n}=nx^{n-1}} when n is a positive integer (this rule is true even if n is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent n. If n = 0 then x^n is constant and nx^(n−1) = 0. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent n, then for the next value, n + 1, we have {\displaystyle {\begin{aligned}{\frac {dx^{n+1}}{dx}}&{}={\frac {d}{dx}}\left(x^{n}\cdot x\right)\\[1ex]&{}=x{\frac {d}{dx}}x^{n}+x^{n}{\frac {d}{dx}}x&{\text{(the product rule is used here)}}\\[1ex]&{}=x\left(nx^{n-1}\right)+x^{n}\cdot 1&{\text{(the induction hypothesis is used here)}}\\[1ex]&{}=\left(n+1\right)x^{n}.\end{aligned}}} Therefore, if the proposition is true for n, it is true also for n + 1, and therefore for all natural n.
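The induction can be replayed computationally (the recursion below is a sketch of the argument, not code from the article): starting from the base case, each step applies the product rule to xⁿ·x, and the coefficient of x^(n−1) comes out as n:

```python
# Rebuilds d/dx x**n = n * x**(n-1) by the induction in the text:
# the derivative of x**(n+1) = x**n * x follows from the product rule
# applied to the previous step, never from the closed form.
def power_deriv_coeff(n):
    """Returns c such that d/dx x**n = c * x**(n-1)."""
    if n == 0:
        return 0  # base case: derivative of a constant is 0
    # product rule on x**n = x**(n-1) * x:
    #   d/dx (x**(n-1) * x) = x * d/dx x**(n-1) + x**(n-1) * 1
    # so the coefficient grows by exactly 1 at each induction step.
    return power_deriv_coeff(n - 1) + 1

assert [power_deriv_coeff(n) for n in range(6)] == [0, 1, 2, 3, 4, 5]
```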


References

  1. ^ Note: This is a usual image since the 17th century, essentially the same illustration given in James Stewart: Calculus: Early Transcendentals, 7th edition, p. 185, in the section "The geometry of the Product Rule".
  2. ^ "Leibniz rule – Encyclopedia of Mathematics".
  3. ^ Michelle Cirillo (August 2007). "Humanizing Calculus". The Mathematics Teacher. 101 (1): 23–27. doi:10.5951/MT.101.1.0023.
  4. ^ Leibniz, G. W. (2005) [1920], The Early Mathematical Manuscripts of Leibniz (PDF), translated by J. M. Child, Dover, p. 28, footnote 58, ISBN 978-0-486-44596-0.
  5. ^ Leibniz, G. W. (2005) [1920], The Early Mathematical Manuscripts of Leibniz (PDF), translated by J. M. Child, Dover, p. 143, ISBN 978-0-486-44596-0.
  6. ^ Eugene Boman and Robert Rogers. Real Analysis. https://math.libretexts.org/Bookshelves/Analysis/Real_Analysis_(Boman_and_Rogers)/02%3A_Calculus_in_the_17th_and_18th_Centuries/2.01%3A_Newton_and_Leibniz_Get_Started
  7. ^ "A Story of Real Analysis" (PDF). Archived from the original (PDF) on 11 March 2025.
  8. ^ Michael Hardy (January 2006). "Combinatorics of Partial Derivatives" (PDF). The Electronic Journal of Combinatorics. 13. arXiv:math/0601149. Bibcode:2006math......1149H.
  9. ^ Kriegl, Andreas; Michor, Peter (1997). The Convenient Setting of Global Analysis (PDF). American Mathematical Society. p. 59. ISBN 0-8218-0780-3.
  10. ^ Stewart, James (2016), Calculus (8th ed.), Cengage, Section 13.2.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Product_rule&oldid=1336929753"