Fundamental theorem of calculus

From Wikipedia, the free encyclopedia
Relationship between derivatives and integrals

The fundamental theorem of calculus is a theorem that links the concept of differentiating a function (calculating its slopes, or rate of change at every point on its domain) with the concept of integrating a function (calculating the area under its graph, or the cumulative effect of small contributions). Roughly speaking, the two operations can be thought of as inverses of each other.

The first part of the theorem, the first fundamental theorem of calculus, states that for a continuous function $f$, an antiderivative or indefinite integral $F$ can be obtained as the integral of $f$ over an interval with a variable upper bound.[1]

Conversely, the second part of the theorem, the second fundamental theorem of calculus, states that the integral of a function $f$ over a fixed interval is equal to the change of any antiderivative $F$ between the ends of the interval. This greatly simplifies the calculation of a definite integral provided an antiderivative can be found by symbolic integration, thus avoiding numerical integration.

History

See also: History of calculus

The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that these two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The origins of differentiation likewise predate the fundamental theorem of calculus by hundreds of years; for example, in the fourteenth century the notions of continuity of functions and motion were studied by the Oxford Calculators and other scholars. The historical relevance of the fundamental theorem of calculus is not the ability to calculate these operations, but the realization that the two seemingly distinct operations (calculation of geometric areas, and calculation of gradients) are actually closely related.

Calculus as a unified theory of integration and differentiation started from the conjecture and the proof of the fundamental theorem of calculus. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character,[2] was by James Gregory (1638–1675).[3][4] Isaac Barrow (1630–1677) proved a more generalized version of the theorem,[5] while his student Isaac Newton (1642–1727) completed the development of the surrounding mathematical theory. Gottfried Leibniz (1646–1716) systematized the knowledge into a calculus for infinitesimal quantities and introduced the notation used today.

Sketch of geometric proof

The area shaded in red stripes is close to $h$ times $f(x)$. Alternatively, if the function $A(x)$ were known, this area would be exactly $A(x+h) - A(x)$. These two values are approximately equal, particularly for small $h$.

The first fundamental theorem may be interpreted as follows. Given a continuous function $y = f(x)$ whose graph is plotted as a curve, one defines a corresponding "area function" $x \mapsto A(x)$ such that $A(x)$ is the area beneath the curve between $0$ and $x$. The area $A(x)$ may not be easily computable, but it is assumed to be well defined.

The area under the curve between $x$ and $x+h$ could be computed by finding the area between $0$ and $x+h$, then subtracting the area between $0$ and $x$. In other words, the area of this "strip" would be $A(x+h) - A(x)$.

There is another way to estimate the area of this same strip. As shown in the accompanying figure, $h$ is multiplied by $f(x)$ to find the area of a rectangle that is approximately the same size as this strip. So:
$$A(x+h) - A(x) \approx f(x)\cdot h$$

Dividing by $h$ on both sides, we get:
$$\frac{A(x+h)-A(x)}{h} \approx f(x)$$

This estimate becomes a perfect equality as $h$ approaches 0:
$$f(x) = \lim_{h\to 0}\frac{A(x+h)-A(x)}{h}\ \overset{\text{def}}{=}\ A'(x).$$
That is, the derivative of the area function $A(x)$ exists and is equal to the original function $f(x)$, so the area function is an antiderivative of the original function.

Thus, the derivative of the integral of a function (the area) is the original function, so that derivative and integral are inverse operations which reverse each other. This is the essence of the fundamental theorem.
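The sketch above can be checked numerically: approximate the area function $A(x)$ with a Riemann sum and compare its difference quotient against $f(x)$. A minimal sketch, with $f = \cos$ chosen as an arbitrary continuous example (an assumption, not part of the theorem):

```python
import math

def area(f, x, n=100_000):
    """Midpoint Riemann-sum approximation of A(x), the area under f from 0 to x."""
    step = x / n
    return sum(f((i + 0.5) * step) for i in range(n)) * step

f = math.cos                    # arbitrary continuous function (assumption)
x, h = 1.0, 1e-4
quotient = (area(f, x + h) - area(f, x)) / h   # (A(x+h) - A(x)) / h
print(abs(quotient - f(x)))     # small: the difference quotient tracks f(x)
```

Shrinking `h` further (while keeping the sums accurate) drives the gap toward zero, which is exactly the limit statement above.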

Intuitive understanding


Intuitively, the fundamental theorem states that integration and differentiation are inverse operations which reverse each other.

The second fundamental theorem says that the sum of infinitesimal changes in a quantity (the integral of the derivative of the quantity) adds up to the net change in the quantity. To visualize this, imagine traveling in a car and wanting to know the distance traveled (the net change in position along the highway). You can see the velocity on the speedometer but cannot look out to see your location. Each second, you can find how far the car has traveled using distance = speed × time, that is, multiplying the current speed (in kilometers or miles per hour) by the time interval (1 second = $\tfrac{1}{3600}$ hour). By summing up all these small steps, you can approximate the total distance traveled, in spite of not looking outside the car:
$$\text{distance traveled} = \sum\left(\text{velocity at each time}\right)\times\left(\text{time interval}\right) = \sum v_t \times \Delta t.$$
As $\Delta t$ becomes infinitesimally small, the summing up corresponds to integration. Thus, the integral of the velocity function (the derivative of position) computes how far the car has traveled (the net change in position).
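The car analogy can be made concrete. The sketch below sums speed × one-second interval over an hour of driving (the linear speed profile is a made-up assumption) and compares the total against the net change given by an antiderivative:

```python
def velocity(t):
    # Hypothetical speed profile in km/h; t is measured in hours (assumption)
    return 60 + 40 * t

dt = 1 / 3600                                     # one second, as a fraction of an hour
distance = sum(velocity(k * dt) * dt for k in range(3600))   # one hour of driving

# An antiderivative s(t) = 60t + 20t^2 gives the exact net change s(1) - s(0) = 80 km
print(distance)   # close to 80; the one-second sum slightly undershoots
```

Shrinking the time step makes the sum converge to the exact 80 km, which is the integral of the velocity.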

The first fundamental theorem says that the value of any function is the rate of change (the derivative) of its integral from a fixed starting point up to any chosen end point. Continuing the example, with velocity as the function, you can integrate it from the starting time up to any given time to obtain a distance function whose derivative is that velocity. (To obtain your highway-marker position, you would need to add your starting position to this integral and take into account whether your travel was in the direction of increasing or decreasing mile markers.)

Formal statements


There are two parts to the theorem. The first part deals with the derivative of an antiderivative, while the second part deals with the relationship between antiderivatives and definite integrals.

First part


This part is sometimes referred to as the first fundamental theorem of calculus.[6]

Let $f$ be a continuous real-valued function defined on a closed interval $[a,b]$. Let $F$ be the function defined, for all $x$ in $[a,b]$, by
$$F(x)=\int_a^x f(t)\,dt.$$

Then $F$ is uniformly continuous on $[a,b]$ and differentiable on the open interval $(a,b)$, and
$$F'(x)=f(x)$$
for all $x$ in $(a,b)$, so $F$ is an antiderivative of $f$.

Corollary

Fundamental theorem of calculus (animation)

The fundamental theorem is often employed to compute the definite integral of a function $f$ for which an antiderivative $F$ is known. Specifically, if $f$ is a real-valued continuous function on $[a,b]$ and $F$ is an antiderivative of $f$ on $[a,b]$, then
$$\int_a^b f(t)\,dt = F(b)-F(a).$$

The corollary assumes continuity on the whole interval. This result is strengthened slightly in the following part of the theorem.

Second part


This part is sometimes referred to as the second fundamental theorem of calculus[7] or the Newton–Leibniz theorem.

Let $f$ be a real-valued function on a closed interval $[a,b]$ and $F$ a continuous function on $[a,b]$ which is an antiderivative of $f$ on $(a,b)$:
$$F'(x)=f(x).$$

If $f$ is Riemann integrable on $[a,b]$, then
$$\int_a^b f(x)\,dx = F(b)-F(a).$$

The second part is somewhat stronger than the corollary because it does not assume that $f$ is continuous.

When an antiderivative $F$ of $f$ exists, there are infinitely many antiderivatives of $f$, obtained by adding an arbitrary constant to $F$. Also, by the first part of the theorem, antiderivatives of $f$ always exist when $f$ is continuous.

Proof of the first part


For a given function $f$, define the function $F(x)$ as
$$F(x)=\int_a^x f(t)\,dt.$$

For any two numbers $x_1$ and $x_1 + \Delta x$ in $[a,b]$, we have

$$F(x_1+\Delta x)-F(x_1) = \int_a^{x_1+\Delta x} f(t)\,dt - \int_a^{x_1} f(t)\,dt = \int_{x_1}^{x_1+\Delta x} f(t)\,dt,$$
the latter equality resulting from the basic properties of integrals and the additivity of areas.

According to the mean value theorem for integration, there exists a real number $c\in[x_1,x_1+\Delta x]$ such that
$$\int_{x_1}^{x_1+\Delta x} f(t)\,dt = f(c)\cdot\Delta x.$$

It follows that
$$F(x_1+\Delta x)-F(x_1)=f(c)\cdot\Delta x,$$
and thus that
$$\frac{F(x_1+\Delta x)-F(x_1)}{\Delta x}=f(c).$$

Taking the limit as $\Delta x\to 0$, and keeping in mind that $c\in[x_1,x_1+\Delta x]$, one gets
$$\lim_{\Delta x\to 0}\frac{F(x_1+\Delta x)-F(x_1)}{\Delta x}=\lim_{\Delta x\to 0}f(c),$$
that is,
$$F'(x_1)=f(x_1),$$
according to the definition of the derivative, the continuity of $f$, and the squeeze theorem.[8]

Proof of the corollary


Suppose $F$ is an antiderivative of $f$, with $f$ continuous on $[a,b]$. Let
$$G(x)=\int_a^x f(t)\,dt.$$

By the first part of the theorem, we know $G$ is also an antiderivative of $f$. Since $F'-G'=0$, the mean value theorem implies that $F-G$ is a constant function; that is, there is a number $c$ such that $G(x)=F(x)+c$ for all $x$ in $[a,b]$. Letting $x=a$, we have
$$F(a)+c=G(a)=\int_a^a f(t)\,dt=0,$$
which means $c=-F(a)$. In other words, $G(x)=F(x)-F(a)$, and so
$$\int_a^b f(x)\,dx=G(b)=F(b)-F(a).$$

Proof of the second part


This is a limit proof by Riemann sums.

To begin, we recall the mean value theorem. Stated briefly, if $F$ is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$, then there exists some $c$ in $(a,b)$ such that
$$F'(c)(b-a)=F(b)-F(a).$$

Let $f$ be (Riemann) integrable on the interval $[a,b]$, and let $f$ admit an antiderivative $F$ on $(a,b)$ such that $F$ is continuous on $[a,b]$. Begin with the quantity $F(b)-F(a)$. Let there be numbers $x_0,\dots,x_n$ such that
$$a=x_0<x_1<x_2<\cdots<x_{n-1}<x_n=b.$$

It follows that
$$F(b)-F(a)=F(x_n)-F(x_0).$$

Now, we add each $F(x_i)$ along with its additive inverse, so that the resulting quantity is equal:
$$\begin{aligned}F(b)-F(a)&=F(x_n)+[-F(x_{n-1})+F(x_{n-1})]+\cdots+[-F(x_1)+F(x_1)]-F(x_0)\\&=[F(x_n)-F(x_{n-1})]+[F(x_{n-1})-F(x_{n-2})]+\cdots+[F(x_2)-F(x_1)]+[F(x_1)-F(x_0)].\end{aligned}$$

The above quantity can be written as the following sum:

$$F(b)-F(a)=\sum_{i=1}^{n}\left[F(x_i)-F(x_{i-1})\right]. \tag{1'}$$

The function $F$ is differentiable on the interval $(a,b)$ and continuous on the closed interval $[a,b]$; therefore, it is also differentiable on each interval $(x_{i-1},x_i)$ and continuous on each interval $[x_{i-1},x_i]$. According to the mean value theorem (above), for each $i$ there exists a $c_i$ in $(x_{i-1},x_i)$ such that
$$F(x_i)-F(x_{i-1})=F'(c_i)(x_i-x_{i-1}).$$

Substituting the above into (1′), we get
$$F(b)-F(a)=\sum_{i=1}^{n}\left[F'(c_i)(x_i-x_{i-1})\right].$$

The assumption implies $F'(c_i)=f(c_i)$. Also, $x_i-x_{i-1}$ can be written as $\Delta x_i$, the width of the $i$-th subinterval:

$$F(b)-F(a)=\sum_{i=1}^{n}\left[f(c_i)\,\Delta x_i\right]. \tag{2'}$$
A converging sequence of Riemann sums. The number in the upper left is the total area of the blue rectangles. They converge to the definite integral of the function.

We are describing the area of a rectangle as the width times the height, and we are adding the areas together. By virtue of the mean value theorem, each rectangle describes an approximation of the curve section it is drawn over. Also, $\Delta x_i$ need not be the same for all values of $i$; in other words, the widths of the rectangles can differ. What we have to do is approximate the curve with $n$ rectangles. Now, as the partitions get smaller and $n$ increases, resulting in more partitions to cover the space, we get closer and closer to the actual area under the curve.

By taking the limit of the expression as the norm of the partitions approaches zero, we arrive at the Riemann integral. We know that this limit exists because $f$ was assumed to be integrable. That is, we take the limit as the largest of the partition widths approaches zero, so that all other widths are smaller and the number of partitions approaches infinity.

So, we take the limit on both sides of (2′). This gives us
$$\lim_{\|\Delta x_i\|\to 0}\left[F(b)-F(a)\right]=\lim_{\|\Delta x_i\|\to 0}\sum_{i=1}^{n}\left[f(c_i)\,\Delta x_i\right].$$

Neither $F(b)$ nor $F(a)$ is dependent on $\|\Delta x_i\|$, so the limit on the left side remains $F(b)-F(a)$:
$$F(b)-F(a)=\lim_{\|\Delta x_i\|\to 0}\sum_{i=1}^{n}\left[f(c_i)\,\Delta x_i\right].$$

The expression on the right side of the equation defines the integral of $f$ from $a$ to $b$. Therefore, we obtain
$$F(b)-F(a)=\int_a^b f(x)\,dx,$$
which completes the proof.
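The limiting process in the proof can be watched numerically: Riemann sums over finer and finer partitions approach $F(b)-F(a)$. A sketch, assuming the convenient pair $f = F = \exp$ (so $F'=f$ automatically):

```python
import math

def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f over n equal subintervals of [a, b]."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = F = math.exp                 # F' = f, a convenient test pair (assumption)
a, b = 0.0, 1.0
exact = F(b) - F(a)              # e - 1, by the second fundamental theorem
for n in (10, 100, 1000):
    print(n, abs(riemann_sum(f, a, b, n) - exact))   # error shrinks as n grows
```

Equal subintervals are used here only for simplicity; the proof allows any partition whose norm tends to zero.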

Relationship between the parts


As discussed above, a slightly weaker version of the second part follows from the first part.

Similarly, it almost looks like the first part of the theorem follows directly from the second. That is, suppose $G$ is an antiderivative of $f$. Then by the second theorem, $G(x)-G(a)=\int_a^x f(t)\,dt$. Now, suppose $F(x)=\int_a^x f(t)\,dt=G(x)-G(a)$. Then $F$ has the same derivative as $G$, and therefore $F'=f$. This argument only works, however, if we already know that $f$ has an antiderivative, and the only way we know that all continuous functions have antiderivatives is by the first part of the fundamental theorem.[9] For example, if $f(x)=e^{-x^2}$, then $f$ has an antiderivative, namely
$$G(x)=\int_0^x f(t)\,dt,$$
and there is no simpler expression for this function. It is therefore important not to interpret the second part of the theorem as the definition of the integral. Indeed, there are many functions that are integrable but lack elementary antiderivatives, and discontinuous functions can be integrable but lack any antiderivatives at all. Conversely, many functions that have antiderivatives are not Riemann integrable (see Volterra's function).

Examples


Computing a particular integral


Suppose the following is to be calculated:
$$\int_2^5 x^2\,dx.$$

Here, $f(x)=x^2$ and we can use $F(x)=\tfrac{1}{3}x^3$ as the antiderivative. Therefore:
$$\int_2^5 x^2\,dx=F(5)-F(2)=\frac{5^3}{3}-\frac{2^3}{3}=\frac{125}{3}-\frac{8}{3}=\frac{117}{3}=39.$$
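As a cross-check of the value 39, a midpoint Riemann sum, which uses no antiderivative at all, converges to the same number:

```python
# Midpoint Riemann sum for the integral of x^2 over [2, 5]
n = 100_000
dx = (5 - 2) / n
riemann = sum(((2 + (i + 0.5) * dx) ** 2) * dx for i in range(n))
print(riemann)   # ≈ 39, matching F(5) - F(2)
```

The agreement between the purely numerical sum and the antiderivative shortcut is the corollary in action.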

Using the first part


Suppose
$$\frac{d}{dx}\int_0^x t^3\,dt$$
is to be calculated. Using the first part of the theorem with $f(t)=t^3$ gives
$$\frac{d}{dx}\int_0^x t^3\,dt=f(x)=x^3.$$

This can also be checked using the second part of the theorem. Specifically, $F(t)=\tfrac{1}{4}t^4$ is an antiderivative of $f(t)$, so
$$\frac{d}{dx}\int_0^x t^3\,dt=\frac{d}{dx}F(x)-\frac{d}{dx}F(0)=\frac{d}{dx}\frac{x^4}{4}=x^3.$$

An integral where the corollary is insufficient


Suppose
$$f(x)={\begin{cases}\sin\left(\frac{1}{x}\right)-\frac{1}{x}\cos\left(\frac{1}{x}\right)&x\neq 0\\0&x=0\end{cases}}$$
Then $f(x)$ is not continuous at zero. Moreover, this is not just a matter of how $f$ is defined at zero, since the limit as $x\to 0$ of $f(x)$ does not exist. Therefore, the corollary cannot be used to compute
$$\int_0^1 f(x)\,dx.$$
But consider the function
$$F(x)={\begin{cases}x\sin\left(\frac{1}{x}\right)&x\neq 0\\0&x=0.\end{cases}}$$
Notice that $F(x)$ is continuous on $[0,1]$ (including at zero, by the squeeze theorem), and $F(x)$ is differentiable on $(0,1)$ with $F'(x)=f(x)$. Therefore, part two of the theorem applies, and
$$\int_0^1 f(x)\,dx=F(1)-F(0)=\sin(1).$$
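This example can also be probed numerically. On any interval $[\varepsilon,1]$ with $\varepsilon>0$, $f$ is continuous, so a direct Riemann sum there must match $F(1)-F(\varepsilon)$; and since $|F(\varepsilon)|\le\varepsilon$, the value tends to $F(1)=\sin(1)$ as $\varepsilon\to 0$. A sketch (the choice $\varepsilon=0.1$ and the grid size are arbitrary):

```python
import math

def F(x):
    return x * math.sin(1 / x) if x != 0 else 0.0

def f(x):
    return math.sin(1 / x) - math.cos(1 / x) / x   # F'(x) for x != 0

eps, n = 0.1, 200_000
dx = (1 - eps) / n
# Midpoint Riemann sum of f on [eps, 1], where f is continuous
midpoint = sum(f(eps + (i + 0.5) * dx) for i in range(n)) * dx
print(abs(midpoint - (F(1) - F(eps))))   # tiny: the theorem holds on [eps, 1]
# |F(eps)| <= eps, so letting eps -> 0 forces the integral toward sin(1)
```

The oscillations of $f$ near zero are what defeat the corollary; cutting the interval at $\varepsilon$ sidesteps them.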

Theoretical example


The theorem can be used to prove that
$$\int_a^b f(x)\,dx=\int_a^c f(x)\,dx+\int_c^b f(x)\,dx.$$

Since
$$\begin{aligned}\int_a^b f(x)\,dx&=F(b)-F(a),\\\int_a^c f(x)\,dx&=F(c)-F(a),\ \text{and}\\\int_c^b f(x)\,dx&=F(b)-F(c),\end{aligned}$$
the result follows from
$$F(b)-F(a)=F(c)-F(a)+F(b)-F(c).$$
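The telescoping identity can be checked with a concrete antiderivative, say $F(x)=x^3/3$ for $f(x)=x^2$ and the split points $(a,c,b)=(0,1,2)$ (all arbitrary choices):

```python
F = lambda x: x ** 3 / 3             # antiderivative of f(x) = x^2 (arbitrary choice)
a, c, b = 0.0, 1.0, 2.0
lhs = F(b) - F(a)                    # integral over [a, b]
rhs = (F(c) - F(a)) + (F(b) - F(c))  # split at c, then telescope
print(abs(lhs - rhs))                # 0 up to floating-point rounding
```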

Generalizations


The function $f$ does not have to be continuous over the whole interval. Part I of the theorem then says: if $f$ is any Lebesgue integrable function on $[a,b]$ and $x_0$ is a number in $[a,b]$ such that $f$ is continuous at $x_0$, then
$$F(x)=\int_a^x f(t)\,dt$$
is differentiable at $x=x_0$ with $F'(x_0)=f(x_0)$. We can relax the conditions on $f$ still further and suppose that it is merely locally integrable. In that case, we can conclude that the function $F$ is differentiable almost everywhere and $F'(x)=f(x)$ almost everywhere. On the real line this statement is equivalent to Lebesgue's differentiation theorem. These results remain true for the Henstock–Kurzweil integral, which allows a larger class of integrable functions.[10]

In higher dimensions, Lebesgue's differentiation theorem generalizes the fundamental theorem of calculus by stating that for almost every $x$, the average value of a function $f$ over a ball of radius $r$ centered at $x$ tends to $f(x)$ as $r$ tends to 0.

Part II of the theorem is true for any Lebesgue integrable function $f$ that has an antiderivative $F$ (not all integrable functions do, though). In other words, if a real function $F$ on $[a,b]$ admits a derivative $f(x)$ at every point $x$ of $[a,b]$ and if this derivative $f$ is Lebesgue integrable on $[a,b]$, then[11]
$$F(b)-F(a)=\int_a^b f(t)\,dt.$$

This result may fail for continuous functions $F$ that admit a derivative $f(x)$ at almost every point $x$, as the example of the Cantor function shows. However, if $F$ is absolutely continuous, it admits a derivative $F'(x)$ at almost every point $x$, and moreover $F'$ is integrable, with $F(b)-F(a)$ equal to the integral of $F'$ on $[a,b]$. Conversely, if $f$ is any integrable function, then $F$ as given in the first formula will be absolutely continuous with $F'=f$ almost everywhere.

The conditions of this theorem may again be relaxed by considering the integrals involved as Henstock–Kurzweil integrals. Specifically, if a continuous function $F(x)$ admits a derivative $f(x)$ at all but countably many points, then $f(x)$ is Henstock–Kurzweil integrable and $F(b)-F(a)$ is equal to the integral of $f$ on $[a,b]$. The difference here is that the integrability of $f$ does not need to be assumed.[12]

The version of Taylor's theorem that expresses the error term as an integral can be seen as a generalization of the fundamental theorem.

There is a version of the theorem for complex functions: suppose $U$ is an open set in $\mathbb{C}$ and $f:U\to\mathbb{C}$ is a function that has a holomorphic antiderivative $F$ on $U$. Then for every curve $\gamma:[a,b]\to U$, the curve integral can be computed as
$$\int_\gamma f(z)\,dz=F(\gamma(b))-F(\gamma(a)).$$
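A quick numerical illustration of the complex version, with made-up choices: $f(z)=z^2$, antiderivative $F(z)=z^3/3$, and $\gamma$ the upper unit semicircle. The discretized curve integral matches $F$ evaluated at the endpoints of $\gamma$:

```python
import cmath
import math

f = lambda z: z ** 2                 # holomorphic, with antiderivative F (assumption)
F = lambda z: z ** 3 / 3

gamma = lambda t: cmath.exp(1j * t)          # upper unit semicircle, t in [0, pi]
dgamma = lambda t: 1j * cmath.exp(1j * t)    # gamma'(t)

n = 100_000
dt = math.pi / n
# Discretize the curve integral: integral of f(gamma(t)) * gamma'(t) dt
integral = sum(f(gamma(k * dt)) * dgamma(k * dt) for k in range(n)) * dt

endpoint_difference = F(gamma(math.pi)) - F(gamma(0))   # = -2/3
print(abs(integral - endpoint_difference))              # small
```

Notably, the value depends only on the endpoints $\gamma(a)$ and $\gamma(b)$, not on the path taken between them.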

The fundamental theorem can be generalized to curve and surface integrals in higher dimensions and on manifolds. One such generalization offered by the calculus of moving surfaces is the time evolution of integrals. The most familiar extensions of the fundamental theorem of calculus in higher dimensions are the divergence theorem and the gradient theorem.

One of the most powerful generalizations in this direction is the generalized Stokes theorem (sometimes known as the fundamental theorem of multivariable calculus):[13] Let $M$ be an oriented piecewise smooth manifold of dimension $n$ and let $\omega$ be a smooth compactly supported $(n-1)$-form on $M$. If $\partial M$ denotes the boundary of $M$ given its induced orientation, then
$$\int_M d\omega=\int_{\partial M}\omega.$$

Here $d$ is the exterior derivative, which is defined using the manifold structure only.

The theorem is often used in situations where $M$ is an embedded oriented submanifold of some bigger manifold (e.g. $\mathbb{R}^k$) on which the form $\omega$ is defined.

The fundamental theorem of calculus allows us to pose a definite integral as a first-order ordinary differential equation:
$$\int_a^b f(x)\,dx$$
can be posed as
$$\frac{dy}{dx}=f(x),\qquad y(a)=0,$$
with $y(b)$ as the value of the integral.
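For instance, $\int_0^1 \cos x\,dx = \sin 1$ can be recovered by marching the initial value problem $y'=\cos x$, $y(0)=0$ with a basic Euler step (a sketch, not a production ODE integrator; the integrand is an arbitrary choice):

```python
import math

n = 100_000
h = 1.0 / n
x, y = 0.0, 0.0                  # y(a) = 0 with a = 0
for _ in range(n):
    y += h * math.cos(x)         # Euler step: y <- y + h * f(x)
    x += h

print(abs(y - math.sin(1.0)))    # error shrinks as h -> 0
```

For pure quadrature like this, the Euler march reduces exactly to a left-endpoint Riemann sum; any standard ODE solver would serve equally well.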

References

  1. ^ Weisstein, Eric W. "First Fundamental Theorem of Calculus". MathWorld. Retrieved 2024-04-15.
  2. ^ Malet, Antoni (1993). "James Gregorie on tangents and the 'Taylor' rule for series expansions". Archive for History of Exact Sciences. 46 (2). Springer-Verlag: 97–137. doi:10.1007/BF00375656. "Gregorie's thought, on the other hand, belongs to a conceptual framework strongly geometrical in character." (p. 137)
  3. ^ See, e.g., Marlow Anderson, Victor J. Katz, Robin J. Wilson, Sherlock Holmes in Babylon and Other Tales of Mathematical History, Mathematical Association of America, 2004, p. 114.
  4. ^ Gregory, James (1668). Geometriae Pars Universalis. Patavii: typis heredum Pauli Frambotti.
  5. ^ Child, James Mark; Barrow, Isaac (1916). The Geometrical Lectures of Isaac Barrow. Chicago: Open Court Publishing Company.
  6. ^ Apostol 1967, §5.1.
  7. ^ Apostol 1967, §5.3.
  8. ^ Leithold, L. (1996), The Calculus of a Single Variable (6th ed.), New York: HarperCollins College Publishers, p. 380.
  9. ^ Spivak, Michael (1980), Calculus (2nd ed.), Houston, Texas: Publish or Perish.
  10. ^ Bartle (2001), Thm. 4.11.
  11. ^ Rudin 1987, th. 7.21.
  12. ^ Bartle (2001), Thm. 4.7.
  13. ^ Spivak, M. (1965). Calculus on Manifolds. New York: W. A. Benjamin. pp. 124–125. ISBN 978-0-8053-9021-6.

Further reading

  • Courant, Richard; John, Fritz (1965), Introduction to Calculus and Analysis, Springer.
  • Larson, Ron; Edwards, Bruce H.; Heyd, David E. (2002), Calculus of a Single Variable (7th ed.), Boston: Houghton Mifflin Company, ISBN 978-0-618-14916-2.
  • Malet, A., Studies on James Gregorie (1638–1675) (PhD thesis, Princeton, 1989).
  • Hernandez Rodriguez, O. A.; Lopez Fernandez, J. M., "Teaching the Fundamental Theorem of Calculus: A Historical Reflection", Loci: Convergence (MAA), January 2012.
  • Stewart, J. (2003), "Fundamental Theorem of Calculus", Calculus: Early Transcendentals, Belmont, California: Thomson/Brooks/Cole.
  • Turnbull, H. W., ed. (1939), The James Gregory Tercentenary Memorial Volume, London.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Fundamental_theorem_of_calculus&oldid=1320460686"