
Differential calculus

From Wikipedia, the free encyclopedia
Area of mathematics; subarea of calculus
The graph of a function, drawn in black, and a tangent line to that function, drawn in red. The slope of the tangent line equals the derivative of the function at the marked point.

In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change.[1] It is one of the two traditional divisions of calculus, the other being integral calculus, the study of the area beneath a curve.[2]

The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined there. For a real-valued function of a single real variable, the derivative at a point generally determines the best linear approximation to the function at that point.

Differential calculus and integral calculus are connected by the fundamental theorem of calculus, which states that differentiation is the reverse process to integration.

Differentiation has applications in nearly all quantitative disciplines. In physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is the acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this statement leads to the famous F = ma equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories.

Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra.

Derivative

Main article: Derivative
The graph of an arbitrary function y = f(x). The orange line is tangent to the curve at x = a, meaning that at that exact point the curve and the straight line have the same slope.
The derivative at different points of a differentiable function

The derivative of f(x) at the point x = a is the slope of the tangent to the curve at (a, f(a)).[3] To gain an intuition for this, one must first be familiar with finding the slope of a linear equation, written in the form y = mx + b. The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in y by the change in x, so that slope = (change in y)/(change in x). For example, the graph of y = −2x + 13 has a slope of −2, as shown in the diagram below:

The graph of y = −2x + 13

\frac{\text{change in }y}{\text{change in }x} = \frac{-6}{+3} = -2
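This slope computation is easy to carry out mechanically. As an illustrative sketch (the helper function and sample points are my own, not from the article), the following evaluates (change in y)/(change in x) for the line y = −2x + 13:

```python
# Slope of a linear equation from any two points on it,
# using slope = (change in y) / (change in x).

def slope(p1, p2):
    """Slope of the line through two points (x, y)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

f = lambda x: -2 * x + 13        # the line y = -2x + 13

# Because the line's steepness is constant, any two points give -2.
print(slope((0, f(0)), (3, f(3))))   # -2.0
print(slope((5, f(5)), (9, f(9))))   # -2.0
```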

For brevity, (change in y)/(change in x) is often written as Δy/Δx, with Δ being the Greek letter delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs, such as y = x², vary in their steepness. This means that the slope can no longer be found by picking any two arbitrary points. Instead, the slope of the graph can be computed by considering the tangent line, a line that 'just touches' a particular point.[a] The slope of a curve at a particular point is equal to the slope of the tangent at that point. For example, y = x² has a slope of 4 at x = 2 because the slope of the tangent line to that point is equal to 4:

The graph of y = x², with a straight line that is tangent to (2, 4). The slope of the tangent line is equal to 4. (The axes of the graph do not use a 1:1 scale.)


The derivative of a function is then simply the slope of this tangent line.[b] Even though the tangent line touches the curve at only a single point, the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line. If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line and, as a result, its slope is also very similar:

The dotted line goes through the points (2, 4) and (3, 9), which both lie on the curve y = x². Because these two points are fairly close together, the dotted line and tangent line have a similar slope. As the two points become closer together, the error produced by the secant line becomes vanishingly small.


The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph (x, f(x)) and (x + Δx, f(x + Δx)), where Δx is a small number. As before, the slope of the line passing through these two points can be calculated with the formula slope = Δy/Δx. This gives

\text{slope} = \frac{f(x+\Delta x)-f(x)}{\Delta x}
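The secant-slope formula can be evaluated directly in code. A minimal sketch (the sample function and step sizes are illustrative choices, not from the article) computes it for f(x) = x² at x = 2 with ever-smaller values of Δx:

```python
# Secant slope (f(x + dx) - f(x)) / dx for f(x) = x^2 at x = 2.
# As dx shrinks, the secant slope approaches the tangent slope, 4.

def secant_slope(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2
for dx in (1.0, 0.1, 0.001, 1e-6):
    print(dx, secant_slope(f, 2.0, dx))   # 5.0, 4.1, 4.001, ...
```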

As Δx gets closer and closer to 0, the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as

\lim_{\Delta x\to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}

The above expression means 'as Δx gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of f(x); this can be written as f′(x). If y = f(x), the derivative can also be written as dy/dx, with d representing an infinitesimal change. For example, dx represents an infinitesimal change in x.[c] In summary, if y = f(x), then the derivative of f(x) is

\frac{dy}{dx} = f'(x) = \lim_{\Delta x\to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}

provided such a limit exists.[4][d] We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of y = x² is 2x:

\begin{aligned}
\frac{dy}{dx} &= \lim_{\Delta x\to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}\\
&= \lim_{\Delta x\to 0}\frac{(x+\Delta x)^{2}-x^{2}}{\Delta x}\\
&= \lim_{\Delta x\to 0}\frac{x^{2}+2x\,\Delta x+(\Delta x)^{2}-x^{2}}{\Delta x}\\
&= \lim_{\Delta x\to 0}\frac{2x\,\Delta x+(\Delta x)^{2}}{\Delta x}\\
&= \lim_{\Delta x\to 0}(2x+\Delta x)
\end{aligned}

As Δx approaches 0, 2x + Δx approaches 2x. Therefore, dy/dx = 2x. This proof can be generalised to show that d(axⁿ)/dx = anxⁿ⁻¹ if a and n are constants. This is known as the power rule. For example, (d/dx)(5x⁴) = 5(4)x³ = 20x³. However, many other functions cannot be differentiated as easily as polynomial functions, meaning that further techniques are sometimes needed to find the derivative of a function. These techniques include the chain rule, product rule, and quotient rule. Other functions cannot be differentiated at all, giving rise to the concept of differentiability.
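The power rule can be sanity-checked against the limit definition numerically. A minimal sketch (the central-difference helper and sample points are my own illustrative choices):

```python
# Compare the power rule d(5x^4)/dx = 20x^3 with a numerical
# derivative built from a small central difference.

def numerical_derivative(f, x, dx=1e-6):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

f = lambda x: 5 * x ** 4
power_rule = lambda x: 20 * x ** 3

for x in (0.5, 1.0, 2.0):
    assert abs(numerical_derivative(f, x) - power_rule(x)) < 1e-4
print("power rule matches the limit definition at the sample points")
```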

A closely related concept to the derivative of a function is its differential. When x and y are real variables, the derivative of f at x is the slope of the tangent line to the graph of f at x. Because the source and target of f are one-dimensional, the derivative of f is a real number. If x and y are vectors, then the best linear approximation to the graph of f depends on how f changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted ∂y/∂x. The linearization of f in all directions at once is called the total derivative.

History of differentiation

Main article: History of calculus

The concept of a derivative in the sense of a tangent line is a very old one, familiar to ancient Greek mathematicians such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC), and Apollonius of Perga (c. 262–190 BC).[5] Archimedes also made use of indivisibles, although these were primarily used to study areas and volumes rather than derivatives and tangents (see The Method of Mechanical Theorems). The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185); indeed, it has been argued[6] that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem".[7]

The mathematician Sharaf al-Dīn al-Tūsī (1135–1213), in his Treatise on Equations, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials. He obtained, for example, that the maximum (for positive x) of the cubic ax² − x³ occurs when x = 2a/3, and concluded therefrom that the equation ax² = x³ + c has exactly one positive solution when c = 4a³/27, and two positive solutions whenever 0 < c < 4a³/27.[8] The historian of science Roshdi Rashed[9] has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known.[10]

The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Wilhelm Leibniz (1646–1716), who provided independent[e] and unified approaches to differentiation and derivatives. The key insight that earned them this credit, however, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes.[f] For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat (1607–1665), Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Regarding Fermat's influence, Newton once wrote in a letter that "I had the hint of this method [of fluxions] from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general."[11] Isaac Barrow is generally given credit for the early development of the derivative.[12] Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today.

Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin-Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that differentiation was generalized to Euclidean space and the complex plane.

The 20th century brought two major steps towards our present understanding and practice of differentiation. Lebesgue integration, besides extending integral calculus to many more functions, clarified the relation between differentiation and integration with the notion of absolute continuity. Later, the theory of distributions (after Laurent Schwartz) extended differentiation to generalized functions (e.g., the Dirac delta function previously introduced in quantum mechanics) and became fundamental to modern applied analysis, especially through the use of weak solutions to partial differential equations.

Applications of derivatives


Optimization


If f is a differentiable function on ℝ (or an open interval) and x is a local maximum or a local minimum of f, then the derivative of f at x is zero. Points where f′(x) = 0 are called critical points or stationary points (and the value of f at x is called a critical value). If f is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points.

If f is twice differentiable, then conversely, a critical point x of f can be analysed by considering the second derivative of f at x:

  • if it is positive, x is a local minimum;
  • if it is negative, x is a local maximum;
  • if it is zero, then x could be a local minimum, a local maximum, or neither. (For example, f(x) = x³ has a critical point at x = 0, but it has neither a maximum nor a minimum there, whereas f(x) = ±x⁴ has a critical point at x = 0 and a minimum and a maximum, respectively, there.)

This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of f′ on each side of the critical point.
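The second derivative test can be sketched with finite-difference approximations of f′ and f″. Here the example function f(x) = x³ − 3x, with critical points at x = ±1, is my own illustrative choice, not from the article:

```python
# First and second derivatives via finite differences, used to
# classify the critical points of f(x) = x^3 - 3x.

def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

f = lambda x: x ** 3 - 3 * x

for x in (-1.0, 1.0):                  # f'(x) = 3x^2 - 3 = 0 here
    assert abs(d1(f, x)) < 1e-6        # confirms a critical point
    kind = "minimum" if d2(f, x) > 0 else "maximum"
    print(x, kind)                     # -1.0 maximum, 1.0 minimum
```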

Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints.

This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points.

In higher dimensions, a critical point of a scalar-valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero), then the test is considered to be inconclusive.

Calculus of variations

Main article: Calculus of variations

One example of an optimization problem is: find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: find the smallest-area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations.

Physics


Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "time derivative", the rate of change over time, is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics:

  • velocity is the derivative (with respect to time) of an object's displacement (distance from the original position);
  • acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position.

For example, if an object's position on a line is given by

x(t) = −16t² + 16t + 32,

then the object's velocity is

ẋ(t) = x′(t) = −32t + 16,

and the object's acceleration is

ẍ(t) = x″(t) = −32,

which is constant.
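The position function above and its hand-computed derivatives can be cross-checked numerically; a minimal sketch (the central-difference check is my own addition):

```python
# x(t) = -16t^2 + 16t + 32 and its derivatives from the text,
# checked against a small central difference.

x = lambda t: -16 * t ** 2 + 16 * t + 32
v = lambda t: -32 * t + 16             # velocity x'(t)
a = lambda t: -32.0                    # acceleration x''(t), constant

dt = 1e-6
for t in (0.0, 0.5, 1.0):
    approx_v = (x(t + dt) - x(t - dt)) / (2 * dt)
    assert abs(approx_v - v(t)) < 1e-3

print(v(0.5))   # 0.0 -- the object is momentarily at rest at t = 0.5
```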

Differential equations

Main article: Differential equation

A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation

F(t) = m\,\frac{d^{2}x}{dt^{2}}.

The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation

\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^{2}u}{\partial x^{2}}.

Here u(x, t) is the temperature of the rod at position x and time t, and α is a constant that depends on how fast heat diffuses through the rod.
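A standard way to make the heat equation concrete is an explicit finite-difference scheme, a common textbook sketch rather than anything described in this article: approximate ∂²u/∂x² by (u[i+1] − 2u[i] + u[i−1])/dx² and step forward in time.

```python
# Explicit finite-difference steps for u_t = alpha * u_xx on a rod
# with its ends held at temperature 0. Stability requires
# alpha * dt / dx**2 <= 0.5.

alpha, dx, dt = 1.0, 0.1, 0.001
u = [0.0] * 11
u[5] = 1.0                      # a spike of heat in the middle

def step(u):
    r = alpha * dt / dx ** 2    # here r = 0.1
    new = u[:]                  # boundary values stay 0
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
    return new

for _ in range(100):
    u = step(u)
print(max(u))   # the spike has spread out and its peak has dropped
```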

Mean value theorem

Main article: Mean value theorem
The mean value theorem: for each differentiable function f : [a, b] → ℝ with a < b there is a c ∈ (a, b) with f′(c) = (f(b) − f(a))/(b − a).

The mean value theorem gives a relationship between values of the derivative and values of the original function. If f(x) is a real-valued function and a and b are numbers with a < b, then the mean value theorem says that under mild hypotheses, the slope between the two points (a, f(a)) and (b, f(b)) is equal to the slope of the tangent line to f at some point c between a and b. In other words,

f'(c) = \frac{f(b)-f(a)}{b-a}.

In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose thatf has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph off must equal the slope of one of the tangent lines off. All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
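As a sketch of the theorem in action (the function, interval, and bisection solver are my own illustrative choices): for f(x) = x³ on [0, 2], the mean slope is (8 − 0)/2 = 4, and the promised point c solves f′(c) = 3c² = 4.

```python
# Locate the mean value theorem's point c for f(x) = x^3 on [0, 2],
# i.e. solve f'(c) = 3c^2 = (f(2) - f(0)) / (2 - 0) = 4 by bisection.

f = lambda x: x ** 3
a, b = 0.0, 2.0
mean_slope = (f(b) - f(a)) / (b - a)   # 4.0

lo, hi = a, b                           # f'(x) = 3x^2 is increasing here
for _ in range(60):
    mid = (lo + hi) / 2
    if 3 * mid ** 2 < mean_slope:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2

print(c)    # ~1.1547, which is 2/sqrt(3)
```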

Taylor polynomials and Taylor series

Main articles: Taylor polynomial and Taylor series

The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function f(x) at the point x₀ is a linear polynomial a + b(x − x₀), and it may be possible to get a better approximation by considering a quadratic polynomial a + b(x − x₀) + c(x − x₀)². Still better might be a cubic polynomial a + b(x − x₀) + c(x − x₀)² + d(x − x₀)³, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients a, b, c, and d that makes the approximation as good as possible.

In the neighbourhood of x₀, the best possible choice for a is always f(x₀), and the best possible choice for b is always f′(x₀). For c, d, and higher-degree coefficients, these coefficients are determined by higher derivatives of f: c should always be f″(x₀)/2, and d should always be f‴(x₀)/3!. Using these coefficients gives the Taylor polynomial of f. The Taylor polynomial of degree d is the polynomial of degree d which best approximates f, and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If f is a polynomial of degree less than or equal to d, then the Taylor polynomial of degree d equals f.
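The coefficient pattern (k-th derivative at x₀ divided by k!) can be sketched for f(x) = eˣ about x₀ = 0, where every derivative equals 1, so every coefficient is 1/k!. The example function is my own illustrative choice:

```python
import math

# Taylor polynomial of e^x about 0: coefficients are 1/k!.

def taylor_exp(x, degree):
    return sum(x ** k / math.factorial(k) for k in range(degree + 1))

for d in (1, 2, 4, 8):
    print(d, taylor_exp(1.0, d))
# Successive approximations of e = 2.71828...; the error shrinks
# rapidly as the degree grows.
```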

The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist smooth functions which are also not analytic.

Implicit function theorem

Main article: Implicit function theorem

Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if f(x, y) = x² + y² − 1, then the circle is the set of all pairs (x, y) such that f(x, y) = 0. This set is called the zero set of f, and is not the same as the graph of f, which is a paraboloid. The implicit function theorem converts relations such as f(x, y) = 0 into functions. It states that if f is continuously differentiable, then around most points, the zero set of f looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of f. The circle, for instance, can be pasted together from the graphs of the two functions ±√(1 − x²). In a neighborhood of every point on the circle except (−1, 0) and (1, 0), one of these two functions has a graph that looks like the circle. (These two functions also happen to meet (−1, 0) and (1, 0), but this is not guaranteed by the implicit function theorem.)
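The circle example can be checked directly; a minimal sketch verifying that the two graphs y = ±√(1 − x²) lie in the zero set of f:

```python
import math

# The unit circle as the zero set of f(x, y) = x^2 + y^2 - 1,
# locally the graphs of two functions y = +sqrt(1 - x^2) and
# y = -sqrt(1 - x^2).

f = lambda x, y: x ** 2 + y ** 2 - 1
upper = lambda x: math.sqrt(1 - x ** 2)
lower = lambda x: -math.sqrt(1 - x ** 2)

for x in (-0.9, 0.0, 0.5, 0.9):
    assert abs(f(x, upper(x))) < 1e-12
    assert abs(f(x, lower(x))) < 1e-12

# Near (1, 0) and (-1, 0) the tangent is vertical, and neither graph
# describes the circle as a function of x on a full neighbourhood.
print("both branches satisfy f(x, y) = 0")
```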

The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.


Notes

  1. ^ This is not a formal definition of what a tangent line is. The definition of the derivative as a limit makes this notion of tangent line rigorous.
  2. ^ Though the technical definition of a function is somewhat involved, it is easy to appreciate what a function is intuitively. A function takes an input and produces an output. For example, the function f(x) = x² takes a number and squares it. The number that the function performs an operation on is often represented using the letter x, but there is no difference whatsoever between writing f(x) = x² and writing f(y) = y². For this reason, x is often described as a 'dummy variable'.
  3. ^ The term infinitesimal can sometimes lead people to wrongly believe there is an 'infinitely small number', i.e. a positive real number that is smaller than any other real number. In fact, the term 'infinitesimal' is merely a shorthand for a limiting process. For this reason, dy/dx is not a fraction; rather, it is the limit of a fraction.
  4. ^ Not every function can be differentiated, which is why the definition only applies if 'the limit exists'. For more information, see the Wikipedia article on differentiability.
  5. ^ Newton began his work in 1665 and Leibniz began his in 1676. However, Leibniz published his first paper in 1684, predating Newton's publication in 1693. It is possible that Leibniz saw drafts of Newton's work in 1673 or 1676, or that Newton made use of Leibniz's work to refine his own. Both Newton and Leibniz claimed that the other plagiarized their respective works. This resulted in a bitter controversy between them over who first invented calculus, which shook the mathematical community in the early 18th century.
  6. ^ This was a monumental achievement, even though a restricted version had been proven previously by James Gregory (1638–1675), and some key examples can be found in the work of Pierre de Fermat (1601–1665).

References


Citations

  1. ^ "Definition of DIFFERENTIAL CALCULUS". www.merriam-webster.com. Retrieved 2020-05-09.
  2. ^ "Definition of INTEGRAL CALCULUS". www.merriam-webster.com. Retrieved 2020-05-09.
  3. ^ Alcock, Lara (2016). How to Think about Analysis. New York: Oxford University Press. pp. 155–157. ISBN 978-0-19-872353-0.
  4. ^ Weisstein, Eric W. "Derivative". mathworld.wolfram.com. Retrieved 2020-07-26.
  5. ^ See Euclid's Elements, The Archimedes Palimpsest, and O'Connor, John J.; Robertson, Edmund F., "Apollonius of Perga", MacTutor History of Mathematics Archive, University of St Andrews.
  6. ^ Ian G. Pearce. Bhaskaracharya II. Archived 2016-09-01 at the Wayback Machine.
  7. ^ Broadbent, T. A. A.; Kline, M. (October 1968). "Reviewed work(s): The History of Ancient Indian Mathematics by C. N. Srinivasiengar". The Mathematical Gazette. 52 (381): 307–8. doi:10.2307/3614212. JSTOR 3614212. S2CID 176660647.
  8. ^ Berggren 1990, p. 307.
  9. ^ Berggren 1990, p. 308.
  10. ^ Berggren 1990, pp. 308–309.
  11. ^ Sabra, A. I. (1981). Theories of Light: From Descartes to Newton. Cambridge University Press. p. 144. ISBN 978-0521284363.
  12. ^ Eves, H. (1990).

Works cited

  • Berggren, J. L. (1990). "Innovation and Tradition in Sharaf al-Din al-Tusi's Muadalat". Journal of the American Oriental Society. 110 (2): 304–309. doi:10.2307/604533. JSTOR 604533.
