Analysis of algorithms

From Wikipedia, the free encyclopedia
Study of resources used by an algorithm
Figure: For looking up a given entry in a given ordered list, both the binary and the linear search algorithm (which ignores ordering) can be used. The analysis of the former and the latter algorithm shows that it takes at most log2 n and n check steps, respectively, for a list of size n. In the depicted example list of size 33, searching for "Morin, Arthur" takes 5 and 28 steps with binary (shown in cyan) and linear (magenta) search, respectively.

Figure: Graphs of functions commonly used in the analysis of algorithms, showing the number of operations N versus input size n for each function.

In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.

The term "analysis of algorithms" was coined byDonald Knuth.[1] Algorithm analysis is an important part of a broadercomputational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a givencomputational problem. These estimates provide an insight into reasonable directions of search forefficient algorithms.

In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end.[2] For instance, binary search is said to run in a number of steps proportional to the logarithm of the size n of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.

Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g. Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer.
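As a small illustration of this unit-time assumption, the following sketch (a hypothetical Python example, not from the original article) counts the number of element lookups binary search performs on a sorted list and checks it against the log2(n) + 1 bound:

import math

def binary_search_steps(sorted_list, target):
    """Return the number of element lookups binary search performs."""
    low, high = 0, len(sorted_list) - 1
    steps = 0
    while low <= high:
        mid = (low + high) // 2
        steps += 1                      # one unit-time lookup of sorted_list[mid]
        if sorted_list[mid] == target:
            return steps
        elif sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return steps

n = 1000
data = list(range(n))
worst = max(binary_search_steps(data, t) for t in data)
print(worst, math.floor(math.log2(n)) + 1)   # the worst case never exceeds log2(n) + 1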

Cost models


Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual run-time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.

Two cost models are generally used:[3][4][5][6][7]

  • the uniform cost model, also called unit-cost model (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved
  • the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved

The latter is more cumbersome to use, so it is only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.
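The difference between the two models shows up when operands become large, as in the addition example above. A rough sketch (hypothetical Python, written only for illustration) that charges one unit per addition under the uniform model and a cost proportional to the operand bit length under the logarithmic model:

def uniform_cost_add(x, y):
    """Uniform cost model: any single addition costs 1, regardless of operand size."""
    return 1

def logarithmic_cost_add(x, y):
    """Logarithmic cost model: cost proportional to the number of bits involved."""
    return max(x.bit_length(), y.bit_length())

a, b = 2**10, 2**20
big_a, big_b = 2**100000, 2**200000

print(uniform_cost_add(a, b), uniform_cost_add(big_a, big_b))          # 1 1
print(logarithmic_cost_add(a, b), logarithmic_cost_add(big_a, big_b))  # 21 200001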

A key point that is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations available in practice; as a result, there are algorithms that are faster than what would naively be thought possible.[8]

Run-time analysis


Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great interest in computer science: a program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis.

Shortcomings of empirical metrics


Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.

Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following:

n (list size)   Computer A run-time (in nanoseconds)   Computer B run-time (in nanoseconds)
16              8                                       100,000
63              32                                      150,000
250             125                                     200,000
1,000           500                                     250,000

Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:

n (list size)      Computer A run-time (in nanoseconds)   Computer B run-time (in nanoseconds)
16                 8                                       100,000
63                 32                                      150,000
250                125                                     200,000
1,000              500                                     250,000
...                ...                                     ...
1,000,000          500,000                                 500,000
4,000,000          2,000,000                               550,000
16,000,000         8,000,000                               600,000
...                ...                                     ...
63,072 × 10^12     31,536 × 10^12 ns, or 1 year            1,375,000 ns, or 1.375 milliseconds

Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it is running an algorithm with a much slower growth rate.
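The crossover point can be estimated from the benchmark figures. Assuming Computer A's run-time is roughly n/2 ns and Computer B's is roughly 25,000 × log2(n) ns (both are approximate fits to the tables above, not measurements from the source), a short sketch in Python locates where the slower machine with the better algorithm overtakes the faster machine:

import math

def runtime_a(n):       # linear search on the fast machine (fit: about 0.5 ns per element)
    return n / 2

def runtime_b(n):       # binary search on the slow machine (fit: about 25,000 ns per halving)
    return 25_000 * math.log2(n)

n = 16
while runtime_a(n) <= runtime_b(n):
    n *= 2
print(n)   # about 2**20, roughly 1,000,000, matching the table's crossover row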

Orders of growth

Main article: Big O notation

Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n²).
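Written out formally (with T(n) denoting the algorithm's run-time, a symbol introduced here only for clarity), this is the standard definition of the bound:

\[ T(n) = O(f(n)) \iff \text{there exist } c > 0 \text{ and } n_0 \text{ such that } T(n) \le c\, f(n) \text{ for all } n \ge n_0 \]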

Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average case — for example, the worst-case scenario for quicksort is O(n²), but the average-case run-time is O(n log n).

Empirical orders of growth


Assuming the run-time follows a power rule, t ≈ kn^a, the coefficient a can be found[9] by taking empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating t2/t1 = (n2/n1)^a so that a = log(t2/t1)/log(n2/n1). In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line)—but it could still serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table:

n (list size)   Computer A run-time     Local order of growth   Computer B run-time     Local order of growth
                (in nanoseconds)        (n^_)                   (in nanoseconds)        (n^_)
15              7                       —                       100,000                 —
65              32                      1.04                    150,000                 0.28
250             125                     1.01                    200,000                 0.21
1,000           500                     1.00                    250,000                 0.16
...             ...                     ...                     ...                     ...
1,000,000       500,000                 1.00                    500,000                 0.10
4,000,000       2,000,000               1.00                    550,000                 0.07
16,000,000      8,000,000               1.00                    600,000                 0.06
...             ...                     ...                     ...                     ...

It is clearly seen that the first algorithm exhibits a linear order of growth, indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and is improving further still), empirically, than the first one.
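The local orders of growth in the table can be reproduced with a few lines of code. The following sketch (hypothetical Python, using the run-times from the table above) computes a = log(t2/t1)/log(n2/n1) for each pair of consecutive measurements:

import math

# (problem size, run-time in ns) pairs taken from the table above
computer_a = [(15, 7), (65, 32), (250, 125), (1_000, 500),
              (1_000_000, 500_000), (4_000_000, 2_000_000), (16_000_000, 8_000_000)]
computer_b = [(15, 100_000), (65, 150_000), (250, 200_000), (1_000, 250_000),
              (1_000_000, 500_000), (4_000_000, 550_000), (16_000_000, 600_000)]

def local_orders(measurements):
    """Slope of the run-time curve on a log-log plot between consecutive points."""
    return [math.log(t2 / t1) / math.log(n2 / n1)
            for (n1, t1), (n2, t2) in zip(measurements, measurements[1:])]

print([round(a, 2) for a in local_orders(computer_a)])  # stays near 1.0 (linear)
print([round(a, 2) for a in local_orders(computer_b)])  # keeps shrinking (sub-polynomial)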

Evaluating run-time complexity


The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:

1    get a positive integer n from input
2    if n > 10
3        print "This might take a while..."
4    for i = 1 to n
5        for j = 1 to i
6            print i * j
7    print "Done!"
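For reference, a direct translation of this pseudocode into runnable Python might look as follows (a sketch added here for illustration, keeping the same step structure so the step numbers used in the analysis below still apply):

# step 1: get a positive integer n from input
n = int(input("n = "))
# steps 2 and 3
if n > 10:
    print("This might take a while...")
# steps 4-6: nested loops, the inner body runs i times for each i = 1..n
for i in range(1, n + 1):
    for j in range(1, i + 1):
        print(i * j)
# step 7
print("Done!")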

A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. Say that the actions carried out in step 1 are considered to consume time at most T1, step 2 uses time at most T2, and so forth.

In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1–3 and step 7 is:

\[ T_1 + T_2 + T_3 + T_7. \]

The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute (n + 1) times,[10] which will consume T4(n + 1) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: the inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time.

Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression:

\[ T_6 + 2T_6 + 3T_6 + \cdots + (n-1)T_6 + nT_6 \]

which can be factored[11] as

\[ \left[ 1 + 2 + 3 + \cdots + (n-1) + n \right] T_6 = \left[ \frac{1}{2}(n^2 + n) \right] T_6 \]

The total time required to run the inner loop test can be evaluated similarly:

\[
\begin{aligned}
& 2T_5 + 3T_5 + 4T_5 + \cdots + (n-1)T_5 + nT_5 + (n+1)T_5 \\
={}& T_5 + 2T_5 + 3T_5 + 4T_5 + \cdots + (n-1)T_5 + nT_5 + (n+1)T_5 - T_5
\end{aligned}
\]

which can be factored as

\[
\begin{aligned}
& T_5 \left[ 1 + 2 + 3 + \cdots + (n-1) + n + (n+1) \right] - T_5 \\
={}& \left[ \frac{1}{2}(n^2 + n) \right] T_5 + (n+1)T_5 - T_5 \\
={}& \left[ \frac{1}{2}(n^2 + n) \right] T_5 + nT_5 \\
={}& \left[ \frac{1}{2}(n^2 + 3n) \right] T_5
\end{aligned}
\]

Therefore, the total run-time for this algorithm is:

\[ f(n) = T_1 + T_2 + T_3 + T_7 + (n+1)T_4 + \left[ \frac{1}{2}(n^2 + n) \right] T_6 + \left[ \frac{1}{2}(n^2 + 3n) \right] T_5 \]

which reduces to

\[ f(n) = \left[ \frac{1}{2}(n^2 + n) \right] T_6 + \left[ \frac{1}{2}(n^2 + 3n) \right] T_5 + (n+1)T_4 + T_1 + T_2 + T_3 + T_7 \]

As a rule of thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n² is the highest-order term, so one can conclude that f(n) = O(n²). Formally this can be proven as follows:

Prove that
\[ \left[ \frac{1}{2}(n^2 + n) \right] T_6 + \left[ \frac{1}{2}(n^2 + 3n) \right] T_5 + (n+1)T_4 + T_1 + T_2 + T_3 + T_7 \le cn^2, \quad n \ge n_0 \]

\[
\begin{aligned}
& \left[ \frac{1}{2}(n^2 + n) \right] T_6 + \left[ \frac{1}{2}(n^2 + 3n) \right] T_5 + (n+1)T_4 + T_1 + T_2 + T_3 + T_7 \\
\le{}& (n^2 + n)T_6 + (n^2 + 3n)T_5 + (n+1)T_4 + T_1 + T_2 + T_3 + T_7 \quad (\text{for } n \ge 0)
\end{aligned}
\]

Let k be a constant greater than or equal to [T1..T7]

\[
\begin{aligned}
& T_6(n^2 + n) + T_5(n^2 + 3n) + (n+1)T_4 + T_1 + T_2 + T_3 + T_7 \le k(n^2 + n) + k(n^2 + 3n) + kn + 5k \\
={}& 2kn^2 + 5kn + 5k \le 2kn^2 + 5kn^2 + 5kn^2 \ (\text{for } n \ge 1) = 12kn^2
\end{aligned}
\]

Therefore
\[ \left[ \frac{1}{2}(n^2 + n) \right] T_6 + \left[ \frac{1}{2}(n^2 + 3n) \right] T_5 + (n+1)T_4 + T_1 + T_2 + T_3 + T_7 \le cn^2,\ n \ge n_0 \text{ for } c = 12k,\ n_0 = 1 \]

A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm's run-time breaks down as follows:[12]

\[ 4 + \sum_{i=1}^{n} i \le 4 + \sum_{i=1}^{n} n = 4 + n^2 \le 5n^2 \ (\text{for } n \ge 1) = O(n^2). \]
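The operation counts derived above can also be checked empirically. The following sketch (hypothetical Python, added here for illustration) counts executions of the inner loop body and the inner loop test for the pseudocode above and compares them with the closed forms (n² + n)/2 and (n² + 3n)/2:

def count_steps(n):
    body = 0   # executions of step 6 (inner loop body)
    test = 0   # executions of step 5 (inner loop test, including the failing test)
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            test += 1
            body += 1
        test += 1          # the final test that exits the inner loop
    return body, test

for n in (1, 10, 100):
    body, test = count_steps(n)
    assert body == (n * n + n) // 2
    assert test == (n * n + 3 * n) // 2
    print(n, body, test)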

Growth rate analysis of other resources


The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages:

while file is still open:
    let n = size of file
    for every 100,000 kilobytes of increase in file size
        double the amount of memory reserved

In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources.
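A sketch of the reserved-memory growth described by this pseudocode (hypothetical Python; the initial reservation and the file growth pattern are assumptions made only for illustration):

reserved = 1            # initial memory reservation, in arbitrary units (assumption)
file_size_kb = 0

for step in range(10):
    file_size_kb += 100_000        # the file grows by another 100,000 kilobytes
    reserved *= 2                  # the program doubles its reserved memory
    print(file_size_kb, reserved)  # reserved memory doubles at every fixed increment of n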

Relevance


Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.

Constant factors


Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^32 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^64 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.

This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (2^65536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^64 bits); and binary log (log n) is less than 64 for virtually all practical data (2^64 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant-time algorithm results in a larger constant factor, e.g., one may have K > k log log n so long as K/k > 6 and n < 2^(2^6) = 2^64.
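The inequality at the end of that example is easy to check numerically. A quick sketch (hypothetical Python; the constants K and k are arbitrary choices satisfying K/k > 6):

import math

K = 7.0    # overhead of the constant-time algorithm (assumed)
k = 1.0    # per-step constant of the log-log algorithm (assumed)

n = 2**64
print(k * math.log2(math.log2(n)))      # k * log2(log2(2^64)) = 6.0
print(K > k * math.log2(math.log2(n)))  # True: the non-constant algorithm is faster here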

For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity n log n), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity n²) for small data, as the simpler algorithm is faster on small data.
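The idea behind such hybrid algorithms can be sketched in a few lines. The following is a simplified illustration (hypothetical Python, not Timsort itself): a merge sort that falls back to insertion sort below a small cutoff size:

CUTOFF = 32   # below this size, the asymptotically worse algorithm is faster in practice

def insertion_sort(a):
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_merge_sort(a):
    if len(a) <= CUTOFF:                 # small input: O(n^2) insertion sort wins
        return insertion_sort(a)
    mid = len(a) // 2
    left = hybrid_merge_sort(a[:mid])    # large input: O(n log n) merge sort
    right = hybrid_merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(hybrid_merge_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]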


Notes

  1. ^"Knuth: Recent News". 28 August 2016. Archived fromthe original on 28 August 2016.
  2. ^Cormen, Thomas H., ed. (2009).Introduction to algorithms (3rd ed.). Cambridge, Mass: MIT Press. pp. 44–52.ISBN 978-0-262-03384-8.OCLC 311310321.
  3. ^Alfred V. Aho; John E. Hopcroft; Jeffrey D. Ullman (1974).The design and analysis of computer algorithms. Addison-Wesley Pub. Co.ISBN 9780201000290., section 1.3
  4. ^Juraj Hromkovič (2004).Theoretical computer science: introduction to Automata, computability, complexity, algorithmics, randomization, communication, and cryptography. Springer. pp. 177–178.ISBN 978-3-540-14015-3.
  5. ^Giorgio Ausiello (1999).Complexity and approximation: combinatorial optimization problems and their approximability properties. Springer. pp. 3–8.ISBN 978-3-540-65431-5.
  6. ^Wegener, Ingo (2005),Complexity theory: exploring the limits of efficient algorithms, Berlin, New York:Springer-Verlag, p. 20,ISBN 978-3-540-21045-0
  7. ^Robert Endre Tarjan (1983).Data structures and network algorithms. SIAM. pp. 3–7.ISBN 978-0-89871-187-5.
  8. ^Examples of the price of abstraction?, cstheory.stackexchange.com
  9. ^How To Avoid O-Abuse and BribesArchived 2017-03-08 at theWayback Machine, at the blog "Gödel's Lost Letter and P=NP" by R. J. Lipton, professor of Computer Science at Georgia Tech, recounting idea by Robert Sedgewick
  10. ^an extra step is required to terminate the for loop, hence n + 1 and not n executions
  11. ^It can be proven byinduction that1+2+3++(n1)+n=n(n+1)2{\displaystyle 1+2+3+\cdots +(n-1)+n={\frac {n(n+1)}{2}}}
  12. ^This approach, unlike the above approach, neglects the constant time consumed by the loop tests which terminate their respective loops, but it istrivial to prove that such omission does not affect the final result

