In computing, a Monte Carlo algorithm is a randomized algorithm whose output may be incorrect with a certain (typically small) probability. Two examples of such algorithms are the Karger–Stein algorithm[1] and the Monte Carlo algorithm for minimum feedback arc set.[2]
The name refers to the Monte Carlo casino in the Principality of Monaco, which is well known around the world as an icon of gambling. The term "Monte Carlo" was first introduced in 1947 by Nicholas Metropolis.[3]
Las Vegas algorithms are a dual of Monte Carlo algorithms and never return an incorrect answer. However, they may make random choices as part of their work. As a result, the time taken might vary between runs, even with the same input.
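As an illustration, consider the standard toy problem of finding the position of an 'a' in an array in which half the entries are 'a'. The sketch below, with illustrative names not taken from the article, shows the defining behavior: the answer is always correct, but the running time is random.

```python
import random

def find_index_of_a(arr, rng):
    """Las Vegas search: the returned index is always correct;
    only the number of random probes (the running time) varies."""
    while True:
        i = rng.randrange(len(arr))
        if arr[i] == 'a':
            return i

# Half the entries are 'a', so each probe succeeds with probability 1/2
# and the expected number of probes is 2.
arr = ['a', 'b'] * 8
idx = find_index_of_a(arr, random.Random(0))
assert arr[idx] == 'a'
```

Each call probes random positions until it hits an 'a', so it can never answer incorrectly, but the number of iterations differs from run to run.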
If there is a procedure for verifying whether the answer given by a Monte Carlo algorithm is correct, and the probability of a correct answer is bounded above zero, then with probability one, running the algorithm repeatedly while testing the answers will eventually give a correct answer. Whether this process is a Las Vegas algorithm depends on whether halting with probability one is considered to satisfy the definition.
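This repetition can be sketched in Python. The names here are illustrative: `mc_algorithm` stands for the Monte Carlo algorithm and `verify` for the answer-checking procedure; the toy example guesses divisors at random, so the verifier makes the final answer always correct.

```python
import random

def repeat_until_verified(mc_algorithm, verify, x, rng):
    """Run a Monte Carlo algorithm until a verifier accepts its answer.
    Halts with probability 1 whenever the per-run success probability
    is bounded above zero."""
    while True:
        answer = mc_algorithm(x, rng)
        if verify(x, answer):
            return answer

# Toy Monte Carlo step: guess a candidate nontrivial divisor of x.
def guess_divisor(x, rng):
    return rng.randrange(2, x)

# Verifier: accept only genuine divisors, so the result is always correct.
def is_divisor(x, d):
    return x % d == 0

rng = random.Random(0)
d = repeat_until_verified(guess_divisor, is_divisor, 91, rng)
assert 91 % d == 0 and 1 < d < 91   # d is 7 or 13
```

The loop's expected running time depends on the success probability of a single guess, which is why the combined procedure behaves like a Las Vegas algorithm rather than a Monte Carlo one.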
While the answer returned by a deterministic algorithm is always expected to be correct, this is not the case for Monte Carlo algorithms. For decision problems, these algorithms are generally classified as either false-biased or true-biased. A false-biased Monte Carlo algorithm is always correct when it returns false; a true-biased algorithm is always correct when it returns true. While this describes algorithms with one-sided errors, others might have no bias; these are said to have two-sided errors. The answer they provide (whether true or false) is correct only with some bounded probability.
For instance, the Solovay–Strassen primality test is used to determine whether a given number is a prime number. It always answers true for prime inputs; for composite inputs, it answers false with probability at least 1⁄2 and true with probability less than 1⁄2. Thus, false answers from the algorithm are certain to be correct, whereas the true answers remain uncertain; this is said to be a 1⁄2-correct false-biased algorithm.
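A minimal Python sketch of one round of this test, using the standard textbook formulation via Euler's criterion (the helper `jacobi` computes the Jacobi symbol; function names are illustrative):

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen_round(n, rng):
    """One round for odd n >= 3: False means 'certainly composite',
    True means 'probably prime' (wrong with probability < 1/2)."""
    a = rng.randrange(2, n)
    x = jacobi(a, n)
    return x != 0 and pow(a, (n - 1) // 2, n) == x % n

rng = random.Random(0)
# A prime always passes every round.
assert all(solovay_strassen_round(97, rng) for _ in range(20))
# A composite (91 = 7 * 13) fails each round with probability >= 1/2,
# so 20 rounds all passing is astronomically unlikely.
assert not all(solovay_strassen_round(91, rng) for _ in range(20))
```

A single round answering false is definitive; a true answer only becomes convincing after repetition, as described next.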
For a Monte Carlo algorithm with one-sided errors, the failure probability can be reduced (and the success probability amplified) by running the algorithm k times. Consider again the Solovay–Strassen algorithm, which is 1⁄2-correct false-biased. One may run this algorithm multiple times, returning a false answer if it reaches a false response within k iterations, and otherwise returning true. Thus, if the number is prime then the answer is always correct, and if the number is composite then the answer is correct with probability at least 1 − (1 − 1⁄2)^k = 1 − 2^(−k).
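That amplification loop can be sketched generically; the hypothetical `test_once` stands for any 1⁄2-correct false-biased test, such as a single Solovay–Strassen round.

```python
import random

def amplified(test_once, n, k, rng):
    """Run a false-biased Monte Carlo test k times.
    Any False answer is definitive; otherwise return True.
    For a 1/2-correct test, the error probability falls to 2**-k."""
    for _ in range(k):
        if not test_once(n, rng):
            return False
    return True

# Toy 1/2-correct false-biased test: always True on "good" (even) inputs,
# True only half the time on "bad" (odd) ones.
def toy_test(n, rng):
    return n % 2 == 0 or rng.random() < 0.5

rng = random.Random(0)
assert amplified(toy_test, 4, 30, rng)       # good input: always True
assert not amplified(toy_test, 3, 30, rng)   # bad input: wrong with prob 2**-30
```

With a Solovay–Strassen round as `test_once` and, say, k = 50, a composite would be misclassified with probability at most 2^(−50).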
For Monte Carlo decision algorithms with two-sided error, the failure probability may again be reduced by running the algorithm k times and returning the majority function of the answers.
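A sketch of majority voting over a two-sided-error decider; the noisy decider here is a toy stand-in, and the scheme helps only when the per-call error probability is bounded below 1⁄2.

```python
import random

def majority_vote(decide_once, x, k, rng):
    """Run a two-sided-error Monte Carlo decision algorithm k times and
    return the majority answer. If each call errs with probability
    p < 1/2, the majority errs with probability exponentially small in k."""
    trues = sum(1 for _ in range(k) if decide_once(x, rng))
    return 2 * trues > k

# Toy decider: the correct answer is (x > 0), but each call
# independently errs with probability 0.3.
def noisy_sign(x, rng):
    correct = x > 0
    return correct if rng.random() < 0.7 else not correct

rng = random.Random(0)
assert majority_vote(noisy_sign, 5, 201, rng)
assert not majority_vote(noisy_sign, -5, 201, rng)
```

With 201 trials and a per-call error of 0.3, a wrong majority requires more than 100 individual errors against an expected 60, which is vanishingly unlikely by a Chernoff-style bound.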
The complexity class BPP describes decision problems that can be solved by polynomial-time Monte Carlo algorithms with a bounded probability of two-sided errors, and the complexity class RP describes problems that can be solved by a Monte Carlo algorithm with a bounded probability of one-sided error: if the correct answer is false, the algorithm always says so, but it may answer false incorrectly for some instances where the correct answer is true.[4] In contrast, the complexity class ZPP describes problems solvable by expected polynomial-time Las Vegas algorithms. ZPP ⊆ RP ⊆ BPP, but it is not known whether any of these complexity classes are distinct from each other; that is, Monte Carlo algorithms may have more computational power than Las Vegas algorithms, but this has not been proven.[4] Another complexity class, PP, describes decision problems with a polynomial-time Monte Carlo algorithm that is more accurate than flipping a coin, but where the error probability cannot necessarily be bounded away from 1⁄2.[4]
Randomized algorithms are primarily divided into two main types, Monte Carlo and Las Vegas; however, these represent only the top of a hierarchy and can be further categorized.[4]
"Both Las Vegas and Monte Carlo are dealing with decisions, i.e., problems in their decision version."[4] "This however should not give a wrong impression and confine these algorithms to such problems—both types of randomized algorithms can be used on numerical problems as well, problems where the output is not simple ‘yes’/‘no’, but where one needs to receive a result that is numerical in nature."[4]
| | Efficiency | Optimum | Failure (LV) / Error (MC) |
|---|---|---|---|
| Las Vegas (LV) | Probabilistic | Certain | |
| Sherwood | Certain, or Sherwood probabilistic (stronger bound than regular LV) | Certain | 0 |
| Numerical | Probabilistic, certain, or Sherwood probabilistic | Certain | or 0 |
| Monte Carlo (MC) | Certain | Probabilistic | (a probability which through repeated runs grows only sub-exponentially will inhibit the usefulness of the algorithm; typical case is) |
| Atlantic City | Certain | Probabilistic | |
| Numerical | Certain | Probabilistic | (algorithm type dependent) |
The preceding table represents a general framework for Monte Carlo and Las Vegas randomized algorithms.[4] Instead of the mathematical symbol one could use, thus making the probabilities in the worst case equal.[4]
Well-known Monte Carlo algorithms include the Solovay–Strassen primality test, the Baillie–PSW primality test, the Miller–Rabin primality test, and certain fast variants of the Schreier–Sims algorithm in computational group theory.
For algorithms that are part of the Stochastic Optimization (SO) group of algorithms, where the probability is not known in advance but is determined empirically, it is sometimes possible to merge Monte Carlo and such an algorithm "to have both probability bound calculated in advance and a Stochastic Optimization component."[4] "Example of such an algorithm is Ant Inspired Monte Carlo."[4][5] In this way, "drawback of SO has been mitigated, and a confidence in a solution has been established."[4][5]