Different models of computation give rise to different reasons that an algorithm may be nondeterministic, and different ways to evaluate its performance or correctness:
A concurrent algorithm can perform differently on different runs due to a race condition. This can happen even with a single-threaded algorithm when it interacts with resources external to it. In general, such an algorithm is considered to perform correctly only when all possible runs produce the desired results.
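A minimal sketch of such a race condition in Python (the `worker` function and the `time.sleep(0)` yield are illustrative assumptions, inserted to widen the window between read and write):

```python
import threading
import time

counter = 0  # shared state, updated without a lock

def worker(n):
    global counter
    for _ in range(n):
        tmp = counter      # read the shared value
        time.sleep(0)      # yield to other threads, widening the race window
        counter = tmp + 1  # write back; updates made by other threads meanwhile are lost

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With four threads of 100 increments each, a correct run would give 400;
# because of lost updates, the result usually comes out smaller and varies from run to run.
print(counter)
```

Under the correctness criterion above, this algorithm is incorrect: it is considered correct only if every possible interleaving yields 400, which would require a lock or an atomic update.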
A probabilistic algorithm's behavior depends on a random number generator called by the algorithm. These are subdivided into Las Vegas algorithms, for which (like concurrent algorithms) all runs must produce correct output, and Monte Carlo algorithms, which are allowed to fail or produce incorrect results with low probability. The performance of such an algorithm is often measured probabilistically, for instance using an analysis of its expected time.
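The two flavors can be sketched in Python (function names are ours): randomized quicksort is a classic Las Vegas algorithm, always producing correct output but with random running time, while the Fermat primality test is Monte Carlo, fast but occasionally wrong.

```python
import random

def randomized_quicksort(xs):
    """Las Vegas: always returns the sorted list; only the runtime is random."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)  # random pivot gives expected O(n log n) time
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    return randomized_quicksort(lo) + eq + randomized_quicksort(hi)

def fermat_probably_prime(n, rounds=20):
    """Monte Carlo: may wrongly declare a composite 'probably prime'
    with low probability (Carmichael numbers fool it for most bases)."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False   # witness found: n is certainly composite
    return True            # no witness found: n is probably prime
```

Note the asymmetry: `randomized_quicksort` never errs, whereas `fermat_probably_prime` can return `True` for a composite such as the Carmichael number 561 if every sampled base happens to be coprime to it.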
In computational complexity theory, nondeterminism is often modeled using an explicit mechanism for making a nondeterministic choice, such as in a nondeterministic Turing machine. For these models, a nondeterministic algorithm is considered to perform correctly when, for each input, there exists a run that produces the desired result, even when other runs produce incorrect results. This existential power makes nondeterministic algorithms of this sort more efficient than known deterministic algorithms for many problems. The P versus NP problem encapsulates this conjectured greater efficiency available to nondeterministic algorithms. Algorithms of this sort are used to define complexity classes based on nondeterministic time and nondeterministic space complexity. They may be simulated using nondeterministic programming, a method for specifying nondeterministic algorithms and searching for the choices that lead to a correct run, often using a backtracking search.
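As a sketch of such a simulation, a nondeterministic "guess" for the NP-complete subset-sum problem can be replayed by a backtracking search that explores both outcomes of each binary choice (the function names are illustrative, and the input is assumed to consist of non-negative integers):

```python
def subset_sum(nums, target):
    """Accept iff some run of the nondeterministic algorithm accepts,
    i.e. some subset of nums sums to target. The deterministic simulation
    backtracks through both branches of every nondeterministic choice."""
    def run(i, remaining):
        if remaining == 0:
            return True                      # this run accepts
        if i == len(nums) or remaining < 0:  # pruning assumes non-negative inputs
            return False                     # this run rejects
        # Nondeterministic choice at step i: take nums[i] or skip it.
        return run(i + 1, remaining - nums[i]) or run(i + 1, remaining)
    return run(0, target)
```

For example, `subset_sum([3, 34, 4, 12, 5, 2], 9)` is `True` (one accepting run chooses 4 and 5), while `subset_sum([2, 4, 6], 5)` is `False` because every run rejects. The nondeterministic machine accepts in a polynomial number of steps along one run; the backtracking simulation may take exponential time, which is exactly the gap the P versus NP problem asks about.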
The term nondeterministic algorithm was used by Robert W. Floyd as early as 1967.[6] The paper uses the graphical language of flow charts, a different way to formalize algorithms compared to automata or Turing machines, and one that at the time was closer to the practice of programming on electronic computers.
In philosophy, ideas revolving around determinism versus free will go back at least to ancient Greece. Nondeterminacy as a concept in computer science refers to a rather limited choice among previously and explicitly defined options, often only finitely many at each computational step, whereas in philosophy the possible options need not be laid out or formally defined beforehand. Because of this additional restriction, nondeterminism in computer science constitutes a new development compared to nondeterminism in traditional philosophy.
^ Williams, H. C.; Shallit, J. O. (1994), "Factoring integers before computers", in Gautschi, Walter (ed.), Mathematics of Computation 1943–1993: a half-century of computational mathematics; Papers from the Symposium on Numerical Analysis and the Minisymposium on Computational Number Theory held in Vancouver, British Columbia, August 9–13, 1993, Proceedings of Symposia in Applied Mathematics, vol. 48, Amer. Math. Soc., Providence, RI, pp. 481–531, doi:10.1090/psapm/048/1314885, ISBN 978-0-8218-0291-5, MR 1314885; see p. 504, "Perhaps Pocklington also deserves credit as the inventor of the randomized algorithm".
^ Rabin, M. O.; Scott, D. (April 1959). "Finite Automata and Their Decision Problems". IBM Journal of Research and Development. 3 (2): 114–125. doi:10.1147/rd.32.0114.