Source: std/numeric.d
CustomFloatFlags: int
Format flags for CustomFloat: signed, storeNormalized, allowDenorm, infinity, nan, probability, negativeUnsigned, allowDenormZeroOnly, ieee, none.

CustomFloat(uint bits) if (bits == 8 || bits == 16 || bits == 32 || bits == 64 || bits == 80)
CustomFloat(uint precision, uint exponentWidth, CustomFloatFlags flags = CustomFloatFlags.ieee) if (((flags & flags.signed) + precision + exponentWidth) % 8 == 0 && (precision + exponentWidth > 0))
CustomFloat(uint precision, uint exponentWidth, CustomFloatFlags flags, uint bias) if (isCorrectCustomFloat(precision, exponentWidth, flags));

import std.math.trigonometry : sin, cos;

// Define a 16-bit floating point value
CustomFloat!16 x;                                  // using the number of bits
CustomFloat!(10, 5) y;                             // using the precision and exponent width
CustomFloat!(10, 5, CustomFloatFlags.ieee) z;      // using the precision, exponent width and format flags
CustomFloat!(10, 5, CustomFloatFlags.ieee, 15) w;  // using the precision, exponent width, format flags and exponent offset bias

// Use the 16-bit floats mostly like normal numbers
w = x*y - 1;

// Function calls require conversion
z = sin(+x) + cos(+y);                        // use unary plus to concisely convert to a real
z = sin(x.get!float) + cos(y.get!float);      // or use get!T
z = sin(cast(float) x) + cos(cast(float) y);  // or use cast(T) to explicitly convert

// Define an 8-bit custom float for storing probabilities
alias Probability = CustomFloat!(4, 4, CustomFloatFlags.ieee ^ CustomFloatFlags.probability ^ CustomFloatFlags.signed);
auto p = Probability(0.5);
FPTemporary(F) if (isFloatingPoint!F)

FPTemporary stems from the optimized floating-point operations and registers present in virtually all processors. When adding numbers as in the example below, the addition may in fact be done in real precision internally. In that case, storing the intermediate result in double format is not only less precise, it is also (surprisingly) slower, because a conversion from real to double is performed every pass through the loop. This being a lose-lose situation, FPTemporary!F has been defined as the fastest type to use for calculations at precision F. There is no need to define a type for the most accurate calculations, as that is always real.

Finally, there is no guarantee that using FPTemporary!F will always be fastest, as the speed of floating-point calculations depends on very many factors.

import std.math.operations : isClose;

// Average numbers in an array
double avg(in double[] a)
{
    if (a.length == 0) return 0;
    FPTemporary!double result = 0;
    foreach (e; a) result += e;
    return result / a.length;
}

auto a = [1.0, 2.0, 3.0];
assert(isClose(avg(a), 2));
secantMethod(alias fun)

Finds a root of the function fun using the secant method, starting from the two given initial estimates.

import std.math.operations : isClose;
import std.math.trigonometry : cos;

float f(float x)
{
    return cos(x) - x*x*x;
}
auto x = secantMethod!(f)(0f, 1f);
assert(isClose(x, 0.865474));
findRoot(T, DF, DT)(scope DF f, const T a, const T b, scope DT tolerance) if (is(typeof(tolerance(T.init, T.init)) : bool) && is(typeof(f(T.init)) == R, R) && isFloatingPoint!R);
findRoot(T, DF)(scope DF f, const T a, const T b);

Given a function f and a range [a .. b] such that f(a) and f(b) have opposite signs or at least one of them equals ±0, returns the value of x in the range which is closest to a root of f(x). If f(x) has more than one root in the range, one will be chosen arbitrarily. If f(x) returns NaN, NaN will be returned; otherwise, this algorithm is guaranteed to succeed.

Uses an algorithm based on TOMS748, which uses inverse cubic interpolation whenever possible, otherwise reverting to parabolic or secant interpolation. Compared to TOMS748, this implementation improves worst-case performance by a factor of more than 100, and typical performance by a factor of 2. For 80-bit reals, most problems require 8 to 15 calls to f(x) to achieve full machine precision. The worst-case performance (pathological cases) is approximately twice the number of bits.

References: "On Enclosing Simple Roots of Nonlinear Equations", G. Alefeld, F.A. Potra, Yixun Shi, Mathematics of Computation 61, pp 733-744 (1993). Fortran code available from www.netlib.org as algorithm TOMS748.
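As a quick illustration (my sketch, not part of the original documentation), the simple overload findRoot(f, a, b) finding the real root of x^3 - 1, bracketed by [0, 2] where the function changes sign:

import std.math.operations : isClose;

// f(0) = -1 and f(2) = 7 have opposite signs, so the bracket is valid.
auto r = findRoot((double x) => x ^^ 3 - 1, 0.0, 2.0);
assert(isClose(r, 1.0)); // the only real root is x = 1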
findRoot(T, R, DF, DT)(scope DF f, const T ax, const T bx, const R fax, const R fbx, scope DT tolerance) if (is(typeof(tolerance(T.init, T.init)) : bool) && is(typeof(f(T.init)) == R) && isFloatingPoint!R);
findRoot(T, R, DF)(scope DF f, const T ax, const T bx, const R fax, const R fbx);
findRoot(T, R)(scope R delegate(T) f, const T a, const T b, scope bool delegate(T lo, T hi) tolerance = (T a, T b) => false);

Parameters:
DF f         | Function to be analyzed. |
T ax         | Left bound of the initial range of f known to contain the root. |
T bx         | Right bound of the initial range of f known to contain the root. |
R fax        | Value of f(ax). |
R fbx        | Value of f(bx). fax and fbx should have opposite signs. (f(ax) and f(bx) are commonly known in advance.) |
DT tolerance | Defines an early termination condition. Receives the current upper and lower bounds on the root. The delegate must return true when these bounds are acceptable. If this function always returns false, full machine precision will be achieved. |
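To illustrate the tolerance parameter (a sketch of mine, not from the library documentation), the overload taking f, a, b, and a tolerance delegate can be stopped early once the bracketing interval is narrow enough:

import std.math.operations : isClose;

// Stop once the bracket around the root of x^2 - 2 is narrower than 1e-3.
auto r = findRoot((double x) => x ^^ 2 - 2, 1.0, 2.0,
                  (double lo, double hi) => hi - lo < 1e-3);
assert(isClose(r, 1.41421356, 1e-3)); // close to sqrt(2) within the requested tolerance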
findLocalMin(T, DF)(scope DF f, const T ax, const T bx, const T relTolerance = sqrt(T.epsilon), const T absTolerance = sqrt(T.epsilon))

Finds a local minimum of a real function f(x) via bracketing. Given a function f and a range (ax .. bx), returns the value of x in the range which is closest to a minimum of f(x). f is never evaluated at the endpoints ax and bx. If f(x) has more than one minimum in the range, one will be chosen arbitrarily. If f(x) returns NaN or -Infinity, (x, f(x), NaN) will be returned; otherwise, this algorithm is guaranteed to succeed.

Parameters:
DF f           | Function to be analyzed. |
T ax           | Left bound of the initial range of f known to contain the minimum. |
T bx           | Right bound of the initial range of f known to contain the minimum. |
T relTolerance | Relative tolerance. |
T absTolerance | Absolute tolerance. |
Preconditions: ax and bx shall be finite reals. relTolerance shall be a normal positive real. absTolerance shall be a normal positive real no less than T.epsilon*2.
Returns: A tuple consisting of x, y = f(x), and error = 3 * (absTolerance * fabs(x) + relTolerance).

The method used is a combination of golden section search and successive parabolic interpolation. Convergence is never much slower than that for a Fibonacci search.

References: "Algorithms for Minimization without Derivatives", Richard Brent, Prentice-Hall, Inc. (1973)
import std.math.operations : isClose;

auto ret = findLocalMin((double x) => (x - 4) ^^ 2, -1e7, 1e7);
assert(ret.x.isClose(4.0));
assert(ret.y.isClose(0.0, 0.0, 1e-10));
euclideanDistance(Range1, Range2)(Range1 a, Range2 b)
euclideanDistance(Range1, Range2, F)(Range1 a, Range2 b, F limit)

Computes the Euclidean distance between input ranges a and b. The two ranges must have the same length. The three-parameter version stops computation as soon as the distance is greater than or equal to limit (this is useful to save computation if a small distance is sought).

dotProduct(Range1, Range2)(Range1 a, Range2 b)
dotProduct(F1, F2)(in F1[] avector, in F2[] bvector);
dotProduct(F, uint N)(ref scope const F[N] a, ref scope const F[N] b)

Computes the dot product of input ranges a and b. The two ranges must have the same length. If both ranges define length, the check is done once; otherwise, it is done at each iteration.

cosineSimilarity(Range1, Range2)(Range1 a, Range2 b)

Computes the cosine similarity of input ranges a and b. The two ranges must have the same length. If both ranges define length, the check is done once; otherwise, it is done at each iteration. If either range has all-zero elements, returns 0. (A combined usage sketch for euclideanDistance, dotProduct, and cosineSimilarity follows the normalize example below.)

normalize(R)(R range, ElementType!R sum = 1)

Normalizes values in range by multiplying each element with a number chosen such that values sum up to sum. If elements in range sum to zero, assigns sum / range.length to all of them. Normalization makes sense only if all elements in range are positive. normalize assumes that is the case without checking it.

Returns: true if normalization completed normally, false if all elements in range were zero or if range is empty.

double[] a = [];
assert(!normalize(a));
a = [ 1.0, 3.0 ];
assert(normalize(a));
writeln(a); // [0.25, 0.75]
assert(normalize!(typeof(a))(a, 50)); // a = [12.5, 37.5]
a = [ 0.0, 0.0 ];
assert(!normalize(a));
writeln(a); // [0.5, 0.5]
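A combined sketch (mine, not from the original documentation) for euclideanDistance, dotProduct, and cosineSimilarity on two fixed vectors; the expected values follow directly from the definitions:

import std.math.operations : isClose;

double[] u = [ 3.0, 4.0 ];
double[] v = [ 6.0, 8.0 ];

writeln(euclideanDistance(u, v)); // 5; sqrt((3-6)^2 + (4-8)^2)
writeln(dotProduct(u, v));        // 50; 3*6 + 4*8
assert(isClose(cosineSimilarity(u, v), 1.0)); // v is a positive multiple of u, so the angle is 0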
sumOfLog2s(Range)(Range r)

Computes the sum of the binary logarithms (log2) of the elements of r. The error of this method is much smaller than with a naive sum of log2.

import std.math.traits : isNaN;

writeln(sumOfLog2s(new double[0])); // 0
writeln(sumOfLog2s([0.0L]));        // -real.infinity
writeln(sumOfLog2s([-0.0L]));       // -real.infinity
writeln(sumOfLog2s([2.0L]));        // 1
assert(sumOfLog2s([-2.0L]).isNaN());
assert(sumOfLog2s([real.nan]).isNaN());
assert(sumOfLog2s([-real.nan]).isNaN());
writeln(sumOfLog2s([real.infinity])); // real.infinity
assert(sumOfLog2s([-real.infinity]).isNaN());
writeln(sumOfLog2s([0.25, 0.25, 0.25, 0.125])); // -9
entropy(Range)(Range r)
entropy(Range, F)(Range r, F max)

Computes the entropy of input range r in bits. This function assumes (without checking) that the values in r are all in [0, 1]. For the entropy to be meaningful, often r should be normalized too (i.e., its values should sum to 1). The two-parameter version stops evaluating as soon as the intermediate result is greater than or equal to max. (A small entropy sketch follows the kullbackLeiblerDivergence example below.)

kullbackLeiblerDivergence(Range1, Range2)(Range1 a, Range2 b)

Computes the Kullback-Leibler divergence between input ranges a and b, which is the sum ai * log(ai / bi). The base of logarithm is 2. The ranges are assumed to contain elements in [0, 1]. Usually the ranges are normalized probability distributions, but this is not required or checked by kullbackLeiblerDivergence. If any element bi is zero and the corresponding element ai nonzero, returns infinity. (Otherwise, if ai == 0 && bi == 0, the term ai * log(ai / bi) is considered zero.) If the inputs are normalized, the result is positive.

import std.math.operations : isClose;

double[] p = [ 0.0, 0, 0, 1 ];
writeln(kullbackLeiblerDivergence(p, p)); // 0
double[] p1 = [ 0.25, 0.25, 0.25, 0.25 ];
writeln(kullbackLeiblerDivergence(p1, p1)); // 0
writeln(kullbackLeiblerDivergence(p, p1)); // 2
writeln(kullbackLeiblerDivergence(p1, p)); // double.infinity
double[] p2 = [ 0.2, 0.2, 0.2, 0.4 ];
assert(isClose(kullbackLeiblerDivergence(p1, p2), 0.0719281, 1e-5));
assert(isClose(kullbackLeiblerDivergence(p2, p1), 0.0780719, 1e-5));
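entropy itself has no example above; a small sketch (mine, not from the library documentation):

double[] uniform = [ 0.25, 0.25, 0.25, 0.25 ];
writeln(entropy(uniform)); // 2; four equally likely outcomes carry log2(4) = 2 bits

double[] skewed = [ 0.5, 0.25, 0.25 ];
writeln(entropy(skewed)); // 1.5; 0.5*1 + 0.25*2 + 0.25*2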
jensenShannonDivergence(Range1, Range2)(Range1 a, Range2 b)
jensenShannonDivergence(Range1, Range2, F)(Range1 a, Range2 b, F limit)

Computes the Jensen-Shannon divergence between a and b, which is the sum (ai * log(2 * ai / (ai + bi)) + bi * log(2 * bi / (ai + bi))) / 2. The base of logarithm is 2. The ranges are assumed to contain elements in [0, 1]. Usually the ranges are normalized probability distributions, but this is not required or checked by jensenShannonDivergence. If the inputs are normalized, the result is bounded within [0, 1]. The three-parameter version stops evaluations as soon as the intermediate result is greater than or equal to limit.

import std.math.operations : isClose;

double[] p = [ 0.0, 0, 0, 1 ];
writeln(jensenShannonDivergence(p, p)); // 0
double[] p1 = [ 0.25, 0.25, 0.25, 0.25 ];
writeln(jensenShannonDivergence(p1, p1)); // 0
assert(isClose(jensenShannonDivergence(p1, p), 0.548795, 1e-5));
double[] p2 = [ 0.2, 0.2, 0.2, 0.4 ];
assert(isClose(jensenShannonDivergence(p1, p2), 0.0186218, 1e-5));
assert(isClose(jensenShannonDivergence(p2, p1), 0.0186218, 1e-5));
assert(isClose(jensenShannonDivergence(p2, p1, 0.005), 0.00602366, 1e-5));
gapWeightedSimilarity(alias comp = "a == b", R1, R2, F)(R1 s, R2 t, F lambda)

Computes a similarity measure between s and t based on all of their common subsequences of all lengths. Gapped subsequences are also included. For example, gapWeightedSimilarity counts the following matches:

string[] s = ["Hello", "brave", "new", "world"];
string[] t = ["Hello", "new", "world"];
assert(gapWeightedSimilarity(s, t, 1) == 7);

Note how the gaps in matching are simply ignored; for example, ("Hello", "new") is deemed as good a match as ("new", "world"). This may be too permissive for some applications. To eliminate gapped matches entirely, use lambda = 0:

string[] s = ["Hello", "brave", "new", "world"];
string[] t = ["Hello", "new", "world"];
assert(gapWeightedSimilarity(s, t, 0) == 4);

The call above eliminated the gapped matches ("Hello", "new"), ("Hello", "world"), and ("Hello", "new", "world") from the tally. That leaves only 4 matches.

The most interesting case is when gapped matches still participate in the result, but not as strongly as ungapped matches. The result will be a smooth, fine-grained similarity measure between the input strings. This is where values of lambda between 0 and 1 enter into play: gapped matches are exponentially penalized with the number of gaps, with base lambda. This means that an ungapped match adds 1 to the return value; a match with one gap in either string adds lambda to the return value; ...; a match with a total of n gaps in both strings adds pow(lambda, n) to the return value. In the example above, we have 4 matches without gaps, 2 matches with one gap, and 1 match with three gaps. The latter match is ("Hello", "world"), which has two gaps in the first string and one gap in the second string, totaling three gaps. Summing these up we get 4 + 2 * lambda + pow(lambda, 3).

string[] s = ["Hello", "brave", "new", "world"];
string[] t = ["Hello", "new", "world"];
assert(gapWeightedSimilarity(s, t, 0.5) == 4 + 0.5 * 2 + 0.125);
gapWeightedSimilarity is useful wherever a smooth similarity measure between sequences allowing for approximate matches is needed. The examples above are given with words, but any sequences with elements comparable for equality are allowed, e.g. characters or numbers. gapWeightedSimilarity uses a highly optimized dynamic programming implementation that needs 16 * min(s.length, t.length) extra bytes of memory and Ο(s.length * t.length) time to complete.

gapWeightedSimilarityNormalized(alias comp = "a == b", R1, R2, F)(R1 s, R2 t, F lambda, F sSelfSim = F.init, F tSelfSim = F.init)

gapWeightedSimilarityNormalized computes a normalized version of the similarity, computed as gapWeightedSimilarity(s, t, lambda) / sqrt(gapWeightedSimilarity(s, s, lambda) * gapWeightedSimilarity(t, t, lambda)). The function gapWeightedSimilarityNormalized (a so-called normalized kernel) is bounded in [0, 1], reaches 0 only for ranges that don't match in any position, and 1 only for identical ranges.

sSelfSim and tSelfSim are meant for avoiding duplicate computation. Many applications may have already computed gapWeightedSimilarity(s, s, lambda) and/or gapWeightedSimilarity(t, t, lambda). In that case, they can be passed as sSelfSim and tSelfSim, respectively.

import std.math.operations : isClose;
import std.math.algebraic : sqrt;

string[] s = ["Hello", "brave", "new", "world"];
string[] t = ["Hello", "new", "world"];
writeln(gapWeightedSimilarity(s, s, 1)); // 15
writeln(gapWeightedSimilarity(t, t, 1)); // 7
writeln(gapWeightedSimilarity(s, t, 1)); // 7
assert(isClose(gapWeightedSimilarityNormalized(s, t, 1), 7.0 / sqrt(15.0 * 7), 0.01));
GapWeightedSimilarityIncremental(Range, F = double) if (isRandomAccessRange!Range && hasLength!Range);
gapWeightedSimilarityIncremental(R, F)(R r1, R r2, F penalty);

Computes the gap-weighted similarity incrementally: iterating the returned object yields, for each match length in turn, the contribution of matches of that length to the overall similarity (see the example below).

string[] s = ["Hello", "brave", "new", "world"];
string[] t = ["Hello", "new", "world"];
auto simIter = gapWeightedSimilarityIncremental(s, t, 1.0);
assert(simIter.front == 3); // three 1-length matches
simIter.popFront();
assert(simIter.front == 3); // three 2-length matches
simIter.popFront();
assert(simIter.front == 1); // one 3-length match
simIter.popFront();
assert(simIter.empty); // no more matches
this(Range s, Range t, F lambda);

Constructs an object from two ranges s and t and a penalty lambda. The constructor completes in Ο(s.length * t.length) time and computes all matches of length 1.

opSlice();
popFront();
front();
empty();

gcd(T, U)(T a, U b)
gcd(T)(T a, T b)

Computes the greatest common divisor of a and b using an efficient algorithm such as Euclid's or Stein's algorithm.

Parameters:
T a | Integer value of any numerical type that supports the modulo operator %. If bit-shifting << and >> are also supported, Stein's algorithm will be used; otherwise, Euclid's algorithm is used as a fallback. |
U b | Integer value of any equivalent numerical type. |
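gcd has no example above; a brief sketch (mine, not from the original documentation):

writeln(gcd(48, 18)); // 6, the greatest common divisor of 48 and 18
writeln(gcd(7, 5));   // 1; 7 and 5 are coprime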
lcm(T, U)(T a, U b)
lcm(T)(T a, T b)

Computes the least common multiple of a and b.

writeln(lcm(1, 2)); // 2
writeln(lcm(3, 4)); // 12
writeln(lcm(5, 6)); // 30
Fft;

this(size_t size);

Creates an Fft object for computing fast Fourier transforms of inputs of length size or smaller. size must be a power of two.

fft(F = double, R)(R range) const

Computes the discrete Fourier transform of range. range must be a random-access range with slicing and a length equal to size as provided at the construction of this object. The contents of range can be either numeric types, which will be interpreted as pure real values, or complex types with properties or members .re and .im that can be read.

Note: Pure real FFTs are automatically detected and the relevant optimizations are performed.
Conventions: The exponent is negative and the factor is one, i.e., output[j] := sum[ exp(-2 PI i j k / N) input[k] ].
fft(Ret, R)(R range, Ret buf) const

Same as above, but stores the result in the user-provided buffer buf instead of allocating a new one.

inverseFft(F = double, R)(R range) const

Computes the inverse discrete Fourier transform of range.

Conventions: The exponent is positive and the factor is 1/N, i.e., output[j] := (1 / N) sum[ exp(+2 PI i j k / N) input[k] ].
inverseFft(Ret, R)(R range, Ret buf) const

fft(F = double, R)(R range);
fft(Ret, R)(R range, Ret buf);
inverseFft(F = double, R)(R range);
inverseFft(Ret, R)(R range, Ret buf);

Convenience functions that perform a one-off forward or inverse transform without requiring the caller to manage an Fft object.

Note: In addition to convenience, these functions are slightly more efficient than manually creating an Fft object for a single use, as the Fft object is deterministically destroyed before these functions return.
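A round-trip sketch (my illustration, not from the original documentation) using the convenience functions on a small power-of-two input:

import std.math.operations : isClose;

// A purely real input of power-of-two length.
double[] signal = [ 1.0, 2.0, 3.0, 4.0 ];

// Forward transform; the result is a range of complex values.
auto freq = fft(signal);
assert(isClose(freq[0].re, 10.0)); // bin 0 is the unscaled sum of the inputs, per the convention above

// Inverse transform recovers the original samples (the 1/N factor is applied by inverseFft).
auto back = inverseFft(freq);
assert(isClose(back[0].re, 1.0));
assert(isClose(back[3].re, 4.0));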
decimalToFactorial(ulong decimal, ref ubyte[21] fac);

Converts the decimal value decimal into a value in the factorial number system, stored in fac.

Parameters:
ulong decimal | The decimal value to convert into the factorial number system. |
ubyte[21] fac | The array in which to store the factorial number. The array is of size 21, as ulong.max requires 21 digits in the factorial number system. |
Returns: The number of digits of the factorial number stored in fac.

ubyte[21] fac;
size_t idx = decimalToFactorial(2982, fac);

writeln(fac[0]); // 4
writeln(fac[1]); // 0
writeln(fac[2]); // 4
writeln(fac[3]); // 1
writeln(fac[4]); // 0
writeln(fac[5]); // 0
writeln(fac[6]); // 0
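As a worked check of the example (my arithmetic, not part of the original documentation): 2982 = 4*720 + 0*120 + 4*24 + 1*6 + 0*2 + 0*1 + 0, so the digits in the factorial number system are 4 0 4 1 0 0 0, matching the seven values printed above.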