The idea is to have a Decimal data type, for every use where decimals are needed but binary floating point is too inexact.
The Decimal data type will support the Python standard functions and operations, and must comply with the decimal arithmetic ANSI standard X3.274-1996 [1].
Decimal will be floating point (as opposed to fixed point) and will have bounded precision (the precision is the upper limit on the number of significant digits in a result). However, precision is user-settable, and a notion of significant trailing zeroes is supported so that fixed-point usage is also possible.
This work is based on code and test functions written by Eric Price, Aahz and Tim Peters. Just before Python 2.4a1, the decimal.py reference implementation was moved into the standard library; along with the documentation and the test suite, this was the work of Raymond Hettinger. Much of the explanation in this PEP is taken from Cowlishaw’s work [2], comp.lang.python and python-dev.
Here I’ll explain why I think a Decimal data type is needed and why the other numeric data types are not enough.
I wanted a Money data type, and after proposing a pre-PEP in comp.lang.python, the community agreed to have a numeric data type with the needed arithmetic behaviour, and then build Money over it: all the considerations about quantity of digits after the decimal point, rounding, etc., will be handled through Money. It is not the purpose of this PEP to have a data type that can be used as Money without further effort.
One of the biggest advantages of implementing a standard is that someone already thought out all the creepy cases for you. And to a standard GvR redirected me: Mike Cowlishaw’s General Decimal Arithmetic specification [2]. This document defines a general purpose decimal arithmetic. A correct implementation of this specification will conform to the decimal arithmetic defined in ANSI/IEEE standard 854-1987, except for some minor restrictions, and will also provide unrounded decimal arithmetic and integer arithmetic as proper subsets.
In decimal math, there are many numbers that can’t be represented with a fixed number of decimal digits, e.g. 1/3 = 0.3333333333…
In base 2 (the way that standard floating point is calculated), 1/2 = 0.1, 1/4 = 0.01, 1/8 = 0.001, etc. Decimal 0.2 equals 2/10 equals 1/5, resulting in the binary fractional number 0.001100110011001… As you can see, the problem is that some decimal numbers can’t be represented exactly in binary, resulting in small roundoff errors.
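For instance, a quick illustration with standard binary floats (plain Python, added here just to make the roundoff visible):

>>> 0.1 + 0.1 + 0.1 == 0.3
False
>>> "%.20f" % 1.1
'1.10000000000000008882'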
So we need a data type that represents decimal numbers exactly: instead of a binary data type, we need a decimal one.
So we go to decimal, but why floating point?
Floating point numbers use a fixed quantity of digits (precision) to represent a number, working with an exponent when the number gets too big or too small. For example, with a precision of 5:

1234    ==>  1234e0
12345   ==>  12345e0
123456  ==>  12346e1
(note that in the last line the number got rounded to fit in five digits).
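A sketch of the same behaviour with the decimal module that eventually shipped (assuming the default half-even rounding; the unary plus applies the context and therefore forces the rounding):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 5
>>> +Decimal('123456')
Decimal('1.2346E+5')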
In contrast, we have the example of a long integer with infinite precision, meaning that you can have the number as big as you want, and you’ll never lose any information.
In a fixed point number, the position of the decimal point is fixed. For a fixed point data type, check Tim Peters’ FixedPoint at SourceForge [4]. I’ll go for floating point because it’s easier to implement the arithmetic behaviour of the standard, and then you can implement a fixed point data type over Decimal.
But why can’t we have a floating point number with infinite precision? It’s not so easy, because of inexact divisions. E.g.: 1/3 = 0.3333333333333… ad infinitum. In this case you would have to store an infinite amount of 3s, which takes too much memory. ;)
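With bounded precision the division simply stops after the configured number of significant digits; a small sketch with the decimal module (precision lowered to 9 just for the example):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 9
>>> Decimal(1) / Decimal(3)
Decimal('0.333333333')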
John Roth proposed to eliminate the division operator and force the user to use an explicit method, just to avoid this kind of trouble. This generated adverse reactions in comp.lang.python, as everybody wants to have support for the / operator in a numeric data type.
With all this said, maybe you’re thinking “Hey! Can we just store the 1 and the 3 as numerator and denominator?”, which takes us to the next point.
Rational numbers are stored using two integer numbers, the numerator and the denominator. This implies that the arithmetic operations can’t be executed directly (e.g. to add two rational numbers you first need to calculate the common denominator).
Quoting Alex Martelli:
The performance implications of the fact that summing two rationals (which take O(M) and O(N) space respectively) gives a rational which takes O(M+N) memory space is just too troublesome. There are excellent Rational implementations in both pure Python and as extensions (e.g., gmpy), but they’ll always be a “niche market” IMHO. Probably worth PEPping, not worth doing without Decimal – which is the right way to represent sums of money, a truly major use case in the real world.
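To make the mechanics concrete, here is what this looks like with Python’s later fractions module (used purely as an illustration; it did not exist when this PEP was written): the common denominator is computed for every addition, and the numerator and denominator keep growing.

>>> from fractions import Fraction
>>> Fraction(1, 3) + Fraction(1, 6)          # the common denominator 6 is computed first
Fraction(1, 2)
>>> Fraction(1, 3) + Fraction(1, 10**20)     # the result needs O(M+N) space
Fraction(100000000000000000003, 300000000000000000000)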
Anyway, if you’re interested in this data type, you maybe will want to take a look at PEP 239: Adding a Rational Type to Python.
The result is a Decimal data type, with bounded precision and floating point.
Will it be useful? I can’t say it better than Alex Martelli:
Python (out of the box) doesn’t let you have binary floating point numbers with whatever precision you specify: you’re limited to what your hardware supplies. Decimal, be it used as a fixed or floating point number, should suffer from no such limitation: whatever bounded precision you may specify on number creation (your memory permitting) should work just as well. Most of the expense of programming simplicity can be hidden from application programs and placed in a suitable decimal arithmetic type. As per http://speleotrove.com/decimal/, a single data type can be used for integer, fixed-point, and floating-point decimal arithmetic – and for money arithmetic which doesn’t drive the application programmer crazy.
There are several uses for such a data type. As I said before, I will use it as base for Money. In this case the bounded precision is not an issue; quoting Tim Peters:
A precision of 20 would be way more than enough to account for total world economic output, down to the penny, since the beginning of time.
Here I’ll include information and descriptions that are part of the specification [2] (the structure of the number, the context, etc.). All the requirements included in this section are not for discussion (barring typos or other mistakes), as they are in the standard, and the PEP is just for implementing the standard.
Because of copyright restrictions, I can not copy here explanations taken from the specification, so I’ll try to explain it in my own words. I firmly encourage you to read the original specification document [2] for details or if you have any doubt.
The specification is based on a decimal arithmetic model, as defined by the relevant standards: IEEE 854 [3], ANSI X3-274 [1], and the proposed revision [5] of IEEE 754 [6].
The model has three components: numbers, operations over those numbers, and a context that governs the results of the operations.
Numbers may be finite or special values. The former can be represented exactly. The latter are infinities and undefined values (such as 0/0).
Finite numbers are defined by three parameters: a sign, a coefficient, and an exponent.
The numerical value of a finite number is given by:
(-1)**sign * coefficient * 10**exponent
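Written out as hypothetical helper code, using the (sign, digits, exponent) tuple form that appears later in this PEP (the helper name is mine, not part of any API):

from decimal import Decimal

def value_from_triple(sign, digits, exponent):
    # (-1)**sign * coefficient * 10**exponent, where the coefficient is
    # the integer spelled by the tuple of decimal digits
    coefficient = int(''.join(str(d) for d in digits))
    return (-1) ** sign * Decimal(coefficient) * Decimal(10) ** exponent

>>> value_from_triple(1, (3, 2, 2, 5), -2)
Decimal('-32.25')

Decimal((1, (3, 2, 2, 5), -2)) builds the same value directly, as shown later in Explicit construction.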
Special values are named as follows: infinity (positive and negative), quiet NaN, and signaling NaN.
The context is a set of parameters and rules that the user can select and which govern the results of operations (for example, the precision to be used).
The context gets that name because it surrounds the Decimal numbers, with parts of context acting as input to, and output of, operations. It’s up to the application to work with one or several contexts, but definitely the idea is not to get a context per Decimal number. For example, a typical use would be to set the context’s precision to 20 digits at the start of a program, and never explicitly use context again.
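For example, a sketch of that typical use with the module’s thread context (getcontext() returns the context of the current thread):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 20            # once, near program start-up
>>> Decimal(1) / Decimal(7)
Decimal('0.14285714285714285714')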
These definitions don’t affect the internal storage of the Decimal numbers, just the way that the arithmetic operations are performed.
The context is mainly defined by the following parameters (see Context Attributes for all context attributes):
The specification defines two default contexts, which should be easily selectable by the user.
Basic Default Context:
Extended Default Context:
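The full parameter values of both are listed in the specification [2]; in the decimal module as released they are exposed as the module constants BasicContext and ExtendedContext, so a quick look at their main parameters is possible:

>>> import decimal
>>> decimal.BasicContext.prec, decimal.BasicContext.rounding
(9, 'ROUND_HALF_UP')
>>> decimal.ExtendedContext.prec, decimal.ExtendedContext.rounding
(9, 'ROUND_HALF_EVEN')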
The table below lists the exceptional conditions that may arise during the arithmetic operations, the corresponding signal, and the defined result. For details, see the specification [2].
| Condition | Signal | Result |
|---|---|---|
| Clamped | clamped | see spec[2] |
| Division by zero | division-by-zero | [sign,inf] |
| Inexact | inexact | unchanged |
| Invalid operation | invalid-operation | [0,qNaN] (or [s,qNaN] or [s,qNaN,d] when the cause is a signaling NaN) |
| Overflow | overflow | depends on the rounding mode |
| Rounded | rounded | unchanged |
| Subnormal | subnormal | unchanged |
| Underflow | underflow | see spec[2] |
Note: when the standard talks about “Insufficient storage”, as long as this is implementation-specific behaviour about not having enough storage to keep the internals of the number, this implementation will raise MemoryError.
Regarding Overflow and Underflow, there’s been a long discussion in python-dev about artificial limits. The general consensus is to keep the artificial limits only if there are important reasons to do that. Tim Peters gives us three:
…eliminating bounds on exponents effectively means overflow (and underflow) can never happen. But overflow is a valuable safety net in real life fp use, like a canary in a coal mine, giving danger signs early when a program goes insane.
Virtually all implementations of 854 use (and as IBM’s standard even suggests) “forbidden” exponent values to encode non-finite numbers (infinities and NaNs). A bounded exponent can do this at virtually no extra storage cost. If the exponent is unbounded, then additional bits have to be used instead. This cost remains hidden until more time- and space-efficient implementations are attempted.
Big as it is, the IBM standard is a tiny start at supplying a complete numeric facility. Having no bound on exponent size will enormously complicate the implementations of, e.g., decimal sin() and cos() (there’s then no a priori limit on how many digits of pi effectively need to be known in order to perform argument reduction).
Edward Loper gives us an example of when the limits are to be crossed: probabilities.
That said, Robert Brewer and Andrew Lentvorski want the limits to be easily modifiable by the users. Actually, this is quite possible:

>>> d1 = Decimal("1e999999999")     # at the exponent limit
>>> d1
Decimal("1E+999999999")
>>> d1 * 10                         # exceed the limit, got infinity
Traceback (most recent call last):
  File "<pyshell#3>", line 1, in ?
    d1 * 10
  ...
  ...
Overflow: above Emax
>>> getcontext().Emax = 1000000000  # increase the limit
>>> d1 * 10                         # does not exceed any more
Decimal("1.0E+1000000000")
>>> d1 * 100                        # exceed again
Traceback (most recent call last):
  File "<pyshell#3>", line 1, in ?
    d1 * 100
  ...
  ...
Overflow: above Emax
round-down: The discarded digits are ignored; the result is unchanged (round toward 0, truncate):

1.123  -->  1.12
1.128  -->  1.12
1.125  -->  1.12
1.135  -->  1.13

round-half-up: If the discarded digits represent greater than or equal to half (0.5) then the result should be incremented by 1; otherwise the discarded digits are ignored:

1.123  -->  1.12
1.128  -->  1.13
1.125  -->  1.13
1.135  -->  1.14

round-half-even: If the discarded digits represent greater than half (0.5) then the result coefficient is incremented by 1; if they represent less than half, then the result is not adjusted; otherwise the result is unaltered if its rightmost digit is even, or incremented by 1 if its rightmost digit is odd (to make an even digit):

1.123  -->  1.12
1.128  -->  1.13
1.125  -->  1.12
1.135  -->  1.14

round-ceiling: If all of the discarded digits are zero or if the sign is negative the result is unchanged; otherwise, the result is incremented by 1 (round toward positive infinity):

1.123  -->  1.13
1.128  -->  1.13
-1.123  -->  -1.12
-1.128  -->  -1.12

round-floor: If all of the discarded digits are zero or if the sign is positive the result is unchanged; otherwise, the absolute value of the result is incremented by 1 (round toward negative infinity):

1.123  -->  1.12
1.128  -->  1.12
-1.123  -->  -1.13
-1.128  -->  -1.13

round-half-down: If the discarded digits represent greater than half (0.5) then the result is incremented by 1; otherwise the discarded digits are ignored:

1.123  -->  1.12
1.128  -->  1.13
1.125  -->  1.12
1.135  -->  1.13

round-up: If all of the discarded digits are zero the result is unchanged, otherwise the result is incremented by 1 (round away from 0):

1.123  -->  1.13
1.128  -->  1.13
1.125  -->  1.13
1.135  -->  1.14
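A quick way to compare several of these modes with the released module is quantize(), which rounds to a fixed exponent and accepts an optional rounding argument (constant names per the stdlib module):

>>> from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP, ROUND_HALF_EVEN
>>> exp = Decimal('0.01')
>>> Decimal('1.125').quantize(exp, rounding=ROUND_DOWN)
Decimal('1.12')
>>> Decimal('1.125').quantize(exp, rounding=ROUND_HALF_UP)
Decimal('1.13')
>>> Decimal('1.125').quantize(exp, rounding=ROUND_HALF_EVEN)
Decimal('1.12')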
I must separate the requirements in two sections. The first is to comply with the ANSI standard. All the requirements for this are specified in Mike Cowlishaw’s work [2]. He also provided a very large suite of test cases.
The second section of requirements (standard Python functions support, usability, etc.) is detailed from here, where I’ll include all the decisions made and why, and all the subjects still being discussed.
The explicit construction does not get affected by the context (there is no rounding, no limits by the precision, etc.), because the context affects just operations’ results. The only exception to this is when you’re Creating from Context.
There’s no loss and no need to specify any other information:
Decimal(35)
Decimal(-124)

Strings containing Python decimal integer literals and Python float literals will be supported. In this transformation there is no loss of information, as the string is directly converted to Decimal (there is not an intermediate conversion through float):

Decimal("-12")
Decimal("23.2e-7")

Also, you can construct in this way all special values (Infinity and Not a Number):

Decimal("Inf")
Decimal("NaN")

The initial discussion on this item was what should happen when passing floating point to the constructor:

1. Decimal(1.1) == Decimal('1.1')
2. Decimal(1.1) == Decimal('110000000000000008881784197001252...e-51')

Several people alleged that (1) is the better option here, because it’s what you expect when writing Decimal(1.1). And quoting John Roth, it’s easy to implement:
It’s not at all difficult to find where the actual number ends and where the fuzz begins. You can do it visually, and the algorithms to do it are quite well known.
But if I really want my number to be Decimal('110000000000000008881784197001252...e-51'), why can’t I write Decimal(1.1)? Why should I expect Decimal to be “rounding” it? Remember that 1.1 is binary floating point, so I can predict the result. It’s not intuitive to a beginner, but that’s the way it is.
Anyway, Paul Moore showed that (1) can’t work, because:
(1) says D(1.1) == D('1.1')
but 1.1 == 1.1000000000000001
so D(1.1) == D(1.1000000000000001)
together: D(1.1000000000000001) == D('1.1')

which is wrong, because if I write Decimal('1.1') it is exact, not D(1.1000000000000001). He also proposed to have an explicit conversion to float. bokr says you need to put the precision in the constructor and mwilson agreed:

d = Decimal(1.1, 1)  # take float value to 1 decimal place
d = Decimal(1.1)     # gets `places` from pre-set context
But Alex Martelli says that:
Constructing with some specified precision would be fine. Thus, I think “construction from float with some default precision” runs a substantial risk of tricking naive users.
So, the accepted solution through c.l.p is that you can not call Decimal with a float. Instead you must use a method: Decimal.from_float(). The syntax:

Decimal.from_float(floatNumber, [decimal_places])

where floatNumber is the float number origin of the construction and decimal_places are the number of digits after the decimal point where you apply a round-half-up rounding, if any. In this way you can do, for example:

Decimal.from_float(1.1, 2): The same as doing Decimal('1.1').
Decimal.from_float(1.1, 16): The same as doing Decimal('1.1000000000000001').
Decimal.from_float(1.1): The same as doing Decimal('1100000000000000088817841970012523233890533447265625e-51').
Based on later discussions, it was decided to omit from_float() from theAPI for Py2.4. Several ideas contributed to the thought process:
The results of this constructor can be somewhat unpredictable and its use is generally not recommended.
Aahz suggested to construct from tuples: it’s easier to implement eval()’s round trip and “someone who has numeric values representing a Decimal does not need to convert them to a string.”
The structure will be a tuple of three elements: sign, number and exponent. The sign is 1 or 0, the number is a tuple of decimal digits and the exponent is a signed int or long:

Decimal((1, (3, 2, 2, 5), -2))  # for -32.25
Of course, you can construct in this way all special values:
Decimal((0, (0,), 'F'))  # for Infinity
Decimal((0, (0,), 'n'))  # for Not a Number
No mystery here, just a copy.
Decimal(value1)
Decimal.from_float(value2, [decimal_places])

where value1 can be int, long, string, 3-tuple or Decimal, value2 can only be float, and decimal_places is an optional non-negative int.
This item arose in python-dev from two sources in parallel. Ka-Ping Yee proposes to pass the context as an argument at instance creation (he wants the context he passes to be used only in creation time: “It would not be persistent”). Tony Meyer asks from_string to honor the context if it receives a parameter “honour_context” with a True value. (I don’t like it, because the doc specifies that the context be honored and I don’t want the method to comply with the specification regarding the value of an argument.)
Tim Peters gives us a reason to have a creation that uses context:
In general number-crunching, literals may be given to high precision, but that precision isn’t free and usually isn’t needed.
Casey Duncan wants to use another method, not a bool arg:
I find boolean arguments a general anti-pattern, especially given we have class methods. Why not use an alternate constructor like Decimal.rounded_to_context(“3.14159265”).
In the process of deciding the syntax of that, Tim came up with a better idea: he proposes not to have a method in Decimal to create with a different context, but having instead a method in Context to create a Decimal instance. Basically, instead of:

D.using_context(number, context)
it will be:
context.create_decimal(number)
From Tim:
While all operations in the spec except for the two to-string operations use context, no operations in the spec support an optional local context. That the Decimal() constructor ignores context by default is an extension to the spec. We must supply a context-honoring from-string operation to meet the spec. I recommend against any concept of “local context” in any operation – it complicates the model and isn’t necessary.
So, we decided to use a context method to create a Decimal that will use (only to be created) that context in particular (for further operations it will use the context of the thread). But, a method with what name?
Tim Peters proposes three methods to create from diverse sources (from_string, from_int, from_float). I proposed to use one method, create_decimal(), without caring about the data type. Michael Chermside: “The name just fits my brain. The fact that it uses the context is obvious from the fact that it’s Context method”.
The community agreed with that. I think that it’s OK because a newbie will not be using the creation method from Context (the separate method in Decimal to construct from float is just to prevent newbies from encountering binary floating point issues).
So, in short, if you want to create a Decimal instance using a particular context (that will be used just at creation time and not any further), you’ll have to use a method of that context:

# n is any datatype accepted in Decimal(n) plus float
mycontext.create_decimal(n)
Example:
>>> # create a standard decimal instance
>>> Decimal("11.2233445566778899")
Decimal("11.2233445566778899")
>>>
>>> # create a decimal instance using the thread context
>>> thread_context = getcontext()
>>> thread_context.prec
28
>>> thread_context.create_decimal("11.2233445566778899")
Decimal("11.2233445566778899")
>>>
>>> # create a decimal instance using other context
>>> other_context = thread_context.copy()
>>> other_context.prec = 4
>>> other_context.create_decimal("11.2233445566778899")
Decimal("11.22")

As the implicit construction is the consequence of an operation, it will be affected by the context as is detailed in each point.
John Roth suggested that “The other type should be handled in the same way the decimal() constructor would handle it”. But Alex Martelli thinks that
this total breach with Python tradition would be a terrible mistake. 23+“43” is NOT handled in the same way as 23+int(“45”), and a VERY good thing that is too. It’s a completely different thing for a user to EXPLICITLY indicate they want construction (conversion) and to just happen to sum two objects one of which by mistake could be a string.
So, here I define the behaviour again for each data type.
An int or long is treated like a Decimal explicitly constructed from Decimal(str(x)) in the current context (meaning that the to-string rules for rounding are applied and the appropriate flags are set). This guarantees that expressions like Decimal('1234567') + 13579 match the mental model of Decimal('1234567') + Decimal('13579'). That model works because all integers are representable as strings without representation error.
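For instance (a small illustration of that mental model; outputs assume the default context):

>>> Decimal('1234567') + 13579
Decimal('1248146')
>>> Decimal('1234567') + Decimal('13579')
Decimal('1248146')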
Everybody agrees to raise an exception here.
Aahz is strongly opposed to interact with float, suggesting an explicit conversion:
The problem is that Decimal is capable of greater precision,accuracy, and range than float.
The example of the valid Python expression, 35+1.1, seems to suggest that Decimal(35)+1.1 should also be valid. However, a closer look shows that it only demonstrates the feasibility of integer to floating point conversions. Hence, the correct analog for decimal floating point is 35+Decimal(1.1). Both coercions, int-to-float and int-to-Decimal, can be done without incurring representation error.
The question of how to coerce between binary and decimal floating point is more complex. I proposed allowing the interaction with float, making an exact conversion and raising ValueError if it exceeds the precision in the current context (this is maybe too tricky, because for example with a precision of 9, Decimal(35)+1.2 is OK but Decimal(35)+1.1 raises an error).
This turned out to be too tricky. So tricky, that c.l.p agreed to raise TypeError in this case: you could not mix Decimal and float.
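So, with that decision, mixing the two types fails loudly; sketched below (the exact wording of the message depends on the Python version):

>>> Decimal(35) + 1.1
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for +: 'Decimal' and 'float'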
There isn’t any issue here.
In the last pre-PEP I said that “The Context must be omnipresent, meaning that changes to it affects all the current and future Decimal instances”. I was wrong. In response, John Roth said:
The context should be selectable for the particular usage. That is, it should be possible to have several different contexts in play at one time in an application.
In comp.lang.python, Aahz explained that the idea is to have a “context per thread”. So, all the instances of a thread belong to a context, and you can change a context in thread A (and the behaviour of the instances of that thread) without changing anything in thread B.
Also, and again correcting me, he said:
(the) Context applies only to operations, not to Decimal instances; changing the Context does not affect existing instances if there are no operations on them.
Arguing about special cases when there’s a need to perform operations with rules other than those of the current context, Tim Peters said that the context will have the operations as methods. This way, the user “can create whatever private context object(s) it needs, and spell arithmetic as explicit method calls on its private context object(s), so that the default thread context object is neither consulted nor modified”.
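A sketch of that style with the Context methods from the released module (the thread context keeps its default precision of 28 while the private context does the arithmetic):

>>> from decimal import Context, Decimal, getcontext, ROUND_HALF_UP
>>> private = Context(prec=4, rounding=ROUND_HALF_UP)
>>> private.add(Decimal('1.23456'), Decimal('1'))
Decimal('2.235')
>>> getcontext().prec
28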
Decimal will support the basic arithmetic (+, -, *, /, //, **, %, divmod) and comparison (==, !=, <, >, <=, >=, cmp) operators in the following cases (check Implicit Construction to see what types could OtherType be, and what happens in each case): Decimal op Decimal, Decimal op OtherType, and OtherType op Decimal. It will also support the unary operators (-, +, abs), and repr() will round trip, meaning that:

m = Decimal(...)
m == eval(repr(m))
There’s been some discussion in python-dev about the behaviour of hash(). The community agrees that if the values are the same, the hashes of those values should also be the same. So, while Decimal(25) == 25 is True, hash(Decimal(25)) should be equal to hash(25).
The detail is that you can NOT compare Decimal to floats or strings, so we should not worry about them giving the same hashes. In short:

hash(n) == hash(Decimal(n))   # Only if n is int, long, or Decimal

Regarding str() and repr() behaviour, Ka-Ping Yee proposes that repr() have the same behaviour as str() and Tim Peters proposes that str() behave like the to-scientific-string operation from the Spec.
This is possible, because (from Aahz): “The string form already contains all the necessary information to reconstruct a Decimal object”.
And it also complies with the Spec; Tim Peters:
There’s no requirement to have a method named “to_sci_string”, the only requirement is that some way to spell to-sci-string’s functionality be supplied. The meaning of to-sci-string is precisely specified by the standard, and is a good choice for both str(Decimal) and repr(Decimal).
This section explains all the public methods and attributes of Decimaland Context.
Decimal has no public attributes. The internal information is stored in slots and should not be accessed by end users.
Following are the conversion and arithmetic operations defined in the Spec, and how that functionality can be achieved with the actual implementation.
str():

>>> d = Decimal('123456789012.345')
>>> str(d)
'1.23456789E+11'

to_eng_string():

>>> d = Decimal('123456789012.345')
>>> d.to_eng_string()
'123.456789E+9'

create_decimal(). The standard constructor or from_float() constructor cannot be used because these do not use the context (as is specified in the Spec for this conversion).

abs():

>>> d = Decimal('-15.67')
>>> abs(d)
Decimal('15.67')

+:

>>> d = Decimal('15.6')
>>> d + 8
Decimal('23.6')

-:

>>> d = Decimal('15.6')
>>> d - 8
Decimal('7.6')

compare(). This method (and not the built-in function cmp()) should only be used when dealing with special values:

>>> d = Decimal('-15.67')
>>> nan = Decimal('NaN')
>>> d.compare(23)
'-1'
>>> d.compare(nan)
'NaN'
>>> cmp(d, 23)
-1
>>> cmp(d, nan)
1

/:

>>> d = Decimal('-15.67')
>>> d / 2
Decimal('-7.835')

//:

>>> d = Decimal('-15.67')
>>> d // 2
Decimal('-7')

max(). Only use this method (and not the built-in function max()) when dealing with special values:

>>> d = Decimal('15')
>>> nan = Decimal('NaN')
>>> d.max(8)
Decimal('15')
>>> d.max(nan)
Decimal('NaN')

min(). Only use this method (and not the built-in function min()) when dealing with special values:

>>> d = Decimal('15')
>>> nan = Decimal('NaN')
>>> d.min(8)
Decimal('8')
>>> d.min(nan)
Decimal('NaN')

-:

>>> d = Decimal('-15.67')
>>> -d
Decimal('15.67')

+:

>>> d = Decimal('-15.67')
>>> +d
Decimal('-15.67')

*:

>>> d = Decimal('5.7')
>>> d * 3
Decimal('17.1')

normalize():

>>> d = Decimal('123.45000')
>>> d.normalize()
Decimal('123.45')
>>> d = Decimal('120.00')
>>> d.normalize()
Decimal('1.2E+2')

quantize():

>>> d = Decimal('2.17')
>>> d.quantize(Decimal('0.001'))
Decimal('2.170')
>>> d.quantize(Decimal('0.1'))
Decimal('2.2')

%:

>>> d = Decimal('10')
>>> d % 3
Decimal('1')
>>> d % 6
Decimal('4')

remainder_near():

>>> d = Decimal('10')
>>> d.remainder_near(3)
Decimal('1')
>>> d.remainder_near(6)
Decimal('-2')

to_integral():

>>> d = Decimal('-123.456')
>>> d.to_integral()
Decimal('-123')

same_quantum():

>>> d = Decimal('123.456')
>>> d.same_quantum(Decimal('0.001'))
True
>>> d.same_quantum(Decimal('0.01'))
False

sqrt():

>>> d = Decimal('123.456')
>>> d.sqrt()
Decimal('11.1110756')

**:

>>> d = Decimal('12.56')
>>> d ** 2
Decimal('157.7536')
Following are other methods and why they exist:
adjusted(): Returns the adjusted exponent. This concept is defined in the Spec: the adjusted exponent is the value of the exponent of a number when that number is expressed as though in scientific notation with one digit before any decimal point:

>>> d = Decimal('12.56')
>>> d.adjusted()
1

from_float(): Class method to create instances from float data types:

>>> d = Decimal.from_float(12.35)
>>> d
Decimal('12.3500000')

as_tuple(): Show the internal structure of the Decimal, the triple tuple. This method is not required by the Spec, but Tim Peters proposed it and the community agreed to have it (it’s useful for developing and debugging):

>>> d = Decimal('123.4')
>>> d.as_tuple()
(0, (1, 2, 3, 4), -1)
>>> d = Decimal('-2.34e5')
>>> d.as_tuple()
(1, (2, 3, 4), 3)
These are the attributes that can be changed to modify the context.
prec (int): the precision:

>>> c.prec
9

rounding (str): rounding type (how to round):

>>> c.rounding
'half_even'

trap_enablers (dict): if trap_enablers[exception] = 1, then an exception is raised when it is caused:

>>> c.trap_enablers[Underflow]
0
>>> c.trap_enablers[Clamped]
0

flags (dict): when an exception is caused, flags[exception] is incremented (whether or not the trap_enabler is set). Should be reset by the user of the Decimal instance:

>>> c.flags[Underflow]
0
>>> c.flags[Clamped]
0

Emin (int): minimum exponent:

>>> c.Emin
-999999999

Emax (int): maximum exponent:

>>> c.Emax
999999999

capitals (int): boolean flag to use ‘E’ (True/1) or ‘e’ (False/0) in the string (for example, ‘1.32e+2’ or ‘1.32E+2’):

>>> c.capitals
1
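To see how flags and trap enablers interact, here is a sketch with the module as released (where the trap_enablers mapping described above ended up being spelled traps):

>>> from decimal import Decimal, getcontext, DivisionByZero
>>> ctx = getcontext()
>>> ctx.traps[DivisionByZero] = 0      # don't raise: record the condition instead
>>> Decimal(1) / Decimal(0)
Decimal('Infinity')
>>> bool(ctx.flags[DivisionByZero])    # the flag was set; the user resets it
True
>>> ctx.flags[DivisionByZero] = 0
>>> ctx.traps[DivisionByZero] = 1      # dividing by zero raises again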
The following methods comply with Decimal functionality from the Spec. Be aware that the operations that are called through a specific context use that context and not the thread context.
To use these methods, take note that the syntax changes when the operator is binary or unary, for example:

>>> mycontext.abs(Decimal('-2'))
'2'
>>> mycontext.multiply(Decimal('2.3'), 5)
'11.5'

So, the following are the Spec operations and conversions and how to achieve them through a context (where d is a Decimal instance and n a number that can be used in an Implicit construction):

- to_sci_string(d)
- to_eng_string(d)
- create_decimal(number), see Explicit construction for number
- abs(d)
- add(d, n)
- subtract(d, n)
- compare(d, n)
- divide(d, n)
- divide_int(d, n)
- max(d, n)
- min(d, n)
- minus(d)
- plus(d)
- multiply(d, n)
- normalize(d)
- quantize(d, d)
- remainder(d)
- remainder_near(d)
- to_integral(d)
- same_quantum(d, d)
- sqrt(d)
- power(d, n)

The divmod(d, n) method supports decimal functionality through Context.
These are methods that return useful information from the Context:
Etiny(): Minimum exponent considering precision.

>>> c.Emin
-999999999
>>> c.Etiny()
-1000000007

Etop(): Maximum exponent considering precision.

>>> c.Emax
999999999
>>> c.Etop()
999999991

copy(): Returns a copy of the context.

As of Python 2.4-alpha, the code has been checked into the standard library. The latest version is available from:
http://svn.python.org/view/python/trunk/Lib/decimal.py
The test cases are here:
http://svn.python.org/view/python/trunk/Lib/test/test_decimal.py
This document has been placed in the public domain.