@@ -2121,17 +2121,67 @@ Q. Is the CPython implementation fast for large numbers?
A. Yes. In the CPython and PyPy3 implementations, the C/CFFI versions of
the decimal module integrate the high speed `libmpdec
<https://www.bytereef.org/mpdecimal/doc/libmpdec/index.html>`_ library for
- arbitrary precision correctly-rounded decimal floating point arithmetic.
+ arbitrary precision correctly-rounded decimal floating point arithmetic [#]_.
``libmpdec`` uses `Karatsuba multiplication
<https://en.wikipedia.org/wiki/Karatsuba_algorithm>`_
for medium-sized numbers and the `Number Theoretic Transform
<https://en.wikipedia.org/wiki/Discrete_Fourier_transform_(general)#Number-theoretic_transform>`_
- for very large numbers. However, to realize this performance gain, the
- context needs to be set for unrounded calculations.
+ for very large numbers.

-     >>> c = getcontext()
-     >>> c.prec = MAX_PREC
-     >>> c.Emax = MAX_EMAX
-     >>> c.Emin = MIN_EMIN
+ The context must be adapted for exact arbitrary precision arithmetic. :attr:`Emin`
+ and :attr:`Emax` should always be set to the maximum values, :attr:`clamp`
+ should always be 0 (the default). Setting :attr:`prec` requires some care.

- .. versionadded:: 3.3
+ The easiest approach for trying out bignum arithmetic is to use the maximum
+ value for :attr:`prec` as well [#]_::
+
+     >>> setcontext(Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN))
+     >>> x = Decimal(2) ** 256
+     >>> x / 128
+     Decimal('904625697166532776746648320380374280103671755200316906558262375061821325312')
+
+
+ For inexact results, :attr:`MAX_PREC` is far too large on 64-bit platforms and
+ the available memory will be insufficient::
+
+     >>> Decimal(1) / 3
+     Traceback (most recent call last):
+       File "<stdin>", line 1, in <module>
+     MemoryError
+
+ On systems with overallocation (e.g. Linux), a more sophisticated approach is to
+ adjust :attr:`prec` to the amount of available RAM. Suppose that you have 8GB of
+ RAM and expect 10 simultaneous operands using a maximum of 500MB each::
+
+     >>> import sys
+     >>>
+     >>> # Maximum number of digits for a single operand using 500MB in 8-byte words
+     >>> # with 19 (9 for the 32-bit version) digits per word:
+     >>> maxdigits = 19 * ((500 * 1024**2) // 8)
+     >>>
+     >>> # Check that this works:
+     >>> c = Context(prec=maxdigits, Emax=MAX_EMAX, Emin=MIN_EMIN)
+     >>> c.traps[Inexact] = True
+     >>> setcontext(c)
+     >>>
+     >>> # Fill the available precision with nines:
+     >>> x = Decimal(0).logical_invert() * 9
+     >>> sys.getsizeof(x)
+     524288112
+     >>> x + 2
+     Traceback (most recent call last):
+       File "<stdin>", line 1, in <module>
+     decimal.Inexact: [<class 'decimal.Inexact'>]
+ In general (and especially on systems without overallocation), it is recommended
+ to estimate even tighter bounds and set the :attr:`Inexact` trap if all calculations
+ are expected to be exact.
+
+
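As a sketch of that recommendation (not part of the patch; the 50-digit bound is an arbitrary illustration, not a value from the documentation), trapping :attr:`Inexact` alongside a tight :attr:`prec` turns any unexpected rounding into an immediate exception instead of silent digit loss:

```python
from decimal import Context, Decimal, Inexact, MAX_EMAX, MIN_EMIN, setcontext

# Hypothetical tight bound: 50 digits is an arbitrary choice for illustration.
ctx = Context(prec=50, Emax=MAX_EMAX, Emin=MIN_EMIN)
ctx.traps[Inexact] = True   # any result that would need rounding now raises
setcontext(ctx)

print(Decimal(1) / 4)       # exact within 50 digits: prints 0.25
try:
    Decimal(1) / 3          # would require rounding, so the trap fires
except Inexact:
    print("inexact result trapped")
```

With the trap set, exact calculations proceed normally, while any operation that would have been silently rounded raises ``decimal.Inexact`` at the point where precision was exceeded.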
+ .. [#]
+     .. versionadded:: 3.3
+
+ .. [#]
+     .. versionchanged:: 3.9
+         This approach now works for all exact results except for non-integer
+         powers. Also backported to 3.7 and 3.8.