# NumPy 1.20.0 Release Notes
This NumPy release is the largest made to date, with some 684 PRs contributed by 184 people having been merged. See the list of highlights below for more details. The Python versions supported for this release are 3.7-3.9; support for Python 3.6 has been dropped. Highlights are:
- Annotations for NumPy functions. This work is ongoing and improvements can be expected pending feedback from users.
- Wider use of SIMD to increase execution speed of ufuncs. Much work has been done in introducing universal functions that will ease use of modern features across different hardware platforms. This work is ongoing.
- Preliminary work in changing the dtype and casting implementations in order to provide an easier path to extending dtypes. This work is ongoing but enough has been done to allow experimentation and feedback.
- Extensive documentation improvements comprising some 185 PR merges. This work is ongoing and part of the larger project to improve NumPy's online presence and usefulness to new users.
- Further cleanups related to removing Python 2.7. This improves code readability and removes technical debt.
- Preliminary support for the upcoming Cython 3.0.
## New functions

### The random.Generator class has a new permuted function

The new function differs from `shuffle` and `permutation` in that the subarrays indexed by an axis are permuted rather than the axis being treated as a separate 1-D array for every combination of the other indexes. For example, it is now possible to permute the rows or columns of a 2-D array.
(gh-15121)
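A minimal sketch of the difference (the seed and array values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(12).reshape(3, 4)

# permuted with axis=1 shuffles the entries *within* each row independently;
# shuffle/permutation would instead reorder whole rows or columns.
y = rng.permuted(x, axis=1)
```

Each row of `y` is a permutation of the corresponding row of `x`, while the rows themselves stay in place.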
### sliding_window_view provides a sliding window view for NumPy arrays

`numpy.lib.stride_tricks.sliding_window_view` constructs views on NumPy arrays that offer sliding or moving window access to the array. This allows for the simple implementation of certain algorithms, such as running means.
(gh-17394)
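For instance, a running mean can be sketched as follows (the array values are illustrative):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(6)                                  # [0, 1, 2, 3, 4, 5]
windows = sliding_window_view(a, window_shape=3)  # shape (4, 3); a view, no copy
running_mean = windows.mean(axis=-1)              # [1., 2., 3., 4.]
```

Because the result is a view, no data is copied until a reduction such as `mean` is applied.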
### numpy.broadcast_shapes is a new user-facing function

`broadcast_shapes` gets the resulting shape from broadcasting the given shape tuples against each other:

```python
>>> np.broadcast_shapes((1, 2), (3, 1))
(3, 2)
>>> np.broadcast_shapes(2, (3, 1))
(3, 2)
>>> np.broadcast_shapes((6, 7), (5, 6, 1), (7,), (5, 1, 7))
(5, 6, 7)
```
(gh-17535)
## Deprecations

### Using the aliases of builtin types like np.int is deprecated

For a long time, `np.int` has been an alias of the builtin `int`. This is repeatedly a cause of confusion for newcomers, and existed mainly for historic reasons.

These aliases have been deprecated. The table below shows the full list of deprecated aliases, along with their exact meaning. Replacing uses of items in the first column with the contents of the second column will work identically and silence the deprecation warning.

The third column lists alternative NumPy names which may occasionally be preferential. See also Data types for additional details.
| Deprecated name | Identical to | NumPy scalar type names |
|---|---|---|
| `numpy.bool` | `bool` | `numpy.bool_` |
| `numpy.int` | `int` | `numpy.int_` (default), `numpy.int64`, `numpy.int32` |
| `numpy.float` | `float` | `numpy.float64`, `numpy.float_`, `numpy.double` |
| `numpy.complex` | `complex` | `numpy.complex128`, `numpy.complex_`, `numpy.cdouble` |
| `numpy.object` | `object` | `numpy.object_` |
| `numpy.str` | `str` | `numpy.str_` |
| `numpy.long` | `int` | `numpy.int_` (C `long`), `numpy.longlong` |
| `numpy.unicode` | `str` | `numpy.unicode_` |
To give a clear guideline for the vast majority of cases, for the types `bool`, `object`, `str` (and `unicode`) using the plain version is shorter and clear, and generally a good replacement. For `float` and `complex` you can use `float64` and `complex128` if you wish to be more explicit about the precision.

For `np.int` a direct replacement with `np.int_` or `int` is also good and will not change behavior, but the precision will continue to depend on the computer and operating system. If you want to be more explicit and review the current use, you have the following alternatives:

- `np.int64` or `np.int32` to specify the precision exactly. This ensures that results cannot depend on the computer or operating system.
- `np.int_` or `int` (the default), but be aware that it depends on the computer and operating system.
- The C types: `np.cint` (`int`), `np.int_` (`long`), `np.longlong`.
- `np.intp`, which is 32 bit on 32-bit machines and 64 bit on 64-bit machines. This can be the best type to use for indexing.

When used with `np.dtype(...)` or `dtype=...`, changing it to the NumPy name as mentioned above will have no effect on the output. If used as a scalar with:

```python
np.float(123)
```

changing it can subtly change the result. In this case, the Python version `float(123)` or `int(12.)` is normally preferable, although the NumPy version may be useful for consistency with NumPy arrays (for example, NumPy behaves differently for things like division by zero).
(gh-14882)
### Passing shape=None to functions with a non-optional shape argument is deprecated

Previously, this was an alias for passing `shape=()`. This deprecation is emitted by `PyArray_IntpConverter` in the C API. If your API is intended to support passing `None`, then you should check for `None` prior to invoking the converter, so as to be able to distinguish `None` and `()`.
(gh-15886)
### Indexing errors will be reported even when index result is empty

In the future, NumPy will raise an `IndexError` when an integer array index contains out-of-bound values even if a non-indexed dimension is of length 0. This will now emit a `DeprecationWarning`. This can happen when the array is previously empty, or an empty slice is involved:

```python
arr1 = np.zeros((5, 0))
arr1[[20]]
arr2 = np.zeros((5, 5))
arr2[[20], :0]
```

Previously the non-empty index `[20]` was not checked for correctness. It will now be checked, causing a deprecation warning which will be turned into an error. This also applies to assignments.
(gh-15900)
### Inexact matches for mode and searchside are deprecated

Inexact and case-insensitive matches for `mode` and `searchside` were valid inputs earlier and will give a `DeprecationWarning` now. For example, below are some example usages which are now deprecated and will give a `DeprecationWarning`:

```python
import numpy as np

arr = np.array([[3, 6, 6], [4, 5, 1]])
# mode: inexact match
np.ravel_multi_index(arr, (7, 6), mode="clap")  # should be "clip"
# searchside: inexact match
np.searchsorted(arr[0], 4, side="random")  # should be "right"
```
(gh-16056)
### Deprecation of numpy.dual

The module `numpy.dual` is deprecated. Instead of importing functions from `numpy.dual`, the functions should be imported directly from NumPy or SciPy.
(gh-16156)
### outer and ufunc.outer deprecated for matrix

Use of `np.matrix` with `outer` or generic ufunc outer calls such as `numpy.add.outer` is deprecated. Previously, matrix was converted to an array here. This will not be done in the future, requiring a manual conversion to arrays.
(gh-16232)
### Further numeric-style types deprecated

The remaining numeric-style type codes `Bytes0`, `Str0`, `Uint32`, `Uint64`, and `Datetime64` have been deprecated. The lower-case variants should be used instead. For bytes and strings, `"S"` and `"U"` are further alternatives.
(gh-16554)
### The ndincr method of ndindex is deprecated

The documentation has warned against using this function since NumPy 1.8. Use `next(it)` instead of `it.ndincr()`.
(gh-17233)
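The replacement is the standard iterator protocol; a minimal sketch:

```python
import numpy as np

it = np.ndindex(2, 2)
first = next(it)   # replaces the deprecated it.ndincr()
rest = list(it)    # the remaining index tuples
```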
### ArrayLike objects which do not define __len__ and __getitem__

Objects which define one of the protocols `__array__`, `__array_interface__`, or `__array_struct__` but are not sequences (usually defined by having a `__len__` and `__getitem__`) will behave differently during array coercion in the future.

When nested inside sequences, such as `np.array([array_like])`, these were handled as a single Python object rather than an array. In the future they will behave identically to:

```python
np.array([np.array(array_like)])
```

This change should only have an effect if `np.array(array_like)` is not 0-D. The solution to this warning may depend on the object:
- Some array-likes may expect the new behaviour, and users can ignore the warning. The object can choose to expose the sequence protocol to opt in to the new behaviour.
- For example, shapely will allow conversion to an array-like using `line.coords` rather than `np.asarray(line)`. Users may work around the warning, or use the new convention when it becomes available.
- Unfortunately, using the new behaviour can only be achieved by calling `np.array(array_like)`.
If you wish to ensure that the old behaviour remains unchanged, please create an object array and then fill it explicitly, for example:

```python
arr = np.empty(3, dtype=object)
arr[:] = [array_like1, array_like2, array_like3]
```

This will ensure that NumPy knows not to enter the array-like and uses it as an object instead.
(gh-17973)
## Future Changes
### Array creation and casting using subarray dtypes will change

Array creation and casting using `np.array(arr, dtype)` and `arr.astype(dtype)` will use different logic when `dtype` is a subarray dtype such as `np.dtype("(2)i,")`.

For such a `dtype` the following behaviour is true:

```python
res = np.array(arr, dtype)

res.dtype is not dtype
res.dtype is dtype.base
res.shape == arr.shape + dtype.shape
```

But `res` is filled using the logic:

```python
res = np.empty(arr.shape + dtype.shape, dtype=dtype.base)
res[...] = arr
```

which uses incorrect broadcasting (and often leads to an error). In the future, this will instead cast each element individually, leading to the same result as:

```python
res = np.array(arr, dtype=np.dtype(["f", dtype]))["f"]
```

This can normally be used to opt in to the new behaviour.

This change does not affect `np.array(list, dtype="(2)i,")` unless the list itself includes at least one array. In particular, the behaviour is unchanged for a list of tuples.
(gh-17596)
## Expired deprecations

- The deprecation of the numeric-style type code `np.dtype("Complex64")` (with upper-case spelling) is expired. `"Complex64"` corresponded to `"complex128"` and `"Complex32"` corresponded to `"complex64"`.

- The deprecation of `np.sctypeNA` and `np.typeNA` is expired. Both have been removed from the public API. Use `np.typeDict` instead.

  (gh-16554)

- The 14-year deprecation of `np.ctypeslib.ctypes_load_library` is expired. Use `load_library` instead, which is identical.

  (gh-17116)
### Financial functions removed

In accordance with NEP 32, the financial functions are removed from NumPy 1.20. The functions that have been removed are `fv`, `ipmt`, `irr`, `mirr`, `nper`, `npv`, `pmt`, `ppmt`, `pv`, and `rate`. These functions are available in the numpy_financial library.
(gh-17067)
## Compatibility notes

### Use isinstance(dtype, np.dtype), not type(dtype) is np.dtype

NumPy dtypes are not direct instances of `np.dtype` anymore. Code that may have used `type(dtype) is np.dtype` will always return `False` and must be updated to use the correct version, `isinstance(dtype, np.dtype)`.

This change also affects the C-side macro `PyArray_DescrCheck` if compiled against a NumPy older than 1.16.6. If code uses this macro and wishes to compile against an older version of NumPy, it must replace the macro (see also the C API changes section).
### Same-kind casting in concatenate with axis=None

When `concatenate` is called with `axis=None`, the flattened arrays were cast with `unsafe`. Any other axis choice uses "same kind". That different default has been deprecated and "same kind" casting will be used instead. The new `casting` keyword argument can be used to retain the old behaviour.
(gh-16134)
### NumPy scalars are cast when assigned to arrays

When creating or assigning to arrays, in all relevant cases NumPy scalars will now be cast identically to NumPy arrays. In particular this changes the behaviour in some cases which previously raised an error:

```python
np.array([np.float64(np.nan)], dtype=np.int64)
```

will succeed and return an undefined result (usually the smallest possible integer). This also affects assignments:

```python
arr[0] = np.float64(np.nan)
```

At this time, NumPy retains the behaviour for:

```python
np.array(np.float64(np.nan), dtype=np.int64)
```

The above changes do not affect Python scalars:

```python
np.array([float("NaN")], dtype=np.int64)
```

remains unaffected (`np.nan` is a Python `float`, not a NumPy one). Unlike signed integers, unsigned integers do not retain this special case, since they always behaved more like casting. The following code stops raising an error:

```python
np.array([np.float64(np.nan)], dtype=np.uint64)
```

To avoid backward compatibility issues, at this time assignment from a `datetime64` scalar to strings of too-short length remains supported. This means that `np.asarray(np.datetime64("2020-10-10"), dtype="S5")` succeeds now, when it failed before. In the long term this may be deprecated, or the unsafe cast may be allowed generally, to make assignment of arrays and scalars behave consistently.
### Array coercion changes when strings and other types are mixed

When strings and other types are mixed, such as:

```python
np.array(["string", np.float64(3.)], dtype="S")
```

the results will change, which may lead to string dtypes with longer strings in some cases. In particular, if `dtype="S"` is not provided, any numerical value will lead to a string result long enough to hold all possible numerical values (e.g. `"S32"` for floats). Note that you should always provide `dtype="S"` when converting non-strings to strings.

If `dtype="S"` is provided the results will be largely identical to before, but NumPy scalars (not a Python float like `1.0`) will still enforce a uniform string length:

```python
np.array([np.float64(3.)], dtype="S")  # gives "S32"
np.array([3.0], dtype="S")             # gives "S3"
```
Previously the first version gave the same result as the second.
### Array coercion restructure

Array coercion has been restructured. In general, this should not affect users. In extremely rare corner cases where array-likes are nested:

```python
np.array([array_like1])
```

things will now be more consistent with:

```python
np.array([np.array(array_like1)])
```

This can subtly change output for some badly defined array-likes. One example of this is array-like objects which are not also sequences of matching shape. In NumPy 1.20, a warning will be given when an array-like is not also a sequence (but behaviour remains identical, see deprecations). If an array-like is also a sequence (defines `__getitem__` and `__len__`), NumPy will now only use the result given by `__array__`, `__array_interface__`, or `__array_struct__`. This will result in differences when the (nested) sequence describes a different shape.
(gh-16200)
### Writing to the result of numpy.broadcast_arrays will export readonly buffers

In NumPy 1.17 `numpy.broadcast_arrays` started warning when the resulting array was written to. This warning was skipped when the array was used through the buffer interface (e.g. `memoryview(arr)`). The same thing will now occur for the two protocols `__array_interface__` and `__array_struct__`, returning read-only buffers instead of giving a warning.
(gh-16350)
### Numeric-style type names have been removed from type dictionaries

To stay in sync with the deprecation of `np.dtype("Complex64")` and other numeric-style (capital case) types, these were removed from `np.sctypeDict` and `np.typeDict`. You should use the lower-case versions instead. Note that `"Complex64"` corresponds to `"complex128"` and `"Complex32"` corresponds to `"complex64"`. The NumPy style (new) versions denote the full size, not the size of the real/imaginary part.
(gh-16554)
### The operator.concat function now raises TypeError for array arguments

The previous behavior was to fall back to addition and add the two arrays, which was thought to be unexpected behavior for a concatenation function.
(gh-16570)
### nickname attribute removed from ABCPolyBase

An abstract property `nickname` has been removed from `ABCPolyBase`, as it was no longer used in the derived convenience classes. This may affect users who have derived classes from `ABCPolyBase` and overridden the methods for representation and display, e.g. `__str__`, `__repr__`, `_repr_latex_`, etc.
(gh-16589)
### float->timedelta and uint64->timedelta promotion will raise a TypeError

Float and timedelta promotion consistently raises a TypeError. `np.promote_types("float32", "m8")` aligns with `np.promote_types("m8", "float32")` now and both raise a TypeError. Previously, `np.promote_types("float32", "m8")` returned `"m8"`, which was considered a bug.

Uint64 and timedelta promotion consistently raises a TypeError. `np.promote_types("uint64", "m8")` aligns with `np.promote_types("m8", "uint64")` now and both raise a TypeError. Previously, `np.promote_types("uint64", "m8")` returned `"m8"`, which was considered a bug.
(gh-16592)
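A minimal check of the now-consistent behaviour (the helper loop is illustrative):

```python
import numpy as np

# All of these promotions now raise TypeError, regardless of argument order.
results = []
for pair in [("float32", "m8"), ("m8", "float32"), ("uint64", "m8")]:
    try:
        np.promote_types(*pair)
        results.append("promoted")
    except TypeError:
        results.append("TypeError")
```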
### numpy.genfromtxt now correctly unpacks structured arrays

Previously, `numpy.genfromtxt` failed to unpack if it was called with `unpack=True` and a structured datatype was passed to the `dtype` argument (or `dtype=None` was passed and a structured datatype was inferred). For example:

```python
>>> data = StringIO("21 58.0\n35 72.0")
>>> np.genfromtxt(data, dtype=None, unpack=True)
array([(21, 58.), (35, 72.)], dtype=[('f0', '<i8'), ('f1', '<f8')])
```

Structured arrays will now correctly unpack into a list of arrays, one for each column:

```python
>>> np.genfromtxt(data, dtype=None, unpack=True)
[array([21, 35]), array([58., 72.])]
```
(gh-16650)
### mgrid, r_, etc. consistently return correct outputs for non-default precision input

Previously, `np.mgrid[np.float32(0.1):np.float32(0.35):np.float32(0.1),]` and `np.r_[0:10:np.complex64(3j)]` failed to return meaningful output. This bug potentially affected `mgrid`, `ogrid`, `r_`, and `c_` when an input with a dtype other than the default `float64` and `complex128` or equivalent Python types was used. The methods have been fixed to handle varying precision correctly.
(gh-16815)
### Boolean array indices with mismatching shapes now properly give IndexError

Previously, if a boolean array index matched the size of the indexed array but not the shape, it was incorrectly allowed in some cases. In other cases, it gave an error, but the error was incorrectly a `ValueError` with a message about broadcasting instead of the correct `IndexError`.

For example, the following used to incorrectly give `ValueError: operands could not be broadcast together with shapes (2,2) (1,4)`:

```python
np.empty((2, 2))[np.array([[True, False, False, False]])]
```

And the following used to incorrectly return `array([], dtype=float64)`:

```python
np.empty((2, 2))[np.array([[False, False, False, False]])]
```

Both now correctly give `IndexError: boolean index did not match indexed array along dimension 0; dimension is 2 but corresponding boolean dimension is 1`.
(gh-17010)
### Casting errors interrupt iteration

When iterating while casting values, an error may stop the iteration earlier than before. In any case, a failed casting operation always returned undefined, partial results. Those may now be even more undefined and partial.

For users of the `NpyIter` C-API such cast errors will now cause the `iternext()` function to return 0 and thus abort iteration. Currently, there is no API to detect such an error directly. It is necessary to check `PyErr_Occurred()`, which may be problematic in combination with `NpyIter_Reset`. These issues always existed, but new API could be added if required by users.
(gh-17029)
### f2py generated code may return unicode instead of byte strings

Some byte strings previously returned by f2py generated code may now be unicode strings. This results from the ongoing Python 2 -> Python 3 cleanup.
(gh-17068)
### The first element of the __array_interface__["data"] tuple must be an integer

This has been the documented interface for many years, but there was still code that would accept a byte string representation of the pointer address. That code has been removed; passing the address as a byte string will now raise an error.
(gh-17241)
### poly1d respects the dtype of all-zero argument

Previously, constructing an instance of `poly1d` with all-zero coefficients would cast the coefficients to `np.float64`. This affected the output dtype of methods which construct `poly1d` instances internally, such as `np.polymul`.
(gh-17577)
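A minimal illustration (the dtype choice is arbitrary):

```python
import numpy as np

coeffs = np.array([0, 0, 0], dtype=np.int32)
p = np.poly1d(coeffs)
# The coefficient dtype is now preserved; previously it was cast to float64.
```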
### The numpy.i file for swig is Python 3 only

Uses of Python 2.7 C-API functions have been updated to Python 3 only. Users who need the old version should take it from an older version of NumPy.
(gh-17580)
### Void dtype discovery in np.array

In calls using `np.array(..., dtype="V")`, `arr.astype("V")`, and similar, a `TypeError` will now be correctly raised unless all elements have the identical void length. An example for this is:

```python
np.array([b"1", b"12"], dtype="V")
```

which previously returned an array with dtype `"V2"`, which cannot represent `b"1"` faithfully.
(gh-17706)
## C API changes

### The PyArray_DescrCheck macro is modified

The `PyArray_DescrCheck` macro has been updated since NumPy 1.16.6 to be:

```c
#define PyArray_DescrCheck(op) PyObject_TypeCheck(op, &PyArrayDescr_Type)
```

Starting with NumPy 1.20, code that is compiled against an earlier version will be API incompatible with NumPy 1.20. The fix is to either compile against 1.16.6 (if the NumPy 1.16 release is the oldest release you wish to support), or manually inline the macro by replacing it with the new definition:

```c
PyObject_TypeCheck(op, &PyArrayDescr_Type)
```

which is compatible with all NumPy versions.
### Size of np.ndarray and np.void_ changed

The sizes of the `PyArrayObject` and `PyVoidScalarObject` structures have changed. The following header definition has been removed:

```c
#define NPY_SIZEOF_PYARRAYOBJECT (sizeof(PyArrayObject_fields))
```

since the size must not be considered a compile-time constant: it will change for different runtime versions of NumPy.

The most likely relevant use cases are potential subclasses written in C, which will have to be recompiled and should be updated. Please see the documentation for `PyArrayObject` for more details, and contact the NumPy developers if you are affected by this change.

NumPy will attempt to give a graceful error, but a program expecting a fixed structure size may have undefined behaviour and likely crash.
(gh-16938)
## New Features

### where keyword argument for numpy.all and numpy.any functions

The keyword argument `where` is added and allows one to only consider specified elements or subaxes from an array in the Boolean evaluation of `all` and `any`. This new keyword is available to the functions `all` and `any` both via `numpy` directly and in the methods of `numpy.ndarray`.

Any broadcastable Boolean array or a scalar can be set as `where`. It defaults to `True`, to evaluate the functions for all elements in an array if `where` is not set by the user. Examples are given in the documentation of the functions.
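A short sketch, using a mask to skip a NaN entry (the array values are illustrative):

```python
import numpy as np

a = np.array([[1.0, np.nan],
              [2.0, 3.0]])

# Only the non-NaN entries take part in the Boolean evaluation.
all_positive = np.all(a > 0, where=~np.isnan(a))
any_negative = np.any(a < 0, where=~np.isnan(a))
```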
### where keyword argument for numpy functions mean, std, var

The keyword argument `where` is added and allows limiting the scope in the calculation of `mean`, `std` and `var` to only a subset of elements. It is available both via `numpy` directly and in the methods of `numpy.ndarray`.

Any broadcastable Boolean array or a scalar can be set as `where`. It defaults to `True`, to evaluate the functions for all elements in an array if `where` is not set by the user. Examples are given in the documentation of the functions.
(gh-15852)
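For example, statistics can be restricted to a subset of elements (the threshold is illustrative):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 100.0])
mask = a < 10   # exclude the outlier from the statistics

m = np.mean(a, where=mask)   # mean of [1.0, 2.0, 3.0]
v = np.var(a, where=mask)    # variance of the same subset
```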
### norm=backward, forward keyword options for numpy.fft functions

The keyword argument option `norm=backward` is added as an alias for `None` and acts as the default option; using it leaves the direct transforms unscaled and scales the inverse transforms by `1/n`.

Using the new keyword argument option `norm=forward` has the direct transforms scaled by `1/n` and the inverse transforms unscaled (i.e. exactly opposite to the default option `norm=backward`).
(gh-16476)
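A small sketch of the two conventions (the input signal is illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = len(x)

f_backward = np.fft.fft(x, norm="backward")  # identical to norm=None
f_forward = np.fft.fft(x, norm="forward")    # direct transform scaled by 1/n

# Round trip: the inverse must use the same norm option.
x_back = np.fft.ifft(f_forward, norm="forward")
```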
### NumPy is now typed

Type annotations have been added for large parts of NumPy. There is also a new `numpy.typing` module that contains useful types for end-users. The currently available types are:

- `ArrayLike`: for objects that can be coerced to an array
- `DtypeLike`: for objects that can be coerced to a dtype
(gh-16515)
### numpy.typing is accessible at runtime

The types in `numpy.typing` can now be imported at runtime. Code like the following will now work:

```python
from numpy.typing import ArrayLike

x: ArrayLike = [1, 2, 3, 4]
```
(gh-16558)
### New __f2py_numpy_version__ attribute for f2py generated modules

Because f2py is released together with NumPy, `__f2py_numpy_version__` provides a way to track the version of f2py used to generate the module.
(gh-16594)
### mypy tests can be run via runtests.py

Currently running mypy with the NumPy stubs configured requires either:

- Installing NumPy
- Adding the source directory to `MYPYPATH` and linking to the `mypy.ini`

Both options are somewhat inconvenient, so add a `--mypy` option to `runtests.py` that handles setting things up for you. This will also be useful in the future for any typing codegen, since it will ensure the project is built before type checking.
(gh-17123)
### Negation of user-defined BLAS/LAPACK detection order

`distutils` allows negation of libraries when determining BLAS/LAPACK libraries. This may be used to remove an item from the library resolution phase, i.e. to disallow NetLIB libraries one could do:

```shell
NPY_BLAS_ORDER='^blas' NPY_LAPACK_ORDER='^lapack' python setup.py build
```

That will use any of the accelerated libraries instead.
(gh-17219)
### Allow passing optimizations arguments to asv build

It is now possible to pass `-j`, `--cpu-baseline`, `--cpu-dispatch` and `--disable-optimization` flags to ASV build when the `--bench-compare` argument is used.
(gh-17284)
### The NVIDIA HPC SDK nvfortran compiler is now supported

Support for the nvfortran compiler, a version of pgfortran, has been added.
(gh-17344)
### dtype option for cov and corrcoef

The `dtype` option is now available for `numpy.cov` and `numpy.corrcoef`. It specifies which data type the returned result should have. By default the functions still return a `numpy.float64` result.
(gh-17456)
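A minimal sketch (the observations are illustrative):

```python
import numpy as np

x = np.array([[0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0]])

c64 = np.cov(x)                    # float64 by default
c32 = np.cov(x, dtype=np.float32)  # requested output dtype
r32 = np.corrcoef(x, dtype=np.float32)
```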
## Improvements

### Improved string representation for polynomials (__str__)

The string representation (`__str__`) of all six polynomial types in `numpy.polynomial` has been updated to give the polynomial as a mathematical expression instead of an array of coefficients. Two package-wide formats for the polynomial expressions are available: one using Unicode characters for superscripts and subscripts, and another using only ASCII characters.
(gh-15666)
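For example (the exact rendered string may vary slightly between NumPy versions):

```python
import numpy as np
from numpy.polynomial import Polynomial

p = Polynomial([1, 2, 3])
s_unicode = str(p)   # mathematical expression using Unicode superscripts

np.polynomial.set_default_printstyle("ascii")
s_ascii = str(p)     # ASCII-only form, e.g. "1.0 + 2.0 x + 3.0 x**2"
```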
### Remove the Accelerate library as a candidate LAPACK library

Apple no longer supports Accelerate. Remove it.
(gh-15759)
### Object arrays containing multi-line objects have a more readable repr

If elements of an object array have a `repr` containing new lines, then the wrapped lines will be aligned by column. Notably, this improves the `repr` of nested arrays:

```python
>>> np.array([np.eye(2), np.eye(3)], dtype=object)
array([array([[1., 0.],
              [0., 1.]]),
       array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])], dtype=object)
```
(gh-15997)
### Concatenate supports providing an output dtype

Support was added to `concatenate` to provide an output `dtype` and `casting` using keyword arguments. The `dtype` argument cannot be provided in conjunction with the `out` one.
(gh-16134)
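A short sketch (the arrays are illustrative):

```python
import numpy as np

a = np.array([1.7, 2.2])
b = np.array([3.3, 4.9])

# Request an integer result; the float -> int cast requires casting="unsafe",
# since the default "same_kind" would reject it.
res = np.concatenate((a, b), dtype=np.int64, casting="unsafe")
```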
### Thread-safe f2py callback functions

Callback functions in f2py are now thread safe.
(gh-16519)
### numpy.core.records.fromfile now supports file-like objects

`numpy.core.records.fromfile` can now use file-like objects, for instance `io.BytesIO`.
(gh-16675)
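A sketch via the public `np.rec` alias, round-tripping records through an in-memory buffer (the record values are illustrative):

```python
import io

import numpy as np

# np.rec is the public alias for numpy.core.records.
rec = np.rec.fromrecords([(1, 2.0), (3, 4.0)], names="a,b")
buf = io.BytesIO(rec.tobytes())

# fromfile now accepts the file-like buffer directly.
out = np.rec.fromfile(buf, dtype=rec.dtype, shape=2)
```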
### RPATH support on AIX added to distutils

This allows SciPy to be built on AIX.
(gh-16710)
### Use f90 compiler specified by the command line args

The compiler command selection for the Fortran Portland Group Compiler is changed in `numpy.distutils.fcompiler`. This only affects the linking command, and forces the use of the executable provided by the command line option (if provided) instead of the pgfortran executable. If no executable is provided to the command line option, it defaults to the pgf90 executable, which is an alias for pgfortran according to the PGI documentation.
(gh-16730)
### Add NumPy declarations for Cython 3.0 and later

The pxd declarations for Cython 3.0 were improved to avoid using deprecated NumPy C-API features. Extension modules built with Cython 3.0+ that use NumPy can now set the C macro `NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION` to avoid C compiler warnings about deprecated API usage.
(gh-16986)
### Make the window functions exactly symmetric

Make sure the window functions provided by NumPy are symmetric. There were previously small deviations from symmetry due to numerical precision that are now avoided by better arrangement of the computation.
(gh-17195)
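A quick symmetry check (the window choice and length are illustrative):

```python
import numpy as np

w = np.hanning(10)
# The window should match its own reversal about the midpoint.
symmetric = np.allclose(w, w[::-1])
```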
## Performance improvements and changes

### Enable multi-platform SIMD compiler optimizations
A series of improvements for NumPy infrastructure to pave the way to NEP-38, which can be summarized as follows:

- **New build arguments**

  - `--cpu-baseline` to specify the minimal set of required optimizations. The default value is `min`, which provides the minimum CPU features that can safely run on a wide range of platforms.
  - `--cpu-dispatch` to specify the dispatched set of additional optimizations. The default value is `max -xop -fma4`, which enables all CPU features except for AMD legacy features.
  - `--disable-optimization` to explicitly disable the whole set of new improvements. It also adds a new C compiler #definition called `NPY_DISABLE_OPTIMIZATION` which can be used as a guard for any SIMD code.

- **Advanced CPU dispatcher**

  A flexible cross-architecture CPU dispatcher built on top of Python/NumPy distutils, supporting all common compilers with a wide range of CPU features.

  The new dispatcher requires a special file extension `*.dispatch.c` to mark the dispatch-able C sources. These sources can be compiled multiple times, so that each compilation process represents certain CPU features and provides different #definitions and flags that affect the code paths.

- **New auto-generated C header `core/src/common/_cpu_dispatch.h`**

  This header is generated by the distutils module `ccompiler_opt`, and contains all the #definitions and headers for the instruction sets that have been configured through the command arguments `--cpu-baseline` and `--cpu-dispatch`.

- **New C header `core/src/common/npy_cpu_dispatch.h`**

  This header contains all the utilities required for the whole CPU dispatching process. It can also be considered a bridge linking the new infrastructure work with NumPy CPU runtime detection.

- **New attributes on the NumPy umath module (Python level)**

  - `__cpu_baseline__`: a list containing the minimal set of required optimizations supported by the compiler and platform, according to the values specified for the command argument `--cpu-baseline`.
  - `__cpu_dispatch__`: a list containing the dispatched set of additional optimizations supported by the compiler and platform, according to the values specified for the command argument `--cpu-dispatch`.

- **Print the supported CPU features during the run of PytestTester**
(gh-13516)
## Changes

### Changed behavior of divmod(1., 0.) and related functions

The changes also ensure that different compiler versions have the same behavior for nan or inf usages in these operations. This was previously compiler dependent; we now force the invalid and divide-by-zero flags, making the results the same across compilers. For example, gcc-5, gcc-8, and gcc-9 now result in the same behavior. The changes are tabulated below:
| Operator | Old Warning | New Warning | Old Result | New Result | Works on MacOS |
|---|---|---|---|---|---|
| `np.divmod(1.0, 0.0)` | Invalid | Invalid and Divide by zero | nan, nan | inf, nan | Yes |
| `np.fmod(1.0, 0.0)` | Invalid | Invalid | nan | nan | No? Yes |
| `np.floor_divide(1.0, 0.0)` | Invalid | Divide by zero | nan | inf | Yes |
| `np.remainder(1.0, 0.0)` | Invalid | Invalid | nan | nan | Yes |
(gh-16161)
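The table rows can be checked directly (the `errstate` context suppresses the now-consistent warnings):

```python
import numpy as np

with np.errstate(invalid="ignore", divide="ignore"):
    q, r = np.divmod(1.0, 0.0)        # inf, nan
    fd = np.floor_divide(1.0, 0.0)    # inf
    rem = np.remainder(1.0, 0.0)      # nan
```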
### np.linspace on integers now uses floor

When using an `int` dtype in `numpy.linspace`, float values were previously rounded towards zero. Now `numpy.floor` is used instead, which rounds toward `-inf`. This changes the results for negative values. For example, the following would previously give:

```python
>>> np.linspace(-3, 1, 8, dtype=int)
array([-3, -2, -1, -1,  0,  0,  0,  1])
```

and now results in:

```python
>>> np.linspace(-3, 1, 8, dtype=int)
array([-3, -3, -2, -2, -1, -1,  0,  1])
```

The former result can still be obtained with:

```python
>>> np.linspace(-3, 1, 8).astype(int)
array([-3, -2, -1, -1,  0,  0,  0,  1])
```
(gh-16841)